There has been a lot of chatter about software supply chain security recently, motivated by high-profile package exploits.

Well, I have some relevant news: cargo-crev now supports LLM-assisted code reviews. Go try it!

Read on to get more information and background.

History

I started working on Crev in 2018. The idea was (and is) simple: if each of us developers reviewed at least some of our dependencies, and we shared and distributed those reviews among each other with the help of a Web of Trust, we could achieve good review coverage of the ecosystem, gain more trust in our supply chains, report issues upstream, and improve the overall health of open source.

cargo-crev was an attempt to implement such a system, for the language ecosystem I cared about most — Rust.

If I may say so, on a technical level I'm satisfied with the UX and flow that was achieved. However, around 2020 I lost my enthusiasm for this project.

Why? Because it became apparent that no matter how well cargo-crev actually works, the biggest obstacle to fully realizing the idea was developers' lack of time.

Reviewing code, even superficially, takes a lot of effort and a very long time. And it does not feel nearly as satisfying as actually creating something new: writing code.

The open source community is already overburdened with just maintaining code. Asking developers to take on yet another unpaid responsibility, trying to secure supply chains, is simply asking too much.

LLMs getting good at finding issues

Just a few weeks ago I was reading articles about new LLM models finding non-trivial security issues, and about Linux kernel and curl developers admitting that, after the deluge of mostly worthless slop security reports they used to complain about, they now tend to receive genuinely worthwhile AI-assisted bug and security reports. It reminded me of cargo-crev, and I realized that AI can fill the very gap that made me doubt it.

I'm not trying to overhype LLMs. But the fact is that they can do, and in high volume, what developers themselves have no time for: the 90/10 security scanning that was otherwise quite hard to automate.

An LLM can easily and reliably check whether a code version published on https://crates.io matches the code published in git.

An LLM can easily scan build.rs and the rest of the code and check whether anything looks out of place.

It is actually very hard to hide key-stealing malware in a package that was supposed to format units, etc.

Especially in Rust, doing things that are wrong or out of place creates a lot of noise, making such code easy to notice, even by an LLM reviewer.

It might not be a silver bullet, but it is definitely better than doing nothing.

How to use it

Note: In the initial release cargo-crev supports only the Claude Code agent. If you're interested in adding support for other coding agents, it should be relatively easy — most of the scaffolding is already there. Feel free to reach out and create a PR.

Since version 0.27, cargo-crev has a built-in review loop:

cargo crev ai review-loop --iterations 10

which will start the agent 10 times, each time selecting and reviewing a single dependency.

The agent will produce and update a single shell script that can be used to conveniently review and sign all the reviews.

While the above is meant as a standard mass-review flow, the core built-in agent review skill is available as an output of:

cargo crev ai skill review

and it should be easy for anyone to modify it and/or build their own LLM-assisted workflows.

How it works

The core change is that Crev's reviews now have fields to indicate that an LLM was used for the review.

The rest is just relatively minor functionality to make producing LLM reviews convenient end to end.
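As a rough illustration only, here is a sketch of how such an indicator might sit alongside the existing review fields in a Crev package review proof. The field names below, in particular the LLM-related one, are hypothetical; consult the cargo-crev documentation for the actual schema.

```yaml
# Hypothetical sketch of a Crev package review proof with an
# LLM-use indicator. Field names are illustrative, not the
# actual cargo-crev schema.
package:
  source: "https://crates.io"
  name: "example-crate"
  version: "1.2.3"
review:
  thoroughness: low
  understanding: medium
  rating: positive
# Hypothetical field marking that an LLM assisted this review:
llm-assisted: true
```

Having the indication inside the signed proof itself is what lets downstream tooling filter LLM-assisted reviews in or out without trusting anything beyond the proof.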

For people skeptical of LLMs, options to ignore LLM-generated reviews have been added, and more will be added where appropriate. If you don't trust the slop reviews, you can simply ignore them; fine with me.

How well it works

While working on this feature and testing it myself, I have produced quite a few LLM-assisted reviews. Judge for yourself.

To me these meet the bar of being useful. And they turned some spare capacity from my Claude subscription into something I otherwise would not have been able to do myself.

Summary

This is only an initial attempt at harnessing AI in cargo-crev. There are still lots of things to improve and extend, but we have to start somewhere.

If you like the idea and find it promising, I encourage you to try it out, give some feedback, and submit improvements.