
ZDNET's key takeaways
- Torvalds and the Linux maintainers are taking a pragmatic approach to using AI in the kernel.
- AI or no AI, it's people, not LLMs, who are responsible for Linux's code.
- Try to sneak undisclosed AI-generated code into the kernel, and you can expect serious consequences.
After months of heated debate, Linus Torvalds and the Linux kernel maintainers have officially codified the project's first formal policy on AI-assisted code contributions. This new policy reflects Torvalds' pragmatic approach, balancing the embrace of modern AI development tools with the kernel's rigorous quality standards.
The new guidelines establish three core principles:
AI agents cannot add Signed-off-by tags: Only humans can legally certify the Linux kernel's Developer Certificate of Origin (DCO), the legal mechanism that ensures code licensing compliance. In other words, even if you turn in a patch that was written entirely by AI, you, and not the AI or its creator, are solely responsible for the contribution.
Mandatory Assisted-by attribution: Any contribution using AI tools must include an Assisted-by tag identifying the model, agent, and auxiliary tools used. For example: “Assisted-by: Claude:claude-3-opus coccinelle sparse.”
Full human liability: Put it all together, and you, the human submitter, bear full responsibility and accountability for reviewing the AI-generated code, for ensuring license compliance, and for any bugs or security flaws that arise. Do not try to sneak bad code into the kernel, as a pair of University of Minnesota students tried back in 2021, or you can kiss your chances of ever becoming a Linux kernel developer, or a contributor to any other respectable open-source project, goodbye.
The Assisted-by tag serves as both a transparency mechanism and a review flag. It enables maintainers to give AI-assisted patches the extra scrutiny they may require without stigmatizing the practice itself.
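Taken together, the commit-message trailer of a compliant AI-assisted patch might look something like the following. This is an illustrative sketch: the subject line, author name, and email are hypothetical, while the Assisted-by line follows the format given in the policy.

```text
ring-buffer: fix index wraparound check          (hypothetical subject)

<commit body describing the change>

Assisted-by: Claude:claude-3-opus coccinelle sparse
Signed-off-by: Jane Developer <jane@example.com>
```

Note that the Assisted-by line names the model and any auxiliary tools, while the Signed-off-by line remains a purely human act: it is the submitter, not the AI, who certifies the DCO.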
The Assisted-by attribution was forged in the fire of controversy when Nvidia engineer and prominent Linux kernel developer Sasha Levin submitted a patch to Linux 6.15 that was entirely generated by AI, including the changelog and tests. Levin reviewed and tested the code before submission, but he didn't disclose to the reviewers that an AI had written it.
That did not go over well with other kernel developers.
AI's role as a tool rather than a co-author
The upshot of all the subsequent fuss? At the 2025 North America Open Source Summit, Levin himself began advocating for formal AI transparency rules. In July 2025, he proposed the first draft of what would become the kernel's AI policy. He initially suggested a Co-developed-by tag for AI-assisted patches.
Initial discussions, both in person and on the Linux Kernel Mailing List (LKML), debated whether to use a new Generated-by tag or repurpose the existing Co-developed-by tag. Maintainers ultimately settled on Assisted-by to better reflect AI's role as a tool rather than a co-author.
The decision comes as AI coding assistants have suddenly become genuinely useful for kernel development. As Greg Kroah-Hartman, maintainer of the Linux stable kernel, recently told me, “something happened a month ago, and the world switched” with AI tools now producing real, valuable security reports rather than hallucinated nonsense.
The final choice of Assisted-by rather than Generated-by was deliberate and influenced by three factors. First, it's more accurate. Most AI use in kernel development is assistive (code completion, refactoring suggestions, test generation) rather than full code generation. Second, the tag format mirrors existing metadata tags like Reviewed-by, Tested-by, and Co-developed-by. Finally, Assisted-by describes the tool's role without implying the code is suspicious or second-class.
This pragmatic approach got a kickstart when, in an LKML conversation, Torvalds said, “I do *not* want any kernel development documentation to be some AI statement. We have enough people on both sides of the ‘sky is falling’ and ‘it's going to revolutionize software engineering.’ I don't want some kernel development docs to take either stance. It's why I strongly want this to be that ‘just a tool’ statement.”
The real challenge is credible-looking patches
Despite the Linux kernel's new AI disclosure policy, maintainers aren't relying on AI-detection software to catch undisclosed AI-generated patches. Instead, they're using the same tools they've always used: deep technical expertise, pattern recognition, and good, old-fashioned code review. As Torvalds said back in 2023, “You have to have a certain amount of good taste to judge other people's code.”
Why? As Torvalds pointed out, “There is zero point in talking about AI slop. Because the AI slop people aren't going to document their patches as such.” The hard problem isn't obvious junk; that's easy to reject regardless of origin. The real challenge is credible-looking patches that meet the immediate spec, match local style, compile cleanly, and still encode a subtle bug or a long-term maintenance tax.
The new policy's enforcement doesn't depend on catching every violation. It depends on making the consequences of getting caught severe enough to discourage dishonesty. Ask anyone who's ever been the target of Torvalds' ire for garbage patches. Even though he's a lot more mild-tempered than he used to be, you still don't want to get on his bad side.

