policy: Added to the PR template, and an AGENTS.md, refusing AI contributions.#15353
policy: Added to the PR template, and an AGENTS.md, refusing AI contributions.#15353icculus wants to merge 3 commits intolibsdl-org:mainfrom
Conversation
|
Marked draft for discussion. I think even if this is our policy, we should wordsmith it to be more gentle. |
|
Under the assumption this is the policy, I'd like to get the pull request template down to one line, and, uh, less shouty. :) Or maybe just a check box: I think AGENTS.md is fairly reasonable, but I'll accept any feedback or rewrites. |
I think this would be the gentler approach and would probably be the way to go? Generally both approaches work.
|
|
For the check box, maybe something like: |
|
For AGENTS.md, how about something like this? |
|
Whatever we add here should be also added to the SDL satellite libraries and propagated to all release branches. |
I'd say maybe replace
|
I'm not sure if "of unknown origin" helps there, because
|
Updated with "may not": |
Good point. Simplified: |
👍 Nice, no complaints with this now. |
|
I've updated the PR with feedback. I've made small grammar changes, and wordwrapped AGENTS.md, but that's all. |
|
This looks good to me. I’d socialize this on Discord to see if there’s any reason we shouldn’t have this policy that we missed. |
|
I'm not an SDL contributor but if I were to write a patch, I would not be able to tick that box. I use clion which uses a proprietary code completion engine. If I write something like f(1);
f(2);
f(then it will suggest completing the last line to f(3);This falls under AI since some intelligence is required to detect the pattern and to infer the developer's intention. |
CLion uses machine learning auto completion, so yes, it wouldn't be allowed. You can disable it within the settings to get standard LSP autocompletion though, so you would need to contribute using either that or another editor that does not use ML/AI completions (e.g. KDE Advanced Text Editor (KATE), VSCodium + Clangd) |
|
"It notices that there's a simple pattern within a specific programming language's syntax and offers to fill it in for me if I agree with the autocomplete" feels like a long way from "I vibe-coded a pull request where I might not have even read the code, let alone understood it." I get that LLMs are also predictive models looking for patterns, so if you squint at both they might have similarities...but I think it probably doesn't need clarification. |
|
The checkbox mentions neither LLMs nor vibe coding. It says AI-generated code. |
We know what our intent is, but we probably want to be clear somewhere. Other people may not know how militant or relaxed our intentions are. |
|
I don't really think it was necessary, but it doesn't hurt, so I clarified this in db525f6. |
|
FWIW I still wouldn't tick that box because Clion is proprietary and I have no way of knowing if the underlying technology is "LLM". I think what you actually want to say is what you wrote above:
|
CLion's documentation/readme describes it. I do think it's your responsibility as someone who makes pull requests to understand the technology you used to write the code being used in the pull request. |
|
Where can I find the readme? |
|
As I understood @icculus above, he wants to allow this kind of simple completion whether or not it's an "LLM". |
|
Yeah, I don't want to get into the weeds trying to define this thing. If the current wording leaves someone legitimately confused, I don't think more words will help, and it's counter-productive to try to manage a complete list of all known tools. I don't believe this will be the threshold where someone feels uncomfortable contributing, or confused about whether they've used generative AI to create patches. |
|
Last call on this before I press the Merge button! |
|
I am not a big fan of IA but I think it's too strict :) |
AI has an incredibly bad record with regards to quality, legality and ethics. I feel that we could live in 2026 and not support slave labor. |
The definition of slave: "a person who is forced to work for and obey and is considered to be their property" Kenya is a free country. Workers must not have been forced to work, nor they are the property of a company. btw, for legality and ethics, you focus on IA, but there are other industrial sectors that are much more concerned: food, cloth, medicine, electronics, etc. I may not know all the ethics issues with IA, but my opinion to accept it right now (I mean this can change) is that I am optimistic enough to imagine that IA have pros that will at some point overtake by far all cons. |
|
artifical intelligence would have more pros to speak of if the primary motivator of the companies making these models wasn't money. Ultimately using cheap labour and littering the environment is cheaper than giving a fuck. We shouldn't support that. |
This is a very strange angle to take, not just because a lot of things are unethical right now means that we shouldn't strive to be more ethical, and even when we leave the entire topic of ethics in general and only think of this topic purely in terms of quantifiable measures and results, AI still isn't exactly fit for that either. LLMs have been proven to get worse over time as they scale, being practically unable to rationalize (of course, they are just words on a chain), and produce dubious quality of code, with bots and users here in this platform generating entire PRs and issues using Claude and similar tools, without them actually understanding what the hell is going on in the codebase to begin with. I do not think that this is worth it, nor a good compromise at all. |
|
Just to be clear, @1bsyl has written a lot of excellent code for SDL, over many years, so he has certainly earned the right to have his opinion heard here. |
If AI changes significantly in the future, I'm sure the guidance here would be updated accordingly at that time. |
Yes, certainly. As I said, this is evolving rapidly, so I think we'll have some discussions about this every few months. |
|
Just want to continue answer : It's clear that there are now ethical issues. As said, there are using "using cheap labour", "littering the environment". etc. I believe there are laws for this and there are ethical commissions to guide this. So, at some point, company will be pursued for this. And also, there are people, professionals, whose job is to sort this out. It's clear we can refuse IA's because of this. and then that's it. and I am not going to defend them. but we can look at the technical aspects:
for SDL:
My experience, is that if you instruct clearly, step by step, and also if you can evaluate what you're doing. you can manage to have something of good quality and save time. I just think IA can be used as another tool (like grep, sed, awk, indent, semantic patch) with more potential but also more difficult to use. And If someone provide PR with some part done IA. It something is erroneous, the IA isn't to blame, but the author of the PR. |

This is just a proposed solution; if we go a different way (even the same direction but with gentler text), it's okay to close this and do something else.
This will warn people, when they try to create a PR, that we won't accept AI-generated contributions. It also adds an AGENTS.md file, which is what Claude/Copilot/etc read for instructions on how to work with the project (in this case, we tell it not to).
Fixes #15350. (Which one should read fully before pressing Merge here.)