
policy: Added to the PR template, and an AGENTS.md, refusing AI contributions. #15353

Open
icculus wants to merge 3 commits into libsdl-org:main from icculus:sdl3-no-ai-policy

Conversation

@icculus
Collaborator

@icculus icculus commented Apr 10, 2026

This is just a proposed solution; if we go a different way (even the same direction but with gentler text), it's okay to close this and do something else.

This will warn people, when they try to create a PR, that we won't accept AI-generated contributions. It also adds an AGENTS.md file, which is what Claude/Copilot/etc read for instructions on how to work with the project (in this case, we tell it not to).

Fixes #15350. (Which one should read fully before pressing Merge here.)

@icculus icculus mentioned this pull request Apr 10, 2026
@slouken slouken marked this pull request as draft April 10, 2026 00:19
@slouken
Collaborator

slouken commented Apr 10, 2026

Marked draft for discussion. I think even if this is our policy, we should wordsmith it to be more gentle.

@icculus
Collaborator Author

icculus commented Apr 10, 2026

Under the assumption this is the policy, I'd like to get the pull request template down to one line, and, uh, less shouty. :)

<!-- Please note that we do not accept submissions that were created with the assistance of AI agents. -->

Or maybe just a check box:

- [ ] I confirm this code was not generated by an AI agent.

I think AGENTS.md is fairly reasonable, but I'll accept any feedback or rewrites.
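For context, GitHub reads a repository's pull request template from `.github/PULL_REQUEST_TEMPLATE.md` (that path is the standard GitHub convention; the actual file in the SDL repo may differ). A one-line version along the lines suggested above might look something like this sketch:

```markdown
<!-- Please note that we do not accept submissions that were created with the assistance of AI agents. -->

## Description

<!-- Describe your changes here. -->
```

Since HTML comments are invisible in the rendered PR description, the checkbox variant has the advantage of leaving a visible, affirmative record in each pull request.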

@retcinder

Or maybe just a check box:

- [ ] I confirm this code was not generated by an AI agent.

I think this would be the gentler approach and would probably be the way to go? Generally both approaches work.

AGENTS.md also seems good, no complaints here.

@slouken
Collaborator

slouken commented Apr 10, 2026

For the check box, maybe something like:

- [ ] I confirm that I am the author of this code and release it to the SDL project under the Zlib license. This contribution does not contain code from other sources, including AI generated code of unknown origin.

@slouken
Collaborator

slouken commented Apr 10, 2026

For AGENTS.md, how about something like this?

AI should not be used to generate code for contributions to this project. The code they generate is based upon code from unknown origins and may not be compatible with the Zlib license, or may introduce conflicting license terms if they include code from other projects.

AI can be used to identify issues with contributions to this project, but the solutions to those issues should be authored by humans.

We have found that frequently AI will hallucinate issues that are not actually problems in practice, or even incorrect and not issues at all. If AI identifies a problem with this codebase, please make sure you understand what it is saying and have independently confirmed that the issue exists before submitting a bug report or pull request.

Any pull request to this project will ask you to confirm that you are the author and that you are contributing your changes under the Zlib license.
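Collected into a single file, the proposal above might look like this (a sketch assembled from the draft text in this comment; the wording actually merged may differ):

```markdown
# AGENTS.md

AI should not be used to generate code for contributions to this project.
The code it generates is based upon code from unknown origins and may not
be compatible with the Zlib license, or may introduce conflicting license
terms if it includes code from other projects.

AI can be used to identify issues with contributions to this project, but
the solutions to those issues should be authored by humans.

We have found that frequently AI will hallucinate issues that are not
actually problems in practice, or even incorrect and not issues at all.
If AI identifies a problem with this codebase, please make sure you
understand what it is saying and have independently confirmed that the
issue exists before submitting a bug report or pull request.

Any pull request to this project will ask you to confirm that you are the
author and that you are contributing your changes under the Zlib license.
```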

@slouken
Collaborator

slouken commented Apr 10, 2026

Whatever we add here should be also added to the SDL satellite libraries and propagated to all release branches.

@retcinder

retcinder commented Apr 10, 2026

For AGENTS.md, how about something like this?

AI should not be used to generate code for contributions to this project. The code they generate is based upon code from unknown origins and may not be compatible with the Zlib license, or may introduce conflicting license terms if they include code from other projects.

AI can be used to identify issues with contributions to this project, but the solutions to those issues should be authored by humans.

We have found that frequently AI will hallucinate issues that are not actually problems in practice, or even incorrect and not issues at all. If AI identifies a problem with this codebase, please make sure you understand what it is saying and have independently confirmed that the issue exists before submitting a bug report or pull request.

Any pull request to this project will ask you to confirm that you are the author and that you are contributing your changes under the Zlib license.

I'd say maybe replace "should not" with "can not" or "cannot", since "should not" to me implies it's still allowed, albeit discouraged, despite the licensing problems specified in the sentence that follows it.

Maybe also specify that an issue identified by AI should be validated by a human too (common sense implies this, but it would be best to have it in writing just in case, imo). Ignore; I believe line 3 is sufficient on this, actually.

@maia-s
Contributor

maia-s commented Apr 10, 2026

For the check box, maybe something like:


- [ ] I confirm that I am the author of this code and release it to the SDL project under the Zlib license. This contribution does not contain code from other sources, including AI generated code of unknown origin.

I'm not sure if "of unknown origin" helps there, because

  1. AI generated code is by definition of unknown origin
  2. It can be misinterpreted as "I asked the AI to generate this myself"

@slouken
Collaborator

slouken commented Apr 10, 2026

I'd say maybe replace should not with can not or cannot, since should not to me implies it's still allowed albeit discouraged, despite the licensing problems specified in the sentence that follows it.

Updated with "may not":

AI may not be used to generate code for contributions to this project. The code they generate is based upon code from unknown origins and may not be compatible with the Zlib license, or may introduce conflicting license terms if they include code from other projects.

AI can be used to identify issues with contributions to this project, but the solutions to those issues should be authored by humans.

We have found that frequently AI will hallucinate issues that are not actually problems in practice, or even incorrect and not issues at all. If AI identifies a problem with this codebase, please make sure you understand what it is saying and have independently confirmed that the issue exists before submitting a bug report or pull request.

Any pull request to this project will ask you to confirm that you are the author and that you are contributing your changes under the Zlib license.

@slouken
Collaborator

slouken commented Apr 10, 2026

For the check box, maybe something like:


- [ ] I confirm that I am the author of this code and release it to the SDL project under the Zlib license. This contribution does not contain code from other sources, including AI generated code of unknown origin.

I'm not sure if "of unknown origin" helps there, because

  1. AI generated code is by definition of unknown origin
  2. It can be misinterpreted as "I asked the AI to generate this myself"

Good point. Simplified:

- [ ] I confirm that I am the author of this code and release it to the SDL project under the Zlib license. This contribution does not contain code from other sources, including AI generated code.

@retcinder

Updated with "may not":

AI may not be used to generate code for contributions to this project. The code they generate is based upon code from unknown origins and may not be compatible with the Zlib license, or may introduce conflicting license terms if they include code from other projects.

AI can be used to identify issues with contributions to this project, but the solutions to those issues should be authored by humans.

We have found that frequently AI will hallucinate issues that are not actually problems in practice, or even incorrect and not issues at all. If AI identifies a problem with this codebase, please make sure you understand what it is saying and have independently confirmed that the issue exists before submitting a bug report or pull request.

Any pull request to this project will ask you to confirm that you are the author and that you are contributing your changes under the Zlib license.

👍 Nice, no complaints with this now.

@icculus
Collaborator Author

icculus commented Apr 11, 2026

I've updated the PR with feedback. I've made small grammar changes, and wordwrapped AGENTS.md, but that's all.

@slouken
Collaborator

slouken commented Apr 11, 2026

This looks good to me. I’d socialize this on Discord to see if there’s any reason we shouldn’t have this policy that we missed.

@mahkoh

mahkoh commented Apr 11, 2026

I'm not an SDL contributor, but if I were to write a patch, I would not be able to tick that box. I use CLion, which uses a proprietary code-completion engine. If I write something like

f(1);
f(2);
f(

then it will suggest completing the last line to

f(3);

This falls under AI since some intelligence is required to detect the pattern and to infer the developer's intention.

@retcinder

I'm not an SDL contributor, but if I were to write a patch, I would not be able to tick that box. I use CLion, which uses a proprietary code-completion engine. If I write something like

f(1);
f(2);
f(

then it will suggest completing the last line to

f(3);

This falls under AI since some intelligence is required to detect the pattern and to infer the developer's intention.

CLion uses machine learning auto completion, so yes, it wouldn't be allowed. You can disable it within the settings to get standard LSP autocompletion though, so you would need to contribute using either that or another editor that does not use ML/AI completions (e.g. KDE Advanced Text Editor (KATE), VSCodium + Clangd)

@icculus
Collaborator Author

icculus commented Apr 11, 2026

"It notices that there's a simple pattern within a specific programming language's syntax and offers to fill it in for me if I agree with the autocomplete" feels like a long way from "I vibe-coded a pull request where I might not have even read the code, let alone understood it."

I get that LLMs are also predictive models looking for patterns, so if you squint at both they might have similarities...but I think it probably doesn't need clarification.

@mahkoh

mahkoh commented Apr 11, 2026

The checkbox mentions neither LLMs nor vibe coding. It says AI-generated code.

@slouken
Collaborator

slouken commented Apr 11, 2026

"It notices that there's a simple pattern within a specific programming language's syntax and offers to fill it in for me if I agree with the autocomplete" feels like a long way from "I vibe-coded a pull request where I might not have even read the code, let alone understood it."

I get that LLMs are also predictive models looking for patterns, so if you squint at both they might have similarities...but I think it probably doesn't need clarification.

We know what our intent is, but we probably want to be clear somewhere. Other people may not know how militant or relaxed our intentions are.

@icculus
Collaborator Author

icculus commented Apr 12, 2026

I don't really think it was necessary, but it doesn't hurt, so I clarified this in db525f6.

@icculus icculus marked this pull request as ready for review April 12, 2026 13:52
@mahkoh

mahkoh commented Apr 12, 2026

FWIW I still wouldn't tick that box because Clion is proprietary and I have no way of knowing if the underlying technology is "LLM".

I think what you actually want to say is what you wrote above:

I vibe-coded a pull request where I might not have even read the code, let alone understood it.

@slime73
Contributor

slime73 commented Apr 12, 2026

FWIW I still wouldn't tick that box because Clion is proprietary and I have no way of knowing if the underlying technology is "LLM".

CLion's documentation/readme describes it. I do think it's your responsibility as someone who makes pull requests to understand the technology you used to write the code being used in the pull request.

@mahkoh

mahkoh commented Apr 12, 2026

Where can I find the readme?

@mahkoh

mahkoh commented Apr 12, 2026

I've taken a quick look at the settings and it suggests that it uses language models

(screenshot of CLion's code-completion settings)

I don't know if it is a "large" language model.

@retcinder

I've taken a quick look at the settings and it suggests that it uses language models

(screenshot of CLion's code-completion settings)

I don't know if it is a "large" language model.

If it's using JetBrains Mellum, then it is an LLM (JetBrains describes it as such).

Otherwise, I don't know, you will probably need to look into the files to determine that.

@mahkoh

mahkoh commented Apr 12, 2026

As I understood @icculus above, he wants to allow this kind of simple completion whether or not it's an "LLM".

@icculus
Collaborator Author

icculus commented Apr 12, 2026

Yeah, I don't want to get into the weeds trying to define this thing. If the current wording leaves someone legitimately confused, I don't think more words will help, and it's counter-productive to try to manage a complete list of all known tools.

I don't believe this will be the threshold where someone feels uncomfortable contributing, or confused about whether they've used generative AI to create patches.

@icculus
Collaborator Author

icculus commented Apr 13, 2026

Last call on this before I press the Merge button!

@1bsyl
Contributor

1bsyl commented Apr 13, 2026

I am not a big fan of AI, but I think it's too strict :)
You cannot live in 2026 and ignore AI capabilities.

@tasiaiso

I am not a big fan of AI, but I think it's too strict :) You cannot live in 2026 and ignore AI capabilities.

AI has an incredibly bad record with regards to quality, legality and ethics. I feel that we could live in 2026 and not support slave labor.

@1bsyl
Contributor

1bsyl commented Apr 13, 2026

I am not a big fan of AI, but I think it's too strict :) You cannot live in 2026 and ignore AI capabilities.

AI has an incredibly bad record with regards to quality, legality and ethics. I feel that we could live in 2026 and not support slave labor.

The definition of a slave: "a person who is forced to work for and obey another, and is considered to be their property".

Kenya is a free country. Workers must not have been forced to work, nor are they the property of a company.
The salary sounds very low, which is very sad, but it also sounds to be within the local range.
That said, I agree a big tech company could pay more when outsourcing work with such a difference compared to the US.
I also agree the job sucks, but labeling/classifying is also an important step in making AI less toxic.

By the way, on legality and ethics, you focus on AI, but there are other industrial sectors that are much more concerning: food, clothing, medicine, electronics, etc.
So should we ask people who submit to SDL to wear ethical clothes? Eat organic/local food? Take no medicine? Maybe use no computer at all? We can end up in a dilemma where everything we do is unethical, so we do nothing. And even doing nothing is probably unethical as well.

I may not know all the ethical issues with AI, but my reason to accept it right now (this can change) is that I am optimistic enough to imagine that AI has pros that will at some point far outweigh all the cons.

@cyrneko

cyrneko commented Apr 13, 2026

Artificial intelligence would have more pros to speak of if the primary motivator of the companies making these models wasn't money.

Ultimately using cheap labour and littering the environment is cheaper than giving a fuck.

We shouldn't support that.

@darltrash

darltrash commented Apr 13, 2026

By the way, on legality and ethics, you focus on AI, but there are other industrial sectors that are much more concerning: food, clothing, medicine, electronics, etc. So should we ask people who submit to SDL to wear ethical clothes? Eat organic/local food? Take no medicine? Maybe use no computer at all? We can end up in a dilemma where everything we do is unethical, so we do nothing. And even doing nothing is probably unethical as well.

This is a very strange angle to take. The fact that a lot of things are unethical right now doesn't mean we shouldn't strive to be more ethical. And even if we leave the entire topic of ethics aside and think about this purely in terms of quantifiable measures and results, AI still isn't exactly fit for the job either.

LLMs have been proven to get worse over time as they scale, being practically unable to reason (of course, they are just words on a chain) and producing code of dubious quality, with bots and users here on this platform generating entire PRs and issues using Claude and similar tools without actually understanding what the hell is going on in the codebase to begin with.

I do not think that this is worth it, nor a good compromise at all.

@icculus
Collaborator Author

icculus commented Apr 13, 2026

Just to be clear, @1bsyl has written a lot of excellent code for SDL, over many years, so he has certainly earned the right to have his opinion heard here.

@slime73
Contributor

slime73 commented Apr 13, 2026

I am optimistic enough to imagine that AI has pros that will at some point far outweigh all the cons.

If AI changes significantly in the future, I'm sure the guidance here would be updated accordingly at that time.

@icculus
Collaborator Author

icculus commented Apr 13, 2026

If AI changes significantly in the future, I'm sure the guidance here would be updated accordingly at that time.

Yes, certainly. As I said, this is evolving rapidly, so I think we'll have some discussions about this every few months.

@1bsyl
Contributor

1bsyl commented Apr 14, 2026

Just want to continue the answer:
I repeat my first sentence to be clear: "I am not a big fan of AI". I've tried it a few times and read a few books, but I am not using it daily (probably less than once per month).

It's clear that there are now ethical issues. As said, there is "using cheap labour", "littering the environment", etc. I believe there are laws for this and ethical commissions to guide it, so at some point companies will be pursued for this. There are also people, professionals, whose job is to sort this out.

It's clear we can refuse AI because of this, and then that's it. I am not going to defend them.

But we can look at the technical aspects:

- "Language translation" is done with AI (a simple transformer model). If you just consider this, it is amazing to get some text translated into your own language instantly.
- "Image recognition" can be done with AI using convolutional neural networks. This has been working for years.
- Now there are lots of models that can be picked out and specialized. Think for instance of medicine and education. I haven't tested them and I am not saying this works, but if you only consider this, it means you can bring some education and medical pre-diagnosis to places where there would otherwise be nothing.

For SDL:
One can give it a try with some AI in a shell/CLI and run through some use cases. For instance, you can tell your AI:

- Look at this file @somefile, convert it to some other language.
- Explain this file @somefile to me.
- In this file, change this very specific aspect of the code.
- Look how this part is done, and do the same there.
- The code doesn't compile; here's the error "...". Fix it.

My experience is that if you instruct it clearly, step by step, and if you can evaluate what it's doing, you can manage to get something of good quality and save time.

I just think AI can be used as another tool (like grep, sed, awk, indent, semantic patch), with more potential but also more difficulty to use.

And if someone provides a PR with some part done by AI, and something is erroneous, the AI isn't to blame, but the author of the PR.


Development

Successfully merging this pull request may close these issues: LLM Policy?

10 participants