Godot Engine suffering from lots of "AI slop" code submissions https://www.gamingonlinux.com/2026/02/godot-engine-suffering-from-lots-of-ai-slop-code-submissions/
@SuperDicq @gamingonlinux @feld you submitted a change to a project using AI, didn't you? how did that go?
@mischievoustomato @gamingonlinux @SuperDicq some people will be able to quickly spot AI slop.
Most people will not be able to spot AI-generated code that had the slop patterns manually removed by the developer submitting it, because it just looks like normal code.
@mischievoustomato @SuperDicq @gamingonlinux also if you prompt the model to read the existing code and copy its style -- that works great. Reviewers are absolutely clueless that anything was generated.
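(As a rough sketch of the style-copying prompt trick described above -- the prompt wording and helper name are illustrative assumptions, not any particular tool's API:)

```python
# Hypothetical sketch: build a prompt that asks a model to imitate the
# style of existing code before generating new code. The wording here is
# an assumption; any local or hosted chat model could receive it.

def style_prompt(existing_code: str, task: str) -> str:
    """Build a prompt that asks the model to mimic an existing code style."""
    return (
        "Study the following code and imitate its naming, formatting, "
        "and comment style exactly:\n\n"
        f"{existing_code}\n\n"
        f"Now, in that same style: {task}"
    )

msg = style_prompt("def add(a, b):\n    return a + b",
                   "write a subtract function")
print("imitate its naming" in msg)  # True
```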
@SuperDicq @gamingonlinux @mischievoustomato
> I personally see no problem with running LLMs locally using free software.
sure, that would be great if it were possible, but the models required to get good results are too big to run on consumer hardware right now. We just aren't there yet.
@SuperDicq @gamingonlinux @mischievoustomato there are advancements coming that will crunch down the required hardware to run a large model too. I've seen one WIP inference engine for this. I have hope.
@SuperDicq @gamingonlinux @mischievoustomato this is true and I think that might be where we are heading. Just swap models: working on frontend now? Swap in the frontend model, and so on.
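(The swap-a-model-per-task idea above could look something like this -- all model names and the task mapping are made-up placeholders, not real deployment advice:)

```python
# Hypothetical per-task model routing. The mapping and model names are
# illustrative assumptions only.

TASK_MODELS = {
    "frontend": "small-frontend-model",  # assumed name
    "backend": "small-backend-model",    # assumed name
    "docs": "small-prose-model",         # assumed name
}

def pick_model(task: str, default: str = "general-model") -> str:
    """Return the model to load for a given task, falling back to a default."""
    return TASK_MODELS.get(task, default)

print(pick_model("frontend"))  # small-frontend-model
print(pick_model("gamedev"))   # general-model
```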
@feld @gamingonlinux @SuperDicq @mischievoustomato there are also three open-weight models now that are frontier-level: Kimi 2.5, GLM-5 and Qwen 3.5. They can be run by anyone who has the hardware.
@lain @gamingonlinux @SuperDicq @mischievoustomato can you define what these hardware requirements look like though?
@feld @SuperDicq @gamingonlinux @mischievoustomato (if you want to run it at home, that is. there's plenty of services that run it for you: https://openrouter.ai/z-ai/glm-5)
@SuperDicq @gamingonlinux @feld @mischievoustomato same reason a programmer who knows multiple languages (deeply) will be better at writing good code in each one. problem solving is general rather than particular, even if you're comparing vastly different toolsets
@lain @feld @gamingonlinux @SuperDicq @mischievoustomato no one has the hardware to run these models at 8-bit quants, and anyone who does is just gonna use Opus.
Local fags are so BTFO.
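(A back-of-the-envelope way to put numbers on the hardware question raised above -- the parameter counts below are placeholders, not the real sizes of the models named in the thread, and the 20% overhead factor is a rough assumption:)

```python
# Rough memory estimate for running a model at a given quantization:
# weights (params x bits/8 bytes) plus an assumed ~20% overhead for
# KV cache and runtime buffers.

def model_memory_gb(params_billion: float, bits_per_weight: int,
                    overhead: float = 1.2) -> float:
    """Approximate RAM/VRAM needed in GB."""
    weight_bytes = params_billion * 1e9 * (bits_per_weight / 8)
    return weight_bytes * overhead / 1e9

# A 3B model at 4-bit fits comfortably on a laptop:
print(round(model_memory_gb(3, 4), 1))    # ~1.8 GB
# A hypothetical 400B model at 8-bit needs datacenter-class hardware:
print(round(model_memory_gb(400, 8), 0))  # ~480 GB
```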
@SuperDicq @gamingonlinux @feld @mischievoustomato there are good 2-3B open-weight models that should run fairly well on most non-ancient machines. try one of those and tell me if they're good enough.
on a related note, i've been daydreaming for about a month of making a prose-only, en_US (1700-1900)-only dataset pruned from public domain datasets currently on huggingface. i've been trying and failing to figure out where to start but if i'm successful, that should create a very focused dataset for conversational and creative work. is that close to what you were asking?
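(The pruning step daydreamed about above might start as a simple record filter -- the field names "lang", "genre", and "year" are assumptions here; a real Hugging Face dataset would need its own fields mapped onto them:)

```python
# Sketch of pruning a public-domain corpus down to en_US prose from
# 1700-1900. Record fields are assumed, not from any specific dataset.

def keep(record: dict) -> bool:
    """Keep English prose published between 1700 and 1900."""
    return (
        record.get("lang") == "en"
        and record.get("genre") == "prose"
        and 1700 <= record.get("year", 0) <= 1900
    )

corpus = [
    {"text": "It was a dark...", "lang": "en", "genre": "prose", "year": 1847},
    {"text": "Ode to...", "lang": "en", "genre": "poetry", "year": 1820},
    {"text": "Moderne Texte", "lang": "de", "genre": "prose", "year": 1850},
]
pruned = [r for r in corpus if keep(r)]
print(len(pruned))  # 1
```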
@HatkeshiatorTND @gamingonlinux @SuperDicq @mischievoustomato idk if this will help, but maybe???
https://github.com/stealthwater/model_tools