Time Nick Message
00:02 celeron55 @mistere_123 my personal take is i think it's fine if you're talking to the AI in terms of code, not (just) in terms of the final user facing end result. if you talk to it in terms of code, it's then acting as a combined autocomplete + api reference search + stackoverflow search. if you only describe a user facing goal, then it's likely to take too much inspiration from someone else's project (which creates problems in both copyright and quality, and actually in my experience it's also likely to miss functions and features that already exist in the code, creating duplication)
01:11 MTDiscord yes, and rarely works. thx celeron, yes, as a combined autocomplete + reference is how I would use it.
01:11 MTDiscord one that nonetheless should not be fully trusted even as a reference
09:21 celeron55 well you need to use it as if it were a junior coder working for you, rather than some kind of all-knowing programming authority
09:23 celeron55 if you review its result as good, use it. if it's crap, then work on it or discard it. i think pretty much the biggest feature of these is exactly that - it makes code very cheap, and you don't grow personally attached to the code when making it. it allows you to be very impartial and try out ideas in quick succession without getting weary
09:26 celeron55 reading and understanding code is now a much more important skill than writing code
09:28 celeron55 (and yes, i know what this means in terms of junior coder employment. that part is kind of sad. but in the meantime you can e.g. improve your portfolio by contributing to luanti :--))
09:31 sfan5 personally my use of AI for coding has been limited to asking questions to an LLM in a separate tab (no editor integration)
09:31 sfan5 so the largest amount of code I really copied from an AI is like 5 lines so far
09:46 celeron55 that's how i started more than a year ago.
now i use an agent, and the editor is mostly for reviewing code and making single line edits
09:48 celeron55 i can comfortably code literally without an editor at all
09:50 celeron55 (aside from grep, git and such basic tools, which would have counted as an editor 30 years ago)
09:57 sfan5 worth mentioning that my dayjob isn't software developer so the pressure to get things done in bulk isn't there
09:57 sfan5 (what I said applies to the development I do for Luanti though)
10:01 celeron55 makes sense. if i had a lot of time and my goal was just to leisurely program, most optimizations to the process wouldn't matter. i do like to have a high output for time spent in my hobbies also, but that's probably mostly because i don't have enough time left for those
10:13 rubenwardy I'd say the value of junior devs is more than just pumping out code and it's quite sad how shortsighted companies are being over this
10:15 rubenwardy I do also think that too many people have gone into compsci/sfe that shouldn't have, due to it being overpromoted in schools and such
10:15 rubenwardy I do it because I genuinely enjoy it and am passionate about it
10:47 rubenwardy I guess our policy for AI is that we shouldn't be able to tell that you're using it. If you repeatedly submit vibe coded slop we'd probably treat it as spam
10:50 [MatrxMT] The world is changing so fast, heeeh
11:05 celeron55 i think it's fair to disallow slop. both human slop and ai slop. being able to generate code quickly is in no way a reason to allow bad performance, difficult to maintain code, skipped reviews or anything like that
11:09 celeron55 however: using AI as a review tool is a good idea IMO. AI often catches some of the kind of stupid little things humans are bad at noticing
11:09 celeron55 like spelling mistakes
11:21 [MatrxMT] What annoys me is that what I put into the LLM is gonna get used to train the LLM.
I don't want big companies (or proprietary software in general) to profit from my work, especially if we're not talking about code but about creative writing
11:43 celeron55 it's not that much different from proprietary social media platforms where people post their stuff all the time, but yeah, it's all questionable
11:45 celeron55 i mean, it's not like it's not going to be used to train the LLM if you post it somewhere else on the internet. and if you post it on a proprietary platform, it's going to be used to train both an LLM and also the advertising and content selection algorithms
11:45 celeron55 we're publishing things mostly for machines nowadays. humans are a minor part of the audience
11:45 celeron55 which is, of course, insane, but true
11:49 celeron55 and yes, i get your point: luanti is and should be maintained as a way to get away from that, if possible with reasonable means. it's the reasonable thing to do as a non-profit community, nobody else is going to do it
11:51 celeron55 we need to value human crafts and human communication, because most won't, and as it becomes scarce, it will also become valuable
11:51 celeron55 it used to be that anywhere you went, humans made the things you saw, even on the internet
11:52 celeron55 like 5 years ago
12:00 celeron55 and then, looking towards the future: one day, AI probably will be able to just take the luanti codebase and develop it into anything you want. but that day it will also be able to start from nothing and develop anything you want from scratch - so there's no value in the existing codebase. from that standpoint, the only way the codebase will have value is if it's maintained essentially like a historical artefact. like an old knife, an old piano or a classic car. you don't put them into the shredder and transport the materials to the factory to quickly modernize them. that's what an LLM does to information
12:03 celeron55 yeah, it's hard to swallow and the AI bubble could crash any day.
but this is what the current trajectory is looking like
12:36 rubenwardy I think that relying on big tech companies for LLMs is quite risky, they're all very unprofitable and subsidized currently and eventually the cost will need to be passed to users. Enshittification intensifies. So I'd like to see more focus on open source and self hosted models
12:37 rubenwardy This is what Mozilla could be doing rather than adding a chatbot sidebar to proprietary services
12:37 rubenwardy I have the same issue with depending on AWS, Azure, or GCP. At least they are already profitable
12:39 sfan5 there are reasonable local models, and IIRC the translation feature mozilla added is also a local AI model
12:46 MTDiscord For now, local models are keeping pace with proprietary ones. They might fall behind further, but never so far behind that it becomes a societal issue. It's probably the best thing for competition. Someday, when companies stop having an incentive to release them, I figure I'll start donating to something like OLMo. Cause you really only need one or two great local models to keep competition up
12:47 celeron55 i agree. purpose-built models can be small enough to run locally while still being capable enough. big tech is aiming for universal models with AGI as the end game, and they're ready to go through huge amounts of resources to get there. and they probably don't really know what to actually do with the power it will give them
13:38 MTDiscord If I were you I'd kindly ask (read: optional!) modmakers to make clear which parts (code, doc, design, assets, ...) of the mod were AI generated, and which model and version. If I were you, I'd also provide a machine-readable template or at least guidelines for such a document.
13:40 celeron55 we already do that here, but it's relatively hidden.
you don't really find it unless you're specifically looking for it https://content.luanti.org/help/copyright/#i-used-an-ai
13:42 celeron55 it would be best if we didn't bully people into hiding their AI use but rather encouraged documenting it as well as possible. for many reasons (awareness, copyright, science, teaching others)
14:14 MTDiscord Yep, for example the agentX game I made for the jam used two tools to make the voice audio TTS. Documented in the license file: how that was generated, and the tool licenses
14:24 DragonWrangler my latest fireworks also had some ai assistance, which I documented in the Code_source.md file.
16:04 jonadab I do think AI-generated material should always be reviewed by a human.
16:12 DragonWrangler You'd hope that everyone actually goes through and reviews the code and makes the needed changes to ensure it is actually proper.
16:13 celeron55 one of the best things to do is to first review it using a different AI model. i just made github copilot review code made by claude code and all i can say is it's good at doing that. and doesn't get tired like a human would. after that i'll browse through the diff myself and consider merging (this isn't luanti)
16:14 jonadab That makes sense to me.
16:15 jonadab I would expect AI to be better at noticing things a human might miss, than at catching things a human *would* notice.
16:20 DragonWrangler Celeron55: Smart way of doing it.
16:24 celeron55 then of course full use of unit tests and other automated checks that the AI can iterate on is required for efficient results. Any time a human is needed in the loop, that's immediately the bottleneck, and it frankly isn't enjoyable at all to repeatedly be testing AI slop yourself. if the AI is making slop, you want it to beat itself with a stick on its own.
you're there only to drink the coffee
16:26 celeron55 once the code actually builds, runs and passes tests, then it's worth getting looked at by a human
16:27 celeron55 a codebase designed for this is at a huge advantage compared to one that isn't
16:27 jonadab I mean, tools for automated testing are useful regardless of the details of how development is happening.
16:30 celeron55 of course. but an experienced human can wing it without, by using various strategies and tactics
16:31 sfan5 strategy called "thinking"
16:31 jonadab Sure.
16:31 jonadab But just because you *can* do without a useful tool, doesn't mean you want to.
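[editor's note] The machine-readable AI-use declaration suggested at 13:38 is left unspecified in the discussion; the linked ContentDB help page only describes what to disclose, not a format. As a purely hypothetical sketch (file name, field names, and values are all invented, not an official Luanti or ContentDB format), such a file shipped in a mod's root might look like:

```
# AI_USE (hypothetical template, not an official format)
component: code               # one of: code, doc, design, assets, audio
files: init.lua, api.lua
model: <model name and version used>
extent: assisted              # one of: generated, assisted, reviewed-only
human-reviewed: yes
```

One entry per component keeps it trivially greppable, which is the point of "machine-readable".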
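[editor's note] The 16:24-16:26 workflow — let the AI iterate against automated checks, and only hand code to a human once everything is green — boils down to a single pass/fail gate the agent loops on. A minimal sketch, not Luanti's actual tooling; the check commands are placeholders you would replace with your project's real build and test steps:

```python
import subprocess

def checks_pass(commands):
    # Run each automated check (build, tests, linters) in order and
    # return True only if every one exits with status 0. An agent can
    # rerun its changes against this until everything is green, so a
    # human only ever reviews code that already builds and passes tests.
    return all(subprocess.run(cmd).returncode == 0 for cmd in commands)
```

For example, `checks_pass([["cmake", "--build", "build"], ["ctest", "--test-dir", "build"]])` for a hypothetical CMake project; the agent (or a wrapper around it) keeps editing until this returns True.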