

If you've got good internet, you could look into GeForce Now as a stopgap / head start.


Enlightened dumbness 🧘


It regurgitates old code, it cannot come up with new stuff.
The trick is, most of what you write is basically old code in new wrapping. In most projects, I’d say the new and novel part is maybe 10% of the code. The rest is things like setting up DB models, connecting them to the base logic, setting up views and API endpoints, decoding the message on the UI side, displaying it to the user, handling input back, threading things so the UI doesn’t hang, error handling, input validation, basic unit tests, setting up settings with support for reading them from a file or env vars, making the UI look not horrible, adding translatable text, and so on and on and on. All of that has been written in some variation a million times before. All of it can be written (and verified) by a half-asleep competent coder.
The actual new, interesting part is gonna be a small, small percentage of the total code.


I guess I’m one of the idiots then, but what do I know. I’ve only been coding since the 90s.


That’s kinda wrong though. I’ve seen LLMs write pretty good code, in some cases even doing something clever I hadn’t thought of.
You should treat it like any junior though, and read the code changes and give feedback where needed.


I’ve used Claude code to fix some bugs and add some new features to some of my old, small programs and websites. Not things I can’t do myself, but things I can’t be arsed to sit down and actually do.
It’s actually gone really well, with clean and solid code: easily readable, correct, with error handling and even comments explaining things. It even took a GUI stream processing program I had and wrote a server / webapp with the same functionality, and it was able to extend it with a few new features I’d been thinking of adding.
These are not complex things, but a few of them were 20+ files big, and it managed to not only navigate the code, but understand it well enough to add features with changes touching multiple files (model, logic, and view layers, for example, or refactoring an oversized class and updating all references to use the new classes).
So it’s absolutely useful and capable of writing good code.


I’ve found it useful for writing unit tests once you’ve written one or two, and for writing specific functions and small scripts. For example, some time ago I needed a script that found the machine’s public IP, then posted it to an MQTT topic along with a timestamp, with the config abstracted out into a file.
Now there’s nothing difficult about this, but just looking up which libraries to use and their syntax takes some time, along with actually writing the code. Also, since it’s so straightforward, it’s pretty boring. ChatGPT wrote it in under two minutes, working perfectly on the first try.
It’s also been helpful with bash scripts, PowerShell scripts, and Ansible playbooks. Things I don’t really remember the syntax of between uses, and which are a bit arcane / exotic. It’s just a nice helper to have for the boring and simple things that still need to be done.
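For context, a script like the one described above fits in a couple dozen lines. This is just a sketch of the idea; the config file name, the `[mqtt]` section layout, the ipify lookup URL, and the paho-mqtt dependency are all my assumptions, not details from the original:

```python
# Sketch: look up this machine's public IP, wrap it in a JSON payload with a
# timestamp, and publish it to an MQTT topic, with config read from an INI file.
import configparser
import json
import time
from urllib.request import urlopen

def load_config(path="publish_ip.ini"):
    # Expects an INI file with an [mqtt] section: broker, port, topic.
    cfg = configparser.ConfigParser()
    cfg.read(path)
    return cfg["mqtt"]

def get_public_ip(url="https://api.ipify.org"):
    # Ask an external service (assumed here: ipify) what our public IP is.
    with urlopen(url, timeout=10) as resp:
        return resp.read().decode().strip()

def build_payload(ip):
    # JSON payload carrying the IP and a Unix timestamp.
    return json.dumps({"ip": ip, "timestamp": int(time.time())})

def publish_ip(config_path="publish_ip.ini"):
    # Publish the payload to the configured broker and topic.
    import paho.mqtt.client as mqtt  # assumed dependency: pip install paho-mqtt
    conf = load_config(config_path)
    client = mqtt.Client()
    client.connect(conf["broker"], int(conf.get("port", "1883")))
    client.publish(conf["topic"], build_payload(get_public_ip()))
    client.disconnect()
```

Exactly the kind of glue code that’s trivial but tedious: three libraries to look up, no interesting logic anywhere.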


Just can’t waste time on trying to make it do anything complicated because that never goes well.
Yeah, that’s a waste of time. However, it can knock out simple code you could easily write yourself, but that’s boring to write and takes time away from working on the real problems.


So someone thinks it’s not entirely stupid


Yes, which has measurably improved some tasks: roughly a 20% improvement on programming tasks, as a practical example. It has also improved tool use and agentic tasks, allowing the LLM to plan ahead and adjust its initial approach based on later parts of the task.
Having the LLM talk through the task lets it improve or fix bad decisions made early, based on new realizations at later stages. Sort of like how a human thinks through how to do something.


We’ll still have models like deepseek, and (hopefully) discount used server hardware


I’ve seen some saying that “lifetime” refers to product lifetime, which is not expected to be more than X years. So yeah, slimes gonna slime


Since I already use ZFS for my data storage, I just created a private dataset for sensitive data. I also have my services split based on whether they’re sensitive or not, so the non-sensitive stuff comes up automatically and the sensitive stuff waits for me to log in and unlock the dataset.


I’m sorry, but what about it is ill-informed or opinion? Fact is, it can do things no other image generator can, open source or not. It can also effortlessly do things that would require a lot of tinkering with ControlNet in ComfyUI, or even making custom LoRAs. It’s a multimodal model that handles both image and text as input and output, and does it well. All the other useful image generators are diffusion based, which don’t read a prompt the same way; they’re more about weighting patterns based on keywords than any real understanding of the prompt. That’s why they struggle with relatively simple things like “a full glass of wine” or “a horse riding an astronaut on the moon”. If I’m wrong about this, please prove me wrong. Nothing would make me happier than finding an open source model that can do what OpenAI’s new image model can do, really. I already run llama.cpp servers and ComfyUI locally, and I have my own AI server in the basement with a P40 and a 3090. Please, please prove me wrong here.
I love open models, and I’ve been running them locally since the first Llama model, but that doesn’t mean I willfully ignore what Claude, OpenAI, and Google develop, or pretend it doesn’t exist. Rather, I want awareness that it does exist, and I want an open source version of it.


Ah yes, I forgot we live in a post-truth society where reality doesn’t matter and only your feelings are important. And since your feelings say AI bad, proprietary bad, and reddit bad, you don’t have to actually think or take reality into consideration.


I know them, and have used them a bit. I even mentioned them in an earlier comment. The capabilities of OpenAI’s new model are on a different level in my experience.
https://www.reddit.com/r/StableDiffusion/comments/1jlj8me/4o_vs_flux/ - read the comments there. That’s a community dedicated to running local diffusion models. They’re familiar with all the tricks. They’re pretty damn impressed too.
I can’t help but feel that people here either haven’t tried the new openai image model, or have never actually used any of the existing ai image generators before.


No other model on the market can do anything like that. The closest is diffusion based, where you could train a LoRA on a person’s look or a specific piece of clothing, then generate multiple times and / or use ControlNet to sorta control the output. That’s easily hours or days of work, plus it’s quite technical to set up and use.
OpenAI’s new model is a paradigm shift in both what the model can do and how you use it, and it can easily and effortlessly produce things that were extremely difficult or impossible before without complicated procedures and post-processing in Photoshop.
Edit: Some examples. Try to make any of these in any of the existing image generators.


It understands what you’re telling it, and can generate images from vague descriptions, combine things from different images just by telling it, modify it and understand the context - like knowing that “me” is the person in the image, for example.
Edit: From OpenAI - “4o image generation is an autoregressive model natively embedded within ChatGPT”


OpenAI is so lagging behind in terms of image generation it is comical at this point.
You’re the one lagging behind. OpenAI’s new image model is on a different level, way ahead of the competition.


You’re pushing code to prod without PRs and code reviews? What kind of jank-ass cowboy shop are you running?
It doesn’t matter whether an LLM or a human wrote it; it needs peer review and unit tests, and it has to go through QA before it gets anywhere near production.