

The only problem is cases like a cop demanding that all videos of his police brutality scandal be buried, etc… Bottom line: 90% of the time, when the pressure hits hard enough, the words don't do it justice. Especially when the cops release the report written in the past exonerative tense, describing things in ways that downplay and obfuscate what happened.




I fully agree that as a tool, LLMs are amazing. Throw in a config file or some code where you know 99% of what it should be but can't find what's wrong, and I'd say there's a good 70% chance it will find the issue, maybe chasing down one or two red herrings before it solves it.
The bad rap, of course, comes down to two main factors:
Idiots who use it to do all of the coding, and thus wind up with something they themselves don't have even a basic understanding of how it fits together, so they can't spot when it does something horrifically wrong.
The overall reality that, no matter how you slice it, these things cost an absurd amount to run. So while the AI companies are letting us use them for free or on really cheap plans, every request actually costs real money to process, and realistically there's no sign of this reaching a point where there's a fair trade of value…