

I don’t see anything there that indicates an AI positive agenda. What am I missing?


Why is this being downvoted so heavily?


Even with the comment making a lot of sense, if someone has a good summary / write up / video that helps build a bit more intuition or understanding of thermodynamics then I’d love the recommendation


The first part you wrote is a bit hard to parse, but I think this is related:
I think the problematic part of most genAI use cases is validation at the end. If you’re doing something that has a large amount of exploration but a small amount of validation, like this, then it’s useful.
A friend was using it to learn the Linux command line; that can be framed as having a single command at the end that you copy, paste and validate. That isn’t perfect because the explanation could still be off and wouldn’t be validated, but I think it’s still a better use case than most.
If you’re asking for the grand unifying theory of gravity then:


Fuck the YouTube PMs
They were condescending on the bug with the fourth-highest internal rating, which simply requested that Shorts could be removed (particularly for children and for mental health). A particular gripe of some engineers was that they couldn’t be removed from the subscriptions page. I was impressed they removed the condescending comment after a month, but they never really addressed the large volume of employees telling them this was the wrong thing to be doing


Thanks for taking the time. I always find it hard to follow up and point out the ambiguity / alternative without coming across in some unwelcome way


Have you ever tried writing a scraper? I have, for offline reference material. You’ll make a mistake like that a few times and know about it, but there are sure to be other times you don’t notice. I usually only want a relatively small site (say a Khan Academy lesson, which doesn’t save text offline, just videos) and put in a large delay between requests, but I’ll still come back after thinking I have it down and find it’s trashed something
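To give a sense of what I mean by a small, slow scraper, here’s a rough sketch (not my actual script; the URLs and delay value are placeholders):

    # Rough sketch: fetch a few pages with a long delay between requests.
    import time
    import urllib.request

    PAGES = [
        "https://example.com/lesson/1",  # placeholder URLs
        "https://example.com/lesson/2",
    ]
    DELAY_SECONDS = 30  # err on the side of being slow

    for url in PAGES:
        with urllib.request.urlopen(url) as resp:
            html = resp.read().decode("utf-8", errors="replace")
        # Crude: name the saved file after the last path segment
        name = url.rstrip("/").rsplit("/", 1)[-1] + ".html"
        with open(name, "w", encoding="utf-8") as f:
            f.write(html)
        time.sleep(DELAY_SECONDS)  # be polite to the site

Even something that simple will happily save the wrong thing (redirects, login walls, pages that only load their text via JavaScript) and you won’t notice until you open it offline.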


It’s funny, I had half been avoiding it for languages. I had lots of foreign friends who often lived together in houses, and those houses would almost have their own creole. They came to learn English, but because everything was mutually intelligible, their mistakes were reinforced rather than caught. I suspect LLMs would be amazing at doing that to people, and their main use case along these lines seems like it would be to practice at a slightly higher level than you, so some of those errors would be hard to catch / really easy to take as correct instead of validating


Strongly disagree with the TLDR thing
At least, the iPhone notification summaries were bad enough that I eventually turned them off (but periodically check them), and while I was working at Google you couldn’t really turn off the genAI summaries of internal things (which evangelists kept adding to everything), and I rarely found them useful. Well… they’re useful if the conversation is really bland, but then the conversation should usually be in some thread elsewhere. If there was something important, I don’t think the genAI systems were very good at highlighting it


Yeah, I’m not disagreeing with the probable outcome here. I just think that it’s more likely at this point in time that the LLM output is doing its stochastic thing in a way your human brain is seeing patterns in. But I was also curious how wrong I was, and that’s part of why I asked for some examples. Not that I could really validate them


Yeah. Strongly agreed for most of the behaviour. I think most amusingly in Grok where obvious efforts have been made to update the output beyond rails and accuracy checks
But the guy here talking about how these will be used to control people’s information diet, he’s probably right about how this turns out unless there are changes to legislation (and I’m expecting any changes to be in the wrong direction), even if he’s possibly misinterpreting some LLM output now


A bunch of this could just be expected failure modes for LLMs. Do you have a list of short examples to get an idea?


There’s huge risk here, but I don’t think most are designed to control people’s opinions. I think most are chasing the cheapest option, and it’s expensive to have people upset about racist content, so they try to train around that, sometimes too much, leading to Black Nazi images etc.
But yeah, it is a power that will get abused by more than just Grok


For comments like this I really wish they were split so I could upvote the bits I agree with and not touch the parts I’m ignorant of


In your OP, sure.
But this comment reads as a desired state, and in some situations that’s a feature request (in this case it seems like there are architecture / system workarounds):
I don’t want email to be accessible to those services. I don’t want those services to use email at all.
Did you get an explanation you’re happy with?


I don’t think that assumption was inherent in the comment
If you want an unpopular feature that doesn’t exist on an open source platform, sometimes your only options are to code it or ask someone else to. The skillset of the feature requester doesn’t change that


What’s new:
And honestly I’d stop there and say “and more”
🤦‍♂️ Thanks