• 10 Posts
  • 219 Comments
Joined 2 years ago
Cake day: September 21st, 2023


  • I’m just a person interested in / reading about the subject, so I could be mistaken about details, but:

    When we train an LLM, we’re trying to mimic the way neurons work. Training is the really resource-intensive part. Right now companies will train a model, then use it for 6-12 months or whatever before releasing a new version.

    When you and I have a “conversation” with ChatGPT, it’s always with that base model; it’s not actively learning from the conversation, in the sense that new neural pathways are being created. What’s actually happening is that a prompt like this is submitted: {{openai crafted preliminary prompt}} + “Abe: Hello I’m Abe”.

    Then it replies, and the next thing I type gets submitted like this: {{openai crafted preliminary prompt}} + “Abe: Hello I’m Abe” + {{agent response}} + “Abe: Good to meet you computer friend!”

    And so on. Each time, you’re only talking to that same base LLM, but feeding it the whole history of the conversation along with your new prompt.
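    To make that concrete, here’s a minimal sketch of that stateless loop in Python. None of it is OpenAI’s actual API; complete() is a hypothetical stand-in for whatever completion call the provider exposes.

    ```python
    SYSTEM_PREAMBLE = "{{openai crafted preliminary prompt}}"

    def complete(prompt: str) -> str:
        """Hypothetical placeholder for a call to the frozen base model."""
        raise NotImplementedError

    def chat() -> None:
        history: list[tuple[str, str]] = []  # the model itself stores nothing
        while True:
            user_text = input("Abe: ")
            history.append(("Abe", user_text))
            # Every turn, the entire transcript is stitched into one prompt.
            prompt = SYSTEM_PREAMBLE + "\n" + "\n".join(
                f"{speaker}: {text}" for speaker, text in history
            )
            reply = complete(prompt)
            history.append(("Agent", reply))
            print("Agent:", reply)
    ```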

    You’re right to point out that they now have the agents create summaries of the conversation to let them “remember” more. But if we’re trying to argue for consciousness the way we think of it in animals, not even arguing for humans yet, then I think the ability to actively synthesize experiences into the self is a requirement.
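    As an aside on the mechanics, that summary trick might look something like the sketch below; summarize() is a hypothetical stand-in for a second model call with a “condense this transcript” instruction.

    ```python
    MAX_TURNS_VERBATIM = 20

    def summarize(transcript: str) -> str:
        """Hypothetical: would ask the model to condense older turns."""
        raise NotImplementedError

    def compress_history(history: list[str]) -> list[str]:
        # Keep recent turns verbatim; fold older ones into one summary line.
        if len(history) <= MAX_TURNS_VERBATIM:
            return history
        older = history[:-MAX_TURNS_VERBATIM]
        recent = history[-MAX_TURNS_VERBATIM:]
        summary = summarize("\n".join(older))
        return ["Summary of earlier conversation: " + summary, *recent]
    ```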

    A dog remembers when it found food in a certain place on its walk or if it got stabbed by a porcupine and will change its future behavior in response.

    Again, I’m not an expert, but I expect there’s a way to incorporate this type of learning in nearish real time; besides the technical work of figuring it out, though, doing so wouldn’t be very cost-effective compared to the way they’re doing it now.
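    Purely as speculation about what “nearish real time” learning could look like: periodically fine-tune a small adapter on recent transcripts instead of retraining the whole model. Everything below is hypothetical; fine_tune_adapter() stands in for a LoRA-style update step.

    ```python
    import time

    def fine_tune_adapter(model: object, transcripts: list[str]) -> None:
        """Hypothetical: a lightweight gradient update on recent conversations."""
        raise NotImplementedError

    def learning_loop(model: object, fetch_recent_transcripts, interval_s: int = 3600) -> None:
        while True:
            transcripts = fetch_recent_transcripts()
            if transcripts:
                # The costly part: even small per-user adapter updates need
                # GPU time, which is why providers don't do this today.
                fine_tune_adapter(model, transcripts)
            time.sleep(interval_s)
    ```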


  • Yeah, it seems like the major obstacles to saying an LLM is conscious, at least in an animal sense, are 1) setting it up to continuously evaluate/generate responses even without a user prompt and 2) allowing that continuous analysis/response to be incorporated into the LLM’s training.

    The first one seems like it would be comparatively easy: get sufficient processing power and memory, then program it to evaluate and respond to all previous input once a second or whatever.
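    A bare-bones sketch of that idle loop, with complete() again a hypothetical placeholder for a call to the model:

    ```python
    import time

    def complete(prompt: str) -> str:
        """Hypothetical placeholder for a call to the model."""
        raise NotImplementedError

    def idle_loop(context: list[str], tick_s: float = 1.0) -> None:
        while True:
            # Re-evaluate everything seen so far, with no user in the loop,
            # and feed the model's own output back into its context.
            thought = complete("\n".join(context) + "\nReflect on the above:")
            context.append(thought)
            time.sleep(tick_s)
    ```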

    The second one seems more challenging; as I understand it, training an LLM is very resource-intensive. Right now, when it “remembers” a conversation, it’s just because we prime it by feeding it every previous interaction before the most recent query when we hit submit.




  • Thanks for sharing. Although I’m an enthusiastic open source user, I haven’t written any code of significance, so I’m not aware: has anyone made a license where use is restricted to individuals and democratically controlled organizations? I’m picturing something that would allow for some degree of profit motive while encouraging things like worker co-ops and excluding venture-capital-controlled entities.






  • Thank you for clarifying you didn’t intend to be patronizing. That said, I invite you to re-read the content of what you wrote:

    Hey, OP. Welcome to your 30s. 🎉 New to the US, or just seeing it for the first time?

    After OP just expressed emotional distress, you said “Welcome to your 30s”, which implies you think that has something to do with what they just said. “This feeling is part of being in your thirties, when you start to notice things are wrong in the world.” That’s not a stereotype I’ve heard before, but reading it as a straight welcome doesn’t make sense either, because OP didn’t say “I just turned 30”; they said they’re 32. This isn’t a new age bracket for them.

    That reading is bolstered by the next part. After OP is crashing out about the state of the world, you asked if they’re new to the US or just seeing it for the first time. The “or” implies these are the only two reasons why OP might be bringing this up. “Are you new here or did you just start paying attention?”

    Even if you were going for a “Welcome to the club, friend, things have been fucked up for a long time,” I don’t think it’s an effective comment because of the tone: you’re assuming this is the first time they’ve thought about it and that they don’t know things have been bad before. That may be true, but sometimes people just want to vent. I understand it’s common to become jaded over time, if only for self-protection, but it’s valuable to still be able to be outraged and to experience injustice fresh.

    I appreciate you having good intentions. I think a sincere reading of the words can end up with the impression I shared.


  • For me, I’ve been involved in an anti-gerrymandering group, which definitely fulfills the “like-minded” category. But there’s still value just in connecting with people outside work even if it’s not ideological, so things like classes, hobbies, casual sports/exercise, anything that has regular meetings.



  • If you read the OP, it doesn’t say anything about it being a new phenomenon. Even if that’s what this comment “accomplished,” is it worth shitting on someone for sharing their emotions? It comes across as self-congratulatory, like the commenter is above the OP because they’ve experienced negative circumstances for longer.


  • I feel overwhelmed a lot lately.

    I think one of the causes of the situation in the U.S. is isolation: both that the bad actors are too isolated from their fellow humans, and that people of good will feel alone in facing rising assholedom.

    It’s helped me to physically go to gatherings of people who are like-minded. To be reminded that you’re not alone. We’ll never get everyone on our side, but I believe we can get enough.




  • I think the main thing is that even if they were using the same underlying model (like ChatGPT or Claude), they give them different prompts. For example, the one you linked seems clearly prompted to give a humorous, roast-style summary. Just from the Reddit screenshot, I get the impression they gave it a prompt like “you are an assistant for community moderators who are evaluating what course to take with a user” or something like that.
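    As a rough illustration, the two bots could literally be the same model behind different system prompts. complete() is a hypothetical placeholder, and both prompt strings are just my guess at the flavor of prompt involved.

    ```python
    def complete(system_prompt: str, user_text: str) -> str:
        """Hypothetical placeholder for the shared underlying model."""
        raise NotImplementedError

    ROAST_PROMPT = "You are a comedian. Write a humorous roast of this user's post history."
    MOD_PROMPT = ("You are an assistant for community moderators who are "
                  "evaluating what course to take with a user.")

    def summarize_user(persona_prompt: str, post_history: str) -> str:
        # Same base model either way; only the system prompt differs.
        return complete(persona_prompt, post_history)
    ```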