• 0 Posts
  • 57 Comments
Joined 6 months ago
Cake day: June 4th, 2025

  • I can tell where a laser is pointed on me without looking. Like if you blindfolded me, got a laser pen, and shined it on my arm, I could point to where it feels like it is with pretty good accuracy. It’s easier to detect motion than precise placement, and sensation-wise it’s not touch or heat like you’d expect; it’s more like raw proprioception.

    Also, it felt the same regardless of the color of laser we used, which seems odd since you’d think higher-frequency light would be easier to detect.

    Tbf I haven’t done the experiment since I did it with my siblings when I was pretty young. Not sure if I can still do it, but my siblings and cousins couldn’t do it even back then.


  • If I don’t have a choice to leave, or I feel irrationally compelled to actually try to debate them: 10.

    It’s not a choice, it’s a fucking curse. I don’t even have to try; my mind will start predicting what they’ll say, and eventually I want to gut myself because I can think of a hundred things to say and know that none of it will change their fucking minds.

    Worse, mind reading is a fallacy. Sure, predictions can be pretty accurate, but there’s no way to know for sure if those arguments will play out exactly as I think. The real curse is that just because all the things I can think to say won’t change their mind, that doesn’t mean there isn’t something that will. I might just be too dumb to think of a good argument. So I rot as the conversation happens to me, trying to think of anything that could make a difference.

    Oh also, yeah, when they say horrible shit and your mind decides to go “here, this is how their victims feel,” that’s pretty fucking horrible too.

    But if I get up or get upset or react strongly it’ll likely ruin any chance of me changing this person’s mind. Not that that chance existed in the first place.

    Anyway, it isn’t difficult to see things from other people’s perspectives, but let me tell you, I’d much rather talk to psychopaths than delusional idiots.

    I had a roommate who was a full-blown psychopath (and a business major to boot lol) who, once he found out I could see things from his perspective, would debate politics with me in a completely candid manner. I once brought up “so you’d support slavery then?” and he deadass said “if it benefitted me then yes.”

    Fucked up, but the thing is, he’d listen to my arguments when they were logical. And he wasn’t sadistic; slightly narcissistic, sure, but he didn’t derive pleasure from others’ pain. Anyway, the point is that when you talk to someone who is sane, it doesn’t hurt even if they feel no empathy, because you can start to understand why they think the way they do, it always feels like you can change their mind, and they don’t feel an active desire to hurt people.

    Nazis typically aren’t that. Nazis are typically idiots who can’t face the real sources of pain in their lives, so they direct their hatred of their lives and themselves at others. Same with manosphere incels, same with bigots of almost every kind. They want to hurt others, they want to break things, to be mad, because they’re hurt. But you can’t get them to see what they don’t want to see in the first place.

    So you just feel bad for them, feel bad for others harmed by people like them, and hate yourself for feeling hatred for them because you get why they are doing it.

    It isn’t fun, and it’s not even fucking useful, because it’s not like you being emotionally stressed out is helping anyone, ever, and you aren’t changing their minds.

    It’s a curse to feel irrationally compelled to talk to those who won’t listen because “maybe this time it’ll work.” It doesn’t.


    Edit: okay, clearly I’m not in a very good place mentally right now, but I’m leaving this here. If anyone can relate, here’s some external reinforcement, since you’ve likely said it to yourself and it doesn’t work: you do not need to feel compelled to feel bad for others constantly, especially if it isn’t galvanizing you to take solid action to help. If your suffering stops you from functioning well enough to help anyone, then it’s actually a bad thing to feel that empathy. So let yourself relax.


  • People used to tell me I had a great imagination, but honestly I’m more on the aphantasia side of things.

    It’s difficult for me to hold mental images in my head. Like if I imagine an object, it’s in my mind’s eye for a single frame and then black. It takes serious effort to visualize things in detail and keep them in my head for more than just a flash.

    However, I’ve realized that the raw data of the thing I’m imagining is still there. I might not be able to see it, but I can experience other imagined senses. I definitely get lost in maladaptive fantasies a lot. I can imagine a story, I can imagine a universe, I can know how it feels to be in a different place or even a different body than my own, I can imagine the proprioception and tactile sensation of different limbs, I can imagine emotional scenarios far beyond my personal experience, all of that… but I can’t picture the face of someone I know.

    Brains are weird


  • If you define “not normal” as “not having empathy” then your argument is vacuously true. Like “I’m a good person because I say I am”

    If you define normal as the average of everyone, then statistically you’re wrong about empathy. The Stanford Prison Experiment, or basically any other social experiment that is now banned, proves you wrong (they had to ban them because people do shitty things to each other just because).

    A good one I could name (which was banned for causing stress to the participants via some amount of empathy) would be the [Milgram Experiment](https://en.wikipedia.org/wiki/Milgram_experiment). Most people will question their actions if they can directly see they are harming a stranger… unfortunately, most people will also apparently keep hurting others, even while hearing the victim scream and beg them to stop, just because an authority figure tells them to keep going and that it’s all part of the plan.

    I don’t think that people are sadistic or malicious by nature, but they sure as hell do not have strong empathy by default, mate, and the prison experiment alone proves sadism is much more prevalent than you seem to think. As does the existence of the Holocaust, the genocide in Gaza, all the other genocides, the existence of Guantanamo Bay, the existence of capitalism in the first place, the need for a list of what counts as a war crime, war itself, etc.

    The reason any of these happen is because people care more about the status quo or themselves than certain other people. Soldiers kill soldiers because their desire to live and not be shamed as a defector outweighs any pain they’ll cause others. Ergo, there is seemingly an endless supply of people who will choose themselves/self-interest over others, in contrast to your hope that universal empathy is the default.

    You can feel bad for others and still do shitty things, just like you can be a psychopath and do kind things. Empathy doesn’t necessarily make someone good, and the lack of it doesn’t make someone bad. Unless you define good and evil to mean exactly that, in which case there’s no shower thought here, just another definition of good and evil.


  • The claim that humans are always terrible by default is false, but claiming the polar opposite is also false.

    Many people have empathy, but not all, and it varies in strength/quality from one person to another.

    Many well-adjusted people do not feel empathy. Many people are depressed/over-stressed and not well adjusted because they have empathy.

    As for PTSD, the ability (or inability) to adjust to or move on from traumatic experiences is not directly correlated to empathy.

    Furthermore the ability to kill those who wish you (or those you care about) harm is evolutionarily advantageous. Anger and violence in response to stress and pain allows you to fight off predators/enemies/sources-of-pain. The majority of humanity feels these emotions.

    When in a state of anger and pain it is harder for us to think about our actions. Your claim that someone with empathy will always feel conflicted about hurting others is therefore false.

    Now most people with empathy might feel remorse but if their mind doesn’t put enough weight on that moment to remember it, there’s nothing for them to feel sorry for later. Does that mean they don’t feel empathy? Nope, they can still empathize with friends and family and characters on TV shows, they just don’t have a mind that catalogues their guilt. (There are unfortunately many people like this)

    I do think many people cause significant pain to others, but out of ignorance, not malice. And therein lies a major problem with empathy: if you don’t think someone is actually hurting, you won’t feel empathy for them, even if you feel empathy for others. So if you aren’t aware of the pain others around you might feel, you won’t experience empathic responses, even if you would for other kinds of pain.

    People might not be generally good or generally bad but we are typically stupid.

    If you can convince someone that some person is “just faking it for attention,” they won’t feel empathy. Now, the reverse is also typically true: if you can convince a person with empathy that someone else’s pain is real, they’ll feel empathy. Unfortunately, people don’t like being told they’re wrong, having to change their viewpoint, or listening to evidence rationally, so there are many people you cannot convince to feel bad for certain other people.

    Another thing to note is that many of the terms you’ve used are indefinite. What does well-adjusted mean? Psychopathy is prevalent in many fields, and psychopaths can live healthy/stable lives. (Sadism and psychopathy are different, btw.) Are they well adjusted?

    What does good mean? The greater good or empathy? Because those two do not agree on everything. How far does empathy need to go for someone to be good in your opinion? Are people who eat meat evil because they lack empathy for animals?

    If there was a trolley problem-esque situation where you could save five lives but only if you killed a child with your bare hands, would your idea of a good person commit murder or let five people die because they couldn’t overcome their empathy?

    Lastly (and slightly unrelated) I’d like to note that I just had an odd thought: if you tried to logically dichotomize all actions into good or bad, you would need arithmetic to deal with the idea of a greater good / utilitarianism. However, by Gödel’s theorems, in any consistent logical system in which arithmetic can be performed, there will be statements that cannot be proven good or bad no matter how many axioms you add to the system (so long as the axiom set stays effectively listable). In other words, it would actually be impossible by definition to fully dichotomize actions into good or bad. Adding a third category won’t even fix it. Right? Any mathematician/logician/philosopher that can back me up or tell me I’m wrong?


  • The really fascinating thing is that impossible colors exist, which means it’s kind of impossible to actually represent all colors, or at least impossible to represent them precisely.

    Imo it seems colors are relative to how our brain and eyes are adapting to their current field of view, meaning the color you experience is not fully dependent on the light an object actually reflects, nor on the activation of your rods and cones, but on the way your brain processes those signals together. Ergo, you can’t actually represent all colors precisely unless you can control every environmental variable, like the color of every object in someone’s field of view, where someone’s eyes were looking previously, etc.


  • I’m not sure I understand what you’re trying to explain with states. Do you mean measured externally? Or does part of the system discretize the signals? Or are you saying that while the driving fields may be continuous, the molecular structure enforces some sort of granularity on the signals?

    You seem to know much more than me on the hardware side.

    The last time I looked at hardware I came across “ferroelectric synapses,” which do the STDP learning. I think it had something to do with the way the electric dipoles in the material align when a voltage is applied. I don’t think it requires measurement at any step, and it’s continuous whether or not we have good enough hardware to measure those changes.

    > So you ran a simulation of those neurons?

    Yes. A very slow and very inaccurate one. I had to approximate the parallelization by setting a time step, numerically computing the potentials of every neuron and synapse, and then moving on to the next time step and repeating.
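
    (For the curious, here’s a minimal sketch of that kind of loop, assuming standard LIF dynamics. It’s illustrative, not my actual code; the network size, constants, and noise model are all made up.)

    ```python
    import numpy as np

    # Minimal time-stepped LIF sketch: parallelism is only approximated,
    # since every neuron is updated once per global time step.
    dt, tau = 1e-4, 0.02                   # step size and membrane time constant (s)
    v_rest, v_thresh, v_reset = 0.0, 1.0, 0.0

    rng = np.random.default_rng(0)
    n = 100
    v = np.full(n, v_rest)                 # membrane potentials
    w = rng.normal(0.0, 0.1, size=(n, n))  # synaptic weights

    for step in range(10_000):
        spikes = v >= v_thresh             # neurons that fired this step
        v[spikes] = v_reset                # reset them
        i_ext = rng.random(n) * 0.5        # noisy external input
        i_syn = w @ spikes.astype(float)   # input from spiking neighbors
        # leaky integration toward rest, plus inputs, one discrete step at a time
        v += dt * ((v_rest - v) / tau + i_ext + i_syn)
    ```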

    I should state more clearly that I think it’s the temporal aspects of continuity that lead to undecidable behavior rather than just the number of states a neuron has.

    Because each neuron in a neuromorphic net is running in parallel with all others, the signals produced by that neuron will not necessarily be in sync with the signals of any other neuron. As in, theoretically no two neurons are really ever firing at the exact same time.

    As I previously stated, since timing is everything for STDP, the time difference could be very significant when a neuron receives multiple inputs in a short time window and fires.

    An additional thing to note is that in more advanced models of neurons, like the Hodgkin–Huxley model, one can account for multiple synapses along the same dendritic tree, which absolutely makes timing matter more, since input to a synapse near the soma causes a localized change in ion concentrations that can stop the propagation of signals from the farther reaches of the dendrite. And if a far signal were to propagate to that synapse just prior to an input signal, that signal might not be strong enough to get through.

    Depending on how the hardware is built, I’d imagine you could get similar effects from the nearness of electrical signals in a neurochip, where local signaling causes non-trivial effects on the system.

    Anyway, I’ve realized that I likely don’t know enough to say with real certainty whether spiking neural nets are incomputable or not. This is the most rigorous explanation of my thoughts I can write right now:

    • If the true state of each neuron (and synapse) is continuous, then any net built with them has uncountably many states.
    • If these nets are composed of densely connected layers with any recursion, then the state of the system (assuming LIF neurons) could be represented as a large set of coupled non-linear differential equations.
    • Systems of ODEs with three or more variables can give rise to chaotic behavior, such that the tiniest changes in input produce vastly different outputs (the three-body problem and the double pendulum are common examples; see the sketch after this list). This also means there would most likely be no “closed form” solution to these systems of differential equations, which I don’t think necessarily means uncomputability, but it would be a pain to solve one of these numerically and I don’t think Runge–Kutta would be accurate at all.
    • It should then be possible (if not probable) that a spiking neural net gives rise to chaotic behavior.
    • Since we’ve assumed the voltage and current are continuous, they are real numbers that cannot be known with absolute certainty.
    • This means that even if you were to build an algorithm for determining the future state of the machine from its present state, you would not be able to know the true state well enough to be certain you aren’t in a chaotic region where your measurement error eventually gives rise to behavior significantly different from what your algorithm predicted.
    • Ergo, the neural net could be, at least given our flaws in measurement, uncomputable.
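
    (To make the chaos point concrete, here’s a minimal sketch using the Lorenz system, the textbook three-variable chaotic ODE, with its standard parameters. The crude Euler stepping is exactly the kind of numerical shortcut I mean, and the perturbation size is arbitrary.)

    ```python
    import numpy as np

    def lorenz_step(s, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        """One crude Euler step of the Lorenz system (standard parameters)."""
        x, y, z = s
        return s + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

    a = np.array([1.0, 1.0, 1.0])
    b = a + np.array([1e-9, 0.0, 0.0])   # a measurement-sized error

    for i in range(1, 50_001):
        a, b = lorenz_step(a), lorenz_step(b)
        if i % 10_000 == 0:
            # the gap between the two trajectories grows by orders of magnitude
            print(i, np.linalg.norm(a - b))
    ```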

    I think the problem is still uncomputable even with fully precise measurements simply due to the continuity and timing I mentioned before, but I guess I don’t have enough knowledge on the topic to prove it so perhaps I’m wrong.

    I think someone else in this comment section mentioned analog-computing (which I thought included neuromorphic hardware) being capable of non-algorithmic computing so they might have more answers than me on the topic of what non-algorithmic means.

    > If it could be calculated it could solve the halting problem.

    …would it? I don’t think you can derive a solution to the general halting problem from knowing the longest finite runtime among the machines with n states.

    A function for the busy beaver numbers would only tell you that there exists some machine with n states that halts after a certain number of steps. It cannot be used to determine if any specific machine of that size halts or not, just that at least one does and it takes x number of steps.

    Hell, it doesn’t even tell you what input would make a machine halt at that many steps only that there is at least one input for which you get that output.

    So I think that means, if by some miracle you were able to construct an oracle for the busy beaver numbers, you wouldn’t really solve the halting problem, yes? (Again, way outside my expertise, but still fascinating.)


  • I’m definitely not an expert on the topic, but I recently messed around with creating a spiking neural net made of “leaky integrate and fire” (LIF) neurons. I had to do the integration numerically, which was slow and imprecise. However, hardware exists that does run every neuron continuously and in parallel.

    LIF neurons don’t technically have a finite number of states because their membrane potential is continuous. Similarly, despite the fact that they either fire or don’t fire, the synapses between the neurons also work with integration and a decay constant and hence are continuous.
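
    (Concretely, the textbook LIF membrane equation looks like this. Between spikes the potential V evolves continuously, which is where the uncountable state space comes from:)

    ```latex
    \tau_m \frac{dV}{dt} = -(V - V_{\text{rest}}) + R\, I(t),
    \qquad V \ge V_{\text{th}} \;\Rightarrow\; \text{spike, then } V \leftarrow V_{\text{reset}}
    ```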

    This continuity means that neurons don’t fire at discrete time intervals, and, coupled with the fact that inputs are typically coded into spike trains with some randomness, you get different behavior basically every time you turn the network on.

    The curious part is that it can still reliably categorize inputs, and the fact that inputs are given some amount of noise leads to robust functionality. A paper I read used a small, 2-layer net to recognize MNIST digits, and the authors were able to remove 50% of the neurons after training and still get a 60% success rate at identifying the digits.

    Anyway, as for your second question: analog computing, including neuromorphic hardware, is continuous, since electric current is necessarily continuous (electricity is a wave, unfortunately). You are right that other things will add noise to the network, but changes in electrical conductivity from heat and/or voltage fluctuations from electromagnetic interference are also both continuous.

    Most importantly, these networks, when not hardcoded, are constantly adapting their weights.

    Spike-Timing-Dependent Plasticity (STDP) is, as it sounds, dependent on spike timing. The weights of synapses are incredibly sensitive to timing, so if you have enough noise that one neuron fires before another, even by a very tiny amount, that change in timing changes which neuron is strengthened most. Those tiny changes add up as the signals propagate through the net. Even for a small network, a small amount of noise is likely to change its behavior significantly over enough time. And if you have any recurrence in the net, those tiny fluctuations might continually compound forever.
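
    (Here’s what that sensitivity looks like in the standard pair-based STDP rule; textbook exponential window, arbitrary constants. Flip the timing difference by a couple of milliseconds and the weight update flips from strengthening to weakening:)

    ```python
    import math

    A_PLUS, A_MINUS = 0.01, 0.012        # arbitrary learning amplitudes
    TAU_PLUS, TAU_MINUS = 0.020, 0.020   # window time constants (s)

    def stdp_dw(dt):
        """Weight change for dt = t_post - t_pre (pair-based exponential STDP)."""
        if dt > 0:                       # pre fired just before post: potentiate
            return A_PLUS * math.exp(-dt / TAU_PLUS)
        if dt < 0:                       # post fired just before pre: depress
            return -A_MINUS * math.exp(dt / TAU_MINUS)
        return 0.0

    print(stdp_dw(0.001))    # ~ +0.0095 (synapse strengthened)
    print(stdp_dw(-0.001))   # ~ -0.0114 (synapse weakened)
    ```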

    That is also the best answer I have for your third question. The non-algorithmic part comes from the fact that no state of the machine can really be used to predict a future state of the machine, because it is continuous and its behavior is heavily dependent on external and inherent noise. Both the noise and the tiniest of changes allowed by the continuity can create novel chaotic behavior.

    You are right in saying that we can minimize the effects of noise; people have used SNNs to accurately mimic ANNs on neuromorphic hardware for faster compute times, but those networks do not have volatile weights and are built to not be chaotic. If they were non-algorithmic you wouldn’t be able to do gradient descent. The only way to train a truly non-algorithmic net would be to run it.

    Anyway, the main point of “non-algorithmic” is that you can’t compute it in discrete steps. You couldn’t build a typical computer that fully simulates the behavior of the system, because you’ll lose information if you try to encode a continuous signal discretely. Though I should note, continuity isn’t the only thing that makes something non-computable, since the busy beaver numbers are uncomputable despite being defined by entirely discrete and very simple machines.

    Theoretically, if a continuous extension of the busy beaver function existed, then it should be possible for a Liquid State Machine neural net to approximate that function, meaning we could technically build an analog computer capable of computing an uncomputable/undecidable problem.


  • This isn’t about simulating atom by atom. It is just saying that there exist pieces of the universe that can’t be simulated.

    If we find undecidable aspects of physics (like we have), then they must be part of this supposed simulation. But it’s not possible to simulate them with any step-by-step program. Ergo, the universe cannot be a simulation.

    The use of render optimization tricks has no effect on this.

    You can’t even patch it like you said, with wiping minds, because that would require doing the undecidable work, which can’t be done by any algorithm.


  • Bro, our hyperfixations are slightly aligned. I was thrown into this rabbit hole because I was once again trying to build a formal symbolic language to describe conscious experience, using qualia as the atomic formulae of the system. It’s also been giving me lots of fascinating insights and questions about the nature of thought, experience, and philosophy in general.

    Anyway to answer your question: yes and no.

    If you require that the AGI be built using current architecture that is algorithmic then yes, I think the implication holds.

    However, I think neuromorphic hardware is able to bypass this limitation. Continuous, simultaneous processes interacting with each other are likely non-algorithmic. This is how our brains work. You can get some pretty discrete waves of thought through spiking neurons, but the complexity arising from recurrence and the lack of discrete time steps makes me think systems built on complex neuromorphic hardware would not be algorithmic and therefore could also achieve AGI.

    Good news: spiking neural nets are a bitch to prototype, and we can’t train them fast like we can ANNs, so most “AI” is built on ANNs, since we can easily do the matrix math.

    Tbf, I personally don’t think consciousness is necessarily non-algorithmic but that’s a different debate.


    Edit: Oh wait, that means the research only proves that you just can’t simulate the universe on a Turing-machine-esque computer yeah?

    As long as there are non-algorithmic parts to it, I think a system of some kind could still be producing our universe. I suppose this does mean you probably can’t intentionally plan or predict the exact course of the “program,” so it’s not really a “simulation,” but still, that does make me feel slightly disappointed in this research.



  • Disclaimer: I’m an engineering student, not a logician. However, one of my recent hyperfixations led me down the rabbit hole of mathematics, specifically to formal logic systems and their languages and semantics. So here’s my understanding of the concepts.


    TLDR: Undecidable things in physics aren’t capable of being computed by a system based on finite rules and step-by-step processes. This means no algorithm/simulation could be designed to actually run the universe.

    A language consists of the symbols used in a formal system. A system’s syntax is basically the rules by which you can combine those symbols into valid formulas, while a system’s semantics is what determines the meaning behind those formulas. Axioms are formulas that are “universally valid,” meaning they hold true in the system regardless of the values used within them (think of things like the definitions of logical operators such as AND and NOT).

    Gödel’s incompleteness theorems say that any consistent system which is powerful enough to define arithmetic (addition and multiplication) is incomplete. This means that you could write a syntactically valid statement which can neither be proven nor disproven from the axioms of that system, and adding more axioms just creates new such statements (as long as the axiom set stays effectively listable).
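
    (Stated a bit more carefully, in the usual textbook form; note the consistency and effective-axiomatization conditions, which I’m adding here:)

    ```latex
    % Gödel I (informal textbook statement): for any consistent, effectively
    % axiomatized formal system F that can express basic arithmetic, there is
    % a sentence G_F in the language of F such that
    F \nvdash G_F \quad \text{and} \quad F \nvdash \neg G_F
    ```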

    Tarski’s undefinability theorem shows that not only can you write statements which cannot be proven true or false, you cannot actually define the system’s own notion of truth using the system itself. Meaning you can’t really define truth unless you do it from outside the formal language you’re using. (I’m still a little fuzzy on this one.)

    Information-theoretic incompleteness is new to me, but it seems to be similar to Gödel’s theorem with a focus on computation, saying that if you have a complex enough system, there are functions that won’t be recursively definable. As in, you can’t just break them down into smaller parts that can be computed and then work upwards.

    The paper starts by assuming there is a computational formal system which could describe quantum gravity. For this to be the case, the system

    • must have a finite set of axioms and rules
    • be able to describe arithmetic
    • be able to describe all physical phenomena and “resolve all singularities”

    Because the language of this system can define arithmetic, Gödel’s theorems apply. This leads to the fact that this system, if it existed, can’t prove that it itself is sound.

    I don’t know what it means for the “truth predicate” of the system to not be defined, but it apparently ties into Chaitin’s work and means that there must exist statements which are undecidable.

    Undecidable problems can’t be solved recursively by breaking them into smaller steps first. In other words, you can’t build an algorithm that will definitely lead to a yes/no or true/false answer.
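
    (The classic argument for why no such algorithm can exist, written as a Python sketch; `halts` here is a hypothetical oracle, not a real function:)

    ```python
    # Hypothetical halting oracle; this is the thing we are proving cannot exist.
    def halts(program, argument) -> bool:
        """Pretend this returns True iff program(argument) eventually halts."""
        ...

    # Standard diagonalization: a program that does the opposite of whatever
    # the oracle predicts about it.
    def contrary(program):
        if halts(program, program):
            while True:      # oracle says we halt, so loop forever
                pass
        return               # oracle says we loop, so halt immediately

    # contrary(contrary) halts if and only if it does not halt: contradiction,
    # so no algorithm like halts() can exist.
    ```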

    All in all this means that no algorithmic theory could actually describe everything. This means you cannot break all of physics down into a finite set of rules that can be used to compute reality. Ergo, we can’t be in a simulation because there are physical phenomena that exist which are impossible to compute.



  • Considering people seem to correlate scarcity with value, yeah, big time.

    I also doubt people would be willing to hunt/farm xenomorphs if they couldn’t get paid exorbitantly.

    Oh and I’d imagine people who have that eccentric desire to be the top of the food chain would probably think it’s the best food ever. “You’re not a real man ™️ till you’ve eaten xenomorph meat” lol


    Sidenote: I just had an idea for xenomorph farming:

    1. Find an asteroid with enough gravity to keep the xenomorphs from yeeting themselves into space.
    2. Place egg.
    3. Add hosts.
    4. Wait.
    5. Use a robot to retrieve an egg and make sure it stays entirely sealed away with no chance of human contact.
    6. Throw another asteroid at the main one fast enough that it liquefies both.
    7. Collect the obliterated xenomorph parts and cook them as they enter collection to make sure they’re dead.
    8. Sell to patrons for enough money to buy a new asteroid and repeat the process.
    9. Profit.
    10. Eventually make a mistake and die a horrible death.


  • I decided to look into this because I was curious.

    The unification and regulation of the French language came about in 1635 with the founding of the Académie Française, and it actually took a while for the revolutionaries to pivot from “liberty of language” to “the only language in France should be French.” Modern English was already established by this time, and the Great Vowel Shift was basically complete.

    According to Wikipedia, Middle French died out in the 17th century, while Middle English died out in the 15th. Ergo: Modern English predates Modern French.

    If we check back farther, it seems the two languages developed similarly, though the arbitrary divides for each age of language (Old, Middle, Modern) seem to show English coming first by roughly a century.

    Of course this is all arbitrary, since language doesn’t evolve discretely. However, the Wikipedia entry for the oldest Gallo-Romance text (a precursor to French) dates it to 842 CE, whereas Old English poetry dates as early as 650–700 CE. Once again suggesting English predates French.

    Now there is a difficulty here with French, because it originates from Vulgar Latin, which could be considered older than English, but I’m not sure many would call that French, since lots of European languages branched from Vulgar Latin.

    As for silliness… yeah no arguments there lol





  • Proof. We seek to prove that regardless of the existence of an objective morality, people will only adhere to/accept their own personal morality, thus making objective morality irrelevant.

    We have three cases:

    1. Objective Morality doesn’t exist: If there is no objective morality, people can only default to their own morality.
    2. Objective morality exists and doesn’t align with an individual’s own moral compass: Imagine objective morality was defined by some Aztec or eldritch god and tells you it is morally imperative to torture people. If you have a sense of empathy your moral compass will not align with this and you will choose to disobey this morality. Hence, if an objective moral compass exists and does not align with one’s own morality, the individual will reject it and default to their own morality.
    3. Objective morality exists and does align with an individual’s own morality: Trivially this means an individual is still just following their own default morality.

    In all cases the individual will only act on their own morality regardless of the existence or nonexistence of an objective morality. Hence, objective morality is irrelevant. QED.


    Because the existence of objective morality has no relevance, one can assume objective morality doesn’t exist, which, by Occam’s razor, is already the most likely case anyway. Your ideas of right and wrong, or good and bad, will never be objective in a way that matters. It is, in my opinion, a much better idea to explain what you think the positive effects of your “moral” actions are, because those cause-and-effect relationships can be objective. “I think we should provide free basic needs to everyone because a significant portion of crimes are committed out of necessity, and I would like my country to feel safer” is much more objective than “I think we should provide free basic needs to everyone because it’s the right thing to do.”

    Anyone can claim their ideas are “right” or “good” without any explanation of why. I mean, that’s basically the strategy of the Republican Party: “Being trans is wrong,” “Anti-capitalism is evil,” etc. And you saying “Anti-capitalism is good” is just as empty and meaningless.


    Also, fun fact: the proof above works for the existence of god as well. Basically just swap out morality with god and ta-da, it is irrelevant whether god exists; you’re only going to do what you personally think is right regardless.