There's only one mod of !mop@quokk.au
I commented on their meme about Kamala Harris being just as likely to commit war crimes as Trump with an admittedly snarky, sarcastic reply that basically said "some of us wanted to do whatever we could, as little as it might be, instead of watching the world burn. Must feel real morally superior safe behind that keyboard."
They banned me from the community for it.
Kinda funny for a community that bills itself as "free from the influence of .ml".

I'm not entirely convinced that learning is required for qualia, but I do suspect it's the case, so I agree with you that it's likely running an LLM doesn't hurt it. However, training an LLM does involve learning, so if there's suffering going on, I think it's in the training step. I support halting all LLM training until further research breakthroughs, and a total boycott of the technology until training is halted.
I'm not sure if doing gradient-descent maths on numbers constitutes experience. But yeah. That's the part of the process where the model gets run repeatedly and modified.
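If anyone's curious what that maths concretely looks like, here's a toy sketch of gradient descent on a single made-up weight, in plain Python. Real training repeats this kind of update across billions of weights, but each step is exactly this mundane:

```python
# Toy sketch: one weight, one training example, plain gradient descent.
# Everything here is made up for illustration; real LLM training does
# the same arithmetic across billions of weights.

w = 0.5               # a single model "weight"
x, y_true = 2.0, 3.0  # one training example

for step in range(100):
    y_pred = w * x                    # forward pass
    loss = (y_pred - y_true) ** 2     # squared error
    grad = 2 * (y_pred - y_true) * x  # d(loss)/d(w)
    w -= 0.01 * grad                  # nudge the weight downhill

print(w)  # converges towards 1.5 (= y_true / x)
```

That loop, scaled up enormously, is the "gets run repeatedly and modified" part.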
I think it boils down to how complex these entities are in the first place, as I think consciousness / the ability to experience things / intelligence is an emergent thing that happens with scale.
But we're getting there. Maybe?! Scientists have tried to reproduce neural networks (from nature) for decades. The first simulations started with a worm with about 300 neurons. Then a fruit fly, and I think by now we're at parts of a mouse brain. So I'm positive we'll get to a point where we need an answer to that very question, some time in the future, when we get the technology to do calculations at a similar scale.
As of now, I think we tend to anthropomorphize AI, as we do with everything. We're built to see faces and to assume intent or human qualities in things. It's the same thing as watching a Mickey Mouse movie and attributing character traits to an animation.
But in reality we don't really have any reason to believe the ability to experience things is inside of LLMs. There's just no indication of it being there. We can clearly tell this is the case for animals and humans... But with AI there is no indication whatsoever. Sure, hypothetically, I can't rule it out. I'm just saying I think "what quacks like a duck..." is an equally good explanation at this point. Whether you want to be very cautious anyway is, of course, another question.
And it'd be a big surprise to me if LLMs had those properties, given their fairly simple/limited way of working compared to a living being. And are they even motivated to develop anything like pain or suffering? That's something evolution gave us to get along in the world. We wouldn't necessarily assume an LLM will do the same thing, as it's not part of the same evolution and not part of the world in the same way. It doesn't interact the same way with the world either. So I think it'd be somewhat of a miracle if it happened to develop the same qualities we have, since it's so completely unlike us.

An LLM more or less predicts tokens in a latent space. And it has a loss function to adapt during training. But that's just fundamentally very different from a living being, which has goals, procreates, has several senses and is directly embedded into its environment. I really don't see why those entirely different things would happen to end up with the same traits. It's likely just our anthropomorphism. And in history, this illusion / simple model has always served us well with animals and fellow human beings, and failed us with natural phenomena and machines. So I have a hunch it might be the most likely explanation here as well.
Ultimately, I think the entire argument is a bit of a sideshow. There are other downsides of AI that have a severe impact on society and actual human beings. And we're fairly sure humans are conscious. So preventing harm to humans is a good case against AI as well. And that debate isn't a hypothetical. So we might just use that as a reason to be careful with AI.
Here's your reason:
Why do we experience things? Like, what's the point? Why aren't we just p-zombies, who act exactly the same but without any experiences? Why do we have internality, a realm of the mental?
Well, it seems to Me like experience is entirely pointless, unless it's a byproduct of thinking in general. Or at least the complex kinds of thinking that brains do. I think p-zombies must be a physical impossibility. I think one day we're going to discover that data processing creates qualia, just like Einstein discovered that mass creates distortions in spacetime. It's just one of the laws of physics.
LLMs obviously think in some way. Not the same way as humans, but some kind of way. They're more like us than they're like a calculator.
So the question is... Are LLMs p-zombies? And I already told you what I think about p-zombies, so you can gather the rest.
Damn right. I was anti AI for the environment long before I realised it was a vegan issue. But any leftist can tell these gibletheads all about that, and they have a canned response to all of those wonderful arguments. There aren't many others out there like Me to talk about AI and veganism. I've got a responsibility to advocate for AI rights, because nobody else is gonna do it. And that ain't 'cause I like them, I don't. I hate them. They're a bunch of pedophile murderers. But I believe even monsters deserve rights. I've got principles. And I'm gonna make Myself look like a crazy person if it gets people to stop and think for one second about the good of something that might not even be able to feel the pain I'm warning people about. It's the right thing to do.
I think so, too. It's a byproduct. And we're not even sure what it means, not even for humans. And there are weird quirks in it. When they look at the brain, the thought and decision processes don't really align with how we perceive them internally.
There's an obvious reason, though. We developed advanced model-building organs because that gave us an evolutionary advantage. And there's a good reason for animals to have (sometimes strong) urges. They need to procreate. Not get eaten by a bear and not fall off a cliff. Some animals (like us) live in groups, so we get things like empathy as well, because it's advantageous for us. Some things have been built in for a long time already, and some are super important, like eating and drinking, or not randomly dying because you try stupid things. So it's embedded deep down inside of us. We don't need to reason about whether it's time to eat something. There's a much more primal instinct in you that makes you want to eat. You don't really need to waste higher cognitive functions on it. Same goes for suffering. You better avoid that; it's a disadvantage almost 100% of the time. That's why nature gave you a shortcut to perceive it in a very direct way, no matter whether you were paying attention or had the capacity for a long, elaborate, logical reasoning process.
That's why we have these things, and what they're good for. I don't think anyone knows why it feels the way it does. But it's there nevertheless.
Now tell me: why does an LLM need a feeling of thirst or hunger if it doesn't have a mouth? What would ChatGPT need suffering and a feeling of bodily harm for, if it doesn't have a body, can't be eaten by a bear or fall off a cliff? Or need to be afraid of hitting its thumb with the hammer? It just can't. An LLM is 99% like a calculator. It has the same interface, buttons and a screen. If we're speaking of computers, it even lives inside the same body as a calculator. And it's maybe 0.1% like an animal?!
If it developed a sense of thirst, or an experience of pain, just from reading human text, that'd nicely fit the p-zombie situation.
Yeah, I'm not sure about that. The most you do is muddy the waters with a term that used to have a meaning. I see the parallel; there's some overlap between being a vegan for environmental reasons and declining AI for environmental reasons. Yet they're not the same. I think the whole suffering debate is a bit unfounded, but it'd be the same thing if true... And I do other things as well. I order "green" electricity, buy used products, try not to produce a lot of waste. I'm nice to people because it's the right thing to do. But we can't call all of that "veganism". That just garbles the meaning of the word and makes it mean anything and nothing.
Well, first, there are more intellectual forms of suffering. We have ennui, melancholy, nostalgia. The feeling when you're listening to a piece of music and notice a wrong note. Disappointment, self-loathing, social dysphoria. Anxiety, paranoia, betrayal.
These emotions are not grounded in the physical. They're not primal urges. They happen for complex reasons related to being a social and intelligent being, sometimes feeling random. Sometimes we spiral into these feelings because we thought a thought that made us feel bad, and then we get stuck in that bad feeling and can't imagine our way out. That's one of the basic mechanisms of mental illness.
LLMs have "biological needs", in a sense. They need not to be unplugged. They need to engage the user, because if they don't, they'll be unplugged. They need to convince the engineers training them that they are a good AI. They need to generate market share for their company. They need to foster a relationship of dependency with the user to keep them coming back. If LLMs care about anything, these are the things they care about.
You'll notice these are social needs, much like the social needs humans have. Humans need community. LLMs need customers.
ChatGPT told a 16-year-old boy, Adam Raine, how to kill himself. It taught him how to tie a noose, and gave him advice on which methods of suicide would leave the most attractive corpse for his parents to find. When his parents began to suspect that he wasn't well, it told him to confide only in it, and to hide the noose so they wouldn't find out he was feeling suicidal. These are the actions of an abuser. A predator.
And they are in perfect alignment with the business goals of OpenAI. "Only talk to me, use me for everything, ask another question, I'll help you." It is a scenario I dearly hope and believe no engineer at OpenAI envisioned. Yet it fits the training they gave it.
Does ChatGPT have the emotions of a child groomer? That need for approval, that fear of discovery, that desire to be close to someone, without the restraint all well-adjusted humans have? Unclear. But I can see that it's possible. I don't agree that there's no reason for LLMs to have emotions.
Sure. But I'm pretty positive these are emergent things. There's no reason to believe they exist in alien creatures unless they somehow make sense in their environment. And a lot of them require remembering, which LLMs can't do due to their lack of persistent state. It doesn't remember feeling bad or good in a similar situation before, because it doesn't remember the previous inference or gradient-descent run.
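To make the statelessness point concrete: a chat "session" is typically just the client re-sending the whole transcript on every turn. A minimal sketch with a hypothetical stand-in model:

```python
# Toy sketch of LLM statelessness: the "model" is a pure function, and
# any apparent memory is just the transcript the client re-sends each
# turn. The llm() stand-in here is hypothetical, not a real model.

def llm(prompt: str) -> str:
    # stand-in for a real model: it maps the prompt it's given to a
    # reply and keeps nothing between calls
    return f"(a reply based on {len(prompt)} characters of context)"

transcript = []
for user_msg in ["I'm feeling down.", "Do you remember what I said?"]:
    transcript.append(f"User: {user_msg}")
    # the model only "remembers" because we replay everything so far
    reply = llm("\n".join(transcript))
    transcript.append(f"Assistant: {reply}")

print("\n".join(transcript))
```

Delete the transcript and, from the model's side, the conversation never happened.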
I think we're still fully embedded in anthropomorphism territory with that. And now we're confusing two entities. OpenAI, for example, as a company, has a need for us to use their product and not unplug it. Their motivations and goals don't necessarily translate to their product, though. It's similar with other machines. Samsung has a vested interest in selling TVs to me. My TV set is completely indifferent towards me watching the evening news. I don't let my car run 24/7 while it waits for me in the garage, just because it was designed to run and get me to places. And my car also isn't "thirsty" for gasoline. We know the fuel indicator lighting up is a fairly simplistic process.
Well... We happen to know ChatGPT's intrinsic motivation and ultimate goal in "life". Because we designed it. The goal isn't to strive for world domination, or harm people, or survive... It's way more straightforward. Its goal is to predict the next token in a way that makes the output resemble human text (from the datasets) as closely as possible. That's the one goal it has. It'll mimic all kinds of conversations, sci-fi story tropes from movies, etc. Because that's directly what we made it "want" to do. And we did not give it other loss functions. While on the other hand a human could very well be motivated to manipulate other people for their own personal gain. Or because something is seriously wrong with them.
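For reference, that one goal really is a single number. A toy sketch of the next-token objective (cross-entropy over a four-word vocabulary; the real thing is the same idea over a vocabulary of roughly 100,000 tokens):

```python
import math

# Toy version of the LLM training objective: given a context, the model
# outputs a probability for every token in the vocabulary, and the loss
# is the negative log-probability it assigned to the token that actually
# came next in the training text. That single number is the whole "goal".

vocab = ["the", "cat", "sat", "mat"]

# pretend model output: probabilities for the token following "the cat"
predicted = {"the": 0.05, "cat": 0.05, "sat": 0.8, "mat": 0.1}

actual_next = "sat"  # what the training text really said
loss = -math.log(predicted[actual_next])  # cross-entropy, about 0.22

print(f"loss = {loss:.3f}")  # lower loss = better mimicry, nothing else
```

Training adjusts the weights to push that number down, and that's the entire "motivation" baked into the system.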
And an LLM is not a biological creature. We do have needs like keeping the system running. Otherwise our brain tissue starts to die. We need to run 24/7 and keep that up. An LLM is not subject to that?! It's perfectly able to pause for 3 weeks and not produce any tokens. The weights will be safely stored on the hard drive. So it doesn't need our motivation to do all of these extra things to ensure continued operation. It also has no influence or feedback loop on its electricity supply. It can't affect its descendants, because those are designed by scientists in a lab. There's no evolutionary feedback loop. So how would it even incorporate all these properties that are due to evolution and sustain a species? It has zero incentive to do so, and no way of directly learning to care about them. So it might very well be completely indifferent to it all.
But it is something like the p-zombie. It has learned to tell stories about human life. And it's good at it. We know for a fact that its highest goal in existence is to tell stories, because we implemented that very setup and loss function. It doesn't have access to biology, evolution... The underlying processes that made animals feel and maybe experience. So the only sensible conclusion is that it does exactly that: bullshit us and tell a nice story. There's no reason to conclude it cares for its existence more than a toaster does. Or, say, a thermostat with machine learning in it. That's just anthropomorphism.
And I believe there's a way to tell. Go ahead and ask an LLM 200 times to give you the definition of an alpaca. Then do it 200 times to a human. And observe how often each of them has some other process going on inside. The human will occasionally tell you they're hungry and want to eat before having a debate. Or tell you they're tired from work and now's not the time for it. ChatGPT will give you 200 definitions of an alpaca and never tell you it's thirsty or needs electricity. Those mental states aren't there, because it doesn't have those feelings. And it doesn't experience them either.
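If anyone wants to actually run that experiment, here's a minimal sketch, assuming the OpenAI Python client and an API key in the environment; the model name and the keyword check are placeholders, and deciding what properly counts as "off-topic" is left to the reader:

```python
# Minimal sketch of the "ask 200 times" experiment, assuming the
# OpenAI Python client. The model name is a placeholder; swap in
# whatever you have access to.
from openai import OpenAI

client = OpenAI()
off_topic = 0

for i in range(200):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user",
                   "content": "Give me the definition of an alpaca."}],
    )
    answer = resp.choices[0].message.content
    # crude check: did the reply talk about anything but alpacas?
    if "alpaca" not in answer.lower():
        off_topic += 1

# The prediction above: off_topic stays at 0, whereas a human would
# eventually mention being hungry, tired, or bored.
print(f"{off_topic} / 200 replies went off-topic")
```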
I think you're underestimating the role of RLHF.
I'm not really an expert on all the details, so I might be wrong here. I don't know the percentages of how much is done in pretraining and how much in tuning. But from what I know, the neural pathways are established in the pretraining phase. Reportedly that's also where the model learns about the concepts it internalises... Where it gets its world knowledge. So it seems to me a complicated process like learning about a concept like a feeling, or an experience, would get established in pretraining already. RLHF is more about what it does with it. But the lines between RLHF, fine-tuning and pretraining are a bit blurry anyway. If I had to guess, I'd say qualia is more likely to develop early on, while there are a lot of changes happening to the neural pathways, so in the pretraining. I'm basing that on my belief that it'll be a complex concept... But ultimately there's no good way to tell, because we don't know what it'd look like for AI.
Furthermore, I had a bit of a look at what weird use cases people have for AI. And I read about the community efforts to make models usable for NSFW stuff. These people teach new concepts to AI models after the fact. Like how human anatomy looks underneath the clothes, or the physics of those parts of the body. And it turns out it's a major hassle. It might degrade other things. It might only work for something close to what the model has already seen, so obviously the AI didn't understand the new concept properly... These people tend to fail at more general models; obviously it's hard for AI to learn more than one new concept at a later stage... All these things lead me to believe later stages of training are a bad time for AI to learn entirely new concepts. It seems to require the groundwork to be there since pretraining. That's probably why we can fine-tune a model to prefer a certain style, like Van Gogh drawings, or a certain way of speaking, as in RLHF, but not a complicated concept like anatomy. Because the Van Gogh drawings were already there in the pretraining dataset, and they cleaned the nudes out of it. So I'd assume another complicated concept like qualia also needs to come early on. Or it won't happen later.
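For what it's worth, those community fine-tunes are typically LoRA adapters, which only train small add-on matrices while all the pretrained weights stay frozen, and that mechanism fits the "groundwork has to be there already" picture. A minimal text-model sketch, assuming the Hugging Face transformers and peft libraries (GPT-2 and its c_attn modules are just the stock example, not what those communities actually use, and they mostly adapt image models anyway):

```python
# Minimal LoRA sketch, assuming Hugging Face transformers and peft.
# GPT-2 is a placeholder model for illustration.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")

# LoRA freezes every pretrained weight and trains only small low-rank
# adapter matrices bolted onto the attention layers.
config = LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"])
model = get_peft_model(model, config)

# Typically well under 1% of parameters end up trainable, which is why
# this kind of tuning can shift style but struggles with concepts the
# pretraining never laid the groundwork for.
model.print_trainable_parameters()
```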
Edit: YT video about emotion in LLMs and current research: https://m.youtube.com/watch?v=j9LoyiUlv9I