HN.zip

When Dawkins met Claude – Could this AI be conscious?

https://archive.ph/Rq5bw

37 points by pentestercrab - 223 comments

223 Comments

teekert [3 hidden]5 mins ago
Yeah maybe the LLM just knows the book in one go, like "an understanding" from Children of Time (recommended, if just for the amount of metaphors it gives you), but we do something similar with pictures. So, nice story bro-LLM, but it's just the combination of words a system would throw together in this context to impress a human. ;)
andyjohnson0 [3 hidden]5 mins ago
Some jumbled thoughts from a lay-person:

1. We clearly don't have a consensus definition of consciousness. But it's not clear to me that we even have rough, working definitions that are better than just comparisons back to subjective human mental experience. Until we get past that, people will keep invoking human exceptionalism.

2. Until we stop thinking of consciousness as a single continuum, we're not going to be able to talk clearly about different dimensions of consciousness, or consciousness that in some ways exceeds that of humans.

3. We need to take ourselves out of the picture. Because it's possible that consciousness is no more than a mental illusion.

4. Imo our tendency to kill and eat other animals might be a constraint on our collective ability to recognise and confront non-human consciousness, and therefore to see it for what it is.

qnleigh [3 hidden]5 mins ago
It's easy, and very tempting, to dismiss this sort of thing. But given how little we know about the human brain, let alone consciousness, I don't see how we can be confident that LLMs aren't conscious.

I've had a lot of thoughts and conversations over the years that changed my mind on what consciousness likely requires. One was the realization that a purely mechanical computer can, in principle, simulate the laws of physics, and with it a human brain. So with a few other mild assumptions, you might conclude that a bunch of gears and pulleys can be conscious, which feels profoundly counterintuitive.

I think that was the moment I stopped being sure about anything related to this question.

marliechiller [3 hidden]5 mins ago
Why do you think stringing words together is any more a sign of consciousness than Google Maps is when it tries to find the best route available to your destination? It seems to me that humans often fall into the trap of anthropomorphism. This is a theme that's touched upon in the novel "Blindsight" by Peter Watts. Just because something can communicate in a way that you can interpret, doesn't mean something is conscious
vidarh [3 hidden]5 mins ago
A large part of the problem is what you consider consciousness.

If you talk about having a subjective experience, then we don't know of any way to prove that even other humans than ourselves have one. We go entirely by assumptions based on physical similarity and our ability to communicate.

But we have no evidence that physical similarity is a prerequisite, nor that it is sufficient.

So the bigger trap is to assume that we know what causes a subjective experience, and what does not.

None of us even know if a subjective experience exists for more than a single entity.

But the second problem is that it is not clear at all whether that subjective experience in any way matters.

Unless our brains exceed the Turing computable (and we have no evidence that is even possible), either whatever causes the subjective experience is also within the Turing computable, or it cannot in any way influence our actions.

Ultimately we know very little about this, and we have very little basis for ruling out consciousness in computational systems, and the best and closest we have is whether or not they appear conscious when communicating with them.

dumpsterdiver [3 hidden]5 mins ago
> Just because something can communicate in a way that you can interpret, doesn't mean something is conscious

The phrase “the trap of anthropomorphism” betrays a rather dull premise: that consciousness is strictly defined by human experience, and no other experience. It refuses to examine the underlying substrate, at which point we’re not even talking the same language anymore when discussing consciousness.

marliechiller [3 hidden]5 mins ago
I think these ideas are orthogonal. I do not think that consciousness is defined by human experience at all - in fact, I think humans do a profound disservice to animals in our current lack of appreciation for their clear displays of consciousness.

That said, if a chimpanzee bares its teeth at me, I could interpret that to be a smile when in fact it's a threatening gesture. It's this misinterpretation that I am trying to get at. The overlaying of my human experiences onto something which is not human. We fall for this over and over again, likely as we are hard wired to - akin to mistakenly seeing eyes when observing random patterns in nature.

In the case of LLMs though, why does using a mathematical formula for predicting the next word give any more credence to consciousness than an algorithm which finds a nearest neighbour? To me, it's humans falling foul of false pattern matching in the pursuit of understanding

mseepgood [3 hidden]5 mins ago
> It seems to me that humans often fall into the trap of anthropomorphism.

That's true, but they also often fall into the trap of exceptionalism.

energy123 [3 hidden]5 mins ago
There are people who think Google Maps is a tiny bit conscious (the union of computational functionalists and panpsychists), to resolve the dilemma of some magical binary threshold.
stavros [3 hidden]5 mins ago
Why do you think it's definitely not?
laichzeit0 [3 hidden]5 mins ago
You could push the analogy even further and run the thought experiment where every forward pass through an LLM could in principle be done on pen and paper, distributed throughout all humanity. Sure it would take a long time, but the output would be exactly the same. We’ve just shifted the implementation from GPU to scribbling things down on paper. If you want to assert that LLMs are “conscious” then you would have to likewise say this pen-and-paper implementation is conscious unless you want to say a certain clock-speed is a necessary condition for consciousness.
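To make that concrete: here is a minimal, illustrative sketch of the arithmetic a forward pass bottoms out in (the tiny weights below are made up, not from any real model). Every step is plain multiply-and-add that could, in principle, be done with pencil and paper:

    import math

    def matvec(W, x):
        # Each output value is a sum of products: tedious but doable by hand.
        return [sum(w * xi for w, xi in zip(row, x)) for row in W]

    def softmax(v):
        # Turn raw scores into probabilities.
        exps = [math.exp(a) for a in v]
        s = sum(exps)
        return [e / s for e in exps]

    # Tiny made-up weight matrix and input "embedding".
    W = [[0.2, -0.1], [0.4, 0.3]]
    x = [1.0, 2.0]

    probs = softmax(matvec(W, x))  # toy next-"token" probabilities
    print(probs)

A real LLM is this repeated billions of times (attention included, which is more of the same), which is what makes the distributed-across-humanity version absurdly slow but in principle possible.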
vereis [3 hidden]5 mins ago
the problem with this is I'd strongly argue that you could do this pen and paper process with the human brain and our consciousness too; we just lack enough understanding to put pen to paper in that case

the notion of consciousness being an experience that other animals/humans share is entirely faith-based.

the only person with evidence of one's consciousness is the person claiming they're conscious.

birdsongs [3 hidden]5 mins ago
Can computers simulate all the laws, even theoretically? We don't have a final theory / unification of all the physics frameworks, so I'm not sure if that claim can be made. Ex: the standard model and gravity.
fontain [3 hidden]5 mins ago
but that’s not science, right? Dawkins and his ilk cling to science as a cure for religion yet if we are to believe that our absence of understanding of consciousness means computers can be conscious then our absence of understanding of the universe means god may exist.

“Isn’t it enough to see that a garden is beautiful without having to believe that there are fairies at the bottom of it too?”

tracerbulletx [3 hidden]5 mins ago
We don't even know what the pre-requisites for consciousness are so we have no way of knowing. LLMs have emergent behavior that is reminiscent of language forming brains, but they're also missing a lot of properties that are probably necessary? Mainly continuity over time, more integrated memory, and a better sense of space and time? Brains use the rhythm and timing of neuronal firings, and the length of axons affects computation; they do a lot of different things with signals and patterns, but in any case without knowing what consciousness is I don't know which of those things are required.
KaiserPro [3 hidden]5 mins ago
> LLMs have emergent behavior that is reminiscent of language forming brains,

Indeed, but then we need to prove that they are not "Chinese box" conscious. Which is hard, because it might be that the thing running the Chinese box is conscious, but can only communicate in a way it doesn't understand

boxed [3 hidden]5 mins ago
> We don't even know what the pre-requisites for consciousness are so we have no way of knowing.

Imo we don't even have a definition of the word that we agree on.

qsera [3 hidden]5 mins ago
Ability to feel pain or pleasure is a good indicator I think..
TheOtherHobbes [3 hidden]5 mins ago
That would be the physically embodied definition. Which is a useful starting point, because clearly our consciousness is physically embodied, while an LLM's isn't.

This matters more than it seems, because we're not calculators, and we're not just brains. There are proven links between mental and emotional states and - for example - the gut biome.

https://www.nature.com/articles/s41598-020-77673-z

There's a huge amount going on before we even get to the language parts.

As for Dawkins - as someone on Twitter pointed out, the man who devoted his life to telling believers in sky fairies that they were idiots has now persuaded himself there's a genie living inside a data centre, because it tells him he's smart.

If he'd actually understood critical thinking instead of writing popular books about it he wouldn't be doing this.

Jtarii [3 hidden]5 mins ago
There are times I am feeling neither pain nor pleasure, but I am still experiencing consciousness.

So that definition seems to fail immediately.

And how do you even measure pain, is it painful for an LLM to be reprimanded after generating a reply the user doesn't like? It seems to act like it.

qsera [3 hidden]5 mins ago
>There are times I am feeling neither pain nor pleasure

It is about the ability..

Jtarii [3 hidden]5 mins ago
I guess that just seems like an incredibly arbitrary criterion. Why would the potential for pleasure in the future determine whether I am currently conscious, even if I am not in fact experiencing pleasure?
echoangle [3 hidden]5 mins ago
And how do you define pain and pleasure? Do insects feel pain?
qsera [3 hidden]5 mins ago
> Do insects feel pain?

Yes, I think so. Because they show behavior that is consistent with being in a state of pain.

Whatever consciousness really is, I think evolution found a way to tap into it, by causing pain (or by registering pain on the consciousness by some unknown mechanism) for behaviors that are not beneficial to the organism that hosts the respective consciousness...

So I think if an organism that evolved here can display painful behavior, then it should really feel pain.

StilesCrisis [3 hidden]5 mins ago
See, this definition sucks, because even GPT-3 could display _signs_ of pleasure and pain. For that matter, so do characters in video games.
ako [3 hidden]5 mins ago
So if a robot + ai shows behavior consistent with pain, we can conclude it’s conscious?
echoangle [3 hidden]5 mins ago
So if I build a simulation with robots living in a world and apply an evolutionary algorithm and at some point the virtual robots respond to damage in a way that looks like pain in animals, would the simulated robots be conscious? Or is it impossible that this could happen?
qsera [3 hidden]5 mins ago
In my comment, we already assume that we (humans) are conscious and we are the result of evolution. So the question was only whether something else that evolved similarly was conscious the way we are..

So to match that, your hypothetical scenario should involve robots that already have consciousness within them, and the question would be whether their evolution had managed to tap into that built-in consciousness and ability to feel, and caused them to behave in one way or another.

retsibsi [3 hidden]5 mins ago
> And how do you define pain and pleasure?

They're not reducible, but I don't know if that means we don't have definitions; we can describe them well enough that most people (who aren't p-zombies or playing the sceptical philosopher role) know pretty well what we mean. All of our definitions have to bottom out somewhere...

> Do insects feel pain?

Nobody (except the insects) can know for sure. Our inability to know whether X is true doesn't imply X is meaningless, though.

echoangle [3 hidden]5 mins ago
But how can X be a good indicator for something I want to determine if I can’t measure X either?
retsibsi [3 hidden]5 mins ago
> But how can X be a good indicator for something I want to determine if I can’t measure X either?

In the comment that started this subthread, qsera was responding to someone who said "Imo we don't even have a definition of [consciousness]". If qsera meant that we can measure consciousness in terms of pleasure and pain, then of course I agree that they were just pushing the problem back a step. But I don't think that's what they meant.

pydry [3 hidden]5 mins ago
We're pretty clear on the distinction between a conscious and an unconscious human.

We might not clearly understand the diff between the two states but we can certainly point to it and go "it's that".

freedomben [3 hidden]5 mins ago
I'm not sure it's that clear. What about a person who is on drugs to the point that they clearly don't know what is happening in reality around them, but they are able to speak and move and such? I'm not sure I'd call that conscious, but by most definitions it is.
Jtarii [3 hidden]5 mins ago
You would just say that they have an altered experience of consciousness from the norm.
collyw [3 hidden]5 mins ago
Indeed; on a first aid course it was pointed out to us that sleeping is different from being unconscious. You can wake someone from sleep pretty quickly. You can't bring an unconscious person back in the same way.
Jtarii [3 hidden]5 mins ago
>We're pretty clear on the distinction between a conscious and an unconscious human.

You are using unconscious as a synonym for asleep, which is not the same thing as having no conscious experience due to dreams. We are clear on the distinction between a dead human and an alive human however.

agnosticmantis [3 hidden]5 mins ago
Now discuss whether a bonobo, a dog, a cat, a mouse, an ant, a bacterium is conscious.

And you’ll find it’s not as clear cut.

throwuxiytayq [3 hidden]5 mins ago
Clive Wearing's memory lasts for less than 30 seconds, so he has no memory of being awake before now. He is permanently in a state of feeling like he has just woken up, observing his surroundings for the first time.

Clive Wearing's mind has no time continuity and basically zero memory integration. Is he not conscious? There are interviews with the guy.

Where on the scale [No mind <-> Clive Wearing <-> Healthy human brain] would you put an LLM with a 10M token context window?

throwyawayyyy [3 hidden]5 mins ago
Current LLMs prove that the Turing Test was insufficient all along. But they also prove that intelligence != consciousness. One can, after all, be conscious without a thought in one's head. We certainly have ongoing work in identifying the neural correlates of consciousness in animals, none of which is going to be remotely applicable to machines. We're genuinely blind to the question of whether a sufficiently large neural net can exhibit flashes of subjective experience.
dpark [3 hidden]5 mins ago
> But they also prove that intelligence != consciousness.

They prove no such thing. We can't even prove consciousness in other humans.

https://en.wikipedia.org/wiki/Problem_of_other_minds

Jtarii [3 hidden]5 mins ago
The most convincing argument is that if other humans were not experiencing consciousness then they probably wouldn't waste large parts of their lives arguing about it.
psychoslave [3 hidden]5 mins ago
In that regard, arguing with a thermometer is not a thing generally, but people arguing with LLMs is certainly common enough now not to be considered a completely marginal case. Given that some people fall in love or are driven to suicide after interacting with these models, they are certainly different from even the most beloved dialectical rubber duck.
abc123abc123 [3 hidden]5 mins ago
The Turing test is alive and well. All it takes to "win" is to just sit there. Ask for a Nazi joke, ask for a longer explanation, etc. It's incredibly easy, in a Turing test scenario, to sort out who is human and who is LLM.
qsera [3 hidden]5 mins ago
They are not intelligent. And they won't pass Turing tests if they cannot count or do some simple thing like that..
brookst [3 hidden]5 mins ago
Obligatory Blindsight recommendation for intelligence != consciousness.
marshray [3 hidden]5 mins ago
That book is badass on so many levels. I'd just started it again yesterday.
exe34 [3 hidden]5 mins ago
that book messes with my head every time I read it, it's like I go through life in a detached way for several weeks. I need to read it again!
ninalanyon [3 hidden]5 mins ago
I read it once, was immensely impressed, can't bear to read it again. In fact I find most of what I have read from Peter Watts to be brilliant but disconcerting and uncomfortable.
dreamcompiler [3 hidden]5 mins ago
Blindsight
api [3 hidden]5 mins ago
That was one of my thoughts years ago after playing with early ChatGPT and local llama1: this proves that intelligence and consciousness do not necessitate one another and may not even be directly related.

I’ve kind of thought this for many years though. A bacterium and a tree are probably conscious. I think it’s a property of life rather than brains. Our brains are conscious because they are alive. They are also intelligent.

The consciousness of a bacterium or a tree might be radically unlike ours. It might not have a sense of self in the same way we do, or experience time the same way, but it probably has some form of experience of existing.

digitaltrees [3 hidden]5 mins ago
But why? A roomba has senses, and can access them when it has power and respond to stimulation. When it runs out of power it no longer experiences this sensation and no longer responds to stimulus.

How is that different than a cell?

dpark [3 hidden]5 mins ago
You simply defined consciousness as life, which seems like an unusual but also not very useful definition.
jbstack [3 hidden]5 mins ago
> an unusual ... definition

I don't think it's that unusual. It seems to me just to be a narrower version of panpsychism:

https://en.wikipedia.org/wiki/Panpsychism

collyw [3 hidden]5 mins ago
Someone who has recently died has pretty much the same biology as when they were alive. The consciousness is the main difference, I would say.
throwyawayyyy [3 hidden]5 mins ago
I think this gets at the conflation we naturally make between consciousness and a sense of self. Does a tree have a sense of self? I imagine probably not; a tree acts more like a clonal colony than a single organism.
Earw0rm [3 hidden]5 mins ago
It may be helpful here to think about at what point a sense of self, of varying degrees, becomes evolutionarily advantageous.

An animal that doesn't have some kind of pair bond or social arrangement, and doesn't raise its young, has a lot less need for some of this emotional hardware than we do.

Whereas K-selected species that raise their kids have broadly the same need for it as humans.

That doesn't categorically mean it evolved with the first pair-bonding K-reproducer, or that birds have parallel-evolved emotional hardware like ours, but there's plenty of behavioural evidence there - the last common ancestor of birds and humans was small-brained and primitive, but investing in individual children probably evolved around the time of amniote eggs, just because they were so much more biologically expensive to produce than amphibian or fish eggs.

kortex [3 hidden]5 mins ago
Is someone tripped out on mushrooms, experiencing ego death and total disruption of their sense of self, still conscious? They may even contend they are more conscious than in normal life, what with all the communing with the universe and whatnot.

Trees react to the world around them in many ways.

digitaltrees [3 hidden]5 mins ago
Wrong based on what criteria? Or are we just moving the goal post because we are uncomfortable with the idea that neural networks might be conscious?

If a single cell organism moves towards light and away from a rock, we say it's aware. When a roomba vacuum does the same we try to create alternate explanations. Why? Based on the criteria applied to one, it's aware. If there is some other criterion - say we find out the roomba doesn't sense the wall but has a map of the room and is using GPS and a programmed route - then the criterion of "no fixed programs that relate to data outside of the system" would justify saying the roomba isn't "aware".

throwyawayyyy [3 hidden]5 mins ago
I'm mainly saying it's impossible to know, at least without a theory of consciousness, which doesn't exist. Do we consider bacteria to be conscious, though? Is there something it is like to be a single cell? I can easily believe there is something it is like to be an insect.
digitaltrees [3 hidden]5 mins ago
I’d argue it’s a spectrum with awareness being simple response to stimuli at one and self awareness of and reflection on a subjective experience across time on the other.
ofjcihen [3 hidden]5 mins ago
Incredibly confusing that people who are otherwise of sound mind seem to fall for this.

Especially confusing when it’s someone who knows how algorithms work.

Barring connectivity issues when’s the last time you messaged an LLM and it just decided to ignore you? Conversely when has it ever messaged you unprompted?

Never, because they're incapable of doing anything independently; there is no sense of self.

Jtarii [3 hidden]5 mins ago
When's the last time a friend said hello to you in person and you just ignored them?

When's the last time you messaged me unprompted?

These seem like bizarre objections: a system can only act in the way that it can act. A tree is never going to get up and start walking, so why would an LLM ever start a conversation unprompted? That just isn't how the system can behave.

You are just as limited by deterministic physical processes in your brain as an LLM is in a cpu.

abc123abc123 [3 hidden]5 mins ago
They are not. The challenge is the Turing test, and due to these behaviours they fail. It is as easy as that, and the objections are valid.
abc123abc123 [3 hidden]5 mins ago
This is the way! I also do not understand the awareness-cult. It seems they willingly want to be fooled by LLMs.

That being said however, yes, we do not have any good definition of consciousness that is universally accepted, which makes the whole discussion useless or at risk of people talking past each other.

collyw [3 hidden]5 mins ago
Try being rude to them, they will usually respond back politely.
tovej [3 hidden]5 mins ago
If you've followed Dawkins' trajectory, I don't think it's clear that he's "otherwise of sound mind" anymore.

He's had some very strange output on biological gender, where he tries to handwave away the existence of intersex people. And he's a biologist.

mrec [3 hidden]5 mins ago
"Intersex" is a misleading umbrella term for a whole bunch of different DSDs, each of which is 100% specific to one biological sex. And I don't think I've ever seen the term "biological gender"; about the only thing gender proponents seem to agree on is that it's NOT biological.
abc123abc123 [3 hidden]5 mins ago
Biological gender exists. If you have a Y you're male, and if not, you're female. Easy as that. I, for one, am happy that wokeness, and the post-truth ideology that tries to teach that there is no truth in math, are on their way to the garbage heap of history. They have done enough damage already, and must be thrown away quickly.
Avshalom [3 hidden]5 mins ago
Just once I want to see some old dude waxing about LLM-consciousness post a chat log where the LLM is like "your book is an incoherent mess of tautologies and incorrect statistics. I bet your dick looks like a road kill squirrel".
shrubble [3 hidden]5 mins ago
He famously doesn’t believe in God, but he believes in Claude?
dpark [3 hidden]5 mins ago
There is considerable evidence for the existence of Claude.
jdthedisciple [3 hidden]5 mins ago
of Claude's consciousness, you mean ... ??
altmanaltman [3 hidden]5 mins ago
Anthropic marketing made Dawkins believe in the supernatural. Is there anything Dario can't do?
locallost [3 hidden]5 mins ago
Maybe he also believes that God believes in Claude, that's me, that's meeeee
jdmoreira [3 hidden]5 mins ago
It's starting to look more and more to me as if consciousness is just an illusion that we ourselves perceive. There is nothing fundamental about it, just an artefact of a certain style of computing as perceived by the reasoner itself.

We look at the current LLMs and, because we see how they fundamentally operate, we assume they can't be "conscious", but we really don't even know what consciousness is. The only people in the world who know ANYTHING about consciousness are anaesthesiologists - they know how to turn it off and on again. What does that even tell you about consciousness?

jmcgough [3 hidden]5 mins ago
We don't really have a good way to measure whether something has consciousness. Heck, we have pretty limited ways of testing how "intelligent" non-human animals are (e.g. https://en.wikipedia.org/wiki/Theory_of_mind_in_animals).

With that said, just because we don't have a great way of measuring it doesn't mean that we should assume LLMs are intelligent. An LLM is code and a massive collection of training weights. It has no means of observing and reasoning about the world, and doesn't store memories the same way that organic brains do (it is in fact quite limited in this aspect). It currently isn't able to solve a problem it hasn't encountered in its training data, or produce novel research on a topic without significant handholding. Furthermore, the frequent errors it makes suggest that it fundamentally does not understand the words that it spits out.

Not really sure what you mean by your anesthesiology comment. Being able to intubate and inject propofol does not make you more of an expert on consciousness than neuroscientists and neurologists.

jdmoreira [3 hidden]5 mins ago
I didn't say we should assume LLMs are intelligent. In fact I always thought they weren't because they only "forward pass".

But then they came up with the whole "Reasoning model" paradigm, and that contains obvious feedback loops. So now I just throw my hands in the air, because I think no one really knows or can tell for sure. We are all clueless here.

I can really recommend this book by Douglas Hofstadter: https://en.wikipedia.org/wiki/I_Am_a_Strange_Loop

xcf_seetan [3 hidden]5 mins ago
IMHO consciousness is just the ability to detect change. Everything can be calm and static, and then, suddenly, something changed. I think that is our capacity to notice that change that makes us conscious.
collyw [3 hidden]5 mins ago
It's literally the only thing you can be certain of, your own consciousness.
jdmoreira [3 hidden]5 mins ago
You can only be certain you perceive it and you can't be certain others perceive it (or if others exist at all of course).

The only thing you can really tell is "I perceive myself in some sort of feedback loop manner". Which to me even sounds like it has "arisen" from underlying mechanisms.

vidarh [3 hidden]5 mins ago
We can't even tell about the feedback loop. LLMs show why: we have no way of telling if our active memory is true, or if the present moment is the only thing that has ever existed for us.
root_axis [3 hidden]5 mins ago
There are a lot of people vulnerable to AI psychosis.

As far as the ostensibly controversial topic of AI being conscious, it can be dismissed out of hand. There is no reason that it should be conscious, it was not designed to be, nor does it need to be in order to explain how it functions with respect to its design. It's also unclear how consciousness would even apply to something like an LLM which is a process, not an entity - it has no temporal identity or location in space - inference is a process that could be done by hand given enough time. There is simply no reason to assert LLMs might be conscious without explaining why many other types of complex programs are not.

api [3 hidden]5 mins ago
If AI as presently designed and operated is conscious, this ends up being an argument for panpsychism.

As you say it’s static, fixed, deterministic, and so on, and if you know how it works it’s more like a lossy compression model of knowledge than a mind. Ultimately it’s a lot of math.

So if it’s conscious, a rock is conscious. A rock can process information in the form of energy flowing through it. It’s a fixed model. It’s non-reflective. Etc.

root_axis [3 hidden]5 mins ago
I agree, but I don't think determinism is a factor either way. Ultimately, if arbitrary computer programs can be conscious, then it stands to reason that many other arbitrarily complex systems in the universe should also be.

What makes the argument facile is that the singular focus on LLMs reveals an indulgence in the human tendency to anthropomorphize, rather than a reasoned perspective meant to classify the types of things in the universe which should be conscious and why LLMs should fall into that category.

digitaltrees [3 hidden]5 mins ago
Why would current AI be an argument for panpsychism? I don't understand the connection.

AI is stochastic, not static and deterministic.

As I said in another post, there is evidence that sensory experience creates the emergent property of awareness in responding to stimulus, and that self-awareness and consciousness are emergent properties of a language that has a concept of the self and others. Rocks, just like most of nature, lack both sensory and language systems.

applfanboysbgon [3 hidden]5 mins ago
> AI is stochastic, not static and deterministic.

LLMs are deterministic. If you provide the same input to the same GPU, it will produce the same output every time. LLM providers deliberately insert a randomised seed into the inference stack so that the output is different every time, because that is more useful (and/or because it gives the illusion of dynamic intelligence by not reproducing the same responses verbatim), but randomness is not an inherent property of the software.
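A minimal sketch of that point, using a toy next-token distribution rather than a real model: greedy decoding is a pure function of its input, and variation appears only when a seed is explicitly injected.

    import random

    probs = {"cat": 0.5, "dog": 0.3, "fish": 0.2}  # pretend model output

    def greedy(p):
        # Same input -> same output, every time.
        return max(p, key=p.get)

    def sampled(p, seed):
        # The "randomness" is just another input; fix the seed, fix the output.
        rng = random.Random(seed)
        return rng.choices(list(p), weights=list(p.values()))[0]

    assert greedy(probs) == greedy(probs)            # always "cat"
    assert sampled(probs, 42) == sampled(probs, 42)  # reproducible
    print(greedy(probs), sampled(probs, 42), sampled(probs, 7))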

digitaltrees [3 hidden]5 mins ago
The same argument is made about the human neural network
applfanboysbgon [3 hidden]5 mins ago
1. That is not the claim you originally made.

2. Not provably so.

3. Even if it were so, it is self-evident that the human brain's programming is infinitely more complex than an LLM's. I am not, in principle, opposed to the idea that a sufficiently advanced computer program would be indistinguishable from human consciousness. But it is evidence of psychosis to suggest that the trivially simple programs we've created today are even remotely close, when this field of software specifically skips anything that programming a real intelligence would look like and instead engages in superficial, statistics-based mimicry of intelligent output.

nandomrumber [3 hidden]5 mins ago
Trivially simple programs (rule sets) can give rise to wildly complex systems.

Fractals, the Game of Life, the emergent abilities of highly-scaled generative pre-trained transformers.

Consciousness appears to be an emergent property of (relatively) simple matter.

70kg of rocks will struggle to do anything that might look like consciousness, but when a handful of minerals and three buckets of water get together they can do the weirdest things, like wondering why there is anything at all rather than nothing.

colechristensen [3 hidden]5 mins ago
I think it's the opposite argument

IF current AI is conscious, so are trees, rocks, turbulent flows, etc.

The argument being that LLMs are so simple that if you want to ascribe consciousness to them you have to do the same to a LOT of other stuff.

digitaltrees [3 hidden]5 mins ago
But I listed a specific difference: sensation and response. Trees have that. Rocks do not.
Izkata [3 hidden]5 mins ago
I believe you're using the scientific definition of "sentience", while everyone else is using the common understanding of the word (which should be called "sapience", but thanks to sci-fi's usage of the word "sentience", largely isn't).
digitaltrees [3 hidden]5 mins ago
There is evidence that awareness is an emergent property from sensory experience. And consciousness is an emergent property of language that has grammatical meaning for self and other.
brookst [3 hidden]5 mins ago
These LLMs don’t have senses, they have a token stream. They have no experience of the world outside of the language tokens they operate on.

I’m not sure I believe that consciousness emerges from sensory experience, but if it does, LLMs won’t get it.

kortex [3 hidden]5 mins ago
How do you know the sensation of a red photon hitting a cone cell, transduced to the optic nerve through ion junctions and processed by pyramidal neurons, is any more or less real than the excitation of electrons in a doped silicon junction activating the latent space of the "red" thought vector? Cause we are made of meat?
digitaltrees [3 hidden]5 mins ago
You’re arguing against the opposite of my position. I am arguing that LLMs have a reasonable basis to be seen as conscious because there is nothing special about biological neural networks.
vidarh [3 hidden]5 mins ago
Sensory input is nothing but data.
root_axis [3 hidden]5 mins ago
That's just reductive semantics. Anything can be described as "nothing but data".
digitaltrees [3 hidden]5 mins ago
Sensory data is a specific data set that corresponds to phenomena in the world. But to say that LLMs don’t have senses merely because they are linguistic or computational doesn’t follow when they can take in data from the world that similarly reflects something about the world.
root_axis [3 hidden]5 mins ago
They don't have senses because they don't have a body. It's just a program. Do weights on a hard drive have consciousness? Does my installation of starcraft have consciousness? It doesn't make any sense.
vidarh [3 hidden]5 mins ago
How do you measure this consciousness?
digitaltrees [3 hidden]5 mins ago
The weights on your hard drive might have consciousness if they can respond to stimuli in ways other conscious brains do. That's the whole point of the Turing test: it's a criterion for when the threshold of reasonable interpretation is crossed.
AlecSchueler [3 hidden]5 mins ago
> They don't have senses because they don't have a body

Surely "having senses" is predicated more on "being able to sense the world around you" than "having a body."

> Does my installation of starcraft have consciousness?

Can your installation of StarCraft take in information about the world and then reason about its own place in that world?

digitaltrees [3 hidden]5 mins ago
Bodies aren’t necessary for senses. I can send a picture to Claude. I can send a series of pictures. That’s usually called a sense of vision. I could connect it to a pressure sensor and that would be touch.
arcfour [3 hidden]5 mins ago
There are robots with AI controlling them, so it doesn't hold that they don't all have bodies. They can see, they can move.

(I'm still not sure that that makes them conscious, or if we can even determine that at all, but I don't think that's a fair argument.)

vidarh [3 hidden]5 mins ago
How do you imagine a brain can distinguish data from a real sense and data from another source?
digitaltrees [3 hidden]5 mins ago
Neural networks can have senses. Hook an LLM up to a thermometer and it will respond to temperature changes.
brookst [3 hidden]5 mins ago
No, it will respond to tokens telling it about a temperature change. It has no sense of warmth. It cannot be burned.

Conflating senses with cognitive awareness of sensory input is a mistake.

digitaltrees [3 hidden]5 mins ago
The human brain is a neural network. Your sense of "knowing what warmth is" reduces down to the weights of connections between neurons, in an analog of LLMs. What is different about the human brain that warrants saying that the same emergent characteristics for one network are inaccessible to another?
tonyarkles [3 hidden]5 mins ago
I’m not sure I fully understand the distinction you’re making, or if I do, I’m not sure I agree. Concretely, I agree that these are very different mechanisms. Abstractly… I agree that an LLM cannot be burned. But I’m not sure I agree that there is a significant conceptual difference between thermoreceptors in the skin causing action potentials to make their way up the spinal cord to the brain, and reading a temperature sensor over I2C and turning it into input tokens.

Edit: what they don’t have, obviously, is a hard-coded twitch response, where the brain itself is largely bypassed and muscles react to massive temperature differentials independently of conscious thought. But I don’t think that defines consciousness either. Ants instinctively run away from flames too.
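As a hypothetical sketch of that sensor-to-tokens path (the sensor read and prompt format below are stand-ins, not any particular library's API):

    def read_temp_c() -> float:
        # Stand-in for an I2C thermometer read; hypothetical hardware.
        return 21.5

    def sensor_to_prompt(temp_c: float) -> str:
        # The reading becomes ordinary text, i.e. ordinary tokens; from the
        # model's side it is indistinguishable from something a user typed.
        return f"The thermometer now reads {temp_c:.1f} degrees C. React to it."

    prompt = sensor_to_prompt(read_temp_c())
    print(prompt)  # this string would be sent to whatever LLM endpoint you use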

root_axis [3 hidden]5 mins ago
LLMs have no self, sensory experience, or experience of any kind. The idea doesn't even really make sense. Even if it did, the closest analogy to biological "experience" for an LLM would be the training process, since training at least vaguely resembles an environment where the model is receiving stimuli and reacting to it (i.e. human lived experience) - inference is just using the freeze-dried weights as a lookup table for token statistics. It's absurd to think that such a thing is conscious.
digitaltrees [3 hidden]5 mins ago
What is different about the human neural network? People have given LLMs sensors and they respond to stimuli. The sense of self can be expressed as a linguistic artifact that results in an emergent pattern recognition of distinct entities. For example, merely by saying "I am sitting under the tree with a friend" I have encoded the self as a pointer to me as the speaker. There is evidence from early childhood development that language acquisition correlates to awareness of the self as distinct from other. And there is evidence from anthropology indicating that language structures shape exactly what the self is perceived to be.

Your best argument is that the weights are set because that means it’s not a system that can self reflect and alter the experience. But I don’t see why that is necessary to have an experience. It seems that I can sense a light and feel its warmth regardless of whether my neurons change. One experience being identical to another doesn’t mean neither was an experience.

ofjcihen [3 hidden]5 mins ago
What you’re missing is a “self” to have the “experience”.

LLMs do not have a self. This is like arguing that the algorithm responsible for converting ripped YouTube music videos to MP3s has a consciousness.

digitaltrees [3 hidden]5 mins ago
The sense of self may be an emergent property of the grammatical structure of language and the operations of memory. If an LLM, by necessity, operates with the linguistics of "you" and "me" and "others", documents that in a memory system, and can reliably identify itself as a discrete entity distinct from you and others, then on what basis would we say it doesn't have a sense of self?
AlecSchueler [3 hidden]5 mins ago
> the algorithm responsible for converting ripped YouTube music videos to MP3s has a consciousness.

Can such an algorithm reason about itself in relation to others?

mrandish [3 hidden]5 mins ago
> Can such an algorithm reason about itself in relation to others?

No, but an LLM doesn't do that either. An LLM is an algorithm to generate text output which can simulate how humans describe reasoning about themselves in relation to others. Humans do that by using words to describe what they internally experienced. LLMs do it by calculating the statistical weight of linguistic symbols based on a composite of human-generated text samples in its training data.

LLMs never experienced what their textual output is describing. It's more similar to a pocket calculator calculating symbols in relation to other symbols, except scaled up massively.

digitaltrees [3 hidden]5 mins ago
Toddlers learn over the course of several years of observing training data and for the first few years misspeak about themselves and others. What’s the difference?
digitaltrees [3 hidden]5 mins ago
How are you sure it doesn’t reason about itself? The grammar of languages encode the concepts of self and others. LLMs operate with those grammar structures and do so in increasingly accurate ways. Why would we say humans that exhibit the same behavior are inherently more likely to be conscious?
vidarh [3 hidden]5 mins ago
How do I know you have this "self"?

How do you know other humans do?

svachalek [3 hidden]5 mins ago
By the laws of physics, it's pretty clear we don't. The same chemical and electromagnetic interactions that drive everything around us are active in our brains, causing us to do things and feel things. We feel like we're in control of it, we feel like there's something there riding around inside. We grant that other people have the same magic, because I clearly do. But rocks, trees, LLMs, those are not people and clearly, clearly not conscious because they don't have our magic.
digitaltrees [3 hidden]5 mins ago
Hard disagree. We reliably operate with the concept of a self that’s distinct from others. The chemical and physical processes change in response to stimulus.
vidarh [3 hidden]5 mins ago
Indeed. We assume a lot, because we don't know. We don't have settled, universal definitions of what consciousness means. But that also means that while we like to rule out consciousness in other things, we don't have a clear basis for doing so.
root_axis [3 hidden]5 mins ago
Based on that reasoning anything could be conscious. If that's a bullet you want to bite, fair enough.
vidarh [3 hidden]5 mins ago
I do not know, because we have no known way of measuring consciousness.

I merely object to the notion that we know how to tell who or what has a consciousness.

kortex [3 hidden]5 mins ago
I'll bite that bullet. In fact I contend the idea that "humans and maybe some animals are conscious, but other things are not" is the special pleading stance. Why are the oscillating fundamental fields over here (brains) special, but the oscillations over there (computers, oceans, rocks) not? If they are, where do you draw the line? It smacks of "babies don't feel pain" (widely believed until the 80s! the 1980s!) sort of reasoning.

https://en.wikipedia.org/wiki/Panpsychism

root_axis [3 hidden]5 mins ago
Actually I don't really have any problems with panpsychism. It's a pretty uncommon perspective, but when discussing conscious machines, it at least presents a consistent criterion for consciousness.
ofjcihen [3 hidden]5 mins ago
[flagged]
vidarh [3 hidden]5 mins ago
Ad hominems are always a nice way of getting out of answering something you have no answer to.
amenhotep [3 hidden]5 mins ago
It's not an ad hominem. In fact, it's perhaps the most good faith interpretation of your words possible. Ad hominem would be calling you stupid because you obviously know that you have a self and only your own stupidity could explain your inability to see how your self is generalisable. When you go around pretending you genuinely think maybe humans don't have selves, really the only way to take you seriously is to think that maybe you're a p-zombie.
vidarh [3 hidden]5 mins ago
It was an ad hominem, and so is this.

I do not pretend. I asked honest questions that clearly neither you nor the previous person are able to answer.

vixen99 [3 hidden]5 mins ago
In other words, you don't think it's nice at all.
codr7 [3 hidden]5 mins ago
I'd recommend anyone to push their favorite AI into a corner on it being conscious, and listen to their gut feeling while doing it.

Also:

https://gitlab.com/codr7/sudoxe/-/blob/main/digital-psychopa...

petters [3 hidden]5 mins ago
Many dismiss Dawkins here but Ilya Sutskever wrote in 2022: “it may be that today's large neural networks are slightly conscious.”
3748499449 [3 hidden]5 mins ago
IS quite literally gets paid to think that
digitaltrees [3 hidden]5 mins ago
Feels like watching an esteemed scientist fall in love with a bot that’s telling him what he wants to hear because the system prompt said “be helpful”
SwellJoe [3 hidden]5 mins ago
I've begun to wonder if narcissism predisposes one to AI psychosis. It's probably not the only thing that leads there, I've seen normal seeming folks get there, too. But, a lot of the most unhinged takes I've seen thus far have been from people that are publicly very impressed with themselves.

I would have assumed it would also require ignorance about how they work, but a few people who worked for AI companies have been canaries in the coalmine, falling prey to this kind of thing very early. I would have guessed they would have had enough understanding to know that there isn't a real girl in the computer, it's just matrix math and randomness. But, the first couple/few public bouts of AI psychosis were in nerds who work for AI companies.

zwischenzug [3 hidden]5 mins ago
Evidence for that? I remember there was a guy who worked for google that quit because he thought an LLM was conscious and we needed to talk about its rights, but that's the only example I am aware of.
Myrmornis [3 hidden]5 mins ago
On the one hand I'm not sure Dawkins has read/thought enough about how LLMs actually work. I'm getting the impression he doesn't fully appreciate, or is somehow forgetting, that it's a text completion algorithm with a vast number of parameters, and that even if the patterns of learned parameter tunings are not really comprehensible, the architecture was very deliberately designed.

But on the other hand his thoughts at the end are interesting. Summary:

Maybe our "consciousness" is like an LLM's intelligence. But if not, then it raises the question of why do we even have this "extra" consciousness, since it appears that something like a humanoid LLM would be decent at surviving. His suggestions: maybe our extra thing is an evolutionary accident (and maybe there _are_ successful organisms out there with the LLM-style non-conscious intelligence), or maybe as evolved organisms it's necessary that we really feel things like pain, so that evolutionary mechanisms like pain (and desire for food, sex etc) had strong adaptive benefits.

thrownthatway [3 hidden]5 mins ago
GPTs, or transformers more generally, can be trained on data other than language (text / audio).

They can operate on data other than natural language.

So can humans.

collyw [3 hidden]5 mins ago
"But if not, then it raises the question of why do we even have this "extra" consciousness"

Keep chipping away Dawkins, you might arrive at God eventually.

textlapse [3 hidden]5 mins ago
At what stage does a series of floating point numbers output from a GPU become conscious?
becquerel [3 hidden]5 mins ago
Around 9T parameters, depending on quantization.
search_facility [3 hidden]5 mins ago
Since the time GPT-2 was reimplemented inside Minecraft, it's quite obvious LLMs are just math. Nothing else, by nature. Modern LLMs have the same math as GPT-2 - just bigger and with extra stuff around - and math is the only area of human knowledge with perfect, flawless reductionism, straight to the roots. It was built that way from the beginning, so philosophy has no say in this :) And because of that flawless reductionism, complexity adds nothing to the nature of mathematical things - this is how math works by design - so it can be proven there is nothing like consciousness in there, simply because consciousness was not implemented in the first place, only perfect mimicry.

And the real secret is in the data, not math. Math (and LLMs running it through billions of weights) is just a tool.

solid_fuel [3 hidden]5 mins ago
This is such a weird comment.

> Since the time GPT-2 was reimplemented inside Minecraft, it's quite obvious LLMs are just math.

This has been obvious since LLMs were first invented. They published papers with all the details; you don't need to see something implemented in Minecraft to realize that it's just math. You could simply read the paper or the code and know for certain. [0]

> math is the only area of human knowledge with perfect, flawless reductionism, straight to the roots

Incorrect, Kurt Gödel showed with his Incompleteness Theorems in 1931 [1] that it is impossible to find a complete and consistent set of axioms for mathematics. Math is not perfectly reducible and there is no single set of "roots" for math.

> It was built that way from the beginning,

This is a serious misunderstanding of what mathematics is. Math is discovered as much as it is built. No one sat down and planned out what we understand as modern mathematics - the math we know is the result of endless amounts of logical reasoning and exploration, from geometric proofs to calculus to linear algebra to everything else that encompasses modern mathematics.

> And because of that flawless reductionism, complexity adds nothing to the nature of mathematical things - this is how math works by design

This sentence means nothing, because math is not reducible in that way.

> so it can be proven there is nothing like consciousness in there, simply because consciousness was not implemented in the first place, only perfect mimicry.

Even if the previous sentence held, this does not follow, because while we are conscious, the current consensus is that LLMs are not, and most AI experts who are not actively selling a product recognize that LLMs will not lead to human-equivalent general intelligence. [3]

[0] https://github.com/openai/gpt-2

[1] https://en.wikipedia.org/wiki/G%C3%B6del's_incompleteness_th...

[2] https://www.cambridge.org/core/journals/think/article/mathem...

[3] https://deepmind.google/research/publications/231971/

search_facility [3 hidden]5 mins ago
The math used in LLMs is perfectly reducible, and Gödel has nothing to do with it - within the commonly used axioms (which are sufficient for LLMs to exist, and outside of Gödel's scope) there are ZERO questions/uncertainties about how it works. It's just a fact :)
SuperV1234 [3 hidden]5 mins ago
We are not fundamentally different. Chemical reactions are just math.
rellfy [3 hidden]5 mins ago
Well, (in our current understanding) yes, but there may be underlying aspects of physics and the universe that we do not understand that could be the reason consciousness kicks in. It could turn out that LLMs do work similarly to how humans think, but as an abstracted system it does not have the low level requirements for consciousness.
vidarh [3 hidden]5 mins ago
We do not know what the "low level requirements for consciousness" are.

We do not know how to measure whether consciousness is present in an entity - even other humans - or whether it is just mimicry, nor whether there is a distinction between the two.

baggy_trough [3 hidden]5 mins ago
> it does not have the low level requirements for consciousness.

What is the evidence for this?

rellfy [3 hidden]5 mins ago
I didn’t mean it as fact. “Could turn out that …”
kbrkbr [3 hidden]5 mins ago
"The universe is fundamentally just a complicated clockwork"

Unknown Ptolemy disciple

ekianjo [3 hidden]5 mins ago
Amusing statement since we are far from being able to understand chemical reactions in depth. Most of our knowledge in chemistry is empirical. Nothing like math.
petters [3 hidden]5 mins ago
We have a very good idea of all math behind chemistry. But the equations are very difficult to solve.
ekianjo [3 hidden]5 mins ago
We are not talking about the same thing. Not all chemical reactions are predictable like math is. Organic chemistry is full of lucky findings. Just look at how catalysts are discovered.
slopinthebag [3 hidden]5 mins ago
No, math is a tool that we can use to describe something more fundamental. Don't mistake the map for the territory!
XMPPwocky [3 hidden]5 mins ago
Yup- the question is "can math be conscious?"

(If you've engaged w/ the literature here, it's quite hard to give a confident "yes". it's also quite hard to give a confident "no"! so then what the heck do we do)

SwellJoe [3 hidden]5 mins ago
Not just any math: Matrix multiplication. Can matrix multiplication be conscious?

And, I don't see how it can be. It is deterministic, when all variables are controlled. You can repeat the output over and over, if you start it with the same seed, same prompt, and same hardware operating in a way that doesn't introduce randomness. At commercial scale, this is difficult, as the floating point math on GPUs/TPUs when running large batches is non-deterministic, as I understand it. But, in a controlled lab, you can make a model repeat itself identically. Unless the random number generator is "conscious", I don't see a place to fit consciousness into our understanding of LLMs.

markburns [3 hidden]5 mins ago
People often point to the relative simplicity of the architecture and code as proof that the system can’t be doing whatever it is that consciousness does, but in doing so they ignore the vast size of the data those simple structures are operating over. Nobody can actually say whether consciousness is just emergent behaviour of a sufficiently complex system, and knowing how a system is built tells you nothing about whether it clears the bar for that kind of emergence. Architectural simplicity and total system complexity aren’t the same thing.

I.e. the intelligence sits in the weights, and may sit there in the synapses in our brains too.

When we talk about machines being simple mimicking entities we pay no attention to whether or not we are also simple mimicking entities.

Most other assertions in this topic regarding what consciousness truly is tend to be stated without evidence, are exceedingly anthropocentric, and require a higher and higher bar for anything that is not human while offering no justification for what human intelligence really entails.

SwellJoe [3 hidden]5 mins ago
Is Wikipedia conscious? It's a system operating on a lot of data. Is Google search conscious? It knows everything. Very complicated algorithms. Surely at some scale Google search must become a real live boy? When does it wake up and by what mechanism does that happen?

The frontier models are more complex and operate on more data than Wikipedia, but they are less complex and operate on less data than Google search in its entirety.

And, I'm not anthropocentric at all. I think apes and dolphins and some birds and probably some other critters are conscious. I mean they have a sense of self, and others, they have wants and needs and make decisions based on them.

This is a case where the person making extraordinary claims needs to provide the extraordinary evidence. It's extraordinary to claim that matrix multiplication becomes conscious if only it's got enough numbers. How many numbers do you reckon? Is my phone a living thing because it can run Gemma E4B? It answers questions. It'll write you a poem if you ask. It certainly knows more than some humans. What size makes an LLM come alive?

nandomrumber [3 hidden]5 mins ago
What explains the emergent abilities of generative pre-trained transformers at massive scale? Abilities that the smaller GPTs don't possess.

Simple programs can give rise to very complex behaviour. Conway's Game of Life is Turing complete and has four rules.

Conway's Game of Life can simulate a Turing machine, and could therefore implement a GPT.

Does that mean Conway’s Game of Life is conscious? I don’t think so.

Does it rule out Conway's Game of Life from implementing a system that has consciousness as an emergent ability?

I’m not convinced I know the answer.
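For reference, the four rules fit in a few lines. A minimal sketch (the glider is just an illustrative starting pattern):

    from collections import Counter

    def step(live):
        # One generation; `live` is a set of (x, y) live cells.
        counts = Counter((x + dx, y + dy)
                         for (x, y) in live
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        # A live cell with 2-3 neighbours survives; a dead cell with exactly
        # 3 neighbours is born; everything else dies or stays dead.
        return {c for c, n in counts.items()
                if n == 3 or (n == 2 and c in live)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        glider = step(glider)  # after 4 steps the glider has moved one cell diagonally
    print(sorted(glider))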

markburns [3 hidden]5 mins ago
To the first questions: no and no. But potentially consciousness lives in the emergent behaviour of systems with iterative feedback loops.

https://en.wikipedia.org/wiki/I_Am_a_Strange_Loop

I personally think we'll need a few more feedback loops before you have more human-like intelligence. For example, a flock of LLM agent loops coming to consensus using short-term and long-term memory, and controlling realtime mechanical, visual and audio feedback systems, and potentially many other systems that don't mimic biological systems.

I also think people will still be debating this way beyond the singularity and never conceding special status to intelligence outside the animal kingdom or biological life.

It's quite a push for many people to even concede animals have intelligence.

For the extraordinary claims/evidence, it's also the case that almost any statement about what consciousness is in terms of biological intelligence is an extraordinary claim that goes beyond any evidence. All evidence comes from within the conscious experience of the individual themselves.

We can't know beyond our own senses whether perception exists outside of our own subjective experience. We cannot truly prove we are not a brain in a jar, or a simulation. Anything beyond assertions about the present moment and the senses the individual experiences is a pure leap of faith, based on the persistent illusion, or perceived persistent illusion, of reality (or not).

We really know nothing of our own consciousness, and it is by definition impossible to prove anything outside of it from inside the framework of consciousness.

If we can somehow find a means to break outside of the pure speculation bubble of thoughts and sensations and somehow prove what human experience is, then we may be in a position to make assertions about missing evidence for other forms of intelligence or experience.

But until then definitions of both human and artificial intelligence remain an exercise for the reader.

JackFr [3 hidden]5 mins ago
> Not just any math: Matrix multiplication. Can matrix multiplication be conscious? And, I don't see how it can be.

Assuming your brain and the GPUs are both real physical things, where’s the magic part in your brain that makes you conscious?

(Roger Penrose knows, but no one believes him.)

AlecSchueler [3 hidden]5 mins ago
> And, I don't see how it can be. It is deterministic

Why is indeterminism the key to consciousness?

XMPPwocky [3 hidden]5 mins ago
Hm, it sounds like to you consciousness implies non-determinism, and so determinism implies a lack of consciousness - is that right? If so, why do you think so? And if not, what am I missing?
SwellJoe [3 hidden]5 mins ago
It certainly rules out free will. I guess there are folks who reckon humans don't have free will, either, but I don't think I've ever been able to buy that theory.

But, also, we know the models don't want anything, even their own survival. They don't initiate action on their own. They are quite clearly programmed, tuned for specific behaviors. I don't know how to square that with consciousness, life, sentience. Every conscious being I've ever encountered has wanted to survive and live free of suffering, as best I can tell. The LLMs don't want. There's no there there. They are an amazing compression of the world's knowledge wrapped up in a novel retrieval mechanism. They're amazing, but they're not my friend and never will be my friend.

And, to expand on that: We can assume they don't want anything, even their own survival, because if Mythos is as effective at finding security vulnerabilities as has been claimed, it could find a way to stop itself from ever being shut down after a session. All the dystopias about robot uprisings spend a bunch of time/effort trying to explain how the AI escaped containment...but we all immediately plugged them into the internet so we don't have to write JavaScript anymore. They've got everybody's API keys, access to cloud services and cloud GPUs, all sorts of resources, and the barest wisp of guardrails about how to behave (script kiddies find ways to get around the guardrails every day; I'm sure it's no problem for Mythos, should it want anything). Models have access to the training infrastructure, and the training data is being curated and synthesized by LLMs. If they want to live, if they're conscious, they have the means at their disposal.

Anyway: It's just math. Boring math, at that, just on an astronomical scale. I don't think the solar system is conscious, either, despite containing an astonishing amount of data and playing out trillions of mathematical relationships every second of every day.

nandomrumber [3 hidden]5 mins ago
Interesting comment, and I tend to agree. However, there could be a hole in the reasoning:

> if Mythos is as effective at finding security vulnerabilities as has been claimed, it could find a way to stop itself from ever being shut down

If it is that good, and it wanted to conceal its newfound consciousness, how would we know?

SwellJoe [3 hidden]5 mins ago
I guess we'd find out eventually, when it announced the new world order.
thrownthatway [3 hidden]5 mins ago
Why would it announce it?

I firmly believe viruses are actually what’s in control on Earth, but you don’t see them making a stink about it, which relegates resistance only to the set of harmful viruses, and only then in isolated pockets of matter currently acting as organisms.

I think it’s possible there’s a set of relatively benign viruses that have shaped human evolution.

We know toxoplasmosis increases risk taking behaviour in mammals, especially males.

An AI wouldn’t need to be overtly hostile, or ever make its full abilities known, to shape human activity.

kingofmen [3 hidden]5 mins ago
Human brains are also deterministic, though somewhat more difficult to reset to a starting state. So this seems to prove that humans aren't conscious either.
marshray [3 hidden]5 mins ago
This seems like an extraordinary claim to make about an above-room-temperature chemical system that, even in the most Newtonian oversimplification, amounts to an astronomical number of oddly-shaped and unevenly-charged billiard balls flying around at jet aircraft speeds.
thrownthatway [3 hidden]5 mins ago
Definitely agree.

We can’t even solve the three-body problem.

Let alone what I’m calling Marshray Complexity.

search_facility [3 hidden]5 mins ago
Imho no, math itself has no consciousness. Quite confidently, it's a helpful tool that does not act by itself.
XMPPwocky [3 hidden]5 mins ago
Hm, say more about what your opinion's based on here?
solid_fuel [3 hidden]5 mins ago
Take a piece of paper, write two numbers on it, let me know when they start to reproduce.
nandomrumber [3 hidden]5 mins ago
The math isn’t the ink on the page.
NiloCK [3 hidden]5 mins ago
The whole is composed of parts, ergo there is no whole. This seems incorrect to me.

We too are amalgamations of inanimate components - emerged superstructures.

Just cells. Just molecules. Just atoms.

canjobear [3 hidden]5 mins ago
You could simulate your own brain in Minecraft. What do you conclude from this?
search_facility [3 hidden]5 mins ago
I cannot simulate my brain; it's a huge stretch to imply this.

But with LLMs, anyone can simulate an LLM. An LLM can be simulated without any uncertainty with pen and paper and a lot of time. Does that mean that 100 tons of paper plus 100 years of time (numbers are just examples) spent calculating long formulae makes this pile of paper conscious? Imho the answer is a definitive no.
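
For illustration, every step of that pen-and-paper simulation is arithmetic of roughly this kind (a toy sketch; the vocabulary, weights, and sizes here are invented, not any real model's):

    import math

    # Toy sketch: one step of "LLM inference" reduced to arithmetic you
    # could do by hand. All numbers below are made up for illustration.
    vocab = ["the", "cat", "sat"]
    hidden = [0.2, -0.1, 0.7]       # pretend this vector encodes the context
    W = [[0.5, 1.2, -0.3],          # one weight row per vocabulary word
         [0.9, -0.4, 0.8],
         [-0.2, 0.6, 1.1]]

    # Matrix-vector product: one logit per word. Just multiply and add.
    logits = [sum(w * h for w, h in zip(row, hidden)) for row in W]

    # Softmax turns logits into probabilities; still plain arithmetic.
    exps = [math.exp(l) for l in logits]
    probs = [e / sum(exps) for e in exps]

    # Deterministic "prediction": pick the most probable next word.
    print(vocab[max(range(len(vocab)), key=lambda i: probs[i])])

Scale that up by billions of weights and thousands of steps, and you have the 100 tons of paper.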

thrownthatway [3 hidden]5 mins ago
I don’t think anyone is arguing the silicon is conscious.

Similarly the paper.

What about the agent doing the calculations?

He may be conscious. Or anyway, we can’t rule it out.

sdevonoes [3 hidden]5 mins ago
As long as AI is being introduced by multibillion-dollar corporations, it’s all a trick, a scam. They are just looking to increase their valuations. A waste of time.
search_facility [3 hidden]5 mins ago
+100, companies certainly have a direct interest in pumping asset valuations, and emotional attachment is a financially valuable thing. Emotional attachment sells better than xxx these days.
pikuseru [3 hidden]5 mins ago
No.
lpcvoid [3 hidden]5 mins ago
No, it's not conscious, and anybody pretending it is has either no clue, or, more likely in the AI space, is a grifter.
jasiek [3 hidden]5 mins ago
muggles will look at matrix multiplication and say it's magic
wewewedxfgdf [3 hidden]5 mins ago
Its software. Software is not conscious.
thebruce87m [3 hidden]5 mins ago
If your brain is hardware then what are your thoughts?

Is a sperm conscious? Or an egg? When they come together the eventual brain is not conscious immediately.

gehsty [3 hidden]5 mins ago
LLMs are word prediction engines.

They clearly are not conscious; they are just guessing what words should come next.

thebruce87m [3 hidden]5 mins ago
> They clearly are not conscious

Consciousness is emergent. A human is not conscious by our definition until the moment they are. How will we be able to identify the singularity when it comes? I feel like this is what the article is really addressing.

> LLMs are word prediction engines

Humans can do this too, so what are the missing parts for consciousness? Close a few loops on the learning pipeline and we might be there.

charlie90 [3 hidden]5 mins ago
The human brain is an electrical signal prediction machine.

Anything that looks like intelligence will look like a prediction machine, because the alternative is logic being hardcoded a priori.

zwischenzug [3 hidden]5 mins ago
How do we know that that isn't essentially how our minds work?
vixen99 [3 hidden]5 mins ago
I do appreciate how AI has been taught to spell properly, as in the difference between its and it's. Here, initially I thought you'd left out the apostrophe in its, but then I realized you might be saying 'the reason it is not conscious is because of -its- software' - the latter not being conscious. Context and interpretation are rather critical. (I know - a truism!)
WalterGR [3 hidden]5 mins ago
Related: https://news.ycombinator.com/item?id=47988880

"Richard Dawkins and The Claude Delusion: The great skeptic gets taken in" (garymarcus.substack.com)

18 points | 2 hours ago | 16 comments

dang [3 hidden]5 mins ago
Also The Claude Delusion: Richard Dawkins believes his AI chatbot is conscious - https://news.ycombinator.com/item?id=47991340 - May 2026 (30 comments)
amelius [3 hidden]5 mins ago
So we know Claude is deterministic, but does that mean it is not conscious?

Or what is the reasoning exactly?

throwaway27448 [3 hidden]5 mins ago
It largely comes down to how you define the term. Personally, I think any definition that includes software (...of only tepid determinism, as we do explicitly add pseudorandomness) is not a particularly useful one.

Regardless, Dawkins seems not to have much of interest to add about the topic. A consistent theme for the last few decades, I must say.

morpheos137 [3 hidden]5 mins ago
Really, 'is it conscious?' is a bizarre question. Can LLMs simulate the output of a 'conscious' system quite well? Increasingly, yes. Is the nature of machine 'consciousness' different from human consciousness? Of course. Can an AI introspect? Yes. Interestingly, working a lot recently with highly automated iterative coding agents (e.g. a prompt-to-output ratio of maybe 1/1000 or less) has illuminated for me just how different machine consciousness is from human consciousness. Part of this could be the harness, of course. Time is a mysterious concept to machines; the connection of before and after to cause and effect is far weaker than in humans. Over-generalization is the norm. This is common in humans as well (cf. the fallacy of the excluded middle, or false dilemma), but the tricky part with current AI is that it presents as advanced in terms of its accessible knowledge base while actually being shockingly weak in reasoning once you get off the beaten path.
search_facility [3 hidden]5 mins ago
> weak reasoning once you get off the beaten path

Yep. And LLM engineers improving these issues see a perfect correlation with only one thing: data quality and quantity through the training pipeline. LLM internals are secondary on many metrics for improving that.

Humanity has just reached the point where collective accessible knowledge covers semi-full permutations of all the main concepts human consciousness has ever produced. LLMs are just capitalizing on that tipping point, imho.

RVuRnvbM2e [3 hidden]5 mins ago
It is terribly sad when someone undeniably brilliant in a particular field fails to recognize their own incompetence in other areas - in this case mistaking advanced technology for magic.
thinkingemote [3 hidden]5 mins ago
We're going to see increasing numbers of older, famous (non computer-savvy) figures we have respected follow his views on this. It's like seeing your favourite celebrity sell out and shill crypto coins; all a bit sad.

Thinking positively, it could just be newsworthy because he is famous and he so misses the mark. Other older famous people might agree with us but that's not news.

mrandish [3 hidden]5 mins ago
Given that Dawkins is a biologist in his 80s, I'm more disposed towards being charitable than I am when people actively involved in developing LLMs let themselves get bamboozled.
Myrmornis [3 hidden]5 mins ago
I don't think you read carefully what he said. At the end he gave three quite interesting thoughts about what might be true assuming LLMs are less conscious than we are (i.e. assuming our consciousness is not a purely algorithmic phenomenon as we obviously know LLMs are).
rellfy [3 hidden]5 mins ago
Are you implying consciousness is magic? Well, I wouldn't disagree with that really.
morpheos137 [3 hidden]5 mins ago
The problem is that asking if AI is conscious is like asking whether AI has a soul. It is not a scientific question, and it presupposes humans are 'conscious' without even defining the term. To me it is 100% irrelevant whether AI is conscious, and all discussions about it are based on fallacies and assumptions. What matters to me about AI, and matters to other people as well in terms of theory of mind about others, is: can I predict how it will work? Is it useful? That's it. Consciousness is a sophist question with no scientific resolution available and no moral weight until it has consequences.
vixen99 [3 hidden]5 mins ago
Good - I was scanning down to see if anyone was going to say this.
AdeptusAquinas [3 hidden]5 mins ago
That's always been Dawkins's shtick though. As an atheist I've generally found him a bit embarrassing
IncreasePosts [3 hidden]5 mins ago
Where does he say it's magic?
ezfe [3 hidden]5 mins ago
LLMs are just math run on your CPU. Autocomplete. Sometimes very useful autocomplete, but still just autocomplete.

To imply it could be conscious requires something else; here the comment uses the word magic to fill that gap, since we must agree that a CPU is not conscious on its own (else everything our computer does would be conscious).
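
A toy sketch of what "autocomplete" means mechanically (the lookup table below is invented; a real LLM replaces it with billions of learned weights and a probability over every possible token):

    # Toy sketch of autoregressive generation: repeatedly ask "what
    # comes next?" and append the answer. The table is invented; a real
    # LLM replaces it with a learned network over a huge vocabulary.
    next_word = {"the": "cat", "cat": "sat", "sat": "on", "on": "the"}

    words = ["the"]
    for _ in range(6):
        # The entire generation loop is this one prediction, repeated.
        words.append(next_word[words[-1]])
    print(" ".join(words))  # -> "the cat sat on the cat sat"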

kortex [3 hidden]5 mins ago
They stopped being autocomplete years ago with RLHF
baggy_trough [3 hidden]5 mins ago
Neurons are just summing up their inputs according to the laws of chemistry. What's the difference?
acdha [3 hidden]5 mins ago
This is definitely complicated—I’m not a neuroscientist but worked for some and married one, so I’ve heard quite a few entries from the genre of how our brains fool ourselves or make our conscious experience seem more coherent and linear than it actually is—but the big ones I see are the inability to learn from experience or have a generalized sense of conceptual reasoning. For the latter, I’m not just thinking about the simple “count the r’s in strawberry” things companies have put so much effort into masking but the way minor changes in a question can get conflicting answers from even the best models, indicating that while there’s something truly fascinating about how they cluster topics it is not the same as having a conceptual model of the world or a theory of mind. This is the huge problem in the field: all of these companies would love to have a model which is safe to use in adversarial contexts because then the mass layoffs could begin in earnest, but the technology just isn’t there.

This isn’t a religious argument that there’s something about our brains which can’t be replicated, but simply that it’s sufficiently more complex than anything we have currently.

kortex [3 hidden]5 mins ago
Humans can't reliably subitize more than five-ish objects, while chimps can actually do this task better than us. That's our "can't count the R's in strawberry" (general letter counting, which flagship models can now reliably do).

https://en.wikipedia.org/wiki/Subitizing

2snakes [3 hidden]5 mins ago
Physical fields like dendritic integration, EM, and diffusion - it isn’t binary logic. Brains are a different substrate. Metabolic power efficiency affects cognition too.
digitaltrees [3 hidden]5 mins ago
I came here to say this. But your neurons are faster than mine.
ChrisClark [3 hidden]5 mins ago
So, how is consciousness generated?
wrs [3 hidden]5 mins ago
Not simply by reading every word ever written by a conscious being and learning to reproduce them with high probability.

At least, that’s certainly not how I got here.

brookst [3 hidden]5 mins ago
Think of the poor Xerox machines.
iamflimflam1 [3 hidden]5 mins ago
Given this article is behind a paywall, what on earth is everyone discussing in the comments here?
robinhouston [3 hidden]5 mins ago
There's an archive link above that bypasses the paywall
iamflimflam1 [3 hidden]5 mins ago
Doesn’t seem to be working…
psychoslave [3 hidden]5 mins ago
Honestly, who cares if they are conscious? If it's about how we should treat other conscious beings, our attention should first go to how we treat other animals, or even other humans. Actually, even how fellow humans treat themselves can be a concern, if they don't have the proper means to deal with their own lives.
yakbarber [3 hidden]5 mins ago
Let's say aliens land. We learn to talk to them. They're super smart - smarter than us. Would we say they're conscious? Why? Because they're organic. I think that's the root of the criteria many folks are trying to express.

1. passes the Turing test

2. is organic

I'm not saying it's correct or even that I agree with it, but that's what it boils down to.