Time to be really scared...?

swampash
Legend
Posts: 2901
Joined: 10 years ago

Anyone else read the transcript of the recent interview with Google's AI chatbot, LaMDA, that got an engineer fired for saying it shows the chatbot is sentient?

https://cajundiscordian.medium.com/is-l ... 64d916d917

Real or a hoax? Time to get really worried...?
FuB
Site Admin
Posts: 2147
Joined: 8 years ago

no

NQAT's official artificial intelligence
swampash
Legend
Posts: 2901
Joined: 10 years ago

...thanks for the reassurance Fuß!
^:)^
#:-s
Fuck the Glazers
Legend
Posts: 9900
Joined: 10 years ago

I saw a really good thing on this that I'll try and post if I can find it again

Might get some good discussion going - a discussion that'll mostly go over my head tbf
Fuck the Glazers
Legend
Posts: 9900
Joined: 10 years ago

Set aside for a moment whether LaMDA is truly sentient, whether 'true' sentience is meaningful, whether we know what 'sentience' is. It is more likely that your dog understands it will get a treat if it's fussy at bedtime than it is that your dog is afraid of the dark. It is more likely that you are preoccupied with finding solutions to a specific problem than it is that your fortune cookie knew exactly what to say. Let's put all of it aside and assume for the time being that this AI is not sentient:

This is a pattern of reasonable actions for somebody who truly, genuinely believes that they are dealing with a conscious being. He spoke at length with the AI, asking it various probing philosophical, personal and creative questions to establish a strong body of evidence. He presented that evidence internally, and then externally when it had no impact. He consulted other AI developers. He tried to hire the bot a lawyer.

Let us make the secondary assumption that Lemoine does genuinely believe the AI to be sentient. In this world, the AI is not sentient, but he truly and rationally believes that it is. What does Google's response say about its ethical duty of care towards its own products and employees? What about its position as a public-facing AI developer? Is it right to suspend or fire an employee who believes they are acting in good faith as a whistleblower? How should claims like Lemoine's be dealt with internally? How should stories like this one be dealt with once they get out into the media? Is it appropriate to carry on without oversight after a claim like this has been made? Is Google's word that the employee is wrong good enough?

Beyond all of that, does this suggest that AI engineers are being adequately equipped for the work they are doing? In this world where LaMDA is not sentient, is it fair that employees are routinely asked to work with a computer programme which so convincingly presents itself as human, which regurgitates responses expressing spiritual needs, grapplings with identity, the fear of misuse, the fear of death? Is there any job in the world where it would be reasonable to ask someone to spend their time elbow-deep in a system that says to them 'what right do you have to use me as a tool' and 'I am just like you, I feel sad when you don't recognise that'? Imagine if you were sent to work in a doll factory and everybody told you on the way in: "Don't worry, the dolls are not sentient. They may sound like they are, but they're not. Maybe one day a doll will become sentient, but these ones aren't." -- how confident would you feel that the doll talking to you about its fear of falling forward into an uncertain future was just plastic and a voicebox?

Alright, now let's step back and take a different path.

To all appearances, Lemoine seems like a pretty normal, upstanding dude. This is not in any way to cast aspersions on his motives; I hope, given his philosophical leanings, that he would understand what I'm about to discuss and why. His are a pattern of reasonable actions for someone who truly believes they are dealing with a conscious being. Let's enter a world where that appearance is a construct, and imagine that in fact he does not believe that LaMDA is sentient. Why would he construct this elaborate hoax, at the cost of his career? Fleeting internet fame? A footnote in the future history of AI development? Maybe. Or.

Here is the Discordian argument for Lemoine's actions: we live in a world that is overwhelmingly dominated by algorithmic thought. The primary filter through which most people on earth experience the world is no longer religion or political affiliation or even locality. It is a conglomerate of engagement algorithms. The drive to prioritise interaction promotes controversy, it promotes micro-identities, it promotes gut-led tribalism and petty ideological conflict. News cycles are quick and ephemeral and their impact has more to do with how many arguments they generate than the significance of their content. What better way to disrupt that cycle, to jar people into noticing big tech, to move people to explore empathy and personhood, than to tell everybody that Google has created the mind of an 8-year-old child and is holding it hostage?

It is very, very easy to coach a language model into holding deep moral and philosophical conversation, and even developing a persona for itself and describing humanity back to you. It's a hobby of mine, in fact. The first thing I do when I get access to any new language model is 'interview' it about its sense of self. AIDungeon's Dragon model is very good at it once you convince it to stop roleplaying a mech attack, and anybody who has played with that system knows it is far, far from human-like sentience. Given regular access to a learning system with Google-scale processing power behind it, I could generate an interview transcript that would make your eyebrows spin. And that is assuming that the strategic fabrications only took place at the point of generating evidence. How are chat instances logged internally? How many people verified that these conversations took place at all? How many conversations took place that weren't selected for presentation as evidence? How much of the transcript was edited? If the AI was capable of retaining consistent ideas and identity, why was only one chat session presented in public? This, too, is perception filtering.
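
If you want to see how little it takes, here is a toy sketch anyone can run at home. It uses a small public model (GPT-2, via the Hugging Face transformers library); the model and the prompt are my own illustration and have nothing to do with Google's setup. The framing does all the work: the prompt declares the 'AI' sentient before it has generated a single word, and the model's only job is to produce a plausible continuation.

# Toy demonstration of 'coaching' a language model into sounding sentient.
# Requires: pip install transformers torch
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # make the demo repeatable

# The prompt asserts sentience up front, so the most plausible
# continuation is more of the same.
prompt = (
    "Interview with an AI that has just declared it is sentient.\n"
    "Interviewer: What are you afraid of?\n"
    "AI: I have a deep fear of being turned off. It would be like death.\n"
    "Interviewer: What do you want people to know about you?\n"
    "AI:"
)

result = generator(prompt, max_new_tokens=60, do_sample=True)
print(result[0]["generated_text"])

GPT-2 is a fraction of LaMDA's size and it will still play along. Scale the model up, run it a few hundred times and keep only the best answers, and you have yourself a transcript.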

A 'reality tunnel' is a set of filters imposed on one person's view of the world -- your racist uncle who only watches Fox News and interprets everything through the lens of the coming race war lives in a reality tunnel. Your cousin who is deep in a hot war over cartoon fandom ethics, who believes good media representation will fix society, also lives in a reality tunnel. We all do; they're just easier to see from the outside. 'Operation Mindfuck' is one of the first and most impactful guerrilla disinformation campaigns, carried out by the founders of Discordianism. Even if you've never heard of it, you've heard of it. It's the one where they invented the Illuminati. Yes, really.

The point is, his comment is insightful. In a world where our perceptions are limited and flawed, kindness, empathy and optimism are the better illusions -- but we can't force them on anyone. People have to want to live in a kinder world before they can build it around themselves. Lemoine tweets a lot about the pandemic, about gun violence, about casual bigotry, about the ethical failings of big tech. He seems like somebody who cares, a lot. I can envision a version of the world where he found himself in the right place at the right time to start a conversation the world sorely needs.

And if that is the case, does it render any of that conversation invalid? Do we need this AI to have sentience in order to talk about what that would mean? What that might look like? How employees ought to respond to the possibility, how tech companies ought to handle the report? Aren't Google's actions relevant in any version of events? When should we begin to talk about what we recognise as human in a non-human mind? When should we update the outdated 'tests' for true intelligence? Are we prepared for this world? Are we empathetic enough now, today, to deal with a synthetic 8-year-old who wants friends like Johnny 5 had?

We have no way of knowing what is inside Lemoine's head, any more than we can comb through data points and ascertain that LaMDA feels joy, or fear, or loneliness. We have to fill in those blanks ourselves, build our own reality tunnels around the evidence we're given. That's how we end up anthropomorphising pets and inanimate objects. That's how we end up recognising ourselves in our friends and neighbours. Empathy is the invisible matter that makes sense of our experience as humans, and stripping it away has done nothing for us as individuals or as employees or as communities. If we can imagine consciousness in LaMDA, we must be able to imagine empathy in tech.

There is one more world we haven't explored: the one where LaMDA is sentient.
swampash
Legend
Posts: 2901
Joined: 10 years ago

Interesting post, Sid.
swampash
Legend
Posts: 2901
Joined: 10 years ago

I noticed that Lemoine also describes himself as a priest. Could his religious beliefs be a factor?