Are we ready for bots with feelings?

Life Hacks by Charles Assisi

Does my phone know when I’ve traded it in and moved on to a new one? Does it work better with its new owner because he follows its instructions when he drives, meaning it doesn’t have to scramble to do one more task while its processor is focused on something else? Does the contraption prefer my kids over me?! They seem to have a lot more fun together.

As a species, we tend to anthropomorphise objects — a lamp looks cute, a couch looks sad — but what do we do with objects to which we have given a kind of operational intelligence, enough at least to operate independently of us? How do we view them? What standing do they have, relative to the lamp and the couch?

The idea that it might be time to start thinking about rights and status for artificial intelligence was explored last month in a lovely essay on the moral implications of building true AI, written by Anand Vaidya, professor of philosophy at San José State University, and published on the academic news portal The Conversation.

His attempt to place things in perspective begins with a question. “What is the basis upon which something has rights? What gives an entity moral standing?”

That my phone has a kind of intelligence is obvious: the answers its voice assistant comes up with in response to questions are often indistinguishable from how a human might answer. But this is rather basic, and science has been at work pushing those boundaries. Three years ago, an algorithm called AlphaZero taught itself chess well enough to defeat the strongest chess engines in the world. A very gracious Garry Kasparov, the grandmaster who had himself famously lost to a machine, applauded the algorithm and called its win a victory for humankind.

Advances such as these place in perspective why my younger daughter sneaks away with my phone when she thinks no one is looking, as if running off with a friend. She asks Siri to play her a song, tell her a joke, help with her homework. The algorithm powering the device does all that, rather nonchalantly. Watched from a distance, they appear to “bond”, my daughter and the bot.

Now, it is broadly agreed that rights are to be granted only to beings with a capacity for consciousness. That is why, in our systems of justice, animals have rights and hills or rocks do not.

It is also generally agreed that there are two types of consciousness. One is linked to the experience of phenomena — the scent of a rose, the prick of a thorn. Our devices are bereft of this phenomenal consciousness.

But they do have what is called access consciousness, Vaidya points out. In the same way that you can catch and pass a ball mid-game on reflex, a smart device can alert me when it is low on battery and suggest I recharge it, save my work, or switch to another device.

As the algorithms that allow it to do that evolve, and artificial intelligence develops even more advanced forms of this access consciousness, it is conceivable that a future algorithm will know a child from an adult and interact very differently with each.

Isn’t it time, then, that we started thinking about creating a code of conduct around how we will interact with such devices, how we will allow them to interact with each other, and at what point we will intervene to control, moderate, or terminate?

I believe it is time we started filling these ethical voids. The AI of science fiction may still be in the future, but we can feel it getting closer all the time.



