• Product

Giant Death Robots

24th September, 2021

Artificial Intelligence (AI) development is progressing fast. There are AIs today that seem to come close to passing the touring test. But according to many researches working on the subject, we are still far from creating an artificial general intelligence or AGI. As the term suggests, an AGI is able to handle any given task and would therefore match or most likely surpass human mental abilities.

The estimates on when we will create AGI span between 10 and 60 years, with some questioning if we are even able to estimate it. We might as well never achieve it. But if we do, the creation of an AGI might lead to an intelligence explosion, creating a singularity. This might sound like the script for an uninspired science fiction movie, but it already evolved into a field of scientific research.

AI safety research tries to answer questions regarding the future of AI and what impact it will have on human society. You might now be thinking about large intimidating machines that will declare war on us. But that is not really what AI safety research focuses on. Machines do not need to be "evil" to pose an existential threat to humanity.
One part of AI safety is the alignment problem. In simple terms, the alignment problem describes the difficulty of providing an AI with the right goals to achieve a desired outcome.

Suppose it is a rainy Sunday and you decide to do some cleaning. When digging through your basement, you find a bottle with a genie. You get all exited, rub it ferociously, and the genie appears. In ecstatic bewilderedness you ask the genie for immortality! Your wish is granted and the genie disappears (the remaining two wishes must have been used up by someone else).

As you grow older, your health starts to deteriorate. You start wondering if that genie back then was a fraud. More time passes and things are getting worse. You begin spending most of your time in hospital. One day the doctor comes to your bed and tells you that you only have a couple of weeks to live.
But the doctor was wrong. You will never die, because you are immortal. You will live your live in never-ending sickness and misery.

You already guessed what the moral of the story is. The AI alignment problem is similar to the genie problem. AI will interpret any goal (or wish) we tell it in the most literal sense and without any implied conditions that any human would intuitively infer. That is, unless we manage to teach AI the framework of values, ethics, social guidelines and "common sense". This OpenAI article with an interesting video shows an example of AI alignment gone wrong when playing a computer game.

I will leave it to your imagination what might happen if an artificial general intelligence with superhuman all-encompassing knowledge is provided with the wrong goals. The best outcome we could hope for would probably be "42".

AI safety comprises more than just the alignment problem. Here are some links to learn more about AI safety:

René Lenoir
Product Owner Web