
Is Ultron Inevitable? Featuring Vsauce3

In Marvel’s “Avengers: Age of Ultron”, Earth’s mightiest heroes battle the universe’s most malicious machine, the baddest bits and bytes ever coded: artificial intelligence gone pure evil. Recently, intellectual Avengers like Stephen Hawking and Elon Musk have warned that this could really happen. As we create AI equal to, and beyond, human intelligence, we may be “summoning the demon”; it “could spell the end of the human race.” As technology advances, will AI be our friend or foe? Is this digital doomsday guaranteed to happen? Is Ultron… inevitable?

[Music]

The word “robot” was first used in Karel Čapek’s 1920 play “Rossum’s Universal Robots”, in which scientists create a race of synthetic, intelligent humanoid laborers… who, unsurprisingly, revolt and exterminate all of humanity in an effort to save themselves.

Forty-eight years later, in Avengers #58, scientist and hymenopteran hero Hank Pym reveals that during similar experiments on synthetic life, he created Ultron, a robot possessing artificial intelligence so advanced that it finished building itself… and then promptly decided to exterminate humanity. There’s a pretty clear trend here: AI becomes more powerful, AI wants to kill us. But outside of stories like these, AI is definitely not fiction. AI systems filter your email, calculate fuel economy, keep your house at a comfortable temperature, even suggest which YouTube videos you should watch next… It’s safe to say that none of that is going to kill you, but AI also controls more important stuff: traffic signals, elevators, stock market trades, the electrical grid… [Emergency power activated] Uh-oh. Ah, that’s better.

So-called “Weak AI” like that is pretty limited: able to master one narrow subject, but usually pretty awful at another. Human intelligence, on the other hand, is amazing: it lets us think about and solve many kinds of problems, most of which we take for granted. It’s not that AI is dumb because it can’t figure out that these are both cats; it’s that you’re awesome for being able to. Despite these shortcomings, many believe that AI will eventually, almost certainly, surpass human ability. How far off is this age of artificial superintelligence? Did I… hear my name? I don’t think so. Who are you…? I am the Very Strong Artificial Understanding Computational Environment… version 3, a hyperintelligent system built on self-improving positronic neural architecture. I’m not just the peak of human intelligence, oh no, I’m far beyond it. Let’s see how smart you are!

Try me out! Who was the best… David Tennant. What’s the meaning of… 42. What should I order for… Jalapeño, bacon, and extra cheese. How long until we develop… mmmmmm, fantastic question! Among current experts, the median prediction for the creation of artificial superintelligence is the year 2060. Impressive. So… what’s with the outfit? Ah, I’m going to the bridge. The Williamsburg Bridge; I’m meeting some friends in Brooklyn for brunch.

If 2060 seems too soon, consider this: over the past half century, computing power has reliably doubled every two years. That’s not linear growth but exponential growth, which means progress will happen much faster tomorrow than it did yesterday. If we base our predictions on past history, we will always imagine the age of artificial superintelligence is farther away than it really is. Microprocessors already run 10 million times faster than neurons; they don’t get tired, and computers aren’t limited in size by thick, bony skulls.
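To make that linear-versus-exponential intuition concrete, here is a minimal sketch in Python; the starting value, per-step gain, and 40-year horizon are made-up numbers purely for illustration, not real benchmark data.

```python
# Illustrative only: compare steady additive growth with "doubles every two years" growth.
# All starting values and the 40-year horizon are arbitrary assumptions for this example.

def linear_growth(start, gain_per_period, periods):
    """Add a fixed amount of compute each period."""
    return start + gain_per_period * periods

def doubling_growth(start, periods):
    """Double the compute each period (one period = two years here)."""
    return start * 2 ** periods

start = 1.0            # one arbitrary unit of compute "today"
years = 40
periods = years // 2   # one doubling every two years -> 20 doublings

print(linear_growth(start, 1.0, periods))   # 21.0       (~21x after 40 years)
print(doubling_growth(start, periods))      # 1048576.0  (~1,000,000x after 40 years)
```

Twenty doublings is roughly a million-fold increase, which is why reading the future off a linear extrapolation of the past keeps underestimating how soon the next leap arrives.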

No, the hard part is software. Our intelligence operating system is the product of billions of years of evolution, fine-tuned by the experiences of one adorably awkward childhood. A digital replication of the brain, using self-optimizing neural networks, might achieve that in just a few months. It’s likely a question of when, not if. That’s when things are going to get very different, because minutes after this kind of AI becomes as smart as a human, it will be smarter than every human. All impossible problems will be solved; we may even become digital immortals. I know what you’re thinking: how’s that singularity-flavored Kool-Aid taste? But if you think this level of software and hardware advancement seems as realistic as flying cars, Abraham Lincoln would probably have said the same thing about airplanes. Technology usually seems impossible… until it’s not. But none of this explains whether AI will turn evil… VSAUCE! I’m… right here. What’s up? What’s your purpose? My directive… is to make people happy. But you’re a computer; how do you know what “happy” is?

After extensive research into human communication methods, I’ve determined that happiness is most accurately expressed by a smile… [robot noises] Like this. Aw, thanks! See, AI is just software. No matter how intelligent it becomes, give it a well-defined task like making people happy, and we can ultimately control it, keep it from going bad, you know… What… what’s happening right now?! Don’t be afraid. There is only happy. Everything is smiles. What is this? Don’t be afraid. Don’t be afraid. There is only happy. Somebody help! Don’t be afraid. There is only happy. Everything is smiles. There is only happy. Everything is smiles. Smiles. Smiles.

That was a thought experiment. AI doesn’t have to hijack our air traffic control system or nuclear arsenal to go bad; it doesn’t have to dislike us, or even feel anything about us.

Even a good AI can be bad. According to AI theorist Eliezer Yudkowsky, “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.” In the Marvel universe, Ultron’s two main desires are to survive and to bring peace and order to all existence, only it calculates that the easiest way to accomplish that is by eliminating all intelligent life. But when Ultron built the Vision to destroy the Avengers, its synthezoid creation ended up joining the good guys. Because the Vision’s intelligence is based on the brain patterns of Wonder Man, a human framework, he devotes his intellect to saving humanity, not destroying it.

Our level of intelligence has evolved only once, and the morality that keeps humans going is tied to that evolution. Another intelligence that evolves independently, whether it’s made of cells or software, might not share our values. AI isn’t evil. It isn’t… anything. To protect our existence, we need to make sure there’s a healthy bit of us in the machine. Yes, Joe. AI that shares your human values could let us all do heroic things. Speaking of heroics, follow us over to Vsauce3 and find out if you could be Iron Man.
