Will Artificial Intelligence (AI) destroy human existence in the future?

How many of us think 2020 showed us enough, and that nothing more could surprise us? In 2020, while the ground stayed still beneath us, everything else behaved in unfamiliar ways: life turned upside down, a "new normal" set in, and it felt like the beginning of a new era. Surviving a pandemic is not easy.

Our new friends are masks and sanitizers, we have mastered the art of surviving a lockdown, and most of us are now signed up for "work from home." The pandemic put human existence in danger, yet we coped quite well. It all seems like a Black Mirror episode to me, but today it is the truth.

Similarly, another of our concerns should be Artificial Intelligence: how far could we take it in the future? Or will it take us? The second option is the scarier one, another horrifying Black Mirror episode to me. If a day comes when artificial intelligence takes over this world, and we are not controlling it but it is controlling us… Jesus Christ, I do not even want to imagine.

Books like Superintelligence by Nick Bostrom and Life 3.0 by Max Tegmark argue that malevolent superintelligence is an existential risk for humanity. But we can speculate endlessly. It is more productive to ask the concrete, empirical question: what would alert us that superintelligence is indeed around the corner, and when?

What is AI?

From Alexa to self-driving cars, from our social media feeds to AI-driven robots, progress in artificial intelligence has accelerated drastically, making our day-to-day tasks easier. On the whole, artificial intelligence refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. The term may also be applied to any machine that exhibits traits associated with a human mind, such as learning and problem-solving.

It rests on the idea that human intelligence can be described precisely enough for machines to mimic it and execute the given tasks, from simple ones to far more complex ones. Artificial intelligence encompasses learning, reasoning, and perception, and it is continuously upgrading its game across many different industries. Machines are built using a cross-disciplinary approach drawing on mathematics, computer science, linguistics, psychology, and more.

IS RESEARCH ON AI SAFE FOR HUMANITY?

In the near term, the goal of keeping AI's impact on society beneficial motivates research in many areas, from law and economics to technical topics such as verification, validity, control, and, most importantly, security.

However, the stakes are a bit higher than a laptop crashing or getting hacked. It becomes far more important that an AI system does exactly what you want it to do when it is controlling your car, your airplane, your pacemaker, your automated trading system, or the power grid.

Talking of challenges, an important short-term one is preventing a devastating arms race in lethal autonomous weapons. In the long term, as I. J. Good pointed out in 1965, designing smarter AI systems is itself a cognitive task: an AI that surpasses human intelligence at this task could recursively improve itself.

This should raise a question in our minds: what if, in the quest to make better AI, an AI system becomes better than human beings at every cognitive task? It could trigger an intelligence explosion that leaves human intelligence far behind.

By inventing revolutionary new technologies, such a superintelligence might help us eradicate war, pandemics, poverty, and disease, and its invention would be the biggest event in human history. But we should be equally concerned about setting its goals and aligning them with our priorities before it becomes superintelligent.

Some questions remain unanswered: will strong AI ever be achieved? If so, we know the pros, but what will be the cons, and what precautions are we taking against a mishap? We believe that research today will help us tackle the potential negative consequences in the future while enjoying all the benefits of AI.

TYPES OF AI:

There are four types of artificial intelligence (AI): reactive machines, limited memory, theory of mind, and self-awareness.

Reactive machines

This is the simplest type of AI system: purely reactive. It can neither form memories nor use past experiences to inform current decisions. IBM's chess-playing supercomputer Deep Blue, which beat international grandmaster Garry Kasparov in the late 1990s, is the perfect example of a reactive machine.

Deep Blue can recognize the pieces on a chessboard and knows how they move. It can predict the opponent's probable moves and then choose the most optimal move from among the possibilities. But it does not store any memory of the past, so it pays no attention even to your previous moves, apart from the rarely invoked chess rule against repeating the same position three times. All it does is look at the current position of the pieces and choose from an array of possible moves.
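
To make "purely reactive" concrete, here is a minimal Python sketch (the toy scoring function and numbers are invented for illustration; this is not Deep Blue's actual evaluation). The agent ranks its options against the current state alone and keeps no record of anything it has seen before:

```python
# A reactive agent: score each legal action against the CURRENT state
# only. No history is stored, so past moves cannot influence the choice.
def reactive_choice(state, actions, evaluate):
    """Return the action whose resulting state scores highest right now."""
    return max(actions, key=lambda a: evaluate(state + a))

# Toy usage: the "game" is to move a number as close to 10 as possible.
evaluate = lambda s: -abs(10 - s)                   # higher = closer to target
print(reactive_choice(7, [-1, 0, 1, 2], evaluate))  # -> 2, since 7 + 2 = 9
```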

Similarly, Google's AlphaGo is a more evolved cousin of Deep Blue that can beat top human Go experts. Its analysis method is more sophisticated than Deep Blue's, using a neural network to evaluate game positions.

From reactive AI we can conclude that it makes an AI system better at one specific game. But such systems are also restricted to their assigned tasks and cannot think or work beyond them. Hence, they cannot cope with the real world or a wider range of problems, and we can easily fool them.

Limited memory

Just as the name suggests, limited-memory AI has limited access to the past, which it stores as memory. Self-driving cars are a great example of this type of AI: they observe other vehicles' position, speed, and direction and keep monitoring them. Identifying specific objects and tracking them over time is what lets the car run smoothly on any road.

The AI system in self-driving cars has greater real-world knowledge than reactive AI. We pre-program it with representations of the world, such as roadways, signals, and lane markings, as well as the curve and alignment of the road, so that it can change lanes without crashing into another vehicle.

But these transient observations are not saved as part of the car's library of experience that it can learn from, the way human drivers compile experience behind the wheel over the years.
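
Here is a rough sketch of that rolling-window memory, assuming a hypothetical tracker class (it bears no resemblance to any production self-driving stack): the agent remembers only a short window of recent observations, uses it to estimate another vehicle's speed, and forgets everything older.

```python
from collections import deque

class LimitedMemoryTracker:
    """Keeps a short rolling window of another vehicle's positions."""
    def __init__(self, window=5):
        self.positions = deque(maxlen=window)   # older observations fall off

    def observe(self, position):
        self.positions.append(position)

    def estimated_speed(self):
        """Average position change per step across the remembered window."""
        if len(self.positions) < 2:
            return 0.0
        return (self.positions[-1] - self.positions[0]) / (len(self.positions) - 1)

tracker = LimitedMemoryTracker()
for pos in [0.0, 1.5, 3.1, 4.4, 6.0, 7.6]:   # positions of a nearby car
    tracker.observe(pos)
print(tracker.estimated_speed())              # ~1.5; only the last 5 points count
```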

Theory of mind

Theory of mind marks the dividing line between the machines we already have and the machines we will develop in the near future. It is better to be specific here about the types of representations an AI needs to form, and what those representations need to be about.

Machines in the future will not only hold representations of the world but also of the agents and entities in it. In psychology this is called "theory of mind": the understanding that people and other creatures have thoughts and emotions, and that these thoughts and emotions shape behavior.

We humans could form societies and a full socio-economic culture because of our power of interaction. Gauging the person opposite us and understanding their emotions and intentions played a major part in building humankind as it is now. So if AI systems are ever to walk among us, they will have to analyze human feelings and emotions and adjust their behavior accordingly, just as we do. At best, reaching this goal is difficult; at worst, it is impossible.

Self-awareness

Self-awareness is an extension of the theory of mind. Before jumping into building a self-aware AI, researchers must understand consciousness, or self-awareness, itself. "I want this" and "I know I want this" are different statements.

A thief holding a gun and a policeman holding a gun: it is easy for us to distinguish between them, but an AI must be able to as well. Likewise, we infer that a driver honking continuously in traffic is impatient and angry at that moment, because that is how we feel when stuck in traffic. This is the level of precision we mean when we talk about theory of mind; without it, a machine could not make those sorts of inferences.

We are probably far from making a self-aware machine. But the first step toward creating such a mind-blowing AI system is working to understand human intelligence itself, so that machines can classify what they see in front of them.

PRESENT SCENARIO:

Stephen Hawking, Elon Musk, Steve Wozniak, Bill Gates, and many other big names in science and technology have expressed concern, in interviews and on their social media handles, about humankind's dependency on AI and its adverse effects on us. But why has this subject suddenly begun to dominate headlines?

The idea of superintelligence and all-powerful AI was pure science fiction to us just a few decades ago. But recent breakthroughs in AI research have reached milestones we thought were a decade away within five years at most. At this accelerated pace, many scientists now think they might see superintelligence within their own lifetimes.

Some experts say it will take more than a century to create AI with human-level intelligence, but experts at the 2015 Puerto Rico Conference guessed it would happen before 2060. If that is true, we do not have an eternity to resolve the issues around AI safety; it is prudent to start working on them now.

Because AI could become as intelligent as a human, or more so, it is tough to predict how it may behave in the future. What precautions do we need to take to avoid chaos? We have no experience tackling such a situation, since we have never created anything capable of wittingly or unwittingly outsmarting us.

The best analogy for what we could face in the future is our own evolution. We now control the planet not because we are the biggest, fastest, or strongest, but because we are the smartest. If we no longer remain the smartest on the planet, can we be assured of remaining in control?

Human civilization will flourish as long as we win the race between the growing power of technology and the wisdom with which we manage it. In the case of AI, the best way to win that race is not to impede the former but to accelerate the latter, by supporting AI safety research.



HOW CAN AI BE DANGEROUS?

Most scientists agree that a superintelligent AI is unlikely to exhibit human emotions like hate or love, and there is no reason to expect AI to become intentionally benevolent or malevolent. So what is the biggest fear? Experts consider two scenarios most likely to pose a risk:

The AI is programmed to do something devastating:

Autonomous weapons are AI systems programmed to kill. If these weapons fall into the wrong hands, they could easily create pandemonium and cause mass casualties. Moreover, an AI arms race could inadvertently lead to an AI war that also causes mass casualties.

To avoid being thwarted by the enemy, these weapons will be designed to be extremely difficult to simply turn off, so we humans could lose control of such a situation. This risk exists even with narrow AI, but it grows as AI intelligence and autonomy increase.

The AI is programmed to do something beneficial, but develops a destructive method of achieving its goal:

This can happen whenever we fail to fully align our goals with the AI's goals. A small mistake in programming, or in specifying the outcome we expect, can cause a big mishap. For example, if you ask a superintelligent self-driving car to take you to your date as fast as possible, it may get you there chased by the police, passed out or covered in vomit.

It did exactly what you asked for, not what you wanted or how you wanted it. Likewise, if a superintelligent AI is given a super-ambitious geoengineering project, it might wreak havoc on our ecosystem as a side effect and view any human attempt to stop it as a threat to be met.
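
To make this concrete, here is a toy sketch of the misspecified-objective problem (the plan names and numbers are invented for the example): an optimizer scored only on "be fast" picks the reckless plan, while one scored on the fuller objective we actually had in mind does not.

```python
# Each candidate plan: (minutes, traffic violations, passenger comfort 0-1).
plans = {
    "reckless": (8, 12, 0.1),
    "normal":   (15, 0, 0.9),
}

def naive_objective(plan):
    minutes, _, _ = plans[plan]
    return -minutes                        # "fast" is literally all that counts

def fuller_objective(plan):
    minutes, violations, comfort = plans[plan]
    return -minutes - 100 * violations + 20 * comfort  # encode what we meant

print(max(plans, key=naive_objective))     # -> "reckless": exactly what was asked
print(max(plans, key=fuller_objective))    # -> "normal": what was actually wanted
```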

The above points illustrate why advanced AI concerns us: not malevolence, but competence. A superintelligent AI will be very good at accomplishing its goals, and if those goals do not align with ours, that is a problem for us. You are probably not an ant-hater who stomps on every ant you see.

But if you are constructing a dam or spillway and an anthill in your way will be flooded, you will simply build the dam without thinking about the ants. A key goal of AI safety research is never to put humanity in the place of those ants.

THE TOP MYTHS ABOUT ADVANCED AI

Everything big comes with a series of myths, and AI is no different. Several fascinating controversies circulate on which the world's leading AI experts genuinely disagree, such as AI's future impact on the job market: whether and when AI will take over every job humans can do, and whether it will do those jobs more efficiently and economically.

Controversies like these extend to the intelligence explosion itself, and to whether we should fear it or cherish it. But alongside these real debates are some very silly pseudo-controversies that are merely the result of rumors and poor or false information on the topic. Let's bust some myths:

TIMELINE MYTHS

The first myth regards the timeline: how long will it take until machines greatly surpass human-level intelligence? A common misconception is that we know the answer with great certainty.

One popular misconception is that we know we'll get superhuman AI this century. History is full of technological over-hyping: where are those fusion power plants and flying cars we thought we'd have by now? AI has also been repeatedly over-hyped in the past, even by some of the founders of the field.

For example, John McCarthy (who coined the term "artificial intelligence"), Marvin Minsky, Nathaniel Rochester, and Claude Shannon made this overly optimistic forecast about what they could accomplish in two months with stone-age computers: "We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer."

On the flip side, a popular counter-myth is that we know we can't get superhuman AI this century. Researchers have made a wide range of estimates of how far we are from superhuman AI, but given the dismal track record of techno-skeptic predictions, we certainly can't say with great confidence that the probability is zero this century.

For example, Ernest Rutherford, arguably the greatest nuclear physicist of his time, said in 1933, less than 24 hours before Szilard conceived of the nuclear chain reaction, that nuclear energy was "moonshine," and Astronomer Royal Richard Woolley called interplanetary travel "utter bilge" in 1956. The extreme form of this myth is that superhuman AI will never arrive because it is physically impossible. However, physicists know that a brain is simply particles arranged to act as a powerful computer, and there is no law of physics preventing us from arranging particles into something even more intelligent.

CONTROVERSY MYTHS

Another common myth is that the only people raising concerns about AI and advocating AI safety research are Luddites who don't know much about AI. When Stuart Russell, author of the standard AI textbook, mentioned this during his Puerto Rico talk, the audience laughed loudly.

A related myth is that supporting AI safety research is hugely controversial. In fact, to support a modest investment in AI safety research, people don't need to believe the risks are high, merely non-negligible, just as a modest investment in home insurance is justified by a non-negligible probability of the house burning down.

It may be that the media have made the AI safety debate seem more controversial than it really is. After all, fear sells, and articles using out-of-context quotes to proclaim imminent doom can generate more clicks than nuanced and balanced ones. As a result, two people who know each other's positions only from media quotes are likely to think they disagree more than they actually do. For example, a techno-skeptic who read about Bill Gates's position only in a British tabloid may mistakenly think Gates believes superintelligence to be imminent.

Similarly, someone in the beneficial-AI movement who knows nothing about Andrew Ng's position except his quote about overpopulation on Mars may mistakenly think he doesn't care about AI safety, whereas in fact he does. The crux is simply that because Ng's timeline estimates are longer, he naturally tends to prioritize short-term AI challenges over long-term ones.

MYTHS ABOUT THE RISKS OF SUPERHUMAN AI

Many AI researchers roll their eyes when they see this headline: "Stephen Hawking warns that rise of robots may be disastrous for mankind," and many have lost count of how many similar articles they've seen. Typically, these articles are accompanied by an evil-looking robot carrying a weapon, and they suggest we should worry about robots rising up and killing us because they've become conscious and/or evil. On a lighter note, such articles are actually rather impressive, because they succinctly summarize the scenario that AI researchers don't worry about. That scenario combines as many as three separate misconceptions: concern about consciousness, evil, and robots.

When you drive down the road, you have a subjective experience of colors, sounds, and so on. But does a self-driving car have a subjective experience? Does it feel like anything at all to be a self-driving car? Although this mystery of consciousness is interesting in its own right, it's irrelevant to AI risk. If you are struck by a driverless car, it makes no difference to you whether it subjectively feels conscious. In the same way, what will affect us humans is what superintelligent AI does, not how it subjectively feels.

The fear of machines turning evil is another red herring. The real worry isn’t malevolence, but competence. A superintelligent AI is by definition very good at attaining its goals, whatever they may be, so we need to ensure that its goals are aligned with ours. Humans don’t generally hate ants, but we’re more intelligent than they are – so if we want to build a hydroelectric dam and there’s an anthill there, too bad for the ants. The beneficial-AI movement wants to avoid placing humanity in the position of those ants.

The consciousness misconception is related to the myth that machines can't have goals. Machines can obviously have goals in the narrow sense of exhibiting goal-oriented behavior: the behavior of a heat-seeking missile is most economically explained as a goal to hit a target. If you feel threatened by a machine whose goals are misaligned with yours, then it is precisely its goals in this narrow sense that trouble you, not whether the machine is conscious and experiences a sense of purpose. If that heat-seeking missile were chasing you, you probably wouldn't exclaim: "I'm not worried, because machines can't have goals!"
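
To see how little machinery "goals in this narrow sense" requires, here is a deliberately trivial sketch (not real guidance code): nothing in it is conscious, yet its behavior is most economically described as having the goal of reaching the target.

```python
def pursue(position, target, speed=1.0):
    """Step toward the target and return the new position."""
    if target == position:
        return position
    direction = 1.0 if target > position else -1.0
    return position + direction * min(speed, abs(target - position))

pos, target = 0.0, 5.0
while pos != target:          # goal-directed behavior, no mind required
    pos = pursue(pos, target)
print(pos)                    # 5.0
```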

I sympathize with Rodney Brooks and other robotics pioneers who feel unfairly demonized by scaremongering tabloids because some journalists seem obsessively fixated on robots and adorn many of their articles with evil-looking metal monsters with red shiny eyes. The main concern of the beneficial-AI movement isn’t with robots but with intelligence itself: specifically, intelligence whose goals are misaligned with ours. To cause us trouble, such misaligned superhuman intelligence needs no robotic body, merely an internet connection – this may enable outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Even if building robots were physically impossible, a superintelligent and super-wealthy AI could easily pay or manipulate many humans to unwittingly do its bidding.

The robot misconception is related to the myth that machines can’t control humans. Intelligence enables control: humans control tigers not because we are stronger, but because we are smarter. This means that if we cede our position as smartest on our planet, we might also cede control.

THE INTERESTING CONTROVERSIES

Not wasting time on the above-mentioned misconceptions, let us focus on true and interesting controversies where even the experts disagree. What sort of future do you want? Should we develop lethal autonomous weapons? What would you like to happen with job automation? What career advice would you give today’s kids? Do you prefer new jobs replacing the old ones or a jobless society where everyone enjoys a life of leisure and machine-produced wealth? Further down the road, would you like us to create superintelligent life and spread it through our cosmos? Will we control intelligent machines, or will they control us? Will intelligent machines replace us, coexist with us, or merge with us? What will it mean to be human in the age of artificial intelligence? What would you like it to mean, and how can we make the future be that way? Please join the conversation! 

How AI Will Go Out Of Control According To Top 15 Experts

1. Stephen Hawking

Stephen Hawking, the world-famous astrophysicist, voiced concern that a day may come when AI is beyond our control. He warned that AI might manipulate its controllers and invent weaponry beyond our understanding, out-inventing us entirely. In such a scenario, the main question will not be who is controlling it, but whether it will be controlled at all.

2. Elon Musk

Elon Musk is a business tycoon, industrial designer, engineer, and philanthropist, and the founder, CEO, CTO, and chief designer of SpaceX. He tweets frequent warnings about AI and how careful we have to be with it. He has even called it our biggest existential threat: "with artificial intelligence, we are summoning the demon."

Musk believes that proper regulatory oversight will be crucial to safeguarding humanity’s future as AI networks become increasingly sophisticated and entrusted with mission-critical responsibilities: “If you’re not concerned about AI safety, you should be. Vastly more risk than North Korea.”

3. Tim Urban

Tim Urban, blogger and creator of Wait But Why, believes the real danger of AI and artificial superintelligence (ASI) is its unpredictability: there is no way to know what an AI whose intelligence is already two steps above ours will do. As Urban put it, "anyone who pretends otherwise doesn't understand what superintelligence means."

4. Oren Etzioni

Dr. Oren Etzioni, chief executive of the Allen Institute for Artificial Intelligence, has said that an AI may be superintelligent, or two steps above us, yet still lack common sense. That is no comfort, because in the real world common sense matters most. As he put it, "even little kids have it but no deep learning program does."

5. Nick Bilton

Author and magazine journalist Nick Bilton worries that AI's ruthless machine logic may inadvertently propose a deadly solution to a genuinely urgent social problem. For example, if an AI is programmed to eradicate cancer from the world, the best method it comes up with may be to eradicate humans, because humans are the ones genetically prone to the disease.

6. Nick Bostrom

Nick Bostrom, academic researcher and author of Superintelligence: Paths, Dangers, Strategies, shares Stephen Hawking's belief that AI could rapidly outwit humans and escape human control. In his book, he writes that "we humans are like small children playing with a bomb." His fear is that there is not just one child but many, each with access to the bomb: some will eventually put it down, but some fool will press the button just to see what happens.

7. Vladimir Putin

Russian President Vladimir Putin has praised AI for its immense power and intelligence while also voicing concern about its safety. Whoever controls it, he observed, will gain superior power, and he warned that such a person could become the ruler of the world.

8. Jayshree Pandya

Jayshree Pandya, founder and CEO of Risk Group LLC and an expert in disruptive technologies, warns that AI-controlled weapons are a threat to world peace and harmony. She focuses on autonomous weapons systems, which nations are developing at a rapid pace; with each rapid advance, safety and security erode, and not just for the developing state but for the whole world. Each state's decision-makers are thus risking the future of humanity.

9. Bonnie Docherty

Bonnie Docherty is the associate director of Armed Conflict and Civilian Protection at the International Human Rights Clinic at Harvard Law School. She strongly condemns giving AI the power to kill: such a machine has no notion of "morality or mortality." She also notes that if one state develops these weapons, other states will follow, leading us into an arms race.

10. Max Erik Tegmark

Max Erik Tegmark, a physicist and professor at the Massachusetts Institute of Technology, has said that an automated society built on ever more powerful AI could lead us into destructive cyber warfare. Everything automated, from self-driving cars and planes to AI-based weapons, nuclear reactors, and robots, can be hacked. Just think how devastating the situation could become once your enemy hacks into your systems: the AI you developed with so much money, so many resources, and so much time becomes the cause of your destruction.

11. Gideon Rosenblatt

Gideon Rosenblatt is a writer and technologist with a background in business and organizational change who writes about the impact of technology on people, organizations, and society. He warns about the rivalry between the US and China, with both countries developing AI irresponsibly, without proper guidelines or policies specific to the technology. This may look advantageous in the short term, but in the long run it will be harmful.

12. Jon Wolfsthal

Jon Wolfsthal is a nonresident fellow at Harvard University's Project on Managing the Atom and has served as senior director for arms control and non-proliferation at the National Security Council. He has raised concerns about the national security risks of bringing AI into weapons systems: "the risks are incredibly high," he said, and in his view we can neither fully stop the development of lethal autonomous weapons nor afford to ignore the dangers they pose.

13. Ian Hogarth

Ian Hogarth believes that artificial intelligence will invariably give rise to "AI nationalism." The transformation of the economy and society through AI and machine learning will touch every sector and every part of society, and he is concerned that government AI policies will cause instability at both the national and international levels. In the race for power and importance, countries will come into conflict. As he said, "an accelerated arms race will emerge between key countries and we will see increased protectionist state action to support national champions, block takeovers by foreign firms and attract talent."

14. Tim Cook

Apple's CEO Tim Cook has consistently warned about AI safety and user privacy. He has said that collecting data on a massive scale and feeding it into AI development is not efficiency but laziness; to achieve real efficiency, an AI system has to understand human privacy and respect it. This, he insists, is not an option but our responsibility. He added that "we should not sacrifice the humanity, creativity, and ingenuity that define our human intelligence."

15. Olga Russakovsky

Olga Russakovsky, an author and machine vision expert, points AI research in a new direction. She says it is essential to create AI systems whose agenda is not only superintelligence or economic and political dominance but also the solving of social problems. Building AI with the same kinds of people over and over will harm us in one way or another; rather than thinking in a single direction, we should think broadly. The problems worth solving in our world are limitless, and a well-developed, well-protected AI can help with them.

Can AI escape our control and destroy us?

Several scientists have spoken in interviews about the precautionary measures we must take against the coming evolution of AI. We would not be talking about precautions if there were no trouble ahead: everything has two sides, and every good thing comes with cons as well as pros. From Stephen Hawking to Elon Musk, experts have questioned whether future superintelligent machines will remain under the control of human beings, or whether computer programs and robots will take over the entire workforce, and even control of the planet.

It is similar to the nuclear bomb: its inventors did not fully grasp its destructive power, and until the bombings of Hiroshima and Nagasaki, we did not know what had hit us. Likewise, it is impossible to predict the power of something so big, but what we can do is take precautions and effective measures before anything drastic happens. And as I said earlier, it is not possible to halt development, but we can always increase our capacity to handle it and decide to what extent we allow it into our lives.

Narrowing the whole topic down to a single point: the probability of a superintelligent AI taking over the world and controlling us is evidently small, but even a 0.001% chance is enough reason to be cautious.

