Soon AI will Predict End of Humanity and will Control Mankind: Research

0
Soon AI will Predict End of Humanity and will Control Mankind
Soon AI will Predict End of Humanity and will Control Mankind

Researchers from the University of Oxford and Google Deepmind conducted a study on AI and concluded that superintelligent AI could reveal the end of humanity in the near future. This scenario is predicted by more researchers. This research was recently published in an article in AI Magazine. The team consisted of a senior Deep Mind senior scientist named Marcus Hatter, and Oxford researchers Michael Cohen and Michael Osborne.

Both scientists revealed that through the research conducted they learned that in the near future machines will be encouraged to break the rules set by their creators to perform some tasks.

Will AI Predict the End of Humanity?

Cohen, Oxford University engineering student and co-author of the paper said:

“Under the conditions we have identified, our conclusion is much stronger than that of any previous publication — an existential catastrophe is not just possible, but likely,” 

After this, humanity will face its downfall in the form of extra advanced  “misaligned agents” that will think of humankind as a hurdle in way of reward.

“One good way for an agent to maintain long-term control of its reward is to eliminate potential threats, and use all available energy to secure its computer,” Losing this game would be fatal,” 

Unfortunately, in the study, researchers have revealed that we cannot do much about it.

While telling about the whole scenario in an interview, Cohen said:

“In a world with infinite resources, I would be extremely uncertain about what would happen,”  “In a world with finite resources, there’s unavoidable competition for these resources.” And if you’re in a competition with something capable of outfoxing you at every turn, then you shouldn’t expect to win,”

It may sound very scary, but I believe humanity should slow down the progress of AI technology to play it safe and prevent such bad things from happening.

The paper also warns that if these assumptions are true, “sufficiently advanced artificial agents are likely to interfere with the transmission of target information, with catastrophic consequences.”