Do Regulators Need To Start Taking Artificial General Intelligence Research More Seriously?

Machine learning, the technology most often described in the press as artificial intelligence, has received a great deal of publicity in recent years. While the concept has roots in science fiction literature, research and technological innovation in machine learning have repeatedly broken through preconceptions of what seemed possible only decades ago. The emergence of new technology in this field poses bigger questions, however, which are currently being considered almost exclusively by those who are developing the technology.

What is AGI?

Artificial narrow intelligence (ANI), often referred to as “weak AI”, is responsible for all of the applications of AI currently used in our society. Facial recognition software, driverless cars, and disease mapping are all examples of how we use weak AI today. ANI systems are designed and programmed to perform specific human tasks; they cannot operate independently in the way the human mind does.

The form of AI that researchers have speculated about for years, one that could operate in this way, is what has been termed Artificial General Intelligence (AGI), or “strong AI”. Once thought to be confined to the realm of science fiction, this form of artificial intelligence is now believed by some researchers to be achievable in the coming decades. While these assertions could prove overambitious, the potential they describe would carry technological implications never seen before.

Why is AGI important?

The implications of AGI range from solving the most pressing issues facing the human race to posing some of the greatest dangers to it. AI-driven research and the automation of human labour in the medical field have been predicted to one day extend human lifespans tenfold. On the other hand, AGI has been thought capable of leaving the majority of humanity without work. The inequality resulting from the development of AGI could dramatically widen the welfare gap between countries that have the technology and those that don’t.

In his book “21 Lessons for the 21st Century”, Professor Yuval Noah Harari of the Hebrew University of Jerusalem raises the prospect that AI could one day produce a “useless” labour class in society. AGI could also dramatically expand the threat landscape in cyber security by lowering the barriers to entry for bad actors who want to use cyber crime to advance their interests. The idea that most human labour could one day be replaced by computers is by no means a new concept, but the innovation of artificial general intelligence would make it far more likely to come to pass.

Are we closing in on AGI? Or another AI winter?

Since the first major computing innovations by Alan Turing, the question of computer-based consciousness has arisen again and again. From the conception of the computer, however, obstacles in AI research have repeatedly pushed this theoretical construct back into the pages of science fiction and fantasy. The field of machine learning has thus been dotted by stretches of stagnation, which have come to be known as “AI winters”.

OpenAI, an AI research lab that Elon Musk helped start in 2015, is one of the major research centres trying to develop AGI today. The company has grabbed the attention of investors and the media with its ambition to develop AGI that “benefits all of humanity”. In an annual poll, half of OpenAI’s employees stated that they believed AGI would be achieved within the next fifteen years.

Many AI researchers remain sceptical, however, about the notion of AGI altogether. Many of the arguments for why AGI won’t be realised rest on the premise that the human brain cannot be imitated in machines, because we are unable to fully map the neural complexities of our own minds. New developments in machine learning have challenged this obstacle by proposing that AGI need not be modelled on our own brains. A recent paper from the Google-owned research centre DeepMind presents a theoretical basis for how AGI could be achieved through reinforcement learning, a method of reward-based machine learning. Like OpenAI, DeepMind has also placed an emphasis on safety and ethics, with altruistic ambitions that seek to prevent the misuse of AGI through the responsible development of the technology.
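For readers unfamiliar with the method, reinforcement learning can be illustrated with a minimal tabular Q-learning sketch: an agent learns which actions earn reward purely by trial and error, with no model of the problem built in. The toy corridor environment and all parameter choices below are illustrative only, not drawn from DeepMind’s paper:

```python
import random

# Toy environment: a corridor of 5 states. The agent starts at state 0 and
# receives a reward of 1.0 only upon reaching the final state.
N_STATES = 5
ACTIONS = [0, 1]  # 0 = move left, 1 = move right

def step(state, action):
    """Deterministic transition: returns (next_state, reward, done)."""
    next_state = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward, next_state == N_STATES - 1

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    random.seed(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q-table: q[state][action]
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Epsilon-greedy selection: mostly exploit the best-known action,
            # occasionally explore a random one.
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[state][a])
            next_state, reward, done = step(state, action)
            # Q-learning update: nudge the estimate toward the reward plus the
            # discounted value of the best action in the next state.
            q[state][action] += alpha * (
                reward + gamma * max(q[next_state]) - q[state][action]
            )
            state = next_state
    return q

q = train()
# After training, "move right" is valued higher than "move left" in every
# non-terminal state: the agent has learned the optimal policy from reward alone.
```

Nothing in the code tells the agent that moving right is good; that knowledge emerges entirely from the reward signal, which is the generality that the DeepMind paper argues could, in principle, scale to much harder problems.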

What is currently being done by regulators?

In the EU, legislation regarding artificial intelligence has recently been proposed. Dubbed the “GDPR of AI”, this type of legislation could serve as the groundwork that will one day apply to AGI. Such legislation, however, focuses mainly on current and imminent AI technologies that are already in circulation. The proposed regulation takes a risk-based approach, defining a scope of AI technology that AGI could one day fall under.

Other research institutions, such as the Centre for the Study of Existential Risk (CSER) in Cambridge and the Future of Humanity Institute in Oxford, are studying the future risks associated with AI. A recent CSER project has set out to outline the risks of AGI that stem from the technology’s generality. A 2015 study published by the Royal Swedish Academy of Sciences also surveyed the proposals arising in the field of AI research for responding to its potential catastrophic risks to society, exploring options ranging from doing nothing at all to restricting research on AGI altogether.

The majority of conversations regarding strong AI, however, are currently taking place in the offices where the technology is being developed. Many sceptics still believe that AGI is very far from being a real threat, and according to some researchers there is a real possibility that AGI may never exist at all. It could be irresponsible, however, to meet an overestimation on the part of AI researchers with an underestimation on the part of regulators. If the development of artificial general intelligence continues to accelerate, we can only hope to see a corresponding development of laws that mitigate its worst attributes while preserving its benefits.