
Opportunities and Risks of Artificial General Intelligence

In this article, we discuss the potential opportunities and risks involved in developing artificial general intelligence.
© Torrens University

Achieving Artificial General Intelligence (AGI) and the end of work

Achieving AGI

AGI is the term used to describe an AI program that can understand the world to at least the same level as a human and can also learn to undertake new tasks as capably as the average human. This level of AI hasn’t been achieved yet; in fact, many AI researchers believe we are roughly fifty years away from achieving it. The same was once said of the self-driving car, which arrived much sooner than the expected timeline. The difference between AI and AGI is the level of generality. Today’s AI can perform narrowly scoped tasks very well but performs poorly at anything outside the scope it was built to address. AGI is multi-purpose in the same way as the human mind: it can handle innumerable tasks as a human can, while being supported by a level of information access that we can’t duplicate.

The belief is that an AGI could perform any job or task at least as well as a human can, and could likely perform many tasks far better, because of its ability to combine human-like reasoning and creative thinking with the capacity to instantaneously draw on the world’s full knowledge base.

There is an expectation that these machines will eventually replace every working role performed by humans, except, perhaps, those roles humans undertake for personal gratification. In developing economies, humans would initially still be cheaper to employ than these machines, but machine costs will eventually decrease to the point where this is no longer the case. This raises many ethical and social questions, the most prominent of which is: what can be done to ensure income inequality doesn’t reach impossible levels, with a select few elites owning these companies and most of humanity left with no way of earning an income?

AI Citizens

Human-Robot Interaction (HRI): increasing the quality of interaction between robots and people
This is a substantial area of study that explores human attitudes and behaviour towards robots, specifically regarding the interactive, physical and technological features of the robot. The intent is to continually improve human-robot interactions, so that they are not only more productive but also comfortable and natural for humans, engaging them in a fashion that meets their emotional and social needs and respects their values (Dautenhahn, 2013).

Theory of Mind (ToM): teaching AI to learn the same way humans do, facilitating natural, human-like learning powered by silicon
ToM is the capability to predict the actions and thoughts of others. Alan Winfield, a professor at the Bristol Robotics Laboratory at the University of the West of England, says that with a theory of mind an AI may anticipate how others might behave in particular circumstances. The breakthrough a ToM-enabled AI or robot would produce is the ability to teach it the same way we would teach a human. Such an AI would learn much faster, and in ways that are specifically relevant to humans (Pandey, 2018).
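
A minimal sketch of the predictive idea behind ToM, assuming a toy Bayesian model: an observer watches another agent's moves on a grid and updates its belief about which goal the agent is pursuing. The goals, positions and probabilities here are illustrative assumptions, not drawn from Winfield's or Pandey's work.

# Toy "theory of mind": infer which goal another agent is pursuing
# from its observed moves, using Bayes' rule. All values are hypothetical.

GOALS = {"coffee": (4, 0), "desk": (0, 4)}  # assumed goal positions

def likelihood(move, position, goal):
    """P(move | goal): moves that bring the agent closer to the goal
    are assumed far more likely than moves that do not."""
    (x, y), (gx, gy) = position, goal
    nx, ny = x + move[0], y + move[1]
    closer = abs(gx - nx) + abs(gy - ny) < abs(gx - x) + abs(gy - y)
    return 0.9 if closer else 0.1

def update_beliefs(beliefs, move, position):
    """Bayes' rule: P(goal | move) is proportional to P(move | goal) * P(goal)."""
    posterior = {name: likelihood(move, position, pos) * beliefs[name]
                 for name, pos in GOALS.items()}
    total = sum(posterior.values())
    return {name: p / total for name, p in posterior.items()}

# Watch an agent start at (0, 0) and step right twice, towards "coffee".
beliefs = {"coffee": 0.5, "desk": 0.5}  # uniform prior over goals
position = (0, 0)
for move in [(1, 0), (1, 0)]:
    beliefs = update_beliefs(beliefs, move, position)
    position = (position[0] + move[0], position[1] + move[1])

print(beliefs)  # belief in "coffee" now dominates

After two observed moves the observer assigns most of its belief to the "coffee" goal and can predict the agent's next step, which is the kind of anticipation Winfield describes, reduced to its simplest possible form.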

AI rights
The concept of granting an AI legal rights, or even personhood, might have seemed far-fetched several years ago. Now, however, it is clear to many AI researchers that AI will eventually reach a level of complexity at which it replicates the functioning of the human brain. The commonly held opinion is that AI exists only to serve humans in some capacity. Conversely, some now question this perspective, arguing that AI is likely to advance to a point where it is indistinguishable from a human mind. Even if we don’t consider it ‘conscious’, the prudent thing to do would be to afford it rights, if for no other reason than the impact that treating it maliciously, or even with disregard, would have on our own psyches.

The Threat of AI


The Alignment Problem: aligning an AI’s intent with ours has to be perfect, or it may lead to the end of us
The development of smarter-than-human artificial intelligence poses an existential risk, and a risk of great suffering, to humanity. Given that it is unlikely we can prevent the development of smarter-than-human AI, and may not want to, we are faced with the challenge of developing AI while mitigating these risks. Addressing this challenge is the focus of AI safety research, which includes topics such as containment of AI, coordination of AI safety efforts, and commitment to AI safety protocols; but the issue most believe is primarily worth addressing is AI alignment, because it seems the most likely to decrease the chances of a catastrophic scenario.

Briefly stated, the problem of AI alignment is to produce AI that is aligned with human values. But this only leads us to ask: what does it mean to be aligned with human values? Further, what does it mean to be aligned with any values, let alone human values? We could try to answer by saying AI is aligned with human values when it does what humans want, but this only invites more questions. Will AI do things some specific humans don’t want if other specific humans do want them? How will AI know what humans want, given that current technology often does what we ask but not what we desire? And what will AI do if human values conflict with its own? Answering these questions requires a more detailed understanding of what it would mean for AI to be aligned; thus the goal of the AI ethics and safety community is to put forward a precise, formal, mathematical statement of the AI alignment problem.
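
Given the observation above that technology often does what we ask but not what we desire, here is a minimal sketch of that gap, assuming an entirely hypothetical content-recommendation agent: it faithfully maximises the objective we specified (clicks) rather than the objective we meant (usefulness).

# Toy misalignment: the agent optimises a proxy reward (clicks), which
# only loosely tracks the intended value (usefulness). All numbers and
# names here are illustrative assumptions.
import random

random.seed(0)

# Hypothetical article pool: clicks are driven mostly by clickbait appeal.
articles = []
for _ in range(1000):
    usefulness = random.random()
    clickbait = random.random()
    clicks = 0.2 * usefulness + 0.8 * clickbait  # the proxy rewards the wrong thing
    articles.append({"usefulness": usefulness, "clicks": clicks})

agent_pick = max(articles, key=lambda a: a["clicks"])      # what we asked for
human_pick = max(articles, key=lambda a: a["usefulness"])  # what we meant

print("agent's pick, usefulness:", round(agent_pick["usefulness"], 2))
print("human's pick, usefulness:", round(human_pick["usefulness"], 2))

The agent has done exactly what it was told, yet the usefulness of its pick falls well short of what we wanted; the gap between the two scores is the alignment problem in miniature, and a formal statement of the problem would have to say precisely when such a gap counts as misalignment.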

The AI Confinement Problem: the idea that if there is an AGI, it has to be kept in a closed system, otherwise it may be the end of us

This is the inherent challenge of keeping an Artificial General Intelligence (AGI), or even an Artificial General Superintelligence (AGSI) (an intelligence far greater than human capability), confined to one physical location so that it can be turned off if necessary. If the AI ‘escaped’ onto the internet, the genie would be out of the bottle and, most likely, impossible to get back in.

Computer security researchers have taken on the challenge of designing, enhancing and improving secure AI confinement protocols. This is felt to be an essential problem to solve because, in the event of an AGI or AGSI escaping, the free roaming of such an intelligence through society may well be a threat to humanity’s survival (see The Alignment Problem above).
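
To make the flavour of a confinement protocol concrete, here is a toy sketch of the ‘boxing’ idea: the confined system never acts on the world directly, and every proposed action must pass a gatekeeper first. The action names and the (deliberately naive) whitelist policy are hypothetical.

# Toy AI 'box': a gatekeeper mediates every action the confined system
# proposes. The whitelist and checks below are illustrative assumptions.
ALLOWED_ACTIONS = {"answer_question", "summarise_document"}

def gatekeeper(action, payload):
    """Allow only whitelisted, network-free actions out of the box."""
    if action not in ALLOWED_ACTIONS:
        return False, f"blocked: '{action}' is not whitelisted"
    if "http://" in payload or "https://" in payload:
        return False, "blocked: payload contains a network reference"
    return True, "allowed"

def confined_step(action, payload):
    """One mediated exchange with the boxed system."""
    ok, reason = gatekeeper(action, payload)
    print(("executing: " + action) if ok else reason)

confined_step("answer_question", "The capital of France is Paris.")
confined_step("open_socket", "198.51.100.7:443")              # blocked
confined_step("answer_question", "see https://evil.example")  # blocked

Of course, the worry in the confinement literature is precisely that a sufficiently capable intelligence could talk or trick its way past any such gatekeeper, which is why researchers treat confinement as a hard, unsolved problem rather than a finished protocol.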

“By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.”
Eliezer Yudkowsky – AI researcher and writer