Artificial superintelligence and its limits: why AlphaZero cannot become a general agent


Jebari, Karim and Lundborg, Joakim (2019). Artificial superintelligence and its limits: why AlphaZero cannot become a general agent. [Preprint]

Abstract

The emergence of an intelligent machine surpassing human intelligence across a wide set of skills has been proposed as a possible existential catastrophe (i.e., an event comparable in severity to human extinction). Among those concerned about existential risk related to Artificial Intelligence (AI), it is common to assume that such an AI would not only be very intelligent, but would also be a general agent (i.e., an agent capable of acting in many different contexts).

This article explores the characteristics of machine agency and what it would mean for a machine to become a general agent. In particular, it does so by articulating some important differences between belief and desire in the context of machine agency. One such difference is that while an agent can acquire new beliefs by itself through learning, desires must either be derived from preexisting desires or be acquired with the help of an external influence, such as a human programmer or natural selection. We argue that to become a general agent, a machine needs productive desires, that is, desires that can direct behavior across multiple contexts. However, productive desires cannot be derived sui generis from non-productive desires. Thus, even though general agency in AI could in principle be created, an AI cannot produce it spontaneously through an endogenous process. In conclusion, we argue that a common AI-risk scenario, in which general agency suddenly emerges in a non-general AI agent, is not plausible.
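To make the belief/desire distinction concrete in machine terms, here is a minimal illustrative sketch (not from the paper; all names, transitions, and parameters are chosen purely for illustration) of a tabular Q-learning agent. Its "beliefs", the Q-value estimates, are revised endogenously through experience; its "desire", the reward function, is fixed exogenously by the programmer and is never modified by the learning process itself.

```python
import random
from collections import defaultdict

# "Desire": the reward function is supplied exogenously by the programmer.
# Nothing in the learning loop below creates or modifies it.
def reward(state: int, action: int) -> float:
    return 1.0 if state + action == 3 else 0.0  # arbitrary illustrative goal

def step(state: int, action: int) -> int:
    return (state + action) % 4  # toy deterministic transition

# "Beliefs": Q-value estimates, revised endogenously through experience.
q_values = defaultdict(float)
ACTIONS = [0, 1, 2]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration

state = 0
for _ in range(10_000):
    # Act epsilon-greedily on the agent's current beliefs.
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: q_values[(state, a)])
    next_state = step(state, action)
    # Endogenous belief revision: the standard Q-learning update.
    best_next = max(q_values[(next_state, a)] for a in ACTIONS)
    q_values[(state, action)] += ALPHA * (
        reward(state, action) + GAMMA * best_next - q_values[(state, action)]
    )
    state = next_state

# After training, the agent's beliefs about action values have changed;
# its desire (the reward function) has not.
print({k: round(v, 2) for k, v in sorted(q_values.items())})
```

AlphaZero fits the same mould: its objective (winning the game) is supplied from outside, and self-play only refines its beliefs about which moves serve that fixed objective; this is the structure on which the authors' argument turns.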

