SOPhiA 2022

Salzburgiense Concilium Omnibus Philosophis Analyticis


Programme - Talk

Motivating AI Legal Personhood: An Agency Account
(Ethics, English)

It is an important and pressing question whether personhood ought to be adopted as a legal status for artificial intelligence systems. Consider, for instance, the case of a self-driving vehicle that kills an innocent pedestrian in an accident. It remains an open question whether the engineers who developed the artificial intelligence system for such a vehicle should be held legally responsible for the death of this pedestrian, or whether the artificial intelligence system itself should be legally responsible. Indeed, in February 2017 the European Parliament launched a commission to investigate the implications of granting legal personhood status to artificial intelligence systems (European Parliament 2017). However, some recent scholarly work, including Zevenbergen et al. (2017) and Wagner (2018), has recommended against the ascription of legal personhood status to artificial intelligence systems in response to this European Parliament commission. Primarily, this scholarly work has criticised the vague notion of artificial intelligence and emphasised the instrumental and intrinsic risks such a status might bring to human society.



In this paper, I suggest that such scholarship by Zevenbergen et al. and Wagner would be more meaningful if further nuance were applied to the conception of artificial intelligence. I argue that conceptualising artificial intelligence via a continuum of agency, rather than in binary terms, allows the implications of adopting legal personhood status for artificial agents to be more appropriately framed. This paper is laid out as follows. Following the introduction in Section 1, I introduce the essential aspects of intentional agency as a conceptual framework for thinking about artificial intelligence agents in Section 2. In Section 3, I consider applied cases of legal personhood for AI systems, such as self-driving vehicles and healthcare systems, via a continuum of agency. I respond to counterarguments in Section 4 and conclude in Section 5. _The majority of Sections 3-5 is excluded from this abstract._



The agency-based model of personhood is defined "not by what it is intrinsically but by what it does extrinsically: by the roles it plays, the functions it discharges" (List and Pettit 2011, 171). In this sense, an agent is deemed a person according to the bundle of functions it discharges - the performances of the agent. Personhood is therefore acquired as a status upon satisfaction of a minimum set of criteria: "the mark of personhood is the ability to play a certain role, to perform in a certain way" (171). As such, there is no objective _person_. Rather, persons can be understood as a collection of functions. I describe these functions in turn below, beginning with 1) rationality and moving toward 2) advanced interactivity and 3) responsibility.



1. Rationality: The minimal necessary condition of an agent simpliciter can be understood as rationality, which involves the following three aspects. Here, I use Laukyte's example of a thermostat to explain what characteristics a rational agent requires (Laukyte 2017, 2).



1. Representative State: An agent can sense its environment.



(a) For example, a thermostat maintains a comprehension of the current state of the room. It understands, for instance, that the room temperature is 15° Celsius.



2. Motivational State: An agent can understand how its environment ought to be.



(a) For example, a thermostat maintains an understanding of how the room ought to be; for instance, the thermostat might be set to 25° Celsius.



3. Interactivity: An agent acts in the gap between how its environment is and how it ought to be.



(a) For example, a thermostat can make the determination to increase the room temperature, assuming it understands that the room temperature is 15° Celsius and ought to be 25° Celsius.



(b) Said differently, the agent follows a set of criteria to arrive at an outcome.



Given the importance of rationality to the basic notion of intentional agency, a further example is helpful here. Consider the case of a simple robot whose function is to place upright wooden cylinders that have fallen onto their cylindrical side (List and Pettit 2011, 19-20). The robot can be understood as meeting a minimal condition of rationality, first, because it can sense the state of its environment. This is the representative state: the robot recognises that the cylinder is not placed in an upright position. Further, it understands that within this environment the correct position for the wooden cylinder is an upright position. This comprehension is understood as its motivational state. Finally, the agent can be said to be interactive if it then acts on its understanding that the cylinder ought to be placed in an upright position but is not currently in that position. This action can be minimal. For instance, the agent may simply desire or intend to act on placing the wooden cylinder upright. Its ability to actually place the cylinder upright is a more stringent criterion that is not necessary for agent rationality in its simplest form.
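To make these three aspects concrete, the following minimal sketch models the thermostat example above as a simple agent with a representative state, a motivational state, and interactivity. The class and method names are illustrative assumptions of mine, not drawn from List and Pettit or Laukyte:

```python
class ThermostatAgent:
    """Minimal rational agent in the sense sketched above (illustrative only).

    - representative state: what the agent senses its environment to be
    - motivational state: how the agent takes its environment ought to be
    - interactivity: acting in the gap between the two
    """

    def __init__(self, target_temperature: float):
        self.target_temperature = target_temperature  # motivational state
        self.current_temperature = None               # representative state (not yet sensed)

    def sense(self, room_temperature: float) -> float:
        # Representative state: the agent registers the current temperature.
        self.current_temperature = room_temperature
        return room_temperature

    def act(self) -> str:
        # Interactivity: act on the gap between "is" and "ought".
        if self.current_temperature < self.target_temperature:
            return "heat"
        if self.current_temperature > self.target_temperature:
            return "cool"
        return "idle"


# The agent senses a 15 °C room, holds a 25 °C goal, and acts to close the gap.
agent = ThermostatAgent(target_temperature=25.0)
agent.sense(15.0)
print(agent.act())  # -> "heat"
```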



2. Advanced interactivity is a further important aspect of rationality to consider in relation to agency. This refers to an agent's ability to interact with other agents within its social environment. This means agents should "understand themselves both as part of a group" and "act in respect of that group" (Laukyte 2017, 3). To understand this distinction, consider the case of an autonomous robot that is sent to Mars for the purpose of obtaining samples of the soil composition. It can be understood as an agent of a basic sort because it 1) has goals (i.e. to collect soil samples) and 2) implements those goals (it collects the soil samples). However, this condition of advanced interactivity is not satisfied if there are no other agents in the Mars environment for the robot to interact with. In this case the Mars robot would be an agent but not a social agent. (Note that the Mars robot could technically still be a social agent if it had the ability to interact with agents in its environment. For instance, a single isolated human person on a desert island can still maintain features of advanced interactivity despite the fact that she does not have others to communicate with.)



For artificial intelligence agents in particular, advanced interactivity is important for two reasons (Laukyte 2017, 4). First, any action initiated by an agent necessarily affects other agents; no agent works in isolation. Consider the hypothetical case of an autonomous vehicle controlled by an artificial intelligence agent. On the one hand, such an agent poses no moral or social danger if it functions in isolation. On the other hand, it is unlikely that an autonomous vehicle controlled via artificial agency could avoid interactions that have important moral and social implications. For instance, the simple act of an autonomous vehicle braking in advance of an intersection occupied by vehicles carrying human agents is such an interaction. This is particularly the case in the hypothetical social environment where human and artificial intelligence agents come into regular contact with one another. Second, advanced interactivity is important because it allows agents of different types (e.g. human agents, artificial intelligence agents, etc.) to communicate with one another within the social environment (Laukyte 2017, 4). To reiterate, advanced interactivity can be understood as an additional aspect of rationality (Laukyte 2017, 3). It can be distinguished from minimal interactivity (see 2.1) because an agent's ability to socially interact with other agents is not necessarily a prerequisite for minimal agency. I discuss its importance to the consideration of legal personhood in further detail in Section 3.1.2. I now turn to responsibility as an additional characteristic of agency.
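The contrast between minimal and advanced interactivity can likewise be sketched in code. The hypothetical vehicle agent below represents the other agents around it and lets their presence change its decision; the names and the braking rule are my own illustrative assumptions, not Laukyte's:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Agent:
    """Any agent the vehicle may encounter in its social environment."""
    name: str

class VehicleAgent(Agent):
    def decide(self, nearby_agents: List[Agent], approaching_intersection: bool) -> str:
        # Advanced interactivity: the decision takes other agents into account,
        # not only the vehicle's own representative and motivational states.
        if approaching_intersection and nearby_agents:
            return "brake"
        return "proceed"

# The vehicle registers other agents near an intersection and brakes for them.
vehicle = VehicleAgent(name="autonomous-vehicle")
others = [Agent(name="human-driven-car"), Agent(name="pedestrian")]
print(vehicle.decide(nearby_agents=others, approaching_intersection=True))  # -> "brake"
```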



3. An additional important characteristic of agency for personhood status is responsibility (Laukyte 2017, 5). This can be understood as an agent's ability to



a) understand the normative significance of a situation,



b) be able to make normative judgments on that situation and



c) maintain the capacity to wield control over these normative choices (understood as the "control requirement") (Laukyte 2017, 5). Normative significance simply refers to an agent's comprehension of a situation in which it can do something morally good or morally bad.



Consider Laukyte's example of an autonomous drone to more closely relate this agent characteristic of responsibility to artificial intelligence agents (Laukyte 2017, 5). Conceptualise a hypothetical situation where this drone must decide for itself whether to fire a missile that will either a) kill a child soldier or b) allow a group of innocent civilians to die (Laukyte 2017, 6). First, we can say the drone recognises the normative significance of the situation if it understands that a moral choice must be made in this moment. Second, it can be said that the drone exercises normative judgment if it is able to draw morally significant conclusions from the assessed situation (e.g. the drone can determine whether to kill the child soldier in order to save the group of innocent civilians). Laukyte here argues that an artificial intelligence agent ought to, at least roughly, follow the actions of humans in its moral judgments (Laukyte 2017, 7). Third, it ought to be able to choose to fire the missile. This is particularly important for an artificial intelligence agent because the drone itself arguably ought to maintain agency in such a scenario. If it is a human agent who controls the drone's decision to fire the missile, then the responsibility belongs to the human agent and not the drone. For her argument, Laukyte assumes that the drone does maintain agency in such a scenario. Note that artificial intelligence systems that must make choices similar to those of the drone described here already exist ("XQ-58A Valkyrie demonstrator completes inaugural flight", 2019).



Following this explanation of an agency-based model of personhood, I move to illustrate it along a continuum. I observe that a weak notion of agency refers to an agent that is "more specialized and less flexible" (List 2019, 5). For example, an agent with basic rationality lands on the weak end of the continuum; this generally corresponds to the current capabilities of artificial intelligence systems. By contrast, a human agent and other sentient life forms are placed on the strong end of the continuum. As AI systems advance in their technological makeup, it is possible their degree of agency will move closer toward the strong side of the continuum, thus satisfying conditions such as responsibility and granting them moral personhood.



In my conclusion I note that there are several tests that have historically been suggested for determining whether an agent might deserve the status of legal personhood, tests which support the arguments put forward by Zevenbergen et al. and Wagner. A traditional one is the Turing Test, which "tests for an intangible quality x by seeing whether the entity in question can do y" (Laukyte 2017, 8; see also Solum 1992, 1235-36). Proposed by Alan Turing in 1950, this test is famous for determining whether an artificial intelligence system can be considered "intelligent" (Turing 1950). The test is aimed at replacing the question "Can machines think?" with the more nuanced "Can the machine convincingly imitate the human person?" (Turing 1950, 433). More recently, Dennett proposed the "intentional stance test", in which he argues that an artificial intelligence system can be understood as an intentional agent if the agent's behaviour can reasonably be understood by viewing the agent as an intentional one (Dennett 2009, 339, cited in List 2019, 7).



However, tests such as those by Turing and Dennett fail because they are overly reductive about what it is to be a person (see List 2019 and Laukyte 2017). In contrast to Turing and Dennett, I conclude that the continuum of agency from weak to strong, as I have illustrated it, is a superior explanation of how legal personhood might be understood. This is because agency cannot be understood in terms of simple imitation. In doing so, I refute Zevenbergen et al. and Wagner's suggestion that legal personhood for AI systems should not be adopted. Although moral questions remain in the case of a self-driving vehicle that kills an innocent pedestrian, this paper contributes nuance to how the legal question should be considered.

Chair: Stephen Müller
Time: 15:20-15:50, 7 September 2022 (Wednesday)
Location: SR 1.005
Note: (Online Talk)

Karl Reimer
(University of Zurich, Switzerland)


