SOPhiA 2019

Salzburgiense Concilium Omnibus Philosophis Analyticis


Programme - Talk

Towards a middle-ground of agency for artificial intelligence
(Philosophy of Mind, English)

With the emergence of complex artificial intelligence (AI) and advanced robotics, AI has been applied across industries from automotive to healthcare. AI is now widely used to aid human decision-making in medical diagnostics, to facilitate autonomous driving and to analyse complex data structures. Modern robotic AI systems such as Joyforall's robotic Companion Pet or Hanson Robotics' Sophia are even often treated by humans as interactive agents and are expected to behave as such. This understanding of widespread agency is supported by rational choice theory and much of cognitive science. However, the philosophical tradition based on Anscombe and Davidson denies any attribution of agency to computational systems. The problem of whether to conceive of complex AI systems as agents proper is the topic of this paper. In particular, I will examine AI agency from two different standpoints: agency in the narrow sense, in terms of Davidson's intentional action, and agency in the broad sense, in terms of norm-based interaction as commonly understood. I will show that agency in the narrow sense, defined as the capacity to perform intentional actions, requires a specific organisation of internal mental states and is hence unachievable for current AI systems. Agency in the broad sense, on the other hand, captures a wide range of intuitions about attributing agency to non-human systems and can function as a minimal theory of agency. After comparing the two conceptions, I will argue that neither alone can comprehensively address the various facets of AI agency. I therefore propose a middle ground between both theories which introduces an additional criterion of consistent intentionality-ascription based on Dennett's intentional stance. I will conclude that this middle ground provides a robust and comprehensive conception of AI agency.


Chair:
Time: 15:20-15:50, 20 September 2019 (Friday)
Location: SR 1.007

Louis Longin 
(Ludwig-Maximilians-Universität Munich, Germany)

I am a final-year Master's student in Philosophy at the Ludwig-Maximilians-Universität Munich. I have specialised in analysing the philosophical implications of artificial intelligence (AI), such as moral responsibility and agency. In my upcoming Master's thesis, I seek to develop a comprehensive, gradual account of AI agency which unifies the philosophical demand for higher cognitive capacities with the common intuition of agents as simple interactive systems.
