SOPhiA 2021

Salzburgiense Concilium Omnibus Philosophis Analyticis


Programme - Talk

Robotic deception and betrayal
(Ethics)

In the coming decades, sociable robots will make their way into care institutions, bedrooms, schools, and businesses, but philosophers have expressed concerns about interacting with them. One common criticism of sociable robots is that they are deceptive, yet there has been confusion about the nature of the deception they engage in. John Danaher is seemingly alone in his attempt to clarify robotic deception. He separates it into three types: (A) deception about the world external to the robot; (B) the robot suggesting it has states/capacities which it does not have; and (C) the robot suggesting it does not have states/capacities which it does have. I build on Danaher's account and challenge it in two ways. (1) We should not conflate states and capacities, for deception about the former is potentially far more morally troubling than deception about the latter: a robot which says that it cannot do something when really it can is frustrating, but much less morally troubling than one which says that it is not doing something when really it is, e.g. video recording. (2) Danaher claims that when a robot suggests it is not doing something which it is doing, this can amount to betrayal, whereas if the robot suggests it is doing something when it is not, this is not morally troubling. I show how the latter can amount to (morally troubling) betrayal in the same way as the former.

Chair: Jon Rueda
Time: 14:00-14:30, 11 September 2021 (Saturday)
Location: SR 1.004

Karen Lancaster  
(University of Nottingham)
