1 of 14

Assessing and Addressing Ethical Risk from Anthropomorphism and Deception in Socially Assistive Robots

Katie Winkle, Praminda Caleb-Solly, Ute Leonards, Ailie Turton, Paul Bremner

Presented by Steve Lewis

2 of 14

BS 8611

Robots and Robotic Devices: Guide to the Ethical Design and Application of Robots and Robotic Systems

3 of 14

Asimov's Three Laws of Robotics

4 of 14

5 of 14

Teddy the Anthropomorphic Robot

6 of 14

Table 1 - Risks of Anthropomorphism

Hazard         | Risk                                                            | Mitigation
---------------|-----------------------------------------------------------------|----------------------------------------------------------
Deception      | User believes robot has feelings                                | Minimise use of affective social interaction
Over-trusting  | User believes the robot to be more capable than it actually is | Make robot's capabilities (and limitations) clear
Uncanny Valley | User is uncomfortable                                           | Minimise unnecessary social behaviour and/or design cues
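As a minimal sketch (not from the paper or from BS 8611), the hazard/risk/mitigation structure of Table 1 maps naturally onto a machine-readable risk register; the class and field names below are illustrative assumptions.

from dataclasses import dataclass

# Illustrative only: Table 1 encoded as a simple risk register, in the
# spirit of BS 8611's hazard-based assessment. Names are assumptions,
# not taken from the paper or the standard.
@dataclass
class EthicalRisk:
    hazard: str      # ethical hazard (e.g. deception)
    risk: str        # harm to the user that may result
    mitigation: str  # design measure intended to reduce the risk

TABLE_1 = [
    EthicalRisk("Deception",
                "User believes robot has feelings",
                "Minimise use of affective social interaction"),
    EthicalRisk("Over-trusting",
                "User believes the robot to be more capable than it actually is",
                "Make robot's capabilities (and limitations) clear"),
    EthicalRisk("Uncanny Valley",
                "User is uncomfortable",
                "Minimise unnecessary social behaviour and/or design cues"),
]

for entry in TABLE_1:
    print(f"{entry.hazard}: {entry.risk} -> {entry.mitigation}")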

7 of 14

Studies

  1. The robot trained subjects (n = 92) to exercise, followed by a questionnaire.
  2. Subjects (n = 121) watched three videos of the robot training and filled out a questionnaire.

8 of 14

Pepper as an Exercise Coach

9 of 14

Robot Versions

  1. Control - asocial
  2. Similarity (low risk) - moderately social
  3. Goodwill (high risk) - most 'human-like' behaviour
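Purely as a hypothetical sketch, the three experimental conditions could be expressed as a configuration map; the condition names follow the slides, but the flags themselves are my own assumptions.

# Hypothetical mapping of the three robot versions to the social
# behaviour they enable and the ethical risk level they were designed
# to probe; the flag values are illustrative, not from the paper.
CONDITIONS = {
    "control":    {"social_behaviour": "asocial",    "ethical_risk": "none"},
    "similarity": {"social_behaviour": "moderate",   "ethical_risk": "low"},
    "goodwill":   {"social_behaviour": "human-like", "ethical_risk": "high"},
}

for name, cfg in CONDITIONS.items():
    print(f"{name}: {cfg['social_behaviour']} behaviour, {cfg['ethical_risk']} risk")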

10 of 14

Study 1 - Robot Coaching

11 of 14

Study 2 - Videos

12 of 14

Preferences

13 of 14

Discussion

  • Robots were judged either not deceptive or acceptably deceptive
  • Few subjects believed the robot could actually monitor their performance or perceive performance-related information
  • The Uncanny Valley was not much of an issue - Pepper is not convincing enough as a human
  • Subjects liked anthropomorphic robots

14 of 14

Conclusion

  • Most, but not all, people prefer robots to be anthropomorphic
  • Designers of socially assistive robots should consider the dangers of anthropomorphism
  • Results might differ in other populations, e.g. neurodivergent users, and this is worth considering