In the past five years, private companies, research institutions and public sector organizations have issued principles and guidelines for ethical artificial intelligence (AI). However, despite an apparent agreement that AI should be ‘ethical’, there is debate about both what constitutes ‘ethical AI’ and which ethical requirements, technical standards and best practices are needed for its realization. To investigate whether a global agreement on these questions is emerging, we mapped and analysed the current corpus of principles and guidelines on ethical AI. Our results reveal a global convergence emerging around five ethical principles (transparency, justice and fairness, non-maleficence, responsibility and privacy), with substantive divergence in relation to how these principles are interpreted, why they are deemed important, what issues, domains or actors they pertain to, and how they should be implemented. Our findings highlight the importance of integrating guideline-development efforts with substantive ethical analysis and adequate implementation strategies.
The Guidelines for Human-AI Interaction synthesize more than 20 years of thinking and research in human-AI interaction. Developed in a collaboration between Aether, Microsoft Research, and Office, the guidelines were validated through a rigorous, four-step process described in the CHI 2019 paper, Guidelines for Human-AI Interaction. They recommend best practices for how AI systems should behave upon initial interaction, during regular interaction, when they are inevitably wrong, and over time.
We hope you can use these Guidelines for Human-AI Interaction throughout your design process as you evaluate existing ideas, brainstorm new ones, and collaborate with the multiple disciplines involved in creating AI.
Artificial intelligence (“AI”) raises ethical concerns for both individuals and organizations. Google, Facebook and Stanford University have invested in AI ethics research centers, and in 2018, France and Canada jointly sponsored an international panel to discuss the “responsible adoption” of AI. Earlier this year, the European Commission released its guidelines to encourage ethical development of “trustworthy AI.”
And recently, the Korea Communications Commission (“KCC”) and the Korea Information Society Development Institute (“KISDI”), a global ICT policy institute, jointly announced similar principles to govern the creation and use of AI, focused on the proper protection of human dignity (“the AI Ethics Principles”). These basic rules are to be complied with by all members of society, including the government, corporations, and users.