Invitation to The Ethical Dilemma of AI: Guidelines on Human-Centric AI Design (Virtual Seminar)
Artificial Intelligence (AI) has become ubiquitous in the field of robotics, and it will significantly shape the development of humanity in the near future. As technology makes everyday life easier, fundamental questions arise about how these systems should be used, what they should be permitted to do autonomously, and what risks their use entails. Significant ethical and technical challenges therefore remain to be overcome.
This dialogue session explores the ethical principles that underpin the responsible use of AI. Across three sessions delivered by area experts, we will examine the policies and guidelines that keep AI in check, helping companies and businesses to benefit from new technologies.
This session also reflects more broadly on the role of AI technologies in society. Our speakers will discuss the implications for the self and for our concept of humanity in business and industry, as well as the safeguards being developed by experts worldwide to protect against the risks AI poses to humans.
Date: 5 May 2022 (Thu)
Time: 4pm-5.15pm SGT (GMT+8)
(Conducted Via Zoom)
Prof. Dr. Christoph Lütge
Chair of Business Ethics
Technical University of Munich (TUM)
The Ethics of Human-Centric AI
With the rising sophistication and applicability of AI, these technologies are becoming integrated into almost every aspect of human life as individuals outsource ever more of their tasks. On the one hand, this benefits society by liberating people from mundane tasks and enabling human self-realisation. On the other hand, since the consequences of technologies usually cannot be grasped ex ante, their adoption may entail implications that run against the interests of consumers, users, and people in general. This paper defines human-centric AI, discusses the ethical risks and benefits of the new technology for companies, and proposes ethical principles and policies for AI. At the dialogue session, we will share ethical guidelines for moving towards human-centric design in domains such as autonomous driving and health care.
Prof. Lütge conducts research in the field of business and corporate ethics. He explores regulatory ethics, that is, ethical behavior within the socio-economic framework of the globalized world. He also examines the role of competition, the incentives created by regulatory frameworks, and the adequacy of ethical categories. From 2007 to 2010, he served as acting professor at the Universities of Witten/Herdecke and Braunschweig. Since 2010, he has held the position of Peter Loescher Professor of Business Ethics at TUM.
Prof. Jacob Dahl Rendtorff
Department of Social Sciences and Business
Roskilde University (RU)
The Ethics of Responsible AI: Outline of ethical principles
This paper outlines the ethical principles of autonomy, dignity, integrity and vulnerability as applied to the use of AI technologies. The discussion focuses on general reflections on the role of AI technologies in society and on the implications for the self and our concept of humanity in business and industry. The paper also offers suggestions for action principles of a code of ethics to regulate AI in society.
Prof. Jacob Dahl Rendtorff, PhD, Dr. Scient. Adm., is Professor of Philosophy of Management and Business Ethics at the Department of Social Sciences and Business, Roskilde University, Denmark. Rendtorff’s research spans organization theory, management, responsibility, the ethics and legitimacy of business firms and corporations, corporate social responsibility, business ethics, sustainability, bioethics and biolaw, human rights, political theory and philosophy of law. Rendtorff was educated in philosophy and political science in Copenhagen, Paris and Berlin.
Prof. Mark Findlay
Professorial Research Fellow, School of Law; Director, Centre for AI & Data Governance
Singapore Management University (SMU)
The presentation will canvass difficulties with the current startup innovation model, in which market pressures and demands for equity funding tied to short-term profitability work against ethical decision-making across the ecosystem and against responsible innovation in terms of social sustainability.
Professor Mark Findlay is a Professor of Law at Singapore Management University, and Director of its Centre for AI and Data Governance, where he is a Professorial Research Fellow. In addition, he has honorary professorial visitorships/fellowships in the law schools at the Australian National University, the University of Edinburgh, York University and the University of New South Wales, as well as being an Honorary Senior Research Fellow at the British Institute for International and Comparative Law.