· The world needs a “Salt March moment” to secure human autonomy from tech oligopolies controlling Artificial Intelligence (AI)
· An international collaborative approach is imperative to prevent the misuse of AI to manipulate public opinion
· Ethical principles, common to the plurality of philosophical traditions, must shape the vision for AI in the future
The consensus in the recently concluded two-day Dialogue on Ethics of Artificial Intelligence (AI) was that plural philosophical perspectives, especially those from the East, and a global and inclusive regulatory framework could balance the risks and benefits to humanity’s quest for social justice and equality.
Convened by the United Nations in India and the O P Jindal Global University’s School of Government and Public Policy, the conference brought together some of the world’s leading ethicists, technologists, legal experts, philosophers, theologians, mathematicians, scientists and diplomats to discuss the emergence of Artificial Intelligence as both a challenge and a solution to social issues, its ethical implications, and to evaluate the existing frameworks for AI governance.
The two-day virtual dialogue advocated assimilating various philosophical schools of thought – both Eastern and Western – to make the ethical discourse around Artificial Intelligence diverse and inclusive, based on values and human rights.
Underlining the need for diverse philosophical views, Ms Renata Dessallien, United Nations Resident Coordinator in India, said: “India brings with it 2500 years of extraordinarily profound, diverse, living, philosophical and spiritual heritage; and it seems only natural that India contributes proactively to this crucial topic.”
Ms Dessallien emphasised the inadequacy of the current ethical thinking around AI. “The impact of AI on human agency, relationality and intentionality, not to mention impacts on human flourishing and well-being, is missing from the broader discussion around Ethics of AI. These are often absent or are dealt with in a most cursory manner, yet they are fundamental. We have developed laws and regulatory frameworks to address many of these preoccupations. In some instances, frameworks only extend our existing ethical guardrails from the physical to the digital world. But in other instances, AI challenges our ethical thinking very profoundly.”
Mr Amitabh Kant, CEO, NITI Aayog, inaugurated the Dialogue and highlighted the strategy for Responsible AI in India. “There must be a robust and reliable enforcement mechanism that protects the safety of citizens, environment and businesses while promoting equal opportunity for research and innovation”, Mr Kant said. “Any mechanism to regulate AI must be proportional to the risk and strike a balance between innovation and responsible use. This requires a holistic understanding of the complex interaction of AI in our daily lives. It goes beyond the scope of just policymakers or technologists, and there is an increasing relevance of multidisciplinary thinking to think through and identify the various ethical ramifications,” Mr Kant added.
Stressing on the significance of AI in the global context and its prospects for humanity, Prof C Raj Kumar, the founding Vice-Chancellor of O P Jindal Global University, said, “Today, we are challenged by a plethora of global issues including the pandemic – a public health crisis, education, poverty, climate change and many more for which UN Sustainable Development Goals have been established to shape a positive trajectory in the evolution of humankind. With AI having the potential to help us overcome these challenges, it is all the more important to deliberate on the implementation of AI in achieving these goals in a more time-efficient manner — but within the context and spectrum of the ethical challenges,” he said.
Leading the discussion on ethics and regulatory frameworks, Dr Bibek Debroy, Chairperson of the Economic Advisory Council to the Prime Minister, counselled humility for humanity. “Whenever we talk about ethics, we are talking about laws. Who framed these laws, and do robots understand these laws? In our arrogance, we tend to assume that we have the right to frame these laws and that AI will automatically accept them.”
On whether AI will take over human jobs, he further emphasised: “I think the labour-capital choice is a function of relative prices. In a country like India, the relative price of AI will always be higher than the relative price of labour, because AI is capital-intensive and technology-intensive. So, in a limited sense, there are some segments where AI can generally substitute for labour, but the labour-capital choice is a relative one.”
Positioning India as the garage of artificial intelligence systems, Dr Anna Roy, Senior Advisor at NITI Aayog, underlined the importance of focusing on a Responsible AI ecosystem, which would be key to realising that potential. “Unless we deal with these issues, we do not generate the trust, and that will be a barrier to scaling up. If you solve for India, you solve for 40% of the world. India’s strategy for #AIForAll aims to realise both the economic and social potential of AI. Ethical aspects, if neglected, can have serious social and economic consequences.”
The conference examined existing ethical frameworks and how they should be re-engineered to meet future challenges by incorporating pluralistic ethical principles and the value-addition that different philosophical traditions can make to science and technology.
Some of the key takeaways from the conference are:
· Human consciousness would always be sacred, and it would not be possible for machines, no matter how human-like or intelligent they are, to acquire consciousness in the way the human soul (atman) has consciousness.
· Feelings of empathy are the source of moral behaviour or ethics, and hence robots (intelligent technologies) cannot themselves be morally right or wrong; only their creators can be.
· We need to focus on local conditions as we deal with AI ethics, including poverty, the effects of global warming and widespread unemployment.
· A global governance network that includes a mandate of effective governance and representation from multiple stakeholders, both at the national and international levels, must effectively manage Artificial Intelligence.
· Though difficult, a regulatory framework for AI can only be brought about by consensus and a common approach, bringing together people who understand technology, the ramifications for global justice and human rights, and regulations, laws and economics.
· Ethical diversity and a resilient, inter-cultural ethical ecosystem can address the predicament of #EthicalAI in intelligent technologies. Plural ethical perspectives originating from different philosophies must help humanity to harness AI technologies for the common good.
· Creating legal frameworks to support equality, democracy and human autonomy, incorporating insights from Eastern philosophical traditions, is the task ahead.
· AI must be used to optimise and balance the human impetus for altruism, social justice and equality.
· All AI devices must incorporate fundamental ethical principles in their very design. Therefore, the set of persons who work on AI design and programming cannot be limited to scientists and technicians, and must include philosophers, spiritual leaders and civil society representatives.
Dialogue is available here: https://www.youtube.com/c/unitednationsinindia
For more details, contact: Anjoo Mohun, email@example.com Shachi Chaturvedi, firstname.lastname@example.org