Without adequate safeguards, AI can widen social and economic schisms, leading to discriminatory outcomes
Since Czech writer Karel Čapek coined the word “robot” in his 1920 play R.U.R., humans have dreamed about intelligent machines. What if robots take over policing? What if nanny-bots look after our children and the elderly? What if — and this has been rich fodder for dystopian literature — they become more intelligent than us?
Surrounded as we are by the vestiges of our analogue world, to many of us, these wonderings may seem decades from fruition. But artificial intelligence (AI), the engine of the Fourth Industrial Revolution, is already very much with us.
AI’s exponential growth
It is embedded in the recommendations we get on our favourite streaming or shopping site; in GPS mapping technology; in the predictive text that completes our sentences when we write an email or run a web search. It promises to be even more transformative than the harnessing of electricity. And the more we use AI, the more data we generate, the smarter it gets. In just the last decade, AI has evolved with unprecedented velocity — from beating human champions at Jeopardy! in 2011, to vanquishing the world’s number one Go player in 2017, to decoding proteins (https://go.nature.com/30N9BQz) last year.
Automation, big data and algorithms will continue to sweep into new corners of our lives until we no longer remember how things were “before”. Just as electricity allowed us to tame time, enabling us to radically alter virtually every aspect of existence, AI can leapfrog us toward eradicating hunger, poverty and disease — opening up new and hitherto unimaginable pathways for climate change mitigation, education and scientific discovery.
Google has identified over 2,600 use cases of “AI for good” worldwide (https://bit.ly/3qSmsM2). A study published in Nature (https://go.nature.com/3tlzJyj) reviewing the impact of AI on the Sustainable Development Goals (SDGs) finds that AI may act as an enabler on 134 — or 79% — of all SDG targets. We are on the cusp of unprecedented technological breakthroughs that promise to positively transform our world in ways deeper and more profound than anything that has come before.
Yet, the study in Nature also finds that AI can actively hinder 59 — or 35% — of SDG targets. For starters, AI requires massive computational capacity, which means more power-hungry data centres — and a big carbon footprint (https://bit.ly/3lmsof4). Then, AI could compound digital exclusion. Robotics and AI companies are building intelligent machines that perform tasks typically carried out by low-income workers: self-service kiosks to replace cashiers, fruit-picking robots to replace field workers, and so on. And the day is not far when AI will edge out many desk jobs too, such as those of accountants, financial traders and middle managers.
Without clear policies on reskilling workers, the promise of new opportunities will in fact create serious new inequalities. Investment is likely to shift to countries where AI-related work is already established (https://bit.ly/2NnrMt7), widening gaps among and within countries. Together, Big Tech’s big four — Alphabet/Google, Amazon, Apple and Facebook — are worth a staggering $5 trillion, more than the GDP of almost every nation on earth. In 2020, when the world was reeling from the impact of the COVID-19 pandemic, they added more than $2 trillion to their value.
The fact is, just as AI has the potential to improve billions of lives, it can also replicate and exacerbate existing problems, and create new ones. Consider, for instance, the documented examples (https://bit.ly/30Ny8VI) of AI facial recognition and surveillance technology discriminating against people of colour and minorities.
Or how an AI-enhanced recruitment engine, based on existing workforce profiles, taught itself that male candidates were preferable to female.
AI also presents serious data privacy concerns. The algorithm’s never-ending quest for data has led to our digital footprints being harvested and sold without our knowledge or informed consent. We are constantly being profiled in service of customisation, putting us into echo chambers of like-mindedness, diminishing exposure to varied viewpoints and eroding common ground. Today, it is no exaggeration to say that with all the discrete bytes of information floating about us online, the algorithms know us better than we know ourselves. They can nudge our behaviour without our noticing. Our level of addiction to our devices, the inability to resist looking at our phones, and the chilling case of Cambridge Analytica — in which such algorithms and big data were used to alter voting decisions — should serve as a potent warning of the individual and societal concerns resulting from current AI business models.
In a world where the algorithm is king, it behoves us to remember that it is still humans — with all our biases and prejudices, conscious and unconscious — who are responsible for it. We shape the algorithms and it is our data they operate on. Remember that in 2016, it took less than a day for Microsoft’s Twitter chatbot, christened “Tay”, to start spewing egregious racist content, based on the material it encountered.
Ensuring our humane future
How then do we ensure that AI applications are as unbiased, equitable, transparent, civil and inclusive as possible? How do we ensure that potential harm is mitigated, particularly for the most vulnerable, including children? Without ethical guard rails, AI will widen social and economic schisms, amplifying innate biases at an irreversible scale and rate, and leading to discriminatory outcomes.
It is neither enough nor fair to expect AI tech companies to solve all these challenges through self-regulation. First, they are not alone in developing and deploying AI; governments also do so. Second, only a “whole of society” approach to AI governance will enable us to develop broad-based ethical principles, cultures and codes of conduct; to ensure the needed harm-mitigating measures, reviews and audits during the design, development and deployment phases; and to inculcate the transparency, accountability, inclusion and societal trust that AI needs to flourish and deliver the extraordinary breakthroughs it promises.
Given the global reach of AI, such a “whole of society” approach must rest on a “whole of world” approach. The UN Secretary-General’s Roadmap on Digital Cooperation (https://bit.ly/3cDBrV2) is a good starting point: it lays out the need for multi-stakeholder efforts on global cooperation so AI is used in a manner that is “trustworthy, human rights-based, safe and sustainable, and promotes peace”. And UNESCO has developed a comprehensive, global standard-setting draft Recommendation on the Ethics of Artificial Intelligence (https://bit.ly/3cC4pEH) for deliberation and adoption by Member States.
Many countries, including India, are cognisant of the opportunities and the risks, and are striving to strike the right balance between AI promotion and AI governance — both for the greater public good. NITI Aayog’s Responsible AI for All strategy (https://bit.ly/30LMXIv), the culmination of a year-long consultative process, is a case in point. It recognises that our digital future cannot be optimised for good without multi-stakeholder governance structures that ensure the dividends are fair, inclusive, and just.
Agreeing on common guiding principles is an important first step, but it is not the most challenging part. It is in the application of the principles that the rubber meets the road. It is where principles meet reality that ethical issues and conundrums arise in practice, and for these we must be prepared for deep, difficult, multi-stakeholder ethical reflection, analysis and resolve. Only then will AI deliver on its full promise for humanity. Until then, AI (and the humans who created it) will embody the myth of Prometheus: the Titan who shared the fire of the gods with mortals, and the trickster whose defiance of Zeus led to Pandora opening her box.
By Renata Dessallien, UN Resident Coordinator, India