A conversation about why outsourcing the moral responsibilities of AI is not an option. Miguel Luengo, AI expert, entrepreneur and chief data scientist at UN Global Pulse, speaks with Konstanze Frischen, Ashoka’s global leader on Tech & Humanity. (Full bios below.)

Miguel, you have spent a lot of time examining the AI & Ethics landscape. What did you find?

Miguel Luengo: In the AI field, there is a lack of consideration for some of the core principles that went into constitutions around the world and inspired the Universal Declaration of Human Rights and the Sustainable Development Goals (SDGs). When you look at the principles that most corporations, think tanks, or governments propose should underpin AI, you’ll see an overwhelming emphasis on trustworthy AI: AI that is transparent, reliable, unbiased, accountable, safe, and so on. This is indeed a necessary condition, but it basically means technology working as it should. And I am thinking: this is great, but it’s not enough.

Why not? 

Take genetic engineering: we all want genetic engineering that is trustworthy in the sense that it works as it should, but that doesn’t imply it is okay to copy and paste pieces of the genome to create chimeras. The same goes for AI.

What then?

I argue that we need to move to a humanity-centric AI. If AI is a real game changer, we must take into consideration the implications of AI for humanity as a whole, at present and in the future. I call that solidarity. Yet only 6 of the 84 “ethical AI” guidelines examined by the Health Ethics & Policy Lab at ETH Zurich mention solidarity as a principle.

[Photo: Miguel Luengo, social entrepreneur and chief data scientist at UN Global Pulse. Credit: Ashoka]

How do you define solidarity in the AI context?

We need to a) share the prosperity and the burdens of AI, which implies redistributing productivity gains and collaborating to solve global problems caused by AI; and b) assess the implications AI has for humanity in the long term before developing or deploying AI systems. In other words, solidarity lives in the present and is also strategic, long-term thinking for the future.

How do we share prosperity?

For instance, by giving back the productivity gains that stem from AI, literally. We can look at it from two perspectives. One is that we share directly with those who contributed the data and actions used to create and train the AI models. Currently, the norm is that they don’t benefit financially from the outcome. But what if the data didn’t belong to the company supplying a service, but to the people who contributed it? Let’s assume patients provide data and doctors train an algorithm to detect a disease. Shouldn’t they all be compensated each time this algorithm is used for diagnosis somewhere else? Not in a one-off way, but each time? There could be a royalty model, as in the music industry: each time a song is played, the artist gets remunerated.
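To make the royalty analogy concrete, here is a minimal sketch of what per-use accounting could look like. Everything in it is hypothetical: the contributor names, the fee, and the revenue split are illustrative assumptions, not a description of any existing system.

```python
from collections import defaultdict

# Hypothetical per-use royalty: each time the model is used for a
# diagnosis, a small fee is split among the people whose data and
# labeling work trained it. All names and numbers are illustrative.
PER_USE_FEE = 0.10  # fee collected per model use, in some currency unit

class RoyaltyLedger:
    def __init__(self, shares):
        # shares: {contributor_id: fraction of each fee}, summing to 1.0
        assert abs(sum(shares.values()) - 1.0) < 1e-9
        self.shares = shares
        self.balances = defaultdict(float)

    def record_use(self):
        """Credit every contributor once per use, like a song play."""
        for contributor, share in self.shares.items():
            self.balances[contributor] += PER_USE_FEE * share

# Patients who provided scans and a doctor who labeled them.
ledger = RoyaltyLedger({"patient_001": 0.4, "patient_002": 0.4, "dr_lee": 0.2})
for _ in range(1000):  # 1,000 diagnoses run anywhere in the world
    ledger.record_use()
print({who: round(owed, 2) for who, owed in ledger.balances.items()})
# -> {'patient_001': 40.0, 'patient_002': 40.0, 'dr_lee': 20.0}
```

A real system would of course need verified contributor identities, auditable usage metering, and an agreed way to set the shares; the sketch only shows the accounting principle.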

And the other way to share prosperity is indirect, i.e. via taxes?

Yes, in my view, at the public level, taxes on AI or automation (i.e. robots) are an interesting option and could be a solution to a deep underlying problem: that these technologies will put many people out of jobs. As Yuval Harari says, we are in danger of creating a new, irrelevant class of people who can’t play in the digital economy and, more importantly, who are not needed to create wealth. This is especially dangerous in the platform economy, where the winner takes all. If all our data is extracted and used while the gains are concentrated in the hands of a few corporations, then we’ll need to tax the use of AI.

How would we get to an AI economy that works on the principle of solidarity?

The change will happen in overlapping and iterative stages. First, there is awareness: citizens are recognizing that the status quo is not okay, that our data is taken for free and sold. Second, new businesses and initiatives will emerge that take solidarity principles into account: they will give back to the people who helped them create their AI. Social entrepreneurs and B Corps can pave the way forward here. This alignment with citizens’ motivations and interests can give them a competitive advantage; they will be the responsible choice. We can expect big companies to then turn in this direction. And third, that dynamic can push new regulation. We urgently need AI regulatory frameworks contextualized for each sector, such as marketing, healthcare, autonomous driving, or energy.

This will also enable international coordination to respond when AI fails or spirals out of control.

Absolutely. That is solidarity with humanity. Take deepfakes, for instance. Anyone who is tech savvy can train a machine to automate hate speech. Fake videos are easily made and look 100% real. Imagine it’s election day in a country with a history of genocide, and thousands of deepfakes depicting ethnic violence circulate on the internet. That might constitute an emergency in which red lines of human rights are crossed in the digital world and the international community needs to act. Even in seemingly less dramatic instances, the complexity of responding to AI failures can be huge: assume someone finds a bias in a widely used AI model that underpins X-ray analysis at scale or manages global telecommunications infrastructure. That finding would require an orchestrated, complex operation to roll the model back everywhere. Right now, these digital cooperation mechanisms are not in place.

And the principle of solidarity will require us to develop mechanisms in both cases: prosperity and burdens.

Correct. The key is thinking far ahead and factoring in the impact of AI, even on future generations. I am concerned that leadership is too short-sighted when it comes to societal and economic implications. For instance, right now very, very few researchers and companies take into account the carbon footprint of their AI efforts. The CO2 implications of AI are huge. It should be the norm to estimate the CO2 impact of creating and operating your AI model and weigh it against the perceived benefits. Is it reasonable to waste incredible amounts of energy to teach a machine to spot pictures of cats and dogs on the internet? Not just solidarity, but sustainability should also be a core principle for the development of AI. And the principles are just the frame. AI must be applied beyond internet businesses and marketing. There is a global untapped market for AI to accelerate the attainment of the SDGs.
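As a back-of-the-envelope illustration of that norm, the sketch below estimates a training run’s footprint from hardware power draw, runtime, data-center overhead (PUE), and the local grid’s carbon intensity. All the input numbers are assumed placeholders, not measurements of any real model.

```python
# Rough CO2 estimate for training a model: energy drawn by the hardware,
# scaled by data-center overhead (PUE), converted via the grid's carbon
# intensity. Every input below is an illustrative assumption.

def training_co2_kg(num_gpus, gpu_power_watts, hours, pue, grid_kgco2_per_kwh):
    """Return estimated kilograms of CO2 emitted by a training run."""
    energy_kwh = num_gpus * gpu_power_watts / 1000 * hours * pue
    return energy_kwh * grid_kgco2_per_kwh

# Hypothetical run: 8 GPUs at 300 W each for two weeks, PUE of 1.5,
# on a grid emitting 0.4 kg CO2 per kWh.
estimate = training_co2_kg(num_gpus=8, gpu_power_watts=300,
                           hours=24 * 14, pue=1.5,
                           grid_kgco2_per_kwh=0.4)
print(f"Estimated training footprint: {estimate:.0f} kg CO2")  # ~484 kg
```

Even this crude arithmetic makes the trade-off discussable: a team can compare the estimate against the value of the model before committing to the training run.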

What do you think are some of the perhaps less-obvious obstacles that prevent a change towards thinking long-term about solidarity and sustainability in the tech industry?

Part of the problem is that people work in silos. Most of the time, discussions about the ethical principles of AI and their practical implementation happen separately among lawyers and ethicists, business people, or technical experts. And they do not speak the same language, so a lot is lost in translation. I’ve seen meetings where high-ranking policy makers talk about AI with expert technologists and business leaders, and the higher up you go, the clearer it often is that a lack of real understanding prevents people on either side from putting all the pieces together. We need a new generation that deeply understands both the technical details and the societal and economic implications of AI.

It seems employees of big tech companies are increasingly demanding that the industry take more responsibility for its actions. I’m thinking, for instance, of the letter thousands of employees signed that got Google to abandon a plan to provide AI for drone analysis to the Pentagon.

Yes, and that trend will grow. Top talent is starting to choose to work for companies whose values and impact make the world better. We cannot outsource the moral responsibilities of the technology we develop. It’s time to be clear: those of us who develop scalable technology need to think upfront about the potential risks and harms, and take a precautionary approach if needed. And ultimately, we must be held accountable for the consequences of using that technology at scale.

----

Next Now: The 21st century has ushered in a new age where all aspects of our lives are impacted by technology. How will humanity anticipate, mitigate, and manage the consequences of AI, robots, quantum computing and more? How do we ensure tech works for the good of all? This Ashoka series sheds light on the wisdom and ideas of leaders in the field.

Dr. Miguel Luengo-Oroz is a scientist and entrepreneur passionate about technology and innovation for social impact. As the first data scientist at the United Nations, Miguel has pioneered the use of artificial intelligence for sustainable development and humanitarian action. He is the Chief Data Scientist at UN Global Pulse, an innovation initiative of the United Nations Secretary-General. Over the last decade, Miguel has built teams worldwide bringing AI to operations and policy in domains including poverty, food security, refugees and migrants, conflict prevention, human rights, economic development, gender, hate speech, privacy, and climate change. Miguel is the founder of the social enterprise Spotlab, which uses mobile phones, 3D printing, and AI for the diagnosis of global health diseases. He is the inventor of Malariaspot.org (video games for collaborative malaria image analysis) and is affiliated with the Universidad Politécnica de Madrid. He was elected an Ashoka Fellow in 2013.

Konstanze Frischen is the global leader for Ashoka’s emerging focus on AI, technology, and ethics. A journalist, entrepreneur, and social anthropologist, she was one of the key actors who introduced social entrepreneurship to Western Europe: she founded Ashoka in her native Germany and co-led Ashoka in Europe, building up the largest network of social innovators. She is a member of the organization’s global leadership group and is based in Washington, DC.


