Gemma Galdon is a leading voice on tech ethics and algorithmic accountability. In our conversation, she explains the pitfalls of algorithms, the dangers of an “I’m just the tech guy” attitude, and how to help technologists overcome bias.
This piece is part of Ashoka’s series on Tech & Humanity
Konstanze Frischen: Gemma, your research and work set out to dismantle strong popular beliefs about technology.
Gemma Galdon: Yes. Our society is in love with technology. We tend to believe that every technology works well, and that it’s actually better than humans, more objective.
Frischen: So how do you explain that this is not the case, that technology in fact often reproduces bias and exclusion?
Galdon: At the end of the day, an algorithm is just a mathematical calculation: if A and B, then C, even if you can make it a lot more complex. But that's what it is, a set of commands. Algorithms can process a lot more data than humans, but they have no creativity or ability to improvise, and they learn from what you feed them. And that's where bias comes in.

Take banking algorithms. Until very recently, the banking representative of a family was more often than not the father, the man, so banks have far more historical information on men. If you create an algorithm based on historical information alone, the algorithm will conclude that women are riskier to lend to, even though statistics say that we are better at repaying loans. There are studies showing that women get 10 to 20 times less credit than men just because of the discriminatory data that goes into the algorithm. So in technical terms, a perfect algorithm is one that perfectly reflects the discrimination in society. On the Internet and in data sets, white men set the norm; anyone who is not a white middle-aged man is an outlier and is discriminated against.
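To make that mechanism concrete, here is a minimal sketch (in Python with scikit-learn; the data, thresholds, and variable names are invented for illustration and are not from the interview) of how a model trained only on past lending decisions absorbs the bias baked into those decisions:

```python
# Hypothetical illustration: a credit model trained purely on historical
# approvals reproduces the discrimination in that history.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Invented history: men and women draw incomes from the SAME distribution,
# but past (human) decisions held women to a higher bar, and the bank has
# far more records for men.
n_men, n_women = 1000, 100
income_men = rng.normal(50, 10, n_men)
income_women = rng.normal(50, 10, n_women)
approved_men = (income_men > 40).astype(int)      # approved on merit
approved_women = (income_women > 55).astype(int)  # held to a stricter bar

X = np.column_stack([
    np.concatenate([income_men, income_women]),
    np.concatenate([np.zeros(n_men), np.ones(n_women)]),  # 1 = woman
])
y = np.concatenate([approved_men, approved_women])

model = LogisticRegression(max_iter=1000).fit(X, y)

# The better the model fits this history, the more faithfully it learns
# that being a woman lowers the odds of approval.
print("coefficient on 'is_woman':", model.coef_[0][1])  # strongly negative
```

The model is doing exactly what it was asked to do; the unfairness lives entirely in the historical data it was given.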
Frischen: The paradox is that we often use algorithms because we want to be more objective, i.e., data-driven.
Galdon: Exactly! Here's another example of good intentions with bad outcomes. In the US, there have been experiments with teacher assessment via algorithms. The idea was: "Humans are biased. Maybe we are firing the wrong teachers. Let's use a neutral, rational algorithm to determine who the best teachers are!" When that algorithm was questioned, it was found that it judged the quality of teachers by their students' scores in math and language skills, because that was the only thing the algorithm could measure.
Frischen: The algorithm assumed the performance of students reflected the quality of teaching, without taking into account socioeconomic conditions, race, zip codes, and all the other things that influence school performance.
Galdon: It’s also giving teachers a really bad message. Just put your effort in math and language because everything else doesn’t matter – which is completely the wrong message. We want teachers to be engaged, to transmit values. All that was lost.
Frischen: So, despite good intentions, as these examples show, algorithms will reproduce bias, discrimination and other unwanted outcomes — unless you make an effort to correct for that. Which is where the Eticas Foundation comes in.
Galdon: Correct. Eticas has many lines of work. We do research. We promote other organizations working on responsible technology. We do strategic litigation. But we put most of our efforts into auditing algorithms. We have been looking into AI systems for the last few years, and from that experience we have developed a methodology for auditing algorithms, to help companies and governments end algorithmic discrimination.
Frischen: How does algorithmic auditing work?
Galdon: Our methodology has three main parts. First, we assess whether a specific social problem has been rightly conceptualized in terms of data inputs. That's really important: how do we translate a social concern into data points? How and why do we choose the data that we choose? There's an infamous example, linked to the prioritization of patients in an emergency room. An insurance company in the US that owns several hospitals used an algorithm to triage in the ER. It was later found that, because it was an insurance company, it had built the algorithm with the data it had, not the data it needed. The seriousness of your injury or illness was assessed on the basis of whether your condition had been expensive to treat in the past. That's an unethical way of approaching medicine; it's just wrong. You're making the wrong decisions because you're not taking medical issues into account. But because that's the data the company had, that's the data it used. We want people to avoid those mistakes, so we first try to frame the issue in terms that make sense.
Second, we define which communities could be discriminated against; in the case of banking, women. We protect them using the tools of statistics to assign risk more properly.
And finally, we look at the impact of the algorithmic decision. We want to make sure that the people who use those algorithmic decisions understand how they work and how they should incorporate them into their decision-making process, in order to avoid situations where bias is reintroduced into the process by humans.
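One concrete way to picture the second step, protecting at-risk groups statistically, is a disparate-impact check. The sketch below (in Python; the function, the numbers, and the four-fifths threshold are illustrative assumptions, not Eticas' actual audit code) compares a model's approval rates across groups:

```python
# Illustrative audit check: compare positive-outcome rates across groups
# and flag large gaps, using the common "four-fifths rule" heuristic.
from collections import defaultdict

def disparate_impact(decisions):
    """decisions: iterable of (group, approved) pairs, approved in {0, 1}."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += approved
    rates = {g: approvals[g] / totals[g] for g in totals}
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio  # a ratio below 0.8 is a common red flag

# Hypothetical lending decisions: men approved 80% of the time, women 50%.
decisions = ([("men", 1)] * 80 + [("men", 0)] * 20
             + [("women", 1)] * 50 + [("women", 0)] * 50)
rates, ratio = disparate_impact(decisions)
print(rates)   # {'men': 0.8, 'women': 0.5}
print(ratio)   # 0.625 -> below 0.8, so this system would be flagged
```

A real audit looks at far more than a single ratio, but checks of this shape are how abstract fairness concerns become testable, repeatable measurements.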
Frischen: And what is the response you’re seeing in the market?
Galdon: Over the last three years, the debate space has matured, but that still often does not translate into action. I think one of the problems is that the debate on bias and tech is often limited to abstract notions of fairness and non-discrimination. What we need is to land these in technical specifications. We don't need good intentions, we need good practices. We need people who show the way: how to protect women in banking algorithms, ethnic minorities in medical algorithms, children of migrants in child risk assessment algorithms, and so on. Through algorithmic auditing, that's what we're trying to do, and it's working pretty well. In our experience, no one has a budget for algorithmic auditing yet, but we are opening the space for that, and we are seeing progress. We're also fostering an ecosystem. We realize that auditing will not become the norm unless a lot of people know what it is and how to do it, so we also want to help others become algorithmic auditors and go around the world fixing tech.
Frischen: Do you see the multidisciplinary approach you foster – engineers, social scientists and social entrepreneurs working together with coders – becoming the norm?
Galdon: Yes. It’s clearly a problem that engineering, until now, has been incredibly boxed into just technical issues. There’s a feeling that “social problems are not my problem. I’m just the techie guy.” And that’s just really, really wrong. The decisions that those engineers make have profound social impact on the world. But then, can we ask engineers to perfectly code for a world that they are not experts in? Take a company, not everyone has to know about accounting, but it lets itself be helped by people that know accounting. So translated to tech, that means you will also need help when it comes to coding and algorithms. You will have your team of engineers, but you’ll need to make sure that at critical points, you incorporate the knowledge of sociologists, philosophers, changemakers, communities, in order to come up with a quality product that is fair and equitable and produces the intended impact – and complies with the law, and avoids reputational risks.
Frischen: How is the changing regulatory landscape helping?
Galdon: The law has been changing for quite some time, especially in Europe, but we have a huge enforcement problem. We need to figure out how to bridge the gap between having a law and having people actually realize there's a law and change their practices to abide by it. That's why algorithmic auditing is so timely: it's a practical response to the requirements of these laws. Then there is strategic litigation: going to court and pointing out the bad players in the industry to show them there are consequences. We're working on facial recognition at the moment, to take some specific developments in Spain to court. Luckily, we're joined by lots of other groups at the EU level working on this. In the near future, we'll look back at 2020 and ask: how come we were deploying algorithms back then that were not being audited? It's like selling a car without a seatbelt. It will be unthinkable.
Dr. Gemma Galdon-Clavell is the founder and CEO of Eticas Consulting, where she is responsible for the management, strategic direction and execution of the Eticas vision. She conceived and architected the Algorithmic Audit Framework technology that now serves as the foundation for Eticas' flagship product. Under Galdon-Clavell's leadership, Eticas has forged the development of a new market in digital ethics and trustworthy AI, reaching verticals including social services, healthcare, finance, government, education, cybersecurity and more. She also serves as a tech ethics adviser to national and international public and private institutions, and is a highly sought-after keynote speaker and media contributor on a mission to shift the way we think about the promise of technology. Gemma Galdon became an Ashoka Fellow in 2020.