Human Rights Magazine

AI and its impact on human rights

Charlotte Power Season 4 Episode 2

Human Rights Magazine is produced by The Upstream Journal magazine. The host, Derek MacCuish, is editor of both. If you agree that informed reporting on human rights and social justice issues is important, your support would be welcome. Please rate the podcast wherever you listen to it, and tell your friends about episodes that you find interesting. Why not consider making a financial contribution to help us cover costs? You are always welcome to email us with your comments.

Intro Derek MacCuish: In this episode, we take a somewhat different approach to our focus on human rights and look not at a social situation but rather at technological possibilities. Artificial intelligence is rapidly emerging as a new tool, as advances in computer technology accelerate the ability of machines to learn and emulate human thinking. Listen as Charlotte Power explores the impact that AI may have on human rights, especially in humanitarian work.

Host Charlotte Power: How can artificial intelligence be leveraged to protect human rights? AI’s life-saving capabilities have already been embraced by the humanitarian sector. Different AI tools are being used in both conflict and disaster zones to save time and lives. However, the consequences of deploying flawed AI technology in the field are severe. This episode will explore the ways AI can uplift human rights, as well as the risks and dangers of AI misuse. Joining me today are three experts who work in this field: Professor Rachel Kiddell-Monroe, Jennifer Addison, and Dr. Rowena Rodrigues.

Host: To get a better understanding of AI’s potential in the humanitarian field, I spoke with Rachel Kiddell-Monroe, who has worked in the humanitarian field for 30 years, including with MSF during the Rwandan genocide.

“So, I'm trained as a lawyer. I started working on human rights issues, especially linked to climate and Indigenous peoples. And then after that, I went more into humanitarian work with Doctors Without Borders, MSF. That really was a different angle of human rights for me, one more linked to humanitarian crises. I've spent a lot of time in the field, and so I really saw how human rights fed through that, especially in my work in Rwanda. When you're working in a genocide, you really see that interface between humanitarianism and human rights in a really tight way.”

Host: Could you tell me about your time with MSF?

“I had to do one report a month. And that was sent by fax. And if they got it, they got it. And if their fax didn't work, they didn't get it. So it was a completely different world. This was the pre-digital world, and it had all sorts of great characteristics. But it also had massive faults, and those massive faults were that we didn't know what was going on.”

Host: You mentioned that one of the main struggles you dealt with in the field was the lack of technology and inaccessible information. Given how much positive potential AI holds, do you think humanitarian actors have a duty to embrace new technology?

“AI, I think, can do positive things: it can analyze information for us, it can bring together essential information … we can get all of that, which saves us weeks and months of research. That is great and gives us a starting point. But that in itself is not going to change anything. It's what we then do with that information.”

Host: Can you give me an example of a specific kind of technology that humanitarian actors are currently using that is making a positive difference?

“AI can be really positive in things like scientific or medical developments. MSF, using AI, has now developed an antibiotic test for multidrug resistance. It's amazing, and it's 90% effective. I'm sure you've seen it; it's just extraordinary.”

Host: Wow, I can see how that can be life-changing, especially in remote regions. Whether it be in emergency responses, medical diagnostics, or even search and rescue, we can't deny that AI holds immense life-saving potential. To get a better understanding of what is necessary to develop AI tools that can support human rights, I spoke with Jennifer Addison. She is a project manager at Montreal's AI4Good Lab, whose mission is to mentor women and gender-diverse individuals as they diversify the tech space.

“I've always wanted to work at organizations that, from my perspective, are aiming to contribute something or have a positive impact on society. I found my way to the AI for Good Lab through an organization called Queer Tech, and they're working to queer the tech ecosystem, which is also super important to me as a queer person. The AI4Good Lab is a seven-week program, and for the last three weeks the trainees are split into groups to work on an AI project that addresses a social issue or is for social good. The lab is delivered in partnership with CFAR and Vector, and the design fabrication zone is at Toronto Metropolitan University, and Amy …”

Host: That’s super cool. When it comes to tech development, I would love to understand why we first need to focus on the human being behind the code. 

“When we're talking about tech or AI, let's not forget that there is a human, or a team of people, behind these things that we are developing. We are constantly being fed information that is rooted in bias and stereotypes and tropes, et cetera, and we'd be mistaken to think that that's not then being coded into whatever we are working on, or that it's not touching the development of whatever projects we're working on. I think talking about AI, or anything tech-related, as just the thing that you're creating, the code or the prototype, can unfortunately create situations where we forget or eclipse the fact that there is still a person behind this, and that that person, or the people on the project team, are a part of this thing that they are creating. Their lived experience will probably be reflected in the final product. Their own biases, again, will be reflected in the final product.”

Host: Based on what you are saying, diversifying the tech space can be a tool to strengthen the technological product itself. What other approaches are being implemented to combat algorithmic bias?

“DEI is important because it can present an opportunity to try to correct some of the inequities that persist today. Certainly, those cannot be corrected without an actual reckoning or acknowledgement of why those inequities exist, or why the systems were built and designed intentionally to be inequitable or exclusionary.”

“When I think about the AI for Good Lab, for example, I'm constantly reminded of why this space exists. On a very basic level, this program is for women and gender-diverse individuals. I was talking to one of our trainees recently and she said to me, ‘You know, I am the only girl in my computer science classes, and I find it so difficult. I don't feel comfortable asking questions. I'm the only one in the space, or the only one that has made it in this space, and now I am carrying the weight of responsibility of representing everyone that looks like me or is like me,’ and that is an awful feeling.”

Host: Because of human bias, AI cannot truly be neutral. The consequences of excessive bias can be very serious when it comes to human rights abuses. To understand more about AI's potential negative effects on individuals, I spoke with Dr. Rowena Rodrigues, who has a legal, ethical, and technological background. Dr. Rodrigues is currently the co-lead of Trilateral Research, a UK-based organization providing ethical AI solutions.

“So I think the ability that AI has to identify, classify, and discriminate, I think that's what magnifies the potential for human rights abuses.”

Host: Right, so why does this happen even if technology is designed to help individuals?

“An AI system is only as good as the data it is trained on. And I think if there are gaps in this, if this was not done well, and if it is focused, for example, on some parts of the population and not on others, then it can have disproportionate or the wrong impacts on the wrong groups … The use of AI in healthcare can be beneficial, which is a very good thing. But because the AI system, say for example, was trained on a certain cohort of the population, and then was taken and implemented on another, it could result in the wrong types of treatments. This affects the right to life. It might be the difference between life and death sometimes.”

Host: Thank you, Dr. Rodrigues. Could you explain how AI can also heighten the vulnerability of individuals? What role does technology play in protecting human rights?

“The other key issue, I think, is the robustness, security, and safety one. I think if we get this wrong, nothing else matters, right? AI systems must function in a robust, secure, and safe way because of the potential impact they can have on human populations. And you know, if they're not robust, if they're not secure, and if they're not safe, they make us more vulnerable. And that's not the intention of an AI system.”

Host: That makes sense. Professor Kiddell-Monroe, what would the consequences of a data breach or a privacy violation look like on the ground?

“The big problem with all of this is that it's great to have this information, but it's all about how it is used by human beings in the end. And this is where I think the fear comes in. The fear is that if you start to have that kind of information, you'll be able to monitor people's movement; you can monitor movements of refugees and migrants, and that's great! Okay, let's say for humanitarian use it's pure intention, it's ethical. But if that information gets into the warring factions' hands, it becomes extremely dangerous. So it's going to be about the checks and balances and boundaries around it.”

Host: So, even if AI is originally deployed to support individuals, it may lead to more harm than good if systems aren't robust, or if data isn't being properly protected. Because AI is developing so quickly, I know there is a lot of fear surrounding this topic at the moment. While it's important to talk about these risks, I also wanted to learn more about how AI is uplifting communities across the globe.

Host: Dr. Rodrigues, how can AI positively impact human rights?

“So, in the simplest form, AI solutions can help prevent poverty and disease, and by implementing them in the medical sector, you could say, ‘hang on a minute, that's positive for the right to health care.’ So, it is having a positive impact on health care, on the right to life. AI in education can affect your right to education, you know, with translation tools making education better for everyone.”

Host: That’s fantastic. Professor Kiddell-Monroe spoke to me about her current work at See Change Initiative, the humanitarian organization she runs. She is calling for community-first approaches to address the shortcomings of relying solely on technology to meet the needs of vulnerable communities.

“I'm the executive director of an organization that I started in 2018, called See Change Initiative, which is working to reimagine humanitarian action and see how we can put communities at the heart of humanitarian health crisis responses. Ideally, if I could, I would apply a community-first approach to it … For instance, we are all talking a lot about traditional knowledge, ways of doing, ways of being, which are not digitized, which are not based in Western frameworks and concepts. How do we value those as much as we would value the information that comes from a nonhuman intelligence? And how would we create something which is able to ensure that human intelligence remains central, and that the nonhuman intelligence becomes something that supports and uplifts us and helps the human intelligence to advance and develop? It sounds a bit esoteric, but let me put it in a very practical way, around a health crisis, say TB in the North. You know, the community of Pond Inlet (Nunavut) has a huge TB outbreak right now. There's a lot that AI can do to understand how TB spreads in the community. But how about we put first the Indigenous knowledge, the community's knowledge of TB and its impact, as well as the knowledge of how their community functions, where the elders are, where the youth are, and then do that kind of human mapping of the whole thing?”

Host: Jennifer Addison reinforced this approach as well. In her work, she advocates for constant reflection and collaboration between parties. She believes that different perspectives can strengthen communities and improve AI's successful implementation on the ground.

“If we're talking about communities that have been the most marginalized or maybe are the most vulnerable, let's acknowledge that there are existing trust issues and that AI is no different. If I have had trust issues that are completely valid with various institutions, why would anything be different? If suddenly I'm supposed to just say, ‘Oh, AI is going to change everything, and it's going to be great for you, and we have solutions to all of your problems,’ why would I suddenly say, ‘Oh, okay, yes, this is going to be different this time’? This is what I mean by going back to understanding the historical context of the groups that you're working with. And so building trust takes time.”

Host: While AI’s potential is undeniable, all of my interviewees agreed that we need more accountability mechanisms. According to them, security, constant adaptation, and ethics are essential to ensure human rights are kept central. Dr. Rodrigues emphasizes the need for more AI guidelines to be developed.

“I think, with regard to human rights, what we did find was that you can only enjoy human rights as long as they are safeguarded and there are effective mechanisms to report and address concerns. So there was a little bit of a gap in terms of the National Human Rights Institutions being fully equipped to monitor activities with regard to, you know, AI, but I think now there's more guidance being developed.”

“I think any organization that either uses or deploys an AI system should be accountable for the system's proper functioning. And proper functioning doesn't mean, you know, just the technical functioning; I think it's also respecting the other principles, respecting the ethical requirements, what it is the system needs to do in terms of any roles. I think the system should also be consistent with the state of the art, and the state of the art moves on quite a lot.”

“I think that it's not just about compliance, but it's also about ethics. Ethics gives us that lens of looking at something from more than a compliance point of view, to thinking about what the consequences are, and to thinking about it constantly, so you don't stop. I think ethics is that continued, renewed ability to look at it … You might just do a human rights impact assessment, for example, or you might do an integrated one that looks at ethical issues, human rights issues, data protection issues, and all of that stuff. So there are many different ways to tailor it.”

“Because of the way technologies develop and evolve, I think you need to keep refreshing your lens. Society's needs change as well.”

Host: Kiddell-Monroe also agreed with the need for more ethical guidelines in the creation of AI.

“But in humanitarian action, it needs an ethical framework where everything goes through review, just like we do with research. We have to have research boards and approvals and all of that; we need to do the same thing for the use of AI. I have no idea how to do it, but this is the only way that we can keep that humaneness in it, because computers cannot analyze ethics. A computer can tell you what the ethics are, but it can't tell you what you should do, or explain why. Only human beings can do that.”

Host: We also discussed the last piece of the puzzle, regulation, and the role it plays in safeguarding rights. Regulation is definitely not simple; as we know, the law is not always able to catch up with innovation. Because of these existing challenges, Dr. Rodrigues argues that all actors at play will have to be creative and collaborate to come up with solutions that protect everyone and allow technological development to prosper.

“But I think with AI, what we need to address are the challenges. There is no silver bullet here. I think what we need is something that is dynamic, because of the nature of the technology; it is still changing all the time, right? Things are evolving, new solutions are coming up, and sometimes things are also stagnating, so some things are just not working out. The problem with the law is that the law is a slow-paced creature, so that's always going to be a challenge. Now we've got developments like the EU AI Act that are coming about, and they're adopting a risk-based approach.”

“And I think, you know, that thing about ‘should we’: it's not ‘should we regulate’, it's ‘we need to regulate for more responsible AI’, right? So I think we want to build responsible AI to address complex problems. And I think we also understand that we need to mitigate the risks and challenges of unregulated use. That's where we are coming at it from, because a one-versus-the-other approach does not work. Whether it's the industry or the regulators, I think they need to work together. Because if you ask industry, they'll say, ‘The regulators don't get us because they don't understand the technology,’ but you can only get that relationship when you talk to one another. So I love the fact that now we have those fora where people sit down, there's more technical engagement with the legal sector, and there's more legal engagement with the technical sector.”

“Regulation with innovation, that's the way I would see it. How do we make AI a win-win for humanity?”