
The actual risks of AI are nearer than we expect


William Isaac is a senior scientist on the ethics and society team at DeepMind, the AI company that Google acquired in 2014. He also co-chairs the Conference on Fairness, Accountability, and Transparency – the premier annual gathering of AI experts, social scientists, and lawyers working in the field. I asked him about the current and potential challenges of AI development, as well as possible solutions.

Q: Should we be concerned about super-smart AI?

A: I want to set that question aside a bit. The threats overlap, whether it's predictive policing and risk assessment in the near term or more scaled and advanced systems in the longer term. Many of these issues also have a basis in history. So the potential risks, and the ways of approaching them, are not as abstract as we think.

There are three areas I would highlight. Probably the most pressing one is the value alignment question: how do you actually design a system that can understand and implement the varied forms of preferences and values of a population? In the past few years we have seen attempts by policymakers, industry, and others to embed values into technical systems at scale – in areas like predictive policing, risk assessment, hiring, and so on. It is clear that these systems exhibit some form of bias that reflects society. The ideal system would balance out the needs of the many stakeholders and the many people in the population. But how does society reconcile its own history with its aspirations? We're still struggling with the answers, and that question is only going to get exponentially more complicated. Getting this problem right is not just something for the future but for the here and now.
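To make the kind of bias Isaac describes concrete, here is a minimal sketch of one common audit step: comparing how often a risk score flags people in different demographic groups. The column names, threshold, and toy scores are hypothetical, and this is only one of many possible fairness measures, not a method attributed to Isaac or DeepMind.

```python
# Compare the rate of adverse outcomes (being flagged high-risk) across
# demographic groups. Data and column names are hypothetical.
import pandas as pd

def flag_rate_by_group(df: pd.DataFrame, threshold: float = 0.5) -> pd.Series:
    """Share of each group flagged as high-risk at the given score threshold."""
    flagged = df["risk_score"] >= threshold
    return flagged.groupby(df["group"]).mean()

if __name__ == "__main__":
    scores = pd.DataFrame({
        "group":      ["a", "a", "a", "b", "b", "b"],
        "risk_score": [0.2, 0.6, 0.7, 0.3, 0.4, 0.55],
    })
    rates = flag_rate_by_group(scores)
    print(rates)
    # A large gap between groups (a demographic parity difference) is one
    # warning sign, though no single metric settles the alignment question.
    print("parity gap:", rates.max() - rates.min())
```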

The second would be demonstrable social benefit. Up to this point, there is little empirical evidence confirming that AI technologies will deliver the broad social benefits we aspire to.

The last one, I think, is what anyone working in the space worries about the most: what are the robust mechanisms of oversight and accountability?

Q: How do we overcome these risks and challenges?

A: Three areas would go a long way. The first is to build a collective muscle for responsible innovation and oversight. Make sure you're thinking about where the forms of misalignment, bias, or harm exist. Make sure you develop good processes for ensuring that all groups are engaged in the process of technology design. Groups that have been historically marginalized are often not the ones getting their needs met. So how we design processes to actually do that is important.

The second is accelerating the development of the socio-technical tools to actually do this work. We don't have a whole lot of tools.

The last is to give researchers and practitioners – particularly researchers and practitioners of color – more resources and training to do this work. Not just in machine learning, but also in STS [science, technology, and society studies] and the social sciences. We want not just a few individuals but a community of researchers to really understand the range of potential harms that AI systems pose, and how to successfully mitigate them.

Q: How far has the AI field come in thinking about these challenges, and how far does it still have to go?

A: I remember that in 2016 the White House had just put out a big-data report, and there was a strong sense of optimism that we could use data and machine learning to solve some stubborn social problems. At the same time, there were researchers in the academic community who had been flagging, in a very abstract sense, "Hey, there are some potential harms that could be done through these systems." But the two camps were largely not interacting at all. They existed in distinct silos.

Since then, we've had much more research examining the intersection between known flaws in machine-learning systems and their application to society. And once people saw that interplay, they realized, "Okay, this is not just a hypothetical risk. It's a real threat." So if you look at the field in phases, phase one was very much about establishing that these concerns are real. Phase two is now beginning to grapple with the broader systemic questions.

Q: So, are you optimistic about achieving broad, beneficial AI?

A: I am. The past few years have given me a lot of hope. Take facial recognition as an example. Joy Buolamwini, Timnit Gebru, and Deb Raji did great work uncovering intersectional disparities in accuracy across facial recognition systems [i.e., showing that these systems were far less accurate on Black female faces than on white male faces]. There has been rigorous advocacy from civil society to defend human rights against the misuse of facial recognition. And there has been great work by policymakers, regulators, and community groups on the ground to communicate exactly what facial recognition systems are and what risks they pose, and to demand clarity about the benefits to society. That's a model for how we could imagine engaging with other advances in AI.
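For readers who want a sense of what an intersectional accuracy audit in the spirit of that work looks like in practice, here is a minimal sketch: group a benchmark's per-image predictions by gender and skin type and compare accuracy across subgroups. The column names and toy records are hypothetical, not the actual Gender Shades data or methodology.

```python
# Group predictions by demographic subgroup and compare accuracy.
# Column names and toy data are hypothetical.
import pandas as pd

def subgroup_accuracy(df: pd.DataFrame) -> pd.DataFrame:
    """Return accuracy and count per (gender, skin_type) subgroup.

    Expects columns: 'gender', 'skin_type', 'label', 'prediction'.
    """
    df = df.assign(correct=(df["label"] == df["prediction"]))
    report = (
        df.groupby(["gender", "skin_type"])["correct"]
          .agg(accuracy="mean", n="size")
          .reset_index()
    )
    return report.sort_values("accuracy")

if __name__ == "__main__":
    # Toy records standing in for a labelled benchmark.
    data = pd.DataFrame({
        "gender":     ["female", "female", "male", "male", "female", "male"],
        "skin_type":  ["darker", "darker", "lighter", "lighter", "lighter", "darker"],
        "label":      ["female", "female", "male", "male", "female", "male"],
        "prediction": ["male",   "female", "male", "male", "female", "male"],
    })
    # Large gaps between the best- and worst-served subgroups are the
    # kind of disparity such an audit is designed to surface.
    print(subgroup_accuracy(data))
```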

The challenge with facial recognition, though, is that we had to work through these ethical and values questions while the technology was already being deployed in public. My hope is that, in the future, some of these conversations will happen before the potential harms arise.

Q: What do you dream of when you dream of the future of AI?

A: It could be a great equalizer. For example, if you had AI teachers or tutors that could be available to students and communities where access to education and resources is very limited, that would be very powerful. And that's a non-trivial thing to want from this technology. How do you know it's empowering? How do you know it's socially beneficial?

I went to graduate school in Michigan during the Flint water crisis. When the initial cases of lead pipes emerged, the records of where the piping systems were located were on index cards in the basement of an administrative building. The lack of access to technology had put them at a significant disadvantage. It means that the people who grew up in those communities, over 50% of whom are African American, grew up in an environment where they weren't getting basic services and resources.

So the question is: if done properly, could these technologies improve their standard of living? Machine learning was able to identify and predict where the lead pipes were, which reduced the actual repair costs for the city. But it was a massive undertaking, and it was rare. And as we know, Flint still hasn't gotten all of the pipes removed, so there are political and social challenges as well – machine learning will not solve all of them. But the hope is that we develop tools that empower these communities and enable meaningful change in their lives. That's what I think about when we talk about what we're building. That's what I'd like to see.
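The Flint example Isaac describes can be sketched in a few lines: train a classifier on parcels whose service-line material has already been verified, then rank unverified parcels by predicted probability of lead so that limited inspection and repair budgets go to the highest-risk homes first. The features, model choice, and toy data below are assumptions for illustration, not the actual system used in Flint.

```python
# Rank unverified parcels by predicted probability of a lead service line.
# Feature names, model choice, and data are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

def rank_parcels(inspected: pd.DataFrame, uninspected: pd.DataFrame) -> pd.DataFrame:
    features = ["year_built", "assessed_value", "ward"]
    model = GradientBoostingClassifier(random_state=0)
    # Fit on parcels whose service-line material has been physically verified.
    model.fit(inspected[features], inspected["has_lead_line"])
    scored = uninspected.copy()
    scored["p_lead"] = model.predict_proba(uninspected[features])[:, 1]
    # Highest-risk parcels first, so limited repair budgets go furthest.
    return scored.sort_values("p_lead", ascending=False)

if __name__ == "__main__":
    inspected = pd.DataFrame({
        "year_built":     [1920, 1955, 1978, 1930, 1990, 1948],
        "assessed_value": [21000, 34000, 52000, 19000, 61000, 28000],
        "ward":           [3, 5, 7, 3, 8, 5],
        "has_lead_line":  [1, 1, 0, 1, 0, 1],
    })
    uninspected = pd.DataFrame({
        "year_built":     [1925, 1985],
        "assessed_value": [20000, 58000],
        "ward":           [3, 8],
    })
    print(rank_parcels(inspected, uninspected)[["year_built", "ward", "p_lead"]])
```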


Steven Gregory