Life at the Intersection of AI and Society with Dr. Ece Kamar

Microsoft Senior Researcher Ece Kamar. Photo by Maryatt Photography.

Episode 9, January 24, 2018

As the reality of artificial intelligence continues to capture our imagination, and critical AI systems enter our world at a rapid pace, Dr. Ece Kamar, a senior researcher in the Adaptive Systems and Interaction group at Microsoft Research, is working to help us understand AI’s far-reaching implications, both as we use it, and as we build it.

Today, Dr. Kamar talks about the complementarity between humans and machines, debunks some common misperceptions about AI, reveals how we can overcome bias and blind spots by putting humans in the AI loop, and argues convincingly that, despite everything machines can do (and they can do a lot), humans are still “the real deal.”


Transcript

Ece Kamar: Machines are good at recognizing patterns. Machines are good at doing things at scale. Humans are good at common sense. We have creativity. So, I think if we can understand what this complementarity means, and then build AI that can use the power of AI to complement what humans are good at and support them in things that they want to spend time on, I think that is the beautiful future I foresee from the collaboration of humans and machines.

Host: You’re listening to the Microsoft Research podcast, a show that brings you closer to the cutting edge of technology research and the scientists behind it. I’m your host, Gretchen Huizinga.

Host: As the reality of artificial intelligence continues to capture our imagination, and critical AI systems enter our world at a rapid pace, Dr. Ece Kamar, a senior researcher in the Adaptive Systems and Interaction group at Microsoft Research, is working to help us understand AI’s far-reaching implications, both as we use it and as we build it. Today, Dr. Kamar talks about the complementarity between humans and machines, debunks some common misperceptions about AI, reveals how we can overcome bias and blind spots by putting humans in the AI loop and argues convincingly that, despite everything machines can do, and they can do a lot, humans are still “the real deal.”

That and much more on this episode of the Microsoft Research podcast.

[Music plays]

Host: Ece Kamar, welcome.

Ece Kamar: Thank you.

Host: Hey. You are in the Adaptive Systems and Interaction group at Microsoft Research. What do you do there?

Ece Kamar: I’m really passionate about AI systems providing value for people in their daily tasks. And I’m very interested in the complementarity between machine intelligence and human intelligence, and what kind of value can be generated from using both of them to make daily life better. We try to build systems that can interact with people, that can work with people and that can be beneficial for people. Our group has a big human component, so we care about modeling the human side. And we also work on machine-learning and decision-making algorithms that can make decisions appropriately for the domain they were designed for. So, we work on dialogue systems and location-based services, but also on the core principles of how we do machine learning and reasoning.

Host: So, what do you do particularly?

Ece Kamar: My main area is the intersection between humans and AI. I’m very interested in, as I said, AI being useful for people, because my PhD advisor, Barbara Grosz, says this very eloquently: “We already know how to replicate human intelligence: we have babies. So, let’s look for what can augment human intelligence, what can make human intelligence better.” And that’s what my research focuses on. I really look for the complementarity in intelligences, and building these experiences that can, in the future, hopefully, create super-human experiences. So, a lot of the work I do focuses on two big parts: one is how we can build AI systems that can provide value for humans in their daily tasks and make them better. But also thinking about how humans may complement AI systems.

Host: One of the biggest misconceptions maybe about AI is that it’s more advanced than it actually is. I think there’s been a lot of conversation around that. Where are we with AI, really?

Ece Kamar: Yeah, I think when people think about AI, they actually think AI is this thing that emerged in the last ten years or so. There are a lot of misconceptions about this. AI actually started in the 1950s, when the fathers of AI came together at the Dartmouth Conference on AI to discuss what AI is, define it and start working on it. And they were wrong, too, in their perception of what AI is. They wrote this Dartmouth document where they stated that if they worked hard on this AI problem for a summer, they really believed they could achieve AI very, very quickly. So, this is the problem with artificial intelligence: seeing a few very good examples lights up your imagination. It raises your expectations, and you think, “Okay, if the machine can do this, I can imagine the machine doing this harder thing very quickly as well.” Unfortunately, that’s not how AI works today. AI actually works on verticals. We know how to apply some specific techniques to some tasks, but unfortunately, just being able to do, let’s say, perception doesn’t make it easier for us to do other cognitive tasks in machines. So, that’s a big misconception we have to correct. We are actually very far from generalized artificial intelligence. Our techniques are still very specialized. And applying AI to every single domain requires a lot of analysis, deliberation and careful engineering.

[Music plays]

Host: Talk about the blind-spot problem in machine learning. Where does it come from, and how do we prevent or mitigate it?

Ece Kamar: This is a problem that we got very passionate about in the last five years. And the reason we are very passionate about this problem is because we are actually at an important point in the history of AI where a lot of critical AI systems are entering the real world and starting to interact with people. So, we are at this inflection point where, whatever AI does, and the way we build AI, have consequences for the society we live in.

Host: Right.

Ece Kamar: And when we look at our AI practices, they are actually very data-dependent these days. Our models are data-hungry. We give data to these models, and the quality of the performance we see from the models at the end of the day depends on how well that data is collected. However, data collection is not a real science. We have our insights, we have our assumptions, and we do data collection that way. And that data is not always a perfect representation of the world. This creates blind spots. When our data is not the right representation of the world, and it’s not representing everything we care about, then our models cannot learn about some of the important things. For example, a face-recognition system that cannot recognize dark-skinned people is an example of the blind-spot problem. Our data may not have enough representation for these sub-populations, and then our systems do not work equally well for all people.

Host: Yeah, I think you mentioned an example of a machine seeing just pictures of white cats and black dogs and then when it sees a black cat, it thinks it’s a dog because that’s all it’s been fed and told.

Ece Kamar: That’s all it’s been fed.
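
To make the cat-and-dog example concrete, here is a minimal sketch, not from the episode, with purely illustrative features and numbers: a classifier trained only on white cats and black dogs learns the color shortcut, so its first black cat comes out labeled “dog.”

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200

# Two toy features: fur darkness (0 = white, 1 = black) and ear pointiness.
# The blind spot: every training cat is white and every dog is black, so
# darkness perfectly separates the classes and the model leans on it.
cat_dark = rng.uniform(0.0, 0.2, n)
dog_dark = rng.uniform(0.8, 1.0, n)
cat_ears = rng.uniform(0.4, 1.0, n)   # ear shapes overlap across classes,
dog_ears = rng.uniform(0.0, 0.6, n)   # so they are only weakly predictive

X = np.column_stack([np.concatenate([cat_dark, dog_dark]),
                     np.concatenate([cat_ears, dog_ears])])
y = np.array([0] * n + [1] * n)       # 0 = cat, 1 = dog

model = LogisticRegression().fit(X, y)

# A black cat: dark fur but cat-like pointy ears. The spurious color cue wins.
black_cat = np.array([[0.9, 0.9]])
print(model.predict(black_cat))        # [1] -> "dog"
print(model.predict_proba(black_cat))  # confident, and confidently wrong
```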

Host: How do we deal with the problem of flawed data, or incomplete data, and make it better?

Ece Kamar: In the age of AI, we have to reinvent what our development pipelines look like. Software engineering is a decades-old field, and people have learned the hard way what it means to troubleshoot and debug the code they are developing. We are at the point where we have to take the same responsibility for the AI systems we are developing as well. That’s a problem my research focuses on. In the last few years, we’ve been developing debugging and troubleshooting technologies where we can smartly inject human insights into AI systems to understand where AI systems fail, where our blind spots are. Or, in integrated AI systems, how errors may propagate through a system, when we may not have any idea where errors are coming from or the ideal ways of fixing them. So, the approaches I work on really look into combining algorithms with human intelligence, and think about how humans may guide us in places where they are not happy with the performance of our system, and show us clues about how it can be improved.
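
One common way to inject human insight into this kind of debugging loop, sketched here under our own assumptions rather than as the team’s actual tooling, is to route the model’s least-confident field predictions to human judges and look for clusters where the humans disagree with the model:

```python
import numpy as np

def select_for_review(probs: np.ndarray, k: int) -> np.ndarray:
    """Return indices of the k predictions the model is least confident in."""
    confidence = probs.max(axis=1)      # probability of the top class
    return np.argsort(confidence)[:k]   # lowest-confidence examples first

# probs: model outputs on unlabeled field data, shape (n_examples, n_classes).
# Random placeholder data stands in for a real model's predictions.
rng = np.random.default_rng(1)
probs = rng.dirichlet([1.0, 1.0], size=1000)

review_queue = select_for_review(probs, k=50)
# These 50 examples would go to human judges; clusters where the human
# labels contradict the model's predictions mark candidate blind spots
# to collect more data for and retrain on.
```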

Host: I think one of the other biggest misconceptions about AI is that machines don’t make mistakes. And they do. You bring up this concept of humans-in-the-loop. Talk about that a little bit more.

Ece Kamar: So, for a lot of the tasks we are designing AI systems for, humans can actually be guides for us, because for a lot of things, like computer vision and face recognition, at this point, humans are the real deal. They can do these tasks very well. So, that’s what we do in these systems when we talk about human-in-the-loop. We think about how to bring humans into the training, execution and troubleshooting of these systems, as teachers.

Host: What does that look like? Because I’m envisioning research, and all data comes from humans, in the sense that we feed it into the machines. When you want to bring people into the process, what does that look like?

Ece Kamar: So, humans are already part of the process in AI. One thing I like to say is, “AI is developed by people, with people, for people.” And what I mean by that is, AI is not this magic that just emerges by itself. We have engineers and scientists working on AI systems, putting their insights into the design and architecture of these systems. When I talk about development of AI with people, as I said, a lot of the resources for AI come from people producing data. And in this space, the field of crowdsourcing, the field of human computation, has been this hidden gem for the developments we are seeing in AI, because what crowdsourcing does, through marketplaces like Mechanical Turk, is provide scalable, on-demand access to human intelligence. We can push these tasks onto the internet and people come and do them. The same with social media. People produce all of this data that AI systems can learn from, and these web tools for accessing human intelligence are very important for us when we are building human-in-the-loop systems, because now we can put human intelligence, through these micro-tasks, into these systems for training, troubleshooting and execution, so that we can do these tasks better. And when I talk about building AI for people, a lot of the systems we care about are human-driven. We want to be useful for humans. So, humans are also very important in evaluating the systems we build, and that’s another area where crowdsourcing comes into play.

[Music plays]

Host: Ece, you serve on a number of program committees and study panels, and one of them is AI 100, which is a 100-year study of artificial intelligence. And it has a report titled “Artificial Intelligence and Life in 2030.” What are the big takeaways for people who may not read that dense report?

Ece Kamar: We actually have a nice summary at the beginning of it, so if you have five minutes… I would actually encourage everybody to read it! So, let me tell you a little bit about what AI 100 is, and then I’ll tell you our main insights from the report. AI 100 is a project started at Stanford, through very generous contributions from Mary and Eric Horvitz, to study the impact of AI on society. And what makes AI 100 really special is that it’s a longitudinal study. It’s going to continue for a hundred years or more. Right now, everybody is excited about AI; we could do one study and forget about it in the next five years. That’s not what this study is about. This study is going to repeat itself every five years. So, what we studied in AI 100 in 2016, with seventeen other AI leaders, was the consequences of AI for daily life. There was one big consensus: we really didn’t see much evidence for either general intelligence or machines achieving some kind of consciousness in a way that we should be worried about right now. But we also identified areas where we have to be more careful. For example, we realized that we are not paying enough attention to how we can build AI as team members for people, how we can build collaboration into the AI systems we have today. We also got very worried about the real-world, short-term impacts of current AI practices on our society.

Host: Yeah.

Ece Kamar: So, we identified issues around biases, fairness, explainability and reliability as real issues we should be worried about. Even today, we can actually see these issues coming into play. We were able to identify examples that we were worried about. So, we made a recommendation to attract more research and interdisciplinary attention to some of the problems emerging in this space.

Host: What other disciplines might be applicable or important in this research?

Ece Kamar: It’s actually a very interdisciplinary problem. It touches law, because…

Host: Huge.

Ece Kamar: Hugely. Because we are talking about responsibilities. We are talking about, when something goes wrong, who is responsible for that? We are thinking about AI algorithms that can bias their decisions based on race, gender or age. They can impact society, and there are a lot of areas, like judicial decision-making, that touch law. And also, for every vertical we are building these systems for, I think we should be working with the domain experts from those verticals. We need to talk to educators. We need to talk to doctors. We need to talk to people who understand what that domain means and all the special considerations we should be careful about.

Host: Do you know of any effort to, sort of, link arms with other developers and designers and companies to have this, sort of, ethical approach?

Ece Kamar: There is an exciting effort going on right now called the Partnership on AI, where a lot of big companies who are practicing AI, including Microsoft, came together because they recognize their responsibility for bringing reliable, trustworthy, unbiased AI into the real world in a way that can help people. And these companies are now working together on best practices and other kinds of awareness they can create to start addressing these negative consequences that AI may have in the real world. I think this is a very promising step. We are going to see what comes out of it, but it is good to see that there’s a consensus among big AI companies to come together and recognize that this is an important time in our field to start asking the hard questions.

Host: Yeah, and you would hope that people would be aware that there’s always unintended consequences of any particular technology, so…

Ece Kamar: It’s also that, right now, the public perception is very important. And the current experiences we are putting into the real world are affecting public perception of AI. There is an important feedback loop we have to think about, so that the experience humans have with our AI is actually a good one and promotes more work in this space, rather than creating a cloud of fear.

Host: You are doing this work here, and out in the general public there are a lot of movies and TV shows and books, even sci-fi, painting a picture of AI and giving people perceptions of what it can do. How do we help educate people about where AI is and what it can do in the real world?

Ece Kamar: It’s an important question, how we can reach the public. Because, you know, as a researcher, I publish in academic venues, I talk to my colleagues, so we can have more discussions in our specialized fields. However, the public perception is very important. This is why, when we were writing the AI 100 report, one of our target audiences was the public. So, we wrote everything in a way that any person can read and understand our thoughts. And that was very intentional, because academics can get access to a lot of talks in this space; our main targets were the public, the regulators in this space and the press. We really wanted to make sure that our thoughts, and these are our thoughts, right? Not the field of AI’s, but those of the eighteen people who spent a year working on this problem. At least our thoughts, after careful consideration and deliberation, would be accessible to everybody who is interested in this. I think there are also things we can do, as we are building our AI, to make things more accessible for people and also get their feedback in. We know that our AI is not always right, so if we give people ways to tell us when they are not happy about something, we can actually have feedback loops and watch for that feedback. I think that’s a way we can incorporate more of the public’s thoughts into the way we are building these systems.

Host: So where would you implement that if you were going to get feedback from the public?

Ece Kamar: It could be as simple as thumbs-up and thumbs-down. Like, “this system didn’t work for me, I didn’t like what you were doing.” Because if we get that kind of feedback, we can have human eyes look at those cases and see what we are getting wrong. Right now, for a lot of the systems we have, there’s no feedback loop. We are not really getting much human feedback about what our systems are doing. We care about how we can get a little bit more accuracy from our existing data sets, like how we can go from 98 percent to 98.5, how we can do better stochastic gradient descent, how we can optimize all of the parameters in the way we do supervised learning. However, we are spending so little time watching what these systems do, in practice, in the real world. That’s an area my research focuses on very much these days, because I think it’s a very important problem: watching what our systems are doing with real people in the field. What are their blind spots? Where are the errors coming from? And what does the performance in the world really look like? Because the real world may not look like your existing data sets in the lab. So, let’s put our focus a little bit more on real-world performance and making sure that our stuff works in the real world. And I think for that we need a feedback loop for the real-world performance.
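
A minimal sketch of the thumbs-up/thumbs-down loop Dr. Kamar describes, with a hypothetical logging schema and made-up traffic slices, might aggregate verdicts per slice so human reviewers know where to look first:

```python
from collections import defaultdict

# Hypothetical feedback log: (query_type, thumbs_up) per interaction.
feedback_log = [
    ("weather", True), ("directions", False), ("directions", False),
    ("weather", True), ("reminder", True), ("directions", True),
]

by_slice = defaultdict(lambda: [0, 0])  # slice -> [thumbs_down, total]
for query_type, thumbs_up in feedback_log:
    by_slice[query_type][1] += 1
    if not thumbs_up:
        by_slice[query_type][0] += 1

for slice_name, (downs, total) in sorted(by_slice.items()):
    print(f"{slice_name}: {downs}/{total} thumbs-down")
# High thumbs-down slices ("directions" here) get human review first,
# turning raw user dissatisfaction into a prioritized debugging queue.
```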

[Music plays]

Host: Talk a little bit about this delicate balance between self-governance and a more external locus of control. How do people manage to do the right thing up front, so that somebody doesn’t have to come around and make a law that says, “You can’t do that”?

Ece Kamar: I think that is something the AI community is realizing, which is why we see efforts like the Partnership on AI. As a community, we are coming together and saying, “Let’s get things right before somebody forces us to do so.” Because we see regulations coming from different countries all over the world. We saw some regulation coming from Germany on autonomous vehicles. We see regulations coming from the European Union on data privacy and protection. And in some cases, if the regulators are not working hand-in-hand with AI experts, these rules may not be the best things for the field, or even for real-world performance. I mentioned issues around biases, for example. If we are not collecting demographic data about our users, it becomes impossible to detect how we are impacting different sub-populations, and to realize whether we have biases in real-world performance. So, I think it is very important to have a dialogue between the different people and communities who have a stake in this game: regulators, the press, the public, domain experts… to get them talking with each other. Because if we cannot explain where we are as a field to the regulators, they are going to be making decisions that impact things in worse ways, like the ones we see with biases. And we are trying to communicate. I was talking at the Aspen forum in the summer about jobs and AI, trying to express that, no, not all of our jobs are going away. Some tasks may be automated, but we are very far from automating a lot of the jobs that humans do. So, just trying to express, in clear ways, what we think the opportunities and the real challenges in our field are is going to be very important as we go along.
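
Her point about demographic data can be shown in a few lines. This toy example, with hypothetical labels and group assignments, illustrates how an aggregate accuracy number can hide a per-group gap that only becomes visible once group membership is recorded:

```python
import numpy as np

# Hypothetical predictions on eight users from two demographic groups.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 0, 0, 1])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

print("overall accuracy:", (y_true == y_pred).mean())  # 0.625
for g in np.unique(group):
    mask = group == g
    print(f"group {g} accuracy:", (y_true[mask] == y_pred[mask]).mean())
# The 62.5% aggregate hides that group A is at 75% while group B is at 50%.
# Without the `group` column, this disparity is undetectable.
```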

Host: I’m trying to wrap my brain around how we do that. I mean, getting an audience that’s engaged in this topic, aside from being just entertained by this topic. That’s going to take some thought and research in itself I think.

Ece Kamar: I think we have an opportunity with this new interest in AI. We have eyes on us. People are watching what companies are doing in this field of artificial intelligence. We see articles every week talking about how AI is going to be changing everything, all the sectors. Let’s use this attention to make sure that our practices of AI are actually good, and that we are collaborating with the people we need to collaborate with, to make sure that we actually have good AI systems in the real world. Let’s turn this attention into an opportunity, that’s what I would say.

Host: I love that. So, you’re professionally, and probably personally, interested in the societal impacts of AI. We’ve covered that. Particularly as it affects what I want to call the 3 Ds: disruption, displacement and distribution. Talk a little bit about those problems.

Ece Kamar: It is one of the things we studied in the AI 100 report as well. And it’s the number one question I get when I talk to people outside of AI, like, “Where are the jobs going? Can you tell me how many millions of jobs are going to be lost in the United States in the next 10, 20 years, and how many new jobs are going to be coming?” We are actually quite far from having general artificial intelligence. Because of that, we really don’t see this kind of “singularity”-like AI happening, where AI is going to teach itself to become smarter and smarter, and one day it’s going to be so smart that all the jobs are going to go away. At the same time, we have an opportunity to think about how to prepare our society for some of the job displacement that may happen. For example, the crowdsourcing marketplaces I was telling you about, where we are using them to train our AI, that’s actually a good way to provide work to people who don’t have traditional work right now.

Host: So that’s an economy unto itself there, a little micro-economy if you will…

Ece Kamar: It’s a micro-economy. And we’ve actually done some experiments, we have a research paper on it, where we show that we can have algorithms that sequence tasks in crowdsourcing marketplaces so that, by doing tasks, people actually learn how to do complex tasks better. So, we can train people for tasks without them even noticing that they are getting better at tasks just by doing them. At Microsoft, we have a platform like LinkedIn that can actually be a hub for providing work, or for training people, or for making them aware of other opportunities they may not know about. So, I have a feeling that if we can think about how to prepare generations for upcoming opportunities, and pair that up with the platforms we have today, maybe that’s an area we have to focus on really hard, so that as some automation comes in, we are actually preparing people for the changes they may face. Not all jobs are going away, because as a person who is working in AI, I just know how hard it is to automate even the smallest of tasks. So, we are going to be seeing a transition, but I don’t buy the statement that all jobs are going to go away.

Host: And not only that, we’ve seen this before, and we see new economies grow out of change. What things do you see that might paint this picture of the human/AI collaboration that you talked about earlier?

Ece Kamar: I think the key is understanding the complementary strengths of people and machines. We know that machines are good at automating repetitive tasks that we have a lot of data for. Machines are good at recognizing patterns. Machines are good at doing things at scale. But there are a lot of things machines are not good at. Humans are good at common sense. We can do counterfactual reasoning. Machines are not even close to that.

Host: No.

Ece Kamar: We can learn from very few examples and generalize to things that we’ve never seen before. We have creativity. So, I think if we can understand what this complementarity means, and then build AI that can use the power of AI to complement what humans are good at and support them in things that they want to spend time on, I think that is the beautiful future I foresee from the collaboration of humans and machines. We are already seeing examples of this. There is this great paper, a collaborative work between MIT and Harvard, where they can actually have a machine do nearly as well as human radiologists on diagnosing breast cancer from images. That’s a story we hear a lot. But the story we don’t get to hear a lot is that, because machines and humans make different kinds of mistakes, when you actually put them together, the error rates you get are much, much lower than what the machine and the humans can do alone. They can actually reduce human mistakes by 85 percent just by pairing machines up with people. I think those kinds of stories, about how humans and machines can think about problems in different ways and complement each other, are going to be ways for us, as humans, to get better at what we are doing today.
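
The arithmetic behind that complementarity is worth making explicit. This simulation assumes independent errors and optimistically perfect adjudication of disagreements, rather than reproducing the MIT/Harvard study, but it shows why requiring a machine and a human to agree drives the error rate far below either alone:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
truth = rng.integers(0, 2, n)          # binary ground-truth labels

machine_wrong = rng.random(n) < 0.08   # machine alone: 8% error (illustrative)
human_wrong   = rng.random(n) < 0.05   # human alone: 5% error (illustrative)
machine = truth ^ machine_wrong        # flip the label wherever wrong
human   = truth ^ human_wrong

# Policy: accept the answer when human and machine agree; disagreements go
# to careful review, which this sketch optimistically models as always right.
agree = machine == human
combined_error = (machine[agree] != truth[agree]).mean() * agree.mean()

print("machine alone:", machine_wrong.mean())
print("human alone:  ", human_wrong.mean())
print("combined:     ", combined_error)
# With independent mistakes, both are wrong at once only ~0.08 * 0.05 = 0.4%
# of the time, an order of magnitude below either error rate alone.
```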

Host: What should we be concerned about, or interested in, when it comes to our ability to impute human qualities to machines… to form emotional connections to machines?

Ece Kamar: Yeah. We did experiments in my PhD days where we were showing the same algorithm to people: in one case we were telling them it was a person, and in the other case we were saying it was an agent. And the humans were treating them very differently, for example.

Host: Really.

Ece Kamar: Yeah. But then Cliff Nass has some work that shows that even putting a smiley face on an agent makes people treat that agent differently. So, perception is very, very important. You know, in some cases, it may actually be beneficial to have an emotional connection with a computer agent. For example, XiaoIce, Microsoft’s agent in China, a lot of people there connect with XiaoIce, and maybe it helps them with their daily life, just to connect with somebody, because they may not have somebody to talk to every day. So, I think it’s really, again, application-dependent. In some cases, it may be beneficial to form an emotional connection.

Host: How did you end up at Microsoft Research?

Ece Kamar: I’m actually one of those who started as an intern and got to be full-time. I’m originally from Turkey, did my undergrad in Turkey, then came to Harvard for my PhD. I think I was in my second year, and I was doing work on human/computer collaboration, in particular on attention management: how we can build AI algorithms that can understand the human state and know when is the right time to ask a question or interact with a person. And I was reading a lot of articles coming from Microsoft Research, in particular from Eric Horvitz and his team. I just loved the way those articles were written, the applications, and the way they were approaching those problems, so I wanted to do an internship. I came here for the first time as an intern in 2007, and I loved it so much I came back again in 2008 as an intern, and then I became a Microsoft Research fellow. I just loved the place so much: the freedom, the ability to work on real-world problems and the opportunity to have impact. Those were the things that really made this place the best of both worlds for me. The fact that you could work on real-world problems at scale, but also be in touch with academia, publish, share your work, learn every day about new problems. So, when I was graduating, it was a very easy decision to come here. And I’ve been here for the last seven-plus years. It’s hard to believe time goes so fast, but I’ve been in the same group, working with the same manager, all of this time.

[Music plays]

Host: Ece Kamar, thanks.

Ece Kamar: Thank you. It was fun.

Host: To learn more about Dr. Ece Kamar’s work, and what Microsoft is doing to make artificial intelligence reliable, unbiased and trustworthy, visit Microsoft.com/research.
