• Find out how @grok_ has explored the evolving world of human-robot interaction
  • Although privacy concerns linger, govts can do 'wonderful things' with AI, says @grok_
  • AI and robotics will – slowly – edge into the realm of reality for all of us, says @grok_

Dr Kate Darling is not your average doctor. Or your average researcher. Hers is a field which is sure to prompt many a conversation or quizzical look: Robot Ethics.

In other words, she spends her time at the Massachusetts Institute of Technology (MIT) Media Lab investigating social robotics and human-robot interaction. And she’s not one to leave her work in the lab – several domestic robots are in residence at her home. Darling, though, is anything but robotic in real life. Warm and chatty, her deep expertise is matched only by her approachability and a fine sense of humour – which is also on full display on her Twitter account.

The relationship between humans and robots has long been explored on celluloid, but for Darling it is rooted in her lifelong interest in how technology intersects with society.

“What I am really passionate about working on is human-robot interaction,” she explains. “It’s about the ways people psychologically relate to machines that display autonomy or have behaviour that we perceive as autonomous or lifelike. I do a lot of research about that and have also done some research around violence and empathy in that context. Machines that occupy this interesting space of not being devices and not living things.”

Tomorrow’s challenge – or today’s?

Certainly, there’s little doubt that technology continues to move rapidly forwards. Today’s environment is increasingly being shaped by Big Data, cognitive computing and the approaching revolution of artificial intelligence (AI). As a result, the debate about these technologies and what they mean for society and for our lives has become more widespread. Darling, however, believes that such voices often echo in empty stadia – and would be better focused elsewhere.

“What’s happening – at least in the United States – is that I am seeing a lot of hype and a lot of fear around these technologies,” she says. “It is a fear that is a little bit misplaced. I think we are primed by science fiction and pop culture to hear AI and immediately think of Skynet and of machines taking over the world. There is also a lot of fear about robots taking all of the jobs and the whole automation debate.”

Such concerns, she continues, actually get in the way of the positive impact that technology has had and will continue to have. “This hype really distracts from some of the really cool things that the technology has to offer, and it also distracts from some of the actual issues we might be facing in the near future,” she says. “I think there are so many things that we should be focusing our attention on. Take privacy, for example. A lot of these systems work by collecting a ton of data and yet we’re worried about the machines taking over and making autonomous decisions and that type of thing.”

Striking a balance, she believes, will be critical to identifying solutions to these challenges which – so far at least – have proved frustratingly elusive. “Right now we don’t have a lot of answers, and the biggest problem I see is that there is so much incentive to collect data because that’s how these systems work,” she points out. “So how do you balance that against the interests of the general public, who really don’t have a voice in this fight? I think it is really up to governments to try and strike that balance for citizens and to enact legislation that tries to balance the interests of the companies that want to collect the data against the interests of the rest of us.”

Governments’ evolving role

It would seem, on the face of it at least, that governments are ideally placed to take advantage of some of these technologies. After all, few – if any – organisations collect and hold as much data. But are they taking advantage of this opportunity or falling into a state of risk-averse inertia? Darling is quick to say that they are “absolutely” seeking to make use of it.

“There are some really great uses that come out of collecting this data,” she adds. “You can use algorithms to try and find patterns of energy usage and use that to try and create more efficient energy systems, for example. This is just one of many sorts of wonderful things that governments can do with this technology. That said, there are also some nefarious things that governments can use it for. It is a little bit tricky, and I would hope that democracy to some extent solves that problem. I hope that people care about their privacy enough to vote for people who do as well.”

In addition to democracy and privacy, another issue raised by these technologies is accountability. With some of the deeper neural network approaches, it is hard to know how a decision has been reached: what logic was used, and what data was deployed and weighted in what way to arrive at a particular conclusion. Answers to such valid questions are all too rare. Asked whether these technologies can be held to account in the same way a person can, Darling says we have to try.

“The loss of transparency in how decisions get made is a huge problem,” she admits. “There is a fantastic book by Cathy O’Neil called Weapons of Math Destruction that goes into a lot of these issues. I think that in some ways we have created even less transparency. It’s not just the problem of the technology not being transparent – that’s another issue. It’s also that we are outsourcing a lot of these decisions and systems to private companies that don’t want to share with us how their algorithms work, because they are proprietary and they want a competitive advantage.”

This commercial reality means that policymakers need to be more proactive when negotiating their outsourcing agreements, she continues. “Governments could try and think about outsourcing and to what extent we can still try to ensure transparency in how the algorithms are working, so we at least have that. And then, of course, there is the issue of sometimes the companies themselves not knowing exactly how their algorithm has come up with a specific result from the data. This one is a little bit more difficult to address. I would say – echoing Cathy O’Neil – that when you are using these systems and the decisions can impact people’s lives, this is probably not a good idea. But there are maybe other uses where there is less harm to people if things go wrong.”

The human touch

The pace of technological change is such that it can sometimes seem like a fool’s errand to predict exactly how things will develop in the future. So many unknowns. So many fast-evolving scientific trends and discoveries. It seems pertinent to ask, though, given that a lot of public services are defined by the interaction of one human with another – a doctor or a teacher, for example – will technology be able to replace some of these interactions in the near term? Darling says it’s not going to happen any time soon.

“There is the question of ‘can we replace it’ or ‘will people like it’,” she observes. “I think right now we don’t have good enough technology to fully replace a human interaction like that. Plus, people aren’t suddenly going to start enjoying these interactions. The systems are pretty good when you call an airline, for example – they understand a lot of what you say – but you still want to talk to a human and not to a robot. And for government services, I think there is even more of a desire to talk to a human.”

That said, she is clear that there are certain situations where AI can be an active force for good, particularly when deployed in support of a human. It shouldn’t be an either-or trade-off. “Why can’t we have both?” she asks. “Why can’t you have a doctor who is assisted by an AI system that can help with the diagnosis, but the patient still gets to interact with the doctor in a more humane way? The most effective approach to implementing robotics and AI is when they work together with humans, with the technology serving as a tool that humans use rather than as a full replacement for the positions and jobs that we have right now.”

As for whether this will be the everyday norm in the years to come, Darling hedges her bets but remains convinced that AI and robotics will – slowly – edge into the realm of reality for all of us. “People are going to adopt these systems very gradually,” she predicts. “But I think we are going to see more of these systems being used as tools and in contexts where they assist humans. I think this is the right direction to go in, and I hope this is the way it will go.”

Whatever happens in the years ahead, it is already clear that what we have become accustomed to today will be very different to what lies over the horizon. Exciting times await.

 

The Centre for Public Impact is investigating the way in which artificial intelligence (AI) can improve outcomes for citizens. 

Are you working in government and interested in how AI applies to your practice? Or are you an AI practitioner who thinks your tools can have an application in government? If so, please get in touch.

 

FURTHER READING