
I studied computer science, specialized in artificial intelligence, and have since pursued my research with an emphasis on creating support technology, specifically support for decision making. To a large extent, my aim is to support debate and deliberation, that is, to help people argue for or against something, and to develop negotiation support systems. It helps to think about people's values: if one talks to people and understands why they hold a certain opinion, one might still disagree, but one also discovers that these are decent people, and understands why they choose differently from oneself. In this spirit, Delft University of Technology and several other universities support the idea of responsible AI.

AI is widely covered in the media. One headline claims that by 2025, 52% of all jobs will be carried out by robots; another says that by 2022, 75 million jobs will be gone; yet another suggests that by 2022, 130 million new ICT jobs will have been created. It seems to be a matter of how one analyses things. The people who lose their jobs will be unhappy; yes, more jobs will also be created, but those people will first have to be re-educated. These kinds of transitions are hard, and we have to study how to deal with them.

An ideal autonomous AI system would be able to set and pursue its own goals, and could then also be held accountable and be given responsibilities. Perhaps this is best illustrated by Sophia, the robot that was made a citizen of Saudi Arabia. It is surprising that people believe Sophia can actually hold a conversation; they are very disappointed when it is explained to them that the questions are supplied about a week in advance so that Sophia can give adequate answers. From my point of view, this is a real problem, because people have been hearing promises about AI for the last 50, 60, 70 years, and after such promises there is always disappointment. So what I hope to do is describe what can realistically be expected from AI, and, on the other hand, to reconsider AI as a supporting technology that has great power if it is deployed in the right way.

What is deep learning, then? It is a sub-branch of machine learning, the type of artificial intelligence that is capable of learning from data. There are various types of such learning, but one thing is important: it typically runs on big data. Big data is basically the gold of our day, the raw material with which we can get rich. That is largely true, but it carries risks. Deep learning consists of a huge network of small units with enormous numbers of connections between them. It is not a new idea; it was already around in the 1960s. Back then it didn't fly. The reason it does fly now is largely the fantastic record of electrical engineering in constructing ever better, ever faster computers.
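To make that "huge network of small units" concrete, here is a minimal sketch in Python of a tiny two-layer network (plain NumPy; the layer sizes, weights and input are invented for illustration, and a real deep network would stack many more layers and learn its weights from data rather than drawing them at random):

```python
import numpy as np

def layer(x, W, b):
    """One layer of simple units: a weighted sum of inputs pushed through a nonlinearity."""
    return np.tanh(W @ x + b)

rng = np.random.default_rng(0)

# A tiny network: 3 inputs -> 4 hidden units -> 2 outputs.
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)

x = np.array([0.5, -1.0, 0.25])   # one input example
h = layer(x, W1, b1)              # hidden representation
y = layer(h, W2, b2)              # network output
print(y)
```

Each unit by itself does almost nothing; the power, and the hunger for data and compute, come from stacking millions of such connections, which is why the idea only took off once computers became fast enough.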

And it has really enormous power. It is like a super brain that gives you all kinds of possibilities for extracting patterns from data, and that could therefore, in the end, set up its own goals. Is that what you want, or is it what you don't want? To what extent can it be controlled? And to what extent can it be done differently?

So, if this is the engine, the next question is: who is in control? Because on the one hand you have all the promises, and on the other hand you are being scared stiff by all kinds of movies about artificial intelligence taking over control of the world and doing all those nasty things. So who is in charge? It seems like a logical question, but I think we should pose a different one. It is not so much about who is in charge, but about how control, being in charge, flows from one of us to the next.

Google DeepMind's work, based on deep learning, produced essentially the first program ever to be a proficient player of the game of Go; the game of chess had already been cracked by AI in the late 1990s. This was a major effort that took them some twenty years. But it is still a game: a confined game in which all the information is visible on the board. The context is really scoped to that game, and that is the first thing we should realize. When it comes to the unbelievable amount of background knowledge, of contextual knowledge, that one needs to make decisions in the political world, systems like deep learning and DeepMind's tools cannot easily cope. Why is that? Why would it be difficult to grasp what such a thing will actually be doing?

An ordinary person can balance a pencil on two fingers, or even on one. How do we do that? Can we explain how it works? That is difficult. When I started my career as a scientist in the 90s, I was at a conference where there was a race between symbolic knowledge-representation approaches to AI and the first wave of machine learning, and the challenge was flying: how can we teach a computer to fly? There were two camps, each with the same number of engineers. After three weeks of two teams working flat out on the knowledge representation, asking ever so many pilots "How do you do that?", with all the pilots saying much the same things, the computer still didn't fly. But the machine-learning side, after hooking a learning system to the controls and working with a pilot, had the thing flying within days.

And that, too, is power; it is fantastic. It is out there, and you can really do it. So what is the problem? Why not do machine learning all the time? The point is that we still do not know why it flies. We have no clue. Perhaps that is not so important for flying, but it is for deploying this technology in political warfare and the kinds of applications the speakers before me talked about. What you are doing there is so fundamental, so essential, that we should think really seriously before leaving such decisions to a machine. It is our lives, it is our society. Don't shift responsibility onto something that cannot explain why it did what it did.

Big data resembles a very big sea on which AI seems to be adrift, because there are currents in the data that we are not aware of: while you think you are heading in the right direction, they carry you to the wrong point. Amazon, for example, recently applied deep learning to run through all the batches of job applications and pick out the best five candidates. The decisions turned out to be biased, so they stopped the system. What had they given the machine-learning algorithm? The data of their past hiring decisions, and there was bias in there.

So what is the problem? Why do we deploy machine learning in such cases? Because you do not want to go through all the data by hand. But you have to realize that if you do unsupervised learning, there may be all kinds of biases you are not aware of. Fine, then let's do supervised learning: give it data we have already labelled, these are good examples and these are bad examples, and on that basis it can categorize. Sounds good, but the labelling would be done by humans, with their own biases, so that does not help either: we would bring in our own problems again, as the sketch below illustrates. So, can we do it differently? That is basically the question. How do we stay on course? As a metaphor: we need a moral compass and a moral positioning system. Not just the compass, because with only a compass you cannot tell that you are drifting away in the big-data sea; we need a moral positioning system, the moral counterpart of GPS, that tells you where you are in your sea of morality.
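To see how bias in past decisions flows straight into a supervised learner, consider this minimal sketch in Python (using scikit-learn; the data, the group attribute, and the biased historical labelling rule are all invented for illustration and are not taken from the Amazon case):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000

skill = rng.normal(size=n)              # the quality we actually care about
group = rng.integers(0, 2, size=n)      # an attribute that should be irrelevant

# Historical "hired" labels: the same skill bar, but biased against group 1.
hired = (skill - 0.8 * group + rng.normal(scale=0.3, size=n)) > 0

# Supervised learning on those biased labels.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Two candidates with identical skill but different group membership:
print(model.predict_proba([[1.0, 0], [1.0, 1]])[:, 1])
# The group-1 candidate scores markedly lower despite equal skill: the model
# has faithfully learned the bias hidden in the past decisions.
```

The model is not malicious; it simply reproduces the pattern in its labels, which is exactly why having humans relabel the data, with their own biases, does not solve the problem.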

Here is another metaphor to make this clearer. Picture a rider on a running horse. The horse is an enormously powerful machine: it can run much faster than we can, and it can carry heavy loads. And it is a very intelligent agent: if there is a hole in the ground or an obstacle in front of it, the horse avoids it automatically. It will not crash into it; it will jump, go around, or do something intelligent. That is the type of intelligence we love and would like to harvest from AI as well. But there is also a rider, so there is a steering mechanism. That is what we are looking for: how do we bring that into AI?

So essentially, the research should take all the power of machine learning, give it a moral positioning system, and make sure the system is self-reflective. A few words to explain self-reflective systems: I would like to have AI that can ask:

  • Where am I from a moral point of view?
  • What biases am I forming?
  • What is the quality of the data I am being fed?
  • Who can I turn to for discussion, for further help to talk about these possible biases?

That requires epistemic logic, material from the knowledge representation area: things that help a system ask questions such as "What do I know?" and "What do I know that I don't know?". Those are the known knowns and the known unknowns. At first this does not yet help us with the unknown unknowns, but we can tackle those together: we can work with such systems to help discover them.
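As a toy illustration of these epistemic questions, here is a minimal sketch in Python of a knowledge base that distinguishes what it knows from what it knows it does not know (the propositions and method names are invented for the example; a real system would use a proper epistemic logic):

```python
# A toy knowledge base with an explicit record of its own gaps.
class EpistemicKB:
    def __init__(self):
        self.facts = {}               # proposition -> True/False
        self.open_questions = set()   # questions it knows it cannot answer

    def tell(self, prop, value):
        self.facts[prop] = value

    def note_gap(self, prop):
        """Record a known unknown."""
        self.open_questions.add(prop)

    def knows(self, prop):            # "What do I know?"
        return prop in self.facts

    def knows_it_doesnt_know(self, prop):   # "What do I know that I don't know?"
        return prop not in self.facts and prop in self.open_questions


kb = EpistemicKB()
kb.tell("training_data_covers_group_A", True)
kb.note_gap("training_data_covers_group_B")

print(kb.knows("training_data_covers_group_A"))                 # a known known
print(kb.knows_it_doesnt_know("training_data_covers_group_B"))  # a known unknown
# Anything in neither structure is an unknown unknown; by construction the
# system cannot enumerate those itself, which is why we must discover them together.
```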

Thus, my goal here is to engage you with Responsible AI. First of all, we have to design for values. The system has to be designed to be transparent; it has to be designed so that we can explain what it is doing and why; and it has to be designed with a moral compass and a moral positioning system, for moral control and positioning. Then we can talk about shared leadership and shared control, for everyone's good.


* C.M. Jonker is professor of Interactive Intelligence in the Department of Intelligent Systems at Delft University of Technology and professor of Explainable Artificial Intelligence in the Department of Media Technology at Leiden University. Her research interests are the modelling and simulation of cognitive processes and concepts such as trust, negotiation, teamwork and the dynamics of individual agents and organisations. She enjoys working in interdisciplinary teams and creating synergy between humans and technology by understanding, shaping and using the fundamentals of intelligence and interaction. She is inspired by social intelligence theories and, for example, uses concepts such as social practice to improve the interactive intelligence of agents.
