The Ethics of Artificial Intelligence

As a speaker at the Investment Innovation Conference, Shannon Vallor, the William J. Rewak, S.J. Professor in the Department of Philosophy at Santa Clara University, spoke on the ethical implications of emerging technology, particularly artificial intelligence. We asked Shannon a few questions about human and machine values and the nature of moral complexity.

AI seems to be all around us and progressing quite rapidly — from chess playing to self-driving cars to technology that seems to be able to figure out your gender identity. Do we need a new moral framework to guide our interaction with technology?

I think in some cases we need significant changes to our existing ethical frameworks. In other areas, I think our existing ethical frameworks can still shed a considerable amount of light on what we’re talking about.

So, for example, a lot of the issues have to do with the potential harm AI could cause. Even if this technology becomes more reliable, it could still be used for unjustifiable purposes. We do have existing moral theories that talk about harms to others and about violations of people’s dignity and rights and so forth. Those moral theories would still tell us a lot about how we can manage these kinds of risks responsibly.

For example, many of the issues we’re facing right now – technological unemployment, and harmful biases in machine judgment – are already salient from the perspective of the moral frameworks that we have. Those frameworks give us reasons to avoid doing unjust harm to others or failing to recognize their dignity and the fact that they are persons.

But I think there are other developments in AI that are going to require new kinds of thinking.

With AI systems, it can be very tempting in many cases to fit our interactions with them into the category of our moral obligations to other humans, but that would be a very dangerous mistake. As they behave more and more like social partners, robots and other artificially intelligent agents will frequently trigger emotional and moral responses in us, such as concern and affection; but they are still merely objects—so this is something where we have to develop new moral frameworks to govern AI-human relations.

AI systems will create new moral situations that we are not wired for, which can be good or bad. What’s important is that we develop the ethical perspective and judgment needed to respond well to these new dilemmas.

What are machine values?

Machine values are human values. They are just ones that lend themselves especially well to implementation and maximization by machine judgment.

For example, one machine value is optimization. To get an AI system to be a good optimizer, you use a mathematical function, called a utility function, that reduces complex value environments into something a machine can read and calculate. Then you program the machine to always maximize the expected utility in its environment.
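As a rough illustration of what "always maximize the expected utility" can look like in practice, here is a minimal Python sketch. The actions, outcomes, utilities, and probabilities are all invented for illustration; a real system would learn or estimate them, but the structure – score every action by probability-weighted utility and pick the highest – is the same.

```python
# Minimal sketch of an expected-utility maximizer (illustrative values only).
# Each action leads to possible outcomes with some probability; the agent
# scores every action by its expected utility and picks the highest score.

# Hypothetical utility function: maps each outcome to a single number.
utility = {"on_time": 10.0, "slightly_late": 2.0, "very_late": -8.0}

# Hypothetical actions, each with a probability distribution over outcomes.
actions = {
    "take_highway":    {"on_time": 0.70, "slightly_late": 0.20, "very_late": 0.10},
    "take_back_roads": {"on_time": 0.50, "slightly_late": 0.45, "very_late": 0.05},
}

def expected_utility(outcome_probs):
    """Sum of utility(outcome) weighted by the probability of that outcome."""
    return sum(p * utility[outcome] for outcome, p in outcome_probs.items())

best_action = max(actions, key=lambda a: expected_utility(actions[a]))
for name, probs in actions.items():
    print(f"{name}: expected utility = {expected_utility(probs):.2f}")
print("chosen action:", best_action)
```

Notice how the utility function flattens every consideration into one number per outcome – that is exactly the reduction of a complex value environment into something a machine can calculate over.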

But we know from human psychology and the failures of economic theory that humans are not consistent expected utility maximizers. We struggle with competing values and tragic choices, and we act in ways that aren’t always narrowly rational.

The problem is that, while machines can implement well-defined value functions consistently and reliably, they do not mirror human processes of good, wise, or ethical decision-making in dynamic social environments. It’s important that in an AI-driven society, we don’t let go of those more complex human values that machines can’t handle.

What we have now is task-specific AI, and it’s defined by behavioural competence, not by similarity to human minds. It can be understood as augmented cognition; AI systems can model a much larger space of action possibilities than our brains can. AI can help us think better and be better.

But there’s a problem of algorithm transparency and also accountability. How does the algorithm work and who will be accountable for decision systems that work more or less independently of human judgment?

You counterpose machine values to virtue ethics. What is virtue ethics?

There are lots of different ways that people structure their moral thinking. So we could talk about utilitarian ethics that motivates you to act in a way that will cause the least suffering and promote the greatest happiness for those who are affected by what you’re about to do. But that’s really difficult to apply in all situations, for a lot of reasons. Rule-based ethical systems are just as fragile; we often find ourselves in contexts where there is no pre-fixed rule for decision-making to rely upon. This is one reason why building ethical machines is such a challenge, because there is no fixed set of rules you can give a computer to follow that will guarantee that it behaves ethically in all situations it might encounter (this was the point of many of Isaac Asimov’s stories about the Laws of Robotics).

Virtue ethics is a way of thinking about the good life as achievable through specific moral traits and capacities that humans can actively cultivate in themselves. These traits—virtues—allow us to cope, and even flourish, under challenging conditions where fixed rules and principles fall short. It goes back to classical Greece but also to the Confucian and Buddhist traditions. These are ways of moral living that go beyond fixed rules or principles that apply universally under all conditions; these traditions also emphasize the fluid and skillful judgment and perception of virtuous individuals, and their ability to navigate challenging moral circumstances. This is the kind of judgment that AI systems don’t have, and won’t have for the foreseeable future.

How does virtue ethics address technology?

There’s tremendous pressure now, quite dangerous pressure, to delegate decisions to AI. These types of decisions are often being made without much oversight, and we’ve already seen disasters coming from that, in cases where people didn’t understand the limitations of the algorithm or, if they did, failed to take the appropriate care and caution needed to implement AI in that situation.

We’ve also seen a lack of transparency in how decisions are being made. It may be possible to improve the overall outcomes of a decision process by using a well-designed algorithmic decision-support system. But we’re still going to need to worry about how to handle the cases that the machines get wrong.

Let’s say that people get it right 90% of the time, but the algorithm can get it right 95% of the time. Overall, you’ve got an improvement in the output. But when a human screws up, it’s usually pretty easy for another human to figure out where the mistake was made, and correct it. This is often difficult or nearly impossible with an automated decision system, especially one driven by machine learning techniques that rely upon deep neural networks, the detailed workings of which may be opaque and untraceable even for the system’s programmers.

One avenue to address this problem is explainable AI, the idea of making machines that aren’t opaque and can explain why they made a certain decision. We need machine reasoning that can be examined, so that if we know a machine is going to get it wrong sometimes, we can figure out where those mistakes are and how they were made.
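As a loose sketch of what examinable machine reasoning can look like in the simplest case, consider a linear scoring model whose decision can be broken down into per-feature contributions that a human can inspect. The feature names, weights, and threshold below are hypothetical illustration values, not anyone’s real lending model; explaining the decisions of deep neural networks is a much harder, still-open problem.

```python
# Toy "explainable" decision: a linear score whose per-feature contributions
# can be printed, so a reviewer can see why the decision came out as it did.
# Weights, applicant features, and the threshold are hypothetical values.

weights = {"income": 0.4, "debt_ratio": -0.6, "years_employed": 0.3}
threshold = 1.0

applicant = {"income": 3.0, "debt_ratio": 2.0, "years_employed": 4.0}

# Contribution of each feature = its weight times the applicant's value.
contributions = {name: weights[name] * applicant[name] for name in weights}
score = sum(contributions.values())
decision = "approve" if score >= threshold else "decline"

print(f"decision: {decision} (score {score:.2f}, threshold {threshold})")
for feature, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature:>15}: {value:+.2f}")
```

In a model this simple, a wrong decision can be traced to the feature that drove it and challenged; the point of explainable AI is to recover some of that traceability for far more complex systems.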

We can see the need for this when we look at AI systems being developed to operate as caregivers, or advisors to employers, judges, lawyers, law enforcement, or military personnel. Advanced algorithms that are inscrutable to human inspection increasingly do the work of labelling us as combatant or civilian, good loan risk or future deadbeat, likely or unlikely criminal, hirable or unhirable.

We need to be able to question and challenge those decisions when they are wrong.

How do you get virtue ethics into technology?

One of the things I find interesting, being in Silicon Valley, is the extent to which the skills employers in the Valley are looking for most are critical thinking and reasoning skills. That is something different from being able to apply a rule, or make accurate calculations, or write a lot of code. The kinds of skills that we increasingly want and need from people are the things that machines are still just no good at: flexible, nuanced thinking, social and emotional intelligence, critical analysis and reflection.

We have to develop new habits in society and education to cultivate these virtues. It’s incumbent on us to develop the intellectual and moral capacities we need if humans are to thrive in a future with artificial intelligence. We also need to conserve our understanding of the moral complexity of life, and not let it get flattened out by algorithmic representations of human behaviour.

Compassion, justice, hope – these are things that humans have been seeking and fighting for, for thousands of years. That they can’t be represented by an algorithm is no excuse to set them aside.



To learn more about the Investment Innovation Conference, please visit the conferences section of the CIR website.