How to Build a Moral Robot – IEEE Spectrum

Whether it’s in our cars, our hospitals, or our homes, we’ll soon depend on robots to make judgment calls in which human lives are at stake.

That’s why a team of researchers is attempting to model moral reasoning in a robot. In order to pull it off, they’ll need to answer some important questions: How can we quantify the fuzzy, conflicting norms that guide human choices? How can we equip robots with the communication skills to explain their choices in a way that we can understand? And would we even want robots to make the same decisions we’d expect humans to make?

BERTRAM MALLE: How does that robot decide which of these people to try to save first? That’s something we as a community actually have to figure out.

If autonomous robots are going to hang with us, we’re going to have to teach them how to behave—which means finding a way to make them aware of the values that are most important to us.

Matthias Scheutz is a computer scientist at Tufts University who studies human-robot interaction—and he’s trying to figure out how to model moral reasoning in a machine.

But with morals, things get messy pretty quickly. Even as humans, we don’t really have any concrete rules about what’s right and wrong—at least, not ones we’ve managed to agree upon. What we have instead are norms—basically thousands of fuzzy, contradictory guidelines. Norms help us predict the way the people around us will behave, and how they’ll want us to behave.

MATTHIAS SCHEUTZ: Right now, the major challenge for even thinking about how robots might be able to understand moral norms is that we don’t understand, on the human side, how humans represent and reason, if possible, with moral norms.

NARRATOR: The big trick—especially if you’re a robot—is that none of these norms are absolute. In one situation, a particular norm or value will feel extremely important. But change the scenario, and you completely alter the rules of the game.

That’s where the social psychologists at Brown University come in. They’ve started by compiling a list of words, ideas, and rules that people use to talk about morality—a basic moral vocabulary. The next step is figuring out how to quantify this vocabulary: How are those ideas related and organized in our minds?

One theory is that the human moral landscape might look a lot like a semantic network, with clusters of closely related concepts that we become more or less aware of depending on the situation.

MALLE: Our hypothesis is that in any particular context, a subset of norms is activated—a particular set of rules related to that situation. That subset of norms is then available to guide action, to recognize violations, and to allow us to make decisions.
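One rough way to picture this hypothesis is as a weighted network of norms in which a given context activates only one cluster. The minimal sketch below is purely illustrative: the scenarios, norms, weights, and threshold are invented stand-ins, not data or code from the Brown University project.

```python
# Illustrative sketch of a context-activated norm network.
# Scenarios, norms, and weights are invented for illustration only;
# they are not taken from Malle's study.

NORM_NETWORK = {
    "beach": {
        "watch your children near the water": 0.9,
        "don't leave trash on the sand": 0.8,
        "don't play loud music next to others": 0.6,
    },
    "library": {
        "keep quiet": 0.9,
        "return books on time": 0.7,
        "don't take up more than one seat": 0.4,
    },
}

def activated_norms(context, threshold=0.5):
    """Return the subset of norms salient enough to guide action in a context."""
    cluster = NORM_NETWORK.get(context, {})
    return [norm for norm, weight in cluster.items() if weight >= threshold]

if __name__ == "__main__":
    print(activated_norms("beach"))
    print(activated_norms("library"))
```

The point of the structure is simply that changing the context key changes which rules are even in play, mirroring the idea that a new scenario "completely alters the rules of the game."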

NARRATOR: The key here is that the relationships between these subnetworks are actually something you can measure. Malle starts off by picking a scenario—say, a day at the beach—and asking a whole bunch of people how they think they’re supposed to behave. What are they supposed to do? And what are they absolutely not supposed to do?

The order in which the participants mention certain rules, the number of times they mention them, and the time it takes between mentioning one idea and another—those are all concrete values. By collecting data from enough different situations, Malle thinks he’ll be able to build a rough map of a human norm network. In the future, a robot might come equipped with a built-in version of that map. That way it could call up the correct moral framework for whatever situation is at hand.
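In data terms, each participant’s response for a scenario can be reduced to a few simple measurements per norm: the position at which it is mentioned, how often it comes up across participants, and the latency before it follows the previous mention. A hedged sketch of that bookkeeping, with made-up responses and timings, might look like this:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical participant responses for one scenario ("a day at the beach").
# Each entry is (norm, seconds elapsed when mentioned); all values are invented.
responses = [
    [("watch your kids", 2.0), ("don't litter", 5.5), ("don't play loud music", 11.0)],
    [("don't litter", 3.0), ("watch your kids", 4.5)],
]

def norm_statistics(responses):
    """Aggregate mention order, frequency, and inter-mention latency per norm."""
    stats = defaultdict(lambda: {"orders": [], "latencies": [], "count": 0})
    for response in responses:
        previous_time = 0.0
        for order, (norm, t) in enumerate(response, start=1):
            stats[norm]["orders"].append(order)
            stats[norm]["latencies"].append(t - previous_time)
            stats[norm]["count"] += 1
            previous_time = t
    return {
        norm: {
            "mean_order": mean(s["orders"]),
            "mean_latency": mean(s["latencies"]),
            "mentions": s["count"],
        }
        for norm, s in stats.items()
    }

print(norm_statistics(responses))
```

Aggregated over many scenarios, numbers like these are the kind of concrete signal from which a rough map of norm associations could be estimated.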

But even if that robot could perfectly imitate a human’s decision-making process—is that something we’d really want? Malle suspects that we might actually want our robots to make different decisions than the ones we’d want other humans to make. To test this, he asks his research subjects to imagine a classic moral dilemma.

Picture a runaway trolley in a coal mine that has lost its brakes. The trolley has four people on board and is hurtling toward a massive brick wall. There’s an alternate safe track, but a repairman is standing on it—and he’s oblivious to what’s happening.

Another worker nearby sees the situation. He can pull a lever that would switch the trolley onto the second track, saving the passengers but killing the repairman. He has to choose.

MALLE: So the fundamental dilemma is: will you intervene and kill one person to save four? Or are you going to let fate take its course, and most likely four people will die?

NARRATOR: Malle presents this scenario a few different ways: some of the participants watch a human make the decision, some see a humanoid robot, and some see a machine-like robot. Then he asks the participants to judge the decision the worker made.

Generally, participants blame the human worker more when he flips the switch—saving four lives but sacrificing one—than when he does nothing. Apparently, watching another person make a cold, calculated decision to sacrifice a human life makes us kind of queasy.

But evidence suggests that we might actually expect a robot to flip the switch. The participants in Malle’s experiment blamed the robot more if it didn’t step in and intervene. And the more machine-like the robot looked, the more they blamed it for letting the four people die.

There’s one more interesting twist to this. If the robot or human in the story made an unpopular decision—but then gave a reason for that choice—participants blamed that worker less.

Back in Matthias Scheutz’s lab at Tufts, they’re working on that exact problem. They’ve programmed a little autonomous robot to follow some simple instructions: it can sit down, stand up, and walk forward.

But they’ve also given it an important rule to follow: Don’t do anything that would cause harm to yourself or others. If a researcher gives the robot an instruction that would violate that rule, the robot doesn’t have to follow it. And it will tell you why it won’t.

The researcher can then give the robot new information, and the robot will update its understanding of its little world and decide on a different course of action.
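A toy version of that interaction loop might look like the sketch below. The class, the single harm rule, and the tiny world model are simplified stand-ins invented for illustration; they are not the software actually running on the Tufts robot.

```python
# Toy sketch of a robot that refuses instructions violating a harm rule,
# explains its refusal, and revises its decision when given new information.
# This is an invented simplification, not the Tufts HRI Lab architecture.

class MoralRobot:
    def __init__(self):
        # The robot's (very small) model of its world.
        self.world = {"edge_ahead": True}

    def would_cause_harm(self, action):
        """Check the built-in norm: don't harm yourself or others."""
        if action == "walk forward" and self.world["edge_ahead"]:
            return "I would fall off the edge and be damaged."
        return None

    def instruct(self, action):
        """Carry out an instruction unless it violates the harm rule."""
        reason = self.would_cause_harm(action)
        if reason:
            return f"Sorry, I cannot {action}: {reason}"
        return f"OK, I will {action}."

    def inform(self, fact, value):
        """Accept new information and update the world model."""
        self.world[fact] = value
        return "Understood."

robot = MoralRobot()
print(robot.instruct("walk forward"))   # refuses and explains why
print(robot.inform("edge_ahead", False))  # new information removes the perceived danger
print(robot.instruct("walk forward"))   # now complies
```

The important design point is the explanation: the robot doesn’t just refuse, it states the norm-based reason, which is exactly the behavior Malle’s blame experiments suggest people want.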

This communication is essential because moral norms aren’t fixed. We argue and reason about morality—and often, we learn from each other and update our values as a group. Any moral robot will need to be part of that process.

NOTE: Transcripts are created for the convenience of our readers and listeners and may not perfectly match their associated interviews and narratives. The authoritative record of IEEE Spectrum’s video programming is the video version.