How can robots make responsible decisions?
As humans, we have the ability to study a situation and make sure we understand its dynamics before making a responsible decision. Understanding environmental dynamics means we have the foresight to predict the consequences of our actions and the actions of others, as well as events in nature. For instance, if, whilst driving, we notice a roadblock up ahead, we start to slow down so that we can come to a gentle stop. Foresight is essential for making responsible decisions in complex environments.
Unfortunately for robots, foresight is not so simple. It can be built in part, as we have seen already this week, by interpreting the environment. But the robot needs a sufficiently detailed and complete model of the environment, with rapidly updated positions of other vehicles and pedestrians.
In this article, we’ll look at some of the ways that robots can be programmed to make responsible decisions.
Predicting the consequences of planned actions
The consequences of a robot’s actions can vary. They may result in a physical change to the environment, including changes in the robot’s location; they may have legal consequences, such as a drone not being allowed to fly over a specific area; or they may have social consequences, such as causing a nuisance to neighbours.
There are two computational approaches for predicting the consequences of a robot’s actions. Firstly, we can physically simulate the robot’s actions in the environment to predict the future physical state of the world. Secondly, we can use logic to infer legal and social consequences in terms of breaking rules of behaviour.
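The two approaches can be sketched side by side. This is a minimal, illustrative example (all function names, numbers and zones are invented, not taken from any real robotics library): a simple physics simulation predicts where the robot will stop, while a logical check infers whether a planned position breaks a rule.

```python
def simulate_braking(speed_mps: float, decel_mps2: float, dt: float = 0.1) -> float:
    """Physically simulate braking to predict the future physical state:
    the distance travelled before the robot comes to a stop."""
    distance = 0.0
    while speed_mps > 0:
        distance += speed_mps * dt   # advance by current speed for one time step
        speed_mps -= decel_mps2 * dt  # apply constant deceleration
    return distance

def breaks_rules(planned_position: str, no_fly_zones: set) -> bool:
    """Logically infer a legal consequence: does the plan enter a no-fly zone?"""
    return planned_position in no_fly_zones

# Physical prediction: a robot at 10 m/s braking at 2.5 m/s^2
stop_distance = simulate_braking(speed_mps=10.0, decel_mps2=2.5)
will_stop_in_time = stop_distance < 30.0   # suppose the roadblock is 30 m ahead

# Logical prediction: a drone plan that would enter a restricted area
violation = breaks_rules("airfield", {"airfield", "hospital"})
```

The first function predicts a physical consequence by stepping a model of the world forward in time; the second infers a legal consequence by checking the plan against a rule set.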
Planning while keeping to rules
If concrete actions have not been planned in advance, the robot can use mathematical optimisation methods to decide how to act. Mathematical optimisation is the process of finding the best possible result under given conditions (called constraints). This type of planning can go beyond computing the consequences: it can choose the action whose predicted consequences give the best outcome.
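The idea can be illustrated with a toy optimisation problem (the cost function and all numbers are invented for illustration): the robot chooses a cruising speed that minimises a cost combining travel time and energy use, subject to a speed-limit constraint.

```python
def cost(speed: float, distance_m: float = 1000.0) -> float:
    """Toy cost: time to cover the distance plus a quadratic energy term."""
    travel_time = distance_m / speed                  # seconds
    energy = 0.05 * speed ** 2 * distance_m / 1000.0  # invented energy model
    return travel_time + energy

SPEED_LIMIT = 13.0  # m/s -- the constraint

# Brute-force search over candidate speeds from 1.0 to 19.9 m/s
candidates = [s / 10 for s in range(10, 200)]
feasible = [s for s in candidates if s <= SPEED_LIMIT]  # apply the constraint
best_speed = min(feasible, key=cost)                    # optimise within it
```

Here the unconstrained optimum would be faster than the speed limit, so the constraint binds and the best feasible action is to drive exactly at the limit. Real planners use far more efficient solvers than brute-force search, but the structure (objective plus constraints) is the same.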
This area of planning draws on a framework called ‘game theory’, in which each robot is considered a player in a game and receives rewards that depend on the actions of the whole robotic team. We’ll be looking at game theory in more detail in Week 3 when we investigate how robots can work together.
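A small flavour of this: in the toy coordination game below (payoffs invented for illustration), two robots each choose an area to search, and the team reward is higher when they do not duplicate each other’s effort.

```python
import itertools

actions = ["left", "right"]

# Team reward for each joint action: searching the same area wastes effort
team_reward = {
    ("left", "left"): 1,
    ("left", "right"): 5,
    ("right", "left"): 5,
    ("right", "right"): 1,
}

# The best joint action is the one maximising the whole team's reward
best_joint = max(itertools.product(actions, actions), key=team_reward.get)
```

Each robot’s reward depends not only on its own choice but on what its teammate does, which is exactly what distinguishes game-theoretic planning from single-robot optimisation.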
Using a well-defined set of primitive actions
Planning in continuous time over long periods can be computationally expensive. Planning can be simplified if the robot performs only a limited set of well-defined actions.
Primitive actions can reduce planning complexity by abstracting continuous world relationships into discrete ones. For instance: “If I move straight along this corridor, turn left and go through the first door on the right, then I will be in the kitchen”. The primitive actions are “going straight”, “turning left” and “going through a door”. The associated discrete abstractions of perceptions are “recognising the end of the corridor”, “finding the first door on the right” and “being in the kitchen”.
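The corridor example can be written as a discrete transition model (the state and action names below are invented to match the example): each primitive action maps one abstract state to the next, so a plan is just a short sequence of symbols rather than a continuous trajectory.

```python
# Discrete abstraction of the corridor world: (state, action) -> next state
transitions = {
    ("corridor_start", "go_straight"): "corridor_end",
    ("corridor_end", "turn_left"): "facing_doors",
    ("facing_doors", "enter_first_door_right"): "kitchen",
}

def execute_plan(state: str, plan: list) -> str:
    """Follow a sequence of primitive actions through the transition model."""
    for action in plan:
        state = transitions[(state, action)]
    return state

goal = execute_plan("corridor_start",
                    ["go_straight", "turn_left", "enter_first_door_right"])
```

Planning over a handful of discrete states like these is vastly cheaper than planning over every possible continuous position and heading in the corridor.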
Symbolic planning uses a rapidly updated model of the environment to decide on the most appropriate action for the robot.
Symbolic planning also needs to include rules of behaviour and take into account the functional, safety and societal consequences of actions taken by the robot. For instance, robots searching a building need to take into account the damage their search may cause, their own safety in the case of a burning building, and the level of disturbance they may cause to related human activity. In the case of an autonomous car, the highway code needs to be observed as a rule set; the car also needs to be polite to other drivers and to clearly express its intent to other traffic participants.
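One simple way to combine planning with rules of behaviour is to filter candidate plans through a rule set before choosing among them. In this invented sketch (all plan and rule names are illustrative), a building-search robot rejects any plan that would cause damage or endanger itself.

```python
# Behaviour rules: each rule returns True if the plan is acceptable
rules = [
    lambda plan: "break_window" not in plan,        # avoid property damage
    lambda plan: "enter_burning_room" not in plan,  # the robot's own safety
]

# Candidate plans as sequences of primitive actions
candidates = [
    ["break_window", "search_room"],  # fast but damaging
    ["open_door", "search_room"],     # slower but rule-abiding
]

def allowed(plan: list) -> bool:
    """A plan is allowed only if it satisfies every behaviour rule."""
    return all(rule(plan) for rule in rules)

legal_plans = [p for p in candidates if allowed(p)]
```

Real symbolic planners interleave rule checking with plan construction rather than filtering afterwards, but the principle is the same: rules of behaviour prune the space of acceptable actions.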
All this can be based on symbolic computations about the timing of events (temporal logics) and about the knowledge and intentions of others (epistemic logics). We’ll look at symbolic planning in more detail in the next step.
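To give a minimal flavour of temporal reasoning (with notation greatly simplified and the trace invented): two common temporal-logic operators, “eventually” and “always”, can be checked over a recorded sequence of robot states.

```python
# A recorded trace of abstract robot states over time
trace = ["corridor", "corridor", "doorway", "kitchen"]

def eventually(prop, trace):
    """Temporal-logic 'eventually': prop holds at some point along the trace."""
    return any(prop(state) for state in trace)

def always(prop, trace):
    """Temporal-logic 'always': prop holds at every point along the trace."""
    return all(prop(state) for state in trace)

reaches_kitchen = eventually(lambda s: s == "kitchen", trace)   # goal is met
never_in_bathroom = always(lambda s: s != "bathroom", trace)    # rule is kept
```

Checking finished traces like this is much simpler than what real temporal-logic planners do, which is to guarantee such properties for all future behaviour before acting; but the properties being stated are the same kind.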
© The University of Sheffield