The Trolley Problem, Autonomous Cars and the Road Traffic and Roads Bill 2021
Two seemingly innocuous sections in the Road Traffic and Roads Bill 2021, which passed Second Stage late last year, are likely to have an outsized impact on our roads, and on the cars and drivers using them, into the future. While they are an instance of the law trying to keep pace with, or catch up to, a reality on our roads, they also codify into law provisions and definitions that have not yet been subjected to the ethical and philosophical examination I think they deserve.
Section 5(a)(ii) reads as follows:
(ii) by the substitution of the following definition for the definition of “driving”:
“ ‘driving’ includes—
(a) managing and controlling,
(b) in the case of an autonomous vehicle during periods of time in which the vehicle is moving autonomously, monitoring, overseeing and supervising
Part (b) here includes autonomous vehicles (AVs) in the legal definition of driving, so long as they are monitored or overseen by the driver – however we come to define that.
Section 44(b) reads as follows:
The Act of 1993 is amended in section 13—
(b) by the substitution of the following subsection for subsection (8):
(e) provide any structure or infrastructure on, in, under or over a road for, or in connection with—
(i) the charging of electric vehicles,
(ii) the provision of information to road users, or
(iii) the transmission of information to vehicles being used on a road.”
Part (iii) here could be interpreted as essentially allowing a vehicle to access the information systems that would enable it to operate as self-driving or autonomous.
Now, I’m no Luddite and I have no particular objection to AVs in general. The technology isn’t there yet, but it’s getting there, although it’s taking a good deal longer than many anticipated. Many aspects are in place already, in the guise of driver-assist features or the like, and are already delivering road safety improvements. Indeed, if AVs one day become measurably safer than human drivers, as machine learning is likely to make them, there may be an ethical question as to whether humans should be allowed to drive at all.
However, that’s not where we currently are, and in my view neither the technology nor the underpinning ethical considerations are yet fully developed.
This is where the Trolley Problem comes in.
The Trolley Problem is an old ethical and philosophical thought experiment which goes as follows:
A trolley is running out of control down a train line. If it continues on its current path, it will kill five people. However, by pulling a lever, you can divert the trolley to another line, where it will kill one person. So which is the more moral choice: to allow five people to die through inaction, or to kill one person through your action?
There is no one set answer, and different philosophical approaches lead to different conclusions.
When it comes to driver behaviour and in particular how any of us would drive in the instance of a crash, there’s a tacit acknowledgement that philosophical considerations go out the window. I count myself very lucky never to have been involved in a serious collision. In that split-second, I don’t know if I would act as an altruist or in self-interest. In all likelihood, I wouldn’t think at all, and would react on instinct – I just don’t know.
But in the case of autonomous vehicles, those value judgements will have to be coded into the software from the start. Should the car *choose* not to act, thereby allowing five people to die, or *choose* to act, thereby killing one? What if the one person is the driver – does that change the calculus? And one can shift the parameters of the experiment to explore the edges of our thinking – what if the driver is very old and the five are all children? How should the car then *choose* to act?
The word ‘choose’ is deliberately italicised above. Because while not every situation can be anticipated, including the physical ability of the car to enact its decisions successfully, the car will not, in fact, make any choice. It will act according to its coding. The ethical or moral choice will have been made on the factory floor and on the forecourt.
Because which car would you choose to buy, if both sat side by side on the forecourt? The one biased towards you, the driver, or the one programmed to minimise damage and death overall? At a society-wide level, I would prefer all cars to be programmed for the latter, but would I be quite so civic-minded if it were myself, and my family, I was buying for?
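To make the point concrete, here is a deliberately crude sketch of how such a factory-floor "choice" might look in code. Everything in it is hypothetical – the names, the outcomes and the single weighting parameter are illustrative assumptions, not a description of any real AV system – but it shows how the whole ethical question can collapse into one number fixed before the car ever leaves the forecourt.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible manoeuvre and its predicted casualties (hypothetical)."""
    action: str
    occupant_deaths: int
    bystander_deaths: int

def choose_manoeuvre(outcomes, occupant_weight=1.0):
    """Pick the manoeuvre with the lowest weighted death toll.

    occupant_weight == 1.0 treats every life equally;
    occupant_weight > 1.0 biases the car towards its own occupants.
    That single parameter is the Trolley Problem, settled in advance.
    """
    return min(
        outcomes,
        key=lambda o: occupant_weight * o.occupant_deaths + o.bystander_deaths,
    )

# The classic set-up: carry on and kill five, or swerve and kill the driver.
outcomes = [
    Outcome("stay on course", occupant_deaths=0, bystander_deaths=5),
    Outcome("swerve", occupant_deaths=1, bystander_deaths=0),
]

# An equal-weighting car swerves; a strongly self-preserving one does not.
print(choose_manoeuvre(outcomes).action)                       # swerve
print(choose_manoeuvre(outcomes, occupant_weight=6.0).action)  # stay on course
```

The car "decides" nothing at the moment of crisis: it evaluates a rule, and the rule – including whose lives count for more – was chosen by whoever wrote, approved and bought the software.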
In fact, we have already played out an unthinking version of the Trolley Problem in the explosion of SUV sales. SUVs are perceived to be safer for the driver (though the evidence doesn’t necessarily support that), but they could hardly be argued to be better for society generally, whether in terms of the safety of other drivers and road users or in the wider context, such as their emissions profile. Yet they have become the most popular type of vehicle sold.
So, while the law is lagging reality, as is so often the case, it is still running ahead of a wider ethical discussion of what a world with AVs might look like, and of how that might interact with the rights and concerns of other road users. And while marketing executives and futurists alike might be falling over themselves to present AVs as the next big thing in transport, here to solve our many ills (as here), I remain to be convinced that the answer to urban congestion, life-cycle carbon emissions or suburban sprawl is more cars, albeit self-driving ones.
Given the choice, I would prefer to create a framework and an infrastructure that favoured autonomous children over autonomous cars.