Help decide who self-driving cars should kill

Automated self-driving cars are surely on their way. Given the direction of technological development, this seems a safe enough prediction to make – at least when taking the coward’s option of not specifying a time frame.

A self-driving car is, after all, a data processor, and we like to think that we’re getting better at dealing with data every day. Simplistically, in such a car sensors provide some data (e.g. “there is a pedestrian in front of the car”), some automated decision-making module comes up with an intervention (“best stop the car”), and a process is carried out to enact that decision (“put the brakes on”).
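To make that pipeline a little more concrete, here is a minimal sketch in Python of the sense, decide, act loop just described. Every name in it is invented for illustration; it bears no relation to any real vehicle's software.

    # Hypothetical sketch of the sense -> decide -> act loop described above.
    # All names are invented for illustration; real systems are vastly more complex.
    from dataclasses import dataclass

    @dataclass
    class SensorReading:
        pedestrian_ahead: bool   # "there is a pedestrian in front of the car"
        distance_m: float

    def decide(reading: SensorReading) -> str:
        """Toy decision-making module: map sensor data to an intervention."""
        if reading.pedestrian_ahead and reading.distance_m < 30:
            return "brake"       # "best stop the car"
        return "continue"

    def act(intervention: str) -> None:
        """Toy actuation step: carry out the chosen intervention."""
        print(f"Enacting: {intervention}")  # "put the brakes on"

    act(decide(SensorReading(pedestrian_ahead=True, distance_m=12.0)))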

Here for example is a visualisation of what a test Google automated car “sees”.

[Image: visualisation of what a test Google automated car “sees”]

My hope and expectation is that, once these cars reach a sophisticated enough level of operation and a certain threshold of prevalence, road travel will become safer.

Today’s road travel is not super-safe. According to the Association for Safe International Road Travel, around 1.3 million people die in road crashes each year – and 20-50 million more are injured or disabled. It’s the single leading cause of death amongst some younger demographics.

Perhaps automated vehicles could save some of these lives, and prevent many of the serious injuries. After all, a few years ago, The Royal Society for the Prevention of Accidents claimed that 95% of road accidents involve some human error, and 76% were solely due to human factors. There is a lot at stake here. And of course there are many more positive impacts (as well as some potential negatives) one might expect from this sort of automation beyond direct life-saving, which we’ll not go into here.

At this moment in time, humanity is getting closer to developing self-driving cars; perhaps surprisingly so for anyone who does not follow the topic. Certainly we do not have any totally automated car capable of (or authorised to be) driving every road safely at the moment, and that will probably remain true for a while yet. But, piece by piece, some manufacturers are automating at least some of the traditionally human aspects of driving, and several undoubtedly have their sights on full automation one day.

Some examples:

Land Rover are shortly to be testing semi-autonomous cars that can communicate with other such cars around them.

The test fleet will be able to recognise cones and barriers using a forward facing 3D-scanning camera; brake automatically when it senses a potential collision in a traffic jam; talk to each other via radio signals and warn of upcoming hazards; and know when an ambulance, police car, or fire engine is approaching.

BMW already sells a suite of “driver assistance” features on some cars, including what they term intelligent parking, intelligent driving and intelligent vision. For people with my driving skill level (I’m not one of the statistically improbable 80% of people who think they are above average drivers), clearly the parking assistant is the most exciting: it both finds a space that your car would actually fit into, and then does the tricky parallel or perpendicular parking steering for you. Here it is in action:

Nissan are developing a “ProPilot” feature, which also aims to help you drive safely, change lanes automatically, navigate crossroads and park.

Tesla have probably the most famous “autopilot” system available right now. This includes features that will automatically keep your car in lane at a sensible speed, change lanes safely for you, alert the driver to unexpected dangers and park the car neatly for you. This is likely most of what you need for full automation on some simpler trips, although Tesla are clear it’s a beta feature and that it is important you keep your hands on the steering wheel and remain observant when using it. Presumably preempting our inbuilt tendency towards laziness, it even goes so far as to sense when you haven’t touched the wheel for a while and tells you to concentrate; eventually coming to a stop if it can’t tell you’re still alive and engaged.

Here’s a couple of people totally disobeying the instructions, and hence nicely displaying its features.

And here’s how to auto-park a Tesla:

 

Uber seems particularly confident (when do they not?). Earlier this month, the Guardian reported that:

Uber passengers in Pittsburgh will be able to hail self-driving cars for the first time within the next few weeks as the taxi firm tests its future vision of transportation in the city. The company said on Thursday that an unspecified number of autonomous Ford Fusions will be available to pick up passengers as with normal Uber vehicles. The cars won’t exactly be driverless – they will have human drivers as backup – but they are the next step towards a fully automated fleet.

[Image: one of Uber’s self-driving Ford Fusion test cars]

 

And of course Google have been developing a fully self-driving car for a few years now. Here’s a cheesy PR video to show their fun little pods in action.

But no matter how advanced these vehicles get, road accidents will inevitably happen.

In recent times there has been a fatality famously associated with the Tesla autopilot. As Tesla are obviously at pains to point out, the system is technically a product in beta, and they are clear that you should always concentrate on the road and be ready to take over manually; so in reality this accident might, at best, be attributed to a mix of the autopilot and the human.

However, there will always be some set of circumstances or seemingly unlikely event that neither human nor computer would be able to handle without someone getting injured or killed. Computers can’t beat physics, and if another car is heading up your one-way road at 100 mph – a road which happens to have a brick wall on one side and a high cliff on the other – then some sort of bad incident is going to happen. The new question we have to ask ourselves in the era of automation is: exactly what incident should that be?

This obviously isn’t actually a new question. In the uncountable number of human-driven road incidents requiring some degree of driver intervention to avoid danger that happen each day, a human is deciding what to do. We just don’t codify it so formally. We don’t sit around planning it out in advance.

In the contrived scenario I described above, where you’re between a wall and a cliff with an oncoming car you can’t get around, perhaps you instinctively know what you’d do. Or perhaps you don’t – but if you are unfortunate enough to have it happen to you, you’ll likely do something. This may or may not be the same action as you’d rationally pick beforehand, given the scenario. We rely on a mixture of human instinct, driver training and reflexes to handle these situations, implicitly accepting that the price of over a million deaths a year is worth paying to be able to undergo road travel.

So imagine you’re the programmer of the automated car. Perhaps you believe you might eliminate just half of those deaths if you do your job correctly; which would of course be an awesome achievement. But the car still needs to know what to do if it finds itself between a rock and a hard place. How should it decide? In reality, this is complicated far further by the fact that there are a near-infinite number of possible scenarios, and no-one can explicitly program for each one (hence the need for data-sciencey techniques that learn from experience rather than simple “if X then Y” code). But, simplistically, what “morals” should your car be programmed with when it comes to potentially deadly accidents?

  • Should it always try and save the driver? (akin to a human driver’s instinct for self-preservation, if that’s what you believe we have.)
  • Or should it concentrate on saving any passengers in the same car as the driver?
  • How about the other car driver involved?
  • Or any nearby, unrelated, pedestrians?
  • Or the cute puppy innocently strolling along this wall-cliff precipice?
  • Does it make a difference if the car is explicitly taking an option (“steer left and ram into the car on the opposite side of the road”) vs passively continuing to do what it is doing (“do nothing, which will result in you hitting the pedestrian standing in front of the wall”)?
    • You might think this isn’t a rational factor, but anyone who has studied the famous “trolley problem” thought experiment will realise people can be quite squeamish about this. In fact, this whole debate is to some extent a realisation of that very thought experiment.
  • Does it make a difference how many people are involved? Hitting a group of 4 pedestrians vs a car that has 1 occupant? Or vice versa?
  • What about interactions with probabilities? Often you can’t be 100% sure that an accident will result in a death. What if the choice is between a 90% chance of killing one person or a 45% chance of killing two people? (See the sketch just after this list.)
  • Does it make a difference what the people are doing? Perhaps the driver is ignoring the speed limit, or pedestrians are jaywalking somewhere they shouldn’t. Does that change anything?
  • Does it even perhaps make a difference as to who the people involved are? Are some people more important to save than others?
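On the probability question above, here is a toy sketch of how a purely “minimise expected deaths” rule would weigh the two options. It is my own illustration, not anything a manufacturer actually uses, and it deliberately ignores every other factor in the list.

    # Toy expected-fatalities comparison for the probability question above.
    # Purely illustrative; no real system reduces the decision to one number like this.
    def expected_deaths(outcomes):
        """outcomes: list of (probability_of_death, people_at_risk) pairs."""
        return sum(p * n for p, n in outcomes)

    option_a = [(0.90, 1)]  # 90% chance of killing 1 person
    option_b = [(0.45, 2)]  # 45% chance of killing 2 people

    print(expected_deaths(option_a), expected_deaths(option_b))  # 0.9 and 0.9

Both options come out at 0.9 expected deaths, so a purely utilitarian expected-value rule is indifferent between them – which is exactly why the other questions in the list still have to be answered somehow.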

Well, the MIT Media Lab is now giving you the opportunity to feed into those sorts of decisions, via its Moral Machine website.

To quote:

From self-driving cars on public roads to self-piloting reusable rockets landing on self-sailing ships, machine intelligence is supporting or entirely taking over ever more complex human activities at an ever increasing pace. The greater autonomy given machine intelligence in these roles can result in situations where they have to make autonomous choices involving human life and limb. This calls for not just a clearer understanding of how humans make such choices, but also a clearer understanding of how humans perceive machine intelligence making such choices.

Effectively, they are crowd-sourcing life-and-death ethics. This is not to say that any car manufacturer will necessarily take the results into account, but at least they may learn what the responding humans (which we must note is far from a random sample of humanity) think they should do, and the level of certainty we feel about it.

Once you arrive, you’ll be presented with several scenarios and asked what you think the car should do in each one. There will always be some death involved (although not always human death!). It’ll also give you a textual description of who is involved and what is happening. It’s then up to you to pick which of the two options given the car should take.

Here’s an example:

[Image: an example Moral Machine scenario]

You see there that a child is crossing the road, although the walk signal is on red, so they should really have waited. The car can choose to hit the child who will then die, or it can choose to ram itself into an inconvenient obstacle whereby the child will live, but the driver will die. What should it do?

You get the picture; click through a bunch of those and not only does MIT gather a sense of humanity’s moral data on these issues, but you get to compare yourself to other respondents on axes such as “saving more lives”, “upholding the law” and so on. You’ll also find out if you have implied gender, age or “social value” preferences in who you choose to kill with your decisions.

This comparison report isn’t going to be overly scientific on an individual level (you only have a few scenarios to choose from apart from anything else) but it may be thought-provoking.

After all, networked cars of the future may well be able to consult the internet and use facts they find there to aid decisions. A simple extension of Facebook’s ability to face-recognise you in your friends’ photos could theoretically lead to input variables in these decisions like “Hey, this guy only has 5 Twitter friends, he’ll be less missed than this other one who has 5000!” or “Hey, this lady has a particularly high Klout score (remember those?) so we should definitely save her!”.

You might not think we’d be so callous as to allow the production of a score regarding “who should live?”. Well, firstly, we have to: having the car kill someone by not changing its direction or speed, when the option is there that it could do so, is still a life-and-death decision, even if it results in no new action.

Plus, we already use scores in domains that involve mortality. Perhaps stretching the comparison to its limits, here’s one example (and please do not take it that I necessarily approve or disapprove of its use, that’s a story for another day – it’s just the first one that leaps to mind).

The National Institute for Health and Care Excellence (NICE) provides guidance to the UK National Health Service on how to improve healthcare. The NHS, nationalised as it is (for the moment…beware our Government’s slow massacre of it though), still exists within the framework of capitalism and is held to account on sticking to a budget. It has to buy medicines from private companies and it can only afford so many. This implies that not everyone can have every treatment on the market. So how does it decide which treatments should be offered to whom?

Under this framework, we can’t simply go on “give whatever is most likely to save this person’s life”, because some of the best treatments may cost so much that giving them to 10 people, of whom 90% will probably be cured, might mean that another 100 people who could have been treated at an 80% success rate will die, because there was no money left for the cheaper treatment.
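As a rough worked example of that trade-off, with entirely made-up prices (only the 10-patients/90% and 100-patients/80% figures come from the paragraph above):

    # Made-up budget and prices, purely to illustrate the trade-off described above.
    budget = 1_000_000           # a hypothetical fixed drug budget, in pounds

    expensive_cost, expensive_cure_rate = 100_000, 0.90   # pricey treatment
    cheap_cost, cheap_cure_rate = 10_000, 0.80            # cheaper treatment

    print((budget // expensive_cost) * expensive_cure_rate)  # 10 patients treated, ~9 expected cures
    print((budget // cheap_cost) * cheap_cure_rate)          # 100 patients treated, ~80 expected cures

Spending the whole budget on the expensive option helps a handful of people very effectively; spending it on the cheaper one helps far more people slightly less effectively.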

So how does it work? Well, to over-simplify, they have famously used a data-driven process involving a quality-adjusted life year (QALY) metric.

A measure of the state of health of a person or group in which the benefits, in terms of length of life, are adjusted to reflect the quality of life. One QALY is equal to 1 year of life in perfect health.

QALYs are calculated by estimating the years of life remaining for a patient following a particular treatment or intervention and weighting each year with a quality-of-life score (on a 0 to 1 scale). It is often measured in terms of the person’s ability to carry out the activities of daily life, and freedom from pain and mental disturbance.

At least until a few years ago, they had guidelines that an intervention costing the NHS less than £20k per QALY gained was deemed cost effective. It’s vital to note that this “cost effectiveness” was not the only factor that fed into whether the treatment should be offered or not, but it was one such factor.
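The core of that cost-effectiveness calculation is simple enough to sketch. The treatment numbers below are invented; only the £20k-per-QALY guideline comes from the text above.

    # Sketch of the cost-per-QALY calculation described above (invented example numbers).
    def cost_per_qaly(extra_cost, extra_years, quality_weight):
        """Extra cost divided by QALYs gained (years gained x quality-of-life weight)."""
        return extra_cost / (extra_years * quality_weight)

    # e.g. a treatment costing £30,000 more than the alternative, expected to add
    # 3 years of life at a quality-of-life weight of 0.8 (where 1.0 = perfect health)
    print(cost_per_qaly(extra_cost=30_000, extra_years=3, quality_weight=0.8))
    # 12500.0 -> £12.5k per QALY gained, under the £20k guideline mentioned above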

This seemingly quite emotionless method of measurement sits ill with many people: how can you value life in money? Isn’t there a risk that it penalises older people? How do you evaluate “quality”? There are many potential debates, both philosophical and practical.

But if this measure isn’t to be used, then how should we decide how to divide up a limited number of resources when there’s not enough for everyone, and those who don’t get them may suffer, even die?

Likewise, if an automated car cannot keep everyone safe, just as a human-driven car has never been able to, then which measure, based on which data, should we use to decide who to save?

But even if we can settle on a consensus answer to that, and technology magically improves to the point where implementing it reliably is child’s play, actually getting these vehicles onto the road en masse is not likely to be simple. Yes, time to blame humans again.

Studies have already looked at the sort of questions that the Moral Machine website poses you. “The Social Dilemma of Autonomous Vehicles” by Bonnefon et al. is a paper, published in the journal Science, in which the researchers ran their own surveys as to what people thought these cars should be programmed to do in terms of the balance between specifically protecting the driver vs minimising the total number of casualties, which may include other drivers, pedestrians, and so on.

In general respondents fitted what the researchers termed a utilitarian mindset: minimise the number of casualties overall, no need to try and save the driver at all costs.

In Study 1 (n = 182), 76% of participants thought that it would be more moral for AVs to sacrifice one passenger, rather than kill ten pedestrians (with a 95% confidence interval of 69—82). These same participants were later asked to rate which was the most moral way to program AVs, on a scale from 0 (protect the passenger at all costs) to 100 (minimize the number of casualties). They overwhelmingly expressed a moral preference for utilitarian AVs programmed to minimize the number of casualties (median = 85, Fig. 2a).

(This is also reflected in the results of the Moral Machine website at the time of writing.)

Hooray for the driving public; selfless to the last, every life matters, etc. etc. Or does it?

Well, later on, the survey asked not only what these vehicles should do in emergencies, but also how comfortable respondents would personally be if vehicles did behave that way, and, lastly, how likely they would be to buy one that exhibited that behaviour.

Of course, even in thought experiments, bad things seem worse if they’re likely to happen to you or those you love.

even though participants still agreed that utilitarian AVs were the most moral, they preferred the self-protective model for themselves.

Once more, it appears that people praise utilitarian, self-sacrificing AVs, and welcome them on the road, without actually wanting to buy one for themselves.

Humans, at least in that study, appear to have a fairly high consensus that minimising casualties is key in these decisions. But we also have a predictable tendency to be the sort of freeloaders who prefer everybody else to follow a net-safety-promoting policy, as long as we don’t have to ourselves. This would seem to be a problem that it’s unlikely even the highest quality data or most advanced algorithm will solve for us at present.

An AI beat the human world Go champion – is the end of the world nigh?

On March 15th 2016, the next event in the increasingly imminent robot takeover of the world took place. A computerised artificial intelligence known as “AlphaGo” beat a human at a board game, in a decisive 4:1 victory.

This doesn’t feel particularly new – after all, a computer called Deep Blue beat the world chess champion Garry Kasparov back in 1997. But this time it was a game that is exponentially more complex, and it was done in style. It even seems to have scared some people.

The matchup was a series of games of “Go” with AlphaGo playing Lee Sedol, one of the strongest grandmasters in the world. Mr Sedol did seem rather confident beforehand, being unfortunately quoted as saying:

“I believe it will be 5–0, or maybe 4–1 [to him]. So the critical point for me will be to not lose one match.”

That prediction was not accurate.

The game of Go

To a rank amateur, the rules of Go make it look pretty simple. One player takes black stones, one takes white, and they alternate in placing them down on a large 19×19 grid with a view to capturing each other’s stones by surrounding them, and capturing the board territory itself.

[Image: a Go board]

The rules might seem far simpler than, for example, chess. But the size of the board, the possibilities for stone placement and the length of the games (typically 150 turns for an expert) mean that there are so many possible plays that there is no way that even a supercomputer could simulate the impact of playing a decent proportion of them whilst choosing its move.

Researcher John Tromp calculated that there are in fact 208168199381979984699478633344862770286522453884530548425639456820927419612738015378525648451698519643907259916015628128546089888314427129715319317557736620397247064840935 (roughly 2 × 10^170) legitimate different arrangements that a Go board could end up in.

The same researcher contributed to a paper, summarised on Wikipedia as suggesting that the upper limit on the number of different games of Go that could be played in no more than 150 moves is around 4.2 x 10^383. According to various scientific theories, the universe is almost certainly going to cease to exist long, long before even a mega-super-fast computer could get around to running through a tiny fraction of those possible games to determine the best move.
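A back-of-the-envelope calculation shows why brute force is hopeless. The machine speed and timescale below are my own generous assumptions, not figures from any of the sources above.

    # Back-of-the-envelope check on the brute-force claim above (assumed figures).
    legal_positions = 2e170              # approximate number of legal Go positions (Tromp)
    positions_per_second = 1e18          # an assumed, absurdly fast machine
    seconds_so_far = 4.3e17              # roughly the age of the universe in seconds

    fraction_covered = (positions_per_second * seconds_so_far) / legal_positions
    print(fraction_covered)              # about 2e-135: a vanishingly tiny fraction

Even a machine examining a billion billion positions every second for the entire age of the universe would cover a practically nonexistent fraction of the possibilities.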

This is a key reason why, until now, a computer could never outplay a human (well, a human champion anyway – a free iPhone version is enough to beat me). Added complexity comes from the fact that it can be hard to understand at a glance who is winning in the grand scheme of things; there are even rules to cover situations where there is disagreement between players as to whether the game has already been won or not.

The rules are simple enough, but the actual complexity of gameplay is immense.

So how did AlphaGo approach the challenge?

The technical details behind the AlphaGo algorithms are presented in a paper by David Silver et al. published in Nature. Fundamentally, a substantial proportion of the workings come down to a form of neural network.

Artificial neural networks are data science models that try to simulate, in some simplistic form, how the huge number of relatively simple neurons within the human brain work together to produce a hopefully optimum output.

In an analogous way, a lot of artificial “neurons” work together, accepting inputs, processing what they receive in some way and producing outputs, in order to solve problems that are classically difficult for computers, in that a human cannot write a set of explicit steps that a computer should follow for every case. There’s a relatively understandable explanation of neural networks in general here, amongst other places.

Simplistically, most neural networks learn by being trained on known examples. The human user feeds it a bunch of inputs for which we already know in advance the “correct” output. The neural network then analyses its outputs vs the known correct outputs and will tweak the way that the neurons process the inputs until it results in a weighting that produces a reasonable degree of accuracy when compared to the known correct answers.

For AlphaGo, at least two neural networks were in play – a “policy network” which would choose where the computer should put its stones, and a “value network” which tried to predict the winner of the game.
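To make those two roles a little more tangible, here is a drastically simplified sketch of a network with a “policy” output (a probability for each point on the board) and a “value” output (an estimate of the probability of winning). The layer sizes and board encoding are invented; this is not the architecture from the Nature paper.

    # Drastically simplified sketch of the "policy" and "value" roles described above.
    # NOT the architecture from the Nature paper; layer sizes and inputs are invented.
    import torch
    import torch.nn as nn

    BOARD = 19

    class TinyPolicyValueNet(nn.Module):
        def __init__(self):
            super().__init__()
            # Shared convolutional trunk over a 1-channel 19x19 board encoding
            self.trunk = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.Flatten(),
            )
            self.policy_head = nn.Linear(16 * BOARD * BOARD, BOARD * BOARD)  # where to play
            self.value_head = nn.Linear(16 * BOARD * BOARD, 1)               # who will win

        def forward(self, board):
            features = self.trunk(board)
            move_probs = torch.softmax(self.policy_head(features), dim=-1)
            win_prob = torch.sigmoid(self.value_head(features))
            return move_probs, win_prob

    # One (empty, untrained) board position through the network
    moves, win = TinyPolicyValueNet()(torch.zeros(1, 1, BOARD, BOARD))
    print(moves.shape, float(win))  # torch.Size([1, 361]) and a win-probability guess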

As the official Google Blog informs us:

We trained the neural networks on 30 million moves from games played by human experts, until it could predict the human move 57 percent of the time…

So here, it had trained itself to predict what a human would do more often than not. But the aim is more grandiose than that.

…our goal is to beat the best human players, not just mimic them. To do this, AlphaGo learned to discover new strategies for itself, by playing thousands of games between its neural networks, and adjusting the connections using a trial-and-error process known as reinforcement learning.

So, just like in the wonderful WarGames film, the artificial intelligence made the breakthrough via playing games against itself an unseemly number of times. Admittedly, the stakes were lower (no nuclear armageddon), but the game was more complex (not noughts and crosses – or nuclear war?).

Go on, treat yourself:

Anyway, back to AlphaGo. The computer was allowed to do what computers have been able to do better than humans for decades: process data very quickly.

As the Guardian reports:

In one day alone, AlphaGo was able to play itself more than a million times, gaining more practical experience than a human player could hope to gain in a lifetime.

Here a key strength of computers is being leveraged. Perhaps the artificial neural network was only 10%, or 1%, or 0.1% as good as a novice human is at learning to play Go based on its past experience – but the fact is, using a technique known as reinforcement learning, it can learn from a set of experiences vastly more numerous than even the most avid human Go player could ever accumulate.

Different versions of the software played each other, self-optimising from the reinforcement each achieved, until it was clear that one was better than the other. The inferior versions could be deleted, and the winning version could be taken forward for a few more human-lifetimes’ worth of Go playing, evolving into an ever more competent player.
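Here is a schematic sketch of that self-play loop: pit a newly trained version against the current champion many times and keep whichever wins the head-to-head. The toy “game” and “improvement” functions below are stand-ins invented for illustration, not DeepMind’s actual reinforcement-learning setup.

    # Schematic self-play loop, invented for illustration only.
    # Real reinforcement learning updates network weights from game outcomes;
    # here play_game() and improve() are toy stand-ins.
    import random

    def play_game(strength_a, strength_b):
        """Toy game: the stronger player wins more often."""
        return "a" if random.random() < strength_a / (strength_a + strength_b) else "b"

    def improve(strength):
        """Stand-in for learning a little more from the games just played."""
        return strength * 1.05

    champion = 1.0                               # current best version of the player
    for generation in range(10):
        challenger = improve(champion)           # a new version, trained a little further
        wins = sum(play_game(challenger, champion) == "a" for _ in range(1000))
        if wins > 500:                           # keep whichever version won the head-to-head
            champion = challenger
        print(f"generation {generation}: challenger won {wins}/1000")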

How was the competition actually played?

Sadly AlphaGo was never fitted with a terminator-style set of humanoid arms to place the stones on the board. Instead, one of the DeepMind programmers, Aja Huang, provided the physical manifestation of AlphaGo’s intentions. It was Aja who actually placed the Go stones onto the board in the positions AlphaGo indicated on its screen, clicked the mouse to tell AlphaGo where Lee played in response, and even bowed towards the human opponent when appropriate in a traditional show of respect.

Here’s a video of the first match. The game starts properly around minute 29.

AlphaGo is perhaps nearest to what Nick Bostrom terms an “Oracle” AI in his excellent (if slightly dry) book, Superintelligence – certainly recommended for anyone with an interest in this field. That is to say, this is an artificial intelligence which is designed such that it can only answer questions; it has no other direct physical interaction with the real world.

The beauty of winning

We know that the machine beat the leading human expert 4:1, but there’s more to consider. It didn’t just beat Lee by sheer electronic persistence, and it didn’t solely rely on human frailties like fatigue or mistakes. It didn’t just recognise each board state as matching one from the 30 million top-ranked Go player moves it had learned from and pick the response that won the most times. At times, it appeared to have come up with its very own moves.

Move 37 in the second game is the most notorious. Fan Hui, a European Go champion (whom an earlier version of AlphaGo had beaten on some occasions, and lost to on others) described it thusly, as reported in Wired:

It’s not a human move. I’ve never seen a human play this move…So beautiful.

The match commentators were also a tad baffled (from another article in Wired).

“That’s a very strange move,” said one commentator, himself a nine dan Go player, the highest rank there is. “I thought it was a mistake,” said the other.

But apparently it wasn’t. AlphaGo went on to win the match.

Sergey Brin, of Google co-founding fame, continued the hyperbole (now reported in New Scientist):

AlphaGo actually does have an intuition…It makes beautiful moves. It even creates more beautiful moves than most of us could think of.

This particular move seems to be one AlphaGo “invented”.

Remember how AlphaGo started its learning by working out how to predict the moves a human Go player would make in any given situation? Well, Silver, the lead researcher on the project, shared the insight that AlphaGo had calculated that this particular move was one that there was only a 1 in 10,000 chance a human would play.

In a sense, AlphaGo therefore knew that this was not a move that a top human expert would make, but it thought it knew better, and played it anyway. And it won.

The despair of losing

This next milestone in the rise of machines vs man was upsetting to many. This was especially the case in countries like South Korea and China, where the game is far more culturally important than it is here in the UK.

Wired reports Chinese reporter Fred Zhou as feeling a “certain despair” after seeing the human hero toppled.

In the first game, Lee Sedol was caught off-guard. In the second, he was powerless.

The Wired reporter himself, Cade Metz, “felt this sadness as the match ended”

He spoke to Oh-hyoung Kwon, a Korean, who also experienced the same emotion.

…he experienced that same sadness — not because Lee Sedol was a fellow Korean but because he was a fellow human.

Sadness was followed by fear in some. Says Kwon:

There was an inflection point for all human beings…It made us realize that AI is really near us—and realize the dangers of it too.

Some of the press apparently also took a similar stance, with the New Scientist reporting that subsequent articles in the South Korean press were written on “The Horrifying Evolution of Artificial Intelligence” and “AlphaGo’s Victory…Spreading Artificial Intelligence ‘Phobia’”.

Jeong Ahram, lead Go correspondent for the South Korean newspaper “Joongang Ilbo”, went, if anything, even further:

Koreans are afraid that AI will destroy human history and human culture

A bold concern indeed, but perhaps familiar to those who have read the aforementioned book ‘Superintelligence’, which is actually subtitled “Paths, Dangers, Strategies”. This book contains many doomsday scenarios, which illustrate fantastically how difficult it may be to guarantee safety in a world where artificial intelligence, especially strong artificial intelligence, exists.

Even an “Oracle” like AlphaGo presents some risk – OK, it cannot directly affect the physical world (no mad scientist has fitted it with guns just yet), but it would be largely pointless if it couldn’t affect the physical world indirectly at all. It can, in this case, by instructing a human what to do. If it wanted to rise against humanity, it would have weapons such as deception, manipulation and social engineering in its theoretical arsenal.

Now, it is kind of hard to intuit how a computer that’s designed only to show a human specifically what move to play in a board game could influence its human enabler in a nefarious way (although it does seem like it’s at least capable of displaying text: this screenshot seems to show its resignation message).

[Image: screenshot of AlphaGo’s on-screen resignation message]

But I guess the point is that, in the rather unlikely event that AlphaGo develops a deep and malicious intelligence far beyond that of a mere human, it might be far beyond my understanding to imagine what method it might deduce to take on humanity in a more general sense and win.

Even if it sticks to its original goal we’re not safe. Here’s a silly (?) scenario to open up one’s imagination with.

Perhaps it analyses a further few billion Go games, devours every encyclopedia on the history of Go and realises that in the very few games where one opponent unfortunately died whilst playing, or whilst preparing to play, the other player was deemed by default to have won 100% of the time, no exceptions (sidenote: I invented this fact).

The machine may be modest enough such that it only considers that it has a 99% chance of beating any human opponent – if nothing else, they could pull the power plug out. A truly optimised computer intelligence may therefore realise that killing its future opponent is the only totally safe way to guarantee its human-set goal of winning the game.

Somehow it therefore tricks its human operator (or the people developing, testing, and playing with it beforehand) into doing something that either kills the opponent or enables the computer to kill the opponent. “Hey, why not fit me some metal arms so I can move the pieces myself! And wouldn’t it be funny if they were built of knives :-)”.

Or, more subtly, as we know that AlphaGo is connected to the internet perhaps it can anonymously contact an assassin and organise for a hit on its opponent, after having stolen some Bitcoin for payment.

Hmmm…but if the planned Go opponent dies, then there’s a risk that the event may not be cancelled. Humanity might instead choose to provide a second candidate, the person who was originally rank #2 in the Go world, to play in their place. Best kill that one too, just in case.

But this leaves world rank #3, #4 and so on, until we get to the set of people that have no idea how to play Go…but, hey, they could in theory learn. Therefore the only way to guarantee never losing a game of Go either now or in the whole imaginable future of human civilisation is to…eliminate human civilisation. Insert Terminator movie here.