# Who’s responsible when an AI kills someone?


With the recent news that an autonomous Uber vehicle struck and killed a woman crossing the street in Tempe, Arizona, this ethical question is very timely. Here is part of an answer, published in the MIT Technology Review:

> Criminal liability usually requires an action and a mental intent (in legalese, an actus reus and mens rea). Kingston says Hallevy explores three scenarios that could apply to AI systems.
>
> The first, known as perpetrator via another, applies when an offense has been committed by a mentally deficient person or an animal, who is therefore deemed to be innocent. But anybody who has instructed the mentally deficient person or animal can be held criminally liable. For example, a dog owner who instructs the animal to attack another individual can be prosecuted for the attack.

The whole article is worth reading, as it delves deeper into all the possible scenarios.

# Google’s new AI algorithm predicts heart disease by looking at your eyes


> Scientists from Google and its health-tech subsidiary Verily have discovered a new way to assess a person’s risk of heart disease using machine learning. By analyzing scans of the back of a patient’s eye, the company’s software is able to accurately deduce data, including an individual’s age, blood pressure, and whether or not they smoke. This can then be used to predict their risk of suffering a major cardiac event — such as a heart attack — with roughly the same accuracy as current leading methods.

Just like the first two technological revolutions (steam and electricity), the third one (software), which we are experiencing now, has only just begun.

[Source: Google’s new AI algorithm predicts heart disease by looking at your eyes]