Artificial Intelligence: Can Humans Sue Robots?
The term Artificial Intelligence was first proposed by John McCarthy in 1956 at the first academic conference on the subject. The idea of machines operating like human beings, however, had already captured scientists' imaginations: the mathematician Alan Turing asked whether it is possible for a machine to think and learn by itself. Turing put his hypothesis into practice by devising a test of the question "can machines think?" That test, later known as the Turing Test, takes a pragmatic approach: rather than defining thought, it asks whether a machine can respond in a way indistinguishable from a human.
As one news headline put it: "The first known case of humans going to court over investment losses triggered by autonomous machines will test the limits of liability."
“People tend to assume that algorithms are faster and better decision-makers than human traders,” said Mark Lemley, a law professor at Stanford University. “That may often be true, but when it’s not, or when they quickly go wrong, investors want someone to blame.”
So Hong Kong tycoon Samathur Li Kin-kan is doing the next best thing: he is going after the salesman who persuaded him to entrust his money to the supercomputer (AI) whose trades cost him more than $20 million. The details of the legal battle are drawn from filings to the Commercial Court in London, where the trial is scheduled to begin.
Facts of the case
It all started in Dubai, where Li met Costa, a 49-year-old Italian often known by peers in the industry as "Captain Magic." During their meal, Costa described a robot hedge fund that his company, London-based Tyndaris Investments, would soon offer: one that would manage money entirely using AI, or artificial intelligence, developed by the Austria-based AI company 42.cx.
The supercomputer, named K1, would comb through online sources such as real-time news and social media to gauge investor sentiment and make predictions on U.S. stock futures. It would then send instructions to a broker to execute trades, adjusting its strategy over time based on what it had learned. The idea of a fully automated money manager appealed to Li, and he agreed to use the AI for his business. The result was a loss of USD 20 million, which led him to sue Tyndaris. Tyndaris, in turn, claims that Li never disclosed how K1 would actually be used and that he exceeded its working capacity by deploying it in his firm to predict the future U.S. stock market; it counterclaims USD 3 million in unpaid fees for K1.
This is the first lawsuit of its kind arising from losses caused by a robot (AI). The interesting questions that now arise are:
- Who is responsible when intelligent systems fail or go haywire?
- Is the supplier of the system responsible, or the operator of the system?
- Do intelligent systems need legal representation? Should they be granted human-like rights?
- Are autonomous systems intelligent enough to take responsibility for their own actions?
Legal liability for artificial intelligence works the same as ordinary liability
In essence, liability for artificial intelligence is treated like liability in other legal cases. Numerous suits have been filed over defective electrical equipment, car accidents and firearms, in which the manufacturer is held responsible for the harm inflicted by the device. In each case the outcome follows a familiar pattern: for a manufacturing defect, the producer is responsible; where training was absent, those responsible for providing it are liable; and where the operator acted with deliberate malice, the end user is responsible.
This means that current legal procedures remain viable for processing liability cases involving artificial intelligence and machine learning. Such cases could even be processed by machine learning algorithms to ensure a predictable outcome.
So, to return to the question posed above: CAN HUMANS SUE ROBOTS?
The answer, under current law, is "NO", for the simple reason that unless and until robots are granted legal personhood, they cannot be sued directly. Laws exist to protect the rights of persons; once an AI is granted legal personhood, it can be sued under all statutes, just as any other legal person can be. Until then, liability must be determined case by case, according to the facts, by tracing the cause of the harm to the responsible human or entity.
(By: Amit Nagar. The views expressed are the writer's personal. INBA owes no responsibility of any kind for the views expressed.)