The Emergence of AI: Innovation, Liability and Risk
- Nikkie Kitching
- Oct 29, 2020
- 9 min read
Updated: Feb 28, 2021

2020: A year of flying cars and personal robot servants, right? Well, not exactly. But that's not to say that advancements in artificial intelligence haven't already taken place. These days, whilst many of us may not be aware of it, AI forms a big part of our daily routines. This could be anything from commanding Siri on your iPhone and finding a location on Google Maps to video recommendations on YouTube and, yes, even the emergence of self-driving cars. With the arrival of Covid-19 and local lockdowns, AI has played an even bigger part than before, particularly through apps like Deliveroo and Uber.
And whilst AI sounds cool from the outset, The Switch is also a place to learn about any interesting legal issues or hot topics that may crop up along the way. For readers who are new to The Switch and have no idea what I'm talking about, let's break it down for you.
Let's break it down...
In simple terms, AI is an umbrella term for types of technologies which have the ability to perform tasks at a human level. As the examples above suggest, AI systems use algorithms and data not only to learn about their specific audience but also to identify patterns and make decisions and predictions on our behalf.
AI can be broken down into a number of areas. The main ones are briefly outlined below:
If we're talking in a general sense, AI can either be:
Narrow/Weak AI: This AI is focused on one specific task e.g. when you ask Siri to play a song on your phone
Strong AI: This AI is more advanced and can perform a number of tasks e.g. a computer that can solve several puzzles at once
In a more specific sense, AI can fall under:
Reactive Machines: A machine which simply reacts to the user e.g. a chess program that responds to each move you make
Limited Memory: A machine which uses previous events or experience to inform future decisions e.g. a self-driving car understands where roads and markings are, but over time it will also learn your common routes and make better judgments (see the sketch after this list)
Theory of Mind: A machine designed to understand the user's emotions, thoughts and behaviour e.g. personal robots that can not only obey orders but also respond in an emotional way. One cool example is Sophia, the robot developed by Hanson Robotics.
Self-awareness: A machine that incorporates everything above: smart predictions, emotions and responses. These types of machines are rare and are still being developed.
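To make the "limited memory" idea above a little more concrete, here is a minimal, hypothetical Python sketch (the class, the data and the trips are all invented for illustration, not drawn from any real self-driving system): a toy predictor that remembers past trips and uses that history to guess your most likely destination.

```python
from collections import Counter, defaultdict

class RoutePredictor:
    """Toy 'limited memory' AI: learns from past trips to predict future ones."""

    def __init__(self):
        # Remember how often each destination was chosen at each time of day.
        self.history = defaultdict(Counter)

    def record_trip(self, time_of_day, destination):
        """Store one observed trip (the machine's 'memory' of past events)."""
        self.history[time_of_day][destination] += 1

    def predict(self, time_of_day):
        """Return the most common destination for this time of day, if any."""
        trips = self.history.get(time_of_day)
        if not trips:
            return None  # no memory yet, so no prediction
        return trips.most_common(1)[0][0]

# Example usage: the more trips it sees, the better its judgments become.
predictor = RoutePredictor()
predictor.record_trip("morning", "office")
predictor.record_trip("morning", "office")
predictor.record_trip("morning", "gym")
print(predictor.predict("morning"))  # -> "office"
print(predictor.predict("evening"))  # -> None (nothing learned yet)
```

Real limited-memory systems use far more sophisticated models than this, of course; the point is simply that the machine's predictions improve as its memory of past events grows.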

Whilst there are never-ending questions about the world of AI, this blog covers a few interesting legal issues. To keep this blog in line with my previous blogs, we will focus on how the UK has approached AI.
Liability
A key question that many tech and data protection lawyers face is what happens when AI fails to perform? Where does the onus lie: with the manufacturer or with the person who purchased the AI? It is crucial to understand this further as malfunctions in AI can cause financial loss, personal injury and, in some extreme cases, death. Let's take self-driving cars, for example. If the AI system within the vehicle becomes faulty and the passenger is involved in an accident as a result, who is liable?
Under English law, companies and businesses can incur liability for machines and products in the following ways:
1. Consumer Protection Act 1987 (CPA) - Part I
Under Part I of the CPA 1987, there is a strict liability test for defective products. Section 2 CPA states that where a product is defective, the original manufacturer of the product, also known as the 'producer', is held liable for any injury that the defective product causes. According to s3(1) CPA, a product is 'defective' if its safety is not such as persons generally are entitled to expect. Those who have been harmed by a defective product have the right to sue for compensation and can do so without proving that the producer was negligent. There are of course different defences that a producer can rely upon under s4.
This means that when it comes to litigation, the burden of proof falls on the producer, who must show that their products were compliant and fit for their overall purpose.
2. Negligence
If an injured person wants to claim in negligence, this will be based on the doctrine of "duty of care". Specifically, the claimant (the injured person) must establish, on a balance of probabilities, that the defendant (the producer) owed them a duty of care to avoid injury as a result of the defective product and that the defendant failed to take reasonable care. Similar to s2 CPA, other parties who were responsible for manufacturing the product or who helped with supplying/distributing components of it may also be held liable if it can be proven that they too were negligent.
Negligence can take many forms, including a failure to take care during the manufacturing process, a failure in the AI design or even a failure to inform the user of any dangers whilst using the product. If you as the customer can prove that the producer was negligent, you may have a claim. Substantial damages could be awarded depending on the severity of the claim.
3. Contract
If there is a contract of sale in place between a buyer and a seller where the buyer is purchasing an AI product, the buyer may be able to sue if the product they've been given is in breach of any implied or express terms stated within the contract. As a consumer, you are automatically protected under the Sale of Goods Act 1979 and the Consumer Rights Act 2015. Under these acts, there are standards that all products must meet, including that they are of satisfactory quality, fit for purpose and match the description you've been given at the time of purchase. In a similar respect to negligence, it will be down to the claimant (the injured person) to establish that the defendant (the producer) breached their contractual terms.
If the issue cannot be resolved between parties, it will be down to the courts to interpret specific terms. Again, if successful, substantial damages could be awarded in this instance.
When the UK officially leaves the EU on 31st December 2020, s4(1) CPA will no longer apply as it stands, since it gives a producer a defence where the defect in question is attributable to compliance with a requirement under EU law (and the UK will no longer be part of the EU). Furthermore, suppliers and distributors who import products from the EU to the UK will soon be classed as "producers". This means that they will also be held liable for any personal injury or damage from defective or unsafe products. As the post-Brexit supply chain is still being negotiated, it would be wise for parties within the supply chain to check which insurance they currently have, particularly product liability insurance.
As AI is moving at a rapid rate, once the UK leaves the EU there is a question of whether the current legislation will be revamped and whether we can expect new AI-specific legislation that runs alongside the GDPR and the Data Protection Act 2018. It is likely that this will be the case, both so that UK consumers are better protected against defective products imported into the UK by European suppliers and distributors, and so that there is more clarity for businesses, not least because some manufacturers feel it unfair that liability always falls on them.
Risk

For AI to really flourish, it will be up to organisations to understand their current "risk appetite". Risk appetite is the amount of risk that a company or business is prepared to accept in pursuit of its objectives before action needs to be taken. Once this is understood, companies can decide whether or not they should implement AI and, if they choose to, how AI can mitigate the risks the company currently faces.
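As a loose illustration of the idea (the threshold, the scoring formula and the numbers below are all invented for illustration, not drawn from any standard): a company might express its risk appetite as a simple threshold and only act on risks that exceed it.

```python
# Toy illustration of "risk appetite" as a threshold (invented numbers).
RISK_APPETITE = 0.3  # the level of risk the business is prepared to accept

def needs_action(likelihood: float, impact: float) -> bool:
    """Score a risk as likelihood x impact and flag it if it exceeds appetite."""
    risk_score = likelihood * impact  # both expressed on a 0-1 scale
    return risk_score > RISK_APPETITE

# A likely but low-impact glitch vs. a rarer but severe AI failure.
print(needs_action(likelihood=0.8, impact=0.2))  # 0.16 -> False, within appetite
print(needs_action(likelihood=0.5, impact=0.9))  # 0.45 -> True, act on it
```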
Risk can come in many forms. In the midst of a global pandemic, Covid-19 has certainly had a huge financial impact on companies, with many bearing the brunt of substantial losses. Factor AI into this situation and the result is further resistance from companies looking to invest in AI and other related technologies. For the lucky few who are financially stable or who have already invested in AI, financial difficulties could come into play if their AI products or software start to fail, as more money will be required to rebuild, repair or even replace the AI.
Putting financial risk to one side, another risk is that to our privacy and data. Since the introduction of the GDPR and the Data Protection Act 2018, companies have become more meticulous when it comes to their own data as well as any employee or customer data they retain. Should an organisation's data become compromised in any way, this is not only a breach of the GDPR rules but can also have serious ramifications, including hefty fines by the ICO (Information Commissioner's Office). As AI becomes smarter and faster, there is a risk that more of our data and personal information could be exposed to the public. The more we rely on AI to help us with everyday tasks, the more inclined we are to pass personal data across to new apps, start-ups and big businesses. This creates a "blame game" when data is leaked into the public sphere. Was it wrong for the company to ask me for so much personal information? Or was I, as a consumer, naive to pass this information across? As a general rule, companies should stay away from AI-related products that raise privacy or data concerns, even if the concern is slight.
Another risk is employment reduction. There is a growing fear that eventually AI will cause many legal professionals and lawyers to be made redundant. If, for example, your company has incorporated AI that helps reduce the amount of legal administration needed, this could pose a risk for any employees where administration forms a big part of their job description. This could be the case with small companies that have purchased state-of-the-art AI, meaning they would not need additional staff if their AI can do this work more quickly and efficiently.
Whilst this is not an exhaustive list of the types of risk that can arise, it does put things into perspective for businesses who are thinking of investing in AI. Should they decide to do so, it may be wise to create a risk management plan to address any potential risks.
The most recent white paper released by the European Commission in connection with AI is titled "A European Approach to Excellence and Trust", published in February 2020. Whilst extensive, it essentially outlines both the advantages and risks that AI can bring. When it comes to minimising risk, the Commission believes there are many ways to do this, including working with the EU Cybersecurity Agency (ENISA), changing existing legislation for clarity and understanding where each party within the supply chain stands.

Final thoughts
Presently, AI is translucent: there are parts that we understand and other parts that are considered grey areas. For companies looking to stay ahead of the curve, it is not enough to just be proficient in Microsoft Office. Don't get me wrong, it's one of the most useful tools for recording information, but companies are also encouraged to be open to new types of technologies. Artificial intelligence and new technologies have the capacity to go above and beyond to help the legal industry, as well as other industries that welcome it.
From a legal standpoint, the more we can read up about AI, the better prepared we will be when it comes to assessing different types of software, negotiating contracts that involve AI and handling any future litigious matters. For companies and firms thinking about implementing AI, it may be worth understanding which AI products or software are best to invest in and how they can address any harm that could arise from using them in the course of business. Isn't it ironic that AI requires human judgment at the end of the day?
AI will not render lawyers obsolete, but it should be mentioned that lawyers who do not embrace what artificial intelligence has to offer could be limited in the ways they can innovate. I personally believe that artificial intelligence still requires a human touch. Machines, as smart as they are, are still known for making mistakes and breaking down. It is in these instances that lawyers and legal professionals can hone their analytical skills to better understand what went wrong. After all, isn't AI ultimately limited only by the human imagination?
As we've learnt, the implementation and process of AI is not without its risks. At present, no country has created a specific set of laws that cater to responsible AI. Personally, I believe that the current legislation requires revisiting, as new types of AI are always being invented and there is no "one size fits all" template. As the UK is set to leave the EU, armed with the knowledge it has gained as an EU member, could we possibly be the first?
For more information on AI, liability and risk, check out the following links: