Could AI Supersede the Lloyd’s Insurance Market?

 

July 2023

This is what happens when you ask a chatbot the following question:

“Could AI supersede the Lloyd’s insurance market?”

AI generated this response in about 3 seconds…

It is unlikely that AI can fully supersede the Lloyd’s of London insurance market in the near future. While AI can revolutionize various aspects of the insurance industry, such as underwriting, claims processing, and risk assessment, the Lloyd’s market offers unique services and expertise that cannot be easily replicated by AI.

Lloyd’s of London is known for its specialized insurance products and services, particularly in complex and high-risk areas. Its market consists of a network of brokers and underwriters with industry knowledge and experience that enable them to assess risks and provide tailored insurance solutions. This human element, including expertise in evaluating unique risks and building relationships, is challenging to replicate purely with AI.

Furthermore, Lloyd’s of London operates as a marketplace where insurance risks are shared among multiple syndicates, which can pool their capital to cover large risks. This collaborative and interactive nature of the market may be challenging to replicate through AI systems alone.

While AI will continue to play an increasingly significant role in shaping the insurance industry, it is more likely to complement and enhance the Lloyd’s market rather than completely replace it.

While this analysis may allow brokers to breathe a sigh of relief for the time being, no business should rest on its laurels. The EU’s AI Act, the first major regulatory approach anywhere in the world, moved one step closer to taking effect last month when it was passed by European lawmakers. Final approval of the legislation is expected by the end of this year.

The Act’s adoption of a strong framework around the use of AI technology is set to galvanise the global AI regulatory landscape as Brazil, China, Japan, the US, India, and the UK look to follow suit with their own regulatory approaches. (The UK’s position on the procurement, development and deployment of AI is set out in its own White Paper, which outlines its proposed regulatory framework.)

Providing for distinct levels of AI-related risk, the new EU regulations will require businesses to analyse closely how they use AI in Europe. The touchpoints (both within the EU framework and in other jurisdictions generally) centre on the human and ethical implications of AI, safety and security, accountability, and transparency, balanced against economic interests, legal certainty, and innovation.

AI is currently used in a wide range of business contexts (with new uses emerging all the time), from employment (recruitment, performance management) and data processing (when creating or utilising generative AI) to the use of AI as a substitute for human decision-making.

Pending global AI regulations are a critical development, and it behoves all businesses, but particularly those in the financial institutions (FI) sector, to stay vigilant about these requirements. The legal implications are evolving as rapidly as AI’s uses.

Some legal ramifications surrounding the use of AI that will have particular relevance to FIs include data privacy and protection; intellectual property (including copyright); bias and discrimination in decision-making (notably but not exclusively within the employment sector); governance protocols, including the harmonisation of governance across jurisdictions; professional liability around the use (or non-use) of AI; and transparency and explainability.

The EU’s revised Product Liability Directive (PLD) and proposed AI Liability Directive (AILD), each supporting the EU’s drive towards AI regulation, will operate in conjunction with existing EU laws regarding AI-caused injuries. Both directives warrant close attention from FIs.

The proposed PLD is an updated set of strict-liability rules for defective products, bringing intangible items such as software and AI systems within the definition of ‘product.’ The PLD covers material damages, including damage to psychological health “that affects the victim’s general state of health as confirmed by a court-ordered medical expert.”

The AILD deals with tortious (non-contractual) liability for damage caused by AI, intending to ‘ensure that persons harmed by AI systems enjoy the same level of protection as persons harmed by other technologies in the EU.’ [1] In providing compensation for victims of harm caused by AI technology, the AILD seeks to ease the burden of proof by creating presumptions around liability: where AI rules are not complied with through fault or omission (intentional or negligent), a presumption of causality applies.

To put this in context, consider this scenario:

A bank seeks to support customers who may be ‘unscorable’ through a lack of credit history. Creating a risk profile using non-traditional data and machine learning may predict different credit risks when compared with more conventional approaches. However, if that AI system were not developed in accordance with the AI Act’s stated criteria around the training, validation, and testing of data, or does not meet the transparency requirements set out in the Act, the FI opens itself up to claims based on flawed and/or unethical risk profiling. The bank, as user of the AI system, will be presumed to be at fault if the claimant proves a failure to comply with the Act’s obligations. The presumption (albeit rebuttable) is intended to offset the traditional difficulty claimants face in meeting the burden of proof.
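By way of illustration only, the sketch below shows one way a bank’s data science team might keep a basic record of how a scoring model was trained and tested, and generate a per-decision explanation. The synthetic data, feature names, and the ModelRecord structure are hypothetical assumptions made for this example; nothing here is prescribed by the AI Act or the directives.

```python
# Hypothetical sketch: recording a credit-scoring model's development and
# producing a simple per-decision explanation. All names are illustrative.
import json
from dataclasses import dataclass, asdict

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for "non-traditional" features, e.g. utility-payment
# regularity, tenancy length, income volatility.
X = rng.normal(size=(1000, 3))
y = (X @ np.array([0.8, -0.5, 0.3]) + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

FEATURE_NAMES = ["utility_payment_regularity", "tenancy_length", "income_volatility"]


@dataclass
class ModelRecord:
    # Minimal record of how the model was trained and tested, kept so the
    # deployer can later evidence its development and validation process.
    feature_names: list
    training_rows: int
    test_rows: int
    test_accuracy: float


record = ModelRecord(
    feature_names=FEATURE_NAMES,
    training_rows=len(X_train),
    test_rows=len(X_test),
    test_accuracy=float(model.score(X_test, y_test)),
)


def explain_decision(applicant):
    # Per-feature contribution (coefficient * feature value) behind a decision:
    # a simple form of explanation that could support transparency requests.
    contributions = {
        name: float(coef * value)
        for name, coef, value in zip(FEATURE_NAMES, model.coef_[0], applicant)
    }
    return {
        "approved": bool(model.predict(applicant.reshape(1, -1))[0]),
        "contributions": contributions,
    }


print(json.dumps(asdict(record), indent=2))
print(json.dumps(explain_decision(X_test[0]), indent=2))
```

Keeping records of this kind would not of itself satisfy the Act, but such documentation is the sort of evidence a deployer may need in order to rebut the presumption of fault described above.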

The European Parliament’s proposed penalties for violating the new AI Act (up to €40m or 7% of a company’s annual global revenue) are another reason for vigilance.

[1] ‘Artificial Intelligence Liability Directive’, briefing, European Parliamentary Research Service, February 2023