Snapshot:

  • As the insurance market formulates a position on AI, companies have a unique opportunity to determine the optimal risk transfer strategies to protect AI investment, innovation, and future growth.
  • To enable a sustainable AI agenda, Chief Risk Officers and insurance managers must ensure they have a seat at the table at an early stage in the development of AI strategy and product rollout.
AI and machine learning have been employed by financial institutions for decades, making the industry one of the most advanced adopters of the technology. Today, the increased accessibility of generative AI tools, coupled with advances in computing power, has given rise to the next stage of transformation for many institutions intent on deploying AI to sharpen their competitive edge.

While AI offers unparalleled opportunities for innovation and efficiency, it is also increasing the severity, frequency, and velocity of digital and security risks. Without a robust AI risk management strategy, financial institutions may fail to comply with varying global regulations, sustain reputational damage, and leave their organisations vulnerable to AI-enabled cyber attacks.

It’s a future that’s already playing out: AI-enabled attacks on financial institutions, such as social engineering using deepfakes and other synthetic media, have been increasing in frequency. In 2023, deepfake incidents in the fintech sector rose 700 percent from the previous year.[1] There has also been an increase in cyber attacks on AI assets themselves.[2]

CROs and insurance managers play a crucial role in providing analysis and advice to the various teams working on AI projects at the digital frontier of their companies. However, they are often not involved in these initiatives at an early stage: it is only when a product or service is close to launch that issues around security, privacy, compliance, or other business risks surface and risk leaders are brought in.

When risk managers are part of an AI committee, they can advise on the risks and insurance implications of AI projects at a more formative stage, reducing the likelihood of future rework, project abandonment, or, worse, liability for the company and its executives.

Having a seat at the table at the development stage of an organisation’s AI agenda is imperative. It allows risk managers to contextualise and rationalise the associated risks, provide strategic guidance, and inform operational decisions – making them the business’s best asset for the safe, responsible, and successful deployment of AI initiatives as the future unfolds.

Download the full article to learn more.

[1] Wall Street Journal, “Deepfakes Are Coming for the Financial Sector,” April 2024

[2] US Department of Homeland Security, “Increasing Threat of Deepfake Identities,” accessed April 2024
