Is generative AI really ready for financial services?
Generative AI and large language models (LLMs) are the most talked-about developments in the tech world right now, and for good reason. This advanced technology has the potential to revolutionise entire industries - but is it ready for the heavily regulated financial services industry?
Many industries use AI to handle complex tasks, from assisting with life-changing surgery to identifying cyber threats, and the financial services industry is no exception. In recent years, AI has been put to work aiding customer service interactions, fraud detection and investment analysis. But with the latest advancements in generative AI (ChatGPT being the most well-known example), how effective could AI become at overcoming the challenges specific to financial services?
If you haven't been living under a rock for the past couple of years, you'll know that generative AI represents more than just a minor improvement in machine learning's ability to understand and respond to natural language.
Generative AI has the potential to:
- transform human-machine interaction
- usher in a paradigm shift and become a defining moment for the intersection of humanity and technology
- transform markets, industries, business models and use cases
If you have been hiding under a rock, generative AI is the common term for a range of capabilities made possible by a new class of machine learning models: large language models. The technology has advanced to the point where it can answer almost any question and assist with nearly any task. It seems that AI is having its 'iPhone moment'.
The hype is at fever pitch and the tsunami of content on the topic is hard to navigate. But we can now say with certainty that generative AI will affect us all.
The challenges of adopting generative AI in financial services
The pace of change with generative AI raises important questions about how we can best leverage and control this technology. In the financial services and fintech sectors, like many others, it's no longer just about what we can do with generative AI. It’s also about what we should do and when.
- Regulatory compliance: The financial services sector is heavily regulated, so AI systems must adhere to strict rules to protect sensitive customer data and comply with frameworks such as AML, GDPR and KYC. But generative AI models may not always meet these requirements, exposing companies to legal and compliance risks. Additionally, new Consumer Duty regulations will increase the burden on financial services providers to show due care and prove they've acted in the best interests of their customers.
- Data privacy and security: Financial services handle sensitive and confidential information, including personal identification, account balances and transaction history. Ensuring the privacy and security of this data is critical, but training a generative AI model on such data could lead to inadvertent disclosure or misuse of sensitive information.
- Bias and fairness: Generative AI models have the potential to propagate or exacerbate biases found in the training data. This could lead to unfair or discriminatory outcomes, which are unacceptable in the financial services sector, where fairness and equal treatment are crucial.
- Interpretability and explainability: In financial services, decisions related to assessing risk or granting credit often require explainability and transparency. But generative AI models are often seen as 'black boxes', making it challenging to understand and justify the reasoning behind their outputs.
- Reliability and robustness: In the financial sector, accuracy and reliability are of paramount importance. But generative AI models can sometimes generate incorrect or nonsensical outputs that could have serious consequences if used for decision-making in a financial context. These 'hallucinations' occur when a large language model inserts plausible-sounding information into a response - information a user might accept at face value, but that is not supported by the training data or is completely fabricated. While newer models like GPT-4 have significantly reduced this problem, it remains a genuine concern for the sector.
- Model risk management: Banks and insurance companies need to assess and manage the risks associated with AI models, including the risk that a model is incorrect or misused. Generative AI models may introduce new and complex risks that are difficult to quantify and manage. There are 'unknown unknowns' ahead: reality doesn't care about your machine learning model, and unforeseen events like Brexit show how quickly the assumptions a model was trained on can be invalidated. We may well need a new financial lens to evaluate and manage these risks.
- Need for human oversight: While generative AI has advanced significantly, in many sectors a 'human-in-the-loop' is still critical to validate outputs and ensure accurate, safe and usable outcomes. In a high-stakes industry like financial services, human oversight and expertise remain crucial to avoid costly mistakes.
It's important to note that the above issues are, for the most part, not about technical capability. While the models that power generative AI will continue to advance at breakneck speed, they can already provide significant value in applications like customer support (which can enhance consumer experience and retention), as well as fraud detection, portfolio analysis and agent assistance.
A lot more work needs to be done to address the above issues if we're to fully harness generative AI in the sensitive and regulated financial services sector. But given the potential benefits, there's a huge incentive to do that work. There's no doubt that generative AI will become a key part of the fabric of financial services.
So, when will we be ready to infuse this technology into all financial services safely?
Let’s just say, 2023 has been quite a decade so far!