AI in Financial Services: Ongoing Accountability Requires a Clear Strategy
The following is a guest editorial courtesy of Eric Odotei, Group Head of Regulatory Reporting at Finalto.
AI is not coming; it is already here. It is reshaping how we live, work, and make decisions. From business operations to digital engagement, AI has become an essential component of the modern economy. It is not a passing trend: the evidence suggests AI has the potential to be a transformative force in financial services. Firms that fail to adapt will not only fall behind but will lose relevance in a market where intelligence, both human and artificial, defines the winners.
The reality is that people are already using AI across multiple industries. In the financial services sector, AI offers advanced tools for risk management, compliance automation, and regulatory reporting.
Financial regulators are already preparing for an AI future. Both the Financial Conduct Authority (FCA) and the Prudential Regulation Authority (PRA) have taken proactive steps to provide guidance on AI oversight, and the regulatory environment is adapting to the opportunities and risks AI presents.
Business leaders need to ensure their internal strategies keep pace with this reality.
Utilise but verify
As an evolving and potentially disruptive technology, much of the anxiety around AI has focused on job losses, even the risk of whole job categories being rendered obsolete. There’s a more nuanced way to think about automation and employment. Just as robotics brought precision and consistency to car manufacturing, AI can enhance the quality and efficiency of many industries. Letting AI handle repetitive and time-consuming tasks enables humans to focus on innovation, creativity, and strategic problem-solving. In other words, with the right framework in place, AI has the potential to enhance our work environment, making our day-to-day work life both more productive and meaningful.
Managers often assume that as long as AI delivers results, everything is fine. However, AI also brings substantial risks. Accepting results without understanding how an AI model reaches its decisions creates false confidence.
There are crucial differences between how AI and humans process information. For example, AI approaches pattern recognition in a way that is fundamentally different from human intuition. It does not rely on gut instinct or a few coincidences, but on analysing vast datasets to uncover consistent patterns and trends. Anomaly detection algorithms can flag missing data, recurring issues, and flaws in assumptions that might otherwise go unnoticed.
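To make the idea concrete, here is a minimal sketch of that kind of check. The column names, figures, and thresholds are hypothetical, and a robust "modified z-score" stands in for whatever detection method a firm actually uses; it is an illustration, not a recommended implementation.

```python
# Illustrative only: flag missing data and outliers in a reporting dataset.
import pandas as pd

def flag_anomalies(df: pd.DataFrame, value_col: str, threshold: float = 3.5) -> pd.DataFrame:
    """Mark rows whose value is missing or far from the column's typical level."""
    out = df.copy()
    out["missing_value"] = out[value_col].isna()
    median = out[value_col].median()
    mad = (out[value_col] - median).abs().median()            # median absolute deviation
    robust_z = 0.6745 * (out[value_col] - median).abs() / mad
    out["outlier"] = robust_z > threshold                      # common rule-of-thumb cut-off
    return out

# Hypothetical trade-report notionals: one value missing, one implausibly large.
reports = pd.DataFrame({"notional": [1_000_000, 1_050_000, None, 980_000, 98_000_000]})
print(flag_anomalies(reports, "notional"))
```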
However, AI is not infallible, and understanding its limitations is just as important as recognising its potential. At the end of the day, you are trusting whatever outcome you seek to a process you may not fully understand.
Black Box Blues
For all its promise, AI introduces fundamental risks that cannot be managed with yesterday’s governance frameworks. There are real risks in relying on black box systems that produce results without offering any transparency into how those results were reached. For a start, the AI might simply be wrong. AI doesn’t perform magic; it derives inferences from data, and those inferences are only as reliable as the data and models behind them. These inferences may be generated from enormous datasets and executed at impressive speed, but speed and scale do not guarantee accuracy or sound reasoning.
Without visibility into how a model works, we are left trusting an output we cannot properly verify. That is not a sustainable or responsible position, especially in industries like financial services, where decisions carry significant consequences.
AI systems that influence business or regulatory decisions must be explainable, not only to internal teams, but also to auditors and regulators. Black box models, where decision-making logic cannot be clearly articulated, are viewed with suspicion for good reason. Firms need explainability frameworks that match the complexity and risk of the models they use. This is why the principle of a “human in the loop” is encouraged for high-risk applications, ensuring that decisions can be escalated or overridden where necessary.
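A minimal sketch of that "human in the loop" principle follows: any high-risk case, or any decision below a confidence floor, is routed to a person who can confirm or override it. The field names and the 0.90 threshold are hypothetical, chosen only to illustrate the control.

```python
# Illustrative "human in the loop" routing; names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class ModelDecision:
    case_id: str
    outcome: str        # e.g. "approve" or "reject"
    confidence: float   # model-reported score between 0 and 1
    rationale: str      # explanation retained for auditors and regulators

def route(decision: ModelDecision, high_risk: bool, confidence_floor: float = 0.90) -> str:
    """Send high-risk or low-confidence decisions to a human reviewer."""
    if high_risk or decision.confidence < confidence_floor:
        return "escalate_to_human"   # a person can override or confirm the model
    return "auto_accept"             # still logged, with its rationale, for audit

print(route(ModelDecision("C-101", "reject", 0.62, "income below policy threshold"),
            high_risk=False))        # -> escalate_to_human
```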
Perhaps more importantly, AI is not immune to bias. Every model is trained on data, and if that data reflects human prejudice or structural inequality, the AI will absorb and replicate those patterns, often at scale and without detection. This can lead to distorted, and sometimes harmful, outcomes long before anyone realises there is a problem.
Automation and Accountability
Then there’s the question of accountability. In financial services, accountability is not optional. It is the foundation of trust when deploying AI. Firms are expected to implement robust governance frameworks that cover the entire AI model lifecycle, from development and validation, to deployment, monitoring, and eventual retirement.
Clear lines of accountability are essential. This includes the board, senior management, risk committees, and model risk functions. Regulators support the use of model inventory systems to track how AI is used, who owns it, how it performs, and how it is classified in terms of risk.
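In practice, a model inventory can start as a structured record per model. The sketch below uses hypothetical fields that simply mirror the points above: how the model is used, who owns it, how it performs, and how it is classified for risk.

```python
# Illustrative model inventory record; the schema is hypothetical.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelInventoryEntry:
    model_id: str
    purpose: str                                   # how the model is used
    owner: str                                     # accountable person or function
    risk_tier: str                                 # e.g. "low", "medium", "high"
    last_validated: date
    metrics: dict = field(default_factory=dict)    # latest monitoring results

inventory = [
    ModelInventoryEntry("AML-SCREEN-01", "transaction screening",
                        "Head of Financial Crime", "high",
                        date(2024, 3, 1), {"precision": 0.91}),
]

# Simple governance query: which models are overdue for revalidation?
overdue = [m for m in inventory if (date.today() - m.last_validated).days > 365]
```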
At the same time, regulators and lawmakers are working to establish their own AI policies. They expect transparency, traceability, and clearly defined roles and responsibilities. If a firm cannot explain how a decision was reached, or trace the data that informed it, that will not be accepted as a defence in cases of poor reporting or regulatory breaches. Regulatory scrutiny is already increasing and will continue to rise, especially where AI affects consumer protection.
Responsible AI is an Ongoing Process
In short, AI is a powerful tool that can unlock significant value and efficiency, but only if it is deployed thoughtfully and governed responsibly. Financial services firms must take deliberate and informed steps to guide how AI is trained, tested, and used across their operations.
There must be transparency and accountability at every stage of the AI lifecycle, from data sourcing and model design, to decision-making and implementation. That means no black boxes. Even the most advanced algorithms must be explainable and auditable. This applies whether a firm builds its own models or uses a third-party solution.
Ongoing monitoring is essential. AI systems evolve as their inputs change, and if they are not regularly reviewed, risk can escalate unnoticed.
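One common way to operationalise that review is to compare recent model inputs against a reference window and trigger a review when they drift apart. The sketch below uses the Population Stability Index with a 0.2 threshold, which is a widely used rule of thumb rather than a regulatory requirement; the data is simulated purely for illustration.

```python
# Illustrative drift check using the Population Stability Index (PSI).
import numpy as np

def psi(reference: np.ndarray, recent: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a reference sample and a recent one."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    new_pct = np.histogram(recent, bins=edges)[0] / len(recent)
    ref_pct = np.clip(ref_pct, 1e-6, None)   # avoid log(0) on empty bins
    new_pct = np.clip(new_pct, 1e-6, None)
    return float(np.sum((new_pct - ref_pct) * np.log(new_pct / ref_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5_000)       # inputs seen at validation time
this_month = rng.normal(0.7, 1.0, 5_000)     # inputs have since shifted
if psi(baseline, this_month) > 0.2:
    print("Drift detected: escalate for model review")
```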
This presents opportunities as well as risks. I believe we will see the emergence of new roles focused on AI ethics, AI risk strategy, and AI prompt engineering. These professionals will become vital to the way firms implement and oversee intelligent systems.
The Future is Now
AI is already part of daily business. Whether financial services firms like it or not, employees are likely using AI tools in some capacity. That is why it is so important to develop clear and practical policies for employee engagement with AI. Without them, firms cannot ensure responsible use, nor can they protect themselves from reputational, operational, or regulatory consequences.
Now is the time to build strategy, align it with policy, and embrace AI with confidence and care. The question is not whether AI will replace humans, but whether humans will learn to work with AI. Those who master this collaboration will lead the future of finance.
Eric Odotei is Group Head of Regulatory Reporting at Finalto, an innovative prime brokerage that provides bespoke fintech and liquidity solutions. Finalto delivers best-in-class pricing, execution and prime broker solutions across multiple assets, including CFDs on Equities, Indices, Commodities, Cryptos and rolling spot FX, Precious and Base Metals, and bespoke products such as NDFs.