Across the financial services sector, there's enormous excitement about the promise of generative AI, including tools such as ChatGPT, Claude and Microsoft Copilot.
The potential to save money, improve customer service and better identify risk is enormous. However, many firms have set off on the journey without putting in place the controls needed to prevent serious security risks to their data.
This is rarely due to a lack of care; more often it reflects a fundamental lack of understanding of what the risks are and how to mitigate them. Many firms are leading their own AI journey without an expert advisor, and so don't have access to the right advice around data security.
In this article we cover the minimum controls every financial services firm should have in place before embarking on their AI journey.
AI tools are only as safe as the data they can access. If that data is sensitive, personal, or highly confidential, misuse or leakage could result in regulatory breach, reputational harm, and financial loss. Yet AI tools are often deployed before organisations have mapped where their sensitive data lives, who accesses it, and how it's governed.
The FCA emphasises that regulated firms must integrate AI risk into existing regulatory obligations, from governance and accountability to operational resilience and customer outcomes. In short, it expects firms to apply their existing conduct, operational resilience, and data protection frameworks robustly to AI usage.
The EU Digital Operational Resilience Act (DORA) does not prescribe specific controls for AI either, but it does oblige organisations to embed ICT risk management and to demonstrate risk governance.
Data governance controls for AI use are therefore key for financial services firms that need to demonstrate compliance with these regulatory frameworks.
Data Loss Prevention (DLP) should be a cornerstone of any AI data strategy. It helps firms define what data must never leave their IT environments and enforces policies that prevent sensitive data from being exposed via chat or generative outputs.
An example might be a policy that prevents any email whose text or attachments contain the phrase "Highly Confidential – Market Sensitive" from being sent outside the business.
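To make the idea concrete, here is a minimal sketch in Python of the rule logic such a policy encodes. In practice you would configure this in your DLP platform (for example, as a Microsoft Purview policy) rather than write code yourself; the marker phrase, domain name and function names below are purely illustrative.

```python
# Minimal sketch of the rule logic a DLP engine applies to outbound email.
# In practice this is policy configuration in a DLP platform, not custom
# code; all names and values here are illustrative assumptions.
import re

# Classification markers that must never leave the business.
RESTRICTED_MARKERS = [
    r"Highly Confidential\s*[–-]\s*Market Sensitive",
]

INTERNAL_DOMAIN = "example-firm.co.uk"  # hypothetical internal domain


def contains_restricted_marker(text: str) -> bool:
    """Return True if the text carries any restricted classification marker."""
    return any(re.search(p, text, re.IGNORECASE) for p in RESTRICTED_MARKERS)


def should_block(recipients: list[str], body: str, attachments: list[str]) -> bool:
    """Block delivery if an external recipient would receive restricted content."""
    external = any(not r.lower().endswith("@" + INTERNAL_DOMAIN) for r in recipients)
    restricted = contains_restricted_marker(body) or any(
        contains_restricted_marker(a) for a in attachments
    )
    return external and restricted


# An external recipient plus a restricted attachment triggers a block.
print(should_block(
    ["client@outside.com"],
    "Quarterly update attached.",
    ["Highly Confidential – Market Sensitive: draft results"],
))  # True
```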
Sensitivity labels are labels you apply to documents and emails to classify data according to its risk and confidentiality. Based on the label, certain security controls are applied, such as encryption, access restrictions, watermarking and restrictions on sharing to untrusted locations. These controls are usually aligned with the data classification scheme in your information security policy.
For example, a sensitivity label can ensure that client account numbers or internal credit risk models are never used as input in a Copilot session, drastically reducing the risk of leakage.
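Conceptually, a sensitivity label maps a classification to a set of enforced controls. The sketch below illustrates that mapping in Python; the label names and control flags are assumptions for illustration, and real enforcement is handled by the labelling platform rather than your own code.

```python
# Illustrative sketch of label-driven controls: a lookup that decides which
# protections apply to a document based on its sensitivity label.
from dataclasses import dataclass


@dataclass(frozen=True)
class LabelControls:
    encrypt: bool             # encrypt content at rest and in transit
    watermark: bool           # stamp documents with the label name
    allow_external_share: bool
    allow_ai_assistant: bool  # may the content be used as AI (e.g. Copilot) input?


# Hypothetical classification scheme aligned with an information security policy.
POLICY = {
    "Public":              LabelControls(False, False, True,  True),
    "Internal":            LabelControls(False, False, False, True),
    "Confidential":        LabelControls(True,  True,  False, False),
    "Highly Confidential": LabelControls(True,  True,  False, False),
}


def ai_input_allowed(label: str) -> bool:
    """Deny by default: unlabelled or unknown data never reaches the assistant."""
    controls = POLICY.get(label)
    return controls is not None and controls.allow_ai_assistant


print(ai_input_allowed("Highly Confidential"))  # False - kept out of Copilot
print(ai_input_allowed("Internal"))             # True
```

The deny-by-default check is the important design choice: anything without a recognised label is treated as too sensitive for AI input.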
Retention labels help you manage data sprawl by classifying data according to how long it needs to be kept. They allow you to archive outdated information, automate the secure deletion of irrelevant data, and help AI models give higher-quality, more accurate responses by drawing only on relevant data.
Retention labels also support compliance by keeping transactional data long enough to satisfy audit or dispute obligations without holding it indefinitely.
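The disposition logic behind a retention label is simple: each label carries a retention period, and a record becomes eligible for secure deletion once that period has elapsed. The Python sketch below illustrates this; the retention periods shown are hypothetical examples, not regulatory guidance.

```python
# Minimal sketch of retention-label disposition: given a label and the date a
# record was created, work out when it may be securely deleted. The schedule
# below is an assumption for illustration only.
from datetime import date, timedelta

# Hypothetical retention schedule (label -> retention period in days).
RETENTION_SCHEDULE = {
    "Transactional - 7 years": 7 * 365,   # long enough for audit/disputes
    "Working documents - 2 years": 2 * 365,
    "Transient - 30 days": 30,
}


def disposal_date(label: str, created: date) -> date:
    """Date on which a record under this label becomes eligible for deletion."""
    return created + timedelta(days=RETENTION_SCHEDULE[label])


def due_for_deletion(label: str, created: date, today: date | None = None) -> bool:
    """True once the retention period has elapsed."""
    return (today or date.today()) >= disposal_date(label, created)


print(due_for_deletion("Transient - 30 days", date(2024, 1, 1), date(2024, 3, 1)))        # True
print(due_for_deletion("Transactional - 7 years", date(2024, 1, 1), date(2024, 3, 1)))    # False
```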
Even the best technical controls cannot prevent mistakes made by people using AI incorrectly. It's important that every business clearly articulates its policies around AI and that staff understand them and adhere to them.
An AI policy normally forms part of a wider IT usage policy, or appears as an entry in the staff handbook, and explains which AI tools are approved for use and the circumstances in which they can be used.
With the technology landscape changing rapidly, especially the capabilities of generative AI, it's important to have a process to review the security of your data regularly and to update your controls to keep them relevant. This is not a one-off exercise.
To achieve this, Pro Drive applies our unique best-practice IT framework for financial services firms, maintaining robust data security controls for AI and Microsoft Copilot on behalf of our clients.
If you are unsure whether your data security is ready for AI, don't leave your AI strategy to chance. Book an AI consultation with a Pro Drive expert today.