Across the financial services sector, there's enormous excitement about the promise of generative AI, including tools such as ChatGPT, Claude and Microsoft Copilot.
The potential to save money, improve customer service and identify risk more effectively is huge. However, many firms have jumped into the journey without putting in place the controls needed to prevent serious security risks to their data.
This is not due to a lack of care but often because there is a fundamental lack of understanding of what the risks are and how to mitigate them. Many firms are navigating their AI journey without an expert advisor and, as a result, don't have access to the right advice on data security.
In this article we cover the minimum controls every financial services firm should have in place before embarking on their AI journey.
Why do you need these controls?
Generative AI tools often have access to personal, sensitive and highly confidential data. Microsoft Copilot, for example, can read all the data you have access to across your Microsoft 365 environment, and potentially other systems as well if they are connected to it. Through AI use, data can end up in the wrong hands – here are some examples of how this happens:
- AI responses include content taken from confidential or sensitive documents (the AI does not know these are sensitive). Without a detailed review the content goes unnoticed and is published publicly or emailed out.
- AI surfaces documents that staff should not have access to but which were accidentally filed in the wrong place in SharePoint or shared on Teams years earlier.
- A member of your team uploads proprietary algorithms and risk models to a free AI tool they are using to work more effectively; the tool has no security controls and the data is used to train the AI model.
- Confidential data is used as a knowledge source for an AI agent that is then accidentally shared with the whole business, exposing that data to everyone.
If that data is sensitive, personal, or highly confidential, misuse or leakage could result in regulatory breach, reputational harm, and financial loss. Yet AI tools are often deployed before organisations have mapped where their sensitive data lives, who accesses it, and how it's governed.
Is there regulation around AI usage in financial services firms?
The FCA emphasises that regulated firms must integrate AI risk into existing regulatory obligations, from governance and accountability to operational resilience and customer outcomes. In short, it expects firms to apply their existing conduct, operational resilience and data protection frameworks robustly to AI usage.
The EU Digital Operational Resilience Act (DORA) does not prescribe specific controls for AI either, but it does oblige organisations to embed ICT risk management and demonstrate risk governance.
Data governance controls for AI use are therefore key for financial services firms that need to demonstrate compliance with these regulatory frameworks.
What controls do financial institutions require for AI?
Data leakage prevention
Data Loss Prevention (DLP) should be a cornerstone of any AI data strategy. It helps firms define what data must never leave their IT environments and enforces policies that prevent sensitive data from being exposed via chat or generative outputs.
An example might be a policy that prevents any email whose text or attachments contain the words "Highly Confidential – Market Sensitive" from being sent outside the business.
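If it helps to picture how such a rule behaves, here is a minimal Python sketch of the logic. It is purely illustrative: real DLP policies are configured in your compliance tooling (for example Microsoft Purview), not written as application code, and the internal email domain used here is an assumption for the example.

```python
# Illustrative sketch only: a simplified model of the kind of rule a DLP
# engine evaluates. The internal domain below is a hypothetical placeholder.

INTERNAL_DOMAIN = "example-firm.co.uk"  # assumed internal email domain
BLOCKED_MARKER = "Highly Confidential – Market Sensitive"

def is_external(recipient: str) -> bool:
    """Treat any address outside the firm's own domain as external."""
    return not recipient.lower().endswith("@" + INTERNAL_DOMAIN)

def dlp_allows_send(body: str, attachments: list[str], recipients: list[str]) -> bool:
    """Block the send if the marker phrase appears anywhere and any recipient is external."""
    contains_marker = any(BLOCKED_MARKER in text for text in [body, *attachments])
    going_external = any(is_external(r) for r in recipients)
    return not (contains_marker and going_external)

# This send would be blocked: the attachment carries the marker phrase and one
# recipient sits outside the firm's domain.
print(dlp_allows_send(
    body="Please see the attached analysis.",
    attachments=["Highly Confidential – Market Sensitive\nQ3 trading strategy notes"],
    recipients=["analyst@example-firm.co.uk", "contact@partner-bank.com"],
))  # -> False
```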
Sensitivity labels
These are labels you apply to documents to classify data according to its risk and confidentiality. Based on the label, security controls are applied, such as encryption, access restrictions, watermarking and restrictions on sharing to non-trusted locations. These controls are usually aligned with the data classification in your information security policy.
For example, a sensitivity label can ensure that client account numbers or internal credit risk models are never used as input in a Copilot session, drastically reducing the risk of leakage.
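As a rough illustration of how labels translate into rules, the sketch below models a simple classification scheme and a check on whether labelled content may be used as AI input. It is conceptual only: the label names and rules are assumptions, and in practice enforcement is handled by Microsoft 365 itself rather than by code you write.

```python
# Illustrative sketch only: label names and the rules attached to them are
# assumptions; real enforcement (encryption, sharing restrictions, AI input
# controls) is applied by the platform, not by code like this.

from dataclasses import dataclass

@dataclass(frozen=True)
class LabelPolicy:
    encrypt: bool
    restrict_external_sharing: bool
    allow_ai_input: bool  # may content carrying this label be fed to a Copilot session?

LABEL_POLICIES = {
    "Public": LabelPolicy(encrypt=False, restrict_external_sharing=False, allow_ai_input=True),
    "Internal": LabelPolicy(encrypt=False, restrict_external_sharing=True, allow_ai_input=True),
    "Confidential": LabelPolicy(encrypt=True, restrict_external_sharing=True, allow_ai_input=True),
    "Highly Confidential": LabelPolicy(encrypt=True, restrict_external_sharing=True, allow_ai_input=False),
}

def allowed_as_ai_input(label: str) -> bool:
    """Default to the most restrictive treatment for unlabelled or unknown content."""
    policy = LABEL_POLICIES.get(label)
    return policy.allow_ai_input if policy else False

# Internal credit risk models would sit under "Highly Confidential" and stay out of prompts.
print(allowed_as_ai_input("Highly Confidential"))  # -> False
print(allowed_as_ai_input("Internal"))             # -> True
```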
Retention labels
Retention labels help you manage data sprawl by classifying data according to how long it needs to be kept. They allow you to archive outdated information, automate the secure deletion of irrelevant data and ensure AI models give higher quality, more accurate responses by drawing only on relevant data.
Retention labels also support compliance requirements by keeping transactional data long enough to satisfy audit or dispute requirements without holding it indefinitely.
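To make this concrete, the short sketch below shows the kind of rule a retention label represents: keep content for a defined period, then dispose of it. The label names and periods are illustrative assumptions; in Microsoft 365 the retention engine applies such rules automatically once a label has been assigned.

```python
# Illustrative sketch only: label names and periods are assumptions; the
# retention engine in Microsoft 365 applies these rules automatically.

from datetime import date, timedelta

RETENTION_PERIODS = {
    "Transactional - 7 years": timedelta(days=7 * 365),      # long enough for audit or dispute purposes
    "Working documents - 2 years": timedelta(days=2 * 365),  # routine content that should not linger
}

def retention_action(label: str, last_modified: date) -> str:
    """Return 'retain' while the period is still running, 'dispose' once it has elapsed."""
    period = RETENTION_PERIODS.get(label)
    if period is None:
        return "review"  # unlabelled or unknown content needs a human decision
    return "dispose" if date.today() - last_modified > period else "retain"

# A working document last touched in early 2020 is now past its two-year period.
print(retention_action("Working documents - 2 years", date(2020, 3, 1)))  # -> "dispose"
```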
AI usage policy
Even the best technical controls cannot prevent mistakes made by people using AI incorrectly. It's important that every business clearly articulates its policies around AI and that staff understand them and adhere to them.
An AI policy normally forms part of a wider IT usage policy or an entry in the staff handbook, explaining which AI tools are approved for use and the circumstances in which they can be used.
How do firms check they have the right controls in place?
With the technology landscape changing rapidly, especially the capabilities of generative AI, it's important to have a process to review the security of your data on a regular basis and to update your controls so they stay relevant. This is not a one-off exercise.
To achieve this, Pro Drive uses our unique best-practice IT framework for financial services firms to maintain robust data security controls for AI and Microsoft Copilot on behalf of our clients.
If you are unsure whether your data security is ready for AI, don't leave your AI strategy to chance. Book an AI consultation with a Pro Drive expert today.


