
There is a genuine compliance problem here. It is not a technicality or a theoretical concern, but a breach of the data protection obligations that every authorised mortgage broker agreed to when they became regulated.
A significant number of UK mortgage brokers are currently pasting client details - names, income figures, employment information, addresses - into consumer AI tools to help draft suitability letters, case summaries, and lender communications. Most of them have no idea that this represents a real compliance risk. The tools do not warn them. Nobody in their network has explicitly told them not to do it. And the outputs are useful enough that the practice has become routine.
This article is not designed to create fear. It is designed to provide clarity on something the industry is still working through, so that brokers can make an informed decision about how they use AI rather than discovering after the fact that they have been doing something wrong. Used correctly, these tools are genuinely useful. The goal is to understand how to use them in a way that does not put authorisation at risk.
More happens to that data than most people assume, and the specifics matter.
When a mortgage broker uses the standard consumer version of ChatGPT - the free tier or the standard paid subscription - their inputs can, depending on the account settings and the version in use, be used by OpenAI to improve its models. That means the client's name, address, income figures, employment details, and any other identifiable information typed into the chat interface could be processed and potentially retained by a third party company based outside the United Kingdom.
Under UK GDPR, the mortgage broker is the data controller for client information collected as part of the advice process. That status carries specific responsibilities: the broker is responsible for how the data is used, where it goes, and who it is shared with. Transferring identifiable personal data into a third party system, even indirectly through a chat interface, without a lawful basis and without a data processing agreement in place, is a breach of those responsibilities.
This is not a grey area in the regulation. The grey area is in practice: most AI tools do not make any of this obvious. They do not display a warning when client information is entered. They do not ask for confirmation before processing personally identifiable data. They simply answer the question. And brokers who are focused on getting their work done efficiently are not, in that moment, stopping to consider what happens to that data on the other side of the screen.
UK GDPR requires a lawful basis for processing and, where third parties are involved, a data processing agreement.
Under UK GDPR, every instance of personal data processing requires a lawful basis. For most of the data a mortgage broker holds on clients, that lawful basis is either contract - the data is necessary to deliver the service the client has engaged the broker for - or legitimate interest. Both of these lawful bases apply to the broker's internal use of the data. Neither of them automatically extends to sharing that data with a third party AI platform.
To use a third party tool to process client data lawfully, a data processing agreement - commonly called a DPA - must be in place with that provider. A DPA is a contract that sets out how the third party will handle the data, what they will use it for, how long they will retain it, and what happens in the event of a breach. Without that agreement, there is no contractual framework for sharing the data lawfully, regardless of how useful the tool is.
The free consumer version of ChatGPT does not come with a DPA. Neither do most standard consumer AI tools. By default, using them with identifiable client data puts a mortgage broker on the wrong side of UK GDPR - not because of any bad intention, but because the compliance framework required to use those tools with personal data simply has not been put in place.
The FCA dimension runs parallel to the GDPR requirement and addresses the broker's conduct obligations as an authorised firm.
The FCA expects authorised firms to have adequate systems and controls in place across their operations - including around data, technology, and the tools used in the business. If a broker is using unvetted consumer technology to process client information and something goes wrong - a data incident, a client complaint, a regulatory review - and it emerges that the firm had no AI use policy, no documentation of how these tools were being used, and no data processing agreements in place, that represents an operational risk and a governance failure.
It is the kind of thing that sits badly in a supervisory visit.
The FCA is actively developing its approach to AI regulation across financial services. Formal guidance specific to the use of AI tools by mortgage brokers is still evolving. But the direction of travel is clear. Firms are expected to think carefully about the tools they use, document how they use them, and maintain proper oversight of anything that touches client data or the advice process.
That expectation exists now, not at some future point when the formal rules are fully developed. The brokers who will be in the strongest position when clear guidance arrives are those who have already been thinking carefully about this - not the ones who assumed that because nobody explicitly told them not to do it, it was acceptable.
The underlying principles are not new. The tools are.
The obligation to protect client data, to have lawful bases for processing, and to ensure that third party providers handling personal data are subject to appropriate contractual controls - these requirements have existed since UK GDPR came into force and before that under the Data Protection Act. What has changed is the proliferation of AI tools that make it very easy to inadvertently breach these requirements in the course of routine work.
The mortgage broker who would never dream of emailing a client's financial details to an unknown external service without a DPA in place is, in many cases, doing the functional equivalent of that every time they paste the same information into a consumer AI chatbot. The difference is that emailing an unknown party would feel obviously wrong. Using an AI tool feels like a productivity shortcut.
That disconnect between the intuitive sense of risk and the actual compliance position is where the problem sits. Closing that gap starts with understanding what these tools actually do with the data they receive.
Stop pasting identifiable client data into consumer AI tools. This is the single most important immediate action.
That means no client names, no addresses, no National Insurance numbers, no income figures tied to a specific individual, no employment details that could identify the client. If the information you are about to type into an AI tool is specific enough to identify a real person, it should not be entered into a consumer platform.
This does not mean stopping using AI tools altogether. It means changing how they are used. The AI tool does not need to know the client's name to help draft a suitability letter. It does not need the address to help structure a case summary. The useful part of what the tool does - helping with structure, language, format, and professional expression - does not depend on having the identifiable details.
The practical fix is anonymisation. Before using any AI tool with information that relates to a client's case, replace the identifying details with placeholders. The client earns X per year, is employed as a Y, and is purchasing a property valued at Z. The mortgage is being placed with lender A on the basis of B. That version of the input is not personal data. It cannot identify a specific individual. It is usable with any consumer AI tool without creating a compliance risk.
This is not a significant workflow burden. It takes seconds to replace identifying details with generic placeholders before entering information into an AI tool. The output remains equally useful. The compliance risk disappears.
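For anyone who prefers to see the placeholder approach concretely, here is a minimal sketch in Python - purely illustrative, with hypothetical field names rather than any prescribed format:

    # Illustrative only: assemble an AI prompt from placeholders so that
    # nothing identifiable ever reaches the tool. Field names are hypothetical.
    case = {
        "income": "X per year",
        "occupation": "Y",
        "property_value": "Z",
        "lender": "lender A",
        "basis": "B",
    }

    def build_prompt(case):
        # The resulting prompt describes the case generically; it cannot
        # identify a specific individual, so it is not personal data.
        template = (
            "Draft a suitability letter for a client who earns {income}, "
            "is employed as a {occupation}, and is purchasing a property "
            "valued at {property_value}. The mortgage is being placed "
            "with {lender} on the basis of {basis}."
        )
        return template.format(**case)

    print(build_prompt(case))

The same substitution can of course be done by hand in seconds; the point is that only the placeholders, never the real details, cross into the tool.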
Compliant alternatives do exist, but they require deliberate selection and an understanding of what they include.
OpenAI offers enterprise versions of ChatGPT that include data protection commitments materially different from the consumer tiers. Under the enterprise product, inputs are not used for model training, and a proper contractual framework including data protection terms is in place. This does not resolve every compliance question - the specific terms should be reviewed against the broker's obligations - but it represents a fundamentally different situation from the consumer tool.
Microsoft's Copilot, when accessed through a business Microsoft 365 subscription account rather than the consumer version, operates under Microsoft's enterprise data protection terms. Again, the specific terms should be reviewed, but the underlying framework is categorically different from what the consumer tool provides.
Google's Gemini accessed through Google Workspace for business, and similar enterprise-tier AI products from established providers, generally include some form of data processing agreement and clearer commitments around how input data is handled.
The key questions to ask about any AI tool before using it with client data are: is there a data processing agreement in place or available, does the provider commit to not using inputs for model training, and is the provider subject to UK data protection law or an adequate jurisdiction? If the answer to any of these is unclear, the tool should not be used with identifiable client information until clarity has been obtained.
Practical guidance on building a compliant and effective mortgage business practice, including how technology fits into it, is available through the resources at ashborland.com.
An AI use policy is a simple internal document that sets out how AI tools are used in the practice, what data can and cannot be entered, and who is responsible for ensuring those rules are followed.
It does not need to be a complex legal document. A single page that covers the essentials is adequate for most self-employed mortgage broker practices. What tools the business uses. What those tools are used for. The rule that no identifiable client data is entered into any consumer AI tool. The process for anonymising inputs before they are used. A note of which enterprise tools, if any, are used with client data and what the basis for that use is.
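As a purely illustrative skeleton - placeholder names and details, not a prescribed format - such a page might read:

    AI Use Policy - [firm name] - [date]
    Tools in use: [list the tools]
    Used for: drafting letter structure, case summaries, template communications
    Rule: no identifiable client data is entered into any consumer AI tool
    Anonymisation: identifying details are replaced with placeholders before input
    Enterprise tools used with client data, and the basis: [tool and DPA reference]
    Responsible for compliance with this policy: [name]; next review: [date]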
The value of having this document is twofold. First, it creates clarity and consistency in how AI is used across the practice, preventing the kind of inadvertent compliance breach that happens when someone acts on instinct without a framework. Second, it creates a record that demonstrates the broker has considered their obligations and put governance in place.
In a regulatory review, the broker who cannot produce any documentation around AI tool usage is in a weaker position than the broker who has a dated, filed policy that shows deliberate thought. The latter demonstrates a professional who takes their compliance obligations seriously. The former may suggest a lack of attention that extends beyond this one issue.
The policy should be reviewed and updated as guidance from the FCA develops and as the tools being used change. Dating it and keeping it on file is straightforward and costs almost no time.
There are three specific actions to take, in priority order.
The first is an honest audit of current AI tool usage. What tools are currently being used in the business? What information has been entered into them? If identifiable client data has been going into consumer AI platforms, that practice needs to stop immediately.
The second is a review of whether enterprise versions of any tools in current use are appropriate to adopt. If the broker wants to continue using AI to support work that touches client information, the enterprise tier of the relevant tool provides a materially better compliance position than the consumer version. Most networks and principal firms are already thinking about this area and may have guidance available or be willing to advise.
The third is drafting a simple internal AI use policy. One page, covering what is used, for what purpose, and what the rules around client data are. File it with a date. Review it when relevant guidance changes.
These three actions are not burdensome. They do not require legal expertise or significant time. They put the broker on the right side of a compliance area that will receive increasing regulatory attention as AI use in financial services becomes more widespread and the formal guidance catches up with the technology.
Brokers should think about AI tools in the same way they think about any other tool or system that touches client data.
The compliance principles that apply are not new. Data minimisation means collecting and processing only the data that is necessary for the purpose. Lawful basis means having a legitimate reason for every instance of processing, including sharing with third parties. Appropriate controls means ensuring that third parties handling personal data are subject to contractual requirements that protect it.
These principles apply to AI tools just as they apply to the CRM system, the email provider, the document storage platform, and every other technology in the practice. The fact that AI tools are new and the specific guidance is still developing does not mean that the existing principles do not apply. It means that brokers need to apply those principles thoughtfully to a new category of tool.
The brokers who are in the strongest position - both now and when formal FCA guidance on AI arrives - are those who have approached this with the same professional rigour they bring to every other aspect of their compliance obligations. Not because someone told them to, but because the responsibility to the client who trusted them with their information does not change with the introduction of a new and useful piece of technology.
Practical thinking on building a structured, compliant, and sustainable mortgage business is available through The Mortgage Broker Coach content on YouTube and the broader resources at ashborland.com/boost.
What lies ahead is increasing specificity, stricter governance expectations, and less tolerance for the assumption that absence of explicit prohibition implies permission.
The FCA has been clear in its broader regulatory communication that firms are expected to manage the risks associated with the technologies they use. The Consumer Duty framework, which has reshaped compliance expectations across retail financial services, places significant weight on firms having the governance and oversight necessary to demonstrate that client outcomes are being protected. AI tools that process client data without adequate controls are a Consumer Duty concern as much as a GDPR one.
As AI capabilities expand and use increases across the industry, regulatory expectations will sharpen. The tools available today will be joined by more sophisticated applications - AI-assisted advice support, automated document analysis, case preparation tools with direct lender integration. Each new capability brings new compliance questions that the existing principles must be applied to.
The brokers who have built a habit of thinking carefully about data governance, of documenting their tool usage, and of asking the right questions before adopting new technology, will be best placed to navigate this evolving landscape. The habit is more valuable than any specific policy, because the policy needs to change as the technology changes.
Before adopting any AI tool, there are five questions to ask that cover the essential compliance ground.
Does this tool process my inputs in a way that could constitute personal data processing? If the answer is yes, or possibly, the remaining questions become mandatory.
Is there a data processing agreement available from this provider? If not, identifiable client data should not enter the tool under any circumstances.
Does the provider commit to not using inputs for model training? Consumer tools frequently do not make this commitment. Enterprise versions typically do.
Where is the provider based, and does that jurisdiction provide adequate data protection safeguards relative to UK standards? Processing personal data with providers in jurisdictions without adequate protections creates additional compliance considerations.
What is the specific purpose for which this tool is being used, and can that purpose be achieved with anonymised inputs rather than identifiable data? In most cases the answer is yes, which removes the compliance concern while preserving the utility.
These questions apply to every AI tool, not just ChatGPT. The same analysis should be applied to any new tool before it becomes part of the workflow.
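Expressed as simple decision logic, the checklist reduces to a gate - again a hypothetical Python sketch, with example answers that should be checked against each provider's actual terms:

    # Illustrative only: the five questions as a gate for identifiable data.
    # The answers below are hypothetical examples, not a verdict on any tool.
    assessment = {
        "processes_personal_data": True,   # Q1: could inputs be personal data processing?
        "dpa_available": False,            # Q2: is a DPA in place or available?
        "no_training_on_inputs": False,    # Q3: provider commits not to train on inputs?
        "adequate_jurisdiction": False,    # Q4: UK, or adequate safeguards relative to UK standards?
        "purpose_works_anonymised": True,  # Q5: can the purpose be met with anonymised input?
    }

    def cleared_for_identifiable_data(a):
        if not a["processes_personal_data"]:
            return True  # no personal data processing, so the other questions do not bite
        # Any missing or unclear control means identifiable data stays out.
        return a["dpa_available"] and a["no_training_on_inputs"] and a["adequate_jurisdiction"]

    print(cleared_for_identifiable_data(assessment))  # False for this example
    # Q5 is the fallback: if the purpose works with anonymised input, use that instead.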
Is it a GDPR breach to use ChatGPT with client data as a UK mortgage broker?
Using the standard consumer version of ChatGPT with identifiable client data very likely constitutes a breach of UK GDPR obligations. The broker is the data controller for client personal data. Sharing that data with a third party AI platform without a lawful basis and without a data processing agreement in place is a breach of the broker's responsibilities. The fact that the tool is widely used and convenient does not change the compliance position.
What is a data processing agreement and does a mortgage broker need one to use AI tools?
A data processing agreement (DPA) is a contract between a data controller and a third party that processes personal data on their behalf. It sets out how the data will be handled, what it will be used for, how long it will be retained, and what happens in the event of a breach. A mortgage broker needs a DPA in place with any third party provider that processes identifiable client data, including AI tool providers. The consumer versions of most AI tools do not include a DPA, which means they cannot lawfully be used with identifiable client data.
What is the FCA's position on mortgage brokers using AI?
The FCA is actively developing guidance on AI use across financial services. Specific guidance for mortgage brokers is still evolving. However, the existing expectation - that authorised firms maintain adequate systems and controls over all tools and processes that touch client data and the advice process - applies now. Brokers are expected to think carefully about the tools they use, document how they are used, and have appropriate governance in place. Absence of specific AI guidance does not imply permission to use these tools without oversight.
Can a UK mortgage broker use ChatGPT Enterprise or Microsoft Copilot for work with client information?
Enterprise versions of these tools include data protection commitments materially different from the consumer tiers. ChatGPT Enterprise and Microsoft Copilot accessed through a business Microsoft 365 account both include terms that address how input data is handled and provide a contractual framework for data processing. These are not perfect solutions to every compliance question, and the specific terms should be reviewed against the broker's obligations, but they represent a significantly better compliance position than the consumer tools.
What should a mortgage broker include in an AI use policy?
The policy should cover: which AI tools are currently in use in the business, what each tool is used for, the rule that no identifiable client data is entered into consumer AI tools, the process for anonymising inputs before use, which enterprise tools if any are used with client data and on what basis, and who is responsible for ensuring these rules are followed. It should be dated, filed, and reviewed as guidance develops. It does not need to be a complex legal document - a clear, practical one-page policy demonstrates appropriate governance.
How should a mortgage broker anonymise client data before using AI tools?
Replace all identifying details with generic placeholders before entering information into any AI tool. The client's name becomes "the client" or is removed. Specific income figures become "X per year." Addresses and property details are generalised or omitted. Employment details are described generically. The tool can produce equally useful outputs - draft letters, case summaries, structured text - without knowing who the specific client is. The anonymisation takes seconds and removes the compliance risk entirely for consumer tool use.
What happens if a UK mortgage broker has already been using ChatGPT with client data?
Stop doing it immediately. Review what data has been entered and into which platforms. Consider whether the exposure needs to be documented as a personal data incident under UK GDPR's breach notification requirements - if there is a risk to clients' rights and freedoms, the ICO may need to be notified. Consult the network or principal firm, as most are already thinking about this area. Draft and implement an AI use policy to ensure the breach does not continue. The broker who identifies this issue and addresses it proactively is in a much better position than one who continues after becoming aware.
Does Consumer Duty affect how mortgage brokers should think about AI tool usage?
Yes. The Consumer Duty places significant weight on firms having adequate governance and oversight in place to protect client outcomes. Using AI tools that process client data without appropriate controls is a governance concern under Consumer Duty as well as a GDPR issue. A firm that cannot demonstrate that it has considered how its technology choices affect client data protection is unlikely to satisfy the oversight and governance expectations the Consumer Duty introduces.
Is anonymised data safe to use with consumer AI tools?
Anonymised data that cannot be used to identify a specific individual is generally not considered personal data under UK GDPR. Using properly anonymised information with consumer AI tools therefore does not carry the same compliance risk as using identifiable client data. The critical requirement is that the anonymisation is genuine: not merely removing the name while leaving other details that could still identify the individual, but stripping the information to the point where it cannot be traced back to a specific person.
How can a mortgage broker use AI productively while remaining compliant?
By using it for the activities where it is genuinely useful and where the input can be anonymised: drafting the structure of suitability letters, generating template communications, summarising lender criteria, creating content, researching general market information, and supporting the administrative elements of case management. For any of these activities, the AI tool's output is valuable and the input can be provided without identifiable client details. The compliance risk disappears when the data is properly anonymised, and the productivity benefit of the tool remains entirely intact.
Will the FCA eventually provide specific guidance on AI use for mortgage brokers?
The direction of travel strongly suggests yes. The FCA is actively working through how AI is regulated across financial services, and mortgage broking as a regulated activity will be within scope of any framework developed. The brokers who have already put governance in place, developed AI use policies, and adopted enterprise tools where appropriate will find compliance with any new formal guidance straightforward. Those who have not will face a more significant remediation task. Getting ahead of this now is materially easier than catching up later.