The regulatory landscape US businesses face
Three overlapping frameworks shape what AI governance means for a US business right now.
The EU AI Act came into force in August 2024, with phased obligations running through 2027. It applies to any AI system placed on the EU market or whose outputs are used in the EU – explicitly extraterritorial. A US business with EU customers, EU employees or an AI-driven service reaching EU users is within its scope. This isn't a future consideration; it's a current one.
US executive orders on AI – the Biden administration's October 2023 EO and subsequent Trump administration direction – primarily affect federal agencies and large AI developers. They shape the direction of US domestic regulation but don't yet create direct compliance obligations for most private businesses.
US state AI laws, led by Colorado, Illinois and Texas among others, are sector-specific and variable in their requirements. They're growing in number but remain inconsistent – no federal framework has yet resolved the patchwork.
The key point for most US businesses: the EU AI Act is the most immediate external obligation if they have any EU presence, EU customers or EU users. It's where governance investment has the most certain near-term return.
What the EU AI Act requires – practically
The Act uses a risk-based tiered approach. Understanding which tier your AI use cases fall into is the first practical step.
Prohibited AI covers real-time remote biometric identification in publicly accessible spaces, social scoring and manipulation of vulnerable groups. US businesses are unlikely to be deploying these, and there is no compliance path for prohibited practices – they are simply banned.
High-risk AI covers employment decisions, credit scoring, biometric identification and educational assessment, among others. These use cases carry specific compliance requirements: conformity assessment, human oversight, data governance documentation and incident reporting. If your business uses AI to screen CVs, score loan applications or make consequential decisions about individuals, this is the category that matters.
General-purpose AI models – GPT-4 class and equivalent – carry transparency and documentation obligations for their developers, not for businesses that use them via API.
Most SMEs deploying commercial AI tools – ChatGPT, Microsoft Copilot, Claude – are using third-party systems. Their obligation is primarily around use-case governance and transparency, not model compliance. You're not responsible for how OpenAI built its model; you are responsible for what you ask it to do and what decisions it informs.
The Act's practical impact on most US SMEs: document your AI use cases, classify their risk tier and apply appropriate human oversight to anything in or near the high-risk category.
Internal AI governance: what it means before regulation demands it
There's a strong business case for AI governance that has nothing to do with the EU AI Act.
Enterprise customers in financial services, healthcare and regulated industries are beginning to include AI governance in procurement questionnaires. If you sell to large organisations, you may be asked to demonstrate your AI governance posture before a contract is signed. This is already happening, not an anticipated future trend.
AI systems making or influencing consequential decisions – hiring, credit decisions, customer communications – create legal exposure under existing discrimination and consumer protection law, regardless of what AI-specific regulation says. A hiring tool that systematically disadvantages a protected group is a problem under current US employment law, not a future AI governance problem.
Internal AI governance doesn't require a compliance programme or a dedicated team. It requires a register of AI systems in use: what decisions each one influences, what data it processes, what human oversight is in place and when it was last reviewed. That register is the starting point. A spreadsheet and a policy owner are sufficient to begin.
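As a sketch of how lightweight that register can be, the following assumes a simple Python representation – the field names and review cycle are illustrative choices, not requirements drawn from any regulation:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemEntry:
    """One row in the AI governance register (fields are illustrative)."""
    name: str                  # e.g. "Microsoft Copilot"
    decisions_influenced: str  # what the system informs or decides
    data_processed: str        # categories of data it touches
    human_oversight: str       # who reviews its outputs, and how
    last_reviewed: date

def overdue(register: list[AISystemEntry], today: date,
            max_age_days: int = 180) -> list[str]:
    """Names of entries whose last review is older than the review cycle."""
    return [e.name for e in register
            if (today - e.last_reviewed).days > max_age_days]

register = [
    AISystemEntry("CV screening tool", "shortlisting candidates",
                  "applicant personal data", "recruiter reviews every rejection",
                  date(2024, 1, 10)),
    AISystemEntry("Support chatbot", "drafts customer replies",
                  "customer contact details", "agent approves before sending",
                  date(2025, 6, 1)),
]
print(overdue(register, today=date(2025, 9, 1)))  # → ['CV screening tool']
```

A spreadsheet with the same columns does the same job; the point is that each system has a named set of answers and a review date that someone owns.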
AI and data: the infrastructure your governance depends on
AI governance without data governance is mostly decorative. The two are inseparable in practice.
The questions that matter: what data does each AI system process? Where is it stored and by whom? Is it personal data subject to GDPR if you have EU users? Is it personal data subject to CCPA or equivalent state laws if you have California customers? Is it customer-confidential information that shouldn't be passing through a third-party AI system at all?
The Microsoft Copilot data governance issue is the most immediate practical illustration of this. Copilot can surface documents the requesting user has no business seeing – because the underlying SharePoint permissions are too broad, and Copilot respects those permissions rather than applying its own access controls. The governance failure isn't in the AI; it's in the data environment the AI operates in. Fixing it requires knowing what data the AI can reach, not just what you've asked it to do.
For any AI system processing personal data belonging to EU users, GDPR applies alongside the EU AI Act. For US state privacy laws, the same principle holds. Getting AI governance right means first getting the data governance underneath it in order.
What US businesses with EU operations need to do now
The EU AI Act's prohibited and high-risk provisions are already partially in force. If your business has EU customers, EU employees or EU-facing AI systems, the following steps are the practical starting point.
Conduct an AI use case inventory. List every AI system in active use: what it does, what decisions it influences or makes, what data it processes and who the affected users are. Include commercial tools as well as anything custom-built.
Classify each use case by risk tier. Most will fall into the general-purpose or limited-risk category. Flag any that involve employment decisions, credit assessment, biometric data or educational outcomes for closer attention.
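The flagging step can be reduced to a first-pass triage. The sketch below assumes each use case in the inventory carries a set of descriptive tags; the tag names are illustrative, and the check is a screening aid against the broad high-risk areas, not a legal classification:

```python
# Simplified screening against the EU AI Act's high-risk areas.
# Tag names are illustrative; a real assessment needs legal review.
HIGH_RISK_FLAGS = {
    "employment decision", "credit assessment",
    "biometric identification", "educational assessment",
}

def screen_use_case(tags: set[str]) -> str:
    """Rough first-pass triage of one inventoried use case."""
    if tags & HIGH_RISK_FLAGS:
        return "flag for high-risk review"
    return "general-purpose / limited-risk: document and monitor"

print(screen_use_case({"employment decision", "uses third-party LLM"}))
# → flag for high-risk review
print(screen_use_case({"internal drafting aid"}))
# → general-purpose / limited-risk: document and monitor
```

Anything the triage flags then proceeds to the conformity assessment documentation described in the next step; everything else stays in the register with its normal review cycle.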
For high-risk use cases, begin conformity assessment documentation. This includes documenting the system's purpose, the data it uses, the human oversight in place and the process for handling errors or incidents.
For all AI systems touching EU personal data, verify GDPR alignment. Data subject rights, processing agreements with third-party providers and retention limits all apply to data that passes through AI systems.
Appoint a compliance owner for the AI governance register. This doesn't need to be a specialist role – it needs to be a named person responsible for maintaining the register and running the review cycle. Without ownership, registers become outdated and governance becomes nominal.
The US domestic direction – and why EU compliance is still the near-term priority
US federal AI regulation is developing but not yet the immediate compliance priority for most businesses. The Trump administration's stated position is reducing regulatory burden rather than imposing it, and comprehensive federal AI legislation remains some distance away. State-level laws are growing but remain sector-specific and inconsistent – there is no single US compliance standard to target.
For US businesses, this creates a useful strategic position: the EU AI Act provides a well-defined external standard to build governance against now. That governance infrastructure – the use case inventory, risk classification, human oversight documentation – is transferable. The frameworks under development at US federal and state level are broadly aligned to the same principles the EU Act is built on. Investing in EU AI Act readiness now creates a governance foundation that will absorb US domestic requirements as they emerge, rather than requiring a second build from scratch.
The businesses that build AI governance now also spend less responding to regulation later. Governance built proactively is lighter and cheaper than governance built in response to a regulatory demand or an incident. The register exists, the ownership is clear, the review cycle is running – adding new requirements becomes an update, not a project.
Route B helps US and UK businesses build practical AI governance frameworks – use case inventory, risk classification and EU AI Act readiness. Get in touch to discuss your situation.
Get in Touch