
Governments caught in the global AI arms race are torn between prioritizing speed and ensuring compliance. Those that race ahead to develop the latest AI are more likely to defend their standing as global AI leaders, but often at the risk of public wellbeing. Those that emphasize AI regulation may mitigate potential harm but could fall behind their competitors. This raises an important question: where should the line be drawn between progress and wellbeing in AI development?
Nicholas Clarke, Chief AI Officer at Intelagen and Alpha Transform Holdings, a digital engineering company that builds solutions and digital experiences powered by Agentic AI, Web3/Blockchain, and Google Cloud technologies, warns that without proper compliance frameworks and government oversight, AI could spiral out of control and potentially fall into the hands of bad actors.
The argument for slowing down: Clarke stresses that governance in AI is not a roadblock to innovation but a necessary step in ensuring safety. “If you look at something like the EU AI Act, the level of detail they provide in terms of governance frameworks—what reports need to be made, what audits need to be conducted—it’s seen in the United States as something that slows down progress,” Clarke explains. “But sometimes, slowing down is exactly what’s needed. It’s like slamming on the brakes in a car—if you don’t, you might veer off the road and crash.”
The challenge, according to Clarke, is more political than technical. He highlights how wealthy individuals and corporations, insulated from the consequences, push for faster progress without considering the risks it poses to the public. “It’s a complicated political problem,” he says. “The people with the most power and money aren’t paying attention to the road ahead, and it’s the everyday person who bears the consequences.”
Nano governance: What might effective governance look like? Clarke advocates for what he terms “nano governance”—the smallest unit of governance that allows a system to be controlled in a compliant manner. “Companies are building agentic compliance technology to prevent agents from doing illegal things and to track them back to humans for accountability,” Clarke explains.
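As a rough illustration of what Clarke describes, a nano-governance layer might sit between an agent and each action it attempts, checking the action against a policy and logging who is accountable for it. The sketch below is a minimal, hypothetical example: the policy rules, action names, and record fields are assumptions for illustration, not any particular vendor's API.

```python
# A minimal sketch of a "nano governance" guardrail: the smallest unit of
# control wrapped around each agent action. All names are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    """Ties every agent action back to an accountable human."""
    agent_id: str
    responsible_human: str   # accountability: every agent maps to a person
    action: str
    allowed: bool
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Example policy: actions an agent may never take autonomously.
PROHIBITED_ACTIONS = {"transfer_funds", "sign_contract", "delete_records"}

audit_log: list[AuditRecord] = []

def governed_execute(agent_id: str, responsible_human: str, action: str) -> bool:
    """Check an action against policy, log it, and block it if prohibited."""
    allowed = action not in PROHIBITED_ACTIONS
    audit_log.append(AuditRecord(agent_id, responsible_human, action, allowed))
    if not allowed:
        # Blocked actions are escalated to the accountable human, not executed.
        print(f"BLOCKED {action!r} by {agent_id}; escalating to {responsible_human}")
        return False
    print(f"ALLOWED {action!r} by {agent_id}")
    return True

if __name__ == "__main__":
    governed_execute("agent-42", "alice@example.com", "draft_email")
    governed_execute("agent-42", "alice@example.com", "transfer_funds")
```

The point is less the code than the shape: a check on every action, and an audit trail that a regulator could trace back to a named person.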

Compliance comes at a cost: However, the push for compliance is not without its challenges. Clarke points out that implementing compliance frameworks is both costly and technically difficult. “Compliance is about having a checklist of governing controls that you apply through your technologies,” he explains. “But it’s hard and expensive, and many companies don’t account for this in their business models. People are so focused on amassing wealth quickly that they overlook the costs of regulatory compliance.”
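To make the “checklist of governing controls” concrete, one way to picture it is as data: a list of controls, each paired with a check that passes or fails against a system's configuration. The sketch below is purely illustrative; the control names and configuration fields are invented for the example.

```python
# Hypothetical sketch of compliance as a checklist of governing controls,
# evaluated against a system's configuration. Control names are invented.
from typing import Callable

SystemConfig = dict[str, object]

# Each control pairs an identifier with a predicate over the system config.
CONTROLS: list[tuple[str, Callable[[SystemConfig], bool]]] = [
    ("audit-logging-enabled", lambda c: bool(c.get("audit_logging"))),
    ("human-owner-assigned",  lambda c: bool(c.get("responsible_human"))),
    ("model-card-published",  lambda c: bool(c.get("model_card_url"))),
]

def run_compliance_checklist(config: SystemConfig) -> list[str]:
    """Return the identifiers of all controls the system fails."""
    return [name for name, check in CONTROLS if not check(config)]

if __name__ == "__main__":
    config = {"audit_logging": True, "responsible_human": "bob@example.com"}
    failures = run_compliance_checklist(config)
    print("Failed controls:", failures or "none")
    # -> Failed controls: ['model-card-published']
```

Even in this toy form, the cost Clarke describes is visible: every control implies engineering work, auditing, and ongoing maintenance that a business model has to absorb.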
For Clarke, the solution lies in enforced regulatory standards that compel companies to take responsibility. Without these mandates, he believes, companies will continue to prioritize profits over safety, leaving the public vulnerable.
Reducing agentic fraud: Clarke is particularly concerned about the potential for fraud perpetrated by Agentic AI. “The consumer fraud agency has been wiped out, but it’s exactly these agencies that should be guiding others on how to reduce agentic fraud,” he says. Fraudulent schemes targeting vulnerable populations, such as seniors with dementia, are already on the rise. “It’s a bigger issue than just a technical one,” Clarke argues. “It’s about political will, creating the right regulatory frameworks, and ensuring these structures are enforced.” Without the right frameworks, AI could become a tool for exploitation rather than innovation.