
AI governance is fragmented worldwide, with countries building mismatched frameworks in isolation. In the U.S., the absence of federal regulation has left states scrambling to fill the gap. Could adopting a global standard like ISO 42001 offer the clarity and cohesion we need?
Selin Kocalar is the Co-Founder of Delve, an AI-powered platform that helps companies meet compliance standards. Kocalar joined us for a conversation about how fragmentation in AI compliance is creating confusion and tension for businesses and governments, and why reliable AI governance will be imperative.
Innovation vs regulation: "In the U.S., there’s almost this race to push AI innovation out as quickly as possible, trying to stay ahead of other countries," explains Kocalar. "It’s leading to tension between businesses pushing forward with AI innovation and governments trying to manage it. We’re always going to be walking a tightrope—trying to balance the need for innovation with the necessity of ensuring that it’s done responsibly and safely."
States vs Feds: Kocalar is quick to note that tension also exists between individual states and the federal government. "Right now, the federal government is taking a deregulatory approach to AI, and as a result, states are left to figure out their own path. For example, California had the proposed AI Safety Act last year, which was vetoed at the last minute. Meanwhile, states like Colorado, Utah, and Texas are developing their own AI regulations as well."

Weaving a tangled web: To complicate matters further, the tensions between states, federal agencies, and businesses aren’t siloed; they intersect and exacerbate the situation, creating more uncertainty and inconsistencies. "A lot of these regulations are very new, and people are still trying to figure out how they should be interpreted. Until these regulations start being enforced, the impact on businesses remains unclear," says Kocalar. For now, companies are caught in a kind of regulatory limbo, waiting to see how state-level rules evolve and what eventual compliance frameworks will look like.
A potential fix: Amid the uncertainty, Kocalar points to ISO 42001 as a potential solution. This international standard for AI management systems is increasingly seen as a model for responsible innovation. "ISO 42001 is already being referenced in other regulations like the EU AI Act," Kocalar explains. "It’s gaining traction globally, and I think it’s going to become the international best practice for AI governance in the coming years."
Increasingly popular option: While ISO 42001 remains a voluntary standard, Kocalar believes it could become a de facto requirement for many companies as it grows in prominence. "Even though it's voluntary, we’re going to see more and more companies adopting ISO 42001 as a best practice, especially as larger companies push for standardized frameworks," she predicts. As more U.S. states look to ISO 42001 as a model for their own rules, it could provide the structure the industry and federal government need to standardize regulation.