
As governments and industry leaders continue to grapple with how to regulate AI, balancing opportunities for innovation with societal safeguards is a key challenge. While the EU AI Act's comprehensive framework of protective measures has been praised by cybersecurity experts and activists, others argue that such regulations hinder progress and slow the pace of AI innovation.
The need for regulation: We spoke to Prteek Wahi, Vice President - Cybersecurity Architect at JPMorgan Chase, about finding the balance between protection and progress. "I personally believe more regulation is not always a bad thing," he says. "Yes, it restricts business sometimes, but it secures assets. Those best practices that have been determined using prior regulations should be ongoing. It's always uphill from here, no downhill in terms of the best practices."
Wahi suggests that organizations should aim to exceed regulatory requirements rather than merely meeting minimum standards. "There are times when regulations ask us to do a bare minimum, but we still do much more than that," he notes. "We shouldn't take a step back, we should still continue at least from a cybersecurity perspective."
Trusted platforms: Wahi emphasizes the importance of using approved models from established vendors with government relationships rather than downloading unvetted models from open platforms. "The first risk and biggest risk around this is data sharing," he warns. "Even though we have all kinds of agreements and indemnity clauses with these vendors, there's still a risk that our data can get leaked or intermingled with other organizations who are using the same services. These big vendors do not provide us with dedicated models, we depend upon shared ones. Those boundary walls can easily be crossed."
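One common mitigation for the data-sharing risk Wahi describes is to filter prompts before they leave the organization for a shared, multi-tenant model. The sketch below is illustrative only, not Wahi's or JPMorgan Chase's practice; the patterns are a toy subset of what a real data-loss-prevention policy would cover.

```python
import re

# Hypothetical pre-send redaction filter: mask obvious sensitive tokens
# before a prompt is sent to a shared model service. The patterns are
# illustrative, not a complete DLP policy.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace each match with a labeled placeholder, e.g. [EMAIL]."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Contact jane.doe@bank.com about account 123-45-6789."))
# -> Contact [EMAIL] about account [SSN].
```

Redaction reduces, but does not eliminate, the boundary-crossing risk: context around the placeholders can still leak information, which is why vendor agreements and dedicated deployments remain part of the picture.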
Governance challenges: Beyond data security concerns, Wahi identifies governance as another significant challenge. "The second biggest risk is how do we govern the use of these models in our organization? Different application teams and business processes want to use these models to streamline and make their processes efficient. However, the demand is so huge that it's becoming difficult to govern."
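One way organizations tame the demand Wahi describes is a central gate: application teams request model access through a registry of vetted models rather than calling shared services directly. The sketch below is a minimal illustration of that pattern; the model names, risk tiers, and policy fields are invented, not taken from any real deployment.

```python
# Hypothetical governance gate: requests are allowed only for vetted
# models, and only for data at or below the classification level
# approved for that model. All names here are invented for illustration.
APPROVED_MODELS = {
    "vendor-a/chat-large": {"max_data_class": "internal"},
    "vendor-b/summarizer": {"max_data_class": "public"},
}

DATA_CLASS_RANK = {"public": 0, "internal": 1, "confidential": 2}

def authorize(model: str, data_class: str) -> bool:
    """Deny unvetted models by default; otherwise enforce the data-class cap."""
    policy = APPROVED_MODELS.get(model)
    if policy is None:
        return False  # model not in the registry: deny
    return DATA_CLASS_RANK[data_class] <= DATA_CLASS_RANK[policy["max_data_class"]]

print(authorize("vendor-a/chat-large", "internal"))      # True
print(authorize("vendor-a/chat-large", "confidential"))  # False
print(authorize("random/unvetted-model", "public"))      # False
```

Deny-by-default is the important design choice: when demand outpaces the governance team's review capacity, new models queue for vetting instead of slipping into production unreviewed.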

Black box dangers: For cybersecurity professionals like Wahi, API security and third-party security remain persistent concerns. "I know these topics have been top contenders for the last five years, and trust me, for the next five years they'll still be top contenders," he predicts.
"Third party security risk remains a black box, especially with vendors incorporating different models as part of their services making things more entangled, more mysterious. You essentially have a layer of a black box under another layer of black box," Wahi explains.
Future security concerns: Looking ahead, Wahi believes the threat quantum computing poses to encryption looms large as an issue that needs addressing. "Our encryption protocols are soon going to get outdated. We are already at that stage where malicious actors out there are collecting encrypted data," he cautions.
"Just a few years back, there was a time when they thought, 'It's encrypted data, so there's no point in collecting it.' But now they're collecting encrypted data in the hope that within the next five years, quantum computing problems will be solved and they'll be able to decrypt that data," Wahi explains. “So, I’m most interested in how we can protect our current encryption protocols or how we can upgrade those in the coming years before the attackers start decrypting our data.”