IT Leaders Are Scared of Deepfakes, But Using Banned AI Anyway

According to Silicon Republic, a new survey from business consultancy Storm Technology reveals that 27% of IT leaders are concerned about their ability to detect deepfake attacks over the next 12 months. The survey of 200 IT decision-makers in Ireland and the UK found this fear is more acute in larger enterprises, where one-third are worried, compared to 23% of SMEs. The top concerns were data protection (34%), increased risk of cyber attacks (31%), and shadow AI—the use of unsanctioned tools—which worries 25% of leaders. Shockingly, half of the respondents know people in their organization are using such tools, and 55% of IT leaders admit to using unsanctioned AI tools themselves. Furthermore, 32% of organizations lack a strategy to combat problems arising from AI, and 78% believe a data readiness project is needed for successful AI adoption.

The Shadow AI Paradox

Here’s the thing that jumps out: the report paints a picture of organizational hypocrisy. On one hand, IT leaders are legitimately worried about advanced threats like deepfakes. That’s a rational, forward-looking fear. On the other hand, half of them know unsanctioned AI is being used in their organizations, and more than half are personally *using it themselves*. It’s like being afraid of a house fire while casually storing gasoline in your living room. The stat that only 60% of companies have even specified which AI tools are permitted is just wild: it means 40% are operating in a total free-for-all. No wonder 28% think their internal governance is inadequate; it probably is.

Governance Lagging Behind Adoption

Sean Tickle from Littlefish nailed it: speed is outpacing maturity. Everyone’s rushing to adopt AI because they’re scared of being left behind, but almost no one is building the guardrails first. The result? A perfect storm of deepfake threats, data leaks from random SaaS tools, and zero trust in the platforms they’re using. And when nearly a third of companies have no strategy for AI problems, what’s the plan when a convincing deepfake audio call from the “CEO” instructs finance to wire money? It’s not an “if” anymore, it’s a “when.” This is a foundational tech shift, and treating it like just another software rollout is a recipe for disaster.

A Crisis of Leadership

So what’s the real takeaway? Tickle called it: shadow AI is a leadership issue, not a user one. When IT leaders themselves are part of the problem, how can any policy succeed? The report suggests the solution is “visibility, policy clarity, and data readiness.” That sounds right, but it starts with accountability at the top. Leaders need to stop reaching for random AI tools as a quick fix and start building the boring, robust systems that actually protect the company. Otherwise, the 27% worried about deepfakes will become 100% dealing with a breach. And at that point, it’s too late.
