Let's talk about AI governance, a critical topic with many diverse viewpoints.
Today we want to explore the gap between policy and practice as AI gets embedded deeper into operations.
We roped in our resident AI expert, Elaheh Nouri, for her thoughts on the matter.
Let’s get into it.
Who should own AI governance?
What we’re seeing, both with clients and internally, is pretty simple: AI adoption almost always happens before AI governance.
This wave is different from past technology shifts. Normally, leadership picks a tool, rolls it out, and manages the change top-down. With generative AI, it flipped. Employees started experimenting first because the tools are easy and accessible. Governance conversations came later.
In a lot of the AI readiness assessments we’ve run, AI usage was already widespread before anyone had clearly defined ownership, policies, or guardrails. Leadership often assumes there’s guidance in place. When we survey staff, we hear something different. In some organizations, up to 80% of employees were using AI tools, while formal governance was still nonexistent or unclear.
That’s where “Shadow AI” shows up. Teams are using different tools without visibility from IT or leadership, even when security policies technically discourage it.
We often explain governance with a simple analogy. Designing a house is the visible part: the layout, the finishes, all the exciting decisions. But none of it stands without a foundation. Governance is the foundation for AI. Most people don't see it, and it's not glamorous, but without it, things get unstable fast.
So who owns it?
In our view, no single department can.
IT handles the technical controls such as monitoring, data loss prevention, and access controls. Business teams define what data is sensitive and what practical use actually looks like. Leadership sets risk appetite and accountability. Legal defines the boundaries. And staff need enablement so they can use AI responsibly instead of guessing.
If governance sits in just one silo, it fails. It only works when it’s cross-functional and treated as an operating model, not a policy document.
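To make the "technical controls" side concrete, here is a minimal sketch of the kind of pre-send check a data loss prevention guardrail might run before a prompt reaches an external AI tool. The pattern names and rules are purely illustrative assumptions, not a real DLP product or anyone's production policy:

```python
import re

# Illustrative patterns a DLP guardrail might flag; real deployments
# use far richer rules, classifiers, and business-defined data types.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def redact(prompt: str) -> str:
    """Replace flagged spans with placeholders instead of blocking outright."""
    for name, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[{name.upper()} REDACTED]", prompt)
    return prompt
```

Notice the cross-functional point in miniature: IT can build the check, but the business teams decide which patterns count as sensitive, and leadership decides whether a match blocks the prompt or merely redacts and logs it.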
Are leaders confident, or still figuring it out?
High usage doesn’t equal maturity.
When we shifted our assessments from asking leadership how confident they felt to asking employees directly, the results were eye-opening. Only about 5% of staff described themselves as highly confident using AI in their daily work. Most were in the “experimenting” category, trying things out, but without real structure or training.
So yes, adoption is happening quickly. But enablement, governance, and clarity are lagging behind.
We’ve also seen leadership perspectives evolve over time. Early on, the conversation was almost entirely about speed. How fast can we deploy? Where can we create value? How do we not fall behind?
Now, a year in, the tone is more balanced. Leaders are asking harder questions about sustainability, risk exposure, and long-term structure. There’s a growing realization that scaling AI without governance creates invisible risk that compounds over time.
So I’d say many leaders are confident in AI’s potential, but still actively figuring out how to manage it responsibly.
What keeps you up at night when it comes to governing AI responsibly?
Honestly? It’s not some advanced, futuristic AI risk scenario.
It’s everyday use without visibility, combined with unrealistic expectations.
Two things concern us most.
First, over-trusting AI. If organizations expect AI to be flawless, the first mistake becomes a crisis. AI will make errors. Governance isn’t about preventing every mistake. It’s about making mistakes manageable and contained.
Second, treating AI like just another software rollout. When that happens, it becomes a commodity tool instead of a strategic capability. Productivity gains stay small. Learning remains fragmented. And risk quietly builds in the background.
The organizations making real progress aren’t the ones writing 40-page policies. They’re the ones operationalizing governance with clear ownership, technical guardrails, practical training, and ongoing monitoring. They treat AI as something that needs structure to scale.
That’s what allows adoption to move fast without creating long-term instability.
Thinking about AI governance for your own company?
Tecnet’s suite of AI Solutions can assist with the conversation and the roadmap. Reach out to us today for a free consultation.