The AI Barn Door Is Open—Now What?
- Lyle Jacon
- Aug 25
- 4 min read
The old saying applies: trying to rein in AI now is a bit like closing the barn door after the horses are already out (or is it the cows?).
AI tools like ChatGPT, Microsoft Copilot, GitHub Copilot, Gemini, Claude, Midjourney, and hundreds more have entered the workplace faster than most organizations anticipated. Research suggests there are now over 10,000 commercially available AI tools spanning marketing, coding, analytics, HR, and design - with new entrants still launching. (And if you look at LinkedIn, there is an even higher number of AI experts!) Employees are using AI to draft emails, summarize meetings, generate code, create client-facing content, and generate images (meet Aivel below!) - often without formal approval or guidance.
While many companies are rushing to "use" AI, others have attempted to ban it outright, and still others are quietly watching usage spiral beyond their control. The reality? AI is here, and the policy vacuum is putting organizations at risk. The question is: how big is the concern, and what should you do about it? It's time to shift from reactive panic to proactive governance.
The Quiet Spread of AI
Yes, the concern is real. You've probably already heard some of the stories of AI use gone wrong...
At a mid-sized marketing agency, junior staff began using generative AI to draft client proposals. It saved time - but one proposal included a fabricated statistic that nearly made it into a $500K pitch. No one had reviewed the AI-generated content, and there was no policy in place to guide usage.
Meanwhile, a regional healthcare provider discovered that administrative staff were using AI to summarize patient intake notes. The summaries were accurate, but the data had been entered into a public LLM without encryption or consent. Legal and compliance teams scrambled to assess exposure, only to realize there was no inventory of AI tools in use.
And inappropriate AI use has already produced lawsuits and real financial losses - ranging from a few hundred dollars to Zillow's $304 million Q3 2021 loss and the closure of an entire division. These cases are happening across industries, from finance to education to manufacturing - and many organizations may not even know it.
A CISO Panel’s Take: “Life Finds a Way”
At a recent cybersecurity conference, I sat in on a panel of five CISOs from global enterprises. Their consensus? AI is already in use across their organizations - whether sanctioned or not. They weren't focused on bans. Instead, they emphasized enabling safe, constructive use. So, as Ian Malcolm notes in Jurassic Park, "Life finds a way." AI, like nature, is finding a way. The job now is to guide it - not contain it. (And make sure it doesn't end up like the park!)

“Don’t Let the Raptors Run the Helpdesk”
Unsupervised AI can chew through sensitive data before anyone notices.
Don't Ignore It - Get Started and Build from Here
Whether you're a global enterprise or a 10-person startup, here is one way to begin managing AI usage constructively and start minimizing your risk:
Develop and Publish Your Interim Guardrails
Start with clarity - not complexity. Focus on communication and training, and aim to control high-risk behaviors rather than impose blanket bans.
Step-by-step:
Identify risky behaviors: Inputting client data into public LLMs, using AI to generate legal or medical advice, not reviewing results before distribution, and anything specific to your operations - then incorporate these into your do's and don'ts document.
Create a one-pager of do's and don'ts: Keep it simple, visual, and actionable, and include guidelines on the risky behaviors identified for your organization. (Looking for a starting point? Click HERE for a downloadable sample.)
Distribute the one-pager: Share via email, internal communication channels, onboarding portals, and team meetings.
Educate with a short training video: Have your cybersecurity, IT, HR, or leadership team record a brief explainer walking through the one-pager and any department-specific guidance. Keep it under 5 minutes and focus on clarity, relevance, and tone.
Create a short quiz: Ask employees to confirm understanding of key do's and don'ts. This can be embedded in your LMS, HR tracking system, Google Forms, or other survey tools.
Optional enhancements:
Add real-world examples of misuse to make the risks tangible.
Include a feedback loop - let employees suggest updates or flag unclear guidance.
Assign a point person or internal channel for AI questions.
Add a request page/portal for employees to request a review of additional AI tools for approval.
Adopt an Approved Tools List
Your one-pager should link to a living document of AI tools approved by your IT or security team (or your MSP/MSSP), covering how each tool may be used, what data may and may not be entered, and how to use it effectively (prompts and other "how-to's").
Unless you have a dedicated security or IT team qualified to provide a secured, approved instance, PII/PHI, IP, and other confidential or protected data should never be entered into AI tools.
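For teams that want to go a step beyond a static document, the approved tools list can also be kept as structured data, so it is easy to publish, version, and query. The sketch below is a minimal, hypothetical example in Python - the tool names, data classifications, and helper function are illustrative assumptions, not recommendations or a real registry.

```python
from dataclasses import dataclass

# Hypothetical sketch of an approved-tools registry as structured data.
# Tool names and data classifications below are illustrative only.

@dataclass
class ApprovedTool:
    name: str
    approved_uses: list        # e.g. "drafting emails"
    allowed_data: set          # data classifications permitted for this tool
    notes: str = ""

REGISTRY = {
    "InternalChatAssistant": ApprovedTool(
        name="InternalChatAssistant",
        approved_uses=["drafting emails", "summarizing public documents"],
        allowed_data={"public", "internal"},
        notes="Enterprise instance; no PII/PHI per policy.",
    ),
}

def is_use_permitted(tool_name: str, data_classification: str) -> bool:
    """True only if the tool is on the approved list AND the data
    classification is explicitly allowed for that tool."""
    tool = REGISTRY.get(tool_name)
    return tool is not None and data_classification in tool.allowed_data
```

A structure like this makes the "default deny" posture explicit: an unknown tool, or an unlisted data classification, simply isn't permitted until someone reviews and adds it.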
Success Story: Controlled Adoption, Minimized Risk
A national insurance provider noticed early signs of AI tool sprawl—employees were experimenting with ChatGPT, Bard, and Jasper without oversight. Instead of banning AI outright, leadership launched a structured rollout: they published interim guardrails, created an approved tools list, and assigned department leads to monitor usage. They also embedded AI policy training into onboarding and quarterly compliance refreshers.
Within six months, they achieved:
78% reduction in unapproved AI tool usage
Zero data privacy incidents across client-facing teams
92% employee awareness of AI policy boundaries
This proactive approach allowed the company to harness AI’s benefits while minimizing risk.
What’s Next
AI isn’t a future risk - it’s a present reality. Whether you're a lean startup or a sprawling enterprise, the key is to get started. Develop and publish clear guardrails, and begin laying the groundwork for sustainable governance. Use the guidance above to get your program started.
If you feel a bit overwhelmed and would like help ensuring your AI use is being done safely and in the best interests of your company, reach out to us. We can help you assess usage, draft policy language, and implement practical controls (click the button above!).
Next up in the series: How do you approve AI tools for organizational use? In the next blog post, we'll walk through best practices for vetting and approving AI tools - including security, compliance, and operational criteria.
Disclaimer: This article was created with some AI assistance, but was edited, reviewed, and fact-checked by a real person.