Six questions to ask when crafting an AI enablement plan
Date:
Sat, 13 Dec 2025 13:00:00 +0000
Description:
AI's greatest threat is the growing number of untrusted/unmanaged apps that access company data without company knowledge.
FULL STORY ======================================================================
As we near the end of 2025, there are two inconvenient truths about AI that every CISO needs to take to heart.
Truth #1: Every employee who can is using generative AI tools for their job, even when your company doesn't provide an account, even when your policy forbids it, and even when the employee has to pay out of pocket.
Truth #2: Every employee who uses generative AI will provide (or likely has already provided) this AI with internal and confidential company information.
While you may object to my use of "every," the consensus data is quickly heading in this direction. According to Microsoft, three-quarters of the world's knowledge workers were already using generative AI on the job in 2024, and 78% of them brought their own AI tools to work.
Meanwhile, almost a third of all AI users admit they've pasted sensitive material into public chatbots; among those, 14% admit to voluntarily leaking company trade secrets. AI's greatest threat relates to an overall expansion of the Access-Trust Gap.
In the case of AI, this refers to the difference between the approved business apps that are trusted to access company data and the growing number of untrusted and unmanaged apps that have access to that data without the knowledge of IT or security teams.

Employees as unmonitored devices
Essentially, employees are using unmonitored devices, which can hold any number of unknown AI apps, and each of those apps can introduce significant risk to sensitive corporate data.
With these facts in mind, let's consider two fictional companies and their AI usage: we'll call them Company A and Company B.
In both Company A and Company B, business development reps are taking screenshots of Salesforce and feeding them to AI to craft the perfect outbound email for their next prospect.
CEOs are using AI to accelerate due diligence on recent acquisition targets under negotiation. Sales reps are streaming audio and video from sales calls to AI apps to get personalized coaching and objection handling. Product operations is uploading Excel sheets with recent product usage data in the hope of finding the key insight that everyone else missed.
For Company A, the above scenario represents a glowing report to the board of directors on how the company's internal AI initiatives are progressing. For Company B, the same scenario represents a shocking list of serious policy violations, some carrying privacy and legal consequences.
The difference? Company A has already developed and rolled out its AI enablement plan and governance model, while Company B is still debating what it should do about AI.

AI governance: from "whether" to "how" in six questions
Simply put, organizations cannot afford to wait any longer to get a handle on AI governance. IBM's 2025 Cost of a Data Breach Report underscores the cost of failing to properly govern and secure AI: 97% of organizations that suffered an AI-related breach lacked AI access controls.
So now, the job is to craft an AI enablement plan that promotes productive
use and throttles reckless behaviors. To get the juices flowing on what
secure enablement can look like in practice, I start every board workshop
with six questions:
1. Which business use cases deserve AI horsepower? Think of specific use cases for AI, like "draft a zero-day vulnerability bulletin" or "summarize an earnings call." Focus on outcomes, not AI use for its own sake.
2. Which vetted tools will we hand out? Look for vetted AI tools with baseline security controls, such as enterprise tiers that don't use company data to train their models.
3. Where do we land on personal AI accounts? Formalize the rules for using personal AI on business laptops, personal devices, and contractor devices.
4. How do we protect customer data and honor every contractual clause while still taking advantage of AI? Map model inputs against confidentiality obligations and regional regulations.
5. How will we spot rogue AI web apps, native apps, and browser plugins? Look for shadow AI use by leveraging security agents, CASB logs, and tools that provide a detailed inventory of extensions and plugins in browsers and code editors (a minimal illustrative sketch follows this list).
6. How will we teach the policy before mistakes happen? Once you have policies in place, proactively train employees on them; guardrails are pointless if nobody sees them until the exit interview.
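On question 5, visibility can start with something as simple as inventorying what is already installed on managed machines. The sketch below is a minimal, hypothetical illustration rather than any vendor's tooling: it assumes a Chrome/Chromium install on Linux, the default profile path, and that a crude keyword match on an extension's manifest name is a useful first-pass signal. Real coverage would come from endpoint agents and CASB telemetry, as noted above.

  # Minimal sketch: flag possibly AI-related Chrome/Chromium extensions on one host.
  # Assumptions (not from the article): Linux, default profile path, and that a
  # keyword match on the manifest "name" field is a useful first-pass signal.
  import json
  from pathlib import Path

  EXTENSIONS_DIR = Path.home() / ".config/google-chrome/Default/Extensions"  # assumed path
  AI_KEYWORDS = ("gpt", "copilot", "chatbot", "assistant", "llm")            # crude, assumed list

  def flag_ai_extensions(ext_root: Path = EXTENSIONS_DIR) -> list[dict]:
      findings = []
      if not ext_root.exists():
          return findings
      # On-disk layout is Extensions/<extension-id>/<version>/manifest.json
      for manifest in ext_root.glob("*/*/manifest.json"):
          try:
              data = json.loads(manifest.read_text(encoding="utf-8"))
          except (OSError, json.JSONDecodeError):
              continue  # unreadable or malformed manifest; skip it
          name = str(data.get("name", "")).lower()  # may be a __MSG_ placeholder
          if any(keyword in name for keyword in AI_KEYWORDS):
              findings.append({"id": manifest.parts[-3], "name": data.get("name")})
      return findings

  if __name__ == "__main__":
      for hit in flag_ai_extensions():
          print(f"{hit['id']}: {hit['name']}")

In practice, an inventory like this would feed a central report rather than print to a screen, and it covers only one browser on one device. The point is simply that the visibility question is answerable with modest effort.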
Your answers to each question will vary depending on your risk appetite, but alignment among legal, product, HR, and security teams must be non-negotiable.
Essentially, narrowing the Access-Trust Gap requires that teams understand and enable the use of trusted AI apps across their company, so that employees aren't driven toward untrustworthy and unmonitored app use.

Governance that learns on the job
Once you've launched your policy, treat it like any other control stack: measure, report, refine. Part of an enablement plan is celebrating the victories and the visibility that comes with them.
As your understanding of AI usage in your organization grows, expect to revisit this plan and refine it continuously with the same stakeholders.

A closing thought for the boardroom
Think back to the mid-2000s, when SaaS crept into the enterprise through expense reports and project trackers. IT tried to blacklist unvetted domains, finance balked at credit-card sprawl, and legal wondered whether customer data belonged on someone else's computer. Eventually, we accepted that the workplace had evolved, and SaaS became essential to modern business.
Generative AI is following the same trajectory at five times the speed. Leaders who remember the SaaS learning curve will recognize the pattern: govern early, measure continuously, and turn yesterday's gray-market experiment into tomorrow's competitive edge.
======================================================================
Link to news story:
https://www.techradar.com/pro/six-questions-to-ask-when-crafting-an-ai-enablement-plan
--- Mystic BBS v1.12 A49 (Linux/64)
* Origin: tqwNet Technology News (1337:1/100)