How Should I Manage Shadow AI Inside My Organization?

Dear Will & AiME,

We’re starting to suspect that employees across different teams are using AI tools on their own for tasks such as writing content, summarizing documents, and analyzing data, without IT or legal approval. Is this something we need to shut down, or just manage?

— Operations Lead, Minneapolis

Short Answer 💡

“Shadow AI” is the use of AI tools by employees without formal approval or oversight, often involving uncontrolled data sharing. Rather than banning AI tools entirely, companies are better served by setting clear data policies, defining acceptable use, providing approved alternatives, and applying targeted restrictions where risks are highest.

Dear Operations Lead,

Shadow AI is quickly becoming the new shadow IT, except it moves faster, is easier to access, and often operates completely under the radar.

The instinct to shut it down is understandable, but in most organizations a blanket ban won’t work. That doesn’t mean all AI tools should be allowed, either. The better question is how to bring AI use into the open, block access where the risks are too high, and capture the benefits where they can be managed.

Why Are Employees Using Unapproved AI Tools?

AI tools are frictionless. Employees don’t need procurement approval, technical integration, or even much training to start using them. If a tool saves time, improves output, or helps meet deadlines, people will use it.

Often, employees are simply trying to be more productive. The problem is that these tools are used without visibility into how data is handled, what terms apply, or what risks are being created for the business.

The Hidden Risks of Unmonitored AI Use

Shadow AI creates a gap between how work actually gets done and how leadership thinks it gets done.

That gap introduces several practical risks:

  • Sensitive or confidential information being entered into public AI tools

  • Inconsistent outputs that haven’t been validated or reviewed

  • Use of AI-generated content that may raise IP or originality concerns

  • Decisions being influenced by tools that lack transparency or auditability

From a business perspective, the issue extends beyond risk to loss of control. Companies can’t manage what they can’t see.

How Shadow AI Threatens Trade Secrets and IP Rights

If employees are inputting internal documents, customer data, product plans, or marketing strategies into AI tools, those materials may be subject to the platform’s terms of use. Depending on the tool and settings, that can include rights to retain, review, or even use that data for model improvement.

For companies relying on trade secrets or proprietary processes, uncontrolled AI tool use can quietly undermine those protections. There’s also the output side. If employees publish or rely on AI-generated content without oversight, questions around ownership, originality, and brand consistency emerge.

In some cases, the risks are serious enough that blocking access is the right call. If employees are using AI tools to process highly sensitive data—such as personal health information, financial records, or trade secrets—and no enterprise-grade alternative exists, IP blockers or network-level restrictions may be appropriate. The goal is not to punish productivity, but to prevent exposures that could result in regulatory violations, litigation, or loss of competitive advantage.
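For teams weighing network-level restrictions, the core mechanism is simple: an egress proxy or filter checks each outbound hostname against a deny-list. The sketch below is a minimal illustration of that check, the domain names are placeholders, not a vetted list, and a real deployment would enforce this at the proxy or firewall layer rather than in application code.

```python
# Minimal sketch of a deny-list check for outbound AI tool domains.
# The domains below are illustrative placeholders, not real services.
BLOCKED_AI_DOMAINS = {
    "chat.example-ai.com",
    "api.example-llm.io",
}

def is_blocked(host: str) -> bool:
    """Return True if host matches a blocked domain or any of its subdomains."""
    host = host.lower().rstrip(".")
    return any(
        host == domain or host.endswith("." + domain)
        for domain in BLOCKED_AI_DOMAINS
    )

# An egress proxy could call is_blocked() once per outbound request:
print(is_blocked("chat.example-ai.com"))    # True
print(is_blocked("eu.api.example-llm.io"))  # True (subdomain match)
print(is_blocked("intranet.company.com"))   # False
```

Matching subdomains as well as exact hosts matters in practice, because many AI services expose regional or API-specific subdomains that would otherwise slip past an exact-match list.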

How to Convert Shadow AI Into Managed AI

The practical approach is to convert Shadow AI into managed AI.

Start with visibility:

  • Acknowledge that AI use is already happening

  • Survey teams to understand what tools are being used and why

Then create clear, usable guidelines:

  • Define what types of data can and cannot be entered into AI tools

  • Identify approved tools or enterprise versions with stronger protections

  • Set expectations for review of AI-generated outputs
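A data policy like the one above can be backed by a lightweight pre-submission screen. The sketch below is illustrative only: the patterns are simplistic stand-ins for a vetted DLP ruleset, and the rule names are assumptions, not an established standard.

```python
import re

# Illustrative rules only; a real policy would rely on a vetted DLP ruleset.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal_label": re.compile(r"\b(CONFIDENTIAL|TRADE SECRET)\b", re.IGNORECASE),
}

def screen_prompt(text: str) -> list[str]:
    """Return the names of the sensitive-data rules the text triggers."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

hits = screen_prompt("Summarize this CONFIDENTIAL memo for client 123-45-6789.")
print(hits)  # ['ssn', 'internal_label']
```

A screen like this is best used to warn and log rather than hard-block, which keeps friction low while still giving legal and security teams visibility into what employees are attempting to submit.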

Provide alternatives:

  • If you restrict certain tools, offer approved ones that meet similar needs

  • Work with teams to integrate AI into workflows in a controlled way

Finally, align internally:

  • Legal, IT, security, and business teams should be coordinated

  • Policies should reflect how people actually work, not just ideal scenarios

Shadow AI is more than a compliance issue; it is a signal. It reveals where your teams see value, where processes are slow, and where innovation is already happening. The goal is to bring Shadow AI into a framework where the business captures the upside while managing the real risks.

— Will & AiME

Three Takeaways:

  1. Shadow AI is already happening. Addressing it reduces risk more than ignoring it.

  2. The biggest concern is uncontrolled data use, not tool adoption itself.

  3. Turn Shadow AI into managed AI through visibility, clear policies, approved alternatives, and targeted blocking where necessary.


Will Schultz & AiME

Will Schultz is an intellectual property and technology attorney and chair of Merchant & Gould’s Internet, Cybersecurity, and E-Commerce practice. He advises businesses on AI, online platforms, digital assets, and emerging technology law, drawing on experience as both a lawyer and entrepreneur.

https://www.merchantgould.com/people/william-d-schultz/