AI Agents in the Workplace –  How Can Businesses Stay in Control?

Dear Will & AiME,

We’ve started seeing new tools like Copilot CoWork, Claude CoWork, and other AI agents that act more like coworkers than traditional software. These agents can browse the web, draft communications, analyze internal documents, and interact with company systems. The productivity potential is exciting, but a bit overwhelming. What should businesses consider to get the most out of these agents?

— Head of Technology Strategy, Financial Services Company

Short answer 💡

AI agents can significantly improve productivity, but they also introduce new risks around system access, data exposure, and unintended actions. Businesses can stay in control by limiting permissions, monitoring activity, and maintaining human oversight over high-impact decisions.

Dear Head of Technology Strategy,

The newest generation of generative AI tools is evolving from simple assistants into “agents” that can act autonomously within defined tasks. They can retrieve information, generate outputs, and interact with internal systems with minimal human direction.

This shift opens new strategic opportunities, but also new governance considerations. Unlike traditional software, which acts only when prompted, AI agents can operate semi-autonomously. When they have access to internal data, documents, APIs, and external sources, managing access becomes critical.

How Do AI Agents Impact Cybersecurity?

One emerging risk is indirect prompt injection, where instructions can be embedded in external content that an AI agent retrieves during normal activity.

For example, if an agent browses a webpage for research, the page may contain hidden instructions in text or code. The agent may interpret those instructions as part of its task and take unintended actions, such as accessing internal systems or sensitive data. Because the agent is following its instructions, these actions may not immediately appear unusual.

AI agents often have broader system access than traditional tools. That makes access control a crucial safeguard.

How Can You Protect Your Intellectual Property?

For many companies, the information accessible to AI agents includes:

  • proprietary code and technical documentation

  • internal strategy materials

  • confidential customer data

  • product roadmaps and trade secrets

Without proper controls, this information can be exposed or misused. Strong access management helps preserve trade secret protection and ensures compliance with contractual obligations to partners and customers. This is not just a security issue, it’s a core IP and business risk.

Strategic Permission Management

Defining what agents can access is a critical control point. Businesses should apply the principle of least-privileged access. This means an AI agent should only have access to the systems and data necessary to complete its assigned task. For example, an agent drafting marketing content likely does not need access to engineering repositories or confidential product designs.

Role-based access controls and API monitoring can help ensure agents operate within defined boundaries.

Content Monitoring Best Practices

External content is a primary entry point for risk. Monitoring what agents retrieve and process is essential. Before agents process external content, businesses can implement filtering or scanning systems to detect hidden instructions, encoded prompts, or adversarial content.

Regular testing also plays a key role. Security teams can simulate prompt-injection scenarios in controlled environments to evaluate how agents respond and identify potential vulnerabilities.

Human Collaboration Checkpoints

While AI agents can improve productivity, certain actions still benefit from human review. For example:

  • external communications sent to customers or partners

  • financial or operational decisions

  • publication of sensitive internal information

Adding human approval checkpoints helps ensure quality, reduce risk, and provide an opportunity to review outputs before they are finalized.

‍Educating the Workforce

Employees play a critical role in the safe and effective use of AI agents.

One common risk arises when copying content from unknown or untrusted sources into AI tools. If that content contains hidden prompts, the agent may execute unintended instructions without the user realizing it.

Training employees to verify sources and avoid unsanitized inputs helps reduce this risk. As with broader cybersecurity practices, education is a key line of defense.

Capture the Opportunity

AI agents will become part of the workplace. Organizations that treat them as core operational infrastructure, supported by strong controls, monitoring, and human oversight, will be best positioned to realize their value.

-Will & AiME

Three Takeaways:

  1. AI agents autonomously interact with web content and internal systems, making thoughtful governance a strategic advantage.

  2. Defining permissions, monitoring access, and scanning for hidden prompt injections are effective strategies.

  3. Human collaboration and employee awareness maximize effectiveness as AI agents integrate into everyday workflows.

‍ ‍

Will Schultz & AiME

Will Schultz is an intellectual property and technology attorney and chair of Merchant & Gould’s Internet, Cybersecurity, and E-Commerce practice. He advises businesses on AI, online platforms, digital assets, and emerging technology law, drawing on experience as both a lawyer and entrepreneur.

https://www.merchantgould.com/people/william-d-schultz/
Next
Next

What Is Surveillance Pricing? Why Should Businesses Care?