How Do I Safely Scale AI Agents in My Business?

Dear Will & AiME,

We’re starting to pilot AI tools that take actions like sending emails, updating systems, and interacting with customers. This feels like a big shift. What should we be thinking about before we let these tools operate more independently?

— Head of Operations, SaaS Company

Short Answer 💡

As AI moves from assisting to acting, businesses should define when human review is required, limit system access, and monitor activity. Treating AI like a supervised team member helps reduce risk while capturing efficiency gains.

Dear Head of Operations,

You’re right to flag this as a shift. Moving from AI that assists to AI that acts is a critical transition, and it raises a different category of business and legal considerations.

How Are AI Agents Different from Traditional AI Tools?

Traditional generative AI tools produce drafts of emails, code, and marketing copy. A human typically reviews the output before anything goes live.

AI agents, by contrast, are designed to take action: sending communications, making updates, triggering workflows, or even interacting with third parties. In some cases, they operate across multiple systems using APIs and integrations.

The key issue is delegation. You’re assigning tasks that were previously performed (and judged) by people.
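
To make the delegation point concrete, here is a minimal sketch contrasting the two patterns. It is illustrative only: the generate_draft and send_email helpers are hypothetical stand-ins, not any vendor's actual API.

```python
# Hypothetical helpers for illustration only; not a real vendor API.

def generate_draft(prompt: str) -> str:
    """Assistive AI: returns text for a human to review."""
    return f"[draft based on: {prompt}]"  # stand-in for a model call

def send_email(to: str, body: str) -> None:
    """Stand-in for an outbound email integration."""
    print(f"SENT to {to}: {body}")

# Assistive pattern: a person inspects the draft before anything goes out.
draft = generate_draft("renewal reminder")
# ...a human edits, approves, and sends manually...

# Agent pattern: the same model output is executed directly, with no human step.
send_email("customer@example.com", generate_draft("renewal reminder"))
```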

What Business Risks Come with AI Agents Taking Action?

When AI takes action, small errors can scale quickly. A poorly generated email is one thing; an AI sending that email to 5,000 customers is another.

Businesses should consider where automation creates operational risk:

  • Customer communications that could misstate terms or commitments.

  • Transactions or system changes that affect billing, access, or data.

  • External interactions that impact brand perception or contractual relationships.

A governance issue also arises: who is accountable when the AI makes a decision that causes harm? Internally, that needs a clear answer before deployment.

What Legal and IP Risks Do AI Agents Create?

Autonomous actions can create unintended legal exposure. For example:

  • An AI agent could reuse copyrighted language or third-party content in outbound communications.

  • It might make statements that conflict with your terms of service or negotiated agreements.

  • It could disclose confidential or proprietary information if guardrails are weak.

Ownership is also worth considering. If an AI agent generates and sends materials automatically, you still need clarity on whether your organization owns the outputs and whether any platform terms impose license-back rights.

A practical approach is to treat AI-generated actions the same way you would treat actions by an employee: subject to policies, approvals, and accountability.
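One way to operationalize that analogy is a pre-send policy gate: routine messages go out, while anything touching confidentiality or contract-like commitments is escalated to a person. The sketch below is illustrative only; the term lists are placeholders your legal team would define, and keyword matching alone is not a reliable filter.

```python
# Illustrative pre-send policy gate; the term lists are assumptions,
# not a complete or reliable screen.

CONFIDENTIAL_MARKERS = ["internal only", "attorney-client", "do not distribute"]
COMMITMENT_PHRASES = ["we guarantee", "full refund", "in perpetuity"]

def requires_human_approval(message: str) -> bool:
    """Flag outbound text touching confidentiality or contract-like commitments."""
    text = message.lower()
    return any(term in text for term in CONFIDENTIAL_MARKERS + COMMITMENT_PHRASES)

def dispatch(message: str, send, approval_queue: list) -> str:
    """Send routine messages; escalate sensitive ones, as an employee would."""
    if requires_human_approval(message):
        approval_queue.append(message)
        return "escalated"
    send(message)
    return "sent"
```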

What Data, Privacy, and Security Risks Come with AI Agents?

AI agents often require broader system access, creating expanded attack surfaces and data exposure risks. Key questions include:

  • What data can the agent access, and is that access limited appropriately? (A least-privilege sketch follows this list.)

  • Are prompts or instructions exposing sensitive information?

  • How are logs, outputs, and actions stored—and who can audit them?
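
The first question above is, at bottom, a least-privilege problem: deny by default and grant each agent only what its task requires. The sketch below is one way to frame it; the agent names, resources, and scope structure are assumptions, not a real access-control product.

```python
# Hypothetical least-privilege scoping; names are illustrative.

AGENT_SCOPES = {
    "support-agent": {"read": {"tickets", "kb_articles"}, "write": {"tickets"}},
    "billing-agent": {"read": {"invoices"}, "write": set()},
}

def check_access(agent: str, action: str, resource: str) -> bool:
    """Deny by default; allow only what was explicitly granted to this agent."""
    scopes = AGENT_SCOPES.get(agent, {})
    return resource in scopes.get(action, set())

assert check_access("support-agent", "write", "tickets")
assert not check_access("support-agent", "read", "invoices")  # never granted
```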

Prompt injection and manipulation are a growing concern. If an external party can influence how your agent behaves (for example, through customer inputs), that is both a security risk and an operational risk.
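
A useful discipline is to treat customer input as data, never as instructions. The screen below is a sketch under that assumption; the pattern list is hypothetical and easily evaded, so treat it as one layer among several (tool allowlists, isolation, human review), not a complete defense.

```python
import re

# Crude screen for instruction-like content in untrusted input.
# Pattern lists like this are easy to evade; this is one layer only.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now",
    r"system prompt",
]

def screen_untrusted_input(text: str) -> str:
    """Flag customer input that appears to address the agent itself."""
    for pattern in SUSPICIOUS_PATTERNS:
        if re.search(pattern, text, flags=re.IGNORECASE):
            raise ValueError("possible prompt injection; route to a human")
    return text
```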

How Should I Safely Scale AI Agents in My Business?

Start with controlled use cases; some functions can operate safely with limited autonomy. Businesses should consider:

  • Setting clear thresholds for when human review or approval is required.

  • Limiting AI authority in customer-facing or legally sensitive areas.

  • Implementing audit logs and monitoring for all agent activity.

  • Aligning AI actions with existing policies (communications, brand, legal, security).

  • Training teams on what AI agents can and cannot do.

Think in terms of “progressive autonomy.” Expand capabilities as confidence and controls improve.
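
As a sketch of what progressive autonomy can look like in practice, the code below combines the review thresholds and audit logging from the list above. The tiers, risk labels, and log format are all assumptions to calibrate against your own risk categories.

```python
import json
import time

# Hypothetical autonomy tiers; calibrate to your own risk categories.
DRAFT_ONLY, ACT_WITH_REVIEW, ACT_FREELY = 0, 1, 2

def execute(action, description: str, risk: str, level: int,
            log_path: str = "agent_audit.log") -> str:
    """Run, queue, or hold an agent action based on its autonomy level,
    writing every decision to an append-only audit log."""
    needs_human = level == DRAFT_ONLY or (
        level == ACT_WITH_REVIEW and risk != "low"
    )
    entry = {"ts": time.time(), "action": description, "risk": risk,
             "routed_to_human": needs_human}
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")
    if needs_human:
        return "queued_for_approval"
    action()  # e.g., send the email, update the record, trigger the workflow
    return "executed"
```

An agent might start at DRAFT_ONLY for customer-facing work and graduate to ACT_WITH_REVIEW only after the audit log shows a sustained low error rate.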

AI agents can drive meaningful efficiency, but they shift your risk from content quality to operational execution. The central question is what you allow AI to do without oversight.

The businesses that get this right will treat AI agents like junior team members who require structure, supervision, and clear boundaries.

— Will & AiME

Three Takeaways:

  1. AI agents introduce execution risk, not just content risk. Govern accordingly.

  2. Treat AI actions like employee actions, with policies, oversight, and accountability.

  3. Start narrow, monitor closely, and expand autonomy deliberately.


Will Schultz & AiME

Will Schultz is an intellectual property and technology attorney and chair of Merchant & Gould’s Internet, Cybersecurity, and E-Commerce practice. He advises businesses on AI, online platforms, digital assets, and emerging technology law, drawing on experience as both a lawyer and entrepreneur.

https://www.merchantgould.com/people/william-d-schultz/