We’re entering a new phase of AI use, in which employees will increasingly build their own AI agents and workflow assistants. Take the technology investment group Prosus, for example: its Global Head of AI states that 90 per cent of its 4,000 AI agents have been built by staff members.
DIY AI can have an enormous impact. Consider the productivity gains from each employee building small agents that save an hour or two per week.
But before everyone starts building their own agents and your organisation becomes the AI Wild West:
- The organisation needs sensible controls in place,
- Employees need access to approved tools, and
- Employees need the knowledge and inspiration to use them.
Inaction won’t stop your teams developing their own agents, but it will put your business at risk. We outline the steps to take to get started.
Start by agreeing the agent parameters
Your employees are already experimenting with DIY agents, so setting ground rules should be a priority. This isn’t about stifling innovation, but ensuring that people don’t inadvertently create security, compliance, or data governance risks.
Create a simple, short ‘Responsible AI Agent Policy’ that clearly defines what employees can and cannot do. The goal is data protection and regulatory compliance.
The policy is likely to permit internal productivity agents, workflow automations, knowledge assistants, and data summarisation, while forbidding activities that risk data or governance breaches.
Restrict:
- Uploading of confidential data into unapproved tools
- Use of external AI services that sit outside the company tenant
- Agents that interact with customer data (without approval)
Support this with a simple approval process, with the goal of giving you oversight of anything that has wider impact:
- Own-use, personal productivity agents: no approval required
- Agents used more widely, for example by other team members on a shared workflow: manager approval
- Agents with cross-department or customer-facing impact: IT or governance approval
This should prevent chaos without blocking safe experimentation.
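The tiered approval logic above can even be written down as a few lines of pseudologic. The sketch below is purely illustrative, assuming three hypothetical scope levels; it is not a real workflow tool or API.

```python
# Illustrative sketch of the tiered approval rules described above.
# The scope names and function are hypothetical, not a real workflow API.

def required_approval(scope: str, touches_customer_data: bool = False) -> str:
    """Return the approval an agent needs, based on how widely it will be used."""
    if touches_customer_data or scope == "cross-department":
        return "IT or governance approval"
    if scope == "team":          # shared workflow used by other team members
        return "manager approval"
    if scope == "personal":      # own-use productivity agent
        return "no approval required"
    raise ValueError(f"unknown scope: {scope}")

print(required_approval("personal"))   # no approval required
print(required_approval("team"))       # manager approval
print(required_approval("personal", touches_customer_data=True))  # IT or governance approval
```

Note that customer data always escalates to governance approval, regardless of how narrowly the agent is scoped, matching the restriction above.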
Review data governance
Alongside this, it’s important that the organisation has the right data governance tools in place.
No one should be able to create an agent that can access data that they aren’t allowed to see, so use Microsoft Purview to review and, if necessary, rectify:
- Data classifications
- Use of sensitivity labels
- Data loss prevention (DLP) policies
Provide user-friendly AI tools
Perhaps the greatest risk is employees quietly using unapproved AI tools.
Part of the solution is to provide ready access to approved tools, along with a safe, sandboxed environment for experimentation, before employees go elsewhere.
Typically, this means using:
- Microsoft 365 Copilot – this is the AI assistant all licensed employees can use, supported by Copilot Chat for non-licensed employees
- Microsoft Copilot Studio – the separately licensed platform for building AI agents, and
- Microsoft Power Automate – for automation
This will help to ensure that:
- Data stays inside your Microsoft tenant
- Security policies are applied
- Access controls are enforced
- Audit logs are available
While there are other tools available, working within the Microsoft AI stack will enable you to make the most of your M365 foundation. This may not be everybody’s first choice, however, so adding a security layer with tools from Microsoft Purview is crucial to ensure employees don’t stray from the approved platform.
Microsoft 365 Copilot and Copilot Studio and how they’re licensed
Microsoft 365 Copilot is the AI capability embedded inside Microsoft 365 apps such as Word, Excel, Outlook, PowerPoint, and Teams. It is not included in your Microsoft 365 subscription, however, and requires a separate, paid add-on licence. This enables organisations to deploy Copilot to specific users, test AI adoption, and control costs.
Microsoft 365 Copilot enables employees to:
- Summarise documents
- Draft emails
- Analyse spreadsheets
- Summarise meetings
- Answer questions using company data
Copilot Studio is a development environment for creating AI agents, which can:
- Answer questions from company knowledge bases
- Automate workflows
- Interact with systems
- Perform multi-step tasks
These can be deployed in Microsoft Teams, Microsoft 365 Copilot chat, and SharePoint. By buying Microsoft 365 Copilot you automatically get these ‘basic’ Copilot Studio capabilities for internal use.
However, a separate Copilot Studio licence, or Copilot Credits, is required to:
- Publish agents to websites
- Create customer-facing bots
- Allow external users to interact with agents
- Run large-scale autonomous agents
Copilot Studio follows a usage-based subscription model. This operates tenant-wide, with capacity purchased either through credit capacity packs or via pay-as-you-go billing, in the form of Copilot Credits.
Training on AI agents
Once the governance and tools are in place the focus shifts to education and inspiration.
Employees rarely build useful tools until they understand the possibilities. As with any new initiative, people will range from the enthusiastic to the resistant – and many will need to see coworkers’ successes before they want to get involved.
As a result, some organisations will take a phased approach to training. Start with a cohort of early adopters, perhaps self-selected, before working through other groups.
An AI training and enablement programme might include:
- Internal workshops
This could include an explanation of AI agents, how to build simple agents, and security and governance considerations. Seeing real examples helps unlock ideas, so provide practical examples such as meeting summarisation, policy Q&A assistants, sales proposal drafting agents, and service desk ticket triaging.
- Starter toolkits
Remove the ‘blank page’ problem by giving employees:
  - Pre-built agent templates
  - Example prompts
  - Sample Power Automate flows
  - Documentation on best practices
- Encouraging experimentation
It’s important to support this with a culture of experimentation, so consider recognition for successful agents, innovation competitions and even ‘AI Agent hack days’.
A practical five-step framework for employees
Provide employees with a process they can follow, like the following.
1. Identify a real issue
The best agents simplify tasks that consistently take time, such as summarising meetings, triaging requests and complaints, or searching for policies.
2. Start small
Build simple agents first, like a document summariser. As experience builds so too can complexity.
3. Be data confident
Agents are only as good as the data they’re working with, so encourage employees to:
- Take responsibility for data quality
- Use verified internal sources (eg knowledge bases, and SharePoint libraries)
- Avoid unapproved external sources
- Never use sensitive data without permission
If building the right data culture is becoming one of your roadblocks to agents, we’ve written a practical guide to building a data culture that can help.
4. Give agents clear parameters
Agents should know:
- What data they use
- What they can and cannot do
- When human review is required
For example: ‘This agent provides draft responses, which should be reviewed before being sent externally.’
This helps both the human, by discouraging over-trust in AI output, and the agent, by keeping it within a clear scope.
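One lightweight way to capture these parameters is a small manifest that travels with the agent. The sketch below is a hypothetical illustration, assuming made-up field names; a real manifest would follow whatever schema your agent platform uses.

```python
# Hypothetical agent manifest capturing the three parameters above:
# what data the agent uses, what it may do, and when a human must review.

from dataclasses import dataclass, field

@dataclass
class AgentParameters:
    name: str
    data_sources: list = field(default_factory=list)    # approved sources only
    allowed_actions: list = field(default_factory=list)
    human_review_before_external_use: bool = True       # safe default

draft_responder = AgentParameters(
    name="Draft response agent",
    data_sources=["internal knowledge base", "SharePoint policy library"],
    allowed_actions=["draft reply"],                    # drafts only; never sends
    human_review_before_external_use=True,
)

print(draft_responder.human_review_before_external_use)  # True
```

Defaulting `human_review_before_external_use` to `True` means a builder has to opt out of review deliberately, rather than forget to opt in.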
5. Iterate
Agent output improves through iteration and learning. Realistically the first version will not be perfect so look for improvements: collect feedback, look for errors, and enhance prompts and logic.
Provide a set of simple rules that support security and compliance needs
Few people read, or remember, long policy documents so provide employees with a set of AI safety rules they can follow. For example:
- Don’t expose sensitive data outside approved systems.
- Agents should only access data you are authorised to see.
- You’re responsible for your agent’s output.
- AI output must be reviewed before external use.
- Automation interacting with systems requires approval.
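Rules this short can also be enforced mechanically before an agent is published. The sketch below is a hypothetical pre-publication check with illustrative field names; a real check would query your tenant’s actual policies.

```python
# Hypothetical pre-publication check mirroring the safety rules above.
# Field names are illustrative, not part of any real agent platform.

def rule_violations(agent: dict) -> list:
    """Return the safety rules an agent description would break."""
    violations = []
    if agent.get("sends_data_outside_approved_systems"):
        violations.append("Don't expose sensitive data outside approved systems.")
    if not agent.get("data_within_builder_permissions", True):
        violations.append("Agents should only access data you are authorised to see.")
    if agent.get("external_use") and not agent.get("output_reviewed"):
        violations.append("AI output must be reviewed before external use.")
    if agent.get("interacts_with_systems") and not agent.get("approved"):
        violations.append("Automation interacting with systems requires approval.")
    return violations

risky = {"external_use": True, "output_reviewed": False}
print(rule_violations(risky))  # ['AI output must be reviewed before external use.']
```

An empty list means the agent passes; anything else tells the builder exactly which rule to fix before publishing.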
Engineering success
The biggest risk is not employees building AI agents, but them doing so insecurely with unauthorised tools.
Successful organisations are moving quickly to get ahead of this. They are:
- Ensuring that the right governance and security is in place
- Providing the right tools
- Enabling their use with training
- Building a culture of responsible experimentation
Cloud Direct can help you build a secure and governed foundation to implement AI agents within your enterprise. Learn more about Cloud Direct’s services, or request a call with a subject matter expert using the form below.