What AI Tools Are Allowed at Work? A Complete Guide

Last updated: April 2026

Quick answer: The most commonly allowed AI tools in the workplace are Microsoft Copilot (integrated into M365), GitHub Copilot for developers, and — in many technology companies — ChatGPT. However, policies vary enormously by employer and industry.

The landscape of AI tools at work in 2026 is far from uniform. Some employers actively encourage AI adoption and pay for premium subscriptions; others have implemented strict blanket bans. Understanding where the lines are drawn can help you evaluate job offers and plan your own productivity.

The Main Categories of Workplace AI Policy

| Policy Type | What it Means | Common in |
| --- | --- | --- |
| Allowed (free use) | Employees may use AI tools on company devices without restriction | Tech companies, start-ups, creative agencies |
| Allowed with conditions | AI use is permitted but subject to guidelines (no client data, review outputs, etc.) | Consulting, media, professional services |
| Paid by company | Employer provides and pays for AI tool subscriptions | Forward-thinking tech and product companies |
| Blocked | AI tools are restricted or prohibited on company infrastructure | Finance, law, healthcare, government |
| In-house tools only | Only proprietary or vendor-approved AI tools are permitted | Large enterprises with their own AI products |

ChatGPT

ChatGPT remains the most discussed AI tool in workplace policy conversations. Many technology companies allow it freely; financial services firms and law firms frequently block it. Some organisations that initially banned ChatGPT have since developed internal guidelines that permit its use with caveats — for example, no uploading of client data.

Microsoft Copilot

Microsoft Copilot has a unique advantage in the workplace: because it is deeply integrated into Microsoft 365, many organisations that block standalone AI tools still allow Copilot within their existing software environment. Microsoft's enterprise data privacy commitments have helped convince cautious IT and legal teams. In our data, Copilot has a notably higher allowance rate than ChatGPT.

GitHub Copilot

For developers, the most relevant question is often whether GitHub Copilot is allowed. Many technology employers not only permit GitHub Copilot but pay for licences as a standard developer benefit. Even some organisations that restrict general AI tools for non-technical staff allow Copilot specifically for their engineering teams.

Claude (Anthropic)

Claude's workplace adoption tends to mirror ChatGPT's: allowed in permissive organisations and blocked in restrictive ones. Some companies that block ChatGPT have not explicitly addressed Claude, creating a grey area. Our data suggests Claude is less frequently mentioned in company policies than ChatGPT, though this is changing.

Gemini (Google)

Google Gemini, particularly as integrated into Google Workspace, is gaining traction in organisations that use Google's suite of tools. Like Copilot, its integration into existing trusted infrastructure gives it a smoother path through corporate approval processes.

Specialist AI Tools

Beyond the general-purpose assistants, specialist AI tools — for legal research, financial modelling, medical documentation, and so on — are often treated differently in workplace policies. A law firm may block ChatGPT whilst permitting a specialist legal AI that has passed its vendor assessment process.


Frequently Asked Questions

Which AI tool is most commonly allowed at work?

Based on ChatBlocked.ai data, Microsoft Copilot has the highest allowance rate in enterprise environments, largely because it integrates with existing Microsoft 365 infrastructure. GitHub Copilot leads among technology and developer teams.

Can I use personal AI subscriptions on company devices?

This depends entirely on your employer's acceptable-use policy. Many organisations that restrict AI tools do so at the network or device level, meaning personal subscriptions may still be blocked. Others only restrict use of client or company data, not the tools themselves.
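Network-level restrictions typically work by matching outbound requests against a denylist of known AI tool domains, which is why a personal subscription makes no difference on a managed network. The sketch below is a minimal, hypothetical illustration of that matching logic; the domain list is purely illustrative and does not reflect any particular company's policy.

```python
# Hypothetical sketch of a network-level denylist check.
# The domains below are examples only, not a real corporate blocklist.
BLOCKED_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com"}

def is_blocked(hostname: str) -> bool:
    """Return True if the hostname or any parent domain is on the denylist."""
    parts = hostname.lower().split(".")
    # Check the full hostname and every parent suffix, so subdomains
    # (e.g. eu.chat.openai.com) are caught by a chat.openai.com entry.
    return any(".".join(parts[i:]) in BLOCKED_DOMAINS for i in range(len(parts)))

print(is_blocked("chat.openai.com"))    # True
print(is_blocked("docs.microsoft.com")) # False
```

Because the match happens at the network edge rather than in the browser, logging in with a personal account does not bypass it.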

How do companies decide which AI tools to allow?

Typically through a vendor assessment process led by IT security and legal teams. They evaluate data privacy terms, security certifications, data residency (particularly important for EU/UK compliance), and whether an enterprise agreement can be negotiated.
