Beyond Human-in-the-Loop: How AI Oversight Is Evolving for Law Firms in 2026
From Co-Intelligence to Overseeing Agents
Most law firms have the same instinct when it comes to AI: a lawyer should review everything AI produces before it reaches a client. That instinct is sound. Professional responsibility, client confidentiality and the reputational risk of getting something wrong all point in the same direction: keep a human firmly in control. And for many tasks, that remains exactly the right approach.
But as AI becomes more capable, applying the same level of oversight to every task, from high-stakes client advice to routine document filing, will become neither practical nor necessary as firms work through their roadmap to adopt more AI tools with confidence.
Ethan Mollick, Wharton professor and author of Co-Intelligence, coined the term to describe how humans and AI work together: prompting back and forth to develop and review outputs, each building on the other's strengths. We adopted that model from the outset in our work on AI, and it remains the right starting point for tasks involving client advice, legal judgement and professional responsibility.
But the landscape is shifting — remarkably quickly.
As Mollick himself wrote in March 2026: "After ChatGPT was introduced, human-AI work took the form of what I called co-intelligence, where humans would prompt AI back-and-forth to get help on tasks. Starting in late 2025, we entered a new era thanks to AI agents. This is an era of managing AIs, rather than working with them."
AI agents — systems that can plan, research, draft, check their own work and take action with minimal human direction — are becoming practical. They can handle multi-step tasks that previously required constant supervision. This doesn't mean human oversight disappears. It means the nature of that oversight needs to vary depending on the task.
Four Levels of Review
Rather than treating human oversight as a single standard that applies to everything, we think about it as a spectrum with four levels.
- Level 1: Human-led, AI-assisted. The lawyer does the work. AI provides research summaries, drafting suggestions or precedent identification. The lawyer reviews, edits and takes responsibility for every output. This is co-intelligence in its purest form — and it's the right model for high-judgement, high-risk work. Most firms are here today.
- Level 2: AI-led, human-supervised. AI executes a defined workflow. The lawyer supervises — reviewing outputs at agreed checkpoints and sampling for quality rather than checking every item. Think bulk document review where AI classifies documents and a lawyer samples a defined percentage. Human judgement remains central, but the role shifts from doing the work to overseeing it.
- Level 3: AI-managed, human-governed. AI runs an end-to-end workflow within agreed boundaries. Humans are involved at the governance level — setting the rules, monitoring performance and reviewing exceptions. The human-in-the-loop principle still applies, but the loop is wider.
- Level 4: Agentic AI — autonomous within boundaries. AI agents plan and execute multi-step tasks independently within defined guardrails. An agent might research a legal question, draft a summary, cross-check sources and present findings for lawyer review — without human involvement at each step.
Where law firms should be now
Most firms are at Level 1. Some are moving into Level 2. That's entirely appropriate.
But the firms that will be best positioned as AI matures are those building their operating model now with the full spectrum in mind. Not because they need to rush to Level 4, but because having the structure in place — governance, training, quality assurance, accountability at each level — means they can move up deliberately as confidence grows.
The practical question for managing partners
Does your firm have a way to decide which level of oversight each task needs? And is that decision documented, governed and understood across the firm?
If the answer is "everyone just uses their own judgement", then you have the same problem as most firms — inconsistency, unmanaged risk and no defensible position when clients, regulators or insurers ask how you manage AI.
Our Smarter Technology and AI Adoption Roadmap addresses this through three components:
- An Adoption Roadmap that translates all of this into clear, practical guidance — and a structured path from experimentation to firm-wide deployment.
- An AI operating model that defines oversight levels for different tasks — matched to the risk and complexity of each use case, not one-size-fits-all.
- A traffic light system that classifies tools and use cases by readiness — GREEN for confident deployment, AMBER for structured pilots, GREY for personal use only, RED for not yet ready. As AI agents become more capable, new use cases enter the system and the traffic light approach ensures each one is assessed at the right oversight level.
The pace of change in AI is accelerating. Firms that build this foundation now will be ready to take advantage of what comes next, while maintaining the professional standards and governance that clients expect.
Where does your firm stand?
Our free self-assessment at assess.cartonconsultants.com takes under 15 minutes and gives you a question-by-question breakdown of your firm's readiness — covering data, security, governance, training and more. It's a practical starting point for the conversation.
If you'd prefer to talk it through, I'm happy to have an initial conversation — no obligation.
📞 07779 653105
📧 acarton@cartonconsultants.com
📅 Book a call here >>
Allan Carton is Lead Consultant at Carton & Co, helping law firms modernise and grow through smarter technology, stronger client relationships and practical business development. A qualified solicitor with an MBA from Alliance Manchester Business School, Allan has advised law firms since 1990 on technology selection and adoption, client relationships and business development.