May 7, 2026

Liam Weedon, founder of GTM Layer

How AI Agents Run a Two-Person RevOps Consultancy


We are a two-person RevOps consultancy running multiple client engagements simultaneously. That should not work. The operational overhead of managing projects, maintaining documentation, tracking deliverables, syncing client portals, writing status updates, and keeping data consistent across tools should eat most of our capacity before we even start doing actual client work.

It does not, because AI agents handle most of it.

This is not a hypothetical "here is how AI could help" post. This is what actually runs, every day, in our business. The specific agents, the specific workflows, and what they do that a human used to do.

What the agents actually do

We run AI agents across every layer of operations. Not a single monolithic AI, but purpose-built agents that handle specific operational tasks. Each one connects to our tools via MCP, reads the context it needs, and acts.
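
To make that concrete, here is a minimal sketch of the shape every agent shares. The names are illustrative, not our actual code, and the tool connections are reduced to plain callables rather than real MCP sessions, but the loop is the same: read context, then act.

```python
from dataclasses import dataclass
from typing import Callable

# Illustrative agent shape. In practice the read/act sides talk to MCP
# servers (tracker, portal, email); here they are stand-in callables so
# the sketch stays self-contained and runnable.

@dataclass
class Agent:
    name: str
    read_context: Callable[[], dict]   # pull state from connected tools
    act: Callable[[dict], None]        # write results back out

    def run(self) -> None:
        context = self.read_context()  # e.g. issues changed since last run
        if context:                    # only act when there is new work
            self.act(context)
```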

Daily project updates. Every evening, an agent scans our project management tool for status changes across all client projects. It reads the issues that moved, the comments that were added, and the completion context. Then it writes a per-client status update and posts it to each client's portal. No human writes these updates. The agent pulls from the work tracking data and writes in a consistent format.
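
A hedged sketch of that evening pass, with hypothetical helpers (fetch_changed_issues, draft_update, post_to_portal) standing in for the tracker read, the model call, and the portal integration:

```python
import datetime

# Hypothetical helpers are passed in: fetch_changed_issues reads the
# tracker, draft_update is the model call that writes in our fixed
# format, and post_to_portal pushes to the client's portal.

def nightly_status_updates(clients, fetch_changed_issues,
                           draft_update, post_to_portal):
    since = datetime.datetime.now() - datetime.timedelta(days=1)
    for client in clients:
        changes = fetch_changed_issues(client, since=since)
        if not changes:
            continue                            # nothing moved today
        update = draft_update(client, changes)  # per-client status update
        post_to_portal(client, update)          # no human writes these
```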

Meeting-to-action pipeline. After every client call, the AI meeting recorder generates a transcript. An agent picks up the transcript, extracts action items, creates properly structured issues in our project tracker with full context, and syncs those issues to the client's portal. The gap between "we discussed this on the call" and "it is tracked as a task" is minutes, not days.
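
Sketched in the same style; all three integration points (extract_action_items, create_issue, sync_to_portal) are illustrative placeholders, not real APIs:

```python
# Transcript in, tracked issues out. extract_action_items is the model
# call; create_issue and sync_to_portal stand in for the tracker and
# portal integrations.

def transcript_to_tasks(transcript, client,
                        extract_action_items, create_issue, sync_to_portal):
    items = extract_action_items(transcript)  # structured action items
    for item in items:
        issue = create_issue(
            client=client,
            title=item["title"],
            description=item["context"],       # full context from the call
            assignee=item.get("owner"),
        )
        sync_to_portal(client, issue)          # visible to the client in minutes
    return len(items)
```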

Email triage to tasks. An agent scans incoming and sent emails nightly, identifies anything that implies a client action item, and creates tasks with the email context attached. Things that would otherwise live in someone's inbox as a mental note become tracked work.
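
Roughly, with classify_email and create_task as hypothetical stand-ins for the model call and the tracker:

```python
# One nightly triage pass over the last day's incoming and sent mail.
# classify_email returns an action item or None; create_task writes to
# the tracker. Both names are illustrative.

def triage_emails(messages, classify_email, create_task):
    created = 0
    for msg in messages:
        item = classify_email(msg)             # None if no action implied
        if item is None:
            continue
        create_task(
            title=item["title"],
            context=f"From email: {msg['subject']}",  # email kept as context
            source_id=msg["id"],               # lets reruns dedupe
        )
        created += 1
    return created
```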

Documentation maintenance. When a client's architecture changes, when a schema is updated, when a workflow is modified, the agent that completed the work also updates the relevant documentation. Decision logs, schema references, workflow docs. The documentation stays current because updating it is not a separate task.
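
The mechanism is as simple as it sounds: the doc update is just another step in the agent's run. A minimal sketch, with an illustrative path and entry format:

```python
import datetime

# Appending a decision-log entry is part of the same agent run that
# completed the work, not a separate task a human can deprioritise.
# The file path and entry format here are assumptions.

def log_decision(docs_path: str, summary: str, rationale: str) -> None:
    stamp = datetime.date.today().isoformat()
    entry = f"## {stamp}: {summary}\n\n{rationale}\n\n"
    with open(docs_path, "a", encoding="utf-8") as f:
        f.write(entry)
```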

Client portal sync. Our project tracker and client-facing Notion portals stay in sync automatically. When an issue status changes, the client portal reflects it. When a deliverable is completed, the completion context appears in the portal. Clients see real-time progress without us manually updating two systems.
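
The sync is event-driven rather than polled. A hedged sketch of the handler, assuming a webhook payload from the tracker and a hypothetical update_portal_page for the Notion side:

```python
# Fired by a tracker webhook on every issue update. update_portal_page
# is a placeholder for the Notion portal call; the payload shape is an
# assumption, not a real API.

def on_issue_updated(event: dict, update_portal_page) -> None:
    issue = event["issue"]
    update_portal_page(
        client=issue["client"],
        issue_id=issue["id"],
        status=issue["status"],                # portal mirrors the tracker
        note=issue.get("completion_note"),     # completion context, if any
    )
```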

Enrichment and data operations. Client enrichment workflows run through our operational data store with a caching layer that prevents duplicate spend. The agent manages cache lookups, triggers fresh enrichment when TTLs expire, and routes results to the right client tables.
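
The caching logic is the part worth showing. A minimal sketch, assuming a 30-day TTL and placeholder calls for the data store and the enrichment vendor:

```python
import time

# Check the cache before spending. A stored record is reused until its
# TTL expires; only then does the paid enrichment call fire. The 30-day
# TTL and the cache_get/cache_put/enrich_fresh names are assumptions.

TTL_SECONDS = 30 * 24 * 3600

def enrich_with_cache(key, cache_get, cache_put, enrich_fresh):
    record = cache_get(key)
    if record and time.time() - record["fetched_at"] < TTL_SECONDS:
        return record["data"]                  # cache hit: no duplicate spend
    data = enrich_fresh(key)                   # paid vendor call
    cache_put(key, {"data": data, "fetched_at": time.time()})
    return data
```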

What this replaces

In a traditional consultancy, the work these agents handle would require dedicated operational roles. Not because the work is hard. Because the volume is relentless.

Project management overhead is the biggest one. Creating issues, updating statuses, writing completion notes, syncing between internal and client-facing systems. In a traditional setup, that is 1-2 hours per day per project manager. We run multiple projects with zero hours spent on project management admin because the agents handle it.

Status updates and reporting used to be a weekly chore. Writing client-facing updates for each engagement, summarising what moved, what is blocked, and what is coming next. Each update took 15-20 minutes to write well. With 4-5 active clients, that is roughly an hour and a half per week of pure writing. The agent does it daily, not weekly, and it takes zero human time.

Documentation was the thing that always slipped. Everyone knows they should update the docs. Nobody does, because the actual work always takes priority. The agent does not have this problem. It does not deprioritise documentation because it does not have competing priorities. When the work is done, the docs get updated. Every time.

Email follow-up tracking is the one most people do not think about. How many action items live in your inbox right now that should be tracked tasks? The email triage agent catches these and turns them into actual work items. The gap between "I should do that" and "that is tracked" disappears.

What the agents cannot do

This is the important part. The agents handle operational overhead. They do not handle the work itself.

Strategic decisions. When a client asks "should we restructure our sales process or invest in outbound first?" the agent can surface the data, pull the enrichment stats, and show the pipeline analysis. But the recommendation is a human call. It requires judgement about the client's stage, their team, their market, and a dozen other factors the agent does not weigh.

Client relationships. The agents keep the portal updated and the tasks tracked. The actual relationship, the trust, the reading-the-room in a call, the knowing-when-to-push-and-when-to-hold, that is human. Always will be.

Novel problem-solving. The first time we encounter a problem we have never seen before, the agent is no more useful than a blank page. It can search our documentation for related patterns, but if the pattern does not exist yet, a human has to figure it out. Once we solve it and document the solution, the agent can apply that pattern next time.

Scope and priority calls. When two things are urgent and we can only do one, the agent cannot make that call. It can surface the options, show the dependencies, and highlight the consequences of each path. The decision is human.

The pattern is clear: agents handle anything that is repeatable and context-dependent but does not require judgement. Humans handle anything that requires judgement, trust, or genuine novelty. The job shifts from "doing everything" to "doing the things only a human can do."

[Image: two-column comparison showing what humans handle versus what AI agents handle in a RevOps consultancy]

How this compounds over time

The agents get better the longer they run, and it is not because of model improvements. It is because of context accumulation.

Every client engagement adds documentation to our knowledge base. Every completed project adds patterns the agent can reference. Every decision log entry teaches the agent about how we think about problems. After twelve months of running this way, the agent's output quality is noticeably different from month one, because it has twelve months of documented context to draw from.

Specific examples of how this compounds:

Task templating. After creating hundreds of project issues, the agent learned our structure. When it creates a new issue for a HubSpot workflow build, it automatically includes the sections we always include: context, acceptance criteria, dependencies, testing approach. It learned this from the existing issues, not from explicit instruction.
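
Rendered as a template, the learned structure looks something like this. The section names come from the post; the exact wording is a reconstruction, not our actual template:

```python
# The structure the agent converged on after hundreds of issues,
# expressed as a template. It learned this from existing issues, not
# from an explicit prompt.

ISSUE_TEMPLATE = """\
## Context
{context}

## Acceptance criteria
{acceptance_criteria}

## Dependencies
{dependencies}

## Testing approach
{testing_approach}
"""

def render_issue(**sections) -> str:
    return ISSUE_TEMPLATE.format(**sections)
```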

Status update quality. Early status updates were functional but generic. After months of writing them against increasingly rich project data, the updates now include relevant context like "this connects to the enrichment cache work completed last sprint" because the agent can see the full project history.

Cross-client pattern recognition. When we start a new client engagement that looks similar to a previous one, the agent can surface the relevant documentation, the architecture decisions we made, and the problems we ran into. It does not just search. It connects the current context to historical patterns because it has read and written the documentation for both.

The economics

The cost of running these agents is negligible compared to the alternative.

AI subscriptions, managed database hosting, and the project management tools together cost less than a single part-time hire. The operational capacity they add is equivalent to 2-3 full-time operational roles: a project manager, a documentation specialist, and a data operations coordinator.

This is not about replacing people. It is about a two-person team being able to run a consultancy without hiring an operations team. The humans focus on strategy, client delivery, and the work that actually requires expertise. The agents handle the operational tax that would otherwise consume half our time.

The alternative is clear: either hire to handle the overhead (expensive, and you need enough revenue to justify it), or let the overhead consume your delivery capacity (unsustainable). Agents are the third option that makes a small team viable at scale.

Where to start if you want to build this

You do not need all of these agents on day one. Start with the one that handles your biggest time sink.

For most small teams, that is the meeting-to-action pipeline. Connect your AI meeting recorder to your project tracker through an agent. Every call generates tasks automatically. You will feel the time savings within a week.
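
The wiring can be as simple as a polling loop or a cron job around the transcript pipeline sketched earlier; fetch_new_transcripts is a placeholder for whatever your recorder exposes:

```python
import time

# Minimal wiring for the first agent: poll the recorder for new
# transcripts and push each through the meeting-to-action pipeline.
# fetch_new_transcripts and process_transcript are placeholders; a
# cron job works just as well as this loop.

def run_meeting_pipeline(fetch_new_transcripts, process_transcript,
                         interval_seconds=300):
    while True:
        for transcript, client in fetch_new_transcripts():
            process_transcript(transcript, client)
        time.sleep(interval_seconds)
```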

Next, add the client portal sync. Keeping internal and external systems in sync manually is a tax that scales with the number of clients. Automate it and that tax disappears.

Then add the nightly status updates. They are the first thing to slip when you get busy, and the thing clients value most. An agent that writes them daily, without fail, immediately improves your client experience.

Build incrementally. Each agent you add frees up time that you can reinvest in the next one. The flywheel described in the AI-first operations guide applies here: build, use, learn, improve, repeat.

This post is part of a series on building AI-first operations. Related: What MCP Actually Means for Business Operations, Why We Built an Operational Data Store Instead of Making HubSpot Do Everything.