A bulleted list of "things we could automate" is a brainstorm with a professional label. A real automation audit produces a deliverable with scores, recommendations, and a build order that someone can act on without further meetings.

This article defines what a good automation audit covers, what the output should look like, and how to tell the difference between a thorough audit and a surface-level one, whether you are doing the audit yourself or hiring someone to do it.

Phase Zero: Process Discovery Before Scoring

Before you can score workflows, you need to know what the workflows actually are. This is the step most audits skip, and it is the reason most audits produce recommendations that break on contact with reality.

Process discovery is the practice of mapping how work actually moves through a team, as opposed to how the SOP says it should move. The gap between those two is where automation projects fail. A workflow diagram drawn from memory in a meeting room captures the intended process. Process discovery captures the real one: the workarounds, the Slack messages that substitute for formal handoffs, the spreadsheet someone maintains because the CRM field is unreliable, the approval step everyone skips when the manager is on leave.

Manual discovery: shadowing and interviews

For small teams (under 20 people), the most effective discovery method is structured observation. Sit with the person who runs the workflow. Watch them do it. Ask them to narrate each step, including the parts they consider too obvious to mention. Those "obvious" parts are where the undocumented exceptions live.

Three questions that surface hidden workflow steps every time:

  1. "What do you do when [standard input] is missing or wrong?" This reveals exception handling that exists in someone's head but appears nowhere in the documented process.
  2. "Who do you check with before moving to the next step?" This reveals informal approvals and dependencies that formal process maps miss entirely.
  3. "What breaks if you are out sick for a week?" This reveals the steps that depend on institutional knowledge rather than documented procedure.

Task mining: discovery at scale

For larger teams or complex operations, manual observation becomes impractical. Task mining tools record how employees interact with applications at the desktop level: clicks, keystrokes, application switches, copy-paste patterns, and data entry sequences. The software reconstructs actual workflows from these interactions, revealing patterns that no interview would surface.

The category has matured significantly. Microsoft Power Automate includes built-in task mining that records desktop activity and clusters similar tasks to highlight automation candidates. Dedicated platforms like UiPath Task Mining, Mimica, and ABBYY Timeline go deeper: AI-powered analysis that identifies repetitive patterns across thousands of recorded sessions, calculates time spent per task variant, and ranks automation opportunities by frequency and effort savings.

For Automation Switch readers, the practical entry point is Power Automate's process advisor if you are already in the Microsoft ecosystem, or a two-week manual discovery sprint using the interview questions above if you are a smaller team. The goal is the same either way: a complete, accurate map of how work actually happens before you start scoring it.

Why discovery changes what you automate

Teams that skip discovery and go straight to scoring typically automate the processes they can see: the ones with formal SOPs, the ones with clear tool integrations, the ones a manager can describe in a meeting. But the highest-value automation targets are often the invisible ones: the processes teams have built around the gaps in their existing tools. A task mining study by KYP.ai found that organizations typically discover 30 to 40 percent more automation-eligible tasks through structured discovery than through interviews alone.

This matters because the workflows you discover during this phase feed directly into the five-dimension scoring framework below. Discovery produces the raw material. Scoring produces the priority list. Skip discovery and the priority list is incomplete from the start.


The Five Dimensions of a Good Audit

Every workflow step in the audit should be scored across five dimensions. These scores determine priority, sequencing, and whether the step should be automated at all.

Dimension 1: Time Cost Per Step

What to measure: Hours per week this step consumes, including hidden time.

Hidden time is where most audits undercount. Beyond the time to execute the step, include:

  • Context switching (stopping other work to handle this step)
  • Fixing errors that the step produces
  • Time waiting for inputs from the previous step
  • Time spent communicating status to others

A step that takes 20 minutes to execute but causes 45 minutes of context switching and error correction costs 65 minutes, not 20.
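That arithmetic is worth making explicit. A minimal sketch of the true-time-cost calculation, where the parameter names and the 30/15 split of the 45 hidden minutes are illustrative assumptions, not figures from a real audit:

```python
def true_time_cost(execute_min, context_switch_min=0, error_fix_min=0,
                   wait_min=0, status_comms_min=0):
    """Total minutes a step really costs, including the hidden time
    (context switching, error fixing, waiting, status communication)."""
    return (execute_min + context_switch_min + error_fix_min
            + wait_min + status_comms_min)

# The worked example from the text: 20 minutes to execute, plus
# 45 minutes of context switching and error correction (assumed split).
print(true_time_cost(execute_min=20, context_switch_min=30, error_fix_min=15))  # 65
```

Scoring the visible execution time alone would rank this step at 20 minutes; counting hidden time more than triples its weight in the audit.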

Dimension 2: Error Rate and Rework

What to measure: How often does this step produce incorrect output? What is the downstream cost of each error?

Data entry errors, missed follow-ups, wrong routing, and duplicate records all have real costs: time to fix them, decisions made on bad data, and customer experience damage when errors reach the outside world.

High error rate + high downstream cost = highest priority for automation, because automation eliminates the error source (human input) rather than just managing the consequences.

Dimension 3: Dependency Chain

What to measure: Which steps block other steps? Where are the single points of failure?

"Only Sarah knows how to do this" is a dependency, and a risky one. When Sarah is on holiday, the workflow stops. When Sarah leaves the company, institutional knowledge leaves with her.

Dependency analysis reveals where automation delivers resilience, not just efficiency. A step that three people are waiting on is a higher priority than a step that only affects the person doing it.

Dimension 4: Data Quality

What to measure: Are the inputs to this step structured and digital, or messy and manual?

Automation requires clean, consistent data. If the input to a step is a PDF someone fills in by hand, a Slack message with variable formatting, or a spreadsheet where column names change, automation will fail or produce garbage output.

Data quality problems must be fixed before building automation, not after. An audit that recommends automating a step with poor input data quality is an audit that will produce a broken automation.
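One way to enforce that rule is to make the automation refuse bad input instead of silently processing it. A minimal sketch of a pre-flight check on spreadsheet input, where the expected column names are illustrative assumptions:

```python
# Columns the downstream automation depends on (illustrative, not a real schema).
EXPECTED_COLUMNS = {"customer_name", "email", "deal_stage", "amount"}

def input_is_clean(header_row):
    """Return (ok, missing_columns) so the automation fails loudly on
    renamed or dropped columns instead of producing garbage output."""
    missing = EXPECTED_COLUMNS - set(header_row)
    return (not missing, sorted(missing))

print(input_is_clean(["customer_name", "email", "deal_stage", "amount"]))  # (True, [])
print(input_is_clean(["Customer", "email", "amount"]))  # fails: columns renamed/missing
```

A check like this does not fix the upstream data-quality problem, but it turns "broken automation" into a visible error that points at the real issue.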

Dimension 5: Judgment Requirements

What to measure: What percentage of this step is rule-based versus judgment-based?

The test: could you write a complete SOP for this step that a person with no context could follow without asking any questions? If yes, it is rule-based and automatable. If the SOP would require sentences like "use your judgment" or "it depends on the situation," the step has judgment requirements that automation cannot replace.

Judgment requirements are not a dead end: the judgment-heavy portion stays human while the rule-based portion gets automated around it.

The Audit Scoring Table

Here is the format every workflow step should be documented in:

Step | Time cost (hrs/wk) | Error rate | Dependency risk | Data quality | Judgment % | Automation priority
---|---|---|---|---|---|---
Example: CRM data entry | 3 hrs | High | Low | Medium | 5% | High
Example: Deal approval | 0.5 hrs | Low | High | High | 70% | Low

Priority is determined by the combination: high time cost + high error rate + low judgment = highest priority. Low time cost + low error rate + high judgment = lowest priority or "do not automate."
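The priority rule above can be sketched as a simple function. The thresholds (50% judgment, 2 hours per week) are illustrative assumptions; a real audit would tune them to the team:

```python
def automation_priority(time_cost_hrs, error_rate, judgment_pct):
    """Coarse priority bucket for one workflow step.

    error_rate is "Low"/"Medium"/"High"; judgment_pct is 0-100.
    """
    if judgment_pct >= 50:
        return "Low"      # judgment-heavy steps stay mostly human
    if time_cost_hrs >= 2 and error_rate == "High":
        return "High"     # big time sink that also produces errors
    return "Medium"

# The two example rows from the scoring table:
print(automation_priority(3, "High", 5))    # High (CRM data entry)
print(automation_priority(0.5, "Low", 70))  # Low  (deal approval)
```

The value of writing the rule down is consistency: every step in the audit gets bucketed the same way, so the priority list is defensible rather than a matter of taste.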

The Audit Deliverable Format

A professional automation audit produces three specific documents:

1. Workflow Map with Scores

Every step in the workflow documented with its five-dimension scores. Visual format: a table or simple process diagram with scores annotated. One page per workflow, not a 40-slide presentation.

The workflow map is the source of truth. Everything else derives from it.

2. Automation Recommendations (Prioritized)

A ranked list of automation recommendations, ordered by impact score. Each recommendation includes:

  • What to automate: The specific step or steps
  • Recommended tool: Which platform best fits this automation
  • Estimated build time: Hours to implement, including testing
  • Estimated monthly savings: Hours recovered × cost per hour
  • Dependencies: What needs to be in place before this can be built

The ranking prevents the common mistake of automating the interesting workflows first instead of the impactful ones.
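The "estimated monthly savings" line and the ranking together can be sketched in a few lines. The sample recommendations, hours, and $40/hour rate below are illustrative assumptions:

```python
# Each recommendation carries the fields from the deliverable format above.
recommendations = [
    {"what": "CRM data entry",    "hours_saved_per_month": 12, "cost_per_hour": 40},
    {"what": "Invoice routing",   "hours_saved_per_month": 6,  "cost_per_hour": 40},
    {"what": "Report formatting", "hours_saved_per_month": 20, "cost_per_hour": 40},
]

# Estimated monthly savings = hours recovered x cost per hour.
for rec in recommendations:
    rec["monthly_savings"] = rec["hours_saved_per_month"] * rec["cost_per_hour"]

# Highest impact first, so the build order follows impact, not interest.
ranked = sorted(recommendations, key=lambda r: r["monthly_savings"], reverse=True)
for rec in ranked:
    print(f'{rec["what"]}: ${rec["monthly_savings"]}/month')
```

In this illustrative data, report formatting ($800/month) outranks CRM data entry ($480/month) even though CRM work may feel like the more obvious automation target.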

3. The "Do Not Automate" List

This list is as important as the recommendations. It identifies:

  • Steps where human judgment is the actual value being delivered (automate around them, not through them)
  • Steps where the cost to build and maintain the automation exceeds the cost of doing them manually
  • Steps where the input data quality is too poor to automate reliably until the upstream problem is fixed

An audit without a "do not automate" list is incomplete. It will lead to wasted build time on automations that break or produce worse outcomes than the manual process.

Red Flags That an Audit Missed Something

Four outcomes indicate the audit was insufficient:

The automation broke within a week of deployment. This means the edge cases that the audit should have surfaced were not documented. The step was more variable than the audit credited.

Immediate need for exception handling that was not in the spec. Every workflow has exceptions. A good audit catalogs them and either builds handling for them or explicitly flags them as manual steps.

The automated output goes to a folder nobody checks. The audit did not follow the output of the automation to its destination. Automation that produces an output nobody acts on solves nothing.

The team reverted to the manual process within a month. The automation either produced lower-quality output than expected, was harder to use than the manual process, or solved the wrong problem. All three indicate insufficient audit work upfront.

Automation Audits as a Consulting Deliverable

For businesses that take automation seriously, a professional audit is worth paying for.

Pricing benchmarks:

  • Single-workflow audit (one process, one team): $2,000–5,000
  • Full business process audit (all workflows, cross-functional): $5,000–10,000
  • Ongoing audit retainer (quarterly reviews + new workflow assessment): $1,500–3,000/quarter

Who commissions these audits:

  • Small businesses considering their first automation investment and wanting a clear ROI picture before committing
  • Companies evaluating automation vendors and needing an independent assessment
  • Teams building a business case for automation spend that requires defensible numbers

For guidance on evaluating automation consultants and what to expect from a professional engagement, see How to Hire an Automation Expert (And What to Ask Them).

For a structured framework you can use to run the audit yourself, see Best Automation Tools for Small Businesses in 2026.

Ready to have your workflows audited professionally? Book an Automation Audit with Automation Switch and get a scored workflow map, prioritized recommendations, and a clear build order delivered within two weeks.