The Four Jobs Every Agent Is Doing — and Why Your Organization Can't See All of Them
Contributed article by Mitch Lieberman
Every contact center agent is doing at least four jobs at once. As leaders, we do not give them enough credit for how hard that job is. They are managing a conversation, following a process, solving a problem, and operating a technology stack, all simultaneously, under time pressure, with a real person on the other end waiting for help.
The skills vocabulary most organizations use to hire, train,
and evaluate those agents accounts for only one of those jobs at a time.
Sometimes two. Rarely all four, and almost never in a way that connects Quality
Assurance (QA), training, coaching, and workforce management (WFM) to the same
underlying picture of agent capability.
That is the coordination failure at the center of contact center performance management. The failure is not in the tools, the agents, or the maps; it is in the coordination between them. This is a team sport.
Different Functions, Different Lenses
QA scorecards measure communication and compliance, mostly soft skills. Training curricula cover product knowledge and call flows, while hiring profiles screen for empathy and multitasking. WFM routes agents based on tenure and queue type. Each function is doing its job. But they are using different vocabularies to describe the same agent.
When a QA trend shows "communication scores are dropping," that information doesn't automatically tell training which specific behavior to develop. It doesn't tell WFM which queue assignments to adjust, and it doesn't tell hiring which screening criteria to tighten. The data exists, but the diagnosis doesn't.
SQM Group's research links first call resolution (FCR) directly to operating efficiency: for every 1% improvement in FCR, a call center reduces operating costs by 1%, and for the average midsize call center, that improvement is worth about $286,000 in annual operational savings. SQM also positions FCR as both a customer service effectiveness metric and an operating efficiency metric, because higher FCR is associated with fewer repeat contacts and lower service costs.
For a center handling 40,000 interactions per month, that is
roughly $300K–$500K annually per percentage point. Repeat contacts, mis-routed
calls, and unresolved issues all trace back to skills gaps that organizations
can measure but cannot diagnose — because QA, training, and WFM are not working
from the same framework.
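As a back-of-envelope check on that range, the arithmetic is simple enough to sketch. The operating budgets below are illustrative assumptions for a center of that size, not figures from SQM's research:

```python
# Back-of-envelope: what one percentage point of FCR is worth under SQM's
# rule of thumb (each 1% FCR gain cuts operating costs by roughly 1%).
# The annual operating budgets are illustrative assumptions.

for annual_operating_cost in (30_000_000, 50_000_000):  # assumed USD budgets
    savings_per_fcr_point = annual_operating_cost * 0.01
    print(f"Budget ${annual_operating_cost:,.0f} -> "
          f"~${savings_per_fcr_point:,.0f} per FCR point")
```

The rule of thumb is linear, so the only input that matters is what the center actually spends; everything else follows from that one number.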
What This Looks Like from the Customer's Side
Consider a customer who calls about a billing discrepancy.
The agent is polite, uses the customer's name, and acknowledges the
frustration. All solid communication behaviors. But the agent can't diagnose
why the charge appeared. They search the knowledge base, find a generic
article, and issue a credit without understanding the root cause. Three weeks
later, the same charge appears. The customer calls back. A different agent
picks up, sees no documentation of the root cause, and starts from scratch.
From the customer's perspective, this is one experience: the
company charged them twice and couldn't figure out why. From the organization's
perspective, these are two separate tickets, both of which received passing QA
scores. The first agent scored well on communication. The second followed the
correct process. Neither was coached on what actually went wrong, because the
skills vocabulary didn't connect the diagnosis failure to the documentation
failure to the repeat contact.
Qualtrics research consistently shows that high-effort
customer experiences are the strongest predictor of churn. This is what
high-effort looks like from the inside. It is not dramatic. It is corrosive.
And it compounds.
Four Domains, One Map
The solution isn't a new theory imposed on top of existing
operations. It's alignment to structures that are already there.
Academic and occupational research, drawn from O*NET
classifications, Lightcast workforce analytics, and industry training
frameworks, converges on roughly six distinct skill clusters in contact center
environments. Six is analytically useful but operationally cumbersome. Contact
centers don't evaluate agents across six scorecard sections. They evaluate them
across three or four.
The four-domain model consolidates those clusters into
categories that match how QA platforms already work: Communication &
Interpersonal Skills, Process & Compliance Execution, Knowledge &
Problem-Solving, and Digital Dexterity & Systems Proficiency. Balto,
Calabrio, Observe.AI, and Call Center Studio all cluster their metrics into
groupings that map directly onto these four domains. Most organizations
adopting this model can relabel or lightly restructure existing scorecards
rather than rebuilding from scratch. The operational vocabulary is already
there. The shared language is what's missing.
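To make "relabel rather than rebuild" concrete, here is a minimal sketch of what that mapping might look like. The scorecard line items are invented for illustration; a real mapping would start from your own QA form:

```python
# Illustrative roll-up of existing QA scorecard items into the four domains.
# Every line item below is a hypothetical example, not a vendor's scorecard.

FOUR_DOMAINS = {
    "Communication & Interpersonal Skills": [
        "active listening", "tone and empathy", "clear explanations",
    ],
    "Process & Compliance Execution": [
        "identity verification", "required disclosures", "call documentation",
    ],
    "Knowledge & Problem-Solving": [
        "root cause diagnosis", "correct resolution", "knowledge base usage",
    ],
    "Digital Dexterity & Systems Proficiency": [
        "CRM navigation", "after-call work time", "multi-system handling",
    ],
}

def domain_for(scorecard_item: str) -> str | None:
    """Return the domain an existing scorecard item rolls up to, if any."""
    for domain, items in FOUR_DOMAINS.items():
        if scorecard_item in items:
            return domain
    return None

print(domain_for("root cause diagnosis"))  # Knowledge & Problem-Solving
```

The point is not the code; it is that the items already on most scorecards roll up cleanly, which is why relabeling is usually enough.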
Each domain names something distinct that can go wrong
independently. An agent can be excellent at communication and still generate
repeat contacts because their knowledge of root cause analysis is shallow; they treat symptoms, not problems. An agent
can know the product cold and still create compliance risk because they rush
through disclosure steps on difficult calls. An agent can handle both
beautifully and still burn 25% of every interaction on after-call work because
their CRM navigation is slow.
These are not the same failures. They require different
coaching, different routing decisions, and different hiring screens. The
four-domain model doesn't just name the gap. It tells you which gap you're
looking at.
The Payoff: Coaching That Names the Problem
When HR, Training, QA, and Operations all use the same four
categories, data from one function is immediately readable by the others.
An agent struggling with repeat contacts gets a coaching note that says "your Knowledge & Problem-Solving scores suggest you're treating symptoms rather than diagnosing root causes, so here's a scenario to practice" instead of "you need to improve your resolution quality." That is not a semantic distinction. The first gives the agent something to practice. The second gives them something to feel bad about.
The shared taxonomy also creates the diagnostic foundation that makes AI-assisted coaching genuinely targeted rather than generic. When a QA review surfaces a pattern, such as an agent consistently skipping identity verification or repeatedly rushing through closing steps on difficult calls, a practice scenario can appear in that agent's queue within hours. Not two weeks later, when the behavioral memory has faded.
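A minimal sketch of that trigger logic, assuming QA findings are already tagged with a domain and a behavior; the threshold and field names are hypothetical:

```python
# Hypothetical trigger: repeated QA findings on the same behavior enqueue a
# matching practice scenario for that agent within hours, not weeks.

from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class QAFinding:
    agent_id: str
    domain: str    # one of the four domains
    behavior: str  # e.g. "skipped identity verification"

REPEAT_THRESHOLD = 3  # assumed: three repeats warrant a practice scenario

def scenarios_to_enqueue(findings: list[QAFinding]) -> list[tuple[str, str]]:
    """Return (agent_id, behavior) pairs that should get a practice scenario."""
    counts = Counter((f.agent_id, f.behavior) for f in findings)
    return [pair for pair, n in counts.items() if n >= REPEAT_THRESHOLD]

findings = [QAFinding("a17", "Process & Compliance Execution",
                      "skipped identity verification")] * 3
print(scenarios_to_enqueue(findings))  # [('a17', 'skipped identity verification')]
```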
Ebbinghaus's research on the forgetting curve, first published in 1885 and confirmed by a century of learning
science since, makes the point plainly: coached content that isn't practiced
within days has limited staying power. The traditional QA-to-coaching cycle (a call recorded Tuesday, reviewed Thursday, coaching scheduled for two weeks from Monday) works against retention by design. A shared skills taxonomy
with targeted, triggered practice scenarios works with it.
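The exponential-decay form commonly used to summarize Ebbinghaus's findings makes the timing argument concrete. The stability constant below is an illustrative assumption; what matters is the shape of the curve, not the exact number:

```python
# Retention under the exponential forgetting curve R(t) = e^(-t/S), the form
# commonly used to summarize Ebbinghaus's findings. S (memory stability, in
# days, without reinforcement) is an illustrative assumption.

import math

S = 5.0  # assumed stability of unpracticed coached content

for days in (2, 14):  # practice within days vs. the two-week coaching lag
    retention = math.exp(-days / S)
    print(f"Day {days}: ~{retention:.0%} retained")  # ~67% vs. ~6%
```

Practicing within two days keeps most of the coached behavior available; waiting two weeks leaves almost none of it.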
AI coaching is not a prerequisite for using the four-domain
model. Organizations benefit from the shared vocabulary and alignment long
before any AI is deployed. But once the taxonomy is established, AI coaching
becomes the mechanism that compresses weeks of lag into hours and makes skills
development continuous rather than episodic.
The Useful Question
The model is a starting point. The domains can be adapted to
specific industries, regulatory environments, and organizational structures.
What matters is the shared language and whether a QA finding in one function
can be acted on immediately by another.
Which brings us to the question worth sitting with: when QA identifies a skills gap in your contact center today, how many steps does it take before a specific coaching intervention happens, and how many functions have to translate the finding into their own vocabulary before it gets there?
If the answer involves more than two steps and more than one
translation, you have a coordination failure. The four domains give you a way
to reduce that friction and make a QA finding usable across the operation.
From Framework to Practice
Implementation does not need to start with a full
transformation program. It can start with a simpler question: are QA, training,
coaching, and workforce management all describing agent performance in the same
language? In most centers, they are not.
The first move is to take the scorecard you already have and
map it to the four domains, then use those same categories across coaching,
training, calibration, and routing. The goal is not to build a better framework
on paper, but to make a skills gap identified in one part of the operation
immediately usable by the others.
From there, the path is practical: sharpen onboarding around
communication and compliance, build deeper diagnostic skills as agents take on
more complexity, and shorten the gap between what QA identifies and what agents
actually get to practice. That is how contact centers move from measuring
performance in fragments to improving it as a system.
Mitch Lieberman is VP of Fuel iX at TELUS Digital, and a CX strategist and operator focused on contact center performance, AI integration, and the organizational alignment that makes both work.