Call Simulator Executive Interview
David Lawson, Co-Founder and CEO, Call Simulator

In today’s customer service landscape, training and
development are undergoing a radical transformation. As organizations continue
to adopt advanced technologies like real-time agent assist and AI-driven
quality management tools, one critical element is often overlooked:
foundational, skill-based training. In this interview, Sheri Greenhaus,
Managing Partner of CrmXchange, sits down with David Lawson, Co-Founder and CEO
of Call Simulator, to discuss the shifting dynamics of contact center training
and the growing role of simulation in preparing agents for real-world
scenarios.
Lawson points out the risks of over-reliance on technology
without preparing human agents to fully utilize it. The conversation explores
how scalable simulation, combined with AI-powered coaching, is helping
organizations train agents more effectively across a variety of roles. They
also explore the ethical and practical concerns surrounding generative AI, with
Lawson offering a thoughtful perspective shaped by his early work with IBM
Watson.
Sheri Greenhaus: It’s been a little over two years since we last spoke. Can you bring me up to speed on what Call Simulator is up to?
David Lawson: We’re very proud of our progress. We’re now approaching 600 911 emergency dispatch centers training emergency dispatchers in the critical work they do.
As I’ve said before, that market was always intended to be our launch point.
From the beginning, we planned to expand into the broader corporate space—and
that turned out to be a great decision.
Now, we’re not only working with Fortune 500 companies but
also with several Fortune 50 organizations. And we’re doing it at scale.
What’s really exciting is how this applies across industries—everything from airlines, shipping, and logistics to health and life insurance.
Sheri Greenhaus: When
we last spoke, you described how your system provides scenario-based
training—it can run at scale, monitor quality at scale, and even conduct
evaluations at scale.
Looking at your website now, I’m not sure if this was
available before, but it looks like you’ve added chat functionality. Is that
new?
David Lawson: Yes,
that’s right. We’ve added chat capabilities because we’re increasingly being
used as a platform for communication training.
We started out more focused on call centers, and while they
remain a core use case, our platform is now being applied to a wide range of
communication types. That includes text-based support like live chat and social
media—anywhere text is being used to communicate.
One of the important features we added was multi-chat
simulation. That means we can simulate multiple chat conversations happening at
the same time, which reflects the real-world experience of someone working in a
chat support role.
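To make the multi-chat idea concrete, here is a minimal sketch, in Python, of how several simulated conversations might be interleaved in front of one trainee. Every name, script, and structure in it is illustrative; this is not Call Simulator's actual implementation.

```python
# A minimal sketch of multi-chat simulation: several scripted "customers"
# send messages concurrently, and the trainee-facing loop must interleave
# replies across sessions, just as a live chat agent would.
# All names and scripts here are illustrative, not Call Simulator's API.
import asyncio
import random

SCRIPTS = {
    "chat-1": ["Hi, my package is late.", "It was due Tuesday!", "Can you refund shipping?"],
    "chat-2": ["I can't log in.", "I already reset my password twice."],
    "chat-3": ["I want to cancel my plan.", "Actually, are there cheaper tiers?"],
}

async def simulated_customer(chat_id: str, lines: list[str], inbox: asyncio.Queue):
    """Feed scripted customer messages into a shared inbox at random intervals."""
    for line in lines:
        await asyncio.sleep(random.uniform(0.5, 2.0))  # realistic typing gaps
        await inbox.put((chat_id, line))
    await inbox.put((chat_id, None))  # sentinel: this conversation is done

async def trainee_console(inbox: asyncio.Queue, total_chats: int):
    """Drain the inbox, forcing the trainee to juggle interleaved conversations."""
    done = 0
    while done < total_chats:
        chat_id, message = await inbox.get()
        if message is None:
            done += 1
            print(f"[{chat_id}] conversation closed")
        else:
            print(f"[{chat_id}] customer: {message}")
            # In a real simulator the trainee's typed reply would be captured
            # and scored here; we just print a placeholder to keep this short.
            print(f"[{chat_id}] agent: (trainee reply goes here)")

async def main():
    inbox: asyncio.Queue = asyncio.Queue()
    customers = [simulated_customer(cid, lines, inbox) for cid, lines in SCRIPTS.items()]
    await asyncio.gather(*customers, trainee_console(inbox, len(SCRIPTS)))

if __name__ == "__main__":
    asyncio.run(main())
```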
People often ask, “Does it actually work?” And the answer is yes: as long as the practice is relevant and realistic, it absolutely works. If you can simulate the back-and-forth of real conversations, including curveballs like an angry customer or a complex situation, then, just like with anything else, you’ll start off slow, get better with practice, and eventually become much faster and more effective when you go live.
What’s interesting is that companies are starting to realize
that when a real human gets involved, someone who can add empathy, nuance, or
even identify upsell opportunities, the experience improves beyond what AI
alone can provide.
Even if handle time doesn’t drop, or even increases
slightly, you often see a big payoff in terms of higher Net Promoter Scores,
improved CSAT, more revenue, and longer customer relationships.
Initially, the expectation was that bots would take over
service. And at first, it looked great from a financial perspective—no salaries
to pay. But the result was frustrated customers who didn’t feel heard or
helped. Over time, companies realized that whatever they were saving with
automation, they were losing in customer satisfaction, and worse, in long-term
revenue.
Sheri Greenhaus: When
we run webinars featuring companies talking about AI, there’s a consistent
message, especially from those in sensitive industries like financial services
and insurance. A bot can easily answer simple questions like “When is my payment due?” and that’s fine. But when it comes to more sensitive or complex
situations, you still need a human agent.
More and more companies, and end users, are realizing that
bots can’t handle everything. So now, they’re more strategic, reserving their
live agents for calls that are emotional, complex, or require deeper
engagement.
David Lawson: Absolutely,
100%. And what that also means is that the so-called “easy” calls are now
handled by AI, which leaves the human agents dealing mostly with the difficult
ones.
Often, by the time a customer reaches a human, they’ve
already gone through AI prompts, possibly feeling frustrated. They may have
even had to convince the AI to escalate the issue. When the agent picks up, the
clock has already been ticking from the customer’s point of view, even if the
human just joined the conversation.
That’s why many advanced platforms are emphasizing their
ability to transition customers seamlessly from AI to a human agent, providing
the agent with a full summary so the customer doesn’t have to repeat
themselves. That makes a big difference.
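Mechanically, that kind of handoff usually comes down to passing a structured context object from the bot to the agent’s desktop. Here is a hypothetical sketch of what such an object might contain; the field names are invented for illustration and are not tied to any particular vendor’s API.

```python
# A hypothetical sketch of the "seamless handoff" described above: the bot
# hands the human agent a structured summary so the customer never has to
# repeat themselves. Field names are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class HandoffContext:
    customer_id: str
    issue_summary: str                 # one-paragraph recap written by the bot
    sentiment: str                     # e.g. "frustrated" -- warns the agent up front
    steps_already_tried: list[str] = field(default_factory=list)
    wait_time_seconds: int = 0         # the clock the customer has already watched

def build_agent_briefing(ctx: HandoffContext) -> str:
    """Render the context as the short briefing an agent sees on pickup."""
    tried = "; ".join(ctx.steps_already_tried) or "nothing yet"
    return (
        f"Customer {ctx.customer_id} ({ctx.sentiment}, waited {ctx.wait_time_seconds}s). "
        f"Issue: {ctx.issue_summary} Already tried: {tried}."
    )
```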
But with this shift comes a greater need for
simulation-based training. One of the worst things companies can do is throw a
bunch of advanced tech at the agent and then not train them on how to use it.
There’s this flawed idea that with all this smart technology, training becomes
less necessary. It’s actually the opposite.
Flight simulators exist not because pilots aren’t smart, but
because the cockpit is complex, with countless scenarios that could unfold. The
same logic applies here: the more tech we introduce to assist humans, the more
essential it becomes to let them experience it in a safe, simulated
environment, where it’s okay to make mistakes and learn from them without real
consequences.
That’s another challenge we see: some organizations expect
new hires to perform at full capacity on day one, without acknowledging the
need for time to learn the tools, terminology, and unique aspects of the job.
We’ve learned a lot from training 911 dispatchers, where
mistakes can mean life or death. Their work requires intense active listening,
communication, and multitasking, and lives depend on how well they perform. In
the corporate world, the stakes may be different, but the principle is the
same. It’s not about literal life or death, but about brand damage or lost revenue.
No matter how intelligent or experienced someone is, they
still need structured practice to succeed.
Sheri Greenhaus: It’s interesting. We’re hosting a webinar at the end of this month, and I asked the
participating vendors to send over some bullet points on what they planned to
cover. Not one mentioned simulation or training.
David Lawson: That
doesn’t surprise me. There’s often a disconnect. A lot of focus is placed on
things like real-time agent assist tools that support agents during live calls.
And yes, those tools are impressive. Companies also invest heavily in quality
systems that evaluate performance after
the fact.
The typical process ends with something like, “Watch this
video and try to improve,” which really isn’t effective. That’s what happens
when you place quality at the end of
the process and hope it somehow influences performance at the start. It doesn’t work.
Real training means giving agents the opportunity to
actually do what’s expected of them,
using the same metrics they’ll eventually be judged on. It means giving them
the chance to build experience in a safe, simulated environment before they go
live.
Then, when quality systems detect issues, you can say, “Looks like empathy is a challenge; go practice with these scenarios designed to improve your empathetic communication.”
A lot of CCaaS (Contact Center as a Service) companies have
poured money into end-of-line quality measurement and agent assist tools,
thinking that would be enough. They overlooked training.
When I first entered the call center space, I was surprised
by how many of W. Edwards Deming’s classic management principles had been
forgotten, especially things like:
- Principle 6: Institute training on the job.
- Principle 13: Institute a vigorous program of education and self-improvement.
Sheri Greenhaus: I started CrmXchange nearly 30 years ago. Back then, training, including ongoing training, was gospel. It was just something every company did.
Years ago, I managed an outsourcing business and we were
constantly training our staff as the technology evolved. I think you’re right:
somewhere along the way, people started believing they didn’t need as much
training because agents could just press a button and get the answer.
David Lawson: Exactly, and it’s unfortunate. That shift makes sense in context. When I entered the industry, it was right around the time advanced tools such as real-time assist and automated quality monitoring started coming into the picture.
But the irony is, the more technology you layer in, the more
skilled the human agent needs to be. They’re no longer just answering simple
questions. They have to operate at a higher level to make the most of these
tools.
Sheri Greenhaus: How
are you partnering with CX vendors?
David Lawson: We
can integrate our simulations into any quality system. From the beginning, we
designed our platform to be open, so it’s fully built with APIs and deep links
to support those integrations. That flexibility is key. You don’t want your
learning systems to be isolated silos, especially in large-scale organizations.
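As a rough illustration of the kind of open integration described here, the sketch below posts a finished simulation result to an external quality-management system over a simple authenticated HTTP API. The endpoint, payload fields, and helper function are hypothetical, not Call Simulator’s actual API.

```python
# A hypothetical sketch of pushing a completed simulation result into an
# external quality-management system (QMS) via a webhook-style API.
# Endpoint and payload fields are invented for illustration only.
import requests

def push_simulation_result(qms_url: str, api_key: str, result: dict) -> None:
    """Send one simulation outcome to the customer's quality system."""
    response = requests.post(
        f"{qms_url}/v1/evaluations",           # hypothetical endpoint
        json={
            "source": "simulation",            # distinguishes practice from live calls
            "agent_id": result["agent_id"],
            "scenario": result["scenario"],
            "scores": result["scores"],        # e.g. {"empathy": 4, "accuracy": 5}
            "deep_link": result["replay_url"], # lets a coach jump back into the session
        },
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=10,
    )
    response.raise_for_status()
```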
Sheri Greenhaus: Are
you expanding within end-user organizations?
David Lawson: Absolutely.
We're now being brought into communication training well beyond the traditional
call center environment. In many cases, there's no texting or phone involved.
It's live, in-person interaction. That could include leadership development,
customer service roles like gate agents, or other face-to-face scenarios.
This has really expanded our use cases. Call centers will
always be a key part of what we do, but we’re seeing broader adoption across
departments. And for that reason, one of the most significant features we’ve
introduced since we last spoke is AI coaching.
We’ve incorporated generative AI to create skills-based,
rubric-driven scoring, in addition to the compliance scoring we’ve always
offered. In call centers, we aim to integrate with whatever quality system they
already use, but there are many departments and employees across large organizations who don’t use, or don’t have access to, those systems.
By having our own built-in AI coaching, we can support both
ends of the spectrum: the entire organization or just specific parts that lack
an existing quality infrastructure.
We’re really proud of that development. More and more
organizations are moving toward skills-based evaluation, and rubrics offer a
far better measurement of effectiveness than a simple binary: “Did you say the
right phrase or not?” The rubric-based approach allows for more nuance,
context, and, ultimately, a more accurate view of performance.
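To see why a rubric allows more nuance than a binary check, compare the two approaches in this simplified sketch. The criteria, weights, and levels are illustrative, not Call Simulator’s actual rubric.

```python
# A simplified sketch contrasting binary compliance scoring with
# rubric-driven, skills-based scoring. All criteria and weights are
# invented for illustration.
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    weight: float          # relative importance within the rubric
    level: int             # 1-5, as judged (e.g., by a generative-AI coach)
    max_level: int = 5

def binary_compliance(transcript: str, required_phrase: str) -> bool:
    """The old approach: did the agent say the magic words or not?"""
    return required_phrase.lower() in transcript.lower()

def rubric_score(criteria: list[Criterion]) -> float:
    """Weighted rubric score on a 0-100 scale, allowing nuance per skill."""
    total_weight = sum(c.weight for c in criteria)
    weighted = sum(c.weight * (c.level / c.max_level) for c in criteria)
    return 100 * weighted / total_weight

# Example: strong empathy, weaker resolution accuracy.
scores = [
    Criterion("empathy", weight=0.4, level=5),
    Criterion("accuracy", weight=0.4, level=3),
    Criterion("call control", weight=0.2, level=4),
]
print(round(rubric_score(scores), 1))  # 80.0
```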
Sheri Greenhaus: And
I suppose when you're working in the contact center and then extending into
other areas, simulations can help ensure consistency. For example, a gate agent
wouldn’t be saying something completely different from what a customer just
heard from the contact center.
David Lawson: Exactly.
Maintaining a consistent corporate message across all touchpoints is crucial.
In fact, human interaction is becoming one of the most important ways for
companies to differentiate themselves. As AI becomes more common, many customer
experiences will start to feel the same. When a real person enters the
equation, that moment becomes a brand-defining opportunity.
That human contact is where companies can reinforce what makes their brand unique, whether it’s quality, price, comfort, or something else, and it’s also where customers can feel truly heard. And that’s one of the
biggest limitations of AI. Even in industries where AI typically works well, if
a customer is upset and just wants to be heard, AI will never fully meet that
need. No one believes AI is truly listening. That’s a gap, and it opens the
door for competitors who do offer
real human connection.
Sheri Greenhaus: Yes. This is a bit of an aside, but what about AI “boyfriends” and “girlfriends” that make people feel heard?
David Lawson: (Laughs)
Right! That’s a whole other issue. But honestly, it worries me. When people
start to believe those interactions are real,
it becomes dangerous. AI is, by nature, a sociopath. It’s not trying to help or
hurt, it’s simply responding based on data. It has no values, no soul. It will
do the same task flawlessly at 2 AM or 4 PM, and it won't complain. But that’s
also the danger. It may do something amazing one moment, and something
completely inappropriate the next.
That unpredictability is like dealing with someone who is
kind to one person and cruel to another. Psychologists would say that's because
there's no consistent core, no person
behind the actions. As someone who was at IBM Watson in 2013 and 2014, at the start of commercial AI, I’ve seen what it can do and what it can’t. Back then, we were one of the first companies allowed to use Watson, and I was at the headquarters when it launched.
Since then, we've moved toward more flexible platforms like
Google’s, because we needed to scale. But those early experiences taught me a
lot about what AI is and what it isn’t.
The real shift came with generative AI. It’s a complete game
changer. But I think it’s already being oversold, especially in more nuanced
human situations. And we’re beginning to see the backlash.
What could truly change everything is quantum computing. I’m
old enough to remember when the Pentium chip revolutionized computing and
gaming. Quantum will be that kind of leap. Its ability to hold both “yes” and “no” simultaneously is something current computers can’t do, and it’s deeply human. It’s how we can have two favorite restaurants or hold conflicting emotions at once.
If AI is ever going to feel more human, more capable of
empathy and nuance, I believe quantum computing will be the spark that gets us
there.
Sheri Greenhaus: Do
you worry more about AI becoming too human, starting to “think”, or that it
stays essentially a giant database, just doing what it thinks you want it to do?
David Lawson: What
worries me is something we’ve seen before in history: the technology advancing
faster than the understanding of the people who created it. When that happens,
bad actors can take advantage.
In the past, tech mostly helped humans do tasks faster, but
the human was still in control, pressing a button, giving a command. Now, with
generative agents, we say, “Go do this,” and it just does it. I was joking recently that maybe the Roomba was our first
warning. It was a vacuum, yes, but it actually did something without us there. It was tech that took action on its
own. Gen AI is that on a whole new level.
And that’s the concern. With a Roomba, maybe it bumps into
your cat. But with a generative agent, you’re handing over access to your
systems, your passwords, your sensitive data. What might it do with that? We’re
already seeing stories where agents are being infiltrated and manipulated into
giving up data to third parties. That’s why I love being in the training
business. We work with simulated environments and fake data. It keeps the risk
low while we build skills and understanding.
Sheri Greenhaus: In
the last 30 seconds, any final thoughts?
David Lawson: If
I think back two and a half years ago, most of the market didn’t even believe
scalable role-play training was possible. What’s changed is the level of
understanding. Today’s customers are much more informed. They already know what
can be done. Now, they’re focused on how
we’re executing it.
I’m excited. We’re seeing demand across industries and use
cases. We’re past the novelty stage. It’s no longer, “What is this?” Now it’s,
“How can we integrate this into our operations?” And that’s a great place to
be.