Mind Meld with the Customer

How would you like to mind-meld with the customer, as if you had perfect knowledge of all information pertinent to the sale?

With AI, this might just become a reality.

Thanks to large-scale computation, AI has now achieved beyond-human performance in many economically valuable cognitive tasks, especially those related to language, through so-called Large Language Models (LLMs).

Naturally, many business processes will be amenable to beyond-human automation, including sales.

In the interest of full disclosure, I credit the inspiration to Irzana Golding and her article “Operationalizing Competitive Intelligence, including ChatGPT”, in which she introduces the notion of a “SalesGPT” for competitive-intelligence questions.

She posits how a locally-informed LLM could bring critical competitive intelligence (CI) to the salesperson’s attention in a way that might clinch the deal.

But how?

LLMs are Not Just Language Models

It is often revealed via win-loss analysis that lost deals (some say up to 50%) were winnable, but for some missing piece of information.

It’s likely that the information was available, but unobserved. As Sherlock Holmes said: “Watson, you see, but you do not observe.”

In the realm of deductive reasoning, observation is akin to connecting the dots. For humans, the process is far from simple and fraught with mishaps and biases. Enter the LLM, a powerful tool that recognizes semantic patterns and can apply them to knowledge across disparate datasets.

Beyond being a mere sentence generator, an LLM possesses a prodigious memory of all the data it has encountered. With this wealth of information, an LLM can expertly connect disparate pieces of data and generate insights that might have been impossible for humans to discern.

This ability to identify patterns in data is especially useful for businesses seeking to gain a competitive edge. An LLM could identify connections between seemingly unrelated data sets and illuminate the potential impact of a competitor’s latest product feature on a company’s sales play.

Consider the following prompt provided to ChatGPT about going up against a fictitious company called Volanti:

[Image: Competitive Intelligence Provided to ChatGPT]

Note that the blue highlights are the contextual prompts that I entered manually for the demo. For an integrated solution (“SalesGPT”), this data would be retrieved dynamically (from a vector database like Pinecone) in response to the question, with the aggregated data sourced from various CI data sources.
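To make that retrieval step concrete, here is a minimal Python sketch of how such a lookup might work, using the pre-1.0 openai library and the pinecone-client 2.x interface. It assumes an OPENAI_API_KEY environment variable, a Pinecone index (here called “ci-knowledge”) already populated with embedded CI snippets, and a metadata field named text; those names, along with the choice of text-embedding-ada-002 and gpt-3.5-turbo, are illustrative assumptions rather than details from the demo.

```python
# A minimal retrieval-augmented sketch, not a production "SalesGPT".
# Assumes OPENAI_API_KEY is set and a hypothetical Pinecone index "ci-knowledge"
# already holds embedded CI snippets (CRM notes, win-loss reports, competitor news).
import openai
import pinecone

pinecone.init(api_key="YOUR_PINECONE_KEY", environment="YOUR_PINECONE_ENV")
index = pinecone.Index("ci-knowledge")  # hypothetical index name


def ask_salesgpt(question: str) -> str:
    # Embed the question so we can find semantically related CI snippets.
    embedding = openai.Embedding.create(
        model="text-embedding-ada-002", input=question
    )["data"][0]["embedding"]

    # Retrieve the most relevant snippets from the vector database.
    matches = index.query(vector=embedding, top_k=5, include_metadata=True)
    context = "\n".join(m["metadata"]["text"] for m in matches["matches"])

    # Ask the LLM to "connect the dots" using only the retrieved context.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You are SalesGPT. Use the competitive-intelligence "
                        "context below to advise the salesperson.\n\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return response["choices"][0]["message"]["content"]
```

With something like this in place, the manually pasted, blue-highlighted context disappears: the salesperson simply asks the question and the relevant snippets are fetched and injected automatically.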

Now let’s pose the question: “How might we win the deal selling to a client interested in Volanti because of their Zero Trust offering?”

[Image: Response from ChatGPT about sales play informed by CI data]

Note how the response is a synthesis, “connecting the dots”. Consider that the salesperson might not have noticed the CRM note left by a colleague, but that’s OK, because “SalesGPT” did.

The response is synthesized from a range of disparate CI sources, turning that data into something cogent and salient for the circumstances, i.e. how to deal with the competitor’s feature. Notice the statement:

“Highlight our roadmap: While we may not currently offer Zero Trust, we have a roadmap to introduce it in the future. It is important to communicate this to the client and emphasize that our implementation will be built from the ground up, like our flagship product, ensuring optimal integration and cost efficiency.”

This is a classic sales play: recognize an objection and then turn it around into an advantage. Yes, we’ll be later, but better and at lower cost, and here’s why!

Maybe this contrived example is too crude and obvious, but you get the idea. And don’t overestimate the ability of a human salesperson to do all this synthesis unaided. There are plenty of places to go wrong: information blindness, distraction, cognitive biases, information overload, etc.

Zero-Sum Game or Secret Sauce?

From my recent experiments with a number of business use cases, the most exciting feature of LLMs is how much latent power is waiting to be unlocked via prompt engineering, data augmentation, fine-tuning and various architectural tricks. LLMs offer a new toolkit for conducting information-retrieval experiments and then quickly putting them into production.
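As a taste of what prompt engineering as experimentation can look like, here is a toy sketch that compares two prompt “recipes” on the same question; the variant wording, the context string and the model choice are all invented for illustration and reuse the same pre-1.0 openai interface as above.

```python
# A toy prompt-engineering experiment: compare two prompt "recipes" on the
# same question. Assumes OPENAI_API_KEY is set; everything else is made up.
import openai

VARIANTS = {
    "plain": "Answer the sales question using the context below.\n\n{context}",
    "objection_first": ("List the client's likely objections first, then turn "
                        "each one into an advantage.\n\n{context}"),
}


def run_variant(system_template: str, context: str, question: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": system_template.format(context=context)},
            {"role": "user", "content": question},
        ],
    )
    return response["choices"][0]["message"]["content"]


context = "CRM note: client asked about Zero Trust. Roadmap: Zero Trust ships next year."
question = "How might we win against Volanti's Zero Trust offering?"
for name, template in VARIANTS.items():
    print(name, "->", run_variant(template, context, question)[:120])
```

In practice you would score many such variants against a small evaluation set of past deals and promote the winning recipe into production.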

But what if everyone is using the same LLM and similar competitive data? Isn’t this a zero-sum game? And what if the client is using the same technology, second-guessing the sales pitch? (This seems likely.)

The winner will be whoever uses LLMs most creatively. The right approach could lead to unique in-house advantages, making LLM-hacking “recipes” the secret sauce for sales.

Moreover, with recent announcements, like Databricks’ work on Dolly or Stanford’s work on Alpaca, there is every reason to believe that enterprises could build their own LLMs without requiring the gargantuan budgets used to build GPT-4.

There are so many possibilities. One is to use LLMs to create tabular datasets from unstructured, semi-structured and structured data (e.g. the CRM, sales reports and various metrics tables) and feed those into powerful prediction frameworks like XGBoost or EconML to generate predictive and causal outlooks. Better still, those outlooks could be blended back into the LLM generative toolchain to enable sales teams to ask speculative or causal questions (a sketch follows the example questions below):

Which of the three contacts in the client is most likely to have objections and what might they be?

Why did the customer ask about our Composable API roadmap?
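To sketch the tabular idea promised above, here is a hedged example in Python: a handful of hypothetical LLM-extracted deal records, with invented field names and values, are used to train an XGBoost classifier whose win-probability scores could then be blended back into the prompt. In a real system the records would be produced by prompting the LLM to emit structured fields from CRM notes and sales reports, and EconML could be layered on for the causal side.

```python
# A minimal sketch of the "LLM-extracted table -> prediction model" idea.
# The records below are invented; in practice an LLM would extract them
# from unstructured CRM notes, sales reports and metrics tables.
import pandas as pd
import xgboost as xgb

records = [
    {"deal_size": 120_000, "competitor_is_volanti": 1, "num_objections": 3, "won": 0},
    {"deal_size": 45_000, "competitor_is_volanti": 0, "num_objections": 1, "won": 1},
    {"deal_size": 80_000, "competitor_is_volanti": 1, "num_objections": 2, "won": 1},
    {"deal_size": 200_000, "competitor_is_volanti": 0, "num_objections": 4, "won": 0},
    # ... many more rows extracted by the LLM ...
]
df = pd.DataFrame(records)

X, y = df.drop(columns=["won"]), df["won"]
model = xgb.XGBClassifier(n_estimators=200, max_depth=3)
model.fit(X, y)

# Score deals; these probabilities (and, with EconML, causal estimates)
# could be fed back into the LLM prompt for speculative "what if" questions.
print(model.predict_proba(X)[:, 1])
```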

Well, this is just the tip of the iceberg for enterprise applications of LLMs.

If you want to know more about how to do your own LLM-hacking for your use case, feel free to get in touch (or connect on LinkedIn).