Brand consulting in the age of AI – championing our value


Do you want value or efficiency? Our industry is actively navigating this tension in the age of AI.

As brand consultants, our central mission has always been to deliver maximum value to our clients.

Value creation is our raison d’être. It’s how our clients measure our impact. Clients define value differently, so the metrics constantly evolve to match their unique context.

Our clients primarily measure our role as value creators in financial terms, i.e., what commercial uplift has the business seen since the brand intervention?

That is the desired effect, with a direct cause: the application of human insight and analysis to design business solutions. This is how we, as brand consultants, showcase our value.

The business community has understood this value equation for decades.

Now, AI is complicating matters.

AI: Game-changing or endgame?

Depending on who you speak to, the arrival of AI either marks the demise of our industry or the start of a revolutionary new chapter.

Let’s take the doomsday scenario. Much has been written about the existential threat that AI poses to the job market. In 2024, the Institute for Public Policy Research reported that the second wave of AI adoption could displace 7.9 million jobs in the UK.

This is the worst-case scenario, but it does contextualise the magnitude of change that is expected with the rise of AI. The consulting sector is already feeling this impact.

A recent IBM report recorded that 86% of consulting buyers say they are actively seeking out services that incorporate AI and technology assets.

What we’re witnessing is a challenge to the established definition of value. People are now confining value to time and cost efficiencies.

The time and cost efficiency argument is hard to negate. With ChatGPT, Copilot or Midjourney, you can generate solutions at unthinkable speed. Time is money in business, and output that once took weeks of meticulous work can now be produced in minutes.

But this does not and should not undermine our value proposition as brand consultants.

Whilst these AI tools can generate answers at speed, there is limited evidence to suggest that they can deliver the invaluable blend of creative and critical thinking: the human interpretation of a problem, shaped by data analysis, experience and intuition.

Despite the liberally applied terminology of leading tech firms, these models are not actually sentient. They can play back valuable existing information, but they are not equipped to identify and reflect the needs and emotional drivers of human stakeholders. This means they can’t generate genuine human insights.

To generate powerful human insights, you must always analyse data through the prism of human empathy and understanding.

Understanding the strengths and limitations of AI adoption


Across our sector, AI adoption is moving at warp speed. We can already observe a transition from AI-readiness to AI-nativism: the shift from using AI as a supporting tool to promoting an AI-driven business model.

Alas, AI is not the new frontier. Its time has already arrived.

Consultancies are using the roster of AI tools available to speed up and complement what we call the “discovery phase” of a project:

  • To deepen their understanding of the market and customer without the need for primary research.
  • To minimise the laborious and time-intensive burden of traditional desk research.

There is some sense to this, particularly when faced with time constraints and client pressure to effect change. Or when starting to explore a complex, specialist business in a sector that you are less familiar with.

This is its indisputable strength, and this capability is only going to get better.

But its wider application, here and beyond the discovery phase, should be approached with caution.

AI vs human understanding

Our value as both business and brand experts lies in our ability to interpret raw data and apply insights to shape meaningful change.

In its crudest sense, when a company employs a brand partner, it is enlisting that partner for its critical and creative thinking, and for the strategies and services built on it.

If, however, a consulting partner is over-reliant on AI models, you’re in effect outsourcing your thinking to a business that will, in turn, outsource its thinking to an AI model.

Critically, as a consultancy, you’re not interpreting raw data. Instead, you’re interpreting an interpretation of raw data. One that’s been generated by models that can be subject to algorithmic bias or informed by defective data inputs.

Interpretation of an interpretation. You could argue that this deficiency also applies to humans. This is how we exchange and consume information, shaped and often distorted by human inference and interpretation.

But what you forfeit with AI models is that convergence of critical and creative thinking that stems from the powerful blend of EQ, IQ, lived experience and human intuition. In short, human understanding.


AI vs original research

As a business, you ultimately want your brand to be distinctive and differentiated: a brand that reflects your unique value proposition and your position within a congested marketplace.

The backbone of any compelling value proposition is a robust understanding of your customer. Customers who are complex and contradictory, governed by divergent wants, needs and motivations. Binary choices do not play a prevalent role in customers’ decisions and actions.

This is why brand consultancies advocate for primary research: the ability to learn from your actual people, your actual customers. To land on an insight or solution that has been informed and validated by your stakeholders.

This is the central mission of a brand consultant. To land on something that is credible (customer-centric) and impactful (performance-enhancing).

GenAI’s predictive insights are informed by industrial quantities of data. But the absence of a tailored, prescriptive research methodology focusing on real-world stakeholders brings their authenticity into question.

AI vs human scepticism

The latest generation of large language models is known as “reasoning models”, so named for their alleged ability to carry out critical thought.

All sounds very promising until you consider two things:

  • These models don’t think and therefore cannot reason. They are fed industrial quantities of data, and an algorithm then applies mathematical probability to generate a response. These are automated responses, not considered thoughts.
  • These models consistently output incorrect information. Creatively branded as “hallucinations”, these errors remain rife in the latest models. When tested for the accuracy of their responses, ChatGPT’s two latest reasoning models (o3 and o4-mini) were found to hallucinate on 33% and 48% of questions about public figures respectively, and on 51% and 79% of more general factual questions.

AI models can serve as effective platforms to collate and group foundational information. But these models lack the human intuition, subject matter expertise and lived experience to detect when something is awry. They lack the good old-fashioned probing cynicism that makes a good critical thinker.

AI vs commercial risk mitigation

Despite my evident reservations, the transformative potential of AI is undeniable, particularly in the fields of robotics, automobility and life sciences.

But the speed of its adoption means that due process and safeguarding protocols cannot keep pace.

The world of AI remains somewhat ungoverned. As a business, you’re forced to grapple with two, at times competing, instincts:

  • Maximising your earnings potential
  • Minimising your risk profile

We continue to see in the media the baggage associated with this nascent technology. Most critically, the “unintentional” utilisation of private data, protected IP and unverified informational sources that are required to feed these large language models. This poses serious ethical and legal questions and creates a potential outcome in which your brand and brand strategy are built upon data and insights that you do not own.

For want of a better word, plagiarism. The antithesis of meaningful and effective business and brand strategy.

In conclusion

As a sector, we need to champion our methodology. We have a tried and tested approach. It’s methodical and it’s thoughtful, and it creates value for our clients.

AI is here to stay and certainly has a role to play in our work. But it can’t do the critical thinking for us. And it certainly can’t empathise with the needs of your human stakeholders.

AI models are efficient and supportive research partners. But we should confidently state to our clients that the value lies in the human experts.

Clients should be asking their brand partners how they are integrating AI. And they should be demanding that these partners get the balance right:

  • Use AI to supplement their thinking, not shape it
  • Use AI as a research partner, not a strategic lead
  • View AI outputs with scepticism until their findings can be verified and validated by humans