The AI Culture War: Battleground for Cultural Competence
March 21, 2026

By Donnie Broxson – CEO & Cultural Intelligence Leader
Artificial intelligence has reshaped how marketing works. It analyzes massive datasets in seconds, streamlines workflows, surfaces patterns humans might miss, and generates ideas at unprecedented speed.
But beneath the excitement lies an uncomfortable truth. Artificial Intelligence is becoming the most influential storyteller in marketing without understanding the people it represents. And that should concern all of us. This is not a theoretical disconnect, but one that is happening now. At scale! Largely unnoticed, it has real consequences for how cultures are seen, heard, and engaged.
The inconvenient reality behind AI’s “intelligence” is that it doesn’t learn from society as it exists or as it is experienced. It learns only from the record society leaves behind.
That record (the internet, published data, historical content) was not built with cultural accuracy in mind. Minority communities have always been mischaracterized, oversimplified, or erased altogether. Language is flattened. Context is lost. Regional and generational differences are ignored. (Bear in mind that minority communities are not just ethnic or linguistic; they can be any subset of society: identity-based, economic, regional, and so on.)
When AI absorbs this material, it doesn’t question it. It reinforces it. The results are assertions that can sound confident while being culturally wrong. Dangerously wrong. Unlike a human misstep, AI doesn’t fail in isolation. It fails repeatedly, consistently, and invisibly.
Bias, when automated, stops looking like bias. It starts looking like truth. That “truth” becomes the base for future queries, launching a cycle where technology effectively skews culture.
This Moment Demands Urgency
Marketing has always shaped perceptions. Generalization has always obscured nuance. But never before has marketing technology had the ability to industrialize misrepresentation at this scale.
As brands and agencies increasingly rely on AI for strategy, ideation, and creative development, the risk is no longer a single campaign getting culture wrong. The risk is an entire system normalizing distorted representations out of efficiency and considering them valid because they are data-based (factual or not). Machine-generated does not equal accurate.
That should concern anyone who cares about relevance, growth, or responsibility.
Reframing the Choice Between Technology and Humanity
The conversation around AI is often presented as black and white: embrace technology or protect human insight. Progress or principle. Speed or nuance. Quantity or quality. That framing is flawed. AI is not the problem. Unsupervised AI is. Accepting Artificial Intelligence as a replacement for thinking or craft is a dangerous strategy.
When treated as an authority rather than an assistant, AI overreaches and replaces critical thinking. When treated as a collaborator, guided by people who understand culture deeply, it becomes powerful in the right ways. We should employ this tool as Assistive Intelligence.
The future isn’t about choosing between human intelligence and artificial intelligence. It’s about humans embracing the duty to teach the machine (of course that means the human must have the necessary expertise to be an effective moderator and guide).
Cultural Expertise Can’t Be an Afterthought
Cultural understanding isn’t something you add after the output is generated. It must be present at the point of creation. It should shape the questions, frame the context, and evaluate the results. This is where culturally astute professionals matter more than ever. Their role is not that of translators or validators, but of architects of the process.
They understand where data falls short. They recognize when something feels off even if it tests well. They know the difference between representation and reality, and they have the lived and learned experience to intervene before harm scales up.
AI doesn’t know what it doesn’t know. Experts do.
Augmenting expert insight with machine intelligence requires teaching AI with intention. This goes beyond better prompts (an important step) to include proactive expert guidance, critical review, quality-control parameters, uploading facts and meaningful research, and a shift in mindset.
Shifting our perspective means:
- Acknowledging that bias is baked into existing systems
- Accepting that speed without scrutiny is a liability
- Prioritizing cultural truth as a requirement, not a bonus
- Designing AI processes that assume oversight, not autonomy
Most importantly, it requires embracing the tenet that technology cannot replace perspective, history, or lived experience.
The Question That Will Define the Next Era
AI will shape how the next generation sees itself and how brands speak to the world. That influence is inevitable. The real question is whether AI will reflect true culture or only the version that has been most conveniently documented.
The answer depends entirely on who steps up to challenge assumptions, demand better inputs, and insist that culture is not optional in the age of automation. If we don’t teach AI who we really are, it will keep telling the world who it thinks we are. That probably doesn’t end well.