Causal analysis and market research: models, recipes, and robots

By Scott Porter

As often as I can, I attend a lecture series at UCLA called the Jacob Marschak Interdisciplinary Colloquium on Mathematics in the Behavioral Sciences.

The final lecture from the 2011-2012 academic year (June 1, 2012) was presented by Judea Pearl, a Professor of Computer Science at UCLA with interests in Artificial Intelligence and Causal Reasoning.

This is a high-level review of the framework presented by Professor Pearl and a discussion of some ways this framework is helpful in the context of brand and marketing research.

How does Artificial Intelligence research relate to solving more general research questions?

Here is an oversimplification of Professor Pearl’s past research: As part of doing research on how one would teach robots to make sense of basic relationships between events in the real world, Pearl had to develop and continue to refine a mathematical framework for representing causal relationships.

Humans intuitively handle reasoning like this every day, but a robot would need an explicit internal mathematical model to do it.

Creating these mathematical models turned out to be tricky. An illustration Pearl often uses to demonstrate the weakness in his early attempts is the relationship between rain and the grass being wet.

Pearl initially tried to handle the problem by using association. If a robot observed rain and the grass being wet together enough times, it could identify that the two concepts were related. Using these association-based models, the robot would correctly deduce that if it rains, the grass will be wet.

But the robot would also incorrectly deduce that if the grass was wet, it was raining, ignoring the possibility of alternate causes such as a sprinkler. To really re-create the intuitive human ability to make sense of events, robots need to be able to understand the causal relationships between these events.
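
To see why association alone fails, here is a minimal sketch in Python with made-up probabilities (the 20% and 40% figures below are purely illustrative, not from Pearl’s lecture). It simulates a world where rain and a sprinkler each wet the grass, and shows that the forward inference (rain implies wet grass) holds while the reverse inference (wet grass implies rain) does not.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Hypothetical world, chosen only for illustration.
    rain = rng.random(n) < 0.2        # it rains 20% of the time
    sprinkler = rng.random(n) < 0.4   # the sprinkler runs 40% of the time
    wet_grass = rain | sprinkler      # either cause is enough to wet the grass

    # Forward deduction: if it rains, the grass is wet.
    print("P(wet | rain) =", wet_grass[rain].mean())   # ~1.0

    # Reverse deduction the association-only robot would make:
    # if the grass is wet, it must have rained. The sprinkler breaks this.
    print("P(rain | wet) =", rain[wet_grass].mean())   # ~0.38, well below 1.0

An association-based model sees only that rain and wet grass co-occur; a causal model also records the direction of the arrows, which is what licenses one inference and blocks the other.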

After extending his framework to handle causal relationships, Pearl realized that this type of framework is useful for humans, too.

Although humans intuitively handle causal reasoning in simple cases, our intuition breaks down in complicated cases with many variables. In fact, in brand and marketing research we are often trying to determine what causes the outcomes we care about, as in a drivers analysis, and use that information to plan the best tactics to improve the chances of reaching our goals.

For example, we may want to understand which themes to promote in advertising in order to build desired perceptions. Or we may want to understand how to allocate spend across different media in order to best increase awareness.

How can this help us with the problems we are trying to solve?

I’ve been using Pearl’s framework in brand and marketing research for some time, and I especially love the graphical notation he has developed, which lets you visualize a problem as a diagram with arrows representing the relationships between variables.

These visualizations of relationships allow us to have meaningful conversations with clients about the assumptions we’re making as part of the analysis. The reasonableness of any analysis depends on having assumptions that you are willing to defend/accept. So having a way to talk about these assumptions that doesn’t require an advanced mathematical or statistical background is critical if we’re to get important knowledge out of experts’ heads and into our analysis.
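
As a concrete, and entirely hypothetical, illustration of what such a diagram looks like once it leaves the whiteboard, the sketch below encodes a toy advertising problem as a directed graph using the networkx library in Python. The variable names and arrows are assumptions invented for this example, not a recommended model.

    import networkx as nx

    # Each edge reads "cause -> effect"; the structure is a set of hypotheses
    # to be discussed and defended with the client, not an output of the data.
    causal_diagram = nx.DiGraph([
        ("category_involvement", "ad_exposure"),   # involved people see more category media
        ("category_involvement", "purchase"),      # ...and buy more regardless of the ad
        ("ad_exposure", "brand_perception"),
        ("brand_perception", "purchase"),
        ("ad_exposure", "purchase"),
    ])

    for cause, effect in causal_diagram.edges:
        print(f"{cause} -> {effect}")

The picture itself is the point: a client who would never read a regression table can still argue about whether a particular arrow belongs there.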

Once you have assumptions that you accept, the premise of Pearl’s framework is that a computer can use a set of mathematical laws to analyze the problem, and answer the following questions based on those assumptions:

– Can the effect I am interested in be estimated given the data I have (or plan to have)?
– If not, what data would be necessary to estimate the effect?
– If so, what is the recipe for estimating the effect?
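
One of the simplest recipes the computer can hand back, stated informally, is Pearl’s back-door adjustment: if the data contain a set of variables Z that blocks every non-causal path between the action X and the outcome Y, then

    P(Y | do(X)) = Σz P(Y | X, Z = z) · P(Z = z)

where the sum runs over the levels z of Z. In words: estimate the effect within each level of Z, then average those estimates according to how common each level of Z is. When no such Z is available, the framework can sometimes return a recipe based on other rules of Pearl’s do-calculus; when no recipe exists at all, it says so, and the diagram points to what would have to be measured to change that answer.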

Application

Consider a question like understanding the impact that changing one variable would have on another variable. This is a common question marketers face when trying to make strategic decisions about how to most effectively grow their business.

We can answer this question by running a randomized experiment, and comparing the state of the world with and without the change. However, it is not always feasible or ethical to run experiments to estimate every effect under consideration.

Pearl’s theoretical and mathematical framework allows one to determine whether it is possible to estimate this same effect without a randomized experiment, and if so, how to estimate it. To create the recipe needed to produce the answer, the computer uses the causal model to understand the relationships between all the variables and identifies which sets of variables you would need to examine more closely in order to estimate the answer.

For example, in order to estimate the effectiveness of an ad, you might have to adjust for certain differences between the groups that saw the ad and those that didn’t. The audience for a TV program that reviews the latest movies is potentially already more favorably predisposed toward movie watching, so a simple comparison of those who saw the ad and those who did not could overestimate the impact of the ad. Some adjustment would need to be made, and Pearl’s framework allows us to determine which adjustments are the appropriate ones to make given our set of causal hypotheses.
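
Here is a stripped-down sketch of that adjustment in Python, using simulated data and made-up numbers, and assuming (purely for illustration) that predisposition toward movie watching is the only relevant difference between the exposed and unexposed groups.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 200_000

    # Hypothetical world: predisposed movie fans are more likely to see the ad
    # (they watch the movie-review program) and more likely to attend regardless.
    predisposed = rng.random(n) < 0.3
    saw_ad = rng.random(n) < np.where(predisposed, 0.60, 0.20)
    p_attend = 0.10 + 0.25 * predisposed + 0.05 * saw_ad   # true ad effect = 0.05
    attended = rng.random(n) < p_attend

    # Naive comparison of exposed vs. unexposed overstates the ad's impact.
    naive = attended[saw_ad].mean() - attended[~saw_ad].mean()

    # Back-door adjustment: compare within each predisposition group, then
    # average the group-level differences by how common each group is.
    adjusted = 0.0
    for level in (True, False):
        group = predisposed == level
        diff = (attended[group & saw_ad].mean()
                - attended[group & ~saw_ad].mean())
        adjusted += diff * group.mean()

    print(f"naive estimate    : {naive:.3f}")     # roughly 0.15
    print(f"adjusted estimate : {adjusted:.3f}")  # close to the true 0.05

The value of the framework is not the arithmetic, which is simple once you know what to adjust for; it is the guarantee, given the causal diagram, that predisposition is the right thing to adjust for and that adjusting for it is enough.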

What are the latest developments in this type of approach and how might they be applied?

Pearl also introduced some extensions to his framework based on collaboration with one of his graduate students, Elias Bareinboim. The first extension is the ability to consider cases where there are suspected differences between environments: for example, differences between multiple markets, multiple sample sources or populations, or even differences between the laboratory and a market environment.

When we have data collected from various panels or other data sources such as client databases, there are often some aspects of the recruited population that may not be representative of the population we hope to make an inference about (e.g. people recruited from a customer database would differ in certain ways from potential customers).

It is also common that the way in which we measure something in a survey or in a lab setting may differ somewhat from the real world.

Pearl first captures hypotheses about which variables might have suspected differences by marking them on the causal diagram. As in his previous work, the idea is then to feed a computer this information and let it provide a recipe, or algorithm, telling us which variables we need to adjust for in order to get our answer.

Using the causal model, the computer can identify which variables are related to the ones which we suspect have differences, and use that information to build a recipe to adjust the data appropriately.

For example, identifying the variables we know behave differently in the laboratory and in the marketplace can help us design additional measurements into our study to account for those differences. We can then use Pearl’s framework to determine whether, given our design, we will be able to appropriately adjust for the differences introduced by the laboratory setting.
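
Here is a minimal sketch of that kind of re-weighting in Python, assuming, purely for illustration, that category involvement is the only variable that behaves differently between the laboratory sample and the target market, and that it was measured in both places. All the numbers and segment names are invented.

    # Lab study: estimated lift from the marketing action within each
    # involvement segment (hypothetical values).
    lab_lift = {"low": 0.02, "medium": 0.05, "high": 0.09}

    # The lab over-recruits highly involved people...
    lab_mix = {"low": 0.20, "medium": 0.30, "high": 0.50}
    # ...while the target market looks quite different (say, from a tracking study).
    market_mix = {"low": 0.50, "medium": 0.35, "high": 0.15}

    # Naive transfer: reuse the lab-wide average as-is.
    lab_average = sum(lab_lift[k] * lab_mix[k] for k in lab_mix)

    # Transported estimate: re-weight the segment-level lab results by the
    # market's mix of involvement (an instance of the transport idea,
    # specialized to a single differing variable).
    transported = sum(lab_lift[k] * market_mix[k] for k in market_mix)

    print(f"lab average lift      : {lab_average:.3f}")   # about 0.064
    print(f"transported to market : {transported:.3f}")   # about 0.041

Whether this simple re-weighting is licensed at all, or whether more variables need to enter the calculation, is exactly the question the framework answers from the marked-up diagram.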

Pearl also discussed a second extension to his framework in which he pushes this even further. He showed how it is possible to combine information across a number of past studies, all of which may have varying differences from the environment in which you would like to make an inference.

For example, say you are introducing a new product in a particular market and would like to forecast the effects of the marketing actions you plan to take. You may have a number of previous studies to rely on, but none exactly matching your current situation: perhaps you have introduced several other products in the market of interest, or have already introduced this product in other markets.

Even though none of the studies or samples may be exactly equivalent to the environment you’re interested in, this extension to Pearl’s method can take into account which aspects of each previous study are similar and combine the information accordingly.

This is preferable to what researchers often do today, which is to average together all of the past work to get a rough range or norm for the effect.

When we simply average all of the past work, we ignore how each study differed from the case we’re interested in and hope that, by pulling in a wide enough range of studies, those differences net out.

However, by taking a close look at which aspects of each study differ from the case we’re interested in, we can build estimates that take all of the current and past data into account in a rigorous, informed way, even for cases that may be very different from those in which the data were collected. For example, we might need to adjust our expectation for the impact of a particular medium relative to the impact we saw in another market because of differences in how that medium operates in each market.

I was impressed with these newest extensions to Pearl’s approach. We already combine data from multiple sources to make inferences along these lines. But the structure Pearl laid out, identifying the key potential differences and then having a computer generate the recipe for combining the data, makes it possible to combine more datasets with more variables than an analyst could handle manually. And without the effort (or anguish).

By generalizing this process into something a computer can aid us with, we free up our analysts for the tasks they are best suited for: identifying hypotheses about how the market works and translating these into potential courses of action for clients.

How can we take advantage of these types of advances?

There are easy ways to take better advantage of the advanced theoretical work that researchers like Professor Pearl have put at our disposal.

The most important step is to actually discuss the inputs that the analyst may need to use a framework such as Pearl’s for answering questions. It is important to have hypotheses about what variables might be part of the causal chain that produces the outcomes we care about. We also need to identify ways that the data that we have collected might differ from the market or situation of interest.

The result of these discussions doesn’t have to be a quantified model; it can be a whiteboard diagram with a few variables and arrows, or a list of important variables and considerations. But we should have these discussions instead of leaving the analyst to make less informed assumptions. Armed with this kind of information and discussion, the analyst can better answer clients’ most central questions.

Scott Porter is VP of Methods at Added Value North America, based in our LA office.
