By Allison Gutkowski & Kristopher Woung-Fallon
Marketers and advertising agencies have always conducted research. For decades, they have also tapped into methodology and concepts from academic psychology. Traditional marketing research aims to answer questions related to branding, product development, advertising, or evaluating potential new markets. The basic approach is to ask the people in that market what they think, which has inherent issues. Then, in the 1990s, brain imaging (fMRI) and other neuroscience tools (EEG, etc.) made it possible to visualize the workings of the human brain. With an objective window into the mind (in theory), marketers hoped these tools would bypass many of the problems associated with asking subjects overt questions.
There has been a lot of chatter since then about the use of neuro-measures in media, particularly the use of metrics and norms for predictive purposes. This raises questions such as: Are norms a measure of mediocrity? What is a norm during times of global crisis? And how does the industry’s need for metrics and norms align with neuroscientific output? Or does it?
In the webinar Norms, Metrics, & Media Madness: A Frank Discussion on Norms, Metrics, Neuroscience & Media Testing, we dive into these ideas and more with a panel of experts. Below, we cover several highlights of the live session on this engaging topic within the consumer research industry…
Michelle Niedziela, PhD, VP of Research & Innovation at HCD Research, kicks off our panel discussion by establishing how normative databases are typically applied and defining some of the issues that can arise in their application:
In this clip, Michelle explains how normative databases are typically used to compare what the general public is thinking now with what they were thinking previously, and how the issues she highlights can shape the consumer experience: if the market changes, the database may no longer be relevant.
Norms are somewhat of a “moving target.” Given these constant changes in the market, the value lies in updating norms as they evolve rather than debating whether they are good or bad.
Watch Raymond Petit, Executive Director of the Masters of Science in Business Analytics at the Rady School of Management at the University of California, San Diego, explain this further as he calls on the industry to define what a norm truly is scientifically and then hold normative databases to that standard:
Many companies provide their own metrics and norms that are supposed to be generalized measures. In practice, these can carry some form of bias and may not be reflective of the general population or a “true norm.”
Anna Wexler, Assistant Professor in the Department of Medical Ethics & Health Policy at the Perelman School of Medicine at the University of Pennsylvania, speaks to this point in our panel discussion here:
In this clip, Anna raises the potential limitations of consumer neuroscience technology that, if not considered, could exclude specific segments of a given population, such as limitations related to hair texture or length across demographics.
With neuroscientific measures, it is important to have relative baselines, or comparisons within subject, built into the research design.
Watch Vinod Venkatraman, PhD, Associate Professor in Marketing & Director of the Center for Applied Research in Decision Making at the Fox School of Business, Temple University, explain the value of references in consumer neuroscience here:
In the context of ad testing, Vinod advises identifying the best reference for the kind of test you are conducting and including those reference ads in your test, so you can see both how the new ads are performing and how the reference fits within past norms.
Vinod goes on to explain that doing this corrects for individual differences at the physiological level, so you can then look at the relative change across different ads and see how they compare.
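To make this idea concrete, here is a minimal sketch of what within-subject correction against a reference ad might look like. This is not the panelists’ actual method or any vendor’s tool; the engagement scores, variable names, and z-scoring approach are illustrative assumptions only.

```python
import numpy as np

# Hypothetical data: engagement[subject][ad] = mean physiological engagement
# score for that subject while viewing that ad (values are made up).
engagement = {
    "s1": {"reference_ad": 0.42, "new_ad_A": 0.55, "new_ad_B": 0.39},
    "s2": {"reference_ad": 0.61, "new_ad_A": 0.70, "new_ad_B": 0.58},
    "s3": {"reference_ad": 0.35, "new_ad_A": 0.41, "new_ad_B": 0.44},
}

def within_subject_zscores(scores):
    """Standardize one subject's scores against their own mean and spread,
    removing individual-level differences in baseline physiology."""
    values = np.array(list(scores.values()))
    mu, sigma = values.mean(), values.std() or 1.0
    return {ad: (v - mu) / sigma for ad, v in scores.items()}

# Normalize each subject, then average each new ad's change relative to the
# reference ad across subjects.
normalized = {s: within_subject_zscores(ads) for s, ads in engagement.items()}
new_ads = [a for a in next(iter(engagement.values())) if a != "reference_ad"]
for ad in new_ads:
    deltas = [normalized[s][ad] - normalized[s]["reference_ad"] for s in normalized]
    print(f"{ad}: mean relative change vs reference = {np.mean(deltas):+.2f}")
```

The key design choice sketched here is that each subject serves as their own baseline, so comparisons across ads reflect relative change rather than differences in raw signal levels between people.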
Whether pre-pandemic or in today’s new world, we should stop and think about how relevant the norms and metrics we rely on really are. What goes into them? How is the data collected? Are they relevant to your product category? Are we seeing the full story?
It is important to critique any norm or metric in this way, because go/no-go decisions are often based on simple scores or comparisons to norms. Understanding the potential strengths and limitations of the norms and metrics themselves, as well as the way the data is collected (such as the number of electrodes on an EEG headset), is vital to pressure-testing how a norm or metric will work for you and your specific research question. If you are interested in connecting with Team HCD to discuss this trending topic further, please contact Allison Gutkowski (Allison.Gutkowski@hcdi.net).