Capabilities, Opinion

When Did “Intercept” Become a Bad Word?

We get it: with so much primary and secondary data at our fingertips these days, with so many creative and interactive online approaches, and with panel targeting homing in on lower- and lower-incidence groups, the idea of an intercept can seem… old-fashioned. Why navigate the logistical challenges of in-person data collection when you can just launch a quick web survey? It’s a no-brainer, right? Or is it?

Technological advances make managing intercepts much easier, and clients who leverage this approach are reaping the benefits.

As we’ve seen the industry shift away from in-person quantitative engagements, we have to wonder: are we moving in the right direction? Of course, web surveys have their place—we collect hundreds of thousands of online surveys each year, and firmly stand behind this methodology—but are they always the right approach? What happened to the good old-fashioned intercept, and why is it so often cast aside?

Intercepts have evolved a lot over the years. The painful clipboard and pencil approach has been largely replaced by interactive, mobile-optimized engagements, improved efficiencies, and (in our experience) a new appreciation from the respondent perspective. What’s remained the same is the opportunity to capture authentic feedback in real-world environments. What does this mean? Essentially, there are numerous benefits you may be overlooking. For instance:

  1. You can go directly to your target customer. Often, we know so much about a particular specialized audience (kayakers, woodworkers, vegans, etc.) that we think they’re everywhere. But the truth is, no matter how well-defined you feel your audience is, by definition there are always more people in the general population who don’t qualify. Intercept research provides the perfect opportunity to go where your target segment is concentrated. Outdoor shows, water-related events, or even riverside parks and boat launches provide excellent environments to find kayakers in numbers that make research more affordable.
  2. Respondent screening is honest and immediate. Despite our love for online panels, there are times when they do make us nervous. Even with our most stringent quality measures, it can be hard not to wonder about the face behind the screen. This might be less of a concern on a general population survey, but what about your specialized, low-incidence target? Are they who they say they are, or are they just checking every box on a “which of the following” screening question? On the other hand, there’s no question about the authenticity of a kayak owner when they are interviewed while unloading and launching their craft.
  3. You can rely on “in-the-moment” feedback, rather than respondent recall. Much of the work we do in both quantitative and qualitative research is based around trying to understand the consumer’s mindset when they are purchasing goods or services.  Instead of asking what other brands were considered back when they were shopping, we’ll go directly to the point of sale and observe and record the reasons why each brand is considered or rejected.
  4. Your respondent base includes more than the “tails.” Many of our retail clients rely on cost-effective “snapshots” to gauge the basics: satisfaction with personnel, ability to find a product, etc. Whether this is done through receipt surveys, a customer panel, or a post-purchase email, there is a universal challenge: those who are highly satisfied (and want to give praise) or highly dissatisfied (and want to complain) are much more likely to respond. This approach becomes problematic when a broader, more representative sample of shoppers is required. Highly satisfied and dissatisfied respondents might be sufficient if evaluating speed of checkout, but do we want to rely on them for testing new store concepts? In these situations, many of our clients turn to us for intercepts instead. In fact, some will use their snapshot data as an impetus for more substantial, in-person initiatives.
  5. Intercepts can feed into other research phases. The purchase process is nuanced and frequently difficult to define. Often the tactics we use to find respondents at a certain stage can create unnatural situations (for example, asking a screened respondent to wait to buy a product until the scheduled shop-along). Intercepts afford the ability to find true in-the-moment purchasers and turn them into longitudinal participants. The on-location participant can be surveyed, AND she can be recruited for an actual shop-along. She can even host us for an OOBE test in her home, which can turn into a needs-assessment ethnography, and she can complete follow-up usage surveys as well!
  6. You can visually present proprietary products or concepts without fear of leaks. Early prototypes and/or confidential competitive information can be shown to participants without their ability to screenshot the image (during an online interview) and send/post it elsewhere.
  7. Capture audio and video of the interview. One-on-one interviews done in situ—at the kayak launch site, at the outdoor store during purchase—are incredibly powerful to share with marketing/product/brand teams and even upper management.  Additionally, video that not only captures consumers talking about the product but shows them interacting with the product can be extremely beneficial in training sales and manufacturing teams.

It’s hard to believe it’s been nearly fifteen years since MDC first introduced our Platinum Intercept Program. Back then, we were still lugging heavy boxes of paper surveys around the country, and assembling large overnight data entry teams to turn findings around quickly. At the time, many of our clients approached us because the entire process seemed overwhelming to handle on their own: printing and assembling surveys, laminating show cards, shipping sufficient numbers of packets, clipboards, pencils, gift cards, etc., and then returning everything for high-volume data entry. On top of all that, the logistics of managing a multi-city team of interviewers (and all the inherent questions, challenges, store issues, etc.) was often a full-time job. We’ve always prided ourselves on handling these aspects seamlessly, drawing on our 40 years of experience in this space.

Now, of course, technological advances have made managing intercepts much easier, and our clients who continue to leverage this approach are reaping the benefits. Our intercept research has helped successfully redesign store layouts, choose lighting concepts and store displays, optimize channel positioning, and build employee training programs. We’ve even shot professional commercial soundbites as part of in-store interviews.

If you’ve become disillusioned with intercept research because it feels too antiquated, too cumbersome, or too expensive, it may be worth a call with us. Our vast experience, combined with our in-house technology, makes this a surprisingly cost-effective solution. In fact, many of our clients attest to the fact that authentic, in-person feedback has value far outweighing any logistical expenses.

Intrigued? Contact us to learn more. We promise we won’t be waiting with a clipboard and pencil.

Irene O’Reilly — Account Executive, MDC Research

irene@mdcresearch.com


MDC RESEARCH  –  8959 SW Barbur Blvd., Suite 204  Portland, OR 97219  –  (800) 344-8725
Learn more at:  www.mdcresearch.com            Copyright 2017, MDC Research

 

Opinion

Why survey design is like pulling teeth

Most people don’t rely on their own personal knowledge of dentistry to conduct dental exams and treatments as part of their overall health maintenance; instead, they go to a dentist. Similarly, most businesses don’t count survey design as a core competency, but they know market research is an essential part of maintaining a healthy business. The research industry has seen some truly remarkable innovations over the last decade, including online DIY survey technology and other SaaS offerings which allow anyone easy access to the mechanics of survey creation. What these tools don’t do, however, is teach the user how to write an appealing and scientifically sound questionnaire, nor do they point out design issues with surveys created on the platform. There is no “survey check” that puts a squiggly red line under problematic question wording or formats.

It’s a specialized skill to be able to understand the analyses needed to accurately address the objectives, then envision the structure and type of data that will be required to perform those analyses.

The temptation of “cheap” survey access, plus lack of professional guidance, has led many organizations down a path fraught with peril, and even caused some to make fatally flawed decisions. Most DIY survey platform users don’t have enough training in the theory and practice of survey design to create an instrument that will yield the data they need to make optimal, informed decisions. There’s nothing wrong with acknowledging that you don’t understand all aspects of questionnaire design, just like there’s nothing wrong with knowing that you don’t know how to safely and painlessly extract your own tooth. Both are complex skills that require extensive training, and tackling the job yourself is likely to lead to unintended (and quite possibly disastrous) results.

Mere exposure to surveys is not comparable to formalized education and experience in questionnaire design.

The DIY survey is especially rampant due to the sheer volume of surveys in modern life. Everyone is exposed to multiple, ongoing requests to participate: there’s the survey request printed at the bottom of your retail receipt; the pop-up survey that appears on your favorite websites; the phone call from a pollster or market research agency; the sincere request from a charity to complete a mail survey; the list goes on and on. When the average person is exposed to so many surveys (both well-written and poorly-written), some begin to believe they understand exactly how they work.  For those who have taken part in dozens, or even hundreds, of surveys over the years, it’s tempting to believe they know enough to write one from scratch. But mere exposure to surveys is not comparable to formalized education and experience in questionnaire design.

It costs more, and takes more effort, to clean up or revise a poorly written survey than to write a good one from scratch.

I studied research methodology for six years in college, and take part in industry organizations, forums, continuing education, and conferences throughout the year. My firm has several such full-service research consultants who work together to optimize the work we do for our clients.  We design and execute more than 500 surveys a year—every year—and we’ve been doing so for almost 40 years. Even with all that experience, we still provide critical feedback and improvements to one another’s surveys, and passionately debate the best way to elicit the necessary data. Survey design is a major part of my specialized skill set, and one in which I, and other researchers, take great pride.

If you haven’t formally studied survey methodology, writing your own survey is a bad idea. Here are the top six reasons why:

1. You’re too close to the topic

If you’re considering writing your own survey, it’s probably related to a topic that is important to you, or to your ability to fulfill your professional obligations. But your investment in the topic, and the importance of the outcome to you, are potential stumbling blocks. It’s all too easy to allow your own biases to creep into question wording. When you pass a draft around your office, your internal team is not apt to recognize instances when your corporate culture and/or jargon are skewing the meaning of a question for respondents. You’re also unlikely to recognize all the opportunities for eliciting negative ratings and comments, or how to use them in a constructive and unbiased analysis of the final data.

2. You’re overburdening your respondents

It’s very tempting to go overboard on survey length. The thinking is “While we’ve got their attention, let’s also ask X, Y, and Z.” This tendency to keep expanding the survey in the name of efficiency leads to respondent fatigue, poor data quality, and survey abandonment. You need a survey that is custom-designed not only to address your questions and objectives, but to be appealing and relevant to respondents. The rare customer or potential customer may share your fascination with the minutiae of the purchase decision process for your product or service, but the vast majority do not. They may like your product, and they may even love your brand, but that doesn’t translate into wanting to spend 20-30 minutes answering detailed and repetitive questions. Factors like value proposition, overall length, complexity, and flow must all be considered, while still addressing the key research objectives. This is not a balancing act for the untrained to tackle.

3. You haven’t thought through how the data will be analyzed

It’s not enough to brainstorm a list of questions for which the responses will be “interesting.” Too often, that leaves novices with a file full of data they’re not sure how to use to their best advantage. A good researcher knows how to think “backwards” from the objectives. It’s a specialized skill to be able to understand the analyses needed to accurately address the objectives, then envision the structure and type of data that will be required to perform those analyses. From there, an experienced researcher can design a questionnaire that will accurately and actionably populate the required data fields, and successfully fuel insightful analysis.

4. You don’t know why some formats are better for certain questions

Related to #3 above, there are usually several possible ways to ask any given research question. Consider the following examples:

  1. What sporting events do you plan to attend this month? (Open-ended)
  2. Are you going to attend a sporting event this month? (Yes/No/Not sure)
  3. How many sporting events will you attend in the next 30 days? (Numeric response)
  4. How many of each of the following types of sporting events will you attend in the next year? (Grid with ranged responses)

I’ve had clients write up questions in all of these formats, but most are unaware of the proper uses, benefits, and disadvantages of any particular format. It’s important to consider how the data will be used before settling on question structure. Here’s a pop quiz on just this one example among dozens of factors requiring evaluation for every single survey item:

Match the question formats above with the advantages and/or disadvantages listed below.

a.  Often used to screen respondents for qualification; not useful for deep segmentation

b.  Used to evaluate relative standing; space-efficient but adds to respondent fatigue and usually requires rotation or randomization

c.  Good for exploratory insights; creates respondent fatigue and often requires additional back-end coding

d.  Allows frequency-based analysis and segmentation; prone to overestimation

5. You don’t suffer over every word

Okay, maybe “suffer” is too strong a word, but a talented survey writer can step back from a survey item and look at it from all angles. Does it really ask what you think it asks, or is there room for multiple interpretations? Is there wording that creates subtle or implicit bias? Is the question, as crafted, likely to lead respondents to answer more positively or negatively? What are the impacts of previous questions in the same survey? Do the response codes and options align to the question, and are they written in a way that allows every respondent to provide an accurate and complete answer? Each of those questions (and many others) is the subject of literally hundreds of research community articles, debates, and experiments. It’s simply not possible for someone with little to no knowledge of this background to adequately weigh these factors. Trust a trained research professional to help you create survey questions that will give you accurate, actionable, and projectable data.

6. You don’t have time and money to burn

It costs more, and takes more effort, to clean up or revise a poorly written survey than to write a good one from scratch. There is no value to surveys which, through inexperience, deliver incomplete, unactionable, or inaccurate data. I can’t tell you how many times I’ve had a client insist they will “save money” by writing their own survey, only for them to end up with unforeseen expenses and extended timelines when the results are suboptimal at best, or unusable at worst. As professional researchers, we get it: we know your budget is limited and your timeline is tight. We want to help, really! When it comes to survey research, the GIGO principle (garbage in, garbage out) cannot be dismissed. Don’t be afraid to invest a few hours of your research professional’s time on upfront survey design—it will pay off in data accuracy, quality, and utility.

Your dentist genuinely loves teeth, and has spent years studying and practicing the best ways to take care of them. A trained research professional genuinely loves survey design and analysis, and has invested significant time and effort in honing those skills. Let us keep you, and your business, all smiles.

Alice Blackwell — Vice President, MDC Research

alice@mdcresearch.com



 

Capabilities, Opinion

Event-Based Research – Are you missing out?

Trade shows, conferences, vendor events, channel partner trainings, product launches: chances are, your company either sponsors or attends some combination of these in-person events over the course of the year. In fact, according to Meeting Professionals International, the US spends more than $122 billion annually on corporate events. This price tag exists because these functions are valuable—they’re typically well-attended, and are highly effective at building brand awareness, improving customer relations, and introducing new products and services.

There is likely no faster, high-quality way to conduct focus groups, usability sessions, or IDIs than leveraging an existing event.

In short, company events are successfully leveraged for a number of different purposes; however, businesses like yours are often overlooking a significant opportunity to find value in these settings—conducting on-site market research.

Event-based research has been a critical cornerstone for numerous enterprise companies, but oddly is underutilized in the corporate landscape as a whole. With nearly twenty years’ experience conducting research studies at large-scale corporate events, we want to help you understand the unparalleled value of this kind of approach.

Event-based research has numerous benefits, most notably:

  1. Location, location, location. If your company sponsors or hosts a national or regional event, it is typically to bring strategically valuable audiences together in one central location. This could be key customers, business partners, employees, sales reps/distributors, etc., but one thing is certain: it represents a tremendous opportunity to address qualitative research needs. Engaging this strategic audience in research ensures:
    • Prequalified audience members. The very thing that makes these attendees valuable to your event also makes them perfect candidates for insightful research.
    • Central location. Because the audience is already attending your event, travel logistics and geographic distribution are greatly simplified.
    • Key staff present. Regardless of the audience, it is also extremely likely that key members of your management and/or marketing team are also in attendance, making it easy for them to observe qualitative sessions.
  2. Cost savings. The benefits of location listed above translate directly into significant cost savings for the project. Researchers have a ready supply of qualified participants, available time blocks in which to schedule them, and no need to travel to several locations around the country (or globe) for the actual sessions.
  3. Shortened timeframe. In addition to cost, qualitative initiatives take time. When you factor in design, recruiting, travel, reporting etc., multi-market focus groups can easily span eight weeks or more. We’ve seen clients reluctantly shelve qualitative efforts (or, worse—attempt to use a quick-turn survey to collect what should have been qualitative insights) because decision-makers view the timeline as “too long.” There is likely no faster, high-quality way to conduct focus groups, usability sessions, or IDIs than leveraging an existing event.

Why not use the opportunity of a competitor’s event to conduct competitive BI research or even to learn about your company’s brand equity among this important audience?

With registration data helping us home in on key participants (or, in some cases, our trained recruiters screening in real time on the show floor), recruiting is a breeze. Sessions are considered part of the conference schedule, so show rates and engagement are high. Well-designed event research can even include debriefing/presentation sessions with company management at the end of the event, meaning key players can head back to the office ready to implement the research findings.

  4. Content feedback. Strategic announcements of new products and/or services made during the event provide the perfect opportunity to obtain feedback directly from attendees. These insights can then be quickly leveraged to support broader roll-out readiness and marketing/communication efforts. It’s also the perfect place to obtain feedback on the event itself, and provide input for improvements for the next one.

These benefits are easiest to realize when your company is the sponsor of the event, but there are advantages to event-based research even when this is not the case. Your company may be a participating sponsor of an overall industry event, or an event primarily hosted by one of your strategic partners. The audience is still compelling, your key personnel are still likely to be in attendance, and most of the cost savings can still be realized.

In fact, many of these advantages can be obtained when you’re simply aware of an event, or even when it is being held by one of your competitors. Why not use the opportunity of a competitor’s event to conduct competitive BI research or even to learn about your company’s brand equity among this important audience?

The bottom line is that research should be a dedicated part of your event planning. You’ll be hard-pressed to find these cost and time efficiencies elsewhere in your research budget.

Not sure how to set up a successful event-based research program? It can seem a daunting task to anyone new to this approach, and that’s where we’re eager to help. In fact, our current record is 53 one-hour focus group sessions over a three-day event, covering more than 40 topics with five moderators, but we’ve also done two to three sessions at smaller, regional events.

Contact us to learn more about why such clients as Microsoft, HPE, AZZ, Autodesk and many others have engaged with us to develop custom event-based research programs at large and small conferences throughout the world.

Warren Beymer — Vice President, MDC Research

warren@mdcresearch.com



 

Opinion

10 mistakes companies make with their market research efforts

I’ve wanted to write this article for nearly a decade. Many in my firm cautioned against it, not wanting to endanger our relationships with existing and potential clients. “Don’t get on your soapbox,” they pleaded. Perhaps they’re right, but as the president of a mid-sized, full-service research company with more than 30 years in the research industry, I’ve seen just about everything and believe there are a few issues that still need to be addressed on the client side.

Some of you might think “Who is this guy to tell us all the things we’re doing wrong?” Well, like most good advice, it comes from a place of concern and caring. Know this: at MDC Research we love our clients, we love research, and we constantly strive to make it better. If you recognize yourself in even one of the bad traits listed in this article, and want to improve the value of your research, we want to work with you.

Every one of these ‘mistakes’ is described in order to make your client-side research budgets more effective. Yes, research firms would benefit as well, from more efficient workflows to more exciting methodologies. But ultimately, we exist to make our clients more effective and successful, and therefore I submit this list.


1. Sending all research projects out for bid

There is a common belief that competitive bidding is the best way to review and select vendors for upcoming research projects. In reality, nothing could be further from the truth. Researchers are often hesitant to submit their very best, most creative research design, for fear that an “expensive” design may not even make the first cut.

Corollary: Frequently, a second round of proposals/bids is requested, using the intellectual property of one or more of the vendors who originally submitted a design. This is often couched as: “we’ve re-imagined the project and would now like you to bid on…” The original—creative, experienced, knowledgeable—vendor is now faced with a situation in which they must re-bid against vendors who were incapable of conceiving the sophisticated design in the first place. The other vendors now bid on the original design but are not familiar with its nuances and requirements, and therefore submit bids far below the cost proposed by the original vendor. The result is the client getting ‘cheap’ research from a company that doesn’t fully understand the design and is less capable of implementation.

Market research companies should be up for periodic review, exactly like advertising agencies and outside legal counsel. You’ll never get the very best from a research vendor without building a true working partnership with them, and that can’t be done while sending each individual project out to bid.

There is a reason your company doesn’t send every advertising campaign out for bid, nor do they ask attorneys to bid on writing every contract or handling every negotiation. Heck, your company probably doesn’t even ask suppliers of consumable commodities (paper/water/toner) to bid on every shipment.  Instead, most companies have annual or multi-year contracts for these services. Market research companies should be up for periodic review, exactly like advertising agencies and outside legal counsel. But you’ll never get the very best from a research vendor without building a true working partnership with them, and that can’t be done while sending each individual project out to bid.

If you’re worried the removal of the per-project bidding process will result in increased costs, simply demand transparent pricing. You should be able to calculate the cost of a project before you receive a proposal from any research vendor that is your true partner. (Full disclosure: we share our cost calculators directly with our clients and encourage their use). Review the designs your research partner submits and don’t be afraid to ask them to rethink any design you’re not completely satisfied with (much like you would do with an ad agency’s creative pitches).

Still not convinced?  Have a heart-to-heart with any of your current research vendors, and they will tell you they provide their very best pricing to those clients with whom they have the strongest connection and understanding of what to expect in the working relationship.


2.  Not getting the most out of your research vendor

It’s a fact: in today’s world of frequent (and sometimes scheduled) company reorganizations, we, as research vendors, often find ourselves with more knowledge of and history with a client’s products, services, or divisions than those within the company. It is the norm for us to have decades-long relationships with our clients, often spanning at least a half dozen primary contacts on the client’s side. On ours? We still have the original account team, mostly composed of principals in our firm.

Our knowledge of what research has worked for your company and its products, our understanding of your market and its segments, and even our ability to navigate your internal politics is encyclopedic.

Our knowledge of what research has worked for your company and its products, our understanding of your market and its segments, and even our ability to navigate your internal politics is encyclopedic. You’d be shocked at how often a new team asks us to conduct research that has already been done. We’ve lost count of the number of times we’ve seen a research request along the lines of: “We need to better understand our customers and more fully integrate with their business model,” yet we know the exact date/year that very project was conducted, its outcomes, and (often, but not always) what was done internally with the findings. But that knowledge falls on deaf ears all too often, because we’re not considered part of the team, and all the people in the new positions want to do is pick the lowest-cost provider to conduct the (redundant) research for which they currently have budget.

Fully engaging your research vendor means you can leverage existing data (that no one on your team even knows exists) while spending that same budget to further flesh out the original findings or on another initiative altogether. Here’s a secret: we hate wasting time and money doing the same research over and over even more than you do. Let us help you get more out of your research budget. That’s when we can really contribute value above and beyond the parameters of the current project under consideration.


3.  Thinking of research in terms of discrete projects, not as a process

The very best use of corporate research is an ongoing process that constantly improves its subject. Most research requests that hit my desk are ‘one-offs,’ basically designed in a vacuum with a ‘start’ and a ‘finish.’ There may be some reference to an older study (think two years, not two months old) but this is usually connected to a desire to ‘update’ the information, rather than building on and internalizing the findings from the first study. Additionally, we’ve all seen our share of tracking studies that just chug along spitting out the same banal ratings/scores month after month, year after year (see mistake #10). There are sometimes projects that are ‘multi-phased’ with both quantitative and qualitative elements. But what we rarely see is the type of meaningful, ongoing commitment to a research program that truly produces actionable results to excite and motivate upper management.

Issues raised by the initial study deserve deep follow-up to fully understand, but were unknown at the time of project initiation.

We’ve all heard the expression: “every good answer begs two additional questions,” and in the cost/benefit world of market research this is often very true. Those additional questions are nearly always “why is it that way?” and “how can we change/improve it?” While those questions are probably asked superficially in your original study, they do not attain the depth of what you really need to know. More commonly, issues raised by the initial study deserve deep follow-up to fully understand, but were unknown at the time of project initiation. Without a research process to provide continuous learning, this critical information will never see the light of day.

Survey questionnaire real estate is a finite resource – we’re limited by money and respondent time/interest constraints that apply to each project, but can be effectively addressed in an ongoing research process.

A well-designed, ongoing commitment to market research for a particular product, service, or message delivers that nuance, leaves fewer questions unanswered, parallels the development of said product/service/message (i.e., adapts to the changes inherent in any production roadmap to stay relevant and up-to-date), and can even change the very personality of the development team/company. A true commitment to always knowing how the market will react to your development is critical, and reduces the potentially exorbitant expense of missing the mark because your research was out-of-date, shallow, or didn’t reflect your current build.


4.  Getting your vendor involved too late in the process

Our company is nearly always asked to provide examples of our experience in research, in our client’s industry/business, and in a specific methodology. After nearly 40 years in the business, this experience is usually quite impressive to our clients. However, sometimes the very next thing that occurs is that we’re given an RFP with a fully baked methodology that doesn’t match (or optimize) the goals of the project. And when we attempt to point this out, we’re often met with silence or deflected with “that’s what our CMO/VP Marketing/Marketing Manager wants,” which immediately negates the years of experience and thousands of successful projects which made the client want to work with us in the first place. Rather than helping the company craft a project that most accurately and efficiently meets their informational goals, we’re reduced to the role of order taker/fulfiller.

Your vendor can improve turn-around time, remove extraneous and expensive design features, and crystallize the data delivered into actionable insights.

Instead, choose your research vendor based upon trust, knowledge, experience, and ability to deliver, and then include them in the early stages of research design. I guarantee, if you’ve chosen wisely, the vendor can improve turn-around time, remove extraneous and expensive design features, and crystallize the data delivered into actionable insights. Perhaps more importantly, the vendor will understand every aspect of the reasons the project is being undertaken, the politics and personalities of those on the project team, and can design research to better deliver the specific data required so your business can move forward.

We see more surveys and other research projects in a month (about 75 – 100) than many companies conduct in a decade. Let us put that experience to work for you from the outset; don’t handcuff us to a design that we know we can improve upon if given the chance.


5.  Your data collection doesn’t focus strongly enough on negative issues

Sure, everyone on your company’s product team wants the survey to produce positive ratings for the new widget/service/message. But it’s very difficult to take only positive feedback and tell your C-suite or front-line staff: “our customers already like our product, now let’s do better.” Instead, focus on finding every opportunity to make your products and services better.

Homing in on potential obstacles to your offering doesn’t mean the research is trying to ‘kill it’—in fact, quite the opposite is true. Well-designed research intended to tease out all potential roadblocks creates data that is a veritable roadmap to success.


6.  Conducting too much research in-house

Bluntly put, in-house market research professionals are often too close to the subject to produce fully objective research results. It could be as ‘innocent’ as allowing a few marketing-generated adjectives to remain in your product description (adjectives that came directly from the marketing department/product group/engineering team). It could be leading and/or biased questions, or even the omission of entire lines of critical questioning for the sake of expediency. Worse still, sometimes internal pressures and politics mean the research has virtually no chance of being objective.

We all understand the value of independent, outside counsel in legal matters, and the same is true for research.

Whatever the case, an objective outside vendor is better positioned to scrub all bias out of the survey design, questionnaire document, and data analysis. We all understand the value of independent, outside counsel in legal matters, and we all place more value on objective, third-party white papers and case studies than we do on those created in-house with an explicit or implicit bias. The same is true for research. At its core, the goal of market research is to reduce risk. Don’t compromise that by allowing any bias to creep in. The risk on the back end is too great, and if unchecked it will defeat the entire purpose of the research.


7.  Excluding your research consultants from the implementation of the findings

Lots of companies have positions or even departments dedicated to ‘customer insights’ or ‘customer advocacy’ or, at the very least, like to include their ‘customer’s POV’ in internal training and marketing discussions. But politics, inertia, and incomplete understanding of all the market’s segments often cause this message to become diluted or even abandoned. If you’re serious about always including your customer’s perspective, you should include your market research consultant on all ‘customer voice’ task forces and initiatives. Like any objective third party, we will ensure that your message remains on target and will help real change occur. Don’t let the job stop when customer data is delivered; continue it until it becomes part of your company’s DNA.


8.  Conducting research that is broad, but shallow

Your customer insights department puts the word out that they’re conducting research and the requests come flying in from all quarters, with an array of disparate suggestions for “nice to know” questions that stray far from the core goals of the project. Resist the urge to add items that broaden the scope but subtract from the rich data that a targeted, deep-dive project can deliver. The very best—and most actionable—research depends on a design that focuses on the subtle and possibly complicated customer attitudes and behaviors around the issue at hand. Focus laser-like on the core objectives and ignore peripheral questions. If you find that some of the “off topic” questions are critically important to your company, then they probably deserve their own deep-dive research.


9.  Over-reliance on online methodologies

The advent of online research capabilities has seduced many companies (both client and vendor) into using it nearly exclusively. At first blush, it’s easy to see why: it’s usually less expensive and faster to conduct. But dig a little deeper and troublesome underlying problems surface.

Professional respondents. Straightliners, speeders, non-committed respondents: we’ve seen them all and they sprout like weeds within online panels. Don’t believe me? Just notice how much time online panel providers dedicate to assuring you they have the issue under control.  This is a problem that gets worse over time, as more and more people succumb to the come-ons proliferating all over the internet. (Fun activity: Google “Get Paid to Do Surveys” and sift through just the top 20+ results.)  Anybody remember ads and promises to get paid to take telephone surveys? Didn’t think so.

Probing and clarifying open-ended responses. Market research companies spend hours every day, every shift, training their interviewers to probe for fuller open-ended responses and to clarify them into meaningful, useful answers. This simply doesn’t occur online. In fact, one of the ways every market research firm cleans online data is to search for meaningless drivel and/or random typing in open-ended response boxes.

It is true that some online, open-ended responses can be detailed and on-point. But the vast majority are simplistic (“Why didn’t you like this concept?” Response: “Just didn’t like it.”) or off-topic (“Why didn’t you like this ad?” Response: “Your company sucks.”). Trained interviewers know to probe past those responses and/or steer them back on track. Additionally, real-time monitoring and verification assures that respondents are focused on the topics being researched. Neither of these occurs with online research.

Difficult audiences are almost impossible to reach online. Don’t be fooled into thinking that business-to-business audiences, moderate-to-low-incidence audiences, and high-level decision makers can be reached cheaply online, no matter what your panel provider says. Another industry side-effect of professional online respondents is that they very quickly figure out how to ‘fake’ their way past survey screeners. Unless screeners are very carefully written by experienced market research vendors (and alas, even sometimes then), these respondents can suss out the correct path to gain entry. And, because these difficult/low-incidence audiences also typically pay the highest online survey fees (either real cash or points to be redeemed), there is greater incentive to do so. Once again, trained professional interviewers, real-time monitoring, and respondent verification are all designed to weed out these ‘fake’ respondents.

Will an online methodology be effective and not just low-cost?

This is not to say that all online research is bad. Rather, I’m suggesting we tilt the scale back towards the center, and encourage thoughtful investigation of the research methodology chosen, instead of blithely pushing ahead with an online panel/survey. The goal is always to be cost effective, but you can’t simply ignore the second word in that term: will the chosen methodology be effective and not just low-cost?


10.  Tracking studies that are too long in the tooth

Most of our clients have some form of long-term tracking studies in place for customer satisfaction, awareness, market share, etc. Often, the results from these studies are boiled down into a couple of measurement scores that find their way onto management dashboards (and sometimes, even management compensation) and therefore become entrenched in customer research. While these trackers can be useful, there are a few important caveats.

Staleness over time. While changing the questions asked and metrics used to calculate scores every wave would defeat the entire purpose of a tracking study, it is also not effective to leave them in place too long. If your company has one (or more) of these trackers in place, chances are your products or services look significantly different now than they did when the tracker was launched. There may have been a few changes along the way—a question added here, or a wording change there—but most questions stay the same for trending/consistency reasons, and don’t necessarily reflect the current product environment.

Not the entire picture. Sometimes, too much weight is placed on the output from a deeply embedded tracking study, and no one—least of all upper management—questions what may be driving that (potentially watered-down) calculation. Offsetting actions—a price decrease around the same time customer service response times are increasing, for example—can produce the same overall score, but miss both sides of the changes in the market. If you aren’t spinning off ad-hoc studies based upon the findings of your trackers, you a) aren’t using the tracker to the fullest and/or b) need to update your tracking questions to be more meaningful.

Like other company processes and procedures, tracking studies need to evolve over time. Changes are made proactively to the product or service in order to adapt to external forces/changes, and tracking measurements/scores must be transitioned as well. Market research vendors are masters at this transition. Tracking studies can provide critical, ongoing data to help steer your company in the right direction, but only if they can adapt and change to accurately reflect your company’s needs right now, and not the situation that existed when the study was conceived.

 

Bonus observation: If you’ve read this far, you deserve a little something extra.

Procurement systems/agencies are the bane of the service industry. I don’t know enough about the use of procurement systems to speak to their value at saving costs across commodities or consumable products. What I do know is that they aren’t doing your company any favors when purchasing consultative services. The ‘one size fits everything’ approach is ludicrous, cumbersome, maddening, and inefficient. Ever try to fit survey sample size options into a unit price table to calculate total price? Doesn’t work. Just imagine how hard it is to communicate a creative, non-traditional design via a mechanism that, by definition, forces every submission to look the same. I can’t imagine a scenario in which it is better to have another layer inserted between both sides in an ongoing, working consultative relationship. Buying pencils? Maybe. Trying to explain the various components of a multi-phase, dual-methodology research project? Nope, not even close. And of course, this additional layer inserts procurement people who have little to no research experience or expertise into what is now a three-party collaboration.

We know our clients share some of our frustration on this issue, and it is typically far removed from their purview.  Still, it is important to be aware that you may need to help your most trusted, creative, and loyal research vendors navigate this obstacle in order to provide an accurate, effective proposal.

There you have it—the view from my seat. Remember, I present these from an authentic desire to improve your company’s research program. After all, the better your return from research efforts, the more your company will commit to research in the future.

Don’t agree with these? Did I miss some points? Want to tell us the things that drive you crazy about research vendors? Start a dialog and let’s discuss how we can work on eliminating these issues from your research plan. 503.977.6762 or michael@mdcresearch.com

Michael Oilar — President, MDC Research


MDC RESEARCH  –  8959 SW Barbur Blvd., Suite 204  Portland, OR 97219  –  (800) 344-8725
Learn more at:  www.mdcresearch.com            Copyright 2017, MDC Research