This article was published in the May 2016 Issue of the Online MR Magazine – CLICK HERE to read the magazine.
Andrew Jeavons: There are two urgent topics that need to be addressed. The first is data security with regard to the EU. The death of the Safe Harbor framework, which allowed data to be imported from the EU onto US-based servers, is a very significant event. The EU is becoming very strict about data privacy, and this is something many US-based survey companies have to address immediately. There are no easy solutions, but unless the issue of security and the EU is tackled head-on, many current multinational survey companies may find they are restricted to US data collection only.
The second one is, as ever, mobile. The industry has talked for decades about making surveys easier to complete, making them shorter and so forth. The truth is that very little, with a few notable exceptions, has been done about it. In 1998 I wrote a paper for an ESOMAR conference which pointed out that web respondents are more likely to drop out of web surveys on long grid questions. It is now 2015 and I STILL SEE discussions about grids and how they need to be shorter and easier to complete. It seems little has changed for many companies. I think mobile is going to force the industry to mend its ways. Mobile forces surveys to be shorter and more comprehensible; there is simply no choice in the matter.
Using mobiles to access the web is becoming de rigueur for millennials, so surveys have to change. Finally, companies will have to address the problems of long, unwieldy surveys. I think this will be painful for many parts of the industry, but it must be done. Long grids, long surveys and poor user interfaces have to be dealt with.
With so many methodologies at our disposal, is it becoming a case of “too many cooks spoil the broth”? How should we go about selecting the right methodology for our audience?
Andrew Jeavons: The methodologies have come from client demand – although in one sense they are a natural evolution of the marketplace. But as you say, there are a lot of them. Selection of the right tool is a question of education, on both the client and vendor side. The risk is that a poor choice of methodology leads to poor results, but the vendor will inevitably get the blame for this. As vendors we must try our hardest to educate the clients and our staff in the methodologies available, to enable informed choices to take place.
This requires investment in time and money. All too often a certain methodology is used because we are familiar with it, not because it is the right tool. In the wider world, relational databases are a good example of this. Relational databases are great tools, but they are not always the correct choice. They are used because they are familiar. We have to foster a spirit of inquiry and provide the tools to answer questions easily. Documentation and training are the key.
How can we ensure a seamless marriage between technology and survey research?
Andrew Jeavons: I think we have achieved that, or at least we have made huge advances. When I started in the survey software business there were about 6 providers of survey software in the world, at least I only knew of 6. Now we must have 600 or more. What has happened is that as technology evolved, particularly the web, surveys evolved too. The survey industry tends to be an early adopter of new technology; I was working on web surveys in 1995/96 for instance. The industry did see the potential of the web very early on.
Mobiles and IOT are the next challenges, and I think IOT survey integration will become as important as mobile surveys. The challenge is enabling some sort of interface with embedded IOT systems. How do you take a customer satisfaction survey initiated by the fridge? How does the toaster implement NPS? This may seem bizarre now, but I think this is the challenge we will face in the future. When we talk about technology we have to think about attracting the best talent. The industry has to show that it values technological talent. Very often there is some ambivalence towards “techies” within the survey industry. The truth is that the survey industry is totally dependent on technology, and the sooner this basic truth is accepted the better.
What are some of the common challenges survey researchers face on a recurring basis to provide value to their clients?
Andrew Jeavons: It seems to me that sample is always a source of problems. Google Surveys is a great innovation; it is not so much the survey technology but access to the biggest river sample on the planet that makes it so useful. Your results are only as good as the sample you use. I wonder if IOT will help with this: if we have embedded systems in consumer goods, we instantly have access to known users of the product, which could help alleviate the problems of getting a valid sample.
I don’t think enough work has been done to understand why respondents join panels or complete surveys. We need a “respondent theory” to understand the motivations of survey respondents. If we have more insight into respondents’ motivations we can adjust recruitment and management strategies accordingly.
To be honest, I am not a fan of the “storytelling” trend that seems to be talked about constantly. I think it implies that style is more important than content. I know that privately a lot of MR executives feel the same way. We are doing research, not writing fiction.
Yet I can see the problem that storytelling is supposed to address, which is to engage the client and make the results meaningful. That is an enduring challenge, but better presentations, rather than better research, are not the key. I see far more talk about storytelling than about new methodologies, and that is not a good sign.
As an insight expert, what are some of the changes you would like to recommend – changes that will have a positive impact on the survey research industry?
Andrew Jeavons: As I mentioned, we need to work out why respondents respond; that is something that has been ignored for too long. Survey complexity and length have to be addressed somehow; with better quality data we have better quality research.
What we really need to do is take an in-depth look at the assumptions and expectations of surveys. There is a huge amount of evidence from research on eyewitness testimony in legal cases which shows us that people are not very good at remembering. If people cannot recall which person attacked another person and what they looked like, then how reliable is their recall of what they did in the supermarket on their last visit? Is it reasonable to expect a high level of recall about what shampoo you bought last week?
We are heavily invested in the idea that consumers recall vast amounts of information about products which are very often not that important to them. Once we learn what information we can get from respondents accurately, the quality of research will rise. It will also show us which methodologies are the best to use. I suspect mobile will come out as the best simply because it can be used “in the moment”, which minimizes the problems of recall. Better to get 5 questions from a respondent when they are in the supermarket than 20 a week later when they are at home.
Are small enterprises able to leverage the power of surveys? If so, what changes would you suggest over the coming 5 years?
Andrew Jeavons: I think smaller companies can get easy access to survey technology; companies such as Survey Monkey and Survey Gizmo provide great systems at an economic price. However, technology is not enough: the design of a study and the interpretation of the results are more important. I feel that frequent short surveys can be far more powerful than “kitchen sink” surveys; I’d like to see more emphasis placed on this approach.
The temptation to have long surveys has to be resisted. Indeed, it would be great to have a written “survey pledge” (like Grover Norquist’s tax pledge) signed by companies who perform surveys and clients, agreeing that their surveys will always be less than 20 minutes, have no grids, be mobile friendly and so forth. It would be a start.
The most common complaint about surveys is that they are boring – how do you suggest we can spice them up?
Andrew Jeavons: Making surveys clearer and shorter is the best alternative. The problem with changing surveys in terms of their content is the potential for introducing biases. For instance, more images may make the survey more interesting because of the content of the images, but images can introduce cognitive biases that may not be obvious.
Question texts should be made as short as possible, choice options as short as possible. Having 32 possible response choices, for instance, is too many. As few response options as possible is far better. I’m really not convinced that surveys can be made less boring while they remain in the traditional question and answer format, but they can be made simpler. Lowering the cognitive load of a survey is probably a more realistic goal than making it more interesting. There is also the basic truth that some survey topics are just not very interesting, no matter how many images you embellish the survey with. Making this type of survey easy to complete is the best hope. Breaking up the subject matter of a survey into several surveys helps too.
Two short surveys are probably better in terms of data quality than one long survey. It raises expenses and the time needed to complete the survey, but respondent fatigue is reduced and with that the quality of the data should rise.
How can merging gaming concepts to survey research become a ‘game changer’ for the industry?
Andrew Jeavons: I feel very strongly that the gamification approach as championed by Betty Adamou at Research Through Gaming is the right approach for the future (disclosure: I am a technical advisor to Research Through Gaming). True gamification allows the respondent to provide data without the rigors of an “ask and answer” survey format. It also allows for paradata (data about how data is collected), such as response time, to be measured, which can provide extra insights. For instance, the speed of response to a certain image may be an indicator of a non-verbalized prejudice or affinity. Ultimately the game construct is held to be more engaging; they are games, after all.
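To make the paradata idea concrete, here is a minimal sketch of how per-question response times could be captured during a session. The class and field names are hypothetical, illustrative only, and not taken from any particular survey platform:

```python
import time

class QuestionTimer:
    """Record per-question response latency (paradata) in a survey session.

    Hypothetical helper: names and structure are illustrative, not drawn
    from any real survey system's API.
    """

    def __init__(self):
        self.shown_at = {}   # question id -> timestamp when displayed
        self.latency = {}    # question id -> seconds taken to answer

    def shown(self, qid):
        # Use a monotonic clock so wall-clock adjustments cannot skew timings.
        self.shown_at[qid] = time.monotonic()

    def answered(self, qid):
        self.latency[qid] = time.monotonic() - self.shown_at[qid]

timer = QuestionTimer()
timer.shown("brand_image_1")
# ... respondent reacts to the image ...
timer.answered("brand_image_1")
```

An unusually fast reaction to one image versus another could then be examined as a possible indicator of the kind of non-verbalized affinity described above.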
Computer gaming has become a well established part of life for almost all age ranges, so it has a high level of familiarity. Early theories of child psychology were based on the idea that play in young children is a form of constructive cognition, albeit physical. Using games we may engage the respondent in a different sort of cognitive processing and hopefully overcome the issues of boredom. Gamification has some plausible psychological theory behind it which indicates it could solve many of the problems of traditional surveys. It is also true that not all surveys can be rendered into a game, which is something that will be discovered by trial and error.
What quality measures need to be practiced to ensure optimum results for survey research?
Andrew Jeavons: With web and mobile surveys we have a lot of data available to us about the survey as it is being taken. Here is a list of the data points that I feel are important for effective quality assurance.
(1) Speed to complete survey. This is pretty well established now; it can weed out people taking surveys as fast as possible with no regard to their answers. Extremely slow respondents are questionable too, analysis of demographics of slow respondents is important.
(2) Answer completeness – this is a count of how many codes respondents use on multiple-choice questions, the length of open ends and so on. It could be viewed as a loose measurement of respondent attention.
(3) Question retry count. I am not sure if many survey systems provide this measurement. This is a measure of how many questions are submitted that are not completed correctly. An example would be where mandatory items on a grid are not all completed and the grid is submitted anyway causing the survey system to force a respondent retry of the grid. Obviously it is a measure of the cognitive load of a survey, if questions are hard to complete correctly they must be placing a high load on the respondent. Paying attention to this measure in a soft survey launch is a chance to catch significant problems with a survey.
(4) Dropout rate. These are the questions on which respondents appear to abandon the survey. This is often not the same measure as (3); questions may be problematic because the information they request is too sensitive (for instance income), or they may be badly worded or just plain confusing.
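The four checks above can be sketched as a simple screening pass over per-respondent paradata. This is a minimal illustration only: the field names, records and thresholds (half and three times the median completion time, a retry cut-off of 3, and so on) are assumptions for the example, not industry standards:

```python
from statistics import median

# Hypothetical per-respondent paradata; field names are illustrative.
respondents = [
    {"id": "r1", "seconds": 95,  "codes_used": 12, "retries": 0, "completed": True},
    {"id": "r2", "seconds": 41,  "codes_used": 3,  "retries": 5, "completed": True},
    {"id": "r3", "seconds": 300, "codes_used": 10, "retries": 1, "completed": False},
]

def quality_flags(resp, med_seconds):
    """Flag a respondent against the four checks described above."""
    flags = []
    if resp["seconds"] < 0.5 * med_seconds:      # (1) speeders
        flags.append("speeder")
    if resp["seconds"] > 3 * med_seconds:        # (1) extremely slow
        flags.append("very_slow")
    if resp["codes_used"] <= 3:                  # (2) low answer completeness
        flags.append("low_completeness")
    if resp["retries"] >= 3:                     # (3) high question retry count
        flags.append("high_retry")
    if not resp["completed"]:                    # (4) dropout
        flags.append("dropout")
    return flags

med = median(r["seconds"] for r in respondents)
for r in respondents:
    print(r["id"], quality_flags(r, med))
```

Run on this toy data, r1 raises no flags, r2 is flagged as a speeder with low completeness and many retries, and r3 as very slow and a dropout. In practice the thresholds would be tuned per survey, ideally during a soft launch as suggested in point (3).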
How can we ensure that respondents are engaged and active throughout the course of a survey?
Andrew Jeavons: My answer to this is to return to the idea of respondent theory. If we knew more about what motivates respondents to take surveys, we would be able to design surveys in such a way as to keep them engaged. At the moment we fall back on the “bright shiny things” approach: the idea that if we make the survey interesting enough (make it bright and shiny) then respondents will be engaged and complete the survey. This approach is roughly the same as the design approach for cat toys. We need to be more sophisticated. It is significant that the problem of survey engagement has been a hot topic for decades. When telephone surveys were prevalent it was an issue; now, with the dominance of web/mobile surveys, it is still an issue. It is time for a new approach: bright and shiny doesn’t seem to be working.
How can established companies maintain a competitive edge given new players entering the survey research market?
Andrew Jeavons: It is always important, in my view, to be at the leading edge of technological advancement if at all possible. But the truth is that this will not guarantee the growth of your company or protect you from new entrants to your market place with the same technology. Client retention is as important as being at the leading edge. Clients will not leave you if they are treated well and feel that you are “keeping up” with the industry trends. A comprehensive client retention program is crucial to the success of your company; client churn costs a lot of money.
One approach many new companies use is to undercut your company on price, effectively buying the market. This can be a hard approach to counter, but if you have established loyalty among your client base it can be protection against this sort of predation. There has been a trend over the last 20 years or so to do business without any face-to-face meetings. This has obvious time and money advantages, but it does mean you have a weaker relationship with your clients. Pick a client account value (for instance accounts worth more than $X per annum) and visit these clients. The relationships you have with the clients are your protection against new players.
In your view, what is the future of market research – especially in the context of survey research?
Andrew Jeavons: There is no doubt in my mind that survey research will continue for many years to come. The question is the medium. We have seen the growth of mobile technologies change surveys and I think IOT will do the same. Appliances initiating surveys, which I think would be done via a smart phone, will become common. I think the problems with obtaining valid sample will be mitigated to a degree by such services as Google Surveys which have a large coverage. So called “Big Data” is the real challenge to the core of market research.
“Big data” in my mind really means analytics. Many companies conduct their own research without outside analysts and seem perfectly adept at doing this. Budgets are the driving force for this; companies are not willing to spend the type of money traditional market research companies need. In the past obtaining survey data was only possible via a market research company. Now this is not true. The analytical skills required to analyze the data were not available except via a market research company. Now this is not true. Market research has to regain the higher ground of genuine theories about consumer behavior, something data and analytics will not provide on their own. The customer satisfaction practice has been sliced away from traditional market research; predictive analytics has also been taken over by non market research companies.
This trend will continue into other practice areas. Market research needs to redefine its role with some urgency. Survey research is only one channel of data now. Surveys have to remake their image, they have to be better constructed and their limitations have to be acknowledged. At the moment surveys are seen as a universal tool, but that is not true. By defining what surveys are really good at the industry can grow from that strength.
About the author: Andrew was the architect of significant growth of Survey Analytics during his tenure as CEO. He grew the company from 4 employees to 35 while he was there with a concomitant increase in revenue. As CEO he oversaw all operations of the company, including 2 offices in the USA and a development center in Pune, India. Andrew provided survey software expertise, strategic leadership and management as CEO.
Andrew has domestic and international experience in software development, sales, marketing, P&L management and recruitment. As a thought leader, he has authored several award-winning papers and is a well-known speaker at industry conferences. Andrew has extensive experience of working with Fortune 500 companies. He loves fast-moving environments and worships innovation and creativity! Andrew started in the survey software sector with Quantime as a software developer, and went on to start E-Tabs and a specialist Quantime products consulting company. Other skills include knowledge of statistics, neuropsychology, psychology and writing. When he is not doing any of these things he is a goldsmith and lapidary.