When you need to compare measurements over time or across business units, you need quantitative research, which usually means a survey. We can conduct the entire study for you, or work with your in-house research department to develop the right questions and turn our detailed analysis into recommendations.
The types of surveys we can help you with include:
- Communication audits assessing the effectiveness of messages and channels.
- Readership or viewership surveys for publications and electronic communications.
- Knowledge tests.
- Pre- and post-tests to assess the effectiveness of a communication intervention.
- Climate or attitude surveys addressing all aspects of the employment experience.
- Benefits and compensation surveys.
- Customized surveys.
How We’re Different
- Our questions focus on effectiveness, not just satisfaction.
- Our survey findings lead directly to actionable results.
- Our reports are written in plain English; they’re concise and incisive.
- Our reports are highly visual, suitable for sharing directly with executives and employees.
Sample Projects: Surveys
Minimizing drop-offs for online surveys
Q: We are paying attention to ‘drop-offs’: people who start an online survey but find they don’t have the time, or reach an open-ended question and leave. Any solutions for minimizing drop-offs in online surveys? Thanks.
A: Here’s an idea if lack of time is the issue. If your survey platform allows it, as does the one I like to use (SurveyMonkey), you can set an option that lets a person re-enter a partially completed survey where they left off, as long as they have not yet completed the last page. This works only if they return on the same device they started on, because that is what the program recognizes: the device, not the person.
Second, you will always have some drop-offs, so think carefully about which questions you put at the end. For employee communication surveys, I put the fairly tedious questions last: the ones identifying current and preferred sources for a series of topics. Placed anywhere else, they increase the number of people who abandon the survey at that point. All the questions whose results we will want to compare against norms or past results come before that series, because that is where we need the highest statistical reliability for comparisons: how well informed people are, or how many do or don’t have access to different communication channels. The current/preferred sources questions need neither type of comparison. All that matters is how the same people answer both questions today: how well does the current mix match what they prefer? Comparing those numbers to other companies, or to your own results in past years, serves no purpose.
Another issue is demographic questions. Putting them at the beginning reduces the number of people who will even start the survey. Putting them at the end, where I think they belong, artificially inflates the abandon rate: respondents may have answered every substantive question and simply chosen not to identify their demographic characteristics, for fear of being identifiable if someone decided to find their answers (as in, “I’m the only woman at this level in this unit in this country in this office”). To avoid this problem, I like to use different “collectors” on a platform like SurveyMonkey. Identify the demographic question that matters most for how you want to sort your results, then create a different collector URL for each subgroup. All the collector links lead to the identical survey, but the survey records which collector link each respondent used to enter it. You then have 100% identification of business group for everyone who even starts the survey.
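The collector-link idea can be sketched generically. This is a minimal illustration, not SurveyMonkey’s actual API: the URLs and group names below are made up, and real platforms generate their own collector links. The point is that attribution depends only on the entry link, so even respondents who abandon the survey early are already classified:

```python
# Hypothetical sketch of per-subgroup collector links for one survey.
# All links lead to the same survey; each encodes a business group.
# (URLs and group names are invented for illustration.)

collectors = {
    "https://example.com/s/abc?c=ops": "Operations",
    "https://example.com/s/abc?c=sales": "Sales",
    "https://example.com/s/abc?c=hr": "HR",
}

def group_for(entry_url: str) -> str:
    """Attribute a response to a business group from the link it came in on."""
    return collectors.get(entry_url, "Unknown")

# A respondent who enters via the sales link is attributed immediately,
# whether or not they finish the survey.
print(group_for("https://example.com/s/abc?c=sales"))  # Sales
```

Because no demographic question is asked at all, this removes both the up-front deterrent and the end-of-survey privacy worry for that one variable.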