You can minimize drop-offs by paying careful attention to question placement in your survey, and by using devices like collector URLs to identify different groups without asking demographic questions.
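As a minimal sketch of the collector-URL idea: each audience gets its own survey link, and a tag in the link identifies the group so no demographic question is needed. The URLs and the `src` parameter below are hypothetical, not a specific survey tool's API.

```python
from urllib.parse import urlparse, parse_qs

def group_from_collector(url):
    """Read the audience group from a hypothetical 'src' query parameter."""
    qs = parse_qs(urlparse(url).query)
    return qs.get("src", ["unknown"])[0]

# Hypothetical collector links sent to two different audiences:
print(group_from_collector("https://example.com/survey?src=field-staff"))  # field-staff
print(group_from_collector("https://example.com/survey?src=hq-managers"))  # hq-managers
```

The same questionnaire is served at every link; only the tag differs, so responses sort themselves into groups automatically.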
Answering open-ended questions takes real effort, especially at the end of a survey. Unless the questions touch on topics the audience has a burning need to communicate to you, don’t expect a high response rate on them.
It’s better to use a specific midpoint label that matches the scale being used. “Neutral” doesn’t mean very much unless people read it as indifference to the question.
If survey respondents haven’t been exposed to a communication channel, give them an option that lets them say they haven’t seen it, rather than leaving them to fall back on a neutral option.
If you use a 3-point instead of a 5-point scale, you won’t be able to see shifts in how strongly the audience agrees or disagrees from year to year.
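A small worked example shows why the coarser scale hides intensity shifts. The ratings below are hypothetical: the same agree/disagree split in both years, but weaker agreement in year two. A 5-point mean moves; once collapsed to 3 points, the two years look identical.

```python
# Hypothetical ratings: 1 = strongly disagree ... 5 = strongly agree.
year1 = [5, 5, 4, 2, 1]
year2 = [4, 4, 4, 2, 2]  # same agree/disagree split, but less intense

def to_3pt(r):
    # Collapse: 1-2 -> disagree (1), 3 -> neutral (2), 4-5 -> agree (3)
    return 1 if r <= 2 else (2 if r == 3 else 3)

def mean(xs):
    return sum(xs) / len(xs)

print(mean(year1), mean(year2))  # 3.4 3.2 -- the intensity shift is visible
print(mean([to_3pt(r) for r in year1]),
      mean([to_3pt(r) for r in year2]))  # 2.2 2.2 -- the shift disappears
```

The direction of opinion survives the collapse; the strength of opinion does not, which is exactly what a year-over-year tracking survey is trying to measure.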
You’ll get a higher response rate from a paper survey completed in the room than from asking those same people to answer questions online later. I’d also ask questions that identify to what extent the event improved people’s knowledge of the topics covered, changed their opinions about the issues, or influenced their likely behaviors.
A communication audit survey should include questions about how effectively messages are getting through and about how effective the channels carrying the messages are. You can create about 80% of the “right” survey questions based mostly on the messages/campaigns/topics your department is supposed to be communicating and the channels your job involves managing.
People are more used to having two positive and two negative options, especially if the anchors on both ends of the scale are “strongly agree” and “strongly disagree.”