Maximizing the Value of User Surveys

By Tamara Wilhite

Emailing surveys to users is a low-cost, convenient way for IT teams to learn what their customers think. Yet the design of an online survey directly affects the quality of the results.

  • Design surveys to fill no more than one screen. Completion rates drop with each additional screen users must fill in.
  • Provide between three and seven options when asking users to rate performance, ranging from “meets expectations” to “fails to meet expectations” or from “satisfied” to “dissatisfied”. Yes/No questions can be hard for users to answer and fail to capture those who are less than thrilled but not actually dissatisfied. Noticing when users slip from “thrilled” or “meets expectations” to merely “moderately satisfied” warns you that quality and customer satisfaction are declining, before customers drop you.
  • Ensure that surveys work as expected before sending them out. Verify that menu buttons work and that radio buttons retain their selections. A broken survey can cause users to abandon it or can skew the data you receive. A sudden drop in survey completion rates, or a spike in partially completed surveys, can signal this type of problem.
  • If a survey directs users to another website, say so in the survey message. Users may close the survey if it appears to be a malicious redirect.
  • Avoid surveys that pop up in a separate browser window. These may be blocked by pop-up blockers, or closed by users who assume an ad is appearing.
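
The monitoring advice in the testing bullet above can be sketched in a few lines: compute the completion rate for each period and flag a sharp drop. This is a minimal illustration, not part of the original article; the field names, sample data, and 20% threshold are all assumptions you would tune for your own surveys.

```python
# Hypothetical sketch: flag a sudden drop in survey completion rates,
# one of the warning signs of a broken survey. A response is a dict of
# question -> answer, with None marking an unanswered question.

def completion_rate(responses):
    """Fraction of surveys in which every question was answered."""
    if not responses:
        return 0.0
    complete = sum(1 for r in responses if all(v is not None for v in r.values()))
    return complete / len(responses)

def flag_drop(previous_rate, current_rate, threshold=0.20):
    """Return True if the completion rate fell by more than `threshold`."""
    return (previous_rate - current_rate) > threshold

# Example data (invented): last month 90% of surveys were fully completed,
# this month only 55% were.
last_month = [{"q1": 5, "q2": 4}] * 9 + [{"q1": 5, "q2": None}]
this_month = [{"q1": 3, "q2": 2}] * 11 + [{"q1": 4, "q2": None}] * 9

prev = completion_rate(last_month)   # 0.9
curr = completion_rate(this_month)   # 0.55
if flag_drop(prev, curr):
    print("Warning: completion rate dropped sharply; check the survey for broken controls.")
```

Running a check like this after each survey wave turns the “sudden drop” warning sign into an automatic alert rather than something noticed after the fact.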