On-Demand Webinar

How to Create Effective Audience Surveys

Surveys are the least demanding, most scalable way to collect insights about your audiences. If you're not surveying them regularly, everything your organization does, from communications to program design, is merely guesswork. This age-old method can feel understated today, but the survey is alive and well amid the array of modern marketing technology.

The good news? Surveys are cheap and easy. The bad news? Surveys are cheap and easy. Because surveys are so simple to write and distribute, most organizations ask bad questions (inevitably leading to bad data). It’s easy to conduct surveys so quietly flawed that they deliver inaccurate, biased, useless insights. If your surveys aren’t carefully crafted, you’ll have untrustworthy results.

In this webinar, you’ll become a survey savant by learning modern strategies to create, distribute, and analyze member surveys. More importantly, you’ll learn the common pitfalls to avoid in the science of survey building so you can take full advantage of this dynamic way to collect audience insights.

Transcript

SUMMARY KEYWORDS
survey, audience, responses, questions, information, people, answer, donor, fill, data, organization, options, easy, incentive, good, summer camp, work, talk, feedback, informational interviews

I'm Rachel Clemens. Today's webinar is on how to create effective audience surveys. If you don't survey your audiences, it's tough to know what they're looking for and want from you. We'll talk today about how we can learn more about our supporters so we can engage with and serve them better. We'll talk about strategies to get the answers you're looking for, tactics you can use to build well-crafted surveys, and considerations for analysis. Now, I want to say that this is not a session for the data nerds. If you're a data nerd, you'll know all this already. If you went to school for research, you'll know this already. If you're like most of us communicators, fundraisers, or membership people, we didn't necessarily get this in school. So today, we're going to go over how to do audience surveys effectively: the basics, but also a little bit beyond the basics. Also, keep in mind that everything here applies to most surveys. These principles will apply if you are at a nonprofit and are thinking about a donor survey. They will apply if you're at an association and are considering a membership survey, or at a university and are considering an alumni or student survey. Everything I will talk about here applies to all different types of surveys, so don't worry about that.

I'm Rachel Clemens, the Chief Marketing Officer here at Mighty Citizen. Believe it or not, I am good with technology and communications despite our beginning here. I've been doing this for about 20 years. At Mighty Citizen, we do branding, marketing, and digital experiences for mission-driven organizations of all kinds, including nonprofits, associations, government, and universities. We're based in Austin, Texas, where it is a beautiful day today. And I hope it's beautiful where you are, too.

What can surveys tell us? They can tell us what motivates our supporters. Why are our supporters our supporters? Why did they engage with us in the first place? Why did they continue to engage with us? Surveys give us lots and lots of information about that. How can we better engage with our audiences? What do they want from us? What are we doing well? What are we not doing so well? Who are our supporters? What are their demographics? What motivates them? What keeps them bound together? What personal information might we be able to pull from our supporters to better communicate with them? Why aren't [insert your audience here] engaging more? Why aren't young audiences engaging more? Why aren't we reaching a certain demographic? You can get more information about certain audiences by surveying them. How good is our stewardship? How well are we stewarding our donors, our members, our alums, our students? How well are we continuing to cultivate relationships with them?

In the end, research kills opinions. What do I mean by this? When I go into an organization to talk about creating a survey, what often happens is I'll ask a question, and they'll answer that question. So, if I ask, for example, what motivates your donors to give to you, I'll get answers back from them. And I might get lots of answers. In fact, there may not be a good consensus in the room about why their donors give. They're giving me their opinions; they're giving me information that is likely anecdotal. "So-and-so told me they give because of this, and I think that's true." What research does is kill those opinions. It gives you actual data to say, "This is why our donors give to us."
And if you don't have consensus within your organization, it also allows you to point to something that can drive consensus. If your executive director, for example, wants to do something that you know won't work with your audiences, we want to give you the research and the data to tell them why that won't work. So, by the end of today's session, you'll be able to describe the shortcomings and benefits of surveys. We're going to start there. We're going to list the six principles of an effective survey. We'll talk about writing good survey questions; your questions must be written without bias. We'll talk about analyzing and interpreting our survey results, as well.

What are the shortcomings and benefits of donor surveys? First of all, let's make the case against surveys. Generally, people tend to estimate. By that, I mean they do a lot of, "hmm, that sounds about right." That's okay in general because that's human tendency. That's not a surprise. But at the same time, it's just not 100% accurate if they're estimating. And sometimes they'll give you inaccurate answers as well. They might give you the answers they think you want, and bad questions can sometimes sway them. Your recipients don't understand that you've written a bad question, and you don't understand you've written a bad question, so they'll answer that question without realizing it is biased. They don't know the difference. Sometimes you don't either. It's hard to predict the future. We are not born with crystal balls. We so wish we were. But if you're going to ask someone, "Hey, if we offered a chatbot on our website, would you use it?" Well, that's hard for them to answer. They want to say yes. They see themselves as their best selves in the future. I'll give you an example. When Netflix first came out a decade ago, they asked people, "What do you want to watch this weekend?" That would get you to add something to your queue. People were adding all these documentaries and things that would make them better people. What Netflix found is that nobody wanted to see the documentaries by the time the weekend rolled around. They wanted to watch Sharknado or some terrible comedy to break them out of their everyday lives. So, while we have the best future intentions, they're unreliable. Bad questions look like good questions. If you write a bad question (and by that, I mean a biased question), your recipient is likely to answer it without realizing it is biased. It's also hard to reach statistical significance. Statistical significance is truth. It's the combination of asking enough people and getting a high response rate to hold that survey data up as truth. Most of our clients and the organizations we work with do not have the time or resources to reach statistical significance. Therefore, you've got to look at your surveys as indicators of truth rather than absolute truth, but that's better than not having any information or having solely anecdotal information. Keep in mind that your surveys are wide but shallow. This means you can survey many people at once, but you can't go deep on a survey. We'll talk about why that is. You can reach many people, but you can't get a lot of information out of them. You often want to marry up surveys with other forms of research, like informational interviews, where you can get more qualitative information and go much deeper with a smaller number of people.

But there are lots of benefits to surveys. The first is that they are flexible. They can be conducted in lots of ways.
They can be conducted on your website, via email, via social media with polling, or by phone. They can also be conducted in person. Sometimes, we have hard-to-reach audiences, so we've got to go out in person to do these surveys. They can be anonymous; if your survey is anonymous, it is more likely to be unbiased. And we'll talk a little bit more about the implications of that. They are cost-effective. Quite honestly, running surveys on online platforms is fairly cheap. They can quickly be put together. Software like Survey Monkey will offer questions written by professionals that you can pull in if they're appropriate for your survey. And then lastly, they are extensive. No other method provides such a large number of responses. So keep that in mind as well.

Should your organization conduct a survey? I will tell you about this thing we talk about called "The Goodwill Jar." Imagine that you're sitting across the table from a donor, a member, an alum, a student, or whoever your audience is that you want to survey. You ask them to be a member or to donate money to you; therefore, they are putting money in the Goodwill Jar between you on that table. When you ask them to fill out a survey for you, they're making another deposit into that jar. They're just putting in, putting in, putting in. You have to find ways to make deposits back into the jar from your side. One way to do that is to make sure that when you conduct a survey, you take the information you get from it and use it to serve them better. If you won't do anything with the information, there's no point in surveying. You cannot conduct a survey solely because you're curious. That's not a good enough reason. We'll talk about the purpose and goals behind surveys. You must intend to take action on that survey and put deposits back into that Goodwill Jar.

Let's start with the six principles of effective survey design. If you decide to conduct a survey, here's where you will start. Before you write any questions, you will start with the purpose of the survey. What is your goal? A lot of times, when you sit down and create this survey strategy with people on your team, there are a lot of goals that come out of it. You want to make sure that you have one primary goal. It's okay to have multiple goals, but make sure that there is one that is driving your strategy. That's your primary goal. Then, ask yourself: what will we do with the information when we get it? Usually, in our survey strategies, the line under each question will say what we will do with this information. So, every time someone adds a question, they're forced to answer what they will do with it. What will you do with the information you get, and how will it be implemented? You want to ensure you understand upfront who within your organization needs buy-in. Suppose you are asking an audience questions about your programs, for example. In that case, you have to have the program staff weigh in on that survey and ensure they are comfortable with the questions you're asking and understand why you're asking them. What you don't want to have happen is that you're doing this in a silo. Halfway through the survey, or right before you launch the survey, your executive director pops up and says, "Hey, I'd like to add five questions to that survey." Make sure you ask yourself early: who needs to have buy-in on this, and who do I need to ensure is included? Keep in mind that different surveys might have different goals.
Therefore, you might have different surveys for each of those goals. You can have a survey that is solely to learn about an audience. Maybe you want to find out more about them or their demographics. You can also have a survey that compares audiences. We'll take a look at this in a minute. Here's an example from the University of Central Florida for their alumni survey. When they wrote out their goals, these were their goals. Number one, they wanted to understand alumni satisfaction with the Alumni Association. How well did alums understand and appreciate its benefits and activities? Secondary goals: they want to find out why alums do or do not join the Association and understand the interest of alums in making contributions. This is just an idea of how your goals could look.

You want to make sure that your survey is targeted. A lot of times, people ask how many people they should survey, and the truth is that it depends on the audience that you're surveying. You can slice and dice your audience in many different ways. Maybe you want to look at them by age, their donation amount, or how often they interact with your organization. There are many ways to slice and dice; your data will tell you how many you have in that audience. Now, you want to ensure you've got enough people to run a survey. Most surveys have about a 5 to 10% response rate. I think the technical answer to that is 2 or 3%, but we typically see about a 5 to 10% response rate. If you have 200 people in the audience, you're looking at 10 to 20 people that you would be basing facts on. That's not enough to say, "Okay, that's worth running this survey." You want to make sure you're looking at your data and numbers to ensure you have a big enough audience. Remember that typically, small and representative is good for precise surveys. If you're trying to understand a program better, you want to engage people who interact with that program and maybe not your entire database. On the other hand, if you're considering a rebrand and want to get a sense of your brand perception, you might want to send that survey to everybody in your database. That might be bigger and broader. Think about which audiences you're going to target. As I mentioned, you can slice and dice them differently. One option is to slice them by age. Perhaps you have younger members or donors that you want to talk to versus older donors to see how they interact with the org. We know that younger audiences want different things from us. You can run a survey solely based on age. You can also base it on lapsed versus engaged audiences. For example, if you're an association, you could survey lapsed members versus engaged members and see the difference. You can also look at it by sex. It'd be interesting to see how women feel about certain programs or initiatives versus men. And then, just because I'm in Texas, I have to get a dig in here and do a little bragging. You can also do it by greatness -- Texans versus everyone else. Remember, though, no matter how you slice your data, you must have good data in your database to target an audience. And not only that, but you must feel confident in the data that you have. What you don't want to do is send someone a survey about a singular program when they have no idea what you're talking about or have never interacted with that program.

Number three, in terms of how to make a survey effective, is to keep it short. That's Napoleon, by the way. That's my Napoleon joke. Short surveys produce higher response rates.
The shorter it is, the less people have to do, the faster they get through that survey, and the more likely they are to fill it out. Ideally, your survey takes fewer than five minutes to complete. Remember that you want to keep it nice and short; that is how you will get more people to fill it out. Ideally, we'd also have fewer than ten questions. You can't ask many questions if you're trying to do under five minutes. Make sure you're trying to keep it under 10. You want even fewer if you include open-ended questions. We're going to talk more about what this looks like. Open-ended questions are those where you ask people to fill in text in a text box. If you're asking them to do a lot of that, you want to have a max of maybe three.

Your survey needs to be delivered well, it has to be intuitive, and it has to work on multiple browsers. You want to test your survey. It has to be mobile-friendly. Many of our audiences are engaging with our social media or email via phone, and that share is increasing. So they're going from their phones to your survey. Therefore, your survey must work well on a mobile device and be easy to fill out there. That's another argument for it being short, right? People are not going to scroll a whole lot on a phone. When you send out your survey reminders, go ahead and remove anyone from your list who's already answered that survey. If they've already answered the survey and get a reminder, they feel doubtful. They're like, "Well, did I fill it out?" They might try filling it out and wonder if their responses went through. You don't want to create doubt. Doubt is bad for your brand, so remove anyone who has responded from those survey reminder emails.

I mentioned testing. You must test your survey. You only get one shot for people to fill this thing out. You can't edit your survey after it starts running because you will have messed-up data. You'll have some people who filled it out pre-fix and others who filled it out post-fix. It's tough to compare anything or gather anything in that fashion. So make sure you have tested every possible scenario. Before you launch your survey, I recommend reading the questions out loud. You're more likely to catch your grammar or spelling mistakes when you read out loud. It just forces you to slow down. Test it over and over again. Test it on lots of devices, test it on different browsers. I always recommend testing it with a neighbor. You want to test it with people who haven't seen it. You've been living and breathing this survey, and you may have used jargon that is particular to your organization. If you share it with people within your organization, which is good practice, just know that they may also understand that jargon and aren't likely to flag it. I always recommend a neighbor because they're friendly enough to take a quick survey for you. But they don't know the ins and outs of your work. So, if they turn to you and ask a question, you know you need to work on wherever they got stuck.

Effective surveys are incentivized. I put an asterisk here because there are a lot of implications for this. Incentives are great because they can increase your response rate by 5 to 20%. Here's what I mean by an incentive. You'll see a survey where, if you take it, you're in the drawing for 20 gift cards. That's one example of an incentive. Another example: we had a client, this was public radio, and they had a whole bunch of CDs in their warehouse. They wanted to get rid of the CDs. They had been piling up.
So, they decided to give a CD to everyone who filled out the survey. That was great for them; they got a lot of responses from that. They also had to ship out all the CDs, so that's a consideration. However, don't let your incentive be too big. For example, you would not want to give away a $500 gift card. That's just too large. You'll have people filling out the survey just as a chance to win. You want it to be big enough to be a carrot but not so big that everybody sends it to all their family members, too, right? We usually do about $100 gift cards. If the survey goes out to many people, tens of thousands, we might do several gift cards and draw at the end. Again, make sure that incentives are easy to allocate. Those CDs that went out had to be shipped to everyone. That cost postage. They were willing to do that. We talked about that beforehand. Another org I've heard of is Texas A&M. They gave away maroon bluebonnets, which their audience loved. They did this for their alums when they sent out a survey. Those were light and easy to ship. So, just consider how you will allocate and ship those incentives. Keep in mind, though, with incentives that if a user doesn't share their contact information, they can't get the incentive. You also cannot have an incentive if you want a fully anonymous survey. You must know who filled out the survey to get them an incentive. One way we get around this is by leaving the option open for people to be anonymous or leave their contact information. The last line in our survey will say, "You can choose to remain anonymous, but if you'd like to be considered for our incentive, please fill in your contact information." That's one way to get around it, kind of.

All right, so we've now got our survey strategy in place. Notice we have not written any questions yet. You do not jump right into writing questions. You're just working on the strategy upfront. Then, as you start to craft your questions, we want to ensure that we're crafting good survey questions. There are two types of survey questions. One is closed questions, and they are measurable. The other is open questions, and they are more revealing. So, let's dive into each of these.

Closed questions are those where I ask you a question, and then I give you a list of options to answer that question. So here's an example from UT Austin. "Have you already included UT Austin in your estate plans?" Their options are "Yes," "No," "I plan to," or "Undecided at this time." Notice those are nice, big buttons. They work great on mobile. The user experience of the survey is nice. There are four options there. Closed questions provide a list of acceptable responses. They're typically multiple-choice. They can be yes/no, they can be checklists, they can be scales, etcetera. They're easier and less time-consuming on both ends of the survey. They are much faster and easier for your audience to answer; they don't take a lot of forethought to do that. They are also easier for you to analyze, as well. So keep that in mind. Just be careful not to bias them. Since you are providing the answers, you may not get the truth you're seeking if you haven't covered all the possible answers. Here's an example from a podcast survey. The survey was eight pages long. Far too long. But it was mostly full of closed questions. This is an example of some of those: What's your age? What's your gender? What's your highest level of education?
One thing I want to point out is that you have to make sure you are offering all possible answers in closed questions. Make sure you've covered your bases. In this example, "What is your gender?", I'm highlighting the "Other" box. So their options are "Male," "Female," "Prefer not to say," and "Other." For some organizations, you will not be able to do this. So, for whatever reason, just politically or religiously, it will not make sense for you to offer something other than Male/Female. However, for most of us, we have to offer other options now. So, make sure you're covering your bases.

Here are some examples of open questions. Open questions are more revealing. They're the questions that you ask them to answer in their own words. They're writing in an answer inside your text box. Some examples might be: Why did you choose to become a donor? What's the one way we can best show our appreciation for you? What one word would you use to describe our scholarship program? Why did you choose to become a member? There are all kinds of ways that you can tweak it so it's appropriate to the audience. Again, open questions allow respondents to answer in their own words, and they provide unanticipated insights. When people use their own words, you find out all kinds of things you never would have known. Usually, when we go into a survey, we have a working knowledge of the audience. Surveys are often made to confirm what we thought we knew with actual data. That happens a lot. However, I have never done a survey with a client or on our own where we have not found something unexpected. And the unexpected insights typically come from these open questions. That's the value of them; they're hugely valuable. They also tend to encourage more reflection because respondents are forced to think in their own words. Therefore, more accuracy, as well. So, they slow people down in filling out surveys; they take longer and tend to be more accurate. And they require more human time to analyze. Just like they take more time for your user to fill out, they take more time for you to analyze. This is why, in a typical survey, we recommend no more than two to three open questions. I'll give you an example. We surveyed a performing arts center. It was a brand survey, and it went to a large group of 50,000 people in their database. We got 2,500 responses, right on the dot. That was around 5%. We had three open questions in that survey. So then, we had to dig through 7,500 individual open-question responses. It adds up quickly; just be aware of that. In that podcast survey I mentioned, there were eight pages. However, only two of their questions were open questions: What do you like most about this podcast? What do you like least? Okay, again, you want to be slim with these open questions.

In your survey, should you ask closed, open, or both questions? Chances are you likely want to do both. Again, it goes back to the purpose of your survey. Can you do your full survey with closed questions? Sure, you probably can. Make sure with closed questions that you have an "Other" box. That allows people to write in open answers if they want to. But typically, the unexpected insights come from the open questions. I will typically recommend both. And again, I don't recommend doing more than three open-ended questions. If you need more than three, you'll probably want to adopt a different research method. You might want to move to informational interviews or something like that.
We generally try to marry our surveys with informational interviews to get deeper. One last thing: don't do this. This organization will remain nameless. This is a big wall of scales. A lot of organizations do this. They'll have a one to 10-point scale, and then they'll ask you ten different questions based on that scale. It's just taxing on the mind. While these are closed questions and should be really quick, the wall of scales is intimidating for your users.

Also, don't do this. This one was recent. You'll notice that the sender's email address is a do-not-reply address, and the subject line is "Tell us what you think!" This was someone requesting a survey response from me, sent from an address I couldn't reply to. Just keep this in mind. This is why we test; go ahead and test by sending the email to yourself.

Okay, so we are going to do a pop quiz. I will ask you guys to chat in your questions box with the answers to these questions. What's wrong with these questions? What is the fallacy in these questions?

Question number one: What is the most affordable and fun summer camp? Go ahead and put that in your questions box for me. Many of you are getting this. It is that we're asking two questions here. We're asking what's the most affordable summer camp, and what's the most fun summer camp? This is nearly impossible to answer because what's most affordable in your perception may not also be what is most fun. So, again, we're asking two questions in one. When I present this session, many times people will raise their hand and say, "That is a subjective question for a survey." That's okay. Because sometimes your surveys are there to get perception. The purpose of the survey is to understand brand perception, perhaps. So, asking a question like this to gauge perception is fine. Here's the way to fix it: What's the most affordable summer camp? What is the most fun summer camp? You can split it into two different questions. A lot of people will end up making two questions into one when they're trying to get their survey down to under ten questions. They think, oh, I'll combine them. But again, you cannot do that. Don't do that.

Okay, question number two. What is wrong with this one? How much do you enjoy our annual event? Go ahead and get your responses in. Yes, by the mere fact that we say "enjoy," we have biased our question, right? We're framing this as a positive. What if they didn't enjoy it? They won't want to tell you that now because you set them up only to enjoy it. You want to change this to "What is your opinion of our annual event?" That is using unbiased words versus positive framing.

Question number three. What's wrong with this question? When were you born? Before 1950, 1950 to 1960, 1960 to 1970, 1970 to 1980, or after 1980? On the surface, we've got all of our bases covered here. But what if you were born in 1960? Do you choose B? Or do you choose C? These are not mutually exclusive options. This is one thing to look out for when you do your surveys. You want to shift it: 1950 to 1959, 1960 to 1969, and so on.

Okay, last one. How often do you visit our website? The options are "Never," "Sometimes," "Regularly," and "Often." What's wrong with this question? These answers are vague. You might think you've covered your bases. But what's the difference between "Sometimes" and "Regularly?" How do I distinguish between B and C? You want to switch it and say: "Never," "A few times per year," "Once per month," "Once per week," or "Almost daily."
These are concrete and specific. Most people can't remember if they've done something within the last seven days or roughly the last 30. Once you get into a few months, I might think it was two months ago, and it turns out it was eight months ago. That happens to me all the time. Make sure the options are concrete. The other thing is you always want to cover your bases with an "I never do that" option. People tend to forget that option; they assume everyone does it at least sometimes. The same goes for "I always do that." Make sure you also include your extremes.

Five survey analysis techniques. Let's go into these, as well, and see how, once we get our information back, we analyze it. Read all responses for patterns. That's number one. When you get those open questions, make sure you go through and read all of those answers. When your survey is active, it's easy to jump into your data. If you're using Survey Monkey, Qualtrics, or something like that, it's really easy to jump into your data and start looking at it in the middle of that survey. I geek out on that stuff. I find myself getting giddy about it. It's ridiculous. I want to jump in there and look at everything. But I have to force myself not to do that. I will only look at the survey data once it is all in. The only thing I will check while the survey is running is how many responses I have. That's the only thing I'll allow myself to look at. That's usually on the dashboard, so it's easy to see without seeing anything else. Sometimes, you want to know if you need to give it a little push. I hold my viewing until the end of the survey because I want to be unbiased when reading everything. Otherwise, you can start to bias yourself just by seeing stuff early on. Once that data is in, I read each of those open questions multiple times. I read it, and after the first time, I think, "Okay, what do I remember?" So, if I read 100 different responses to the same question, I might make notes about what stood out in my head about those responses, like what patterns emerged. And then, I go back through them again, and I begin to name and group them as they relate. So, for example, I typically will use a tagging system, and I'll say this response is an answer to a particular campaign. If the question is, "Why are you a member?" the answer might be, "This campaign spoke to me," or "My parents were lifelong members." I'll tag those things like "lifelong members" or "campaign." I'll just start to pattern them together. Your software should allow you to do this easily and then sort them that way. So you can start to see which ones rise up to the top and have more answers.

You can also use a word cloud when it's appropriate. This is what a word cloud looks like. It's a quick and easy way to visualize the responses to a particular question. You'll notice that when I was talking about the open-ended questions and giving you examples, I often said, "What one word would you use to describe x?" I always like the one-word answers because even though the question is open, it's easier for people to come up with a single word. Also, it's easy for me to analyze because I can pop it into a word cloud. This one, for example, is in response to the summer camp question: "What one word would you use to describe summer camp?" What's beautiful about this is that I can also see what words our audiences use to describe our summer camp. Therefore, I can use those same words. They may not be the words we've been using. If "playing" or "play" was not key to our summer camp language, it is going to be now. I'm going to be using it now.
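To make this concrete, here is a minimal sketch of the tagging-and-counting idea and the word cloud idea in Python, assuming you have exported the open-question responses to a plain text file with one response per line and installed the third-party wordcloud package. The file names, tag names, and keywords are hypothetical examples, not features of any particular survey platform.

```python
from collections import Counter

from wordcloud import WordCloud  # third-party package: pip install wordcloud

# Hypothetical export: one open-question response per line.
with open("responses.txt", encoding="utf-8") as f:
    responses = [line.strip() for line in f if line.strip()]

# Simple keyword tagging: count how many responses touch each theme.
# These tags and keywords are made-up examples; adjust them to your own survey.
tags = {
    "campaign": ["campaign", "appeal"],
    "lifelong member": ["parents", "family", "grew up"],
}
tag_counts = Counter()
for response in responses:
    lowered = response.lower()
    for tag, keywords in tags.items():
        if any(keyword in lowered for keyword in keywords):
            tag_counts[tag] += 1
print(tag_counts.most_common())  # themes sorted by how often they appear

# One-word answers drop neatly into a word cloud image.
WordCloud(width=800, height=400, background_color="white") \
    .generate(" ".join(responses)) \
    .to_file("word_cloud.png")
```

Keyword matching like this is only a head start on the manual read-through described above; it will miss synonyms and misread context, so treat its counts as a rough sort order rather than a verdict.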
In general, be careful with averages. You don't want to use averages to report those numbers when using scales. In this example, the statement is "Cats are better than dogs." You might get five responses on a 1-to-5 scale, where 1 means disagree and 5 means agree. They are 2, 3, 3, 4, and 5. They're kind of across the board. If you average those responses, you'll get 3.4 as your average. It's neutral. Neutral to positive. Running the same survey at a different time, you might get two ones and three fives. Well, those responses are disparate. They are passionate one way or the other about whether cats are better than dogs. But your average is still 3.4. So, it still looks neutral. It looks like you got neutral feedback, and it matches the average from the more mixed set of results. Instead, this is how you want to show your responses to a scale question: show the breakdown by answer. So 40% somewhat agree, 20% agree, etc. That'll show any discrepancies or disparate information much better than an average.
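Here is a quick sketch of that cats-versus-dogs example in Python, using the two hypothetical response sets just described, to show how identical averages can hide very different distributions and how a per-answer percentage breakdown reveals the split.

```python
from collections import Counter
from statistics import mean

# 1-to-5 scale, where 1 means disagree and 5 means agree (hypothetical responses).
mixed = [2, 3, 3, 4, 5]        # spread across the board, leaning neutral-to-positive
polarized = [1, 1, 5, 5, 5]    # two strongly disagree, three strongly agree

print(mean(mixed), mean(polarized))  # both are 3.4 -- the average hides the split

def breakdown(responses):
    """Percentage of respondents choosing each point on the scale."""
    counts = Counter(responses)
    total = len(responses)
    return {point: f"{counts.get(point, 0) / total:.0%}" for point in range(1, 6)}

print(breakdown(mixed))      # {1: '0%', 2: '20%', 3: '40%', 4: '20%', 5: '20%'}
print(breakdown(polarized))  # {1: '40%', 2: '0%', 3: '0%', 4: '0%', 5: '60%'}
```

Reporting the full breakdown, the way the scale results are shown above, keeps a polarized audience from masquerading as a lukewarm one.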
Make sure you're focusing on the big picture. It's really easy to focus on specifics and outliers. If someone is a good writer and writes a passionate open-question response, you're likely to give that response more weight than one from someone who is not a good writer. That's just human nature. Be aware of that as you're moving through your surveys. Also, sometimes you'll get one piece of feedback that is either glowing or stinks. If it's one piece of feedback from one person, and you didn't hear anything like it anywhere else, maybe call that person to get more information. However, don't beat up the whole survey or your whole program because of that one piece of information. Instead, you want to notice the big items. Say you're seeing a piece of feedback coming up repeatedly. Again, you may want to marry that with some informational interviews. Once I've gotten as much as I can from the survey, I'll use interviews to test it and get more information. Also, you generally want to be more skeptical of good news than bad news. People tend to believe the good news. Therefore, just be aware that if everything's glowing, there might have been bias in your survey. Typically, you will get some mixed feedback, and that's kind of what you're looking for. Just be skeptical if everything's rah-rah. Also, depending on the audience, they're predisposed to like you. If you're surveying donors, for example, hopefully your donors like you. There's probably more information lurking if you haven't asked the right questions.

Lastly, confirm the information you're getting, and then make changes. Again, I cannot stress enough that there's no point in surveying unless you will enact change. Remember, too, that surveys are just one of the research tools at your disposal. I have mentioned informational interviews; sometimes we call them stakeholder interviews, which is typically what we do when we work with our clients. Focus groups are another obvious option: getting people in a room and discussing how they feel about something. You can also conduct a survey more than once. One of the challenges with brand is measuring it. Brand is notoriously difficult to measure. If I were thinking about a brand effort or redesigning my brand, I would do a survey early on, before I did anything, to take note of current perception. This would help me understand where we stood with our audiences. Then, I would undergo the brand change. Maybe a year after the brand has been out, or maybe six months, depending on timing, I would return and do that same survey again. If you know already that you're going to run a survey more than once, you want to make sure the questions you're asking now will also serve you six months from now, a year from now, etc.

Make sure you're making changes in your organization once a clear picture emerges. Again, this is the thing we asked for in our strategy: for each question, here's what I will do with the information when I get it. And I will act on that. So if you're not going to act on the information you're getting, don't ask. That's a question you can take out to make that survey shorter. If you don't have an action item for each of your questions, don't ask them. The survey exists to make changes and improve your organization. Start small with the change to make sure you're headed in the right direction. Say you're going to make a programmatic shift, and you run a survey. You find out that people don't understand the program; they don't understand the benefits of it; they don't know why it exists. You get that information; you make some small changes. Then, you can survey that audience again to see if that change has been realized, if they better understand it, and if they even noticed you've made a change. Again, you can make changes and then survey to see if you're headed in the right direction.

Lastly, report back. Most of the time, when you fill out a survey, you do not get the responses back. You don't know what everyone else said or how you stood compared to everyone else. You don't know what the organization will do with anything they receive. So, look for ways to report back the information that you get from your survey and what your takeaways are from it. This is an example from Barry University. They ran an alumni survey. And they came back and said: this is how many alumni responded; here are the breakdowns regarding how they ranked certain things, where they live, and their reasons for returning. This is all compelling information. So keep that in mind.

A quick recap. Remember that surveys are an inexpensive way to gather lots of data. They're wide but shallow, so you probably want to marry them with another research effort. They are indicators, not 100% truth. You cannot hold them up as 100% true unless you're reaching statistical significance. Make sure you're starting your survey with a goal and a plan that comes before writing questions. A good survey design is purposeful; it's targeted to certain audiences, it is brief, it's delivered well, it's often incentivized, and it is tested. You want to make sure that you're writing clear questions and providing context at the same time. You also want to read the results; I read through them multiple times. You want to avoid quick assumptions. Also, I recommend avoiding reading survey responses as they are coming in. And don't let the numbers fool you; don't just report on averages. Remember, at the end of the day, the purpose of the survey at its core is to ensure that you have information that you can hold up and say, "This is how our audiences feel about this." Research kills opinions. If you walk into a room and ask a question after a survey, almost everybody has the same answer because they all point to the same data. It's an excellent tool for getting everybody on board and getting consensus about the direction you need to move in because you have data to back it up.
Before we get to questions, I want to mention that you can get the slides at mightycitizen.com/surveys. That is up and live now if you want to grab it. Also, we have some tools available on that landing page. I've got a donor survey guide. So if you are at a nonprofit, in advancement at a university, or at a foundation or association, and you want to get started doing donor surveys, I've got a tool for that. If that does not apply to you, and your focus is communications and how you can better communicate with people, we've also got a survey up on communicating better with your audience. You can get both of those at mightycitizen.com/surveys. You can also get more tools and templates at mightycitizen.com/tools.

If you'll bear with me for a minute, I will head over here and look at my questions. Richard asks, "What about negative feedback from a survey? Do you report that back?" I think it depends on the kind of negative feedback you're getting. If you're getting wholesale negative feedback, you might want to share that with your audience. Suppose it's something that is a wholesale problem. In that case, you probably want to go to your audiences and say, "Hey, as part of our survey feedback, we heard this, and here's how we're going to improve it." If you got one-off feedback here and there, you have the option potentially to go back to that person. So say, for example, that you've run a donor survey and have a donor who's extremely unhappy. They left you their contact information. I'd recommend that my development director get on the phone with that donor. That's an important relationship, and I'd want to ensure they felt heard. If they're the only person who gave you that feedback, I would not report that to everyone. It depends on the feedback you're getting. But even if you're getting wholesale poor feedback, I would want to share that back with my audiences.

Someone asked, too, can you speak a little bit about the number of people you need to feel like you've got a good survey set? I probably want to base my survey on 100-plus people. Ensure you have at least 100 people responding to your survey to feel confident. That's a 10% response rate on 1,000 people or a 5% response rate on 2,000 people. You have to have enough people to survey initially. Again, an incentive is going to help you reach those numbers. If you do not have those numbers, if you're just starting out, or if your database isn't that big, start with those informational interviews. Do a dozen of those and see what information you get from them, perhaps. Then, you could try to back that up with some survey information.

Okay, I'm going to wrap it up there. I appreciate your time. I'm going to share my contact information here as well. Feel free to ping me if you have any questions or if I didn't get to your question. My email is rclemens@mightycitizen.com. And again, we will be following up with slides and a recording either later today or early tomorrow. This also wraps up our Mighty Big Month at Home! This is our last webinar of April. I appreciate you guys attending today and look forward to next time. Thank you.