1E has built one of the greatest tools available in the market today—Tachyon Experience—to measure the devices that people work with in a business. We gather data about hardware and software performance and led the way with a synthetic transaction layer used to give an experience score to a device. With all that knowledge you can do a lot with our tools—take a look here to see a real-world example of Tachyon Experience.
Even with that knowledge in your hands, though, there are questions that an IT organization, or a company at large, cannot answer without directly asking the people affected. For instance: how do your users actually feel about their IT experience?
For years, Customer Satisfaction surveys have taken place annually or quarterly to feel the “pulse” of the workforce overall in the way that they interact with a division (like IT, Service Desk, or HR). These surveys are often long, sweeping in their approach, and attempt to measure several important criteria around employee engagement and satisfaction for general services.
At 1E, we have taken a more modern approach. Pulse Surveys, in the form of Tachyon Sentiment, are embedded in Tachyon Experience with the primary purpose of providing insight into how users feel about their IT experience and mapping that to the objective reality of their devices. As pulse surveys have grown, we have found Sentiment being used to ask numerous other questions thanks to its simplicity and its ability to track changes over time.
To start down the journey of pulse surveys and gather sentiment data, follow these 7 steps to get the most out of the 1E product and ensure you receive the honest, actionable feedback you’re looking for.
This is probably an obvious first step, but anyone who has tried to do academic research, or science of almost any type, knows that answering this question concisely is very difficult. But you need to know what you want to measure to get the outcome you want. App store ratings are a good example. Their goal is to get as many positive reviews as possible. That’s why it’s common to see the rating prompt only after you have identified yourself as someone who likes the application or recently had a positive experience. The method they use complements what they want to measure.
When you want to measure a specific metric (and you should want to be specific), think about precisely what it is you want to ask. For example, a common question is “Rate your experience with IT over the past n days” with some rating scale (1-10, for example). If the goal is to get an overall feeling for how users view IT, this question may work. The answer, though, will depend heavily on what the last IT-related interaction was. If this particular person opened a ticket recently, the score will reflect the quality of that ticket interaction. If the person received a new application or device recently, the rating will likely reflect how they feel about that application or device. The point here is to be very precise in your question to get the feedback you want.
Another common example today is companies migrating from traditional file services to cloud services like OneDrive for Business or Box. In this instance, let us presume you want to measure how users feel about the new application and its associated service. There are many important pieces of information you may want to learn, but you should avoid the trap of asking dozens of questions to build a deep, sociological, experiment-style survey. For example, if you have already selected a vendor for your transformation (say Microsoft OneDrive for Business) and you ask how people feel about OneDrive, it is important to think about what you can actually do to improve their feelings. You could work to adjust the perception of OneDrive, or come to understand what it is people don’t like about the application. However, if you have already chosen an application without user feedback, it is unlikely to be useful to learn that your employees dislike the application outright, because you are not going to change it.
I will give a tip here: if you want to get the most out of our survey system, think about creating a management group for a set of users who are early adopters of a technology, measure their technical experience with the product, and target them for surveys. This way you get a feel for how early adopters interact with the application. It’s even better if you are doing a ‘bake off’ of different solutions and want your users to help determine the best application. By connecting the technical metrics with the survey responses, you end up with a very objective way to measure an application before making it an enterprise standard.
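To make that concrete, here is a minimal sketch, assuming hypothetical CSV-style exports rather than any real 1E API, of how per-device experience scores and survey ratings could be joined and compared across two candidate applications during a bake off. The column names and numbers are purely illustrative.

```python
# A minimal sketch (not 1E's API): join hypothetical per-device experience
# scores with survey ratings for an early-adopter group, then compare two
# candidate applications in a "bake off". All names and values are made up.
import pandas as pd

# Hypothetical export: one row per device with an objective experience score.
metrics = pd.DataFrame({
    "device_id": ["d1", "d2", "d3", "d4"],
    "app": ["OneDrive", "OneDrive", "Box", "Box"],
    "experience_score": [8.1, 7.4, 6.9, 7.8],
})

# Hypothetical export: one survey response (1-10 rating) per device/user.
responses = pd.DataFrame({
    "device_id": ["d1", "d2", "d3", "d4"],
    "rating": [9, 7, 5, 8],
})

combined = metrics.merge(responses, on="device_id")
summary = combined.groupby("app")[["experience_score", "rating"]].mean()
print(summary)  # objective metric and subjective rating, side by side per app
```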
Alternatively, you could focus on measuring the attributes you can change. For example, “How was the migration from your file share to OneDrive?” is a question that can help you focus on the experience of the migration itself. A question like “Can you rate your experience finding the files you need in OneDrive?” may tell you whether you need to drive user training on this specific element of OneDrive.
The oldest trick when you want a certain kind of answer is to frame the question in a way that guides the respondent to the answer you want. One of the most common examples you might have experienced is the restaurant standard “Everything good?” or “How good is the food?”. By throwing ‘good’ into the question you are already leading the person answering down the garden path. Whenever possible, avoid building the desired outcome into the survey question. If the goal were honest feedback rather than good tips or the appearance of a happy customer, the server would ask, “How was the meal?” or “How was the service?”
As an example, let us presume your company has recently switched from the Zoom service to Microsoft Teams for meetings. You could ask “How good is your overall experience with Microsoft Teams?” [1-10] [Not very good – Amazing experience]. It would be wiser to ask the non-leading “How has your experience been using Microsoft Teams?” [1-10] [Very Poor – Very Good].
There are some variations here when you want to measure very specific questions that appear to be leading. For example, very common questions are “How likely are you to recommend our product to…?” or “How likely are you to recommend our company as a good place to work?”. These are examples of how you can target a question with a specific idea in mind (recommendation). As long as you ensure honesty and integrity in the questions, these are also useful. One of the reasons that Net Promoter Score (NPS) caught on was its insistence that only a person with a very high rating counts as a promoter. Individuals tend to rate in the middle unless they have a very strong feeling either way, and the goal of NPS was to catch those strong feelings, because one of the general weaknesses of surveys is that respondents tend to pick the middle road. I’ll give two more examples of why this can be risky when you want clear information.
“How does your laptop perform?” (1-10) 1 = Very Poorly, 5 = Average, 10 = Very Well
or
“How good has your experience been with your laptop?” (1-10) 1 = Very Bad, 5 = Good, 10 = Very Good
By substituting the word ‘good’ into the question and giving 5 the qualifier “Good”, I can now say to my managers that n% of people rated the performance as good or better. With the first survey I would have reported, more honestly, that the experience is considered average rather than good. Just consider the use of language when designing surveys to ensure you get the insights you are looking for.
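Coming back to NPS for a moment, here is a tiny worked example of the standard calculation: only 9s and 10s count as promoters, 0 through 6 count as detractors, and the middle is deliberately ignored. The ratings below are made up for illustration.

```python
# Standard Net Promoter Score calculation on a 0-10 scale.
# The sample ratings are illustrative only.
ratings = [10, 9, 8, 7, 7, 6, 5, 9, 10, 3]

promoters = sum(1 for r in ratings if r >= 9)   # strong positive feelings only
detractors = sum(1 for r in ratings if r <= 6)  # anything 6 or below counts against
nps = 100 * (promoters - detractors) / len(ratings)

print(f"NPS: {nps:.0f}")  # 4 promoters, 3 detractors out of 10 -> NPS 10
```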
One of the most frustrating experiences I have when answering questions is having to do mental gymnastics to understand the question. This is often the case when a negative is used in the question or in the answer: “What didn’t you think was bad?”, “What wouldn’t you have changed about the experience?”, “How much worse is Teams than Zoom?”.
Negative questions, in general, put more burden on the person answering them. If at all possible, phrase questions positively and in as few words as possible for clarity. Typically, a negative question like “How much worse is Teams than Zoom?” can be changed to “How much better is Zoom than Teams?”, or better still, “Rate your overall experience with Zoom as compared to Teams”.
Similarly, the answers to the questions should be both clear and consistent. If you tend to use “Very Poor, Poor, Average, Good, Very Good”, then use that scale every time it makes sense (or just use stars or happy faces if they are available). Avoid using “Very Good” once and then “Excellent” or “Best” later. If the highest value is Very Good, then use that as the highest value whenever possible. Agreeing on standards is easy with Tachyon Sentiment because we offer rating scales out of the box, so consistency comes naturally.
There is plenty of survey-design research showing that surveys with lots of questions get far fewer responses than simple one-to-three-question surveys. If you want to maximize results, ask questions that can be answered in less than a minute. The fewer questions you ask, the less often you ask them, and the less time they take to answer, the greater your response rate. Prioritize what you really want to know and avoid asking extraneous questions.
Sample size is a complicated science. Without getting into the details, I’ll give a quick tour of the most important elements that determine sample size when it comes to getting responses. To find out what you need, you should be able to answer this question: how many people could you ask this question today? If the question is “How was the migration from File Share to OneDrive?”, your population is everyone who will be migrated from traditional file shares to OneDrive for Business.
Another question you need to ask yourself is: how important is it to be accurate? Most of us like to have usable results, but that does not mean you need 95% accuracy. Often we are happy to be 80% confident of our answer, because if we ignore results that are likely 81% accurate, we have no answer at all. I am not getting into the math here, but the obvious point is that the more people answer the question relative to the population size, the more accurate your results are. But do not despair: even if you have a total of 10,000 people you want to ask, if half of them respond you statistically have about a 1% margin of error (and a few hundred responses, roughly 370, is already enough for a 5% margin of error). For the professionals out there, I am obviously rounding values here, but I am being generally accurate. The point is that the bigger the total population, the smaller the share of it you need to hear from to get a good result. That is great news if you work for a big enterprise, because even a small number of respondents can still give you accurate results. If you work for a smaller company, it does mean a bit more work to get respondents.
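For anyone who wants to check these figures, here is a rough sketch of the standard margin-of-error formula with a finite population correction, assuming 95% confidence and a worst-case proportion of 0.5. The numbers are illustrative, not from any real survey.

```python
# Margin of error for a survey sample, with a finite population correction.
# Assumes 95% confidence (z = 1.96) and the worst-case proportion p = 0.5.
import math

def margin_of_error(population: int, respondents: int, z: float = 1.96) -> float:
    p = 0.5  # worst-case proportion maximizes the margin of error
    standard_error = math.sqrt(p * (1 - p) / respondents)
    fpc = math.sqrt((population - respondents) / (population - 1))
    return z * standard_error * fpc

print(f"{margin_of_error(10_000, 5_000):.1%}")  # ~1.0% with half of 10,000 responding
print(f"{margin_of_error(10_000, 370):.1%}")    # ~5.0% with roughly 370 responses
```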
Tachyon Sentiment offers you the opportunity to ask the same questions over a period of time to monitor the score and see how sentiment is evolving in your enterprise. But how often should you be requesting feedback? The answer will vary from enterprise to enterprise, and even more so when you factor in how many different questions a person is being asked over a given period.
The best way to determine how often to ask a question is to watch your response rate. If the first question received an 80% response rate (wow!) and the second gets 60% a month later, watch carefully. If the dip continues, try changing the frequency of the question to see if you receive more answers. This part is not an exact science, as the variables involved are too numerous to mention (are you asking during a very busy time like year end, or during a period with a lot of holidays or festivals, etc.?).
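One simple way to keep an eye on this is to track the response rate for each survey wave and flag consecutive drops. Here is a minimal sketch with made-up numbers; in practice the invited and responded counts would come from your own survey data rather than any particular export.

```python
# Track response rate per survey wave and flag a declining trend.
# The wave data below is invented purely for illustration.
waves = [
    {"month": "Jan", "invited": 500, "responded": 400},  # 80%
    {"month": "Feb", "invited": 500, "responded": 300},  # 60%
    {"month": "Mar", "invited": 500, "responded": 240},  # 48%
]

previous = None
for wave in waves:
    rate = wave["responded"] / wave["invited"]
    trend = "" if previous is None else (" (down)" if rate < previous else " (steady/up)")
    print(f'{wave["month"]}: {rate:.0%}{trend}')
    previous = rate
# Two consecutive drops is a hint to revisit how often the question is asked.
```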
We firmly believe that the general sentiment score questions, especially, should be asked at a repeating interval to gauge the overall heartbeat of your users’ sentiment; but that cadence must be adjusted to the realities of the feedback you receive.
This might seem a strange point to make, as sentiment is often simply information rather than a prompt for action. However, the tool we have built, with its ability to ask targeted questions of the populations affected by a change, is focused on action. If you discover in the course of your surveys that you have unhappy users during your migration to OneDrive for Business, for example, take steps to act on it. If you aren’t sure what is wrong, ask follow-up questions or send people to gather more information. Adjust your process or tools, and then ask again to see if you are achieving better results.
The most important part of surveying people with our tool is not simply to get a heartbeat, but to make the entire organization more engaged and excited, and to deliver a better user experience. If you don’t know what you are going to do with the results of a survey, avoid running it and instead ask a question you can act on once you know the answer.
Interested in learning more about how sentiment gathering can transform a business? Attend our upcoming webinar series to learn more about this, and about how it plays a part in a bigger initiative to positively influence the employee digital experience.