Quantifying Qualitative Data in UX Surveys

Sensorama Design
9 min read · Oct 5, 2021

Some Benefits and Practices

by Larissa Tramontin and Luiz Henrique Simões, from Sensorama’s team

Faced with an explosion of information around big data, analytics and data science as a whole, we see attention to qualitative information increasingly overshadowed by the synthesizing power and precision of numbers. In debates about the value of design and research to the user and customer experience, graphs and estimates of every kind dominate the speeches of different leaders. Who has never sat through a presentation like that?

We are talking about user experience, about a person, something that anthropologists, psychoanalysts and social scientists in general spend years trying to understand. After all, what is experience? How can we establish patterns of success and failure for something so subjective and volatile? Alongside this dilemma, anyone who works as an experience designer faces questions about how to attribute value to experience under such a numerical focus, even though all numerical information still needs to be explained in words, which reinforces how relevant context is.

Well, we could write a thesis presenting indisputable evidence of how much the world revolves around qualitative analysis of our reality. Instead, we will follow the maxim “If you can’t beat them, join them” 😎 and, by the end of this piece, we hope to have shown how poor numerical data is when interpreted on its own.

Let’s Get Started

With those who are not yet used to this reality in mind, this article presents an overview of how we can leverage quantitative analysis methods and tools to support qualitative research data (and vice versa), as well as some of the benefits this process brings, both for the researcher and for the stakeholder.

“Does this subject matter to me?”

If you are wondering whether these tips fit your day-to-day work, answer the following questions about your routine:

  • Do you need to deal with an excessive amount of data?
  • Do you need to present this data to someone in a consolidated and fast way?

If you answered “yes” to both questions, you are in the right place. This content was developed in the context of UX research, branding, market research, service design and other related fields, but the possibilities presented here can be incorporated into the reality of any segment.

Basic practices to start quantifying your research:

If a study is about a certain audience’s interest in podcasts, for instance, we will meet people who listen to them, people who know what they are but don’t consume them, people who are truly addicted, and people who have never even heard the word podcast. Every piece of research naturally starts by grouping similar information for each of these profiles, right? So how about starting to count these similarities and differences, as in the sketch below?
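As a rough illustration (the interviewees and segment labels below are hypothetical), a few lines of Python are enough to tally how often each profile shows up in a round of interviews:

```python
from collections import Counter

# Hypothetical segment labels assigned to each interviewee during analysis
segments = [
    "listens regularly", "knows but doesn't listen", "listens regularly",
    "never heard of podcasts", "truly addicted", "knows but doesn't listen",
    "listens regularly",
]

# Count each segment and print its share of the sample
counts = Counter(segments)
total = len(segments)
for segment, n in counts.most_common():
    print(f"{segment}: {n} of {total} ({n / total:.0%})")
```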

Keep in mind that the context of the numerical result is set by the research. If thirty people don’t like podcasts, and twelve of them don’t like radio either, maybe the problem isn’t exactly the podcast. For every qualitative finding, it helps to place the numbers along the user journey so that they tell the story of the process being mapped.

When considering the sample used in a survey, be careful not to emphasize quantitative data over qualitative data. The former isn’t always enough to justify the results on its own, especially with a small sample, and remember that the idea is for one to complement the other. In teams that are more mature in mixed-methods research, statisticians can, for example, use procedures suited to small samples, such as Kendall’s rank correlation, to draw more reliable conclusions about the relationship between variables. First, though, let’s build the basics. :)
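For the curious, here is a minimal sketch of that kind of procedure, assuming SciPy is installed; the ratings are invented and only serve to show the call:

```python
# Kendall's rank correlation on a small sample of ordinal answers,
# e.g. two 1-5 interest scales collected from the same respondents.
from scipy.stats import kendalltau

podcast_interest = [5, 4, 4, 2, 1, 3, 5, 2]  # hypothetical ratings
radio_interest = [4, 4, 3, 2, 2, 3, 5, 1]    # hypothetical ratings

tau, p_value = kendalltau(podcast_interest, radio_interest)
print(f"Kendall's tau = {tau:.2f}, p-value = {p_value:.3f}")
```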

To structure qualitative data successfully and without much effort, it helps to get comfortable with tools such as Excel or Airtable. And don’t worry: Miro, Figma and other beloved tools are still with us. Here is a quick, simple example that can be built with a free form tool such as Google Forms and used while you interview someone via Google Meet, Zoom, etc.

Input by Google Forms
Export to Google Sheets

This way, you end up with a neatly assembled spreadsheet and a simple interface for recording your findings, without having to worry about formatting or typing into cramped cells while conducting the interview. Each row holds the information from a single person, while each column represents a survey variable. With this structure, descriptive statistics tools have a much easier time turning qualitative information into visual representations.
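As a sketch of what comes next (the file name and column names below are hypothetical), the exported sheet can be read straight into pandas, where each row is a respondent and each column a variable:

```python
import pandas as pd

# Responses exported from Google Sheets and saved as a CSV file
df = pd.read_csv("responses.csv")

# Descriptive statistics on a closed question: share of each answer
print(df["Do you listen to podcasts?"].value_counts(normalize=True))

# Open-ended answers live in their own column, ready to be categorized
print(df[["Respondent", "When do you usually listen?"]].head())
```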

What needs to be done up to this point?

Find a way to turn your interview script into a Google Form, so that later on it is easy to export the responses to a Google Sheets file for analysis and categorization.

Now we can start thinking about categorizing the qualitative information so that we can see how frequently it appears in our interviews (or responses). From there, we analyze differences and similarities, group user profiles and/or usage profiles, or explore the many other possibilities that open up with each new survey.

We are not inventing anything new, nor trying to reinvent the wheel. We are simply talking about standards for collecting and arranging data. Many people reading this may have used similar processes in dissertations, academic articles, and so on.

In the given example, Mack responded that she listens to podcasts and, within her open-ended responses, she provided us with interesting information:

  • She prefers to listen on her way to work
  • During the pandemic, she listens while drinking coffee

From a single answer we were able to identify some key ideas. These categories help us contextualize the journey/process we are analyzing; in this case, the words and expressions “pandemic”, “coffee” and “on the way to work” come up. It’s pretty simple, right? Basically, we break the information down and investigate it, in the spirit of Descartes’ method at the foundation of modern science: separate the whole and study the parts.

If we have 10 users answering, we might find that the “pandemic” category, for example, appears in 70% of the responses, which helps us identify similarities, and so on.
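A minimal sketch of that count, with invented respondents and categories, could look like this:

```python
import pandas as pd

# One row per respondent; the categories were assigned during analysis
data = pd.DataFrame({
    "respondent": ["Mack", "Ana", "João", "Lia", "Rui",
                   "Bea", "Caio", "Duda", "Edu", "Fê"],
    "categories": [
        ["pandemic", "coffee", "on the way to work"],
        ["pandemic"], ["on the way to work"], ["pandemic", "coffee"],
        ["pandemic"], ["coffee"], ["pandemic"], ["pandemic"],
        ["on the way to work"], ["pandemic"],
    ],
})

# How many of the 10 respondents mention each category
n_respondents = len(data)
share = data["categories"].explode().value_counts() / n_respondents
print(share)  # "pandemic" appears in 7 of 10 responses -> 0.7
```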

When analyzing the example in the table below, you will notice that the rows for “Mack” repeat. This is a way of isolating the identified categories so that each one can be treated as a distinct piece of data. It becomes important as soon as you want to filter information in software such as Google Sheets/Excel, or decide to view this data in a Business Intelligence (BI) system, for instance.
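Here is a small sketch of that transformation, again with hypothetical data: one respondent with three categories becomes three rows, exactly the long format that filters and BI tools handle well.

```python
import pandas as pd

# One respondent whose open answer produced three categories
wide = pd.DataFrame({
    "respondent": ["Mack"],
    "listens_to_podcasts": ["Yes"],
    "category": [["pandemic", "coffee", "on the way to work"]],
})

# One row per (respondent, category) pair: "Mack" now repeats three times
long = wide.explode("category")
print(long)
#   respondent listens_to_podcasts            category
# 0       Mack                 Yes            pandemic
# 0       Mack                 Yes              coffee
# 0       Mack                 Yes  on the way to work
```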

However, if you need a simpler consolidation, tools like Airtable or Notion allow you to put more than one category in the same table field and also offer chart visualizations and other functionality. By the way, if you are thinking about how to store survey data so that it can be reused quickly and easily in another project or by another team, these platforms are great alternatives.

Categorization

Benefits for the researcher and stakeholder:

Digging into the findings, seeing them from a real day-to-day perspective and creating solutions that benefit people takes time that is both necessary and precious for every researcher: the time spent analyzing the collected data.

When we consolidate research quantitatively, we create opportunities such as saving time, gaining a more comprehensive view of the problem quickly and efficiently, and, perhaps most importantly, contextualizing the qualitative information instead of devaluing it or rendering it useless. This process can speed up decisions about where to dig deeper or which recommendations should come out of the results.

It is also worth thinking about how to reference our projects so that, over time, we naturally build a web of knowledge. A simple, direct example is the end of any undergraduate thesis: bibliographical references are nothing more than evidence that knowledge is being built on historical continuity and on similarity between themes, don’t you agree? One way to start is to create a set of references tied to each step of a project. If a given sample teaches us that podcasts are mostly listened to on mobile devices, we can give this learning a label and reference it, basically like we do in a term paper or academic dissertation.
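One possible structure for such labeled learnings (the identifiers and field names below are only a suggestion) is simply another table, where each finding gets an ID, a set of tags and a pointer back to the project it came from:

```python
import pandas as pd

findings = pd.DataFrame([
    {"id": "POD-001",
     "learning": "Podcasts are mostly listened to on mobile devices",
     "project": "Podcast usage research",
     "tags": "podcast; mobile; listening habits",
     "date": "2021-09-20"},
])

# Later, any project can cite the finding by its id, like a bibliographic reference
print(findings.loc[findings["id"] == "POD-001", "learning"].iloc[0])
```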

Ok, and how to operationalize this?

One way is to use the tools already mentioned here, such as Notion, Airtable or Excel. The problem with Excel is that each file carries a lot of information that makes sense in its own context, but you can’t always use the same spreadsheet to run every project and piece of research. So what would be a good alternative?

A good way to add value when consolidating information or presenting the research to stakeholders, beyond simply “putting charts in a presentation”, is to use data analysis and Business Intelligence (BI) tools such as Google Data Studio or Power BI.

Here at Sensorama Design we use these tools in some projects, as long as the effort is consistent with the results our clients expect, or when it provides the wealth of information needed for the diagnosis and recommendations around the problem at hand (in which case the research team itself benefits from the insights). On top of that, the stakeholder can share the resulting report with other teams interested in the same data.

Returning to our discussion of references, BI tools can keep different analyses together in a larger system, labeled with a set of tags that bridge the gap between projects related to the same users, processes, platforms, etc.

If you already have an organized database, the result is quite simple and rewarding, as the interactive view of the data can bring new perspectives and insights. Take Google Data Studio, for example: it’s made by Google, requires no installation, is free and integrates with Google Sheets, which makes the whole process easier. You don’t need to know much about Excel or data science to use these systems; just be curious and willing to learn.

Sticking with the podcast usage research example, imagine that you need to conduct new interviews (with the same sample or not) within the same project, this time to find out which platforms users rely on most (desktop/mobile). Wouldn’t it be interesting to use the same database (image 3) to enrich knowledge on the subject? This can be done simply by adding new questions to the Google Form or editing the spreadsheet manually, always respecting the column and row logic. Connecting this example to the benefits of a BI platform, the image below demonstrates how this database could be viewed.

Example of interactive analysis
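Outside a BI tool, the same kind of cross-view can be sketched in a few lines; the data below is invented, but it mirrors the enriched database described above (categories from the first round, platform from the second):

```python
import pandas as pd

responses = pd.DataFrame({
    "respondent": ["Mack", "Ana", "João", "Lia"],
    "category":   ["pandemic", "pandemic", "on the way to work", "coffee"],
    "platform":   ["mobile", "mobile", "mobile", "desktop"],
})

# How categories and platforms intersect: the same table a dashboard would chart
print(pd.crosstab(responses["category"], responses["platform"]))
```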

Considerations

Some precautions are very important and must be observed when you are preparing your work.

Assess what you want to achieve with the results and which “deliverables” you or your client expect, especially if you are still learning new tools for data consolidation.

Remember that the key to scaling your qualitative findings is defining a good system for naming and linking the qualitative information.

If you are considering building a “research repository”, for example, it is critical to define some key categories (such as a respondent’s ID, e.g. name or telephone number) and to keep supporting information that stays consistent when the analysis is done by people who were not involved in the survey, such as the date of data entry or the city where the user is located, ensuring that the information is not lost and is easily identified.
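As a sketch only (the column names are our own suggestion, not a fixed standard), a repository table could reserve those key fields up front:

```python
import pandas as pd

repository = pd.DataFrame(columns=[
    "respondent_id",  # name or telephone, as mentioned above
    "project",        # which study the record came from
    "input_date",     # when the data was collected
    "city",           # where the respondent is located
    "question",       # the survey variable
    "answer",         # the raw answer
    "category",       # the label assigned during analysis
])

print(list(repository.columns))
```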

Conclusion

Considering that organizations are investing more and more in structuring and analyzing the data that orbits their business, it is clear that taking advantage of these methods can add a lot of value to what you deliver to your clients, besides making your own work easier. Keep in mind that clients are often used to looking at numbers rather than qualitative data, so you will be presenting something already familiar to their routine.

Finally, we understand that logical thinking is important for anyone responsible for quantifying the data, but don’t be afraid to learn and to challenge your team on this. It’s easier than you think.

Links to the example used in this article:

Did you like the tips? If you want to access the documents, here are the links to the database and to the consolidated visualization.

*This article is also available in Portuguese; click here to check it out. :)

