The Qualitative versus Quantitative Dimension

The distinction here is an important one, and goes well beyond the narrow view of qualitative as “open ended,” as in an open-ended survey question. Rather, studies that are qualitative in nature generate data about behaviors or attitudes based on observing them directly, whereas in quantitative studies, the data about the behaviors or attitudes in question are gathered indirectly, through a measurement or an instrument such as a survey or an analytics tool. In field studies and usability studies, for example, the researcher directly observes how people use technology (or not) to meet their needs. This gives researchers the ability to ask questions, probe behavior, or possibly even adjust the study protocol to better meet its objectives. Analysis of the data is usually not mathematical.

By contrast, insights in quantitative methods are typically derived from mathematical analysis, since the instrument of data collection (e.g., survey tool or web-server log) captures such large amounts of data that are easily coded numerically.

Due to the nature of their differences, qualitative methods are much better suited for answering questions about why or how to fix a problem, whereas quantitative methods do a much better job answering how many and how much types of questions. Having such numbers helps prioritize resources, for example to focus on issues with the biggest impact. The following chart illustrates how the first two dimensions affect the types of questions that can be asked:


(Chart: Two dimensions of questions that can be answered by user research)

The Attitudinal versus Behavioral Dimension

This distinction can be summed up by contrasting “what people say” versus “what people do” (very often the two are quite different). The purpose of attitudinal research is usually to understand or measure people’s stated beliefs, which is why attitudinal research is used heavily in marketing departments.

While most usability studies should rely more on behavior, methods that use self-reported information can still be quite useful to designers. For example, card sorting provides insights about users’ mental model of an information space, and can help determine the best information architecture for your product, application, or website. Surveys measure and categorize attitudes or collect self-reported data that can help track or discover important issues to address. Focus groups tend to be less useful for usability purposes, for a variety of reasons, but provide a top-of-mind view of what people think about a brand or product concept in a group setting.

On the other end of this dimension, methods that focus mostly on behavior seek to understand “what people do” with the product or service in question. For example, A/B testing presents changes to a site’s design to random samples of site visitors while attempting to hold all else constant, in order to see the effect of different site-design choices on behavior, whereas eyetracking seeks to understand how users visually interact with interface designs.
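The core mechanics of A/B testing described above, randomly splitting visitors between designs and comparing what they actually do, can be sketched in a few lines. This is a minimal illustration, not any particular tool's implementation; the user IDs, the two variants, and the underlying conversion rates are all hypothetical, and a real test would add a significance check before drawing conclusions.

```python
import hashlib
import random

def assign_variant(user_id: str, variants=("A", "B")) -> str:
    """Deterministically assign a visitor to a variant by hashing their ID,
    so the same visitor always sees the same design on every visit."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Simulated behavioral data: did each visitor complete the target action?
random.seed(0)
visitors = {"A": 0, "B": 0}
conversions = {"A": 0, "B": 0}
for i in range(10_000):
    variant = assign_variant(f"user-{i}")  # hypothetical visitor IDs
    visitors[variant] += 1
    # Hypothetical underlying rates: design B converts slightly better.
    true_rate = 0.10 if variant == "A" else 0.12
    if random.random() < true_rate:
        conversions[variant] += 1

for v in ("A", "B"):
    print(f"Design {v}: {conversions[v] / visitors[v]:.3f} conversion rate")
```

Hashing the visitor ID (rather than flipping a coin per page load) is what keeps the experience consistent for each person while still producing an effectively random split across the population.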

Between these two extremes lie the two most popular methods we use: usability studies and field studies. They utilize a mixture of self-reported and behavioral data, and can move toward either end of this dimension, though leaning toward the behavioral side is generally recommended.