This distinction can be summed up by contrasting “what people say” versus “what people do” (very often the two are quite different). The purpose of attitudinal research is usually to understand or measure people’s stated beliefs, which is why attitudinal research is used heavily in marketing departments.

While most usability studies should rely more on behavior, methods that use self-reported information can still be quite useful to designers. For example, card sorting provides insights into users' mental models of an information space and can help determine the best information architecture for your product, application, or website. Surveys measure and categorize attitudes, or collect self-reported data that can help track or discover important issues to address. Focus groups tend to be less useful for usability purposes, for a variety of reasons, but they provide a top-of-mind view of what people think about a brand or product concept in a group setting.

On the other end of this dimension, methods that focus mostly on behavior seek to understand “what people do” with the product or service in question. For example, A/B testing presents different design variants to random samples of site visitors while attempting to hold everything else constant, in order to see the effect of each design choice on behavior. Eyetracking, likewise, seeks to understand how users visually interact with interface designs.
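
To make the A/B-testing mechanics more concrete, here is a minimal sketch (in Python) of how visitors might be randomly but consistently assigned to one of two design variants and how the resulting behavior could be summarized. The function names, variant labels, and experiment identifier are hypothetical illustrations, not the API of any particular testing tool.

```python
# Minimal A/B-test sketch (hypothetical names, not tied to any specific tool).
# Each visitor is hashed into one of two variants, so the split is random
# across the population but stable for any individual returning visitor.
import hashlib

VARIANTS = ("control", "new_design")  # assumed: one design change under test, all else constant


def assign_variant(visitor_id: str, experiment: str = "homepage-cta") -> str:
    """Deterministically bucket a visitor into a variant."""
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]


def conversion_rates(events: list[tuple[str, bool]]) -> dict[str, float]:
    """events: (visitor_id, converted) pairs; returns conversion rate per variant."""
    counts: dict[str, list[int]] = {v: [0, 0] for v in VARIANTS}  # [conversions, visitors]
    for visitor_id, converted in events:
        variant = assign_variant(visitor_id)
        counts[variant][1] += 1
        counts[variant][0] += int(converted)
    return {v: (c / n if n else 0.0) for v, (c, n) in counts.items()}


if __name__ == "__main__":
    sample = [("user-1", True), ("user-2", False), ("user-3", True), ("user-4", False)]
    print(assign_variant("user-1"))   # same visitor always sees the same variant
    print(conversion_rates(sample))   # behavioral outcome ("what people do") per variant
```

The deterministic hashing is the key design choice here: it keeps the experience consistent for returning visitors while still producing a random split across the population, which is what lets the observed behavioral difference be attributed to the design change itself.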

Between these two extremes lie the two most popular methods we use: usability studies and field studies. They use a mixture of self-reported and behavioral data and can move toward either end of this dimension, though leaning toward the behavioral side is generally recommended.