Iterative design is a proven approach for optimising the usability of a product or service. Teams create prototypes, test them with users, find problems and fix them. But iterative design does not guarantee innovation. To develop innovative designs, we need to question the way we have framed the problem and instead focus on our users’ underlying needs.
In which we go on safari, stop at a red light, meet a zoologist, and discover four classic questions that can help us design better user research.
A usability test is the wrong research method when you want to discover if there's a real user need for your product; when you want to understand the environment where your system is used; and when you want to find out how people use your product in their daily lives. So why do I almost always recommend a usability test as a team's first user research activity?
We take a look at some subtle yet pervasive experimenter effects, at ways they can bias the outcome of a design experiment, and at what we can do to control their influence.
The Design Studio is a wonderful methodology to encourage multidisciplinary design, but in practice teams often create design concepts that aren’t grounded in user research. We can bake user research findings into every design concept that emerges by using the context of use (users, goals and environments) as a constraint. As an added bonus, this approach helps teams create many more solutions to a design problem.
We're seeing a sea-change in our industry as firms scramble to build fledgling user experience (UX) teams. While this is a sure sign of a maturing discipline, it is not without its teething problems. In particular, the voice of the UX team can sometimes sound more like a whisper. Why do some UX teams fail to achieve the impact expected of them? Here are 6 mistakes we've seen in UX teams that prevent them from having boardroom influence.
Most companies would claim to design products and services that are simple to use. But when you ask customers to actually use these products and services, they often find them far from simple. Why is there a disconnect between what organisations think of as simple and what users actually experience?
There are few things more likely to make your design or UX project difficult than a poorly conducted stakeholder meeting. Structuring your stakeholder interview around a few simple techniques will ensure you get off to a good start and set you up for success.
With the advent of Lean UX — a kind of science of design — the ability to design and conduct an experiment should now be an important part of every designer’s skill set. But what is an experiment? How do you design an experiment? And how can you trust the results?
UX debrief meetings are sometimes viewed as little more than a way to wrap up a project. This is a mistake. A UX debrief meeting can accomplish much more than just tie a bow on the project. But it's easier to get a debrief meeting wrong than it is to get it right — as I painfully discovered during the debrief meeting from hell.
Most new products fail within the first few months after launch. This article describes 10 critical thinking tools that can be used to flag concerns about the project you are working on. These tools can be used by all team members to help save — or in some cases, kill off — struggling projects.
To be an effective representative of both the user and the designer, and to help steer decision-making, usability practitioners must find a way to influence retailers of consumer products. That means building a new partnership with marketing.
It won’t have escaped your notice that despite many companies investing in user experience, everyday consumer products still have the ability to frustrate the living daylights out of people. I argue this is because marketing teams, influenced by big retailers, unwittingly block the design team’s view of the end user.
Without a clear understanding of the research problem, you cannot expect customer or user research to deliver useful findings. Here are five things you can do to help better define a research problem and sharpen your research question.
Reading user instructions continues to rank high on people’s lists of ‘activities-to-be-avoided-at-all-possible-costs’. We’ve worked with a number of clients to improve their user support materials and we frequently encounter five common mistakes made by development teams. This work has given us some insight into how best to avoid these problems occurring in the first place.
Two measures commonly taken in a usability test — success rate and time on task — are the critical numbers you need to prove the benefits of almost any potential design change. These values can be re-expressed in the language that managers understand: the expected financial benefit.
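The arithmetic behind that re-expression is straightforward. Here is a minimal sketch in Python of how success rate and time on task might be converted into an annual financial figure; all function names and input figures are hypothetical illustrations, not data from the article:

```python
# Hedged sketch: converting two usability-test metrics into an expected
# annual financial benefit. Every figure below is a made-up illustration.

def annual_benefit(success_rate_before, success_rate_after,
                   annual_users, value_per_success,
                   time_before_s, time_after_s, cost_per_hour):
    """Estimate the yearly benefit of a design change from two measures:
    the change in task success rate and the change in time on task."""
    # Extra successful outcomes (e.g. completed purchases) per year
    extra_successes = (success_rate_after - success_rate_before) * annual_users
    revenue_gain = extra_successes * value_per_success

    # Task time saved across all users, valued at a labour/support rate
    hours_saved = (time_before_s - time_after_s) / 3600 * annual_users
    time_saving = hours_saved * cost_per_hour

    return revenue_gain + time_saving

# Illustrative inputs: success rate rises from 60% to 75% for 10,000
# users a year, each success worth £40; task time falls from 300 s to
# 240 s, with an hour of user time costed at £25.
print(round(annual_benefit(0.60, 0.75, 10_000, 40.0, 300, 240, 25.0), 2))
```

A sketch like this makes the point of the article concrete: the same two numbers a usability test already produces can be restated as a figure a manager can weigh against the cost of the design change.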
The parallels between good research and good detective work are striking. In this article I take a close look at what user experience researchers can learn from the investigative methods used by detectives. And, in the spirit of all the best detective stories, we draw an important conclusion: if you want to become a better researcher you should learn to think like a detective.
We're increasingly asked by organizations for advice on building a user experience competency. Our advice is to start at the top and get the right person for that first critical leadership role. User experience leaders demonstrate 3 core competencies: they understand research; they follow user experience methods and standards; and they are great communicators.
When properly carried out, usability reviews are a very efficient way of finding the usability bloopers in an interface. But there are four common mistakes made by novice reviewers: failing to take the user’s perspective; using only a single reviewer, rather than collating the results from a team; using a generic set of usability principles rather than technology-specific guidelines; and lacking the experience to judge which problems are important.
In spite of a proliferation of books, articles and blogs explaining how to measure usability, few companies seem to put their usability metrics to good use. In this article we show how you can link the numbers from usability tests to the numbers that steer business decisions — and in the process, influence your company's business.
A recently published international standard requires manufacturers of medical devices to follow a user-centred design process. To comply, manufacturers of medical devices will need to change the way they design, develop, test and manufacture their systems.
Until usability gets embedded in the processes of your company, you'll probably find you need to justify the investment. Fortunately, usability initiatives deliver a major return on investment: it's not unusual for usability projects to return benefits of 5-10 times their cost in the first year alone.
Being frugal during economic hard times is good business practice. So how can you squeeze your usability budget and still deliver great insights? As well as saving you money, these 10 tips will also help you explode the myth that usability must, of necessity, be expensive and time-consuming.
Are you a CIO, purchasing officer, or IT manager about to invest in productivity software for your company? If you are, here's a question you should ask your supplier before you sign on the dotted line: "Just how usable is this product?" Astonishingly, most companies won't be able to answer, and those that try will answer the question only vaguely. But now help is at hand. It's called the Common Industry Format (CIF). And it's about to change the game.
Important roads in London are known as 'red routes' and Transport for London do everything in their power to make sure passenger journeys on these routes are completed as smoothly and quickly as possible. Define the red routes for your web site and you'll be able to identify and eliminate any usability obstacles on the key user journeys.
ISO have released a new standard for measuring the usability of everyday products, like ticket machines, mobile phones and digital cameras. This standard, ISO 20282, includes test methods for quantifying the usability of consumer products to ensure they meet a pre-defined quality level. This development is exciting because the standard's focus on usability measurement reflects a sea change in the evolving practice of usability.
Focus groups have come under critical scrutiny in recent times and their reliability as a means of understanding customers is frequently questioned. One problem is that, in spite of what conventional wisdom tells us, it is not the voice of the consumer that matters. What matters is the mind of the consumer. The mistake is in believing that what the mind thinks, the voice speaks. It is time to start exploring methods that can probe beyond the obvious and deliver stronger predictive value. In this article we take a closer look at focus groups and suggest when they should and should not be used.
It is easy to think of all numeric user research data as being equivalent. For the most part numbers seem to be, well … numbers. In reality, however, we process bits of information in different ways, using different sorting and measuring rules. Although we often do not stop to think about it in this way, the recording and classifying of data is always done according to a scheme.
Many people think questionnaire and survey design is common sense. If that's true, then common sense can't be that common, because many surveys are very poorly designed. For example, surveys often ask irrelevant questions, biased questions, or just too many questions. These problems make the resulting data impossible to analyse. This article reviews best practice in survey design.
User manuals have a bad reputation. In a recent USA Today poll that asked readers "Which technological things have the ability to confuse you?", user manuals came out top! Increasingly, companies are rethinking the way they approach user manuals. Here are some tips for improving the usability of user manuals.