How Long Is the Wait?

Users want to know.

Evan Dody
February 23, 2015

Everyone hates waiting.

There is both anecdotal and research-based evidence that posting wait times in real time makes consumers less unhappy about waiting in line. Service providers from Disney World to the DMV have taken steps to provide this information in an attempt to improve their users’ experiences. What effect do these efforts have on brand perception, we wondered, especially for service providers that communicate with users digitally? 

As part of our ongoing research into emerging and conventional usability best practices, we tested users’ emotional responses to emergency room wait times posted on a hospital’s website. We wondered if the conventional wisdom — that transparency about wait times improves the user experience, even if the wait is a long one — would hold in an emergency situation, and what impact the introduction of wait times would have on an already-stressful ordeal. We hope that what we learned in the ER context might extend to other spheres, from public transit to theme parks.

Emergency room wait times.

We chose hospital websites as our research stimulus for several key reasons: 

  1. Emergency rooms are notorious for long wait times. 
  2. Requiring emergency care can be a stressful experience. 
  3. Emergency room wait times have become more widely available in recent months. 
  4. As digital products and services improve user experiences, including in healthcare, patient expectations are on the rise. 

Our recent research shows that making even small digital improvements to operations and infrastructure can create a more positive healthcare experience, and that patients want easier access to healthcare information online. Given that a visit to the emergency room is almost always stressful, it’s worth asking how adding an honest wait time to the hospital’s home page might shape patients’ perception of care and support the broader strategic goals of healthcare providers. Does posting the average wait time on the hospital’s home page make people more or less likely to visit the emergency room? How does providing this information make users feel about their healthcare experience?

How we tested.

We tested two different versions of wait time information: one that provided just the current ER wait time, and one that compared the hospital’s current ER wait time with the national average. Participants were asked to imagine that it was late at night, that the doctor’s office was closed, and that they needed to visit the emergency room to deal with an urgent (but not life-threatening) injury.

To probe how viewing a wait time made users feel, we used neuromarketing technology, an experimental method designed to test user reactions before the subject is able to articulate them. The tool measures pre-verbal emotional responses by asking participants to interact with a hospital home page, then respond to a rapid succession of images on a screen. This approach is meant to circumvent rationalization and provide access to unfiltered emotional responses. 

After participants were introduced to the neuromarketing tool and asked to imagine their emergency scenario, each participant interacted with the first hospital home page, which showed the wait time by itself, and then used the neuromarketing tool. Afterward, an interviewer asked each participant a series of questions. The participants then interacted with the alternative home page, which compared the wait time to the national average, and were interviewed again.

For our research stimulus, we used this generic hospital home page.

[Image: the generic hospital home page used as the research stimulus]

Findings.

Proximity to the ER is still paramount.

Almost all participants perceived the first stimulus — the straightforward hospital wait time — positively. But while the majority of participants stressed that they appreciated knowing the wait time, it wasn’t clear how likely they were to “shop around” for an ER in an emergency.

"It’s really about convenience – what hospital is closest?"

Often, they said they’d default to hospitals that were familiar, nearby, or had a good reputation.

People perceive better quality of care when they can see wait times.

Many participants thought that the care would be better at a hospital that posted wait times honestly. Some reasoned that if a hospital cares about how long it takes to see patients, it probably cares more about patient welfare in general.

"They’re putting effort into informing the public, which they don’t have to do."

When asked to describe how the wait times made them feel, the participants used language like “a sense of belonging,” “being welcomed,” and “less anxious.”

Posting wait times seems cutting-edge.

Participants also perceived a hospital that provided wait times as cutting-edge. If a hospital is savvy enough to post updated wait times, they reasoned, it might be more likely to offer innovative medical technology.

A few participants, however, worried that low wait times might mean lower-quality patient care. They wondered if an overemphasis on efficiency might pressure doctors to spend less time with each patient.

“There’s some tension between the business aspect of seeing patients quickly and the care aspect of doctors spending enough time with patients.”

Context should be actionable and specific to the user.

When we turned to the second stimulus, which compared the hospital ER wait time with the national average, many participants were unimpressed. The testing took place in Brooklyn, and many would have preferred an average New York City wait time instead of a national one. Some would have preferred a few nearby wait times, so they could compare without looking up multiple hospitals. Some explicitly stated that they liked the single wait time, without context, better. 

"Do I care how long people are waiting in San Francisco? No."

In short, including the national average felt gimmicky to users — like it was just there to differentiate the brand, not to help them make decisions about visiting the emergency room. They didn’t find this information useful, and the extra information often led to more questions. 

Some other kinds of context, however, would have been more helpful. Several subjects asked what “wait time” means: is it the time until a patient is triaged, or until she speaks to a doctor? How often is the wait time refreshed?

We learned that participants cared most about the credibility of the information itself, not its relation to other hospitals or the national average. In every case, participants wanted to know how the information provided reflected a better patient experience.

What we learned about wait times.

In this study, providing wait times appeared to improve brand perception, though it’s important to note that failing to do so is probably not a deal-breaker, because convenience and proximity of service are sometimes more important in an emergency. In general, however, participants reported that wait times remove a layer of uncertainty and provide a sense of control over an otherwise uncontrollable situation, like waiting to see a doctor. 

Medical researchers have catalogued the risks of advertising emergency room wait times, including dangerous self-triaging and misperceived wait times for critical patients, like those suffering from cardiac arrest. These risks would certainly need to be addressed by providing appropriate context.

Our recommendations for wait times: 

  • Tell consumers how long they’ll be waiting for service. 
  • Explain the meaning of the wait time; in healthcare, a “door to doctor” measure is considered best practice, and hospitals should add the caveat that doctors will address more urgent cases sooner (see the sketch after this list). 
  • Consider that unnecessary information (like our national averages example) may adversely affect patients’ perception of service or cause “choice paralysis.” 
  • Consider how the information is being delivered. 
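To make these recommendations concrete, below is a minimal TypeScript sketch of how a hospital home page might fetch and present its wait time. Everything in it is an illustrative assumption: the endpoint (/api/er-wait-time), the response fields, and the copy are hypothetical, not any hospital’s actual implementation.

    // Hypothetical shape of a wait-time payload; the endpoint and
    // fields below are illustrative assumptions, not a real hospital API.
    interface ErWaitTime {
      minutes: number;   // current "door to doctor" estimate
      updatedAt: string; // ISO timestamp of the last refresh
    }

    // Render a single, clearly labeled wait time: what it measures, when
    // it was last refreshed, and the triage caveat. Per the findings
    // above, it deliberately omits context such as a national average.
    function renderWaitTime(wait: ErWaitTime): string {
      const updated = new Date(wait.updatedAt).toLocaleTimeString();
      return [
        `Current ER wait: about ${wait.minutes} minutes from door to doctor.`,
        `Last updated at ${updated}.`,
        `Patients with more urgent conditions are always seen first.`,
      ].join("\n");
    }

    async function showWaitTime(): Promise<void> {
      // Fail quietly rather than presenting a stale or missing number
      // as if it were a live estimate.
      const response = await fetch("/api/er-wait-time");
      if (!response.ok) {
        console.log("Wait time is temporarily unavailable.");
        return;
      }
      const wait = (await response.json()) as ErWaitTime;
      console.log(renderWaitTime(wait));
    }

    showWaitTime().catch(console.error);

The sketch shows one number, defines it, timestamps it, and adds the triage caveat; following our findings, it omits comparisons to other hospitals.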

Our study did not explore how different delivery platforms affect patient perceptions: for example, desktop, mobile devices, a billboard along the highway, or the point of service itself. This question bears further investigation.

What we learned about neuromarketing research.

One of the goals of this research series is to better understand the value of emerging research tools and their capacity to evaluate usability. 

In this study, we wanted to assess emotional response, so we used a neuromarketing tool as part of the research. Our assessment of the tool is that it provided unique insight and holds real potential. Some participants were initially confused by it, but most embraced it after they had learned to use it. The tool added an element of fun to the interview process: subjects enjoyed the quick response time and the opportunity to respond to questions without overthinking them. Much of the feedback we received from the tool reinforced the information we gleaned from interviews – particularly that the wait times made patients feel a sense of belonging, empowerment, and empathy.

What’s not clear is how variables such as the way the moderator introduced the tool affected our results. Because the tool is moderated and requires a lot of explanation, there is potential for inconsistency, especially when more than one moderator is involved.

Used with a larger participant group and an unmoderated interview capability, this neuromarketing tool could be very valuable. It is already useful insofar as it underscored our qualitative findings and added specificity to the feelings people expressed. When used qualitatively, however, it has limitations similar to those of traditional qualitative research.

These findings apply to the specific study we conducted; further studies are necessary to confirm them.

The Huge usability research initiative.

This test is part of an ongoing usability research series, in which Huge’s UX and research teams are collaborating to test conventional design and usability standards. The aims of this series are threefold:

  1. By testing with a broad cohort of participants, we are gleaning insights that are more generalized, and applicable to a wider range of products and users, than the more pointed research we undertake for clients can provide. 
  2. We’re interrogating conventional wisdom to generate a canon of knowledge about usability and design. 
  3. We are building a library of research on research, assessing the pros and cons of disparate methodologies.

The team.

This is a cross-disciplinary initiative. By working together, UX and research teams at Huge are extending our library of UX best practices and testing research methodologies in innovative ways. The following individuals contributed to this project: 

Research. 

User Experience. 

Creative. 

With Kristen Ames, Huge Ideas.