
CHI'09: Moving UX into a position of Strategic Relevance

Thoughts from the CHI 2009: Digital Life New World conference in Boston, MA.

This panel session explored many of the issues which I’ve discussed with people in UX (user experience) teams at large corporations. The panel focused on 5 key strategies for “moving UX into a position of strategic relevance.” The strategies were:

  1. UX evangelism and documentation
  2. Ownership of UX
  3. Organizational positioning
  4. Calculating ROI
  5. Conducting “ethnographic” research

The panel had 5 members (one was absent, and her viewpoint was represented by another member). They included Killian Evers (a PM at PayPal), Richard Anderson (Riander), Jim Nieters (UX team at Yahoo), Craig Peters (Founder of Awasu Design), and Laurie Pattison (UX team at Oracle). Each panel member was given a few minutes to share their thoughts, and then the group discussed a series of scenarios. The panel ended with a brief Q&A session.

Panel Discussion

How do we make ourselves strategically important to an organization? We need to find the one big thing that makes us important and stress it. We need to show that we understand the business, and that business goals ultimately center on making money. One way to do this is to partner with business teams to identify the most significant problems facing the company, then work together to solve them.

It’s important to deliver results quickly. We only get one chance to make a first impression, which in business terms usually means we have one calendar quarter to make a significant impact and prove our worth. To prove your value, choose to do something the rest of the company can’t do for itself. For example, when creating deliverables like prototypes and wireframes, produce work polished enough that no other team could match it. If you are just starting a UX movement at your company, pick the projects that matter most to the bottom line, such as those demoed most often to customers or those with the greatest impact on market share and sales. Also, make sure the first few projects you choose are ones you can genuinely succeed on, because an early failure will follow you forever.

For example, at Laurie’s company, they were having a problem with online help—even though there was an online help document, people continued to call the call center and ask the same questions. After conducting a series of user studies, she found that when users were frustrated, they just called the call center instead of trying to use the online help. She decided to address this by moving answers to key questions into the system itself. After making this change, the number of calls that went to the call center dropped, thus saving time and money. This made the value of Laurie’s contribution more apparent to those in charge.
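Laurie’s story is also a textbook case of strategy 4, calculating ROI. Here’s a back-of-the-envelope sketch, in Python, of how that argument might look on paper; every number is hypothetical (the panel shared no figures), but the structure is what matters:

```python
# Hypothetical ROI estimate for a call-deflection project like Laurie's.
# Every number below is invented for illustration.

calls_per_month_before = 10_000  # support calls before the UX change
deflection_rate = 0.15           # fraction of calls eliminated by in-product help
cost_per_call = 8.00             # fully loaded cost of one support call, in dollars
project_cost = 60_000            # one-time cost of research, design, and build

monthly_savings = calls_per_month_before * deflection_rate * cost_per_call
annual_savings = monthly_savings * 12
payback_months = project_cost / monthly_savings
first_year_roi = (annual_savings - project_cost) / project_cost

print(f"Monthly savings: ${monthly_savings:,.0f}")      # $12,000
print(f"Annual savings:  ${annual_savings:,.0f}")       # $144,000
print(f"Payback period:  {payback_months:.1f} months")  # 5.0 months
print(f"First-year ROI:  {first_year_roi:.0%}")         # 140%
```

Even a rough model like this reframes the team’s work in terms the business already uses: dollars saved and time to payback.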

Common Scenarios

Scenario I: You’re on a good UX team, but the sales team sells from a tech-focused point of view. The UX team is “too busy” to teach the sales team about the nuances of a user-centered point of view.

Solution: You may need to find the time to do important things like teaching a user-centered point of view to the rest of the company. For example, you could give the sales team a “crash course” on what the UX team does at a high level. This could also be accomplished through “brown bag” sessions, formal training, or answering RFPs.

Scenario II: The CEO of your company “gets” UX, but middle management doesn’t. Without middle management support, user-centered design is not recognized as a real science, and is not seen as necessary for executing the CEO’s vision.

Solution: Gain buy-in from people in other groups, such as project managers, software engineers, and other tech and hardware folks. One of the panelists described helping a hardware team reduce a five-hour cable setup process to a 20-minute procedure by color-coding the wires, getting rid of the big instruction manual, and introducing clear GUIs. Although the UX team didn’t do all of this by themselves, the project never would have gotten done without them. It was an example of UX expertise creating value beyond the UX team’s own projects, which builds buy-in and trust from other teams. That said, the UX team may need to reposition itself so that it owns the important and relevant pieces of projects, such as the UI specifications for an engineering team.

Scenario III: A large enterprise software company has branches in many different countries. The UX team has trouble understanding what matters to a project’s success because they have limited domain knowledge and are less likely than others to be invited to strategic meetings. This makes it harder for them to make intelligent compromises.

Solution: Educate the team members to fill gaps in domain knowledge, perhaps starting with new-hire training. Hold weekly team meetings and invite people from other parts of the company to speak and share their knowledge, which also builds respect across the company. It’s important not to seem like a liability because you don’t understand the product or the business, so ask questions. The team also needs to learn how to compromise, because no one wants to work with someone who won’t. There is a danger, however, of compromising yourself out of anything useful, which can cost you the respect of others. As with anything, it’s important to find common ground rather than argue.

Q&A Session

Q: This talk seems to assume that UX is not in a position of relevance. What if you haven’t been “invited” to take on a position of relevance at your company?

A: You might need to invite yourself. Figure out where you could be useful and go knocking. Where could you have the greatest impact? Take on that project and do your best work to make sure that the project gets noticed.

Q: What if you are an internal supplier of UX whose job it is to make others’ jobs easier?

A: Being an internal supplier of UX is much like any other UX role, except that your end users are internal. Testimonials go a long way in this sort of position; for example, you might work with QA and share success stories with the rest of the company.

Q: Is UX moving towards the role of a specialist, such as a lawyer? For example, does a farmer need a biotechnician in his employ?

A: Instead of worrying about being a specialist versus a technician, what about just being a team player? This sort of middle ground does exist. The key is to know your value proposition, and have a team that has a mix of different skills.

My Thoughts

Having spoken with many UX professionals and attended a variety of HCI talks & events, I’m aware that these are common discussion points in the HCI community. During the talk, someone asked, “How long will we be able to keep making excuses because HCI is a ‘young field’?” It does seem a little strange that this many decades in, HCI is still struggling to find its “place in the world.”

It’s a valid concern, though. Last summer, I attended a brown-bag where my mentor led a discussion about how UX can be integrated into the company’s Agile software development cycle. Clearly this methodology, which is becoming more popular in large tech firms, was not designed with the user experience in mind. It will be interesting to see if new methodologies will be able to bridge the gap between user-centered design and the software development cycle. I suspect that this “gap” is really not quite so large as it would initially seem, and I remain optimistic that if we continue to share the lessons we learn, soon we won’t need to make any “excuses” whatsoever.

Unintended Consequences of Health Care Technology

Here’s an interesting article about why the strong push for electronic medical records may not be a wise decision. The article discusses the government’s proposal to spend $50 billion over five years to promote health care technology, a major component of which is replacing paper medical records with digital ones. Although the concept sounds good at first, the article points out that there is very little research-based support for the benefits of electronic medical records.

This reminded me of a BayCHI talk that I went to over the summer, where Chris Longhurst (of Stanford’s Children’s Hospital) discussed the “Law of Unintended Consequences” with regard to health care technology. In his talk, he described how hospitals that were “going paperless” in order to rid themselves of inefficient processes discovered some unexpected problems. One example was a system that let doctors prescribe medications by selecting them from a list on a computer (rather than writing them out by hand). Although this sounds like a great way to streamline the process, the task was in fact made too easy: doctors would select the wrong medication from the list and not notice, a mistake that was much less likely when they wrote prescriptions by hand. Unfortunately, mistakes can be very costly in health care: according to this study, mortality rates have in fact increased in hospitals that adopted certain health care technologies.

Clearly, if we are going to invest in new health care technology, we need to be aware of the risks involved and do everything we can to foresee and plan for these “unintended consequences.” In such a high-stakes field, it’s not enough to design experiences that streamline processes and “make things easier to use.” We also need to consider how to call attention to tasks that demand high levels of focus without creating information overload or frustration. Health care technology is an incredibly important field with immense potential for good, but these sorts of considerations are absolutely necessary if we are to create technologies that help more than they hurt.
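To make that last point concrete, here’s a toy sketch (my own speculation, not something from the article or the talk) of how a prescribing UI might add friction only where it is needed, flagging selections whose names are easily confused. The drug list and the similarity cutoff are invented for illustration:

```python
from difflib import get_close_matches

# Toy sketch: require an explicit confirmation step only when the selected
# medication has look-alike names elsewhere in the formulary. The drug list
# and the 0.7 similarity cutoff are invented for illustration.
FORMULARY = ["hydroxyzine", "hydralazine", "clonidine", "klonopin",
             "celebrex", "celexa"]

def lookalikes(selected):
    """Return formulary names similar enough to warrant a 'did you mean?' check."""
    matches = get_close_matches(selected, FORMULARY, n=3, cutoff=0.7)
    return [m for m in matches if m != selected]

for drug in ["hydroxyzine", "clonidine"]:
    confusable = lookalikes(drug)
    if confusable:
        print(f"{drug}: ask for confirmation (looks like {', '.join(confusable)})")
    else:
        print(f"{drug}: proceed without extra friction")
```

The specific heuristic doesn’t matter; the point is that deliberate, targeted friction can restore some of the attention that writing by hand used to demand.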

HCI versus Interaction Design

I was working with a master’s student in CMU’s Interaction Design program today, and afterward we got into a lively discussion about the distinctions between ID and HCI.

One of the few things we agreed on: HCI and ID use similar methodologies to learn about users, including both qualitative and quantitative studies.

There were many more things that we did not agree on. I found it particularly interesting to hear her perspective because, given her background, I had expected her to have a good understanding of what HCI “is all about.” However, she voiced several misconceptions that are, if anything, even more prevalent in the greater design and technology communities. Some examples:

  • Misconception 1: Although their user research methodologies may be similar, HCI and ID use them for different reasons: ID is all about designing a “complete user experience,” while HCI is centered entirely on coming up with technical solutions. My response: how can we come up with a useful, usable, and pleasant technological system without actually looking at the complete user experience? The “solution” might look like a piece of technology, but behind it are serious considerations about our users and their behaviors, values, aspirations, and more. Technology is just a medium through which a better user experience can be achieved; just as in ID, it is the way that people choose to use that technology that really defines their experience.
  • Misconception 2: People who study HCI are simply UI designers for the desktop/mobile/web. Take a walk through the HCI labs at CMU and you will see how untrue this is. HCI strives to push the limits, and the beauty of technology is how much it makes possible. As HCI practitioners, we are not boxed into one particular medium: we can explore any number of new ideas, combining virtual and real-world elements into new creations with all sorts of unique affordances. UI design is just one small part of this, and with so many new types of interfaces emerging, even UI design itself is an immense area to explore.
  • Misconception 3: ID is about people-to-people interaction; HCI is not, because it is limited to technology. This statement troubles me because it implies that HCI is solely about having people interact with computers. That is a gross misconception: it pains me to know that people think of HCI as just finding ways to redesign the Photoshop UI. HCI is about creating technology that enables. As for what sorts of interactions it enables, this could be almost anything: how people interact with each other (instant messaging, Facebook, etc.), how they behave within their environment (GPS, wearable computing), how they understand themselves (online identity building, methods of self-recording), and so on. Although I can’t profess to know much about the specifics of ID, I would imagine that its practitioners make a similar pitch about the breadth of possibilities their field encompasses. And I am sure that many ID projects have a strong technology component, simply because technology is so prevalent in every aspect of life. Design someone’s experience as they walk through a museum, and you need to be aware that visitors are probably carrying cell phones. How can you completely disregard a potential distractor (or opportunity!) like this if you claim to be designing a true space of possibility?

All in all, it was very interesting to see what sorts of misconceptions are associated with HCI. Why does someone in ID have such a restricted view of HCI, even though the two disciplines overlap so much? I wonder if some of it has to do with the courses we take and the deliverables involved. I suspect that if she were to read some HCI research papers or attend an HCI conference, she would realize that the distinction is not as sharp as she originally thought. Classroom deliverables aside, our goals are the same: to improve the lives of our end users.

Singing with Machines

Kelly Dobson doesn’t just “work with machines”– she sings with them. Her website features some of the interactive machines she’s made, such as the “ScreamBody” (scream into it and it will silence your scream, but record it and allow you to play it back later). For more overviews of her past work, check out a video of her talk at GEL 2008. In the first few minutes, she sings (snarls?) with a blender, and relates the story of how she learned to make machines an extension of herself by singing with them.

Kelly’s research is interesting because it focuses on mutual empathy in human-machine, and even machine-machine, pairs. As you listen to her speak, it’s easy to lose track of the line between what makes humans and machines different. Singing in harmony is one of those things that seems so distinctly human; if you can start to do this with a machine, how can you not start to feel some sort of empathy? I wonder what other activities humans use to relate to each other that could be extended to machines. And what are the benefits of strengthening relationships between humans and machines? Kelly mentions therapy: if we trust in machines, perhaps we can allow them to console us and provide support.

Another interesting thought she brings up: in the future, will there be a need for machines to console other machines? This may sound far-fetched, but how many times have we contemplated machines that “feel” emotions? I think this leads to another question– does feeling emotions simply mean having needs that must be satisfied externally? The typical view of creating emotional machines is that we need to build systems that mimic how people emotionally respond to different situations. A sophisticated system might be able to pass a Turing test if it were able to detect and respond to situations in an appropriate way. However, does this mean that a machine is really “feeling”?

It is also important to consider how people learn emotions, and to include this in such a model. Social learning theory might suggest that emotions are learned during childhood, as children watch the world around them for cues about how to respond emotionally. Other theories suggest that emotions are inborn traits, perhaps born of an evolutionary need for survival. For example, the feeling of “loneliness” might push people to connect with others, building relationships that benefit both the individual and society. Can we build machines that have base “instincts” guiding their behavior, but that are also capable of learning appropriate emotional responses? Can machines use some sort of social referencing to learn appropriate reactions to situations based on both context and their own emotional state? I’m curious how much of machine-emotion research is about capturing the ways that people learn and express emotions. An alternative may be to study how people judge others’ emotions from their words and behavior; this could lead to the design of machines that cause us to perceive them as emotional beings, based on our own emotional reactions to them.
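To make those questions slightly more concrete, here is a deliberately toy sketch (purely my own speculation, unrelated to Dobson’s actual work) of an agent with one inborn “instinct” that otherwise learns its responses by social referencing:

```python
# Toy "instinct plus social referencing" agent: one inborn drive (avoiding
# prolonged isolation) plus situation responses learned by copying the
# reactions it observes in others. Entirely speculative and illustrative;
# it is not based on any real affective-computing model.

class SocialAgent:
    def __init__(self):
        self.learned = {}    # situation -> {response: times observed}
        self.isolation = 0   # grows for every step spent alone

    def observe(self, situation, response):
        """Social referencing: remember how someone else reacted."""
        counts = self.learned.setdefault(situation, {})
        counts[response] = counts.get(response, 0) + 1

    def react(self, situation, alone=False):
        # Inborn "instinct": prolonged loneliness overrides learned behavior.
        self.isolation = self.isolation + 1 if alone else 0
        if self.isolation > 3:
            return "seek company"
        counts = self.learned.get(situation)
        if not counts:
            return "watch others"  # no learned model yet: defer to social cues
        return max(counts, key=counts.get)  # most frequently observed reaction

agent = SocialAgent()
for _ in range(5):
    agent.observe("loud noise", "startle")
agent.observe("loud noise", "laugh")

print(agent.react("loud noise"))  # startle (learned by watching others)
for _ in range(4):
    response = agent.react("quiet room", alone=True)
print(response)                   # seek company (the instinct takes over)
```

Nothing here “feels” anything, of course, but even a toy like this sharpens the earlier question: if a far more sophisticated version responded appropriately in context, would we perceive it as feeling?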