Facebook as a means of self-socialization

I was doing a bit of reading about the social psychology theories behind Facebook, and stumbled across the concept of “peripheral awareness.” Resnick (2001) describes this as the phenomenon of people learning more about their community in order to increase their social capital. I hadn’t really thought about it before, but perhaps this is why people spend so much time simply “browsing” Facebook. Although a part of it may be based on the desire to (for lack of a better word) “stalk” individuals, a larger motivation may be self-socialization.

Consider the concept of “people watching.” Although this may be in part motivated by curiosity and a desire for amusement, another part of it is consciously observing others as a way for us to understand our own place in the world. On Facebook, we are free to people-watch without any danger of getting “caught.” This allows us to spend large amounts of time understanding the average and deciding who we want to be more/less like. Having observed and discussed Facebook usage amongst my classmates and friends, I’ve found that most people spend a significant amount of time browsing two types of people: those they are close with, and those they are jealous of or wish to belittle. Could it be that while watching the lives of others, we are simultaneously deciding how we will change ourselves in response to them? By viewing the trends of the majority, can we not better learn how to express ourselves in a way that helps us to become the people that we wish to be?

Unintended Consequences of Health Care Technology

Here’s an interesting article about why the strong push for electronic medical records may not be a wise decision. The article talks about the government’s proposal to spend $50 billion over five years to promote technology for health care, a major component of which is replacing paper medical records with digital ones. Although the concept sounds good at first, the article points out that there is very little research-based support for the benefits of electronic medical records.

This reminded me of a BayCHI talk that I went to over the summer where Chris Longhurst (of Stanford’s Children’s Hospital) discussed the “Law of Unintended Consequences” with regards to health care technology. In his talk, he described how hospitals that were “going paperless” in order to rid themselves of inefficient processes discovered some unexpected problems. One particular example was a system that allowed doctors to prescribe medications by selecting them from a list in a computer system (rather than by writing them out by hand). Although this sounds like a great way to streamline processes, the task was in fact made too easy. Doctors would mistakenly select incorrect medications from the list and not notice their mistake, whereas if they had been writing things by hand, this would have been much less likely to happen. Unfortunately, mistakes can be very costly in the health care field: according to this study, mortality rates have in fact increased in hospitals that adopt certain health care technology.

Clearly, if we are going to invest in new healthcare technology, we need to be aware of the risks involved and do everything we can to foresee and plan for these “unintended consequences.” In such a high-stakes field, it’s not enough to just design experiences that streamline processes and “make things easier to use.” We also need to consider how we can call attention to tasks that demand high levels of focus without creating information overload or frustration. Health care technology is an incredibly important field with immense potential for good, but these sorts of considerations are absolutely necessary if we are to create technologies that help more than they hurt.

Blogging to evade social norms

I recently had a discussion with a friend about how we are living in an “extremely narcissistic time.” After reading some of the academic literature on motivations for blogging, I think this claim has some validity to it. Many of the motivations for blogging seem tied to a desire for one-sided self-expression and indulgence.

A 2004 paper by Nardi et al. includes some interesting excerpts from blogs that are particularly telling. When bloggers write about events that happened during their day (typical “diary style” fodder), part of the motivation may be to look back on them for future enjoyment. Part may be for gaining personal insight by reflecting on past events. However, why use a public blog rather than a private diary? It seems that many bloggers are motivated by the knowledge that others may read and form impressions about the blogger based on their words. If the blog is entertaining, it suggests that the blogger is an entertaining person. If it teaches a skill, it suggests that the blogger is very skillful. If it captures life events that seem interesting or glamorous, then it suggests that the blogger is an interesting person. Self-projection, then, is key. Can we say that this is a form of narcissism?

This seems to have strong similarities to the Facebook mini-feed phenomenon. When

HCI versus Interaction Design

I was working with a master’s student in CMU’s Interaction Design program today, and afterward we got into a lively discussion about the distinctions between ID and HCI.

One of the few things we agreed on: HCI and ID use similar methodologies to learn about users, including both qualitative and quantitative studies.

There were many more things that we did not agree on. I found it particularly interesting to hear her perspective because given her background, I had expected her to have a good understanding of what HCI “is all about.” However, she presented several misconceptions which are, if anything, even more prevalent in the greater design and technology communities. Some examples:

  • Misconception 1: Although user research methodologies may be similar, HCI and ID use them for different reasons. ID is all about designing a “complete user experience,” while HCI is completely centered on coming up with technical solutions. My response to that is– how can we come up with a useful, usable, and pleasant technology system without actually looking at the complete user experience? The “solution” might look like a piece of technology, but behind that are serious considerations about our users and their behaviors, values, aspirations, and more. Technology is just a medium through which a better user experience can be achieved– just as in ID, it is the way that people choose to use that technology that really defines their experience.
  • Misconception 2: People who study HCI are simply UI designers for the desktop/mobile/web. Take a walk through the HCI labs at CMU and you will see how absolutely untrue this is. HCI strives to push the limits. The beauty of technology is that it makes anything possible. As HCI practitioners, we are not boxed into using one particular type of medium– we can explore any number of new ideas. We can combine virtual and real-world elements into new creations that have all sorts of unique affordances. UI design is just one small part of this, and when there are so many new types of interfaces, even UI design itself is an immense area to explore.
  • Misconception 3: ID is about people-to-people interaction. HCI is not because it’s limited to technology. This statement troubles me because it implies that HCI is solely about having people interact with computers. This is a gross misconception– it pains me to know that people think of HCI as just finding ways to redesign the Photoshop UI. HCI is about creating technology that enables. As to what sorts of interactions it enables, well, this could really be anything– how people interact with each other (instant messaging, Facebook, etc.), how they behave within their environment (GPS, wearable computing), how they understand themselves (online identity building, methods of self-recording), and so on– the possibilities are endless. Although I can’t profess to know much about the specifics of ID, I would imagine that they make a similar pitch about the breadth of possibilities their field encompasses. And I am sure that many ID projects have a strong technology component, simply because technology is so prevalent in every aspect of life. Design someone’s experience as they walk through a museum, and you need to be aware that visitors are probably carrying cell phones with them. How can you completely disregard a potential distractor (or opportunity!) like this if you claim that you are designing a true space of possibility?

All in all, it was very interesting to see what sorts of misconceptions are associated with HCI. Why does someone in ID have such a restricted view of HCI, even though the two disciplines have so much overlap? I wonder if some of it has to do with the courses that we take and the deliverables involved. I suspect that if she were to read some HCI research papers or attend an HCI conference, she would realize that the distinction is not quite as strong as she originally thought. Classroom deliverables aside, our goals are the same: to improve the lives of our end users.

Challenges of CMC Research

I am currently taking a class about Computer Mediated Communication (CMC). Some of the issues that consistently come up are the challenges of studying CMC phenomena. For example, when we rely strictly on what is captured through the technology itself, we only see a slice of people’s overall communication, which in turn limits our understanding of the impacts of CMC. Still, that narrower sort of analysis may be appropriate for certain types of questions about CMC. This suggests that a large part of the challenge in studying CMC is phrasing research questions correctly and choosing appropriate methodologies by which they can be answered. It also seems like a research area where arguing validity and generalizability is particularly challenging.

Even more challenging is the fact that technology changes rapidly, with just as rapid effects on social interaction. For example, “self-expression” studies that were run with personal websites 5 years ago could be repeated with Facebook now, but the results might be drastically different. In the digital realm, people keep coming up with new types of technology and throwing them out there to see what sticks. Then others adopt those technologies, integrate them into their own lives, and are changed by them in turn.

The continuing social shift/development is particularly hard to capture across time and technologies. I found a paper by Garton et al. that attempts to overcome this by visualizing social network changes over time. Although people’s modes of interaction, expression, and connection will continue to change as new methods of CMC emerge, the constant is that technology reshapes interpersonal relationships as new possibilities open up. Can we measure this across different networks as they come and go? It is an interesting challenge, but perhaps this sort of work can give us a better understanding of the network shifts that are occurring.
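
This is not how Garton et al. approach it, but as a minimal sketch of one crude way to quantify how much a personal network has shifted between two snapshots, one could compare the overlap of ties across the snapshots. The tie lists and names below are entirely made up for illustration:

```javascript
// Minimal sketch: Jaccard overlap between two snapshots of someone's ties.
// The tie data is hypothetical; real studies would pull ties from logs,
// surveys, or friend lists captured at different points in time.
function jaccard(tiesA, tiesB) {
  const a = new Set(tiesA);
  const b = new Set(tiesB);
  const intersection = [...a].filter(tie => b.has(tie)).length;
  const union = new Set([...tiesA, ...tiesB]).size;
  return union === 0 ? 1 : intersection / union;
}

const ties2003 = ["alice-bob", "alice-carol", "bob-dave"];              // e.g. email ties
const ties2008 = ["alice-bob", "bob-dave", "alice-erin", "carol-erin"]; // e.g. Facebook ties

console.log(jaccard(ties2003, ties2008)); // 0.4; lower values mean a bigger shift
```

Tracking a number like this across successive snapshots (and across different CMC platforms) is one very rough way to watch the kind of network change described above, though it obviously misses all of the qualitative texture of how people actually use each medium.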

Cisco TelePresence

Cisco takes a stab at co-presence:

Cisco TelePresence promo on YouTube

Honestly, there doesn’t seem to be much of a difference between this and traditional teleconferencing, with the exception of a larger screen and smoother connection technology. That being said, since this is only a promo video, who knows what the system is like in real life– there might be more lag than shown in the video, and auditory input/output might be a challenge. Never mind the setup of the video screens– not everyone’s going to have that same cherry wood conference table. And what happens when there are 10 people in one office trying to get in on the same conference?

The TelePresence system also does not solve many of the problems caused by a lack of co-presence, such as the inability to pass artifacts around the table. In a real conference, you may move your seat to get a better view of the whiteboard; in TelePresence, you do not have this ability. There is no way to make a private comment to your neighbor, and no way to break off into small group discussion.

Although Cisco TelePresence furthers much of the technology for remote communication, it still fails to afford many of the capabilities of face-to-face communication. Until those gaps can be bridged, systems like TelePresence will not make us feel like we are “really in the same room as all of you.”

Co-presence Affordance in Virtual Worlds

In one of my classes, we were discussing the affordances present in different types of computer mediated communication. Afterwards, I was reading through one of Prof. Kraut’s papers about using visual information to collaborate on physical tasks. It got me thinking more about the co-presence affordance, and whether it can be considered part of virtual worlds like Second Life. Note: The co-presence affordance means that while communicating with others, you share the same environment as your conversation partners– in the Kraut study, this is the condition where subjects repair the bicycle together, physically in the same room. In comparison, the video-only and audio-only conditions do not have the co-presence affordance; for more examples of the resulting trade-offs, see the article.

For example, in Second Life there is some sense of co-presence because players treat the game world their avatars inhabit as the reality they are currently in. Thus, you could say that Second Life has co-presence: even though you aren’t in the same physical environment as the other player, in a sense neither of you is, since you both inhabit the shared virtual one. Are the details sufficient for true co-presence, though? You can carry out actions in order to succeed in some Second Life tasks, like following someone somewhere. However, you would never be able to accomplish a task with the complexity of the bicycle repair task. Though Second Life tries to imitate the actions a person could make in real life, it does not have a co-presence affordance sufficient to stand in for FTF interactions.

However, players in Second Life adapt their view of the world to that which is available to them (in this case, rough movements like a “follow me” task). How similar to real life must an experience be in order to be considered “true co-presence”? In a game with a restricted view of reality, where more detailed tasks are not required, are restricted affordances enough? Perhaps some of the appeal of virtual worlds like Second Life comes from being able to ignore fine-tuned interactions (such as those necessary to repair a bike) and focus on other types of interactions instead.

It would be interesting to see how interactions in virtual worlds change if they gain more realistic co-presence affordances. I have heard of situations where people have tried to use Second Life for non-recreational purposes, such as work meetings and training sessions. I imagine that some of the motivation for trying these is to capitalize on Second Life’s supposed co-presence affordance, but perhaps the reason that they have not caught on is that the types of co-presence that session leaders were hoping for– students being able to observe a speaker’s facial features, or a speaker being able to tailor a lecture based on the body language of the students– are not yet present in this digital world. This sort of interaction could even be detrimental because it forces users to adapt to a different environment with different rules. The co-presence experienced is really a virtual one, and the ability to translate between this and the real world is an interesting challenge.

Emotional Multitasking

This morning, I was chatting on IM, checking email, and reading an article for class (a typical Friday morning). As I was doing this, I found myself wondering about how people multitask at an emotional level. Since working memory is limited to just a handful of items (7 ± 2) at a time, people who are good at multitasking are those who are good at quickly swapping task-related data in and out of memory. What sort of effect does this have on emotion? For example, if you were IMing with someone about happy news, but reading a very sad email, would your emotions fluctuate as you flipped between the two items? Would the stronger emotion dominate, or would the other emotion help to temper it?

Also, there must be some sort of cost for trying to mediate the different emotions associated with each task. With so many concurrent forms of emotional stimulation, it’s no wonder stress levels keep going up.

iGoogle meets Chickenfoot

A few weeks ago, I started using iGoogle in an attempt to free up Firefox tabs while feeding my GMail/GCal/GReader addiction. Since then, iGoogle and I have formed a bit of a love-hate relationship. Although I like being able to see all my information in one place, the feature limitations are very frustrating (why can’t I apply labels without going to “real” GMail?!) The design limitations are also painful. In particular, I am continuously irked by how LARGE that header image is. It’s visually distracting and takes up precious screen real estate. This means that when I’m looking for information, I have to try to ignore the distracting image and potentially scroll down to see the bottom halves of my gadgets. Although iGoogle has built up a community around skinning themes, there is no ability to modify dimensions or layout, making the header a consistent annoyance in my iGoogle experience.

Before: is this header really necessary? That header is only cute the first time you see it. After that, it’s a distraction.

Today, I finally found a way to get rid of that header. Meet Chickenfoot, a “Firefox extension that puts a programming environment in the browser’s sidebar so you can write scripts to manipulate web pages and automate web browsing.” Although the basic idea is similar to GreaseMonkey, Chickenfoot’s goal is to allow users to easily write scripts to interact with web pages without having to look through source code. For example, it’s easy to automate a task like running a Google search, or changing the text label on a button. Users can write scripts on the fly using a built-in command line, or save scripts as “Triggers” that can be run manually or automatically later on.
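
To give a flavor of what these scripts look like, here is a tiny sketch in the spirit of the search-automation example from the Chickenfoot documentation. I’m writing the go/enter/click commands from memory, so treat the exact names and keyword patterns as assumptions to be checked against the real API:

```javascript
// Sketch of a Chickenfoot-style automation script (command names recalled from
// the Chickenfoot docs; verify against the actual API before relying on them).
go("google.com");          // load the page
enter("chickenfoot mit");  // type into the page's search textbox
click("Google Search");    // click the button whose visible label matches this keyword pattern
```

The appeal is that the script refers to widgets by the words a user would see on the page, rather than by digging through the page’s source code.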

I decided to give the application a try, and was delighted to find that Chickenfoot is very easy to pick up. In about 2 hours, I learned a bit about scripting in the Chickenfoot environment, wrote a script to fix my iGoogle design problem, and exported the fix to a Firefox extension.

The script I wrote is surprisingly simple. Every time you load up iGoogle, the script replaces the DIV that contains the header with a simple 1-line search box. Easy! Converting the script into a Firefox extension was a snap using Chickenfoot’s package function. The only complaint that I have is that the script does not run until after the webpage has fully loaded, which is noticeable since iGoogle loads so slowly. However, since iGoogle uses AJAX, the script only runs the first time you load up the webpage. This is a small tradeoff for the lovely screen real estate which I’ve freed up. Amazing how 2 hours and 4 lines of code have made me so much happier with the iGoogle experience– thank you, Chickenfoot! Now, if only I had time to rewrite the entire iGoogle user experience…
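
I won’t reproduce the exact Chickenfoot code here, but as a rough plain-DOM sketch of the same idea (the element ID and markup below are hypothetical stand-ins for whatever iGoogle actually uses), it amounts to something like this:

```javascript
// Sketch only: replace the bulky iGoogle header with a bare one-line search box.
// "igoogle-header" is a hypothetical ID; the real script targets whatever
// container iGoogle actually uses for its header image.
var header = document.getElementById("igoogle-header");
if (header) {
  var slim = document.createElement("form");
  slim.action = "http://www.google.com/search";
  slim.innerHTML = '<input type="text" name="q" size="40"> ' +
                   '<input type="submit" value="Search">';
  header.parentNode.replaceChild(slim, header);
}
```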

After installing the iGoogleClean Firefox plugin: final product, sans terrible header.

You can check out the code that I posted on the Chickenfoot script repository, or download the Firefox extension.

Singing with Machines

Kelly Dobson doesn’t just “work with machines”– she sings with them. Her website features some of the interactive machines she’s made, such as the “ScreamBody” (scream into it and it will silence your scream, but record it and allow you to play it back later). For an overview of some of her past work, check out a video of her talk at GEL 2008. In the first few minutes, she sings (snarls?) with a blender, and relates the story of how she learned to make machines an extension of herself by singing with them.

Kelly’s research is interesting because it focuses on mutual empathy in human-machine, and even machine-machine, pairs. As you listen to her speak, it’s easy to forget the line between what makes humans and machines different. Singing in harmony is one of those things that seem so distinctly human– if you can start to do this with a machine, how can you not start to feel some sort of empathy? I wonder what other sorts of activities humans do to relate to each other that could be extended to machines. Also, what are the benefits of strengthening relationships between humans and machines? Kelly mentions therapy– if we trust in machines, perhaps we can allow them to console us and provide support.

Another interesting thought she brings up: in the future, will there be a need for machines to console other machines? This may sound far-fetched, but how many times have we contemplated machines that “feel” emotions? I think this leads to another question– does feeling emotions simply mean having needs that must be satisfied externally? The typical view of creating emotional machines is that we need to build systems that mimic how people emotionally respond to different situations. A sophisticated system might be able to pass a Turing test if it were able to detect and respond to situations in an appropriate way. However, does this mean that a machine is really “feeling”?

It is also important to consider how people learn emotions, and include this in such a model. Social learning theory might suggest that emotions are really learned during childhood as children view the world around them for cues about how to respond to things emotionally. Other theories suggest that emotions are inborn traits– perhaps born out of an evolutionary need for survival. For example, the feeling of “loneliness” might push people to connect with others, which builds relationships that are beneficial to the individual as well as society. Can we build machines that have base “instincts” that guide their behavior, but are also capable of learning appropriate emotional responses? Can machines use some sort of social referencing in order to learn appropriate reactions to situations based on both the context and their emotional state? I’m curious about how much of machine-emotion research is about capturing the ways that people learn and express emotions. An alternative may be to determine how people judge others’ emotions based on their words and behavior. This could lead to the design of machines that cause us to perceive them as emotional beings, based on our own emotional reactions to them.

Five Second Test

I just found this little site: www.fivesecondtest.com

Web designers submit images of their site mockups. Users then come to the Five Second Test website and select a test to take. The image of your website layout flashes on their screen for 5 seconds, and then the user completes one of the following tasks depending on which type of test they are taking (a rough sketch of this flash-then-respond flow follows the list):

  • Classic: users are asked to list things that they remember after viewing your interface
  • Compare: users see two versions of your interface and specify their preference
  • Sentiment: users are asked to list their most and least favorite things about your interface
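
Mechanically, the core of the test is very simple. Here is a bare-bones sketch of the flash-then-respond flow; the element IDs and the exact 5000 ms timing are my own stand-ins, not FST’s actual implementation:

```javascript
// Sketch: show the submitted mockup for 5 seconds, then hide it and reveal
// the question form. Element IDs ("mockup", "questions") are hypothetical.
function runFiveSecondTest() {
  var mockup = document.getElementById("mockup");
  var questions = document.getElementById("questions");
  mockup.style.display = "block";
  questions.style.display = "none";
  setTimeout(function () {
    mockup.style.display = "none";      // take the screenshot away after 5 seconds
    questions.style.display = "block";  // ask what the tester remembers, prefers, or likes/dislikes
  }, 5000);
}
```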

I took a couple of the tests and found that it was quite fun to be a tester. Maybe that’s just because I really like looking at and analyzing UIs, but the fast-paced nature and simple feedback form make it rather absorbing. I felt like I wanted to just review website after website, rather than having to keep clicking the “do a random test” button!

Getting users to come to and continue to participate in the tests must be one of FST’s challenges. Without a steady flow of testers, people submitting designs will get little out of the service, since this sort of limited feedback really needs to be available in larger amounts in order to draw useful recommendations from it. Although this seems to be a pet project right now, I think this has a lot of potential as a method for quick usability tests and for uniting a web design community. I’m sure there must be websites out there that are dedicated to users sharing their interfaces and receiving feedback from the community, but the FST feels different because it blends a sense of low commitment with the promise of high reward. For quick design iterations, the FST might be all that you need if you’re looking for the impressions of many, rather than the detailed analyses of a few. It would be great to see the FST creators, mayhem(method), try to build up some community around this, or for an existing online design community to adopt a similar type of test.