Category: UX


Many clients have asked about, and been confused by, the differences between customer segmentation and personas, so here’s a quick write-up to provide some clarification.

Customer segmentation is a marketing tool for identifying different groups of customers (or potential customers) within a market so that particular products, services or marketing messages can be targeted at them. There are many ways to slice and dice segments depending on the marketing need, such as age, income or life stage. Segmentation itself does not provide insights; it is simply a way to differentiate and group customers.

Personas (and archetypes) are essentially a design tool for creating empathy with a group of real users. Personas are fictional characters, based on robust research with real people, designed to represent a group with similar characteristics, values and behaviours. The purpose of a persona is to blend demographic information with archetypal behaviours in a way that is believable and true to the data. Personas are most useful when paired with scenarios that provide context, lead to insights and thus guide design decisions.

Customer segmentation and personas are not mutually exclusive; however, they are different tools for different purposes and contexts of use. In some cases personas can be mapped to segments to represent key customer groups; in other cases, personas are based on factors entirely separate from segmentation. For example, when we segment insurance customers we may segment by life stage, whereas when designing a digital decision flow we will develop personas focused on readiness to make a decision and use of digital channels rather than life stage.


by Golden Krishna

“Atmadm.”

Getting our work done was an alphabet soup nightmare.

“chkntfs.”

“dir.”

(Source: vintagecomputer.net)

Then, in 1984, Apple adopted Xerox PARC’s WIMP — window, icon, menu, pointer — and took us a galactic leap forward away from those horrifying command lines of DOS, and into a world of graphical user interfaces.

Apple’s Lisa. (Source: Guidebook Gallery)

We were converted. And a decade later, when we could touch the Palm Pilot instead of dragging a mouse, we were even more impressed. But today, our love for the digital interface has gotten out of control.

It’s become the answer to every design problem.

How do you make a better car? Slap an interface in it.

Speedometer in BMW’s Mini Cooper. (Source: BMW)

Who doesn’t want Twitter functionality inside their speedometer? (Source: CNET)

How do you make a better refrigerator? Slap an interface on it.

“Upgrade your life” with a better refrigerator door. (Source: Samsung)

Love to check my tweets when getting some water from the fridge. (Source: Samsung)

How do you make a better hotel lobby? Slap an interface in it.

(Source: IDEO)

A giant touchscreen with news and weather is exactly what’s missing from my hotel stay. (Source: IDEO)

Creative minds in technology should focus on solving problems. Not just making interfaces.

As Donald Norman said in 1990, “The real problem with the interface is that it is an interface. Interfaces get in the way. I don’t want to focus my energies on an interface. I want to focus on the job…I don’t want to think of myself as using a computer, I want to think of myself as doing my job.”

It’s time for us to move beyond screen-based thinking. Because when we think in screens, we design based upon a model that is inherently unnatural and inhumane, and that has diminishing returns. It requires a great deal of talent, money and time to make these systems somewhat usable, and after all that effort, the software, sadly, can only truly improve with a major overhaul.

There is a better path: No UI, a design methodology that aims to produce a radically simple technological future without digital interfaces. By following three simple principles, we can design smarter, more useful systems that make our lives better.

Principle 1: Eliminate interfaces to embrace natural processes.

Several car companies have recently created smartphone apps that allow drivers to unlock their car doors. Generally, the unlocking feature plays out like this:

  1. A driver approaches her car.
  2. Takes her smartphone out of her purse.
  3. Turns her phone on.
  4. Slides to unlock her phone.
  5. Enters her passcode into her phone.
  6. Swipes through a sea of icons, trying to find the app.
  7. Taps the desired app icon.
  8. Waits for the app to load.
  9. Looks at the app, and tries to figure out (or remember) how it works.
  10. Makes a best guess about which menu item to hit to unlock doors and taps that item.
  11. Taps a button to unlock the doors.
  12. The car doors unlock.
  13. She opens her car door.

Thirteen steps later, she can enter her car.

The app forces the driver to use her phone. She has to learn a new interface. And the experience is designed around the flow of the computer, not the flow of a person.

If we eliminate the UI, we’re left with only three natural steps:

  1. A driver approaches her car.
  2. The car doors unlock.
  3. She opens her car door.

Anything beyond these three steps should be frowned upon.

Seem crazy? Well, this was solved by Mercedes-Benz in 1999. Please watch the first 22 seconds of this incredibly smart (but rather unsexy) demonstration:

(Source: YouTube)

Thanks “Chris.”

By reframing the design constraints from the resolution of the iPhone to our natural course of actions, Mercedes created an incredibly intuitive and wonderfully elegant car entry. The car senses that the key is nearby, and the door opens without any extra work.
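
Here’s a minimal sketch, in TypeScript, of the kind of logic at work. The fob reading, signal threshold and unlock callback are hypothetical stand-ins, not any carmaker’s actual API:

```typescript
// Proximity-triggered unlocking in the spirit of Mercedes' keyless entry.
// All names and thresholds below are illustrative assumptions.

interface FobReading {
  fobId: string;
  signalStrength: number; // e.g. RSSI from a short-range radio handshake
}

class KeylessEntry {
  // Signal strength above which we treat the fob as "at the door".
  private static readonly UNLOCK_THRESHOLD = -60;

  constructor(
    private pairedFobId: string,
    private unlockDoors: () => void,
  ) {}

  // Called whenever the car's antenna picks up a fob broadcast.
  onFobDetected(reading: FobReading): void {
    const isOurFob = reading.fobId === this.pairedFobId;
    const isCloseEnough =
      reading.signalStrength >= KeylessEntry.UNLOCK_THRESHOLD;
    if (isOurFob && isCloseEnough) {
      this.unlockDoors(); // no screens, no taps: approach and open
    }
  }
}

// The three natural steps: she approaches, the doors unlock, she opens the door.
const entry = new KeylessEntry("fob-1234", () => console.log("doors unlocked"));
entry.onFobDetected({ fobId: "fob-1234", signalStrength: -52 });
```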

That’s good design thinking. After all, especially when designing around common tasks, the best interface is no interface.

Another example.

A few companies, including Google, have built smartphone apps that allow customers to pay merchants using NFC. Here’s the flow:

  1. A shopper enters a store.
  2. Orders a sandwich.
  3. Takes his smartphone out of his pocket.
  4. Turns his phone on.
  5. Slides to unlock.
  6. Enters his passcode into the phone.
  7. Swipes through a sea of icons, trying to find the Google Wallet app.
  8. Taps the desired app icon.
  9. Waits for the app to load.
  10. Looks at the app, and tries to figure out (or remember) how it works.
  11. Makes a best guess about which menu item to hit to reveal his credit cards linked to Google Wallet. In this case, “payment types.”
  12. Swipes to find the credit card he would like to use.
  13. Taps that desired credit card.
  14. Finds the NFC receiver near the cash register.
  15. Taps his smartphone to the NFC receiver to pay.
  16. Sits down and eats his sandwich.

If we eliminate the UI, we’re again left with only three natural steps:

  1. A shopper enters a store.
  2. Orders a sandwich.
  3. Sits down and eats his sandwich.

Asking a person behind a register for an item is a natural interaction. And that’s all it takes to pay with Auto Tab in Pay with Square. Start at 2:08:

(Source: YouTube)

Auto Tab in Pay with Square does require some UI to get started. But by using location awareness behind the scenes, the customer doesn’t have to deal with UI, and can simply pursue his natural course of actions.
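
Here’s a rough sketch of what that behind-the-scenes location awareness could look like. The geofence radius, store model and openTab callback are illustrative assumptions, not Square’s real API:

```typescript
// Location-aware "auto tab": when an enrolled shopper walks into a store,
// a tab opens silently; the cashier confirms by name and photo at the register.

interface LatLng { lat: number; lng: number; }
interface Store { id: string; location: LatLng; }

const TAB_RADIUS_METERS = 100; // assumed geofence size

// Approximate great-circle distance between two points, in meters (haversine).
function distanceMeters(a: LatLng, b: LatLng): number {
  const R = 6_371_000;
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(b.lat - a.lat);
  const dLng = toRad(b.lng - a.lng);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(a.lat)) * Math.cos(toRad(b.lat)) * Math.sin(dLng / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

// Called as the phone reports its position; the phone stays in the pocket.
function onLocationUpdate(
  here: LatLng,
  enrolledStores: Store[],
  openTab: (storeId: string) => void,
): void {
  for (const store of enrolledStores) {
    if (distanceMeters(here, store.location) <= TAB_RADIUS_METERS) {
      openTab(store.id);
    }
  }
}
```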

As Jack Dorsey of Square explains above, “NFC is another thing you have to do. It’s another action you have to take. And it’s not the most human action to wave a device around another device and wait for a beep. It just doesn’t feel right.”

Principle 2: Leverage computers instead of catering to them.

No UI is about machines helping us, instead of us adapting to computers.

With UI, we are faced with counterintuitive interaction methods that are tailored to the needs of a computer. We are forced to navigate complex databases to obtain simple information. We are required to memorize countless passwords with rules like one capital letter, two numbers and a punctuation mark. And most importantly, we’re constantly pulled away from the stuff we actually want to be doing.

A Windows 2000 password requirement. (Source: Microsoft)

By embracing No UI, the design focuses on your needs. There’s no interface for the sake of interface. Instead, computers cater to you.

Your car door unlocks when you walk up to it. Your TV turns on to the channel you want to watch. Your alarm clock sets itself, and even wakes you up at the right REM moment.

Even your car lets you know when something is wrong:

(Source: YouTube)

When we let go of screen-based thinking, we design purely to the needs of a person. After all, good experience design isn’t about good screens, it’s about good experiences.

Principle 3: Create a system that adapts for people.

I know, you’re great.

You’re a unique, amazingly complex individual, filled with your own interests and desires.

So building a great UI for you is hard. It takes open-minded leaders, great research, deep insights…let’s put it this way: it’s challenging.

So why are companies spending millions of dollars simply to make inherently unnatural interfaces feel somewhat natural for you? And even more puzzling, why do they continue to do so, when UI often has a diminishing rate of return?

Think back to when you first signed up for Gmail. Once you discovered innovative features like conversation view, you were hugely rewarded. But over time, the rate of return has diminished. The interface has become stale.

Sadly, the obvious way for Google to give you another leap forward is to have its designers and engineers spend an incredible amount of time and effort on a redesign. And when they do, you will be faced with the pain of learning a new interface; some things will work better for you, and some will be worse.

Alternatively, No UI systems focus on you. These systems aren’t bound by the constraints of screens, but instead are able to organically and rapidly grow to fit your needs.

For example, let’s talk about Trunk Club.

It’s a fashion startup.

They think of themselves as a service, not a software company or an app-maker. That’s an important mindset, one lost on many startups today. It means they serve people, not screens.

And I guess if we’re going to talk about Trunk Club, I’ve got to mention a few of their peers: Bombfell, Unscruff, Swag of the Month and ManPacks.

After you sign up for Trunk Club, you have an introductory conversation with a stylist. Then, they send your first trunk of clothes. What you like, you keep. What you don’t like, you send back. Based on your returns and what you keep, Trunk Club learns more and more about you, giving you better and better results each time.
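
As a toy illustration of that feedback loop, here is one way keep/return outcomes could update a style profile. The attributes and scoring below are my own invention, not Trunk Club’s actual system:

```typescript
// Every trunk outcome nudges per-attribute preference weights,
// so each subsequent trunk can be scored against what the customer kept.

type Item = { id: string; attributes: string[] }; // e.g. ["slim-fit", "navy"]

class StylePreferences {
  private weights = new Map<string, number>();

  // Kept items vote their attributes up; returned items vote them down.
  recordOutcome(item: Item, kept: boolean): void {
    const delta = kept ? 1 : -1;
    for (const attr of item.attributes) {
      this.weights.set(attr, (this.weights.get(attr) ?? 0) + delta);
    }
  }

  // Score a candidate for the next trunk by summing its attribute weights.
  score(item: Item): number {
    return item.attributes.reduce(
      (sum, attr) => sum + (this.weights.get(attr) ?? 0),
      0,
    );
  }
}
```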

Diminishing rate of return over time? Nay, increasing returns.

Without a bulky UI, it’s easier to become more and more relevant. For fashion, the best interface is no interface.

Another company focused on adapting to your needs is Nest.

When I first saw Nest, I thought they had just slapped an interface on a thermostat and called it “innovation.”

As time passes, the need to use Nest’s UI diminishes. (Source: YouTube)

But there’s something special about the Nest thermostat: it doesn’t want to have a UI.

Nest studies you. It tracks when you wake up and what temperatures you prefer over the course of the day. Nest works hard to eliminate the need for its own UI by learning about you.
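
A minimal sketch of that learning loop, using an hourly-average rule that is assuredly far simpler than Nest’s real algorithm:

```typescript
// Log every manual adjustment, then set the temperature unprompted once a
// pattern exists, so the user eventually stops touching the dial at all.

interface Adjustment { hourOfDay: number; targetTempC: number; }

class LearningThermostat {
  // Running average of the preferred temperature for each hour of the day.
  private learned = new Map<number, { sum: number; count: number }>();

  // Each time the user turns the dial, remember what they wanted and when.
  recordAdjustment({ hourOfDay, targetTempC }: Adjustment): void {
    const bucket = this.learned.get(hourOfDay) ?? { sum: 0, count: 0 };
    bucket.sum += targetTempC;
    bucket.count += 1;
    this.learned.set(hourOfDay, bucket);
  }

  // Once we know this hour's pattern, apply it without being asked.
  scheduledTarget(hourOfDay: number, fallbackC = 20): number {
    const bucket = this.learned.get(hourOfDay);
    return bucket ? bucket.sum / bucket.count : fallbackC;
  }
}
```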

Haven’t I heard this before?

The foundation for No UI has been laid by countless other members of the design community.

In 1988, Mark Weiser of Xerox PARC coined “ubiquitous computing.” In 1995, this was part of his abstract on Calm Technology:

“The impact of technology will increase ten-fold as it is imbedded in the fabric of everyday life. As technology becomes more imbedded and invisible, it calms our lives by removing annoyances while keeping us connected with what is truly important.”

In 1998, Donald Norman wrote “The Invisible Computer.” From the publisher:

“…Norman shows why the computer is so difficult to use and why this complexity is fundamental to its nature. The only answer, says Norman, is to start over again, to develop information appliances that fit people’s needs and lives.”

In 1999, Kevin Ashton gave a talk about “The Internet of Things.” His words:

“If we had computers that knew everything there was to know about things—using data they gathered without any help from us—we would be able to track and count everything, and greatly reduce waste, loss and cost.”

Today, we finally have the technology to achieve a lot of these goals.

This past year, Amber Case talked about Weiser-inspired location awareness.

There’s a lot we can achieve with some of our basic tools today.

Let’s keep talking.

Oh, there’s so much more to say:

Watch the Cooper Parlor. After this essay exploded on Twitter, Cooper hosted a No UI event with special guest, design legend Donald Norman.

Listen to “The best interface is no interface” at SXSW. Thanks for reading this essay, tweeting about it, and generously pressuring SXSW to accept this talk. Thanks to you, I will be speaking about “The best interface is no interface” at SXSW 2013.

Discuss on Branch. Join the conversation on Branch about the world of No UI.

Follow the No UI Tumblr. I’m collecting more case studies, more examples and articles about the technology that can help us eliminate the interface on Tumblr. Get inspired at nointerface.tumblr.com

Comment below. Where do you see No UI opportunities?


Special thanks: to everyone at Cooper and all those who have helped, particularly Stefan Klocek, Chris Noessel, Doug LeMoine and Meghan Gordon.

Corrections: the original version of this article referred to “Pay with Square” as “Pay by Square”, incorrectly stated the published date of “The Invisible Computer” and cited Adam Greenfield.

On Nov 10th, 2013, shortly before the 50th anniversary of the JFK assassination, National Geographic Channel premiered Killing Kennedy, a well-made movie about the story behind the assassination. Apart from the beautifully made film itself, NatGeo also launched an equally touching Web experience: http://kennedyandoswald.com

The simple yet stunningly rich experience of the website is one of the best examples of Web storytelling, through the use of not only great photography and video, but also an important element: sound.

The beautifully mastered voice-over in the background, the piano, the birds, the whirlwind, the snapping of cameras…all of these non-visual elements instantly add another dimension to the experience, drawing the audience into the emotional space-time created by the artists. Try a simple test: browse the site with sound on, then with sound off. You will find yourself shifting between two worlds, an immersive world inside the story and a world standing outside it. That’s the power of sound: it primes your mind and brings the storytelling to life.

According to research conducted by Dr. Vinoo Alluri at the University of Jyväskylä, Finland, sound is the only medium that lights up the entire brain under fMRI, compared with the partial activations produced by visual stimuli. The researchers found that listening to music recruits not only the auditory areas of the brain but also large-scale neural networks. For instance, they discovered that processing musical pulse recruits motor areas, supporting the idea that music and movement are closely intertwined. Limbic areas, known to be associated with emotion, were involved in rhythm and tonality processing. Processing of timbre was associated with activations in the so-called default mode network, which is assumed to be associated with mind-wandering and creativity.

“Our results show for the first time how different musical features activate emotional, motor and creative areas of the brain,” says Prof. Petri Toiviainen from the University of Jyväskylä. “We believe that our method provides more reliable knowledge about music processing in the brain than the more conventional methods.”

Filmmakers are masters of creating compelling and convincing storytelling experiences, and no one understands how powerful sound is for storytelling better than they do.

“The power of sound to put an audience in a certain psychological state is vastly undervalued. And the more you know about music and harmony, the more you can do with that.” - Mike Figgis

As in filmmaking, interactive Web experiences engage our audio-visual senses, only with interactivity and without the constraint of time. That is the challenge of using sound on the Web: it is very difficult to synchronize sound when you cannot control people’s visual flow and sequence. Yet the same freedom in how people interact with the Web is also a great opportunity to use sound in creative ways, bringing more immersion and realism to the experience. The key is context: sound in interactive experiences has to be contextual and responsive; for example, the sound of birds and waves can be triggered as ambience while the audience is viewing photos of the ocean.
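
As a sketch of what contextual, responsive sound can look like with standard browser APIs, here ambience plays only while its associated section is in view. The selector and audio URL are placeholders:

```typescript
// Loop an ambient sound only while its section (say, a gallery of ocean
// photos) is actually visible, pausing it the moment the reader scrolls away.

function bindAmbience(sectionSelector: string, audioUrl: string): void {
  const section = document.querySelector(sectionSelector);
  if (!section) return;

  const ambience = new Audio(audioUrl);
  ambience.loop = true;
  ambience.volume = 0.4; // sit under any voice-over, never dominate

  // Fires when the section enters or leaves the viewport.
  const observer = new IntersectionObserver((entries) => {
    for (const entry of entries) {
      if (entry.isIntersecting) {
        ambience.play().catch(() => {
          // Autoplay policies may require a prior user gesture.
        });
      } else {
        ambience.pause();
      }
    }
  });
  observer.observe(section);
}

// The birds-and-waves example from the text: ambience follows the photos.
bindAmbience("#ocean-gallery", "/audio/waves-and-gulls.mp3");
```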

When the right sounds are used in the right contexts, responsively, the experience is not only more engaging and memorable but can also influence people’s behaviour. Studies suggest that the right use of sound effects and background music in storytelling-based user experiences significantly improves key metrics such as click-through rate and time spent, as well as social sharing and, potentially, conversions. In other words, used well, sound brings better business results: with the help of music and sound, audiences understand the story better and enjoy the experience more.

Although the Web has existed for over two decades, interactive sound at mass scale is still relatively new, even undervalued. A Google search on the subject turns up many ‘best practices’ recommending against sound on the Web, for a few reasons. First, it is hard to synchronize sound with the right contexts on the Web. Second, the technology and bandwidth simply weren’t there, so latency and performance have always been top concerns. Last but not least, there is a shortage of experts in designing interactive sound UX, so rather than doing it badly, many prefer to avoid it altogether. But none of this means we should ignore the power of sound and keep silent.

Until concrete Web-specific audio UX principles and methods are established through industry practice and research, many methods and frameworks from traditional filmmaking can be learned from and borrowed. For example, the D3S (Dynamic Story Shadows and Sounds) framework was built with the main objective of increasing a viewer’s understanding and enjoyment of an interactive story generated in a virtual environment with autonomous virtual agents. It treats music as two parallel layers: event sounds and background music. Event sounds underscore the actions of the virtual characters in the scene. Background music, by contrast, takes on some of the functions of a film score, with a special focus on enhancing understanding of the story; in D3S it is classified into four categories: character themes, music that emphasizes emotions, music for key moments, and music as filler.

Certain musical features can change dynamically as the environment evolves. D3S considers three: volume, instrumentation and tempo. Volume is tied to emotional intensity. Different instruments are assigned to different characters, so the audience has a better sense of what is happening in the story and who is doing what. The third parameter, tempo, is tied to the environment’s arousal.
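
A rough sketch of how those mappings might be expressed in code. The four categories come from the framework as described above; the types, ranges and instrument assignments are illustrative, not part of any published API:

```typescript
// D3S-style background music: four categories, with volume tracking
// emotional intensity, instrumentation tracking character, and tempo
// tracking the environment's arousal.

type BackgroundMusicCategory =
  | "character-theme"
  | "emotion-emphasis"
  | "key-moment"
  | "filler";

interface SceneState {
  emotionalIntensity: number; // 0..1
  arousal: number;            // 0..1
  activeCharacter: string;
}

// Each character gets its own instrument so the audience can tell who acts.
const instrumentFor: Record<string, string> = {
  hero: "piano",
  rival: "cello",
};

interface MusicDirective {
  category: BackgroundMusicCategory;
  instrument: string;
  volume: number; // 0..1, scaled by emotional intensity
  bpm: number;    // tempo rises with the environment's arousal
}

function scoreScene(
  scene: SceneState,
  category: BackgroundMusicCategory,
): MusicDirective {
  return {
    category,
    instrument: instrumentFor[scene.activeCharacter] ?? "strings",
    volume: 0.2 + 0.8 * scene.emotionalIntensity,
    bpm: Math.round(60 + 80 * scene.arousal),
  };
}
```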

More specifically, associating instruments with characters is a good way of hinting at which actions a given character is performing, helping the audience identify them. Changes in the volume of sounds tied to interactions between characters influence the perceived strength of the relationship between them. Themes with features associated with happiness (such as major mode and faster tempo) can suggest that a character is happy, while themes with features associated with sadness (such as minor mode and slower tempo) can suggest the opposite. Background music can also reshape what the audience believes is happening in a scene: two characters acting under one kind of music read one way; change the music radically and the same actions read completely differently. The results obtained with D3S underline the importance of music associated with virtual characters, of sound and music in how those characters are perceived, and ultimately in their believability.

The above is just a brief introduction to the role music and sound can play in creating immersive, emotional digital experiences. I see a big trend coming: rich, sound-enabled, multi-dimensional digital experiences riding alongside the emerging technologies of wearable computing and multi-screen experiences. Humans have long used sound to learn about and interact with the physical world, and there is no reason we should not use sound as a key interface in the digital world. A big paradigm shift is coming.

For more information, please feel free to leave your comment below or contact BOZ UX.

 
