Category: Technology

On Nov 10th, in the diamond-wedding-anniversary year of Jacqueline Bouvier Kennedy and John Fitzgerald Kennedy, the National Geographic Channel premiered Killing Kennedy, a well-made movie about the story behind the JFK assassination. Beyond the beautifully made movie itself, NatGeo also launched an equally touching Web experience: http://kennedyandoswald.com

The website's simple yet stunningly rich experience is one of the best examples of web storytelling, achieved through not only great photography and video but also one more essential element: sound.

The beautifully mastered voice-over in the background, the piano, the birds, the wind, the snapping of cameras: all these non-visual elements instantly add another dimension to the experience, drawing the audience into the emotional space-time created by the artists. Try a simple test: browse the site with sound on, then with sound off. You will find yourself shifting between two worlds, the immersive world of the story and a world standing outside it. That is the power of sound. It engages your mind and brings the storytelling to life.

According to research by Dr. Vinoo Alluri of the University of Jyväskylä, Finland, music lights up activity across virtually the entire brain under fMRI, whereas visual stimulation produces more localized activation. The researchers found that music listening recruits not only the auditory areas of the brain but also large-scale neural networks. For instance, they discovered that the processing of musical pulse recruits motor areas in the brain, supporting the idea that music and movement are closely intertwined. Limbic areas of the brain, known to be associated with emotions, were found to be involved in rhythm and tonality processing. Processing of timbre was associated with activations in the so-called default mode network, which is assumed to be associated with mind-wandering and creativity.

“Our results show for the first time how different musical features activate emotional, motor and creative areas of the brain,” says Prof. Petri Toiviainen from the University of Jyväskylä. “We believe that our method provides more reliable knowledge about music processing in the brain than the more conventional methods.”

Filmmakers are the masters of creating compelling and convincing storytelling experiences, and no one understands better than filmmakers how powerful sound is for storytelling.

“The power of sound to put an audience in a certain psychological state is vastly undervalued. And the more you know about music and harmony, the more you can do with that.” - Mike Figgis

Like film, interactive Web experiences engage our audio-visual senses, only with interactivity and no constraint of time. And time is exactly the challenge with using sound on the Web: it is very difficult to synchronize sound when you cannot control people's visual flow and sequence. However, the freedom people have in interacting with the Web is also a great opportunity to use sound in very creative ways, bringing more immersion and realism to the experience. The key is context. The use of sound in interactive experiences has to be contextual and responsive; for example, the sound of birds and waves is triggered as ambience when the audience is viewing photos of the ocean.
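To make the idea of contextual, responsive sound concrete, here is a minimal sketch using the standard Web Audio API and an IntersectionObserver: an ambient loop fades in while an ocean gallery is on screen and fades out when it scrolls away. The element id #ocean-photos and the asset ocean.mp3 are hypothetical names for this example; this is one possible approach, not how the NatGeo site is actually built.

```typescript
// A minimal sketch: fade an ambient loop in when the ocean gallery scrolls
// into view, and out when it leaves. "#ocean-photos" and "ocean.mp3" are
// hypothetical. Note: modern browsers may suspend the AudioContext until a
// user gesture; call ctx.resume() in a click handler if needed.
const ctx = new AudioContext();
const gain = ctx.createGain();
gain.gain.value = 0; // start silent
gain.connect(ctx.destination);

async function startLoop(url: string): Promise<void> {
  const data = await (await fetch(url)).arrayBuffer();
  const source = ctx.createBufferSource();
  source.buffer = await ctx.decodeAudioData(data);
  source.loop = true;
  source.connect(gain);
  source.start();
}

startLoop("ocean.mp3"); // waves-and-birds ambience, silent until faded in

const observer = new IntersectionObserver((entries) => {
  for (const entry of entries) {
    const target = entry.isIntersecting ? 0.6 : 0;
    gain.gain.setValueAtTime(gain.gain.value, ctx.currentTime);
    gain.gain.linearRampToValueAtTime(target, ctx.currentTime + 1.5); // gentle fade
  }
});
observer.observe(document.querySelector("#ocean-photos")!);
```

The fade is the important design choice here: an abrupt start or stop would break the very immersion the sound is meant to create.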

When the right sounds are used with the right context and responsiveness, the experience becomes not only more engaging and memorable, but can also influence people's behaviour. Studies suggest that the right use of sound effects and background music in a storytelling-based user experience significantly improves key metrics such as click-through rates and time spent, as well as social sharing and, potentially, conversions. In other words, when used right, sound brings better business results: with the help of music and sound, audiences understand the story better and enjoy the experience more.

Although the HTML Web has existed for over two decades, interactive sound at mass scale is still relatively new on the Web, and even undervalued. A Google search on the subject turns up many 'best practices' recommending against using sound on the Web, for a few reasons. First, it is hard to synchronize sound with the right context on the Web. Second, the technologies and internet bandwidth simply weren't there until recently, so latency and performance have always been top concerns. Last but not least, there is a shortage of talent and expertise in designing interactive sound UX, so rather than doing it badly, many avoid it altogether. But none of that means we should ignore the power of sound and keep silent.
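The latency concern, at least, is increasingly addressable. Here is a sketch of one common mitigation with the Web Audio API: decode short effect files into memory up front, so a triggered sound plays immediately instead of waiting on a network fetch. The file names are hypothetical.

```typescript
// Preload and decode short event sounds so interaction-triggered playback
// is near-instant. File names are hypothetical.
const audioCtx = new AudioContext();
const buffers = new Map<string, AudioBuffer>();

async function preload(urls: string[]): Promise<void> {
  await Promise.all(
    urls.map(async (url) => {
      const data = await (await fetch(url)).arrayBuffer();
      buffers.set(url, await audioCtx.decodeAudioData(data));
    })
  );
}

function play(url: string): void {
  const buffer = buffers.get(url);
  if (!buffer) return; // not decoded yet: stay silent rather than stall
  const source = audioCtx.createBufferSource();
  source.buffer = buffer;
  source.connect(audioCtx.destination);
  source.start();
}

// Decode during load or idle time; trigger instantly on interaction.
preload(["camera-snap.mp3", "page-turn.mp3"]).then(() => {
  document.addEventListener("click", () => play("camera-snap.mp3"));
});
```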

Until concrete Web-specific audio UX principles and methods are established through industry practice and research, many methods and frameworks can be learned and borrowed from traditional filmmaking. For example, the D3S (Dynamic Story Shadows and Sounds) framework was built with the main objective of increasing a viewer's understanding and enjoyment of an interactive story generated in a virtual environment with autonomous virtual agents. It works with two parallel layers of music execution: event sounds and background music. Event sounds underscore the actions of the virtual characters in the scene. Background music, by contrast, provides some of the functions of a film score, with a special focus on enhancing understanding of the story. In D3S, this type of music is classified into four categories: character themes, background music that emphasizes emotions, background music for key moments, and background music as filler.
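As an illustration only (D3S is a research framework, and this is my sketch of its described structure, not the framework's actual code), the two-layer split might look like this in an interactive experience, with the four background-music categories captured as a type:

```typescript
// An illustrative sketch of D3S's two parallel layers, not the framework's
// actual implementation: event sounds underscore character actions, while a
// separate background-music layer works in four categories.
type MusicCategory = "characterTheme" | "emotion" | "keyMoment" | "filler";

// Hypothetical mapping from character actions to short effect files.
const eventSounds: Record<string, string> = {
  walk: "footsteps.mp3",
  knock: "knock.mp3",
};

function onCharacterAction(character: string, action: string): void {
  const file = eventSounds[action];
  if (file) void new Audio(file).play(); // layer 1: underscore the action
}

function onStoryBeat(category: MusicCategory, cue: string): void {
  // Layer 2: swap or crossfade the background track for the new story beat.
  // "filler" keeps silence from breaking immersion between key moments.
  console.log(`background music [${category}]: ${cue}`);
}

onCharacterAction("protagonist", "knock");
onStoryBeat("keyMoment", "confrontation-theme.mp3");
```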

Certain musical features can change dynamically with the evolution of the environment. D3S considers three: volume, instrumentation, and tempo. Volume is associated with emotional intensity. Different instruments are associated with different characters, so that the audience has a better sense of what is happening in the story and who is doing what. The third parameter, tempo, is associated with the environment's arousal.
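Translated into code, those three mappings might look like the sketch below. The value ranges (intensity and arousal normalized to 0..1, tempo in BPM) and the character-to-instrument table are assumptions for illustration, not values from the D3S research.

```typescript
// A sketch of D3S's three dynamic feature mappings. Ranges and the
// instrument table are illustrative assumptions.
interface SceneState {
  emotionIntensity: number; // 0..1 -> volume
  arousal: number;          // 0..1 -> tempo
  activeCharacter: string;  // -> instrumentation
}

const instrumentFor: Record<string, string> = {
  protagonist: "piano",
  antagonist: "cello",
};

function musicParams(state: SceneState) {
  return {
    volume: state.emotionIntensity,                 // louder = more intense
    instrument: instrumentFor[state.activeCharacter] ?? "strings",
    tempoBpm: 60 + state.arousal * 80,              // 60..140 BPM as arousal rises
  };
}

// e.g. a tense confrontation scene:
console.log(musicParams({ emotionIntensity: 0.9, arousal: 0.8, activeCharacter: "antagonist" }));
```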

More specifically, associating instruments with characters is a good way of hinting at which actions a certain character is performing, helping the audience identify them. Changes in the volume of sounds tied to interactions between characters influence the perceived strength of the relationship between them. Themes with features associated with happiness (such as major mode and faster tempo) can suggest that a character is happy, while themes with features associated with sadness (such as minor mode and slower tempo) can suggest the opposite. Background music can also reshape the reading of a scene: if two characters act under one type of music, the audience may think they are doing one thing; change the music radically, and the audience may read a completely different action. From the results obtained, the researchers draw conclusions about the importance of music associated with virtual characters, emphasizing the role sound and music play in how these characters are perceived, and ultimately in their believability.

The above is just a brief introduction to the role music and sound can play in creating immersive and emotional digital experiences. I see a big trend coming: rich, sound-enabled, multi-dimensional digital experiences riding the emerging technologies of wearable computing and multi-screen experiences. Humans have long used sound to learn about and interact with the physical world, and there is no reason we should not use sound as a key interface in the digital world. A big paradigm shift is coming.

For more information, please feel free to leave your comment below or contact BOZ UX.

 


Ever spent five embarrassing minutes digging through your bag for a store points card in front of a long line at the cashier, only to find you left it at home? Now there is a solution: a single smart card that puts all your cards in one, so you never have to worry about a missing card again.

Having worked with multiple financial clients, I've witnessed and been part of the battle for the digital wallet. Yet this week the creator of Coin, a seven-person startup in San Francisco, stunned the world. Mobile payments remain a much-sought-after nut to crack for technology companies large and small: any firm able to facilitate person-to-person or person-to-business transactions at mass scale stands to gain significant profit off those payments.

There are two things to note with the emergence of Coin:

One, the breakthrough came from a small startup, not a major financial institution. This is yet another great example of how tech is slowly (or rather, rapidly) eating away at what used to be the gated community of big banks. With the emergence of PayPal, Square, Bitcoin and now Coin, the threat to the traditional financial industry is not only real but tremendously urgent.

Two, the solution is a smart one. Coin is fabricated with a patent-pending magnetic stripe that can change depending on which card the owner wants to use. The battery in Coin, said to last up to two years, powers a small display that shows which saved card will be charged, along with its expiration date. Cards are entered into Coin by swiping them through a Square-like dongle plugged into a smartphone. Coin bridges wireless digital technologies such as Bluetooth and NFC with the physical attributes of traditional cards: you can still swipe the Coin card as you normally would a traditional credit card, lowering the learning curve and minimizing the impact on established user conventions and behaviours.

This is one more step toward a bigger overhaul of our financial world. Soon enough, cards may disappear altogether, and with them come iris or fingerprint payments, smartwatch payments, and more. The internet of things and big data will further enable micro-transactions that will change every corner of our world. The future of our financial engagement is fascinating.

 

A recent report on PSFK reveals that many sci-fi-like technologies have already started to emerge and go mainstream.

A brain–computer interface (BCI), often called a mind-machine interface (MMI), or sometimes called a direct neural interface or a brain–machine interface (BMI), is a direct communication pathway between the brain and an external device. BCIs are often directed at assisting, augmenting, or repairing human cognitive or sensory-motor functions. (Source: Wikipedia)

Here are a few new things that offer a sneak peek at what the future entails.

Neurocam

Demoed at this year's Human Sensing conference, Neurocam is a wearable camera system that uses brainwave sensors and a smartphone camera to identify what the wearer is interested in, then automatically records the footage and saves it to an album. The system consists of a headset with a brainwave sensor; the user attaches an iPhone to the headset. The iPhone camera "sees" what the wearer is looking at through a prism, and an iPhone app analyzes the wearer's brainwaves, scoring interest on a scale of 0 to 100. If the wearer's brainwaves indicate an interest level of at least 60, the system automatically records the scene and saves it as a five-second GIF clip.
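The triggering rule, as described, reduces to a simple threshold check. The sketch below is a guess at the shape of that logic; the scoring function and camera interface are hypothetical, and the real app's internals are not public.

```typescript
// A sketch of the described trigger: interest is scored 0..100 from the
// brainwave signal; at 60 or above, a five-second clip is captured.
// The Camera interface and scoring pipeline are hypothetical.
const INTEREST_THRESHOLD = 60;
const CLIP_SECONDS = 5;

interface Camera {
  recordClip(seconds: number): void;
}

function onBrainwaveSample(interestScore: number, camera: Camera): void {
  if (interestScore >= INTEREST_THRESHOLD) {
    camera.recordClip(CLIP_SECONDS); // saved as a five-second GIF clip
  }
}
```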


 

Mind-Controlled Cars

Increasing the amount of time you can concentrate sounds like it would be a tedious process, but a graduate from Design Academy Eindhoven has recently proven that it can be quite the opposite. Alejo Bernal has created an illuminated toy car that can only be controlled by your mind and gets brighter the more you are able to focus your attention. His hope is that the project will help ADHD sufferers to overcome their condition and learn what it means to stay focused for extended periods of time.

To take control of the car, you have to wear an electroencephalography (EEG) headset that measures your brain's electrical activity and converts it into actionable signals for the car. Talking with Dezeen, the designer explained how the remote-control car works:

As you try to focus, the increased light intensity of the vehicle indicates the level of attention you have reached, [and] once the maximum level is achieved and retained for seven seconds, the vehicle starts moving forward.
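That description boils down to a brightness mapping plus a dwell-time rule. Here is a sketch, assuming an attention value of 0..100 sampled once per second; the EEG headset's real output format and the car's interface may well differ.

```typescript
// Brightness tracks attention; the car drives only after maximum attention
// has been held for seven consecutive seconds. Interfaces are hypothetical.
const MAX_ATTENTION = 100;
const HOLD_SECONDS = 7;
let secondsAtMax = 0;

interface Car {
  setLight(level: number): void; // 0..1 brightness
  driveForward(): void;
}

function onAttentionSample(attention: number, car: Car): void {
  car.setLight(attention / MAX_ATTENTION); // light intensity mirrors focus
  secondsAtMax = attention >= MAX_ATTENTION ? secondsAtMax + 1 : 0; // reset on lapse
  if (secondsAtMax >= HOLD_SECONDS) {
    car.driveForward();
  }
}
```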


 

(to be continued…)