As phones have transitioned to smartphones, our personal technology has graduated from conduits between people to a more sophisticated breed that allows for – even invites – direct control. In tandem, people are getting rid of voicemail, making fewer phone calls, and texting more. In one sense, this seems like more truncated, efficient behavior, but it also implies greater intimacy with the device.
We’re also growing to expect the same level of control we have over our phones to extend to the devices in our environment. The “smart home” and “connected” objects are commanded with our phones for the time being. Contrary to the shift in phone use, control that is buried in a growing library of apps is not efficient.
The technological response to surfacing quick control over these smart objects is the use of voice interfaces. The Xbox’s Kinect allows for voice control of your Xbox apps and access to media. The Xfinity remote control makes “change the channel to HBO” possible. Apple’s Siri, Microsoft’s Cortana, Amazon’s Alexa, and the Google Now services are all serious attempts at broadening voice control to access many services.
While speech-to-text recognition has improved greatly, the voice-controlled services themselves still lack the sophistication that people presume exists when communicating through as nuanced a medium as speech. Even if that level of sophistication is attained, and the services understand and respond exactly as we expect them to, the challenge of intimacy remains.
When common interaction with phones shifted from calls to text as the interfaces allowed more direct (read: intimate) control, we created a controversial-yet-accepted balance between interacting with people directly and multitasking with our pocket computers. Voice interaction necessitates a more public display of that human–computer interaction – one so uncomfortable that it directly inhibits use. Think of the times you have used voice input on your phone: were they public settings, private settings with people around, or solitary settings?
Although we may not be able to out-design social mores, we can take the first challenge—that of accuracy, intuitive use, and predictable outcome—to the whiteboard and to the APIs.
Yes, much hype. Much much hype.
Yes, I’m always skeptical, and I’m assuming that VR headsets (e.g. Oculus) will take a few iterations, and lower price points, to catch on. Even a few years in, wearables are still getting mediocre traction; at best, Apple has people wearing them for social status or fashion. Nevertheless, new technology deserves design consideration even more than existing, common devices do. It needs to be nurtured, and “done right,” in order to have a longer life ahead.
What follows are a few things I would keep in mind if I found myself in a position to design for Virtual Reality. Perhaps with more exposure to VR, I can add to this list in the future.
Sharing the room
Others who are not wearing a headset have no insight into the VR experience, unlike a TV, which can be a shared experience. Devices will either have to become affordable enough for everyone to wear one at the same time, or the solitary device should provide some external feedback to others in the room, such as an outward-facing display that mirrors a 2D version of the virtual experience, distinct audio signals (for the room, not the wearer), or, as some currently offer, an optional feed that displays on a TV/monitor.
Accessories to support and enhance
Accessories can enhance the experience, further immersing you in the virtual reality by giving you a closer approximation of bodily control. These range from the necessary to the nice-to-have.
The ability to turn in place with ease (without falling into real-world objects) is probably the most important, and can be solved with a basic swivel chair or the more expensive 360° treadmills.
In concert with existing wrist wearables, or custom-made wristbands, the VR headset would no longer need to be the main point of interaction (click, tap, or toggle). Using the accelerometers and Bluetooth already included in any fitness wearable, one could wave an arm and have the action mimicked in VR. Similarly, a shake or a tap on the wrist could replace the need to tap a button on the headset for making selections.
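To make the wrist-tap idea concrete, here is a minimal sketch of how VR software might recognize a tap from a wearable’s accelerometer stream. The sample format and threshold are assumptions for illustration, not any real SDK.

```javascript
// Detect a wrist "tap" as a brief spike in accelerometer magnitude.
// Assumed input: an array of {x, y, z} samples in units of g.
const TAP_THRESHOLD = 2.5; // spike (in g) that counts as a tap; tuned by feel

function detectTap(samples) {
  return samples.some(({ x, y, z }) =>
    Math.sqrt(x * x + y * y + z * z) > TAP_THRESHOLD
  );
}
```

In practice the wearable would stream these samples over Bluetooth, and a detected tap would stand in for pressing the button on the headset.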
Keep things out of frame (move the eye)
The same principle applies here as in photography, painting, or any visual medium: you want the eye to move across the canvas. In this case, you want heads to turn. Short films that succeed at this have rich, beautiful environments but also play between primary and secondary subjects. At times the two are not within the same gaze, and you must turn to see one or the other.
This should be used in moderation, however. Too many subjects in different directions can easily tire a VR participant, and you risk a poor experience that leaves observers feeling they missed parts of the story because they were forced to follow one subject while another of equal importance remained out of view.
Sound quality is as important as image quality
A truly immersive experience relies on tricking your senses, and a well-crafted story relies on directed attention. Audio quality aids both: realistic ambient sound brings the observer into the virtual world, and the ability to subtly distinguish where a voice comes from helps people grasp whether there’s a character standing next to them that they need to turn and face, or whether the speech is coming from an omnipresent narrator.
Prompt to enable Do Not Disturb when starting the VR
This is a short one, but nothing ruins a virtual experience like a pesky notification pushing its way into view. Before starting a VR experience, there should be a reminder or prompt to enable Do Not Disturb mode on the phone. More aggressively, VR software could simply disable notifications, but I prefer to let users make that choice.
Subtitles should remain fixed, detached from video movement
Another specific point is that layered content, like subtitles, should be fixed to an easily legible portion of the view. In one demo, they sat out of view, below the general plane of vision. Although moving around and exploring the setting is a hallmark of VR, some visual elements should be fixed, or represented “out” of the virtual space – on another plane, or layer, if you will.
When I slow down to look at my interaction with most web sites, I notice an incongruity for accomplishing the same end goal: publishing.
While different sites offer different levels of sophistication, I’ve noticed that creating or editing content on the modern web is bubbling up closer to the surface. I liken this to a term used in computer science: abstraction.
At first, you had to write binary that worked directly with the processor. Then came languages that let you write more human-readable instructions, which were converted to binary for you. The higher you go, the more natural programming becomes for humans.
As Nick sums it up, there is a pretty deep (and technical) background to programming that few of us think of today — even the programmers. Even though a well-versed developer who works in an object-oriented language might know the logic behind the code, we have long since been removed from considering logic gates and binary code.
Translating this to web development, the abstraction layers could go as far as the binary code, but the fundamental difference between software and what we predominantly see on the web seems to start with HTML. Taking human steps back from HTML, by my count, we are just now seeing mainstream implementation of a fourth layer of abstraction.
HTML and other languages
It used to be that you had to write everything in the language our browsers speak. Yes, we still build websites like this, but you don’t have to know this language to write a blog or update your status. I consider HTML, CSS, and other browser languages to be the first, or bottom, layer of abstraction on the web. Kids used to learn HTML if they wanted their MySpace page to look a certain way. Then WordPress said, “no more!” Enter the second layer of abstraction.
Admin panels and WYSIWYG
WYSIWYG (what you see is what you get) has been around since before the internet, letting you select a different font style or change your margins, colors, and other preferences. Its implementation in Content Management Systems (CMS) brings about the second layer of abstraction. I’m sure you can find plain-text input somewhere in earlier CMS platforms, but this is the more common method of creating content on the web. Blogger might have been the first prolific example (above), but I haven’t spent enough time Googling to tell you for sure.
To be true to the definition of abstraction, this comparison should only be made from language to language. In that regard, languages like SASS, LESS, and the like are another layer of abstraction on top of CSS. I’m using abstraction liberally to talk about the mode of interaction you have with a computer when creating content online. In that sense, SASS and CSS are in the same bucket of “manually writing out instructions for the browser.”
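To make that language-to-language layering concrete, here is a small sketch (the selector and variable names are made up for illustration). The SASS source on top is a convenience for the author, but it compiles down to the plain CSS shown in the trailing comment, which is all the browser ever sees:

```scss
// SCSS source: variables and nesting are conveniences for the author.
$accent: #c0392b;

.post {
  color: $accent;
  .title { font-weight: bold; }
}

// Compiled output (ordinary CSS, the same "instructions for the browser"):
//   .post { color: #c0392b; }
//   .post .title { font-weight: bold; }
```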
An important element of this second layer of abstraction is not only the WYSIWYG, but its placement within an administrator’s section of the website. On Blogger, WordPress, or even the relatively modern Tumblr, you must sign in and access a different side of the website to enter new content and publish.
What makes Tumblr interesting is that the primary experience of viewing and interacting with other posts within the community takes place in the same logged-in state / administrator view.
Other services fall elsewhere in the spectrum of a definitive edit mode and read mode. WordPress for example has a completely different experience in the edit mode or administrator side of the site, whereas Flickr was one of the first to blur the line and display the same interface for editing as reading — with minor differences when clicking on things.
It seems that in the development of a new content platform, there’s a defining choice: whether to embrace the CMS or to hide it as much as possible, creating the illusion that your draft could just as well be live, published content. This design decision is what carries some products from the second layer of abstraction to the third, where creating and reading content begin to merge.
Always logged-in + squishy CMS
As you can see above, Flickr has made quite a few changes over the years and I think it’s an excellent example of a third layer of abstraction to creating content online. Yes, there’s a smaller gap between this layer and the second than there was between the first two, but it’s distinct enough to deserve recognition.
In the Flickr example, people are still interacting with a CMS and they are logged in as an “administrator” of their content. What is significant, however, is that our identities online have become more solidified, and with a more liberal use of browser cookies, we are almost always identified when walking into a website we commonly use. For example, WordPress, despite its many improvements, will still ask you to sign in to access the administrator part of your blog; whereas Facebook, Flickr, Medium, and many others will remember you, and what’s more, the main mode of interacting with those sites (communities, really) is within the logged-in state.
As we lean toward an always-logged-in state by default, the CMS has necessarily merged with the published content. Even when interacting with the CMS, it has become standard to compose or edit in the same place you view other content. When making a Facebook status, your browser doesn’t ask you to leave the news feed. When publishing a tweet, your browser no longer requires you to refresh the page to view that content. Overall, there is a higher sophistication of front-end web development being employed that makes these CMS interactions quite “squishy” compared to the very distinct moments you have with, for example, a WordPress CMS and reading the blog.
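The pattern behind that squishiness can be sketched in a few lines. This is a hedged illustration, not any site’s actual code: the new post appears in the feed immediately (an “optimistic” update) while the network request completes in the background, so the reader never leaves the page they were browsing.

```javascript
// Minimal sketch of the "squishy CMS" pattern: compose and publish in place,
// with no navigation or page reload. `sendToServer` stands in for a real
// network call (e.g. fetch); all names here are illustrative.
let nextId = 1;

async function publishInPlace(feed, draftText, sendToServer) {
  const post = { id: nextId++, body: draftText, pending: true };
  feed.unshift(post); // optimistic update: show the post immediately
  try {
    await sendToServer(post); // round-trip happens in the background
    post.pending = false;     // confirmed by the server
  } catch (err) {
    feed.shift();             // roll back the optimistic insert on failure
  }
  return feed;
}
```

In a real page the optimistic insert would render straight into the feed’s DOM, which is exactly why posting a status no longer feels like visiting a separate “admin side” of the site.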
Sitting here in 2015, this doesn’t sound like much of a revelation. My apologies if I didn’t warn you ahead of time, but I don’t see myself as a visionary. I just think it’s important to document what we see.
If I am to follow this winding definition of creating content and getting further away from complying with computers to get things done, then the last layer as I see it must be collaborative documents. Hear me out:
Writing code – You’re using HTML, SASS, anything that’s meant for a browser and not a human.
WYSIWYG / Hard CMS – You’re filling in text boxes, clicking formatting buttons, previewing, publishing, and then going somewhere else to see how it looks.
Always logged-in / Squishy CMS – You don’t have to go anywhere else when finished creating or editing content. The line between browsing the web and composing has been blurred, but there is still a very strong line between you and your readers: the save/post/publish button.
Collaborative content – In this state, the CMS is the viewing platform and the composing platform at the same time. There’s no line between browsing and composing, nor is there a line between you and your readers.
I think where this notion of the Fourth Layer feels a bit forced is that it’s not a typical use case. Collaborative tools such as Google Docs, Dropbox, Box, and the like are associated with professional use only, and even in a professional setting they are not the norm.
What’s interesting to me however, is a hypothetical type of social media where that line between author and reader is selectively removed. Let’s take Facebook as an easy target. Imagine if you didn’t have to click the Post button on a status.
Oh wow Rob, that would make my life 9,000 times easier!
Yeah, I thought as much. It might even do more harm than good.
Side-note: isn’t there some publication that uses data Facebook has on what people draft as a status update versus what people actually publish?
I think it would be fun in some safe spaces, such as a curated group of your best friends where you could post content live and anyone that happens to be on the page at the same time can be drawn into your activity and then instantly (or simultaneously) begin to respond/react/build on what you’re putting out there.
Okay, so let’s back away from the Facebook example a bit. I’m getting very specific just to explore what using the web would feel like if we managed to abstract ourselves a bit more from the already ‘squishy’ CMS. Perhaps there will always be a line: a bare-minimum confirmation moment when an author acknowledges whether something will be born into the Internet or not.
I think this depends on one’s concept of the web. If you imagine it’s more akin to a book or newsletter than to a dinner conversation or phone call, then yes, there will always be an interaction with the machine, no matter how minimal. If you’re of the latter opinion, however, then maybe at some point all lines will dissolve and we will interact with the web as we do in person – maybe that leads to more explaining and less editing, but that’s a whole other can of worms.
I didn’t want to get too technical while exploring these different ways of creating content online, but I’d like to acknowledge that these layers of abstraction do not imply that we’re detaching from machines, markup languages, or programming of any sort. If anything, a greater layer of abstraction requires more sophisticated code to support such an elegant interface on the outside.
Good design should not aspire to render a complicated system seamless; on the contrary, I hope we continue to focus our attention on the seams and learn how best to mold them to fit our needs.*
We used to log in to create blog posts and that was a necessity for security and identifying the author. Now we are logged in everywhere, for social reasons, for our own sense of digital identity.
* I’d love to take credit for such an intelligent-sounding stance on design, but I first read about it here: Matthew Chalmers (2003)
There’s some discussion around the office, mostly among Interaction Designers, about the “Invisible User Interface.” Here’s an excerpt from The Best Interface is No Interface by Golden Krishna on The Verge. I readily agreed with almost everything he says …until reading the article that I share below.
As a criticism of our obsession with apps and interfaces (I’m certainly guilty), I think his point of view is refreshing. It strikes at something that should be discussed. Golden Krishna identifies a symptom of lazy design and, dare I say, kowtowing to less-than-savvy clients that are prepared to give you $1M to design an app.
An honest scenario
Some institution or company comes to a design agency with a problem.
Usually it boils down to a basic problem: We need more people to sign up for our service, or we want people to use our service more, and the classic we want people to buy our things instead of our competitors’.
The design agency has been designing apps and websites for years. The fact that a business even approaches a design agency implies that the business owner or otherwise important stakeholder has a solution in mind: an app, a web site, an interface.
The design agency will “take a step back”, carefully rephrase the business problem to their client. They’ll brainstorm and consider many solutions. At the end of the day, the unspoken understanding is that the design agency knows how to make apps and websites, and the business person came to the agency because that’s what they want.
Long story short, both parties end up jumping to the conclusion that an interface is the solution to the problem.
Designers are problem solvers.
You might have a title like visual designer, graphic designer, experience designer, interface designer, interaction designer… and that first word in your title pushes you to keep making the sort of things you always make. My greatest personal and professional challenge is to live up to the second word of these (often silly) titles. Being a Designer means considering everything, not jumping to the familiar toolbox to fix or improve something.
Side-note: This is why I was so enamored of the Service Design that Fjord champions. Unfortunately, it’s less tangible and must be difficult to sell, because this type of thinking is still in the minority of their portfolio.
I meant to just drop a link in here and sprinkle in a pull-quote from an article that I liked. I’m eager to explore where I really stand between the ideas of the Invisible Interface and seamful experiences, but I’m still quite fresh on the topic. For now, here’s the link I came here to share: