The True Multi-Screen Experience

1/27/2014 Stephen Colbert

One of the most exciting advancements in technology right now is the "second screen experience."  It is exciting because it marks the beginning of a unification of various branches of technology, with even deeper implications.

Anyone who has ever been watching Netflix and opened a separate window to verify Easter eggs, or to check out an actor on IMDb, is familiar with the second screen concept.  It is simply a means of communicating additional information to you in order to supplement or enhance your digital activities.  Something like the music identification app "Shazam" would be a very minimal example of a second screen experience.  Obviously your smartphone might be the only screen in that equation if your other "screen" is in fact a radio, but it's the exact same concept.

Monolith and Warner Bros. are producing a new game, "Middle-earth: Shadow of Mordor," that looks like it's basically a LOTR-themed combination of the Batman: Arkham games and Assassin's Creed.  It looks pretty chill and I'll probably play it when it comes out, which means you'll likely get to read my review at that time as well.

Anyway, Shadow of Mordor will have a companion app called the "Palantir" (LOTR fans know what that is, use the Googles if you don't).  It will run on any iOS device and listen to your gameplay to give you contextual assistance or useful information based on in-game events.

This is not the first game to have this feature; it is actually a growing trend, specifically with games and movies.  (Technically, the GameCube/Game Boy Advance link-up was probably one of the pioneers in this area, but I don't know of a single person who actually used the feature.)  It has been used for TV/movies to provide additional info, like showing family trees for Game of Thrones, or providing character origins, timelines, and special effects breakdowns for Avengers.  Some games will put items from the HUD on a second screen, or put a map on it so you can actually use the map while simultaneously exploring.


This is really something geeks have been doing for years when they watch TV with their smartphones or have a multiple-monitor setup on their PC.  Even while I'm writing this, I have Blogger open on one monitor and a Chrome tab full of Tim and Eric .gifs on the other.  The difference is, these tasks are now being split out to various devices and enhanced to be more seamless and automatic.

This is all cool stuff, but it calls into question a few of our preconceptions about different devices.  In an age of Netflix, Google Voice, Chromecast, Evernote, and Dropbox, what defines which screen is the "second screen"?  If I'm watching Netflix on my tablet, send it to my TV for a few minutes, then pick it up on my smartphone when I leave the room, which device was primary?  The tablet, since it originated there?  The TV, because that's what society has viewed content on for the longest?  Or were all three devices simply different types of displays, and the Chromecast was the primary device?


I find it funny that a lot of the tech industry got caught up with the classification of the "phablet" (or the existence of it at all) and entirely missed the point that screen size is irrelevant.  Your TV, your smartphone, your phablet, your 10" tablet, your 7" tablet, your computer monitor(s), your laptop screen, your smartwatch, and your Google Glass... They're all just displays that show you information.  

Almost none of the information you are being shown is actually on the device, so why does it matter what the device is "made for"?  Once cellular voice is VoIP, what is the difference between a 6" phone and a 6" tablet?  Obviously the issue of convenience is relevant.  Using a 10" device as the device always on your person is a little less convenient than a 4"-5" screen, but that should be defined by use case, not by a "phone" or "tablet" label.

We need to liberate ourselves from the outdated mindset of labeling device types and realize that TVs, monitors, tablets, phablets, smartphones, projectors, and basically any other internet-enabled screen are just content display devices.  They can all display basically the same content in a similar way.  And we need to design content around that same paradigm.  Yes, content should be adapted for various screen sizes, but that's it.
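To make the "screen size, not device label" idea concrete, here is a minimal hypothetical sketch (the function name and the breakpoint values are my own illustration, not anything from an actual framework): content picks a layout purely from viewport width, never asking whether the device calls itself a phone, phablet, tablet, or TV.

```typescript
// Hypothetical sketch: choose a layout from viewport width alone.
// The breakpoints below are illustrative, not a standard.
type Layout = "compact" | "regular" | "expanded";

function layoutFor(widthPx: number): Layout {
  if (widthPx < 600) return "compact";    // small handheld-sized screens
  if (widthPx < 1200) return "regular";   // tablet- and small-laptop-sized screens
  return "expanded";                      // monitors, TVs, projectors
}

console.log(layoutFor(480));   // a 4"-5" phone-sized screen
console.log(layoutFor(800));   // a 7"-10" tablet-sized screen
console.log(layoutFor(1920));  // a TV or desktop monitor
```

The point of the sketch is that "phone" and "tablet" never appear in the logic; only the size of the display in front of you does.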


The Chromecast is great for media, but productivity should work the same way.  We are far closer to this end with various cloud services, but we are still treating each device like it is different in ways other than screen size.

Watch the movie "Her" and see how Theodore Twombly never has to stop and say, "Oh, wait, I need to get back to my desktop to complete this task, because that's where X is located."  The use of cloud services in that movie is beautiful.  Notice there are no peripherals.  He has a monitor and a 3D/hologram projector at home, a monitor at work, carries a mobile device, and has an earpiece used for most interaction.  That's the true multi-screen experience.  It's beautiful and we are on our way, but we definitely need a paradigm shift before it can be properly realized.
