When the final technical challenges to the mainstream adoption of Augmented Reality are overcome, the petabytes of data stored on the Internet will flood our immediate reality, opening up new frontiers for advertising and giving whole new meaning to the term “future shock”.
The first generation of AR tools, like the Layar and AcrossAir apps, have given us a glimpse of what is to come, linking the metadata of the Internet directly with the user’s immediate environment. The video below shows some of the practical applications.
But at present these are still little more than cool toys for early adopters. Although I desperately want to use such incredible technology on a daily basis, once the novelty wears off you soon find yourself defaulting to easier, handier, quicker means of finding things: map applications and, of course, your own memory.
When ideas are ahead of their time, we do our best to actualize them using whatever technology exists at the time. Case in point: Tablet PCs. The first patents for tablet technologies go back as far as 1888(!), and it wasn’t until 2001 that Microsoft unveiled the first modern, technically viable “tablet”, which was basically a laptop with a clunky touch interface. It took a convergence of several other technological trends (cheap, powerful processors, 3G broadband, hi-res colour screens and multitouch) to finally make the tablet viable – a threshold passed with the Apple iPad in 2010.
Really, only four technical advances are needed for AR 2.0 to take off. 4G mobile networks will allow far more data to be transferred, while advances in cloud computing will allow that data to be uploaded and processed by remote supercomputers, removing the limiting factor of the handset’s power. Another relevant technology, currently being touted by Google, is “autonomous search” – delivering search results without you actually performing a search. This seeming paradox means that Google will proactively deliver information based on your past locations, past search queries, data mined from your Gmail account, and your present location. The final advance, and the one that will likely take longest to perfect, is optical overlays such as Digital Glasses or Digital Contact Lenses, which will offer a new place to put this information.
Digital Contact Lenses are currently at the prototype stage, and big developments are expected in the coming years. The end goal is a HUD-style layer of information overlaid on your field of vision, merging Augmented Reality with your day-to-day existence. When these technologies converge, it is easy to imagine the next-generation AR experience that will emerge.
Say you are heading home from work. Services like Google and Bing know from mining your past search queries, locations and email correspondence that you a) have a habit of stopping to grab a bite to eat on the way home and b) have mentioned Mexican food a lot more in your recent emails and status updates. Autonomous search will then spontaneously shortlist three Mexican restaurants within ten minutes’ walk of your current location and display them on your Digital Contact Lens.
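To make the idea concrete, here is a minimal sketch of how an autonomous-search service might rank restaurants from mined signals. Every name, field and weight here is invented for illustration; a real system would use far richer signals than a cuisine-mention count, a star rating and walking time.

```python
def score_restaurant(restaurant, profile):
    """Rank a candidate by cuisine affinity, rating and walking distance."""
    # How often this cuisine shows up in recent emails / status updates
    cuisine_weight = profile["cuisine_mentions"].get(restaurant["cuisine"], 0)
    score = cuisine_weight * 2.0               # recent-mentions signal
    score += restaurant["rating"]              # crowd-sourced star rating
    score -= restaurant["walk_minutes"] * 0.3  # prefer closer options
    return score

def shortlist(restaurants, profile, n=3, max_walk=10):
    """Pick the top n candidates within max_walk minutes, best first."""
    nearby = [r for r in restaurants if r["walk_minutes"] <= max_walk]
    return sorted(nearby, key=lambda r: score_restaurant(r, profile),
                  reverse=True)[:n]

profile = {"cuisine_mentions": {"mexican": 5, "thai": 1}}
restaurants = [
    {"name": "Casa Roja", "cuisine": "mexican", "rating": 5, "walk_minutes": 7},
    {"name": "Golden Orchid", "cuisine": "thai", "rating": 4, "walk_minutes": 4},
    {"name": "Taco Loco", "cuisine": "mexican", "rating": 3, "walk_minutes": 12},
]
for r in shortlist(restaurants, profile):
    print(r["name"])
```

The point is that no query is ever typed: the “search” is just a ranking function running continuously over your location and history.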
Hungry, you look at the one with five stars and see it is only seven minutes’ walk away. You haven’t heard of it, so you decide to check it out. By selecting it either by voice (perhaps sub-vocal commands) or from another device like a phone, the path to the restaurant is overlaid on your field of vision directly onto the road itself, like a guideline towards the next quest in a computer game. Red at first, the line turns progressively green as you approach your destination. Taking a cue from the subtle HUD systems of video games, this kind of colour-based feedback avoids visual clutter and unnecessary intrusiveness.
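The red-to-green guideline is just linear colour interpolation driven by remaining distance. A minimal sketch, with the function name and RGB representation chosen purely for illustration:

```python
def guideline_color(remaining_m, total_m):
    """Interpolate the overlaid path from red (far) to green (arrived)."""
    # t runs from 0.0 at the start of the route to 1.0 at the destination
    t = max(0.0, min(1.0, 1.0 - remaining_m / total_m))
    red = round(255 * (1.0 - t))
    green = round(255 * t)
    return (red, green, 0)  # simple (R, G, B) triple

print(guideline_color(600, 600))  # start of a 600 m walk → pure red
print(guideline_color(0, 600))    # arrival → pure green
```

A real HUD would no doubt ease the transition and account for detours, but the core feedback loop is this simple.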
However, on your way there you see a second, branching pathway outlined in yellow. Curious, you look at the glowing path, and as you do so another panel fades into your peripheral vision. This is a “Sponsored Ad”, displayed based on conditions like your location and old-fashioned keyword bidding. The panel shows that a nearby Thai restaurant has an early-bird menu that finishes in ten minutes. You make a split-second decision to go for Thai instead and turn to follow the golden pathway. It is also likely that the current craze for points and badges in location-based services such as Foursquare will shift onto ad networks. Perhaps ad networks will offer meta-loyalty schemes in which you earn points for visiting sponsored locations.
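Combining keyword bidding with location might look something like the sketch below: an ad only enters the auction if its keywords match the user’s inferred intent, and its bid is then scaled by how close the user is. All of the ad fields and the proximity formula are assumptions for illustration, not any real ad network’s API.

```python
def effective_bid(ad, user_keywords, distance_m):
    """Keyword-gated bid, scaled down linearly with distance."""
    if not ad["keywords"] & user_keywords:
        return 0.0                       # no keyword match: skip the auction
    proximity = max(0.0, 1.0 - distance_m / ad["radius_m"])
    return ad["bid"] * proximity         # nearby users are worth more

ads = [
    {"name": "thai_early_bird", "keywords": {"dinner", "restaurant"},
     "bid": 0.80, "radius_m": 500},
    {"name": "coffee_chain", "keywords": {"coffee"},
     "bid": 1.50, "radius_m": 1000},
]
user_keywords = {"restaurant", "mexican"}  # inferred, never typed
winner = max(ads, key=lambda a: effective_bid(a, user_keywords, distance_m=200))
print(winner["name"])  # → thai_early_bird
```

Note that the coffee ad loses despite its higher raw bid: with no keyword overlap it never enters the auction at all.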
Scenarios like this are really only the tip of the iceberg. Microsoft has already demonstrated Bing Maps’ AR capability to overlay video onto a specific point in space. This will allow historical footage to be mapped onto your field of vision, relative to your point of view. Imagine being on holiday in Berlin and discovering a glowing video icon on the ground. Standing on it activates footage of the Berlin Wall falling, recorded on that exact spot decades earlier, aligned with your own point of view.
If we assume a certain level of uptake (say, as ubiquitous as mobile phones are now) and certain apps becoming more widespread than others, what we would have is a new stratum of virtual information existing in shared perceptual space.
Future AR developments are likely to include digital cameras built into the contact lenses themselves (at present this is science fiction, but it is the next logical step for AR, and hardly impossible), allowing Google Goggles-style image recognition as part of the AR experience. Stare at a flower and whisper a word, and you can bring up its Wikipedia page. Look at a product and automatically search for reviews and user-ranked meta-scores. The social, cultural and privacy implications of this are worth considering.
The Internet will start to break through into reality and become one with our everyday experience in a much more direct way. This is not so much augmented reality as a hyper-reality, one that will provoke – particularly in older generations and conservative movements – a Future Shock bordering on revulsion. They will argue that we are sacrificing what remains of our privacy, even our free will itself, in a Faustian bargain with technology. Will they be right?
What are the implications for the first generation to grow up in a world where data drifts through space? Where everything we do and everything we look at is stored and tracked? Where technology tells us where to go and what path to take? The consequences of fully realised AR are both exciting and terrifying, but when have we ever shrunk from developing a technology just because of its potential negative applications?