Google’s Project Glass Augmented Reality is Missing Something: Augmented Reality

Posted by Andrew on April 4th, 2012

GOOGLE PROJECT GLASS LOOKS AS PROMISING AS MICROSOFT’S VISION OF THE TABLET PC IN 2001…

Google has only released one video and little else on their project for bringing augmented reality to the masses, so it’s hard to cast aspersions on what’s the most vapory of vaporware. That said, I’ll pick apart the video, because even with the use of After Effects and the potential to show us anything, their vision of the future seems rather timid.

Like the silly Nintendo Power Glove in Minority Report (far less impressive than Microsoft’s Kinect and ideas in the labs when the movie was made), we get a vision of the future that feels dated before it happens.

The future is not run on Nintendo Power Gloves...

Google’s glasses appear to be just a screen in front of your face with eye tracking. And I don’t mean that in the ‘the iPhone is just a screen you touch’ way. It feels like Microsoft’s attempt at tablets in the early 2000s. They figured your finger would just be a pointing device for Windows. Substitute ‘eye’ for ‘finger’ here and you get a similarly shortsighted vision of the potential for this technology.

Google’s Glass doesn’t do anything different from what we do now. The screen is just in a different place. Think of how the iPad changed the way we interact with software, or how Microsoft’s Kinect changed gaming. Augmented reality could be bigger than all of this.

Touch interfaces took off once designers realized that the medium had changed. Google’s Glass doesn’t feel that way. I don’t think they get their medium. My first case in point is the map feature:

How does Google envision using augmented reality to show us a map? They just float a regular map in front of you. Why not lay the map over your field of view and actually show you a path to follow?

Second, let’s look at the trip to the bookstore. Obviously, Google doesn’t want to scare off brick-and-mortar partners by flashing deals to buy the book elsewhere. But why not use their already solid image-recognition technology to hover reviews over the book or show us augmented publisher information? The same goes for the concert poster. Make the thing move. Show us what a connected world looks like.

Third, the apps were disappointing. When the girlfriend calls, why not make it look like she’s standing on top of the building with him? Don’t just overlay reality; blend it. Why not create artificial elements in real space?

Fourth, show us virtual objects. What does a virtual ebook or magazine look like to read? I’d love to see what Google thinks the future of virtual items will be like with augmented reality. I have to imagine it’s more than a transparent screen.

That said, I’m excited that Google is taking the initiative on this. I’ll leave them with the words of Tom Hardy’s Eames in Inception, “You mustn’t be afraid to dream a little bigger, darling.”

For good time’s sake:

  • Michael Hilliard

I agree that it was all very simple and tossed together. I wonder if it was just that: a simple response to a leak, tossed together in order to wow “normal people,” not the few jaded nerds who sit back and critique things like this.

As one of those jaded nerds, I think the leak reached outside of what I have heard my normal friends call “nerd media,” and they put something out because some PR guy decided it would be good for their public image.

  • http://www.andrewmayne.com Andrew Mayne

    Good question. Ostensibly, it’s intended to get developers excited.

  • Anonymous

My chief issue with the way this video is put together is not just the things you have pointed out that it is missing, but the extremely misleading way it misrepresents what the user experience with such a product would actually be.

I don’t know about you, but I do not hold my eyes as steadily as a cameraman holds a camera. I am always shifting where I am looking by moving both my head and my eyeballs. This video, being a PR video, does not even pretend to address what I would consider basic functionality issues with such a product. I imagine it will be vaporware, because I’m not sure they can actually get it to work properly. I feel like the AR elements would move around my field of vision as I change it in such a way as to be nauseating. I feel like any interface designed to work in the way represented here would have to be massively more complicated than what is shown. Frankly, as much as you wouldn’t want one, you’d need a keyboard to use it. There are times when speaking aloud is not an option. As I understand it, the people who currently use eye tracking to type are generally the same people who don’t move much other than their eyes. Perhaps I am missing some people.

Factor in also that there would be many people who would have to have this system somehow integrated with their pre-existing vision-augmentation hardware (glasses or contacts), a factor that would surely alter functionality in a not-insignificant way. Some are unable to wear contacts, forcing them to use spectacles instead… something I’m sure would lead to a cracked-screen epidemic approaching the level currently experienced by iPhone users.

    So, as you began this article, this is the most vapory of vaporware, and in this form that is all it will ever be. 

  • Michael Hilliard

That would turn developers off. I am not a dev, but that video shows a pretty limited system of functionality to me. I don’t think a small-time dev would put his time toward ideas for this, given what is currently offered in the video. Time is money, and when you don’t have much money, you have no time to generate anything for a product that you can’t actually see. No pun intended.

  • http://twitter.com/3pointedit David Mcsween

    Climbing those stairs at the end, taking that call to his girl. I really thought that he was going to “share view” his last goodbye… All the way to the pavement.

And I doubt that anyone will really want to talk to their glasses in public. The killer app for mobile or cell phones? Not voice but txt. People want to type covertly to communicate quietly, not yammer on for all to hear.

Most human user interfaces are the way they are because of R&D, not T(op) of H(ead) or O(ut) of A(rse). The mouse works well, the keyboard works well, tablet drawing works well too; pointing with a finger in space, not so great.

  • http://www.andrewmayne.com Andrew Mayne

    I totally thought he was going to jump too!

  • http://twitter.com/3pointedit David Mcsween

    Yeah but no feathers, so I knew it was a bust :P

  • http://ebonnebula.deviantart.com/ EbonNebula

I agree that it was a bit underwhelming, but it feels like they are trying to demonstrate what is commercially viable today. And the technology just isn’t where it needs to be for fully immersive augmented reality. Yes, we do have the technology to project an image of someone so it looks like they are standing right next to you, but not in an accurate, visually appealing, cost-efficient way. Realistically, I’d say we are another 5-10 years away from any sort of immersive experience.

Seriously, Google Maps has been around for 7 years, and 20% of the time the map overlay doesn’t match up to the images in Street View.