Up on the bridge, there isn't really all that much in terms of classified displays, and having data integrated into what you see (Shoe-HUD) would be pretty awesome. We had an E-4 whose sole purpose in life during an UNREP was to hold up a whiteboard and write down prop pitch/engine RPM so the conn standing out on the bridgewing would have SA on what the last engine order was.
Why can't you just modify existing display software to accommodate this instead of investing in Google Glass? How do you get Google Glass and SRPM/pitch to 'talk' when the ship isn't built to transmit that data wirelessly? Don't say 'voice recognition' -- that tech has been around for over 20 years and I have yet to have it work reliably.
And for close-in surface engagements, topside talks in clock positions, CIC talks in true bearings.
Can't we just fix this by modifying training? If surface ships use a similar CCS to ours, couldn't they just speak in relative bearings, especially since the CCS can easily display both values instead of making the OOD mentally calculate the difference?
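For what it's worth, the conversion the CCS would be displaying is trivial arithmetic; here's a rough sketch in Python (function names and the clock-position rounding are mine, not anything out of an actual CCS):

```python
def true_to_relative(true_bearing: float, ships_head: float) -> float:
    """Convert a true bearing to a relative bearing, given own ship's heading.
    Relative 000 is dead ahead; result is normalized to [0, 360)."""
    return (true_bearing - ships_head) % 360

def relative_to_clock(relative_bearing: float) -> int:
    """Round a relative bearing to the nearest clock position (12 = dead ahead).
    Each 'hour' on the clock spans 30 degrees."""
    hour = round(relative_bearing / 30) % 12
    return 12 if hour == 0 else hour
```

So a contact at 090 true with own ship on course 270 comes out as 180 relative, i.e. 6 o'clock -- exactly the mental math we currently make the OOD do on the fly.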
What if a sailor aloft had an app in Glass that would show him all the maintenance steps rather than have someone below reading them to him, so he could also use two hands?
I can't speak Navy-wide, but in our community, if a Sailor is having a procedure read to him, it's so that the reader (usually the more senior of the two) can circle-x the steps and provide backup if he's about to jack it up. So an iPad with check boxes would be more appropriate, but then we're back to "just make a copy, put it on a clipboard, and spend the $400 elsewhere."
And innovating processes without at least DH-level approval is a big no-no. The latter can be fixed with a culture shift, but then you have to convince the heavies to let any Sailor make alts to your expensive, proprietary goggles at the risk of damaging them.
Off the top of my head, I can see a million uses for a bridge team (Rules of the Road/Standing Orders on the fly by just asking, CPA, VMS mirroring etc etc etc).
If you don't know the RoR and COSO, you shouldn't be standing watch. You don't pull out the Rules of the Road in the middle of the highway while driving your car, do you? VMS mirroring already exists, and VMS already displays CPA through the target data pair function (a relatively useless function that clutters the screen if PADs are on and set to a useful value, which they should be). Additionally, an iPad is better for manuals, as mentioned before -- presumably you'd be looking through a PDF of these documents and would need to get to the appropriate page quickly.
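To be clear about why CPA on Glass adds nothing: it's a simple closed-form calculation that VMS already runs continuously on tracked contacts. A sketch of the underlying math in Python (my own toy version, not VMS's actual code; units are whatever the caller uses, e.g. nautical miles and hours):

```python
import math

def cpa(rel_x: float, rel_y: float, rel_vx: float, rel_vy: float):
    """Closest point of approach for a contact at relative position
    (rel_x, rel_y) moving with relative velocity (rel_vx, rel_vy).
    Returns (cpa_range, time_to_cpa)."""
    speed_sq = rel_vx**2 + rel_vy**2
    if speed_sq == 0:
        # No relative motion: range never changes, CPA is current range.
        return math.hypot(rel_x, rel_y), 0.0
    # Time minimizing |P + V*t| is t = -(P . V) / |V|^2.
    t = -(rel_x * rel_vx + rel_y * rel_vy) / speed_sq
    t = max(t, 0.0)  # negative t means CPA is already astern of us
    return math.hypot(rel_x + rel_vx * t, rel_y + rel_vy * t), t
```

A contact 10 nm dead ahead with a relative velocity of 1 kt straight down the bearing line gives a zero CPA in 10 hours -- constant bearing, decreasing range.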
But you're right, I didn't stand any surface watches; we intel weenies generally don't get to do that--at least, no one on my ship did--so if there's some u/w watch application that would be improved by instant, one-handed ability to take photos of what the watchstander is seeing, or look up or share information, but that would be hampered by an iPad, I'm all ears.
This is the crux of the issue. What problem are you trying to solve that can only be done with a tech that hasn't been released yet? It seems instead that you're trying to make the problems match the solution, cuz Google Glass is what the cool kids wear. And I'm sorry, but that's exactly what a lot of the top brass do with their pet projects -- we need this because I thought of it and it's different, not because it necessarily solves anything.