A head-mounted display, abbreviated HMD, is just what it sounds like: a computer display you wear on your head. Informally, the term covers a display system built into goggles or a helmet, worn on the head, that gives the illusion of a monitor floating in front of the user’s face. Single-eye units are known as monocular HMDs, and dual-eye stereoscopic units (stereoscopy is a technique that adds an illusion of depth to a flat image, enabling a three-dimensional effect) as binocular HMDs. They are primarily designed so that no matter which direction the user looks, a monitor stays in front of their eyes.


Monocular HMD – Google Glass


Binocular HMD – Meta Pro



A head-mounted display is a type of head-up display (HUD), which can be defined as a transparent or miniaturized display technology that does not require users to shift their gaze from where they are naturally looking. Besides HMDs, there are also fixed-mounted HUDs. This is typically achieved by using projected or reflected transparent displays in the user’s line of sight. There are several distinct generations in HUD history. The first generation used a reflected CRT display. The second generation used solid-state light sources like LEDs to back-light an LCD projection. The third generation uses waveguide optics and, finally, the fourth generation uses scanning lasers to project all types of images and video. Some of the earliest HUDs were used in military vehicles to assist in navigation and targeting.

Wearable computers

Head-mounted displays are probably the most prominent symbol of wearable computers; however, many other devices belong to that group, starting with the 16th-century abacus ring worn like jewelry. Nowadays, as technological products get more sophisticated, there are amazing innovative components, like Myo.


First known wearable computer – Abacus on a ring


Ultimate wearable computer – Myo









Wearable computing is the study or practice of inventing, designing, building, or using miniature body-borne computational and sensory devices. Wearable computers may be worn under, over, or in clothing, or may themselves be clothes. However, the field of wearable computing extends beyond “Smart Clothing”; in fact, it is commonly referred to as “Body-Borne Computing” or “Bearable Computing” so as to include all manner of technology that is on or in the body, e.g. implantable devices as well as portable devices like smartphones.



There are many ways to sort HMDs into different groups. Head-mounted displays differ in whether they display only a computer-generated image (CGI), show live images from the real world, or combine both.

  • HMDs that display only a computer-generated image are sometimes referred to as virtual reality displays.
  • HMDs that allow CGI to be superimposed on a real-world view are sometimes referred to as augmented reality or mixed reality. Within this set, there are two main groups:
    • Optical See-Through, AKA Optical HMD: the real-world view can be combined with CGI by projecting the CGI through a partially reflective mirror while viewing the real world directly. In a mission-critical application, if you are concerned about what happens should your power fail, an optical see-through solution still lets you see something in that extreme situation. And if you are concerned about the utmost image quality, no combination of portable cameras and a fully immersive head-mounted display can match the “direct view” experience.
    • Video See-Through: the real-world view can also be combined with CGI electronically, by accepting video from a camera and mixing it with CGI. This is useful when you need to experience something remotely: a robot you send to fix a leak inside a chemical plant, or a vacation destination you are considering. It is also useful with image enhancement systems: thermal imagery, night-vision devices, etc. One advantage of video see-through is that it is much easier to match the video latency with the computer-graphics latency.
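The electronic mixing step in video see-through can be sketched as a simple per-pixel alpha blend of the CGI layer over the camera frame. This is an illustrative sketch only (the function name and array layout are our own, not any HMD vendor's API):

```python
import numpy as np

def composite(camera_frame, cgi_frame, alpha_mask):
    """Blend a CGI layer over a camera frame (video see-through sketch).

    alpha_mask is per-pixel in [0, 1]: 1.0 = pure CGI, 0.0 = pure camera.
    """
    a = alpha_mask[..., None]          # broadcast the mask over the RGB channels
    blended = a * cgi_frame + (1.0 - a) * camera_frame
    return blended.astype(camera_frame.dtype)

# Toy 2x2 RGB frames: mid-grey camera image, bright CGI overlay
camera = np.full((2, 2, 3), 100, dtype=np.uint8)
cgi = np.full((2, 2, 3), 200, dtype=np.uint8)
mask = np.full((2, 2), 0.5)            # 50% blend everywhere
print(composite(camera, cgi, mask)[0, 0])  # → [150 150 150]
```

A real pipeline would do this per frame on the GPU, but the arithmetic is the same, which is also why matching the camera latency to the CGI latency is straightforward here: both layers pass through the same mixing step.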




Although head-up displays landed in aircraft as early as 1948, long before head-mounted displays, they are an important part of the head-mounted display’s history.

Seeking ways for pilots to keep their line of sight outside the cockpit and avoid having to constantly look down at the instrument panel for information, aeronautical engineers devised a way to project visual data onto ground glass mounted within an airplane’s windshield. This, in essence, replaced the simple gunsight that had been used in aviation since World War I. The HUD has its origins in the fighter aircraft of World War II, when air combat grew more complex and speeds increased rapidly. To allow the pilot to focus on shooting down an enemy, vital information was shown directly ahead on a glass plate he could see through. In later years, as technology advanced, it was found that other information could also be shown, reducing the need to look down at the dashboard.



The idea appealed to car designers too, because it could improve driving safety if a driver’s eyes were looking ahead rather than down at instruments. However, the technology was expensive, and HUDs in cars remained a dream, usually shown in concept cars at motor shows. It was only in the mid-1980s that Nissan installed a HUD in a production model, which might have been the first commercial application in the car industry. It was a simple set-up, with the speed projected onto the windscreen ahead of the driver. One issue that cropped up was the visibility of the display on very bright days. Nissan offered the HUD for a while and then stopped for unknown reasons (customers probably did not find it useful or worth the extra cost). GM also tried offering it on some models and likewise gave up, and it would not be until over two decades later that HUDs would again appear in cars for sale to the public, mostly on very expensive models.




Medical practitioners can also benefit from the use of a head-mounted display. Using the device to project X-ray images, a surgeon can more efficiently locate and remove growths and tumors from the body.

Video Gaming






After seeing several examples of what people have done, we can consider developing the project with Google Glass as part of a larger system in which the glasses interact with other devices, such as Leap Motion, to achieve more complex goals.

Some ideas

We have just started investigating Google Glass. After reading this post, we were amazed by what some people have already done with Google Glass, and by the wide range of possibilities and ideas to be developed with it (regardless of whether those ideas are morally correct or not).

Getting Started…

In this post we try to summarise the software apps we are going to use and the full setup of the Google Glass development environment. Finally, we are going to deploy a simple example.

First of all, we need to install:

After the installation, we need to follow some more steps:

  1. To be able to compile Google Glass applications, we need to download and install, through the SDK Manager in Android Studio, version 15 of the Android SDK, named Ice Cream Sandwich Maintenance Release 1.
  2. To debug Google Glass apps, we also need to download and install, through the SDK Manager in Android Studio, the Google USB Driver. However, as we are developing on Windows 8, a fix is needed. The following steps explain how to enable the proper driver for Google Glass.
  3. Install the Google Glass companion application, MyGlass, to:
    1. Configure and manage Google Glass.
    2. Get location updates from the smartphone GPS.
    3. Send text messages.
    4. Hangout with contacts.
    5. Install Glassware (Google Glass only) apps.
  4. Configure WiFi network from MyGlass application following these steps.
  5. Set Google Glass in debug mode. To achieve this you need to:
    1. Go to the settings card.
    2. Select the Device Info card in the settings.
    3. Scroll over to Turn on debug and tap it.
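Once debug mode is on and the USB driver is working, a quick way to confirm the connection is to ask adb to list attached devices; Glass should appear as `device`. A minimal sketch, assuming the Android SDK platform-tools (which provide `adb`) are on your PATH:

```shell
# Verify that adb can see Google Glass before deploying anything.
if command -v adb >/dev/null 2>&1; then
    adb devices        # Glass should be listed as "device", not "unauthorized"
else
    echo "adb not found: install the Android SDK platform-tools first"
fi
```

If the device shows up as `unauthorized`, re-check the debug setting on the Device Info card and the USB driver fix from step 2.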

To test the environment setup, we clone a simple Google Glass app example from GitHub. The example shows the usage of the compass sensor in a live card (we will explain the design components and concepts in future posts).

After cloning it, we:

  1. Import the project to Android Studio with Import Project.
  2. Set the proper Android SDK version 15 in Project Structure.
  3. Deploy it to Google Glass and watch it run.
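The deploy step can also be done from the command line once Android Studio has built the APK. This is a hedged sketch: the APK path below is the usual Gradle debug output location and is an assumption about this particular project’s layout, not something the sample’s README specifies:

```shell
# Install the freshly built debug APK on Glass (path is an assumption; adjust as needed).
APK=app/build/outputs/apk/debug/app-debug.apk
if [ -f "$APK" ] && command -v adb >/dev/null 2>&1; then
    adb install -r "$APK"   # -r replaces an existing install, keeping its data
else
    echo "Build the project first and make sure adb is on your PATH"
fi
```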