Hardware Sensors

In this first instance we list the Google Glass hardware sensors; we'll go deeper into some of the sensors' capabilities after we get a bigger picture of the rest of the Google Glass hardware.

  •     Proximity/ambient light sensor
  •     3-Axis Gyroscope / Orientation sensor
  •     3-Axis Accelerometer
  •     3-Axis digital compass
  •     TouchPad

Proximity/ambient light sensor (LiteON LTR-506ALS): It is used to detect objects close to the glasses. This sensor measures the ambient light to adjust the brightness level of the screen, and could also be used to detect whether the user’s hands are in front of the Glass and activate, for example, gestural commands.
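As a sketch of how an ambient-light reading might drive screen brightness, here is a minimal, hypothetical mapping in Python. The thresholds, the logarithmic curve and the function name are illustrative assumptions, not Glass internals:

```python
import math

# Hypothetical sketch: map an ambient-light reading (lux) to a display
# brightness level, the kind of adjustment a sensor like the LTR-506ALS
# enables. All constants here are illustrative assumptions.
def brightness_from_lux(lux: float) -> float:
    """Return a display brightness in [0.1, 1.0] from an ambient lux reading."""
    # Clamp to a plausible indoor/outdoor range, then scale logarithmically,
    # since perceived brightness tracks ambient light roughly log-linearly.
    lux = max(1.0, min(lux, 10_000.0))
    level = math.log10(lux) / 4.0  # 1 lux -> 0.0, 10,000 lux -> 1.0
    return max(0.1, min(1.0, 0.1 + 0.9 * level))

print(brightness_from_lux(5))       # dim room: low brightness
print(brightness_from_lux(10_000))  # direct sunlight: full brightness
```

A real driver would also smooth readings over time so the screen does not flicker when the light level fluctuates.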

Inertial sensor (InvenSense MPU-9150, Gyro + Accelerometer + Compass)
The MPU-9150 is a 9-axis MotionTracking device. It incorporates InvenSense’s MotionFusion (sensor fusion, generated by combining the output from multiple motion sensors).

The MPU-9150 combines two chips:

  •      MPU-6050, which contains a 3-axis gyroscope, 3-axis accelerometer, and an onboard DMP (Digital Motion Processor, an embedded hardware accelerator used to compute the MotionFusion) capable of processing complex 9-axis MotionFusion algorithms.
  •     AK8975, a 3-axis digital compass.

The part’s integrated 9-axis MotionFusion algorithms access all internal sensors to gather a full set of sensor data.
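The idea behind sensor fusion can be sketched with a simple complementary filter that blends gyroscope and accelerometer data. This is an illustrative stand-in for the concept only; the actual DMP MotionFusion algorithms are proprietary:

```python
import math

# Illustrative sketch of sensor fusion: a complementary filter blending a
# gyroscope's integrated rate (accurate short-term, drifts long-term) with
# an accelerometer's gravity-based tilt (noisy short-term, stable long-term).
def fuse_pitch(pitch_prev: float, gyro_rate: float,
               accel_x: float, accel_z: float,
               dt: float, alpha: float = 0.98) -> float:
    """Estimate pitch (radians) from one gyro + accelerometer sample."""
    gyro_pitch = pitch_prev + gyro_rate * dt    # integrate angular rate
    accel_pitch = math.atan2(accel_x, accel_z)  # tilt from the gravity vector
    return alpha * gyro_pitch + (1 - alpha) * accel_pitch

# With the device at rest (no rotation, gravity along +z), an initially
# wrong estimate decays toward the accelerometer's answer of 0:
pitch = 0.5
for _ in range(200):
    pitch = fuse_pitch(pitch, gyro_rate=0.0, accel_x=0.0, accel_z=9.81, dt=0.01)
```

The magnetometer plays the analogous long-term-reference role for yaw, which gravity alone cannot observe.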


The touchpad is a full custom module made by Synaptics, and is driven by a Synaptics T1320A touchpad controller.



  1. http://blog.glassdiary.com/post/62793449799/google-glass-hardware-sensors
  2. http://prezi.com/x7lwawcvc1or/caracteristicas-tecnicas-de-googles-glass/
  3. http://www.invensense.com/mems/gyro/mpu9150.html
  4. http://www.invensense.com/mems/glossary.html
  5. http://pdf1.alldatasheet.com/datasheet-pdf/view/535562/AKM/AK8975.html
  6. http://www.gsmarena.com/glossary.php3?term=sensors
  7. http://www.catwig.com/google-glass-teardown/


Context-Aware Computing

As we claim in this post, head-mounted displays are probably the most prominent symbol of wearable computers. In their paper, “Context-awareness in wearable and ubiquitous computing”, Abowd, Dey, et al. concluded:

Future computing environments promise to break the paradigm of desktop computing. To do this, computational services need to take advantage of the changing context of the user. The context-awareness we have promoted considers physical, social, informational and even emotional information. Beginning with an applications perspective on future computing environments, we have been able to identify some of the common mechanisms and architectures that best support context-aware computing.

Taking into account the importance of context-aware computing in wearable computers, we will investigate the definitions of context, context-aware computing and some other related concepts.

First of all, let’s review some examples of context-aware applications. In fact, there are plenty of examples; for instance, one can think of:

  • A music player that, when the environment’s sounds are too loud, automatically turns the volume up to a more desirable level. Here the desirable level could be determined based on the user’s ability to hear at certain levels.
  • A text reader that uses the user’s proximity to the device, enlarging the text size when the user moves farther from it.
  • Doors that open when a person is near them.
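The first example above can be sketched in a few lines. The noise margin, the dB-to-volume mapping and the hearing-profile offset are all illustrative assumptions:

```python
# Hypothetical sketch of a context-aware music player: raise the playback
# volume as ambient noise grows, with a per-user hearing offset.
def target_volume(ambient_db: float, hearing_offset_db: float = 0.0) -> int:
    """Map ambient noise (dB SPL) to a player volume step in [0, 100]."""
    # Keep playback a fixed 10 dB margin above ambient noise, adjusted
    # per user; both numbers are illustrative.
    desired = ambient_db + 10.0 + hearing_offset_db
    # Linearly map a 40-100 dB desired level onto the 0-100 volume scale.
    return round(max(0, min(100, (desired - 40.0) * (100.0 / 60.0))))

print(target_volume(45))  # quiet room: modest volume
print(target_volume(85))  # loud street: near-maximum volume
```

The point is that the application, not the user, reads the context (ambient noise) and decides the adaptation (volume).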


There are multiple definitions of context in the field of computing. As Dey and Abowd published, the ones that can best be explained are:

  • Schilit et al. claim that the important aspects of context are: where you are, who you are with, and what resources are nearby. They define context to be the constantly changing execution environment. They include the following pieces of the environment:
    • Computing environment: available processors, devices accessible for user input and display, network capacity, connectivity, and costs of computing.
    • User environment: location, collection of nearby people, and social situation.
    • Physical environment: lighting and noise level.
  • Dey et al. define context to be the user’s physical, social, emotional or information state.
  • Pascoe defines context to be the subset of physical and conceptual states of interest to a particular entity.

Dey and Abowd, taking into account the definition listed above, defined:

Context is any information that can be used to characterize the situation of an entity. An entity is a person, place, or object that is considered relevant to the interaction between a user and an application, including the user and applications themselves.

Their definition makes it easier for an application developer to enumerate the context for a given application scenario. If a piece of information can be used to characterize the situation of a participant in an interaction, then that information is context.

They also defined categories of context: location, identity, time, and activity are the primary context types for characterizing the situation of a particular entity. These context types not only answer the questions of who, what, when, and where, but also act as indices into other sources of contextual information.
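The four primary context types can be sketched as a small record whose fields index into secondary context sources. All names, fields and the lookup table here are illustrative assumptions:

```python
from dataclasses import dataclass
from datetime import datetime

# A minimal sketch of Dey and Abowd's four primary context types.
@dataclass
class Context:
    identity: str   # who
    location: str   # where
    time: datetime  # when
    activity: str   # what

# Primary context acting as an index into secondary context: here, a
# hypothetical preferences table keyed by identity.
PREFERENCES = {"alice": {"text_size": "large"}}

def secondary_context(ctx: Context) -> dict:
    """Look up secondary context using a primary context type as the key."""
    return PREFERENCES.get(ctx.identity, {})

ctx = Context("alice", "office", datetime(2014, 1, 1, 9, 0), "reading")
print(secondary_context(ctx))  # -> {'text_size': 'large'}
```

In the same way, location could index into nearby resources, and time into calendar entries.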

Context awareness and ubiquitous (pervasive) computing

In computer science, context awareness refers to the idea that computers can both sense and react based on their environment. The notion of context-awareness is also closely related to the vision of ubiquitous computing.

The word “ubiquitous” can be defined as “existing or being everywhere at the same time”, “constantly encountered”, and “widespread”. When applying this concept to technology, the term ubiquitous implies that technology is everywhere and we use it all the time. Because of the pervasiveness of these technologies, we tend to use them without thinking about the tool. Instead, we focus on the task at hand, making the technology effectively invisible to the user.

As Mark Weiser introduced, ubiquitous computing names the third wave in computing. First were mainframes, each shared by lots of people. Now we are in the personal computing era, person and machine staring uneasily at each other across the desktop. Next comes ubiquitous computing, or the age of calm technology, when technology recedes into the background of our lives.

He also claimed an important difference from VR:

Ubiquitous computing is roughly the opposite of virtual reality. Where virtual reality puts people inside a computer-generated world, ubiquitous computing forces the computer to live out here in the world with people. Virtual reality is primarily a horsepower problem; ubiquitous computing is a very difficult integration of human factors, computer science, engineering, and social sciences.


Mark Weiser in 1991 said:

The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it.

To realize such ubiquitous computing systems with optimal usability, i.e. transparency of use, context-aware behaviour is seen as the key enabling factor. Computers already pervade our everyday life – in our phones, fridges, TVs, toasters, alarm clocks, watches, etc. – but to fully disappear, as in Weiser’s vision of ubiquitous computing, they have to anticipate the user’s needs in a particular situation and act proactively to provide appropriate assistance. This capability requires means to be aware of their surroundings, i.e. context-awareness.

Context-Aware Computing

One of the first definitions of context-aware computing was the one provided by Schilit and Theimer in 1994: software that “adapts according to its location of use, the collection of nearby people and objects, as well as changes to those objects over time”. Their definition restricted it from applications that are simply informed about context to applications that adapt themselves to context.

Further definitions in the more specific “adapting to context” camp are also distinguished by the method in which applications act upon context. Some hold that the user selects how to adapt the application based on his interests or activities; others hold that the system or application should automatically adapt its behaviour based on the context.

Dey and Abowd, taking into account multiple definitions of context-aware computing, defined it as: “A system is context-aware if it uses context to provide relevant information and/or services to the user, where relevancy depends on the user’s task.”

In fact, they chose a more general definition of context-aware computing; the main reason was that it did not exclude existing context-aware applications.


Categorization of Features for Context-Aware Applications

In a further attempt to help define the field of context-aware applications, Dey and Abowd presented a categorization for features of context-aware applications. Previously there were two remarkable attempts to develop such a taxonomy, the first proposed by Schilit et al. and the other proposed by Pascoe.

The Schilit proposal had two orthogonal dimensions:

  • Whether the task is to get information or to execute a command
  • Whether the task is executed manually or automatically

Based on these dimensions, four instances can be defined:


  1. Proximate selection: Applications that retrieve information for the user manually based on available context. It is an interaction technique where a list of objects or places is presented, where items relevant to the user’s context are emphasized or made easier to choose.
  2. Automatic contextual reconfiguration: Applications that retrieve information for the user automatically based on available context. It is a system-level technique that creates an automatic binding to an available resource based on current context.
  3. Contextual command: Applications that execute commands for the user manually based on available context. They are executable services made available due to the user’s context or whose execution is modified based on the user’s context.
  4. Context-triggered actions: applications that execute commands for the user automatically based on available context. They are services that are executed automatically when the right combination of context exists, and are based on simple if-then rules.
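The if-then character of context-triggered actions can be sketched as a small rule table; the rules, conditions and action names are illustrative assumptions:

```python
# Minimal sketch of context-triggered actions in the spirit of Schilit's
# taxonomy: each rule pairs a context predicate with an action name, and
# actions fire automatically when their conditions hold.
RULES = [
    # (condition over the current context, action to trigger)
    (lambda c: c.get("location") == "meeting_room", "mute_notifications"),
    (lambda c: c.get("ambient_lux", 0) < 10, "dim_display"),
]

def triggered_actions(context: dict) -> list[str]:
    """Return the actions whose conditions hold in the current context."""
    return [action for cond, action in RULES if cond(context)]

print(triggered_actions({"location": "meeting_room", "ambient_lux": 5}))
# -> ['mute_notifications', 'dim_display']
```

Contextual commands differ only in who pulls the trigger: the same rule conditions would filter a menu of services for the user to invoke manually.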


Pascoe proposed a taxonomy of context-aware features. There is considerable overlap between the two taxonomies but some crucial differences as well. Pascoe developed a taxonomy aimed at identifying the core features of context-awareness, as opposed to the previous taxonomy, which identified classes of context-aware applications. In reality, the following features of context-awareness map well to the classes of applications in the Schilit taxonomy.

The features are:

  1. Contextual sensing: Ability to detect contextual information and present it to the user, augmenting the user’s sensory system. This is similar to proximate selection, except in this case, the user does not necessarily need to select one of the context items for more information.
  2. Contextual adaptation: Ability to execute or modify a service automatically based on the current context. This maps directly to Schilit’s context-triggered actions.
  3. Contextual resource discovery: Allows context-aware applications to locate and exploit resources and services that are relevant to the user’s context. This maps directly to automatic contextual reconfiguration.
  4. Contextual augmentation: Ability to associate digital data with the user’s context. A user can view the data when he is in that associated context.


Based on the previous categorizations, Dey and Abowd presented their definition that combines the ideas from the two taxonomies and takes into account three major differences. They defined three categories:

  1. Presentation of information and services to a user
  2. Automatic execution of a service
  3. Tagging of context to information for later retrieval

They also introduced two important distinguishing characteristics: the decision not to differentiate between information and services, and the removal of the exploitation of local resources as a feature.


In further posts we will show some of the work already done to design and develop context-aware applications, and how sensors are the key enablers of context-aware applications.



  1. http://www.interaction-design.org/encyclopedia/context-aware_computing.html
  2. http://www.ubiq.com/hypertext/weiser/UbiHome.html
  3. http://www.nrsp.lancs.ac.uk/kmthesis.pdf
  4. https://smartech.gatech.edu/
  5. https://smartech.gatech.edu/bitstream/handle/1853/3389/99-22.pdf
  6. https://smartech.gatech.edu/bitstream/handle/1853/3531/97-11.pdf

Humanistic Intelligence (HI)

Wearable computers are perfect to embody HI: a first step toward an intelligent wearable signal processing system that can facilitate new forms of communication through collective connected HI.

Humanistic Intelligence is a signal processing framework in which:

  1. Intelligence arises by having the human being in the feedback loop of the computational process
  2. The processing apparatus is inextricably intertwined with the natural capabilities of the human mind and body.

Rather than trying to emulate human intelligence, HI recognizes that the human brain is perhaps the best neural network of its kind.

Another feature of HI is the ability to multi-task. It is not necessary for a person to stop what they are doing to use a wearable computer because it is always running in the background, so as to augment or mediate the human’s interactions. Wearable computers can be incorporated by the user to act like a prosthetic, thus forming a true extension of the user’s mind and body.

There are three fundamental operational modes of an embodiment of HI: Constancy, Augmentation, and Mediation.

Firstly, there is a constancy of user interface, which implies an “always ready” interactional constancy, supplied by a continuously running operational constancy. Wearable computers are unique in their ability to provide this “always ready” condition, which might, for example, include a retroactive video capture for a face recognition reminder system. After-the-fact devices like traditional cameras and palmtop organizers cannot provide this retrospective computing capability.

Secondly, there is an augmentational aspect in which computing is NOT the primary task. Again, wearable computers are unique in their ability to be augmentational without distracting from a primary task like navigating through a corridor or trying to walk down stairs.

Thirdly, there is a mediational aspect in which the computational host can protect the human host from information overload through deliberately diminished reality, such as by visually filtering out advertising signage and billboards.




HMD – History and objectives of inventions

Long before Google Glass came on the scene to testers in April 2013, head-mounted displays had appeared, in the form of concepts, during the middle of the twentieth century. Since then, there has been much research, along with investigations, tests, prototypes and usable products, that has made head-mounted display history.

Head-mounted display history dates back to 1945, when McCollum patented the first stereoscopic television apparatus. The objects of his invention were:

  1. Provide a new and improved stereoscopic television apparatus whereby a plurality of people can simultaneously and with equal facility view an object which has been transmitted by stereoscopic television.
  2. Provide a new and improved stereoscopic television apparatus which is simpler, cheaper, more efficient and more convenient than those heretofore known.
  3. Provide a new and improved stereoscopic television apparatus wherein the image creating mechanism is mounted in a spectacle frame.

McCollum patent figures


Heilig also patented a stereoscopic television HMD for individual use in 1960. His invention was directed to improvements in stereoscopic television apparatus for individual use. It comprises the following elements: a hollow casing, a pair of optical units, a pair of television tube units, a pair of ear phones and a pair of air discharge nozzles. The image below shows an example of his invention:

Heilig stereoscopic television diagram


The objects of his invention were:

  1. Provide easily adjustable and comfortable means for causing the apparatus containing the optical units, to be held in proper position, on the head of the user so that the apparatus does not sag, and so that its weight is evenly distributed over the bone structure of the front and back of the head, without the necessity of holding the apparatus up by hand.
  2. Provide means whereby the optical and television tube units may be individually adjusted to bring said units into their proper positions with respect to the eyes of the user and with respect to each other.
  3. Provide ear phones which are so designed that the outer ear is completely free and untouched, thus allowing the ear phones to operate fully as sound focusing organs.
  4. Provide means for independently adjusting the pair of ear phones to bring them into proper position with respect to the ears of the user.
  5. Provide means for conveying to the head of the spectator, air currents of varying velocities, temperatures and odors.
  6. Provide the optical units with a special lens arrangement which will bend the peripheral rays coming from the television tube so that they enter the eyes of the user from the sides therefor, creating the sensation of peripheral vision filling an arc of more than 140° horizontally and vertically.

Two years later, Heilig, developed and patented a stationary virtual reality (VR) simulator, the Sensorama Simulator, which was equipped with a variety of sensory devices including handlebars, a binocular display, vibrating seat, stereophonic speakers, cold air blower, and a device close to the nose that would generate odors that fit the action in the film, to give the user virtual experiences. The main objectives of his invention were:

  1. Provide an apparatus to simulate a desired experience by developing sensations in a plurality of the senses.
  2. Provide a new and improved apparatus to develop realism in a simulated situation.
  3. Provide an apparatus for simulating an actual, predetermined experience in the senses of an individual.

Sensorama Simulator


In 1961, Philco Corporation designed a helmet that used head movements to gain access into artificial environments enhanced with a tracking system. The invention was called Headsight and was the first actual HMD invention.

The main objective was to be used with a remote controlled closed circuit video system for remotely viewing dangerous situations. In fact, their system used a head mounted display to monitor conditions in another room, using magnetic tracking to monitor the user’s head movements.

Philco Headsight


In the 1960s, Bell Helicopter Company built several early camera-based augmented-reality systems. In one, the head-mounted display was coupled with an infrared camera that would give military helicopter pilots the ability to land at night in rough terrain. An infrared camera, which moved as the pilot’s head moved, was mounted on the bottom of a helicopter. The pilot’s field of view was that of the camera.


Ivan Sutherland, a hall-of-fame computer scientist, invented the first true computer-mediated virtual reality system, called ‘Sword of Damocles’. It was the first BOOM display – Binocular Omni Orientation Monitor. Essentially, the system was a complete computer and display system for displaying a single wireframe cube in stereoscopy to the viewer’s eyes. Unfortunately, at the time, such apparatus was too bulky to head mount. Instead, it was bolted into the ceiling and reached down via a long, height-adjustable pole to where a user’s head could be strapped to it. The display, whilst primitive by today’s standards, tracked the position of both eyes, allowed the user to swivel it around the Z axis 360 degrees, and tracked its orientation and the head position of the user. In addition, the system was not fully immersive, allowing the user to see the room beyond via transparent elements of the hardware. Thus, it is also considered to be the first augmented reality display.

The Sword of Damocles was the precursor to all the digital eyewear and virtual reality applications.

The objective of his invention was to surround the user with displayed three-dimensional information which changes as he moves.


Sutherland – Sword of Damocles


Steve Mann, born in 1962 in Ontario, Canada, is a living laboratory for the cyborg life-style. He is one of the leaders in WearComp (wearable computing) and one of the integral members of the Wearable computing group at MIT Media Lab. He believes computers should be designed to function in ways organic to human needs rather than requiring humans to adapt to demanding requirements of technology. Mann has developed computer systems — both wearable and embedded — to augment biological systems and capabilities during all waking hours. His work touches a wide range of disciplines from implant technology to sousveillance (inverse surveillance), privacy, cyber security and cyborg-law.

In 1981, Steve Mann created the first version of the EyeTap. While still in high-school he wired a 6502 computer (as used in the Apple-II) into a steel-frame backpack to control flash-bulbs, cameras, and other photographic systems. The display was a camera viewfinder CRT attached to a helmet, giving 40 column text. Input was from seven microswitches built into the handle of a flash-lamp, and the entire system (including flash-lamps) was powered by lead-acid batteries.

The objective was to act as a camera to record the scene available to the eye, as well as a display to superimpose computer-generated imagery on the original scene available to the eye.

Steve Mann – First EyeTap


In 1989, Reflection Technology sold the Private Eye head-mounted display, which scanned a vertical array of LEDs across the visual field using a vibrating mirror. The monochrome screen is 1.25 inches on the diagonal, but images appear like a 15-inch display at 18 inches’ distance.

Private Eye


Steve Mann appeared again in 1994 with the Wearable Wireless Webcam. The webcam transmitted images point-to-point from a head-mounted analog camera to an SGI base station via amateur TV frequencies. The images were processed by the base station and displayed on a webpage in near real-time. (The system was later extended to transmit processed video back from the base station to a heads-up display, and was used in augmented reality experiments performed with Thad Starner.) It was the first example of lifelogging.

Over the last 20 years, the development of HMDs has exploded, with companies creating many products with different technologies, types, structures and uses. Input devices that lend themselves to mobility and/or hands-free use are good candidates, for example:

  • Touchpad or buttons
  • Compatible devices (e.g. smartphones or control unit)
  • Speech recognition
  • Gesture recognition
  • Eye tracking
  • Brain–computer interface

The main examples are:

  • 1998: Digital Eye Glass EyeTap Augmediated Reality Goggles


  • 2000: MicroOptical’s TASK-9: MicroOptical was founded in 1995 by Mark Spitzer, who is now a director at the Google X lab. The company ceased operations in 2010, but its patents have been acquired by Google.


  • 2005: MicroVision’s Nomad Digital Display: MicroVision is now working with automotive suppliers to develop laser heads-up displays to provide information to drivers in their field of vision.


  • 2008: MyVu Personal Media Viewer, Crystal Edition: MyVu’s Personal Media Viewers hooked up to external video sources, such as an iPod, to provide the illusion of watching the content on a large screen from several feet away.


  • 2009: Vuzix Wrap SeeThru Proto: Wrap SeeThru was developed by Vuzix in 2009. The publicly traded company has been developing video eyewear for 15 years and has dozens of patents on the technology.


  • 2013: Meta’s eyewear enters 3D space and uses your hands to interact with the virtual world. The Meta system includes stereoscopic 3D glasses and a 3D camera to track hand movements, similar to the portrayals of gestural control in movies like “Iron Man” and “Avatar.” Meta expects to have more fashionable glasses in 2014.


  • 2013: Google Glass: Google’s project program for developing a line of hands-free, head-mounted intelligent devices that can be worn by users as “wearable computing” eyewear.




  1. http://www.media.mit.edu/wearables/lizzy/timeline.html
  2. http://www.irma-international.org/viewtitle/10158/
  3. https://docs.google.com/viewer?url=patentimages.storage.googleapis.com/pdfs/US2388170.pdf
  4. https://docs.google.com/viewer?url=patentimages.storage.googleapis.com/pdfs/US2955156.pdf
  5. http://www.mortonheilig.com/SensoramaPatent.pdf