Total Recall

Technology’s ultimate purpose is to improve people’s quality of life. One aspect of improving quality of life is to provide or enhance abilities that are missing, diminishing, or otherwise in need of improvement. Memory is one such ability.

The idea of Total Recall is to be able to remember when an event happened, where it happened, who was there, why it happened, and how we felt. Total Recall aims to amass memories, experiences, and ultimately knowledge from an individual perspective and for a multitude of individuals.

It starts with the use of personal sensors, like a microphone in a pair of glasses or a camera in a necklace; it would include other sensors as well, all of which would record an individual perspective of the world. (This recording is intended to be continuous and under user control.)

But Total Recall is not simply an individual memory enhancer. It could have many other applications, for example in health care, education, and support for the elderly and people with disabilities:

  • Placing a microphone array on a hearing-impaired person’s glasses can allow collection of audio that gets converted to text and displayed on a PDA in near real time.
  • Being able to recall a patient’s food intake and recent environments can help in discovering allergies.
  • Monitoring food intake of diabetics can provide automatic warning signals when appropriate.
  • Being able to review a patient’s state before and after a serious health problem, like a heart attack, can help doctors arrive at a more accurate diagnosis in an emergency situation.

Some people’s first reaction, when they hear about a system that records everything, at every moment, and everywhere you go, is fear. After all, who knows who else might get their hands on this information? But the reality is that this is already starting to happen around us. For instance, there are cameras (webcams) everywhere—on traffic lights, on highways, in buildings. We expect that a world that is constantly recording will come sooner or later.

There are many benefits to such technology, as well as drawbacks, and keeping them in balance requires both technical and legal/social solutions. From the technological point of view, we need to design and build systems that provide proper security, privacy, and integrity mechanisms. Such mechanisms should enable a wide variety of policies so that legal/social policy development is not hampered by a paucity of technical alternatives. Without technical flexibility, the inevitable development of technology may result in poor policy by default.

There are always scary uses of technology, but we believe this technology can result in much good, if done right. We have enhanced our eyesight with glasses and our timekeeping ability with watches, so why not enhance our memories as well?

Leap Motion +1

It has been a while since the last post; in the meantime, we have gotten hold of a Leap Motion. This amazing device uses hand and finger motions as input, analogous to a mouse, but requires no hand contact or touching.

After a few hours of investigating how to install the Leap Motion on Fedora 20, our new development environment, we found this GitHub repository. In it, the author provides scripts to create the RPM package, install it, and finally create the leapd service.

Here is a screenshot of the Visualizer app taken after the installation:

 

[Screenshot: Leap Motion Visualizer, 2014-06-02]

 

 

In the following posts we will follow this example, which uses the LeapJS SDK to transfer data collected by the Leap Motion to an Android device through a node.js server.
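To get a feel for what the receiving (Android) side of such a pipeline might look like, here is a minimal sketch of a client that consumes Leap frames. It assumes the node.js relay re-broadcasts frames as newline-delimited JSON over plain TCP on port 9000; the host, port, and message format are illustrative assumptions, not the linked example’s actual protocol.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

/**
 * Minimal sketch of a client that receives Leap Motion frames relayed
 * by a node.js server. Host, port, and the newline-delimited JSON
 * format are assumptions made for illustration only.
 */
public class LeapFrameClient {

    public static void main(String[] args) throws IOException {
        String host = args.length > 0 ? args[0] : "192.168.1.10"; // node.js relay address (assumed)
        int port = 9000;                                          // relay port (assumed)

        try (Socket socket = new Socket(host, port);
             BufferedReader reader = new BufferedReader(
                     new InputStreamReader(socket.getInputStream(), StandardCharsets.UTF_8))) {

            String line;
            while ((line = reader.readLine()) != null) {
                // Each line is assumed to be one JSON-encoded Leap frame;
                // a real app would parse it and drive the UI from it.
                System.out.println("Leap frame: " + line);
            }
        }
    }
}
```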

We will also try to emulate the impressive presentation posted on the Leap Motion blog.

Cheers!

 

References

  1. https://www.leapmotion.com
  2. https://developer.leapmotion.com/
  3. http://marctan.com/blog/2013/05/26/leap-motion-and-android-a-match-made-in-heaven
  4. https://www.leapmotion.com/blog/create-the-ultimate-interactive-presentation-reveal-js-leap-motion-google-glass-and-sendgrid/
  5. https://github.com/leapmotion/leapjs
  6. https://developer.leapmotion.com/leapjs/getting-started
  7. https://developer.leapmotion.com/documentation/javascript/api/Leap_Classes.html

 

Hardware Sensors

In this first instance we list the Google Glass hardware sensors; we’ll go deeper into some of their capabilities after we get a bigger picture of the rest of the Google Glass hardware.

  • Proximity/ambient light sensor
  • 3-axis gyroscope / orientation sensor
  • 3-axis accelerometer
  • 3-axis digital compass
  • Touchpad

Proximity/ambient light sensor (LiteON LTR-506ALS): It is used to detect objects close to the glasses. The sensor measures ambient light to adjust the brightness level of the screen and could also be used to detect whether the user’s hand is in front of the glass and activate, for example, gestural commands.
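As a concrete illustration, here is a minimal sketch of reading the ambient light channel through the standard Android sensor framework. Whether Glass exposes the LTR-506ALS proximity channel as a standard Sensor.TYPE_PROXIMITY is an assumption we have not verified, so only the light channel is shown.

```java
import android.content.Context;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import android.util.Log;

/**
 * Minimal sketch: read ambient light via the standard Android sensor API.
 */
public class LightSensorReader implements SensorEventListener {

    private static final String TAG = "LightSensorReader";

    private final SensorManager sensorManager;
    private final Sensor lightSensor;

    public LightSensorReader(Context context) {
        sensorManager = (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
        lightSensor = sensorManager.getDefaultSensor(Sensor.TYPE_LIGHT);
    }

    public void start() {
        sensorManager.registerListener(this, lightSensor, SensorManager.SENSOR_DELAY_NORMAL);
    }

    public void stop() {
        sensorManager.unregisterListener(this);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        // values[0] is the ambient light level in lux.
        Log.d(TAG, "Ambient light: " + event.values[0] + " lx");
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) {
        // Not needed for this sketch.
    }
}
```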

Inertial sensor (InvenSense MPU-9150, Gyro + Accelerometer + Compass)
The MPU-9150 is a 9-axis MotionTracking device. It incorporates InvenSense’s MotionFusion (sensor fusion, generated by combining the output from multiple motion sensors).

The MPU-9150 combines two chips:

  • MPU-6050, which contains a 3-axis gyroscope, a 3-axis accelerometer, and an onboard DMP (Digital Motion Processor, an embedded hardware accelerator used to compute MotionFusion) capable of processing complex 9-axis MotionFusion algorithms.
  • AK8975, a 3-axis digital compass.

The part’s integrated 9-axis MotionFusion algorithms access all internal sensors to gather a full set of sensor data.
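On Android, applications typically consume this fused output through the rotation vector sensor rather than reading the chips directly. The sketch below is a generic Android example, not Glass-specific code: it derives azimuth/pitch/roll from the fused gyro/accelerometer/compass data.

```java
import android.content.Context;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

/**
 * Minimal sketch: consume the fused 9-axis output via TYPE_ROTATION_VECTOR
 * and convert it to azimuth/pitch/roll angles.
 */
public class OrientationReader implements SensorEventListener {

    private final SensorManager sensorManager;
    private final Sensor rotationSensor;
    private final float[] rotationMatrix = new float[9];
    private final float[] orientation = new float[3];   // azimuth, pitch, roll (radians)

    public OrientationReader(Context context) {
        sensorManager = (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
        rotationSensor = sensorManager.getDefaultSensor(Sensor.TYPE_ROTATION_VECTOR);
    }

    public void start() {
        sensorManager.registerListener(this, rotationSensor, SensorManager.SENSOR_DELAY_UI);
    }

    public void stop() {
        sensorManager.unregisterListener(this);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        // Convert the fused rotation vector into a rotation matrix,
        // then into azimuth/pitch/roll angles.
        SensorManager.getRotationMatrixFromVector(rotationMatrix, event.values);
        SensorManager.getOrientation(rotationMatrix, orientation);

        float azimuthDeg = (float) Math.toDegrees(orientation[0]);
        // azimuthDeg is what a compass-style live card would display.
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) {
        // Ignored in this sketch.
    }
}
```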


Touchpad
The touchpad is a full custom module made by Synaptics, and is driven by a Synaptics T1320A touchpad controller.
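A minimal sketch of reacting to that touchpad from an activity is shown below, assuming the GDK touchpad GestureDetector API; on Glass, touchpad events reach the activity through onGenericMotionEvent().

```java
import android.app.Activity;
import android.os.Bundle;
import android.view.MotionEvent;

import com.google.android.glass.touchpad.Gesture;
import com.google.android.glass.touchpad.GestureDetector;

/**
 * Minimal sketch: react to touchpad taps and swipes using the GDK
 * touchpad GestureDetector (assumed API).
 */
public class TouchpadActivity extends Activity {

    private GestureDetector gestureDetector;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        gestureDetector = new GestureDetector(this).setBaseListener(new GestureDetector.BaseListener() {
            @Override
            public boolean onGesture(Gesture gesture) {
                if (gesture == Gesture.TAP) {
                    // e.g. select the current item
                    return true;
                } else if (gesture == Gesture.SWIPE_LEFT || gesture == Gesture.SWIPE_RIGHT) {
                    // e.g. navigate between cards
                    return true;
                }
                return false;
            }
        });
    }

    @Override
    public boolean onGenericMotionEvent(MotionEvent event) {
        // Forward raw touchpad events to the gesture detector.
        return gestureDetector.onMotionEvent(event);
    }
}
```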

 

References

  1. http://blog.glassdiary.com/post/62793449799/google-glass-hardware-sensors
  2. http://prezi.com/x7lwawcvc1or/caracteristicas-tecnicas-de-googles-glass/
  3. http://www.invensense.com/mems/gyro/mpu9150.html
  4. http://www.invensense.com/mems/glossary.html
  5. http://pdf1.alldatasheet.com/datasheet-pdf/view/535562/AKM/AK8975.html
  6. http://www.gsmarena.com/glossary.php3?term=sensors
  7. http://www.catwig.com/google-glass-teardown/

 

Context-Aware Computing

As we claimed in this post, head-mounted displays are probably the most prominent symbol of wearable computers. In their paper, “Context-awareness in wearable and ubiquitous computing”, Abowd, Dey, et al. concluded:

Future computing environments promise to break the paradigm of desktop computing. To do this, computational services need to take advantage of the changing context of the user. The context-awareness we have promoted considers physical, social, informational and even emotional information. Beginning with an applications perspective on future computing environments, we have been able to identify some of the common mechanisms and architectures that best support context-aware computing.

Taking into account the importance of context-aware computing for wearable computers, we will investigate the definitions of context, context-aware computing, and some other related concepts.

First of all, let’s review some examples of context-aware applications. In fact, there are plenty of examples; for instance, one can think of:

  • A music player that automatically raises the volume to a more suitable level when the surrounding environment is too loud. The target level could be determined from the user’s ability to hear at different volumes (a minimal sketch of this idea follows the list).
  • A text reader that uses the user’s proximity to the device to enlarge the text size as the user moves farther away from it.
  • Doors that open when a person approaches them.
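Here is a minimal sketch of the first idea, assuming some other component supplies an ambient noise estimate in decibels (for example from AudioRecord); the sampling code is omitted and the thresholds are purely illustrative.

```java
import android.content.Context;
import android.media.AudioManager;

/**
 * Minimal sketch of an "ambient-aware music player". Assumes an external
 * component measures ambient noise; only the adaptation rule is shown.
 */
public class AmbientVolumeAdjuster {

    private final AudioManager audioManager;

    public AmbientVolumeAdjuster(Context context) {
        audioManager = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
    }

    /** Map ambient loudness to a playback volume; thresholds are illustrative only. */
    public void onAmbientNoiseMeasured(double ambientDb) {
        int max = audioManager.getStreamMaxVolume(AudioManager.STREAM_MUSIC);

        // Simple piecewise rule: quieter room -> lower volume, louder room -> higher volume.
        double fraction;
        if (ambientDb < 40) {          // quiet room
            fraction = 0.3;
        } else if (ambientDb < 70) {   // normal conversation level
            fraction = 0.5;
        } else {                       // loud environment
            fraction = 0.8;
        }

        int target = (int) Math.round(fraction * max);
        audioManager.setStreamVolume(AudioManager.STREAM_MUSIC, target, 0);
    }
}
```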

Context

There are multiple definitions of context in the field of computing. As Dey and Abowd discussed, the ones that best illustrate the term are:

  • Schilit et al. claim that the important aspects of context are: where you are, who you are with, and what resources are nearby. They define context to be the constantly changing execution environment. They include the following pieces of the environment:
    • Computing environment: available processors, devices accessible for user input and display, network capacity, connectivity, and costs of computing.
    • User environment: location, collection of nearby people, and social situation.
    • Physical environment: lighting and noise level.
  • Dey et al. define context to be the user’s physical, social, emotional or information state.
  • Pascoe defines context to be the subset of physical and conceptual states of interest to a particular entity.

Dey and Abowd, taking into account the definitions listed above, proposed:

Context is any information that can be used to characterize the situation of an entity. An entity is a person, place, or object that is considered relevant to the interaction between a user and an application, including the user and applications themselves.

Their definition makes it easier for an application developer to enumerate the context for a given application scenario. If a piece of information can be used to characterize the situation of a participant in an interaction, then that information is context.

They also defined categories of context: location, identity, time, and activity are the primary context types for characterizing the situation of a particular entity. These context types not only answer the questions of who, what, when, and where, but also act as indices into other sources of contextual information.

Context awareness and ubiquitous (pervasive) computing

In computer science, context awareness refers to the idea that computers can both sense and react based on their environment. The notion of context awareness is also closely related to the vision of ubiquitous computing.

The word “ubiquitous” can be defined as “existing or being everywhere at the same time”, “constantly encountered”, and “widespread”. When applying this concept to technology, the term ubiquitous implies that technology is everywhere and we use it all the time. Because of the pervasiveness of these technologies, we tend to use them without thinking about the tool. Instead, we focus on the task at hand, making the technology effectively invisible to the user.

As Mark Weiser put it, ubiquitous computing names the third wave in computing. First were mainframes, each shared by lots of people. Now we are in the personal computing era, person and machine staring uneasily at each other across the desktop. Next comes ubiquitous computing, or the age of calm technology, when technology recedes into the background of our lives.

He also claimed an important difference from VR:

Ubiquitous computing is roughly the opposite of virtual reality. Where virtual reality puts people inside a computer-generated world, ubiquitous computing forces the computer to live out here in the world with people. Virtual reality is primarily a horse power problem; ubiquitous computing is a very difficult integration of human factors, computer science, engineering, and social sciences.

 

Mark Weiser in 1991 said:

The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it.

To realize such ubiquitous computing systems with optimal usability, i.e. transparency of use, context-aware behaviour is seen as the key enabling factor. Computers already pervade our everyday life – in our phones, fridges, TVs, toasters, alarm clocks, watches, etc. – but to fully disappear, as in Weiser’s vision of ubiquitous computing, they have to anticipate the user’s needs in a particular situation and act proactively to provide appropriate assistance. This capability requires a means of being aware of their surroundings, i.e. context awareness.

Context-Aware Computing

One of the first definitions of context-aware computing was provided by Schilit and Theimer in 1994: software that “adapts according to its location of use, the collection of nearby people and objects, as well as changes to those objects over time”. Their definition restricted the term from applications that are simply informed about context to applications that adapt themselves to context.

Later definitions in this more specific “adapting to context” sense also distinguish applications by the method in which they act upon context. Some hold that the user selects how to adapt the application based on their interests or activities; others hold that the system or application should automatically adapt its behaviour based on the context.

Dey and Abowd, taking into account multiple definitions of context-aware computing, defined it as: “A system is context-aware if it uses context to provide relevant information and/or services to the user, where relevancy depends on the user’s task.”

In fact, they chose a more general definition of context-aware computing; the main reason was that it does not exclude existing context-aware applications.

 

Categorization of Features for Context-Aware Applications

In a further attempt to help define the field of context-aware applications, Dey and Abowd presented a categorization of features of context-aware applications. Previously there had been two notable attempts to develop such a taxonomy, the first proposed by Schilit et al. and the other by Pascoe.

Schilit’s proposal had two orthogonal dimensions:

  • Whether the task is to get information or to execute a command
  • Whether the task is executed manually or automatically

Based on these dimensions, four instances can be defined:

                    manual                    automatic
  information       proximate selection       automatic contextual reconfiguration
  command           contextual commands       context-triggered actions

  1. Proximate selection: Applications that retrieve information for the user manually based on available context. It is an interaction technique where a list of objects or places is presented, where items relevant to the user’s context are emphasized or made easier to choose.
  2. Automatic contextual reconfiguration: Applications that retrieve information for the user automatically based on available context. It is a system-level technique that creates an automatic binding to an available resource based on current context.
  3. Contextual command: Applications that execute commands for the user manually based on available context. They are executable services made available due to the user’s context or whose execution is modified based on the user’s context.
  4. Context-triggered actions: Applications that execute commands for the user automatically based on available context. They are services that are executed automatically when the right combination of context exists, and are based on simple if-then rules (a minimal sketch of such rules follows this list).
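To make the if-then idea concrete, here is a minimal sketch of context-triggered actions as a small rule engine. The Context class and the example rule are illustrative assumptions, not taken from Schilit’s paper.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Predicate;

/**
 * Minimal sketch of context-triggered actions as simple if-then rules.
 * The Context class and the example condition/action are illustrative.
 */
public class ContextRuleEngine {

    /** A trivial context snapshot: location and nearby people. */
    public static class Context {
        public final String location;
        public final List<String> nearbyPeople;

        public Context(String location, List<String> nearbyPeople) {
            this.location = location;
            this.nearbyPeople = nearbyPeople;
        }
    }

    /** An if-then rule: when the condition holds, run the action. */
    private static class Rule {
        final Predicate<Context> condition;
        final Consumer<Context> action;

        Rule(Predicate<Context> condition, Consumer<Context> action) {
            this.condition = condition;
            this.action = action;
        }
    }

    private final List<Rule> rules = new ArrayList<>();

    public void addRule(Predicate<Context> condition, Consumer<Context> action) {
        rules.add(new Rule(condition, action));
    }

    /** Called whenever a new context reading arrives (e.g. from sensors). */
    public void onContextChanged(Context context) {
        for (Rule rule : rules) {
            if (rule.condition.test(context)) {
                rule.action.accept(context);
            }
        }
    }

    public static void main(String[] args) {
        ContextRuleEngine engine = new ContextRuleEngine();

        // "If I enter the meeting room, silence the phone."
        engine.addRule(
                ctx -> "meeting-room".equals(ctx.location),
                ctx -> System.out.println("Action: set phone to silent"));

        engine.onContextChanged(new Context("meeting-room", Arrays.asList("alice", "bob")));
    }
}
```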

 

Pascoe proposed a taxonomy of context-aware features. There is considerable overlap between the two taxonomies but some crucial differences as well. Pascoe developed a taxonomy aimed at identifying the core features of context-awareness, as opposed to the previous taxonomy, which identified classes of context-aware applications. In reality, the following features of context-awareness map well to the classes of applications in the Schilit taxonomy.

The features are:

  1. Contextual sensing: Ability to detect contextual information and present it to the user, augmenting the user’s sensory system. This is similar to proximate selection, except in this case, the user does not necessarily need to select one of the context items for more information.
  2. Contextual adaptation: Ability to execute or modify a service automatically based on the current context. This maps directly to Schilit’s context-triggered actions.
  3. Contextual resource discovery: Allows context-aware applications to locate and exploit resources and services that are relevant to the user’s context. This maps directly to automatic contextual reconfiguration.
  4. Contextual augmentation: Ability to associate digital data with the user’s context. A user can view the data when he is in that associated context.

 

Based on the previous categorizations, Dey and Abowd presented their own categorization, which combines the ideas from the two taxonomies and takes into account three major differences. They defined three categories:

  1. Presentation of information and services to a user
  2. Automatic execution of a service
  3. Tagging of context to information for later retrieval

They also introduced two important distinguishing characteristics: the decision not to differentiate between information and services, and the removal of the exploitation of local resources as a feature.

 

In further posts we will show some of the work already done to design and develop context-aware applications, and how sensors are the key enablers of context-aware applications.

 

References

  1. http://www.interaction-design.org/encyclopedia/context-aware_computing.html
  2. http://www.ubiq.com/hypertext/weiser/UbiHome.html
  3. http://www.nrsp.lancs.ac.uk/kmthesis.pdf
  4. https://smartech.gatech.edu/
  5. https://smartech.gatech.edu/bitstream/handle/1853/3389/99-22.pdf
  6. https://smartech.gatech.edu/bitstream/handle/1853/3531/97-11.pdf

Humanistic Intelligence (HI)

Wearable computers are ideal for embodying HI and for taking a first step toward an intelligent wearable signal-processing system that can facilitate new forms of communication through collective, connected HI.

Humanistic Intelligence is a signal processing framework in which:

  1. Intelligence arises by having the human being in the feedback loop of the computational process.
  2. The processing apparatus is inextricably intertwined with the natural capabilities of the human mind and body.

Rather than trying to emulate human intelligence, HI recognizes that the human brain is perhaps the best neural network of its kind.

Another feature of HI is the ability to multi-task. It is not necessary for a person to stop what they are doing to use a wearable computer because it is always running in the background, so as to augment or mediate the human’s interactions. Wearable computers can be incorporated by the user to act like a prosthetic, thus forming a true extension of the user’s mind and body.

There are three fundamental operational modes of an embodiment of HI: Constancy, Augmentation, and Mediation.

Firstly, there is constancy of user interface, which implies an “always ready” interactional constancy, supplied by a continuously running operational constancy. Wearable computers are unique in their ability to provide this “always ready” condition, which might, for example, include a retroactive video capture for a face-recognition reminder system. After-the-fact devices like traditional cameras and palmtop organizers cannot provide this retrospective computing capability.

Secondly, there is an augmentational aspect in which computing is NOT the primary task. Again, wearable computers are unique in their ability to be augmentational without distracting from a primary task like navigating through a corridor or trying to walk down stairs.

Thirdly, there is a mediational aspect in which the computational host can protect the human host from information overload by deliberately diminishing reality, for example by visually filtering out advertising signage and billboards.

 

References

  1. http://n1nlf-1.eecg.toronto.edu/ieeeis_intro.pdf

HMD – History and objectives of inventions

Long before Google Glass reached testers in April 2013, head-mounted displays had appeared, in the form of concepts, in the middle of the twentieth century. Since then, much research, investigation, testing, prototyping, and many usable products have made head-mounted display history.

Head-mounted display history dates back to 1945, when McCollum patented the first stereoscopic television apparatus. The objects of his invention were to:

  1. Provide a new and improved stereoscopic television apparatus whereby a plurality of people can simultaneously and with equal facility view an object which has been transmitted  by stereoscopic television.
  2. Provide a new and improved stereoscopic television apparatus which is simpler, cheaper, more efficient and more convenient than those heretofore known.
  3. Provide a new and improved stereoscopic television apparatus wherein the image creating mechanism is mounted in a spectacle frame.

McCollum patent figures

 

Heilig also patented a stereoscopic television HMD for individual use, in 1960. His invention was directed to improvements in stereoscopic television apparatus and comprises the following elements: a hollow casing, a pair of optical units, a pair of television tube units, a pair of earphones, and a pair of air-discharge nozzles. The image below shows an example of his invention:


Heilig stereoscopic television diagram

 

The objects of his invention were:

  1. Provide easily adjustable and comfortable means for causing the apparatus containing the optical units, to be held in proper position, on the head of the user so that the apparatus does not sag, and so that its weight is evenly distributed over the bone structure of the front and back of the head, without the necessity of holding the apparatus up by hand.
  2. Provide means whereby the optical and television tube units may be individually adjusted to bring said units into their proper positions with respect to the eyes of the user and with respect to each other.
  3. Provide ear phones which are so designed that the outer ear is completely free and untouched, thus allowing the ear phones to operate fully as sound focusing organs.
  4. Provide means for independently adjusting the pair of ear phones to bring them into proper position with respect to the ears of the user.
  5. Provide means for conveying to the head of the spectator, air currents of varying velocities, temperatures and odors.
  6. Provide the optical units with a special lens arrangement which will bend the peripheral rays coming from the television tube so that they enter the eyes of the user from the sides therefor, creating the sensation of peripheral vision filling an arc of more than 140° horizontally and vertically.

Two years later, Heilig developed and patented a stationary virtual reality (VR) simulator, the Sensorama Simulator, which was equipped with a variety of sensory devices including handlebars, a binocular display, a vibrating seat, stereophonic speakers, a cold-air blower, and a device close to the nose that would generate odors that fit the action in the film, to give the user virtual experiences. The main objectives of his invention were to:

  1. Provide an apparatus to simulate a desired experience by developing sensations in a plurality of the senses.
  2. Provide a new and improved apparatus to develop realism in a simulated situation.
  3. Provide an apparatus for simulating an actual, predetermined experience in the senses of an individual.

Sensorama Simulator

 

In 1961, Philco Corporation designed a helmet that used head movements to gain access to artificial environments, enhanced with a tracking system. The invention was called Headsight and was the first actual HMD.

The main objective was to be used with a remote controlled closed circuit video system for remotely viewing dangerous situations. In fact, their system used a head mounted display to monitor conditions in another room, using magnetic tracking to monitor the user’s head movements.


Philco Headsight

 

In the 1960s, Bell Helicopter Company built several early camera-based augmented-reality systems. In one, the head-mounted display was coupled with an infrared camera that would give military helicopter pilots the ability to land at night in rough terrain. An infrared camera, which moved as the pilot’s head moved, was mounted on the bottom of a helicopter. The pilot’s field of view was that of the camera.

 

Ivan Sutherland, a hall-of-fame computer scientist, invented the first true computer-mediated virtual reality system, called ‘Sword of Damocles’. It was the first BOOM display – Binocular Omni Orientation Monitor. Essentially, the system was a complete computer and display system for displaying a single wireframe cube in stereoscopy to the viewer’s eyes. Unfortunately, at the time, such apparatus was too bulky to head-mount. Instead, it was bolted to the ceiling and reached down via a long, height-adjustable pole to which a user’s head could be strapped. The display, whilst primitive by today’s standards, tracked the position of both eyes, allowed the user to swivel it around the Z axis a full 360 degrees, and tracked its orientation and the head position of the user. In addition, the system was not immersive, allowing the user to see the room beyond via transparent elements of the hardware. Thus, it is also considered to be the first augmented reality display.

The Sword of Damocles was the precursor to all the digital eyewear and virtual reality applications.

The objective of his invention was to surround the user with displayed three-dimensional information which changes as he moves.

 


Sutherland – Sword of Damocles

 

Steve Mann, born 1962, in Ontario, Canada, is a living laboratory for the cyborg life-style. He is one of the leaders in WearComp (wearable computing) and one of the integral members of the Wearable computing group at MIT Media Lab. He believes computers should be designed to function in ways organic to human needs rather than requiring humans to adapt to demanding requirements of technology. Mann has developed computer systems — both wearable and embedded — to augment biological systems and capabilities during all waking hours. His work touches a wide range of disciplines from implant technology to sousveillance (inverse surveillance), privacy, cyber security and cyborg-law.

In 1981, Steve Mann created the first version of the EyeTap. While still in high-school he wired a 6502 computer (as used in the Apple-II) into a steel-frame backpack to control flash-bulbs, cameras, and other photographic systems. The display was a camera viewfinder CRT attached to a helmet, giving 40 column text. Input was from seven microswitches built into the handle of a flash-lamp, and the entire system (including flash-lamps) was powered by lead-acid batteries.

The objective was for it to act as a camera, recording the scene available to the eye, as well as a display, superimposing computer-generated imagery on the original scene available to the eye.


Steve Mann – First EyeTap

 

In 1989, Reflection Technology released the Private Eye head-mounted display, which scanned a vertical array of LEDs across the visual field using a vibrating mirror. The monochrome screen is 1.25 inches on the diagonal, but images appear equivalent to a 15-inch display viewed at a distance of 18 inches.


Private Eye

 

Steve Mann appeared again in 1994 with the Wearable Wireless Webcam, which transmitted images point-to-point from a head-mounted analog camera to an SGI base station via amateur TV frequencies. The images were processed by the base station and displayed on a webpage in near real time. (The system was later extended to transmit processed video back from the base station to a heads-up display and was used in augmented reality experiments performed with Thad Starner.) It was the first example of lifelogging.

Over the last 20 years, HMD development has exploded, with companies creating many products with different technologies, types, structures, and uses. Input devices that lend themselves to mobility and/or hands-free use are good candidates, for example:

  • Touchpad or buttons
  • Compatible devices (e.g. smartphones or control unit)
  • Speech recognition
  • Gesture recognition
  • Eye tracking
  • Brain–computer interface

The main examples are:

  • 1998: Digital Eye Glass EyeTap Augmediated Reality Goggles


  • 2000: MicroOptical’s TASK-9. MicroOptical was founded in 1995 by Mark Spitzer, who is now a director at the Google X lab. The company ceased operations in 2010, but its patents have been acquired by Google.


  • 2005: MicroVision’s Nomad Digital Display: MicroVision is now working with automotive suppliers to develop laser heads-up displays to provide information to drivers in their field of vision.


  • 2008: MyVu Personal Media Viewer, Crystal Edition: MyVu’s Personal Media Viewers hooked up to external video sources, such as an iPod, to provide the illusion of watching the content on a large screen from several feet away.


  • 2009: Vuzix Wrap SeeThru Proto: Wrap SeeThru was developed by Vuzix in 2009. The publicly traded company has been developing video eyewear for 15 years and has dozens of patents on the technology.


  • 2013: Meta’s eyewear enters 3D space and uses your hands to interact with the virtual world. The Meta system includes stereoscopic 3D glasses and a 3D camera to track hand movements, similar to the portrayals of gestural control in movies like “Iron Man” and “Avatar.” Meta expects to have more fashionable glasses in 2014.

Meta Pro

  • 2013: Google Glass: Google’s program for developing a line of hands-free, head-mounted intelligent devices that can be worn by users as “wearable computing” eyewear.

Google Glass Explorer Edition

 

References

  1. http://www.media.mit.edu/wearables/lizzy/timeline.html
  2. http://www.irma-international.org/viewtitle/10158/
  3. https://docs.google.com/viewer?url=patentimages.storage.googleapis.com/pdfs/US2388170.pdf
  4. https://docs.google.com/viewer?url=patentimages.storage.googleapis.com/pdfs/US2955156.pdf
  5. http://www.mortonheilig.com/SensoramaPatent.pdf
  6. http://90.146.8.18/en/archiv_files/19902/E1990b_123.pdf

HMD

Head-mounted displays, abbreviated HMD, are just what they sound like: a computer display you wear on your head. Informally, the term stands for a display system built into goggles, or into part of a helmet, worn on the head, that gives the illusion of a floating monitor in front of the user’s face. Single-eye units are known as monocular HMDs, and dual-eye stereoscopic units (stereoscopy being a technique used to create a three-dimensional effect by adding an illusion of depth to a flat image) as binocular HMDs. They have primarily been designed to ensure that no matter in which direction the user looks, a monitor stays in front of their eyes.


Monocular HMD – Google Glass


Binocular HMD – Meta Pro

 

HUD

A head-mounted display is a type of heads-up display (HUD), which can be defined as a transparent or miniaturized display technology that does not require users to shift their gaze from where they are naturally looking. Besides HMDs, there are also fixed-mounted HUDs. This is typically achieved by the use of projected or reflected transparent displays in the line of sight. There have been some distinct generations in HUD history. The first generation of this type of HUD used reflected CRT displays. The second generation used solid-state light sources like LEDs to back-light LCD projection. The third generation uses waveguide optics and, finally, the fourth generation uses scanning lasers to project all types of images and video. Some of the earliest HUDs were used in military vehicles to assist in navigation and targeting.

Wearable computers

Head-mounted displays are probably the most prominent symbol of wearable computers; however, many devices are included in that group, starting with the first known wearable computer, a 16th-century abacus ring. Nowadays, as technological products become more sophisticated, there are impressively innovative devices, like Myo.


First known wearable computer – Abacus on a ring


Ultimate wearable computer – Myo

Wearable computing is the study or practice of inventing, designing, building, or using miniature body-borne computational and sensory devices. Wearable computers may be worn under, over, or in clothing, or may themselves be clothes. However, the field of wearable computing extends beyond “Smart Clothing”; in fact, it is commonly referred to as “Body-Borne Computing” or “Bearable Computing” so as to include all manner of technology that is on or in the body, e.g. implantable devices as well as portable devices like smartphones.

 

Types

There are many ways to group HMDs. Head-mounted displays differ in whether they display only computer-generated imagery (CGI), show live images from the real world, or a combination of both.

  • HMDs that display only computer-generated imagery are sometimes referred to as virtual reality displays.
  • HMDs that allow CGI to be superimposed on a real-world view. This is sometimes referred to as augmented reality or mixed reality. Within this set, there are two main groups:
    • Optical see-through (a.k.a. optical HMD): Combining the real-world view with CGI can be done by projecting the CGI through a partially reflective mirror while viewing the real world directly. If you are in a mission-critical application and are concerned about what happens should your power fail, an optical see-through solution will still allow you to see something in that extreme situation. And if you are concerned about the utmost image quality, portable cameras and fully immersive head-mounted displays can’t match the “direct view” experience.
    • Video see-through: Combining the real-world view with CGI can also be done electronically, by accepting video from a camera and mixing it electronically with CGI. This can be useful when you need to experience something remotely: a robot that you send to fix a leak inside a chemical plant, or a vacation destination that you’re thinking about. It is also useful when using an image enhancement system: thermal imagery, night-vision devices, etc. One aspect of video see-through is that it’s much easier to match the video latency with the computer graphics latency.

Uses

 

Military

Heads-up displays actually landed in aircraft as early as 1948, and they are an important part of the head-mounted display’s history.

Seeking ways for pilots to keep their line of sight outside the cockpit and avoid having to constantly look down at the instrument panel for information, aeronautical engineers devised a way to project visual data onto ground glass mounted within an airplane’s windshield. This, in essence, replaced the idea of the simple gunsight that had been used in aviation since World War I. The HUD has its origins in the fighter aircraft of World War II, when air combat grew more complex and speeds increased rapidly. To allow the pilot to focus on shooting an enemy down, vital information was shown directly ahead on a glass plate which he could see through. In later years, as technology advanced, it was found that other information could also be shown, reducing the need to look down at the dashboard.


Cars

The idea appealed to car designers too, because it could improve driving safety if a driver’s eyes were looking ahead rather than down at instruments. However, the technology cost a lot, and HUDs in cars remained a dream, usually shown in concept cars at motor shows. It was only in the mid-1980s that Nissan installed a HUD in a production model, which might have been the first commercial application in the car industry. It was a simple set-up with the speed projected onto the windscreen ahead of the driver. One issue that cropped up then was the visibility of the display, which had problems on very bright days. Nissan offered the HUD for a while and then stopped for unknown reasons (customers probably didn’t find it useful or worth the extra cost). GM also tried offering it on some models and likewise gave up, and it wouldn’t be until over two decades later that HUDs would again appear in cars for sale to the public – and mostly on the very expensive models.

BMW head-up display

 

Medicine

Medical practitioners can also benefit from the use of a head mounted display. Using the device to project images of x-rays, the surgeon can utilize the technology to more efficiently locate and remove growths and tumors from the body.

Video Gaming

 

 

References

  1. http://www.worldviz.com/products/head-mounted-displays
  2. http://en.wikipedia.org/wiki/Head-mounted_display
  3. http://www.wisegeek.com/what-is-a-head-mounted-display.htm
  4. http://electronics.howstuffworks.com/gadgets/other-gadgets/VR-gear1.htm
  5. http://whatis.techtarget.com/definition/stereoscopy-stereoscopic-imaging
  6. https://www.spaceglasses.com/
  7. http://whatis.techtarget.com/definition/heads-up-display-HUD
  8. http://www.gartner.com/it-glossary/wearable-computer/
  9. http://www.interaction-design.org/encyclopedia/wearable_computing.html
  10. https://www.pinterest.com/pin/392305817510787967/
  11. http://www.creol.ucf.edu/Research/Publications/1505.PDF

Possibilities

After seeing several examples of what people have done, we can think of the possibility of developing the project with Google Glass as part of a larger system in which the glasses interact with other devices, such as Leap Motion, to achieve more complex goals.

Some ideas

We have just started investigating Google Glass. After reading this post, we were amazed at what some people have already done with Google Glass and at the wide range of possibilities/ideas that could be developed with it (regardless of whether the ideas are morally correct or not).

Getting Started…

In this post we try to summarise the software we are going to use and the full setup of the Google Glass development environment. Finally, we deploy a simple example.

First of all, we need to install:

After the installation, we need to follow some more steps:

  1. To be able to compile Google Glass applications, we need to download and install, through the SDK Manager in Android Studio, version 15 of the Android SDK, named Ice Cream Sandwich Maintenance Release 1.
  2. To debug Google Glass apps, we also need to download and install, through the SDK Manager in Android Studio, the Google USB Driver. However, as we are developing on Windows 8, a fix is needed. The following steps explain how to enable the proper driver for Google Glass.
  3. Install Google Glass companion MyGlass application to:
    1. Configure and manage Google Glass.
    2. Get location updates from the smartphone GPS.
    3. Send text messages.
    4. Hangout with contacts.
    5. Install Glassware (Google Glass only) apps.
  4. Configure WiFi network from MyGlass application following these steps.
  5. Set Google Glass in debug mode. To achieve this you need to:
    1. Go to the settings card.
    2. Select the Device Info card in the settings.
    3. Scroll over to Turn on debug and tap it.

To test the environment setup, we cloned a simple Google Glass app example from GitHub. The example shows the usage of the compass sensor in a live card (we are going to explain the design components and concepts in future posts).

After cloning it, we:

  1. Import the project to Android Studio with Import Project.
  2. Set the proper Android SDK version 15 in Project Structure.
  3. Deploy it to Google Glass and watch it running (a minimal live card sketch, illustrating the kind of code involved, follows below).
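For reference, here is a minimal sketch of a GDK live card service, similar in spirit to the compass example we cloned; R.layout.live_card and MenuActivity are hypothetical placeholders standing in for a project’s own layout and menu activity.

```java
import android.app.PendingIntent;
import android.app.Service;
import android.content.Intent;
import android.os.IBinder;
import android.widget.RemoteViews;

import com.google.android.glass.timeline.LiveCard;
import com.google.android.glass.timeline.LiveCard.PublishMode;

/**
 * Minimal sketch of a GDK live card service. R.layout.live_card and
 * MenuActivity are hypothetical placeholders for a project's own resources.
 */
public class SketchLiveCardService extends Service {

    private static final String LIVE_CARD_TAG = "sketch";

    private LiveCard liveCard;

    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        if (liveCard == null) {
            liveCard = new LiveCard(this, LIVE_CARD_TAG);

            // Static content rendered on the card; frequently-updating cards
            // would use a rendering callback instead.
            RemoteViews views = new RemoteViews(getPackageName(), R.layout.live_card);
            liveCard.setViews(views);

            // Tapping the card opens a menu activity (required before publishing).
            Intent menuIntent = new Intent(this, MenuActivity.class);
            liveCard.setAction(PendingIntent.getActivity(this, 0, menuIntent, 0));

            // REVEAL brings the card to the foreground when published.
            liveCard.publish(PublishMode.REVEAL);
        }
        return START_STICKY;
    }

    @Override
    public void onDestroy() {
        if (liveCard != null && liveCard.isPublished()) {
            liveCard.unpublish();
            liveCard = null;
        }
        super.onDestroy();
    }

    @Override
    public IBinder onBind(Intent intent) {
        return null;
    }
}
```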