Saturday, May 24, 2008

Leading Biometric Technologies

A growing number of biometric technologies have been proposed over the past several years, but only in the past 5 years have the leading ones become more widely deployed. Some technologies are better suited to specific applications than others, and some are more acceptable to users. We will discuss the four leading and most widely deployed biometric technologies:

  • Facial Recognition
  • Fingerprint Recognition
  • Hand Geometry
  • Iris Recognition
Face Recognition
Face recognition technology is the least intrusive and fastest biometric technology. It works with the most obvious individual identifier – the human face. Instead of requiring people to place their hand on a reader or precisely position their eye in front of a scanner, face recognition systems unobtrusively take pictures of people's faces as they enter a defined area. There is no intrusion or delay, and in most cases the subjects are entirely unaware of the process. They do not feel "under surveillance" or that their privacy has been invaded.

Face Technology
Face technology is based on neural computing and combines the advantages of elastic and neural networks. Neural computing provides technical information processing methods that are similar to the way information is processed in biological systems, such as the human brain. They share some key strengths, like robustness, fault tolerance, and the ability to learn from examples. Elastic networks can compare facial landmarks even if images are not identical, as is practically always the case in real-world situations. Neural networks can learn to recognize similarities through pattern recognition.


Face recognition is also very difficult to fool. It works by comparing facial landmarks - specific proportions and angles of defined facial features - which cannot easily be concealed by beards, eyeglasses or makeup.
Every face has numerous, distinguishable landmarks, the different peaks and valleys that make up facial features. Face recognition defines these landmarks as nodal points. Each human face has approximately 80 nodal points. Some of the nodal points measured by the software are:


  • Distance between the eyes
  • Width of the nose
  • Depth of the eye sockets
  • The shape of the cheekbones
  • The length of the jaw line

These nodal points are measured to create a numerical code, called a faceprint, that represents the face in the database.
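
To make the idea concrete, here is a minimal Python sketch of how nodal-point measurements might be turned into a faceprint and compared. The feature names, the normalization by eye distance, and the match threshold are illustrative assumptions, not any vendor's actual algorithm.

```python
# Minimal, illustrative sketch: build a "faceprint" vector from nodal-point
# measurements and compare two faceprints. All names and values are assumed.
import math

NODAL_FEATURES = [
    "eye_distance",      # distance between the eyes
    "nose_width",        # width of the nose
    "eye_socket_depth",  # depth of the eye sockets
    "cheekbone_shape",   # a scalar summary of cheekbone shape
    "jaw_line_length",   # length of the jaw line
]

def make_faceprint(measurements: dict) -> list:
    """Scale each measurement by the eye distance so the resulting code is
    roughly independent of how far the subject was from the camera."""
    scale = measurements["eye_distance"]
    return [measurements[name] / scale for name in NODAL_FEATURES]

def faceprint_distance(fp_a: list, fp_b: list) -> float:
    """Euclidean distance between two faceprints; smaller means more similar."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(fp_a, fp_b)))

# Example: enroll one face, then compare a new capture against it.
enrolled = make_faceprint({"eye_distance": 64.0, "nose_width": 34.0,
                           "eye_socket_depth": 27.0, "cheekbone_shape": 41.0,
                           "jaw_line_length": 120.0})
probe = make_faceprint({"eye_distance": 66.0, "nose_width": 35.0,
                        "eye_socket_depth": 28.0, "cheekbone_shape": 42.0,
                        "jaw_line_length": 123.0})
MATCH_THRESHOLD = 0.05  # assumed; a real system would tune this on data
print("match" if faceprint_distance(enrolled, probe) < MATCH_THRESHOLD else "no match")
```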

Fingerprint Recognition
Fingerprint recognition is one of the best known and most widely used biometric technologies. Automated systems have been commercially available since the early 1970s, and at the time of our study, we found there were more than 75 fingerprint recognition technology companies. Until recently, fingerprint recognition was used primarily in law enforcement applications. Fingerprint recognition technology extracts features from impressions made by the distinct ridges on the fingertips. The fingerprints can be either flat or rolled. A flat print captures only an impression of the central area between the fingertip and the first knuckle; a rolled print captures ridges on both sides of the finger. An image of the fingerprint is captured by a scanner, enhanced, and converted into a template. Scanner technologies can be optical, silicon, or ultrasound. Ultrasound, while potentially the most accurate, has not been demonstrated in widespread use.

During enhancement, “noise” caused by such things as dirt, cuts, scars, and creases or dry, wet or worn fingerprints is reduced, and the definition of the ridges is enhanced. Approximately 80 percent of vendors base their algorithms on the extraction of minutiae points relating to breaks in the ridges of the fingertips. Other algorithms are based on extracting ridge patterns.

In the biometric process of finger scanning, a ridge is a curved line in a finger image. Some ridges are continuous curves, and others terminate at specific points called ridge endings. Sometimes, two ridges come together at a point called a bifurcation. Ridge endings and bifurcations are known as minutiae.

The number and locations of the minutiae vary from finger to finger in any particular person, and from person to person for any particular finger (for example, the index finger on the left hand). When a set of finger images is obtained from an individual, the number of minutiae is recorded for each finger. The precise locations of the minutiae are also recorded, in the form of numerical coordinates, for each finger. The result is a compact numerical record, or template, that can be stored in a computer database. A computer can rapidly compare this template with that of anyone else in the world whose finger image has been scanned.
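
A minimal Python sketch of this kind of comparison is shown below, assuming the minutiae have already been extracted as (x, y, type) records. Real matchers also align the two prints and use ridge angles; the distance tolerance and score threshold here are illustrative assumptions.

```python
# Minimal sketch of minutiae-based comparison on pre-extracted (x, y, type) records.
import math

def minutiae_matches(template, probe, tol=10.0):
    """Count probe minutiae that have a same-type counterpart in the template
    within `tol` pixels. Each template minutia may be used only once."""
    unused = list(template)
    count = 0
    for (px, py, ptype) in probe:
        for i, (tx, ty, ttype) in enumerate(unused):
            if ptype == ttype and math.hypot(px - tx, py - ty) <= tol:
                count += 1
                del unused[i]
                break
    return count

# Hypothetical enrolled template and live scan: (x, y, "ending"/"bifurcation").
enrolled = [(102, 50, "ending"), (118, 83, "bifurcation"), (90, 131, "ending"),
            (140, 145, "bifurcation"), (75, 160, "ending")]
live = [(104, 52, "ending"), (117, 85, "bifurcation"), (92, 129, "ending"),
        (200, 40, "ending")]

score = minutiae_matches(enrolled, live)
print("match" if score >= 3 else "no match")  # threshold assumed for the example
```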

Hand Geometry
Hand geometry systems have been in use for almost 30 years for access control to facilities ranging from nuclear power plants to day care centers. Hand geometry technology takes 96 measurements of the hand, including the width, height, and length of the fingers; distances between joints; and shapes of the knuckles. Hand geometry systems use an optical camera and light-emitting diodes with mirrors and reflectors to capture two orthogonal two-dimensional images of the back and sides of the hand. Although the basic shape of an individual’s hand remains relatively stable over his or her lifetime, natural and environmental factors can cause slight changes.

The image acquisition system comprises a light source, a camera, a single mirror, and a flat surface (with pegs on it). The user places his hand - palm facing downwards - on the surface of the device. The five pegs serve as control points for appropriate placement of the right hand of the user. The device also has knobs to change the intensity of the light source and the focal length of the camera. The lone mirror projects the side view of the user's hand onto the camera.

The hand geometry-based authentication system relies on geometric invariants of a human hand. Typical features include the length and width of the fingers, the aspect ratio of the palm or fingers, and the thickness of the hand. Normally there are 14 axes along which these features are measured. The five pegs in the image serve as control points and assist in choosing these axes. The hand is represented as a vector of the measurements selected above. Since the positions of the five pegs are fixed in the image, no attempt is made to remove these pegs from the acquired images.
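
Verification then reduces to comparing the live measurement vector with the enrolled one. Here is a minimal Python sketch of that comparison; the axis count, sample values (in millimetres), and decision threshold are illustrative assumptions, not a specific device's algorithm.

```python
# Minimal sketch of hand-geometry verification on pre-measured axis values (mm).

def verify_hand(enrolled: list, live: list, tolerance: float = 2.0) -> bool:
    """Accept if the average absolute difference across all measurement axes
    is within `tolerance` millimetres of the enrolled vector."""
    if len(enrolled) != len(live):
        raise ValueError("feature vectors must have the same length")
    avg_diff = sum(abs(e - l) for e, l in zip(enrolled, live)) / len(enrolled)
    return avg_diff <= tolerance

# Hypothetical 14-axis measurement vectors for the same hand on two visits.
enrolled_vector = [68.1, 71.4, 77.9, 73.2, 58.5, 18.2, 17.5,
                   16.9, 15.8, 13.1, 95.0, 88.7, 21.4, 19.8]
live_vector = [68.6, 70.9, 78.3, 72.8, 58.9, 18.0, 17.8,
               17.2, 15.5, 13.4, 94.4, 89.2, 21.1, 20.1]

print("accepted" if verify_hand(enrolled_vector, live_vector) else "rejected")
```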

The machine comes in a compact form that can be mounted on a wall quite easily.

Iris Recognition
Iris recognition technology is based on the distinctly colored ring surrounding the pupil of the eye. Made from elastic connective tissue, the iris is a very rich source of biometric data, having approximately 266 distinctive characteristics. These include the trabecular meshwork, a tissue that gives the appearance of dividing the iris radially, with striations, rings, furrows, a corona, and freckles. Iris recognition technology uses about 173 of these distinctive characteristics. Formed during the eighth month of gestation, these characteristics reportedly remain stable throughout a person’s lifetime, except in cases of injury. Iris recognition can be used in both verification and identification systems. Iris recognition systems use a small, high-quality camera to capture a black and white, high-resolution image of the iris. The systems then define the boundaries of the iris, establish a coordinate system over the iris, and define the zones for analysis within the coordinate system.
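
As an illustration of that last step, here is a minimal Python sketch of laying a polar sampling grid between the pupil and the outer iris boundary. The circle centre, radii, and grid size are assumptions for the example; a real system would first locate these boundaries in the captured image.

```python
# Minimal sketch of defining a polar sampling grid over the iris annulus.
import math

def iris_sample_points(center, pupil_radius, iris_radius, n_radii=8, n_angles=32):
    """Return (x, y) pixel locations on a polar grid between the pupil boundary
    and the outer iris boundary; the image intensity at these points would form
    the zones for analysis."""
    cx, cy = center
    points = []
    for i in range(n_radii):
        # radial position between the two boundaries (0 = pupil edge, 1 = iris edge)
        r = pupil_radius + (iris_radius - pupil_radius) * (i + 0.5) / n_radii
        for j in range(n_angles):
            theta = 2 * math.pi * j / n_angles
            points.append((cx + r * math.cos(theta), cy + r * math.sin(theta)))
    return points

grid = iris_sample_points(center=(320, 240), pupil_radius=30, iris_radius=110)
print(len(grid), "sample points")  # 8 radii x 32 angles = 256 points
```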

Iris patterns become interesting as an alternative approach to reliable visual recognition of persons when imaging can be done at distances of less than a meter, and especially when there is a need to search very large databases without incurring any false matches despite a huge number of possibilities. Although small (11 mm) and sometimes problematic to image, the iris has the great mathematical advantage that its pattern variability among different persons is enormous. In addition, as an internal (yet externally visible) organ of the eye, the iris is well protected from the environment and stable over time. As a planar object its image is relatively insensitive to angle of illumination, and changes in viewing angle cause only affine transformations; even the non-affine pattern distortion caused by pupillary dilation is readily reversible. Finally, the ease of localizing eyes in faces, and the distinctive annular shape of the iris, facilitate reliable and precise isolation of this feature and the creation of a size-invariant representation.
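
In Daugman-style systems, the size-invariant representation is a binary iris code, and two codes are compared by their Hamming distance. The Python sketch below shows the comparison on toy 32-bit codes; real systems use much longer codes (commonly 2048 bits), and the decision threshold here is an illustrative assumption.

```python
# Minimal sketch of iris-code comparison by fractional Hamming distance.

def hamming_distance(code_a: str, code_b: str) -> float:
    """Fraction of bit positions at which two equal-length iris codes differ."""
    if len(code_a) != len(code_b):
        raise ValueError("iris codes must have the same length")
    return sum(a != b for a, b in zip(code_a, code_b)) / len(code_a)

# Toy 32-bit codes; real iris codes are far longer.
enrolled_code = "10110010011101001010110010110100"
probe_code    = "10110011011101001010100010110100"

# Codes from the same iris tend to differ in only a small fraction of bits;
# codes from different irises differ in roughly half of them.
THRESHOLD = 0.32  # assumed operating point for the example
print("same iris" if hamming_distance(enrolled_code, probe_code) < THRESHOLD else "different iris")
```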
