Table of Contents


ABSTRACT

LIST OF ABBREVIATIONS

1        INTRODUCTION

2        WORKING OF HAPTIC SYSTEMS

3        HAPTIC DEVICES

4        COMMONLY USED HAPTIC DEVICES

4.1        LOGITECH WINGMAN FORCE FEEDBACK MOUSE

4.2        PHANTOM

4.3        CYBERGLOVE

4.4        CYBERGRASP

5        APPLICATIONS

6        LIMITATIONS OF HAPTIC SYSTEMS

7        FUTURE VISION

8        CONCLUSION

9        BIBLIOGRAPHY


ABSTRACT


              "HAPTICS" is a technology that adds the sense of touch to virtual environments. Haptic interfaces allow the user to feel as well as to see virtual objects on a computer, creating the illusion of touching surfaces, shaping virtual clay, or moving objects around.

               The sensation of touch is the brain's most effective learning mechanism, more effective than seeing or hearing, which is why this new technology holds so much promise as a teaching tool.

               Haptic technology is like exploring the virtual world with a stick. If you push the stick into a virtual balloon, the balloon pushes back. The computer communicates sensations through a haptic interface: a stick, scalpel, racket or pen that is connected to a force-exerting motor.

   With this technology we can now sit down at a computer terminal and touch objects that exist only in the "mind" of the computer. By using special input/output devices (joysticks, data gloves, or other devices), users can receive feedback from computer applications in the form of felt sensations in the hand or other parts of the body. In combination with a visual display, haptic technology can be used to train people for tasks requiring hand-eye coordination, such as surgery and spacecraft maneuvers.

                In this paper we explain how sensors and actuators are used to track the position and movement of the haptic device moved by the operator. We then move on to a few applications of haptic technology, and finally conclude by mentioning a few future developments.

LIST OF ABBREVIATIONS


VA        Virtual Artifact

VR        Virtual Reality

DC        Direct Current

VHB        Virtual Haptic Back

LED        Light Emitting Diode


INTRODUCTION

1. What is ‘Haptics’?

        Haptic technology refers to technology that interfaces the user with a virtual environment via the sense of touch by applying forces, vibrations, and/or motions to the user. This mechanical stimulation may be used to assist in the creation of virtual objects (objects existing only in a computer simulation), for control of such virtual objects, and to enhance the remote control of machines and devices (tele-operators). This emerging technology promises to have wide-reaching applications, as it already has in some fields. For example, haptic technology has made it possible to investigate in detail how the human sense of touch works by allowing the creation of carefully controlled haptic virtual objects. These objects are used to systematically probe human haptic capabilities, which would otherwise be difficult to achieve. These new research tools contribute to our understanding of how touch and its underlying brain functions work. Although haptic devices are capable of measuring the bulk or reactive forces applied by the user, they should not be confused with touch or tactile sensors that measure the pressure or force exerted by the user on the interface.

The term haptic originated from the Greek word ἁπτικός (haptikos), meaning "pertaining to the sense of touch", and comes from the Greek verb ἅπτεσθαι (haptesthai), meaning to "contact" or "touch".

2. History of Haptics

In the early 20th century, psychophysicists introduced the word haptics to label the subfield of their studies that addressed human touch-based perception and manipulation. In the 1970s and 1980s, significant research efforts in a completely different field, robotics, also began to focus on manipulation and perception by touch. Initially concerned with building autonomous robots, researchers soon found that building a dexterous robotic hand was much more complex and subtle than their initial naive hopes had suggested.

In time, these two communities, one that sought to understand the human hand and one that aspired to create devices with dexterity inspired by human abilities, found fertile mutual interest in topics such as sensory design and processing, grasp control and manipulation, object representation and haptic information encoding, and grammars for describing physical tasks.

        In the early 1990s a new usage of the word haptics began to emerge. The confluence of several emerging technologies made virtualized haptics, or computer haptics, possible. Much like computer graphics, computer haptics enables the display of simulated objects to humans in an interactive manner. However, computer haptics uses a display technology through which objects can be physically palpated.

        One of the earliest applications of haptic devices is found in large modern aircraft that use servomechanism systems to operate control surfaces. Such systems tend to be "one-way" in that forces applied aerodynamically to the control surfaces are not perceived at the controls, with the missing normal forces simulated with springs and weights. In earlier, lighter aircraft without servo systems, as the aircraft approached a stall the aerodynamic buffeting was felt in the pilot's controls, a useful warning of a dangerous flight condition. This control shake is not felt when servo control systems are used. To replace this missing cue, the angle of attack is measured, and when it approaches the critical stall point a "stick shaker" (an unbalanced rotating mass) is engaged, simulating the effects of a simpler control system. This is known as haptic feedback. Alternatively, the servo force may be measured and this signal directed to a servo system on the control; this method is known as force feedback. Force feedback has been implemented experimentally in some excavators. This is useful when excavating mixed materials such as large rocks embedded in silt or clay, as it allows the operator to "feel" and work around unseen obstacles, enabling significant increases in productivity.

WORKING OF HAPTIC SYSTEMS

1. Basic system configuration

Basically, a haptic system consists of two parts, namely the human part and the machine part. The human part senses and controls the position of the hand, while the machine part exerts forces on the hand to simulate contact with a virtual object. Both parts are provided with the necessary sensors, processors and actuators. In the human system, nerve receptors perform sensing, the brain performs processing and the muscles actuate the motion of the hand; in the machine system, these functions are performed by the encoders, the computer and the motors respectively.
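To make this sense-process-actuate loop concrete, here is a minimal sketch of a haptic servo loop in Python. The device functions (read_encoder_position, command_motor_force), the wall stiffness and position, and the 1 kHz rate are illustrative assumptions, not any particular device's API.

```python
import time

STIFFNESS = 400.0       # N/m, assumed stiffness of the virtual wall
WALL_X = 0.05           # m, assumed position of the wall along the x axis
SERVO_RATE_HZ = 1000    # haptic servo loops typically run near 1 kHz

def read_encoder_position():
    """Placeholder for the encoders: would return the device tip position in metres."""
    return 0.06  # hypothetical reading, slightly past the wall

def command_motor_force(force_newtons):
    """Placeholder for the motors: would send the computed force to the device."""
    pass

def servo_step():
    # 1. Sense: the encoders report where the operator has moved the device.
    x = read_encoder_position()
    # 2. Process: the computer decides how hard the virtual wall should push back.
    penetration = x - WALL_X
    force = -STIFFNESS * penetration if penetration > 0 else 0.0
    # 3. Actuate: the motors apply that force to the operator's hand.
    command_motor_force(force)

for _ in range(SERVO_RATE_HZ):          # one second of servo ticks
    servo_step()
    time.sleep(1.0 / SERVO_RATE_HZ)
```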

2. Haptic Information

Basically, the haptic information provided by the system is a combination of (i) tactile information and (ii) kinesthetic information.

1). Tactile information refers to the information acquired by the sensors in the skin, with particular reference to the spatial distribution of pressure or, more generally, tractions across the contact area.

For example when we handle flexible materials like fabric and paper, we sense the pressure variation across the fingertip. This is actually a sort of tactile information. Tactile sensing is also the basis of complex perceptual tasks like medical palpation, where physicians locate hidden anatomical structures and evaluate tissue properties using their hands.

 2). Kinesthetic information refers to the information acquired through the sensors in the joints. Interaction forces are normally perceived through a combination of these two kinds of information.

3. Creation of Virtual environment (Virtual reality)

Virtual reality is the technology which allows a user to interact with a computer-simulated environment, whether that environment is a simulation of the real world or an imaginary world. Most current virtual reality environments are primarily visual experiences, displayed either on a computer screen or through special stereoscopic displays, but some simulations include additional sensory information, such as sound through speakers or headphones. Some advanced haptic systems now include tactile information, generally known as force feedback, in medical and gaming applications. Users can interact with a virtual environment or a virtual artifact (VA) either through the use of standard input devices such as a keyboard and mouse, or through multimodal devices such as a wired glove, the Polhemus boom arm, and an omnidirectional treadmill. The simulated environment can be similar to the real world, for example, simulations for pilot or combat training, or it can differ significantly from reality, as in VR games. In practice, it is currently very difficult to create a high-fidelity virtual reality experience, due largely to technical limitations on processing power, image resolution and communication bandwidth. However, those limitations are expected to eventually be overcome as processor, imaging and data communication technologies become more powerful and cost-effective over time.

4. Haptic feedback

Virtual reality (VR) applications strive to simulate real or imaginary scenes with which users can interact and perceive the effects of their actions in real time. Ideally the user interacts with the simulation via all five senses. However, today’s typical VR applications rely on a smaller subset, typically vision, hearing, and more recently, touch.

The figure below shows the structure of a VR application incorporating visual, auditory, and haptic feedback.

The application’s main elements are:

1) The simulation engine, responsible for computing the virtual environment’s behavior over time;

2) Visual, auditory, and haptic rendering algorithms, which compute the virtual environment’s graphic, sound, and force responses toward the user; and

3) Transducers, which convert visual, audio, and force signals from the computer into a form the operator can perceive.

The human operator typically holds or wears the haptic interface device and perceives audiovisual feedback from audio (computer speakers, headphones, and so on) and visual displays (for example a computer screen or head-mounted display). Whereas audio and visual channels feature unidirectional information and energy flow (from the simulation engine toward the user), the haptic modality exchanges information and energy in two directions, from and toward the user. This bidirectionality is often referred to as the single most important feature of the haptic interaction modality.

 5. Architecture for Haptic feedback:

Basic architecture for a virtual reality application incorporating visual, auditory, and haptic feedback.

• Simulation engine:

Responsible for computing the virtual environment’s behavior over time.

• Visual, auditory, and haptic rendering algorithms:

Compute the virtual environment’s graphic, sound, and force responses toward the user.

• Transducers:

Convert visual, audio, and force signals from the computer into a form the operator can perceive.

• Rendering:

Process by which desired sensory stimuli are imposed on the user to convey information about a virtual haptic object.

The human operator typically holds or wears the haptic interface device and perceives audiovisual feedback from audio (computer speakers, headphones, and so on) and visual displays (a computer screen or head-mounted display, for example).

Audio and visual channels feature unidirectional information and energy flow (from the simulation engine toward the user), whereas the haptic modality exchanges information and energy in two directions, from and toward the user. This bidirectionality is often referred to as the single most important feature of the haptic interaction modality.

System architecture for haptic rendering: 

An avatar is the virtual representation of the haptic interface through which the user physically interacts with the virtual environment.

Haptic-rendering algorithms compute the correct interaction forces between the haptic interface representation inside the virtual environment and the virtual objects populating the environment. Moreover, haptic rendering algorithms ensure that the haptic device correctly renders such forces on the human operator.

1.) Collision-detection algorithms detect collisions between objects and avatars in the virtual environment and yield information about where, when, and ideally to what extent collisions (penetrations, indentations, contact area, and so on) have occurred.

2.) Force-response algorithms compute the interaction force between avatars and virtual objects when a collision is detected. This force approximates as closely as possible the contact forces that would normally arise during contact between real objects.

Hardware limitations prevent haptic devices from applying the exact force computed by the force-response algorithms to the user.

3.) Control algorithms command the haptic device in such a way that minimizes the error between ideal and applicable forces. The discrete-time nature of the haptic-rendering algorithms often makes this difficult.

The force response algorithms’ return values are the actual force and torque vectors that will be commanded to the haptic device.

Existing haptic rendering techniques are currently based upon two main principles: "point-interaction" or "ray-based".

In point interaction, a single point, usually the distal point of a probe, thimble or stylus used for direct interaction with the user, is employed in the simulation of collisions. The point penetrates the virtual objects, and the depth of indentation is calculated between the current point and a point on the surface of the object. Forces are then generated according to physical models, such as spring stiffness or a spring-damper model.
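As a hedged illustration of the point-interaction principle, the sketch below computes a spring-damper reaction force for a haptic interaction point penetrating a flat virtual surface; the stiffness and damping values are arbitrary assumptions, not values from any specific system.

```python
import numpy as np

STIFFNESS = 800.0   # N/m, assumed surface stiffness
DAMPING = 2.0       # N·s/m, assumed damping coefficient
SURFACE_NORMAL = np.array([0.0, 1.0, 0.0])  # flat surface facing +y
SURFACE_POINT = np.array([0.0, 0.0, 0.0])   # a point on the surface

def point_interaction_force(hip_position, hip_velocity):
    """Spring-damper force on the haptic interaction point (HIP)."""
    # Signed distance of the HIP from the surface along the normal.
    distance = np.dot(hip_position - SURFACE_POINT, SURFACE_NORMAL)
    if distance >= 0.0:
        return np.zeros(3)                  # no contact, no force
    penetration = -distance                 # depth of indentation
    normal_velocity = np.dot(hip_velocity, SURFACE_NORMAL)
    # Spring pushes out along the normal; damper opposes normal velocity.
    magnitude = STIFFNESS * penetration - DAMPING * normal_velocity
    return max(magnitude, 0.0) * SURFACE_NORMAL

# Example: stylus tip 2 mm below the surface, still moving slowly downward.
force = point_interaction_force(np.array([0.0, -0.002, 0.0]),
                                np.array([0.0, -0.01, 0.0]))
```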

In ray-based rendering, the user interface mechanism, for example, a probe, is modeled in the virtual environment as a finite ray. Orientation is thus taken into account, and collisions are determined between the simulated probe and virtual objects. Collision detection algorithms return the intersection point between the ray and the surface of the simulated object.
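For the ray-based approach, here is a minimal sketch, assuming the virtual object is a flat plane, that checks whether the finite probe segment intersects the surface and returns the intersection point; the plane definition and probe endpoints are illustrative assumptions.

```python
import numpy as np

PLANE_POINT = np.array([0.0, 0.0, 0.0])
PLANE_NORMAL = np.array([0.0, 1.0, 0.0])  # unit normal of the virtual surface

def probe_plane_intersection(probe_base, probe_tip):
    """Return the intersection of the probe segment with the plane, or None."""
    direction = probe_tip - probe_base
    denom = np.dot(PLANE_NORMAL, direction)
    if abs(denom) < 1e-9:
        return None  # probe is parallel to the surface
    t = np.dot(PLANE_NORMAL, PLANE_POINT - probe_base) / denom
    if 0.0 <= t <= 1.0:           # intersection lies within the finite probe
        return probe_base + t * direction
    return None

# Example: a probe whose tip has dipped below the surface.
hit = probe_plane_intersection(np.array([0.0, 0.05, 0.0]),
                               np.array([0.0, -0.01, 0.0]))
```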

6. Computing contact-response forces:

Humans perceive contact with real objects through sensors (mechanoreceptors) located in their skin, joints, tendons, and muscles. We make a simple distinction between the information these two types of sensors can acquire.

1). Tactile information refers to the information acquired through sensors in the skin with particular reference to the spatial distribution of pressure, or more generally, tractions, across the contact area.

When we handle flexible materials like fabric and paper, we sense the pressure variation across the fingertip. Tactile sensing is also the basis of complex perceptual tasks like medical palpation, where physicians locate hidden anatomical structures and evaluate tissue properties using their hands.

2). Kinesthetic information refers to the information acquired through the sensors in the joints. Interaction forces are normally perceived through a combination of these two.

To provide a haptic simulation experience, systems are designed to recreate the contact forces a user would perceive when touching a real object.

There are two types of forces:

1. Forces due to object geometry.

2. Forces due to object surface properties, such as texture and friction.

 

7. Geometry-dependent force-rendering algorithms:

The first type of force-rendering algorithms aspires to recreate the force interaction a user would feel when touching a frictionless and textureless object.

Force-rendering algorithms are also grouped by the number of Degrees-of-freedom (DOF) necessary to describe the interaction force being rendered.

Surface property-dependent force-rendering algorithms:

All real surfaces contain tiny irregularities or indentations. It is impractical to model these irregularities exactly, so surface properties such as texture and friction are approximated; higher accuracy, however, sacrifices speed, a critical factor in real-time applications. Any choice of modeling technique must consider this tradeoff. Keeping this trade-off in mind, researchers have developed more accurate haptic-rendering algorithms for friction.

In computer graphics, texture mapping adds realism to computer-generated scenes by projecting a bitmap image onto surfaces being rendered. The same can be done haptically.
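A hedged sketch of the haptic analogue: perturbing the smooth contact force with the gradient of a height (bump) map so the user feels texture as they slide over the surface. The sinusoidal height map and gain are assumptions made for illustration only.

```python
import numpy as np

TEXTURE_GAIN = 0.5      # scales how strongly the texture perturbs the force
BUMP_SPACING = 0.005    # m, assumed spacing of the sinusoidal bumps

def texture_height(x, z):
    """Hypothetical height map: a sinusoidal bump pattern over the surface."""
    return 0.0002 * np.sin(2 * np.pi * x / BUMP_SPACING) * np.sin(2 * np.pi * z / BUMP_SPACING)

def texture_gradient(x, z, eps=1e-5):
    """Numerical gradient of the height map (central differences)."""
    dx = (texture_height(x + eps, z) - texture_height(x - eps, z)) / (2 * eps)
    dz = (texture_height(x, z + eps) - texture_height(x, z - eps)) / (2 * eps)
    return np.array([dx, 0.0, dz])

def textured_force(base_force, contact_x, contact_z):
    """Add a lateral perturbation to the smooth contact force (haptic texturing)."""
    perturbation = -TEXTURE_GAIN * np.linalg.norm(base_force) * texture_gradient(contact_x, contact_z)
    return base_force + perturbation

# Example: perturb a 1 N normal force at contact point x = 1 mm, z = 2 mm.
f = textured_force(np.array([0.0, 1.0, 0.0]), 0.001, 0.002)
```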

8. Controlling forces delivered through haptic interfaces: 

Once such forces have been computed, they must be applied to the user. Limitations of haptic device technology, however, sometimes make it impossible to apply the exact force value computed by the force-rendering algorithms. The main issues are as follows:

  1. Haptic interfaces can only exert forces with limited magnitude and not equally well in all directions.
  2. Haptic devices aren’t ideal force transducers. An ideal haptic device would render zero impedance when simulating movement in free space, and any finite impedance when simulating contact with an object featuring such impedance characteristics. The friction, inertia, and backlash present in most haptic devices prevent them from meeting this ideal.
  3. A third issue is that haptic-rendering algorithms operate in discrete time whereas users operate in continuous time.

Finally, haptic device position sensors have finite resolution. Consequently, attempting to determine where and when contact occurs always results in a quantization error. It can create stability problems.

All of these issues can limit a haptic application's realism. High servo rates (or, equivalently, short servo periods) are a key requirement for stable haptic interaction.
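As a hedged sketch of how a rendering loop can cope with limited actuator output, the following clamps the commanded force to a maximum magnitude while preserving its direction; the 4 N limit is an assumed value, not any specific device's specification.

```python
import numpy as np

MAX_FORCE = 4.0  # N, assumed peak force the device can exert

def clamp_force(commanded_force):
    """Scale the force vector down if it exceeds the device limit, keeping its direction."""
    magnitude = np.linalg.norm(commanded_force)
    if magnitude <= MAX_FORCE or magnitude == 0.0:
        return commanded_force
    return commanded_force * (MAX_FORCE / magnitude)

# Example: an 8 N command gets scaled to 4 N along the same direction.
safe = clamp_force(np.array([8.0, 0.0, 0.0]))
```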

HAPTIC DEVICES

        A haptic device is one that provides a physical interface between the user and the virtual environment by means of a computer. This can be done through an input/output device that senses the body's movement, such as a joystick or data glove. By using haptic devices, the user can not only feed information to the computer but can also receive information from the computer in the form of a felt sensation on some part of the body. This is referred to as a haptic interface.

        Haptic devices can be broadly classified into

1. Virtual reality/ Telerobotics based devices

        i) Exoskeletons and Stationary device

        ii) Gloves and wearable devices

        iii) Point-sources and Specific task devices

        iv) Locomotion Interfaces

2. Feedback devices

        i) Force feedback devices

        ii) Tactile displays

3. Exoskeletons and Stationary devices

        The term exoskeleton refers to the hard outer shell that exists on many creatures. In a technical sense, the word refers to a system that covers the user or that the user has to wear. Current haptic devices that are classified as exoskeletons are large and immobile systems that the user must attach him- or herself to.

4. Gloves and wearable devices

        These are smaller, exoskeleton-like devices that are often, but not always, supported by a larger exoskeleton or other immobile device. Since the goal of building a haptic system is to immerse the user in the virtual or remote environment, it is important to provide as little reminder of the user's actual environment as possible. The drawback of wearable systems is that, since the weight and size of the devices are a concern, they have more limited sets of capabilities.

5. Point sources and specific task devices

        This is a class of devices that are highly specialized for performing a particular task. Designing a device to perform a single type of task restricts its application to a much smaller number of functions; however, it allows the designer to focus on making the device perform its task extremely well. These task devices have two general forms: single point of interface devices and specific task devices.

6. Locomotion interfaces

        An interesting application of haptic feedback is full-body force feedback in the form of locomotion interfaces. Locomotion interfaces are devices that, within a confined space, simulate unrestrained mobility such as walking and running for virtual reality. These interfaces overcome the limitations of joysticks for maneuvering, of whole-body motion platforms, in which the user is seated and does not expend energy, and of room environments, where only short distances can be traversed.

7. Force feedback devices

        Force feedback input devices are usually, but not exclusively, connected to computer systems and are designed to apply forces to simulate the sensation of weight and resistance in order to provide information to the user. As such, the feedback hardware represents a more sophisticated form of input/output device, complementing others such as keyboards, mice or trackers. These devices translate digital information into physical sensations.

8. Tactile display devices

        Simulation tasks involving active exploration or delicate manipulation of a virtual environment require the addition of feedback data that present an object's surface geometry or texture. While haptic interfaces present the shape, weight or compliance of an object, tactile interfaces present the surface properties of an object, such as its surface texture. Tactile feedback applies sensations to the skin.

COMMONLY USED HAPTIC DEVICES

1. LOGITECH WINGMAN FORCE FEEDBACK MOUSE

The WingMan force feedback mouse is attached to a base that replaces the mouse mat and contains the motors used to provide forces back to the user.

        The interface is used to aid computer users who are blind or visually impaired, or who are tactile or kinesthetic learners, by providing slight resistance at the edges of windows and buttons so that the user can "feel" the graphical user interface (GUI). This technology can also provide resistance to textures in computer images, which enables computer users to "feel" pictures such as maps and drawings.

2. PHANTOM:

The PHANTOM provides single point, 3D force-feedback to the user via a stylus (or thimble) attached to a moveable arm. The position of the stylus point/fingertip is tracked, and resistive force is applied to it when the device comes into 'contact' with the virtual model, providing accurate, ground referenced force feedback. The physical working space is determined by the extent of the arm, and a number of models are available to suit different user requirements.

The PHANTOM system is controlled by three direct current (DC) motors that have sensors and encoders attached to them. The number of motors corresponds to the number of degrees of freedom a particular PHANTOM system has, although most systems produced have three motors.

The encoders track the user's motion or position along the x, y and z coordinates, while the motors provide the forces exerted on the user along the x, y and z axes. From the motors, a cable connects to an aluminum linkage, which connects to a passive gimbal that attaches to the thimble or stylus. A gimbal is a device that permits a body freedom of motion in any direction or suspends it so that it remains level at all times.
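As a hedged illustration of how joint encoder readings might map to a Cartesian tip position, the sketch below computes the stylus position of an idealized three-joint arm from its encoder angles; the link lengths and joint arrangement are assumptions made for illustration and are not the PHANTOM's actual kinematics.

```python
import numpy as np

L1 = 0.14  # m, assumed length of the first link
L2 = 0.14  # m, assumed length of the second link

def tip_position(theta_base, theta_shoulder, theta_elbow):
    """Forward kinematics of an idealized base-shoulder-elbow arm.

    theta_base rotates the arm about the vertical axis; the other two
    angles move the linkage within the resulting vertical plane.
    """
    # Planar position of the tip in the arm's vertical plane.
    r = L1 * np.cos(theta_shoulder) + L2 * np.cos(theta_shoulder + theta_elbow)
    z = L1 * np.sin(theta_shoulder) + L2 * np.sin(theta_shoulder + theta_elbow)
    # Rotate that plane about the vertical axis by the base angle.
    x = r * np.cos(theta_base)
    y = r * np.sin(theta_base)
    return np.array([x, y, z])

# Example: encoder angles (in radians) read from the three joints.
pos = tip_position(0.1, 0.6, -0.4)
```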

The PHANTOM is used in surgical simulation and in the remote operation of robots in hazardous environments.

3. CyberGlove

The CyberGlove can sense the position and movement of the fingers and wrist. The basic CyberGlove system includes one CyberGlove, its instrumentation unit, a serial cable to connect to the host computer, and an executable version of the VirtualHand graphic hand model display and calibration software.

The CyberGlove has a software programmable switch and LED on the wristband to permit the system software developer to provide the CyberGlove wearer with additional input/output capability. With the appropriate software, it can be used to interact with systems using hand gestures, and when combined with a tracking device to determine the hand's position in space, it can be used to manipulate virtual objects.

 4. CyberGrasp

The CyberGrasp is a full-hand force-feedback exoskeletal device worn over the CyberGlove. It consists of a lightweight mechanical assembly, or exoskeleton, that fits over a motion-capture glove. About 20 flexible semiconductor sensors sewn into the fabric of the glove measure hand, wrist and finger movement. The sensors send their readings to a computer that displays a virtual hand mimicking the real hand's flexes, tilts, dips, waves and swivels.

The same program that moves the virtual hand on the screen also directs machinery that exerts palpable forces on the real hand, creating the illusion of touching and grasping. A special computer called a force control unit calculates how much the exoskeleton assembly should resist movement of the real hand in order to simulate the onscreen action. Each of five actuator motors turns a spool that rolls or unrolls a cable. The cable conveys the resulting pushes or pulls to a finger via the exoskeleton.
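A hedged sketch of the kind of computation such a force control unit might perform: for each finger, a resistive cable tension proportional to how far the virtual fingertip has penetrated the grasped object, clamped to the actuator's limit. The stiffness and force limit are assumed values, not the product's specifications.

```python
STIFFNESS = 200.0   # N/m, assumed grasp stiffness of the virtual object
MAX_TENSION = 12.0  # N, assumed per-finger actuator limit

def cable_tensions(penetrations_m):
    """Resistive tension for each of the five finger cables.

    penetrations_m: list of five penetration depths (metres) of the
    virtual fingertips into the grasped object; 0 means no contact.
    """
    tensions = []
    for depth in penetrations_m:
        tension = STIFFNESS * max(depth, 0.0)   # deeper penetration, more resistance
        tensions.append(min(tension, MAX_TENSION))
    return tensions

# Example: thumb and index finger pressing 5 mm and 3 mm into a virtual ball.
forces = cable_tensions([0.005, 0.003, 0.0, 0.0, 0.0])
```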

                           

APPLICATIONS

1. Graphical user interfaces

Video game makers have been early adopters of passive haptics, which takes advantage of vibrating joysticks, controllers and steering wheels to reinforce on-screen activity. But future video games will enable players to feel and manipulate virtual solids, fluids, tools and avatars. The Novint Falcon haptics controller is already making this promise a reality. The 3-D force feedback controller allows you to tell the difference between a pistol report and a shotgun blast, or to feel the resistance of a longbow's string as you pull back an arrow.

Graphical user interfaces, like those that define the Windows and Mac operating environments, will also benefit greatly from haptic interactions. Imagine being able to feel graphic buttons and receive force feedback as you depress a button. Some touchscreen manufacturers are already experimenting with this technology. Nokia phone designers have perfected a tactile touchscreen that makes on-screen buttons behave as if they were real buttons: when a user presses a button, he or she feels movement in and movement out, and also hears an audible click. Nokia engineers accomplished this by placing two small piezoelectric sensor pads under the screen and designing the screen so it could move slightly when pressed. Everything, movement and sound, is synchronized perfectly to simulate real button manipulation.

2. Surgical Simulation and Medical Training

Various haptic interfaces for medical simulation may prove especially useful for training of minimally invasive procedures (laparoscopy/interventional radiology) and remote surgery using teleoperators. In the future, expert surgeons may work from a central workstation, performing operations in various locations, with machine setup and patient preparation performed by local nursing staff. Rather than traveling to an operating room, the surgeon instead becomes a telepresence. A particular advantage of this type of work is that the surgeon can perform many more operations of a similar type, and with less fatigue. It is well documented that a surgeon who performs more procedures of a given kind will have statistically better outcomes for his patients. Haptic interfaces are also used in rehabilitation robotics.

In ophthalmology, "haptic" refers to a supporting spring, two of which hold an artificial lens within the lens capsule (after surgical removal of cataracts).

A 'Virtual Haptic Back' (VHB) is being successfully integrated in the curriculum of students at the Ohio University College of Osteopathic Medicine.  Research indicates that VHB is a significant teaching aid in palpatory diagnosis (detection of medical problems via touch). The VHB simulates the contour and compliance (reciprocal of stiffness) properties of human backs, which are palpated with two haptic interfaces (Sensable Technologies, Phantom 3.0).

Reality-based modeling for surgical simulation consists of a continuous cycle.  In the figure given above, the surgeon receives visual and haptic (force and tactile) feedback and interacts with the haptic interface to control the surgical robot and instrument.  The robot with instrument then operates on the patient at the surgical site per the commands given by the surgeon.  Visual and force feedback is then obtained through endoscopic cameras and force sensors that are located on the surgical tools and are displayed back to the surgeon.

3. Military Training in virtual environment.

From the earliest moments in the history of virtual reality (VR), the United States military forces have been a driving factor in developing and applying new VR technologies. Along with the entertainment industry, the military is responsible for the most dramatic evolutionary leaps in the VR field.

Virtual environments work well in military applications. When well designed, they provide the user with an accurate simulation of real events in a safe, controlled environment. Specialized military training can be very expensive, particularly for vehicle pilots. Some training procedures have an element of danger when using real situations. While the initial development of VR gear and software is expensive, in the long run it's much more cost effective than putting soldiers into real vehicles or physically simulated situations. VR technology also has other potential applications that can make military activities safer.


Today, the military uses VR techniques not only for training and safety enhancement, but also to analyze military maneuvers and battlefield positions. In the next section, we'll look at the various simulators commonly used in military training.  Out of all the earliest VR technology applications, military vehicle simulations have probably been the most successful. Simulators use sophisticated computer models to replicate a vehicle's capabilities and limitations within a stationary -- and safe -- computer station.

Possibly the most well-known of all the simulators in the military are the flight simulators. The Air Force, Army and Navy all use flight simulators to train pilots. Training missions may include how to fly in battle, how to recover in an emergency, or how to coordinate air support with ground operations.

The Army uses several specific devices to train soldiers to drive specialized vehicles like tanks or the heavily-armored Stryker vehicle. Some of these look like long-lost twins to flight simulators. They not only accurately recreate the handling and feel of the vehicle they represent, but also can replicate just about any environment you can imagine. Trainees can learn how the real vehicle handles in treacherous weather conditions or difficult terrain. Networked simulators allow users to participate in complex war games.

4. Telerobotics

In a telerobotic system, a human operator controls the movements of a robot that is located some distance away. Some teleoperated robots are limited to very simple tasks, such as aiming a camera and sending back visual images. In a more sophisticated form of teleoperation known as telepresence, the human operator has a sense of being located in the robot's environment. Haptics now makes it possible to include touch cues in addition to audio and visual cues in telepresence models. It won't be long before astronomers and planetary scientists actually hold and manipulate a Martian rock through an advanced haptics-enabled telerobot, a high-touch version of the Mars Exploration Rover.

5. Collision Detection

Collision detection is a fundamental problem in computer animation, physically-based modeling, geometric modeling, and robotics. In these fields, it is often necessary to compute distances between objects or find intersection regions.

In particular, scientists have investigated the computation of global and local penetration depth, distance fields, and multiresolution hierarchies for perceptually-driven fast collision detection. These proximity queries have been applied to haptic rendering and rigid-body dynamics simulation.
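As a hedged illustration of the kind of proximity query involved, the sketch below uses a signed distance function for a sphere to decide whether a query point is inside the object and how deeply; a real system would evaluate distance fields or bounding-volume hierarchies over arbitrary meshes rather than a single analytic sphere.

```python
import numpy as np

def sphere_signed_distance(point, center, radius):
    """Signed distance from a point to a sphere: negative means inside."""
    return np.linalg.norm(point - center) - radius

def penetration_depth(point, center, radius):
    """Depth of penetration of the point into the sphere (0 if outside)."""
    return max(-sphere_signed_distance(point, center, radius), 0.0)

# Example query: an avatar point 2 mm inside a 5 cm sphere.
depth = penetration_depth(np.array([0.0, 0.048, 0.0]),
                          np.array([0.0, 0.0, 0.0]), 0.05)
```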

6. Gaming technology

Flight simulations: Motors and actuators push, pull, and shake the flight yoke, throttle, rudder pedals, and cockpit shell, replicating the tactile and kinesthetic cues of real flight. Examples of the simulator's haptic capabilities include resistance in the yoke when pulling out of a hard dive, the shaking caused by stalls, and the bumps felt when rolling down a concrete runway. These flight simulators look and feel so real that a pilot who successfully completes training on a top-of-the-line Level 5 simulator can immediately start flying a real commercial airliner.

Today, all major video consoles have built-in tactile feedback capability. Various sports games, for example, let you feel bone-crushing tackles or the different vibrations caused by skateboarding over plywood, asphalt, and concrete. Altogether, more than 500 games use force feedback, and more than 20 peripheral manufacturers now market in excess of 100 haptics hardware products for gaming.

7. Cars

For the past two model years, the BMW 7 Series has offered iDrive (based on Immersion Corp.'s technology), which uses a small wheel on the console to give haptic feedback so the driver can control peripherals such as the stereo, heating and navigation system through menus on a video screen.

Haptic technology was also introduced in an X-by-Wire system showcased at the Alps Show 2005 in Tokyo. The system consisted of a "cockpit" with a steering wheel, gearshift lever and pedals that embed haptic technology, and a remote-control car. Visitors could drive the remote-control car by operating the steering wheel, gearshift lever and pedals in the cockpit while watching a screen in front of the cockpit showing the view from a camera mounted on the car.

8. Robot Control

 For navigation in dynamic environments or at high speeds, it is often desirable to provide a sensor-based collision avoidance scheme on-board the robot to guarantee safe navigation. Without such a collision avoidance scheme, it would be difficult for the (remote) operator to prevent the robot from colliding with obstacles. This is primarily due to (1) limited information from the robots' sensors, such as images within a restricted viewing angle without depth information, which is insufficient for the user's full perception of the environment in which the robot moves, and (2) significant delay in the communication channel between the operator and the robot.

Experiments on robot control using haptic devices have shown the effectiveness of haptic feedback in a mobile robot tele-operation system for safe navigation in a shared autonomy scenario.
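A hedged sketch of one common approach in such systems: feeding back to the operator's haptic device a repulsive force that grows as the robot's range sensor reports a closer obstacle. The distance threshold and force gain below are assumptions chosen for illustration.

```python
SAFE_DISTANCE = 1.0   # m, assumed range beyond which no force is fed back
MAX_FEEDBACK = 3.0    # N, assumed maximum force rendered to the operator

def obstacle_feedback_force(nearest_obstacle_distance_m):
    """Repulsive feedback force magnitude on the operator's haptic device."""
    if nearest_obstacle_distance_m >= SAFE_DISTANCE:
        return 0.0
    # Force ramps up linearly as the robot approaches the obstacle.
    closeness = 1.0 - nearest_obstacle_distance_m / SAFE_DISTANCE
    return MAX_FEEDBACK * closeness

# Example: an obstacle 0.25 m ahead produces a 2.25 N opposing force.
f = obstacle_feedback_force(0.25)
```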

9. Prostate Cancer

Prostate cancer is the third leading cause of death among American men, resulting in approximately 31,000 deaths annually. A common treatment method is to insert needles into the prostate to distribute radioactive seeds, destroying the cancerous tissue. This procedure is known as brachytherapy.

The prostate itself and the surrounding organs are all soft tissue. Tissue deformation makes it difficult to distribute the seeds as planned. In our research we have developed a device to minimize this deformation, improving brachytherapy by increasing the seed distribution accuracy.

LIMITATIONS OF HAPTIC SYSTEMS

         Limitations of haptic devices sometimes make it impossible to apply the exact force value computed by the force-rendering algorithms.

Various issues that limit a haptic device's capability to render a desired force or, more often, a desired impedance are given below.

 1) Haptic interfaces can only exert forces with limited magnitude and not equally well in all directions, thus rendering algorithms must ensure that no output components saturate, as this would lead to erroneous or discontinuous application of forces to the user. In addition, haptic devices aren’t ideal force transducers.

2) An ideal haptic device would render zero impedance when simulating movement in free space, and any finite impedance when simulating contact with an object featuring such impedance characteristics. The friction, inertia, and backlash present in most haptic devices prevent them from meeting this ideal.

3) A third issue is that haptic-rendering algorithms operate in discrete time whereas users operate in continuous time, as the figure below illustrates. While moving into and out of a virtual object, the sampled avatar position will always lag behind the avatar's actual continuous-time position. Thus, when pressing on a virtual object, a user needs to perform less work than in reality.

When the user releases, however, the virtual object returns more work than its real-world counterpart would have. In other terms, touching a virtual object extracts energy from it, and this extra energy can cause an unstable response from haptic devices (see the sketch after this list).

4) Finally, haptic device position sensors have finite resolution. Consequently, attempting to determine where and when contact occurs always results in a quantization error. Although users might not easily perceive this error, it can create stability problems.
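A hedged numerical sketch of the energy leak described in issue 3: a hand presses into and withdraws from a sampled virtual wall whose force is only updated once per servo tick. With a continuous spring the net work over the cycle would be zero; with sampling it comes out positive, meaning the wall gives back more energy than it received. The stiffness, depth, and timing values are assumed for illustration.

```python
import numpy as np

STIFFNESS = 1000.0        # N/m, assumed virtual-wall stiffness
SERVO_PERIOD = 1e-3       # s, force is updated only once per servo tick
SIM_STEP = 1e-5           # s, fine step approximating continuous motion
CYCLE_TIME = 0.2          # s, one press-and-release of the wall
MAX_DEPTH = 0.01          # m, deepest penetration into the wall

def penetration(t):
    """Hand motion: presses into the wall and withdraws (half a sine cycle)."""
    return MAX_DEPTH * np.sin(np.pi * t / CYCLE_TIME)

energy_to_user = 0.0
held_force = 0.0
next_servo_time = 0.0
t = 0.0
prev_p = penetration(0.0)
while t < CYCLE_TIME:
    if t >= next_servo_time:                 # force only updates at servo ticks
        held_force = STIFFNESS * penetration(t)
        next_servo_time += SERVO_PERIOD
    t += SIM_STEP
    p = penetration(t)
    # Work the wall does on the hand over this small step (positive on the way out).
    energy_to_user += held_force * (prev_p - p)
    prev_p = p

# A continuous spring would give ~0 here; the sampled wall generates extra energy.
print(f"net energy delivered to the user: {energy_to_user * 1000:.3f} mJ")
```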

All of these issues, well known to practitioners in the field, can limit a haptic application’s realism. The first two issues usually depend more on the device mechanics; the latter two depend on the digital nature of VR applications.


                            FUTURE VISION

As haptics moves beyond the buzzes and thumps of today’s video games, technology will enable increasingly believable and complex physical interaction with virtual or remote objects. Already haptically enabled commercial products let designers sculpt digital clay figures to rapidly produce new product geometry, museum goers feel previously inaccessible artifacts, and doctors train for simple procedures without endangering patients.

Past technological advances that permitted recording, encoding, storage, transmission, editing, and ultimately synthesis of images and sound profoundly affected society. A wide range of human activities, including communication, education, art, entertainment, commerce, and science, were forever changed when we learned to capture, manipulate, and create sensory stimuli nearly indistinguishable from reality. It’s not unreasonable to expect that future advancements in haptics will have equally deep effects. Though the field is still in its infancy, hints of vast, unexplored intellectual and commercial territory add excitement and energy to a growing number of conferences, courses, product releases, and invention efforts.

For the field to move beyond today’s state of the art, researchers must surmount a number of commercial and technological barriers. Device and software tool-oriented corporate efforts have provided the tools we need to step out of the laboratory, yet we need new business models. For example, can we create haptic content and authoring tools that will make the technology broadly attractive?

Can the interface devices be made practical and inexpensive enough to make them widely accessible? Once we move beyond single-point force-only interactions with rigid objects, we should explore several technical and scientific avenues. Multipoint, multi-hand, and multi-person interaction scenarios all offer enticingly rich interactivity. Adding sub-modality stimulation such as tactile (pressure distribution) display and vibration could add subtle and important richness to the experience. Modeling compliant objects, such as for surgical simulation and training, presents many challenging problems to enable realistic deformations, arbitrary collisions, and topological changes caused by cutting and joining actions.

Improved accuracy and richness in object modeling and haptic rendering will require advances in our understanding of how to represent and render psychophysically and cognitively germane attributes of objects, as well as algorithms and perhaps specialty hardware (such as haptic or physics engines) to perform real-time computations.

Development of multimodal workstations that provide haptic, visual, and auditory engagement will offer opportunities for more integrated interactions. We're only beginning to understand the psychophysical and cognitive details needed to enable successful multimodality interactions. For example, how do we encode and render an object so there is a seamless consistency and congruence across sensory modalities, that is, does it look like it feels? Are the object's densities, compliance, motion, and appearance familiar and unconsciously consistent with context? Are sensory events predictable enough that we consider objects to be persistent, and can we make correct inferences about their properties? Hopefully, solutions to all of these questions will emerge in the near future.


                              CONCLUSION

Finally, we shouldn't forget that touch and physical interaction are among the fundamental ways in which we come to understand our world and to effect changes in it. This is true on a developmental as well as an evolutionary level. For early primates to survive in a physical world, as Frank Wilson suggested, "a new physics would eventually have to come into this brain, a new way of registering and representing the behavior of objects moving and changing under the control of the hand. It is precisely such a representational system—a syntax of cause and effect, of stories, and of experiments, each having a beginning, a middle, and an end—that one finds at the deepest levels of the organization of human language."

Our efforts to communicate information by rendering how objects feel through haptic technology, and the excitement in our pursuit, might reflect a deeper desire to speak with an inner, physically based language that has yet to be given a true voice.

                             BIBLIOGRAPHY
