Development of an Augmented Reality Force Feedback Virtual Surgery Training Platform


Ruei-Jia Chen1, Hung-Wei Lin2, Yeong-Hwa Chang1,*, Chieh-Tsai Wu3,4, and Shih-Tseng Lee3,4

1Department of Electrical Engineering, Chang Gung University, Taiwan
2Department of Electrical Engineering, Lee-Ming Institute of Technology, Taiwan
3Department of Neurosurgery, Chang Gung Memorial Hospital, Taiwan
4Medical Augmented Reality Research Center, Chang Gung University, Taiwan

(Received 31 March 2011; Published on line 1 September 2011)
*Corresponding author:
DOI: 10.5875/ausmt.v1i1.102

Abstract: In order to develop a virtual surgery training platform with a force feedback function so as to facilitate the training of new medical personnel, this study first had to establish a virtual environment, and then implement interactions involving vision and tactile sensations. The system's augmented reality function modules include the establishment of an augmented reality environmental space, image loading and model establishment, and force feedback modules, as well as the required design for collision detection, object parameter settings, and controller functions. Apart from combining a virtual environment with force feedback and establishing diverse force feedback modules, this project also overcomes the single-point sensor restriction of most force feedback hardware, and establishes a tactile cutting function. In addition to the establishment of force feedback modules, the project further employs the conservation of energy principle in the design of the energy estimator and controller, and completes the design of a stable virtual surgery training platform.

Keywords: augmented reality; force feedback

Introduction

Augmented reality involves the use of a computer-simulated real or virtual environment in conjunction with a user interface to develop an interactive device. The goal of augmented reality is to give users a feeling of great realism [1]. Augmented reality implies an advanced human-machine interface employing multiple sensory pathways, such as vision and touch, to achieve real-time simulation and interaction [2].

Simulated environments incorporating virtual reality have often been used for military applications in the past. Their advantages include the ability to use existing equipment to simulate various types of situations, the ability to simulate repeated operations at low cost and with minimal risk, and the ability to provide professional training to new personnel. Such systems employ real-time 3-D models, force feedback, position tracking, and other auxiliary visual and auditory technologies to simulate human sensations. Users can obtain highly realistic experiences through the manipulation of a computer input device relying on their intuition and muscular movements [3]. Because of the potential risks involved in clinical surgery, if a highly reliable virtual surgery platform incorporating physiological testing information such as MRI and CT images could be developed to present virtual medical environments, surgeons would be able to repeatedly simulate possible situations prior to actual surgery, and thereby achieve optimal medical and pre-surgical training results [4-10].

Research on virtual reality in conjunction with force feedback has flourished over the last few years as researchers have sought to enhance human-machine interactions. The focal points of recent international research and discussions have included virtual tactile feedback frameworks, force feedback simulation, improvement of the high-frequency transient response caused by collisions, improvement of force feedback stability, and the use of modular tactile description algorithms to improve penetration response involving hard objects [10-12]. In the field of virtual surgery, surgical actions generally consist of the four categories of cutting, drilling, puncturing, and pulling. Although these actions have different behavioral models, they can basically be described as combinations of such physical characteristics as friction, viscosity, dynamic friction, damping, and elasticity. When it has been confirmed that objects have collided, a virtual system will calculate the change in the objects' motion after the collision, along with the magnitude and direction of the force at the time of the collision, and transmit this information to a force feedback device, letting the user feel the force of the collision. In addition, all the basic rules of the various types of forces can be derived using Hooke's Law. Departing from the many researchers investigating stability, Kuchenbecker, Fiene, and Niemeyer proposed another approach to handling force feedback in virtual environments: they focused on the high-frequency transient response that occurs when something touches an object, and suggested three methods for simulating real-world collisions with both soft and hard objects [13].
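Where these physical characteristics are combined, a basic contact force takes the form of a Hooke spring plus a damper. The sketch below is our own illustration, not code from any cited system; the coefficients k and b are arbitrary values chosen so the example is exact:

```python
# Illustrative spring-damper contact force: Hooke's-law elasticity plus
# damping, the building blocks noted above. k and b are arbitrary values.
def contact_force(penetration_depth, velocity, k=200.0, b=5.0):
    if penetration_depth <= 0.0:
        return 0.0                                  # no contact, no force
    return -k * penetration_depth - b * velocity    # push back, resist motion

f = contact_force(0.25, 0.5)   # penetrating and still moving inward
print(f)                       # -52.5: spring term (-50.0) plus damping (-2.5)
```

In a full simulator each tissue type would carry its own k and b, which is how the same formula yields distinct tactile feels.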

As far as medical applications are concerned, researchers Gang Song and Shuxiang Guo of Japan's Kagawa University proposed a system consisting of an operating platform with a pair of arms; this system employs mathematical spring force and damping force feedback modules to provide a virtual reality force feedback mechanism, and can be applied to arm treatment and rehabilitation work, increasing verisimilitude and the effectiveness of training [14].

In recent years, the focus has increasingly shifted to highly-efficient and replicable virtual-reality operating environments, and virtual reality has been widely applied to medical applications. The focal points of recent research in this field have included stability analyses of single-input force feedback devices, force feedback in conjunction with stress deformation of objects, and coordinated dual arm motion mechanisms with force feedback [15-18].

System framework and development environment

System framework

The system framework for the development of this force feedback based virtual surgery training platform is shown in Figure 1. The user employs an input device and an output device to communicate with the virtual environment; the input device is a haptic device and the output device consists of display equipment. A computer calculates the haptic force, ensuring that the haptic device and display equipment remain mutually consistent. The haptic system cannot operate on its own: it requires a 3-D visual display environment so that users enter a virtual environment with simulated tactile sensations. The first issue involving the haptic and visual sides was the refresh rate. Human vision requires a refresh rate of 30-60 frames per second for dynamic images to ensure that motion appears continuous. In contrast, the minimum refresh rate for tactile sensations is 1,000 times per second; otherwise discontinuity can be perceived. Because of this huge gap in refresh rates, the program must process the two threads in parallel to ensure that the needs of each side are met. The overall software framework is therefore divided into a haptic side and a visual side.
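The refresh-rate gap above can be made concrete with a small simulation of our own (not the authors' code): stepping one simulated second in 1 ms ticks and counting the updates each side would perform.

```python
# Why the haptic and visual sides must run as separate threads: simulate one
# second in 1 ms ticks, with the haptic loop updating every tick (1 kHz) and
# the visual loop every 33 ticks (~30 fps).
HAPTIC_PERIOD_MS = 1    # 1,000 updates/s minimum for smooth force feedback
VISUAL_PERIOD_MS = 33   # ~30 frames/s is enough for continuous motion

haptic_updates = visual_updates = 0
for t_ms in range(1000):                 # one simulated second
    if t_ms % HAPTIC_PERIOD_MS == 0:
        haptic_updates += 1              # force calculation + device output
    if t_ms % VISUAL_PERIOD_MS == 0:
        visual_updates += 1              # scene redraw
print(haptic_updates, visual_updates)    # 1000 31
```

A single loop serving both sides would either starve the haptic device or waste most redraws, which is why the two sides run as parallel threads.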

Figure 1. Framework of the haptic side and visual side.

In order to achieve an even more realistic user environment and conform to current research trends, the basic system framework also encompassed hand manipulations and a head-mounted display instead of a flat-panel display (see Figure 2).

Figure 2. System framework including two arms and head-mounted display.

System hardware and processes

In order to create an interactive virtual reality system, the system hardware required a computer server, display, and haptic equipment. The computer was responsible for core calculations, and was linked with the display and haptic equipment. The display presented image outputs from the computer. With regard to force feedback, the computer performed calculations on haptic parameters, and transmitted this data to the haptic equipment for output to the user. A schematic diagram of the system is shown in Figure 3.

Figure 3. Schematic diagram of integrated tactile virtual reality system.
Figure 4. Phantom Omni device [19].

System hardware needs included two haptic devices (Phantom Omni) capable of moving with six degrees of freedom and providing a maximum output force of 3.3 N, and one PC with a Pentium® D 2.66 GHz dual-core CPU, an ATI Radeon X1300 Series display card, and 1 GB of DDR RAM.

In order to simplify the hardware development process and achieve the goals of effective force feedback and rapid system development, we utilized a Phantom Omni haptic device (see Figure 4) as the system's input device, and employed force feedback from this device to simulate the tactile feel of real objects. The following is an overview of the relevant functions:

  • The device's mechanical arm served as the manipulator used in this study.
  • The Phantom Omni device used an IEEE-1394 FireWire port to communicate with the computer, and could be connected in series with other haptic devices, such as multiple Phantom Omni devices.

The arms could rotate in six directions. In the program, the motions of the mechanical arms were represented as ±X in the left-right direction, ±Y in the up-down direction, and ±Z in the front-back direction. Due to the design of the arms, the angle of rotation and arm movement range were restricted to approximately 160 mm in the left-right direction, 120 mm in the up-down direction, and 70 mm in the front-back direction. Forces in all three directions could be output, and the maximum output force was 3.3 N.
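As a small illustrative helper (our own assumption, not part of the device API), a commanded proxy position can be clamped to the approximate workspace just described, taking each range as centered on the origin:

```python
# Clamp a commanded position (mm) to the approximate Phantom Omni workspace
# quoted above; the half-ranges (+/-80, +/-60, +/-35 mm) are our assumption
# that the workspace is centered on the origin.
LIMITS_MM = {"x": 80.0, "y": 60.0, "z": 35.0}

def clamp_to_workspace(pos):
    return {axis: max(-lim, min(lim, pos[axis]))
            for axis, lim in LIMITS_MM.items()}

print(clamp_to_workspace({"x": 100.0, "y": -10.0, "z": 50.0}))
# {'x': 80.0, 'y': -10.0, 'z': 35.0}
```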

The force feedback system had to be coupled with a 3-D visual display environment to ensure that users experienced virtual reality. As noted above, the large gap between the visual refresh rate (30-60 frames per second) and the minimum tactile refresh rate (1,000 updates per second) meant that the program had to process the two threads in parallel to ensure that the needs of each side were met.

Augmented reality force feedback module design

Puncturing model design

In order to simulate puncturing behavior, it was first necessary to perform collision detection for two objects. When implemented on the computer, the locations of the equipment had to be determined at all times and represented in a coordinate system centered on the equipment. Because the two objects have different coordinate systems, these coordinates could not be used directly to detect collisions between the objects. Instead, two sets of coordinate conversion operations were needed to convert the coordinates into the volume data object's coordinate system. The HU (Hounsfield unit) values at the converted locations could then be looked up and used to determine whether the tool had entered the target tissue to be punctured. In addition, the corresponding force feedback had to be generated based on preset tissue characteristics.
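The two-step conversion and HU lookup can be sketched as follows. This is a hedged illustration: the real conversions use full transformation matrices, whereas here they are reduced to a simple offset and scale, and the threshold BONE_HU is our own placeholder:

```python
# Sketch of the collision test described above: convert the tool-tip position
# from device coordinates to world coordinates, then to volume-object voxel
# indices, and look up the HU value there. Offset/scale stand in for the real
# transformation matrices; BONE_HU is an illustrative threshold.
def to_world(p_device, offset=(10.0, 0.0, 0.0)):
    return tuple(c + o for c, o in zip(p_device, offset))

def to_volume(p_world, voxel_mm=1.0):
    return tuple(int(round(c / voxel_mm)) for c in p_world)

def tip_hu(volume, p_device):
    i, j, k = to_volume(to_world(p_device))
    depth, rows, cols = len(volume), len(volume[0]), len(volume[0][0])
    if not (0 <= i < depth and 0 <= j < rows and 0 <= k < cols):
        return None                        # outside the volume: no HU value
    return volume[i][j][k]

BONE_HU = 300                              # illustrative cranium threshold
vol = [[[0] * 3 for _ in range(3)] for _ in range(3)]
vol[1][1][1] = 700                         # one "bone" voxel
hit = tip_hu(vol, (-9.0, 1.0, 1.0))        # maps to voxel (1, 1, 1)
print(hit is not None and hit >= BONE_HU)  # True: tool has entered bone
```

The bounds check mirrors the requirement, noted for the cutting algorithm below, that converted coordinates must not exceed the volume's extent.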

Since this study sought to simulate the puncturing of the cranium, which is composed of rigid tissue, the effect of soft tissue deformation was not taken into consideration. After contact, the simulated puncture force was modeled as a type of friction, as given in Equation (1), where b is the cranial damping coefficient and v is the instantaneous speed of the tool object. Note that, in accordance with the laws of physics, friction acts in the direction opposite to the direction of travel.

f = -b·v                                        (1)
The puncturing force feedback algorithm shown in Figure 5 was established based on the above description.
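Equation (1)'s damping friction can be sketched as follows (our illustration; the damping coefficient value is arbitrary, not a tissue parameter from the paper):

```python
# Minimal sketch of Equation (1): the puncture force is a damping friction
# opposing the instantaneous tool velocity. b = 50.0 is illustrative only.
def puncture_force(velocity, b=50.0, in_tissue=True):
    if not in_tissue:
        return (0.0, 0.0, 0.0)                 # no contact, no feedback force
    return tuple(-b * v for v in velocity)     # f = -b * v, opposing motion

f = puncture_force((0.0, 0.0, 0.25))   # tool advancing along +z
print(f)                               # force points along -z, against travel
```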

Figure 5. Flowchart of the puncturing force feedback algorithm process.

Cutting model design

Puncturing behavior involves collision detection for a single-point contact between objects, followed by force feedback and image erasure to display the destructive effect. In contrast, cutting makes a cut along a line or a surface. Compared with the puncturing algorithm, the difficulty lies in determining contact along the entire cutting edge and the corresponding force feedback produced at each contact point. Because tool rendering was performed using 3D MAX software and the tool's graphic loading program contains a tetrahedral grid that is not equally partitioned, collision detection of the tool's cutting surface requires the re-establishment of tool object coordinates; the coordinates of the newly-produced points are then used to perform collision detection. This paper employs vector characteristics, using the force feedback sensor on the equipment itself to determine the tool object coordinates and drawing vectors to construct a straight-line parametric form. This form is used to derive the coordinates of all points on the tool's cutting surface; these coordinates are then converted to the corresponding volume data object coordinates, which yield the corresponding HU values. The HU values are in turn used to perform collision detection. When employing this method, one must ensure that the converted volume data object coordinates of the points on the cutting surface do not exceed the size of the volume data object; otherwise no corresponding HU value can be found and the system will display an error message.

In addition, due to the limitations of the equipment, force could only be generated at the force feedback sensor on the front edge of the equipment. As a result, this study employed a method of detecting the direction of the collision after it occurred at the contact point, and then using identical parameters to generate force at the front-edge force feedback sensor. If a tool is relatively long, this method may lead to an erroneous tactile sensation.

Figures 6-8 show schematic diagrams of the cutting algorithm. The diagram in Figure 6 shows that the straight-line parametric form has not yet been used to modularize the cutting surface of the knife. Because of this, collision detection and production of force feedback can only occur at the red spot on the front edge of the knife. Here destruction of the volume data object can be performed to implement surface cutting. If straight-line parametric modularization of the tool cutting surface has not yet been performed, the tool will first puncture the surface and then perform internal cutting. Because the body of the knife does not have a collision detection function, this part will not produce visual erasure, and destruction will only occur along the trajectory of the tip as it enters the surface. If cutting is performed at this time, only the part of the tip in contact with the surface will sense force feedback; the body of the tool will not produce force feedback. A schematic diagram illustrating this situation is shown in Figure 7.

Figure 6. Schematic diagram of surface cutting.
Figure 7. Invasive cutting (does not include straight-line parametric modularization of tool cutting surface).
Figure 8. Schematic diagram of cutting force feedback after penetration.

To explain this situation in more concrete terms, if the tool tip has already penetrated the tissue, but straight-line parametric modularization of the cutting surface has not been performed, there will be no visible destruction due to cutting, and no force feedback will be produced. If the cutting surface has been modularized, even if the tool tip has not come into contact with the volume data object, collision detection can still be performed using a contact point on the body of the tool, so that destruction of the object and production of force feedback can proceed. As has been described above, force feedback at this time will be subject to the limitations of the equipment, and force can only be generated at the tip of the tool based on its direction of motion (see schematic diagram in Figure 8). Figure 9 is a flowchart of the cutting process.

In order to determine the tactile sensation of the knife handle during cutting, this study used vector characteristics and the straight-line parametric form to estimate cutting behavior. The following method was employed:

Let the tool tip coordinates be (x0, y0, z0) and the tip vector be (u, v, w). The parametric form in (2) can now be established:

x = x0 + u·t,  y = y0 + v·t,  z = z0 + w·t                (2)
With i a known spacing parameter along the straight line and L the tool length, the distance formula yields the coordinate parameter t for each point, as in Equation (3):

t = k·i / √(u² + v² + w²)                (3)
Here k is the index of the point.

Substitute t back into the straight-line parametric form to obtain the coordinates of the kth point. The solution can now be obtained:

xk = x0 + u·t,  yk = y0 + v·t,  zk = z0 + w·t                (4)
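The point-generation steps above can be sketched as follows. This is our reading of Equations (2) and (3), with the spacing i taken as the tool length divided by the number of sample points, which is an illustrative assumption:

```python
import math

# Sample points along the tool's cutting edge from the tip coordinates
# (x0, y0, z0) and tip vector (u, v, w), using the straight-line parametric
# form. Each point would then be converted to volume coordinates for the
# HU-based collision test.
def edge_points(tip, vec, tool_length, n_points):
    u, v, w = vec
    norm = math.sqrt(u * u + v * v + w * w)
    spacing = tool_length / n_points          # our stand-in for parameter i
    pts = []
    for k in range(1, n_points + 1):
        t = k * spacing / norm                # parameter t for the kth point
        pts.append((tip[0] + u * t, tip[1] + v * t, tip[2] + w * t))
    return pts

pts = edge_points((0.0, 0.0, 0.0), (0.0, 0.0, 2.0), tool_length=10.0, n_points=5)
print(pts[0], pts[-1])   # nearest sample and the far end of the cutting edge
```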

Estimator and controller design

When facing the actions and visual images associated with complex surgical procedures, the system may be prone to instability under certain circumstances. For instance, problems such as signal sampling errors may occur between the discrete time system (the virtual environment) and the continuous time system (the operating end of the force feedback arm) due to quantization [20], data sampling [21, 22], numerical integration methods (forward rule, backward rule, trapezoidal rule) [23, 24], or time delay [25-27]. To eliminate instabilities caused by these phenomena, this study sought to design a tactile controller employing the principle of conservation of energy proposed by James Prescott Joule. By performing energy estimation and compensation, this method can achieve system stability.

Figure 9. Cutting algorithm flowchart.

The passivity theorem proposed by Hannaford (2002) uses energy dissipation as the basic method of analyzing stability, and can be used to analyze the stability of nonlinear systems. This method originated from the concept of circuits and networks, and examines energy relationships of system inflows and outflows. The advantage of this method is that it does not require the virtual environment parametric model to be known; it can achieve control by estimation of system energy in the time domain, and therefore meets the needs of this study.

A typical single-port network system is shown in Figure 10. Here f is the applied force, in N, and v is the velocity, in m/s. By analogy with circuit systems, f is equivalent to the voltage and v to the current. Passivity implies dissipative characteristics. In a simple circuit, such as one containing only a resistance and an external applied voltage, the entire system continuously dissipates energy and produces none of its own; such a circuit can therefore be considered a passive circuit system. Energy dissipation is defined as follows.

Definition 1: The single-port network in Figure 10 possesses initial energy E(0), and the system will be an energy dissipating system, expressed as

E(t) = E(0) + ∫₀ᵗ f(τ)·v(τ) dτ ≥ 0,  for all t ≥ 0                (5)
where f is force (N).

Figure 10. A single-port network system.

Definition 2: In the case of a discrete time system, if the system possesses initial energy E(0), then the system will be an energy dissipating system with sampling period ΔT, expressed as

E(k) = E(0) + ΔT·Σ_{j=0}^{k} f(j)·v(j) ≥ 0,  for all k ≥ 0                (6)
Figure 11 is a schematic diagram of an augmented reality system with an added controller. Here fh is the applied force at the operating end, vh is the velocity of the operating end, fe is the applied force in the virtual environment, ve is the velocity in the virtual environment after the equipment makes contact, α is the energy dissipation controller, fc is the resultant of the controller compensation and virtual environment, and Eobsc is the system energy estimated by the estimator.
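One tick of this control loop can be sketched in one dimension as follows. This is our reading of the time-domain passivity scheme the text describes; the names fe, ve, and alpha follow the text's symbols, while the update details are assumptions of this sketch:

```python
DT = 0.001   # 1 kHz haptic sampling period, in seconds

# One haptic tick of a 1-D energy estimator plus dissipation controller:
# track the sampled energy; whenever it would go negative (the virtual
# environment is generating energy), inject just enough damping alpha
# to keep the estimate non-negative.
def passivity_step(state, fe, ve):
    alpha = 0.0
    e_next = state["E"] + DT * fe * ve        # energy if left uncompensated
    if e_next < 0.0 and ve != 0.0:
        alpha = -e_next / (DT * ve * ve)      # dissipate the excess energy
    fc = fe + alpha * ve                      # compensated feedback force
    state["E"] += DT * fc * ve                # estimator update
    return fc

state = {"E": 0.0}
fc = passivity_step(state, fe=-2.0, ve=0.25)  # an energy-generating sample
print(fc, state["E"])                         # force damped, energy kept >= 0
```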

Figure 11. Block diagram of the haptic virtual environment control system.

Relying on the single-port network energy concept, the total energy of the virtual environment system estimated ahead of the controller must be greater than or equal to zero for the system to be a stable, energy-dissipating system. A system energy estimator can consequently be designed as follows:

Eobsc(k) = ΔT·Σ_{j=0}^{k} fc(j)·ve(j)                (7)
We can rely on definition 2 to obtain the following controller design equation:

α(k) = -Eobsc(k) / (ΔT·ve(k)²)  if Eobsc(k) < 0;  α(k) = 0 otherwise                (8)
Finally, the true feedback force can be obtained based on the calculated compensation amount employing Equation (9):

fc(k) = fe(k) + α(k)·ve(k)                (9)
The foregoing discussion addresses a single-dimensional virtual environment system. This paper's actual environment is a three-dimensional augmented reality system. When going from a one-dimensional system to a three-dimensional system, the estimator must calculate energy along the three global coordinate axes, as in Equation (10):

Eobsc,i(k) = ΔT·Σ_{j=0}^{k} fc,i(j)·ve,i(j),  i ∈ {x, y, z}                (10)
The chief difference between energy compensation in one-dimensional and multi-dimensional systems arises in multi-dimensional environments: there, system energy compensation must take the direction of the compensating force into consideration; otherwise the operator's perception of the shape of the virtual object's surface may be incorrect. As shown in Figure 12, the compensation of the feedback force along a single axis does not share the direction of the actual feedback force. Because of this, the direction of the resultant will change under the influence of the compensating force, causing the applied force to distort the user's impression of the object's shape. The vector projection theorem is therefore used to project the compensatory force onto the direction of the original applied force; this does not diminish the compensation actually produced in the system, because the component of the compensatory force perpendicular to the applied force has no effect on the system. This characteristic can thus be used to revise the direction of the compensatory force, so that the shape of the object feels realistic without affecting the system's actual compensatory effect. A schematic diagram of the revision of the compensatory force is shown in Figure 13.
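The projection step described above can be sketched as follows (our own code; the force values are illustrative):

```python
# Project the compensatory force onto the direction of the original feedback
# force, so that compensation cannot rotate the resultant and distort the
# perceived surface shape.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def project_compensation(f_alpha, f_e):
    denom = dot(f_e, f_e)
    if denom == 0.0:
        return (0.0, 0.0, 0.0)         # no feedback force to align with
    s = dot(f_alpha, f_e) / denom      # scalar projection coefficient
    return tuple(s * c for c in f_e)

f_e = (0.0, 2.0, 0.0)                  # original virtual-environment force
f_alpha = (1.0, 1.0, 0.0)              # raw compensatory force
f_proj = project_compensation(f_alpha, f_e)
print(tuple(a + b for a, b in zip(f_e, f_proj)))   # final output force
```

Only the component of f_alpha along f_e survives; the perpendicular component, which would have bent the resultant, is discarded.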

Figure 12. Schematic diagram of the effect of compensatory force direction on feedback force.
Figure 13. Schematic diagram of the revision of feedback force in the virtual environment.

The actual controller feedback force must be revised using the vector projection method, as in Equation (11):

fα' = ((fα · fe) / (fe · fe))·fe,  where fα = α·ve                (11)
The final output of the controller feedback force is given by Equation (12):

fc = fe + fα'                (12)
Interaction between the visual module and virtual environment

The medical images employed in this study consisted of patient CT images provided by the Brain Augmented Reality Research Center at Chang Gung Memorial Hospital in Linkou. The raw CT images (composed of several dozen to several hundred DICOM images) were synthesized using the image processing software Amira (as shown in Figure 14), and output as raw data files that could be identified by the program. These files were then read by the program's file reader, and the program's image conversion module converted the files to 3-D image data that could be visualized in a virtual environment (see Figures 14 & 15).
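A hypothetical reader for such a raw file is sketched below; the 16-bit signed voxel type and the (slices, rows, cols) shape are assumptions, since a real Amira export records these separately:

```python
import array, os, tempfile

# Read a flat stream of HU voxels from a raw file and reshape it into a
# [slice][row][col] structure. Typecode "h" (16-bit signed) and the shape
# argument are our assumptions about the export format.
def load_raw_volume(path, shape, typecode="h"):
    depth, rows, cols = shape
    data = array.array(typecode)
    with open(path, "rb") as fp:
        data.fromfile(fp, depth * rows * cols)
    return [[list(data[(z * rows + y) * cols:(z * rows + y) * cols + cols])
             for y in range(rows)] for z in range(depth)]

# usage: round-trip a tiny 1x2x3 volume through a temporary raw file
raw = array.array("h", [0, 100, 700, -1000, 40, 60])
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    raw.tofile(tmp)
vol = load_raw_volume(tmp.name, (1, 2, 3))
os.remove(tmp.name)
print(vol[0][1])   # second row of the first (and only) slice
```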

Figure 14. DICOM medical images loaded by Amira.
Figure 15. Reconstructed image after the program has loaded the raw data.

In the virtual environment, the two loaded 3-D images will interact with each other, and must meet the following two criteria: first, the two objects must be in the same coordinate system; second, the two objects must have a basis for collision in the form of a contact point. After a tool object has been loaded into the program, its location lies within the global coordinates. After medical imaging volume data has been loaded into the program, it possesses volume object coordinates, and each point in the volume coordinates possesses the characteristics of an object. For the tool object to interact with the medical imaging object, coordinate conversion must be performed between them. The tool equipment coordinates of the Phantom Omni haptic equipment were obtained, and a first coordinate conversion converted the tool coordinates to global coordinates. The global coordinates were then converted using the volume data object's coordinate conversion array. After placing the objects in the same coordinate system, the coordinates were used to obtain the corresponding HU values, which revealed whether the current tool was in contact with the target volume object and was puncturing or cutting it.

System realization and conclusions

The virtual reality platform developed in this study was used to load clinical CT images, which were used to create three-dimensional images. The realized system was used to perform cutting of a liver and a cranium. The liver cutting is shown in Figures 16 and 17. Figure 18 shows the cranium when the surgical tool has not yet contacted cranial tissue, Figure 19 shows the cranium after the tool has come into contact with the tissue, and Figure 20 shows the cranium after the tool has completed cutting of the cranial tissue. Apart from the realization of the force feedback module, this study also successfully incorporated dual mechanical arms and visual presentation via a head-mounted display. As shown in Figure 21, users may freely toggle between display images on a flat-panel LCD display (Figure 21-right) or on the head-mounted display screen (Figure 21-left).

This study successfully combined haptic and visual functions in a virtual surgery training platform prototype with a force feedback function. The study also used vector characteristics to establish a straight-line parametric form and generate virtual collision points, allowing a cutting function module to be developed despite the haptic equipment's restriction to single-point sensors. In order to avoid instability in the virtual environment, this study employed the principle of the conservation of energy to estimate changes in the virtual environment system, and designed a multi-dimensional environmental controller to ensure system stability. Even more diverse force feedback modules can be developed in the future, including clamping and pulling modules and modules in which two arms move in coordination. Moreover, the addition of network functions will enable remote virtual surgical training via the Internet.

Figure 16. Realization of liver cutting (front).
Figure 17. Realization of liver cutting (rear).
Figure 18. Tool has not yet contacted cranial tissue.
Figure 19. Tool is cutting cranial tissue.
Figure 20. Tool has removed cranial tissue.
Figure 21. Two-arm manipulation and head-mounted display.

Acknowledgments

This work was supported by the Ministry of Economic Affairs, Taiwan, under the Technology Development Program for Academia in the project “Developing a Brain Medical Augment Reality System” with Grant No. 99-EC-17-A-19-S1-035.

References

  1. C. W. M. Leão, J. P. Lima, V. Teichrieb, E. S. Albuquerque, and J. Keiner, "Altered reality: Augmenting and diminishing reality in real time," in IEEE Virtual Reality Conference (VR), Singapore, 2011, pp. 219-220.
    doi: 10.1109/VR.2011.5759477
  2. C. V. Hurtado, A. R. Valerio, and L. R. Sanchez, "Virtual reality robotics system for education and training," in IEEE Electronics, Robotics and Automotive Mechanics Conference (CERMA), Cuernavaca, Mexico, 2010, pp. 162-167.
    doi: 10.1109/CERMA.2010.98
  3. A. Kotranza and B. Lok, "Virtual human + tangible interface = mixed reality human: An initial exploration with a virtual breast exam patient," in IEEE Virtual Reality Conference (VR), Reno, Nevada, USA, 2008, pp. 99-106.
    doi: 10.1109/VR.2008.4480757
  4. M. Vankipuram, K. Kahol, A. Ashby, J. Hamilton, J. Ferrara, and M. Smith, "Virtual reality based training to resolve visio-motor conflicts in surgical environments," in IEEE International Workshop on Haptic Audio visual Environments and Games (HAVE), Ottawa, Ontario, Canada, 2008, pp. 7-12.
    doi: 10.1109/HAVE.2008.4685290
  5. J. Ackerman. (2000). Ultrasound visualization research [Online]. Available:
  6. J. Dankelman, "Surgical robots and other training tools in minimally invasive surgery," in 2004 IEEE International Conference on Systems, Man and Cybernetics, The Hague, The Netherlands, 2004, pp. 2459-2464.
    doi: 10.1109/ICSMC.2004.1400699
  7. J. Pettersson, K. L. Palmerius, H. Knutsson, O. Wahlstrom, B. Tillander, and M. Borga, "Simulation of patient specific cervical hip fracture surgery with a volume haptic interface," IEEE Transactions on Biomedical Engineering, vol. 55, no. 4, pp. 1255-1265, 2008.
    doi: 10.1109/TBME.2007.908099
  8. K. D. Reinig, C. G. Rush, H. L. Pelster, V. M. Spitzer, and J. A. Heath, "Real-time visually and haptically accurate surgical simulation," Studies in Health Technology and Informatics, vol. 29, pp. 542-545, 1996.
  9. R. Konietschke, A. Tobergte, C. Preusche, P. Tripicchio, E. Ruffaldi, S. Webel, and U. Bockholt, "A multimodal training platform for minimally invasive robotic surgery," in 19th IEEE International Symposium on Robot and Human Interactive Communication (Ro-Man), Viareggio, Italy, 2010, pp. 422-427.
    doi: 10.1109/ROMAN.2010.5598608
  10. C. Basdogan, S. De, J. Kim, M. Manivannan, H. Kim, and M. A. Srinivasan, "Haptics in minimally invasive surgical simulation and training," IEEE Computer Graphics and Applications, vol. 24, no. 2, pp. 56-64, 2004.
    doi: 10.1109/MCG.2004.1274062
  11. A. M. Tahmasebi, P. Abolmaesumi, D. Thompson, and K. Hashtrudi-Zaad, "Software structure design for a haptic-based medical examination system," in IEEE International Workshop on Haptic Audio Visual Environments and their Applications, Ottawa, Ontario, Canada, 2005.
    doi: 10.1109/HAVE.2005.1545658
  12. M. A. Otaduy and M. C. Lin, "A modular haptic rendering algorithm for stable and transparent 6-DOF manipulation," IEEE Transactions on Robotics, vol. 22, no. 4, pp. 751-762, 2006.
    doi: 10.1109/TRO.2006.876897
  13. K. J. Kuchenbecker, J. Fiene, and G. Niemeyer, "Improving contact realism through event-based haptic feedback," IEEE Transactions on Visualization and Computer Graphics, vol. 12, no. 2, pp. 219-230, 2006.
    doi: 10.1109/TVCG.2006.32
  14. G. Song and S. Guo, "Development of an active self-assisted rehabilitation simulator for upper limbs," in The Sixth World Congress on Intelligent Control and Automation (WCICA), Dalian, China, 2006, pp. 9444-9448.
  15. A. Bardorfer, M. Munih, A. Zupan, and A. Primozic, "Upper limb motion analysis using haptic interface," IEEE/ASME Transactions on Mechatronics, vol. 6, no. 3, pp. 253-260, 2001.
  16. C. Youngblut, E. J. Rob, H. N. Sarah, A. W. Ruth, and A. W. Craig. (1996). Review of virtual environment interface technology [Online]. Available:
  17. B. R. Brewer, M. Fagan, R. L. Klatzky, and Y. Matsuoka, "Perceptual limits for a robotic rehabilitation environment using visual feedback distortion," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 13, no. 1, pp. 1-11, 2005.
    doi: 10.1109/TNSRE.2005.843443
  18. Y. Tao, H. Hu, and H. Zhou, "Integration of vision and inertial sensors for home-based rehabilitation," in 2nd Workshop on Integration of Vision and Inertial Sensors (InerVis), Barcelona, Spain, 2005.
  19. SensAble Technologies, Inc. Available:
  20. N. Diolaiti, G. Niemeyer, F. Barbagli, and J. K. Salisbury, "A criterion for the passivity of haptic devices," in IEEE International Conference on Robotics and Automation (ICRA), Barcelona, Spain, 2005, pp. 2452-2457.
    doi: 10.1109/ROBOT.2005.1570480
  21. A. Jazayeri and M. Tavakoli, "Stability analysis of sampled-data teleoperation systems," in 49th IEEE Conference on Decision and Control (CDC), Atlanta, Georgia, USA, 2010, pp. 3608-3613.
    doi: 10.1109/CDC.2010.5718117
  22. J. H. Ryu, Y. S. Kim, and B. Hannaford, "Sampled and continuous time passivity and stability of virtual environments," in IEEE International Conference on Robotics and Automation (ICRA), Taipei, Taiwan, 2003, pp. 822-827 vol. 1.
    doi: 10.1109/ROBOT.2003.1241695
  23. N. Diolaiti, G. Niemeyer, F. Barbagli, and J. K. Salisbury, "Stability of haptic rendering: Discretization, quantization, time delay, and Coulomb effects," IEEE Transactions on Robotics, vol. 22, no. 2, pp. 256-268, 2006.
    doi: 10.1109/TRO.2005.862487
  24. J. H. Ryu, J. H. Kim, D. S. Kwon, and B. Hannaford, "A simulation/experimental study of the noisy behavior of the time domain passivity controller for haptic interfaces," in IEEE International Conference on Robotics and Automation (ICRA), Barcelona, Spain, 2005, pp. 4321-4326.
    doi: 10.1109/ROBOT.2005.1570785
  25. K. Hertkorn, T. Hulin, P. Kremer, C. Preusche, and G. Hirzinger, "Time domain passivity control for multi-degree of freedom haptic devices with time delay," in IEEE International Conference on Robotics and Automation (ICRA), Anchorage, Alaska, 2010, pp. 1313-1319.
    doi: 10.1109/ROBOT.2010.5509148
  26. J. H. Ryu, D. S. Kwon, and B. Hannaford, "Stable teleoperation with time-domain passivity control," IEEE Transactions on Robotics and Automation, vol. 20, no. 2, pp. 365-373, 2004.
    doi: 10.1109/TRA.2004.824689
  27. J. Yoneyama, "Robust stability and stabilization for uncertain discrete-time fuzzy systems with time-varying delay," in 7th Asian Control Conference (ASCC), Hong Kong, 2009, pp. 1022-1027.



Copyright © 2011-2018 AUSMT ISSN: 2223-9766