Advanced Driving Assistance Systems for an Electric Vehicle

Pau Muñoz-Benavent*, Leopoldo Armesto, Vicent Girbés, J. Ernesto Solanes, Juan Dols, Adolfo Muñoz, and Josep Tornero

Institute of Design and Manufacture, Universitat Politècnica de València, Spain

(Received 7 October 2012; Accepted 25 October 2012; Published on line 1 December 2012)
*Corresponding author: pmunyoz@idf.upv.es
DOI: 10.5875/ausmt.v2i4.169

Abstract: This paper describes the automation of a Neighborhood Electric Vehicle (NEV) and the embedded distributed architecture for implementing an Advanced Driving Assistance System (ADAS) with haptic, visual, and audio feedback in order to improve safety. For the automation, original electric signals were conditioned, and mechanisms for actuation and haptic feedback were installed. An embedded distributed architecture was chosen based on two low-cost boards and implemented under a Robot Operating System (ROS) framework. The system includes features such as collision avoidance and motion planning.

Keywords: Advanced Driving Assistance Systems; Intelligent Vehicles; Vehicle Automation

Introduction

Based on statistics reported by Mu [1], 95% of traffic accidents are related to human factors, while approximately 70% are directly related to human error. In this sense, driving is a process that demands enough skill from the driver to adapt to environmental changes. Nowadays, one may find many automated driving solutions available for commercial vehicles. However, such solutions lack the flexibility to adapt, and to compete at low prices, that would let them become part of a mass-industrialized solution. The current trend in vehicle automation is to develop solutions where the driver and an intelligent system coexist and interact with a common aim: to improve safety. This research is one of the most relevant areas of interest not only in the automotive sector, but also in sectors related to forklift, bus, and wheelchair manufacturers, among others.

In this paper, a Neighborhood Electric Vehicle (NEV) automation process is presented, including its particular hardware and software architecture, as well as an Advanced Driving Assistance System (ADAS) with haptic, visual, and audio feedback implemented in the vehicle to avoid collisions and improve safety.

Related Work

There are a variety of technologies implementing ADAS, most of which are in the field of automobile vehicles. Some of the most significant are the Anti-Lock Braking System (ABS) [2], Adaptive Cruise Control (ACC) [3], Adaptive Headlights (AH) [4], Lane Change Assistant (LCA) [5] / Blind Spot Detection (BSD) [6], Driver Drowsiness Monitoring and Warning (DDMW) [7], Electronic Brake Assist System (EBAS) [8], Electronic Stability Control (ESC) [9], Gear Shift Indicator (GSI) [10], Lane Departure Warning System (LDWS) [11], Night Vision (NV) [12], Obstacle and Collision Warning (OCW) [13], Pedestrian / Vulnerable Road User Protection (VRUP) [14], Tire Pressure Monitoring System (TPMS) [15], Up-to-date Traffic Information (UTI) [16], Intelligent Speed Adaptation or Intelligent Speed Advice (ISA) [17], Adaptive Light Control (ALC) [18], Automatic Parking (AP) [19], Traffic Sign Recognition (TSR) [20], and Hill Descent Control (HDC) [21].

In the development of ADAS, the driver plays an important role in demanding additional functionalities; driver demand leads researchers to focus on the interaction between humans and automated systems. Such research topics have included maneuver-based driving [22], interface design dedicated to satisfying drivers' needs [23], the development of human-machine interface components (visual, auditory, and haptic) for safe speed and distance [24], and the combined effects of users' driving preferences and safety margins in generating optimal maneuvers [25]. Likewise, vehicle environment monitoring is an important field in ADAS and requires a large number of sensors; this has led to research in sensor fusion, such as for laser scanner and video systems [26]. There is also an effort being made in creating feedback components for the driver, where haptic interfaces, such as pedals for controlling deceleration, are in development [27]. Many of the studies are focused on creating simulation methods for evaluating and verifying the quality, security, and functionality of these systems [28], as well as on analyzing their results for distance and velocity in relation to driver security [29].

Automation of an Electric Vehicle

The Bombardier is an NEV made of fiberglass, designed for short trips on low-speed roads (45 km/h max.). The vehicle has been adapted to perform manually assisted driving, although it can also be used for autonomous tasks. The system is composed of a direction control subsystem, to which the steering wheel is connected, and a speed control subsystem, which operates on the main drive electric motor. We have extended its standard capabilities, in order to implement an ADAS effectively, by installing additional range and imaging sensors such as lasers, sonars, and cameras.

Direction Subsystem

In order to fully automate the Bombardier NEV, a Renault Twingo steering column was installed in the direction control subsystem to be used as a power-assisted steering wheel. The steering column contains a permanent-magnet DC motor with an integrated electric clutch to engage the column to the motor. The motor is managed through a power stage over an I2C interface, and the clutch is controlled with a digital signal. In addition, a GMR (Giant Magnetoresistive) sensor was conditioned to deliver a continuous analogue signal proportional to the stress applied to the steering wheel, and can therefore be used as a torque sensor. An absolute encoder is mounted on the steering column to measure the axis angle.

Speed Subsystem

The original speed subsystem is composed of a speed sensor, throttle and brake pedals, and the vehicle speed controller, as shown in Figure 3. The speed sensor is a Hall effect sensor, originally included with the vehicle, which delivers a frequency-modulated pulse conditioned to a continuous voltage level signal. Both pedals are themselves potentiometers and can be used to determine the pedals' positions. When the brake pedal is pressed, a hydraulic pump activates the brake pads on the left front wheel; however, its electric signal has no effect on the vehicle speed controller. Consequently, only the throttle pedal can be used to regulate the vehicle speed from an external electronic device, while the position of the brake pedal can only be monitored. In any case, for an ADAS, it is very important for the user to have haptic feedback as a natural interface for assessing the risk of a given situation.

To this end, the speed subsystem was adapted for vehicle speed regulation by adding two extra subsystems, one for each pedal, as shown in Figure 4. On the throttle pedal, we installed a proportional blocking system with a lever controlled by a servo; this system acts as a haptic device, providing feedback to the driver in case of excess speed. On the brake pedal, we installed a servo-driven mechanism that controls the brake position by mechanically moving the pedal.

Exteroceptive Sensors

For the implemented ADAS to perform intelligent tasks such as people and vehicle detection and collision avoidance, additional sensors were installed: a ring of sonar sensors based on the Polaroid 6500 module, networked through a CAN bus; a front SICK LMS200 laser ranger; and camera modules.

Hardware Architecture For Embedded Processing

We use a two-layered embedded processing system. The low-level system handles all vehicle-related electric signals, while the high-level system processes images from the vision system, as well as executes the main parts of the driver assistance functionalities.

The RoBoard RB-110 is based on a 32-bit Vortex86DX CPU running at 1 GHz with 256 MB RAM, and runs a Linux-based OS with the Robot Operating System (ROS) as its main software component. The board measures 96 mm x 50 mm and accepts supply voltages from 6 to 24 volts with very low power consumption. It is equipped with several interfaces that allow connection to many of the vehicle's signals: 16 PWM/GPIO channels, a high-speed serial bus, a TTL serial port, RS-485, 3 USB 2.0 ports, an AD converter, an I2C bus, 10/100M LAN, and a miniPCI socket.

The IGEPv2 board is based on a 1 GHz OMAP3730 processor with 512 MB RAM, a dedicated DSP for image processing, 10/100M LAN, USB ports, DVI output (for a touch screen panel), and audio in/out connectors. The main characteristic of the OMAP3 is its dual-processor capability: cross-compiling tools are provided for Linux-based ARMv7 platforms, and the DSP is interfaced through a shared memory space and DSPLINK messages using the Codec Engine framework. The board also provides accelerated graphics hardware based on the PowerVR SGX540, which can be used to develop real-time multimedia applications for assisted driving.

Assisted Driving System With Haptic-Visual-Audio Feedback

Concept Description

In autonomous driving solutions, specific targets to be reached within a map are usually given as commands to the robot. In order to reuse most existing implementations and algorithms for autonomous robots, an ADAS must be adapted to provide valid goals according to driver intentions.

The goal, that is, the target location of the vehicle, can be directly computed in manual-assisted mode from odometry sensor data. The driver steers the wheel and throttles the vehicle according to his or her intentions. This represents a deviation of the steering wheel angle $\alpha $ and linear velocity ${{v}_{t}}$ at an equivalent artificial "front" wheel of vehicles of type (1,1) [30]. These deviations are treated as inputs in computing the target linear velocity ${{v}_{\text{target}}}$ and angular velocity ${{\omega }_{\text{target}}}$ of the vehicle, based on the standard tricycle kinematic model and non-holonomic constraints:

\[\begin{align} & {{v}_{\text{target}}}={{v}_{t}}\cdot \cos \left( \alpha \right),\quad {{\omega }_{\text{target}}}=\frac{{{v}_{t}}\cdot \sin \left( \alpha \right)}{l}, \\ & \dot{x}={{v}_{\text{target}}}\cdot \cos \left( \theta \right),\quad \dot{y}={{v}_{\text{target}}}\cdot \sin \left( \theta \right), \\ \end{align}\tag{1}\]

where $l$ is the front and rear wheel distance. In order to compute the target goal ${{x}_{\text{target}}}$ and ${{y}_{\text{target}}}$, Equation (1) must first be solved:

\[\begin{align} & {{x}_{\text{target}}}=x+2\cdot \frac{{{v}_{\text{target}}}}{{{\omega }_{\text{target}}}}\sin \left( {{\omega }_{\text{target}}}\frac{{{T}_{hor}}}{2} \right)\cdot \cos \left( \theta +{{\omega }_{\text{target}}}\frac{{{T}_{hor}}}{2} \right), \\ & {{y}_{\text{target}}}=y+2\cdot \frac{{{v}_{\text{target}}}}{{{\omega }_{\text{target}}}}\sin \left( {{\omega }_{\text{target}}}\frac{{{T}_{hor}}}{2} \right)\cdot \sin \left( \theta +{{\omega }_{\text{target}}}\frac{{{T}_{hor}}}{2} \right), \\ \end{align}\tag{2}\]

where $x$, $y$ and $\theta $ are the current robot Cartesian position and orientation, and ${{T}_{hor}}$ is the time horizon. Equations (2) are singular when ${{\omega }_{\text{target}}}\approx 0$, in which case the approximation \(\sin \left( {{\omega }_{\text{target}}}\cdot \frac{{{T}_{hor}}}{2} \right)\approx {{\omega }_{\text{target}}}\cdot \frac{{{T}_{hor}}}{2}\) should be used instead:

\[\begin{align} & {{x}_{\text{target}}}=x+{{v}_{\text{target}}}\cdot \cos \left( \theta \right)\cdot {{T}_{hor}}, \\ & {{y}_{\text{target}}}=y+{{v}_{\text{target}}}\cdot \sin \left( \theta \right)\cdot {{T}_{hor}}. \\ \end{align}\tag{3}\]

Figure 6 shows the locus of different target goals with different time-horizons, steering wheel angles, and linear velocities.
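A minimal C++ sketch of this target-goal computation is given below; the variable names and the singularity threshold on ${{\omega }_{\text{target}}}$ are illustrative choices:

```cpp
#include <cmath>

struct Pose { double x, y, theta; };

// Target goal from the measured steering angle alpha [rad] and the
// pedal-commanded linear velocity v_t [m/s], Equations (1)-(3).
// l is the front-rear wheel distance and T_hor the time horizon.
Pose targetGoal(const Pose& p, double alpha, double v_t,
                double l, double T_hor)
{
    const double v_tgt = v_t * std::cos(alpha);          // Equation (1)
    const double w_tgt = v_t * std::sin(alpha) / l;

    Pose goal;
    goal.theta = p.theta + w_tgt * T_hor;

    if (std::fabs(w_tgt) > 1e-6) {                       // Equation (2)
        const double chord = 2.0 * (v_tgt / w_tgt)
                           * std::sin(w_tgt * T_hor / 2.0);
        goal.x = p.x + chord * std::cos(p.theta + w_tgt * T_hor / 2.0);
        goal.y = p.y + chord * std::sin(p.theta + w_tgt * T_hor / 2.0);
    } else {                                             // Equation (3)
        goal.x = p.x + v_tgt * std::cos(p.theta) * T_hor;
        goal.y = p.y + v_tgt * std::sin(p.theta) * T_hor;
    }
    return goal;
}
```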

As mentioned previously, the use of such targets allows generic navigation frameworks, originally intended for autonomous navigation, to be integrated in dealing with the driving assistance problem. In particular, we used the move_base framework to design a global planner that provides a valid plan accounting for obstacles in the surrounding environment. A local planner may use such a plan to propose desired linear and angular velocities, as if controlling an autonomous robot. The haptic interface then takes the desired velocities into account in providing haptic feedback to the driver.

A standard global planner takes as inputs the proposed target location, the current robot state, and a 2-D map containing information about obstacles. Assuming that the vehicle will describe an arc in the short term, the global planner constructs a path based on the vehicle's actual velocities and position, and trims it into a shorter path (namely, a collision-free plan) that is guaranteed to be obstacle-free according to the map. In this step, the shape of the robot is considered and inflated with a safety margin distance dependent upon the inscribed radius of the vehicle footprint. The local planner takes such a proposed plan and computes desired linear and angular velocities (${{v}_{d}}$ and ${{\omega }_{d}}$) based on the plan length $d$ (to avoid frontal collisions) and on the closest point of a potential collision based on the true robot footprint (to determine which turning direction should be avoided).

Therefore, in order to decelerate a vehicle safely to avoid a frontal collision with the closest obstacle, we assume, for simplicity, a second order dynamic model of an electric motor:

\[\frac{V\left( s \right)}{U\left( s \right)}=\frac{K}{\tau \cdot s+1}\,\,\,,\,\,\,\frac{D\left( s \right)}{V\left( s \right)}=\frac{1}{s},\tag{4}\]

where \(V\left( s \right)\equiv L\left[ v\left( t \right) \right]\) is the vehicle's linear velocity, \(D\left( s \right)\equiv L\left[ d\left( t \right) \right]\) the travelled distance, \(U\left( s \right)\equiv L\left[ u\left( t \right) \right]\) the driver's input with range \(u\left( t \right)\in \left[ -{{u}_{\max }},{{u}_{\max }} \right]\), and $L$ the Laplace transform. Without loss of generality, we assume that the vehicle reaches a top linear velocity of ${{v}_{\max }}$ when the driver fully accelerates, that is, \(u\left( t \right)={{u}_{\max }}\); therefore \(K={{{v}_{\max }}}/{{{u}_{\max }}}\;\), and $\tau $ is the time constant. Applying the inverse Laplace transform to Equation (4) under maximum braking, \(u\left( t \right)=-{{u}_{\max }}\), with initial conditions $v\left( 0 \right)=v$ (the current vehicle velocity) and $d\left( 0 \right)=0$, yields:

\[\begin{align} & d\left( t \right)=\tau \left( 1-{{e}^{-t/\tau }} \right)\cdot v-{{v}_{\max }}\left( t-\tau \left( 1-{{e}^{-t/\tau }} \right) \right), \\ & v\left( t \right)={{e}^{-t/\tau }}\cdot v-{{v}_{\max }}\left( 1-{{e}^{-t/\tau }} \right). \\ \end{align}\tag{5}\]

We can then compute the time taken to reach zero velocity when maximum negative acceleration is applied:

\[t=\tau \cdot \ln \left( \frac{{{v}_{\max }}+v}{{{v}_{\max }}} \right).\tag{6}\]

Therefore, the stopping distance ${{d}_{s}}$ and the minimum travel distance ${{d}_{\min }}$ are:

\[{{d}_{s}}=\tau \cdot \left[ v-{{v}_{\max }}\cdot \ln \left( \frac{{{v}_{\max }}+v}{{{v}_{\max }}} \right) \right],\tag{7}\]
\[{{d}_{\min }}={{d}_{s}}+{{d}_{front}},\tag{8}\]

where ${{d}_{front}}$ is the distance from the vehicle center to the front of its body (plus a security margin).

On the other hand, a maximum distance, indicating the influence distance of an obstacle, is simply computed as ${{d}_{\max }}={{d}_{\min }}+{{D}_{\max }}$, where ${{D}_{\max }}$ is a design parameter representing the anticipation distance within which the vehicle should reduce its speed before reaching the "inevitable collision" distance ${{d}_{\min }}$.
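A minimal sketch of these distance computations (Equations (7) and (8), plus the influence distance ${{d}_{\max }}$), with illustrative function names, is:

```cpp
#include <cmath>

// Stopping distance d_s, Equation (7): v is the current velocity,
// v_max the top velocity, and tau the motor time constant.
double stoppingDistance(double v, double v_max, double tau)
{
    return tau * (v - v_max * std::log((v_max + v) / v_max));
}

// Minimum travel distance, Equation (8): d_front is the distance from
// the vehicle center to the front of its body plus a security margin.
double dMin(double v, double v_max, double tau, double d_front)
{
    return stoppingDistance(v, v_max, tau) + d_front;
}

// Obstacle influence distance: D_max is the design anticipation distance.
double dMax(double d_min, double D_max)
{
    return d_min + D_max;
}
```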

Therefore, in order to assist vehicle speed control, we propose the following desired velocity:

${{v}_{d}}=\left\{ \begin{matrix} 0 & \text{if }d<{{d}_{\min }} \\ v-\frac{v\left( {{d}_{\max }}-d \right)}{{{d}_{\max }}-{{d}_{\min }}} & \text{if }{{d}_{\min }}<d<{{d}_{\max }} \\ v & \text{otherwise} \\ \end{matrix} \right.,\tag{9}$

Based on such a desired velocity, our ADAS will modify the positions of the throttle servo and the brake as follows:

$servo=\left\{ \begin{matrix} {{\phi }_{\min }} & \text{if }v<{{v}_{d}} \\ \frac{\left( {{\phi }_{\max }}-{{\phi }_{\min }} \right)\left( v-{{v}_{d}} \right)}{v\left( 1-\alpha \right)}+{{\phi }_{\min }} & \text{if }\alpha v<{{v}_{d}}<v \\ {{\phi }_{\max }} & \text{otherwise} \\ \end{matrix} \right.,\tag{10}$
$brake=\left\{ \begin{matrix} 0 & \text{if }\alpha v<{{v}_{d}} \\ \frac{-{{u}_{\max }}\left( \alpha v-{{v}_{d}} \right)}{\alpha v} & \text{if }0<{{v}_{d}}<\alpha v \\ -{{u}_{\max }} & \text{otherwise} \\ \end{matrix} \right.,\tag{11}$

where ${{\phi }_{\min }}$ and ${{\phi }_{\max }}$ are the servo's minimum and maximum reachable positions, and $\alpha \in \left[ 0,1 \right]$ is a design parameter that establishes the desired behavior when ${{d}_{\min }}<d<{{d}_{\max }}$. A higher $\alpha $ value is equivalent to more aggressive behavior.

As can be seen from Equations (10) and (11), the proposed haptic device linearly cancels the driver input on the throttle pedal whenever the plan length $d$ is lower than ${{d}_{\max }}$: as the plan length decreases from ${{d}_{\max }}$ towards ${{d}_{\min }}$, the servo is proportionally pushed out from its minimum position ${{\phi }_{\min }}$ to its maximum position ${{\phi }_{\max }}$. Extra deceleration might be required if the vehicle's distance to the closest potential obstacle falls below a distance ${{d}_{brake}}$ related to the design parameter $\alpha $. In such cases, the brake assistance system becomes active and its action increases linearly up to the maximum allowed deceleration $-{{u}_{\max }}$ when the vehicle reaches ${{d}_{\min }}$.
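The speed-assistance logic of Equations (9) to (11) can be summarized in the following sketch; function and parameter names are illustrative, and the mapping of the returned values to actual hardware commands is omitted:

```cpp
// Desired velocity, Equation (9); assumes d_min < d_max.
double desiredVelocity(double v, double d, double d_min, double d_max)
{
    if (d < d_min) return 0.0;                        // inevitable collision
    if (d < d_max) return v - v * (d_max - d) / (d_max - d_min);
    return v;                                         // no obstacle influence
}

// Throttle-blocking servo position, Equation (10); assumes v > 0
// and aggressiveness parameter a in (0, 1).
double servoPosition(double v, double v_d, double a,
                     double phi_min, double phi_max)
{
    if (v < v_d)     return phi_min;                  // no intervention
    if (v_d > a * v) return phi_min +
        (phi_max - phi_min) * (v - v_d) / (v * (1.0 - a));
    return phi_max;                                   // full blocking
}

// Brake command, Equation (11); returns a value in [-u_max, 0].
double brakeCommand(double v, double v_d, double a, double u_max)
{
    if (v_d > a * v) return 0.0;                      // throttle blocking only
    if (v_d > 0.0)   return -u_max * (a * v - v_d) / (a * v);
    return -u_max;                                    // maximum deceleration
}
```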

In order to provide a desired angular velocity we propose:

${{\omega }_{d}}=\omega +{{\omega }_{comp}},\tag{12}$

where ${{\omega }_{comp}}\in \left\{ -1,0,1 \right\}$ is a compensation value depending on the quadrant in which the potential collision is found, as shown in Figure 8.

The torque ${{\tau }_{motor}}$ applied to the power-assisted steering wheel is affected by the compensation value as follows:

${{\tau }_{motor}}={{\tau }_{torque}}+{{K}_{\tau }}\operatorname{sign}\left( \omega -{{\omega }_{d}} \right),\tag{13}$

where ${{\tau }_{torque}}$ is the torque measured at the steering wheel and ${{K}_{\tau }}$ is a compensation gain.
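In code, the steering compensation of Equations (12) and (13) reduces to a few lines; the gain ${{K}_{\tau }}$ is assumed here to be tuned experimentally:

```cpp
// Steering assistance torque, Equations (12) and (13).
// w       : current angular velocity of the vehicle
// w_comp  : quadrant-based compensation in {-1, 0, 1} (Figure 8)
// tau_drv : torque measured at the steering wheel (GMR sensor)
// K_tau   : compensation gain (assumed experimentally tuned)
double motorTorque(double tau_drv, double w, int w_comp, double K_tau)
{
    const double w_d   = w + w_comp;                     // Equation (12)
    const double error = w - w_d;                        // equals -w_comp
    const double s     = (error > 0.0) - (error < 0.0);  // sign(w - w_d)
    return tau_drv + K_tau * s;                          // Equation (13)
}
```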

Software Architecture

The software is implemented under a ROS [31] framework in a distributed architecture; topics are published and subscribed to transparently through an Ethernet port.

The system is implemented as different nodes using the ROS framework; our own nodes are defined to be compatible with the Move_Base node in the navigation stack [31]. An overview of the system is given in Figure 9, where grey boxes represent ROS nodes, and white ovals represent additional components such as plug-ins.

The software architecture is divided into two coarse control levels: the low-level, implemented in the RoBoard RB-110, is in charge of executing the Vehicle_Controller node, whilst the high-level, in the IGEP v2, integrates the decision control unit and the visualization unit, executing the ADAS_Controller, Move_Base, Laser_Driver, and Visualization nodes.

The Vehicle_Controller node is responsible for reading the vehicle's proprioceptive sensor data and publishing the "/state" topic, representing the current state of the vehicle, which includes pedal measurements, gear settings, torque measurement, and so forth. The Vehicle_Controller node is also responsible for publishing the odometry data to the "/odom" topic. Moreover, it executes the lowest level of the control loop between the driver and the vehicle: reference commands are received from the ADAS_Controller through the "/reference" topic, which contains the desired throttle servo and brake positions and the torque to be applied to the steering column.
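As an illustration (not the actual node source), a minimal roscpp skeleton of the Vehicle_Controller loop could look as follows; since the concrete message types of "/state" and "/reference" are not detailed here, std_msgs::Float64MultiArray is used as a stand-in:

```cpp
#include <ros/ros.h>
#include <nav_msgs/Odometry.h>
#include <std_msgs/Float64MultiArray.h>

// "/reference" is assumed to carry [servo, brake, steering torque].
void referenceCallback(const std_msgs::Float64MultiArray::ConstPtr& ref)
{
    // Forward ref->data[0..2] to the throttle servo, brake servo,
    // and steering power stage (hardware access omitted).
}

int main(int argc, char** argv)
{
    ros::init(argc, argv, "vehicle_controller");
    ros::NodeHandle nh;

    ros::Publisher odom_pub  = nh.advertise<nav_msgs::Odometry>("/odom", 10);
    ros::Publisher state_pub =
        nh.advertise<std_msgs::Float64MultiArray>("/state", 10);
    ros::Subscriber ref_sub  =
        nh.subscribe("/reference", 10, referenceCallback);

    ros::Rate rate(50);  // assumed control rate
    while (ros::ok()) {
        nav_msgs::Odometry odom;
        odom.header.stamp = ros::Time::now();
        odom.header.frame_id = "odom";
        // Fill pose/twist from the encoder and Hall-effect speed sensor.
        odom_pub.publish(odom);

        std_msgs::Float64MultiArray state;
        // Pack pedal positions, gear setting, torque measurement, etc.
        state_pub.publish(state);

        ros::spinOnce();
        rate.sleep();
    }
    return 0;
}
```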

The ADAS_Controller has two main objectives: to compute a tentative goal based on the current state of the vehicle (read from the "/state" topic), following the algorithm described in section IV, Equations (1) to (3); and to process the desired linear and angular velocities, read from "/cmd_vel", to compute reference commands with Equations (10), (11), and (13).

The Move_Base node supports any global planner adhering to the nav_core::BaseGlobalPlanner interface and any local planner adhering to the nav_core::BaseLocalPlanner interface, both specified in the nav_core package [31]. For this purpose, we have implemented two plug-ins, both of which adhere to the aforementioned interfaces.
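A skeleton of a global-planner plug-in adhering to the nav_core::BaseGlobalPlanner interface might look as follows; the class and namespace names are illustrative, and the arc construction and trimming of section IV is only indicated by comments:

```cpp
#include <nav_core/base_global_planner.h>
#include <costmap_2d/costmap_2d_ros.h>
#include <geometry_msgs/PoseStamped.h>
#include <pluginlib/class_list_macros.h>
#include <string>
#include <vector>

namespace adas_planner {  // illustrative package/namespace name

class ArcGlobalPlanner : public nav_core::BaseGlobalPlanner {
public:
    virtual void initialize(std::string name,
                            costmap_2d::Costmap2DROS* costmap)
    {
        costmap_ = costmap;  // provides footprint and obstacle data
    }

    virtual bool makePlan(const geometry_msgs::PoseStamped& start,
                          const geometry_msgs::PoseStamped& goal,
                          std::vector<geometry_msgs::PoseStamped>& plan)
    {
        plan.clear();
        // Build the constant-curvature arc from start towards goal and
        // trim it at the first pose in collision (section IV); the actual
        // arc sampling and collision checks are omitted here.
        plan.push_back(start);
        plan.push_back(goal);
        return !plan.empty();
    }

private:
    costmap_2d::Costmap2DROS* costmap_;
};

}  // namespace adas_planner

// Register the plug-in so that Move_Base can load it by name.
PLUGINLIB_EXPORT_CLASS(adas_planner::ArcGlobalPlanner,
                       nav_core::BaseGlobalPlanner)
```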

As mentioned in section IV, the global planner computes a collision free plan according to the vehicle kinematics, ensuring that such a plan is directed towards the desired goal. On the other hand, the local planner implements Equations (4) to (9) and (12), providing the desired linear and angular velocities.

The Visualization node subscribes to all topics and aims to provide a Graphical User Interface (GUI) for the touch screen panel. The GUI uses the OpenGL ES 2.0 library for the OMAP3 platform (IGEP v2). The aim of the GUI is to visualize the current state of the vehicle and its surroundings so that the driver is provided with visual feedback.

The GUI shows a simplified overview of the vehicle's position and heading in its near environment. This is accomplished with an orthographic top view of the scene containing a footprint of the vehicle, nearby obstacles (in their real and inflated states), and a heading arrow.

The current velocities and risk of collision are represented by an arrow with a color map. The length of the arrow depends on the ratio of current to desired linear velocities, whilst the color of the arrow varies from green (no risk of collision) to red (potential collision).
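One plausible implementation of this mapping (the exact color scheme and scaling are design choices) is:

```cpp
// Illustrative GUI arrow mapping: length follows the ratio of current
// to desired linear velocity; color interpolates from green to red.
struct Rgb { float r, g, b; };

Rgb arrowColor(float risk)            // risk in [0,1]: 0 = none, 1 = collision
{
    return { risk, 1.0f - risk, 0.0f };
}

float arrowLength(float v, float v_d, float base_len)
{
    return (v_d > 0.0f) ? base_len * (v / v_d) : 0.0f;
}
```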

The system is mainly designed to give support to the driver in case of danger or an obstructed view of the surroundings, whilst remaining idle during normal driving conditions. Future versions of the system are planned to include additional visual and auditory feedback, such as a proposed direction of heading, camera feeds of blind spots, and signal tones.

Conclusion and Further Work

This paper has described the automation of an electric vehicle and the embedded distributed architecture for implementing an ADAS with haptic, visual, and audio feedback in order to improve safety. An embedded distributed architecture was chosen based on two low-cost boards and implemented under the ROS framework. The system includes features such as collision avoidance and motion planning.

Further research will be done on haptic-audio-visual interfaces for driving assistance, extending the current developments with more elaborate information such as mobile object detection and 3-D map generation. Expected benefits include the development of new haptic interfaces, the integration of surround sound to acoustically localize mobile objects from previously processed information, and 3-D visualization of the environment with image interpretation of relevant information for the driver.

Moreover, we plan to use a vehicle simulator currently in development (shown in Figure 11) to generate different scenarios, in which the safety performances of the system proposed in the current paper, as well as other assistance systems, may be evaluated. In addition, the evaluation of the methodological benchmark based on different metrics already proposed in [32] will be studied.

Furthermore, we will focus on researching and implementing vision, laser, and radar algorithms for the detection, path prediction, and tracking of pedestrians and mobile objects. Sensor fusion with non-conventional techniques and incremental learning for mobile object prediction will also be studied.

References

  1. G. Y. Mu and M. W. Chen, "Study on status quo and countermeasure of road traffic safety based on accident stat," Communications Standardization, no. 2, 2005.
  2. Wikipedia, Anti-lock braking system, [Online].
    Available: http://en.wikipedia.org/wiki/Anti-lock_braking_system
  3. P. I. Labuhn and W. J. Chundrlik Jr, "Adaptive cruise control," US Patent 5454442, 1995.
  4. N. E. Beam, "Adaptive/anti-blinding headlights," US Patent 6144158, 2000.
  5. G. Braeuchle and J. Boecker, "Lane-change assistant for motor vehicles," US Patent Application 20050155808, 2005.
  6. B. A. Miller and D. Pitton, "Vehicle blind spot detector," US Patent 4694295, 1987.
  7. O. A. Basir, J. P. Bhavnani, F. Karray, and K. Desrochers, "Drowsiness detection system," US Patent 6822573, 2004.
  8. Wikipedia, Emergency brake assist, [Online].
    Available: http://en.wikipedia.org/wiki/Emergency_brake_assist
  9. M. Sawada and T. Matsumoto, "Vehicle stability control system," US Patent 7577504, 2009.
  10. C. A. Nurse, "Gear shift indicator," US Patent 3985095, 1976.
  11. Wikipedia, Lane departure warning system, [Online].
    Available: http://en.wikipedia.org/wiki/Lane_departure_warning_system
  12. Wikipedia, Automotive night vision, [Online].
    Available: http://en.wikipedia.org/wiki/Automotive_night_vision
  13. T. O. Grosch, "Radar sensors for automotive collision warning and avoidance," in Optoelectronic and Electronic Sensors, Orlando, FL, 1995, pp. 239-247.
    doi: 10.1117/12.212749
  14. B. Parks, "Hinge device for pedestrian protection system," US Patent Application 20070246281, 2007.
  15. B. F. Doerksen and D. M. Nattinger, "Tire pressure monitoring system," US Patent 4816802, 1989.
  16. J. M. Shyu, "Traffic information inter-vehicle transference and navigation system," US Patent 5428544, 1995.
  17. M. Päätalo, M. Kallio, and H. Peltola, "Intelligent speed adaptation – effects on driving behaviour," in Traffic Safety on Three Continents, Moscow, Russia, 2001, pp. 805-814.
  18. H. Kong, Q. Sun, A. Ansari, and J. H. Burns, "Adaptive lighting control for vision-based occupant sensing," US Patent 7095002, 2006.
  19. M. G. Macphail and D. B. Kumhyr, "System and method for automated parking," US Patent 6646568, 2003.
  20. O. Stromme, "Automatic traffic sign recognition," US Patent 6813545, 2004.
  21. M. J. Gallery, K. G. R. Parsons, and P. A. Beever, "A wheeled vehicle with hill descent control," European Patent 0856446, 1998.
  22. M. Kauer, M. Schreiber, and R. Bruder, "How to conduct a car? A design example for maneuver based driver-vehicle interaction," in IEEE Intelligent Vehicles Symposium (IV), San Diego, CA, 2010, pp. 1214-1221.
    doi: 10.1109/IVS.2010.5548099
  23. A. Lindgren, F. Chen, P. Amdahl, and P. Chaikiat, "Using personas and scenarios as an interface design tool for advanced driver assistance systems," in Universal access in human-computer interaction. Ambient interaction. vol. 4555, C. Stephanidis, Ed., Berlin/Heidelberg: Springer, 2007, pp. 460-469.
    doi: 10.1007/978-3-540-73281-5_49
  24. E. Adell, A. Varhelyi, M. Alonso, and J. Plaza, "Developing human-machine interaction components for a driver assistance system for safe speed and safe distance," Intelligent Transport Systems (IET), vol. 2, no. 1, pp. 1-14, 2008.
    doi: 10.1049/iet-its:20070009
  25. F. Biral, M. Da Lio, and E. Bertolazzi, "Combining safety margins and user preferences into a driving criterion for optimal control-based computation of reference maneuvers for an adas of the next generation," in IEEE Intelligent Vehicles Symposium, 2005, pp. 36-41.
    doi: 10.1109/IVS.2005.1505074
  26. N. Kaempchen and K. Dietmayer, "Fusion of laserscanner and video for advanced driver assistance systems," in The 11th World Congress on intelligent transport systems, Nagoya, Japan, 2004.
  27. M. Mulder, M. M. van Paassen, J. Pauwelussen, and D. A. Abbink, "Haptic car-following support with deceleration control," in IEEE International Conference on Systems, Man and Cybernetics, San Antonio, TX, 2009, pp. 1686-1691.
    doi: 10.1109/ICSMC.2009.5346803
  28. B. Schick, B. Kremer, J. Henning, and M. zur Heiden, "Simulation methods to evaluate and verify functions, quality and safety of advanced driver assistance systems," in IPG Technology Conference, 2008.
    Available: http://www.ipg.de/uploads/media/01_Schick_Paper_03.pdf
  29. E. Adell, A. Várhelyi, and M. Dalle Fontana, "The effects of a driver assistance system for safe speed and safe distance – a real-life field study," Transportation Research Part C: Emerging Technologies, vol. 19, no. 1, pp. 145-155, 2011.
    doi: 10.1016/j.trc.2010.04.006
  30. G. Campion, G. Bastin, and B. Dandrea-Novel, "Structural properties and classification of kinematic and dynamic models of wheeled mobile robots," IEEE Transactions on Robotics and Automation, vol. 12, no. 1, pp. 47-62, 1996.
    doi: 10.1109/70.481750
  31. ROS (robot operating system), [Online].
    Available: http://www.ros.org/wiki/
  32. H. Yuste, L. Armesto, and J. Tornero, "Benchmark tools for evaluating AGVs at industrial environments," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Taipei, Taiwan, 2010, pp. 2657-2662.
    doi: 10.1109/IROS.2010.5652864
