Lola is equipped with a vision system developed in the DFG research project “Towards a general vision system for humanoid robots” (HU-1743/1-1) by the Institute for Autonomous Systems Technology at the University of the Federal Armed Forces (TAS). The newly developed image processing method enables Lola to navigate in an unknown environment.
Lola has the proportions of an average, 180 cm tall adult and weighs
approximately 60 kg. The design of the structural components follows thorough
analyses of, and walking experiments with, Johnnie, the humanoid robot previously
developed at AM. Lola’s mechanical structure is characterized by an
extremely lightweight design and a kinematic configuration with 25 actuated
degrees of freedom, allowing for natural and flexible motion patterns. The
joints are actuated by modular, multi-sensory servo drives with a high power
density. The drives are based on AC servo motors, harmonic drive gears and
planetary roller screws, depending on the joint. Unlike in humans, the center of mass (CoM)
of a biped robot typically lies at the level of the hip joint or even below it.
Since stability increases with higher CoM positions, special emphasis was
put on an improved mass distribution of the leg apparatus to achieve good
dynamic performance: the mass distribution in the hip-thigh area is
significantly improved by employing a roller screw-based linear actuator in
the knee joint. The ankle joint is actuated by a parallel mechanism of two
linear actuators with the motors mounted on the thigh, next to the hip
joint. Thus, a large part of the actuator mass could be shifted close to the
hip joint rotational axis, resulting in a highly dynamic behavior of the
legs. The dimensioning of structural components is based on a comprehensive
multibody simulation model of the robot. For some components with complex
multi-axial stress conditions and strict geometric constraints, concept
design proposals are determined by topology optimization. Finite element
analyses are conducted on all highly loaded parts. Major structural
components are designed as aluminum investment castings in order to meet the
weight and stiffness targets.
The sensor system supports the implementation of model-based control algorithms. Absolute angular sensors allow direct measurement of the joint angles, compensating for compliance and nonlinearities in the drive mechanisms. A high-precision inertial measurement unit with fiber-optic gyroscopes estimates the orientation and angular velocity of the upper body.
The ground reaction forces and moments are measured by a six-axis force/torque sensor. Because commercial six-axis sensors with appropriate measurement ranges are rather bulky and heavy, a customized sensor was developed. Due to the highly coupled kinematics and dynamics, central stabilizing control is crucial for a biped robot. The central control unit can, however, be relieved of low-level tasks such as motor control and sensor data acquisition and processing. These tasks are carried out by decentralized controllers, forming an “intelligent” sensor-actuator network with central control of the global system dynamics. All controllers are connected by a real-time communication system.
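One common use of such a six-axis force/torque measurement in biped control is to compute the zero-moment point (ZMP) under the foot. The sketch below illustrates this; the sensor mounting height d above the sole and the sign conventions are assumptions for illustration, not details taken from the text.

```python
# ZMP under one foot from a six-axis force/torque reading, expressed in
# the sensor frame and projected onto the sole.  d is the (assumed)
# height of the sensor origin above the sole.

def zmp_from_ft(fx, fy, fz, mx, my, d):
    """Return (px, py), the ZMP coordinates on the sole."""
    if fz <= 0:
        raise ValueError("no ground contact")
    px = (-my - fx * d) / fz
    py = (mx - fy * d) / fz
    return px, py

# Example with hypothetical readings: 600 N vertical load,
# 30 Nm / -60 Nm moments, sensor 5 cm above the sole.
px, py = zmp_from_ft(0.0, 0.0, 600.0, 30.0, -60.0, 0.05)
```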
Based on Johnnie’s control system, a hierarchical control and trajectory planning system has
been developed. The trajectory planning system generates stable trajectories from a prescribed target walking motion. The reference trajectory planning is improved by a better robot model and an anticipatory calculation of the next steps. A new contact force and CoM trajectory planning method is used. This method runs in real time, giving Lola the ability to react quickly to unexpected events.
Because even small errors in measurements and models accumulate and the environment is only vaguely known, biped locomotion cannot be realized with precalculated trajectories and motor control alone. The planned trajectories are therefore modified based on measured contact forces and torques as well as the inertial orientation and angular velocity of the upper body.
The walking control stabilizes the global system dynamics by modifying the planned walking pattern, which consists of task-space trajectories and contact forces. The modified trajectories are tracked using hybrid position/force control. The decentralized drive controllers form the lowest control layer; above it lies the decentralized joint angle control layer. On the top layer, the global system dynamics is controlled in workspace coordinates. Kinematic redundancies are resolved within the workspace control, which allows for a simple and effective use of the redundant degrees of freedom. Walking parameters such as step length, walking direction or speed can either be set by a human operator or decided autonomously by Lola.
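Redundancy resolution in a workspace controller is often done with a Jacobian pseudoinverse plus a null-space term; whether Lola's controller uses exactly this scheme is not stated here. A toy illustration with one task coordinate and two joints:

```python
import math

# qdot = J# * xdot + (I - J# J) * q0dot : the pseudoinverse J# realizes
# the task velocity, and the projector (I - J# J) lets a secondary
# motion q0dot act without disturbing the task.  Toy case: a planar
# 2-joint arm whose task is only the end-effector x coordinate.

L1 = L2 = 0.5  # link lengths (hypothetical)

def jacobian(q):
    # dx/dq for x = L1*cos(q1) + L2*cos(q1+q2), as a 1x2 row vector
    j1 = -L1 * math.sin(q[0]) - L2 * math.sin(q[0] + q[1])
    j2 = -L2 * math.sin(q[0] + q[1])
    return j1, j2

def redundant_step(q, xdot_des, q0dot):
    """Joint velocities realizing xdot_des plus null-space motion q0dot."""
    j1, j2 = jacobian(q)
    jj = j1 * j1 + j2 * j2            # J J^T (a scalar here)
    p1, p2 = j1 / jj, j2 / jj         # pseudoinverse J# = J^T / (J J^T)
    # null-space projector N = I - J# J
    n11, n12 = 1.0 - p1 * j1, -p1 * j2
    n21, n22 = -p2 * j1, 1.0 - p2 * j2
    return (p1 * xdot_des + n11 * q0dot[0] + n12 * q0dot[1],
            p2 * xdot_des + n21 * q0dot[0] + n22 * q0dot[1])
```

Whatever q0dot is chosen (e.g. joint-limit avoidance), the task velocity is unaffected, which is what makes the redundant degrees of freedom simple to exploit.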
A very important component of autonomous robots is environmental cognition. TAS is particularly interested in visual perception research. In the field of robotics, vision systems are becoming more and more powerful. Commercial solutions are available for quality assurance, monitoring and even navigation, e.g. Advanced Driver Assistance Systems such as tracking or stability aids. However, these systems are often highly specialized and not suited to a wide variety of applications. Human-like general and flexible cognition is far from technical realization. This is the motivation for the above-mentioned DFG project.
The goal is to develop a general vision system for autonomous mobile robots that can be used in variable situations, such as indoor or outdoor scenarios. In the past, robotic demonstrations typically took place in predefined environments. By contrast, the envisioned system can act in any context, enabling the robot to walk in user-defined, non-simplified surroundings, to learn different objects, and to search for and recognize them. To this end, a general navigation system with different layers is being developed. The lowest layer realizes a navigation behavior that can be used in a wide range of scenarios and can prevent collisions quickly and reliably.
This level alone, however, cannot solve complex tasks such as climbing stairs, since it recognizes steps primarily as obstacles. Such requirements are handled by higher levels and depend on the existence of specific objects. As soon as the vision system detects that a specific cognitive ability can potentially be activated, a transition from the reactive layer to a higher one takes place.
The cooperation between the layers enables the robot to navigate through any environment, while specific abilities can be used whenever they are available. By means of the first level, the reactive level, the robot can avoid any natural obstacle without requiring knowledge of the given objects or environment. For this purpose, a stereo camera rig is used that provides images with a resolution of 5 megapixels.
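For a rectified stereo pair, depth follows from pixel disparity as z = f·b/d. The focal length and baseline below are hypothetical; the text only specifies the 5-megapixel resolution:

```python
# Depth from a calibrated, rectified stereo rig: a point matched with
# horizontal disparity d (pixels) lies at depth z = f * b / d, where f
# is the focal length in pixels and b the baseline in meters.

def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Depth in meters of a stereo correspondence."""
    if disparity_px <= 0:
        raise ValueError("zero disparity: point at infinity or bad match")
    return f_px * baseline_m / disparity_px

# Hypothetical rig: f = 1400 px, baseline 12 cm, disparity 42 px -> 4 m.
z = depth_from_disparity(f_px=1400.0, baseline_m=0.12, disparity_px=42.0)
```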
Depending on the intended action, different information has to be extracted from the input data. In a new approach, the images are dynamically divided into regions of differing attention, allowing complex algorithms to be executed only in areas of high informational need. The system thereby provides high-resolution data processing at a significantly reduced computational load.
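One way to realize such attention regions (the project's actual partitioning scheme is not specified here) is to tile the image, rank the tiles by a cheap interest score, and reserve the expensive algorithms for the top-ranked tiles:

```python
# Attention-driven processing sketch: intensity variance serves as the
# inexpensive per-tile score; only the highest-scoring tiles would be
# handed to the costly high-resolution algorithms.

def tile_scores(image, tile):
    """image: 2-D list of grey values; returns {(row, col): variance}."""
    h, w = len(image), len(image[0])
    scores = {}
    for r in range(0, h, tile):
        for c in range(0, w, tile):
            px = [image[y][x]
                  for y in range(r, min(r + tile, h))
                  for x in range(c, min(c + tile, w))]
            mean = sum(px) / len(px)
            scores[(r, c)] = sum((p - mean) ** 2 for p in px) / len(px)
    return scores

def high_attention(scores, k):
    """Top-left corners of the k tiles with the highest score."""
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

Uniform regions score near zero and are skipped, so the computational budget concentrates on textured areas, which is the trade-off the paragraph above describes.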
The resulting motion of the robot is mapped onto the joint angles, which are controlled on the lowest layer. The position, velocity and acceleration of each joint are controlled by a PID controller with a friction observer.
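A discrete PID with a simple Coulomb-friction feedforward can stand in for the joint controller described above; the actual friction observer design is not given in the text, so the compensation term here is an assumption for illustration:

```python
# Discrete PID joint controller with a crude Coulomb-friction
# feedforward in place of the friction observer.

class JointPID:
    def __init__(self, kp, ki, kd, f_coulomb, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.f_coulomb = f_coulomb  # estimated Coulomb friction torque
        self.dt = dt
        self.i_term = 0.0
        self.prev_err = 0.0

    def update(self, q_des, q_meas, qdot_des):
        """Return the commanded joint torque for one control cycle."""
        err = q_des - q_meas
        self.i_term += err * self.dt
        d_err = (err - self.prev_err) / self.dt
        self.prev_err = err
        pid = self.kp * err + self.ki * self.i_term + self.kd * d_err
        # feedforward opposing friction in the commanded direction
        sign = 1 if qdot_des > 0 else -1 if qdot_des < 0 else 0
        return pid + self.f_coulomb * sign
```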
Basically the platform consists of several belts which form an endless
torus. The belts can be actuated to generate motion in one direction (X),
while the whole torus can rotate to generate motion in a second direction (Y).
As the two motions can be controlled independently, any resulting motion can
be generated to recenter a person.
The current implementation offers 3.5 by 4.6 meters of walking space and will be scaled up to 5.5 by 4.6 meters by December 2007, reaching speeds of up to 2 m/s (the speed at which a person starts jogging). As far as is known, this is the largest and fastest implementation worldwide at the moment (as of October 2007).
The platform can easily be scaled up thanks to its modularity. Its theoretical span is almost unlimited owing to an innovative construction (patent pending). Since size matters with regard to the maximum accelerations that may be imposed on a person on the platform, this implementation can be considered a major breakthrough in the history of motion platform construction.
Fields of application:
- The "holodeck": The user is equipped with a head-mounted display (HMD) which shows the virtual reality. The HMD is tracked with a motion tracking system, on the one hand to generate the video data for the stereo vision of the HMD, and on the other to calculate the user's deviation from the platform's center. This deviation is used to recenter the user. By respecting acceleration limits and other restrictions, this process goes unnoticed by the user. Many different applications are possible, from a walk through a newly designed urban area to the study of an order-picking process in an innovative environment. Within this project, it is possible to walk around in ancient Pompeii using the city engine.