Active Uncalibrated Visual Servoing

Contact: Bill Yoshimi <yoshimi@cs.columbia.edu>

Figure 1. Experimental setup with peg to be aligned with hole on block. Note that the camera is mounted off to the side of the end-effector.

Calibration methods are often difficult to understand, inconvenient to use in many robotic environments, and may require minimizing several complex, non-linear equations, a process that is not guaranteed to be numerically robust or stable. Moreover, a calibration is typically accurate only in a small subspace of the workspace; accuracy degrades quickly once the system leaves the calibrated region. This poses a real problem for an active, moving camera system: in our case, it is not feasible to recalibrate each time the system moves. What is needed is an online method that updates the relationships between the imaging and actuation systems as the system runs.

We have developed a new set of algorithms that perform precise alignment and positioning without the need for calibration. By extracting control information directly from the image, we free our technique from the errors normally associated with a fixed calibration. We attach a camera system to the robot such that the camera system and the robot's gripper rotate together. As the camera system rotates about the gripper's rotational axis, the circular path traced out by a point-like feature projects to an elliptical path in image space. We gather the projected feature points over part of a rotation and fit the gathered data to an ellipse. The distance from the rotational axis to the feature point in world space is proportional to the size of the generated ellipse: as the rotational axis gets closer to the feature, the feature's projected path forms smaller and smaller ellipses, and when the rotational axis is directly above the feature, the trajectory degenerates from an ellipse to a single point.
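The ellipse fit itself can be posed as a linear least-squares problem. The sketch below (Python/NumPy) is a minimal illustration of this step under that assumption, not the implementation used in the paper: it fits a general conic to the tracked feature points and recovers the ellipse's semi-axis lengths, whose magnitude serves as the alignment error to be driven to zero.

    import numpy as np

    def fit_ellipse(points):
        """Least-squares fit of a general conic
        a x^2 + b xy + c y^2 + d x + e y + f = 0 to 2-D image points."""
        # Center the data for numerical conditioning; the axis lengths are
        # translation-invariant, so this does not change the size estimate.
        x, y = (points - points.mean(axis=0)).T
        D = np.column_stack([x*x, x*y, y*y, x, y, np.ones_like(x)])
        # The smallest right singular vector minimizes ||D w|| with ||w|| = 1.
        return np.linalg.svd(D)[2][-1]

    def ellipse_size(coeffs):
        """Semi-axis lengths of the fitted conic (must be an ellipse)."""
        a, b, c, d, e, f = coeffs
        A = np.array([[a,   b/2, d/2],
                      [b/2, c,   e/2],
                      [d/2, e/2, f  ]])
        lam = np.linalg.eigvalsh(A[:2, :2])
        if lam[0] * lam[1] <= 0:
            raise ValueError("fitted conic is not an ellipse")
        axes = np.sqrt(-np.linalg.det(A) / (np.linalg.det(A[:2, :2]) * lam))
        return axes.max(), axes.min()   # (semi-major, semi-minor)

    # Demo: synthetic feature points gathered over a partial rotation.
    # The recovered semi-axes shrink toward zero as the rotation axis
    # approaches the feature, degenerating to a point at alignment.
    t = np.linspace(0.0, 1.4 * np.pi, 30)
    pts = np.column_stack([3.0 * np.cos(t) + 80, 1.5 * np.sin(t) + 60])
    print(ellipse_size(fit_ellipse(pts)))   # -> approx. (3.0, 1.5)

Because only part of a rotation is observed, an algebraic conic fit of this kind is a natural choice; a direct ellipse-specific fit (e.g., Fitzgibbon's method) would be the standard refinement when image noise is significant.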

We have demonstrated the efficacy of the algorithm on the peg-in-hole problem. A camera is mounted off to the side of the robot's end-effector. A peg, which is to be inserted into a hole in the block, is aligned with the rotational axis of the end-effector (see Figure 1). The algorithm uses an approximation to the Image Jacobian to control the movement of the robot. The components of the Jacobian J depend on the robot parameters, the transformation between the camera system and the robot system, and the camera system parameters. We calculate the Image Jacobian empirically by making two small moves in world space and observing the resulting motion of the feature in image space. By re-estimating the Image Jacobian at each new point (and discarding the information from previous estimates), we can use the fresh estimate to move the robot to the correct alignment position even though the two systems have never been calibrated.
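As a concrete sketch of this finite-difference estimate, the following Python/NumPy fragment simulates the procedure; move_robot, observe_feature, and the linear camera model TRUE_J are hypothetical stand-ins for the real robot and tracker, not interfaces from the paper. Each column of the estimated Jacobian is simply the observed image displacement divided by the world-space step:

    import numpy as np

    # Hypothetical stand-ins for the robot and the feature tracker. The
    # linear model TRUE_J is unknown to the controller; the method never
    # uses it directly, only the observed image motion.
    TRUE_J = np.array([[12.0, -3.0],
                       [ 4.0, 10.0]])
    robot_xy = np.zeros(2)              # simulated end-effector position

    def move_robot(dx, dy):
        robot_xy[:] += (dx, dy)

    def observe_feature():
        return TRUE_J @ robot_xy + np.array([160.0, 120.0])

    def estimate_image_jacobian(step=0.5):
        """Estimate a 2x2 Image Jacobian from two probe moves: each column
        is the image displacement per unit world-space move."""
        p0 = observe_feature()
        move_robot(step, 0.0)           # first probe move, along world x
        p1 = observe_feature()
        move_robot(0.0, step)           # second probe move, along world y
        p2 = observe_feature()
        return np.column_stack([p1 - p0, p2 - p1]) / step, p2

    def alignment_move(target):
        """One servo step: re-estimate J, solve J dx = image error, move."""
        J, feature = estimate_image_jacobian()
        move_robot(*np.linalg.solve(J, target - feature))

    alignment_move(target=np.array([100.0, 100.0]))
    print(observe_feature())            # -> [100. 100.], feature at target

In the simulation the camera model is linear, so a single step reaches the target exactly; with a real camera the map is only locally linear, which is why the Jacobian is re-estimated at every new position instead of being reused.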

These images show a point feature being tracked and the image ellipses generated from the tracked points. The reduced-area image ellipse results from a positioning move computed with the Image Jacobian. The final frame shows the system performing the accurate alignment using the image ellipses as a control signal. We are currently able to run this computation at an update rate of about 0.2 Hz.

Click to see the generation of image ellipses for a typical peg-in-hole experiment. The goal is to insert the peg into one of the holes in the aluminum block in the scene. At the end of the video, we verify the method's accuracy by actually inserting the peg into the hole. WARNING: the video is 1,314,816 bytes (about 1.3 MB), so you may have to wait a while...

Billibon H. Yoshimi and Peter K. Allen. Active, uncalibrated visual servoing. In Proceedings of the 1994 IEEE International Conference on Robotics and Automation, volume 4, pages 156-161, 1994.

