University of Extremadura
  • Offer Profile
  • The Robotics and Artificial Vision Laboratory (ROBOLAB) is located at the University of Extremadura, Cáceres, Spain.

    Since its foundation in 2000, it has been devoted to research in Intelligent Mobile Robotics and Computer Vision.
Product Portfolio
  • Projects and Research Contracts

    • APSUBA

    • Active Perception for Scene Understanding and Behaviour Analysis (APSUBA): an application for Social Robotics.
      The aim of the project is to develop an active perception system for the environment, acting as an agent inside a heterogeneous sensor network, for scene understanding and behaviour analysis. The proposed perception system is composed of two different mechanisms: an active visual perception system and a metric perception system acting inside a network of heterogeneous sensors such as a 3D laser scanner, an Inertial Measurement Unit (IMU) and a camera. Both systems will be calibrated. Sensor fusion allows the regions of interest to be detected using active vision (e.g. changes in the scene, human-robot interaction, human/robot motion, etc.). Metric information will be used for the later segmentation and 3D modelling stages. Moreover, data fusion will be applied to both heterogeneous sensor calibration and perception. Finally, the models will be used for scene recognition and behaviour understanding.

      In order to obtain the relevant elements in the scene, a perception-based grouping process will be employed, performed by a hierarchical irregular pyramid. Using the information given by the visual mechanism, the metric perception system will provide 3D information on the region of interest through a multi-layer homography-based reconstruction approach. Segmentation in large datasets will be achieved using clusters provided by Gaussian Mixture Models (GMM); a minimal sketch of this clustering step appears at the end of this item. These segments will be modelled using high-level geometric features (superquadric surfaces), which feed the last stage of the system: scene understanding and behaviour analysis using Bayesian rules.

      The proposed project will provide contributions in different topics such as mobile-structure sensor networks, heterogeneous sensor calibration, localization, scene recognition, 3D reconstruction, active perception, sensor fusion and human behaviour understanding. The results of this project are also of interest in other research fields (e.g. smart environments).
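      As an illustration of the clustering stage, the minimal sketch below uses scikit-learn's GaussianMixture as a stand-in for the project's GMM machinery; the synthetic point cloud and the component count are assumptions, not project data:

      ```python
      import numpy as np
      from sklearn.mixture import GaussianMixture

      # Illustrative GMM-based segmentation of a 3D point cloud, in the
      # spirit of the APSUBA pipeline. Data and component count are assumed.
      rng = np.random.default_rng(0)
      points = np.vstack([
          rng.normal(loc=(0.0, 0.0, 0.0), scale=0.2, size=(500, 3)),  # e.g. a floor patch
          rng.normal(loc=(2.0, 0.0, 1.0), scale=0.3, size=(500, 3)),  # e.g. a piece of furniture
      ])

      gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
      labels = gmm.fit_predict(points)   # one cluster label per 3D point

      # Each Gaussian component yields a segment that could later be fitted
      # with a superquadric surface for the scene-understanding stage.
      for k in range(gmm.n_components):
          print("segment %d: %d points, centre %s"
                % (k, (labels == k).sum(), np.round(gmm.means_[k], 2)))
      ```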
    • ACROSS

    • ACROSS (Auto Configurable RObots for Social Services) aims to incorporate robotic services into social scenarios, allowing them to anticipate user needs by improving communication and empathy between people and embodied agents.

      Singular Strategic Scientific-Technological Projects consist of "a group of research, development and innovation (R&D) activities intended to enhance the scientific and technological integration of agents and boost technology transfer, contributing to improve the technological capacity of enterprises".
    • RoboComp

    • RoboComp is an open-source robotics framework. It is made up of different robotic software components and the tools needed to use them. The running components form a graph of processes that can be distributed over several cores and CPUs using software component technology. Communications are handled by the Ice framework, following the approach taken by the Orca2 robotics system. Our main goals in designing the system are efficiency, simplicity and reusability. Some of RoboComp's components also use other tools such as CMake, Qt4, IPP, OpenSceneGraph and OpenGL. The programming languages used in the project are mainly C++ and Python, but the Ice framework makes it easy to use components written in many other languages. Like ROS, RoboComp can also be called a robot operating system insofar as it provides an abstraction layer over the robot hardware.

      Compared to other similar projects (such as Orca or ROS), the main advantages of RoboComp are its efficiency and its ease of use. A minimal client sketch is shown below.
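      As a rough illustration of how a client talks to a running component, this sketch uses the Ice Python API; the slice file, generated module, endpoint and method name are assumptions modelled on RoboComp conventions, not verbatim API:

      ```python
      import sys
      import Ice

      # Hedged sketch of connecting to a RoboComp-style laser component over Ice.
      Ice.loadSlice("Laser.ice")   # compile the interface definition on the fly
      import RoboCompLaser         # module generated from Laser.ice (assumed name)

      ic = Ice.initialize(sys.argv)
      try:
          base = ic.stringToProxy("laser:tcp -h localhost -p 10003")  # endpoint assumed
          laser = RoboCompLaser.LaserPrx.checkedCast(base)
          if laser is None:
              raise RuntimeError("proxy does not implement the Laser interface")
          data = laser.getLaserData()   # method name assumed
          print("received", len(data), "laser samples")
      finally:
          ic.destroy()
      ```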
    • RobEx

    • RobEx, our new mobile robot prototype, is ready! It is an evolution of Speedy, with a new chassis geometry and a new Wi-Fi-N link. Motion is provided by two Maxon DC motors with HP encoders.

      They are controlled by custom-made 3 A PWM amplifiers driven by an ATmega32 microcontroller board. This board computes the two PID loops at 2 kHz and reads an electronic compass, connected through an I2C bus, at 2 Hz.

      The compass and the motor controllers can be coupled in a higher-order loop to follow an externally commanded magnetic heading (a sketch of this coupled loop appears at the end of this item). At the front we have placed a URG laser (4 m range, 0.5° resolution), also connected through a USB port.

      Another microcontroller board controls the three servos of the stereoscopic head that holds two iSight FireWire cameras. Still another controls a custom-made "vestibular" board.
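      The following minimal sketch illustrates the two-level control described above: an outer compass-heading loop feeding two inner PID speed loops. All gains, rates and signal names are hypothetical; the real firmware runs in C on the ATmega32:

      ```python
      # Hedged sketch of RobEx-style heading control: the outer loop turns the
      # compass-heading error into differential wheel-speed setpoints, and two
      # inner PID loops track those setpoints as PWM duty commands.
      K_HEADING = 2.0        # outer-loop proportional gain (assumed)
      KP, KI = 0.8, 0.1      # inner PID gains (derivative term omitted)
      DT = 1.0 / 2000.0      # inner loops run at 2 kHz

      class PID:
          def __init__(self, kp, ki):
              self.kp, self.ki, self.integral = kp, ki, 0.0

          def step(self, setpoint, measured):
              error = setpoint - measured
              self.integral += error * DT
              return self.kp * error + self.ki * self.integral  # PWM duty command

      left_pid, right_pid = PID(KP, KI), PID(KP, KI)

      def heading_step(target_deg, compass_deg, v_left, v_right, v_forward=0.3):
          """One outer-loop update: heading error -> wheel setpoints -> duties."""
          err = (target_deg - compass_deg + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
          turn = K_HEADING * err / 180.0                            # normalised turn command
          duty_left = left_pid.step(v_forward - turn, v_left)
          duty_right = right_pid.step(v_forward + turn, v_right)
          return duty_left, duty_right

      print(heading_step(90.0, 80.0, 0.0, 0.0))   # steer toward a 90 deg heading
      ```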
  • Robots

    • ROADEX

    • Vision provides humans with much of the information they receive from the outside world. Undoubtedly, a robot with this sensory capability can overcome larger challenges than one that lacks it. Traditionally, there has been some decoupling between studies on artificial vision and those related to robot control. In the first group, the main effort has focused on producing symbolic and/or geometric interpretations of the world from sensory information obtained from one or more views of a scene. Although advances in this regard are numerous, this isolated conception of visual perception has not produced the desired results in robotics.

      In addition, a growing number of studies show that perception is closely related to action and therefore should not be considered a separate process. In order to establish appropriate links between visual perception and action control, two fundamental questions must be answered: how can the influence of actions be included in the perceptual process, and how should actions be regulated according to perception? Our main assumption is that both issues can be solved through attention. More specifically, the basis of our proposal is that attention acts as the connecting link between perception and action, allowing the perceptual process to be driven by actions and actions to be modulated by the perceptual result of attentional control. Based on this idea, we have successfully developed a control architecture that solves navigation tasks on an autonomous mobile robot with stereo vision.

      The main objective of this project is to expand the control architecture of our robots to give them the ability to know and recognize their environment. The new system is a visual SLAM (Simultaneous Localization And Mapping) process, integrated into the architecture as a new behaviour that cooperates with the other behaviours to support the overall control and to allow the robot to adapt to its environment. This new behaviour keeps the attentional philosophy of the system, which gives it the ability to act in both active and passive ways (a minimal sketch of this switching follows at the end of this item).

      On one hand, when the environment is unknown, the SLAM behaviour acts actively, guiding the attention and actions of the robot to explore the environment and build an internal representation of it. Faced with a familiar environment, the SLAM behaviour remains in a passive state while other behaviours act. In that situation, its processing activity is dedicated to staying localized and to updating any change produced in the environment, using the previously obtained representation and the information provided by the visual attention system. These two modes of self-localization allow the robot to adapt progressively to its environment without prior knowledge about it; moreover, this adaptation emerges solely from the interaction between the robot and the environment. As a result, the robot can act effectively in any area without needing to be reprogrammed or explicitly adapted.
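      A minimal sketch of the active/passive switching might look as follows; the confidence threshold and the slam/attention/robot helpers are hypothetical stand-ins for the architecture described above:

      ```python
      from enum import Enum

      # Hedged sketch of the dual-mode SLAM behaviour described above. The
      # threshold and the helper objects are hypothetical placeholders.
      class Mode(Enum):
          ACTIVE = 1    # explore and build the map, guiding attention and motion
          PASSIVE = 2   # stay localized and track changes while other behaviours act

      CONFIDENCE_THRESHOLD = 0.8   # assumed map-familiarity threshold

      def slam_behaviour_step(slam, attention, robot):
          """One control-cycle update of the SLAM behaviour."""
          if slam.map_confidence() < CONFIDENCE_THRESHOLD:
              # Unknown environment: actively drive gaze and motion to explore.
              target = slam.next_unexplored_region()
              attention.fixate(target)
              robot.navigate_to(target)
              return Mode.ACTIVE
          # Familiar environment: passively stay localized and update the map
          # from whatever the visual attention system is already fixating.
          slam.update_pose_and_map(attention.current_fixations())
          return Mode.PASSIVE
      ```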
    • Loki

    • The robot Loki is an AMM built as a collaboration among several Spanish universities and companies, following design and coordination by UEX. It has been built as an advanced social robotics research platform. The mobile base of Loki holds batteries, energy-management electronics, local controllers and a high-end dual-socket Xeon board. Standing on it, a rigid column supports the torso, which holds two arms and an expressive head. Each arm has 7 degrees of freedom (DOF) in an anthropomorphic configuration, built with a mixture of Schunk motors in the upper arm and a custom-made 3-DOF forearm, a 6-DOF force-torque sensor in the wrist and a 4-DOF Barrett BH8 hand with three fingers. The head is connected to the torso by a 4-DOF neck and has two orientable Point Grey FireWire cameras, an ASUS Xtion range sensor and a 5 MP Flea3 camera. It also features an articulated jaw and eyebrows for the synthesis of facial expressions, plus microphones and a speaker for voice communication. The total number of DOF is 37. The control of all these elements is performed by a set of components created with the open-source robotics framework RoboComp, which can easily communicate with other frameworks such as ROS or Orca2. The robot can currently execute a basic repertoire of navigation, manipulation and interaction behaviours, organized under the new RoboCog architecture and developed on top of the RoboComp framework.
    • Thor

    • RoboLab has finished the robot Thor, the new version of its original robot RobEx. It has been completely redesigned to carry a 150 kg load, offering up to 150 Ah at 12 V of electric power.

      Thor has been created to be the base of a high-performance mobile manipulator, endowed with a robotic 7-DOF arm, a hand and an expressive head.

      Thor has an embedded Cortex processor running Linux and RoboComp. The basic components running on board compute odometric information and serve laser data and battery state (a sketch of the odometry update is shown below). Additionally, several reflexes can be added as new components.
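      The odometry computation can be illustrated with a minimal differential-drive sketch; the wheel radius, wheel base and encoder resolution below are placeholder values, not Thor's actual parameters:

      ```python
      import math

      # Minimal differential-drive odometry sketch (illustrative; not Thor's
      # actual component). All physical constants are assumed placeholders.
      WHEEL_RADIUS = 0.075    # m (assumed)
      WHEEL_BASE = 0.40       # m, distance between wheels (assumed)
      TICKS_PER_REV = 2000    # encoder ticks per wheel revolution (assumed)

      def update_odometry(x, y, theta, d_ticks_left, d_ticks_right):
          """Integrate the pose from encoder tick increments since the last call."""
          metres_per_tick = 2.0 * math.pi * WHEEL_RADIUS / TICKS_PER_REV
          d_left = d_ticks_left * metres_per_tick     # left wheel arc (m)
          d_right = d_ticks_right * metres_per_tick   # right wheel arc (m)
          d_centre = (d_left + d_right) / 2.0         # robot advance (m)
          d_theta = (d_right - d_left) / WHEEL_BASE   # heading change (rad)
          x += d_centre * math.cos(theta + d_theta / 2.0)
          y += d_centre * math.sin(theta + d_theta / 2.0)
          return x, y, (theta + d_theta) % (2.0 * math.pi)

      print(update_odometry(0.0, 0.0, 0.0, 100, 120))   # small arc to the left
      ```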
    • Dulce

    • As part of an educational research experience with 5th-grade kids, RoboLab has built Dulce, a robotic turtle that executes Logo programs. This effort is part of a term-long cooperation between maths teacher Petri Colmenero of Dulce Chacón Primary School and the RoboLab team.

      During this term the fifth-graders have received weekly classes on Logo using KTurtle, an open-source Linux application. After this introduction to the basics of computer programming, the participants have now programmed the robot Dulce directly, commanding it to draw letters and geometric figures on a giant board placed on the floor.

      This activity aims to bring computers and robots into the learning process, turning it into an active, "learn by doing" experience.
    • Ursus

    • Ursus is an assistive robot developed in the Laboratory of Robotics and Artificial Vision of the University of Extremadura in collaboration with the Hospital Virgen del Rocío in Sevilla, within the ACROSS project. It is designed to propose games to children with cerebral palsy in order to improve their recovery, and it will also be used as a tool for their therapists. To make it visually pleasant for children, it has a friendly height and has been wrapped in the covering tissue of a teddy bear.

      Patients get real-time feedback on the status of the exercise by looking at the image the robot projects on a wall, screen or TV monitor; this information encourages children to correct themselves. Into this video stream the robot injects virtual objects signalling the limits of the movements. These augmented-reality features are intended to increase the user's sensation of immersion and to extend the duration of the physical exercises. To achieve the necessary tracking precision of the user's arm, Ursus uses the camera under its neck together with augmented-reality techniques (a minimal overlay sketch follows below).
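      A minimal sketch of the augmented-reality feedback might look as follows, using OpenCV; the camera index, limit positions and on-screen text are placeholders, not the actual Ursus software:

      ```python
      import cv2

      # Illustrative overlay sketch in the spirit of Ursus' feedback: virtual
      # limit lines are injected into the video stream shown to the patient.
      LIMIT_HIGH, LIMIT_LOW = 150, 330   # assumed pixel rows marking the arm-range limits

      cap = cv2.VideoCapture(0)          # camera under the robot's neck (assumed index)
      while True:
          ok, frame = cap.read()
          if not ok:
              break
          w = frame.shape[1]
          # Draw the virtual objects signalling the limits of the movement.
          cv2.line(frame, (0, LIMIT_HIGH), (w, LIMIT_HIGH), (0, 255, 0), 2)
          cv2.line(frame, (0, LIMIT_LOW), (w, LIMIT_LOW), (0, 0, 255), 2)
          cv2.putText(frame, "raise your arm to the green line", (10, 30),
                      cv2.FONT_HERSHEY_SIMPLEX, 0.8, (255, 255, 255), 2)
          cv2.imshow("Ursus exercise", frame)   # projected on a wall/screen in practice
          if cv2.waitKey(1) & 0xFF == 27:       # Esc to quit
              break
      cap.release()
      cv2.destroyAllWindows()
      ```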
    • Muecas

    • Muecas is our latest robotic head. It is designed to be used in social robots as a means of transmitting expressions. The most sophisticated components of the head are the eyes: hollow nylon spheres holding FireWire cameras inside.

      The eyeballs are actuated by Faulhaber linear motors without gear reduction, and can achieve speeds and accelerations comparable to those of human eyes. The neck is a 3-DOF parallel actuator driven by DC motors.

      There is also an additional DOF for the pan movement of the head. Further elements are being developed: eyelids, eyebrows and a mouth will complete the basic endowment of this outstanding head.

      All the electronics will be packed in the head and below the neck, including a dual-core Atom embedded processor running RoboComp. All communications in and out of the head will go through a 1 Gbit Ethernet interface.
    • Tornasol

    • Tornasol is our first medium-sized robot. It has an aluminium frame and four wheels (not a very good idea) carrying all the electronics and computers on board. We designed and built its power amplifiers and a microcontroller board based on a 100 MHz Hitachi SH3 RISC processor for PID control of the electric motors. On top of these elements, an AMD K7 processor takes care of image acquisition and processing, and of navigation control. In addition, it is connected to the world through a Gbit Ethernet link.

      In the upper structure we have placed a robotic stereo head with three degrees of freedom and two FireWire cameras. The robot is capable of generating an approximate 3D reconstruction of its environment at a frame rate of 4 Hz (a minimal disparity-based sketch follows below).
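      A minimal sketch of a disparity-based reconstruction of this kind, using OpenCV block matching with placeholder calibration values (the actual pipeline may differ):

      ```python
      import cv2
      import numpy as np

      # Minimal disparity-based reconstruction sketch (illustrative; not
      # Tornasol's actual pipeline). Assumes a calibrated, rectified stereo
      # pair; the image files, focal length and baseline are placeholders.
      FOCAL_PX = 700.0     # focal length in pixels (assumed)
      BASELINE_M = 0.12    # distance between the cameras in metres (assumed)

      left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # rectified left view
      right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)  # rectified right view

      # Block-matching disparity; OpenCV returns fixed-point values scaled by 16.
      stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
      disparity = stereo.compute(left, right).astype(np.float32) / 16.0

      # Depth from disparity: Z = f * B / d, valid where disparity > 0.
      valid = disparity > 0
      depth = np.zeros_like(disparity)
      depth[valid] = FOCAL_PX * BASELINE_M / disparity[valid]
      print("median scene depth: %.2f m" % np.median(depth[valid]))
      ```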
    • Speedy

    • Speedy is an autonomous mobile robot based on a 1/8-scale model car. The robot has a two-level architecture: a mini-ITX board with a C3 x86-compatible processor, and four ATmega32-based custom boards designed and built in the Lab.

      Each of these boards takes care of a low-level behaviour or group of behaviours: PID drive-motor control, steering-servo control and a digital compass for the autopilot on one of them; a tilt-controlled ultrasound scanner on another; a set of 3-axis accelerometers for inclination sensing on a third; and servo control of a pan-tilt camera on the last. Limitations in the turning radius suggest porting all the hardware to a new robotic base.

      We have started new work on component-based software using the Ice communications framework.
    • Pneux

    • Pneux is a complete mechanical design for a biped robot created by Sebastian K., a former graduate student in the Lab. Its actuators are pneumatic muscles, a technology we work on from time to time, when we get funds. It has 2 DOF in the ankles, 1 in the knees, 3 in the hips and a 3-DOF stereovision head.

      More research on dynamic walking robots has been pursued in collaboration with Professor Félix Monasterio at the Universidad Politécnica de Madrid.
    • Stereo Head

    • As part of his Thesis, Sebastian designed a stereo robotic head that was built and mounted on the robot, replacing the old one. This new head is faster and more accurate than the old one. The cameras are Sony FireWire models with controllable zoom, focus and many other parameters.
    • Insex

    • Insex is a hexapod machine built by last-year undergraduates. Its parts are made of a nylon-fibreglass compound. The joints are powered by 12 servo motors controlled by three cascaded commercial boards. Its small brain is coded in C++ on a computer outside the body, following a fairly complex component-based distributed design. The main limitation is the absence of force feedback from the feet.
    • Miura

    • Miura is the stereo version of Pulguita: five degrees of freedom with digital PID-controlled servos and two iSight FireWire cameras. It is especially suited to the new dual-core processors.

      A paper describing the distributed architecture for visual attention and navigation that we are currently working on has been submitted to ICINCO-MARS 2007.
    • Pulguita

    • Pulguita is a quick algorithm demonstrator that lives on our desks. It can be connected to any computer and is used to try out and debug vision and navigation algorithms. The camera is Apple's iSight and, for the pan-tilt unit, we use two digital servos from Futaba. The base is also built on top of two servos, all of them controlled by a microcontroller board connected to the PC through a serial link (a hypothetical command sketch follows below). It is low-cost, fast to assemble and can be used in robotics or vision courses.
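      A minimal sketch of driving such a serial link from the PC side; the port, baud rate and ASCII command format are hypothetical, since the real microcontroller protocol is not documented here:

      ```python
      import serial   # pyserial

      # Illustrative sketch of commanding Pulguita-style pan-tilt servos over
      # the serial link. The "P<pan> T<tilt>" command format is invented.
      link = serial.Serial("/dev/ttyUSB0", 115200, timeout=0.1)

      def set_pan_tilt(pan_deg: int, tilt_deg: int) -> None:
          """Send a pan/tilt setpoint in degrees as one ASCII command line."""
          link.write(f"P{pan_deg} T{tilt_deg}\n".encode("ascii"))

      set_pan_tilt(0, 10)   # look slightly upward
      link.close()
      ```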
    • Master-Slave Surgical Tool

    • The Master-Slave Surgical Tool is a pair of robotized minimally invasive surgical instruments. The goal of this project is to design a mechanism to operate a surgical instrument remotely, with force feedback. Two chassis support the instruments, and a custom-made torque sensor is used to reproduce on the master side the resistance encountered by the slave (a minimal force-reflection sketch follows after this item).

      This project has been developed in collaboration with the Minimally Invasive Surgical Center of Cáceres, Spain and the group of Professor Félix Monasterio at the Universidad Politécnica de Madrid.
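      A minimal sketch of the force-reflection idea, with toy joint dynamics standing in for the real hardware; the gains, control period and joint model are assumptions, not the actual controller:

      ```python
      import random

      # Hedged sketch of a position-forward / force-backward teleoperation
      # loop: the slave tracks the master's position while the torque measured
      # at the slave is reflected back to the master's motor.
      KP_SLAVE = 5.0    # slave position-tracking gain (assumed)
      K_FORCE = 0.8     # force-reflection gain to the master (assumed)
      DT = 0.001        # control period: 1 kHz (assumed)

      class Joint:
          """Stub joint: position integrates applied torque (crude toy dynamics)."""
          def __init__(self):
              self.q = 0.0      # angle (rad)
              self.tau = 0.0    # last applied torque
          def read_position(self):
              return self.q
          def apply_torque(self, tau):
              self.tau = tau
              self.q += tau * DT
          def read_torque_sensor(self):
              return self.tau + random.gauss(0.0, 0.001)   # sensed torque + noise

      def control_step(master, slave):
          """Forward path: slave tracks master; backward path: reflect torque."""
          error = master.read_position() - slave.read_position()
          slave.apply_torque(KP_SLAVE * error)                         # position command
          master.apply_torque(-K_FORCE * slave.read_torque_sensor())   # felt resistance

      master, slave = Joint(), Joint()
      master.q = 0.3    # surgeon displaces the master handle by 0.3 rad
      for _ in range(1000):     # one simulated second
          control_step(master, slave)
      print("slave follows master: %.3f rad vs %.3f rad"
            % (slave.read_position(), master.read_position()))
      ```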