Tufts University - HRI Lab
  • Offer Profile
  • Our current projects focus on effective human-robot interaction through natural language dialogue and dynamic robot autonomy in a variety of settings, including search-and-rescue scenarios and wheelchair and telepresence operations. In this context, we are developing new mechanisms for situated natural language understanding and multi-modal information integration in unknown environments. These mechanisms are integrated into our DIARC architecture for natural human-robot interaction. We are also developing a robust, fault-tolerant multi-agent system infrastructure (called ADE) for robotic architectures that will ensure the sustained, long-term operation of future robots.
  • Product Portfolio
  • Research

    • affective control and evolution
    • interactions between affect and cognition
    • cognitive robotics for human-robot interaction
    • embodied situated natural language interactions
    • multi-scale agent-based and cognitive modeling
    • architecture development environments for complex robots
    • Temporal, Environmental, and Social Constraints of Word-Referent Learning in Young Infants: A NeuroRobotic Model of Multimodal Habituation

    • We present a neuroanatomically based, embodied computational model of multimodal habituation to explore the temporal and social constraints on the learning observed in very young infants. In particular, the model is able to explain empirical results showing that auditory word stimuli must be presented synchronously with visual stimulus movement for the two to be associated.
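      The synchrony constraint described above can be illustrated with a toy model. This is a hypothetical sketch, not the lab's actual neuroanatomical model: it simply strengthens a word-object association only when an auditory onset falls within a short temporal window of a visual-motion onset, so synchronous presentation builds an association while asynchronous presentation does not.

```python
# Hypothetical sketch of the synchrony constraint on word-referent learning.
# All names, window sizes, and gains here are illustrative assumptions.

def association_strength(audio_onsets, motion_onsets, window=0.5, gain=0.2):
    """Accumulate association weight for synchronous audio-visual events.

    audio_onsets, motion_onsets: event times in seconds (word utterances
    and object-motion onsets, respectively).
    window: maximum |audio - motion| gap (s) still counted as synchronous.
    gain: weight increment per synchronous pairing.
    """
    w = 0.0
    for a in audio_onsets:
        # A word counts toward the association only if some motion onset
        # occurs close enough in time to it.
        if any(abs(a - m) <= window for m in motion_onsets):
            w += gain
    return min(w, 1.0)  # saturate the association weight

# Synchronous presentation: words coincide with motion, association forms.
sync = association_strength([0.0, 2.0, 4.0], [0.1, 2.1, 4.2])
# Asynchronous presentation: words offset from motion, no association.
asyn = association_strength([0.0, 2.0, 4.0], [1.0, 3.0, 5.0])
```

      Running this gives a nonzero weight only in the synchronous case, mirroring the empirical finding that temporally offset word-motion pairings are not associated.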
    • Investigating Multimodal Real-Time Patterns of Joint Attention in an HRI Word Learning Task

    • A snapshot from the participant’s first-person view: the participant sits across a table from a robot, trying to teach it object names.

      The cross-hair indicates the participant’s eye gaze at this moment. In this example, the robot was not following the participant’s attention.
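      One simple way to quantify whether the robot is following the participant's attention, as in the example above, is to compare per-frame gaze targets. The sketch below is an assumption for illustration (the metric, lag tolerance, and names are not from the study): it scores joint attention as the fraction of participant fixations that the robot reaches within a short lag.

```python
# Hypothetical joint-attention metric for a gaze-following HRI task.
# Inputs are per-frame attended-object IDs; None means no fixation.

def joint_attention_rate(participant_gaze, robot_gaze, max_lag=3):
    """Fraction of participant fixations the robot matches within max_lag frames.

    participant_gaze, robot_gaze: sequences of object IDs (or None) per frame.
    max_lag: frames of tolerance for the robot to catch up to the participant.
    """
    matches = 0
    valid = 0
    for t, target in enumerate(participant_gaze):
        if target is None:
            continue  # skip frames without a participant fixation
        valid += 1
        # The robot "follows" if it attends the same object within max_lag frames.
        if target in robot_gaze[t:t + max_lag + 1]:
            matches += 1
    return matches / valid if valid else 0.0

# Example traces: the robot follows most, but not all, fixations.
p = ["cup", "cup", "ball", "ball", None, "cup"]
r = ["cup", "ball", "ball", "ball", "ball", "cup"]
rate = joint_attention_rate(p, r)
```

      A rate near 1.0 would indicate consistent gaze following; in the snapshot described above, where the robot was not following the participant's attention, the rate would be low.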
    • Planning for Human-Robot Teaming in Open Worlds

    • A Pioneer P3-AT on which the planner integration was verified.