KTH researchers are helping robots learn to cooperate, a difficult but potentially revolutionary challenge.
A child-sized robot shuffles toward a ball on the floor of the Computer Vision and Active Perception Lab at KTH Royal Institute of Technology, as a team of student researchers watches intently.
The robot stoops down and reaches for the ball, without success. An adenoidal, high-pitched voice pipes up: “I missed,” the robot says. “Bad luck. I’ll try again.” And it does, missing yet again. But on the third attempt, it lifts the ball. “I’ve got it,” the robot says, before tossing the ball.
To an outsider, the demonstration might appear to be no more than an exercise in basic robotic abilities, but the activity is actually part of an ambitious project to make robots co-operate with one another on complex missions using body language.
Funded by the EU's Seventh Framework Programme for Research, RECONFIG (Cognitive, Decentralized Coordination of Heterogeneous Multi-Robot Systems via Reconfigurable Task Planning) began early in 2013 and runs to 2016, with project partners at Aalto University in Finland, the National Technical University of Athens in Greece, and the École Centrale Paris in France.
In the not-so-distant future, robotics will play a major role in the service-oriented tasks of everyday life – at home, in the classroom and at work – and robotic vacuum cleaners have already made their way into our homes. But sometimes a single robot is not capable enough to carry out a certain task on its own, such as moving heavy objects.
Dimos Dimarogonas, Assistant Professor in Automatic Control and coordinator of the project, says that the basic aim is to get one robot to understand when another robot needs its help, and to change its plans accordingly.
“By studying each other’s body language, our robots will learn to interpret information and utilise it,” Dimarogonas says. “If they cannot accomplish their individual plans, once they have met and exchanged plans, they can reconfigure to pursue a mutually satisfactory goal.” RECONFIG aims for decentralized coordination of robot teamwork, in which each robot identifies its own tasks. The work underway in the robot lab has so far focused on individual robots, or agents, but Dimarogonas says the team is working toward multi-robot systems.
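To make the plan-exchange idea concrete, here is a minimal sketch in Python of how two robots might swap the tasks they cannot complete alone. Everything in it – the Robot class, the task names and the feasibility test – is a hypothetical illustration, not RECONFIG’s actual software.

```python
# Illustrative only: two robots meet, exchange plans, and reassign the
# tasks each cannot perform itself to a teammate that can.
from dataclasses import dataclass

@dataclass
class Robot:
    name: str
    plan: set          # tasks this robot currently intends to carry out
    capabilities: set  # tasks this robot is physically able to perform

    def feasible(self, task):
        return task in self.capabilities

    def exchange_and_reconfigure(self, other):
        """On meeting, hand over the tasks each robot cannot do itself."""
        stuck_here = {t for t in self.plan if not self.feasible(t)}
        stuck_there = {t for t in other.plan if not other.feasible(t)}
        # Each robot adopts the other's stuck tasks that it can actually do.
        # (In this toy version, a task neither robot can do is simply dropped.)
        self.plan = (self.plan - stuck_here) | {t for t in stuck_there if self.feasible(t)}
        other.plan = (other.plan - stuck_there) | {t for t in stuck_here if other.feasible(t)}

a = Robot("A", plan={"lift_ball", "open_door"}, capabilities={"lift_ball"})
b = Robot("B", plan={"fetch_cup"}, capabilities={"open_door", "fetch_cup"})
a.exchange_and_reconfigure(b)
print(a.plan)  # {'lift_ball'}
print(b.plan)  # {'fetch_cup', 'open_door'} – B picks up the door task A could not do
```

A real system would keep an unassignable task pending and retry at the next encounter rather than dropping it, but the core idea – reconfiguring plans after an exchange – is the same.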
“We are building from the ground up,” he says. “We need to make sure it is robust enough that when we move to the multiple agent platform, it will work there as well.”
Getting robots to co-operate is difficult, since each robot has its own view of the world. Two robots may categorize the same object differently, depending on their perceptual viewpoints – somewhat like humans. But if even humans have trouble with common perception, how will robots manage? This is the question the research partners are trying to address.
The researchers are trying to get the robots to agree on how to interpret sensor-based information – for instance, whether they are seeing or grasping the same object.
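One toy way to picture such agreement: each robot scores the object against candidate labels from its own viewpoint, and the team pools the scores to settle on a shared label. The pooling scheme below is an assumption made for illustration, not the project’s method.

```python
# Purely illustrative "perceptual agreement": robots that classify the same
# object differently settle on a shared label by pooling confidence scores.
from collections import Counter

def agree_on_label(observations):
    """observations: one dict per robot, mapping label -> confidence."""
    pooled = Counter()
    for obs in observations:
        for label, confidence in obs.items():
            pooled[label] += confidence
    # The label with the highest pooled confidence wins.
    return pooled.most_common(1)[0][0]

robot_a = {"chair": 0.7, "stool": 0.3}  # seen from the front
robot_b = {"stool": 0.6, "chair": 0.4}  # seen from above
print(agree_on_label([robot_a, robot_b]))  # chair  (0.7 + 0.4 > 0.3 + 0.6)
```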
By combining reconfigurable task planning with implicit sensing-based communication (embedded, cognition-coupled communication) and explicit body language, such as pointing at an object, the robots will be able to re-evaluate their pre-programmed plans on the basis of common perceptual agreement. They will also be able to update both these plans and the corresponding trajectories in line with their pre-programmed tasks.
“Let’s say that a team of robots is pre-programmed to find and carry all the chairs in a room,” Dimarogonas says. “One of them identifies a new chair in real time. It will then be able to decide whether to carry the chair itself or inform another robot to change its plan and take care of the new chair.”
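A hedged sketch of that decision, assuming the team simply hands a newly spotted task to the least-busy robot capable of doing it – the names and the load heuristic are invented for illustration:

```python
# Hypothetical task reallocation for the chair scenario: the new chair goes
# to the least-busy robot that can carry chairs (possibly the finder itself).
def assign_new_task(kind, item, robots):
    """robots: dicts with 'name', 'plan' (set of items), 'capabilities' (set of kinds)."""
    capable = [r for r in robots if kind in r["capabilities"]]
    chosen = min(capable, key=lambda r: len(r["plan"]))  # simple load heuristic
    chosen["plan"].add(item)
    return chosen["name"]

team = [
    {"name": "R1", "plan": {"chair_1", "chair_2"}, "capabilities": {"carry_chairs"}},
    {"name": "R2", "plan": {"chair_3"}, "capabilities": {"carry_chairs"}},
]
print(assign_new_task("carry_chairs", "chair_4", team))  # R2 – it has the lighter plan
```

In RECONFIG’s decentralized setting, each robot would make this call locally after exchanging plans with its teammates, rather than relying on a central allocator.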
If successful, the project could set the standard for real-time robot co-operation in domestic service activities, or even for robots working as assistants in hospitals or schools, Dimarogonas says. “The School of Electrical Engineering is really excited about this, because lower and higher education are among the stakeholders we have targeted for this project.
“There’s a lot of work to be done, but we could see robot teams like this working in schools relatively soon.”