Activities in a warehouse are often unergonomic, physically demanding, and monotonous. Robots with cognitive abilities can support people in their work if they are able to recognize their environment, the current situation, and human intentions. Such intelligent robotic systems could also be used in everyday life, in particular to assist physically handicapped people.
In the "2+2 project" WALL-ET, funded by the Federal Ministry of Education and Research (BMBF), German and Korean partners from research and industry are working on a social transport robot that, with the help of cognitive skills, can support people in their work in a warehouse or in everyday shopping. WALL-ET stands for "Warehouse Autonomous Lean Logistics Entity for Transportation". The robotic platform itself is being developed and set up by the Korean research partner KIMM (Korea Institute of Machinery & Materials). Together with the Korea Institute of Science and Technology Europe (KIST Europe), based in Saarbrücken, DFKI will provide the necessary intelligence for the system, e.g. software modules for voice interaction, multimodal dialog, and activity and intention recognition.
The developed platform will be tested in Germany in cooperation with Globus SB-Warenhaus Holding. The application-oriented Innovative Retail Laboratory (IRL), jointly operated in St. Wendel by the founding partners Globus, Saarland University, and DFKI, offers the ideal test environment. Through the cooperation with Globus, interested supermarket customers can also experience the latest innovations. On the Korean side, the developed robot system will be tested by the stationery manufacturer Dong-A in its warehouse in Daejeon.
Multimodal dialog
Many providers now offer commercially available voice-controlled assistance systems. These concentrate on relatively simple voice commands or requests, such as "Turn on the light in the living room" or "How is the weather today?" However, human communication is much more complicated. We don't just speak and hear; we also use other so-called modalities, such as facial expressions and gestures, and take into account the context of the current conversation and the environment, both when speaking and when interpreting what is being said. In addition, missing or unclear information within a dialog is often actively requested by the dialog partners. Multimodal dialog systems take this human behavior as a model in order to make conversations with robots or digital assistance systems as natural as possible. DFKI has been conducting research in the field of artificial intelligence for over 30 years and, in projects such as Verbmobil and SmartKom, has developed systems that remain pioneering achievements today, particularly in the fields of speech processing and multimodal dialog.
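The idea of fusing modalities and actively asking for missing information can be illustrated with a small sketch. The following Python snippet is not part of the WALL-ET software; it is a hypothetical, minimal example showing how an ambiguous utterance ("Bring me that") could be resolved by a pointing gesture, and how a clarification question is generated when neither modality supplies the missing information:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MultimodalInput:
    """One fused observation: recognized speech plus an optional gesture referent."""
    utterance: str                          # output of speech recognition
    pointing_target: Optional[str] = None   # object resolved from a pointing gesture

def interpret(inp: MultimodalInput) -> str:
    """Fuse speech and gesture; ask a clarification question if information is missing."""
    if "bring" in inp.utterance.lower():
        # "Bring me that" alone is ambiguous; a pointing gesture can resolve "that".
        if inp.pointing_target:
            return f"OK, fetching the {inp.pointing_target}."
        # No gesture available: actively request the missing slot, as human
        # dialog partners would.
        return "Which item should I bring?"
    return "Sorry, I did not understand."

print(interpret(MultimodalInput("Bring me that", pointing_target="red box")))
print(interpret(MultimodalInput("Bring me that")))
```

Real multimodal dialog systems of course go far beyond this toy slot-filling logic, maintaining dialog state, discourse context, and probabilistic fusion of the input modalities.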
Project partners:
Germany:
– Korea Institute of Science and Technology Europe, KIST Europe (consortium leader)
– German Research Center for Artificial Intelligence GmbH, DFKI
– Globus SB-Warenhaus Holding
South Korea:
– Korea Institute of Machinery & Materials, KIMM
– DONG-A Pencil Co.
Project volume:
Total volume approx. €1 million
On the German side approx. €500,000
Project duration:
01.04.2019 – 31.03.2022
Press contact:
Reinhard Karger
Corporate Spokesperson
German Research Center for Artificial Intelligence (DFKI)
Campus D3 2
66123 Saarbrücken, Germany
E-mail: Reinhard.Karger@dfki.de
Phone: +49 681 85775-5096
Contact for scientific information:
Prof. Dr. Antonio Krüger
Head of Research Department Cognitive Assistants (COS)
German Research Center for Artificial Intelligence (DFKI)
Campus D3 2
66123 Saarbrücken, Germany
E-mail: Antonio.Krueger@dfki.de
Phone: +49 681 85775-5075