WikiTalk is basically a talking Wikipedia. You interact with the robot by speech, navigating to whatever Wikipedia topics interest you, and the robot tells you about them using information from Wikipedia.
The robot gets the information directly from Wikipedia over wi-fi, so you get the latest up-to-date information even for rapidly changing topics. Wikipedia information is trustworthy because it is continuously validated by a worldwide community of volunteer editors.
WikiTalk anticipates what you will probably want to hear about next and extracts links to related topics in Wikipedia. If you just say the name of a related topic, the robot switches to the new topic and starts talking about it. Predicting likely topic shifts like this also helps speech recognition.
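The link-extraction step described above can be sketched in a few lines of Python. This is a minimal offline illustration, not the actual WikiTalk code: it assumes the article text is available as raw wikitext, and the regex and namespace filter are my own simplifications. The extracted topic names could then be fed to the speech recogniser as likely next utterances.

```python
import re

def extract_link_topics(wikitext: str) -> list[str]:
    """Extract link targets ([[Target]] or [[Target|label]]) from raw
    wikitext, skipping non-article namespaces such as File: and Category:."""
    topics = []
    for target in re.findall(r"\[\[([^\]|#]+)(?:[^\]]*)?\]\]", wikitext):
        target = target.strip()
        if ":" in target:        # skip File:, Category:, and similar links
            continue
        if target and target not in topics:
            topics.append(target)
    return topics

# Example: a fragment of (invented) wikitext about the Nao robot.
sample = ("Nao is a humanoid robot made by [[Aldebaran Robotics]], "
          "used in [[RoboCup]] and in research on "
          "[[human-robot interaction|HRI]]. [[File:Nao.jpg|thumb]]")
print(extract_link_topics(sample))
```

The resulting list ("Aldebaran Robotics", "RoboCup", "human-robot interaction") is exactly the kind of related-topic vocabulary that makes both topic shifting and speech recognition easier: the recogniser only needs to spot one of a handful of expected names rather than an open-ended utterance.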
WikiTalk is multimodal and multilingual. It integrates face-tracking, nodding and communicative gesturing with speech synthesis and speech recognition, following the Constructive Dialogue Model (CDM). The system currently works in English, Finnish and Japanese.
A first implementation of WikiTalk on Nao robots was made at Supélec in Metz, France, in 2012 during a one-month international PhD summer school. A Finnish language localisation was made at the University of Helsinki by the Academy of Finland DigiSami project as a step towards SamiTalk.
Since 2016 WikiTalk has been developed by CDM Interact, a Finnish start-up company that I set up with Graham Wilcock. Multilingual WikiTalk with a Japanese language localisation made at Doshisha University was demonstrated at COLING 2016 in Osaka, Japan.
In 2017 WikiTalk won the Best Robot Design award (Software Category) at ICSR 2017, the International Conference on Social Robotics, in Tsukuba, Japan.