Our research focuses on creating a robotic system that aids human-to-human communication. The robot acts as a personal companion that understands the user's emotions and helps express them alongside the user. First, the user's facial expression is detected through a connected camera device, which relays the retrieved information to a humanoid robot. The humanoid robot then performs physical gestures that match the detected emotion. With this system, those who are unable to freely move their own bodies can add a physical component to their communication. In this paper, we have evaluated the efficacy of translating detected facial expressions into robot movements. Through experiments and surveys, we determined whether our proposed 'Ex-Amp Robot' helped enhance the communication of a hypothetically tetraplegic user.
Robot Avatar for Enhancing the Communication of Physically Disabled Humans: Ai Kashii, Kazunori Takashio and Hideyuki Tokuda, 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2016), Aug. 2016
The robot avatar proposed in this paper is a personal companion that captures users' facial expressions and translates the retrieved emotion data, in real time, into the gestures best suited to express the user's emotion. This robot allows users to enjoy the physical aspects of communication, as well as to add emphasis to parts of a conversation and thereby raise its quality. In this research, we have conducted experiments to validate the efficacy of translating facial expressions into robot movements.
Expression Amplifying Robot — A Personal Expression Translator — : Ai Kashii, Kazunori Takashio and Hideyuki Tokuda, IEICE Tech. Rep., vol. 116, no. 106, CNR2016-3, pp. 11-16, June 2016.
Currently, various robots aid and enhance our daily lives. They help us in the form of communication, manufacturing, and physical labor. Even within the category of enhancing communication, robots may be used in a myriad of ways, including as an avatar (a representation of an identity) or as a standalone conversation companion. In this paper, we have focused on creating a robotic system that aids human-to-human communication. This robot acts as a personal companion that understands the user's emotions and helps express them alongside the user. It first detects the user's facial expression through a connected camera device, which then relays the retrieved information to a humanoid robot. The humanoid robot then performs physical gestures according to the detected emotion. With this personal robot system, a person who is unable to freely move their own body can add a physical component to their communication, as the robot acts as the user's body in the conversation. In this work, we have conducted experiments to validate the efficacy of translating detected facial expressions into robot movements. Through experiments and surveys, we determined whether our proposed 'Ex-Amp Robot' helped enhance communication between a hypothetically tetraplegic user and another person.
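The camera-to-gesture pipeline described above can be illustrated with a minimal sketch. All names here (detect_expression, GESTURES, relay) and the emotion-to-gesture mapping are illustrative assumptions, not the authors' actual implementation:

```python
# Hypothetical sketch of the Ex-Amp pipeline: a detected facial-expression
# label is mapped to a gesture command for a humanoid robot.

# Assumed emotion-to-gesture table; the real system's gestures may differ.
GESTURES = {
    "happy": "raise_both_arms",
    "sad": "lower_head",
    "angry": "clench_fists",
    "surprised": "step_back",
    "neutral": "idle_pose",
}

def detect_expression(frame):
    """Stand-in for a camera-based expression classifier."""
    # A real system would run a facial-expression model on the camera frame.
    return frame.get("label", "neutral")

def choose_gesture(emotion):
    """Map a detected emotion to a robot gesture, defaulting to an idle pose."""
    return GESTURES.get(emotion, "idle_pose")

def relay(frame):
    """Camera frame -> detected emotion -> robot gesture command."""
    return choose_gesture(detect_expression(frame))

print(relay({"label": "happy"}))  # -> raise_both_arms
```

Defaulting unknown emotions to an idle pose keeps the robot's behavior safe when the classifier is uncertain.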
Implementation of a Mirror Recognition Robot Based on MoNAD on Mindstorms EV3: nago
As a step toward studying robot construction, we implemented on Mindstorms EV3 a mirror recognition robot that understands that the image in a mirror is the robot itself. We also prototyped a program that learns imitation using a genetic algorithm.
Dynamic Conversation Topic Generation with Tweets: theramin
Utterance generation remains a problematic issue. The dialog system we are implementing considers the conversation partner's tweets to steer the conversation toward topics that are easy for a robot to respond to.
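A minimal sketch of this idea: scan a partner's tweets for topics and pick the one the robot can respond to most easily. The keyword lists, ease-of-response scores, and function names are all illustrative assumptions, not the system's actual design:

```python
# Hypothetical sketch of tweet-driven topic selection: from a partner's
# recent tweets, pick the topic the robot can most easily respond to.

# Assumed ease-of-response scores (higher = easier for the robot).
RESPONSE_EASE = {"food": 3, "weather": 2, "music": 1}

# Assumed keyword lists for spotting topics in tweet text.
KEYWORDS = {
    "food": ["lunch", "ramen", "dinner"],
    "weather": ["rain", "sunny", "cold"],
    "music": ["song", "concert", "band"],
}

def topics_in(tweet):
    """Return the set of topics whose keywords appear in the tweet."""
    text = tweet.lower()
    return {topic for topic, words in KEYWORDS.items()
            if any(w in text for w in words)}

def pick_topic(tweets):
    """Pick the easiest-to-respond topic mentioned in the tweets, or None."""
    if not tweets:
        return None
    mentioned = set().union(*(topics_in(t) for t in tweets))
    if not mentioned:
        return None
    return max(mentioned, key=RESPONSE_EASE.get)

print(pick_topic(["Had ramen for lunch!", "So cold today"]))  # -> food
```

A real system would replace the keyword matching with a proper topic classifier over the partner's tweet history.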
Drunken Person Detection with Floor Devices: ramp
In recent years, accidents have occurred frequently at train stations, many of them caused by drunken people.
In this research, we aim to create a system that prevents drunken people from falling off the station platform. As a first step, we attempt to detect pedestrians using floor devices.