Telepresence robots that rely on conventional video chat do not fully convey the remote user's presence. To enhance that presence, we propose a novel approach for telepresence robots that replaces the remote user's background with an image captured in real time behind the robot. We hypothesized that the remote user's own background, shown on the telepresence robot's display, is one of the factors that reduces presence: it clearly reminds the people talking with the robot that "the remote user is in another place." In this paper, we explain the new approach and the implementation and evaluation of the system. We developed a system that replaces the remote user's background with the real-time background of the place where the robot stands, and evaluated subjective impressions of communication through it. We found the approach effective for enhancing the presence of remote users and for reducing the negative impression caused by delay.
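The core compositing step described above, replacing everything outside the remote user's silhouette with the frame captured behind the robot, can be sketched as follows. This is a minimal illustration, not the actual implementation; the function name and the assumption that a person-segmentation mask is already available are mine.

```python
import numpy as np

def composite_background(user_frame, robot_bg_frame, person_mask):
    """Replace the remote user's background with the frame captured behind
    the robot. `person_mask` is a boolean HxW array that is True where the
    remote user's body appears; obtaining it (e.g. via a person-segmentation
    model) is assumed and not shown here."""
    out = robot_bg_frame.copy()          # start from the robot-side view
    out[person_mask] = user_frame[person_mask]  # paste the user's silhouette on top
    return out
```

In practice the mask would come from a background-subtraction or person-segmentation method, and both frames would need to be aligned to the same resolution.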
Proposal of a Communication Channel Establishment Method with an In-Screen Agent in Multi-Person Dialogue: fly
In recent years, opportunities to see people on video calls, and agents displayed on monitors, have increased, and with them the number of in-screen guidance agents that interact with people. In a one-to-one interaction with an in-screen agent, the listener and the speaker are fixed, so no other person needs to be taken into account. In a conversation among two or more people, however, such as around a telepresence robot, it becomes a problem to address a specific person in the vicinity: because of the Mona Lisa effect, an agent in the screen, or a person shown on a monitor, cannot appear to look at a particular point outside the monitor. Much research has been done on making figures on a screen appear to look at someone. In this research, we focus on the establishment of communication channels and propose an in-screen agent that talks with a specific person while engaging in a multi-person conversation. We proposed and evaluated a method that makes the background of the agent's monitor follow the gaze target, a method that gives inertia to the agent's body posture during following, and a method that expresses the gaze target by reflecting it directly in the agent's pupils. Prior studies of in-screen agents that perform gaze evaluate the experience of the gaze target, but not what conversation participants other than the gaze target think. In this study, we develop the pupil-reflection method and the methods of previous research, and perform a comparative experiment.
Implementation and Evaluation of a Multi-Conversation Scheduler Considering Participants' Social Behavior: theramin
When a robot works in a public space, multiple users behave according to their own purposes, so the robot can receive multiple tasks at the same time; for example, one conversation can interrupt another. A robot should suspend its current conversation when a third person calls it while it is already talking with another user, or greet a friend who happens to pass by. In this study, I designed robot behavior for the situation where the interrupter expects to start a new conversation with the robot. Traditional studies of conversational interruption have treated interruption as a cooperative act within the current conversation; few have addressed interruption from outside the conversation. For a robot to handle such interruptions, it must detect them during a conversation, prioritize the conversations, and build consensus with the users involved. CACTS-C not only schedules conversations based on four factors (conversation length, the relation between the interrupter and the interrupted person, the stakeholders' tasks, and the emotion at the moment of interruption), but also holds a consensus-building conversation with the person who must wait for the robot until the prioritized conversation ends. I implemented a robot-conversation system using CACTS-C. The conversation scenario was written in AIML-ap, our own extension of AIML (Artificial Intelligence Markup Language), a markup language for chatbots, which describes conversation scenarios in adjacency-pair units. I evaluated the behavior of the implemented robot and discussed the effectiveness of the CACTS-C scheduler model. Experimental results revealed that conversation prioritization is effective for conversation scheduling. In addition, CACTS-C's persuasion behavior gave a better impression of the robot's fairness than simply stating the reason for the interruption.
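Scheduling on the four factors named above could be sketched as a weighted score per conversation; the robot suspends the current conversation when the interrupter scores higher. The weights, the [0, 1] normalization, and the inversion of the length factor are illustrative assumptions, not CACTS-C's actual model.

```python
def conversation_priority(length_remaining, relation, task_urgency, emotion_intensity,
                          weights=(0.2, 0.3, 0.4, 0.1)):
    """Score a conversation using the four factors from the text.
    All inputs are normalized to [0, 1]; the weights are assumed values."""
    w_len, w_rel, w_task, w_emo = weights
    # A conversation with little time remaining is cheap to finish first,
    # so remaining length counts against priority: invert it.
    return (w_len * (1.0 - length_remaining)
            + w_rel * relation
            + w_task * task_urgency
            + w_emo * emotion_intensity)

def should_interrupt(current, interrupter):
    """Suspend the current conversation when the interrupter scores higher."""
    return conversation_priority(**interrupter) > conversation_priority(**current)
```

A consensus-building step, as in CACTS-C, would then explain the decision to whichever party is asked to wait.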
The miniaturization and decreasing cost of networked computers, along with the advancement of cloud infrastructure, have eased their adoption in commercial products. Additionally, as a result of the rising demand for edge computing, the field of IoT has evolved: IoT devices are no longer limited to collecting data, but can actively send life-supporting information to their users. However, it is difficult to give such functions to simple products like daily necessities, so this role has fallen to devices such as smart speakers that provide a wide variety of information. In this research, we propose the v-IoT system, an approach that adds information-giving functions to consumables such as plastic bottles using mixed-reality technology. The system determines the information to present, and the action the user is invited to take, based on the affordance of the object. The most closely associable pairs of suggested action and object are determined with reference to an Associative Concept Dictionary, enabling v-IoT to offer a notification system that is apparent and intelligible. A preliminary experiment evaluated the user's impression and the usability of being informed by the v-IoT system, together with a performance-based evaluation of expressiveness and workload from the designer's perspective. The results showed that the information was transparent to users; however, when the object the user had in mind did not coincide with the object suggested by the system, users' evaluations were poor. We therefore improved the system by recognizing and taking into account the user's activity alongside the Associative Concept Dictionary. This thesis discloses the specifics of the v-IoT system, the improved system, and a follow-up evaluation from the user's perspective.
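Choosing the closest associable action for an object, as described above, can be sketched as a lookup over association scores. The dictionary entries and scores below are toy assumptions in the spirit of an Associative Concept Dictionary, not data from the actual resource.

```python
# Toy association scores between objects and suggested actions;
# entries and values are illustrative assumptions only.
ASSOCIATION = {
    "plastic bottle": {"drink": 0.9, "recycle": 0.7, "refill": 0.4},
    "umbrella": {"take along": 0.8, "dry": 0.5},
}

def suggest_action(obj):
    """Return the most strongly associated action for an object,
    or None when the object has no entry."""
    actions = ASSOCIATION.get(obj)
    if not actions:
        return None
    return max(actions, key=actions.get)
```

The improved system described in the text would additionally weight these scores by the recognized user activity before taking the maximum.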
Robots, which were traditionally seen mostly in factories, are now becoming commonplace in daily life. Conventional hard and powerful robots, however, can become weapons and incite fear when cooperating with humans. Soft robots are the attempt to add softness to a robot's structure, exterior, and actuators, and they have begun to be applied not only to service robots but also to a wide range of fields such as biomechanics, industry, and medical care. Prototyping soft robots, however, has remained difficult. In this study, we developed "ModuRo", a modular robot equipped with shape-memory-alloy soft actuators, and proposed a smooth prototyping environment for soft robots. We implemented two ModuRo prototypes, evaluated their performance as actuators, and ran a usability test on prototyping a stuffed soft robot. The performance evaluation found that the soft actuator produced a stable output. The usability test showed no significant difference in expressive power between the ModuRo system and existing code-based programming, but suggested that work efficiency improved when using ModuRo.
With the video gaming industry expanding into ever newer territories, video games are no longer just entertainment: they can be a career, a competition, healthcare, or social interaction. Among the values video games bring, the one closest to casual players is social interaction, in the physical world or in cyberspace. However, interaction with an entity without embodiment can cost players engagement and satisfaction, and for more vulnerable people who rely on video games as a source of social interaction, the lack of embodiment can greatly reduce the enjoyment of play. This research discusses a possible solution to that lack. Many player-versus-player games offer a mode against a CP (Computer Player), but the experience in such modes can lack satisfaction on the player's side because of reduced engagement, challenge, and involvement. This paper proposes a human-agent interaction system in the context of video games: an on-screen virtual agent acts as if it were the CP, creating stronger ties and possible companionship. We design and evaluate an auxiliary system that enhances the experience of playing against a CP using this agent. Emotion is synthesized by an emotion engine that takes the game state as input, and the agent displays facial expressions and appropriate utterances. The evaluation covers both the user's and the developer's perspective, to grasp the whole model of our system, AfRAS, and its possible implications for the gaming ecosystem.
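The emotion engine's mapping from game state to displayed emotion could look like the following sketch. The state fields, thresholds, and emotion labels are all assumptions for illustration; AfRAS's actual engine is not specified here.

```python
def synthesize_emotion(state):
    """Map a game-state snapshot to an emotion label for the CP agent.
    `state` carries `hp_ratio` (the agent's remaining health, 0-1) and
    `advantage` (positive when the agent is winning). The rules and
    labels are illustrative assumptions."""
    if state["hp_ratio"] < 0.2:
        return "fear"            # near defeat dominates everything else
    if state["advantage"] > 0.5:
        return "joy"             # clearly winning
    if state["advantage"] < -0.5:
        return "frustration"     # clearly losing
    return "neutral"
```

The agent would then select a facial expression and an utterance template keyed on the returned label.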
The miniaturization and decreasing cost of networked computers, along with the advancement of cloud infrastructure, have eased their adoption in manufacturers' products. The increase in such products has led to a rise in IoT devices with information-provision functions. However, it is difficult to give such functions to products that we engage with only one way in daily life, such as daily commodities, because doing so requires the device to have substantial computational resources. In this research, we propose the v-IoT system, an approach that adds information-giving functions to such non-computing devices using mixed-reality technology. The system enables appropriate things to give adequate information at the right time. In this paper, we explain the details and design of the v-IoT system. In addition, we conduct an experiment evaluating the impression and usability of v-IoT to disclose its descriptive power.
Service robots working in cooperation with humans are expected to become widespread. In recent years, the number of service robots developed in Japan has been increasing; classifying the service robots developed in 2017, genres such as "hobby" and "watching and communication" rank at the top. Along with the growing expectations for such robots, the occasions for creating robot motion are expected to increase. Current motion creation methods include placing keyframes on a timeline and motor control via ROS, but the motions produced by skilled and novice creators differ in naturalness and dynamics, and a gap tends to open between them.
I gathered motion data for four motions ("shake hands", "give things", "greet", and "wave"), each performed in three patterns. We then analyzed the motion data using the motion-analysis software "Kinovea" and examined tendencies in the motions from the analysis results. For the hand-waving motion, the preliminary action of raising the hand was not emphasized in any case; the emphasis was placed on the motion after the swinging began.
This research aims to test a new process for enhancing the quality of telecommunication. Focusing on video-call presentation from laptops and from a telepresence robot, we test whether Manga-Effects, emotion-emphasizing manga-like video filters, are effective in bringing out one's social presence. While much related research approaches this by processing the user's background, we approach it through the user's facial expressions.
Existing voice changers alter the character of a voice by adjusting pitch and formant. The result therefore depends on the user's original voice, making it difficult to produce an ideal voice. I studied speech-synthesis technology with the goal of letting users broadcast with the voice they want to have.
This research aims at developing a system that emulates smooth and gradual personality development in communication robots through robot-user interaction. The system developed in this paper is a successor to a previously created model based on a human child's personality-development model, called C2AT2HUB. The successor model emulates smoother personality development while keeping it transparent to the user, and will be tested and evaluated over a longer period of time using the Vector Resolution Method.
In robot-human conversation, the robot's flat voice is one of the factors that makes the conversation boring. The system automatically adds emotional expression to the robot's speech based on the robot's virtual emotion. In this study, I verify that communication becomes smoother when the flow of emotional change is taken into account from the conversation history.
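One simple way to respect the flow of emotional change, rather than jumping turn by turn, is to smooth the emotion value over the conversation history before mapping it to speech parameters. This sketch, with its smoothing factor and pitch/rate constants, is an illustrative assumption, not the study's actual method.

```python
def smooth_emotion(history, alpha=0.5):
    """Exponentially smooth a history of valence values in [-1, 1]
    so the voice tracks the flow of emotion across turns."""
    value = 0.0
    for v in history:                     # older values first
        value = alpha * v + (1 - alpha) * value
    return value

def prosody_params(valence):
    """Map smoothed valence to pitch/rate multipliers for TTS.
    The constants are assumed, not measured values."""
    return {"pitch": 1.0 + 0.2 * valence, "rate": 1.0 + 0.1 * valence}
```

With `alpha = 0.5`, a single strongly positive turn only moves the voice halfway, so sudden emotional spikes are softened.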
As remote-communication technology improves, humans will be freed from travel. This term, I mainly surveyed and drew conclusions about the problems of current telepresence systems. Based on that, I found that the background shown on the remote side degrades the video-chat experience. Therefore, I will continue researching the extraction and replacement of backgrounds in video chats using a tablet's camera.
We collect robot-like "shapes" and "movements" from daily life with photos, videos, and sometimes sketches. We then categorize and analyze the data from many viewpoints and consider what a "robot-like object" is. Conversely, doing so also helps us understand what a human-like or living-thing-like object is.
This research proposes a solution to the declining labor force of the Japanese fishing industry through aquatic swarm robots, or swarm Unmanned Surface Vehicles (USVs). We primarily target makiami (purse seine) fishing, where this system could significantly reduce the number of workers required. As a preliminary step, we will build a prototype swarm USV that can adjust its position as part of a swarm on the surface of the water.
In robot-human conversation, the robot's voice is so flat that people cannot relax. Some robots can adjust their voices, but adjusting them manually is hard and endless work. This research verifies the influence of automatically changing the robot's voice tone by having the robot read its own mind.
I designed and implemented a robot that provides natural nodding that does not obstruct the conversation.
Product development to reduce the time for an automated external defibrillator to reach a patient: easy
At the site of a cardiac arrest, the patient's survival rate improves when CPR is performed or a shock is delivered to the heart with an AED as quickly as possible. To deliver the AED to the site faster, I considered that it is more efficient for people near the AED to carry it to the site than for someone at the site to go and fetch it. In this research, we propose a method of getting passersby to pick up the AED and guiding them accurately to the site of the cardiac arrest.
IoT devices that display life-supporting information to users are increasing. However, it is difficult to add an information-giving function to products close to users' lives, i.e., commodities; in practice, such information comes from devices with substantial computational resources. In this research, we propose an approach that adds an information-giving function to things for which IoT adoption is difficult, using augmented-reality technology. The system enables the right things to give the right information at the right time.
This research proposes the use of a small animal robot that encourages users to improve the room environment. The robot measures room temperature using a built-in temperature sensor (DHT11) and evaluates the risk of heat stroke. When the risk is high, the robot urges the user to lower the room temperature by imitating motions that suggest suffering. By observing users' behavioral changes, this work assesses the effectiveness of setting "helping behavior towards robots" as a motive.
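The sensing-to-behavior loop above can be sketched as a simple classifier over the DHT11 reading. The thresholds below are illustrative assumptions for the sketch, not the study's calibrated values, and certainly not medical guidance.

```python
def heat_stroke_risk(temp_c, humidity_pct):
    """Classify heat-stroke risk from a DHT11 temperature/humidity reading.
    Thresholds are illustrative assumptions only."""
    if temp_c >= 31 or (temp_c >= 28 and humidity_pct >= 70):
        return "high"
    if temp_c >= 28:
        return "moderate"
    return "low"

def robot_reaction(risk):
    """The robot imitates suffering only when the risk is high,
    which is the behavior change the study aims to trigger."""
    return "suffering_motion" if risk == "high" else "idle"
```

On the actual robot, the DHT11 would be polled periodically and `robot_reaction` would drive the motion playback.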
Today, telepresence robots are regularly used in the environment around us. Telepresence has become a convenient tool for businesspeople who often need to attend meetings overseas. Although these robots are generally known for business use, people in other fields are now extending their use. Yet telepresence robots still have narrow limits on what they can do. This paper discusses how animation-style video effects can resolve the drawbacks of today's telepresence robots.
We propose the E-Behavior Engine, which reflects the emotions of the user and the robot in the robot's motion and utterances. By generating a communication robot's motion and utterances based on emotion with this system, we aim to expand the range of its further utilization.
We have studied a decision model for scheduling robots' conversation tasks. Such a decision model is essential for communication robots in public spaces to handle interruptions from outside the conversation.
C2AT2HUB: Long-term Characterization of Robots based on a Human Child's Personality Development: nago
Since a lack of variety in personality may cause unnatural communication with robots, we propose C2AT2HUB, in which robots are characterized through long-term interaction. The method defines robot affect as "emotion" and "interpersonal affect" and characterizes robots gradually by adjusting their affect tendencies based on the history of users' actions toward them.
In this research, we propose a human-robot communication method in which the robot judges the other person's reaction in stages, decides the appropriate interpersonal distance on the spot, and shortens or widens the distance by itself. The robot estimates the impression from the other person's expression and behavior; we designed and evaluated a service that lets users keep a distance at which they can feel confident and comfortable in communication.
As a study in robot making, we implemented on LEGO Mindstorms EV3 a mirror-recognition robot that understands that the image in a mirror is the robot itself. We also prototyped a program that learns imitation with a genetic algorithm.
Utterance generation is still a problematic issue. The dialogue system we are implementing considers the conversation partner's tweets to lead the conversation toward topics that are easy for a robot to respond to.
In recent years, accidents have occurred frequently at stations, many of them caused by drunken people. In this research, we aim to create a system that prevents drunken people from falling off the station platform. As a first step, we try to detect pedestrians using floor devices.
Currently, a variety of robots serve to aid and enhance human daily life. In this research, I focused on creating a robot system that functions as an intermediary in human-to-human communication. The personal robot system detects the user's facial expression through an external camera, then performs gestures according to the detected emotion. Using this personal robot, a person who cannot freely move their own body can add a component to their method of communication, as the robot acts as the user's body. I conducted experiments to validate that an Expression Amplifying Robot indeed helps enhance communication between a user who is unable to move below the neck and another person.
Presented in ORF (Open Research Forum) 2018. Current telecommunication systems have improved greatly over the years. Telepresence robots are well known for giving mobility, through motion control of the robot, to users who cannot physically be somewhere; they mainly function as avatar-like robots for remote users. While these improvements enhance the user experience in telecommunication, telepresence robots must be aware of the conversation context to compensate for the many difficulties of remote communication. We propose an emotion-emphasizing telecommunication system utilizing Manga-Effects, aimed at enhancing users' communication quality.
Presented in ORF (Open Research Forum) 2018. The miniaturization and decreasing cost of networked computers, along with the advancement of cloud infrastructure, have eased their adoption in manufacturers' products. The increase in such products has led to a rise in IoT devices with information-provision functions. However, it is difficult to give such functions to products such as daily commodities, which lack the technical devices that would give them substantial computational resources. We propose the v-IoT system, which adds information-giving functions to those non-computing devices using mixed-reality technology. The system enables appropriate things to give adequate information at the right time.
Presented in ORF (Open Research Forum) 2018. As robots spread through society, the day will soon come when several communication robots reside in each household. To differentiate each communication robot, and to make the robots likable, we develop a system that emulates smooth and gradual personality development in robots. Using a system modeled on a human child's personality development, C2AT2HUB version 2, we demonstrate long-term personality development.
Presented at RO-MAN 2018. Presented at ORF (Open Research Forum) 2017. An inadequate variety of personalities in communication robots may cause unnatural interaction with them and reduce the sense of attachment. We propose the C2AT2HUB engine, in which communication robots are characterized through long-term interaction with users. In C2AT2HUB, a robot's characterization is governed by two factors, "interpersonal affect" and "emotions"; the transition of these factors is adjusted by the history of users' actions toward the robot, so that the robot is characterized gradually. Through experimentation, our system showed that the characterization of robots is natural and that it improves users' impressions of them.
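The gradual adjustment of an affect factor from the history of user actions could be sketched as a small-step update toward each action's valence. The update rule, the learning rate, and the [-1, 1] scale are assumptions for illustration, not the published C2AT2HUB model.

```python
def update_affect_tendency(tendency, user_action_valence, rate=0.05):
    """Nudge the robot's interpersonal-affect tendency (in [-1, 1]) toward
    the valence of a single user action. A small rate keeps the
    characterization gradual; the exact rule is an assumed sketch."""
    tendency += rate * (user_action_valence - tendency)
    # Clamp to the assumed affect scale.
    return max(-1.0, min(1.0, tendency))
```

Replaying a long history of mostly kind (positive-valence) actions through this update slowly pushes the tendency positive, which is the long-term characterization effect described above.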
Presented at ORF (Open Research Forum) 2017. In coming generations, where robots are constantly connected to a network, various functions can be applied to them: machine-to-machine communication (M2M), communication for services (M2S), and ubiquitous service devices. We call these robots Sociable Robots. We create a system in which the robot is aware of the surrounding context when interacting with humans, focusing on awareness of the user's state through facial expressions.
"cookpepper" is an assistant application for cooking. By showing Pepper the available ingredients, you can talk about today's menu. Besides displaying the recipe on its tablet, Pepper tells you how to cook using gestures, making cooking more fun! At the end, Pepper gives you a special present.
"Making Robots more Sociable", with Professor Gordon Cheng of the Technical University of Munich. What is the "sociability" required of robots in human-robot and robot-robot interaction? In this project, we designed how a robot senses the surrounding situation and the emotions of others, and what kind of expression the robot should show. In this demonstration, robots that empathize with people and with other robots appear. They listen to the conversation; if people are depressed, they feel depressed together, or the two units praise and encourage them. Please see the ubiquitous information space, the form of a new person (body), the social robots, and the interaction that unites them as a trinity.
Exhibited at ORF 2016. The keyboard encourages people to change their behavior through multistage interaction using emotion-sensing technology.