Softness for Robots

Outline

Most robots today are used in factories. Robots built with conventional approaches have a hard, sturdy exterior and skeleton and are equipped with powerful actuators. At the same time, there is a growing need for robots that fit into our daily lives, and robots built with conventional approaches can cause fear and sometimes even become deadly weapons. To address this issue, soft robots have been proposed to give robots the “softness” of living things. In this project, we propose a prototyping tool for soft robots and an approach for adding softness and tactile organs to existing robots.

Vision

In a society where humans and robots coexist in harmony, it is desirable that robots are designed and implemented in such a way that people can easily approach them. Softness is known to give people a sense of security and closeness, but its practical application has only just begun in the fields of therapy and care. In this project, we aim to realize systems that give softness to robots across a wide range of fields involving human-robot interaction (HRI).

Project Research
ModuRo:
Most soft robots proposed to date require molding and external devices such as pumps and compressors, and are typically controlled through code-based programs, which makes prototyping difficult. We therefore propose “ModuRo,” a prototyping tool based on modules equipped with soft actuators. By combining ModuRo modules, we create an environment in which anyone can easily prototype soft robots from objects around them, such as stuffed toys.

STI:
In order to provide a robot with skin sensation, a number of sensor units are often used to wrap the robot’s exterior. However, the flexibility, shape, and material of these units pose a challenge for soft robot implementation. In this project, we develop and validate a system called “STI” that detects changes in the shape of the material while also adding softness to the robot’s exterior.
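
One common way to detect such shape changes is a stretchable resistive sensor whose resistance grows as it elongates. As a minimal sketch of that idea (the resistive principle and gauge factor below are illustrative assumptions, not necessarily STI’s actual sensing method):

```python
def estimate_strain(resistance, rest_resistance, gauge_factor=2.0):
    """Estimate the strain of a stretchable resistive sensor from its resistance.

    Uses the standard strain-gauge relation delta_R / R0 = GF * strain.
    The gauge factor of 2.0 is a typical textbook value, assumed here for
    illustration only.
    """
    delta = resistance - rest_resistance
    return delta / (rest_resistance * gauge_factor)

# Example: a sensor reads 10.0 kOhm at rest and 10.4 kOhm when the exterior bends,
# which corresponds to roughly 2 % elongation.
print(estimate_strain(10.4e3, 10.0e3))  # ~0.02
```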

Publication

none

Sociable Things × Augmented Reality

Outline

Sociable Things refers to a next-generation IoT environment realized by applying technologies from the field of personal robots to fine-grained objects, including daily necessities. A community of things that grasps various contexts is a stepping stone toward encouraging people to change their behavior and improving the quality of their living environment. We developed a “work-life balance keyboard” that detects fatigue and physically interferes with the user. Furthermore, in our most advanced project, we are exploring next-generation IoT spaces covering a wide range of things by using virtual objects in Augmented Reality (AR).

Vision

Current smart systems have greatly improved the convenience of our lives. Various registered devices can be operated from a single mobile device, and multiple devices cooperate to form one system. In recent years, the number of devices that focus on conversation with users, such as smart speakers, has increased. The purpose of this project is to implement a system that emphasizes not only such convenience but also the interaction between people and things. We aim to realize a society in which various things cooperate to support human behavior psychologically on a daily basis.

Project Research

Interactive IoT Spaces with Augmented Existence
The latest IoT environments focus on sensing, while the information-providing function presupposes a device with abundant computational resources, such as a smartphone. The configuration of notifications is often static because it is tightly coupled to the information-providing service. In this research, we define and propose the new term “augmented existence” as a concept that combines AR and IoT. By using both technologies to increase the presence and value of things, we will realize an interactive IoT space for everything the user can see, including existing conversational IoT devices.

Publication
Kentaro Taninaka, Kazunori Takashio, “Virtual IoT: An IoT Platform with MR Technologies Realizing Low-cost and Flexible Notification of Life-support Information,” The 2019 IEEE International Conference on Internet of Things and Intelligence Systems (IoTaIS 2019), Bali, Indonesia, Nov. 3–5, 2019.
https://ieeexplore.ieee.org/document/8980382

Kentaro Taninaka, Kazunori Takashio, “v-IoT: Construction of a Virtual IoT Environment with AR and a Method for Selecting Appropriate Information-Presentation Objects Based on Associative Concepts,” IEICE Transactions on Information and Systems (Japanese Edition), vol. J104-D, no. 1, pp. 21–29, 2021. (in Japanese)
https://search.ieice.org/bin/summary.php?id=j104-d_1_21

Telepresence Robot

Outline

Face-to-face communication remains important despite the proliferation of telecommunication tools such as the telephone and e-mail, and in 2020 the impact of the novel coronavirus (COVID-19) led to a rapid increase in the use of video calling applications around the world. However, there are many situations in which we feel the limitations of such applications. Telepresence, a technology that reproduces the presence of a person in a remote location, is therefore attracting attention. Telepresence was originally a technology for remote operation by an operator working in a dangerous place. Since it has been extended to the context of communication, however, both the sense of presence felt by remote users and the sense of presence that local users feel toward remote users have become important. In this project, we study approaches to improving the sense of presence of telepresence robots.

Vision

As telecommunication systems become more widespread, we will be able to live free from physical constraints, without spending time moving from place to place. In fact, during the COVID-19 pandemic, the “workation” style, in which people work while traveling, was actively adopted, and some companies decided to downsize their headquarters in the capital. In Japan, companies and other social activities are concentrated in urban areas; as a result, people living in the three major metropolitan areas account for 51.8% of Japan’s population, and over-concentration in urban areas remains a challenge. As online telecommunication advances, however, the difference between rural and urban areas becomes less of an issue. It may even help solve problems such as the disparity between urban and rural areas and the concentration of medical services in cities.

Project Research

Nonverbal behavior and physical expression are also essential for communication. However, most existing telepresence robots show only the remote user’s face on the display, so extending the displayed region of the remote user’s body can extend the available modalities of communication. In this project, we implement a prototype and investigate the relationship between the displayed region of the remote user and the sense of presence.
If a telepresence system with enhanced physicality is implemented with a two-dimensional display, the remote user’s background may interfere with the sense of presence. We therefore also investigate the relationship between the remote background and the sense of presence, and implement an approach to address this problem.
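
The background-replacement idea can be illustrated with a simple compositing step: keep the remote user’s pixels and fill everything else with a live view captured behind the robot. The sketch below assumes a person-segmentation mask is already available from some external model; it is an illustration of the approach, not the project’s actual implementation.

```python
import numpy as np

def composite_remote_user(remote_frame, person_mask, robot_rear_view):
    """Blend the remote user onto the live view captured behind the robot.

    remote_frame    : HxWx3 uint8 frame from the remote user's camera
    person_mask     : HxW float mask in [0, 1], 1 where the remote user is
                      (assumed to come from any person-segmentation model)
    robot_rear_view : HxWx3 uint8 frame from a camera behind the robot's display
    """
    mask = person_mask[..., np.newaxis]                  # broadcast over color channels
    blended = mask * remote_frame + (1.0 - mask) * robot_rear_view
    return blended.astype(np.uint8)
```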

Publication
[1] Y. Furuya and K. Takashio, “Telepresence Robot Blended With a Real Landscape and Its Impact on User Experiences,” in 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), 2020, pp. 406–411, doi: 10.1109/ro-man47096.2020.9223346.
[2] Y. Furuya and K. Takashio, “A Telepresence Robot that Enhances the Physical Presence of Remote Users,” IEICE Technical Report, vol. 119, no. 446, CNR2019-54, pp. 53–57, 2020. (in Japanese)
[3] Y. Furuya and K. Takashio, “A Study of Display Methods Considering the Embodiment of Remote Users in Telepresence Robots,” IEICE Technical Report, vol. 121, no. 93, CNR2021-3, pp. 8–13, 2021. (in Japanese)

Telepresence Robot that Enhances the Physical Presence of Remote Users:furuyan

Telepresence robots based on conventional video chat do not yet fully convey the remote user’s presence. To enhance that presence, we propose a novel approach for telepresence robots that replaces the remote user’s background with a real-time image captured behind the robot. We assumed that the remote user’s background shown on the robot’s display is one of the factors reducing presence, since it clearly reminds the people talking with the robot that the remote user is in another place. We developed a system that replaces the remote user’s background with the real-time background of the place where the robot is located, and evaluated subjective impressions of communication through the system. We found that the approach is effective in enhancing the presence of remote users and in reducing the negative impression caused by delay. In this paper, we explain the approach, the implementation, and the evaluation of the system.

Proposal of a Communication Channel Establishment Method with an In-Screen Agent in Multi-Person Dialogue:fly

In recent years, opportunities to talk with an agent or with a person shown on a monitor during video calls have increased, and with them the number of in-screen guidance agents that interact with people. In a one-to-one interaction with an in-screen agent, the listener and the speaker are fixed, so there is no need to take anyone else into account. In a conversation among two or more people, however, such as with a telepresence robot, it becomes a problem that the agent appears to be speaking to everyone in the vicinity: because of the Mona Lisa effect, it is difficult for an agent in the screen or a person on a monitor to appear to look at a specific point outside the monitor. Much research has therefore been done on making people on a screen appear to direct their gaze outward. In this research, we focus on the establishment of communication channels and propose an in-screen agent that addresses a specific person while engaging in a multi-person conversation. We proposed and evaluated a method that makes the background of the agent’s monitor follow the gaze target and a method that gives inertia to the agent’s body posture while following, as well as a method of expressing the gaze target by directly reflecting it in the agent’s pupils. Previous studies of in-screen agents that can direct their gaze evaluate the gaze target itself, but do not verify how conversation participants other than the gaze target perceive the behavior. In this study, we develop the method of projecting the gaze target onto the pupils and the methods of previous research, and perform a comparative experiment.
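
The “inertia” applied while following the gaze target can be pictured as a damped spring that turns the agent’s body toward the target instead of snapping instantly. The sketch below is only an illustration of that idea; the gains are arbitrary and not values from this research.

```python
def smooth_follow(body_yaw, target_yaw, velocity, dt, stiffness=4.0, damping=4.0):
    """Advance the agent's body yaw one time step toward the gaze target.

    body_yaw, target_yaw : angles in radians
    velocity             : current angular velocity in rad/s
    dt                   : time step in seconds
    stiffness, damping   : illustrative gains controlling how much inertia is felt
    """
    accel = stiffness * (target_yaw - body_yaw) - damping * velocity
    velocity += accel * dt
    body_yaw += velocity * dt
    return body_yaw, velocity

# Example: step the posture at 30 fps toward a target 1 rad away.
yaw, vel = 0.0, 0.0
for _ in range(30):
    yaw, vel = smooth_follow(yaw, 1.0, vel, 1 / 30)
```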

Implementation and Evaluation of a Multi-Conversation Scheduler Considering Participants’ Social Behavior:theramin

When a robot works in a public space, multiple users behave according to their own purposes, so the robot can receive multiple tasks at the same time. For example, one conversation can interrupt another: the robot should suspend the current conversation when a third person calls to it while it is already talking with another user, or should greet a friend who happens to pass by. In this study, I designed robot behavior for situations in which an interrupter expects to start a new conversation with the robot. Traditional studies of conversational interruption have treated interruptions as cooperative interactions within the current conversation, but few have addressed interruptions from outside the conversation. For a robot to handle such interruptions, it must detect the interruption during a conversation, prioritize the conversations, and build consensus with the users involved. CACTS-C not only schedules conversations based on four factors, namely conversation length, the relation between the interrupter and the interrupted person, the tasks of the stakeholders, and the emotion at the time of interruption, but also holds a consensus-building conversation with the person who will wait until the prioritized conversation ends. I implemented a robot conversation system using CACTS-C. The conversation scenarios were written in AIML-ap, our own extension of AIML (Artificial Intelligence Markup Language), a markup language for chatbots, which describes conversation scenarios in adjacency-pair units. I evaluated the behavior of the implemented robot and discussed the effectiveness of the CACTS-C scheduler model. The experimental results revealed that conversation prioritization is effective for conversation scheduling. In addition, the persuasion behavior of CACTS-C gave a better impression of the robot’s fairness than simply stating the reason for the interruption.
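
One way to picture the scheduling step is a weighted score over the four factors named above, with the higher-scoring conversation continued and the other party asked to wait. The normalization and the equal weights below are illustrative assumptions, not the actual CACTS-C model.

```python
def conversation_priority(conv, weights=(0.25, 0.25, 0.25, 0.25)):
    """conv: dict with the four factors, each normalized to [0, 1]:
    'length_remaining', 'relation', 'task_urgency', 'emotion'."""
    w_len, w_rel, w_task, w_emo = weights
    return (w_len * (1.0 - conv["length_remaining"])   # a nearly finished talk ranks higher
            + w_rel * conv["relation"]
            + w_task * conv["task_urgency"]
            + w_emo * conv["emotion"])

# The higher-priority conversation continues; the other person is asked to wait
# after a consensus-building exchange (not modeled here).
current   = {"length_remaining": 0.2, "relation": 0.5, "task_urgency": 0.9, "emotion": 0.3}
interrupt = {"length_remaining": 0.8, "relation": 0.7, "task_urgency": 0.4, "emotion": 0.6}
keep = max((current, interrupt), key=conversation_priority)
```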

v-IoT -Life Support Information Provision System Using Mixed Reality Technologies-:shandy

The miniaturization and decreasing cost of networked computers, along with the advancement of cloud infrastructure, have made it easier to build them into commercial products. In addition, the rising demand for edge computing has driven the evolution of the IoT field. IoT devices today are not limited to collecting data; they can actively send life-supporting information to users. However, it is difficult to give such a function to simple products such as daily necessities, so devices such as smart speakers end up providing a wide variety of information on their behalf.
In this research, we propose the v-IoT system, an approach that adds information-giving functions to consumables such as plastic bottles using mixed reality technology. The system determines the information to present and the action to suggest to the user based on the affordance of the object. The most closely associated pairs of suggested action and object are determined with reference to the Associative Concept Dictionary, enabling v-IoT to offer a notification system that is apparent and intelligible.
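
The object-selection step can be sketched as picking, for a given suggested action, the visible object with the strongest association score. The scores below are invented placeholders, not entries from the actual Associative Concept Dictionary.

```python
# Hypothetical association scores between a suggested action and candidate objects.
ASSOCIATION = {
    ("drink water", "plastic bottle"): 0.9,
    ("drink water", "mug"): 0.7,
    ("take a break", "chair"): 0.8,
    ("take a break", "plastic bottle"): 0.2,
}

def select_notification_object(action, visible_objects):
    """Return the visible object most strongly associated with the action, if any."""
    scored = [(ASSOCIATION.get((action, obj), 0.0), obj) for obj in visible_objects]
    best_score, best_obj = max(scored)
    return best_obj if best_score > 0.0 else None

print(select_notification_object("drink water", ["plastic bottle", "chair"]))  # plastic bottle
```
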
A preliminary experiment was conducted to evaluate users’ impressions and the usability of being informed by the v-IoT system, together with a performance-based evaluation focused on the expressiveness and workload required from the designer’s perspective. The results showed that the information was transparent to users; however, when the object the user had in mind did not coincide with the object suggested by the system, user evaluations were poor. The system was therefore improved by recognizing and taking into account the user’s activity alongside the Associative Concept Dictionary. This thesis describes the specifics of the v-IoT system, the improved system, and a follow-up evaluation from the user’s perspective.

ModuRo: A Prototyping Environment for Soft Robots:ak1ra

Robots, which were traditionally seen mostly in factories, are now becoming commonplace in daily life. However, conventional hard and powerful robots can both become weapons and incite fear when cooperating with humans. Soft robotics is the attempt to add softness to a robot’s structure, exterior, and actuators, and soft robots have begun to be applied not only to service robots but also to a wide range of fields such as biomechanics, industry, and medical care. Until now, however, prototyping soft robots has been difficult. In this study, we developed “ModuRo,” a modular robot equipped with shape-memory-alloy soft actuators, and proposed a smooth prototyping environment for soft robots. We implemented two ModuRo prototypes, evaluated their performance as actuators, and conducted a usability test on prototyping a soft robot from a stuffed toy. The performance evaluation found that the soft actuator exhibited a stable output. In the usability test, there was no significant difference in expressive power between the ModuRo system and existing code-based programming, but the results suggested that work efficiency was improved by using ModuRo.
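
A shape-memory-alloy actuator of this kind is typically driven by switching a heating current on and off so that the wire contracts when hot and relaxes when cool. The sketch below illustrates that duty-cycle idea only; the linear mapping from desired contraction to duty and the hardware callback are assumptions, not ModuRo’s actual control scheme.

```python
import time

def actuate_sma(set_heating, contraction, period=0.02, duration=1.0):
    """Drive a shape-memory-alloy wire by pulse-width modulating its heating current.

    set_heating : callback that turns the heating current on/off on the real board
                  (hypothetical; depends on the hardware used)
    contraction : desired contraction in [0, 1], mapped linearly to the duty cycle
                  here as a simplification (real SMA response is nonlinear)
    """
    duty = max(0.0, min(1.0, contraction))
    end = time.time() + duration
    while time.time() < end:
        set_heating(True)
        time.sleep(period * duty)          # heating phase: the wire contracts
        set_heating(False)
        time.sleep(period * (1.0 - duty))  # cooling phase: the wire relaxes
```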

v-IoT -Life Support Information Provision System Using Mixed Reality Technologies-:shandy

The miniaturization and decreasing cost of networked computers, along with the advancement of cloud infrastructure, have made it easier for manufacturers to incorporate them into products. The increase in such products has led to a rise in IoT devices with an information-provision function. However, it is difficult to give such a function to products that we engage with only one-way in daily life, such as daily commodities, since it requires the device to have abundant computational resources.
In this research, we propose the v-IoT system, an approach that adds information-giving functions to such non-computing objects using Mixed Reality technology. The system enables appropriate things to give adequate information at the right timing. In this paper, we explain the details and design of the v-IoT system.
In addition, we conduct an experiment to evaluate the impression and usability of v-IoT and to examine its descriptive power.

AfRAS: Video Gaming with Emotion Expressive Virtual Rival Player:kiyomo

With the expansion of the video gaming industry into new territories, video games are no longer just an entertainment tool. They can be a career, a competition, healthcare, or social interaction. Among the values that video games bring, the one closest to casual players is social interaction in the physical world or in cyberspace. However, interaction with an entity that lacks embodiment can reduce a player’s engagement or satisfaction, and for some of the more vulnerable people who rely on video games as a source of social interaction, the lack of embodiment can greatly reduce the enjoyment of playing.
This research discusses a possible solution to the lack of embodiment. Many player-versus-player games have a mode against a CP (Computer Player), but the experience in such modes can lack satisfaction on the player’s side due to the lack of engagement, challenge, and involvement. This paper proposes a human-agent interaction system in the context of video games: the virtual agent acts as if it were the CP, creating stronger ties and possible companionship. We design and evaluate an auxiliary system, AfRAS, that enhances the experience of playing against a CP by using an on-screen virtual agent. Emotion is synthesized by an emotion engine that takes the game state as input, and the agent displays facial expressions and appropriate utterances. The evaluation is conducted from both the user’s and the developer’s perspectives to grasp the whole model of AfRAS and its possible implications for the gaming ecosystem.
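
The “game state to emotion” step can be pictured as a small rule-based mapping from a few game variables to an emotion label that drives the agent’s face and utterances. The state fields, thresholds, and labels below are illustrative assumptions, not the actual AfRAS emotion engine.

```python
def synthesize_emotion(state):
    """Map a simplified game state to an emotion label for the on-screen agent.

    state: dict with 'score_diff' (CP score minus player score) and
           'cp_health' in [0, 1]; both fields are hypothetical examples.
    """
    if state["cp_health"] < 0.2:
        return "worried"
    if state["score_diff"] > 10:
        return "confident"
    if state["score_diff"] < -10:
        return "frustrated"
    return "focused"

print(synthesize_emotion({"score_diff": -15, "cp_health": 0.8}))  # -> "frustrated"
```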