Attentive In-screen Agent — An In-screen Agent That Looks in a Specific Direction —, Takuto Watanabe, IEICE Tech. Rep., vol. 119, no. 189, CNR2019-15, pp. 19-23, Aug. 2019.

In recent years, 3DCG characters have become widespread and are being used as in-screen avatars for interactive agents. In this study, we propose methods for giving a sense of coexistence and spatial unity: having the agent's gaze follow the user and the background, and projecting the gaze target onto its pupils. For the experiment, an agent whose gaze target can be selected manually was created and evaluated.

Voice Expression with Pseudo-emotion in Human-Robot Communication, Tae Kuwahara, Takumi Horie, Kazunori Takashio, IEICE Tech. Rep., vol. 119, no. 81, CNR2019-2, pp. 7-11, June 2019.

In conversation between a human and a robot, the robot's flat voice is one of the factors that makes the conversation boring. We automatically add emotional expression to the robot's speech based on its pseudo-emotion. In this study, we verify that communication becomes smoother when the flow of emotional change is taken into account from the conversation history.

Open Research Forum 2019

ModuRo

Person in Charge of the Project : Taiki Majima

To realize a harmonious human-robot society in the Beyond SDGs era, robots are expected to be used in a wide range of situations, but many existing robots are difficult to use alongside humans because of their structure. We focus on soft robots as a hardware-architecture approach to this problem. In our demo, you can experience prototyping applications such as medical robots and service robots using modular robots with soft exteriors and actuators.


Video Gaming with a Virtual Rival Player

Person in Charge of the Project : Shinsuke Kiyomoto

While the evolution of computer gaming hardware keeps changing entertainment, on-screen PvP video games remain the most popular option. Most of these games offer a mode played against a CP (Computer Player). However, the experience in such modes can be unsatisfying for the player because of a lack of engagement, challenge, and involvement. This research designs and evaluates an auxiliary system that enhances the experience of playing against a CP by using an on-screen virtual agent. Emotion is synthesized by an emotion engine that takes the game state as input, and the agent displays facial expressions and makes appropriate utterances.
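As a rough, hypothetical illustration of the pipeline described above (the game-state fields, thresholds, expression names, and utterances below are invented for the example and are not the project's actual emotion engine), a rule-based mapping from game state to the agent's reaction could look like this:

```python
# Hypothetical rule-based emotion engine for a virtual rival player.
# The game-state fields, thresholds, expressions, and utterances are
# illustrative assumptions, not the system built in this project.


def synthesize_emotion(game_state):
    """Map a game-state snapshot to an (expression, utterance) pair."""
    score_diff = game_state["agent_score"] - game_state["player_score"]
    if score_diff > 10:
        return "smug", "Is that all you've got?"
    if score_diff < -10:
        return "frustrated", "Nice move... I won't let that happen again."
    if game_state["time_left"] < 30:
        return "tense", "It all comes down to this!"
    return "neutral", "Let's keep going."


if __name__ == "__main__":
    state = {"agent_score": 12, "player_score": 25, "time_left": 120}
    print(synthesize_emotion(state))  # -> ('frustrated', "Nice move... ...")
```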


University n.g.

Person in Charge of the Project : Yuki Furuya

As the SDGs are gradually being achieved throughout the world, we envision a society where all people have equal access to quality education. Improved technology used in services such as MOOCs makes equality in education possible. Looking beyond the fulfillment of the SDGs, however, simple equality of access does not provide the interactive discussion necessary in high-level academia. At the ORF, we will present a prototype telepresence robot combining telecommunication and VR technology. It represents one component of a future university free from environmental restrictions.

Matching Things and Information Using Associative Concept Dictionary in v-IoT, Kentaro Taninaka, Kazunori Takashio, IEICE Tech. Rep., vol. 119, no. 81, CNR2019-8, pp. 37-42, June 2019.

The miniaturization and decreasing cost of networked computers, along with the advancement of cloud infrastructure, have made it easier for manufacturers to build networking into their products. The increase in such products has led to a rise in IoT devices with an information-provision function. However, it is difficult to give such a function to products that we interact with in only a one-way manner in daily life, such as daily commodities, since it would require each device to have substantial computational resources. In this paper, we propose the Virtual IoT system, an approach that adds information-giving functions to non-computing objects using Mixed Reality technology. The system enables appropriate things to give adequate information at the right time. We explain the design of the Virtual IoT system in detail and conduct an experiment evaluating the impression and usability of Virtual IoT to demonstrate its descriptive power.
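As a rough sketch of the matching idea in the title (the dictionary entries, association strengths, and information sources below are invented for illustration; the paper's system relies on an actual associative concept dictionary), matching a recognized thing to information might proceed as follows:

```python
# Illustrative sketch of matching a recognized thing to information via an
# associative dictionary. The entries, strengths, and information sources are
# made-up examples, not the associative concept dictionary used in the paper.

# Hypothetical associations: object -> {related concept: association strength}
ASSOCIATIONS = {
    "umbrella": {"rain": 0.9, "weather": 0.7, "outing": 0.4},
    "toothbrush": {"morning": 0.6, "health": 0.5},
}

# Hypothetical information sources keyed by concept.
INFORMATION = {
    "rain": "Rain is forecast this afternoon.",
    "weather": "Today: cloudy, 40% chance of rain.",
    "morning": "You have a 9:00 meeting.",
}


def match_information(thing):
    """Return information for the concept most strongly associated with the thing."""
    concepts = ASSOCIATIONS.get(thing, {})
    # Consider only concepts we actually have information for.
    candidates = [(strength, c) for c, strength in concepts.items() if c in INFORMATION]
    if not candidates:
        return None
    _, best_concept = max(candidates)
    return INFORMATION[best_concept]


if __name__ == "__main__":
    print(match_information("umbrella"))  # -> the rain forecast
```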

Sociable Robots Lab Orientation

Starting this year, we are holding a Sociable Robots Lab orientation!
You can discuss your research theme with Prof. Takashio at this orientation.
If you are considering taking our classes from this autumn semester, please join us!

  • DATE : 2019/7/18(THU) 18:10 – 19:40
  • VENUE : Keio Univ. Shonan Fujisawa Campus Κ12
  • FEE : Free
  • CONTENTS
    • What is “Sociable Robots Lab”?
    • About course registration
    • Research theme & demonstration
    • Research theme consultation

Social HRI/HAI Newcomer Assignment

Who Are You Talking to?

Assignment

Assuming a situation in which two users and Pepper are having a conversation, add various functions that let Pepper understand the state of the interaction.

  • A function to turn toward the person with a sad facial expression
  • A function to detect that a person is speaking
  • A function to estimate who is speaking
  • A function to estimate whether a person is looking at Pepper or at the other user
  • A function to estimate who is talking to whom (extra assignment)
  • …while a person is speaking, the person (or robot) the speaker faced most is regarded as the addressee (a minimal sketch of this heuristic follows the list)
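The sketch below shows the heuristic on its own, independent of any robot API. The target labels "pepper", "user_a", and "user_b" are hypothetical placeholders for whatever your head-orientation detector outputs per frame.

```python
# -*- coding: utf-8 -*-
# Sketch of the extra-assignment heuristic: the target the speaker faced
# most often during an utterance is treated as the addressee.
# The labels ("pepper", "user_a", "user_b") are hypothetical placeholders.
from collections import Counter


def estimate_addressee(facing_per_frame):
    """facing_per_frame: one target label per frame captured while the
    person was speaking. Returns the most frequent target, or None."""
    if not facing_per_frame:
        return None
    return Counter(facing_per_frame).most_common(1)[0][0]


if __name__ == "__main__":
    # Example: during the utterance the speaker mostly faced user_b.
    frames = ["pepper", "user_b", "user_b", "user_a", "user_b"]
    print(estimate_addressee(frames))  # -> user_b
```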

Implementation Environment

  • SoftBank Robotics Pepper
  • Choregraphe (a GUI-based development environment for programming Pepper)
  • Python 2 (used to customize Choregraphe programs)

Objective

  • Experience extracting communication-related information from images and audio

Background

  • For a robot to take part in an N-to-N conversation with people, it needs to recognize whom each person is addressing.
  • In existing research, the addressee of an utterance has been estimated from gaze (or head orientation as an approximation of gaze), prosodic features of speech (e.g., speaking slowly toward a robot), body sway, utterance content, and so on.
  • In this newcomer assignment, as the most classical approach, you will build a program on NAOqi that estimates the addressee from gaze information during an utterance; one possible shape for that program is sketched below.
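The outline below is only a sketch under stated assumptions: the ALMemory key for the facing direction, the speech-detection hook, and the robot address are placeholders that you would replace with the detectors built in the earlier steps of the assignment.

```python
# -*- coding: utf-8 -*-
# Sketch of a NAOqi-side polling loop (Python 2, as used with Choregraphe).
# FACING_KEY is a hypothetical ALMemory key written by your own
# head-orientation estimator; is_speaking() stands in for the
# speech-detection function from the earlier steps of the assignment.
import time
from collections import Counter

from naoqi import ALProxy

PEPPER_IP = "pepper.local"           # hypothetical address of the robot
PEPPER_PORT = 9559
FACING_KEY = "MyApp/SpeakerFacing"   # hypothetical key: "pepper" / "user_a" / "user_b"


def collect_facing_labels(memory, is_speaking, poll_interval=0.1):
    """Sample the facing label from ALMemory while is_speaking() is True."""
    labels = []
    while is_speaking():
        target = memory.getData(FACING_KEY)
        if target:
            labels.append(target)
        time.sleep(poll_interval)
    return labels


def estimate_addressee(labels):
    """The most frequently faced target is regarded as the addressee."""
    return Counter(labels).most_common(1)[0][0] if labels else None


# Usage outline (requires a reachable Pepper and your own speech detector):
#   memory = ALProxy("ALMemory", PEPPER_IP, PEPPER_PORT)
#   labels = collect_facing_labels(memory, is_speaking=your_speech_detector)
#   print(estimate_addressee(labels))
```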

MIDDLE Newcomer Assignment

Wire Loop Game (Irairabo)

Rules

You will build a robot manipulation system that plays the wire loop (Irairabo) game.

  • The robot used is Pepper
  • You will use the wire loop course and stick that we provide
  • The robot is operated through the manipulation system each participant builds (a rough sketch follows the equipment list below)
  • The time limit is 5 minutes
  • If the stick touches the frame, or leaves the course anywhere other than the start or goal, restart from that spot
  • Touching the frame or leaving the course is allowed up to 5 times

Equipment / Languages

  • Pepper
  • Controller (choose one of the following)
    • PS4 controller
    • iPad
    • Joy-Con
  • JavaScript (QiMessaging)
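The assignment itself uses JavaScript with QiMessaging, but purely as a sketch of the underlying idea (turning controller stick readings into incremental joint commands), the same mapping is shown below with the Python NAOqi API. The axis-to-joint mapping, step size, and robot address are invented for the example.

```python
# -*- coding: utf-8 -*-
# Sketch: mapping controller stick input to incremental Pepper arm motion.
# The axis-to-joint mapping, step size, and robot address are example
# assumptions; the assignment's actual system uses JavaScript / QiMessaging.
from naoqi import ALProxy

PEPPER_IP = "pepper.local"   # hypothetical address of the robot
PEPPER_PORT = 9559
STEP = 0.05                  # radians per update (example value)


def nudge_right_arm(motion, axis_x, axis_y):
    """Apply a small relative change to the right arm from stick axes in [-1, 1]."""
    names = ["RShoulderPitch", "RShoulderRoll"]
    changes = [axis_y * STEP, axis_x * STEP]
    motion.changeAngles(names, changes, 0.1)  # relative move at 10% of max speed


if __name__ == "__main__":
    motion = ALProxy("ALMotion", PEPPER_IP, PEPPER_PORT)
    motion.setStiffnesses("RArm", 1.0)                # stiffen the arm before moving
    nudge_right_arm(motion, axis_x=0.5, axis_y=-0.2)  # example controller reading
```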

Objectives

  • Gain experience designing and implementing a system
  • Experience creating robot motions
  • Learn about communication between the robot and other devices