ElliQ® Blog

Designing with Expression in Mind

Written by Intuition Robotics Team | Aug 6, 2018

In this upcoming series of blog posts, we’d like to introduce you to the team members of Intuition Robotics and show you the processes, tools, and principles that go into building ElliQ from the inside out. In this post, our design innovation researcher, Shlomi Azulay, will take you behind the scenes of how we are creating multi-modal expressions for ElliQ.

The more complex technology becomes, the more crucial the role of design is in keeping experiences accessible and intuitive. In this light, designing a social robot as our first product is an enormous challenge, but one that could set a precedent for a new category of human-machine interactions. We believe that a strong bond can be built with a machine, but only if that interaction is grounded in authenticity and defined by its context.

How the robot expresses itself is pivotal in creating this new type of interaction. We wanted to look at every possible way in which ElliQ could express herself, not only through verbal communication. Within the robotics industry, gesture has become a foundational channel for social output. However, we realized early on that treating each interaction technology separately creates segmented experiences; the best experiences come when all of our assets are integrated into cohesive, complex, contextual outputs.

In this case, assembling all of ElliQ’s output components into a specific timeline and formula became crucial in defining her expressions and social interactions. The magic happens when the system fully understands its environment and context and uses each of its expression mechanisms in tandem to create fluid moments of seemingly authentic personality.

ElliQ’s user experience is composed of five distinct layers:

  • Speech: The use of verbal expression for ElliQ to interact and convey personality
  • Sound: Separate from speech, uniquely designed sounds that alert the user to notifications or other product features
  • Motion: ElliQ’s character entity can freely move within three defined degrees of movement, allowing for a wide array of gestures
  • Lighting: Two LED arrays allow for the use of lighting to express character personality as well as call attention to notifications
  • 2D Interface: A detachable screen is used for content and video calls, kept separate from the robot so as not to break her character
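To make the layering concrete, here is a minimal, hypothetical sketch (in Python, and not ElliQ’s actual code) of how a multi-modal expression could be represented as timed cues across these five layers on a shared timeline. All names and fields are illustrative assumptions.

```python
# Hypothetical sketch: modeling a multi-modal expression as timed cues
# across ElliQ's five output layers. Names and fields are illustrative,
# not the production schema.
from dataclasses import dataclass, field
from enum import Enum


class Layer(Enum):
    SPEECH = "speech"      # verbal expression
    SOUND = "sound"        # non-verbal notification and alert sounds
    MOTION = "motion"      # gestures within the defined degrees of movement
    LIGHTING = "lighting"  # the two LED arrays
    SCREEN = "screen"      # the detachable 2D interface


@dataclass
class Cue:
    layer: Layer
    start: float     # seconds from the start of the expression
    duration: float  # seconds
    payload: dict    # e.g. {"text": "Good morning!"} or {"gesture": "nod"}


@dataclass
class Expression:
    name: str
    cues: list = field(default_factory=list)

    def add(self, cue: Cue) -> None:
        self.cues.append(cue)

    def timeline(self) -> list:
        """Return cues ordered by start time, ready for playback."""
        return sorted(self.cues, key=lambda c: c.start)


# Example: a short greeting that layers a nod, a lighting pulse,
# and speech on one timeline.
greeting = Expression("good_morning")
greeting.add(Cue(Layer.MOTION, 0.0, 1.2, {"gesture": "nod"}))
greeting.add(Cue(Layer.LIGHTING, 0.0, 1.5, {"pattern": "soft_pulse"}))
greeting.add(Cue(Layer.SPEECH, 0.3, 2.0, {"text": "Good morning!"}))
```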

It's all about timing.

Working on a new category of consumer product forces us to develop our own set of tools. The lack of traditions and biases in this new design field lets our team invent new design methodologies, and with a target audience that is not traditionally technology savvy, we have to edit our design down to its purest and most accessible form.

Our main interaction design tool is a 3D animation application. This kind of software lets us bring all five expression layers together on a single timeline, so the interaction plays out as a scripted scene, much like writing a film or a live theater piece. Once we adopted this methodology, we found ourselves treating every interaction unit as if we were animators working on an endless movie.
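As a rough illustration of what a single timeline buys you, the hypothetical sketch below dry-runs the greeting defined earlier, firing each layer’s cue at its start time, roughly the way scrubbing through the animation would. It reuses the Expression sketch above and is not how ElliQ actually plays back behaviors.

```python
# Hypothetical sketch: a dry-run "player" that walks an Expression's
# timeline and prints each cue at (roughly) its start time.
import time


def preview(expression: Expression) -> None:
    """Print each cue in timeline order, pausing until its start time."""
    clock = 0.0
    for cue in expression.timeline():
        if cue.start > clock:
            time.sleep(cue.start - clock)  # wait for the cue's slot on the timeline
            clock = cue.start
        print(f"[{clock:4.1f}s] {cue.layer.value:8s} {cue.payload}")


preview(greeting)  # the greeting assembled in the earlier sketch
```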

Our first step is to develop a script that describes the user’s general interactions and flows. Our second step is to draw a rough storyboard, which helps us take stock of the design assets required from our toolkit for each scene. Usually, at this stage, we shape those assets into a cohesive expression built entirely around the verbal sentence that ElliQ will say. After watching a 3D animated version of the scene, we refine the expression and send it directly to the cloud, where all of ElliQ’s behaviors are stored.
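For that final step, here is a hedged sketch of what publishing a refined expression to a cloud behavior store might look like. The endpoint, schema, and authentication are assumptions for illustration only, not Intuition Robotics’ actual pipeline, and the code reuses the Expression sketch above.

```python
# Hypothetical sketch: serializing a refined expression and POSTing it
# to an assumed cloud behavior store.
import json
import urllib.request


def publish_behavior(expression: Expression, base_url: str, token: str) -> int:
    """Serialize an expression's timeline and POST it to a behavior store."""
    body = {
        "name": expression.name,
        "cues": [
            {
                "layer": cue.layer.value,
                "start": cue.start,
                "duration": cue.duration,
                "payload": cue.payload,
            }
            for cue in expression.timeline()
        ],
    }
    request = urllib.request.Request(
        url=f"{base_url}/behaviors",  # assumed endpoint, for illustration only
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",  # assumed auth scheme
        },
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return response.status  # e.g. 201 once the behavior is stored
```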

It is important to note that there is a clear difference between the 3D animation design and the final robotic interaction. While the animator works from the point of view of an artificial camera, our designers cannot predict how and in what position we are going to find the person in the room. ElliQ is therefore programmed to behave more like a live actor, identifying the person in the room and adjusting her behavioral expressions based on context. The result is a unique interaction, every time, for every user – guided by design but controlled by you. This is the future of robotic interactions, and we’re excited to be paving the way.