Hello, my name is Julio, and this is Javi
and we are going to show you our project for the SDG2 (Digital Systems II) course at UPM (Technical University of Madrid).
It consists mainly of a robotic face, which you can see here
and with which we can interact.
In the main operating mode,
we can send a message to the robot, and it will give an emotional response
Architecture
Our project's core controller is a Raspberry Pi
which is a computer the size of a credit card.
It is connected to a speaker, which will play the messages,
an Ethernet cable, which provides it with an Internet connection,
this is the power supply wire
and here are the GPIO pins, general-purpose ports used to interact with the Raspberry Pi.
We connected some of them to this PCB we designed, which has two functions:
it works as an RS-232 interface to the robotic face,
and as a control/status interface to the user.
This green LED indicates that the program is running,
the yellow LED means that the program is loading,
and the red LED indicates that the program is off.
It also has two buttons:
this one toggles between emotional mode and neutral mode
and this one turns the robot on or off.
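As a rough sketch (not the project's actual code), this is how the status LEDs, the buttons and the serial link to the face could be handled from the Raspberry Pi in Python, using RPi.GPIO and pyserial; the pin numbers, serial port, baud rate and command format are assumptions made only for illustration:

    import RPi.GPIO as GPIO
    import serial   # pyserial; the PCB level-shifts the Pi's UART to RS-232

    GREEN_LED, YELLOW_LED, RED_LED = 17, 27, 22    # hypothetical BCM pin numbers
    MODE_BUTTON, POWER_BUTTON = 23, 24             # hypothetical BCM pin numbers

    GPIO.setmode(GPIO.BCM)
    GPIO.setup([GREEN_LED, YELLOW_LED, RED_LED], GPIO.OUT)
    GPIO.setup([MODE_BUTTON, POWER_BUTTON], GPIO.IN, pull_up_down=GPIO.PUD_UP)

    def show_status(status):
        # Light exactly one LED: 'running' (green), 'loading' (yellow) or 'off' (red).
        GPIO.output(GREEN_LED, status == "running")
        GPIO.output(YELLOW_LED, status == "loading")
        GPIO.output(RED_LED, status == "off")

    def toggle_mode(channel):
        # Placeholder: switch between the emotional and the neutral voice.
        print("voice mode toggled")

    def toggle_power(channel):
        # Placeholder: start or stop the main program.
        print("robot power toggled")

    GPIO.add_event_detect(MODE_BUTTON, GPIO.FALLING, callback=toggle_mode, bouncetime=300)
    GPIO.add_event_detect(POWER_BUTTON, GPIO.FALLING, callback=toggle_power, bouncetime=300)

    face = serial.Serial("/dev/ttyAMA0", 9600)     # hypothetical port and baud rate

    def set_face(position):
        # Hypothetical command format understood by the robotic face.
        face.write((position + "\n").encode())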
Several processes run on the Raspberry Pi. The Apache server allows connections to our program over the Internet.
The Freeling program and two SRILM processes (one for positive emotions and one for negative ones) analyse the messages.
And the main process waits for messages and manages the other processes.
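To give an idea of how the two SRILM models could be used (the actual project code may differ), here is a hedged sketch that scores a message under a positive and a negative language model and keeps the emotion whose model fits it better; the model file names and the output parsing are assumptions, and the Freeling preprocessing step is omitted:

    import re
    import subprocess
    import tempfile

    def perplexity(model, text):
        # Score `text` with SRILM's ngram tool and return its perplexity under `model`.
        with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
            f.write(text + "\n")
            path = f.name
        out = subprocess.run(["ngram", "-lm", model, "-ppl", path],
                             capture_output=True, text=True).stdout
        match = re.search(r"ppl=\s*([\d.]+)", out)
        return float(match.group(1)) if match else float("inf")

    def detect_emotion(message):
        # Hypothetical model files, one trained on positive and one on negative text.
        positive = perplexity("positive.lm", message)
        negative = perplexity("negative.lm", message)
        return "positive" if positive < negative else "negative"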
Now I am turning it off.
Demonstration
Now we are going to give a demonstration with the neutral voice,
using a computer to connect to the Apache server running on the Raspberry Pi.
The web interface contains two main sections:
the first one lets us send a message to the robot
and the second one has some buttons that put the robotic face in certain predefined positions.
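Both sections boil down to HTTP requests to the Apache server on the Raspberry Pi. As a purely illustrative sketch (the address, endpoint names and parameters below are hypothetical), a client could do something like this:

    import requests

    ROBOT = "http://192.168.1.50"   # hypothetical address of the Raspberry Pi

    # Section 1: send a free-text message for the robot to react to.
    requests.post(ROBOT + "/cgi-bin/message", data={"text": "Hello, good morning"})

    # Section 2: put the face in one of the predefined positions.
    requests.post(ROBOT + "/cgi-bin/face", data={"position": "happy"})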
We will first show how the buttons work.
The first button puts on a happy face
[ROBOT:] I am happy
as we can see, the robot put on a happy face and played a matching message
Sad face
[ROBOT:] I am sad
Surprised face
[ROBOT:] I am surprised
Angry face
He is really upset now
And neutral face
[ROBOT:] I am bored
This was the neutral voice. To toggle between the neutral and the emotional voice, we have this button on the PCB.
Now we are going to send some messages to the robot
to see its emotional response to them
For example, it is now in a neutral position and we are going to send a positive message
such as 'Hello, good morning'
[ROBOT:] Hello, good morning
[ROBOT:] I like this
In the neutral voice mode, the robot plays the message and then says 'I like this' or 'I don't like this'
depending on how positive or negative it finds the message with respect to its current mood state.
If the new message has the same emotional value as the current state, the robot simply says 'OK'.
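As a minimal sketch of this rule (the real program's scoring and thresholds are not shown; this only mirrors the behaviour seen in the demo), the mood can be thought of as a bounded score:

    def react(message_emotion, mood, step=1, limit=2):
        # message_emotion is 'positive' or 'negative'; mood is an integer in
        # [-limit, limit]. Returns (spoken reaction, new mood).
        delta = step if message_emotion == "positive" else -step
        new_mood = max(-limit, min(limit, mood + delta))
        if new_mood == mood:                       # no significant change in mood
            return "OK", mood
        return ("I like this" if delta > 0 else "I don't like this"), new_mood

    # Example matching the demo: three positive messages from a neutral start.
    mood = 0
    for _ in range(3):
        reaction, mood = react("positive", mood)
        print(reaction)        # I like this, I like this, OK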
For example, if we send the same message again
[ROBOT:] Hello, good morning
[ROBOT:] I like this
it finds it positive again, but if we send it for the third time
[ROBOT:] Hello, good morning
[ROBOT:] OK
we can see that it no longer causes a significant change in the robot's mood state,
so it keeps both its mood state and the position of the face.
And now we are going to send some negative messages, to see the same effect with the negative emotion.
[ROBOT:] Goodbye, we are leaving
[ROBOT:] I don't like this
We send it again
[ROBOT:] Goodbye, we are leaving
[ROBOT:] I don't like this
[ROBOT:] Goodbye, we are leaving
[ROBOT:] OK
Emotional voice
Now we are going to show the emotional mode of our robot.
This mode is not integrated into the Raspberry Pi,
because it is hosted on a computer in the Departamento de Ingeniería Electrónica (Department of Electronic Engineering) at UPM.
We send the messages to this computer and wait for it to synthesize the emotional response and send it back to us.
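As an illustrative sketch of that exchange (the URL, parameters and audio format are assumptions, not the real interface), the request/response cycle could look like this:

    import subprocess
    import requests

    SYNTH_URL = "http://synthesis.example.upm.es/emotional"   # hypothetical server

    def speak_emotionally(text, emotion):
        # The synthesis can take several seconds, hence the generous timeout.
        reply = requests.post(SYNTH_URL, data={"text": text, "emotion": emotion},
                              timeout=60)
        with open("/tmp/emotional.wav", "wb") as f:
            f.write(reply.content)
        subprocess.run(["aplay", "/tmp/emotional.wav"])   # play through the speaker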
To show this, we first put the face in a happy position.
The emotional message takes several seconds to generate
[EMOTIONAL ROBOT:] I am happy
Now we are going to show a sad emotion
[EMOTIONAL ROBOT:] I am sad
and now we are going to send two messages, first a positive one and then a negative one, to see the reaction
For example, we write 'Hello, good morning'
[EMOTIONAL ROBOT:] Hello, good morning
In this case, as the message is positive, the robot plays it with a happy tone.
And now a negative message such as 'Goodbye'
It said the message with a neutral voice: the robot was in a happy mood, and this negative message changed that to neutral
Android app
Now we are going to show the Android app we developed for the project
It has the same options as the web interface: move the face directly or send a message
It has a configuration screen in which we can enter the IP address of the server,
and options to set a position or to talk to the robot
Let us test some of the fixed positions, such as a happy face,
[ROBOT:] I am happy
a surprised face,
[ROBOT:] I am surprised
or a neutral face
[ROBOT:] I am bored
And now we are going to try sending a message
There are two options to send it: one is a text message
for example, we write 'I love you'
'I love you', we send it
[ROBOT:] I love you
[ROBOT:] I like this
And the other option recognises a voice message and sends it
[TO THE PHONE:] I don't love you
[ROBOT:] I don't love you
[ROBOT:] I don't like this
And this is all we have in the Android app.
So this is all; we hope you liked it
If you would like more information about our project, you can visit our blog
or take a look at the project source code, which we have published on GitHub.