1. Autonomous Virtual and Artificial Characters
Virtual characters, also known as virtual actors, have become
very popular in the last 15 years, mainly through 3D movies and
games. In movies, they now have very realistic physical and
emotional characteristics, including their facial expressions,
hair, clothes, and motions. In parallel, robotics has made
considerable progress and new kinds of robots are available as
social companions. Some of them have the appearance of humans or
pets. Since they are physically present, we call them artificial
characters rather than virtual characters. These social robots
share much of their underlying research with virtual characters. In the
rest of this essay, we will call them VACs (virtual and artificial
characters).
VACs can play many roles, but we will distinguish two main ones:
i-companions, which play a social role, and i-substitutes, which
can stand in for a human in telepresence applications.
We envision that in the near future, i-companions, empowered with
memory, emotions, and personalities, will live together with humans,
entertaining and taking care of us. I-companions will ubiquitously
shift form between physical robots and realistic 3D virtual humans.
"I" stands for interactive, interpersonal, intimate, and individualized.
I-companions will improve human quality of life by providing people
with assistance and safety anywhere, anytime. Deployed at a fraction
of the cost of actual humans, i-companions will alleviate the acute
shortage of skilled human labor, particularly in Western countries.
Interactive and digital media have matured to the point that embodying
these technologies in human characters like virtual humans and social
robots is nearing fruition.
I-companions will play a major role in three application domains:
education, security, and healthcare. First, i-companions can assist
human educators of children. For example, while playing a serious
game with a child, an i-companion might notice the child doing
something dangerous, like approaching a hot stove. The i-companion
could ask the child to stop, and, if necessary, call for human
assistance. Second, i-companions will detect abnormal human behaviors
in public spaces. In shops, sales i-companions will help customers
find what they are looking for, and, at the same time, identify
potential thieves. If a person exhibits unusual behavior, the
i-companion will go and talk to her, to discern the person's true
intentions. This is an important improvement over existing security
technologies such as surveillance cameras, as i-companions can help
prevent crime instead of just watching it happen. Finally, i-companions
will monitor the sick or elderly, in homes and hospitals. A virtual
nurse displayed on-screen in a hospital room will keep a patient company,
while assimilating a staggering quantity and variety of data from cameras,
microphones, and medical devices to ensure that the patient's condition
remains stable. I-companions will sense and understand human behavior,
assisting or watching over their human owners. An automatic understanding
of social situations and human intentions is key to any of these
scenarios.
Increasingly, people communicate remotely using new technologies
such as teleconferencing and voice over IP (e.g., Skype). These
technologies allow people to attend meetings or gather no matter where
they are. They can even share the same virtual space using 3D avatars,
like in Second Life. More sophisticated systems are being developed with
3D capturing and rendering, leading to a true 3D telepresence experience,
like in the BeingThere Centre.
If the 3D avatar is remotely guided by the real participant, it is
possible to replace this real participant with its autonomous virtual
counterpart, as an i-substitute. The i-substitute is supposed to give a
partial illusion that the real person is present. This implies that the
i-substitute should look the same as the real human, speak with the same
intonation, and be somehow aware of the real situation, the real participants,
and the task being performed. The i-substitute should react at the right time
based on its perception of the situation and the other (real) participants.
It evaluates what each real participant is doing, and develops its perception
from visual and audio input and recognition. The i-substitute reacts according
to the input and its current knowledge. Its reactions encompass animation (body
and facial gestures) and speech synthesis. The i-substitute could be an
autonomous virtual human as well as a social robot, depending on the type of
meeting and environment.
2. Properties of VACs
Autonomy is generally the quality or state of being self-governing. Rather
than acting from a script, a VAC is aware of its changing virtual environment,
making its own decisions in real time in a coherent and effective way. A VAC
should appear to be spontaneous and unpredictable, making the audience believe
that the character is really alive and has its own will.
To be autonomous, a VAC must be able to perceive its environment and decide
what to do to reach an intended goal. The decisions are then transformed into
motor control actions, which consist of animation for a virtual character and
mechanical motion for a social robot, so that the behavior is believable. Therefore,
a VAC's behavior consists of the following sequence: perception of the environment,
action selection, and reaction.
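This perception, selection, reaction sequence can be sketched as a minimal agent loop. The class, goals, and percepts below are purely illustrative, not taken from any particular system:

```python
class VAC:
    """Toy virtual/artificial character: perceive -> select action -> react."""

    def __init__(self, goal):
        self.goal = goal
        self.log = []          # record of executed actions

    def perceive(self, environment):
        # Sensors reduce the raw environment to a set of symbolic percepts.
        return {item for item in environment if item != "noise"}

    def select_action(self, percepts):
        # Serve the most urgent need: pursue the goal if it is perceivable,
        # otherwise explore for new information.
        return f"approach {self.goal}" if self.goal in percepts else "explore"

    def react(self, action):
        # For a virtual human this would drive animation; for a robot, motors.
        self.log.append(action)

    def step(self, environment):
        self.react(self.select_action(self.perceive(environment)))

agent = VAC(goal="charging station")
agent.step(["tree", "noise"])               # goal not visible -> explore
agent.step(["tree", "charging station"])    # goal visible -> approach it
print(agent.log)  # ['explore', 'approach charging station']
```

A real system would replace each method with a full module (vision, planning, animation), but the cycle itself stays the same.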
A central problem in designing VACs is deciding on the appropriate
actions at each point in time, to work toward the satisfaction of the current goal,
which represents the VAC's most urgent need. At the same time, there is a need to
pay attention to the demands and opportunities coming from the environment, without
neglecting, in the long term, the satisfaction of other active needs.
There are four properties that determine how VACs make their decisions: perception
and awareness, adaptation and intelligence, memory, and emotions.
Perception and Awareness. Perception of the elements in the
environment is essential for VACs, as it gives them an awareness of what is changing.
A VAC continuously modifies its environment, which, in turn, influences its perceptions.
Therefore, sensorial information drastically influences VAC behavior. This means that
we cannot build believable VACs without considering the way they perceive the real
world and us. To realize believable perception, i-companions should have sensors
(cameras, microphones) that play the role of the senses of real humans (eyes, ears).
For social robots, these sensors can be directly built into the robot, but for
virtual humans, we need to have these sensors as auxiliary devices, generally part
of the display. What is essential for VACs is to recognize the people they meet and
what these people do, which requires the complex processing of information.
Adaptation and Intelligence. Adaptation and
intelligence define how the character is capable of reasoning about
what it perceives, especially when unpredictable events happen. A
VAC should constantly choose the best action so that it can survive
in its environment and accomplish its goals. As the environment
changes, the VAC should be able to react dynamically to new elements,
so its beliefs and goals may evolve over time. A VAC determines its
next action by reasoning about what it knows to be true at a specific
time. Its knowledge is decomposed into its beliefs and internal states,
goals, and plans, which specify a sequence of actions required to
achieve a specific goal. When simulating large groups or communities
of VACs, it is possible to use bottom-up solutions that use artificial
life techniques, rather than top-down, plan-based approaches, such as
those that are common in artificial intelligence. This allows new,
unplanned behaviors to emerge.
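The decomposition into beliefs, goals, and plans can be sketched as a simplified plan-based selector. All of the domain facts and plan steps below are invented for illustration:

```python
# Simplified plan-based action selection: knowledge is split into beliefs
# (what the VAC currently holds true), goals, and plans (action sequences
# that achieve a goal). Domain content is invented for illustration.

beliefs = {"door_open": False, "has_key": True}
plans = {
    # goal: (preconditions over beliefs, sequence of actions)
    "enter_room": ({"door_open": True}, ["walk_through_door"]),
    "open_door": ({"has_key": True}, ["insert_key", "turn_key", "push_door"]),
}
subgoal_for = {"door_open": "open_door"}  # which goal establishes which belief

def next_actions(goal):
    """Return the action sequence for `goal`, recursively inserting
    subgoal plans for any precondition not yet believed true."""
    preconditions, actions = plans[goal]
    steps = []
    for belief, required in preconditions.items():
        if beliefs.get(belief) != required:
            steps += next_actions(subgoal_for[belief])
    return steps + actions

print(next_actions("enter_room"))
# ['insert_key', 'turn_key', 'push_door', 'walk_through_door']
```

A bottom-up, artificial-life approach would instead omit the plan library entirely and let group behavior emerge from simple local rules.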
Memory. It is necessary for a VAC to have a memory
so that similar behaviors can be selected when predictable elements
reappear. Memory plays an important role in the modeling of autonomy,
as actions are often decided based on memories. But imagine a VAC in
a room containing 100 different objects. Which objects can be
considered memorized by the virtual character? It is tempting to
decide that whenever an object is seen by the VAC, it should be stored
in its memory. But if you consider humans, nobody is able to remember
every single object in a room. Therefore, the memory of a realistic
VAC should not be perfect either.
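One common way to keep such a memory imperfect is to store only sufficiently salient objects and let stored traces decay over time. The thresholds, half-life, and salience values below are arbitrary illustrative choices:

```python
import math

class ImperfectMemory:
    """Object memory with a salience gate and exponential forgetting.
    All numeric parameters are arbitrary illustrative choices."""

    def __init__(self, store_threshold=0.3, recall_threshold=0.2, half_life=60.0):
        self.store_threshold = store_threshold
        self.recall_threshold = recall_threshold
        self.decay = math.log(2) / half_life   # exponential decay rate (per second)
        self.traces = {}                       # object -> (salience, time seen)

    def see(self, obj, salience, t):
        # Only objects salient enough at viewing time are stored at all.
        if salience >= self.store_threshold:
            self.traces[obj] = (salience, t)

    def recalls(self, obj, t):
        # A stored trace fades exponentially; weak traces are forgotten.
        if obj not in self.traces:
            return False
        salience, seen_at = self.traces[obj]
        strength = salience * math.exp(-self.decay * (t - seen_at))
        return strength >= self.recall_threshold

memory = ImperfectMemory()
memory.see("red vase", salience=0.9, t=0)
memory.see("paper clip", salience=0.1, t=0)   # too mundane: never stored
print(memory.recalls("red vase", t=30))        # True: trace still strong
print(memory.recalls("red vase", t=300))       # False: forgotten after ~5 min
print(memory.recalls("paper clip", t=1))       # False
```

With 100 objects in the room, such a character remembers only the striking ones, and only for a while, which is exactly the imperfection the scenario calls for.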
Emotions. The believability of a VAC is made
possible by the emergence of emotions clearly expressed at the right
moment. An emotion is an affective reaction to a perception that
induces the character to assume a physical response, facial expression,
or gesture, or to select a specific behavior. The apparent emotions
of a VAC and the way it reacts are what give it the appearance of
a living being with needs and desires. Without them, a VAC would just
look like an automaton. Apart from making them appear more realistic,
VACs' visible emotions can provide designers with a direct way of
influencing the user's emotional state. Notably, recent social
robots can also express emotions: their faces have deformable skin
and can display subtle expressions and even lip synchronization.
3. The Impact of Research on Virtual and Artificial Characters
The four properties above are very important in creating believable
VACs. Modeling these properties accurately in real time requires
research efforts from various branches of computer science. We
emphasize a few of them below.
Behavior Planning. Behavior planning involves
the selection of appropriate actions for the VAC to execute. These
decisions should reflect the individual characteristics of the VAC,
including its intelligence, its motivations, and its social behavior.
Besides being individual, action selection architectures for VACs
should be both reactive and proactive to be efficient in real time.
The transitions between reactions and planning should be rapid and
continuous in order to elicit coherent and appropriate behaviors in
changed or unexpected situations. The design of a behavior planner
satisfying these criteria is a research challenge. While virtual
characters and social robots share most of these properties, the
motion of virtual characters will be smoother, whereas social robots
will be able to perform concrete tasks, such as carrying real objects.
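One common way to combine reactive and proactive behavior is a simple priority arbiter: reactive rules checked every cycle pre-empt the current plan, and the plan resumes when nothing urgent fires. The rules, percepts, and actions below are invented for illustration:

```python
# Hybrid action selection sketch: reactive rules pre-empt a deliberative
# plan; the plan resumes when no rule fires. Content is illustrative only.

reactive_rules = [
    # (condition over percepts, reaction) in priority order
    (lambda p: "obstacle_ahead" in p, "step_aside"),
    (lambda p: "user_speaking" in p, "face_user"),
]

def select_action(percepts, plan):
    for condition, reaction in reactive_rules:
        if condition(percepts):
            return reaction                      # rapid, reflex-like response
    return plan.pop(0) if plan else "idle"       # otherwise continue the plan

plan = ["walk_to_kitchen", "fetch_cup"]
print(select_action({"obstacle_ahead"}, plan))   # step_aside (plan untouched)
print(select_action(set(), plan))                # walk_to_kitchen
print(select_action(set(), plan))                # fetch_cup
```

Because the rules are evaluated on every cycle, the transition from plan execution to reaction and back is immediate, which is the rapid, continuous switching the text asks for.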
Sensor Design. VACs have to be aware of events
and characteristics of the real world. It takes real devices, such
as cameras, microphones, and haptic devices, to capture this
information and bring it to the virtual characters. The information
they provide must be integrated with that of virtual sensors.
Modeling Emotions. To allow VACs to respond
emotionally to a situation, they could be equipped with a computational
model of emotional behavior. Emotion-related behavior, such as facial
expressions and posture, can be coupled with this computational model,
which can be used to influence their actions. The development of a good
computational model is a challenge.
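A minimal computational model of this kind appraises events against the character's goals, lets the emotional state decay toward neutral, and maps the state to an expression. The appraisal blending and expression thresholds below are invented for illustration, not taken from any published model:

```python
class EmotionModel:
    """Toy appraisal-based emotion model: events are appraised as good or
    bad for the character's goals, the state decays toward neutral, and
    the state selects a facial expression. All rules are illustrative."""

    def __init__(self, decay=0.8):
        self.valence = 0.0      # -1 (distressed) .. +1 (joyful)
        self.decay = decay      # how strongly the old state persists

    def appraise(self, event_desirability):
        # Blend the appraisal of a new event into the decayed current state.
        blended = self.decay * self.valence + event_desirability
        self.valence = max(-1.0, min(1.0, blended))

    def expression(self):
        if self.valence > 0.3:
            return "smile"
        if self.valence < -0.3:
            return "frown"
        return "neutral"

model = EmotionModel()
model.appraise(+0.8)           # goal-congruent event
print(model.expression())      # smile
model.appraise(-1.0)           # strongly goal-incongruent event
print(model.expression())      # frown
```

The same valence signal could also bias action selection, so that emotion influences behavior and not just the face.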
Modeling Attention and Gaze. When we walk through a city,
we look at other people, at objects, or even at nothing in particular. An
important aspect that can greatly enhance the realism of crowd and group
animation is for characters to be aware of their environment and of the
other characters in it. When adding attention behaviors to crowds, we
are confronted with two issues: detecting the points of interest the
characters are looking at, and editing the characters' motions for use
in modeling the gaze behavior.
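The first issue, detecting points of interest, is often approximated by scoring candidates by distance and by angular offset from the character's heading. The weighting scheme and limits below are arbitrary illustrative choices:

```python
import math

def gaze_target(position, heading, points_of_interest,
                max_distance=10.0, max_angle=math.radians(60)):
    """Pick the point of interest a character should look at.
    Scores combine proximity and centrality in the field of view;
    the weighting is an arbitrary illustrative choice. Angle
    wrap-around is ignored for brevity."""
    best, best_score = None, 0.0
    for name, (x, y) in points_of_interest.items():
        dx, dy = x - position[0], y - position[1]
        distance = math.hypot(dx, dy)
        angle = abs(math.atan2(dy, dx) - heading)
        if distance > max_distance or angle > max_angle:
            continue                      # outside the field of view
        score = (1 - distance / max_distance) * (1 - angle / max_angle)
        if score > best_score:
            best, best_score = name, score
    return best

poi = {"shop window": (2.0, 0.5), "distant tower": (9.0, 3.0), "behind": (-3.0, 0.0)}
print(gaze_target(position=(0.0, 0.0), heading=0.0, points_of_interest=poi))
# shop window
```

The second issue, editing the motion, would then blend head and eye rotations toward the chosen target without breaking the underlying walk animation.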
Modeling Memory-based Emotions. Some researchers
mathematically model emotions, behavior, mood, and personality for
virtual characters. These models can be used to create an emotionally
responsive VAC. However, such models lack the critical component of
memory: a memory not just of events but also of past emotional
interactions. A memory-based emotion model is needed to take into
account the memory of past interactions in order to build long-term
relationships between the virtual character and users.
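Such a memory-based model might, for example, keep a per-user history of emotionally tagged interactions and derive an attitude from it, so that past sessions color how the character greets a returning user. The structure, weighting, and numbers below are invented for illustration:

```python
from collections import defaultdict

class RelationshipMemory:
    """Toy memory-based emotion model: each interaction is stored with an
    emotional valence, and older memories count for less. The weighting
    scheme is an arbitrary illustrative choice."""

    def __init__(self, fade=0.9):
        self.fade = fade                   # per-interaction fading of old memories
        self.history = defaultdict(list)   # user -> [(event, valence), ...]

    def remember(self, user, event, valence):
        self.history[user].append((event, valence))

    def attitude(self, user):
        # Recency-weighted average valence of remembered interactions.
        events = self.history[user]
        if not events:
            return 0.0
        weights = [self.fade ** (len(events) - 1 - i) for i in range(len(events))]
        return sum(w * v for w, (_, v) in zip(weights, events)) / sum(weights)

    def greeting(self, user):
        a = self.attitude(user)
        return "warm" if a > 0.2 else "cold" if a < -0.2 else "polite"

memory = RelationshipMemory()
print(memory.greeting("alice"))            # polite: no shared history yet
memory.remember("alice", "played a game together", +0.9)
memory.remember("alice", "she praised my drawing", +0.6)
print(memory.greeting("alice"))            # warm
```

Because the history persists across sessions, the character's emotional stance toward each user accumulates over time, which is the basis for a long-term relationship.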
4. Ethical Concerns
There are ethical concerns regarding the use of VACs.
One such concern involves decisions made by real people that
are based on the advice of autonomous characters. Autonomy
means that the VAC makes decisions based on its
understanding of the environment and of the rules of the
surrounding world. In a simulated world, how can we be
sure that this information corresponds to reality? As with
any computer program, VACs are not immune to bugs or tampering.
Their advice on critical matters should always be validated.
Another ethical concern is the fact that an autonomous virtual
human may be indiscernible from a real existing person. For
example, a terrorist group could create a TV spot showing
democratic leaders promoting nondemocratic values. To avoid
misleading and manipulating the public, we will need to use
technology, such as watermarking, to reliably indicate to the
viewer that the human is virtual.
Some autonomous characters promote violence, terrorism,
abuse, or crime, in the context of games or other interactive
situations. If their behavior is realistic, they are likely
to exert a strong negative influence, even when they are known
to be virtual. New laws and regulations will have to be developed
in this area.
When avatars and VACs are together in the same virtual community,
it becomes very difficult for a member of the community to know when
he or she is interacting with an avatar or a VAC. In the near future,
when VACs can replace avatars while the user is away, the problem could
become even trickier.
5. Scalability and Mobility
With the advent of wearable devices, advanced PDAs, and
smartphones, VACs can be with people all the time to guide
and help them. Moreover, with light see-through glasses,
it becomes easy to add virtual characters to a real scene.
Applications of this technology could be to show people how
to use electronic devices or how to find their way through
a particular area without the use of a map. If a social robot
is used, it can show the way or even help to carry objects.
6. Possibilities and Challenges
VACs will be essential in many applications of the future. They will be our playmates, teachers, therapists, and pets. Because of their logic and memory, autonomous characters will bring skills and abilities that complement those of humans, rather than replace them. The next generation--children and young adults--is very open to virtual communities and e-learning, and is expected to have no problems interacting with virtual humans.
How far are we from such a situation? Current VACs are becoming more realistic in terms of their appearance and animation, and they are able to perceive the virtual world and the people living in that world. They may act based on their perception in an autonomous manner. New social robots have an increasingly realistic human appearance and can have smooth facial expressions and gestures. However, their intelligence is constrained and limited.
In the near future, we may expect to have VACs that are able to learn or understand a few situations, due to the development of new methods of artificial intelligence and behavior. However, a great deal of research effort is still needed to reach the point at which VACs can behave autonomously and interact naturally, like real humans. True simulation of the full complexity of human behavior will take more time.
Created: Feb 14, 2006
Last updated: Feb 12, 2014
Web Pages
The Institute for Media Innovation: housed at Nanyang Technological University (Singapore); the Core Group works on both virtual humans and social robots.
BeingThere Centre: International Research Centre for Telepresence: a joint center between Nanyang Technological University (Singapore), the University of North Carolina (USA), and the Swiss Federal Institute of Technology (Switzerland).
The Center for Human Modeling and Simulation: investigates computer graphics modeling and animation techniques for embodied agents, virtual humans, and their applications.
MIRALab: a pluridisciplinary lab at the University of Geneva working on virtual human simulation and virtual worlds.
CMU Social Robots Project: a project with the goal of making communication and interaction with robots easy and enjoyable.
Articles
An immersive multi-agent system for interactive applications. Wang, Y.; Dubey, R.; Magnenat-Thalmann, N.; Thalmann, D. The Visual Computer 29, 5 (2013), 323-332.
Anthropomorphism of artificial agents: a comparative survey of expressive design and motion of virtual characters and social robots. Dalibard, S.; Magnenat-Thalmann, N.; Thalmann, D. In Proc. of the Autonomous Social Robots and Virtual Humans Workshop (Singapore, May 9, 2012).
Making them remember: emotional virtual characters with memory. Kasap, Z.; Benmoussa, M.; Chaudhuri, P.; Magnenat-Thalmann, N. IEEE Computer Graphics and Applications 29, 2 (2009), 20-29.
Simulating gaze attention behaviors for crowds. Grillon, H.; Thalmann, D. Computer Animation and Virtual Worlds 20, 3-4 (2009), 111-119.
Intelligent virtual humans with autonomy and personality: state-of-the-art. Kasap, Z.; Magnenat-Thalmann, N. Intelligent Decision Technologies 1, 1-2 (2007), 3-15.
An integrated perception for autonomous virtual agents: active and predictive perception. Conde, T.; Thalmann, D. Computer Animation and Virtual Worlds 17, 3-4 (2006), 457-468.
Generic personality and emotion simulation for conversational agents. Egges, A.; Kshirsagar, S.; Magnenat-Thalmann, N. Computer Animation and Virtual Worlds 15, 1 (2004), 1-13.
The Thing Growing: autonomous characters in virtual reality interactive fiction. Anstey, J.; Pape, D.; Sandin, D. In Proc. of the IEEE Virtual Reality Conference 2000 (VR '00) (New Brunswick, NJ, Mar. 18-22, 2000), 71-78.
Books
Crowd simulation (2nd ed.). Thalmann, D.; Musse, S. R. 2012.
Handbook of virtual humans. Magnenat-Thalmann, N.; Thalmann, D. (Eds.). 2004.
Reviews
Let's keep in touch online: a Facebook aware virtual human interface. Liu, G.; Choudhary, S.; Zhang, J.; Magnenat-Thalmann, N. The Visual Computer 29, 9 (2013), 871-881.
Modelling and controlling of behavior for autonomous mobile robots. Skubch, H. Springer Vieweg, 2013.
Building long-term relationships with virtual and robotic characters: the role of remembering. Kasap, Z.; Magnenat-Thalmann, N. The Visual Computer 28, 1 (2012), 87-97.
Introduction to autonomous mobile robots (2nd ed.). Siegwart, R.; Nourbakhsh, I.; Scaramuzza, D. MIT Press, 2011.
Stepping into virtual reality. Gutierrez, M.; Vexo, F.; Thalmann, D. Springer, 2008.
Virtual storytelling: using virtual reality technologies for storytelling. Subsol, G. (Ed.). Springer, 2006.
Fast multi-level adaptation for interactive autonomous characters. Dinerstein, J.; Egbert, P. ACM Transactions on Graphics 24, 2 (2005), 262-288.