Judges’ Queries and Presenter’s Replies

  • Members may log in to read judges’ queries and presenters’ replies.

Presentation Discussion

  • Ayelet Gneezy

    May 22, 2012 | 01:28 a.m.

    I enjoyed your video and believe that your work is important. At the same time, I was missing the part where you tell us why we should care about your research. I did find some of that in your poster, with Asperger’s being the best candidate for research motivation. Could you please say a bit more about how such a tool might help to better people’s lives (as opposed to bettering their gaming experiences)?
    Thank you.

  • Albert Cruz

    May 23, 2012 | 10:26 p.m.

    Thanks for the comment! There is a huge list of applications for this type of work, aside from poker.
    – Security/lie detection. Like Fox’s TV show, Lie to Me (Tim Roth plays a character based on the real-life Paul Ekman, who is a very influential person in my field), but we want to do this automatically using a computer. The deception detector in Ocean’s 13 is another example of what I’m doing. In that movie, they had a facial expression recognition system that detected cheating.
    – Non-verbal communication with computers (human-computer interaction, intelligent tutoring systems). When two humans communicate with each other, they go beyond speech; non-verbal communication plays a very important part. Current communication with computers is limited to the keyboard and mouse. Imagine a computer where you can dismiss e-mail spam with a hand wave, or remote control using facial expressions and gestures. In a field called Affective Computing, computers (embodied agents) project expressions back to you to further facilitate non-verbal communication.
    – Medical applications. We already mentioned Asperger’s. The Transporters is an example of projecting expressions to help autistic children understand emotion. We would want a robotic system to do this automatically (it would verify a user’s facial expressions and project them back as well). Another example would be detection of pain in infants. This is a difficult thing to do; at this stage their verbal feedback is unreliable. An ideal system would be able to detect expressions to determine whether or not the infant is in pain.

  • Annie Aigster

    May 22, 2012 | 04:43 p.m.

    Great video and research. How expensive would this technology be if applied to medicine and other sciences?

  • Albert Cruz

    May 23, 2012 | 09:58 p.m.

    Thanks for the comment! The example I gave in the video uses a computer screen to display an embodied agent. An example of this would be GRETA and the Sensitive Artificial Learner. That type of system would cost the price of the computer you intend to run the system on (and a projector, maybe).

    Peter Robinson’s group is using a robotic head to project expressions. I can’t say how much that type of system would cost.

  • Benjamin Guan

    May 23, 2012 | 05:10 p.m.

    Great presentation!

  • Albert Cruz

    May 23, 2012 | 09:53 p.m.


  • Margery Hines

    May 24, 2012 | 08:21 p.m.

    Great job on the video; you incorporated a lot of information into only three minutes in a very organized way.

  • Further posting is closed as the competition has ended.

Albert Cruz

University of California at Riverside
Years in Grad School: 4

Recognizing Human Facial Emotions in Video: A Psychologically-Inspired Fusion Model

Communication among humans is rich in complexity. It is not limited to verbal signals; emotions are conveyed through gesture, pose, and facial expression. Facial Emotion Recognition and Analysis, the set of techniques by which non-verbal communication is quantified from video of the face, is an exemplar case of computers having difficulty detecting underlying feelings. This has applications in medicine (treatment of Asperger’s syndrome), video games (Xbox Kinect), human-computer interaction, and affective computing. The challenge for image analysis is to design a system that recognizes apparent facial expressions as underlying emotional states. To date, no system has been proposed that can robustly recognize emotions in naturally captured, spontaneous video of faces. We propose two advancements over state-of-the-art methods for this challenge: (1) a novel method, a marriage of perceptual psychology and image analysis, that prunes frames from large data sets to reduce memory cost by retaining significant frames in the same way the human visual system perceives motion in a scene, and (2) a new face-alignment technique that warps faces in such a way that facial structures are precisely aligned across all frames in a video, while the internal information of facial structures is left unmodified by the warping process. These two improvements are demonstrated to significantly improve emotion recognition rates over baseline and other state-of-the-art approaches on the challenging AVEC2011 video subchallenge dataset. This research is a major step towards empathetic computers that are sensitive to the emotional states of humans.
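The frame-pruning idea in advancement (1) — discarding frames that carry little perceptible motion while retaining significant ones — can be illustrated with a minimal sketch. This is not the authors’ actual psychologically-inspired algorithm; it is a simple inter-frame differencing approximation, and the `threshold` value is an invented placeholder:

```python
import numpy as np

def prune_frames(frames, threshold=0.05):
    """Keep the first frame, plus any frame whose mean absolute
    difference from the most recently *kept* frame exceeds `threshold`.

    `frames` is a sequence of grayscale images as float arrays in [0, 1].
    Illustrative only: stands in for the paper's perceptual pruning.
    """
    if not frames:
        return []
    kept = [frames[0]]
    for frame in frames[1:]:
        motion = np.mean(np.abs(frame - kept[-1]))
        if motion > threshold:  # significant change: retain this frame
            kept.append(frame)
    return kept

# A static clip with one sudden change: only 2 frames survive pruning.
static = np.zeros((4, 4))
changed = np.ones((4, 4))
clip = [static, static, static, changed, changed]
print(len(prune_frames(clip)))  # 2
```

Because redundant near-duplicate frames are dropped, memory cost scales with the amount of facial motion in the clip rather than its raw length, which is the point of the pruning step.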