I'm interested in conversation, specifically affect in conversation, and in the tools for studying it. I study:
- Perception and action and their interactions
- Affect transfer and other nonverbals in conversation
- Tools for tracking/classifying/analyzing/synthesizing conversation
- Statistical methods for analyzing longitudinal and time-series data
- Data mining techniques and statistical methods in general
- Interactive avatars and human-robot interaction
Separation of Speech and Affect
Have you ever come across a photo of yourself where the expression on your face is something completely awkward and strange--like you couldn't possibly have ever made that expression yourself? I have. What's happening is that there are speech movements and expression movements that are mapped over top of each other. I think that in conversation or in watching video, we filter the faster speech movements out of the slower emotional expressions, so we never see the combination, really. But in a still frame, you can't see the speeds of things, so you can't separate them. My dissertation is focused on a technique for separating emotional expression and speech movements from facial movement data. I'm not interested in cleaning up pictures, of course. I'm interested in being able to analyze (and classify) emotion separately in conversational video. But it might help with the picture thing, too.
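A minimal sketch of the core idea, assuming a sampled facial-motion signal: because emotional expressions evolve slowly and speech movements are fast, a simple low-pass filter can split one signal into an expression-scale band and a speech-scale residual. This is only an illustration of the frequency-separation intuition, not the dissertation's actual method; the function name, window size, and test signal are all made up for the example.

```python
import numpy as np

def split_fast_slow(signal, fps=30, window_sec=0.5):
    """Split a facial-motion time series into a slow component
    (expression-scale) and a fast component (speech-scale) using
    a moving-average low-pass filter.

    Movements slower than roughly window_sec survive the averaging;
    faster movements are pushed into the residual.
    """
    n = max(1, int(round(window_sec * fps)))
    kernel = np.ones(n) / n
    slow = np.convolve(signal, kernel, mode="same")
    fast = signal - slow
    return slow, fast

# Synthetic demo: a 0.5 Hz "expression" drift plus 6 Hz "speech" jitter.
t = np.arange(0, 4, 1 / 30)                    # 4 seconds at 30 fps
expression = np.sin(2 * np.pi * 0.5 * t)
speech = 0.3 * np.sin(2 * np.pi * 6 * t)
slow, fast = split_fast_slow(expression + speech)
```

On the synthetic signal, the slow output tracks the expression component and the fast output tracks the speech component (away from the filter's edge effects); real facial data would of course call for a better-designed filter than a moving average.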
OpenMx: Free Statistical Software
OpenMx is intended to be the statistical development platform for the next twenty years or more. It's designed to do a lot of the things that Structural Equation Modelers like to do. More than that, it's intended to be easy for upcoming researchers to use to develop and implement new methods.
I'm the primary back-end developer on the OpenMx Project. Along with Steven Boker, Michael Neale, Michael Spiegel, Ryne Estabrook, and Hermine Maes, I'm a member of the core development team. My focus is primarily on the C back-end and the interface design.
Currently, I'm working on a fast and intuitive multilevel modeling framework for OpenMx, and on speeding up the existing models using analytic derivative computations.
Parallax and Videoconference
One of the problems with videoconference is that it feels like the person is far away. Part of the reason is that the other person doesn't make and break eye contact naturally. Also, when you move around in the environment, your view of everything around you shifts (for example, if you lean to the left, you'll see more of that side of your monitor), but the image on the screen doesn't. I'm working on a project to help fix both of those things, to see what I can do to make videoconference seem more natural.
We've put together a demo of the high-presence, low-bandwidth videoconference system that I presented in London in Spring 2009. The paper is also available. For this experiment, I'm collaborating with Dr. Steven Boker (my advisor) and Jeffrey R. Spies. The system is officially patent pending, thanks to the UVa Patent Foundation.
Automatic Classification and Generation of Facial Expression
As long as we're monkeying around with analyzing conversation, it seemed a waste not to take a shot at classifying facial expression. As it turns out, classification of facial expression benefits greatly from the addition of estimated information about the first and second derivatives of facial movement (that is, its speed and acceleration). Our latest publication on this project, presenting the first results from this line of research, appeared at the 2009 International Conference on Affective Computing and Intelligent Interaction (ACII 2009) in Amsterdam.
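To illustrate the derivative-feature idea (not the paper's actual estimator), here's a minimal sketch: given a time-by-features matrix of facial measurements, finite-difference estimates of velocity and acceleration can be stacked on as extra feature columns before classification. The function name and demo data are hypothetical.

```python
import numpy as np

def add_derivative_features(frames, dt=1 / 30):
    """Augment a (time x features) matrix of facial measurements with
    estimated first and second derivatives (speed and acceleration),
    stacked as additional columns for a downstream classifier."""
    vel = np.gradient(frames, dt, axis=0)   # first derivative over time
    acc = np.gradient(vel, dt, axis=0)      # second derivative over time
    return np.hstack([frames, vel, acc])

# Demo: 90 frames of two hypothetical landmark coordinates.
frames = np.column_stack([
    np.sin(np.linspace(0, np.pi, 90)),
    np.cos(np.linspace(0, np.pi, 90)),
])
features = add_derivative_features(frames)   # shape (90, 6)
```

Simple central differences like these are noisy on real motion-capture data; smoother local estimators are often used in practice, but the feature-stacking step looks the same.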
I'm interested in the way that people synchronize with each other in conversation. To that end, I'm involved in several research projects where participants are asked to participate in an unstructured videoconference conversation with someone they don't know. From there, we use image-processing technologies to measure the amount of synchronization and symmetry involved in the conversation.
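One standard way to quantify synchrony between two movement time series is windowed cross-correlation: slide a window along both series, scan a range of lags within each window, and record the peak correlation. The sketch below is only an illustration of that general technique under made-up parameters, not the lab's actual measurement pipeline.

```python
import numpy as np

def windowed_crosscorr(x, y, window=60, step=30, max_lag=15):
    """Peak cross-correlation between two movement series within
    sliding windows, scanning lags of y relative to x.
    Returns one peak correlation per window position."""
    peaks = []
    for start in range(0, len(x) - window - 2 * max_lag + 1, step):
        xs = x[start + max_lag : start + max_lag + window]
        best = -1.0
        for lag in range(-max_lag, max_lag + 1):
            ys = y[start + max_lag + lag : start + max_lag + lag + window]
            best = max(best, np.corrcoef(xs, ys)[0, 1])
        peaks.append(best)
    return np.array(peaks)

# Demo: y is the same movement trace as x, leading by five frames.
rng = np.random.default_rng(0)
base = np.cumsum(rng.standard_normal(300))   # random-walk "movement"
x = base[:-5]
y = base[5:]
peaks = windowed_crosscorr(x, y)
```

Because the demo's two traces are exact five-frame shifts of each other, every window's peak correlation is essentially 1; real conversational data shows peaks that wax and wane, which is exactly the dynamic of interest.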
The fun part about this is that we have the technology to modify the video stream in real time. We can change things like the apparent sex and apparent identity of the other participant, and the conversation can go on with neither person realizing the modification is happening. I'm working on expanding this to include manipulation of affective movements, like emotional expressions, as well.
For this experiment, I'm collaborating with Dr. Steven Boker, Jeffrey Cohn at Pittsburgh, Barry-John Theobald at the University of East Anglia, Simon Lucey at Carnegie-Mellon University, and Jeffrey R. Spies and Michael D. Hunter here at UVa.
Synchronization in Interaction
To test out some of the techniques we're using for the conversation studies, the lab first looked at dance. Dance is like conversation, but much more predictable: its semantic structure is simpler, and its rhythm is more regular. I've done some work on the influences of ambiguity on the dynamics of dance.
Hack Your Brain
Evidence from functional and structural analyses of brains is piling up, and the conclusion is clear: neuroplasticity of one sort or another continues even into old age. We adapt to our environment at every level, from behavior down to neurology, at least. And since we can choose our inputs, we can control (to some extent) how that adaptation happens. The sensory modification community has jumped on this bandwagon, and the "formal" scientific community is still catching up. But sensory augmentation and manipulation give us a new lever into understanding the processes of neuroadaptation and sensory integration.
I also have some other research interests.
I've taught in three different contexts, over the years: at the University of Notre Dame as an undergraduate TA, at Red Cloud Indian School as a Red Cloud Volunteer high school teacher, and at the University of Virginia as an Adjunct Professor.
Graduate Statistics Instructor
In graduate school, I was asked to teach PSYC 771, part one of the introductory graduate statistics sequence in the Department of Psychology. My co-teacher, Ryne Estabrook, and I designed and implemented the curriculum, designed the labs, and taught the classes themselves. Some examples of labs and presentation slides are available upon request.
Presentations and Talks
With a background in computer science, I tend to be one of the more tech-savvy graduate students, so I am sometimes called upon to give introductions to some technical tools. For example, I've given a short Introduction to LaTeX to a few folks here and there. There's also an (admittedly quite messy) page of LaTeX Resources for beginners. Feel free to use and abuse the presentations, code, and formats there, but please acknowledge the folks who made them (the Steve mentioned there is Dr. Steven M. Boker).
Undergraduate Teaching Assistant
As an undergraduate, I was a teaching assistant for Kathleen Eberhard's Psycholinguistics course. There, I graded, held office hours, and ran review sessions.
High School Teacher
After graduation, I moved to The Pine Ridge Indian Reservation in Pine Ridge, South Dakota. There, I worked as a volunteer at Red Cloud Indian School. As a volunteer, my responsibilities included everything from grant writing and administration to library management to tech support to driving a regular bus run.
While I was there, I taught four classes: Digital Imaging, Digital Moviemaking, Basic Web Design, and Photojournalism. Funds for the required equipment were provided by the Teca Oyate Waonspekiya Americorps National Service Grant.
Grant Writing Experience
I had the opportunity to write a few grants while I was working at Red Cloud Indian School. Red Cloud is an excellent cause, and I'm proud to say that the grants I've had a hand in writing have raised over a million dollars in funding, mostly to help make sure the kids out there had some access to computing equipment and training. The details of the individual grants are below, for those who speak grant-ese.
Beaumont Foundation of America Educational Institution Pilot Grant
Funded by Beaumont Foundation of America. August 2003--May 2004; Grant in kind: $45,000 Equipment Total Costs. Role: Grant Co-author (with Matthew Ehlman); Grant Administrator.
Teca Oyate Waonspekiya Americorps National Service Grant
Funded by the Corporation for National and Community Service. December 2003--November 2006; $633,000 Total Costs. Role: Grant Co-author (with Thomas Merkel, S.J.); Negotiator for final funding allocation; Interim Administrator.
Wakanyeja kin Wokiye Owicakiyapi 21st Century Learning Center
Funded by South Dakota Department of Education and Cultural Affairs; January 2004--December 2009; $600,000 Total Costs. Role: Grant Author.