Timothy R. Brick, PhD
PhD Quantitative and Cognitive Psychology
MS Computer Science and Engineering
I'm interested in conversation, specifically affect in conversation, and in the tools for studying it. I study:
- Affect transfer and other nonverbals in conversation
- Tools for tracking/classifying/analyzing/synthesizing conversation
- Statistical methods for analyzing longitudinal and time-series data
- Data mining techniques and statistical methods in general
- Simulation methods, specifically interactive avatars and human-robot interaction
Max-Planck-Institut für Bildungsforschung
(Max Planck Institute for Human Development)
Forschungsstipendiat (Postdoctoral Fellow)
University of Virginia
PhD Cognitive and Quantitative Psychology, 2011
Thesis Title: (Re)moving Parts: Towards a System for the Separation of Affective Movements in Facial Video
Primary Advisor: Dr. Steven Boker
University of Notre Dame
MS Computer Science and Engineering, 2007
Thesis Title: TIDE: A Timing-sensitive Incremental Discourse Engine
Primary Advisor: Dr. Matthias Scheutz
BA Psychology, 2002
BS Computer Science and Engineering, 2001
I'm interested in the way that people synchronize with each other in conversation. To that end, I'm involved in several research projects in which participants hold an unstructured videoconference conversation with someone they don't know. From there, we use image-processing techniques to measure the degree of synchronization and symmetry in the conversation.
The fun part about this is that we have the technology to modify the video stream in real time. We can change things like the apparent sex and apparent identity of the other participant. And the conversation can go on, with neither person realizing the modification is happening.
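For readers curious what "measuring synchronization" can look like in practice, here's a minimal sketch (my own illustration, not the lab's actual pipeline): the peak of the lagged correlation between two movement traces serves as a crude synchrony index. The function names (`lagged_corr`, `synchrony`) and the toy data are assumptions for demonstration only.

```python
import numpy as np

def lagged_corr(a, b, lag):
    """Pearson correlation of trace a against trace b shifted by `lag` samples."""
    if lag >= 0:
        x, y = a[lag:], b[:len(b) - lag]
    else:
        x, y = a[:lag], b[-lag:]
    return np.corrcoef(x, y)[0, 1]

def synchrony(a, b, max_lag=30):
    """Peak lagged correlation: a crude index of how strongly two
    movement traces are coupled, at any lag up to max_lag samples."""
    return max(lagged_corr(a, b, lag) for lag in range(-max_lag, max_lag + 1))

# Toy example: b is a copy of a delayed by 10 frames, so the peak
# correlation appears at lag 10 and is near 1.0.
rng = np.random.default_rng(0)
sig = rng.standard_normal(420)
a, b = sig[:400], sig[10:410]
score = synchrony(a, b, max_lag=15)
```

Real analyses of conversational movement use windowed versions of this idea, so that the lead-lag relationship can itself change over the course of the conversation.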
Understanding the Dynamics of Facial Expression
I've done a bit of work on automatically processing facial expressions, and on trying to convince people that dynamics and interactive context matter in the interpretation of expressions. There's already one publication on this subject, and since then I've been working to understand how emotional facial expressions are perceived and created. To that end, I've been collaborating with Viktor Müller and Dionysios Perdikis to use neurophysiological measures (EEG, specifically) to understand how the brain processes emotional expression.
I've also been studying the structure of emotional labels for facial expressions. In an ongoing study with Angela Staples and Steven Boker at UVa, we're working to figure out what that structure is. It really looks like facial expression can't be easily understood without context. For example, we've already shown in one paper that dynamics can help to identify facial actions. Now it looks like facial expression in conversation carries a lot of additional information: some about the person's internal state, but much of it about the dyadic context.
Simulating Facial Expressions
I've also been working with Andreas Brandmaier to develop a convincing generative model of facial expressions. We've tried a few basic linear and nonlinear models, but nothing is truly convincing yet. I have a demo or two--contact me if you'd like to see one. One remaining problem is speech movements, for example, so there's still work to be done on the:
Separation of Speech and Affect
Have you ever come across a photo of yourself where the expression on your face is completely awkward and strange--like you couldn't possibly have made that expression yourself? I have. What's happening is that speech movements and expression movements are mapped over top of each other. I think that in conversation, or when watching video, we filter the faster speech movements out of the slower emotional expressions, so we never really see the combination. But in a still frame, you can't see the speeds of things, so you can't separate them. My dissertation focuses on a technique for separating emotional expression and speech movements in facial movement data. I'm not interested in cleaning up pictures, of course; I'm interested in being able to analyze (and classify) emotion separately in conversational video. But it might help with the picture thing, too.
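As a toy illustration of the idea (not the dissertation method itself, which works on high-dimensional facial movement data): if speech movements really are faster than expression movements, then even a simple moving-average filter can split a one-dimensional movement trace into a slow "expression-like" part and a fast "speech-like" residual. Everything here, including the window size and the toy signal, is an assumption for demonstration.

```python
import numpy as np

def split_fast_slow(signal, window=15):
    """Split a 1-D movement trace into a slow component (a moving
    average, standing in for emotional expression) and a fast residual
    (standing in for speech movements)."""
    kernel = np.ones(window) / window
    slow = np.convolve(signal, kernel, mode="same")
    fast = signal - slow
    return slow, fast

# Toy trace: a slow "expression" wave plus a fast "speech" wiggle.
t = np.linspace(0, 2 * np.pi, 200)
trace = np.sin(t) + 0.2 * np.sin(25 * t)
slow, fast = split_fast_slow(trace)
```

The two components add back up to the original trace, and almost all of the fast wiggle ends up in the residual. Real facial data is far messier than this, which is exactly what makes the actual separation problem interesting.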
OpenMx: Free Statistical Software
OpenMx is intended to be the statistical development platform for the next twenty years or more. It's designed to do a lot of the things that Structural Equation Modelers like to do. More than that, it's intended to be easy for upcoming researchers to use to develop and implement new methods.
I'm the primary back-end developer on the OpenMx Project. Along with Steven Boker, Michael Neale, Michael Spiegel, Ryne Estabrook, and Hermine Maes, I'm a member of the core development team. My focus is primarily on the C back-end. OpenMx went to 1.0 in October 2010.
Parallax and Videoconference
One of the problems with videoconference is that the other person feels far away. Part of the reason is that they don't make and break eye contact correctly. Also, when you move around in the environment, everything around you shifts (for example, if you lean to the left, you'll see more of the left side of your monitor), but the image on the screen doesn't. I'm working on a project to help fix both of those things, to see what I can do to make videoconference seem more natural.
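To make the parallax point concrete, here's a toy geometric sketch (my own simplification, not the project's implementation): under a pinhole model, a virtual point beyond the screen should appear to shift on-screen when the head translates sideways, by an amount that depends on its depth.

```python
def onscreen_shift(head_dx, screen_dist, point_depth):
    """On-screen shift of a virtual point when the eye moves laterally.

    The eye starts at (0, 0) and moves to (head_dx, 0); the screen is
    the plane y = screen_dist; the virtual point sits at (0, point_depth),
    beyond the screen (point_depth > screen_dist). By similar triangles,
    the ray from eye to point crosses the screen plane at
    x = head_dx * (1 - screen_dist / point_depth).
    """
    return head_dx * (1.0 - screen_dist / point_depth)

# A 10 cm lean, viewing a screen 60 cm away:
near = onscreen_shift(0.10, 0.60, 0.60)  # point on the screen plane itself
far = onscreen_shift(0.10, 0.60, 1.20)   # point 1.2 m from the eye
```

A point on the screen plane never shifts, while a very distant point shifts by nearly the full head displacement. An ordinary flat video stream provides neither cue: everything in it behaves like a point painted on the screen plane, which is part of why the other person feels flat and far away.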
Synchronization in Dance
To test out some of the techniques we're using for the conversation studies, the lab first looked at dance. Dance is like conversation, but much more predictable: the semantic structure is simpler and the rhythm more regular. I've done some work on the influence of ambiguity on the dynamics of dance.
I also have some other research interests.
And some random academic nerdery.
I've taught in several different contexts over the years: as an adjunct teaching the introductory psychology sequence at UVa, as an undergraduate TA at the University of Notre Dame, and as a Red Cloud Volunteer high school teacher at Red Cloud Indian School. I've also designed, run, and taught courses and workshops here and there for academic reasons.
Graduate Statistics Instructor
In graduate school, I was asked to teach PSYC 771, part one of the introductory graduate statistics sequence in the Department of Psychology. My co-teacher, Ryne Estabrook, and I designed and implemented the curriculum, designed the labs, and taught the classes ourselves. If you're interested in slides or syllabi, please feel free to drop me a line.
Presentations and Talks
With a background in computer science, I tend to be one of the more tech-savvy researchers in any given bunch, so I am sometimes called upon to give introductions to some technical tools. For example, I've given talks on R, LaTeX, and OpenMx. I'm also a statistician, so I tend to give talks about things like longitudinal modeling and Structural Equation Modeling, as well. And about my research. And sometimes about robots, too.
At present, I am not posting example talks online. That will likely change in the future, but the last time I did it I had to spend a lot more time fielding questions about them than I'd like. If you're interested in any of the topics above, feel free to email me, and I'll be happy to share some slides.
Undergraduate Teaching Assistant
As an undergraduate, I was a teaching assistant under Kathleen Eberhard for her Psycholinguistics course. There, I graded, held office hours, and ran review sessions.
High School Teacher
After graduation, I moved to the Pine Ridge Indian Reservation in Pine Ridge, South Dakota, where I worked as a volunteer at Red Cloud Indian School. My responsibilities included everything from grant writing and administration to library management to tech support to driving a regular bus run.
While I was there, I taught four classes: Digital Imaging, Digital Moviemaking, Basic Web Design, and Photojournalism. Funds for the required equipment were provided by the Teca Oyate Waonspekiya Americorps National Service Grant, which I helped to write.