Assistant Professor, Penn State University
Department of Human Development and Family Studies

About Me

I'm interested in the dynamic systems that underlie the way humans interact with their environment: not just the way our vision interacts with our movement, but the way our thoughts and feelings influence the way we and others think. My favorite example of these dynamics is nonverbal communication in conversation. Specifically, I'm interested in:

  • Rapport, affect transfer, and other nonverbals in conversation
  • Tools for tracking/classifying/analyzing/synthesizing conversation
  • Statistical methods for analyzing longitudinal and time-series data
  • Data mining techniques and statistical methods in general
  • Simulation methods using interactive avatars and robotics
  • Sensor fusion and sensor integration, especially in the area of sensory augmentation

Affiliations and Acronyms

  • Quantitative Development Group (QuantDev)
  • PSU Institute for Cyberscience (ICS)
  • Department of Human Development and Family Studies (HDFS)
  • College of Health and Human Development (HHD)

Research Projects

Video-conference Manipulation

I'm interested in the way that people synchronize with each other in conversation. To that end, I'm involved in several research projects in which participants hold an unstructured video-conference conversation with someone they don't know. From there, we use image-processing technology to measure the amount of synchronization and symmetry in the conversation.
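
Our actual image-processing pipeline isn't described here, but as a rough sketch of the kind of measure involved, a windowed cross-correlation over frame-level movement signals captures the basic idea. The function, window size, and frame rate below are illustrative assumptions, not the lab's real code:

    # Illustrative sketch only: quantify synchrony between two conversational
    # partners as the peak windowed cross-correlation of their per-frame
    # movement signals. Names and window sizes are assumptions, not our pipeline.
    import numpy as np

    def windowed_sync(motion_a, motion_b, window=120, max_lag=30):
        """Peak absolute correlation between two movement series in sliding windows.

        motion_a, motion_b: 1-D arrays of per-frame movement (e.g., head displacement).
        window: window length in frames (120 frames is about 4 seconds at 30 fps).
        max_lag: largest lead/lag, in frames, to search over.
        """
        scores = []
        for start in range(0, len(motion_a) - window, window // 2):  # 50% overlap
            a = motion_a[start:start + window]
            best = 0.0
            for lag in range(-max_lag, max_lag + 1):
                lo = start + lag
                if lo < 0 or lo + window > len(motion_b):
                    continue  # window falls off the edge of the recording
                b = motion_b[lo:lo + window]
                if a.std() == 0 or b.std() == 0:
                    continue  # no movement in this window; correlation undefined
                r = np.corrcoef(a, b)[0, 1]  # Pearson correlation at this lead/lag
                best = max(best, abs(r))
            scores.append(best)
        return np.array(scores)  # one synchrony score per window

    # Toy usage: two partially coupled random-walk "movement" signals.
    rng = np.random.default_rng(0)
    a = np.cumsum(rng.normal(size=2000))
    b = 0.6 * np.roll(a, 15) + 0.4 * np.cumsum(rng.normal(size=2000))
    print(windowed_sync(a, b).mean())

Each window gets a score near 1 when the two partners' movements rise and fall together (at some small lead or lag), and near 0 when they don't.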

The fun part is that we have the technology to modify the video stream in real time. We can change things like the apparent sex and apparent identity of the other participant, and the conversation can continue with neither person realizing that the modification is happening.

Data, Analysis, and Privacy

With a background in computing, I'm very interested in data, and I'm a bit paranoid about privacy. As a scientist, though, I ask people to trust me with data about themselves all the time. That's a lot of responsibility. Modern analytic practice means the data are collected in one place (which makes them less secure), and privacy then requires that the data not leave that place (which is bad for analysis). Setting privacy and scientific concerns against each other may be the wrong way to go about this.

I'm working on the MID/DLE project as a proposal to stop setting science and privacy against each other. What if, instead of collecting data, we left our measurements in the care of the people we measured? They would have access to their own data at all times, and privacy would be under their control. All we need is a privacy-preserving way to analyze the data.
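
MID/DLE itself is still a proposal, so the sketch below is only a minimal illustration of the general idea, assuming each participant's device can run a small piece of analysis code: raw measurements stay on the device, and the analyst only ever sees low-dimensional summaries. The function names and the simple mean/variance example are mine, not part of any MID/DLE specification:

    # Minimal sketch of the general idea (not the MID/DLE protocol itself):
    # each participant's device holds its own raw measurements and returns only
    # summary statistics; the analyst sees aggregates, never the raw data.
    import numpy as np

    def local_summary(raw_measurements):
        """Runs on the participant's own device: reduce raw data to sufficient
        statistics for a pooled mean/variance (count, sum, sum of squares)."""
        x = np.asarray(raw_measurements, dtype=float)
        return len(x), x.sum(), (x ** 2).sum()

    def pooled_estimate(summaries):
        """Runs on the analyst's side: combine per-person summaries into a
        pooled mean and variance without ever touching raw measurements."""
        n = sum(s[0] for s in summaries)
        total = sum(s[1] for s in summaries)
        total_sq = sum(s[2] for s in summaries)
        mean = total / n
        var = total_sq / n - mean ** 2
        return mean, var

    # Toy usage: three "devices" each summarize their own data locally.
    rng = np.random.default_rng(1)
    devices = [rng.normal(loc=5.0, scale=2.0, size=200) for _ in range(3)]
    print(pooled_estimate([local_summary(d) for d in devices]))

A real system would need more than this (adding noise to the summaries for formal privacy guarantees, for example), but the division of labor is the point: the analysis travels to the data rather than the data traveling to the analyst.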

Understanding the Dynamics of Facial Expression

I've done some work on automatically processing facial expressions and on trying to convince people that dynamics and interactive context matter in the interpretation of expressions. There's already one publication on this subject, and since then I've been working to understand how emotional facial expressions are perceived and produced. To that end, I've been collaborating with Viktor Müller and Dionysios Perdikis to use neurophysiological measures (EEG, specifically) to understand how the brain processes emotional expression.

I've also been studying the structure of emotional labels for facial expressions. In an ongoing study with Angela Staples (now at IU) and Steven Boker (at UVa), we're working to figure out what that structure is. It really looks like facial expression can't be easily understood without context. For example, we've already shown in one paper that dynamics can help to identify facial actions. Now it looks like facial expression in conversation carries a lot of additional information, some of it about the person's internal state, but much of it about the dyadic context.

Understanding Interaction

Some conversations go well. They just seem to flow, and everybody seems to understand each other. Other conversations seem awkward, stilted, and strange, and seem to have more misunderstandings. I want to know why. The first step toward that is understanding what's actually happening, moment to moment, in the interaction itself.

Simulating Facial Expressions

I've also been working with Andreas Brandmaier to develop a convincing generative model of facial expressions. We've tried a few basic linear and nonlinear models, but we don't have anything that's truly convincing yet. I have a demo or two--contact me if you'd like to see one. But it isn't solved yet: there's a problem with speech movements, for example, so there's still work to be done.
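
For a concrete picture of what a "basic linear model" means here, the sketch below fits a principal-component basis to tracked facial-landmark frames and samples new configurations from it. It's an illustrative stand-in under my own simplifying assumptions, not the model Andreas and I are actually working on:

    # Illustrative sketch of a basic linear generative model of facial expression:
    # fit a principal-component basis to flattened landmark configurations, then
    # synthesize new frames by sampling component scores. Not our actual model.
    import numpy as np

    def fit_linear_face_model(frames, n_components=10):
        """frames: (n_frames, n_landmarks * 2) array of flattened landmark coordinates."""
        mean = frames.mean(axis=0)
        centered = frames - mean
        # SVD gives an orthonormal expression basis (rows of vt) and per-component scale.
        u, s, vt = np.linalg.svd(centered, full_matrices=False)
        basis = vt[:n_components]
        scale = s[:n_components] / np.sqrt(len(frames))
        return mean, basis, scale

    def sample_expression(mean, basis, scale, rng):
        """Generate one synthetic landmark configuration from the linear model."""
        scores = rng.normal(size=len(scale)) * scale
        return mean + scores @ basis

    # Toy usage with random "landmark" data standing in for tracked video frames.
    rng = np.random.default_rng(2)
    frames = rng.normal(size=(500, 68 * 2))   # e.g., 68 landmarks with x, y coordinates
    mean, basis, scale = fit_linear_face_model(frames)
    print(sample_expression(mean, basis, scale, rng).shape)

Models like this capture the overall covariance of the face, but they tend to miss exactly the kind of structure that matters here, like the timing and fast, structured movements of speech.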

OpenMx: Free Statistical Software

OpenMx is intended to be the statistical development platform for the next twenty years or more. It's designed to do the things that structural equation modelers like to do and, more than that, to be easy for up-and-coming researchers to use to develop and implement new methods.

I'm one of the primary developers of OpenMx and a member of the core development team, along with Steven Boker, Michael Neale, Michael Spiegel, Ryne Estabrook, and Hermine Maes. OpenMx 2.0 will be released any day now.

I also have some other research interests, some older projects, and some random academic nerdery.

Contact Info

Timothy R. Brick
Penn State University
409 BBH Building
University Park, PA 16802
USA
Office: (+1) (814) 865-4868
email: tbrick at psu dot edu