Reporting on daylong science events is an exercise in deciding what to leave out, and the recent two-day speaker series I attended, "The Science of the Arts: Perceptual Neuroscience and Aesthetics," was no exception. More than three dozen renowned scientists (including European Dana Alliance member Semir Zeki and Dana Alliance members Solomon Snyder and Kay Redfield Jamison) and artists (Pat Metheny, Leon Fleisher, Marin Alsop, Jim Olson) gave deep insight into how they think our brains work. I had space for only a few of them in my story, so I concentrated on music, but the talks on art, architecture, sculpture, form, dance, hearing, mood, and creativity also provided a lot of food for thought. I hope the Johns Hopkins School of Medicine’s Brain Science Institute, which organized the events, will make recordings of the sessions available.
A theme of the program was how—after millennia of asking questions like "how do we make art?" and "why?"—we finally are creating the technology that might help us get to answers. Brain imaging is one such tool, but another I didn't know about is motion-capture software.
Neuroscientist Amy Bastian of Hopkins demonstrated a program by BioMotionLab, the BMLwalker, that simulates different people's strides:
Try it: Slide the controls to make the figure's stride more female or male, more like a heavier person's or a lighter one's, even happy or sad. Use the round arrows to turn the figure.
The software also makes a point that the visual specialists had made earlier in the day: Our sensory systems make a lot of assumptions and predictions from a "flow" of information that is less like a stream and more like Morse code. The differences in the model's strides are very evident, even though the model is made of only 15 little dots. We can see a "person walking" from only a small amount of information.
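The core idea behind a point-light walker like the BMLwalker can be sketched in a few lines: the figure is just a short list of marker positions, and an attribute slider moves the pose along an axis between two templates. This is a minimal illustration, not the actual BMLwalker method; the marker coordinates and the `blend` helper below are invented for the example, and the real software derives its axes statistically from motion-capture recordings of many walkers.

```python
def blend(pose_a, pose_b, t):
    """Linearly interpolate two poses, each a list of (x, y) markers.

    t = 0.0 gives pose_a, t = 1.0 gives pose_b; values in between
    act like a slider between the two templates.
    """
    return [(ax + t * (bx - ax), ay + t * (by - ay))
            for (ax, ay), (bx, by) in zip(pose_a, pose_b)]

# Two hypothetical 5-marker templates (head, shoulders, hips) --
# a real point-light walker would use around 15 markers per frame.
pose_a = [(0.0, 1.8), (-0.2, 1.5), (0.2, 1.5), (-0.15, 1.0), (0.15, 1.0)]
pose_b = [(0.0, 1.8), (-0.25, 1.5), (0.25, 1.5), (-0.1, 1.0), (0.1, 1.0)]

# Slider at the midpoint: each marker sits halfway between templates.
midpoint = blend(pose_a, pose_b, 0.5)
print(midpoint[1])
```

Animating a sequence of such frames is what produces the striking impression of a whole person from a handful of dots.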
"We use motion capture to look for things that are invariant," Bastian said, positing that those things are controlled by the brain. "It's a concert of brain areas" that coordinates movement into the smooth moves most of us can produce; if one fails, due to stroke or other illness, another is unlikely to easily rewire.
Choreographer Jonah Bokaer uses software that combines motion capture and dance to experiment with the body's shapes and angles while creating his dances. The software helps him see the range of movements of separate joints without assuming that one joint must influence another. For example, you might think a hand movement must initiate from the shoulder, through the elbow to the hand, but in one of his dances, he seems to initiate the movement from his hip.
Watching the dance mannequin go through a random sequence of movements felt a little disturbing to me; because the software treats all motion as possible, the figure would run its arm through its chest and cross its legs so tightly my legs hurt in sympathy. Bokaer builds and recomposes the patterns of movement online for weeks, then goes into the studio to revise them further.
He played us a video of one of his software-suggested pieces, "False Start" (see one version on YouTube at the bottom of this post), which is meant to express a sense of pain. He also is working with a group of dancers ages 24 to 72 to see how this method of thinking about choreography can work for all body types and ranges of motion.
Scott Grafton, director of the University of California at Santa Barbara Imaging Center, uses brain imaging, motion capture, and games like Dance Dance Revolution to study how people learn movements and how they organize them into goal-oriented action. He was part of Dana's Arts & Cognition research, and wrote an essay for Cerebrum last year, "What Can Dance Teach Us about Learning?"
During the session, he played us brief video clips of dancers. "Emotion is embedded in the movement… We all have a clear sense of her emotion from just five seconds of dance," and it did seem true to me. By collecting these snips of emotion, he has built up a library of "intentional states of the dancer" that he shows to people as their brains are being scanned to locate and compare their emotional responses.
"You don't need much info in order to extract emotional content," Bastian agreed. "The brain is very, very good at that."