Movement and Sound: A Path to Increasing Immersion in Digital Environments
Principal Investigator: Aina Braxton, Digital Future Lab, University of Washington Bothell
Excerpted from the invited poster presentation of the same name at the 21st International Symposium on Electronic Art (ISEA 2015)
ABSTRACT: Human-centered design typically begins with identifying a challenge or problem and proposing a solution; in game production these solutions can include new methods for conveying narratives, new game mechanics, and new interactions that make gaming scenarios more engaging. Our intention was to disrupt the traditional design process: instead of attempting to identify and solve problems, the team explored the capability of the Kinect itself to reflect participants' moving bodies and searched for potential movement patterns across groups. The study incorporates the Kinect motion capture device and Digital Future Lab's proprietary game sound effects to collect both quantitative and qualitative data on how people respond to sound.
- Explore opportunities to improve user immersion in mixed reality spaces by placing the moving body at the center of research and design instead of the equipment
- Determine if similarities exist in response patterns to audio cues across heterogeneous populations
- Create procedural movement representations that could potentially enhance other creative endeavors, such as live dance choreography
- If patterns are identified, explore ways they might contribute to new mixed reality interactions and game mechanics that incorporate contextual sensory feedback to increase participant presence in virtual and mixed reality spaces
Figure 1: Top row: 3D spatial data for each movement; middle row: infrared capture; bottom row: movements incorporated into choreography
We selected 24 audio cues for the test and grouped them by the original function defined by the studio's audio engineers; groupings included sound categories such as “power negative/positive”, character expressions, lasers, explosions, ambient sounds, object actions, loss sounds, win sounds, and user interface sounds (menu clicks and the like).
Test participants first completed a survey to determine what type of ‘mover’ they perceived themselves to be, then listened to each cue and responded with movement that seemed to capture or reflect its essence. Following the exercise, participants gave verbal descriptions of each sound. Each participant's movements were captured with a video camera and the Kinect motion capture device.
The team analyzed both the embodied and the verbally described interpretations of each sound to search for patterns; for the purposes of the study, a pattern was flagged when more than three movers from at least two different mover groups performed similar movements and used similar descriptors in response to an audio cue. Of the 24 audio cues used in the study, 10 produced patterned movement across mover groups. These patterns were choreographed, and work has begun on procedural art assets that explore the motion data through flocking particles, providing another perspective on the motion.
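The flagging rule above can be expressed as a small filter. The sketch below is illustrative only: the cue-to-response mapping and the movement labels (which stand in for the analysts' judgments of "similar movement plus similar descriptor") are hypothetical, not the lab's actual tooling.

```python
from collections import defaultdict

def flag_patterns(responses, min_movers=4, min_groups=2):
    """Flag audio cues whose responses form a cross-group pattern.

    `responses` maps cue -> list of (mover_group, movement_label)
    pairs, one pair per mover. A pattern is flagged when more than
    three movers (i.e. at least 4) drawn from at least two different
    mover groups share a movement label for the same cue.
    """
    flagged = []
    for cue, pairs in responses.items():
        by_label = defaultdict(list)
        for group, label in pairs:
            by_label[label].append(group)
        for label, groups in by_label.items():
            if len(groups) >= min_movers and len(set(groups)) >= min_groups:
                flagged.append((cue, label))
                break  # one qualifying pattern is enough to flag the cue
    return flagged
```

For example, a cue whose responses include four movers from two mover groups sharing a label would be flagged, while a cue with only two matching movers would not.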
Mixed reality platforms open new possibilities for game mechanics that incorporate audio to cue players to perform movement freezes, which in turn trigger projected procedural images in 3D space. Desired movement freezes can be hinted to the player in a separate projection. Movements can be rated on shape, ease, and quality, and subtle shifts in the background music, along with scoring visuals, can signal the player's progress.
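One way the "shape" of such a freeze might be rated is by comparing the player's captured skeleton against the hinted target pose. This is a purely illustrative sketch: the joint dictionary format, the coordinate units, and the tolerance value are assumptions, not the mechanic's actual implementation.

```python
import math

def pose_match_score(player_joints, target_joints, tolerance=0.3):
    """Rough pose-similarity score in [0, 1] from 3D joint positions.

    Both inputs map joint name -> (x, y, z), e.g. as a Kinect skeleton
    stream might provide in meters. The score decays linearly with the
    mean Euclidean distance between corresponding joints and clamps
    to 0 once the mean distance exceeds `tolerance`.
    """
    shared = player_joints.keys() & target_joints.keys()
    if not shared:
        return 0.0
    total = sum(math.dist(player_joints[j], target_joints[j]) for j in shared)
    mean = total / len(shared)
    return max(0.0, 1.0 - mean / tolerance)
```

A score of 1.0 means a perfect match; background music and scoring visuals could then be driven by this value as the player settles into the hinted pose.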
Figure 2: Environmental representations of flocking-type groups (lower row) paired with study motion data (upper row) and incorporated into choreography (lower middle)
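As a toy illustration of the flocking idea, each particle can be nudged toward a point sampled from the captured motion path. This is a simplified stand-in under assumed parameters (cohesion strength, jitter amount), not the lab's procedural asset pipeline.

```python
import random

def step_flock(particles, attractor, cohesion=0.05, jitter=0.02):
    """One update of a toy flocking-style particle system.

    `particles` is a list of mutable [x, y] positions; `attractor` is
    a 2D point sampled from captured motion data. Each particle drifts
    toward the attractor (cohesion) with a little random jitter, so
    the swarm loosely trails the recorded movement.
    """
    for p in particles:
        p[0] += cohesion * (attractor[0] - p[0]) + random.uniform(-jitter, jitter)
        p[1] += cohesion * (attractor[1] - p[1]) + random.uniform(-jitter, jitter)
    return particles
```

Calling this repeatedly with successive points from a recorded movement makes the particle cloud follow the mover's path, which is the kind of "another perspective on the motion" the procedural assets aim for.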
Study data will be evaluated with both augmented and virtual reality devices to determine whether movement patterns enhance participant presence in mixed reality and fully virtual spaces. Work will continue to be driven from the Digital Future Lab at the University of Washington Bothell under the direction of Aina Braxton. Contact Aina for more information.