Posts by: GGatheral

Rob Hamilton
Musical Sonification of Avatar Physiologies, Virtual Flight and Gesture

Rob Hamilton’s UDKOSC project streams game data from the Unreal Development Kit (UDK) over Open Sound Control (OSC) to external audio engines such as SuperCollider, ChucK or PD. The implications for dynamic music and audio in games are profound, and throughout his talk and demonstrations Rob showed how we can parameterise any actor that can generate data in a game engine, from the very big (herds of elephant-like creatures) to the very small (individual bones in a bird’s skeleton).
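To make the data flow concrete, here is a minimal sketch of a UDKOSC-style relay in Python using the python-osc library. The OSC addresses, ports and scaling below are illustrative assumptions, not UDKOSC's actual namespace: the relay receives an actor's location from the game engine and forwards a derived synthesis parameter to SuperCollider.

```python
# Hypothetical UDKOSC-style relay: game engine -> Python -> SuperCollider.
# The OSC addresses and the z-to-pitch mapping are invented for illustration;
# UDKOSC defines its own message namespace.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer
from pythonosc.udp_client import SimpleUDPClient

# sclang (SuperCollider's language client) listens on UDP port 57120 by default.
sc = SimpleUDPClient("127.0.0.1", 57120)

def on_actor_location(address, x, y, z):
    # Map the avatar's altitude to a pitch-like control value and forward it
    # to a responder defined on the SuperCollider side.
    pitch = 200.0 + max(0.0, z) * 0.5  # arbitrary linear mapping
    sc.send_message("/avatar/pitch", pitch)

dispatcher = Dispatcher()
dispatcher.map("/game/actor/location", on_actor_location)  # assumed address

# Listen for messages from the game engine on an arbitrary local port.
server = BlockingOSCUDPServer(("127.0.0.1", 8000), dispatcher)
server.serve_forever()
```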

Central to Rob’s presentation was the UDK-built Echo::Canyon project, a multi-user virtual environment in which performers at locations around the world can move avatars around a purpose-built environment, interacting with the landscape and its carefully positioned landmarks to create a rich and evolving soundscape.
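The talk summary doesn't document Echo::Canyon's actual mappings, but one plausible sketch of the landmark interaction, under the assumption that each landmark contributes a level control to the soundscape, is to derive a value from each avatar's distance to a landmark and stream it to the audio engine:

```python
# Hypothetical proximity-to-level mapping; landmark names, positions and
# OSC addresses are invented for illustration.
import math

from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 57120)  # audio engine's OSC port

LANDMARKS = {"canyon_arch": (120.0, 40.0, 10.0)}  # world coordinates

def update_soundscape(avatar_pos):
    """Map each avatar-to-landmark distance onto a 0..1 level control."""
    for name, pos in LANDMARKS.items():
        distance = math.dist(avatar_pos, pos)
        level = max(0.0, 1.0 - distance / 100.0)  # silent beyond 100 units
        client.send_message(f"/landmark/{name}/level", level)

# Called once per frame (or per avatar) as performers fly through the space.
update_soundscape((100.0, 35.0, 12.0))
```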

Robert Hamilton is a Ph.D. candidate in Computer-Based Music Theory and Acoustics at CCRMA, Department of Music, Stanford University.

A recording of the presentation (given via Skype) and Q&A is at the top of this post (or you can download it here).

Audio running time:
00h00m00s – 00h33m40s: Presentation
00h33m40s – 01h50m00s: Q&A

Presentation slides

This event and any presentations, handouts, software, documentation or materials made available at this event (‘Materials’) are not provided by and do not represent the opinion of Microsoft Corporation or its affiliates. Microsoft Corporation and its affiliates assume no responsibility for the content, accuracy or completeness of any Materials.

--
Posted by GGatheral

Christian Heinrichs
An Elephant Named Expressivity

Christian Heinrichs proposes new, expressive ways for sound designers to control procedural audio models. During his presentation he used a touchpad to generate x/y position, speed, touch area and pressure data for a model built in PD, playing back a wide variety of creaking door sounds. Mirroring the way film Foley artists work, game audio designers might use a controller like this to perform procedural audio Foley during gameplay sessions, generating sound effects for specific instances of events. Foley performance data could then be analysed by an AI system to generate a control layer, which would be used to perform in-game Foley as expressively as the original, in effect creating a kind of virtual Foley artist.
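As a rough sketch of that control scheme, the snippet below derives a speed estimate from successive touch positions and forwards all four data streams to the door model over OSC. The parameter addresses, port, and the assumption that the PD patch listens for OSC (e.g. via [netreceive] and [oscparse]) are illustrative, not Christian's actual patch:

```python
# Hypothetical touchpad-to-Pd bridge for the creaking-door model.
import math
import time

from pythonosc.udp_client import SimpleUDPClient

pd = SimpleUDPClient("127.0.0.1", 9001)  # port the Pd patch listens on

last = None  # (x, y, timestamp) of the previous touch sample

def on_touch(x, y, area, pressure):
    """Forward touchpad data to the door model, adding a speed estimate."""
    global last
    now = time.monotonic()
    speed = 0.0
    if last is not None:
        dt = now - last[2]
        if dt > 0:
            speed = math.hypot(x - last[0], y - last[1]) / dt
    last = (x, y, now)
    pd.send_message("/door/xy", [x, y])
    pd.send_message("/door/speed", speed)
    pd.send_message("/door/area", area)
    pd.send_message("/door/pressure", pressure)

# One touch sample; values normalised to 0..1 by the input driver.
on_touch(0.42, 0.77, 0.12, 0.6)
```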

Christian Heinrichs is a PhD researcher at the Centre for Digital Music, Queen Mary University of London.

Please have a look at the slides below. You can listen to the presentation using the player at the top of this post.

--
Posted by GGatheral
