Paul Weir – Never Ending Music

We were very pleased to have procedural audio veteran and PANow regular, Paul Weir, speaking at the May meetup.

Paul has been involved in some groundbreaking generative music projects for games and retail spaces over the past 20 years. He began his talk with an overview of his work to date, discussing the pros and cons of procedural audio, some of the challenges you might meet, and how the technology has progressed over time. Following on from that, we listened to audio examples of his works and took a look at some of the custom software used.

His work for retail spaces is often driven by the client’s need to stimulate a specific mood in the customer. Paul spoke about how he approaches each brief: weaving together location recording, sound design, system design, and finally on-site installation of the standalone music systems, in places as diverse as banks, airports, outdoor public spaces, and high-end department stores.

Video of the presentation and Q&A is below. (The video framing is off-centre for the first couple of minutes).

Paul Weir is an Audio Director, Composer and Sound Designer, currently working as an Audio Director with Microsoft and on Hello Games’ procedural sci-fi game, No Man’s Sky.

Presentation

 

Presentation slides

 

This event and any presentations, handouts, software, documentation or materials made available at this event (‘Materials’) are not provided by and do not represent the opinion of Microsoft Corporation or its affiliates. Microsoft Corporation and its affiliates assume no responsibility for the content, accuracy or completeness of any Materials.

Anthony Prechtl – A musical feature based approach to automatic music generation in computer games

Anthony Prechtl was at April’s event to talk about his current research at the Open University. He is developing generative music software for games that uses run-time data to change a variety of musical features.

Anthony’s Unity demo, Escape Point, is a first-person puzzle game in which the player wanders around a 3D maze also inhabited by an AI enemy. He demonstrated the game first without music, then with a static score, and finally with a dynamic score (see video below, far right). For the dynamic score, the intensity of the music increased as the distance between player and enemy decreased: a distortion DSP was applied to the synth parts, the harmony transitioned towards minor scales, and the tempo and volume rose. The net result was a feeling of tension that ebbed and flowed with the enemy’s distance.
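
As a rough illustration of this kind of mapping (and not Anthony’s actual implementation), here is a minimal Python sketch in which a normalised player–enemy distance drives several musical features at once; the feature names and ranges are invented for the example.

```python
def music_params(distance, max_distance=50.0):
    """Map player-enemy distance to a set of musical feature values.

    Illustrative only: the feature names and ranges are invented,
    not taken from Anthony Prechtl's Escape Point implementation.
    """
    # Normalise distance so that 0.0 = enemy adjacent, 1.0 = far away.
    d = max(0.0, min(distance / max_distance, 1.0))
    intensity = 1.0 - d  # closer enemy -> higher intensity

    return {
        "distortion_amount": intensity,           # dry/wet mix, 0..1
        "mode": "minor" if intensity > 0.5 else "major",
        "tempo_bpm": 90 + 60 * intensity,         # 90..150 bpm
        "volume_db": -12 + 9 * intensity,         # -12..-3 dB
    }

# Example: enemy 10 units away in a 50-unit maze.
print(music_params(10.0))
```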

Video of the presentation, and audio from the discussion that followed, can be found below.

Anthony Prechtl is a PhD candidate in Computing at the Open University.

Presentation
Q&A and Discussion
Demo video

 

 

This event and any presentations, handouts, software, documentation or materials made available at this event (‘Materials’) are not provided by and do not represent the opinion of Microsoft Corporation or its affiliates. Microsoft Corporation and its affiliates assume no responsibility for the content, accuracy or completeness of any Materials.

Leonard J Paul – Working and Playing with Procedural Audio

It was a pleasure to welcome procedural audio pioneer, Leonard Paul, to February’s meetup! Leonard shared some of the highlights of his procedural audio career to date, and talked in depth about his early ambitions to develop tools for videogame integration, using Pure Data to build prototypes that might one day emerge in a AAA title. He went on to discuss the current state of play, looking at the movers and shakers in the field and where they could be heading next, before leading us into a Q&A session and some open discussion.

During a demo of his music and SFX systems for the educational game, Sim Cell, he explained how he used oscillators, variable delay lines and granular synthesis patches in Pure Data to generate a dynamic soundtrack that could react to different game states, as well as produce the more routine sound effects used for GUI navigation and spacecraft propulsion.
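
To give a flavour of one of those building blocks, here is a minimal Python sketch of a time-varying delay line with linear interpolation, loosely analogous to PD’s vd~ object. It is an illustration of the technique only, not Leonard’s actual patch.

```python
import numpy as np

def variable_delay(signal, delay_samples, max_delay=2048):
    """Apply a time-varying delay (in samples) to a mono signal.

    A simple fractional delay line with linear interpolation --
    one building block of the kind mentioned in the talk.
    """
    buf = np.zeros(max_delay)
    out = np.empty_like(signal)
    write = 0
    for n, x in enumerate(signal):
        buf[write] = x
        d = min(delay_samples[n], max_delay - 2)
        read = (write - d) % max_delay
        i = int(read)
        frac = read - i
        out[n] = (1 - frac) * buf[i] + frac * buf[(i + 1) % max_delay]
        write = (write + 1) % max_delay
    return out

# A 220 Hz tone whose delay time is swept slowly, producing a
# flanger/vibrato-like effect.
sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 220 * t)
sweep = 200 + 150 * np.sin(2 * np.pi * 0.5 * t)  # 200 +/- 150 samples
wet = variable_delay(tone, sweep)
```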

Videos of each section of the talk can be found below, and if you want to try out any of Leonard’s Pd patches for yourself, you can download them here! [right click+save as]

Leonard Paul is a composer, sound designer and educator based in Vancouver, Canada. He runs the School of Video Game Audio.

Main Presentation
Q&A
Music Demo
SFX Demo

 

Presentation slides

 

This event and any presentations, handouts, software, documentation or materials made available at this event (‘Materials’) are not provided by and do not represent the opinion of Microsoft Corporation or its affiliates. Microsoft Corporation and its affiliates assume no responsibility for the content, accuracy or completeness of any Materials.

Jorge Garcia – Towards Procedural Audio software architectures and pipelines: chasing monsters with UnityOSC

Jorge Garcia is a PANow regular and we were very happy to have him at the front of the room for November’s meetup! His presentation looked at some of the current challenges and opportunities when implementing and controlling procedural audio models in games. He showed how the Open Sound Control (OSC) protocol can be used to establish communication between game engines and audio patching environments like Pure Data, and went on to discuss his own open source implementation of OSC for Unity3D. Since 2011 UnityOSC has been used to build dozens of community-driven prototypes, demos and projects, many of which can be seen in the slides below.
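
UnityOSC itself is a C# library for Unity, but the shape of the traffic is easy to show. Here is a minimal sketch of the sending side using the python-osc package as a stand-in; the address patterns and port are made up for the example.

```python
# pip install python-osc
from pythonosc.udp_client import SimpleUDPClient

# Illustrative endpoint: a Pure Data patch listening on port 9001,
# e.g. via [udpreceive] plus an OSC-unpacking external such as
# mrpeach's [unpackOSC].
client = SimpleUDPClient("127.0.0.1", 9001)

# Send game-state parameters as OSC messages.
client.send_message("/player/speed", 4.2)
client.send_message("/monster/distance", [12.5, 1])  # distance, in-sight flag
```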

Jorge Garcia is an Audio R&D Programmer with FreeStyleGames/Activision.

 Presentation slides

 

This event and any presentations, handouts, software, documentation or materials made available at this event (‘Materials’) are not provided by and do not represent the opinion of Microsoft Corporation or its affiliates. Microsoft Corporation and its affiliates assume no responsibility for the content, accuracy or completeness of any Materials.

Heavy – Martin Roth, Joe White, Andy Farnell


Heavy: A Procedural Audio Development Workflow: generating DSP code from Pure Data for integration into a Wwise/Unreal environment

Martin Roth, Joe White and Andy Farnell presented different aspects of their new procedural audio workflow, Heavy. Addressing many of the common concerns held by audio designers and programmers, they unveiled a workflow that can generate highly optimised C code from (though not limited to) Pure Data synthesis patches, and seamlessly deploy the results in a UE4 game environment as Wwise plugins.
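
As a rough sketch of what it looks like to drive such generated code from a host process, the following Python snippet loads a hypothetical compiled patch via ctypes. The library name and function signatures approximate a Heavy-style C API and are illustrative, not the verbatim interface.

```python
import ctypes
import numpy as np

# Assumed: a shared library compiled from the Pd fire patch.
lib = ctypes.CDLL("./libheavy_fire.so")

# Illustrative signatures, modelled loosely on a Heavy-style API:
# a constructor taking a sample rate, and a block-process function.
lib.hv_fire_new.restype = ctypes.c_void_p
lib.hv_fire_new.argtypes = [ctypes.c_double]
lib.hv_processInline.argtypes = [
    ctypes.c_void_p,
    ctypes.POINTER(ctypes.c_float),
    ctypes.POINTER(ctypes.c_float),
    ctypes.c_int,
]

ctx = lib.hv_fire_new(48000.0)

# Process one block of audio.
n = 512
inp = np.zeros(n, dtype=np.float32)
out = np.zeros(n, dtype=np.float32)
lib.hv_processInline(
    ctx,
    inp.ctypes.data_as(ctypes.POINTER(ctypes.c_float)),
    out.ctypes.data_as(ctypes.POINTER(ctypes.c_float)),
    n,
)
# 'out' now holds one block of procedurally generated fire audio.
```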

You can watch a video of the presentation below!

The PD patch, Wwise plugin and Wwise project files used in the fire demonstration can be downloaded here.

 Presentation slides

Thanks to Paul Weir for hosting this event.

 

This event and any presentations, handouts, software, documentation or materials made available at this event (‘Materials’) are not provided by and do not represent the opinion of Microsoft Corporation or its affiliates. Microsoft Corporation and its affiliates assume no responsibility for the content, accuracy or completeness of any Materials.

Ignacio Pecino – Spatial and Kinematic Models for Procedural Audio in 3D Virtual Environments

Ignacio Pecino gave a demo of some recent research in procedural audio, and talked specifically about his use of spatial data in simulated physical systems such as cellular automata (‘Life’) to drive sonification models in SuperCollider.

In his Apollonian Gasket simulation, he takes data gathered from the motion of the component discs when they are made to spin like coins, and uses it to dynamically modify parameters in a SuperCollider SynthDef.

Using flocking behaviour algorithms in another simulation, ‘Boids’, he generates complex, evolving soundscapes that have a highly satisfying correlation to the movement of a flock of birds in a 3D virtual environment.
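
As a toy illustration of the general idea (not Ignacio’s SuperCollider implementation), the Python sketch below runs a stripped-down flocking simulation, reduces the flock to a few statistics, and maps those onto invented synthesis parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
pos = rng.uniform(-10, 10, (30, 3))   # 30 boids in 3D space
vel = rng.uniform(-1, 1, (30, 3))

def step(pos, vel, dt=0.1):
    """One simplified boids update: cohesion and alignment only."""
    centre = pos.mean(axis=0)
    cohesion = (centre - pos) * 0.05         # steer towards the centroid
    avg_heading = vel.mean(axis=0)
    alignment = (avg_heading - vel) * 0.05   # match neighbours' velocity
    vel = vel + cohesion + alignment
    return pos + vel * dt, vel

def flock_to_synth(pos, vel):
    """Map flock statistics to made-up synthesis parameters."""
    spread = pos.std()                            # how dispersed the flock is
    speed = np.linalg.norm(vel, axis=1).mean()
    return {
        "freq_hz": 200 + 40 * pos.mean(axis=0)[1],  # flock height -> pitch
        "grain_density": 5 + 50 / (1 + spread),     # tight flock -> dense grains
        "amp": min(1.0, 0.1 + 0.2 * speed),
    }

for _ in range(100):
    pos, vel = step(pos, vel)
print(flock_to_synth(pos, vel))
```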

We look forward to hearing more sonifications on his next visit!


Ignacio Pecino is a PhD candidate in Electroacoustic Music Composition at NOVARS. The paper on which this presentation is based can be viewed here.

 Presentation slides

 

This event and any presentations, handouts, software, documentation or materials made available at this event (‘Materials’) are not provided by and do not represent the opinion of Microsoft Corporation or its affiliates. Microsoft Corporation and its affiliates assume no responsibility for the content, accuracy or completeness of any Materials.

Joe White – Tannhäuser PD


Tannhäuser is the PD compiler project by Joe White and Martin Roth. Joe demonstrated how a PD patch is parsed and converted into clean, newly generated C++ code, ready for use in systems such as audio middleware (Wwise plugins) and hardware effects units (the OWL pedal). From a game audio angle, sound designers could develop synthesis or effects patches in Pure Data and quickly convert them to Wwise plugins for immediate integration into a game project. This could revolutionise how game audio designers approach their work, and we’re looking forward to Joe’s next visit (hopefully with a Wwise plugin demo!) in the coming months.
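
To illustrate just the first stage Joe described, here is a toy Python sketch that parses the plain-text PD patch format into objects and connections; a real compiler such as Tannhäuser would then type-check this graph and emit C++. The patch below is simply a 440 Hz oscillator scaled into [dac~].

```python
PATCH = """\
#N canvas 0 0 450 300 10;
#X obj 50 50 osc~ 440;
#X obj 50 100 *~ 0.1;
#X obj 50 150 dac~;
#X connect 0 0 1 0;
#X connect 1 0 2 0;
#X connect 1 0 2 1;
"""

objects, connections = [], []
for record in PATCH.replace("\n", " ").split(";"):
    tokens = record.split()
    if tokens[:2] == ["#X", "obj"]:
        # "#X obj <x> <y> <name> <args...>"
        objects.append({"name": tokens[4], "args": tokens[5:]})
    elif tokens[:2] == ["#X", "connect"]:
        # "#X connect <src> <outlet> <dst> <inlet>"
        src, outlet, dst, inlet = map(int, tokens[2:6])
        connections.append((src, outlet, dst, inlet))

print(objects)      # [{'name': 'osc~', 'args': ['440']}, ...]
print(connections)  # [(0, 0, 1, 0), (1, 0, 2, 0), (1, 0, 2, 1)]
```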

Later in the evening we discussed (and got very excited about) the synthesis tools revealed in Alastair MacGregor’s recent GDC talk; if you haven’t seen it, I recommend you check it out now!

Joe White is a software developer at ROLI in London. You can find out more about Tannhäuser here.

An audio recording of the presentation and discussion is coming soon.

 Presentation slides

 

This event and any presentations, handouts, software, documentation or materials made available at this event (‘Materials’) are not provided by and do not represent the opinion of Microsoft Corporation or its affiliates. Microsoft Corporation and its affiliates assume no responsibility for the content, accuracy or completeness of any Materials.

Guillaume Le Nost – An Overview of the AudioGaming Procedural Audio Plugin Suite

We were very pleased to welcome Guillaume Le Nost, co-founder of AudioGaming, into the PANow discussions. AudioGaming’s tools have been used most notably on Quentin Tarantino’s Django Unchained, and their client list includes studios such as Soundelux, Lucasfilm and Ubisoft, as well as award-winning sound designers.

Guillaume took us through some of the history of AudioGaming and described their approach to procedural audio as a mix of real-time synthesis and samples, a ‘best of both worlds’ solution that allows them to adopt whichever technique works best for a given problem. A physical modelling approach, although perhaps more analytically accurate, won’t always achieve the best results, and Guillaume proposes the use of ‘physically informed models’ such as the AudioWind plugin, which attributes its highly realistic temporal behaviours to wind data gathered from the French national meteorological service.
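
As a very rough sketch of the ‘physically informed’ idea, the Python snippet below shapes white noise with a slowly varying gust signal standing in for recorded wind-speed data; AudioWind’s actual models and data handling are of course far more sophisticated.

```python
import numpy as np

sr = 44100
seconds = 5
n = sr * seconds
noise = np.random.default_rng(0).standard_normal(n)

# A slow pseudo-gust envelope; in a physically informed model this
# trajectory would come from (or be modelled on) real wind-speed data.
t = np.arange(n) / sr
gust = (0.5 + 0.3 * np.sin(2 * np.pi * 0.2 * t)
            + 0.2 * np.sin(2 * np.pi * 0.07 * t + 1.0))

# One-pole low-pass whose cutoff follows the gust strength, so that
# loudness and brightness rise and fall together.
out = np.zeros(n)
y = 0.0
for i in range(n):
    cutoff = 100 + 900 * gust[i]                 # roughly 100..1000 Hz
    a = 1.0 - np.exp(-2 * np.pi * cutoff / sr)   # one-pole coefficient
    y += a * (noise[i] - y)
    out[i] = y * gust[i]
```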

As the discussion moved into more in-depth technical areas, we were lucky enough to have lead developer Chungsin Yeh at the other end of a Skype connection in Paris. Chungsin devoured questions on spectral granulation, methods to synthesise transients, and approaches to analysing physical engine data for the AudioMotors plugin.
Thanks to Guillaume and Chungsin for a fascinating presentation!

Guillaume Le Nost is currently working on audio research projects as Director of Lionfish Audio Ltd in London. You can find out more about AudioGaming tools here.

A recording of the presentation and discussion is at the top of this post.

Audio running time:
00h00m – 00h33m: Presentation
00h33m – 01h53m: Discussion

 Presentation slides

 

This event and any presentations, handouts, software, documentation or materials made available at this event (‘Materials’) are not provided by and do not represent the opinion of Microsoft Corporation or its affiliates. Microsoft Corporation and its affiliates assume no responsibility for the content, accuracy or completeness of any Materials.


Rob Hamilton – Musical Sonification of Avatar Physiologies, Virtual Flight and Gesture

Rob Hamilton’s UDKOSC project takes game data from UDK and uses it to control external audio engines such as SuperCollider, ChucK or PD. The implications for dynamic music and audio for games are profound and throughout his talk and demonstrations Rob showed how we can parameterise any actor that can generate data in a game engine, from the very big (herds of elephant-like creatures) to the very small (individual bones in a bird’s skeleton).
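
For a feel of the receiving end of such a pipeline, here is a minimal sketch using the python-osc package; the address pattern and port are illustrative, not UDKOSC’s own scheme.

```python
# pip install python-osc
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def on_actor(address, x, y, z, speed):
    # In a real pipeline these values would drive synthesis parameters
    # in SuperCollider, ChucK or PD; here we just print them.
    print(f"{address}: pos=({x:.1f}, {y:.1f}, {z:.1f}) speed={speed:.2f}")

dispatcher = Dispatcher()
dispatcher.map("/actor/position", on_actor)

server = BlockingOSCUDPServer(("127.0.0.1", 8000), dispatcher)
server.serve_forever()  # blocks, handling messages as the game sends them
```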

Central to Rob’s presentation was the UDK-built Echo::Canyon project, a multi-user virtual environment in which performers at locations around the world can move avatars around a purpose-built environment, interacting with the landscape and its carefully positioned landmarks to create a rich and evolving soundscape.

Robert Hamilton is a Ph.D. candidate in Computer-based Music Theory and Acoustics at CCRMA, Department of Music, Stanford University.

A recording of the presentation (given via Skype) and Q&A is at the top of this post (or you can download it here).

Audio running time:
00m00s – 33m40s: Presentation
33m40s – 01h50m00s: Q&A

 Presentation slides

 

This event and any presentations, handouts, software, documentation or materials made available at this event (‘Materials’) are not provided by and do not represent the opinion of Microsoft Corporation or its affiliates. Microsoft Corporation and its affiliates assume no responsibility for the content, accuracy or completeness of any Materials.


Christian Heinrichs – An Elephant Named Expressivity

Christian Heinrichs proposes new, expressive ways for sound designers to control procedural audio models. During his presentation he used a touchpad to generate x/y position, speed, touch-area and pressure data for a creaking-door model built in PD, performing a wide variety of creaking sounds. Mirroring the way film Foley artists work, game audio designers might use a controller like this to perform procedural audio Foley during gameplay sessions, generating sound effects for specific instances of events. The Foley performance data could then be analysed by an AI system to generate a control layer, which would perform in-game Foley as expressively as the original, bringing forth a kind of virtual Foley artist.
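
As a toy sketch of that last idea (not Christian’s system), the Python snippet below ‘analyses’ a captured pressure gesture into a smooth trend and a jitter level, then resynthesises new control curves with the same shape and feel.

```python
import numpy as np

rng = np.random.default_rng(7)

# Stand-in for a captured performance: touch pressure sampled at 100 Hz.
t = np.linspace(0, 1, 100)
performed = np.clip(np.sin(np.pi * t) + 0.05 * rng.standard_normal(100), 0, 1)

# "Analysis": keep the smooth trend, measure the performer's jitter.
kernel = np.ones(9) / 9
trend = np.convolve(performed, kernel, mode="same")
jitter = (performed - trend).std()

def perform_variation():
    """Generate a new gesture with the same overall shape and feel."""
    return np.clip(trend + jitter * rng.standard_normal(len(trend)), 0, 1)

# Each call yields a fresh control curve that could drive, say, a PD
# creaking-door model in place of a live touchpad performance.
new_gesture = perform_variation()
```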

Christian Heinrichs is a PhD researcher at the Centre for Digital Music, Queen Mary University of London.

Please have a look at the slides below. You can listen to the presentation using the player at the top of this post.

 

This event and any presentations, handouts, software, documentation or materials made available at this event (‘Materials’) are not provided by and do not represent the opinion of Microsoft Corporation or its affiliates. Microsoft Corporation and its affiliates assume no responsibility for the content, accuracy or completeness of any Materials.
