Projects

A Selection of Projects Undertaken:

Please note that this page is pretty out of date… I’ll try to update it when I get a moment!

2011/12 – Sounds in Space Symposium 2012

The Creative Technologies Research Group are, again, hosting a free Sounds in Space Research Symposium at the University of Derby. Details can be found on this page.

2010/11 – Sounds in Space Symposium 2011

In 2011 we ran our first Sounds in Space Symposium. It was an extremely successful event, with the programme of the day’s events available here.

2010/11 – 3D Data Capture and Utilisation using Microsoft Kinect.

In this project, the use of the newly released Microsoft Kinect was investigated, with test cases including a Kinect-driven Theremin and an automatic spotlight-following system.
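The core of a Kinect-driven Theremin is a mapping from tracked hand positions to pitch and volume. A minimal sketch of such a mapping is below; the coordinate convention, note range, and exponential pitch law are illustrative assumptions, not the values used in the project.

```python
# Hypothetical Kinect Theremin mapping: one hand's horizontal position
# controls pitch, the other hand's vertical position controls volume.
# Coordinates are assumed normalised to 0.0-1.0 by the skeleton tracker.

def theremin_mapping(right_hand_x, left_hand_y,
                     f_min=110.0, f_max=1760.0):
    """Map normalised hand coordinates to (frequency_hz, amplitude)."""
    # Clamp inputs to the tracked range.
    x = min(max(right_hand_x, 0.0), 1.0)
    y = min(max(left_hand_y, 0.0), 1.0)
    # An exponential pitch mapping feels more natural than a linear one,
    # since pitch perception is roughly logarithmic in frequency.
    freq = f_min * (f_max / f_min) ** x
    amp = 1.0 - y  # raised hand (y -> 0) means louder
    return freq, amp
```

The resulting frequency/amplitude pair would then drive an oscillator in whatever synthesis engine sits downstream.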

2009/10 – The Social Side of Studying – An Investigation Into The Use Of Screencasts And Social Networks In Higher Education

In this project, the potential role of social networking was investigated and utilised to support an assignment on Ambisonic mixing, in a setting where I was a guest lecturer rather than the course tutor. Interestingly, although the screencasts and online learning material were well used by the students, and deemed extremely useful, the social side of the site (a Ning network) wasn’t used by them at all. It was, however, used to some extent by a few external members from around the world (it was a public site)!

2007/8 – The simulation of distance in multi-channel audio.

This project carried out the research needed to firstly ascertain what it is that the SoundField mics record in order to encode distance. This was tested using our SoundField microphones in combination with the software packages Adobe Audition 3 and the Aurora audio testing plug-in suite. The results can then be used to create a plug-in for standard audio packages (a VST plug-in) that correctly encodes distance cues alongside the panning information. The work will also feed into SPARG’s previous research, being used in the calibration of the Ambisonic decoders that have already been created. Distance compensation can be set up in the decoders to take into account a) the distance at which the SoundField mics were calibrated (i.e. where their ‘focal distance’ is set) and b) the distance at which the speakers sit. This work was presented at the Institute of Acoustics’ Reproduced Sound 25 International Conference and the University of Derby’s annual research conference (2009).

2008/9 (Double Award) – Novel Human Computer Interaction Development.

Following the recent success of the Multi-media Applications project ‘Wii are the Music Makers’ (http://www.derby.ac.uk/press-office/news-archive/wii-are-the-music-makers ), which resulted in local news and radio coverage along with a stand prepared for the ‘NanoWhat’ event (http://www.nanowhat.co.uk/), this project’s aim was twofold: to develop novel human-computer interfaces, and to embed this work into both performance (their use) and technical teaching content (their development). The interfaces were built from cheap, readily available hardware allowing more intuitive control of audio and lighting/show-control software – Wii controllers and webcams, for example, costing around £20 each, were used to create multi-touch and motion-sensing controllers that would normally cost over £2000 each.

The work combined hardware such as webcams, Wii controllers and mobile phones with available software (such as GlovePIE http://carl.kenner.googlepages.com/glovepie and EyesWeb http://www.infomus.org/EywMain.html ) and custom-written Java applications to create flexible, wireless, powerful and intuitive human-computer interaction devices, which were used to control various audio/video/lighting parameters in different ways. For example, the motion of someone walking across a stage could be used to control the virtual position of a sound source, or Wii controllers could be handed around the audience to remix audio loops in real time.
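The stage-walking example above boils down to converting a tracked 2D position into the parameters a panner consumes. A minimal sketch, assuming the performer is tracked in metres with the listener at the origin (x to the right, y towards the stage):

```python
import math

# Hypothetical mapping from a tracked stage position to panner
# parameters: azimuth (degrees, 0 = straight ahead, positive to the
# right) and distance. The coordinate convention is an assumption
# for illustration, not the project's actual tracking setup.

def stage_to_panner(x, y):
    """Return (azimuth_degrees, distance_m) from stage position."""
    azimuth = math.degrees(math.atan2(x, y))
    distance = math.hypot(x, y)
    return azimuth, distance
```

A camera-tracking front end (e.g. EyesWeb) would feed positions into a function like this, whose output then drives the virtual source position in the mixing software.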

Both the technical and artistic outcomes of this work were presented at the Forum for Innovation in Music Production and Composition, Leeds College of Music, UK with a poster being presented at the University of Derby’s Annual Learning, Teaching & Assessment Conference (2009).

2006/7 (Double Award) & 2007/8 (Single Award) – SPARG True Multi-channel Mixing Environment.

The Signal Processing Applications Research Group has carried out much research into the field of hierarchical multi-channel audio platforms and algorithms (e.g. see http://sparg.derby.ac.uk/SPARG/PDFs/BWPhDThesis.pdf ). However, all current audio mixing and editing software is ‘hard-wired’ to utilise only a fixed number of speakers, with internal workings predicated on stereo mixing paradigms, making any true, flexible multi-channel sound mixing problematic at best. This project is implementing and documenting a true hierarchical, flexible mixing environment using the already established, permanent Multi-channel Sound Research Lab at the University of Derby. Our KRK Multi-channel Sound Lab consists of 30 speakers and four subs, allowing for full 3D audio creation and audition. Custom software to drive the various speaker arrays available in the lab (the WigWare Ambisonic plug-ins) has been constructed and hosted in music production software (such as Reaper – www.reaper.fm – and AudioMulch – www.audiomulch.com) to allow for creative use of the system with a convenient and familiar workflow. This work is currently being fed into a number of undergraduate modules, allowing our students to create cutting-edge, future-proof audio presentations.
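At the heart of a hierarchical Ambisonic workflow like this is the first-order (B-format) encoding step, which pans a mono source into four speaker-independent channels that a decoder later renders to whatever array is available. The sketch below is the textbook formulation with the traditional 1/√2 weighting on W; it illustrates the technique, not the WigWare plug-ins’ actual source code.

```python
import math

# Textbook first-order Ambisonic (B-format) encoding of a mono sample.
# Azimuth is measured anticlockwise from straight ahead, elevation
# upwards from the horizontal; W carries the omnidirectional component
# with the traditional 1/sqrt(2) weighting.

def encode_bformat(sample, azimuth_deg, elevation_deg):
    """Encode a mono sample into B-format channels (W, X, Y, Z)."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    w = sample / math.sqrt(2.0)
    x = sample * math.cos(az) * math.cos(el)
    y = sample * math.sin(az) * math.cos(el)
    z = sample * math.sin(el)
    return w, x, y, z
```

Because the encoded channels carry the full directional soundfield rather than speaker feeds, the same B-format mix can be decoded to any of the lab’s speaker arrays, which is exactly what makes the approach speaker-count agnostic.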

The outcomes of this work were presented at the Institute of Acoustics’ Reproduced Sound 24 International Conference, with posters being presented at the University of Derby’s Annual Learning, Teaching & Assessment Conferences (2007 & 2008).
