UPDATE: 4th May 2016 – I’ve added a video using the measured filters. This will be useful for auditioning mixes before uploading them to YouTube.
So, I’ve been experimenting with YouTube’s Ambisonic-to-binaural VR videos. They work, sound spacious, and head tracking functions too (although there seems to be some lag compared to the video, at least on my Sony Z3), but I thought I’d have a dig around and test how they’re implementing it, to see what compromises they’ve made for mobile devices (as the localisation could be sharper…)
Cut to the chase – YouTube are using short, anechoic Head Related Transfer Functions that also assume the head is symmetrical. Doing this means you can boil the Ambisonics-to-binaural algorithm down to just four short Finite Impulse Response (FIR) filters that need convolving in real time with the B-Format channels (W, X, Y & Z in Furse-Malham/SoundField notation – I know YouTube uses ambiX, but I’m sticking with this for now!). These optimisations are likely needed to make the algorithm run on a wider range of mobile phones.
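To show why the symmetric-head assumption gets you down to four filters, here’s a minimal Python sketch. The filter names (hW, hX, hY, hZ) are my own placeholders, not YouTube’s – in practice you’d derive them by summing HRIRs over a virtual speaker array – but the structure is the point: the Y (left–right) channel’s contribution simply flips sign between the ears, so no second set of filters is needed.

```python
import numpy as np
from scipy.signal import fftconvolve

def bformat_to_binaural(w, x, y, z, hW, hX, hY, hZ):
    """Symmetric-head binaural decode of first-order B-Format (FuMa W, X, Y, Z).

    With a left/right symmetric head, only four FIRs are needed: each
    B-Format channel is convolved once, and the Y channel's result is
    simply negated for the right ear.
    """
    cw = fftconvolve(w, hW)
    cx = fftconvolve(x, hX)
    cy = fftconvolve(y, hY)
    cz = fftconvolve(z, hZ)
    left = cw + cx + cy + cz
    right = cw + cx - cy + cz   # symmetry: Y flips sign, nothing else changes
    return left, right
```

Four convolutions per block instead of two full HRIR convolutions per virtual speaker is exactly the kind of saving that matters on a phone.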
It’s always bugged me that the VU meters in Reaper are so small, which is a particular problem if you’re working with large numbers of channels (which, when using Higher Order Ambisonics, is common!). So, I’ve knocked up a flexible multi-channel meter that can be made as big as you like, so it should be useful for testing and monitoring when setting up etc.
The scaling is flexible (you can specify the minimum dB value to show), and so is the time window used for both the meter and the peak hold (which is held individually per channel). I’ve commented the code, so if you don’t like the colour scheme etc. it should be a doddle to alter it yourself! The file can be downloaded below:
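The plug-in itself is written in Reaper’s JSFX, but the core logic – a configurable dB floor and a per-channel peak hold – can be sketched in a few lines of Python. The function and class names here are illustrative, not taken from the plug-in:

```python
import numpy as np

def meter_block(samples, min_db=-60.0):
    """RMS level of one block in dB, clamped to a configurable scale floor."""
    rms = np.sqrt(np.mean(samples ** 2))
    db = 20.0 * np.log10(rms) if rms > 0 else min_db
    return max(db, min_db)

class PeakHold:
    """Hold the highest reading per channel for a configurable number of blocks."""
    def __init__(self, n_channels, hold_blocks=30):
        self.hold_blocks = hold_blocks
        self.peaks = [float('-inf')] * n_channels
        self.ages = [0] * n_channels

    def update(self, channel, level_db):
        # A new peak, or an expired hold window, replaces the stored value.
        self.ages[channel] += 1
        if level_db >= self.peaks[channel] or self.ages[channel] > self.hold_blocks:
            self.peaks[channel] = level_db
            self.ages[channel] = 0
        return self.peaks[channel]
```

Per-channel state is the important bit: each channel’s peak decays on its own clock, which is what you want when watching 16+ Ambisonic channels at once.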
So, last week Google enabled head (phone!) tracked positional audio on 360-degree videos. Ambisonics is now one of the de facto standards for VR audio. This is a big moment! I’ve been playing a little with some of the command line tools needed to get this to work, and also with using Google PhotoSphere pics as the video since, currently, I don’t have access to a proper 360-degree camera. You’ll end up with something like this:
I’ve recompiled all the plug-ins, so there are now 64-bit versions of WigWare for those who use 64-bit hosts. All audio processing is unchanged. Several issues with the Mac graphical user interface occurred when switching to 64-bit (nothing had to change on Windows!), which I’ve had to fix, so please let me know if there are any problems! Downloads are on the WigWare page, or below:
I’ve just realised that the plug-ins on the site weren’t the versions I had fixed for Mavericks. I fixed them almost as soon as Mavericks was released so my students could continue using them, so apologies for not sharing! I’ve replaced all the Mac versions on the WigWare page with the updated versions (the DSP code worked fine; it was the gfx that had issues).
If anyone has any problems with these, please let me know!
Sounds in Space happened on 30th June this year and was an excellent day (the programme and details of the day can be found here). There are always things which could be done better, and hopefully we’ve got all these noted ready for next year (fingers crossed). If you weren’t able to make the event, then the details below may give you a glimpse of what you missed and whether you’d like to come next time!
The 27 speaker 3D surround sound setup was the best we’ve ever had, made possible with the help of recent graduates from Sound, Light and Live Event Technology and Richard and Mark, technicians in Electronics and Sound. Alex Wardle (graduate from Music Tech and Production) also created a video of the event which can be viewed at:
The surround sound Rosetta performance by Sigma 7 (at Derby Theatre, 7.30pm 7th June) will be streamed live with binaural audio (wear headphones for 3D audio) at http://sigma7rosetta.co.uk/ . Multi-channel videos will also be available after the show, and there’ll also be a Sound on Sound article about the event in the future too! If you can’t make the event, the stream will be the next best thing!
The Sounds in Space research symposium is really coming together. The excellent keynote from Chris Pike on new experiences in Broadcast sound, the various Auro 3D, Ambisonics, TiMax and 5.1 demonstrations along with bone conduction headsets and multi-channel Guitars are all pointing to what promises to be an excellent, and FREE event. There are still places available, details on how to sign up are on the Sounds in Space webpage (http://tinyurl.com/SinS2014).
John Crossley, programme leader for the MA in Music Production (at University of Derby), is putting together a not-to-be-missed audio-visual experience during the BIG SHOW on June 7th at the Derby Theatre.
‘Rosetta’ is an original music suite inspired by the European Space Agency’s ‘Comet Chaser’ satellite as it attempts to meet up with and investigate the 4.6 billion year-old comet, Churyumov-Gerasimenko. Presented in 16-channel ‘Super-Surround’, the music will be performed live by Sigma 7, and will include live instruments, voices and electronics. The audience will have a totally immersive aural and visual experience. There will also be an opportunity before the show to meet some of the team and to find out about the technology involved in putting together a show of this complexity. You’re invited to lose yourself in sound and space!
Supported by The European Space Agency and Funded by the Arts Council.
This is the third show of its type, and he’s using 16 channels to achieve the surround effect (using TiMax). It will be a great show, and you can’t grumble at the price! More details and info when I get it – follow @johncrossley01 on twitter for updates 🙂
Our yearly Sounds in Space symposium is shaping up to be a great event. We have a talk from the BBC’s Chris Pike confirmed as the keynote (on object-based broadcasting with examples played over our 20 speaker 3D rig) and further talks on subjects such as surround bone conduction audio, multi-channel internet streaming and live, large scale surround sound composition and implementation. We still have spaces for more talks, so please send short abstracts to email@example.com by 5pm on Monday 19th May 2014.
Live demos and surround works are actively encouraged (we’ll have a 20 speaker 3D rig available in the presentation room – I can provide a Higher Order Ambisonic Decoder if necessary).
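For anyone wondering what an Ambisonic decoder for a rig like this involves, here’s a minimal first-order sampling-decoder sketch in Python (the rig decoder I’d bring is Higher Order and rather more sophisticated; the 1/√2 W weighting below follows the FuMa convention, and conventions do vary, so treat this as illustrative only):

```python
import numpy as np

def foa_sampling_decoder(azimuths_deg, elevations_deg):
    """Build a first-order (FuMa W, X, Y, Z) sampling-decoder matrix.

    Each row holds one speaker's gains: the speaker feed is that row
    dotted with [W, X, Y, Z]. This 'sample the spherical harmonics at
    the speaker directions' approach is the simplest possible decode,
    not a psychoacoustically optimised one.
    """
    az = np.radians(np.asarray(azimuths_deg, dtype=float))
    el = np.radians(np.asarray(elevations_deg, dtype=float))
    w = np.full_like(az, 1.0 / np.sqrt(2.0))  # FuMa W carries a -3 dB factor
    x = np.cos(az) * np.cos(el)
    y = np.sin(az) * np.cos(el)
    z = np.sin(el)
    return np.stack([w, x, y, z], axis=1)

def decode(b_format, matrix):
    """b_format: (4, n_samples) W, X, Y, Z -> (n_speakers, n_samples) feeds."""
    return matrix @ b_format
```

For a square of speakers at ±45° and ±135°, a source panned straight ahead comes out equally in the two front speakers, as you’d hope.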
We’ve also got plenty of spaces left for attending this free event. If you’re interested in attending or presenting at what’s always a great day, please see this page for more details.