I’ve been asked a few times what the best way is to install FFmpeg on a Mac with a decent set of libraries included. Here’s the best way I’ve found (and also the most compatible way of installing Python).
NOTE: If you already use MacPorts as your package manager, don’t use Homebrew as well, as things will go funny. If you don’t know what MacPorts is, then you’re unlikely to be using it, so the commands below will work fine 😉
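For the impatient, the gist is just a couple of brew commands. Here’s a minimal Python sketch of the same steps (package names are the standard Homebrew ones; the full post covers the extra FFmpeg library options):

```python
import shutil
import subprocess

# Per the note above: don't mix package managers. Bail out if MacPorts is present.
if shutil.which("port"):
    raise SystemExit("MacPorts detected - mixing it with Homebrew will go funny.")

# Standard Homebrew package names; see the full post for extra FFmpeg options.
for pkg in ("python", "ffmpeg"):
    subprocess.run(["brew", "install", pkg], check=True)
```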
Continue reading “Installing Python and FFMPEG on a Mac using HomeBrew”
It’s been a little while since my last Ambisonics on YouTube post, so I thought I’d share a filter I’ve made to help make YouTube Ambisonics content sound better! As you may have noticed, the audio that comes off YouTube once your spatial, Ambisonic audio is uploaded is quite coloured compared to the original. This is due to the Head-Related Transfer Functions (HRTFs) used in the modelling of the system. If the HRTFs exactly modelled your own hearing system, we wouldn’t notice this colouration, but as they won’t, you will!
In order to equalise the system, the same EQ curve just needs applying to all of the Ambisonic channels equally before uploading to YouTube. So, first, we need to find the average response of the system. There could be a few methods for this, but the simple approach is to pan an impulse around the listener, storing the frequency response each time. Sum these together and average them in some way (I used an RMS-type approach). This gives an ‘average’ response of the system. I then invert this response (adding delay, as it’s non-minimum phase) and decompose the filter into its minimum-phase-only response for the EQ (as that’s all we’re really interested in).
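As a rough sketch of that process, here’s a Python/NumPy reconstruction (my own illustration, not the exact code I used; the FFT size and regularisation constant are arbitrary):

```python
import numpy as np

def inverse_eq_fir(mag_responses, n_fft=2048, eps=1e-3):
    # mag_responses: one full (conjugate-symmetric) magnitude spectrum of
    # n_fft bins per pan direction, measured from the binaural output
    mags = np.asarray(mag_responses)
    avg = np.sqrt(np.mean(mags ** 2, axis=0))   # RMS-style average response
    inv = 1.0 / np.maximum(avg, eps)            # invert, regularised

    # Minimum-phase reconstruction via the real cepstrum (homomorphic method)
    cep = np.fft.ifft(np.log(inv)).real
    folded = np.zeros_like(cep)
    folded[0] = cep[0]
    folded[1:n_fft // 2] = 2.0 * cep[1:n_fft // 2]
    folded[n_fft // 2] = cep[n_fft // 2]
    return np.fft.ifft(np.exp(np.fft.fft(folded))).real  # causal EQ FIR
```

The resulting FIR is then applied identically to every Ambisonic channel before upload.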
Continue reading “YouTube Spatial Audio Inverse Filter”
I’ve had a few people ask me to share the animations from my Surround Audio for VR presentation that I delivered at Sounds in Space this year. I’ve made a video of the PowerPoint (30 seconds per slide) so everything can be viewed in context (note: there’s no audio, though!). If you weren’t at the event, it goes through both the graphics and audio processing needed to create VR content and shows the limitations with respect to the interaural level differences (ILD) and interaural time differences (ITD) reproduced by the Ambisonics-to-binaural process at varying orders. 8th-order Ambisonics does a great job of reproducing both the ILD and ITD up to 4kHz.
So, here’s an example (but empty) Reaper project that contains the YouTube binaural filters I measured. You’ll need to use your preferred Ambisonics plug-ins, and I’m assuming FuMa channel ordering etc. (they’ll be remapped by a plug-in).
There’s also a bundle of JS effects in the folder that you’ll need to install (instructions at http://reaperblog.net/2015/06/quick-tip-how-to-install-js-plugins/), which allow for:
- Ambisonic Format Remapping (FuMa -> ambiX) – see the sketch after this list
- Ambisonic Field Rotation
- Multi-channel Meter
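As an aside, the first-order FuMa-to-ambiX remap in that first effect boils down to a channel reorder plus a gain on W. A minimal NumPy sketch of the idea (illustration only; in Reaper you’d use the bundled JS effect):

```python
import numpy as np

def fuma_to_ambix_first_order(b_fuma):
    # b_fuma: (n_samples, 4) B-Format audio in FuMa order [W, X, Y, Z]
    w, x, y, z = b_fuma.T
    # ambiX is ACN-ordered [W, Y, Z, X] with SN3D weighting; FuMa's W
    # carries a -3 dB gain, so scale it back up by sqrt(2)
    return np.stack([w * np.sqrt(2.0), y, z, x], axis=1)
```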
YouTube have now released the official filters they use (but in individual-speaker format… not the most efficient way of doing it!), so it’ll be interesting to compare!
As described in a previous post, the ReaVerb plug-in filters each of W, X, Y and Z with a pair of HRTFs, and the results are simply summed to create the Left and Right feeds.
YouTube Binaural Project Template
UPDATE: 4th May 2016 – I’ve added a video using the measured filters. This will be useful for auditioning mixes before uploading them to YouTube.
So, I’ve been experimenting with YouTube’s Ambisonic-to-binaural VR videos. They work, sound spacious, and head tracking also functions (although there seems to be some lag compared to the video – at least on my Sony Z3), but I thought I’d have a dig around and test how they’re implementing it, to see what compromises they’ve made for mobile devices (as the localisation could be sharper…)
Cut to the chase – YouTube are using short, anechoic Head-Related Transfer Functions that also assume the head is symmetrical. Doing this means you can boil the Ambisonics-to-binaural algorithm down to just four short Finite Impulse Response (FIR) filters that need convolving in real time with the B-Format channels (W, X, Y & Z in Furse-Malham/SoundField notation – I know YouTube uses ambiX, but I’m sticking with this for now!). These optimisations are likely needed to make the algorithm work on a wider range of mobile phones.
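To see why the symmetric head halves the work, here’s a minimal sketch of that four-filter decode (my reconstruction with hypothetical filter names, not YouTube’s actual code):

```python
import numpy as np
from scipy.signal import fftconvolve

def symmetric_b_format_to_binaural(w, x, y, z, h_w, h_x, h_y, h_z):
    # With a symmetric head, W, X and Z share identical left/right HRTFs,
    # while Y's contribution simply flips sign between the ears.
    mid = fftconvolve(w, h_w) + fftconvolve(x, h_x) + fftconvolve(z, h_z)
    side = fftconvolve(y, h_y)
    return mid + side, mid - side  # left ear, right ear
```

So instead of eight convolutions (four channels times two ears), only four are needed.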
Continue reading “YouTube 360 VR Ambisonics Teardown!”
It’s always bugged me that the VU meters in Reaper are so small, which is a particular problem if you’re working with large numbers of channels (which, when using Higher Order Ambisonics, is common!). So, I’ve knocked up a flexible multi-channel meter that can be made as big as you like, so it should be useful for testing and monitoring when setting up etc.
The scaling is flexible (you can specify the minimum dB value to show), and so is the time window used for both the meter and the peak hold (which is held individually per channel). I’ve commented the code, so if you don’t like the colour scheme etc. it should be a doddle for you to alter it yourself! The file can be downloaded below:
WigWare Multi-Channel VU Meter
Instructions on how to install a JS effect in Reaper can be found at: http://reaperblog.net/2015/06/quick-tip-how-to-install-js-plugins/
Note: I know this isn’t really a VU meter, it’s a peak meter. However, whenever anyone wants to search for one, they search for a VU meter!
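For the curious, the metering logic itself is simple. The plug-in is written in Reaper’s JS language, but here’s a hypothetical Python/NumPy sketch of the same per-channel peak-and-hold idea:

```python
import numpy as np

def meter_step(block, held_db, age, hold_updates=30, floor_db=-60.0):
    # block: (n_samples, n_channels) audio for one metering window
    peak = np.abs(block).max(axis=0)
    peak_db = 20 * np.log10(np.maximum(peak, 10 ** (floor_db / 20)))
    # a louder peak, or an expired hold, replaces the held value (per channel)
    age = age + 1
    refresh = (peak_db >= held_db) | (age > hold_updates)
    held_db = np.where(refresh, peak_db, held_db)
    age = np.where(refresh, 0, age)
    return peak_db, held_db, age
```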
So, last week Google enabled head- (phone!-) tracked positional audio on 360-degree videos. Ambisonics is now one of the de facto standards for VR audio. This is a big moment! I’ve been playing a little with some of the command-line tools needed to get this to work, and also with using Google PhotoSphere pics as the video, as I don’t currently have access to a proper 360-degree camera. You’ll end up with something like this:
So first, the details. Continue reading “YouTube, Ambisonics and VR”
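(In the meantime, here’s a rough, hypothetical sketch of the kind of workflow involved – file names and FFmpeg options are illustrative, not the exact commands from the full post: loop a still into a video with a four-channel audio track, then inject the spatial metadata using Google’s Spatial Media Metadata Injector.)

```python
import subprocess

# Illustrative only: wrap a PhotoSphere still and a 4-channel ambiX WAV
# into a video (the exact codecs/options depend on what YouTube accepts)
subprocess.run(["ffmpeg", "-loop", "1", "-i", "photosphere.jpg",
                "-i", "bformat_ambix.wav", "-t", "60",
                "-c:v", "libx264", "-c:a", "pcm_s16le",
                "-shortest", "video360.mov"], check=True)

# Inject spatial metadata (github.com/google/spatial-media)
subprocess.run(["python", "spatialmedia", "-i", "--spatial-audio",
                "video360.mov", "video360_spatial.mov"], check=True)
```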
I’ve recompiled all the plug-ins, so there are now 64-bit versions of WigWare for those who now use 64-bit hosts. All audio processing is unchanged. Several issues with the Mac graphical user interface occurred when switching to 64-bit (nothing had to change on Windows!), which I had to fix, so please let me know if there are any issues! Downloads are on the WigWare page, or below:
Below are 2nd and 3rd order horizontal decoders for Mac (for Dan)
2nd and 3rd Order Mac Decoders
I’ve just realised that the plug-ins on the site weren’t the versions that I’d fixed for Mavericks. I’d fixed them almost as soon as Mavericks was released so my students could continue using them, so apologies for not sharing! I’ve replaced all the Mac versions on the WigWare page with the updated graphical versions (the DSP code worked fine; it was the gfx that had issues).
If anyone has any problems with these, please let me know!
Sounds in Space happened on 30th June this year and was an excellent day (the programme and details of the day can be found here). There are always things that could be done better, and hopefully we’ve got all of these noted, ready for next year (fingers crossed). If you weren’t able to make the event, the details below may give you a glimpse of what you missed and whether you’d like to come next time!
The 27-speaker 3D surround sound setup was the best we’ve ever had, made possible with the help of recent graduates from Sound, Light and Live Event Technology, and Richard and Mark, technicians in Electronics and Sound. Alex Wardle (a graduate from Music Tech and Production) also created a video of the event, which can be viewed at:
Simon Lewis, a Creative Technologies Research Group member, took a few pics of the day, which you can find at the bottom of this post. Continue reading “Sounds in Space 2014 – Video, Pics and Feedback”