Details on Facebook 360 Ambisonics Mapping from Angelo Farina

EDIT: You can download a JS effect (for Reaper) that does the conversion from ambiX to TBE, and another that goes from TBE to 2nd order, 2D, Furse-Malham format, here (I’ve included my remapping JS effect too, so you can also go from Furse-Malham to TBE format by converting to ambiX first 🙂):

Update: September 2017 (from http://pcfarina.eng.unipr.it/TBE-conversion.htm)

NEW Version of the Tools: WigWare AmbiX, FuMa and TBE Tools Sept. 2017 Update

Facebook have updated TBE to remove the ‘R’ component from channel 4.  This is important as, before it was removed, it was impossible to go from TBE to 1st order Ambisonics (for YouTube etc.).  The new details are excellently discussed on Angelo’s webpage, which can be found at http://pcfarina.eng.unipr.it/TBE-conversion.htm

TBE(1) =  0.488603 * Ambix(0); W
TBE(2) = -0.488603 * Ambix(1); Y
TBE(3) =  0.488603 * Ambix(3); X
TBE(4) =  0.488603 * Ambix(2); Z
TBE(5) = -0.630783 * Ambix(8); U
TBE(6) = -0.630783 * Ambix(4); V
TBE(7) = -0.630783 * Ambix(5); T
TBE(8) =  0.630783 * Ambix(7); S

Note that going from TBE to horizontal 2nd order FuMa is unchanged (useful for driving my Irregular Ambisonics Decoders 😉)

A good timeline and discussion of the issues, and why Facebook changed the encoding, can be found here.

Older Information, pre V2.2 of Facebook Spatial Workstation.

WigWare AmbiX, FuMa and TBE Tools

Angelo Farina has published an excellent article detailing how the nine 2nd order Ambisonic components map to the eight channels of the Facebook360 TBE format (they decided to pander to the 8-channel limit of Pro Tools 🙁).  All the details can be found on Angelo’s website:

http://pcfarina.eng.unipr.it/TBE-conversion.htm 

The important bit, for my own notes, is (with added Furse-Malham mapping):

TBE(1) =  0.486968 * Ambix(1)  (FuMa W)
TBE(2) = -0.486968 * Ambix(2)  (FuMa Y)
TBE(3) =  0.486968 * Ambix(4)  (FuMa X)
TBE(4) =  0.344747 * Ambix(3)  
        + 0.445656 * Ambix(7)  (FuMa Z+R)
TBE(5) = -0.630957 * Ambix(9)  (FuMa U)
TBE(6) = -0.630957 * Ambix(5)  (FuMa V)
TBE(7) = -0.630957 * Ambix(6)  (FuMa T)
TBE(8) =  0.630957 * Ambix(8)  (FuMa S)

And, to go from TBE to 2nd order, 2D, Furse-Malham format (as mentioned by Ed, in the comments below):

W =  1.446968601 * TBE(1)
X =  2.047502048 * TBE(3)
Y = -2.047502048 * TBE(2)
U = -1.839587932 * TBE(5)
V = -1.839587932 * TBE(6)
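Those TBE-to-FuMa gains can be sketched in Python like so (again, just the table above as code; the function name is my own):

```python
import numpy as np

def tbe_to_fuma_2d(tbe):
    """TBE -> horizontal (2D) 2nd order FuMa, using the gains above.

    tbe: (8, n_samples) array, 0-indexed so tbe[0] is TBE(1).
    Returns (5, n_samples) in FuMa order W, X, Y, U, V.
    """
    tbe = np.asarray(tbe)
    w =  1.446968601 * tbe[0]
    x =  2.047502048 * tbe[2]
    y = -2.047502048 * tbe[1]
    u = -1.839587932 * tbe[4]
    v = -1.839587932 * tbe[5]
    return np.stack([w, x, y, u, v])
```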

TBE is Facebook360 Two Big Ears format
ambiX is the ambiX format used by YouTube Spatial Media (ACN channel order and SN3D normalisation)
FuMa is the Furse-Malham channel ordering and normalisation scheme.

See https://en.wikipedia.org/wiki/Ambisonic_data_exchange_formats for further details on channel ordering and normalisation schemes.

A polar plot of TBE Channel 4 (the combination of the Z and R channels in FuMa speak) can be seen above (click for higher res image).

 

Installing Python and FFMPEG on a Mac using HomeBrew

I’ve been asked a few times what’s the best way to install FFMPEG on a Mac with a decent set of libraries included.  Here’s the best way I’ve found (and also the most compatible way of installing Python).

NOTE: If you already use MacPorts as your package manager, don’t use Homebrew as well; things will go funny.  If you don’t know what MacPorts is, then you’re unlikely to be using it, so the commands below will work fine 😉

Continue reading “Installing Python and FFMPEG on a Mac using HomeBrew”

YouTube Spatial Audio Inverse Filter

It’s been a little while since my last Ambisonics on YouTube post, so I thought I’d share a filter I’ve made to help make YouTube Ambisonics content sound better!  As you may have noticed, the audio that comes off YouTube once your spatial, Ambisonic, audio is uploaded is quite coloured compared to the original.  This is due to the Head Related Transfer Functions used in the modelling of the system.  If the HRTFs exactly modelled your own hearing system, then you wouldn’t notice it, but as they won’t, you will!

In order to equalise the system, the same EQ curve just needs applying to all the Ambisonic channels equally before uploading to YouTube.  So, first, we need to find the average response of the system.  There could be a few methods for this, but the simple approach is to pan an impulse around the listener, storing the frequency response each time.  Sum these together, and average them in some way (I used an RMS type approach).  This is then an ‘average’ response of the system.  I then invert this (adding delay as it’s non-minimum phase) and decompose the filter into its minimum-phase-only response for the EQ (as that’s all we’re really interested in).
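The averaging and inversion steps can be sketched in Python along these lines (a sketch under my own assumptions, not the exact code I used: `responses` is assumed to hold the measured magnitude spectra, one row per pan direction, covering DC to Nyquist; the regularisation and tap count are illustrative):

```python
import numpy as np

def inverse_eq_filter(responses, n_taps=512, eps=1e-6):
    """RMS-average magnitude responses, invert, return a minimum-phase FIR."""
    # RMS-average the magnitude responses across all pan directions.
    avg_mag = np.sqrt(np.mean(np.abs(responses) ** 2, axis=0))
    # Invert the average (regularised so deep nulls aren't boosted wildly).
    inv_mag = 1.0 / np.maximum(avg_mag, eps)
    # Rebuild a full (conjugate-symmetric) spectrum from the half-spectrum.
    n = 2 * (len(inv_mag) - 1)
    full = np.concatenate([inv_mag, inv_mag[-2:0:-1]])
    # Minimum-phase construction via the real cepstrum: fold the
    # cepstrum to be causal, then exponentiate back to a spectrum.
    cep = np.fft.ifft(np.log(full)).real
    folded = np.zeros_like(cep)
    folded[0] = cep[0]
    folded[1:n // 2] = 2 * cep[1:n // 2]
    folded[n // 2] = cep[n // 2]
    h = np.fft.ifft(np.exp(np.fft.fft(folded))).real
    return h[:n_taps]
```

For a perfectly flat system this returns a plain impulse, which is a handy sanity check.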

Continue reading “YouTube Spatial Audio Inverse Filter”

Sounds in Space – Audio for Virtual Reality Animations

I’ve had a few people ask me to share the animations from my Surround Audio for VR presentation that I delivered at Sounds in Space this year.  I’ve made a video of the PowerPoint (30 seconds per slide) so everything can be viewed in context (note there’s no audio, though!).  If you weren’t at the event, it goes through both the graphics and audio processing needed to create VR content and shows the limitations, with respect to the inter-aural level (ILD) and time (ITD) differences reproduced by the Ambisonics to Binaural process at varying orders.  8th order Ambisonics does a great job reproducing both the ILD and ITD up to 4kHz.

 

YouTube Binaural Reaper Project

So, here’s an example (but empty) Reaper project that contains the YouTube binaural filters I measured.  You’ll need to use your preferred Ambisonics plug-ins, and I’m assuming FuMa channel ordering etc.; they’ll be remapped by a plug-in.

There is a bundle of JS effects in the folder too, that you’ll need to install (instructions at : http://reaperblog.net/2015/06/quick-tip-how-to-install-js-plugins/) which allow for:

  • Ambisonic Format Remapping (FuMa -> ambiX)
  • Ambisonic Field Rotation
  • Multi-channel Meter

YouTube have now released the official ones they use (but in individual speaker format… not the most efficient way of doing it!), so it’ll be interesting to compare!

As described in a previous post, the ReaVerb plug-in is filtering W, X, Y and Z with a pair of HRTFs which are then simply summed to create the Left and Right feeds.

YouTube Binaural Project Template

YouTubeBinProject

YouTube 360 VR Ambisonics Teardown!

UPDATE : 4th May 2016 – I’ve added a video using the measured filters. This will be useful for auditioning the mixes before uploading them to YouTube.

So, I’ve been experimenting with YouTube’s Ambisonic to Binaural VR videos.  They work, sound spacious and head tracking also functions (although there seems to be some lag, compared to the video – at least on my Sony Z3), but I thought I’d have a dig around and test how they’re implementing it to see what compromises they’ve made for mobile devices (as the localisation could be sharper…)

Cut to the chase – YouTube are using short, anechoic Head Related Transfer Functions that also assume that the head is symmetrical.  Doing this means you can boil down the Ambisonics to Binaural algorithm to just four short Finite Impulse Response Filters that need convolving in real-time with the B-Format channels (W, X, Y & Z in Furse Malham/SoundField notation – I know YouTube uses ambiX, but I’m sticking with this for now!).  These optimisations are likely needed to make the algorithm work on more mobile phones.
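The symmetric-head trick boils down to something like this sketch (my own illustration of the idea, not YouTube’s code: four FIRs, one per B-Format channel, with the right ear obtained by flipping the sign of the filtered Y channel):

```python
import numpy as np

def bformat_to_binaural(w, x, y, z, hW, hX, hY, hZ):
    """Four-filter Ambisonics-to-binaural for a symmetric head.

    w, x, y, z: B-Format channel signals; hW..hZ: the four FIR filters.
    Returns (left, right) ear signals.
    """
    # W, X and Z are common to both ears for a left/right symmetric head.
    common = np.convolve(w, hW) + np.convolve(x, hX) + np.convolve(z, hZ)
    # Y is the left/right axis, so it just changes sign between the ears.
    side = np.convolve(y, hY)
    return common + side, common - side
```

So instead of a full HRTF set per virtual speaker, only four short convolutions are needed per block of audio, which is why this runs happily on a phone.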

Continue reading “YouTube 360 VR Ambisonics Teardown!”

Multi-channel VU Meter JS Effect for Reaper

It’s always bugged me that the VU meters in Reaper are so small, which is particularly a problem if you’re working with large numbers of channels (which, when using Higher Order Ambisonics, is common!).  So, I’ve knocked up a flexible multi-channel meter that can be made as big as you like, so it should be useful for testing and monitoring when setting up etc..

The scaling is flexible (you can specify the minimum dB value to show) and so is the time window used for both the meter and the peak hold (which is individually held per channel).  I’ve commented the code so if you don’t like the colour scheme etc. it should be a doddle for you to alter it yourself!  The file can be downloaded below:
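The metering maths itself is simple enough to sketch in a few lines of Python (illustrative only; the real thing is a JS effect running per-sample inside Reaper, and the class and parameter names here are my own):

```python
import numpy as np

class PeakMeter:
    """Block peak level in dB with a per-channel peak hold."""

    def __init__(self, n_channels, floor_db=-60.0, hold_blocks=30):
        self.floor_db = floor_db          # minimum dB value to show
        self.hold_blocks = hold_blocks    # how long a peak is held
        self.held = np.full(n_channels, floor_db)
        self.age = np.zeros(n_channels, dtype=int)

    def process(self, block):
        """block: (n_channels, n_samples). Returns (levels_db, held_db)."""
        peak = np.max(np.abs(block), axis=1)
        level_db = 20 * np.log10(np.maximum(peak, 10 ** (self.floor_db / 20)))
        # Expire old holds, then update each channel's hold independently.
        self.age += 1
        self.held[self.age > self.hold_blocks] = self.floor_db
        newer = level_db >= self.held
        self.held[newer] = level_db[newer]
        self.age[newer] = 0
        return level_db, self.held
```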

WigWare Multi-Channel VU Meter

Instructions on how to install a JS effect in Reaper can be found at : http://reaperblog.net/2015/06/quick-tip-how-to-install-js-plugins/

Note: I know this isn’t really a VU meter, it’s a peak meter.  However, whenever anyone wants to search for one, they search for a VU meter!

WigMCVUMeter Animation

 

YouTube, Ambisonics and VR

Introduction

So, last week Google enabled head (phone!) tracked positional audio on 360 degree videos.  Ambisonics is now one of the de facto standards for VR audio.  This is a big moment!  I’ve been playing a little with some of the command line tools needed to get this to work, and also with using Google PhotoSphere pics as the video as, currently, I don’t have access to a proper 360 degree camera.  You’ll end up with something like this:

So first, the details. Continue reading “YouTube, Ambisonics and VR”

64-bit WigWare Ambisonics Plugins Now Available.

I’ve recompiled all the plug-ins so there are now 64-bit versions of WigWare for those who now use 64-bit hosts.  All audio processing is unchanged.  Several issues with the Mac graphical user interface occurred when switching to 64-bit (nothing had to change on Windows!) which I had to fix, so please let me know if there are any issues!  Downloads are on the WigWare page, or below:

Below are 2nd and 3rd order horizontal decoders for Mac (for Dan)

2nd and 3rd Order Mac Decoders

Mac VST Fix for Mavericks (and Yosemite?)

I’ve just realised that the plug-ins on the site weren’t the versions that I have fixed for Mavericks.  I had fixed them almost as soon as Mavericks was released so my students could continue using them, so apologies for not sharing!  I’ve replaced all the Mac versions on the WigWare page with the updated graphical versions (the DSP code worked fine, it was the gfx that had issues).

If anyone has any problems with these, please let me know!