YouTube Spatial Audio Inverse Filter

It’s been a little while since my last Ambisonics on YouTube post, so I thought I’d share a filter I’ve made to help YouTube Ambisonics content sound better!  As you may have noticed, the audio that comes off YouTube once your spatial (Ambisonic) audio is uploaded is quite coloured compared to the original.  This is due to the Head Related Transfer Functions (HRTFs) used in the binaural modelling of the system.  If the HRTFs exactly modelled your own hearing system, we wouldn’t notice this colouration, but as they won’t, you will!

In order to equalise the system, the same EQ curve just needs applying equally to all of the Ambisonic channels before uploading to YouTube.  So, first, we need to find the average response of the system.  There are a few possible methods for this, but the simple approach is to pan an impulse around the listener, storing the frequency response each time.  These responses are then summed and averaged in some way (I used an RMS-type approach), giving an ‘average’ response of the system.  I then invert this response (adding delay, as it’s non-minimum phase) and decompose the inverse filter into its minimum-phase-only response for the EQ (as that’s all we’re really interested in).
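The averaging and minimum-phase steps could be sketched roughly as follows (a minimal NumPy sketch, not the exact filters used here: the `responses` array of per-direction frequency responses, the FFT sizes, and the regularisation floor are all assumptions for illustration):

```python
import numpy as np

def rms_average(responses):
    """RMS-average a stack of complex frequency responses.

    `responses` is shape (num_directions, num_bins): one measured
    response per pan direction (hypothetical input)."""
    return np.sqrt(np.mean(np.abs(responses) ** 2, axis=0))

def min_phase_fir(mag):
    """Build a minimum-phase FIR whose magnitude matches `mag`
    (one-sided, rfft-style bins), via the real-cepstrum method."""
    log_mag = np.log(np.maximum(mag, 1e-12))  # floor avoids log(0)
    cep = np.fft.irfft(log_mag)               # real cepstrum
    m = len(cep)
    fold = np.zeros(m)                        # fold the cepstrum:
    fold[0] = fold[m // 2] = 1.0              # keep c[0] and c[m/2],
    fold[1:m // 2] = 2.0                      # double the causal part
    return np.fft.irfft(np.exp(np.fft.rfft(cep * fold)))

# Hypothetical usage: invert the average magnitude to get the EQ
# curve, then realise it as a minimum-phase filter:
# avg = rms_average(responses)
# eq = min_phase_fir(1.0 / avg)
```

The cepstral folding step is one standard way of extracting the minimum-phase part of a response; since only the magnitude correction matters for this EQ, the excess-phase part can be discarded.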

Continue reading “YouTube Spatial Audio Inverse Filter”