Stereo to Ambisonics using UHJ

Back in 1983, Michael Gerzon presented UHJ, a 2-, 3- or 4-channel encoding system in which the first two channels are stereo-compatible Left and Right signals. In its 4-channel incarnation, this was a lossless transcoding of the W, X, Y and Z channels of Ambisonics. If only the first two channels are kept, horizontal Ambisonic surround can still be extracted, albeit with less isolation between the resulting channels (much like Dolby Stereo, used in cinemas from 1976).

I won’t bore you with the equations and maths of the system, but Gerzon’s original paper can be found at

To cut a long story short, I’ve made a UHJ decoder plug-in based on Gerzon’s paper above, as there isn’t one currently available that works in the same way:

Screenshot of the WigWare UHJ Decoder/Transcoder JS Effect Plugin

Although UHJ-encoded material works best (a discography of UHJ releases can be found at), any two-channel recording can be fed into the decoder, and either a square/rectangle decode can be produced, or the B-Format output for decoding using another plug-in. Do note that the ‘shelf filters’ recommended for UHJ are different from the ones used for ‘standard’ Ambisonic B-Format decoding.

There are other UHJ implementations available, but this plug-in has a few features not available elsewhere:

  • Allpass-filter-based phase-shift networks. This is similar to the technique used in the original hardware, and can sound more natural than FIR filter alternatives.
  • Correct shelf filtering based on Gerzon’s recommendations.
  • Speaker distance compensation (dial in, and compensate for, the distance of the speakers from the sweet spot).
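Speaker distance compensation itself is simple physics: nearer speakers are delayed and attenuated so that every speaker behaves as if it sits at the distance of the furthest one. A minimal sketch of the idea (my own illustration, not the plug-in’s code; the 343 m/s speed of sound and the inverse-distance gain law are assumptions):

```python
# Per-speaker delay and gain so that all speakers, at different
# distances from the sweet spot, arrive aligned with the furthest one.

SPEED_OF_SOUND = 343.0  # m/s, assumed room-temperature value

def distance_compensation(distances_m, sample_rate=48000):
    """Return a (delay_samples, gain) pair per speaker."""
    d_max = max(distances_m)
    comp = []
    for d in distances_m:
        delay_s = (d_max - d) / SPEED_OF_SOUND  # nearer speaker => more delay
        gain = d / d_max                        # nearer speaker => attenuate (1/r law)
        comp.append((round(delay_s * sample_rate), gain))
    return comp

# Example: a square array where one speaker is 0.5 m closer than the rest
print(distance_compensation([2.0, 2.0, 1.5, 2.0]))
# → [(0, 1.0), (0, 1.0), (70, 0.75), (0, 1.0)]
```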

The decoded polar patterns (coming from a 2-channel UHJ encoded source) can be seen below. The system does well considering it’s only working from two channels!

Low and High Frequency Decoding Polar Patterns for a Square Speaker Array from a 2-channel UHJ source

The plugin, which is a JSFX for Reaper, can be downloaded below. To install, unzip the files into the Effects folder in the Reaper Resource Path. A guide for doing this can be found at:

Reaper JSFX UHJ Decoder/Transcoder (208 downloads)

WHAM – Webcam Head-track AMbisonics

The restrictions imposed by the pandemic thwarted the continuation, in 2020/21, of ‘in-person’ listening tests into Ambisonic order and transparency over head-tracked headphones, an ongoing project using Very High Order Ambisonics (up to 35th) and hardware head-tracking. It raised the question: “How do we maintain our essential test features using remote systems?” Many people had access to webcams, laptops and headphones due to remote working, so we sought to leverage this, the result being WHAM!

The WHAM (Webcam Head-tracked Ambisonics) website takes the approach of using a webcam, via the browser, to measure head rotation, providing the dynamic cues necessary for a convincing room auralisation using Binaural Room Impulse Response (BRIR) data. Visitors to the site can experience up to 17th Order Ambisonics over headphones, incorporating asymmetric binaural filtering of captured room responses that reacts to the rotation of the head.

This approach can only model a single source position, but the associated room response is captured to a much higher order than is currently possible using Ambisonic microphones (which max out at 5th order), and accurate dynamic head-movement cues can be processed in real-time in the browser.
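The key dynamic cue is sound-field rotation: as the head turns, the Ambisonic scene is counter-rotated before binaural rendering. At first order this is a simple 2×2 rotation of the horizontal components (higher orders need larger rotation matrices). A minimal first-order sketch, my own illustration rather than code from the WHAM site or JSAmbisonics, and note that sign conventions for yaw differ between libraries:

```python
import math

def rotate_foa_yaw(w, x, y, z, yaw_rad):
    """Counter-rotate first-order B-format by the listener's head yaw.
    W and Z are unaffected by rotation about the vertical axis."""
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    x_r = c * x + s * y
    y_r = -s * x + c * y
    return w, x_r, y_r, z

# A source dead ahead (energy only in X); after a 90-degree head turn
# the energy moves entirely into the (counter-rotated) Y component.
w, x, y, z = rotate_foa_yaw(1.0, 1.0, 0.0, 0.0, math.pi / 2)
```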

The project has added extra functions to the JS Ambisonics library to enable asymmetrical filtering (left/right symmetry is a common method for increasing the efficiency of Ambisonics-to-binaural processing, but isn’t valid if room responses are to be used). The forked JSAmbisonics can be found at:

If you have a webcam and headphones, do give it a try!

New AmbiX WigWare Plugins Available

After some time, I decided to update my Ambisonic tools to support the AmbiX standard (now widely used for immersive audio, 360 videos and VR), and to rework my speaker array decoders using JSFX to make them a little more powerful, with a better workflow for me to quickly create new ones! My plug-ins implement near-field compensation, distance filtering and a few other things that other Ambisonic tools don’t, which is why I decided to bite the bullet and update them all. My 3D Ambisonic Reverb (AmbiXFreeverb) is also available in the bundle.
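For first-order material, AmbiX differs from the older FuMa convention only in channel ordering (ACN: W, Y, Z, X) and in W’s normalisation (FuMa’s W carries a −3 dB, i.e. 1/√2, factor relative to SN3D). A sketch of the conversion, first order only since higher orders need additional per-channel scale factors:

```python
import math

def ambix_to_fuma_foa(w, y, z, x):
    """Convert first-order AmbiX (ACN order W,Y,Z,X; SN3D normalisation)
    to FuMa (order W,X,Y,Z). At first order, only W needs rescaling:
    FuMa W = SN3D W / sqrt(2); X, Y and Z pass through unchanged."""
    return (w / math.sqrt(2.0), x, y, z)

# Reorder and rescale a single sample frame
fuma = ambix_to_fuma_foa(1.0, 2.0, 3.0, 4.0)
```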

An example of one of the decoders is shown below. Dual-band decoding is available, and you can also tweak the overall gain of the low-frequency vs high-frequency decodes (as energy-optimised ones tend to be a little quieter). All the controls default to sensible options, so if you don’t know what some of them do, just don’t change them 😉

Screenshot of a 3rd order Octagon Ambisonic Decoder

Both the platform-dependent VSTs (for Mac and PC) and the platform-independent decoders are available for download on the WigWare page, or below:

The VST plugins can be placed in the usual folders for Mac and PC, and a guide to installing JSFX plugins can be found at:

The folder JSFX plugins go in is (Mac)
~/Library/Application Support/REAPER/Effects/WigWare/Amb Dec AmbiX/
or (PC)
%USERPROFILE%\AppData\Roaming\REAPER\Effects\WigWare\Amb Dec AmbiX\

You’ll need to create the WigWare folders. You can get to this quickly from Reaper by selecting Options->Show Resource Path in Explorer/Finder…

How to Install Aurora Tools for Audacity

I’ve been using Angelo Farina‘s excellent Aurora tools with my students for a number of years now to help with Log Sine Sweep measurements, but they often struggle to get the modules working with Audacity. So, here are the instructions I gave them this year to help with that. I’ve shared them here as they’ll likely be useful for anyone who needs to install the tools!

  1. Download the latest version of the Aurora Plugins for Audacity, noting the version number of Audacity they are for:

The highest supported Audacity version is 2.4.1 for both Mac and Windows (as of Feb 2021).

Here’s a direct link to the latest version (as of Feb 2021):

When you’ve downloaded, extract the files. If you’re using a Mac, the download is the whole Audacity package with Aurora included. Just unzip and drag it into Applications – then jump to point 5 to get the modules enabled!

Let’s look at the PC version – your unzipped folder should look something like the figure below:

  2. Now, install the corresponding version of Audacity (older versions can be found at ). I’m going to put it in a non-default folder to help keep track of versions!
  3. Next, overwrite the files in the Audacity install folder with the Aurora + ASIO ones we downloaded and unzipped earlier.
  4. You should get a warning that Windows needs to overwrite files.
  5. Now run Audacity as usual. We need to enable the Aurora modules/plugins so we can use them. Find the Generate menu and select Add/Remove Plugins:
  6. On a PC, you should see an Aurora plugin at the top (there are others, as well!). Click ‘Select All’ and then ‘Enable’ to turn them all on. If you’re using a Mac, you’ll probably see all your VST plugins in here too. I’ll leave it up to you whether you enable them all… order by Path (click it!) to have them grouped more sensibly, allowing you to just enable the things you want!

Above is the view on a PC.  Below, is the view on a Mac after I’ve clicked ‘Path’ and found the Aurora modules:

  7. Now, if you go back to the Generate menu, you’ll see the top entry is Aurora Sine Sweep Generator – this means it’s all working 🙂
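That Sine Sweep Generator produces Farina’s exponential (log) sine sweep, in which the frequency rises exponentially from f1 to f2 over T seconds. A minimal NumPy sketch of the underlying formula (the parameter values here are just an illustration, not Aurora defaults):

```python
import numpy as np

def exp_sine_sweep(f1=20.0, f2=20000.0, duration=10.0, fs=48000):
    """Farina's exponential sine sweep:
    x(t) = sin( (2*pi*f1*T / ln(f2/f1)) * (exp((t/T) * ln(f2/f1)) - 1) )
    The instantaneous frequency rises exponentially from f1 to f2."""
    t = np.arange(int(duration * fs)) / fs
    ratio = np.log(f2 / f1)
    k = 2.0 * np.pi * f1 * duration / ratio
    return np.sin(k * (np.exp(t * ratio / duration) - 1.0))

sweep = exp_sine_sweep()  # 10 s sweep, 20 Hz to 20 kHz at 48 kHz
```

The matching inverse filter (the time-reversed sweep with a 6 dB/octave amplitude tilt) is what Aurora convolves with the recording to recover the impulse response.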

Achieving the Democracy of Sound

My work, and that of a colleague (Dr Adam Hill), is being featured as an impact case study for the upcoming Research Excellence Framework 2021 (REF2021). As it happens, my work was also featured as a case study in the last REF (REF2014). Here’s a video of a presentation we made to the rest of the University to let them know what we get up to. Further details can also be found at

Sounds in Space 2018 Videos, PowerPoints and Pics!

New server… I’m alive!

This year’s Sounds in Space was our most successful yet! We ran a two-day event, with some really excellent demonstrations and talks. The videos that were streamed live are now available on YouTube, with links to each talk and the corresponding slides available on the Sounds in Space 2018 page. Attached to this post are also some photos from the day!


Ambisonics to Binaural HRTF Animations

A couple of weeks ago, I presented at the 4th International Conference on Spatial Audio in Graz, Austria (and it was a great event, too!)

The paper I presented was: Analysis of Binaural Cue Matching Using Ambisonics to Binaural Decoding Techniques.

In the presentation, I demonstrated the performance of the algorithm by looking at inter-aural time and level differences between the ears of a centrally seated listener at different orders (1st to 35th). A number of people asked if they could get access to the animations I presented at the event, and you can find them linked on a simple webpage below. If the GIFs don’t sync, just leave the page and come back to it. Once the GIFs have all been cached by your browser (the first time you load it), opening the page again should start them all at the same time!

Note, these probably won’t play as well on mobile devices (too many GIFs!)

Wiggins ICSA 2017 Animations

Wiggins ICSA 2017 Animations – half speed
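For reference, the inter-aural time difference in analyses like this is typically estimated from the peak of the cross-correlation between the two ear signals. A minimal sketch of that technique (my own illustration, not the paper’s code):

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Estimate the inter-aural time difference (seconds) as the lag
    that maximises the cross-correlation of the two ear signals.
    Positive result => the left-ear signal arrives later
    (i.e. the source is towards the right)."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)
    return lag / fs

# Example: left ear receives a copy of the signal delayed by 10 samples
fs = 48000
sig = np.random.default_rng(0).standard_normal(1024)
left = np.concatenate([np.zeros(10), sig])
right = np.concatenate([sig, np.zeros(10)])
print(estimate_itd(left, right, fs))  # ≈ 0.000208 s (the 10-sample delay)
```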


Sounds in Space 2017 Pictures

Well, we had a lovely time at Sounds in Space again, this year.  Thanks to all who contributed to the day (guests, presenters, poster peeps and generous sponsors!).  I’ll be uploading and sharing presentations/videos from the day, soon, but in the meantime, here are some pictures (some of the 360 pics are better viewed via the link below).

Google Photos Album of Sounds in Space 2017

Sounds in Space 2017 Live Streams

Tomorrow we’re holding our annual Sounds in Space Research Symposium. This year we’ve decided to stream the entire event on both YouTube and Facebook using binaural audio (well, Ambisonics-to-binaural, as it happens). The event runs from 9.30am (GMT+1) until around 5.00pm. If you’re interested in watching, here are the links:

YouTube Link
Facebook Link

It’s looking like it’ll be a great day, so do tune in 🙂

Details on Facebook 360 Ambisonics Mapping from Angelo Farina

EDIT: You can download a JS effect (for Reaper) that does the conversion from ambiX to TBE, and another that goes from TBE to 2nd order, 2D, Furse-Malham format, here (I’ve included my remapping JS effect too, so you can also go from Furse-Malham to TBE format by converting to ambiX first 🙂):

Update: September 2017

NEW Version of the Tools: WigWare AmbiX, FuMa and TBE Tools Sept. 2017 Update

Facebook have updated TBE to remove the ‘R’ component from channel 4. This is important because, before it was removed, it was impossible to go from TBE to 1st order Ambisonics (for YouTube etc.). The new details are excellently discussed on Angelo’s webpage, which can be found at

TBE(1) =  0.488603 * Ambix(0); W
TBE(2) = -0.488603 * Ambix(1); Y
TBE(3) =  0.488603 * Ambix(3); X
TBE(4) =  0.488603 * Ambix(2); Z
TBE(5) = -0.630783 * Ambix(8); U
TBE(6) = -0.630783 * Ambix(4); V
TBE(7) = -0.630783 * Ambix(5); T
TBE(8) =  0.630783 * Ambix(7); S
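As a sketch, the updated mapping can be applied to a 9-channel, 2nd-order ambiX frame like this (plain Python, coefficients exactly as listed above; ambiX channels are zero-indexed ACN here):

```python
# ambiX (ACN/SN3D, 2nd order, 9 channels, zero-indexed) -> TBE (8 channels)
# using the updated (post-V2.2) Facebook coefficients listed above.
def ambix_to_tbe(a):
    assert len(a) == 9
    return [
         0.488603 * a[0],  # TBE 1: W
        -0.488603 * a[1],  # TBE 2: Y
         0.488603 * a[3],  # TBE 3: X
         0.488603 * a[2],  # TBE 4: Z
        -0.630783 * a[8],  # TBE 5: U
        -0.630783 * a[4],  # TBE 6: V
        -0.630783 * a[5],  # TBE 7: T
         0.630783 * a[7],  # TBE 8: S
    ]
```

Note that a[6] (the ‘R’ component) is simply dropped, which is exactly why the new format can now be taken back to 1st order cleanly.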

Note that going from TBE to horizontal 2nd order FuMa is unchanged (useful for driving my Irregular Ambisonics Decoders 😉).

A good timeline and discussion of the issues, and why Facebook changed the encoding, can be found here.

Older Information, pre V2.2 of Facebook Spatial Workstation.

WigWare AmbiX, FuMa and TBE Tools

Angelo Farina has published an excellent article detailing how the nine 2nd-order Ambisonic components map to the 8 channels of the Facebook360 TBE format (they decided to pander to the 8-channel limit of Pro Tools 🙁 ). All the details can be found at Angelo’s website:

The important bit, for my own notes, is (with added Furse-Malham mapping):

TBE(1) =  0.486968 * Ambix(1)  (FuMa W)
TBE(2) = -0.486968 * Ambix(2)  (FuMa Y)
TBE(3) =  0.486968 * Ambix(4)  (FuMa X)
TBE(4) =  0.344747 * Ambix(3)  
        + 0.445656 * Ambix(7)  (FuMa Z+R)
TBE(5) = -0.630957 * Ambix(9)  (FuMa U)
TBE(6) = -0.630957 * Ambix(5)  (FuMa V)
TBE(7) = -0.630957 * Ambix(6)  (FuMa T)
TBE(8) =  0.630957 * Ambix(8)  (FuMa S)

And, to go from TBE to 2nd order, 2D, Furse-Malham format (as mentioned by Ed, in the comments below):

W =  1.446968601 * TBE(1)
X =  2.047502048 * TBE(3)
Y = -2.047502048 * TBE(2)
U = -1.839587932 * TBE(5)
V = -1.839587932 * TBE(6)
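These five lines translate directly into code. A sketch (coefficients exactly as above; TBE channels are zero-indexed here):

```python
# TBE (8 channels, zero-indexed here) -> horizontal 2nd-order FuMa
# components (W, X, Y, U, V), per the coefficients listed above.
def tbe_to_fuma_2d(tbe):
    assert len(tbe) == 8
    w =  1.446968601 * tbe[0]
    x =  2.047502048 * tbe[2]
    y = -2.047502048 * tbe[1]
    u = -1.839587932 * tbe[4]
    v = -1.839587932 * tbe[5]
    return [w, x, y, u, v]
```

Channels 4, 7 and 8 of TBE (Z, T and S) play no part in the horizontal-only decode.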

TBE is Facebook360 Two Big Ears format
ambiX is the ambiX format used by YouTube Spatial Media (ACN channel order and SN3D normalisation)
FuMa is the Furse-Malham channel ordering and normalisation scheme.

See for further details on channel ordering and normalisation schemes.

A polar plot of TBE Channel 4 (the combination of the Z and R channels in FuMa speak) can be seen above (click for higher res image).

