I’ve been using Angelo Farina’s excellent Aurora tools with my students for a number of years now to help with Log Sine Sweep measurements, but they often struggle to get the modules working with Audacity. So, here are the instructions I gave them this year to help with that. I’ve shared them here as they’ll likely be useful for anyone who needs to install the modules!
Download the latest version of the Aurora Plugins for Audacity, noting the version number of Audacity they are for:
Once you’ve downloaded, extract the files. If you’re using a Mac, the download is the whole Audacity package with Aurora included – just unzip it, drag it into Applications and jump to point 5 to get the modules enabled!
Let’s look at the PC version – your unzipped folder should look something like the figure below:
Now, install the corresponding version of Audacity (versions older than the latest can be found at https://www.fosshub.com/Audacity-old.html ). I’m going to put it in a non-default folder to help keep track of versions!
Now, we need to overwrite files in the Audacity install folder with the Aurora + ASIO ones we downloaded and unzipped earlier:
You should see a warning like the one below, saying that Windows needs to overwrite files – let it!
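If you’d rather script that copy step (handy if you’re juggling several Audacity versions), something like this rough Python sketch does the same job as the Explorer drag-and-drop. The function name and example paths are my own illustration, not part of the Aurora download:

```python
import shutil
from pathlib import Path

def overlay(src_dir: Path, dest_dir: Path) -> int:
    """Copy every file under src_dir into dest_dir, overwriting any
    file that already exists there. Returns the number of files copied."""
    copied = 0
    for src in src_dir.rglob("*"):
        if src.is_file():
            dest = dest_dir / src.relative_to(src_dir)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dest)   # copy2 keeps timestamps too
            copied += 1
    return copied

# e.g. overlay(Path("aurora-unzipped"), Path(r"C:\Audacity-2.x"))
```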
Now run Audacity as usual. We need to enable the Aurora modules/plugins so we can use them. Find the Generate Menu and select Add/Remove Plugins:
On a PC, you should see an Aurora plugin at the top (there are others as well!). Click ‘Select All’ and then ‘Enable’ to turn them all on. If you’re using a Mac, you’ll probably see all your VST plugins in here too – I’ll leave it up to you whether you enable them all. Click ‘Path’ to sort by path, which groups the entries more sensibly and lets you enable just the things you want!
Above is the view on a PC. Below is the view on a Mac after I’ve clicked ‘Path’ and found the Aurora modules:
Now, if you go back to the Generate menu, you’ll see the top entry is Aurora Sine Sweep Generator – this means it’s all working 🙂
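For the curious, the sweep the generator produces is Farina’s exponential (log) sine sweep. A minimal NumPy sketch of that formulation – the frequency range, length and sample rate here are just example values of mine, not Aurora’s defaults:

```python
import numpy as np

def exp_sine_sweep(f1=20.0, f2=20000.0, duration=10.0, fs=48000):
    """Exponential (log) sine sweep after Farina.

    f1, f2   -- start and end frequencies in Hz
    duration -- sweep length in seconds
    fs       -- sample rate in Hz
    """
    t = np.arange(int(duration * fs)) / fs
    R = np.log(f2 / f1)   # log of the frequency ratio
    # Instantaneous frequency rises exponentially from f1 to f2
    return np.sin(2 * np.pi * f1 * duration / R * (np.exp(t * R / duration) - 1))

sweep = exp_sine_sweep()
```

Played back through the system under test, the recording is then deconvolved with the sweep’s inverse filter to recover the impulse response – which is what the Aurora modules handle for you.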
My work and that of a colleague (Dr Adam Hill) is being featured as an impact case study for the upcoming Research Excellence Framework 2021 (REF2021). My work was previously featured as a case study in the last REF, too, as it happens (in REF2014). Here’s a video of a presentation we made to the rest of the University to let them know what we get up to. Further details can also be found at https://www.derby.ac.uk/research/showcase/audio-engineering-research/
This year’s Sounds in Space was our most successful yet! We ran a two-day event with some really excellent demonstrations and talks. The videos that were streamed live are now available on YouTube, with links to each talk and the corresponding slides on the Sounds in Space 2018 page. Attached to this post are also some photos from the day!
In the presentation, I demonstrated the performance of the algorithm by looking at inter-aural time and level differences between the ears of a centrally seated listener at different orders (1st to 35th), and a number of people asked if they could get access to the animations I presented at the event. You can find them linked on a simple webpage below. If the GIFs don’t sync, just navigate away from the page and back again. Once the GIFs have all been cached by your browser (the first time you load the page), opening it again should start them all at the same time!
Note, these probably won’t play as well on mobile devices (too many GIFs!)
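If you want to experiment with these measures yourself, here’s a rough broadband sketch of how inter-aural time and level differences can be estimated from a binaural pair. This is my own illustration, not the analysis code behind the animations (which looked at specific Ambisonic orders, and such analysis is usually done per frequency band):

```python
import numpy as np

def itd_ild(left, right, fs):
    """Broadband ITD and ILD estimates from a binaural signal pair.

    ITD comes from the peak of the cross-correlation (in seconds;
    positive means the left channel is delayed relative to the right).
    ILD is the RMS level ratio between the ears, in dB.
    """
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)   # lag in samples
    itd = lag / fs

    rms = lambda x: np.sqrt(np.mean(np.asarray(x, dtype=float) ** 2))
    ild = 20 * np.log10(rms(left) / rms(right))
    return itd, ild
```

A decoder that localises well should show the ITD and ILD of a virtual source converging on those of a real source as the order increases.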
It’s been a little while since my last Ambisonics on YouTube post, so I thought I’d share a filter I’ve made to help make YouTube Ambisonics content sound better! As you may have noticed, the audio that comes off YouTube once your spatial (Ambisonic) audio has been uploaded is quite coloured compared to the original. This is due to the Head Related Transfer Functions used in the modelling of the system. If the HRTFs exactly modelled your own hearing system, we wouldn’t notice it, but as they won’t, you will!
In order to equalise the system, the same EQ curve just needs applying to all the Ambisonic channels equally before uploading to YouTube. So, first we need to find the average response of the system. There are a few possible methods, but the simple approach is to pan an impulse around the listener, storing the frequency response each time. Sum these responses and average them in some way (I used an RMS-type approach) to obtain an ‘average’ response of the system. I then invert this response (adding delay, as it’s non-minimum phase) and decompose the filter into its minimum-phase-only response for the EQ (as that’s all we’re really interested in).
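The steps above can be sketched in NumPy. This is a rough illustration of the idea rather than the exact code I used – the array shapes, the regularisation constant and the cepstral route to the minimum-phase filter are all my assumptions:

```python
import numpy as np

def average_inverse_eq(responses):
    """Build an inverse EQ from a set of magnitude responses.

    responses -- array of shape (n_directions, n_bins): one positive-
                 frequency magnitude response per pan direction
    Returns a real, minimum-phase FIR whose magnitude response is the
    (regularised) inverse of the RMS average across directions.
    """
    mags = np.abs(responses)
    avg = np.sqrt(np.mean(mags ** 2, axis=0))    # RMS average over directions
    inv = 1.0 / np.maximum(avg, 1e-6)            # regularised inversion

    # Minimum-phase reconstruction via the real cepstrum: mirror to a
    # full spectrum, fold negative quefrencies onto positive ones.
    full = np.concatenate([inv, inv[-2:0:-1]])   # Hermitian magnitude
    cep = np.fft.ifft(np.log(full)).real
    n = len(full)
    fold = np.zeros(n)
    fold[0] = cep[0]
    fold[1:n // 2] = 2 * cep[1:n // 2]
    fold[n // 2] = cep[n // 2]
    return np.fft.ifft(np.exp(np.fft.fft(fold))).real   # min-phase FIR taps
```

The resulting FIR has exactly the inverse magnitude of the averaged response, with all of that response packed into minimum phase – so the same taps can be applied to every Ambisonic channel without disturbing the inter-channel relationships.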
It’s always bugged me that the VU meters in Reaper are so small, which is a particular problem if you’re working with large numbers of channels (common when using Higher Order Ambisonics!). So, I’ve knocked up a flexible multi-channel meter that can be made as big as you like, so it should be useful for testing and monitoring when setting up etc.
The scaling is flexible (you can specify the minimum dB value to show), and so is the time window used for both the meter and the peak hold (which is held individually per channel). I’ve commented the code, so if you don’t like the colour scheme etc. it should be a doddle to alter it yourself! The file can be downloaded below:
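The plugin itself is the JSFX file below, but the metering logic is simple enough to sketch in Python if you’re curious – the function name, parameters and state layout here are illustrative, not the plugin’s actual variables:

```python
import numpy as np

FLOOR_DB = -60.0   # minimum level shown (configurable, as in the meter)

def block_levels(block, held_peaks, hold_blocks=25):
    """RMS level per channel for one block of audio, with peak hold.

    block      -- array of shape (n_channels, n_samples)
    held_peaks -- per-channel [peak_db, age] state, updated in place;
                  a peak is held for hold_blocks calls, then released
    Returns (levels_db, peaks_db), clamped to FLOOR_DB.
    """
    rms = np.sqrt(np.mean(block ** 2, axis=1))
    levels = 20 * np.log10(np.maximum(rms, 10 ** (FLOOR_DB / 20)))
    peaks = []
    for ch, lvl in enumerate(levels):
        peak_db, age = held_peaks[ch]
        if lvl >= peak_db or age >= hold_blocks:
            held_peaks[ch] = [lvl, 0]    # new peak (or hold expired)
        else:
            held_peaks[ch][1] += 1       # keep the held peak, age it
        peaks.append(held_peaks[ch][0])
    return levels, np.array(peaks)
```

Each GUI refresh would call this with the latest block and draw one bar per channel, with the held peak as a marker above the bar.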
I’ve just realised that the plug-ins on the site weren’t the versions I had fixed for Mavericks. I fixed them almost as soon as Mavericks was released so my students could continue using them, so apologies for not sharing! I’ve replaced all the Mac versions on the WigWare page with the updated graphical versions (the DSP code worked fine; it was the gfx that had issues).
If anyone has any problems with these, please let me know!
The surround-sound Rosetta performance by Sigma 7 (at Derby Theatre, 7.30pm, 7th June) will be streamed live with binaural audio (wear headphones for 3D audio) at http://sigma7rosetta.co.uk/ . Multi-channel videos will also be available after the show, and there’ll be a Sound on Sound article about the event in the future too! If you can’t make the event, the stream will be the next best thing!
The Sounds in Space research symposium is really coming together. The excellent keynote from Chris Pike on new experiences in broadcast sound, the various Auro 3D, Ambisonics, TiMax and 5.1 demonstrations, along with bone-conduction headsets and multi-channel guitars, all point to what promises to be an excellent, and FREE, event. There are still places available; details on how to sign up are on the Sounds in Space webpage (http://tinyurl.com/SinS2014).
John Crossley, programme leader for the MA in Music Production (at the University of Derby), is putting together a not-to-be-missed audio-visual experience during the BIG SHOW on June 7th at Derby Theatre.
‘Rosetta’ is an original music suite inspired by the European Space Agency’s ‘Comet Chaser’ satellite as it attempts to meet up with and investigate the 4.6-billion-year-old comet Churyumov-Gerasimenko. Presented in 16-channel ‘Super-Surround’, the music will be performed live by Sigma 7 and will include live instruments, voices and electronics. The audience will have a totally immersive aural and visual experience. There will also be an opportunity before the show to meet some of the team and to find out about the technology involved in putting together a show of this complexity. You’re invited to lose yourself in sound and space!
Supported by the European Space Agency and funded by the Arts Council.
This is the third show of its type, and he’s using 16 channels to achieve the surround effect (via TiMax). It will be a great show, and you can’t grumble at the price! More details and info when I get them – follow @johncrossley01 on Twitter for updates 🙂