Revisiting the 2011 “Bridge Burning” Foo Fighters Teaser with Howler.js and Web Audio API

On the 11th Anniversary of Wasting Light

Lee Martin
Bits and Pieces

--

Foo Fighters “FF” logo

Since Taylor’s passing, I’ve been thinking about the Foo Fighters a lot and revisiting some of our projects from the past. In celebration of the 11th anniversary of the Foo Fighters’ album Wasting Light, I decided to redevelop our original “Bridge Burning” teaser, which we launched on January 20th, 2011.

This was a few months before the actual record came out and the first chance fans were going to get to hear new music since 2007’s Echoes, Silence, Patience & Grace. I was provided a short clip of audio from the first track of the album, titled “Bridge Burning.” This track has a very distinctive stereo channel build incorporating all members of the band which crescendos into Dave’s first lyric: “These are my famous last words!” As soon as I heard it, I pictured an audiovisual experience that connected the left and right audio channels to each of the F’s in the Foo Fighters “FF” logo. This allowed the audio clip itself to reveal the “FF” logo in all of its glory. It was a huge hit with fans and covered on many major outlets, including Rolling Stone.

Originally, this was developed using a SoundCloud streamed audio track, the SoundManager2 JS player, and a pair of transparent PNG F’s. For this new version, I decided to use some self-hosted audio, Howler.js, the Web Audio API, and a single SVG “FF” logo. Check out the CodePen and read on to understand how this new teaser functions.

Building the Teaser

The “Bridge Burning” teaser

As I mentioned, this new version of the teaser uses Howler.js alongside the Web Audio API. The clip is first loaded as a new Howl. Then we use Web Audio to split the audio into its left and right channels and analyze the volume level of each. These left and right volume levels then power the opacity of the left and right F’s which make up the <svg> FF logo.

Let’s begin by initializing our Howl and an array to store the L/R volume.

let channels = [0.0, 0.0]
let frame

const teaser = new Howl({
  src: ['bridge-burning.mp3'],
  onload: () => {
    // Loaded
  },
  onplay: () => {
    // Played
  },
  onend: () => {
    // Ended
  }
})

Now, in the onload event, we’ll use the Web Audio AudioContext() provided by Howler to initialize a channel splitter using createChannelSplitter. We’ll then connect the masterGain (also provided by Howler) to that splitter. Then we’ll initialize a pair of analysers, one connected to each split channel, and a matching pair of Float32Arrays to store the analyzed volume readings.

// Initialize channel splitter
let splitter = Howler.ctx.createChannelSplitter(2)
// Connect master gain to splitter
Howler.masterGain.connect(splitter)
// Initialize left channel analyser
const leftAnalyser = Howler.ctx.createAnalyser()
// Connect splitter to left channel analyser
splitter.connect(leftAnalyser, 0, 0)
// Initialize left channel data
const leftData = new Float32Array(leftAnalyser.fftSize)
// Initialize right channel analyser
const rightAnalyser = Howler.ctx.createAnalyser()
// Connect splitter to right channel analyser
splitter.connect(rightAnalyser, 1, 0)
// Initialize right channel data
const rightData = new Float32Array(rightAnalyser.fftSize)

Now, let’s write a little analyze method which will be called on every available browser animation frame. This method will get the volume level data from each channel using getFloatTimeDomainData and pass it to our awaiting Float32Arrays. Then, we’ll calculate the root mean square of these levels to arrive at an averaged 0.0–1.0 left and right volume.

const analyze = () => {
  // Request animation frame
  frame = requestAnimationFrame(analyze)
  // Get left channel data
  leftAnalyser.getFloatTimeDomainData(leftData)
  // Get right channel data
  rightAnalyser.getFloatTimeDomainData(rightData)
  // Process left volume
  let leftVolume = rootMeanSquare(leftData)
  // Process right volume
  let rightVolume = rootMeanSquare(rightData)
  // Update channel volumes
  channels = [leftVolume, rightVolume]
}
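Raw per-frame RMS readings can flicker. If the opacity changes feel too jumpy, one option (not part of the original teaser) is to blend each new reading with the previous one using a simple exponential moving average. The helper and factor below are assumptions to tune by eye:

```javascript
// Blend the previous reading toward the new one each frame.
// A factor closer to 1.0 is smoother but laggier; 0.8 is a guess.
const smooth = (previous, next, factor = 0.8) =>
  previous * factor + next * (1 - factor)

// Inside analyze(), instead of assigning the raw values:
// channels = [
//   smooth(channels[0], leftVolume),
//   smooth(channels[1], rightVolume)
// ]
```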

Shout out to James Fisher for this sweet tutorial on measuring audio in Web Audio API which led to the rootMeanSquare method.

const rootMeanSquare = (data) => {
  let sumSquares = 0.0

  for (let amplitude of data) {
    sumSquares += amplitude * amplitude
  }

  return Math.sqrt(sumSquares / data.length)
}
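As a quick sanity check of what rootMeanSquare returns (repeating the function here so the snippet runs on its own): silence yields 0, a full-scale square wave yields 1, and a full-scale sine wave lands around 0.707.

```javascript
const rootMeanSquare = (data) => {
  let sumSquares = 0.0
  for (let amplitude of data) {
    sumSquares += amplitude * amplitude
  }
  return Math.sqrt(sumSquares / data.length)
}

const silence = new Float32Array(1024)            // all zeros
const square = new Float32Array(1024).fill(1.0)   // full scale
const sine = Float32Array.from({ length: 1024 },
  (_, i) => Math.sin((2 * Math.PI * 8 * i) / 1024)) // 8 full cycles

console.log(rootMeanSquare(silence)) // 0
console.log(rootMeanSquare(square))  // 1
console.log(rootMeanSquare(sine))    // ≈ 0.707
```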

I then created a pair of helper methods that would begin and end analysis.

const startAnalyzing = () => {
  // Start analyzing
  frame = requestAnimationFrame(analyze)
}

const stopAnalyzing = () => {
  // Stop analyzing
  cancelAnimationFrame(frame)

  // Reset channels
  channels = [0.0, 0.0]
}

I can then call startAnalyzing in the Howler onplay event and stopAnalyzing in the Howler onend event. As soon as the teaser clip begins playing, we’ll have a self-populating channels array which we can use to drive the visual.
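Putting the lifecycle together, the Howl skeleton from earlier can be filled in along these lines. This is only a sketch: it assumes Howler.js is loaded and that the splitter/analyser setup above has been wrapped in a hypothetical setupAnalysers() helper.

```javascript
// Sketch: wiring the analysis helpers into the Howl lifecycle.
// Howl, setupAnalysers, startAnalyzing, and stopAnalyzing are
// assumed to exist in the surrounding scope.
const createTeaser = () => new Howl({
  src: ['bridge-burning.mp3'],
  onload: () => setupAnalysers(),
  onplay: () => startAnalyzing(),
  onend: () => stopAnalyzing()
})
```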

// Play teaser
teaser.play()

The visual itself is an SVG with two paths (one for each F), and I’m using a computed style (via Vue’s :style binding) to drive the dynamic opacity using the channels array. Here’s an abstract version of that code.

<svg>
<path d="..." fill="white" :style="{ opacity: channels[0] }" />
<path d="..." fill="white" :style="{ opacity: channels[1] }" />
</svg>
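One practical note: RMS values for real music rarely approach 1.0, so the raw readings can leave the logo fairly dim. A small helper (my addition, not in the original teaser) can scale and clamp the volume before it’s used as an opacity; the gain of 2.5 is an assumption to tune for your clip.

```javascript
// Scale a raw RMS reading into a usable 0.0–1.0 opacity.
const toOpacity = (rms, gain = 2.5) => Math.min(1, rms * gain)

toOpacity(0.2) // 0.5
toOpacity(0.6) // 1 (clamped)
```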

For more info on how this all comes together in a single Vue.js component and how I used modern CSS to make the UI more responsive, check out the CodePen itself. Happy 11th anniversary to Wasting Light! Subscribe or follow me on Twitter for more nostalgia hacking for Foo Fighters and other bands from the past.

--

Netmaker. Playing the Internet in your favorite band for two decades. Previously Silva Artist Management, SoundCloud, and Songkick.