
Temporalists vs Frequentists: A Duality of Domains

Lately, there has been an uptick in fellow audio professionals — let's call them temporalists — insisting that the only valid way to align a main loudspeaker to a subwoofer is through the time domain, albeit with a modern twist.

They all but claim that any use of the frequency domain is fundamentally flawed — and that everyone else is essentially doing it wrong.

In principle, I agree — but only when the on‑screen data is mistakenly treated as gospel, which it shouldn't be.

But this rigid mindset overlooks a simple truth: both domains, when used correctly, can lead to the same alignment result. The real issue isn't the method — it's the data quality.


"True mastery lies not in the purity of Temporalism or the orthodoxy of Frequentism,
but in the balanced understanding that time and frequency are one
— a duality to be harmonized, not divided."


In this article, I'll show that the domain debate is a distraction from what truly matters: acquiring actionable data — and knowing what to do when it isn't.

When I started out, I was certainly guilty of being frequency‑domain‑centric, a frequentist, if you will. But over the past few years, I have gravitated toward the middle and now see myself more as a "Grey Jedi" — neither fully light nor dark.

Today, I use both approaches, recognizing that neither the time domain nor the frequency domain should be treated as dogma. Each has its strengths — the key is knowing when and how to use them, and how much confidence you can place in the results, given the fundamental limits imposed by the uncertainty principle.

Uncertainty principle

When using FFT (Fast Fourier Transform) to analyze a signal, there's a fundamental limit to how precisely you can know both when something happens and at what frequency it occurs. This is because time and frequency are linked in a reciprocal way — the more precisely you measure one, the less precisely you can measure the other.

This is analogous to a principle in physics called the Heisenberg uncertainty principle, which states that you cannot know both a particle’s exact position and its exact momentum at the same time. (Momentum is like velocity, but scaled by mass.) In signal analysis, time and frequency behave like position and momentum.

FFT analysis isolates consecutive, finite slices of audio, each with a spectrum whose frequency resolution is inversely proportional to the slice duration. For example, a 1‑millisecond‑long slice has a frequency resolution of 1 kHz (1000 Hz), whereas a 10‑millisecond‑long slice has a resolution of 100 Hz. The duration of this audio slice is also known as the time record [1].
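This reciprocal relationship is easy to verify: the frequency resolution of a time record is simply its reciprocal. A minimal Python sketch, using the durations quoted above:

```python
def freq_resolution_hz(time_record_s: float) -> float:
    """Frequency resolution (FFT bin spacing) for a given time record:
    df = 1 / T."""
    return 1.0 / time_record_s

print(freq_resolution_hz(0.001))  # 1 ms slice   -> 1000.0 Hz
print(freq_resolution_hz(0.010))  # 10 ms slice  ->  100.0 Hz
print(freq_resolution_hz(0.640))  # 640 ms slice -> 1.5625 Hz
```

The last value is the roughly 1.5 Hz resolution in the subwoofer range discussed below.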

Modern analyzers typically divide the audible range into multiple bands, each with an optimized time record — ranging from as short as 5 ms above 10 kHz (equivalent FFT Size 256 @ 48 kHz) to as long as 640 ms below 160 Hz (equivalent FFT Size 32K @ 48 kHz) — to produce a quasi-logarithmic scale with approximately 48 points per octave.

So in the subwoofer range, we have about 1.5 Hz of frequency resolution for roughly every 640 milliseconds of audio — a time interval during which a lot can happen, such as floor bounces or slapback echoes off a rear wall in venues with a depth of around 50 meters (approximately 300 ms roundtrip) or less.

And while energy arriving outside a time record — as far as phase and coherence are concerned — is rejected and discarded as noise, the frequency domain remains effectively time‑blind and cannot distinguish between multiple events occurring within a single given time record, whether caused by room interaction or destructive interference from other loudspeakers.

In transfer‑function analysis, we ultimately compare two spectra and can do so with sample‑accurate time precision — including aligning main loudspeaker to subwoofer — as long as there's negligible room interaction or destructive interference from other loudspeakers. Otherwise, the data is at risk of becoming unactionable. More on this later.

"Have your cake and eat it (too)?"

Where the frequency domain is time-blind, the time domain is frequency-blind. In the time domain, the impulse response is the counterpart of the transfer function.

While the impulse response offers the advantage of showing how events unfold over time and of distinguishing the first‑arrival direct sound from trailing reflections, it can't be used effectively to align systems unless they respond to the same frequencies — a condition that isn't met when analyzing signals within distinct passbands, such as a main loudspeaker and a subwoofer, where there's little to no frequency overlap.

Figure 1: Narrowband wavelet vs. broadband impulse response
Figure 2: Band limiting is commutative

As a workaround, one can band‑limit the broadband impulse response, effectively reducing it to a narrowband signal [2] — typically one‑third‑octave wide — composed of the crossover frequencies common to both passbands, thereby turning the impulse response into a wavelet‑like response.

Wavelet analysis offers a flexible compromise to the fundamental limit imposed by the uncertainty principle, allowing for localized analysis in frequency — as well as in time.

The carrier wave, beneath the wavelet's amplitude envelope, conveys phase information, whereas the envelope itself carries information about signal arrival time.

While a single wavelet response reveals the arrival time for one frequency band, a full wavelet transform provides this information across the entire spectrum.
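To make the carrier/envelope distinction concrete, here is a minimal Python sketch of a Morlet‑like wavelet (all values are illustrative, not taken from any particular analyzer): a sine carrier under a Gaussian envelope, where the envelope peak marks the arrival time.

```python
import math

def wavelet(t, fc=1000.0, t0=0.005, cycles=5.0):
    """Morlet-like wavelet: a cosine carrier at fc under a Gaussian
    envelope centered at arrival time t0 (all values illustrative)."""
    sigma = cycles / (2.0 * math.pi * fc)              # envelope width
    envelope = math.exp(-0.5 * ((t - t0) / sigma) ** 2)  # arrival time info
    carrier = math.cos(2.0 * math.pi * fc * (t - t0))    # phase info
    return envelope * carrier

# Sample at 48 kHz and find where the magnitude peaks: the arrival time.
fs = 48000
samples = [abs(wavelet(n / fs)) for n in range(int(0.010 * fs))]
peak_index = max(range(len(samples)), key=samples.__getitem__)
print(peak_index / fs)  # -> 0.005 (the arrival time t0)
```

Shifting t0 moves the envelope (time alignment); shifting only the carrier's argument would change the phase without moving the envelope — precisely the two conditions the table below separates.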




New shiny object?

What follows isn't a comprehensive history, but rather a brief account based on the paper trail I was able to uncover within a limited amount of time. Naturally, developments like these never happen in a vacuum.

Wavelet analysis isn't new. The term wavelet was first popularized by Morlet et al. in a series of papers in the early 1980s, which described their application to geophysical exploration.

In the audio engineering field, according to the AES Library, one of the earliest mentions is by Don Keele in 1995 [3], who proposed the wavelet transform as a method for studying the transient spectral decay of loudspeakers.

The first AES paper I was able to find that also recommends wavelet analysis for — time alignment — as well as other uses, including spectral decay, is from 2007, by Ponteggia and Di Cola [4].

Wavelet analysis remained mostly academic and obscure in the pro‑audio community until 2021, when an article by Pat Brown [5] — originally written in 2019 — was featured on ProSoundWeb and became the most‑read new content [6] on the site that year.

In 2022, Francisco Monteiro added the single wavelet function to his CrossLite+ program — a newcomer in the measurement software space — and its users, rightfully so, began championing wavelet responses for alignment, some with more zeal than others.

Since then, it's become clear that existing functionalities in other software could be repurposed for wavelet analysis — and yield the same answers.

Be that as it may, whether one chooses the frequency or time domain, when it comes to (spatial) crossover alignment, the goal remains the same: to time- and phase‑align the direct sound — specifically, the first arrivals.

Achieving this requires that the conditions outlined in the table below are met.


          Time Domain                   Frequency Domain
  Time    Aligned amplitude envelopes   Aligned group delays
  Phase   Aligned carrier waves         Aligned phase traces



So what does that look like in both domains, assuming no room interaction or destructive interference from other loudspeakers?

Examples

The following examples focus on 1 kHz crossovers, but the same principles apply equally to subwoofers — without loss of generality.

Figure 3: 4th-order Linkwitz–Riley in XO Study V1.3 by Merlijn van Veen

Figure 3 shows a fourth-order Linkwitz–Riley crossover. All plots on the left represent the time domain, while those on the right represent the frequency domain. All plots were generated using my free Crossover Study calculator [7].

The frequency span highlighted in magenta represents the range where both passbands have shared custody, and alignment is essential — before one begins to dominate the other by 10 dB or more, at which point alignment becomes inconsequential. Note that all conditions outlined in the table are satisfied.

Group delay [8] is defined as the first derivative of phase with respect to frequency. Therefore, by definition, matching phase slopes indicate matching group delays.

In terms of time alignment, for a given frequency band, matching group delays in the frequency domain correspond to matched envelope delays of the associated wavelets in the time domain. For example, at the crossover frequency, the group delay of a third‑octave‑wide band corresponds directly to the envelope delay of its associated wavelet in the time domain.
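The correspondence can be sketched numerically. Assuming a pure delay of τ seconds, whose phase is φ(f) = −2π·f·τ, a central-difference estimate of the group delay recovers τ at the crossover frequency (the 10 ms value is illustrative):

```python
import math

def group_delay_s(phase_fn, f_hz, df=1.0):
    """Group delay = -(1/2π) · dφ/df, estimated by central difference."""
    dphi = phase_fn(f_hz + df) - phase_fn(f_hz - df)
    return -dphi / (2.0 * math.pi * 2.0 * df)

tau = 0.010                                   # 10 ms pure delay (illustrative)
phase = lambda f: -2.0 * math.pi * f * tau    # phase of a pure delay
print(group_delay_s(phase, 1000.0))           # -> 0.010 s at a 1 kHz crossover
```

When both passbands report the same group delay at the crossover frequency, their wavelet envelopes in the time domain peak at the same moment.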

Note in Figure 3 that, in the time domain, the envelope peaks of both passbands lag by the same amount — as indicated by their matching group delays in the frequency domain.

Meanwhile, in terms of phase alignment, efficiency throughout the crossover region is determined by whether the phase traces in the frequency domain and the carrier waves in the time domain are properly aligned. Note that the Linkwitz‑Riley topology yields 6 dB of efficiency.

That said, not every crossover is — meant — to achieve both phase as well as time alignment at the same time.

Figure 4: 3rd-order Butterworth in XO Study V1.3 by Merlijn van Veen

Figure 4 shows a third‑order Butterworth crossover. In terms of time alignment, all conditions are once again satisfied.

However, with respect to phase alignment — unlike the previous example — odd‑order crossovers are inherently 90° out of phase and consequently yield only 3 dB of efficiency. Any attempt to increase efficiency by improving phase alignment comes at the expense of time smearing.

Odd‑order crossovers appeal to some because they are “idiot‑proof” — there is no need to worry about polarity. The crossover always remains 90° out of phase, regardless of polarity reversals.
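Both efficiency figures, and the polarity insensitivity, follow directly from summing two equal-level branches as complex phasors. A small sketch (the 270° case stands in for reversing the polarity of one branch of the odd-order crossover):

```python
import cmath
import math

def summation_gain_db(phase_offset_deg):
    """dB gain of two equal-level branches summing with a given phase
    offset, relative to either branch alone: 20·log10(|1 + e^(jθ)|)."""
    s = abs(1 + cmath.exp(1j * math.radians(phase_offset_deg)))
    return 20.0 * math.log10(s)

print(summation_gain_db(0.0))    # in phase (Linkwitz-Riley)  -> ~6.02 dB
print(summation_gain_db(90.0))   # 90° apart (3rd-order BW)   -> ~3.01 dB
print(summation_gain_db(270.0))  # one branch flipped         -> ~3.01 dB
```

The 90° and 270° cases give identical results, which is why polarity reversals cannot break (or improve) an odd-order crossover.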

Figure 5: Overshoot in group delay

From what I can tell, the third‑order Butterworth remains a popular choice for dividing and allocating frequencies between flown and ground‑stacked subwoofers — despite the trade‑off of increased group delay and extended sustain (ringing).

Note the overshoot in group delay for the combined result (in red) in Figure 5 — these are the frequencies that will linger longer.

The real challenge: actionable data acquisition

Actionable data provides clear, unambiguous guidance, enabling the user to make informed decisions about applying signal processing such as delay or polarity reversal.

In the frequency domain — except in the case of purely electronic measurements — the phase response never represents just the direct sound. Within a given time record, the analyzer cannot isolate the loudspeaker from the room or exclude interference from other loudspeakers. As such, the appearance of the phase trace is biased by the relative strength of room and loudspeaker interactions.

Figure 6: Phase angles tend to dominant signals
Figure 7: Simulation of floor bounce only

Figure 6 shows the combination of two sinusoidal components. Note that the resultant phase angle — ultimately what we see on our screens — gravitates towards the dominant signal.

Room influence increases with measurement distance, as direct and reflected sounds approach similar levels — intruding more noticeably on the results and making the phase trace appear more serrated or jagged.

Figure 7 shows a simulation of a simple setup that solely considers the floor bounce. Note how the phase traces become increasingly jagged with greater distance — indicating that the level of reflected sound is approaching that of the direct sound.

This effect is most notable when phase opposition occurs — that is, when the floor bounce lags by odd half‑cycles (i.e., 0.5, 1.5, 2.5, etc.) — and the bounce jerks the phase trace in proportion to the level offset between the direct and reflected sound.
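The pull of the dominant signal can be sketched with a two-phasor sum: a unit direct sound plus a quieter, lagging reflection. The level offsets and lags below are illustrative, not taken from the figures:

```python
import cmath
import math

def combined_phase_deg(reflection_db, lag_cycles):
    """Phase of direct sound (unit amplitude, 0°) plus a reflection that
    is reflection_db quieter and lags by lag_cycles periods."""
    r = 10 ** (reflection_db / 20.0)
    total = 1 + r * cmath.exp(-1j * 2 * math.pi * lag_cycles)
    return math.degrees(cmath.phase(total))

# A weak reflection barely moves the resultant phase ...
print(combined_phase_deg(-20.0, 0.25))  # -> about -5.7 degrees
# ... a near-equal reflection close to opposition jerks it hard.
print(combined_phase_deg(-1.0, 0.45))   # -> about -61 degrees
```

As the reflection approaches the direct sound in level, small changes in frequency (and thus in lag) swing the resultant phase wildly, which is exactly the serration seen at distance.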

Consider spending some time with my free floor bounce calculator [9] to gain a better understanding.

Figure 8: Actual room (not a simulation)

The same setup in an actual room — with six boundaries instead of just one — shows even more aberrations with increasing distance (Figure 8). At this point, the data becomes increasingly less actionable. So now what? Where do we go from here?


Now what?

Before we continue, allow me to state this in no uncertain terms: nobody disputes that the room acts as an extension of the sound system.

And on‑screen data should never be treated as gospel. It's like radiology — without context — the image represents only part of the story. It takes years of experience to tell the difference between pretender data and contender data.

To paraphrase Jamie Anderson:


"Calling the software an 'analyzer' is a misnomer
— the operator is the analyzer."

— Jamie Anderson —
President and founding member of Rational Acoustics



Apparent changes in phase responses (compared to the loudspeaker’s anechoic response) can lead audio professionals to mistakenly believe that the loudspeaker’s intrinsic phase response has changed.

When on‑screen data is treated as gospel, it sets the stage for aligning subwoofers to reflections or other artifacts, rather than to the direct sound.

However, in the interest of aligning the direct sound — that is, the first arrivals — it's essential to realize:


"A loudspeaker's — intrinsic — phase response,
within its intended coverage area, does not change
unless the — user — invokes phase shift."

(This typically involves the use of filters.)


Where “loudspeaker” refers to a finished, market‑ready product — not to be confused with a bare transducer without an enclosure.

Furthermore, when on‑screen data is treated as gospel, it indicates a failure to self‑check. After all:


"If you do not know the answer before you start to measure,
how do you know you are getting a good measurement?"

— Ivan Beaver —
Chief Engineer at Danley Sound Labs



Where do we go from here?

Let's explore our options:

  • In the frequency domain one can:

    • Mentally dismiss the false phase wraps and take a calculated leap of faith
      — reasoning through what the phase is meant to be.

      This is where having a quasi‑anechoic reference becomes invaluable
      — which is exactly why we created Tracebook [10].

    • Attempt to window out late‑arriving energy in post, at the expense of reduced frequency resolution.

    • Pre‑align in the near field, where quasi‑anechoic data is still available.
      Then, in the field, use a laser rangefinder or a proxy loudspeaker for
      distance referencing — as suggested in the relative / absolute method [11].

  • In the time domain one can:

    • Use the single wavelet response



Figure 9: Third‑octave‑wide wavelet responses in Smaart

Many applications allow you to measure wavelet responses, including (in alphabetical order, but not limited to): CrossLite+, Room EQ Wizard (REW), and Smaart.

An exhaustive explanation of how to align with wavelets is beyond the scope of this article. However, here is a technical note [12] to get you started.
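As a taste of the idea, here is a minimal Python sketch (not the workflow from the technical note): two synthetic wavelets with hypothetical arrival times, where the offset between their envelope peaks is the delay needed to line them up.

```python
import math

def make_wavelet(fs, fc, t0, cycles, dur):
    """Synthetic Morlet-like wavelet response arriving at t0 seconds."""
    sigma = cycles / (2 * math.pi * fc)
    return [math.exp(-0.5 * ((n / fs - t0) / sigma) ** 2)
            * math.cos(2 * math.pi * fc * (n / fs - t0))
            for n in range(int(dur * fs))]

def envelope_peak_time(response, fs):
    """Arrival time taken as the peak of the rectified response
    (a crude envelope estimate; analytic envelopes are smoother)."""
    peak = max(range(len(response)), key=lambda n: abs(response[n]))
    return peak / fs

fs = 48000
main = make_wavelet(fs, 100.0, t0=0.020, cycles=5, dur=0.1)  # hypothetical
sub  = make_wavelet(fs, 100.0, t0=0.032, cycles=5, dur=0.1)  # arrival times
delay = envelope_peak_time(sub, fs) - envelope_peak_time(main, fs)
print(delay)  # delay to add to the main so both envelopes line up
```

With the envelopes aligned, the remaining carrier mismatch (if any) is resolved with phase adjustment or polarity, per the table earlier in this article.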

Conclusion

I'm positive this article will only embolden the temporalists in their crusade to convince everyone that the time domain is the only way. They'll likely claim they always had it right.

But true mastery lies in leveraging all available data points from both domains — grounded in a deep understanding of how that data comes into existence and the measurement principles behind each.

Embrace the balance. Become a Grey Jedi — and let's meet in the middle.


References

1. Know your time records
by Merlijn van Veen
https://www.merlijnvanveen.nl/en/study-hall/178-know-your-time-records

2. A group of frequencies
by Merlijn van Veen
https://www.merlijnvanveen.nl/en/study-hall/195-a-group-of-frequencies

3. Time‑Frequency Display of Electro‑Acoustic Data Using Cycle‑Octave Wavelet Transforms
by D. B. (Don) Keele, Jr., in 1995

4. Time‑Frequency Characterization of Loudspeaker Responses Using Wavelet Analysis
by Daniele Ponteggia & Mario Di Cola, in 2007
https://aes2.org/publications/elibrary-page/?id=14262

5. What is the Difference Between Delay, Phase Shift, and Polarity?
by Pat Brown, in 2019
https://www.prosoundtraining.com/2019/07/26/signal-aligning-using-wavelets/

6. #1: The Wonder Of Wavelets: A Tool To Clarify The Difference Between Delay, Phase Shift & Polarity
by Pat Brown, in 2021
https://www.prosoundweb.com/1-the-wonder-of-wavelets-a-tool-to-clarify-the-difference-between-delay-phase-shift-polarity/

7. Crossover Study calculator
by Merlijn van Veen
https://www.merlijnvanveen.nl/en/calculators/198-crossover-study

8. Group delay 101
by Merlijn van Veen
https://www.merlijnvanveen.nl/en/nl/studiezaal/165-group-delay-101

9. Floor bounce calculator
by Merlijn van Veen
https://www.merlijnvanveen.nl/en/calculators/65-floor-bounce

10. Tracebook
Actionable Field Data
https://trace-book.org/

11. Subwoofer Alignment: The foolproof relative / absolute method
by Merlijn van Veen
https://www.merlijnvanveen.nl/en/nl/studiezaal/166-subwoofer-alignment-the-foolproof-relative-absolute-method

12. Tech note #7 – « Time is on your side »
by Thierry de Coninck, in 2023
https://www.fmscience.com.br/en/biblioteca.php