Measuring the speed of sound

This is the first article where I will focus on physics. Or rather on the methods used in experimental physics.

The aim of physics is to learn something about the world we live in. But how do we do that? Everybody knows that we have to do experiments. You do an experiment, you look at the data and then you learn something. Sounds straightforward, but — as always — the devil is in the details.

In this article I want to use the simple example of measuring the speed of sound to show you how to turn a set of data into knowledge, using free software.

Also, I needed something to do with my new TDC.

So, we want to measure the speed of sound. How do we go about that?

The most straightforward way, and the first thing that probably comes to mind, is the direct approach: You put a microphone some distance away from a sound source and then measure the distance x between them and the time t the sound takes to reach the microphone.

Then the speed of sound would be just

$c_s = x / t$.

Unfortunately it’s not that easy.

The advantage of the direct approach is that you only need to measure two variables. The downside is that it’s quite hard to measure these variables.

Let’s first take a look at the distance.

As hinted at in the diagram, neither the source nor the microphone is a point. They are extended objects, so it’s not obvious where you should put your measuring tape. You want to measure the distance the sound travels, but you can only measure the distance between the two objects. This leads to an error which, in the worst case, can get as large as the size of the objects.

So what we measure is not the actual distance, but the distance plus some unknown offsets that stem from the physical size of the source and microphone:

$x \rightarrow \tilde{x} = x + \Delta x_{src} + \Delta x_{mic}$.

You can get around this by making the distance between source and microphone much larger than the size of the two objects. Then this error becomes negligible. So, for example, if your source and microphone are a couple of cm in size, you’d need a few meters of distance between them to reach an accuracy in the range of a few percent.
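To get a feel for the numbers, here is a quick back-of-the-envelope calculation. The offsets and the distance below are made-up example values, not measurements:

```python
# Worst-case relative error caused by the unknown size offsets.
# All numbers are illustrative assumptions.
offset_src = 0.03   # possible offset due to the source's size, in m
offset_mic = 0.02   # possible offset due to the microphone's size, in m
distance = 2.0      # measured source-microphone distance, in m

worst_case_error = (offset_src + offset_mic) / distance
print(f"worst-case relative error: {worst_case_error:.1%}")  # 2.5%
```

Shrink the offsets or grow the distance and the relative error drops accordingly.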

Depending on your source and microphone this can be hard (especially if you want to do all this inside, on the top of your desk), but it might be possible.

The time measurement leads to the same problem.

How do you start the clock exactly at the moment the sound source creates the sound? And how do you stop the clock exactly at the moment the sound hits the microphone? Even if you have an electronic sound source and start the clock the moment you give it the signal to produce some sound, there will still be some delay between the signal and the actual production of the sound.

The same applies to the microphone side of our experiment: There will be a delay between the sound hitting the microphone and the clock stopping, which leads to an offset between the actual travel time t and the measured time:

$t \rightarrow \tilde{t} = t + \Delta t_{src} + \Delta t_{mic}$.

This, again, can be rectified by making the travel time much larger than the timing offsets, i.e. by increasing the distance between source and microphone. But this is not an option if your sound source is not electronic, like, for example, clapping your hands.

So what we actually measure is

$c_s \approx (x + \Delta x_{src} + \Delta x_{mic}) / (t + \Delta t_{src} + \Delta t_{mic})$.
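A quick numerical sketch shows how badly the offsets can bias the direct estimate. All values here are assumptions chosen for illustration:

```python
c_s = 343.0        # assumed true speed of sound, m/s
x = 2.0            # true travel distance, m
t = x / c_s        # true travel time, s

dx_src, dx_mic = 0.03, 0.02    # size offsets, m (assumed)
dt_src, dt_mic = 2e-4, 1e-4    # delay offsets, s (assumed)

# What we actually compute from the measured quantities:
c_s_measured = (x + dx_src + dx_mic) / (t + dt_src + dt_mic)
print(f"true: {c_s:.1f} m/s, biased estimate: {c_s_measured:.1f} m/s")
```

Even these modest offsets pull the result several percent away from the true value.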

Not ideal if you want to make an accurate measurement, is it?

But there is a way around most of these uncertainties. We just have to modify the experimental set-up to include a second microphone:

If we put a second microphone (A) between the source and the first microphone (B), we can measure the distance and the time between the two microphones instead of between the source and the microphone, and calculate the speed of sound from these two quantities.

What’s the difference? Why is this better?

Well, before we had to time two completely different events: the creation of the sound and the sensing of that sound by the microphone. Both events come with unknown and different(!) offsets that lead to errors in the measurement of the actual travel time of the sound wave.

Now we time two (almost) identical events: Microphone A sensing the sound and microphone B sensing the sound. These events still have two offsets, but they are (almost) identical now, which means they will (almost) cancel out!

Not clear how this works out? Just imagine the chain of events:

• The source produces some sound
• The sound travels from the source to mic A
• Some small delay later the clock starts running, while the sound is on its way to mic B.
• The sound hits mic B
• A small (almost) identical delay later the clock stops

The clock gets started a bit later than it should, but it also gets stopped (almost) the same amount later than it should, so the measured time is (almost) exactly what we want to measure despite the unknown(!) errors.
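The same chain of events in numbers. The speed, distance and trigger delays below are assumptions for illustration:

```python
c_s = 343.0        # assumed speed of sound, m/s
x = 1.0            # distance between mic A and mic B, m
t = x / c_s        # true travel time from A to B, s

delay_a = 150e-6   # trigger delay of mic A (starts the clock late), s
delay_b = 152e-6   # trigger delay of mic B (stops the clock late), s

# The clock starts delay_a too late and stops delay_b too late,
# so only the difference of the delays survives:
t_measured = t + delay_b - delay_a

error_us = (t_measured - t) * 1e6
print(f"timing error: {error_us:.1f} us, not the full ~150 us delay")
```

A 150 µs delay per microphone collapses to a 2 µs error, because only the mismatch between the two (almost) identical delays remains.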

The same applies for the distance measurement. If we use two identical microphones and just choose the same point on both of them to measure the distance, the two offsets (almost) cancel out.

Why all the ugly “(almost)”s?

Even if you use identical microphones, identical amplifiers and identical electronics to start and stop the clock, the delays will still be slightly different and the membrane in the microphones will not be at exactly the same place.

So what we measure now is

$x \rightarrow \tilde{x} = x + \Delta x_{micA} - \Delta x_{micB} = x + \delta x$

and

$t \rightarrow \tilde{t} = t + \Delta t_{micA} - \Delta t_{micB} = t + \delta t$.

This is much better than before, but we can still do better than that.

How does the time we measure depend on the distance we measure?

$\begin{array}{rl} \tilde{t} & = t + \delta t = 1/c_s \cdot x + \delta t = 1/c_s \cdot (\tilde{x} - \delta x) + \delta t \\ \ & = 1/c_s \cdot \tilde{x} - 1/c_s \cdot \delta x + \delta t = 1/c_s \cdot \tilde{x} + \Delta t_{tot} \end{array}$

with

$\Delta t_{tot} = \delta t - \delta x / c_s$

This is interesting. Even though we cannot measure the actual travelled distance and time, we still have a linear relation between our measured variables and all the unknown offsets are in the constant. So if we measure the slope, we can directly measure the speed of sound, without knowing (or caring about) the offsets!

And that’s exactly what we’ll do.
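Here is a minimal sketch of that idea with synthetic data. The speed of sound and the offsets are assumptions used to generate fake measurements, and `numpy.polyfit` stands in for whatever fitting tool you prefer (e.g. gnuplot’s `fit`):

```python
import numpy as np

# Generate fake measurements t~ = (x~ - dx) / c_s + dt with assumed values.
c_s_true = 343.0      # m/s (assumption)
delta_x = 0.01        # unknown distance offset, m (assumption)
delta_t = 5e-5        # unknown time offset, s (assumption)

x_measured = np.array([0.5, 1.0, 1.5, 2.0, 2.5])           # m
t_measured = (x_measured - delta_x) / c_s_true + delta_t   # s

# Fit t~ = m * x~ + n: the offsets only shift the intercept n,
# so the slope m directly gives 1 / c_s.
m, n = np.polyfit(x_measured, t_measured, 1)
print(f"fitted speed of sound: {1/m:.1f} m/s")  # recovers 343.0
```

The unknown offsets end up entirely in the intercept n, and the slope returns the assumed speed of sound exactly.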

1. Brett_cgb said:

Interesting analysis, but I’ve got a question. (I’m using MS Excel.)

Q) My initial data plot and linear trendline exactly match yours. My trendline equation provides m=28.984 and n=-78.300 (same as you within the precision you provided). It doesn’t provide the +/-error for m or n. How was the +/- error calculated?

2. Brett_cgb said:

Ummm… It seems I’ve opened a can of worms, based on what I’ve been able to find.
The answer to that question appears appropriate for a week’s worth of university statistics classes.

• Ast said:

Yeah… I don’t know where you can get this number in Excel, but gnuplot just gives us those numbers together with the fit parameters.

Behind the curtain the calculation is indeed a bit complicated: it involves the (estimated) Jacobian, which gets turned into a variance-covariance matrix of the parameters. The square roots of the diagonal entries of that matrix are then used as estimates for the uncertainties of the parameters.
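A sketch of how that looks in scipy, with made-up example data: `curve_fit` returns the estimated covariance matrix alongside the best-fit parameters, and the square roots of its diagonal are the per-parameter uncertainties that gnuplot also prints.

```python
import numpy as np
from scipy.optimize import curve_fit

def line(x, m, n):
    return m * x + n

# Made-up example data, roughly following y = 29 * x.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([28.9, 58.1, 86.8, 116.2, 144.7])

popt, pcov = curve_fit(line, x, y)    # best-fit parameters + covariance
perr = np.sqrt(np.diag(pcov))         # 1-sigma parameter uncertainties
print(f"m = {popt[0]:.3f} +/- {perr[0]:.3f}")
print(f"n = {popt[1]:.3f} +/- {perr[1]:.3f}")
```

The same two lines at the end are all it takes to get the +/- numbers once you have the covariance matrix.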

If Excel does not offer a way to get these numbers in an easy way, I suggest you switch to another program for your data analysis.

And another tip: If you want to know how this stuff works, you should take a look at the documentation of programs which have realized this functionality, e.g. gnuplot, scipy or any other open source scientific library. Usually at least a bit of theoretical background is explained there.

3. Brett_cgb said:

> you should take a look into the documentations of programs which have realized
> this functionality, e.g. gnuplot, scipy or ….

That’s what inspired the “can of worms” comment…. B)