Measuring the speed of sound

Here’s my experimental set-up (sorry for the crappy picture):

[Image: the experimental set-up]

In the upper left you can see my TDC (the stopwatch) connected to a small amplifier circuit, used to amplify the sound signal to a level where the TDC can reliably detect the start and stop signals.

[Image: the amplifier circuit]

I used the multimeter to the left of the TDC to fine-tune the amplification.

My microphones are actually a pair of earphones. I’m just using them backwards. ;)

And the two brown sheets of MDF you see between the earphones and the TDC are my sound source. Clapping them together at the far left corner of the table proved to be loud and reproducible enough for this endeavour.

You might notice that the first microphone is not placed all that precisely at the beginning of the measuring tape. Also, the second microphone is held in the air by a helping hand, which adds a possible offset to the distance measurement. But this does not matter, as long as these offsets are the same for all measurements! So the sloppy placement does not matter as long as I don’t touch it until all the measurements are done, and the size of the helping hand does not matter as long as I don’t move its joints.

I haven’t named them yet, but so far we have only talked about systematic errors. Those are errors that stay the same throughout your whole experiment and always “pull” your measurement away from the true value in the same way. And as you saw, getting rid of them can be a bit tricky.

But there is another type of error, which is actually quite easy to get a grip on: statistical errors. Statistical errors stem from the fact that nothing ever goes exactly as planned. There is always some sort of fluctuation, some sort of randomness in an experiment. Be it quantum fluctuations in your electronics, unpredictable gusts of wind that randomly alter the air pressure in the room, or simply the fact that it’s impossible to clap exactly the same way twice in a row. If you do two identical measurements, you will get two different values.

How do we know which one is the right one, or even which one is better than the other? In general, we don’t. But the good thing about statistical errors, and the reason they are so easy to get a grip on, is that they are random. This means that sometimes they will push your result in one direction and other times pull it in the other, cancelling each other out if you take the average over a lot of experiments.

And that’s just what you do. You take as many measurements as you need (or can be bothered to do) and take the average of the results as your “real” result.
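Written as a formula, this is nothing fancier than the arithmetic mean: with N repeated measurements t_1, …, t_N of the same quantity, the value you quote is

\bar{t} = \frac{1}{N} \sum_{i=1}^{N} t_{i}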

To that end, I took 10 measurements (clap, write down time, reset timer), then I advanced the helping hand by 10 cm. Rinse and repeat.

This is my data (text file).

How can we visualize this data?

The best way, in my opinion, to quickly visualize some data is gnuplot. It’s very easy. Just start up gnuplot and tell it to plot the data file:

plot "sound.txt"

This should give you something like this:

[Plot: the raw data in gnuplot]

Doesn’t look too bad, does it? The time definitely increases with the distance and one could even say that the relationship looks quite linear. Now we just need to fit a linear function to the data to get the speed of sound from the slope. And we can do that with gnuplot as well:

f(x) = m*x + n                  # linear model: slope m, intercept n
fit f(x) "sound.txt" via m,n    # least-squares fit of m and n to the data

This gives us the best-fit parameters m and n as well as an estimate of their accuracy (called “estimated standard error” by gnuplot):

\begin{array}{rcl} m & = & (29.0 \pm 2.1) \ {\mu}\mathrm{s/cm} \\ n & = & (-78.3 \pm 83.4) \ {\mu}\mathrm{s} \end{array}

Actually gnuplot only gives us the numbers since it doesn’t know about the units and it doesn’t care, but for us units are important. Never forget the units when quoting values!

Now to check whether gnuplot actually managed to produce a decent fit, we should also plot the fit on top of the data. (Sometimes a fit fails or yields obviously wrong parameters. In that case you have to set start values for the parameters and play around with them a bit until it works; a little sketch of that follows below.)
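In case a fit does go wrong, a minimal sketch of setting start values could look like this (the numbers here are purely illustrative guesses, picked to be in the right ballpark of the raw plot):

m = 30    # rough slope guess in microseconds per cm (illustrative only)
n = 1     # nonzero starting guess for the intercept in microseconds (illustrative only)
fit f(x) "sound.txt" via m,n

Here the fit converged without any of that, so on to the plot: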

set xlabel "distance x / [cm]"
set ylabel "time t / [us]"
plot [0:70] "sound.txt", f(x)

[Plot: the data together with the fitted line]

That looks quite convincing. So what’s the speed of sound and (equally important) how accurately have we measured it?

The first one is easy:

c_{S} = 1/m = 345 \ \mathrm{m/s}
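Written out with the units (remember that 1 cm/µs is 10^4 m/s), the inversion goes like this:

c_{S} = \frac{1}{m} = \frac{1}{29.0 \ \mu\mathrm{s/cm}} \approx 0.0345 \ \mathrm{cm}/\mu\mathrm{s} = 0.0345 \cdot 10^{4} \ \mathrm{m/s} \approx 345 \ \mathrm{m/s}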

And the second one isn’t that hard either. But we need a little bit of error propagation theory for this.

I’ll probably do a more in-depth post about this later on, but for now you’ll just have to believe me that if you take the inverse of a number, the relative error stays the same, so we get:

\begin{array}{rl} \sigma_{c_{S}} / c_{S} & = \sigma_{m} / m = 7.2 \% \\ \sigma_{c_{S}} & = 7.2 \% \cdot c_{S} = 25 \ \mathrm{m/s} \end{array}

Here the sigmas indicate the standard error or standard deviation of the respective variable, and yes, those are the same sigmas you heard about when people were talking about the Higgs boson.
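For the curious, here is the one-line derivation behind the “inverse keeps the relative error” claim (first-order error propagation, valid as long as the error is small compared to the value):

c_{S} = 1/m \quad \Rightarrow \quad \sigma_{c_{S}} = \left| \frac{\mathrm{d} c_{S}}{\mathrm{d} m} \right| \sigma_{m} = \frac{\sigma_{m}}{m^{2}} \quad \Rightarrow \quad \frac{\sigma_{c_{S}}}{c_{S}} = \frac{\sigma_{m}/m^{2}}{1/m} = \frac{\sigma_{m}}{m}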

Anyway, now we’ve got the speed of sound and an error estimate:

c_{S} = (345 \pm 25) \ \mathrm{m/s}

Not too bad. But a 7 % error is a bit much for my taste. And thankfully, we can still improve our accuracy by actually using some statistics on our statistical errors.

5 comments
  1. Brett_cgb said:

    Interesting analysis, but I’ve got a question. (I’m using MS Excel.)

    Q) My initial data plot and linear trendline exactly match yours. My trendline equation provides m=28.984 and n=-78.300 (same as you within the precision you provided). It doesn’t provide the +/-error for m or n. How was the +/- error calculated?

  2. Brett_cgb said:

    Ummm… It seems I’ve opened a can of worms, based on what I’ve been able to find.
    The answer to that question appears appropriate for a week’s worth of university statistics classes.

    • Ast said:

      Yeah… I don’t know where you can get this number in Excel, but gnuplot just gives us those numbers together with the fit parameters.

      Behind the curtain the calculation is indeed a bit complicated and involves the (estimated) Jacobian, which then gets turned into a variance-covariance matrix of the parameters. The square roots of the diagonal entries of that matrix are then used as estimates for the uncertainties of the parameters.
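      Schematically, for an unweighted fit with N data points and n parameters, the covariance matrix is roughly C = (J^T J)^{-1} * WSSR/(N - n), with J the Jacobian from above and WSSR the sum of squared residuals, and the error quoted for parameter i is then sqrt(C_ii). Take that as a sketch, though; the gnuplot documentation has the details.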

      If Excel does not offer a way to get these numbers in an easy way, I suggest you switch to another program for your data analysis. ;)

      And another tip: If you want to know how this stuff works, you should take a look into the documentations of programs which have realised this functionality, e.g. gnuplot, scipy or any other open source scientific library. Usually there is at least a bit of theoretical background explained.

  3. Brett_cgb said:

    > you should take a look into the documentations of programs which have realized
    > this functionality, e.g. gnuplot, scipy or ….

    That’s what inspired the “can of worms” comment…. B)
