Here’s my experimental set-up (sorry for the crappy picture):
In the upper left you can see my TDC (the stop watch) connected to a small amplifier circuit, used to amplify the sound signal to a level where the TDC can reproducibly notice the start and stop signals.
I used the multimeter left of the TDC to fine tune the amplification.
My microphones are actually a pair of earphones. I’m just using them backwards. 😉
And the two brown sheets of MDF you see between the earphones and the TDC are my sound source. Clapping them together at the far left corner of the table proved to be loud and reproducible enough for this endeavour.
You might notice that the first microphone is not placed all that precisely at the beginning of the measuring tape. Also, the second microphone is being held in the air by a helping hand, which adds a possible offset to the distance measurement. But this does not matter, as long as these offsets are the same for all measurements! So the sloppy placement is fine as long as I don’t touch anything until all the measurements are done, and the size of the helping hand is irrelevant as long as I don’t move its joints.
I didn’t say it explicitly, but so far we have only talked about systematic errors. Those are errors that stay the same throughout your whole experiment and always “pull” your measurement away from the true value in the same way. And as you saw, getting rid of them can be a bit tricky.
But there is another type of error which is actually quite easy to get a grip on: statistical errors. Statistical errors stem from the fact that nothing ever goes exactly as planned. There is always some fluctuation, some randomness in an experiment. Be it quantum fluctuations in your electronics, unpredictable gusts of wind that randomly alter the air pressure in the room, or simply the fact that it’s impossible to clap exactly the same way twice in a row. If you do two identical measurements, you will get two different values.
How do we know which one is the right one, or even which one is better than the other? In general, we don’t. But the good thing about statistical errors, and the reason they are so easy to get a grip on, is that they are random. This means that sometimes they will push your result in this direction and another time they will pull it in that direction, cancelling each other out if you take the average over a lot of experiments.
And that’s just what you do. You take as many measurements as you need (or can be bothered to do) and take the average of the results as your “real” result.
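As a quick sketch of that averaging step in Python (the stop-watch readings here are purely made-up numbers for illustration, not my actual data):

```python
# Ten hypothetical stop-watch readings in microseconds for one fixed
# distance (made-up values, just to illustrate the averaging).
measurements = [1470, 1485, 1478, 1490, 1465, 1481, 1473, 1488, 1476, 1483]

# Random fluctuations pull individual readings up and down,
# but they largely cancel in the average.
average = sum(measurements) / len(measurements)
print(average)  # 1478.9
```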
To that end, I took 10 measurements (clap, write down the time, reset the timer), then advanced the helping hand by 10 cm. Rinse and repeat.
This is my data (text file).
How can we visualize this data?
The best way, in my opinion, to quickly visualize some data is gnuplot. It’s very easy. Just start up gnuplot and tell it to plot the data file:

plot "sound.txt"
This should give you something like this:
Doesn’t look too bad, does it? The time definitely increases with the distance and one could even say that the relationship looks quite linear. Now we just need to fit a linear function to the data to get the speed of sound from the slope. And we can do that with gnuplot as well:
f(x) = m*x + n
fit f(x) "sound.txt" via m,n
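If you’d rather stay scriptable, the same straight-line least-squares fit (including something analogous to gnuplot’s “estimated standard error” of the slope) can be sketched in plain Python. The data points below are made up for illustration; the real ones live in sound.txt:

```python
# Hypothetical (distance / cm, time / µs) pairs standing in for sound.txt.
xs = [0, 10, 20, 30, 40, 50, 60]
ts = [1.0, 292.3, 583.4, 874.1, 1166.9, 1458.0, 1750.2]

N = len(xs)
x_mean = sum(xs) / N
t_mean = sum(ts) / N

# Least-squares fit of t = m*x + n.
sxx = sum((x - x_mean) ** 2 for x in xs)
sxt = sum((x - x_mean) * (t - t_mean) for x, t in zip(xs, ts))
m = sxt / sxx                 # slope in µs/cm
n = t_mean - m * x_mean       # intercept in µs

# The residual variance yields the standard error of the slope,
# analogous to gnuplot's "estimated standard error".
s2 = sum((t - (m * x + n)) ** 2 for x, t in zip(xs, ts)) / (N - 2)
sigma_m = (s2 / sxx) ** 0.5

print(f"m = {m:.2f} +/- {sigma_m:.2f} µs/cm")
```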
This gives us the best fit parameters m and n as well as an estimate to their accuracy (called “estimated standard error” by gnuplot):
Actually, gnuplot only gives us the numbers, since it neither knows nor cares about the units. But for us, units are important. Never forget the units when quoting values!
Now, to check whether gnuplot actually managed to produce a decent fit, we should also plot the fit on top of the data. (Sometimes a fit might fail or yield obviously wrong parameters. In that case, you have to set start values for the parameters and play around with them a bit until it works.)
set xlabel "distance x / [cm]"
set ylabel "time t / [us]"
plot [0:70] "sound.txt", f(x)
That looks quite convincing. So what’s the speed of sound and (equally important) how accurately have we measured it?
The first one is easy:
And the second one isn’t that hard either. But we need a little bit of error propagation theory for this.
I’ll probably do a more in-depth post about this later on, but for now you’ll just have to believe me that if you take the inverse of a number, its relative error stays the same, so we get:
Here the sigmas are used to indicate the standard error or standard deviation of the variable and yes, those are the same sigmas you heard about when they were talking about the Higgs boson.
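As a worked example in Python (the slope and its error below are assumed numbers for illustration, not my actual fit output), taking the inverse and converting the units looks like this:

```python
m = 29.2        # assumed slope in µs/cm (example value, not the real fit)
sigma_m = 2.0   # assumed standard error of the slope, also in µs/cm

# c = 1/m gives cm/µs; 1 cm/µs = 1e4 m/s, so convert to m/s.
c = 1.0 / m * 1e4

# Taking the inverse leaves the relative error unchanged:
#   sigma_c / c = sigma_m / m
sigma_c = c * (sigma_m / m)

print(f"c = {c:.0f} ± {sigma_c:.0f} m/s")  # roughly 342 ± 23 m/s
```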
Anyway, now we got the speed of sound and an error estimate:
Not too bad. But a 7% error is a bit much for my taste. And thankfully, we can still improve our accuracy by actually doing some statistics on our statistical errors.