Urania

A blog named for the muse of astronomy, containing musings by an astronomer

The Difference between “Cooking” Data and Purging Bad Data

Posted on March 06, 2008 by Juan

There is a great article online at Scientific American’s website investigating the claim that Arthur Eddington and Frank Dyson might have “cooked” the data from their solar eclipse observations in 1919 in a way that supported Einstein’s (then new) General Theory of Relativity:

On May 29, 1919, two British expeditions, positioned on opposite sides of the planet, aimed telescopes at the sun during a total eclipse. Their mission: to test a radical theory of gravity dreamed up by a former patent clerk, who predicted that passing starlight should bend toward the sun. Their results, announced that November, vaulted Albert Einstein into the public consciousness and confirmed one of the most spectacular experimental successes in the history of science.

In recent decades, however, some science historians have argued that astronomer Sir Arthur Eddington, the junior member of the 1919 expedition, believed so strongly in Einstein’s theory of general relativity that he discounted data that clashed with it. [From Fact or Fiction: Did Researchers Cook Data from the First Test of General Relativity?]

The nice thing is that this article illustrates one of the less well-appreciated challenges facing the working scientist: distinguishing between bad data and data that conflicts with your theoretical expectations. Bad data, like other things in life, just happens. And when it happens, it can be a pain to deal with. How do you know when the data are “bad” (that is, the result of a problem at the telescope or a glitch in your software) versus when the data simply conflict with your theory? In the first case, getting rid of the data makes sense. However, being over-eager to reject conflicting data may lead you to dismiss a perfectly valid alternative interpretation of your observations. Furthermore, if your data seem to support a controversial theory, you should be fairly confident your results are not the product of “bad” data. As Carl Sagan said in Cosmos, “Extraordinary claims require extraordinary evidence.” You have to be pretty confident you haven’t made a mistake if your data stray far from what you expect. Knowing the difference between “bad” data and data that support a different theory is the human part of science that I try to teach my students about. It is also the reason peer review is such an incredibly important part of the scientific process.
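To make that distinction a little more concrete, here is a minimal Python sketch. The measurements, quality flags, threshold, and predicted value are all made up for illustration; the point is only to contrast a legitimate purge of bad data (cutting points with instrumental problems recorded independently of any theory) with “cooking” the data (cutting points simply because they disagree with the prediction you favor).

```python
import numpy as np

# Hypothetical deflection measurements (in arcseconds) with quality flags.
# A flag of True means the measurement had a known instrumental problem
# (say, a defocused plate), noted independently of any theory.
measurements = np.array([1.98, 1.61, 0.93, 1.74, 1.55])
instrument_flag = np.array([False, False, True, False, False])

predicted_deflection = 1.75  # the value the theory under test predicts

# Legitimate purge: drop points with documented instrumental problems.
clean = measurements[~instrument_flag]

# "Cooking": drop points merely because they stray from the prediction.
cooked = measurements[np.abs(measurements - predicted_deflection) < 0.3]

print("mean of all points:          ", measurements.mean())
print("mean after instrument cut:   ", clean.mean())
print("mean after theory-based cut: ", cooked.mean())
```

The difference is that the first cut is justified by something you know about the measurement itself, while the second is justified only by the answer you were hoping to get.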

By the way, the verdict of the article’s author is that Eddington and Dyson did the right thing. It turns out it was actually Dyson, who was initially inclined against Einstein’s theory, who made the decision to toss the bad data out. The final results, when published[1], supported Einstein’s General Theory of Relativity, which still stands as the best-supported model of gravitation to this day.

Linknotes:

  1. Dyson, F. W.; Eddington, A. S.; Davidson, C. (1920). “A Determination of the Deflection of Light by the Sun’s Gravitational Field, from Observations Made at the Total Eclipse of May 29, 1919.” Philosophical Transactions of the Royal Society of London, Series A, 220, 291.
