Tag Archives: Bad Science

Theranos Under Investigation

It’s been reported that Theranos is now under several federal investigations, this time for securities violations. It’s possible that the company could end up getting shut down. None of this is very surprising to anyone who has followed this story. The CEO, Elizabeth Holmes, was featured in some glowing mainstream press coverage last year, but questions about the company started arising soon after. Theranos claimed to have a method for running blood tests on a much smaller sample than traditional tests require, but this has been called into question. It’s now widely thought that the claims were wildly overstated, if not outright false.

I would also mention that the CNBC clip shown is worse than useless. The anchors seem to take most of the company’s responses at face value, which is never a good idea in this kind of scandal.

Anti-Vaccine Movie at the Tribeca Film Festival

Apparently the Tribeca Film Festival will feature a documentary on the disgraced anti-vaccine doctor Andrew Wakefield. Unfortunately, the film probably does not examine the history and facts of the vaccines & autism controversy. Instead, it’s likely a positive portrayal of Wakefield and similar doctors. Wakefield’s original paper asserting that vaccines increase the risk of autism was retracted several years ago after various ethical and methodological errors (including outright fraud) came to light, yet much of the anti-vaccine movement still doesn’t seem to have noticed. His supporters now mostly come across as crackpots ranting about how his groundbreaking research is being suppressed by the establishment. (If you have ever encountered physics crackpots, this kind of thinking is one of the telltale signs that they have no interest in actually learning anything and only want to pontificate about their pet “theories.”) Even among people who stop talking about autism, there remains significant fear that vaccines are overwhelming children’s immune systems, among other concerns. That fear is (1) ludicrous and (2) backwards: modern vaccine schedules actually expose children to less immune challenge than the older schedules with fewer vaccines did.

The film festival has already responded, saying that its film choices are meant to foster “dialogue and discussion.” That makes sense when there is a valid controversy. Here there isn’t: no link between vaccines and autism is known, so there is basically one side doing research and finding no problem, and another side that simply asserts the data are wrong. As with the evolution/creation controversy, there is no academic controversy here. Worse, even if the vaccine opponents were right, it is almost certain that vaccines would still do far more good than harm.

In Which a Psychologist Completely Misunderstands Physics

About a week ago, the Times published an op-ed responding to that recent Science paper claiming that many psychology papers overstate their evidence. The op-ed was written by Lisa Barrett, a psychology professor at Northeastern. Her main argument is that failure to replicate results is one of the ways science discovers new things, so psychology papers failing to replicate is not a problem.

While I think there is some truth there, I am not convinced by this argument. In fact, I find the counterexample of subatomic physics failing to replicate Newton’s laws to be incredibly disingenuous. The point the psychology paper was making is that attempts to replicate a result by performing what is more or less the exact same study failed to obtain the same result far more often than would be expected from the experimental uncertainties. One of the nice things about physics is that laboratory conditions can be controlled well enough to actually perform (almost) identical experiments in (almost) identical conditions. Newton’s laws can be pretty easily verified by almost anyone. You can set up a set of springs and pulleys and measure oscillation frequencies, or see how much weight it takes to lift some object attached to a string. There will be small deviations from air resistance, friction, etc., but in the end you will still verify Newton’s laws up to uncertainties from reasonably well-known experimental conditions.
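As a toy illustration (the spring constant, mass, and timing noise below are all made up), here’s a short simulation of repeatedly measuring the oscillation period of a mass on a spring and comparing it to the Newtonian prediction T = 2π√(m/k):

```python
import math
import random

def measured_period(mass, k, noise=0.02, trials=10):
    """Simulate repeated stopwatch measurements of a mass-spring
    oscillation period, with small Gaussian timing errors."""
    true_period = 2 * math.pi * math.sqrt(mass / k)  # Newtonian prediction
    samples = [true_period + random.gauss(0, noise) for _ in range(trials)]
    mean = sum(samples) / len(samples)
    # sample variance -> standard error of the mean
    var = sum((s - mean) ** 2 for s in samples) / (len(samples) - 1)
    return mean, math.sqrt(var / len(samples))

random.seed(0)
mean, err = measured_period(mass=0.5, k=20.0)  # 0.5 kg on a 20 N/m spring
prediction = 2 * math.pi * math.sqrt(0.5 / 20.0)
print(f"measured: {mean:.3f} +/- {err:.3f} s, predicted: {prediction:.3f} s")
```

Run after run, the measured period lands within a few standard errors of the prediction, which is exactly the sense in which “identical” physics experiments replicate.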

Basic classical physics doesn’t break down when you try to replicate an experiment. Rather, it breaks down when you try to extrapolate physical laws a bit too far. Classical mechanics describes things at macroscopic sizes traveling at speeds much lower than the speed of light. Deviations do exist but are almost always orders of magnitude too small to measure in a realistic experiment. Subatomic physics typically violates both assumptions, (1) macroscopic scale and (2) low velocity, so the equations governing our everyday lives stop working. Furthermore, this example conflates experimental tests of scientific theories with comparisons between different experimental results. If the experiments are truly equivalent, they should obtain the same result regardless of what any theory says. For some measurements there doesn’t even need to be much of a theory at all.

Unfortunately, results in the life sciences and social sciences are often much more difficult to replicate. People and living things aren’t particles; they’re not all identical, interchangeable quanta. Selection effects are a very difficult problem to avoid or control, so I think it is natural not to require the same kind of precision that we can get in something like physics. It may even be true that many of the results in the paper failed to replicate because the populations in the original and new studies were not equivalent. Maybe more care needs to be taken when choosing test subjects in replication studies, but until someone can identify exactly how to do that, it remains a problem. The whole point of scientific experiments is to generate data that can be extrapolated into broader scientific theories. If scientists perform a number of seemingly identical studies and get an equal number of incompatible results, then there isn’t really any way to inform theories other than to say that we don’t yet understand the experimental data.

Even the example of giving a rat a shock suggests that the author fundamentally misunderstands what it means to replicate a study. She mentions that different results are obtained depending on the exact experimental procedure. This is not surprising. If you change an experiment, different things can happen, because you are no longer replicating the original experiment. You are instead testing whether your earlier result still applies in different circumstances. This case is also probably more akin to what happens in psychology studies than any example in physics. There will always be slight differences between studies using human subjects, and if those variations aren’t understood, any result can be rendered almost meaningless. (As far as I know, the people studying rats knew how to control their experimental conditions, so this wasn’t a problem for their measurements.)

As I said at the beginning, there are still some worthwhile points here. Sometimes a failure to replicate really is a sign that something interesting could be happening. In neutrino physics, there is a series of “anomalies” in which different experiments fail to match one another. In dark matter detection, there are conflicting measurements of possible dark matter signals and dark matter exclusion curves. There have even been a number of high-profile blunders in physics. These things happen. New measurements are being done to try to resolve these conflicts, whether they stem from experimental errors or real effects. The big problems that seem to have been identified in psychology are that (1) these follow-up measurements often aren’t being done and (2) results consistently fall on the side of stronger statistical significance than they should. Problem (1) means that errors can’t be easily identified, and problem (2) suggests that published results are both biased and consistently underestimate systematic uncertainties.
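To see why results cluster on the side of stronger significance, here’s a toy simulation (all the numbers are made up) of what a publish-only-if-significant filter does to replication rates: when only the statistically significant studies of a small effect get published, exact replications of those published results succeed far less often than the reported significance would suggest.

```python
import math
import random

def one_study(true_effect, n=30):
    """Run one simulated study: n observations around true_effect,
    return the z-score of the sample mean."""
    data = [random.gauss(true_effect, 1.0) for _ in range(n)]
    mean = sum(data) / n
    return mean / (1.0 / math.sqrt(n))  # z = mean / standard error

random.seed(1)
true_effect = 0.2  # a small but real effect (assumed for illustration)

# Publication filter: only studies with z > 1.96 (p < 0.05) get published.
published = [z for z in (one_study(true_effect) for _ in range(2000)) if z > 1.96]

# Replicate each published result once, under identical conditions.
replicated = sum(one_study(true_effect) > 1.96 for _ in published)
print(f"published: {len(published)}, replications significant: {replicated}")
```

Because publication selects for lucky fluctuations, the replication rate is just the study’s true statistical power, which for a small effect with modest samples is well below 100% even though every published result looked solid on paper.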

Media Figures Supporting Science

Over at Slate, Phil Plait has an article on promoting media figures that support science. In particular, he discusses his decision to retweet a picture highlighting some famous Hollywood actresses who have shown some interest or aptitude in science. The controversy here is over the question of who should be applauded or treated as a role model for their interest or work in science and related fields.

The picture in question highlights five actresses with a variety of connections to science, from inventing new technologies to writing children’s books about math and science. Plait notes that the most controversial choice in the picture is the inclusion of Mayim Bialik, who earned a PhD in neuroscience. It’s not her actual work in earning the PhD that is controversial (as far as I know – I’ve never even taken a neuroscience class), but rather her apparent connections to various fringe groups pushing alternative medicine and anti-vaccination beliefs. Plait decided that Bialik’s work using her celebrity status to popularize science is more important than the negative effects of her support for pseudoscience.

I would actually take the opposite position on this. The purpose of the image seems to be to highlight some celebrities who can also be seen as role models for children (and girls in particular) who might be interested in science. It shows that even cool people like science. However, in the case of Mayim Bialik, her support for pseudoscience and bad medical practices has far outweighed her scientific achievements in the eyes of the public (she has a PhD but doesn’t appear to have done any research since graduating). It’s difficult to hold her up as a role model when she’s setting back public support of science in other fields. While the idea of the image discussed in the article is fine, I think it would have been much better to showcase five people who have done real work (beyond student research/work) in science or popularizing science and who aren’t associated with anti-scientific groups. It really shouldn’t be that difficult to find five such people, although if it is difficult to do so then that might be a more interesting topic to talk about.

Study: Don’t Trust Medical Advice You Get From TV

A study published earlier this week on fact-checking medical claims made on TV has been popping up all over the internet for the past few days. The authors watched a number of episodes of two different medical talk shows, compiled a list of various recommendations, and then tried to find evidence supporting those recommendations. This is a fairly qualitative way to study the question, but it sounds like a reasonable way to fact-check these programs. A well-supported medical claim ought to be easy to find in the literature. Claims that can’t be found in a short search through the literature probably lack enough evidence to support a serious recommendation.

The article makes Dr. Oz’s show look particularly bad (and Dr. Oz’s credibility has already suffered a number of blows this year). Less than half of the 80 recommendations from his show included in the study were found to have any serious supporting evidence. Furthermore, it was rather shocking to see that Dr. Oz’s show mostly gives dietary advice (over 1/3 of recommendations) or recommends “alternative therapies.” This suggests that Dr. Oz is really just telling viewers what they want to hear and not what they need to hear. The other show, The Doctors, wasn’t great either but seemed to have a much more balanced mix of recommendations and also did a far better job at suggesting that people consult their actual doctors rather than just blindly following what they saw on TV.

Having prominent shows with millions of viewers peddling quack medicine is very bad for all science fields and not just medicine. Medicine is probably the closest most people get to actual science, so when unsupported recommendations don’t work they could lead to eroding the public’s trust in science. Furthermore, medical professionals have a duty not to mislead the public. This paper suggests that prominent public figures in medicine are failing at one of their most basic duties.

Creationists at Michigan State

Science reports that a creationist group will hold a workshop tomorrow at Michigan State. Among the various topics of discussion are why evolution is false, why the Big Bang is false, and how evolution leads to Hitler. There are even talks attacking the work of several professors. The conference is also advertising debates between MSU professors and their speakers, even though those professors apparently have no intention of showing up. The conference is sponsored through a student group, although one of the professors notes that the planning seems to all come from an outside group. The dishonest advertising using professors’ names to attract attendees ought to lead to some sort of sanctions against the group. Regardless of actual student involvement, their sponsorship of the event means that the school should be able to hold them accountable for the advertising.

Needless to say, this is quite embarrassing for the Michigan State science community. The setting and student sponsorship are an obvious attempt to lend the creationist group the appearance of legitimacy, using the name of a research university to attack that university’s mission. The school has stated that it won’t try to do anything to shut down the workshop, although the apparent lack of student control might give them a justification for doing so. There are a few courses of action that people can take. They can pack the room with a hostile crowd and then walk out in the middle, leaving an empty room, or stay and grill them with difficult questions. At this point, the beliefs of most prominent creationists are impervious to logic or evidence, so the latter probably won’t work. Another option is to do what the APS does with crackpots: let them give their talks but ignore them so that only a handful of diehard supporters even show up.

2014 Ig Nobel Prizes Awarded

Last week, the 2014 Ig Nobel Prizes were awarded at Harvard.

The physics prize went to a scientist studying the friction of banana peels to see if bananas really are slippery enough to trip us. Obviously, this is important to validate the physics of Mario Kart.

A couple other examples:

The economics prize went to the Italian government for discovering how to inflate its GDP by counting illicit trade like drugs and prostitution.

The medicine prize went to some people testing the efficacy of treating severe nosebleeds by packing the nose with bacon.