Science Losing Credibility As Large Amounts Of Research Shown To Be False



As a user of the medical literature I definitely understand the frustration some of the doctors in the article expressed about literature being untrue. As a recent example, I was trying to figure out what the evidence was for sending someone home on prophylactic anticoagulation after an operation for colorectal cancer - 28 days vs. stopping when they leave the hospital - to prevent deep vein thrombosis. There are society guidelines (colorectal, heme/onc, Chest guidelines, etc.), but if you look at the studies the guidelines are based on, the main one was funded by the company that makes Lovenox, and all the statistics were performed by a person hired by the company rather than by the authors. This is mentioned in a single sentence in the methods section. The reason? Probably the study would never have been done in the first place without the funding. It's that way for lots and lots of medical research, and it makes the conclusions really hard to take as unbiased.

 

The other problem is that even if something has been studied, the studies age and the conclusions may no longer be true. For example, best medical management vs. CEA (carotid endarterectomy) for stroke risk reduction in asymptomatic patients. The medicines and the surgical techniques/anesthesia change over time, so the answer is dynamic and the study has to be redone periodically. Many questions in medicine are similar: the answers change over time even if the question has been "settled" at some point in the past.

 

We have a weekly journal club in surgery where we review recent literature; someone presents, we come to our own conclusions, and then we discuss it. At the end the question is always: will this change your practice? The answer is no like 95% of the time, because many of the studies are narrow, have small sample sizes, are retrospective in nature, etc. People will keep publishing them though, because there is great incentive to do so.

 

At the same time it's the best we have; the alternative is just using your own best judgement based on experience, which more often than not is what people do. The evidence-based medicine thing is really only like 10% of medicine; most stuff has not been studied enough, or there is enough grey area that many different things are acceptable. I think that if people want to question things critically that is good, because not everything done in the name of science is completely benevolent. But each question is separate, with separate evidence to consider; painting with a broad brush against "Research" or "GMO" or "Vaccines" is kind of simplistic thinking in my view.

 

As an aside, I read through the GMO article. #1: It's an open-access journal, meaning the people who published it paid the journal to print it. #2: It's not designed to show whether GMO is carcinogenic, and the authors state as much; they only use 10 rats per arm, while the standard to get statistically significant data is 50, which incidentally is why they don't perform any statistical analysis on the data they present. #3: It's very hard to understand the way it's written; there is no methods section, for example. They do state that their analysis of the GMO maize vs. non-GMO maize showed it to be chemically similar, with the exception of lower phenolic acid in the GMO group.

In figure 4 they compare 11, 22, and 33% GMO maize diets (the % refers to the amount of Roundup used as pesticide), with the addition of Roundup in the water fed to the rats, against a non-GMO diet with no Roundup in the water or as a pesticide. There is a problem with that: it's difficult for me to conclude whether it's the diet or the Roundup that is causing the higher incidence of tumors at 12 months in the experimental arm. The conclusion of the article is that the current standard of monitoring for tumor formation for 90 days is insufficient, because the majority of tumors come after this. I don't disagree, but the original article ( http://wakingscience.com/2017/03/peer-reviewed-science-losing-credibility-large-amounts-research-shown-false/ ) citing this as something that was buried by the scientific community as evidence for carcinogenesis of GMO is bogus.
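To put the 10-rats-per-arm point into rough numbers, here is a back-of-the-envelope simulation I sketched. The 30% vs. 60% tumor rates are made-up illustrative figures, not data from the paper, and 50 per arm is just the usual guideline number:

```python
import random
from scipy.stats import fisher_exact

# Back-of-the-envelope power check: how often does Fisher's exact test reach
# p < 0.05 when comparing tumor counts between two arms? The tumor rates below
# (30% control vs. 60% treated) are hypothetical, purely for illustration.
def power(n_per_arm, p_control=0.30, p_treated=0.60, trials=2000, seed=1):
    random.seed(seed)
    hits = 0
    for _ in range(trials):
        a = sum(random.random() < p_control for _ in range(n_per_arm))  # control tumors
        b = sum(random.random() < p_treated for _ in range(n_per_arm))  # treated tumors
        _, p = fisher_exact([[a, n_per_arm - a], [b, n_per_arm - b]])
        hits += p < 0.05
    return hits / trials

print("10 rats per arm:", power(10))   # low power - a real effect is usually missed
print("50 rats per arm:", power(50))   # much better, roughly the usual 80%+ target
```

With only 10 animals per arm, even a doubling of the tumor rate is missed most of the time, which is consistent with the authors not attempting formal statistics at all.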

 

 


Andrew Gelman's blog provides some nice coverage of the problems in science from a statistical perspective:

http://andrewgelman.com/2012/03/31/dispute-about-ethics-of-data-sharing/

 

And this comment on the blog from a scientist illustrates the sort of forces that lead to an attitude I hate:

 

As a researcher who gathers data, and who has unsuccessfully requested data from others, I’m not very sympathetic to the data-sharing argument. Developing a research project, applying for funds and ethical approval, and carrying out the fieldwork takes a huge amount of time and effort. It is simply not in my interests to pay the costs of this work without every opportunity to extract some benefit.

 

Of course, if my comparative advantage were in statistics, rather than fieldwork, it would be in my interests for an open-access data regime. I would clearly benefit from such an arrangement because other fools would have to pay the costs of gathering data, and, seeing as how I could probably outgun them in the analysis of said data, I would readily get published.

 

In an ideal world, we could have it all. I could do my fieldwork, you could make a couple of graphs with the data, and everyone would be happy. But in the real academic world, there are only so many jobs in desirable locations, endowed chairs, and slots in top journals. Why should I give my competition a free ride?

 

I don’t think the ethics are very clear either. It is very easy to occupy the moral high ground when it aligns with your self-interest. One could make a different argument: as with drug development in the pharmaceutical industry, those who invest in data gathering may require a protective period to allow the investment to be worthwhile. The alternative may well be the underprovision of novel data.

 

Moreover, the ethical case is still ambiguous even if the data were collected using public funds. Only part of the cost of gathering data is the dollar amount covered by a grant. A large chunk (especially in the social sciences) of the total cost is my time and effort in creating a successful grant.


In the real world we have auditors...

 

They exist to verify (on a sample basis) that a department is either following mandated procedure (internal audit), or that the numbers are what they are claimed to be (external audit). Test failures are expected, there's a discussion, and a written resolution agreement. Make an issue of it, and ultimately you get replaced - at the behest of the Board of Directors.

 

It's an expensive process of trust and verify. If the independent reviewer can't reasonably verify it, the asserter was lying. It's essentially French law - guilty until proven innocent - and by and large it works very well. Of course it's possible to defraud; hence an audit only provides 'reasonable', and not 'absolute', assurance.

 

Audit in the blockchain world is a shrinking business; new sources of revenue are needed.

It is a very simple thing for a public funder to demand that all statistical conclusions be audited by an independent third party, and the auditors need the business. The result would be less, but verified, research, versus the current river of BS.

 

Get the house in order, or it'll be done for you.

The clock has already begun ticking.

 

SD

 


  • 8 months later...

Found an interesting article which is related to this topic:

 

https://fivethirtyeight.com/features/science-isnt-broken/

 

If we’re going to rely on science as a means for reaching the truth — and it’s still the best tool we have — it’s important that we understand and respect just how difficult it is to get a rigorous result. I could pontificate about all the reasons why science is arduous, but instead I’m going to let you experience one of them for yourself. Welcome to the wild world of p-hacking.

 

“You can do it in unconscious ways — I’ve done it in unconscious ways,” Simonsohn said. “You really believe your hypothesis and you get the data and there’s ambiguity about how to analyze it.” When the first analysis you try doesn’t spit out the result you want, you keep trying until you find one that does.

 

Scientists who fiddle around like this — just about all of them do, Simonsohn told me — aren’t usually committing fraud, nor are they intending to. They’re just falling prey to natural human biases that lead them to tip the scales and set up studies to produce false-positive results.
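To make that mechanism concrete, here is a toy simulation of my own (not from the article): two groups that truly don't differ, but the "researcher" measures several outcomes and reports whichever one looks best. The nominal 5% false-positive rate jumps to over 20%:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def smallest_p(n=30, n_outcomes=1):
    """One 'null' study: two groups drawn from the same distribution (no real
    effect), measured on several outcomes; return the best-looking p-value."""
    pvals = [stats.ttest_ind(rng.normal(size=n), rng.normal(size=n)).pvalue
             for _ in range(n_outcomes)]
    return min(pvals)

trials = 5000
honest = np.mean([smallest_p(n_outcomes=1) < 0.05 for _ in range(trials)])
hacked = np.mean([smallest_p(n_outcomes=5) < 0.05 for _ in range(trials)])

print("false positives, one pre-specified outcome:", honest)  # ~5%
print("false positives, best of five outcomes:    ", hacked)  # ~23% (1 - 0.95**5)
```

No fraud anywhere in that loop - just flexibility about which analysis gets reported, which is exactly the point Simonsohn is making.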

 

 

Nosek’s team invited researchers to take part in a crowdsourcing data analysis project. The setup was simple. Participants were all given the same data set and prompt: Do soccer referees give more red cards to dark-skinned players than light-skinned ones? They were then asked to submit their analytical approach for feedback from other teams before diving into the analysis.

 

Despite analyzing the same data, the researchers got a variety of results. Twenty teams concluded that soccer referees gave more red cards to dark-skinned players, and nine teams found no significant relationship between skin color and red cards.

 

My take on this: it's very hard to be a scientist in the truest sense. A real scientist has one motivation: discover the truth. But it's quite insidious how human biases creep into this. Take the example from the article about analyzing how soccer refs give red cards based on skin color.

 

Even if you don't give a damn about soccer, about people's skin color, or anything related to the topic, maybe you just want your job to have some meaning in the grand scheme of things. So already you have a bias to create something significant where nothing of significance may exist.

 

Researching a stock, how easy is it to begin by saying "I want to make money"? Then this bias creeps into the research. You can interpret pieces of information in a way that seems more profitable. For example: "Oh well, Sears just HAS to turn around with all of that real estate - then I'll make buckets of cash!"

 

This is a damn hard bias to overcome (and also why I think Buffett's rules #1 and #2 are the types of bias you need in your head).


Yeah, great find LC - although it's quite a bit more depressing to me than it is to the author. Probably that's the outcome of realizing that "Science is hard — really fucking hard," or that humans are really not well prepared to do research (or analysis of research) in areas with large complexity, numerous influencing factors, and hard statistical analysis. Or, like another quote from the article said: “There are so many potential biases and errors and issues that can interfere with getting a reliable, credible result.”

 

Venturing a bit farther afield, I wonder how many people - both in research and outside of it - grok the statistics even when they are available and well calculated. Probably >99% of the general population would not be able to deal even with rather simple probability caveats like https://en.wikipedia.org/wiki/Confusion_of_the_inverse . But even within the population of people who should know this, there's apparently a high percentage who don't, or at least don't grok it offhand (i.e. they would get the correct result if they spent time on it, but usually they won't, so the incorrect result becomes the default "conventional wisdom"). Talking about p-values, I'd guess the percentages are even worse, both in the general population and in the specialist population. And that's not even getting to p-hacking, etc.
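For anyone who wants to see the confusion-of-the-inverse trap in actual numbers, here is a tiny worked example with made-up figures (a test with 95% sensitivity and 95% specificity for a condition with 1% prevalence):

```python
# Confusion of the inverse: P(condition | positive) is not P(positive | condition).
# All numbers below are made up for illustration.
prevalence  = 0.01   # 1% of people have the condition
sensitivity = 0.95   # P(test positive | condition)
specificity = 0.95   # P(test negative | no condition)

p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
p_condition_given_positive = sensitivity * prevalence / p_positive

print("P(positive | condition) =", round(sensitivity, 2))                 # 0.95
print("P(condition | positive) =", round(p_condition_given_positive, 2))  # ~0.16
```

Even with a test that good, a positive result means only about a one-in-six chance of actually having the condition - which is exactly the inverse most people mix up.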

 

I am not optimistic that humans can learn to do much better even if everyone on the planet wanted to (which clearly isn't the case). It seems to be hard, and our brains are not great at dealing with complex and non-intuitive information/data/models. I'll probably fall back on my default position that we need something like Elon Musk's Neuralink ( http://www.theverge.com/2017/3/27/15077864/elon-musk-neuralink-brain-computer-interface-ai-cyborgs ) so that a human could run (replicate) all the stats for claims/research/whatever with minimal effort and within seconds or less. Even then, perhaps the brain would explode trying to reach conclusions on problems that have a huge number of factors and possible methods. We might need not just an interface, but a wholesale integration with something that can handle all of this.

 

8)


Probably that's the outcome of realizing that "Science is hard — really fucking hard," or that humans are really not well prepared to do research (or analysis of research) in areas with large complexity, numerous influencing factors, and hard statistical analysis. Or, like another quote from the article said: “There are so many potential biases and errors and issues that can interfere with getting a reliable, credible result.”

 

Once you start arguing "it's not the system, it's really just human beings that suck," you have all but outright admitted that your system sucks.

 

See below for something I extracted from Feynman's talk on Cargo Cult science. What I find interesting about this is the whole idea that replications should not be done because they are a waste of time.

 

It's never, ever about time. It's about priorities. And priorities are about incentives. What people really mean when they say something is a waste of time is that the current system is pushing them hard to do certain things (publish more research, get more grants, get more citations) and they don't have the time to do other important things that they ought to be doing.

 

http://www.lhup.edu/~DSIMANEK/cargocul.htm

Other kinds of errors are more characteristic of poor science. When I was at Cornell, I often talked to the people in the psychology department. One of the students told me she wanted to do an experiment that went something like this--it had been found by others that under certain circumstances, X, rats did something, A. She was curious as to whether, if she changed the circumstances to Y, they would still do A. So her proposal was to do the experiment under circumstances Y and see if they still did A.

I explained to her that it was necessary first to repeat in her laboratory the experiment of the other person--to do it under condition X to see if she could also get result A, and then change to Y and see if A changed. Then she would know that the real difference was the thing she thought she had under control.

She was very delighted with this new idea, and went to her professor. And his reply was, no, you cannot do that, because the experiment has already been done and you would be wasting time. This was in about 1947 or so, and it seems to have been the general policy then to not try to repeat psychological experiments, but only to change the conditions and see what happens.

Nowadays there's a certain danger of the same thing happening, even in the famous (?) field of physics.

 


Once you start arguing "it's not the system, it's really just human beings that suck," you have all but outright admitted that your system sucks. ... It's the incentives and design of the system that are wrong, not the people.

Even in Feynman's example it isn't the system itself that sucks; there is nothing about science that says experiments should not be repeated - in fact, in theory every scientist will tell you that indeed they should be. It was the individual professor who told her that she'd be wasting her time who sucks. In the end it always comes down to specific individual people. You can have the best methods and systems in place theoretically, but if people choose not to follow them in practice they will not work. Take hand washing by doctors as an example. Every doctor will tell you that hand washing is important, yet 100,000 people per year die because doctors simply don't practice what they preach. The theory is great and everyone agrees, but in practice individual human beings choose to do it the wrong way. People suck.

 

 

 


Even in Feynman's example it isn't the system itself that sucks; there is nothing about science that says experiments should not be repeated...

 

By "system" I don't mean some idea of science that exists in a timeless platonic plane of existence. I mean the actual, current, real-world institutions of science that we confront today. This includes peer review, grant committees, an enormous number of graduate students, etc. And, most importantly, an incentive system that rewards primarily on the basis of the number of high-impact papers, as measured by citation counts, the prestige of the journal, etc.

 

In that environment, a repeated experiment is indeed a waste of time, because it won't help a researcher get more grants or be more highly rated. The system I am describing came into existence post-WW2. And when I say science is broken, I mean the post-WW2 system of science that was created by men like Vannevar Bush:

https://en.wikipedia.org/wiki/Vannevar_Bush

 

Take hand washing by doctors as an example. Every doctor will tell you that hand washing is important, yet 100,000 people per year die because doctors simply don't practice what they preach. The theory is great and everyone agrees, but in practice individual human beings choose to do it the wrong way. People suck.

 

Doctors will do whatever they get paid for and not do what they don't get paid for. Doctors don't get paid for washing hands. No one does. Doctors do get paid for seeing more patients. The system pushes doctors in a certain direction (see more patients, do more operations) instead of in another direction (wash your hands, get your patients to exercise, lose weight).

 

In addition, the existence of the incentives themselves can be the problem. Sometimes - and I think this is especially true for research - you just want to take good people and give them the space to do good work. You don't need to incentivize them; it actually makes things worse, not better.


If by science one means the pursuit of building knowledge, I cannot agree with anyone who would be against this.

 

However, you have to understand that modern science, especially the systematic method we use to create testable and explainable knowledge, is a form of social construct that was formalized only one or two centuries ago (especially the empiricist approach established by Karl Popper). One should understand that there is nothing absolute about this method and that it too should be scrutinized. As pointed out by these articles, the method is prone to errors. So far it is the best method we've got, but we may come up with a new and better method to collect and organize knowledge in the future. In that regard, accepting modern science as something absolute is not much different from accepting some God as absolute.


  • 8 months later...

Re-bumping this thread because, despite the thread title and the valid criticism in it, scientific progress really is spectacular to watch, and is by far one of the greatest contributors to improvements in human life:

 

https://www.sciencealert.com/lab-grown-lungs-pigs-success-2018

 

Bioengineered Lungs Grown in a Lab Successfully Transplanted Into Living Pigs


Yes, there are issues, but the scientific method still works. Good science will crowd out bad science, because good science works and bad science doesn’t.

 

This is true long term, just as the market is efficient long term. Scientific consensus will tend to head in the direction of the truth... eventually. But in the meantime you have things like politics getting in the way, such as the government telling us all that saturated fat and cholesterol will kill us and that we should be eating tons of carbs, high-omega-6 vegetable oils, and trans fats. It took a couple of generations, millions dead of diabetes and heart disease, and the invention of the internet to turn that one around.

 

