
Ideally, scientists would fully disclose their own raw data and methods and also spend time replicating others' work. What would best ensure this good behavior? (Poll Closed)

32 Comments

  • Chris - 12 years ago

    In response to Cindy:

    1) Yes, "reproducibility comes out of subsequent work by the same lab and others wishing to build on the ideas." I don't think anyone sets out to reproduce results for the sake of repetition. However, if you are using published work as a basis for your own studies, attempt to validate a couple of key experiments for your peace of mind, and are unable to do so, then this is a serious problem! There is a major difference in discarding an existing hypothesis because of a credible difference in data and interpretation, versus not being able to validate a hypothesis because none of the critical experiments can be reproduced. Discovering that published work is not reproducible hardly seems to be a hallmark of the scientific method.

    2) The key problem with online supplementary space is that it is near limitless and is typically not reviewed with the same scrutiny as the "hard-copy" manuscript. Journals do not proof this text, and it often turns into a morass of difficult to follow, useless information. I have noticed a trend where authors will bury sub-standard data that "supports" the conclusions in the main text but would not stand up to front-line scrutiny. Further, the details in the supplementary materials and methods sections are often incorrect, for instance in terms of reagents or techniques used in the manuscript. I am in favor of publishing supporting material but it needs to be consistently held to high standards.

  • Thomas Chesney - 12 years ago

    Many datasets, especially in the social sciences, will lead to multiple publications over several years. Researchers may be reluctant to publish a dataset that they haven't 'finished with', as someone else might publish an analysis they themselves were planning to do in the future. Dealing with this will be important to convince researchers to publish data.

  • Cindy - 12 years ago

    This problem has come about because journals limit page numbers and figures and constantly ask you to slash your methods section. Conducting repetitive work for the sake of repetition is a waste of time and money and not worth spending precious grant money on. Neither does it make good reading in the current research environment, already overrun by journals that no one has the time to read. Reproducibility comes out of subsequent work by the same lab and others wishing to build on the ideas. The inability to reproduce someone's work is a hallmark of the hypothesis-driven scientific process that helps establish whether working theories become accepted or discarded. It is necessary, and comes naturally as the next step in the project. There is no need to "push" another agenda. What is needed is a way to present more detail when you publish. With online supplementary space available, it seems like that is the best place to put those details.

  • Darlene Southworth - 12 years ago

    It's not about rewards, it's about opportunities. In two areas, scientists already deposit data for all to see and reanalyze: in systematics of both extant and fossil species where specimens are deposited in museums or herbaria and in molecular studies where DNA sequences are deposited in GenBank or similar public online sites. Scientists already use each other's data.
    No one needs rewards to put specimens in herbaria or sequences in GenBank. These are cultural norms and are usually required by journals.
    So start with creation of discipline-appropriate sites where one could deposit data. Let's see how that goes.
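    [Editor's illustration, not part of the comment: reusing deposited sequence data is already routine. A minimal sketch of fetching a GenBank record with Biopython's Entrez interface follows; the accession number and email address are illustrative placeholders.]

    ```python
    # Minimal sketch: retrieve a deposited GenBank record by accession.
    # The accession and email below are placeholders for illustration.
    from Bio import Entrez, SeqIO

    Entrez.email = "you@example.org"  # NCBI asks for a contact address
    handle = Entrez.efetch(db="nucleotide", id="NM_000518",
                           rettype="gb", retmode="text")
    record = SeqIO.read(handle, "genbank")
    handle.close()

    print(record.id, record.description, len(record.seq), "bp")
    ```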

  • Arnold L. Lieber, MD - 12 years ago

    Thirty-four years ago I alerted readers of my book, The Lunar Effect (1978, Doubleday), that lunar time should be incorporated into the design of all biological and behavioral experiments. Results from my own research, combined with a working familiarity with the literature on biological rhythms, led me to an awareness that lunar rhythms co-exist with solar rhythms in biological functioning. Therefore, lunar time must be an integral component of biological time. Lunar time is out of synch with solar or calendar time, advancing by 50 minutes daily across the solar time-spectrum (hence, the lunar month is about a day and a half shorter than the solar month). The difference between the two time parameters constitutes a variable that must be controlled for in the design and conduct of biological and behavioral research studies. Failure to account for this difference when designing research methodology dooms the ensuing study to inevitable replication failure. Perhaps it is time for scientists to re-visit this long-overlooked viewpoint, as it might constitute part of the missing answer.

  • HCPotter - 12 years ago

    Deemphasize publication and citation record and emphasize successful application.

  • Weimin Wu - 12 years ago

    Sometimes we want to repeat others' work to check whether we are limited by the raw data or by the algorithm, but no raw data can be accessed.

  • Agata - 12 years ago

    The materials and methods section should be clear and detailed, and if it is not, there should be enough information in the supplements. Some articles just offer a haze instead of any explanation of how they got their data.

  • Concerned Scientist - 12 years ago

    Journals should not accept papers unless the data and methods are also provided in a form that a scientist in that field would be able to reproduce the experiment.

  • Victor Friedlander - 12 years ago

    Both 2. Funding earmarked for replication studies and 3. More publication by journals of data that confirm or refute previous work.

    Funding earmarked for replication studies provides the means for testing results and more publication by journals of data that confirm or refute previous work distributes the relevant information.

    Rewards from funders on subsequent grant applications for depositing sufficient details for replication (or penalties for noncompliance) may work for some non-security-related government research but are completely contrary to the objectives of funding from the private sector. Ultimately, preservation of the transparency and collective judgment of research methods and results depends on the science community and not on extraneous interested parties with agendas contrary to the necessities of good science.

  • Suman Ghosh - 12 years ago

    The Stowers Institute for Medical Research has adopted a very nice strategy: an online data repository to which everybody must submit all of their original/raw data, which is then available to the public. This seems like a lot of work at the beginning, but in the end it is a clean strategy and there is nothing to hide.

    If not by institutions, then at least most of the highly ranked journals should make it a rule that everyone must submit all original or raw data. These data probably will not make it into the final figures, but people can see how the final figures were obtained.

  • Christina - 12 years ago

    Why should this not be a requirement by institutions, funders and publishers? In the digital age, there is no practical reason for any restriction on the inclusion of extensive raw supplementary data. The question should not be whether some incentive should be offered, but rather, where should we put it and how should we organize it?

  • Sean - 12 years ago

    The best approach is to grant a permanent position to the person who uncovers the fraud. The people who work closest know best.

  • Sean - 12 years ago

    Promotion, salary, rewards, funding, reputation, ... are mostly based on publications. That is the key cause of fraud. Ask the PIs around you: how many of them are not pushing their hypothesis, not pressuring their students and postdocs to produce the results they want or expect?

  • Ruchi Pandey - 12 years ago

    Very frequently, non-reproducibility or fraud in data is blamed on the postdocs and grad students in the lab. It should be mandatory for the PI to ensure the credibility of the data by training the people in the lab in good lab practices and ethics, and by enforcing that they follow them through regular interactions and periodic checking and validation of the data being generated. If PIs cannot monitor the research being done because the research group is too large, then either they should not be associated with it merely to receive credit, or they should keep smaller research groups in which it is feasible to ensure that data are reproducible.
    Smaller research groups would also mean more equitable sharing of scarce research funds, and fewer people going through PhD and postdoc programs without career options to match the number of people slogging in the labs.

  • nina papavasiliou - 12 years ago

    These are all simple (simplistic?) answers to a complex question.

    "More publication by journals of data that confirm or refute previous work" seems the best choice of the lot. Have you tried publishing data that refute a previous hypothesis lately? As a rule of thumb, the publication-to-be-refuted is published in high profile journals (sorry Science, but you are often wrong!) but the refutation has a hard time getting into much, much "lower tier" journals (in the vanity press sense of scientific publishing). And not at all by grand-conspiratorial-design: simply, the refuting authors are asked to jump through many hoops to demonstrate why they are correct and the already published work (which was already assessed by reviewers) is wrong. It's human for the community to want to avoid the embarrassment that comes with being wrong, but it's not scientific. If refutations, as a rule, were published in the same journal the work first appeared, might this not be a deterrent to the quick (but often very sloppy) path to glory (which is usually - unfortunately - inextricably linked to publishing at Science, Nature, Cell, etc?)

    P.S. It goes without saying that journals should demand (and at the very least, confirm!) deposition of raw data. But a quick survey of a couple of prominent journals in my field indicates that people have already invented creative ways around this requirement.

  • Stefano Berri - 12 years ago

    The current system rewards publications: their number and quality. Publishing and maintaining usable code and annotating the data is an extra cost (particularly in time) that is not rewarded but left to individual research scientists. If anybody has to choose between spending time making code and data available or working on the next publication, it is difficult to choose the former. If publishers force code and data availability (as they should), the bare minimum is often done, not rarely with fake annotation or broken code. There must be a system that rewards not only publications, but also reusable code, data curation and annotation, and maintenance.

  • Björn Brembs - 12 years ago

    There need to be several changes to the way we handle publications and data. First, we need an attribution system that values contributions to science other than publications. In fact, if someone only contributes publications and nothing else, this person should receive a lower standing in the community than someone with publications, data and other contributions. Such attribution systems are already standard technology elsewhere and the ORCID initiative (http://orcid.org, supported by the AAAS: http://orcid.org/content/participants/432) is an important prerequisite to finally implementing such a system also for the sciences.

    Such an attribution system will incentivize raw data publication, but additional rules need to be applied: funders need to require deposition of raw data just as they require access to the publications. Journals should be required to publish papers contradicting findings they themselves published. Any newspaper is forced to publish corrections in the same spot as the original article. Surely we should hold our scientific journals to at least the same standard?

    Finally, we need to reduce the impact of journal rank on our hiring and firing decisions. An attribution system as described above will help, but more decisive and direct efforts are needed to reform a system in which journal rank is a better predictor of retractions than citations:
    Fang, F., & Casadevall, A. (2011). RETRACTED SCIENCE AND THE RETRACTION INDEX Infection and Immunity DOI: 10.1128/IAI.05661-11
    Seglen PO (1997). Why the impact factor of journals should not be used for evaluating research. BMJ (Clinical research ed.), 314 (7079), 498-502 PMID: 9056804

    Clearly, these reforms will not eliminate fraud, but they will reduce the rate at which it occurs.
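    [Editor's illustration, not part of the comment: the ORCID records mentioned above are already machine-readable. A minimal sketch of pulling a researcher's listed works from the ORCID public API follows; the v3.0 endpoint, response field, and example iD are assumptions to be checked against current ORCID documentation.]

    ```python
    # Minimal sketch: list the works attached to a public ORCID record.
    # Endpoint version and example iD are illustrative assumptions.
    import requests

    orcid_id = "0000-0002-1825-0097"  # ORCID's documented example iD
    resp = requests.get(
        f"https://pub.orcid.org/v3.0/{orcid_id}/works",
        headers={"Accept": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()
    works = resp.json().get("group", [])  # each group is one work (possibly multiple sources)
    print(f"{len(works)} works listed for {orcid_id}")
    ```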

  • Andrew D. Steen - 12 years ago

    Journals should demand, as a condition of publication, raw data as well as all the tools necessary to recreate results from those data, including specific equations, computer code, and even simple Excel spreadsheets.

  • Emmanuel Okoro - 12 years ago

    This is a question of ethics and morals. Perhaps, less emphasis on grades and such superficial accomplishments, and more on overall character. After all, what is our endpoint in research?

  • Emmanuel Okoro - 12 years ago

    Encourage authors to give original references (not indirect or vague), or clearly explain the reasoning behind their experimental steps during the review process.

  • Donald Strebel - 12 years ago

    Data publication has to be recognized and rewarded as an expected and inseparable part of all scientific research activities. A published data set, properly documented and reviewed, should have the same status and rewards as any other formal scientific publication. Creating and funding a data publication infrastructure, analogous to the existing scientific journal publication infrastructure, is necessary.

    See the following:

    Meeson, B.W. and D.E. Strebel. 1998. The Publication Analogy: A Conceptual Framework for Scientific Information Systems. Remote Sensing Reviews, vol. 16, pp. 255-292.

    Strebel, D.E., D.R. Landis, K.F. Huemmrich, J.A. Newcomer, B.W. Meeson. 1998. The FIFE Data Publication Experiment. Journal of the Atmospheric Sciences, vol. 55, pp. 1277-1283.

    Strebel, D.E., B.W. Meeson, K.F. Huemmrich, D.R. Landis, and J.A. Newcomer. December 1997. Theory and Practice of Interdisciplinary Data Exchange and Preservation. Presented at the Conference on Scientific and Technical Data Exchange and Integration, sponsored by the U.S. National Committee for CODATA, National Research Council. Available online at http://www.esm.versar.com/poster/abstract.htm

  • Gerald S. Wasserman - 12 years ago

    The tremendous technical successes of World War 2 (radar, the A-bomb, etc.) were achieved by giving concentrated federal funding to particular groups working in particular universities. This was a substantial departure from the previous decentralized system in which each university had an endowment which generated income which was used to support the research of that university's own faculty. This latter system had begun millennia ago with the founding of the Library of Alexandria.

    At war's end, the Truman Commission explicitly considered whether to return to the previous system and decided to go forward with the new concentrated system. As a result, a single crackpot program director who controls the distribution of funds in a particular research area can turn that area crackpot because anyone who is not a crackpot gets no support.

    It is time to review this decision. I suggest that it should be revised so that some fraction of every federal research grant should go into the perpetual endowment of the university hosting the research. Had that been the direction recommended by the Truman Commission, universities today that are hospitable to genuine scholarship would benefit from the income of their own endowments to a degree that would be comparable to the benefit they get from external grants.

    Of course, such a system would have to be conditioned on an agreement that the income from the endowment could not be used to augment the salary of the football coach.

  • N. S. Arden - 12 years ago

    This is an interesting and important question. Publications to confirm or refute previous work could draw more researchers to replicate studies. However, results and raw data can vary significantly across different groups who attempt similar studies due to variations in study design, conditions and errors. The challenge seems to be not so much in having other groups replicate previous studies but in the limitation of current statistical significance tests. These tests do not convey or control the numerous resulting variations across different groups. A pressure point could come from the journal requirements themselves since most journals do not require a thorough study design procedure.
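    [Editor's illustration, not part of the comment: a short simulation of the point about significance tests. It assumes a true effect of 0.5 standard deviations and 20 subjects per group; even then, a single p < 0.05 result says little about whether a replication will also reach significance.]

    ```python
    # Simulate original studies and their replications with modest power.
    # With n = 20 per group and a true 0.5-SD effect, only about a third
    # of "significant" findings reach significance again on replication.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n, true_effect, n_studies = 20, 0.5, 10_000  # assumed design parameters

    def significant(effect):
        """Two-sample t-test on freshly simulated data; True if p < 0.05."""
        a = rng.normal(0.0, 1.0, n)
        b = rng.normal(effect, 1.0, n)
        return stats.ttest_ind(a, b).pvalue < 0.05

    original = np.array([significant(true_effect) for _ in range(n_studies)])
    # Replicate only the studies that "worked" the first time.
    replications = np.array([significant(true_effect) for _ in range(original.sum())])

    print(f"original studies significant:      {original.mean():.0%}")
    print(f"of those, replications significant: {replications.mean():.0%}")
    ```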

  • GRINM - 12 years ago

    Publish the negative data

  • James Montalto - 12 years ago

    The implementation of a LIMS (laboratory information management system) would ensure that the study, process, raw data, and analytical reports are all protected and searchable.
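    [Editor's illustration, not part of the comment and not any particular LIMS product: a hypothetical, minimal sketch of the underlying idea, namely a searchable record that links study, protocol, raw-data file, and a checksum so later tampering is detectable.]

    ```python
    # Hypothetical mini "LIMS" table: register raw data with a checksum,
    # then search by study. Names and fields are invented for the sketch.
    import hashlib
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("""CREATE TABLE records (
        study TEXT, protocol TEXT, raw_path TEXT, sha256 TEXT, report TEXT)""")

    def register(study, protocol, raw_path, report):
        """File the raw-data path with a SHA-256 digest so edits are detectable."""
        with open(raw_path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        con.execute("INSERT INTO records VALUES (?, ?, ?, ?, ?)",
                    (study, protocol, raw_path, digest, report))
        con.commit()

    # Invented example: create a placeholder raw file, register it, search it.
    with open("plate_reads.csv", "w") as f:
        f.write("well,od600\nA1,0.42\n")
    register("Study-42", "protocol-v3", "plate_reads.csv", "report-42.pdf")
    print(con.execute("SELECT * FROM records WHERE study = ?", ("Study-42",)).fetchall())
    ```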

  • Emilio Bruna - 12 years ago

    Journals should require data archiving upon publication; make these archives citable documents (like the Ecological Society of America's Data Papers).

  • An international standard on transparency that includes transparency in research.

  • Kathleen Taylor - 12 years ago

    We need more data. Everyone can cite an experiment which has been replicated, but what percentage haven't? Given the data already available, there must be some way to:

    a) rate current hypotheses by how valued they are by the community (even a simple citation count would be a start),
    b) work out their replication status, and
    c) target funds towards replicating widely-accepted ideas which lack that replication backup (a rough scoring sketch follows below).
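    [Editor's illustration, not part of the comment: a minimal sketch of the triage described in (a)-(c), using invented data and field names. Heavily cited but rarely replicated hypotheses float to the top of the funding queue.]

    ```python
    # Rank hypotheses by community uptake (citations) versus replication backup.
    # All entries below are hypothetical examples.
    hypotheses = [
        {"name": "Hypothesis A", "citations": 1200, "replications": 0},
        {"name": "Hypothesis B", "citations": 300,  "replications": 4},
        {"name": "Hypothesis C", "citations": 950,  "replications": 1},
    ]

    def replication_priority(h):
        """Crude score: widely cited but rarely replicated ideas rank first."""
        return h["citations"] / (1 + h["replications"])

    for h in sorted(hypotheses, key=replication_priority, reverse=True):
        print(f'{h["name"]}: priority {replication_priority(h):.0f}')
    ```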

  • Paul - 12 years ago

    Fraud is like security. I would not bet all my money on any of these technical solutions. Crooks will always try to outsmart the system. We should accept that a percentage (5%? 10%?) of people violate basic rules. Bob Dylan once said he was quoting Lincoln: "you can fool some people all of the time and all of the people for some time, but you cannot fool all people all of the time".
    Nurturing a healthy balance of respect and healthy scepticism seems a better bet.

  • Geoff Hammond - 12 years ago

    Recognize the limitations of statistical significance; it says nothing about replicability or importance.

  • Colin Wraight - 12 years ago

    It seems absurd that "Rewards from funders on subsequent grant applications for depositing sufficient details for replication (or penalties for noncompliance)" should be an option here. All published papers should have this information already! That they don't is an indictment of the editorial and review process. This is also where most problems could be caught - and would be if reviewers were more thorough. I propose more reward for conscientious (and constructive) reviewing.
