
The John Templeton Foundation has become increasingly invested in open science over the last eight years and is now part of a broad coalition of funders and organizations that promote best practices in the reproducibility and transparency of research. We sat down with our staff member Nicholas Gibson, Director of Human Sciences programs, to ask him about the role of open science in accelerating scientific progress and discovery. 

Let’s start at the beginning—what is ‘open science’? 

To borrow from the Center for Open Science, open science means showing your work and sharing your work. Just like we learned as kids, it’s not enough for scientists to say, “Here’s what I’ve concluded from all my research”; scientists also need to show how they arrived at those conclusions by reporting what data they collected, how they collected them, how they analyzed them, and so on. That way, other people can decide whether to buy into those conclusions based on the quality of the evidence rather than relying on the authority or reputation of the scientists or institutions involved. As for sharing, science is supposed to be cumulative—standing on the shoulders of giants—and self-correcting, but of course that’s just not possible unless scientists can find, use, and build upon the products of earlier research. 

Are these new ideas? They sound like the bedrock of modern science.

Right—these principles go back at least as far as the earliest scientific journal, several hundred years ago. But they’ve captured fresh attention in recent years because of a growing realization that the way that science actually proceeds is disconnected from these ideals: scientists are disincentivized from describing their methods in sufficient detail for other scientists to repeat the studies; scientists get evaluated not on the quality of their arguments and evidence but on whether they can secure publications in particular journals; journal editors and reviewers tend to reject submissions unless they have data that make a nice story (or that can be contorted into one); and many journals are paywalled, only accessible to the tiny fraction of the world’s population lucky enough to have an affiliation at an institution willing to pay the often exorbitant subscription fees. What’s new about open science is the effort to find new incentives, norms, and infrastructure that can support the transparency and sharing that most scientists want to practice—but may not be sure how—and that can democratize access to the products of that practice.

Why did the John Templeton Foundation get involved in open science?

In the early 2010s we started to notice the same thing going on in several research areas where we had been active as a funder: ideas that had become popular—for instance, that self-control is a muscle that can be depleted, or that people become more honest or generous after being exposed to religious words, or that intranasal oxytocin can enhance prosocial behavior—were being challenged by other researchers who were having trouble producing the same results. In one sense this was business as usual—self-correcting science in action—but the problem was that many of these failures to replicate the original published studies were not themselves getting published. On top of this, it became increasingly clear that many apparent findings were based on questionable research practices. Just to be clear, I’m not talking about research fraud, but rather things like running hundreds of statistical tests and just reporting the ones that were significant. This is actually how many scientists (me included!) were trained, but the voices pointing out that these ways of working weren’t producing reliable results suddenly reached a critical mass. All of this was a challenge for us as a research funder. A key part of Sir John Templeton’s vision was to provide support for researchers willing to test ideas by using the methods of science. But if science as it was being practiced wasn’t producing reliable new information, how could anyone decide what was actually true?
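To see why reporting only the significant tests is so misleading, here is a minimal simulation (not from the interview; the test counts and threshold are illustrative). If a researcher runs 100 independent tests on data with no real effects, the chance of at least one "significant" result at the conventional 0.05 threshold is 1 − 0.95¹⁰⁰ ≈ 0.994, i.e., near certainty:

```python
import random

random.seed(42)

ALPHA = 0.05        # conventional significance threshold
N_TESTS = 100       # independent tests per simulated "study"
N_STUDIES = 10_000  # simulated studies, all with no true effect

# Under the null hypothesis, p-values are uniformly distributed on [0, 1],
# so each individual test has a 5% chance of a false positive.
studies_with_hit = 0
for _ in range(N_STUDIES):
    p_values = [random.random() for _ in range(N_TESTS)]
    if min(p_values) < ALPHA:  # at least one "significant" result
        studies_with_hit += 1

simulated = studies_with_hit / N_STUDIES
analytic = 1 - (1 - ALPHA) ** N_TESTS  # family-wise error rate, ~0.994

print(f"Simulated chance of at least one false positive: {simulated:.3f}")
print(f"Analytic family-wise error rate:                 {analytic:.3f}")
```

Pre-registration addresses exactly this: by committing to the analyses in advance, a researcher can no longer (even inadvertently) fish through many tests and present the lucky hits as planned findings.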

So how did the Foundation respond?

Slowly, at first. Saying that researchers ought to practice science in a particular way isn’t that hard; doing something that actually changes how researchers practice science is much more challenging. But one early initiative, the Center for Open Science (COS), caught our attention as an effort that combined education on open science best practices, scientific study of how scientists work (“metascience”), broader culture change efforts within the academy toward norms of openness, and the design and building of the tools and infrastructure that can make open science behaviors possible. This last point was especially important to us as a funder: if we wanted our grantees to begin sharing their data and research materials and to pre-register their studies and analyses, there had better be somewhere where they could do these things, and it had best be a place that was integrated into their research workflows and other tools. So in 2014 we provided an initial $3 million to COS to help build and expand their Open Science Framework. We continue that support today with our newest grant to COS, and our total support for their work has now reached $6 million. Alongside this work, we supported the Berkeley Initiative for Transparency in the Social Sciences (BITSS) to launch a prize recognizing excellence in open science practice, we’ve supported a rebuild of the primary repository for data on religion, and we’ve made several smaller grants to promote or study the impact of open science practices. Most recently we’ve made a grant to promote registered reports within psychology of religion, and have provided significant support for the Psychological Science Accelerator, an open science effort to test the generalizability of psychological phenomena.

Besides specific grant projects, what else is the John Templeton Foundation doing in this space?

We are a founding member of the Open Research Funders Group (ORFG), a partnership of major private research funders interested in making the products of research more discoverable, accessible, reliable, and reusable. This has been a great venue to think with peer funders about how to enact policies that promote open science practices. I’ve had my own growth curve in all of this, and I’m particularly grateful to the many Human Sciences grantees who have committed to doing things like pre-registration and FAIR-compliant data sharing, often for the first time, as one of their requirements for receiving our support. We’ve also been a member of the National Academies of Sciences, Engineering, and Medicine Roundtable on Aligning Incentives for Open Science, and we’ve been involved in developing a toolkit for fostering open science practices. There is a lot happening, and it’s not easy to keep on top of it all, but I’m excited about how rapidly things are changing and improving.

What obstacles stand in the way of greater adoption of open science practices?

There are many. I’d like to see greater awareness that the issues around transparency and replicability extend far beyond the social sciences. Many researchers are still unsure why or how they should pre-register analysis plans or share code or data. Many researchers still seem unaware that openly accessible articles get cited more frequently—by scholars and by journalists—than paywalled articles. And many institutions and funders still have to figure out how to sustainably fund open access publication and infrastructure.

Many researchers also work at institutions that have yet to sign the San Francisco Declaration on Research Assessment (DORA), meaning that their tenure and promotion committees are more likely to evaluate scholarly contributions in terms of metrics like “journal impact factor” instead of valuing the actual scientific content of published papers or valuing other contributions, such as datasets or software. Misaligned incentives like these are a problem that universities themselves have to solve.

Another challenge is for more researchers to see themselves as part of an effort that is bigger than themselves: it’s easy—and understandable—to say, “I’ve worked hard to collect these data—I’m going to hold onto my data and extract all the value from it before sharing it with others.” It’s also extra work to get data into a form where it is usable by others. But there are often uses for data that the original researchers never imagined and that can only be realized when the data are discoverable, accessible, and reusable. Efforts like the CRediT taxonomy of research contributions and the data citation standards of the TOP Guidelines are important steps toward ensuring that scientists can be recognized and rewarded for their role in data collection and curation.

Could open science backfire by diluting the competitive spirit that drives high achievement?

I think in the end we’ll see that it does the opposite—that is, that open science will provide more opportunities for competition, not fewer. Many researchers around the world are unable to download research articles because of publisher paywalls. What new ideas and lines of research might emerge if there were equitable access to all research papers? The reality is that high achievement is often in spite of the current incentives in tenure and promotion, in scientific publishing, and in research funding—not because of them. But even over the last two years with research on COVID-19 we have seen that it is possible to do things differently—that discovery can be accelerated when researchers learn from each other along the way, when they communicate early which research avenues lead to dead ends and which are fruitful, and when they contribute data from individual labs into larger, more powerful team-based efforts. 

How will we know if and when open science has succeeded?

Drawing again from the Center for Open Science, I think it’s helpful to consider what success looks like at several different levels. Do we have the infrastructure needed to support open science? Is it easy to carry out open science practices? Is it normative within scholarly communities? Are the incentives aligned to make it rewarding to practice open science? And do institutional policies require open science? Once all of these things are true, I think we’ll have made it into a new era of accelerating scientific progress and discovery. 

Are you optimistic we’ll get there?

Yes! I’m encouraged by the progress I’m seeing, and am pleased that the Foundation is playing its part in it.


Still Curious? 

Read the Strategic Priority Q&A on Religious Cognition with Nicholas Gibson

Learn more about the Center for Open Science

See the project overview for Center For Open Science: Enabling Research Rigor and Transparency, Fostering Researcher Intellectual Humility