6 December 2016
Last week, Big Data to Knowledge (BD2K) held its annual two-day all hands meeting, followed by a one-day public symposium on open data science. BD2K is an initiative of the US National Institutes of Health (NIH) aimed at turning “biomedical research into a digital enterprise.” Much like the GA4GH, BD2K has acknowledged that the massive amount of health-related data now available can reach its full potential only if it is harnessed through open collaboration.
Two notable champions of open science, Harold Varmus (former director of the National Cancer Institute) and Francis Collins (current director of the National Institutes of Health), spoke with Chris Wiggins of the New York Times about the benefits of and challenges to sharing life sciences data. The discussion ran the gamut, touching on issues of patent law, incentivization, open access publishing, and preprints, to name a few. In particular, Dr. Varmus noted that facilitating a culture of sharing is not just a technical question but also a human problem, one which, he said, the GA4GH is directly addressing:
I think it's really critical to recognize that solutions are not just technical, some of them are political, governance issues, getting people in the same room to agree on principles…It’s one thing to say we're all open, but if we don't get data into a form that allows management by using the same software, or using APIs that allow exchange of information from one source or another….It's total madness….Those things have to be resolved at a human level, [and it] requires money to fix the problem and espousing the principles of openness, which requires exchange, involves a commitment to actually improving the systems that we use so that we can actually take advantage of the data that openness makes possible.
In another session, GA4GH Executive Director Peter Goodhand and three other panelists discussed exactly how they are attempting to fix this “human problem” and how they are each trying to make good on their commitment to improving the system. In “New Models for Open Science Emerging Around the Globe,” Goodhand, Niklas Blomberg (Founding Director, ELIXIR), Robert Kiley (Head of Digital Services at the Wellcome Library), and Tanja Davidsen (Project Manager, National Cancer Institute) spoke about their respective efforts to support open science. The moderator, Philip Bourne (Associate Director for Data Science, NIH), opened the roundtable with the following question: “What’s not working?” (specifically with respect to the global aspects of open science). Here are some of the responses from the roundtable:
Jurisdictions need to be harmonized: ELIXIR is a trans-national effort to create a network of biological data sources across Europe. When it comes to the human-derived data nodes, ELIXIR faces the challenge of navigating many different regulatory requirements for data sharing, such as variable data security laws. “That is a big challenge and getting mutual recognition schemes between countries is a very big and unsolved problem,” said Blomberg.
We need a sustainable funding model: Blomberg also noted that in the long term, another challenge will be curated management of the data. The costs of biological big data are approaching those of other big scientific infrastructures, such as synchrotrons and large telescopes, Blomberg said, and will thus require some creative infrastructure funding models to make it all sustainable.
We need to reach the entire global community: Goodhand noted that it has been much easier to create an “Anglo Alliance” than a Global Alliance. “Doing something that includes the UK, the US, Canada, [and] Australia is relatively easy. Making that relevant and making it meaningful and creating opportunities for the whole world to engage in is much more complex,” said Goodhand. The onus is on us, he said, to go around the globe and listen to and learn from other communities, and make what we’re doing relevant to them.
Data sharing needs to be properly incentivized: The Wellcome Trust recently conducted a survey of Wellcome-funded researchers asking about attitudes and practices around data sharing. According to Kiley, the overwhelming response was a sense that while funders advocate and require sharing, they don’t seem to really take it seriously. “They fear that all that gets looked at are research articles,” Kiley said. Papers are what count for getting jobs and future grants, so why would anyone share their data? Doing so makes it available to competitors who are also keen to publish first. We need to find a way to reward the practices and behaviors we want to promote, he said: sharing data, making papers open access, and so on. “We need to find a mechanism to…make it absolutely clear that [sharing is] a key part of being a researcher and it's a behavior we wish to support and promote.” Both Varmus and Collins acknowledged this issue in their discussion and noted a few efforts to drive sharing within the community. In particular, Varmus noted a recent victory for clinicaltrials.gov, which will require institutions to submit all clinical trial data within 12 months of trial completion or else face severe participatory and financial penalties.
We have more data and less time than we ever expected: Davidsen and her team recently completed the implementation phase of the Genomic Data Commons and the Cancer Cloud Pilots, two NCI projects that aim to connect and democratize cancer data stored around the US. “One of the lessons learned,” Davidsen said, “is everything takes a lot longer than you think it's going to.” They also have far more data than they originally expected, both in the GDC and in the Cloud Pilots. The challenge, Davidsen said, is getting researchers to share their data “fully and completely.” That means accessing complete (or as complete as possible) clinical data sets to bolster the cancer genomics data. Francis Collins anticipated this point in the earlier discussion, saying “it was a lot harder than [he] thought it would be, just for that fairly restrictive set of data about cancer genomics and phenotypes. This is a mountainous effort and it's going to take a lot of smart people to succeed in implementing.”
BD2K has funded a center at GA4GH member organization the University of California, Santa Cruz. Under the direction of GA4GH Vice-Chair David Haussler, the Center for Big Data in Translational Genomics and the GA4GH Data Working Group are developing shared application programming interfaces (APIs) to connect the world’s genome repositories, as well as an open source software stack that uses those APIs.
You can access the full video archive from the meetings here.