The Two Biggest Ideas for Personalized Medicine

Nature recently published two articles, within a day of each other as it happens, that describe the challenges confronting the two biggest ideas for advancing personalized medicine.

One-Person Trials

The first idea, described in an article by Nicholas J. Schork, director of human biology at the J. Craig Venter Institute, is that of one-person clinical trials. Such trials, also called N-of-1 trials, leverage the information that can be gained about each individual patient’s diagnosis and response to treatment. If administered properly, N-of-1 trials have the potential to provide a vast repository of data from which powerful statistical models can be derived, making personalized medicine a ubiquitous characteristic of modern healthcare.

Key to the ability to administer such trials is the collection of “all sorts of relevant data … for one person, as frequently as possible,” and therein lies the rub. As Schork describes it, N-of-1 trials actually take place all the time, albeit in an ad hoc way, as physicians experiment with different treatments for their patients and adjust drugs and dosages based on how the patient responds.

The problem is that knowledge which might be gained from such a process is lost because “few clinicians or researchers have formalized this approach into well-designed trials — usually just a handful of measurements are taken, and only during treatment.”

To paraphrase Laurie Becklund, a foreign correspondent for the L.A. Times who recently died of breast cancer, the fact that patient information isn’t being collected in a systematic and universally accessible way is “criminal.”

Open Access Data

The second idea, described in this instance in a Nature editorial, is one discussed repeatedly in this blog, and that is the elimination of a forest of silos of patient information, each guarded by a well-meaning but rate-limiting corporate or academic custodian.

The editorial summarizes the issue more clearly than I have seen it elsewhere described in stating that “… everyone agrees that large data sets are crucial, and everyone is racing to collect them. The larger the data set, the more useful. The most useful of all would be one huge database containing all available data. But even though all parties recognize the value of it, many are choosing not to share, and this holds back medical progress.”

Linking It All Together

At Amplion we are fond of quoting David Haussler of UCSC who recently stated that “at the molecular level every disease is a rare disease,” and who at the same time bemoaned a growing forest of unconnected silos of patient data.

Haussler is a member of the steering committee of the Global Alliance for Genomics and Health (GA4GH), a rapidly expanding effort to provide, among other things, standards for sharing genomic data between databases.

If some kinds of standards can be agreed upon by the custodians of the largest genomic data repositories then actually combining databases will become unnecessary. A forest of data silos is fine as long as the silos are connected by the virtual hyphae of shared standards and other aspects of interoperability.
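One concrete embodiment of such shared standards is GA4GH’s Beacon project, which lets independent repositories answer a simple, standardized allele-presence query without ever merging their contents. Here is a minimal sketch of what that looks like from the querying side — the endpoints are hypothetical, and the exact parameter names are an assumption modeled on Beacon-style queries, not a definitive implementation:

```python
from urllib.parse import urlencode

def beacon_query_url(base_url, chrom, start, ref, alt, assembly="GRCh37"):
    """Build a Beacon-style allele query URL.

    Asks a repository: 'do you hold a variant with this reference/alternate
    allele at this position?' The parameter names below follow the general
    shape of Beacon queries but are assumptions for illustration.
    """
    params = {
        "referenceName": chrom,       # chromosome
        "start": start,               # position
        "referenceBases": ref,        # reference allele
        "alternateBases": alt,        # alternate allele
        "assemblyId": assembly,       # genome build
    }
    return f"{base_url}/query?{urlencode(params)}"

# Hypothetical silos that all speak the same standard -- no combined
# database required, just the same question asked of each:
silos = [
    "https://beacon.example-cancer-center.org",
    "https://beacon.example-genome-project.org",
]
urls = [beacon_query_url(s, "7", 140453136, "A", "T") for s in silos]
```

The point of the sketch is the shape of the interaction: because every silo exposes the same minimal interface, a researcher can fan one question out across all of them and aggregate the yes/no answers, which is exactly the “connected forest” alternative to one huge central database.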

Then we can start to take advantage of the innovative new genomic data analysis platforms that are starting to crop up. One such platform is being developed by SolveBio, which provides the “plumbing” to link hundreds of data sources into one elegant workspace (disclaimer: Amplion has recently signed a data sharing agreement with SolveBio).

Tute Genomics and Syapse are other players in this space, with Syapse being the most mature platform. Syapse was founded in 2008 and has an installed base that now includes multiple leading cancer centers.

Despite the promise of platforms like these, the resources and motivation necessary to assemble and make accessible the truly massive volume of genomic and phenotypic data that is likely required to empower “tricorder-level” diagnostic algorithms start to feel governmental in scale (and yes, I did just invoke a Star Trek device as the holy grail of disease diagnosis).

If that is indeed the reality of realizing the full potential of personalized medicine, who will be the lucky citizens of the country that claims victory in having the broadest and deepest understandings of the mechanisms of disease?

The U.S. recently committed $215 million to the President’s Precision Medicine Initiative, a central aspect of which is the characterization of 1 million genomes. Those both sound like big numbers, but the U.S. spends that amount on a single Joint High Speed Vessel (JHSV).

And while the JHSV is indeed a very cool craft, perhaps we could forgo a few of them, or trim some other expenses, and increase the investment in precision medicine, because most experts agree that 1 million genomes is a drop in the bucket of what will be needed to make personalized/precision medicine a ubiquitous aspect of modern healthcare.