XGenomes is bringing DNA sequencing to the masses


This post is by Jonathan Shieber from TechCrunch






As healthcare moves toward genetically tailored treatments, one of the biggest hurdles to truly personalized medicine is the lack of fast, low-cost genetic testing.

And few people are more familiar with the problems of today’s genetic diagnostics tools than Kalim Mir, the 52-year-old founder of XGenomes, who has spent his entire professional career studying the human genome.

“Ultimately genomics is going to be the foundation for healthcare,” says Mir. “For that we need to move toward a sequencing of populations.” And population-scale gene sequencing is something that current techniques are unable to achieve.

“If we’re talking about population scale sequencing with millions of people we just don’t have the throughput,” Mir says.

That’s why he started XGenomes, which is presenting as part of the latest batch of Y Combinator companies next week.

A visiting scientist in Harvard Medical School’s Department of Genetics, Mir worked with the

Continue reading “XGenomes is bringing DNA sequencing to the masses”

23andMe might soon offer a more comprehensive $749 DNA service


This post is by Sarah Buhr from TechCrunch






23andMe is testing a $749 “premium” service for deeper health insights, according to several customers who saw a test page for the new product and posted about it on Reddit.

First spotted by CNBC, the company served up a test web page to several customers telling them about a service that would allow them to look at their “whole genome data.” However, when they clicked on the link provided, nothing happened. A few Redditors even posited the notification may have been a mistake as the link led nowhere.

But, according to the company, there’s no error here. 23andMe later confirmed to TechCrunch it sent out a test page to some customers to “gauge interest” in such a product. However, there’s “nothing planned” at this time for such a service, according to a 23andMe spokesperson.

The consumer DNA company charges $299 for its highest package right now, and

Continue reading “23andMe might soon offer a more comprehensive $749 DNA service”

Human sequencing pioneer George Church wants to give you the power to sell your DNA on the blockchain


This post is by Sarah Buhr from TechCrunch






 The blockchain is the buzziest thing on the internet these days and now MIT professor and godfather of the Human Genome Project George Church wants to put your genes on it. His new startup Nebula Genomics plans to sequence your genome for less than $1,000 and then add your data to the blockchain through the purchase of a “Nebula Token.” Read More

Genetics startup Genos wants to pay you for your DNA data


This post is by Sarah Buhr from TechCrunch






The first whole human genome sequencing cost a whopping $2.7 billion. That didn’t bode well for making any breakthroughs on genetic disorders. Luckily, the cost has dropped dramatically since then, leading to a new breed of consumer genetics startups taking a deeper dive into all the double helixes that make up you.
Genos is one of those startups using a next-generation… Read More

You can now pull up your entire genome for under $1,000 on your smartphone


This post is by Sarah Buhr from TechCrunch






Veritas Genetics was one of the first companies to sequence the entire human genome for under $1,000 in 2015. It’s now taken that technology a step further by delivering the results of your entire genome in an app. To put in context just how radical this is, consider the first attempt at whole human genome sequencing required $3.7 billion to produce in 2001. It wasn’t until 2007… Read More

Genentech CEO: FDA needs more funding to review genome products


This post is by Mark Sullivan from VentureBeat







Above: Genomic data is all around us, the background art from Omicia’s website implies.

Image Credit: Omicia

SAN FRANCISCO — With the arrival of consumer-driven health care, more people will have access to their own genome as a way of guiding the management of their physical well-being.

But this has a downside, as 23andMe’s run-in with the FDA last year demonstrates. Misreading the unique markers in the genome might lead to false positives for diseases, or a false negative might prevent a person from getting necessary treatment.


Above: Genentech CEO Ian Clark

Image Credit: Rock Health

Genentech CEO Ian Clark, speaking today at Rock Health’s Health Information Symposium, said he understands both sets of concerns.

“I see both sides,” Clark said. “Should consumers have access to the information in their own genome? Definitely. But that information is diagnostic.”

And this is where the FDA gets involved.

Clark points out that reviewing genome-reading services is a new job for the FDA, and it’s one for which the agency hasn’t received any additional funding from Congress.

In the biotech industry, companies have pushed for changes in the law that enable them to pay for regulators to review new products. And “genome companies should be lobbying to do the same thing,” Clark said.

23andMe, by the way, has been working well with the FDA and is now awaiting the regulator’s approval on a new product for a rare but serious inherited condition called Bloom syndrome. As of June 27, the company’s 510(k) application had been received and OK’d for review by the FDA.








23andMe gets a $1.4M NIH grant, still awaits FDA approval


This post is by Mark Sullivan from VentureBeat







Late last year the FDA barred 23andMe from dispensing health information products to consumers based on analysis of their DNA.

But things have brightened up considerably in the nation’s capital for the company. 23andMe cofounder and CEO Anne Wojcicki has done well for herself and her company there in 2014. Lawmakers and regulators like and respect her.

23andMe’s latest Washington win comes in the form of a new grant from the National Institutes of Health (NIH).

The company will use the $1.4 million grant to access the whole human DNA sequence and discover rare variants associated with various types of diseases. Some of it will improve and expand the online survey tools 23andMe uses to gather health and ancestry information from users.

The money will also enable external researchers to access aggregate data from the 23andMe database. “23andMe is building a platform to connect researchers and consumers that will enable discoveries to happen faster,”  Wojcicki said in a statement yesterday. “This grant from the NIH recognizes the ability of 23andMe to create a unique, web-based platform that engages consumers and enables researchers from around the world to make genetic discoveries.”

When and if 23andMe scientists make new connections between diseases and genetic variants (which could be used to predict disease), the company plans to publish its findings in peer-reviewed scientific journals.

Overall, the two-year project will yield a database containing genotypes for 40 million genetic variants and information on thousands of diseases and traits for more than 400,000 individuals. Novel associations, especially with the rare genetic variants found by 23andMe, will be of great value for disease prediction, drug development, and biological understanding, 23andMe believes.

The NIH is not the FDA

What everybody wants to know is whether or not the Food and Drug Administration (FDA) will permit 23andMe to get back into the health genetics business.

But Epstein Becker Green attorney Brad Thompson warns that the NIH’s decision to make the grant says nothing about 23andMe’s progress with the FDA. Thompson helps medical device makers and others work with the FDA on compliance issues.

“People tend to lump the federal government all together and act as though they share one brain, when quite the opposite is true,” Thompson says in an email to VentureBeat. “It is a common everyday occurrence that one federal agency might invest in a new technology that another federal agency views with skepticism. So the NIH investment means nothing with regard to FDA.”

But Thompson isn’t suggesting that the FDA looks poorly on 23andMe — quite the opposite.

“… You need to appreciate that frankly I think FDA loves the technology; I think that many people at FDA personally believe that this technology offers enormous public health potential benefit,” Thompson says.

“If you read the FDA warning letter, the agency clearly invested a tremendous amount of time working with the company trying to guide them through the FDA process. That is extremely rare for FDA to invest that much time in one company.”

Thompson says, and it’s been reported, that the agency sent 23andMe the warning letter only when the company had stopped talking to the agency.

23andMe spokeswoman Catherine Afarian told VentureBeat that the communication breakdown was more the result of her company not knowing exactly how and when to communicate with the FDA.

But the company has been talking regularly with the FDA this year. Wojcicki has done more face time at the agency. The two parties, it’s believed, are coming to terms and have worked out a template for approving future 23andMe health products.

23andMe is tightlipped on the timing of the FDA’s approval of its first health product since the warning letter last year, a predictor for a rare but serious inherited condition called Bloom syndrome. As of June 27, the company’s 510(k) application had been received and OK’d for review by the FDA.

In the meantime, Afarian says, even though consumers can no longer get personal health DNA analysis information back from 23andMe, they continue to provide health information to the company via the online surveys.








DNAnexus builds online hub for scientists to store and share genetic data


This post is by Rebecca Grant from VentureBeat







We all have vast and complicated datasets buried within us, and genomics startup DNAnexus wants to set that data free. Or at least help scientists make sense of it.

DNAnexus has wrapped up a $15 million round for its cloud-based platform, which manages and analyzes large amounts of DNA data.

DNAnexus builds HIPAA (Health Insurance Portability and Accountability Act)-compliant solutions for genetic-sequencing facilities, diagnostic test providers, and research centers. Its technology provides a central hub where researchers can store and analyze large amounts of DNA data, with “unlimited scaling of computational and storage resources.”

As with all data, it’s what you do with it that matters. Scientists can also use DNAnexus’ platform to securely share data and collaborate with third parties such as universities, hospitals, statisticians, clinical labs, and biologists.
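To make that workflow concrete, here is a minimal, purely illustrative sketch of what a script against such a hub might look like. The GenomicsHub class, its methods, and the project and collaborator names are hypothetical stand-ins, not DNAnexus’s actual API.

```python
# Illustrative sketch only: GenomicsHub and its methods are hypothetical
# stand-ins for a HIPAA-compliant genomics platform, not DNAnexus's real API.
from dataclasses import dataclass, field


@dataclass
class GenomicsHub:
    project: str
    files: dict = field(default_factory=dict)
    shared_with: list = field(default_factory=list)

    def upload(self, name: str, path: str) -> None:
        """Register a raw sequencing file (e.g. FASTQ or BAM) under the project."""
        self.files[name] = {"path": path, "status": "uploaded"}

    def run_analysis(self, name: str, pipeline: str) -> None:
        """Queue a server-side pipeline (alignment, variant calling, etc.)."""
        self.files[name]["status"] = f"queued:{pipeline}"

    def share(self, collaborator: str) -> None:
        """Grant a third party (university, hospital, clinical lab) read access."""
        self.shared_with.append(collaborator)


hub = GenomicsHub(project="rare-disease-cohort")
hub.upload("patient_001.fastq", "/data/runs/patient_001.fastq")
hub.run_analysis("patient_001.fastq", pipeline="variant-calling")
hub.share("example-university-genetics-lab")
print(hub.files, hub.shared_with)
```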

Genomics is a relatively young and rapidly emerging field. It’s taking off right now thanks to technology that has dramatically reduced the time and cost required to sequence the human genome. The implications and opportunities of these developments are far-reaching, both in the medical world and for entrepreneurs exploring ways to harness this data.

Quick-and-easy genomics is crucial to usher in the era of “personalized medicine” — where doctors provide treatment of disease based on an individual’s unique characteristics.

Genomics also has huge potential for cancer research (as well as other diseases), and we are also seeing growing interest — and concern — in commercial genetic testing. Take the situation with 23andMe, which had sold DNA tests directly to consumers rather than to care providers. It has worked with more than 500,000 customers, who receive a health report with “clinical insights” and ancestral information based on their test. But the company and its testing methods have come under fire, as the FDA issued an enforcement action against 23andMe in November, saying that the kit falls under the medical device category and requires regulation.

Genomics is expected to be one of the most exciting trends of 2014, and DNAnexus works with enterprise organizations in the field to help them process massive amounts of genetic data at scale.

Collaboration is a huge part of scientific research and discovery. But collaborating with genomic data is not easy — the amount of data is massive, and it must be kept secure and compliant.

Krishna Yeshwant, a partner at Google Ventures and DNAnexus board member, said in a statement that “the next wave of insights in genetics comes from multi-institutional collaborative efforts producing huge amounts of data.”

This is where DNAnexus steps in.

The company recently announced partnerships with Stanford University and the Baylor College of Medicine, which are using DNAnexus’ platform to process genomes and make the resulting datasets available to researchers around the world. With these two projects alone, DNAnexus has processed over 17,000 genomes and generated more than 500 terabytes of genomics data.

Claremont Creek Ventures, Google Ventures, TPG Biotech, and First Round Capital participated in this third round of funding. DNAnexus is based in Mountain View, Calif., and has raised $31.6 million to date.



    



Genetics prof: Why I won’t waste my money on a DNA test in 2014


This post is by Christina Farr from VentureBeat







When a reporter for the New York Times set out to test three genetic tests, she received extremely varied results.

The direct-to-consumer tests set back writer Kira Peikoff anywhere from $99 to $399. In two months, she received a full set of health results about her likelihood of contracting a variety of adult-onset conditions. But each of the providers she evaluated (23andMe, Genetic Testing Laboratories and Pathway Genomics) interpreted the results differently. In one instance, she was warned about an elevated risk for rheumatoid arthritis; in another, a provider deemed her risk to be minimal.

This isn’t the first time that journalists have evaluated these tests, and it won’t be the last. Academics like Stanford’s Hank Greely have devoted years of research to exploring the ethical and legal implications of the new biomedicine technologies, particularly those related to genetics.

Peikoff received the results just as one of the companies, 23andMe, received a stern warning letter from the U.S. Food and Drug Administration over concerns about the clinical validity of its data. This set off a small firestorm and prompted debate about the future of genetic testing.

How should we interpret damning reports like this? Should we avoid genetic testing altogether? This is still such a nascent sector, so is it better to wait until the technology evolves? I asked Greely, a genetics professor and the director of the Center for Law and the Biosciences at Stanford University, to weigh in.

VentureBeat: Were you surprised that the New York Times writer received three different results?

Hank Greely: Not at all. The results were inconsistent, as the interpretations were all over the map. The analytic validity is strong, I believe. But each of these companies calls the various medical effects based on a different set of studies. What makes the results even more confusing is that the companies do not have the same baseline for what they consider “average.”

VentureBeat: If the results vary so much, do you think there’s much point in ordering a genetic test at all? 

Greely: With a few exceptions, the data derived from SNPs [pronounced snips] won’t show us anything particularly powerful or clear. You need full genome or whole exome sequencing [sequencing of the exome – the protein-encoding parts of all the genes – is beginning to dominate the headlines, thanks to its ability to diagnose diseases that were previously undiagnosable] rather than SNP chips, to get truly meaningful results. Exome and genome sequencing is not yet the norm. So for now, I think it’s better for people to save their money and not purchase any of these tests.

VentureBeat: 23andMe is now offering consumers their raw data. It’s not interpreting the results without FDA approval. Is it worthwhile to have access to that data?

Greely: Unless you are a world-class genetics researcher focused on correlations between SNPs and genetic disease, then no. You won’t be able to glean anything from the raw data. I would save my money and wait to sequence my full exome and genome and then pay for a full assessment with a clinical geneticist and genetic counselor.

VentureBeat: Do you predict that we’ll be able to cheaply sequence our exome, and potentially even genome, next year?

Greely: In 2014, it wouldn’t surprise me if you could get your whole exome sequenced for under $1,000. We just need one of these “next-generation sequencing technology” startups to work. I don’t have a strong sense yet of whether there will be a huge tradeoff between whole exome and whole genome. I might consider waiting a bit longer to get my whole genome sequenced for the same price.

VentureBeat: Any other predictions for 2014 for genetics?

Greely: It may be easier to match relatives in a database if the FBI moves ahead with its plan to increase the number of “CODIS markers.” You may soon definitively be matched in a DNA database to a sibling or cousin you never knew existed. We may see more criminal justice subpoenas or search warrants levied against SNP chip companies. A DNA sample could be matched to a crime scene.

We will also see fights in the courts about the language of genetic privacy laws and statutes. My strongest prediction is that genetics will get even more fascinating and unpredictable.





    



Why the FDA’s anvil dropped on 23andMe


This post is by Hank Greely from VentureBeat






A 23andMe testing kit. (Flickr/Leon Brocard)

This is a guest post by Law and Biosciences Professor Hank Greely. It was originally published on his Stanford University blog

On Friday, November 22, the Food and Drug Administration (FDA) sent a nastygram to 23andMe, the only remaining substantial pillar of the “direct to consumer” genomic testing industry. The letter is well worth reading in its entirety, but, in brief, it told 23andMe, in no uncertain terms, to stop marketing its “Saliva Collection Kit and Personal Genome Service” (the PGS) on the ground that it was an unapproved and uncleared “device” under the Federal Food, Drug, and Cosmetic Act.


Related: Read our in-depth analysis on why the FDA is targeting 23andMe.


It noted that the PGS, because of its lack of approval or clearance, was both adulterated and misbranded under the Act; the distribution of adulterated or misbranded devices is a federal crime.

23andMe’s initial response, posted on its blog by an unnamed press spokesperson, was, at least on its face, conciliatory.

We have received the warning letter from the Food and Drug Administration. We recognize that we have not met the FDA’s expectations regarding timeline and communication regarding our submission. Our relationship with the FDA is extremely important to us and we are committed to fully engaging with them to address their concerns.

What’s going on — and what does it mean in the long run?

There has long been controversy over whether and how the FDA should, or can, regulate genetic tests, whether delivered, as 23andMe does, direct-to-consumer or through a medical professional.  These issues simmer below the surface of the cease and desist letter, but it is not clear that, at anything other than the deepest level, they are immediately relevant.

It appears that, for whatever reason, 23andMe chose to ignore the FDA — and the FDA chose not to take it anymore. The FDA letter says:

Months after you submitted your 510(k)s and more than 5 years after you began marketing, you still had not completed some of the studies and had not even started other studies necessary to support a marketing submission for the PGS. It is now eleven months later, and you have yet to provide FDA with any new information about these tests. You have not worked with us toward de novo classification, did not provide the additional information we requested necessary to complete review of your 510(k)s, and FDA has not received any communication from 23andMe since May. Instead, we have become aware that you have initiated new marketing campaigns, including television commercials that, together with an increasing list of indications, show that you plan to expand the PGS’s uses and consumer base without obtaining marketing authorization from FDA. 

Note particularly the last two sentences, on the lack of communication and on the new marketing campaigns.

This sounds as though 23andMe started down the FDA path but, six months ago, not only stopped communicating with the agency but started new and bigger marketing efforts. That suggests that 23andMe did not just ignore the FDA, but, while walking briskly past it, quickly turned and spat in its face.

Now, I have no inside information on this dispute.  I have the (scathing) FDA letter and the (timid) 23andMe initial response. Undoubtedly, something was going on other than lost messages at the 23andMe side.  I don’t know whether that something was a plan to fight the FDA’s jurisdiction on statutory / novel constitutional grounds, or a hope that the FDA would back down, or something else entirely. But if you ignore the FDA for six months, it is not surprising that the agency is going to be unhappy with you.

So this may just be a slap down of an insubordinate, or royally disorganized, regulated firm. But does it say anything more about the future of FDA regulation of genetic tests?

Well, it does suggest that the FDA thinks it can and should regulate those tests. The letter lays out its reasons: “FDA is concerned about the public health consequences of inaccurate results from the PGS device; the main purpose of compliance with FDA’s regulatory requirements is to ensure that the tests work.” The letter gives examples, from BRCA 1 and 2 to pharmacogenomic tests where the accuracy, and the patient’s actions on, the 23andMe tests have potentially serious health implications.

Some have argued that the FDA does not have jurisdiction over these direct-to-consumer tests, and they include some very smart colleagues and friends of mine. How can a plastic tube for collecting saliva be a regulated device? Welcome to FDA law.  The federal statute defines “devices” (“medical” doesn’t appear in the law) as, among other things,

an instrument, apparatus, implement, machine, contrivance, implant, in vitro reagent, or other similar or related article … which is –

(2) intended for use in the diagnosis of disease or other conditions, or in the cure, mitigation, treatment, or prevention of disease, in man or other animals

As the FDA’s letter points out, 23andMe markets its genetic tests as,

providing “health reports on 254 diseases and conditions,” including categories such as “carrier status,” “health risks,” and “drug response,” and specifically as a “first step in prevention” that enables users to “take steps toward mitigating serious diseases” such as diabetes, coronary heart disease, and breast cancer. Most of the intended uses for PGS listed on your website, a list that has grown over time, are medical device uses under section 201(h) of the FD&C Act.

A plastic tube for collecting spit doesn’t look much like an MRI machine or an implantable left ventricular assist device, but it is an “article” that, from its marketing information, 23andMe “intend[s] for [medical] use.”

Of course, the decision about jurisdiction (if 23andMe actually dares make it) will ultimately be one of statutory interpretation for the courts, informed by and giving appropriate deference to reasonable interpretations by the agency charged with implementing the statute — the FDA.  One never knows about courts, but I think the 23andMe jurisdictional argument is a loser. And, a reality of regulated industries, especially those regulated by the FDA, is that fighting your regulator may well, in the long run, be a very bad idea.


Continue Reading …


    



Can you trust Facebook with your genetic code?


This post is by Christina Farr from VentureBeat






23andMe cofounder Anne Wojcicki (Photo: Thomas Hawk)

We share everything on Facebook: our family photos, intimate thoughts, relationship woes.

Some of us even post our DNA.

Thousands of Americans are sharing results of genealogy tests on social media sites like Facebook, even posting their entire genome on GitHub and GenomesUnzipped.

But is it safe for patients to share DNA with a private company and then post test results on the Internet? That’s not so clear.

Genealogy companies like 23andMe that analyze your genetic data encourage this kind of sharing, and they say that it is safe. For just $99, you can send 23andMe a sample of your DNA, and it will send you a full report, replete with information about your health and ancestry, and give you options to share the data online and connect with people you might be distantly related to. It can also be used by prospective parents to determine the risk that their future offspring will inherit a genetic condition.

23andMe’s goal, according to its marketing materials, is to grow to one million customers by the end of the year. The company is compiling the world’s largest “genetic data resource,” its chief executive Anne Wojcicki recently said, to “address unanswered questions related to the contributions of genes, the environment, and your health.” Collecting your data and analyzing it en masse with others’ data is part of the company’s fundamental mission.

23andMe spokespeople and former employees I spoke with did not seem all that concerned about the possible privacy risks of sharing DNA data online.

“There are always risks, but people have to make their own decisions,” said 23andMe spokesperson Catherine Afarian in a phone interview. “People post all sorts of things without regard to consequences — it’s not an issue specific to genetics.”

Should you have the right to share genetic information?

23andMe’s basic premise, and its go-to response to privacy questions, is that people should have control over their own health data.

“It’s not entirely risk-free [to share test results on Facebook],” Afarian admitted. “But again, our mission is to give individuals choice over how their information is shared.”

Geneticist and entrepreneur Dr. Dietrich Stephan, who says patients who share DNA results on Facebook may risk genetic discrimination

But Dr. Dietrich Stephan, a human geneticist, and the founder of Silicon Valley Biosciences, is concerned that people aren’t making informed decisions. Stephan lists several “unforeseen consequences” of sharing genetic information on the Web.

“If you were malicious, you could deny people health insurance or life insurance,” he said.

But when you share your DNA online, it’s not just your own information you are sharing. Because you share many genes with members of your family, you are potentially sharing their information too.

Even if you buy the argument that everyone should have control over their data, how about your brother’s data? Or your future offspring’s?

Dr. Stephan has been embroiled in this debate for years. He started Navigenics in 2007, a 23andMe rival that was acquired by Life Technologies Corp in 2012 (the final sum for this acquisition was never disclosed, but sources tell me that Navigenics struggled with its revenues.)

“It’s one thing to post your DNA on Facebook,” said Missy Krasner, executive in residence at Morgenthaler Ventures, who formerly worked at Google Health (Google Ventures and Google Corporate are investors in 23andMe) and consults at Box. “But you’re also invading the privacy of people you’re related to.”

Continue Reading …


    



Using big data to cure cancer, Bina ushers in new era of medicine


This post is by Rebecca Grant from VentureBeat






Dr. Narges Bani Asadi says cancer is a genetic disease, and she is using technology to fight it.

Asadi is the founder and CEO of Bina, a healthcare startup working to make ‘personalized medicine’ a reality. Bina applies big data analytics to genomics, making it possible to sequence the human genome in a matter of hours rather than days or weeks.

Today, Bina launched its commercial product. The platform provides physicians, clinicians, and researchers with a detailed picture of a patient’s health. From there, they can make data-driven diagnoses and prescribe individualized courses of treatment.

“Medicine today is very experimental,” said founder and CEO Narges Bani Asadi in an interview with VentureBeat. “Before, there was a bottleneck to crunch the massive amount of genomic data. At Bina, we have created the fastest, most highly accurate, cost-efficient processing solution available in the market today. The next step is to incorporate this genomic data into medical use. Data-driven, information-based medicine is much more targeted. Personalizing therapies for different diseases means a longer and healthier quality of life for all humans.”

There are thousands of genetic disorders. In 2013, over 580,000 Americans are expected to die of cancer. One in 20 babies born in the U.S. is admitted into the neonatal intensive care unit, and 20 percent of infant deaths result from congenital or chromosomal defects. Technology can be used to curb these terrifying trends. Bina’s role is to bridge the gap between DNA sequencing technology and the diagnosticians and clinicians who can apply it to their practice.

“The study of genomics has largely been a research activity done in medical schools and universities,” said Mark Sutherland, Bina’s senior vice president of business development. “They could only look at a few samples at a time because it was too expensive or complicated to do it at scale. There is a tidal wave of data that has not been manageable or in a format physicians can understand. Now we are seeing an inflection point. Sequencing is a powerful way of looking across a broad spectrum to provide insight into the cause of certain diseases and conduct risk assessments, early detection, or predict the possibility of recurrence. It can also be used to find applicable therapies and customize treatments.”

Asadi said her team had to achieve innovations in every step of genetic processing in order to create a scalable, marketable, effective solution. Bina’s platform includes a hardware box to collect DNA, advanced software to process the data, and applications to turn the data into actionable form. Whereas before a full genetic analysis took weeks or months and could cost thousands of dollars, Bina turns it around within hours for around $200 a sample.

The technology emerged out of Asadi’s PhD work at Stanford. She collaborated with professors from around the world to apply high performance computing and computer architecture to gain a new understanding of human health and disease. Bina was founded in 2011 by three professors from the University of California at Berkeley and Stanford. It is backed by venture funding, and pilot customers include the Stanford Genetics Department and the Palo Alto Veterans Affairs Hospital.

Startups don’t often set out to cure cancer or prevent infant mortality. However, as technology continues to evolve, and the healthcare industry along with it, a medical system where diagnoses and treatments are based on hard data, and where each and every individual is treated as such, could be on the horizon.

Read a VentureBeat guest post by Dr. Asadi: The personalized medicine revolution is almost here.




Syapse Raises $3M From The Social+Capital Partnership To Bring Genomic Information To Everyday Healthcare


This post is from TechCrunch







Founded at Stanford University in 2008, Syapse has been on a mission to help disrupt healthcare by incorporating “omics” (or the study of fields within biology typically ending in “-omics,” like genomics, proteomics or metabolomics, for example) data into standard medical use. Simply put, Syapse is developing a suite of cloud-based applications that brings next-gen genomic sequencing to laboratories and clinics to help better diagnose and treat their patients. Through its software, the company is now processing over 3,000 genomes and has over 100 billion “omics” and clinical data points and 10 terabytes of genomic data under management, making it one of the largest repositories of human genome and clinical data out there.

Based on its traction thus far, Syapse is today announcing that it has raised $3 million in series A financing led by The Social+Capital Partnership. As a result of the investment, which brings its total funding to $4.5 million, Social+Capital founder Chamath Palihapitiya will be joining the startup’s board of directors.

So, what is it about the startup that has veteran investors like Palihapitiya so interested? The startup is working with companies like InVitae to make a new generation of clinical genetic testing accessible and affordable for the average Joe, and co-founder Glenn Winokur says that he sees startups like 23andMe — which have been making headlines of late for their progress in this regard — as potential customers down the road.

InVitae co-founder Randy Scott says that the collective goal companies like his and 23andMe are working toward is to bring genomics into everyday clinical practices and to enable “truly personalized” medicine. But to get there, companies need access to a whole new kind of scientific information infrastructure that enables them to connect genetic and clinical information across millions of individuals. And that’s where Syapse comes in.

The company’s flagship application, Syapse Discovery, provides companies and laboratories with an end-to-end solution that allows them to deploy next-gen sequencing-based diagnostics tests. Through its semantic data structure, Syapse users are able to bring “omics” data together with traditional medical information to both develop and deliver these diagnostic tests, while integrating their existing tools, sequencers and workflows.
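As a rough illustration of what bringing “omics” data together with traditional medical information can mean in practice, the toy sketch below links variant calls to a patient’s clinical record as subject-predicate-object triples. The identifiers and field names are invented for the example and are not Syapse’s actual data model.

```python
# Toy "semantic" store linking omics results to clinical records as
# (subject, predicate, object) triples. Identifiers and field names are
# invented for illustration; this is not Syapse's actual data model.
triples = [
    ("patient:42", "has_diagnosis", "condition:breast_cancer"),
    ("patient:42", "has_sample", "sample:tumor_biopsy_1"),
    ("sample:tumor_biopsy_1", "has_variant", "variant:BRCA1_c.68_69delAG"),
    ("variant:BRCA1_c.68_69delAG", "reported_in_test", "test:hereditary_panel_v2"),
]


def query(subject=None, predicate=None, obj=None):
    """Return triples matching the given fields (None acts as a wildcard)."""
    return [
        t for t in triples
        if (subject is None or t[0] == subject)
        and (predicate is None or t[1] == predicate)
        and (obj is None or t[2] == obj)
    ]


# Which variants were observed in samples taken from patient 42?
for _, _, sample in query(subject="patient:42", predicate="has_sample"):
    for _, _, variant in query(subject=sample, predicate="has_variant"):
        print(sample, "->", variant)
```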

Winokur says that the software is in fact compatible “with all major sequencers” and is used by “new system providers like GenapSys to automate sequencing workflows and cloud connect machines to track genomic profiles and quality metrics.”

As for where he sees this all going, the Syapse co-founder says that the ultimate goal is for hospitals and physicians to be able to access and use genomics information in patient care. “That is one of the biggest keys to unlocking the future of healthcare, and we’re working with our early customers to help make that happen.”


As Yahoo Sales Reorg Proceeds, Former Interclick CEO Katz Departs


This post is from AllThingsD » Kara Swisher







Michael Katz, one of Yahoo’s high-ranking online advertising execs, is leaving the company, according to a memo he sent out to staff on Friday.

Considered a savvy online ad player and a well-regarded entrepreneur, Katz came to the Silicon Valley Internet giant a year ago when it bought Interclick, the ad-targeting company he co-founded and headed, for $270 million.

Yahoo later used Interclick’s technology in its audience-buying platform called Genome. In a reorganization announced in January, Katz was placed in charge of sales operations and data and performance optimization for Genome.

The data unit is at the center of efforts by Yahoo’s new CEO Marissa Mayer to turbocharge its ad business.

But the Katz missive, which is below in its entirety, clearly signaled that his departure was not an amicable one, which sources underscored was part of a larger rejiggering of the ad sales staff under new COO Henrique De Castro.

“As some of you are starting to learn, my last day with the company will be today,” wrote Katz on Friday. “Leaving Y! is not the hard part — how it happened and leaving all of you is what makes this difficult.”

How it happened, said several sources, was that Katz was suddenly told by HR head Jackie Reses last week that there was not a place for him, only days before a large 12-month retention bonus was to be paid out to him for the Interclick acquisition.

While it is an unusual thing to part on willfully difficult terms with an entrepreneur, as it sends a bad signal to others considering joining the company, Yahoo’s new leadership has been playing hardball with a lot of top execs it is parting ways with, and is also limiting departure packages.

Former marketing head Mollie Spillman, for example, was suddenly let go after she was replaced by former Lockerz CEO Kathy Savitt. And, though he had wanted to leave, former CFO Tim Morse was also told of his replacement in a swift exec house-cleaning move, as was former HR head David Windley.

Of course, such moves are not unusual when a new set of leaders enters the corporate picture. That’s why many at Yahoo expect even more changes to come soon in the ad unit, with most assuming that Mayer and De Castro will bring in new staff they had previously worked with at Google.

Currently, top Yahoo ad execs include Peter Foster, GM of audience advertising at Yahoo; and Mark Ellis, VP of North American sales and global partnerships.

It will be interesting to see what happens to them and others as part of a large ad reorg at Yahoo now taking place, which will definitely include a variety of departures and arrivals. One recent notable Yahoo ad exec departure, for example, was Debbie Menin, who headed entertainment and travel sales strategy, and is now a top sales exec at hot video entertainment network Machinima.

More will come in the new year, given that De Castro has recently briefed employees on a plan to move its sales organization to a “category” model. Simply put, that means its sales reps will sell all of Yahoo’s ad products, as well as its search offerings, in a vertical process organized around advertiser segments.

That massive shift is not Katz’s to worry about anymore, it seems. Here’s his entire email to staff, which is a pretty eloquent one, as goodbye letters go:

Friends and Colleagues,

6 years ago if someone would have told me that the hardest part about building a business would be to one day say goodbye, I would not have believed them. As some of you are starting to learn, my last day with the company will be today. Leaving Y! is not the hard part — how it happened and leaving all of you is what makes this difficult. I will miss the daily interactions and will take with me the many memories. This has honestly been so much fun.

I have learned a lot along the way:

Sometimes winning looks like losing. If you don’t fail, you can’t progress and the stakes only get bigger as you go further down your path.

Be genuinely happy for those that are successful at reaching their goals. If you spend any time wishing it were you, it will never be.

Stay humble, and never declare victory.

Approximately correct is better than definitely wrong. Do not let perfection be the enemy of excellence.

We are all human — we may make mistakes, we must forgive, forget, and move on together.

Treating people right is not an option.

Treat adults like adults and they will behave like adults. Rules are for children.

People and culture are everything. It’s about so much more than free food and parties, it cannot be forced and without it a business cannot succeed.

I consider myself so very lucky to have known a handful of loyal friends that took a chance, quit their jobs and risked a lot to build this business with me. Their loyalty and hard work helped interclick get off the ground and for that I will forever be grateful. The team they helped to build has truly made this the greatest place to work. Each and every one of you made interclick the very best company to work for.

I would like to leave you all with a reminder of what together we built:

– A company that started with $27,000 and sold for $270,000,000

– A company that redefined the way that marketers think about audience targeting and data

– A company that went public in 2009 on NASDAQ defying all odds

– A company that spit in the face of adversity early in 2011 and came out victorious

– A company whose people are the future of this organization.

So be proud of what together we achieved, look back and know you were part of something big. Then look ahead and know that this is just the beginning, we will all one day build again. For those of you that continue your career at Y!, I ask that you don’t lose sight of greatness. Remember what you are capable of and continue to make me proud.

Thank you for your loyalty, passion, dedication, and collaboration. The finish line is only the beginning of a new race.

-MK

Yahoo’s big data play Genome is smart, but …


This post is from GigaOM






Yahoo is looking to leverage its big data prowess with a new tool for marketers called Genome. Combining data, analytics and technology derived from its interclick acquisition and from its advertising deals with Microsoft and AOL, Genome, Yahoo says, helps marketers “understand consumer needs, anticipate audiences’ future performance, and … develop efficient media buys.” It’s a smart use of Yahoo’s vast collection of data and in-house analytic expertise, but, as with all things Yahoo, the big concern is that it’s too little, too late.

Yahoo has had a tough few years, no less so this weekend with its CEO’s resignation, but Genome is a breath of fresh air. In some ways, it looks like an acknowledgement that while Yahoo might not rule search or social or, well, any facets of the web, it knows a heck of a lot about analytics. That’s true — as I pointed out yesterday. Yahoo’s heavy investment in (and use of) Hadoop was great for the community, but Yahoo never really evolved its core web platform business in a way that allowed it to make full use of its newfound smarts.

Genome remedies that problem, albeit with a rather sharp left turn. One might look at it as Yahoo’s way of saying, “We need a new channel for making money, so we’ll just sell you analytics as a service.” It’s not an unheard-of idea: Google does it to some degree with the aptly named Google Analytics and Google Trends, as well as its growing list of services such as BigQuery and Prediction API. But Genome is broader and more targeted at the same time. It’s clearly designed for marketers, but it provides data from a variety of non-Yahoo sources and even lets users integrate some of their own data.

As we’ve reported recently, analytics services targeting online marketing budgets are a big business right now, and Yahoo should be a welcome addition to the party — maybe it can even be a VIP. In a space dominated by startups and smallish private companies, Yahoo is something of a big fish. Perhaps this is the beginning of pivot in business models that could give Yahoo new life.

My colleague Ki Mae Heussner was at the unveiling of Genome at Internet Week Monday morning, where she said Yahoo’s Rich Riley announced it onstage with Oakland A’s general manager and Moneyball inspiration Billy Beane. Riley said that what Beane is for baseball, Yahoo wants to be for advertising. I assume Riley was talking about using analytics to help Genome users stay in the race against larger, richer competitors. For Yahoo itself as a web platform, the scrappy-underdog-that-triumphs ship might have sailed.






How federal money will spur a new breed of big data


This post is from GigaOM






If you think Hadoop and the current ecosystem of big data tools are great, “you ain’t seen nothing yet,” to quote Bachman Turner Overdrive. By pumping hundreds of millions of dollars a year into big data research and development, the Obama administration thinks it can push the current state of the art well beyond what’s possible today, and into entirely new research areas.

It’s a noble goal, but also a necessary one. Big data does have the potential to change our lives, but to get there it’s going to take more than startups created to feed us better advertisements.

Consumer data is easy to get, and profitable

It’s not fair to call the current state of big data problematic, but it is largely focused on profit-centric technologies and techniques. That’s because as companies — especially those in the web world — realized the value they could derive from advanced data analytics, they began investing huge amounts of money in developing cutting-edge techniques for doing so. For the first time in a long time, industry is now leading the academic and scientific research communities when it comes to technological advances.

As Brenda Dietrich, IBM Fellow and vice president for business analytics for IBM Software (and former VP of IBM’s mathematical sciences division), explained to me, universities are still doing good research, but students are leaving to work at companies like Google and Facebook as soon as their graduate or Ph.D. studies are complete, oftentimes beforehand. Research begun in universities is continued in commercial settings, generally with commercial interests guiding its direction.

And this commercial focus isn’t ideal for everyone. For example, Sultan Meghji, vice president of product strategy at Appistry, told me that many of his company’s government- and intelligence-sector customers aren’t getting what they expected out of Hadoop, and they’re looking for alternative platforms. Hadoop might well be the platform of choice for large web and commercial applications — indeed, it’s where most of those companies’ big data investments are going — but it has its limitations.

Enter federal dollars for big data

However, as John Holdren, assistant to the president and director of White House Office of Science and Technology Policy, noted during a White House press conference on Thursday afternoon, the Obama administration realized several months ago that it was seriously under-investing in big data as a strategic differentiator for the United States. He was followed by leaders from six government agencies explaining how they intend to invest their considerable resources to remedy this under-investment. That means everything from the Department of Defense, DARPA and the Department of Energy developing new techniques for storage and management, to the U.S. Geological Survey and the National Science Foundation using big data to change the way we research everything from climate science to educational techniques.

How’s it going to do all this, apart from agencies simply ramping up their own efforts? Doling out money to researchers. As Zach Lemnios, Assistant Secretary of Defense for Research & Engineering for the Department of Defense, put it, “We need your ideas.”

IBM’s Dietrich thinks increased availability of government grants can play a major role in keeping researchers in academic and scientific settings rather than bolting for big companies and big paychecks. Grants can help steer research away from targeted advertising and toward areas that will “be good … for mankind at large,” she said.

The 1,000 Genomes Project data is now freely available to researchers on Amazon’s cloud.

Additionally, she said, academic researchers have been somewhat limited in what they can do because they haven’t always had easy access to meaningful data sets. With the government now pushing to open its own data sets, as well as for collaborative research among different scientific disciplines, she thinks there’s a real opportunity for researchers to conduct better experiments.

During the press conference, Department of Energy Office of Science Director William Brinkman expressed his agency’s need for better personnel to program its fleet of supercomputers. “Our challenge is not high-performance computing,” he said, “it’s high-performance people.” As my colleague Stacey Higginbotham has noted in the past, the ranks of Silicon Valley companies are deep with people who might be able to bring their parallel-programming prowess to supercomputing centers if the right incentives were in place.

Self-learning systems, a storage revolution and a cure for cancer?

As anyone who follows the history of technology knows, government agencies have been responsible for a large percentage of innovation over the past half century, taking credit for no less than the Internet itself. “You can track every interesting technology in the last 25 years to government spending over the past 50 years,” Appistry’s Meghji said.

Now, the government wants to turn its brainpower and money to big data. As part of its new, roughly $100-million XDATA program, DARPA Deputy Director Kaigham “Ken” Gabriel said his agency “seek[s] the equivalent of radar and overhead imagery for big data” so it can locate a single byte among an ocean of data. The DOE’s Brinkman talked about the importance of being able to store and visualize the staggering amounts of data generated daily by supercomputers, or by the second from CERN’s Large Hadron Collider.

IBM’s Dietrich also has an idea for how DARPA and the DOE might spend their big data allocations. “When one is doing certain types of analytics,” she explained, “you’re not looking at single threads of data, you tend to be pulling in multiple threads.” This makes previous storage technologies designed to make the most-accessed data the easiest to access somewhat obsolete. Instead, she said, researchers should be looking into how to store data in a manner that takes into account the other data sets typically accessed and analyzed along with any given set. “To my knowledge,” she said, “no one is looking seriously at that.”
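A toy sketch of the kind of co-access-aware placement Dietrich describes might look like the following: record which data sets get pulled together by the same analytics jobs, then co-locate frequently paired sets on a faster tier. The thresholds, tier names, and workload below are arbitrary choices for illustration, not anyone’s actual system.

```python
# Toy sketch of co-access-aware data placement: count how often pairs of
# data sets are read by the same job, then co-locate frequent pairs on a
# fast tier. Thresholds, tier names, and the workload are made up.
from collections import Counter
from itertools import combinations

co_access = Counter()


def record_job(datasets):
    """Record one analytics job that pulled several data sets together."""
    for pair in combinations(sorted(datasets), 2):
        co_access[pair] += 1


# Simulated workload of jobs that pull multiple threads of data at once.
record_job(["genotypes", "phenotypes", "survey_responses"])
record_job(["genotypes", "phenotypes"])
record_job(["genotypes", "reference_panel"])


def placement(threshold=2):
    """Pairs read together at least `threshold` times belong on the fast tier."""
    return {pair: ("fast-tier" if count >= threshold else "cold-tier")
            for pair, count in co_access.items()}


print(placement())
```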

Not surprisingly given his company’s large focus on genetic analysis, Appistry’s Meghji is particularly excited about the government promising more money and resources in that field. For one, he said, the Chinese government’s Beijing Genomics Institute probably accounts for anywhere between 25 and 50 percent of the genetics innovation right now,  and “to see the U.S. compete directly with the Chinese government is very gratifying.”

But he’s also excited about the possibility of seeing big data turned to areas in genetics other than cancer research — which is presently a very popular pastime — and generally toward advances in real-time data processing. He said the DoD and intelligence agencies are typically two to four years ahead of the rest of the world in terms of big data, and increased spending across government and science will help everyone else catch up. “It’s all about not just reacting to things you see,” he said, “but being proactive.”

Indeed, the DoD has some seriously ambitious plans in place. Assistant Secretary Lemnios explained during the press conference how previous defense research has led to technologies such as IBM’s Watson system and Apple’s Siri that are becoming part of our everyday lives. Its latest quest: utilize big data techniques to create autonomous systems that can adapt to and act on new data inputs in real time, but that know enough to know when they need to invite human input on decision-making. Scary, but cool.





Strata Week: Genome research kicks up a lot of data


This post is from O'Reilly Radar - Insight, analysis, and research about emerging technologies.






Here are a few of the data stories that caught my attention this week.

Genomics data and the cloud

Bootstrap DNA by Charles Jencks, 2003 (photo: mira66, on Flickr)

GigaOM’s Derrick Harris explores some of the big data obstacles and opportunities surrounding genome research. He notes that:

When the Human Genome Project successfully concluded in 2003, it had taken 13 years to complete its goal of fully sequencing the human genome. Earlier this month, two firms — Life Technologies and Illumina — announced instruments that can do the same thing in a day, one for only $1,000. That’s likely going to mean a lot of data.

But as Harris observes, the promise of quick and cheap genomics is leading to other problems, particularly as the data reaches a heady scale. A fully sequenced human genome is about 100GB of raw data. But citing DNAnexus founder Andreas Sundquist, Harris says that:

… volume increases to about 1TB by the time the genome has been analyzed. He [Sundquist] also says we’re on pace to have 1 million genomes sequenced within the next two years. If that holds true, there will be approximately 1 million terabytes (or 1,000 petabytes, or 1 exabyte) of genome data floating around by 2014.

That makes the promise of a $1,000 genome sequencing service challenging when it comes to storing and processing petabytes of data. Harris posits that it will be cloud computing to the rescue here, providing the necessary infrastructure to handle all that data.
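A quick back-of-the-envelope check of the figures cited above (about 100 GB of raw data per genome, roughly 1 TB once analyzed, and a projected one million sequenced genomes) bears out the exabyte estimate:

```python
# Back-of-the-envelope check of the storage figures cited above.
RAW_GB_PER_GENOME = 100        # ~100 GB of raw data per sequenced genome
ANALYZED_TB_PER_GENOME = 1     # ~1 TB per genome after analysis
GENOMES = 1_000_000            # projected number of sequenced genomes

raw_total_pb = GENOMES * RAW_GB_PER_GENOME / 1_000_000        # GB -> PB
analyzed_total_pb = GENOMES * ANALYZED_TB_PER_GENOME / 1_000  # TB -> PB

print(f"raw data:      {raw_total_pb:,.0f} PB")       # 100 PB
print(f"analyzed data: {analyzed_total_pb:,.0f} PB")  # 1,000 PB, about 1 exabyte
```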


Stanley Fish versus the digital humanities

Literary critic and New York Times opinionator Stanley Fish has been on a bit of a rampage in recent weeks, taking on the growing field of the “digital humanities.” Prior to the annual Modern Language Association meeting, Fish cautioned that alongside the traditional panels and papers on Ezra Pound and William Shakespeare and the like, there were going to be a flood of sessions devoted to:

…’the digital humanities,’ an umbrella term for new and fast-moving developments across a range of topics: the organization and administration of libraries, the rethinking of peer review, the study of social networks, the expansion of digital archives, the refining of search engines, the production of scholarly editions, the restructuring of undergraduate instruction, the transformation of scholarly publishing, the re-conception of the doctoral dissertation, the teaching of foreign languages, the proliferation of online journals, the redefinition of what it means to be a text, the changing face of tenure — in short, everything.

That “everything” was narrowed down substantially in Fish’s editorial this week, in which he blasted the digital humanities for what he sees as its fixation “with matters of statistical frequency and pattern.” In other words: data and computational analysis.

According to Fish, the problem with digital humanities is that this new scholarship relies heavily on the machine — and not the literary critic — for interpretation. Fish contends that digital humanities scholars are all teams of statisticians and positivists, busily digitizing texts so they can data-mine them and systematically and programmatically uncover something of interest — something worthy of interpretation.

University of Illinois, Urbana-Champaign English professor Ted Underwood argues that Fish not only mischaracterizes what digital humanities scholars do, but he misrepresents how his own interpretive tradition works:

… by pretending that the act of interpretation is wholly contained in a single encounter with evidence. On his account, we normally begin with a hypothesis (which seems to have sprung, like Sin, fully-formed from our head), and test it against a single sentence.

One of the most interesting responses to Fish’s recent rants about the humanities’ digital turn comes from University of North Carolina English professor Daniel Anderson, who demonstrates in a video a far fuller picture of what “digital” “data” — creation and interpretation — looks like.

Hadoop World merges with O’Reilly’s Strata New York conference

Two of the big data events announced they’ll be merging this week: Hadoop World will now be part of the Strata Conference in New York this fall.

[Disclosure: The Strata events are run by O’Reilly Media.]

Cloudera first started Hadoop World back in 2009, and as Hadoop itself has seen increasing adoption, Hadoop World, too, has become more popular. Strata is a newer event — its first conference was held in Santa Clara, Calif., in February 2011, and it expanded to New York in September 2011.

With the merger, Hadoop World will be a featured program at Strata New York 2012 (Oct. 23-25).

In other Hadoop-related news this week, Strata chair Edd Dumbill took a close look at Microsoft’s Hadoop strategy. Although it might be surprising that Microsoft has opted to adopt an open source technology as the core of its big data plans, Dumbill argues that:

Hadoop, by its sheer popularity, has become the de facto standard for distributed data crunching. By embracing Hadoop, Microsoft allows its customers to access the rapidly-growing Hadoop ecosystem and take advantage of a growing talent pool of Hadoop-savvy developers.

Also, Cloudera data scientist Josh Wills takes a closer look at one aspect of that ecosystem: the work of scientists whose research falls outside of statistics and machine learning. His blog post specifically addresses one use case for Hadoop — seismology, for which there is now Seismic Hadoop — but the post also provides a broad look at what constitutes the practice of data science.

Got data news?

Feel free to email me.

Photo: Bootstrap DNA by Charles Jencks, 2003 by mira66, on Flickr


As genomics data approaches exascale, cloud could save the day


This post is from GigaOM


Click here to view on the original site: Original Post




Life is about to get a lot easier for medical researchers, but a lot more difficult for companies trying to make a buck selling them tools to store and analyze genomic data. When the Human Genome Project successfully concluded in 2003, it had taken 13 years to complete its goal of fully sequencing the human genome. Earlier this month, two firms — Life Technologies and Illumina — announced instruments that can do the same thing in a day, one for only $1,000. That’s likely going to mean a lot of data.

1TB times 1 million equals …

How much data is anybody’s guess, but the exponential increases in productivity suggest it will be in the exabyte range within a few years. A fully sequenced human genome results in about 100GB of raw data, although DNAnexus Founder and CEO Andreas Sundquist told me that volume increases to about 1TB by the time the genome has been analyzed. He also says we’re on pace to have 1 million genomes sequenced within the next two years. If that holds true, there will be approximately 1 million terabytes (or 1,000 petabytes, or 1 exabyte) of genome data floating around by 2014.
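Sundquist’s arithmetic is easy to check. Here is a minimal sketch using only the figures quoted above; the projection itself is his, not a new estimate.

```python
# Figures quoted in the article, not new measurements.
RAW_BYTES_PER_GENOME = 100e9       # ~100GB of raw data per sequenced genome
ANALYZED_BYTES_PER_GENOME = 1e12   # ~1TB once the genome has been analyzed
GENOMES_PROJECTED = 1_000_000      # Sundquist's projection for the next two years

total = GENOMES_PROJECTED * ANALYZED_BYTES_PER_GENOME
print(f"{total / 1e15:,.0f} PB, i.e. {total / 1e18:.0f} EB of analyzed genome data")
# -> 1,000 PB, i.e. 1 EB

print(f"raw reads alone: {GENOMES_PROJECTED * RAW_BYTES_PER_GENOME / 1e15:,.0f} PB")
# -> raw reads alone: 100 PB
```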

A few years ago, Complete Genomics publicly announced its plan to sequence a million genomes by 2014, but it has been woefully behind schedule to this point. It was hoping to do 50,000 genomes in 2011, but finished the year at only 3,000.

Photo: Life's benchtop Ion Proton sequencer

However, sequencing instruments are evolving in a manner similar to mainstream computers, which is to say they’re always getting faster and more affordable. Whereas sequencers used to cost more than half a million dollars and take up a room, Life’s genome-in-a-day instrument, the one that claims a $1,000-per-genome price point, sits on a desk and will cost only $149,000 when it’s available later this year. Upgrading to Illumina’s new instrument from the previous model costs only $50,000.

The fast rate of improvement comes from genomics’ own version of Moore’s Law, Sundquist said: data throughput and cost both improve tenfold every 18 months. When Life rival Illumina set a world record in February 2008, sequencing a genome took “less than four weeks at a cost of about $100,000.” At this rate, we’ll have $100 genome sequencing by 2014.
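To see where that rule of thumb leads, here is a rough projection anchored at the February 2008 figure; treat the smooth exponential as an idealization rather than a forecast.

```python
# Hypothetical projection under the tenfold-every-18-months rule Sundquist describes,
# anchored at Illumina's February 2008 record of roughly $100,000 per genome.
START_YEAR = 2008 + 2 / 12   # February 2008
START_COST = 100_000         # dollars per genome at that point

def projected_cost(year):
    """Cost after the elapsed number of 18-month, tenfold-improvement periods."""
    periods = (year - START_YEAR) * 12 / 18
    return START_COST / 10 ** periods

for year in (2010, 2012, 2014):
    print(year, f"${projected_cost(year):,.0f}")
# 2010 -> about $6,000; 2012 -> about $280; 2014 -> about $13,
# comfortably past the $100 mark the article projects for 2014
```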

Sundquist added that medical systems have tens of thousands of patients queued up for sequencing, work they might actually start doing now that it can be done so quickly and at such a low cost.

Hidden costs: ‘The quest for the $1,000 genome interpretation’

Where things get hairy for IT vendors is figuring out how to make it affordable to store, process and analyze all that data — something Sundquist calls the quest for the $1,000 genome interpretation. It’s still not an inexpensive proposition to buy and maintain a system capable of storing and processing potentially petabytes of data. And if doctors or researchers want to collaborate with colleagues, their facilities’ bandwidth likely won’t cut it for sending even the raw data for a single genome. That’s why many research institutions are connecting to high-speed research networks designed solely to move massive scientific data sets.

As Forbes’ Matthew Herper opined early last year, even though a research genome will soon cost only $1,000 to sequence, it costs a lot more to employ people and pay for software capable of analyzing it. Because research genomes aren’t accurate enough for medical use, they often must be sequenced multiple times. Herper’s ultimate analysis:

I’d think if we’re talking about actual medical use, $10,000 is a more accurate number. Certainly, it is not going to drop below the $2,000 level for a magnetic resonance imaging scan. And once the technology is in use, I think it is possible that the costs will go back up.

So, even if genome sequencing itself becomes less expensive, hospitals and patients will both be paying well more than $1,000 for the procedure. Presently, $10,000 is about the going rate from Complete Genomics to sequence, analyze and deliver research results to an individual, although the costs certainly are subject to change if hospitals start performing sequencing workloads themselves.

Cloud computing to the rescue?

Sundquist thinks cloud computing is the answer. His company, DNAnexus, provides a cloud-based platform for storing and analyzing genomics data, something we’ve covered before. “A 100-megabit connection could more than keep up with about a dozen of these machines,” he said, and once the data is in DNAnexus’s cloud platform, institutions no longer have to worry about keeping up with exploding data volumes, sending terabytes of data across the Internet or paying for software licenses. Access is centralized and everything takes place on DNAnexus’s virtual infrastructure.
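That throughput claim is easy to sanity-check. Assuming each instrument produces roughly 100GB of raw data per day (the figure cited earlier), a saturated 100-megabit link moves about a terabyte a day, which is in the same ballpark as Sundquist’s estimate:

```python
# Assumes each genome-in-a-day instrument emits ~100GB of raw data daily
# (the figure cited earlier); compression or shipping only results would change the math.
LINK_MEGABITS_PER_SEC = 100
RAW_BYTES_PER_GENOME = 100e9

bytes_per_day = LINK_MEGABITS_PER_SEC * 1e6 / 8 * 86_400
machines_supported = bytes_per_day / RAW_BYTES_PER_GENOME
print(f"{bytes_per_day / 1e12:.2f} TB/day, roughly {machines_supported:.0f} instruments")
# -> 1.08 TB/day, roughly 11 instruments, in the ballpark of "about a dozen"
```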

Additionally, cloud computing is well suited to spiky use cases, which genome sequencing generally is. A rule of “cloudonomics” is that the cloud costs more on a per-unit basis but tends to cost less over time, unless the workload is a steady flow better suited to an on-premise system.
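That rule of thumb can be put into concrete, if entirely made-up, numbers. In the sketch below the cloud’s per-node-hour price is five times higher, yet the bursty sequencing workload still comes out cheaper in the cloud because the on-premise cluster has to be provisioned for the peak and paid for all month:

```python
# Entirely hypothetical prices; only the ratios matter for the argument.
def on_prem_cost(peak_nodes, hours_in_period, cost_per_node_hour):
    """An on-premise cluster is sized for the peak and paid for around the clock."""
    return peak_nodes * hours_in_period * cost_per_node_hour

def cloud_cost(node_hours_used, cost_per_node_hour):
    """Cloud capacity is paid for only while jobs are actually running."""
    return node_hours_used * cost_per_node_hour

# A bursty month: analysis bursts need 50 nodes, but only for 40 hours in total.
peak_nodes, hours_in_month, busy_node_hours = 50, 720, 50 * 40

print("on-premise:", on_prem_cost(peak_nodes, hours_in_month, cost_per_node_hour=0.10))
print("cloud:     ", cloud_cost(busy_node_hours, cost_per_node_hour=0.50))
# -> on-premise 3600.0 vs cloud 1000.0, despite the fivefold higher per-unit price
```

For a steady workload that keeps the cluster busy most of the month, the comparison flips, which is exactly the caveat above.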

Whether it’s DNAnexus or some other cloud service, Sundquist’s reasoning is sound. As prices for gene sequencing continue to fall, doctors should be increasingly likely to order it, but they’ll be limited by the infrastructure in place to support them. Unless the costs of doing this on-premise come down significantly, the cloud might be the only place where storing and analyzing potentially petabytes of data per hospital isn’t such a daunting undertaking.

Feature image courtesy of Flickr user Robert Gaal.
