The cost of drug development has skyrocketed, but disruptive technologies can bring it back down to earth.
Over the past several decades, drug-development costs have risen significantly, from $250 million per approved drug prior to the 1990s to $403 million in the 2000s and $873 million in 2010 ($1.778 billion if capitalization over the 14-year approval period is accounted for). Costs are distributed across the development cycle: roughly one-third in discovery and preclinical development, most of the remainder in clinical development and about 5% in submission-to-launch.
However, these rising costs come against the backdrop of a critical need for novel pharmaceutical products to treat conditions with a high global burden of death and disability. Pressing modern needs include vaccines for neglected tropical diseases, chemotherapy targeted to an individual’s genome and novel antibiotics to combat antibiotic resistance.
Below we’ll discuss the top challenges in drug development and how disruptive technologies such as AI, blockchain and big data are poised to reduce costs and potentially solve these critical social problems.
Challenge No. 1: Increased Timelines
By 2011 estimates, the total drug approval timeline averages 14 years. The majority of that time is spent in R&D: about four-and-a-half years in discovery, one year in preclinical testing, roughly one-and-a-half, two-and-a-half and two-and-a-half years in clinical phases I, II and III, respectively, and 18 months from submission to launch.
One of the biggest bottlenecks in drug development is identifying promising disease targets, and AI has shown promise in automatically structuring big data. With this innovation, pharmaceutical companies can monitor public health data and correlate it with electronic medical records (EMRs) using continuous analytics to generate novel insights on potential disease targets. For instance, the well-publicized literature-mined link between Raynaud’s disease and fish oil supplements led to successful preclinical treatments for the condition.
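The Raynaud’s/fish-oil finding is the classic example of literature-based discovery: two topics that never co-occur directly in published papers can be connected through shared intermediate terms. The sketch below illustrates that idea with a handful of invented mini-abstracts (the documents, terms and helper names are hypothetical, not any real system’s API); a production pipeline would mine millions of records.

```python
# Toy sketch of literature-based discovery (the "ABC model"):
# terms A and C never appear in the same document, but both co-occur
# with bridging "B" terms. All abstracts below are invented.

abstracts = [
    "raynaud disease involves elevated blood viscosity and vasoconstriction",
    "raynaud patients show abnormal platelet aggregation",
    "fish oil reduces blood viscosity in healthy subjects",
    "dietary fish oil inhibits platelet aggregation",
    "fish oil lowers triglycerides",
]

def terms_linking(a: str, c: str, docs: list[str]) -> set[str]:
    """Return bigram 'B-terms' that co-occur with both A and C."""
    def bigrams(doc: str) -> set[str]:
        words = doc.split()
        return {" ".join(words[i:i + 2]) for i in range(len(words) - 1)}

    # A and C must never co-occur directly, or the link is already known.
    assert not any(a in d and c in d for d in docs)
    a_terms = set().union(*(bigrams(d) for d in docs if a in d))
    c_terms = set().union(*(bigrams(d) for d in docs if c in d))
    return a_terms & c_terms

print(terms_linking("raynaud", "fish oil", abstracts))
# → {'blood viscosity', 'platelet aggregation'}
```

On this toy corpus the bridging terms are "blood viscosity" and "platelet aggregation," mirroring the mechanisms that originally suggested fish oil as a Raynaud’s treatment.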
Challenge No. 2: Unreliability Of Published Data
A 2011 correspondence published in Nature reported that only 20-25% of published preclinical targets could be replicated in in-house target-validation experiments at pharmaceutical companies. The authors politely cited an “unspoken rule” among early-stage venture capital firms that “at least 50% of published studies, even those in top-tier academic journals, can’t be repeated with the same conclusions by an industrial lab,” and hinted at a “pressure to publish.”
Dr. John Ioannidis was more plain-spoken in his meta-research paper titled “Why Most Published Research Findings Are False,” citing high-profile examples of needed reforms in scientific research and concluding, “Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true.”
AI can help solve this problem by widely cross-referencing published scientific literature with alternative information sources less susceptible to bias, including theses, conference abstracts, unpublished data sets, public databanks and clinical trials.
Blockchain holds further potential to reduce bias in clinical trials, with “smart contracts” creating a tamper-proof digital ledger for data collection.
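The core property such a ledger provides is tamper evidence: each record’s hash covers the previous record’s hash, so retroactively editing any entry breaks the chain. The minimal sketch below illustrates that mechanism with invented trial records (the field names and helper functions are hypothetical, not any real blockchain platform’s API).

```python
# Minimal sketch of a tamper-evident, hash-chained ledger of the kind a
# blockchain-based trial registry could provide. Record contents are invented.
import hashlib
import json

def add_record(chain: list[dict], data: dict) -> None:
    """Append a record whose hash commits to the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"data": data, "prev": prev_hash}, sort_keys=True)
    chain.append({"data": data, "prev": prev_hash,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify(chain: list[dict]) -> bool:
    """Recompute every hash; any retroactive edit invalidates the chain."""
    prev_hash = "0" * 64
    for rec in chain:
        payload = json.dumps({"data": rec["data"], "prev": prev_hash},
                             sort_keys=True)
        if (rec["prev"] != prev_hash or
                rec["hash"] != hashlib.sha256(payload.encode()).hexdigest()):
            return False
        prev_hash = rec["hash"]
    return True

ledger: list[dict] = []
add_record(ledger, {"patient": "P-001", "outcome": "responder"})
add_record(ledger, {"patient": "P-002", "outcome": "non-responder"})
print(verify(ledger))                            # → True: chain intact
ledger[0]["data"]["outcome"] = "non-responder"   # retroactive edit
print(verify(ledger))                            # → False: tampering detected
```

A real deployment would distribute this ledger across many parties so no single sponsor could rewrite the chain, which is what makes the record tamper-proof rather than merely tamper-evident.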