It’s been a while so I thought it was about time to do an updated post on observed turn times for commercial gene synthesis. After all, Rob Carlson posted updated cost and productivity curves for DNA sequencing and synthesis. As in my original post on this topic, I plot turn time vs length for orders placed at Ginkgo. The y-axis is turn time in days from when an order is initiated by Ginkgo to the day the company ships DNA back to us. The x-axis is the length of the synthetic DNA (synthon) in base pairs. Based on a request in the comments, I’ve also scaled the size of the data marker by how long ago the synthon order was placed/delivered to see if there are any observable trends in turn time over time (I don’t see any but it does show how we’ve tried out different providers over time 😉 ). The data has all the same caveats as last time – so for convenience, I’ve re-listed them at the bottom of this post.
I suspect that one of the most valuable aspects of this data, together with that from Rob, is that it shows how imperfect our benchmarks for the gene synthesis industry really are. For example, we don’t have a good metric for assessing companies on both cost and turn time. DNA2.0 has better turn times than Genscript but that comes at the expense of a 2X price premium – which company’s service is “better”? Selecting and then celebrating the right benchmarks is important because that’s where the industry will place its resources to improve the underlying technology. To date, the industry has largely been driven to reduce the per base pair cost of gene synthesis because cost per bp is the de facto comparable. This metric has pushed synthesis companies away from standardizing and competing on “library” offerings for sets of rationally designed synthons (many companies offer this but you have to get a custom quote, which has a high transaction cost associated with it). Given where I suspect the engineering of organisms is going, making it even modestly more difficult to order libraries is probably detrimental to the field.
To help combat this problem, I’d love to see the development of a true benchmark test for the gene synthesis industry – i.e. a set of synthons spanning different lengths, GC contents, sequence complexities, etc. would be designed and the orders placed simultaneously at all vendors so that there could be a true side-by-side comparison of the performance of all providers.
Probably the biggest update in the world of commercial gene synthesis over the last year and a half is that multiple companies now have linear gene synthesis offerings. In no particular order, IDT offers gBlocks up to 750 bp for $139, Gen9 offers GeneBytes up to 3000 bp in length (they charge per bp but their rates aren’t posted online), and Life Technologies offers Strings up to 1000 bp in length for $149. With these offerings, gene synthesis companies have finally been able to break through the ~$0.30-$0.35/bp floor that they had stagnated at from 2008 to 2012 by skipping the cloning and sequencing steps and shifting that technical risk onto their customers. The hope would be that these offerings could also result in lower turn times, but the jury’s still out.
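As a quick sanity check on that claim, the flat rates quoted above work out to per-bp costs well under the old floor at maximum length (Gen9 is omitted since their per-bp rates aren’t public):

```python
# Per-bp cost implied by the flat-rate linear synthesis offerings quoted
# above, at their maximum lengths. (Gen9 omitted: rates not posted online.)
offerings = {
    "IDT gBlocks": (139.0, 750),        # $139 for up to 750 bp
    "Life Tech Strings": (149.0, 1000), # $149 for up to 1000 bp
}
for name, (price_usd, max_bp) in offerings.items():
    per_bp = price_usd / max_bp
    print(f"{name}: ${per_bp:.3f}/bp at max length")
```

Of course, a shorter insert at the same flat rate costs correspondingly more per bp, so the effective price depends on how close to the length cap your synthons run.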
Caveats to the data:
From synthetic biology’s earliest days, DNA synthesis, and more specifically gene synthesis, has been touted as the central, enabling technology of the field. Gene synthesis is part of what lets us make the leap from the ad hoc, cut and paste of genetic engineering to the systematic design that is [or will be] the hallmark of synthetic biology. Given its central importance, it’s not surprising that many of us in the field keep a close eye on both gene synthesis technology and the gene synthesis industry as a whole. Yet most of the discussion focuses just on the cost of gene synthesis. Cost is important. But I’d argue that turn times are equally important in terms of how gene synthesis is used in the field – more on this below.
Below is a chart summarizing turn times versus length for DNA orders at Ginkgo. On the y-axis is the turnaround time (in days) and on the x-axis is the length of the synthetic DNA (or synthon) in base pairs. [I refer to all synthesized DNA fragments as synthons rather than genes since we don’t only synthesize genes.] Data points are colored by gene synthesis provider.
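If you want to build a similar chart from your own order logs, a minimal pandas/matplotlib sketch might look like the following. The records and column names here are made up for illustration, not our actual data:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen (no display needed)
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical order records: (provider, synthon length in bp, turn time in days)
orders = pd.DataFrame(
    [("IDT", 400, 9), ("Blue Heron", 1200, 21), ("DNA2.0", 2100, 16)],
    columns=["provider", "length_bp", "turn_days"],
)

fig, ax = plt.subplots()
# One scatter series per provider so each gets its own color in the legend
for provider, group in orders.groupby("provider"):
    ax.scatter(group["length_bp"], group["turn_days"], label=provider)
ax.set_xlabel("Synthon length (bp)")
ax.set_ylabel("Turn time (days)")
ax.legend(title="Provider")
fig.savefig("turn_times.png")
```

Scaling marker size by order date (as in the chart above) is a one-line extension via the `s=` argument to `scatter`.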
First, a few caveats:
OK caveats finished. What can we infer from this chart? Here are two takeaways.
Lesson 1: Gene synthesis can’t be a part of the design-build-test loop until turn times improve dramatically
Based on our data, turn times are highly variable and show little to no correlation with length overall. This means that when you place an order, you have no idea if you’ll get it back in 2 weeks or 5 weeks. From an engineering process standpoint, I’d argue that the unpredictably long turn times mean that it is crazy to include outsourced commercial gene synthesis in the design-build-test loop as you try to engineer an organism. Instead, do gene synthesis orders up front as a batch (thereby hopefully eliminating gene synthesis from your cycle time) and then mix and match the synthesized parts via a DNA assembly technology with a faster turn time. Or alternatively, try to achieve faster turn times by doing gene synthesis in house from oligos.
Lesson 2: Different providers excel at different kinds of orders
IDT appears to be quite fast at making sub-500bp synthons. This is not too surprising given that IDT leverages their ultramer oligo synthesis tech to offer flat rate pricing on so-called minigenes (< 400bp synthons). At that length scale, you can also opt to stitch together oligos to make the part yourself. But between the costs of oligos, cloning reagents, sequencing and your own time, you might not do much better than the cost of a minigene (even factoring in cheap grad student/postdoc labor!). For synthons in the 500-1500 bp range, Blue Heron seems to be a reasonable compromise choice in terms of turn times versus costs. You get industry-competitive pricing with decent turn times. Overall, DNA2.0 appears to have the best turn times for >1 kb synthons. Admittedly this is based on a very limited sample size, but anecdotal reports from folks in the field back it up. So if you’re in a rush and can tolerate the 2X price difference, DNA2.0 could be the way to go.
I’ll close by saying that this post is in no way an attempt to rag on gene synthesis providers. Building DNA is tough. And building DNA for customers is even tougher. But it’s important to think hard about what the realities of costs and turn times of commercial gene synthesis mean for developing best practices for engineering organisms going forward.
I saw this video a few years ago on VHS and was fascinated by the obvious parallels between the debates then on recombinant DNA and the debates today around synthetic biology. It’s great to see this piece of history now online for all to see.
Here I’ll try to give a high-level picture of Ginkgo’s pipeline for organism engineering. If you’ve checked out our webpage, you’ll see that we have several different organism engineering projects happening at Ginkgo that span several different hosts. Our goal was to build a single shared pipeline that could support the engineering of all these very different organisms for very different purposes. To accomplish this goal, we deliberately opted to decouple design from fabrication. Ginkgo organism engineers place requests via our CAD/CAM/LIMS software system. Those requests are then batched and run on Ginkgo’s robots.
By decoupling design from fabrication and pushing construction and testing through a shared, automated pipeline, we’ve been able to achieve a level of productivity that would have been unattainable if we used conventional, manual molecular biology approaches. Below we show a plot of requests (placed either by Ginkgo organism engineers or by other pipeline processes), samples (physical objects containing DNA/strains/reagents), molecules (abstract objects corresponding to unique DNA sequences including but not limited to standardized parts), and runs (batches of multiple requests that have been completed via the Ginkgo pipeline).
Hence, Ginkgo organism engineers are free to focus on design and analysis of novel organisms rather than mindless pipetting operations better done by robots than PhDs. We’re building a team of organism engineers.
If you seek to be one of the best organism engineers on the planet and don’t want to be limited in the complexity of the organisms that you engineer by how fast you pipet, you should come talk to us. See the website for details.
Synthetic Biology 5.0 wrapped up a couple of weeks ago, and attending the conference reinforced for me that the field has developed sufficiently over the past few years that different platforms and schools of thought on how to engineer organisms are now starting to coalesce.
Chris Voigt gave a nice talk about how his lab harvests large, functional operons from nature, like the nitrogen fixation gene cluster, and goes through a process of refactoring to standardize the control and gene expression elements in order to gain complete control over the pathway. (The refactoring approach was first pioneered by Drew, Sri and Leon.) Unfortunately, refactoring currently appears to lead to a gene cluster that has less activity than what nature provided, but there is less concern over unknown or cryptic biology. Interestingly, Chris says that in all his lab’s refactoring efforts (which involved several years of design-construction-debugging by Karsten), they never really discovered new or interesting biology but rather tended to get tripped up by errors in the sequence databases or incorrectly annotated start sites for genes.
John Glass and Dan Gibson both gave talks about genome synthesis and genome upload technologies that came out of JCVI (see PMIDs 17600181, 18218864, 19073939, 19363495, 19696314, 20211840, 20488990, 20935651). The JCVI/Synthetic Genomics platform might be thought of as combining (meta)genome sequencing and genome synthesis to make organisms from scratch.
Zach Serber discussed the Automated Strain Engineering (ASE) platform at Amyris. They are able to build 1500 yeast strains start to finish in 3 weeks (though they do pipeline their process). They have a library of 12,000 parts which they draw from to make up to 6 gene constructs using sewing PCR and then integrate into yeast. He didn’t go into their assay platforms but briefly mentioned that they do a combination of high-throughput screening and ‘omics analysis.
Doug Densmore, Jake Beal and Ron Weiss are working on Bio-Design Automation: namely, the ability to translate a high-level functional specification to successively lower abstraction levels (i.e. devices, parts etc.) until you get the actual DNA sequence that you then construct using automated DNA assembly.
And while it wasn’t presented in detail in a talk at SB5.0, Jef Boeke, Jean Peccoud and collaborators are developing a platform for yeast chromosome redesign. Finally, of course there is the Tom Knight/iGEM/Registry of Standard Biological Parts approach to synthetic biology which inspired aspects of many of the above platforms.
I imagine that at least a fraction of the would-be biological engineers out there might find the platform or tools aspects of synthetic biology mundane and prefer to focus on the organisms that they can design and build. But I’d argue that every synthetic biologist should care deeply about what the platforms look like. There’s a better than even chance that the future of synthetic biology lies in decoupling design from fabrication and testing. If so, the organism engineers in the future will submit their designs to centralized facilities where designs get batched, fabricated on robots and then [maybe] undergo a preliminary analysis. Hence, the platforms that get designed today are going to dictate the design constraints to which organism engineers will be forced to adhere tomorrow. Over time, the relative merits of each platform’s design constraints will be judged based on the complexity and utility of the engineered organisms that they produce.
Given all the above, you might be asking, what exactly is Ginkgo’s platform for organism engineering? I’ll cover that in a subsequent post …
Congrats Christina and Patrick!
For your consideration …
brought to you by our friends at Hydrocalypse Industries.
Our friend Pete Carr, who’s at MIT’s Media Lab, came by for a visit last week. Apparently, Pete used to work for Cetus and through his connections there managed to figure out exactly where in Yellowstone National Park Mushroom Pool is, the location where Thomas Brock first discovered Thermus aquaticus in 1966. The thermophile Thermus aquaticus is of course the source of the heat-tolerant Taq polymerase, which was key to making PCR work robustly and easily.
On his recent trip to Yellowstone, Pete’s family waited patiently while Pete made his pilgrimage to this key place in biology and biological engineering’s history. Below are a couple photos that Pete had from his trip at Mushroom Pool. (Many thanks to Pete for letting us post them here.)
Since microbes are currently our favorite organisms to engineer here at Ginkgo, it was great to see Wisconsin recognize the first official state microbe! Of course they chose Lactococcus lactis, the bacterium used to make many popular cheeses. Maybe we can convince Massachusetts to choose E. coli as a state microbe for all the biotech drugs it has produced — need to help Coli beat its bad rap!
I had missed this but Sri pointed it out to me. A sacred, 800+ year old Ginkgo tree fell in Japan in March due to a storm. The tree was so revered that 20,000 people from all over Japan came to pay their respects to the fallen tree. To ensure that the tree’s spirit lives on, scientists are going to try and clone the tree, under orders from the local government.