By Enrique Neumann

Much of the work done in a genomics lab is repetitive, labor-intensive, and just plain boring. Is this really the best use of highly skilled scientists? How do you keep staff motivation up when another couple hundred samples roll into the lab? Most importantly, all this manual labor creates huge problems in terms of human error and inflated costs. Here are some major sources of tedium and error in the genomics lab where improvements can make a big impact: reducing costly errors, increasing productivity, and possibly even saving your sanity.



Are tedious genomics tasks causing errors and waste in your lab?

Boredom breeds mistakes

Many of today's bottlenecks in genomics have arisen as a direct result of introducing high-throughput technologies into the workflow. For example, the increasing affordability and accessibility of NGS is giving labs the power to analyze many more samples at a time, creating a need for more speed upstream, where inherently low-throughput manual sample preparation methods are still in place. So wherever there are bottlenecks, you are likely to find some mind-numbing and error-prone manual tasks that are causing frustration and costing you money.

“He is always right who suspects that he makes mistakes.”

~ Spanish Proverb

No matter what the nature of the work, if humans are involved, the occasional error is bound to happen. But most of the time it is the more mundane and seemingly simple tasks that trip us up. Why? Because the more boring and repetitive the work, the more we dislike it, and the harder it is to concentrate. And as soon as we stop concentrating, mistakes happen.

Bottlenecks are likely sources of error in the lab

Here are four of the most tedious tasks:

1. Colony picking – Anyone who has ever had to do colony picking for a large-scale cloning project can tell you how slow and painstaking this process is. For really big projects, a small army of colony-pickers may be needed – all with good eyesight, manual dexterity and extraordinary powers of concentration to remain consistent and avoid mistakes. How likely is that?

2. Nucleic acid extraction – Isolation of DNA and RNA is the starting point of genomic analysis for a wide variety of applications in clinical diagnostics, forensics, food science, environmental monitoring and biomedical research. Analytical labs may routinely take in hundreds of samples a day for processing. With short turnaround times and labor-intensive protocols, the pressure on staff can be enormous. The quality of the end product is critical to obtaining trustworthy results, so there is little room for error, but the work is dull and laborious – a recipe for disaster. To make matters worse, many of the samples are unique and irreplaceable, so if the results are compromised by a mistake, repeat analysis may not be feasible.

Samples can arrive in different volumes and diverse forms such as whole blood, serum, stool, urine, plant tissue, FFPE tissue etc., with varying needs in terms of formatting and processing. Solid-phase or affinity extraction methods are common, typically using columns or magnetic beads, and involving critical equilibration and wash steps that can easily go wrong or decrease yield if not performed with care. No matter which methodologies are used, such manual approaches are tedious and create many opportunities for error and cross-contamination as large numbers of samples are processed in parallel.

3. Quantification – This is another critical stage in the preparation of samples for sequencing and other sophisticated analytical methods, where mistakes and inaccuracies can significantly compromise the results. Again, it is not unusual to be processing hundreds of samples per day, so costly errors are easily made. For statistically robust results, quantification methods such as qPCR and fluorescent dye-based assays are typically performed in triplicate, which further increases the workload and the chances that something will go wrong (see the first sketch after this list). While miniaturization is a great way to increase throughput and lower the price per sample, the downside is that anyone running the process manually will have to pipette microliter volumes accurately and dispense them into many tubes or wells, day-in and day-out, without any mishaps.

4. Sample normalization – Like quantification, a mistake during normalization can seriously compromise the outcome. Picture yourself having to readjust pipette settings repeatedly and aliquot different amounts of sample into a plate 96 times without making a mistake…would anyone want to be in your shoes? When samples or libraries are highly concentrated, additional intermediate dilution steps may need to be performed, further complicating the process (the second sketch after this list shows the underlying volume arithmetic). If subsequent analysis is multiplexed, as for example with NGS, the pooling of normalized samples also needs care and attention. Although it is quite straightforward (simply combining equal amounts of each library), it can be tedious because of the need to ensure adequate mixing while avoiding damaging the DNA in the process. For example, some protocols call for pipetting each sample up and down 10 times. Not only is this hard on the thumb, but if done poorly it can easily shear or nick nucleic acids.
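As a rough illustration of the replicate arithmetic behind point 3, here is a minimal Python sketch. The sample IDs, readings and the 10% CV cut-off are all hypothetical; it simply averages triplicate concentration readings and flags any sample whose coefficient of variation is too high to trust:

```python
import statistics

def summarize_triplicate(readings, cv_limit=0.10):
    """Average triplicate readings and flag noisy replicates."""
    mean = statistics.mean(readings)
    cv = statistics.stdev(readings) / mean  # coefficient of variation
    return mean, cv, cv > cv_limit

# Hypothetical triplicate concentration readings (ng/µL)
samples = {
    "S001": [2.1, 2.0, 2.2],
    "S002": [3.4, 1.1, 3.5],  # one replicate was mis-pipetted
}

for sample_id, readings in samples.items():
    mean, cv, repeat = summarize_triplicate(readings)
    status = "repeat" if repeat else "ok"
    print(f"{sample_id}: mean {mean:.2f} ng/µL, CV {cv:.1%} ({status})")
```

An automated system can apply exactly this kind of check to every well, rather than relying on someone to eyeball hundreds of numbers at the end of a long day.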
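Likewise, the normalization in point 4 boils down to the familiar C1·V1 = C2·V2 dilution formula, applied once per well. The sketch below, again with made-up library concentrations, targets and pipetting limits, computes the sample and diluent volumes for each well, and shows why highly concentrated libraries force an intermediate dilution: the calculated sample volume quickly drops below what can be pipetted reliably.

```python
MIN_PIPETTE_UL = 2.0  # assumed lower limit for reliable manual pipetting

def normalization_volumes(conc, target_conc, final_vol):
    """C1*V1 = C2*V2: sample and diluent volumes for one well."""
    sample_vol = target_conc * final_vol / conc
    if sample_vol > final_vol:
        raise ValueError("Sample is already more dilute than the target.")
    return sample_vol, final_vol - sample_vol

# Hypothetical libraries (ng/µL), normalized to 4 ng/µL in 20 µL
libraries = {"L01": 12.5, "L02": 35.0, "L03": 160.0}

for name, conc in libraries.items():
    sample_ul, diluent_ul = normalization_volumes(conc, 4.0, 20.0)
    note = " (pre-dilute first!)" if sample_ul < MIN_PIPETTE_UL else ""
    print(f"{name}: {sample_ul:.2f} µL library + {diluent_ul:.2f} µL diluent{note}")
```

Doing this calculation 96 times by hand, and then executing it with manually readjusted pipette settings, is exactly the kind of error-prone drudgery the rest of this article is about.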

Separate workflows waste staff time

Eliminating a single painful step in your workflow can be a false economy if you are still left with many other labor-intensive operations in the same process. If your workflow is decentralized across several different devices, moving from one to another can be tedious and wasteful.

With NGS, for example, your ‘workflow’ could involve many steps, each on a separate platform: thermocyclers, magnetic separation devices, incubators, plate storage devices, barcode readers, etc.

Not only is there the inconvenience of moving from one device to the next, but there are also the added hassles of transferring samples to compatible plates, waiting for a device to become free or to finish its run before you can move your samples to the next step, and training staff on different platforms. You might be able to put up with this for one-off projects, but as soon as you start scaling up and industrializing operations, it makes sense to find a way to integrate all these devices and processes.

Is smarter automation the answer?

In short, everyone’s time is valuable. If problems like these are tying you or your staff down, and compromising sanity in the lab, then some level of automation is probably the answer. Not only can it take away the tedium and reduce error, but the right solutions can also integrate and be adapted to accommodate a lot of different genomic workflows, making them very cost-efficient over the long term.

To stay tuned in to this topic, and to find additional information and support for genomics applications, check out The Helicase, our dedicated Genomics channel. There you'll have full access to our genomics experts, who are waiting to answer any questions you may have about how to eliminate tedium and costly errors in the lab.

Subscribe to the Helicase

About the author

Enrique Neumann


Dr Enrique Neumann is Product and Application Manager, Genomics, at Tecan, Switzerland. He studied Biology at the University of Santiago de Compostela, Spain. During his PhD at the University of Edinburgh, he focused on the molecular processes in plant cells. He joined Tecan in 2015 and focuses on the development and support of genomic applications for Tecan’s liquid handling platforms.
