By Shang Tsai
Successful implementation of proteomics in the clinical environment has still not materialized, and lags far behind genomics, even after decades of advances in protein sample preparation. The primary cause of this underwhelming performance lies in the diverse physicochemical properties of proteins and the complexity of the sample prep workflow itself. In this two-part series we will start by looking at how realistic it is to automate the protein sample prep workflow, and then discuss ways to overcome the bottlenecks that prevent proteomics from entering clinical use.
From cancer to coronavirus, protein sample prep and proteomic analysis are often key to developing new treatments.
Proteins: both the disease and the cure
Proteins are the root cause of disease and represent a pathway to cures, whether we are looking at cancer or coronavirus. Indeed, the vast majority of drugs hit protein targets. We have long anticipated that proteomics would deliver novel biomarkers to be used in the diagnosis, prognosis, and therapeutic monitoring of disease. For this to happen, however, proteins must be recovered and prepared simply, efficiently, and reproducibly from a broad range of sample types, without significant changes to the processing protocol.
Why hasn’t proteomics yet delivered?
Two reasons. First, expectations were set unreasonably high, riding on the optimistic coattails of the genomics revolution. Second, only recently have techniques been developed that allow reproducible sample processing for a large variety of samples without changes to the protocol. The sheer complexity and diverse physicochemical nature of the proteome has repeatedly tripped us up, raising a frustrating list of “must-haves” for proteomics to be useful in clinical research, and eventually in the clinic. As a bare minimum these include:
- the ability to analyze enough samples (possibly hundreds, thousands, or more) to achieve statistically significant results in patient cohorts
- the simplification of the workflow, to remove the need for personnel with specific technical skills in proteomics
- an acceptable turn-around time from receiving samples to the generation of a complete proteome profile analysis
- cost-effectiveness of the workflow
Let us examine each of these in turn, as potential steps on our path to automated protein sample preparation.
Increase protein sample analysis throughput
When considering sample prep, there are broadly speaking two types of proteomics experiments that researchers might want to do: discovery proteomics and targeted proteomics. The two can have significantly different sample prep requirements, and any solution that mitigates the need for differences in sample prep would be helpful.
For discovery proteomics, experiments are designed to identify as many proteins as possible across a broad dynamic range. For example, the complete measured range for the plasma proteome, from upper limits for albumin to lowest values for thyroid-stimulating hormone (TSH), represents more than 10 logs of molar abundance.1 Discovery proteomics can be used to form an inventory of all detectable proteins in a given sample, or to detect differences in the abundance of specific proteins amongst multiple samples. When it comes to sample prep, this intrinsic complexity means that we might require the concurrent depletion of highly abundant proteins, the enrichment of less abundant proteins, plus fractionation, for example by SDS-PAGE, or using offline or in-line liquid chromatography, prior to MS. If quantitation is needed, this will add yet another layer of difficulty.
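To put that dynamic range into concrete numbers, here is a quick back-of-the-envelope calculation. The concentration values below are rough, illustrative approximations (plasma albumin on the order of hundreds of micromolar; circulating TSH in the tens-of-femtomolar range), not values taken from the cited paper:

```python
import math

# Rough, illustrative plasma concentrations (molar) -- approximations for the
# sake of arithmetic, not measured values
albumin_molar = 45 / 66_500   # ~45 g/L at ~66.5 kDa => roughly 6.8e-4 M
tsh_molar = 5e-14             # circulating TSH, roughly tens of femtomolar

# Orders of magnitude separating the most and least abundant of the two
dynamic_range_logs = math.log10(albumin_molar / tsh_molar)

print(f"Albumin ~ {albumin_molar:.1e} M")
print(f"TSH     ~ {tsh_molar:.1e} M")
print(f"Span    ~ {dynamic_range_logs:.1f} orders of magnitude")
```

Even with these crude numbers the span exceeds ten orders of magnitude, which is why no single-pass protocol can detect all plasma proteins without depletion, enrichment, or fractionation.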
Once proteins of interest have been identified and demonstrated to be biomarkers for a given disease state, targeted proteomics experiments can be considered. In contrast to discovery proteomics, targeted experiments seek to quantify specific proteins with high precision, sensitivity, specificity, and throughput. Hence, a targeted approach decreases the complexity of sample preparation and is likely to be far easier to implement when it comes to increasing throughput. Targeted proteomics is the method of choice for quantifying specific proteins and metabolites in complex samples, for example in pharmaceutical and diagnostic applications.2,3
Whether considering discovery or targeted proteomics, sample preparation for bottom-up proteomics consists of several critical steps:
(1) extraction and solubilization of protein
(2) protein denaturation
(3) removal of detergent and desalting
(4) enzymatic digestion
(5) separation of peptides by liquid chromatography (LC), typically followed by mass spectrometry (MS)
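As a thought experiment, the five steps above can be sketched as a linear software pipeline. Everything here is hypothetical: the function names, the `Sample` record, and the `steps_applied` log are illustrative stand-ins for instrument actions, not a real liquid-handler API:

```python
from dataclasses import dataclass, field

@dataclass
class Sample:
    """Hypothetical record tracking one sample through bottom-up prep."""
    sample_id: str
    steps_applied: list = field(default_factory=list)

# Each function stands in for one prep step; a real automated workflow
# would drive an instrument action here instead of appending a label.
def extract_and_solubilize(s): s.steps_applied.append("extraction/solubilization"); return s
def denature(s):               s.steps_applied.append("denaturation"); return s
def remove_detergent(s):       s.steps_applied.append("detergent removal/desalting"); return s
def digest(s):                 s.steps_applied.append("enzymatic digestion"); return s
def separate_lc_ms(s):         s.steps_applied.append("LC separation (then MS)"); return s

def prepare(sample: Sample) -> Sample:
    # The order mirrors steps (1)-(5) in the text
    for step in (extract_and_solubilize, denature, remove_detergent,
                 digest, separate_lc_ms):
        sample = step(sample)
    return sample

done = prepare(Sample("plasma-001"))
print(done.steps_applied)
```

The point of the sketch is that the steps are strictly sequential and order-dependent: detergent removal must succeed before digestion, or the downstream analysis fails, which is exactly where manual workflows introduce variability.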
Detergents such as SDS are routinely used for solubilization and denaturation of proteins. However, these reagents can interfere with downstream protease digestion and MS analysis, even at low concentrations, and have been notoriously hard to remove, creating a major barrier to increasing throughput. Throughput is further hampered if different sample types require different protocols.
Any attempt to automate protein sample prep while still retaining as complete an inventory of the proteome as possible, including membrane proteins, is likely to require some sort of solid support onto which the proteins can be captured, so that they can be rinsed free of substances that are incompatible with downstream processing. Such a protocol will need to remove all types of potential contaminants, including but not limited to detergents (e.g. up to 15% SDS), salts, glycerol, PEG, Laemmli loading buffer and bile salts. We will explore examples of such supports and protocols in our follow-up article.
Simplify protein sample prep workflow
When it comes to preparing protein samples for downstream analysis, enzymatic digestion protocols can be extremely effective when working with protein suspensions.4 This approach typically requires use of a lysis buffer with a high concentration of detergent, such as 5% SDS. However, SDS is incompatible with downstream analytical instrumentation and assays, such as LC-MS/MS or ELISA. Wash or clean-up steps prior to analysis are essential, yet time consuming and possibly prone to human error when done manually.
The simplification of protein sample prep workflow therefore also requires a robust protocol that can be automated for binding, wash and digestion steps. The ideal protocol should be identical for the different sample types of interest, or at least with as few variations as possible, because differences in the sample preparation are a major source of experimental variation.5
Complex protein samples can be viscous, so automation is also likely to necessitate positive pressure or centrifugation: vacuum may not be enough. Positive pressure has been shown to be more reproducible than vacuum.6 In addition, positive pressure units can generally operate at a higher pressure than a vacuum is able to provide, and can therefore more easily guarantee sufficient and constant pressure on all the columns in a parallel set-up of a matrix of samples, such as in a 96-well plate. This results in a steady pressure being applied over a specified time, with a defined pressure-assisted sample processing (PASP) protocol.
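A defined PASP protocol is essentially a pressure level held for a set time across every column in the plate. The sketch below is purely illustrative: the `PressureStep` parameters and the 96-well loop are assumptions for demonstration, not settings from any real positive pressure unit:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PressureStep:
    """One step of a hypothetical pressure-assisted sample processing run."""
    name: str
    pressure_psi: float   # constant positive pressure applied to all columns
    duration_s: int       # time the pressure is held

# Illustrative protocol: bind, wash, then elute/digest under steady pressure
# (values are made up for the example)
PASP_PROTOCOL = [
    PressureStep("bind",  2.0, 120),
    PressureStep("wash",  5.0, 60),
    PressureStep("elute", 8.0, 180),
]

def run_plate(protocol, wells=96):
    """Record what each pressure step would do to every well of the plate."""
    log = []
    for step in protocol:
        # A real unit pressurizes the whole manifold at once, so every
        # well sees the same pressure for the same duration.
        log.append((step.name, step.pressure_psi, step.duration_s, wells))
    return log

for name, psi, secs, wells in run_plate(PASP_PROTOCOL):
    print(f"{name}: {psi} psi for {secs} s across {wells} wells")
```

The key property the sketch captures is that pressure and duration are defined per step, not per well, which is what makes the processing reproducible across the whole plate.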
Thus, if we want to be realistic about simplifying the protein sample prep workflow by automation, we will need to consider processing our captured protein samples using a positive pressure unit with a compatible liquid-handling protocol for cleaning up and digesting samples prior to analysis.
Decrease sample turn-around time
To optimize sample turn-around time, one can either decrease the time it takes for an individual sample to be processed, process as many samples as possible in parallel, or both. If the upstream capture, clean up and digestion is automated, the process is probably as optimized as it can be on the level of treatment of the individual sample, so then one can look towards parallel processing of multiple samples in order to decrease sample turn-around time overall.
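The throughput argument is easy to quantify with illustrative numbers (the times below are assumptions for the sake of arithmetic, not benchmarked figures): if manual prep takes, say, 30 minutes of hands-on time per sample, processing 96 samples one-by-one costs 48 hours, whereas an automated 96-well batch that runs in, say, 3 hours wall-clock is roughly 16 times faster:

```python
# Illustrative, assumed figures -- not benchmarked values
hands_on_min_per_sample = 30   # manual, one-by-one processing
samples = 96
batch_runtime_h = 3.0          # assumed wall-clock time for one automated 96-well batch

serial_h = hands_on_min_per_sample * samples / 60   # serial total in hours
speedup = serial_h / batch_runtime_h                # parallel advantage

print(f"Serial: {serial_h:.0f} h, parallel batch: {batch_runtime_h:.0f} h "
      f"(~{speedup:.0f}x faster)")
```

Whatever the real per-sample times turn out to be, the speedup scales with the plate size, which is why parallelization dominates any further optimization of the individual sample.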
The ability to perform accurate, automated total protein sample prep in a 96-well plate format would be a game-changer when it comes to experimental design in proteomics. Done manually, total protein sample prep is tedious, as samples are handled and processed one-by-one, whereas automation allows the scalable analysis of proteomes, with multiple samples at multiple time points, giving researchers the potential to run far more dynamic and complex sets of experiments.
Achieving an acceptable turn-around time from receiving samples to the generation of a complete proteome profile analysis will also depend on the technology used downstream. This may involve one or more of the following: mass spectrometry (such as LC-MS/MS), electrophoresis, or, if specific proteins have already been identified, ELISA or another type of immunoassay.
Ensure cost-effectiveness of the workflow
Automation is often thought of as an expensive luxury in the lab. However, we cannot forget that the cost of repeating research can be massive, dwarfing the cost of automation. Cost-effectiveness will depend on the aim of a particular proteomics research project, as well as on the existing infrastructure. Streamlining the protein sample prep workflow as described here will certainly help towards cost-effectiveness, saving time and money through fewer errors and less manual labor. In turn, a new automated way of working will shift the bottlenecks, change the questions that can be answered, allow new projects to be conceived, and ultimately quicken scientific progress.
Can we automate protein sample prep … or not?
We have seen here that the challenges to automating protein sample prep are huge, but not insurmountable. Our second article in this series will review some concrete solutions and case studies where the exciting progress being made in proteomics automation is bringing us tantalizingly close to practical application of proteomics in the clinic. Meanwhile, check out this video for a sneak preview of how automated protein sample prep could look in the not-too-distant future.
Don't miss the next article in this series. Subscribe to the blog to stay updated.
- 1. Hortin, G. L., & Sviridov, D. (2010). The dynamic range problem in the analysis of the plasma proteome. Journal of Proteomics, 73(3), 629–636. PMID: https://pubmed.ncbi.nlm.nih.gov/19619681/ DOI: https://doi.org/10.1016/j.jprot.2009.07.001
- 2. Thakur, S. (2020). Proteomics and its application in pandemic diseases. Journal of Proteome Research, 19(11), 4215–4218. PMID: https://pubmed.ncbi.nlm.nih.gov/33153265/ DOI: https://doi.org/10.1021/acs.jproteome.0c00824
- 3. Faria, S. S., Morris, C. F., Silva, A. R., Fonseca, M. P., Forget, P., Castro, M. S., & Fontes, W. (2017). A timely shift from shotgun to targeted proteomics and how it can be groundbreaking for cancer research. Frontiers in Oncology, 7, 13. PMID: https://pubmed.ncbi.nlm.nih.gov/28265552/ DOI: https://doi.org/10.3389/fonc.2017.00013
- 4. Klont, F., Bras, L., Wolters, J. C., Ongay, S., Bischoff, R., Halmos, G. B., & Horvatovich, P. (2018). Assessment of sample preparation bias in mass spectrometry-based proteomics. Analytical Chemistry, 90(8), 5405–5413. PMID: https://pubmed.ncbi.nlm.nih.gov/29608294/ DOI: https://doi.org/10.1021/acs.analchem.8b00600
- 5. Piehowski, P. D., Petyuk, V. A., Orton, D. J., Xie, F., Moore, R. J., Ramirez-Restrepo, M., Engel, A., Lieberman, A. P., Albin, R. L., Camp, D. G., Smith, R. D., & Myers, A. J. (2013). Sources of technical variability in quantitative LC-MS proteomics: Human brain tissue sample analysis. Journal of Proteome Research, 12(5), 2128–2137. PMID: https://pubmed.ncbi.nlm.nih.gov/23495885/ DOI: https://doi.org/10.1021/pr301146m
- 6. Evaluation of an Automated Solid-Phase Extraction Method Using Positive Pressure. American Laboratory. https://www.americanlaboratory.com/914-Application-Notes/172423-Evaluation-of-an-Automated-Solid-Phase-Extraction-Method-Using-Positive-Pressure/ Accessed 8 December 2020
About the author
Shang Tsai is the Head of Marketing and Product Management for Tecan SP. He and his team support and work with customers globally to develop innovative sample preparation solutions and efficient workflows for their analytical needs. Shang has over 25 years of analytical science experience, with roles in marketing, business development, applications, and product development.