Proteomics: Capacity versus utility
Author(s) -
Harry Jenny L.,
Wilkins Marc R.,
Herbert Ben R.,
Packer Nicolle H.,
Gooley Andrew A.,
Williams Keith L.
Publication year - 2000
Publication title -
Electrophoresis
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.666
H-Index - 158
eISSN - 1522-2683
pISSN - 0173-0835
DOI - 10.1002/(sici)1522-2683(20000401)21:6<1071::aid-elps1071>3.0.co;2-m
Subject(s) - proteomics, proteome, genomics, computational biology, throughput, biology, data science, computer science, gene, bioinformatics, genome, genetics
Until recently, scientists studied genes or proteins one at a time. With improvements in technology, new tools have become available to study the complex interactions that occur in biological systems. Global studies are required to do this, and these will involve genomic and proteomic approaches. High‐throughput methods are necessary in each case because the number of genes and proteins in even the simplest of organisms is immense. In the developmental phase of genomics, the emphasis was on the generation and assembly of large amounts of nucleic acid sequence data. Proteomics is currently in a phase of technological development and establishment, and demonstrating the capacity for high throughput is a major challenge. However, funding bodies (both in the public and private sectors) are increasingly focused on the usefulness of this capacity. Here we review the current state of proteome research in terms of capacity and utility.