
The curse of normalization
Author(s) -
Wolkenhauer Olaf,
Möller-Levet Carla,
Sanchez-Cabo Fatima
Publication year - 2002
Publication title -
Comparative and Functional Genomics
Language(s) - English
Resource type - Journals
eISSN - 1532-6268
pISSN - 1531-6912
DOI - 10.1002/cfg.192
Subject(s) - normalization (statistics), computer science, data science, database normalization, data mining, artificial intelligence, pattern recognition
Despite its enormous promise to further our understanding of the cellular processes involved in the regulation of gene expression, microarray technology generates data for which statistical pre-processing has become a necessity before interpretation can begin. The process by which we distinguish (and remove) non-biological variation from biological variation is called normalization. With a multitude of experimental designs, techniques and technologies influencing the acquisition of data, numerous approaches to normalization have been proposed in the literature. The purpose of this short review is not to add to the many suggestions that have been made, but to discuss some of the difficulties we encounter when analysing microarray data. Copyright © 2002 John Wiley & Sons, Ltd.
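To make the idea of removing non-biological variation concrete, the following is a minimal sketch of per-array median centering, one of the simplest global normalization strategies for microarray log-ratios. It is offered purely as an illustration of the concept, not as the (or any) method endorsed by this review; the function name and the example data are hypothetical.

```python
import numpy as np

def median_center(log_ratios):
    """Per-array median centering: subtract each array's median log-ratio
    so the median of every array becomes zero. This removes a constant
    array-wide (non-biological) bias while leaving relative differences
    between genes within an array untouched."""
    log_ratios = np.asarray(log_ratios, dtype=float)
    return log_ratios - np.median(log_ratios, axis=1, keepdims=True)

# Two hypothetical arrays measuring the same genes but carrying
# different overall intensity biases (e.g. from dye or scanner effects).
arrays = np.array([
    [0.5, 1.0, 1.5, 2.0],
    [-1.0, -0.5, 0.0, 0.5],
])
centered = median_center(arrays)
```

After centering, both arrays share a median log-ratio of zero, so systematic array-level offsets no longer masquerade as differential expression. Real normalization pipelines typically go further (e.g. intensity-dependent corrections), which is precisely where the difficulties discussed in this review arise.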