Value of Mendelian Laws of Segregation in Families: Data Quality Control, Imputation, and Beyond
Author(s) - Blue Elizabeth M., Sun Lei, Tintle Nathan L., Wijsman Ellen M.
Publication year - 2014
Publication title - Genetic Epidemiology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.301
H-Index - 98
eISSN - 1098-2272
pISSN - 0741-0395
DOI - 10.1002/gepi.21821
Subject(s) - imputation (statistics), 1000 genomes project, data quality, population stratification, genome wide association study, mendelian inheritance, genetic association, computer science, data mining, missing data, genetics, biology, single nucleotide polymorphism, machine learning, genotype, metric (unit), operations management, economics, gene
When analyzing family data, we dream of perfectly informative data, even whole-genome sequences (WGSs) for all family members. Reality intervenes, and we find that next-generation sequencing (NGS) data contain errors and are often too expensive, or impossible, to collect on everyone. The Genetic Analysis Workshop 18 working groups on quality control and on dropping WGSs through families within a genome-wide association framework focused on finding, correcting, and exploiting errors within the available sequence and family data, on developing methods to infer and analyze missing sequence data among relatives, and on testing for linkage and association with simulated blood pressure. We found that single-nucleotide polymorphism (SNP) array, NGS, and imputed genotype data are generally concordant, but that errors are particularly likely at rare variants, at homozygous genotypes, within regions containing repeated sequences or structural variants, and within sequence data imputed from unrelated individuals. Admixture complicated the identification of cryptic relatedness, but information from Mendelian transmission improved error detection and provided an estimate of the de novo mutation rate. Computationally fast rule-based imputation was accurate but could not cover as many loci or subjects as the more computationally demanding probability-based methods. Incorporating population-level data into pedigree-based imputation methods improved results. Observed data outperformed imputed data in association testing, but imputed data were also useful. We discuss the strengths and weaknesses of existing methods and suggest possible future directions, such as improving communication between data collectors and data analysts, establishing thresholds for and improving imputation quality, and incorporating error into imputation and analytical models.
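To make the quality-control idea concrete, below is a minimal sketch of the kind of Mendelian-consistency check used to flag genotyping errors (and candidate de novo mutations) in parent-offspring trios. It assumes biallelic genotypes coded as minor-allele dosages (0, 1, 2, or None for missing); the function names and trio encoding are illustrative, not the workshops' actual pipeline.

def possible_gametes(g):
    # A homozygote (dosage 0 or 2) transmits only one allele type;
    # a heterozygote (dosage 1) can transmit either allele.
    return {0, 1} if g == 1 else {g // 2}

def mendel_consistent(father, mother, child):
    # Missing genotypes cannot be checked, so treat them as consistent.
    if None in (father, mother, child):
        return True
    # The child's dosage must equal one transmitted allele from each parent.
    return any(child == a + b
               for a in possible_gametes(father)
               for b in possible_gametes(mother))

# Flag trios that violate Mendelian segregation at one variant site;
# each trio is a (father, mother, child) dosage triple.
trios = [(0, 0, 0), (2, 1, 2), (0, 2, 0)]  # the third trio is inconsistent
flagged = [i for i, trio in enumerate(trios) if not mendel_consistent(*trio)]
print(flagged)  # -> [2]

Sites flagged this way can be filtered or re-called, and, once systematic errors are removed, the residual inconsistency rate among confirmed relationships supports the kind of de novo mutation rate estimate mentioned above.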