
Deep learning for certification of the quality of the data acquired by the CMS Experiment
Author(s) -
Adrian Alan Pol,
V. Azzolini,
G. Cerminara,
F. De Guio,
G. Franzoni,
Cécile Germain,
M. Pierini,
Tomasz Krzyżek
Publication year - 2020
Publication title -
Journal of Physics: Conference Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.21
H-Index - 85
eISSN - 1742-6596
pISSN - 1742-6588
DOI - 10.1088/1742-6596/1525/1/012045
Subject(s) - large hadron collider, interpretability, compact muon solenoid, automation, artificial intelligence, computer science, detector, supervised learning, quality (philosophy), machine learning, particle physics, object (grammar), a priori and a posteriori, data mining, pattern recognition (psychology), physics, artificial neural network, engineering, mechanical engineering, telecommunications, quantum mechanics, philosophy, epistemology
Certifying the data recorded by the Compact Muon Solenoid (CMS) experiment at CERN is a crucial and demanding task, as the data is used for the publication of physics results. Anomalies caused by detector malfunctions or sub-optimal data processing are difficult to enumerate a priori and occur rarely, which makes classical supervised classification impractical. We base our prototype for the automation of this procedure on a semi-supervised approach using deep autoencoders. We demonstrate that the model detects anomalies with high accuracy when compared against the outcome of fully supervised methods. We show that the results are highly interpretable: the model ascribes the origin of a problem in the data to a specific sub-detector or physics object. Finally, we address the dependency of the input features on the LHC beam intensity.
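
To make the scheme in the abstract concrete, the following is a minimal sketch: a deep autoencoder is trained only on data certified as good, new data is scored by reconstruction error, and the per-feature breakdown of that error points to the sub-detector or physics object behind an anomaly. The feature layout, layer sizes, training settings, and the luminosity scaling are illustrative assumptions, not the paper's actual configuration.

```python
# Sketch of semi-supervised anomaly detection with a deep autoencoder,
# assuming one feature vector per lumisection. All names and sizes here
# are hypothetical stand-ins, not the paper's configuration.
import torch
import torch.nn as nn


class Autoencoder(nn.Module):
    def __init__(self, n_features: int, latent_dim: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, latent_dim), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, n_features),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


def normalise_by_lumi(features: torch.Tensor, inst_lumi: torch.Tensor) -> torch.Tensor:
    # Remove the trivial scaling of rate-like features with beam
    # intensity by dividing by instantaneous luminosity -- a simple
    # illustrative choice for the dependency the abstract mentions.
    return features / inst_lumi.clamp(min=1e-6).unsqueeze(1)


def train(model: nn.Module, good_data: torch.Tensor,
          epochs: int = 50, lr: float = 1e-3) -> nn.Module:
    # Semi-supervised step: fit on certified-good data only, so the
    # model learns to reconstruct "normal" lumisections.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(good_data), good_data)
        loss.backward()
        opt.step()
    return model


def anomaly_scores(model: nn.Module, data: torch.Tensor):
    # Score new data by reconstruction error. The per-feature errors
    # support interpretability: large errors concentrated on features
    # from one sub-detector or physics object point to its origin.
    with torch.no_grad():
        per_feature = (model(data) - data) ** 2
    return per_feature.mean(dim=1), per_feature


# Hypothetical usage with random stand-in data:
X_good = torch.randn(1000, 20)  # features from certified-good lumisections
X_new = torch.randn(100, 20)    # lumisections awaiting certification
model = train(Autoencoder(n_features=20), X_good)
score, per_feature = anomaly_scores(model, X_new)
flagged = score > score.mean() + 3 * score.std()  # threshold is a tuning choice
```

In practice, the flagging threshold and the grouping of features by sub-detector would be tuned against the fully supervised certification outcome that the abstract uses as a reference.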