Deep learning for quality prediction of nonlinear dynamic processes with variable attention‐based long short‐term memory network
Author(s) - Yuan Xiaofeng, Li Lin, Wang Yalin, Yang Chunhua, Gui Weihua
Publication year - 2020
Publication title - The Canadian Journal of Chemical Engineering
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.404
H-Index - 67
eISSN - 1939-019X
pISSN - 0008-4034
DOI - 10.1002/cjce.23665
Subject(s) - soft sensor , nonlinear system , artificial neural network , long short-term memory , attention mechanism , quality prediction , time series , machine learning , data mining , computer science
Industrial processes are often characterized by strong nonlinearities and dynamics. For soft sensor modelling, it is important to capture the nonlinear and dynamic relationship between input and output data, which makes long short-term memory (LSTM) networks well suited to quality prediction in soft sensor modelling. However, standard LSTMs do not consider the relevance of different input variables to the quality variable. To address this issue, a variable attention-based long short-term memory (VA-LSTM) network is proposed in this paper for soft sensing. In VA-LSTM, variable attention is designed to identify important input variables according to their relevance to quality prediction. Attention weights are then calculated and assigned to obtain a weighted input sample at each time step. Finally, the LSTM network captures the long-term dependencies of the weighted input time series to predict the quality variable. The performance of the proposed modelling method is validated on an industrial debutanizer column and a hydrocracking process.
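The mechanism the abstract describes — score each input variable's relevance, normalize the scores into attention weights, reweight the raw sample, then feed the weighted sequence to an LSTM that predicts the quality variable — can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact architecture: the attention scoring matrix `Wa`, the layer sizes, the random initialization, and the linear read-out `w_out` are all assumptions.

```python
# Minimal sketch of a variable attention-based LSTM (VA-LSTM) for soft
# sensing. The attention parameterization and initialization here are
# illustrative assumptions, not the paper's exact design.
import numpy as np


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def softmax(z):
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()


class VALSTMSketch:
    def __init__(self, n_vars, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        # Attention scoring matrix: maps one input sample to one score
        # per input variable (hypothetical parameterization).
        self.Wa = rng.normal(scale=0.1, size=(n_vars, n_vars))
        # Stacked LSTM gate weights (input, forget, cell, output gates).
        self.W = rng.normal(scale=0.1, size=(4 * n_hidden, n_vars + n_hidden))
        self.b = np.zeros(4 * n_hidden)
        # Linear read-out from the final hidden state to the quality variable.
        self.w_out = rng.normal(scale=0.1, size=n_hidden)
        self.n_hidden = n_hidden

    def step(self, x_t, h, c):
        # Variable attention: score each input variable, normalize the
        # scores to weights, and reweight the raw sample elementwise
        # before it enters the LSTM cell.
        a = softmax(self.Wa @ x_t)
        x_w = a * x_t
        z = self.W @ np.concatenate([x_w, h]) + self.b
        H = self.n_hidden
        i = sigmoid(z[:H])            # input gate
        f = sigmoid(z[H:2 * H])       # forget gate
        g = np.tanh(z[2 * H:3 * H])   # candidate cell state
        o = sigmoid(z[3 * H:])        # output gate
        c = f * c + i * g
        h = o * np.tanh(c)
        return h, c, a

    def predict(self, X):
        """Run a (T, n_vars) sequence through the weighted LSTM."""
        h = np.zeros(self.n_hidden)
        c = np.zeros(self.n_hidden)
        weights = []
        for x_t in X:
            h, c, a = self.step(x_t, h, c)
            weights.append(a)
        # Quality prediction from the final hidden state; also return
        # the per-time-step attention weights for inspection.
        return self.w_out @ h, np.array(weights)
```

Because the attention weights are produced per time step and sum to one over the input variables, they can be inspected directly to see which process variables the model treats as most relevant to the quality variable at each moment.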
