
Receding horizon optimal control of HEVs with on‐board prediction of driver's power demand
Author(s) - Zhang Bo, Xu Fuguo, Shen Tielong
Publication year - 2020
Publication title - IET Intelligent Transport Systems
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.579
H-Index - 45
eISSN - 1751-9578
pISSN - 1751-956X
DOI - 10.1049/iet-its.2020.0245
Subject(s) - model predictive control , powertrain , torque , engineering , electric vehicle , fuel efficiency , dynamic programming , mathematical optimization , computer science , horizon , quadratic programming , energy management , power (physics) , control theory (sociology) , automotive engineering , control engineering , energy (signal processing) , control (management) , algorithm , artificial intelligence , statistics , physics , mathematics , quantum mechanics , thermodynamics , astronomy
To improve the fuel economy of a parallel hybrid electric vehicle (HEV), this study develops a real-time optimisation strategy with a learning-based method that predicts the driver's power demand in a connected environment. This demand is strongly constrained by the total power generated by the energy sources; therefore, a key issue in solving the energy management problem in real time by model-based predictive optimisation is predicting the power demand over each receding horizon. The proposed strategy consists of two layers. The upper layer predicts the driver's torque demand: Gaussian process regression (GPR) captures the uncertain, stochastic relationship between the traffic environment and the torque demand, with vehicle-to-vehicle and vehicle-to-infrastructure data serving as the inputs of the GPR model. The lower layer performs finite-horizon optimisation of an energy-consumption cost function: a receding horizon control (RHC) problem is formulated and solved by a sequential quadratic programming algorithm. To validate the proposed strategy, a powertrain control co-simulation platform with a traffic-in-the-loop environment is constructed, and validation results on this platform are presented. Comparisons with dynamic programming and with RHC without prediction show that the proposed strategy improves fuel economy.
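The two-layer structure described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the features, training data, cost weights, and torque bounds are all hypothetical placeholders, and standard off-the-shelf tools (scikit-learn's `GaussianProcessRegressor` for the GPR upper layer, SciPy's SLSQP solver for the sequential-quadratic-programming lower layer) stand in for the authors' models.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from scipy.optimize import minimize

# --- Upper layer: GPR prediction of the driver's torque demand ---
# Hypothetical training data: each row holds connected-environment
# features (e.g. preceding-vehicle speed, inter-vehicle gap, signal
# phase from V2V/V2I) paired with an observed torque demand.
rng = np.random.default_rng(0)
X_train = rng.uniform(0.0, 1.0, size=(200, 3))      # placeholder features
y_train = 50 * X_train[:, 0] - 20 * X_train[:, 1] + rng.normal(0, 1, 200)

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(),
                               normalize_y=True)
gpr.fit(X_train, y_train)

# Predict the demand over an N-step receding horizon; the GPR also
# returns a standard deviation reflecting the stochastic uncertainty.
N = 10
X_future = rng.uniform(0.0, 1.0, size=(N, 3))       # forecast features
T_dem, T_std = gpr.predict(X_future, return_std=True)

# --- Lower layer: finite-horizon split between engine and motor ---
# Decision variable: engine torque at each step; the motor supplies
# the remainder of the predicted demand. The quadratic fuel and
# battery-use proxies below are stand-ins, not the paper's model.
def cost(T_eng):
    T_mot = T_dem - T_eng
    fuel = np.sum(0.02 * T_eng**2 + 0.5 * np.abs(T_eng))
    batt = np.sum(0.01 * T_mot**2)
    return fuel + batt

bounds = [(0.0, 80.0)] * N          # assumed engine torque limits
res = minimize(cost, x0=np.full(N, 10.0), method="SLSQP", bounds=bounds)
T_eng_opt = res.x                   # apply the first step, then re-solve
```

In a receding-horizon loop, only the first element of `T_eng_opt` would be applied before the horizon shifts and both layers are re-evaluated with fresh traffic data.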