Open Access
Multi‐objective electrical demand response policy generation considering customer response behaviour learning: An enhanced inverse reinforcement learning approach
Author(s) -
Lin Junhao,
Zhang Yan,
Xu Shuangdie,
Yu Haidong
Publication year - 2021
Publication title -
IET Generation, Transmission & Distribution
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.92
H-Index - 110
eISSN - 1751-8695
pISSN - 1751-8687
DOI - 10.1049/gtd2.12260
Subject(s) - reinforcement learning , demand response , computer science , artificial intelligence , machine learning , inference , revenue , unsupervised learning , cluster analysis , engineering , economics , electricity , accounting , electrical engineering
Demand response (DR) is an effective load management method. To attract customers to participate, DR policies must both fit customers' individual DR habits and be economically profitable. However, customers' individual DR habits are hard to formulate with few hypotheses when other objectives are considered simultaneously. To tackle this challenge, a novel DR behavioural learning method is proposed. Customers' DR habits are learned by an inverse reinforcement learning (IRL) method, which reduces the subjectivity in DR model formulation. Meanwhile, in contrast to traditional learning-based methods, the proposed method can adapt to multiple DR objectives beyond simply following customers' DR habits, such as obtaining higher economic revenue. Additionally, to account for the diversity of and changes in customer DR behaviour patterns, the proposed method is enhanced with a DR pattern clustering and inference module. The method can also work with customer-side energy storage systems to diversify DR policies and make DR behaviours more flexible. Case studies show that the proposed method reduces behavioural learning deviations by about 10–20% relative to the compared model-based methods, while its daily charges are more than 4% lower than those of the compared supervised-learning-based methods.
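To make the core idea of IRL-based behaviour learning concrete, the toy sketch below recovers a linear reward over hand-crafted features from demonstrated load-shift choices via maximum-entropy-style gradient ascent. This is only an illustration, not the paper's actual algorithm: the single-step setting, the two features (cost saving and a quadratic discomfort proxy), the discrete shift levels, and the "true" preference weights are all assumptions made for the example.

```python
import numpy as np

# Toy single-step setting: each "state" is an hourly price, each action is a
# discrete load-shift level. Both the features and w_true are assumptions.
rng = np.random.default_rng(0)
prices = rng.uniform(0.1, 1.0, size=200)   # observed hourly prices
actions = np.array([0, 1, 2])              # load-shift levels (kW reduced)

def features(price, a):
    # cost-saving term and a quadratic discomfort proxy (illustrative only)
    return np.array([price * a, -a ** 2])

# Hidden "true" customer preference, used only to generate demonstrations.
w_true = np.array([2.0, 1.0])

def softmax_policy(w, price):
    # Boltzmann choice model: action probability rises with linear reward.
    logits = np.array([features(price, a) @ w for a in actions])
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Demonstrated DR behaviour sampled from the true preference.
demo = np.array([rng.choice(actions, p=softmax_policy(w_true, p)) for p in prices])

# Linear-reward max-entropy IRL: gradient ascent on the demonstration
# log-likelihood, i.e. match empirical and expected feature counts.
w = np.zeros(2)
for _ in range(500):
    grad = np.zeros(2)
    for p, a in zip(prices, demo):
        probs = softmax_policy(w, p)
        expected = sum(probs[i] * features(p, actions[i]) for i in range(len(actions)))
        grad += features(p, a) - expected
    w += 0.05 * grad / len(prices)

print("recovered reward weights:", w)
```

The recovered weights define a reward that reproduces the demonstrated choice tendencies; in a multi-objective setting, extra terms (e.g. an economic-revenue reward) could be added to the learned reward before re-solving for a policy, which is the spirit of adapting beyond pure habit imitation.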
