A hybrid trust model using reinforcement learning and fuzzy logic
Author(s) -
Aref Abdullah,
Tran Thomas
Publication year - 2018
Publication title -
Computational Intelligence
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.353
H-Index - 52
eISSN - 1467-8640
pISSN - 0824-7935
DOI - 10.1111/coin.12155
Subject(s) - computer science , reinforcement learning , fuzzy logic , trustworthiness , term (time) , artificial intelligence , machine learning , computer security , physics , quantum mechanics
Multiagent systems (MASs) are increasingly popular for modeling distributed environments that are highly complex and dynamic, such as e‐commerce, smart buildings, and smart grids. Typically, agents are assumed to be goal driven with limited abilities, which requires them to work with other agents to accomplish complex tasks. Trust is considered significant in MASs for making interactions effective, especially when agents cannot be sure that potential partners share the same core beliefs about the system or make accurate statements about their own competencies and abilities. Because of the imprecise and dynamic nature of trust in MASs, we propose a hybrid trust model that uses fuzzy logic and Q‐learning for trust modeling, as an improvement over Q‐learning‐based trust evaluation. Q‐learning is used to estimate trust over the long term, fuzzy inference is used to aggregate different trust factors, and suspension is used as a short‐term response to dynamic changes. The performance of the proposed model is evaluated by simulation. Simulation results indicate that the proposed model helps agents select trustworthy partners to interact with, and that it outperforms some popular trust models in the presence of misbehaving interaction partners.
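The abstract outlines three mechanisms: a Q‐learning‐style update for long‐term trust, fuzzy aggregation of trust factors, and suspension as a short‐term reaction. The following is a minimal illustrative sketch of how such a scheme could be wired together; the class, parameter names, thresholds, and the averaging stand‐in for fuzzy inference are all assumptions, not the authors' actual model.

```python
class HybridTrust:
    """Hypothetical sketch of a hybrid trust evaluator: long-term trust via an
    incremental Q-learning-style update, aggregation of interaction factors
    (a simple average stands in for fuzzy inference here), and short-term
    suspension of partners after a bad interaction."""

    def __init__(self, alpha=0.1, suspend_threshold=0.3, suspend_steps=5):
        self.alpha = alpha                      # learning rate for the Q update
        self.suspend_threshold = suspend_threshold
        self.suspend_steps = suspend_steps
        self.q = {}                             # long-term trust estimate per partner
        self.suspended = {}                     # partner -> remaining suspension steps

    def aggregate(self, factors):
        """Placeholder for fuzzy aggregation: averages the trust factors,
        each assumed to lie in [0, 1]."""
        return sum(factors) / len(factors)

    def update(self, partner, outcome_factors):
        """Incorporate one interaction outcome into the partner's trust."""
        r = self.aggregate(outcome_factors)     # aggregated outcome in [0, 1]
        q = self.q.get(partner, 0.5)            # neutral prior for unknown partners
        q = q + self.alpha * (r - q)            # Q-learning-style incremental update
        self.q[partner] = q
        if r < self.suspend_threshold:          # short-term response to misbehavior
            self.suspended[partner] = self.suspend_steps
        return q

    def is_trusted(self, partner):
        """Suspension overrides the long-term estimate for a few steps."""
        steps = self.suspended.get(partner, 0)
        if steps > 0:
            self.suspended[partner] = steps - 1
            return False
        return self.q.get(partner, 0.5) >= self.suspend_threshold
```

The suspension check runs before the long-term estimate is consulted, which mirrors the abstract's point that suspension gives a fast reaction to dynamic changes that the slower Q‐learning estimate would otherwise smooth over.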
