
Efficient method of calculating Shannon entropy of non-static transport problem in message passing parallel programming environment
Author(s) -
Shangguan Danhua,
Li Deng,
Baoyin Zhang,
Zhicheng Ji,
Gang Li
Publication year - 2016
Publication title -
Wuli Xuebao (Acta Physica Sinica)
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.199
H-Index - 47
ISSN - 1000-3290
DOI - 10.7498/aps.65.142801
Subject(s) - computer science , entropy (arrow of time) , computation , monte carlo method , mathematical optimization , rényi entropy , statistical physics , algorithm , mathematics , theoretical computer science , principle of maximum entropy , statistics , physics , artificial intelligence , thermodynamics
A Monte Carlo simulation of a non-static (time-dependent) transport problem proceeds in many calculation steps. Particles that have not finished their transport by the end of one step naturally serve as source particles for the next step; these are called undied particles. It is difficult to adjust the history number of each step for higher efficiency, because a suitable adjusting rule is hard to find. The most direct approach is to set a sufficiently large history number for all steps, but this is clearly wasteful for some of them. Among all possible rules, one candidate is to check, every fixed number of samples within each step, whether the Shannon entropy of the distribution of some attribute of the undied particles has converged, and to use this check to decide whether more particles should be simulated. This rule requires calculating the Shannon entropy frequently. Because the classical method of calculating Shannon entropy in a message-passing parallel programming environment must reduce a large amount of data across processes, it is impractical in this situation: the computation time grows greatly as the entropy is calculated more often.

In this paper, we propose an efficient method of calculating the entropy in a message-passing parallel programming environment: each process computes an entropy value from the local data on its own processor, and the final entropy is obtained by averaging the entropy values from all processes. The entropy value given by this method differs from that of the classical method for a finite history number, but the difference goes to zero as the history number goes to infinity. The most remarkable advantage of this method is that the computation time increases only slightly even when the entropy is calculated frequently. It is therefore a suitable way to calculate Shannon entropy when the history number is adjusted automatically based on the convergence of the Shannon entropy.
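The idea of the proposed method can be illustrated with a small sketch (this is not the authors' code). Message passing is emulated here by splitting one sample into P local chunks; in a real MPI code each chunk would reside on its own process, and only P scalar entropy values, rather than the full histogram data, would need to be reduced. The histogram binning, the Gaussian test distribution, and the function names are illustrative assumptions.

```python
# Sketch: classical "reduce all data" Shannon entropy vs. the proposed
# "average of per-process local entropies". P processes are emulated by
# slicing one sample into P chunks; a real message-passing code would
# reduce only the P scalar entropy values (cheap) instead of the full
# per-process histograms (expensive).
import math
import random
from collections import Counter

def shannon_entropy(samples, bins, lo, hi):
    """Histogram-based Shannon entropy (in bits) of samples over [lo, hi)."""
    width = (hi - lo) / bins
    counts = Counter(min(int((x - lo) / width), bins - 1) for x in samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

random.seed(0)
P = 8                # number of (emulated) MPI processes
histories = 200_000  # total history number
data = [random.gauss(0.0, 1.0) for _ in range(histories)]
chunks = [data[r::P] for r in range(P)]  # local data of each process

# Classical method: gather all data, one entropy from the global histogram.
h_global = shannon_entropy(data, bins=50, lo=-5.0, hi=5.0)

# Proposed method: each process computes its local entropy; average P scalars.
h_avg = sum(shannon_entropy(c, bins=50, lo=-5.0, hi=5.0) for c in chunks) / P

print(f"global = {h_global:.4f}, averaged = {h_avg:.4f}, "
      f"difference = {abs(h_global - h_avg):.2e}")
```

For a finite history number the two values differ slightly (each local histogram is built from fewer samples), but the difference shrinks toward zero as the history number grows, consistent with the convergence behavior described above.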