Open Access
Optimizing Chemical Reactions with Deep Reinforcement Learning
Author(s) -
Zhenpeng Zhou,
Xiaocheng Li,
Richard N. Zare
Publication year - 2017
Publication title -
ACS Central Science
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 4.893
H-Index - 76
eISSN - 2374-7951
pISSN - 2374-7943
DOI - 10.1021/acscentsci.7b00492
Subject(s) - reinforcement learning, computer science, regret, chemical reaction, artificial intelligence, reinforcement, machine learning, chemistry, materials science, organic chemistry, composite material
Deep reinforcement learning was employed to optimize chemical reactions. Our model iteratively records the results of a chemical reaction and chooses new experimental conditions to improve the reaction outcome. This model outperformed a state-of-the-art black-box optimization algorithm by using 71% fewer steps on both simulations and real reactions. Furthermore, we introduced an efficient exploration strategy by drawing the reaction conditions from certain probability distributions, which resulted in an improvement in regret from 0.062 to 0.039 compared with a deterministic policy. Combining the efficient exploration policy with accelerated microdroplet reactions, optimal reaction conditions were determined in 30 min for the four reactions considered, and a better understanding of the factors that control microdroplet reactions was reached. Moreover, our model showed better performance after training on reactions with similar or even dissimilar underlying mechanisms, which demonstrates its learning ability.
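The optimization loop the abstract describes — record a reaction outcome, then choose new conditions, with exploration coming from sampling conditions out of a probability distribution rather than following a deterministic policy — can be sketched in miniature. The sketch below is not the paper's deep RNN model; it is a simplified stochastic-policy loop over a hypothetical two-parameter yield surface (`simulated_yield`, temperature and flow rate are assumed illustrative condition variables), where a Gaussian sampling distribution is nudged toward the best conditions seen so far and its spread is annealed over time:

```python
import random

def simulated_yield(temp, flow):
    # Hypothetical smooth yield surface for illustration only,
    # peaking at temp = 60, flow = 1.5 (not from the paper).
    return max(0.0, 1.0 - ((temp - 60.0) / 40.0) ** 2
                        - ((flow - 1.5) / 1.0) ** 2)

def optimize(steps=200, seed=0):
    rng = random.Random(seed)
    # Stochastic policy: conditions are *drawn* from Gaussians,
    # so exploration is built in (vs. a deterministic policy).
    mean = {"temp": 30.0, "flow": 0.5}
    std = {"temp": 10.0, "flow": 0.5}
    best_cond, best_yield = dict(mean), simulated_yield(**mean)
    for _ in range(steps):
        # Sample new experimental conditions from the current policy.
        cond = {k: rng.gauss(mean[k], std[k]) for k in mean}
        y = simulated_yield(**cond)  # "run" the reaction, record outcome
        if y > best_yield:
            best_cond, best_yield = dict(cond), y
        # Shift the policy mean toward the best conditions observed.
        for k in mean:
            mean[k] += 0.2 * (best_cond[k] - mean[k])
        # Anneal exploration: shrink the sampling spread over time.
        for k in std:
            std[k] *= 0.99
    return best_cond, best_yield
```

The point of the sampled (rather than deterministic) policy is exactly the trade-off the abstract quantifies as regret: sampling occasionally wastes a step on poor conditions, but it keeps the search from locking onto a suboptimal region early.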
