Reinforcement Learning for Dynamic Microfluidic Control
Author(s) - Oliver J. Dressler, Philip D. Howes, Jaebum Choo, Andrew J. deMello
Publication year - 2018
Publication title - ACS Omega
Language(s) - English
Resource type - Journals
ISSN - 2470-1343
DOI - 10.1021/acsomega.8b01485
Subject(s) - microfluidics, reinforcement learning, computer science, flow control (data), microchannel, artificial intelligence, throughput, nanotechnology, control engineering, engineering, materials science, computer network, telecommunications, wireless
Recent years have witnessed an explosion in the application of microfluidic techniques to a wide variety of problems in the chemical and biological sciences. Despite the considerable advantages that microfluidic systems bring to experimental science, microfluidic platforms often exhibit inconsistent system performance when operated over extended timescales. Such variations in performance arise from a multiplicity of factors, including microchannel fouling, substrate deformation, temperature and pressure fluctuations, and inherent manufacturing irregularities. The introduction and integration of advanced control algorithms in microfluidic platforms can help mitigate such inconsistencies, paving the way for robust and repeatable long-term experiments. Herein, two state-of-the-art reinforcement learning algorithms, based on Deep Q-Networks and model-free episodic controllers, are applied to two experimental "challenges," involving both continuous-flow and segmented-flow microfluidic systems. The algorithms are able to attain superhuman performance in controlling and processing each experiment, highlighting the utility of novel control algorithms for automated high-throughput microfluidic experimentation.
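To illustrate the kind of value-based control the abstract refers to, the sketch below shows a pared-down Deep Q-Network-style loop on a toy one-dimensional flow-regulation task. It is not the authors' code or apparatus: the ToyFlowEnv plant, the reward shaping, the network size, and all hyperparameters are illustrative assumptions, and the sketch omits experience replay and target networks that a full DQN would include.

```python
# Minimal DQN-style sketch for a hypothetical flow-control task.
# ToyFlowEnv, the setpoint, and all hyperparameters are assumptions for
# illustration only; they do not reproduce the paper's experiments.
import numpy as np

rng = np.random.default_rng(0)

class ToyFlowEnv:
    """Hypothetical 1-D plant: the agent nudges a pump setting to hold a
    noisy flow rate at a target value (a stand-in for the real chip)."""
    def __init__(self, target=1.0):
        self.target = target

    def reset(self):
        self.flow = rng.uniform(0.5, 1.5)
        return np.array([self.flow - self.target])

    def step(self, action):  # actions: 0 = decrease, 1 = hold, 2 = increase
        self.flow += (action - 1) * 0.05 + rng.normal(0.0, 0.01)  # drift + noise
        err = abs(self.flow - self.target)
        # Reward is the negative tracking error; episode ends if error is large.
        return np.array([self.flow - self.target]), -err, err > 0.5

# Tiny one-hidden-layer Q-network, trained with a TD(0) target.
n_in, n_hid, n_act = 1, 16, 3
W1 = rng.normal(0, 0.5, (n_hid, n_in)); b1 = np.zeros(n_hid)
W2 = rng.normal(0, 0.5, (n_act, n_hid)); b2 = np.zeros(n_act)

def q_values(s):
    h = np.tanh(W1 @ s + b1)
    return W2 @ h + b2, h

gamma, lr, eps = 0.95, 1e-2, 0.1
env = ToyFlowEnv()
for episode in range(200):
    s = env.reset()
    for t in range(100):
        q, h = q_values(s)
        a = rng.integers(n_act) if rng.random() < eps else int(np.argmax(q))
        s2, r, done = env.step(a)
        q2, _ = q_values(s2)
        target = r if done else r + gamma * np.max(q2)
        td = target - q[a]
        # Gradient step on the squared TD error for the taken action only.
        W2[a] += lr * td * h
        b2[a] += lr * td
        grad_h = td * W2[a] * (1.0 - h**2)
        W1 += lr * np.outer(grad_h, s)
        b1 += lr * grad_h
        s = s2
        if done:
            break

print("Q-values at the setpoint after training:", q_values(np.array([0.0]))[0])
```

In this toy setting the learned policy converges to preferring the "hold" action near the setpoint; the paper's model-free episodic controller variant would replace the parametric Q-network with a memory of previously observed state-action returns.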