Nonlinear weighted feedback control of groundwater remediation under uncertainty
Author(s) - Whiffen, Gregory J.; Shoemaker, Christine A.
Publication year - 1993
Publication title - Water Resources Research
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.863
H-Index - 217
eISSN - 1944-7973
pISSN - 0043-1397
DOI - 10.1029/93WR00928
Subject(s) - optimal control, mathematical optimization, Galerkin method, nonlinear system, penalty method, interval (graph theory), mathematics, uncertainty quantification, dynamic programming, control theory (sociology), computer science, finite element method, control (management), engineering, statistics, physics, quantum mechanics, combinatorics, artificial intelligence, structural engineering
Differential dynamic programming is used to compute optimal time‐varying pumping policies for a pump‐and‐treat strategy for groundwater remediation. The feedback law generated by a constrained differential dynamic programming algorithm with penalty functions serves as the basis of the feedback laws tested in cases where the hydraulic conductivity is uncertain. Confined transient aquifer flow and transport are modeled using a two‐dimensional Galerkin finite element scheme with implicit time differencing. Optimal policies are calculated using a given or “measured” set of hydraulic conductivities and initial conditions. The optimal policies (with and without feedback) are then applied using the same finite element model with a second or “true” set of conductivities. The “true” sets of conductivities are generated randomly from an autocorrelated lognormal distribution by the spectral method. The approach used here has an advantage over other uncertainty approaches because it is not necessary to specify precisely which parameters are considered uncertain and which are certain. Also, no single probability distribution need be assumed for each uncertain parameter. By adjusting the relative weight assigned to each penalty function, robust feedback laws were obtained that perform equally well under nine different assumed error distributions. In our examples, well‐designed feedback policies cost 4% to 51% less than applying the calculated optimal policies without a feedback law.
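The core experiment — design a policy on a "measured" model, then run it open loop versus with feedback on a "true" model — can be sketched in a toy setting. This is a hypothetical scalar linear‐quadratic example (for which differential dynamic programming reduces to a Riccati/LQR recursion), not the paper's nonlinear groundwater model; all constants (`A_NOM`, `A_TRUE`, cost weights) are made up for illustration:

```python
import numpy as np

Q, R, QF = 1.0, 1.0, 1.0          # stage and terminal cost weights (hypothetical)
A_NOM, A_TRUE, B = 0.9, 1.1, 1.0  # "measured" vs. "true" dynamics (hypothetical)
T, X0 = 20, 5.0                   # horizon and initial state

def backward_pass(a, b):
    """Riccati recursion on the nominal model; for an LQ problem this is
    exactly the backward pass of differential dynamic programming."""
    p, gains = QF, np.empty(T)
    for t in range(T - 1, -1, -1):
        gains[t] = -a * p * b / (R + b * p * b)
        p = Q + a * p * a - (a * p * b) ** 2 / (R + b * p * b)
    return gains

def simulate(a_true, policy):
    """Roll the policy out on the *true* model and accumulate cost."""
    x, cost = X0, 0.0
    for t in range(T):
        u = policy(t, x)
        cost += Q * x * x + R * u * u
        x = a_true * x + B * u
    return cost + QF * x * x

gains = backward_pass(A_NOM, B)

# Nominal trajectory and open-loop control sequence from the nominal model.
x_nom, u_nom = np.empty(T + 1), np.empty(T)
x_nom[0] = X0
for t in range(T):
    u_nom[t] = gains[t] * x_nom[t]
    x_nom[t + 1] = A_NOM * x_nom[t] + B * u_nom[t]

# Open loop: replay the precomputed controls despite the model error.
cost_open = simulate(A_TRUE, lambda t, x: u_nom[t])
# Feedback: correct each control by the gain times the state deviation.
cost_fb = simulate(A_TRUE, lambda t, x: u_nom[t] + gains[t] * (x - x_nom[t]))
```

Here `cost_fb` comes out below `cost_open`, mirroring the abstract's 4%–51% savings: the feedback term compensates for the mismatch between the measured and true dynamics that the open-loop sequence cannot see.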
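The "true" conductivity sets are drawn from an autocorrelated lognormal distribution by the spectral method. A common FFT-based realization of that idea — sample a Gaussian ln K field with a prescribed covariance via the eigenvalues of a circulant embedding, then exponentiate — can be sketched as follows. The exponential covariance, grid size, and correlation length are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def lognormal_conductivity(n=64, dx=1.0, corr_len=8.0,
                           mean_ln_k=0.0, var_ln_k=1.0, rng=None):
    """Sample an autocorrelated lognormal field K on an n x n periodic grid.

    Y = ln K is Gaussian with exponential covariance
    C(h) = var_ln_k * exp(-|h| / corr_len), and K = exp(Y).
    """
    rng = np.random.default_rng() if rng is None else rng
    # Wrap-around lag distances on the periodic grid.
    lags = np.minimum(np.arange(n), n - np.arange(n)) * dx
    hx, hy = np.meshgrid(lags, lags, indexing="ij")
    cov = var_ln_k * np.exp(-np.hypot(hx, hy) / corr_len)
    # FFT of the covariance gives the eigenvalues of the circulant covariance.
    spectrum = np.clip(np.fft.fft2(cov).real, 0.0, None)  # guard round-off negatives
    # Color complex white noise by the square-root spectrum, transform back.
    eps = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    y = np.fft.ifft2(np.sqrt(spectrum) * eps).real * n  # n = sqrt(grid size)
    return np.exp(mean_ln_k + y)
```

Each call yields one "true" conductivity realization; generating many and re-simulating the remediation under each is how robustness of a candidate policy could be assessed.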