Open Access
Process Optimisation Based on Large Databases of Routinely Monitored Industrial Process Data
Author(s) -
Karin Kovar,
Thomas Friedli,
Dusan Roubicek,
David S. Langenegger,
Markus Keller,
Hans-Peter Meyer
Publication year - 2005
Publication title - CHIMIA
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.387
H-Index - 55
eISSN - 2673-2424
pISSN - 0009-4293
DOI - 10.2533/000942905777675688
Subject(s) - raw data , process (computing) , computer science , process engineering , database , scale (ratio) , process optimization , industrial engineering , industrial production
Huge amounts of data are routinely logged and stored during the monitoring of biotechnological production processes. A concept is described to extract and analyse the information these data contain and subsequently to apply it to process improvement. In total, roughly 100,000 time series of raw and derived signals, stemming from 173 high-cell-density processes with recombinant microorganisms at 50 m³ scale (working volume), were processed. As is often the case, no mathematical process models were readily available, and therefore data-driven, computer-intensive methods were applied. These endeavours helped to stimulate a change in manufacturing strategy, which in turn has led to an increase in the final product titre of 26% on average.
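The abstract does not specify which data-driven methods were used, but the general workflow it describes — deriving per-batch features from large sets of monitored time series and relating them to the final product titre, without a mechanistic process model — can be sketched as follows. Everything here is illustrative: the signal, the feature names, and the synthetic data are assumptions, not the authors' actual variables or results.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for routinely logged batch data: for each of 173
# batches, one monitored signal (e.g. a feed-rate profile) sampled at
# 200 time points, plus that batch's final product titre.
n_batches, n_samples = 173, 200
t = np.linspace(0.0, 1.0, n_samples)

# Hypothetical ground truth for the sketch: batches whose profile ramps
# up more steeply end with a higher titre (plus measurement noise).
slopes = rng.uniform(0.5, 2.0, size=n_batches)
signals = slopes[:, None] * t[None, :] + rng.normal(0.0, 0.05, (n_batches, n_samples))
titres = 10.0 + 5.0 * slopes + rng.normal(0.0, 0.5, n_batches)

# Data-driven step: derive simple per-batch summary features from each
# raw time series and rank them by correlation with the final titre.
features = {
    "mean": signals.mean(axis=1),
    "final_value": signals[:, -1],
    "slope": np.polyfit(t, signals.T, 1)[0],  # least-squares slope per batch
}
ranking = sorted(
    ((abs(np.corrcoef(f, titres)[0, 1]), name) for name, f in features.items()),
    reverse=True,
)
for corr, name in ranking:
    print(f"{name:12s} |r| = {corr:.2f}")
```

In a real setting the feature ranking would point engineers at the process variables most associated with high-titre batches, which is the kind of insight that can motivate a change in manufacturing strategy.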
