Plenary Lecture 1 Decentralized Planning under Uncertainty: Theory and Practice for Multiagent Collaboration
Date/Time Tuesday, May 31, 2016 08:45-09:45
Venue International Lecture Hall, 2nd floor
Presenter Prof. Frank L. Lewis, University of Texas at Arlington

Abstract

Modern industrial processes are complex, and new imperatives in sustainable manufacturing and energy-efficient systems require improved decision and control methods. Increasing emphasis is being placed on the optimal design of automatic decision and control systems, with criteria such as minimum fuel, minimum energy, minimum time, and minimum pollutant concentration. Operational control loops are responsible for stable plant operation and must ensure tracking of setpoints issued by higher-level supervisory loops that incorporate optimization-based design criteria. Optimal feedback control design has been responsible for much of the successful performance of engineered systems in aerospace, manufacturing, industrial processes, vehicles, ships, robotics, and elsewhere since the 1960s. However, optimal feedback designs are computed offline by solving design equations such as the algebraic Riccati equation. Optimal design is difficult for nonlinear process systems because it relies on solving complicated Hamilton-Jacobi-Bellman equations, and it generally requires that the full system dynamics be known, which is seldom the case in manufacturing systems. Modeling and identifying system dynamics is complicated, expensive, and inaccurate, and practical manufacturing systems may have no tractable closed-form model. Nevertheless, the large amounts of measured data available in today's industry have the potential to allow good process controller design with optimized performance, if the data are used properly and efficiently.

This talk will present methods for online data-driven control (DDC) and data-driven optimization (DDO) of processes with unknown dynamical models, using process data measured online. We will present several methods for efficient online tuning of process controllers based on real-time data measurements for unmodeled or partially modeled processes. Techniques from Reinforcement Learning are used to design a novel class of adaptive control algorithms that converge to optimal control solutions through online learning in real time. These are based on actor-critic Reinforcement Learning mechanisms that occur in the human brain and in sociological ecosystems. Reinforcement Learning provides methods for learning optimal, energy-efficient control solutions online in real time, using data measured along the process trajectories, for unmodeled systems with unknown dynamics.

In industrial processes, measurements of all the internal process states are usually unavailable, disturbances are present, and commonly only a few control gains can be tuned, not all of the process controller parameters. Therefore, methods based on Reinforcement Learning Policy Iteration and Approximate Dynamic Programming (ADP) will be employed to design online learning controllers that adapt only output-feedback gains and incorporate features of H-infinity robust control. The result is a two-loop supervisory control scheme, with inner control loops that guarantee stable plant operation and outer loops that tune the controller on a longer time horizon for optimal performance. Comparisons with other supervisory process control methods are given. Auto-tuning is a method of tuning PD and PID control parameters online using process input test signals. Run-to-Run Iterative Learning Control improves the controller design at each iteration of a process run by measuring the errors incurred during the previous run.
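To make the policy-iteration idea concrete, the following is a minimal illustrative sketch (not taken from the talk) of policy iteration for the discrete-time LQR problem, the model-based analogue of the data-driven schemes described above. The plant matrices, cost weights, and initial gain are assumed values chosen only for illustration. The iteration alternates policy evaluation (a Lyapunov equation) and policy improvement, converging to the algebraic Riccati equation solution; the Reinforcement Learning variants discussed in the talk estimate the same value-function quantities from measured trajectory data instead of using known A and B.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Assumed example plant x_{k+1} = A x_k + B u_k and quadratic cost weights.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])

K = np.array([[1.0, 2.0]])  # any initial stabilizing gain for u_k = -K x_k

for i in range(50):
    Acl = A - B @ K
    # Policy evaluation: P solves P = Acl' P Acl + Q + K' R K
    P = solve_discrete_lyapunov(Acl.T, Q + K.T @ R @ K)
    # Policy improvement: greedy gain for the evaluated value function
    K_new = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    if np.linalg.norm(K_new - K) < 1e-10:
        K = K_new
        break
    K = K_new

print("Converged gain K:\n", K)
print("Riccati solution P:\n", P)
```

Each iterate is a stabilizing gain, and the sequence of value matrices P decreases monotonically to the Riccati solution, which is why the online learning versions can guarantee stability while they tune.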

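Similarly, the run-to-run idea can be illustrated with a small sketch (again assumed, not from the talk) of P-type iterative learning control on a scalar first-order plant: each run applies the stored input sequence, records the tracking error, and corrects the input for the next run using the previous run's error.

```python
import numpy as np

a, b = 0.9, 0.5                         # assumed scalar plant x_{k+1} = a x_k + b u_k, y_k = x_k
N = 50                                  # samples per run
r = np.sin(np.linspace(0, np.pi, N))    # reference trajectory to track
L = 1.0                                 # learning gain; converges since |1 - L*b| < 1

u = np.zeros(N)                         # input sequence, refined run by run
for run in range(30):
    x, y = 0.0, np.zeros(N)
    for k in range(N):                  # apply the current input sequence
        y[k] = x
        x = a * x + b * u[k]
    e = r - y                           # error measured during this run
    # P-type ILC update: u_k is corrected by e_{k+1}, since u_k first affects y_{k+1}
    u[:-1] += L * e[1:]
    print(f"run {run:2d}  max |error| = {np.abs(e).max():.4f}")
```

The maximum tracking error contracts from run to run, mirroring how run-to-run control improves the design at each process iteration using only measured errors.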
Biography

Frank L. Lewis, Fellow IEEE, Fellow IFAC, is the Moncrief-O'Donnell Chair at The University of Texas at Arlington Research Institute, a Thousand Talents Consulting Professor at Northeastern University, Shenyang, China, and an IEEE Control Systems Society Distinguished Lecturer. He obtained the Bachelor's Degree in Physics/EE and the MSEE at Rice University, the MS in Aeronautical Engineering from the University of West Florida, and the Ph.D. at the Georgia Institute of Technology. He works in feedback control, reinforcement learning, intelligent systems, and distributed control systems. He is the author of 7 U.S. patents, 327 journal papers, 411 conference papers, 20 books, 44 chapters, and 11 journal special issues. He received the IEEE Computational Intelligence Society Neural Networks Pioneer Award (2012), the AIAA Intelligent Systems Award (2016), the Fulbright Research Award, an NSF Research Initiation Grant, the ASEE Terman Award, the International Neural Network Society Gabor Award (2009), and the U.K. Institute of Measurement & Control Honeywell Field Engineering Medal (2009). He is a Distinguished Foreign Scholar at Nanjing University of Science & Technology and a Project 111 Professor at Northeastern University, China. He received the Outstanding Service Award from the Dallas IEEE Section and was selected as Engineer of the Year by the Ft. Worth IEEE Section. He is listed in the Ft. Worth Business Press Top 200 Leaders in Manufacturing. He received the 2010 IEEE Region 5 Outstanding Engineering Educator Award and the 2010 UTA Graduate Dean's Excellence in Doctoral Mentoring Award. He was elected to the UTA Academy of Distinguished Teachers in 2012 and received the Texas Regents Outstanding Teaching Award in 2013. He served on the NAE Committee on Space Station in 1995. He is a Founding Member of the Board of Governors of the Mediterranean Control Association. He helped win the IEEE Control Systems Society Best Chapter Award (as Founding Chairman of the DFW Chapter), the National Sigma Xi Award for Outstanding Chapter (as President of the UTA Chapter), and the US SBA Tibbets Award in 1996 (as Director of ARRI's SBIR Program).