Frank Allgöwer, University of Stuttgart
Title: Data and/or control – is control theory becoming obsolete?

Abstract: While recent years have shown rapid progress of learning-based methods in effectively utilizing data for control tasks, most existing control-theoretic approaches still require knowledge of an accurate system model. It is worth asking whether this trend towards data-driven approaches will ultimately render classical systems and control theory obsolete. On the other hand, a key feature of control theory has always been its ability to provide rigorous theoretical guarantees – something that the learning community has only recently begun to address. In this talk, we present a novel framework for data-driven control theory, which does not rely on any model knowledge but still allows us to give desirable theoretical guarantees. This framework relies on a result from behavioral systems theory, where it was proven that the vector space of all input-output trajectories of a linear time-invariant system is spanned by time-shifts of a single measured trajectory, provided that the respective input signal is persistently exciting. We show how this result can be utilized to develop a mathematically sound approach to data-driven system analysis, with the possibility of verifying input-output properties (e.g., dissipation inequalities) of unknown systems. Moreover, we propose a novel purely data-driven model predictive control scheme and present theoretical results on closed-loop stability and robustness. Finally, the presented framework allows us to design state-feedback controllers with performance guarantees, even if the data are affected by noise.
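
A rough numerical illustration of the span result the abstract builds on (often called the fundamental lemma): the sketch below stacks time-shifts of a single measured input-output trajectory into Hankel matrices and checks that a freshly generated trajectory of the same system lies in their column span. The scalar LTI system, data length and window length are hypothetical choices made purely for illustration and are not taken from the talk.

```python
import numpy as np

def block_hankel(signal, depth):
    """Stack all length-`depth` windows of a (T, m) signal as columns of a block-Hankel matrix."""
    T = signal.shape[0]
    return np.column_stack([signal[i:i + depth].reshape(-1) for i in range(T - depth + 1)])

rng = np.random.default_rng(0)
step = lambda x, u: 0.9 * x + u        # hypothetical scalar LTI system: x+ = 0.9 x + u, y = x

# Collect one measured trajectory of length T under a random (hence persistently exciting) input.
T, L = 60, 8
u = rng.standard_normal((T, 1))
y = np.zeros((T, 1))
x = 0.0
for t in range(T):
    y[t, 0] = x
    x = step(x, u[t, 0])

# Stack time-shifts of the measured data into input and output Hankel matrices.
H = np.vstack([block_hankel(u, L), block_hankel(y, L)])

# Generate a fresh length-L trajectory (different input, different initial state) ...
u2 = rng.standard_normal((L, 1))
y2 = np.zeros((L, 1))
x = 0.3
for t in range(L):
    y2[t, 0] = x
    x = step(x, u2[t, 0])

# ... and verify that it lies in the column span of H, as the span result predicts.
traj = np.concatenate([u2[:, 0], y2[:, 0]])
g, *_ = np.linalg.lstsq(H, traj, rcond=None)
print("residual:", np.linalg.norm(H @ g - traj))   # ~0 up to numerical precision
```

If the measured data were noisy, this residual would no longer vanish; handling that case is precisely the subject of the robustness and noise results mentioned at the end of the abstract.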

Bio: Frank Allgöwer is director of the Institute for Systems Theory and Automatic Control and professor in Mechanical Engineering at the University of Stuttgart in Germany. Frank's main interests in research and teaching are in the area of systems and control with a current emphasis on the development of new methods for data-based control, optimization-based control, networks of systems, and systems biology. Frank received several recognitions for his work including the IFAC Outstanding Service Award, the IEEE CSS Distinguished Member Award, the State Teaching Award of the German state of Baden-Württemberg, and the Leibniz Prize of the Deutsche Forschungsgemeinschaft.
Frank was President of the International Federation of Automatic Control (IFAC) for the years 2017-2020. He was Editor for the journal Automatica from 2001 to 2015, is editor for the Springer Lecture Notes in Control and Information Sciences book series, and has published over 500 scientific articles. From 2012 until 2020 Frank also served as Vice-President of Germany's most important research funding agency, the German Research Foundation (DFG).


Alessandro Astolfi, Imperial College London

Title: Data-driven model reduction

Abstract: The aim of the talk is to discuss two methods for obtaining reduced order models, for linear and nonlinear systems, from data. In the first part of the talk the notion of moment for linear systems is generalized to nonlinear, possibly time-delay, systems. It is shown that this notion provides a powerful tool for the identification of reduced order models from input-output data. It is also shown that the canonical parameterization of the reduced order model as a rank-one update of the "interpolation-point matrix" is not necessary; hence one can prove robustness of data-driven model reduction algorithms against variations in the location of the interpolation points. In the second part of the talk the Loewner framework for model reduction is discussed, and it is shown that the introduction of left- and right-Loewner matrices/functions simplifies the construction of reduced order models from data. This is joint work with Z. Wang (Southeast University), G. Scarciotti (Imperial College) and J. Simard (Imperial College).
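
A rough sketch of the Loewner construction mentioned in the second part of the abstract, in its simplest linear SISO form: the Loewner and shifted-Loewner matrices are built directly from frequency-response samples at left and right interpolation points, and together they define a data-driven reduced order model. The second-order transfer function and the interpolation points below are hypothetical choices for illustration only; the left- and right-Loewner functions for nonlinear systems discussed in the talk go well beyond this sketch.

```python
import numpy as np

def loewner_matrices(mu, v, lam, w):
    """SISO Loewner and shifted-Loewner matrices built from left data (mu, v = H(mu))
    and right data (lam, w = H(lam))."""
    D = mu[:, None] - lam[None, :]
    L = (v[:, None] - w[None, :]) / D                                # Loewner matrix
    Ls = (mu[:, None] * v[:, None] - lam[None, :] * w[None, :]) / D  # shifted Loewner matrix
    return L, Ls

# Hypothetical second-order system H(s) = 1 / (s^2 + 0.4 s + 1), sampled at two left and two right points.
H = lambda s: 1.0 / (s**2 + 0.4 * s + 1.0)
lam = np.array([0.1, 1.0])       # right interpolation points
mu = np.array([0.5, 2.0])        # left interpolation points (disjoint from lam)
w, v = H(lam), H(mu)

L, Ls = loewner_matrices(mu, v, lam, w)

# Data-driven model in descriptor form: Hr(s) = w^T (Ls - s L)^{-1} v.
Hr = lambda s: w @ np.linalg.solve(Ls - s * L, v)

# Hr matches H at the interpolation points, and, since we used as many samples as the
# underlying order, at every other frequency as well.
for s in [0.1, 1.0, 3.0]:
    print(s, H(s), Hr(s))
```

In practice the number of samples exceeds the (unknown) order, and a rank-revealing factorization of the Loewner pencil is used to project down to a reduced order model; this sketch skips that step.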

Bio: Alessandro Astolfi was born in Rome, Italy, in 1967. He graduated in electrical engineering from the University of Rome in 1991. In 1992 he joined ETH-Zurich, where he obtained an M.Sc. in Information Theory in 1995 and the Ph.D. degree with Medal of Honor in 1995 with a thesis on discontinuous stabilisation of nonholonomic systems. In 1996 he was awarded a Ph.D. from the University of Rome "La Sapienza" for his work on nonlinear robust control. Since 1996 he has been with the Electrical and Electronic Engineering Department of Imperial College London, London (UK), where he is currently Professor of Nonlinear Control Theory and Head of the Control and Power Group. From 1998 to 2003 he was also an Associate Professor at the Dept. of Electronics and Information of the Politecnico di Milano. Since 2005 he has also been a Professor at the Dipartimento di Ingegneria Civile e Ingegneria Informatica, University of Rome Tor Vergata. His research interests are focussed on mathematical control theory and control applications, with special emphasis on the problems of discontinuous stabilisation, robust and adaptive control, observer design and model reduction.


Ben M. Chen (陈本美), The Chinese University of Hong Kong

Title: Fully Autonomous UAS and Its Applications

Abstract: The research and market for unmanned aerial systems (UAS), or drones, have greatly expanded over the last few years. The currently small civilian unmanned aircraft market is expected to become one of the major technological and economic stories of the modern age, owing to the wide variety of possible applications and the added value this technology can bring. Modern unmanned aerial systems are achieving promising success because of their versatility, flexibility, low cost, and minimized risk of operation. In this talk, we highlight some key techniques involved in developing fully autonomous unmanned aerial vehicles and their industrial application examples, which include deep tunnel inspection, stock counting and checking in warehouses, and building inspection.

Bio: Ben M. Chen is currently a Professor in the Department of Mechanical and Automation Engineering at the Chinese University of Hong Kong. He was a Provost's Chair Professor in the Department of Electrical and Computer Engineering at the National University of Singapore (NUS), where he also served as Director of the Control, Intelligent Systems and Robotics Area and Head of the Control Science Group, NUS Temasek Laboratories. His current research interests are in unmanned systems, robust control and control applications.

Dr. Chen is an IEEE Fellow. He has published more than 400 journal and conference articles, and a dozen research monographs in control theory and applications, unmanned systems and financial market modeling, published by Springer in New York and London. He has served on the editorial boards of several international journals, including IEEE Transactions on Automatic Control and Automatica, and currently serves as Editor-in-Chief of Unmanned Systems. Dr. Chen has received a number of research awards nationally and internationally. His research team has actively participated in international UAV competitions and won many championships.


Derong Liu(刘德荣),  Guangdong University of Technology

Title: Reinforcement Learning for Optimal Control

Abstract: Reinforcement learning (RL) is one of the most important branches of artificial intelligence. Researchers have been using RL techniques in modern control theory, and self-learning control methodologies are a good representative of such efforts. RL has recently become a major force in the machine learning field. On the other hand, adaptive dynamic programming (ADP) has now become popular in the control community. Both RL and ADP have roots in dynamic programming, and in many ways they are equivalent. Major breakthroughs of ADPRL for optimal control were achieved around 2006, when iterative ADP approaches were introduced. The optimal control of nonlinear systems requires solving the nonlinear Bellman equation instead of the Riccati equation as in the linear case. The discrete-time Bellman equation is more difficult to work with than the Riccati equation because it involves solving nonlinear partial difference equations. Though dynamic programming has been a useful computational technique for solving optimal control problems, it is often computationally untenable to run it to obtain the optimal solution, due to the backward numerical process required for its solution, i.e., the well-known "curse of dimensionality". Self-learning optimal control based on ADPRL provides efficient tools for tackling the following two problems. (1) The nonlinear Bellman equation is solved using iterative ADP approaches, which are shown to converge. (2) Neural networks are employed for function approximation in order to obtain a forward numerical process. Some new developments in ADPRL for optimal control will be summarized.
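
A minimal illustration of the iterative ADP (value-iteration) scheme described above: starting from V_0 = 0, the value function is updated through V_{i+1}(x) = min_u [ U(x, u) + V_i(f(x, u)) ], which proceeds as a forward numerical process rather than the backward recursion of classical dynamic programming. The scalar plant, stage cost, and grids below are hypothetical, and the grid interpolation merely stands in for the neural-network function approximator mentioned in the abstract.

```python
import numpy as np

# Hypothetical scalar nonlinear plant and quadratic stage cost (illustration only).
f = lambda x, u: 0.8 * np.sin(x) + u          # x_{k+1} = f(x_k, u_k)
U = lambda x, u: x**2 + u**2                  # stage cost U(x_k, u_k)

xs = np.linspace(-2.0, 2.0, 201)              # state grid (stand-in for a critic network)
us = np.linspace(-1.5, 1.5, 61)               # candidate control actions
V = np.zeros_like(xs)                         # V_0 = 0, the usual value-iteration initialization

# Iterative ADP / value iteration: V_{i+1}(x) = min_u [ U(x, u) + V_i(f(x, u)) ]
for i in range(1000):
    # Evaluate V_i at the successor states by interpolation; in ADPRL a neural network
    # would approximate this step, keeping the computation a forward process.
    Q = np.array([U(xs, u) + np.interp(f(xs, u), xs, V) for u in us])
    V_new = Q.min(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-6:      # the value-function iterates converge
        break
    V = V_new

policy = us[Q.argmin(axis=0)]                 # greedy control law recovered from the converged iterate
print("iterations:", i, " V(0) =", V[len(xs) // 2])
```

The same iteration with neural networks in place of the grid is what allows ADPRL to reach higher-dimensional state spaces, which is exactly where the "curse of dimensionality" makes the classical backward recursion untenable.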

Bio: Derong Liu received the PhD degree in electrical engineering from the University of Notre Dame in 1994. He became a Full Professor of Electrical and Computer Engineering and of Computer Science at the University of Illinois at Chicago in 2006. He was selected for the “100 Talents Program” by the Chinese Academy of Sciences in 2008, and he served as the Associate Director of the State Key Laboratory of Management and Control for Complex Systems at the Institute of Automation from 2010 to 2015. He has published 19 books. He is the Editor-in-Chief of Artificial Intelligence Review (Springer). He was the Editor-in-Chief of the IEEE Transactions on Neural Networks and Learning Systems from 2010 to 2015. He is a Fellow of the IEEE, a Fellow of the International Neural Network Society, and a Fellow of the International Association for Pattern Recognition.
