Title: Real-time-implementable relative cost min-max optimal control via a dynamic-programming-like method
Abstract: Classical optimal control methods require complete information about the dynamic process, which means that for a system with unpredictable but measurable disturbances, the optimal control and minimum cost can only be calculated after the process is finished. In some applications, the absolute value of the cost may be of less interest than the relative cost, defined as the ratio between the actual cost and the a posteriori optimal cost. This paper proposes a relative cost min-max (RCM) optimal control method for a general discrete-time nonlinear dynamic system whose disturbance sequence is assumed to belong to a finite admissible set. The RCM optimal control policy uses only the known finite admissible set and the current and past information about the disturbance, and it guarantees the minimum relative cost in the worst case. Because the relative cost is not an accumulative value like the conventional cost, the Principle of Optimality cannot be applied directly to this problem. A theorem similar to the Principle of Optimality is proved, and based on this theorem a dynamic-programming-like backward induction method is presented to solve the RCM optimal control problem. An example of a nonlinear system is given to illustrate the proposed method.
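To make the relative-cost notion concrete, the sketch below evaluates the worst-case relative cost of a simple causal policy on a toy instance. The dynamics, cost functions, control set, and policy here are hypothetical illustrations, not the paper's example: for each disturbance sequence drawn from a finite admissible set, the a posteriori optimal cost is computed by brute force (as if the whole sequence were known in advance), and the relative cost is the ratio of the causal policy's actual cost to that benchmark. The paper's RCM method would instead construct, via backward induction, the policy minimizing this worst-case ratio.

```python
from itertools import product

# Hypothetical toy instance (not from the paper): scalar dynamics,
# quadratic costs, finite control and disturbance alphabets.
U = [-1.0, 0.0, 1.0]   # admissible control values
W = [-0.5, 0.5]        # finite admissible disturbance set
N = 3                  # horizon length
x0 = 1.0               # initial state

def f(x, u, w):
    """Dynamics: x_{k+1} = x_k + u_k + w_k."""
    return x + u + w

def stage_cost(x, u):
    return x * x + u * u

def terminal_cost(x):
    return x * x

def traj_cost(x, us, ws):
    """Total cost of control sequence us under disturbance sequence ws."""
    c = 0.0
    for u, w in zip(us, ws):
        c += stage_cost(x, u)
        x = f(x, u, w)
    return c + terminal_cost(x)

def posteriori_optimal_cost(ws):
    """Minimum cost achievable when the whole disturbance sequence is known."""
    return min(traj_cost(x0, us, ws) for us in product(U, repeat=N))

def causal_policy(x, k):
    """A simple (hypothetical) causal policy: admissible u closest to -x."""
    return min(U, key=lambda u: abs(u + x))

def policy_cost(ws):
    """Cost of the causal policy, which sees only current/past information."""
    x, c = x0, 0.0
    for k, w in enumerate(ws):
        u = causal_policy(x, k)
        c += stage_cost(x, u)
        x = f(x, u, w)
    return c + terminal_cost(x)

# Worst-case relative cost: max over admissible disturbance sequences of
# (actual cost) / (a posteriori optimal cost). Always >= 1 by definition.
worst = max(policy_cost(ws) / posteriori_optimal_cost(ws)
            for ws in product(W, repeat=N))
print(f"worst-case relative cost of the causal policy: {worst:.4f}")
```

Because the a posteriori benchmark can never exceed the causal policy's cost, the resulting ratio is at least 1; the RCM-optimal policy is the causal policy whose worst-case ratio is smallest.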
Publication Year: 2016
Publication Date: 2016-07-01
Language: en
Type: article
Indexed In: Crossref