Learning-based Nonlinear Model Predictive Control Using Deterministic Actor-Critic with Gradient Q-learning Critic
Abstract
In this paper, we present an off-policy reinforcement learning (RL) method used to tune the optimal weights of a nonlinear model predictive control (NMPC) scheme. The objective is to find the policy that optimizes the closed-loop performance of a point stabilization with obstacle avoidance control task. The parameterized NMPC scheme serves as the policy approximator, and its parameters are updated via the compatible off-policy deterministic actor-critic with gradient Q-learning critic (COPDAC-GQ) algorithm. While effective, this algorithm incurs a heavy computational cost when combined with NMPC, as two optimal control problems have to be solved at each time instant. We therefore propose two methods to reduce the real-time computational cost of the algorithm. First, a neural network is used to learn the subsequent state-action features of the advantage function. Second, we use the information delivered by the NMPC scheme to approximate the subsequent state-action features in the critic. Either method removes the need for a secondary NMPC, significantly improving the training speed. The results show no difference between the original method and the proposed methods in terms of the learned policy and control performance, whereas the real-time computational burden is almost halved with the proposed methods.
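
For context, a minimal sketch of the standard COPDAC-GQ update rules is given below; the notation ($\mu_\theta$ for the NMPC-parameterized policy, $\phi$ for the compatible features, $w, v, u$ for the critic weights, and $\alpha_\theta, \alpha_w, \alpha_v, \alpha_u$ for step sizes) follows the deterministic policy gradient literature and is assumed here rather than taken from the paper.

```latex
% Sketch of the standard COPDAC-GQ updates (compatible deterministic
% actor-critic with a gradient Q-learning critic); notation assumed.
\begin{align*}
  \phi(s,a) &= \nabla_\theta \mu_\theta(s)\,\bigl(a - \mu_\theta(s)\bigr)
    && \text{compatible state-action features} \\
  Q^{w,v}(s,a) &= \underbrace{\phi(s,a)^{\top} w}_{A^{w}(s,a)}
                 + \underbrace{\phi(s)^{\top} v}_{V^{v}(s)}
    && \text{advantage + value critic} \\
  \delta_t &= r_t + \gamma\, Q^{w,v}\!\bigl(s_{t+1},\mu_\theta(s_{t+1})\bigr)
              - Q^{w,v}(s_t,a_t)
    && \text{TD error} \\
  \theta_{t+1} &= \theta_t + \alpha_\theta\, \nabla_\theta \mu_\theta(s_t)
                  \bigl(\nabla_\theta \mu_\theta(s_t)^{\top} w_t\bigr)
    && \text{actor (policy gradient) step} \\
  w_{t+1} &= w_t + \alpha_w \delta_t\, \phi(s_t,a_t)
             - \alpha_w \gamma\, \phi\!\bigl(s_{t+1},\mu_\theta(s_{t+1})\bigr)
               \bigl(\phi(s_t,a_t)^{\top} u_t\bigr)
    && \text{gradient Q-learning critic} \\
  v_{t+1} &= v_t + \alpha_v \delta_t\, \phi(s_t)
             - \alpha_v \gamma\, \phi(s_{t+1})
               \bigl(\phi(s_t,a_t)^{\top} u_t\bigr) \\
  u_{t+1} &= u_t + \alpha_u \bigl(\delta_t - \phi(s_t,a_t)^{\top} u_t\bigr)\,
             \phi(s_t,a_t)
\end{align*}
```

Every term involving $\mu_\theta(s_{t+1})$, i.e. the subsequent state-action features $\phi(s_{t+1},\mu_\theta(s_{t+1}))$, requires solving the parameterized NMPC a second time at the next state, which is the source of the per-step computational burden; the two proposed methods replace exactly this quantity, either with a learned neural-network approximation or with information already delivered by the NMPC scheme.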