Maximum Entropy RL (Provably) Solves Some Robust RL Problems


By Ben Eysenbach

Almost all real-world applications of reinforcement learning involve some degree of shift between the training environment and the testing environment. However, prior work has observed that even small shifts in the environment cause most RL algorithms to perform markedly worse. As we aim to scale reinforcement learning algorithms and apply them in the real world, it is increasingly important to learn policies that are robust to changes in the environment.


Robust reinforcement learning maximizes reward on an adversarially-chosen environment.

Broadly, prior approaches to handling distribution shift in RL aim to maximize performance in either the average case or the worst case. The first set of approaches, such as domain randomization, train a policy on a distribution of environments and optimize the average performance of the policy across those environments. While these methods have been successfully applied to a number of domains (e.g., self-driving cars, robot locomotion and manipulation), their success rests critically on the design of the distribution of environments. Moreover, policies that do well on average are not guaranteed to get high reward on every environment: the policy that gets the highest reward on average might get very low reward on a small fraction of environments.

The second set of approaches, typically referred to as robust RL, focuses on the worst case. The goal is to find a policy that gets high reward on every environment within some set. Robust RL can equivalently be viewed as a two-player game between the policy and an environment adversary: the policy tries to get high reward, while the adversary tweaks the dynamics and reward function of the environment so that the policy gets lower reward. One important property of the robust approach is that, unlike domain randomization, it is invariant to the ratio of easy and hard tasks. Whereas robust RL always evaluates a policy on the most challenging tasks, domain randomization will rate a policy as better if it is evaluated on a distribution of environments containing easier tasks.
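To make the contrast concrete, the two objectives can be written schematically as follows, where $J(\pi; p, r)$ denotes the return of policy $\pi$ under dynamics $p$ and reward $r$, $\mathcal{D}$ is a designer-chosen distribution over environments, and $\mathcal{P}$ is an uncertainty set (this notation is our shorthand for exposition, not taken verbatim from the paper):

$$\max_{\pi} \; \mathbb{E}_{p \sim \mathcal{D}} \big[ J(\pi; p, r) \big] \qquad \text{(domain randomization: average case)}$$

$$\max_{\pi} \; \min_{p \in \mathcal{P}} \; J(\pi; p, r) \qquad \text{(robust RL: worst case)}$$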

Prior work has proposed a number of algorithms for solving robust RL problems. Typically, these algorithms all follow the same recipe: take an existing RL algorithm and add some extra machinery on top to make it robust. For example, robust value iteration uses Q-learning as the base RL algorithm and modifies the Bellman update by solving a convex optimization problem in the inner loop of each Bellman backup (a simplified sketch of this kind of backup is shown below). Similarly, Pinto ‘17 uses TRPO as the base RL algorithm and periodically updates the environment based on the behavior of the current policy. These prior approaches are often difficult to implement and, even once implemented correctly, they require tuning many additional hyperparameters. Might there be a simpler approach, one that does not require extra hyperparameters and extra lines of code to debug?
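For a flavor of that extra machinery, here is a minimal tabular sketch of a robust Bellman backup. To keep it short, the adversary's inner convex program is replaced by a minimum over a small finite set of candidate transition models, so this is an illustrative simplification rather than the algorithm from the cited work; all names are placeholders.

```python
import numpy as np

def robust_bellman_backup(Q, rewards, candidate_dynamics, gamma=0.99):
    """One sweep of a (simplified) robust Bellman backup.

    Q:                  array [S, A] of current Q-values.
    rewards:            array [S, A] of rewards for each state-action pair.
    candidate_dynamics: list of arrays [S, A, S]; a finite stand-in for the
                        adversary's uncertainty set over transition models.
    """
    V = Q.max(axis=1)  # greedy state values, shape [S]
    # Expected next-state value under each candidate transition model.
    next_values = np.stack([P @ V for P in candidate_dynamics])  # [K, S, A]
    # The adversary picks whichever dynamics make the policy look worst.
    worst_case_value = next_values.min(axis=0)                   # [S, A]
    return rewards + gamma * worst_case_value
```

Each backup therefore pays for an extra minimization over the uncertainty set, on top of the usual Q-learning update.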

To answer this question, we are going to focus on a class of RL algorithms known as maximum entropy RL, or MaxEnt RL for short (Todorov ‘06, Rawlik ‘08, Ziebart ‘10). MaxEnt RL is a slight variant of standard RL that aims to learn a policy that gets high reward while acting as randomly as possible; formally, MaxEnt RL maximizes the entropy of the policy in addition to the reward. Some prior work has observed empirically that MaxEnt RL algorithms appear to be robust to some disturbances in the environment. To the best of our knowledge, however, no prior work has actually proven that MaxEnt RL is robust to environmental disturbances.
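Concretely, the entropy term shows up as a bonus in the policy update. Below is a minimal sketch of a MaxEnt-style actor loss, roughly in the spirit of soft actor-critic; `policy`, `q_function`, and the temperature `alpha` are placeholder names for this illustration, not code from the paper, and the policy is assumed to output a factorized Gaussian over actions.

```python
import torch

def maxent_actor_loss(policy, q_function, states, alpha=0.1):
    """Sketch of a MaxEnt RL policy update.

    The policy is pushed toward actions with high Q-value while keeping its
    action distribution as random (high-entropy) as possible.
    """
    dist = policy(states)                          # a torch.distributions object
    actions = dist.rsample()                       # reparameterized sample so gradients flow
    log_prob = dist.log_prob(actions).sum(dim=-1)  # log pi(a|s), summed over action dims
    # Minimizing this loss maximizes E[Q(s, a)] + alpha * H[pi(.|s)].
    return (alpha * log_prob - q_function(states, actions)).mean()

# Toy usage: a state-independent Gaussian "policy" and a dummy critic.
policy = lambda s: torch.distributions.Normal(torch.zeros(s.shape[0], 2),
                                              torch.ones(s.shape[0], 2))
q_function = lambda s, a: -(a ** 2).sum(dim=-1)    # dummy critic preferring small actions
loss = maxent_actor_loss(policy, q_function, torch.randn(8, 4))
```

Setting `alpha = 0` recovers a standard (non-MaxEnt) policy update.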

In a recent paper, we prove that every MaxEnt RL problem corresponds to maximizing a lower bound on a robust RL problem. Thus, when you run MaxEnt RL, you are implicitly solving a robust RL problem. Our analysis provides a theoretically-justified explanation for the empirical robustness of MaxEnt RL, and proves that MaxEnt RL is itself a robust RL algorithm. In the rest of this post, we provide some intuition for why MaxEnt RL should be robust and what sorts of perturbations MaxEnt RL is robust to. We also present some experiments demonstrating the robustness of MaxEnt RL.

So, why would we expect MaxEnt RL to be robust to disturbances in the environment? Recall that MaxEnt RL trains policies not only to maximize reward, but to do so while acting as randomly as possible. In essence, the policy itself injects as much noise as possible into the environment, so it gets to “practice” recovering from disturbances. Thus, if the change in dynamics looks like just a disturbance in the original environment, our policy has already been trained on such data. Another way of viewing MaxEnt RL is as learning many different ways of solving the task (Kappen ‘05). For example, consider the task shown in the videos below: we want the robot to push the white object to the green region. The top two videos show that standard RL always takes the shortest path to the goal, whereas MaxEnt RL takes many different paths to the goal. Now, let’s imagine that we add a new obstacle (red blocks) that was not included during training. As shown in the videos in the bottom row, the policy learned by standard RL almost always collides with the obstacle, rarely reaching the goal. In contrast, the MaxEnt RL policy often chooses routes around the obstacle, continuing to reach the goal on a large fraction of trials.

Standard RL

MaxEnt RL

Trained and evaluated without the obstacle:

Trained without the obstacle, but evaluated with the obstacle:

We now formally describe the technical results from the paper. The aim here is not to provide a full proof (see the Appendix of the paper for that), but instead to build some intuition for what the technical results say. Our main result is that, when you apply MaxEnt RL with some reward function and some dynamics, you are actually maximizing a lower bound on the robust RL objective. To explain this result, we must first define the MaxEnt RL objective: $J_{\text{MaxEnt}}(\pi; p, r)$ is the entropy-regularized cumulative return of policy $\pi$ when evaluated using dynamics $p(s' \mid s, a)$ and reward function $r(s, a)$. While we train the policy using one dynamics $p$, we evaluate the policy on a different dynamics, $\tilde{p}(s' \mid s, a)$, chosen by the adversary. We can now formally state our main result as follows:
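Schematically, writing the entropy-regularized return in its generic form (the paper's exact weighting conventions, constants, and construction of the adversary's set may differ; this is a paraphrase rather than the precise theorem statement), the objective and the bound look like:

$$J_{\text{MaxEnt}}(\pi; p, r) \;=\; \mathbb{E}_{\pi,\, p}\!\left[\sum_{t=1}^{T} r(s_t, a_t) + \alpha\, \mathcal{H}\big[\pi(\cdot \mid s_t)\big]\right]$$

$$\min_{\tilde{p} \in \tilde{\mathcal{P}}}\; J(\pi; \tilde{p}, r) \;\ge\; F\Big(J_{\text{MaxEnt}}(\pi; p, r)\Big),$$

where $F(\cdot)$ is a monotonically increasing function built from the $\exp(\cdots)$ term and the additive constant $\log T$ discussed in the next paragraph.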

The left-hand side is the robust RL objective. It says that the adversary gets to choose whichever dynamics function $\tilde{p}(s' \mid s, a)$ makes our policy perform as poorly as possible, subject to some constraints (as specified by the set $\tilde{\mathcal{P}}$). On the right-hand side we have the MaxEnt RL objective (note that $\log T$ is a constant, and the function $\exp(\cdots)$ is always increasing). Thus, this result says that a policy with a high entropy-regularized reward (right-hand side) is guaranteed to also get high reward when evaluated on adversarially-chosen dynamics.

The most important part of this equation is the set $\tilde{\mathcal{P}}$ of dynamics that the adversary can choose from. Our analysis describes precisely how this set is constructed and shows that, if we want a policy to be robust to a larger set of disturbances, all we have to do is increase the weight on the entropy term and decrease the weight on the reward term. Intuitively, the adversary must choose dynamics that are “close” to the dynamics on which the policy was trained. For example, in the special case where the dynamics are linear-Gaussian, this set corresponds to all perturbations for which the original expected next state and the perturbed expected next state have a Euclidean distance of less than $\epsilon$.
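Written out, the linear-Gaussian special case described above corresponds to an uncertainty set of roughly the following form (our paraphrase of the statement above, not the paper's exact definition):

$$\tilde{\mathcal{P}} \;\approx\; \Big\{ \tilde{p} \;:\; \big\| \mathbb{E}_{\tilde{p}(s' \mid s, a)}[s'] - \mathbb{E}_{p(s' \mid s, a)}[s'] \big\|_2 \le \epsilon \;\; \text{for all } s, a \Big\}$$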

Our analysis predicts that MaxEnt RL should be robust to many types of disturbances. The first set of videos in this post showed that MaxEnt RL is robust to static obstacles. MaxEnt RL is also robust to dynamic perturbations introduced in the middle of an episode. To demonstrate this, we took the same robotic pushing task and knocked the puck out of place in the middle of the episode. The videos below show that the policy learned by MaxEnt RL handles these perturbations more robustly, as predicted by our analysis.

Standard RL

MaxEnt RL

The policy learned by MaxEnt RL is robust to dynamic perturbations of the puck (red frames).

Our theoretical results suggest that MaxEnt RL policies will remain robust even when the environment perturbations are chosen adversarially, i.e., optimized so that the agent does as poorly as possible. To demonstrate this capability, we trained both standard RL and MaxEnt RL on the peg insertion task shown below. During evaluation, we changed the position of the hole to try to make each policy fail. If we only moved the hole position a little bit ($\le$ 1 cm), both policies always solved the task. However, if we moved the hole position up to 2 cm, the policy learned by standard RL almost never succeeded in inserting the peg, while the MaxEnt RL policy succeeded in 95% of trials. This experiment validates our theoretical finding that MaxEnt RL really is robust to (bounded) adversarial disturbances in the environment.

Standard RL

MaxEnt RL

Evaluation on adversarial perturbations

MaxEnt RL is robust to adversarial perturbations of the hole (where the robot inserts the peg).

In summary, our paper shows that a commonly-used type of RL algorithm, MaxEnt RL, is already solving a robust RL problem. We do not claim that MaxEnt RL will outperform purpose-designed robust RL algorithms. However, the striking simplicity of MaxEnt RL compared with other robust RL algorithms suggests that it may be an appealing alternative for practitioners hoping to equip their RL policies with an ounce of robustness.

Acknowledgements
Thanks to Gokul Swamy, Diba Ghosh, Colin Li, and Sergey Levine for feedback on drafts of this post, and to Chloe Hsu and Daniel Seita for help with the blog.

This post is based on the following paper:
