Deadline for submission: Saturday, May 4
Feel free to look at the speaker titles and/or abstracts for inspiration: http://spirl.info/2019/program/
*Examples of challenge questions from the NeurIPS 2017 HRL Workshop (https://bit.ly/2IZCfq5)*

[Josh Achiam] Works like Feudal Networks and Option-Critic claim to beat non-hierarchical baselines in several settings. Anecdotally, wins for hierarchy seem hard to reproduce. Is the current research in deep HRL rigorous enough / reproducible enough to justify existing claims? If not, what steps forward do we have to take?
[Karol Hausman] You have done a lot of original work on policy gradients, actor-critic methods, and, in general, reinforcement learning in robotics. These days, we see many methods that achieve impressive results (although mostly in simulation) using different versions of these approaches with deep neural networks. Do you see any breakthroughs in the most recent deep RL methods that go beyond applying a better function approximator to already-known methods? If so, what are they? What do you think are the most exciting research directions that have opened up with the arrival of deep reinforcement learning methods?