arxiv:2410.03847

Enhancing Inverse Reinforcement Learning through Encoding Dynamic Information in Reward Shaping

Published on Oct 4, 2024

AI-generated summary

A novel Model-Enhanced AIRL framework incorporates dynamics information into reward shaping with theoretical guarantees for optimal policies in stochastic environments, demonstrating improved sample efficiency and performance across deterministic and stochastic benchmarks.

Abstract

In this paper, we address a limitation of the Adversarial Inverse Reinforcement Learning (AIRL) method in stochastic environments, where its theoretical guarantees no longer hold and its performance degrades. To address this issue, we propose a method that infuses dynamics information into the reward shaping, with a theoretical guarantee for the induced optimal policy in stochastic environments. Incorporating these model-enhanced rewards, we present a Model-Enhanced AIRL framework, which integrates transition model estimation directly into reward shaping. Furthermore, we provide a comprehensive theoretical analysis of the reward error bound and the performance difference bound for our method. Experimental results on MuJoCo benchmarks show that our method achieves superior performance in stochastic environments and competitive performance in deterministic environments, with significant improvements in sample efficiency compared to existing baselines.
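To make the reward-shaping idea concrete, here is a minimal, illustrative sketch. Standard AIRL shapes the learned reward as f(s, a) = g(s, a) + γ·h(s′) − h(s) using the single sampled next state s′; a model-enhanced variant, as the abstract describes, can instead take the expectation of h over next states predicted by a learned transition model. All names (`g`, `h`, `T_hat`) and shapes below are assumptions for illustration, not the paper's actual code.

```python
import numpy as np

GAMMA = 0.99  # discount factor

def reward_term(s, a, g):
    # g(s, a): state-action reward approximator, stubbed here as a
    # linear function with weight vector g over the concatenated input.
    return float(g @ np.concatenate([s, a]))

def shaping_term(s, h, T_hat, gamma=GAMMA):
    # Standard AIRL uses gamma * h(s') - h(s) with the sampled next state.
    # A model-enhanced variant replaces h(s') with its expectation under
    # a learned transition model T_hat, given as (next_state, prob) pairs:
    # E_{s' ~ T_hat(.|s,a)}[h(s')].
    expected_next_value = sum(p * h(s_next) for s_next, p in T_hat)
    return gamma * expected_next_value - h(s)

def model_enhanced_f(s, a, g, h, T_hat):
    # f(s, a) = g(s, a) + gamma * E[h(s')] - h(s)
    return reward_term(s, a, g) + shaping_term(s, h, T_hat)
```

Averaging over the model's predicted next states, rather than relying on one sampled transition, is what lets the shaping term remain well-defined when the environment dynamics are stochastic.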
