Model-based reinforcement learning (MBRL) is recognized as having the potential to be significantly more sample efficient than model-free RL. How an accurate model can be developed automatically and efficiently from raw sensory inputs (such as images), especially for complex environments and tasks, is a challenging problem that hinders the broad application of MBRL in the real world. In this work, we propose a sensing-aware model-based reinforcement learning system called SAM-RL. Leveraging differentiable physics-based simulation and rendering, SAM-RL automatically updates the model by comparing rendered images with real raw images and produces the policy efficiently. With the sensing-aware learning pipeline, SAM-RL allows a robot to select an informative viewpoint to monitor the task process. We apply our framework to real-world experiments for accomplishing three manipulation tasks: robotic assembly, tool manipulation, and deformable object manipulation. We demonstrate the effectiveness of SAM-RL via extensive experiments. Supplemental materials and videos are available on our project webpage at https://sites.google.com/view/sam-rl.
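The core idea of updating a model by comparing rendered images with real observations can be illustrated with a toy example. The sketch below is not the paper's method (SAM-RL uses a full differentiable physics simulator and renderer); it is a minimal, self-contained stand-in where the "scene" is a single model parameter (an object position), the "renderer" produces a 1-D Gaussian blob image, and gradient descent on the rendered-vs-observed image loss recovers the parameter. All function names and values here are hypothetical.

```python
import math

def render(pos, width=32, sigma=3.0):
    """Toy differentiable 'renderer': a 1-D image of a Gaussian blob at `pos`."""
    return [math.exp(-((x - pos) ** 2) / (2 * sigma ** 2)) for x in range(width)]

def loss_and_grad(pos, target, sigma=3.0):
    """L2 image loss between rendered and observed images, and its analytic
    gradient w.r.t. the model parameter `pos` (chain rule through `render`)."""
    img = render(pos, sigma=sigma)
    loss = sum((r - t) ** 2 for r, t in zip(img, target))
    # d/dpos exp(-(x-pos)^2 / (2*sigma^2)) = r * (x - pos) / sigma^2
    grad = sum(
        2.0 * (r - t) * r * (x - pos) / sigma ** 2
        for x, (r, t) in enumerate(zip(img, target))
    )
    return loss, grad

def fit_model(observed, init_pos=10.0, lr=1.0, steps=500):
    """Update the model parameter by gradient descent on the image loss."""
    pos = init_pos
    for _ in range(steps):
        _, g = loss_and_grad(pos, observed)
        pos -= lr * g
    return pos

# The 'real' image comes from an object at position 14; the model starts at 10
# and is corrected purely by matching rendered pixels to observed pixels.
observed = render(14.0)
estimated = fit_model(observed)
```

In SAM-RL the same comparison is made end-to-end through a differentiable physics simulation and a differentiable renderer, so gradients of the image discrepancy flow back into the full environment model rather than a single scalar parameter.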
@inproceedings{SAM-RL,
  title  = {SAM-RL: Sensing-Aware Model-Based Reinforcement Learning via Differentiable Physics-Based Simulation and Rendering},
  author = {Lv, Jun and Feng, Yunhai and Zhang, Cheng and Zhao, Shuang and Shao, Lin and Lu, Cewu},
  year   = {2022},
  tags   = {manipulation, differentiable-rendering},
  sida   = {Updates the environment model via differentiable rendering to support model-based reinforcement learning.},
}