Item Information


Title: Approximate Policy-Based Accelerated Deep Reinforcement Learning
Authors: Wang, Xuesong
Participants: Gu, Yang
Cheng, Yuhu
Liu, Aiping
Chen, C. L. Philip
Issue Date: 2019
Publisher: IEEE Xplore
Series/Report no.: IEEE Transactions on Neural Networks and Learning Systems, 2019, 11 pp.
Abstract: In recent years, deep reinforcement learning (DRL) algorithms have developed rapidly and have achieved excellent performance in many challenging tasks. However, due to the complexity of the network structure and the large number of network parameters, training a deep network is time-consuming, and consequently the learning efficiency of DRL is limited. In this paper, aiming to speed up the learning process of a DRL agent, we propose a novel approximate policy-based accelerated (APA) algorithm from the viewpoint of the error analysis of approximate policy iteration reinforcement learning algorithms. The proposed APA is proven to be convergent even with a more aggressive learning rate, giving the DRL agent a faster learning speed. Furthermore, to combine the accelerated algorithm with deep Q-network (DQN), Double DQN, and deep deterministic policy gradient (DDPG), we propose three novel DRL algorithms: APA-DQN, APA-Double DQN, and APA-DDPG, which demonstrate the adaptability of the accelerated algorithm to DRL algorithms. We have tested the proposed algorithms on both discrete-action and continuous-action tasks. Their superior performance demonstrates their great potential in practical applications.
URI: http://tailieuso.tlu.edu.vn/handle/DHTL/9903
Source: https://doi.org/10.1109/TNNLS.2019.2927227
Appears in Collections: Scientific research support materials (Tài liệu hỗ trợ nghiên cứu khoa học)
Files in This Item:
  • D9903.pdf
      Restricted Access
    • Size: 2.28 MB
    • Format: Adobe PDF
  • Readers who are staff, lecturers, or students of Thuyloi University (Trường Đại học Thuỷ Lợi) must log in to view online or download.



    Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.