dqn_agent_nature.py source code

Language: Python
Project: stock_dqn · Author: wdy06

import numpy as np
from chainer import Variable, cuda
import chainer.functions as F


# Method of the DQN agent class, excerpted from dqn_agent_nature.py.
def forward(self, state, action, Reward, state_dash, episode_end):
    num_of_batch = state.shape[0]
    s = Variable(state)
    s_dash = Variable(state_dash)

    Q = self.model.Q_func(s, train=True)  # Q(s, *) from the online network

    # Generate target signals from the frozen target network
    tmp = self.model_target.Q_func(s_dash, train=self.targetFlag)  # Q(s', *)
    tmp = list(map(np.max, tmp.data.get()))  # max_a Q(s', a)
    max_Q_dash = np.asanyarray(tmp, dtype=np.float32)
    target = np.asanyarray(Q.data.get(), dtype=np.float32)

    for i in range(num_of_batch):  # range(), not the Python 2 xrange()
        if not episode_end[i][0]:
            # Non-terminal step: clipped reward plus discounted bootstrap value
            tmp_ = np.sign(Reward[i]) + self.gamma * max_Q_dash[i]
        else:
            # Terminal step: clipped reward only
            tmp_ = np.sign(Reward[i])
        action_index = self.action_to_index(action[i])
        target[i, action_index] = tmp_  # only the taken action's target changes

    # TD-error clipping to [-1, 1], as in the Nature DQN paper
    td = Variable(cuda.to_gpu(target, self.gpu_id)) - Q  # TD error
    td_tmp = td.data + 1000.0 * (abs(td.data) <= 1)  # avoid division by zero below
    td_clip = td * (abs(td.data) <= 1) + td / abs(td_tmp) * (abs(td.data) > 1)

    zero_val = Variable(cuda.to_gpu(
        np.zeros((self.replay_size, self.num_of_actions), dtype=np.float32),
        self.gpu_id))
    loss = F.mean_squared_error(td_clip, zero_val)
    return loss, Q
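
The three clipping lines are the least obvious part of the listing. Below is a minimal NumPy sketch of the same trick in isolation (the sample values are made up for illustration): inside [-1, 1] the TD error passes through unchanged, and outside that range td / abs(td_tmp) collapses to sign(td), so the error is clipped to ±1. The + 1000.0 term exists only to keep the denominator nonzero for entries that the second mask zeroes out anyway.

import numpy as np

td = np.array([0.3, -0.7, 2.5, -4.0], dtype=np.float32)  # illustrative values

# Where |td| <= 1 the denominator would equal |td| (possibly 0);
# adding 1000 there keeps it nonzero, and the mask discards those entries.
td_tmp = td + 1000.0 * (np.abs(td) <= 1)

# Pass-through inside [-1, 1]; sign(td) outside.
td_clip = td * (np.abs(td) <= 1) + td / np.abs(td_tmp) * (np.abs(td) > 1)

print(td_clip)  # [ 0.3 -0.7  1.  -1. ]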
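
For context, here is a hedged sketch of how a training step might drive this method; the minibatch shapes and the agent/optimizer names are assumptions, since the surrounding agent class is not shown on this page. Note that the batch size must equal self.replay_size, because zero_val is allocated with that shape.

import numpy as np

# Hypothetical minibatch sampled from replay memory (shapes are assumptions).
batch, state_dim = 32, 4          # batch must equal agent.replay_size
state       = np.random.rand(batch, state_dim).astype(np.float32)
state_dash  = np.random.rand(batch, state_dim).astype(np.float32)
action      = np.random.randint(0, 3, batch)
reward      = np.random.randn(batch).astype(np.float32)
episode_end = np.zeros((batch, 1), dtype=bool)  # forward reads episode_end[i][0]

# agent is an instance of the (unshown) DQN agent class; a typical step:
# loss, Q = agent.forward(state, action, reward, state_dash, episode_end)
# loss.backward()
# agent.optimizer.update()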