Learning PARL on the AI Studio Galaxy Community (星河社区): training an AI with reinforcement learning
PARL is a high-performance, flexible reinforcement learning framework. PARL on GitHub: https://github.com/PaddlePaddle/PARL
Since PARL is PaddlePaddle's own reinforcement learning toolkit, this hands-on session runs on the PaddlePaddle AI Studio Galaxy Community (飞桨 AI Studio 星河社区), an AI learning and training platform.
On AI Studio you can edit and run a project in a Notebook, which is very convenient. That is why the commands below are prefixed with ! or %: these are Notebook conventions. ! runs a one-off shell command (most commands below use it), while % magics affect the interpreter's own environment, for example %cd to change the working directory.
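The distinction matters most for cd: a !cd runs in a throwaway subshell, so the directory change dies with it, while %cd changes the notebook process itself. A rough simulation in plain Python of what happens under the hood (subprocess stands in for !, os.chdir for %cd):

```python
import os
import subprocess

# A "!cd" in a notebook runs in a throwaway subshell; its cd dies with it.
before = os.getcwd()
subprocess.run("cd /", shell=True, check=True)
unchanged = (os.getcwd() == before)

# "%cd" is equivalent to changing the interpreter's own working directory.
os.chdir("/")
moved = (os.path.realpath(os.getcwd()) == os.path.realpath("/"))

print(unchanged, moved)  # True True
```

This is why the install step below chains the commands as `!cd PARL && pip install .` in a single shell, instead of running !cd on its own line.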
Download and install PARL
!git clone https://github.com/PaddlePaddle/PARL.git
# !git clone https://gitee.com/PaddlePaddle/PARL.git
!cd PARL && pip install .
Download, unzip, and install
If the GitHub clone fails for network reasons, download the ZIP package first, upload it to AIStudio, then unzip and install.
Download the repository ZIP from GitHub:
- Open the repository page https://github.com/PaddlePaddle/PARL and click the green "Code" button on the right.
- Choose "Download ZIP" from the dropdown; the browser starts downloading the repository archive.
- When the download finishes, the ZIP is saved to your local default download directory.
Upload the ZIP to AIStudio:
- Log in to AIStudio and open the target project's workspace.
- Click "Upload file" and select the ZIP you just downloaded.
- After the upload completes, the ZIP appears in the workspace file list.
Unzip the package:
- In the AIStudio workspace, locate the uploaded ZIP file.
- Right-click it and choose "Extract", or run the extraction from a terminal, for example:
unzip your_repository.zip -d your_target_directory
- After extraction, all the repository's files appear in the specified directory.
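The same extraction can be done from Python's standard library, which avoids depending on the unzip binary. A minimal sketch (the paths in the usage comment are the tutorial's, not fixed by any API):

```python
import os
import zipfile

def extract_zip(zip_path: str, target_dir: str):
    """Extract zip_path into target_dir and return the member names."""
    os.makedirs(target_dir, exist_ok=True)
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(target_dir)
        return zf.namelist()

# On AIStudio, matching the layout used in this tutorial:
# extract_zip("/home/aistudio/work/PARL-develop.zip", "/home/aistudio/work")
```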
Unzip
In this project, unzip PARL into the ~/work/PARL directory:
!unzip ~/work/PARL-develop.zip -d ~/work/
# Rename the directory
!mv ~/work/PARL-develop ~/work/PARL
Build and install
!cd ~/work/PARL && pip install .
Train with the demo example
# Train model
%cd ~/work/PARL/examples/QuickStart/
!python train.py
You can see it trains to a high score very quickly:
/home/aistudio/work/PARL/examples/QuickStart
[05-14 20:32:38 MainThread @logger.py:242] Argv: train.py
/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/paddle/utils/cpp_extension/extension_utils.py:711: UserWarning: No ccache found. Please be aware that recompiling all source files may be required. You can download and install ccache from: https://github.com/ccache/ccache/blob/master/doc/INSTALL.md
  warnings.warn(warning_message)
[05-14 20:32:41 MainThread @train.py:84] obs_dim 4, act_dim 2
W0514 20:32:41.637965 47822 gpu_resources.cc:119] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 12.0, Runtime API Version: 11.8
W0514 20:32:41.639025 47822 gpu_resources.cc:164] device: 0, cuDNN Version: 8.9.
W0514 20:32:44.744032 47822 gpu_resources.cc:306] WARNING: device: 0. The installed Paddle is compiled with CUDNN 8.9, but CUDNN version in your machine is 8.9, which may cause serious incompatible bug. Please recompile or reinstall Paddle with compatible CUDNN version.
/home/aistudio/external-libraries/lib/python3.10/site-packages/gym/utils/passive_env_checker.py:233: DeprecationWarning: `np.bool8` is a deprecated alias for `np.bool_`. (Deprecated NumPy 1.24)
  if not isinstance(terminated, (bool, np.bool8)):
[05-14 20:32:45 MainThread @train.py:100] Episode 0, Reward Sum 25.0.
[05-14 20:32:45 MainThread @train.py:100] Episode 10, Reward Sum 20.0.
[05-14 20:32:45 MainThread @train.py:100] Episode 20, Reward Sum 10.0.
[05-14 20:32:45 MainThread @train.py:100] Episode 30, Reward Sum 16.0.
[05-14 20:32:45 MainThread @train.py:100] Episode 40, Reward Sum 15.0.
[05-14 20:32:45 MainThread @train.py:100] Episode 50, Reward Sum 11.0.
[05-14 20:32:45 MainThread @train.py:100] Episode 60, Reward Sum 38.0.
[05-14 20:32:45 MainThread @train.py:100] Episode 70, Reward Sum 45.0.
[05-14 20:32:46 MainThread @train.py:100] Episode 80, Reward Sum 12.0.
[05-14 20:32:46 MainThread @train.py:100] Episode 90, Reward Sum 14.0.
[05-14 20:32:46 MainThread @train.py:110] Test reward: 31.8
[05-14 20:32:46 MainThread @train.py:100] Episode 100, Reward Sum 13.0.
[05-14 20:32:46 MainThread @train.py:100] Episode 110, Reward Sum 18.0.
[05-14 20:32:46 MainThread @train.py:100] Episode 120, Reward Sum 14.0.
[05-14 20:32:46 MainThread @train.py:100] Episode 130, Reward Sum 14.0.
[05-14 20:32:46 MainThread @train.py:100] Episode 140, Reward Sum 11.0.
[05-14 20:32:46 MainThread @train.py:100] Episode 150, Reward Sum 48.0.
[05-14 20:32:47 MainThread @train.py:100] Episode 160, Reward Sum 55.0.
[05-14 20:32:47 MainThread @train.py:100] Episode 170, Reward Sum 25.0.
[05-14 20:32:47 MainThread @train.py:100] Episode 180, Reward Sum 15.0.
[05-14 20:32:47 MainThread @train.py:100] Episode 190, Reward Sum 18.0.
[05-14 20:32:47 MainThread @train.py:110] Test reward: 44.8
[05-14 20:32:47 MainThread @train.py:100] Episode 200, Reward Sum 22.0.
[05-14 20:32:47 MainThread @train.py:100] Episode 210, Reward Sum 33.0.
[05-14 20:32:48 MainThread @train.py:100] Episode 220, Reward Sum 53.0.
[05-14 20:32:48 MainThread @train.py:100] Episode 230, Reward Sum 17.0.
[05-14 20:32:48 MainThread @train.py:100] Episode 240, Reward Sum 22.0.
[05-14 20:32:48 MainThread @train.py:100] Episode 250, Reward Sum 25.0.
[05-14 20:32:48 MainThread @train.py:100] Episode 260, Reward Sum 33.0.
[05-14 20:32:48 MainThread @train.py:100] Episode 270, Reward Sum 29.0.
[05-14 20:32:49 MainThread @train.py:100] Episode 280, Reward Sum 36.0.
[05-14 20:32:49 MainThread @train.py:100] Episode 290, Reward Sum 24.0.
[05-14 20:32:49 MainThread @train.py:110] Test reward: 100.0
[05-14 20:32:49 MainThread @train.py:100] Episode 300, Reward Sum 27.0.
[05-14 20:32:49 MainThread @train.py:100] Episode 310, Reward Sum 30.0.
[05-14 20:32:49 MainThread @train.py:100] Episode 320, Reward Sum 44.0.
[05-14 20:32:50 MainThread @train.py:100] Episode 330, Reward Sum 61.0.
[05-14 20:32:50 MainThread @train.py:100] Episode 340, Reward Sum 67.0.
[05-14 20:32:50 MainThread @train.py:100] Episode 350, Reward Sum 64.0.
[05-14 20:32:50 MainThread @train.py:100] Episode 360, Reward Sum 20.0.
[05-14 20:32:50 MainThread @train.py:100] Episode 370, Reward Sum 49.0.
[05-14 20:32:51 MainThread @train.py:100] Episode 380, Reward Sum 34.0.
[05-14 20:32:51 MainThread @train.py:100] Episode 390, Reward Sum 107.0.
[05-14 20:32:51 MainThread @train.py:110] Test reward: 246.2
[05-14 20:32:51 MainThread @train.py:100] Episode 400, Reward Sum 57.0.
[05-14 20:32:52 MainThread @train.py:100] Episode 410, Reward Sum 58.0.
[05-14 20:32:52 MainThread @train.py:100] Episode 420, Reward Sum 21.0.
[05-14 20:32:52 MainThread @train.py:100] Episode 430, Reward Sum 55.0.
[05-14 20:32:52 MainThread @train.py:100] Episode 440, Reward Sum 49.0.
[05-14 20:32:53 MainThread @train.py:100] Episode 450, Reward Sum 60.0.
[05-14 20:32:53 MainThread @train.py:100] Episode 460, Reward Sum 92.0.
[05-14 20:32:53 MainThread @train.py:100] Episode 470, Reward Sum 188.0.
[05-14 20:32:53 MainThread @train.py:100] Episode 480, Reward Sum 89.0.
[05-14 20:32:53 MainThread @train.py:100] Episode 490, Reward Sum 33.0.
[05-14 20:32:54 MainThread @train.py:110] Test reward: 141.8
[05-14 20:32:54 MainThread @train.py:100] Episode 500, Reward Sum 67.0.
[05-14 20:32:54 MainThread @train.py:100] Episode 510, Reward Sum 34.0.
[05-14 20:32:54 MainThread @train.py:100] Episode 520, Reward Sum 97.0.
[05-14 20:32:55 MainThread @train.py:100] Episode 530, Reward Sum 111.0.
[05-14 20:32:55 MainThread @train.py:100] Episode 540, Reward Sum 41.0.
[05-14 20:32:55 MainThread @train.py:100] Episode 550, Reward Sum 75.0.
[05-14 20:32:56 MainThread @train.py:100] Episode 560, Reward Sum 116.0.
[05-14 20:32:56 MainThread @train.py:100] Episode 570, Reward Sum 44.0.
[05-14 20:32:56 MainThread @train.py:100] Episode 580, Reward Sum 33.0.
[05-14 20:32:56 MainThread @train.py:100] Episode 590, Reward Sum 82.0.
[05-14 20:32:57 MainThread @train.py:110] Test reward: 127.8
[05-14 20:32:57 MainThread @train.py:100] Episode 600, Reward Sum 46.0.
[05-14 20:32:57 MainThread @train.py:100] Episode 610, Reward Sum 86.0.
[05-14 20:32:58 MainThread @train.py:100] Episode 620, Reward Sum 101.0.
[05-14 20:32:58 MainThread @train.py:100] Episode 630, Reward Sum 4.0.
[05-14 20:32:59 MainThread @train.py:100] Episode 640, Reward Sum 146.0.
[05-14 20:32:59 MainThread @train.py:100] Episode 650, Reward Sum 88.0.
[05-14 20:32:59 MainThread @train.py:100] Episode 660, Reward Sum 164.0.
[05-14 20:33:00 MainThread @train.py:100] Episode 670, Reward Sum 212.0.
[05-14 20:33:01 MainThread @train.py:100] Episode 680, Reward Sum 45.0.
[05-14 20:33:01 MainThread @train.py:100] Episode 690, Reward Sum 124.0.
[05-14 20:33:03 MainThread @train.py:110] Test reward: 500.0
[05-14 20:33:03 MainThread @train.py:100] Episode 700, Reward Sum 153.0.
[05-14 20:33:03 MainThread @train.py:100] Episode 710, Reward Sum 243.0.
[05-14 20:33:04 MainThread @train.py:100] Episode 720, Reward Sum 195.0.
[05-14 20:33:05 MainThread @train.py:100] Episode 730, Reward Sum 136.0.
[05-14 20:33:06 MainThread @train.py:100] Episode 740, Reward Sum 322.0.
[05-14 20:33:07 MainThread @train.py:100] Episode 750, Reward Sum 169.0.
[05-14 20:33:07 MainThread @train.py:100] Episode 760, Reward Sum 119.0.
[05-14 20:33:08 MainThread @train.py:100] Episode 770, Reward Sum 386.0.
[05-14 20:33:09 MainThread @train.py:100] Episode 780, Reward Sum 121.0.
[05-14 20:33:10 MainThread @train.py:100] Episode 790, Reward Sum 186.0.
[05-14 20:33:12 MainThread @train.py:110] Test reward: 500.0
Debugging
I hit the error AttributeError: module 'pyarrow' has no attribute 'default_serialization_context':
/home/aistudio/work/PARL/examples/QuickStart
[05-14 20:11:16 MainThread @logger.py:242] Argv: train.py
/opt/conda/envs/python35-paddle120-env/lib/python3.10/site-packages/paddle/utils/cpp_extension/extension_utils.py:711: UserWarning: No ccache found. Please be aware that recompiling all source files may be required. You can download and install ccache from: https://github.com/ccache/ccache/blob/master/doc/INSTALL.md
warnings.warn(warning_message)
[05-14 20:11:18 MainThread @__init__.py:37] WRN No deep learning framework was found, but it's ok for parallel computation.
Traceback (most recent call last):
File "/home/aistudio/work/PARL/examples/QuickStart/train.py", line 18, in <module>
import parl
File "/home/aistudio/external-libraries/lib/python3.10/site-packages/parl/__init__.py", line 54, in <module>
from parl.remote import remote_class, connect
File "/home/aistudio/external-libraries/lib/python3.10/site-packages/parl/remote/__init__.py", line 19, in <module>
from parl.remote.remote_decorator import *
File "/home/aistudio/external-libraries/lib/python3.10/site-packages/parl/remote/remote_decorator.py", line 20, in <module>
from parl.remote.remote_wrapper import RemoteWrapper
File "/home/aistudio/external-libraries/lib/python3.10/site-packages/parl/remote/remote_wrapper.py", line 22, in <module>
from parl.remote.communication import loads_argument, loads_return,\
File "/home/aistudio/external-libraries/lib/python3.10/site-packages/parl/remote/communication.py", line 38, in <module>
context = pyarrow.default_serialization_context()
AttributeError: module 'pyarrow' has no attribute 'default_serialization_context'
I swapped in several versions without fixing it.
It turns out the PyPI release of parl never claimed to support Python 3.10!
classifiers=[
    'Intended Audience :: Developers',
    'License :: OSI Approved :: Apache Software License',
    'Operating System :: OS Independent',
    'Programming Language :: Python :: 3.5',
    'Programming Language :: Python :: 3.6',
    'Programming Language :: Python :: 3.7',
    'Programming Language :: Python :: 3.8',
    'Programming Language :: Python :: 3.9',
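The classifiers stop at 3.9, while the AIStudio environment in the logs above runs Python 3.10. A small guard like this at the top of a notebook would have caught the mismatch early; the supported set below is read off those classifiers, not from any official PARL API:

```python
import sys

# (major, minor) versions declared in the PyPI release's classifiers.
SUPPORTED = {(3, 5), (3, 6), (3, 7), (3, 8), (3, 9)}

def pypi_parl_supports(version_info=sys.version_info) -> bool:
    """Return True if this interpreter matches a declared classifier."""
    return (version_info[0], version_info[1]) in SUPPORTED

print(pypi_parl_supports((3, 10, 0)))  # False: this env hits the pyarrow error
print(pypi_parl_supports((3, 8, 10)))  # True
```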
I nearly cried on the spot!
Give up!
No, don't give up! The culprit was an old version: at first I installed PARL from PyPI, and the source code came from the gitee mirror (which was somewhat out of date).
Installing and training directly from the GitHub source code solved the problem!