How to use PPO to train in psro_scenario #59

Open
donotbelieveit opened this issue Feb 22, 2023 · 1 comment · May be fixed by #60

Comments

@donotbelieveit

I cannot find an implementation of PPO in this project. From the docs I know the policy is compatible with Tianshou, but what about the trainer? How can I use PPO to train in psro_scenario? I would appreciate an answer to my question.

@KornbergFresnel
Member

@donotbelieveit PPO is not ready yet, as further tests are required, but you can follow our upcoming submission of malib.rl.ppo (coming soon). By the way, you can refer to the given training example (here) for using RL subroutines in PSRO. If you want to understand the mechanism of the RL trainer, please refer to this MARL example: examples/run_gym.py. Also, please feel free to open a PR if you have ideas for enriching our (MA)RL algorithm library under malib/rl. :)
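
For reference while malib.rl.ppo is in review, below is a minimal, library-agnostic sketch of the PSRO outer loop that shows where a PPO best-response trainer would slot in. All names in it (train_best_response_ppo, evaluate_payoffs, solve_meta_strategy) are hypothetical placeholders, not MALib or Tianshou APIs; see the linked examples above for the actual training entry points.

```python
# Minimal PSRO outer-loop sketch (hypothetical placeholder code, not the MALib API).
import numpy as np


def train_best_response_ppo(population, meta_strategy, iterations=100):
    """Train a new policy against the population's meta-strategy mixture.

    This is the step where an RL subroutine such as a PPO trainer would be
    invoked; here it is only a stub that returns a fresh policy object.
    """
    return object()  # placeholder for a trained PPO policy


def evaluate_payoffs(population_a, population_b):
    """Fill the empirical payoff matrix by simulating every policy pair."""
    # Stub: random payoffs stand in for rollout-based evaluation.
    return np.random.rand(len(population_a), len(population_b))


def solve_meta_strategy(payoff_matrix):
    """Compute a meta-strategy over the population.

    Uniform here for brevity; PSRO typically uses a Nash or alpha-rank solver.
    """
    n = payoff_matrix.shape[0]
    return np.full(n, 1.0 / n)


def psro_loop(num_generations=5):
    population = [object()]  # start from a single (e.g. random) policy
    for _ in range(num_generations):
        payoffs = evaluate_payoffs(population, population)
        meta_strategy = solve_meta_strategy(payoffs)
        # Best-response step: plug in PPO (or any RL trainer) here.
        population.append(train_best_response_ppo(population, meta_strategy))
    return population


if __name__ == "__main__":
    print(f"PSRO finished with {len(psro_loop())} policies in the population")
```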

@KornbergFresnel linked a pull request (#60) on Feb 28, 2023 that will close this issue.