MATLAB Answers

How to implement multi-agent RL training with custom MATLAB environment function?

29 views (last 30 days)
Hello everyone,
I have implemented one custom RL environment with MATLAB template environment class. I want to introduce multi-agents to the environment. I find all three examples provided for multi-agent RL are based on Simulink. My question is if it is at all possible to do the same with a MATLAB function? Or do I need to implement my custom RL environment in simulink to work with multi-agent RL?
Thanks.


Accepted Answer

Emmanouil Tzorakoleftherakis
Hello,
Currently, multi-agent training is only supported in Simulink. If you have an environment created in MATLAB, as a workaround you could copy the core parts, such as the reward and step functions, into a MATLAB Function block in Simulink.
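To illustrate the workaround above, here is a minimal sketch of how the step and reward logic from a MATLAB template environment might be repackaged as the body of a MATLAB Function block. All names (envStep, state, action) and the dynamics, reward, and termination expressions are placeholder assumptions, not from any RL Toolbox API:

```matlab
% Hypothetical MATLAB Function block body wrapping an environment's
% step and reward logic. Replace the placeholder expressions with the
% corresponding code from your custom environment class.
function [nextState, reward, isDone] = envStep(state, action)
    dt = 0.1;                            % assumed sample time
    nextState = state + dt * action;     % placeholder plant dynamics
    reward = -sum(nextState.^2);         % placeholder reward function
    isDone = any(abs(nextState) > 10);   % placeholder termination check
end
```

In Simulink you would typically feed `nextState` back through a Unit Delay or Memory block and wire the observation, reward, and is-done signals into one RL Agent block per agent.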

  3 Comments

laha on 23 Nov 2020
Thanks Emmanouil.
I was wondering if we could do something like this. Let's say my single agent has two actions [A; B]. Now, to make it sort of like a multi-agent setup (say, 3 agents), I write my actions as [A1, A2, A3; B1, B2, B3], i.e., passing them as a matrix of actions where A1, B1 are the actions of the first agent, and so on.
I know it's not true multi-agent; rather, it's like a single agent with a larger action space. Does it make sense? Will it work?
"If you have an environment created in MATLAB you could copy and paste the core parts like the reward and step function into a MATLAB Fcn block in Simulink" -- I am not comfortable with Simulink. Can you please share some similar examples/documentation on how to convert MATLAB functions to Simulink blocks?
Thanks.
Emmanouil Tzorakoleftherakis
Yes, that would likely be another way to set up the problem.
There are many examples in the documentation for the MATLAB Function block - maybe you can start with this one.
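The stacked-action workaround discussed above could be sketched as a step function for a single "wide" agent whose 2x3 action matrix holds the [A; B] pair for each of three agents. All names and the per-agent reward expression are illustrative assumptions, following the signature of the MATLAB custom environment template:

```matlab
% Hypothetical sketch: one agent controls all three agents' actions at
% once. Column k of the 2x3 action matrix is [A_k; B_k] for agent k.
function [nextObs, reward, isDone, info] = stackedStep(action, info)
    numAgents = size(action, 2);
    perAgentReward = zeros(1, numAgents);
    for k = 1:numAgents
        A = action(1, k);
        B = action(2, k);
        perAgentReward(k) = -(A^2 + B^2);  % placeholder per-agent cost
    end
    reward = sum(perAgentReward);  % single scalar reward for the one agent
    nextObs = action(:);           % placeholder observation vector
    isDone = false;                % placeholder termination flag
    % info carries environment state between steps, per the env template
end
```

Note that this centralizes training: one policy jointly selects every agent's action, so you get coordinated behavior but not independent, decentralized policies, and the joint action space grows with the number of agents.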


