Answered
Custom Action Space DDPG Reinforcement Learning Agent
To my knowledge, you cannot implement a custom action space with rlNumericSpec, but what you could possibly do (since adding pen...

9 months ago | 0

| accepted

Answered
Generate Cuda code from a pretrained rlDDPGAgent object for NVIDIA board
If you see here, the tanhLayer supports code generation with GPU Coder starting in R2019b.

9 months ago | 0

| accepted

Answered
Binary Decision Variable in MPC
This should be doable with custom constraints in nonlinear MPC. You can create your own function that decides how the constraint...

9 months ago | 0

Answered
Export the reinforcement learning result - to see the weights of critic network and actor network
Hello, You can see the values of the neural network weights using this function. Yes, you can apply DDPG and RL in general to ...

9 months ago | 0

Answered
Deploy trained policy to simulink model
Hello, Looks like the dimensions cannot be determined automatically. If you double click the MATLAB Fcn block and then click "E...

9 months ago | 0

| accepted

Answered
RL Toolbox: Combine Discrete and Continuous Observations
Does the environment output continuous and discrete observations? If so, couldn't you use 'rlNumericSpec' for both? The discre...

10 months ago | 0
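The suggestion above can be sketched as follows. This is a minimal, hedged example assuming an R2019b-era Reinforcement Learning Toolbox; the dimensions and limits are made up for illustration, and the discrete signal is simply modeled as a bounded numeric channel:

```matlab
% Sketch: mixed observation space as two rlNumericSpec channels,
% treating the discrete signal as a bounded numeric one.
contObs = rlNumericSpec([4 1], ...          % e.g. four continuous states
    'LowerLimit', -inf, 'UpperLimit', inf);
discObs = rlNumericSpec([1 1], ...          % discrete signal passed as numeric
    'LowerLimit', 0, 'UpperLimit', 3);      % e.g. an integer mode 0..3
obsInfo = [contObs; discObs];               % combined observation info
```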

Answered
RF - Create MATLAB Environment using Custom Functions - myResetFunction
Hi Zhen, I believe you are right - I have informed the documentation team about this.

10 months ago | 0

| accepted

Answered
reinforcement learning using my own function
It looks like your "core" function qualifies as the actual policy (or value function for that matter). The environment would be ...

10 months ago | 0

Answered
How can I find the template for the predefined environment: "CartPole-Discrete"
The predefined environments are coded in an object-oriented way, so you may not find all the info in one file. I would start wit...

10 months ago | 0

| accepted

Answered
How to continue training a DQN agent in the reinforcement learning toolbox?
Hi James, It looks like the experience buffer is the culprit here. Have a look at this question for a suggestion. Pretty much y...

10 months ago | 0

| accepted

Answered
Measures to improve computation time with reinforcement learning block in Simulink
Hi Enrico, Changing the values of TargetUpdateMethod and TargetUpdateFrequency will not change how often training happens, but ...

10 months ago | 0

Answered
DDPG Control - for non-linear plant control - Q0 does not converge even after 5,000 episodes
Hi Rajesh, It looks to me like this problem has converged. Ideally, the Q0 curve should eventually overlap with the average episod...

10 months ago | 0

Answered
Reinforcement Learning Toolbox: DDPG Agent, Q0 diverging to very high values during training
Hi Johan, It makes sense that stopping the training leads to bad actions since the blown-up critic values probably don't lead t...

1 year ago | 0

Answered
Reinforcement Learning Tool Box : How to change epsilon during training?
Hi Keita, Have a look at this link. The 'EpsilonGreedyExploration' option provides a way to reduce exploration as training prog...

1 year ago | 0
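The 'EpsilonGreedyExploration' option mentioned above can be set on the DQN agent options. A minimal sketch, assuming R2019b-era property names and with illustrative values:

```matlab
% Sketch: control epsilon-greedy exploration decay via rlDQNAgentOptions.
opt = rlDQNAgentOptions;
opt.EpsilonGreedyExploration.Epsilon      = 1.0;    % initial exploration rate
opt.EpsilonGreedyExploration.EpsilonMin   = 0.01;   % floor epsilon never decays below
opt.EpsilonGreedyExploration.EpsilonDecay = 0.005;  % per-step decay rate
```

Epsilon decays toward EpsilonMin each step, so exploration shrinks automatically as training progresses.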

Answered
Reinforcement Learning Toolbox- Multiple Discrete Actions for actor critic agent (imageInputLayer issues)
Hi Anthony, I believe this link should help. Looks like the action space is not set up correctly. For multiple discrete actio...

1 year ago | 1

| accepted

Answered
Create policy evaluation function for RL agent
Can you try defining the size of inputs and outputs in the MATLAB Function block? This seems to be coming up a lot in the error ...

1 year ago | 0

| accepted

Answered
Reinforcement Learning Toolbox - When does algorithm train?
The implementation is based on the algorithm listed here. Weights are being updated at each time step.

1 year ago | 0

| accepted

Answered
RL Toolbox: Proximal Policy Optimisation
Hi Robert, Reinforcement Learning Toolbox in R2019b has a PPO implementation for discrete action spaces. Future releases will i...

1 year ago | 0

Answered
Training an agent of reinforcement learning as a motor's controller, but Matlab doesn't not do training at all?
Hello, It is hard to pinpoint the problem exactly without a repro model, but it sounds like training stops prematurely. Can you re...

1 year ago | 0

Answered
DDPG - Noise Model - sample time step - definition
Hi Niklas, This post should be helpful. By "sample time step" the documentation refers to the "step count of the RL training pro...

1 year ago | 0

| accepted

Answered
Reinforcement Learning Simulink Block Inital Policy
To use the RL Agent block, you need to create an agent first, which also requires a policy architecture. When you set up your ne...

1 year ago | 0

Answered
How to bound DQN critic estimate or RL training progress y-axis
Hello, I believe the best approach here is to figure out why the critic estimate takes large values. Even if you scale the plot...

1 year ago | 0

| accepted

Answered
Reinforcement Learning Simulink Block Inital Policy
If you already have a policy with trained weights, you could just use that directly when creating the agent, instead of creating...

1 year ago | 0

Answered
How to use CarMaker with Reinforcement learning tool box?
Hi Jin, You can use CarMaker with Simulink. After you set up the Simulink model to work with CarMaker, you use the same proces...

1 year ago | 0

Answered
reinforcement learning toolbox - q table
Hi Xinpeng, To see the trained table, all you have to do is extract it using 'getCritic'. Try: critic = getCritic(agent); The v...

1 year ago | 1

| accepted
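The extraction step above can be sketched in two lines. This assumes a hypothetical trained Q-learning agent named `agent` with a table-based critic, and uses the R2019-era function name 'getLearnableParameterValues':

```matlab
% Sketch: pull the trained Q-table out of a tabular-critic agent.
critic = getCritic(agent);                     % extract the critic representation
qTable = getLearnableParameterValues(critic);  % cell array holding the table values
```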

Answered
Reinforcement Learning Toolbox - Change Action Space
Hi Federico, Unfortunately, the action space is fixed once created. To reduce the number of times an action is selected, you co...

1 year ago | 1

| accepted

Answered
'sim' command error
Hello, I believe that if you install update 1 (or later) of R2019a release, this issue will be resolved.

1 year ago | 0

Answered
how to create own environment in reinforcement learning
To create a MATLAB environment type rlCreateEnvTemplate('myEnv') This will create a template m file based on the pendulum syst...

1 year ago | 0
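The workflow above looks roughly like this. A minimal sketch, assuming the environment name 'myEnv' (any name works) and that you fill in the generated step/reset methods before constructing it:

```matlab
% Sketch: generate a custom MATLAB environment class template,
% then instantiate it once the template methods are implemented.
rlCreateEnvTemplate('myEnv');  % creates myEnv.m, pre-filled from the pendulum example
env = myEnv;                   % construct the environment after editing the template
```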

Answered
How to show the progress of the training step in an episode?
Hello, You can use 'getLearnableParameterValues' to get the network parameters after training. Can you share some more detail...

1 year ago | 0
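The 'getLearnableParameterValues' call mentioned above can be sketched as follows, assuming a hypothetical trained actor-critic agent named `agent` and the R2019-era function names:

```matlab
% Sketch: inspect actor and critic network parameters after training.
actor  = getActor(agent);
critic = getCritic(agent);
actorParams  = getLearnableParameterValues(actor);   % cell array of weights/biases
criticParams = getLearnableParameterValues(critic);
```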

Answered
How to visualize episode behaviour with the reinforcement learning toolbox?
Hello, To create a custom MATLAB environment, use the template that pops up after running rlCreateEnvTemplate('myenv') In thi...

1 year ago | 1

| accepted
