Community Profile


Emmanouil Tzorakoleftherakis


Last seen: Today

MathWorks

109 total contributions since 2018

Emmanouil Tzorakoleftherakis's Badges

  • Knowledgeable Level 3
  • 6 Month Streak
  • Revival Level 2
  • First Answer



Answered
reinforcement learning and DDPG agent problem
Looks like training was not successful. There could be many things at fault here - some suggestions: 1) Make sure you are rando...

1 day ago | 1

Answered
RL agent error using simulink
Looks like when you call getActionInfo, instead of passing an environment or agent object as an argument, you are passing a doub...

1 day ago | 0

| accepted
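The answer above points at a common mistake with `getActionInfo`: it expects an environment (or agent) object, not a numeric value. A minimal sketch, using one of the toolbox's predefined environments (the environment name here is just an example):

```matlab
% getActionInfo expects an environment or agent object, not a double.
env = rlPredefinedEnv('CartPole-Discrete');   % environment object
actInfo = getActionInfo(env);                 % correct: pass the object itself
obsInfo = getObservationInfo(env);            % same pattern for observations
% Passing a numeric value instead, e.g. getActionInfo(2),
% produces the kind of error described in the question.
```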

Answered
Deep Q-network reinforcement learning
Hello, The functionality to customize the action space is not yet available. A couple of workarounds: 1) Use penalties in the ...

5 days ago | 0

| accepted

Answered
Reinforcement Learning Random Action Generator
Hi Jason, 1) I am not really sure what you mean. There are two ways to create custom environments in MATLAB - one is using cust...

5 days ago | 1

| accepted
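One of the two custom-environment routes mentioned above is `rlFunctionEnv`, which builds an environment from step and reset function handles (the other is subclassing `rl.env.MATLABEnvironment`). A minimal sketch, where the spec dimensions and the function names `myStepFcn`/`myResetFcn` are placeholders for your own implementations:

```matlab
% Custom MATLAB environment from function handles (sketch).
obsInfo = rlNumericSpec([4 1]);       % e.g. 4 continuous observations
actInfo = rlFiniteSetSpec([-1 1]);    % e.g. 2 discrete actions
% myStepFcn and myResetFcn are user-defined functions on the MATLAB path:
%   [nextObs,reward,isDone,loggedSignals] = myStepFcn(action,loggedSignals)
%   [initialObs,loggedSignals] = myResetFcn()
env = rlFunctionEnv(obsInfo, actInfo, 'myStepFcn', 'myResetFcn');
```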

Answered
Reinforcement learning deployment in real-time system
Hello, To generate code from a trained policy, you should follow the process shown here. Note that this is a MATLAB-based workf...

5 days ago | 0

Answered
Implementation of Proximal Policy Optimisation
Hello, It seems you want to use PPO with continuous action space. If that's the case, your actor network does not have the righ...

5 days ago | 0

Answered
DDPG agent has saturated actions with diverging Q value
For the actor switching between extreme actions, please refer to this answer - sounds relevant. In short, make sure you include a...

5 days ago | 0

Answered
Can I use NN built with Fitnet for Reinforcement Learning toolbox with DQN agent?
Hi Abhay, Reinforcement Learning Toolbox currently supports the layers supported by Deep Learning Toolbox only. You could try c...

17 days ago | 0

| accepted

Answered
how I can connect Agent's action to set block parameter?
Hello, The most straightforward way is if the block accepts external input that modifies these parameters (for example like the...

17 days ago | 0

Answered
PPO agent applied to ACC model
Hello, Can you make sure that you set up your actor following a structure similar to this one? It seems that your variance path...

17 days ago | 0

Answered
Problems to set up the reset function in Reinforcement learning environment
Maybe I am missing something, but why don't you add a couple of lines that call 'setBlockParameter' with the appropriate path to...

17 days ago | 0
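The `setBlockParameter` suggestion above refers to the `Simulink.SimulationInput` method that Simulink environment reset functions receive and return. A minimal sketch of such a reset function; the model path, block name, and parameter are placeholders:

```matlab
% Sketch of a reset function for a Simulink RL environment.
% The toolbox passes in a Simulink.SimulationInput object each episode.
function in = localResetFcn(in)
    x0 = num2str(rand);  % new random initial condition for this episode
    in = setBlockParameter(in, ...
        'myModel/Integrator', 'InitialCondition', x0);
end
```

The function is then attached with `env.ResetFcn = @localResetFcn;` so the parameter is refreshed at the start of every training episode.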

Answered
Easy way to evaluate / compare the performance of RL algorithm
Why not use a MATLAB Fcn block and implement the dummy agent in there? If you want random/constant actions should be just one li...

2 months ago | 1

Answered
Is it possible to train LSTM Network without a Dataset?
In the paper they mention "Although a readily available dataset is required to train an LSTM network, we devised an efficient wa...

2 months ago | 1

| accepted

Answered
Reinforcement learning: "NextObs" vs. "LoggedState" in step function
Actually, NextObs is the important thing here. It represents the value of your states after you apply current action and integra...

2 months ago | 0

Answered
What's the purpose of adding a transfer function after a Integrator block?
Hello, It is likely to filter high-frequency content. Hope that helps.

2 months ago | 0

Answered
PPO agent with continuous action example
Hello, If you want to use PPO, i.e. a stochastic actor with continuous action space, you can follow the structure shown here.

2 months ago | 0

Answered
Environment for Reinforcement Learning Project
Hello, We are working on providing an interface between OpenAI Gym and Reinforcement Learning Toolbox but this will take some m...

2 months ago | 0

| accepted

Answered
How do I properly substitute rlRepresentation with rlValueRepresentation, rlQValueRepresentation, rlDeterministicActorRepresentation, and rlStochasticActorRepresentation?
It would be helpful if you pasted the exact MATLAB code you are typing to see what the problem is. I suspect you simply changed ...

2 months ago | 0

Answered
Deep Q Learning - define an adaptive critic learning rate?
Hi Niklas, I believe this is currently not supported. This is an interesting use case though - I will inform the development tea...

2 months ago | 0

| accepted

Answered
Build Environment reinforcement learning
Hello, For Simulink environments, the following page should be helpful: https://www.mathworks.com/help/reinforcement-learning/...

2 months ago | 0

Answered
Using Reinforcement Learning algorithm to optimize parameter(s) of a controller
Hi Hazwan, The main difference between using RL for control vs parameter tuning is that in the first case the policy will direc...

2 months ago | 1

| accepted

Answered
Initializing pimp-controller failed: Error binding to tcp://*: no free port in range 9620-9620
Hello, I would contact technical support for this, and show them how to reproduce the error. If the issue is in the communicati...

2 months ago | 0

Answered
Can LoggedSignal in provided Link contain more than just the state?
LoggedSignals is not tied to the state or the observations, so you should be able to store whatever makes sense to you in that v...

2 months ago | 0

| accepted

Answered
Using getValue in matlab fcn block in simulink
Hi Sam, Before R2020a, the easiest way to bring the critic in Simulink without using the Agent block is to call generatePolicy...

2 months ago | 1

Answered
Multi action agent programming in reinforcement learning
This example shows how to create an environment with multiple discrete actions. Hope that helps

2 months ago | 0

Answered
Incorporate Time into Reinforcement Learning Environment
Time would be another parameter of your environment. Interactions between the agent and environment happen at discrete time step...

2 months ago | 1

| accepted

Answered
How to view the output of rlNumericSpec?
Hi Jacob, I think what you want to do is take the output of the agent and do the transformation you mention (not the output of ...

2 months ago | 0

Answered
Create and Train DQN Agent with just a State Path and Not Action Path
Hello, This page shows how this can be done in 20a. We will have examples that show this workflow in the next release. Hope th...

3 months ago | 1

| accepted

Answered
To choose an action, is it correct to compute the value of successor state or do we need to compute value of states in the entire path till end state?
Hi Gowri, Using the Q value for a state+action pair encodes all the information till 'the end of the path' weighted by a discou...

3 months ago | 1

| accepted

Answered
Agent repeats same sequence of actions each episode
Hi Braydon, I am not really sure why you are only looking at the first two episodes. RL can take thousands of episodes to conve...

3 months ago | 0

| accepted
