I have a working custom RL environment, but after training for many episodes it fails with this error:
Invalid input argument type or size such as observation, reward, isdone or loggedSignals
Since the error occurs during training, and the function that throws it doesn't report which of those variables is at fault, I can't find the bug.
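To narrow it down, I'm thinking of adding explicit checks inside my step method so the failure reports the offending value instead of the generic toolbox message. A rough sketch of what I mean (stepImpl, the [4 1] observation size, and the messages are placeholders for my actual environment, not toolbox APIs):

function [NextObs, Reward, IsDone, LoggedSignals] = step(this, Action)
    % stepImpl is a placeholder for my real environment dynamics.
    [NextObs, Reward, IsDone, LoggedSignals] = stepImpl(this, Action);
    % Fail loudly with the offending value, not the generic message.
    assert(isequal(size(NextObs), [4 1]) && all(isfinite(NextObs(:))), ...
        'Bad observation: %s', mat2str(NextObs))
    assert(isscalar(Reward) && isfinite(Reward), ...
        'Bad reward: %s', mat2str(Reward))
    assert(isscalar(IsDone), 'Bad IsDone: %s', mat2str(IsDone))
end

Would checks like these catch whatever the toolbox is complaining about, or does its validation look at something else?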
I also tried to take that failing episode and use:
InitialObs = reset(env);
[NextObs,Reward,IsDone,LoggedSignals] = step(env,index);   % index = the action the agent chose
to regenerate the error with the steps and actions the agent chose, but on replay the error never occurs. Any ideas why this may be happening?
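For reference, this is roughly how I replay the whole episode (actions is a placeholder for the recorded sequence the agent chose; my real code would load it from a log):

Obs = reset(env);
for k = 1:numel(actions)
    % Replay the recorded actions one step at a time.
    [Obs, Reward, IsDone, LoggedSignals] = step(env, actions(k));
    if IsDone
        break
    end
end

One thing I'm unsure about is whether reset even puts the environment back into the same initial state as the failing episode, which might explain why the replay never hits the error.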
When I check all the vectors by hand they have the correct sizes, and it trains without a problem for a long while before the error appears.
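Since everything looks fine when I inspect it manually, my next idea is to log every action from inside the environment and flush the log to disk, so the exact failing trajectory survives the crash and can be replayed step by step. A sketch (ActionLog is a property I'd add to my environment class; it is not part of the toolbox):

% Inside step(), before returning:
this.ActionLog{end+1} = Action;       % ActionLog is a custom property
actionLog = this.ActionLog;           % save() needs a plain variable name
save('actionLog.mat', 'actionLog');   % overwritten each step; survives a crash

Is there a better built-in way to capture the inputs of the step that finally fails?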