MATLAB Answers

RL Toolbox: Combine Discrete and Continuous Observations

Nicholas Schultz on 17 Feb 2020
Edited: Magnify on 1 Aug 2020
Hi,
For my project, I am looking at combining discrete observations that one would create through "rlFiniteSetSpec" and continuous observations through "rlNumericSpec". Is there a way that I could make this work by combining them into one observation variable?


Answers (1)

Emmanouil Tzorakoleftherakis
Does the environment output continuous and discrete observations? If yes, couldn't you use 'rlNumericSpec' for both? The discrete observations will already take values from a finite set, as dictated by the environment. If not, I would probably do something along the lines of:
ObservationInfo(1) = rlNumericSpec(...);
ObservationInfo(1).Name = 'continuous observations';
ObservationInfo(2) = rlFiniteSetSpec(...);
ObservationInfo(2).Name = 'discrete observations';
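For example (the sizes and set values below are purely illustrative, not from the original question; adapt them to your environment), a 4-element continuous observation combined with a three-valued discrete flag could be specified as:
% Hypothetical dimensions/values: a 4-element continuous observation plus
% a discrete flag restricted to the values -1, 0 and 1.
ObservationInfo(1) = rlNumericSpec([4 1]);
ObservationInfo(1).Name = 'continuous observations';
ObservationInfo(2) = rlFiniteSetSpec([-1 0 1]);
ObservationInfo(2).Name = 'discrete observations';
% ObservationInfo is now a 1x2 spec array; a custom environment (for
% example one built with rlFunctionEnv) and the agent's networks then
% need one input channel per entry.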

  2 Comments

MB Sylvest on 26 May 2020
I have a similar problem:
For an A2C system I would like to combine a discrete action (3 different actions) and a continuous one (1 continuous variable). When I use your approach above, it gives an error.
%% Creation of the AC Agent
% Neural networks
criticNet = [
    imageInputLayer([1 ObservationInfo.Dimension(2) 1],"Name","state","Normalization","none")
    fullyConnectedLayer(32,"Name","Fully_128_1")
    tanhLayer("Name","tanh_activation1")
    fullyConnectedLayer(32,"Name","Fully_128_2")
    tanhLayer("Name","tanh_activation2")
    fullyConnectedLayer(1,"Name","output")];
actorNet = [
    imageInputLayer([1 ObservationInfo.Dimension(2) 1],"Name","state","Normalization","none")
    fullyConnectedLayer(32,"Name","Fully_128_1")
    tanhLayer("Name","tanh_activation1")
    fullyConnectedLayer(32,"Name","Fully_128_2")
    tanhLayer("Name","tanh_activation2")
    fullyConnectedLayer(size(Action_Vectors,1)+1,"Name","action")];
obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);
criticOpts = rlRepresentationOptions('LearnRate',0.001,'GradientThreshold',1); %0.001
%critic = rlRepresentation(criticNet,criticOpts,'Observation',{'state'},obsInfo);%19a
critic = rlValueRepresentation(criticNet,obsInfo,'Observation',{'state'},criticOpts);
actorOpts = rlRepresentationOptions('LearnRate',0.001,'GradientThreshold',1); %0.001
%actor = rlRepresentation(actorNet,actorOpts,'Observation',{'state'},obsInfo,'Action',{'action'},actInfo);%19a
actor = rlStochasticActorRepresentation(actorNet,obsInfo,actInfo,'Observation',{'state'},actorOpts);
agentOpts = rlACAgentOptions(...
'NumStepsToLookAhead',128, ...
'EntropyLossWeight',0.9, ...
'DiscountFactor',0.9);
agent = rlACAgent(actor,critic,agentOpts);
It gives an error at the line:
actor = rlStochasticActorRepresentation(actorNet,obsInfo,actInfo,'Observation',{'state'},actorOpts);
with the following error:
Error using rl.representation.rlAbstractRepresentation (line 79)
Multiple action channels are not supported.
Clearly it is because actInfo is now a cell. Do you have any suggestions on how I can get around this?
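One way around the single-action-channel restriction that the error message describes is to fold everything into one finite action set. This is only a hedged sketch under the assumption that the continuous action can be coarsely discretized; all names, levels and values below are hypothetical, not from this thread:
% Hypothetical sketch: combine the discrete choice and a discretized
% version of the continuous value into one finite action set, so the
% agent only sees a single action channel.
contLevels  = -1:0.5:1;   % assumed discretization of the continuous action
discChoices = 1:3;        % the three discrete actions

actionVectors = {};
for d = discChoices
    for c = contLevels
        actionVectors{end+1} = [d; c]; %#ok<AGROW>
    end
end

ActionInfo = rlFiniteSetSpec(actionVectors);
ActionInfo.Name = 'combined action';
% The environment's step function would then split each selected action
% vector back into its discrete and continuous parts.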
Magnify on 1 Aug 2020
In my opinion, you did not take the time to check the official documentation carefully, which is what is causing these blunders.

