left and right sides have a different number of elements
william Smith on 7 Apr 2019
Answered: kevin harianto on 4 Apr 2022
Getting an error with this line; please advise. Thanks!
dA(i)=(rate_a(i).*F.*Q)-(rate_b(i).*F.*L)-(rate_c(i).*F);
Error: left and right sides have a different number of elements
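For context, this message appears whenever the right-hand side of an indexed assignment returns more elements than the single element being assigned. A minimal sketch with made-up sizes (not the poster's actual data) that reproduces it, assuming F and Q are vectors:
rate_a = ones(1,10);
F = ones(1,10);              % hypothetical: F is a vector, not a scalar
Q = ones(1,10);
dA = zeros(1,10);
dA(1) = rate_a(1).*F.*Q;     % right side has 10 elements, dA(1) can hold only 1
% Error: Unable to perform assignment because the left and right
% sides have a different number of elements.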
Accepted Answer
Star Strider on 7 Apr 2019
My guess is that ‘A’, ‘B’ and ‘C’ are vectors, or you would not be subscripting them elsewhere. You need to subscript them here as well:
dA(i)=(rate_a(i).*A(i).*B(i))-(rate_b(i).*A(i).*C(i))-(rate_c(i).*A(i));
dB(i)=(rate_b(i).*A(i).*C(i))-(rate_a(i).*A(i).*B(i))-(rate_d(i).*B(i));
dC(i)=rate_d(i).*B(i);
or you will continue to have a dimension mismatch.
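As an illustration of why the subscripting matters (a minimal sketch with made-up rate vectors and starting values, not the actual model): once every factor on the right is indexed with i, the assignment is scalar-to-scalar and the element counts match.
rate_a = rand(1,100); rate_b = rand(1,100); rate_c = rand(1,100);
A = zeros(1,100); B = zeros(1,100); C = zeros(1,100);
A(1) = 1; B(1) = 2; C(1) = 0.5;    % hypothetical starting values
i = 1;
dA(i) = (rate_a(i).*A(i).*B(i)) - (rate_b(i).*A(i).*C(i)) - (rate_c(i).*A(i));   % scalar into scalar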
4 Comments
Star Strider on 7 Apr 2019
I don’t have the rest of your code.
Use your original while loop (with the subscripts added as above):
i=1;
while i<100 && dA(i)<=T && dA(i)>1
    dA(i)=(rate_a(i).*A(i).*B(i))-(rate_b(i).*A(i).*C(i))-(rate_c(i).*A(i));
    dB(i)=(rate_b(i).*A(i).*C(i))-(rate_a(i).*A(i).*B(i))-(rate_d(i).*B(i));
    dC(i)=rate_d(i).*B(i);
    dAdt(i)=x.*A(i);   % where A is set as a zeros vector
    dBdt(i)=y.*B(i);   % where B is set as a zeros vector
    dCdt(i)=z.*C(i);   % where C is set as a zeros vector
    A(i+1)=A(i)+i.*(dAdt(i)+dAdt(i+1))/2;
    B(i+1)=B(i)+i.*(dBdt(i)+dBdt(i+1))/2;
    C(i+1)=C(i)+i.*(dCdt(i)+dCdt(i+1))/2;
    i=i+1;
end
and then just do this: 
iv = 1:numel(C);
so the plot call is then: 
plot(iv, C)
That should work, if ‘C’ is a vector.  
More Answers (1)
kevin harianto on 4 Apr 2022
I also have the same problem; in my case it comes from Location(:) = [pointCloud.Location]; in the code below.
classdef LidarSemanticSegmentation < lidar.labeler.AutomationAlgorithm
    % LidarSemanticSegmentation Automation algorithm performs semantic
    % segmentation in the point cloud.
    %   LidarSemanticSegmentation is an automation algorithm for segmenting
    %   a point cloud using SqueezeSegV2 semantic segmentation network
    %   which is trained on Pandaset data set.
    %
    %   See also lidarLabeler, groundTruthLabeler
    %   lidar.labeler.AutomationAlgorithm.
    %   Copyright 2021 The MathWorks, Inc.
    % ----------------------------------------------------------------------
    % Step 1: Define the required properties describing the algorithm. This
    % includes Name, Description, and UserDirections.
    properties(Constant)
        % Name Algorithm Name
        %   Character vector specifying the name of the algorithm.
        Name = 'Lidar Semantic Segmentation';
        % Description Algorithm Description
        %   Character vector specifying the short description of the algorithm.
        Description = 'Segment the point cloud using SqueezeSegV2 network.';
        % UserDirections Algorithm Usage Directions
        %   Cell array of character vectors specifying directions for
        %   algorithm users to follow to use the algorithm.
        UserDirections = {['ROI Label Definition Selection: select one of ' ...
            'the ROI definitions to be labeled'], ...
            'Run: Press RUN to run the automation algorithm. ', ...
            ['Review and Modify: Review automated labels over the interval ', ...
            'using playback controls. Modify/delete/add ROIs that were not ' ...
            'satisfactorily automated at this stage. If the results are ' ...
            'satisfactory, click Accept to accept the automated labels.'], ...
            ['Accept/Cancel: If the results of automation are satisfactory, ' ...
            'click Accept to accept all automated labels and return to ' ...
            'manual labeling. If the results of automation are not ' ...
            'satisfactory, click Cancel to return to manual labeling ' ...
            'without saving the automated labels.']};
    end
    % ---------------------------------------------------------------------
    % Step 2: Define properties you want to use during the algorithm
    % execution.
    properties
        % AllCategories
        % AllCategories holds the default 'unlabelled', 'Vegetation',
        % 'Ground', 'Road', 'RoadMarkings', 'SideWalk', 'Car', 'Truck',
        % 'OtherVehicle', 'Pedestrian', 'RoadBarriers', 'Signs',
        % 'Buildings' categorical types.
        AllCategories = {'unlabelled'};
        % PretrainedNetwork
        %   PretrainedNetwork saves the pretrained SqueezeSegV2 network.
        PretrainedNetwork
    end
    %----------------------------------------------------------------------
    % Note: this method needs to be included for the lidarLabeler app to
    % recognize it as operating on point clouds.
    methods (Static)
        % This method is static to allow the apps to call it and check the
        % signal type before instantiation. When users refresh the
        % algorithm list, we can quickly check and discard algorithms for
        % any signal that is not supported in a given app.
        function isValid = checkSignalType(signalType)
            isValid = (signalType == vision.labeler.loading.SignalType.PointCloud);
        end
    end
    %----------------------------------------------------------------------
    % Step 3: Define methods used for setting up the algorithm.
    methods
        function isValid = checkLabelDefinition(algObj, labelDef)
            % Only Voxel ROI label definitions are valid for the Lidar
            %  semantic segmentation algorithm.
            isValid = labelDef.Type == lidarLabelType.Voxel;
            if isValid
                algObj.AllCategories{end+1} = labelDef.Name;
            end
        end
        function isReady = checkSetup(algObj)
            % Is there one selected ROI Label definition to automate.
            isReady = ~isempty(algObj.SelectedLabelDefinitions);
        end
    end
    %----------------------------------------------------------------------
    % Step 4: Specify algorithm execution. This controls what happens when
    %         the user presses RUN. Algorithm execution proceeds by first
    %         executing initialize on the first frame, followed by run on
    %         every frame, and terminate on the last frame.
    methods
        function initialize(algObj,~)
            % Load the pretrained SqueezeSegV2 semantic segmentation network.
            outputFolder = fullfile(tempdir, 'Pandaset');
            pretrainedSqueezeSeg = load(fullfile(outputFolder,'trainedSqueezeSegV2PandasetNet.mat'));
            % Store the network in the 'PretrainedNetwork' property of this object.
            algObj.PretrainedNetwork = pretrainedSqueezeSeg.net;
        end
        function autoLabels = run(algObj, pointCloud)
            % Setup categorical matrix with categories including
            % 'Vegetation', 'Ground', 'Road', 'RoadMarkings', 'SideWalk',
            % 'Car', 'Truck', 'OtherVehicle', 'Pedestrian', 'RoadBarriers',
            % and 'Signs'.
            autoLabels = categorical(zeros(size(pointCloud.Location,1), size(pointCloud.Location,2)), ...
                0:12,algObj.AllCategories);
            %A = zeros(10000,10000);
            % Filling in the minimum required resolution
            % to meet the neural network's specification.
            % (first iteration failed)   pointCloud.Location = zeros(65,1856,5);
            % Due to an error we must append the various point cloud data first.
            Location = zeros(64,1856,5);
            % Next we can add in the ptCloud locations
            %     Location(:,:,1) = pointCloud.Location;
            %     Location = zeros(65,1856,5);
            Location(:) = [pointCloud.Location]   % <-- the line that throws the error
            ptCloud = pointCloud(Location);
            % This will also be applied to the pointCloud Intensity levels
            % as these are also analyzed by the machine learning algorithm.
            % (Pushed aside for later modifications)   pointCloud.Intensity = zeros(64,1865);
            % Convert the input point cloud to five channel image.
            I = helperPointCloudToImage(pointCloud);
            % Predict the segmentation result.
            predictedResult = semanticseg(I, algObj.PretrainedNetwork);
            autoLabels(:) = predictedResult;
            % Using this area we would be able to continuously update the latest
            % file on sending the output towards the CAN network, or at least
            % ensure that the item is obtainable. This area would work the best.
            % first we must
        end
    end
end
function helperDisplayLabelOverlaidPointCloud(I,predictedResult)
    % helperDisplayLabelOverlaidPointCloud Overlay labels over point cloud object.
    %  helperDisplayLabelOverlaidPointCloud(I,predictedResult)
    %  displays the overlaid pointCloud object. I is the five-channel organized
    %  input image. predictedResult contains pixel labels.
    ptCloud = pointCloud(I(:,:,1:3),Intensity = I(:,:,4));
    cmap = helperPandasetColorMap;
    B = ...
        labeloverlay(uint8(ptCloud.Intensity),predictedResult,Colormap = cmap,Transparency = 0.4);
    pc = pointCloud(ptCloud.Location,Color = B);
    ax = pcshow(pc);
    set(ax,XLim = [-70 70],YLim = [-70 70])
    zoom(ax,3.5)
end
function cmap = helperPandasetColorMap
    cmap = [[30 30 30];  % Unlabeled
        [0 255 0];       % Vegetation
        [255 150 255];   % Ground
        [237 117 32];    % Road
        [255 0 0];       % Road Markings
        [90 30 150];     % Sidewalk
        [255 255 30];    % Car
        [245 150 100];   % Truck
        [150 60 30];     % Other Vehicle
        [255 255 0];     % Pedestrian
        [0 200 255];     % Road Barriers
        [170 100 150];   % Signs
        [255 0 255]];    % Building
    cmap = cmap./255;
end
function image = helperPointCloudToImage(ptcloud)
% helperPointCloudToImage converts the point cloud to a five-channel image
image = ptcloud.Location;
image(:,:,5) = ptcloud.Intensity;
rangeData = iComputeRangeData(image(:,:,1),image(:,:,2),image(:,:,3));
image(:,:,4) = rangeData;
index = isnan(image);
image(index) = 0;
end
function rangeData = iComputeRangeData(xChannel,yChannel,zChannel)
rangeData = sqrt(xChannel.*xChannel+yChannel.*yChannel+zChannel.*zChannel);
end
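For reference, Location(:) = [pointCloud.Location] fails for the same reason as the original question: Location is preallocated as 64-by-1856-by-5, while the Location property of an organized point cloud is M-by-N-by-3 (or M-by-3 for an unorganized one), so the two sides contain different numbers of elements. A minimal sketch of one way around it, assuming the scan really is organized as 64-by-1856 and keeping the input argument name used above (which, note, shadows the pointCloud class inside run):
loc = pointCloud.Location;                      % M-by-N-by-3 XYZ coordinates
fiveChannel = zeros(size(loc,1), size(loc,2), 5);
fiveChannel(:,:,1:3) = loc;                     % element counts now match on both sides
if ~isempty(pointCloud.Intensity)
    fiveChannel(:,:,5) = pointCloud.Intensity;  % intensity as the fifth channel
end
That said, the helperPointCloudToImage function above already assembles a five-channel image from the point cloud's Location and Intensity, so the manually zero-filled array may not be needed at all.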