
randomPatchExtractionDatastore

Datastore for extracting random 2-D or 3-D patches from images or pixel label images

Description

A randomPatchExtractionDatastore object extracts corresponding randomly positioned patches from two image-based datastores. For example, the input datastores can be two image datastores that contain the network inputs and desired network responses for training image-to-image regression networks, or ground truth images and pixel label data for training semantic segmentation networks.

This object requires that you have Deep Learning Toolbox™.

Note

When you use a randomPatchExtractionDatastore as a source of training data, the datastore extracts multiple random patches from each image for each epoch, so that each epoch uses a slightly different data set. The actual number of training patches at each epoch is the number of training images multiplied by PatchesPerImage. The image patches are not stored in memory.

Creation

Description


patchds = randomPatchExtractionDatastore(ds1,ds2,PatchSize) creates a datastore that extracts randomly positioned patches of size PatchSize from the input data in datastore ds1 and the response data in datastore ds2.

patchds = randomPatchExtractionDatastore(ds1,ds2,PatchSize,Name,Value) uses name-value arguments to set the PatchesPerImage, DataAugmentation, and DispatchInBackground properties. You can specify multiple name-value arguments.

For example, randomPatchExtractionDatastore(imds1,imds2,50,"PatchesPerImage",40) creates a datastore that randomly generates 40 patches of size 50-by-50 pixels from each image in image datastores imds1 and imds2.

Input Arguments


Input data containing training input to the network, specified as an ImageDatastore, PixelLabelDatastore (Computer Vision Toolbox), or TransformedDatastore.

Specifying a PixelLabelDatastore requires Computer Vision Toolbox™.

Note

ImageDatastore allows batch-reading of JPG or PNG image files using prefetching. If you use a custom function for reading the images, then prefetching does not happen.

Response data representing the desired network responses, specified as an ImageDatastore, PixelLabelDatastore (Computer Vision Toolbox), or TransformedDatastore. If you specify a TransformedDatastore, then the underlying datastore must be an ImageDatastore or a PixelLabelDatastore.

Specifying a PixelLabelDatastore requires Computer Vision Toolbox.

Note

ImageDatastore allows batch-reading of JPG or PNG image files using prefetching. If you use a custom function for reading the images, then prefetching does not happen.

Properties


This property is read-only.

Patch size, specified as one of the following.

  • A 2-element vector of positive integers for 2-D patches. PatchSize has the form [r c] where r specifies the number of rows and c specifies the number of columns in the patch.

  • A 3-element vector of positive integers for 3-D patches. PatchSize has the form [r c p] where r specifies the number of rows, c specifies the number of columns, and p specifies the number of planes in the patch.
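The two forms can be sketched as follows. This is a hedged illustration, not a complete workflow: ds1 and ds2 are assumed datastores of corresponding 2-D images, and vds1 and vds2 are assumed datastores of corresponding volumetric data.

```matlab
% 2-D patches: 64 rows by 64 columns from each image pair
patchds2d = randomPatchExtractionDatastore(ds1,ds2,[64 64]);

% 3-D patches: 32 rows, 32 columns, and 16 planes from each volume pair
patchds3d = randomPatchExtractionDatastore(vds1,vds2,[32 32 16]);
```

As the semantic segmentation example later on this page shows, a scalar such as 32 also works for 2-D data and produces square 32-by-32 patches.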

Number of random patches per image, specified as a positive integer.

Preprocessing applied to input images, specified as an imageDataAugmenter (Deep Learning Toolbox) object or "none". When DataAugmentation is "none", no preprocessing is applied to input images.

Augment data with random transformations, such as resizing, rotation, and reflection, to help prevent the network from overfitting and memorizing the exact details of the training data. The randomPatchExtractionDatastore applies the same random transformation to both patches in each pair. The datastore augments the data in real time during training.

The DataAugmentation property is not supported for 3-D data. To preprocess 3-D data, use the transform function.
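Because DataAugmentation is 2-D only, 3-D data can instead be preprocessed with transform before patch extraction. A minimal sketch, assuming vds1 and vds2 are datastores of corresponding volumetric data: a deterministic transformation such as intensity normalization keeps the input and response patches aligned, whereas independent random transformations applied to the two datastores would not.

```matlab
% Normalize input volume intensities to [0,1] before patch extraction.
% The deterministic transform preserves input/response correspondence.
tds1 = transform(vds1,@(V) rescale(V));
patchds = randomPatchExtractionDatastore(tds1,vds2,[32 32 16]);
```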

Dispatch observations in the background during training, prediction, or classification, specified as false or true. To use background dispatching, you must have Parallel Computing Toolbox™.

Number of observations that are returned in each batch. You can change the value of MiniBatchSize only after you create the datastore. For training, prediction, and classification, the MiniBatchSize property is set to the mini-batch size defined in trainingOptions (Deep Learning Toolbox).
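For example, reading outside of training returns one table row per observation, so changing MiniBatchSize changes how many patch pairs each read returns. A hedged sketch, assuming patchds is an existing randomPatchExtractionDatastore:

```matlab
% Each call to read returns a table with MiniBatchSize rows,
% one extracted patch pair per row.
patchds.MiniBatchSize = 32;
data = read(patchds);
```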

This property is read-only.

Total number of observations in the randomPatchExtractionDatastore. The number of observations is the length of one training epoch.
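The relationship described in the Note above can be sketched as follows, assuming imds1 and imds2 are image datastores with the same number of files:

```matlab
% With 16 patches per image, the number of observations in one epoch
% is expected to be numel(imds1.Files) * 16.
patchds = randomPatchExtractionDatastore(imds1,imds2,[64 64], ...
    "PatchesPerImage",16);
```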

Object Functions

combine - Combine data from multiple datastores
hasdata - Determine if data is available to read
numpartitions - Number of datastore partitions
partition - Partition a datastore
partitionByIndex - Partition randomPatchExtractionDatastore according to indices
preview - Preview subset of data in datastore
read - Read data from randomPatchExtractionDatastore
readall - Read all data in datastore
readByIndex - Read data specified by index from randomPatchExtractionDatastore
reset - Reset datastore to initial state
shuffle - Shuffle data in datastore
transform - Transform datastore
isPartitionable - Determine whether datastore is partitionable
isShuffleable - Determine whether datastore is shuffleable
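A brief sketch of two of these functions, assuming patchds is an existing randomPatchExtractionDatastore:

```matlab
% Split the datastore into two partitions and take the first,
% for example to distribute reading across workers.
part1 = partition(patchds,2,1);

% Return a new datastore with the observations in shuffled order.
shufds = shuffle(patchds);
```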

Examples


Create an image datastore containing training images. The datastore in this example contains JPEG color images.

imageDir = fullfile(toolboxdir('images'),'imdata');
imds1 = imageDatastore(imageDir,'FileExtensions','.jpg');

Create a second datastore that transforms the images in imds1 by applying a Gaussian blur.

imds2 = transform(imds1,@(x)imgaussfilt(x,2));

Create an imageDataAugmenter that rotates images by random angles in the range [0, 90] degrees and randomly reflects image data horizontally.

augmenter = imageDataAugmenter('RandRotation',[0 90],'RandXReflection',true)
augmenter = 
  imageDataAugmenter with properties:

           FillValue: 0
     RandXReflection: 1
     RandYReflection: 0
        RandRotation: [0 90]
           RandScale: [1 1]
          RandXScale: [1 1]
          RandYScale: [1 1]
          RandXShear: [0 0]
          RandYShear: [0 0]
    RandXTranslation: [0 0]
    RandYTranslation: [0 0]

Create a randomPatchExtractionDatastore object that extracts random patches of size 100-by-100 from the unprocessed training images and corresponding smoothed response images. Specify the augmentation options by setting the DataAugmentation property.

patchds = randomPatchExtractionDatastore(imds1,imds2,[100 100], ...
    'DataAugmentation',augmenter)
patchds = 
  randomPatchExtractionDatastore with properties:

         PatchesPerImage: 128
               PatchSize: [100 100]
        DataAugmentation: [1×1 imageDataAugmenter]
           MiniBatchSize: 128
         NumObservations: []
    DispatchInBackground: 0

Preview a set of augmented image patches and the corresponding smoothed image patches.

minibatch = preview(patchds);
inputs = minibatch.InputImage;
responses = minibatch.ResponseImage;
test = cat(2,inputs,responses);
montage(test','Size',[8 2])
title('Inputs (Left) and Responses (Right)')

Create an image datastore containing training images.

dataDir = fullfile(toolboxdir('vision'),'visiondata','triangleImages');
imageDir = fullfile(dataDir,'trainingImages');
imds = imageDatastore(imageDir);

Define class names and their associated label IDs. Then, create a pixel label datastore containing the ground truth pixel labels for the training images.

classNames = ["triangle","background"];
labelIDs = [255 0];
labelDir = fullfile(dataDir,'trainingLabels');
pxds = pixelLabelDatastore(labelDir,classNames,labelIDs);

Create a random patch extraction datastore to extract random patches of size 32-by-32 pixels from the images and corresponding pixel labels. Set the optional PatchesPerImage property to extract 512 random patches from each image and pixel label pair.

patchds = randomPatchExtractionDatastore(imds,pxds,32, ...
     'PatchesPerImage',512);

Create a network for semantic segmentation.

layers = [
    imageInputLayer([32 32 1])
    convolution2dLayer(3,64,'Padding',1)
    reluLayer()
    maxPooling2dLayer(2,'Stride',2)
    convolution2dLayer(3,64,'Padding',1)
    reluLayer()
    transposedConv2dLayer(4,64,'Stride',2,'Cropping',1)
    convolution2dLayer(1,2)
    softmaxLayer()
    pixelClassificationLayer()
    ]
layers = 
  10x1 Layer array with layers:

     1   ''   Image Input                  32x32x1 images with 'zerocenter' normalization
     2   ''   Convolution                  64 3x3 convolutions with stride [1  1] and padding [1  1  1  1]
     3   ''   ReLU                         ReLU
     4   ''   Max Pooling                  2x2 max pooling with stride [2  2] and padding [0  0  0  0]
     5   ''   Convolution                  64 3x3 convolutions with stride [1  1] and padding [1  1  1  1]
     6   ''   ReLU                         ReLU
     7   ''   Transposed Convolution       64 4x4 transposed convolutions with stride [2  2] and output cropping [1  1]
     8   ''   Convolution                  2 1x1 convolutions with stride [1  1] and padding [0  0  0  0]
     9   ''   Softmax                      softmax
    10   ''   Pixel Classification Layer   Cross-entropy loss 

Set up training options. To reduce training time, set MaxEpochs to 5.

options = trainingOptions('sgdm', ...
    'InitialLearnRate',1e-3, ...
    'MaxEpochs',5, ...
    'Verbose',false);

Train the network.

net = trainNetwork(patchds,layers,options);

Tips

  • The randomPatchExtractionDatastore expects that the read operation on the input datastores returns arrays of the same size.

  • If the input datastore is an ImageDatastore, then the values in its Labels property are ignored by the randomPatchExtractionDatastore.

  • To visualize 2-D data in a randomPatchExtractionDatastore, you can use the preview function, which returns a subset of data in a table. Visualize all of the patches in the same figure by using the montage function. For example, this code displays a preview of image patches from a randomPatchExtractionDatastore called patchds.

    minibatch = preview(patchds);
    montage(minibatch.InputImage)

Version History

Introduced in R2018b