
Experiment Manager

Design and run experiments by using your MATLAB code

Since R2023b

Description

You can use the Experiment Manager app to create experiments to run your MATLAB® code using various parameter values and compare the results. For example, you can use Experiment Manager to explore how the solution to a system of differential equations responds to different coefficient values or how it evolves from different initial conditions.

Experiment Manager provides visualizations, filters, and annotations to help you manage your experiment results and record your observations. To improve reproducibility, Experiment Manager stores a copy of the experiment definition every time that you run an experiment. You can access past experiment definitions to keep track of the combinations of parameters or hyperparameters that produce each of your results.

Experiment Manager organizes your experiments and results in projects.

  • You can store several experiments in the same project.

  • Each experiment contains a set of results for each time that you run the experiment.

  • Each set of results consists of one or more trials, each corresponding to a different combination of parameters or hyperparameters.

The Experiment Browser panel displays the hierarchy of experiments and results in a project. The icon next to the experiment name indicates its type.

  • Orange round-bottom flask icon — General-purpose experiment that uses a user-authored experiment function

For more information about setting up experiments, watch How to Set Up and Manage Experiments in MATLAB.

This page contains general information about using Experiment Manager. For information about built-in and custom training experiments, see Experiment Manager (Deep Learning Toolbox). For information about using Experiment Manager with the Classification Learner and Regression Learner apps, see Experiment Manager (Statistics and Machine Learning Toolbox).

Required Products

  • Use Deep Learning Toolbox™ to run built-in or custom training experiments for deep learning and to view confusion matrices for these experiments.

  • Use Statistics and Machine Learning Toolbox™ to run custom training experiments for machine learning and experiments that use Bayesian optimization.

  • Use Parallel Computing Toolbox™ to run multiple trials at the same time or a single trial on multiple GPUs, on a cluster, or in the cloud. For more information, see Run Experiments in Parallel.

  • Use MATLAB Parallel Server™ to offload experiments as batch jobs in a remote cluster. For more information, see Offload Experiments as Batch Jobs to a Cluster.

Experiment Manager app

Open the Experiment Manager App

  • MATLAB Toolstrip: On the Apps tab, under MATLAB, click the Experiment Manager icon.

  • MATLAB command prompt: Enter experimentManager.

Examples


This example shows how to convert your existing MATLAB code into an experiment that you can run using the Experiment Manager app.

This script creates a histogram that shows that the 13th day of the month is more likely to fall on a Friday than on any other day of the week. For more information, see Chapter 3 of Experiments with MATLAB by Cleve Moler.

date = 13;

% Day-of-week labels and counters (weekday numbering: 1 = Sunday).
daysOfWeek = ["Sunday","Monday","Tuesday","Wednesday", ...
    "Thursday","Friday","Saturday"];
values = zeros(1,7);

% Count how often the given date falls on each day of the week.
for year = 1601:2000
    for month = 1:12
        d = datetime(year,month,date);
        w = weekday(d);
        values(w) = values(w) + 1;
    end
end

[minValue,maxValue] = bounds(values);
avgValue = mean(values);

% Plot the counts, with a horizontal line at the average value.
figure(Name="Histogram")
bar(values)
axis([0 8 floor((minValue-1)/10)*10 ceil((maxValue+1)/10)*10])
line([0 8],[avgValue avgValue],linewidth=4,color="black")
set(gca,xticklabel=daysOfWeek)

You can convert this script into an experiment by following these steps. Alternatively, open the example to skip the conversion steps and load a preconfigured experiment that runs a converted version of the script.

1. Close any open projects and open the Experiment Manager app.

2. A dialog box provides links to the getting started tutorials and your recent projects, as well as buttons to create a new project or open an example from the documentation. Under New, select Blank Project.

3. If you have Deep Learning Toolbox or Statistics and Machine Learning Toolbox, Experiment Manager opens a second dialog box that lists several templates to support your AI workflows. Under Blank Experiments, select General Purpose.

4. Specify the name and location for the new project. Experiment Manager opens a new experiment in the project. The experiment definition tab displays the description, parameters, and experiment function that define the experiment. For more information, see Configure General-Purpose Experiment.

5. In the Description field, enter a description of the experiment:

Count the number of times that a given day and month falls on each day of the week.
To scan all months, set the value of Month to 0.

6. Under Parameters, add a parameter called Day with a value of 21 and a parameter called Month with a value of 0:3:12.

7. Under Experiment Function, click Edit. A blank experiment function called Experiment1Function1 opens in the MATLAB Editor. The experiment function has an input argument called params and two output arguments called output1 and output2.

8. Copy and paste your MATLAB code into the body of the experiment function.

9. Replace the hard-coded value for the variable date with the expression params.Day. This expression uses dot notation to access the parameter values that you specified in step 6.

date = params.Day;

10. Add a new variable called monthRange that accesses the value of the parameter Month. If this value equals zero, set monthRange to the vector 1:12.

monthRange = params.Month;
if monthRange == 0
    monthRange = 1:12;
end

11. Use monthRange as the range for the for loop with counter month. Additionally, use the day function to account for months with fewer than 31 days.

for year = 1601:2000
    for month = monthRange
        d = datetime(year,month,date);
        if day(d) == date
            w = weekday(d);
            values(w) = values(w) + 1;
        end
    end
end

12. Rename the output arguments to MostLikelyDay and LeastLikelyDay. Use this code to compute these outputs after you calculate the values of maxValue, minValue, and avgValue:

maxIndex = ~(maxValue-values);    % 1 where the count equals the maximum, 0 elsewhere
maxIndex = maxIndex.*(1:1:7);     % replace each 1 with its day-of-week index
maxIndex = nonzeros(maxIndex)';   % keep only the matching indices
MostLikelyDay = join(daysOfWeek(maxIndex));
 
minIndex = ~(values-minValue);    % 1 where the count equals the minimum, 0 elsewhere
minIndex = minIndex.*(1:1:7);
minIndex = nonzeros(minIndex)';
LeastLikelyDay = join(daysOfWeek(minIndex));

After these steps, your experiment function contains this code:

function [MostLikelyDay,LeastLikelyDay] = Experiment1Function1(params)

date = params.Day;
 
monthRange = params.Month;
if monthRange == 0
    monthRange = 1:12;
end
 
daysOfWeek = ["Sunday","Monday","Tuesday","Wednesday", ...
    "Thursday","Friday","Saturday"];
values = zeros(1,7);
 
for year = 1601:2000
    for month = monthRange
        d = datetime(year,month,date);
        if day(d) == date
            w = weekday(d);
            values(w) = values(w) + 1;
        end
    end
end
 
[minValue,maxValue] = bounds(values);
avgValue = mean(values);
 
maxIndex = ~(maxValue-values);    % 1 where the count equals the maximum, 0 elsewhere
maxIndex = maxIndex.*(1:1:7);     % replace each 1 with its day-of-week index
maxIndex = nonzeros(maxIndex)';   % keep only the matching indices
MostLikelyDay = join(daysOfWeek(maxIndex));
 
minIndex = ~(values-minValue);    % 1 where the count equals the minimum, 0 elsewhere
minIndex = minIndex.*(1:1:7);
minIndex = nonzeros(minIndex)';
LeastLikelyDay = join(daysOfWeek(minIndex));
 
figure(Name="Histogram")
bar(values)
axis([0 8 floor((minValue-1)/10)*10 ceil((maxValue+1)/10)*10])
line([0 8],[avgValue avgValue],linewidth=4,color="black")
set(gca,xticklabel=daysOfWeek)
 
end
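The index computations in the function above can also be written more directly with logical indexing. This equivalent sketch uses sample counts so that it runs on its own; the counts are illustrative values, not output from the app:

```matlab
% Equivalent sketch using logical indexing instead of the
% ~(maxValue-values) arithmetic trick. The counts are sample values.
daysOfWeek = ["Sunday","Monday","Tuesday","Wednesday", ...
    "Thursday","Friday","Saturday"];
values = [687 685 685 687 684 688 684];   % example weekday counts
[minValue,maxValue] = bounds(values);

MostLikelyDay = join(daysOfWeek(values == maxValue));    % "Friday"
LeastLikelyDay = join(daysOfWeek(values == minValue));   % "Thursday Saturday"
```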

To run the experiment, on the Experiment Manager toolstrip, click Run. Experiment Manager runs the experiment function five times, each time using a different combination of parameter values. A table of results displays the output values for each trial.
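The trial count follows from the sweep over parameter values. A sketch, assuming an exhaustive sweep takes the Cartesian product of the values you enter:

```matlab
% Day contributes one value and Month contributes five values,
% so the sweep produces 1*5 = 5 trials.
dayValues = 21;
monthValues = 0:3:12;                              % [0 3 6 9 12]
numTrials = numel(dayValues)*numel(monthValues)    % 5
```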

To display a histogram for each completed trial, under Review Results, click Histogram.

The results of the experiment show that the 21st day of the month is more likely to fall on a Saturday than on any other day of the week. However, the summer solstice, June 21, is more likely to fall on a Sunday, Tuesday, or Thursday.

Set up a general-purpose experiment using Experiment Manager. General-purpose experiments use a user-authored experiment function and support workflows that do not require Deep Learning Toolbox or Statistics and Machine Learning Toolbox.

Open the Experiment Manager app. In the dialog box, you can create a new project or open an example from the documentation. Under New, select Blank Project.

In the next dialog box, you can open a blank experiment template or one of the preconfigured experiment templates to support your workflow. Under Blank Experiments, select the blank template General Purpose.

The experiment is a general-purpose experiment that uses a user-authored experiment function, indicated by the orange round-bottom flask icon.

The experiment definition tab displays the description, parameters, and experiment function that define the experiment. When you start with a blank experiment template, you must configure each of these settings manually.

Experiment definition tab showing the default configuration for a general-purpose experiment

Configure the experiment parameters.

  • Description — Enter a description of the experiment.

  • Parameters — Specify the parameters for the experiment. Click Add to add a row to the table, and enter the names and values of the parameters. Experiment Manager runs multiple trials of your experiment using a different combination of parameters for each trial. For information about the requirements for parameter values, see Exhaustive Sweep.

    For example, for Experiment with Predator-Prey Equations, the parameters Alpha and Beta specify the values of the coefficients of the Lotka-Volterra equations, and the parameters RabbitsInitial and FoxesInitial specify the initial population of rabbits and foxes.

    Parameters section showing four sets of parameter names and values

  • Experiment Function — Click Edit to open and modify the function that the experiment uses in the MATLAB Editor. The input to the experiment function is a structure with fields from the Parameters table. The experiment function can return multiple outputs. The names of the output variables appear as column headers in the results table. Each output value must be a numeric, logical, or string scalar.

    For example, for Experiment with Predator-Prey Equations, the experiment function defines differential equations that describe the relationship between two competing populations.

    The experiment function also creates two visualizations, Population Size and Phase Plane, to compare the populations of rabbits and foxes over time.

    Experiment function section showing the function name LotkaVolterraFunction
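A minimal sketch of a user-authored experiment function can make these requirements concrete. The function name, the parameter names Alpha and Beta, and the computation are hypothetical, not the documented Lotka-Volterra code:

```matlab
function [peakValue,isStable] = ExampleExperimentFunction(params)
% params is a structure with one field per row of the Parameters table.
    t = linspace(0,10,1000);
    y = exp(-params.Beta*t).*sin(params.Alpha*t);   % toy damped oscillation

    peakValue = max(y);           % numeric scalar output
    isStable = params.Beta > 0;   % logical scalar output

    % A named figure becomes a visualization in the Review Results gallery.
    figure(Name="Response")
    plot(t,y)
end
```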

After you configure the experiment, run the experiment and compare results.

You can decrease the run time of some experiments if you have Parallel Computing Toolbox or MATLAB Parallel Server.

By default, Experiment Manager runs one trial at a time. If you have Parallel Computing Toolbox, you can run multiple trials at the same time or run a single trial on multiple GPUs, on a cluster, or in the cloud. If you have MATLAB Parallel Server, you can also offload experiments as batch jobs in a remote cluster so that you can continue working or close your MATLAB session while your experiment runs.

In the Experiment Manager toolstrip, in the Execution section, use the Mode list to specify an execution mode. If you select the Batch Sequential or Batch Simultaneous execution mode, use the Cluster list and Pool Size field in the toolstrip to specify your cluster and pool size.

For more information, see Run Experiments in Parallel and Offload Experiments as Batch Jobs to a Cluster.

Identify, add, or remove files required by your experiment using the Supporting Files section of the experiment definition tab.

In most cases, Experiment Manager automatically detects required files when you run the experiment. The Detected Files list in the Supporting Files section of the experiment definition tab updates after the experiment is run. The list displays the relative path for files in the project folder and the full path for files outside of the project folder.

If Experiment Manager does not detect some of your supporting files, your trials produce an error. In that case, you can manually select files to include by clicking Add in the Additional Files section. You can also update the Detected Files list by clicking Refresh.

Supporting Files section containing a table with the paths of detected files and a table with the paths of additional files

To interrupt an experiment, in the Experiment Manager toolstrip, click the Stop button or the Cancel button.

  • Stop — Mark running trials as Stopped and save the results. When the experiment stops, you can display the training plot and export the training results for these trials.

  • Cancel — Mark running trials as Canceled and discard the results. You cannot display the training plot or export the training results for these trials.

Both options save the results of any completed trials and cancel any queued trials. Typically, Cancel is faster than Stop.

Instead of interrupting the entire experiment, you can stop or cancel an individual trial. In the Actions column, click Stop for a trial that is running or click Cancel for a queued trial.

Actions column of the results table showing a Stop button for a running trial

To reduce the size of your experiments, discard the results and visualizations of any trial that is no longer relevant. In the Actions column of the results table, click the Discard button for the trial.

When the experiment run is complete, you can restart a trial that you stopped, canceled, or discarded. In the Actions column of the results table, click the Restart button for the trial.

Actions column of the results table showing a Restart button for a stopped trial

Alternatively, you can restart multiple trials. In the Experiment Manager toolstrip, open the Restart list, select one or more restarting criteria, and click the Restart button.

Note

Stop, cancel, and restart options are not available for all experiment types, strategies, or execution modes.

Experiment Manager runs multiple trials of your experiment, using a different combination of parameters or hyperparameters for each trial. In the experiment result tab, a table displays the parameter or hyperparameter values and the output or metric values for each trial. The columns in the table depend on the experiment type.

To compare your results, you can reorder or display a subset of trials by sorting or filtering a column of values.

  • To reorder trials in the results table, point to the header of a column of values by which you want to sort, click the triangle icon, and select the sorting order.

    Results table showing the drop-down list for the output Denominator column. The list options are: Sort in Ascending Order, Sort in Descending Order, and Show Filter.

  • To display a subset of trials, filter the results table: on the app toolstrip, click Filters. The Filters panel shows a histogram for each column in the results table that has numeric values and a text filter for each column that has character array or string values. When you select a range in a filter, the results table shows only the trials with a value in that range. To restore all of the trials in the results table, close the experiment result tab and reopen the results from the Experiment Browser panel.

    Filters panel in Experiment Manager. A string filter for the value "Newton" is applied for the Model parameter, and a numeric filter for values less than 54.7917 is applied for the Theta parameter.

You can also record observations about the results of your experiment by adding an annotation.

  1. Right-click a cell in the results table and select Add Annotation. Alternatively, select a cell in the results table and, on the Experiment Manager toolstrip, select Annotations > Add Annotation.

    Results table showing the drop-down list for a cell in the output Denominator column. The list includes the Add Annotation option.

  2. Then, in the Annotations panel, enter your observations in the text box. You can add multiple annotations for each cell in the results table.

    Annotations panel showing an annotation for the Period parameter in Trial 1. The annotation text is "Largest period."

  3. To sort annotations, use the Sort By drop-down list. You can sort by creation time or trial number. To highlight the cell that corresponds to an annotation, click the link above the annotation. To delete an annotation, click the Discard button to the right of the annotation.

You can analyze the results table for an experiment using visualizations.

For general-purpose and custom training experiments:

  1. To add visualizations for your experiment, create a figure in the experiment function (for general-purpose experiments) or in the setup or training function (for custom training experiments). Then, specify the name of the visualization by setting the Name property of the figure. If you do not name the figure, Experiment Manager derives the name of the visualization from the axes or figure title.

    For example, in the experiment function for Experiment with Predator-Prey Equations, create a visualization for population size.

    figure(Name="Population Size");
    plot(t,y)
    title("Population v. Time")
    xlabel("Time")
    ylabel("Population")
    legend("Rabbits","Foxes")

  2. After running the experiment, visualize the results. In the Experiment Browser panel, double-click the name of the set of results you want to inspect.

    Then, select a trial in the results table and click the button for a visualization in the Review Results gallery in the Experiment Manager toolstrip. A Visualizations panel appears containing the visualization. To update the visualization for a different trial, select the trial in the results table.

    For example, for a result of Experiment with Predator-Prey Equations, visualize the population size for a trial.

    In the Review Results gallery, the Population Size figure option is selected. The Visualizations panel shows a plot of Population v. Time for the trial selected in the results table.

  3. Record your observations by adding annotations to the results table.

For built-in training experiments, after running the experiment, visualize the results.

  1. In the Experiment Browser panel, double-click the name of the set of results you want to inspect.

  2. Select a trial in the results table and click the button for the training plot or confusion matrix in the Review Results gallery in the Experiment Manager toolstrip. A Visualizations panel appears containing the visualization. To update the visualization for a different trial, select the trial in the results table.

  3. Record your observations by adding annotations to the results table.

Experiment Manager stores a read-only copy of the parameter or hyperparameter values and MATLAB code that produce each set of results for your experiment. You can run an experiment multiple times, each time using a different version of your code but always using the same function name. You can access and revert to an earlier version of your code by opening the experiment source for the earlier result. To see this information:

  1. In the Experiment Browser panel, double-click the name of the set of results you want to inspect.

  2. In the experiment result tab, click View Experiment Source.

  3. In the experiment source tab, inspect the experiment description, parameter or hyperparameter values, and functions that produced the set of results.

  4. To open files located in the project folder that are used by the current result, click the links at the bottom of the tab. These files are read-only, but you can copy them to the project folder, rerun the experiment, and reproduce your results.

For example, for a result of Experiment with Predator-Prey Equations, inspect the experiment source.

Experiment source tab displaying a read-only view of the definition of a general-purpose experiment result


Tips

  • To reduce the size of your experiments, discard the results and visualizations of any trial that is no longer relevant. In the Actions column of the results table, click the Discard button for the trial.

  • If you have Deep Learning Toolbox or Statistics and Machine Learning Toolbox, you can use Experiment Manager for your AI workflows. For more information, see Experiment Manager (Deep Learning Toolbox) or Experiment Manager (Statistics and Machine Learning Toolbox).

  • In your experiment function, access the parameter values using dot notation. For more information, see Structure Arrays.

  • To navigate Experiment Manager when using a mouse is not an option, use keyboard shortcuts. For more information, see Keyboard Shortcuts for Experiment Manager.
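The dot-notation tip can be illustrated with a short sketch; the struct fields Threshold and Iterations are hypothetical parameter names, not ones from the documented examples:

```matlab
% Hypothetical params structure, as Experiment Manager passes it to your
% experiment function (one field per row of the Parameters table).
params.Threshold = 0.5;
params.Iterations = 100;

% Access parameter values with dot notation.
limit = params.Threshold * params.Iterations;   % 50
```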

Version History

Introduced in R2023b
