In Matlab unittesting how to modify a property after test suite is created
I would like to modify a property (not a TestParameter) after the test suite is created, but I am getting the error "No public field myparam exists for class matlab.unittest.Test". Here is what I am doing in terms of code. First I create a class unitclass.m:
classdef unitclass < matlab.unittest.TestCase
    properties (SetAccess = public)
        myparam = 2;
    end
    properties (TestParameter)
        test_param1 = struct('Low', 0.1, 'Medium', 0.5);
        test_param2 = struct('Cold', 10, 'Hot', 200);
    end
    methods (Test)
        function testError(testcase, test_param1, test_param2)
            output = test_param1 * test_param2;
            testcase.myparam
            testcase.verifyLessThan(output, 20);
        end
    end
end
Then I create a custom unittest class SDLTest.m to modify my parameter:
classdef SDLTest < matlab.unittest.Test
    methods (Static = true)
        function this = set_myparam(this, myparam)
            for i = 1:length(this)
                this(i).myparam = myparam;
            end
        end
    end
end
When I try to execute the code
mySuite = matlab.unittest.TestSuite.fromClass(?unitclass)
results = SDLTest.set_myparam(mySuite,1)
I get the error "No public field myparam exists for class matlab.unittest.Test".
Help with this is greatly appreciated.
Thanks.
4 Comments
Andy Campbell
on 9 Dec 2014
Hello Nadjib,
Can you provide more details on why you would like to modify this property? There may be some different solutions to your underlying problem, and some will help more than others depending on precisely what you are doing. Could you provide some more details on your overall workflow and when and why you need to modify one of these properties?
Nadjib
on 10 Dec 2014
Adam
on 10 Dec 2014
In terms of why it doesn't work, I imagine it is because you have an array of matlab.unittest.Test instances. MATLAB takes the base class as the type for the array, so anything you call or set on the suite coming out of the TestSuite creation must be a function or property of the matlab.unittest.Test base class.
Since Andy works for Mathworks with a special interest in the unit testing aspect I'll leave any possible solutions to him as I am not fully clear about your setup!
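For what it's worth, you can see this typing behavior directly at the command line. A quick sketch, assuming the unitclass from the question is on the path:

```matlab
% The suite comes back as an array of the base class, not of unitclass,
% so only properties defined on matlab.unittest.Test are reachable.
mySuite = matlab.unittest.TestSuite.fromClass(?unitclass);
class(mySuite)        % should display 'matlab.unittest.Test'
mySuite(1).myparam    % errors: no such field on matlab.unittest.Test
```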
Accepted Answer
More Answers (1)
Andy Campbell
on 12 Dec 2014
Edited: Andy Campbell
on 12 Dec 2014
If you do not yet have R2014b, there are a few things you can do to at least get something close to this behavior.
One thing you could do, for example, is write a PlotDiagnostic similar to the one shown in the stack overflow answer. Note you could make it behave like Hugenoet's answer, where the plots are not automatically created by the diagnostic; instead it creates hyperlinks that generate the plots when clicked. An additional feature you could then add is a plugin that adds passing and failing listeners to different qualification events (like VerificationFailed, VerificationPassed, AssertionPassed, etc). As a result, the plugin will be notified whenever a verification/assertion/etc is encountered and will receive the PlotDiagnostic subclass instance. The plugin, after confirming it is a PlotDiagnostic, could then use the data on the PlotDiagnostic instance to produce plots whenever the plugin is installed. Running it when you want to see the plots would then look like:
>> suite = TestSuite.fromClass(?unitclass)
>> runner = TestRunner.withTextOutput;
>> runner.addPlugin(NadjibsPlottingPlugin);
>> runner.run(suite);
What would happen here is the links would be printed by one of the standard framework plugins whenever there was a failure so you could click the links to see individual plots. However, if you add your own plugin you could then also unconditionally produce the plots during the test run. You could even mimic a "log" call by sending an always passing qualification, like so:
testCase.verifyTrue(true, PlotDiagnostic(actual, expected));
More information on how to create a plugin can be found here and here. In your plugin you would need to implement one or more of the "create*" plugin methods so you could add both passed and failed listeners to verification/assertion/fatal assertion calls. Then in the listener callback you would want to check the TestDiagnostic property of the QualificationEventData and if it is one of your custom PlotDiagnostics produce the plot at that time.
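To make that concrete, a plugin along these lines might look something like the sketch below. This is only an outline: the names NadjibsPlottingPlugin, onQualification, and producePlot are placeholders (not framework API), and it assumes a PlotDiagnostic class exists with a producePlot method:

```matlab
classdef NadjibsPlottingPlugin < matlab.unittest.plugins.TestRunnerPlugin
    methods (Access = protected)
        function testCase = createTestMethodInstance(plugin, pluginData)
            % Let the superclass create the TestCase, then attach listeners
            % to both passing and failing qualification events.
            testCase = createTestMethodInstance@matlab.unittest.plugins.TestRunnerPlugin(...
                plugin, pluginData);
            testCase.addlistener('VerificationPassed', @plugin.onQualification);
            testCase.addlistener('VerificationFailed', @plugin.onQualification);
        end
    end
    methods (Access = private)
        function onQualification(plugin, ~, eventData)
            % eventData is a QualificationEventData; its TestDiagnostic can
            % be a non-scalar array, so loop over the elements.
            diags = eventData.TestDiagnostic;
            for k = 1:numel(diags)
                if isa(diags(k), 'PlotDiagnostic')
                    diags(k).producePlot();  % hypothetical plotting method
                end
            end
        end
    end
end
```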
Hope that helps and/or is instructive!
3 Comments
Nadjib
on 12 Dec 2014
Nadjib
on 18 Dec 2014
Andy Campbell
on 24 Dec 2014
Hi Nadjib,
I think you don't need the logical variable. Basically you can write the plugin to always produce the plots, and then you can control whether the plot is produced by whether or not the plugin is installed. In other words, if you don't want to see the plots you just run it using the default set of plugins. However, if you do want to see the plots you can install the plugin and the plots show up every time.
If you want to separate the decision to produce the plot from whether the test passes or fails, then you could implement your PlotDiagnostic to produce a diagnostic message that is empty or some other value, but does not produce a plot when the diagnose method is called. Then set_plots_Plugin could look at all diagnostics from verification calls and call some other method on the Diagnostic to actually produce the plots. That way, plots are not produced in failing cases (or passing cases using DiagnosticsValidationPlugin) but are produced whenever you have your special plugin installed which knows how to call the special plot method on your PlotDiagnostic. You can get to the actual diagnostic object supplied by looking at the QualificationEventData that the listener receives (Note, it can be a non-scalar array).
All of this being said, the log method will be much easier once you have the right version of MATLAB.
Does that help?