
Optimisation of array to smaller size

I have an array of size 30x20 with non-linear breakpoints (rows and columns) that I want to convert to a smaller 20x15 array, where the new breakpoints for the smaller array are unknown and need to be calculated.
I have started by creating a linearly spaced set of new breakpoints, which is OK as an initial guess, but my purpose here is to minimise the error between the larger and smaller arrays. Given that the original array's breakpoints are non-linear, the assumption that the new array can be linearly spaced is not ideal and does not minimise the error. My current attempt looks like this:
% Original map and its breakpoints, read from the app table
original_map = cell2mat(app.table_handle.Data);
original_xbp = str2double(string(app.table_handle.ColumnName));
original_ybp = str2double(string(app.table_handle.RowName));
rows = size(app.table_handle.Data, 1);
cols = size(app.table_handle.Data, 2);
% Target size, taken from the second (smaller) table
new_rows = size(app.table2_handle.Data, 1);
new_cols = size(app.table2_handle.Data, 2);
% Resample the map onto a linearly spaced index grid of the new size
[col_indices, row_indices] = meshgrid(1:cols, 1:rows);
[new_col_indices, new_row_indices] = meshgrid(linspace(1, cols, new_cols), linspace(1, rows, new_rows));
reduced_map = interp2(col_indices, row_indices, original_map, new_col_indices, new_row_indices, 'cubic');
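As a baseline, one way to quantify how much this linearly spaced reduction loses is to interpolate the reduced map back onto the original index grid and compare it with the original map. A minimal sketch, reusing the variables above:
% Sketch: reconstruct the map on the original index grid from the reduced
% map and measure the RMS difference against the original.
reconstructed = interp2(new_col_indices, new_row_indices, reduced_map, ...
                        col_indices, row_indices, 'cubic');
baseline_rmse = sqrt(mean((original_map(:) - reconstructed(:)).^2));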
Would it be possible to optimise the breakpoint values to minimise the error between the original array and the new array, and then interpolate the original array values onto those new breakpoints? I tried to use fminsearch but ran into problems because the X and Y breakpoint arrays are not the same size.
For reference, this is my attempt using fminsearch:
function err = new_function(new_xbp, new_ybp, original_map, original_xbp, original_ybp)
% Interpolate the original map onto the candidate breakpoints
[X, Y] = meshgrid(new_xbp, new_ybp);
new_map = interp2(original_xbp, original_ybp, original_map, X, Y);
% Penalise curvature of the reduced map (gradient2 is not a built-in
% MATLAB function; it is assumed to be a helper returning second derivatives)
[dxx, dxy, dyy] = gradient2(new_map);
err = sum(sum(dxx.^2 + dxy.^2 + dyy.^2));
end
% Original map and breakpoints from the app table
original_map = cell2mat(app.table_handle.Data);
original_xbp = str2double(string(app.table_handle.ColumnName));
original_ybp = str2double(string(app.table_handle.RowName));
new_rows = size(app.table2_handle.Data, 1);
new_cols = size(app.table2_handle.Data, 2);
% Linearly spaced initial guess for the new breakpoints
initial_xbp = linspace(original_xbp(1), original_xbp(end), new_cols);
initial_ybp = linspace(original_ybp(1), original_ybp(end), new_rows);
options = optimset('Display', 'iter');
% Optimise the X breakpoints with the Y breakpoints fixed, then the Y breakpoints
[optimal_xbp, ~, ~, output] = fminsearch(@(x) new_function(x, initial_ybp, original_map, original_xbp, original_ybp), initial_xbp, options);
[optimal_ybp, fval] = fminsearch(@(y) new_function(optimal_xbp, y, original_map, original_xbp, original_ybp), initial_ybp, options);
% Interpolate the original map onto the optimised breakpoints
[X, Y] = meshgrid(optimal_xbp, optimal_ybp);
reduced_map = interp2(original_xbp, original_ybp, original_map, X, Y);
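One way around the size mismatch is to pack both breakpoint vectors into a single decision vector and unpack them inside the objective, so fminsearch only ever sees one vector. A minimal sketch, reusing the variables above (the wrapper name packed_objective is hypothetical):
% Sketch: concatenate the X and Y breakpoints into one decision vector and
% unpack them when evaluating the objective.
x0 = [initial_xbp(:); initial_ybp(:)];
packed_objective = @(v) new_function(v(1:new_cols).', v(new_cols+1:end).', ...
                                     original_map, original_xbp, original_ybp);
[opt_v, fval] = fminsearch(packed_objective, x0, options);
optimal_xbp = opt_v(1:new_cols).';
optimal_ybp = opt_v(new_cols+1:end).';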
Any help is really appreciated!

Accepted Answer

Maneet Kaur Bagga on 10 May 2024
Hi,
As per my understanding, you want to optimize the breakpoint values to minimize the error between the original and reduced arrays, given the non-linear nature of the original breakpoints. Please refer to the workaround below.
An alternative objective function is the RMSE (root mean square error) between the original map and the reduced map. Also, given the complexity of the problem, algorithms that can handle constraints and multi-dimensional optimization problems, such as "fmincon", or "particleswarm" for global optimization, can be used.
Please refer to the following code for better understanding:
function err = objectiveFunction(newBreakpoints, originalMap, originalXbp, originalYbp, newRows, newCols)
% Split the decision vector into X and Y breakpoints
new_xbp = newBreakpoints(1:newCols);
new_ybp = newBreakpoints(newCols+1:end);
% Interpolate the original map onto the new breakpoints
[X, Y] = meshgrid(new_xbp, new_ybp);
reducedMap = interp2(originalXbp, originalYbp, originalMap, X, Y, 'cubic');
% Interpolate the reduced map back onto the original breakpoints so both
% maps have the same size, then calculate the RMSE against the original
[Xo, Yo] = meshgrid(originalXbp, originalYbp);
reconstructedMap = interp2(new_xbp, new_ybp, reducedMap, Xo, Yo, 'cubic');
err = sqrt(mean((originalMap(:) - reconstructedMap(:)).^2, 'omitnan'));
end
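To run the optimization, a call along the following lines could be used (a sketch: the bounds taken from the original breakpoint extremes and the optimoptions settings are assumptions, and the breakpoints are sorted inside the wrapper because interp2 requires monotonically increasing sample points):
% Sketch: drive the RMSE objective with fmincon.  originalMap, originalXbp,
% originalYbp, newRows and newCols correspond to the variables read from
% the app tables in the question.
x0 = [linspace(originalXbp(1), originalXbp(end), newCols), ...
      linspace(originalYbp(1), originalYbp(end), newRows)];
% Assumed bounds: keep the new breakpoints inside the original range
lb = [repmat(originalXbp(1), 1, newCols), repmat(originalYbp(1), 1, newRows)];
ub = [repmat(originalXbp(end), 1, newCols), repmat(originalYbp(end), 1, newRows)];
% Sort each half so interp2 always sees increasing breakpoints
obj = @(v) objectiveFunction([sort(v(1:newCols)), sort(v(newCols+1:end))], ...
                             originalMap, originalXbp, originalYbp, newRows, newCols);
opts = optimoptions('fmincon', 'Display', 'iter');
optBp = fmincon(obj, x0, [], [], [], [], lb, ub, [], opts);
optimal_xbp = sort(optBp(1:newCols));
optimal_ybp = sort(optBp(newCols+1:end));
% Build the final reduced map from the optimised breakpoints
[X, Y] = meshgrid(optimal_xbp, optimal_ybp);
reduced_map = interp2(originalXbp, originalYbp, originalMap, X, Y, 'cubic');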
Please refer to the MathWorks documentation for the fmincon, particleswarm, and interp2 functions for a better understanding of how they work.
Hope this helps!
  1 Comment
Jordan on 10 May 2024
Thank you for the response, I have read through the documentation and found it very useful!
I have since realised that particleswarm requires the Global Optimization Toolbox, which I don't have access to. I do have access to the standard Optimization Toolbox, though, and have since tried to use fminunc. I am having issues with initial points on my map having a flat gradient, which causes fminunc to error out, and I'm not sure how to resolve this. Do you have any ideas?
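Since fminsearch is derivative-free, it does not rely on the gradient at the initial point; a minimal sketch, assuming the hypothetical packed_objective wrapper sketched in the question above, with a small random nudge of the start point as an alternative if fminunc is kept:
% Sketch: fminsearch does not use gradients, so a flat start point is not a
% problem; a small perturbation of x0 can also help a gradient-based solver
% move off a flat region.
x0 = [initial_xbp(:); initial_ybp(:)];
x0 = x0 .* (1 + 1e-3*randn(size(x0)));   % small nudge off the flat spot
[opt_v, fval] = fminsearch(packed_objective, x0, optimset('Display', 'iter'));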


More Answers (0)

Release

R2023b
