eigs with Extended Capabilities

Marko on 9 Dec 2021
Commented: Andrew Knyazev on 22 Dec 2021
Hello Community,
Extended Capabilities
Thread-Based Environment
Run code in the background using MATLAB® backgroundPool or accelerate code with Parallel Computing Toolbox™ ThreadPool.
This function fully supports thread-based environments. For more information, see Run MATLAB Functions in Thread-Based Environment.
Distributed Arrays
Partition large arrays across the combined memory of your cluster using Parallel Computing Toolbox™.
This function fully supports distributed arrays. For more information, see Run MATLAB Functions with Distributed Arrays (Parallel Computing Toolbox).
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Could either of these options increase the number of cores used for a single eigs calculation?
I have to solve a generalised eigenvalue problem, and the size of the eigenvector q is numel(q) = 16384.
When I use eigs, it only uses one core.
Is it possible to increase the number of cores used via these extended capabilities?
Best Regards,
MJ

Answers (1)

Christine Tobler on 9 Dec 2021
The first of these would only be useful if you need to apply EIGS to many problems in parallel, in which case each of those problems could be solved in its own thread.
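A minimal sketch of that pattern, assuming the independent problems are stored in cell arrays Acell and Bcell (these names and the eigs arguments are placeholders, not part of the original question):
for k = numel(Acell):-1:1
    % submit each independent problem to the thread-based background pool
    F(k) = parfeval(backgroundPool, @eigs, 1, Acell{k}, Bcell{k}, 50, 1);
end
d = cell(numel(F), 1);
for k = 1:numel(F)
    d{k} = fetchOutputs(F(k));   % collect the 50 eigenvalues of problem k
end
With Parallel Computing Toolbox, a thread pool created by parpool("Threads") could be passed to parfeval instead of backgroundPool.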
The second will usually make sense if you have a parallel cluster over which the matrix can be distributed, which I'd expect to pay off only for matrices larger than 16384-by-16384.
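A rough sketch of that route, assuming a process-based parallel pool is already open and A and B are the sparse matrices of the pencil (whether this is actually faster depends on the problem and the cluster):
Ad = distributed(A);        % partition A across the workers' combined memory
Bd = distributed(B);
d  = eigs(Ad, Bd, 50, 1);   % same eigs call, now operating on distributed data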
EIGS isn't always running single-threaded; it depends quite a lot on the problem being passed in. How much time is EIGS taking to solve your problem? Could you run your EIGS command with the "Display" option set to true and post the results here?
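For reference, the option is passed as a name-value argument, for example (A, B, 50, 1 stand in for whatever the actual call uses):
d = eigs(A, B, 50, 1, 'Display', true);   % prints Krylov-Schur diagnostics like those below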
  2 Comments
Marko on 9 Dec 2021
Hello Christine,
Here are the results with the "Display" option:
=== Generalized eigenvalue problem A*x = lambda*B*x ===
The eigenvalue problem is real non-symmetric.
Matrix B is symmetric positive (semi-)definite.
Computing 50 eigenvalues closest to 1.
Parameters passed to Krylov-Schur method:
Maximum number of iterations: 500
Tolerance: 1e-14
Subspace Dimension: 150
Find eigenvalues of R*(A - sigma*B)\(R'*y) = mu*y, with y = R*x, B = R'*R.
No need to compute R, as it is only used implicitly.
Compute decomposition of (A - sigma*B)...
--- Start of Krylov-Schur method ---
Iteration 1: 49 of 50 eigenvalues converged. Smallest non-converged residual 1.9e-11 (tolerance 1.0e-14).
Iteration 2: 50 of 50 eigenvalues converged.
solving time: 80.5s, k: 0.000, sigma: -0.222, omega: 0.000
=== Generalized eigenvalue problem A*x = lambda*B*x ===
The eigenvalue problem is complex non-Hermitian.
Matrix B is Hermitian positive (semi-)definite.
Computing 50 eigenvalues closest to 1.
Parameters passed to Krylov-Schur method:
Maximum number of iterations: 500
Tolerance: 1e-14
Subspace Dimension: 150
Find eigenvalues of R*(A - sigma*B)\(R'*y) = mu*y, with y = R*x, B = R'*R.
No need to compute R, as it is only used implicitly.
Compute decomposition of (A - sigma*B)...
--- Start of Krylov-Schur method ---
Iteration 1: 44 of 50 eigenvalues converged. Smallest non-converged residual 1.4e-14 (tolerance 1.0e-14).
Iteration 2: 50 of 50 eigenvalues converged.
solving time: 157.0s, k: 0.125, sigma: -0.077, omega: 1.919
First, I have many eigs calls (on the order of several thousand), used for scanning a parameter space. Before I read about the extended capabilities, I solved that with parfor, because those calls can run independently of each other.
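A rough sketch of that independent parameter scan, with sigmaList and buildMatrices as purely hypothetical placeholders for the actual setup:
sigmaList = linspace(-1, 1, 1000);          % hypothetical parameter grid
d = cell(numel(sigmaList), 1);
parfor k = 1:numel(sigmaList)
    [A, B] = buildMatrices(sigmaList(k));   % placeholder for building the pencil
    d{k} = eigs(A, B, 50, 1);               % independent solves run in parallel
end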
But my next task requires solving eigs as fast as possible. (Those eigs calls are not independent; the result of each eigs call depends on the previous one.)
So if I understand you correctly, distributed arrays should help me solve the eigs problem in less time than the usual way. Is that correct?
(In the future the matrix will be at least 32768-by-32768.)
Andrew Knyazev on 22 Dec 2021
The open-source SLEPc library (https://slepc.upv.es/) is designed for large-scale parallel MPI/OpenMP eigenvalue computations and has a MATLAB interface.
