Following a comment by Meromorphic on the discourse here, this PR implements the matrix exponential function `expm`. It does so in a new `stdlib_linalg_matrix_functions` submodule, which might eventually accommodate implementations of other matrix functions (e.g. `logm`, `sqrtm`, `cosm`, etc.).

## Proposed interface
```
E = expm(A [, order, err])
```

where `A` is the input `n x n` matrix, `order` (optional) is the order of the Pade approximation, and `err` (optional) is of type `linalg_state_type` for error handling.

## Key facts
The implementation makes use of a standard "scaling and squaring" approach to compute the Pade approximation with a fixed order. It is adapted from the implementation by John Burkardt.
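For illustration, the classic scaling-and-squaring scheme with a fixed-order diagonal Pade approximant can be sketched as follows. This is a Python/NumPy transliteration of the textbook algorithm (as in Burkardt's reference codes), not the Fortran code in this PR, and the default `order` below is arbitrary:

```python
import numpy as np

def expm_pade(A, order=10):
    """Matrix exponential via a fixed-order Pade approximant with
    scaling and squaring (illustrative sketch only)."""
    n = A.shape[0]
    # Scaling: divide A by 2**s so the Pade approximant is evaluated
    # on a matrix of small norm.
    norm = np.linalg.norm(A, ord=np.inf)
    s = max(0, int(np.floor(np.log2(norm))) + 2) if norm > 0 else 0
    A = A / 2.0**s

    # Diagonal Pade approximant: exp(A) ~ D^{-1} E, built term by term.
    X = A.copy()
    c = 0.5
    E = np.eye(n) + c * A      # numerator polynomial in A
    D = np.eye(n) - c * A      # denominator polynomial in A
    sign = 1.0                 # denominator terms alternate as (-1)**k
    for k in range(2, order + 1):
        c = c * (order - k + 1) / (k * (2 * order - k + 1))
        X = A @ X
        E = E + c * X
        D = D + sign * c * X
        sign = -sign
    E = np.linalg.solve(D, E)

    # Squaring: undo the scaling, exp(A) = exp(A / 2**s) ** (2**s).
    for _ in range(s):
        E = E @ E
    return E
```

The squaring loop at the end is the "simple loop involving `matmul`" mentioned under possible improvements below.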
## Progress
## Possible improvements

The implementation is fully working and validated. Computational performance and/or robustness could potentially be improved by considering:

- Computing an "optimal" `order` for the Pade approximation based on the data. This is used for instance in `scipy.linalg.expm`. In practice, however, the fixed order has never failed me so far.
- At the end of the algorithm, the squaring step is implemented using a simple loop involving `matmul`. Additional performance could be gained by implementing a fast `matrix_power` function (see `np.linalg.matrix_power` for instance). This is irrelevant for this particular task since the scaling factor is chosen as a power of 2, but it might still be a useful function in the grand scheme of things.
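For reference, a fast `matrix_power` of the kind suggested above amounts to binary (repeated-squaring) exponentiation, costing O(log k) matrix products instead of k − 1. A Python/NumPy sketch (the name and signature are hypothetical, not part of this PR):

```python
import numpy as np

def matrix_power(A, k):
    """A**k for integer k >= 0 via binary exponentiation: repeatedly
    square the base and fold it into the result when the bit of k is set."""
    result = np.eye(A.shape[0], dtype=A.dtype)
    base = A.copy()
    while k > 0:
        if k & 1:              # current bit of k is set
            result = result @ base
        k >>= 1
        if k:                  # square the base for the next bit
            base = base @ base
    return result
```

When the exponent is exactly a power of 2, as in the squaring step of `expm`, this reduces to the simple squaring loop already used, which is why the PR notes it is irrelevant there.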
cc @perazz, @jvdp1, @jalvesz