
test(linalg): Add more tests for compute_gramian#541

Merged
ValerianRey merged 2 commits into main from add-compute_gramian-tests
Jan 30, 2026

Conversation

@ValerianRey
Contributor

@ValerianRey ValerianRey commented Jan 29, 2026

We have a problem in test_compute_gramian_matrix_input_2. It's basically computing tensordot(t, t, dims=2) instead of tensordot(t, t.T, dims=2). I think there is an error in the logic of compute_gramian in:

    indices_source = list(range(t.ndim - contracted_dims))
    indices_dest = list(range(t.ndim - 1, contracted_dims - 1, -1))

For the case in test_compute_gramian_matrix_input_2 (matrix input, 2 contracted dims), this gives indices_source=[] and indices_dest=[], so no transposition happens.
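A quick sketch of that index computation, reproducing the two quoted lines for a matrix input. The variable names ndim and contracted_dims are stand-ins for the actual arguments of compute_gramian, not its real signature:

```python
# Matrix input (ndim = 2) with both dims contracted, as in
# test_compute_gramian_matrix_input_2.
ndim = 2
contracted_dims = 2

indices_source = list(range(ndim - contracted_dims))           # range(0)       -> []
indices_dest = list(range(ndim - 1, contracted_dims - 1, -1))  # range(1, 1, -1) -> []

print(indices_source, indices_dest)  # [] [] -> the dim move is a no-op
```

Both lists come out empty, so whatever dimension-moving call they feed does nothing in this case.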

@ValerianRey ValerianRey added cc: test Conventional commit type for changes to tests. package: linalg labels Jan 29, 2026
@codecov

codecov bot commented Jan 29, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.


@PierreQuinton
Contributor

PierreQuinton commented Jan 29, 2026

Will Github ever provide a good LaTeX interpreter?

A Jacobian in $\mathbb R^{m\times n}$ can be viewed as a linear transformation from $\mathbb R^n$ to $\mathbb R^m$. Alternatively, it can be viewed as a linear transformation from $\mathbb R$ to $\mathbb R^{m\times n}$ (scalar multiplication $s\mapsto s J$), and going the other way around, it can be viewed as a linear transformation from $\mathbb R^{m\times n}$ to $\mathbb R$:
$A\in\mathbb R^{n\times m} \mapsto \sum_{i,j} J_{i,j} A_{j,i}$
This is the only meaningful way of defining this transformation, and if we apply it to $J^T$, we get
$\sum_{i,j} J_{i,j} J^T_{j,i} = \sum_{i,j} J_{i,j}^2$
For

$$J=\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$$

this yields $1^2+2^2+3^2+4^2=1+4+9+16=30$.
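The arithmetic can be checked numerically. This sketch uses numpy rather than torch, relying on np.tensordot following the same contraction convention as torch.tensordot (trailing dims of the first argument against leading dims of the second, both read left to right):

```python
import numpy as np

J = np.array([[1.0, 2.0], [3.0, 4.0]])

# sum_{i,j} J_ij * (J^T)_{j,i} = sum_{i,j} J_ij^2
full_contraction = np.tensordot(J, J, axes=2)  # contracts both dims of J with both dims of J

print(full_contraction)  # 30.0, i.e. 1 + 4 + 9 + 16
```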

@ValerianRey ValerianRey changed the title to test(linalg): Add more tests for compute_gramian Jan 29, 2026
@ValerianRey
Contributor Author

Thanks for explaining, that makes more sense to me now!

it can be viewed as a linear transformation from $\mathbb R^{m\times n}$ to $\mathbb R$

I think you meant $\mathbb R^{n\times m}$ here. $J^T$ can be viewed as a linear transformation from $\mathbb R^{m\times n}$ to $\mathbb R$ though. So we have our quadratic form ($JJ^T$) doing:
$s \in \mathbb R$ --(apply $J$)--> $sJ \in \mathbb R^{m\times n}$ --(apply $J^T$)--> $sJJ^T \in \mathbb R$ --(right-multiply by s)--> $sJJ^Ts \in \mathbb R$.
I think that was just a typo because everything else works in what you wrote.
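That chain can be traced numerically. A sketch in numpy (np.tensordot shares torch.tensordot's contraction convention), reusing the $J$ from the previous comment:

```python
import numpy as np

J = np.array([[1.0, 2.0], [3.0, 4.0]])
s = 2.0

sJ = s * J                                # apply J:  s -> sJ
contracted = np.tensordot(sJ, J, axes=2)  # apply J^T as a functional: sum_{i,j} (sJ)_ij * J_ij = s * 30
quadratic = contracted * s                # right-multiply by s: s * 30 * s

print(quadratic)  # 120.0
```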

@ValerianRey
Contributor Author

ValerianRey commented Jan 30, 2026

Will Github ever provide a good LaTeX interpreter?

Apparently there are no double-dollar display formulas, and \begin{bmatrix} doesn't work in inline (single-dollar) formulas. Also, single-dollar formulas sometimes fail to render for no apparent reason. However, you can use a ```math fenced block.

For example:
```math
A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}
```
gives

$$A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$$

@PierreQuinton

I edited your comment for proper formatting.

@ValerianRey
Contributor Author

ValerianRey commented Jan 30, 2026

I think I was also quite confused by how torch.tensordot works. When calling torch.tensordot on tensors A and B with an integer number d of dimensions to contract, the d trailing dimensions of A (read left to right) and the d leading dimensions of B (also read left to right) are contracted pairwise. I would have expected the leading dimensions of B to be contracted from right to left.

Example (from their documentation):

```python
a = torch.randn(3, 4, 5)
b = torch.randn(4, 5, 6)
c = torch.tensordot(a, b, dims=2)
# tensor([[ 8.3504, -2.5436,  6.2922,  2.7556, -1.0732,  3.2741],
#         [ 3.3161,  0.0704,  5.0187, -0.4079, -4.3126,  4.8744],
#         [ 0.8223,  3.9445,  3.2168, -0.2400,  3.4117,  1.7780]])
```

I would have expected the correct way to do this to be:

```python
a = torch.randn(3, 4, 5)
b = torch.randn(5, 4, 6)  # 4 and 5 swapped here
c = torch.tensordot(a, b, dims=2)
```

But this is the whole reason why we implemented compute_gramian (to move dims before calling tensordot) rather than simply using torch.tensordot.
So in the end it all makes sense IMO.
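The contraction convention described above can be sanity-checked against einsum. This sketch uses numpy only to stay dependency-free; np.tensordot follows the same rule as torch.tensordot:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((3, 4, 5))
b = rng.standard_normal((4, 5, 6))

# axes=2: a's trailing dims (4, 5) contract with b's leading dims (4, 5),
# both read left to right -- i.e. einsum 'ijk,jkl->il'.
c = np.tensordot(a, b, axes=2)
assert c.shape == (3, 6)
assert np.allclose(c, np.einsum("ijk,jkl->il", a, b))
```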

@ValerianRey ValerianRey merged commit a55afca into main Jan 30, 2026
14 checks passed
@ValerianRey ValerianRey deleted the add-compute_gramian-tests branch January 30, 2026 01:44
