It might be outside the scope of this package, but if there were examples explained in terms of more familiar linear algebra, that would be so, so lovely.
I tried to give examples of what I think are the linear-algebra analogies in the `@show _ == _` statements below.
```julia
using TensorAlgebra: VectorSpace, Covector, ⊗, Tensor
using LinearAlgebra: dot

V = VectorSpace(:V, Float64)
v = Vector(V, [1, 2, 3])
α = Covector(V, [1, 2, 3])

# expected errors: a vector is not a functional on vectors,
# and a covector is not a functional on covectors
@show v(v)
@show α(α)

V = VectorSpace(:V, Float64)
W = VectorSpace(:W, Float64)
a = Float64[1, 2, 3]
b = Float64[1, 2, 3, 4]
α = Covector(V, a)
β = Covector(W, b)
T = Float64[1 2 3 4; 5 6 7 8; 9 10 11 12]
t = Tensor((V⊗W)^*, T)

@show dot(T, a * b') == t(α⊗β)
@show (a' * T * b) == t(α, β)

hack1(x) = x'
# I don't think `collect` can be defined so that `hack1` is unnecessary,
# but I'm also not sure how best to make this analogy
@show collect(a' * T) == collect(hack1(t(α, -)))
# maybe change the definition of `collect` to not require `hack1`
@show collect(T * b) == collect(t(-, β))
```
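For reference, the pure-array side of these identities can be checked with nothing but NumPy (a sketch of the "familiar linear algebra" half of the analogy only — none of the TensorAlgebra objects are reproduced here, and the values of `a`, `b`, `T` mirror the Julia snippet above):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([1.0, 2.0, 3.0, 4.0])
T = np.arange(1.0, 13.0).reshape(3, 4)

# t(α⊗β): full contraction of T against the outer product a bᵀ,
# i.e. the Frobenius inner product dot(T, a * b')
full = np.sum(T * np.outer(a, b))
assert np.isclose(full, a @ T @ b)  # same number as t(α, β) = a' * T * b

# t(α, -): contracting only the first slot leaves a covector on W
assert np.allclose(a @ T, np.einsum('i,ij->j', a, T))

# t(-, β): contracting only the second slot leaves a covector on V
assert np.allclose(T @ b, np.einsum('ij,j->i', T, b))
```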
And then the manual could even show where the analogy cannot go any further, since there are only two "sides" you can multiply a "2D matrix" by.
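The breakdown is easy to demonstrate with plain arrays: a third-order tensor has three slots, but `u' * S * v` notation only names a left side and a right side. This NumPy sketch (hypothetical example data, not TensorAlgebra API) shows that `einsum`-style explicit slot labels are what survives when the matrix notation runs out:

```python
import numpy as np

# A (2, 3, 4) third-order tensor: three slots, so three covectors
# to contract against -- left/right multiplication only covers two.
S = np.arange(24.0).reshape(2, 3, 4)
u = np.array([1.0, -1.0])
v = np.array([1.0, 0.0, 2.0])
w = np.array([0.0, 1.0, 0.0, 1.0])

# Full contraction S(u, v, w): there is no u' * S * v * w spelling,
# but einsum names every slot explicitly.
val = np.einsum('ijk,i,j,k->', S, u, v, w)

# Same number via successive single-slot contractions.
step = np.einsum('ijk,i->jk', S, u)  # contract slot 1, leaving a 3x4 matrix
assert np.isclose(val, v @ step @ w)  # now matrix notation applies again
```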