# ScaleInvariantAnalysis
This package computes covers of matrices. Given a matrix A, a cover is a rank-one matrix C = a * b', where a and b are nonnegative vectors, satisfying
\[C_{ij} \;\geq\; |A_{ij}| \quad \text{for all } i, j.\]
For a symmetric matrix the cover is symmetric (b = a), so a single vector suffices: a[i] * a[j] >= abs(A[i, j]).
For symmetric matrices, this package also supports diagonalized covers, where C = Diagonal(d) + a * a' is a cover for A. Here, both a and d are nonnegative vectors.
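The definitions above are easy to check directly. Below is a minimal sketch in plain Julia; iscover and isdiagcover are hypothetical helper names for illustration, not part of the package API:

```julia
# Hypothetical helpers (not part of ScaleInvariantAnalysis) that check the
# cover definitions entrywise.
iscover(a, b, A) = all(a[i] * b[j] >= abs(A[i, j])
                       for i in axes(A, 1), j in axes(A, 2))

# Symmetric cover: a single vector a with a[i] * a[j] >= |A[i, j]|.
iscover(a, A) = iscover(a, a, A)

# Diagonalized cover: Diagonal(d) + a * a' dominates abs.(A) entrywise.
isdiagcover(d, a, A) = all(abs(A[i, j]) <= a[i] * a[j] + (i == j) * d[i]
                           for i in axes(A, 1), j in axes(A, 2))
```

For example, iscover([2.0, 3.0], [4.0 2.0; 2.0 9.0]) returns true, since 2*2 >= 4, 2*3 >= 2, and 3*3 >= 9.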
## Why covers?
Covers are the natural scale-covariant representation of a matrix. If you rescale rows by a positive diagonal factor D_r and columns by D_c, the optimal cover transforms as a → D_r * a, b → D_c * b — exactly mirroring how the matrix entries change. Scalar summaries like norm(A) or maximum(abs, A) do not have this property and therefore implicitly encode an arbitrary choice of units.
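This covariance of the constraint set can be verified in plain Julia. The sketch below uses a toy matrix with an exact rank-one cover and power-of-two scale factors (so all arithmetic is exact); none of these variable names come from the package:

```julia
# Toy matrix with an exact rank-one cover: A == a * b' with b = a.
a = [4.0, 1.0, 0.25]
b = copy(a)
A = a * b'

# Rescale rows and columns by positive per-row / per-column factors
# (powers of two, so every product below is exact).
dr = [2.0, 0.5, 8.0]
dc = [4.0, 1.0, 0.5]
B = dr .* A .* dc'

# The rescaled vectors cover the rescaled matrix: the cover transforms
# exactly as the matrix entries do.
a2 = dr .* a
b2 = dc .* b
@assert all(a2[i] * b2[j] >= abs(B[i, j]) for i in 1:3, j in 1:3)
@assert a2 * b2' == B
```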
A concrete example: a 3×3 matrix whose rows and columns correspond to physical variables with different units (position in meters, velocity in m/s, force in N):
```julia
julia> using ScaleInvariantAnalysis

julia> A = [1e6 1e3 1.0; 1e3 1.0 1e-3; 1.0 1e-3 1e-6];

julia> a = symcover(A)
3-element Vector{Float64}:
 1000.0
    1.0
    0.001
```

The cover a captures the natural per-variable scale. The normalized matrix A ./ (a .* a') is all ones and scale-invariant.
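That normalization claim is easy to check directly in plain Julia (using isapprox rather than == since the decimal scales are not exactly representable in binary):

```julia
A = [1e6 1e3 1.0; 1e3 1.0 1e-3; 1.0 1e-3 1e-6]
a = [1000.0, 1.0, 0.001]    # the per-variable scales found by symcover above

# Dividing out the rank-one cover leaves (numerically) the all-ones matrix.
N = A ./ (a .* a')
@assert all(x -> isapprox(x, 1.0), N)
```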
## Measuring cover quality
A cover is valid as long as every constraint is satisfied, but tighter covers better capture the scaling of A. The log-excess of an entry is log(a[i] * b[j] / abs(A[i, j])) >= 0; it is zero when the constraint is exactly tight. Two summary statistics aggregate these excesses:
- cover_lobjective — sum of log-excesses (L1 in log space).
- cover_qobjective — sum of squared log-excesses (L2 in log space).
Both equal zero if and only if every constraint is tight.
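For reference, both statistics can be computed by hand from the definition. The sketch below is an illustrative standalone reimplementation, not the package's code; it reproduces the numbers shown in the example that follows:

```julia
# Standalone reimplementation of the two summary statistics (illustrative,
# not the package's code). Sums run over the nonzero entries of A.
logexcesses(a, b, A) = (log(a[i] * b[j] / abs(A[i, j]))
                        for i in axes(A, 1), j in axes(A, 2) if A[i, j] != 0)

lobjective(a, b, A) = sum(logexcesses(a, b, A))           # L1 in log space
qobjective(a, b, A) = sum(x -> x^2, logexcesses(a, b, A)) # L2 in log space

A = [4.0 2.0; 2.0 9.0]
a = [2.0, 3.0]              # symmetric cover, so b = a
lobjective(a, a, A)         # 2 * log(3), since two constraints are tight
qobjective(a, a, A)         # 2 * log(3)^2
```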
```julia
julia> using ScaleInvariantAnalysis

julia> A = [4.0 2.0; 2.0 9.0];

julia> a = symcover(A)
2-element Vector{Float64}:
 2.0
 3.0

julia> cover_lobjective(a, A)
2.1972245773362196

julia> cover_qobjective(a, A)
2.413897921625164
```

Here's an example where the quadratically-optimal cover differs slightly from the one returned by cover:
```julia
julia> using ScaleInvariantAnalysis, JuMP, HiGHS

julia> A = [1 2 3; 6 5 4];

julia> a, b = cover(A)
([1.2674308473260654, 3.4759059767492304], [1.7261686708831454, 1.61137045961268, 2.366993044495631])

julia> aq, bq = cover_qmin(A)
([1.1986299970143055, 3.25358233351279], [1.8441211516912772, 1.6685716234216104, 2.5028574351324164])

julia> a * b'
2×3 Matrix{Float64}:
 2.1878  2.0423   3.0
 6.0     5.60097  8.22745

julia> aq * bq'
2×3 Matrix{Float64}:
 2.21042  2.0      3.0
 6.0      5.42884  8.14325
```

## Choosing a cover algorithm
| Function | Symmetric | Minimizes | Requires |
|---|---|---|---|
| symcover | yes | (fast heuristic) | — |
| cover | no | (fast heuristic) | — |
| symdiagcover | yes | (fast heuristic) | — |
| symcover_lmin | yes | cover_lobjective | JuMP + HiGHS |
| cover_lmin | no | cover_lobjective | JuMP + HiGHS |
| symcover_qmin | yes | cover_qobjective | JuMP + HiGHS |
| cover_qmin | no | cover_qobjective | JuMP + HiGHS |
symcover, cover, and symdiagcover are recommended for production use. They run in O(mn) time for an $m\times n$ matrix A and often land within a few percent of the cover_lobjective-optimal cover (see the quality tests involving test/testmatrices.jl).
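The package does not spell out its heuristic here, but the flavor of an O(mn)-per-iteration scheme can be conveyed by alternating tightening: with b fixed, shrink each a[i] to the smallest value that keeps row i covered, then do the same for b. The following is an illustrative sketch under that assumption, not the package's actual algorithm (naive_cover is a made-up name):

```julia
# Illustrative alternating-tightening heuristic (a guess at the flavor of
# such algorithms, NOT ScaleInvariantAnalysis's actual implementation).
function naive_cover(A; iter = 3)
    absA = abs.(A)
    b = ones(size(A, 2))                        # crude start: b = ones,
    a = vec(maximum(absA, dims = 2))            # a = row maxima (a valid cover)
    for _ in 1:iter
        a = vec(maximum(absA ./ b', dims = 2))  # smallest a covering rows, given b
        b = vec(maximum(absA ./ a, dims = 1))   # smallest b covering columns, given a
    end
    return a, b
end

# Power-of-two entries keep the arithmetic exact for the check below.
A = [1.0 2.0 4.0; 8.0 4.0 2.0]
a, b = naive_cover(A)
@assert all(a[i] * b[j] >= abs(A[i, j]) for i in axes(A, 1), j in axes(A, 2))
```

Each update leaves the constraints satisfied while only shrinking the cover, so the result is valid after any number of iterations.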
The *_lmin and *_qmin variants solve a convex program (via JuMP and HiGHS) and return a global optimum of the respective objective. They are loaded on demand as a package extension — simply load both libraries before calling them:
```julia
using JuMP, HiGHS
using ScaleInvariantAnalysis

a = symcover_lmin(A)    # globally linear-minimal symmetric cover
a, b = cover_qmin(A)    # globally quadratic-minimal general cover
```

## Index of available tools
- ScaleInvariantAnalysis.cover
- ScaleInvariantAnalysis.cover_lmin
- ScaleInvariantAnalysis.cover_lobjective
- ScaleInvariantAnalysis.cover_qmin
- ScaleInvariantAnalysis.cover_qobjective
- ScaleInvariantAnalysis.dotabs
- ScaleInvariantAnalysis.symcover
- ScaleInvariantAnalysis.symcover_lmin
- ScaleInvariantAnalysis.symcover_qmin
- ScaleInvariantAnalysis.symdiagcover
## Reference documentation
### ScaleInvariantAnalysis.cover — Method

```julia
a, b = cover(A; iter=3)
```

Given a matrix A, return vectors a and b such that a[i] * b[j] >= abs(A[i, j]) for all i, j.

a .* b' may not be minimal, but it is tightened iteratively, with iter specifying the number of iterations (more iterations make tighter covers). cover is fast and generally recommended for production use.
See also: cover_lmin, cover_qmin, symcover.
Examples

```julia
julia> A = [1 2 3; 6 5 4];

julia> a, b = cover(A)
([1.2674308473260654, 3.4759059767492304], [1.7261686708831454, 1.61137045961268, 2.366993044495631])

julia> a * b'
2×3 Matrix{Float64}:
 2.1878  2.0423   3.0
 6.0     5.60097  8.22745
```

### ScaleInvariantAnalysis.cover_lmin — Function
```julia
a, b = cover_lmin(A)
```

Similar to cover_qmin, but returns a linear-minimal cover of A.
### ScaleInvariantAnalysis.cover_lobjective — Method
```julia
cover_lobjective(a, b, A)
cover_lobjective(a, A)
```

Compute the sum of log-domain excesses over nonzero entries of A:

```
∑_{i,j : A[i,j] ≠ 0} log(a[i] * b[j] / |A[i,j]|)
```

The two-argument form is for symmetric matrices where the cover is a*a'.

See also: cover_qobjective for the sum of squared log-domain excesses.
### ScaleInvariantAnalysis.cover_qmin — Function
```julia
a, b = cover_qmin(A)
```

Given a matrix A, return vectors a and b representing the quadratic-minimal cover of A. This solves the optimization problem

```
min  ∑_{i,j : A[i,j] ≠ 0} log(a[i] * b[j] / |A[i,j]|)²
s.t. a[i] * b[j] ≥ |A[i, j]|  for all i, j
```

This implementation is slow. See also:

- cover_qobjective for the objective minimized by this function
- cover for a much more efficient option that is not quadratically-optimal
- symcover_qmin for a specialization to symmetric matrices
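A note on tractability: under the substitution α_i = log a[i], β_j = log b[j] (valid since all quantities are positive), the problem becomes a convex program with linear constraints, which is presumably how the JuMP formulation proceeds:

\[
\begin{aligned}
\min_{\alpha,\,\beta} \quad & \sum_{i,j\,:\,A_{ij}\neq 0} \left(\alpha_i + \beta_j - \log|A_{ij}|\right)^2 \\
\text{s.t.} \quad & \alpha_i + \beta_j \;\geq\; \log|A_{ij}| \quad \text{for all } i, j \text{ with } A_{ij} \neq 0.
\end{aligned}
\]

Dropping the square on each term yields the linear objective minimized by cover_lmin.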
### ScaleInvariantAnalysis.cover_qobjective — Method
```julia
cover_qobjective(a, b, A)
cover_qobjective(a, A)
```

Compute the sum of squared log-domain excesses over nonzero entries of A:

```
∑_{i,j : A[i,j] ≠ 0} log(a[i] * b[j] / |A[i,j]|)²
```

The two-argument form is for symmetric matrices where the cover is a*a'.

See also: cover_lobjective for the sum of log-domain excesses.
### ScaleInvariantAnalysis.dotabs — Method

```julia
dotabs(x, y)
```

Compute the sum of absolute values of elementwise products of x and y:

```
∑_i |x[i] * y[i]|
```

### ScaleInvariantAnalysis.symcover — Method
```julia
a = symcover(A; prioritize::Symbol=:quality, iter=3)
```

Given a square matrix A assumed to be symmetric, return a vector a representing a symmetric cover of A, so that a[i] * a[j] >= abs(A[i, j]) for all i, j.
prioritize=:quality yields a cover that is typically closer to being quadratically optimal, though there are exceptions. prioritize=:speed is often about twice as fast (with default iter=3). In either case, after initialization a is tightened iteratively, with iter specifying the number of iterations (more iterations make tighter covers).
Regardless of which prioritize option is chosen, symcover is fast and generally recommended for production use.
See also: symcover_lmin, symcover_qmin, cover.
Examples

```julia
julia> A = [4 -1; -1 0];

julia> a = symcover(A; prioritize=:speed)
2-element Vector{Float64}:
 2.0
 0.5

julia> a * a'
2×2 Matrix{Float64}:
 4.0  1.0
 1.0  0.25

julia> A = [0 12 9; 12 7 12; 9 12 0];

julia> a = symcover(A; prioritize=:quality)
3-element Vector{Float64}:
 3.4021999694928753
 3.54528705924512
 3.3847752803845172

julia> a * a'
3×3 Matrix{Float64}:
 11.575   12.0618  11.5157
 12.0618  12.5691  12.0
 11.5157  12.0     11.4567

julia> a = symcover(A; prioritize=:speed)
3-element Vector{Float64}:
 4.535573676110727
 2.6457513110645907
 4.535573676110727

julia> a * a'
3×3 Matrix{Float64}:
 20.5714  12.0     20.5714
 12.0      7.0     12.0
 20.5714  12.0     20.5714
```

### ScaleInvariantAnalysis.symcover_lmin — Function
```julia
a = symcover_lmin(A)
```

Similar to symcover_qmin, but returns a symmetric linear-minimal cover of A.
### ScaleInvariantAnalysis.symcover_qmin — Function
```julia
a = symcover_qmin(A)
```

Given a square matrix A assumed to be symmetric, return a vector a representing the symmetric quadratic-minimal cover of A. This solves the optimization problem

```
min  ∑_{i,j : A[i,j] ≠ 0} log(a[i] * a[j] / |A[i,j]|)²
s.t. a[i] * a[j] ≥ |A[i, j]|  for all i, j
```

This implementation is slow. See also:

- cover_qobjective for the objective minimized by this function
- symcover for a much more efficient option that is not quadratically-optimal
- cover_qmin for a generalization to non-symmetric matrices
### ScaleInvariantAnalysis.symdiagcover — Method
```julia
d, a = symdiagcover(A; kwargs...)
```

Given a square matrix A assumed to be symmetric, return vectors d and a representing a symmetric diagonalized cover Diagonal(d) + a * a' of A with the diagonal as tight as possible given A and a. In particular,

```
abs(A[i, j]) ≤ a[i] * a[j]      for all i ≠ j, and
abs(A[i, i]) ≤ a[i]^2 + d[i]    for all i.
```

Examples
```julia
julia> using LinearAlgebra

julia> A = [4 1e-8; 1e-8 1];

julia> a = symcover(A)
2-element Vector{Float64}:
 2.0
 1.0

julia> a * a'
2×2 Matrix{Float64}:
 4.0  2.0
 2.0  1.0

julia> d, a = symdiagcover(A)
([3.99999999, 0.99999999], [0.0001, 0.0001])

julia> Diagonal(d) + a * a'
2×2 Matrix{Float64}:
 4.0     1.0e-8
 1.0e-8  1.0
```

For this case, the diagonalized cover from symdiagcover is much tighter.