This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Lecture 1 Covariant and Contravariant Calculi

United in a trinity, the functional calculus, the spectrum, and the spectral mapping theorem play an exceptional rôle in functional analysis and cannot be substituted by anything else.

1.1 Functional Calculus as an Algebraic Homomorphism

Many traditional definitions of functional calculus are covered by the following rigid template based on the algebra homomorphism property:

Definition 1 A functional calculus for an element a ∈ A is a continuous linear mapping Φ: 𝒜 → A from an algebra of functions 𝒜 such that
  1. Φ is a unital algebra homomorphism
       Φ(f · g) = Φ(f) · Φ(g).
  2. There is an initialisation condition: Φ[v0] = a for a fixed function v0, e.g. v0(z) = z.

The most typical definition of the spectrum is seemingly independent of it and uses the important notion of the resolvent:

Definition 2 A resolvent of an element a ∈ A is the function R(λ) = (a − λe)^{-1}, which is the image under Φ of the Cauchy kernel (z − λ)^{-1}.

A spectrum of a ∈ A is the set sp a of singular points of its resolvent R(λ).

Then the following important theorem links spectrum and functional calculus together.

Theorem 3 (Spectral Mapping) For a function f suitable for the functional calculus:

f(sp a) = sp f(a). (1)

However, the power of the classic spectral theory rapidly decreases if we move beyond the study of one normal operator (e.g. to quasinilpotent ones) and is virtually nil if we consider several non-commuting ones. Sometimes these severe limitations are seen to be insurmountable, and alternative constructions, e.g. model theory, cf. Example 22 and [266], were developed.

Yet the spectral theory can be revived from a fresh start. While the three components (functional calculus, spectrum, and spectral mapping theorem) are highly interdependent in various ways, we will nevertheless arrange them as follows:

  1. Functional calculus is an original notion defined in some independent terms;
  2. Spectrum (or, more specifically, contravariant spectrum, or spectral decomposition) is derived from the previously defined functional calculus as its support (in some appropriate sense);
  3. Spectral mapping theorem then should drop out naturally in the form (1) or some variation of it.

Thus the entire scheme depends on the notion of the functional calculus and our ability to escape the limitations of Definition 1. The first definition of a functional calculus not linked to the algebra homomorphism property known to the present author was the Weyl functional calculus, defined by an integral formula [8]. Its intertwining property with affine transformations of Euclidean space was then proved as a theorem. However, it seems to have been the only “non-homomorphism” calculus for decades.

A different approach to a whole range of calculi was given in [168] and developed in [173, 182, 180, 193] in terms of intertwining operators for group representations. It was initially targeted at several non-commuting operators, because no non-trivial homomorphism from a commutative algebra of functions is possible in this case. However, it emerged later that the new definition is a useful replacement for the classical one across the whole range of problems.

In the following subsections we will support the last claim by considering a simple well-known problem: the characterisation of an n × n matrix up to similarity. Even that “freshman” question can be sorted out by the classical spectral theory only for the small set of diagonalisable matrices. Our solution in terms of the new spectrum will be complete and thus unavoidably coincides with the one given by the Jordan normal form of matrices. Other more difficult questions are the subject of ongoing research.

1.2 Intertwining Group Actions on Functions and Operators

Any functional calculus uses properties of functions to model properties of operators. Thus, changing our viewpoint on functions, as was done in Section 5, we can get another approach to operators. The two main possibilities are encoded in Definitions 23 and 25: we can assign a certain function to a given operator or vice versa. Here we consider the second possibility and treat the first in Subsection 1.5.

The representation ρ1 (23) is unitary irreducible when it acts on the Hardy space H2. Consequently we have one more reason to abolish the template Definition 1: H2 is not an algebra. Instead we replace the homomorphism property by a symmetric covariance:

Definition 4 ([168]) A contravariant analytic calculus for an element a ∈ A and an A-module M is a continuous linear mapping Φ: A(ⅅ) → A(ⅅ, M) such that
  1. Φ is an intertwining operator
          Φ ρ1 = ρa Φ
    between two representations ρ1 (23) and ρa of the group SL2(ℝ), the latter defined below in (5).
  2. There is an initialisation condition: Φ[v0] = m for v0(z) ≡ 1 and m ∈ M, where M is a left A-module.

Note that our functional calculus, released from the homomorphism condition, can take values in any left A-module M, which, however, could be A itself if suitable. This adds much flexibility to our construction.

The earliest functional calculus which is not an algebraic homomorphism was the Weyl functional calculus, defined just by an integral formula as an operator-valued distribution [8]. In that paper the (joint) spectrum was defined as the support of the Weyl calculus, i.e. as the set of points where this operator-valued distribution does not vanish. We also define the spectrum as the support of the functional calculus, but due to our Definition 4 it means the set of non-vanishing intertwining operators with primary subrepresentations.

Definition 5 A corresponding spectrum of a ∈ A is the support of the functional calculus Φ, i.e. the collection of intertwining operators of ρa with primary representations [159, § 8.3].

More variations of contravariant functional calculi are obtained from other groups and their representations [168, 173, 182, 180, 193].

A simple but important observation is that the Möbius transformations (1) can be easily extended to any Banach algebra. Let A be a Banach algebra with unit e and let an element a ∈ A with ‖a‖ < 1 be fixed; then for the transformations

g: a ↦ g·a = (αa + βe)(β̄a + ᾱe)^{-1},    g = (α β; β̄ ᾱ) ∈ SL2(ℝ), (2)

we introduce the SL2(ℝ)-orbit

𝔸 = {g·a ∣ g ∈ SL2(ℝ)} ⊂ A.

Clearly, 𝔸 is a left (right) SL2(ℝ)-homogeneous space with the natural left (right) action:

g1: b = g·a ↦ g1^{-1}·b := (g1^{-1}g)·a    ( g1: g·a ↦ (gg1)·a ),

where g1, g ∈ SL2(ℝ) and b = g·a ∈ 𝔸. That is, for a fixed a ∈ A the set 𝔸 and the SL2(ℝ)-action on it are completely parametrised by the correspondence b = g·a ↔ g.
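The extended Möbius action (2) can be illustrated numerically. The sketch below is ours, not from the source: it assumes the SU(1,1)-type reading g = (α β; β̄ ᾱ) of the matrix entries in (2) (typographic accents are lost in this copy) and represents g by the pair (α, β) with |α|² − |β|² = 1; all function names are hypothetical.

```python
# Numerical sketch (ours) of the extended Moebius action (2) on a matrix
# algebra: g . a = (alpha a + beta e)(conj(beta) a + conj(alpha) e)^{-1},
# assuming g = (alpha, beta; conj(beta), conj(alpha)).
import numpy as np

def su11(phi, t):
    """A group element encoded as (alpha, beta), |alpha|^2 - |beta|^2 = 1."""
    return np.cosh(t) * np.exp(1j * phi), np.sinh(t) * np.exp(-1j * phi)

def mul(g1, g2):
    """Product of the 2x2 matrices encoded by (alpha, beta) pairs."""
    a1, b1 = g1
    a2, b2 = g2
    return a1 * a2 + b1 * np.conj(b2), a1 * b2 + b1 * np.conj(a2)

def moebius(g, a):
    """The action (2) of g on an element a with ||a|| < 1."""
    alpha, beta = g
    e = np.eye(a.shape[0])
    return (alpha * a + beta * e) @ np.linalg.inv(
        np.conj(beta) * a + np.conj(alpha) * e)

a = np.array([[0.2, 0.1], [0.05, -0.15]])        # ||a|| < 1
g1, g2 = su11(0.3, 0.2), su11(-1.0, 0.5)

# (g1 g2) . a == g1 . (g2 . a): the extension is a genuine left action
assert np.allclose(moebius(mul(g1, g2), a), moebius(g1, moebius(g2, a)))
```

Since the scalar coefficients commute with a, the usual composition law of linear-fractional maps carries over verbatim to the algebra-valued case.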

Let us define the resolvent function R(g, b): SL2(ℝ) × 𝔸 → A by:

R(g1, b) := (α1 e − β̄1 b)^{-1} = (β̄a + ᾱe)(α′e − β̄′a)^{-1}, (3)

where

g1 = (α1 β1; β̄1 ᾱ1),   g = (α β; β̄ ᾱ) ∈ SL2(ℝ),   b = g·a ∈ 𝔸

and

(α′ β′; β̄′ ᾱ′) = g^{-1}g1 = (ᾱ −β; −β̄ α) · (α1 β1; β̄1 ᾱ1).

Then we can directly check that:

R(g1, b) R(g2, g1^{-1}·b) = R(g1g2, b), (4)

for all g1, g2 ∈ SL2(ℝ) and b ∈ 𝔸.

The last identity is well known in representation theory [159, § 13.2(10)] and is a key ingredient of induced representations. Thus we can again linearise (2), cf. (23), in the space of continuous functions C(𝔸, M) with values in a left A-module M:

     
  ρa(g): f(b) ↦ R(g, b) f(g^{-1}·b) (5)
          = (αe − β̄b)^{-1} f( (ᾱb − βe)(αe − β̄b)^{-1} ),

where g ∈ SL2(ℝ) and b ∈ 𝔸.
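The cocycle identity (4) can be verified numerically with the ingredients just defined. The following sketch is ours: it uses the same SU(1,1)-type reading g = (α β; β̄ ᾱ) of (2) as reconstructed above, with R(g, b) = (αe − β̄b)^{-1}; the bar placement is an assumption of this sketch.

```python
# Numerical check (ours) of the cocycle identity (4) for the resolvent
# R(g, b) = (alpha e - conj(beta) b)^{-1}, with g encoded as (alpha, beta).
import numpy as np

def su11(phi, t):
    return np.cosh(t) * np.exp(1j * phi), np.sinh(t) * np.exp(-1j * phi)

def mul(g1, g2):
    a1, b1 = g1
    a2, b2 = g2
    return a1 * a2 + b1 * np.conj(b2), a1 * b2 + b1 * np.conj(a2)

def inv(g):
    """Inverse of (alpha, beta; conj(beta), conj(alpha))."""
    alpha, beta = g
    return np.conj(alpha), -beta

def moebius(g, b):
    alpha, beta = g
    e = np.eye(b.shape[0])
    return (alpha * b + beta * e) @ np.linalg.inv(
        np.conj(beta) * b + np.conj(alpha) * e)

def resolvent(g, b):
    alpha, beta = g
    e = np.eye(b.shape[0])
    return np.linalg.inv(alpha * e - np.conj(beta) * b)

b = np.array([[0.3, 0.12], [0.06, -0.18]])       # ||b|| < 1
g1, g2 = su11(0.7, 0.3), su11(-0.2, 0.6)

lhs = resolvent(g1, b) @ resolvent(g2, moebius(inv(g1), b))
rhs = resolvent(mul(g1, g2), b)
assert np.allclose(lhs, rhs)                     # identity (4)
```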

For any m ∈ M we can define a K-invariant vacuum vector as vm(g^{-1}·a) = m·v0(g^{-1}·a) ∈ C(𝔸, M). It generates the family of coherent states associated with vm: vm(u, a) = (ue − a)^{-1}m, where u ∈ ⅅ.

The wavelet transform, defined by the same common formula based on coherent states (cf. (24)):

Wm f(g) = ⟨f, ρa(g) vm⟩, (6)

is a version of the Cauchy integral, which maps L2(𝔸) to C(SL2(ℝ), M). It is closely related (but not identical!) to the Riesz–Dunford functional calculus: the traditional functional calculus is given by the case

  Φ: f ↦ Wm f(0)    for M = A and m = e.

Both conditions required by Definition 4, the intertwining property and the initial value, follow easily from our construction. Finally, we wish to provide an example of an application of Corollary 11.

Example 6 Let a be an operator and φ be a function which annihilates it, i.e. φ(a) = 0. For example, if a is a matrix, φ can be its minimal polynomial. From the integral representation of the contravariant calculus on G = SL2(ℝ) we can rewrite the annihilation property like this:

∫_G φ(g) R(g, a) dg = 0.

Then the vector-valued function [Wm f](g) defined by (6) shall satisfy the following condition:

∫_G φ(g′) [Wm f](gg′) dg′ = 0

due to Corollary 11.

1.3 Jet Bundles and Prolongations

Spectrum was defined in Definition 5 as the support of our functional calculus. To elaborate its meaning we need the notion of a prolongation of representations introduced by S. Lie, see [267, 268] for a detailed exposition.

Definition 7 ([268, Chap. 4]) Two holomorphic functions have nth-order contact at a point if their value and their first n derivatives agree at that point; in other words, their Taylor expansions are the same for the first n+1 terms.

A point (z, u^{(n)}) = (z, u, u1, …, un) of the jet space J^n ∼ ⅅ × ℂ^n is the equivalence class of holomorphic functions having nth-order contact at the point z with the polynomial:

pn(w) = un (w − z)^n / n! + ⋯ + u1 (w − z)/1! + u. (7)

For a fixed n, each holomorphic function f: ⅅ → ℂ has an nth prolongation (or n-jet) j_n f: ⅅ → ℂ^{n+1}:

j_n f(z) = (f(z), f′(z), …, f^{(n)}(z)). (8)

The graph Γ_f^{(n)} of j_n f is a submanifold of J^n which is a section of the jet bundle over ⅅ with fibre ℂ^{n+1}. We also introduce the notation J_n for the map J_n: f ↦ Γ_f^{(n)} of a holomorphic f to the graph Γ_f^{(n)} of its n-jet j_n f(z) (8).
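For a polynomial f, the n-jet (8) is just the vector of derivatives at a point; a minimal sketch (ours, with a hypothetical helper name):

```python
# Minimal sketch (ours) of the n-jet (8) for a polynomial f.
from numpy.polynomial import Polynomial

def jet(f, z, n):
    """j_n f(z) = (f(z), f'(z), ..., f^(n)(z))."""
    return tuple(f.deriv(k)(z) if k > 0 else f(z) for k in range(n + 1))

# f(w) = w^3 at z = 1: derivatives are 1, 3, 6, 6
assert jet(Polynomial([0, 0, 0, 1]), 1.0, 3) == (1.0, 3.0, 6.0, 6.0)
```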

One can prolong any map of functions ψ: f(z) ↦ [ψf](z) to a map ψ^{(n)} of n-jets by the formula

ψ^{(n)} (J_n f) = J_n (ψf). (9)

For example, such a prolongation ρ1^{(n)} of the representation ρ1 of the group SL2(ℝ) in H2(ⅅ) (as of any other representation of a Lie group [268]) will again be a representation of SL2(ℝ). Equivalently, we can say that J_n intertwines ρ1 and ρ1^{(n)}:

   J_n ρ1(g) = ρ1^{(n)}(g) J_n    for all g ∈ SL2(ℝ).

Of course, the representation ρ1^{(n)} is not irreducible: any jet subspace J^k, 0 ≤ k ≤ n, is a ρ1^{(n)}-invariant subspace of J^n. However, the representations ρ1^{(n)} are primary [159, § 8.3] in the sense that they are not sums of two subrepresentations.

The following statement explains why jet spaces appeared in our study of functional calculus.

Proposition 8 Let the matrix a be a Jordan block of length k with the eigenvalue λ = 0, and let m be its root vector of order k, i.e. a^{k−1}m ≠ a^k m = 0. Then the restriction of ρa to the subspace generated by vm is equivalent to the representation ρ1^{(k)}.

1.4 Spectrum and Spectral Mapping Theorem

Now we are prepared to describe the spectrum of a matrix. Since the functional calculus is an intertwining operator, its support is a decomposition into intertwining operators with primary representations (we cannot expect in general that these primary subrepresentations are irreducible).

Recall the group of inner automorphisms of SL2(ℝ) which is transitive on ⅅ: it can send any λ ∈ ⅅ to 0 and is actually parametrised by such a λ. This group extends Proposition 8 to the complete characterisation of ρa for matrices.

Proposition 9 The representation ρa is equivalent to a direct sum of the prolongations ρ1^{(k)} of ρ1 in the kth jet space J^k intertwined with inner automorphisms. Consequently, the spectrum of a (defined via the functional calculus Φ = Wm) is labelled by exactly n pairs of numbers (λi, ki), λi ∈ ⅅ, ki ∈ ℤ+, 1 ≤ i ≤ n, some of which could coincide.

Obviously this spectral theory is a fancy restatement of the Jordan normal form of matrices.


Figure 1.1: Classical spectrum of the matrix from Ex. 10 is shown at (a). Contravariant spectrum of the same matrix in the jet space is drawn at (b). The image of the contravariant spectrum under the map from Ex. 12 is presented at (c).

Example 10 Let Jk(λ) denote the Jordan block of length k for the eigenvalue λ. In Fig. 1.1 there are two pictures of the spectrum for the matrix

a = J3(λ1) ⊕ J4(λ2) ⊕ J1(λ3) ⊕ J2(λ4),

where

λ1 = (3/4) e^{iπ/4},   λ2 = (2/3) e^{i5π/6},   λ3 = (2/5) e^{i3π/4},   λ4 = (3/5) e^{iπ/3}.

Part (a) represents the conventional two-dimensional image of the spectrum, i.e. the eigenvalues of a, and (b) describes the spectrum sp a arising from the wavelet construction. The first image does not allow us to distinguish a from many other essentially different matrices, e.g. the diagonal matrix

diag(λ1, λ2, λ3, λ4),

which even has a different dimensionality. At the same time, Fig. 1.1(b) completely characterises a up to similarity. Note that each point of sp a in Fig. 1.1(b) corresponds to a particular root vector, which spans a primary subrepresentation.

As was mentioned at the beginning of this section, a reasonable spectrum should be linked to the corresponding functional calculus by an appropriate spectral mapping theorem. The new version of the spectrum is based on the prolongation of ρ1 into jet spaces (see Section ??). Naturally, a correct version of the spectral mapping theorem should also operate in jet spaces.

Let φ: ⅅ → ⅅ be a holomorphic map and define its action on functions by [φ* f](z) = f(φ(z)). According to the general formula (9) we can define the prolongation φ*^{(n)} onto the jet space J^n. Its associated action ρ1^{(k)} φ*^{(n)} = φ*^{(n)} ρ1^{(n)} on the pairs (λ, k) is given by the formula:

φ*^{(n)}(λ, k) = ( φ(λ), [ k / deg_λ φ ] ), (10)

where deg_λ φ denotes the degree of the zero of the function φ(z) − φ(λ) at the point z = λ and [x] denotes the integer part of x.
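Formula (10) can be sketched for polynomial maps φ. The helper names below are hypothetical, not from the source, and the reading of [·] as the integer part follows the text:

```python
# Sketch (ours) of formula (10) for polynomial maps phi.
from numpy.polynomial import Polynomial

def zero_order(phi, lam):
    """Order of the zero of phi(z) - phi(lam) at z = lam."""
    psi = phi - Polynomial([phi(lam)])
    d = 1
    while abs(psi.deriv(d)(lam)) < 1e-12:
        d += 1
    return d

def prolonged_pair(phi, lam, k):
    """The action (10): (lam, k) -> (phi(lam), [k / deg_lam phi])."""
    return phi(lam), k // zero_order(phi, lam)

# phi(z) = z^2 has a double zero at lam = 0, so the jet order halves:
assert prolonged_pair(Polynomial([0, 0, 1]), 0.0, 4) == (0.0, 2)
```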

Theorem 11 (Spectral mapping) Let φ be a holomorphic mapping φ: ⅅ → ⅅ with the prolonged action φ*^{(n)} defined by (10); then

sp φ(a) = φ*^{(n)} sp a.

The explicit expression of (10) for φ*^{(n)}, which involves derivatives of φ up to the nth order, is known, see for example [137, Thm. 6.2.25], but was not recognised before as a form of spectral mapping.

Example 12 Let us continue with Example 10. Let φ map all four eigenvalues λ1, …, λ4 of the matrix a into themselves. Then Fig. 1.1(a) will represent the classical spectrum of φ(a) as well as of a.

However, Fig. 1.1(c) shows the mapping of the new spectrum for the case where φ has zeros of the following orders at these points: order 1 at λ1, exactly order 3 at λ2, order at least 2 at λ3, and finally any order at λ4.

1.5 Functional Model and Spectral Distance

Let a be a matrix and µa(z) be its minimal polynomial:

  µa(z) = (z − λ1)^{m1} · … · (z − λn)^{mn}.

If all eigenvalues λi of a (i.e. all roots of µa(z)) belong to the unit disk, we can consider the respective Blaschke product

Ba(z) = ∏_{i=1}^{n} ( (z − λi) / (1 − λ̄i z) )^{mi},

such that its numerator coincides with the minimal polynomial µa(z). Moreover, for unimodular z we have Ba(z) = z^{-m} µa(z)/µ̄a(z), where m = m1 + … + mn and µ̄a(z) denotes the complex conjugate of µa(z). We also have the following covariance property:

Proposition 13 The above correspondence a ↦ Ba intertwines the SL2(ℝ) action (2) on matrices with the action (23), with k = 0, on functions.

The result follows from the observation that every elementary factor (z − λi)/(1 − λ̄i z) is the Möbius transformation of z with the matrix (1 −λi; −λ̄i 1). Thus the correspondence a ↦ Ba(z) is a covariant (symbolic) calculus in the sense of Defn. 23. See also Example 22.

The Jordan normal form of a matrix provides a description which is equivalent to its contravariant spectrum. From various viewpoints, e.g. numerical approximations, it is worth considering its stability under perturbations. It is easy to see that an arbitrarily small disturbance breaks the Jordan structure of a matrix. However, the result of a random small perturbation will not be random; its nature is described by the following remarkable theorem:

Theorem 14 (Lidskii [244], see also [259]) Let Jn be a Jordan block of length n > 1 with zero eigenvalues and K be an arbitrary matrix. Then the eigenvalues of the perturbed matrix Jn + ε^n K admit the expansion

    λj = ε ξ^{1/n} + o(ε),    j = 1, …, n,

where ξ^{1/n} runs over all nth complex roots of a certain ξ ∈ ℂ.

Figure 1.2: Perturbation of the Jordan block’s spectrum: (a) the spectrum of the perturbation J100 + ε^100 K of the Jordan block J100 by a random matrix K; (b) the spectrum of the random matrix K.

The left picture in Fig. 1.2 presents a perturbation of a Jordan block J100 by a random matrix. The perturbed eigenvalues are close to the vertices of a regular polygon with 100 vertices. This regular arrangement occurs despite the fact that the eigenvalues of the matrix K are dispersed through the unit disk (the right picture in Fig. 1.2). In a sense, it is rather the Jordan block that regularises the eigenvalues of K than K that perturbs the eigenvalues of the Jordan block.

Although the Jordan structure itself is extremely fragile, it still can be easily guessed from the perturbed eigenvalues. Thus there exists a certain characterisation of matrices which is stable under small perturbations. We will describe a sense in which the covariant spectrum of the matrix Jn + ε^n K is stable for small ε. For this we introduce a covariant version of the spectral distance motivated by the functional model. Our definition is different from other types known in the literature [324, Ch. 5].

Definition 15 Let a and b be two matrices with all their eigenvalues sitting inside the unit disk and let Ba(z) and Bb(z) be the respective Blaschke products as defined above. The (covariant) spectral distance d(a, b) between a and b is equal to the distance ‖Ba − Bb‖2 between Ba(z) and Bb(z) in the Hardy space on the unit circle.

Since the spectral distance is defined through the distance in H2, all standard axioms of a distance are automatically satisfied. For Blaschke products we have |Ba(z)| = 1 if |z| = 1, thus ‖Ba‖p = 1 in any Lp on the unit circle. Therefore an alternative expression for the spectral distance is:

  d²(a, b) = 2(1 − ℜ⟨Ba, Bb⟩).

In particular, we always have 0≤ d(a,b) ≤ 2. We get an obvious consequence of Prop. 13, which justifies the name of the covariant spectral distance:

Corollary 16 For any g ∈ SL2(ℝ) we have d(a, b) = d(g·a, g·b), where · denotes the Möbius action (2).
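The spectral distance of Definition 15 is straightforward to evaluate numerically from eigenvalues and multiplicities; a sketch (ours, with hypothetical helper names and the normalised arc-length measure on the circle):

```python
# Sketch (ours) of the covariant spectral distance of Defn. 15 as the
# L^2 distance on the unit circle between Blaschke products.
import numpy as np

def blaschke(z, zeros_mult):
    """Finite Blaschke product for (eigenvalue, multiplicity) pairs."""
    B = np.ones_like(z, dtype=complex)
    for lam, m in zeros_mult:
        B *= ((z - lam) / (1 - np.conj(lam) * z)) ** m
    return B

def spectral_distance(spec_a, spec_b, samples=4096):
    t = np.linspace(0.0, 2 * np.pi, samples, endpoint=False)
    z = np.exp(1j * t)
    d2 = np.mean(np.abs(blaschke(z, spec_a) - blaschke(z, spec_b)) ** 2)
    return np.sqrt(d2)

# |B_a| = 1 on the unit circle:
z = np.exp(1j * np.linspace(0.0, 2 * np.pi, 64, endpoint=False))
assert np.allclose(np.abs(blaschke(z, [(0.3 + 0.2j, 2)])), 1.0)

# two nearby simple eigenvalues are close to one double eigenvalue:
d = spectral_distance([(0.3, 2)], [(0.31, 1), (0.29, 1)])
assert 0.0 < d < 0.2
```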

An important property of the covariant spectral distance is its stability under small perturbations.

Theorem 17 For n = 2, let λ1(ε) and λ2(ε) be the eigenvalues of the matrix J2 + ε²·K for some matrix K. Then

|λ1(ε)| + |λ2(ε)| = O(ε),   however   |λ1(ε) + λ2(ε)| = O(ε²). (11)

The spectral distance from the 1-jet at 0 to two 0-jets at the points λ1 and λ2 bounded only by the first condition in (11) is O(ε²). However, the spectral distance between J2 and J2 + ε²·K is O(ε⁴).

In other words, a matrix with eigenvalues satisfying the Lidskii condition from Thm. 14 is much closer to the Jordan block J2 than a generic one with eigenvalues of the same order. Thus the covariant spectral distance is more stable under perturbations than the magnitude of the eigenvalues. For n = 2 the proof can be obtained by a direct calculation. We also conjecture that a similar statement is true for any n ≥ 2.
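The two orders in (11) are easy to observe numerically; a sketch (ours, with an arbitrary fixed matrix K):

```python
# Numerical check (ours) of the two orders in (11) for J_2 + eps^2 K.
import numpy as np

J2 = np.array([[0.0, 1.0], [0.0, 0.0]])
K = np.array([[0.3, -0.7], [1.1, 0.4]])

for eps in (1e-2, 1e-3, 1e-4):
    l1, l2 = np.linalg.eigvals(J2 + eps**2 * K)
    assert abs(l1) + abs(l2) < 10 * eps      # |l1| + |l2| = O(eps)
    assert abs(l1 + l2) < 10 * eps**2        # l1 + l2 = eps^2 tr K = O(eps^2)
```

The second bound reflects that the eigenvalue sum equals the trace of the perturbed matrix, which is exactly ε² tr K here.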

1.6 Covariant Pencils of Operators

Let H be a real Hilbert space, possibly of finite dimensionality. For bounded linear operators A and B consider the generalised eigenvalue problem, that is, finding a scalar λ and a vector x ∈ H such that:

Ax = λBx    or, equivalently,    (A − λB)x = 0. (12)

The standard eigenvalue problem corresponds to the case B = I; moreover, for an invertible B the generalised problem can be reduced to the standard one for the operator B^{-1}A. Thus it is sensible to introduce the following equivalence relation on pairs of operators:

(A, B) ∼ (DA, DB)    for any invertible operator D. (13)

We may treat the pair (A, B) as a column vector (A; B). Then there is an action of the SL2(ℝ) group on such pairs:

g · (A; B) = (aA + bB; cA + dB),    where g = (a b; c d) ∈ SL2(ℝ). (14)

If we consider this SL2(ℝ)-action subject to the equivalence relation (13), then we arrive at a version of the linear-fractional transformation of the operator defined in (2). There is a connection between the SL2(ℝ)-action (14) and the problem (12) through the following intertwining relation:

Proposition 18 Let λ and x ∈ H solve the generalised eigenvalue problem (12) for the pair (A, B). Then the pair (C, D) = g·(A, B), g ∈ SL2(ℝ), has the solution µ and x, where

    µ = g·λ = (aλ + b)/(cλ + d),    for g = (a b; c d) ∈ SL2(ℝ),

is defined by the Möbius transformation (1).

In other words, the correspondence

  (A, B) ↦ the set of all generalised eigenvalues

is another realisation of a covariant calculus in the sense of Defn. 23. The collection of all pairs g·(A, B), g ∈ SL2(ℝ), is an example of a covariant pencil of operators. This set is an SL2(ℝ)-homogeneous space and thus shall be within the classification of such homogeneous spaces provided in Subsection ??.
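Proposition 18 is easy to check numerically. The sketch below is ours: the matrices are random, and g is an arbitrary element with det g = ad − bc = 1.

```python
# Numerical check (ours) of Proposition 18: Moebius covariance of
# generalised eigenvalues under the SL2(R) action (14).
import numpy as np

rng = np.random.default_rng(1)
A, B = rng.standard_normal((2, 4, 4))           # B is generically invertible

lam, X = np.linalg.eig(np.linalg.solve(B, A))   # A x = lam B x
i = int(np.argmax(np.abs(lam + 1.0)))           # keep c*lam + d away from 0
l, x = lam[i], X[:, i]

a, b, c, d = 2.0, 1.0, 1.0, 1.0                 # det = ad - bc = 1
C, D = a * A + b * B, c * A + d * B             # the action (14)
mu = (a * l + b) / (c * l + d)                  # Moebius image of lam

assert np.allclose(C @ x, mu * (D @ x))         # same eigenvector x
```

The verification is immediate algebraically: Cx = (aλ + b)Bx and Dx = (cλ + d)Bx, so Cx = µDx whenever cλ + d ≠ 0.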

Example 19 It is easy to demonstrate that all existing homogeneous spaces can be realised by matrix pairs.
  1. Take the pair (O, I) where O and I are the zero and identity n× n matrices respectively. Then any transformation of this pair by a lower-triangular matrix from SL2(ℝ) is equivalent to (O, I). The respective homogeneous space is isomorphic to the real line with the Möbius transformations (1).
  2. Consider H = ℝ². Using the notations ι from Subsection 3.4 we define three realisations (elliptic, parabolic and hyperbolic) of an operator Aι:

    Ai = (0 1; −1 0),    Aε = (0 1; 0 0),    Aє = (0 1; 1 0). (15)

    Then for an arbitrary element h of the subgroup K, N or A the respective (in the sense of the Principle 5) pair h·(Aι, I) is equivalent to (Aι, I) itself. Thus those three homogeneous spaces are isomorphic to the elliptic, parabolic and hyperbolic half-planes under the respective actions of SL2(ℝ). Note that Aι² = ι² I, that is, Aι is a model for hypercomplex units.
  3. Let A be a direct sum of any two different matrices out of the three Aι from (15); then the fix group of the equivalence class of the pair (A, I) is the identity of SL2(ℝ). Thus the corresponding homogeneous space coincides with the group itself.

Having homogeneous spaces generated by pairs of operators, we can define respective functions on those spaces. Special attention is due to the following paraphrase of the resolvent:

  R(A,B)(g) = (cA + dB)^{-1},    where g^{-1} = (a b; c d) ∈ SL2(ℝ).

Obviously R(A,B)(g) contains the essential information about the pair (A, B). Probably the function R(A,B)(g) contains too much information at once; we may restrict it to get a more detailed view. For vectors u, v ∈ H we also consider vector- and scalar-valued functions related to the generalised resolvent:

  R(A,B)u(g) = (cA + dB)^{-1}u,    and    R(A,B)(u,v)(g) = ⟨(cA + dB)^{-1}u, v⟩,

where (cA + dB)^{-1}u is understood as a solution w of the equation u = (cA + dB)w if it exists and is unique; this does not require the full invertibility of cA + dB.

It is easy to see that the map (A, B) ↦ R(A,B)(u,v)(g) is a covariant calculus as well. It is worth noticing that the function R(A,B) can again fall into the three EPH cases.

Example 20 For the three matrices Aι considered in the previous Example we denote by Rι(g) the resolvent-type function of the pair (Aι, I). Then:

Ri(g) = 1/(c² + d²) · (d −c; c d),   Rε(g) = 1/d² · (d −c; 0 d),   Rє(g) = 1/(d² − c²) · (d −c; −c d).

Put u = (1, 0) ∈ H; then Rι(g)u is a two-dimensional real vector-valued function with components equal to the real and imaginary parts of the hypercomplex Cauchy kernel considered in [195].

Consider the space L(G) of functions spanned by all left translations of R(A,B)(g). As usual, a closure in a suitable metric, say Lp, can be taken. The left action g: f(h)↦ f(g−1h) of SL2(ℝ) on this space is a linear representation of this group. Afterwards the representation can be decomposed into a sum of primary subrepresentations.

Example 21 For the matrices Aι the irreducible components are isomorphic to analytic spaces of hypercomplex functions under the fraction-linear transformations built in Subsection 3.2.

An important observation is that a decomposition into irreducible or primary components can reveal an EPH structure even in cases where it is hidden at the level of the homogeneous space.

Example 22 Take the operator A = Ai ⊕ Aє from Example 19(3). The corresponding homogeneous space coincides with the entire SL2(ℝ). However, if we take the two vectors ui = (1,0)⊕(0,0) and uє = (0,0)⊕(1,0), then the respective linear spaces generated by the functions RA(g)ui and RA(g)uє will be of elliptic and hyperbolic types respectively.

Let us briefly consider the quadratic eigenvalue problem: for given operators (matrices) A0, A1 and A2 from B(H), find a scalar λ and a vector x ∈ H such that

Q(λ)x = 0,    where Q(λ) = λ²A2 + λA1 + A0. (16)
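For matrices, problem (16) is routinely solved through a companion linearisation; the sketch below is ours and uses the standard companion form, not the FSCc matrix discussed next.

```python
# Sketch (ours): quadratic eigenvalues of (16) via the standard
# companion linearisation, assuming A2 is invertible.
import numpy as np

def quadeig(A2, A1, A0):
    """Eigenvalues lam with det(lam^2 A2 + lam A1 + A0) = 0."""
    n = A0.shape[0]
    I, Z = np.eye(n), np.zeros((n, n))
    top = np.hstack([Z, I])
    bot = np.hstack([-np.linalg.solve(A2, A0), -np.linalg.solve(A2, A1)])
    return np.linalg.eigvals(np.vstack([top, bot]))

# scalar case: lam^2 - 3 lam + 2 = 0 has roots 1 and 2
lam = np.sort(quadeig(np.eye(1), -3 * np.eye(1), 2 * np.eye(1)))
assert np.allclose(lam, [1.0, 2.0])
```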

There is a connection with our study of conic sections from Subsection ??, which we will only hint at for now. Comparing (16) with the equation of the cycle (3), we can associate to Q(λ) the respective Fillmore–Springer–Cnops-type matrix, cf. (??):

Q(λ) = λ²A2 + λA1 + A0   ←→   CQ = (A1 A0; A2 A1). (17)

Then we can state the following analogue of Thm. ?? for quadratic eigenvalues:

Proposition 23 Let two quadratic matrix polynomials Q and Q̃ be such that their FSCc matrices (17) are conjugated, CQ̃ = g CQ g^{-1}, by an element g ∈ SL2(ℝ). Then λ is a solution of the quadratic eigenvalue problem for Q and x ∈ H if and only if µ = g·λ is a solution of the quadratic eigenvalue problem for Q̃ and x. Here µ = g·λ is the Möbius transformation (1) associated to g ∈ SL2(ℝ).

So quadratic matrix polynomials are non-commuting analogues of the cycles, and it would be exciting to extend the geometry from Section ?? to this non-commutative setting as much as possible.

Remark 24 It is beneficial to extend the notion of a scalar in a (generalised) eigenvalue problem to an abstract field or ring. For example, we can consider pencils of operators/matrices with polynomial coefficients. In many circumstances we may factorise the polynomial ring by an ideal generated by a collection of algebraic equations. Our work with hypercomplex units is the most elementary realisation of this setup. Indeed, the algebra of hypercomplex numbers with the hypercomplex unit ι is a realisation of the polynomial ring in a variable t factored by the single quadratic relation t² − σ = 0, where σ = ι².

Last modified: October 28, 2024.