
Linear Algebra and its Applications 416 (2006) 917-944 www.elsevier.com/locate/laa

Bijective matrix algebra

Nicholas A. Loehr a,∗, Anthony Mendes b

a Department of Mathematics, The College of William & Mary, Williamsburg, VA 23187, United States
b Department of Mathematics, California Polytechnic State University, San Luis Obispo, CA 93407, United States

Received 11 February 2005; accepted 6 January 2006

Available online 28 February 2006

Submitted by R.A. Brualdi

Abstract

If A and B are square matrices such that AB = I, then BA = I automatically follows. We prove a combinatorial version of this result in the case where the entries of A and B count collections of signed, weighted objects. Specifically, we give an algorithm that transforms any given bijective proof of the identity AB = I into an explicit bijective proof of the identity BA = I. Letting A and B be the Kostka matrix and its inverse, this settles an open problem posed by Eğecioğlu and Remmel in 1990.

© 2006 Elsevier Inc. All rights reserved.

AMS classification: 05E05; 05E10; 15A09; 15A24

Keywords: Involution principle; Combinatorial matrix; Proofs by bijection

1. Introduction

We begin by recalling a well-known result from linear algebra.

Theorem 1. Let R be a commutative ring with 1. If A, B ∈ M_n(R) are square matrices of order n such that AB = I, then BA = I.

Research supported by a National Science Foundation Postdoctoral Research Fellowship.
∗ Corresponding author.
E-mail addresses: nick@math.wm.edu (N.A. Loehr), aamendes@calpoly.edu (A. Mendes).

0024-3795/$ - see front matter © 2006 Elsevier Inc. All rights reserved.
doi:10.1016/j.laa.2006.01.004


Proof. By basic properties of determinants, we have det(A) det(B) = det(AB) = det(I) = 1_R. Let Ã ∈ M_n(R) denote the classical adjoint of A, whose i,j-entry is (-1)^{i+j} times the determinant of the matrix obtained by deleting row j and column i of A. We have ÃA = det(A)I. Now, consider the matrix C = det(B)ÃABA. On one hand,

C = (det(B)Ã(AB))A = (det(B)ÃI)A = det(B)(ÃA) = det(B) det(A)I = 1_R I = I.

On the other hand,

C = det(B)(ÃA)(BA) = (det(B) det(A)I)(BA) = 1_R I BA = BA.

Hence, BA = C = I. □
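As a quick sanity check, the short Python sketch below (our own illustration, not part of the paper) traces the adjugate argument from the proof on a concrete pair of 2×2 integer matrices over the ring R = Z; the particular matrices A and B and all helper names are hypothetical choices made for this example.

```python
# A numeric illustration of the adjugate argument in the proof of
# Theorem 1, over the ring R = Z.  The 2x2 matrices below are our own
# hypothetical choices, picked so that A and B are inverse over Z.

def mat_mul(X, Y):
    """Product of two 2x2 integer matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def det(X):
    """Determinant of a 2x2 matrix."""
    return X[0][0] * X[1][1] - X[0][1] * X[1][0]

def adj(X):
    """Classical adjoint (adjugate) of a 2x2 matrix."""
    return [[X[1][1], -X[0][1]], [-X[1][0], X[0][0]]]

def scale(c, X):
    """Multiply every entry of X by the scalar c."""
    return [[c * x for x in row] for row in X]

I = [[1, 0], [0, 1]]
A = [[2, 1], [1, 1]]       # det(A) = 1
B = [[1, -1], [-1, 2]]     # chosen so that AB = I

assert mat_mul(A, B) == I                        # the hypothesis AB = I
assert mat_mul(adj(A), A) == scale(det(A), I)    # the adjugate identity

# C = det(B) * adj(A) * A * B * A, the auxiliary matrix from the proof.
C = scale(det(B), mat_mul(adj(A), mat_mul(A, mat_mul(B, A))))
assert C == I                  # value predicted by the first evaluation
assert C == mat_mul(B, A)      # value predicted by the second evaluation
# Together these give BA = C = I, as in the proof.
```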

The goal of this paper is to present a combinatorial version of this theorem. In combinatorics, one often deals with matrices whose entries enumerate collections of signed, weighted objects. Multiplying two such matrices produces collections of objects associated to each entry of the matrix product AB. By defining suitable sign-reversing involutions on these collections, one can obtain a bijective proof that AB = I. Similarly, we can try to find bijective proofs that BA = I. In many applications, there are explicitly defined involutions that prove AB = I, but there are no known involutions that prove BA = I. Some examples of this phenomenon are given below. Theorem 1 easily implies the existence of such involutions, but it does not provide an explicit construction for them.

In the first part of this paper (§2 through §6), we extend Theorem 1 to an algorithm that takes as input any explicit bijective proof of the identity AB = I, and constructs as output an explicit bijective proof of the identity BA = I. Several ingredients are needed to carry this out. First, we must give a rigorous definition of what we mean by a bijective proof of a matrix identity. Second, we must develop combinatorial versions of the basic properties of matrices and determinants used in the proof of Theorem 1. We give a combinatorial formulation of each property that is proved by constructing an explicit bijection. Third, we show how each step in the proof of Theorem 1 can be carried out in this combinatorial setting. A variant of the Garsia-Milne involution principle [5] will be a key technical tool in completing this agenda. The involution principle is of fundamental importance in bijective combinatorics; for other applications of this principle, see for instance [6,7,10].

Now we give some examples to which our general result can be applied. The first example, involving the combinatorial Kostka matrix and its inverse, is thoroughly analyzed in the second part of this paper (§7).

Example 2. Let A be the Kostka matrix K_n, whose rows and columns are indexed by the partitions of the integer n. A combinatorial interpretation of the λ,μ-entry of K_n^{-1} was given by Eğecioğlu and Remmel [4]. Those authors defined sign-reversing involutions proving that K_n K_n^{-1} = I. They also posed the problem of constructing involutions to show that K_n^{-1} K_n = I; a partial solution was given by the first author [9]. Our theorem will construct involutions proving the complete result K_n^{-1} K_n = I. Besides solving the open problem from [4], our involutions may lead to further progress on the (3+1)-free conjecture of Stanley and Stembridge [3,12].
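For orientation, here is the smallest nontrivial instance of Example 2, worked out numerically in Python. The Kostka numbers used for n = 2 are standard (K_{(2),(2)} = 1, K_{(2),(1,1)} = 1, K_{(1,1),(2)} = 0, K_{(1,1),(1,1)} = 1); the matrix layout (rows indexed by λ, columns by μ, in the order (2), (1,1)) and the helper names are our own choices, not the paper's.

```python
# A concrete instance of Example 2 for n = 2, where the partitions of 2
# are (2) and (1,1).  This only checks the matrix identities numerically;
# the paper's point is to prove them bijectively.

K2 = [[1, 1],
      [0, 1]]          # the Kostka matrix K_2
K2_inv = [[1, -1],
          [0, 1]]      # its inverse

def mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

I = [[1, 0], [0, 1]]
assert mul(K2, K2_inv) == I    # K_n K_n^{-1} = I
assert mul(K2_inv, K2) == I    # K_n^{-1} K_n = I (the companion identity)
```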

Example 3. Suppose X_1 = {u_μ : μ ⊢ n} and X_2 = {v_μ : μ ⊢ n} are two bases for the vector space of symmetric functions of degree n. Letting A be the transition matrix from X_1 to X_2 and B the transition matrix from X_2 to X_1, we have AB = I and BA = I. For many classical choices of the bases X_1 and X_2, the entries of these transition matrices have combinatorial interpretations [1]. Moreover, one can often give combinatorial proofs that AB = I [11]. Our method will then furnish a combinatorial proof of the companion identity BA = I. The previous example is the special case where X_1 and X_2 are the monomial basis and the Schur basis (or the Schur basis and the homogeneous basis).

Example 4. For each n ≥ 0, let {u_μ : μ ⊢ n} and {v_μ : μ ⊢ n} be bases for the vector space of symmetric functions of degree n. It is well known [8, Theorem I.4.6, p. 63] that the following two statements are equivalent. (1) {u_μ} and {v_μ} are dual bases relative to the Hall scalar product. (2) ∑_μ u_μ(x) v_μ(y) = ∏_{i,j} (1 - x_i y_j)^{-1}. One can encode these statements using matrices A_n and B_n such that (1) is equivalent to the statement that A_n B_n = I for all n, while (2) is equivalent to the statement that B_n A_n = I for all n. Using the result of the present paper, we see that a combinatorial proof of (1) will automatically lead to a combinatorial proof of (2), and vice versa.

Example 5. White gave bijective proofs of the orthogonality relations of the first and second kind for the characters of the symmetric group S_n [14,15]. Using a suitable encoding of the character table of S_n as a matrix A, these results can be interpreted as combinatorial proofs of the identities AA^T = I and A^T A = I. Our general method could be applied to the proof in [14] to obtain automatically a bijective proof of the result in [15]. We could equally well apply our method to the proof in [15] to recover the result in [14]. The matrix A can also be viewed as the transition matrix between the Schur functions s_λ and the power-sum symmetric functions p_μ. Thus, this example is another special case of Example 3.

Example 6. Let A be the n×n matrix whose i,j-entry is S(i,j), the Stirling number of the second kind. Let B be the n×n matrix whose i,j-entry is s(i,j), the (signed) Stirling number of the first kind. It is well known (see, e.g., Proposition 1.4.1(a) in [13]) that AB = I, i.e., ∑_{k≥0} S(i,k) s(k,j) = δ_{i,j} for all i, j. Bijective proofs of this matrix identity can be given using rook theory [11]. Our theorem can therefore be applied to give a bijective proof that BA = I, i.e., ∑_{k≥0} s(i,k) S(k,j) = δ_{i,j} for all i, j.
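Example 6 can be checked directly for small n. The sketch below (ours, not from the paper) builds the two Stirling matrices from the standard recurrences and verifies both matrix products numerically; the function names and the 1-based indexing convention are our own.

```python
# Numerical check of Example 6 for small n: the matrix of Stirling
# numbers of the second kind S(i,j) and the matrix of signed Stirling
# numbers of the first kind s(i,j) are inverse to each other.

from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Stirling number of the second kind S(n, k)."""
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

@lru_cache(maxsize=None)
def stirling1(n, k):
    """Signed Stirling number of the first kind s(n, k)."""
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return stirling1(n - 1, k - 1) - (n - 1) * stirling1(n - 1, k)

def check(n):
    """Verify AB = I and BA = I for the n x n Stirling matrices."""
    A = [[stirling2(i, j) for j in range(1, n + 1)] for i in range(1, n + 1)]
    B = [[stirling1(i, j) for j in range(1, n + 1)] for i in range(1, n + 1)]
    mul = lambda X, Y: [[sum(X[i][k] * Y[k][j] for k in range(n))
                         for j in range(n)] for i in range(n)]
    I = [[int(i == j) for j in range(n)] for i in range(n)]
    return mul(A, B) == I and mul(B, A) == I

assert all(check(n) for n in range(1, 8))
```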

2. Combinatorial scalars and their properties

In this section, we define combinatorial scalars, which model collections of signed, weighted objects, and set up the basic notation used in our model. We also prove the “confluence lemma," which will be a fundamental tool for automatically constructing involutions.

2.1. Ordinary scalars and combinatorial scalars

Definition 7. Let t ≥ 0 be a fixed integer. An ordinary scalar is an element of the polynomial ring R = Z[x_1, ..., x_t]. A weight monomial is an element of R of the form x_1^{e_1} x_2^{e_2} ··· x_t^{e_t}. Let M denote the set of all weight monomials in R.


Intuitively, the exponents of the variables x_i will keep track of t different weights of combinatorial objects. When dealing with unweighted objects, we can let t = 0 and R = Z. In this case, M = {1}.

We now formally define the notion of a collection of signed, weighted objects.

Definition 8. A combinatorial scalar is a triple (A, s, w), where

• A is a finite set of objects.
• s : A → {+1, -1} is a sign function that assigns a sign (positive or negative) to each object in A.
• w : A → M is a weight function that assigns a weight monomial to each object in A.

The generating function for (A, s, w) is the ordinary scalar ∑_{x∈A} s(x)w(x) ∈ R. We frequently write A as an abbreviation for the triple (A, s, w), suppressing explicit mention of s and w.

Definition 9. We define combinatorial scalars 0, 1, and -1 as follows:

• 0 = (∅, s, w), where s and w are the empty functions.
• 1 = ({1}, s, w), where s(1) = +1 and w(1) = 1.
• -1 = ({-1}, s, w), where s(-1) = -1 and w(-1) = 1.
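To make Definitions 8 and 9 concrete, here is a minimal Python model of ours (not the paper's): a combinatorial scalar is stored as a dictionary mapping each object to its (sign, weight) pair, with a weight monomial recorded as its tuple of exponents (e_1, ..., e_t).

```python
# A minimal model of Definitions 8-9, assuming a combinatorial scalar is
# stored as a dictionary object -> (sign, weight), with weights recorded
# as exponent tuples (e_1, ..., e_t).  This representation is ours.

from collections import defaultdict

def generating_function(scalar):
    """Sum of s(x)*w(x) over x in A, returned as {exponent tuple: coeff}."""
    gf = defaultdict(int)
    for sign, weight in scalar.values():
        gf[weight] += sign
    return {w: c for w, c in gf.items() if c != 0}

# The combinatorial scalars 0, 1 and -1 of Definition 9 (taking t = 1,
# so the weight monomial 1 is the exponent tuple (0,)):
ZERO = {}
ONE = {"1": (+1, (0,))}
NEG_ONE = {"-1": (-1, (0,))}

assert generating_function(ZERO) == {}             # the zero polynomial
assert generating_function(ONE) == {(0,): 1}       # the constant 1
assert generating_function(NEG_ONE) == {(0,): -1}  # the constant -1
```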

2.2. Equivalence of combinatorial scalars

Definition 10. Let (A, s, w) and (B, s′, w′) be two combinatorial scalars. A function f : A → B is called a sign-preserving, weight-preserving bijection (or SPWP-map, for short) if and only if f is a bijection such that s′(f(x)) = s(x) and w′(f(x)) = w(x) for all x ∈ A.

We say that A and B are equivalent via f, denoted A ≡_f B, if and only if we have explicitly constructed an SPWP-map f : A → B. When the map f is understood, we may abbreviate this notation to A ≡ B.

We stress that the notation A ≡_f B (or A ≡ B) does not simply mean that there exists some SPWP-map f : A → B. It means that we have constructed a specific instance of such a map, in terms of previously constructed maps or from scratch. When we say that f is an “explicit" bijection, we mean that we can write down algorithms that compute f(x) given any x ∈ A and f^{-1}(y) given any y ∈ B. With these conventions in mind, the following lemma is easily proved.

Lemma 11. Let A, B, C be combinatorial scalars.

(1) A ≡_{id_A} A.
(2) A ≡_f B implies B ≡_{f^{-1}} A.
(3) A ≡_f B and B ≡_g C implies A ≡_{g∘f} C.
(4) A ≡_f B implies that A and B have the same generating function.

The converse of (4) is not true, since merely knowing that A and B have equal generating functions does not allow us to single out a specific SPWP-map f : A → B.
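The sketch below (our own toy data and function names, in the same dictionary representation used above) spells out Definition 10 and checks parts (2) and (3) of Lemma 11 on explicit finite scalars.

```python
# A standalone check of Definition 10 and of Lemma 11(2)-(3), with
# combinatorial scalars given as dictionaries object -> (sign, weight).
# The toy scalars and maps below are our own illustration.

def is_spwp_map(A, B, f):
    """Is f : A -> B a sign-preserving, weight-preserving bijection?"""
    image = {f(a) for a in A}
    return (image == set(B)                        # f is onto B
            and len(image) == len(A)               # f is one-to-one
            and all(B[f(a)] == A[a] for a in A))   # signs, weights preserved

A = {"x": (+1, (2,)), "y": (-1, (0,))}
B = {"X": (+1, (2,)), "Y": (-1, (0,))}
C = {"u": (+1, (2,)), "v": (-1, (0,))}

f = {"x": "X", "y": "Y"}.get          # an explicit SPWP-map A -> B
g = {"X": "u", "Y": "v"}.get          # an explicit SPWP-map B -> C
f_inv = {"X": "x", "Y": "y"}.get      # Lemma 11(2): the inverse map
g_of_f = lambda a: g(f(a))            # Lemma 11(3): the composite map

assert is_spwp_map(A, B, f)
assert is_spwp_map(B, A, f_inv)
assert is_spwp_map(A, C, g_of_f)
```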

2.3. Reduction of combinatorial scalars

Definition 12. Let (A, s, w) be a combinatorial scalar. A function g : A → A is called a sign-reversing, weight-preserving involution (or SRWP-map, for short) if and only if the following conditions hold for all a ∈ A:

• g(g(a)) = a;
• w(g(a)) = w(a);
• either g(a) = a, or s(g(a)) = -s(a).

We let Fxd(g) denote {a ∈ A : g(a) = a}, the set of fixed points of g.

Definition 13. Let A and B be combinatorial scalars. We say that A reduces to B via the maps (g, g_0), denoted A →_{(g,g_0)} B or simply A → B, if and only if we have explicitly constructed an SRWP-map g : A → A and we have explicitly constructed an SPWP-map g_0 : Fxd(g) → B.

As in the case of equivalence, we may only write A →_{(g,g_0)} B if we have constructed specific maps g and g_0 and have algorithms to compute g and g_0.
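Continuing the same dictionary representation, the following standalone sketch (our own toy example, not one from the paper) checks the three conditions of Definition 12 and computes the fixed-point set Fxd(g) used in Definition 13.

```python
# A sketch of Definitions 12-13: checking that g : A -> A is a
# sign-reversing, weight-preserving involution and extracting its fixed
# points.  The toy scalar and involution below are our own illustration.

def is_srwp_involution(A, g):
    """Check the three conditions of Definition 12 for every a in A."""
    return all(
        g(g(a)) == a
        and A[g(a)][1] == A[a][1]                   # weight preserved
        and (g(a) == a or A[g(a)][0] == -A[a][0])   # sign reversed off Fxd(g)
        for a in A
    )

def fixed_points(A, g):
    """Fxd(g) = {a in A : g(a) = a}."""
    return {a: A[a] for a in A if g(a) == a}

# Toy scalar: 'a+' and 'a-' cancel in sign; 'b' is a fixed point.
A = {"a+": (+1, (1,)), "a-": (-1, (1,)), "b": (+1, (1,))}
g = {"a+": "a-", "a-": "a+", "b": "b"}.get

assert is_srwp_involution(A, g)
assert fixed_points(A, g) == {"b": (+1, (1,))}
# As in Definition 13, an SPWP-map g_0 from Fxd(g) onto a smaller scalar B
# would complete a reduction A -> B via the pair (g, g_0).
```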