
Linear Algebra and its Applications 416 (2006) 917-944 www.elsevier.com/locate/laa

Bijective matrix algebra

Nicholas A. Loehr^{a,∗}, Anthony Mendes^{b}

^{a} Department of Mathematics, The College of William & Mary, Williamsburg, VA 23187, United States
^{b} Department of Mathematics, California Polytechnic State University, San Luis Obispo, CA 93407, United States

∗ Corresponding author. E-mail addresses: nick@math.wm.edu (N.A. Loehr), aamendes@calpoly.edu (A. Mendes).
Research supported by a National Science Foundation Postdoctoral Research Fellowship.

Received 11 February 2005; accepted 6 January 2006

Available online 28 February 2006

Submitted by R.A. Brualdi

Abstract

If $A$ and $B$ are square matrices such that $AB = I$, then $BA = I$ automatically follows. We prove a combinatorial version of this result in the case where the entries of $A$ and $B$ count collections of signed, weighted objects. Specifically, we give an algorithm that transforms any given bijective proof of the identity $AB = I$ into an explicit bijective proof of the identity $BA = I$. Letting $A$ and $B$ be the Kostka matrix and its inverse, this settles an open problem posed by Eğecioğlu and Remmel in 1990.

© 2006 Elsevier Inc. All rights reserved.

AMS classification: 05E05; 05E10; 15A09; 15A24

Keywords: Involution principle; Combinatorial matrix; Proofs by bijection

1. Introduction

We begin by recalling a well-known result from linear algebra.

Theorem 1. Let $R$ be a commutative ring with $1$. If $A, B \in M_n(R)$ are square matrices of order $n$ such that $AB = I$, then $BA = I$.


Proof. By basic properties of determinants, we have $\det(A)\det(B) = \det(AB) = \det(I) = 1_R$. Let $\tilde{A} \in M_n(R)$ denote the classical adjoint of $A$, whose $i,j$-entry is $(-1)^{i+j}$ times the determinant of the matrix obtained by deleting row $j$ and column $i$ of $A$. We have $\tilde{A}A = \det(A)I$. Now, consider the matrix $C = \det(B)\,\tilde{A}ABA$. On one hand,
$$C = (\det(B)\tilde{A}(AB))A = (\det(B)\tilde{A}I)A = \det(B)(\tilde{A}A) = \det(B)\det(A)I = 1_R\, I = I.$$
On the other hand,
$$C = \det(B)(\tilde{A}A)(BA) = (\det(B)\det(A)I)(BA) = 1_R\, I\, BA = BA.$$
Hence, $BA = C = I$. □
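The chain of identities in this proof is easy to check numerically. The following minimal Python sketch (our illustration, not part of the paper; the concrete matrices and the helper functions det, adjugate, and mat_mul are our own) verifies the adjoint identity $\tilde{A}A = \det(A)I$ and both evaluations of $C$ for a $2 \times 2$ pair with $AB = I$.

```python
# Sanity check of the identities used in the proof of Theorem 1 on a concrete
# integer matrix pair with AB = I (our own example, not from the paper).

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def minor(X, i, j):
    return [row[:j] + row[j+1:] for r, row in enumerate(X) if r != i]

def det(X):
    if len(X) == 1:
        return X[0][0]
    return sum((-1) ** j * X[0][j] * det(minor(X, 0, j)) for j in range(len(X)))

def adjugate(X):
    # (i, j)-entry is (-1)^(i+j) times det of X with row j and column i deleted
    n = len(X)
    return [[(-1) ** (i + j) * det(minor(X, j, i)) for j in range(n)] for i in range(n)]

def scal(c, X):
    return [[c * e for e in row] for row in X]

A = [[1, 2], [0, 1]]
B = [[1, -2], [0, 1]]          # chosen so that A B = I
I = [[1, 0], [0, 1]]
A_adj = adjugate(A)

assert mat_mul(A, B) == I                                # hypothesis AB = I
assert mat_mul(A_adj, A) == scal(det(A), I)              # adjoint identity
C = scal(det(B), mat_mul(mat_mul(A_adj, mat_mul(A, B)), A))
assert C == I                                            # first evaluation of C
assert C == mat_mul(B, A)                                # second evaluation: C = BA
print("BA =", mat_mul(B, A))                             # BA = I, as Theorem 1 asserts
```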

The goal of this paper is to present a combinatorial version of this theorem. In combinatorics, one often deals with matrices whose entries enumerate collections of signed, weighted objects. Multiplying two such matrices $A$ and $B$ produces collections of signed, weighted objects associated to each entry of the matrix product $AB$. By defining suitable sign-reversing involutions on these collections, one can obtain a bijective proof that $AB = I$. Similarly, we can try to find bijective proofs that $BA = I$. In many applications, there are explicitly defined involutions that prove $AB = I$, but there are no known involutions that prove $BA = I$. Some examples of this phenomenon are given below. Theorem 1 easily implies the existence of such involutions, but it does not provide an explicit construction for them.

In the first part of this paper (§2 through §6), we extend Theorem 1 to an algorithm that takes as input any explicit bijective proof of the identity $AB = I$, and constructs as output an explicit bijective proof of the identity $BA = I$. Several ingredients are needed. First, we must give a rigorous definition of what we mean by a bijective proof of a matrix identity. Second, we must develop combinatorial versions of the basic properties of matrices and determinants used in the proof of Theorem 1. We give a combinatorial formulation of each property that is proved by constructing an explicit bijection. Third, we show how each step in the proof of Theorem 1 can be carried out in this combinatorial setting. A variant of the Garsia–Milne involution principle [5] will be a key technical tool in completing this agenda. The involution principle is of fundamental importance in bijective combinatorics; for other applications of this principle, see for instance [6,7,10].

Now we give some examples to which our general result can be applied. The first example, involving the combinatorial Kostka matrix and its inverse, is thoroughly analyzed in the second part of this paper (§7).

Example 2. Let $A$ be the Kostka matrix $K_n$, whose rows and columns are indexed by the partitions of the integer $n$; the $\lambda,\mu$-entry of $K_n$ is the Kostka number counting semistandard tableaux of shape $\lambda$ and content $\mu$. A combinatorial interpretation of the entries of the inverse matrix $K_n^{-1}$ was given by Eğecioğlu and Remmel [4]. Those authors defined sign-reversing involutions proving that $K_n K_n^{-1} = I$. They also posed the problem of constructing involutions to show that $K_n^{-1} K_n = I$. A partial solution to this problem was given by the first author [9]. Our theorem will construct involutions proving the complete result $K_n^{-1} K_n = I$. Besides solving the open problem from [4], our involutions may lead to further progress on the $(3+1)$-free conjecture of Stanley and Stembridge [3,12].
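As a concrete illustration of our own (not drawn from [4]), take $n = 3$ and order the partitions $(3), (2,1), (1,1,1)$, with the usual convention that the $\lambda,\mu$-entry of $K_n$ counts semistandard tableaux of shape $\lambda$ and content $\mu$. Then
$$K_3 = \begin{pmatrix} 1 & 1 & 1 \\ 0 & 1 & 2 \\ 0 & 0 & 1 \end{pmatrix}, \qquad K_3^{-1} = \begin{pmatrix} 1 & -1 & 1 \\ 0 & 1 & -2 \\ 0 & 0 & 1 \end{pmatrix},$$
and one checks directly that $K_3 K_3^{-1} = K_3^{-1} K_3 = I$. The open problem concerns realizing the second identity by explicit sign-reversing involutions rather than by matrix arithmetic.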

Example 3. Suppose $X_1 = \{u_\mu : \mu \vdash n\}$ and $X_2 = \{v_\mu : \mu \vdash n\}$ are two bases for the vector space of symmetric functions of degree $n$. Letting $A$ be the transition matrix from $X_1$ to $X_2$ and $B$ the transition matrix from $X_2$ to $X_1$, we have $AB = I$ and $BA = I$. For many classical choices of the bases $X_1$ and $X_2$, the entries of these transition matrices have combinatorial interpretations [1]. Moreover, one can often give combinatorial proofs that $AB = I$ [11]. Our method will then furnish a combinatorial proof of the companion identity $BA = I$. The previous example is the special case where $X_1$ and $X_2$ are the monomial basis and the Schur basis (or the Schur basis and the homogeneous basis).

Example 4. For each $n \geq 0$, let $\{u_\mu : \mu \vdash n\}$ and $\{v_\mu : \mu \vdash n\}$ be bases for the vector space of symmetric functions of degree $n$. It is well known [8, Theorem I.4.6, p. 63] that the following two statements are equivalent.

(1) $\{u_\mu\}$ and $\{v_\mu\}$ are dual bases relative to the Hall scalar product.
(2) $\sum_\mu u_\mu(x)\, v_\mu(y) = \prod_{i,j} (1 - x_i y_j)^{-1}$.

One can encode these statements using matrices $A_n$ and $B_n$ such that (1) is equivalent to the statement that $A_n B_n = I$ for all $n$, while (2) is equivalent to the statement that $B_n A_n = I$ for all $n$. Using the result of the present paper, we see that a combinatorial proof of (1) will automatically lead to a combinatorial proof of (2), and vice versa.

Example 5. White gave bijective proofs of the orthogonality relations of the first and second kind for the characters of the symmetric group $S_n$ [14,15]. Using a suitable encoding of the character table of $S_n$ as a matrix $A$, these results can be interpreted as combinatorial proofs of the identities $AA^T = I$ and $A^T A = I$. Our general method could be applied to the proof in [14] to obtain automatically a bijective proof of the result in [15]. We could equally well apply our method to the proof in [15] to recover the result in [14]. The matrix $A$ can also be viewed as the transition matrix between the Schur functions $s_\lambda$ and the power-sum symmetric functions $p_\mu$. Thus, this example is another special case of Example 3.

Example 6. Let $A$ be the $n \times n$ matrix whose $i,j$-entry is $S(i,j)$, the Stirling number of the second kind. Let $B$ be the $n \times n$ matrix whose $i,j$-entry is $s(i,j)$, the (signed) Stirling number of the first kind. It is well known (see, e.g., Proposition 1.4.1(a) in [13]) that $AB = I$, i.e., $\sum_{k \geq 0} S(i,k)\, s(k,j) = \delta_{i,j}$ for all $i,j$. Bijective proofs of this matrix identity can be given using rook theory [11]. Our theorem can therefore be applied to give a bijective proof that $BA = I$, i.e., $\sum_{k \geq 0} s(i,k)\, S(k,j) = \delta_{i,j}$ for all $i,j$.
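Example 6 is also easy to check by machine. The following Python sketch (ours, not from the paper) builds both Stirling matrices from the standard recursions for a small $n$ and verifies the two orthogonality relations.

```python
# Numerical check (our own) that the Stirling matrices of Example 6 are
# mutually inverse, using the standard recursions for S(i,j) and s(i,j).

def stirling2(i, j):
    # S(i, j) = j*S(i-1, j) + S(i-1, j-1): set partitions of an i-set into j blocks
    if i == j:
        return 1
    if i == 0 or j == 0:
        return 0
    return j * stirling2(i - 1, j) + stirling2(i - 1, j - 1)

def stirling1(i, j):
    # signed Stirling numbers of the first kind: s(i, j) = s(i-1, j-1) - (i-1)*s(i-1, j)
    if i == j:
        return 1
    if i == 0 or j == 0:
        return 0
    return stirling1(i - 1, j - 1) - (i - 1) * stirling1(i - 1, j)

def mat_mul(X, Y):
    m = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(m)) for j in range(m)] for i in range(m)]

n = 8
A = [[stirling2(i, j) for j in range(1, n + 1)] for i in range(1, n + 1)]
B = [[stirling1(i, j) for j in range(1, n + 1)] for i in range(1, n + 1)]
I = [[int(i == j) for j in range(n)] for i in range(n)]

assert mat_mul(A, B) == I   # sum_k S(i,k) s(k,j) = delta_{ij}
assert mat_mul(B, A) == I   # sum_k s(i,k) S(k,j) = delta_{ij}
print("Both orthogonality relations hold for n =", n)
```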

2. Combinatorial scalars and their properties

In this section, we define combinatorial scalars, which model collections of signed, weighted objects, and set up the basic notation used in our model. We also prove the "confluence lemma," which will be a fundamental tool for automatically constructing involutions.

2.1. Ordinary scalars and combinatorial scalars

Definition 7. Let $t \geq 0$ be a fixed integer. An ordinary scalar is an element of the polynomial ring $R = \mathbb{Z}[x_1, \ldots, x_t]$. A weight monomial is an element of $R$ of the form $x_1^{e_1} x_2^{e_2} \cdots x_t^{e_t}$. Let $M$ denote the set of all weight monomials in $R$.


Intuitively, the exponents of the variables $x_i$ will keep track of $t$ different weights of combinatorial objects. When dealing with unweighted objects, we can let $t = 0$ and $R = \mathbb{Z}$. In this case, $M = \{1\}$.

We now formally define the notion of a collection of signed, weighted objects.

Definition 8. A combinatorial scalar is a triple $(A, s, w)$, where

• $A$ is a finite set of objects.
• $s : A \to \{+1, -1\}$ is a sign function that assigns a sign (positive or negative) to each object in $A$.
• $w : A \to M$ is a weight function that assigns a weight monomial to each object in $A$.

The generating function for $(A, s, w)$ is the ordinary scalar
$$\Gamma((A, s, w)) = \sum_{x \in A} s(x)\, w(x) \in R.$$
When no confusion can arise, we write $A$ in place of $(A, s, w)$, suppressing explicit mention of $s$ and $w$.

Definition 9. We define combinatorial scalars $0$, $1$, and $-1$ as follows:

• $0 = (\emptyset, s, w)$, where $s$ and $w$ are the empty function.
• $1 = (\{1\}, s, w)$, where $s(1) = +1$ and $w(1) = 1$.
• $-1 = (\{-1\}, s, w)$, where $s(-1) = -1$ and $w(-1) = 1$.
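To fix ideas, here is a minimal Python sketch (our own encoding, not the paper's) of Definitions 8 and 9, storing each object as a (name, sign, weight) triple with the weight monomial recorded by its exponent tuple.

```python
# Toy encoding of combinatorial scalars: a list of (name, sign, weight) triples,
# where the weight monomial in Z[x_1, ..., x_t] is stored as (e_1, ..., e_t).

from collections import Counter

ZERO = []                              # the combinatorial scalar 0: empty set of objects
ONE = [("1", +1, (0, 0))]              # the scalar 1: one positive object of weight 1
MINUS_ONE = [("-1", -1, (0, 0))]       # the scalar -1: one negative object of weight 1

def generating_function(scalar):
    """The ordinary scalar sum of s(x)*w(x), returned as a dict mapping
    exponent tuples to nonzero integer coefficients."""
    gf = Counter()
    for _name, sign, weight in scalar:
        gf[weight] += sign
    return {w: c for w, c in gf.items() if c != 0}

# A scalar with t = 2 weight variables and three signed, weighted objects:
A = [("a", +1, (1, 0)), ("b", +1, (0, 2)), ("c", -1, (1, 0))]
print(generating_function(A))      # {(0, 2): 1}: the objects "a" and "c" cancel
print(generating_function(ONE))    # {(0, 0): 1}
print(generating_function(ZERO))   # {}
```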

2.2. Equivalence of combinatorial scalars

Definition 10. Let $(A, s, w)$ and $(B, s', w')$ be two combinatorial scalars. A function $f : A \to B$ is called a sign-preserving, weight-preserving bijection (or SPWP-map, for short) if and only if $f$ is a bijection such that
$$s'(f(x)) = s(x) \quad\text{and}\quad w'(f(x)) = w(x) \qquad \text{for all } x \in A.$$

We say that $A$ and $B$ are equivalent via $f$, denoted $A \stackrel{f}{\equiv} B$, if and only if we have explicitly constructed such an SPWP-map $f : A \to B$. When the particular map $f$ need not be emphasized, we shorten this notation to $A \equiv B$.

We stress that the notation $A \stackrel{f}{\equiv} B$ (or $A \equiv B$) does not simply mean that there exists some SPWP-map $f : A \to B$. It means that we have constructed a specific instance of such a map, in terms of previously constructed maps or from scratch. When we say that $f$ is an "explicit" bijection, we mean that we can write down algorithms that compute $f(x)$ given any $x \in A$ and $f^{-1}(y)$ given any $y \in B$. With these conventions in mind, the following lemma is easily proved.

Lemma 11. Let $A, B, C$ be combinatorial scalars.

(1) $A \stackrel{\mathrm{id}_A}{\equiv} A$.
(2) $A \stackrel{f}{\equiv} B$ implies $B \stackrel{f^{-1}}{\equiv} A$.
(3) $A \stackrel{f}{\equiv} B$ and $B \stackrel{g}{\equiv} C$ implies $A \stackrel{g \circ f}{\equiv} C$.
(4) $A \stackrel{f}{\equiv} B$ implies $\Gamma(A) = \Gamma(B)$.

The converse of (4) is not true, since merely knowing that $\Gamma(A) = \Gamma(B)$ does not allow us to single out a specific SPWP-map $f : A \to B$.
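The following Python sketch (again our own toy encoding, continuing the conventions above) illustrates Definition 10 and Lemma 11: it checks whether an explicitly given map of underlying sets is an SPWP-map, so that the two scalars are equivalent and, in particular, have equal generating functions.

```python
# Check that a given map between two combinatorial scalars, each encoded as a
# list of (name, sign, weight) triples, is a sign- and weight-preserving bijection.

def is_spwp_map(f, A, B):
    """f is a dict sending names of objects of A to names of objects of B."""
    sign_A = {name: s for name, s, _ in A}
    data_B = {name: (s, w) for name, s, w in B}
    images = [f[name] for name in sign_A]
    if sorted(images) != sorted(data_B):                           # f must be a bijection onto B
        return False
    return all(data_B[f[name]] == (s, w) for name, s, w in A)     # signs and weights preserved

A = [("a", +1, (1, 0)), ("b", -1, (0, 1))]
B = [("x", +1, (1, 0)), ("y", -1, (0, 1))]
f = {"a": "x", "b": "y"}
print(is_spwp_map(f, A, B))                      # True: A and B are equivalent via f
print(is_spwp_map({"a": "y", "b": "x"}, A, B))   # False: signs are not preserved
```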

2.3. Reduction of combinatorial scalars

Definition 12. Let $(A, s, w)$ be a combinatorial scalar. A function $g : A \to A$ is called a sign-reversing, weight-preserving involution (or SRWP-map, for short) if and only if the following conditions hold for all $a \in A$:

• $g(g(a)) = a$;
• $w(g(a)) = w(a)$;
• either $g(a) = a$, or $s(g(a)) = -s(a)$.

We let $\mathrm{Fxd}(g)$ denote $\{a \in A : g(a) = a\}$, the set of fixed points of $g$.

Definition 13. Let $A$ and $B$ be combinatorial scalars. We say that $A$ reduces to $B$ via maps $(g, g_0)$, denoted $A \xrightarrow{(g, g_0)} B$ or simply $A \longrightarrow B$, if and only if we have explicitly constructed a SRWP-map $g$ on $A$ together with an SPWP-map $g_0 : \mathrm{Fxd}(g) \to B$.
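A minimal sketch of Definitions 12 and 13 in the unweighted case $t = 0$ (a toy example of our own): on the subsets of $\{1, \ldots, n\}$ signed by parity, toggling the element 1 is an SRWP-map with no fixed points, exhibiting a reduction of this scalar to the combinatorial scalar 0.

```python
# Toy sign-reversing, weight-preserving involution (weights trivial since t = 0):
# toggle membership of the element 1 among all subsets of {1, ..., n}.

from itertools import combinations

n = 4
objects = [frozenset(c) for k in range(n + 1)
           for c in combinations(range(1, n + 1), k)]

def sign(S):
    return (-1) ** len(S)

def g(S):
    # toggle membership of the element 1
    return S ^ {1}

assert all(g(g(S)) == S for S in objects)              # g is an involution
assert all(sign(g(S)) == -sign(S) for S in objects)    # g reverses signs everywhere
fixed_points = [S for S in objects if g(S) == S]
print(fixed_points)                                    # []  (Fxd(g) is empty)
print(sum(sign(S) for S in objects))                   # 0, the generating function
# Taking g_0 to be the empty map from Fxd(g) to the empty set exhibits a
# reduction of this scalar to the combinatorial scalar 0.
```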