
Spectral Graph Theory, Lecture 3

The Adjacency Matrix and the nth Eigenvalue

Daniel A. Spielman, September 5, 2012

3.1 About these notes

These notes are not necessarily an accurate representation of what happened in class. The notes written before class say what I think I should say. The notes written after class say what I wish I said. Be skeptical of all statements in these notes that can be made mathematically rigorous.

3.2 Overview

In this lecture, I will discuss the adjacency matrix of a graph, and the meaning of its smallest eigenvalue. This corresponds to the largest eigenvalue of the Laplacian, which we will examine as well. We will relate these to bounds on the chromatic numbers of graphs and the sizes of independent sets of vertices in graphs. In particular, we will prove Hoffman's bound, and some generalizations.

Warning: I am going to give an alternative approach to Hoffman's bound on the chromatic number of a graph in which I use the Laplacian instead of the adjacency matrix. I just worked this out last night, so I still don't know if it is a good idea or not. But, I'm going to go with it. My proof of Hoffman's bound in the regular case will be much simpler than the proof that I gave in 2009.

3.3 The Adjacency Matrix

Let A be the adjacency matrix of a (possibly weighted) graph G. As an operator, A acts on a vector x ∈ ℝ^V by

(Ax)(u) = Σ_{v : (u,v)∈E} w(u,v) x(v).   (3.1)

We will denote the eigenvalues of A by μ_1, …, μ_n. But, we order them in the opposite direction than we did for the Laplacian: we assume

μ_1 ≥ μ_2 ≥ ⋯ ≥ μ_n.

The reason for this convention is so that μ_i corresponds to the i-th Laplacian eigenvalue, λ_i. If G is a d-regular graph, then D = dI, and

L = dI − A,

and so λ_i = d − μ_i. So, we see that the largest adjacency eigenvalue of a d-regular graph is d, and its corresponding eigenvector is the constant vector. We could also prove that the constant vector is an eigenvector of eigenvalue d by considering the action of A as an operator (3.1): if x(u) = 1 for all u, then (Ax)(v) = d for all v.
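The constant-vector observation and the relation λ_i = d − μ_i are easy to check numerically. The following sketch (my own illustration, not part of the notes) uses the 4-cycle, which is 2-regular:

```python
import numpy as np

# The 4-cycle, a 2-regular graph: vertices 0-1-2-3-0.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

d = 2
ones = np.ones(4)

# Applying the operator (3.1) to the all-ones vector sums the edge
# weights at each vertex, i.e. gives the degree d everywhere.
print(A @ ones)            # [2. 2. 2. 2.]

# Laplacian of a d-regular graph: L = dI - A, so lambda_i = d - mu_i.
L = d * np.eye(4) - A
mus = np.sort(np.linalg.eigvalsh(A))[::-1]   # mu_1 >= ... >= mu_n
lams = np.sort(np.linalg.eigvalsh(L))        # lambda_1 <= ... <= lambda_n
print(np.allclose(lams, d - mus))            # True
```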

3.4 The Largest Eigenvalue, μ_1

We now examine μ_1 for graphs which are not necessarily regular. Let G be a graph, let d_max be the maximum degree of a vertex in G, and let d_ave be the average degree of a vertex in G.

Lemma 3.4.1. d_ave ≤ μ_1 ≤ d_max.

Proof. The lower bound follows by considering the Rayleigh quotient with the all-1s vector:

μ_1 = max_x (xᵀAx)/(xᵀx) ≥ (1ᵀA1)/(1ᵀ1) = (Σ_{i,j} A(i,j))/n = (Σ_i d(i))/n = d_ave.

To prove the upper bound, let φ_1 be an eigenvector of eigenvalue μ_1. Let v be the vertex on which it takes its maximum value, so φ_1(v) ≥ φ_1(u) for all u, and assume without loss of generality that φ_1(v) ≠ 0. We have

μ_1 = (Aφ_1)(v)/φ_1(v) = (Σ_{u~v} φ_1(u))/φ_1(v) = Σ_{u~v} φ_1(u)/φ_1(v) ≤ Σ_{u~v} 1 = d(v) ≤ d_max.   (3.2)

Lemma 3.4.2. If G is connected and μ_1 = d_max, then G is d_max-regular.

Proof. If we have equality in (3.2), then it must be the case that d(v) = d_max and φ_1(u) = φ_1(v) for all (u,v) ∈ E. Thus, we may apply the same argument to every neighbor of v. As the graph is connected, we may keep applying this argument to neighbors of vertices to which it has already been applied to show that φ_1(z) = φ_1(v) and d(z) = d_max for all z ∈ V.
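A quick numerical check of Lemma 3.4.1 on an irregular graph (my own example, not from the notes): the star K_{1,4} has d_ave = 8/5, d_max = 4, and μ_1 = 2, so both inequalities are strict.

```python
import numpy as np

# The star K_{1,4}: center vertex 0 joined to leaves 1..4.
n = 5
A = np.zeros((n, n))
A[0, 1:] = A[1:, 0] = 1

degrees = A.sum(axis=1)
d_ave, d_max = degrees.mean(), degrees.max()
mu1 = np.linalg.eigvalsh(A)[-1]   # largest eigenvalue; here sqrt(4) = 2

print(d_ave <= mu1 <= d_max)      # True: 1.6 <= 2.0 <= 4.0
```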


3.5 The Corresponding Eigenvector

The eigenvector corresponding to the largest eigenvalue of the adjacency matrix of a graph is usually not a constant vector. However, it is always a positive vector if the graph is connected. This follows from the Perron-Frobenius theory. In fact, the Perron-Frobenius theory says much more, and it can be applied to adjacency matrices of strongly connected directed graphs. Note that these need not even be diagonalizable! We will defer a discussion of the general theory until we discuss directed graphs, which will happen towards the end of the semester. If you want to see it now, look at the third lecture from my notes from 2009.

In the symmetric case, the theory is made much easier by both the spectral theory and the characterization of eigenvalues as extreme values of Rayleigh quotients.

Theorem 3.5.1 (Perron-Frobenius, Symmetric Case). Let G be a connected weighted graph, let A be its adjacency matrix, and let μ_1 ≥ μ_2 ≥ ⋯ ≥ μ_n be its eigenvalues. Then

a. μ_1 ≥ −μ_n,
b. μ_1 > μ_2, and
c. the eigenvalue μ_1 has a strictly positive eigenvector.

Before proving Theorem 3.5.1, we will prove a lemma that will be useful in the proof and a few other places today. It says that non-negative eigenvectors of non-negative adjacency matrices of connected graphs must be strictly positive.

Lemma 3.5.2. Let G be a connected weighted graph (with non-negative edge weights), let A be its adjacency matrix, and assume that some non-negative vector φ is an eigenvector of A. Then, φ is strictly positive.

Proof. Assume by way of contradiction that φ is not strictly positive. So, there is some vertex u for which φ(u) = 0. Thus, there must be some edge (u,v) for which φ(u) = 0 but φ(v) > 0. We would then have

(Aφ)(u) = Σ_{(u,z)∈E} w(u,z) φ(z) ≥ w(u,v) φ(v) > 0,

as all the terms w(u,z) and φ(z) are non-negative. But, this must also equal μ φ(u) = 0, where μ is the eigenvalue corresponding to φ. This is a contradiction. So, we conclude that φ must be strictly positive.

Proof of Theorem 3.5.1. Let φ_1, …, φ_n be the eigenvectors corresponding to μ_1, …, μ_n.

We start with part c. Recall that

μ_1 = max_x (xᵀAx)/(xᵀx).

Let φ_1 be an eigenvector of μ_1, and construct the vector x such that x(u) = |φ_1(u)| for all u. We will show that x is an eigenvector of eigenvalue μ_1. We have xᵀx = φ_1ᵀφ_1. Moreover,

φ_1ᵀAφ_1 = Σ_{u,v} A(u,v) φ_1(u) φ_1(v) ≤ Σ_{u,v} A(u,v) |φ_1(u)| |φ_1(v)| = xᵀAx.

So, the Rayleigh quotient of x is at least μ_1. As μ_1 is the maximum possible Rayleigh quotient, the Rayleigh quotient of x must be μ_1 and x must be an eigenvector of μ_1. So, we now know that A has an eigenvector x that is non-negative. We can then apply Lemma 3.5.2 to show that x is strictly positive.

To prove part a, let φ_n be the eigenvector of μ_n and let y be the vector for which y(u) = |φ_n(u)|. In the spirit of the previous argument, we can again show that

|μ_n| = |φ_nᵀAφ_n| ≤ Σ_{u,v} A(u,v) y(u) y(v) ≤ μ_1 yᵀy = μ_1.

To show that the multiplicity of μ_1 is 1 (that is, μ_2 < μ_1), consider an eigenvector φ_2. As φ_2 is orthogonal to φ_1, it must contain both positive and negative values. We now construct the vector y such that y(u) = |φ_2(u)| and repeat the argument that we used for x. We find that

μ_2 = (φ_2ᵀAφ_2)/(φ_2ᵀφ_2) ≤ (yᵀAy)/(yᵀy) ≤ μ_1.

From here, we divide the proof into two cases. First, consider the case in which y is never zero. In this case, there must be some edge (u,v) for which φ_2(u) < 0 < φ_2(v). Then the above inequality must be strict because the edge (u,v) will make a negative contribution to φ_2ᵀAφ_2 and a positive contribution to yᵀAy.

We will argue by contradiction in the case that y has a zero value. In this case, if μ_2 = μ_1 then y will be an eigenvector of eigenvalue μ_1. This is a contradiction, as Lemma 3.5.2 says that a non-negative eigenvector cannot have a zero value. So, if y has a zero value then yᵀAy < μ_1 and μ_2 < μ_1 as well.

The following characterization of bipartite graphs follows from similar ideas.
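All three parts of Theorem 3.5.1 can be observed numerically. A minimal sketch (my own illustration, not from the notes), using the 5-cycle, which is connected and non-bipartite:

```python
import numpy as np

# The 5-cycle C5: connected, 2-regular, non-bipartite.
n = 5
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1

mus, vecs = np.linalg.eigh(A)          # ascending eigenvalue order
mus, vecs = mus[::-1], vecs[:, ::-1]   # reorder so mu_1 >= ... >= mu_n

phi1 = vecs[:, 0]
phi1 = phi1 if phi1[0] > 0 else -phi1  # eigenvectors are sign-ambiguous

print(mus[0] >= -mus[-1])  # part a: mu_1 >= -mu_n (strict here: 2 > 1.618)
print(mus[0] > mus[1])     # part b: mu_1 > mu_2
print(np.all(phi1 > 0))    # part c: strictly positive top eigenvector
```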

Proposition 3.5.3. If G is a connected graph, then μ_n = −μ_1 if and only if G is bipartite.

Proof. First, assume that G is bipartite. That is, we have a decomposition of V into sets U and W such that all edges go between U and W. Let φ_1 be the eigenvector of μ_1. Define

x(u) = φ_1(u) if u ∈ U, and
x(u) = −φ_1(u) if u ∈ W.

For u ∈ U, we have

(Ax)(u) = Σ_{(u,v)∈E} x(v) = −Σ_{(u,v)∈E} φ_1(v) = −μ_1 φ_1(u) = −μ_1 x(u).

Using a similar argument for u ∉ U, we can show that x is an eigenvector of eigenvalue −μ_1.

To go the other direction, assume that μ_n = −μ_1. We then construct y as in the previous proof, and again observe

|μ_n| = |φ_nᵀAφ_n| = |Σ_{u,v} A(u,v) φ_n(u) φ_n(v)| ≤ Σ_{u,v} A(u,v) y(u) y(v) ≤ μ_1 yᵀy = μ_1.

For this to be an equality, it must be the case that y is an eigenvector of μ_1, and so y = φ_1. For the first inequality above to be an equality, it must also be the case that all the terms φ_n(u) φ_n(v) have the same sign. In this case that sign must be negative. So, every edge goes between a vertex for which φ_n(u) is positive and a vertex for which φ_n(v) is negative. Thus, the signs of φ_n give the bi-partition.

The n-th eigenvalue, which is the most negative in the case of the adjacency matrix and is the largest in the case of the Laplacian, corresponds to the highest frequency vibration in a graph. Its corresponding eigenvector tries to assign as different as possible values to neighboring vertices. That is, it tries to assign a coloring. In fact, there are heuristics for finding k-colorings by using the k − 1 largest eigenvectors [AK97].
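Proposition 3.5.3 is easy to see on small cycles (an illustration of my own, not from the notes): even cycles are bipartite, odd cycles are not, and the spectrum reflects exactly that.

```python
import numpy as np

# For a connected bipartite graph the spectrum is symmetric: mu_n = -mu_1.
def cycle(n):
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
    return A

mus6 = np.linalg.eigvalsh(cycle(6))   # C6 is bipartite
mus5 = np.linalg.eigvalsh(cycle(5))   # C5 is odd, hence not bipartite

print(np.isclose(mus6[0], -mus6[-1]))   # True:  mu_n = -2 = -mu_1
print(np.isclose(mus5[0], -mus5[-1]))   # False: mu_n is about -1.618, mu_1 = 2
```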

3.6 Graph Coloring and Independent Sets

A coloring of a graph is an assignment of one color to every vertex in a graph so that each edge attaches vertices of different colors. We are interested in coloring graphs while using as few colors as possible. Formally, a k-coloring of a graph is a function c : V → {1, …, k} so that for all (u,v) ∈ E, c(u) ≠ c(v). A graph is k-colorable if it has a k-coloring. The chromatic number of a graph, written χ(G), is the least k for which G is k-colorable. A graph G is 2-colorable if and only if it is bipartite. Determining whether or not a graph is 3-colorable is an NP-complete problem. The famous 4-Color Theorem [AH77a, AH77b] says that every planar graph is 4-colorable.

A set of vertices S is independent if there are no edges between vertices in S. In particular, each color class in a coloring is an independent set. The problem of finding large independent sets in a graph is NP-complete, and it is very difficult to even approximate the size of the largest independent set in a graph. However, for some carefully chosen graphs one can obtain very good bounds on the sizes of independent sets by using spectral graph theory. We may later see some uses of this theory in the analysis of error-correcting codes and sphere packings.
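The formal definition of a proper k-coloring transcribes directly into code. A small sketch of my own (not from the notes), checked on the bipartite 4-cycle:

```python
import numpy as np

# A k-coloring c : V -> {1,...,k} is proper when c(u) != c(v)
# for every edge (u,v) of the graph.
def is_proper_coloring(A, c):
    n = len(c)
    return all(c[u] != c[v]
               for u in range(n) for v in range(u + 1, n)
               if A[u, v] != 0)

# The 4-cycle is bipartite, hence 2-colorable.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])

print(is_proper_coloring(A, [1, 2, 1, 2]))  # True
print(is_proper_coloring(A, [1, 1, 2, 2]))  # False: edge (0,1) is monochromatic
```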


3.7 Hoffman's Bound

Hoffman proved the following upper bound on the size of an independent set in a graph G.

Theorem 3.7.1. Let G = (V,E) be a d-regular graph. Then

α(G) ≤ n (−μ_n)/(d − μ_n).

I gave an overly-complicated proof of this theorem in 2009. Part of the complication was that I wrote my proof using the adjacency matrix. I will now give a very simple proof using the Laplacian matrix. In fact, I will prove a slightly stronger statement that does not require the graph to be regular.

Theorem 3.7.2. Let S be an independent set in G, and let d_ave(S) be the average degree of a vertex in S. Then,

|S| ≤ n (1 − d_ave(S)/λ_n).

To compare these two, observe that in the d-regular case d_ave(S) = d and λ_n = d − μ_n. So, we have

n (1 − d_ave(S)/λ_n) = n (λ_n − d)/λ_n = n (−μ_n)/(d − μ_n).

Proof. Let S be an independent set of vertices and let d(S) be the sum of the degrees of vertices in S. Recall that

λ_n = max_x (xᵀLx)/(xᵀx).

We also know that the vector x that maximizes this quantity is ψ_n, and that ψ_n is orthogonal to 1. So, we can refine this expression to

λ_n = max_{x⊥1} (xᵀLx)/(xᵀx).

As we did last class, we will consider the vector

x = χ_S − s·1, where s = |S|/n.

As S is independent, we have

xᵀLx = |∂(S)| = d(S) = d_ave(S)|S|.

We also computed the square of the norm of x last class, and it comes out to

xᵀx = n(s − s²).


So, we have

λ_n ≥ (xᵀLx)/(xᵀx) = d_ave(S)|S| / (n(s − s²)) = d_ave(S)/(1 − s).

Re-arranging terms, this gives

1 − d_ave(S)/λ_n ≥ s,

which is equivalent to the claim of the theorem.

Hoffman's bound using the adjacency matrix eigenvalues does not necessarily hold for irregular graphs. However, the bound that one would expect to get from it on the chromatic number does. As a k-colorable graph must have an independent set of size at least n/k, the expected theorem follows.

Theorem 3.7.3.

χ(G) ≥ 1 − μ_1/μ_n = 1 + μ_1/(−μ_n).

Using Theorem 3.7.2, we can prove (and you will do so on the first problem set)

χ(G) ≥ λ_n/(λ_n − d_ave).

These are the same in the regular case. I'm not sure which is better, or if they are even comparable, in general.
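Hoffman's bound is tight for some graphs. A check of my own (not from the notes) on the Petersen graph, which is 3-regular on n = 10 vertices with μ_n = −2, so Theorem 3.7.1 gives α(G) ≤ 10·2/5 = 4, exactly its independence number:

```python
import numpy as np
from itertools import combinations

# Kneser construction of the Petersen graph: vertices are the 2-subsets
# of {0,...,4}, with an edge between each pair of disjoint subsets.
verts = list(combinations(range(5), 2))
n = len(verts)
A = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if not set(verts[i]) & set(verts[j]):
            A[i, j] = 1

mus = np.linalg.eigvalsh(A)   # ascending: mus[0] is mu_n = -2
d = 3
bound = n * (-mus[0]) / (d - mus[0])   # Hoffman: n(-mu_n)/(d - mu_n)
print(round(bound))                    # 4
```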

3.8 Wilf's Theorem

We did not get to the following material in today's lecture, but I assume that you will read it.

While we may think of μ_1 as being related to the average degree, it does behave differently. In particular, if we remove the vertex of smallest degree from a graph, the average degree can increase. On the other hand, μ_1 can only decrease when we remove a vertex. Let's prove that now.

Lemma 3.8.1. Let A be a symmetric matrix with largest eigenvalue μ_1. Let B be the matrix obtained by removing the last row and column from A, and let β_1 be the largest eigenvalue of B. Then, β_1 ≤ μ_1.

Proof. For any vector y ∈ ℝ^{n−1}, we have

yᵀBy = (y; 0)ᵀ A (y; 0).

So, for y an eigenvector of B of eigenvalue β_1,

β_1 = (yᵀBy)/(yᵀy) = ((y; 0)ᵀA(y; 0)) / ((y; 0)ᵀ(y; 0)) ≤ max_{x∈ℝⁿ} (xᵀAx)/(xᵀx) = μ_1.

Of course, this holds regardless of which row and column we remove, as long as they are the same row and column.

It is easy to show that every graph is (d_max + 1)-colorable. Assign colors to the vertices one-by-one. As each vertex has at most d_max neighbors, there is always some color one can assign that vertex that is different from those assigned to its neighbors. The following theorem of Wilf improves upon this bound.

Theorem 3.8.2.

χ(G) ≤ ⌊μ_1⌋ + 1.

Proof. We prove this by induction on the number of vertices in the graph. To ground the induction, consider the graph with one vertex and no edges. It has chromatic number 1 and largest eigenvalue zero.¹ Now, assume the theorem is true for all graphs on n − 1 vertices, and let G be a graph on n vertices. By Lemma 3.4.1, G has a vertex of degree at most ⌊μ_1⌋. Let v be such a vertex and let G − {v} be the graph obtained by removing this vertex. By Lemma 3.8.1 and our induction hypothesis, G − {v} has a coloring with at most ⌊μ_1⌋ + 1 colors. Let c be any such coloring. We just need to show that we can extend c to v. As v has at most ⌊μ_1⌋ neighbors, there is some color in {1, …, ⌊μ_1⌋ + 1} that does not appear among its neighbors, and which it may be assigned. Thus, G has a coloring with ⌊μ_1⌋ + 1 colors.

For an example, consider a path graph with at least 3 vertices. We have d_max = 2, but μ_1 < 2.
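The closing path-graph example can be verified directly (my own sketch, not from the notes): for a path on m ≥ 3 vertices, the greedy bound gives d_max + 1 = 3 colors, while Wilf's bound ⌊μ_1⌋ + 1 gives the true chromatic number 2.

```python
import numpy as np

# Path graph P_m as two off-diagonal bands of an adjacency matrix.
m = 6
A = np.diag(np.ones(m - 1), 1) + np.diag(np.ones(m - 1), -1)

mu1 = np.linalg.eigvalsh(A)[-1]   # mu_1 = 2*cos(pi/(m+1)) < 2

print(mu1 < 2)                    # True, even though d_max = 2
print(int(np.floor(mu1)) + 1)     # 2, matching chi(P_m): paths are bipartite
```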

References

[AH77a] Kenneth Appel and Wolfgang Haken. Every planar map is four colorable. Part I: Discharging. Illinois Journal of Mathematics, 21:429–490, 1977.

[AH77b] Kenneth Appel and Wolfgang Haken. Every planar map is four colorable. Part II: Reducibility. Illinois Journal of Mathematics, 21:491–567, 1977.

[AK97] Alon and Kahale. A spectral technique for coloring random 3-colorable graphs. SIAM Journal on Computing, 26, 1997.

¹ If this makes you uncomfortable, you could use both graphs on two vertices.