Witnesses for Boolean Matrix Multiplication and for Shortest Paths

Noga Alon*

Zvi Galil†   Oded Margalit‡   Moni Naor§

Extended Abstract

Summary of results

The subcubic (O(n^ω) for ω < 3) algorithms to multiply Boolean matrices do not provide the witnesses; namely, they compute C = AB, but if C_ij = 1 they do not find an index k (a witness) such that A_ik = B_kj = 1. We design a deterministic algorithm for computing the matrix of witnesses that runs in Õ(n^ω) time, where Õ(n^ω) denotes O(n^ω (log n)^O(1)). The subcubic methods to compute the shortest distances between all pairs of vertices also do not provide witnesses; namely, they compute the shortest distances but do not generate information for quickly computing the paths themselves. A witness for a shortest path from v_i to v_j is an index k such that v_k is the first vertex on such a path. We describe subcubic methods to compute such witnesses for several versions of the all-pairs shortest paths problem. As a result, we derive shortest paths algorithms that provide a characterization of the shortest paths in addition to the shortest distances, in essentially the same time needed for computing the distances: Õ(n^((3+ω)/2)) in the directed case and Õ(n^ω) time in the undirected case. We also design an algorithm that computes witnesses for the transitive closure in the same time needed to compute witnesses for Boolean matrix multiplication.

*Department of Mathematics, Raymond and Beverly Sackler Faculty of Exact Sciences, Tel Aviv University, Tel Aviv, ISRAEL. Research supported in part by a USA-Israeli BSF grant.

†Department of Computer Science, Raymond and Beverly Sackler Faculty of Exact Sciences, Tel Aviv University, Tel Aviv, Israel, and Department of Computer Science, Columbia University, New York, NY 10027, USA.

‡Department of Computer Science, Raymond and Beverly Sackler Faculty of Exact Sciences, Tel Aviv University, Tel Aviv, Israel.

§IBM Almaden Research Center, 650 Harry Road, San Jose, CA 95120, USA.

1 Introduction

Consider a Boolean matrix multiplication: C = AB, C_ij = ⋁_{k=1}^n (A_ik ∧ B_kj). The n^3-time method that evaluates these expressions gives, for every i, j for which C_ij = 1, all the k's for which A_ik = B_kj = 1. The subcubic methods, on the other hand, consider A and B as matrices of integers and do not provide any of these k's. We call a k such that A_ik = B_kj = 1 a witness (for the fact that C_ij = 1). We want to compute, in addition to the matrix C, a matrix of witnesses. When there is more than one witness for a given i and j, we are satisfied with one such witness. We use O(n^ω) to denote the running time of some subcubic algorithm for Boolean matrix multiplication. All our algorithms can be derived from any such algorithm, yielding a corresponding time bound as a function of ω. The best asymptotic bound known at present is the one with exponent ω < 2.376, due to Coppersmith and Winograd [3]. For two functions f(n) and g(n) we let g(n) = Õ(f(n)) denote the statement that g(n) is O(f(n)(log n)^O(1)).
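As a concrete baseline, the cubic evaluation just described yields witnesses directly. The sketch below is illustrative only (the function name and the encoding of matrices as nested 0/1 lists are our own); it is not the subcubic algorithm developed later in the paper.

```python
def witnesses_naive(A, B):
    """Boolean product with witnesses in O(n^3) time.

    C[i][j] = OR over k of (A[i][k] AND B[k][j]), and W[i][j] is one
    index k with A[i][k] = B[k][j] = 1, or -1 when C[i][j] = 0.
    """
    n = len(A)
    C = [[0] * n for _ in range(n)]
    W = [[-1] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                if A[i][k] and B[k][j]:
                    C[i][j] = 1
                    W[i][j] = k  # one witness suffices; stop at the first
                    break
    return C, W
```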

Several researchers, including Seidel [6], Karger (personal communication) and the first three authors, observed that there is a simple randomized algorithm that computes witnesses in Õ(n^ω) time. In Section 2 we describe a deterministic algorithm for computing the witnesses in Õ(n^ω) time. It is essentially a derandomization of a modified version of the simple randomized algorithm, and relies heavily on the known constructions of small sample spaces with almost independent random variables. We also outline an alternative approach that gives a slightly worse running time but may still be useful for matrices of moderate size.

Our motivation for studying the computation of witnesses for Boolean matrix multiplication is related to our work on the all-pairs shortest paths problem. We use the following notation. D = {d_ij}, for i, j = 1, ..., n, is the matrix of edge lengths; d_ij = +∞ in case there is no edge from v_i to v_j. In the positive case d_ij ∈ {1, 2, ..., M, +∞} and in the unrestricted case d_ij ∈ {0, ±1, ±2, ..., ±M, +∞}. D* = {d*_ij} is the matrix of shortest distances. In an earlier paper [2] the first three authors designed subcubic algorithms for computing all-pairs shortest distances of directed graphs with integer edge lengths whose absolute value is bounded by M. We denote the problem and its time bound by APSD(n, M), where n is the number of vertices in the graph. We showed that APSD(n, M) = O((Mn)^ν), where ν = (3 + ω)/2. For ω < 2.376, we have ν < 2.688. In a more recent work [4] the second and third authors improved the dependence on M and obtained better bounds for undirected graphs, in which case APSD(n, M) = O(M^((ω+1)/2) n^ω log n). A simple O(n^ω log n) algorithm for undirected APSD(n, 1) was discovered independently by Seidel [6], but it does not seem to be extendable to larger edge lengths. All these algorithms do not provide any subcubic deterministic way of finding the shortest paths themselves, only the shortest distances.
One cannot have a subcubic algorithm that computes the shortest paths between all pairs of vertices, simply because in the example depicted in Figure 1 there are more than n^3/27 edges in all shortest paths. One may use the following definition to obtain a more concise representation of all shortest paths: a witness for a shortest path from v_i to v_j is an index k such that v_k is the first vertex on such a path. This definition is certainly sufficient in the case of positive edge lengths: a shortest path can easily be constructed from these witnesses. The definition is insufficient in the case of nonpositive cycles. If d*_ij = −∞ we want to be able to construct from the witnesses a simple path from v_i to v_j together with a vertex v_k on the path and a negative cycle containing v_k (i, j and k need not be distinct). This leads to the need to define witnesses for paths, not necessarily shortest paths.

[Figure 1: a group of n/3 vertices, a path of n/3 vertices, and a third group of n/3 vertices; every shortest path from the first group to the third traverses the n/3-vertex path, so listing all shortest paths explicitly requires more than n^3/27 edges.]

Consider the transitive closure of a directed graph. One could try to define a witness for a path identically to the definition of a witness for a shortest path: a witness for a path from v_i to v_j is an index k such that v_k is the first vertex on such a path. Any method for computing witnesses for Boolean matrix multiplication can be immediately used for computing these "witnesses": compute witnesses for AT, where A is the incidence matrix and T the transitive closure. Unfortunately this definition is inappropriate, as can be seen in Figure 2: v_k is a possible witness for the path from v_i to v_j, but it is a bad choice which leads to a cycle.

[Figure 2: a path from v_i to v_j with a back edge forming a cycle through v_i and v_k; choosing v_k as the first vertex leads around the cycle instead of toward v_j.]

We require a matrix of witnesses for the transitive closure to satisfy the following condition: if a path from v_i to v_j exists, then such a path can be constructed by following the witnesses. Namely, there is a path v_i = v_{i_0}, ..., v_{i_k} = v_j and, for 1 ≤ r ≤ k, i_r is the witness for the path from v_{i_{r−1}} to v_j. In Section 3 we give an Õ(n^ω) algorithm that computes witnesses for the transitive closure. Coming back to the shortest paths problem, we would like to compute, in addition to the matrix D* of shortest distances, also:

1. Witnesses for shortest paths.

2. A simple negative cycle for each i such that d*_ii = −∞.

Consequently, shortest paths of finite length can easily be obtained from 1. On the other hand, a shortest path of length −∞ can be represented as a (possibly empty) path from 1 together with a negative cycle (from 2). In Section 4 we first give an algorithm for computing witnesses for shortest paths when edge lengths are positive, then when edge lengths are nonnegative. Finally we give an algorithm that generates the characterization of shortest paths in the general case. Its running time is Õ(n^((3+ω)/2)).

Summarizing, we get the following bounds for APSP(n, 1): Õ(n^((3+ω)/2)) in the directed case and Õ(n^ω) in the undirected case. Recall that the time bounds for APSD(n, 1) are O(n^((3+ω)/2)) in the directed case and O(n^ω log n) in the undirected case. Indeed, some of the Boolean matrix multiplications are now augmented to compute witnesses as well, which explains our motivation to study the latter. (We believe that witnesses for Boolean matrix multiplication will be found useful elsewhere as well.) The fact that the bounds for APSP are obtained from the bounds for APSD by adding polylogarithmic factors is not immediate. This would be a simple consequence of our algorithm for matrix multiplication with witnesses if the algorithms just added witnesses to each Boolean matrix multiplication. However, this reason is not the only one needed to explain this coincidence. More details are given in Section 4.

2 Boolean matrix multiplication with witnesses

All the matrices in this section are n by n matrices, unless otherwise specified. If M is such a matrix, we let M_ij denote the entry in its ith row and jth column. Let A and B be two matrices with {0,1} entries, and let C be their product over the integers. Our objective is to find witnesses for all the positive entries of C, i.e., for each entry C_ij > 0 of C we wish to find a k such that A_ik = B_kj = 1. This is clearly equivalent to the problem of finding witnesses in the Boolean case. As observed by several researchers (including Seidel, Karger and the first three authors), there is a simple randomized algorithm that solves this problem in expected running time Õ(n^ω). Here we consider deterministic algorithms for the problem. Our best algorithm, described in the next subsection, is, in a sense, a derandomized version of the simple randomized solution, and its running time is Õ(n^ω). The derandomization requires several modifications of the straightforward randomized algorithm, together with an interesting application of the known constructions of [5] (or [1]) of almost c-wise independent random variables in small sample spaces.
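The simple randomized algorithm is not spelled out here, but one standard way to realize it is the following (our reconstruction, with assumed names, not necessarily the exact procedure the authors have in mind): sample each index k with probability 2^(−r) at each scale r, so that an entry with roughly 2^r witnesses retains exactly one sampled witness with constant probability, in which case an index-weighted product reveals it. The per-entry sums below stand in for the two integer matrix products a real implementation would compute with a fast multiplication routine.

```python
import math
import random

def randomized_witnesses(A, B, seed=0):
    """Witness matrix for C = AB over the integers.

    W[i][j] is some k with A[i][k] = B[k][j] = 1, or -1 where C[i][j] = 0.
    Expected Õ(n^ω) time when the per-scale sums are computed as fast
    integer matrix products rather than the naive loops used here.
    """
    rng = random.Random(seed)
    n = len(A)
    C = [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
         for i in range(n)]
    W = [[-1] * n for _ in range(n)]
    need = {(i, j) for i in range(n) for j in range(n) if C[i][j] > 0}
    scales = max(1, math.ceil(math.log2(n)) + 1)
    while need:
        for r in range(scales):
            # keep each index with probability 2^-r
            S = [k for k in range(n) if rng.random() < 2.0 ** (-r)]
            for (i, j) in list(need):
                cnt = sum(A[i][k] * B[k][j] for k in S)
                if cnt == 1:  # exactly one sampled witness: identify it
                    W[i][j] = sum(A[i][k] * B[k][j] * k for k in S)
                    need.discard((i, j))
    return W
```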

2.1 The algorithm

The first simple observation is the fact that if E and F are two matrices with {0,1} entries and G = EF, then one multiplication of matrices with entries at most n suffices for finding witnesses for all the entries of G which are precisely 1. Indeed, simply replace every 1-entry in the kth row of F by k (for all 1 ≤ k ≤ n) to get a matrix F', and compute G' = EF'. Now observe that if G_ij = 1 and G'_ij = k, then k is a witness for G_ij.

Define c = ⌈log log n + 9⌉ and ε = 8/2^c. For two matrices E and F with {0,1} entries define G = E ∧ F by G_ij = E_ij ∧ F_ij.
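In code, the observation above reads as follows (a sketch with our own names; 0-based indices are shifted by one so that index 0 is distinguishable from an absent entry, and the naive products stand in for one extra fast matrix multiplication):

```python
def unique_witnesses(E, F):
    """For G = E*F (integer product of 0/1 matrices), return a matrix U
    with U[i][j] = the unique witness k whenever G[i][j] == 1, and None
    elsewhere.  One additional integer matrix product suffices."""
    n = len(E)
    # F': each 1-entry in row k of F becomes k+1 (1-based, so 0 means "no entry")
    Fp = [[(k + 1) if F[k][j] else 0 for j in range(n)] for k in range(n)]
    G  = [[sum(E[i][k] * F[k][j]  for k in range(n)) for j in range(n)]
          for i in range(n)]
    Gp = [[sum(E[i][k] * Fp[k][j] for k in range(n)) for j in range(n)]
          for i in range(n)]
    # When G[i][j] == 1 there is exactly one witness k, and G'[i][j] = k+1.
    return [[Gp[i][j] - 1 if G[i][j] == 1 else None for j in range(n)]
            for i in range(n)]
```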

Here is an outline of the algorithm. Besides A, B and C = AB it uses two additional matrices, R and D. The way to perform steps 3c and 3d will be described later.

While not all witnesses are known:

1. Let L denote the set of all positive entries of C for which there are no known witnesses.

2. Let R be the all-1 matrix.

3. Perform the following ⌈1 + 3 log_{4/3} n⌉ times:

(a) D ← A(B ∧ R) (the matrix multiplication is over the integers).

(b) Let L' denote the set of all entries of D in L which are at most c.

(c) Find witnesses for all entries in L'.

(d) R ← a good matrix (see the definition of good below).

A matrix R is good (in step 3d above) if the following two conditions hold:

a) The total sum of the entries of D = A(B ∧ R) in L is at most 3/4 of what this sum was while using the previous matrix R. (Observe that this guarantees that after 1 + 3 log_{4/3} n iterations all these entries of D will vanish.)

b) The fraction of entries of D in L that go from a value bigger than c to 0 is at most ε.

Lemma 1. If R ← R ∧ S in step 3d, where S is a random 0-1 matrix, then the new R is good with probability at least 1/6.

The lemma follows from the following three claims.

Claim 1. The probability that the sum of entries of D in L goes down by at least a factor of 3/4 is at least 1/3.

To see this, observe that the expected sum of entries of D in L goes down by 1/2. Thus, the claim follows from Markov's inequality. □

Claim 2. The probability that a fixed entry of D which is at least c drops down to 0 is at most 1/2^c.

This is obvious. Observe that the claim holds even if we only assume that every c entries of S are independent. □

Claim 3. The probability that more than a fraction ε of the entries of D in L drop from at least c to 0 is at most (1/2^c)(1/ε) = 1/8.

This follows from Claim 2 by Markov's inequality. Since 1/3 − 1/8 > 1/6, the lemma follows. □

Define δ = 1/2^(c+1). The crucial point is to observe that the proof of the above lemma still holds, with almost no change, if the matrix S is not totally random but its entries are chosen from a c-wise δ-dependent distribution in the sense of [5], [1]. Recall that if m random variables whose range is {0,1} are c-wise δ-dependent, then every subset of i ≤ c of them attains each of the possible 2^i configurations of 0 and 1 with probability that deviates from 1/2^i by at most δ.

Lemma 2. If R ← R ∧ S in step 3d, where the entries of S are chosen as n^2 random variables that are c-wise δ-dependent, then the new R is good with probability at least 1/12 − 2δ.
We note that in fact it suffices to choose only one column and copy it n times. The proof is by the following three modified claims, whose proofs are analogous to those of the corresponding previous ones.

Claim 4. The probability that the sum of entries of D in L goes down by at least a factor of 3/4 is at least 1/3 − 2δ. □

Claim 5. The probability that a fixed entry of D which is at least c drops down to 0 is at most 1/2^c + δ. □

Claim 6. The probability that more than a fraction ε of the entries of D in L drop from at least c to 0 is at most (1/2^c + δ)(1/ε) < 2(1/2^c)(1/ε) = 1/4. □

The lemma follows from the above three claims. □

As shown in [5] and in [1], there are explicit probability spaces with n^2 random variables which are c-wise δ-dependent, whose size is ((log n) · c · (1/δ))^(2+o(1)), which is less than, e.g., O((log n)^5). Moreover, these spaces can easily be constructed in time negligible with respect to the total running time of our algorithm. Now suppose that in step 3d all the matrices S defined by such a probability space are searched until a good one is found. Checking whether a matrix is good requires only a matrix multiplication plus O(n^2) operations. Therefore the inner loop (starting at step 3) takes polylog n times matrix multiplication time.

It is important to note that during the performance of step 3d, while considering all possible matrices S provided by our distribution, we can accomplish step 3c as well. This is true since c-wise δ-dependence guarantees that every entry in L' will drop to precisely 1 for some of the matrices S, and hence, by the observation at the beginning of this subsection, if we replace each matrix multiplication in the search for a good S by two matrix multiplications as described in that observation, we complete steps 3c and 3d together.

In every iteration of the inner loop 3, at most an ε fraction of the entries of L are "thrown" (i.e., their witness will not be found in this iteration of the outer loop). Therefore at least a (1 − ε)^(1 + 3 log_{4/3} n) fraction of the entries of D in L will not be thrown during the completion of these iterations. For those entries, which are at least 1/2 of the entries in L, a witness is found. Therefore, only O(log n) iterations of the outer loop are required, implying the desired Õ(n^ω) total running time.

We have thus proved the following:

Theorem 1. The witnesses for the Boolean multiplication of two n by n matrices can be found in deterministic Õ(n^ω) time.

2.2 An alternative approach

The witnesses for Boolean matrix multiplication can be computed in a different manner. Although the running time obtained is slightly worse than that of our previous algorithm, it may give better performance for matrices of moderate size.

Here is a rough outline of the approach. We design a sequence of algorithms. The first algorithm, ALG0, is the naive cubic way: test all the n possible witnesses for every positive entry C_ij. The next algorithm, ALG1, is the following. Consider each of the two matrices A and B as an L × L block matrix where each block is of size n/L × n/L. Multiply the two block matrices using the trivial L^3-time algorithm, using fast matrix multiplication for any multiplication of two blocks. Now we know, for each positive entry C_ij, a product of a block of A and a block of B which contains a witness. Use ALG0 for finding witnesses inside that block. The running time is

O( L^3 (n/L)^ω + L^2 (n/L)^3 ).

An appropriate choice of L gives an O(n^((9−2ω)/(4−ω))) time algorithm. The sequence starting with these two algorithms can be extended, where each algorithm uses the previous one, and the time complexity converges to O(n^(ω+O(log^(−1/3) n))). This requires several additional ideas, including a generalization of the problem to that of finding witnesses for a prescribed subset of entries of the product of two rectangular matrices, given certain information on the location of these witnesses. The details are complicated and, since the running time is inferior to that of our previous algorithm, we do not include them. For any given problem, one can apply any of the algorithms from the sequence above. It seems that for certain input sizes, one of the algorithms ALG_s for some small integer s may actually be faster than the algorithm of the previous subsection.
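The stated bound for ALG1 follows by balancing the two terms of its running time; a reconstruction of the calculation:

```latex
L^{3}\Bigl(\frac{n}{L}\Bigr)^{\omega} = L^{2}\Bigl(\frac{n}{L}\Bigr)^{3}
\;\Longleftrightarrow\; L^{3-\omega}\, n^{\omega} = L^{-1}\, n^{3}
\;\Longleftrightarrow\; L = n^{(3-\omega)/(4-\omega)},
\qquad
T(n) = O\!\Bigl(\frac{n^{3}}{L}\Bigr) = O\!\bigl(n^{(9-2\omega)/(4-\omega)}\bigr).
```

For ω < 2.376 this exponent is about 2.62, consistent with the claim that ALG1 is slightly worse than the Õ(n^ω) algorithm of the previous subsection.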

3 Computing witnesses for the transitive closure

In the introduction we explained why the immediate solution that computes witnesses for AT does not work. Another simple solution is to add lengths to the edges and compute witnesses for shortest paths. However, the best time for computing only the distances in the directed case (even without the witnesses for the paths) is O(n^((3+ω)/2)). The only reason that the immediate solution does not work is the cycles. So we first find the strongly connected components of G, then we contract them into new vertices. Now we can use the immediate algorithm to solve the new problem. Lastly, we "open" the contracted vertices and transform the solution into a solution for the original problem. More formally:

Algorithm

1. Compute the strongly connected components of the input graphG= (V;E), whereV=

fv1;:::;vng. Denote byV0=fv01;v02;:::;v0mgthe set of strongly connected components ofG, wherev0i=fvi1;vi2;:::;virig. We build the contracted graphG0= (V0;E0), where E

0=f(v0i;v0j) :9(vix;vjy)2Eg:Each edge (v0i;v0j)2E0is arbitrarily associated with one

edge (vix;vjy)2E. This can be done inO(n2) time.

2. Solve the transitive closure problem for the graph G', and denote the solution by T'. Compute witnesses, denoted W', for the Boolean matrix multiplication T'A'. This step can be done in Õ(n^ω) time.

3. For each strongly connected component, find witnesses for the transitive closure (which is a clique). Denote the witnesses matrix for that problem by Ŵ; this matrix is defined only for pairs which are in the same strongly connected component. This can be done in O(n^2) time, as described in Algorithm 3.1.

4. Expand the solution of the contracted problem into a solution for the whole problem. This can be done in O(n^2) time, as described in Algorithm 3.2.

Theorem 2. The algorithm above computes the matrix of witnesses in time Õ(n^ω).
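Step 1 of the algorithm above can be sketched as follows. The SCC routine here (Kosaraju's two-pass algorithm) is an assumed standard choice, not prescribed by the text, which only requires that the components and the edge association be computable in O(n^2) time; all function names are ours.

```python
def sccs(n, adj):
    """Strongly connected components via Kosaraju's two-pass algorithm.
    Returns (number of components, comp) where comp[v] is v's component id."""
    order, seen = [], [False] * n
    for s in range(n):                      # first pass: record finish order
        if seen[s]:
            continue
        seen[s] = True
        stack = [(s, iter(adj[s]))]
        while stack:
            u, it = stack[-1]
            for w in it:
                if not seen[w]:
                    seen[w] = True
                    stack.append((w, iter(adj[w])))
                    break
            else:
                order.append(u)
                stack.pop()
    radj = [[] for _ in range(n)]           # reverse graph
    for u in range(n):
        for w in adj[u]:
            radj[w].append(u)
    comp, c = [-1] * n, 0
    for v in reversed(order):               # second pass: reverse DFS
        if comp[v] != -1:
            continue
        comp[v], stack = c, [v]
        while stack:
            u = stack.pop()
            for w in radj[u]:
                if comp[w] == -1:
                    comp[w] = c
                    stack.append(w)
        c += 1
    return c, comp

def contract(edges, comp):
    """Build E' and associate each contracted edge with one original edge,
    as in step 1: assoc[(c_u, c_v)] is an arbitrary edge (u, v) of G."""
    assoc = {}
    for (u, v) in edges:
        cu, cv = comp[u], comp[v]
        if cu != cv:
            assoc.setdefault((cu, cv), (u, v))
    return set(assoc), assoc
```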

3.1 Computing witnesses for a strongly connected graph

The algorithm has two stages. In the first, we perform breadth-first search (BFS) from one of the vertices, v_0. In the process we generate a BFS tree T. For each edge (u, v) ∈ T and every descendant w of v we set W(u, w) ← v. In the second stage, we use the reverse edges and perform another BFS from v_0. We process a vertex when it is first visited. Assume we first enter u using edge (u, v). We then consider all w ∈ V and, if W(u, w) is undefined, we set it to v. Obviously, each stage takes O(n^2) time.

Correctness follows by induction. The induction hypothesis states that for every processed vertex u, and every w, starting with u and following W we obtain a simple path from u to w. The base is true because the first stage essentially processes v_0. For the induction step, assume we process u. If W(u, w) = z is defined, it was defined in the first stage, (u, z) is in the BFS tree of the first stage, and following W we follow a path on the tree from u to w. If it is undefined, we set W(u, w) ← v, where v was processed before. The claim now follows from the induction hypothesis.
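A sketch of the two stages (our encoding: W[u][w] holds the vertex index of the witness, with None on the diagonal; the graph is assumed strongly connected, as throughout this subsection):

```python
from collections import deque

def scc_witnesses(n, adj, v0=0):
    """Witnesses for the transitive closure of a strongly connected graph,
    in O(n^2) time.  W[u][w] is the first vertex after u on a path from
    u to w; adj[u] lists the out-neighbours of u."""
    W = [[None] * n for _ in range(n)]

    # Stage 1: BFS tree T rooted at v0.  For a tree edge (u, v) and every
    # descendant w of v, the tree path gives W[u][w] = v.
    parent, seen = [None] * n, [False] * n
    seen[v0], order, q = True, [], deque([v0])
    while q:
        u = q.popleft()
        order.append(u)
        for v in adj[u]:
            if not seen[v]:
                seen[v], parent[v] = True, u
                q.append(v)
    for w in order:
        v, u = w, parent[w]
        while u is not None:        # walk up the tree path ending at w
            W[u][w] = v
            v, u = u, parent[u]

    # Stage 2: BFS from v0 over the reversed edges.  When u is first
    # reached via an original edge (u, v), every still-undefined W[u][w]
    # becomes v: go to v first, then follow v's already-defined witnesses.
    radj = [[] for _ in range(n)]
    for u in range(n):
        for v in adj[u]:
            radj[v].append(u)
    seen = [False] * n
    seen[v0], q = True, deque([v0])
    while q:
        v = q.popleft()
        for u in radj[v]:           # (u, v) is an edge of the graph
            if not seen[u]:
                seen[u] = True
                for w in range(n):
                    if w != u and W[u][w] is None:
                        W[u][w] = v
                q.append(u)
    return W
```

Following W from any u toward any w then traces a path of at most n edges, which is exactly the condition required of transitive-closure witnesses.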

3.2 Joining solutions

Examine the solution for G'. Suppose that W'(i, j) = k; by the definition of a witness, there exists an edge (v'_i, v'_k). Let (v_{i_x}, v_{k_y}) be the edge of G associated with it. Then

W(v_{i_{k1}}, v_{j_{k2}}) = v_{k_y}                  if k1 = x,
W(v_{i_{k1}}, v_{j_{k2}}) = Ŵ(v_{i_{k1}}, v_{i_x})   otherwise.

The time complexity of this algorithm is O(n^2).

4 Finding paths
