
Modified Log-Sobolev Inequalities, Mixing and Hypercontractivity

[Extended Abstract]

Sergey Bobkov*
Department of Mathematics
University of Minnesota, Minneapolis, MN
bobkov@math.umn.edu

Prasad Tetali†
School of Mathematics and College of Computing
Georgia Tech, Atlanta, GA
tetali@math.gatech.edu

* Research supported in part by NSF Grant DMS-0103929.
† Research supported in part by NSF Grant No. DMS-0100289; research done while visiting Microsoft Research.

ABSTRACT

Motivated by (the rate of information loss or) the rate at which the entropy of an ergodic Markov chain relative to its stationary distribution decays to zero, we study modified versions of the standard logarithmic Sobolev inequality in the discrete setting of finite Markov chains and graphs. These inequalities turn out to be weaker than the standard log-Sobolev inequality, but stronger than the Poincaré (spectral gap) inequality. We also derive a hypercontractivity formulation equivalent to our main modified log-Sobolev inequality, which might be of independent interest. Finally, we show that, in contrast with the spectral gap, for bounded-degree expander graphs various log-Sobolev-type constants go to zero with the size of the graph.

Categories and Subject Descriptors

G.3 [Mathematics of Computing]: Probability & Statistics - Markov processes, Probabilistic Algorithms

General Terms

Algorithms, Theory

Keywords

Spectral gap, Entropy decay, Sobolev Inequalities

1. INTRODUCTION

Let (M, P, π) denote an ergodic Markov chain with a finite state space M, transition probability matrix P, and stationary distribution π. For f, g : M → R, let E(f, g) denote the Dirichlet form defined by

E(f,g) \;=\; -E_\pi(f\,Lg) \;=\; -\sum_{x\in M} f(x)\,Lg(x)\,\pi(x), \qquad (1.1)

where −L = I − P is the so-called Laplacian matrix. The spectral gap of P, that is, the smallest non-zero eigenvalue λ₁ of −L, can then be defined as the optimal positive constant in

\lambda_1\, \mathrm{Var}_\pi(f) \;\le\; E(f,f) \qquad (1.2)

over all f : M → R. As usual, Var_π f = E_π f² − (E_π f)².

Note that one arrives at such a functional (or variational) definition of the spectral gap in a natural way when one considers the rate of decay of the variance of the distribution of the chain with respect to the stationary distribution. More formally, working in the technically easier continuous time, let μ_t = μ₀P_t be the distribution of the chain at time t ≥ 0, where P_t denotes the semigroup generated by L:

P_t \;=\; e^{tL} \;=\; \sum_{n=0}^{\infty} \frac{t^n L^n}{n!}.

Let f_t = μ_t/π denote the density of μ_t with respect to π, i.e., f_t(x) = μ_t(x)/π(x) for all x ∈ M.
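To make these definitions concrete, the following is a minimal numerical sketch (not from the paper; the 3-state chain and the helper names `dirichlet` and `var_pi` are purely illustrative) that evaluates the Dirichlet form (1.1) and checks the variational characterization (1.2) of the spectral gap:

```python
import numpy as np

# A small ergodic chain on M = {0, 1, 2}; illustrative only.
# P is symmetric, so pi is uniform and -L = I - P is symmetric.
P = np.array([[0.5 , 0.25, 0.25],
              [0.25, 0.5 , 0.25],
              [0.25, 0.25, 0.5 ]])
pi = np.full(3, 1/3)
L = P - np.eye(3)                      # generator; -L = I - P is the Laplacian

def dirichlet(f, g):
    # E(f, g) = -E_pi(f Lg), as in (1.1)
    return -np.sum(f * (L @ g) * pi)

def var_pi(f):
    m = np.sum(f * pi)
    return np.sum(f**2 * pi) - m**2

# Spectral gap: the smallest non-zero eigenvalue of -L = I - P ...
lam1 = np.sort(np.linalg.eigvalsh(np.eye(3) - P))[1]

# ... which matches the optimal constant in (1.2), estimated here by
# minimizing E(f, f) / Var_pi(f) over random test functions.
rng = np.random.default_rng(0)
ratios = [dirichlet(f, f) / var_pi(f)
          for f in rng.standard_normal((2000, 3))
          if var_pi(f) > 1e-12]
print(lam1, min(ratios))               # min(ratios) approaches lam1 from above
```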

Then it is a classical fact that

\frac{d}{dt}\, \mathrm{Var}_\pi(f_t) \;=\; -2\,E(f_t, f_t), \qquad (1.3)

which motivates the above definition of λ₁. On the other hand, little attention seems to have been given (particularly in the context of finite Markov chains) to the following equally natural property: for all t ≥ 0,

\frac{d}{dt}\, D(\mu_t\,\|\,\pi) \;=\; -\,E(f_t, \log f_t), \qquad (1.4)

where D(μ‖π) = \sum_{x\in M} \mu(x)\log(\mu(x)/\pi(x)) denotes (the informational divergence or) the relative entropy of μ with respect to π. Using the standard notation Ent_π f = E_π(f log f) − (E_π f) log(E_π f), one is then motivated to study the inequality

\rho_0\, \mathrm{Ent}_\pi(f) \;\le\; \tfrac{1}{2}\, E(f, \log f), \qquad (1.5)

over all positive f : M → R, since one is then able to conclude (after observing that Ent_π f = D(μ‖π) whenever f = μ/π) that for all t ≥ 0,

D(\mu_t\,\|\,\pi) \;\le\; D(\mu_0\,\|\,\pi)\, e^{-2\rho_0 t}.
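This entropy decay can be watched numerically with the same kind of sketch as before. Here ρ₀ is estimated by sampling the infimum implicit in (1.5); the chain and all names are again illustrative assumptions, not the paper's:

```python
import numpy as np
from scipy.linalg import expm

# Same illustrative 3-state chain as in the previous sketch.
P = np.array([[0.5 , 0.25, 0.25],
              [0.25, 0.5 , 0.25],
              [0.25, 0.25, 0.5 ]])
pi = np.full(3, 1/3)              # uniform, since this P is doubly stochastic
L = P - np.eye(3)

def dirichlet(f, g):
    return -np.sum(f * (L @ g) * pi)

def ent(f):
    m = np.sum(f * pi)
    return np.sum(f * np.log(f) * pi) - m * np.log(m)

rng = np.random.default_rng(1)

# Sampled estimate of rho_0 in (1.5): the infimum of
# E(f, log f) / (2 Ent_pi(f)) over positive f, here over random f = exp(g).
rho0 = min(dirichlet(f, np.log(f)) / (2 * ent(f))
           for f in np.exp(rng.standard_normal((5000, 3)))
           if ent(f) > 1e-12)

mu0 = np.array([0.8, 0.1, 0.1])   # a positive initial distribution
D = lambda mu: np.sum(mu * np.log(mu / pi))

for t in (0.5, 1.0, 2.0):
    mu_t = mu0 @ expm(t * L)      # mu_t = mu_0 P_t, with P_t = e^{tL}
    # rho0 is only a sampled estimate of the true infimum, so the
    # right-hand column is indicative rather than a certified bound.
    print(t, D(mu_t), D(mu0) * np.exp(-2 * rho0 * t))
```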
If one would rather study convergence to stationarity using the more popular total variation norm, \|\mu_t - \pi\|_{TV} = \frac{1}{2}\sum_{x\in M} |\mu_t(x) - \pi(x)|, then a well-known inequality (see (2.5)) between the total variation norm and the relative entropy leads the above discussion further (see Corollary 2.6): for every initial distribution μ₀ on M and for all t ≥ 0,

\|\mu_t - \pi\|_{TV}^{2} \;\le\; \tfrac{1}{2}\, \log(1/\pi_*)\, e^{-2\rho_0 t}, \qquad (1.6)

where \pi_* = \min_{x\in M} \pi(x).
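The short argument behind a bound of the form (1.6) combines the Pinsker-type inequality between total variation and relative entropy (the inequality referred to as (2.5)) with the entropy decay derived above and the classical worst-case bound D(μ₀‖π) ≤ log(1/π*):

2\,\|\mu_t - \pi\|_{TV}^{2} \;\le\; D(\mu_t\,\|\,\pi) \;\le\; D(\mu_0\,\|\,\pi)\, e^{-2\rho_0 t} \;\le\; \log(1/\pi_*)\, e^{-2\rho_0 t},

and dividing by 2 gives (1.6).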

This recovers, and in fact improves upon, a similar bound (see Remark 2.5 below) employing the standard logarithmic Sobolev constant. In Section 4, we consider a further generalization of (1.3) and (1.4) using Sobolev-type inequalities, which interpolate between the modified log-Sobolev inequality and the Poincaré inequality. Recall that the standard logarithmic Sobolev inequality is of the form

\rho\, \mathrm{Ent}_\pi(f^2) \;\le\; 2\, E(f,f)

for all f : M → R. Also recall that it is shown in [12] that

\frac{1}{1 + \frac{1}{4}\,\log\log(1/\pi_*)} \cdot \frac{1}{\tau_2} \;\le\; 2\rho,

where

\tau_2 \;=\; \inf\Big\{ t > 0 \,:\, \sup_{\mu_0}\, E_\pi\big[\,|\mu_t/\pi - 1|^2\,\big]^{1/2} \le 1/e \Big\}

measures accurately the convergence to stationarity using \sup_{\mu_0} E_\pi[\,|\mu_t/\pi - 1|^2\,]^{1/2}.
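Finally, the ordering asserted in the abstract (the modified log-Sobolev inequality is weaker than the standard log-Sobolev inequality but stronger than the Poincaré inequality, i.e. ρ ≤ ρ₀ ≤ λ₁ under the normalizations (1.2) and (1.5) used here) can be observed numerically. A rough sketch, with the caveat that all three infima are only sampled over random test functions on an illustrative chain:

```python
import numpy as np

rng = np.random.default_rng(2)

# Same illustrative 3-state symmetric chain; pi is uniform.
P = np.array([[0.5 , 0.25, 0.25],
              [0.25, 0.5 , 0.25],
              [0.25, 0.25, 0.5 ]])
pi = np.full(3, 1/3)
L = P - np.eye(3)

def E(f, g):                           # Dirichlet form (1.1)
    return -np.sum(f * (L @ g) * pi)

def ent(f):                            # Ent_pi(f), for positive f
    m = np.sum(f * pi)
    return np.sum(f * np.log(f) * pi) - m * np.log(m)

def var(f):
    m = np.sum(f * pi)
    return np.sum(f**2 * pi) - m**2

lam1 = rho = rho0 = np.inf
for _ in range(20000):
    g = rng.standard_normal(3)
    f = np.exp(g)                      # positive test function for the entropies
    if ent(f) > 1e-12:
        # standard LSI constant: rho Ent(h^2) <= 2 E(h, h), with h = sqrt(f)
        rho  = min(rho,  2 * E(np.sqrt(f), np.sqrt(f)) / ent(f))
        # modified LSI constant, normalized as in (1.5)
        rho0 = min(rho0, E(f, np.log(f)) / (2 * ent(f)))
    if var(g) > 1e-12:
        lam1 = min(lam1, E(g, g) / var(g))   # Poincare constant, as in (1.2)

print(rho, rho0, lam1)                 # empirically: rho <= rho0 <= lam1
```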
