
UNIVERSITÉ D'ORLÉANS

ÉCOLE DOCTORALE [MATHÉMATIQUES, INFORMATIQUE, PHYSIQUE THÉORIQUE ET INGÉNIERIE DES SYSTÈMES]

[LABORATOIRE PRISME]

Thesis presented by:

Antonio DAVALOS-TREVINO

to obtain the degree of: Doctor of the Université d'Orléans
Discipline/Specialty: Signal Processing

On the Statistical Properties of Multiscale Permutation Entropy and its Refinements, with Applications on Surface Electromyographic Signals

Thesis supervised by:
Olivier BUTTELLI, Maître de Conférences, Université d'Orléans
Meryem JABLOUN, Maître de Conférences, Université d'Orléans

REVIEWERS:
Anne HUMEAU-HEURTIER, Professeur, Université d'Angers
Steeve ZOZOR, Directeur de Recherche, Institut Polytechnique de Grenoble

JURY:
Stéphane CORDIER, Professeur, Université d'Orléans, President of the jury
Jean-Marc GIRAULT, Professeur, ESEO Grande École d'Ingénieurs Généralistes à Angers
Franck QUAINE, Maître de Conférences, Gipsa-Lab, Grenoble-INP and Université Grenoble Alpes
Philippe RAVIER, Maître de Conférences, Université d'Orléans

Acknowledgements

I want to thank Olivier Buttelli for helping me bring my abstract work back to Earth through his guidance, advice, and pragmatic perspective. I would also like to thank Meryem Jabloun, whose dynamic discussions and sharp suggestions ignited many of the ideas contained here, and Philippe Ravier, whose methodical and systematic observations aided me in condensing the topics at hand. I would like to extend my gratitude to the jury members, Anne Humeau-Heurtier, Steeve Zozor, Stéphane Cordier, Jean-Marc Girault, and Franck Quaine, for their examination. I would also like to acknowledge and thank the Consejo Nacional de Ciencia y Tecnología (CONACYT), for they provided the necessary funding for this research project. I would also like to thank Juan Mattei, whose editing advice was invaluable. Special thanks to my parents, Antonio and Dora, as well as Lydia, Alejandro, and Pablo for their unconditional support.

Last but not least, I want to thank Edna, for she inspires me every day to be the best version of myself.

On the Statistical Properties of Multiscale Permutation Entropy and its Refinements, with Applications on Surface Electromyographic Signals

Antonio Dávalos-Treviño

Abstract

Permutation entropy (PE) and multiscale permutation entropy (MPE) are extensively used to measure regularity in the analysis of time series, particularly in the context of biomedical signals. As accuracy is crucial for researchers to obtain optimal interpretations, it becomes increasingly important to take into account the statistical properties of MPE. Therefore, in the present work we begin by expanding on the statistical theory behind MPE, with an emphasis on the characterization of its first two moments in the context of multiscaling. Secondly, we explore the composite versions of MPE in order to understand the underlying properties behind their improved performance; we also create an entropy benchmark through the calculation of MPE expected values for widely used Gaussian stochastic processes, since this gives us a reference point to use with real biomedical signals. Finally, we differentiate between muscle activity dynamics in isometric contractions through the application of the classical and composite MPE methods on surface electromyographic (sEMG) data.

As a result of our project, we found MPE to be a biased statistic that decreases with respect to the multiscaling factor, regardless of the signal's probability distribution. We also noticed that the variance of the MPE statistic is highly dependent on the value of MPE itself, and almost equal to its Cramér-Rao lower bound; in other words, we confirmed it is an efficient estimator. Despite showing improved results, we realized that the composite versions also modify the MPE estimation due to the measuring of redundant information. In light of our findings, we decided to replace the multiscaling coarse-graining procedure with one of our own, with the intention of improving our estimations. Since our team observed the MPE statistic to be completely characterized by the model parameters when applied to correlated Gaussian models, we developed a general formulation for expected MPE with low embedding dimensions.

When applied to real sEMG signals, we were able to distinguish between fatigue and non-fatigue states with all methods, especially for high embedding dimensions. Moreover, we found that our proposed MPE method makes an even clearer difference between the two aforementioned activity states.
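The PE and MPE estimators described in this abstract can be sketched in a few lines. The following is a minimal illustration, not the implementation used in the thesis; the argsort-style ordinal-pattern convention, the tie-breaking by stable sort, and the log(d!) normalization are the usual ones but are assumptions here:

```python
import math
from collections import Counter

def permutation_entropy(x, d=3):
    """Normalized permutation entropy: map every window of d consecutive
    samples to its ordinal pattern (the argsort of the window), then take
    the Shannon entropy of the empirical pattern distribution, divided by
    its maximum value log(d!)."""
    counts = Counter(
        tuple(sorted(range(d), key=lambda k: x[i + k]))
        for i in range(len(x) - d + 1)
    )
    n = sum(counts.values())
    h = -sum((c / n) * math.log(c / n) for c in counts.values())
    return h / math.log(math.factorial(d))

def coarse_grain(x, m):
    """Multiscale step: average non-overlapping blocks of size m."""
    return [sum(x[i:i + m]) / m for i in range(0, len(x) - m + 1, m)]

def mpe(x, d=3, m=2):
    """Multiscale permutation entropy at scale factor m."""
    return permutation_entropy(coarse_grain(x, m), d)

# A strictly monotone signal shows a single pattern, hence zero entropy;
# a perfectly alternating signal at d = 2 is maximally irregular.
print(permutation_entropy(list(range(100)), d=3) == 0.0)      # True
print(permutation_entropy([i % 2 for i in range(101)], d=2))  # 1.0
```

The coarse-graining step shortens the series from N to roughly N/m points, which is one source of the scale-dependent bias the abstract mentions.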

Declaration

I, Antonio Dávalos, hereby declare that this thesis is my own original work, that all sources have been accurately reported and acknowledged, and that this document has not been previously submitted, in its entirety or in part, at any university in order to obtain academic qualifications. I also hereby declare no conflicts of interest.

29/02/2020

Orléans

Antonio Dávalos


Contents

Acknowledgements iii
Abstract v
Declaration vii
List of Figures xvii
Introduction 1

1 Information Entropy - Concepts and Definitions 5
1.1 Introduction 5
1.2 Entropy Formulations 7
1.2.1 Classical Shannon's Entropy 7
1.2.2 Tsallis Entropy 8
1.2.3 Rényi's Entropy 9
1.2.4 Entropy Formulations Remarks 10
1.3 Event Partitions for Entropy 10
1.3.1 Approximate Entropy 11
1.3.2 Sample Entropy 12
1.3.3 Permutation Entropy 13
1.3.4 Fuzzy Entropy 14
1.4 Signal Pre-Processing: Multiscaling 15
1.4.1 Multiscale Entropy 15
1.4.2 Refined Multiscale Entropy 16
1.4.3 Composite and Refined Composite Multiscale Entropy 17
1.4.4 Refined Composite Multiscale Entropy 17
1.4.5 Modified Multiscale Entropy 18
1.4.6 Generalized Multiscale Entropy 18
1.5 A Case for Permutation Entropy 19
1.6 Closing Remarks 20

2 Multiscale Permutation Entropy - Theoretical Statistics 23
2.1 Introduction 23
2.2 Multiscale Permutation Entropy Background 24
2.2.1 Permutation Entropy 24
2.2.2 Multiscale Coarse-Graining Procedure 26
2.3 Multiscale Permutation Entropy Statistics 27
2.3.1 Previous Considerations 27
2.3.2 MPE Taylor Series Approximation 28
2.3.3 MPE Expected Value and Bias 30
2.3.4 MPE Variance 31
2.3.5 MPE Cramér-Rao Lower Bound 33
2.4 Simulations and Results 37
2.4.1 Surrogate Model 38
2.4.2 Results 39
2.5 Discussion 40
2.6 MPE Length Criterion 46
2.7 Closing Remarks 47

3 MPE on Common Signal Models 49
3.1 Introduction 49
3.2 MPE on Models with Deterministic Signals 50
3.2.1 Deterministic Signals 50
3.2.2 Deterministic Signals with Noise 53
3.3 MPE on Models with Random Gaussian Signals 57
3.3.1 Gaussian Ordinal Pattern Distributions 57
3.3.2 White Gaussian Noise and Fractional Gaussian Noise 59
3.3.3 First-Order AR Models 62
3.3.4 First-Order MA Models 64
3.3.5 General Formulation for Correlated Gaussian Models 66
3.3.6 ARMA Models Revisited 69
3.4 Closing Remarks 71

4 Composite MPE Refinements 75
4.1 Introduction 75
4.2 Composite Coarse-Graining Techniques 76
4.2.1 Composite Coarse-Graining Procedure 76
4.2.2 Composite MPE 77
4.2.3 Refined Composite MPE 78
4.3 Composite Downsampling Techniques 81
4.3.1 Composite Downsampling Procedure 81
4.3.2 Composite Downsampling Permutation Entropy 83
4.3.3 Refined Composite Downsampling PE 84
4.4 Results and Discussion 85
4.4.1 Results 85
4.4.2 Discussion 87
4.5 Closing Remarks 90

5 Bioelectrical Signal Applications 93
5.1 Introduction 93
5.2 Motivation 94
5.3 EMG Signals and Biological Complexity 95
5.3.1 Physiology 95
5.3.2 Motor Units and EMG 97
5.3.3 EMG and Complexity 98
5.4 MPE on Real sEMG Signals 99
5.4.1 Methods 100
5.4.2 Results 101
5.4.3 Discussion 105
5.5 Closing Remarks 111

Conclusions 115
A Covariance, Coskewness, and Cokurtosis Matrices 119
B Math Glossary 121
C Acronym Glossary 125
Bibliography 127

List of Figures

1.1 Entropy analysis stage components. We can conceptualize the components of any entropy measure in the following three consecutive steps: we must select the proper entropy formulation to use, define the partition that properly describes the system we are to measure, and decide which kind of pre-processing (if any) will be performed on the experimental data. 6

1.2 The coarse-graining procedure takes the average of all the data points within non-overlapping segments of size m. This diagram is based on the one presented in [39]. 16
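The procedure in this figure is, in symbols, the standard coarse-graining map (the notation below follows the usual multiscale-entropy convention; the index limits are an assumption consistent with non-overlapping blocks of size m):

```latex
y_j^{(m)} = \frac{1}{m} \sum_{i=(j-1)m + 1}^{jm} x_i ,
\qquad 1 \le j \le \left\lfloor \frac{N}{m} \right\rfloor ,
```

so each coarse-grained point is the mean of one block, and the series length shrinks from N to the floor of N/m.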

2.1 Ordinal pattern examples. The figures represent discrete data points from a uniformly sampled signal. There are 24 possible patterns for d = 4. 25
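The pattern count quoted in this caption can be checked directly: every ordinal pattern is a permutation of the indices (0, ..., d-1), so there are d! of them. A small sketch, where the argsort-style pattern convention is an assumption:

```python
from itertools import permutations

def ordinal_pattern(window):
    """Indices of the window's samples, in ascending order of value."""
    return tuple(sorted(range(len(window)), key=lambda k: window[k]))

# For embedding dimension d there are d! possible patterns: 24 for d = 4.
d = 4
print(len(set(permutations(range(d)))))       # 24
print(ordinal_pattern([0.1, 0.9, 0.4, 0.2]))  # (0, 3, 2, 1)
```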

2.2 Test surrogate model from equation (2.52) for dimension d = 2. (a) Model's sample paths for different values of p = P(x_t < x_{t+1}). (b) The shift term δ(p) is modified in accordance with the Gaussian cumulative distribution function, in a way that the variation for the next point in the process has probability p. 39

2.3 Three-dimensional theoretical normalized MPE (2.4) for d = 2. (a) Mean MPE value (2.22) with respect to the pattern probability p and normalized time scale m/N. (b) MPE variance (2.31) with respect to p and m/N. 40

2.4 Normalized MPE (2.4) for d = 2. (a) Mean MPE (2.22) with respect to pattern probability p, which shows a clear maximum at p = 0.5 (the point of equiprobable patterns). (b) MPE variance (2.31) with respect to p. We observe minimum points at p = 0, p = 0.5, and p = 1, as well as maximum points at p = 0.083 and p = 0.917. (c) Mean MPE (2.22) with respect to the normalized time scale m/N. We observe here the linear bias from (2.23), which has the same slope regardless of p. (d) MPE variance (2.31) with respect to m/N. We observe a linear increase, showing that the first element of (2.31) is dominant. 41
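The maximum at p = 0.5 noted in panel (a) can be verified numerically: for d = 2 there are only two patterns, with probabilities p and 1 − p, so the normalized PE reduces to the binary entropy of p. A sketch under that two-pattern assumption:

```python
import math

def normalized_pe_d2(p):
    """Normalized PE for d = 2, where p = P(x_t < x_{t+1}) is the
    probability of the ascending pattern; normalization is by log(2!)."""
    if p in (0.0, 1.0):
        return 0.0  # a single pattern carries no uncertainty
    return -(p * math.log(p) + (1 - p) * math.log(1 - p)) / math.log(2)

print(normalized_pe_d2(0.5))                         # maximum: 1.0
print(normalized_pe_d2(0.0), normalized_pe_d2(1.0))  # minima: 0.0 0.0
```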

2.5 MPE variance (2.31) for d = 2 with respect to normalized time scale m/N. (a) Pattern probability p = 0.3. We observe an almost linear increase with scale, where the first term of (2.31) is dominant. (b) Pattern probability p = 0.5, which corresponds to a uniform pattern distribution. Here, the linear term in (2.31) vanishes, leaving only a quadratic increase with scale. 42


3.1 Sampled cubic polynomial x = (1/3)t³ − (2/3)t² + 2t − 1/2 for t = [0, 3] seconds. The regions t = [0, 1] and t = [2, 3] sec have a positive slope; therefore p_1 = 2/3 and p_{d!} = 1/3. It follows from equation (3.3) that the normalized PE is H = 0.3552 for d = 3. 51

3.2 … seconds. Here we show the sampled signals with sampling frequency (a) f_s = 8 Hz, (b) f_s = 32 Hz, and (c) f_s = 216 Hz, with their corresponding values of normalized PE at dimension d = 3. (d) shows the PE of the sine wave x at different sampling frequencies f_s. The measured PE converges to the theoretical normalized PE (3.3) for the continuous sine wave (H = 0.387). 52
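The value H = 0.3552 quoted for Figure 3.1 follows from the two pattern probabilities alone. A quick numerical check, assuming (as the caption states) that only the all-ascending and all-descending patterns occur:

```python
import math

# Pattern probabilities from the caption: ascending on two thirds of
# the domain, descending on the remaining third.
p_up, p_down = 2 / 3, 1 / 3

# Normalized PE at d = 3: Shannon entropy of the two occurring patterns,
# divided by the maximum log(3!) = log 6.
H = -(p_up * math.log(p_up) + p_down * math.log(p_down)) / math.log(math.factorial(3))
print(round(H, 4))  # 0.3552
```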

3.3 The presence of noise does not affect the signal patterns, as long as the variation is small compared to the curve's slope. 53

3.4 Sampled cubic polynomial x = (1/3)t³ − (2/3)t² + 2t − 1/2 for t = [0, 3] seconds, with added white Gaussian noise with standard deviation of (a) σ = 0.001 and (b) σ = 0.005. In the regions near the local maximum and minimum, the white Gaussian noise, rather than the polynomial, determines the ordinal patterns present. 54

3.5 (a) Parabolic curve x = t… with added white noise at σ = {0.005, 0.001, 0.002}. (b) MPE at d = 3 with respect to the linearly increasing slope of the parabolic curve at different values of σ. (c) Straight line x = At with added white noise at increasing σ = [1e-9, 1e-6]. (d) MPE at d = 3 with respect to a linearly increasing σ at different values for the slope A. The MPE values in (b) and (d) come from a local sliding window of Δt = 0.05 sec. The sampling rate for these measurements is f_s = 6670 Hz. 55

3.6 … with increasing sampling frequency f_s, in the presence of white noise at different signal-to-noise ratios (SNR). (a) Mean MPE vs. f_s at SNR = 10 dB, 20 dB, and 30 dB. The MPE follows the MPE of the noiseless sine wave for low f_s, and approaches maximum entropy at high sampling rates. (b) MPE surface representation, with f_s and SNR as independent variables. Low entropy values are shown in blue, and high entropy in yellow. We observe a clear frontier between regions where noise dominates (yellow), or the underlying deterministic signal is more important (blue). 56

3.7 For a fixed signal-to-noise ratio (SNR), an increased sampling rate f_s implies the data points are closer together, both in time and amplitude. Therefore, when f_s is high, the noise dominates over the deterministic signal, and thus the ordinal pattern is modified. 56

3.8 Three-dimensional surface for (a) MPE, and (b) var(MPE) for Gaussian models and dimension d = 3. This representation is possible since the Gaussian pattern symmetries (3.4) allow the pattern pdf to be dependent on only one variable p_1. 58


3.9 (a) Average MPE of fGn with respect to the Hurst exponent, for different time scales m. The curves get downshifted with increasing m. (b) MPE of fGn with respect to m, for Hurst exponents h = {0.2, 0.5, 0.8}. The dotted lines represent the theoretical predictions, while the solid lines measure the mean MPE from 1500 signals of length N = 5000. 62

3.10 (a) MPE curves for AR(1) with respect to their corresponding model parameter φ. Different curves correspond to different time scales m, as shown directly in the plots. (b) MPE curves with respect to m, with φ = {0.25, 0.50, 0.75, 0.90}. Dotted lines represent theoretical MPE values, while solid lines show the resulting mean MPE from 1500 signals of N = 1000. 64
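The qualitative behaviour in this figure, with stronger AR(1) correlation giving lower MPE, can be reproduced with a short simulation. The sketch below is an illustration under stated assumptions (a plain PE estimator, m = 2 coarse-graining, a fixed random seed), not the thesis's experimental setup:

```python
import math
import random
from collections import Counter

def ar1(phi, n, seed=0):
    """Sample path of a first-order autoregressive process
    x_t = phi * x_{t-1} + e_t with standard Gaussian innovations."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, 1.0)
        out.append(x)
    return out

def pe(x, d=3):
    """Normalized permutation entropy of a sequence."""
    counts = Counter(tuple(sorted(range(d), key=lambda k: x[i + k]))
                     for i in range(len(x) - d + 1))
    n = sum(counts.values())
    h = -sum((v / n) * math.log(v / n) for v in counts.values())
    return h / math.log(math.factorial(d))

def coarse(x, m):
    """Average non-overlapping blocks of size m."""
    return [sum(x[i:i + m]) / m for i in range(0, len(x) - m + 1, m)]

# Stronger correlation (larger phi) lowers the entropy estimate.
x_weak, x_strong = ar1(0.25, 5000), ar1(0.90, 5000)
print(pe(coarse(x_weak, 2)) > pe(coarse(x_strong, 2)))  # True
```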

3.11 (a) MPE curves for MA(1) with respect to their corresponding model parameter θ. Different curves correspond to different time scales m, as shown directly in the plots. (b) MPE curves with respect to m, with …