Unsupervised Textual Grounding: Linking Words to Image Concepts

Raymond A. Yeh, Minh N. Do, Alexander G. Schwing

University of Illinois at Urbana-Champaign

{yeh17, minhdo, aschwing}@illinois.edu

Abstract

Textual grounding, i.e., linking words to objects in images, is a challenging but important task for robotics and human-computer interaction. Existing techniques benefit from recent progress in deep learning and generally formulate the task as a supervised learning problem, selecting a bounding box from a set of possible options. To train these deep net based approaches, access to a large-scale dataset is required; however, constructing such a dataset is time-consuming and expensive. Therefore, we develop a completely unsupervised mechanism for textual grounding using hypothesis testing as a mechanism to link words to detected image concepts. We demonstrate our approach on the ReferItGame dataset and the Flickr30k data, outperforming baselines by 7.98% and 6.96% respectively.

1. Introduction

Textual grounding is an important task for robotics, human-computer interaction, and assistive systems. Increasingly, we interact with computers using language, and it won't be long until we will guide autonomous systems via commands such as "the coffee mug on the counter" or "the water bottle next to the sink." While it is easy for a human to relate the nouns in those phrases to observed real world objects, computers are challenged by the complexity of the commands arising due to object variability and ambiguity in the description and the relations. E.g., the meaning of the term "next to" differs significantly depending on the context.

To address those challenges, existing textual grounding algorithms [13, 43, 52, 40] benefit significantly from the recent progress in cognitive abilities, in particular from deep net based object detection, object classification and semantic segmentation. More specifically, for textual grounding, deep net based systems extract high-level feature abstractions from hypothesized bounding boxes and textual queries. Both are then compared to assess their compatibility. Importantly, training of textual grounding systems such as [13, 40] crucially relies on the availability of bounding boxes. However, it is rather time-consuming to construct large-scale datasets that facilitate training of deep net based systems.

Figure 1. Test set results for grounding of textual phrases from our approach. Left: Flickr30k Entities dataset ("A woman giving a presentation about a Chevrolet car"); phrase and bounding box are color coded. Right: ReferItGame dataset ("Rocky spot on the left, bottom of the picture"); ground truth box in green and predicted box in red.

To address this issue, we propose a completely unsupervised mechanism for textual grounding. Our approach is based on a hypothesis testing formulation which links words to activated image concepts such as semantic segmentations or other spatial maps. More specifically, words are linked to image concepts if observing a word provides a significant signal that an image concept is activated. We establish those links during a learning task, which uses a dataset containing words and images. During inference we extract the linked concepts and use their spatial map to compute a bounding box using the seminal subwindow search algorithm by Lampert et al. [26]. Compared to existing techniques, our results are easy to interpret. But more importantly we emphasize that the approach can be easily combined with a supervisory signal. We demonstrate the effectiveness of the developed technique on the two benchmark datasets for textual grounding, i.e., the ReferItGame dataset [18] and the Flickr30k data [40]. We show some results in Fig. 1 and we will illustrate that our approach outperforms competing unsupervised textual grounding approaches by a large margin of 7.98% and 6.96% on the ReferItGame and the Flickr30k dataset respectively.

Figure 2. Overview of our proposed approach: We output the bounding box extracted from the active concept that is most relevant to the input query. The relevance of a word-phrase and image concept is learned and represented in E(s, c).

2. Related Work

Our method for unsupervised textual grounding combines the efficient subwindow search algorithm of Lampert et al. [26] with attention based deep nets. We subsequently discuss related work for textual grounding, attention mechanisms, as well as work on inference with efficient subwindow search.

Textual grounding: Textual grounding is related to image retrieval. Classical approaches learn ranking functions via recurrent neural nets [34, 5], metric learning [12], correlation analysis [23], or neural net embeddings [7, 22].

Other techniques explicitly ground natural language in images and videos by jointly learning classifiers and semantic parsers [35, 25]. Gong et al. [9] propose a canonical correlation analysis technique to associate images with descriptive sentences using a latent embedding space. Similar in spirit is work by Wang et al. [52], which learns a structure-preserving embedding for image-sentence retrieval. It can be applied to phrase localization using a ranking framework. In [10], text is generated for a set of candidate object regions which is subsequently compared to a query. The reverse operation, i.e., generating visual features from query text which is subsequently matched to image regions, is discussed in [2].

In [24], 3D cuboids are aligned to a set of 21 nouns relevant to indoor scenes using a Markov random field based technique. A method for grounding of scene graph queries in images is presented in [14]. Grounding of dependency tree relations is discussed in [16] and reformulated using recurrent nets in [15].

Subject-Verb-Object phrases are considered in [45] to develop a visual knowledge extraction system. Their algorithm reasons about the spatial consistency of the configurations of the involved entities.

In [13, 33] caption generation techniques are used to score a set of proposal boxes and return the highest ranking one. To avoid application of a text generation pipeline on bounding box proposals, [43] improve the phrase encoding using a long short-term memory (LSTM) [11] based deep net.

Common datasets for visual grounding are the ReferItGame dataset [18] and the newly introduced Flickr30k Entities dataset [40], which provides bounding box annotations for noun phrases of the original Flickr30k dataset [59].

Video datasets, although not directly related to our work in this paper, were used for spatiotemporal language grounding in [28, 60].

In contrast to all of the aforementioned methods, which are largely based on region proposals, we suggest usage of efficient subwindow search as a suitable inference engine.

Visual attention: Over the past few years, single image embeddings extracted from a deep net (e.g., [32, 31, 46]) have been extended to a variety of image attention modules, when considering VQA. For example, a textual long short-term memory net (LSTM) may be augmented with a spatial attention [62]. Similarly, Andreas et al. [1] employ a language parser together with a series of neural net modules, one of which attends to regions in an image. The language parser suggests which neural net module to use. Stacking of attention units was also investigated by Yang et al. [57]. Their stacked attention network predicts the answer successively. Contextual information from neighboring image regions has been considered by Xiong et al. [53]. Shih et al. [48] use object proposals and rank regions according to relevance.

Figure 3. Pipeline of our proposed network for learning image concepts (VGG16 conv5 features, a conv block with max pooling, a conv layer, and spatial average pooling yield one 14×14 score map per word, e.g. "people", "water", "sky"). The network successfully learns the spatial and class information of objects/words in the image and new concepts; e.g. classes like "sky" or "water" are not part of the pre-trained classes. Although the score maps may not be of pixel-accurate quality, extracting useful bounding boxes from them is still feasible.

The multi-hop attention scheme of Xu et al. [54] was proposed to extract fine-grained details. A joint attention mechanism was discussed by Lu et al. [30], and Fukui et al. [8] suggest an efficient outer product mechanism to combine visual representation and text representation before applying attention over the combined representation. Additionally, they suggested the use of glimpses. Very recently, Kazemi et al. [17] showed a similar approach using concatenation instead of outer product. Importantly, all of these approaches model attention as a single network. The fact that multiple modalities are involved is often not considered explicitly, which contrasts the aforementioned approaches from the technique we present.

Very recently Kim et al. [20] presented a technique that also interprets attention as a multi-variate probabilistic model, to incorporate structural dependencies into the deep net. Other recent techniques are work by Nam et al. [37] on dual attention mechanisms and work by Kim et al. [19] on bilinear models. In contrast to the latter two models, our approach is easy to extend to any number of data modalities.

Efficient subwindow search: Efficient subwindow search was proposed by Lampert et al. [26] for object localization. It is based on an extremely effective branch and bound scheme that can be applied to a large class of energy functions. The approach has been applied, among others, to very efficient deformable part models [56], object class detection [27], weakly supervised localization [4], indoor scene understanding [47], spatiotemporal object detection proposals [38], and textual grounding [58].
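To make the branch and bound scheme concrete, the following is a minimal sketch (not the authors' implementation) of efficient subwindow search on a single additive score map: a state is a set of boxes given by one interval per side, the bound lets the largest box in the set collect all positive mass while the smallest keeps only unavoidable negative mass, and best-first search splits the widest interval. The function name `ess` and the (top, left, bottom, right) box convention are our own choices.

```python
import heapq

import numpy as np

def ess(score_map):
    """Best-first branch-and-bound for the axis-aligned box maximizing the
    sum of per-pixel scores. A state is a *set* of boxes given by intervals
    for each side: top in t, bottom in b, left in l, right in r (inclusive)."""
    H, W = score_map.shape
    pos = np.maximum(score_map, 0.0)
    neg = np.minimum(score_map, 0.0)
    # Integral images with a zero row/column for O(1) box sums.
    ipos = np.zeros((H + 1, W + 1)); ipos[1:, 1:] = pos.cumsum(0).cumsum(1)
    ineg = np.zeros((H + 1, W + 1)); ineg[1:, 1:] = neg.cumsum(0).cumsum(1)

    def box_sum(ii, t, b, l, r):
        # Sum over rows t..b and cols l..r (inclusive); 0 for empty boxes.
        if b < t or r < l:
            return 0.0
        return ii[b + 1, r + 1] - ii[t, r + 1] - ii[b + 1, l] + ii[t, l]

    def bound(t, b, l, r):
        # Largest box in the set collects all positive mass; smallest box
        # keeps only the negative mass no member box can avoid.
        return (box_sum(ipos, t[0], b[1], l[0], r[1])
                + box_sum(ineg, t[1], b[0], l[1], r[0]))

    full_h, full_w = (0, H - 1), (0, W - 1)
    root = (full_h, full_h, full_w, full_w)
    heap = [(-bound(*root), root)]
    while heap:
        neg_bd, ivs = heapq.heappop(heap)
        if all(lo == hi for lo, hi in ivs):
            # At a leaf the bound is exact, so the first leaf popped is optimal.
            t, b, l, r = ivs
            return (t[0], l[0], b[0], r[0]), -neg_bd  # (top, left, bottom, right)
        # Split the widest side interval in half and push both halves.
        i = max(range(4), key=lambda k: ivs[k][1] - ivs[k][0])
        lo, hi = ivs[i]
        for half in ((lo, (lo + hi) // 2), ((lo + hi) // 2 + 1, hi)):
            child = ivs[:i] + (half,) + ivs[i + 1:]
            heapq.heappush(heap, (-bound(*child), child))
```

On a map that is -1 everywhere except a +1 block, the search returns exactly that block, since any larger box pays a penalty and any smaller box forfeits positive mass.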

3. Approach

We illustrate our approach for unsupervised textual grounding in Fig. 2, where we show how to link a set of "image concepts," c (e.g., object detections and semantic segmentations), to words, s, without ever observing any bounding boxes. The image concepts are represented in the form of score maps, which contain both the spatial location and, when considering the magnitude of the value at every pixel, the strength of the corresponding concept at a particular pixel. By linking a word to an "image concept," i.e., by establishing a data-dependent assignment between words and image concepts, we find the visual expression of each word. Importantly, for simplicity of the framework, each word is only assigned to a single concept. The bounding box accumulating within its interior the highest score of the linked "image concept" score map is the final prediction.

We refer to capturing the "image concept"-word relevance E(s, c) as learning. We propose as a useful cue statistical hypothesis tests, which assess whether activation of a concept is independent of the word observation. If the probability for a concept activation being independent of a word observation is small, we obtain a strong link between the corresponding "image concept" and the word.

For inference, given a query and an image as input, we find its linked score map by combining the data statistics E(s, c) obtained during training with image and query statistics. While the query statistics indicate word occurrences, image statistics are given by "image concept" activations. To compute the "image concept" activations we detect a bounding box on its corresponding score map using a branch-and-bound technique akin to the seminal efficient subwindow search algorithm. If the detected bounding box has confidence greater than 0.5, and covers more than 5% of the image, we say that the concept is activated. The confidence of the bounding box is obtained by averaging the probability within the bounding box. We then use the activated words and "image concepts" to select the activated submatrix of the "image concept"-word relevance. From this submatrix, we select the concept that has the lowest probability of being independent from any of the activated word observations. The bounding box detected on the score map corresponding to the selected concept is the inference result.
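The activation rule above (box confidence greater than 0.5, area coverage greater than 5%) is simple to state in code. A minimal sketch, with our own function name and an assumed (top, left, bottom, right) inclusive box convention; in the full pipeline the box itself would come from the branch-and-bound search on the concept's score map:

```python
import numpy as np

def concept_activation(prob_map, box, min_conf=0.5, min_area_frac=0.05):
    """Decide whether an image concept is 'activated': the detected box must
    have confidence (mean probability inside the box) > 0.5 and cover > 5%
    of the image. box = (top, left, bottom, right), inclusive coordinates."""
    H, W = prob_map.shape
    t, l, b, r = box
    region = prob_map[t:b + 1, l:r + 1]
    conf = float(region.mean())
    area_frac = region.size / float(H * W)
    return (conf > min_conf and area_frac > min_area_frac), conf
```

Note that both thresholds must hold jointly: a very confident but tiny box (e.g., 4% of the image) still does not activate the concept.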

Beyond a proper assignment we also need the "image concepts" themselves. We demonstrate encouraging results with a set of pre-trained concepts, such as object detections and semantic segmentations. We also obtain a set of "image concepts" by training a convolutional neural network to predict the probability of a word s given image I. By changing the architecture's final output layer to spatial average pooling, we obtain a score map for each of the words. We include a score map if the predicted word accuracy exceeds 50%.

In the following, we first describe the problem formulation by introducing the notation. We then discuss our formulation for learning (i.e., linking words to given "image concepts") and for computation of the "image concepts." Lastly, we describe our inference algorithm (i.e., estimation of the bounding box given a word and image concepts).
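The average pooling trick can be illustrated with plain numpy: the per-word maps before pooling serve as spatial score maps, while their spatial means give the word logits used for prediction. Shapes, names, and the probability threshold below are illustrative assumptions (the paper filters by word prediction accuracy over the dataset, not by a single probability), not the exact architecture:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def word_probs_from_maps(score_maps):
    """score_maps: (num_words, H, W) logits from the last conv layer.
    Spatial average pooling reduces each map to one word logit, so the maps
    that drive the word-prediction loss double as spatial score maps."""
    return sigmoid(score_maps.mean(axis=(1, 2)))

# Toy example: 3 words with 14x14 maps; only word 0 has spatial evidence.
maps = np.zeros((3, 14, 14))
maps[0, :4, :] = 5.0   # word 0 fires in the upper image region
probs = word_probs_from_maps(maps)
# Stand-in for the paper's filter (they keep maps whose word prediction
# accuracy exceeds 50%; here we simply threshold the probability).
keep = probs > 0.5
```

The key point is that nothing spatial is annotated: the network is supervised only by word presence, yet the pre-pooling maps localize where each word's evidence lies.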

3.1. Problem Formulation

Let x refer to both the input query Q and the input image I, i.e., x = (Q, I). The image I has width W and height H. To parameterize a bounding box we use the tuple y = (y_1, ..., y_4), which contains the top left corner (y_1, y_2) and the bottom right corner (y_3, y_4). We use Y to refer to the set of all bounding boxes, y = (y_1, ..., y_4) ∈ Y = ∏_{i=1}^{4} {0, ..., y_{i,max}}. Hereby y_{i,max} indicates the maximum coordinate that can be considered for the i-th variable (i ∈ {1, ..., 4}) when processing image I.

The problem of unsupervised textual grounding is the task of predicting the corresponding bounding box y given the input x, while the training dataset D contains only images and corresponding queries, i.e., D = {(x)}. We emphasize that no bounding-box-query pairs are ever observed, neither during learning nor during inference. Following prior work, pre-trained detection or classification features are assumed to be readily available, but no pre-trained natural language features are employed. We subsequently discuss our formulation for those two tasks, i.e., inference and learning.

Table 1. Phrase localization performance on ReferItGame (accuracy in %).

Approach                Image Features        Accuracy (%)
Supervised
SCRC (2016) [13]        VGG-cls               17.93
GroundeR (2016) [43]    VGG-cls               26.93
Unsupervised
GroundeR (2016) [43]    VGG-cls               10.70
GroundeR (2016) [43]    VGG-det               -
Entire image            None                  14.62
Largest proposal        None                  14.73
Mutual Info.            VGG-det               16.00
Ours                    VGG-cls               18.68
Ours                    VGG-det               17.88
Ours                    Deeplab-seg           16.83
Ours                    YOLO-det              17.96
Ours                    VGG-cls + VGG-det     20.10
Ours                    VGG-cls + YOLO-det    20.91

Table 2. Phrase localization performance on Flickr30k Entities (accuracy in %).

Approach                Image Features        Accuracy (%)
Supervised
CCA (2015) [40]         VGG-cls               27.42
SCRC (2016) [13]        VGG-cls               27.80
CCA (2016) [41]         VGG-det               43.84
GroundeR (2016) [43]    VGG-det               47.81
Unsupervised
GroundeR (2016) [43]    VGG-cls               24.66
GroundeR (2016) [43]    VGG-det               28.94
Entire image            None                  21.99
Largest proposal        None                  24.34
Mutual Info.            VGG-det               31.19
Ours                    VGG-cls               22.31
Ours                    VGG-det               35.90
Ours                    Deeplab-seg           30.72
Ours                    YOLO-det              36.93

3.2. Learning

We learn the "image concept"-word relevance E(s, c) from training data D = {(x)}. Importantly, E(s, c) captures the relevance in the form of a distance, i.e., if a word s is related to a concept c, then E(s, c) should be small.

We first make some assumptions about the data and the model. We introduce a set of words of interest, S. All other words are captured by a special token which is also part of the set S. We use t_s(Q) ∈ {0, 1} to denote the existence of the token s ∈ S in a query Q. Additionally, we let C denote a set of concepts of interest. Further, we use a_c(I) ∈ {0, 1} to denote whether image concept c is activated in image I. As mentioned before, we say a concept is activated if the bounding box extracted with efficient subwindow search has a confidence greater than 0.5, and if it covers more than 5% of the image area.

Our "image concept"-word relevance E(s, c) is inspired by the following intuitive observation: if a word s is relevant to a concept c, then the conditional probability of observing such a concept given existence of the word, i.e., P(a_c = 1 | t_s = 1), should be larger than the unconditional probability P(a_c = 1). For example, let's say the query contains the word "man." We would then expect the probability of the "person" concept to be higher given knowledge that the word "man" was observed. To capture this intuition, we perform a hypothesis test for each "image concept"-word pair.

For each word s and image concept c, we construct the null hypothesis H_0(s, c) and the alternative hypothesis H_1(s, c):

H_0(s, c): P(a_c = 1 | t_s = 1) = P(a_c = 1),
H_1(s, c): P(a_c = 1 | t_s = 1) > P(a_c = 1).

The null hypothesis checks whether the probability of an activated concept is independent of observing an activated word. Note that we don't care about capturing a decrease of P(a_c = 1 | t_s = 1) compared to P(a_c = 1). Hence, we perform a one-sided hypothesis test.

Given training data D = {(x)} containing image-query pairs x = (Q, I), we can count how many times the word s ∈ S appeared, which we denote N(s) = Σ_{Q∈D} t_s(Q). Next, we count how many times the word s and the concept c co-occur, which we refer to via N(s, c) = Σ_{(Q,I)∈D} t_s(Q) a_c(I).

We now introduce a random variable n_{s,c} which models the number of times concept c occurs when knowing that sentence token s appears. Assuming a_c to follow a Bernoulli distribution, n_{s,c} then follows the Binomial distribution with N(s) trials and a success probability of P(a_c = 1 | t_s = 1), i.e., P(n_{s,c}) = Bin(N(s), P(a_c = 1 | t_s = 1)).

For large sentence token counts N(s), the Binomial distribution P(n_{s,c}) = Bin(N(s), P(a_c = 1 | t_s = 1)) ≈ N(μ, σ²) can be approximated by a normal distribution N with mean μ = N(s) P(a_c = 1 | t_s = 1) and variance σ² = N(s) P(a_c = 1 | t_s = 1)(1 − P(a_c = 1 | t_s = 1)). Since we use a continuous distribution to approximate a discrete one, we apply the classical continuity correction. Justified by a mean occurrence count of 428 in our case, we use this approximation for computational simplicity in the following. We note that an exact computation is feasible as well, albeit being computationally slightly more demanding.

To check whether the null hypothesis is reasonable, we assume it to hold, and compute the probability of observing occurrence counts larger than the observed N(s, c).

Table 3. Unsupervised phrase localization performance over types on Flickr30k Entities (accuracy in %).

                            people  clothing  body parts  animals  vehicles  instruments  scene  other
#Instances                   5,656     2,306         523      518       400          162  1,619  3,374
Entire Image                 27.83      5.24        0.76    17.56     25.50        15.43  45.77  16.56
Largest proposal             31.80      7.58        2.10    30.11     34.50        17.28  41.21  17.21
GroundeR, VGG-det (2016)     44.32      9.02        0.96    46.91     46.00        19.14  28.23  16.98
Ours, VGG-det                61.93     16.86        2.48    64.28     54.0          9.87  16.66  14.25
Ours, YOLO-det               58.37     14.87        2.29    68.91     55.00        22.22  24.87  20.77