arXiv:2112.05417v2 [cs.CL] 9 Mar 2022


Unsupervised Editing for Counterfactual Stories

Jiangjie Chen*, Chun Gan*, Sijie Cheng, Hao Zhou†, Yanghua Xiao†, Lei Li‡

Shanghai Key Laboratory of Data Science, School of Computer Science, Fudan University;
ByteDance AI Lab; JD.com; University of California, Santa Barbara;
Fudan-Aishu Cognitive Intelligence Joint Research Center
{jjchen19, sjcheng20, shawyh}@fudan.edu.cn

Abstract

Creating what-if stories requires reasoning about prior statements and possible outcomes of the changed conditions. One can easily generate coherent endings under new conditions, but it would be challenging for current systems to do it with minimal changes to the original story. Therefore, one major challenge is the trade-off between generating a logical story and rewriting with minimal edits. In this paper, we propose EDUCAT, an editing-based unsupervised approach for counterfactual story rewriting. EDUCAT includes a target position detection strategy based on estimating causal effects of the what-if conditions, which keeps the causal invariant parts of the story. EDUCAT then generates the stories under fluency, coherence and minimal-edits constraints. We also propose a new metric to alleviate the shortcomings of current automatic metrics and better evaluate the trade-off. We evaluate EDUCAT on a public counterfactual story rewriting benchmark. Experiments show that EDUCAT achieves the best trade-off over unsupervised SOTA methods according to both automatic and human evaluation. The resources of EDUCAT are available at: https://github.com/jiangjiechen/EDUCAT.

1 Introduction

Counterfactual reasoning is a hypothetical thinking process to assess possible outcomes by modifying certain prior conditions. It is commonly known as "what-if" analysis: "what will happen if ...". It is a big challenge to build an intelligent system with counterfactual reasoning capabilities (Pearl 2009; Pearl and Mackenzie 2018). Counterfactual reasoning relies on the ability to find the causal invariance in data, i.e., the factors held constant with the change of conditions in a series of events (Sloman and Lagnado 2004). In this paper, we study unsupervised counterfactual story rewriting, a concrete instance of counterfactual reasoning. We focus on unsupervised methods for this task, since humans do not need supervised learning to imagine alternative futures. The task is to create plausible alternative endings given small modifications to the story context.

* Work is done during internship at ByteDance AI Lab. † Corresponding authors. ‡ Work is done while at ByteDance AI Lab. Copyright © 2022, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

[Figure 1: Counterfactual story rewriting example from the TIMETRAVEL (Qin et al. 2019) dataset. Premise S1: "Kelly was playing her new Mario game." Original context S2: "She had been playing it for weeks."; counterfactual context S'2: "Kelly never beat the game though." Original ending S3-S5: "She was playing for so long without beating the level. Finally she beat the last level. Kelly was so happy to finally beat it." Our proposed EDUCAT iteratively edits the original ending, accepting or rejecting each proposal, to obtain the counterfactual ending S'3-S'5: "She was playing for so long without beating the level. She never beat the last level. Kelly was so sad to be stuck at the end."]

In this task, the major challenge is the trade-off between generating natural stories and modifying the original text with minimal edits. This requires finding the causal invariance in a story, i.e., invariant future events under the change of conditions. Indeed, with a pre-trained language model (LM), it is relatively easy to generate fluent endings under new conditions with massive edits. However, difficulties arise when one has to perform accurate reasoning while modifying the ending minimally and keeping it natural. For example, in Figure 1, what if Kelly played the Mario game but never beat the game (altering s2 to s'2)? From human commonsense, one can easily create a plausible alternative story ending by making small edits: Kelly never beat the last level rather than finally beating it, and hence Kelly would be sad instead of happy. In this case, the invariant event is that Kelly still plays all levels up to the last one, but the variant event would be the consequence of the counterfactual intervention. By identifying and keeping the invariant event, an ideal system can generate a plausible ending with few edits to the variant events.

Most of the existing methods (Li, Ding, and Liu 2018; Xu et al. 2018; Guan, Wang, and Huang 2019; Guan et al. 2020) focus on story generation in an auto-regressive manner. These approaches keep the story logical mainly by exploiting the language modeling ability of LMs such as the GPTs (Radford et al. 2018, 2019; Brown et al. 2020). Few of them (Qin et al. 2019, 2020) deal with the reasoning ability in counterfactual text generation, which requires balancing between coherence and minimal edits. For example, Qin et al. (2020) propose to keep the balance by constraining the decoding of new endings with a sentence-level similarity scorer against the original ones. However, LMs are known to be hard to control, often leading to over-editing.

In this paper, we propose EDUCAT, an EDiting-based Unsupervised Counterfactual generATion method for counterfactual story rewriting. Given the original story and a modified condition statement, the challenge is to locate which part to retain (i.e., causal invariance) and which to modify (i.e., causal variance) while maintaining coherence to the context after editing. Inspired by causal analysis research (Hernán 2004), we quantify the potential outcome after intervention using the ratio between consistencies with the counterfactual and initial conditions, which can be computed by an off-the-shelf model. EDUCAT employs a Markov chain Monte Carlo sampling framework (Metropolis et al. 1953) for unsupervised generation by iteratively generating token modifications (Miao et al. 2019). With desired properties and guidance from the estimated potential outcome, EDUCAT generates fluent and coherent alternative story endings with minimal edits.

The contributions of this work are as follows:

- We first solve the counterfactual story rewriting task using an unsupervised discrete editing method based on MCMC sampling.
- We draw inspiration from causal analysis and propose two counterfactual reasoning components that quantify the outcomes of context changes.
- We conduct experiments to verify that EDUCAT achieves the best trade-off between coherence and minimal edits among unsupervised methods.

2 Task Formulation with Causal Model

In the counterfactual story rewriting task, given a story consisting of a premise z, a story context x and an ending y, we intervene by altering x into a counterfactual context x' and hope to predict the new ending y'. This problem naturally fits to be formulated with a Causal Model, a directed acyclic graph used to encode assumptions on the data generating process. As presented in Figure 2, the left part shows a simple example of a causal model with treatment (X), effect (Y) and confounder (Z), respectively. In causal inference, a confounder is a random variable that influences both the treatment and effect variables, causing a spurious correlation (Pearl 2009). Note that in this problem, z consists of both observed confounders and unobserved commonsense knowledge, where the latter is very difficult to model explicitly.

[Figure 2: Intervention on the causal model, where z is the common premise of the story, x, y denote the original story, and x', y' are the counterfactual story.]

The counterfactual inference can be formulated with a do-operator. As shown in Figure 2, we can intervene on the X variable by applying do(X) = x' to set its value to the counterfactual without changing the rest. The arrow pointing from Z to X in the causal model is deleted since X no longer depends on Z after the intervention, resulting in a new graphical model. Consequently, the problem of counterfactual story generation can be formally restated as a counterfactual question: what would the potential outcome of y be if one changed the story context from x to x'?

3 Proposed Approach: EDUCAT

In this section, we present an overview and details of EDUCAT. In general, the rewriting process works as follows: starting with an original full story, EDUCAT performs the following procedures iteratively:

1. Conflict Detection: it finds possible chunks in the current story ending that contradict the counterfactual conditions;
2. Edits Proposal: it proposes an edited ending and decides its acceptance based on fluency and coherence scores.

The above steps repeat for multiple rounds. Each proposal is either accepted or rejected based on the desired properties π(y), which is defined as the product of the individual property scores:

$$\pi(y) \propto \overbrace{X_0(y)\cdots X_c(y)\cdots X_n(y)}^{\text{Desired Properties}} \qquad (1)$$

Finally, we pick the best candidate according to a ranking function as the output. An illustrative example is given in Figure 1.

However, the challenge remains in quantifying these desired properties for ideal story rewriting. Inspired by causal analysis research, we can quantitatively calculate the difference in the quality of story endings given different conditions with the Causal Risk Ratio (CRR) (Hernán 2004; Hernán and Robins 2020). CRR is defined as follows:

$$\mathrm{CRR} = \frac{P(Y=y \mid do(X=x'), Z=z)}{P(Y=y \mid do(X=x), Z=z)} \qquad (2)$$

The value goes up when the new ending is more consistent with the counterfactual condition. However, it is difficult to explicitly account for both observed and unobserved confounders (z*) in P(Y=y | do(X=x)):

$$P(Y=y \mid do(X=x)) = \sum_{z^*} P(Y=y \mid X=x, Z=z^*)\, P(Z=z^*) \qquad (3)$$

We make a causal sufficiency assumption that only the observed confounder (z) is considered:

$$P(Y=y \mid do(X=x)) = P(Y=y \mid X=x, Z=z) \qquad (4)$$

So CRR can be calculated by

$$\mathrm{CRR} = \frac{P(Y=y \mid X=x', Z=z)}{P(Y=y \mid X=x, Z=z)} \qquad (5)$$

In this way, we can roughly estimate the influence on possible endings brought by a changed condition. Next, we will elaborate on the details of EDUCAT.
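As a toy illustration with made-up probabilities (not numbers from the paper): if a scorer assigns the original ending probability 0.40 under the original context but only 0.05 under the counterfactual one, then

$$\mathrm{CRR} = \frac{P(Y=y \mid X=x', Z=z)}{P(Y=y \mid X=x, Z=z)} = \frac{0.05}{0.40} = 0.125 < 1,$$

so the original ending is judged inconsistent with the counterfactual condition and should be revised, whereas an edited ending whose ratio exceeds 1 is preferred.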

3.1 Constrained Generation via MCMC

In EDUCAT, we direct the Markov chain Monte Carlo (MCMC) sampling process with the counterfactual reasoning ability brought by conflict token detection, and with the desired properties as sampling constraints.

EDUCAT directly samples from the sentence space with three local operations: token replacement, deletion and insertion. During sampling, after an edit position is found, the operation is randomly chosen with equal probability. Finally, the proposed new sentence is either accepted or rejected according to the acceptance rate computed from the desired properties π(y). The above process is repeated until convergence.

Specifically, the Metropolis-Hastings (MH) sampling algorithm moves the current sentence y^t to the next sentence y^{t+1} by generating from the proposal distribution g(y^{t+1} | y^t) and accepting it based on an acceptance rate. The sample distribution in MCMC converges to the stationary distribution π(y) of the Markov chain under mild conditions. The acceptance rate α at the t-th iteration is defined as follows:

$$\alpha(y^{t+1} \mid y^t) = \min\left\{1,\; \frac{\pi(y^{t+1})^{1/T}\, g(y^t \mid y^{t+1})}{\pi(y^t)^{1/T}\, g(y^{t+1} \mid y^t)}\right\} \qquad (6)$$

where T is a simulated-annealing temperature, set to $T = 0.95^{\lfloor t/5 \rfloor}$ in our implementation. Next, we describe in detail the design of the stationary distribution π(y) (§3.2) and the transition proposal distribution g(y^{t+1} | y^t) (§3.3).
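For illustration, a minimal sketch of the acceptance test in Eq. (6) with the annealed temperature (function and variable names are ours, not those of the released code):

```python
import random

def mh_accept(pi_new, pi_old, g_forward, g_backward, step,
              base=0.95, interval=5, eps=1e-12):
    """Metropolis-Hastings acceptance test of Eq. (6).

    pi_new, pi_old : stationary scores pi(y^{t+1}) and pi(y^t)
    g_forward      : proposal probability g(y^{t+1} | y^t)
    g_backward     : reverse proposal probability g(y^t | y^{t+1})
    """
    T = base ** (step // interval)                    # T = 0.95^floor(t / 5)
    ratio = (pi_new ** (1.0 / T) * g_backward) / (pi_old ** (1.0 / T) * g_forward + eps)
    alpha = min(1.0, ratio)
    return random.random() < alpha                    # True: accept y^{t+1}
```

Since π(y) is a product of many small probabilities, a practical implementation would likely compare log-scores instead; the sketch keeps the notation of Eq. (6) for clarity.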

3.2 Desired Properties for Story Rewriting

Aside from the basic fluency property, the original CGMH framework is designed with properties such as similarity and keyword constraints. These simple properties cannot direct the sampling with counterfactual reasoning ability. Instead, we want the generated new endings to be not only fluent in terms of storytelling, but also logically coherent with the counterfactual context x' instead of x. In EDUCAT, we define two score functions for story rewriting, namely a fluency score function X_LM and a coherence score function X_Coh. Thus, the stationary distribution π(y) is defined as the product of the fluency score and the coherence score:

$$\pi(y) \propto X_{\text{LM}}(y) \cdot X_{\text{Coh}}(y) \qquad (7)$$

Fluency Score. We compute the probability of the generated ending based on a pre-trained language model, e.g., GPT-2 (Radford et al. 2019). This is important and in line with previous work to guarantee the fluency and readability of the generated sentence. The likelihood is computed autoregressively as:

$$X_{\text{LM}}(y) = \prod_{i=1}^{N} P_{\text{LM}}(y_i \mid z, x', y_{<i}) \qquad (8)$$

We denote y as the proposed ending at the current stage, and y_i as the i-th token in the ending.

Coherence Score. Intuitively, we want to punish proposed endings that contradict the counterfactual conditions but are consistent with the initial ones. Therefore, the purpose of the coherence score function X_Coh is to encourage the model to rewrite the original endings. The value of X_Coh should be larger than 1 if the generated ending is more causally related to the counterfactual context than to the initial one. Inspired by the calculation of CRR, X_Coh is defined as follows:

$$X_{\text{Coh}}(y) = \frac{P_{\text{Coh}}(Y=y \mid z, x')}{P_{\text{Coh}}(Y=y \mid z, x)} \qquad (9)$$

where the formulation of P_Coh fits any model that quantifies the coherence between an ending and a story context. In our implementation, we employ the conditional sentence probability calculated by a pre-trained language model (e.g., GPT-2) to measure the coherence within a story in an unsupervised way. Note that we hope to solve this task in an unsupervised way, but P_Coh is fully extendable to better story coherence checking models.
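For illustration, here is a minimal sketch of how X_LM and X_Coh could be estimated with an off-the-shelf GPT-2 from Huggingface, working in log space for numerical stability; the function names and the simple prompt concatenation are our own assumptions rather than the released implementation.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

@torch.no_grad()
def ending_logprob(context: str, ending: str) -> float:
    """Sum of log P_LM(y_i | context, y_<i) over the ending tokens (cf. Eq. 8)."""
    ctx_ids = tok.encode(context)
    end_ids = tok.encode(" " + ending)
    ids = torch.tensor([ctx_ids + end_ids])
    logp = lm(ids).logits.log_softmax(-1)
    # the token at position i is predicted by the logits at position i - 1
    return sum(logp[0, pos - 1, ids[0, pos]].item()
               for pos in range(len(ctx_ids), ids.size(1)))

def log_pi(premise: str, orig_ctx: str, cf_ctx: str, ending: str) -> float:
    """log pi(y) ~ log X_LM(y) + log X_Coh(y), following Eqs. (7)-(9)."""
    lp_cf = ending_logprob(premise + " " + cf_ctx, ending)      # log P(y | z, x')
    lp_orig = ending_logprob(premise + " " + orig_ctx, ending)  # log P(y | z, x)
    return lp_cf + (lp_cf - lp_orig)   # fluency term + log of the ratio in Eq. (9)
```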

3.3 Editing Proposal Design

Regularized by the desired properties, we can make editing proposals by answering two questions: 1) Where to edit? and 2) Edit with what?

Where to Edit: Conflict Detection. It is critical to know where to edit the original stories in order to write natural counterfactual stories with only minimal edits. Namely, we need to identify tokens that contradict the counterfactual context (Hao et al. 2021). Meanwhile, causal invariant information is kept in the unchanged tokens. Also inspired by the calculation of the Causal Risk Ratio, we estimate the potential outcome of changing the contexts to find the most likely contradictory tokens. Let y be the current ending to edit (initialized with the original ending) and y_i be its tokens; we define the conflicting probability P_cf(y_i) of the i-th token in y as follows:

$$P_{\text{cf}}(y_i) = \operatorname{softmax}_i\!\left(\frac{P_{\text{LM}}(y_i \mid z, x, y_{<i})}{P_{\text{LM}}(y_i \mid z, x', y_{<i})}\right) \qquad (10)$$

The token-level likelihood is computed via a language model. According to this definition, P_cf(y_i) is larger if y_i is more causally related to the initial context than to the counterfactual one. Such tokens are more likely to contradict the counterfactual context and have a higher priority to be edited.
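A sketch of Eq. (10), reusing the GPT-2 model `lm` and tokenizer `tok` loaded in the previous snippet (the exact masking and normalization in the released code may differ):

```python
import torch

@torch.no_grad()
def conflict_probs(premise: str, orig_ctx: str, cf_ctx: str, ending: str):
    """Per-token conflict distribution of Eq. (10): softmax over the ratios
    P_LM(y_i | z, x, y_<i) / P_LM(y_i | z, x', y_<i)."""
    def token_logprobs(context: str) -> torch.Tensor:
        ctx_ids = tok.encode(context)
        end_ids = tok.encode(" " + ending)
        ids = torch.tensor([ctx_ids + end_ids])
        logp = lm(ids).logits.log_softmax(-1)
        return torch.tensor([logp[0, pos - 1, ids[0, pos]].item()
                             for pos in range(len(ctx_ids), ids.size(1))])

    log_ratio = (token_logprobs(premise + " " + orig_ctx)
                 - token_logprobs(premise + " " + cf_ctx))
    # softmax over the raw ratios as in Eq. (10); softmax(log_ratio) is a
    # numerically safer monotone variant one could use instead
    return torch.softmax(log_ratio.exp(), dim=-1)   # higher = likelier conflict token
```

The editor can then sample the edit position from this distribution (or take its argmax), so that tokens tied to the original condition, e.g. "happy" and "beat" in Figure 1, are edited first.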

Edit with What: Modification Action. We randomly sample from three token-level modification actions (replacement, deletion, and insertion) with equal probability to decide how to edit the ending at the chosen position. Let y^t be the current sentence; the proposal distribution from y^t to y^{t+1} is given by

$$g(y^{t+1} \mid y^t) = \frac{1}{3} \sum_{\text{op} \in \{r, d, i\}} g_{\text{op}}(y^{t+1} \mid y^t) \qquad (11)$$

where g_r, g_d, g_i correspond to the replacement, deletion and insertion proposals, respectively.

For replacement, let y^t = [w_1, ..., w_m, ..., w_n]; the replacement action replaces the token w_m with w_c, where w_c is sampled from a pre-selected candidate set Q. Let y^{t+1} = [w_1, ..., w_c, ..., w_n]; then the proposal for replacement is

$$g_r(y^{t+1} \mid y^t) = \mathbb{1}(w_c \in \mathcal{Q})\, P_{\text{MLM}}(w_m = w_c \mid x_{\setminus m}) \qquad (12)$$

Here 1(w_c ∈ Q) is the indicator function, which equals 1 if w_c ∈ Q and 0 otherwise. P_MLM(w_m = w_c | x_{\m}) is the probability of the selected token given the rest of the sentence x_{\m}. It is computed using a masked language model (MLM), e.g., BERT (Devlin et al. 2019) or RoBERTa (Liu et al. 2019).

The transition function for deletion is rather simple: g_d(y^{t+1} | y^t) = 1 if and only if y^{t+1} = [w_1, ..., w_{m-1}, w_{m+1}, ..., w_n], and 0 otherwise. The insertion operation consists of two steps: first, a mask token is inserted at the position, and then a replacement operation is performed on the inserted token.
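For illustration, the following sketch realizes the three proposals with a pre-trained RoBERTa masked LM, operating on whitespace-separated words for brevity; EDUCAT itself edits at the token level, so the candidate handling here is a simplification, not the released implementation.

```python
import random
import torch
from transformers import RobertaForMaskedLM, RobertaTokenizerFast

mlm_tok = RobertaTokenizerFast.from_pretrained("roberta-base")
mlm = RobertaForMaskedLM.from_pretrained("roberta-base").eval()

@torch.no_grad()
def propose_replacement(words, pos, top_k=100):
    """Replacement proposal g_r (Eq. 12): mask position `pos`, keep the MLM's
    top-k candidates as the set Q, and sample one candidate in proportion to
    its MLM probability. Returns the new word list and the proposal prob."""
    masked = words[:pos] + [mlm_tok.mask_token] + words[pos + 1:]
    enc = mlm_tok(" ".join(masked), return_tensors="pt")
    mask_idx = (enc.input_ids[0] == mlm_tok.mask_token_id).nonzero()[0, 0]
    probs = mlm(**enc).logits[0, mask_idx].softmax(-1)
    top_p, top_ids = probs.topk(top_k)              # candidate set Q and its scores
    q = top_p / top_p.sum()
    idx = torch.multinomial(q, 1).item()
    new_word = mlm_tok.decode([top_ids[idx].item()]).strip()
    return words[:pos] + [new_word] + words[pos + 1:], q[idx].item()

def propose_edit(words, pos):
    """Pick replacement / deletion / insertion with equal probability (Eq. 11)."""
    op = random.choice(["replace", "delete", "insert"])
    if op == "delete":
        return words[:pos] + words[pos + 1:]
    if op == "insert":                               # insert a mask, then fill it
        words = words[:pos] + [mlm_tok.mask_token] + words[pos:]
    return propose_replacement(words, pos)[0]
```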

4 Experiments

4.1 Experimental Setup

Dataset. We experiment with EDUCAT on TIMETRAVEL (Qin et al. 2019), a standard counterfactual story rewriting dataset. TIMETRAVEL is built on ROCStories (Mostafazadeh et al. 2016), which consists of a large set of five-sentence stories S = s_{1:5}. The first sentence s1 denotes the premise of a story, s2 sets up the initial context, and the last three sentences s_{3:5} are the story ending. Using the causal language described above, s1, s2, s_{3:5} correspond to Z=z, X=x, Y=y, respectively. In TIMETRAVEL, the initial context was rewritten by humans into a counterfactual context s'2, followed by edited endings s'_{3:5}. They correspond to X=x' and Y=y' in the causal graphical model. As EDUCAT is unsupervised and thus does not need training, we run EDUCAT directly on the test set.

The statistics of TIMETRAVEL are reported in Table 1. Only part of the training set is annotated with edited endings. Each sample in the development and test set is annotated with 3 and 4 rewritten endings respectively, which explains the difference between the number of x' and the number of y' in the development and test sets in Table 1. Note that the fourth edited ending in the test set is not used as a reference, but only serves as the human baseline.

Table 1: Statistics of the TIMETRAVEL dataset.

                                    Train      Dev     Test
# counterfactual contexts (x')     96,867    1,871    1,871
# edited endings (y')              16,752    5,613    7,484

Baselines. Following previous work, we categorize the baselines into three classes: 1) Unsupervised zero-shot baselines, with only off-the-shelf pre-trained models for generation, including pre-trained GPT-2 (generating with s1, s'2) and DELOREAN (Qin et al. 2020). Moreover, in comparisons with unsupervised editing-based methods, we add CGMH (Miao et al. 2019), which is EDUCAT without conflict detection and coherence score; 2) Unsupervised training baselines, GPT-2 + Recon + CF (Qin et al. 2019), which is trained with domain data S and <s1, s'2> (i.e., without s'_{3:5}); 3) Supervised training baselines, with a GPT-2 + SUP (Qin et al. 2019) trained to predict s'_{3:5} from S and s'2 in the form of <S, [SEP], s1, s'2>.

Note that in our paper, we aim at using only off-the-shelf pre-trained models for story rewriting, which makes the previous SOTA method DELOREAN our major baseline. DELOREAN iteratively revises the generated tokens by updating their hidden representations during decoding. The update is constrained between the generated and original endings, followed by a BERT that re-ranks the generated candidates with the next-sentence prediction task.

Implementation Details. All of the pre-trained checkpoints are inherited from the Huggingface implementations (Wolf et al. 2020). Consistent with previous work, we adopt GPT-2, Medium (24 layers) or Small (12 layers), for causal language modeling. We use pre-trained RoBERTa-base as the unsupervised masked language model for token proposals. We keep the first 100 tokens predicted by the MLM as candidates and randomly sample one token as the proposed token based on normalized probabilities. In the experiments, we run EDUCAT and its variants for 100 steps.
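To make the variable mapping concrete, the example from Figure 1 can be laid out as follows (the field names are ours for illustration, not the dataset's actual keys):

```python
# How one TIMETRAVEL example maps onto the causal variables of Section 2:
# z = premise (s1), x = initial context (s2), y = original ending (s3:5),
# x' = counterfactual context (s'2).
example = {
    "premise": "Kelly was playing her new Mario game.",                   # z
    "initial_context": "She had been playing it for weeks.",              # x
    "original_ending": "She was playing for so long without beating the "
                       "level. Finally she beat the last level. "
                       "Kelly was so happy to finally beat it.",           # y
    "counterfactual_context": "Kelly never beat the game though.",         # x'
}
# EDUCAT edits example["original_ending"] so that it stays coherent with
# premise + counterfactual_context while changing as few tokens as possible.
```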

4.2 Evaluation Metrics

Automatic Evaluation Metrics. We adopt BLEU-4 (Papineni et al. 2002) and BERTSCORE (Zhang et al. 2020b) as automatic metrics; both are referenced metrics. Given the ground-truth endings and the generated endings, BLEU computes the number of overlapping n-grams, and BERTSCORE computes their semantic similarity. Such referenced metrics measure the minimal-edits property well, but correlate poorly with human judgements w.r.t. coherence.

For assessing the coherence with the counterfactual conditions, we propose a simple, unreferenced, model-based metric, ENTSCORE (ENTS). Inspired by research on natural language inference (Kang et al. 2018; Dziri et al. 2019), we fine-tune a RoBERTa (base or large) with a binary classification objective to check whether a story context entails a story ending. We use 28,363 stories with annotated edited endings in TIMETRAVEL to train the metric, leading to 113,452 training samples, i.e., x' contradicts y but entails y', and x contradicts y' but entails y. The best metrics achieve F1 scores of 73.07 (base) and 81.64 (large) on the test set. We take the predicted probability that an ending is entailed by the counterfactual context as the output of ENTSCORE.

To better evaluate the subtle trade-off in this task, we calculate a harmonic mean of ENTSCORE and BLEU to represent the trade-off between coherence and minimal edits, defined as

$$\mathrm{HMEAN} = \frac{2 \cdot \mathrm{BLEU} \cdot \mathrm{ENTS}}{\mathrm{BLEU} + \mathrm{ENTS}}$$
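The trade-off metric itself is a one-line computation; the example below reproduces EDUCAT's HMEAN entry in Table 3 from its reported BLEU and ENTSCORE values.

```python
def hmean(bleu: float, ents: float) -> float:
    """Harmonic mean of BLEU and ENTSCORE (the HMEAN trade-off metric)."""
    return 0.0 if bleu + ents == 0 else 2 * bleu * ents / (bleu + ents)

print(round(hmean(44.05, 32.28), 2))   # 37.26, matching EDUCAT's row in Table 3
```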

Human Evaluation Metrics. We also conduct human evaluation to compensate for these automatic metrics and to assess their suitability for this task. Following Qin et al. (2020), our human evaluation focuses on two primary criteria: i) coherence, the logical consistency between the counterfactual context (s1, s'2) and the generated endings, and ii) minimal-edits, the extent of minimal revision between two endings. We calculate the pairwise comparison as human metrics. Annotators are asked to score from 0 to 3 and to choose the better one (or both) of two generated outputs, from EDUCAT and a baseline, without knowledge of their origins. We arrange a training session before the annotation session, where the annotators annotate some cases and resolve their disputes through discussion. Then, we randomly select 100 samples from the test set. Each sample was rated by three graduate students, paid the local minimum wage.¹ The final decision is made based on the majority vote.

Using the human evaluation, we assess these automatic metrics by performing a correlation analysis against the scores produced by human annotators on the generated endings. We calculate three coefficients: Pearson's r, Spearman's ρ and Kendall's τ; the latter two measure monotonic correlation, where Spearman's ρ is more sensitive to abnormal values. According to Table 2, HMEAN proves to be the best metric among them in terms of correlation with human judgements for this task, and it is also our primary metric in the experiments.

¹ They reach fair inter-rater agreement with Fleiss' κ = 0.345 in the annotation session.

Table 2: The correlation between automatic metrics and human judgements of coherence. HMEAN is the harmonic mean between ENTS (large) and BLEU. All of these numbers are statistically significant at p < 0.01.

Metric            Pearson's r   Spearman's ρ   Kendall's τ
BLEU                 0.2619        0.2454        0.1758
BERTSCORE            0.3252        0.3332        0.2385
ENTS (base)          0.3937        0.3973        0.2865
ENTS (large)         0.4685        0.4732        0.3389
HMEAN (large)        0.4995        0.4996        0.3662

Table 3: Automatic evaluation results on the test set of TIMETRAVEL. These methods use GPT-2_M by default. ENTS_l is short for ENTSCORE (large).

Method                                BLEU     BERT   ENTS_l   HMEAN
Supervised Training
  GPT-2_M + SUP                      76.35    81.72    35.06   48.05
Unsupervised Training
  GPT-2_M + FT                        3.90    53.00    52.77    7.26
  Recon + CF                         76.37    80.20    18.00   29.13
Off-the-shelf Pre-trained Models
  GPT-2_M                             1.39    47.13    54.21    2.71
  DELOREAN                           23.89    59.88    51.40   32.62
  CGMH                               41.34    73.82    29.80   34.63
  EDUCAT                             44.05    74.06    32.28   37.26
Human                                64.76    78.82    80.56   71.80

4.3 Results

Automatic Evaluation. Table 3 shows our results w.r.t. automatic metrics. In general, we observe that BLEU and ENTSCORE indicate the trade-off between minimal edits and coherence in this task: models that generate coherent endings can also make excessive edits. Among them, EDUCAT achieves the best trade-off in terms of HMEAN, which is also the metric that correlates best with human judgements, as shown in Table 2.

For supervised and unsupervised training methods, we find that Recon+CF scores high on BLEU and BERTSCORE but low on ENTSCORE, suggesting that the endings it generates are not coherent with the counterfactual contexts but paraphrased from the original endings (Qin et al. 2019). Moreover, a gap remains between supervised methods and unsupervised ones.

Interestingly, zero-shot GPT-2_M and DELOREAN perform very well on ENTSCORE but poorly on BLEU and BERTSCORE. ENTSCORE draws its decision boundary based on the change of conditions (s2, s'2). Therefore, as long as the ending follows the counterfactual condition, where large-scale language models such as GPT-2 excel, ENTSCORE will produce a high score. Zero-shot GPT-2_M does not constrain the generation to stay close to the original ending, with only the counterfactual context conditioning the generation. Hence, it generates fluent endings thanks to the language modeling ability of GPT-2, but with over-editing. The same is true for DELOREAN, though it alleviates this problem by constraining the KL-divergence with the original endings. Indeed, it is easy to generate coherent endings with massive edits, as even a zero-shot GPT-2 can achieve a high ENTSCORE.
