Arabic CAP - Center for Applied Second Language Studies

CASLS REPORT

Technical Report 2010-3

Unlimited Release

Printed August 2010

Supersedes Arabic Final Report

Dated Sept. 2008

Arabic Computerized Assessment of

Proficiency (Arabic CAP)

Martyn Clark

Assessment Director

Prepared by

Center for Applied Second Language Studies

University of Oregon

CASLS, a National Foreign Language Resource Center and home of the Oregon Chinese Flagship Program, is dedicated to improving language teaching and learning. Prepared by the Center for Applied Second Language Studies (CASLS).

NOTICE: The contents of this report were developed under a grant from the Department of Education. However, those contents do not necessarily represent the policy of the Department of Education, and you should not assume endorsement by the Federal Government.

Printed in the United States of America. This report has been reproduced directly from the best available copy.

Available from CASLS:

Campus: 5290 University of Oregon, Eugene OR 97403

Physical: 975 High St Suite 100, Eugene, OR 97401

Telephone: (541) 346-5699

Fax: (541) 346-6303

E-Mail: info@uoregon.edu

Download: http://casls.uoregon.edu/papers.php


Arabic Computerized Assessment of Proficiency

(Arabic CAP)

Martyn Clark

Assessment Director

martyn@uoregon.edu

Abstract

This document was prepared by the Center for Applied Second Language Studies (CASLS). It describes the development of the Arabic Computerized Assessment of Proficiency (CAP). The development of this test was funded through the National Security Education Program (NSEP) with the mandate of developing an assessment in Modern Standard Arabic in reading, listening, writing, and speaking based on the existing infrastructure for the Standards-based Measurement of Proficiency (STAMP), a previous CASLS project to develop online proficiency tests in modern foreign languages. This document has several major sections. The first and second sections give an overview of the Arabic CAP project and format of the test. The third section details the development of the test items. The fourth describes the technical characteristics of the final test. The fifth section presents validity evidence from an external review and field testing. The final section describes score reporting for Arabic CAP.

Acknowledgment

The development of this test was made possible with funding from the National Security Education Program (NSEP). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of NSEP. Additional materials were developed under a grant from the U.S. Department of Education. However, those contents do not necessarily represent the policy of the U.S. Department of Education, and you should not assume endorsement by the Federal Government.

Contents

Nomenclature
Preface
Executive summary

1 Overview and purpose of the assessment

1.1 Construct for the CAP
1.2 Test level
1.3 Population served by the assessment

2 Description of the assessment

2.1 Content and structure of the CAP
2.2 Test Delivery

3 Test development

3.1 Item writing
3.2 Internal review
3.3 Graphics development
3.4 Revisions

4 Technical characteristics

4.1 Field testing
4.2 Selection of items
4.3 Preparation for delivery
4.4 Determination of cut scores

5 Validity evidence

5.1 External review

6 Score reporting

6.1 Reading and listening scores
6.2 Writing and speaking scores

References

Appendix

A Standard setting outline
B External item review
C Rasch summary results
D Bin information functions

List of Figures

1 Arabic reading item
2 Arabic listening item
3 Item writing workflow
4 Map of Arabic field test participants
5 "Floor First" delivery
6 Delivery algorithm

List of Tables

1 CASLS Benchmark Levels
2 Language Proficiency Measured by CAP (based on Bachman & Palmer (1996))
3 Advanced Arabic Text Counts
4 Item Counts for Reading and Listening
5 Text and Item Review
6 Correlations between proficiency ratings
7 Cut Scores for Scaled Scores
8 Common Speaking Rubric
9 Speaking Scores and Proficiency Levels

Nomenclature

ACTFL: American Council on the Teaching of Foreign Languages

Angoff procedure: A standards-setting method in which experts estimate the percentage of examinees expected to be successful on an item

Avant: Avant Assessment (formerly Language Learning Solutions)

Bin: A group of test items delivered together

CAL: Center for Applied Linguistics

CAP: Computerized Assessment of Proficiency

CASLS: Center for Applied Second Language Studies

FSI/ILR: Foreign Service Institute/Interagency Language Roundtable

Item set: Two or more items sharing a common stimulus (e.g., a reading text)

LRC: Language Resource Center

Level: Level on a proficiency scale (e.g., Advanced-Mid)

NSEP: National Security Education Program

Panel: A term used to describe a particular arrangement of bins

Rasch: A mathematical model of the probability of a correct response which takes person ability and item difficulty into account

Routing table: A lookup table used by the test engine to choose the next most appropriate bin for a student

Score table: A lookup table used by the scoring engine to determine an examinee's score based on their test path

STAMP: STAndards-based Measurement of Proficiency

Test path: A record of the particular items that an examinee encounters during the test
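The dichotomous Rasch model defined above can be made concrete with a short sketch. This is a minimal illustration of the model's probability function, not the analysis code used by CASLS; the function name and variables are invented for this example.

```python
import math

def rasch_probability(ability: float, difficulty: float) -> float:
    """Probability of a correct response under the dichotomous Rasch model.

    P(correct) = exp(theta - b) / (1 + exp(theta - b)), where theta is
    person ability and b is item difficulty, both measured in logits.
    """
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# A person whose ability equals an item's difficulty has a 50% chance of
# answering it correctly; the probability rises as ability exceeds difficulty.
print(rasch_probability(0.0, 0.0))  # 0.5
print(rasch_probability(1.0, 0.0) > rasch_probability(0.0, 0.0))  # True
```

Placing persons and items on this common logit scale is what lets the test engine compare an examinee's estimated ability directly against the difficulty of candidate bins.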

Preface

The Center for Applied Second Language Studies (CASLS) is a Title VI K-16 National Foreign Language Resource Center at the University of Oregon. CASLS supports foreign language educators so they can best serve their students. The center's work integrates technology and research with curriculum, assessment, professional development, and program development. CASLS receives its support almost exclusively from grants from private foundations and the federal government. Reliance on receiving competitive grants keeps CASLS on the cutting edge of educational reform and developments in the second language field. CASLS adheres to a grass-roots philosophy based on the following principles:

All children have the ability to learn a second language and should be provided with that opportunity.
Meaningful communication is the purpose of language learning.
Teachers are the solution to improving student outcomes.

The Computerized Assessment of Proficiency (CAP) is an online test of proficiency developed by CASLS. In the past, proficiency tests developed at CASLS have been licensed by Avant Assessment through a technology transfer agreement overseen by the University of Oregon Office of Technology Transfer. These tests are delivered operationally under the name STAMP (STAndards-based Measurement of Proficiency). We refer to tests under development as CAP to differentiate research done by CASLS during the development phase from any additional work in the future by Avant Assessment.

Executive summary

CASLS has developed the Arabic Computerized Assessment of Proficiency (Arabic CAP), an online assessment of Modern Standard Arabic that covers a proficiency range comparable to the American Council on the Teaching of Foreign Languages (ACTFL) proficiency levels Novice through Advanced in four skills (reading, listening, writing, presentational speaking). This test builds on the style and format of the Standards-based Measurement of Proficiency (STAMP) created previously at CASLS. The CAP project introduces a new item development process and a new delivery algorithm for the reading and listening sections.

Native speakers of Arabic identified reading and listening passages, and CASLS staff wrote corresponding items. A comprehensive review of the test items was conducted in June 2008. Reviewers expressed general satisfaction with the test items, though there were discrepancies between the intended proficiency level and the reviewers' estimation of the level. This was most evident with items geared towards the upper proficiency levels. The most promising items were selected for field testing.

Empirical information on the items was collected through an adaptive field test. Approximately 1500 students participated in field testing. Speech and writing samples were collected for those test sections, but no ratings were given. Reading and listening data from the field tests were analyzed using a Rasch methodology. The person reliability from the reading item analysis was estimated at .93, and the person reliability from the listening item analysis was .89. Appropriately functioning items were assembled into a test panel using empirical information to establish a score table and routing table. Cut scores for proficiency levels were set according to the external review. Simulations of the delivery algorithm show a correlation of r = .98 between simulated test taker ability and final ability estimate on the operational panel. The simulation also shows that the reading section is 90% accurate and the listening section is 89% accurate in identifying students' "true" proficiency level.

1 Overview and purpose of the assessment

1.1 Construct for the CAP

CAP can be considered a "proficiency-oriented" test. Language proficiency is a measure of a person's ability to use a given language to convey and comprehend meaningful content in realistic situations. CAP is intended to gauge a student's linguistic capacity for successfully performing language use tasks. CAP uses test taker performance on language tasks in different modalities (speaking, reading, writing, listening) as evidence for this capacity.

In CAP, genuine materials and realistic language-use situations provide the inspiration for reading tasks. In many cases, authentic materials are adapted for the purposes of the test. In other cases, these materials provide the template or model for materials created specifically for the test. Items are not developed to test a particular grammar point or vocabulary item. Rather, the tasks approximate the actions and contexts of the real world to make informal inferences as to how the learner would perform in the "real world."

1.2 Test level

CASLS reports assessment results on the CASLS Benchmark Scale. Several points along the scale have been designated as Benchmark Levels. These Benchmark Levels include verbal descriptions of the proficiency profile of a typical student at that point in the scale. The Benchmark Level descriptions are intended to be comparable to well-known proficiency scales.