Standardised testing and the development and use of proficiency scales: addressing local challenges, meeting global standards.
1. STANDARDISED TESTING
While standardised testing first occurred in China during the Han Dynasty (206 BC – 220 AD), its application to assessing language ability is relatively recent, dating back to the beginning of the 20th century. Standardised tests have been praised for providing equal opportunities and promoting social justice and meritocracy, as well as for allowing score comparability among test takers owing to the high reliability of test administration, tasks and scoring procedures. However, standardised testing has also been criticised for negative consequences such as teaching to the test, and has even been accused of serving as a mechanism to control teachers and education systems. The debate continues and is relevant to regional education authorities who are undertaking reforms in English language teaching and assessment, and who may have set proficiency benchmarks for students and teachers.
At New Directions, papers will address all aspects of standardised testing, in particular the extent to which it can address regional ambitions for the language proficiency of learners and teachers. These papers may be theoretical or practical in nature, with priority given to solutions-based papers.
2. PROFICIENCY SCALES
The Common European Framework of Reference for Languages (CEFR), the most widely used and influential proficiency scale, has seen wide application in the development of language syllabi, materials and tests. Though the CEFR has been shown to be a useful framework to which many tests are aligned, it has been criticised for lacking a sound theoretical basis and for having inadequate descriptors. There is also an ongoing debate as to its applicability to non-European languages, and many countries in the region have extended and refined the CEFR (e.g. Japan), are in the process of developing their own proficiency scales (e.g. China), are considering developing their own scales (e.g. Vietnam) or are mapping their own tests to the CEFR.
At New Directions, papers will address all aspects of the use or refinement of existing scales, the development and validation of country-specific proficiency scales, and the validation of tests aligned to proficiency scales.
3. PERFORMANCE-BASED TESTING
The development, administration and scoring of performance-based tests of speaking and writing at scale present a number of challenges. However, the cited benefits of such tests include the ability to assess a wider range of competences beyond linguistic competence alone, as well as stronger support for the inferences we make from test scores, since learners are asked to perform tasks similar to those they encounter outside the testing context. In addition, the positive washback on teaching and learning is well documented. This conference is interested in the tension between the region’s desire for language learners to use English effectively for communication and the challenges of administering and scoring performance-based speaking and writing tests for large numbers of candidates.
At New Directions, papers will address performance-based testing, in particular papers that propose solutions to the tension described above.