Grammaticality judgement test: do item formats affect test performance?
A grammaticality judgement test (GJT) is one of many ways to measure language proficiency and knowledge of grammar. It was introduced to second language research in the mid-1970s. The GJT is premised on the assumption that being proficient in a language means having two types of language knowledge: receptive knowledge, or language competence, and productive knowledge, or language performance. The GJT is meant to measure the former. In the test, learners judge whether a given item, usually presented out of context, is grammatical. Over the years, researchers have used the GJT to collect data about specific grammatical features when testing hypotheses, and data collected by a GJT are said to be more representative of a learner's language competence than naturally occurring data. The method also allows negative evidence (ungrammatical samples) to be collected and compared with production problems such as slips and incomplete sentences. Despite its usefulness, the application of the GJT is riddled with controversy. Beyond general reliability issues, it has been argued that certain item formats are more reliable than others. The present study therefore seeks to determine whether two different item formats correlate with the English language proficiency of 100 ESL undergraduates.
Main Authors: | Tan, Bee Hoon; Mohamad Noor, Nor Izzati |
---|---|
Format: | Article (peer reviewed) |
Language: | English |
Published: | Universiti Putra Malaysia Press, 2015 |
Published in: | Pertanika Journal of Social Sciences & Humanities, 23 (spec. Dec.), pp. 119-130. ISSN 0128-7702; ESSN 2231-8534 |
Source: | UPM Institutional Repository, Universiti Putra Malaysia (http://psasir.upm.edu.my/) |
Online Access: | http://psasir.upm.edu.my/id/eprint/32880/1/09%20JSSH%28S%29-0064-2015.pdf http://psasir.upm.edu.my/id/eprint/32880/ http://www.pertanika.upm.edu.my/Pertanika%20PAPERS/JSSH%20Vol.%2023%20%28S%29%20Dec.%202015/09%20JSSH%28S%29-0064-2015.pdf |