Most work in the area of vocabulary has been concerned not with lexical learning as
such, but with the management of vocabulary learning: how to reduce the vocabulary
load, as reflected in the frequency-count movement dating from Ogden's work in 1930. In the 1990s, much larger corpora were created: the British National Corpus and the Cambridge International Corpus both totaled 100 million words in July 1998. The 1995 editions of the Collins COBUILD and the Longman Dictionary of Contemporary English used coding systems to identify high-frequency words. Nation's University Word List (1990) has since been replaced by the Academic Word List (Coxhead, 1998). The demand for high-frequency word lists stems from teachers' and researchers' interest in testing the word-level knowledge of second language learners.
Nation and Waring, in “Vocabulary Size, Text Coverage and Word Lists”, discuss the criteria used for vocabulary selection and teaching (Schmitt and McCarthy, 1997). Schmitt and McCarthy (1997) suggest that
word selection should be based on representativeness across a wide range of language users, frequency, and range. They also suggest including word families, idioms, and set expressions, which can supply a wide range of information such as the forms and parts of speech within a word family, frequency, and underlying meaning. The major difficulty of choosing words on the basis of frequency is now largely solved by computational corpus analysis. White (1999) suggests that words that are relevant to learners' needs, easy to learn, and likely to interest learners should be presented early in the course. These criteria apply mainly to the early stages; at more advanced levels the guiding criterion becomes personal interest, since at later stages learners require words for specific purposes that meet their individual needs.
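To make the frequency and range criteria above concrete, the following Python sketch (a minimal illustration, not any published tool; the sample documents and function name are invented for the example) counts how often each word occurs across a small set of texts and in how many of those texts it appears, then ranks candidates so that words that are both frequent and widely distributed come first.

from collections import Counter
import re

def frequency_and_range(documents):
    """Return {word: (total frequency, number of documents containing it)}."""
    freq = Counter()       # how often each word occurs in the whole corpus
    doc_range = Counter()  # how many documents each word appears in
    for text in documents:
        tokens = re.findall(r"[a-z]+", text.lower())
        freq.update(tokens)
        doc_range.update(set(tokens))  # count each word once per document
    return {w: (freq[w], doc_range[w]) for w in freq}

# Hypothetical three-text "corpus" used only to demonstrate the ranking.
documents = [
    "The study of vocabulary requires frequency information.",
    "Frequency and range guide the selection of vocabulary.",
    "Learners need high frequency words early in the course.",
]

stats = frequency_and_range(documents)
# Rank by range first, then frequency: prefer words spread across many texts.
ranked = sorted(stats, key=lambda w: (stats[w][1], stats[w][0]), reverse=True)
for word in ranked[:5]:
    print(word, stats[word])

Published lists such as the Academic Word List are of course compiled from far larger corpora and count word families rather than raw word forms, but the underlying ranking principle is the same.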