

Evaluation methodologies have also been developed for the specific field of language teaching and learning supported by technology. This field is commonly referred to as Computer Assisted Language Learning (CALL). CALL focuses on software applications, environments, digital media, and respective content with a clear instructional purpose and a language learning objective [27]. Diverse principles and pedagogical frameworks underlie the design of CALL applications. Given that CALL software dates back to the 1970s, the learning methodologies it deploys are clearly influenced by the broader educational tendencies of each historical era [61], [62]. Based on learning orientation, CALL software may be categorized as follows:

• Behaviorist / structural CALL: implemented mainly in the 1970s, it consisted of traditional drill-and-practice content based on the stimulus-response philosophy. Software analyzed students’ input and provided feedback diagnosing their grammatical “mistakes”. In the following decades behaviorist approaches were rejected and abandoned by most language teachers, and the same happened to behaviorist CALL software.

• Communicative CALL: it was implemented in the 1980s and 1990s, in line with the advent of microcomputers and capitalizing on the ever increasing development and expansion of home/personal computing, which, becoming more and more accessible to the masses, paved the way for more computers in the classrooms and for more sophisticated educational software. As the term implies, it is based on the communicative learning approach that became very popular in the late 1970s and throughout the 1980s. The communicative approach focuses on language use and less on grammar elements; grammar is taught implicitly rather than explicitly, that is, not in a behaviorist manner. Students generate original, flexible, and open language output in the frame of a variety of activities such as gap-filling, cloze texts, multiple choice, free-format text entry, adventures and simulations, sentence reordering, explanations, etc. A key element of this phase of CALL software is that, for the first time, games and applications that were not initially designed for language learning purposes, such as commercially available games, made their active presence felt in didactic procedures.

• Integrative CALL: since the mid-1990s, with the advent of multimedia, modern web browsers, and the popularity of the Internet and Internet-based applications, CALL has become hard to describe and define, since it has embraced sophisticated tools such as interactive environments, hyperlinked content, Web 2.0 applications, blogs, wikis, social networking, online audiovisual materials, etc. [62] defined this new era of CALL software as “integrative” since it focuses on a blend of multimedia technology, computer-mediated communication, and task-based language instruction [61], [62]. This phase of CALL software has not yet reached the state of “completeness” or “normalization” [9], [10].

In relation to the effectiveness of CALL in reaching language learning objectives, it is possible to evaluate impact either quantitatively or qualitatively. Evaluation strategies aim to establish whether CALL provides a more conducive environment for language learning compared to conventional methods and media. Other evaluation indicators for assessing the impact of CALL on building language skills include:

• The degree to which ICT is deployed in the context of first or second language teaching

• Practical issues such as availability of software, hardware, and Internet connections and their respective limitations

• Budgetary restrictions

• Teacher and learner attitudes towards CALL

• Adaptations in language teaching approaches

• And more

At the practical level, CALL software evaluation has in the past been driven by checklists or forms, methodological frameworks for language teaching, or second language acquisition (SLA) theory and research-based criteria. These are described in more detail below:

• Checklists: they have been a very common evaluation tool since the early days of CALL and are still broadly used, mainly because of their practicality and cost-efficiency. Checklists consist of a series of questions or elements that need to be answered or classified. The evaluator is asked to respond to closed- or sometimes open-ended questions based on information gathered during the evaluation process. Much criticism has been raised, as checklists are often perceived as being restrictive. Another criticism is that they sometimes focus only on the technological aspect of CALL rather than on the pedagogy [28]. However, some researchers are in favor of checklists as an evaluation tool. Susser (2001) argues that the problem does not lie in the validity of checklists in general but rather in their application in particular circumstances. He further states that checklists can always be adapted and updated to meet specific evaluation needs.

• Methodological frameworks: similarities and compatibilities can be identified between methodological frameworks and checklists; however, they differ in two significant ways: 1. methodological frameworks tend to be more descriptive, and 2. they shed more light on language teaching and learning processes, including considerations that reach beyond the scope of technology as an educational support tool.

According to Hubbard [27], a methodological evaluation framework typically involves a description of the components of specific software in relation to a particular goal; in this case, the goal is the evaluation of CALL towards enriching language learning practices. In a methodological framework, evaluation questions are not preset. Rather, a methodological framework is a tool that the evaluator can use to create a set of questions or to develop some other evaluation scheme that meets the unique needs of each evaluation process. Until the mid-1980s evaluation had largely been conceptualized in terms of checklists. Phillips [43] proposed a framework more explicitly linked to language teaching and learning methodology. It included categories for the CALL software types that were popular at the time. It further described dimensions such as language difficulty, learner focus / skill area (such as listening, speaking, reading, or writing), and language focus (such as lexis, grammar, and discourse), which were important to the language learning character of a particular software application. Hubbard [27] expanded Phillips’ system by integrating it with the one developed independently by Richards and Rodgers [47] for describing and analyzing language teaching methods in terms of three descriptive categories: approach, design (including the syllabus model), and procedure, namely the classroom techniques and activities through which the learning design is realized. Hubbard [27] adapted these concepts into categories describing the key elements of evaluation and renamed them “teacher fit”, “learner fit”, and “operational description”.

• Second language acquisition-based (SLA) approaches: software applications are popular in the teaching and learning of foreign languages, and SLA theories and approaches can be used to develop software evaluation standards. One of the pioneers of this method was Underwood (1984), who established a case for a communicative approach to CALL based on generalizations from the research and communicative theory of that era. His 13 points characterizing communicative CALL became a de facto evaluation rubric. Later on, Egbert and Hanson-Smith [21] proposed eight generalizations for optimal language learning environments based on Underwood’s previous work. In more recent years, the work of Carol Chapelle [13] introduced the Computer Applications in Second Language Acquisition (CASLA) framework, which is based on the following principles:

1. CALL evaluation is context-specific

2. CALL should be evaluated in both an empirical and a judgmental way

3. CALL evaluation criteria should stem from instructed SLA theory and research, and

4. The main concern should be language learning potential and outcome
