Design Experiments
Another approach in educational evaluation is the "design experiment". This method draws on approaches from technology and engineering in which a product is designed to solve a particular problem or to perform in a selected situation; software usability evaluations draw upon the same paradigm. According to Bennett (2003), this approach has direct relevance to educational contexts, where the “product” being tested may be a new educational package, for example a curriculum, content, or self-access computer-based materials, developed to address specific shortcomings in a given context. The experiment evaluates the effects of the new package in a contained environment. Design experiments have features of both the classical research approach and the illuminative evaluation approach, in that they seek both to describe and explain what happens in selected situations and to test a particular hypothesis (Bennett, 2003).
The instruments used to collect data in design experiments typically include questionnaires, surveys, interviews, observations, and tests. The model or methodology used to gather the data is a fully specified, step-by-step procedure.
Design experiments must be carefully planned and executed to ensure that the data they produce are accurate and valid. Research methods, and the data they yield, fall into two basic categories:
- Quantitative methods, which produce numerical data and statistics.
- Qualitative methods, which produce descriptive data.
The method best suited to producing meaningful input depends on the objective of the evaluation, as well as on its design and implementation. Quantitative and qualitative techniques are often combined in what is referred to as “mixed method” evaluations, which aim at a richer and more comprehensive understanding of whether, and to what extent, a project’s objectives have been met.
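As a loose illustration of how the two kinds of data might sit side by side in a mixed-method analysis, the sketch below (in Python, using only the standard library) summarises hypothetical Likert-scale questionnaire scores for the quantitative strand and tallies coded interview themes for the qualitative strand. All scores, theme labels, and variable names are invented for the example and are not drawn from Bennett (2003).

 # Mixed-method sketch: a quantitative summary plus a qualitative theme tally.
 # All data below are hypothetical.
 from statistics import mean, stdev
 from collections import Counter

 # Quantitative strand: Likert-scale (1-5) responses to one questionnaire item.
 likert_scores = [4, 5, 3, 4, 5, 2, 4, 4, 3, 5]
 print(f"Mean rating: {mean(likert_scores):.2f}")
 print(f"Standard deviation: {stdev(likert_scores):.2f}")

 # Qualitative strand: interview excerpts coded by theme (codes are illustrative).
 coded_themes = ["engagement", "usability", "engagement",
                 "content gap", "engagement", "usability"]
 print("Theme frequencies:", Counter(coded_themes).most_common())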
Please click here to go back to the Evaluation Approaches page.
Click here to go back to the homepage.
References
Bennett, J. (2003). Evaluation Methods in Research. London: Continuum.