Design Experiments

Another approach in educational evaluation is that of the “design experiment”. This method draws upon approaches deployed in technology and engineering that examine how a product is designed with the objective of solving a particular problem or performing in a selected situation. Software usability evaluations draw upon this paradigm. According to Bennett (2003), this approach has direct relevance to educational contexts, where the “product” being tested may be a new educational package, for example a curriculum, content, or self-access computer-based content, developed with the objective of addressing specific shortcomings in a given context. The experiment evaluates the effects of the new package in a contained environment. Design experiments have features of both the classical research approach and the illuminative approach to evaluation, in that they seek both to describe and explain what happens in selected situations and to test a particular hypothesis (Bennett, 2003).
 
The instruments used to collect data in design experiments typically include questionnaires, surveys, interviews, observations, and tests. The model or methodology used to gather the data is a fully specified, step-by-step procedure.
 
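As a minimal, purely illustrative Python sketch (the steps, instrument names, and class below are assumptions, not drawn from Bennett), such a step-by-step procedure can be written down explicitly, with each step tied to the instrument it uses:

<pre>
# Illustrative sketch only: a fully specified, step-by-step data-collection
# procedure for a design experiment. All names and steps are invented.
from dataclasses import dataclass

@dataclass
class Step:
    order: int
    instrument: str    # e.g. "questionnaire", "observation", "interview"
    description: str

procedure = [
    Step(1, "questionnaire", "Pre-test: baseline attitudes of participants"),
    Step(2, "observation",   "Sessions in which the new package is used"),
    Step(3, "interview",     "Follow-up interviews with a purposeful subsample"),
    Step(4, "test",          "Post-test: measure learning outcomes"),
]

# Print the procedure in order, one instrument per step.
for step in procedure:
    print(f"Step {step.order} [{step.instrument}]: {step.description}")
</pre>
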
Design experiments must be carefully planned and executed to ensure that the data they yield are accurate and valid. Research methods, and the data they produce, fall into two basic categories:

*Quantitative methods, which lead to “numbers” and statistics.
*Qualitative methods, which yield descriptive data.
Technically, data is considered quantitative if it takes the form of numbers and qualitative if it describes observations or discourse. Quantitative data is typically gathered through surveys completed by large numbers of randomly selected respondents. It can be analyzed mathematically, using statistical methods, and is useful for answering questions of “what”, “when”, and “who”. Images, videos, audio recordings, and other non-text data are considered qualitative.
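As a minimal, purely illustrative Python sketch (the scores below are invented, not real evaluation data), quantitative survey responses of this kind lend themselves to simple statistical summaries:

<pre>
# Hypothetical example: Likert-scale scores (1-5) gathered from randomly
# selected respondents, summarized with basic descriptive statistics.
from statistics import mean, stdev

scores = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]   # invented survey responses

print(f"n = {len(scores)}")
print(f"mean = {mean(scores):.2f}")
print(f"standard deviation = {stdev(scores):.2f}")
</pre>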
 
Qualitative data is usually gathered through participant or non-participant observation, interviews, or focus groups, but it can also rely on written documents or be collected through case studies. The emphasis is less on analyzing statistics or counting subjects and more on explaining attitudes or ascertaining procedures, and a significantly smaller sample of subjects is typically involved. Another difference is that qualitative data stems from open-ended, in-depth, individualized interviews, protocols, and observations. It is better suited for answering questions of “how” and “why”, helping the evaluator or researcher to better understand perceptions, social contexts and settings, and the meanings attached to them. Sample focus groups are selected purposefully, according to specific criteria, rather than randomly.
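In the same illustrative spirit (the theme labels below are invented, not from the source), a simple way to summarize such material is to assign thematic codes to interview excerpts and tally how often each theme recurs:

<pre>
# Hypothetical example: thematic codes assigned to interview excerpts,
# tallied to show which themes recur most often.
from collections import Counter

codes = [
    "motivation", "usability", "motivation",
    "content relevance", "usability", "motivation",
]

# most_common() orders themes from most to least frequent.
for theme, count in Counter(codes).most_common():
    print(f"{theme}: {count}")
</pre>
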
The method best suited to producing meaningful input depends on the objective of the evaluation activities, as well as on the evaluation design and implementation. Quantitative and qualitative techniques are often combined in what are referred to as “mixed method” evaluations, which aim at a richer and more comprehensive understanding of whether, and to what extent, a project’s objectives have been met.
 
----

Please click here to go back to the [[Evaluation Approaches]] page.

Click here to go back to the [[Main Page|home]]page.

----

====References====

Bennett, J. (2003) ''Evaluation Methods in Research''. London: Continuum.
