with this philosophy (and perhaps make a brief pass at the three sources), we have developed our objectives in line with the Tyler rationale. The point is that, given the notion of educational objectives and the necessity of stating them explicitly and consistently with a philosophy, it makes all the difference in the world what one's guiding philosophy is, since that consistency can be as much a sin as a virtue. The rationale offers little by way of a guide for curriculum making because it excludes so little. Popper's (1955) dictum holds not only for science but for all intellectual endeavor: "Science does not aim, primarily, at high probabilities. It aims at high informative content, well backed by experience. But a hypothesis may be very probable simply because it tells us nothing or very little. A high degree of probability is therefore not an indication of 'goodness'; it may be merely a symptom of low informative content."
Tyler's central hypothesis that a statement of objectives derives in some manner from a philosophy, while highly probable, tells us very little indeed.
2.7.2.5. Selection and Organization of Learning Experiences
Once the crucial first step of stating objectives is accomplished, the rationale proceeds relentlessly through the steps of selecting and organizing learning experiences as the means for achieving the ends and, finally, evaluating in terms of those ends. Typically, Tyler recognizes a crucial problem in connection with the concept of a learning experience but passes quickly over it: how can learning experiences be selected by a teacher or a curriculum maker when they are defined as the interaction between a student and his environment? By definition, the learning experience is in some part a function of the perceptions, interests, and previous experience of the student, and at least this part of the learning experience is not within the teacher's power to select. While Tyler is explicitly aware of this, he nevertheless maintains that the teacher can control the learning experience through the "manipulation of the environment in such a way as to set up stimulating situations, situations that will evoke the kind of behavior desired."
2.7.2.6. Evaluation
"The process of evaluation," according to Tyler, "is essentially the process of determining to what extent the educational objectives are actually being realized by the program of curriculum and instruction" (p. 69). In other words, the statement of objectives serves not only as the basis for the selection and organization of learning experiences but also as the standard against which the program is assessed. To Tyler, then, evaluation is a process by which one matches initial expectations, in the form of behavioral objectives, with outcomes. Such a conception has a certain commonsensical appeal, and, especially when fortified with models from industry and systems analysis, it seems like a supremely wise and practical way to appraise the success of a venture. Actually, curriculum evaluation as a kind of product control was set forth by Bobbitt as early as 1922, but product control, when applied to curriculum, presents certain difficulties. One of these difficulties lies in the nature of an aim or objective and whether it serves as the terminus for activity in the sense the Tyler rationale implies. In other words, is an objective an end point or a turning point? Dewey argued for the latter: "Ends arise and function within action. They are not, as current theories too often imply, things lying outside activity at which the latter is directed. They are not ends or termini of action at all. They are terminals of deliberation, and so turning points in activity" (1922, p. 223). If ends arise only within activity, it is not clear how one can state objectives before the activity (learning experience) begins. Dewey's position, then, has important consequences not just for Tyler's process of evaluation but for the rationale as a whole.
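Tyler's conception of evaluation, matching stated behavioral objectives against observed outcomes, can be reduced to a very small sketch. The following Python fragment is purely illustrative; the function name and the sample objectives and outcomes are hypothetical, not drawn from Tyler:

```python
# Illustrative sketch (hypothetical names and data): Tyler-style
# evaluation as the matching of stated objectives against outcomes.

def tyler_evaluate(objectives, outcomes):
    """Return the fraction of stated objectives observed among outcomes."""
    if not objectives:
        return 0.0
    met = [obj for obj in objectives if obj in outcomes]
    return len(met) / len(objectives)

objectives = {"reads critically", "writes clearly", "computes accurately"}
outcomes = {"reads critically", "computes accurately", "collaborates well"}

score = tyler_evaluate(objectives, outcomes)  # 2 of 3 objectives met
```

Note that the unanticipated outcome ("collaborates well") is simply invisible to this matching procedure, which is exactly the weakness that Dewey's objection, and the discussion that follows, point to.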
It would mean, for example, that the starting point for a model of curriculum and instruction is not the statement of objectives but the activity (learning experience), and whatever objectives do appear will arise within that activity as a way of adding a new dimension to it. Under these circumstances, the process of evaluation would not be seen as one of matching anticipated consequences with actual outcomes, but as one of describing and of applying criteria of excellence to the activity itself. This view would recognize Dewey's claim that "even the most important among all the consequences of an act is not necessarily its aim," and it would be consistent with Merton's important distinction between manifest and latent functions. The importance of description as a key element in the process of evaluation has also been emphasized by Cronbach (1961): "When evaluation is carried out in the service of course improvement, the chief aim is to ascertain what effects the course has. . . . This is not to inquire merely whether the course is effective or ineffective. Outcomes of instruction are multidimensional, and a satisfactory investigation will map out the effects of the course along these dimensions separately." The most significant dimensions of an educational activity, or of any activity, may be those that are completely unplanned and wholly unanticipated. An evaluation procedure that ignores this fact is plainly unsatisfactory.
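Cronbach's alternative, reporting effects along each outcome dimension separately rather than issuing a single effective/ineffective verdict, can be sketched in the same illustrative spirit. All names and data below are hypothetical:

```python
# Illustrative sketch (hypothetical data): Cronbach-style evaluation maps
# effects along outcome dimensions separately, keeping unplanned ones.

def map_outcomes(observed_effects, planned_dimensions):
    """Group observed effects by dimension; unplanned dimensions are
    retained in the report rather than discarded."""
    report = {dim: [] for dim in planned_dimensions}
    for dimension, effect in observed_effects:
        report.setdefault(dimension, []).append(effect)
    return report

effects = [
    ("reading", "improved comprehension"),
    ("computation", "no measurable change"),
    ("attitude", "increased interest in science"),  # unplanned dimension
]
report = map_outcomes(effects, planned_dimensions=["reading", "computation"])
```

Unlike objective-matching, the unplanned "attitude" dimension survives into the report, which is the point of describing outcomes rather than only matching them against prior expectations.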
2.7.3. Stufflebeam's CIPP Model (1971)
Stufflebeam's approach to evaluation is recognized as the CIPP model. The first letters of each type of evaluation (context, input, process, and product) have been used to form the acronym CIPP, by which Stufflebeam's evaluation model is best known. The CIPP model of evaluation concentrates on four stages of program evaluation:
Context of the program
Input of the program
Process within the program
Product of the program
This comprehensive model considers evaluation to be a continuing process (Ornstein and Hunkins, 2004).
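As a purely illustrative aid (the class and function names below are our own, not Stufflebeam's), the four stages and the model's continuing, cyclical character can be sketched as:

```python
# Illustrative sketch (hypothetical names): the four CIPP stages as a
# continuing cycle, following the summary of the model above.
from enum import Enum

class CIPPStage(Enum):
    CONTEXT = "define the environment, needs, and goals"
    INPUT = "assess resources and strategies"
    PROCESS = "monitor and adjust implementation"
    PRODUCT = "judge attainments and decide next steps"

def next_stage(stage):
    """Evaluation is a continuing process: PRODUCT cycles back to CONTEXT."""
    stages = list(CIPPStage)
    return stages[(stages.index(stage) + 1) % len(stages)]
```

The wrap-around in `next_stage` is simply one way of expressing that the model treats evaluation as ongoing rather than as a one-pass terminal judgment.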
Stufflebeam (1971) views evaluation as the process of delineating, obtaining, and providing useful information for judging decision alternatives. These processes are executed for four types of administrative decisions, each of which represents a type of evaluation.
2.7.3.1. Context Evaluation
Context evaluation involves studying the environment of the program. Its purpose is to define the relevant environment, portray the desired and actual conditions pertaining to that environment, focus on unmet needs and missed opportunities, and diagnose the reasons for unmet needs (Ornstein and Hunkins, 1998). Determining what needs are to be addressed by a program helps in defining objectives for the program (Worthern, Sanders and Fitzpatrick, 1997). "The results of a context evaluation are intended to provide a sound basis for either adjusting or establishing goals and priorities and identifying needed changes" (Stufflebeam and Shinkfeld, 1985, p. 172).
2.7.3.2. Input Evaluation
The second stage of the model, input evaluation, is designed to provide information and determine how to utilize resources to meet program goals. Input evaluators assess the school's capabilities to carry out the task of evaluation; they consider the strategies suggested for achieving program goals, and they identify the means by which a selected strategy will be implemented. Input evaluation examines specific components of the curriculum plan. It deals with the following questions: Are the objectives stated appropriately? Are the objectives congruent with the goals of the school? Is the content congruent with the goals and objectives of the program? Are the instructional strategies appropriate? Do other strategies exist that can also help meet the objectives? What is the basis for believing that using this content and these instructional strategies will enable educators to successfully attain their objectives? (Ornstein and Hunkins, 1998). An important component of this analysis is to identify any barriers or constraints in the client's environment that may influence or impede the operation of the program. In other words, the purpose of input evaluation is to help clients consider alternatives in terms of their particular needs and circumstances and to help develop a workable plan for them (Stufflebeam, 1980; Stufflebeam and Shinkfeld, 1985).
2.7.3.3. Process Evaluation
The focus of process evaluation is the implementation of a program or a strategy. The main purpose is to provide feedback about needed modifications if the implementation is inadequate. That is: Are program activities on schedule? Are they being implemented as planned? Are available resources being used efficiently? Do program participants accept and carry out their roles? (Stufflebeam, 1980; Stufflebeam and Shinkfeld, 1985). In addition, "process evaluation should provide a comparison of the actual implementation with the intended program, the costs of the implementation, and participants' judgments of the quality of the effort" (Stufflebeam and Shinkfeld, 1985, p. 175). Process evaluation includes three strategies: "The first is to detect or predict defects in the procedural design or its implementation stage, the second is to provide information for decisions, and the third is to maintain a record of procedures as they occur." This stage, which includes the three strategies, occurs during the implementation stage of curriculum development. It is a piloting process conducted to debug the program before districtwide implementation. From such evaluation, project decision makers obtain the information they need to anticipate and overcome procedural difficulties and to make decisions (Ornstein and Hunkins, 1988, p. 345).
Although the main purpose is to provide feedback on the extent of implementation, process evaluation can fulfill two other functions: 1) to provide information to external audiences who wish to learn about the program, and 2) to assist program staff, evaluators, and administrators in interpreting program outcomes (Gredler, 1996).
2.7.3.4. Product Evaluation
The primary function of product evaluation is "to measure, interpret, and judge the attainments of a program" (Stufflebeam and Shinkfeld, 1985, p. 176). Product evaluation, therefore, should determine the extent to which identified needs were met, as well as identify the broad effects of the program. The evaluation should document both intended and unintended effects, and negative as well as positive outcomes (Gredler, 1996). The primary use of product evaluation is to determine whether a program should be continued, repeated, and/or extended to other settings (Stufflebeam, 1980; Stufflebeam and Shinkfeld, 1985). However, it should also provide direction for modifying the program to better serve the needs of participants and to become more cost-effective. Finally, product evaluation is an essential component of an "accountability report" (Stufflebeam and Shinkfeld, 1985, p. 178).
At this stage, product evaluation helps evaluators to connect activities of the model to other stages of the whole change process (Ornstein and Hunkins, 1988).
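The decision-oriented use of product evaluation described above can be caricatured as a simple decision rule. The thresholds below are entirely hypothetical (the sources cite no numbers) and serve only to illustrate the continue/modify/discontinue logic:

```python
# Illustrative sketch (hypothetical thresholds): product evaluation's
# decision-oriented use, mapping the extent to which identified needs
# were met to a continuation decision.

def product_decision(needs_met_ratio):
    """Map the fraction of identified needs met (0.0-1.0) to a decision."""
    if needs_met_ratio >= 0.8:
        return "continue and consider extending to other settings"
    if needs_met_ratio >= 0.5:
        return "continue with modifications"
    return "modify substantially or discontinue"
```

A fuller implementation would also weigh the unintended and negative outcomes the text insists on documenting; a single ratio is deliberately too crude, which is part of the illustration.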
2.7.4. Stake's Model (1969)
Stake's responsive model is based on the assumption that the concerns of the stakeholders, for whom the evaluation is done, should be paramount in determining the evaluation issues. According to Stake (1969), "To emphasize evaluation issues that are important for each particular program, I recommend the responsive evaluation approach. It is an approach that trades off some measurement precision in order to increase the usefulness of the findings to persons in and around the program. . . . An educational evaluation is a responsive evaluation if it orients more directly to program activities than to program intents; responds to audience requirements for information; and if the different value perspectives present are referred to in reporting the success and failure of the program" (p. 14).
Stake recommends some steps for an interactive and