Research various instructional technology evaluation methodologies utilizing the following websites, the college library, and the Internet:
- Instructional Design Evaluation
- Hostos EdTech – Teaching with Technology – Teaching and Learning Frameworks
- Models for Evaluating Educational Technology
As you explore the different models, answer the following questions:
- What are the key features of the model?
- How is the model used to evaluate instructional technology?
- What are the strengths and weaknesses of the model?
Share your findings in the comments section below.
All of the methods have merit; however, I chose the ADDIE method because it provides a structured, systematic framework for instructional designers to create effective and efficient learning experiences. It ensures a comprehensive analysis of needs, clear learning objectives, well-designed content, proper implementation, and thorough evaluation, and it adapts to different learning environments and situations while maintaining a focus on learner engagement and desired outcomes.
What are the key features of the ADDIE model?
The ADDIE model, which stands for Analyze, Design, Develop, Implement, and Evaluate, is a systematic instructional design framework. It focuses on thoroughly analyzing learner needs, designing targeted instruction, developing learning materials, implementing the program, and evaluating its effectiveness, allowing for continuous improvement through feedback at each stage. Its key features include a structured approach, flexibility to adapt to different learning environments, and an emphasis on data-driven decision making through evaluation.
How is the ADDIE model used to evaluate instructional technology?
Needs Analysis:
During the “Analyze” phase, the ADDIE model identifies the specific needs and gaps in knowledge or skills that the instructional technology should address, considering learner characteristics and context.
Design Alignment:
The “Design” phase ensures the instructional technology aligns with the identified learning objectives and incorporates appropriate features to facilitate effective learning.
Development Evaluation:
Throughout the “Develop” phase, the instructional technology is evaluated for usability, accessibility, and alignment with the design specifications.
Implementation Monitoring:
In the “Implement” phase, data is collected to monitor learner engagement and progress with the technology, allowing for adjustments as needed.
Summative Evaluation:
The “Evaluate” phase involves a comprehensive assessment of the instructional technology’s effectiveness in achieving the desired learning outcomes.
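To make the phase-by-phase checks above concrete, here is a small, purely illustrative Python sketch that organizes them as a checklist; the question wording simply restates the points above, and the data structure itself is my own illustration, not part of the ADDIE model.

```python
# Minimal sketch: organizing the phase-by-phase checks described above as a
# simple checklist. The structure and wording are illustrative, not part of ADDIE.

ADDIE_TECH_CHECKS = {
    "Analyze":   "What knowledge or skill gaps should the technology address, and for whom?",
    "Design":    "Do the tool's features align with the stated learning objectives?",
    "Develop":   "Is the tool usable, accessible, and faithful to the design specifications?",
    "Implement": "Are engagement and progress data being collected so adjustments can be made?",
    "Evaluate":  "Did the technology achieve the desired learning outcomes overall?",
}

def review(answers: dict) -> list:
    """Return the phases whose check was not satisfied."""
    return [phase for phase, ok in answers.items() if not ok]

if __name__ == "__main__":
    # Example pilot review: two phases still need another pass.
    pilot = {"Analyze": True, "Design": True, "Develop": False,
             "Implement": True, "Evaluate": False}
    print("Phases needing another pass:", review(pilot))
```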
What are the strengths and weaknesses of the ADDIE model?
Strengths of the ADDIE model:
Structured Approach:
Provides a clear roadmap for instructional design, ensuring all critical aspects are considered.
Flexibility:
It can be adapted to various learning contexts and technologies, allowing for customization.
Weaknesses of the ADDIE model:
Linear Process:
It can be perceived as too rigid, potentially hindering rapid iteration or adaptation to changing needs.
Time-Intensive:
Each phase requires careful planning and execution, which may not be feasible for projects with tight deadlines.
Instructional Technology: Evaluation Models
Based on my research and knowing several evaluation models, I recommend the ADDIE Model for Instructional Design Evaluation as an effective framework for evaluating instructional technology.
The ADDIE model stands out for its systematic and iterative approach to instructional design and evaluation. It comprises five key phases: Analysis, Design, Development, Implementation, and Evaluation. Each phase builds on the previous one, ensuring a comprehensive process for creating and assessing learning experiences. The Evaluation phase, in particular, emphasizes both formative and summative assessments, which are critical for measuring the effectiveness of instructional technology.
How the ADDIE Model Evaluates Instructional Technology
In the Analysis phase, the model evaluates learner needs, learning goals, and the context in which technology will be applied. This helps identify whether a particular technology aligns with the instructional objectives. The Design phase focuses on creating technology-integrated activities, ensuring that the chosen tools effectively support the curriculum. During the Development and Implementation phases, the technology is tested and deployed, with opportunities for real-time feedback and adjustments. Finally, the Evaluation phase assesses learner outcomes and the overall success of the technology, providing data to refine the instructional process.
Strengths and Weaknesses
One of the key strengths of the ADDIE model is its flexibility and adaptability. It allows for continuous feedback and iterative improvements, which are essential when integrating new technologies. Additionally, its structured approach ensures that every stage of the process is thoroughly evaluated, reducing the risk of misalignment between the technology and the instructional goals.
However, the model has its limitations. It can be time-intensive, as each phase requires careful planning and execution. For educators working under tight deadlines, this might pose a challenge. Additionally, the linear structure of the ADDIE model may not be ideal for rapidly evolving technologies, as it can delay implementation.
Final Recommendation
The ADDIE model is a strong choice for evaluating instructional technology because of its structured and iterative nature. It ensures that technology is not only aligned with educational goals but also effectively enhances learning outcomes. While it requires significant time and resources, the long-term benefits of thorough evaluation make it a worthwhile investment for educators and institutions.
As I explored the different models, I decided to concentrate on the Triple E Framework, developed by Dr. Liz Kolb. The Triple E evaluates technology’s role in learning through three principles: Engage, Enhance, and Extend. These principles evaluate whether technology motivates and actively involves students in the learning process, assess how technology aids understanding of concepts in ways traditional methods cannot, and measure how technology connects classroom learning to real-world applications.
This framework uses a teacher-friendly rubric to score technology use in lesson plans. It examines how well technology supports active learning, enhances instructional strategies, and extends learning experiences beyond the classroom. Teachers can reflect on and adapt their methods for better student outcomes.
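As a purely illustrative aside, the rubric-tallying idea can be sketched in a few lines of Python. The question wording, the 0-2 scale, and the category totals below are placeholders for discussion, not Dr. Kolb’s published rubric.

```python
# A purely illustrative sketch of tallying a Triple E-style rubric score.
# Question wording, the 0-2 scale, and the totals are placeholders only,
# not the official Triple E rubric.

RUBRIC = {
    "Engage": [
        "Does the tech help students focus on the learning task?",
        "Does the tech motivate active, hands-on participation?",
    ],
    "Enhance": [
        "Does the tech aid understanding in ways traditional tools cannot?",
        "Does the tech add scaffolds or supports for the concept?",
    ],
    "Extend": [
        "Does the tech connect the lesson to real-world experiences?",
        "Does the tech let learning continue beyond the classroom?",
    ],
}

def tally(scores: dict) -> dict:
    """Sum the 0-2 score given to each question, per category."""
    return {category: sum(scores[category]) for category in RUBRIC}

if __name__ == "__main__":
    # Example lesson: strong engagement, weaker extension.
    lesson = {"Engage": [2, 2], "Enhance": [1, 2], "Extend": [0, 1]}
    for category, total in tally(lesson).items():
        print(f"{category}: {total} / {2 * len(RUBRIC[category])}")
```

A lesson scored this way would show at a glance which of the three E’s needs more attention.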
The Triple E Framework’s strengths are:
The Triple E Framework’s weaknesses are:
The framework ensures technology supports meaningful, real-world learning while keeping lessons effective and relevant.
Kirkpatrick’s Model of Evaluation is a widely used method for evaluating instructional programs.
1: What are the key features of the model? It consists of four levels of evaluation. Reaction: focuses on how students feel about the learning experience. Learning: focuses on how much students learn and increase their knowledge. Behavior: focuses on the student’s behavior. Results: focuses on evaluating student performance.
2: How is the model used to evaluate instructional technology?
Kirkpatrick’s Model evaluates instructional technology by assessing its effects across the four levels: Reaction, Learning, Behavior, and Results.
3: What are the strengths and weaknesses of the model?
Its strengths are that it provides a clear framework for evaluation, it is widely used in many industries, and it is adaptable enough to be applied to different learning environments.
I focused on the TPACK model. The key features of this model are content knowledge, pedagogical knowledge, and technological knowledge. This model helps teachers weave technology into instruction. In regard to content knowledge, this model allows educators to first focus on the standards that students need to achieve. In regard to pedagogical knowledge, this model also helps teachers to focus on teaching methods in order for students to learn best. In regard to technological knowledge, this model assists teachers in selecting the proper technology tool to enhance their students’ learning.
To evaluate instructional technology, this model prompts the teacher to ask guiding questions to ensure the most effective path of learning for students. Some guiding questions are: 1. Does the lesson address the standards? 2. How can students engage with the content? 3. Which technology tool will enhance learning, and which will hinder it?
A strength of this model is that it helps educators focus on providing meaningful, effective ways to integrate technology into a lesson. A weakness is that it centers on integrating technology into every lesson, when not every lesson needs technology all of the time to be effective.
Bloom’s Taxonomy is a valuable framework for evaluating instructional technology, enabling educators to align technology with learning objectives, design assessments, and promote higher-order thinking. However, its hierarchical structure and focus on cognitive skills have limitations in diverse learning contexts.
Key Features of the Model:
How the Model is Used to Evaluate Instructional Technology:
Strengths of the Model:
Weaknesses of the Model:
Bloom’s Taxonomy: This model is all about understanding how students learn at different levels, from basic recall (Remember) to creative tasks (Create).
Key Features: It consists of six levels: Remember, Understand, Apply, Analyze, Evaluate, and Create, which guide educators in developing learning objectives.
Usage for Evaluating Instructional Technology: Educators can use Bloom’s Taxonomy to assess how well a tech tool supports various learning objectives. For instance, they can evaluate whether a tool helps students remember information or encourages them to analyze and create new content.
Strengths and Weaknesses: The strength of this model is its clarity; it provides a straightforward way to align tech with learning goals. However, a weakness is that it can be a bit rigid, as not all tech tools fit neatly into these categories, which might limit creativity in tech use.
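To make the verb-to-level idea concrete, here is a small, purely illustrative Python sketch that flags which levels a tool’s stated objectives seem to target; the abbreviated verb lists and the first-word matching rule are my own simplifications, not an official Bloom’s mapping.

```python
# Illustrative sketch: flag which Bloom's Taxonomy levels a tech tool's learning
# objectives appear to target, based on the verb that starts each objective.
# The verb lists are abbreviated examples, not an exhaustive or official mapping.

BLOOM_VERBS = {
    "Remember":   {"list", "recall", "define", "identify"},
    "Understand": {"summarize", "explain", "classify", "describe"},
    "Apply":      {"use", "demonstrate", "solve", "implement"},
    "Analyze":    {"compare", "contrast", "organize", "differentiate"},
    "Evaluate":   {"judge", "critique", "justify", "defend"},
    "Create":     {"design", "compose", "construct", "produce"},
}

def levels_targeted(objectives: list) -> set:
    """Return the Bloom levels whose verbs start any of the given objectives."""
    hits = set()
    for objective in objectives:
        first_word = objective.lower().split()[0]
        for level, verbs in BLOOM_VERBS.items():
            if first_word in verbs:
                hits.add(level)
    return hits

if __name__ == "__main__":
    tool_objectives = [
        "Recall the steps of the water cycle",
        "Design an experiment to test evaporation rates",
    ]
    print(sorted(levels_targeted(tool_objectives)))  # ['Create', 'Remember']
```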
One of the models that evaluates instructional technology piqued my interest. RAIT is a useful tool that assesses the validity of technology; this assessment helps educators know how effective each tool is, or ceases to be. The research shows that although many technologies proved to be effective, not all were efficient. Teachers still required more efficient tools for the problem-solving learning happening in schools.
The instructional technology evaluation method that I chose is the Triple E Framework:
It focuses on how the technology keeps students interested and motivated, and it encourages active participation and interaction through tech-based activities. It also examines how technology enriches learning beyond traditional methods and looks at whether it expands learning opportunities, gives access to new resources, or improves understanding. As far as evidence of learning, it allows us to assess whether technology helps achieve measurable learning outcomes and whether there is evidence of improved student performance, deeper understanding, or enhanced skills.
The Triple E Framework evaluates technology holistically, covering engagement, enhancement, and evidence of learning, for a thorough understanding of its impact. It offers specific criteria for assessing instructional technology, aiding educators in evaluating effectiveness and making informed choices. By emphasizing engagement and enhancement, it also prompts educators to use technology meaningfully to support learning goals and meet student needs.
Some of its weaknesses could be subjectivity: assessing engagement and enhancement can vary based on personal viewpoints. Measuring learning outcomes with technology can also be intricate, requiring precise alignment with learning goals. Finally, there are time constraints, because implementing the Triple E Framework thoroughly requires substantial time and effort to gather and analyze engagement and learning data.
I explored Bloom’s Taxonomy, which is a framework for categorizing educational outcomes based on their complexity. It allows educators to create effective objectives and activities promoting higher-order thinking.
Bloom’s Taxonomy was introduced in 1956 by Benjamin Bloom and his collaborators, with the aim of categorizing educational goals. In this framework there are six major categories: Knowledge, Comprehension, Application, Analysis, Synthesis, and Evaluation. In the revised Bloom’s Taxonomy (2001), the categories were altered to reflect action words/cognitive processes that people use when learning. The new categories are: Remember, Understand, Apply, Analyze, Evaluate, and Create.
Bloom’s Taxonomy can be used to evaluate instructional technology according to the level of higher-order thinking and cognitive engagement. For example, Evaluate and Create would be considered higher-order thinking in comparison to Remember and Understand. Depending on the educational outcomes and the learning activities, different levels of cognitive engagement would be met. I imagine that instructional technology that pushes for greater levels of cognitive engagement would be evaluated higher according to Bloom’s Taxonomy.
Some strengths of Bloom’s Taxonomy include the establishment of clear objectives or learning goals, which are helpful for both teachers and students. Further, these goals help the instructor be able to plan instruction that meets those objectives, and design assessments that properly evaluate if a student has met those objectives.
Some weaknesses of Bloom’s Taxonomy include the idea that learning is a linear or hierarchical process, a concept many people disagree with.
Source: https://cft.vanderbilt.edu/guides-sub-pages/blooms-taxonomy/
I also explored TPACK, or the Technological Pedagogical Content Knowledge framework. This framework builds on Lee Shulman’s theory that teachers should be equally well-versed in content knowledge, pedagogical principles, and technology advances.
This framework explains the convergence of various concepts, for example Pedagogical Content Knowledge (PCK), Technological Content Knowledge (TCK), and Technological Pedagogical Knowledge (TPK).
Together, TPACK involves the combination of Content Knowledge (CK), Pedagogical Knowledge (PK) and Technological Knowledge (TK). TPACK can be used to evaluate instructional technology by looking at how technology use is helping students reach learning goals and increase engagement.
Some strengths of this approach are that it models ways in which technology can be incorporated into teaching, helping to enrich the learning experience, provide multiple means of representation of content, and give students ways to show what they understand.
Some weaknesses of this approach are in regard to time and training. Some content teachers may be more well-versed in technology than others, making it easier for them to implement technology in the classroom. Also, accessibility is another weakness. Not all schools have equitable access to technology where students have their own devices and can use digital tools like Nearpod, Quizizz, etc.
Source: https://commons.hostos.cuny.edu/edtech/faculty/teaching-with-technology/teaching-learning-frameworks/tpack/
The model I looked into was the Triple E Framework, which covers engagement in learning goals, enhancement of learning goals, and extension of learning goals. Each component is different in its own way.
Strength: It can measure how well the technology tools teachers are using are helping their students learn. It focuses squarely on the learning goal and the student, so the teacher can see where to support their students and improve.
Weakness: Not all students may know how to use the technology, the teacher can lose focus on what else is important, students may not understand the goal, and the process can be time-consuming.
The model that I looked into was the SAMR model, which is an acronym for Substitution, Augmentation, Modification, and Redefinition. It is a framework for integrating technology into the curriculum that focuses on how technology enhances and transforms the way students learn.
A strength is how the model is broken down and easy to follow. Educators can fit instructional technology into this framework, and it can be helpful for creating lesson plans.
A weakness is that it doesn’t really evaluate whether the instructional technology is helpful or meaningful; it is more of a framework for creating and utilizing the technology.
The ADDIE model, standing for Analysis, Design, Development, Implementation, and Evaluation, offers a structured approach to assessing instructional technology and design effectiveness. Although primarily an instructional design framework, its evaluation phase is crucial for understanding the impact of technology on learning.
Key Features:
Strengths:
Weaknesses:
Overall, the ADDIE evaluation model offers a valuable tool for assessing instructional technology and its impact on learning. However, its effectiveness depends on careful consideration of its strengths and weaknesses, and adapting it to suit the specific context and constraints of the evaluation project.
The model I will evaluate is TPACK, which stands for Technological Pedagogical Content Knowledge. The key features of this model are built on Lee Shulman’s theory of subject matter knowledge, which should be integrated with pedagogical and technological knowledge. TPACK proposes that educators should be equally versatile in pedagogical principles and technological advances as they are in subject matter, and that content knowledge, pedagogical knowledge, and technological knowledge should be balanced among all three.

This model is used to evaluate instructional technology by having teachers create technology-infused lessons that enhance topic learning and foster student engagement. The TPACK framework also motivates educators to evaluate their own work and keep refining their teaching techniques.

The TPACK framework’s weakness is that teachers frequently lack proficiency in particular domains, such as technology or knowledge of the subject matter. When it comes to lesson planning, teachers occasionally rely on the internet, which suggests a greater level of pedagogical competence than technological or subject-matter expertise. TPACK allows you to understand how technology could strengthen teaching strategies and establish connections with subject matter. It also gives you the chance to consider whether technology is truly advancing education in meaningful ways. TPACK can serve as a framework for technology integration and decision-making whether you are a principal, instructional coach, tech administrator, or classroom teacher. For instance, students may use their iPads simply to enter responses to questions in Google Docs; those exercises are a useful starting point, but the question becomes how students can use their iPads to deepen the content’s meaning and how you can apply more creative pedagogy.
Out of all the instructional models shown, the two I chose to research and analyze were the SAMR model and the Backward Design model.

The SAMR model consists of four key features that direct educators in integrating technology into instruction. Substitution means technology acts as a direct substitute for traditional tools, with no functional change. Augmentation means technology provides functional improvements, improving the learning experience while still serving as a substitute. Modification involves a significant redesign of tasks that changes how students engage with content. Redefinition allows for the creation of new tasks that were previously impossible, fundamentally changing the learning process. The model is used to evaluate instructional technology by assessing the level of integration within teaching practices, helping educators decide whether their technology use merely replaces traditional methods or intentionally improves learning experiences. Among its strengths, the SAMR model supports innovation by inspiring educators to seek creative uses of technology, provides a clear framework for evaluating technology integration, and directs professional development by encouraging reflection on teaching practices. However, the model also has weaknesses, including the potential to misrepresent the challenges involved in technology integration, a tendency for educators to focus only on reaching higher levels without genuinely improving learning outcomes, and variability in effectiveness depending on the specific educational context, which makes universal application challenging.

The Backward Design model consists of three main stages. Identify Desired Results is where educators define clear learning objectives and outcomes to direct instruction. Determine Acceptable Evidence involves deciding how student learning will be assessed, ensuring assessments align with the desired outcomes. Plan Learning Experiences and Instruction is where lessons and activities are designed to support achieving those outcomes successfully. This model is used to evaluate instructional technology by ensuring that the technology matches the outlined learning goals, supports the planned assessments, and improves the instructional design. Educators assess whether the technology facilitates the desired learning outcomes and integrates smoothly into the instructional activities. The strengths of the Backward Design model include its goal-oriented approach, which ensures that all instructional elements align with educational objectives, providing clarity and consistency in teaching and assessment practices. However, its weaknesses include a certain rigidity, which makes it challenging to adapt to unexpected changes or new opportunities during instruction, as well as the potentially labor-intensive nature of the initial planning phase, which may deter some educators from fully implementing the model.
In looking through the links that were provided, I decided to further research Bloom’s Taxonomy. The key features are the steps of Bloom’s Taxonomy, which breaks down the retention process. It provides six steps that help you remember new content, and when you remember new content you can then analyze and evaluate it to understand it further.
Its strength is that you retain new information. Its weakness is that it views learning as a hierarchical process and assumes that each step can stand alone.
Using the sites provided, I focused on TPACK.
TPACK stands for Technological Pedagogical Content Knowledge. The key feature of this model is the “theory that subject matter knowledge should be integrated into pedagogical content knowledge. TPACK proposes that educators should be equally versatile in pedagogical principles and technological advances as they are in subject matter knowledge.” Basically, the key feature is that there should be balance among the three!
This model is used to evaluate instructional technology by taking a holistic approach; it allows teachers to explore whether the three areas are in balance.
A weakness of this model, and one thing I would change, is the lack of more concrete guidance, such as a checklist that teachers or curriculum designers could use to check the effectiveness of their plans or lessons.
ADDIE–
It’s an acronym that stands for Analysis, Design, Development, Implementation, and Evaluation. It is a comprehensive model of design that also serves as a model of evaluation. Its five phases serve as key features that guide the process of creating effective learning experiences. Analysis includes identifying the problem, goals, and objectives. Design specifies the learning objectives and desired outcomes and develops assessment strategies that are specific to the needs identified. Development creates and assembles the content, learning materials, instructional activities, and needed resources. Implementation delivers the instructional materials to the learners while providing training and support. Lastly, Evaluation gathers feedback from learners and stakeholders; it is used to analyze and determine whether the learning goals were met.
Its strengths include that it is student-centered, it focuses on designing appropriate materials based on needs, it works from the feedback gathered in evaluations and supports continuous revisions, and it considers all aspects of learning.
Weaknesses-
It can be too detailed and time-consuming, and it requires expertise, time, and resources that are seldom available.
Kirkpatrick’s Model of Evaluation –
A widely used framework for assessing the effectiveness of instructional design models. The evaluation consists of four levels: Reaction, Learning, Behavior, and Results. Reaction measures participants’ reactions and assesses their overall feelings about the training. Learning evaluates the extent to which participants have retained the knowledge and skills, measured through a variety of assessments. Behavior assesses the extent to which participants use what they have learned on the job or in their everyday activities; it requires observation and assessment over time to verify whether the training led to any improvements. Lastly, Results measures the outcomes of the program, whether improved performance, increased productivity, cost savings, or other benefits.
Its strength is that it’s a comprehensive framework. It considers all aspects, from immediate reactions to long-term results, and it encourages ongoing assessment and continuous improvement, focusing on the importance of actual behavior change and real-world outcomes. It can also be adapted to different types of training programs and instructional technologies.
Weaknesses-
Its comprehensive evaluation at all four levels can be resource-intensive, requiring significant time, effort, and money. Its successful implementation requires a lot of careful planning, resources, and expertise.
TPACK, Technological Pedagogical Content Knowledge–
Is a model that emphasizes integrating technology into teaching and learning. Its three primary forms of knowledge are Content Knowledge, understanding the subject that needs to be taught; Pedagogical Knowledge, understanding the methods and practices of teaching and learning; and Technological Knowledge, understanding and effectively using technology tools and resources. TPACK assesses whether the technology supports the learning of specific content effectively. It determines whether the technology helps learners understand and engage with the material and evaluates whether the technology aligns with proven teaching practices and instructional strategies. Its strengths include the encouragement of thoughtful and purposeful use of technology to enhance learning. TPACK promotes the development of the skills needed to integrate technology in a meaningful way, and it provides a framework for professional development and ongoing learning.
Weaknesses-
It is resource-intensive. Having access to technology, professional development, and ongoing support is essential. Its effective implementation may require significant time, effort, and resources and it requires a deep understanding of all three areas and how they work together.
There are various instructional technology evaluation models to consider when you need to confirm that the instruction you designed was effective and reached your target goal. Target goals are expectations you set.
In researching various instructional technology evaluation methodologies using the websites provided, we found that the main focus of all the methodologies is that evaluation is at the forefront of design. Another main focus is the outcome, verifying that the goal of designing inclusive and accessible learning experiences was met.
Some key features for the Evaluation Models
Why we evaluate
Are our instructional goals aligned with the requirements of the instructional program? Are our lesson plans, instructional materials, media, and assessments, aligned with learning needs? Do we need to make any changes to our design to improve the effectiveness and overall satisfaction with the instruction? Does the implementation provide effective instruction and carry out the intended lesson plan and instructional objectives? Have the learners obtained the knowledge and skills that are needed? Are our learners able to transfer their learning into the desired contextual setting?
In terms of Behavior, how do learners apply what they learned to change their behavior? “Attitudes and behavior are important indicators towards the acceptance and success of an instructional program” (Dick, Carey, and Carey, 2015).
Learning – what knowledge is learned, skills developed, or attitudes changed? “When learners master the content of the training or exhibit proper learning through assessment, one can assume the effectiveness of the program and identify what did not work if the learning outcomes show adverse results.”
Reactions – how satisfied are the learners with the instruction? On attitudes and reactions to learning, Morrison et al. (2019) “explained there are two uses for attitudinal evaluations: evaluating the instruction and evaluating outcomes within the learning.”
Formative evaluation is conducted during the design process to provide feedback that informs the design, through a cycle of formative evaluations (expert review, design review, field trial, and one-on-one). Summative evaluation is conducted at the end of the design process to determine if the instructional product achieves the intended outcomes, and it often centers on measuring achievement of objectives; its main question is “did it solve the problem?” Confirmative evaluation is conducted over time to determine the lasting effects of instruction and the strengths and weaknesses of instruction and instructional materials. It focuses on the transfer of knowledge or skill into a long-term context, “to determine if instruction is effective and if it met the organization’s defined instructional needs.” Moseley and Solomon (1997) described confirmative evaluation as maintaining focus on what is important to your stakeholders and ensuring the expectations for learning continue to be met.
The model provides guidelines for how the steps in the evaluation process interact with these different kinds of evaluations.
THE HOW
Formative evaluation is an iterative process that requires the involvement of instructional designers, subject matter experts, learners, and instructors. In all stages of evaluation, it is important to select learners who closely match the characteristics of the target learner population.
There are evaluations for all assessments. Choosing the right one for you requires an understanding of what goes into creating and designing the educational material.
The model that I chose is the Triple E Framework, which consists of engagement, enhancement, and extension. The model is meant to motivate students to learn and allow them to focus on the lesson through engagement. It enhances students’ learning by allowing them to actively participate, and it extends learning by providing real-life examples to students.
Strengths: the Triple E Framework has a positive impact on student achievement and learning outcomes.
Weaknesses:
The Triple E framework doesn’t work for everyone.
I researched the TPACK model, which analyzes the overlap of content, technology, and pedagogy. This model places an emphasis on educators being knowledgeable in technology and up to date on its advances, as well as in their content areas, and on blending it all together.
This model definitely values instructional technology, since educators should be familiar and comfortable teaching with technology and be able to teach the basics. The TPACK framework also emphasizes that the skills taught should carry over into everyday activities, and it places importance on incorporating content with technology to make the content understood, or using content to understand technology. Examples of this can be teaching hardware, or making the connection of how X-rays helped in science.
The strengths of this model include the strong emphasis on incorporating technology to learn academic content and everyday skills. It also provides the opportunity for students to learn about technology around them.
The weakness would be that sometimes all the overlapping can make it a little hard to understand and difficult to target all components.
One model I researched was ADDIE. The ADDIE evaluation model is a systematic approach used to assess and evaluate instructional technology and the effectiveness of the instructional design. ADDIE stands for Analysis, Design, Development, Implementation, and Evaluation. While ADDIE is primarily used as an instructional design model, its evaluation phase plays a crucial role in assessing the impact of the use of instructional technology.
Key features of this model:
Strengths of ADDIE evaluation model:
Weaknesses of ADDIE evaluation model:
The CIPP Model of Evaluation is used to achieve accountability in education, analyzing data to drive instruction and decisions. There are four basic concepts in this model: Context, Input, Process, and Product. It allows learners to be actively involved in the process and identify areas of development through a hands-on learning experience.
Context: Set goals and develop questions surrounding how to improve and meet that goal.
Input: Plan improvements and efforts. Think about how to use resources and identify strategies and procedures to meet the desired end goal.
Process: Take action, identify and predict outcomes of its design. Record and track data as it occurs.
Product: This is the outcome, where the learner measures and interprets the data.
This can be used for formative and summative assessments.
Bloom’s Taxonomy is a model that classifies learning objectives into different levels of cognitive complexity. The key features of Bloom’s Taxonomy include:
When it comes to evaluating instructional technology, Bloom’s Taxonomy can be used as a guide to assess the level of cognitive engagement facilitated by the technology. By considering the verbs associated with each level, educators can determine whether the technology encourages lower-level cognitive processes (such as Remembering and Understanding) or higher-level thinking skills (such as Analyzing and Creating). This evaluation can help educators select appropriate instructional technology that aligns with their desired learning objectives and promotes the desired level of cognitive engagement.
A strength of Bloom’s Taxonomy:
A weakness of Bloom’s Taxonomy:
Kirkpatrick’s Model of Evaluation allows instructional designers to look at the success of technology training and implementation. This would be a helpful tool when introducing a new technology component and training instructors or even students on a new tool.
The model is broken down into four parts: Results, Behavior, Learning, and Reaction.
Reaction is most on par with engagement. Students and the instructor need to react positively to the technology, and because of this positive reaction they are more likely to carry on.
Learning: Are the students mastering the content being taught? This is an ongoing stage that must be revisited. If students are not learning or meeting the goal, then perhaps the technology is not a successful integration.
Behavior: The model stresses the importance of attitudes and behaviors towards the material/lesson. The more positively the lesson or technology component is received, the more successful the students will be. Here it is important to consider a tool like a checklist or rubric to collect data: are the people trained actually using and integrating the technology?
Results: This is a key component. Here is how the model developers explain success, “suggested evaluators measure the efficiency of learning by comparing the skills mastered with the time taken; cost of program development; continuing expenses; reactions towards the program; and long-term benefits of the program.” This description acknowledges the complexity of educational programs and the constraints of time and budget.
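As a rough illustration of the kind of Results-level bookkeeping that quote describes, here is a short Python sketch; the metrics, field names, and numbers are invented for the example, since Kirkpatrick’s model does not prescribe specific formulas.

```python
# Hypothetical sketch of Results-level bookkeeping: comparing skills mastered
# against time and cost. The metrics and numbers are invented for illustration;
# Kirkpatrick's model does not prescribe these formulas.

from dataclasses import dataclass

@dataclass
class TrainingResults:
    skills_mastered: int      # skills demonstrated after training
    training_hours: float     # total instructional time
    development_cost: float   # one-time program development cost ($)
    ongoing_cost: float       # continuing expenses per run ($)
    learners: int             # number of participants

    def skills_per_hour(self) -> float:
        return self.skills_mastered / self.training_hours

    def cost_per_learner(self) -> float:
        return (self.development_cost + self.ongoing_cost) / self.learners

if __name__ == "__main__":
    rollout = TrainingResults(skills_mastered=12, training_hours=8.0,
                              development_cost=5000.0, ongoing_cost=750.0,
                              learners=40)
    print(f"Skills mastered per hour: {rollout.skills_per_hour():.1f}")
    print(f"Cost per learner: ${rollout.cost_per_learner():.2f}")
```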
As indicated above, this model is most helpful outside of a single classroom, for example when looking at a school- or district-wide adoption of a new technology piece.
The model is strong in that it gives clear focus to what a training/integration model should involve. It takes into account multiple viewpoints, such as the trainees’ attitude towards the material and whether or not the work is actually being used afterwards. It also suggests tools like rubrics and checklists, which help keep the data objective.
Weaknesses:
I feel you need more formal training in the method to use it effectively, so it is not something you can just read about and implement. Also, some steps ask for subjective opinions, like whether or not the trainees have a positive attitude, and people might tell you what you want to hear.
SAMR is a model that can be used within the classroom to design and enhance online activities; it adds value as it transforms lessons. Substitution (S) and Augmentation (A) are the parts that enhance the lesson, while Modification (M) and Redefinition (R) transform the lesson. Small things like having students type their work instead of handwriting it, upload digital PDFs instead of handouts, or participate in a quiz online are activities that fall under the SAMR model. SAMR is great for helping teachers reflect on their practices and their uses of technology; they can reflect on how technology can be used to achieve a specific goal or outcome. When used as a planning tool, teachers can design, develop, and carry out lessons that involve digital tools that enhance lessons and student learning.
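Purely as an illustration, the examples above (plus two invented ones for the transformation levels) can be tagged with SAMR levels in a small Python sketch; the level assignments reflect my own reading of those activities, not an official rubric.

```python
# Illustrative sketch only: tagging sample activities with a SAMR level.
# The mapping is an everyday reading of these examples, not an official scheme,
# and the last two activities are invented to show the transformation levels.

SAMR_EXAMPLES = {
    "Students type their work instead of handwriting it": "Substitution",
    "Students read digital PDFs instead of paper handouts": "Substitution",
    "Students take a quiz online with instant feedback": "Augmentation",
    "Students co-author and comment on a shared document": "Modification",
    "Students publish a multimedia project for a global audience": "Redefinition",
}

# Substitution and Augmentation enhance the lesson; the other levels transform it.
ENHANCEMENT_LEVELS = {"Substitution", "Augmentation"}

if __name__ == "__main__":
    for activity, level in SAMR_EXAMPLES.items():
        kind = "enhancement" if level in ENHANCEMENT_LEVELS else "transformation"
        print(f"{level:<13} ({kind}): {activity}")
```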
Backward planning is another model that can be used. It is great because you know the end product and have to look for ways to achieve it, and it gives you a chance to plan differentiated, accessible, and equitable activities. There are three stages; I have shared the link below with a summary. It is a great tool and works well for me as a computer science teacher. I do most of my lessons in stations due to limitations of materials, and the stages allow me to backward plan and meet the needs of my students in grades 3-8.
https://citycollegeoer.commons.gc.cuny.edu/backward-design-p1/#:~:text=Wiggins%20and%20McTighe%20break%20down,based%20on%20the%20previous%20two
Stage 1: Identify Desired Results
Wiggins and McTighe provide a list of questions which instructors should ask themselves in this stage of design:
Stage 2: Determine Acceptable Evidence
In stage two, you will ask yourself the following questions:
This is the stage in which you will decide what type of assignments will best suit your needs as they align with the course goals. Some assignment types and considerations include:
This is also the stage in which you will decide which assessments are formative and which are summative.
Stage 3: Plan Learning Experience and Instruction
At this point, you are ready to plan the rest of the course, including reading assignments (OER materials) and daily activities. Wiggins and McTighe suggest asking the following questions:
I appreciate how you made this model so much easier to understand and I can see the value in using it for reflection or even in a more formal setting.
The key features of Kirkpatrick’s Model of Evaluation are Results, Behavior, Learning, and Reaction. This model determines whether the targeted outcomes are achieved and whether learning has taken place. In addition, the behavior and attitudes of students are also taken into account; the model determines whether students have changed their behaviors and are applying what they have learned. It also gathers frequent feedback to see if the learners are satisfied. I believe the strength of this model is analyzing the students’ behaviors and attitudes. This way you can gauge how well the student enjoyed the learning process. In addition, if students are satisfied they will not drop out and there is less learner frustration; this positive reaction can lead to greater understanding because they are enjoying themselves. A weakness could be that there are a lot of different assessments taking place, and all this data can be overwhelming and hard to keep up with while you are also teaching the material. Another weakness could be that not all students will be satisfied. How can you achieve 100% satisfaction?
The key features of Bloom’s Taxonomy are its different levels of assessment. It starts with basic understanding and increases in complexity up to creating, using action verbs that correspond to the different levels of complexity. Teachers can use these different levels of assessment to see how effective the instruction has been. A strength of the model is the common language that can be used by all teachers. The verbs also make it easy for teachers to create different activities for each level, and they can lead to differentiation of instruction so learners enter at their own level. A weakness of the model is that its many levels of complexity can be overwhelming for teachers. Will all learners reach the highest level of complexity? There is differentiation, but is there a high level of expectation for all our learners?
I do agree that Bloom’s Taxonomy can be overwhelming for teachers because of its many levels of complexity. Also, Bloom’s Taxonomy does not account for the learner and the differences that each learner brings to the table.
After reviewing some of the models, here are some reflections:
Kirkpatrick’s Model of Evaluation
The key features of the model are that it takes instructional components and the outcomes from those components to determine whether the instruction met the desired goal. It focuses on results, behavior, learning, and reaction.
The model is used to evaluate the instructional technology by reviewing and assessing the targeted outcomes, whether skills are developed, and how satisfied the students are.
The strengths of the model are that it focuses on the students and how they engage and enjoy learning the targeted lesson.
The weakness of the model is that not all students are going to feel the same about a lesson. There are students with different needs and skill levels in the class, so using students as indicators might make this model harder to apply.
TPACK
The model evaluates the instructional technology by using content knowledge, pedagogical knowledge, and technology knowledge to determine its effectiveness.
The strength of the model is that it blends all aspects of education together. It gives the teacher a way to meet all of the students’ needs in a lesson. By employing this model, the lesson should be very effective.
The weakness of the model is that “TPACK proposes that educators should be equally versatile in pedagogical principles and technological advances as they are in subject matter knowledge.” Many teachers may not be as well versed in one or all three of the focus areas, and new teachers need time to learn and grow, so this would not work for them.
RAIT
The key feature of the model is that it allows schools to quickly identify, evaluate and recommend educational technology.
The model is used to evaluate the instructional technology through a semester-long evaluation that collects data on strengths and weaknesses. Then there is a protocol for reviewing the data to decide whether the tool is appropriate or not.
The strength of the model is that it allows teachers to get specific timely results so that they can make an informed decision and not waste time on something that doesn’t meet their needs.
The weakness of the model is that a semester might not be enough time to accurately assess whether the tool has the desired outcome.
As you can see, there are pros and cons to all the models. You have to choose one and test out if the model meets the goals you have.