Category Archives: EDUU 625

iNACOL Standards Self-Assessment

iNACOL Standards for EDUU 625:

Standard E: The online teacher models, guides, and encourages legal, ethical, and safe behavior related to technology use.

My Score: 2 (Yes, I do this.) I have delivered Common Sense Media Nearpod lessons on digital citizenship, and try to model ethical online behavior at all times when teaching.

Standard F: The online teacher is cognizant of the diversity of student academic needs and incorporates accommodations into the online environment.

My Score: 1 (I do this infrequently.) Although I care very much about accommodating student needs in my teaching, I must admit that I have tended to approach accommodations from a reactive, rather than a proactive, stance.

Standard G: The online teacher demonstrates competencies in creating and implementing assessments in online learning environments in ways that ensure validity and reliability of the instruments and procedures.

My Score: 2 (Yes, I do this.) Both on my own and in collaboration with other teachers in a Professional Learning Community, I feel that designing valid and reliable electronic assessments is a strength for me.

Standard H: The online teacher develops and delivers assessments, projects, and assignments that meet standards-based learning goals and assesses learning progress by measuring student achievement of the learning goals.

My Score: 1 (I do this infrequently.) If you had asked me this question three or four years ago, I would have given myself a 2; however, it has been difficult to adjust assessment practices to accurately measure the new Common Core standards, particularly the higher-order thinking skills.

Standard I: The online teacher demonstrates competency in using data from assessments and other data sources to modify content and to guide student learning.

My Score: 1 (I do this infrequently.) Although I think my PLC partners and I are very effective at meaningful collaboration and genuine reflection on our practice, time is a serious limiting factor that prevents me from meeting this standard as often as I would like.


International Association for K-12 Online Learning. (2011). National standards for quality online teaching. Retrieved from


Digital Curation: A Tool for 21st Century Learning

My first teaching job was in a high school science department that had a shared office where each of the teachers had a desk with one or two file cabinets. In those days, most lesson plans were still on paper, although there was a computer in the room with a floppy disk drive and a modest connection to the recently invented world wide web. When I began working in that office, the department chair encouraged me to freely peruse and borrow lessons and resources from any of the other teachers, and then pointed out an empty file cabinet that I should use to begin my own collection of lessons that I would be willing to share.

I quickly learned which teachers could usually be trusted to have a well-organized drawer of carefully vetted lesson ideas, and which teachers simply stored 35 copies of every worksheet that came with their adopted textbook. The cabinets packed full of paper worksheets weren't where I usually found the best lesson plans, assessments, and project ideas, and herein lies the distinction between curating and collecting. Curators try to share a relatively small number of the best resources, whereas collectors tend to stash everything they can get their hands on. In his video, Pant (2013) gave the example of the sommelier as a curator of fine wine, which I think is an excellent analogy. Twenty years ago, I never would have consulted a sommelier except maybe to help pick the wine to be served at my wedding reception. But now, with a smartphone in my pocket, why wouldn't I want to peek at a trusted wine review web site when I'm deciding which bottle of wine to pick up at the grocery store?

Simply put, technology has made quality curation available to everyone, including teachers. Educators now have access to literally millions of their colleagues’ virtual file cabinets on the Internet. These resources aren’t all paper worksheets, either; a URL can point to almost any type of media, from movies to blog posts to interactive learning environments. CourseWorld, for example, is a curated set of 16,000 educational videos that have been selected and indexed by a staff of over 50 experts in the humanities and arts (Nelson, 2013). What makes this site powerful is that the videos are organized in a well-designed topical hierarchy that allows a teacher to quickly drill down, with just a few mouse clicks, to a small set of vetted videos on a specific topic.

As we have discussed earlier in this course, Universal Design for Learning (UDL) philosophy emphasizes the variation of representation in teaching, so that students with varying abilities and learning styles will be able to succeed (CAST, 2011). A well-curated resource list should allow teachers to quickly access a variety of learning resources, preferably in a variety of formats, so that different types of learners can be supported. Curation itself can be an excellent authentic assessment task for students, because they would use higher-level thinking skills as they evaluate which resources they should collect into a portfolio. This is not just a cute way to structure a hands-on lesson; curation is quickly becoming a 21st century job skill, as more and more career fields depend on web-based resources for communication, training, design, and collaboration. Curation even has the power to open whole new types of learning for students. Sheninger (2013) described how high-school students used MIT OpenCourseWare to learn about video-game programming. This learning resource contained a carefully curated set of coding lessons, which the students were able to freely access as they were trying to figure out how to code their video games. Perhaps this is the most exciting possibility for digital curation: that people are free to use curated resources to quickly and efficiently teach themselves whatever they want to learn about virtually any subject.

Of course, digital curation does open up ethical and legal issues. Some of the best educational content on the Internet has been produced by people who have invested significant amounts of money and/or time. We teachers have liberal fair-use rights under copyright law, but we don't get to take expensive resources for free. A teacher who violates terms of use restrictions, even with the best of intentions, can expose himself or herself (and the school district) to significant financial and legal liability. Even more importantly, we teachers have a responsibility to keep our students safe online. Many online learning resources are intended for older children or adults, and don't feature the privacy protections and/or content filters that should be in place for younger children. Here is where curation is especially important: teachers should be able to quickly filter out web sites and web-based learning tools that aren't appropriate for students at their grade level. In fact, this may be a part of the teaching role that won't change by the end of the 21st century. No matter how much knowledge becomes available on the world wide web, and no matter how well that information is curated and organized for students, we will still need human teachers to guide students safely along their learning journey.


CAST. (2011). UDL at a glance [Video file]. Retrieved from

Nelson, S. (2013, September 24). CourseWorld curates repository of free arts and humanities media [Web log comment]. Retrieved from THE Journal web site:

Pant, A. (2013, October 7). Art of curation in education – course and instructor introduction [Video file]. Retrieved from

Sheninger, E. (2013, March 22). OCW supports independent study for N.J. high school students (via MIT News) [Web log comment]. Retrieved from

Immersive Learning: The Teacher Is Still the Teacher

The creators of the Scientopolis immersive science environment have built an interactive world where students can learn science by controlling virtual avatars in a medieval town (Immersive Education, 2012). As students make their way through the immersive learning activity, they use data from a variety of sources, including information provided by the simulation itself, which students can analyze using built-in data table and graph generators (Immersive Education, 2012). Of course, even though the students' avatars are trapped in a virtual world of the past, the students themselves have access to an internet-connected computer, so they can also take full advantage of the research potential of the devices they have at hand.

Ideally, a teacher should structure a learning activity using this software in a way that requires students to synthesize information from a variety of sources. In the Scientopolis weather scenario, for example, students must devise a practical solution for a multi-year drought based on simulation data and their own understanding of meteorology from their science lessons (Immersive Education, 2012). If I were using this tool in my own science classroom, I would try to present the problem as a complex one that has more than one plausible answer; that way, students would be forced to make difficult decisions based on careful cost-benefit analysis. This unit on drought would be particularly relevant to my students, who live in California’s Central Valley, where the entire population is quite familiar with the challenges a community faces when water is in short supply.

An immersive and complex learning experience should contain assessments that are also immersive and complex. Formative assessment is crucial in such a learning activity. It may be tempting for a teacher to assume a back-seat role while students are working independently in their virtual worlds, but that would be a mistake. Even when students are learning by doing in an online environment, it is still the teacher's responsibility to make sure that students are on-track towards meeting the project's predetermined learning goals. In the specific case of the Scientopolis module, a teacher might use a variety of periodic checks for understanding, including quick surveys at the end of each daily lesson, or perhaps a longer paragraph writing prompt that asks the student to summarize progress towards the objectives. Also, teachers should not forget to check in, face-to-face, with students on a regular basis.

These formative assessments should then be used to make any necessary adjustments as the project unfolds. A teacher may discover, for example, that the project timeline needs to be adjusted, or that some struggling students need strategic hints in order to catch up. Also, teachers should have one or more enrichment activities ready to assign in case advanced students complete their projects early.

When it comes to summative assessment, teachers should not rely solely on multiple-choice or similar objective tests when students complete an immersive learning experience. After all, much of what the students learn would be impossible to measure with multiple-choice test questions anyway. Ideally, students should be asked to demonstrate their learning by completing a practical project. In the drought example mentioned above, for instance, students might prepare a narrated multimedia presentation about climate change and drought for a real-life town hall meeting. Such an assessment would require a carefully constructed rubric to ensure that students clearly understand the teacher's expectations before they begin work. As Palloff and Pratt (2009) explained, rubrics can also help minimize the chance of conflict and disagreement about project grading (p. 70). Thus, by careful design, a teacher might use an immersive resource like Scientopolis to teach valuable critical-thinking skills while motivating students to achieve at higher levels.


Immersive Education. (2012, June 12). iED 2012 save science [Video file]. Retrieved from

Palloff, R. M., & Pratt, K. (2009). Assessing the online learner: Resources and strategies for faculty. San Francisco, CA: Jossey-Bass.

Rubrics: An Essential Tool for 21st Century Learning

I’m not sure why, but I don’t remember a lot of rubrics being used when I was in high school and college. My high-school history teacher, for example, required us to write a five-paragraph essay each week for the entire school year. He was a notoriously difficult grader, and always returned our essays to us with plenty of comments scribbled in red ink. Even though he was a very dedicated teacher and his feedback was very useful, it was always a bit of a guessing game for us to try to discern what he expected from our weekly essays. Rubrics would have helped us tremendously even then, back in the 20th Century, because they would have removed much of the guesswork from our writing.

In the 21st Century, of course, rubrics are even more important because students have so much more creative freedom associated with their learning. When I was writing my weekly history essays 25 years ago, I was probably relying on just one or two sources of information–probably a textbook chapter plus maybe a photocopied article. Students in a high school history class today, of course, would be expected to do much more than write the same five-paragraph essay each week. Modern web tools allow students to create more authentic projects. As the University of Colorado Denver (2006) stated in their online rubric tutorial, rubrics can provide clear descriptions of teachers' expectations across a broad range of assignment types, from written reports to experiments, design tasks, and other real-world demonstrations of learning. In fact, I can imagine a 21st Century history teacher giving students a free-form assignment on a topic, say, the Civil War. Even if students are allowed to select the format of their Civil War project from a long list of options (oral report, role-playing skit, video clip, web site, etc.), a savvy teacher might use a single rubric that covers all of these options.

Another benefit of rubrics to the 21st Century learner is that they force assessment to be criterion-referenced, rather than norm-referenced (University of Colorado Denver, 2006). Without clearly stated learning objectives, it can be easy for teachers to slip into a bell-curve mentality. Virtually all of my college math and science courses in the 1990s were graded on a curve. Most of the professors in these classes based our grades on norm-referenced, multiple-choice tests. For those of us who wanted to earn an "A," it wasn't enough to complete all of our work on-time and at a high level of quality. We also had to look over our shoulders and make sure our exam scores were always one or two standard deviations above the mean. In these classes, I remember students would often ask professors what would happen if every student in the class was a genius who did terrific work–could everyone in the course receive an "A" grade? Rubrics help break this sad practice of sorting students by keeping the focus where it should be: on whether or not students have mastered the essential learning objectives. In a perfect world, a student should receive the same grade for the same learning, regardless of who the teacher is or who else happens to be enrolled in the same class section. In this sense, well-crafted rubrics can be an important way to ensure equity of grading.

Rubrics have even more power as learning tools when they are designed and scored by collaborative teams of teachers. The University of Colorado Denver (2006) suggested that the reliability of a rubric can be improved by having multiple graders score an assessment against the same rubric. In recent years, I have been fortunate enough to participate in such a process. Last year, for example, the high school where I taught administered two campus-wide writing benchmarks. We graded these essays using our common District writing rubric. During the scoring sessions for these benchmark essays, instructional coaches from the District Office were on hand to help us calibrate our scoring with sample papers, and we were able to ask for one another's help when we had to make difficult judgment calls. Again, this was a great opportunity for rubrics to enhance 21st Century learning, as our students' papers and rubrics were shared electronically, which streamlined the process significantly. Student work was also electronically screened for plagiarism, thus further enhancing the reliability of the assessment. Activities like this are time-consuming, of course, but whenever teams of teachers use real-time common assessment data to help them improve their instruction, that is a golden opportunity to improve learning that shouldn't be passed up.


University of Colorado Denver. (2006). Creating a rubric: An online tutorial for faculty. Retrieved from

Academic Integrity & Online Assessment

One of my favorite ways to support academic integrity is to ask students questions that don’t have a simple, single answer. Palloff and Pratt (2009) suggested that plagiarism is more difficult when students must solve real-life problems because they might not be able to find resources that fit the unique local context of such an assignment (p. 46). This week’s Midterm assignment that I have just submitted was a good example of this strategy, because we were asked to design a presentation that we might use with our real-life colleagues. On this assignment, it would have been difficult for me to copy someone else’s answers, because my local school district and community are different from those of my classmates. My presentation, therefore, is designed with a unique audience in mind, so it’s unlikely that another student’s responses would be fully applicable to my local context, and an observant professor might note inconsistencies if a student tried to cheat in this way. Even if I were the sort of student who cheated (and I am not!), the assignment’s creative possibilities and clear relevance might persuade me to work honestly.

In the specific case of our Midterm this week, the fusion of two different media sources (YouTube and Prezi) helps guard against plagiarism because the time stamps and account information of both sources can be compared. It might be possible for a crafty plagiarist to falsify such information on either a Prezi or a YouTube video, but creating matching false details for both platforms would be more difficult.

I think dishonesty could be further prevented by adding a web cam requirement to the screencast videos. I elected to add a webcam to my assignment anyway, mainly because I wanted to gain some practice with this software feature (Wise, 2017). By showing my face and recording my own voice, my professor has an opportunity to compare my appearance, voice, and (perhaps most importantly) nonverbal cues and facial expressions with those in my other videos and webinars. Many online assessment services now incorporate photographing and/or capturing video of the student during testing; the same advantages of preventing impersonation apply here (Pearson Education, 2017). Also, if my video narrative doesn't match the detail, tone, or syntax of my report, then that might be a red flag that at least some portions of my project might have been plagiarized.

The integrity of this assignment could be bolstered even further by requiring students to present their Prezis in a synchronous online webinar using a tool like Adobe Connect. The professor might lead a structured or impromptu discussion before, during, or after the presentation. It would be difficult for a plagiarist to effectively answer detailed questions in real time.

If the authentic context is a priority, perhaps a student could be required to show his or her Prezi to one or more real-life colleagues, who would then have to submit a separate evaluation directly to the professor. Last year, for example, I had to submit a portfolio and video clip as part of my Google Certified Trainer application (Google for Education, 2017). In addition, I had to provide Google with the names and contact information for three people whom I had trained within the past year. These three people had to submit separate evaluations of my work directly to Google via their work Google accounts. It would have been very difficult for me to cheat on this portion of my application because I would have had to hack into the preexisting Google emails of three separate people with whom I work. To be honest, planning and executing a successful training session would be less labor-intensive than cheating on such an assessment!


Google for Education. (2017). Google for education: Certified trainer program. Retrieved from 

Palloff, R. M., & Pratt, K. (2009). Assessing the online learner: Resources and strategies for faculty. San Francisco, CA: Jossey-Bass.

Pearson Education. (2017). Deliver your own exam: Testing outside a test center. Retrieved from   

Wise, B. (2017). Khan Academy: A rationale for blended learning at the high school level [Prezi file]. Retrieved from

Universal Design for Learning (UDL): It's About the Students

At our live meeting last week, my partner mainly affirmed the modifications I had made to my AP Chemistry lab design project. My partner didn’t have many suggestions for improving the paper itself, so I focused mainly on improving my report’s structure and clarity, rather than adding any new ideas. If I could revise my paper a second time, I would add a few words about accessibility, especially after what we have learned in our class over the past week. After all, accessibility isn’t just a good idea; it’s the law! According to Section 504, for example, students with disabilities must be given opportunities to achieve the same results and benefits as students without disabilities (Smith, 2017). Of the several modifications I proposed for my lesson, two were particularly relevant to the Universal Design for Learning (UDL) philosophy.

First, I decided to allow my students to use the internet to research possible experimental designs prior to writing their own procedure, rather than prohibiting it. The original lesson, which was given to me by a College Board AP Summer Institute trainer, contained this prohibition mainly as a guard against plagiarism. I wrote in my paper about how this modification would parallel the changing role of the teacher in the 21st century classroom, from the sage-on-the-stage to the guide-on-the-side. My original paper did not mention how this modification would increase variation of student engagement, which is one of the three primary elements of the UDL Guidelines (CAST, 2015). If I could revise my paper a second time, I would add a section describing how students with disabilities and/or sensory impairments might deepen their involvement in the project if given the opportunity to find relevant online video clips, visual aids, and blog posts, especially if I took the time to locate, vet, and share a few of these resources with my students. The original assignment had absolutely no support for this. I must admit that, in the past, a disabled student in my AP Chemistry course would have been likely to take a passive role while his or her lab partners did most of the thinking, discussing, and decision-making about how to design the group's experiment.

Second, I decided to change the post-lab assessment to incorporate peer editing and feedback via electronic comments. Again, this change in the assignment reflected an evolution in the teaching role, because I wanted to open up the revision process, so that the teacher was not the only person providing feedback to the learner. But I'm afraid I missed the mark in regards to UDL again here, because I was only imagining students providing typed commentary feedback to one another. The third UDL guideline, variation of action and expression, emphasizes the value of allowing students to express their knowledge in different ways (CAST, 2015). One refinement would be to allow this feedback in the form of audio clips. I recently learned about a web-based tool, Kaizena, that allows students and teachers to leave audio feedback, opening the door for disabled and/or impaired students to communicate more effectively about their writing (Carey, 2015). In a fully online classroom, this sort of interactive peer reflection could also be facilitated via online hangout, similar to our live meeting earlier this week. I suspect that allowing audio comments, whether asynchronous or synchronous, would be helpful to all students, not just those with disabilities or impairments. This is perhaps the true genius of the UDL guidelines; inclusive design, after all, isn't just a way to address compliance for specific disabilities, but rather a way to increase accessibility for all people (CAST, 2011). In the end, we educators should remember that good lesson design isn't just about the teacher. It's also about the student.


Carey, J. (2015). Leave voice comments in Google Docs with Kaizena [Web log comment]. EdTechTeacher. Retrieved from

CAST. (2011). UDL at a glance [Video file]. Retrieved from 

CAST. (2015). About universal design for learning. Retrieved from

Smith, T. E. C. (2017). Section 504, the ADA, and public schools. LD Online: The Educators’ Guide to Learning Disabilities and ADHD. Retrieved from

How Should Data Be Used in the 21st Century Classroom?


How should data be used in the 21st century classroom? This is the million-dollar question (or, to be more precise, the multi-billion-dollar question) that faces educators today. Bill Gates has demonstrated that data-driven philanthropy can help mobilize limited resources to solve persistent human problems. Modern data technologies, for example, have helped alleviate some of the human suffering caused by infectious diseases and famine in Africa (Goldstein, 2013, para. 3).

In the case of America’s education system, I see a lot of potential for data to help, because schools are highly complex systems with complex sets of interacting variables. I was trained as a biologist, and the complexity of our education system is akin to that of the biological world. Because there are so many species in so many habitats on our planet, it took several decades just for scientists to make enough sense of the flood of available data to develop a coherent theory–natural selection–in order to explain it all. A critical breakthrough occurred when early biologists developed a standardized system of classifying species, so that they could at least agree on what to call each species, and how to place groups of species into categories by using measurable data that could give insights into their evolutionary relationships.

I see a parallel development in American education today. Our students come from a fantastic diversity of cultural and socioeconomic backgrounds with widely different learning styles and abilities, and they are taught in a dizzying variety of school settings. Meaningful reform and improvement cannot occur until educators come to consensus on which curriculum standards to adopt, and how student learning of those standards should be measured. The widespread adoption of the Common Core standards has been a huge step forward in this regard, but in a perfect world, student learning needs to be assessed consistently as well, so that apples-to-apples comparisons may be made. I hope that the Smarter Balanced assessments (SBAC) will provide some much-needed clarity in how we measure student learning. The test questions on this assessment do a good job of testing levels of understanding that weren’t easily measured by traditional multiple-choice test items by employing technologically enhanced question types (Smarter Balanced Assessment Consortium, n.d.). However, one of the tricky things about the Smarter Balanced tests is figuring out how individual test questions relate to the standards and the claims, which are the big-picture learning goals upon which the State bases its student and school reports.

Trying to solve the puzzle of standards mapping on the SBAC can be daunting; what’s more, many of the standards can be mapped to more than one claim. As a teacher, I want to be able to harness the best data analysis programs to give me practical advice about how to modify my instruction to best meet the needs of each of my students. I don’t want to try to learn all of the intricacies of the data analysis, because that would take valuable time that I would much rather spend crafting good lessons and working with my students. If I were in charge of a school campus or District, I would want to hire a carefully vetted consulting firm, such as Learning Forward, to analyze the wealth of available data. As Eric Brooks described in his video clip, the best insights for school leaders come from the skilled analysis of multiple sources of data, including non-testing data like attitude surveys (Learning Forward, 2012). Such data analysis might help teachers not only adjust their curricula and assignments, but also their methods and attitudes in ways that would enhance student learning.

Teachers, administrators, and parents might feel uneasy about trusting a hidden computer algorithm to inform their practice, as well they should (Modern School, 2013). The motives of for-profit data analysis companies must always be monitored, because schools have a sacred responsibility to protect the safety and privacy of their students. What’s more, we have to be assured that data analysis algorithms are culturally sensitive, so that we don’t make educational decisions based on data that were produced by culturally biased tests. But the potential benefits of using data to inform decision making in schools cannot be overstated. If computers can help us successfully land rovers on Mars or immunize thousands of children in Africa, perhaps they can help us better teach our students too.


Goldstein, D. (2013, January 31). Can big data save American schools? Bill Gates is betting yes. The Atlantic. Retrieved from

Learning Forward. (2012, April 6). Data standard [Video file]. Retrieved from

Modern School. (2013, March 12). Is Bill Gates data mining your children? [Web log comment]. Retrieved from

Smarter Balanced Assessment Consortium. (n.d.). Smarter assessments. Retrieved from

Formative Assessment Matters More Now

Waters (2012) suggested that every minute a teacher spends on formative assessment is a minute lost from instruction (p. 8). But I doubt that formative assessment and instruction are really a zero-sum game. As my master teacher told me over 20 years ago, a good assessment should be a learning experience too. The trick, I think, is for the teacher to break out of the comfortable routine of measuring all learning with quick and easy multiple-choice tests, by which I mean the sort of test questions that have single, predetermined correct answers. Designing a good formative assessment takes time, to be sure, so why not use that time to students’ advantage by incorporating a thought-provoking article or short video clip into an assessment?


A good formative assessment should require students to formulate ideas that extend beyond the context(s) in which the information was taught. A few years ago, I taught a high-school anatomy course with a partner who is a formative assessment guru. She requires her students to write quick paragraph assessments based on one or two brief excerpts from articles. She carefully designs her writing prompts so that students not only summarize the key concepts from the article and their prior learning, but also apply that knowledge to solve a critical-thinking problem that they have not encountered before. Some of the prompts even ask questions that have more than one possible answer, so there is an opportunity for the assessment to provoke further discussion and debate in the classroom. She grades these short-paragraph assessments efficiently and holistically based on rather simple criteria:

  • Did the student demonstrate sufficient mastery of what has been taught recently?
  • Was the student able to make a logical conclusion about the critical-thinking problem that was supported with evidence?

Based on the results of each assessment, she is able to make immediate adjustments to her instruction—and student groupings—the very next day.

As I was completing my own self-assessment for this assignment, I realized that an effective formative assessment should also contain a question or two that asks the student to express his or her own assessment of progress. I don’t think it’s necessary to ask students to complete the exact same questions before and after their learning, as we have been asked to do this week. But I do think that students should be asked to reflect on how their thinking has changed over the course of a unit or an entire course term. Such assessments need not be lengthy; in fact, every lesson can easily be concluded by asking students to rate their own understanding of the lesson on a scale from 1 to 5. Over longer time scales, I think students should be asked to write reflections on learning goals every couple of weeks. Such writing can be a powerful learning experience for both the student and the teacher, who might gain valuable feedback that can be used to adjust upcoming lessons and/or improve the course for the next year.

Strategies such as these can be employed in a traditional classroom; in fact, I doubt any of these ideas is really new. But in an online or blended learning environment, these formative assessment techniques become essential, because teachers and students might not be in the same classroom at the same time, or they might not even be in the same part of the world. Teachers of online courses must take formative assessment very seriously because the students are not physically present, so their body language, attitudes, and emotional states might be complete mysteries.

Technology, of course, opens up whole new categories of possible formative assessment techniques. As Horn and Staker (2012) described, formative assessment can be constantly interwoven throughout learning by using adaptive instruction tools like Lexia (para. 6). In my school district, the printed math textbook has been completely replaced with GoMath and ThinkCentral, both of which are adaptive, interactive learning modules published by HMH. Students in such programs are constantly asked to solve problems independently, and the software makes instant decisions about whether a student needs remediation or intervention, or is ready to progress to the next step. These tech tools are sometimes aggravating when they do not work, and I doubt they can replace the intuition and interpersonal relationships of a dedicated teacher. However, even the most skeptical tradition-minded teacher must admit that technology is opening the door to many new assessment methods, and that traditional paper tests with multiple-choice questions are going the way of the dinosaur.


Horn, M., & Staker, H. (2012, November 14). Formative assessment is foundational to blended learning. THE Journal: Transforming Education Through Technology. Retrieved from

Waters, J. K. (2012). Resolving the formative assessment catch-22. THE Journal: Transforming Education Through Technology. Retrieved from