Tag Archives: assessment

The Importance of Feedback: Both Fast and Slow

Feedback is one of the most important parts of a teacher’s job. Even if they don’t always act like it, students tend to be very interested in teachers’ reactions to their work. The emotional course of a young person’s entire day can be strongly affected by the sort of feedback–both positive and negative–that a teacher may provide. The power of effective feedback, of course, doesn’t only impact a student’s affective domain; cognitive development also depends on frequent and specific feedback from teachers. As Wiggins (2012) pointed out, feedback based on formative assessment is one of the most powerful factors affecting student learning.

For adult learners, of course, feedback is no less important, but it needs to be structured a little differently. Adults, especially professional educators, often have a well-earned sense of their own expertise, which they may have developed over the course of many years or even decades of classroom experience with young people. Even the most personable and trustworthy administrator, instructional coach, or trainer might offer a well-stated and specific suggestion to a teacher, only to be rebuffed as a non-expert who doesn’t know what he or she is talking about. In short, providing critical feedback to adults is complicated.

Mochari (2014) described how Abraham Lincoln was masterful at providing feedback to his generals during the Civil War. After Lincoln’s death, letters were discovered in his desk that he had written but wisely decided never to send. Writing these letters must have helped Lincoln clarify his thinking, and perhaps even vent some of his frustrations and anxieties about trying to hold the young nation together in spite of a brutal war. Among Mochari’s takeaways was the importance of putting yourself into someone else’s shoes before criticizing him or her (Mochari, 2014, para. 11). I once worked for a superintendent who developed a well-deserved reputation for fits of rage, during which he would yell at employees so loudly that others could hear his every word through the wall. Hearing my colleagues, some of whom had been educators for decades, being cursed at and belittled didn’t just affect their morale and self-esteem; it had negative effects on everyone within earshot. While this is an extreme example, the fact remains that supervisors must tread lightly when providing feedback.

Of course, treading lightly isn’t always possible. Swartz (n.d.) outlined a system of writing feedback she used with her online language arts students. Among her insights was a commitment to provide feedback via electronic comments within 24 hours of a student’s writing submission. Prompt feedback is key to learning because learners need to hear both positive reassurance and suggestions for improvement while the work is still fresh in their minds. Most people have had the frustrating experience of receiving feedback so long after finishing a job that many of the decisions and actions associated with the work have been forgotten.

As I consider the feedback mechanisms in my Demo Unit for this course, I believe there are several good ways that the instructor can provide prompt and specific feedback to learners. Wiki posts and threaded online discussions, for example, give instructors an opportunity to participate in a discussion in real time, certainly within the 24-hour window that Swartz described. Virtual online meetings and video conferences allow for feedback that is even quicker, in that they approximate the face-to-face conversations that occur constantly in a classroom environment.

Although feedback is most effective when it is prompt, it is sometimes important for teachers to take at least a little more time to carefully reflect on which suggestion(s) will have the most impact on learning. Several of my Demo Unit assignments have rubric-based feedback, associated with formal grading tasks, which are somewhat slower than online meetings and synchronous discussions. The advantage of slower feedback is that great, breakthrough ideas often require more time and reflection before they can be formulated. If there is one lesson that we can learn from Abraham Lincoln, it is that the slower, more deliberate approach may not always be the most popular choice, but it is sometimes the most effective.

References

Mochari, I. (2014, February 11). Abraham Lincoln’s brilliant method for handling setbacks. Retrieved from http://www.inc.com/ilan-mochari/lincoln-lesson-setbacks.html

Swartz, J. (n.d.). Strategies for providing substantive feedback in language arts in the online environment. Retrieved from http://itlab2.coe.wayne.edu/it6230/casestudies/english/english.html

Wiggins, G. (2012). Seven keys to effective feedback. Educational Leadership, 70(1), 10-16. Retrieved from http://www.ascd.org/publications/educational-leadership/sept12/vol70/num01/Seven-Keys-to-Effective-Feedback.aspx

Authenticity In the Exciting World of Education Technology

The design of my Demo Unit was an authentic experience because I decided to make a course for the adult professional educators whom I work with every day as part of my job. It would have been much easier for me to build a Demo Unit full of lesson resources that I have used in the past with my high school science students, but that would not have been an authentic learning experience for me, because I no longer teach high school students. In short, I decided to look forward to the future, building learning activities that I might use next year, rather than looking backward.

I think it’s important to look ahead when designing curriculum. Too often, I see teachers build activities whose primary purpose is to review prior knowledge. I’m not sure this is a great way to build student motivation for learning. Especially at the secondary level, teachers often precede a key exam or quiz with one or more review days. These lessons typically include vocabulary games, Jeopardy!-style competitions, and teacher-led rehashing of previously learned (or, perhaps, previously not-learned) concepts. While many of these activities are designed with good intentions, and may help remind students what they should remember for their upcoming test, I wonder if they are very effective. When students hear that today is a review day, they may feel entitled to turn off their brains to some extent, because their teacher is basically telegraphing that nothing new will be learned today. Ideally, however, something new should be learned every day!

So I am tempted to blow up the idea of the review day. In fact, maybe we should even blow up the idea of a test or quiz as a primary summative assessment altogether. Imagine how much more engaged a student might be in a one- or two-day summative project that would require some independent research and application of key concepts to practical problem-solving. Rather than give my students a multiple-choice quiz about atomic structure, for example, why not assign them to prepare multimedia presentations about how atomic structure relates to everyday teenage problems, like crack-resistant cell phone screens or artificial sweeteners? After all, I typically put my chemistry students to sleep with even the best-designed lessons about electron orbitals, but if I taught them how those orbitals related to, say, the interaction of a drug molecule with the human brain, suddenly their interest level would increase dramatically.

Upon final reflection about our Demo Unit assignment at Brandman, one modification that could have made the learning much more authentic would have been allowing some flexibility about the platform used. Although Blackboard and CourseSites are still very good ways to structure online learning units, there are newer platforms with more reliable functionality, like Canvas. In my specific case, because I work in a district that has a 1:1 Chromebook implementation, it would have made more sense for me to structure my Demo Unit using Google Sites, Classroom, Hangouts, and other Google Apps. Of course, I can appreciate the difficulty in modifying course content to stay up to date with the latest developments in education technology.

In fact, I can foresee a time when something else may replace even Google as the predominant technology platform in my district. Both Apple and Microsoft, for example, are developing their own classroom technology environments. I wouldn’t be surprised at all to see my colleagues and their students using OneDrive and/or Microsoft Classroom just a few years from now. Just don’t tell them I said that! It can be daunting enough for educators just to keep up with the amazing variety of apps and web tools that are available right now. Considering how rapidly education technology may evolve over the next five years, I think we have much to be excited–and frightened–about.

My Demo Course Summative Assessment: A Reflection

Whenever I work with teachers, I try to constantly reflect on my role. I might be the technology expert, but my teachers will always know more about their specific grade levels and subject areas than I do. Even within the realm of technology, I frequently run into teachers who know things that I do not, especially when they have direct experiences with resources and contexts that I lack. I often liken my role to that of an athletic coach. Coaches have a unique big-picture understanding of the team’s goals, points of emphasis, and strategy going into each game. But smart coaches understand that the players on the field see and understand things that cannot be perceived from the sidelines, no matter how many years the coach may have played the game. Successful coaches listen to their players constantly, and keep an open mind about changing the game plan, even while they might sometimes have to tell players things they may not want to hear. It’s a balancing act that makes me a little nervous sometimes, to be perfectly honest.

Now that I have added the Summative Assessment Portfolio to my Demo Shell, I feel that it is a well-designed framework for my students to demonstrate and extend their learning throughout the Demo Unit. I have tried to strike a careful balance: providing enough structure and guidance so everyone will know what the portfolio looks like and how to assemble it, while remaining true to the coach role I would like to play. I have still left enough open to interpretation that my teachers can fill in the blanks with artifacts that are relevant to their unique jobs. This was not an easy task, and I’m not sure it’s fully done yet.

I might decide to make further refinements, because there are still a few elements of this summative assessment that I am not completely happy with. Although I think my video clip guide to New Google Sites is helpful, I would like to provide my students with additional static links to more in-depth instructions about how to work with New Google Sites (Wise, 2017a). I have located one possible resource, a 17-minute video clip posted by Technology for Teachers and Students (2016). I am not sure this is the best tutorial resource, however, and am still looking for one or more tutorials to add. This is very important because New Google Sites is very different from Classic Google Sites, so most easily found tutorials about Classic Google Sites are likely to distract and confuse my students.

Also, my static template portfolio is not at all complete, because the individual assignments in my Weekly Demo Shell are not complete either (Wise, 2017b). I’ve decided to complete these individual assignments first, so I can include sample artifacts that look consistent. I might even stop calling it a template because, as far as I know, there is no easy way to turn a New Google Site into a template in the way that a Word, PowerPoint, Google Docs, or Google Slides file can be shared as one. Instead, I might call this resource a sample portfolio.

References

Technology for Teachers and Students. (2016, August 22). The NEW Google sites – 2016 tutorial [Video file]. Retrieved from https://youtu.be/OsNat-3-D3s

Wise, B. (2017a). How to build your media tools portfolio on new Google sites [Video file]. Retrieved from https://www.youtube.com/watch?v=kGHN2_oklJg

Wise, B. (2017b). Template – media tools. Retrieved from https://sites.google.com/mail.brandman.edu/wise-eduu628-demo-summ-port

Making a Google Sites Portfolio Assessment More Accessible Via Screencast

Technology in education is certainly a double-edged sword. On the one hand, modern technology gives students fantastic opportunities to learn in ways that were previously difficult, expensive, or impossible for teachers to design. Teachers often appreciate new technology tools that allow them to perform many of the complex or tedious tasks of education, like correcting papers and analyzing testing data, more effectively and efficiently. Modern media tools also have the potential to convey instruction–and develop problem-solving skills–much more effectively than the overhead projectors and chalkboards I used when I started my teaching career (Net Industries, 2017, para. 11).

On the other hand, new hardware and software learning tools are being developed and modified at a rapid pace that can overwhelm even the most tech-savvy educators. Often when I show teachers a new tech tool to make their jobs easier, they will ask me how long I think it will take for this tool to be replaced by something even better. My worst fear is that education technology might turn into a sort of Red Queen’s Race, in which teachers, like Alice in Wonderland, must constantly run just to stay in the same place (Carroll, 1871).

I’m currently working on designing my Demo Course Unit, which is a mini-course for teachers who would like to integrate visual media tools into their instruction. I’d like my teachers to build an electronic portfolio as a summative assessment for this Demo Unit, so that they can collect and share several artifacts that represent what they have learned about using modern technology tools in their lesson designs. I have decided that the best way to provide structure to this rather open-ended assessment is to provide my teachers with a template they may use to construct their New Google Sites portfolios (Wise, 2017b). My template will be shared with my teachers in a way that incorporates some variation in representation, which is an important consideration of Universal Design for Learning (CAST, 2015). I will provide my teachers with both a direct link to the template site and a narrated screencast video with subtitles (Wise, 2017a).

References

Carroll, L. (1871). Through the looking-glass [Project Gutenberg version]. Retrieved from https://www.gutenberg.org/files/12/12-h/12-h.htm

CAST. (2015). About universal design for learning. Retrieved from http://www.cast.org/our-work/about-udl.html#.WI4edhsrLD4

Net Industries. (2017). Media and learning – definitions and summary of research, do media influence the cost and access to instruction? Retrieved from State University Education Encyclopedia web site: http://education.stateuniversity.com/pages/2211/Media-Learning.html

Wise, B. (2017a). How to build your media tools portfolio on New Google Sites [Video file]. Retrieved from https://youtu.be/kGHN2_oklJg

Wise, B. (2017b). Template – media tools. Retrieved from https://sites.google.com/mail.brandman.edu/wise-eduu628-demo-summ-port/

Using Google Sites to Create a Portfolio that Teachers Will Really Use

Because I have left the classroom to work as an EdTech Specialist in my district, I didn’t think it would be appropriate to design a Demo Unit based on K-12 content standards such as the Common Core or NGSS. Rather, because my students are adult teachers with students of their own, I needed to find a set of standards that addresses professional learning and practice for teachers using technology in their classrooms. Fortunately, such a set of standards already exists; in fact, the ISTE Standards for Teachers have been adopted and used by many educators (International Society for Technology in Education, 2008).

For the purposes of my Demo Unit, I have selected ISTE Teacher Standard 2a, which states that teachers “[d]esign or adapt relevant learning experiences that incorporate digital tools and resources to promote student learning and creativity” (International Society for Technology in Education, 2008, p. 1). I selected this standard because I try not to dazzle my teachers with all of the latest technological tools, but rather to show them how technology tools can be thoughtfully and strategically used to support and enhance student learning. I love this particular standard because it focuses on using technology tools to support not only student learning, but also creativity. I am convinced that technology should not be used only as a substitute for traditional textbook-based instruction. Teachers should also use technology to allow their students to create meaningful products with relevance to their everyday lives.


In keeping with the spirit of 21st Century learning, this standard’s key verb is design, which falls within Create, the highest level of the revised Bloom’s taxonomy (Armstrong, 2017). In my school district, we spend a lot of time and resources on thoughtful lesson design. I work in the Education Services division of my District Office, where I frequently collaborate with our twelve Instructional Coaches on lesson studies and concept-building activities with grade-level and subject-area teams of teachers. We often help teachers build complete lessons using complex templates that incorporate instructional norms including lesson objectives, content and skill development, embedded checks for understanding, relevance, and (of course) technology.


My teachers work in virtually every type of classroom imaginable, from transitional kindergarten to adult school, including both general-education and special-education settings. Thus, it is important for my summative assessment to be open-ended enough that each teacher would be able to create a practical and relevant project that could be used with his or her own students. I decided the best way to accomplish this goal was to assign my teachers to construct an online portfolio of lessons, learning activities, and assessments using the New Google Sites.

The ISTE Standard gives the option of designing or adapting lessons; therefore, it isn’t essential that the teacher personally design each item in the portfolio from scratch. In fact, teachers need to know how to efficiently and strategically adapt preexisting lesson resources to meet their students’ needs. The rubric for this portfolio won’t focus excessively on details of the artifacts themselves. Rather, I hope to focus my teachers’ attention mainly on the planning, feedback, and reflection associated with each artifact. In the end, I want my teachers to design a portfolio of technology-based lesson resources that they, their colleagues, and their students will really use.

References

Armstrong, P. (2017). Bloom’s taxonomy. Retrieved from Vanderbilt University Center for Teaching: https://cft.vanderbilt.edu/guides-sub-pages/blooms-taxonomy/

International Society for Technology in Education. (2008). ISTE standards for teachers. Retrieved from http://iste.org/standards

Immersive Learning: The Teacher Is Still the Teacher

The creators of the Scientopolis immersive science environment have created an interactive world where students can learn science by controlling virtual avatars in a medieval town (Immersive Education, 2012). As students make their way through the immersive learning activity, they use data from a variety of sources, including information provided by the simulation itself, which students can analyze using built-in data table and graph generators (Immersive Education, 2012). Of course, even though the students’ avatars are trapped in a virtual world of the past, the students themselves have access to an internet-connected computer, so they can also take full advantage of the research potential of the devices they have at hand.

Ideally, a teacher should structure a learning activity using this software in a way that requires students to synthesize information from a variety of sources. In the Scientopolis weather scenario, for example, students must devise a practical solution for a multi-year drought based on simulation data and their own understanding of meteorology from their science lessons (Immersive Education, 2012). If I were using this tool in my own science classroom, I would try to present the problem as a complex one that has more than one plausible answer; that way, students would be forced to make difficult decisions based on careful cost-benefit analysis. This unit on drought would be particularly relevant to my students, who live in California’s Central Valley, where the entire population is quite familiar with the challenges a community faces when water is in short supply.

An immersive and complex learning experience should contain assessments that are also immersive and complex. Formative assessment is crucial in such a learning activity. It may be tempting for a teacher to assume a back-seat role while students are working independently in their virtual worlds, but that would be a mistake. Even when students are learning by doing in an online environment, it is still the teacher’s responsibility to make sure that students are on track toward meeting the project’s predetermined learning goals. In the specific case of the Scientopolis module, a teacher might use a variety of periodic checks for understanding, including quick surveys at the end of each daily lesson, or perhaps a longer paragraph writing prompt that asks the student to summarize progress toward the objectives. Also, teachers should not forget to check in, face-to-face, with students on a regular basis.

These formative assessments should then be used to make any necessary adjustments as the project unfolds. A teacher may discover, for example, that the project timeline may need to be adjusted, or some struggling students may need to be provided with strategic hints in order to catch up. Also, teachers should have one or more enrichment activities ready to assign in case one or more advanced students complete their projects early.

When it comes to summative assessment, teachers should not rely solely on multiple-choice or similar objective tests when students complete an immersive learning experience. After all, much of what the students learn would be impossible to measure with multiple-choice test questions anyway. Ideally, students should be asked to demonstrate their learning by completing a practical project. In the drought example mentioned above, for instance, students might prepare a narrated multimedia presentation about climate change and drought to deliver at a real-life town hall meeting. Such an assessment would require a carefully constructed rubric to ensure that students clearly understand the teacher’s expectations before they begin work. As Palloff and Pratt (2009) explained, rubrics can also help minimize the chance of conflict and disagreement about project grading (p. 70). Thus, by careful design, a teacher might use an immersive resource like Scientopolis to teach valuable critical-thinking skills while motivating students to achieve at higher levels.

References

Immersive Education. (2012, June 12). iED 2012 save science [Video file]. Retrieved from https://youtu.be/pgvDKXkbCMo

Palloff, R. M., & Pratt, K. (2009). Assessing the online learner: Resources and strategies for faculty. San Francisco, CA: Jossey-Bass.

Rubrics: An Essential Tool for 21st Century Learning

I’m not sure why, but I don’t remember a lot of rubrics being used when I was in high school and college. My high-school history teacher, for example, required us to write a five-paragraph essay each week for the entire school year. He was a notoriously difficult grader, and always returned our essays to us with plenty of comments scribbled in red ink. Even though he was a very dedicated teacher and his feedback was very useful, it was always a bit of a guessing game for us to try to discern what he expected from our weekly essays. Rubrics would have helped us tremendously even then, back in the 20th Century, because they would have removed much of the guesswork from our writing.

In the 21st Century, of course, rubrics are even more important because students have so much more creative freedom associated with their learning. When I was writing my weekly history essays 25 years ago, I was probably relying on just one or two sources of information–probably a textbook chapter plus maybe a photocopied article. Students in a high school history class today, of course, would be expected to do much more than write the same five-paragraph essay each week. Modern web tools allow students to create more authentic projects. As the University of Colorado Denver (2006) stated in its online rubric tutorial, rubrics can provide clear descriptions of teachers’ expectations across a broad range of assignment types, from written reports to experiments, design tasks, and other real-world demonstrations of learning. In fact, I can imagine a 21st Century history teacher giving students a free-form assignment on a topic–say, the Civil War, for example. Even if students are allowed to select the format of their Civil War project from a long list of options (oral report, role-playing skit, video clip, web site, etc.), a savvy teacher might be able to use a single rubric that covers all of these options.

Another benefit of rubrics to the 21st Century learner is that they force assessment to be criterion-referenced, rather than norm-referenced (University of Colorado Denver, 2006). Without clearly stated learning objectives, it can be easy for teachers to slip into a bell-curve mentality. Virtually all of my college math and science courses in the 1990s were graded on a curve. Most of the professors in these classes based our grades on norm-referenced, multiple-choice tests. For those of us who wanted to earn an “A,” it wasn’t enough to complete all of our work on time and at a high level of quality. We also had to look over our shoulders and make sure our exam scores were always one or two standard deviations above the mean. In these classes, I remember students would often ask professors what would happen if every student in the class was a genius who did terrific work–could everyone in the course receive an “A” grade? Rubrics help break this sad concept of sorting students by keeping the focus where it should be: on whether or not students have mastered the essential learning objectives. In a perfect world, a student should receive the same grade for the same learning, regardless of who the teacher is or who else happens to be enrolled in the same class section. In this sense, well-crafted rubrics can be an important way to ensure equity of grading.

Rubrics have even more power as learning tools when they are designed and scored by collaborative teams of teachers. The University of Colorado Denver (2006) suggested that the reliability of a rubric can be improved by having multiple graders score an assessment against the same rubric. In recent years, I have been fortunate enough to participate in such a process. Last year, for example, the high school I taught at assigned two campus-wide writing benchmarks. We graded these essays using our common District writing rubric. During the scoring sessions for these benchmark essays, instructional coaches from the District Office were on hand to help us calibrate our scoring with sample papers, and we were able to ask one another for help when we had to make difficult judgment calls. Again, this was a great opportunity for rubrics to enhance 21st Century learning, as our students’ papers and rubrics were shared electronically, which streamlined the process significantly. Student work was also electronically screened for plagiarism, thus further enhancing the reliability of the assessment. Activities like this are time-consuming, of course, but whenever teams of teachers use real-time common assessment data to help them improve their instruction, that is a golden opportunity to improve learning that shouldn’t be passed up.

Reference

University of Colorado Denver. (2006). Creating a rubric: An online tutorial for faculty. Retrieved from http://www.ucdenver.edu/faculty_staff/faculty/center-for-faculty-development/Documents/Tutorials/Rubrics/index.htm

Academic Integrity & Online Assessment

One of my favorite ways to support academic integrity is to ask students questions that don’t have a simple, single answer. Palloff and Pratt (2009) suggested that plagiarism is more difficult when students must solve real-life problems because they might not be able to find resources that fit the unique local context of such an assignment (p. 46). This week’s Midterm assignment that I have just submitted was a good example of this strategy, because we were asked to design a presentation that we might use with our real-life colleagues. On this assignment, it would have been difficult for me to copy someone else’s answers, because my local school district and community are different from those of my classmates. My presentation, therefore, is designed with a unique audience in mind, so it’s unlikely that another student’s responses would be fully applicable to my local context, and an observant professor might note inconsistencies if a student tried to cheat in this way. Even if I were the sort of student who cheated (and I am not!), the assignment’s creative possibilities and clear relevance might persuade me to work honestly.

In the specific case of our Midterm this week, the fusion of two different media sources (YouTube and Prezi) helps guard against plagiarism because the time stamps and account information of both sources can be compared. It might be possible for a crafty plagiarist to falsify such information on either a Prezi or a YouTube video, but creating matching false details for both platforms would be more difficult.

I think dishonesty could be further prevented by adding a webcam requirement to the screencast videos. I elected to add a webcam to my assignment anyway, mainly because I wanted to gain some practice with this software feature (Wise, 2017). By showing my face and recording my own voice, I give my professor an opportunity to compare my appearance, voice, and (perhaps most importantly) nonverbal cues and facial expressions with my appearances in other videos and webinars. Many online assessment services now incorporate photographing and/or capturing video of the student during testing; the same advantages of preventing impersonation apply here (Pearson Education, 2017). Also, if my video narrative doesn’t match the detail, tone, or syntax of my report, then that might be a red flag that at least some portions of my project might have been plagiarized.

The integrity of this assignment could be bolstered even further by requiring students to present their Prezis at a synchronous online webinar using a platform like Adobe Connect. The professor might lead a structured impromptu discussion before, during, or after the presentation. It would be difficult for a plagiarist to effectively answer detailed questions in real time.

If authentic context is a priority, perhaps a student could be required to show his or her Prezi to one or more real-life colleagues, who would then have to submit a separate evaluation directly to the professor. Last year, for example, I had to submit a portfolio and video clip as part of my Google Certified Trainer application (Google for Education, 2017). In addition, I had to provide Google with the names and contact information for three people whom I had trained within the past year. These three people had to submit separate evaluations of my work directly to Google via their work Google accounts. It would have been very difficult for me to cheat on this portion of my application because I would have had to hack into the preexisting Google emails of three separate people with whom I work. To be honest, planning and executing a successful training session would be less labor-intensive than cheating on such an assessment!

References

Google for Education. (2017). Google for education: Certified trainer program. Retrieved from https://edutrainingcenter.withgoogle.com/certification_trainer 

Palloff, R. M., & Pratt, K. (2009). Assessing the online learner: Resources and strategies for faculty. San Francisco, CA: Jossey-Bass.

Pearson Education. (2017). Deliver your own exam: Testing outside a test center. Retrieved from https://home.pearsonvue.com/Test-Owner/Deliver-your-exam/Testing-outside-a-test-center.aspx#OP   

Wise, B. (2017). Khan Academy: A rationale for blended learning at the high school level [Prezi file]. Retrieved from http://prezi.com/soll5du3vxny/?utm_campaign=share&utm_medium=copy

How Should Data Be Used in the 21st Century Classroom?

Image source: Pixabay

How should data be used in the 21st century classroom? This is the million-dollar question (or, to be more precise, the multi-billion-dollar question) that faces educators today. Bill Gates has demonstrated that data-driven philanthropy can help mobilize limited resources to solve persistent human problems. Modern data technologies, for example, have helped alleviate some of the human suffering caused by infectious diseases and famine in Africa (Goldstein, 2013, para. 3).

In the case of America’s education system, I see a lot of potential for data to help, because schools are highly complex systems with many interacting variables. I was trained as a biologist, and the complexity of our education system is akin to that of the biological world. Because there are so many species in so many habitats on our planet, it took scientists several decades just to make enough sense of the flood of available data to develop a coherent theory, natural selection, that could explain it all. A critical breakthrough occurred when early biologists developed a standardized system of classifying species, so that they could at least agree on what to call each species and how to group species into categories using measurable data that offered insights into their evolutionary relationships.

I see a parallel development in American education today. Our students come from a fantastic diversity of cultural and socioeconomic backgrounds with widely different learning styles and abilities, and they are taught in a dizzying variety of school settings. Meaningful reform and improvement cannot occur until educators come to consensus on which curriculum standards to adopt and how student learning of those standards should be measured. The widespread adoption of the Common Core standards has been a huge step forward in this regard, but in a perfect world, student learning would also be assessed consistently, so that apples-to-apples comparisons can be made. I hope that the Smarter Balanced assessments (SBAC) will provide some much-needed clarity in how we measure student learning. The test questions on this assessment employ technology-enhanced item types to measure levels of understanding that weren’t easily captured by traditional multiple-choice items (Smarter Balanced Assessment Consortium, n.d.). However, one of the tricky things about the Smarter Balanced tests is figuring out how individual test questions relate to the standards and the claims, which are the big-picture learning goals upon which the state bases its student and school reports.

Solving the puzzle of standards mapping on the SBAC is no simple task; what’s more, many of the standards can be mapped to more than one claim. As a teacher, I want to be able to harness the best data-analysis programs to get practical advice about how to modify my instruction to best meet the needs of each of my students. I don’t want to learn all of the intricacies of the data analysis myself, because that would take valuable time that I would much rather spend crafting good lessons and working with my students. If I were in charge of a school campus or district, I would want to engage a carefully vetted consulting firm, such as Learning Forward, to analyze the wealth of available data. As Eric Brooks described in his video clip, the best insights for school leaders come from the skilled analysis of multiple sources of data, including non-testing data like attitude surveys (Learning Forward, 2012). Such data analysis might help teachers adjust not only their curricula and assignments, but also their methods and attitudes, in ways that enhance student learning.

Teachers, administrators, and parents might feel uneasy about trusting a hidden computer algorithm to inform their practice, as well they should (Modern School, 2013). The motives of for-profit data analysis companies must always be monitored, because schools have a sacred responsibility to protect the safety and privacy of their students. What’s more, we have to be assured that data analysis algorithms are culturally sensitive, so that we don’t make educational decisions based on data that were produced by culturally biased tests. But the potential benefits of using data to inform decision making in schools cannot be overstated. If computers can help us successfully land rovers on Mars or immunize thousands of children in Africa, perhaps they can help us better teach our students too.

References

Goldstein, D. (2013, January 31). Can big data save American schools? Bill Gates is betting yes. The Atlantic. Retrieved from http://www.theatlantic.com/business/archive/2013/01/can-big-data-save-american-schools-bill-gates-is-betting-on-yes/272719/

Learning Forward. (2012, April 6). Data standard [Video file]. Retrieved from https://youtu.be/rvfp-5hCeMk

Modern School. (2013, March 12). Is Bill Gates data mining your children? [Web log comment]. Retrieved from http://modeducation.blogspot.com/2013/03/is-bill-gates-data-mining-your-children.html

Smarter Balanced Assessment Consortium. (n.d.). Smarter assessments. Retrieved from http://www.smarterbalanced.org/assessments/#interim

Formative Assessment Matters More Now

Waters (2012) suggested that every minute a teacher spends on formative assessment is a minute lost from instruction (p. 8). But I doubt that formative assessment and instruction are really a zero-sum game. As my master teacher told me over 20 years ago, a good assessment should be a learning experience too. The trick, I think, is for the teacher to break out of the comfortable routine of measuring all learning with quick and easy multiple-choice tests (or, more precisely, test questions that have single, predetermined correct answers). Designing a good formative assessment takes time, to be sure, so why not use that time to students’ advantage by incorporating a thought-provoking article or short video clip into the assessment?

Image source: Pixabay

A good formative assessment should require students to formulate ideas that extend beyond the context(s) in which the information was taught. A few years ago, I taught a high-school anatomy course with a partner who is a formative-assessment guru. She requires her students to write quick paragraph assessments based on one or two brief excerpts from articles. She carefully designs her writing prompts so that students not only summarize the key concepts from the article and their prior learning, but also apply that knowledge to solve a critical-thinking problem that they have not encountered before. Some of the prompts even ask questions that have more than one possible answer, so the assessment can provoke further discussion and debate in the classroom. She grades these short-paragraph assessments efficiently and holistically based on rather simple criteria:

  • Did the student demonstrate sufficient mastery of what has been taught recently?
  • Was the student able to make a logical conclusion about the critical-thinking problem that was supported with evidence?

Based on the results of each assessment, she is able to make immediate adjustments to her instruction—and student groupings—the very next day.

As I was completing my own self-assessment for this assignment, I realized that an effective formative assessment should also contain a question or two asking students to evaluate their own progress. I don’t think it’s necessary to ask students to complete the exact same questions before and after their learning, as we have been asked to do this week. But I do think that students should be asked to reflect on how their thinking has changed over the course of a unit or an entire course term. Such assessments need not be lengthy; in fact, every lesson can easily be concluded by asking students to rate their own understanding on a scale from 1 to 5. Over longer time scales, I think students should be asked to write reflections on their learning goals every couple of weeks. Such writing can be a powerful learning experience for both the student and the teacher, who might gain valuable feedback for adjusting upcoming lessons and/or improving the course for the next year.

Strategies such as these can be employed in a traditional classroom; in fact, I doubt any of these ideas is really new. But in an online or blended learning environment, these formative assessment techniques become essential, because teachers and students might not be in the same classroom at the same time, or they might not even be in the same part of the world. Teachers of online courses must take formative assessment very seriously because the students are not physically present, so their body language, attitudes, and emotional states might be complete mysteries.

Technology, of course, opens up whole new categories of formative assessment techniques. As Horn and Staker (2012) described, formative assessment can be interwoven throughout learning by using adaptive instruction tools like Lexia (para. 6). In my school district, the printed math textbook has been completely replaced with GoMath and ThinkCentral, both of which are adaptive, interactive learning modules published by HMH. Students in these programs are constantly asked to solve problems independently, and the software makes instant decisions about whether a student needs remediation or intervention, or is ready to progress to the next step. These tools can be aggravating when they do not work, and I doubt they can replace the intuition and interpersonal relationships of a dedicated teacher. However, even the most skeptical, tradition-minded teacher must admit that technology is opening the door to many new assessment methods, and that traditional paper tests with multiple-choice questions are going the way of the dinosaur.

References

Horn, M., & Staker, H. (2012, November 14). Formative assessment is foundational to blended learning. THE Journal: Transforming Education Through Technology. Retrieved from https://thejournal.com/articles/2012/11/14/formative-assessment-is-foundational-to-blended-learning.aspx

Waters, J. K. (2012). Resolving the formative assessment catch-22. THE Journal: Transforming Education Through Technology. Retrieved from http://online.qmags.com/TJL0912?pg=20&mode=1#pg20&mode1