My Journey from Ed-Tech Novice to Expert

According to Herr (2007), one of the most important differences between an expert and a novice is that experts recognize patterns fluently, with little or no effort (para. 14). Whereas a novice might be able to mobilize good strategies to solve a problem, an expert can take a step back and assess whether the problem at hand is truly the most important one to solve.

Early in my teaching career, I once asked my principal if she could increase my copy budget, because I didn’t have enough funds to duplicate all of the lab instruction booklets I wanted for my science students. Because the principal was an expert, her response to my request was not a simple yes or no answer. Instead, she asked me a series of reflective questions about how my lab program was structured, in order to better judge whether my lab instruction packets were the best use of limited funds. Her motivation may have been, in part, to help me find ways to stay within my copy budget (after all, principals know that money doesn’t grow on trees), but she also did something that only experts can do: She changed the conversation from a relatively minor funding request into a much more valuable reflection on what my students were expected to learn from their labs, and how I expected them to demonstrate that learning.

Fast-forward a couple of decades to the present, and most of my colleagues regard me as an educational technology expert. One principal I recently worked for even refers to me as a technology guru. I don’t know if I can quite live up to that moniker, but as I reflect on my own learning journey through my current master’s degree program, I see that even experts can learn more. In one of my earliest posts on this blog, I wrote about technology’s potential to expand learning opportunities for students by motivating them to learn (Wise, 2016). While this is true, I didn’t mention another very important advantage of modern instructional technology: its capacity to make learning more accessible for students with a wide range of abilities and learning styles.

Over the past few months, I’ve learned a lot about assistive technologies and universal design. For many students with special needs, modern technology makes a tremendous difference–not only in their learning, but in their entire lives. One of my colleagues recently showed me several types of software and devices that allow students with moderate to severe disabilities to communicate; such technologies, she explained, give students access to language (K. Blevins, personal communication, March 8, 2017). I hadn’t thought about this idea much, mainly because I hadn’t needed to. As a general-education teacher, I’d only thought about assistive technologies and universal-design philosophy when I needed to adapt my curriculum because a student with special needs was enrolled in my class. Now my perspective has changed: Universal design isn’t just for students with special needs; it can benefit all students.

Of course, one of the consequences of being labeled an expert is that people turn to me for advice and answers to their trickiest problems. I hope that, over the next several months, I can gain a deep understanding of universal design, so that I can be useful not only to my special-education colleagues but also to general-education teachers. If I can manage to pull that off, then perhaps I will be one step closer to becoming a legitimate expert.

References

Herr, N. (2007). How experts differ from novices. Retrieved from The Sourcebook for Teaching Science web site: http://www.csun.edu/science/ref/reasoning/how-students-learn/2.html

Wise, B. (2016). Three ways electronic learning will be important to the classroom of the future [Web log post]. Retrieved from https://bwisetech.wordpress.com/2016/10/30/three-ways-electronic-learning-will-be-important-to-the-classroom-of-the-future/

iNACOL Standards Self-Assessment

iNACOL Standards for EDUU 625:

Standard E: The online teacher models, guides, and encourages legal, ethical, and safe behavior related to technology use.

My Score: 2 (Yes, I do this.) I have delivered Common Sense Media Nearpod lessons on digital citizenship, and try to model ethical online behavior at all times when teaching.

Standard F: The online teacher is cognizant of the diversity of student academic needs and incorporates accommodations into the online environment.

My Score: 1 (I do this infrequently.) Although I care very much about accommodating student needs in my teaching, I must admit that I have tended to approach accommodations from a reactive, rather than a proactive, stance.

Standard G: The online teacher demonstrates competencies in creating and implementing assessments in online learning environments in ways that ensure validity and reliability of the instruments and procedures.

My Score: 2 (Yes, I do this.) Both on my own and in collaboration with other teachers in a Professional Learning Community, I feel that designing valid and reliable electronic assessments is a strength for me.

Standard H: The online teacher develops and delivers assessments, projects, and assignments that meet standards-based learning goals and assesses learning progress by measuring student achievement of the learning goals.

My Score: 1 (I do this infrequently.) If you had asked me this question three or four years ago, I would have given myself a 2; however, it has been difficult to adjust my assessment practices to accurately measure the new Common Core standards, particularly their emphasis on higher-order thinking skills.

Standard I: The online teacher demonstrates competency in using data from assessments and other data sources to modify content and to guide student learning.

My Score: 1 (I do this infrequently.) Although I think my PLC partners and I are very effective at meaningful collaboration and genuine reflection on our practice, time is a serious limiting factor that prevents me from meeting this standard as often as I would like.

Reference

International Association for K-12 Online Learning. (2011). National standards for quality online teaching. Retrieved from http://www.inacol.org/resource/inacol-national-standards-for-quality-online-teaching-v2/

Digital Curation: A Tool for 21st Century Learning

My first teaching job was in a high school science department that had a shared office where each of the teachers had a desk with one or two file cabinets. In those days, most lesson plans were still on paper, although there was a computer in the room with a floppy disk drive and a modest connection to the recently invented World Wide Web. When I began working in that office, the department chair encouraged me to freely peruse and borrow lessons and resources from any of the other teachers, and then pointed out an empty file cabinet where I could begin my own collection of lessons that I would be willing to share.

I quickly learned which teachers could usually be trusted to have a well-organized drawer of carefully vetted lesson ideas, and which teachers simply stored 35 copies of every worksheet that came with their adopted textbook. The cabinets packed full of paper worksheets weren’t where I usually found the best lesson plans, assessments, and project ideas, and herein lies the distinction between curating and collecting. Curators try to share a relatively small number of the best resources, whereas collectors tend to stash everything they can get their hands on. In his video, Pant (2013) gave the example of the sommelier as a curator of fine wine, which I think is an excellent analogy. Twenty years ago, I never would have consulted a sommelier, except maybe to help pick the wine to be served at my wedding reception–but now, with a smartphone in my pocket, why wouldn’t I want to peek at a trusted wine-review web site when I’m deciding which bottle of wine to pick up at the grocery store?

Simply put, technology has made quality curation available to everyone, including teachers. Educators now have access to literally millions of their colleagues’ virtual file cabinets on the Internet. These resources aren’t all paper worksheets, either; a URL can point to almost any type of media, from movies to blog posts to interactive learning environments. CourseWorld, for example, is a curated set of 16,000 educational videos that have been selected and indexed by a staff of over 50 experts in the humanities and arts (Nelson, 2013). What makes this site powerful is that the videos are organized in a well-designed topical hierarchy that allows a teacher to quickly drill down, with just a few mouse clicks, to a small set of vetted videos on a specific topic.

As we have discussed earlier in this course, Universal Design for Learning (UDL) philosophy emphasizes providing multiple means of representation in teaching, so that students with varying abilities and learning styles will be able to succeed (CAST, 2011). A well-curated resource list should allow teachers to quickly access a variety of learning resources, preferably in a variety of formats, so that different types of learners can be supported. Curation itself can also be an excellent authentic assessment task for students, because they must use higher-level thinking skills as they evaluate which resources to collect into a portfolio. This is not just a cute way to structure a hands-on lesson; curation is quickly becoming a 21st century job skill, as more and more career fields depend on web-based resources for communication, training, design, and collaboration. Curation even has the power to open whole new types of learning for students. Sheninger (2013) described how high-school students used MIT OpenCourseWare, a carefully curated set of coding lessons that the students could freely access, to teach themselves video-game programming. Perhaps this is the most exciting possibility for digital curation: people are free to use curated resources to quickly and efficiently teach themselves virtually any subject.

Of course, digital curation does open up ethical and legal issues. Some of the best educational content on the Internet has been produced by people who have invested significant amounts of money and time. We teachers have liberal fair-use rights under copyright law, but that does not entitle us to help ourselves to expensive resources for free. A teacher who violates terms-of-use restrictions, even with the best of intentions, can expose himself or herself (and the school district) to significant financial and legal liability. Even more importantly, we teachers have a responsibility to keep our students safe online. Many online learning resources are intended for older children or adults, and don’t feature the privacy protections or content filters that should be in place for younger children. Here is where curation is especially important: teachers should be able to quickly filter out web sites and web-based learning tools that aren’t appropriate for students at their grade level. In fact, this may be the part of the teaching role that won’t change, even by the end of the 21st century. No matter how much knowledge becomes available on the World Wide Web, and no matter how well that information is curated and organized for students, we will still need human teachers to guide students safely along their learning journey.

References

CAST. (2011). UDL at a glance [Video file]. Retrieved from http://www.udlcenter.org/resource_library/videos/udlcenter/udl

Nelson, S. (2013, September 24). CourseWorld curates repository of free arts and humanities media [Web log post]. Retrieved from THE Journal web site: https://thejournal.com/articles/2013/09/24/courseworld-launches-free-liberal-arts-video-platform.aspx#UMJwHJzTfQFaL6t5.99

Pant, A. (2013, October 7). Art of curation in education – course and instructor introduction [Video file]. Retrieved from https://youtu.be/s5gpOQjuPh0

Sheninger, E. (2013, March 22). OCW supports independent study for N.J. high school students (via MIT News) [Web log post]. Retrieved from https://mitopencourseware.wordpress.com/2013/04/02/ocw-supports-independent-study-for-n-j-high-school-students-via-mit-news/

Immersive Learning: The Teacher Is Still the Teacher

The creators of Scientopolis have built an immersive, interactive world where students can learn science by controlling virtual avatars in a medieval town (Immersive Education, 2012). As students make their way through the learning activity, they use data from a variety of sources, including information provided by the simulation itself, which they can analyze using built-in data-table and graph generators (Immersive Education, 2012). Of course, even though the students’ avatars are trapped in a virtual world of the past, the students themselves have access to an internet-connected computer, so they can also take full advantage of the research potential of the devices they have at hand.

Ideally, a teacher should structure a learning activity using this software in a way that requires students to synthesize information from a variety of sources. In the Scientopolis weather scenario, for example, students must devise a practical solution for a multi-year drought based on simulation data and their own understanding of meteorology from their science lessons (Immersive Education, 2012). If I were using this tool in my own science classroom, I would try to present the problem as a complex one that has more than one plausible answer; that way, students would be forced to make difficult decisions based on careful cost-benefit analysis. This unit on drought would be particularly relevant to my students, who live in California’s Central Valley, where the entire population is quite familiar with the challenges a community faces when water is in short supply.

An immersive and complex learning experience should contain assessments that are also immersive and complex, and formative assessment is crucial in such a learning activity. It may be tempting for a teacher to take a back-seat role while students are working independently in their virtual worlds, but that would be a mistake. Even when students are learning by doing in an online environment, it is still the teacher’s responsibility to make sure they are on track toward meeting the project’s predetermined learning goals. In the specific case of the Scientopolis module, a teacher might use a variety of periodic checks for understanding, including quick surveys at the end of each daily lesson, or perhaps a longer writing prompt that asks the student to summarize progress toward the objectives. Teachers should also not forget to check in, face-to-face, with students on a regular basis.

These formative assessments should then be used to make any necessary adjustments as the project unfolds. A teacher may discover, for example, that the project timeline needs to be adjusted, or that struggling students need strategic hints in order to catch up. Teachers should also have enrichment activities ready to assign in case advanced students complete their projects early.

When it comes to summative assessment, teachers should not rely solely on multiple-choice or similar objective tests when students complete an immersive learning experience. After all, much of what the students learn would be impossible to measure with multiple-choice test questions anyway. Ideally, students should be asked to demonstrate their learning by completing a practical project. In the drought example mentioned above, for instance, students might prepare a narrated multimedia presentation about climate change and drought for a real town hall meeting. Such an assessment would require a carefully constructed rubric to ensure that students clearly understand the teacher’s expectations before they begin work. As Palloff and Pratt (2009) explained, rubrics can also help minimize the chance of conflict and disagreement about project grading (p. 70). Thus, through careful design, a teacher might use an immersive resource like Scientopolis to teach valuable critical-thinking skills while motivating students to achieve at higher levels.

References

Immersive Education. (2012, June 12). iED 2012 save science [Video file]. Retrieved from https://youtu.be/pgvDKXkbCMo

Palloff, R. M., & Pratt, K. (2009). Assessing the online learner: Resources and strategies for faculty. San Francisco, CA: Jossey-Bass.

Rubrics: An Essential Tool for 21st Century Learning

I’m not sure why, but I don’t remember many rubrics being used when I was in high school and college. My high-school history teacher, for example, required us to write a five-paragraph essay each week for the entire school year. He was a notoriously difficult grader and always returned our essays with plenty of comments scribbled in red ink. Even though he was a very dedicated teacher and his feedback was useful, it was always a bit of a guessing game to discern what he expected from our weekly essays. Rubrics would have helped us tremendously even then, back in the 20th century, because they would have removed much of the guesswork from our writing.

In the 21st century, of course, rubrics are even more important because students have so much more creative freedom in how they demonstrate their learning. When I was writing my weekly history essays 25 years ago, I relied on just one or two sources of information–typically a textbook chapter plus perhaps a photocopied article. Students in a high school history class today would be expected to do much more than write the same five-paragraph essay each week, because modern web tools allow them to create more authentic projects. As the University of Colorado Denver (2006) stated in its online rubric tutorial, rubrics can provide clear descriptions of teachers’ expectations across a broad range of assignment types, from written reports to experiments, design tasks, and other real-world demonstrations of learning. In fact, I can imagine a 21st century history teacher giving students a free-form assignment on a topic–say, the Civil War. Even if students are allowed to select the format of their Civil War project from a long list of options (oral report, role-playing skit, video clip, web site, etc.), a savvy teacher might be able to use a single rubric that covers all of these options.

Another benefit of rubrics to the 21st century learner is that they force assessment to be criterion-referenced, rather than norm-referenced (University of Colorado Denver, 2006). Without clearly stated learning objectives, it can be easy for teachers to slip into a bell-curve mentality. Virtually all of my college math and science courses in the 1990s were graded on a curve, and most of the professors in those classes based our grades on norm-referenced, multiple-choice tests. For those of us who wanted to earn an “A,” it wasn’t enough to complete all of our work on time and at a high level of quality; we also had to look over our shoulders and make sure our exam scores were always one or two standard deviations above the mean. In these classes, I remember students would often ask professors what would happen if every student in the class were a genius who did terrific work–could everyone in the course receive an “A” grade? Rubrics help break this sad habit of sorting students by keeping the focus where it should be: on whether or not students have mastered the essential learning objectives. In a perfect world, a student should receive the same grade for the same learning, regardless of who the teacher is or who else happens to be enrolled in the same class section. In this sense, well-crafted rubrics can be an important way to ensure equity of grading.
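To make the contrast concrete, here is a minimal sketch in Python (with made-up scores and cut points, not any real grading policy) of the difference between curving grades against the class mean and grading against fixed, criterion-referenced thresholds:

    from statistics import mean, stdev

    scores = [68, 72, 75, 80, 83, 88, 91, 95]  # hypothetical exam scores

    def norm_referenced_grade(score, all_scores):
        """Grade by position relative to the class: an "A" requires scoring
        roughly one standard deviation above the mean, no matter how well
        the class as a whole performed."""
        m, s = mean(all_scores), stdev(all_scores)
        if score >= m + s:
            return "A"
        elif score >= m:
            return "B"
        return "C"

    def criterion_referenced_grade(rubric_points, points_possible=20):
        """Grade against fixed rubric criteria: every student who masters the
        objectives can earn an "A," regardless of how classmates performed."""
        pct = rubric_points / points_possible
        if pct >= 0.9:
            return "A"
        elif pct >= 0.8:
            return "B"
        return "C"

    for s in scores:
        print(s, norm_referenced_grade(s, scores))
    print(criterion_referenced_grade(19))  # mastery earns an "A" for anyone

Under the curve, only the top scorer or two can earn an “A” no matter how strong the class is; under the rubric, everyone who meets the criteria can.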

Rubrics have even more power as learning tools when they are designed and scored by collaborative teams of teachers. The University of Colorado Denver (2006) suggested that the reliability of a rubric can be improved by having multiple graders score an assessment against the same rubric. In recent years, I have been fortunate enough to participate in such a process. Last year, for example, the high school where I taught assigned two campus-wide writing benchmarks, and we graded these essays using our common district writing rubric. During the scoring sessions, instructional coaches from the district office were on hand to help us calibrate our scoring with sample papers, and we were able to ask for one another’s help when we had to make difficult judgment calls. Again, this was a great opportunity for rubrics to enhance 21st century learning, as our students’ papers and rubrics were shared electronically, which streamlined the process significantly. Student work was also electronically screened for plagiarism, further protecting the integrity of the assessment. Activities like this are time-consuming, of course, but whenever teams of teachers use real-time common assessment data to improve their instruction, that is a golden opportunity to improve learning that shouldn’t be passed up.
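As a rough illustration of what calibration between graders might look like in data terms, here is a small sketch (with hypothetical rubric scores, not our district’s actual results) that computes the simple percent agreement between two graders scoring the same set of essays; a low agreement rate would be a signal that the team needs to re-norm with sample papers:

    # Hypothetical rubric scores (1-4) assigned by two graders to the same ten essays.
    grader_a = [3, 4, 2, 3, 3, 4, 1, 2, 3, 4]
    grader_b = [3, 4, 2, 2, 3, 4, 1, 3, 3, 4]

    def percent_agreement(scores_a, scores_b):
        """Fraction of essays on which both graders assigned the same rubric score."""
        matches = sum(1 for a, b in zip(scores_a, scores_b) if a == b)
        return matches / len(scores_a)

    print(f"Exact agreement: {percent_agreement(grader_a, grader_b):.0%}")  # 80%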

Reference

University of Colorado Denver. (2006). Creating a rubric: An online tutorial for faculty. Retrieved from http://www.ucdenver.edu/faculty_staff/faculty/center-for-faculty-development/Documents/Tutorials/Rubrics/index.htm

Academic Integrity & Online Assessment

One of my favorite ways to support academic integrity is to ask students questions that don’t have a simple, single answer. Palloff and Pratt (2009) suggested that plagiarism is more difficult when students must solve real-life problems, because they might not be able to find resources that fit the unique local context of such an assignment (p. 46). The midterm assignment I just submitted this week was a good example of this strategy, because we were asked to design a presentation that we might use with our real-life colleagues. On this assignment, it would have been difficult for me to copy someone else’s answers, because my local school district and community are different from those of my classmates. My presentation is designed with a unique audience in mind, so it’s unlikely that another student’s responses would be fully applicable to my local context, and an observant professor might note inconsistencies if a student tried to cheat in this way. Even if I were the sort of student who cheated (and I am not!), the assignment’s creative possibilities and clear relevance might persuade me to work honestly.

In the specific case of our midterm this week, the fusion of two different media sources (YouTube and Prezi) helps guard against plagiarism, because the time stamps and account information of both sources can be compared. It might be possible for a crafty plagiarist to falsify such information on either a Prezi or a YouTube video, but creating matching false details on both platforms would be more difficult.

I think dishonesty could be further discouraged by adding a webcam requirement to the screencast videos. I elected to add a webcam to my assignment anyway, mainly because I wanted to gain some practice with this software feature (Wise, 2017). Because I show my face and record my own voice, my professor has an opportunity to compare my appearance, voice, and (perhaps most importantly) nonverbal cues and facial expressions with my appearances in other videos and webinars. Many online assessment services now photograph or record video of the student during testing for the same reason: to prevent impersonation (Pearson Education, 2017). Also, if my video narrative doesn’t match the detail, tone, or syntax of my written report, that could be a red flag that at least some portions of my project were plagiarized.

The integrity of this assignment could be bolstered even further by requiring students to present their Prezis in a synchronous online webinar using a tool like Adobe Connect. The professor could lead an unrehearsed question-and-answer discussion before, during, or after the presentation; it would be difficult for a plagiarist to answer detailed questions effectively in real time.

If authentic context is a priority, perhaps a student could be required to show his or her Prezi to one or more real-life colleagues, who would then submit separate evaluations directly to the professor. Last year, for example, I had to submit a portfolio and video clip as part of my Google Certified Trainer application (Google for Education, 2017). In addition, I had to provide Google with the names and contact information of three people whom I had trained within the past year. These three people had to submit separate evaluations of my work directly to Google via their work Google accounts. It would have been very difficult for me to cheat on this portion of my application, because I would have had to hack into the preexisting Google accounts of three separate people with whom I work. To be honest, planning and executing a successful training session would be less labor-intensive than cheating on such an assessment!

References

Google for Education. (2017). Google for education: Certified trainer program. Retrieved from https://edutrainingcenter.withgoogle.com/certification_trainer 

Palloff, R. M., & Pratt, K. (2009). Assessing the online learner: Resources and strategies for faculty. San Francisco, CA: Jossey-Bass.

Pearson Education. (2017). Deliver your own exam: Testing outside a test center. Retrieved from https://home.pearsonvue.com/Test-Owner/Deliver-your-exam/Testing-outside-a-test-center.aspx#OP   

Wise, B. (2017). Khan Academy: A rationale for blended learning at the high school level [Prezi file]. Retrieved from http://prezi.com/soll5du3vxny/?utm_campaign=share&utm_medium=copy

Universal Design for Learning (UDL): It’s About the Students

At our live meeting last week, my partner mostly affirmed the modifications I had made to my AP Chemistry lab design project. Because my partner didn’t have many suggestions for improving the paper itself, I focused on improving my report’s structure and clarity rather than adding new ideas. If I could revise my paper a second time, however, I would add a few words about accessibility, especially after what we have learned in class over the past week. After all, accessibility isn’t just a good idea; it’s the law! According to Section 504, for example, students with disabilities must be given opportunities to achieve the same results and benefits as students without disabilities (Smith, 2017). Of the several modifications I proposed for my lesson, two were particularly relevant to the Universal Design for Learning (UDL) philosophy.

First, I decided to let my students use the internet to research possible experimental designs before writing their own procedures, rather than prohibiting such research. The original lesson, which was given to me by a College Board AP Summer Institute trainer, contained this prohibition mainly as a guard against plagiarism. I wrote in my paper about how this modification would parallel the changing role of the teacher in the 21st century classroom, from the sage on the stage to the guide on the side. My original paper did not mention how this modification would also provide multiple means of engagement, one of the three primary principles of the UDL Guidelines (CAST, 2015). If I could revise my paper a second time, I would add a section describing how students with disabilities or sensory impairments might deepen their involvement in the project if given the opportunity to find relevant online video clips, visual aids, and blog posts, especially if I took the time to locate, vet, and share a few of these resources with my students. The original assignment offered no support of this kind, and I must admit that, in the past, a student with a disability in my AP Chemistry course was likely to take a passive role while his or her lab partners did most of the thinking, discussing, and decision-making about how to design the group’s experiment.

Second, I decided to change the post-lab assessment to incorporate peer editing and feedback via electronic comments. Again, this change reflected an evolution in the teaching role, because I wanted to open up the revision process so that the teacher was not the only person providing feedback to the learner. But I’m afraid I missed the mark with regard to UDL here as well, because I was only imagining students providing typed comments to one another. The third UDL principle, multiple means of action and expression, emphasizes the value of allowing students to express their knowledge in different ways (CAST, 2015). One refinement would be to allow feedback in the form of audio clips. I recently learned about Kaizena, a web-based tool that allows students and teachers to leave audio feedback, opening the door for students with disabilities or impairments to communicate more effectively about their writing (Carey, 2015). In a fully online classroom, this sort of interactive peer reflection could also be facilitated through an online hangout, similar to our live meeting earlier this week. I suspect that allowing audio comments, whether asynchronous or synchronous, would be helpful to all students, not just those with disabilities or impairments. This is perhaps the true genius of the UDL guidelines: inclusive design isn’t just a way to address compliance for specific disabilities, but rather a way to increase accessibility for all people (CAST, 2011). In the end, we educators should remember that good lesson design isn’t just about the teacher. It’s also about the student.

References

Carey, J. (2015). Leave voice comments in Google Docs with Kaizena [Web log post]. EdTechTeacher. Retrieved from http://edtechteacher.org/kaizena-jen-carey/

CAST. (2011). UDL at a glance [Video file]. Retrieved from http://www.udlcenter.org/resource_library/videos/udlcenter/udl 

CAST. (2015). About universal design for learning. Retrieved from http://www.cast.org/our-work/about-udl.html#.WI4edhsrLD4

Smith, T. E. C. (2017). Section 504, the ADA, and public schools. LD Online: The Educators’ Guide to Learning Disabilities and ADHD. Retrieved from http://www.ldonline.org/article/6108/

How Should Data Be Used in the 21st Century Classroom?

Image source: Pixabay

How should data be used in the 21st century classroom? This is the million-dollar question (or, to be more precise, the multi-billion-dollar question) that faces educators today. Bill Gates has demonstrated that data-driven philanthropy can help mobilize limited resources to solve persistent human problems. Modern data technologies, for example, have helped alleviate some of the human suffering caused by infectious diseases and famine in Africa (Goldstein, 2013, para. 3).

In the case of America’s education system, I see a lot of potential for data to help, because schools are highly complex systems with many interacting variables. I was trained as a biologist, and the complexity of our education system is akin to that of the biological world. Because there are so many species in so many habitats on our planet, it took scientists several decades to make enough sense of the flood of available data to develop a coherent theory–natural selection–to explain it all. A critical breakthrough occurred when early biologists developed a standardized system of classifying species, so that they could at least agree on what to call each species and how to group species into categories, using measurable data that could give insights into their evolutionary relationships.

I see a parallel development in American education today. Our students come from a fantastic diversity of cultural and socioeconomic backgrounds, with widely different learning styles and abilities, and they are taught in a dizzying variety of school settings. Meaningful reform and improvement cannot occur until educators come to consensus on which curriculum standards to adopt and on how student learning of those standards should be measured. The widespread adoption of the Common Core standards has been a huge step forward in this regard, but in a perfect world, student learning would be assessed consistently as well, so that apples-to-apples comparisons could be made. I hope that the Smarter Balanced assessments (SBAC) will provide some much-needed clarity in how we measure student learning. By employing technology-enhanced question types, these assessments do a good job of measuring levels of understanding that weren’t easily captured by traditional multiple-choice items (Smarter Balanced Assessment Consortium, n.d.). However, one of the tricky things about the Smarter Balanced tests is figuring out how individual test questions relate to the standards and to the claims, which are the big-picture learning goals upon which the state bases its student and school reports.

Trying to solve the puzzle of standards mapping on the SBAC is no small task; what’s more, many of the standards map to more than one claim. As a teacher, I want to harness the best data analysis programs to give me practical advice about how to modify my instruction to best meet the needs of each of my students. I don’t want to learn all of the intricacies of the data analysis myself, because that would take valuable time I would much rather spend crafting good lessons and working with my students. If I were in charge of a school campus or district, I would want to use a carefully vetted consulting firm, such as Learning Forward, to analyze the wealth of available data. As Eric Brooks described in his video clip, the best insights for school leaders come from the skilled analysis of multiple sources of data, including non-testing data like attitude surveys (Learning Forward, 2012). Such data analysis might help teachers adjust not only their curricula and assignments, but also their methods and attitudes, in ways that would enhance student learning.
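To make the mapping puzzle concrete, here is a minimal sketch in Python (with invented standard codes and simplified claim names, not the official SBAC blueprint) of the kind of many-to-many lookup a teacher or data system has to untangle when a single test item informs several reporting claims:

    # Hypothetical mapping: each standard can support more than one reporting claim.
    standard_to_claims = {
        "RL.11-12.1": ["Reading"],
        "W.11-12.1":  ["Writing", "Research/Inquiry"],
        "SL.11-12.4": ["Listening", "Research/Inquiry"],
    }

    def claims_for_item(standards_assessed):
        """Return every claim touched by a test item, given the standards it assesses."""
        claims = set()
        for std in standards_assessed:
            claims.update(standard_to_claims.get(std, []))
        return sorted(claims)

    # One item that assesses two standards ends up informing three different claims.
    print(claims_for_item(["W.11-12.1", "SL.11-12.4"]))
    # ['Listening', 'Research/Inquiry', 'Writing']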

Teachers, administrators, and parents might feel uneasy about trusting a hidden computer algorithm to inform their practice, as well they should (Modern School, 2013). The motives of for-profit data analysis companies must always be monitored, because schools have a sacred responsibility to protect the safety and privacy of their students. What’s more, we have to be assured that data analysis algorithms are culturally sensitive, so that we don’t make educational decisions based on data that were produced by culturally biased tests. But the potential benefits of using data to inform decision making in schools cannot be overstated. If computers can help us successfully land rovers on Mars or immunize thousands of children in Africa, perhaps they can help us better teach our students too.

References

Goldstein, D. (2013, January 31). Can big data save American schools? Bill Gates is betting yes. The Atlantic. Retrieved from http://www.theatlantic.com/business/archive/2013/01/can-big-data-save-american-schools-bill-gates-is-betting-on-yes/272719/

Learning Forward. (2012, April 6). Data standard [Video file]. Retrieved from https://youtu.be/rvfp-5hCeMk

Modern School. (2013, March 12). Is Bill Gates data mining your children? [Web log post]. Retrieved from http://modeducation.blogspot.com/2013/03/is-bill-gates-data-mining-your-children.html

Smarter Balanced Assessment Consortium. (n.d.). Smarter assessments. Retrieved from http://www.smarterbalanced.org/assessments/#interim

Formative Assessment Matters More Now

Waters (2012) suggested that every minute a teacher spends on formative assessment is a minute lost from instruction (p. 8), but I doubt that formative assessment and instruction are really a zero-sum game. As my master teacher told me over 20 years ago, a good assessment should be a learning experience too. The trick, I think, is for the teacher to break out of the comfortable routine of measuring all learning with quick and easy multiple-choice tests (that is, the sort of test questions that have single, predetermined correct answers). Designing a good formative assessment takes time, to be sure, so why not use that time to students’ advantage by incorporating a thought-provoking article or short video clip into the assessment?

Image Source: Pixabay

A good formative assessment should require students to formulate ideas that extend beyond the context(s) in which the information was taught. A few years ago, I taught a high-school anatomy course with a partner who is a formative-assessment guru. She requires her students to write quick paragraph assessments based on one or two brief excerpts from articles. She carefully designs her writing prompts so that students not only summarize the key concepts from the article and their prior learning, but also apply that knowledge to solve a critical-thinking problem they have not encountered before. Some of the prompts even ask questions that have more than one possible answer, so the assessment can provoke further discussion and debate in the classroom. She grades these short-paragraph assessments efficiently and holistically, based on rather simple criteria:

  • Did the student demonstrate sufficient mastery of what has been taught recently?
  • Was the student able to make a logical conclusion about the critical-thinking problem that was supported with evidence?

Based on the results of each assessment, she is able to make immediate adjustments to her instruction—and student groupings—the very next day.

As I was completing my own self-assessment for this assignment, I realized that an effective formative assessment should also contain a question or two that asks students to gauge their own progress. I don’t think it’s necessary to ask students to complete the exact same questions before and after their learning, as we have been asked to do this week, but I do think students should be asked to reflect on how their thinking has changed over the course of a unit or an entire course term. Such assessments need not be lengthy; in fact, every lesson can easily be concluded by asking students to rate their own understanding on a scale from 1 to 5. Over longer time scales, I think students should be asked to write reflections on their learning goals every couple of weeks. Such writing can be a powerful learning experience for both the student and the teacher, who might gain valuable feedback that can be used to adjust upcoming lessons and/or improve the course for the next year.

Strategies such as these can be employed in a traditional classroom; in fact, I doubt any of these ideas is really new. But in an online or blended learning environment, these formative assessment techniques become essential, because teachers and students might not be in the same classroom at the same time, or they might not even be in the same part of the world. Teachers of online courses must take formative assessment very seriously because the students are not physically present, so their body language, attitudes, and emotional states might be complete mysteries.

Technology, of course, opens up whole new categories of possible formative assessment techniques. As Horn and Staker (2012) described, formative assessment can be constantly interwoven throughout learning by using adaptive instruction tools like Lexia (para. 6). In my school district, the printed math textbook has been completely replaced with GoMath and ThinkCentral, both of which are adaptive, interactive learning modules published by HMH. Students in such programs are constantly asked to solve problems independently, and the software makes instant decisions about what comes next: whether a student needs remediation or intervention, or is ready to progress to new material. These tech tools are sometimes aggravating when they do not work, and I doubt they can replace the intuition and interpersonal relationships of a dedicated teacher. However, even the most skeptical, tradition-minded teacher must admit that technology is opening the door to many new assessment methods, and that traditional paper tests with multiple-choice questions are going the way of the dinosaur.
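I don’t know the proprietary logic inside tools like GoMath or ThinkCentral, but the basic idea of an adaptive “next step” decision can be sketched in a few lines of Python (the thresholds and responses here are invented for illustration only):

    def next_step(recent_answers, mastery=0.8, struggling=0.5):
        """Decide what an adaptive program might serve next, based on the fraction
        of recent practice problems a student answered correctly."""
        accuracy = sum(recent_answers) / len(recent_answers)
        if accuracy >= mastery:
            return "advance to the next lesson"
        elif accuracy >= struggling:
            return "assign more practice at the current level"
        return "flag for teacher intervention or remediation"

    print(next_step([1, 1, 1, 0, 1]))  # 80% correct -> advance
    print(next_step([0, 1, 0, 0, 1]))  # 40% correct -> intervention

The real value for formative assessment comes when the teacher, not just the software, reviews these decisions and the data behind them.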

References

Horn, M., & Staker, H. (2012, November 14). Formative assessment is foundational to blended learning. THE Journal: Transforming Education Through Technology. Retrieved from https://thejournal.com/articles/2012/11/14/formative-assessment-is-foundational-to-blended-learning.aspx

Waters, J. K. (2012). Resolving the formative assessment catch-22. THE Journal: Transforming Education Through Technology. Retrieved from http://online.qmags.com/TJL0912?pg=20&mode=1#pg20&mode1

MOOCs for K-12: Maybe Not-So-Massive


As a former high school science teacher, I can appreciate some of the potential benefits of transplanting massive open online courses (MOOCs) from the university world into the K-12 environment. May (2013) cautioned that MOOCs are still relatively new innovations with many kinks to be worked out, but the popularity of massive online courses indicates that, in one form or another, they are probably here to stay at the college level.

A common frustration in many traditional public high schools is that some advanced and/or elective courses attract a passionate following among only a small number of students. Throughout my career, for example, I have always been ready, willing, and able to teach my terrific AP Chemistry course, which I regularly advertise to anyone who will listen. Despite my best efforts, however, AP Chemistry is a really, really hard course, and in many years I’ve only managed to recruit five or ten passionate, chemistry-loving students to sign up. I know advanced elective teachers in other subject areas who face similar enrollment challenges. I suspect that many, if not most, college-prep juniors and seniors miss out on the opportunity to take at least one AP course each year due to a lack of demand among their peers. A carefully designed system of MOOCs might help alleviate this problem. Given the right resources and enough time, for example, I might move my AP Chemistry curriculum into a MOOC format and teach it simultaneously to a mixture of live students in my classroom and remote students who log on from other places. Just five years ago, such a solution might have proved impractical or impossibly expensive, but now that many schools are implementing 1:1 wireless internet programs, such a system could probably be launched with just a modest investment of time and equipment.

Of course, MOOCs are far from perfect at the university level, so one must wonder whether they would work in high schools. Locke (2013) wrote, for example, that MOOCs are plagued with both high dropout rates and rampant cheating (para. 4); in case you haven’t noticed, high schools also have significant problems in these areas. Also, as Stark and Lewin (2013) pointed out, MOOCs are typically free and not-for-credit, which I fear fosters a culture of high innovation but loosened expectations. Public schools must hold to the highest curriculum and instruction standards because what we do is so important. Most of all, any online learning program designed for children must demonstrate that it can mitigate the loss of the face-to-face social interaction between students and teachers that is essential to learning. So in spite of their potential, I’m afraid MOOCs won’t pass muster for our children unless a laser-sharp focus on learning can be maintained.

One of my biggest concerns with MOOCs is in the area of assessment. I’m concerned not only about student cheating, but also about an issue described by Locke (2013): the difficulty online students have in asking their teachers questions. Most importantly, I’m not sure a MOOC teacher would be able to use formative assessment results to modify and improve instruction. Suppose I enrolled a few hundred students in my MOOC-ized AP Chemistry course, for example. Would I be able to meet the goal of employing a quick check for understanding every five minutes or so? Would I be able to adjust my lesson delivery in real time? Those hundreds of students would most likely be viewing online video recordings of my lessons at different times, so even if I did embed frequent interactive checks for understanding, I would be unlikely to review the results of those checks until days or weeks had passed–if at all.

In spite of the many concerns I have with MOOCs, I don’t think problems like these are insurmountable. Especially as technology and accessibility continue to improve, MOOCs might soon find a niche in K-12 education. I keep thinking about those little groups of five to ten disappointed chemistry students who haven’t been able to take my AP Chemistry class over many of the past 20 years. I’ll bet that thousands of high schools across America have similar small groups of students who would have taken an online advanced elective course if one had been available. While far from perfect, MOOC versions of these courses would certainly be better than nothing!

References

Locke, M. (2013). MOOC: Will these four letters change K-12? Retrieved from http://www.scholastic.com/browse/article.jsp?id=3758098

May, G. S. (2013, September 10). The great MOOC experiment. Inside Higher Ed. Retrieved from https://www.insidehighered.com/views/2013/09/10/essay-context-behind-mooc-experiments

Stark, S., & Lewin, T. (2013, January 8). Welcome to the brave new world of MOOCs (massive open online courses) [Video file]. The New York Times. Retrieved from https://youtu.be/KqQNvmQH_YM?list=PLzA4KUaZQgse0MKXRlJN1GxpO84KBJa6E

Note: I created the animated GIF on this post using Google Drawing and Screen To Gif for Windows. Original image source: https://pixabay.com/en/teacher-class-classroom-students-44735/