When my technology failed me in two classes in a row, I gained a new-to-me understanding of exactly why new technology has been so slow to become ubiquitous in classrooms, and why some perfectly intelligent people have dug in their heels and refused to jump on the computer bandwagon.
Some quick thoughts that have probably always been obvious to everyone but me:
- Most education organizations don’t really have the funds to pay for top-of-the-line gadgetry, back-up versions of said gadgetry, or adequate staff devoted to keeping said gadgetry functioning. So it’s likely to go wrong, and when it does, we’re unlikely to have great infrastructure to get it going again.
- Spending time planning a lesson and then having to completely throw it out the window and improvise on the spot, especially repeatedly, is frustrating.
- Planning a “just in case” back-up lesson for every hour of intended computer-based instruction would take a ridiculous amount of prep time.
- Teachers don’t like feeling helpless when their students patiently watch them fiddle with non-responsive machines during class time.
Again, I think these points are not earth-shattering. But in a way they were new to me – I’m otherwise pretty into using digital technology to expand learning and social interaction.
Even after my sudden flash of understanding, I still think the decision to flat-out reject digital technology grossly underestimates its benefits and potential. But I also think many of us who embrace it grossly underestimate the amount of crap involved in making it work.