and I'm all out of bubble gum…

This write-up is an articulation of an idea that I first stumbled across last spring, in conversation with my colleagues around technology and academic integrity. I am enormously grateful to Lynette Sumpter and Ned Sherrill for pushing me and supporting me and challenging me and engaging me in the discussion that led me to this idea.

Let me preface this by noting that I am not a history teacher. Or a writing instructor. Or a teacher of social studies or literature.

But, I have played one on TV. And I have spent the last few years faking it in various settings (for example, teaching Third Form Seminar at St. Grottlesex, a study skills class thinly disguised as a history class — or my current role teaching Media Studies, which is on the cusp of becoming Mark Taylor and teaching “liberal arts” as a discipline unto itself).

Fear and Loathing in the Classroom

I have been struck by how we teach our students to take part in academic discourse. I fear that we are teaching them to be fearful, rather than confident, outspoken and — above all — well-spoken. When I work with ninth graders writing their “big research paper”, it is clear that they are already viewing the process with trepidation. Not because they dread the work itself (in fact, they often become absorbed in their research and legitimately excited about their chosen topic). They are terrified that they will do something that causes me to nail them to the wall for plagiarism. They are absolutely terrified that they will screw up their citations, botch their bibliography, accidentally confuse a quotation with a paraphrase, or in some other way incur the wrath of the gods of academic integrity.

This is ridiculous. And they come to me this way, already scared.

Why Cite Sources At All?

Let’s take a step back from the panicking ninth graders.

Let’s consider how we live our lives, as adults, day to day. Consider, if you will, a conversation with your friends. Better still, a conversation with my friends (they’re loads of fun): we’re talking about something that we have to do that feels ironically poorly suited to our temperament and someone around the table mutters “Snakes. Why did it have to be snakes?” Yes, we’re pathetic tools of popular culture. But we’re also making a reference to a known source. A reference that everyone around the table recognizes and appreciates.

In fact, by making that reference, we’re using a shorthand phrase to conjure up a whole idea. At the most crass level, we’re probably not going to swing into the pharaohs’ catacombs on our bullwhip, blazing torch in hand, Nazis on our tails. But we’ve connected our plight to Indy’s, in our friends’ minds. Usually, among my friends, this then generates snickers. I am not someone who would plausibly swing on the end of a bullwhip.

This, at its core, is the purpose of citing another author: to make use of that author’s ideas in support of your own. To reference a whole, complex argument, made elsewhere, by selecting a short, notable phrase that stands in for the more complex idea. When we refer to “trickle-down economics”, we’re using an opaque term to describe a whole theory of how the world works — whether or not we agree with it, this provides a shared reference point between us and our interlocutors. By providing this shared reference point, we are providing an anchor on which to build our own arguments, share our own ideas, develop our own creations — in a way that will be more easily understood. We don’t have to reinvent the wheel every time.

When we cite other sources and authors, we select sources that lend credibility to our own point of view, by dint of the sources’ own credibility. Our arguments are stronger when we cite well-known and well-respected sources. Our arguments are stronger when we can clearly connect them to sources that are well-founded, or that are themselves connected to well-founded sources.

Thus, if we want our argument to have credibility, we want to connect our argument to other well-respected sources. We want to make those connections explicit, rather than making that argument in a vacuum. We want to accessorize our argument with the respect and credibility already carried by our sources.

So, What’s the Big Deal?

The big deal is that this isn’t how or why I was taught to cite my sources in academic papers. And it isn’t how I’ve seen students being taught to cite their sources. I have seen only fear and intimidation: “if you do not cite your sources, you will be penalized for plagiarism. If you are penalized for plagiarism, you won’t get into the college of your choice or you’ll be expelled from the college of your choice or you might become Vice-President of the United States.” <shiver> I don’t see anyone making positive arguments for academic citation when we are introducing it to our students. We only threaten the students with punishment.

I don’t know about you, but I tend not to really enjoy doing things that I’m doing out of fear. I often dig in my heels (I’m an ornery S.O.B.). I certainly don’t try and do what I’m doing better because I’ve been intimidated into doing it (in fact, I’ll probably passive aggressively do it worse). And it takes me a long time (20+ years) to find any real purpose or excitement in something that I have been bullied into doing. I understand the obligation to do it. I do it. But I cuss about it. And drag my feet.

Technology has Changed Citation

At the start of every school year, one or the other of my colleagues will forward around the ever-growing list of things that our new students won’t be familiar with (the original Star Wars, Indiana Jones movies that aren’t execrable, tape cassettes, a time before white earbuds, etc.). What bears more than a little examination are the things that our students are familiar with.

I have a whole digression about the issue of digital natives, and whether or not it’s even a defensible opinion, let alone a fact of life. Let us at least stipulate that our students have a different relationship with technology and media than we did growing up, and that that relationship is increasingly facilitated by technology.

How about a bizarre mash-up video that combines clips from television shows that are, at best, unfamiliar to us with music we think is terrible? How about a collage of corporate slogans standing in for a personal statement? What are these moron kids up to now? Isn’t this entirely beneath our dignity and station to even pay lip service to the activity that’s going on here? Well, this is the students having a conversation. Their conversations are moving out of “meat space” and online (note the rise of concern about cyberbullying and sexting — these aren’t new phenomena, they’re just moving from behind the gym to the digital realm).

In fact, on the creative front (moving away from a digression on online aggression), these collages and mash-ups and massively uncited multimedia conglomerations are the same kinds of conversations that my friends and I have. Only we have them in person. Where I grew up with VHS and quoting movies endlessly, today’s students are able to literally quote movies. And music videos. And magazines. And web sites. And anything else that has floated across their consciousness in the form of bits. Even their teachers (and the first time I was an unwitting component of a student video mash-up was over a decade ago).

These videos, which are clearly violations of copyright law, intellectual property treaties and, often, good taste, are just our students engaged in conversation. Online and digitally.

Transitioning from Conversation to Academic Discourse

Understanding that what we see as recreational plagiarism and piracy is, in fact, informal discourse is the first step towards connecting the dots with our students. What we need to help our students do — what our role as teachers is — is to engage in code-switching: when is it appropriate to have an informal conversation? When do you start to cite your sources? What are you doing — for yourself and your sources — by citing them? What standards can we use to cite sources? When do those standards apply?

Overall: what’s the point of this exercise? Are we doing it because we’re afraid that somebody will use Google to find out that a dozen other people have had this same idea, and we want to get there first? Or are we doing it because we’ve read something powerful, insightful, revelatory… and we want to share the impact of that source with our own audience? Are we sharing our sources because if we don’t someone will accuse us of falsifying our research, or because we have come across a marvelous, well-researched data set that, with our analysis, screams in support of our conclusions? Are we citing the work of others out of a grudging sense of obligation to them for work already done, or because making reference to other works makes our own easier: we are bringing worlds of ideas into our effort through the careful selection of a word or phrase?

In the past two years, this is what I have started to try to do with my students: rather than threatening them, engage them in that part of the world of research and ideas that I find so invigorating and exciting. Rather than whaling on them for botching a quotation, explain to them why getting their quotations and citations right (for their context) is meaningful.

For the first time, last week, when I was grinding away on my students to include links in their blog entries, and I asked why, one of my students said:

“Because you might want to read them too?”

December 24th, 2009

Posted In: Teaching


This post is part of a series of components of my “Expert Plan” at my school, looking to create a shared resource for my colleagues as the school moves towards greater adoption of laptops and technology in our pedagogy.

The Model

I have an inherent prejudice against teaching students (and faculty) to use computers as tools by providing step-by-step directions for a specific process. I believe that, while totally helpful in the individual instance of that specific process, the step-by-step instructions are, in the end, handicapping: they do not introduce the learner to the underlying concepts that might guide their further, more extensive use of the same tool independently.

That said, periodically I need students or colleagues to do exactly one specific sequence of steps. This year I have been experimenting with presenting these sequences of steps as “screencasts” — videos of me doing the process while I narrate what I’m doing. This has a number of advantages, not least being that it is far, far faster to create a screencast than to write a set of instructions. Additionally, the screencast presents as a manageable video, rather than as an overwhelming 17-step sequence of directions. Finally, rather than reading a description of the process, learners are able to see the process as it plays out.

In Practice

I have created perhaps a dozen or so screencasts so far this fall, and I have settled into using Screencast-O-Matic, which I like because it does not require installing additional software (as Jing does), it is free (as Jing is), and it makes it easy for me to post my screencasts either to the Screencast-O-Matic site (for free) or to YouTube (for free), or to save a high-quality video file to my computer, which I can edit in iMovie or Sony Vegas Movie Studio. With the video hosted on either Screencast-O-Matic or YouTube, I’m then able to embed the video in a blog post or wiki page for the learners to view.

One technical issue that I ran into is that the Screencast-O-Matic streaming video requires Java to be installed and allowed to run (which is generally true), and that the series of dialogs to permit this is disconcerting and derailing for some learners. In general, where I can (for videos under ten minutes), I have also posted the videos to YouTube, which requires less from the user to view. The YouTube videos, viewed in HD, are still somewhat lower quality than the Screencast-O-Matic-hosted videos, but they’re generally fine.

I have also been experimenting with OmniDazzle as a way of highlighting parts of the screen as I talk and work in screencasts, making it easier for learners to follow my mouse motions and directions.

A learning issue that I have run into is that some folks (more faculty than students) have been unwilling to click play to watch the video. The process of learning new technology without an actual person standing at their elbow is too overwhelming to contemplate (this is not inference, this is what I was told by those faculty).


I’m not entirely gung ho about screencasts, for the reasons listed above in The Model — that I want learners to understand concepts, rather than steps. I fear that presenting a shamanistic approach to learning technology — “do this sequence of arcane steps and the magic happens” — undermines long term learning. That said, I feel that I am able to better present concepts without intimidating learners in a screencast when I am just talking, rather than presenting a paragraph-sized annotation to each step of a set of directions.

The screencasting approach does, of course, not address all learning styles. It reaches more learners than written directions do, I believe, capturing both visual and auditory learners, but it is still not the same as working with learners in person to help them accomplish the process themselves. To this end, I have tried to hold screencasts in reserve as a reinforcement for in-class learning, rather than as a sole source of learning about a particular process.

November 22nd, 2009

Posted In: "Expert Plan", Educational Technology, Social Media, Teaching


This post is part of a series of components of my “Expert Plan” at my school, looking to create a shared resource for my colleagues as the school moves towards greater adoption of laptops and technology in our pedagogy.

The Model

I wanted to structure the feedback that my students presented to each other during our video critique. We often brainstorm the criteria that we will be using to review work as it is presented, and I post our criteria on the board or on the wiki as a reminder throughout the process. This time, I created a new Form in Google Docs and entered our criteria as questions. I then embedded the form in our class notes for the day, and each student filled out the form as we viewed video. I embedded the responses as a spreadsheet on a linked page, so that the students could review the feedback they had received and post their responses to our class blog.

In Practice

Creating the form live went relatively smoothly — the only hang-up was my inability to type in public. Fortunately, the students were proofreading on the screen and caught me when I made errors. They were also able to help guide me when I got distracted and forgot what I was doing (“Mr. Battis, we’ve already got that question at the bottom of the screen…”).

It took very little instruction for the students to figure out how to use the form. The most complicated thing they had to do was refresh the page after they had submitted their feedback so that they could get a fresh, blank form for the next video.

We settled into a routine where I played each video through twice, once for them to watch, and once to remind them of details as they entered their feedback.


After we finished reviewing videos and posting feedback, I put the question to the students: was this better, worse, or the same as a verbal critique of the same material (which we had done in the earlier digital photography unit)? The response, by and large, was that this was actually really helpful: there were enough things to pay attention to while watching the video that being able to take notes in the form kept them from forgetting things that were important.

November 22nd, 2009

Posted In: "Expert Plan", Collaborative Writing and Editing, Educational Technology, Teaching


This post is part of a series of components of my “Expert Plan” at my school, looking to create a shared resource for my colleagues as the school moves towards greater adoption of laptops and technology in our pedagogy.

The Model

For the last few years, I have found that, when appropriate, I get far more use out of my notes if I take them on a computer. Using the computer allows me to keep my notes organized, to instantly create links to related information (either within my notes or on the web), to flag my own questions as they arise (and unflag them as they are answered), to find ideas in my notes later (search is way faster than flipping through my notebooks and legal pads), to share my notes with colleagues and students, and to link to them as references and resources in later iterations of documents.

In Practice

It’s not always kosher to have your laptop open in a conversation. If I take notes in a one-on-one meeting on my laptop, there is a real danger that I will be talking to my computer rather than the person I am meeting with. (Simultaneously, if I take the notes on my laptop, I am able to refer back to them more easily than to handwriting.) Personally, I have found that if I feel compelled to take notes by hand, those notes are not going to make it into my computer except in extraordinary circumstances, and that the only service paper notes have for me is as a memory aid (“the information has passed from at least one neuron to at least one other neuron, crossing at least one synapse in the process, giving you a faint hope of remembering the information.” — Duane Bailey).

If there are network connectivity problems (or battery power level issues), my notes may either not be available or may disappear entirely (as happened at one point this fall, taking notes on [a major collaborative project] presentation). This doesn’t happen with notebooks. However, referring back to the last paragraph… those notes would have gone into the ether anyway (for me at least) if I had taken them on paper.

I find that I am much more willing to share my digital notes than I would hand-written notes — not just because of legibility issues, although those are real, but also because when I share my notes, I share them with an expectation that the recipient will add some input to those notes, adding value for me as well.

I have also found that using the tagging feature of the wiki gives me a tool for taking attendance at a meeting — who was there, so that I can find notes based not just on content, but on the makeup of the meeting: “I know we discussed this in EdTech, I think Scott said something about it…”


As someone who spent years not taking notes on anything, simply remembering what was said to the best of my ability, I find that taking notes on my computer is a massive advantage: it allows me to empty my brain and forget things with confidence. And taking my notes in a wiki makes them instantly shareable and referable from any computer, anywhere. I love it.

November 22nd, 2009

Posted In: "Expert Plan", Collaborative Writing and Editing, Educational Technology, Teaching


Nate Kogan, writing about his plans for “classroom 2.0” collaborative writing assignments in his history classes in the coming year, notes student resistance to working collaboratively:

While many students seem to dislike group work, I think the resistance stems more from the fear of being saddled with all the work by one’s potentially indolent group-mates rather than from inherent resistance to collaborative work.

[Full disclosure: Nate is my brother-in-law, and I have been following his thoughts about teaching with technology with some interest all summer.]

While studying for my M.Ed., I found myself revisiting the role of student full-time after a decade-long hiatus: it brought me back to the classroom with fresh eyes. The process raised two big pedagogical questions for me:

  1. How do I teach my students to ask questions? Not even good questions, just questions. I realized that, as a long-time A student, I’d never spent much time confused, and therefore hadn’t had to spend much time figuring out how to get unconfused. Over the course of my studies, I realized that this might be the most important thing that I, as a high school teacher, might be teaching any of my students. And, never having been taught (or, at least, never having noticed being taught — merely encouraged to just do it) how to learn reflectively and ask questions that clarify and resolve my areas of confusion… I was (am) a little at sea about how one teaches this intentionally.
  2. How do I teach my students how to work collaboratively? This is, after all, what the real world is about. It’s vanishingly unlikely that my students will find a role for themselves in which they don’t have to work with other people toward a shared goal (okay, one or two of them may turn out to be costumed superheroes or reclusive, genius novelists… but the vast bulk are going to have to play well with others).

I think that these two questions are related: working collaboratively with peers creates a more free-flowing and less performance-anxiety-inducing environment than working independently and presenting to the teacher (and one’s classmates). Or, at least, it can. That environment could, if the stars align and sufficient support and guidance is provided, even result in a setting in which students are free to debate, critique and improve each other’s work. That is, they could learn how to reflect on what they’ve learned and ask questions of each other about that work and the progress that they have made.

In graduate school, I had a wealth of group project experiences. The least successful was my year-long “school developer” project that was, effectively, my master’s thesis with a group of four other students working with a local elementary school to develop a strategic plan for expansion from K-5 to K-8. Lord, this experience was miserable, partly because my Myers-Briggs profile was the complete inverse of my teammates’, and partly because we had no idea what we were doing as a team and were thrown into trying to do things before we actually were a team. (It calls to my mind my panicky, parental feelings of inadequacy when my classes have to interact with outsiders early in their time together — I don’t trust them enough to believe that they’ll be presentable, and they don’t trust me enough to believe that I haven’t set them up for a fall or for boredom.) Long story short, the team suffered from terrible group dynamics, mission creep, lousy communication with our “client” school, confusing feedback from our peer teams and professor, and unclear end goals for both the class project and the school’s mission.

The most successful team of which I was a member was actually formed to write a single collaborative paper over the course of the semester. The professor, Janice Jackson — a former elementary school teacher and district administrator and all-around mensch — spent the first half of the semester devoting significant portions of class to not only teaching about group dynamics in the abstract, but giving us time as teams to work through those very dynamics as we learned about them. By the time we had any kind of work for which we were accountable, we were, quite honestly, a little tired of meeting and exasperated at Prof. Jackson. It all felt excessive. However, when we sat down to do our research and write it up, it turned out that all of our (very diverse) Myers-Briggs personalities meshed, that we each had clear roles within the group that we had explicitly negotiated, that we had clear expectations both of each other that we had explicitly stated and of what our end result should be (that we had proposed and had approved by Prof. Jackson, explicitly), and that we felt safe working through early drafts of our paper sections together and receiving what was sometimes drastic criticism and demands for reworking.

The process matters. It really, really matters. And that class with Prof. Jackson was the first time that I had ever worked in a group in an academic setting that had consciously set aside (or had set aside for it) time to figure out how to be a group. That that experience was bolstered by conceptual background in group dynamics surely didn’t hurt. Prof. Jackson gave us both the time and background to develop clear understandings of both the norms of our group and our own roles within the group. And this idea of understanding one’s role in the group, and trusting one’s collaborators to fulfill their own roles and responsibilities, is key to successful collaboration. Without that trust, one ends up either abdicating all responsibility (“yeesh, what a bunch of clowns — there’s no way we’re going to do well, why should I try?”) or striving to fill in all the perceived gaps (“yeesh, what a bunch of clowns — if I want it done right, I’ve got to do it myself.”).

So, how to develop this experience of a trusting, collaborative project with high school students? They’re certainly at a different developmental place than I was at 30 (well, I hope they are — mostly for my sake). I don’t think that loading them down with all the conceptual background and vocabulary that we received from Prof. Jackson will sell them on it. But I do think that striving to develop that environment of trust and delegation among teammates, with a clear understanding of roles, is worthwhile.

I’ve tried to do this in a number of settings. When I was working as an outdoor, experiential educator, I found that large group projects could be done well by delegating specific roles within the project to specific students, thus providing clear accountability for specific portions of the project. My preferred iteration is to work with the students to develop a top-down design (what Wiggins confusingly refers to as a “backwards design”) that parcels out the work into self-designed and allocated responsibilities. One iteration of this was to present a large question to the group (“How does human management impact the ecosystem?”) and then help each student develop an area of expertise within the larger question (water, birds, tourism, sound, etc.). The wrinkle is that no student can accurately predict a topic in which they will maintain an abiding interest throughout the project, and therefore slippage and shifting will occur and need to be negotiated gracefully.

Another approach, which I used last year with my Application Design class as we were working to build a CNC lathe, was to break the project apart into modules with the class, and then solicit volunteers for small teams to tackle each module. We prioritized the modules, and each student was responsible for shifting from module to module as they were interested or the module needed development to support dependent modules of other teams. Students were encouraged to engage with other teams, and sometimes shifted from team to team based on changing interests, but there was always a core student or pair of students who was, at the end of the day, managing each module and responsible at least for rallying other students to that module’s cause.

In both cases, I found that developing clear (and concise — unlike this entry) roles for the students in collaboration with the students gave them significantly more buy-in. Students who engaged with the project were able to throw themselves into it without fear of having to “carry” their peers (each student’s contributions were documented along the way — automatically by Google Code, in the case of the CNC lathe project), and, in fact, over-achieving students tended to provide a catalyst for under-achievers: they asked thoughtful and critical questions, provided assistance and generally raised the intellectual atmosphere a notch or two. Simultaneously, the multiplicity of roles and modules provided enough overlap that if one or two students totally peace out, the rest of the team can gnash their teeth briefly and move on without being hindered or damaged. Where successful, I found that I had students who were pushing me to do more research to support their work and that I was relying on their work and questions to lead the class.

In both cases, I also took some significant time out both early on and throughout the project to step back, examine and work through group dynamics. Not necessarily conceptually, but pragmatically working to resolve issues and grudges (and, not insignificantly, to celebrate and highlight successes). While I strongly encouraged my students to hold each other accountable and to work out issues with their teammates directly, I was also a consultant to individual exasperated students on how to do this, and a general-purpose umpire for the whole team, calling time-out when it looked like a brawl (or tears) was brewing. In my umpire role, I was also able to highlight particularly good or interesting work by individual students or teams for the entire class, providing a clear model of the desired outcomes and behaviors.

August 11th, 2009

Posted In: Teaching


Andrew Watt’s response to Sarah Fine’s recent opinion piece in the Washington Post captures much of what resonated in her piece with me as an independent school teacher: the challenge of simultaneously charting one’s own career and life goals while working towards institutional goals which may be formulated, articulated and executed with varying levels of clarity and thoughtfulness. I think we can simply stipulate that administrative transparency and collaborative decision-making go a long way towards both better decisions and teacher longevity. (It’s really hard to imagine wanting to stay at a job where your responsibilities are both out of your own hands and unpredictable, right?)

What gave me pause was the throw-away thought at the end of Watt’s response:

The other side of this equation is the revolution in technology.  Whether they’re technophilic teachers who embrace tech but chafe against daunting rules and regulations, or technophobes who fear so much as a cellphone in a student pocket, teachers are right to see computers, cellphones, and the Internet as a threat to their existence.

Because there are learning resources out there now which are better than at least some teachers, in some subject areas.  The range and depth of these offerings are only going to increase.

I. Am. Not. On. Board. With. This.

And it’s not because I’m a raging technophile (which I am), or because I cling to an older model of teaching and classrooms (granted, I want to grow up to be Frank Boyden). It’s because I believe that teaching is not about content-delivery. Teaching is about helping students learn. And the best way for students to learn is to work (and play and live) with adults who espouse and model learning, how to learn and joy in learning.

Yes, technology is changing how we deliver content — and how we manage our classrooms, and how we assess student work, and how we research, and what sort of work counts as “work” by and for our students. And automobiles replaced the horse, the printing press replaced scribes, machines replaced craftsmen, etc. Change happens. The role of the teacher, however, remains essentially the same: facilitate, support and develop the learning process for students. How that work is done may change dramatically, but it is fundamentally the same goals with new techniques.

Teachers aren’t going to become superfluous because of technology. They’re going to become more necessary. They are more necessary.

[N.B. This is not an indictment of technophobe teachers. Suffice it to say that one of the real joys of my job in the past few years has been to engage in collaborative learning with master teachers who self-identify as technophobes. As we discuss how technology might support their teaching goals, I simultaneously learn a great deal about how to formulate and evaluate those goals, with masterful techniques demonstrated. Thank you! More on this at another time.]

August 10th, 2009

Posted In: Teaching


Having just struck upon the similarities between Pink’s six new senses and Gardner’s multiple intelligences, I continue to be fascinated by examples of folks employing these ideas in creative ways: enter Basildon Coder, recently highlighted on Slashdot for describing a Wodehouse-ian approach to code refactoring. As always, I look at this and start to ponder how to use it in the classroom with my students: one of the real challenges that my students face is not the development of new code (although that is challenging) but figuring out how to use a body of code written by someone else (me, their classmates, some godawful Windows GDI API, etc.). I have been struck by the difficulty my students have faced this year in grasping the 50,000 foot view of coding — perhaps a visual representation like this might be a first step. Sort of a Powers of 10 for programming.
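As a concrete first step toward that 50,000 foot view, here is a toy sketch (my own, not the Basildon Coder visualization) of what I have in mind: using Python’s standard `ast` module to print just the skeleton of a module — its classes and functions — with every implementation detail stripped away. The `outline` function and the `demo` snippet are hypothetical illustrations, not anything from an actual class.

```python
import ast


def outline(source: str, indent: str = "  ") -> str:
    """Return a birds-eye outline of a Python module: only the
    classes and functions, none of the implementation detail."""
    tree = ast.parse(source)
    lines = []

    def walk(node, depth):
        # Record each class/function definition, then descend into it
        # so nested definitions appear indented under their parent.
        for child in ast.iter_child_nodes(node):
            if isinstance(child, (ast.FunctionDef, ast.AsyncFunctionDef)):
                lines.append(f"{indent * depth}def {child.name}()")
                walk(child, depth + 1)
            elif isinstance(child, ast.ClassDef):
                lines.append(f"{indent * depth}class {child.name}")
                walk(child, depth + 1)

    walk(tree, 0)
    return "\n".join(lines)


# A made-up module a student might be handed sight unseen.
demo = """
class Stack:
    def push(self, x): ...
    def pop(self): ...

def main():
    pass
"""

print(outline(demo))
# class Stack
#   def push()
#   def pop()
# def main()
```

Something this crude is obviously not a classroom-ready tool, but it is the kind of zoomed-out rendering — structure first, details on demand — that might help a student orient in someone else’s code before diving in.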

March 23rd, 2008

Posted In: Computer Science, Educational Technology


Having just driven my sister to the Philadelphia airport, I am reminded of the value of education founded in general principles, rather than a rote memorization of steps to accomplish specific goals.

I grew up in Philadelphia, and on most of my trips I have no idea where I am or how to get where I’m going. This is doubly true when driving to the airport. I simply know a route (in the case of the airport, for many years all I knew was that if I got in a particular lane on the expressway, I would eventually end up at the airport). I don’t know the geography. If I had to leave my route for a detour (as I did a year or two ago), I would have no idea how to recover.

Compare this to my knowledge of Somerville, where I lived for nine months and drove far less than when I was in Philadelphia (and yes, the walking knowledge is part of my point). Somerville was the third city in four years that I had lived in, and I had developed a different approach to learning the lay of the land than my approach to Philadelphia. I got lost. I got lost a lot. I printed out directions to every place I wanted to go, but when I thought I saw a shortcut or knew my way, I took it. Sometimes this went badly. But I rapidly developed a much better sense not just for routes, but for the entire geography of the city (I can’t speak to Boston, but this worked in Cambridge as well). I had taken enough wrong turns that I had a sense of how the streets were connected (even if I didn’t always know the names).

I started working in IT when I was in high school, supporting my school’s AppleTalk network and doing odd consulting gigs along the way. In the consulting, I had several regular clients who hired me to help them learn how to use their computers. These regular clients took copious notes as I explained to them how to perform various tasks on their computers (use a word processor, print, save a file). Those folks who noted down that the menubar was where actions (or, in one English teacher’s case, verbs) were stored, that each window represented a file on the hard drive, and so forth, were rarely heard from again: they had grasped the general principles of the situation. The ones who titled their notes “How to Save a MacWrite File” and then took step-by-step notes… those were job security. Not only did they tend to lose their notes (notes more akin to a treasure map than to knowledge), but they were unable to generalize from those notes to other related concepts like “How to Save a MacPaint File” or “How to Save an AppleWorks Spreadsheet” (and they weren’t entirely certain that a file and a spreadsheet were the same thing).

Why do I mention this? The takers of step-by-step notes (maps to the hidden treasure of the Save command) were learning how to use their computers by rote memorization, with no real understanding of what they were doing or how discrete parts of the process they had learned could be applied to other, similar situations. They were driving to the Philadelphia airport.

When confronted with an alien technology (or landscape or process or culture), our natural inclination is to find out how to do the few specific things that we need to do (order food, print a paper, hail a taxi, etc.). If we learn those tasks in isolation, without learning the underlying and fundamental principles that define how that technology or landscape works, we continue to operate in alien terrain. It’s quicker and easier, initially, to have our cheat sheet than to probe the situation and figure out how the dang thing works.

The temptation when teaching students (or faculty) how to use technology is to provide the step-by-step directions, neatly illustrated with screenshots, describing how to perform X, Y or Z task that needs to be done for the assignment. I have certainly been guilty of this myself, even as recently as this fall (I tried to have the best of both worlds, describing the steps, but also what the steps were doing… but I have little confidence that anyone read those longer explanations under the time pressure of September and the start of classes). These cheat sheets prevent us and our students from learning how to use the system.

Earlier this fall, a fellow teacher described his approach to teaching his students how to use different web sites. He doesn’t. He gives them the URL of the main site, tells them what to look for, and gives them an evening to poke at it to figure out how to get the information they need out of it. They might collaborate and share their learning. They might intuit how the system works. They might not get it that first night and have to seek help from their peers. But they don’t have trouble with the second assignment: they have learned how the site works on that first evening.

This seems like an argument for teaching by not teaching. Rather, it is an argument for teaching by coaching, by presenting challenges to our students for which we have adequately prepared them and allowing our students to strive and succeed. The role of the teacher is not to be the master of all knowledge, but the sage adviser capable of guiding students to the knowledge. In practice, this is not easier but rather much harder than traditional teaching: it’s easy to tell someone else how to do something, to explain what you know so that they might understand it. It is much harder to create a situation in which genuine learning can take place, to not interfere while that learning is going on, and to help facilitate and process that learning during and afterwards.

This is the challenge for teachers of technology.

Rather than teaching our students how to use a specific technology to perform a specific task, we need to present our students with appropriate tools and background to learn to use those tools. Academic computing is often relegated to computer applications classes, where students learn skills devoid of context, or to specific projects where a student “learns PowerPoint.” Instead, we need to think more broadly: what are the academic computing skills that we wish for our students to have? How can we challenge our students to develop those skills? How will we know when they have attained this knowledge?

Do we want our students to learn to use Word and PowerPoint? Well, not really: I don’t care what programs they learn to use. Let’s rephrase the question: Do we want our students to learn how to develop and write about their ideas and present those ideas in a clear and compelling manner? Hell yes. We have several tools available to facilitate this, including Word and PowerPoint. But these are just tools. Offering a class in computer applications is like offering a class on pencils: everyone involved will want to gouge out their eyeballs by the second hour. These tools have to be learned as just that: tools, part of a process larger than themselves.

The great fear of teachers who are asked to use fancy pants technological tools in their classes is that they will need to know more about these tools than their students. I ask instead that teachers know more about the skills that they wish their students to acquire, and be willing to coach students towards honing those skills while using technology, rather than teach students to use specific tools.

Certainly a teacher can’t ask a student to use a tool with which they themselves have no familiarity. But if this is a tool that is supposed to help a student achieve and exhibit the desired skills, and the teacher is him or herself a master of those skills… shouldn’t it be incumbent upon the teacher to either a) be familiar with the tool or b) reevaluate whether or not the tool is itself useful to the skill? (I’m eating my own dog food on this one: I’m writing this blog!)

This fall I have worked with a number of teachers who want to learn specific tools to enhance their own teaching. This is how these tools will end up being taught, not because we have a mandate that all of our graduates should master the Microsoft Office suite. In much the same way that a history teacher who doesn’t use outlines for his or her own analyses is going to be less well-equipped to teach his or her students to use outlines, a teacher who doesn’t use technology is going to be ill-equipped to teach with it. (And, by corollary, leaders in schools should also be using technology to support their work with faculty — same reasoning: if it’s really a useful tool, we should be using it!)

All of this brings us back to the key point, however: we don’t really learn until we have had to get ourselves unlost. And, as teachers, we need to be willing to let our students get lost. Not terribly, Robinson Crusoe, Moses-in-the-desert, talking-to-our-volleyball lost, but lost on the way from Somerville to Cambridge. Define a bite-sized goal for our students and ask them to chew it on their own: ask them to learn to use a technology on their own. Give them an introduction, point them to the areas they will need to explore, and let them explore!

December 28th, 2007

Posted In: Educational Technology
