How Training without Lecturing breaks the fourth wall

There is, I have discovered, an imaginary wall between the teacher and the taught, and you will never feel it more strongly than when you opt not to lecture.

What I have learned in the last five years of teaching faculty how to use courseware is that my grand ideas sound really good on paper, and sound good to the ears of chairs, administrators, and even instructors themselves, but they rarely work out as planned. My grand ideas have been these: don’t waste time with fake “training” courses; instead, encourage instructors to use the time we’ve booked to actually build their own courses, with help on hand. If asked, any instructor will tell you that they have more important things to do than sit in a lab and listen to some instructional technologist or (in my case) librarian go on at great length about best practices or feedback we’ve heard from students. They have a deadline, and it’s usually something like tomorrow or the next day, to get this course ready and online. They are often annoyed that the system doesn’t work the way they want it to/hope it will/expect it will, and have exactly 12 seconds of attention to spare. This is why I thought my grand ideas would work out: I’m not going to ask you to sit and listen to me for an hour before you go home and build your course alone. I say: forget the first part, let’s jump to the second, but do it more efficiently. You work on your course; I’ll answer your questions as required. We can learn from each other’s questions. We’ll all walk away having accomplished something.

It never worked. First off, the labs where these training/work sessions happen are built like classrooms, with a podium and a screen and desks that usually face the front. The room itself tells everyone what they should be doing, and it’s not what they want to do or what I hope they will do. Second, no one’s ever ready. We do the training a week or two before classes start, and 9 times out of 10 the syllabus is still in progress, the documents are all over the instructor’s home computer (not in the lab with us), TAs haven’t been assigned, assignments are still being sorted out. So I can book a room to get the work done, but the content is rarely with us. So what happens instead is that I (or one of my esteemed colleagues) get in front of the room and lecture. We lecture about courseware. We point out where the tools are, we walk through the clicks. Here’s how you do it, guys. We pepper the lecture with experience, feedback from students, ideas we’ve seen work well, and those that don’t work so well. We end up serving up exactly what everyone would tell us they don’t want.

So this year, we decided to throw the whole process out and start again. As with any educational enterprise, we had to sit and think about where the value in our training lies. While I can talk at great length about all the tools and how best to use them, my experience is that little if any of my grand words sink in. Of course that’s how it works: the research clearly shows that training of this nature isn’t terribly effective, and I can vouch for that based on the phone calls I get. How often do we get questions from faculty where the answers were delivered at a training session several weeks (or days) prior? About 95% of the time, easily. It’s not that they’re not paying attention; our method just doesn’t work. They feel successful at the time; we have really good interactions with faculty, they clearly understand that we know what we’re talking about, they appreciate that it is our job to help them and we will pull out all the stops to do so. Everyone walks away happy. It’s just that our training objectives (giving instructors the tools to feel confident in creating an excellent, effective online course presence) are rarely met.

We distilled the positives of our current situation down to these: we need to continue to make sure instructors know that we’re friendly, helpful, and available for them on an on-going basis. If nothing else comes across, this has to. The thing we value the very most is our one-on-one discussions with instructors about their use of technology in their courses; we want to keep that. That interaction is valuable for both of us. Beyond that, everything was fair game.

So first, we decided to stop using classrooms to conduct training. The format is too familiar and too controlled. We don’t want everyone to take a seat and stick in it. We want them to move around. A moving body learns better than a stationary one. So no claiming seats. Next, we would not lecture. No lectures. The learning that was going to happen around us would be active, not passive. We’re not going to insert answers into your head. You’re going to have to forage for your answers.

We set up four zones in a room. At the front near the entrance we have a demonstration zone, with no seating, but one very large whiteboard, a projector, a Wii remote, and an IR pen. In the demonstration zone you can use the IR pen to interact with a training shell. Here we demonstrate how tools are used, where to click, how to create elements, etc., based on the questions that are coming from faculty. It’s off-the-cuff and tailored to the instructor in front of us. The advantage of the large format is that other instructors can see what’s being demonstrated from anywhere in the room and come forward to interact with it (and us) if they’re interested in the topic.

The second zone is simply a table. Here we encourage instructors with their own laptops to open them up and work with a familiar machine. On the table we have our “how to get your course into Blackboard in a hurry” document, which walks you through each of the basic, necessary steps.

The third zone is the Petting Zoo, which consists of six computers, each displaying a different training course shell. They’re designed so that you can play with or look at the tool in action. If required, there is a laptop sitting next to the computer with the student view of the same course shell, so you can set it up/create/add content as an instructor and then see what it looks like for a student. There are printed signs on each station advertising which tool is being displayed. On the desk at each station are Post-it notes with ideas on them for how and why to use the tool. Next to the monitor are printed sheets with step-by-step instructions on exactly how to set up and use it.

The fourth zone is simply two computers against the far wall where instructors can log into their own accounts and build their courses.

The basic plan was this: we knew everyone would be a bit uncomfortable at first, not knowing what to do, so we thought we’d start with a short lecturette about some concepts rather than tools. First: the idea that the “course menu” shouldn’t remain in its default state, but rather should be understood as a table of contents for the class. We’d give them a brief dissection of the main page, so they knew where the basic elements were. After that we’d introduce the areas to the instructors, including a brief introduction about each of the petting zoo stations. Point out the instruction sheets. Encourage them to ask their questions and check out whatever stations interest them. Then we let them go.

The very first time we did this, I shuddered a little about two beats after I stopped talking. You could feel the uncertainty, the tiny bit of panic, both on our side and theirs. They expect us to edutain them. There is a silence that needs to be filled, and it should be filled with my confident voice. They (and we) expect us to do the work, the song and dance, while they observe us. This is, at the heart of it, what “learning” looks like in higher ed, isn’t it? We are so familiar with this setup that taking it away causes real insecurity for everyone involved. But within about four minutes we had faculty playing with tools at the petting zoo, getting questions answered at the demonstration area, and talking to each other at the workstations and around the table. Rather than spending all my time going through the basic rigmarole, I was answering specific questions and brainstorming creative ways to encourage student participation. How to get students to comment on each other’s blogs, which tool to pick for a specific task, how best to tackle groups within large classes. Rather than reciting the content of our tip sheets and how-to documents, we got to spend time using our imaginations and experience. It was exhilarating.

Not only that: most of the instructors stayed longer than the booked time, took more printed paper than usual, and actually (gasp) worked on their courses. I couldn’t believe it. When we give everyone their own computer to work on, no one wants to build their own course. I think that when we spend most of the time lecturing, we end up claiming all the air in the room. When we stop, and force everyone to become an active participant in the training, there’s more autonomy to go around. Everyone seems to take charge of their situation a little more. When instructors have to choose their spot rather than having one essentially assigned, they seem far more willing to get to work. I felt like I did more, even though I was talking to the crowd so much less.

And all those basic questions? The paper does the talking. I don’t have to worry about forgetting to mention how to make your course available, or how to upload a document. There’s a simple set of instructions for that. People with experience and imagination are far more valuable sharing those than reciting the basic how-tos.

Every time we run one of these training sessions, and we’ve done five of them so far, it starts out with the same tension: everyone in the room looks at us, a little nervous, wondering what the heck we expect from them. With the librarians, they all stood in an orderly row.

“I know this is uncomfortable at first,” I said as we started. “When we don’t lecture, it breaks the fourth wall.”

“There is no fourth wall,” one of the librarians protested, clearly uncomfortable with being put in this situation. (I can always count on librarians to voice what few others are willing to.)

I looked up at them, in a line, literally forming a wall themselves. “Yeah,” I said. “There really is.”

Within a few minutes, they were all hard at work, papers in hand, discussions on-going. The demonstration area was busy. All the petting zoo stations were occupied, mostly with a pair looking at the tools and discussing them. It’s not the trainers and the trainees anymore. It’s just us, together, learning.

Sometimes, Web 2.0 Hurts

Oh boy. I didn’t see this one coming, though I suppose I should have: Students Used for Cheap Labour. This is a link to our student newspaper, and possibly it loads better in your browser than it does in mine, but I had to view its source to get at the content, so I will explain. Steve Joordens, a psychology prof at UTSC, has been working on a piece of software that has students not just reading and responding to articles, but actually grading each other’s work:

The program PeerScholar is currently being used to mark two written assignments, which are worth 5 percent each. After writing their own answers in the program, students are asked to log in later during the week to read over other students’ answers. Students are then asked to grade each answer based on criterion available on the website. All student work is graded by five students, to provide fairness in the marking, Joordans [sic] claims.

I’ve met Steve. I went over to UTSC a few months ago to talk with him about what he’s doing and get a demo. He’s a very nice guy, a very smart guy, and while he’s taken a very different approach to instructional technology than I have, his work is very interesting. I found myself very challenged by what he’s doing because it’s so radically different and yet so similar to the work I’m doing myself. The pool of data he’s gathered means that he can do some serious statistical analysis on how students grade, the number of students who will try to game the system, how to account for gaming the system, etc. It hit me like a brick wall: stats. Instructional technology as a thing that gathers stats, from which we can extrapolate and learn something about the user group. It’s just not in my repertoire of goals; what can I say, that’s what a background in English, history, and theological studies gets you. Seeing a demo of PeerScholar showed me my biases very, very clearly. It was like looking into a mirror for the first time. Revealing and a little unsettling.
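
Neither the demo nor the newspaper piece spells out how those five peer marks actually get combined, or how gaming is corrected for, so take this as nothing more than a back-of-the-envelope sketch of why five graders plus a robust aggregate can blunt a single careless or bad-faith mark. The function name and the median/trimmed-mean choice here are mine, not PeerScholar’s.

```python
from statistics import mean, median

def aggregate_peer_marks(marks, method="median"):
    """Hypothetical illustration (not PeerScholar's actual algorithm):
    combine several peer marks so one outlier grader has little pull."""
    if method == "median":
        return median(marks)
    # "trimmed": drop the highest and lowest marks, average the rest
    trimmed = sorted(marks)[1:-1]
    return mean(trimmed)

honest = [7, 8, 7, 6, 8]   # five peers marking out of 10
gamed  = [7, 8, 7, 6, 1]   # same answer, but one grader tanks it

print(aggregate_peer_marks(honest))             # 7
print(aggregate_peer_marks(gamed))              # 7 -- the outlier barely registers
print(aggregate_peer_marks(gamed, "trimmed"))   # ~6.67
```

A plain average of the gamed set would drag the mark down to 5.8, which is presumably the kind of effect all that data lets him measure and account for.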

My focus has always been more touchy-feely, more humanities than social sciences, in that I’m more interested in using “web 2.0” to create a culture of feedback inside a class, to use comment features as a way to train students to work up a response to everything they read, to make reading scholarly work simply another form of dialogue rather than monologue. As a way to help build a sense of community, because community always needs to be built and strengthened. I generally steer clear of grading per se; assessment is a grey area for me in a lot of ways, and while I have ideas about it, I still feel that the instructor is the best judge when it comes to assessing student work. When it comes to interactive work, it seems to me that grading less rather than more (grading the whole experience, the whole process, rather than a single instance) is the way to go. So it wouldn’t have occurred to me to include students grading each other as a feature. Reacting to each other? Yes. Leaving feedback, starting a discussion, quoting each other, definitely. But grading seems so…formal. Final. Mercenary, somehow.

But Professor Joordens is a working instructor, with a huge class to teach, so I can easily see how he would stop to consider how technology could help automate the process. If they don’t automate it, students in those classes will only be able to express themselves through Scantron sheets. I appreciate what he’s trying to do. I can absolutely understand and respect the desire to get those students more engaged and writing more about what they’re reading. I can’t think of a more passive and limiting educational experience than nothing but multiple choice exams for assessment. So I see where he’s coming from.

I didn’t see this coming, though:

However, according [to] CUPE 3902, since marking and grading of student work is a paid position at U of T, the students are subsequently covered by the Collective Agreement for Teaching Assistants, which also makes them members of the union. As a result of this, CUPE 3902 is arguing that students are being made to work for free, which CUPE 3902 Chair Anil Varughese claims is to “compensate for the failure to hire enough trained and qualified teaching assistants to evaluate them.”

Ack! Slippery slope, isn’t it? Reading an article and responding to it is coursework, but reading another student’s response and assigning it a grade is paid labour. I absolutely see CUPE’s point, though, and so does Professor Joordens:

On the UTSC’s PSYA01 website, Joordans [sic] goes on to say, “I will be completely honest. The original reason for seriously considering a peer-to-peer evaluation process was financial. We cannot afford to pay a large team of TAs to mark written answers for large classes. Moreover, it would take them so long to do the marking that it also just wouldn’t be practical. Peer-to-peer evaluation, when combined with great internet programming, is fast and cheap.”

Oops.

The Star has weighed in on this issue as well: Peer Marking Gets a Negative Grade:

Jemy Joseph, 20, “absolutely loved the idea” when she found out her course at the University of Toronto Scarborough also featured short, written assignments that would be returned with assessments of ability to write and think critically.
Her problem was that the marking — worth 10 per cent of her final grade — was done by her 1,500 classmates, as part of peerScholar, an online evaluation program in limited use at the school.

“The idea behind it is great because you’re not just getting graded but you’re also getting some sort of feedback,” said Joseph, who took the course in 2004. “But I’m not comfortable with getting marks from random students who have no experience in grading and may not put a lot of work into it.”

If I recall correctly, the statistics indicate that students are getting roughly the same grade from each other as they would get from a graduate TA. Though possibly that’s an aggregate statistic, I’m not sure. (Stats: really not my territory.) I don’t think this student is actually complaining about the grade she got, but more about the relative emptiness behind it. She feels cheated out of getting that feedback from the person teaching the course, or from someone who is part of the authority of the course staff. There’s a piece missing there that we need to define. I think it’s easy to see the value that faculty bring to courses, but often the shift into using more technology in the classroom makes people forget about that value, or think it can be replaced by something automated. But students clearly still value the experience and knowledge of instructors themselves. You can give them the grades they want, give them a relatively easy and quick way to get those grades, but they still want more of the faculty member’s time and thoughts. This is a good thing; students aren’t necessarily just here to pick up a grade.

More from the Star article:

“We’re not opposed to finding ways to move beyond multiple-choice testing,” Chantal Sundaram, a representative with Local 3902 of the Canadian Union of Public Employees, said yesterday. “But we think the best way to do that, to have more critical thinking and more long, written answers in introductory courses, is by hiring more teaching assistants. …

“This practice raises issues around our collective agreement and our workplace, but we believe it’s also an issue around the quality of education for the undergraduate students.”

Again, the union has a point. If multiple choice is not desirable and we accept Steve Joordens’ mission, what are the options when faced with 1,500 students per term who want to take PSY100?

The basic structure of the system Steve Joordens created is, I think, sound; students can still read and evaluate each other’s work, it just can’t translate directly into a grade. I hadn’t considered how very carefully we need to tread when moving interactive internet applications into the classroom in a deeply unionized environment. I’ve always been on the side of hiring more TAs when technology is involved rather than fewer; the more feedback from official, experienced sources, the better.

This grievance is definitely one to grow on.