Saturday, December 27, 2014

Teaching dilemmas series: teaching with integrity

Note: This is Part 2 in a series. If you are so inclined, I would encourage you to read the previous post and answer the poll question before reading on.

****************************************

There is a lot about teaching that I love. A lot.

I like making up assignments. It's fun to try to push myself to think of new and creative ways to have students develop the skills I think are important in a psychology degree (true fact: I do my assignments). I also like writing lectures and making slides. Deciding on what point I want to make, and then figuring out how to build up to that point, taking concepts apart to make the components clearer to students, making figures and schematics that can help students create mental schemas (did you see what I did there? ;))... that is challenging, but in a really fun way.

The thing I don't like is being responsible for making moral and ethical judgments about individual students. These are often subjective judgments that require me to hear about deeply personal circumstances that can be very uncomfortable for students to share with me. And after hearing that information, I often have to make decisions that students don't like. It sucks. I'd like to say yes to every request I get from students, but I have an ethical obligation to be fair and equitable. That means often saying no to extensions, or very late submissions, or grade bumps, or a plethora of special requests I get from students each semester.



When I want to know more about a topic, I either make it a lecture topic in one of my courses, or present it at a conference. Nothing inspires deep learning of a topic like the threat of having to publicly share what you've learned with other people! :D I am very interested in how academics make decisions about special requests or individual circumstances that are affecting students. My interest is inspired by a sincere desire to do the right thing. When confronted with these dilemmas, I am constantly asking myself, "Am I doing the right thing?" It helps immensely to know what other people are doing, what other options for resolution are out there. But it also helps to reflect on my past experiences. What decisions have I made in the past, and how did they turn out? What ethical approach did I take in that situation and why?

This post is a preamble to several upcoming posts on ethical ideologies as they apply to teaching. I have found one guy writing about this in the pedagogical literature. One guy. These decisions occupy such a huge component of my time and mental effort in the course of a semester that it seems like more pedagogical research should be out there. But no. There is just one guy. His name is Dr. Bruce Macfarlane, and the good news is that while he's an army of one, he has written very eloquently and intelligently on the topic. So I am going to take his lead and write individual posts about the different ethical positions, as described by Donelson Forsyth, and explain their application to teaching in higher education. Along the way, I will also reveal my own ethical ideology in approaching ethical dilemmas in teaching.

And of course, all of this will feed into the conference presentation I've proposed with Dr. Suzanne Wood in Psychology at the Society for Teaching and Learning in Higher Education conference in June :D Two birds, one stone.

Friday, December 12, 2014

Teaching Dilemmas Series: Survey Question

I am going to be starting a series of posts on the ethics of teaching and ethical teaching dilemmas. To get things started, I wanted to present an ethical dilemma that is very familiar to me (and I'm sure, to other educators). I have no illusions about the number of readers I have or the likelihood of anyone engaging with this poll, but I still thought it would be fun to post. Perhaps in a few years, I'll have a consensus ;)

Here is the scenario: You are an instructor at a large university in Canada. You teach one of the larger second year courses in Chemistry. One of your students has come to office hours to ask for an exception. The student had completed the homework that was due yesterday – he claims it was actually finished well in advance – but he forgot to turn it in between classes because he was preoccupied by a recent break-up. Because the homework assignments are worth so little (only 5% per assignment), your course policy is that late submissions are not accepted. You don't know it, but there is another student in the class with a similar situation, except it was the death of his dog rather than a break-up. That student has decided to accept the consequences of his actions, and will not approach you to ask for an exception. The student in your office is one that you know very well, and you believe his story about the break-up. He is asking you to make an exception to this rule and allow him to turn in his assignment without penalty.

Monday, December 1, 2014

I see you

Students enrolled in large courses often think the prof can't see them – that we must be looking at a sea of faces, and we can't tell one from the other. Life pro tip: I can see you. All of you.

Ok, I don't teach in Convocation Hall, so I can't speak to a 1,500-person audience, but in my courses with 200-250 students, I absolutely know when someone is out of place. I know if there is a new note-taker in for the day, or if someone has brought their boyfriend/girlfriend to a lecture. I can often tell when students are surfing the internet, or texting. The reason I don't call people out for this is that it doesn't bother me, but don't mistake that for me not noticing.

Last year I had a student who was awesome. She came to class regularly, and she looked interested. I love that – when I can see all the faces, and they all have a look of "when will this end??!", I find myself focussed on the handful of students who are still with me. She was one of those students.

I almost told her so at the midterm review. After the review session, she came to get some help on a concept that students often struggle with (ROC curves in Perception), and after a few minutes of further explanation on my part, her face lit up. She figured it out. As she walked out of the class, I wanted to say, "Hey thanks for giving me a friendly face to look for in lecture. Your interest keeps me going." I didn't though, because I felt self-conscious about it, and I thought, "I will tell her another time."

That was about 2 days before the midterm, which she missed. She also missed the final exam. She deferred her deferred exam – actually, her exams, because she missed several of her finals that semester. As happens to many (too many) students at UofT, she had started to struggle with anxiety and depression. Her struggles with mental health nearly derailed her undergraduate degree, and I think back with some regret on her walking away from me at that midterm review session. Would things have turned out differently if I had reached out to that student?

Maybe not. Anxiety and depression often take on a life of their own, and a kind word from me probably wouldn't have prevented her struggle. But I want students to know that I genuinely care. I've been through tough times myself – I have a vivid memory of sobbing, really, truly sobbing, while cleaning my apartment in the 1st year of undergrad because I was so overwhelmed and just couldn't study anymore. Seriously, picture that: a disheveled young woman loudly sobbing while slowly loading dishes into the dishwasher before continuing to sob while watering plants and dusting the living room. It's kind of funny... now.

Anyway, I just wanted to say two things. One: you're not invisible. I know exactly what you're up to. And two: I know undergrad is hard. I've been there. If you're struggling, (a) you are not alone, and (b) there are lots of supports on campus. I recommend starting with your college registrar, but you can also try CAPS. Either way, with exams coming up, please be kind to yourself.

Tuesday, November 11, 2014

Evaluating Exam Efficacy

Some instructors pride themselves on writing exam questions that only 20% of the class gets right, because they think it helps them figure out who the strong students are and who the weak students are. It's a noble goal, but the proportion of students answering correctly is not, on its own, the best metric for evaluating an exam's ability to discriminate strong from weak students.

When I'm evaluating if my exam is effective and valid, I'm generally interested in an exam item's difficulty and discriminability. 

The metric I use to determine whether an exam question is easy or difficult is the probability that students in the class get the question correct – p(correct). If p(correct) for a multiple choice question is 0.25 (i.e. 25% of students get the answer correct), it suggests that the exam question is too difficult. On a 5-option multiple choice question, chance is 0.20, so a p(correct) of 0.25 suggests that most (if not all) students who got the question correct were guessing. I am conservative, so I tend to throw out questions that have a p(correct) below 0.3. Ideally, the p(correct) on multiple choice exam questions will be between 0.5 and 0.8, but I keep questions with a p(correct) anywhere from 0.3 to 0.9 and then look at whether they have good discriminability.
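For the curious, here's a minimal sketch of that calculation in Python – my own illustration with made-up data, not the scoring software I actually use. Each column of a student-by-question matrix of 0/1 scores is averaged to get that question's p(correct):

```python
import numpy as np

# Hypothetical data: rows are students, columns are exam questions
# (1 = answered correctly, 0 = answered incorrectly)
responses = np.array([
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [0, 0, 0, 0],
])

p_correct = responses.mean(axis=0)  # proportion of students correct, per question
for i, p in enumerate(p_correct, start=1):
    flag = "review" if (p < 0.3 or p > 0.9) else "keep"
    print(f"Q{i}: p(correct) = {p:.2f} -> {flag}")
```

On this toy matrix, Q4 comes out at p(correct) = 0.17 and gets flagged for review under my 0.3 cutoff.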

A question that has good discriminability will have a p(correct) that is higher for students who performed well on the exam overall and lower for students who performed poorly. If you take the students who scored in the top 25% of the class, on any given question they should outperform students who scored in the bottom 25% of the class. The bigger the difference in p(correct) for those two groups, the better the discriminability of the individual question. Looking at performance on each exam item as a function of the top vs. bottom quartile of the class is a rough estimate, but a more precise measure is the point biserial coefficient. This is the correlation of performance on an exam question with overall exam performance. Ideally, the point biserial will be at least 0.25, and I have questions that are as high as 0.57 (a new high for Psy270!). If an exam question is really tough (ex. p(correct) = 0.3) but it has a high point biserial coefficient (0.3+), I will keep it in, despite its difficulty. The best questions in my arsenal have a p(correct) around 0.7 and a point biserial coefficient above 0.32.
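Continuing the sketch from above (same made-up response matrix, and again just an illustration): because the item scores are dichotomous, the point biserial for an item is simply the Pearson correlation between the 0/1 item scores and the total exam scores.

```python
import numpy as np

# The same made-up 0/1 response matrix as in the difficulty sketch
responses = np.array([
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [0, 0, 0, 0],
])

totals = responses.sum(axis=1)  # each student's overall exam score
for i in range(responses.shape[1]):
    item = responses[:, i]
    # For a 0/1 item, the Pearson r with the total score IS the point
    # biserial. (A common refinement correlates the item against the
    # total minus the item itself, to avoid self-correlation.)
    r_pb = np.corrcoef(item, totals)[0, 1]
    print(f"Q{i + 1}: point biserial = {r_pb:+.2f}")
```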



The worst questions are those that have a low p(correct) and a negative point biserial coefficient. A negative coefficient means that students who performed poorly on the exam were more likely to get the question correct than students who performed well. Generally, when I look at these questions I find they are unintentionally misleading. The strongest students in the class tend to read too much into these questions, over-interpret them, and talk themselves out of the correct answer, while weaker students don't know enough to be misled. I will always toss out these questions because they don't do what they're supposed to do – discriminate weak from strong students.

Finally, I look at the overall exam reliability. The reliability coefficient indicates the likelihood that the exam will produce consistent results. High reliability means that students who answered a given question correctly were likely to answer other questions correctly. While the reliability coefficient can theoretically range from 0.00 to 1.00, in practice it tends to fall between 0.5 and 0.9. An exam with a reliability coefficient above 0.9 is excellent, and is generally where standardized testing services like the Educational Testing Service want their exams to fall. I'm pretty pleased to say that the exam reliability coefficients for my second year courses are between 0.85 and 0.9.
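As a sketch (and an assumption on my part about what the scoring software computes), here's KR-20 (Kuder-Richardson 20), one standard reliability coefficient for 0/1-scored items – it's the special case of Cronbach's alpha for dichotomous data – run on the same made-up matrix as above:

```python
import numpy as np

def kr20(responses: np.ndarray) -> float:
    """KR-20 reliability for a 0/1 response matrix (students x items)."""
    k = responses.shape[1]                   # number of items
    p = responses.mean(axis=0)               # item difficulties, p(correct)
    q = 1 - p                                # proportion incorrect per item
    total_var = responses.sum(axis=1).var()  # variance of students' total scores
    return (k / (k - 1)) * (1 - (p * q).sum() / total_var)

# The same made-up matrix as in the sketches above:
responses = np.array([
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [0, 0, 0, 0],
])
print(f"exam reliability (KR-20) = {kr20(responses):.2f}")
```

A real exam would of course have far more students and items than this toy example; with only 4 items the coefficient is very noisy.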

So the next time a prof tells you that tough questions help them discriminate the strong students from the weak students, ask them what metric they're using to make that evaluation. Ideally it won't exclusively be based on how many students answered the question correctly.

Tuesday, October 7, 2014

Creating Good Exams


When I'm creating a new exam for a course, I typically consider 3 major factors: the category of questions, the content coverage, and the basic comprehensibility of the exam, including best practices for writing good exam questions.



Bloom's Taxonomy of Learning is a classification of learning objectives that educators use to evaluate their practice and students' performance. At the very bottom is knowledge. A student needs to be able to recall the basic concepts of a discipline to be able to do anything else with the content. However, this is the lowest level. As you move up the taxonomy, the kinds of learning objectives increase in difficulty, as do the skills necessary to meet the objectives. Educators have mapped different kinds of exam questions onto the taxonomy. For simplicity, I usually use 3 exam question classifications:
  • Conceptual Understanding Questions: These questions go beyond recall and assess students’ understanding of important concepts. Answer choices to these questions are often based on common student misconceptions or points that students tend to confuse.
  • Application Questions: These questions require students to apply their knowledge and understanding to particular situations and contexts. Application questions often ask students to make a decision or choice in a given scenario, connect course content to “real-world” situations, implement procedures or techniques, or predict the outcome of experiments.
  • Critical Thinking Questions: These questions operate at the higher levels of thinking, requiring students to analyze relationships among multiple concepts or make evaluations based on particular criteria. These questions require students to engage critical thinking skills (e.g. identifying what the question is really about, recognizing unstated assumptions, prioritizing information for problem-solving, etc.).
I aim to have about a third of the questions come from each category. It's important that students can recall the basic concepts, but I'm not that interested in students' ability to memorize and regurgitate my lectures. Application and critical thinking questions are more informative. They allow me to determine if students can actually use the concepts intelligently.

I also work really hard to make sure there is fair coverage of the material. Exams can't test every concept. They are a sample of the concepts we've covered that allows me to estimate the total amount of information a student can recall and use from the course. But my selection of questions isn't totally random. For example, I try to make the number of questions per lecture proportional to the length of the lecture. I sometimes have short lectures to accommodate in-class tutorials, and those lectures have fewer questions on the exam. If I spent a lot of time on something in lecture, generally it will show up more prominently on the exam.
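To make that proportionality concrete, here's a toy sketch in Python (the lecture lengths and exam size are invented, and this isn't literally how I tally it):

```python
# Hypothetical lecture lengths in minutes (one short lecture to
# accommodate an in-class tutorial) and a made-up exam length
lecture_minutes = {"Lecture 1": 100, "Lecture 2": 100, "Lecture 3": 50}
total_questions = 50

total_minutes = sum(lecture_minutes.values())
raw = {name: total_questions * mins / total_minutes
       for name, mins in lecture_minutes.items()}
alloc = {name: int(share) for name, share in raw.items()}

# Hand out any questions lost to rounding, largest fractional part first
leftover = total_questions - sum(alloc.values())
for name in sorted(raw, key=lambda n: raw[n] - alloc[n], reverse=True)[:leftover]:
    alloc[name] += 1

print(alloc)  # {'Lecture 1': 20, 'Lecture 2': 20, 'Lecture 3': 10}
```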

Finally, I take into account best practices for writing clear exam questions. I don't want students to do poorly on the exam because they weren't sure what the question was getting at. I try to use simple grammar and I try to avoid colloquialisms that might be difficult for English Language Learners. I also try to avoid negatives. Sometimes it's hard to avoid, but if I feel like I need to include a negative, like "Which of the following is FALSE", I draw attention to the negative by putting it in capital letters, or underlining it, or bolding it, or all three.

At the McMaster Symposium on Education & Cognition, I attended a workshop on how to write good multiple choice exams, led by Dr. Joe Kim. One of the changes I've made based on that workshop is to avoid using "None of the Above" or "All of the Above", because those options tend to be correct more than 25% of the time (i.e. above chance). In addition, to correctly select "All of the Above" a student only has to recognize two of the options as correct. I've also stopped using options like "Both (A) and (C)". These are more tests of working memory and logic, and it's not fair to evaluate students on those abilities rather than their understanding of course concepts.

I do other things to make sure my exams are fair and effective means of evaluating students' understanding and skills related to the course content, but those are the big 3 for me. Do you think I'm missing anything?

Wednesday, September 3, 2014

The week before




The week before lectures begin is easily my busiest week of the year. I have a kabillion things to do (that's a real value, right?).

I have all of the content for my courses that needs to be loaded onto Blackboard, the course management software that we use at UofT, but it needs to be perfect before I post it. The last thing I want is to notice a typo on a document, have to edit the document, save it as a pdf, and re-upload it to Blackboard. Instead, I hold everything back until I'm sure it's in the best condition possible, and then I try to let go of any of the little typos. Because no matter how hard you look, there are always tyops.

But committing to a decision on things like exam scheduling, assignment weights, and the overall course calendar of events is nerve-wracking. What if I accidentally schedule a term test for the same week I'm expected to present at a conference? Or the week before my final exams are due in to the Faculty of Arts & Science? Where will I find the time to create a really great final exam when I'm preparing an answer key for the term test & training the TAs on how to use it?

My first semester at UofT, I accidentally scheduled all three of my courses to have a term test in the same week. That meant that the week before, when I was still preparing & delivering lectures, I was also creating 3 exams that needed to be at the Photocopy Centre before Friday. And the next week, I was madly writing the answer keys for the TAs just as fast as I could, to make sure the TAs had them before the exams were actually done. And then I spent the rest of the week and the entire weekend writing 9 hours of new lecture material for the following week (which takes about 5-12 hrs to prep per hour of lecture, depending on the topic). It was something of a nightmare. Of course, I had less than a month to prepare for 3 courses, over 1000 students, and 14 TAs that semester, so I still marvel at how I survived.

I'm really trying to make two points: (1) In writing the syllabus, there are a lot of variables to consider. If you see something on a syllabus that you think doesn't make sense or seems unfair (for example, a term test the day after Thanksgiving), try to put yourself in your instructor's shoes. Suddenly, the decision might make sense; and (2) this is why my Blackboard pages aren't available yet. I want to make the content perfect and anticipate all possible dangers before I release the Blackboard pages into the world. Please be patient ;)

Wednesday, August 20, 2014

Education & Research

I recently attended the McMaster Symposium on Education & Cognition. One of the organizers, Dr. Joe Kim, told a very interesting story. He was with colleagues who were discussing the overhaul of a medical program at another Ontario university from its current incarnation to a more evidence-based program. Everyone joked, "What were they basing their medical training on before??!"

Hahahaha...right?! Seriously, what are you teaching medical students, if not what research has to say about the practice of medicine?!



After the jokes were over, Dr. Kim asked the faculty members he was with, "How many of you use evidence-based teaching practices?" No one laughed. There was mostly an uncomfortable silence.

This very much echoes my own experience. When I started teaching, it was trial-and-error. I tried to emulate the instructors that I admired when I was an undergraduate, and I threw in a few things that I thought were interesting and innovative but had no reason to believe would work. I did not consult the literature.

There is a whole world of research on best practices for instructional design in higher education, but far too few university-level instructors take advantage of it. It comes naturally to most researchers to go to the literature when they have a discipline-specific question, but for some reason, it doesn't occur to us to do the same thing when we have a teaching question. My own teaching practice has evolved to rely quite heavily on relevant educational and cognitive research, and I think I can genuinely call my practice evidence-based. I also think there is a move among instructors and educational developers at universities to make educational research more prominent in professional development.

I get a few students every year who challenge me on my teaching practice. They question my slides, or my evaluation rubric, or my exam questions, etc. I appreciate that students want to understand my motivation (it's part and parcel of why I started this blog). I think students should challenge more of their instructors to produce evidence to support their teaching practice. At the very least, it will keep us on our toes! And it could very well motivate more instructors to migrate to evidence-based practice.

I wrote more about my experiences at the EdCog Symposium for the Centre for Teaching Support & Innovation (CTSI) blog. If you're interested in finding out about some of the research presented and some ideas about how I might implement practical applications of the research, click here to read on.