TRIUMF & The BC Association of Physics Teachers
Are pleased to announce
“Kindling your passion for physics teaching”
Are you going to be teaching physics next year?
Come and be inspired by award-winning physics researchers and educators while networking with colleagues, sharing practical resources, and learning about physics applications.
A Conference and Workshops for Secondary Science Teachers
Provincial Pro-D Day, Friday October 19th, 2018 at TRIUMF, Vancouver BC.
We are delighted to announce this year’s keynote speaker, Officer of the Order of Canada,
Dr. Jaymie Matthews – UBC Physics and Astronomy
SAVE THE DATE!
Detailed program and registration information to follow in September 2018.
Fix 15: Don’t leave students out of the grading process. Involve students; they can and should play key roles in assessment and grading that promote achievement.
Brilliant. Happy to end this journey on a positive note. In my last post I was questioning leaving my students more “in the dark,” so to speak, about assessment, but I’m pleased to see that this isn’t a helpful practice. It’s nice to see my instincts and/or training are largely congruent with Ken’s 15 Fixes.
Unfortunately, with some of my students, I don’t feel that their involvement in assessment promotes achievement… yet. This is more of a long-term cultural shift for them, I think, and it just takes time and consistency. Hopefully another year at the same school, with the same crew of kids, brings out that ownership of their own learning.
On that note I’m going to cut it short and say thanks for tuning in. I had fun with this project, and I always enjoy looking critically at my own practice. I strive to find better ways to engage my students and help them feel a sense of pride and curiosity around my lessons and their school. I think assessment in general is a real driver of both positive and negative associations with school, and I’m hoping that these 15 Fixes “rekindle the fire” so to speak about your own forward thinking in your classroom. Have a great summer!
Fix 14: Don’t summarize evidence accumulated over time when learning is developmental and will grow with time and repeated opportunities; in those instances, emphasize more recent achievement
Have I mentioned that this book is amazing, and that the lessons summarized are both intuitive and overwhelming at the same time? I have dived into the deep end; I am on board, and I have bought into these best practices for assessment. That said, organizing something like this, in my head or on my computer, is a seemingly difficult task. Maybe I just picture some of the parent-teacher interviews I have had. I am a very organized person, but my system looks completely disorganized to a parent, especially a critical parent wondering why their child isn’t doing so hot. I usually have student names going along the Y axis and learning outcomes going across the X axis, and I have somehow (even though it is on a screen or on paper) added a Z axis for time. I have paper-based records for my day-to-day observations, and I update marks “that count” every few weeks in our system. There is a number system to show levels of understanding (1, 2 or 3); I have tried colour coordinating, and I have a variety of shorthand letters and symbols that mean various things. I try to annotate any missing assignments for a student as they arise (to me, an excused absence for the dentist is different from sleeping in or skipping). I think relationships are more important than record keeping, but I also don’t want to get caught with my pants down if a parent or principal has questions. It’s a delicate balance…
At the end of the day, learning takes time, and I really try to be clear with my students that their practice doesn’t count; taking risks in class is not going to bring their marks down. The term “weight” is a little confusing for some of them; even some grade 10s and 11s still don’t get that a 20-question test at the end of a unit will drive their grade disproportionately more than a 20-question assignment. I think students should know how they are being marked and understand the reasons behind it. You can’t hit a bullseye if you don’t know where it is. Maybe that is my problem? I’m too transparent with my students, and they don’t have the maturity, vocabulary or training to understand it? Should I just lay out how the course will run on the syllabus and the first day of class, and then leave more mystery to it all? Something to consider, I guess…
Getting back to Fix 14 before I totally go off the rails. Maybe the difficulty I have scaffolding this is focusing on learning goals that are too big or too long-term. Is the learning goal “I know the locations and charges of protons, neutrons and electrons in an atom,” or is it “chemical processes require energy change as atoms are rearranged”? One goal is 60 minutes tops; the other is 60 days. Keeping a running tally of where a student is at, so to speak, is much easier to document with one goal than with the other.
So, as usual, my reflection has both reaffirmed my philosophy about science education and assessment and totally twisted everything around backwards and upside down at the same time. Thanks, as usual, even though I don’t mention it specifically, for any comments or insight you can provide. As a related side note, please do keep in mind the Canadian Assessment for Learning Conference & Symposium in Delta, May 1-3. I will be there with bells on.
Fix 13: Don’t use information from formative assessments and practice to determine grades; use only summative evidence.
I think I have mentioned this before, but I promise I am not looking ahead; I just mentioned this yesterday! If it were me learning something new, I would only want summative evidence to determine my grade. If I had my six-month performance evaluation at work scheduled (hypothetically), I would want to practice a specific set of skills and have my final mark based only on the end of my learning journey. I think a student learning introductory Biology or Chemistry at their grade level should be no different. Why would you want your mark to be an average of all your mixed results throughout?
The students, however, seem to think differently; maybe it is an age thing? My own daughter is in kindergarten. She would never have a final test on writing her name at the end of June; her grade is based on evidence of an ongoing evolution of her skills. Do primary students have their reporting based entirely on formative assessment, while by Grade 11 and 12 students have their grade based mainly on summative assessment? Upper intermediate and junior high school seem to be bridging the gap, from what I have seen. So how do I make it clear to a 14- or 15-year-old that giving a final grade based on formative checkpoints in the middle isn’t really fair to them?
Here is how I have orchestrated the conversation to get my students to “buy into” having a mark based mostly on summative assessment. Most of my kids are starting to drive when I teach them. To get your driver’s licence in BC you have to go through graduated licensing: pass a knowledge test for your learner’s (L) licence, practice with a supervisor, pass a road test for your novice (N) licence, and then pass a second road test for your full licence.
Students see the safety implication of this analogy and agree that they wouldn’t want their ability to drive to be based on all of their early attempts “weighing down the average.” So, back to the skills and knowledge being measured in a science class. I think the biggest psychological obstacle to basing a student’s mark entirely on summative assessment is what students picture a test to be; maybe it is a “test” thing? Fifty or more multiple choice questions, no talking, no phone, no binder. That environment causes anxiety for many, adults and children included. And is it really an accurate reflection of learning? Maybe for some students, but certainly not for all.
I really like the direction education is moving, whereby summative assessment is a conversation or a presentation of learning in a medium of your choosing. Summative assessment can include self-assessment, because let’s face it, students are much harsher critics than we are. When was the last time you studied for a written or multiple choice test as an adult? Our “tests” are based mainly on observations and conversations, and sure, there may be a score at the end, but it is a much more authentic experience of where we are at in a given context. Think of a job interview, or checking in on your quarterly sales goals. Don’t our students deserve the same time and respect?
Fix 12: Don’t include zeros in grade determination when evidence is missing or as punishment; use alternatives, such as reassessing to determine real achievement or use “I” for Incomplete or Insufficient Evidence
Ok readers, here we go: the final few fixes. Ironically, I feel a little like my students do this time of year, in that I’m ready to accept the zero on my final 3 or 4 tasks because I’m burnt out. Almost there though; close to the finish line and ready to finish strong.
Throughout the school year I try to give students progress reports every two or three weeks, with “missing” as a placeholder if they forgot to hand an assignment in or chose not to do it for whatever reason. I have two very different students in mind who, by June 22nd when our report cards were due, had nearly half of the term’s assignments missing. I did not put any zeros in; both students were made aware of the catch-up day on Monday, but I did not make it mandatory for either of them. Both students took a final unit exam: one student got 50% on the test, the other got 80%. I am pretty confident that this is a realistic evaluation of how much each student understands of the subject. I left the holes in my mark book as holes and posted their grades based solely on this final test.
This particular class I’m describing is Science 10; my breakdown is 30% formative assessment (labs, assignments) and 70% summative (chapter quizzes and unit tests). I actually consulted the class, and we came up with this breakdown collaboratively. I described what formative and summative mean, and explained that I want to help them learn as much as possible during instruction and then measure it at the end; that, to me, is the point of assessment. Many students want some credit for “practice” assignments, and realise that daily work will (usually) bring both their mark and their level of understanding up. It’s interesting to me that they don’t see their mark as being the same thing as their level of understanding. Additionally, many students feel anxiety around tests, and don’t feel it is fair to base the whole course on summative assessments. It was interesting for me to hear their opinions about assessment, and I try to be honest and responsive to those opinions so we can come up with a fair plan for everyone.
I’m curious now to run the numbers on my 80 and 50 percentage students, to see how much completing their missing assignments would affect their mark, and I can only speculate how much it would affect their level of understanding. The potential disconnect between these two is something I will definitely keep in mind as I plan for next year…
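Out of curiosity, here is a minimal sketch of that calculation in Python. The 30/70 split is the one from my class above; every score in it is invented for illustration, and the helper names are mine, not anything from MyEd:

```python
def average(scores, missing_as_zero=False):
    """Average a list of percentages; None marks a missing assignment."""
    if missing_as_zero:
        scores = [0 if s is None else s for s in scores]
    else:
        scores = [s for s in scores if s is not None]  # leave holes as holes
    return sum(scores) / len(scores) if scores else 0.0

def course_grade(formative, summative, missing_as_zero=False):
    # 30% formative, 70% summative, as negotiated with the class
    return 0.30 * average(formative, missing_as_zero) \
         + 0.70 * average(summative, missing_as_zero)

# A hypothetical "80% student": strong final test, half the assignments missing.
formative = [None, None, None, 75, 80, None]
summative = [80]

holes = course_grade(formative, summative, missing_as_zero=False)
zeros = course_grade(formative, summative, missing_as_zero=True)
print(f"holes excluded: {holes:.1f}%, counted as zeros: {zeros:.1f}%")
```

With these made-up numbers the gap between the two policies is enormous, roughly 79% versus 64%, which is exactly the disconnect between mark and understanding I want to keep an eye on.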
Fix 11: Don’t rely only on the mean; consider other measures of central tendency and use professional judgment
Good morning. I was feeling guilty because I wanted this done by the end of June, and darn it, it’s not looking optimistic. Oh well, good to do some professional development over the summer months. Thankfully this one is an easy one, so I will keep it short, and maybe (but don’t quote me) might even reflect on Fix 12 before 3:00. Fingers crossed.
The last time I calculated the mean for a class or a test was in 2011, when I had 62 students taking Science 10 in two blocks in Semester 1. The sample size was high; there was pretty good anonymity for top and bottom marks on the “tails” of my bell curve. Now I have 9 students taking Science 10, so if the mean was of limited use before, it is completely delusional now. Interestingly, though (a sarcastic thanks, MyEd), our district’s system calculates it for me automatically. I don’t even look at that number at the bottom of my column of final grades; it is meaningless.
Great news: we are back to professional judgement, which is an amazing safety net for those who are wrapped up in numbers and accountability. Your doctor takes a few measurements and runs a few tests, but most of the time the doctor’s professional judgement has the diagnosis long before the numbers are run or the samples are sent away. You are a professional, just like a doctor. Know what tests to run, and how often to run them, based on the individual student. Trust your judgement; as long as you continue to monitor your students’ progress in a timely and constructive manner, you are good. Focus on the conversations with them, not on having a high average or a narrow standard deviation. Students are people first; you can’t treat them like an algorithm.
Fix 10: Don’t rely on evidence gathered using assessments that fail to meet standards of quality; rely only on quality assessments
Haha, maybe the end of the school year wasn’t the best time to attempt these 15 fixes. Ok, yes, challenge accepted: I will only accept quality assignments. What is quality for one, however, can be mediocre for another. Fair is not equal… I digress.
Here is the climate in Science 10 lately. We are looking at how ecosystems change over time; the students have a bunch of jumbled-up stages of secondary succession. Their task is to put the text in order and illustrate what that would actually look like (make a cartoon). Student A reads the directions carefully, cuts and pastes the steps, rearranges them into the correct order and does a satisfactory job of illustrating; their conclusion is written clearly on the back of the page by the end of the period. Student B does not read the directions, writes out the steps in shorthand (almost illegibly) on a scrap of paper and attempts to use the Storyboard That website to digitally draw a cartoon. Student A can verbally explain the stages of succession; their work matches their understanding. Student B can also explain the science pretty accurately verbally, but never actually completed any assignment to support that explanation, despite 3 gentle and encouraging reminders in 3 successive class periods. The partially completed version of the assignment I saw several times, left in my classroom, was certainly not a quality assignment.
I am not wrapped up in the means by which I get my evidence, but at the end of the day I want to be accurate and consistent, and I want students to be clear on how they can improve their level of understanding. Student B would take 12 months to complete a 5-month course if everything I required was “quality.” That, or the current situation, whereby he finished the course in 5 months but his mark is not the greatest because I didn’t have the patience or tenacity to wait for quality.
One of my mantras for assessment is that weighing a pig does not make it fatter. I don’t want to collect droves of assignments; I would much rather collect one quality assignment every few weeks (depending on the age and subject). This, for me, definitely brings about a bigger issue and a paradigm shift for both students and their parents, as well as my colleagues.
I guess my take home message after being mindful of this fix is that I need to know my students. Only after I know them and their interests and abilities can I truly understand what quality looks like for them specifically. Then, and I know this is lacking for me at times, I need to stay diligent with those repeat offenders and keep giving back low-quality evidence until it is good enough. Man. Assessment is so cool – it takes both sensitivity and discipline simultaneously.
Fix 9: Compare students’ achievement to pre-set standards, not to each other.
Haha, this one makes me laugh. It makes me think of one of my Science 10s, whose assignments could be the answer keys. In fact, they are better than the answer key, because her printing is neater than mine. On the spectrum of meeting, achieving and exceeding standards, this particular student is exceeding, head and shoulders above my standards and above the standards suggested in the BC Curriculum, for that matter. No teacher could fairly compare other students to her, because she has such exceptional effort and ability; it wouldn’t be fair to either party. Even just considering the “meeting expectations” students: all of my students are so unique, with their skill sets, abilities, interests and work habits, that it’s like comparing apples to oranges, even if they are categorically both “meeting expectations.”
An interesting and related side note: I am having one last crack at the old Science 10 curriculum. Guilty as charged, sorry, but I had my reasons. The thing about this curriculum is that the learning standards look like this:
Explain half life with reference to rates of radioactive decay
Ok, let’s tweak that a little bit…
“I can explain half life with reference to rates of radioactive decay”
“Ok kids, here are 100 pennies; your job is to model the parent isotope with heads and daughter isotopes with tails, graph it… and answer the conclusion question at the end.” The standards for this particular learning goal, and this particular classroom activity, are completely transparent and unambiguous; the standard is also a manageable enough chunk to approach in one or two classroom activities.
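For anyone who wants to extend the penny activity, here is a minimal simulation of the same model in Python; the random seed and the printout format are arbitrary choices of mine, not part of the classroom activity itself:

```python
import random

random.seed(1)   # fixed seed so the demo is reproducible
parents = 100    # 100 pennies, all heads (parent isotope) to start
history = [(0, parents)]

shake = 0
while parents > 0:
    # each remaining parent flips; tails (probability 0.5) means it decayed
    parents = sum(1 for _ in range(parents) if random.random() < 0.5)
    shake += 1
    history.append((shake, parents))

for shake, remaining in history:
    print(f"shake {shake}: {remaining} parent isotopes remain")
```

Each shake roughly halves the parent count, so the printed table is exactly the data the students graph, and the half-life falls straight out of the shape of the curve.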
More interesting for me is the new learning standard, “chemical processes require energy change as atoms are rearranged.” This is still a pre-set standard, and I’m still not going to compare my students to each other, but it obviously requires some unpacking to make it helpful both for the students to learn from and for the teacher to assess. For this daunting task, lately, I am trying to develop Learning Maps for each of my units. It takes some front loading (what else did I want to work on in July?!), but once the scaffold is there, periodic check-ins with students and with yourself for assessment are seamless and constructive. Learning Maps are amazing because they remove scores and put language to the learning goals; they are discreet, because every student has to have a conversation with you and it looks the same for everyone; and lastly, they factor in the multiple entry points into a topic and emphasise forward growth.
Sorry folks, I kind of drifted away from my central thesis there. My specific instructions for blogging were “short and to the point <winky face>”. I guess I am “approaching expectations” in the blogging department today.
Fix 7: Don’t organize information in grading records by assessment method, or simply summarize into a single grade; organize and report evidence by standard / learning goal
Ok, for anybody out there reading this, I would love to see how you do this. I struggle with it. I have heard that I need to try FreshGrade but haven’t yet. Maybe in July I will do some recon… Currently, in Biology 11, I give the students our learning goals every Monday morning to start the week. However, when I report evidence, it is definitely a summarized grade and is not specifically organised by standards (just the title of the assignment). I do specify whether my marks are formative or summative, and I always reiterate the weight of major tests, etc. Almost all of the time, for formative assessment, I use a three-point scale to indicate the level of comprehension. I find that our system just doesn’t allow for the kind of information I want to include (conversations, observations, etc.) all in one place.
When I teach math, and teach out of the textbook, I have a system I enjoy that works for both me and the students. The learning goals are plentiful, which can be intimidating for some, but we pick away at them; the students’ day-to-day activities, as well as my reporting, are lined up around clear descriptive standards. When I teach Biology, however, which is my forte, the organization fails me to some extent. Maybe it’s because I have only taught the course twice, and the curriculum has been different each time… Or maybe it’s MyEd. Or maybe it’s because my “bell curve” is so broad that my standards are different for some students compared to others.
I am, by nature, pretty organized and Type A. I can admit it. I have an MSc, but since going into Education from applied science, my focus has evolved from numbers and algorithms into more personal documentation and note keeping. I have seen Kindergarten teachers’ mark books, and I think we high school teachers could learn a thing or two. A kindergartener is never going to get a score out of 5 on raising their butterfly from a caterpillar. You will never see a chronological list of the worksheets they attempted in order to describe metamorphosis. Kindergarten records are a living document, because a kindergartener changes month to month, scratch that, day to day. No “grades” are permanent; each is just a snapshot in time of where they are at and how we can all support growth.
I want to spend my time teaching my students and preparing ways to inspire them at their level. I went into this career knowing that record keeping is a big part of it, but I am skeptical that there is software that can do this in a meaningful, value-added way without being a huge sink on my time. Hmm… I’m kind of leaving this one hanging, which I don’t like. Feel free to comment, please. Until next time.
Guest Blog by Kevin Yapps.
Kevin has taught in a variety of Science settings. His current role is teaching French Immersion Sciences, as well as a combined French Science-Math option, at WL Seaton Secondary in Vernon, BC. Kevin is an avid tinkerer and maker and loves incorporating making into his Science instruction. @yappsolutely
Osmosis: abstract and difficult to demonstrate
For years, I have been trying to come up with a concrete methodology that helps learners better understand osmosis. Laboratory exercises and online videos have made it much simpler to understand diffusion. However, I have always found it daunting to come up with activities that make osmosis easier to understand, because it almost seems counter-intuitive.
A colleague shared a lab activity with me in which students used red onion epidermis. Students were instructed to do a classic wet-mount slide preparation. Then, with a pipette, they injected a saline solution between the slide and the cover slip. All of this was to be executed in real time with the prepared slide on the stage: talk about complicated! Even when it worked, it was difficult for the untrained eye to understand what it was looking at.
Let’s just say that the students, as always, loved the fact that they were doing “lab work,” but they executed it without fully knowing whether or not it had worked. I was left having to explain what they should have been seeing. A student mentioned that everything happened too fast. This made me reflect. I didn’t want to throw the activity out; it had so much potential.
How it all came together: the “a-ha moment”
I began to reflect on the activity. What went well? What went horribly wrong, and what needed to be improved? About a week prior to this activity, students had been encouraged to use their smartphones during lab work to snap photos of pond life through the microscope. This was a very engaging activity, and students were proud to share, via AirDrop or email, their various photos and videos of microorganisms (I even began to archive these in a Google Photos album). A collective problem with this technique was finding a way to hold the phone still to take a photo. Some used two sets of hands! Then they had to crop the irrelevant segments out of their photos.
I thought to myself: “If only there was a way to hold the cameras to take better photos. I could 3D-print a mounting apparatus! Photos will be clear and I can even take video like I would with a tripod.” Using a 3D printer was not a viable option for me at that point, so I began to improvise with selfie-sticks and “oogoo” (silicone and cornstarch). Let’s just say that my prototypes were marginally functional.
Then, one evening, while surfing aimlessly on the Interweb, I stumbled upon a DIY microscope video on the Instructables website: a home-made microscope that used a small acrylic lens coupled with the focusing power of a smartphone camera. Subsequent YouTube searches yielded demonstrations, one of them with red onion cells. I immediately concluded that this project could solve my previous problems with camera stillness and video. To boot, I had also come to the realization that time-lapse is a function that exists on most new smartphones.
I now had a way to watch the phenomenon in real time, in time-lapse, and then repeatedly by using video. For example, once salt water is injected around the cells, water leaves the cells by osmosis to balance the higher solute concentration outside, and the cytoplasm visibly pulls away from the cell wall. In real time this is difficult to notice, as it happens over minutes. Time-lapse permits the observer to watch 5 minutes’ worth of footage in a few seconds. Moreover, the time bar in the video application permits the observer to go back and forth in time. If you miss something, it’s always accessible! Having the luxury of going over the visual component of osmosis with learners is a very powerful tool. (The following photos were all taken using these microscopes.)
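The time-lapse arithmetic is worth making explicit. This little sketch assumes a one-frame-per-second capture rate and standard 30 fps playback, which are my illustrative assumptions rather than any particular phone’s settings:

```python
# Back-of-envelope time-lapse math (assumed settings, not phone specs).
capture_interval_s = 1       # capture one frame per second of real time
event_duration_s = 5 * 60    # the 5-minute osmosis event
playback_fps = 30            # standard video playback rate

frames = event_duration_s // capture_interval_s   # total frames captured
playback_seconds = frames / playback_fps          # length of final video
speedup = event_duration_s / playback_seconds     # apparent speed-up

print(f"{frames} frames play back in {playback_seconds:.0f} s ({speedup:.0f}x)")
```

Five minutes of plasmolysis compresses into about ten seconds of video, a thirtyfold speed-up, which is exactly why the change becomes visible to students.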
*3x 4 ½” x 5/16” carriage bolts
*9x 5/16” nuts
*3x 5/16” wing nuts
*5x 5/16” washers
**¾” x 7” x 7” plywood — for the base
***⅛” x 7” x 7” plexiglass — for the camera stage
***⅛” x 3” x 7” plexiglass — for the specimen stage
****11/32 inch threaded Acrylic lens (use two for increased magnification)
*****LED click light (necessary only for viewing backlit specimens)
*Can be found at any hardware store – It is better to buy from a wholesaler to cut costs
**I use old shelving – You will need a saw to cut this with the appropriate dimensions
***I found used plexiglass in the school; canvassing your local glass retailers for donations is another option (even scrap pieces work)
****I found small acrylic lenses on Amazon
*****LED lights can easily be found at any hardware store
Use the Instructables website if you are uncomfortable troubleshooting on your own. For building, you will need some basic tools and accessories; at minimum, a saw to cut the plywood and a drill for the bolt holes (see the notes above).
Potential related projects and cross-curricular connections
There is the potential to collaborate with a visual-arts class. Students can share their photos with collaborating arts students who can transform microscopy photos into works of art. This could work well with pond life organisms, insects, etc.
If you are not comfortable using small power tools in your classroom (e.g. drills and saws), you can talk to your school’s tech-ed instructor. High school and middle school shops usually have the necessary tools.
Every semester, I have students use previous classes’ home-made microscopes. They use the “Design Thinking” methodology to establish a plan to build a 2.0 version.