2019 Science Games Steering Committee
Engineers and Geoscientists BC is looking for two educators to volunteer on our 2019 Science Games Steering Committee. This group develops the activities for our annual Science Games event in March. At the Science Games, students in Grades 1-6 work in teams to complete various hands-on science challenges. Division 1 activities are designed for students in Grades 1-3, and Division 2 activities are designed for students in Grades 4-6. We’re looking for educators to volunteer on the committee and provide insight on the science curriculum for these grades and on how we can tailor our challenges so they are appropriate for these age groups. Learn more about this volunteer opportunity or apply online.
If you have questions about this volunteer opportunity, please contact firstname.lastname@example.org.
TRIUMF & The BC Association of Physics Teachers are pleased to announce
“Kindling your passion for physics teaching”
Are you going to be teaching physics next year?
Come and be inspired by award-winning physics researchers and educators while networking with colleagues, sharing practical resources, and learning about physics applications.
A Conference and Workshops for Secondary Science Teachers
Provincial Pro-D Day, Friday, October 19, 2018, at TRIUMF, Vancouver, BC.
We are delighted to announce this year’s keynote speaker, Officer of the Order of Canada,
Dr. Jaymie Matthews – UBC Physics and Astronomy
SAVE THE DATE!
Detailed program and registration information to follow in September 2018.
Fix 15: Don’t leave students out of the grading process. Involve students; they can and should play key roles in assessment and grading and promote achievement.
Brilliant. Happy to end this journey on a positive note. In my last post I was questioning leaving my students more “in the dark,” so to speak, about assessment, but I’m pleased to see that this isn’t a helpful practice. It’s nice to see my instincts and/or training are largely congruent with Ken’s 15 Fixes.
Unfortunately, with some of my students, I don’t feel that their involvement in assessment promotes achievement… yet. This is more of a long-term cultural shift for them, I think, and it just takes time and consistency. Hopefully another year at the same school with the same crew of kids brings out that ownership of their own learning.
On that note I’m going to cut it short and say thanks for tuning in. I had fun with this project, and I always enjoy looking critically at my own practice. I strive to find better ways to engage my students and help them feel a sense of pride and curiosity around my lessons and their school. I think assessment in general is a real driver of both positive and negative associations with school, and I’m hoping that these 15 Fixes “rekindle the fire” so to speak about your own forward thinking in your classroom. Have a great summer!
Fix 14: Don’t summarize evidence accumulated over time when learning is developmental and will grow with time and repeated opportunities; in those instances, emphasize more recent achievement
Have I mentioned that this book is amazing, and the lessons summarized are both intuitive and overwhelming all at the same time? I have dived into the deep end; I am on board; I have bought into these best practices for assessment. That said, organizing something like this, in my head or on my computer, is a seemingly difficult task. Maybe I just picture some of the parent-teacher interviews I have had. I am a very organized person, but my system looks completely disorganized to a parent, especially a critical parent wondering why their child isn’t doing so hot. I usually have student names going along the Y axis and learning outcomes going across the X axis, and I have somehow (even though it is on a screen or on paper) added a Z axis for time. I have paper-based records for my day-to-day observations, and I update marks “that count” every few weeks in our system. There is a number system to show levels of understanding (1, 2 or 3); I have tried colour coordinating, and I have a variety of shorthand letters and symbols that mean various things. I try to annotate any missing assignments for the student as they arise (to me, an excused absence for the dentist is different than sleeping in or skipping). I think relationships are more important than record keeping, but I also don’t want to get caught with my pants down if a parent or principal has questions. It’s a delicate balance…
At the end of the day, learning takes time, and I really try to be clear with my students that their practice doesn’t count; taking risks in class is not going to bring their marks down. The term “weight” is a little confusing for some of them; even some Grade 10s and 11s still don’t get that a 20-question test at the end of a unit will drive their grade disproportionately more than a 20-question assignment. I think students should know how they are being marked, and understand the reasons behind it. You can’t hit a bullseye if you don’t know where it is. Maybe that is my problem? I’m too transparent with my students, and they don’t have the maturity, vocabulary or training to understand it? Should I just explain how the course will run on the syllabus and the first day of class and then leave more mystery to it all? Something to consider, I guess…
Getting back to Fix 14 before I totally go off the rails. Maybe the difficulty I have scaffolding this is focussing on too big or too long-term a learning goal. Is the learning goal “I know the locations and charges of protons, neutrons and electrons in an atom,” or is it “chemical processes require energy change as atoms are rearranged”? One goal is 60 minutes tops; the other is 60 days. So keeping a running tally of where a student is at, so to speak, is much easier to document with one goal than with the other.
So as usual, my reflection has both reaffirmed my philosophy about science education and assessment and totally twisted everything around backwards and upside down at the same time. Thanks, as usual, even though I don’t mention it specifically, for any comments or insight you can provide. As a related side note, please do keep in mind the Canadian Assessment for Learning Conference & Symposium in Delta, May 1-3. I will be there with bells on.
Fix 13: Don’t use information from formative assessments and practice to determine grades; use only summative evidence.
I think I have mentioned this before, but I promise I am not looking ahead; I just mentioned this yesterday! If it were me learning something new, I would only want summative evidence to determine my grade. If I had a 6-month performance evaluation at work scheduled (hypothetically), I would want to practise a specific set of skills and have my final mark based only on the end of my learning journey. I think a student learning introductory Biology or Chemistry at their grade level should be no different. Why would you want your mark to be an average of all your mixed results throughout?
The students, however, seem to think differently; maybe it is an age thing? My own daughter is in kindergarten. She would never have a final test on writing her name at the end of June; her grade is based on evidence of an ongoing evolution of her skills. Do primary students have their reporting based entirely on formative assessment, while by Grades 11 and 12 students have their grades based mainly on summative assessment? Upper intermediate and junior high school seem to bridge the gap, from what I have seen. So how do I make it clear to a 14- or 15-year-old that giving a final grade based on formative checkpoints in the middle isn’t really fair to them?
Here is how I have orchestrated the conversation to get my students to “buy into” having a mark based mostly on summative assessment. Most of my kids are starting to drive when I teach them. To get your driver’s license in BC you have to go through the following steps:
1. Pass a knowledge test to earn your Learner’s (L) license, then practise with a supervising driver for at least 12 months.
2. Pass a road test to earn your Novice (N) license and drive with restrictions for at least 24 months.
3. Pass a second road test to earn your full license.
Students see the safety implication of this analogy, and agree that they wouldn’t want their ability to drive to be based on all of their early attempts “weighing down the average”. So back to the skills and knowledge being measured in a science class. I think the biggest psychological obstacle to basing the student’s mark entirely on summative assessment is what students picture a test to be; maybe it is a “test” thing? Fifty or more multiple-choice questions, no talking, no phone, no binder. This environment is known to cause anxiety for many, adults and children included. And is it really an accurate reflection of learning? Maybe for some students, but certainly not for all.
I really like the direction education is moving, whereby summative assessment is a conversation or a presentation of learning in a medium of your choosing. Summative assessment can include self-assessment, because let’s face it, students are much harder critics than we are. When was the last time you studied for a written or multiple-choice test as an adult? Our “tests” are based mainly on observations and conversations, and sure, there may be a score at the end, but it is a much more authentic experience of where we are at in a given context. Think of a job interview or checking in on your quarterly sales goals. Don’t our students deserve the same time and respect?
Fix 12: Don’t include zeros in grade determination when evidence is missing or as punishment; use alternatives, such as reassessing to determine real achievement or use “I” for Incomplete or Insufficient Evidence
Ok readers, here we go, the final few fixes. Ironically I feel a little like my students do this time of year, in that I’m ready to accept the zero in my final 3 or 4 tasks because I’m burnt out. Almost there though, close to the finish line and ready to finish strong.
Throughout the school year I try to give students progress reports every two or three weeks, and I use “missing” as a placeholder if they forgot to hand an assignment in, or chose not to do it for whatever reason. I have two very different students in mind who, by June 22nd when our report cards were due, had nearly half of the term’s assignments missing. I did not put any zeros in; both students were made aware of the catch-up day on Monday, but I did not make it mandatory for either of them. Both students took a final unit exam: one student got 50% on the test, the other got 80%. I am pretty confident that this is a realistic evaluation of how much both students understand of the subject. I left the holes in my mark book as holes and posted their grades based solely on this final test.
This particular class I’m describing is Science 10; my breakdown is 30% formative assessment (labs, assignments) and 70% summative (chapter quizzes and unit tests). I actually consulted the class, and we came up with this breakdown collaboratively. I described what formative and summative mean to them, and explained that I want to help them learn as much as possible during instruction, then measure it at the end; that, to me, is the point of assessment. Many students want some credit for “practice” assignments, and realise that daily work will (usually) bring their mark up, and their level of understanding up. It’s interesting to me that they don’t see their mark as being the same thing as their level of understanding. Additionally, many students feel anxiety around tests, and don’t feel it is fair to base the whole course on summative assessments. It was interesting for me to hear their opinions about assessment, and I try to be honest and responsive to them so we can come up with a fair plan for everyone.
I’m curious now to run the numbers on my 80 and 50 percentage students, to see how much completing their missing assignments would affect their mark, and I can only speculate how much it would affect their level of understanding. The potential disconnect between these two is something I will definitely keep in mind as I plan for next year…
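Out of curiosity, the numbers are easy to run. Below is a minimal sketch of that calculation, using the 30/70 formative/summative split and the 80% exam mark from this post; the individual formative scores, and which ones are missing, are invented purely for illustration.

```python
# Hypothetical record for the 80% student: half of the formative
# assignments were never handed in (None = missing).
formative = [75, 80, None, None, 70, None]  # invented scores
summative = [80]                            # final unit exam (from the post)

def average(scores, missing_as_zero):
    """Mean of the scores; missing work either counts as 0 or is excluded."""
    if missing_as_zero:
        values = [0 if s is None else s for s in scores]
    else:
        values = [s for s in scores if s is not None]
    return sum(values) / len(values) if values else 0.0

def final_grade(formative, summative, missing_as_zero):
    """30% formative, 70% summative, per the class's agreed breakdown."""
    return (0.3 * average(formative, missing_as_zero)
            + 0.7 * average(summative, missing_as_zero))

with_zeros = final_grade(formative, summative, missing_as_zero=True)
without_zeros = final_grade(formative, summative, missing_as_zero=False)
print(f"Zeros for missing work: {with_zeros:.2f}%")   # 67.25%
print(f"Holes left as holes:    {without_zeros:.2f}%")  # 78.50%
```

With these made-up numbers, the zeros drag a student who demonstrably understands 80% of the material down to 67%, which is exactly the distortion Fix 12 warns about.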
Fix 11: Don’t rely only on the mean; consider other measures of central tendency and use professional judgment
Good morning. I was feeling guilty because I wanted this done by the end of June, and darn it, it’s not looking optimistic. Oh well, good to do some professional development over the summer months. Thankfully this one is an easy one, so I will keep it short, and maybe (but don’t quote me) might even reflect on Fix 12 before 3:00. Fingers crossed.
The last time I calculated the mean for a class or a test was in 2011, when I had 62 students taking Science 10 in two blocks in Semester 1. The sample size was high, so there was pretty good anonymity for the top and bottom marks on the “tails” of my bell curve. Now I have 9 students taking Science 10, so if the mean was of limited use before, it is completely misleading now. Interestingly, though (a sarcastic thanks, MyEd), our district calculates it automatically for me. I don’t even look at that number at the bottom of my column of final grades; it is meaningless.
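For what it’s worth, Fix 11’s alternative measures of central tendency are trivial to check against a class this small. The nine marks below are invented, but they show the problem: one struggling student drags the mean well below what the “typical” student earned, while the median shrugs the outlier off.

```python
import statistics

# Nine hypothetical Science 10 final marks; one outlier at the bottom.
marks = [88, 85, 84, 82, 80, 79, 77, 74, 35]

mean = statistics.mean(marks)      # dragged down by the single low mark
median = statistics.median(marks)  # the middle student, outlier-resistant

print(f"mean:   {mean:.1f}")    # 76.0
print(f"median: {median:.1f}")  # 80.0
```

A four-point gap from one student is a concrete reason to reach for the median, or simply for professional judgement, before trusting the auto-calculated average.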
Great news: we are back to professional judgement, which is an amazing safety net for those who are wrapped up in numbers and accountability. Your doctor takes a few measurements and runs a few tests, but most of the time the doctor’s professional judgement has the diagnosis long before the numbers are run or the samples are sent away. You are a professional, just like a doctor: know what tests to run, and how often to run them, based on the individual student. Trust your judgement; as long as you continue to monitor the progress of your students in a timely and constructive manner, you are good. Focus on the conversations with them, not on having a high average or a narrow standard deviation. Students are people first; you can’t treat them like an algorithm.
Fix 10: Don’t rely on evidence gathered using assessments that fail to meet standards of quality; rely only on quality assessments
Haha, maybe the end of the school year wasn’t the best time to attempt these 15 Fixes. Ok, yes, challenge accepted. I will only accept quality assignments. Quality for one, however, can be mediocre for another. Fair is not equal… I digress.
Here is the climate in Science 10 lately. We are looking at how ecosystems change over time; the students have a bunch of jumbled-up stages of secondary succession. Their task is to put the text in order and illustrate what that would actually look like (make a cartoon). Student A reads the directions carefully, cuts and pastes the steps, rearranges them into the correct order and does a satisfactory job of illustrating; their conclusion is written clearly on the back of the page by the end of the period. Student B does not read the directions, writes out the steps in shorthand (almost illegibly) on a scrap of paper and attempts to use the Storyboard That website to digitally draw out a cartoon. Student A can verbally explain the stages in succession; their work matches their understanding. Student B, however, can explain the science pretty accurately verbally, but never actually completed any assignment to support their explanation, despite three gentle and encouraging reminders in three successive class periods. The partially completed version of the assignment I saw several times, left in my classroom, was certainly not a quality assignment.
I am not wrapped up in how I get my evidence, but at the end of the day I want to be accurate and consistent, and I want the students to be clear on how they can improve their level of understanding. Student B would take 12 months to complete a 5-month course if everything I required was “quality”. That, or the current situation, whereby he finished the course in 5 months but his mark is not the greatest because I didn’t have the patience or tenacity to wait for quality.
One of my mantras for assessment is that weighing a pig does not make it fatter. I don’t want to collect droves of assignments; I would much rather collect one quality assignment every few weeks (depending on the age and subject). This, for me, definitely brings about a bigger issue and a paradigm shift for both students and their parents, as well as my colleagues.
I guess my take home message after being mindful of this fix is that I need to know my students. Only after I know them and their interests and abilities can I truly understand what quality looks like for them specifically. Then, and I know this is lacking for me at times, I need to stay diligent with those repeat offenders and keep giving back low-quality evidence until it is good enough. Man. Assessment is so cool – it takes both sensitivity and discipline simultaneously.
Fix 9: Compare students’ achievement to pre-set standards, not to each other.
Haha, this one makes me laugh. It makes me think of one of my Science 10s, whose assignments could be the answer keys. In fact they are better than the answer key, because her printing is neater than mine. In the spectrum of meeting-achieving-exceeding standards, this particular student is exceeding, head and shoulders above my standards, and the standards suggested in the BC Curriculum for that matter. No teacher could fairly compare other students to her, because her efforts and abilities are so exceptional; it wouldn’t be fair to either party. Even just considering the “meeting expectations” students: all of my students are so unique in their skill sets, abilities, interests and work habits that it’s like comparing apples to oranges, even if they are categorically both “meeting expectations”.
An interesting and related side note: I am having one last crack at the old Science 10 curriculum. Guilty as charged, sorry, but I had my reasons. The thing about this curriculum is that the learning standards look like this:
Explain half life with reference to rates of radioactive decay
Ok, let’s tweak that a little bit…
“I can explain half life with reference to rates of radioactive decay”
“Ok kids, here are 100 pennies; your job is to model parent isotope with heads, daughter isotopes with tails, graph it… and answer the conclusion question at the end”. The standards for this particular learning goal, or this particular classroom activity are completely transparent and very unambiguous; the standard is also a manageable enough chunk to approach in one or two classroom activities.
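The penny activity is also a nice candidate for a quick simulation, for anyone who wants to project the expected curve before class. The sketch below is my own, not part of the activity: each round, every remaining “parent” penny is flipped; heads survive as parents, tails decay into daughters, so the count falls by roughly half per round (100 → ~50 → ~25 → …).

```python
import random

def penny_decay(pennies=100, seed=None):
    """Flip every remaining 'parent' penny each round; tails decay.

    Returns the list of parent counts after each round, starting from
    the initial count. One round models one half-life.
    """
    rng = random.Random(seed)  # seeded for a repeatable classroom demo
    remaining = pennies
    history = [remaining]
    while remaining > 0:
        # Each flip is a fair coin: heads means the penny survives.
        remaining = sum(rng.random() < 0.5 for _ in range(remaining))
        history.append(remaining)
    return history

print(penny_decay(100, seed=42))
```

Graphing `history` against the round number gives the same decay curve the students produce by hand, which makes a nice answer key for the conclusion question.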
More interesting for me is the new learning standard “chemical processes require energy change as atoms are rearranged”. This is still a pre-set standard, and I’m still not going to compare my students to each other, but it obviously requires some unpacking to make it helpful both for the students to learn from and for the teacher to assess. For this daunting task, lately, I am trying to develop Learning Maps for each of my units. It takes some front-loading (what else did I want to work on in July?!), but once the scaffold is there, periodic check-ins with students and yourself for assessment are seamless and constructive. Learning Maps are amazing because they remove scores and put language to the learning goals; they are discreet, because every student has a conversation with you and it looks the same; and they factor in the multiple entry points into a topic and emphasise forward growth.
Sorry folks, I kind of drifted away from my central thesis there. My specific instructions for blogging were “short and to the point <winky face>”. I guess I am “approaching expectations” in the blogging department today.