The Fall Intake for the BCScTA Roots Grant is underway. We are currently accepting applications until November 30, 2018. Please go to the following link to apply.
Communities of science teachers are the roots of the BCScTA.
The BCScTA is offering another two Science Roots Grants to its current members for the 2018-2019 school year, supporting collaborative groups pursuing quality science education initiatives.
In recognition of this, the BCScTA is supporting several collaborative groups of BCTF members by providing funding for quality science education initiatives. These initiatives can take many forms, and we are open to unique and creative submissions.
Examples of such initiatives include: a collaborative teacher group developing assessment tools or science lab activities; developing methods of integrating more inquiry-based approaches; an exchange or articulation between elementary and secondary science teachers so that both levels align vertically; a book club to expand on a teaching approach or idea; or sound approaches to authentically integrating First Peoples Principles of Learning.
For this intake, Science World is kindly supporting the Roots Grant recipients with admission for up to 90 students from their respective schools and $750 each toward transportation costs.
For more information, contact firstname.lastname@example.org
The BCScTA Learning Community is a pilot project we are trying in order to network our membership and share ideas. Consider joining this group. We will provide a current resource, such as a book or teaching tool, and ask that you participate in a Twitter chat where you share your thoughts as they relate to the current title. At the moment we are working through the book Launch: Using Design Thinking to Boost Creativity and Bring Out the Maker in Every Student, by John Spencer. It just might challenge you to try something new in your classroom.
If you are willing to participate in the discussion and would like to share ideas with other educators, click the link below to join:
*Please note that our first intake was initiated in October. We are planning to include new titles for both existing and new members who wish to join the conversation. Your patience is appreciated as we try this new adventure together.
— John Munro BCScTA President
2019 Science Games Steering Committee
Engineers and Geoscientists BC is looking for two educators to volunteer on our 2019 Science Games Steering Committee. This group develops the activities for our annual Science Games event in March. At the Science Games, students in Grades 1-6 work in teams to complete various hands-on science challenges. Division 1 activities are designed for students in Grades 1-3, and Division 2 activities for students in Grades 4-6. We’re looking for educators to volunteer on the committee, provide insight on the science curriculum for these grades, and help us tailor our challenges so they are appropriate for these age groups. Learn more about this volunteer opportunity or apply online.
If you have questions about this volunteer opportunity, please contact email@example.com.
TRIUMF & The BC Association of Physics Teachers
Are pleased to announce
“Kindling your passion for physics teaching”
Are you going to be teaching physics next year?
Come and be inspired by award-winning physics researchers and educators while networking with colleagues, sharing practical resources, and learning about physics applications.
A Conference and Workshops for Secondary Science Teachers
Provincial Pro-D Day, Friday October 19th, 2018 at TRIUMF, Vancouver BC.
We are delighted to announce this year’s keynote speaker, Officer of the Order of Canada,
Dr. Jaymie Matthews – UBC Physics and Astronomy
SAVE THE DATE!
Detailed program and registration information to follow in September 2018.
Fix 15: Don’t leave students out of the grading process. Involve students; they can and should play key roles in assessment and grading that promote achievement.
Brilliant. Happy to end this journey on a positive note. In my last post I was questioning leaving my students more “in the dark,” so to speak, about assessment, but I’m pleased to see that this isn’t a helpful practice. It’s nice to see my instincts and/or training are largely congruent with Ken’s 15 Fixes.
That said, with some of my students, I don’t feel that their involvement in assessment promotes achievement… yet. This is more of a long-term cultural shift for them, I think, and it just takes time and consistency. Hopefully another year at the same school with the same crew of kids brings out that ownership of their own learning.
On that note I’m going to cut it short and say thanks for tuning in. I had fun with this project, and I always enjoy looking critically at my own practice. I strive to find better ways to engage my students and help them feel a sense of pride and curiosity around my lessons and their school. I think assessment in general is a real driver of both positive and negative associations with school, and I’m hoping that these 15 Fixes “rekindle the fire” so to speak about your own forward thinking in your classroom. Have a great summer!
Fix 14: Don’t summarize evidence accumulated over time when learning is developmental and will grow with time and repeated opportunities; in those instances, emphasize more recent achievement
Have I mentioned that this book is amazing, and that the lessons summarized are both intuitive and overwhelming at the same time? I have dived into the deep end; I am on board, and I have bought into these best practices for assessment. That said, organizing something like this, in my head or on my computer, is a seemingly difficult task. Maybe I just picture some of the parent-teacher interviews I have had. I am a very organized person, but my system looks completely disorganized to a parent, especially a critical parent wondering why their child isn’t doing so hot. I usually have student names going along the Y axis and learning outcomes going across the X axis, and have somehow (even though it is on a screen or on paper) added a Z axis for time. I have paper-based records for my day-to-day observations, and I update marks “that count” every few weeks into our system. There is a number system to show levels of understanding (1, 2 or 3); I have tried colour coding, and I have a variety of shorthand letters and symbols that mean various things. I try to annotate any missing assignments for the student as they arise (to me, an excused absence for the dentist is different than sleeping in or skipping). I think relationships are more important than record keeping, but I also don’t want to get caught with my pants down if a parent or principal has questions. It’s a delicate balance…
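The three-axis record described above (students by outcomes by time) can be sketched as a simple nested structure. This is a hypothetical illustration of the idea, not the author's actual system; the student name, outcome labels, and "M" code for missing work are made up for the example:

```python
from collections import defaultdict
from datetime import date

# Hypothetical gradebook: student -> learning outcome -> dated observations.
# Integer levels follow the 1/2/3 understanding scale; "M" marks missing work.
gradebook = defaultdict(lambda: defaultdict(list))

def record(student, outcome, level, when):
    """Append one dated observation for a student on a learning outcome."""
    gradebook[student][outcome].append((when, level))

record("Alex", "atoms: proton/neutron/electron", 2, date(2018, 10, 1))
record("Alex", "atoms: proton/neutron/electron", 3, date(2018, 10, 15))
record("Alex", "chemical reactions: energy change", "M", date(2018, 10, 20))

def most_recent_level(student, outcome):
    """Fix 14 in miniature: report the most recent numeric evidence,
    not an average over the whole term."""
    numeric = [lvl for _, lvl in sorted(gradebook[student][outcome])
               if isinstance(lvl, int)]
    return numeric[-1] if numeric else None

print(most_recent_level("Alex", "atoms: proton/neutron/electron"))  # 3
```

The Z axis for time falls out naturally: each observation carries its date, so "emphasize more recent achievement" becomes a one-line sort.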
At the end of the day, learning takes time, and I really try to be clear with my students that their practice doesn’t count; taking risks in class is not going to bring their marks down. The term “weight” is a little confusing for some of them; even some Grade 10s and 11s still don’t get that a 20-question test at the end of a unit will drive their grade disproportionately more than a 20-question assignment. I think students should know how they are being marked and understand the reasons behind it. You can’t hit a bullseye if you don’t know where it is. Maybe that is my problem? I’m too transparent with my students, and they don’t have the maturity, vocabulary or training to understand it? Should I just describe how the course will work on the syllabus and the first day of class and then leave more mystery to it all? Something to consider, I guess…
Getting back to Fix 14 before I totally go off the rails. Maybe the difficulty I have scaffolding this is that I focus on learning goals that are too big or too long-term. Is the learning goal “I know the locations and charges of protons, neutrons and electrons in an atom,” or is it “chemical processes require energy change as atoms are rearranged”? One goal is 60 minutes tops; the other is 60 days. Keeping a running tally of where a student is at, so to speak, is much easier to document with one goal than with the other.
So as usual, my reflection has both reaffirmed my philosophy about science education and assessment and totally twisted everything around backwards and upside down at the same time. Thanks, as always (even though I don’t mention it specifically), for any comments or insight you can provide. As a related side note, please do keep in mind the Canadian Assessment for Learning Conference & Symposium in Delta, May 1-3. I will be there with bells on.
Fix 13: Don’t use information from formative assessments and practice to determine grades; use only summative evidence.
I think I have mentioned this before, but I promise I am not looking ahead; I just mentioned this yesterday! If it were me learning something new, I would only want summative evidence to determine my grade. If I had a (hypothetical) six-month performance evaluation at work scheduled, I would want to practice a specific set of skills and have my final mark based only on the end of my learning journey. I think a student learning introductory Biology or Chemistry at their grade level should be no different. Why would you want your mark to be an average of all your mixed results along the way?
The students, however, seem to think differently; maybe it is an age thing? My own daughter is in kindergarten. She would never have a final test on writing her name at the end of June; her grade is based on evidence of an ongoing evolution of her skills. Do primary students have their reporting based entirely on formative assessment, while by Grades 11 and 12 students have their grades based mainly on summative assessment? Upper intermediate and junior high school seem to bridge the gap, from what I have seen. So how do I make it clear to a 14- or 15-year-old that giving a final grade based on formative checkpoints in the middle isn’t really fair to them?
Here is how I have orchestrated the conversation to get my students to “buy into” having a mark based mostly on summative assessment. Most of my kids are starting to drive when I teach them. To get your driver’s licence in BC you have to go through the following steps: pass a knowledge test to earn a Learner’s licence (L); after at least a year of supervised practice, pass a road test to earn a Novice licence (N); and after two more years of safe driving, pass a second road test for a full licence.
Students see the safety implication with this analogy, and agree that they wouldn’t want the ability to drive to be based on all of their early attempts “weighing down the average.” So back to the skills and knowledge being measured in a science class. I think the biggest psychological obstacle with basing students’ marks entirely on summative assessment is what the students picture a test to be; maybe it is a “test” thing? Fifty or more multiple choice questions, no talking, no phone, no binder. This environment is known to cause anxiety for many, adults and children included. And is it really an accurate reflection of learning? Maybe for some students, but certainly not for all.
I really like the direction education is moving, whereby summative assessment is a conversation or a presentation of learning in a medium of your choosing. Summative assessment can include self-assessment, because let’s face it, students are much harder critics than we are. When was the last time you studied for a written or multiple choice test as an adult? Our “tests” are based mainly on observations and conversations; sure, there may be a score at the end, but it is a much more authentic experience of where we are at in a given context. Think of a job interview or checking in on your quarterly sales goals. Don’t our students deserve the same time and respect?
Fix 12: Don’t include zeros in grade determination when evidence is missing or as punishment; use alternatives, such as reassessing to determine real achievement or use “I” for Incomplete or Insufficient Evidence
Ok readers, here we go: the final few fixes. Ironically, I feel a little like my students do this time of year, in that I’m ready to accept the zero on my final three or four tasks because I’m burnt out. Almost there though; close to the finish line and ready to finish strong.
Throughout the school year I try to give students progress reports every two or three weeks, with “missing” as a placeholder if they forgot to hand an assignment in, or chose not to do it for whatever reason. I have two very different students in mind who, by June 22nd when our report cards were due, had nearly half of the term’s assignments missing. I did not put any zeros in; both students were made aware of the catch-up day on Monday, but I did not make it mandatory for either of them. Both students took a final unit exam: one student got 50% on the test, the other got 80%. I am pretty confident that this is a realistic evaluation of how much each student understands of the subject. I left the holes in my mark book as holes and posted their grades based solely on this final test.
This particular class I’m describing is Science 10; my breakdown is 30% formative assessment (labs, assignments) and 70% summative (chapter quizzes and unit tests). I actually consulted the class, and we came up with this breakdown collaboratively. I describe what formative and summative mean to them, and explain that I want to help them learn as much as possible during instruction, then measure it at the end; that, to me, is the point of assessment. Many students want some credit for “practice” assignments, and realise that daily work will (usually) bring their mark up, and their level of understanding up. It’s interesting to me that they don’t see their mark as being the same thing as their level of understanding. Additionally, many students feel anxiety around tests, and don’t feel it is fair to base the whole course on summative assessments. It was interesting for me to hear their opinions about assessment, and I try to be honest and responsive to them so we can come up with a fair plan for everyone.
I’m curious now to run the numbers on my 80% and 50% students, to see how much completing their missing assignments would affect their marks; I can only speculate how much it would affect their level of understanding. The potential disconnect between these two is something I will definitely keep in mind as I plan for next year…
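Running those numbers is a small arithmetic exercise on the 30/70 breakdown described above. The sketch below uses made-up figures (ten formative assignments, half missing, 75% on the completed ones, 80% on the final test) purely to illustrate how much zeros versus holes move a weighted grade:

```python
# Hypothetical Science 10 numbers: 30% formative, 70% summative.
# Compare counting missing formative work as zeros vs. leaving the
# holes out and grading only on completed evidence.
def course_grade(formative_scores, summative_score,
                 missing_count=0, count_missing_as_zero=False):
    scores = list(formative_scores)
    if count_missing_as_zero:
        scores += [0.0] * missing_count          # Fix 12's "zero" approach
    formative_avg = sum(scores) / len(scores) if scores else 0.0
    return 0.30 * formative_avg + 0.70 * summative_score

# A student who scored 80% on the final test but handed in only half
# of ten formative assignments, averaging 75% on the ones completed.
done = [75.0] * 5

with_zeros = course_grade(done, 80.0, missing_count=5,
                          count_missing_as_zero=True)
holes_left = course_grade(done, 80.0, missing_count=5)

print(round(with_zeros, 1), round(holes_left, 1))
```

With these made-up numbers, the zeros drag a high-C+/low-B performance down by more than ten percentage points, even though the summative evidence of understanding is identical in both columns.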
Fix 11: Don’t rely only on the mean; consider other measures of central tendency and use professional judgment
Good morning. I was feeling guilty because I wanted this done by the end of June, and darn it, it’s not looking optimistic. Oh well, it’s good to do some professional development over the summer months. Thankfully this one is an easy one, so I will keep it short, and maybe (but don’t quote me) I might even reflect on Fix 12 before 3:00. Fingers crossed.
The last time I calculated the mean for a class or a test was in 2011, when I had 62 students taking Science 10 in two blocks in Semester 1. The sample size was high, and there was pretty good anonymity for top and bottom marks on the “tails” of my bell curve. Now I have 9 students taking Science 10, so the mean was of limited use before and is outright misleading now. Interestingly though (a sarcastic thanks, MyEd), our district’s system calculates it automatically for me. I don’t even look at that number at the bottom of my column of final grades; it is meaningless.
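Fix 11's point shows up immediately with a class this small: one outlier drags the mean while the median barely moves. A quick sketch with hypothetical marks for a nine-student class, one of whom has a single missing-work zero:

```python
from statistics import mean, median

# Hypothetical marks for a small Science 10 class of 9 students;
# one missing-work zero sits at the bottom of the distribution.
marks = [0, 68, 71, 74, 75, 78, 81, 84, 90]

print("mean:  ", mean(marks))    # dragged well below the typical student
print("median:", median(marks))  # still reflects the middle of the class
```

With small samples there are no "tails" to hide in, so a median (or plain professional judgement, as the fix says) gives a far more honest picture of central tendency than the auto-calculated mean.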
Great news: we are back to professional judgement, which is an amazing safety net for those who are wrapped up in numbers and accountability. Your doctor takes a few measurements and runs a few tests, but most of the time the doctor’s professional judgement had the diagnosis long before the numbers were run or the samples were sent away. You are a professional just like a doctor: know what tests to run, and how often to run them, based on the individual student. Trust your judgement, and as long as you continue to monitor the progress of your students in a timely and constructive manner, you are good! Focus on the conversations with them, not on having a high average or a narrow standard deviation. Students are people first; you can’t treat them like an algorithm.