Friday, October 11, 2013

Inter-Rater Reliability Information

Mark Your Calendars now...



This fabulous gathering will be held
Monday, October 28th in the Media Center!


Who should attend: all BILS, plus any trained observer who wishes to do observations this year.


The definition of Inter-Rater Reliability on Wikipedia:
In statistics, inter-rater reliability, inter-rater agreement, or concordance is the degree of agreement among raters. It gives a score of how much homogeneity, or consensus, there is in the ratings given by judges. It is useful in refining the tools given to human judges, for example by determining if a particular scale is appropriate for measuring a particular variable. If various raters do not agree, either the scale is defective or the raters need to be re-trained.
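For the curious, one common way statisticians put a number on this kind of agreement is Cohen's kappa, which compares how often two raters agree against how often they would agree by chance. This little Python sketch is purely illustrative (the rater scores below are made up, not from our rubric):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items where the raters match
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's category frequencies
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in categories) / (n * n)
    return (observed - expected) / (1 - expected)

# Two hypothetical raters scoring five lessons
a = ["Proficient", "Basic", "Proficient", "Proficient", "Basic"]
b = ["Proficient", "Basic", "Basic", "Proficient", "Basic"]
print(round(cohens_kappa(a, b), 2))  # prints 0.62
```

A kappa of 1.0 means perfect agreement, 0 means no better than chance; values in the 0.6 range (as here) suggest the raters mostly agree but would benefit from exactly the kind of calibration conversation this session provides.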

In much simpler terms: 

You will watch a video of a model lesson before you come to the session.
You will score the teacher on pre-determined areas from the observation rubric.
You will come to the meeting ready to "defend"/"explain" why you scored each domain the way that you did.

The shared conversation will help us build a common understanding of what Proficient or Basic looks like. This will make our observation coaching more consistent and cohesive.

Remember... attending this meeting is mandatory if you want to be on the list to do observations this year!

If you have any questions about this, don't hesitate to ask!

PLEASE RSVP by October 22nd, and I will send you the link to the video along with the domains to be scored!
