2.7 Assessment
Candidates model and facilitate the effective use of diagnostic, formative, and summative assessments to measure student learning and technology literacy, including the use of digital assessment tools and resources. (PSC 2.7/ISTE 2g)
ITEC 7305
Data Inventory
The Data Inventory assignment helped me understand the types of assessment our students at Norcross High School take and how the results of those assessments describe their overall performance. I conducted a Data Inventory for the whole school, then charted the types of assessments our teachers use at the formative and diagnostic levels in the classroom as well as at the summative level of the department, district, and state. The charted summary revealed which types of assessments were being used, when, and with what frequency. It also revealed some imbalances in the type and frequency of the assessments we give; I used the chart to make recommendations to our curriculum teams about how to improve that balance, especially for assessments that target formative progress and use digital tools to do so.
The Data Coach’s Guide recommends that assessments be given, and analyzed, more frequently at the formative level and less frequently at the summative level (Love et al., 2008, p. 129). Naturally, the summative assessments in a large district like GCPS are the EOCs for a course, the Milestones exam, and specialty tests such as ACCESS for ELLs and the AP and IB course exams. When I performed the Data Inventory for Norcross High School, the aim was to examine our school’s usage pattern of formative, diagnostic, and summative assessments to generate data about student progress. The exercise quickly revealed an imbalance in favor of district-level summative assessment as our main source of data. Because summative and quarterly district assessments generate data infrequently, it is not enough to rely on those assessments to measure student learning in ways that can adjust teaching practices in time to make a difference. Nor did the assessments we use to measure student learning have a digital-tools focus. (It is not enough to say that these assessments are taken online.) At the formative assessment level, I found that the 9th grade ELA team discusses the team’s performance on common formative assessments, that Science tracks constructed-response performance, and that Social Studies tracks in-class Gateway practice performance. But no team included digital tools for assessing student performance at a daily or weekly frequency. I made recommendations to address this gap.
As a result of the inventory, I recommended digital tools to improve technology literacy and to facilitate diagnostic and formative assessment at a higher frequency than before. For instance, I suggested Flipgrid to the Science and ELA teams so that students could summarize the day’s important concepts in a short video. I modeled the use of Flipgrid with my own AP classes so that they could practice the two skills that proved most difficult on the AP exam. When other teachers tried Flipgrid, they reported that students captured the most important details of what they learned more accurately in an individual video than when the teacher simply asked for call-and-response in the whole-class setting. Further, I suggested eCLASS modules for Math, since the LMS platform offers embedded opportunities for remediation, self-assessment, and skill building. Through our LMS, students were more engaged and could build technology literacy instead of waiting a week for graded papers to come back to them. The feedback from digital diagnostic tools is immediate and helps students self-assess effectively. I helped some Math teachers build mini-modules for teaching, assessing, and reteaching content. Then, I used H5P to build a few formative assessment opportunities in Social Studies. A look at the Declaration of Independence through coded hotspots revealed important historical clues that students miss when they read the plain text of the document. When teachers realized that they could use digital tools to assess skills at the micro level and at a much higher frequency than before, they were willing to adopt the new tools in their other units. The Data Inventory allowed me to see the gaps in our formative and diagnostic assessment; ultimately, I modeled and facilitated the use of digital tools to address the lowest-performing skills revealed by summative assessment data.
In creating the chart for the Data Inventory, the imbalance among types of assessment was immediately apparent. Although this was a relatively simple exercise to perform (essentially a list of the assessments and data-collection methods we carry out in school), it suggested that summative assessment rules our efforts in large districts and that departments make major decisions based on aggregate data. Teachers, however, do not get as much use out of that aggregate data. There are individual students in the classroom whose learning needs demand immediate instruction. The gaps in formative and diagnostic assessment are not always clear, since so much focus goes to interpreting data from summative and district tests. I had not previously considered what that imbalance meant for the individual student who is trying to learn a skill and the individual classroom teacher who must move that child’s performance, no matter what the aggregate data says about how we all performed on the Milestones. I would alter the chart to ask the question “What ratio of diagnostic and formative to summative assessment produces the best data for the classroom teacher and the student who is trying to learn?”
Teachers who invited me to help them develop common formative assessments using digital tools reported that the work was interesting, motivating, and of practical use for them and their students. Again, I was surprised at how readily using a computer or device engages a student simply because of the technological component. Teachers were able to adjust their instruction daily because the feedback they gave was often optimized by the digital tools we used. H5P, for instance, embeds easily into eCLASS and lets students self-assess. Further, students took more ownership of moving toward their own learning goals than they had when they were waiting for feedback from the teacher.
References
Love, N., et al. (2008). The data coach’s guide to improving learning for all students. Thousand Oaks, CA: Corwin Press.