This article provides a broad overview of how iMentor measures short-term student outcomes in knowledge, skills, and mindsets throughout the program year, as well as the history of our assessment strategy.
You can navigate quickly through the article by following these links:
- Why assessments?
- What do assessments look like in our program?
- What feedback/learning went into the creation of assessments?
- How can I learn more about assessments?
Why assessments?
iMentor’s assessment strategy is a leap forward in our effort to understand whether students learn the knowledge, skills, and mindsets (KSMs) taught in our curriculum. These KSMs have been identified as critical for post-secondary success (read more here). The strategy is also a first step toward holding ourselves accountable to our stakeholders and measuring the impact of our program on student KSMs.
Assessments give program implementers a unique opportunity to use short-term program impact data within the program year itself. The data collected in our beginning-of-year/end-of-year surveys typically has limited use for implementers within that specific school year because of the timing of its release and its measurement limitations. Assessments give implementers data about student knowledge on a timeline that supports real-time remediation, pair support, and participation incentives. You can read more here about the many ways assessment data can be used.
Additionally, assessments strengthen the quality and diversity of iMentor’s program impact data. They were deliberately designed to move our impact analysis beyond student self-report surveys as the primary way of understanding iMentor’s impact. By collecting data from students, mentors, and program staff, we create a “triangulation” effect that allows us to better understand our students’ true level of proficiency on our knowledge, skills, and mindsets. This helps us account for the bias in any one respondent’s reports, informs pair support, and helps pairs have richer conversations about student growth.
What do assessments look like in our program?
There are two types of assessments implemented in our program: online quizzes and in-person pair rubrics. Each grade level will experience two of each type of assessment in one program year. Assessments are tied to key learning outcomes from that grade level.
Online Quizzes
Online quizzes take place during iMentor classes and are typically administered after the relevant unit has been taught. Quizzes are written into lesson plans as do-nows and are delivered through the platform. You can read more about how online assessments work here and how to administer assessments through the platform here.
In-Person Pair Rubrics
The in-person pair rubrics take place at two curricular events during the school year. They are embedded in an activity the pairs engage in at the event, typically a chat-and-chew. Together, a mentor and mentee complete a rubric about the mentee’s goals or their pair relationship. The rubrics foster rich conversation within the pair while also collecting data, validated by both the mentee and mentor, about students’ long-term goals and the mentee/mentor relationship.
Rubrics are also administered through the platform: the mentor is invited to take the assessment, and the pair completes it together on the mentor’s phone or a provided computer. You can read more about how to administer assessments through the platform here.
What feedback/learning went into the creation of assessments?
The assessments being administered this program year are informed by over two years of national-regional team collaboration, program manager feedback, quiz data from over 2,600 mentees, and rubric data from 950 pairs in NYC, Chicago, and the Bay Area.
Online unit quizzes were first piloted with 12th grade in the 2017-18 program year. Seventeen program managers participated in the pilot and provided feedback on their experiences. Key takeaways from this pilot were: 1) quizzes could be administered through the iMentor platform, 2) students were generally accepting of the idea of a quiz in iMentor class, 3) fewer assessments would help reduce the burden on PMs, and 4) assessments should be kept to 5-6 minutes.
In the 2018-19 program year, two online assessments were administered by PMs across all grades and iMentor regions, and in-person assessments were piloted for the first time across all grades and regions. Multiple interviews were held throughout the year to get feedback directly from PMs and program implementers about the successes and challenges of administering quizzes and using assessment data for insights.
The feedback from regional teams, student assessment data, and the collaboration between the OLI, Platform, NPT, and ROPI teams have led to multiple improvements to today’s assessment model. Those include, but are not limited to:
- technological integration of assessments with the platform to increase data quality
- updated assessment content informed by student data and program manager feedback
- improved training on content and administration procedures for program teams
- strengthened data sharing processes across both national and regional teams
How can I learn more about assessments?
Below are links to assessment articles throughout the learning center and on Box. Through these links you can learn more about the content explored in assessments, how to administer assessments, and how to act on the insights provided by assessment data to strengthen iMentor’s work:
- What we learned from online and in-person assessments, 2018-19
- How to administer assessments and manage completion
- Online Assessments 101
- Assessment talking points for students