This form is designed to provide constructive feedback to your colleague. Your responses will remain confidential and will be used solely for professional development purposes.
Your full name
Your employee ID or code (if applicable)
Teacher being evaluated (full name)
Teacher's employee ID or code (if applicable)
Subject(s) taught by the teacher
Grade level(s) or age group(s) taught
Your relationship to the teacher
How many times have you observed this teacher in the past 12 months?
Date of most recent observation
Evaluate how well the teacher designs lessons that align with curriculum standards and learning objectives.
Please rate the following aspects of lesson planning:
| | Strongly Disagree | Disagree | Neutral | Agree | Strongly Agree |
|---|---|---|---|---|---|
| Clarity of learning objectives | | | | | |
| Alignment with curriculum standards | | | | | |
| Appropriateness of content sequencing | | | | | |
| Integration of prior knowledge | | | | | |
| Inclusion of differentiated instruction strategies | | | | | |
How often does the teacher share lesson plans with colleagues for feedback?
Never
Rarely
Sometimes
Often
Always
Does the teacher incorporate interdisciplinary connections in lesson plans?
What strengths did you observe in the teacher's lesson planning?
What suggestions do you have for improving lesson planning?
Rate the teacher's effectiveness in the following classroom management areas:
| | Ineffective | Somewhat Effective | Effective | Highly Effective |
|---|---|---|---|---|
| Establishing clear rules and routines | | | | |
| Maintaining a respectful classroom culture | | | | |
| Managing student behavior proactively | | | | |
| Using positive reinforcement strategies | | | | |
| Maximizing instructional time | | | | |
| Creating an inclusive environment for all students | | | | |
How does the teacher handle disruptive behavior?
Ignores it
Publicly reprimands
Privately addresses
Uses restorative practices
Varies strategy based on context
Did you observe any instances of bias or favoritism?
Describe the physical learning environment (seating, displays, accessibility):
How does the teacher ensure student voice and agency in the classroom?
Evaluate the teacher's instructional techniques:
| | Needs Improvement | Developing | Proficient | Exemplary |
|---|---|---|---|---|
| Clarity of explanations | | | | |
| Use of questioning strategies | | | | |
| Pacing of the lesson | | | | |
| Use of formative assessment | | | | |
| Integration of technology | | | | |
| Encouragement of critical thinking | | | | |
| Promotion of collaborative learning | | | | |
Which teaching methodologies did you observe? (Select all that apply)
Lecture
Inquiry-based learning
Project-based learning
Flipped classroom
Socratic seminar
Station rotation
Direct instruction
Gamification
Storytelling
Concept mapping
How effectively does the teacher check for understanding during the lesson?
Rarely checks
Uses only closed questions
Uses varied strategies
Continuously adapts based on feedback
Does the teacher encourage student-to-student dialogue?
Highlight one pedagogical strategy that was particularly effective and explain why:
Suggest one alternative pedagogical approach for a segment of the lesson:
Rate the level of student engagement in the following areas:
| | Very Low | Low | Moderate | High | Very High |
|---|---|---|---|---|---|
| Active participation in discussions | | | | | |
| On-task behavior during activities | | | | | |
| Peer collaboration | | | | | |
| Asking higher-order questions | | | | | |
| Demonstrating curiosity | | | | | |
| Persistence when facing challenges | | | | | |
What percentage of students were actively engaged throughout the lesson?
Less than 50%
50–69%
70–84%
85–94%
95–100%
Did you observe any students who were consistently disengaged?
Which strategies did the teacher use to boost engagement? (Select all that apply)
Real-world connections
Choice in assignments
Movement activities
Competitive games
Surprise elements
Student polls
Storytelling
Humor
Multimedia resources
Peer teaching
How did the teacher address different learning styles and needs?
Provide examples of moments when student engagement peaked and explain why:
Evaluate the teacher's assessment and feedback practices:
| | Rarely | Sometimes | Often | Consistently |
|---|---|---|---|---|
| Timeliness of feedback | | | | |
| Specificity of comments | | | | |
| Alignment of assessments with objectives | | | | |
| Use of rubrics or criteria | | | | |
| Opportunities for student self-assessment | | | | |
| Variety of assessment types | | | | |
How often does the teacher provide formative feedback during the lesson?
Never
Once or twice
Several times
Continuously
Does the teacher involve students in co-constructing success criteria?
Which feedback strategies did you observe? (Select all that apply)
Verbal comments
Written notes
Peer feedback
Digital badges
Exit tickets
Think-pair-share reflections
Video feedback
Audio comments
Learning conferences
Portfolios
Share an example of feedback that was particularly impactful:
Suggest ways to enhance the feedback loop for student growth:
Rate the teacher's professional behaviors:
| | Below Expectations | Meets Expectations | Exceeds Expectations |
|---|---|---|---|
| Punctuality and reliability | | | |
| Respectful communication with colleagues | | | |
| Willingness to share resources | | | |
| Participation in professional development | | | |
| Openness to feedback | | | |
| Contribution to team goals | | | |
How often does the teacher collaborate with colleagues on interdisciplinary projects?
Never
Once per year
Once per term
Monthly
Weekly or more
Has the teacher mentored or supported new colleagues?
Does the teacher actively engage in professional learning communities?
Provide examples of how the teacher demonstrates lifelong learning:
How does the teacher handle professional disagreements or conflicts?
Evaluate the teacher's reflective practices:
| | Never | Rarely | Sometimes | Often | Always |
|---|---|---|---|---|---|
| Seeks student feedback | | | | | |
| Adjusts instruction based on data | | | | | |
| Sets professional goals | | | | | |
| Documents growth over time | | | | | |
| Shares failures as learning opportunities | | | | | |
When something goes wrong in a lesson, how does the teacher typically respond?
Blames students
Ignores the issue
Makes minor adjustments
Reflects and revises thoroughly
Does the teacher maintain a reflective journal or portfolio?
Describe a moment when the teacher demonstrated vulnerability and learning from mistakes:
What evidence did you see of the teacher's growth over time?
Evaluate the teacher's use of technology and innovation:
| | Not Observed | Emerging | Developing | Proficient | Transformative |
|---|---|---|---|---|---|
| Purposeful integration of digital tools | | | | | |
| Encourages digital citizenship | | | | | |
| Uses data dashboards to inform instruction | | | | | |
| Flips learning effectively | | | | | |
| Creates opportunities for creative tech use | | | | | |
Which digital tools or platforms did you observe? (Select all that apply)
Learning Management System
Interactive whiteboard
Student response system (e.g. clickers, apps)
Virtual reality/Augmented reality
Coding platforms
Collaborative documents
Video conferencing
Gamified quizzes
Digital portfolios
AI-powered tools
Does the teacher support students in creating digital content?
How does technology enhance or hinder learning in this teacher's classroom?
Suggest one emerging technology that could benefit this teacher's practice:
Evaluate the teacher's commitment to equity and inclusion:
| | Never | Seldom | Sometimes | Frequently | Consistently |
|---|---|---|---|---|---|
| Represents diverse perspectives in materials | | | | | |
| Adapts for students with special needs | | | | | |
| Challenges stereotypes and biases | | | | | |
| Uses culturally relevant examples | | | | | |
| Ensures equitable participation | | | | | |
| Communicates high expectations for all | | | | | |
How does the teacher accommodate students with different language backgrounds?
No accommodations
Relies on aides
Uses visuals and gestures
Provides multilingual resources
Leverages student assets
Did you observe any instances of micro-aggressions or exclusion?
Which inclusive practices did you observe? (Select all that apply)
Flexible seating
Choice boards
Sentence starters
Multilingual word walls
Culturally responsive texts
Universal Design for Learning (UDL)
Restorative circles
Affinity groups
Family/community funds of knowledge
Trauma-informed strategies
How does the teacher ensure that marginalized voices are heard and valued?
What steps could further strengthen inclusivity in this classroom?
Evaluate the teacher's engagement with families and community:
| | Never | Rarely | Sometimes | Often | Always |
|---|---|---|---|---|---|
| Communicates regularly with parents | | | | | |
| Invites parent expertise into class | | | | | |
| Offers flexible meeting times | | | | | |
| Shares positive news, not just problems | | | | | |
| Connects curriculum to community issues | | | | | |
How frequently does the teacher invite parents or community members into the classroom?
Never
Once per year
Once per term
Monthly
Weekly or more
Does the teacher co-design projects with community partners?
Which communication channels does the teacher use with families? (Select all that apply)
Email newsletters
Phone calls
Text messages
Class website
Parent portal
Social media
Home visits
Community walks
Student-led conferences
Multilingual flyers
Provide an example of how the teacher honored family or community knowledge:
What opportunities exist to deepen family or community involvement?
Overall, how would you rate this teacher's effectiveness?
How confident are you that this teacher positively impacts student learning?
Rank the following areas from strongest (1) to most in need of growth (5):
| Area | Rank (1–5) |
|---|---|
| Content knowledge | |
| Pedagogical skills | |
| Classroom management | |
| Assessment practices | |
| Professional collaboration | |
Summarize this teacher's greatest strengths in one paragraph:
Identify one high-leverage area for growth and suggest concrete steps:
If this teacher were to lead a workshop for colleagues, what topic would you recommend?
Would you be comfortable recommending this teacher for a leadership role?
Any additional comments or observations not covered above:
Analysis for Teacher Peer Evaluation Form
Important Note: This analysis provides strategic insights to help you get the most from your form's submission data for powerful follow-up actions and better outcomes. Please remove this content before publishing the form to the public.
This Teacher Peer Evaluation Form exemplifies best-practice instructional evaluation design by balancing quantitative rubrics with rich qualitative prompts, ensuring evaluators can capture both measurable performance indicators and nuanced contextual evidence. The form’s progressive structure—moving from identification, through classroom-specific domains, to holistic reflection—mirrors the natural flow of an instructional observation cycle, reducing cognitive load for evaluators. Mandatory fields are strategically limited to identification and high-impact items, maximizing completion rates while still collecting sufficient data for professional-growth conversations.
Another major strength is the meta-cognitive framing: the opening confidentiality statement primes evaluators for candid, growth-oriented feedback rather than superficial praise, which research shows dramatically improves the acceptance and implementation of peer feedback. The inclusion of relationship context (mentor/mentee, cross-departmental, etc.) allows administrators to weight feedback appropriately, acknowledging that peer observations vary in depth and perspective. Finally, the form’s emphasis on evidence-based comments—through required open-text fields—mitigates halo bias and ensures feedback is actionable, aligning with adult-learning principles that drive sustained pedagogical improvement.
The evaluator’s full name is collected to maintain accountability and integrity within the peer-review process; knowing who provided each review discourages ad-hominem or baseless criticism and supports follow-up calibration conversations if ratings diverge sharply. The form’s privacy promise (“responses will remain confidential”) is critical here—it signals that while names are collected, they will not be shared with the evaluatee, striking an ethical balance between transparency and psychological safety. From a data-quality standpoint, a named evaluator allows the professional-development office to track inter-rater reliability over time, flagging evaluators who consistently skew high or low relative to cohort norms.
Making this field mandatory is non-controversial because anonymous peer reviews in K-12 settings often lack credibility with teachers’ unions and can erode trust in the evaluation system. The single-line open-ended format keeps data entry friction minimal; however, the absence of format validation (e.g., Last, First) may introduce minor inconsistencies that complicate later matching to HR systems. Still, the benefits of accountability and longitudinal analytics outweigh the cost of occasional data-cleaning, making this a well-justified mandatory field.
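The inter-rater tracking described above could be sketched as follows. This is a minimal illustration, not part of the form itself: the ratings schema (evaluator name mapped to a list of numeric scores) and the z-score threshold of 1.5 are hypothetical choices a professional-development office might make.

```python
from statistics import mean, stdev

def flag_skewed_raters(ratings, z_threshold=1.5):
    """Flag evaluators whose average rating deviates sharply from the cohort.

    `ratings` maps evaluator name -> list of numeric scores (hypothetical schema).
    Returns {name: z-score} for raters at or beyond `z_threshold`.
    """
    # Each rater's personal mean across all reviews they submitted.
    rater_means = {name: mean(scores) for name, scores in ratings.items()}
    cohort_mean = mean(rater_means.values())
    cohort_sd = stdev(rater_means.values())  # requires at least two raters
    flagged = {}
    for name, rater_mean in rater_means.items():
        if cohort_sd > 0:
            z = (rater_mean - cohort_mean) / cohort_sd
            if abs(z) >= z_threshold:
                flagged[name] = round(z, 2)
    return flagged
```

A flagged rater is not necessarily wrong; the point is to queue the divergence for the human calibration conversations the analysis mentions.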
Accurately identifying the evaluatee is foundational for routing feedback to the correct personnel file and for aggregating multiple peer reviews into a coherent growth plan. The form’s use of free-text rather than a drop-down roster introduces flexibility for small schools or guest teachers who may not appear in pre-populated lists, but it also risks typos that could orphan submissions. To mitigate this, many districts pair this field with an optional employee-ID box, allowing HR to disambiguate homonyms or misspellings—an approach mirrored in this form.
Because peer evaluations often feed into tenure, licensure, or merit-pay decisions, correctly linking feedback to the right educator is legally mandated; hence the mandatory status is unquestionable. The form would benefit from a future enhancement: real-time type-ahead that suggests existing teacher names to reduce keystrokes while preserving flexibility, but even without this, the current design satisfies compliance requirements without imposing heavy technical overhead on smaller districts.
Collecting subject information enables the professional-development team to contextualize pedagogical ratings; what constitutes exemplary instruction in Early Literacy differs dramatically from Advanced Placement Physics. The placeholder examples (“Mathematics, Biology, History”) subtly cue evaluators to list all relevant subjects, discouraging the common error of naming only the primary prep. This field also powers longitudinal dashboards that reveal department-level trends—such as universally low ratings on technology integration in the Social-Studies department—guiding targeted PD investments.
The mandatory designation is appropriate because without subject context, subsequent ratings lose meaning: an “Exemplary” rating for “use of formative assessment” is interpreted differently for a Visual Arts teacher than for an Algebra I teacher. Free-text entry preserves granularity for specialized courses like “IB Sports, Exercise & Health Science,” something a rigid drop-down would constrain. The minor data-cleaning burden is offset by the analytical depth gained, making this a high-value mandatory field.
Grade-band data is essential for developmental appropriateness checks; exemplary classroom-management strategies for Grade 1 (hand signals, carpet meetings) would be developmentally incongruous in Grade 12. By capturing both grade and age ranges (placeholder: “Grade 9–10, ages 14–16”), the form accommodates international, special-education, and Montessori contexts where age-grade alignment varies. This field also feeds into predictive analytics—e.g., middle-school teachers consistently rating lower on “student persistence” can trigger additional supports before burnout manifests.
Mandatory status is justified because peer feedback that lacks grade context can misinform administrators; for instance, low engagement ratings might simply reflect age-appropriate behavior rather than instructional weakness. The dual format (grade + age) future-proofs the data as schools increasingly adopt competency-based progression models, ensuring the dataset remains analytically relevant as educational paradigms evolve.
Understanding the relational dynamic between evaluator and evaluatee is crucial for interpreting potential bias—mentors may overemphasize positives, while department rivals might skew negative. The placeholder examples (“Same department, Cross-departmental peer, Mentor, Mentee”) normalize a spectrum of professional relationships, reducing evaluator anxiety about role ambiguity. This data also enables calibration weights in advanced analytics systems; feedback from a same-department colleague typically receives higher weight than cross-departmental input.
Keeping this field mandatory preserves data integrity; optional responses would leave large gaps, undermining statistical reliability when aggregating multiple reviews. The open-text format respects the nuance of hybrid roles (e.g., “Department colleague & PLC co-facilitator”), something a rigid taxonomy would flatten. Although the field is susceptible to social-desirability bias (few admit to adversarial relationships), triangulation with other mandatory fields like observation frequency helps analytics engines flag outliers for human review.
Observation frequency serves as a proxy for feedback validity; a single walk-through yields qualitatively different insights than eight sustained observations. This numeric field enables administrators to apply credibility weighting—reviews based on multiple observations receive higher influence in summative ratings. Longitudinally, the data can reveal systemic issues such as chronically under-observed novice teachers, prompting leadership to redistribute observation loads.
The mandatory requirement is defensible because without frequency metadata, districts risk violating collective-bargaining agreements that stipulate minimum peer-observation counts. The numeric open-ended format allows for precision (e.g., “11”) but lacks upper-bound validation, so an evaluator could theoretically enter “999”; such outliers are, however, easily scrubbed during analysis. Overall, the field’s analytic utility and contractual-compliance value far outweigh the minor data-quality risk.
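The credibility weighting and outlier scrubbing described above might look like this minimal sketch. The weight tiers and the 30-observation plausibility cap are illustrative assumptions, not district policy or anything specified by the form.

```python
def credibility_weight(observation_count, max_reasonable=30):
    """Weight a review by observation frequency; scrub implausible entries.

    Counts above `max_reasonable` (e.g., a typo like 999) or below zero are
    treated as unreliable and given the minimum weight. Tiers are illustrative.
    """
    if observation_count < 0 or observation_count > max_reasonable:
        return 0.25   # implausible entry: minimum weight, pending human review
    if observation_count <= 1:
        return 0.5    # single walk-through: limited evidence
    if observation_count <= 4:
        return 0.75   # a few observations: moderate evidence
    return 1.0        # sustained observation: full weight

def weighted_summary(reviews):
    """Combine (score, observation_count) pairs into a weighted mean score."""
    total = sum(score * credibility_weight(n) for score, n in reviews)
    weight = sum(credibility_weight(n) for _, n in reviews)
    return total / weight if weight else None
```

Under this scheme a review backed by eight observations pulls the summative rating roughly twice as hard as one based on a single walk-through, which is the weighting behavior the analysis describes.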
Recency of observation determines the relevance of subsequent ratings; feedback based on a lesson observed nine months ago has diminished utility for immediate growth planning. Capturing the date also prevents gaming, where an evaluator might reference an exemplary lesson from years past to inflate ratings. When paired with the numeric frequency field, the date enables calculation of observation cadence—districts can flag teachers who receive three observations within one week followed by months of silence, a pattern associated with compliance-driven rather than growth-driven cultures.
The mandatory status aligns with state-level evaluation regulations that require peer observations to occur within defined windows (often 60 days of summative conferences). The date-type input enforces ISO formatting, eliminating ambiguity between U.S. and international conventions. While evaluators can theoretically back-date, audit logs tied to submission timestamps can detect anomalies, preserving data integrity with minimal user friction.
This single-choice item quantifies the teacher’s openness to collaborative planning, a leading indicator of professional-learning-community health. The ordinal scale (Never → Always) maps cleanly to Likert analyses, enabling departments to benchmark collaboration norms. When cross-tabulated with “collaborates on interdisciplinary projects,” the field can expose hidden silos—such as Math teachers sharing plans weekly but never engaging in cross-curricular design.
Mandatory capture is justified because collaborative practice is a contractual component in many district evaluation frameworks; omitting it would create blind spots. The five-point frequency scale (Never through Always) pushes evaluators toward a directional judgment rather than an opt-out. The absence of “Not applicable” respects the reality that sharing can occur even in departmentalized models, preserving data completeness for longitudinal studies.
This open-ended prompt elicits evidence-rich narratives that rubric scores alone cannot provide, aligning with qualitative-research principles that value contextualized insight. Requiring a response ensures evaluators articulate concrete exemplars, reducing halo or horns bias; when peers must cite specific strengths, they often moderate extreme ratings. The field also feeds directly into individualized growth plans—teachers can double-down on highlighted strengths while addressing deficits.
The mandatory status is pedagogically sound because exclusive focus on deficits demoralizes staff; identifying strengths sustains motivation and models asset-based feedback for students. The multiline text box invites elaboration, but the absence of a minimum-character count leaves room for superficial answers; future iterations could add a soft prompt (“Please write at least 50 words”) to deepen reflection without triggering compliance resistance.
Classroom-management efficacy is among the strongest school-level predictors of student-learning growth, making this item high-stakes for both the evaluatee and the administration. The response options progress from reactive (“Publicly reprimands”) to restorative (“Uses restorative practices”), subtly promoting culturally responsive discipline models. When aggregated at the school level, the distribution can reveal policy-implementation gaps—e.g., 70% of staff selecting “Varies strategy” suggests solid training, whereas a plurality selecting “Publicly reprimands” flags needed professional development.
Mandatory capture is non-negotiable because discipline disproportionality is a federal compliance issue; without peer-reported data, districts may remain unaware of disparate practices until a formal complaint emerges. The single-choice constraint streamlines analysis, though it sacrifices nuance; the follow-up comment boxes elsewhere in the section allow evaluators to qualify selections, achieving a practical balance between granularity and usability.
Formative-assessment frequency correlates strongly with achievement gains, especially for marginalized student groups, positioning this item as an equity-centered metric. The four-tier scale ranges from “Rarely checks” to “Continuously adapts,” operationalizing the concept of responsive teaching. Cross-analysis with “percentage of students actively engaged” can spotlight classrooms where frequent checks coexist with low engagement, suggesting overly procedural questioning techniques rather than cognitively demanding checks.
The mandatory requirement aligns with evidence-based evaluation rubrics used in most U.S. states; omitting it would compromise criterion validity. The scale’s asymmetry (no neutral midpoint) encourages decisive ratings, yet the descriptors are sufficiently nuanced to minimize random variance. Ultimately, the data empowers instructional coaches to target support—teachers at the “Uses only closed questions” level benefit from professional development on open, equity-oriented checking strategies.
Student-engagement percentage serves as a proxy for instructional vitality; peer reviewers are uniquely positioned to observe subtle off-task behaviors that teachers immersed in delivery may miss. The five-band ordinal scale aligns with classroom-observation protocols like CLASS and Danielson, ensuring data can be triangulated across instruments. When disaggregated by grade and subject, the metric can expose systemic bias—e.g., Special-Education classrooms chronically rated below 50% may indicate inadequate paraprofessional staffing rather than poor teaching.
Mandatory status is warranted because engagement data is a core component of most state evaluation rubrics; allowing evaluators to skip the item would introduce non-random missingness that biases summative ratings. The percentage bands are wide enough to reduce inter-rater disagreement yet granular enough to detect meaningful differences; the 95–100% category is intentionally aspirational, discouraging ceiling-effect inflation.
Feedback frequency is a leading indicator of student self-regulation and agency, aligning with Hattie’s evidence that immediate, task-level feedback yields large effect sizes. The four-point scale captures the difference between sporadic and continuous feedback without overwhelming reviewers with excessive granularity. Linking this field to “timeliness of feedback” in the matrix section enables data-quality cross-checks; large discrepancies (e.g., “Continuously” here but “Rarely” in the matrix) flag responses for human review.
The mandatory requirement is defensible because feedback density is a contractual component in many district evaluation frameworks; omitting it would undermine criterion coverage. The absence of a neutral midpoint nudges evaluators toward actionable judgments, while the common-language descriptors reduce rater training time—important for districts that rotate peer evaluators annually.
This item operationalizes culturally-responsive pedagogy, a federal compliance priority under Title III and ESSA. The five-tier scale moves from deficit-based (“No accommodations”) to asset-based (“Leverages student assets”), modeling best practice for evaluators themselves. When disaggregated by ELL population density, the data can reveal preparation gaps—schools with high ELL enrollment but predominantly “Uses visuals and gestures” may need targeted PD on translanguaging strategies.
Mandatory capture is essential because language-accommodation data is required for state reporting; optional responses would create systematic gaps that trigger audit findings. The scale’s progression is intuitive yet research-aligned, ensuring that even novice evaluators can reliably distinguish between levels. The data ultimately guides resource allocation—teachers at the “Provides multilingual resources” level can be tapped to lead future ELL workshops, scaling expertise internally.
This item assesses growth mindset, a predictor of teacher retention and student resilience. The four options range from externalizing (“Blames students”) to reflective (“Reflects and revises thoroughly”), providing actionable insight for coaching conversations. When aggregated, the distribution can signal school culture—buildings where a plurality selects “Reflects and revises” often correlate with high staff-efficacy scores on climate surveys.
The mandatory requirement aligns with professional-practice rubrics that value adaptive expertise; skipping the item would omit critical attitudinal data. The scale avoids neutral midpoint inflation, nudging evaluators toward a directional judgment while the descriptors remain sufficiently nuanced to support reliable classification. Instructional coaches can use the data to differentiate follow-up: teachers who “Blame students” benefit from cognitive-coaching cycles, whereas those who “Make minor adjustments” may need support with data-analysis tools.
This summative, open-ended prompt compels evaluators to synthesize disparate evidence into a coherent narrative, reinforcing the cognitive act of sense-making that improves subsequent observation skills. Requiring a response ensures that every review concludes on an asset-based note, counteracting the negativity bias that can emerge when raters focus heavily on deficits. The paragraph-length constraint yields data that is concise enough for administrator scanning yet rich enough for inclusion in growth-plan documents.
Mandatory status is psychologically astute because ending with strengths increases the likelihood that the evaluatee will accept accompanying critique, leveraging the “feedback sandwich” effect documented in organizational-behavior literature. The absence of a minimum-word count leaves room for eloquence or brevity, but the single-paragraph instruction discourages superficial bullet lists. Ultimately, the field generates qualitative quotes that can be featured in recognition ceremonies, reinforcing a culture of appreciative inquiry.
This prompt operationalizes the principle of targeted, manageable improvement—asking for one high-leverage area prevents the overwhelm that occurs when teachers receive laundry lists of recommendations. Requiring concrete steps nudges evaluators toward coaching language (“Try think-alouds during reading”) rather than vague judgments (“Improve questioning”). The data can be thematically coded to reveal school-wide growth areas, informing future professional-development calendars.
Mandatory capture is essential because optional responses would result in a high percentage of skipped fields, undermining the form’s formative purpose. The multiline text box invites specificity, but the absence of a word minimum could still yield terse answers; nonetheless, the current design achieves a pragmatic balance between depth and completion rates. When paired with the preceding strengths paragraph, this field completes the “glow-grow” protocol that research shows maximizes teacher receptivity to peer feedback.
Mandatory Question Analysis for Teacher Peer Evaluation Form
Question: Your full name
Justification: Mandatory identification ensures evaluator accountability, which research shows improves the thoughtfulness and honesty of peer feedback; anonymous reviews correlate with more extreme and less actionable comments. Collecting names also enables professional-development offices to track inter-rater reliability over time and to provide targeted calibration training for evaluators whose ratings consistently deviate from cohort norms, thereby safeguarding the integrity of the entire peer-review system.
Question: Teacher being evaluated (full name)
Justification: Accurately linking feedback to the correct educator is a contractual and legal requirement for personnel files, tenure decisions, and state reporting; misattributed reviews can invalidate summative evaluations and expose districts to grievances. A mandatory, free-text field preserves flexibility for small schools or guest teachers who may not appear in pre-populated rosters, while optional employee-ID capture provides an additional disambiguation layer that HR can use to rectify typos or homonyms.
Question: Subject(s) taught by the teacher
Justification: Subject context is essential for valid interpretation of pedagogical ratings; exemplary “use of manipulatives” carries different weight in Kindergarten math than in AP Literature. Mandatory capture enables the district to benchmark against curriculum standards and to generate department-level analytics that guide resource allocation—e.g., universally low technology-integration ratings in the Visual and Performing Arts department can trigger targeted PD investments.
Question: Grade level(s) or age group(s) taught
Justification: Grade-band data is required to apply developmentally appropriate criteria during review; classroom-management strategies effective in Grade 1 (hand signals, carpet meetings) would be incongruous in Grade 12. Making this field mandatory ensures that subsequent ratings are interpreted through an age-appropriate lens, protecting teachers from invalid comparisons and supporting compliance with evaluation rubrics that explicitly reference developmental appropriateness.
Question: Your relationship to the teacher
Justification: Understanding the relational dynamic allows administrators to weight feedback appropriately and to detect potential bias—mentors may overemphasize positives, whereas cross-departmental peers may lack content context. Mandatory collection preserves dataset completeness, enabling advanced analytics to flag outlier ratings where the evaluator’s relationship suggests a conflict of interest or, conversely, an especially credible longitudinal perspective.
Question: How many times have you observed this teacher in the past 12 months?
Justification: Observation frequency serves as a proxy for feedback validity and is frequently codified in collective-bargaining agreements that stipulate minimum peer-observation counts. Mandatory capture ensures compliance with these contractual provisions and enables weighting algorithms that assign higher credibility to reviews based on multiple sustained observations, thereby improving the overall reliability of summative evaluations.
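The weighting idea described above can be made concrete. The following is a minimal sketch, not district policy: the logarithmic scaling and the cap of five observations are illustrative assumptions chosen so that credibility grows with diminishing returns and a single prolific observer cannot dominate the aggregate.

```python
# Hypothetical sketch of credibility weighting by observation count.
# The log scaling and the cap are illustrative assumptions, not policy.
import math

def observation_weight(observation_count: int, cap: int = 5) -> float:
    """Weight a peer review by how many observations support it.

    Reviews based on zero observations receive zero weight; weight grows
    with diminishing returns and saturates at the cap.
    """
    effective = min(max(observation_count, 0), cap)
    return round(math.log1p(effective) / math.log1p(cap), 2)

def weighted_rating(reviews: list[tuple[float, int]]) -> float:
    """Combine (rating, observation_count) pairs into a weighted mean."""
    total_weight = sum(observation_weight(n) for _, n in reviews)
    if total_weight == 0:
        return 0.0
    return sum(r * observation_weight(n) for r, n in reviews) / total_weight
```

Under this sketch, a review backed by five observations counts at full weight, while a review based on no observations contributes nothing to the weighted mean.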
Question: Date of most recent observation
Justification: Recency determines the relevance of associated ratings and is a legal requirement in many state evaluation frameworks that mandate observations within defined windows (typically within 60 days of a summative conference). A mandatory date field prevents gaming behaviors such as referencing outdated lessons and supports audit trails that verify timely feedback cycles.
Question: How often does the teacher share lesson plans with colleagues for feedback?
Justification: Collaborative planning is a leading indicator of professional-learning-community health and is often embedded as a criterion in district evaluation rubrics. Mandatory capture ensures criterion completeness and supplies longitudinal data that can reveal departmental silos or exemplary collaboration cultures, informing leadership decisions about PLC facilitation and resource allocation.
Question: What strengths did you observe in the teacher's lesson planning?
Justification: Requiring an asset-based narrative counteracts the negativity bias that can emerge when evaluators focus heavily on deficits, thereby increasing the likelihood that the evaluatee will accept subsequent critique. Mandatory open-text input yields evidence-rich comments that can be directly quoted in growth plans, modeling best-practice feedback and reinforcing a culture of appreciative inquiry.
Question: How does the teacher handle disruptive behavior?
Justification: Classroom-management efficacy is a core component of most state evaluation rubrics and a federal compliance concern regarding discipline disproportionality. Mandatory capture ensures that administrators have peer-reported data to identify disparate practices early, enabling targeted professional development before formal complaints arise.
Question: How effectively does the teacher check for understanding during the lesson?
Justification: Formative-assessment frequency is strongly correlated with achievement gains, especially for marginalized student groups, and is explicitly referenced in many instructional-framework rubrics. A mandatory response ensures that equity-relevant data is available for every review, supporting both formative coaching and summative scoring.
Question: What percentage of students were actively engaged throughout the lesson?
Justification: Engagement percentage is a required metric in most classroom-observation protocols and serves as a proxy for instructional vitality. Mandatory capture guarantees dataset completeness, enabling disaggregated analyses that can expose systemic bias or staffing issues, such as chronically low engagement in Special-Education classrooms that may indicate inadequate paraprofessional support.
Question: How often does the teacher provide formative feedback during the lesson?
Justification: Feedback density is a contractual element in many district evaluation frameworks and a predictor of student self-regulation. Mandatory collection ensures that every peer review contains data needed for both compliance and instructional coaching, supporting timely interventions for teachers who provide formative feedback infrequently.
Question: How does the teacher accommodate students with different language backgrounds?
Justification: Language-accommodation data is required for federal Title III and ESSA compliance, and omitting it could trigger audit findings. A mandatory response ensures that districts can monitor culturally responsive practices and identify teachers who may benefit from additional ELL professional development, thereby reducing legal risk and improving equity outcomes.
Question: When something goes wrong in a lesson, how does the teacher typically respond?
Justification: Growth-mindset orientation is a predictor of both teacher retention and student resilience, and is explicitly referenced in many professional-practice rubrics. Mandatory capture supplies attitudinal data that instructional coaches can use to differentiate follow-up—teachers who “blame students” benefit from cognitive-coaching cycles, whereas those who “reflect and revise” may need data-analysis tools, ensuring that scarce coaching resources are allocated efficiently.
Question: Summarize this teacher's greatest strengths in one paragraph:
Justification: Ending every review with a required strengths paragraph leverages the feedback-sandwich effect, increasing the probability that the evaluatee will read and implement accompanying critiques. Mandatory status ensures no review concludes on a purely deficit-based note, supporting district efforts to sustain educator morale and model asset-based feedback cultures.
Question: Identify one high-leverage area for growth and suggest concrete steps:
Justification: Asking for a single, actionable recommendation prevents overwhelming teachers with laundry lists that reduce implementation likelihood. Mandatory capture ensures every review yields a targeted next step that instructional coaches can revisit during follow-up cycles, thereby closing the feedback loop and demonstrating continuous-improvement ROI to stakeholders.
Overall Mandatory Field Strategy Recommendation:
The current form employs a judicious mandatory-field strategy that balances legal compliance, data integrity, and user burden. By limiting required inputs to identification, contextual metadata, and high-impact instructional behaviors, the district maximizes completion rates while still collecting sufficient evidence for both formative coaching and summative decisions.

Future enhancements could include conditional logic that makes optional fields mandatory only when relevant: for example, if "Yes" is selected for the bias-observation question, the descriptive follow-up would become required, deepening data richness without increasing baseline friction. Additionally, real-time progress indicators (e.g., "70% complete") and clear explanations beside mandatory asterisks can further reduce abandonment while preserving the robust dataset needed for evidence-based human-capital decisions.
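The conditional-requirement enhancement suggested above can be sketched as a small validation rule. This is an illustrative sketch only: the field names (`bias_observed`, `bias_description`, and the baseline set) are hypothetical labels standing in for the form's actual fields.

```python
# Minimal sketch of conditional required-field logic for the review form.
# All field names are hypothetical placeholders for the real form fields.

def required_fields(responses: dict) -> set[str]:
    """Return the fields that must be completed, given the answers so far.

    Baseline mandatory fields are always required; follow-up fields become
    required only when a triggering answer makes them relevant.
    """
    required = {"evaluator_name", "teacher_name", "subjects", "grade_levels",
                "relationship", "observation_count", "last_observation_date"}
    # Conditional rule: answering "Yes" to the bias-observation question
    # makes the descriptive follow-up mandatory.
    if responses.get("bias_observed") == "Yes":
        required.add("bias_description")
    return required

def missing_fields(responses: dict) -> set[str]:
    """Fields still blank among those currently required."""
    return {f for f in required_fields(responses) if not responses.get(f)}
```

In practice the same rule would typically run both client-side (to toggle the asterisk and block submission) and server-side (to enforce completeness regardless of the client), keeping baseline friction low while guaranteeing the follow-up detail whenever it is relevant.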