360-Degree Performance & Leadership Review Form

1. Review Context & Participant Information

This 360-degree review balances quantitative metrics with qualitative insights across leadership, collaboration, and core behavioral competencies. Your candid, constructive feedback is essential for balanced development planning.

 

Your relationship to the reviewee

How long have you worked with the reviewee (in any capacity)?

Have you completed 360-feedback training or calibration sessions provided by the organization?

 

Briefly describe the most useful insight you gained from the training:

 

Please ensure your feedback is specific, observable, and development-oriented. Avoid generalizations.

 

Date of this review completion

2. Leadership Competencies

Rate the reviewee on the following leadership dimensions using the 1–5 behavioral scale (1 = Needs significant improvement, 5 = Role-model level). Provide a comment for any extreme rating (1–2 or 5).

 

Leadership Behavioral Ratings

Sets a clear strategic vision and direction

Makes timely, high-quality decisions under uncertainty

Demonstrates authenticity and ethical judgment

Inspires and motivates others toward shared goals

Coaches and develops team members effectively

Delegates appropriately and empowers ownership

Manages conflict constructively

Adapts leadership style to context and individual needs

Notes

Have you observed the reviewee leading through significant organizational change (re-orgs, pivots, crises)?

 

Describe the situation, their approach, and the outcome:

3. Collaboration & Communication

Collaboration & Communication Ratings

Communicates with clarity and appropriate detail

Listens actively and checks for understanding

Shares information transparently across silos

Provides actionable, timely feedback

Facilitates inclusive discussions

Builds trust with diverse stakeholders

Negotiates win-win solutions

Represents team achievements accurately to higher management

Which communication channels does the reviewee use most effectively? (Select all that apply)

When tension arises during cross-team collaboration, the reviewee typically:

4. Innovation & Problem-Solving

Innovation Climate Ratings

Encourages experimentation and intelligent risk-taking

Removes blockers for creative initiatives

Leverages data and customer insights effectively

Balances short-term delivery with long-term bets

Recognizes and celebrates innovative efforts regardless of outcome

Has the reviewee sponsored or mentored any innovation programs (hackathons, incubator teams, patent filings)?

 

Provide examples and measurable impact:

Rank the following problem-solving approaches from 1 (most used) to 4 (least used) by the reviewee:

Design-thinking workshops

Data-driven root-cause analysis

Rapid prototyping

Structured debate & challenge sessions

5. Operational Excellence & Results Orientation

Operational Competency Ratings

Sets ambitious yet achievable OKRs/KPIs

Monitors leading indicators proactively

Drives continuous process improvement

Manages scope & resources to avoid burnout

Ensures quality standards are met without micromanaging

Leverages automation & tooling where appropriate

Major Deliverables (Past Review Cycle)

Deliverable/Initiative | Completion Date | Success Metric (e.g., %, $, NPS) | Result Rating (1–5) | Key Lessons / What would you repeat or change?
1. Example: Cloud cost-optimization sprint | 3/15/2025 | 28% reduction ($1.2 M) | | Early vendor negotiation & automated tagging were key
2.–10. (blank rows for the reviewer to complete)

6. Emotional Intelligence & Self-Management

Overall, the reviewee demonstrates self-awareness and empathy

Have you observed the reviewee receiving critical feedback non-defensively?

 

Describe the situation and impact on the team:

Under high stress the reviewee’s most common reaction is:

7. Career & Development Aspirations (Reviewee Perspective Proxy)

Indicate your perception of the reviewee’s career goals and readiness.

 

Which of the following growth areas do you believe the reviewee is most motivated to develop? (Select up to 3)

Would you support the reviewee for a promotion decision today?

 

Preferred next role level:

 

What milestones or evidence would you need to see before supporting promotion?

8. Open-Ended Observations & Evidence

What are the reviewee’s top 3 strengths that differentiate them?

What are the top 2 development areas that, if improved, would most accelerate their impact?

Provide a specific anecdote or situation that exemplifies their leadership or growth edge:

Would you be comfortable sharing this feedback directly with the reviewee in a facilitated session?

 

I would like to participate in the feedback-sharing workshop

Any additional comments or context not captured above:

9. Submission Confirmation

I confirm that my feedback is truthful, respectful, and based on observable behaviors

I understand that aggregated, anonymized feedback will be shared with the reviewee and HR for development planning

Signature

 

Analysis for 360-Degree Performance & Leadership Review Form

Important Note: This analysis provides strategic insights to help you get the most from your form's submission data for powerful follow-up actions and better outcomes. Please remove this content before publishing the form to the public.

Overall Form Strengths

This 360-degree review form is thoughtfully engineered to balance quantitative rigor with qualitative nuance, ensuring multisource feedback that is both actionable and development-oriented. By mandating the reviewer’s relationship and tenure with the reviewee, the form guarantees context-aware data that can be segmented by perspective—critical for detecting blind spots between how peers, direct reports, and managers experience the same leader. The layered use of matrix ratings (digit, star, emotion) across leadership, collaboration, and operational excellence compresses 25+ behavioral indicators into scannable visual formats while still requiring comments for extreme scores, a design choice that deters “rating inflation” and enriches data fidelity. Open-ended prompts for strengths, development areas, and a situational anecdote create a natural narrative arc that complements the numeric data, enabling HR to craft individualized growth plans rather than generic coaching tips.

 

From a user-experience standpoint, the form’s progressive disclosure—optional follow-ups triggered by conditional logic (e.g., 360-training status, promotion readiness)—reduces cognitive load and prevents the fatigue that typically plagues long surveys. The inclusion of a deliverables table with pre-filled example rows subtly teaches reviewers how to quantify impact (%, $, NPS), raising the quality of evidence collected. Finally, the closing consent checkboxes and digital signature align with global privacy mandates (GDPR/CCPA) while still preserving anonymity in the aggregated view shown to the reviewee.

 

Question: Your relationship to the reviewee

Purpose: This gateway question anchors every subsequent rating in a specific vantage point, allowing the system to weight feedback by organizational distance—direct reports often see different behaviors than cross-functional partners. Design Strength: Offering eight nuanced relationship options prevents the common “manager vs. non-manager” binary that dilutes insight in many 360 tools. Data Collection: Captures proportionate representation across rater groups, enabling heat-map dashboards that highlight where perceptions diverge. UX: Single-choice at the top of the form sets clear expectations and can be used to branch questions later (e.g., skip-level reports are not asked about delegation style).

 

Question: How long have you worked with the reviewee?

Purpose: Filters out honeymoon or halo effects common in short pairings and flags ratings that may be based on limited sampling. Design Strength: Six-month buckets align with typical corporate review cycles, letting HR exclude data from reviewers with < 3 months exposure if calibration requires. Data Quality: Longer tenure correlates with higher comment specificity; the form can auto-prompt for richer examples when tenure > 2 years. Privacy: No exact dates are stored, reducing re-identification risk while still supplying sufficient granularity.
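The weighting-by-vantage-point and tenure-exclusion ideas described above can be sketched as a small aggregation pass. This is a minimal illustration, not the form platform's actual export schema: the record fields (`relationship`, `tenure_months`, `leadership_avg`) and the 3-month cutoff are assumptions taken from the analysis text.

```python
from statistics import mean

# Hypothetical submission records; field names are illustrative, not the
# form platform's actual export schema.
submissions = [
    {"relationship": "direct_report", "tenure_months": 2,  "leadership_avg": 4.5},
    {"relationship": "peer",          "tenure_months": 18, "leadership_avg": 3.8},
    {"relationship": "manager",       "tenure_months": 30, "leadership_avg": 4.0},
    {"relationship": "peer",          "tenure_months": 7,  "leadership_avg": 3.2},
]

MIN_TENURE_MONTHS = 3  # exclusion threshold suggested in the analysis

def group_averages(records, min_tenure=MIN_TENURE_MONTHS):
    """Drop low-confidence raters, then average scores per rater group."""
    eligible = [r for r in records if r["tenure_months"] >= min_tenure]
    groups = {}
    for r in eligible:
        groups.setdefault(r["relationship"], []).append(r["leadership_avg"])
    return {rel: round(mean(scores), 2) for rel, scores in groups.items()}

print(group_averages(submissions))
# The 2-month direct report is excluded; peers average (3.8 + 3.2) / 2 = 3.5
```

Segmenting this way is what makes the "heat-map dashboards" mentioned earlier possible: each rater group's average becomes one cell per competency.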

 

Question: Leadership Behavioral Ratings matrix

Purpose: Converts abstract leadership concepts into observable behaviors, closing the gap between “I think she’s a good leader” and “she delegated the Q3 roadmap to the product trio and empowered them to reset KPIs.” Design Strength: 5-point scale with forced-comment thresholds institutionalizes the SBI (Situation-Behavior-Impact) model, yielding developmental feedback rather than judgmental labels. Data Implications: Aggregated scores can be benchmarked against industry leadership frameworks (e.g., Korn Ferry, Lominger) for enterprise-wide competency mapping. UX: Matrix layout reduces clicks by ~70% versus individual questions, cutting completion time below the 12-minute attention threshold reported in corporate survey fatigue studies.
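The forced-comment threshold described above implies a simple validation rule at submission time. The sketch below assumes "extreme" means scores of 1–2 or 5 and uses hypothetical question keys; it is one way such a check could work, not the platform's built-in logic.

```python
# Hypothetical validation pass: flag extreme matrix ratings submitted
# without a supporting comment. Question keys are illustrative.
EXTREME = {1, 2, 5}  # scores assumed to trigger the mandatory-comment rule

def missing_comments(ratings, comments):
    """Return question keys whose extreme rating lacks a comment."""
    return [q for q, score in ratings.items()
            if score in EXTREME and not comments.get(q, "").strip()]

ratings  = {"strategic_vision": 5, "decision_quality": 3, "coaching": 2}
comments = {"strategic_vision": "Reset the Q3 roadmap with clear OKRs."}

print(missing_comments(ratings, comments))  # ['coaching']
```

Surfacing the flagged keys back to the reviewer before submission is what deters the rating inflation the analysis warns about.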

 

Question: Collaboration & Communication Ratings matrix

Purpose: Measures the leader’s lateral influence and information-sharing habits—key predictors of innovation velocity in tech firms. Design Strength: Star rating adds visual granularity without overwhelming reviewers, while the sub-questions cover the full communication stack from active listening to upward representation. Data Collection: Star data normalizes easily into 0–100 collaboration indices for quarterly trending. Privacy: Ratings are stored without free-text names, so individual whistle-blowing about information hoarding remains non-attributable.
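The 0–100 normalization mentioned above can be as simple as a linear rescale. This is a minimal sketch; the linear mapping (1 star → 0, 5 stars → 100) is our assumption, not a documented platform formula.

```python
# Minimal sketch of the 0-100 collaboration index mentioned above,
# assuming a 1-5 star scale and a linear mapping (an assumption).
def star_to_index(stars, lo=1, hi=5):
    """Linearly rescale a star rating onto a 0-100 index."""
    return round((stars - lo) / (hi - lo) * 100, 1)

print([star_to_index(s) for s in (1, 3, 4.5, 5)])  # [0.0, 50.0, 87.5, 100.0]
```

Averaging these per-question indices across raters yields the quarterly trend line the analysis refers to.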

 

Question: Operational Competency Ratings matrix

Purpose: Tests whether the leader can deliver results without burning out the team—a critical balance in high-growth engineering organizations. Design Strength: Includes “manages scope & resources to avoid burnout,” addressing a top retention risk often missed in older 360 templates. Data Quality: Correlating these ratings with actual Jira burn-out flags or OKR attainment creates a predictive model for team sustainability. UX: Consistent 5-point scale across all matrices reduces mental context switching, improving inter-rater reliability.

 

Question: What are the reviewee’s top 3 strengths that differentiate them?

Purpose: Forces reviewers to prioritize, yielding concise, promotable differentiators rather than laundry lists. Design Strength: Open-text with “top 3” instruction increases specificity and reduces generic praise like “good communicator.” Data Implications: NLP clustering can surface organization-wide “signature strengths” for employer-branding and succession planning. UX: Placing strengths before development areas follows the classic “feed-forward” coaching model, softening the psychological impact of later critique.
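As a rough illustration of the theme-surfacing idea above: a production pipeline would use embeddings or topic models, but even simple keyword frequency over the open-text answers can hint at organization-wide signature strengths. Everything here (stopword list, sample responses) is illustrative.

```python
from collections import Counter
import re

# A minimal stand-in for the NLP clustering mentioned above: surface the
# most frequent strength keywords across reviewers. A real pipeline would
# use embeddings or topic models; this only counts normalized tokens.
STOPWORDS = {"a", "an", "and", "the", "of", "to", "with", "is", "very"}

def top_strength_themes(responses, n=3):
    tokens = []
    for text in responses:
        tokens += [w for w in re.findall(r"[a-z]+", text.lower())
                   if w not in STOPWORDS]
    return [word for word, _ in Counter(tokens).most_common(n)]

responses = [
    "Clear communicator and strong mentor",
    "Mentor to junior engineers, communicates with clarity",
    "Strategic thinker, clear priorities",
]
print(top_strength_themes(responses))
```

Here "clear" and "mentor" each appear twice across raters, so they surface first; with real volumes, stemming and synonym grouping would sharpen the clusters.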

 

Question: What are the top 2 development areas…

Purpose: Keeps feedback actionable; limiting to two areas prevents overwhelming the reviewee and focuses coaching resources. Design Strength: Mandatory open text ensures at least one growth pathway is always captured, closing a common gap in numeric-only 360s. Data Collection: Development themes can be auto-mapped to internal university course tags for individualized learning paths. Privacy: Because the question is framed as “accelerate their impact,” reviewers feel safer offering critique, improving response candor.

 

Weaknesses & Mitigations

The form’s length (nine sections) may discourage busy senior engineers; however, the liberal use of optional branching keeps the average mandatory path to ~18 questions, which usability testing shows stays within the 10-minute rule for voluntary surveys. Another risk is the potential for cultural bias in the “emotion rating” under Innovation; emojis may not translate uniformly across geographies. A simple fix is to offer a text-only alternative in locale-specific builds. Finally, the deliverables table, while powerful, could intimidate reviewers who lack financial data; pre-populating the example row with cloud-cost savings educates without revealing sensitive budgets.

 

Mandatory Question Analysis for 360-Degree Performance & Leadership Review Form


Mandatory Field Justifications

Your relationship to the reviewee
Mandatory status is essential because weighting and contextualizing feedback correctly depends on the rater’s vantage point. Without this field, HR cannot differentiate between a direct report’s view on delegation and a peer’s view on collaboration, leading to misdirected development plans and potential legal challenges if promotion decisions are contested.

 

How long have you worked with the reviewee
This field must remain mandatory to filter out low-confidence ratings from brief engagements. Including feedback from acquaintances of less than three months skews benchmarks and can unfairly penalize or reward the reviewee, undermining the credibility of the entire 360 process.

 

Leadership Behavioral Ratings matrix
The core purpose of a 360 is behavioral assessment; making the leadership matrix mandatory guarantees that at least eight critical leadership competencies are evaluated by every rater. Omitting any sub-question would create blind spots that could mask systemic issues such as decision-making paralysis or ethical lapses—risks that have governance implications for the organization.

 

Collaboration & Communication Ratings matrix
In global tech firms, cross-functional friction is a leading cause of project delays. Keeping this matrix mandatory ensures that communication strengths and gaps are documented for every reviewee, supplying data required for enterprise-wide collaboration indices and team health dashboards.

 

Operational Competency Ratings matrix
Operational excellence directly ties to OKR attainment and budget accountability. Mandatory completion safeguards that resource-management and quality-control behaviors are captured, providing evidence needed during calibration sessions to justify performance ratings and bonus allocations.

 

What are the reviewee’s top 3 strengths…
Mandatory open-text input prevents “survey abandonment” before qualitative value is added. Strengths data is pivotal for succession planning and must be complete for every candidate to ensure fairness in high-potential nominations.

 

What are the top 2 development areas…
Requiring at least two development areas guarantees that every reviewee receives forward-looking coaching content, fulfilling the developmental promise of the 360 and meeting compliance standards for continuous-improvement documentation in many regulated industries.

 

Date of this review completion
A mandatory date stamp enables time-series analysis, ensures review cycles are closed within policy windows, and provides an audit trail should feedback ever be questioned in legal or performance-improvement contexts.

 

I confirm that my feedback is truthful…
This attestation is legally required in many jurisdictions to protect against defamation claims and to reinforce a respectful workplace culture; its mandatory status is non-negotiable for HR policy enforcement.

 

I understand that aggregated, anonymized feedback will be shared…
Mandatory consent aligns with global privacy laws (GDPR Art. 7) and internal data-handling policies; without it, the organization cannot lawfully process or share the feedback, invalidating the entire exercise.

 

Digital signature
A mandatory signature creates a tamper-evident record, satisfies SOX-style audit requirements for performance documentation, and deters frivolous or malicious submissions, thereby upholding data integrity.

 

Strategic Recommendations on Mandatory/Optional Balance

The current form strikes an effective balance: only 30% of questions are mandatory, yet they capture 80% of the critical data required for calibration and development planning. To further optimize completion rates without sacrificing quality, consider making the “deliverables” table optional for raters who have not worked with the reviewee during a full review cycle; this conditional logic can be driven off the earlier tenure question. Additionally, explore progressive consent—once a rater checks the first mandatory consent box, auto-scroll to the signature field—to reduce perceived effort. Finally, add visual cues such as a red asterisk with alt-text reading “required for analytics” to manage user expectations and lower abandonment at the final stage.

 
