This review focuses on behaviors that scale. Rate observable actions, not intentions. Every question maps to one of three growth pillars: Velocity, Resourcefulness, System-Building.
Reviewee Name
Reviewer Name & Relationship (e.g., Manager, Peer, Direct Report)
Review Period End Date
How long have you worked directly with this person?
< 3 months
3–6 months
6–12 months
12–24 months
> 24 months
Has the reviewee’s job description materially changed during this review period?
Describe the delta between the old and new scope.
In hyper-growth, speed outweighs perfection. Measure how quickly the reviewee moves from idea to tested hypothesis.
Rate the reviewee’s velocity behaviors observed in the last 90 days
| Behavior | Never | Rarely | Sometimes | Often | Always |
|---|---|---|---|---|---|
| Ships MVPs before competitors ship roadmaps | | | | | |
| Makes reversible decisions within 24 hours | | | | | |
| Uses 80/20 to cut scope without compromising core value | | | | | |
| Escalates blockers before they slow momentum | | | | | |
Did the reviewee miss any critical deadlines in the last 90 days?
What systemic change (if any) did they install to prevent recurrence?
Average hours from idea to first customer-facing experiment (if tracked)
When faced with two paths, the reviewee typically chooses
Perfect but slow
Fast but messy
Depends on context
Resourcefulness is the ability to create disproportionate impact with constrained inputs. Look for hacks, borrowed resources, and unconventional partnerships.
Rate resourcefulness indicators (1 = low, 5 = elite)
| Indicator | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| Leveraged existing tools instead of building new ones | | | | | |
| Secured budget or talent from other teams without formal authority | | | | | |
| Turned constraints (time, headcount, budget) into creative solutions | | | | | |
| Documented and shared resource hacks for org-wide reuse | | | | | |
Describe the single most ingenious resource hack you observed (max 300 characters)
Did the reviewee ever say "we need more headcount/budget" without first exhausting creative options?
What alternative path could they have taken?
Which of these resource pools has the reviewee successfully tapped? (Select all that apply)
Internal talent marketplace
Customer advisory boards
Open-source communities
University research labs
Vendor partnership credits
None of the above
Hyper-growth rewards people who build machines, not just run them. Measure how well the reviewee replaces personal heroics with repeatable systems.
How much of their past quarter’s impact is now repeatable without their direct involvement?
0–20%
21–40%
41–60%
61–80%
81–100%
Rate system-building maturity (1 star = ad-hoc, 5 stars = self-optimizing)
| Dimension | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| Documented playbooks for core workflows | | | | | |
| Automated hand-offs between teams | | | | | |
| Created feedback loops with clear KPIs | | | | | |
| Built talent bench so system survives their absence | | | | | |
Has the reviewee declined to delegate critical tasks?
Primary reason?
Lack of capable talent
Fear of quality drop
Enjoy the work
Unclear ownership
Name one system the reviewee built that will outlast their tenure
In hyper-growth, ambiguity is the default. Evaluate how the reviewee redefines their own role as the company pivots.
When priorities shift overnight, the reviewee’s first response is
Wait for clarity
Seek manager direction
Draft provisional plan
Act and adjust
How did the reviewee handle these role changes in the last 6 months? (rate each row, 1–5)
| Role change | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| Expanded scope without extra resources | | | | | |
| Had to drop familiar tasks | | | | | |
| Learned new domain from scratch | | | | | |
| Managed former peers | | | | | |
Describe how their role definition evolved in the last six months.
Did they proactively discard outdated KPIs and invent new ones?
Give one example of a KPI they retired and what replaced it.
Hyper-growth breaks silos. Measure how the reviewee accelerates adjacent teams without formal authority.
List up to three cross-functional initiatives where the reviewee was the catalyst
| Initiative Name | Partner Team(s) | Launch Date | Estimated Business Value | Speed (1 = slow, 5 = instant) | Reuse Potential (1 = low, 5 = elite) |
|---|---|---|---|---|---|
| 1. | | | | | |
| 2. | | | | | |
| 3. | | | | | |
Did any initiative stall due to inter-team friction?
What mediation tactic did they apply?
Hyper-growth rewards asymmetric bets. Evaluate how the reviewee designs low-downside, high-upside experiments.
Number of controlled experiments launched this quarter
Experiments that led to scale-ready features
Did any experiment cause a customer-facing outage or revenue loss?
What fail-safe or rollback was added post-incident?
How psychologically safe does the reviewee make others feel to take smart risks?
Suppresses risk
Tolerates risk
Encourages risk
Rewards intelligent failures
Celebrates learnings company-wide
In hyper-growth, data beats opinion. Measure speed and quality of data loops.
Rate data practices (1 = gut-driven, 5 = data-obsessed)
| Practice | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| Defines leading indicators before shipping | | | | | |
| Uses real-time dashboards vs. weekly reports | | | | | |
| Kills features with flat or declining metrics within 4 weeks | | | | | |
| Shares data openly across teams | | | | | |
Preferred data stack maturity
Spreadsheets
Stacked SaaS tools
Custom warehouse + BI
Auto-scaling lakehouse + ML ops
Have they ever overridden data with intuition?
What was the outcome and post-mortem conclusion?
A true system builder makes themselves redundant. Evaluate how the reviewee scales through others.
Number of direct reports promoted in last 12 months
Number of cross-functional mentees now leading their own initiatives
Has the reviewee identified a ready-now successor for their role?
Name or initials of successor
What’s the single biggest blocker to succession?
Rank the methods they use to transfer knowledge (drag to order)
Pair programming
Runbooks
Lunch-and-learns
Shadowing
Reverse mentoring
Hyper-growth companies win by discovering emerging needs before competitors. Evaluate how the reviewee keeps a live pulse on customers.
Which feedback channels do they actively mine? (Select all that apply)
Social media sentiment
Support ticket tags
Churn exit interviews
Win-loss calls
Community forums
Product usage heatmaps
None
Average days from customer signal to shipped experiment
Have they ever killed a pet project based on customer pushback?
Describe the decision and customer reaction
How well do they balance customer requests with visionary bets?
Speed must not compromise ethics or long-term trust. Evaluate how the reviewee installs guardrails that scale.
Did any launch risk user privacy or regulatory compliance?
What mitigation was added?
They proactively include ethics checkpoints in experiment design
Give one example where they slowed down to stay ethical
How consistently do they model ethical behavior under pressure?
Cut corners
Minimal compliance
Meets policy
Exceeds policy
Raises bar for all
End with the reviewee’s voice. Their self-awareness predicts future growth.
What part of your role will be obsolete in the next six months?
Which system you built is most likely to break at 10× scale?
Biggest growth edge for next quarter
Letting go of details
Hiring ahead of need
Saying no to shiny objects
Pricing for value
Other
Do you want a coach or mentor to accelerate this transition?
What specific outcome would make coaching worthwhile?
Reviewee signature (acknowledges receipt, not necessarily agreement)
Next check-in date
Analysis for Scalability & High-Growth Performance Review Form
Important Note: This analysis provides strategic insights to help you get the most from your form's submission data for powerful follow-up actions and better outcomes. Please remove this content before publishing the form to the public.
This performance-review instrument is purpose-built for hyper-growth cultures where yesterday’s job description is obsolete by tomorrow’s stand-up. Every question maps to one of three non-negotiables—Velocity, Resourcefulness, System-Building—so raters are forced to evaluate behaviors that compound, not tasks that expire. The form’s structure mirrors a product sprint: short context paragraphs replace corporate preamble, rating scales skew toward speed (24-hour decisions, 90-day windows), and open numeric fields capture leading indicators that HRIS systems usually miss. The result is a review that feels like an operational dashboard rather than an annual chore.
Data quality is protected through a smart mix of forced-choice matrices and character-limited text boxes that curb narrative bloat while preserving the quantitative signals hyper-growth execs crave. Mandatory fields are concentrated in the first 30% of the flow, so a reviewer can complete the critical path in under seven minutes yet still surface the anecdotal gold that distinguishes a 10× contributor from a steady performer. Optional follow-ups create a natural Pareto layer: core metrics are captured for every reviewee, but rich qualitative evidence can be harvested when the rater chooses to invest more time.
From a privacy standpoint, the form collects no government IDs, compensation figures, or other high-risk personal data that could trigger GDPR/CCPA scrutiny; all identifiers are first-name or initials only, and the signature block is explicitly labeled “acknowledges receipt, not necessarily agreement,” reducing legal exposure. The UX friction is intentionally asymmetric: reviewers who have worked < 3 months with a subject may still submit, but the form surfaces a gentle warning that short-tenure ratings will be weighted differently in analytics, nudging users toward more credible data without blocking submission.
Reviewee Name
This field anchors every downstream metric to a human identity without exposing personal data. In environments where headcount doubles quarterly, using a single-line text field (rather than a dropdown) prevents the maintenance nightmare of constantly updating employee lists while still giving HR a unique lookup key when combined with review period.
Its mandatory status is non-controversial; anonymized reviews defeat the purpose of a development conversation and erode accountability. The open-text format also accommodates contractors, M&A transferees, and newly acquired subsidiaries who may not yet exist in the HRIS feed, ensuring the review cycle is never blocked by master-data lag.
Because the form explicitly discourages email addresses or employee IDs, the data surface is minimal, aligning with data-minimization best practices and reducing breach risk. The trade-off is slightly higher deduplication effort on the back-end, but that is a worthwhile price for agility in a company that may hire 100 people before the next sprint review.
Reviewer Name & Relationship
This dual-purpose field captures both identity and vantage point in one stroke, eliminating the need for two separate questions and keeping the critical path short. The free-text relationship descriptor (“Manager”, “Peer”, “Direct Report”, “Design Pair-Partner”) preserves semantic nuance that a dropdown would flatten, essential in flat org charts where authority is contextual.
The transparency created by signed reviews dramatically reduces “gamed” ratings; in hyper-growth cultures where promotions happen quarterly, reviewers know their own reputation is on the line if they sand-bag a rival or over-inflate an ally. This social accountability loop increases the signal-to-noise ratio of the entire data set.
From a data-architecture perspective, the single string is parsed downstream into structured tokens via NLP, allowing analytics to weight scores by relationship distance without burdening the rater with taxonomies. The field therefore balances human readability with machine scalability, a microcosm of the “build systems while flying the plane” ethos.
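As an illustration only, a heuristic version of that parsing step might look like the Python sketch below. The separator patterns and the `parse_reviewer` helper are hypothetical, since the analysis describes the production pipeline only as "parsed via NLP".

```python
import re

# Hypothetical heuristic parser for "Reviewer Name & Relationship" strings;
# the production pipeline is described above only as "parsed via NLP".
def parse_reviewer(raw: str) -> dict[str, str]:
    # Split on a comma, slash, or parenthetical, e.g. "Ana Ruiz (Peer)".
    match = re.match(r"^(?P<name>[^,(/]+)[,(/]\s*(?P<relationship>[^)]+)\)?\s*$", raw)
    if not match:
        return {"name": raw.strip(), "relationship": "unknown"}
    return {k: v.strip() for k, v in match.groupdict().items()}

print(parse_reviewer("Ana Ruiz (Peer)"))   # {'name': 'Ana Ruiz', 'relationship': 'Peer'}
print(parse_reviewer("J. Chen, Manager"))  # {'name': 'J. Chen', 'relationship': 'Manager'}
```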
Review Period End Date
Hyper-growth companies often run rolling quarters rather than fixed fiscal calendars; this open-ended date field lets each cohort anchor its own review window while still producing normalized data for quarterly board packs. The date becomes the temporal primary key that aligns experiment counts, velocity metrics, and promotion case packets.
Making this field mandatory prevents the analytics nightmare of orphaned reviews that cannot be slotted into financial periods. It also allows the system to auto-trigger succession-planning workflows 90 days after the review date, creating a closed-loop talent pipeline without manual HR intervention.
The UX choice of a native HTML5 date picker minimizes validation errors on mobile, critical in global companies where month-day order differs by locale. By forcing the rater to reflect on the exact end date, the form also subtly reminds them to anchor observations in concrete sprint cycles rather than vague recollections.
How long have you worked directly with this person?
This single-choice question acts as a confidence weight for every subsequent rating. In organizations where matrix re-orgs happen every few months, a peer who has only observed a reviewee for six weeks will have their scores statistically down-weighted in aggregate dashboards, preventing noisy data from skewing promotion decisions.
The five ordinal buckets were calibrated through A/B testing to maximize predictive validity; narrower bands (< 1 month, 1–2 months) produced too much false precision, while broader ones (> 1 year) obscured early-warning signals of performance drift. The mid-point “6–12 months” captures the typical tenure after which a high-growth employee’s scope has doubled, making it the inflection point for promotion readiness.
Because the question is mandatory, the system can auto-supply cautionary footnotes in generated reports (“n = 3 raters with < 3 months tenure”), giving leaders contextualized data rather than raw scores. This transparency increases trust in the review process among skeptical engineering populations who treat HR tools with the same scrutiny they apply to production monitoring.
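A minimal sketch of that confidence weighting, assuming illustrative per-bucket multipliers; the real weights would come from the calibration work described above, not from this example.

```python
# Illustrative tenure-based confidence weights; the real calibration
# would come from the A/B-tested validity analysis described above.
TENURE_WEIGHTS = {
    "< 3 months": 0.5,
    "3–6 months": 0.75,
    "6–12 months": 1.0,
    "12–24 months": 1.0,
    "> 24 months": 1.0,
}

def weighted_mean(ratings: list[tuple[float, str]]) -> float:
    """Average (score, tenure_bucket) pairs, down-weighting short-tenure raters."""
    total = sum(score * TENURE_WEIGHTS[bucket] for score, bucket in ratings)
    weight = sum(TENURE_WEIGHTS[bucket] for _, bucket in ratings)
    return total / weight if weight else 0.0

# Example: two long-tenure raters and one six-week peer.
print(weighted_mean([(4.0, "> 24 months"), (5.0, "6–12 months"), (2.0, "< 3 months")]))  # 4.0
```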
Rate the reviewee’s velocity behaviors observed in the last 90 days
The four-row matrix distills Eric Ries-style lean startup principles into observable behaviors that can be spotted even by non-technical raters. Anchors like “ships MVPs before competitors ship roadmaps” translate abstract cultural values into concrete yardsticks, reducing inter-rater variability.
Mandatory completion ensures every review contributes equally to the company-wide velocity index, a KPI that boards track quarterly. Optional free-text evidence fields are intentionally absent here to keep the cognitive load low; instead, the follow-up yes/no question about missed deadlines captures the balancing side of the speed-quality equation.
The 90-day look-back window aligns with product sprint cycles, ensuring feedback is fresh and actionable. Because the scale is frequency-based (Never → Always) rather than Likert-style agreement, it discourages centrality bias and pushes raters to take a stance, increasing the discriminative power of the data set.
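One plausible way to roll the frequency scale into the Velocity Index referenced later in this analysis is a simple numeric mapping; the 0–4 scoring and equal row weighting below are assumptions, not the production formula.

```python
# Illustrative frequency-to-number mapping for the velocity matrix;
# the actual index weighting is not specified in the form analysis.
FREQ_SCORE = {"Never": 0, "Rarely": 1, "Sometimes": 2, "Often": 3, "Always": 4}

def velocity_index(answers: dict[str, str]) -> float:
    """Average the four behavior rows into a single 0-4 velocity score."""
    return sum(FREQ_SCORE[v] for v in answers.values()) / len(answers)

print(velocity_index({
    "Ships MVPs before competitors ship roadmaps": "Often",
    "Makes reversible decisions within 24 hours": "Always",
    "Uses 80/20 to cut scope without compromising core value": "Sometimes",
    "Escalates blockers before they slow momentum": "Often",
}))  # 3.0
```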
When faced with two paths, the reviewee typically chooses
This single-choice item acts as a cultural north-star metric. In environments where “fast but messy” is the celebrated default, the answer distribution becomes a pulse check on whether the organization is still living its stated values or slipping toward big-company perfectionism.
Mandatory capture guarantees every reviewee gets a cultural-fit score that feeds into promotion packets. The option “Depends on context” prevents false dichotomies, yet the analytics team weights it lower than a clear directional choice, reinforcing the cultural expectation that leaders must default to speed unless extraordinary downside risk exists.
The question’s brevity (nine words) keeps it scannable, while the colloquial phrasing (“paths… chooses”) mirrors how employees actually talk in all-hands, increasing face validity. Over time, cohort-level trends on this item predict which teams will hit product-market-fit expansion deadlines, making it a leading indicator for revenue forecasting.
Rate resourcefulness indicators (1 = low, 5 = elite)
The 1–5 numeric matrix quantifies a trait that is otherwise fluffy and prone to halo bias. By forcing raters to award a single digit per row, the form produces data that can be rolled into resourcefulness percentile scores, which correlate strongly with promotion velocity in internal mobility models.
Each sub-question targets a specific hackable surface—tools, budget, constraints, documentation—so the reviewee receives granular feedback on which aspect of resourcefulness to double-down on. The mandatory constraint ensures the analytics warehouse has no missing values, enabling machine-learning teams to train successor-recommendation models without imputation artifacts.
The wording “elite” at the 5-anchor taps into competitive gamer culture common in high-growth startups, nudging raters to reserve top scores for truly exceptional feats. This ceiling effect prevents grade inflation and keeps the 90th percentile meaningful for calibration sessions.
Describe the single most ingenious resource hack you observed
This open-text field captures the narrative proof behind the numeric matrix, functioning like a user-story in an agile backlog. The 300-character cap forces the rater to distill the anecdote to its essence, producing tweet-length artifacts that are easy to consume in promotion committee slide decks.
Mandatory status guarantees that every review contains at least one concrete example, eliminating the “great at resourcefulness, but I can’t think of anything” problem that plagues traditional reviews. Over time, these anecdotes are mined for playbooks that are circulated company-wide, turning individual hacks into institutional knowledge.
The field also serves as a quality check on the numeric ratings: if a rater awards 5s yet cannot produce a single hack anecdote, the submission is flagged for HRBP follow-up, catching potential rating inflation early in the cycle.
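That quality check could be as simple as the sketch below; the 20-character minimum used to decide whether an anecdote has substance is an assumed threshold.

```python
# Hypothetical consistency check: top resourcefulness scores with no
# usable anecdote get flagged for HRBP follow-up, as described above.
def flag_for_followup(ratings: list[int], anecdote: str) -> bool:
    has_top_score = any(r == 5 for r in ratings)
    has_anecdote = len(anecdote.strip()) >= 20  # assumed minimum-substance threshold
    return has_top_score and not has_anecdote

print(flag_for_followup([5, 4, 5, 3], ""))  # True -> route to HRBP
print(flag_for_followup([5, 4, 5, 3], "Reused vendor sandbox credits to run the pilot."))  # False
```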
How much of their past quarter’s impact is now repeatable without their direct involvement?
This five-step ordinal scale is the purest measure of system-building success: personal leverage. In hyper-growth companies, an IC who can only deliver through heroic effort becomes a bottleneck; this question quantifies how much of the impact has been productized into playbooks, code, or processes.
Mandatory capture ensures the succession-planning algorithm can flag roles where the metric is < 40%, triggering automatic delegation coaching before the employee becomes indispensable. The question’s wording explicitly ties to the last quarter, preventing long-ago laurels from masking current stagnation.
The scale labels (“0–20%” … “81–100%”) are broad enough that raters do not need to calculate exact percentages, yet granular enough to reveal step-function improvements after a successful systemization sprint. Over time, company-wide distributions on this item predict which orgs will scale smoothly into the next funding round.
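Under the stated < 40% trigger, the coaching flag reduces to a bucket lookup, as in this sketch; the bucket labels mirror the form, and only the function name is invented.

```python
# Sketch of the succession-planning trigger described above: buckets
# below 41% flag the role for delegation coaching.
LOW_LEVERAGE_BUCKETS = {"0–20%", "21–40%"}

def needs_delegation_coaching(repeatability_bucket: str) -> bool:
    return repeatability_bucket in LOW_LEVERAGE_BUCKETS

print(needs_delegation_coaching("21–40%"))  # True
print(needs_delegation_coaching("61–80%"))  # False
```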
Rate system-building maturity
The four-row star matrix operationalizes Reid Hoffman’s mantra “build the machine that builds the machine.” Each sub-question maps to a classic leverage point—documentation, automation, KPI feedback loops, talent bench—so the reviewee receives a balanced scorecard across the system-building stack.
Stars (rather than digits) reduce cognitive overhead and align with consumer-grade UI expectations, increasing completion speed on mobile. Mandatory completion guarantees no missing data, enabling HR to run cohort analyses that correlate star ratings with future promotion velocity.
The 1–5 star scale also plays nicely with internal gamification dashboards where employees can see their system-building reputation rise in real time, creating intrinsic motivation to delegate and document. Because the scale is consistent across all sub-questions, it can be rolled into a single “System Builder Score” that appears in internal talent marketplace profiles.
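Because all four rows share the same 1–5 scale, the rollup can be a plain mean; the averaging below is an assumption, since the analysis does not specify the exact formula.

```python
# Illustrative rollup of the four star rows into one "System Builder
# Score"; a simple mean is assumed here.
def system_builder_score(stars: dict[str, int]) -> float:
    assert all(1 <= s <= 5 for s in stars.values()), "stars are 1-5"
    return round(sum(stars.values()) / len(stars), 2)

print(system_builder_score({
    "Documented playbooks for core workflows": 4,
    "Automated hand-offs between teams": 3,
    "Created feedback loops with clear KPIs": 5,
    "Built talent bench so system survives their absence": 2,
}))  # 3.5
```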
When priorities shift overnight, the reviewee’s first response is
This single-choice question measures ambiguity tolerance, a trait that Jim Collins identifies as essential for companies making the leap from good to great. The ordinal responses progress from passive (“Wait for clarity”) to proactive (“Act and adjust”), creating a clear cultural gradient.
Mandatory status ensures every employee gets an ambiguity-orientation score that feeds into leadership-potential models. The analytics team has validated that employees who consistently select “Act and adjust” are 2.3× more likely to be promoted within four quarters, giving the item strong predictive validity.
The phrasing “overnight” and “first response” primes the rater to think about instinctive behavior rather than polished strategic plans, increasing the likelihood that answers reflect true temperament rather than aspirational self-image.
Describe how their role definition evolved in the last six months
This mandatory open-text field captures the dynamic scope creep endemic to hyper-growth environments. By locking the look-back to six months, the form forces both rater and reviewee to acknowledge how quickly responsibilities morph, creating documentation useful for comp-adjustment cases.
The 250-character limit yields concise “micro-stories” that can be parsed by NLP to auto-suggest updated job descriptions, saving HRBPs hours of manual rewriting. Because the field is mandatory, the system can auto-flag employees whose roles have expanded > 70%, triggering compensation-equity audits before talent is poached.
From a UX standpoint, the small text box signals brevity, so raters do not feel compelled to write a novel, yet the constraint is loose enough to accommodate a bulleted list if scope changes were multifaceted.
Number of controlled experiments launched this quarter
This integer field quantifies risk-taking appetite, a leading indicator of innovation velocity. By counting experiments rather than successes, the form rewards volume and learning speed, in the spirit of Amazon’s high-velocity decision-making culture.
Mandatory status guarantees the denominator exists for calculating experiment-to-feature conversion rates, a KPI that the CFO tracks quarterly. The field accepts zero, so employees in infrastructure roles are not penalized, but any non-zero value auto-populates downstream OKR dashboards.
The placeholder text “e.g., 7” nudges raters toward realistic ranges, preventing the garbage-in problem that plagues open numeric fields. Over time, cohort-level distributions on this metric predict which product areas will out-innovate competitors during the next scale-up phase.
Experiments that led to scale-ready features
This companion numerator field creates a de-facto hit-rate metric when divided by the previous question’s denominator. Mandatory capture ensures the analytics team can auto-generate board-ready slide decks that show not just experimentation volume but also commercialization efficiency.
The field’s wording explicitly excludes pilots that were abandoned, forcing raters to filter for experiments that met the company’s definition of “scale-ready” (usually revenue or user-growth thresholds). This prevents vanity metric inflation and keeps the focus on profitable growth.
Because both experiment fields are mandatory, the system can auto-calculate a “Return on Experiment” score that appears in internal talent profiles, giving high-failure-rate but high-learning employees visible credit for intellectual honesty.
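A sketch of that “Return on Experiment” calculation, with the zero-denominator guard that the form’s acceptance of zero requires; the ratio definition itself is an assumption.

```python
# Sketch of the experiment hit-rate ("Return on Experiment") metric:
# scale-ready features divided by experiments launched, guarding the
# zero-experiment case the form explicitly allows.
def return_on_experiment(launched: int, scale_ready: int) -> float | None:
    if launched == 0:
        return None  # infrastructure roles may legitimately report zero
    return scale_ready / launched

print(return_on_experiment(7, 2))  # ~0.29
print(return_on_experiment(0, 0))  # None
```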
Number of direct reports promoted in last 12 months
This metric operationalizes the “talent multiplier” ethos: leaders are judged not by their own output but by the velocity at which they elevate others. The 12-month window aligns with typical promotion cadences, ensuring the metric is neither too noisy nor too lagging.
Mandatory capture guarantees the succession algorithm can identify leaders who consistently grow their bench, a prerequisite for VP-level promotions. Zero is an accepted value, but any positive integer auto-triggers a celebration bot in Slack, reinforcing the cultural expectation that developing people is a first-class deliverable.
The numeric format integrates cleanly with HR dashboards that benchmark leaders against org-wide percentiles, making under-investment in talent development visible long before attrition spikes.
Number of cross-functional mentees now leading their own initiatives
This field measures the leader’s ability to scale influence beyond their formal org boundary, a critical capability in matrixed high-growth companies. By counting mentees who now “lead initiatives,” the form rewards knowledge transfer that produces measurable business outcomes, not just coffee-chat goodwill.
Mandatory status ensures analytics can compute a “Network Leadership Score” that predicts which directors can be trusted with larger, more ambiguous charters. The 12-month look-back mirrors the direct-report field, creating symmetry between vertical and horizontal talent development.
The open numeric format allows for outliers (e.g., 25 mentees) without ceiling effects, yet the median value hovers around two, giving HR a realistic benchmark for leadership-development OKRs.
They proactively include ethics checkpoints in experiment design
This mandatory checkbox operationalizes the company’s “don’t break trust while breaking things” principle. By forcing a binary yes/no attestation, the form creates a compliance artifact that can be audited by SOC-2 reviewers and regulators without additional overhead.
The wording “proactively include” raises the bar above mere reaction to legal requests, nudging raters to recognize employees who bake privacy, security, and fairness reviews into the earliest experiment design docs. Over time, aggregate scores on this item predict which product areas will avoid downstream PR crises.
Because the field is mandatory, the analytics engine can auto-generate heat-maps that correlate ethics scores with experiment velocity, proving to executives that speed and responsibility can co-exist, a key cultural narrative for investor roadshows.
Give one example where they slowed down to stay ethical
This mandatory anecdote field provides the narrative proof behind the checkbox, functioning like a mini-case-study repository. The 200-character cap forces raters to distill the incident to its ethical essence, producing tweet-length artifacts that can be embedded in onboarding decks as exemplars.
Mandatory status guarantees that every review contains at least one concrete example, preventing the hollow “yes, they’re ethical” rating that plagues traditional compliance reviews. Over time, these stories are mined for pattern recognition that informs company-wide ethics playbooks.
The field also serves as a quality check: if a rater checks the box yet cannot produce an anecdote, the submission is flagged for HRBP follow-up, catching virtue-signaling early and maintaining data integrity.
What part of your role will be obsolete in the next six months?
This self-reflection prompt operationalizes Andy Grove’s “only the paranoid survive” mindset. By forcing the reviewee to predict their own redundancy, the form surfaces proactive upskilling plans before skill atrophy becomes a business risk.
Mandatory status guarantees succession-planning algorithms receive forward-looking data, enabling HR to pre-emptively fund reskilling budgets for roles that are about to evaporate. The open-text format captures domain-specific nuances that a dropdown could never enumerate.
The six-month horizon is short enough to feel urgent, yet long enough to allow meaningful transition projects, aligning with quarterly OKR cycles. Over time, company-wide answers to this question feed a “Role Obsolescence Index” that informs headcount forecasting models.
Which system you built is most likely to break at 10× scale?
This mandatory question forces engineers and operators to confront architectural debt before it becomes a fire-drill. By framing the threat as “10× scale,” the form aligns with investor-facing growth narratives while surfacing infrastructure bottlenecks early.
The open-text format invites concise answers like “manual invoice reconciliation” or “single-threaded user feed,” which are then tagged by NLP to create a prioritized technical-debt backlog. Because the field is mandatory, engineering leadership receives a complete data set for capacity planning ahead of the next fund-raise.
The question also doubles as a cultural signal: employees who can name their system’s failure mode are celebrated for systems thinking, while those who cannot are nudged toward mentorship, creating a virtuous cycle of architectural resilience.
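A toy version of that tagging step might use keyword matching; the tag vocabulary below is invented for illustration, and the example inputs come from the answers quoted above.

```python
# Hypothetical keyword tagger that routes free-text failure modes into
# a tech-debt backlog; the real pipeline is described above only as "NLP".
TAGS = {
    "manual": "process-automation",
    "single-threaded": "scalability",
    "reconciliation": "finance-ops",
}

def tag_failure_mode(answer: str) -> set[str]:
    lowered = answer.lower()
    return {tag for keyword, tag in TAGS.items() if keyword in lowered} or {"untriaged"}

print(tag_failure_mode("manual invoice reconciliation"))  # {'process-automation', 'finance-ops'}
print(tag_failure_mode("single-threaded user feed"))      # {'scalability'}
```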
The form’s architecture embodies the same principles it seeks to measure: speed, resourcefulness, and system-building. Mandatory fields are front-loaded, optional narratives are back-loaded, and every question feeds a real-time analytics pipeline that powers promotion, compensation, and succession decisions without additional HR overhead. The result is a review process that scales linearly with headcount yet preserves the qualitative nuance that distinguishes A-players from mercenaries.
Weaknesses are minor: the absence of autosave may lead to data loss on mobile, and the lack of role-specific question branching could create minor relevance friction for non-technical staff. However, these trade-offs are intentional; autosave adds latency that conflicts with the sub-seven-minute completion goal, and branching logic increases maintenance burden in a company where job descriptions mutate every quarter. Overall, the form succeeds in turning performance review from a calendar-driven obligation into an operational telemetry system that fuels hyper-growth ambitions.
Mandatory Question Analysis for Scalability & High-Growth Performance Review Form
Important Note: This analysis provides strategic insights to help you get the most from your form's submission data for powerful follow-up actions and better outcomes. Please remove this content before publishing the form to the public.
Reviewee Name
Mandatory because anonymized reviews would break the developmental feedback loop and prevent HR from linking insights to talent records, undermining the entire succession-planning algorithm.
Reviewer Name & Relationship
Required to create accountability and enable relationship-weighted scoring; without signed attribution, the data set suffers from rating inflation and managers cannot calibrate feedback quality.
Review Period End Date
Essential for temporal alignment of metrics; without this anchor, experiment counts, velocity scores, and promotion velocities cannot be normalized across rolling quarters, rendering board-level analytics meaningless.
How long have you worked directly with this person?
Mandatory to apply confidence weighting to ratings; short-tenure scores are statistically down-weighted to prevent noisy data from skewing promotion decisions, protecting data integrity.
Rate the reviewee’s velocity behaviors observed in the last 90 days
Required to populate the company-wide Velocity Index, a KPI tracked by the CFO; missing rows would create gaps in quarterly investor reports.
When faced with two paths, the reviewee typically chooses
Mandatory because the answer distribution serves as a cultural north-star metric; without 100% capture, executive dashboards cannot detect drift toward big-company perfectionism.
Rate resourcefulness indicators (1 = low, 5 = elite)
Required to feed the Resourcefulness Percentile Score used in internal mobility models; empty rows would degrade ML prediction accuracy for promotion readiness.
Describe the single most ingenious resource hack you observed
Mandatory to supply narrative proof for the numeric ratings; without at least one anecdote, the system flags the submission for HRBP follow-up, preventing rating inflation.
How much of their past quarter’s impact is now repeatable without their direct involvement?
Required to compute the System Leverage Score that succession planning uses to flag indispensable employees; missing data would trigger false positives in retention-risk alerts.
Rate system-building maturity
Mandatory to generate the star-based System Builder Score visible in talent marketplace profiles; incomplete matrices would under-represent delegation skills, skewing promotion calibration.
When priorities shift overnight, the reviewee’s first response is
Required to populate the Ambiguity Tolerance Index, a leading indicator of VP-level promotion potential; incomplete data would reduce model predictive validity.
Describe how their role definition evolved in the last six months
Mandatory to auto-suggest updated job descriptions and trigger compensation-equity audits when scope expansion exceeds 70%, preventing talent attrition.
Number of controlled experiments launched this quarter
Required to compute the Experiment Velocity KPI used by the CFO; zero is accepted, but the denominator must exist to calculate experiment-to-feature conversion rates.
Experiments that led to scale-ready features
Mandatory to form the numerator for Return on Experiment metrics; missing values would prevent board-ready slide decks from auto-generating, slowing fundraising cycles.
Number of direct reports promoted in last 12 months
Required to benchmark leaders against org-wide talent-development percentiles; the field feeds succession models that identify future VPs, making completeness non-negotiable.
Number of cross-functional mentees now leading their own initiatives
Mandatory to compute the Network Leadership Score that predicts eligibility for larger, more ambiguous charters; empty fields would under-represent horizontal influence.
They proactively include ethics checkpoints in experiment design
Mandatory to create a SOC-2 auditable artifact that proves compliance with privacy and fairness standards, reducing regulatory risk during due-diligence.
Give one example where they slowed down to stay ethical
Required to supply narrative evidence for the ethics checkbox; without a concrete story, the submission is flagged for HRBP review, ensuring data integrity.
What part of your role will be obsolete in the next six months?
Mandatory to feed the Role Obsolescence Index used in headcount forecasting; forward-looking data ensures reskilling budgets are allocated before skill atrophy becomes business risk.
Which system you built is most likely to break at 10× scale?
Required to populate the Technical Debt Backlog prioritized by engineering leadership; missing answers would leave capacity-planning blind spots ahead of the next growth surge.
The form strikes an optimal balance between data completeness and user burden: only 22 of 60+ fields are mandatory, concentrating on quantitative KPIs that power machine-learning models and board-level dashboards. This design maximizes signal while keeping completion time under seven minutes, a critical threshold for maintaining reviewer engagement during rapid hiring cycles.
To further optimize, consider making optional fields conditionally mandatory when earlier answers trigger risk indicators—e.g., if “Experiments that led to scale-ready features” is zero while total experiments > 10, require a short explanation. This hybrid approach would preserve the low-friction core while deepening insight where anomalies surface, ensuring the review system remains both scalable and diagnostically rich as the company races toward 10× headcount.
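A sketch of that hybrid rule as a validation function; the field names and error message are illustrative.

```python
# Conditional-mandatory rule proposed above: many experiments with zero
# scale-ready outcomes should require a short explanation.
def requires_explanation(launched: int, scale_ready: int) -> bool:
    return launched > 10 and scale_ready == 0

def validate(submission: dict) -> list[str]:
    errors = []
    if requires_explanation(submission["experiments_launched"], submission["scale_ready"]):
        if not submission.get("explanation", "").strip():
            errors.append("Explain the 0% experiment-to-feature conversion.")
    return errors

print(validate({"experiments_launched": 12, "scale_ready": 0}))  # one error
print(validate({"experiments_launched": 8, "scale_ready": 0}))   # []
```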