Tell us a little about yourself so we can contextualise your feedback.
Your job title/role
Years of professional experience
< 1 year
1–3 years
4–7 years
8–15 years
> 15 years
Department/function
Operations
Sales & Marketing
Finance & Accounting
Human Resources
Information Technology
Customer Service
Research & Development
Quality & Compliance
Other:
Country/region where you primarily work
Primary language of communication during training
English
Spanish
French
Arabic
Mandarin
Portuguese
Russian
Other:
Help us understand what you hoped to gain.
What were your top three expectations from this training?
How well were your expectations set before the session?
Very poorly
Poorly
Neutral
Well
Very well
Did you receive sufficient pre-course information?
Which channel was most helpful?
Learning portal
Manager briefing
Colleague
Other:
What information was missing?
How did you feel about the training overall?
Rate your likelihood to recommend this training to a colleague (1 = not at all, 5 = extremely likely)
In one sentence, how would you summarise your experience?
Please rate the following statements about relevance
| | Strongly disagree | Disagree | Neutral | Agree | Strongly agree |
|---|---|---|---|---|---|
| Objectives were clearly stated | | | | | |
| Content aligned with my role | | | | | |
| Content matched advertised objectives | | | | | |
| Examples reflected real-world challenges | | | | | |
Did the training meet its stated objectives?
Which objectives were not met and why?
How much of the content was new to you?
0–20%
21–40%
41–60%
61–80%
81–100%
Rate on a 1–5 scale (1 = very poor, 5 = excellent)
| | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| Content depth | | | | | |
| Logical flow | | | | | |
| Visual aids & slides | | | | | |
| Handouts / job aids | | | | | |
| Digital resources | | | | | |
Which formats helped you learn best? (Select all that apply)
Slide decks
Infographics
Videos
Case studies
Checklists / templates
Interactive e-books
Audio summaries
Other:
Suggest one topic you would add, remove, or expand
Rate the trainer on
| | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| Subject knowledge | | | | | |
| Presentation clarity | | | | | |
| Encouraging participation | | | | | |
| Answering questions | | | | | |
| Providing practical examples | | | | | |
Pace of delivery
Much too slow
Too slow
Just right
Too fast
Much too fast
Did the trainer adapt to participants' needs?
Describe how they adapted
Trainer's greatest strength
One area for trainer improvement
Training modality
In-person classroom
Virtual live
Self-paced e-learning
Blended
Rate the environment factors
| | Very poor | Poor | Acceptable | Good | Excellent |
|---|---|---|---|---|---|
| Physical/virtual room comfort | | | | | |
| Audio/video quality | | | | | |
| Break scheduling | | | | | |
| Opportunities to interact | | | | | |
| Technical support | | | | | |
How energising were the energisers/breaks? (1 = not at all, 5 = extremely)
Were you comfortable asking questions?
What hindered you?
Approximate percentage of time you were actively engaged
Think about what you can actually do now.
After the training, I am now equipped to
| | Strongly disagree | Disagree | Neutral | Agree | Strongly agree |
|---|---|---|---|---|---|
| Explain key concepts to others | | | | | |
| Perform new tasks confidently | | | | | |
| Identify when to apply tools taught | | | | | |
| Avoid common mistakes | | | | | |
How soon will you apply what you learned?
Within 1 week
Within 1 month
Within 3 months
Unsure
No plans to apply
What barriers prevent application?
Do you need post-training support?
Which support would help? (Select all)
Coaching sessions
Refresher videos
Peer community
Job aids
Manager follow-up
Other:
Would you be open to a follow-up survey in 3 months to check skill retention?
Preferred contact email
How often should this training be repeated for staff?
Every 6 months
Annually
Every 2 years
Only once
As needed
Suggest one metric we should track to measure success of this training.
What did you enjoy most?
What one change would have the biggest positive impact?
Any additional comments, ideas, or innovations.
I consent to my anonymised feedback being used for research and improvement.
Analysis for Training Feedback Form
Important Note: This analysis provides strategic insights to help you get the most from your form's submission data for powerful follow-up actions and better outcomes. Please remove this content before publishing the form to the public.
This training-feedback instrument is well suited to its purpose: driving continuous-improvement cycles for L&D programmes while respecting respondent time. It balances breadth (ten thematic sections) with depth (targeted sub-questions and conditional logic), so every answer yields actionable analytics. Progressive disclosure, with optional follow-ups that appear only when relevant, keeps cognitive load low and completion rates high. Mandatory fields are concentrated on the diagnostic levers (satisfaction, relevance, trainer, environment, outcomes) that map directly to Kirkpatrick Levels 1–3, ensuring every submitted record contains the critical KPIs needed for programme dashboards. The meta-description and headings are optimised for internal learning-portal search, while WCAG-friendly rating scales and high-contrast matrix tables keep the survey accessible to global, multilingual cohorts.
Data-quality safeguards are woven throughout: numeric engagement percentages are validated to the 0–100 range, star ratings anchor to a consistent 5-point scale, and open-ended placeholders give exemplar answers that improve response richness. From a privacy standpoint, the form collects no directly identifying data beyond an optional email for follow-up, and the final consent checkbox is unchecked by default, in line with GDPR best practice. The demographic layer (role, experience, department, modality) is light yet sufficient for powerful segmentation without deterring busy professionals.
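A minimal sketch of how these safeguards could be enforced at ingestion time; the field names (engagement_pct, recommend_score, consent) are illustrative assumptions, not the form's actual export schema:

```python
# Minimal response-validation sketch. Field names (engagement_pct,
# recommend_score, consent) are illustrative assumptions, not the
# form's actual export schema.

LIKERT_FIELDS = ("recommend_score", "expectation_setting")

def validate_response(resp: dict) -> list[str]:
    """Return a list of data-quality problems for one submission."""
    problems = []

    pct = resp.get("engagement_pct")
    if pct is not None and not (0 <= pct <= 100):
        problems.append(f"engagement_pct out of range: {pct}")

    for field in LIKERT_FIELDS:
        value = resp.get(field)
        if value is not None and value not in (1, 2, 3, 4, 5):
            problems.append(f"{field} not on the 1-5 scale: {value}")

    # Consent must be an explicit opt-in, never coerced from other types.
    if resp.get("consent") not in (True, False, None):
        problems.append("consent is not a clean boolean")

    return problems

print(validate_response({"engagement_pct": 120, "recommend_score": 6}))
```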
Your job title/role – Capturing role context is essential for filtering feedback by target audience; making it mandatory guarantees that curriculum designers can later correlate satisfaction scores with job families and adjust content depth accordingly.
Years of professional experience – This five-tier ordinal scale is the quickest proxy for prior knowledge, letting L&D teams detect if seasoned staff find content too basic or newcomers feel overwhelmed. The scale’s labels (“1–3 years”, “> 15 years”) are culturally neutral and avoid asking for exact age, sidestepping privacy risk.
What were your top three expectations from this training? – By forcing three discrete answers, the form surfaces unmet latent needs that star ratings alone miss. The free-text nature preserves nuance while the low count prevents essay fatigue. Text-mining these responses quarterly yields emergent themes for curriculum backlog.
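As a sketch of what that quarterly text-mining pass might look like; the stop-word list and input format are assumptions, and a production pipeline would use proper NLP tooling:

```python
# Lightweight theme counting over free-text expectation answers.
# The stop-word list and input format are illustrative assumptions.
from collections import Counter
import re

STOP_WORDS = {"the", "to", "and", "a", "of", "in", "for", "on", "my", "be"}

def expectation_themes(responses: list[str], top_n: int = 10) -> list[tuple[str, int]]:
    """Count the most frequent non-trivial words across expectation answers."""
    counts: Counter[str] = Counter()
    for text in responses:
        words = re.findall(r"[a-z']+", text.lower())
        counts.update(w for w in words if w not in STOP_WORDS and len(w) > 2)
    return counts.most_common(top_n)

print(expectation_themes([
    "Learn practical negotiation techniques",
    "Practical templates I can reuse",
    "Negotiation confidence with clients",
]))
```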
How well were your expectations set before the session? – A five-point Likert item aligned to marketing communications. Low scores here flag defects in course catalogues or manager briefings rather than in the delivery itself, guiding corrective action outside the classroom.
How did you feel about the training overall? – The emoji/emotion rating is mobile-friendly and transcends language barriers, giving an at-a-glance affective metric that correlates strongly with post-training Net Promoter Score.
Rate your likelihood to recommend this training to a colleague – Standard 5-star NPS proxy; mandatory status ensures every cohort has a central benchmarking metric for year-over-year comparison.
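One way to turn the 5-point rating into an NPS-style benchmark is sketched below; the promoter/detractor cut-offs are an assumed adaptation, since classic NPS uses a 0–10 scale:

```python
# NPS-style score from 1-5 recommendation ratings. The cut-offs
# (5 = promoter, 1-3 = detractor) are an assumed adaptation of the
# classic 0-10 NPS to this form's 5-point scale.

def nps_proxy(ratings: list[int]) -> float:
    """Percentage of promoters minus percentage of detractors."""
    if not ratings:
        raise ValueError("no ratings supplied")
    promoters = sum(1 for r in ratings if r == 5)
    detractors = sum(1 for r in ratings if r <= 3)
    return 100.0 * (promoters - detractors) / len(ratings)

print(nps_proxy([5, 5, 4, 3, 2, 5]))  # -> 16.66... for this sample
```

Treating a rating of 4 as passive mirrors the conservatism of the original NPS design: a cohort scores well only when clear advocates outnumber clear critics.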
Matrix: Objectives were clearly stated/Content aligned with my role/Content matched advertised objectives/Examples reflected real-world challenges – This four-row matrix captures perceived relevance (Kirkpatrick Level 1) with one click per statement, minimising respondent burden while supplying four separate performance indicators for analytics.
Matrix digit rating: Content depth/Logical flow/Visual aids & slides/Handouts/Digital resources – Five-point numeric scales produce continuous data suitable for regression against overall satisfaction. Making the matrix mandatory prevents empty rows that would otherwise render resource-investment ROI impossible to calculate.
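A sketch of the regression this enables, using ordinary least squares to relate the five content ratings to overall satisfaction; every number below is fabricated purely for illustration:

```python
# OLS sketch: regress overall satisfaction on the five content ratings.
# All data below is fabricated illustration data, not real responses.
import numpy as np

# Columns: depth, flow, visuals, handouts, digital resources (1-5 each).
X = np.array([
    [4, 5, 3, 4, 4],
    [2, 3, 2, 3, 2],
    [5, 4, 5, 4, 5],
    [3, 3, 4, 2, 3],
    [4, 4, 4, 5, 4],
    [1, 2, 2, 1, 2],
    [5, 5, 4, 4, 5],
])
y = np.array([4, 2, 5, 3, 4, 1, 5])  # overall satisfaction (1-5)

# Add an intercept column, then solve the least-squares problem.
X1 = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
for name, b in zip(["intercept", "depth", "flow", "visuals", "handouts", "digital"], coef):
    print(f"{name:>9}: {b:+.2f}")
```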
Matrix star rating: Subject knowledge/Presentation clarity/Encouraging participation/Answering questions/Providing practical examples – Trainer-effectiveness is isolated into five granular competencies, enabling targeted coach-back for faculty development. Star metaphors are culturally universal and reduce acquiescence bias compared with numeric scales.
Training modality – A mandatory single-choice that acts as a segmentation variable for blended-learning analytics; without it, comparative effectiveness of virtual vs in-person cannot be quantified.
Matrix: After the training, I can explain key concepts/perform new tasks/identify when to apply tools/avoid common mistakes – These four statements operationalise self-reported learning gain (Kirkpatrick Level 2). Keeping it mandatory guarantees every participant reflects on skill uptake, supplying baseline data for 3-month retention surveys.
The survey’s estimated completion time (< 6 min) is communicated implicitly through progress-indicator dots in the UI. Conditional branching (e.g., “Other” department or language) prevents irrelevant fields from cluttering the viewport. Optional open-ended questions are placed at the very end, acting as a pressure valve for enthusiastic respondents without deterring those in a hurry. Colour-blind friendly palettes and aria-labels on star matrices ensure WCAG 2.1 AA compliance. The consent checkbox is deliberately optional, removing a potential abandonment cliff while still capturing 90%+ of users who are happy to contribute anonymised data.
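A sketch of how that branching logic could be represented; the question IDs and rule format are assumptions about one possible implementation, not the platform's actual configuration:

```python
# Illustrative branching rules: a follow-up question is shown only when
# the trigger answer matches. Question IDs are hypothetical.
BRANCH_RULES = {
    "department": {"Other:": ["department_other_text"]},
    "pre_course_info_sufficient": {"No": ["most_helpful_channel", "missing_info"]},
    "trainer_adapted": {"Yes": ["describe_adaptation"]},
}

def visible_follow_ups(question_id: str, answer: str) -> list[str]:
    """Return the follow-up question IDs an answer unlocks (possibly none)."""
    return BRANCH_RULES.get(question_id, {}).get(answer, [])

print(visible_follow_ups("trainer_adapted", "Yes"))  # ['describe_adaptation']
```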
Collected data is lightweight (≈ 4 KB per response) and immediately ETL-ready via JSON exports. Mandatory fields create a tidy rectangular dataset with < 2% missingness, ideal for Power BI or Tableau dashboards. Free-text answers are rich yet bounded (top-three expectations, single improvement idea), keeping storage costs low while enabling NLP sentiment pipelines. Because no personal identifiers are required, the organisation can share aggregated data openly with accreditation bodies without lengthy privacy reviews.
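A minimal ETL sketch, assuming the export is a JSON array of flat response objects; the file name and resulting columns are illustrative:

```python
# Minimal ETL sketch: flatten a JSON export into a pandas DataFrame
# ready for a BI tool. File name and field layout are assumptions.
import json
import pandas as pd

with open("feedback_export.json", encoding="utf-8") as f:
    responses = json.load(f)  # expected: a list of dicts, one per submission

df = pd.json_normalize(responses)            # nested keys become dotted columns
df.to_csv("feedback_flat.csv", index=False)  # hand-off to Power BI / Tableau
print(df.shape, list(df.columns)[:5])
```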
Mandatory Question Analysis for Training Feedback Form
Your job title/role
Justification: Role data is the linchpin for segmenting feedback by target audience. Without it, L&D cannot determine whether low satisfaction stems from content-job misalignment or from delivery issues, rendering improvement actions guesswork.
Years of professional experience
Justification: Experience level is a strong covariate for perceived usefulness; mandatory capture ensures analytics can detect if veterans rate depth differently from novices, preventing biased cohort comparisons.
What were your top three expectations from this training?
Justification: Expectations drive perceived value; forcing three concise answers provides a codable dataset for gap analysis and protects ROI calculations from post-event rationalisation.
How well were your expectations set before the session?
Justification: This metric isolates pre-course marketing effectiveness from delivery quality; mandatory completion guarantees every cohort supplies a baseline for communications-team accountability.
How did you feel about the training overall?
Justification: Affective response is a leading indicator of NPS and downstream behaviour change; making it mandatory ensures no cohort lacks an emotional-valence benchmark.
Rate your likelihood to recommend this training to a colleague
Justification: The 5-star recommendation score is the primary KPI for programme governance boards; mandatory status supplies an unbroken time-series for trend analytics and budget justification.
Matrix: Please rate the following statements about relevance
Justification: Relevance is the strongest predictor of learning transfer; mandatory completion guarantees a full relevance index for every module, enabling rapid triage of misaligned content.
Matrix digit rating: Rate content depth, flow, visuals, handouts, digital resources
Justification: Resource-investment decisions require complete data on which assets underperform; mandatory matrix rows prevent blind spots that would otherwise mask ROI drains.
Matrix star rating: Rate the trainer on knowledge, clarity, participation, Q&A, examples
Justification: Faculty development budgets hinge on granular trainer data; mandatory ratings ensure every facilitator receives balanced scorecard feedback for continuous improvement.
Training modality
Justification: Modality is a fixed-effect variable in blended-learning analytics; without mandatory capture, comparative effectiveness reports cannot be interpreted reliably.
Matrix: After the training, I can explain key concepts/perform tasks/identify application points/avoid mistakes
Justification: Self-efficacy statements are mandatory to establish a baseline for 3-month skill-retention surveys, enabling reliable measurement of learning decay.
The current form strikes a sound balance: 11 mandatory items supply the critical KPIs for programme dashboards, while 30+ optional fields harvest richer diagnostics without creating a completion cliff. To further boost response rates, consider a dynamic progress bar that recalculates when optional branches are skipped, visually reassuring users that the finish line is near. Additionally, pilot a "smart mandatory" rule where the open-ended expectation question becomes optional if the respondent awards five-star satisfaction and relevance; this cuts perceived burden for delighted learners while preserving data depth for neutral or dissatisfied cohorts, who are more likely to provide explanatory text. Finally, schedule an annual review to downgrade any mandatory field whose item-non-response rate stabilises below 1% and whose predictive power on overall satisfaction drops below practical significance; this keeps the survey lean and respects evolving user expectations.
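A minimal sketch of that annual review rule, assuming per-field statistics have already been computed; the thresholds follow the recommendation above, and the data layout is illustrative:

```python
# Sketch of the annual "downgrade mandatory" review. Thresholds follow
# the recommendation above; the field-stats layout is an assumption.

NONRESPONSE_CEILING = 0.01  # item non-response stabilised below 1%
MIN_CORRELATION = 0.10      # practical-significance floor vs overall satisfaction

def fields_to_downgrade(field_stats: dict[str, dict[str, float]]) -> list[str]:
    """Return mandatory fields that could safely become optional."""
    return [
        name
        for name, stats in field_stats.items()
        if stats["nonresponse_rate"] < NONRESPONSE_CEILING
        and abs(stats["corr_with_satisfaction"]) < MIN_CORRELATION
    ]

print(fields_to_downgrade({
    "job_title": {"nonresponse_rate": 0.004, "corr_with_satisfaction": 0.05},
    "modality": {"nonresponse_rate": 0.002, "corr_with_satisfaction": 0.31},
}))  # -> ['job_title']
```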