This form captures evidence of your growth as a young designer-engineer. Answer honestly so we can celebrate strengths and target next steps.
Student first name
Student last name
Project nickname
Which broad problem area did you choose?
Accessibility & Assistive Tech
Sustainability & Climate
Transportation & Mobility
Entertainment & Game Design
Health & Well-being
Other:
In one sentence, what problem are you trying to solve?
Tell the story of your project from idea to final iteration. Use first-person ('I' or 'we').
Describe how you identified user needs (empathy phase).
How many different concept sketches did you create before choosing one?
What criteria did you use to pick your best concept?
Did you build a quick cardboard/paper mock-up before making the real prototype?
What did the mock-up reveal that CAD or sketches could not?
Why did you skip the mock-up, and will you change that next time?
How many formal iterations (major changes) did your prototype go through?
Describe one memorable failure and how it guided your next iteration.
Rate your comfort level with receiving critical feedback.
Very uncomfortable
Uncomfortable
Neutral
Comfortable
Very comfortable
Which iteration are you most proud of and why?
Safe habits protect you, your peers, and the equipment. Reflect truthfully.
Which personal protective equipment (PPE) did you use? (Select all that apply)
Safety glasses
Closed-toe shoes
Heat-resistant gloves
Dust mask/respirator
Ear protection
None
Did you perform a risk assessment before starting each new tool or material?
List one hazard you identified and the control measure you applied:
What prevented you from doing a risk assessment, and how can you build that habit?
How did you store sharp tools when not in use?
In designated rack/sheath
On bench but away from edge
Left on bench unattended
Other
Did you ever work alone in the workshop?
Explain the circumstances and what safety steps you took:
If you used hot tools (glue gun, soldering iron), how did you secure the cord?
Used cord shortener/clip
Taped cord to bench
Hung cord over edge
No special action
Did you report any broken or malfunctioning equipment?
What was the equipment and issue?
Why was the issue not reported, and what could go wrong if others use it?
On a 1–5 scale, how confident are you that you can identify when a tool is unsafe?
Good designers choose materials wisely, considering function, cost, and environmental impact.
List each primary material you used and its properties
| # | Material name | Form (sheet, rod, filament...) | Source (new, recycled, reclaimed) | Approx. cost per unit | Quantity used | Key property (light, strong, flexible...) |
|---|---|---|---|---|---|---|
| 1 | Corrugated cardboard | sheet | recycled | $0.50 | 3 | easy to laser-cut |
| 2 | | | | | | |
| 3 | | | | | | |
| 4 | | | | | | |
| 5 | | | | | | |
| 6 | | | | | | |
| 7 | | | | | | |
| 8 | | | | | | |
| 9 | | | | | | |
| 10 | | | | | | |
Which digital fabrication tools did you use? (Select all)
3-D printer
Laser cutter
CNC router/mill
Vinyl cutter
None
Did you calculate an approximate carbon footprint or waste mass for your project?
Summarise your findings:
What information would you need to perform that calculation next time?
How will you dispose of or repurpose leftover material?
Return to stock
Recycle via school programme
Take home
Trash
Donate to art class
Not decided
Propose one way to make your project more sustainable without hurting performance:
Did you work in a team for any part of the project?
How many teammates (including you)?
Describe a moment when collaboration could have improved your outcome:
Rate your team on the following behaviours (1 = rarely, 5 = always)
| Behaviour | Rating (1–5) |
|---|---|
| Listened to all ideas before judging | |
| Shared workload evenly | |
| Gave constructive feedback | |
| Resolved conflicts respectfully | |
| Met agreed deadlines | |
What is one communication strategy your team used that others could adopt?
Upload a photo or screenshot of your team’s planning board (Trello, Miro, paper, etc.)
Engineers rely on data, not opinions. Show us your evidence.
State the testable hypothesis or success criterion for your final prototype.
Record results from each test
| # | Test name | Variable measured | Target value | Actual value | Pass/Fail | Notes or observations |
|---|---|---|---|---|---|---|
| 1 | Load test | max mass | 2 | 2.3 | Pass | Slight bending noticed |
| 2 | | | | | | |
| 3 | | | | | | |
| 4 | | | | | | |
| 5 | | | | | | |
| 6 | | | | | | |
| 7 | | | | | | |
| 8 | | | | | | |
| 9 | | | | | | |
| 10 | | | | | | |
Did you collect qualitative feedback from real users?
Summarise the most surprising comment:
What percentage of tests passed on the first attempt?
Which single test caused the biggest redesign, and what did you change?
Upload a short video (≤30 s) or image showing your testing setup
Growth happens when we pause and reflect. Be honest and specific.
How did you feel at each stage of the project?
| Stage | How you felt |
|---|---|
| At the start (ideation) | |
| During first build | |
| After first failure | |
| Final presentation | |
Rank these engineering habits from 1 (strongest) to 5 (needs work)
| Engineering habit | Rank |
|---|---|
| Creative brainstorming | |
| Time management | |
| Safe tool use | |
| Data-driven decisions | |
| Resilience after setbacks | |
What will you do differently in the next project?
Would you like to exhibit this project at the school Maker Faire?
Describe an interactive element visitors could try:
Set one measurable goal for your next design cycle:
Sign to confirm all information is accurate
Analysis for Middle School Design & Engineering Assessment Form
Important Note: This analysis provides strategic insights to help you get the most from your form's submission data for powerful follow-up actions and better outcomes. Please remove this content before publishing the form to the public.
This assessment brilliantly translates the engineering-design cycle into language and tasks that resonate with middle-school cognition. By scaffolding reflection from empathy through testing, the form makes abstract habits (iteration, resilience, data-driven decisions) concrete and observable. The mix of open, numeric, rating, and file-upload items captures both qualitative nuance and quantitative evidence, while the narrative prompts (“Describe one memorable failure…”) normalize productive failure—a core tenet of a Maker mindset. Safety and sustainability sections embed responsible innovation early, and conditional logic (mock-up yes/no, team yes/no) keeps cognitive load appropriate for 11–14-year-olds. The form’s visual hierarchy, friendly placeholders, and optional signature signal trust, encouraging honest self-assessment rather than compliance.
Minor opportunities for improvement remain: a progress indicator would reduce abandonment in longer sections, photo uploads for every iteration could enrich evidence, and pre-populated currency or units in the materials table would reduce input errors. Still, the instrument already balances depth with usability better than most middle-school rubrics.
Purpose: Accurate attribution of work to an individual student for grading, portfolio tracking, and parent communication.
Effective Design: Separating first and last names simplifies alphabetizing rosters and prevents comma confusion; single-line text keeps entry quick on mobile.
Data Implications: Collects PII, so storage must be FERPA-compliant; hyphenated and multi-word names fit the generous single-line field gracefully.
User Experience: Familiar micro-task; browser autocomplete reduces keystrokes. Error prevention could be enhanced with a regex that warns against numeric input (sketched below).
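That regex safeguard might look like the following minimal sketch; the function name and warning copy are illustrative assumptions, not any form platform's built-in API:

```python
import re

# Hypothetical sketch: warn (rather than block) when a name field contains
# digits; hard rejection would risk refusing legitimate unusual names.
def warn_if_numeric(name: str) -> str | None:
    """Return a gentle warning when the entered name contains a digit."""
    if re.search(r"\d", name):
        return "Names usually don't contain numbers. Please double-check."
    return None

print(warn_if_numeric("Ada3"))  # prints the warning string
print(warn_if_numeric("Ada"))   # prints None
```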
Purpose: Captures student agency in problem selection and allows teachers to audit curriculum coverage across sustainability, accessibility, etc.
Effective Design: Single-choice with an “Other” escape valve prevents forced mis-categorization while keeping analysis manageable.
Data Implications: Creates a clean categorical variable for quick pivot charts; follow-up text box for “Other” is optional, so novelty is preserved without cluttering analytics.
User Experience: Radio buttons are accessible and color-blind friendly; pairing with icons could further aid younger readers.
Purpose: Forces clarity and measurability—if a student can’t articulate the problem, the solution risks being a solution in search of a problem.
Effective Design: The one-sentence constraint mirrors professional design-brief practice and discourages scope creep.
Data Implications: Text can be NLP-analyzed for action verbs and beneficiaries, giving teachers rapid insight into student empathy framing (see the sketch after this block).
User Experience: Placeholder example (“People drop wet umbrellas…”) provides a concise template; character counter could reduce anxiety without removing the limit.
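A minimal sketch of that coding idea, using plain keyword heuristics as a stand-in for a full NLP pipeline; both word lists are illustrative assumptions:

```python
# Keyword-heuristic stand-in for real NLP: check whether a problem statement
# names an action and a beneficiary. The word lists are illustrative only.
ACTION_VERBS = {"reduce", "prevent", "help", "improve", "protect", "organize"}
BENEFICIARY_CUES = {"people", "students", "users", "seniors", "classmates", "riders"}

def frame_check(statement: str) -> dict:
    words = {w.strip(".,!?").lower() for w in statement.split()}
    return {
        "has_action_verb": bool(words & ACTION_VERBS),
        "names_beneficiary": bool(words & BENEFICIARY_CUES),
    }

print(frame_check("Help seniors carry wet umbrellas onto the bus safely."))
# -> {'has_action_verb': True, 'names_beneficiary': True}
```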
Purpose: Documents evidence of human-centered design, distinguishing true user empathy from solution-first guessing.
Effective Design: Open multiline invites storytelling; prompt cues (interviewed, observed, counted) scaffold recall for novices.
Data Implications: Rich qualitative data enables rubric scoring on empathy depth and can be blind-scored with reasonable inter-rater agreement.
User Experience: First-person framing (“I…”) keeps voice authentic and reduces plagiarism temptation.
Purpose: Quantifies ideation fluency; research shows correlation between sketch count and final solution novelty.
Effective Design: Numeric input auto-validates positive integers; small width on mobile keeps focus.
Data Implications: Creates a discrete variable that can be plotted against iteration count or final grade to surface patterns (see the sketch after this block).
User Experience: Immediate feedback could celebrate higher numbers (“Great divergent thinking!”) to reinforce process over speed.
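As a sketch of that analysis, assuming hypothetical column names and toy numbers for the exported submissions:

```python
import pandas as pd

# Toy data with assumed column names; the point is simply to test whether
# sketch count tracks iteration count (or grade) across the class.
df = pd.DataFrame({
    "concept_sketches": [3, 8, 5, 12, 2],
    "iterations":       [1, 4, 2, 5, 1],
})
print(df["concept_sketches"].corr(df["iterations"]))  # Pearson r, ~0.98 here
```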
Purpose: Reveals student understanding of trade-offs (cost, time, material) and decision-matrix thinking.
Effective Design: Multiline allows bulleted or prose format; placeholder examples model SMART-style criteria.
Data Implications: Responses can be coded for presence of constraints vs. objectives, giving insight into systems thinking.
User Experience: Optional rubric hint link could guide weaker writers without prescribing content.
Purpose: Measures iterative mindset; higher counts typically correlate with robustness and learning velocity.
Effective Design: Numeric input; definition of “formal” is clarified in parentheses, reducing variance in interpretation.
Data Implications: Longitudinal data can show growth from Grade 6 to 8 within the same student.
User Experience: A visual dial or slider might gamify entry, though a number box keeps accessibility high.
Purpose: Normalizes failure as data, directly supporting a growth mindset and resilience.
Effective Design: Narrative prompt invites storytelling, which is memorable and shareable during parent conferences.
Data Implications: Text mining can flag the absence of causal language (“because”, “so”) to identify students needing scaffolds (see the sketch after this block).
User Experience: Emotional safety is reinforced by framing failure as “memorable,” implying value rather than shame.
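A minimal sketch of that flag; the connective list is an illustrative assumption:

```python
# Illustrative check: flag a failure narrative that contains no causal
# connectives, suggesting the student wrote what happened but not why.
CAUSAL_MARKERS = ("because", "so ", "therefore", "since ", "as a result", "which meant")

def needs_causal_scaffold(narrative: str) -> bool:
    text = narrative.lower()
    return not any(marker in text for marker in CAUSAL_MARKERS)

print(needs_causal_scaffold("The arm snapped. I rebuilt it."))                   # True
print(needs_causal_scaffold("The arm snapped because the joint was too thin."))  # False
```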
Purpose: Distinguishes engineers from hobbyists—quantified success metrics enable objective evaluation.
Effective Design: Placeholder models a rigorous, measurable statement with units; mandatory status ensures every student practices the skill.
Data Implications: Creates a binary rubric check for the presence of a metric, numeric target, and unit; it can be machine-graded (see the sketch after this block).
User Experience: Sentence-stem template (“The … will … when …”) could lower writing load without reducing rigor.
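That machine-gradeable check could be sketched as follows; the regex and unit list are assumptions tuned to typical middle-school projects:

```python
import re

# Sketch of a binary rubric check: does the hypothesis state a number, and
# does a unit follow that number? The unit list is an illustrative assumption.
UNITS = r"(kg|g|cm|mm|m|s|N|°C|%)"

def rubric_check(hypothesis: str) -> dict:
    has_number = bool(re.search(r"\d+(\.\d+)?", hypothesis))
    has_unit = bool(re.search(rf"\d+(\.\d+)?\s*{UNITS}(?![A-Za-z])", hypothesis))
    return {"numeric_target": has_number, "unit_with_number": has_unit}

print(rubric_check("The bridge will hold 2 kg without bending more than 5 mm."))
# -> {'numeric_target': True, 'unit_with_number': True}
```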
Purpose: Encourages metacognition and transfers learning forward, closing the reflection loop.
Effective Design: Open question avoids prescriptive categories, honoring student autonomy.
Data Implications: Coded responses reveal class-wide patterns (e.g., time-management mentions spike mid-year), informing curriculum tweaks.
User Experience: Positioned at the end, the question capitalizes on peak reflection after testing and emotion rating.
Mandatory Question Analysis for Middle School Design & Engineering Assessment Form
Important Note: This analysis provides strategic insights to help you get the most from your form's submission data for powerful follow-up actions and better outcomes. Please remove this content before publishing the form to the public.
Question: Student first name
Justification: Accurate student identification is foundational for gradebook entry, portfolio linkage, and FERPA-compliant record keeping. Without a first name, anonymized submissions create administrative overhead and risk mis-attribution during parent conferences or exhibition nights.
Question: Student last name
Justification: The last name ensures unique identification within grade-level rosters that often contain duplicate first names. It also supports alphabetical sorting for quick retrieval during formative feedback sessions and end-term reporting.
Question: Which broad problem area did you choose?
Justification: This categorical field is essential for curriculum analytics—teachers must verify that students are distributed across accessibility, sustainability, health, etc., to confirm balanced exposure to design contexts. It also enables targeted resource allocation (e.g., ordering assistive-tech materials) and is a key filter for school-wide STEM fair categories.
Question: In one sentence, what problem are you trying to solve?
Justification: A concise problem statement is the linchpin of the entire engineering cycle; it anchors subsequent ideation, criteria setting, and testing. Making it mandatory guarantees that every student practices distilling messy situations into actionable design challenges, a skill emphasized in NGSS engineering standards.
Question: Describe how you identified user needs (empathy phase)
Justification: Empathy evidence distinguishes human-centered design from solution-first tinkering. Requiring this response ensures that students document interviews, observations, or surveys, providing teachers with rubric data on empathy depth and guarding against “gadgeteering” projects that miss real users.
Question: How many different concept sketches did you create before choosing one?
Justification: A numeric count offers an objective proxy for ideation fluency, a core component of creativity metrics in middle-school engineering. Mandatory entry prevents students from skipping divergent thinking, reinforcing classroom norms that quantity of ideas precedes quality filtering.
Question: What criteria did you use to pick your best concept?
Justification: Explicit criteria document student reasoning and enable assessment of decision-matrix skills. Without a mandatory response, many students default to “it looked cool,” undermining instruction on trade-offs. The field provides data for rubric rows targeting constraint identification and objective evaluation.
Question: How many formal iterations (major changes) did your prototype go through?
Justification: Iteration count is a leading indicator of resilience and refinement, both emphasized in the Maker mindset. Making this mandatory counters the “one and done” mentality, supplying teachers with longitudinal data to show students that higher iteration correlates with stronger final performance.
Question: Describe one memorable failure and how it guided your next iteration
Justification: Requiring a failure narrative normalizes productive failure and supplies evidence for affective rubric rows on growth mindset. It also guards against students retroactively claiming perfect execution, ensuring authentic reflection that can be shared during exhibitions to destigmatize mistakes.
Question: State the testable hypothesis or success criterion for your final prototype
Justification: A quantified hypothesis is the cornerstone of data-driven engineering. Making this mandatory ensures every student leaves the course having practiced writing objective, measurable success metrics, aligning with NGSS practice “Planning and Carrying Out Investigations.”
Question: What will you do differently in the next project?
Justification: This forward-looking reflection closes the learning loop and provides teachers with actionable feedback on instructional gaps (e.g., repeated mentions of poor time management signal need for scaffolding). Mandatory status guarantees that even reluctant reflectors engage in metacognition, supporting transfer to future design cycles.
The current set of 11 mandatory fields strikes an effective balance, covering identity, problem framing, process evidence, and reflection: the elements indispensable for demonstrating mastery of the engineering cycle. To optimize completion rates without sacrificing data quality, consider surfacing a progress bar and autosaving after each section. Additionally, you can convert some optional numeric fields (e.g., test pass percentage) to conditionally mandatory when a student indicates they performed tests, ensuring contextual rigor. Finally, provide a “Why required?” micro-help icon next to each mandatory asterisk; transparency increases student buy-in and reduces perception of arbitrary compliance.