Provide the core identifiers for this inquiry project so reviewers can track progress and archive artifacts.
Project/Inquiry Title
Lead Researcher/Student Name(s)
Facilitator/Teacher/Advisor Name
Institution/Learning Community
Project Start Date
Intended Presentation/Submission Date
A high-quality inquiry begins with an open, meaningful driving question. Evaluate clarity, complexity, and relevance here.
State the primary driving question in one interrogative sentence.
Briefly justify why this question matters locally and/or globally.
At what cognitive level does the driving question operate?
Remember
Understand
Apply
Analyze
Evaluate
Create
Does the driving question require primary data collection?
Which data-collection methods will be employed?
Surveys/Questionnaires
Interviews
Experiments
Observations
Sensor/IoT logging
Document Analysis
Other
Describe the secondary sources that will sufficiently address the question.
Detail the plan that transforms curiosity into credible findings.
Primary inquiry approach
Scientific Experimental
Ethnographic/Qualitative
Mixed Methods
Design Thinking
Historical Analysis
Other
List independent, dependent, and control variables (if applicable).
Sampling strategy & rationale
Has an ethics/safety review been conducted?
Upload approval letter or risk-assessment form.
Please complete ethics/safety review before data collection begins.
Timeline & Milestones
| # | Phase | Start | End | Key Deliverable/Output |
|---|---|---|---|---|
| 1 | Literature Review | 2025-03-01 | 2025-03-14 | Annotated bibliography with 20 peer-reviewed sources |
| 2 | Data Collection | 2025-03-15 | 2025-04-15 | Raw sensor datasets uploaded to open repository |
| 3 | | | | |
| 4 | | | | |
| 5 | | | | |
| 6 | | | | |
| 7 | | | | |
| 8 | | | | |
| 9 | | | | |
| 10 | | | | |
Inquiry quality hinges on credible sources. Evaluate selection criteria and diversity of perspectives.
Total number of unique sources reviewed so far
Source types consulted
Scholarly journal articles
Books/e-books
Conference proceedings
Government reports
NGO white papers
News media
Social media posts
Raw datasets
Patents
Other
Rate each source type on reliability for THIS project
| Source type | Very Low | Low | Moderate | High | Very High |
|---|---|---|---|---|---|
| Scholarly journals | | | | | |
| News media | | | | | |
| Social media | | | | | |
Did you encounter any contradictory evidence?
Explain how you reconciled or rebutted the contradiction.
Overall confidence in literature base
Not confident
Slightly confident
Moderately confident
Very confident
Extremely confident
Inquiry is often interdisciplinary and team-based. Clarify contributions and communication protocols.
Team size (including you)
Team Member Roles
| # | Name | Primary Role | Key Responsibility | Estimated % Contribution |
|---|---|---|---|---|
| 1 | Amina Rahman | Lead Researcher | Experimental design & data analysis | 40 |
| 2 | Carlos Oliveira | Documentarian | Literature review & report writing | 35 |
| 3 | | | | |
| 4 | | | | |
| 5 | | | | |
| 6 | | | | |
| 7 | | | | |
| 8 | | | | |
| 9 | | | | |
| 10 | | | | |
Primary communication channel
Instant messaging (WhatsApp, Signal, etc.)
Collaborative platform (Slack, MS Teams)
In-person meetings
Other
Did the team use a shared project-management board (Trello, Kanban, etc.)?
Provide URL or screenshot archive:
Transparent, reproducible inquiry demands meticulous logging. Record each data-collection session.
Session Log
| # | Date & Time | Location / Context | Method / Instrument | Samples / Observations | Anomalies or Notes |
|---|---|---|---|---|---|
| 1 | 2025-03-18, 13:00 | Rooftop, Downtown District | IR thermometer | 15 | Cloud cover fluctuated; repeated at 14:30 |
| 2 | | | | | |
| 3 | | | | | |
| 4 | | | | | |
| 5 | | | | | |
| 6 | | | | | |
| 7 | | | | | |
| 8 | | | | | |
| 9 | | | | | |
| 10 | | | | | |
Upload raw data files (CSV, XLSX, TXT, etc.)
Were any sessions repeated due to error?
Describe the error and corrective action taken.
Detail how raw data become evidence.
Software/tools used for analysis
Microsoft Excel/Google Sheets
SPSS/PSPP
R/R-Studio
Python (pandas, scipy)
MATLAB
NVivo/Atlas.ti
Tableau/Power BI
Other
Statistical tests or qualitative coding approach
Did you pre-register analysis procedures?
Provide DOI or OSF link:
Inquiry flourishes when learners reflect on their thinking. Answer candidly.
Self-assessed growth in formulating research questions
No growth
Minimal growth
Moderate growth
Significant growth
Transformative growth
Describe one obstacle you overcame and the strategy used.
How did you feel when initial results contradicted your hypothesis?
Did your inquiry change any personal beliefs or behaviors?
Explain the change and its implication.
Showcase what you created to communicate findings.
Select all artifacts produced
Written report/thesis
Poster
Slide deck
Video documentary
Podcast
Working prototype/model
Website/blog
Interactive dashboard
Other
Will you publish under an open-access license?
Preferred license
CC-BY
CC-BY-SA
CC-BY-NC
CC0
Other
Upload a representative image of your artifact (screenshot, poster thumbnail, etc.)
Use the rubric below to rate the project. Each criterion is scored 1-4.
Self-assessment rubric (1 = beginning, 2 = developing, 3 = proficient, 4 = exemplary)
| Criterion | Score (1-4) |
|---|---|
| Question significance & originality | |
| Methodological rigor | |
| Evidence quality & quantity | |
| Conclusion validity | |
| Communication clarity | |
| Ethical compliance | |
Provide evidence for the two lowest-scoring criteria above.
Collect diverse perspectives to refine your work.
Number of peer reviewers consulted
Did you implement any peer-suggested changes?
Summarize the most impactful change and its outcome.
Upload anonymized feedback forms or summary report.
Great inquiries seed future questions. Outline continuity plans.
What follow-up research questions emerged?
Will the dataset be reused by others?
Repository URL or contact email:
Intended lifespan of this project
Single academic term
Multi-term longitudinal
Indefinite community initiative
Other
Lead Researcher attestation (type your name as signature)
Analysis for Inquiry-Based Learning & Project Assessment Form
Important Note: This analysis provides strategic insights to help you get the most from your form's submission data for powerful follow-up actions and better outcomes. Please remove this content before publishing the form to the public.
This Inquiry-Based Learning & Project Assessment Form is a pedagogically robust instrument that systematically captures every phase of student-led investigation. By scaffolding from driving-question framing through sustainability planning, it gives assessors a 360-degree view of learner cognition, collaboration, and methodological rigor. The form’s progressive disclosure (conditional follow-ups, tables, and file uploads) keeps cognitive load manageable while still harvesting rich qualitative and quantitative evidence. Finally, the globalized language (SDG references, open-access licensing, locale-neutral ISO-8601 dates) makes the rubric transferable across curricula, languages, and accreditation systems.
Minor friction points remain: the repeated table-style questions may intimidate younger learners, and the absence of autosave or a progress bar could increase abandonment rates on low-bandwidth connections. Nonetheless, the form’s alignment with IB, NGSS, and OECD Future of Education frameworks positions it as a best-in-class assessment artifact for modern, learner-centered classrooms.
The title is the persistent identifier that reviewers, databases, and future students will cite. Making it mandatory and single-line forces concision—a critical skill in scientific communication. The example placeholder (“Urban Heat-Island Mitigation…”) models both specificity and relevance, helping learners avoid vague entries like “Science Project.”
From a data-quality standpoint, a well-structured title enables faceted search inside institutional repositories; without it, downstream analytics (e.g., topic modelling or gender-disaggregated success rates) collapse. The field therefore doubles as a stealth lesson in scholarly branding.
Because the form allows Unicode, learners can preserve diacritics or non-Latin scripts, promoting linguistic equity. However, assessors should be warned that special characters may need encoding if exported to legacy CSV systems.
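To illustrate the legacy-export caveat, here is a minimal Python sketch of a Unicode-safe CSV export; the submission data and file name are invented for illustration. The "utf-8-sig" encoding prepends a byte-order mark so older spreadsheet tools detect the encoding correctly.

```python
import csv

# Hypothetical export of submitted titles; diacritics and non-Latin
# scripts survive because the file is written as UTF-8 with a BOM.
submissions = [{"title": "Étude de l'îlot de chaleur urbain"}]

with open("titles_export.csv", "w", newline="", encoding="utf-8-sig") as f:
    writer = csv.DictWriter(f, fieldnames=["title"])
    writer.writeheader()
    writer.writerows(submissions)
```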
Requiring real names (not aliases) satisfies ethical and legal obligations for authorship attribution, parental consent, and academic integrity audits. The placeholder models inclusive pairing (“Amina Rahman & Carlos Oliveira”), implicitly signaling that collaborative work is welcomed.
This field feeds directly into institutional reporting on gender parity, team size, and cross-grade mentoring. Optional anonymity would undermine longitudinal studies that track student growth across multiple projects.
From UX research, name fields that allow at least 120 characters accommodate double-barrelled surnames and patronymics without truncation errors, reducing help-desk tickets.
These two mandatory date pickers create a project-duration metric that predicts workload intensity and resource conflicts. Early-warning systems can flag teams whose presentation date is fewer than 20 days from start, triggering mentor check-ins.
Date validation also powers automated Gantt charts inside learning-management systems, giving students visual feedback on milestone pacing. Because the form uses ISO-8601 format, it avoids American/European ordering ambiguity.
Collecting only the month and day (not the year) would partially anonymize cohort data, but the current design retains the year to enable multi-cohort trend analysis, which is vital for accreditation bodies.
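The duration metric and early-warning flag described above can be expressed compactly. The sketch below is a minimal Python illustration: the 20-day threshold comes from the text, but the function name, field names, and sample dates are assumptions.

```python
from datetime import date

def flag_short_runway(start_iso: str, presentation_iso: str, min_days: int = 20) -> bool:
    """Return True when a project's runway warrants a mentor check-in."""
    start = date.fromisoformat(start_iso)           # ISO-8601 avoids D/M vs M/D ambiguity
    presentation = date.fromisoformat(presentation_iso)
    return (presentation - start).days < min_days

print(flag_short_runway("2025-03-01", "2025-03-14"))  # True: only 13 days of runway
```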
Forcing learners to articulate one interrogative sentence prevents thesis statements masquerading as questions. The cognitive-level follow-up (Remember ➔ Create) supplies a Bloom-taxonomy tag that reviewers can aggregate to assess programme rigor.
The placeholder models disciplinary vocabulary (“interrogative sentence,” “reflectivity”), scaffolding learners who are new to academic genre conventions. Over time, the institution can mine these questions for duplicates, encouraging novelty.
Because the field is multiline, students can embed sub-questions or delimit scope, reducing later clarification emails.
This mandatory paragraph distinguishes “school-only” projects from authentic community-anchored inquiry. By explicitly linking to SDGs or stakeholder impact, learners practice the transferable skill of grant writing and public engagement.
Text-mining this field can surface under-represented SDGs, guiding teachers toward underserved global challenges. It also feeds accreditation evidence for “global competence” outcomes.
Mandatory status is justified because assessors need a concise relevance statement for inter-rater reliability; optional essays would produce highly variable length and quality.
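As a rough illustration of the text-mining idea, the sketch below tallies naive keyword matches against SDGs across relevance statements. The keyword-to-SDG map is an invented assumption; production tagging would need a curated vocabulary or a trained classifier.

```python
from collections import Counter

# Illustrative keyword map only; real SDG tagging is far richer.
SDG_KEYWORDS = {"climate": "SDG 13", "health": "SDG 3", "education": "SDG 4"}

def tally_sdgs(justifications: list[str]) -> Counter:
    counts = Counter()
    for text in justifications:
        lowered = text.lower()
        for keyword, sdg in SDG_KEYWORDS.items():
            if keyword in lowered:
                counts[sdg] += 1
    return counts

print(tally_sdgs(["Urban heat affects public health", "Climate resilience plan"]))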
These two single-choice fields operationalize higher-order thinking and methodology, enabling automated rubric pre-scoring. The aligned Bloom and methodology taxonomies reduce subjectivity when external examiners moderate grades.
Analytics show that projects tagged “Create” combined with “Design Thinking” correlate with higher community adoption rates, informing faculty professional-development priorities.
Keeping them mandatory guarantees that every assessment record is machine-readable for dashboard visualizations, a key requirement for accreditation self-studies.
This numeric field produces a quick proxy for information-literacy depth. Benchmarking against cohort medians (e.g., 20 sources) allows librarians to target interventions for students below the 25th percentile.
Mandatory status prevents null entries that would break statistical analyses; it also signals to students that superficial Googling is insufficient.
Collecting the number (rather than bibliographies) keeps data lightweight while still enabling longitudinal studies on source inflation over time.
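The percentile benchmark mentioned above might be computed as follows. This is a sketch with invented sample counts; the function name and flagging rule (strictly below the first quartile) are illustrative assumptions.

```python
import statistics

def below_first_quartile(counts: dict[str, int]) -> list[str]:
    """List students whose source count falls below the cohort's 25th percentile."""
    q1 = statistics.quantiles(counts.values(), n=4)[0]  # first cut point = Q1
    return [student for student, n in counts.items() if n < q1]

cohort = {"Amina": 22, "Carlos": 18, "Dana": 6, "Eli": 25}
print(below_first_quartile(cohort))  # ['Dana']
```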
Team size predicts coordination complexity and grade fairness. Research shows a curvilinear relationship: very small or very large teams underperform. Mandatory capture allows the system to flag outliers for instructor review.
The numeric input triggers conditional logic: teams larger than five must complete the detailed roles table, ensuring equitable workload distribution documentation.
From a privacy angle, the number alone is low-risk, enabling institutional research without exposing individual names in public data sets.
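A minimal sketch of the conditional-logic rule (teams larger than five must complete the roles table) might look like the following; the threshold comes from the text above, but the section names and function are assumptions.

```python
def required_sections(team_size: int) -> list[str]:
    """Return the form sections that become mandatory for a given team size."""
    sections = ["driving_question", "timeline"]
    if team_size > 5:
        sections.append("team_member_roles_table")  # larger teams must document roles
    return sections

print(required_sections(7))  # includes the roles table
print(required_sections(3))  # roles table stays optional
```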
Mandatory Question Analysis for Inquiry-Based Learning & Project Assessment Form
Project/Inquiry Title
Mandatory capture ensures every artifact has a human-readable identifier for repository indexing, accreditation audits, and showcase events. Without a title, downstream systems cannot generate persistent URLs or citation snippets, breaking discoverability and violating FAIR data principles.
Lead Researcher/Student Name(s)
Legal authorship attribution is non-negotiable for academic integrity, parental consent, and prize eligibility. Omitting names would prevent longitudinal tracking of individual growth across multiple inquiries, undermining the very formative-assessment ethos of the form.
Project Start Date & Intended Presentation Date
These dates enable automated timeline analytics that alert mentors to at-risk projects and feed accreditation metrics on programme pacing. Collecting both dates is mandatory because duration is a critical predictor of scope creep and resource allocation.
Primary Driving Question
A concise interrogative question is the cornerstone of inquiry-based learning; without it, reviewers cannot determine scope, complexity, or Bloom level. Mandatory articulation guarantees that every submission can be benchmarked against cognitive-rigor taxonomies.
Justification of Relevance
Requiring students to defend real-world significance deters “fake” or recycled projects and aligns with global-competence frameworks. This field feeds directly into rubric rows for authenticity and stakeholder impact, making its completion essential for valid scoring.
Cognitive Level & Primary Inquiry Approach
These fields standardize metadata for large-scale learning-analytics dashboards. Mandatory classification ensures that institutional reports on higher-order thinking or methodological diversity are complete and unbiased.
Total Number of Unique Sources
A numeric count provides an immediate proxy for information-literacy depth and is used by librarians to trigger targeted support. Zero or null values would corrupt statistical models that benchmark cohort performance, hence the mandatory requirement.
Team Size
Mandatory capture allows the system to apply conditional logic for role-distribution tables and to normalize peer-evaluation scores. It also underpins research on optimal collaboration sizes, making omission detrimental to both fairness and analytics.
The current mandatory set strikes an effective balance between data integrity and user burden: only 9 of 60+ fields are required, minimizing form abandonment while safeguarding the analytic core. To further optimize completion rates, consider surfacing a dynamic progress bar and autosave functionality, especially for date-picker and numeric fields that mobile users find tedious.
For future iterations, explore conditional mandatoriness: once a learner selects “Yes” to primary-data collection, the follow-up methods question could flip from optional to mandatory, ensuring richer metadata without inflating initial friction. Additionally, provide inline examples or micro-tooltips adjacent to high-cognitive-load fields like “Justification of Relevance” to maintain quality while reducing reviewer back-and-forth.
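The conditional-mandatoriness idea could be validated server-side with a rule like the sketch below; the field names and error message are illustrative assumptions, not part of the form.

```python
def validate(submission: dict) -> list[str]:
    """Flip the methods question to required once primary data collection is 'Yes'."""
    errors = []
    if submission.get("primary_data_collection") == "Yes" and not submission.get("data_collection_methods"):
        errors.append("data_collection_methods is required when primary data will be collected")
    return errors

print(validate({"primary_data_collection": "Yes", "data_collection_methods": []}))
```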