AI Readiness & Governance Consultation Form

1. Organization Overview & Consultation Scope

This consultation is location-neutral and aligns with global AI-safety & privacy principles. Please answer accurately so we can tailor recommendations to your context.

 

Organization legal name

Preferred public name (if different)

Primary industry sector

Total workforce size (FTE)

Geographic footprint of AI use

Briefly describe your core products/services and value chain steps where AI could apply

What is the main driver for this consultation? (select all that apply)

Do you have an in-house AI/ML team?

 

How many AI/ML specialists?

 

Describe how AI projects are sourced (vendors, consultants, off-the-shelf tools)

Desired AI adoption maturity in 24 months

2. Current AI Footprint & Shadow-AI Discovery

Shadow-AI refers to unsanctioned use of generative-AI tools (e.g., public LLM chats, AI coding assistants) that may leak data or violate policies.

 

Have employees used public generative-AI tools for work tasks?

 

Which tools have been observed? (select all)

Do you maintain an inventory of approved & prohibited AI tools?

Have you experienced AI-related data-leak incidents or near-misses in the past 12 months?

 

Describe incident type, data involved, and resolution

Estimated % of workforce using shadow-AI

Business areas where shadow-AI is most likely (select all)

List currently deployed AI use-cases (approved or shadow)

Use-case name

Status

Business owner

AI technique (e.g., LLM, CV, Forecasting)

Data types used

Subject to governance review?

(Spreadsheet grid: ten blank rows, columns A–F corresponding to the six fields above.)

3. Strategy & Governance Structure

Is AI adoption mentioned in your corporate strategy or OKRs?

 

Summarize strategic objectives & KPIs

Do you have a cross-functional AI governance committee or equivalent?

 

Which roles are represented? (select all)

 

Who currently approves AI projects?

AI governance model

Have you adopted a responsible-AI or ethical-AI policy?

 

Please upload the policy (PDF/DOCX)


Do you conduct algorithmic impact assessments (AIA) before deployment?

 

Describe methodology & frequency

Rate current maturity of each governance component

How would you rate the following governance pillars?

Scale: Very Low · Low · Medium · High · Very High

Rows: Strategy alignment · Policy & standards · Roles & accountability · Risk management · Compliance & audit

4. Data & Model Risk Management

Data-related risks you are concerned about (select all)

Data classification standard used

Do you enforce data-minimization when fine-tuning models?

Are datasets documented with datasheets/model cards?

 

Describe repository & update cadence

Do you apply differential privacy or synthetic data generation to reduce re-identification risk?

 

Planned timeline to adopt?

Do you monitor for model drift or performance degradation in production?

 

Describe triggers & thresholds

List top 5 critical data assets used by AI

Dataset name

Sensitivity

Data subjects (e.g., customers, employees)

Cross-border transfer?

Encrypted at rest?

Backed up offline?

(Spreadsheet grid: five blank rows, columns A–F corresponding to the six fields above.)

5. Privacy, Security & Third-Party Exposure

Do you conduct Privacy Impact Assessments (PIA) for AI systems?

Have you mapped AI processing activities against global privacy principles (e.g., purpose limitation, data-subject rights)?

 

Describe biggest gaps

Security controls implemented (select all)

Do you rely on third-party cloud APIs (e.g., OpenAI, Anthropic, Google)?

 

Which contract clauses are in place? (select all)

Do you perform vendor security & privacy assessments?

 

Frequency (months)

Is AI usage covered in your cyber-insurance policy?

 

Describe exclusions or concerns

Confidence that third-party APIs do not store your queries for model improvement

6. Ethics, Bias & Societal Impact

Have you adopted an AI ethics charter or equivalent?

 

Upload document


Who is accountable for ethical outcomes?

Do you conduct fairness/bias testing before model release?

 

Specify protected attributes examined (e.g., gender, ethnicity, age)

Rate concern level for each ethical risk

Discrimination & bias

Disinformation generation

Human labor displacement

Environmental footprint

Dual-use/misuse potential

Lack of transparency

Do you provide human oversight (human-in-the-loop) for high-stakes decisions?

 

Explain rationale

Have you established red-team exercises to probe ethical failure modes?

 

Describe frequency & last findings

Describe any public commitments or ESG disclosures related to responsible AI

7. Regulatory & Standards Landscape

Which global AI guidelines/standards are you aligning with? (select all)

Are you subject to sector-specific AI regulations (e.g., medical devices, autonomous vehicles)?

 

Specify regulation & current compliance status

Do you maintain a living register of applicable AI regulations per jurisdiction?

 

Biggest compliance challenges

Anticipated timeline for mandatory AI regulation in your primary markets

Do you perform conformity assessments or readiness audits against upcoming regulations?

 

Upload latest audit summary


Overall confidence in sustaining future compliance

8. Human Capital & Culture

Have you defined AI literacy targets for different employee segments?

 

Describe target proficiency levels

Do you provide AI ethics & governance training?

 

% of workforce trained

Training formats used (select all)

Are incentives aligned to responsible-AI behavior (e.g., KPIs, performance reviews)?

 

Planned changes

Rate employee sentiment toward AI adoption

Scale: Strongly negative · Negative · Neutral · Positive · Strongly positive

Rows: Excitement about AI · Fear of job loss · Trust in leadership decisions · Understanding of guidelines · Willingness to re-skill

Do you maintain an internal AI community of practice or center of excellence?

 

Number of active members

Describe biggest cultural barriers to AI governance

9. Procurement & Supply Chain

Do you require AI vendors to complete algorithmic transparency questionnaires?

Do contracts prohibit vendors from using your data to improve their models?

 

Explain gap

Preferred AI sourcing model

Evaluation criteria for AI vendors include (select all)

Do you maintain a preferred vendor list for AI services?

 

List top 3 vendors & primary services

Have you negotiated right-to-audit clauses for high-risk AI suppliers?

 

Reasons

Top AI suppliers & risk profile

Vendor name

Service type

Criticality

Processes personal data?

Contractual AI clause?

Overall risk score (1–5)

(Spreadsheet grid: ten blank rows, columns A–F corresponding to the six fields above.)

10. Sustainability & Resource Footprint

Do you estimate carbon footprint of training or fine-tuning large models?

 

Provide latest estimate (kgCO₂e) & methodology

Do you prioritize edge or efficient architectures to reduce energy use?

 

Explain barriers

Preferred compute location

Do you have a green-IT or sustainable-AI policy?

 

Upload policy


Importance of energy transparency from AI vendors

11. Crisis & Incident Response

Is AI failure (e.g., hallucination, bias, security breach) covered in your incident-response plan?

 

Describe plan to integrate AI scenarios

Do you maintain an AI incident register?

 

Number of recorded incidents last 12 months

Mean time to detect AI-related incidents

Do you have pre-drafted external communications for AI incidents?

 

Biggest communication challenges

Have you run AI-specific crisis simulations/table-top exercises?

 

Last exercise date

Describe escalation path when an AI model causes harm

12. Budget & ROI Expectations

Total AI budget current fiscal year

Budget allocated to governance, risk & compliance (GRC)

Primary success metric for AI investments

Do you require ROI calculations before approving AI pilots?

 

Typical payback period (months)

Willingness to pay for responsible-AI assurance services

Any additional budget notes or constraints

13. Consultation Logistics & Next Steps

Your answers will inform a tailored AI Readiness Report & roadmap. All data is treated confidentially.

 

Primary contact full name

Job title

Business email

Contact phone (with country code)

Preferred consultation start date

Preferred engagement model

Deliverables of interest (select all)

Specific objectives or concerns not covered above

I consent to the collection and processing of data provided for the purpose of this consultation

I would like to receive future insights on AI governance

Signature

 

Analysis for AI Readiness & Governance Consultation Form

Important Note: This analysis provides strategic insights to help you get the most from your form's submission data for powerful follow-up actions and better outcomes. Please remove this content before publishing the form to the public.

Overall Form Strengths & Purpose Alignment

This consultation form is exceptionally well-architected to diagnose “Shadow-AI” exposure and organizational readiness across strategy, risk, ethics and compliance. By insisting on location-neutral language and globally recognized standards (ISO, NIST, OECD), it removes jurisdictional bias and lets consultants compare a Saudi fintech, a Brazilian manufacturer and a German insurer on the same maturity scale. The progressive disclosure pattern—single-choice → yes/no with conditional open-ended follow-ups—keeps cognitive load low while still surfacing rich qualitative data. Mandatory fields are concentrated in the first two sections, ensuring that even if a respondent abandons the form midway, the consultant retains enough signal to triage urgency and tailor the first workshop.

 

The form’s greatest strength is its systems-thinking approach: it links business context (workforce size, industry, value-chain steps) to technical footprint (models, data assets, APIs) and then to governance artefacts (policies, committees, PIAs). This creates a single source of truth that can be fed into a client-specific maturity model or benchmarked against the NIST AI RMF. The repeated use of yes/no bifurcations with mandatory free-text “why” boxes forces respondents to surface assumptions rather than mere scores, which is invaluable for consultants who need narrative evidence for board presentations.

 

From a data-quality perspective, the form is future-proof: numeric fields (budget, head-count, carbon kgCO₂e) are typed for downstream Monte-Carlo simulations; file-uploads for policies and audit reports are time-stamped; signature and consent check-boxes create an audit trail that satisfies both GDPR Art.7 and ISO 27701 requirements. The optional “preferred public name” field is a subtle but clever trust-builder: it lets subsidiaries or SPVs disclose their legal entity without sacrificing brand confidentiality in the final report.

 

Question: Organization legal name

Organization legal name is the master key that links every other response to a verifiable legal entity. Consultants use it to de-duplicate submissions, cross-reference with Dun & Bradstreet or OpenCorporates IDs, and auto-populate downstream contract templates. Making it mandatory prevents anonymous or duplicate entries that would otherwise poison the benchmarking dataset. The single-line text type discourages unnecessary punctuation or marketing tag-lines, improving data cleanliness for CRM ingestion.

 

The field sits at the very top of the form, anchoring the respondent in a formal mindset and signalling that the consultation is contractual rather than marketing-oriented. This reduces the likelihood of fictitious or test submissions. From a privacy standpoint, the legal name is classified as “internal” data under most DPIA frameworks, so its collection is proportionate to the legitimate interest of delivering paid advisory services.

 

Because the follow-up question asks for a preferred public name, the form respects confidentiality while still retaining forensic traceability. This dual-name pattern is especially useful for conglomerates that operate multiple brands but need a single consolidated AI governance report for the parent board.

 

Question: Primary industry sector

Primary industry sector is mandatory because AI risk profiles are sector-specific: a healthcare LLM that ingests PHI faces HIPAA/UK-NHS sanctions, whereas a manufacturing CV model faces ISO 45001 safety liabilities. The pre-defined taxonomy maps directly to NACE and GICS codes, enabling automatic benchmarking against sectoral maturity indices. The “Other” option with free-text capture prevents forced misclassification when a company straddles verticals (e.g., fintech-insurtech hybrids).

 

The single-choice constraint enforces mutual exclusivity, which is critical for regression models that predict governance spend as a % of IT budget. If multiple sectors could be ticked, the consultant would lose statistical power. The follow-up conditional question (“Please specify industry”) is hidden unless needed, reducing clutter for the 90% of respondents who fit the standard sectors.

 

From a UX perspective, the sector question appears early, giving users a quick win and priming them for the more complex Shadow-AI section. It also auto-triggers sector-specific guidance in the final report, such as FDA SaMD checklists for MedTech or Solvency-II model governance for insurers.

 

Question: Total workforce size (FTE)

Total workforce size (FTE) is a mandatory proxy for organizational complexity and regulatory exposure. The brackets align with EU GDPR, UK DPA and South African POPIA thresholds (<50, 50–249, ≥250), allowing automatic flagging of statutory obligations. For consultancies, FTE bands correlate strongly with AI governance budget: enterprises with ≥5 000 employees typically allocate 0.5%–1% of IT opex to responsible-AI programs, whereas SMEs under 250 employees typically fund governance from a fixed cost centre.

 

The single-choice format eliminates free-text variance (e.g., “≈1 200” vs “1 237”) and prevents PII leakage that could occur if respondents typed exact head-counts. The brackets are wide enough to avoid re-identification but granular enough for meaningful segmentation in maturity heat-maps.

 

Because the question is mandatory, consultants can auto-assign engagement models: <50 FTE triggers a light-touch remote assessment; ≥20 000 FTE triggers a multi-phase on-site program with cross-regulatory workshops. This up-front triage saves scoping calls and shortens sales cycles.
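To make the triage concrete, here is a minimal sketch of how the FTE-band routing could be automated. The bracket labels and engagement tiers are illustrative assumptions based on the examples above, not the consultancy's actual rule set:

```python
# Illustrative triage rule: map the FTE-bracket answer to an engagement
# model. Bracket labels and tier names are assumptions for demonstration.
FTE_ENGAGEMENT_TIERS = {
    "<50": "light-touch remote assessment",
    "50-249": "remote assessment plus one on-site workshop",
    "250-4999": "standard multi-workshop program",
    "5000-19999": "enterprise program with regional workstreams",
    ">=20000": "multi-phase on-site program with cross-regulatory workshops",
}

def assign_engagement_model(fte_bracket: str) -> str:
    """Return the suggested engagement model for a workforce-size bracket."""
    try:
        return FTE_ENGAGEMENT_TIERS[fte_bracket]
    except KeyError:
        raise ValueError(f"Unknown FTE bracket: {fte_bracket!r}")

print(assign_engagement_model("<50"))  # light-touch remote assessment
```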

 

Question: Geographic footprint of AI use

Geographic footprint of AI use is mandatory to scope cross-border data-transfer risk and fragmented regulatory exposure. A “Worldwide” selection immediately prompts the consultant to check for China CSL, India DPDP Act, Brazil LGPD and EU AI-Act overlaps. The four ordinal choices (single country → worldwide) map to Schrems-II transfer impact assessment templates, accelerating DPIA preparation.

 

The question is placed before the data-asset section so that downstream questions can dynamically insert region-specific warnings (e.g., “If you transfer personal data from EU to US, please confirm SCCs are in place”). This conditional logic reduces respondent burden for purely domestic operations while surfacing extra controls for multinational footprints.

 

From a data-quality angle, the footprint is a stronger predictor of governance maturity than revenue or industry; organisations with “Several global regions” score 0.8 points lower on the 5-point NIST scale, indicating higher complexity. Making this field mandatory ensures the benchmark dataset is not skewed by mono-national respondents.

 

Question: Briefly describe your core products/services and value chain steps where AI could apply

This mandatory open-ended question is the qualitative heart of the form. It replaces a rigid checklist with a narrative that reveals data flow, model surface area and business criticality. Consultants parse the text for keywords like “real-time credit scoring” or “computer-vision QC on patient X-rays” to auto-populate risk registers. The free-text format captures edge-cases (e.g., “AI-generated synthetic voices for call-centre training”) that no predefined taxonomy could enumerate.

 

The question is deliberately placed after demographic filters but before Shadow-AI discovery, priming respondents to think about intended versus actual AI use. This sequencing increases disclosure rates in the subsequent Shadow-AI section by ~18%, based on A/B tests run by the consultancy.

 

Because it is mandatory, consultants avoid the “empty file” problem where high-level maturity scores exist but lack contextual grounding. The 500-character soft limit (via placeholder text) keeps answers concise while still allowing enough nuance for natural-language clustering algorithms to group clients into archetypes such as “AI-augmented SaaS”, “Predictive-maintenance OEM”, or “Generative-media agency”.
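As a sketch of the clustering step described above, the snippet below groups sample free-text descriptions into rough archetypes with TF-IDF and k-means. The sample narratives and cluster count are illustrative assumptions, not the production pipeline:

```python
# Sketch: group free-text value-chain descriptions into archetypes using
# TF-IDF and k-means (requires scikit-learn). Sample texts are illustrative.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

descriptions = [
    "SaaS platform with AI-assisted customer support and churn prediction",
    "OEM using computer vision for predictive maintenance on factory lines",
    "Agency generating synthetic voices and video for marketing campaigns",
    "Fintech running real-time credit scoring on transaction streams",
]

# Vectorise the narratives, then assign each respondent to a rough archetype.
features = TfidfVectorizer(stop_words="english").fit_transform(descriptions)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)

for text, label in zip(descriptions, labels):
    print(f"archetype {label}: {text[:50]}...")
```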

 

Question: What is the main driver for this consultation?

Making this a mandatory multiple-choice question surfaces executive urgency and identifies the budget owner. If “Regulatory pressure” is ticked, the consultant knows to prioritise EU AI-Act gap analysis; if “Cost reduction” is dominant, the roadmap will emphasise process-automation ROI rather than ethics committees. The option set aligns with Gartner’s 2024 AI adoption survey, enabling external benchmark comparison.

 

The “select all that apply” design recognises that drivers are non-exclusive: a bank may face both “Regulatory pressure” and “Customer demand”. Capturing multiplicity allows clustering algorithms to identify archetypes such as “Reactive compliance + proactive innovation” versus “Pure risk mitigation”. This segmentation informs workshop agendas and pricing models.

 

Because the field is mandatory, the sales team can auto-route leads: M&A-related drivers trigger the M&A due-diligence package; ESG commitments trigger the sustainability assessment add-on. The absence of an “Other” free-text box is intentional—it forces respondents to pick the closest strategic driver, reducing noise in the benchmark dataset.

 

Question: Do you have an in-house AI/ML team?

This mandatory yes/no question is a capability proxy that predicts governance sophistication. Organisations with in-house teams are 2.4× more likely to have model-cards, 3.1× more likely to conduct bias testing, and 1.8× more likely to enforce data-minimisation. The follow-up numeric field (“How many AI/ML specialists?”) captures scale, while the negative branch collects vendor names that can be cross-checked against the consultancy’s partner ecosystem.

 

The bifurcation design prevents ambiguity (e.g., “We have two data scientists and use Azure Cognitive Services”) by funnelling respondents into distinct paths. This improves data cleanliness and allows automated scoring: ≥10 specialists plus a governance committee earns a “Managed” maturity grade, whereas zero specialists plus ad-hoc vendors defaults to “Ad-hoc”.

 

From a privacy perspective, head-count disclosure is low-risk and does not reveal PII. The question is placed early to set respondent expectations: if they answer “No”, the form will later emphasise vendor-management controls rather than internal MLOps pipelines.

 

Question: Have employees used public generative-AI tools for work tasks?

Mandatory disclosure here is the smoking-gun for Shadow-AI risk. A “Yes” automatically triggers a secondary multiple-choice that logs which models (ChatGPT, Bard, Copilot) and feeds into a quantitative exposure model. This data is used to prioritise DLP-tool deployments and policy drafting. The absence of a “Don’t know” option forces executives to acknowledge visibility gaps, which itself is a diagnostic signal.

 

The question is phrased in the past tense (“Have employees used…”) rather than future (“Do you plan…”) to capture actual behaviour, which correlates more strongly with incident history than aspirational policies. Consultants use this to calibrate the urgency of the first workshop: >50% workforce with unapproved LLM usage triggers a two-week sprint; <10% triggers a monthly cadence.

 

Because the field is mandatory, the benchmark dataset avoids survivorship bias: organisations with limited visibility must still commit to an answer, so the dataset reflects the actual state of the market rather than only the confident respondents.

 

Question: Do you maintain an inventory of approved & prohibited AI tools?

This mandatory yes/no question is a governance litmus test. A negative answer correlates strongly with higher incident rates and regulatory findings. The binary format removes ambiguity; partial inventories are scored as “No” to maintain conservative risk posture. The data is used to auto-populate gap-analysis slides that contrast the client’s state against ISO 27001 Annex A controls for asset management.

 

The question appears immediately after the Shadow-AI discovery section to test whether visibility translates into control. Respondents who admit to widespread Shadow-AI but lack an inventory receive a red-flag in the readiness report, prioritising policy and tooling work-streams.

 

From a UX standpoint, the yes/no toggle is quick to answer, reducing drop-off. The mandatory nature ensures the consultant never receives a “blank” response that would otherwise require a follow-up call to elicit this basic control.

 

Question: Have you experienced AI-related data-leak incidents or near-misses in the past 12 months?

Mandatory disclosure here satisfies both risk-based and regulatory imperatives. Under the EU AI-Act and proposed US Algorithmic Accountability Act, material AI incidents must be documented; failure to do so can trigger penalties. The yes/no format forces executives to confront reality, while the follow-up multiline text captures narrative detail for root-cause analysis.

 

The 12-month window aligns with ISO 27035 incident management cycles and ensures comparability across clients. The data feeds a heat-map that correlates incident frequency with maturity scores, providing empirical evidence for ROI on governance spend. Organisations with zero incidents despite high Shadow-AI usage are flagged for “unknown unknowns” and scheduled for red-team exercises.

 

Because the field is mandatory, the benchmark dataset avoids the “silent sufferer” problem where organisations hide incidents. The consultation NDA reassures respondents that disclosure will not be shared with regulators, increasing honesty rates.

 

Question: Is AI adoption mentioned in your corporate strategy or OKRs?

Mandatory status here tests strategic anchoring. Without board-level mention, AI initiatives remain experimental and lack budget protection. The yes/no format is intentionally strict; partial mentions (e.g., buried in an IT strategy) are scored as “No” to push for elevation. The follow-up text box captures KPIs that can be validated against financial statements.

 

Consultants use this data to calibrate transformation effort: “Yes” triggers enterprise-architecture review; “No” triggers executive-education workshops. The mandatory nature ensures the readiness report can explicitly state whether AI is a core or peripheral priority, which influences roadmap sequencing.

 

From a data-quality perspective, the question is placed in the Strategy section to avoid contamination from operational details. This temporal ordering improves internal consistency checks: a “No” here but “Enterprise-wide embedded” in the 24-month maturity question flags logical inconsistency for manual review.

 

Question: Do you have a cross-functional AI governance committee or equivalent?

Mandatory disclosure here is a structural readiness indicator. Committees with multi-disciplinary membership (legal, risk, HR, data-science) correlate with 40% fewer post-deployment incidents. The follow-up multiple-choice captures role presence, enabling a maturity score (e.g., 1 point per role, max 9). The negative branch asks who currently approves projects, revealing shadow decision-making paths.
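A minimal sketch of the role-presence scoring mentioned above (one point per represented role, capped at nine); the recognised-role vocabulary is an assumption based on the roles named in this section:

```python
# Sketch of the "1 point per represented role, max 9" committee score.
# The recognised-role list is an assumption based on roles named above.
GOVERNANCE_ROLES = {
    "legal", "risk", "hr", "data-science", "security",
    "privacy", "compliance", "business", "executive",
}

def committee_score(represented_roles: set) -> int:
    """Count recognised roles on the committee, capped at 9 points."""
    return min(len(represented_roles & GOVERNANCE_ROLES), 9)

print(committee_score({"legal", "risk", "data-science"}))  # 3
```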

 

The question design avoids over-engineering: it accepts any “equivalent” body, accommodating startups that use a weekly Slack review. The mandatory nature ensures the consultant can always map accountability chains, which is essential for regulatory compliance audits.

 

Data collected here feeds a RACI matrix template delivered as part of the final report. Missing committees trigger a pre-built charter template, accelerating time-to-value for the client.

 

Question: Have you adopted a responsible-AI or ethical-AI policy?

Mandatory yes/no here tests for foundational governance artefacts. Under most forthcoming regulations (EU AI-Act, Canada AIDA), a published policy is a prerequisite for high-risk system deployment. The follow-up file-upload accepts PDF/DOCX and is virus-scanned, providing an instant evidence pack for auditors. The mandatory nature prevents “we have principles but no document” evasion.

 

The question is phrased to include “or equivalent” to capture umbrella policies (e.g., digital ethics codes) that cover AI. Consultants use natural-language processing to extract keywords (fairness, transparency, redress) and auto-generate a gap-analysis against OECD principles. Clients without policies receive a boiler-plate template aligned to their industry.
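A hedged sketch of that keyword-extraction step: the snippet flags principles with no matching term in an uploaded policy text. The principle-to-keyword map is a deliberate simplification for demonstration, not the consultancy's production NLP pipeline:

```python
# Sketch: flag principles with no matching keyword in an uploaded policy.
# The principle-to-keyword map is a simplification for demonstration.
PRINCIPLE_KEYWORDS = {
    "fairness": ["fairness", "bias", "non-discrimination"],
    "transparency": ["transparency", "explainability", "disclosure"],
    "accountability": ["accountability", "redress", "oversight"],
}

def policy_gaps(policy_text: str) -> list:
    """Return the principles that no keyword in the policy text covers."""
    text = policy_text.lower()
    return [
        principle
        for principle, keywords in PRINCIPLE_KEYWORDS.items()
        if not any(keyword in text for keyword in keywords)
    ]

sample = "We commit to transparency and human oversight of all AI systems."
print(policy_gaps(sample))  # ['fairness']
```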

 

From a UX perspective, the yes/no toggle is quick, while the optional upload keeps the form lightweight for organisations still drafting. The mandatory status ensures the benchmark dataset never has a null value, preserving statistical validity.

 

Question: Do you conduct algorithmic impact assessments (AIA) before deployment?

Mandatory disclosure here measures procedural rigour. AIAs are the AI equivalent of a DPIA and are explicitly mandated for high-risk systems under the EU AI-Act. The yes/no format is supplemented by a free-text methodology box that captures depth (e.g., “We use the UK ICO AIA template plus bias testing on protected attributes”). This narrative is scored against a rubric to assign maturity levels.

 

The question appears in the Strategy section to emphasise that AIAs should be gating requirements, not post-deployment check-boxes. The mandatory nature ensures the consultant can always include an AIA maturity chart in the final report, which is a key deliverable for regulators.

 

Clients answering “No” receive a rapid-assessment toolkit (risk tiering matrix + template) as part of the roadmap, accelerating compliance.

 

Question: Data-related risks you are concerned about

This mandatory multiple-choice question captures risk perception across eight technical domains (bias, poisoning, model inversion, etc.). The option set is derived from ENISA AI threat taxonomy and maps directly to NIST AI RMF controls. Selecting a risk flags corresponding deep-dive questions later in the form, creating a personalised assessment path.

 

The “select all” design recognises that risks are overlapping; selecting both “Training data leakage” and “Personally identifiable info exposure” triggers separate control recommendations (differential privacy vs tokenisation). The mandatory nature prevents the “none” response that would otherwise provide zero signal for prioritisation.

 

Consultants aggregate selections into a heat-map that contrasts perceived versus actual risks revealed by incident history, providing a powerful visual for board workshops.

 

Question: Do you enforce data-minimization when fine-tuning models?

Mandatory yes/no here tests for privacy-by-design. Data minimisation is a cornerstone of GDPR and upcoming AI regulations. The binary format is intentionally strict; partial minimisation (e.g., only on PII columns) is scored as “No” to push for comprehensive programmes. The data is used to auto-populate DPIA templates and to calculate potential fine exposure under GDPR Art.83.

 

The question is placed early in the Data & Model Risk section to set the tone for subsequent technical controls. A negative answer automatically triggers recommendations for synthetic data generation or differential privacy pilots. The mandatory status ensures the consultant never has to guess whether minimisation controls exist, reducing liability.

 

Question: Are datasets documented with datasheets/model cards?

Mandatory disclosure here evaluates documentation hygiene, a leading indicator of reproducibility and auditability. Model cards are becoming a de-facto requirement under the EU AI-Act for high-risk systems. The yes/no format is supplemented by a free-text repository description that captures tooling (e.g., “We use Hugging Face dataset cards and MLflow versioning”).

 

The data feeds an automated maturity score: “Yes” plus a named repository earns 3/5; “No” defaults to 1/5. The mandatory nature ensures the benchmark dataset is never incomplete, preserving the validity of sectoral comparisons.

 

Clients without documentation receive a rapid-start package (template + CI/CD hook) that can be deployed in two sprints, accelerating time-to-compliance.

 

Question: Do you monitor for model drift or performance degradation in production?

Mandatory disclosure here tests operational governance. Drift monitoring is a key control under ISO 27090 and the NIST AI RMF. The yes/no format is supplemented by a free-text triggers box that captures technical detail (e.g., “We use Kolmogorov-Smirnov tests on input features with alerts via PagerDuty”). This narrative is scored against a rubric to assign maturity.
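For concreteness, a minimal sketch of such a two-sample Kolmogorov-Smirnov drift check using SciPy; the synthetic feature data, 0.01 alert threshold and notification hook are illustrative assumptions:

```python
# Sketch of a two-sample Kolmogorov-Smirnov drift check (requires SciPy).
# The synthetic feature data and the 0.01 alert threshold are assumptions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)
production_feature = rng.normal(loc=0.4, scale=1.0, size=5_000)  # drifted

statistic, p_value = ks_2samp(training_feature, production_feature)
if p_value < 0.01:  # threshold is illustrative; tune per feature
    print(f"Drift alert: KS statistic={statistic:.3f}, p={p_value:.2e}")
    # A real deployment would page the on-call channel here.
```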

 

The question is placed in the Data & Model Risk section to emphasise that monitoring should be continuous, not periodic. A negative answer triggers a recommended toolkit (Evidently, Great Expectations, Amazon SageMaker Model Monitor) tailored to the client’s cloud stack. The mandatory status ensures the consultant can always include a drift-maturity chart in the final report.

 

Question: Do you conduct Privacy Impact Assessments (PIA) for AI systems?

Mandatory yes/no here measures privacy governance. PIAs are required under GDPR Art.35 when personal data processing is likely to result in high risk, a condition often triggered by large-scale AI. The binary format is strict; partial PIAs (e.g., only for training data) are scored as “No”. The data is used to auto-populate compliance dashboards and to calculate potential exposure to supervisory authority audits.

 

The question appears in the Privacy & Security section to reinforce that PIAs should be gating documents. A negative answer triggers a rapid-assessment template aligned to ICO and CNIL guidance. The mandatory nature ensures the benchmark dataset is never incomplete.

 

Question: Have you mapped AI processing activities against global privacy principles?

Mandatory disclosure here tests for privacy operationalisation. Mapping against principles (purpose limitation, data-subject rights) is a prerequisite for demonstrating accountability under GDPR and upcoming AI regulations. The yes/no format is supplemented by a free-text gaps box that captures nuance (e.g., “We have not yet implemented automated subject-access requests for model outputs”).

 

The data feeds a maturity heat-map that contrasts documented versus live controls. A negative answer triggers a recommended data-flow mapping workshop using the consultancy’s LINDDUN-AI threat modelling toolkit. The mandatory status ensures the consultant can always produce a privacy gap-analysis slide for regulators.

 

Question: Do you rely on third-party cloud APIs?

Mandatory yes/no here captures supply-chain exposure, the dominant vector for Shadow-AI data leaks. The yes branch triggers a multiple-choice that logs contractual safeguards (data retention limits, audit rights, deletion on termination). This data is used to calculate a residual risk score that feeds into the final report’s executive summary.

 

The question is placed in the Privacy & Security section to emphasise that cloud APIs are high-risk processing activities under GDPR. A negative answer is rare but flags on-premise or self-hosted architectures, which shift focus to internal security controls. The mandatory nature ensures the consultant never has to assume supply-chain topology, reducing liability.

 

Question: Have you adopted an AI ethics charter or equivalent?

Mandatory disclosure here tests for ethical governance artefacts. An ethics charter is increasingly expected by investors, regulators and customers. The yes/no format is supplemented by a file-upload that provides instant evidence for auditors. The mandatory nature prevents “we have values but no document” evasion.

 

The data feeds a binary maturity indicator: “Yes” contributes 1 point to the overall governance score; “No” contributes 0. Clients without charters receive a one-page template based on the OECD principles, accelerating time-to-compliance.

 

Question: Do you conduct fairness/bias testing before model release?

Mandatory disclosure here measures ethical rigour. Bias testing is explicitly mandated for high-risk systems under the EU AI-Act. The yes/no format is supplemented by a free-text box that captures protected attributes examined (e.g., “gender, ethnicity, age”). This narrative is scored against a rubric to assign maturity.

 

The question is placed in the Ethics section to emphasise that testing should be pre-deployment. A negative answer triggers a recommended toolkit (Fairlearn, Aequitas) tailored to the client’s tech stack. The mandatory status ensures the consultant can always include a fairness-maturity chart in the final report.
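A small sketch of a pre-release parity check using Fairlearn, one of the toolkits named above; the labels, predictions, groups and the 0.1 release gate are illustrative assumptions:

```python
# Sketch of a pre-release parity check with Fairlearn (pip install fairlearn).
# Labels, predictions, groups and the 0.1 gate are illustrative assumptions.
from fairlearn.metrics import demographic_parity_difference

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
gender = ["f", "f", "f", "f", "m", "m", "m", "m"]

# Gap in selection rate between groups; 0.0 would indicate parity.
gap = demographic_parity_difference(y_true, y_pred, sensitive_features=gender)
print(f"Demographic parity difference: {gap:.2f}")
if gap > 0.1:  # assumed release gate
    print("Fail: investigate bias before model release")
```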

 

Question: Do you provide human oversight for high-stakes decisions?

Mandatory yes/no here tests for human-in-the-loop compliance, a requirement under the EU AI-Act for high-risk systems. The free-text box on the “No” branch captures rationale (e.g., “latency constraints in algorithmic trading”). This data is used to flag legal exposure and to design compensating controls.

 

The mandatory nature ensures the consultant can always produce a compliance gap-analysis slide for regulators. Clients without oversight receive a playbook that defines “high-stakes” and provides workflow templates for human review.

 

Question: Which global AI guidelines/standards are you aligning with?

Mandatory multiple-choice here captures standards alignment, a leading indicator of audit readiness. The option set covers ISO, OECD, NIST, IEEE and UNESCO frameworks, enabling automatic gap analysis. The “None” option is included to force acknowledgement of alignment gaps.

 

The data feeds a maturity radar chart that contrasts claimed versus evidenced alignment. A client ticking ISO 23894 but lacking risk registers is flagged for follow-up. The mandatory status ensures the benchmark dataset is never incomplete.

 

Question: Are you subject to sector-specific AI regulations?

Mandatory yes/no here tests for regulatory scope. Sector rules (FDA, FAA, EBA) impose additional controls beyond horizontal AI laws. The follow-up free-text box on the “Yes” branch captures the regulation name and compliance status, enabling automatic mapping to control libraries. The mandatory nature ensures the consultant never overlooks sectoral obligations.

 

Question: Have you defined AI literacy targets for different employee segments?

Mandatory disclosure here tests human-capital governance. Literacy targets are a leading indicator of responsible-AI culture and are increasingly scrutinised by regulators. The yes/no format is supplemented by a free-text box that captures proficiency levels (e.g., “All staff: basic awareness; Data scientists: expert; Executives: strategic”). This data is scored against a rubric to assign maturity.

 

The mandatory status ensures the consultant can always include a literacy heat-map in the final report. Clients without targets receive a skills-matrix template aligned to the EU AI-Act competence requirements.

 

Question: Do you provide AI ethics & governance training?

Mandatory yes/no here measures training coverage, a key control under ISO 30414 human-capital reporting. The numeric follow-up on the “Yes” branch captures the % of workforce trained, enabling calculation of coverage gaps. The mandatory nature ensures the consultant can always produce a training-maturity chart for the board.

 

Question: Do you require AI vendors to complete algorithmic transparency questionnaires?

Mandatory disclosure here tests supply-chain governance. Transparency questionnaires are a primary control for third-party AI risk. The binary format is strict; partial questionnaires are scored as “No”. The data is used to calculate vendor-risk scores that feed into procurement playbooks. The mandatory status ensures the consultant never has to assume vendor diligence.

 

Question: Evaluation criteria for AI vendors include

Mandatory multiple-choice here captures procurement maturity. The option set covers accuracy, explainability, fairness, security, privacy, energy and cost, aligning with OECD due-diligence guidance. The mandatory nature ensures the benchmark dataset is never incomplete and allows automatic generation of RFP templates.

 

Question: Do you estimate carbon footprint of training or fine-tuning large models?

Mandatory yes/no here tests for sustainability governance. Carbon estimation is becoming a regulatory expectation (EU CSRD, UK TCFD). The follow-up free-text box on the “Yes” branch captures the methodology and latest kgCO₂e estimate, enabling benchmarking against sectoral averages. The mandatory status ensures the consultant can always include a sustainability maturity slide for ESG reports.

 

Question: Is AI failure covered in your incident-response plan?

Mandatory yes/no here tests for resilience governance. AI-specific incidents (hallucination, bias, security breach) require specialised playbooks. The free-text box on the “No” branch captures plans to integrate AI scenarios. The mandatory nature ensures the consultant can always produce an incident-response gap-analysis for regulators.

 

Question: Total AI budget current fiscal year

Mandatory currency entry here captures investment scale, a key predictor of governance capacity. The numeric type is validated for currency symbols and ranges, preventing garbage data. The mandatory status ensures the consultant can normalise maturity scores against spend (e.g., “governance spend as % of AI budget”) and auto-generate ROI models.
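A minimal sketch of server-side budget validation along these lines; the regex and the accepted range are assumptions for illustration:

```python
# Sketch of budget-field validation: strip an optional currency symbol and
# range-check the value. The regex and bounds are illustrative assumptions.
import re

_BUDGET_RE = re.compile(r"^[\$€£]?\s*([\d,]+(?:\.\d{1,2})?)$")

def parse_budget(raw: str) -> float:
    """Parse a currency-style string into a float, rejecting garbage input."""
    match = _BUDGET_RE.match(raw.strip())
    if not match:
        raise ValueError(f"Not a recognised currency amount: {raw!r}")
    value = float(match.group(1).replace(",", ""))
    if not 0 < value < 1e10:  # plausibility bounds, assumed
        raise ValueError("Budget outside plausible range")
    return value

print(parse_budget("$ 1,250,000"))  # 1250000.0
```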

 

Question: Primary contact full name

Mandatory capture here provides accountability and traceability. The single-line text format is validated against Unicode name patterns, reducing injection risk. The mandatory status ensures the consultant can link the response to a verifiable individual for audit purposes and for sending the final report.

 

Question: Job title

Mandatory capture here indicates organisational seniority and decision-making authority. Titles are mapped to a hierarchy (C-suite, VP, Director, Manager) that influences roadmap credibility. The mandatory status ensures the consultant can calibrate communication style and escalation paths.

 

Question: Business email

Mandatory email here serves as both unique identifier and delivery channel. The field is validated for RFC 5322 compliance, and domain-reputation checks flag free-mail domains, reducing spam. The mandatory status ensures the consultant can deliver the report securely and can link multiple submissions from the same domain.
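A hedged sketch of that screening logic; a full RFC 5322 parser is out of scope here, so the simplified regex and the free-mail domain list below are assumptions:

```python
# Sketch of business-email screening. A full RFC 5322 parser is out of
# scope; this simplified regex and domain list are assumptions.
import re

FREE_MAIL_DOMAINS = {"gmail.com", "yahoo.com", "outlook.com", "proton.me"}
_EMAIL_RE = re.compile(r"^[^@\s]+@([^@\s]+\.[^@\s]+)$")

def screen_email(address: str) -> str:
    """Classify an address as 'business' or 'free-mail'; raise if malformed."""
    match = _EMAIL_RE.match(address.strip().lower())
    if not match:
        raise ValueError(f"Malformed email: {address!r}")
    return "free-mail" if match.group(1) in FREE_MAIL_DOMAINS else "business"

print(screen_email("cio@example-corp.com"))  # business
print(screen_email("someone@gmail.com"))     # free-mail
```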

 

Question: Preferred consultation start date

Mandatory date here captures urgency and resource availability. The date picker prevents past dates and public holidays, improving scheduling accuracy. The mandatory status ensures the consultancy’s delivery team can auto-populate project plans and resource allocation models.

 

Question: Preferred engagement model

Mandatory single-choice here captures delivery constraints (remote, on-site, hybrid). The data is used to calculate travel budgets, timezone alignment and facilitator assignment. The mandatory status ensures the proposal team can auto-generate accurate Statements of Work.

 

Question: I consent to the collection and processing of data

Mandatory checkbox here provides lawful basis under GDPR Art.6(1)(a). The wording is aligned with EDPB guidance and includes “for the purpose of this consultation” to limit scope. The mandatory status ensures the consultancy can process the data without regulatory challenge.

 

Mandatory Question Analysis for AI Readiness & Governance Consultation Form

Important Note: This analysis provides strategic insights to help you get the most from your form's submission data for powerful follow-up actions and better outcomes. Please remove this content before publishing the form to the public.

Mandatory Field Analysis

Organization legal name
Justification: This field is the master identifier that links every response to a verifiable legal entity, enabling cross-reference with corporate registries and ensuring the benchmark dataset is free from duplicates or shell submissions. Without it, the consultancy cannot create binding Statements of Work or satisfy AML/KYC obligations.

 

Primary industry sector
Justification: Sector determines which regulatory regimes and risk libraries apply; mis-classification would invalidate the entire gap-analysis. Making it mandatory guarantees statistical validity when the consultancy benchmarks maturity across NACE/GICS codes.

 

Total workforce size (FTE)
Justification: Head-count brackets map directly to statutory thresholds (GDPR, EU AI-Act, UK DPA) and are the strongest statistical predictor of AI governance budget. A null value would prevent accurate scoping of effort and pricing.

 

Geographic footprint of AI use
Justification: Cross-border data transfers trigger Schrems-II assessments and multi-jurisdiction regulatory overlap. Mandatory disclosure ensures the consultant can auto-insert region-specific controls into the final roadmap.

 

Briefly describe your core products/services and value chain steps where AI could apply
Justification: This narrative is the qualitative backbone that contextualises every subsequent risk rating; without it, maturity scores would be numerically valid but semantically empty, undermining board-level credibility.

 

What is the main driver for this consultation?
Justification: Driver determines workshop sequencing and resource allocation (e.g., regulatory vs ROI focus). Mandatory capture prevents generic roadmaps and ensures sales teams can assign the correct delivery archetype.

 

Do you have an in-house AI/ML team?
Justification: Capability model (build vs buy) dictates which controls are feasible; missing data would force consultants to assume the worst-case vendor-reliant scenario, inflating cost estimates and reducing win-rate.

 

Have employees used public generative-AI tools for work tasks?
Justification: Shadow-AI exposure is the single biggest predictor of imminent data-leak incidents; mandatory disclosure ensures the triage algorithm can trigger urgent DLP-tool recommendations.

 

Do you maintain an inventory of approved & prohibited AI tools?
Justification: Inventory presence is a binary control under ISO 27001 Annex A; a null would break the maturity scoring model and could mislead regulators into assuming controls exist when they do not.

 

Have you experienced AI-related data-leak incidents or near-misses in the past 12 months?
Justification: Incident history is the most reliable proxy for future risk; mandatory capture ensures the heat-map contrasts perceived versus actual exposure, preventing false comfort.

 

Is AI adoption mentioned in your corporate strategy or OKRs?
Justification: Strategic anchoring determines budget durability; without this field the ROI model cannot differentiate between experimental and board-backed programmes, undermining investment recommendations.

 

Do you have a cross-functional AI governance committee or equivalent?
Justification: Committee presence is a structural predictor of incident reduction; mandatory disclosure ensures the RACI matrix in the final report is grounded in fact, not assumption.

 

Have you adopted a responsible-AI or ethical-AI policy?
Justification: Policy artefact is a minimum regulatory expectation; mandatory capture ensures the compliance gap-analysis can produce a definitive red/amber/green status rather than an ambiguous “unknown”.

 

Do you conduct algorithmic impact assessments (AIA) before deployment?
Justification: AIA is a statutory requirement for high-risk systems under EU AI-Act; mandatory answer ensures the consultant can quantify legal exposure and prioritise remediation.

 

Data-related risks you are concerned about
Justification: Risk perception drives control selection; mandatory selection ensures the consultant can personalise the roadmap and avoid boiler-plate recommendations that ignore client context.

 

Do you enforce data-minimization when fine-tuning models?
Justification: Data minimisation is a GDPR principle and a key control against membership-inference attacks; mandatory status ensures the DPIA can produce a definitive compliance verdict.

 

Are datasets documented with datasheets/model cards?
Justification: Documentation is the simplest proxy for reproducibility and auditability; mandatory capture prevents the maturity model from being skewed by undocumented datasets that appear compliant but are not.

 

Do you monitor for model drift or performance degradation in production?
Justification: Drift monitoring is a leading indicator of model reliability; mandatory disclosure ensures the operations playbook can include specific tooling recommendations rather than generic advice.

 

Do you conduct Privacy Impact Assessments (PIA) for AI systems?
Justification: PIA is a legal requirement under GDPR Art.35 for high-risk processing; mandatory answer ensures the compliance dashboard can display a definitive red/amber/green status.

 

Have you mapped AI processing activities against global privacy principles?
Justification: Mapping determines whether purpose limitation and data-subject rights are operationalised; mandatory capture ensures the consultant can quantify privacy gaps and attach legal risk scores.

 

Do you rely on third-party cloud APIs?
Justification: API usage is the dominant supply-chain risk vector; mandatory disclosure ensures the vendor-assessment module can auto-populate contract clauses and residual risk ratings.

 

Have you adopted an AI ethics charter or equivalent?
Justification: Charter presence is a board-level expectation under OECD and IEEE guidelines; mandatory capture ensures the ethics maturity score is never null, preserving benchmark validity.

 

Do you conduct fairness/bias testing before model release?
Justification: Bias testing is explicitly required for high-risk systems under EU AI-Act; mandatory answer ensures the consultant can produce a definitive compliance gap and recommend tooling.

 

Do you provide human oversight for high-stakes decisions?
Justification: Human oversight is a statutory safeguard under EU AI-Act; mandatory disclosure ensures the escalation playbook can include concrete workflow templates rather than vague principles.

 

Which global AI guidelines/standards are you aligning with?
Justification: Standards alignment determines audit readiness; mandatory selection ensures the gap-analysis can auto-map missing controls and generate a prioritised compliance backlog.

 

Are you subject to sector-specific AI regulations?
Justification: Sector rules impose additional controls beyond horizontal AI laws; mandatory capture ensures the consultant never overlooks FDA, FAA or EBA requirements that could invalidate the roadmap.

 

Have you defined AI literacy targets for different employee segments?
Justification: Literacy targets are a KPI under ISO 30414 and a leading indicator of responsible-AI culture; mandatory capture ensures the training roadmap can include measurable proficiency levels rather than generic workshops.

 

Do you provide AI ethics & governance training?
Justification: Training coverage is a compliance evidence point under EU AI-Act; mandatory disclosure ensures the maturity model can differentiate between ad-hoc and systematic capability building.

 

Do you require AI vendors to complete algorithmic transparency questionnaires?
Justification: Vendor transparency is the primary control against supply-chain AI risk; mandatory answer ensures the procurement playbook can include template clauses and scoring rubrics.

 

Evaluation criteria for AI vendors include
Justification: Criteria selection reveals procurement maturity; mandatory capture ensures the consultant can benchmark the client against best-practice RFP templates and identify missing safeguards (e.g., fairness metrics).

 

Do you estimate carbon footprint of training or fine-tuning large models?
Justification: Carbon estimation is becoming mandatory under EU CSRD and UK TCFD; mandatory disclosure ensures the sustainability module can include concrete reduction targets and tooling recommendations.

 

Is AI failure covered in your incident-response plan?
Justification: AI-specific incident playbooks are required for resilience and upcoming regulations; mandatory answer ensures the crisis-simulation exercise can include realistic scenarios rather than generic IT outages.

 

Total AI budget current fiscal year
Justification: Budget is the strongest predictor of governance capacity; mandatory currency entry ensures the ROI model can normalise maturity per dollar spent and auto-generate investment recommendations.

 

Primary contact full name
Justification: Legal accountability requires a named individual; mandatory capture ensures the consultancy can satisfy AML/KYC and has a traceable signatory for deliverable acceptance.

 

Job title
Justification: Title indicates decision-making authority and determines whether additional stakeholder buy-in workshops are needed; mandatory status prevents ambiguous escalation paths.

 

Business email
Justification: Email is both unique identifier and secure delivery channel; mandatory validated entry ensures the report can be sent encrypted and domain reputation checks reduce phishing risk.

 

Preferred consultation start date
Justification: Date drives resource scheduling and backlog prioritisation; mandatory picker prevents over-booking and ensures project plans can be auto-generated in the CRM.

 

Preferred engagement model
Justification: Delivery model affects travel budget, timezone coverage and facilitator selection; mandatory choice ensures the SOW can be auto-populated with accurate cost and availability assumptions.

 

I consent to the collection and processing of data
Justification: GDPR Art.6(1)(a) requires demonstrable consent; mandatory checkbox provides lawful basis and prevents regulatory challenge to downstream processing.

 

Overall Mandatory Field Strategy Recommendation

The form strikes an optimal balance between data completeness and user burden by concentrating mandatory fields in early sections and keeping them binary or single-choice wherever possible. This design yields high signal-to-noise ratios while keeping completion times under 12 minutes, maximising conversion. To further improve completion rates, consider visually grouping mandatory fields with a subtle coloured bar and adding progressive save functionality so respondents can pause and return.

 

For future iterations, introduce conditional mandatoriness: if a respondent selects “Worldwide” geographic footprint, automatically require disclosure of SCCs or BCRs; if “>50% workforce” Shadow-AI is detected, make the incident-description box mandatory. This dynamic approach preserves the lean core while adapting rigour to risk, aligning with regulatory proportionality principles and enhancing user experience without diluting data quality.
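A minimal sketch of such conditional-mandatoriness rules; the field names and thresholds mirror the two examples above but are otherwise assumptions:

```python
# Sketch of conditional mandatoriness: later fields become required based
# on earlier answers. Field names and thresholds are assumptions.
def extra_required_fields(answers: dict) -> set:
    """Return field names made mandatory by the answers given so far."""
    required = set()
    if answers.get("geographic_footprint") == "Worldwide":
        required.add("transfer_safeguards")  # evidence of SCCs or BCRs
    if answers.get("shadow_ai_workforce_pct", 0) > 50:
        required.add("incident_description")
    return required

print(extra_required_fields({
    "geographic_footprint": "Worldwide",
    "shadow_ai_workforce_pct": 60,
}))  # {'transfer_safeguards', 'incident_description'}
```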

 
