Provide general information about your organisation to contextualise the audit.
Organisation name
Brief description of core business activities
Total number of employees (worldwide)
Primary industry sector
Financial Services
Healthcare
Manufacturing
Retail & E-commerce
Energy & Utilities
Telecommunications
Government & Public Sector
Education
Technology & Software
Transportation & Logistics
Other:
Geographic regions where critical IT services are hosted or operated
North America
South America
Europe
Middle East & Africa
South Asia
East Asia
South-East Asia
Oceania
Does your organisation have a documented Business Continuity Policy approved by executive leadership?
Date of last policy review or approval
Describe any informal or draft continuity arrangements in place
Is there a separate, documented Disaster Recovery (DR) Policy?
DR Policy version or identifier
Has a Business Continuity Management System (BCMS) been formally implemented (e.g., aligned with ISO 22301 or similar)?
Which framework or standard primarily guides your BCMS?
ISO 22301
NIST SP 800-34
BS 25999
COBIT
Custom internal framework
Other:
Are Business Continuity and Disaster Recovery responsibilities written into job descriptions for relevant roles?
Is there a dedicated budget line for BC/DR activities?
Does the Board (or equivalent governing body) receive regular BC/DR performance reports?
Has a formal enterprise-wide Business Impact Analysis (BIA) been conducted?
Date of last complete BIA
Is the BIA updated at least annually or after significant organisational changes?
Are Recovery Time Objectives (RTO) defined for all critical IT services?
Top 5 critical IT services — RTO & RPO summary
| # | IT Service/System | Business Process Supported | RTO (hours) | RPO (hours) |
|---|---|---|---|---|
| 1 | | | | |
| 2 | | | | |
| 3 | | | | |
| 4 | | | | |
| 5 | | | | |
Are Recovery Point Objectives (RPO) defined for all critical data repositories?
Which approach is primarily used for quantifying downtime impact?
Financial loss per hour
Revenue loss per hour
Customer attrition risk
Regulatory non-compliance fines
Reputational score impact
Qualitative descriptors (High/Med/Low)
Other
Rate the frequency of risk assessment activities
| Activity | Never | Ad-hoc | Annually | Semi-annually | Quarterly or more |
|---|---|---|---|---|---|
| Threat landscape review | | | | | |
| Vulnerability assessment | | | | | |
| BIA update | | | | | |
| Supply chain risk review | | | | | |
| Cloud service provider risk review | | | | | |
Are automated, encrypted backups performed for all production databases?
What backup media types are primarily used?
Disk (VTL/DAS/NAS)
Tape
Cloud object storage
Hybrid (disk + cloud)
Hybrid (disk + tape)
Other
Which backup scheme is predominantly used?
Full daily
Full + incremental
Full + differential
Synthetic full
Snapshot-based
Mirror/replication only
Other
Are backups stored in at least two geographically separated locations?
Are backup restorations tested at least quarterly?
Is immutability (WORM) enabled for backup data to protect against ransomware?
Describe any challenges or gaps in current backup processes
Are critical systems deployed in an N+1 or higher redundancy configuration?
Is there automatic failover capability at the application layer?
Database high availability method
Always-On/Data Guard
Master-slave replication
Multi-master clustering
Log shipping
Cloud managed HA
None
Are redundant internet connections (ISP diversity) in place for primary data centres?
Are uninterruptible power supplies (UPS) and backup generators tested monthly?
Is network equipment (switches, routers) deployed in redundant pairs?
Rate the maturity of resilience controls
Use the scale: 1 = Not implemented, 2 = Partially implemented, 3 = Implemented but not tested, 4 = Tested and documented, 5 = Continuously improved
| Control | Rating (1–5) |
|---|---|
| Server clustering | |
| Load balancing | |
| RAID/disk redundancy | |
| UPS/generator | |
| Environmental monitoring | |
| Fire suppression | |
Type of DR site currently established
Hot site (fully equipped, ready within 1–4 h)
Warm site (hardware in place, data restore required)
Cold site (space only, hardware to be procured)
Reciprocal agreement with partner
Cloud-based DRaaS
No DR site
Is the DR site located in a different seismic zone from the primary site?
Is the DR site connected via diverse network paths (no single points of failure)?
Have you performed a full failover drill to the DR site within the last 12 months?
Describe any constraints or risks related to the DR site (e.g., capacity limits, shared resources)
Which cloud service models does your organisation utilise?
IaaS
PaaS
SaaS
FaaS/Serverless
Private cloud
None
Do cloud provider contracts include explicit BC/DR obligations and RTO/RPO commitments?
Are cloud workloads deployed across multiple availability zones or regions?
Is an exit strategy (cloud repatriation or provider change) documented and tested?
Do you maintain an up-to-date register of all third-party services critical to IT operations?
List any cloud-native DR tools or services in use (e.g., AWS RDS automated backups, Azure Site Recovery)
Is there a documented Incident Response Plan aligned with BC/DR requirements?
Are incident severity levels defined with corresponding escalation matrices?
How is the Incident Response Team (IRT) activated?
Automated alert threshold
24/7 manned phone line
Email distribution list
On-call rotation
Other
Is a mass-notification system (SMS, push, voice) deployed for crisis communications?
Are pre-approved public relations/communications templates available for major outages?
Rate the maturity of incident response activities
Use the scale: 1 = Ad-hoc, 2 = Documented but inconsistent, 3 = Followed for major incidents, 4 = Standardised and audited, 5 = Continuously improved
| Activity | Rating (1–5) |
|---|---|
| Detection and alerting | |
| Escalation procedures | |
| Root cause analysis | |
| Stakeholder communications | |
| Post-incident review | |
| Lessons learned tracking | |
Is there an annual BC/DR testing schedule approved by management?
Which types of tests are conducted at least annually?
Table-top exercise
Walk-through
Simulation
Failover test
Full-scale test
None
Are test results documented with action items and tracked to closure?
Is BC/DR training included in new-employee onboarding?
Average training hours per employee on BC/DR topics in the last 12 months
Are specialised BC/DR certifications encouraged or sponsored for IT staff?
Describe the biggest lesson learned from the most recent BC/DR test
Is data restoration capability tested specifically against ransomware scenarios?
Are offline, encrypted backup copies maintained and periodically verified?
Is an Endpoint Detection and Response (EDR) solution deployed across the enterprise?
Are critical systems segmented from general corporate networks?
Patch management frequency for critical security updates
Within 24 h
Within 72 h
Within 7 days
Within 30 days
Variable/ad-hoc
Is a cyber-incident playbook integrated with overall BC/DR plans?
Overall confidence in cyber-resilience programme
Very Low
Low
Moderate
High
Very High
Is there an up-to-date register of all critical suppliers (including cloud providers)?
Do supplier contracts include BC/DR clauses and right-to-audit provisions?
Are alternative suppliers identified for critical components or services?
Rate supplier BC/DR assessment practices
Use the scale: 1 = Not performed, 2 = Partial, 3 = Complete for top suppliers, 4 = Audited regularly, 5 = Continuously improved
| Practice | Rating (1–5) |
|---|---|
| Supplier risk questionnaire | |
| Evidence of DR testing | |
| SLA with penalties | |
| Ongoing monitoring | |
| Exit strategy defined | |
Is dependency mapping (applications, data, networks) maintained and reviewed?
Describe any single-source suppliers that could significantly impact IT continuity
Are BC/DR KPIs reported to senior management at least quarterly?
Has an independent third-party BC/DR audit been performed in the last 24 months?
Is there a continuous improvement programme with post-test enhancements tracked?
Maturity level of BC/DR programme
Initial (ad-hoc)
Managed (documented)
Defined (standardised)
Quantitatively managed (metrics driven)
Optimising (continuous improvement)
Outline the top three improvement initiatives planned for the next 12 months
Any additional comments or context not captured elsewhere
Analysis for IT Business Continuity & Disaster Recovery Audit Form
Important Note: This analysis provides strategic insights to help you get the most from your form's submission data for powerful follow-up actions and better outcomes. Please remove this content before publishing the form to the public.
This IT Business Continuity & Disaster Recovery Audit form is a comprehensive, regulation-neutral instrument designed to benchmark an organisation’s preparedness across thirteen critical domains. Its modular structure—beginning with organisation context and cascading through governance, risk, technical resilience, cloud, incident response, supply-chain and continuous improvement—mirrors best-practice audit frameworks such as ISO 22301 and NIST SP 800-34. By combining closed questions (yes/no, single choice, matrix ratings) with targeted open-ended prompts and conditional follow-ups, the form balances quantitative scoring with rich qualitative insight, enabling both quick gap identification and deep-dive narrative evidence.
The progressive disclosure pattern (e.g., “yes” triggers date or version fields) reduces cognitive load while ensuring data fidelity. Mandatory fields are sparingly used, focusing only on items essential for audit scoping (organisation identity, core activities, policy existence, BIA currency, backup encryption). This design respects user time, mitigates form abandonment, and still guarantees that auditors receive the minimum dataset required to contextualise responses. Matrix and rating scales provide granular maturity indicators that can be trended over time, supporting continuous-improvement programmes. Overall, the form is a robust, audit-ready data-collection vehicle that scales from small enterprises to global corporations.
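As a minimal sketch of how those matrix ratings could be trended over time, the snippet below averages the 1–5 scores from each maturity matrix into a per-domain figure that can be compared across audit cycles. The dictionary keys and control names are illustrative assumptions, not part of the form's actual data model.

```python
from statistics import mean

def domain_maturity(ratings: dict[str, dict[str, int]]) -> dict[str, float]:
    """Average the 1-5 matrix ratings for each domain in a submission."""
    return {
        domain: round(mean(scores.values()), 2)
        for domain, scores in ratings.items()
        if scores  # skip domains with no ratings captured
    }

# Hypothetical submission fragment; control names mirror the form's matrices.
submission = {
    "resilience_controls": {"Server clustering": 3, "Load balancing": 4, "RAID/disk redundancy": 5},
    "incident_response": {"Detection and alerting": 2, "Escalation procedures": 3},
}

print(domain_maturity(submission))  # {'resilience_controls': 4.0, 'incident_response': 2.5}
```

Storing these averages per audit cycle gives the year-over-year trend line that the continuous-improvement sections of the form are designed to feed.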
The opening question captures the legal entity undergoing audit, ensuring traceability of submitted data and enabling cross-reference with external risk intelligence (e.g., regulatory sanctions, financial ratings). Requiring an exact string rather than a drop-down preserves flexibility for multinational subsidiaries and joint ventures while avoiding ambiguous acronyms. From a data-quality perspective, this field becomes the primary key against which historical audits can be compared, supporting year-over-year maturity trending. Privacy implications are minimal because the organisation name is typically public record; however, the form should prompt respondents to note where a subsidiary's legal name differs from its brand name, preventing duplicate submissions. UX friction is negligible because respondents intuitively know their employer's name, resulting in near-zero abandonment at this first step.
This free-text field contextualises technology dependencies for the auditor. For example, "online securities trading" immediately signals low tolerance for downtime, whereas "local dairy distribution" may imply seasonal tolerance. The open format encourages nuance—companies can highlight regulated processes, proprietary platforms, or customer-facing channels—information that a fixed industry-sector drop-down cannot capture. Because the field is mandatory, auditors are protected from receiving blank submissions that would otherwise necessitate follow-up calls, accelerating engagement scoping. Data quality hinges on conciseness; the form could add a 250-character limit to curb verbosity while retaining flexibility. From a privacy standpoint, disclosures here rarely contain confidential data, yet the organisation may wish to anonymise product names if they reveal strategic direction.
This yes/no gateway question quickly segregates mature programmes from ad-hoc efforts. By demanding executive approval, the form tests governance rigour rather than merely the existence of a shelf-ware document. Follow-up fields (date of last review or narrative of informal arrangements) create an instant maturity heat-map: organisations lacking policy are channelled toward compliance gaps, while those with policy supply freshness timestamps that feed risk scoring algorithms. Mandatory status is justified because without an approved policy, downstream questions about testing, budgeting, and auditing lose context. Data collectors benefit from a binary filter that accelerates report segmentation. Respondents experience minimal burden because the factual presence or absence of a policy is readily known to continuity managers.
The BIA is the cornerstone of every credible BC/DR programme; without it, RTO/RPO targets, resource allocation, and recovery strategies are speculative. Making this question mandatory forces organisations to confront baseline preparedness before claiming sophistication in later sections. The follow-up date field yields a staleness indicator that auditors can compare against regulatory expectations (many frameworks recommend annual refresh). From a data-collection standpoint, the binary response collapses complex effort into an auditable metric suitable for executive dashboards. Users may hesitate if the BIA was performed by a third party and internal ownership is unclear; guidance text could clarify that any authorised entity counts. Nonetheless, the question’s centrality to audit integrity makes mandatory status defensible.
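A minimal sketch of the staleness indicator described above: it compares the reported "Date of last complete BIA" against an annual-refresh expectation. The 365-day threshold and field names are assumptions for illustration, not requirements imposed by the form.

```python
from datetime import date

def bia_staleness(last_bia: date, max_age_days: int = 365) -> dict:
    """Return the BIA age in days and whether it exceeds the refresh threshold."""
    age = (date.today() - last_bia).days
    return {"age_days": age, "stale": age > max_age_days}

# Example: a BIA last completed in mid-2023 would be flagged as stale today.
print(bia_staleness(date(2023, 6, 30)))
```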
Encryption and automation together mitigate two prevalent failure patterns: human forgetfulness and ransomware tampering. By mandating this question, the form sets a non-negotiable hygiene standard that underpins every subsequent recovery objective. The yes/no format yields a high-confidence data point for regulatory reporting (e.g., GDPR, HIPAA) while the absence of an encryption qualifier in the negative path implicitly flags a critical gap. Data quality is strong because database administrators possess accurate backup catalogues. Respondents may interpret “all” literally; organisations with legacy non-encrypted systems might answer “no,” prompting remedial action—a desirable outcome. UX impact is low because the question requires a single click, though smaller firms using managed services may need to verify with vendors first.
Mandatory Question Analysis for IT Business Continuity & Disaster Recovery Audit Form
Important Note: This analysis provides strategic insights to help you get the most from your form's submission data for powerful follow-up actions and better outcomes. Please remove this content before publishing the form to the public.
Organisation name
Justification: The organisation name is the primary identifier for every audit record, ensuring that submissions can be uniquely tracked, referenced in regulatory filings, and compared across fiscal periods. Without this datum, duplicate or conflicting entries could arise, undermining data integrity and invalidating executive reporting.
Brief description of core business activities
Justification: This narrative contextualises technology dependencies and regulatory obligations, enabling auditors to apply sector-specific benchmarks (e.g., PCI-DSS for payments, GxP for pharma). A mandatory response prevents generic or incomplete submissions that would otherwise necessitate time-consuming clarification calls, thereby accelerating audit scoping and maintaining project velocity.
Does your organisation have a documented Business Continuity Policy approved by executive leadership?
Justification: Executive-approved policy signifies governance commitment and allocates formal resources to continuity management. Mandating this question establishes a baseline maturity gate; organisations lacking policy are channelled toward remediation paths, ensuring that the audit does not falsely elevate preparedness scores based on undocumented practices.
Has a formal enterprise-wide Business Impact Analysis (BIA) been conducted?
Justification: The BIA quantifies criticality, dependencies, and acceptable downtime, forming the evidential foundation for all subsequent RTO/RPO declarations. A mandatory response guarantees that recovery objectives are evidence-based rather than anecdotal, supporting defensible regulatory assertions and preventing understated risk exposure.
Are automated, encrypted backups performed for all production databases?
Justification: Encryption and automation collectively mitigate ransomware and human error—two dominant causes of data loss. By making this field mandatory, the audit enforces a non-negotiable control that underpins every recovery commitment, ensuring that gap analyses highlight encryption absence as a critical finding rather than an optional enhancement.
The form adopts a minimal-mandatory philosophy: only five of 70+ fields are required, striking an effective balance between data sufficiency and user completion rates. This approach respects busy practitioners’ time while safeguarding the audit’s analytical core. To further optimise, consider making industry sector mandatory if automated benchmarking against peer groups is desired, and introduce conditional mandation—e.g., if a respondent selects “No DR site,” force a narrative explanation to capture risk rationale without burdening those with mature sites.
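The conditional-mandation idea above could be expressed as a simple validation rule; the sketch below assumes hypothetical answer keys and rejects a submission that selects "No DR site" without an accompanying risk narrative.

```python
def validate_dr_site(answers: dict[str, str]) -> list[str]:
    """Require a risk narrative when the respondent reports having no DR site."""
    errors = []
    no_site = answers.get("dr_site_type") == "No DR site"
    narrative = answers.get("dr_site_risks", "").strip()
    if no_site and not narrative:
        errors.append("Please describe the risk rationale for operating without a DR site.")
    return errors

# Fails validation until the narrative field is completed.
print(validate_dr_site({"dr_site_type": "No DR site", "dr_site_risks": ""}))
```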
Additionally, apply visual cues (red asterisk) and inline help text clarifying why each mandatory field matters; transparency increases compliance and reduces abandonment. Finally, periodically review mandatory status as the organisation’s data maturity evolves: once a certain threshold of completeness is reached, some fields could be relaxed to optional to encourage deeper voluntary disclosure elsewhere.