Sundance Networks
811 Ann St. • Stroudsburg, PA 18360 • 570-476-1320 • support@sundancenetworks.com

AI Readiness Assessment

Assess a client’s organizational readiness to adopt AI (strategy, data, tech, people, risk, and value).

Client & Engagement Info

Basic context for scoring and follow-ups.

Readiness Summary

Scores auto-update as you fill the form.
Overall Readiness: 0.0 / 5 (not scored yet)
Strategy & Leadership: 0.0 avg per question
Data Readiness: 0.0 avg per question
Tech & Architecture: 0.0 avg per question
People & Skills: 0.0 avg per question
Process & Ops: 0.0 avg per question
Security, Risk & Governance: 0.0 avg per question
Change & Adoption: 0.0 avg per question
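
For transparency with clients, the summary math is easy to reproduce. Below is a minimal sketch in Python, assuming each question is scored 0-5, each section score is the average per question, and overall readiness is the unweighted mean of the seven section averages; the sample scores are illustrative only.

```python
from statistics import mean

# Raw 0-5 scores per question, keyed by section (illustrative values).
scores = {
    "Strategy & Leadership":       [4, 3, 2, 3, 2],
    "Data Readiness":              [3, 2, 2, 1, 3],
    "Tech & Architecture":         [3, 3, 2, 2, 3],
    "People & Skills":             [2, 3, 2, 3, 1],
    "Process & Ops":               [3, 2, 2, 2, 2],
    "Security, Risk & Governance": [4, 3, 3, 2, 2],
    "Change & Adoption":           [2, 2, 3, 2, 2],
}

# Per-section average per question, as shown in the summary.
section_avgs = {name: mean(qs) for name, qs in scores.items()}

# Overall readiness: unweighted mean of section averages, same 0-5 scale.
overall = mean(section_avgs.values())

for name, avg in section_avgs.items():
    print(f"{name}: {avg:.1f}")
print(f"Overall Readiness: {overall:.1f} / 5")
```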

1) Strategy & Leadership

Avg score: 0.0
AI vision is defined and aligned to business objectives.
Clear priorities (revenue, efficiency, risk reduction, customer experience).
Executive sponsorship and funding are committed.
Named sponsor, budget line, and decision authority.
AI success metrics/KPIs are defined.
E.g., time saved, conversion lift, error reduction, SLA improvement.
Portfolio of AI use cases is identified and prioritized.
Ranking by value, feasibility, risk, and time-to-impact (a scoring sketch follows this list).
AI operating model is defined.
Who owns models, data, approvals, and ongoing support.
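
To make the prioritization concrete, a weighted score per use case works well. The sketch below is an illustration only: the weights, the 1-5 ratings, and the use-case names are placeholders to be tuned with the client (note that risk and time-to-impact count against a candidate).

```python
# Weighted prioritization: higher value/feasibility is better,
# higher risk/time-to-impact is worse. Weights are illustrative.
WEIGHTS = {"value": 0.4, "feasibility": 0.3, "risk": -0.2, "time_to_impact": -0.1}

use_cases = [
    {"name": "Ticket triage assistant", "value": 4, "feasibility": 5, "risk": 2, "time_to_impact": 2},
    {"name": "Invoice data extraction", "value": 5, "feasibility": 3, "risk": 3, "time_to_impact": 3},
    {"name": "Churn prediction",        "value": 4, "feasibility": 2, "risk": 3, "time_to_impact": 4},
]

def priority(uc: dict) -> float:
    """Weighted sum of 1-5 ratings; risk and time-to-impact subtract."""
    return sum(WEIGHTS[k] * uc[k] for k in WEIGHTS)

for uc in sorted(use_cases, key=priority, reverse=True):
    print(f"{uc['name']}: {priority(uc):.2f}")
```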

2) Data Readiness

Avg score: 0.0
Critical data sources are known and accessible.
Ownership, location, and access paths are documented.
Data quality is sufficient for AI use.
Completeness, accuracy, timeliness, and missing fields are understood (a profiling sketch follows this list).
Data is labeled/structured enough for targeted models.
Schemas or taxonomies exist; unstructured data is manageable.
Data governance is in place.
Policies for classification, lineage, retention, and usage approval.
Privacy/legal status of data is understood.
PII/PHI/IP constraints documented; consent and contracts considered.
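
A quick profiling pass puts numbers on "sufficient for AI use." This sketch checks field completeness and record staleness; the field names, sample records, and the 30-day freshness threshold are all assumptions for illustration.

```python
from datetime import datetime, timedelta, timezone

# Toy records standing in for a real extract; field names are hypothetical.
records = [
    {"customer_id": "C1", "email": "a@x.com", "updated_at": datetime(2025, 6, 1, tzinfo=timezone.utc)},
    {"customer_id": "C2", "email": None,      "updated_at": datetime(2023, 1, 5, tzinfo=timezone.utc)},
    {"customer_id": "C3", "email": "c@x.com", "updated_at": None},
]

REQUIRED = ["customer_id", "email", "updated_at"]
FRESH_WITHIN = timedelta(days=30)  # assumed freshness SLA
now = datetime.now(timezone.utc)

# Completeness: share of rows where each required field is populated.
for field in REQUIRED:
    filled = sum(1 for r in records if r.get(field) is not None)
    print(f"{field}: {filled / len(records):.0%} complete")

# Timeliness: rows older than the assumed freshness window.
stale = sum(1 for r in records if r["updated_at"] and now - r["updated_at"] > FRESH_WITHIN)
print(f"stale rows (> {FRESH_WITHIN.days} days): {stale} of {len(records)}")
```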

3) Technology & Architecture

Avg score: 0.0
Current systems can integrate AI outputs.
APIs, workflows, automation tools, or UI touchpoints exist.
Infrastructure can support AI workloads.
Cloud/on-prem capacity, scaling, GPU/compute availability.
Tools for model deployment/monitoring exist or are planned.
MLOps, logging, drift detection, rollback, A/B testing (a drift-check sketch follows this list).
Data pipelines are reliable and automated.
ETL/ELT, streaming/batch, versioning, SLAs.
Vendor/stack decisions are rationalized.
Build vs. buy, LLM provider plan, cost controls, SLAs.
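
Drift detection need not wait for a full MLOps stack; a population-stability check on a single feature catches gross shifts. The sketch below uses a PSI-style calculation on synthetic data; the 10-bin setup and the 0.2 alert threshold are common rules of thumb, not requirements.

```python
import math
import random

def psi(reference: list[float], current: list[float], bins: int = 10) -> float:
    """Population Stability Index between two samples, using
    bin edges derived from the reference distribution."""
    lo, hi = min(reference), max(reference)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def proportions(sample: list[float]) -> list[float]:
        counts = [0] * bins
        for x in sample:
            idx = sum(x > e for e in edges)  # which bin x falls in
            counts[idx] += 1
        # Small floor avoids log(0) on empty bins.
        return [max(c / len(sample), 1e-4) for c in counts]

    ref_p, cur_p = proportions(reference), proportions(current)
    return sum((c - r) * math.log(c / r) for r, c in zip(ref_p, cur_p))

random.seed(0)
reference = [random.gauss(0.0, 1.0) for _ in range(5000)]  # training-time feature values
current = [random.gauss(0.5, 1.2) for _ in range(5000)]    # shifted live traffic

score = psi(reference, current)
print(f"PSI = {score:.3f}", "-> investigate drift" if score > 0.2 else "-> stable")
```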

4) People, Skills & Culture

Avg score: 0.0
AI-relevant roles exist or can be staffed.
Product owner, data/ML engineers, analysts, security, legal.
Staff has baseline AI literacy.
Understanding of what AI can/can’t do; prompt habits; risks.
Org culture supports experimentation.
Tolerance for pilots, learning, measured failure.
Cross-functional collaboration is strong.
Data/IT/business/security work together without friction.
Training/upskilling plan is in place.
Formal path for users, admins, and advanced practitioners.

5) Process & Operations

Avg score: 0.0
Processes are documented and measurable.
Inputs/outputs are clear enough to automate or augment.
A pilot/POC method is understood.
Test-learn-scale, success gates, timelines, ownership.
Operational impact is planned.
Support, monitoring, escalation, and workload shifts.
AI is integrated into workflows (not bolted on).
Clear handoffs between humans and systems.
ROI tracking is planned post-launch.
"Before/after" measurement and continuous iteration (a worked example follows this list).
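
For the before/after measurement, even a back-of-the-envelope model forces the right questions. Every number in this sketch is a placeholder to be replaced with the client's measured baseline.

```python
# Before/after ROI sketch for a task the AI assists with.
# All figures are placeholders, not benchmarks.
tickets_per_month = 1200
minutes_before = 18.0        # avg handle time pre-AI
minutes_after = 11.0         # avg handle time with AI assist
loaded_cost_per_hour = 55.0  # fully loaded labor cost
ai_cost_per_month = 1500.0   # licenses + usage + support

hours_saved = tickets_per_month * (minutes_before - minutes_after) / 60
gross_savings = hours_saved * loaded_cost_per_hour
net_savings = gross_savings - ai_cost_per_month

print(f"Hours saved/month: {hours_saved:.0f}")
print(f"Net savings/month: ${net_savings:,.0f}")
print(f"ROI multiple: {gross_savings / ai_cost_per_month:.1f}x")
```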

6) Security, Risk & Governance

Avg score: 0.0
Security review path exists for AI tools.
Vendor due diligence, SOC 2/ISO certifications, penetration testing, access rules.
Policies for safe AI use are defined.
Prompting rules, prohibited data, logging, human review (a screening sketch follows this list).
Regulatory/compliance requirements are addressed.
HIPAA, GDPR, FERPA, PCI DSS, industry-specific rules.
Model risk is understood and mitigated.
Bias, hallucination, drift, explainability, audit trails.
Incident response for AI is planned.
What happens if output is wrong/harmful or data leaks.
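
As a floor (not a substitute for real DLP tooling), prompts can be screened for obviously prohibited data before they leave the organization. The two patterns below (US SSNs and 16-digit card-like numbers) are illustrative and will miss plenty.

```python
import re

# Minimal pre-send screen for obviously prohibited data.
# Patterns are intentionally narrow; a real deployment needs proper DLP.
PROHIBITED = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of prohibited-data patterns found in a prompt."""
    return [name for name, pattern in PROHIBITED.items() if pattern.search(prompt)]

hits = screen_prompt("Customer 123-45-6789 asked about card 4111 1111 1111 1111.")
if hits:
    print(f"Blocked before sending; flagged: {', '.join(hits)}")  # log + human review
else:
    print("Prompt passed the basic screen.")
```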

7) Change Management & Adoption

Avg score: 0.0
Stakeholders are engaged early.
Users, managers, IT, legal/security involved in planning.
Communication plan exists for rollout.
What changes, who benefits, and how to get help.
User enablement & support are ready.
Training, documentation, champions, help desk path.
Adoption is measured and iterated.
Usage analytics and feedback loops (a metrics sketch follows this list).
Workforce impact is considered.
Role changes, reskilling, policy on AI assistance.
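
Usage analytics can start as two numbers: the share of eligible users who are active, and how often outputs are accepted. The event log and field names in this sketch are hypothetical.

```python
# Hypothetical usage events exported from the AI tool's logs.
events = [
    {"user": "ana",  "action": "prompt", "accepted": True},
    {"user": "ana",  "action": "prompt", "accepted": False},
    {"user": "ben",  "action": "prompt", "accepted": True},
    {"user": "cara", "action": "prompt", "accepted": True},
]
eligible_users = 10  # headcount licensed/trained on the tool

active = len({e["user"] for e in events})        # distinct users with any activity
accepted = sum(e["accepted"] for e in events)    # outputs kept rather than discarded

print(f"Active users: {active}/{eligible_users} ({active / eligible_users:.0%})")
print(f"Acceptance rate: {accepted}/{len(events)} ({accepted / len(events):.0%})")
```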

8) Use-Case Opportunity & Priorities

Pick likely wins and assess feasibility.

9) Overall Assessment Notes & Recommendations

Your synthesis and next steps.