Sundance Networks
811 Ann St. • Stroudsburg, PA 18360 • 570-476-1320 • support@sundancenetworks.com
AI Readiness Assessment
Assess a client’s organizational readiness to adopt AI (strategy, data, tech, people, risk, and value).
Client & Engagement Info
Basic context for scoring and follow-ups.
Client Organization: ______________________
Primary Contact: ______________________
Email: ______________________
Assessment Date: ______________________
Industry / Sector: ______________________
Business Model / Key Services: ______________________
Assessment Scope & Goals: ______________________
Readiness Summary
Scores auto-update as you fill the form.
Overall Readiness: 0.0 / 5 (not scored yet)
Strategy & Leadership: 0.0 (avg per question)
Data Readiness: 0.0 (avg per question)
Tech & Architecture: 0.0 (avg per question)
People & Skills: 0.0 (avg per question)
Process & Ops: 0.0 (avg per question)
Security, Risk & Governance: 0.0 (avg per question)
Change & Adoption: 0.0 (avg per question)
Scale: 0 = Not in place, 1 = Ad-hoc, 2 = Basic, 3 = Defined, 4 = Strong, 5 = Best-in-class.
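The roll-up behind the summary scores is simple enough to reproduce outside the form. A minimal sketch in Python, assuming each dimension is the unweighted mean of its answered questions and Overall Readiness is the unweighted mean of the seven dimension averages (the form does not state its exact weighting, so treat the roll-up and the sample answers below as placeholders):

```python
# Score roll-up sketch. Assumptions: each dimension = mean of its answered
# questions; overall = unweighted mean of the seven dimension averages.
from statistics import mean

# Placeholder answers on the form's 0-5 scale, five questions per dimension.
answers = {
    "Strategy & Leadership":       [4, 3, 2, 3, 1],
    "Data Readiness":              [2, 2, 1, 1, 3],
    "Tech & Architecture":         [3, 2, 2, 3, 2],
    "People & Skills":             [3, 2, 3, 2, 1],
    "Process & Ops":               [2, 3, 2, 2, 1],
    "Security, Risk & Governance": [3, 2, 2, 1, 1],
    "Change & Adoption":           [2, 2, 1, 1, 2],
}

# Unanswered questions are omitted from a dimension's list rather than
# counted as zero (an assumption; the live form may handle blanks differently).
dimension_avgs = {name: mean(scores) for name, scores in answers.items() if scores}
overall = mean(dimension_avgs.values()) if dimension_avgs else 0.0

for name, avg in dimension_avgs.items():
    print(f"{name}: {avg:.1f}")
print(f"Overall Readiness: {overall:.1f} / 5")
```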
1) Strategy & Leadership (avg score: 0.0)

AI vision is defined and aligned to business objectives.
Clear priorities (revenue, efficiency, risk reduction, customer experience).
Score (0–5): ___

Executive sponsorship and funding are committed.
Named sponsor, budget line, and decision authority.
Score (0–5): ___

AI success metrics/KPIs are defined.
E.g., time saved, conversion lift, error reduction, SLA improvement.
Score (0–5): ___

Portfolio of AI use cases is identified and prioritized.
Ranking by value, feasibility, risk, and time-to-impact (a worked ranking sketch follows Section 8).
Score (0–5): ___

AI operating model is defined.
Who owns models, data, approvals, and ongoing support.
Score (0–5): ___
2) Data Readiness (avg score: 0.0)

Critical data sources are known and accessible.
Ownership, location, and access paths are documented.
Score (0–5): ___

Data quality is sufficient for AI use.
Completeness, accuracy, timeliness, missing fields understood (a profiling sketch follows this section).
Score (0–5): ___

Data is labeled/structured enough for targeted models.
Schemas or taxonomies exist; unstructured data is manageable.
Score (0–5): ___

Data governance is in place.
Policies for classification, lineage, retention, and usage approval.
Score (0–5): ___

Privacy/legal status of data is understood.
PII/PHI/IP constraints documented; consent and contracts considered.
Score (0–5): ___
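The data-quality item above ("completeness, accuracy, timeliness, missing fields understood") is easy to ground in numbers during the assessment. A minimal completeness check, assuming a CSV extract; the file name and required fields below are hypothetical, not taken from any client environment:

```python
# Completeness profiling sketch: fraction of non-empty values per critical
# field. File name and field list are hypothetical placeholders.
import csv

REQUIRED_FIELDS = ["customer_id", "email", "created_at"]  # assumed critical fields

def completeness(path: str) -> dict[str, float]:
    """Return the fraction of rows with a non-empty value for each field."""
    filled = {f: 0 for f in REQUIRED_FIELDS}
    total = 0
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            total += 1
            for f in REQUIRED_FIELDS:
                if (row.get(f) or "").strip():  # a missing column counts as empty
                    filled[f] += 1
    return {f: (filled[f] / total if total else 0.0) for f in REQUIRED_FIELDS}

# Example usage: flag anything under, say, 95% complete as a readiness gap.
# print({f: round(r, 3) for f, r in completeness("customers.csv").items()})
```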
3) Technology & Architecture (avg score: 0.0)

Current systems can integrate AI outputs.
APIs, workflows, automation tools, or UI touchpoints exist.
Score (0–5): ___

Infrastructure can support AI workloads.
Cloud/on-prem capacity, scaling, GPU/compute availability.
Score (0–5): ___

Tools for model deployment/monitoring exist or are planned.
MLOps, logging, drift detection, rollback, A/B testing (a drift-check sketch follows this section).
Score (0–5): ___

Data pipelines are reliable and automated.
ETL/ELT, streaming/batch, versioning, SLAs.
Score (0–5): ___

Vendor/stack decisions are rationalized.
Build vs. buy, LLM provider plan, cost controls, SLAs.
Score (0–5): ___
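"Drift detection" in the monitoring item above has a concrete, low-dependency starting point: compare the feature distribution a model was validated on against what it sees in production. A sketch of the Population Stability Index, one common drift signal (the 0.2 "meaningful drift" threshold is a widely used rule of thumb, not a standard):

```python
# Population Stability Index (PSI) sketch. Bins come from the baseline sample;
# empty bins are smoothed so the log term stays defined.
import math

def psi(baseline: list[float], current: list[float], bins: int = 10) -> float:
    lo, hi = min(baseline), max(baseline)

    def fractions(xs: list[float]) -> list[float]:
        counts = [0] * bins
        for x in xs:
            i = max(0, min(int((x - lo) / (hi - lo) * bins), bins - 1)) if hi > lo else 0
            counts[i] += 1
        return [(c if c else 0.5) / len(xs) for c in counts]  # smooth empty bins

    b, c = fractions(baseline), fractions(current)
    return sum((ci - bi) * math.log(ci / bi) for bi, ci in zip(b, c))

# Synthetic check: a shifted sample should score well above the identity case.
baseline = [i / 100 for i in range(100)]
shifted = [0.3 + i / 200 for i in range(100)]
print(f"PSI, no drift: {psi(baseline, baseline):.3f}")  # ~0.000
print(f"PSI, shifted:  {psi(baseline, shifted):.3f}")   # > 0.2 suggests drift
```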
4) People, Skills & Culture (avg score: 0.0)

AI-relevant roles exist or can be staffed.
Product owner, data/ML engineers, analysts, security, legal.
Score (0–5): ___

Staff has baseline AI literacy.
Understanding of what AI can’t and can do; prompt habits; risks.
Score (0–5): ___

Org culture supports experimentation.
Tolerance for pilots, learning, measured failure.
Score (0–5): ___

Cross-functional collaboration is strong.
Data/IT/business/security work together without friction.
Score (0–5): ___

Training/upskilling plan is in place.
Formal path for users, admins, and advanced practitioners.
Score (0–5): ___
5) Process & Operations (avg score: 0.0)

Processes are documented and measurable.
Inputs/outputs are clear enough to automate or augment.
Score (0–5): ___

A pilot/POC method is understood.
Test-learn-scale, success gates, timelines, ownership.
Score (0–5): ___

Operational impact is planned.
Support, monitoring, escalation, and workload shifts.
Score (0–5): ___

AI is integrated into workflows (not bolted on).
Clear handoffs between humans and systems.
Score (0–5): ___

ROI tracking is planned post-launch.
“Before/after” measurement and continuous iteration.
Score (0–5): ___
6) Security, Risk & Governance (avg score: 0.0)

Security review path exists for AI tools.
Vendor due diligence, SOC 2/ISO, pen-test, access rules.
Score (0–5): ___

Policies for safe AI use are defined.
Prompting rules, prohibited data, logging, human review.
Score (0–5): ___

Regulatory/compliance requirements are addressed.
HIPAA, GDPR, FERPA, PCI, industry-specific rules.
Score (0–5): ___

Model risk is understood and mitigated.
Bias, hallucination, drift, explainability, audit trails.
Score (0–5): ___

Incident response for AI is planned.
What happens if output is wrong/harmful or data leaks.
Score (0–5): ___
7) Change Management & Adoption (avg score: 0.0)

Stakeholders are engaged early.
Users, managers, IT, legal/security involved in planning.
Score (0–5): ___

Communication plan exists for rollout.
What changes, who benefits, and how to get help.
Score (0–5): ___

User enablement & support are ready.
Training, documentation, champions, help desk path.
Score (0–5): ___

Adoption is measured and iterated.
Usage analytics and feedback loops.
Score (0–5): ___

Workforce impact is considered.
Role changes, reskilling, policy on AI assistance.
Score (0–5): ___
8) Use-Case Opportunity & Priorities
Pick likely wins and assess feasibility.
High-Value Areas (check all that apply)
Customer support automation
Internal knowledge assistant
Document summarization
Marketing content generation
Sales enablement & proposals
Forecasting & planning
Quality / compliance review
Workflow automation
Personal productivity copilots
Fraud / anomaly detection
Recommendation / personalization
Other
Top 3 Use Cases
For each of the top three use cases, describe it and record:
Expected Value & Owner: ______________________
Feasibility / Constraints: ______________________
Quick-Win Candidate? Yes (can pilot in 2–6 weeks) / Maybe (needs prep work) / No (long-term/complex)
Why / Notes: ______________________
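The same "value, feasibility, risk, time-to-impact" criteria used to prioritize the portfolio in Section 1 can make the quick-win call repeatable: score each candidate on a shared scale and sort by a weighted total. A sketch with illustrative weights and scores (assumptions, not a prescribed method); risk and time-to-impact are oriented so that higher is better (5 = low risk, 5 = fast):

```python
# Weighted use-case ranking sketch. Weights and scores are illustrative
# assumptions; risk and time_to_impact are oriented so higher = better.
WEIGHTS = {"value": 0.4, "feasibility": 0.3, "risk": 0.15, "time_to_impact": 0.15}

candidates = [
    {"name": "Internal knowledge assistant", "value": 4, "feasibility": 4, "risk": 4, "time_to_impact": 4},
    {"name": "Customer support automation",  "value": 5, "feasibility": 3, "risk": 3, "time_to_impact": 3},
    {"name": "Forecasting & planning",       "value": 4, "feasibility": 2, "risk": 3, "time_to_impact": 2},
]

def weighted_score(candidate: dict) -> float:
    return sum(w * candidate[k] for k, w in WEIGHTS.items())

# Highest score first; a top scorer with high feasibility and speed is a
# natural "Yes (can pilot in 2-6 weeks)" candidate.
for c in sorted(candidates, key=weighted_score, reverse=True):
    print(f"{c['name']}: {weighted_score(c):.2f}")
```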
9) Overall Assessment Notes & Recommendations
Your synthesis and next steps.
Key Strengths
Key Gaps / Risks
Recommended Next Steps (30/60/90)