Recap of 30-Minute Q’s (Session 1)
On August 13, we launched our new webinar series 30-Minute Q’s: Critical Conversations for State & Local Government with a packed first session. Together with Microsoft, we set out to tackle some of the most pressing questions state and local government leaders face as AI adoption accelerates.
The response was clear: while agencies see AI’s potential to modernize service delivery and citizen engagement, most leaders admit they aren’t confident their current policies can support responsible, secure use. That’s exactly why this conversation matters and why governance must come first.
If you missed the session, you can catch the on-demand recording here.

Key insights from Session 1
1. Governance first, tools second
Governance isn’t something you buy. It’s about defining rules for data collection, ownership, retention, and AI use. Tools like Microsoft Purview enforce those rules once established, helping classify, label, and protect sensitive information.
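To make the "rules first, enforcement second" idea concrete, here is a minimal sketch in Python. The label names, retention periods, and access rules are hypothetical examples, not Purview APIs; in practice, Purview sensitivity labels and policies would enforce rules your governance body defines like these.

```python
# Illustrative only: a simplified model of "governance first, tools second".
# Label names and rules are hypothetical, not Microsoft Purview APIs.

from dataclasses import dataclass

# Governance decisions made by people: which labels exist, how long data is
# retained, and whether AI workloads may read data carrying each label.
GOVERNANCE_RULES = {
    "Public":       {"retention_years": 1, "ai_access": True},
    "Internal":     {"retention_years": 3, "ai_access": True},
    "Confidential": {"retention_years": 7, "ai_access": False},  # e.g. benefits, HR, health data
}

@dataclass
class Document:
    name: str
    label: str  # assigned during classification and labeling

def ai_may_read(doc: Document) -> bool:
    """Enforce the human-defined rule: AI only touches approved labels."""
    rule = GOVERNANCE_RULES.get(doc.label)
    return bool(rule and rule["ai_access"])

print(ai_may_read(Document("fee_schedule.pdf", "Public")))            # True
print(ai_may_read(Document("snap_case_notes.docx", "Confidential")))  # False
```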
2. Don’t wait – start securely
You don’t need every policy perfected before piloting AI. Tim Cone advised a “secure as you go” approach:
- Test Copilot in a bounded environment with labeled data
- Use Purview and Data Security Posture Management (DSPM) controls to safeguard sensitive information
- Pair technology with adoption planning, staff training, and clear acceptable-use guidelines
3. Why governance is critical for state and local governments
AI makes hidden data discoverable. That raises the stakes for privacy, retention, and transparency – especially for agencies managing benefits, HR, and health data. Governance is the guardrail that protects citizens and maintains trust.
4. Preparing for AI agents
Next-generation AI agents will act on behalf of staff. These “non-human identities” require the same rigor as privileged accounts: approvals, time-boxed access, monitoring, and limits. Microsoft Entra ID already provides a foundation for these controls.
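As a rough illustration of what that rigor looks like, the sketch below models an agent identity with an approver of record, explicit scopes, an expiry, and an audit trail. The class and field names are hypothetical and are not Entra ID APIs; in a real deployment, Microsoft Entra ID would manage these identities and their access policies.

```python
# Illustrative sketch of treating an AI agent like a privileged account:
# approval, time-boxed access, explicit limits, and monitoring.
# Names are hypothetical; this is not an Entra ID API.

from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class AgentIdentity:
    name: str
    approved_by: str        # human approver of record
    scopes: set[str]        # explicit limits on what the agent may do
    expires_at: datetime    # time-boxed access
    audit_log: list[str] = field(default_factory=list)

    def authorize(self, action: str) -> bool:
        now = datetime.now(timezone.utc)
        allowed = now < self.expires_at and action in self.scopes
        # Monitoring: every attempt is recorded, allowed or not.
        self.audit_log.append(f"{now.isoformat()} {self.name} {action} -> {allowed}")
        return allowed

agent = AgentIdentity(
    name="benefits-intake-agent",
    approved_by="records.officer@example.gov",
    scopes={"read:applicant_checklist"},
    expires_at=datetime.now(timezone.utc) + timedelta(hours=8),
)
print(agent.authorize("read:applicant_checklist"))  # True while unexpired
print(agent.authorize("update:case_decision"))      # False: outside its limits
```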
5. Keep humans in the loop
AI can streamline resident services, such as helping applicants gather required documents, while staff retain final decision-making authority. This model improves efficiency and citizen experience without compromising fairness or compliance.
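A minimal sketch of that division of labor, using the document-gathering example: the AI step only drafts a checklist of missing items, and a named staff member issues the final decision. The function and field names are hypothetical and not tied to any specific product.

```python
# Illustrative human-in-the-loop flow: the AI assistant drafts a document
# checklist for an applicant; only a staff member can issue the decision.

from dataclasses import dataclass

REQUIRED_DOCS = {"proof_of_identity", "proof_of_residency", "income_statement"}

@dataclass
class Application:
    applicant: str
    submitted_docs: set

def ai_draft_checklist(app: Application) -> dict:
    """AI step: flag missing documents, never a final determination."""
    return {
        "applicant": app.applicant,
        "missing": sorted(REQUIRED_DOCS - app.submitted_docs),
        "status": "pending_staff_review",
    }

def staff_decision(draft: dict, approve: bool, reviewer: str) -> dict:
    """Human step: a named staff member retains final authority."""
    draft["status"] = "approved" if approve else "needs_follow_up"
    draft["decided_by"] = reviewer
    return draft

draft = ai_draft_checklist(Application("J. Rivera", {"proof_of_identity"}))
print(staff_decision(draft, approve=False, reviewer="case.worker@example.gov"))
```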
What SLG leaders should do now
- Form an AI steering committee (IT, security, records/legal, program owners)
- Map critical data and define what AI can access
- Classify and label sensitive data in Purview
- Pilot with guardrails and track outcomes
- Train staff and set clear acceptable-use guidelines
- Prepare for AI agents with identity and access controls
What’s next in the series?
Session 2 – Data Readiness for Copilot
September 16, 2025
We’ll dive into preparing SLG data for Copilot, from classification and labeling to governance practices that ensure responsible use. Register here.