Our Approach to AI Compliance and Governance
We begin every engagement with an AI inventory audit. Most organizations that believe they are deploying AI for the first time are surprised to discover how many AI systems they already use, often embedded in software platforms, HR systems, marketing tools, and customer service products purchased off the shelf. Each of these systems carries compliance obligations that the organization may not have formally assessed. The inventory gives us a complete picture before we build governance structures.
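As a rough illustration of what the inventory captures, a single entry can be represented as a small structured record. The sketch below is a minimal example in Python; the field names and the sample system are assumptions made for illustration, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in the AI inventory (illustrative fields, not a prescribed schema)."""
    name: str                        # e.g. "resume screening module"
    vendor: str                      # supplier, or "internal" for in-house tools
    embedded_in: str                 # the host platform the AI ships inside (HR system, CRM, ...)
    data_processed: list[str] = field(default_factory=list)  # categories of data the system touches
    affects_people: bool = False     # does it make or influence decisions about individuals?
    formally_assessed: bool = False  # has a compliance assessment been completed?

# A purchased customer-service chatbot is a typical entry an organization has not yet assessed
example = AISystemRecord(
    name="support chat assistant",
    vendor="example-vendor",
    embedded_in="customer service platform",
    data_processed=["customer contact details", "support transcripts"],
    affects_people=False,
)
```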
From the inventory, we conduct a risk assessment that prioritizes compliance work by exposure. AI systems that make or influence decisions affecting people, including hiring, lending, healthcare, and service eligibility, carry higher compliance requirements than AI systems used for internal productivity or content generation. Evanston organizations often have both categories. A university research office might use AI for grant writing assistance and also for research participant screening. The governance requirements for each are very different.
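To make the exposure-based prioritization concrete, here is a minimal sketch of a tiering rule in Python. The tier names and criteria are illustrative assumptions, not a regulatory taxonomy; they simply separate systems that influence decisions about people from those that do not.

```python
def risk_tier(affects_people: bool, data_categories: list[str]) -> str:
    """Return an illustrative review priority based on exposure (assumed criteria)."""
    if affects_people:
        # Hiring, lending, healthcare, and eligibility decisions draw the highest scrutiny
        return "high: makes or influences consequential decisions about individuals"
    if any("health" in c or "financial" in c for c in data_categories):
        return "elevated: touches sensitive data but makes no direct decisions"
    return "standard: internal productivity or content generation"

# The two university examples land in different tiers
print(risk_tier(False, ["draft grant text"]))                      # standard
print(risk_tier(True, ["research participant health records"]))   # high
```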
We then build three layers of governance infrastructure:
Policy and documentation. Written policies governing AI system selection, deployment, and retirement. Acceptable use guidelines for employees using AI tools in their work. Data governance standards specifying what organizational data can be used to train or feed AI systems. Disclosure protocols for when and how the organization communicates AI use to customers, clients, patients, or the public. Incident response procedures for when AI systems produce harmful, inaccurate, or unexpected outputs.
Oversight mechanisms. Clear responsibility assignments for AI governance, typically including a designated AI governance lead with authority to review and approve AI deployments. A defined process for reviewing new AI system adoptions, including vendor AI systems embedded in purchased software. A regular review cycle for deployed AI systems to assess ongoing performance, fairness, and compliance. An escalation path for employees who observe AI system behavior that raises concerns.
Technical controls. Audit logging configurations that capture AI system inputs and outputs for record-keeping and review (a minimal logging sketch follows this list). Access controls governing who can modify AI system configurations. Testing protocols for AI systems before deployment and after significant changes. Documentation standards for AI model versions, training data, and performance benchmarks.
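To make the audit-logging control concrete, the sketch below shows one way such a log could be kept: a minimal append-only JSON-lines record of each AI interaction, written in Python. The function name, fields, and file format are assumptions for illustration; what is retained, and how sensitive content is redacted, would follow the data governance standards described in the policy layer.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def log_ai_interaction(system: str, user: str, prompt: str, output: str,
                       log_file: str = "ai_audit_log.jsonl") -> None:
    """Append one AI interaction to a JSON-lines audit log (illustrative fields and format)."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,   # which AI system produced the output
        "user": user,       # who invoked it
        "prompt": prompt,   # input as submitted; redaction of sensitive content would follow policy
        "output": output,   # output as returned, so reviewers can reconstruct the exchange
    }
    with Path(log_file).open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Example: record a single exchange with a hypothetical internal drafting assistant
log_ai_interaction(
    system="drafting-assistant",
    user="jsmith",
    prompt="Summarize the updated leave policy for the staff newsletter.",
    output="Here is a two-paragraph summary...",
)
```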
What Evanston Organizations Face in Practice
Northwestern University-affiliated organizations occupy a distinctive position in AI governance. The university's research environment holds high standards for research ethics, institutional review, and data protection, but those standards do not automatically transfer to the commercial or operational AI systems that affiliated organizations run. A faculty startup spinning out of Northwestern's tech transfer office is a commercial enterprise subject to commercial AI governance requirements, not university IRB protocols. We help these organizations build the commercial governance structures they need without assuming the university framework covers them.
Healthcare-adjacent organizations in Evanston, from medical practices serving the Northwestern community to wellness and behavioral health providers along Ridge Avenue, face HIPAA compliance obligations for any AI system that processes protected health information. This includes AI tools embedded in practice management software, AI-assisted clinical documentation systems, and AI tools used for patient communications. Many organizations are using these systems without having formally assessed their HIPAA compliance posture. We build that assessment and the remediation plan.
Evanston's financial advisory and wealth management community, serving the North Shore's substantial base of affluent residents, uses AI for client analytics, portfolio monitoring, and communication personalization. SEC guidance and Illinois state regulations impose disclosure and fiduciary requirements on automated advice. We build compliance frameworks that satisfy those requirements while still letting advisors use the AI tools available to them.
