Connecticut AG Puts Businesses on Notice: Old Laws Still Apply to AI
Case Overview
Connecticut Attorney General William Tong has issued a formal advisory warning that all existing state laws—including consumer protection, data privacy, anti-discrimination, and healthcare regulations—apply to artificial intelligence systems. This marks a significant enforcement signal: regulators will not wait for AI-specific legislation to hold organizations accountable for algorithmic harms. For healthcare practices deploying AI tools for scheduling, clinical decision support, patient communications, or billing, this means HIPAA compliance, anti-discrimination laws, and state privacy statutes all govern AI usage today. Practices cannot claim regulatory uncertainty as a defense if an AI system causes a breach, discrimination, or privacy violation.
Key Claims
- No AI exemption: Existing consumer protection, data privacy, and healthcare compliance frameworks apply immediately to AI deployments
- Enforcement readiness: The AG's office will pursue violations even without AI-specific statutes
- Liability attribution: Organizations cannot deflect responsibility by claiming the AI "made the decision"
- Documentation requirement: Businesses must be able to explain how AI systems make decisions and what data they process
- Vendor accountability: Practices remain liable for third-party AI tools they deploy
Legal Implications
This advisory establishes a critical precedent: regulatory agencies will interpret existing law to cover AI, rather than waiting for new legislation. For HIPAA-covered entities, this means several immediate consequences. First, any AI tool processing protected health information (PHI) requires a Business Associate Agreement and must meet encryption, access control, and audit logging requirements. Second, AI-driven decisions—such as automated appointment confirmations, billing predictions, or treatment recommendations—create audit trails that must be preserved and could be subject to enforcement review. Third, if an AI system causes a breach (through improper data access, inadequate security, or flawed logic), OCR can cite existing HIPAA Security Rule violations without needing new AI-specific regulations.
The advisory also signals heightened scrutiny of vendor relationships. Many practices adopt AI-powered practice management, telehealth, or billing tools without conducting vendor risk assessments or verifying security controls. Connecticut's stance means practices cannot claim ignorance of a vendor's AI capabilities as a defense—the practice is liable for any HIPAA violation the AI creates.
What This Means for Your Practice
If your practice uses any AI-enabled tool—chatbots, automated reminders, predictive analytics, voice transcription, or clinical decision support—you must:
- Inventory all AI systems currently in use and confirm each has a valid BAA
- Verify security controls: confirm the vendor encrypts PHI, logs access, and meets HIPAA technical safeguards
- Document AI decision logic: understand what data the system processes and how it makes decisions
- Audit AI outputs: regularly review AI-generated communications and decisions for accuracy and compliance
- Train staff: ensure your team knows when they're using AI and what compliance requirements apply
Practices that assume "we're too small for AI" are at risk—many SaaS tools embed AI features without explicit disclosure.
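The inventory-and-verify steps above can be sketched as a simple audit script. This is a minimal illustration, not part of the advisory: the tool names, fields, and safeguard checks are hypothetical examples of how a practice might track BAA status and HIPAA technical safeguards per AI tool.

```python
from dataclasses import dataclass

@dataclass
class AITool:
    """One entry in the practice's AI system inventory (fields are illustrative)."""
    name: str
    processes_phi: bool   # does the tool touch protected health information?
    has_baa: bool         # is a Business Associate Agreement in place?
    encrypts_phi: bool    # HIPAA technical safeguard: encryption
    logs_access: bool     # HIPAA technical safeguard: audit logging

def compliance_gaps(tools):
    """Return (tool name, missing safeguards) for PHI-processing tools with gaps."""
    gaps = []
    for t in tools:
        missing = []
        if t.processes_phi:
            if not t.has_baa:
                missing.append("BAA")
            if not t.encrypts_phi:
                missing.append("encryption")
            if not t.logs_access:
                missing.append("access logging")
        if missing:
            gaps.append((t.name, missing))
    return gaps

# Hypothetical inventory for illustration only
inventory = [
    AITool("appointment-chatbot", True, True, True, True),
    AITool("voice-transcription", True, False, True, False),
]
print(compliance_gaps(inventory))
```

A spreadsheet serves the same purpose; the point is that each AI tool gets a row, each safeguard gets a column, and any PHI-processing tool with an empty cell is flagged before deployment rather than after an incident.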
How Patient Protect Helps
Patient Protect's Vendor Risk Scanner automatically tracks BAAs and assesses third-party security controls, flagging AI-enabled vendors that lack proper agreements. The Autonomous Compliance Engine generates real-time tasks when new tools are added, ensuring you document AI systems and verify HIPAA compliance before deployment. Policy Generation creates customizable policies covering AI usage, data processing, and vendor management—critical for demonstrating due diligence if an AI tool causes an incident. Security Alerts monitor for unusual access patterns that could indicate an AI system is processing PHI improperly. 80+ Training Modules include vendor management and technology risk content to educate staff on AI compliance obligations.
At $39-$99/month with no contracts, Patient Protect delivers enterprise-grade compliance automation designed for independent practices—competitors charge $259-$2,000/month for static documentation. Start a free trial at hipaa-port.com or check your risk at patient-protect.com/risk-assessment.
AI-generated analysis · Verify with original source
