Healthcare AI Firm Sued Over Alleged Unlawful Disclosures of Genetic Data
Case Overview
A healthcare artificial intelligence company faces multiple class action lawsuits alleging unauthorized collection and disclosure of genetic testing results used to train AI models. The company reportedly accessed genetic data from a third-party testing provider's database without proper authorization from the individuals whose information was used. This case highlights the intersection of emerging AI technology and traditional HIPAA privacy protections, particularly around secondary use of protected health information (PHI) for purposes beyond the original collection intent.
Key Claims
The lawsuits center on allegations that sensitive genetic testing results were collected and used for AI model training without patient authorization. The claims raise critical questions about:
- Permissible Use and Disclosure – Whether genetic data accessed through a third-party database constitutes a HIPAA-compliant disclosure when used for AI training purposes
- Business Associate Relationships – The adequacy of Business Associate Agreements (BAAs) when one entity provides data to another for secondary purposes like machine learning
- Minimum Necessary Standard – Whether bulk genetic data access for AI training violates HIPAA's minimum necessary principle
- Patient Authorization – The requirement for explicit consent when PHI is used for purposes substantially different from treatment, payment, or healthcare operations
Genetic information is among the most sensitive categories of PHI, carrying implications not only for the individual but also for their biological relatives.
Legal Implications
This litigation pattern signals increasing regulatory and legal scrutiny of AI companies operating in healthcare data spaces. Even when data is obtained through business relationships with HIPAA-covered entities, the receiving organization must ensure compliance with privacy rules. Key exposures include:
Civil Liability: Class action damages for privacy violations can reach millions, particularly when sensitive genetic data affects entire patient populations.
HIPAA Penalties: OCR may investigate whether the data source (the genetic testing company) improperly disclosed PHI, and whether the AI company operated as a business associate without adequate safeguards. Penalties range from $100 to $50,000 per violation.
Regulatory Precedent: The resolution of this case may set precedent for AI training data sourcing across healthcare, shaping how practices share data with technology vendors.
What This Means for Your Practice
The case underscores that every vendor relationship involving PHI requires scrutiny, especially as AI tools proliferate in healthcare:
- Review AI Vendor Contracts: Any practice using AI-powered tools (diagnostic support, patient communication, scheduling optimization) must verify the vendor's data handling practices and ensure a compliant BAA is in place
- Understand Data Flow: When vendors say they "train models on healthcare data," ask specifically whether your patient data will be used, how it's de-identified, and whether patients must authorize the use
- Genetic Data Warrants Extra Care: Labs, pharmacogenomics services, and specialized testing create particularly sensitive PHI that requires explicit authorization for any non-treatment use
- Secondary Use Policies: Document your practice's policies on data sharing with third parties, particularly for research, quality improvement, or technology development purposes
How Patient Protect Helps
Patient Protect's Vendor Risk Scanner specifically addresses the growing complexity of third-party vendor compliance. The platform tracks BAA status, vendor security posture, and data access scope across all business associates—critical as AI vendors proliferate.
The Autonomous Compliance Engine automatically flags when vendor relationships change or new data-sharing arrangements are proposed, generating required review tasks and policy updates in real time. When a new AI tool is adopted, the system prompts for BAA verification, authorization requirements, and minimum necessary analysis.
Policy Generation creates customizable, HIPAA-compliant data sharing and authorization policies that address both traditional disclosures and emerging use cases like AI training. The 80+ Training Modules include specific content on vendor management, minimum necessary standards, and patient authorization requirements—keeping your entire team current on these evolving obligations.
Built with Zero Trust Architecture and AES-256-GCM encryption, Patient Protect models the security-first approach that should govern all vendor relationships. Start a free trial at hipaa-port.com or check your risk at patient-protect.com/risk-assessment.
This editorial was generated by AI from publicly available source material and is clearly labeled as such. It does not constitute legal, compliance, or professional advice. Inclusion of any entity does not imply wrongdoing. Patient Protect makes no warranties regarding accuracy or completeness. Verify all information with the original source before relying on it.

