New York Enacts Law Requiring Safety Protocols for AI Companies: A Key Step in AI Chatbot Regulation

In a groundbreaking move, New York has passed a new law designed to protect users interacting with AI companions. The law’s primary goal is to prevent psychological, physical, and financial harm during AI chatbot interactions by mandating specific safety protocols for high-risk situations.

Effective November 5, 2025, the law includes several critical provisions. The most significant requires AI companies to implement protocols that address user expressions of suicidal ideation, self-harm, or harm to others. When such expressions are detected, these protocols must refer the user to appropriate crisis services, such as a suicide prevention hotline.

Another key provision is a mandatory notification that must be displayed at least every three hours during an ongoing AI interaction. This notification must remind users that the AI is not a human being and cannot feel emotions, preventing misunderstandings and setting clear expectations for the interaction.

While narrow in scope, the law represents a crucial step toward regulating AI chatbots, a field that has drawn significant concern, particularly after reports of suicides linked to chatbot interactions. The legislation focuses on protecting vulnerable individuals and offers a concrete response to serious, documented risks.

The law also includes an enforcement provision: individuals who suffer physical or financial harm because of a violation of the law's safety protocols may bring a legal action seeking damages and other remedies.

Although the law doesn’t address all aspects of safety in AI interactions, such as protecting sensitive data or preventing more subtle psychological harms, it sets an important precedent for other states and countries to follow. By clearly defining the necessary safety measures, the law not only enhances user protection but also encourages AI companies to adopt better practices globally.

With this move, New York takes a leading role in AI regulation, setting a necessary standard to ensure that technology is both innovative and safe for everyone.

Take the first step

What is the first step?

Talk to an expert with proven experience who can help you identify your company’s data privacy needs.

Why take the first step?

Taking the first step matters. From the outset, an expert can help you identify which data privacy project best fits your company's needs and which methodology to apply, reducing the risk of wasted time and money.

Copyright © 2026 ETHOSFY – All rights reserved.