What New York actually decided about AI in legal practice

After two years of bar-association reports, an Office of Court Administration committee, a Senate bill that hasn’t moved, and a freshly codified rule in the Rules of the Chief Administrator, what has New York actually decided about generative AI in the practice of law? I’ve been watching the architecture take shape since the NYSBA Task Force published its first report in April 2024, and an answer has now come into view. It is surprisingly clean.

New York’s bar associations and court system declined to write a single new ethics rule. They reaffirmed the existing ones. They codified the choice in Part 161 of the Rules of the Chief Administrator, which takes effect on June 1. And Kleyman Law Group, P.C. v Kaloidis, decided in Brooklyn on April 13, shows what the choice looks like in operation.

The decision

What’s worth understanding first is that declining to write a new rule was itself a decision.

In April 2024, the New York State Bar Association’s Task Force on Artificial Intelligence published a roughly ninety-page Report and Recommendations. It is a careful document, surveying the technology, its risks, the regulatory landscape, foreign comparisons, and the state of practitioner education. Notably, it did not propose a single new Rule of Professional Conduct. Its conclusion was that the existing rules already cover generative AI’s place in a lawyer’s work: Rule 1.1 (competence), Rule 1.6 (confidentiality), and Rule 5.3 (supervision of nonlawyer assistants). What was missing, the report said, was education and voluntary guidelines, not a new rulebook.

The New York City Bar reached the same conclusion in Formal Opinion 2024-5 on generative AI in the practice of law. The American Bar Association did the same later that summer in Formal Opinion 512, extending the analysis to Rule 1.5 (fees) and Rule 1.4 (client communication) on top of competence, confidentiality, and supervision.

The state’s court system followed the same path. Chief Administrative Judge Joseph Zayas convened an Advisory Committee on AI in April 2024. By October 2025 the committee had produced an Interim Policy on the Use of Artificial Intelligence for UCS judges and staff, an internal-facing document accompanied by mandatory training. Then on March 25, 2026, the Office of Court Administration added Part 161 to the Rules of the Chief Administrator, this time governing attorneys and parties.

By the time the rule actually took shape, the appellate courts had already started running it. In Deutsche Bank Natl. Trust Co. v LeTennier, the Third Department sanctioned an attorney $7,500 in January for filing a brief that cited six fictitious cases. The First Department publicly censured an attorney by reciprocal discipline in February. Kleyman arrived in April.

What Part 161 says, and what it doesn’t

I’ll quote the operative language in full:

It is the policy of the Unified Court System that the use by attorneys and parties of artificial intelligence tools in preparing papers submitted to a court should not be prohibited, as long as such use is in accordance with the duties and responsibilities that apply to individuals who submit papers to a court. Since those duties and responsibilities already apply to all submissions, regardless of whether AI tools were used, attorneys and parties should not be required, upon submitting papers, to disclose to the court that they have used AI in the preparation of such papers.

— 22 NYCRR § 161.3

That paragraph is the architecture in compressed form. AI use is permitted. AI disclosure to the court is not required. Existing duties on the books, including the Rules of Professional Conduct, the 22 NYCRR 130-1.1 sanctions rule for frivolous filings, and the signature-as-certification regime, are the duties that apply to everyone.

Appendix A’s model rule goes one step further: a court that wants to write its own AI rule is encouraged to require certification that the lawyer reviewed the filing for fabricated content, with sanctions available if the certification turns out to be false. Parsed against 22 NYCRR 130-1.1, which has for decades made the signature on every paper a certification that the paper isn’t frivolous and isn’t materially false, that model rule merely restates existing obligations.

The path not taken

A pending bill in Albany, Senate Bill S2698, introduced January 22, 2025, would amend the CPLR to require any attorney who uses generative AI in drafting a brief to disclose that use upon filing and to certify that a human reviewed the document. So far, the bill has not moved, and Section 161.3 is in genuine tension with its premise: OCA looked at whether to require disclosure and answered no, while the legislature, asked the same question, has not yet answered. Individual judges, meanwhile, have waited for neither. A patchwork of standing orders requiring varying levels of disclosure has emerged in the trial courts, and § 161.4 expressly invites it. Some judges want paragraph-level disclosure of which portions of a brief were AI-drafted; others require only a one-line certification. A litigator now has to read Part 161 alongside the assigned judge’s standing rules. The statewide rule does not unify practice; it ratifies the patchwork, on purpose. But as courts develop divergent rules over time, some more centralized authority will likely need to impose a more precise framework for the use of AI.
