A US Securities and Exchange Commission committee has recommended a new rule that would require companies to analyze and report all AI efforts, including decisions not to use AI for some purposes.
Attorneys who have studied the proposal note that the AI rule, much like the SEC’s cybersecurity rule from about two years ago, would not technically require reporting anything that did not already require reporting. The new rule refers only to material AI efforts, and ever since the SEC’s creation some 90 years ago, anything material has required disclosure.
But they theorize that the SEC committee believes that many public-company boards and their senior executives don’t fully understand the scope and potential impact of their various AI efforts. The new rule would force those executives to create committees and to formally review all AI decisions, potentially unearthing material issues that would otherwise not have occurred to those executives.
Cybersecurity consultant Brian Levine, a former federal prosecutor who today serves as executive director of FormerGov, argues that this extra focus could make a significant difference for many enterprises.
“It will help focus people. It puts it in front of everyone who needs to be thinking about AI,” Levine said.
As for requiring companies to examine and disclose where they are not using AI, or where they might be underinvesting in it compared with rivals, Levine said that could help executives understand “that there is a risk that our implementation of AI may not keep up with stakeholders and competitors.”
The proposed rule comes from the SEC Investor Advisory Committee (IAC) and was discussed during the Dec. 4 IAC meeting.
Companies can write their own definitions of AI
Another controversial aspect of the proposed rule is that it fails to define AI, instead instructing companies to write their own definitions. Some legal experts have suggested that the committee didn’t literally want companies to evaluate all uses of AI, given that the technology dates back to the 1950s and exists in some form in just about every piece of software that businesses use. They more likely intended for such evaluations to focus on recently popularized forms of AI, especially generative AI and agentic AI.
Under the proposed rule, companies would “self-define what they mean by artificial intelligence and then rely on that definition throughout its disclosures in describing AI-related risks, their AI deployment strategy if any and capital expenses and R&D expenditures related to the implementation and deployment of AI, amongst other material information.”
Monica Washington Rothbaum, a senior attorney with J&Y Law, said that it would be “risky for a company to define AI differently” because it would make “apple to apple” investor comparisons difficult, if not impossible.
“Requiring companies to disclose AI-related risks is a smart move. But letting each company define AI however they see fit is a loophole waiting to be exploited. Without a consistent baseline, you risk turning disclosures into PR spin rather than meaningful accountability,” Rothbaum said.
But Rothbaum does find value in forcing companies to disclose where management has opted to not use AI or to use it less than they might have otherwise.
“Under-disclosing material risks like reliance on flawed AI models can expose companies to liability when things go wrong. Failing to invest in AI responsibly could also lead to competitive disadvantages that shareholders deserve to know about,” Rothbaum said. “This isn’t theoretical. AI is already shaping the way we look at hiring, customer service, and security. These are core operations that can affect a company’s value. If you can’t clearly explain how your AI decisions are made and who’s accountable for making them, then you’re already behind. Transparency like that has to be the cost of doing business today.”
Braden Perry, a litigation, regulatory, and government investigations attorney with law firm Kennyhertz Perry, is not a fan of the proposed rule because he sees it as unlikely to help investors make decisions.
Asked the probability that such a rule would deliver useful information to investors and potential investors, Perry said, “None. In terms of an overall understanding from a shareholder, there will likely be zero usable information.”
Will filings reveal anything useful?
This concern is partly based on the many SEC cybersecurity filings that have used boilerplate language and relied on SEC exemptions to reveal nothing specific.
According to Perry, the key element of the definition requirement is that once a company adopts an AI definition, it must use that definition consistently throughout all of its filings.
“Adopt a clear, enterprise-wide definition of AI and use it consistently across SEC filings, internal policies, and marketing, so you do not redefine the term to suit the story you want to tell in a given quarter,” Perry said. “The IAC recommendation explicitly contemplates requiring issuers to define what they mean by AI, in part because inconsistent definitions are already making disclosures hard for investors to compare. Allowing companies to define AI themselves is a double-edged sword, since it can either promote honest, business-specific clarity or invite opportunistic word games.”
Some attorneys suggested that companies should be careful about AI phrasing or face potential actions from the SEC and the US Federal Trade Commission (FTC).
“Be very cautious about AI marketing. The SEC has already shown, through its AI washing enforcement actions, that it is willing to charge firms that exaggerate their AI capabilities or mislead investors about how embedded AI is in their products and processes,” Perry said. “A disclosure regime that asks companies to explain where AI is used, how it is governed, and how it affects operations will only make it easier for the SEC to test whether those claims are real.”
Lexi Reese, CEO of AI vendor Lanai, also expressed concern about allowing companies to write their own AI definitions.
“Giving companies the freedom to define AI may reduce short-term compliance friction, but it creates exactly the kind of fragmented, incomparable disclosure environment that leaves investors guessing,” Reese said. “If one company calls an autonomous decision system AI and another calls the same thing a data-driven tool, their disclosures will look compliant while describing two different universes of risk.”
AI specialist Rob Lee, chief of research for the cybersecurity training firm the SANS Institute, said the rule might prove helpful in raising board and C-level awareness about what companies are actually doing with AI.
But as with the earlier SEC cybersecurity rule, Lee said he was unhappy that the rule includes “a massive number of get-out-of-jail-free cards. Who is going to actually disclose anything? What are they disclosing? They don’t even mention shadow IT. How do you track unsanctioned AI use in your company?”
Not even all members of the IAC were happy with the rule’s phrasing. IAC member John Gulliver submitted an official dissent to the proposed rule, expressing particular concern with each company’s ability to write its own AI definition.
“These definitions would likely change from year-to-year or quarter-to-quarter. I don’t see how this benefits investors,” Gulliver wrote. He also questioned whether the level of detail the rule would require is realistic.
The proposed rule would “require public companies to provide highly specific disclosures of how their use of AI impacts employees at their company and the company’s customers. It’s good that this is only required when the use of AI is financially material to the company. But unfortunately, I think this is an impossible task,” Gulliver wrote. “Does the SEC really have the AI expertise necessary to determine what these line-item disclosures should be? And how is a company supposed to know the precise impact of AI on hiring or their customers? There are many macroeconomic and industry-specific factors that affect jobs and customers. In my view, accurately isolating AI-specific impacts would be a difficult guessing game.”