ROHAN SHARMA ADVISES FEDERAL RESERVE ON ALGORITHMIC TRANSPARENCY IN SUPERVISORY STRESS TEST MODELS

Rohan Sharma (center) speaking about the Trustworthy AI Index on stage with Nicholas Thompson (right) and John Borthwick (left) at The Atlantic CEO Council.
“The P&L Gap”: Rohan Sharma explains why 2025 AI tech fails in 2015 org charts. Backstage at The Atlantic CEO Council.

FOR IMMEDIATE RELEASE
Washington, D.C. — January 25, 2026

WASHINGTON, D.C. — Rohan Sharma, a 2025 Aspen Institute Civic AI Leader and member of the U.S. Technical Advisory Group to ISO, has formally submitted technical commentary to the Board of Governors of the Federal Reserve System regarding the “Enhanced Transparency and Public Accountability of the Supervisory Stress Test Models and Scenarios” (Docket No. R-1873). The submission, now published by the Federal Reserve, outlines a critical framework for integrating algorithmic governance and “glass-box” transparency into the stress capital buffer requirements that underpin the stability of the United States banking sector. Mr. Sharma’s intervention leverages his extensive work with the ACM Technology Policy Committee to argue that as financial institutions increasingly deploy complex predictive models, the regulatory apparatus must evolve to scrutinize the interpretability and resilience of these systems against tail risks.

The commentary addresses a pivotal shift in financial regulation: the transition from static capital planning to dynamic, model-driven stress testing. In his submission (Comment ID: FR-2025-0063-01-C04), Mr. Sharma warns that without rigorous public accountability and model explainability, the supervisory models used to determine capital adequacy could obscure systemic vulnerabilities rather than reveal them. Drawing on frameworks established in his seminal text, AI & the Boardroom, Mr. Sharma advises the Board to adopt standards that treat algorithmic opacity as a material risk factor, ensuring that the models dictating capital reserves are as auditable as the assets they evaluate.

“The integrity of our financial infrastructure relies not only on the quantity of capital held by major institutions but on the clarity of the models that mandate those reserves,” said Rohan Sharma. “As the Federal Reserve seeks to enhance transparency, we must recognize that algorithmic accountability is no longer a technical niche but a pillar of macroeconomic sovereignty. We cannot allow the complexity of stress test models to outpace our capacity for oversight; to do so would be to invite systemic fragility under the guise of sophistication.”

Operationalizing Governance in Capital Planning

Mr. Sharma’s submission provides specific recommendations on harmonizing the Federal Reserve’s transparency goals with emerging global standards on AI and model risk management. The commentary highlights the necessity of “adversarial audit” capabilities—allowing external stakeholders and regulators to test the robustness of stress test scenarios against unforeseen market correlations. By aligning the Federal Reserve’s Regulation LL with broader principles of algorithmic transparency, the submission argues for a regulatory environment where model assumptions are open to rigorous, data-driven challenge.

This engagement follows Mr. Sharma’s ongoing work with the International Organization for Standardization (ISO) and the World Economic Forum, where he has consistently advocated for governance frameworks that bridge the gap between technical innovation and public policy. His recommendations to the Federal Reserve underscore the urgent need to view model risk through the lens of institutional trust and public accountability.

“Transparency in supervisory modeling is the bedrock of market confidence,” Sharma added. “If the mechanisms of stress testing remain black boxes, we deny the market the ability to accurately price risk. True financial resilience requires that the mathematical architectures governing our economy be subject to the same rigorous democratic scrutiny as the policies they enforce.”

Media Assets

  • Official Headshot: Available upon request (High-resolution, 300 DPI, neutral background).
  • Documentation: Full text of Comment ID FR-2025-0063-01-C04 is available via the Federal Reserve Board website.

ABOUT ROHAN SHARMA

Rohan Sharma is a globally recognized authority on artificial intelligence governance, digital transformation, and enterprise risk, and a 2025 Aspen Institute Civic AI Leader. His work sits at the intersection of technology, public policy, and board-level decision-making, where he advises executives and institutions on the strategic, regulatory, and capital implications of advanced AI systems. Mr. Sharma serves on the U.S. Technical Advisory Group to the International Organization for Standardization (ISO), contributing to the development of global AI safety and quality standards, and leads the Law Sub-committee of the ACM Technology Policy Committee, shaping legal and governance perspectives on emerging technologies. He is an Agenda Contributor to the World Economic Forum, an advisor to Stanford Seed, and has previously held senior leadership roles driving AI-enabled transformation at Apple, Disney, and Fortune 100 enterprises.

An author and public intellectual, Mr. Sharma wrote the Springer-published AI & the Boardroom, a widely cited text on AI governance and executive oversight that has been referenced by NATO, Google DeepMind, and peer-reviewed academic journals including Nature and Emerald. He is also the author of Minds of Machines and a frequent speaker at global C-suite forums, including TEDx Yale and The Atlantic CEO Summit. Mr. Sharma serves as a strategic advisor to UCLA Anderson and as a mentor with Techstars. He resides in California with his family.

MEDIA CONTACT
Name: Rohan Sharma
Title: Managing Principal, Zenolabs AI
Email: [email protected]
Phone: +1 (323) 23 8723
Website: http://www.rohansharma.ne