A new policy framework proposes the “AFCP Standard”—a joint OPM-EEOC certification regime to operationalize civil rights law in the age of automated recruitment.
WASHINGTON, D.C. — December 20, 2025 — As federal agencies accelerate the adoption of Artificial Intelligence to modernize the civil service, a new policy memorandum released today by Aspen Institute Civic AI Fellow Rohan Sharma outlines the urgent architecture needed to prevent automated discrimination. Titled “The Algorithmic Fairness Certification Program (AFCP),” the framework proposes a unified, government-wide standard to audit and certify AI hiring tools before they are deployed.
With the Office of Management and Budget (OMB) and Congress increasing scrutiny of “High-Risk” AI, the AFCP offers the first operational roadmap to bridge the gap between high-level principles, such as the NIST AI Risk Management Framework (AI RMF), and daily procurement realities. The memorandum calls for the Office of Personnel Management (OPM) and the Equal Employment Opportunity Commission (EEOC) to establish a joint oversight board that mandates rigorous pre-deployment testing for all federal hiring algorithms.
“We are currently hiring at the speed of algorithms but governing at the speed of paper,” said Rohan Sharma, author of the memorandum and CEO of governance technology firm Zenolabs.AI. “The Federal Government is the nation’s largest employer. If we cannot prove—mathematically—that our hiring robots are fair, we risk scaling discrimination across the entire civil service. The AFCP provides the ‘Check Engine Light’ that agencies need before they hand over the keys to AI.”
The “AFCP” Protocol: A New Standard for Digital Civil Rights

The policy memorandum, developed during the Aspen Institute’s Winter 2025 Civic AI cohort, argues that voluntary vendor assurances are no longer sufficient. Instead, it proposes a mandatory certification requiring:
- Counterfactual Fairness Testing: A technical requirement to test if an AI model would make the same hiring decision if an applicant’s race or gender were swapped.
- The “Four-Fifths” Digital Baseline: Codifying the long-standing EEOC disparate impact rule into a continuous, automated audit for all hiring software.
- Zero-Knowledge Compliance: A mechanism for vendors to prove fairness without exposing proprietary trade secrets or sensitive applicant data.
- Public “Fairness” Registry: A transparent log of certified tools, ensuring accountability for agencies and the public.
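To make the first two requirements concrete, the checks they describe can be sketched in a few lines of code. This is an illustrative sketch only — the memorandum does not publish reference code, and the function names (`audit_four_fifths`, `counterfactual_flip_rate`) and the toy `model` are hypothetical:

```python
def audit_four_fifths(selections):
    """EEOC four-fifths (80%) rule: each group's selection rate must be
    at least 4/5 of the highest group's rate.
    `selections` maps group -> (num_selected, num_applicants)."""
    rates = {g: sel / total for g, (sel, total) in selections.items()}
    top = max(rates.values())
    return {g: (r / top >= 0.8) for g, r in rates.items()}

def counterfactual_flip_rate(model, applicants, attr="race", values=("A", "B")):
    """Fraction of applicants whose predicted decision changes when a
    protected attribute is swapped -- a rough counterfactual fairness probe."""
    flips = 0
    for a in applicants:
        decision_a = model({**a, attr: values[0]})
        decision_b = model({**a, attr: values[1]})
        flips += int(decision_a != decision_b)
    return flips / len(applicants)

# Four-fifths check: group B's 45% rate is only 0.75 of group A's 60% rate.
audit = audit_four_fifths({"A": (60, 100), "B": (45, 100)})
# audit -> {"A": True, "B": False}

# Counterfactual check against a deliberately biased toy model that
# favors race "A" for borderline scores.
biased = lambda a: a["score"] > 50 or a["race"] == "A"
rate = counterfactual_flip_rate(biased, [{"score": 40}, {"score": 60}])
# rate -> 0.5 (the borderline applicant's outcome flips with race)
```

A certified tool would be expected to pass both checks continuously in production, not just at procurement time.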
Bridging Policy and Engineering

Unlike theoretical white papers, the AFCP is grounded in technical feasibility. It leverages existing authorities under Title VII of the Civil Rights Act and Executive Order 14110, offering agencies a “shovel-ready” regulatory structure that protects merit-based hiring while enabling innovation.
“Efficiency and equity are not zero-sum,” Sharma added. “By standardizing how we measure bias, we give compliant vendors a fast lane to government contracts and give agency leaders the confidence to modernize.”
Availability

The full policy memorandum, “The Algorithmic Fairness Certification Program (AFCP): A Policy Framework for Federal AI Hiring,” is available for immediate download at Policy Memo.
About the Author

Rohan Sharma is an Aspen Institute Civic AI Fellow and the CEO of Zenolabs.AI, a governance technology firm building the automated compliance infrastructure for the AI age. He is the author of AI & The Boardroom (Springer Nature) and the inventor of the “Trustworthy AI Index” (U.S. Patent Pending). His work focuses on operationalizing regulatory standards into technical reality.
Media Contact:
Rohan Sharma
Aspen Institute Civic AI Fellow; Principal, Zenolabs.AI
rohan@rohansharma
www.rohansharma.net