Strengthen America: A 21st-Century Compact

§ Constitutional Amendment

Algorithmic Accountability in Government

Current Status

Existing Law

  • Fifth Amendment requires due process for federal government actions affecting life, liberty, or property
  • Fourteenth Amendment extends due process to state actions
  • Administrative Procedure Act (1946) requires notice and comment for agency rulemaking
  • No constitutional or statutory framework specifically addressing algorithmic decision-making

Current Authority

  • Agencies may use algorithms and AI in decision-making without specific authorization
  • OMB Circular A-130 provides guidance on federal information management
  • Executive Order 13960 (2020) established AI principles for federal agencies (non-binding)
  • No enforceable right to explanation of algorithmic decisions

Existing Limitations

  • Due process doctrine developed for human decision-makers, not algorithms
  • No requirement for algorithmic transparency in government decisions
  • No right to human review of automated decisions
  • Agencies not required to disclose use of algorithms in individual determinations
  • No audit requirements for algorithmic bias in government systems

Problem

Specific Harm

  • Government algorithms determine benefits eligibility, risk assessments, and enforcement targeting without transparency
  • Citizens denied benefits or targeted for enforcement cannot understand or challenge algorithmic reasoning
  • Algorithmic bias in training data perpetuates discrimination at scale
  • "Black box" decisions violate spirit of due process even when technically compliant
  • No mechanism to identify or correct systematic algorithmic errors

Who is Affected

  • Benefits applicants subject to automated eligibility determinations
  • Citizens subject to algorithmic risk scoring (child welfare, criminal justice, fraud detection)
  • Individuals targeted by algorithmic enforcement prioritization
  • Communities disproportionately affected by biased training data
  • All citizens lacking transparency in government decision-making

Gaps in Current Law

  • Due process requires notice and opportunity to be heard, but not explanation of algorithmic factors
  • APA notice-and-comment applies to rules, not individual algorithmic determinations
  • No statutory right to human review of automated decisions
  • No requirement to test algorithms for bias before deployment
  • No mandatory disclosure of algorithmic decision-making use

Accountability Failures

  • Agencies deploy algorithms without bias testing or impact assessment
  • Citizens cannot identify when algorithms influenced their case
  • No systematic audit of algorithmic outcomes for disparate impact
  • Errors in algorithmic systems may affect thousands before detection
  • No mechanism for affected individuals to trigger algorithm review

Proposed Reform

Primary Policy Change

Establish constitutional right to transparency, explanation, and human review when government algorithms significantly affect individual rights, benefits, or liberty.

New Requirements

  • Disclosure: Government agencies shall disclose when algorithmic or automated systems significantly influence decisions affecting individual rights, benefits, enforcement actions, or liberty
  • Explanation: Upon request, individuals shall receive plain-language explanation of the principal factors an algorithmic system considered in their case, the weight assigned to key factors, and how their data compared to decision thresholds
  • Human Review: Individuals shall have the right to request human review of any algorithmic determination significantly affecting their rights, benefits, or liberty; human reviewers shall have authority to override algorithmic recommendations
  • Bias Auditing: Before deployment, government algorithms significantly affecting individual rights shall undergo independent testing for accuracy and disparate impact across protected classes; results shall be published
  • Ongoing Monitoring: Deployed algorithms shall be audited annually for outcome disparities; agencies shall report findings to Congress and remediate identified bias

New Prohibitions

  • Government may not deny benefits, impose penalties, or restrict liberty based solely on algorithmic determination without human review opportunity
  • Algorithms may not use factors that serve as proxies for protected characteristics where such use produces disparate impact without compelling justification
  • Agencies may not withhold algorithmic decision-making disclosure by claiming trade secrets for systems affecting individual rights

Enforcement

  • Congress shall have power to enforce by appropriate legislation
  • GAO Information Technology and Cybersecurity team shall audit federal algorithmic systems and report annually to Congress on compliance, accuracy, and disparate impact findings
  • Individuals may seek judicial review of algorithmic determinations under existing APA framework
  • Agencies shall designate Algorithmic Accountability Officers responsible for compliance (existing staff; no new positions required)
  • Inspector General offices shall include algorithmic accountability in agency audits

What Changes

Before | After
No disclosure of algorithmic decision-making | Mandatory disclosure when algorithms significantly influence individual determinations
No right to explanation of algorithmic factors | Right to plain-language explanation of factors, weights, and thresholds
No guaranteed human review of automated decisions | Right to human review with override authority
No pre-deployment bias testing required | Independent bias auditing before deployment with published results
No systematic monitoring for disparate impact | Annual outcome audits with congressional reporting
Agencies may rely solely on algorithmic determinations | Prohibition on algorithm-only decisions affecting rights/benefits/liberty
Trade secret claims block transparency | Trade secrets cannot shield disclosure for rights-affecting systems
Due process developed for human decision-makers | Constitutional framework adapted for algorithmic governance

ROI

Federal Budget Impact (10-Year, Estimated)

Note: Constitutional amendments are not CBO-scoreable. Estimates based on comparable programs, research, and implementing legislation projections.

Costs:

Item | 10-Year | Source
GAO ITC algorithmic auditing expansion | $0.15B | [GAO ITC baseline]
Agency compliance (existing staff designated) | $0.10B | Est.
Pre-deployment bias testing | $0.20B | [AI audit market data]
Human review infrastructure | $0.25B | [Appeals process benchmarks]
Explanation system development | $0.15B | [IT modernization]
Contingency (15%) | $0.13B |
Total | $0.98B |

Savings:

Item | Gross | Capture | Net | Source
Reduced erroneous denials/enforcement (algorithm errors) | $3.0B | 20% | $0.60B | [Benefits error rates]
Avoided litigation from algorithmic decisions | $1.0B | 30% | $0.30B | [Agency litigation costs]
Improved targeting efficiency (bias removal) | $2.0B | 15% | $0.30B | [Program integrity data]
Total | | | $1.20B |

Result: Net +$0.22B (Estimated - Not CBO-Scoreable)
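The ten-year budget arithmetic above can be checked directly from the Costs and Savings tables (the 15% contingency applies to the pre-contingency cost subtotal; all figures in $B):

```python
# Verify the 10-year federal budget figures from the Costs and Savings tables.
costs = {
    "GAO ITC auditing expansion": 0.15,
    "Agency compliance (existing staff)": 0.10,
    "Pre-deployment bias testing": 0.20,
    "Human review infrastructure": 0.25,
    "Explanation system development": 0.15,
}
subtotal = sum(costs.values())                  # 0.85
contingency = round(subtotal * 0.15, 2)         # 0.13
total_costs = round(subtotal + contingency, 2)  # 0.98

# Savings rows: gross estimate x capture rate = net savings.
savings = [(3.0, 0.20), (1.0, 0.30), (2.0, 0.15)]
total_savings = round(sum(gross * capture for gross, capture in savings), 2)  # 1.20

net = round(total_savings - total_costs, 2)  # +0.22 over 10 years
print(f"Costs ${total_costs:.2f}B, savings ${total_savings:.2f}B, net +${net:.2f}B")
```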


Societal Benefits

Benefit | Annual | 10-Yr NPV (3%) | 10-Yr NPV (7%) | Source
Reduced algorithmic discrimination harm | $2.0B | $17.1B | $14.1B | [Civil rights research]
Public trust in government decisions | $1.5B | $12.8B | $10.5B | [Trust-compliance studies]
Avoided wrongful benefit denials | $1.0B | $8.5B | $7.0B | [Benefits error research]
Due process value (dignity, fairness) | $0.5B | $4.3B | $3.5B | [Legal scholarship]
Total | $5.0B | $42.7B | $35.1B |
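The NPV columns are consistent with valuing each annual benefit as a ten-year annuity at the stated discount rate (end-of-year cash flows; this horizon and timing convention are inferred from the figures, not stated in the source):

```python
# Present value of a constant annual benefit over a 10-year horizon,
# discounted at the given rate with end-of-year cash flows.
def npv(annual_b: float, rate: float, years: int = 10) -> float:
    return sum(annual_b / (1 + rate) ** t for t in range(1, years + 1))

# E.g. $2.0B/yr at 3% -> ~$17.1B, matching the table. Individual rows
# may differ by ~$0.1B from the table depending on rounding precision.
assert round(npv(2.0, 0.03), 1) == 17.1  # reduced discrimination harm
assert round(npv(5.0, 0.03), 1) == 42.7  # total annual benefit, 3%
assert round(npv(5.0, 0.07), 1) == 35.1  # total annual benefit, 7%
```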

Summary

Category | 10-Year | Notes
Federal Budget | +$0.22B | Estimated - Not CBO-scoreable; uses existing oversight bodies
Societal | $35.1B - $42.7B | NPV at 3-7%; discrimination reduction, trust, due process

Confidence: LOW

Estimation Basis: Federal government uses hundreds of algorithmic systems affecting benefits, enforcement, and risk assessment. Michigan's unemployment algorithm falsely accused 40,000+ people of fraud (93% error rate in some periods). Child welfare algorithms show documented racial disparities. Benefits programs have 5-15% error rates, with algorithmic systems potentially scaling errors. Audit costs estimated from commercial AI audit market ($50K-$500K per system) scaled to federal inventory. Trust benefits derived from research linking procedural fairness to compliance. Major uncertainty exists regarding scope of affected systems, behavioral responses, and implementation choices.

References

  1. GAO, "Artificial Intelligence: An Accountability Framework for Federal Agencies" (2021)
  2. Michigan Unemployment Insurance Agency audit (false fraud accusations)
  3. Allegheny County child welfare algorithm racial disparity studies
  4. OMB, "Guidance for Regulation of Artificial Intelligence Applications" (2020)
  5. Administrative Conference of the United States, "Algorithmic Accountability" recommendations

Change Log

Date | Change | Source
2025-01-20 | Created as narrow constitutional amendment establishing algorithmic accountability rights for government decisions affecting individuals; routes oversight to GAO ITC and existing IG offices; no new agencies | Amendment review