Principles to runtime control

On this page, the AAMC Principles for the Responsible Use of AI in and for Medical Education are treated as requirements rather than recommendations. The map below ties each principle to the runtime control that operationalizes it.

AAMC principles: 7 (5 verified · 2 partial)
Methodology: v2.4.1, cross-signed with runtime
Scale: 8,000+ programs · 800+ institutions
Status: active public benefit corporation

AAMC Principles → Runtime control

The AAMC's Principles for the Responsible Use of Artificial Intelligence in and for Medical Education describe what responsible AI in academic medicine should look like. The table below maps each principle to the runtime control that operationalizes it on the Thalamus platform, the type of evidence the control emits, and the most recent verification timestamp.

Principle: Educational mission alignment
Summary: AI applications must support — and not replace — human judgment in admissions and educational decisions.
Runtime control: Cortex outputs are surfaced as signals to human reviewers, never as filter or auto-reject actions. Any output formatted as a filter recommendation is held for review.
Evidence: Held-for-review record + policy-violation period log
Status: Verified
Last verified: Tue, 12 May 2026 01:18:42 UTC

Principle: Equity, fairness, and bias mitigation
Summary: Tools must be evaluated for disparate impact across protected characteristics — and reevaluated as data, models, or contexts change.
Runtime control: Daily test set runs against three text-analysis prompts × five demographic variations. Findings published as signed test records within 48 hours.
Evidence: Test record + fairness-matrix verification
Status: Verified
Last verified: Tue, 12 May 2026 00:08:11 UTC

Principle: Transparency to the medical education community
Summary: How AI is used, what it does, and what its limits are must be explainable to applicants, programs, and institutions.
Runtime control: Every AI check is paired with an evidence-span citation and the prompt version that produced it. Evidence-span integrity is verified before any output leaves the system boundary.
Evidence: Per-check record with source_span_hash field
Status: Verified
Last verified: Mon, 11 May 2026 23:42:35 UTC

Principle: Privacy and the protection of applicant data
Summary: Applicant information used to train, evaluate, or run AI must be protected with appropriate safeguards.
Runtime control: All AI-check records use masked applicant identifiers. Source content is never published; only the SHA-256 of evidence spans is included in records.
Evidence: Record schema verification + redaction policy version
Status: Verified
Last verified: Tue, 12 May 2026 02:21:09 UTC

Principle: Continuous monitoring and improvement
Summary: AI tools must be monitored after deployment — not only validated once before launch.
Runtime control: Per-period verification rate, performance-variation detection across 14-day rolling windows, and Merkle-rooted publication of every AI-check record in the period. STH-signed and externally anchorable.
Evidence: Per-period Merkle root + STH signature
Status: Verified
Last verified: Tue, 12 May 2026 02:00:00 UTC

Principle: Accountability with auditable governance
Summary: Decisions about AI policy, scope, and lifecycle must be documented and reviewable by community stakeholders.
Runtime control: Methodology versions are cross-signed with their corresponding runtime configurations. A new methodology version cannot ship unless its runtime configuration record also verifies.
Evidence: Cross-signature record linking methodology version → runtime config
Status: Partial
Last verified: Mon, 11 May 2026 18:55:21 UTC

Principle: Engagement with affected communities
Summary: Applicants, residents, fellows, and other affected community members must have meaningful avenues to surface concerns about AI tools.
Runtime control: Per-applicant lookup endpoint publishes the record chain for any AI signal about an applicant's submission. Disputes open a tracked reinvestigation record.
Evidence: Disclosure record + dispute case ID
Status: Partial
Last verified: Mon, 11 May 2026 17:12:09 UTC
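The per-check hashing and per-period Merkle publication described in the table can be sketched as follows. This is a minimal illustration, not the platform's implementation: the record fields mirror the table's wording (masked applicant IDs, a source_span_hash instead of source text), while the tree details (0x00/0x01 leaf and node prefixes, odd-node promotion) are assumed conventions borrowed from Certificate Transparency-style logs.

```python
import hashlib
import json

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def check_record(check: str, masked_id: str, span: str) -> bytes:
    """Serialize one AI-check record. Only the SHA-256 of the evidence
    span is included; the source text itself is never published."""
    return json.dumps({
        "check": check,
        "applicant_id": masked_id,  # masked identifier, per the privacy control
        "source_span_hash": hashlib.sha256(span.encode()).hexdigest(),
    }, sort_keys=True).encode()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Merkle root over all records in a period. Leaves and interior
    nodes use 0x00/0x01 domain-separation prefixes (an assumption);
    an odd node is promoted to the next level unchanged."""
    if not leaves:
        raise ValueError("a period must contain at least one record")
    level = [sha256(b"\x00" + leaf) for leaf in leaves]
    while len(level) > 1:
        nxt = [sha256(b"\x01" + level[i] + level[i + 1])
               for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:
            nxt.append(level[-1])  # promote unpaired node
        level = nxt
    return level[0]

records = [
    check_record("career_interest", "a9f3", "evidence span text"),
    check_record("grade_normalization", "b112", "another evidence span"),
]
root = merkle_root(records)  # this root is what gets STH-signed and anchored
```

Because the root is a deterministic function of every record in the period, an external party holding only the signed tree head can detect any retroactive change to any record.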
Methodology cross-signature

The methodology version published on thalamusgme.com and the runtime configuration executing in production must correspond. Neither side can change without the other being re-signed.

  1. Career Interest Badge · v2.4.1 · cross-signed

     Categorical extraction using a prompted LLM against the full statement text. Outputs one of {Strong, Moderate, Limited, Not expressed} with a source-span citation. Never operates without the underlying text available for human review.

  2. Grade Normalization · v2.4.1 · cross-signed

     Logistic-regression-class internal model trained on de-identified historical transcript data. Operates only on structured inputs. Produces a normalized percentile band per academic dimension, never an overall ranking.

  3. Cortex Screening · v2.4.1 · cross-signed

     Composite signal derived from feature-extraction outputs (LLM-based) and internal classification models. Every signal is paired with the source spans and feature values that produced it. Used by program reviewers as a starting point for human review, not as a decision.
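The cross-signature gate above can be sketched as a release signature that covers the hashes of both the published methodology and the runtime configuration, so changing either side invalidates the record. This is an illustrative sketch only: HMAC-SHA256 stands in for whatever signature scheme is actually used, and all field names are hypothetical.

```python
import hashlib
import hmac
import json

def _digest(obj: dict) -> str:
    # Canonical JSON serialization so the same content always hashes the same
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

def cross_sign(methodology: dict, runtime_config: dict, key: bytes) -> dict:
    """Bind a methodology version to the runtime config that executes it.
    Record shape is hypothetical; HMAC stands in for the real signature."""
    record = {
        "methodology_version": methodology["version"],
        "methodology_hash": _digest(methodology),
        "runtime_config_hash": _digest(runtime_config),
    }
    record["signature"] = hmac.new(
        key, json.dumps(record, sort_keys=True).encode(), hashlib.sha256
    ).hexdigest()
    return record

def verify(record: dict, methodology: dict, runtime_config: dict,
           key: bytes) -> bool:
    """Gate check: a methodology version ships only if this passes."""
    body = {k: v for k, v in record.items() if k != "signature"}
    expected = hmac.new(
        key, json.dumps(body, sort_keys=True).encode(), hashlib.sha256
    ).hexdigest()
    return (
        hmac.compare_digest(expected, record["signature"])
        and body["methodology_hash"] == _digest(methodology)
        and body["runtime_config_hash"] == _digest(runtime_config)
    )

key = b"release-signing-key"  # placeholder secret
meth = {"version": "v2.4.1", "text": "Career Interest Badge methodology"}
cfg = {"component": "cortex", "prompt_version": "v2.4.1"}
rec = cross_sign(meth, cfg, key)
```

Editing the methodology text or the runtime config after signing makes `verify` return False until both sides are re-signed together, which is the property the gate enforces.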
Partner and framework alignment
AAMC × ERAS
Thalamus is the leading platform for graduate medical education recruitment, with deep integration into the Electronic Residency Application Service supporting 8,000+ residency and fellowship programs across 800+ institutions.
CHAI framework
In its public statement Artificial Intelligence: The Thalamus Way, Thalamus commits to alignment with the Coalition for Health AI's principles for trustworthy AI in healthcare.
Public benefit corporation
Thalamus operates as a public benefit corporation. Its responsible-AI commitments are part of its public-benefit obligation, not a marketing posture.
National advisory boards
Applicant (medical students, residents, fellows) and educator (UME, GME) advisory boards review AI-feature roadmaps before deployment.