Principles to runtime control

The AAMC's Principles for the Responsible Use of Artificial Intelligence in and for Medical Education describe what responsible AI in academic medicine should look like. On the Thalamus platform they are not merely advisory: the table below maps each principle to the runtime control that operationalizes it, the type of evidence the control emits, and the most recent verification timestamp.
| Principle | Summary | Runtime control | Evidence | Status | Last verified |
|---|---|---|---|---|---|
| Educational mission alignment | AI applications must support — and not replace — human judgment in admissions and educational decisions. | Cortex outputs are surfaced as signals to human reviewers, never as filter or auto-reject actions. Any output formatted as a filter recommendation is held for review. | Held-for-review record + policy-violation period log | Verified | Tue, 12 May 2026 01:18:42 UTC |
| Equity, fairness, and bias mitigation | Tools must be evaluated for disparate impact across protected characteristics — and reevaluated as data, models, or contexts change. | Daily test set runs against three text-analysis prompts × five demographic variations. Findings published as signed test records within 48 hours. | Test record + fairness-matrix verification | Verified | Tue, 12 May 2026 00:08:11 UTC |
| Transparency to the medical education community | How AI is used, what it does, and what its limits are must be explainable to applicants, programs, and institutions. | Every AI check is paired with an evidence-span citation and the prompt version that produced it. Evidence-span integrity is verified before any output leaves the system boundary. | Per-check record with source_span_hash field | Verified | Mon, 11 May 2026 23:42:35 UTC |
| Privacy and the protection of applicant data | Applicant information used to train, evaluate, or run AI must be protected with appropriate safeguards. | All AI-check records use masked applicant identifiers. Source content is never published; only the SHA-256 of evidence spans is included in records. | Record schema verification + redaction policy version | Verified | Tue, 12 May 2026 02:21:09 UTC |
| Continuous monitoring and improvement | AI tools must be monitored after deployment — not only validated once before launch. | Per-period verification rate, performance-variation detection across 14-day rolling windows, and Merkle-rooted publication of every AI-check record in the period. STH-signed and externally anchorable. | Per-period Merkle root + STH signature | Verified | Tue, 12 May 2026 02:00:00 UTC |
| Accountability with auditable governance | Decisions about AI policy, scope, and lifecycle must be documented and reviewable by community stakeholders. | Methodology versions are cross-signed with their corresponding runtime configurations. A new methodology version cannot ship unless its runtime configuration record also verifies. | Cross-signature record linking methodology version → runtime config | Partial | Mon, 11 May 2026 18:55:21 UTC |
| Engagement with affected communities | Applicants, residents, fellows, and other affected community members must have meaningful avenues to surface concerns about AI tools. | Per-applicant lookup endpoint publishes the record chain for any AI signal about an applicant's submission. Disputes open a tracked reinvestigation record. | Disclosure record + dispute case ID | Partial | Mon, 11 May 2026 17:12:09 UTC |
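The per-check record described in the transparency and privacy rows above can be sketched as follows. This is a minimal illustration, not the platform's actual schema: the `build_check_record` helper, the masking scheme (a truncated SHA-256 prefix), and every field name other than `source_span_hash` are assumptions.

```python
import hashlib
import json

def build_check_record(applicant_id: str, evidence_span: str,
                       prompt_version: str, signal: str) -> dict:
    """Hypothetical per-check record: the applicant identifier is masked,
    and only the SHA-256 of the evidence span is published."""
    masked_id = "app-" + hashlib.sha256(applicant_id.encode()).hexdigest()[:12]
    return {
        "applicant": masked_id,          # masked identifier, never the raw ID
        "signal": signal,
        "prompt_version": prompt_version,
        # Only the hash crosses the system boundary; the span itself does not.
        "source_span_hash": hashlib.sha256(evidence_span.encode()).hexdigest(),
    }

record = build_check_record("A123456", "I intend to pursue pediatrics...",
                            "v2.4.1", "Strong")
print(json.dumps(record, indent=2))
```

Because the record carries the span's hash rather than the span, a reviewer with access to the source text can re-hash it and confirm the citation is intact, while the published record reveals neither the text nor the applicant's identity.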
The published methodology version on thalamusgme.com and the runtime configuration executing in production must correspond: neither side can change without the other being re-signed.
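A minimal sketch of what such cross-signing could look like, using an HMAC as a stand-in for the platform's real signature scheme. The key, function names, and version identifiers here are all hypothetical; the point is only that one signature covers both artifacts, so changing either one invalidates it.

```python
import hashlib
import hmac

SIGNING_KEY = b"demo-key"  # stand-in for the platform's real signing key

def cross_signature(methodology_version: str, runtime_config: str) -> str:
    # The signature covers BOTH artifacts, so neither can drift
    # without invalidating it and forcing a re-sign.
    payload = f"{methodology_version}|{runtime_config}".encode()
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()

def verify(methodology_version: str, runtime_config: str, sig: str) -> bool:
    expected = cross_signature(methodology_version, runtime_config)
    return hmac.compare_digest(expected, sig)

sig = cross_signature("methodology-v2.4.1", "runtime-config-7f3a")
assert verify("methodology-v2.4.1", "runtime-config-7f3a", sig)
assert not verify("methodology-v2.4.2", "runtime-config-7f3a", sig)  # drift rejected
```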
1. Career Interest Badge (v2.4.1, cross-signed). Categorical extraction using a prompted LLM against the full statement text. Outputs one of {Strong, Moderate, Limited, Not expressed} with a source-span citation. Never operates without the underlying text available for human review.
2. Grade Normalization (v2.4.1, cross-signed). Logistic-regression-class internal model trained on de-identified historical transcript data. Operates only on structured inputs. Produces a normalized percentile band per academic dimension, never an overall ranking.
3. Cortex Screening (v2.4.1, cross-signed). Composite signal derived from feature-extraction outputs (LLM-based) and internal classification models. Every signal is paired with the source spans and feature values that produced it. Used by program reviewers as a starting point for human review, not as a decision.
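The per-period Merkle-rooted publication mentioned in the monitoring row above can be sketched as follows. The pairing convention (promoting an odd node unchanged) and the function names are assumptions for illustration, not the platform's documented construction.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(records: list[bytes]) -> bytes:
    """Hash each record, then pair-wise hash upward until one root remains."""
    level = [h(r) for r in records]
    while len(level) > 1:
        nxt = [h(level[i] + level[i + 1]) for i in range(0, len(level) - 1, 2)]
        if len(level) % 2 == 1:
            nxt.append(level[-1])   # promote the unpaired node unchanged
        level = nxt
    return level[0]

period_records = [b"check-record-1", b"check-record-2", b"check-record-3"]
root = merkle_root(period_records)
print(root.hex())  # the period's root, to be STH-signed and externally anchored
```

Signing only this root commits the platform to every record in the period: altering or omitting any one record after publication changes the root, which is what makes the per-period log externally anchorable.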
- AAMC × ERAS: Thalamus is the leading platform for graduate medical education recruitment, with deep integration into the Electronic Residency Application Service, supporting 8,000+ residency and fellowship programs across 800+ institutions.
- CHAI framework: Thalamus's public AI statement, "Artificial Intelligence: The Thalamus Way," commits to alignment with the Coalition for Health AI's principles for trustworthy AI in healthcare.
- Public benefit corporation: Thalamus operates as a public benefit corporation; its responsible-AI commitments are part of its public-benefit obligation, not a marketing posture.
- National advisory boards: Applicant (medical students, residents, fellows) and educator (UME, GME) advisory boards review AI-feature roadmaps before deployment.
