Why MIT needs to gradually and responsibly train its future doctors in the AI era
The Harvard-MIT Health Sciences & Technology Program must also ensure that technical AI literacy doesn’t come at the expense of student mental health
In the field of medicine, the reach of AI has expanded from early research initiatives to direct clinical applications. AI has already been used to catch abnormalities early through radiology and imaging, to manage patient flows in hospitals, and to simplify medical documentation through transcription systems, among other applications. However, even as AI spreads through medicine, clear guidelines on its use have yet to emerge.
While many clinicians have voluntarily adopted AI tools for tasks such as diagnostic support and note-taking, others have stayed on the periphery. This self-driven adoption, along with a sense among clinicians that they must “catch up” to better implement AI-based tools, has created subtle pressure and uncertainty in medical circles. Pioneering medical institutions such as MIT must prepare future medical professionals for this transition in a systematic way.
It is therefore essential that AI curricula be integrated into medical education while balancing the well-being of medical students against the demands placed upon them. To do so, AI literacy should be incorporated into existing medical education frameworks in a structured manner and as a core competency, alongside medical systems analysis, statistics, and medical ethics. Embedding AI into familiar curriculum structures, as opposed to open-ended self-learning, gives students clear expectations, reducing uncertainty and anxiety around AI use. Students, too, agree that AI learning is key, provided it keeps both AI’s benefits and risks in view. According to a survey of 145 pre-medical students and medical school applicants, over 66% of participants believed that AI-based skills are essential for their careers, and 81% responded that physicians should be aware of AI risks, including hallucinations, bias, data quality issues, and overreliance on results.
Throughout their training, medical students and physicians must learn to assess the benefits and drawbacks of publicly available AI models, recognize structural and implicit biases in clinical datasets, understand data privacy and the Health Insurance Portability and Accountability Act (HIPAA), and guide ethics and policy discussions. These new responsibilities, however, add to an already heavy burden of what clinicians must learn through medical training — exacerbated by stressors such as burnout, costly educational loans, and now, practicing medicine amidst rapid technological advancements with significant implications for the field.
A formal curriculum might cover a basic overview of the architecture of various AI models, data management, data confidentiality, outcome assessment methods (e.g. for diagnostic accuracy, finalizing treatment, or patient safety), along with the common pitfalls of using AI. Medical students can gain more exposure to AI-based tools through journal clubs that critically analyze model architecture and outcomes. Clerkships can offer hands-on didactics about integrating AI-based tools into clinical workflows. Quantitatively-oriented programs such as the Harvard-MIT Health Sciences & Technology (HST) Program can provide students with opportunities to build or modify simple open-source AI models using common coding platforms, which might involve AI coding agents such as Cursor or Microsoft Copilot.
While such structured learning can reduce the ambiguity and sense of overwhelm surrounding AI use in medicine, we must also offer adequate mental health support to students and residents. For instance, protected wellness time in clerkships can provide scheduled rest periods for students and recent graduates. Other measures could include longitudinal faculty mentorships with frequent student check-ins and peer support groups. Schools can also offer skills-based sessions on stress management and cognitive behavioral strategies. In addition, institutions should prioritize confidential, no-cost counseling for all students.
As medical institutions like MIT begin to train their cohorts of future healthcare professionals in the role of AI in medicine, they must ensure that the training is done strategically and holistically. We need clinicians who are not only technically proficient with these tools, but who also feel confident and mentally resilient while navigating them responsibly.
Dr. Archana Podury ‘23 earned her MD from Harvard Medical School in the Harvard-MIT Health Sciences & Technology (HST) Program and is now an admissions consultant at Inspira Advantage, a medical school admissions consulting firm.