
Anoop Muniyappa is the inaugural Director of AI Innovation in Medical Education for the Department of Medicine. We interviewed Anoop to better understand his role, his vision, and how he can support DOM faculty in their own work in AI and education.
What do you do in your new role, Director of AI Innovation in Medical Education?
Over the past few months since starting the role, I have been meeting with education and AI leaders to understand the needs and opportunities in medical education, characterize the landscape of AI tools for education, and identify priority areas where AI solutions can effectively address these needs. In the short term, I am working to build infrastructure to support the piloting and robust evaluation of educational AI tools within UCSF, identify opportunities to integrate new technologies, and develop educational resources to help faculty and trainees effectively leverage the rapidly growing suite of AI tools for both clinical care and education. Over the medium to long term, my goal is to develop a strategic roadmap for us as an institution to re-envision what medical education looks like in an AI-enabled present and future, and to lead the way in effectively integrating these tools to improve our training programs.
How did you get interested in AI?
Broadly, I think technology is the greatest enabler of systems-based approaches to solving our most pressing problems in healthcare, whether it is providing better access for our patients, offloading the administrative burden on our healthcare providers, or delivering the right information to the right person at the right time. AI is the latest technology in our toolkit, and it holds enormous potential to transform many aspects of healthcare. My interest in AI stems from not only a desire to build better systems for care delivery, but also to enable the current and next generation of physicians to optimally leverage these tools to improve care for patients and providers alike.
If a faculty member is interested in AI and education, where should they start to learn about the topic?
Great question! Here are a few of my top resources:
- The UCSF AI website has several helpful training resources for those looking to get started.
- The Versa wiki has some excellent community resources, and Versa office hours are a great place to learn and connect with the Versa community.
- The Education Scholarship Conference (ESCape) hosts weekly meetings to provide feedback on medical education projects, has featured several AI education projects, and is a great place to learn and solicit feedback.
- I have found both the monthly AI seminar series and the CoDEx seminar series very helpful in learning about ongoing work from researchers and leaders in AI and diagnostic reasoning.
This is a rapidly evolving area, and we are working to develop more training resources and spaces for collaboration.
If a faculty member has a project idea using AI in the education realm, are you able to assist them in getting started?
Absolutely! Please reach out to [email protected], and I would be happy to meet, direct you to resources, identify collaborators, or support however I can.
What are a few ways you are most excited to see AI integrated into medical education?
There are many exciting AI tools in development and potential opportunities to improve education. I would say my top three are: (1) enabling precision education through better mapping of trainee experience and personalized feedback enabled by AI, (2) simplifying the process and improving the quality of clinical feedback for trainees through AI tools, several of which are already being developed by our faculty, and (3) leveraging large language models (LLMs) to facilitate rapid information retrieval, support diagnostic reasoning, and improve evidence-based education and clinical management.

Conan MacDougall is the Associate Dean for Integrated Core Curriculum and Professor of Clinical Pharmacy in the Department of Clinical Pharmacy. He is a leader in integrating large language models (LLMs) into teaching and education. We interviewed him about how he has learned to use LLMs in his own work as a teacher and educator, to help our readers feel more comfortable using LLMs in theirs.
How did you get started working with large language models (LLMs) early on? Which LLM do you use, and why did you choose that one? Do you use Versa?
Soon after ChatGPT came out, I was in a meeting where my Dean said “well, who is going to teach people about this?” and I was not quick enough to avert my gaze! Before then, I did not know an LLM from an LLE, so I am very much self-taught but have found there are lots of great resources online that can bring people up to “good enough” speed without requiring deep technical understanding.
My LLM of choice for most tasks is ChatGPT – I pay $20/month for the Pro version, which allows access to advanced models and features, including the ability to work with files, design “custom GPTs”, and use functions like Deep Research. I do use Versa for any queries involving potentially sensitive information, including student information (even anonymized comments). Versa’s models are more than powerful enough for most queries, but right now ChatGPT has some interface advantages. I’m excited about the new functions Versa is adding, which I think will bring the two closer to parity.
Are there some things you have created using LLMs that you think would be useful to faculty members, and you want to share?
When I find there’s a task I use an LLM for on more than just a couple of occasions, I use the CustomGPT function of ChatGPT Pro to create a dedicated “bot” to perform that function. These have the specific prompts pre-built, so I don’t have to enter a complex set of instructions every time. If you have a (free or paid) ChatGPT account, you can access these directly via the links, and if you use another LLM, I’ll happily share the prompt (the specific instructions for the LLM):
- Article reviewer: after I read an article, I have the LLM read it and create a 10-question quiz on the article, to make sure I really understood it.
- Presentation feedback: this takes uploaded PowerPoints written in standard “bullet-point-ese” and makes suggestions on converting them to the assertion-evidence framework, an approach associated with better retention among learners.
- Assessment item generator: for the course I teach, I uploaded the course materials to the bot and gave it instructions on how to write assessment items according to best practices. I can then generate a whole variety of assessment items with explanations and rubrics corresponding to the course learning objectives. They are rarely perfect as generated but get me about 80% of the way to writing good items.
- Tutorbot: a custom tutor for my course that is “trained” to act as a tutor and to call on our specific course materials and objectives. In our early pilot, 80% of students found it valuable for their learning. We received an AME innovations grant to create a “cookbook” to make it easy for other faculty to build these tutorbots for their courses and host them on Versa.
Rarely do I create the prompts from scratch – the internet has libraries of prompts people have already created and tested that can be the seeds for writing prompts. Also, it turns out the LLMs are pretty good at writing prompts for themselves if you give them a vague description of what you want them to do and iterate from there!
What is the first step you would recommend for a faculty member to start using LLMs to enhance their work as a teacher and their work as an educator?
The single highest yield resource I have found for beginning in AI in education is the work of the Wharton professor, Ethan Mollick. His Substack and YouTube channel, and the repository of prompts on his lab’s website, are invaluable. Beyond that, just taking time to experiment. For a few days, try and see whether an LLM can help with each of your various work tasks (using Versa for any sensitive data!). Learn about the different models (which can be hard because their naming is inscrutable) so you understand which ones are the right tool for the job. If you were previously underwhelmed with your LLM experience and it’s been a while since you’ve used one, I’d encourage giving the newer versions a spin – the power of these models has increased substantially, especially models that incorporate “reasoning”. These models don’t just spit an answer out right away but take time to plan and iterate on their answers and allow the user to see this “thought” process, which can be very illuminating.
What’s the most important concern for faculty to be aware of when incorporating LLMs into their teaching?
In the educational space, I’m less worried about LLMs giving wrong information – their “hallucination rate” is improving, and as long as learners have adequate oversight and skills in double-checking, I think this can be overcome. That said, the closer to the patient the LLM is used and the fewer layers of supervision, the more caution I would use. Mostly, I’m worried about LLMs increasing superficiality in learning. Obviously, learners can completely bypass learning by using LLMs to cheat, presenting an LLM’s work as their own. But even well-intentioned students can use LLMs to study in ways that don’t improve their learning, such as using LLMs as an instant “CliffsNotes” generator that just summarizes information. There are some preliminary studies showing that without specific guidance, students will often use LLMs in ways that are unproductive for learning. Fortunately, it’s not hard to use “pedagogical prompting” to get the LLM to act in ways that stimulate student learning. Ultimately, there is promise for LLMs to act as personalized tutors, like the ones we are working on, that can adapt to a learner’s level of knowledge and keep them in the “zone of proximal development” for optimal learning. Then we have to think about how we want to intentionally redesign our learning experiences to best use this tool (a great perspective on this here).