I’m teaching my medical informatics elective again. This semester, eight second-year medical students joined me. Their assignment this week was to “converse” with ChatGPT and discover ways this tool can help them as medical students.
Here’s what the students did:
- Give ChatGPT the paper cases they were given during their module rotation, and ask it for a working impression and a management plan. While ChatGPT generally got the diagnosis right, the students noted that it was weak at offering differential diagnoses. When pressed for differentials, it approached them symptom by symptom instead of integrating the symptoms into a single clinical picture. My students were not impressed!
- Simulate a history-taking session. ChatGPT was given the case summary and asked to play the patient while a student took the history through a series of questions. ChatGPT embellished details that were not in the case summary! (A minimal scripted version of this exercise appears after this list.)
- Ask ChatGPT to write questions on a specific topic, at varying levels of difficulty, in the style of the USMLE. I hadn’t thought of doing that! It was able to produce both multiple-choice and case-based questions.
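The students did all of this in the ChatGPT web interface, but the patient role-play in particular is easy to script for repeated practice. Below is a minimal sketch using the OpenAI Python client; the model name, the placeholder case summary, and the exact system-prompt wording are my own assumptions, not what the class used.

```python
# Minimal sketch of the patient role-play exercise, scripted against the
# OpenAI chat API. Assumptions (not from the class): the model name, the
# placeholder case summary, and the system-prompt wording.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical case summary; in class the students pasted their paper cases.
case_summary = (
    "45-year-old man with 3 days of productive cough, fever of 38.5 C, "
    "and right-sided pleuritic chest pain. Smoker, 20 pack-years."
)

# The system message pins the model to the case summary to discourage the
# embellishment the students observed. This is a mitigation, not a
# guarantee; the model may still invent details.
messages = [
    {
        "role": "system",
        "content": (
            "You are a patient in a history-taking exercise. Answer the "
            "medical student's questions in the first person, using only "
            "the details in this case summary; if asked about something "
            f"not in it, say you are not sure. Case summary: {case_summary}"
        ),
    }
]

print("Interview the patient. Type 'quit' to end.")
while True:
    question = input("Student: ")
    if question.strip().lower() == "quit":
        break
    # Keep the full conversation so the patient stays consistent across turns.
    messages.append({"role": "user", "content": question})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat-capable model works
        messages=messages,
    )
    answer = response.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    print(f"Patient: {answer}")
```

The system message is the key design choice: constraining the model to the case summary directly targets the embellishment problem the students ran into, though in practice the constraint is soft.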
In our discussion, students realized the following:
- ChatGPT sounded like an LU4 student (like them!). The management plan it gave was based on general principles and was not really patient-specific. We discussed how AI tools have long been controversial because some doctors fear they will no longer be needed and will be replaced by AI.
- ChatGPT got the main diagnosis but failed to recognize concomitant diseases. Still, since it generated differentials from individual symptoms, one student suggested that it could be used to augment their own lists of possible differentials.
- One student confessed to not knowing exactly what to ask ChatGPT. We then discussed that if AI tools progress to the point of being genuinely useful to physicians, which may well happen within their generation, they will need to know how to use these tools properly.