On February 13th, 2026, during a routine check-up, our pediatrician noticed that my four-year-old son couldn’t see properly with his right eye. What followed was a weeks-long journey through specialists, emergency rooms, and ultimately a rare disease diagnosis. Along the way, I used Claude Code as a preparation tool and as a support in my role as a parent.
This is what I learned and what I want to share.
The discovery
A week after the pediatrician’s finding, we visited an ophthalmologist. She examined my son and was almost ready to send us home with a prescription for glasses, when she noticed an abnormality in the retina. She recommended we go to the specialized eye clinic at the Zurich University Hospital for further evaluation.
The referral process took longer than expected, so in the meantime I sought a second opinion from another ophthalmologist, who confirmed there was indeed something unusual in the retina.
Everyone’s initial suspicion was the same: three months earlier, my son had hit his eye and nose hard enough to require surgery for a broken nasal bone. A traumatic injury seemed like the obvious explanation for the eye as well.
On February 25th I decided not to wait any longer and went directly to the eye clinic’s emergency room. They ran a full set of imaging tests and gave us an appointment for the following day to discuss the results.
That evening, I organized everything I had (my son’s data, the timeline of events, the images from the examinations) into a folder and fed it all to Claude Code.
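The step of gathering everything into one folder before handing it to Claude Code can be sketched roughly like this. The folder layout, file names, and timeline entries below are my own hypothetical reconstruction, not the actual files I used:

```shell
# Hypothetical layout: one folder for the case, a subfolder for imaging.
mkdir -p case-notes/imaging

# The timeline of events as a plain markdown file.
cat > case-notes/timeline.md <<'EOF'
# Timeline
- 2026-02-13: pediatrician notices reduced vision in the right eye
- 2026-02-20: ophthalmologist finds a retinal abnormality
- 2026-02-25: eye clinic ER, full set of imaging tests
EOF

# Scans and reports go alongside it, for example:
#   cp ~/Downloads/oct-scan.jpg case-notes/imaging/
# Then start Claude Code from inside the folder and ask it to
# review everything in the directory:
#   cd case-notes && claude
```

The point is less the exact layout than having everything in one place, so the model sees the data and the timeline together rather than in fragments.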
Claude’s assessment
Claude identified the condition as Coats disease, a rare disorder (roughly 1 in 100,000) characterized by abnormal blood vessel development in the retina. It predominantly affects boys and is unilateral in 95% of cases. It is unrelated to trauma.
I don’t trust AI blindly, and I recommend against doing so. A model works with the information you provide and the patterns it has learned; it performs no physical examination and has no clinical experience. But it gave me a clear direction and helped me prepare a series of questions for the doctors the next day. For those same reasons, I treated Claude’s answer with suspicion: the disease is rare, so the prior probability of this diagnosis was very low. At the same time, the fact that Claude had converged on it from the evidence alone made it immediately more plausible.
The next day, after another four hours of checks, the doctors confirmed the diagnosis; I had not mentioned my own findings. The questions Claude had prepared helped me have a more productive conversation with the medical team, even though I was still processing the news.
Over the next two days, more tests followed. The team recommended surgery and a consultation with the clinic director the following Monday.
Preparation
That weekend, with a confirmed diagnosis, I used Claude again to build a comprehensive understanding of Coats disease:
- the Shields classification staging system
- treatment options by stage
- visual prognosis data
- recurrence rates
- the latest research from 2024–2025, including the first international consensus guidelines
Claude gathered all of this from medical publications and articles.
By Monday I had a well-organized list of questions covering:
- staging and severity
- treatment plan
- visual prognosis
- risks and complications
- follow-up protocols
The doctors were receptive to the detailed questions, and the conversations were significantly more productive than they would have been otherwise.
Where AI falls short
When we met the surgeon, the conversation was fundamentally different from what AI could provide.
Claude had given me statistical outcomes: average recurrence rates, mean visual acuity results across large patient cohorts, probability distributions based on staging. It was thorough and well-sourced. It was also, in our case, more pessimistic than the surgeon’s assessment.
The surgeon could evaluate our son’s specific anatomy, the exact pattern of telangiectasia, and draw on his experience with similar cases. His assessment accounted for individual factors that population-level statistics cannot capture. Where Claude presented average outcomes for a given stage, the surgeon could say what he expected for this particular case based on what he saw.
This is the key limitation of AI in this context: it works with aggregated data and gives you the statistical average. A surgeon works with individual cases and gives you a specific assessment. Both are useful, but they serve different purposes.
Recommendations
If you find yourself facing an unexpected medical diagnosis, here is what I’d suggest based on this experience:
Use AI to prepare, not to diagnose. The value was not in the initial identification. The value was in the days that followed, where I could rapidly build deep knowledge about a condition I had never heard of. I walked into every appointment with specific, informed questions.
Always verify with professionals. Bring what you learn to the doctors. They examine, they operate, they have clinical judgment built from direct experience. AI is a research assistant.
Start early. The moment you have information — reports, images, a timeline — organize it and start researching. The earlier you understand the condition, the more productive your medical conversations will be.
Ask detailed questions. Doctors have limited time. Arriving prepared means you can focus on the specifics that matter for your case. Knowing to ask about subfoveal nodules, combination therapy protocols, and amblyopia management timelines helped us understand the full picture faster.
Every clinical decision comes from the doctors. AI made me a better prepared parent during a difficult period, and that preparation made a real difference in how we navigated the process.