AI tools typically generate a broad damages range from the details you type in. That can help you understand categories like medical bills or long-term care.
In real Berkeley cases, however, the value swings on practical factors a form can't capture:
- Referral and follow-up gaps common in outpatient settings (missed calls, delayed specialty appointments, unclear discharge instructions).
- Timeline complexity—symptoms that worsen while you’re waiting for the next step in care.
- Documentation quality—what was charted, when it was charted, and whether providers recorded functional limitations (especially important for ongoing mobility, work ability, or caregiving capacity).
- Insurance and hospital-system processes that affect whether, and how early, evidence is preserved.
Instead of treating an AI output as a prediction, use it as a starting checklist: what evidence would be needed to support each category? That is where a legal review becomes essential.