After a harmful outcome, many people want something immediate: a rough range, a sense of whether it’s “worth pursuing,” and guidance on what categories of harm might matter.
AI tools typically approximate value from inputs such as:
- the severity of injury
- treatment duration
- medical costs already incurred
- reported ongoing symptoms
- sometimes non-economic impacts (pain, loss of enjoyment, emotional distress)
That can be useful as a checklist, but not as a case conclusion; the sketch below shows why.
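To make that concrete, here is a minimal sketch of how a tool might fold these inputs into a dollar range. Everything in it is an illustrative assumption: the field names, the per-month treatment figure, and the severity-scaled multiplier for non-economic damages are hypothetical, not any real product's method.

```python
# Hypothetical sketch of an AI-style damages estimator. All weights,
# field names, and the multiplier heuristic are assumptions for
# illustration only.
from dataclasses import dataclass

@dataclass
class CaseInputs:
    medical_costs: float    # costs already incurred, in dollars
    treatment_months: int   # duration of treatment so far
    severity: int           # 1 (minor) to 5 (severe), self-reported
    ongoing_symptoms: bool  # whether symptoms persist today

def rough_range(case: CaseInputs) -> tuple[float, float]:
    """Return a (low, high) dollar range using a crude multiplier heuristic."""
    # Economic damages: incurred costs plus an assumed per-month treatment figure.
    economic = case.medical_costs + 500 * case.treatment_months
    # Non-economic damages (pain, loss of enjoyment, distress) approximated
    # as a multiple of economic damages, scaled up by severity and by
    # whether symptoms are ongoing.
    low_mult = 1.5 + 0.3 * (case.severity - 1)
    high_mult = low_mult + (1.5 if case.ongoing_symptoms else 0.5)
    return economic * low_mult, economic * high_mult

print(rough_range(CaseInputs(medical_costs=12_000, treatment_months=6,
                             severity=3, ongoing_symptoms=True)))
```

Note what the sketch cannot do: it only sees the values typed in. If functional limits were never documented, they never reach the inputs, and the range silently omits them, which is exactly the gap described next.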
In Kirkwood, a common scenario is a patient who kept up with a work schedule near St. Louis while symptoms worsened. In that situation, people may delay documenting functional limits or assume the problem “will resolve.” An AI estimate cannot know which records were missing at the time, or how those gaps affect proof.


