Using Multimodal Data to Improve Precision of Inpatient Event Timelines

Adv Knowl Discov Data Min. 2024 May:14648:322-334. doi: 10.1007/978-981-97-2238-9_25. Epub 2024 May 1.

Abstract

Textual data often describe events in time but frequently contain little information about their specific timing, whereas complementary structured data streams may have precise timestamps but may omit important contextual information. We investigate this problem in healthcare, where we produce clinician annotations of discharge summaries, made with access to either unimodal (text) or multimodal (text and tabular) data, (i) to determine event interval timings and (ii) to train multimodal language models to locate those events in time. We find that our annotation procedures, dashboard tools, and resulting annotations yield high-quality timestamps. Specifically, the multimodal approach produces more precise timestamping, with uncertainties of the lower bound, upper bound, and duration reduced by 42% (95% CI 34-51%), 36% (95% CI 28-44%), and 13% (95% CI 10-17%), respectively. In the classification version of our task, we find that, trained on our annotations, our multimodal BERT model outperforms a unimodal BERT model and Llama-2 encoder-decoder models, with improvements in F1 scores for the upper (10% and 61%, respectively) and lower bounds (8% and 56%, respectively). The code for the annotation tool and the BERT model is available (link).

Keywords: Absolute Timeline Prediction; Multimodal Data; Temporal Information; Timeline Construction.