Posted on 2020-02-18, 08:23. Authored by Fernando Pérez-García, Rachel Sparks, John Duncan, Sebastien Ourselin.
Poster and presentation for my MPhil upgrade at the Department of Medical Physics and Biomedical Engineering, University College London, United Kingdom, on 19 February 2020.
About one third of patients with epilepsy are drug-resistant. Nearly half of drug-resistant epilepsy patients have focal epilepsy, which may be cured by resective surgery. However, only 40 to 70% of patients are seizure-free after surgery. Retrospective studies are therefore needed to correlate clinical evaluation, resected brain structures, and surgical outcome. To quantify the resected structures, the resection cavity must be segmented on the postoperative magnetic resonance image (MRI).
Manual analysis of 3D medical images suffers from intra- and inter-rater variability and is a time-consuming and tedious process. Deep learning has seen great success in tasks such as automatic video classification and image segmentation. These methods require large amounts of annotated data, which are scarce in clinical scenarios due to concerns over patient privacy, the financial and time burden of collecting data as part of a clinical trial, and the need for annotations from highly trained raters. Synthesizing training instances from larger, publicly available datasets of normal MRI scans enables self-supervised transfer learning techniques that overcome these challenges.
We developed an algorithm to simulate resection cavities on preoperative MRI scans. We gathered several public preoperative MRI datasets and curated a new dataset of pre- and postoperative scans, called EPISURG. We manually annotated 133 of the postoperative images to evaluate our models.
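The simulation algorithm is not detailed here; as a rough illustration of the general idea only, one could carve a roughly spherical region out of a preoperative volume and fill it with noise resembling cerebrospinal fluid. The sketch below is a minimal toy version with NumPy; the function name, geometry, and intensity parameters are all illustrative assumptions, not the authors' actual method.

```python
import numpy as np

def simulate_resection(volume, center, radius, csf_mean=0.1, csf_std=0.02, rng=None):
    """Toy cavity simulation: replace a spherical region of a 3D volume
    with CSF-like Gaussian noise. Returns the modified volume and the
    binary cavity mask, which can serve as a segmentation label."""
    rng = np.random.default_rng() if rng is None else rng
    # Open mesh of voxel coordinates, one axis per dimension
    grid = np.ogrid[tuple(slice(0, s) for s in volume.shape)]
    # Squared distance of every voxel to the cavity center
    dist2 = sum((g - c) ** 2 for g, c in zip(grid, center))
    mask = dist2 <= radius ** 2
    resected = volume.copy()
    # Fill the cavity with CSF-like intensities
    resected[mask] = rng.normal(csf_mean, csf_std, size=int(mask.sum()))
    return resected, mask

# Toy example on a synthetic "preoperative" volume
vol = np.full((64, 64, 64), 0.5)
resected, mask = simulate_resection(
    vol, center=(32, 32, 32), radius=10, rng=np.random.default_rng(0)
)
```

Each simulated pair (resected image, cavity mask) can then be used as a self-supervised training instance for a segmentation network, without any manual annotation.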
We present an algorithm to simulate resection cavities and show that no manual annotations are needed to perform automatic resection cavity segmentation.