This is the dataset to support the paper:

Fernando Pérez-García et al., 2021, "Transfer Learning of Deep Spatiotemporal Networks to Model Arbitrarily Long Videos of Seizures".

The paper has been accepted for publication at the 24th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2021).
A preprint is available on arXiv: https://arxiv.org/abs/2106.12014

Contents:
1) A CSV file "seizures.csv" with the following fields:
 - Subject: subject number
 - Seizure: seizure number
 - OnsetClonic: annotation marking the onset of the clonic phase
 - GTCS: whether the seizure generalises
 - Discard: whether the large view (Large), the small view (Small), both views (Yes) or neither view (No) was discarded for training
2) A folder "features_fpc_8_fps_15" containing two folders per seizure.
 The folders contain features extracted from all possible snippets from the small (S) and large (L) views. The snippets were 8 frames long and downsampled to 15 frames per second. The features are stored in ".pth" format and can be loaded with PyTorch: https://pytorch.org/docs/stable/generated/torch.load.html
 The last number in each file name is the index of the snippet's first frame. For example, the file "006_01_L_000015.pth" contains the features extracted from the snippet starting at frame 15, i.e. one second into the seizure video (15 frames at 15 frames per second). Each file contains 512 numbers representing the deep features extracted from the corresponding snippet. A short loading example is given after this list.
3) A description file, "README.txt".
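
Example: the sketch below shows one possible way to load a feature file and to convert the trailing frame index of a file name into a time offset. Only the file-name pattern, the 512-feature size and the 15 fps sampling rate come from the description above; the helper name "snippet_start_seconds" and the example path are illustrative, not part of the dataset.

from pathlib import Path

import torch

FPS = 15  # snippets were downsampled to 15 frames per second


def snippet_start_seconds(filename):
    # The trailing number of the file name (e.g. "000015" in
    # "006_01_L_000015.pth") is the index of the snippet's first frame.
    frame_index = int(Path(filename).stem.split('_')[-1])
    return frame_index / FPS


print(snippet_start_seconds('006_01_L_000015.pth'))  # 15 / 15 = 1.0 second

# Load the 512 deep features of one snippet. The subfolder name ("...")
# depends on the seizure and view; replace it with the actual folder.
features = torch.load('features_fpc_8_fps_15/.../006_01_L_000015.pth')
print(features)  # expected to contain 512 values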
Funding
NPIF EPSRC Doctoral - University College London 2017
Engineering and Physical Sciences Research Council