Fernando Pérez-García et al., 2021, Transfer Learning of Deep Spatiotemporal Networks to Model Arbitrarily Long Videos of Seizures.
The paper has been accepted for publication at the 24th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2021).
A preprint is available on arXiv: https://arxiv.org/abs/2106.12014
Contents:
1) A CSV file "seizures.csv" with the following fields:
- Subject: subject number
- Seizure: seizure number
- OnsetClonic: annotation marking the onset of the clonic phase
- GTCS: whether the seizure generalises (i.e., whether it is a generalised tonic-clonic seizure)
- Discard: which views were discarded for training: one (Large or Small), neither (No), or both (Yes)
2) A folder "features_fpc_8_fps_15" containing two folders per seizure.
The folders contain features extracted from all possible snippets of the small (S) and large (L) views. The snippets are 8 frames long and downsampled to 15 frames per second. The features are stored in ".pth" format and can be loaded with PyTorch (https://pytorch.org/docs/stable/generated/torch.load.html); see the loading sketch after this list.
The last number in the file name indicates the starting frame index. For example, the file "006_01_L_000015.pth" contains the features extracted from the snippet starting at frame 15, i.e., one second into the seizure video at 15 frames per second. Each file contains 512 numbers representing the deep features extracted from the corresponding snippet.
3) A description file, "README.txt".
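Below is a minimal Python sketch of how the metadata and features might be loaded. The exact folder layout inside "features_fpc_8_fps_15", the tensor type stored in each ".pth" file, and the example subject/seizure/view are assumptions based on the description above, not part of the release; check them against the extracted data.

    import csv
    from pathlib import Path

    import torch

    # Paths assume the archive has been extracted into the working directory.
    features_dir = Path("features_fpc_8_fps_15")

    # Read the seizure metadata described in 1).
    with open("seizures.csv", newline="") as f:
        seizures = list(csv.DictReader(f))
    print(seizures[0]["Subject"], seizures[0]["GTCS"])

    # Parse a feature file name such as "006_01_L_000015.pth" into
    # subject, seizure, view (S or L) and starting frame index.
    def parse_name(path):
        subject, seizure, view, frame = path.stem.split("_")
        return subject, int(seizure), view, int(frame)

    # Gather all snippets of one view of one seizure (subject 006, seizure 01,
    # large view is used here as an example), sort them by starting frame, and
    # stack the 512-dimensional feature vectors into a single tensor
    # (assuming each file stores a 1-D tensor of 512 values).
    paths = sorted(features_dir.rglob("006_01_L_*.pth"),
                   key=lambda p: parse_name(p)[-1])
    features = torch.stack([torch.load(p) for p in paths])
    print(features.shape)  # expected: (num_snippets, 512)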
Funding
NPIF EPSRC Doctoral - University College London 2017
Engineering and Physical Sciences Research Council