An open source convolutional neural networks platform for medical image analysis and image-guided therapy
NiftyNet is a TensorFlow-based open-source convolutional neural network (CNN) platform for research in medical image analysis and image-guided therapy. NiftyNet's modular structure is designed for sharing networks and pre-trained models. Using this modular structure you can:
Get started with established pre-trained networks using built-in tools
Adapt existing networks to your imaging data
Quickly build new solutions to your own image analysis problems
The code is available on GitHub, or you can get started quickly with the NiftyNet module available on PyPI.
NiftyNet currently supports medical image segmentation and generative adversarial networks. NiftyNet is not intended for clinical use. Other features of NiftyNet include:
Easy-to-customise interfaces of network components
Sharing networks and pre-trained models
Support for 2-D, 2.5-D, 3-D and 4-D inputs (a 2.5-D input is sketched after this list)
Efficient discriminative training with multiple-GPU support
Implementation of recent networks (HighRes3DNet, 3D U-net, V-net, DeepMedic)
Comprehensive evaluation metrics for medical image segmentation
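As an illustration of the 2.5-D input mode mentioned above, the following minimal sketch (plain NumPy, not NiftyNet code; the function name and patch sizes are illustrative assumptions) stacks a few neighbouring slices of a volume along the channel axis, so that a 2-D network gains some through-plane context.

import numpy as np

def extract_25d_patch(volume, slice_index, num_slices=5):
    """Stack `num_slices` neighbouring axial slices of a 3-D volume
    along the channel axis, yielding an (H, W, num_slices) input."""
    half = num_slices // 2
    depth = volume.shape[2]
    # Clamp neighbour indices at the volume boundaries.
    indices = np.clip(np.arange(slice_index - half, slice_index + half + 1),
                      0, depth - 1)
    return np.stack([volume[:, :, i] for i in indices], axis=-1)

volume = np.random.rand(128, 128, 64)   # toy image volume (H, W, D)
patch = extract_25d_patch(volume, slice_index=32)
print(patch.shape)                      # (128, 128, 5)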
If you use NiftyNet in your work, please cite Gibson et al. 2018 and Li et al. 2017.
The NiftyNet platform originated in software developed for Li et al. 2017. The full citations and BibTeX entries are given below.
E. Gibson, W. Li, C. Sudre, L. Fidon, D. Shakir, G. Wang, Z. Eaton-Rosen, R. Gray, T. Doel, Y. Hu, T. Whyntie, P. Nachev, M. Modat, D. C. Barratt, S. Ourselin, M. J. Cardoso and T. Vercauteren (2018) NiftyNet: a deep-learning platform for medical imaging. Computer Methods and Programs in Biomedicine, 158:113-122.
@Article{niftynet18,
  author = {Eli Gibson and Wenqi Li and Carole Sudre and Lucas Fidon and Dzhoshkun I. Shakir and Guotai Wang and Zach Eaton-Rosen and Robert Gray and Tom Doel and Yipeng Hu and Tom Whyntie and Parashkev Nachev and Marc Modat and Dean C. Barratt and Sebastien Ourselin and M. Jorge Cardoso and Tom Vercauteren},
  title = {NiftyNet: a deep-learning platform for medical imaging},
  journal = {Computer Methods and Programs in Biomedicine},
  year = {2018},
  volume = {158},
  pages = {113-122},
}
Li W., Wang G., Fidon L., Ourselin S., Cardoso M.J., Vercauteren T. (2017) On the Compactness, Efficiency, and Representation of 3D Convolutional Networks: Brain Parcellation as a Pretext Task. In: Niethammer M. et al. (eds) Information Processing in Medical Imaging. IPMI 2017. Lecture Notes in Computer Science, vol 10265. Springer, Cham. DOI: 10.1007/978-3-319-59050-9_28
@InProceedings{niftynet17,
author = {Li, Wenqi and Wang, Guotai and Fidon, Lucas and Ourselin, Sebastien and Cardoso, M. Jorge and Vercauteren, Tom},
title = {On the Compactness, Efficiency, and Representation of 3D Convolutional Networks: Brain Parcellation as a Pretext Task},
booktitle = {International Conference on Information Processing in Medical Imaging (IPMI)},
year = {2017}
}
A number of models from the literature have been (re)implemented in the NiftyNet framework and are listed below. All networks can be applied in 2-D, 2.5-D and 3-D configurations, and are reimplemented from their original presentations with their default parameters.
Kamnitsas, K., Ledig, C., Newcombe, V. F., Simpson, J. P., Kane, A. D., Menon, D. K., Rueckert, D., Glocker, B. (2017) Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation. Medical Image Analysis. DOI: 10.1016/j.media.2016.10.004
Li W., Wang G., Fidon L., Ourselin S., Cardoso M.J., Vercauteren T. (2017) On the Compactness, Efficiency, and Representation of 3D Convolutional Networks: Brain Parcellation as a Pretext Task. In: Niethammer M. et al. (eds) Information Processing in Medical Imaging. IPMI 2017. Lecture Notes in Computer Science, vol 10265. Springer, Cham. DOI: 10.1007/978-3-319-59050-9_28
Fidon, L., Li, W., Garcia-Peraza-Herrera, L.C., Ekanayake, J., Kitchen, N., Ourselin, S., Vercauteren, T. (2017) Scalable multimodal convolutional networks for brain tumour segmentation. MICCAI 2017
Çiçek, Ö., Abdulkadir, A., Lienkamp, S. S., Brox, T., and Ronneberger, O. (2016) 3D U-net: Learning dense volumetric segmentation from sparse annotation. MICCAI 2016
Milletari, F., Navab, N., & Ahmadi, S. A. (2016) V-net: Fully convolutional neural networks for volumetric medical image segmentation. 3DV 2016
Further details can be found in the networks section of the NiftyNet GitHub repository.
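To give a flavour of how these volumetric networks are built, the sketch below (written in plain TensorFlow/Keras and not taken from NiftyNet; the layer sizes, function name and 32-voxel patch size are illustrative assumptions) assembles a dilated residual 3-D convolution block in the spirit of HighRes3DNet, which enlarges the receptive field without downsampling the volume.

import tensorflow as tf

def dilated_residual_block(x, channels=16, dilation=2):
    """Two dilated 3-D convolutions with batch norm, ReLU and a skip connection."""
    shortcut = x
    for _ in range(2):
        x = tf.keras.layers.BatchNormalization()(x)
        x = tf.keras.layers.ReLU()(x)
        x = tf.keras.layers.Conv3D(channels, kernel_size=3, padding="same",
                                   dilation_rate=dilation)(x)
    # Residual connection: requires matching channel counts.
    return tf.keras.layers.Add()([shortcut, x])

# Toy volumetric patch: (batch, depth, height, width, channels).
inputs = tf.keras.Input(shape=(32, 32, 32, 16))
outputs = dilated_residual_block(inputs, channels=16, dilation=2)
model = tf.keras.Model(inputs, outputs)
model.summary()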
Publications relating to the loss functions implemented in the NiftyNet framework are listed below.
Milletari, F., Navab, N., & Ahmadi, S. A. (2016) V-net: Fully convolutional neural networks for volumetric medical image segmentation. 3DV 2016
Sudre, C. et al. (2017) Generalised Dice overlap as a deep learning loss function for highly unbalanced segmentations. DLMIA 2017
Brosch, T. et al. (2015) Deep Convolutional Encoder Networks for Multiple Sclerosis Lesion Segmentation. MICCAI 2015
Fidon, L. et al. (2017) Generalised Wasserstein Dice Score for Imbalanced Multi-class Segmentation using Holistic Convolutional Networks. MICCAI 2017 (BrainLes)
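As a concrete illustration of the Dice-based losses cited above, the sketch below (plain TensorFlow, not NiftyNet's implementation; the function names, epsilon value and assumed (batch, D, H, W, classes) tensor layout are illustrative assumptions) implements a soft Dice loss in the spirit of Milletari et al. and the generalised Dice loss of Sudre et al., which weights each class by the inverse square of its reference volume to counter class imbalance.

import tensorflow as tf

SPATIAL_AXES = [1, 2, 3]   # assumes tensors shaped (batch, D, H, W, num_classes)

def soft_dice_loss(probs, one_hot_labels, eps=1e-7):
    """One minus the mean per-class soft Dice score."""
    intersect = tf.reduce_sum(probs * one_hot_labels, axis=SPATIAL_AXES)
    denom = tf.reduce_sum(probs + one_hot_labels, axis=SPATIAL_AXES)
    dice_per_class = (2.0 * intersect + eps) / (denom + eps)
    return 1.0 - tf.reduce_mean(dice_per_class)

def generalised_dice_loss(probs, one_hot_labels, eps=1e-7):
    """Generalised Dice loss with 1 / (reference volume)^2 class weights."""
    ref_volume = tf.reduce_sum(one_hot_labels, axis=SPATIAL_AXES)
    weights = 1.0 / (tf.square(ref_volume) + eps)
    intersect = tf.reduce_sum(probs * one_hot_labels, axis=SPATIAL_AXES)
    denom = tf.reduce_sum(probs + one_hot_labels, axis=SPATIAL_AXES)
    numerator = 2.0 * tf.reduce_sum(weights * intersect, axis=-1)
    denominator = tf.reduce_sum(weights * denom, axis=-1)
    return 1.0 - tf.reduce_mean((numerator + eps) / (denominator + eps))

In NiftyNet itself the loss is selected through the application configuration rather than called directly; the functions above are only meant to convey the underlying formulas.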
NiftyNet is released under the Apache License, Version 2.0. Please see the LICENSE file in the NiftyNet source code repository for details.
This project is grateful for support from the Wellcome Trust, the Engineering and Physical Sciences Research Council (EPSRC), the National Institute for Health Research (NIHR), the Department of Health (DoH), Cancer Research UK (CRUK), King's College London (KCL), the Science and Engineering South Consortium (SES), the STFC Rutherford-Appleton Laboratory, and NVIDIA.
© The NiftyNet Consortium 2019