Large-kernel Attention for Efficient and Robust Brain Lesion Segmentation

Vision transformers are effective deep learning models for vision tasks, including medical image segmentation.

GitHub Link

The repository is available at https://github.com/liamchalcroft/mdunet

Introduction

The GitHub repository "MDUNet" by liamchalcroft presents a U-Net model built on the matrix decomposition framework proposed by Guo et al. in their paper "Visual Attention Network." The code was originally derived from NVIDIA's nnUNet implementation and was created for submission to the MICCAI 2022 BrainLes ISLES and ATLAS challenges. Because the repository's structure and format closely follow NVIDIA's implementation, users are advised to consult that project's documentation first.
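The key idea behind large-kernel attention in the Visual Attention Network is to approximate one large convolution with a stack of cheap ones: a small depth-wise convolution followed by a depth-wise dilated convolution (plus a 1x1 convolution for channel mixing). As a minimal sketch, assuming illustrative kernel sizes of 5x5 and 7x7 with dilation 3 (these hyperparameters are not read from the MDUNet code), the effective receptive field of the stack can be computed as:

```python
# Hedged sketch: effective receptive field of stacked stride-1 convolutions,
# showing how small kernels compose into a large one, as in large-kernel
# attention. Kernel sizes and dilation below are illustrative assumptions.

def receptive_field(kernels_and_dilations):
    """Effective receptive field of a stack of stride-1 convolutions,
    given as (kernel_size, dilation) pairs."""
    rf = 1
    for k, d in kernels_and_dilations:
        effective_k = (k - 1) * d + 1  # dilation spreads the kernel taps
        rf += effective_k - 1          # stride-1 composition adds the rest
    return rf

# depth-wise 5x5 conv followed by depth-wise 7x7 conv with dilation 3
print(receptive_field([(5, 1), (7, 3)]))  # -> 23
```

The two small kernels together cover a 23x23 window while using far fewer parameters and FLOPs than a single dense 23x23 convolution, which is what makes the decomposition attractive for efficient segmentation backbones.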

Content

This repo contains code used for submission to the MICCAI 2022 BrainLes ISLES and ATLAS challenges. The repo began as a fork of the NVIDIA nnUNet implementation (https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/Segmentation/nnUNet), so its structure and format are heavily based on that project, and its documentation is recommended as a first point of contact.

Alternatives & Similar Tools

LongLLaMA: handles very long text contexts, up to 256,000 tokens

LongLLaMA is a large language model designed to handle very long text contexts, up to 256,000 tokens. It's based on OpenLLaMA and uses a technique called Focused Transformer (FoT) for training. The repository provides a smaller 3B version of LongLLaMA for free use. It can also be used as a replacement for LLaMA models with shorter contexts.