
SimMatchV2: Semi-Supervised Learning with Graph Consistency

Semi-supervised image classification is one of the most fundamental problems in computer vision, as it significantly reduces the need for human labeling.

GitHub Link

The GitHub link is https://github.com/mingkai-zheng/simmatchv2

Introduction

The GitHub repository "SimMatchV2" provides code and pretrained models for the ICCV 2023 paper "SimMatchV2: Semi-Supervised Learning with Graph Consistency." The repository includes PyTorch evaluation and training code, adapted from a reference source and designed for distributed training in a slurm environment. Pretrained SimMatchV2 models for different settings are provided, along with their corresponding accuracy figures. Researchers can also find evaluation scripts and citation details in the repository. The paper's authors are Zheng, Mingkai, et al., and the work focuses on semi-supervised learning using graph consistency.

Content

This repository contains PyTorch evaluation code, training code, and pretrained models for SimMatchV2. Most of the code in this repository is adapted from here. For details, see SimMatchV2: Semi-Supervised Learning with Graph Consistency by Mingkai Zheng, Shan You, Lang Huang, Chen Luo, Fei Wang, Chen Qian, and Chang Xu.

To run the code, you will probably need to change the dataset setting (the ImagenetPercent function in data/imagenet.py) and the PyTorch DDP setting (the dist_init function in utils/dist_utils.py) for your server environment. The distributed training in this code is based on a slurm environment; the training scripts are provided in script/train.sh.

Pretrained models are also provided. If you want to test a pretrained model, download the weights from the link above and move them to the checkpoints folder. The evaluation scripts have also been provided in script/train.sh.

If you find SimMatch interesting and helpful to your research, please consider citing it:
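The labeled/unlabeled split that a function like ImagenetPercent performs can be sketched in plain Python. Note this is a hypothetical stand-in, not the repository's actual implementation: the function name, signature, and selection strategy (taking the first fraction of images per class) are assumptions for illustration only.

```python
import math

def split_labeled_unlabeled(images_by_class, percent):
    """Split a class -> image-list mapping into labeled and unlabeled subsets.

    `percent` is the fraction of images per class to treat as labeled
    (e.g. 0.1 for a 10% semi-supervised split). Hypothetical stand-in
    for the repository's ImagenetPercent dataset logic.
    """
    labeled, unlabeled = {}, {}
    for cls, images in images_by_class.items():
        # Keep at least one labeled image per class so every class is represented.
        k = max(1, math.ceil(len(images) * percent))
        labeled[cls] = images[:k]
        unlabeled[cls] = images[k:]
    return labeled, unlabeled

# Example: a 10% split over a toy two-class dataset.
data = {"cat": [f"cat_{i}.jpg" for i in range(20)],
        "dog": [f"dog_{i}.jpg" for i in range(20)]}
labeled, unlabeled = split_labeled_unlabeled(data, 0.1)
```

In a real run the mapping would be built from the ImageNet directory structure, and the unlabeled subset would feed the consistency branch of training while the labeled subset drives the supervised loss.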

Alternatives & Similar Tools

LongLLaMA: handle very long text contexts, up to 256,000 tokens

LongLLaMA is a large language model designed to handle very long text contexts, up to 256,000 tokens. It's based on OpenLLaMA and uses a technique called Focused Transformer (FoT) for training. The repository provides a smaller 3B version of LongLLaMA for free use. It can also be used as a replacement for LLaMA models with shorter contexts.