SegPrompt: Boosting Open-world Segmentation via Category-level Prompt Learning
In this work, we propose a novel training mechanism termed SegPrompt that uses category information to improve the model's class-agnostic segmentation ability for both known and unknown categories.
GitHub Link
The GitHub link is https://github.com/aim-uofa/segprompt
Introduce
The repository "aim-uofa/SegPrompt" contains the official implementation of the ICCV 2023 paper "SegPrompt: Boosting Open-World Segmentation via Category-level Prompt Learning." The authors propose SegPrompt to improve open-world segmentation through category-level prompt learning. They also introduce a new benchmark, LVIS-OW, which reorganizes the COCO and LVIS datasets into Known, Seen, and Unseen categories for better evaluation of open-world models. The repository provides dataset preparation instructions, benchmark details, and evaluation scripts. Acknowledgments are given to related repositories such as Mask2Former and Detectron2, and the authors ask for a citation if the project is used.
Content
Affiliations: 1 Zhejiang University, 2 The University of Adelaide. For installation, please follow the instructions in Mask2Former. The repository also provides the authors' proposed benchmark, LVIS-OW: first prepare the COCO and LVIS datasets and place them under $DETECTRON2_DATASETS following the Detectron2 conventions (the expected dataset structure is listed in the README), or directly use the provided command to generate the benchmark annotations from the COCO and LVIS json files. The authors thank the related repositories for their great work, and kindly ask you to cite their paper if you find the project useful.
Alternatives & Similar Tools
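As a minimal sketch of the dataset-preparation step above, the commands below create the directory layout Detectron2 conventionally expects under $DETECTRON2_DATASETS for COCO and LVIS. This is an illustration of the convention, not the repository's exact structure; consult the README for the authoritative layout and the LVIS-OW json filenames.

```shell
# Sketch: set up the $DETECTRON2_DATASETS root (defaults to ./datasets here,
# an assumption for illustration) and the conventional COCO/LVIS directories.
DETECTRON2_DATASETS="${DETECTRON2_DATASETS:-./datasets}"

# Standard Detectron2 layout: COCO images and annotations, plus an lvis dir.
mkdir -p "$DETECTRON2_DATASETS/coco/annotations" \
         "$DETECTRON2_DATASETS/coco/train2017" \
         "$DETECTRON2_DATASETS/coco/val2017" \
         "$DETECTRON2_DATASETS/lvis"

# List the prepared dataset root.
ls "$DETECTRON2_DATASETS"
```

After downloading the COCO images and the COCO/LVIS annotation json files into these directories, the repository's conversion command can generate the LVIS-OW benchmark files from them.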
Recent leading zero-shot video object segmentation (ZVOS) works focus on integrating appearance and motion information by elaborately designing feature fusion modules and applying them identically across multiple feature stages.
Google Gemini, a multimodal AI by DeepMind, processes text, audio, images, and more. Gemini outperforms in AI benchmarks, is optimized for varied devices, and has been tested for safety and bias, adhering to responsible AI practices.
Video ReTalking edits real-world talking-head videos according to input audio, producing a high-quality, lip-synced output video.
LongLLaMA is a large language model designed to handle very long text contexts, up to 256,000 tokens. It's based on OpenLLaMA and uses a technique called Focused Transformer (FoT) for training. The repository provides a smaller 3B version of LongLLaMA for free use. It can also be used as a replacement for LLaMA models with shorter contexts.