SAILOR: Structural Augmentation Based Tail Node Representation Learning

To promote the expressiveness of GNNs for tail nodes, we explore how the deficiency of structural information degrades the performance of tail nodes, and we propose a general Structural Augmentation based taIL nOde Representation learning framework, dubbed SAILOR, which jointly learns to augment the graph structure and extract more informative representations for tail nodes.

GitHub Link

The code is available at https://github.com/jie-re/sailor

Introduction

The repository "Jie-Re/SAILOR" contains the PyTorch code for the paper "SAILOR: Structural Augmentation Based Tail Node Representation Learning", presented at CIKM 2023. The paper proposes a method for learning tail node representations through structural augmentation. The repository includes an installation guide listing the necessary dependencies, running examples with different configurations, and citation information for the paper.

Content

We provide the code (in PyTorch) for our paper "SAILOR: Structural Augmentation Based Tail Node Representation Learning" (SAILOR for short), published at CIKM 2023. The following commands install the key dependencies; the others can be installed directly via pip or conda. A full (partially redundant) dependency list is given in requirements.txt.
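A minimal sketch of such an installation, assuming a conda environment with PyTorch and PyTorch Geometric as the key dependencies. The Python version, CUDA tag, and package choices below are assumptions for illustration; consult requirements.txt in the repository for the authoritative list:

```shell
# Create and activate an isolated environment (conda assumed available;
# the Python version is an assumption, not taken from the repository)
conda create -n sailor python=3.8 -y
conda activate sailor

# Install PyTorch first, since PyTorch Geometric wheels depend on it
# (the CUDA tag here is an assumption; pick one matching your driver,
# or drop the index URL for a CPU-only build)
pip install torch --index-url https://download.pytorch.org/whl/cu118
pip install torch_geometric

# Remaining dependencies from the repository's full list
pip install -r requirements.txt
```

A training run might then look like `python main.py --dataset cora`; the entry-point name and flags here are purely hypothetical, so see the repository's README for the actual running examples and configurations.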
