GitHub Link
The GitHub link is https://github.com/yfguo91/mpbn

Introduce
The project "MPBN" presents the official implementation of Membrane Potential Batch Normalization for Spiking Neural Networks, introduced at ICCV2023. The approach involves adding a Batch Normalization (BN) layer before the firing function in spiking neural networks to normalize the membrane potential again after the nonlinear activation. The provided dataset can be downloaded automatically, and instructions to begin training are given. The citation for the method is also included. Spiking Neural Networks (SNNs) as one of the biology-inspired models have received much attention recently.Content
Content

Official implementation of Membrane Potential Batch Normalization for Spiking Neural Networks (ICCV 2023). The spiking neuron is much more complex than the ordinary artificial neuron because of its spatio-temporal dynamics: the data flow regulated by the BN layer is disturbed again by the membrane potential updating operation before the firing function, i.e., the nonlinear activation. Therefore, we advocate adding another BN layer before the firing function to normalize the membrane potential again, called MPBN. The dataset will be downloaded automatically.
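As a rough illustration of the automatic download, the snippet below uses the standard torchvision CIFAR-10 loader; the actual dataset and storage path used by the repository may differ.

```python
# Assumed example of automatic dataset download via torchvision (not necessarily
# the loader used in the repository).
import torchvision
import torchvision.transforms as transforms

train_set = torchvision.datasets.CIFAR10(
    root="./data",                 # assumed download directory
    train=True,
    download=True,                 # fetches the data automatically if missing
    transform=transforms.ToTensor(),
)
print(len(train_set))              # 50000 training images
```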
Alternatives & Similar Tools

Google Gemini, a multimodal AI by Google DeepMind, processes text, audio, images, and more. Gemini achieves strong results on AI benchmarks, is optimized for a range of devices, and has been tested for safety and bias in line with responsible AI practices.
Video ReTalking edits real-world talking head videos according to input audio, producing high-quality, lip-synced output video.
LongLLaMA is a large language model designed to handle very long text contexts, up to 256,000 tokens. It's based on OpenLLaMA and uses a technique called Focused Transformer (FoT) for training. The repository provides a smaller 3B version of LongLLaMA for free use. It can also be used as a replacement for LLaMA models with shorter contexts.
LLaVA (Large Language and Vision Assistant)