When Monte-Carlo Dropout Meets Multi-Exit: Optimizing Bayesian Neural Networks on FPGA

Bayesian Neural Networks (BayesNNs) have demonstrated the ability to provide calibrated predictions for safety-critical applications such as medical imaging and autonomous driving.

GitHub Link

The GitHub link is https://github.com/os-hxfan/bayesnn_fpga

Introduction

This GitHub repository, titled "FPGA-based hardware acceleration for dropout-based Bayesian Neural Networks," offers an FPGA-based accelerator for dropout-based Bayesian Neural Networks (BayesNNs). The repository supports multi-exit Monte Carlo Dropout (MCD) and multi-exit Masksembles on FPGA. It contains software artifacts for evaluating accuracy and expected calibration error (ECE), as well as hardware artifacts for assessing the proposed accelerator's performance. The software is based on PyTorch, while the hardware implementation uses HLS4ML and QKeras. The repository provides code, models, datasets, and tools for both software and hardware evaluation, with detailed setup instructions in the README files. The associated paper, "When Monte-Carlo Dropout Meets Multi-Exit: Optimizing Bayesian Neural Networks on FPGA" by Fan et al., was presented at the 60th ACM/IEEE Design Automation Conference (DAC) in 2023.
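To make the core idea concrete, here is a minimal numpy sketch of Monte Carlo Dropout inference: dropout stays active at prediction time, the network is run T times with fresh random masks, and the softmax outputs are averaged to approximate the Bayesian predictive distribution. The network, weights, and the variance-based uncertainty proxy are illustrative assumptions, not the repository's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny 2-layer classifier (illustrative random weights,
# not the repository's models).
W1 = rng.standard_normal((16, 32)) * 0.1
W2 = rng.standard_normal((32, 3)) * 0.1

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def forward(x, p=0.5):
    h = np.maximum(x @ W1, 0.0)
    # Dropout stays ON at inference time: a fresh random mask per pass,
    # with inverted-dropout scaling to keep activations unbiased.
    mask = rng.random(h.shape) > p
    h = h * mask / (1.0 - p)
    return softmax(h @ W2)

def mcd_predict(x, T=20):
    # T stochastic forward passes approximate the predictive distribution.
    samples = np.stack([forward(x) for _ in range(T)])
    mean = samples.mean(axis=0)             # predictive mean
    unc = samples.var(axis=0).sum(axis=-1)  # simple per-sample uncertainty proxy
    return mean, unc

x = rng.standard_normal((4, 16))
mean, unc = mcd_predict(x)
```

The multi-exit variant in the paper attaches early classifier heads to intermediate layers, so the T passes can reuse shared computation; the sketch above shows only the single-exit sampling loop.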

Content

FPGA-based hardware acceleration for dropout-based Bayesian Neural Networks (BayesNNs). We support both multi-exit Monte Carlo Dropout (MCD) and multi-exit Masksembles on FPGA. This repo contains the artifacts of our DAC'23 paper and TCAD'23 submission. The software is based on PyTorch, and the hardware implementation is based on HLS4ML and QKeras. Our paper is online now (link); if you find the code and paper helpful, please cite us and give this repo a star.
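Masksembles, the second method the repo supports, replaces MCD's fresh random masks with a small fixed set of masks that are reused across forward passes, making the sampling deterministic and hardware-friendly. A hedged numpy sketch of that idea (the published method constructs its masks with a controlled overlap; here we simply fix M random masks for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

M, HIDDEN = 4, 32
# Fixed set of M binary masks, generated once and reused at every
# inference (illustrative random masks; the real method controls
# the overlap between masks).
masks = (rng.random((M, HIDDEN)) > 0.5).astype(float)

W1 = rng.standard_normal((16, HIDDEN)) * 0.1
W2 = rng.standard_normal((HIDDEN, 3)) * 0.1

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def masksembles_predict(x):
    h = np.maximum(x @ W1, 0.0)
    # One deterministic pass per fixed mask, then average the outputs.
    samples = np.stack([softmax((h * m) @ W2) for m in masks])
    return samples.mean(axis=0)

x = rng.standard_normal((4, 16))
mean = masksembles_predict(x)
```

Because the masks are known at compile time, an FPGA implementation can prune the masked multiplications away entirely, which is part of what makes this formulation attractive for hardware.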

Alternatives & Similar Tools

FocusFlow: Boosting Key-Points Optical Flow Estimation for Autonomous Driving

Based on the modeling method, we present FocusFlow, a framework consisting of 1) a mix loss function that combines a classic photometric loss with our proposed Conditional Point Control Loss (CPCL) for diverse point-wise supervision; and 2) a conditioned controlling model that replaces the conventional feature encoder with our proposed Condition Control Encoder (CCE).
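To illustrate the "mix loss" idea, here is a minimal numpy sketch: a weighted sum of a brightness-constancy photometric term and a point-wise flow error evaluated only at key points. Both terms and the weighting are simplified stand-ins; in particular, `point_loss` is only a placeholder for the paper's actual CPCL formulation.

```python
import numpy as np

def photometric_loss(img1, img2_warped):
    # Classic brightness-constancy term: intensity difference between
    # frame 1 and frame 2 warped by the predicted flow (warping assumed
    # to have been done by the caller).
    return float(np.abs(img1 - img2_warped).mean())

def point_loss(pred_flow, gt_flow, keypoint_mask):
    # Point-wise supervision restricted to key points; a simplified
    # stand-in for the proposed Conditional Point Control Loss (CPCL).
    err = np.linalg.norm(pred_flow - gt_flow, axis=-1)
    return float(err[keypoint_mask].mean())

def mix_loss(img1, img2_warped, pred_flow, gt_flow, keypoint_mask, alpha=0.5):
    # Weighted combination of the two supervision signals.
    return (1 - alpha) * photometric_loss(img1, img2_warped) \
        + alpha * point_loss(pred_flow, gt_flow, keypoint_mask)

rng = np.random.default_rng(0)
img1 = rng.random((8, 8))
img2_warped = rng.random((8, 8))
pred_flow = rng.standard_normal((8, 8, 2))
gt_flow = rng.standard_normal((8, 8, 2))
keypoint_mask = np.zeros((8, 8), dtype=bool)
keypoint_mask[::4, ::4] = True  # a few illustrative key-point locations
loss = mix_loss(img1, img2_warped, pred_flow, gt_flow, keypoint_mask)
```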

LongLLaMA: handle very long text contexts, up to 256,000 tokens

LongLLaMA is a large language model designed to handle very long text contexts, up to 256,000 tokens. It's based on OpenLLaMA and uses a technique called Focused Transformer (FoT) for training. The repository provides a smaller 3B version of LongLLaMA for free use. It can also be used as a replacement for LLaMA models with shorter contexts.