Bio-SIEVE: Exploring Instruction Tuning Large Language Models for Systematic Review Automation
Medical systematic reviews can be very costly and resource intensive.
GitHub Link
The GitHub link is https://github.com/ambroser53/bio-sieve
Introduction
The project "Bio-SIEVE" explores the use of Large Language Models (LLMs) for automating literature screening in medical systematic reviews. The study focuses on training LLMs to perform abstract screening against review-specific selection criteria. The best model developed, named Bio-SIEVE, outperforms both ChatGPT and traditional methods and generalises better across medical domains. The study also investigates multi-task training but finds that the single-task Bio-SIEVE performs better. The models, code, and dataset information are released for reproducibility. The project's models, training process, and evaluation on various datasets are detailed below, highlighting its potential for streamlining biomedical systematic reviews.
Content
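To illustrate the screening task described above, here is a minimal Python sketch of how an instruction prompt for include/exclude abstract screening might be assembled. The template, field names, and example criteria are illustrative assumptions, not the exact Bio-SIEVE prompt format.

```python
# Hypothetical sketch of abstract screening with an instruction-tuned LLM.
# The prompt wording and structure below are assumptions for illustration,
# not the exact template used by Bio-SIEVE.

def build_screening_prompt(objective, inclusion, exclusion, title, abstract):
    """Assemble an include/exclude instruction prompt from review criteria."""
    incl = "\n".join(f"- {c}" for c in inclusion)
    excl = "\n".join(f"- {c}" for c in exclusion)
    return (
        "Decide whether the study below should be included in the systematic review.\n"
        f"Objective: {objective}\n"
        f"Inclusion criteria:\n{incl}\n"
        f"Exclusion criteria:\n{excl}\n"
        f"Title: {title}\n"
        f"Abstract: {abstract}\n"
        "Answer with INCLUDE or EXCLUDE."
    )

# Illustrative example criteria and abstract (not from the real dataset).
prompt = build_screening_prompt(
    objective="Assess the efficacy of drug X for condition Y",
    inclusion=["randomised controlled trial", "adult participants"],
    exclusion=["animal studies"],
    title="A randomised trial of drug X in adults with condition Y",
    abstract="We randomised 200 adults with condition Y to drug X or placebo...",
)
print(prompt)
```

The completed prompt would then be passed to the instruction-tuned model, whose generated INCLUDE/EXCLUDE answer serves as the screening decision.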
The adapter weights for the four best models trained as part of this project can be found and used from HuggingFace.

Instruct Cochrane consists of 5 main splits, as detailed in the table below. The dataset can be constructed from separate lists of DOIs, as described in data/README.md.

Models are all trained using a modified version of the QLoRA training script (Dettmers et al.). An example training script with the parameters needed to recreate our model from the dataset is given below.

Models are evaluated on four datasets: Test, Subsets, Safety-First, and Irrelevancy. Test evaluates performance on the raw Cochrane reviews. Subsets allows comparison with logistic regression baselines, as it permits k-fold cross-validation while training per review, simulating the active learning methods in the existing literature. Safety-First better approximates the include/exclude process on abstracts and titles alone; the Test set reflects the final decision after full-text screening, so it is not always possible to derive that decision from the abstract and title. Irrelevancy is based on the Subsets, wherein abstracts from completely different reviews are tested to evaluate whether the model can exclude samples far from the decision boundary. Details on using the evaluation scripts can be found in evaluation/README.md.
Alternatives & Similar Tools
Google Gemini, a multimodal AI by DeepMind, processes text, audio, images, and more. Gemini performs strongly on AI benchmarks, is optimized for a range of devices, and has been tested for safety and bias, adhering to responsible AI practices.
AI-Powered Health Platform
Video ReTalking generates advanced real-world talking head videos from input audio, producing high-quality, lip-synced results that can be applied to complex real-world problems.
LongLLaMA is a large language model designed to handle very long text contexts, up to 256,000 tokens. It's based on OpenLLaMA and uses a technique called Focused Transformer (FoT) for training. The repository provides a smaller 3B version of LongLLaMA for free use. It can also be used as a replacement for LLaMA models with shorter contexts.