A blazing fast inference solution for text embeddings models
Updated Jul 16, 2024 - Rust
LSP server leveraging LLMs for code completion (and more?)
A work in progress building out Rust solutions for MLOps
An LLM interface (chat bot) implemented in pure Rust using HuggingFace/Candle over Axum WebSockets, an SQLite database, and a Leptos (Wasm) frontend packaged with Tauri!
A Rust CLI that summarizes text with pre-trained models
A Rust embeddings API serving intfloat/multilingual-e5-large using huggingface/candle with CUDA enabled
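Several of these projects serve embedding vectors over an API. A common client-side step once vectors are returned is ranking by cosine similarity; the sketch below is illustrative dependency-free Rust, not code from any repository listed here.

```rust
// Hypothetical post-processing for an embeddings API client:
// compute cosine similarity between two returned embedding vectors.
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    // Dot product of the two vectors.
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    // Euclidean norms of each vector.
    let na: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (na * nb)
}

fn main() {
    // Toy 3-dimensional "embeddings"; a real model like
    // multilingual-e5-large returns 1024-dimensional vectors.
    let query = [0.1_f32, 0.2, 0.3];
    let doc = [0.1_f32, 0.2, 0.3];
    println!("{:.3}", cosine_similarity(&query, &doc));
}
```

Identical vectors score 1.0 and orthogonal vectors score 0.0, which is why cosine similarity is the usual ranking metric for normalized sentence embeddings.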
Fullstack chatbot built using Rust. Made using Candle, Leptos, Actix, Tokio and Tailwind. Uses quantized Mistral 7B Instruct v0.1 GGUF models.
A lightweight Rust application to test interaction with large language models. Currently supports running GGUF quantized models with hardware acceleration.
Web app with Spotify API integration, automated dockerization using GitHub Actions, and Hugging Face integration
GIACC - Generate Images, Art, Code and Conversations
A Rust wrapper over chat APIs from Hugging Face and various reverse-engineered Python libraries
Rocket translate is a blazingly fast translation API written in Rust.
Rust native ready-to-use NLP pipelines and transformer-based models (BERT, DistilBERT, GPT2,...)
Rust-based AI Chatbot powered by the Leptos framework and Hugging Face model, delivering intelligent and engaging conversations. Explore the future of conversational AI with this efficient and versatile open-source project. 🚀 #Rust #AI #Chatbot