inferless

Building the world's most reliable serverless GPU inference offering. In private beta.

GitHub Data

Followers: 41
Following: 0

AI Project

Public repos: 152
Public gists: 0

deepseek-r1-distill-qwen-32b

A distilled DeepSeek-R1 variant built on Qwen2.5-32B, fine-tuned with curated data for enhanced performance and efficiency. <metadata> gpu: A100 | collections: ["vLLM"] </metadata>
star: 5 | fork: 13
language: Python
created at: 2025-01-27
updated at: 2025-03-11
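Since this template is tagged with the vLLM collection, a deployment of it would typically be queried through vLLM's OpenAI-compatible HTTP API. The sketch below, using only the standard library, builds a `/v1/completions` request body and sends it; the endpoint URL and model name are placeholders, and the real Inferless deployment URL will differ.

```python
import json
import urllib.request

# Placeholder endpoint for a running vLLM server; an actual
# Inferless deployment exposes its own URL.
VLLM_URL = "http://localhost:8000/v1/completions"

def build_completion_request(prompt, model="deepseek-r1-distill-qwen-32b",
                             max_tokens=512, temperature=0.6):
    """Build the JSON body for vLLM's OpenAI-compatible completions API."""
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

def complete(prompt):
    # Network call; only works against a live vLLM server.
    body = json.dumps(build_completion_request(prompt)).encode()
    req = urllib.request.Request(
        VLLM_URL, data=body,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["text"]
```

The request-building helper is separated from the network call so the payload shape can be inspected or reused with any HTTP client.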

huatuogpt-o1-70b

A medical LLM built on LLaMA-3.1-70B, employing detailed step-by-step reasoning for complex medical problem-solving. <metadata> gpu: A100 | collections: ["HF Transformers","Variable Inputs"] </metadata>
star: 0 | fork: 0
language: Python
created at: 2025-01-10
updated at: 2025-03-05

vLLM-GGUF-model-template

star: 1 | fork: 1
language: Python
created at: 2024-08-13
updated at: 2025-02-14

Llama-2-7B-Chat-GGUF

The Llama-2-7B-Chat-GGUF model is part of Meta's Llama 2 family, a collection of pretrained and fine-tuned generative text models ranging from 7 billion to 70 billion parameters. This repository contains the quantized GGUF 7B fine-tuned model, optimized for dialogue use cases.
star: 0 | fork: 0
language: Python
created at: 2024-08-06
updated at: 2024-09-24
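Because the chat weights were fine-tuned on Llama 2's `[INST]`/`<<SYS>>` chat template, raw prompts should be wrapped in that format before inference. A minimal sketch: the formatter below follows the published single-turn template, and the runner assumes the `llama-cpp-python` package plus a local GGUF file whose name is hypothetical.

```python
def format_llama2_chat(user_message,
                       system_prompt="You are a helpful assistant."):
    """Wrap a single-turn message in the Llama 2 chat prompt template
    the chat fine-tune was trained on."""
    return (f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
            f"{user_message} [/INST]")

def run_gguf(prompt, model_path="llama-2-7b-chat.Q4_K_M.gguf"):
    # Requires llama-cpp-python and a local GGUF file; the filename
    # above is an assumption, not a path from this repository.
    from llama_cpp import Llama
    llm = Llama(model_path=model_path, n_ctx=2048)
    out = llm(format_llama2_chat(prompt), max_tokens=256)
    return out["choices"][0]["text"]
```

Keeping the template in a small helper makes it easy to verify the exact prompt string the model receives.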

Llama-3.1-8B-Instruct-GGUF

star: 0 | fork: 3
language: Python
created at: 2024-08-02
updated at: 2024-09-11

stable-diffusion-3

A latent diffusion model fine-tuned on diverse image–text pairs, balancing quality and speed. <metadata> gpu: A10 | collections: ["Diffusers"] </metadata>
star: 0 | fork: 2
language: Python
created at: 2024-06-21
updated at: 2025-03-04

google-Paligemma-3b

PaliGemma is a cutting-edge open vision-language model (VLM) developed by Google. It is designed to understand and generate detailed insights from both images and text, making it a powerful tool for tasks such as image captioning, visual question answering, object detection, and object segmentation.
star: 4 | fork: 1
language: Python
created at: 2024-05-20
updated at: 2025-02-03

Animagine-xl-3.0

Animagine XL 3.0 is an open-source anime text-to-image model that builds on its predecessor, Animagine XL 2.0. Developed on top of Stable Diffusion XL, this iteration delivers notably improved image generation.
star: 3 | fork: 10
language: Python
created at: 2024-03-24
updated at: 2025-02-03

Playground-v2.5

Playground v2.5 is a diffusion-based text-to-image generative model and the successor to Playground v2, positioned as a state-of-the-art open-source model in aesthetic quality. Its developers' user studies report that it outperforms SDXL, Playground v2, PixArt-α, DALL-E 3, and Midjourney 5.2.
star: 7 | fork: 3
language: Python
created at: 2024-03-21
updated at: 2025-02-12

sdxl-lightning

A lightning-fast text-to-image generation model that generates high-quality 1024px images in a few steps. <metadata> gpu: T4 | collections: ["Diffusers"] </metadata>
star: 3 | fork: 5
language: Python
created at: 2024-03-19
updated at: 2025-03-04

stable-cascade

A cascaded text-to-image diffusion model that sequentially refines outputs for enhanced detail, resolution, and overall image quality. <metadata> gpu: T4 | collections: ["Diffusers"] </metadata>
star: 0 | fork: 1
language: Python
created at: 2024-03-19
updated at: 2025-03-04

stable-diffusion-2-1

This stable-diffusion-2-1 model is fine-tuned from stable-diffusion-2 (768-v-ema.ckpt) with an additional 55k steps on the same dataset (with punsafe=0.1), and then fine-tuned for another 155k extra steps with punsafe=0.98.
star: 0 | fork: 7
language: Python
created at: 2024-02-07
updated at: 2025-02-03

stable-video-diffusion

This model converts a single still image into a coherent video sequence with consistent, realistic motion. <metadata> gpu: T4 | collections: ["Diffusers"] </metadata>
star: 6 | fork: 4
language: Python
created at: 2024-02-06
updated at: 2025-03-04

ComfyUI

ComfyUI is a node-based GUI for Stable Diffusion. This template imports ComfyUI into Inferless.
star: 2 | fork: 5
language: Python
created at: 2023-12-01
updated at: 2025-02-03

stable-diffusion-xl-turbo

A distilled and cost-effective variant of SDXL that delivers high-quality text-to-image generation with accelerated inference speed. <metadata> gpu: T4 | collections: ["Diffusers"] </metadata>
star: 3 | fork: 10
language: Python
created at: 2023-11-29
updated at: 2025-03-03

stable-diffusion-s3-image-save

Uses Stable Diffusion to generate images and automatically uploads them to an S3 bucket. <metadata> gpu: A100 | collections: ["S3 Storage", "Complex Outputs"] </metadata>
star: 0 | fork: 0
language: Python
created at: 2023-11-27
updated at: 2025-03-03
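The generate-then-upload flow this template describes can be sketched in two parts: a pure helper that builds a unique, date-partitioned object key, and an uploader using boto3's `upload_fileobj`. The bucket name is a placeholder, and the upload requires valid AWS credentials.

```python
import datetime
import io
import uuid

def make_s3_key(prefix="generated-images", ext="png"):
    """Build a unique, date-partitioned S3 object key for one image."""
    today = datetime.date.today().isoformat()
    return f"{prefix}/{today}/{uuid.uuid4().hex}.{ext}"

def upload_image_bytes(png_bytes, bucket="my-inference-outputs"):
    # boto3 call; needs AWS credentials and an existing bucket.
    # The bucket name here is a placeholder, not the template's config.
    import boto3
    s3 = boto3.client("s3")
    key = make_s3_key()
    s3.upload_fileobj(io.BytesIO(png_bytes), bucket, key,
                      ExtraArgs={"ContentType": "image/png"})
    return f"s3://{bucket}/{key}"
```

Date-partitioned keys keep a high-volume bucket browsable and make lifecycle rules straightforward to apply.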

stable-diffusion-webhook

Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input.
star: 1 | fork: 1
language: Python
created at: 2023-11-14
updated at: 2024-10-04

stabilityai-stable-diffusion-xl

Generates high-quality images from text prompts using XL Refiner. <metadata> gpu: A100 | collections: ["Diffusers"] </metadata>
star: 0 | fork: 12
language: Python
created at: 2023-10-31
updated at: 2025-03-04

stable-diffusion-v1-5

A text-to-image model by Stability AI, renowned for generating high-quality, diverse images from text prompts. <metadata> gpu: T4 | collections: ["Diffusers"] </metadata>
star: 0 | fork: 1
language: Python
created at: 2023-10-31
updated at: 2025-03-04

stable-diffusion-2-inpainting

An advanced text-guided inpainting model that fills masked image regions with contextually coherent, high-quality details. <metadata> gpu: T4 | collections: ["Diffusers"] </metadata>
star: 1 | fork: 2
language: Python
created at: 2023-10-09
updated at: 2025-03-04

Llama-2-7B-GPTQ

Llama 2 is a collection of pretrained and fine-tuned generative text models ranging from 7 billion to 70 billion parameters. This repository contains the GPTQ-quantized 7B fine-tuned model, optimized for dialogue use cases and converted to the Hugging Face Transformers format.
star: 0 | fork: 12
language: Python
created at: 2023-08-08
updated at: 2024-09-12

dreamshaper

A ControlNet model designed for Stable Diffusion, providing brightness adjustment for colorizing or recoloring images. <metadata> gpu: T4 | collections: ["Diffusers"] </metadata>
star: 0 | fork: 2
language: Python
created at: 2023-07-26
updated at: 2025-03-04

stable-diffusion-controlnet

This model uses SD-ControlNet-Canny, guiding inpainting with Canny edge maps to produce consistent, detailed edits. <metadata> gpu: T4 | collections: ["Diffusers"] </metadata>
star: 0 | fork: 2
language: Python
created at: 2023-07-19
updated at: 2025-03-04