inferless

Building the world's most reliable serverless GPU inference offering. In private beta.

GitHub Data

Followers 33
Following 0

AI Projects

Public repos: 135
Public gists: 0

Llama-2-7B-Chat-GGUF

The Llama-2-7B-Chat-GGUF model is part of Meta's Llama 2 family, a collection of pretrained and fine-tuned generative text models ranging from 7 billion to 70 billion parameters. This is the repository for the quantized GGUF 7B fine-tuned model, optimized for dialogue use cases.
star: 0
fork: 0
language: Python
created at: 2024-08-06
updated at: 2024-09-24
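
A minimal sketch of serving a GGUF checkpoint like this one with llama-cpp-python; the local file name and parameters are illustrative assumptions, not taken from this repo:

```python
# Chat completion with a quantized GGUF model via llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-2-7b-chat.Q4_K_M.gguf",  # hypothetical local file
    n_gpu_layers=-1,  # offload all layers to the GPU when one is available
    n_ctx=4096,       # context window
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain GGUF quantization in one sentence."},
    ],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```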

Llama-3.1-8B-Instruct-GGUF

star: 0
fork: 3
language: Python
created at: 2024-08-02
updated at: 2024-09-11

Stable-diffusion-3

star: 0
fork: 1
language: Python
created at: 2024-06-21
updated at: 2024-09-11

google-Paligemma-3b

PaliGemma is a cutting-edge open vision-language model (VLM) developed by Google. It is designed to understand and generate detailed insights from both images and text, making it a powerful tool for tasks such as image captioning, visual question answering, object detection, and object segmentation.
star: 4
fork: 1
language: Python
created at: 2024-05-20
updated at: 2024-10-24
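
A minimal sketch of visual question answering with PaliGemma through transformers; the mix checkpoint id and image URL are assumptions, and the upstream weights are gated on Hugging Face:

```python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

model_id = "google/paligemma-3b-mix-224"  # assumed checkpoint
model = PaliGemmaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16
).to("cuda")
processor = AutoProcessor.from_pretrained(model_id)

# The mix checkpoints expect a task prefix such as "answer en" or "caption en".
image = Image.open(requests.get("https://example.com/cat.png", stream=True).raw)
inputs = processor(
    text="answer en What is in the image?", images=image, return_tensors="pt"
).to("cuda")

with torch.inference_mode():
    generated = model.generate(**inputs, max_new_tokens=32)
print(processor.decode(generated[0], skip_special_tokens=True))
```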

Animagine-xl-3.0

Animagine XL 3.0 is the latest version of the sophisticated open-source anime text-to-image model, building upon the capabilities of its predecessor, Animagine XL 2.0. Built on Stable Diffusion XL, this iteration boasts superior image generation with notable improvements.
star: 3
fork: 8
language: Python
created at: 2024-03-24
updated at: 2024-09-23
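
A minimal sketch, assuming the upstream cagliostrolab/animagine-xl-3.0 checkpoint and its tag-style prompting convention:

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "cagliostrolab/animagine-xl-3.0", torch_dtype=torch.float16
).to("cuda")

# Animagine is trained on Danbooru-style tags rather than natural-language prompts.
image = pipe(
    prompt="1girl, solo, upper body, night sky, masterpiece, best quality",
    negative_prompt="lowres, bad anatomy, bad hands, worst quality",
    width=1024, height=1024, num_inference_steps=28, guidance_scale=7.0,
).images[0]
image.save("animagine.png")
```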

Playground-v2.5

Playground v2.5 is a diffusion-based text-to-image generative model, and a successor to Playground v2. Playground v2.5 is the state-of-the-art open-source model in aesthetic quality. Our user studies demonstrate that our model outperforms SDXL, Playground v2, PixArt-α, DALL-E 3, and Midjourney 5.2.
star: 6
fork: 2
language: Python
created at: 2024-03-21
updated at: 2024-10-04
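
A minimal sketch via diffusers, assuming the playgroundai/playground-v2.5-1024px-aesthetic checkpoint; the upstream card recommends a low guidance scale of around 3:

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "playgroundai/playground-v2.5-1024px-aesthetic",
    torch_dtype=torch.float16, variant="fp16",
).to("cuda")

image = pipe(
    "a cozy cabin in a snowy forest at dusk",
    num_inference_steps=50, guidance_scale=3.0,
).images[0]
image.save("playground.png")
```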

SDXL-Lightning

SDXL-Lightning is a lightning-fast text-to-image generation model. It can generate high-quality 1024px images in a few steps. For more information, please refer to our research paper: SDXL-Lightning: Progressive Adversarial Diffusion Distillation.
star: 3
fork: 5
language: Python
created at: 2024-03-19
updated at: 2024-09-12
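
A minimal 4-step sketch following the pattern from the upstream ByteDance/SDXL-Lightning card: swap the distilled UNet into an SDXL pipeline and use a trailing timestep schedule. The repo and file names are assumptions:

```python
import torch
from diffusers import EulerDiscreteScheduler, StableDiffusionXLPipeline, UNet2DConditionModel
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

base = "stabilityai/stable-diffusion-xl-base-1.0"
ckpt = hf_hub_download("ByteDance/SDXL-Lightning", "sdxl_lightning_4step_unet.safetensors")

unet = UNet2DConditionModel.from_config(base, subfolder="unet").to("cuda", torch.float16)
unet.load_state_dict(load_file(ckpt, device="cuda"))

pipe = StableDiffusionXLPipeline.from_pretrained(
    base, unet=unet, torch_dtype=torch.float16, variant="fp16"
).to("cuda")
pipe.scheduler = EulerDiscreteScheduler.from_config(
    pipe.scheduler.config, timestep_spacing="trailing"
)

# Distilled checkpoints use very few steps and no classifier-free guidance.
image = pipe("a lighthouse at sunset", num_inference_steps=4, guidance_scale=0).images[0]
image.save("lightning.png")
```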

Stable-cascade

The Stable-cascade model is built on the Würstchen architecture; its main difference from models like Stable Diffusion is that it works in a much smaller latent space. Why is this important? The smaller the latent space, the faster inference runs and the cheaper training becomes.
star: 0
fork: 1
language: Python
created at: 2024-03-19
updated at: 2024-09-11
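
The small latent space shows up in practice as a two-stage pipeline: a prior produces compact image embeddings, and a decoder expands them into pixels. A minimal sketch using the diffusers Stable Cascade pipelines, with model ids assumed from the Stability AI checkpoints:

```python
import torch
from diffusers import StableCascadeDecoderPipeline, StableCascadePriorPipeline

prompt = "an astronaut riding a horse, detailed illustration"

prior = StableCascadePriorPipeline.from_pretrained(
    "stabilityai/stable-cascade-prior", torch_dtype=torch.bfloat16
).to("cuda")
prior_out = prior(prompt=prompt, num_inference_steps=20, guidance_scale=4.0)

decoder = StableCascadeDecoderPipeline.from_pretrained(
    "stabilityai/stable-cascade", torch_dtype=torch.float16
).to("cuda")
image = decoder(
    image_embeddings=prior_out.image_embeddings.to(torch.float16),
    prompt=prompt, num_inference_steps=10, guidance_scale=0.0,
).images[0]
image.save("cascade.png")
```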

stable-diffusion-2-1

This stable-diffusion-2-1 model is fine-tuned from stable-diffusion-2 (768-v-ema.ckpt): first for an additional 55k steps on the same dataset (with punsafe=0.1), and then for another 155k steps with punsafe=0.98.
star: 0
fork: 7
language: Python
created at: 2024-02-07
updated at: 2024-09-12

stable-video-diffusion

Stable Video Diffusion (SVD) Image-to-Video is a latent diffusion model trained to generate short video clips from a single conditioning image. It was trained to generate 25 frames at a resolution of 576x1024 given a context frame of the same size, fine-tuned from SVD Image-to-Video [14 frames]. The widely used f8-decoder was also fine-tuned for temporal consistency.
star: 6
fork: 4
language: Python
created at: 2024-02-06
updated at: 2024-09-11
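
A minimal image-to-video sketch with diffusers; the checkpoint id and input image are assumptions:

```python
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import export_to_video, load_image

pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    torch_dtype=torch.float16, variant="fp16",
).to("cuda")

# The context frame must match the training resolution of 576x1024 (height x width).
image = load_image("input.png").resize((1024, 576))
frames = pipe(image, num_frames=25, decode_chunk_size=8).frames[0]
export_to_video(frames, "generated.mp4", fps=7)
```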

ComfyUI

ComfyUI is a node-based GUI for Stable Diffusion. In this template, we import ComfyUI on Inferless.
star: 2
fork: 4
language: Python
created at: 2023-12-01
updated at: 2024-09-15

stable-diffusion-xl-turbo

SDXL-Turbo is a distilled version of SDXL 1.0, trained for real-time synthesis. SDXL-Turbo is based on a novel training method called Adversarial Diffusion Distillation (ADD) (see the technical report), which allows sampling large-scale foundational image diffusion models in 1 to 4 steps at high image quality.
star: 3
fork: 8
language: Python
created at: 2023-11-29
updated at: 2024-09-11
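
A minimal single-step sketch with diffusers; because ADD-distilled models are trained without classifier-free guidance, guidance_scale is set to 0:

```python
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

image = pipe(
    "a photo of a red fox in the snow",
    num_inference_steps=1, guidance_scale=0.0,
).images[0]
image.save("turbo.png")
```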

stable-diffusion-s3-image-save

This template uses the same stable-diffusion-2-1 model described above (fine-tuned from stable-diffusion-2, 768-v-ema.ckpt) and, as the name suggests, saves the generated images to Amazon S3.
star: 0
fork: 0
language: Python
created at: 2023-11-27
updated at: 2024-09-13
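
A minimal sketch of the S3-save step only, assuming the image was already generated by a diffusers pipeline; the bucket and key are hypothetical, and boto3 picks up AWS credentials from the environment:

```python
import io

import boto3

def save_image_to_s3(image, bucket="my-inference-outputs", key="outputs/result.png"):
    """Serialize a PIL image to PNG in memory and upload it to S3."""
    buf = io.BytesIO()
    image.save(buf, format="PNG")
    buf.seek(0)
    boto3.client("s3").upload_fileobj(buf, bucket, key)
    return f"s3://{bucket}/{key}"
```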

stable-diffusion-webhook

Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input.
star: 1
fork: 1
language: Python
created at: 2023-11-14
updated at: 2024-10-04
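
Judging by the template name, the generated image is delivered to a caller-supplied webhook. A minimal sketch of that delivery step; the payload shape and callback URL are pure assumptions:

```python
import base64
import io

import requests

def post_result_to_webhook(image, webhook_url):
    """POST the generated image to a webhook as base64-encoded PNG JSON."""
    buf = io.BytesIO()
    image.save(buf, format="PNG")
    payload = {"image_base64": base64.b64encode(buf.getvalue()).decode("utf-8")}
    resp = requests.post(webhook_url, json=payload, timeout=30)
    resp.raise_for_status()
    return resp.status_code
```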

stable-diffusion-xl

SDXL consists of an ensemble-of-experts pipeline for latent diffusion: in the first step, the base model generates (noisy) latents, which are then further processed with a refinement model (available here: https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/) specialized for the final denoising steps.
star: 0
fork: 9
language: Python
created at: 2023-10-31
updated at: 2024-09-12
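
A minimal sketch of that ensemble-of-experts flow in diffusers: the base model denoises roughly the first 80% of the schedule and hands its latents to the refiner for the final steps:

```python
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline, StableDiffusionXLPipeline

base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16, variant="fp16",
).to("cuda")
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    text_encoder_2=base.text_encoder_2, vae=base.vae,  # share components to save memory
    torch_dtype=torch.float16, variant="fp16",
).to("cuda")

prompt = "a majestic lion jumping from a big stone at night"
latents = base(prompt, num_inference_steps=40, denoising_end=0.8, output_type="latent").images
image = refiner(prompt, num_inference_steps=40, denoising_start=0.8, image=latents).images[0]
image.save("sdxl.png")
```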

Stable-diffusion-v1-5

Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input. For more information about how Stable Diffusion functions, please have a look at 🤗's Stable Diffusion blog.
star: 0
fork: 1
language: Python
created at: 2023-10-31
updated at: 2024-09-13
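
A minimal text-to-image sketch with diffusers, using the runwayml/stable-diffusion-v1-5 checkpoint that other templates in this list also reference:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photograph of an astronaut riding a horse").images[0]
image.save("astronaut.png")
```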

Stable-diffusion-2-inpainting

Stable Diffusion Inpainting is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input, with the extra capability of inpainting the pictures by using a mask.
star: 1
fork: 2
language: Python
created at: 2023-10-09
updated at: 2024-09-13
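
A minimal inpainting sketch with diffusers; the image and mask paths are hypothetical, and white mask pixels mark the region to repaint:

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from diffusers.utils import load_image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting", torch_dtype=torch.float16
).to("cuda")

image = load_image("photo.png").resize((512, 512))
mask = load_image("mask.png").resize((512, 512))  # white = repaint, black = keep

result = pipe(
    prompt="a red vintage car parked on the street",
    image=image, mask_image=mask,
).images[0]
result.save("inpainted.png")
```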

Llama-2-7B-GPTQ

Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the GPTQ-quantized 7B fine-tuned model, optimized for dialogue use cases and converted to the Hugging Face Transformers format.
star: 0
fork: 11
language: Python
created at: 2023-08-08
updated at: 2024-09-12
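
A minimal sketch of loading a GPTQ checkpoint through transformers; the TheBloke/Llama-2-7B-Chat-GPTQ id is an assumption, and GPTQ support additionally requires the optimum and auto-gptq packages:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/Llama-2-7B-Chat-GPTQ"  # assumed quantized checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Llama 2 chat models expect the [INST] ... [/INST] prompt format.
inputs = tokenizer("[INST] What is quantization? [/INST]", return_tensors="pt").to(model.device)
with torch.inference_mode():
    out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```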

DreamShaper

ControlNet v1.1 was released in lllyasviel/ControlNet-v1-1 by Lvmin Zhang. This checkpoint is a conversion of the original checkpoint into the Diffusers format. It can be used in combination with Stable Diffusion checkpoints such as runwayml/stable-diffusion-v1-5.
star: 0
fork: 1
language: Python
created at: 2023-07-26
updated at: 2024-09-13

Stablediffusion-controlnet

ControlNet is a neural network structure to control diffusion models by adding extra conditions. This checkpoint corresponds to the ControlNet conditioned on Canny edges. It can be used in combination with Stable Diffusion.
star: 0
fork: 1
language: Python
created at: 2023-07-19
updated at: 2024-09-16
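
A minimal Canny-conditioned sketch with diffusers; the input image path is hypothetical, and edges are extracted with OpenCV:

```python
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# Build the Canny edge map that conditions the generation.
src = np.array(Image.open("input.png").convert("RGB"))
edges = cv2.Canny(src, 100, 200)
control = Image.fromarray(np.stack([edges] * 3, axis=-1))

image = pipe("a futuristic city, high detail", image=control, num_inference_steps=30).images[0]
image.save("controlnet.png")
```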