Top AI developers by monthly star count
Top AI organization accounts by AI repo star count
Top AI projects by category star count
Fastest-growing projects, ranked by the speed of gaining stars
Little-known developers who have created influential repos
Projects and developers that are still thriving yet have not been updated for a long time
Rankings | Project | Project intro | Star count
---|---|---|---
1 | UI-TARS-desktop | A GUI Agent application based on UI-TARS (Vision-Language Model) that allows you to control your computer using natural language. | 14.7K | |
2 | VLM-R1 | Solve Visual Understanding with Reinforced VLMs | 5.2K | |
3 | minimind-v | 🚀 Train a 26M-parameter visual multimodal VLM from scratch in just 1 hour! 🌏 | 3.9K | |
4 | SpatialLM | SpatialLM: Training Large Language Models for Structured Indoor Modeling | 3.4K | |
5 | MiniMax-01 | The official repo of MiniMax-Text-01 and MiniMax-VL-01, a large language model and a vision-language model based on linear attention | 2.9K | |
6 | Skywork-R1V | Skywork-R1V2: Multimodal Hybrid Reinforcement Learning for Reasoning | 2.6K | |
7 | Local-File-Organizer | An AI-powered file management tool that ensures privacy by organizing local text files and images. Using the Llama3.2 3B and Llava v1.6 models with the Nexa SDK, it intuitively scans, restructures, and organizes files for quick, seamless access and easy retrieval. | 2.4K | |
8 | vlms-zero-to-hero | This series will take you on a journey from the fundamentals of NLP and Computer Vision to the cutting edge of Vision-Language Models. | 1.0K | |
9 | VisRAG | Parsing-free RAG supported by VLMs | 611 | |
10 | UniWorld-V1 | UniWorld: High-Resolution Semantic Encoders for Unified Visual Understanding and Generation | 560 | |
11 | vlmrun-hub | A hub for various industry-specific schemas to be used with VLMs. | 510 | |
12 | llama-assistant | AI-powered assistant to help you with your daily tasks, powered by Llama 3, DeepSeek R1, and many more models on HuggingFace. | 486 | |
13 | ghostwriter | Use the reMarkable2 as an interface to vision-LLMs (ChatGPT, Claude, Gemini). Ghost in the machine! | 436 | |
14 | Flame-Code-VLM | Flame is an open-source multimodal AI system designed to translate UI design mockups into high-quality React code. It leverages vision-language modeling, automated data synthesis, and structured training workflows to bridge the gap between design and front-end development. | 367 | |
15 | joycaption | JoyCaption is an image captioning Visual Language Model (VLM) being built from the ground up as a free, open, and uncensored model for the community to use in training Diffusion models. | 349 | |
16 | VoRA | [Fully open] [Encoder-free MLLM] Vision as LoRA | 299 | |
17 | open-cuak | Reliable Automation Agents at Scale | 279 | |
18 | VLM2Vec | This repo contains the code for "VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks" [ICLR25] | 266 | |
19 | Kolosal | Kolosal AI is an open-source, lightweight alternative to LM Studio for running LLMs 100% offline on your device. | 227 | |
20 | dingo | Dingo: A Comprehensive Data Quality Evaluation Tool | 182 | |
21 | Llama3.2-Vision-Finetune | An open-source implementation for fine-tuning Meta's Llama3.2-Vision series. | 156 | |
22 | ChatRex | Code for ChatRex: Taming Multimodal LLM for Joint Perception and Understanding | 156 | |
23 | Namo-R1 | A 500M-parameter real-time VLM that runs on CPU, surpassing Moondream2 and SmolVLM. Train it from scratch with ease. | 133 | |
24 | qapyq | An image viewer and AI-assisted editing/captioning/masking tool that helps with curating datasets for generative AI models, fine-tunes, and LoRAs. | 127 | |
25 | BALROG | Benchmarking Agentic LLM and VLM Reasoning On Games | 117 | |
26 | simlingo | [CVPR 2025, Spotlight] SimLingo (CarLLava): Vision-Only Closed-Loop Autonomous Driving with Language-Action Alignment | 110 | |
27 | BreezeApp | BreezeApp is a pure on-device AI application for Android and iOS. Download it from the App Store and enjoy a range of AI features without any network connection. The source code is provided by MediaTek Research. We aim to promote two ideas: anyone is free to choose their own LLM to run on their phone, and any app developer can easily build creative, purely phone-based AI apps. | 104 | |
28 | Surveillance_Video_Summarizer | VLM driven tool that processes surveillance videos, extracts frames, and generates insightful annotations using a fine-tuned Florence-2 Vision-Language Model. Includes a Gradio-based interface for querying and analyzing video footage. | 102 | |
29 | pyvisionai | The PyVisionAI Official Repo | 97 | |
30 | Modality-Integration-Rate | The official code of the paper "Deciphering Cross-Modal Alignment in Large Vision-Language Models with Modality Integration Rate". | 96 | |
31 | Helpful-Doggybot | Helpful DoggyBot: Open-World Object Fetching using Legged Robots and Vision-Language Models | 90 | |
32 | Mini-LLaVA | A minimal implementation of LLaVA-style VLM with interleaved image & text & video processing ability. | 89 | |
33 | VLM-Grounder | [CoRL 2024] VLM-Grounder: A VLM Agent for Zero-Shot 3D Visual Grounding | 85 | |
34 | TrustEval-toolkit | TrustEval: A modular and extensible toolkit for comprehensive trust evaluation of generative foundation models (GenFMs) | 79 | |
35 | SparseVLMs | Official implementation of paper "SparseVLM: Visual Token Sparsification for Efficient Vision-Language Model Inference". | 77 | |
36 | tokens | A token management platform that reverse-engineers the conversation interfaces of ChatGPT, Cursor, Grok, Claude, Windsurf, Gemini, and Sora, converting them into the OpenAI format. | 74 | |
37 | 3d-conditioning | Enhance and modify high-quality compositions using real-time rendering and generative AI output without affecting a hero product asset. | 61 | |
38 | SeeDo | [IROS 2025] Human Demo Videos to Robot Action Plans | 54 | |
39 | sources | READ THE README | 50 | |
40 | ReachQA | Code & Dataset for Paper: "Distill Visual Chart Reasoning Ability from LLMs to MLLMs" | 48 | |
41 | SeeGround | [CVPR'25] SeeGround: See and Ground for Zero-Shot Open-Vocabulary 3D Visual Grounding | 46 | |
42 | Emma-X | Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning | 39 | |
43 | All-Things-Multimodal | Hub for researchers exploring VLMs and Multimodal Learning:) | 38 | |
44 | PhysBench | [ICLR 2025] Official implementation and benchmark evaluation repository of <PhysBench: Benchmarking and Enhancing Vision-Language Models for Physical World Understanding> | 36 | |
45 | reverse_vlm | 🔥 Official implementation of "Generate, but Verify: Reducing Visual Hallucination in Vision-Language Models with Retrospective Resampling" | 34 | |
46 | GVA-Survey | Official repository of the paper "Generalist Virtual Agents: A Survey on Autonomous Agents Across Digital Platforms" | 33 | |
47 | Video-Bench | Video Generation Benchmark | 32 | |
48 | AIN | AIN - The First Arabic Inclusive Large Multimodal Model. It is a versatile bilingual LMM excelling in visual and contextual understanding across diverse domains. | 31 | |
49 | vision-ai-checkup | Take your LLM to the optometrist. | 31 | |
50 | VLM_GRPO | An implementation of GRPO for training VLMs with Unsloth | 31 | |
51 | awesome-turkish-language-models | A curated list of Turkish AI models, datasets, and papers | 30 | |
52 | UrBench | [AAAI 2025] This repo contains evaluation code for the paper “UrBench: A Comprehensive Benchmark for Evaluating Large Multimodal Models in Multi-View Urban Scenarios” | 29 | |
53 | Vision-language-models-VLM | Fine-tuning notebooks and use cases for vision-language models (PaliGemma, Florence, ...) | 27 | |
54 | SAM_Molmo_Whisper | An integration of Segment Anything Model, Molmo, and Whisper to segment objects using voice and natural language. | 23 | |
55 | saint | A training-free approach to accelerate ViTs and VLMs by pruning redundant tokens based on similarity | 22 | |
56 | gptparse | Document parser for RAG | 20 | |
57 | Re-Align | A novel alignment framework that leverages image retrieval to mitigate hallucinations in Vision Language Models. | 19 | |
58 | bubbaloop | 🦄 Serving Platform for Spatial AI and Robotics. | 19 | |
59 | cadrille | cadrille: Multi-modal CAD Reconstruction with Online Reinforcement Learning | 19 | |
60 | SubtitleAI | An AI-powered tool for summarizing YouTube videos by generating scene descriptions, translating them, and creating subtitled videos with text-to-speech narration | 17 | |
61 | video-search-and-summarization | Blueprint for ingesting massive volumes of live or archived video and extracting insights for summarization and interactive Q&A | 17 | |
62 | worldcuisines | WorldCuisines is an extensive multilingual and multicultural benchmark that spans 30 languages, covering a wide array of global cuisines. | 16 | |
63 | exif-ai | A Node.js CLI and library that uses OpenAI, Ollama, ZhipuAI, Google Gemini, or Coze to write AI-generated image descriptions and/or tags to EXIF metadata based on the image's content. | 13 | |
64 | srbench | Source code for the Paper "Mind the Gap: Benchmarking Spatial Reasoning in Vision-Language Models" | 12 | |
65 | CAD-GPT | [AAAI 2025] CAD-GPT: Synthesising CAD Construction Sequence with Spatial Reasoning-Enhanced Multimodal LLMs | 12 | |
66 | TRIM | We introduce a new approach, Token Reduction using CLIP Metric (TRIM), aimed at improving the efficiency of MLLMs without sacrificing their performance. | 11 | |
67 | computer-agent-arena-hub | Computer Agent Arena Hub: Compare & Test AI Agents on Crowdsourced Real-World Computer Use Tasks | 11 | |
68 | Cross-the-Gap | [ICLR 2025] - Cross the Gap: Exposing the Intra-modal Misalignment in CLIP via Modality Inversion | 11 | |
69 | wildcard | The latest guide to the WildCard virtual credit card: a registration tutorial covering how to sign up, activate a WildCard credit card, and top it up or withdraw funds. | 11 | |
70 | VLM-Safety-MU | Safety Mirage: How Spurious Correlations Undermine VLM Safety Fine-tuning | 11 | |
71 | CII-Bench | Can MLLMs Understand the Deep Implication Behind Chinese Images? | 9 | |
72 | sentinel | Securade.ai Sentinel - A monitoring and surveillance application that enables visual Q&A and video captioning for existing CCTV cameras. | 9 | |
73 | EgoNormia | EgoNormia: Benchmarking Physical Social Norm Understanding in VLMs | 9 | |
74 | ImagineFSL | Official implementation of "ImagineFSL: Self-Supervised Pretraining Matters on Imagined Base Set for VLM-based Few-shot Learning" [CVPR 2025 Highlight] | 9 | |
75 | MyColPali | A PyQt6 application that uses ColPali and OpenAI to demonstrate efficient document retrieval with vision-language models | 8 | |
76 | Qwen2-VL-Colaboratory-Sample | A sample for trying out QwenLM/Qwen2-VL on Colaboratory | 7 | |
77 | vlm-api | REST API for computing cross-modal similarity between images and text using the ColPali vision-language model | 7 | |
78 | Chitrarth | Chitrarth: Bridging Vision and Language for a Billion People | 7 | |
79 | ollama | Get up and running with Llama 3.3, DeepSeek-R1, Phi-4, Gemma 2, and other large language models. | 7 | |
80 | ide-cap-chan | ide-cap-chan is a utility for batch image captioning with natural language using various VL models | 6 | |
81 | Dex-GAN-Grasp | DexGANGrasp: Dexterous Generative Adversarial Grasping Synthesis for Task-Oriented Manipulation. IEEE-RAS International Conference on Humanoid Robots (Humanoids) 2024. DOI: 10.1109/Humanoids58906.2024.10769950 | 5 | |
82 | RoomAligner | A focus on aligning room elements for better flow and space utilization. | 5 | |
83 | VLM-ZSAD-Paper-Review | Reviews of papers on zero-shot anomaly detection using vision-Language models | 4 | |
84 | Multimodal-VideoRAG | Multimodal-VideoRAG: Using BridgeTower Embeddings and Large Vision Language Models | 4 | |
85 | svlr | SVLR: Scalable, Training-Free Visual Language Robotics: a modular multi-model framework for consumer-grade GPUs | 3 | |
86 | ComfyUI-YALLM-node | Yet another set of LLM nodes for ComfyUI (for local/remote OpenAI-like APIs, multi-modal models supported) | 3 | |
87 | CIDER | This is the official repository for Cross-modality Information Check for Detecting Jailbreaking in Multimodal Large Language Models. | 3 | |
88 | awesome-text-to-video-plus | The Ultimate Guide to Effortlessly Creating AI Videos for Social Media: Go From Text to Eye-Catching Videos in Just a Few Steps | 3 | |
89 | ScaleDP | ScaleDP is an open-source extension of Apache Spark for document processing | 3 | |
90 | LLMs-Journey | Various LLM resources and experiments | 3 | |
91 | MANBench | MANBench: Is Your Multimodal Model Smarter than Human? | 3 | |
92 | Vision-LLM-Alignment | This repository contains the code for SFT, RLHF, and DPO, designed for vision-based LLMs, including the LLaVA models and the LLaMA-3.2-vision models. | 2 | |
93 | Sora | 🎬 OpenAI's Sora, a new text-to-video AI model, is set to launch later in 2024. | 2 | |
94 | Finetune-Qwen2.5-VL | Fine-tuning Qwen2.5-VL for vision-language tasks, optimized for vision understanding, with LoRA & PEFT support. | 2 | |
95 | mini-paligemma2 | Minimalist implementation of PaliGemma 2 & PaliGemma VLM from scratch | 2 | |
96 | Multi-Round-VLM-powered-Multimodal-Conversational-AI-Navigation-Bot | Streamlit App Combining Vision, Language, and Audio AI Models | 2 | |
97 | sora | What is Sora? How do you use Sora? Where is the Sora entry point? A step-by-step Sora subscription tutorial! | 2 | |
98 | Shard | Open Source Video Understanding API and Large Vision Model Observability Platform. | 2 | |
99 | llama-cord | Discord App for Interacting with local Ollama Models. Multiple Agents Supported! | 2 | |
100 | slides2video-pinokio-script | Pinokio script for installing the app slides2video | 2 |