Top AI developers by monthly star count
Top AI organization accounts by AI repo star count
Top AI projects by star count, per category
Fastest-growing repos, ranked by the speed of gaining stars
Top lesser-known developers who have created influential repos
Ranking | Project | Project intro | Star count |
---|---|---|---|
1 | nexa-sdk | Nexa SDK is a comprehensive toolkit for supporting ONNX and GGML models. It supports text generation, image generation, vision-language models (VLM), auto-speech-recognition (ASR), and text-to-speech (TTS) capabilities. | 3.4K | |
2 | MGM | Official repo for "Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models" | 3.2K | |
3 | MobileAgent | Mobile-Agent: The Powerful Mobile Device Operation Assistant Family | 3.0K | |
4 | SoraWebui | SoraWebui is an open-source Sora web client, enabling users to easily create videos from text with OpenAI's Sora model. | 2.3K | |
5 | DeepSeek-VL | DeepSeek-VL: Towards Real-World Vision-Language Understanding | 2.1K | |
6 | cambrian | Cambrian-1 is a family of multimodal LLMs with a vision-centric design. | 1.8K | |
7 | Local-File-Organizer | An AI-powered file management tool that ensures privacy by organizing local texts and images. Using Llama3.2 3B and Llava v1.6 models with the Nexa SDK, it intuitively scans, restructures, and organizes files for quick, seamless access and easy retrieval. | 1.7K | |
8 | ShareGPT4Video | [NeurIPS 2024] An official implementation of ShareGPT4Video: Improving Video Understanding and Generation with Better Captions | 1.3K | |
9 | minisora | MiniSora: A community project that aims to explore the implementation path and future development direction of Sora. | 1.2K | |
10 | colpali | The code used to train and run inference with the ColPali architecture. | 1.1K | |
11 | comfyui_LLM_party | LLM agent framework in ComfyUI. Includes Omost, GPT-SoVITS, ChatTTS, GOT-OCR2.0, and FLUX prompt nodes; provides access to Feishu and Discord; and adapts to all LLMs with OpenAI/Gemini-like interfaces, such as o1, Ollama, Grok, Qwen, GLM, DeepSeek, Moonshot, and Doubao. Also adapted to local LLMs, VLMs, and GGUF models such as Llama-3.2, with Neo4j knowledge-graph linkage and graphRAG / RAG / HTML-to-image support. | 998 | |
12 | sorafm | Sora AI Video Generator by Sora.FM | 951 | |
13 | Bunny | A family of lightweight multimodal models. | 930 | |
14 | LLaVA-pp | 🔥🔥 LLaVA++: Extending LLaVA with Phi-3 and LLaMA-3 (LLaVA LLaMA-3, LLaVA Phi-3) | 810 | |
15 | TinyLLaVA_Factory | A Framework of Small-scale Large Multimodal Models | 645 | |
16 | Groma | [ECCV2024] Grounded Multimodal Large Language Model with Localized Visual Tokenization | 559 | |
17 | Awesome-Robotics-3D | A curated list of 3D vision papers related to the robotics domain in the era of large models (LLMs/VLMs), inspired by awesome-computer-vision; includes papers, code, and related websites. | 546 | |
18 | EAGLE | EAGLE: Exploring The Design Space for Multimodal LLMs with Mixture of Encoders | 538 | |
19 | Ovis | A novel Multimodal Large Language Model (MLLM) architecture, designed to structurally align visual and textual embeddings. | 512 | |
20 | mlx-vlm | MLX-VLM is a package for running Vision LLMs locally on your Mac using MLX. | 488 | |
21 | awesome-vlm-architectures | Famous Vision Language Models and Their Architectures | 412 | |
22 | llama-assistant | AI-powered assistant to help you with your daily tasks, powered by Llama 3.2. It can recognize your voice, process natural language, and perform various actions based on your commands: summarizing text, rephrasing sentences, answering questions, writing emails, and more. | 412 | |
23 | meme_search | Index your memes by their content and text, making them easily retrievable for your meme warfare pleasures. Find funny fast. | 399 | |
24 | Open-LLaVA-NeXT | An open-source implementation for training LLaVA-NeXT. | 391 | |
25 | VisRAG | Parsing-free RAG supported by VLMs | 363 | |
26 | minimind-v | Train a 27M-parameter visual multimodal VLM ("large model") from scratch in 3 hours; inference and training run on a personal GPU! | 350 | |
27 | Awesome-Jailbreak-on-LLMs | Awesome-Jailbreak-on-LLMs is a collection of state-of-the-art, novel, exciting jailbreak methods on LLMs. It contains papers, codes, datasets, evaluations, and analyses. | 315 | |
28 | ai-devices | AI Device Template Featuring Whisper, TTS, Groq, Llama3, OpenAI and more | 279 | |
29 | RLAIF-V | RLAIF-V: Aligning MLLMs through Open-Source AI Feedback for Super GPT-4V Trustworthiness | 238 | |
30 | Phi-3-Vision-MLX | Phi-3.5 for Mac: Locally-run Vision and Language Models for Apple Silicon | 237 | |
31 | PromptKD | [CVPR 2024] Official PyTorch Code for "PromptKD: Unsupervised Prompt Distillation for Vision-Language Models" | 233 | |
32 | EVE | [NeurIPS'24 Spotlight] EVE: Encoder-Free Vision-Language Models | 228 | |
33 | Awesome-Open-AI-Sora | Sora AI Awesome List – Your go-to resource hub for all things Sora AI, OpenAI's groundbreaking model for crafting realistic scenes from text. Explore a curated collection of articles, videos, podcasts, and news about Sora's capabilities, advancements, and more. | 216 | |
34 | TokenPacker | The code for "TokenPacker: Efficient Visual Projector for Multimodal LLM". | 213 | |
35 | SoraFlows | The most powerful and modular Sora WebUI, API, and backend for OpenAI's Sora model, collecting the highest-quality prompts for Sora. Built with Next.js and Tailwind CSS. | 195 | |
36 | IAmDirector-Text2Video-NextJS-Client | An open-source Next.js front end intended as a reference web platform for generative-AI text-to-video, especially for taking a film from scriptwriting to video generation. Everyone can become a director: it is the Next.js front end of an AI-driven platform for automatic movie/video generation (from GPT script generation to text-to-video movie generation). The platform is free to try, integrates GPT-based script generation with video generation, and aims to let anyone turn everyday ideas into high-quality videos as quickly as possible, whether films, marketing videos, or social-media content. | 189 | |
37 | Mantis | Official code for Paper "Mantis: Multi-Image Instruction Tuning" | 180 | |
38 | embodied-agents | Seamlessly integrate state-of-the-art transformer models into robotics stacks | 163 | |
39 | seemore | From scratch implementation of a vision language model in pure PyTorch | 161 | |
40 | rai | RAI is a multi-vendor agent framework for robotics, utilizing Langchain and ROS 2 tools to perform complex actions, defined scenarios, free interface execution, log summaries, voice interaction and more. | 157 | |
41 | LLaRA | LLaRA: Large Language and Robotics Assistant | 154 | |
42 | AUITestAgent | AUITestAgent is the first automatic, natural language-driven GUI testing tool for mobile apps, capable of fully automating the entire process of GUI interaction and function verification. | 148 | |
43 | PsyDI | PsyDI: Towards a Personalized and Progressively In-depth Chatbot for Psychological Measurements. (e.g. MBTI Measurement Agent) | 147 | |
44 | ELM | [ECCV 2024] Embodied Understanding of Driving Scenarios | 146 | |
45 | image-textualization | Image Textualization: An Automatic Framework for Generating Rich and Detailed Image Descriptions (NeurIPS 2024) | 143 | |
46 | InCTRL | Official implementation of CVPR'24 paper 'Toward Generalist Anomaly Detection via In-context Residual Learning with Few-shot Sample Prompts'. | 127 | |
47 | joycaption | JoyCaption is an image captioning Visual Language Model (VLM) being built from the ground up as a free, open, and uncensored model for the community to use in training Diffusion models. | 127 | |
48 | Awesome-VLGFM | A Survey on Vision-Language Geo-Foundation Models (VLGFMs) | 125 | |
49 | captcha-solver | A basic Google reCAPTCHA solver using llava-v1.6-7b. | 120 | |
50 | VidProM | [NeurIPS 2024] VidProM: A Million-scale Real Prompt-Gallery Dataset for Text-to-Video Diffusion Models | 113 | |
51 | Emotion-LLaMA | Emotion-LLaMA: Multimodal Emotion Recognition and Reasoning with Instruction Tuning | 109 | |
52 | Spider2-V | [NeurIPS 2024] Spider2-V: How Far Are Multimodal Agents From Automating Data Science and Engineering Workflows? | 107 | |
53 | MMTrustEval | A toolbox for benchmarking trustworthiness of multimodal large language models (MultiTrust, NeurIPS 2024 Track Datasets and Benchmarks) | 106 | |
54 | GPA-LM | This repo is a live list of papers on game-playing agents and large multimodal models - "A Survey on Game Playing Agents and Large Models: Methods, Applications, and Challenges". | 101 | |
55 | RobustVLM | [ICML 2024] Unsupervised Adversarial Fine-Tuning of Vision Embeddings for Robust Large Vision-Language Models | 99 | |
56 | MM-NIAH | [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of existing MLLMs to comprehend long multimodal documents. | 99 | |
57 | graphist | Official Repo of Graphist | 97 | |
58 | KarmaVLM | 🧘🏻♂️ KarmaVLM (相生): A family of high-efficiency, powerful visual language models. | 84 | |
59 | LLaVA-MORE | LLaVA-MORE: Enhancing Visual Instruction Tuning with LLaMA 3.1 | 84 | |
60 | MoE-Mamba | Implementation of MoE Mamba from the paper: "MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts" in Pytorch and Zeta | 83 | |
61 | Mini-LLaVA | A minimal implementation of LLaVA-style VLM with interleaved image & text & video processing ability. | 83 | |
62 | eureka-ml-insights | A framework for standardizing evaluations of large foundation models, beyond single-score reporting and rankings. | 83 | |
63 | VoCo-LLaMA | VoCo-LLaMA: This repo is the official implementation of "VoCo-LLaMA: Towards Vision Compression with Large Language Models". | 81 | |
64 | matryoshka-mm | Matryoshka Multimodal Models | 81 | |
65 | Modality-Integration-Rate | The official code of the paper "Deciphering Cross-Modal Alignment in Large Vision-Language Models with Modality Integration Rate". | 79 | |
66 | CharXiv | [NeurIPS 2024] CharXiv: Charting Gaps in Realistic Chart Understanding in Multimodal LLMs | 75 | |
67 | Llama3.2-Vision-Finetune | An open-source implementation for fine-tuning the Llama3.2-Vision series by Meta. | 74 | |
68 | VLM-Visualizer | Visualizing the attention of vision-language models | 70 | |
69 | Uniaa | Unified Multi-modal IAA Baseline and Benchmark | 70 | |
70 | Know-Your-Neighbors | [CVPR 2024] 🏡Know Your Neighbors: Improving Single-View Reconstruction via Spatial Vision-Language Reasoning | 69 | |
71 | YoLLaVA | 🌋👵🏻 Yo'LLaVA: Your Personalized Language and Vision Assistant | 65 | |
72 | CVPR2024_MAVL | Multi-Aspect Vision Language Pretraining - CVPR2024 | 63 | |
73 | VLM-Grounder | [CoRL 2024] VLM-Grounder: A VLM Agent for Zero-Shot 3D Visual Grounding | 62 | |
74 | ollama-open-webui | Self-host a ChatGPT-style web interface for Ollama 🦙 (a minimal local-inference sketch follows this table). | 61 | |
75 | SpeechLLM | This repository contains the training, inference, and evaluation code for SpeechLLM models, along with details about the model releases on Hugging Face. | 58 | |
76 | STIC | Enhancing Large Vision Language Models with Self-Training on Image Comprehension. | 58 | |
77 | CARES | [NeurIPS'24 & ICMLW'24] CARES: A Comprehensive Benchmark of Trustworthiness in Medical Vision Language Models | 56 | |
78 | Elysium | [ECCV 2024] Elysium: Exploring Object-level Perception in Videos via MLLM | 56 | |
79 | Dream2Real | [ICRA 2024] Dream2Real: Zero-Shot 3D Object Rearrangement with Vision-Language Models | 56 | |
80 | SparseVLMs | Official implementation of paper "SparseVLM: Visual Token Sparsification for Efficient Vision-Language Model Inference" proposed by Peking University and UC Berkeley. | 53 | |
81 | Chinese-LLaVA-Med | A Chinese medical multimodal large model: Large Chinese Language-and-Vision Assistant for BioMedicine. | 51 | |
82 | usls | A Rust library integrated with ONNXRuntime, providing a collection of Computer Vision and Vision-Language models. | 49 | |
83 | FreeVA | FreeVA: Offline MLLM as Training-Free Video Assistant | 48 | |
84 | VisualWebBench | Evaluation framework for paper "VisualWebBench: How Far Have Multimodal LLMs Evolved in Web Page Understanding and Grounding?" | 47 | |
85 | KDPL | [ECCV 2024] - Improving Zero-shot Generalization of Learned Prompts via Unsupervised Knowledge Distillation | 46 | |
86 | VLGuard | [ICML 2024] Safety Fine-Tuning at (Almost) No Cost: A Baseline for Vision Large Language Models. | 45 | |
87 | WCA | [ICML 2024] "Visual-Text Cross Alignment: Refining the Similarity Score in Vision-Language Models" | 43 | |
88 | MLM_Filter | Official implementation of our paper "Finetuned Multimodal Language Models are High-Quality Image-Text Data Filters". | 42 | |
89 | imagenet_d | [CVPR 2024 Highlight] ImageNet-D | 38 | |
90 | UndergraduateDissertation | Undergraduate Dissertation of Guilin University of Electronic Technology | 38 | |
91 | VLM-Captioning-Tools | Python scripts to use for captioning images with VLMs | 34 | |
92 | MMInstruct | The official implementation of the paper "MMInstruct: A High-Quality Multi-Modal Instruction Tuning Dataset with Extensive Diversity". The MMInstruct dataset includes 973K instructions from 24 domains and four instruction types. | 33 | |
93 | AAPL | AAPL: Adding Attributes to Prompt Learning for Vision-Language Models (CVPRw 2024) | 31 | |
94 | CompBench | CompBench evaluates the comparative reasoning of multimodal large language models (MLLMs) with 40K image pairs and questions across 8 dimensions of relative comparison: visual attribute, existence, state, emotion, temporality, spatiality, quantity, and quality. CompBench covers diverse visual domains, including animals, fashion, sports, and scenes. | 31 | |
95 | LLaVA-UHD-Better | A bug-free and improved implementation of LLaVA-UHD, based on the code from the official repo | 31 | |
96 | LLM4VPR | Can multimodal LLM help visual place recognition? | 30 | |
97 | ConBench | [NeurIPS'24] Official implementation of paper "Unveiling the Tapestry of Consistency in Large Vision-Language Models". | 30 | |
98 | ReachQA | Code & Dataset for Paper: "Distill Visual Chart Reasoning Ability from LLMs to MLLMs" | 30 | |
99 | GMAI-MMBench | GMAI-MMBench: A Comprehensive Multimodal Evaluation Benchmark Towards General Medical AI. | 29 | |
100 | Jailbreak-In-Pieces | [ICLR 2024 Spotlight 🔥 ] - [ Best Paper Award SoCal NLP 2023 🏆] - Jailbreak in pieces: Compositional Adversarial Attacks on Multi-Modal Language Models | 25 |
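
Several projects in this ranking (for example ollama-open-webui at rank 74, mlx-vlm at rank 20, and llama-assistant at rank 22) center on running vision-language models locally. As a rough illustration, the snippet below is a minimal sketch of querying a locally hosted VLM through Ollama's REST API. It is not taken from any of the repos above; it assumes an Ollama server is already running on its default port 11434 and that a vision model such as `llava` has been pulled.

```python
# Minimal sketch: ask a locally hosted vision-language model to describe an image.
# Assumes an Ollama server on http://localhost:11434 with a vision model pulled.
import base64
import requests

def describe_image(image_path: str, prompt: str = "Describe this image.") -> str:
    """Send an image plus a text prompt to the local /api/generate endpoint."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    payload = {
        "model": "llava",       # assumed model name; any pulled vision model works
        "prompt": prompt,
        "images": [image_b64],  # Ollama expects base64-encoded image data
        "stream": False,        # return a single JSON object instead of a stream
    }
    resp = requests.post("http://localhost:11434/api/generate", json=payload, timeout=120)
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(describe_image("example.jpg"))
```

Browser front ends such as the self-hosted Open WebUI listed above typically talk to this same local endpoint, just through a chat UI instead of a script.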