- Vision language models are blind
  Paper • 2407.06581 • Published • 84
- Generate, but Verify: Reducing Hallucination in Vision-Language Models with Retrospective Resampling
  Paper • 2504.13169 • Published • 39
- Lost in Embeddings: Information Loss in Vision-Language Models
  Paper • 2509.11986 • Published • 27
Collections
Collections including paper arxiv:2407.06581
- Math-LLaVA: Bootstrapping Mathematical Reasoning for Multimodal Large Language Models
  Paper • 2406.17294 • Published • 11
- OMG-LLaVA: Bridging Image-level, Object-level, Pixel-level Reasoning and Understanding
  Paper • 2406.19389 • Published • 54
- EVF-SAM: Early Vision-Language Fusion for Text-Prompted Segment Anything Model
  Paper • 2406.20076 • Published • 10
- PicoAudio: Enabling Precise Timestamp and Frequency Controllability of Audio Events in Text-to-audio Generation
  Paper • 2407.02869 • Published • 21

- A.S.E: A Repository-Level Benchmark for Evaluating Security in AI-Generated Code
  Paper • 2508.18106 • Published • 345
- HtmlRAG: HTML is Better Than Plain Text for Modeling Retrieved Knowledge in RAG Systems
  Paper • 2411.02959 • Published • 70
- Easy Dataset: A Unified and Extensible Framework for Synthesizing LLM Fine-Tuning Data from Unstructured Documents
  Paper • 2507.04009 • Published • 51
- MonkeyOCR: Document Parsing with a Structure-Recognition-Relation Triplet Paradigm
  Paper • 2506.05218 • Published • 2

- 4K4DGen: Panoramic 4D Generation at 4K Resolution
  Paper • 2406.13527 • Published • 9
- Style-NeRF2NeRF: 3D Style Transfer From Style-Aligned Multi-View Images
  Paper • 2406.13393 • Published • 5
- YouDream: Generating Anatomically Controllable Consistent Text-to-3D Animals
  Paper • 2406.16273 • Published • 43
- EVF-SAM: Early Vision-Language Fusion for Text-Prompted Segment Anything Model
  Paper • 2406.20076 • Published • 10

- PaliGemma: A versatile 3B VLM for transfer
  Paper • 2407.07726 • Published • 72
- Vision language models are blind
  Paper • 2407.06581 • Published • 84
- PLLaVA: Parameter-free LLaVA Extension from Images to Videos for Video Dense Captioning
  Paper • 2404.16994 • Published • 36
- DeepSeek-VL: Towards Real-World Vision-Language Understanding
  Paper • 2403.05525 • Published • 48

- PaliGemma: A versatile 3B VLM for transfer
  Paper • 2407.07726 • Published • 72
- Vision language models are blind
  Paper • 2407.06581 • Published • 84
- CosmoCLIP: Generalizing Large Vision-Language Models for Astronomical Imaging
  Paper • 2407.07315 • Published • 7
- Video-STaR: Self-Training Enables Video Instruction Tuning with Any Supervision
  Paper • 2407.06189 • Published • 26