{"id":36618,"date":"2025-09-07T06:27:32","date_gmt":"2025-09-07T06:27:32","guid":{"rendered":"https:\/\/youzum.net\/hugging-face-open-sourced-finevision-a-new-multimodal-dataset-with-24-million-samples-for-training-vision-language-models-vlms\/"},"modified":"2025-09-07T06:27:32","modified_gmt":"2025-09-07T06:27:32","slug":"hugging-face-open-sourced-finevision-a-new-multimodal-dataset-with-24-million-samples-for-training-vision-language-models-vlms","status":"publish","type":"post","link":"https:\/\/youzum.net\/zh\/hugging-face-open-sourced-finevision-a-new-multimodal-dataset-with-24-million-samples-for-training-vision-language-models-vlms\/","title":{"rendered":"Hugging Face Open-Sourced FineVision: A New Multimodal Dataset with\u00a024 Million Samples for Training Vision-Language Models (VLMs)"},"content":{"rendered":"<p>Hugging Face has just released <strong>FineVision<\/strong>, an open multimodal dataset designed to set a new standard for Vision-Language Models (VLMs). With <strong>17.3 million images<\/strong>, <strong>24.3 million samples<\/strong>, <strong>88.9 million question-answer turns<\/strong>, and nearly <strong>10 billion answer tokens<\/strong>, FineVision position itself as one of the largest and structured publicly available VLM training datasets.<\/p>\n<p>FineVision aggregates <strong>200+ sources<\/strong> into a unified format, rigorously filtered for duplicates and benchmark contamination. Rated systematically across multiple quality dimensions, the dataset enables researchers and devs to construct robust training mixtures while minimizing data leakage.<\/p>\n<h3 class=\"wp-block-heading\"><strong>Why is FineVision Important for VLM Training?<\/strong><\/h3>\n<p>Most state-of-the-art VLMs rely on proprietary datasets, limiting reproducibility and accessibility for the broader research community. 
### Why is FineVision Important for VLM Training?

Most state-of-the-art VLMs rely on proprietary datasets, limiting reproducibility and accessibility for the broader research community. FineVision addresses this gap by offering:

- **Scale and Coverage**: 5 TB of curated data across 9 categories, including General VQA, OCR QA, Chart & Table reasoning, Science, Captioning, Grounding & Counting, and GUI navigation.
- **Benchmark Gains**: Across **11 widely used benchmarks** (e.g., AI2D, ChartQA, DocVQA, ScienceQA, OCRBench), models trained on FineVision outperform alternatives by significant margins: up to **46.3% over LLaVA**, **40.7% over Cauldron**, and **12.1% over Cambrian**.
- **New Skill Domains**: FineVision introduces data for emerging tasks such as GUI navigation, pointing, and counting, expanding the capabilities of VLMs beyond conventional captioning and VQA.

### How Was FineVision Built?

The curation pipeline followed a three-step process:

1. **Collection and Augmentation**: Over 200 publicly available image-text datasets were gathered. Missing modalities (e.g., text-only data) were reformatted into QA pairs, and underrepresented domains, such as GUI data, were supplemented through targeted collection.
2. **Cleaning** (sketched in code below):
   - Removed oversized QA pairs (>8192 tokens).
   - Resized large images to a maximum of 2048 px while preserving aspect ratio.
   - Discarded corrupted samples.
3. **Quality Rating**: Using **Qwen3-32B** and **Qwen2.5-VL-32B-Instruct** as judges, every QA pair was rated on four axes:
   - Text Formatting Quality
   - Question-Answer Relevance
   - Visual Dependency
   - Image-Question Correspondence

   These ratings enable selective training mixtures (a filtering sketch also follows), though ablations show that **retaining all samples yields the best performance**, even when lower-rated samples are included.
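To make the cleaning step concrete, here is a minimal sketch of the two filters described above. The 2048 px and 8192-token limits come from the release; the choice of Pillow for resizing and of an unspecified HF-style tokenizer for counting tokens are assumptions, not the authors' actual implementation:

```python
from PIL import Image

MAX_SIDE = 2048          # images resized so the longer side is <= 2048 px
MAX_TURN_TOKENS = 8192   # QA pairs longer than this were discarded

def resize_keep_aspect(img: Image.Image, max_side: int = MAX_SIDE) -> Image.Image:
    """Downscale so the longer side is at most max_side, preserving aspect ratio."""
    w, h = img.size
    scale = max_side / max(w, h)
    if scale >= 1.0:
        return img  # already within the limit
    return img.resize((int(w * scale), int(h * scale)), Image.Resampling.LANCZOS)

def keep_turn(question: str, answer: str, tokenizer) -> bool:
    """Keep a QA turn only if it fits the token budget.
    `tokenizer` is any HF-style tokenizer; the original pipeline's
    tokenizer is not specified in the release."""
    return len(tokenizer.encode(question + "\n" + answer)) <= MAX_TURN_TOKENS
```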
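The per-sample ratings make selective mixtures a one-liner. The sketch below keeps only highly rated pairs; the rating column names are hypothetical (the dataset card documents the real fields), and note that, per the ablations above, training on everything actually performed best:

```python
from datasets import load_dataset

# Hypothetical mixture selection by judge score. The rating column names
# here are assumptions; consult the dataset card for the actual schema.
ds = load_dataset("HuggingFaceM4/FineVision", name="chartqa", split="train")

high_quality = ds.filter(
    lambda ex: ex["relevance_rating"] >= 4 and ex["formatting_rating"] >= 4
)
print(f"kept {len(high_quality)} of {len(ds)} samples")
```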
### Comparative Analysis: FineVision vs. Existing Open Datasets

| Dataset | Images | Samples | Turns | Tokens | Leakage | Perf. Drop After Deduplication |
|---|---|---|---|---|---|---|
| Cauldron | 2.0M | 1.8M | 27.8M | 0.3B | 3.05% | -2.39% |
| LLaVA-Vision | 2.5M | 3.9M | 9.1M | 1.0B | 2.15% | -2.72% |
| Cambrian-7M | 5.4M | 7.0M | 12.2M | 0.8B | 2.29% | -2.78% |
| **FineVision** | **17.3M** | **24.3M** | **88.9M** | **9.5B** | **1.02%** | **-1.45%** |

FineVision is not only one of the largest but also the least contaminated of these datasets, with just ~1% overlap with benchmark test sets. This minimizes data leakage and keeps evaluation results reliable.

### Performance Insights

- **Model Setup**: Ablations were conducted using **nanoVLM** (460M parameters), combining **SmolLM2-360M-Instruct** as the language backbone and **SigLIP2-Base-512** as the vision encoder.
- **Training Efficiency**: On 32 NVIDIA H100 GPUs, one full epoch (12k steps) takes ~20 hours (see the back-of-envelope arithmetic after this list).
- **Performance Trends**:
  - FineVision models improve steadily with exposure to diverse data, overtaking baselines after ~12k steps.
  - Deduplication experiments confirm FineVision's low leakage compared to Cauldron, LLaVA, and Cambrian (a cheap proxy for this kind of check is sketched below).
  - Multilingual subsets show slight performance gains even when the backbone is monolingual, suggesting diversity outweighs strict alignment.
  - Attempts at multi-stage training (two or 2.5 stages) did not yield consistent benefits, reinforcing that **scale + diversity** matters more than training heuristics.
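As a quick sanity check on those throughput numbers, inferred from the reported totals rather than stated in the release: one epoch over 24.3M samples in ~12k optimizer steps implies a global batch of roughly 24.3M / 12,000 ≈ 2,000 samples per step, or about 63 samples per H100, and at ~20 hours per epoch an overall rate near 340 samples per second.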
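On the leakage point: the release reports overlap with benchmark test sets, but not every reader will want to rerun full deduplication. A cheap proxy, and explicitly not necessarily the method behind the 1.02% figure above, is to compare perceptual hashes of training images against benchmark test images using the third-party `imagehash` package:

```python
import imagehash
from PIL import Image

def phashes(paths):
    """Perceptual hashes for a list of image file paths."""
    return [imagehash.phash(Image.open(p)) for p in paths]

def leakage_rate(train_paths, test_paths, max_hamming=4):
    """Fraction of training images within a small Hamming distance of any
    benchmark test image. A rough stand-in for proper deduplication."""
    test_hashes = phashes(test_paths)
    hits = sum(
        any(h - t <= max_hamming for t in test_hashes)  # hash difference = Hamming distance
        for h in phashes(train_paths)
    )
    return hits / len(train_paths)
```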
### Why FineVision Sets a New Standard

1. **+20% Average Performance Boost**: Outperforms all existing open datasets across 10+ benchmarks.
2. **Unprecedented Scale**: 17M+ images, 24M+ samples, ~10B tokens.
3. **Skill Expansion**: GUI navigation, counting, pointing, and document reasoning included.
4. **Lowest Data Leakage**: ~1% contamination, compared to 2–3% in other datasets.
5. **Fully Open Source**: Available on the Hugging Face Hub for immediate use via the `datasets` library (see the loading sketch near the top of this post).

### Conclusion

FineVision marks a significant advancement in open multimodal datasets. Its large scale, systematic curation, and transparent quality assessments create a reproducible and extensible foundation for training state-of-the-art Vision-Language Models. By reducing dependence on proprietary resources, it enables researchers and developers to build competitive systems and accelerate progress in areas such as document analysis, visual reasoning, and agentic multimodal tasks.

Check out the [Dataset](https://huggingface.co/datasets/HuggingFaceM4/FineVision) and the [technical details](https://huggingface.co/spaces/HuggingFaceM4/FineVision).