{"id":38242,"date":"2025-09-15T07:08:11","date_gmt":"2025-09-15T07:08:11","guid":{"rendered":"https:\/\/youzum.net\/meta-ai-released-mobilellm-r1-a-edge-reasoning-model-with-less-than-1b-parameters-and-achieves-2x-5x-performance-boost-over-other-fully-open-source-ai-models\/"},"modified":"2025-09-15T07:08:11","modified_gmt":"2025-09-15T07:08:11","slug":"meta-ai-released-mobilellm-r1-a-edge-reasoning-model-with-less-than-1b-parameters-and-achieves-2x-5x-performance-boost-over-other-fully-open-source-ai-models","status":"publish","type":"post","link":"https:\/\/youzum.net\/de\/meta-ai-released-mobilellm-r1-a-edge-reasoning-model-with-less-than-1b-parameters-and-achieves-2x-5x-performance-boost-over-other-fully-open-source-ai-models\/","title":{"rendered":"Meta AI Released MobileLLM-R1: A Edge Reasoning Model with less than 1B Parameters and Achieves 2x\u20135x Performance Boost Over Other Fully Open-Source AI Models"},"content":{"rendered":"<div class=\"wp-block-yoast-seo-table-of-contents yoast-table-of-contents\">\n<h3><strong>Table of contents<\/strong><\/h3>\n<ul>\n<li><a href=\"https:\/\/www.marktechpost.com\/2025\/09\/14\/meta-ai-released-mobilellm-r1-a-edge-reasoning-model-with-less-than-1b-parameters-and-achieves-2x-5x-performance-boost-over-other-fully-open-source-ai-models\/#h-what-architecture-powers-mobilellm-r1\" data-level=\"3\">What architecture powers MobileLLM-R1?<\/a><\/li>\n<li><a href=\"https:\/\/www.marktechpost.com\/2025\/09\/14\/meta-ai-released-mobilellm-r1-a-edge-reasoning-model-with-less-than-1b-parameters-and-achieves-2x-5x-performance-boost-over-other-fully-open-source-ai-models\/#h-how-efficient-is-the-training\" data-level=\"3\">How efficient is the training?<\/a><\/li>\n<li><a href=\"https:\/\/www.marktechpost.com\/2025\/09\/14\/meta-ai-released-mobilellm-r1-a-edge-reasoning-model-with-less-than-1b-parameters-and-achieves-2x-5x-performance-boost-over-other-fully-open-source-ai-models\/#h-how-does-it-perform-against-other-open-models\" 
data-level=\"3\">How does it perform against other open models?<\/a><\/li>\n<li><a href=\"https:\/\/www.marktechpost.com\/2025\/09\/14\/meta-ai-released-mobilellm-r1-a-edge-reasoning-model-with-less-than-1b-parameters-and-achieves-2x-5x-performance-boost-over-other-fully-open-source-ai-models\/#h-where-does-mobilellm-r1-fall-short\" data-level=\"3\">Where does MobileLLM-R1 fall short?<\/a><\/li>\n<li><a href=\"https:\/\/www.marktechpost.com\/2025\/09\/14\/meta-ai-released-mobilellm-r1-a-edge-reasoning-model-with-less-than-1b-parameters-and-achieves-2x-5x-performance-boost-over-other-fully-open-source-ai-models\/#h-how-does-mobilellm-r1-compare-to-qwen3-smollm2-and-olmo\" data-level=\"3\">How does MobileLLM-R1 compare to Qwen3, SmolLM2, and OLMo?<\/a><\/li>\n<li><a href=\"https:\/\/www.marktechpost.com\/2025\/09\/14\/meta-ai-released-mobilellm-r1-a-edge-reasoning-model-with-less-than-1b-parameters-and-achieves-2x-5x-performance-boost-over-other-fully-open-source-ai-models\/#h-summary\" data-level=\"3\">Summary<\/a><\/li>\n<\/ul>\n<\/div>\n<p>Meta has released <strong>MobileLLM-R1<\/strong>, a family of lightweight edge reasoning models now available on <a href=\"https:\/\/huggingface.co\/facebook\/MobileLLM-R1-950M\">Hugging Face<\/a>. 
The release includes models ranging from 140M to 950M parameters, with a focus on efficient mathematical, coding, and scientific reasoning at sub-billion scale.<\/p>\n<p>Unlike general-purpose chat models, MobileLLM-R1 is designed for edge deployment, aiming to deliver state-of-the-art reasoning accuracy while remaining computationally efficient.<\/p>\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large is-resized\"><a href=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2025\/09\/1000x800-1-scaled.png\"><img fetchpriority=\"high\" decoding=\"async\" width=\"1024\" height=\"819\" src=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2025\/09\/1000x800-1-1024x819.png\" alt=\"\" class=\"wp-image-74559\" \/><\/a><\/figure>\n<\/div>\n<h3 class=\"wp-block-heading\"><strong>What architecture powers MobileLLM-R1?<\/strong><\/h3>\n<p>The largest model, <strong>MobileLLM-R1-950M<\/strong>, integrates several architectural optimizations:<\/p>\n<ul class=\"wp-block-list\">\n<li><strong>22 Transformer 
layers<\/strong> with 24 attention heads and 6 grouped KV heads.<\/li>\n<li><strong>Embedding dimension: 1536<\/strong>; <strong>hidden dimension: 6144<\/strong>.<\/li>\n<li><strong>Grouped-Query Attention (GQA)<\/strong> reduces compute and memory.<\/li>\n<li><strong>Block-wise weight sharing<\/strong> cuts parameter count without heavy latency penalties.<\/li>\n<li><strong>SwiGLU activations<\/strong> improve small-model representation.<\/li>\n<li><strong>Context length:<\/strong> 4K for base, 32K for post-trained models.<\/li>\n<li><strong>128K vocabulary<\/strong> with shared input\/output embeddings.<\/li>\n<\/ul>\n<p>The emphasis is on reducing compute and memory requirements, making it suitable for deployment on constrained devices.<\/p>\n<h3 class=\"wp-block-heading\"><strong>How efficient is the training?<\/strong><\/h3>\n<p><strong>MobileLLM-R1 is notable for data efficiency:<\/strong><\/p>\n<ul class=\"wp-block-list\">\n<li>Trained on <strong>~4.2T tokens<\/strong> in total.<\/li>\n<li>By comparison, <strong>Qwen3\u2019s 0.6B<\/strong> model was trained on <strong>36T tokens<\/strong>.<\/li>\n<li>This means MobileLLM-R1 uses only <strong>\u224811.7%<\/strong> of the data to reach or surpass Qwen3\u2019s accuracy.<\/li>\n<li>Post-training applies supervised fine-tuning on math, coding, and reasoning datasets.<\/li>\n<\/ul>\n<p>This efficiency translates directly into lower training costs and resource demands.<\/p>\n<h3 class=\"wp-block-heading\"><strong>How does it perform against other open models?<\/strong><\/h3>\n<p>On benchmarks, MobileLLM-R1-950M shows significant gains:<\/p>\n<ul class=\"wp-block-list\">\n<li><strong>MATH (MATH500 dataset):<\/strong> ~<strong>5\u00d7 higher accuracy<\/strong> than <strong>Olmo-1.24B<\/strong> and ~<strong>2\u00d7 higher accuracy<\/strong> than <strong>SmolLM2-1.7B<\/strong>.<\/li>\n<li><strong>Reasoning and coding (GSM8K, AIME, LiveCodeBench):<\/strong> Matches or surpasses <strong>Qwen3-0.6B<\/strong>, despite 
using far fewer tokens.<\/li>\n<\/ul>\n<p>The model delivers results typically associated with larger architectures while maintaining a smaller footprint.<\/p>\n<h3 class=\"wp-block-heading\"><strong>Where does MobileLLM-R1 fall short?<\/strong><\/h3>\n<p>The model\u2019s focus creates limitations:<\/p>\n<ul class=\"wp-block-list\">\n<li>Strong in <strong>math, code, and structured reasoning<\/strong>.<\/li>\n<li>Weaker in <strong>general conversation, commonsense, and creative tasks<\/strong> compared to larger LLMs.<\/li>\n<li>Distributed under <strong>FAIR NC (non-commercial) license<\/strong>, which restricts usage in production settings.<\/li>\n<li>Longer contexts (32K) raise <strong>KV-cache and memory demands<\/strong> at inference.<\/li>\n<\/ul>\n<h3 class=\"wp-block-heading\"><strong>How does MobileLLM-R1 compare to Qwen3, SmolLM2, and OLMo?<\/strong><\/h3>\n<p><strong>Performance snapshot (post-trained models):<\/strong><\/p>\n<figure class=\"wp-block-table is-style-stripes\">\n<table class=\"has-fixed-layout\">\n<thead>\n<tr>\n<th>Model<\/th>\n<th>Params<\/th>\n<th>Train tokens 
(T)<\/th>\n<th>MATH500<\/th>\n<th>GSM8K<\/th>\n<th>AIME\u201924<\/th>\n<th>AIME\u201925<\/th>\n<th>LiveCodeBench<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td><strong>MobileLLM-R1-950M<\/strong><\/td>\n<td>0.949B<\/td>\n<td><strong>4.2<\/strong><\/td>\n<td><strong>74.0<\/strong><\/td>\n<td>67.5<\/td>\n<td>15.5<\/td>\n<td>16.3<\/td>\n<td>19.9<\/td>\n<\/tr>\n<tr>\n<td><strong>Qwen3-0.6B<\/strong><\/td>\n<td>0.596B<\/td>\n<td><strong>36.0<\/strong><\/td>\n<td>73.0<\/td>\n<td><strong>79.2<\/strong><\/td>\n<td>11.3<\/td>\n<td>17.0<\/td>\n<td>14.9<\/td>\n<\/tr>\n<tr>\n<td><strong>SmolLM2-1.7B-Instruct<\/strong><\/td>\n<td>1.71B<\/td>\n<td><strong>~11.0<\/strong><\/td>\n<td>19.2<\/td>\n<td>41.8<\/td>\n<td>0.3<\/td>\n<td>0.1<\/td>\n<td>4.4<\/td>\n<\/tr>\n<tr>\n<td><strong>OLMo-2-1B-Instruct<\/strong><\/td>\n<td>1.48B<\/td>\n<td><strong>~3.95<\/strong><\/td>\n<td>19.2<\/td>\n<td>69.7<\/td>\n<td>0.6<\/td>\n<td>0.1<\/td>\n<td>0.0<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/figure>\n<p><strong>Key observations:<\/strong><\/p>\n<ul class=\"wp-block-list\">\n<li>R1-950M matches <strong>Qwen3-0.6B<\/strong> in math (74.0 vs 73.0) while requiring ~<strong>8.6\u00d7 fewer tokens<\/strong>.<\/li>\n<li>Performance gaps vs <strong>SmolLM2<\/strong> and <strong>OLMo<\/strong> are substantial across reasoning tasks.<\/li>\n<li>Qwen3 maintains an edge in GSM8K, but the difference is small compared to the training efficiency advantage.<\/li>\n<\/ul>\n<h3 class=\"wp-block-heading\"><strong>Summary<\/strong><\/h3>\n<p>Meta\u2019s MobileLLM-R1 underscores a trend toward smaller, domain-optimized models that deliver competitive reasoning without massive training budgets. 
By achieving 2\u00d7\u20135\u00d7 performance gains over larger open models while training on a fraction of the data, it demonstrates that efficiency\u2014not just scale\u2014will define the next phase of LLM deployment, especially for math, coding, and scientific use cases on edge devices.<\/p>\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n<p>Check out the\u00a0<strong><a href=\"https:\/\/huggingface.co\/facebook\/MobileLLM-R1-950M\" target=\"_blank\" rel=\"noreferrer noopener\">Model on Hugging Face<\/a><em>.<\/em><\/strong>\u00a0Feel free to check out our\u00a0<strong><mark><a href=\"https:\/\/github.com\/Marktechpost\/AI-Tutorial-Codes-Included\" target=\"_blank\" rel=\"noreferrer noopener\">GitHub Page for Tutorials, Codes and Notebooks<\/a><\/mark><\/strong>.\u00a0Also,\u00a0feel free to follow us on\u00a0<strong><a href=\"https:\/\/x.com\/intent\/follow?screen_name=marktechpost\" target=\"_blank\" rel=\"noreferrer noopener\"><mark>Twitter<\/mark><\/a><\/strong>\u00a0and don\u2019t forget to join our\u00a0<strong><a href=\"https:\/\/www.reddit.com\/r\/machinelearningnews\/\" target=\"_blank\" rel=\"noreferrer noopener\">100k+ ML SubReddit<\/a><\/strong>\u00a0and Subscribe to\u00a0<strong><a href=\"https:\/\/www.aidevsignals.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">our Newsletter<\/a><\/strong>.<\/p>\n<p>The post <a href=\"https:\/\/www.marktechpost.com\/2025\/09\/14\/meta-ai-released-mobilellm-r1-a-edge-reasoning-model-with-less-than-1b-parameters-and-achieves-2x-5x-performance-boost-over-other-fully-open-source-ai-models\/\">Meta AI Released MobileLLM-R1: An Edge Reasoning Model with less than 1B Parameters that Achieves a 2x\u20135x Performance Boost Over Other Fully Open-Source AI Models<\/a> appeared first on <a href=\"https:\/\/www.marktechpost.com\/\">MarkTechPost<\/a>.<\/p>","protected":false},"author":2,"featured_media":38243,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","categories":[52,5,7,1],"tags":[],"acf":[]}
1,"name":"Home","item":"https:\/\/youzum.net\/"},{"@type":"ListItem","position":2,"name":"Meta AI Released MobileLLM-R1: A Edge Reasoning Model with less than 1B Parameters and Achieves 2x\u20135x Performance Boost Over Other Fully Open-Source AI Models"}]},{"@type":"WebSite","@id":"https:\/\/yousum.gpucore.co\/#website","url":"https:\/\/yousum.gpucore.co\/","name":"YouSum","description":"","publisher":{"@id":"https:\/\/yousum.gpucore.co\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/yousum.gpucore.co\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"de"},{"@type":"Organization","@id":"https:\/\/yousum.gpucore.co\/#organization","name":"Drone Association Thailand","url":"https:\/\/yousum.gpucore.co\/","logo":{"@type":"ImageObject","inLanguage":"de","@id":"https:\/\/yousum.gpucore.co\/#\/schema\/logo\/image\/","url":"https:\/\/youzum.net\/wp-content\/uploads\/2024\/11\/tranparent-logo.png","contentUrl":"https:\/\/youzum.net\/wp-content\/uploads\/2024\/11\/tranparent-logo.png","width":300,"height":300,"caption":"Drone Association Thailand"},"image":{"@id":"https:\/\/yousum.gpucore.co\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/DroneAssociationTH\/"]},{"@type":"Person","@id":"https:\/\/yousum.gpucore.co\/#\/schema\/person\/97fa48242daf3908e4d9a5f26f4a059c","name":"admin NU","image":{"@type":"ImageObject","inLanguage":"de","@id":"https:\/\/yousum.gpucore.co\/#\/schema\/person\/image\/","url":"https:\/\/youzum.net\/wp-content\/uploads\/avatars\/2\/1746849356-bpfull.png","contentUrl":"https:\/\/youzum.net\/wp-content\/uploads\/avatars\/2\/1746849356-bpfull.png","caption":"admin 
NU"},"url":"https:\/\/youzum.net\/de\/members\/adminnu\/"}]}},"rttpg_featured_image_url":{"full":["https:\/\/youzum.net\/wp-content\/uploads\/2025\/09\/1000x800-1-1024x819-WOwWyM.png",1024,819,false],"landscape":["https:\/\/youzum.net\/wp-content\/uploads\/2025\/09\/1000x800-1-1024x819-WOwWyM.png",1024,819,false],"portraits":["https:\/\/youzum.net\/wp-content\/uploads\/2025\/09\/1000x800-1-1024x819-WOwWyM.png",1024,819,false],"thumbnail":["https:\/\/youzum.net\/wp-content\/uploads\/2025\/09\/1000x800-1-1024x819-WOwWyM-150x150.png",150,150,true],"medium":["https:\/\/youzum.net\/wp-content\/uploads\/2025\/09\/1000x800-1-1024x819-WOwWyM-300x240.png",300,240,true],"large":["https:\/\/youzum.net\/wp-content\/uploads\/2025\/09\/1000x800-1-1024x819-WOwWyM.png",1024,819,false],"1536x1536":["https:\/\/youzum.net\/wp-content\/uploads\/2025\/09\/1000x800-1-1024x819-WOwWyM.png",1024,819,false],"2048x2048":["https:\/\/youzum.net\/wp-content\/uploads\/2025\/09\/1000x800-1-1024x819-WOwWyM.png",1024,819,false],"trp-custom-language-flag":["https:\/\/youzum.net\/wp-content\/uploads\/2025\/09\/1000x800-1-1024x819-WOwWyM-15x12.png",15,12,true],"woocommerce_thumbnail":["https:\/\/youzum.net\/wp-content\/uploads\/2025\/09\/1000x800-1-1024x819-WOwWyM-300x300.png",300,300,true],"woocommerce_single":["https:\/\/youzum.net\/wp-content\/uploads\/2025\/09\/1000x800-1-1024x819-WOwWyM-600x480.png",600,480,true],"woocommerce_gallery_thumbnail":["https:\/\/youzum.net\/wp-content\/uploads\/2025\/09\/1000x800-1-1024x819-WOwWyM-100x100.png",100,100,true]},"rttpg_author":{"display_name":"admin NU","author_link":"https:\/\/youzum.net\/de\/members\/adminnu\/"},"rttpg_comment":0,"rttpg_category":"<a href=\"https:\/\/youzum.net\/de\/category\/ai-club\/\" rel=\"category tag\">AI<\/a> <a href=\"https:\/\/youzum.net\/de\/category\/committee\/\" rel=\"category tag\">Committee<\/a> <a href=\"https:\/\/youzum.net\/de\/category\/news\/\" rel=\"category tag\">News<\/a> <a 
href=\"https:\/\/youzum.net\/de\/category\/uncategorized\/\" rel=\"category tag\">Uncategorized<\/a>","rttpg_excerpt":"Table of contents What architecture powers MobileLLM-R1? How efficient is the training? How does it perform against other open models? Where does MobileLLM-R1 fall short? How does MobileLLM-R1 compare to Qwen3, SmolLM2, and OLMo? Summary Meta has released MobileLLM-R1, a family of lightweight edge reasoning models now available on Hugging Face. The release includes models&hellip;","_links":{"self":[{"href":"https:\/\/youzum.net\/de\/wp-json\/wp\/v2\/posts\/38242","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/youzum.net\/de\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/youzum.net\/de\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/youzum.net\/de\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/youzum.net\/de\/wp-json\/wp\/v2\/comments?post=38242"}],"version-history":[{"count":0,"href":"https:\/\/youzum.net\/de\/wp-json\/wp\/v2\/posts\/38242\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/youzum.net\/de\/wp-json\/wp\/v2\/media\/38243"}],"wp:attachment":[{"href":"https:\/\/youzum.net\/de\/wp-json\/wp\/v2\/media?parent=38242"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/youzum.net\/de\/wp-json\/wp\/v2\/categories?post=38242"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/youzum.net\/de\/wp-json\/wp\/v2\/tags?post=38242"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}