{"id":43624,"date":"2025-10-11T06:57:42","date_gmt":"2025-10-11T06:57:42","guid":{"rendered":"https:\/\/youzum.net\/liquid-ai-releases-lfm2-8b-a1b-an-on-device-mixture-of-experts-with-8-3b-params-and-a-1-5b-active-params-per-token\/"},"modified":"2025-10-11T06:57:42","modified_gmt":"2025-10-11T06:57:42","slug":"liquid-ai-releases-lfm2-8b-a1b-an-on-device-mixture-of-experts-with-8-3b-params-and-a-1-5b-active-params-per-token","status":"publish","type":"post","link":"https:\/\/youzum.net\/ja\/liquid-ai-releases-lfm2-8b-a1b-an-on-device-mixture-of-experts-with-8-3b-params-and-a-1-5b-active-params-per-token\/","title":{"rendered":"Liquid AI Releases LFM2-8B-A1B: An On-Device Mixture-of-Experts with 8.3B Params and a 1.5B Active Params per Token"},"content":{"rendered":"<p>How much capability can a sparse <strong>8.3B-parameter<\/strong> MoE with a <strong>~1.5B active path<\/strong> deliver on your phone without blowing latency or memory? <strong>Liquid AI has released <\/strong><a href=\"https:\/\/www.liquid.ai\/blog\/lfm2-8b-a1b-an-efficient-on-device-mixture-of-experts\" target=\"_blank\" rel=\"noreferrer noopener\"><strong>LFM2-8B-A1B<\/strong>,<\/a> a small-scale Mixture-of-Experts (MoE) model built for on-device execution under tight memory, latency, and energy budgets. Unlike most MoE work optimized for cloud batch serving, LFM2-8B-A1B targets phones, laptops, and embedded systems. It showcases <strong>8.3B total parameters<\/strong> but activates only <strong>~1.5B parameters per token<\/strong>, using sparse expert routing to preserve a small compute path while increasing representational capacity. The model is released under the <strong>LFM Open License v1.0 (lfm1.0)<\/strong><\/p>\n<h3 class=\"wp-block-heading\"><strong>Understanding the Architecture<\/strong><\/h3>\n<p>LFM2-8B-A1B retains the LFM2 \u2018fast backbone\u2019 and inserts sparse-MoE feed-forward blocks to lift capacity without materially increasing the active compute. The backbone uses <strong>18 gated short-convolution blocks<\/strong> and <strong>6 grouped-query attention (GQA) blocks<\/strong>. All layers <strong>except the first two<\/strong> include an MoE block; the first two remain dense for stability. Each MoE block defines <strong>32 experts<\/strong>; the router selects <strong>top-4 experts per token<\/strong> with a <strong>normalized-sigmoid gate<\/strong> and <strong>adaptive routing bias<\/strong> to balance load and stabilize training. Context length is <strong>32,768 tokens<\/strong>; vocabulary size <strong>65,536<\/strong>; reported pre-training budget <strong>~12T tokens<\/strong>. 
<\/p>\n<p>This approach keeps per-token FLOPs and cache growth bounded by the active path (attention + four expert MLPs), while total capacity allows specialization across domains such as multilingual knowledge, math, and code\u2014use cases that often regress on very small dense models.<\/p>\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full is-resized\"><img fetchpriority=\"high\" decoding=\"async\" width=\"2560\" height=\"2223\" data-attachment-id=\"75259\" data-permalink=\"https:\/\/www.marktechpost.com\/2025\/10\/10\/liquid-ai-releases-lfm2-8b-a1b-an-on-device-mixture-of-experts-with-8-3b-params-and-a-1-5b-active-params-per-token\/image-151\/\" data-orig-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2025\/10\/image-4-scaled.png\" data-orig-size=\"2560,2223\" data-comments-opened=\"1\" data-image-meta='{\"aperture\":\"0\",\"credit\":\"\",\"camera\":\"\",\"caption\":\"\",\"created_timestamp\":\"0\",\"copyright\":\"\",\"focal_length\":\"0\",\"iso\":\"0\",\"shutter_speed\":\"0\",\"title\":\"\",\"orientation\":\"0\"}' data-image-title=\"image\" data-image-description=\"\" data-image-caption=\"\" data-medium-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2025\/10\/image-4-300x260.png\" data-large-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2025\/10\/image-4-1024x889.png\" src=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2025\/10\/image-4-scaled.png\" alt=\"\" class=\"wp-image-75259\" \/><figcaption class=\"wp-element-caption\">https:\/\/www.liquid.ai\/blog\/lfm2-8b-a1b-an-efficient-on-device-mixture-of-experts<\/figcaption><\/figure>\n<\/div>\n<h3 class=\"wp-block-heading\"><strong>Performance signals<\/strong><\/h3>\n<p>Liquid AI reports that LFM2-8B-A1B <strong>runs significantly faster than Qwen3-1.7B<\/strong> under CPU tests using an internal XNNPACK-based stack and a custom CPU MoE kernel. The public plots cover <strong>int4 quantization with int8 dynamic activations<\/strong> on <strong>AMD Ryzen AI 9 HX370<\/strong> and <strong>Samsung Galaxy S24 Ultra<\/strong>. The Liquid AI team positions quality as comparable to <strong>3\u20134B dense models<\/strong>, while keeping the active compute near <strong>1.5B<\/strong>. No cross-vendor \u201c\u00d7-faster\u201d headline multipliers are published; the claims are framed as per-device comparisons versus similarly active models.<\/p>\n<p>On accuracy, the model card lists results across 16 benchmarks, including MMLU\/MMLU-Pro\/GPQA (knowledge), IFEval\/IFBench\/Multi-IF (instruction following), GSM8K\/GSMPlus\/MATH500\/MATH-Lvl-5 (math), and MGSM\/MMMLU (multilingual). 
The numbers indicate competitive instruction-following and math performance within the small-model band, and improved knowledge capacity relative to LFM2-2.6B, consistent with the larger total parameter budget.

[Figures omitted; source: https://www.liquid.ai/blog/lfm2-8b-a1b-an-efficient-on-device-mixture-of-experts]

Deployment and tooling

LFM2-8B-A1B ships with Transformers/vLLM support for GPU inference and GGUF builds for llama.cpp; the official GGUF repo lists common quants from Q4_0 (~4.7 GB) up to F16 (~16.7 GB) for local runs, and llama.cpp requires a recent build with lfm2moe support (b6709+) to avoid "unknown model architecture" errors. Liquid's CPU validation uses Q4_0 with int8 dynamic activations on AMD Ryzen AI 9 HX370 and Samsung Galaxy S24 Ultra, where LFM2-8B-A1B shows higher decode throughput than Qwen3-1.7B in a similar active-parameter class; ExecuTorch is referenced for mobile/embedded CPU deployment.

[Figures omitted; source: https://www.liquid.ai/blog/lfm2-8b-a1b-an-efficient-on-device-mixture-of-experts]
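To make the GPU path concrete, here is a minimal sketch using the standard Hugging Face Transformers causal-LM interface with the model id from the official Hugging Face card (LiquidAI/LFM2-8B-A1B); the dtype and generation settings are assumptions, so check the model card for the recommended configuration:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LiquidAI/LFM2-8B-A1B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",            # place the 8.3B total weights on available devices
    torch_dtype=torch.bfloat16,   # assumption: bf16 weights fit the target GPU
)

prompt = "Explain mixture-of-experts routing in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For the local CPU/GGUF path, the equivalent llama.cpp run needs build b6709 or newer, as noted above, so that the lfm2moe architecture is recognized.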
class=\"wp-element-caption\">https:\/\/www.liquid.ai\/blog\/lfm2-8b-a1b-an-efficient-on-device-mixture-of-experts<\/figcaption><\/figure>\n<\/div>\n<h3 class=\"wp-block-heading\"><strong>Key Takeaways<\/strong><\/h3>\n<ul class=\"wp-block-list\">\n<li><strong>Architecture &amp; routing<\/strong>: LFM2-8B-A1B pairs an LFM2 fast backbone (18 gated short-conv blocks + 6 GQA blocks) with per-layer sparse-MoE FFNs (all layers except the first two), using 32 experts with top-4 routing via normalized-sigmoid gating and adaptive biases; <strong>8.3B total params, ~1.5B active per token<\/strong>. <\/li>\n<li><strong>On-device target<\/strong>: Designed for phones, laptops, and embedded CPUs\/GPUs; quantized variants \u201cfit comfortably\u201d on high-end consumer hardware for private, low-latency use.<\/li>\n<li><strong>Performance positioning.<\/strong> Liquid reports LFM2-8B-A1B is <strong>significantly faster than Qwen3-1.7B<\/strong> in CPU tests and aims for <strong>3\u20134B dense-class quality<\/strong> while keeping an ~1.5B active path. <\/li>\n<\/ul>\n<h3 class=\"wp-block-heading\"><strong>Editorial Comments<\/strong><\/h3>\n<p>LFM2-8B-A1B demonstrates that sparse MoE can be practical below the usual <a href=\"https:\/\/www.marktechpost.com\/2025\/08\/08\/proxy-servers-explained-types-use-cases-trends-in-2025-technical-deep-dive\/\" target=\"_blank\">server<\/a>-scale regime. The model combines an LFM2 conv-attention backbone with per-layer expert MLPs (except the first two layers) to keep token compute near 1.5B while lifting quality toward 3\u20134B dense classes. With standard and GGUF weights, llama.cpp\/ExecuTorch\/vLLM paths, and a permissive on-device posture, LFM2-8B-A1B is a concrete option for building low-latency, private assistants and application-embedded copilots on consumer and edge hardware. <\/p>\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n<p>Check out the\u00a0<strong><a href=\"https:\/\/huggingface.co\/LiquidAI\/LFM2-8B-A1B\" target=\"_blank\" rel=\"noreferrer noopener\">Model on Hugging Face <\/a><\/strong>and<strong> <a href=\"https:\/\/www.liquid.ai\/blog\/lfm2-8b-a1b-an-efficient-on-device-mixture-of-experts\" target=\"_blank\" rel=\"noreferrer noopener\">Technical details<\/a><\/strong>. Feel free to check out our\u00a0<strong><mark><a href=\"https:\/\/github.com\/Marktechpost\/AI-Tutorial-Codes-Included\" target=\"_blank\" rel=\"noreferrer noopener\">GitHub Page for Tutorials, Codes and Notebooks<\/a><\/mark><\/strong>.\u00a0Also,\u00a0feel free to follow us on\u00a0<strong><a href=\"https:\/\/x.com\/intent\/follow?screen_name=marktechpost\" target=\"_blank\" rel=\"noreferrer noopener\"><mark>Twitter<\/mark><\/a><\/strong>\u00a0and don\u2019t forget to join our\u00a0<strong><a href=\"https:\/\/www.reddit.com\/r\/machinelearningnews\/\" target=\"_blank\" rel=\"noreferrer noopener\">100k+ ML SubReddit<\/a><\/strong>\u00a0and Subscribe to\u00a0<strong><a href=\"https:\/\/www.aidevsignals.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">our Newsletter<\/a><\/strong>. Wait! 
You can also join us on Telegram: https://t.me/machinelearningresearchnews.