{"id":26232,"date":"2025-07-20T05:39:59","date_gmt":"2025-07-20T05:39:59","guid":{"rendered":"https:\/\/youzum.net\/nvidia-ai-releases-openreasoning-nemotron-a-suite-of-reasoning-enhanced-llms-distilled-from-deepseek-r1-0528\/"},"modified":"2025-07-20T05:39:59","modified_gmt":"2025-07-20T05:39:59","slug":"nvidia-ai-releases-openreasoning-nemotron-a-suite-of-reasoning-enhanced-llms-distilled-from-deepseek-r1-0528","status":"publish","type":"post","link":"https:\/\/youzum.net\/zh\/nvidia-ai-releases-openreasoning-nemotron-a-suite-of-reasoning-enhanced-llms-distilled-from-deepseek-r1-0528\/","title":{"rendered":"NVIDIA AI Releases OpenReasoning-Nemotron: A Suite of Reasoning-Enhanced LLMs Distilled from DeepSeek R1 0528"},"content":{"rendered":"<p>NVIDIA AI has introduced <strong>OpenReasoning-Nemotron<\/strong>, a family of large language models (LLMs) designed to excel in complex reasoning tasks across mathematics, science, and code. This model suite\u2014comprising <strong>1.5B, 7B, 14B, and 32B parameter versions<\/strong>\u2014has been <strong>distilled from the 671B DeepSeek R1 0528 model<\/strong>, capturing its high-level reasoning capabilities in significantly smaller and more efficient models.<\/p>\n<p>The release positions NVIDIA as a leading contributor to the open-source LLM ecosystem, delivering models that push state-of-the-art (SOTA) performance while remaining commercially permissive and widely accessible via <a class=\"\" href=\"https:\/\/huggingface.co\/blog\/nvidia\/openreasoning-nemotron?linkId=100000374186136\">Hugging Face<\/a>.<\/p>\n<h3 class=\"wp-block-heading\"><strong>Model Overview and Architecture<\/strong><\/h3>\n<h4 class=\"wp-block-heading\"><strong><img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/16.0.1\/72x72\/2705.png\" alt=\"\u2705\" class=\"wp-smiley\" \/> Distillation from DeepSeek R1 0528 (671B)<\/strong><\/h4>\n<p>At the heart of OpenReasoning-Nemotron lies a <strong>distillation strategy<\/strong> that transfers reasoning ability from DeepSeek R1\u2014a massive 671B parameter model\u2014into smaller architectures. The process prioritizes <strong>reasoning generalization<\/strong> over raw token prediction, enabling compact models to perform effectively on structured, high-cognition tasks.<\/p>\n<p>The distillation dataset emphasizes <strong>mathematics, science, and programming languages<\/strong>, aligning model capabilities with key reasoning domains.<\/p>\n<h4 class=\"wp-block-heading\"><strong><img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/16.0.1\/72x72\/1f4ca.png\" alt=\"\ud83d\udcca\" class=\"wp-smiley\" \/> Model Variants and Specs<\/strong><\/h4>\n<figure class=\"wp-block-table\">\n<table class=\"has-fixed-layout\">\n<thead>\n<tr>\n<th>Model Name<\/th>\n<th>Parameters<\/th>\n<th>Intended Use<\/th>\n<th>Hugging Face Page<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>OpenReasoning-Nemotron-1.5B<\/td>\n<td>1.5B<\/td>\n<td>Entry-level reasoning and inference<\/td>\n<td><a class=\"\" href=\"https:\/\/huggingface.co\/nvidia\/OpenReasoning-Nemotron-1.5B\">Link<\/a><\/td>\n<\/tr>\n<tr>\n<td>OpenReasoning-Nemotron-7B<\/td>\n<td>7B<\/td>\n<td>Mid-scale reasoning, good for code\/math<\/td>\n<td><a class=\"\" href=\"https:\/\/huggingface.co\/nvidia\/OpenReasoning-Nemotron-7B\">Link<\/a><\/td>\n<\/tr>\n<tr>\n<td>OpenReasoning-Nemotron-14B<\/td>\n<td>14B<\/td>\n<td>Advanced reasoning capabilities<\/td>\n<td><a class=\"\" href=\"https:\/\/huggingface.co\/nvidia\/OpenReasoning-Nemotron-14B\">Link<\/a><\/td>\n<\/tr>\n<tr>\n<td>OpenReasoning-Nemotron-32B<\/td>\n<td>32B<\/td>\n<td>Near frontier-model performance in logic-intensive tasks<\/td>\n<td><a class=\"\" href=\"https:\/\/huggingface.co\/nvidia\/OpenReasoning-Nemotron-32B\">Link<\/a><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/figure>\n<p>All models are compatible with <strong>transformer architectures<\/strong>, support <strong>FP16\/INT8 quantization<\/strong>, and are optimized for <strong>NVIDIA GPUs and NeMo<\/strong> frameworks.<\/p>\n<h3 class=\"wp-block-heading\"><strong>Performance Benchmarks<\/strong><\/h3>\n<p>These models set <strong>new state-of-the-art pass@1 scores for their size class<\/strong> across multiple reasoning benchmarks:<\/p>\n<figure class=\"wp-block-table\">\n<table class=\"has-fixed-layout\">\n<tbody>\n<tr>\n<td>Model<\/td>\n<td>GPQA<\/td>\n<td>MMLU\u2011PRO<\/td>\n<td>HLE<\/td>\n<td>LiveCodeBench<\/td>\n<td>SciCode<\/td>\n<td>AIME24<\/td>\n<td>AIME25<\/td>\n<td>HMMT\u202fFeb\u202f2025<\/td>\n<\/tr>\n<tr>\n<td>1.5B<\/td>\n<td>31.6<\/td>\n<td>47.5<\/td>\n<td>5.5<\/td>\n<td>28.6<\/td>\n<td>2.2<\/td>\n<td>55.5<\/td>\n<td>45.6<\/td>\n<td>31.5<\/td>\n<\/tr>\n<tr>\n<td>7B<\/td>\n<td>61.1<\/td>\n<td>71.9<\/td>\n<td>8.3<\/td>\n<td>63.3<\/td>\n<td>16.2<\/td>\n<td>84.7<\/td>\n<td>78.2<\/td>\n<td>63.5<\/td>\n<\/tr>\n<tr>\n<td>14B<\/td>\n<td>71.6<\/td>\n<td>77.5<\/td>\n<td>10.1<\/td>\n<td>67.8<\/td>\n<td>23.5<\/td>\n<td>87.8<\/td>\n<td>82.0<\/td>\n<td>71.2<\/td>\n<\/tr>\n<tr>\n<td>32B<\/td>\n<td>73.1<\/td>\n<td>80.0<\/td>\n<td>11.9<\/td>\n<td>70.2<\/td>\n<td>28.5<\/td>\n<td>89.2<\/td>\n<td>84.0<\/td>\n<td>73.8<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/figure>\n<p>All quoted scores are pass@1 without GenSelect.<\/p>\n<h4 class=\"wp-block-heading\"><strong><img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/16.0.1\/72x72\/1f50d.png\" alt=\"\ud83d\udd0d\" class=\"wp-smiley\" \/> GenSelect (Heavy Mode)<\/strong><\/h4>\n<p>Using <strong>Generative Selection with 64 candidates<\/strong> (\u201cGenSelect\u201d), performance further improves, especially at 32B:<\/p>\n<ul class=\"wp-block-list\">\n<li><strong>32B achieves<\/strong>: AIME24\u202f89.2 \u2192\u202f93.3, AIME25\u202f84.0 \u2192\u202f90.0, HMMT\u202f73.8 \u2192\u202f96.7, LiveCodeBench\u202f70.2 \u2192\u202f75.3.<\/li>\n<\/ul>\n<p>This demonstrates strong emergent reasoning performance at scale.<\/p>\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large is-resized\"><img fetchpriority=\"high\" decoding=\"async\" width=\"1024\" height=\"511\" data-attachment-id=\"72773\" data-permalink=\"https:\/\/www.marktechpost.com\/2025\/07\/19\/nvidia-ai-releases-openreasoning-nemotron-a-suite-of-reasoning-enhanced-llms-distilled-from-deepseek-r1-0528\/screenshot-2025-07-19-at-9-35-49-pm-2\/\" data-orig-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2025\/07\/Screenshot-2025-07-19-at-9.35.49-PM-1.png\" data-orig-size=\"1472,734\" data-comments-opened=\"1\" data-image-meta='{\"aperture\":\"0\",\"credit\":\"\",\"camera\":\"\",\"caption\":\"\",\"created_timestamp\":\"0\",\"copyright\":\"\",\"focal_length\":\"0\",\"iso\":\"0\",\"shutter_speed\":\"0\",\"title\":\"\",\"orientation\":\"0\"}' data-image-title=\"Screenshot 2025-07-19 at 9.35.49\u202fPM\" data-image-description=\"\" data-image-caption=\"\" data-medium-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2025\/07\/Screenshot-2025-07-19-at-9.35.49-PM-1-300x150.png\" data-large-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2025\/07\/Screenshot-2025-07-19-at-9.35.49-PM-1-1024x511.png\" src=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2025\/07\/Screenshot-2025-07-19-at-9.35.49-PM-1-1024x511.png\" alt=\"\" class=\"wp-image-72773\" \/><\/figure>\n<\/div>\n<h3 class=\"wp-block-heading\"><strong>Training Data and Reasoning Specialization<\/strong><\/h3>\n<p>The training corpus is a <strong>distilled, high-quality subset<\/strong> of the DeepSeek R1 0528 dataset. Key features include:<\/p>\n<ul class=\"wp-block-list\">\n<li><strong>Heavily curated reasoning data<\/strong> from math, science, and CS disciplines.<\/li>\n<li><strong>Prompt-engineered fine-tuning<\/strong> designed to reinforce multi-step thought chains.<\/li>\n<li>Emphasis on <strong>logical consistency, constraint satisfaction<\/strong>, and <strong>symbolic reasoning<\/strong>.<\/li>\n<\/ul>\n<p>This deliberate curation ensures strong alignment with real-world reasoning problems found in both academia and applied <a href=\"https:\/\/www.marktechpost.com\/2025\/01\/14\/what-is-machine-learning-ml\/\" target=\"_blank\">ML<\/a> domains.<\/p>\n<h3 class=\"wp-block-heading\"><strong>Open and Ecosystem Integration<\/strong><\/h3>\n<p>All four OpenReasoning-Nemotron models are released under an <strong>open and commercially permissive license<\/strong>, with model cards, evaluation scripts, and inference-ready weights available on Hugging Face:<\/p>\n<ul class=\"wp-block-list\">\n<li><a class=\"\" href=\"https:\/\/huggingface.co\/nvidia\/OpenReasoning-Nemotron-1.5B\">OpenReasoning-Nemotron-1.5B<\/a><\/li>\n<li><a class=\"\" href=\"https:\/\/huggingface.co\/nvidia\/OpenReasoning-Nemotron-7B\">OpenReasoning-Nemotron-7B<\/a><\/li>\n<li><a class=\"\" href=\"https:\/\/huggingface.co\/nvidia\/OpenReasoning-Nemotron-14B\">OpenReasoning-Nemotron-14B<\/a><\/li>\n<li><a class=\"\" href=\"https:\/\/huggingface.co\/nvidia\/OpenReasoning-Nemotron-32B\">OpenReasoning-Nemotron-32B<\/a><\/li>\n<\/ul>\n<p>These models are designed to plug into the <strong>NVIDIA NeMo framework<\/strong>, and support <strong>TensorRT-LLM<\/strong>, <strong>ONNX<\/strong>, and <strong>Hugging Face Transformers<\/strong> toolchains, facilitating rapid deployment in production and research settings.<\/p>\n<h3 class=\"wp-block-heading\"><strong>Key Use Cases<\/strong><\/h3>\n<ul class=\"wp-block-list\">\n<li><strong>Math tutors and theorem solvers<\/strong><\/li>\n<li><strong>Scientific QA agents and medical reasoning systems<\/strong><\/li>\n<li><strong>Code generation and debugging assistants<\/strong><\/li>\n<li><strong>Chain-of-thought multi-hop question answering<\/strong><\/li>\n<li><strong>Synthetic data generation for structured domains<\/strong><\/li>\n<\/ul>\n<h3 class=\"wp-block-heading\"><strong>Conclusion<\/strong><\/h3>\n<p>NVIDIA\u2019s OpenReasoning-Nemotron models offer a pragmatic, open-source path toward <strong>scaling reasoning ability without frontier-scale compute costs<\/strong>. By distilling from the 671B DeepSeek R1 and targeting high-leverage reasoning domains, these models deliver a powerful balance of <strong>accuracy, efficiency, and accessibility<\/strong>.<\/p>\n<p>For developers, researchers, and enterprises working on logic-intensive AI applications, OpenReasoning-Nemotron provides a compelling foundation\u2014free from the trade-offs that often accompany proprietary or overgeneralized models.<\/p>\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n<h3 class=\"wp-block-heading\"><strong><img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/16.0.1\/72x72\/1f50d.png\" alt=\"\ud83d\udd0d\" class=\"wp-smiley\" \/> Frequently Asked Questions (FAQs)<\/strong><\/h3>\n<p><strong>Q1. What benchmarks are supported?<\/strong><br \/>GPQA, MMLU-PRO, HLE, LiveCodeBench, SciCode, AIME 2024\/25, HMMT Feb 2025 (pass@1).<\/p>\n<p><strong>Q2. How much data was used?<\/strong><br \/>A distillation corpus of <strong>5 million reasoning log examples<\/strong> across domains, generated by DeepSeek\u2011R1\u20110528.<\/p>\n<p><strong>Q3. Is reinforcement learning used?<\/strong><br \/>No\u2014models are trained purely via SFT, preserving efficiency while enabling future RL research.<\/p>\n<p><strong>Q4. Can I scale reasoning with GenSelect?<\/strong><br \/>Yes. Using GenSelect significantly boosts performance\u201432B jumps from 73.8 to 96.7 on HMMT with 64 candidates.<\/p>\n<hr class=\"wp-block-separator has-alpha-channel-opacity is-style-wide\" \/>\n<p>Check out the<strong>\u00a0<a href=\"https:\/\/huggingface.co\/blog\/nvidia\/openreasoning-nemotron?linkId=100000374186136\" target=\"_blank\" rel=\"noreferrer noopener\">Technical details<\/a>.<\/strong>\u00a0All credit for this research goes to the researchers of this project.<\/p>\n<p><strong>Sponsorship Opportunity:<\/strong>\u00a0Reach the most influential AI developers in US and Europe. 1M+ monthly readers, 500K+ community builders, infinite possibilities.\u00a0<strong><a href=\"https:\/\/promotion.marktechpost.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">[Explore Sponsorship]<\/a><\/strong><\/p>\n<p>The post <a href=\"https:\/\/www.marktechpost.com\/2025\/07\/19\/nvidia-ai-releases-openreasoning-nemotron-a-suite-of-reasoning-enhanced-llms-distilled-from-deepseek-r1-0528\/\">NVIDIA AI Releases OpenReasoning-Nemotron: A Suite of Reasoning-Enhanced LLMs Distilled from DeepSeek R1 0528<\/a> appeared first on <a href=\"https:\/\/www.marktechpost.com\/\">MarkTechPost<\/a>.<\/p>","protected":false},"excerpt":{"rendered":"<p>NVIDIA AI has introduced OpenReasoning-Nemotron, a family of large language models (LLMs) designed to excel in complex reasoning tasks across mathematics, science, and code. This model suite\u2014comprising 1.5B, 7B, 14B, and 32B parameter versions\u2014has been distilled from the 671B DeepSeek R1 0528 model, capturing its high-level reasoning capabilities in significantly smaller and more efficient models. The release positions NVIDIA as a leading contributor to the open-source LLM ecosystem, delivering models that push state-of-the-art (SOTA) performance while remaining commercially permissive and widely accessible via Hugging Face. Model Overview and Architecture Distillation from DeepSeek R1 0528 (671B) At the heart of OpenReasoning-Nemotron lies a distillation strategy that transfers reasoning ability from DeepSeek R1\u2014a massive 671B parameter model\u2014into smaller architectures. The process prioritizes reasoning generalization over raw token prediction, enabling compact models to perform effectively on structured, high-cognition tasks. The distillation dataset emphasizes mathematics, science, and programming languages, aligning model capabilities with key reasoning domains. Model Variants and Specs Model Name Parameters Intended Use Hugging Face Page OpenReasoning-Nemotron-1.5B 1.5B Entry-level reasoning and inference Link OpenReasoning-Nemotron-7B 7B Mid-scale reasoning, good for code\/math Link OpenReasoning-Nemotron-14B 14B Advanced reasoning capabilities Link OpenReasoning-Nemotron-32B 32B Near frontier-model performance in logic-intensive tasks Link All models are compatible with transformer architectures, support FP16\/INT8 quantization, and are optimized for NVIDIA GPUs and NeMo frameworks. Performance Benchmarks These models set new state-of-the-art pass@1 scores for their size class across multiple reasoning benchmarks: Model GPQA MMLU\u2011PRO HLE LiveCodeBench SciCode AIME24 AIME25 HMMT\u202fFeb\u202f2025 1.5B 31.6 47.5 5.5 28.6 2.2 55.5 45.6 31.5 7B 61.1 71.9 8.3 63.3 16.2 84.7 78.2 63.5 14B 71.6 77.5 10.1 67.8 23.5 87.8 82.0 71.2 32B 73.1 80.0 11.9 70.2 28.5 89.2 84.0 73.8 All quoted scores are pass@1 without GenSelect. GenSelect (Heavy Mode) Using Generative Selection with 64 candidates (\u201cGenSelect\u201d), performance further improves, especially at 32B: 32B achieves: AIME24\u202f89.2 \u2192\u202f93.3, AIME25\u202f84.0 \u2192\u202f90.0, HMMT\u202f73.8 \u2192\u202f96.7, LiveCodeBench\u202f70.2 \u2192\u202f75.3. This demonstrates strong emergent reasoning performance at scale. Training Data and Reasoning Specialization The training corpus is a distilled, high-quality subset of the DeepSeek R1 0528 dataset. Key features include: Heavily curated reasoning data from math, science, and CS disciplines. Prompt-engineered fine-tuning designed to reinforce multi-step thought chains. Emphasis on logical consistency, constraint satisfaction, and symbolic reasoning. This deliberate curation ensures strong alignment with real-world reasoning problems found in both academia and applied ML domains. Open and Ecosystem Integration All four OpenReasoning-Nemotron models are released under an open and commercially permissive license, with model cards, evaluation scripts, and inference-ready weights available on Hugging Face: OpenReasoning-Nemotron-1.5B OpenReasoning-Nemotron-7B OpenReasoning-Nemotron-14B OpenReasoning-Nemotron-32B These models are designed to plug into the NVIDIA NeMo framework, and support TensorRT-LLM, ONNX, and Hugging Face Transformers toolchains, facilitating rapid deployment in production and research settings. Key Use Cases Math tutors and theorem solvers Scientific QA agents and medical reasoning systems Code generation and debugging assistants Chain-of-thought multi-hop question answering Synthetic data generation for structured domains Conclusion NVIDIA\u2019s OpenReasoning-Nemotron models offer a pragmatic, open-source path toward scaling reasoning ability without frontier-scale compute costs. By distilling from the 671B DeepSeek R1 and targeting high-leverage reasoning domains, these models deliver a powerful balance of accuracy, efficiency, and accessibility. For developers, researchers, and enterprises working on logic-intensive AI applications, OpenReasoning-Nemotron provides a compelling foundation\u2014free from the trade-offs that often accompany proprietary or overgeneralized models. Frequently Asked Questions (FAQs) Q1. What benchmarks are supported?GPQA, MMLU-PRO, HLE, LiveCodeBench, SciCode, AIME 2024\/25, HMMT Feb 2025 (pass@1). Q2. How much data was used?A distillation corpus of 5 million reasoning log examples across domains, generated by DeepSeek\u2011R1\u20110528. Q3. Is reinforcement learning used?No\u2014models are trained purely via SFT, preserving efficiency while enabling future RL research. Q4. Can I scale reasoning with GenSelect?Yes. Using GenSelect significantly boosts performance\u201432B jumps from 73.8 to 96.7 on HMMT with 64 candidates. Check out the\u00a0Technical details.\u00a0All credit for this research goes to the researchers of this project. Sponsorship Opportunity:\u00a0Reach the most influential AI developers in US and Europe. 1M+ monthly readers, 500K+ community builders, infinite possibilities.\u00a0[Explore Sponsorship] The post NVIDIA AI Releases OpenReasoning-Nemotron: A Suite of Reasoning-Enhanced LLMs Distilled from DeepSeek R1 0528 appeared first on MarkTechPost.<\/p>","protected":false},"author":2,"featured_media":26233,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"pmpro_default_level":"","site-sidebar-layout":"default","site-content-layout":"","ast-site-content-layout":"","site-content-style":"default","site-sidebar-style":"default","ast-global-header-display":"","ast-banner-title-visibility":"","ast-main-header-display":"","ast-hfb-above-header-display":"","ast-hfb-below-header-display":"","ast-hfb-mobile-header-display":"","site-post-title":"","ast-breadcrumbs-content":"","ast-featured-img":"","footer-sml-layout":"","theme-transparent-header-meta":"","adv-header-id-meta":"","stick-header-meta":"","header-above-stick-meta":"","header-main-stick-meta":"","header-below-stick-meta":"","astra-migrate-meta-layouts":"default","ast-page-background-enabled":"default","ast-page-background-meta":{"desktop":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"ast-content-background-meta":{"desktop":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"_pvb_checkbox_block_on_post":false,"footnotes":""},"categories":[52,5,7,1],"tags":[],"class_list":["post-26232","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-club","category-committee","category-news","category-uncategorized","pmpro-has-access"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v25.3 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>NVIDIA AI Releases OpenReasoning-Nemotron: A Suite of Reasoning-Enhanced LLMs Distilled from DeepSeek R1 0528 - YouZum<\/title>\n<meta name=\"description\" content=\"\u0e01\u0e34\u0e08\u0e01\u0e23\u0e23\u0e21\u0e40\u0e01\u0e35\u0e48\u0e22\u0e27\u0e01\u0e31\u0e1a\u0e42\u0e14\u0e23\u0e19\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/youzum.net\/zh\/nvidia-ai-releases-openreasoning-nemotron-a-suite-of-reasoning-enhanced-llms-distilled-from-deepseek-r1-0528\/\" \/>\n<meta property=\"og:locale\" content=\"zh_CN\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"NVIDIA AI Releases OpenReasoning-Nemotron: A Suite of Reasoning-Enhanced LLMs Distilled from DeepSeek R1 0528 - YouZum\" \/>\n<meta property=\"og:description\" content=\"\u0e01\u0e34\u0e08\u0e01\u0e23\u0e23\u0e21\u0e40\u0e01\u0e35\u0e48\u0e22\u0e27\u0e01\u0e31\u0e1a\u0e42\u0e14\u0e23\u0e19\" \/>\n<meta property=\"og:url\" content=\"https:\/\/youzum.net\/zh\/nvidia-ai-releases-openreasoning-nemotron-a-suite-of-reasoning-enhanced-llms-distilled-from-deepseek-r1-0528\/\" \/>\n<meta property=\"og:site_name\" content=\"YouZum\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/DroneAssociationTH\/\" \/>\n<meta property=\"article:published_time\" content=\"2025-07-20T05:39:59+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/s.w.org\/images\/core\/emoji\/16.0.1\/72x72\/2705.png\" \/>\n<meta name=\"author\" content=\"admin NU\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"\u4f5c\u8005\" \/>\n\t<meta name=\"twitter:data1\" content=\"admin NU\" \/>\n\t<meta name=\"twitter:label2\" content=\"\u9884\u8ba1\u9605\u8bfb\u65f6\u95f4\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 \u5206\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/youzum.net\/nvidia-ai-releases-openreasoning-nemotron-a-suite-of-reasoning-enhanced-llms-distilled-from-deepseek-r1-0528\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/youzum.net\/nvidia-ai-releases-openreasoning-nemotron-a-suite-of-reasoning-enhanced-llms-distilled-from-deepseek-r1-0528\/\"},\"author\":{\"name\":\"admin NU\",\"@id\":\"https:\/\/yousum.gpucore.co\/#\/schema\/person\/97fa48242daf3908e4d9a5f26f4a059c\"},\"headline\":\"NVIDIA AI Releases OpenReasoning-Nemotron: A Suite of Reasoning-Enhanced LLMs Distilled from DeepSeek R1 0528\",\"datePublished\":\"2025-07-20T05:39:59+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/youzum.net\/nvidia-ai-releases-openreasoning-nemotron-a-suite-of-reasoning-enhanced-llms-distilled-from-deepseek-r1-0528\/\"},\"wordCount\":666,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/yousum.gpucore.co\/#organization\"},\"image\":{\"@id\":\"https:\/\/youzum.net\/nvidia-ai-releases-openreasoning-nemotron-a-suite-of-reasoning-enhanced-llms-distilled-from-deepseek-r1-0528\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/youzum.net\/wp-content\/uploads\/2025\/07\/Screenshot-2025-07-19-at-9.35.49-PM-1-1024x511-0TBi84.png\",\"articleSection\":[\"AI\",\"Committee\",\"News\",\"Uncategorized\"],\"inLanguage\":\"zh-Hans\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/youzum.net\/nvidia-ai-releases-openreasoning-nemotron-a-suite-of-reasoning-enhanced-llms-distilled-from-deepseek-r1-0528\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/youzum.net\/nvidia-ai-releases-openreasoning-nemotron-a-suite-of-reasoning-enhanced-llms-distilled-from-deepseek-r1-0528\/\",\"url\":\"https:\/\/youzum.net\/nvidia-ai-releases-openreasoning-nemotron-a-suite-of-reasoning-enhanced-llms-distilled-from-deepseek-r1-0528\/\",\"name\":\"NVIDIA AI Releases OpenReasoning-Nemotron: A Suite of Reasoning-Enhanced LLMs Distilled from DeepSeek R1 0528 - YouZum\",\"isPartOf\":{\"@id\":\"https:\/\/yousum.gpucore.co\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/youzum.net\/nvidia-ai-releases-openreasoning-nemotron-a-suite-of-reasoning-enhanced-llms-distilled-from-deepseek-r1-0528\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/youzum.net\/nvidia-ai-releases-openreasoning-nemotron-a-suite-of-reasoning-enhanced-llms-distilled-from-deepseek-r1-0528\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/youzum.net\/wp-content\/uploads\/2025\/07\/Screenshot-2025-07-19-at-9.35.49-PM-1-1024x511-0TBi84.png\",\"datePublished\":\"2025-07-20T05:39:59+00:00\",\"description\":\"\u0e01\u0e34\u0e08\u0e01\u0e23\u0e23\u0e21\u0e40\u0e01\u0e35\u0e48\u0e22\u0e27\u0e01\u0e31\u0e1a\u0e42\u0e14\u0e23\u0e19\",\"breadcrumb\":{\"@id\":\"https:\/\/youzum.net\/nvidia-ai-releases-openreasoning-nemotron-a-suite-of-reasoning-enhanced-llms-distilled-from-deepseek-r1-0528\/#breadcrumb\"},\"inLanguage\":\"zh-Hans\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/youzum.net\/nvidia-ai-releases-openreasoning-nemotron-a-suite-of-reasoning-enhanced-llms-distilled-from-deepseek-r1-0528\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"zh-Hans\",\"@id\":\"https:\/\/youzum.net\/nvidia-ai-releases-openreasoning-nemotron-a-suite-of-reasoning-enhanced-llms-distilled-from-deepseek-r1-0528\/#primaryimage\",\"url\":\"https:\/\/youzum.net\/wp-content\/uploads\/2025\/07\/Screenshot-2025-07-19-at-9.35.49-PM-1-1024x511-0TBi84.png\",\"contentUrl\":\"https:\/\/youzum.net\/wp-content\/uploads\/2025\/07\/Screenshot-2025-07-19-at-9.35.49-PM-1-1024x511-0TBi84.png\",\"width\":1024,\"height\":511},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/youzum.net\/nvidia-ai-releases-openreasoning-nemotron-a-suite-of-reasoning-enhanced-llms-distilled-from-deepseek-r1-0528\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/youzum.net\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"NVIDIA AI Releases OpenReasoning-Nemotron: A Suite of Reasoning-Enhanced LLMs Distilled from DeepSeek R1 0528\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/yousum.gpucore.co\/#website\",\"url\":\"https:\/\/yousum.gpucore.co\/\",\"name\":\"YouSum\",\"description\":\"\",\"publisher\":{\"@id\":\"https:\/\/yousum.gpucore.co\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/yousum.gpucore.co\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"zh-Hans\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/yousum.gpucore.co\/#organization\",\"name\":\"Drone Association Thailand\",\"url\":\"https:\/\/yousum.gpucore.co\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"zh-Hans\",\"@id\":\"https:\/\/yousum.gpucore.co\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/youzum.net\/wp-content\/uploads\/2024\/11\/tranparent-logo.png\",\"contentUrl\":\"https:\/\/youzum.net\/wp-content\/uploads\/2024\/11\/tranparent-logo.png\",\"width\":300,\"height\":300,\"caption\":\"Drone Association Thailand\"},\"image\":{\"@id\":\"https:\/\/yousum.gpucore.co\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.facebook.com\/DroneAssociationTH\/\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/yousum.gpucore.co\/#\/schema\/person\/97fa48242daf3908e4d9a5f26f4a059c\",\"name\":\"admin NU\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"zh-Hans\",\"@id\":\"https:\/\/yousum.gpucore.co\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/youzum.net\/wp-content\/uploads\/avatars\/2\/1746849356-bpfull.png\",\"contentUrl\":\"https:\/\/youzum.net\/wp-content\/uploads\/avatars\/2\/1746849356-bpfull.png\",\"caption\":\"admin NU\"},\"url\":\"https:\/\/youzum.net\/zh\/members\/adminnu\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"NVIDIA AI Releases OpenReasoning-Nemotron: A Suite of Reasoning-Enhanced LLMs Distilled from DeepSeek R1 0528 - YouZum","description":"\u0e01\u0e34\u0e08\u0e01\u0e23\u0e23\u0e21\u0e40\u0e01\u0e35\u0e48\u0e22\u0e27\u0e01\u0e31\u0e1a\u0e42\u0e14\u0e23\u0e19","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/youzum.net\/zh\/nvidia-ai-releases-openreasoning-nemotron-a-suite-of-reasoning-enhanced-llms-distilled-from-deepseek-r1-0528\/","og_locale":"zh_CN","og_type":"article","og_title":"NVIDIA AI Releases OpenReasoning-Nemotron: A Suite of Reasoning-Enhanced LLMs Distilled from DeepSeek R1 0528 - YouZum","og_description":"\u0e01\u0e34\u0e08\u0e01\u0e23\u0e23\u0e21\u0e40\u0e01\u0e35\u0e48\u0e22\u0e27\u0e01\u0e31\u0e1a\u0e42\u0e14\u0e23\u0e19","og_url":"https:\/\/youzum.net\/zh\/nvidia-ai-releases-openreasoning-nemotron-a-suite-of-reasoning-enhanced-llms-distilled-from-deepseek-r1-0528\/","og_site_name":"YouZum","article_publisher":"https:\/\/www.facebook.com\/DroneAssociationTH\/","article_published_time":"2025-07-20T05:39:59+00:00","og_image":[{"url":"https:\/\/s.w.org\/images\/core\/emoji\/16.0.1\/72x72\/2705.png","type":"","width":"","height":""}],"author":"admin NU","twitter_card":"summary_large_image","twitter_misc":{"\u4f5c\u8005":"admin NU","\u9884\u8ba1\u9605\u8bfb\u65f6\u95f4":"3 \u5206"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/youzum.net\/nvidia-ai-releases-openreasoning-nemotron-a-suite-of-reasoning-enhanced-llms-distilled-from-deepseek-r1-0528\/#article","isPartOf":{"@id":"https:\/\/youzum.net\/nvidia-ai-releases-openreasoning-nemotron-a-suite-of-reasoning-enhanced-llms-distilled-from-deepseek-r1-0528\/"},"author":{"name":"admin NU","@id":"https:\/\/yousum.gpucore.co\/#\/schema\/person\/97fa48242daf3908e4d9a5f26f4a059c"},"headline":"NVIDIA AI Releases OpenReasoning-Nemotron: A Suite of Reasoning-Enhanced LLMs Distilled from DeepSeek R1 0528","datePublished":"2025-07-20T05:39:59+00:00","mainEntityOfPage":{"@id":"https:\/\/youzum.net\/nvidia-ai-releases-openreasoning-nemotron-a-suite-of-reasoning-enhanced-llms-distilled-from-deepseek-r1-0528\/"},"wordCount":666,"commentCount":0,"publisher":{"@id":"https:\/\/yousum.gpucore.co\/#organization"},"image":{"@id":"https:\/\/youzum.net\/nvidia-ai-releases-openreasoning-nemotron-a-suite-of-reasoning-enhanced-llms-distilled-from-deepseek-r1-0528\/#primaryimage"},"thumbnailUrl":"https:\/\/youzum.net\/wp-content\/uploads\/2025\/07\/Screenshot-2025-07-19-at-9.35.49-PM-1-1024x511-0TBi84.png","articleSection":["AI","Committee","News","Uncategorized"],"inLanguage":"zh-Hans","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/youzum.net\/nvidia-ai-releases-openreasoning-nemotron-a-suite-of-reasoning-enhanced-llms-distilled-from-deepseek-r1-0528\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/youzum.net\/nvidia-ai-releases-openreasoning-nemotron-a-suite-of-reasoning-enhanced-llms-distilled-from-deepseek-r1-0528\/","url":"https:\/\/youzum.net\/nvidia-ai-releases-openreasoning-nemotron-a-suite-of-reasoning-enhanced-llms-distilled-from-deepseek-r1-0528\/","name":"NVIDIA AI Releases OpenReasoning-Nemotron: A Suite of Reasoning-Enhanced LLMs Distilled from DeepSeek R1 0528 - YouZum","isPartOf":{"@id":"https:\/\/yousum.gpucore.co\/#website"},"primaryImageOfPage":{"@id":"https:\/\/youzum.net\/nvidia-ai-releases-openreasoning-nemotron-a-suite-of-reasoning-enhanced-llms-distilled-from-deepseek-r1-0528\/#primaryimage"},"image":{"@id":"https:\/\/youzum.net\/nvidia-ai-releases-openreasoning-nemotron-a-suite-of-reasoning-enhanced-llms-distilled-from-deepseek-r1-0528\/#primaryimage"},"thumbnailUrl":"https:\/\/youzum.net\/wp-content\/uploads\/2025\/07\/Screenshot-2025-07-19-at-9.35.49-PM-1-1024x511-0TBi84.png","datePublished":"2025-07-20T05:39:59+00:00","description":"\u0e01\u0e34\u0e08\u0e01\u0e23\u0e23\u0e21\u0e40\u0e01\u0e35\u0e48\u0e22\u0e27\u0e01\u0e31\u0e1a\u0e42\u0e14\u0e23\u0e19","breadcrumb":{"@id":"https:\/\/youzum.net\/nvidia-ai-releases-openreasoning-nemotron-a-suite-of-reasoning-enhanced-llms-distilled-from-deepseek-r1-0528\/#breadcrumb"},"inLanguage":"zh-Hans","potentialAction":[{"@type":"ReadAction","target":["https:\/\/youzum.net\/nvidia-ai-releases-openreasoning-nemotron-a-suite-of-reasoning-enhanced-llms-distilled-from-deepseek-r1-0528\/"]}]},{"@type":"ImageObject","inLanguage":"zh-Hans","@id":"https:\/\/youzum.net\/nvidia-ai-releases-openreasoning-nemotron-a-suite-of-reasoning-enhanced-llms-distilled-from-deepseek-r1-0528\/#primaryimage","url":"https:\/\/youzum.net\/wp-content\/uploads\/2025\/07\/Screenshot-2025-07-19-at-9.35.49-PM-1-1024x511-0TBi84.png","contentUrl":"https:\/\/youzum.net\/wp-content\/uploads\/2025\/07\/Screenshot-2025-07-19-at-9.35.49-PM-1-1024x511-0TBi84.png","width":1024,"height":511},{"@type":"BreadcrumbList","@id":"https:\/\/youzum.net\/nvidia-ai-releases-openreasoning-nemotron-a-suite-of-reasoning-enhanced-llms-distilled-from-deepseek-r1-0528\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/youzum.net\/"},{"@type":"ListItem","position":2,"name":"NVIDIA AI Releases OpenReasoning-Nemotron: A Suite of Reasoning-Enhanced LLMs Distilled from DeepSeek R1 0528"}]},{"@type":"WebSite","@id":"https:\/\/yousum.gpucore.co\/#website","url":"https:\/\/yousum.gpucore.co\/","name":"YouSum","description":"","publisher":{"@id":"https:\/\/yousum.gpucore.co\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/yousum.gpucore.co\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"zh-Hans"},{"@type":"Organization","@id":"https:\/\/yousum.gpucore.co\/#organization","name":"Drone Association Thailand","url":"https:\/\/yousum.gpucore.co\/","logo":{"@type":"ImageObject","inLanguage":"zh-Hans","@id":"https:\/\/yousum.gpucore.co\/#\/schema\/logo\/image\/","url":"https:\/\/youzum.net\/wp-content\/uploads\/2024\/11\/tranparent-logo.png","contentUrl":"https:\/\/youzum.net\/wp-content\/uploads\/2024\/11\/tranparent-logo.png","width":300,"height":300,"caption":"Drone Association Thailand"},"image":{"@id":"https:\/\/yousum.gpucore.co\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/DroneAssociationTH\/"]},{"@type":"Person","@id":"https:\/\/yousum.gpucore.co\/#\/schema\/person\/97fa48242daf3908e4d9a5f26f4a059c","name":"admin NU","image":{"@type":"ImageObject","inLanguage":"zh-Hans","@id":"https:\/\/yousum.gpucore.co\/#\/schema\/person\/image\/","url":"https:\/\/youzum.net\/wp-content\/uploads\/avatars\/2\/1746849356-bpfull.png","contentUrl":"https:\/\/youzum.net\/wp-content\/uploads\/avatars\/2\/1746849356-bpfull.png","caption":"admin NU"},"url":"https:\/\/youzum.net\/zh\/members\/adminnu\/"}]}},"rttpg_featured_image_url":{"full":["https:\/\/youzum.net\/wp-content\/uploads\/2025\/07\/Screenshot-2025-07-19-at-9.35.49-PM-1-1024x511-0TBi84.png",1024,511,false],"landscape":["https:\/\/youzum.net\/wp-content\/uploads\/2025\/07\/Screenshot-2025-07-19-at-9.35.49-PM-1-1024x511-0TBi84.png",1024,511,false],"portraits":["https:\/\/youzum.net\/wp-content\/uploads\/2025\/07\/Screenshot-2025-07-19-at-9.35.49-PM-1-1024x511-0TBi84.png",1024,511,false],"thumbnail":["https:\/\/youzum.net\/wp-content\/uploads\/2025\/07\/Screenshot-2025-07-19-at-9.35.49-PM-1-1024x511-0TBi84-150x150.png",150,150,true],"medium":["https:\/\/youzum.net\/wp-content\/uploads\/2025\/07\/Screenshot-2025-07-19-at-9.35.49-PM-1-1024x511-0TBi84-300x150.png",300,150,true],"large":["https:\/\/youzum.net\/wp-content\/uploads\/2025\/07\/Screenshot-2025-07-19-at-9.35.49-PM-1-1024x511-0TBi84.png",1024,511,false],"1536x1536":["https:\/\/youzum.net\/wp-content\/uploads\/2025\/07\/Screenshot-2025-07-19-at-9.35.49-PM-1-1024x511-0TBi84.png",1024,511,false],"2048x2048":["https:\/\/youzum.net\/wp-content\/uploads\/2025\/07\/Screenshot-2025-07-19-at-9.35.49-PM-1-1024x511-0TBi84.png",1024,511,false],"trp-custom-language-flag":["https:\/\/youzum.net\/wp-content\/uploads\/2025\/07\/Screenshot-2025-07-19-at-9.35.49-PM-1-1024x511-0TBi84-18x9.png",18,9,true],"woocommerce_thumbnail":["https:\/\/youzum.net\/wp-content\/uploads\/2025\/07\/Screenshot-2025-07-19-at-9.35.49-PM-1-1024x511-0TBi84-300x300.png",300,300,true],"woocommerce_single":["https:\/\/youzum.net\/wp-content\/uploads\/2025\/07\/Screenshot-2025-07-19-at-9.35.49-PM-1-1024x511-0TBi84-600x299.png",600,299,true],"woocommerce_gallery_thumbnail":["https:\/\/youzum.net\/wp-content\/uploads\/2025\/07\/Screenshot-2025-07-19-at-9.35.49-PM-1-1024x511-0TBi84-100x100.png",100,100,true]},"rttpg_author":{"display_name":"admin NU","author_link":"https:\/\/youzum.net\/zh\/members\/adminnu\/"},"rttpg_comment":0,"rttpg_category":"<a href=\"https:\/\/youzum.net\/zh\/category\/ai-club\/\" rel=\"category tag\">AI<\/a> <a href=\"https:\/\/youzum.net\/zh\/category\/committee\/\" rel=\"category tag\">Committee<\/a> <a href=\"https:\/\/youzum.net\/zh\/category\/news\/\" rel=\"category tag\">News<\/a> <a href=\"https:\/\/youzum.net\/zh\/category\/uncategorized\/\" rel=\"category tag\">Uncategorized<\/a>","rttpg_excerpt":"NVIDIA AI has introduced OpenReasoning-Nemotron, a family of large language models (LLMs) designed to excel in complex reasoning tasks across mathematics, science, and code. This model suite\u2014comprising 1.5B, 7B, 14B, and 32B parameter versions\u2014has been distilled from the 671B DeepSeek R1 0528 model, capturing its high-level reasoning capabilities in significantly smaller and more efficient models.&hellip;","_links":{"self":[{"href":"https:\/\/youzum.net\/zh\/wp-json\/wp\/v2\/posts\/26232","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/youzum.net\/zh\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/youzum.net\/zh\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/youzum.net\/zh\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/youzum.net\/zh\/wp-json\/wp\/v2\/comments?post=26232"}],"version-history":[{"count":0,"href":"https:\/\/youzum.net\/zh\/wp-json\/wp\/v2\/posts\/26232\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/youzum.net\/zh\/wp-json\/wp\/v2\/media\/26233"}],"wp:attachment":[{"href":"https:\/\/youzum.net\/zh\/wp-json\/wp\/v2\/media?parent=26232"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/youzum.net\/zh\/wp-json\/wp\/v2\/categories?post=26232"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/youzum.net\/zh\/wp-json\/wp\/v2\/tags?post=26232"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}