{"id":72707,"date":"2026-02-21T11:51:30","date_gmt":"2026-02-21T11:51:30","guid":{"rendered":"https:\/\/youzum.net\/nvidia-releases-dreamdojo-an-open-source-robot-world-model-trained-on-44711-hours-of-real-world-human-video-data\/"},"modified":"2026-02-21T11:51:30","modified_gmt":"2026-02-21T11:51:30","slug":"nvidia-releases-dreamdojo-an-open-source-robot-world-model-trained-on-44711-hours-of-real-world-human-video-data","status":"publish","type":"post","link":"https:\/\/youzum.net\/de\/nvidia-releases-dreamdojo-an-open-source-robot-world-model-trained-on-44711-hours-of-real-world-human-video-data\/","title":{"rendered":"NVIDIA Releases DreamDojo: An Open-Source Robot World Model Trained on 44,711 Hours of Real-World Human Video Data"},"content":{"rendered":"<p>Building simulators for robots has been a long-term challenge. Traditional engines require manual coding of physics and perfect 3D models. NVIDIA is changing this with <strong>DreamDojo<\/strong>, a fully open-source, generalizable robot world model. Instead of using a physics engine, DreamDojo \u2018dreams\u2019 the results of robot actions directly in pixels. 
<\/p>\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full is-resized\"><img fetchpriority=\"high\" decoding=\"async\" width=\"1364\" height=\"766\" data-attachment-id=\"78001\" data-permalink=\"https:\/\/www.marktechpost.com\/2026\/02\/20\/nvidia-releases-dreamdojo-an-open-source-robot-world-model-trained-on-44711-hours-of-real-world-human-video-data\/screenshot-2026-02-20-at-12-22-37-pm-2\/\" data-orig-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/02\/Screenshot-2026-02-20-at-12.22.37-PM-1.png\" data-orig-size=\"1364,766\" data-comments-opened=\"1\" data-image-meta='{\"aperture\":\"0\",\"credit\":\"\",\"camera\":\"\",\"caption\":\"\",\"created_timestamp\":\"0\",\"copyright\":\"\",\"focal_length\":\"0\",\"iso\":\"0\",\"shutter_speed\":\"0\",\"title\":\"\",\"orientation\":\"0\"}' data-image-title=\"Screenshot 2026-02-20 at 12.22.37\u202fPM\" data-image-description=\"\" data-image-caption=\"\" data-medium-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/02\/Screenshot-2026-02-20-at-12.22.37-PM-1-300x168.png\" data-large-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/02\/Screenshot-2026-02-20-at-12.22.37-PM-1-1024x575.png\" src=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/02\/Screenshot-2026-02-20-at-12.22.37-PM-1.png\" alt=\"\" class=\"wp-image-78001\" \/><figcaption class=\"wp-element-caption\">https:\/\/arxiv.org\/pdf\/2602.06949<\/figcaption><\/figure>\n<\/div>\n<h3 class=\"wp-block-heading\"><strong>Scaling Robotics with 44k+ Hours of Human Experience<\/strong><\/h3>\n<p>The biggest hurdle for AI in robotics is data. Collecting robot-specific data is expensive and slow. DreamDojo solves this by learning from <strong>44k+ hours<\/strong> of egocentric human videos. 
This dataset, called <strong>DreamDojo-HV<\/strong>, is the largest of its kind for world model pretraining.<\/p>\n<ul class=\"wp-block-list\">\n<li>It features 6,015 unique tasks across 1M+ trajectories.<\/li>\n<li>The data covers 9,869 unique scenes and 43,237 unique objects. <\/li>\n<li>Pretraining used <strong>100,000 NVIDIA H100 GPU hours<\/strong> to build 2B and 14B model variants. <\/li>\n<\/ul>\n<p>Humans have already mastered complex physics, such as pouring liquids or folding clothes. DreamDojo uses this human data to give robots a \u2018common sense\u2019 understanding of how the world works. <\/p>\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full is-resized\"><img decoding=\"async\" width=\"1408\" height=\"890\" data-attachment-id=\"78003\" data-permalink=\"https:\/\/www.marktechpost.com\/2026\/02\/20\/nvidia-releases-dreamdojo-an-open-source-robot-world-model-trained-on-44711-hours-of-real-world-human-video-data\/screenshot-2026-02-20-at-12-23-16-pm-2\/\" data-orig-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/02\/Screenshot-2026-02-20-at-12.23.16-PM-1.png\" data-orig-size=\"1408,890\" data-comments-opened=\"1\" data-image-meta='{\"aperture\":\"0\",\"credit\":\"\",\"camera\":\"\",\"caption\":\"\",\"created_timestamp\":\"0\",\"copyright\":\"\",\"focal_length\":\"0\",\"iso\":\"0\",\"shutter_speed\":\"0\",\"title\":\"\",\"orientation\":\"0\"}' data-image-title=\"Screenshot 2026-02-20 at 12.23.16\u202fPM\" data-image-description=\"\" data-image-caption=\"\" data-medium-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/02\/Screenshot-2026-02-20-at-12.23.16-PM-1-300x190.png\" data-large-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/02\/Screenshot-2026-02-20-at-12.23.16-PM-1-1024x647.png\" src=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/02\/Screenshot-2026-02-20-at-12.23.16-PM-1.png\" alt=\"\" class=\"wp-image-78003\" \/><figcaption 
class=\"wp-element-caption\">https:\/\/arxiv.org\/pdf\/2602.06949<\/figcaption><\/figure>\n<\/div>\n<h3 class=\"wp-block-heading\"><strong>Bridging the Gap with Latent Actions<\/strong><\/h3>\n<p>Human videos do not have robot motor commands. To make these videos \u2018robot-readable,\u2019 NVIDIA\u2019s research team introduced <strong>continuous latent actions<\/strong>. This system uses a spatiotemporal Transformer VAE to extract actions directly from pixels.<\/p>\n<ul class=\"wp-block-list\">\n<li>The VAE encoder takes 2 consecutive frames and outputs a 32-dimensional latent vector. <\/li>\n<li>This vector represents the most critical motion between frames. <\/li>\n<li>The design creates an information bottleneck that disentangles action from visual context. <\/li>\n<li>This allows the model to learn physics from humans and apply it to different robot bodies. <\/li>\n<\/ul>\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full is-resized\"><img decoding=\"async\" width=\"1398\" height=\"754\" data-attachment-id=\"78005\" data-permalink=\"https:\/\/www.marktechpost.com\/2026\/02\/20\/nvidia-releases-dreamdojo-an-open-source-robot-world-model-trained-on-44711-hours-of-real-world-human-video-data\/screenshot-2026-02-20-at-12-23-49-pm-2\/\" data-orig-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/02\/Screenshot-2026-02-20-at-12.23.49-PM-1.png\" data-orig-size=\"1398,754\" data-comments-opened=\"1\" data-image-meta='{\"aperture\":\"0\",\"credit\":\"\",\"camera\":\"\",\"caption\":\"\",\"created_timestamp\":\"0\",\"copyright\":\"\",\"focal_length\":\"0\",\"iso\":\"0\",\"shutter_speed\":\"0\",\"title\":\"\",\"orientation\":\"0\"}' data-image-title=\"Screenshot 2026-02-20 at 12.23.49\u202fPM\" data-image-description=\"\" data-image-caption=\"\" data-medium-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/02\/Screenshot-2026-02-20-at-12.23.49-PM-1-300x162.png\" 
data-large-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/02\/Screenshot-2026-02-20-at-12.23.49-PM-1-1024x552.png\" src=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/02\/Screenshot-2026-02-20-at-12.23.49-PM-1.png\" alt=\"\" class=\"wp-image-78005\" \/><figcaption class=\"wp-element-caption\">https:\/\/arxiv.org\/pdf\/2602.06949<\/figcaption><\/figure>\n<\/div>\n<h3 class=\"wp-block-heading\"><strong>Better Physics through Architecture<\/strong><\/h3>\n<p>DreamDojo is based on the <strong>Cosmos-Predict2.5<\/strong> latent video diffusion model. It uses the <strong>WAN2.2 tokenizer<\/strong>, which has a temporal compression ratio of 4. <strong>The team improved the architecture with three key features:<\/strong><\/p>\n<ol start=\"1\" class=\"wp-block-list\">\n<li><strong>Relative Actions:<\/strong> The model uses joint deltas instead of absolute poses. This makes it easier for the model to generalize across different trajectories.<\/li>\n<li><strong>Chunked Action Injection:<\/strong> It injects 4 consecutive actions into each latent frame. This aligns the actions with the tokenizer\u2019s compression ratio and fixes causality confusion.<\/li>\n<li><strong>Temporal Consistency Loss:<\/strong> A new loss function matches predicted frame velocities to ground-truth transitions. This reduces visual artifacts and keeps objects physically consistent.<\/li>\n<\/ol>\n<h3 class=\"wp-block-heading\"><strong>Distillation for 10.81 FPS Real-Time Interaction<\/strong><\/h3>\n<p>A simulator is only useful if it is fast. Standard diffusion models require too many denoising steps for real-time use. The NVIDIA team used a <strong>Self Forcing<\/strong> distillation pipeline to solve this. <\/p>\n<ul class=\"wp-block-list\">\n<li>The distillation training was conducted on <strong>64 NVIDIA H100 GPUs<\/strong>. <\/li>\n<li>The \u2018student\u2019 model reduces denoising from 35 steps down to 4 steps. 
<\/li>\n<li>The final model achieves a real-time speed of <strong>10.81 FPS<\/strong>.<\/li>\n<li>It is stable for continuous rollouts of 60 seconds (600 frames). <\/li>\n<\/ul>\n<h3 class=\"wp-block-heading\"><strong>Unlocking Downstream Applications<\/strong><\/h3>\n<p>DreamDojo\u2019s speed and accuracy enable several advanced applications for AI engineers.<\/p>\n<h4 class=\"wp-block-heading\"><strong>1. Reliable Policy Evaluation<\/strong><\/h4>\n<p>Testing robots in the real world is risky. DreamDojo acts as a high-fidelity simulator for benchmarking. <\/p>\n<ul class=\"wp-block-list\">\n<li>Its simulated success rates show a Pearson correlation of \ud835\udc5f = 0.995 with real-world results.<\/li>\n<li>The Mean Maximum Rank Violation (MMRV) is only <strong>0.003<\/strong>.<\/li>\n<\/ul>\n<h4 class=\"wp-block-heading\"><strong>2. Model-Based Planning<\/strong><\/h4>\n<p>Robots can use DreamDojo to \u2018look ahead.\u2019 A robot can simulate multiple action sequences and pick the best one.<\/p>\n<ul class=\"wp-block-list\">\n<li>In a fruit-packing task, this improved real-world success rates by <strong>17%<\/strong>. <\/li>\n<li>Compared to random sampling, it provided a 2x increase in success. <\/li>\n<\/ul>\n<h4 class=\"wp-block-heading\"><strong>3. Live Teleoperation<\/strong><\/h4>\n<p>Developers can teleoperate virtual robots in real time. The NVIDIA team demonstrated this using a <strong>PICO VR controller<\/strong> and a local desktop with an <strong>NVIDIA RTX 5090<\/strong>. 
This allows for safe and rapid data collection.<\/p>\n<h3 class=\"wp-block-heading\"><strong>Summary of Model Performance<\/strong><\/h3>\n<figure class=\"wp-block-table is-style-stripes\">\n<table class=\"has-fixed-layout\">\n<thead>\n<tr>\n<td><strong>Metric<\/strong><\/td>\n<td><strong>DREAMDOJO-2B<\/strong><\/td>\n<td><strong>DREAMDOJO-14B<\/strong><\/td>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td><strong>Physics Correctness<\/strong><\/td>\n<td>62.50%<\/td>\n<td>73.50%<\/td>\n<\/tr>\n<tr>\n<td><strong>Action Following<\/strong><\/td>\n<td>63.45%<\/td>\n<td>72.55%<\/td>\n<\/tr>\n<tr>\n<td><strong>FPS (Distilled)<\/strong><\/td>\n<td>10.81<\/td>\n<td>N\/A<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/figure>\n<p>NVIDIA has released all weights, training code, and evaluation benchmarks. This open-source release allows you to post-train DreamDojo on your own robot data today.<\/p>\n<h3 class=\"wp-block-heading\"><strong>Key Takeaways<\/strong><\/h3>\n<ul class=\"wp-block-list\">\n<li><strong>Massive Scale and Diversity<\/strong>: DreamDojo is pretrained on <strong>DreamDojo-HV<\/strong>, the largest egocentric human video dataset to date, featuring <strong>44,711 hours<\/strong> of footage across <strong>6,015 unique tasks<\/strong> and <strong>9,869 scenes<\/strong>.<\/li>\n<li><strong>Unified Latent Action Proxy<\/strong>: To overcome the lack of action labels in human videos, the model uses <strong>continuous latent actions<\/strong> extracted via a spatiotemporal Transformer VAE, which serves as a hardware-agnostic control interface.<\/li>\n<li><strong>Optimized Training and Architecture<\/strong>: The model achieves high-fidelity physics and precise controllability by utilizing <strong>relative action transformations<\/strong>, <strong>chunked action injection<\/strong>, and a specialized <strong>temporal consistency loss<\/strong>.<\/li>\n<li><strong>Real-Time Performance via Distillation<\/strong>: Through a <strong>Self Forcing<\/strong> distillation pipeline, the 
model is accelerated to <strong>10.81 FPS<\/strong>, enabling interactive applications like live teleoperation and stable, long-horizon simulations for over <strong>1 minute<\/strong>.<\/li>\n<li><strong>Reliable for Downstream Tasks<\/strong>: DreamDojo functions as an accurate simulator for <strong>policy evaluation<\/strong>, showing a <strong>0.995 Pearson correlation<\/strong> with real-world success rates, and can improve real-world performance by <strong>17%<\/strong> when used for <strong>model-based planning<\/strong>.<\/li>\n<\/ul>\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n<p>Check out the\u00a0<strong><a href=\"https:\/\/arxiv.org\/pdf\/2602.06949\" target=\"_blank\" rel=\"noreferrer noopener\">Paper<\/a> <\/strong>and <strong><a href=\"https:\/\/github.com\/NVIDIA\/DreamDojo\" target=\"_blank\" rel=\"noreferrer noopener\">Codes<\/a>.\u00a0<\/strong>Also,\u00a0feel free to follow us on\u00a0<strong><a href=\"https:\/\/x.com\/intent\/follow?screen_name=marktechpost\" target=\"_blank\" rel=\"noreferrer noopener\"><mark>Twitter<\/mark><\/a><\/strong>\u00a0and don\u2019t forget to join our\u00a0<strong><a href=\"https:\/\/www.reddit.com\/r\/machinelearningnews\/\" target=\"_blank\" rel=\"noreferrer noopener\">100k+ ML SubReddit<\/a><\/strong>\u00a0and Subscribe to\u00a0<strong><a href=\"https:\/\/www.aidevsignals.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">our Newsletter<\/a><\/strong>. Wait! 
Are you on Telegram?\u00a0<strong><a href=\"https:\/\/t.me\/machinelearningresearchnews\" target=\"_blank\" rel=\"noreferrer noopener\">Now you can join us on Telegram as well.<\/a><\/strong><\/p>\n<p>The post <a href=\"https:\/\/www.marktechpost.com\/2026\/02\/20\/nvidia-releases-dreamdojo-an-open-source-robot-world-model-trained-on-44711-hours-of-real-world-human-video-data\/\">NVIDIA Releases DreamDojo: An Open-Source Robot World Model Trained on 44,711 Hours of Real-World Human Video Data<\/a> appeared first on <a href=\"https:\/\/www.marktechpost.com\/\">MarkTechPost<\/a>.<\/p>","protected":false},"excerpt":{"rendered":"","protected":false},"author":2,"featured_media":72708,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","categories":[52,5,7,1],"tags":[]}
Data"}]},{"@type":"WebSite","@id":"https:\/\/yousum.gpucore.co\/#website","url":"https:\/\/yousum.gpucore.co\/","name":"YouSum","description":"","publisher":{"@id":"https:\/\/yousum.gpucore.co\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/yousum.gpucore.co\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"de"},{"@type":"Organization","@id":"https:\/\/yousum.gpucore.co\/#organization","name":"Drone Association Thailand","url":"https:\/\/yousum.gpucore.co\/","logo":{"@type":"ImageObject","inLanguage":"de","@id":"https:\/\/yousum.gpucore.co\/#\/schema\/logo\/image\/","url":"https:\/\/youzum.net\/wp-content\/uploads\/2024\/11\/tranparent-logo.png","contentUrl":"https:\/\/youzum.net\/wp-content\/uploads\/2024\/11\/tranparent-logo.png","width":300,"height":300,"caption":"Drone Association Thailand"},"image":{"@id":"https:\/\/yousum.gpucore.co\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/DroneAssociationTH\/"]},{"@type":"Person","@id":"https:\/\/yousum.gpucore.co\/#\/schema\/person\/97fa48242daf3908e4d9a5f26f4a059c","name":"admin NU","image":{"@type":"ImageObject","inLanguage":"de","@id":"https:\/\/yousum.gpucore.co\/#\/schema\/person\/image\/","url":"https:\/\/youzum.net\/wp-content\/uploads\/avatars\/2\/1746849356-bpfull.png","contentUrl":"https:\/\/youzum.net\/wp-content\/uploads\/avatars\/2\/1746849356-bpfull.png","caption":"admin 
NU"},"url":"https:\/\/youzum.net\/de\/members\/adminnu\/"}]}},"rttpg_featured_image_url":{"full":["https:\/\/youzum.net\/wp-content\/uploads\/2026\/02\/Screenshot-2026-02-20-at-12.22.37-PM-1-Jhs2Uz.png",1364,766,false],"landscape":["https:\/\/youzum.net\/wp-content\/uploads\/2026\/02\/Screenshot-2026-02-20-at-12.22.37-PM-1-Jhs2Uz.png",1364,766,false],"portraits":["https:\/\/youzum.net\/wp-content\/uploads\/2026\/02\/Screenshot-2026-02-20-at-12.22.37-PM-1-Jhs2Uz.png",1364,766,false],"thumbnail":["https:\/\/youzum.net\/wp-content\/uploads\/2026\/02\/Screenshot-2026-02-20-at-12.22.37-PM-1-Jhs2Uz-150x150.png",150,150,true],"medium":["https:\/\/youzum.net\/wp-content\/uploads\/2026\/02\/Screenshot-2026-02-20-at-12.22.37-PM-1-Jhs2Uz-300x168.png",300,168,true],"large":["https:\/\/youzum.net\/wp-content\/uploads\/2026\/02\/Screenshot-2026-02-20-at-12.22.37-PM-1-Jhs2Uz-1024x575.png",1024,575,true],"1536x1536":["https:\/\/youzum.net\/wp-content\/uploads\/2026\/02\/Screenshot-2026-02-20-at-12.22.37-PM-1-Jhs2Uz.png",1364,766,false],"2048x2048":["https:\/\/youzum.net\/wp-content\/uploads\/2026\/02\/Screenshot-2026-02-20-at-12.22.37-PM-1-Jhs2Uz.png",1364,766,false],"trp-custom-language-flag":["https:\/\/youzum.net\/wp-content\/uploads\/2026\/02\/Screenshot-2026-02-20-at-12.22.37-PM-1-Jhs2Uz-18x10.png",18,10,true],"woocommerce_thumbnail":["https:\/\/youzum.net\/wp-content\/uploads\/2026\/02\/Screenshot-2026-02-20-at-12.22.37-PM-1-Jhs2Uz-300x300.png",300,300,true],"woocommerce_single":["https:\/\/youzum.net\/wp-content\/uploads\/2026\/02\/Screenshot-2026-02-20-at-12.22.37-PM-1-Jhs2Uz-600x337.png",600,337,true],"woocommerce_gallery_thumbnail":["https:\/\/youzum.net\/wp-content\/uploads\/2026\/02\/Screenshot-2026-02-20-at-12.22.37-PM-1-Jhs2Uz-100x100.png",100,100,true]},"rttpg_author":{"display_name":"admin NU","author_link":"https:\/\/youzum.net\/de\/members\/adminnu\/"},"rttpg_comment":0,"rttpg_category":"<a href=\"https:\/\/youzum.net\/de\/category\/ai-club\/\" rel=\"category 
tag\">AI<\/a> <a href=\"https:\/\/youzum.net\/de\/category\/committee\/\" rel=\"category tag\">Committee<\/a> <a href=\"https:\/\/youzum.net\/de\/category\/news\/\" rel=\"category tag\">News<\/a> <a href=\"https:\/\/youzum.net\/de\/category\/uncategorized\/\" rel=\"category tag\">Uncategorized<\/a>","rttpg_excerpt":"Building simulators for robots has been a long term challenge. Traditional engines require manual coding of physics and perfect 3D models. NVIDIA is changing this with DreamDojo, a fully open-source, generalizable robot world model. Instead of using a physics engine, DreamDojo \u2018dreams\u2019 the results of robot actions directly in pixels. https:\/\/arxiv.org\/pdf\/2602.06949 Scaling Robotics with 44k+&hellip;","_links":{"self":[{"href":"https:\/\/youzum.net\/de\/wp-json\/wp\/v2\/posts\/72707","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/youzum.net\/de\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/youzum.net\/de\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/youzum.net\/de\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/youzum.net\/de\/wp-json\/wp\/v2\/comments?post=72707"}],"version-history":[{"count":0,"href":"https:\/\/youzum.net\/de\/wp-json\/wp\/v2\/posts\/72707\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/youzum.net\/de\/wp-json\/wp\/v2\/media\/72708"}],"wp:attachment":[{"href":"https:\/\/youzum.net\/de\/wp-json\/wp\/v2\/media?parent=72707"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/youzum.net\/de\/wp-json\/wp\/v2\/categories?post=72707"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/youzum.net\/de\/wp-json\/wp\/v2\/tags?post=72707"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}