{"id":43177,"date":"2025-10-09T06:56:26","date_gmt":"2025-10-09T06:56:26","guid":{"rendered":"https:\/\/youzum.net\/ra3-mid-training-with-temporal-action-abstractions-for-faster-reinforcement-learning-rl-post-training-in-code-llms\/"},"modified":"2025-10-09T06:56:26","modified_gmt":"2025-10-09T06:56:26","slug":"ra3-mid-training-with-temporal-action-abstractions-for-faster-reinforcement-learning-rl-post-training-in-code-llms","status":"publish","type":"post","link":"https:\/\/youzum.net\/th\/ra3-mid-training-with-temporal-action-abstractions-for-faster-reinforcement-learning-rl-post-training-in-code-llms\/","title":{"rendered":"RA3: Mid-Training with Temporal Action Abstractions for Faster Reinforcement Learning (RL) Post-Training in Code LLMs"},"content":{"rendered":"<p><strong>TL;DR:<\/strong> A new research from Apple, formalizes what \u201cmid-training\u201d should do before reinforcement learning RL post-training and introduces <strong>RA3 (Reasoning as Action Abstractions)<\/strong>\u2014an EM-style procedure that learns temporally consistent latent actions from expert traces, then fine-tunes on those bootstrapped traces. It shows mid-training should (1) prune to a compact near-optimal action subspace and (2) shorten the effective planning horizon, improving RL convergence. Empirically, RA3 improves HumanEval\/MBPP by ~8\/4 points over base\/NTP and accelerates RLVR on HumanEval+, MBPP+, LiveCodeBench, and Codeforces. <\/p>\n<h3 class=\"wp-block-heading\"><strong>What does the research present?<\/strong><\/h3>\n<p>The research team present the first formal treatment of how mid-training shapes post-training reinforcement learning RL: they breakdown outcomes into <strong>(i) pruning efficiency<\/strong>\u2014how well mid-training selects a compact near-optimal action subset that shapes the initial policy prior\u2014and <strong>(ii) RL convergence<\/strong>\u2014how quickly post-training improves within that restricted set. 
The analysis argues mid-training is most effective when the <strong>decision space is compact</strong> and the <strong>effective horizon is short</strong>, favoring <strong>temporal abstractions</strong> over primitive next-token actions.</p>
<div class="wp-block-image">
<figure class="aligncenter size-large is-resized"><img decoding="async" width="1024" height="488" src="https://www.marktechpost.com/wp-content/uploads/2025/10/Screenshot-2025-10-08-at-11.14.51-PM-1-1024x488.png" alt="Overview figure from the RA3 paper" class="wp-image-75198" /><figcaption class="wp-element-caption">https://arxiv.org/pdf/2509.25810</figcaption></figure>
</div>
<h3 class="wp-block-heading"><strong>Algorithm: RA3 in one pass</strong></h3>
<p><strong>RA3</strong> derives a <strong>sequential variational lower bound</strong> (a temporal ELBO) and <strong>optimizes it with an EM-like loop:</strong></p>
<ul class="wp-block-list">
<li><strong>E-step (latent discovery):</strong> use RL to infer <strong>temporally consistent latent structures</strong> (abstractions) aligned to expert sequences.</li>
<li><strong>M-step (model update):</strong> perform next-token prediction on the <strong>bootstrapped, latent-annotated traces</strong> to make those abstractions part of the model's policy.</li>
</ul>
<h3 class="wp-block-heading"><strong>Results: code generation and RLVR</strong></h3>
<p>On Python code tasks, the research team reports that across multiple base models, <strong>RA3 improves average pass@k on HumanEval and MBPP by ~8 and ~4 points</strong> over the base model and an NTP mid-training baseline. In post-training, <strong>RLVR</strong> converges <strong>faster</strong> and to <strong>higher final performance</strong> on <strong>HumanEval+, MBPP+, LiveCodeBench, and Codeforces</strong> when initialized from RA3. These are mid- and post-training effects respectively; the evaluation scope is code generation.
<\/p>\n<h3 class=\"wp-block-heading\"><strong>Key Takeaways<\/strong><\/h3>\n<ol class=\"wp-block-list\">\n<li>The research team formalizes mid-training via two determinants\u2014<strong>pruning efficiency<\/strong> and <strong>impact on RL convergence<\/strong>\u2014arguing effectiveness rises when the decision space is compact and the effective horizon is short.<\/li>\n<li><strong>RA3<\/strong> optimizes a sequential variational lower bound by <strong>iteratively discovering temporally consistent latent structures with RL<\/strong> and then <strong>fine-tuning on bootstrapped traces<\/strong> (EM-style).<\/li>\n<li>On code generation, RA3 reports <strong>~+8<\/strong> (HumanEval) and <strong>~+4<\/strong> (MBPP) average pass@k gains over base\/NTP mid-training baselines across several model scales.<\/li>\n<li>Initializing post-training with RA3 <strong>accelerates RLVR convergence<\/strong> and improves <strong>asymptotic performance<\/strong> on HumanEval+, MBPP+, LiveCodeBench, and Codeforces. <\/li>\n<\/ol>\n<h3 class=\"wp-block-heading\"><strong>Editorial Comments<\/strong><\/h3>\n<p>RA3\u2019s contribution is concrete and narrow: it formalizes mid-training around two determinants\u2014pruning efficiency and RL convergence\u2014and operationalizes them via a temporal ELBO optimized in an EM loop to learn persistent action abstractions before RLVR. The researchers report ~+8 (HumanEval) and ~+4 (MBPP) average pass@k gains over base\/NTP and faster RLVR convergence on HumanEval+, MBPP+, LiveCodeBench, and Codeforces.<\/p>\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n<p>Check out the\u00a0<strong><a href=\"https:\/\/arxiv.org\/abs\/2509.25810\" target=\"_blank\" rel=\"noreferrer noopener\">Technical Paper<\/a><\/strong>. 
Feel free to check out our\u00a0<strong><mark><a href=\"https:\/\/github.com\/Marktechpost\/AI-Tutorial-Codes-Included\" target=\"_blank\" rel=\"noreferrer noopener\">GitHub Page for Tutorials, Codes and Notebooks<\/a><\/mark><\/strong>.\u00a0Also,\u00a0feel free to follow us on\u00a0<strong><a href=\"https:\/\/x.com\/intent\/follow?screen_name=marktechpost\" target=\"_blank\" rel=\"noreferrer noopener\"><mark>Twitter<\/mark><\/a><\/strong>\u00a0and don\u2019t forget to join our\u00a0<strong><a href=\"https:\/\/www.reddit.com\/r\/machinelearningnews\/\" target=\"_blank\" rel=\"noreferrer noopener\">100k+ ML SubReddit<\/a><\/strong>\u00a0and Subscribe to\u00a0<strong><a href=\"https:\/\/www.aidevsignals.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">our Newsletter<\/a><\/strong>. Wait! are you on telegram?\u00a0<strong><a href=\"https:\/\/t.me\/machinelearningresearchnews\" target=\"_blank\" rel=\"noreferrer noopener\">now you can join us on telegram as well.<\/a><\/strong><\/p>\n<p>The post <a href=\"https:\/\/www.marktechpost.com\/2025\/10\/08\/ra3-mid-training-with-temporal-action-abstractions-for-faster-reinforcement-learning-rl-post-training-in-code-llms\/\">RA3: Mid-Training with Temporal Action Abstractions for Faster Reinforcement Learning (RL) Post-Training in Code LLMs<\/a> appeared first on <a href=\"https:\/\/www.marktechpost.com\/\">MarkTechPost<\/a>.<\/p>","protected":false},"excerpt":{"rendered":"<p>TL;DR: A new research from Apple, formalizes what \u201cmid-training\u201d should do before reinforcement learning RL post-training and introduces RA3 (Reasoning as Action Abstractions)\u2014an EM-style procedure that learns temporally consistent latent actions from expert traces, then fine-tunes on those bootstrapped traces. It shows mid-training should (1) prune to a compact near-optimal action subspace and (2) shorten the effective planning horizon, improving RL convergence. 