{"id":35212,"date":"2025-08-31T06:18:27","date_gmt":"2025-08-31T06:18:27","guid":{"rendered":"https:\/\/youzum.net\/a-coding-guide-to-building-a-brain-inspired-hierarchical-reasoning-ai-agent-with-hugging-face-models\/"},"modified":"2025-08-31T06:18:27","modified_gmt":"2025-08-31T06:18:27","slug":"a-coding-guide-to-building-a-brain-inspired-hierarchical-reasoning-ai-agent-with-hugging-face-models","status":"publish","type":"post","link":"https:\/\/youzum.net\/ja\/a-coding-guide-to-building-a-brain-inspired-hierarchical-reasoning-ai-agent-with-hugging-face-models\/","title":{"rendered":"A Coding Guide to Building a Brain-Inspired Hierarchical Reasoning AI Agent with Hugging Face Models"},"content":{"rendered":"<p>In this tutorial, we set out to recreate the spirit of the Hierarchical Reasoning Model (HRM) using a free Hugging Face model that runs locally. We walk through the design of a lightweight yet structured reasoning agent, where we act as both architects and experimenters. By breaking problems into subgoals, solving them with Python, critiquing the outcomes, and synthesizing a final answer, we can experience how hierarchical planning and execution can enhance reasoning performance. This process enables us to see, in real-time, how a brain-inspired workflow can be implemented without requiring massive model sizes or expensive APIs. 
Check out the\u00a0<strong><a href=\"https:\/\/arxiv.org\/pdf\/2506.21734\" target=\"_blank\" rel=\"noreferrer noopener\">Paper<\/a>\u00a0and\u00a0<a href=\"https:\/\/github.com\/Marktechpost\/AI-Tutorial-Codes-Included\/blob\/main\/AI%20Agents%20Codes\/hrm_braininspired_ai_agent_huggingface_marktechpost.py\" target=\"_blank\" rel=\"noreferrer noopener\">FULL CODES<\/a>.<\/strong><\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\"no-line-numbers\"><code class=\"no-wrap language-python\">!pip -q install -U transformers accelerate bitsandbytes rich\n\n\nimport os, re, json, textwrap, traceback\nfrom typing import Dict, Any, List\nfrom rich import print as rprint\nimport torch\nfrom transformers import AutoTokenizer, AutoModelForCausalLM, pipeline\n\n\nMODEL_NAME = \"Qwen\/Qwen2.5-1.5B-Instruct\"\nDTYPE = torch.bfloat16 if torch.cuda.is_available() else torch.float32<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We begin by installing the required libraries and loading the Qwen2.5-1.5B-Instruct model from Hugging Face. 
We set the data type based on GPU availability to ensure efficient model execution in Colab.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\"no-line-numbers\"><code class=\"no-wrap language-python\">tok = AutoTokenizer.from_pretrained(MODEL_NAME, use_fast=True)\nmodel = AutoModelForCausalLM.from_pretrained(\n   MODEL_NAME,\n   device_map=\"auto\",\n   torch_dtype=DTYPE,\n   load_in_4bit=True\n)\ngen = pipeline(\n   \"text-generation\",\n   model=model,\n   tokenizer=tok,\n   return_full_text=False\n)\n<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We load the tokenizer and model, configure it to run in 4-bit for efficiency, and wrap everything in a text-generation pipeline so we can interact with the model easily in Colab. 
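As a side note, on recent transformers releases the bare load_in_4bit=True flag is deprecated in favor of an explicit quantization config. A minimal sketch of the equivalent setup, assuming a recent transformers with bitsandbytes installed and a CUDA GPU available:

```python
# Hypothetical alternative to load_in_4bit=True: the explicit BitsAndBytesConfig
# route (requires bitsandbytes and a CUDA GPU; not needed on CPU-only runs).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 while weights stay 4-bit
)
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-1.5B-Instruct",
    device_map="auto",
    quantization_config=bnb,
)
```

Either form produces the same 4-bit model; the config object simply makes the quantization choices explicit.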
<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\"no-line-numbers\"><code class=\"no-wrap language-python\">def chat(prompt: str, system: str = \"\", max_new_tokens: int = 512, temperature: float = 0.3) -&gt; str:\n   msgs = []\n   if system:\n       msgs.append({\"role\":\"system\",\"content\":system})\n   msgs.append({\"role\":\"user\",\"content\":prompt})\n   inputs = tok.apply_chat_template(msgs, tokenize=False, add_generation_prompt=True)\n   out = gen(inputs, max_new_tokens=max_new_tokens, do_sample=(temperature&gt;0), temperature=temperature, top_p=0.9)\n   return out[0][\"generated_text\"].strip()\n\n\ndef extract_json(txt: str) -&gt; Dict[str, Any]:\n   m = re.search(r\"\\{[\\s\\S]*\\}$\", txt.strip())\n   if not m:\n       m = re.search(r\"\\{[\\s\\S]*?\\}\", txt)\n   try:\n       return json.loads(m.group(0)) if m else {}\n   except Exception:\n       # fallback: strip surrounding code fences and retry\n       s = re.sub(r\"^```.*?\\n|\\n```$\", \"\", txt, flags=re.S)\n       try:\n           return json.loads(s)\n       except Exception:\n           return 
{}<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We define helper functions: the chat function allows us to send prompts to the model with optional system instructions and sampling controls, while extract_json helps us parse structured JSON outputs from the model reliably, even if the response includes code fences or additional text.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\"no-line-numbers\"><code class=\"no-wrap language-python\">def extract_code(txt: str) -&gt; str:\n   m = re.search(r\"```(?:python)?\\s*([\\s\\S]*?)```\", txt, flags=re.I)\n   return (m.group(1) if m else txt).strip()\n\n\ndef run_python(code: str, env: Dict[str, Any] | None = None) -&gt; Dict[str, Any]:\n   import io, contextlib\n   g = {\"__name__\": \"__main__\"}; l = {}\n   if env: g.update(env)\n   buf = io.StringIO()\n   try:\n       with contextlib.redirect_stdout(buf):\n           exec(code, g, l)\n       out = l.get(\"RESULT\", g.get(\"RESULT\"))\n       return {\"ok\": True, \"result\": out, \"stdout\": buf.getvalue()}\n   except Exception as e:\n       return {\"ok\": False, \"error\": 
str(e), \"trace\": traceback.format_exc(), \"stdout\": buf.getvalue()}\n\n\nPLANNER_SYS = \"\"\"You are the HRM Planner.\nDecompose the TASK into 2\u20134 atomic, code-solvable subgoals.\nReturn compact JSON only: {\"subgoals\":[...], \"final_format\":\"&lt;one-line answer format&gt;\"}.\"\"\"\n\n\nSOLVER_SYS = \"\"\"You are the HRM Solver.\nGiven SUBGOAL and CONTEXT vars, output a single Python snippet.\nRules:\n- Compute deterministically.\n- Set a variable RESULT to the answer.\n- Keep code short; stdlib only.\nReturn only a Python code block.\"\"\"\n\n\nCRITIC_SYS = \"\"\"You are the HRM Critic.\nGiven TASK and LOGS (subgoal results), decide if final answer is ready.\nReturn JSON only: {\"action\":\"submit\"|\"revise\",\"critique\":\"...\", \"fix_hint\":\"&lt;if revise&gt;\"}.\"\"\"\n\n\nSYNTH_SYS = \"\"\"You are the HRM Synthesizer.\nGiven TASK, LOGS, and final_format, output only the final answer (no steps).\nFollow final_format exactly.\"\"\"\n<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We add two important pieces: utility functions and system prompts. The extract_code function pulls Python snippets from the model\u2019s output, while run_python safely executes those snippets and captures their results. Alongside, we define four role prompts, Planner, Solver, Critic, and Synthesizer, which guide the model to break tasks into subgoals, solve them with code, verify correctness, and finally produce a clean answer. 
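As a quick sanity check, we can exercise this execute-and-parse round trip on its own. The sketch below re-inlines simplified stand-alone versions of run_python and extract_json (a single exec namespace and a first-brace-to-last-brace JSON match, rather than the exact definitions above) so we can confirm that RESULT comes back and that a JSON verdict survives surrounding prose:

```python
import contextlib
import io
import json
import re
import traceback

def run_python(code, env=None):
    # Execute a code string, capture stdout, and read back the RESULT variable
    # (simplified: one shared namespace instead of separate globals/locals).
    g = {"__name__": "__main__"}
    if env:
        g.update(env)
    buf = io.StringIO()
    try:
        with contextlib.redirect_stdout(buf):
            exec(code, g)
        return {"ok": True, "result": g.get("RESULT"), "stdout": buf.getvalue()}
    except Exception as e:
        return {"ok": False, "error": str(e), "trace": traceback.format_exc(), "stdout": buf.getvalue()}

def extract_json(txt):
    # Pull the outermost {...} span out of a reply, tolerating surrounding prose.
    m = re.search(r"\{[\s\S]*\}", txt)
    return json.loads(m.group(0)) if m else {}

print(run_python("print('hi'); RESULT = sum(range(5))"))  # {'ok': True, 'result': 10, 'stdout': 'hi\n'}
print(run_python("1/0")["ok"])                            # False
print(extract_json('Sure! {"action": "submit", "critique": "done"}'))
```

Failures come back as ok=False with the traceback attached, which is the signal the Critic role later reads off each subgoal log.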
<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\"no-line-numbers\"><code class=\"no-wrap language-python\">def plan(task: str) -&gt; Dict[str, Any]:\n   p = f\"TASK:\\n{task}\\nReturn JSON only.\"\n   return extract_json(chat(p, PLANNER_SYS, temperature=0.2, max_new_tokens=300))\n\n\ndef solve_subgoal(subgoal: str, context: Dict[str, Any]) -&gt; Dict[str, Any]:\n   prompt = f\"SUBGOAL:\\n{subgoal}\\nCONTEXT vars: {list(context.keys())}\\nReturn Python code only.\"\n   code = extract_code(chat(prompt, SOLVER_SYS, temperature=0.2, max_new_tokens=400))\n   res = run_python(code, env=context)\n   return {\"subgoal\": subgoal, \"code\": code, \"run\": res}\n\n\ndef critic(task: str, logs: List[Dict[str, Any]]) -&gt; Dict[str, Any]:\n   pl = [{\"subgoal\": L[\"subgoal\"], \"result\": L[\"run\"].get(\"result\"), \"ok\": L[\"run\"][\"ok\"]} for L in logs]\n   out = chat(\"TASK:\\n\"+task+\"\\nLOGS:\\n\"+json.dumps(pl, ensure_ascii=False, indent=2)+\"\\nReturn JSON only.\",\n              CRITIC_SYS, temperature=0.1, max_new_tokens=250)\n   return extract_json(out)\n\n\ndef refine(task: 
str, logs: List[Dict[str, Any]]) -&gt; Dict[str, Any]:\n   sys = \"Refine subgoals minimally to fix issues. Return same JSON schema as planner.\"\n   out = chat(\"TASK:\\n\"+task+\"\\nLOGS:\\n\"+json.dumps(logs, ensure_ascii=False)+\"\\nReturn JSON only.\",\n              sys, temperature=0.2, max_new_tokens=250)\n   j = extract_json(out)\n   return j if j.get(\"subgoals\") else {}\n\n\ndef synthesize(task: str, logs: List[Dict[str, Any]], final_format: str) -&gt; str:\n   packed = [{\"subgoal\": L[\"subgoal\"], \"result\": L[\"run\"].get(\"result\")} for L in logs]\n   return chat(\"TASK:\\n\"+task+\"\\nLOGS:\\n\"+json.dumps(packed, ensure_ascii=False)+\n               f\"\\nfinal_format: {final_format}\\nOnly the final answer.\",\n               SYNTH_SYS, temperature=0.0, max_new_tokens=120).strip()\n\n\ndef hrm_agent(task: str, context: Dict[str, Any] | None = None, budget: int = 2) -&gt; Dict[str, Any]:\n   ctx = dict(context or {})\n   trace, plan_json = [], plan(task)\n   for round_id in range(1, budget+1):\n       logs = [solve_subgoal(sg, ctx) for sg in plan_json.get(\"subgoals\", [])]\n       for L in logs:\n           ctx_key = f\"g{len(trace)}_{abs(hash(L['subgoal']))%9999}\"\n           ctx[ctx_key] = L[\"run\"].get(\"result\")\n       verdict = critic(task, logs)\n       trace.append({\"round\": round_id, \"plan\": plan_json, \"logs\": logs, \"verdict\": verdict})\n       if verdict.get(\"action\") == \"submit\": break\n       plan_json = refine(task, logs) or plan_json\n   final = synthesize(task, trace[-1][\"logs\"], plan_json.get(\"final_format\", \"Answer: &lt;value&gt;\"))\n   return {\"final\": final, \"trace\": trace}<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We implement the full HRM loop: we plan subgoals, solve each by generating and running Python (capturing RESULT), then we critique, optionally refine the plan, and synthesize a clean final answer. 
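To make the round structure easy to follow before any model is involved, here is a model-free sketch of the same control flow, with hypothetical plan_stub, solve_stub, and critic_stub functions standing in for the LLM calls; it mirrors the plan, solve, critique, refine rhythm of hrm_agent without a GPU or any downloads:

```python
def plan_stub(task):
    # Stand-in for the Planner LLM call: one fixed, code-solvable subgoal.
    return {"subgoals": ["compute 2 + 2"], "final_format": "Answer: <value>"}

def solve_stub(subgoal, ctx):
    # Stand-in for the Solver: pretend the generated code set RESULT = 4.
    return {"subgoal": subgoal, "run": {"ok": True, "result": 4}}

def critic_stub(task, logs):
    # Stand-in for the Critic: submit once every subgoal ran cleanly.
    ready = all(log["run"]["ok"] for log in logs)
    return {"action": "submit" if ready else "revise"}

def hrm_loop(task, budget=2):
    # Same round structure as hrm_agent: solve all subgoals, ask the critic,
    # and stop on "submit" or spend another round of the budget.
    trace, plan_json = [], plan_stub(task)
    for round_id in range(1, budget + 1):
        logs = [solve_stub(sg, {}) for sg in plan_json["subgoals"]]
        verdict = critic_stub(task, logs)
        trace.append({"round": round_id, "logs": logs, "verdict": verdict})
        if verdict["action"] == "submit":
            break
    return {"final": f"Answer: {trace[-1]['logs'][-1]['run']['result']}", "trace": trace}

print(hrm_loop("toy task"))  # one round, then submit
```

Swapping the three stubs for real chat-backed calls recovers the full agent; the loop itself does not change.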
We orchestrate these rounds in hrm_agent, carrying forward intermediate results as context so we iteratively improve and stop once the critic says \u201csubmit.\u201d<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\"no-line-numbers\"><code class=\"no-wrap language-python\">ARC_TASK = textwrap.dedent(\"\"\"\nInfer the transformation rule from train examples and apply to test.\nReturn exactly: \"Answer: &lt;grid&gt;\", where &lt;grid&gt; is a Python list of lists of ints.\n\"\"\").strip()\nARC_DATA = {\n   \"train\": [\n       {\"inp\": [[0,0],[1,0]], \"out\": [[1,1],[0,1]]},\n       {\"inp\": [[0,1],[0,0]], \"out\": [[1,0],[1,1]]}\n   ],\n   \"test\": [[0,0],[0,1]]\n}\nres1 = hrm_agent(ARC_TASK, context={\"TRAIN\": ARC_DATA[\"train\"], \"TEST\": ARC_DATA[\"test\"]}, budget=2)\nrprint(\"\\n[bold]Demo 1 \u2014 ARC-like Toy[\/bold]\")\nrprint(res1[\"final\"])\n\n\nWM_TASK = \"A tank holds 1200 L. It leaks 2% per hour for 3 hours, then is refilled by 150 L. 
Return exactly: 'Answer: &lt;liters&gt;'.\"\nres2 = hrm_agent(WM_TASK, context={}, budget=2)\nrprint(\"\\n[bold]Demo 2 \u2014 Word Math[\/bold]\")\nrprint(res2[\"final\"])\n\n\nrprint(\"\\n[dim]Rounds executed (Demo 1):[\/dim]\", len(res1[\"trace\"]))<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We run two demos to validate the agent: an ARC-style task where we infer a transformation from train pairs and apply it to a test grid, and a word-math problem that checks numeric reasoning. We call hrm_agent with each task, print the final answers, and also display the number of reasoning rounds the ARC run takes.<\/p>\n<p>In conclusion, we recognize that what we have built is more than a simple demonstration; it is a window into how hierarchical reasoning can make smaller models punch above their weight. By layering planning, solving, and critiquing, we empower a free Hugging Face model to perform tasks with surprising robustness. We leave with a deeper appreciation of how brain-inspired structures, when paired with practical, open-source tools, enable us to explore reasoning benchmarks and experiment creatively without incurring high costs. 
This hands-on journey shows us that advanced cognitive-like workflows are accessible to anyone willing to tinker, iterate, and learn.<\/p>\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n<p>Check out the\u00a0<strong><a href=\"https:\/\/arxiv.org\/pdf\/2506.21734\" target=\"_blank\" rel=\"noreferrer noopener\">Paper<\/a>\u00a0and\u00a0<a href=\"https:\/\/github.com\/Marktechpost\/AI-Tutorial-Codes-Included\/blob\/main\/AI%20Agents%20Codes\/hrm_braininspired_ai_agent_huggingface_marktechpost.py\" target=\"_blank\" rel=\"noreferrer noopener\">FULL CODES<\/a>.<\/strong>\u00a0Feel free to check out our\u00a0<strong><mark><a href=\"https:\/\/github.com\/Marktechpost\/AI-Tutorial-Codes-Included\" target=\"_blank\" rel=\"noreferrer noopener\">GitHub Page for Tutorials, Codes and Notebooks<\/a><\/mark><\/strong>.\u00a0Also,\u00a0feel free to follow us on\u00a0<strong><a href=\"https:\/\/x.com\/intent\/follow?screen_name=marktechpost\" target=\"_blank\" rel=\"noreferrer noopener\"><mark>Twitter<\/mark><\/a><\/strong>\u00a0and don\u2019t forget to join our\u00a0<strong><a href=\"https:\/\/www.reddit.com\/r\/machinelearningnews\/\" target=\"_blank\" rel=\"noreferrer noopener\">100k+ ML SubReddit<\/a><\/strong>\u00a0and Subscribe to\u00a0<strong><a href=\"https:\/\/www.aidevsignals.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">our Newsletter<\/a><\/strong>.<\/p>\n<p>The post <a href=\"https:\/\/www.marktechpost.com\/2025\/08\/30\/a-coding-guide-to-building-a-brain-inspired-hierarchical-reasoning-ai-agent-with-hugging-face-models\/\">A Coding Guide to Building a Brain-Inspired Hierarchical Reasoning AI Agent with Hugging Face Models<\/a> appeared first on <a href=\"https:\/\/www.marktechpost.com\/\">MarkTechPost<\/a>.<\/p>","protected":false},"excerpt":{"rendered":"<p>In this tutorial, we set out to recreate the spirit of the Hierarchical Reasoning Model (HRM) using a free Hugging Face model that runs locally. 
We walk through the design of a lightweight yet structured reasoning agent, where we act as both architects and experimenters. By breaking problems into subgoals, solving them with Python, critiquing the outcomes, and synthesizing a final answer, we can experience how hierarchical planning and execution can enhance reasoning performance. This process enables us to see, in real-time, how a brain-inspired workflow can be implemented without requiring massive model sizes or expensive APIs.<\/p>","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"pmpro_default_level":"","site-sidebar-layout":"default","site-content-layout":"","ast-site-content-layout":"","site-content-style":"default","site-sidebar-style":"default","ast-global-header-display":"","ast-banner-title-visibility":"","ast-main-header-display":"","ast-hfb-above-header-display":"","ast-hfb-below-header-display":"","ast-hfb-mobile-header-display":"","site-post-title":"","ast-breadcrumbs-content":"","ast-featured-img":"","footer-sml-layout":"","theme-transparent-header-meta":"","adv-header-id-meta":"","stick-header-meta":"","header-above-stick-meta":"","header-main-stick-meta":"","header-below-stick-meta":"","astra-migrate-meta-layouts":"default","ast-page-background-enabled":"default","ast-page-background-meta":{"desktop":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center 