{"id":71538,"date":"2026-02-16T11:48:29","date_gmt":"2026-02-16T11:48:29","guid":{"rendered":"https:\/\/youzum.net\/a-coding-implementation-to-design-a-stateful-tutor-agent-with-long-term-memory-semantic-recall-and-adaptive-practice-generation\/"},"modified":"2026-02-16T11:48:29","modified_gmt":"2026-02-16T11:48:29","slug":"a-coding-implementation-to-design-a-stateful-tutor-agent-with-long-term-memory-semantic-recall-and-adaptive-practice-generation","status":"publish","type":"post","link":"https:\/\/youzum.net\/it\/a-coding-implementation-to-design-a-stateful-tutor-agent-with-long-term-memory-semantic-recall-and-adaptive-practice-generation\/","title":{"rendered":"A Coding Implementation to Design a Stateful Tutor Agent with Long-Term Memory, Semantic Recall, and Adaptive Practice Generation"},"content":{"rendered":"<p>In this tutorial, we build a fully stateful personal tutor agent that moves beyond short-lived chat interactions and learns continuously over time. We design the system to persist user preferences, track weak learning areas, and selectively recall only relevant past context when responding. By combining durable storage, semantic retrieval, and adaptive prompting, we demonstrate how an agent can behave more like a long-term tutor than a stateless chatbot. 
Also, we focus on keeping the agent self-managed, context-aware, and able to improve its guidance without requiring the user to repeat information.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\"no-line-numbers\"><code class=\"no-wrap language-php\">!pip -q install \"langchain&gt;=0.2.12\" \"langchain-openai&gt;=0.1.20\" \"sentence-transformers&gt;=3.0.1\" \"faiss-cpu&gt;=1.8.0.post1\" \"pydantic&gt;=2.7.0\"\n\n\nimport os, json, sqlite3, uuid\nfrom datetime import datetime, timezone\nfrom typing import List, Dict, Any\nimport numpy as np\nimport faiss\nfrom pydantic import BaseModel, Field\nfrom sentence_transformers import SentenceTransformer\nfrom langchain_core.messages import SystemMessage, HumanMessage, AIMessage\nfrom langchain_core.language_models.chat_models import BaseChatModel\nfrom langchain_core.outputs import ChatGeneration, ChatResult\n\n\nDB_PATH=\"\/content\/tutor_memory.db\"\nSTORE_DIR=\"\/content\/tutor_store\"\nINDEX_PATH=f\"{STORE_DIR}\/mem.faiss\"\nMETA_PATH=f\"{STORE_DIR}\/mem_meta.json\"\nos.makedirs(STORE_DIR, exist_ok=True)\n\n\ndef now(): return datetime.now(timezone.utc).isoformat()\n\n\ndef db(): return sqlite3.connect(DB_PATH)\n\n\ndef init_db():\n   c=db(); cur=c.cursor()\n   cur.execute(\"\"\"CREATE TABLE IF NOT EXISTS events(\n       id TEXT PRIMARY KEY,user_id TEXT,session_id TEXT,role TEXT,content TEXT,ts TEXT)\"\"\")\n   cur.execute(\"\"\"CREATE TABLE IF NOT EXISTS memories(\n       id TEXT PRIMARY 
KEY,user_id TEXT,kind TEXT,content TEXT,tags TEXT,importance REAL,ts TEXT)\"\"\")\n   cur.execute(\"\"\"CREATE TABLE IF NOT EXISTS weak_topics(\n       user_id TEXT,topic TEXT,mastery REAL,last_seen TEXT,notes TEXT,PRIMARY KEY(user_id,topic))\"\"\")\n   c.commit(); c.close()<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We set up the execution environment, import all required libraries for building a stateful agent, and define core paths plus utility functions for time handling and database connections. We then create the SQLite tables that hold events, memories, and weak topics. This establishes the foundational infrastructure that the rest of the system relies on.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\"no-line-numbers\"><code class=\"no-wrap language-php\">class MemoryItem(BaseModel):\n   kind:str\n   content:str\n   tags:List[str]=Field(default_factory=list)\n   importance:float=Field(0.5,ge=0,le=1)\n\n\nclass WeakTopicSignal(BaseModel):\n   topic:str\n   signal:str\n   evidence:str\n   confidence:float=Field(0.5,ge=0,le=1)\n\n\nclass Extracted(BaseModel):\n   memories:List[MemoryItem]=Field(default_factory=list)\n   weak_topics:List[WeakTopicSignal]=Field(default_factory=list)\n\n\nclass FallbackTutorLLM(BaseChatModel):\n   @property\n   def _llm_type(self)-&gt;str: return \"fallback_tutor\"\n   def _generate(self, messages, stop=None, run_manager=None, **kwargs)-&gt;ChatResult:\n       last=messages[-1].content if messages else \"\"\n       content=self._respond(last)\n       return 
ChatResult(generations=[ChatGeneration(message=AIMessage(content=content))])\n   def _respond(self, text:str)-&gt;str:\n       t=text.lower()\n       if \"extract_memories\" in t:\n           out={\"memories\":[],\"weak_topics\":[]}\n           if \"recursion\" in t:\n               out[\"weak_topics\"].append({\"topic\":\"recursion\",\"signal\":\"struggled\",\n                                         \"evidence\":\"User indicates difficulty with recursion.\",\"confidence\":0.85})\n           if \"prefer\" in t or \"i like\" in t:\n               out[\"memories\"].append({\"kind\":\"preference\",\"content\":\"User prefers concise explanations with examples.\",\n                                       \"tags\":[\"style\",\"preference\"],\"importance\":0.55})\n           return json.dumps(out)\n       if \"generate_practice\" in t:\n           return \"\\n\".join([\n               \"Targeted Practice (Recursion):\",\n               \"1) Implement factorial(n) recursively, then iteratively.\",\n               \"2) Recursively sum a list; state the base case explicitly.\",\n               \"3) Recursive binary search; return index or -1.\",\n               \"4) Trace fibonacci(6) call tree; count repeated subcalls.\",\n               \"5) Recursively reverse a string; discuss time\/space.\",\n               \"Mini-quiz: Why does missing a base case cause infinite recursion?\"\n           ])\n       return \"Tell me what you're studying and what felt hard; I\u2019ll remember and adapt practice next time.\"\n\n\ndef get_llm():\n   key=os.environ.get(\"OPENAI_API_KEY\",\"\").strip()\n   if key:\n       from langchain_openai import ChatOpenAI\n       return ChatOpenAI(model=\"gpt-4o-mini\",temperature=0.2)\n   return FallbackTutorLLM()<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We define the Pydantic data models for memories, weak-topic signals, and extraction results, and implement a deterministic FallbackTutorLLM so the pipeline still runs when no OpenAI API key is set. We validate every extracted record against these schemas before writing it to storage. 
It enables agent memory to be durable beyond a single run.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\"no-line-numbers\"><code class=\"no-wrap language-php\">EMBED_MODEL=\"sentence-transformers\/all-MiniLM-L6-v2\"\nembedder=SentenceTransformer(EMBED_MODEL)\n\n\ndef load_meta():\n   if os.path.exists(META_PATH):\n       with open(META_PATH,\"r\") as f: return json.load(f)\n   return []\n\n\ndef save_meta(meta):\n   with open(META_PATH,\"w\") as f: json.dump(meta,f)\n\n\ndef normalize(x):\n   n=np.linalg.norm(x,axis=1,keepdims=True)+1e-12\n   return x\/n\n\n\ndef load_index(dim):\n   if os.path.exists(INDEX_PATH): return faiss.read_index(INDEX_PATH)\n   return faiss.IndexFlatIP(dim)\n\n\ndef save_index(ix): faiss.write_index(ix, INDEX_PATH)\n\n\nEXTRACTOR_SYSTEM = (\n   \"You are a memory extractor for a stateful personal tutor.\\n\"\n   \"Return ONLY JSON with keys: memories (list of {kind,content,tags,importance}) \"\n   \"and weak_topics (list of {topic,signal,evidence,confidence}).\\n\"\n   \"Store durable info only; do not store secrets.\"\n)\n\n\nllm=get_llm()\ninit_db()\n\n\ndim=embedder.encode([\"x\"],convert_to_numpy=True).shape[1]\nix=load_index(dim)\nmeta=load_meta()<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We load the MiniLM sentence-transformer embedder, add helpers that L2-normalize vectors and persist the FAISS inner-product index alongside its metadata, and define the system prompt that instructs the extractor to return only structured JSON. 
It allows the agent to consistently convert raw conversations into structured, actionable memory.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\"no-line-numbers\"><code class=\"no-wrap language-php\">def log_event(user_id, session_id, role, content):\n   c=db(); cur=c.cursor()\n   cur.execute(\"INSERT INTO events VALUES (?,?,?,?,?,?)\",\n               (str(uuid.uuid4()),user_id,session_id,role,content,now()))\n   c.commit(); c.close()\n\n\ndef upsert_weak(user_id, sig:WeakTopicSignal):\n   c=db(); cur=c.cursor()\n   cur.execute(\"SELECT mastery,notes FROM weak_topics WHERE user_id=? AND topic=?\",(user_id,sig.topic))\n   row=cur.fetchone()\n   delta=(-0.10 if sig.signal==\"struggled\" else 0.10 if sig.signal==\"improved\" else 0.0)*sig.confidence\n   if row is None:\n       mastery=float(np.clip(0.5+delta,0,1)); notes=sig.evidence\n       cur.execute(\"INSERT INTO weak_topics VALUES (?,?,?,?,?)\",(user_id,sig.topic,mastery,now(),notes))\n   else:\n       mastery=float(np.clip(row[0]+delta,0,1)); notes=(row[1]+\" | \"+sig.evidence)[-2000:]\n       cur.execute(\"UPDATE weak_topics SET mastery=?,last_seen=?,notes=? WHERE user_id=? 
AND topic=?\",\n                   (mastery,now(),notes,user_id,sig.topic))\n   c.commit(); c.close()\n\n\ndef store_memory(user_id, m:MemoryItem):\n   mem_id=str(uuid.uuid4())\n   c=db(); cur=c.cursor()\n   cur.execute(\"INSERT INTO memories VALUES (?,?,?,?,?,?,?)\",\n               (mem_id,user_id,m.kind,m.content,json.dumps(m.tags),float(m.importance),now()))\n   c.commit(); c.close()\n   v=embedder.encode([m.content],convert_to_numpy=True).astype(\"float32\")\n   v=normalize(v); ix.add(v)\n   meta.append({\"mem_id\":mem_id,\"user_id\":user_id,\"kind\":m.kind,\"content\":m.content,\n                \"tags\":m.tags,\"importance\":m.importance,\"ts\":now()})\n   save_index(ix); save_meta(meta)<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We implement the write path for long-term state: we log every event, adjust per-topic mastery from weak-topic signals, and store each memory in SQLite while adding its normalized embedding to the FAISS index with matching metadata. It enables relevance-based recall rather than blindly loading all past context.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\"no-line-numbers\"><code class=\"no-wrap language-php\">def extract(user_text)-&gt;Extracted:\n   msg=\"extract_memories\\n\\nUser message:\\n\"+user_text\n   r=llm.invoke([SystemMessage(content=EXTRACTOR_SYSTEM),HumanMessage(content=msg)]).content\n   try:\n       d=json.loads(r)\n       return Extracted(\n           memories=[MemoryItem(**x) for x in d.get(\"memories\",[])],\n           
weak_topics=[WeakTopicSignal(**x) for x in d.get(\"weak_topics\",[])]\n       )\n   except Exception:\n       return Extracted()\n\n\ndef recall(user_id, query, k=6):\n   if ix.ntotal==0: return []\n   q=embedder.encode([query],convert_to_numpy=True).astype(\"float32\")\n   q=normalize(q)\n   scores, idxs = ix.search(q,k)\n   out=[]\n   for s,i in zip(scores[0].tolist(), idxs[0].tolist()):\n       if i&lt;0 or i&gt;=len(meta): continue\n       m=meta[i]\n       if m[\"user_id\"]!=user_id or s&lt;0.25: continue\n       out.append({**m,\"score\":float(s)})\n   out.sort(key=lambda r: r[\"score\"]*(0.6+0.4*r[\"importance\"]), reverse=True)\n   return out\n\n\ndef weak_snapshot(user_id):\n   c=db(); cur=c.cursor()\n   cur.execute(\"SELECT topic,mastery,last_seen FROM weak_topics WHERE user_id=? ORDER BY mastery ASC LIMIT 5\",(user_id,))\n   rows=cur.fetchall(); c.close()\n   return [{\"topic\":t,\"mastery\":float(m),\"last_seen\":ls} for t,m,ls in rows]\n\n\ndef tutor_turn(user_id, session_id, user_text):\n   log_event(user_id,session_id,\"user\",user_text)\n   ex=extract(user_text)\n   for w in ex.weak_topics: upsert_weak(user_id,w)\n   for m in ex.memories: store_memory(user_id,m)\n   rel=recall(user_id,user_text,k=6)\n   weak=weak_snapshot(user_id)\n   prompt={\n       \"recalled_memories\":[{\"kind\":x[\"kind\"],\"content\":x[\"content\"],\"score\":x[\"score\"]} for x in rel],\n       \"weak_topics\":weak,\n       \"user_message\":user_text\n   }\n   gen = llm.invoke([SystemMessage(content=\"You are a personal tutor. Use recalled_memories only if relevant.\"),\n                     HumanMessage(content=\"generate_practice\\n\\n\"+json.dumps(prompt))]).content\n   log_event(user_id,session_id,\"assistant\",gen)\n   return gen, rel, weak\n\n\nUSER_ID=\"user_demo\"\nSESSION_ID=str(uuid.uuid4())\n\n\nprint(\"\u2705 Ready. 
Example run:\\n\")\nans, rel, weak = tutor_turn(USER_ID, SESSION_ID, \"Last week I struggled with recursion. I prefer concise explanations.\")\nprint(ans)\nprint(\"\\nRecalled:\", [r[\"content\"] for r in rel])\nprint(\"Weak topics:\", weak)<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We orchestrate the full tutor interaction loop, combining extraction, storage, recall, and response generation. We update mastery scores, retrieve relevant memories, and dynamically generate targeted practice. This completes the transformation from a stateless chatbot into a long-term, adaptive tutor.<\/p>\n<p>In conclusion, we implemented a tutor agent that remembers, reasons, and adapts across sessions. We showed how structured memory extraction, long-term persistence, and relevance-based recall work together to overcome the \u201cgoldfish memory\u201d limitation common in most agents. The resulting system continuously refines its understanding of a user\u2019s weaknesses and proactively generates targeted practice, demonstrating a practical foundation for building stateful, long-horizon AI agents that improve with sustained interaction.<\/p>\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n<p>Check out the\u00a0<strong><a href=\"https:\/\/github.com\/Marktechpost\/AI-Tutorial-Codes-Included\/blob\/main\/Agentic%20AI%20Memory\/stateful_tutor_long_term_memory_agent_marktechpost.py\" target=\"_blank\" rel=\"noreferrer noopener\">Full Codes here<\/a>.\u00a0<\/strong>Also,\u00a0feel free to follow us on\u00a0<strong><a href=\"https:\/\/x.com\/intent\/follow?screen_name=marktechpost\" target=\"_blank\" rel=\"noreferrer noopener\"><mark>Twitter<\/mark><\/a><\/strong>\u00a0and don\u2019t forget to join our\u00a0<strong><a href=\"https:\/\/www.reddit.com\/r\/machinelearningnews\/\" target=\"_blank\" rel=\"noreferrer noopener\">100k+ ML SubReddit<\/a><\/strong>\u00a0and Subscribe to\u00a0<strong><a href=\"https:\/\/www.aidevsignals.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">our 
Newsletter<\/a><\/strong>. Wait! are you on telegram?\u00a0<strong><a href=\"https:\/\/t.me\/machinelearningresearchnews\" target=\"_blank\" rel=\"noreferrer noopener\">now you can join us on telegram as well.<\/a><\/strong><\/p>\n<p>The post <a href=\"https:\/\/www.marktechpost.com\/2026\/02\/15\/a-coding-implementation-to-design-a-stateful-tutor-agent-with-long-term-memory-semantic-recall-and-adaptive-practice-generation\/\">A Coding Implementation to Design a Stateful Tutor Agent with Long-Term Memory, Semantic Recall, and Adaptive Practice Generation<\/a> appeared first on <a href=\"https:\/\/www.marktechpost.com\/\">MarkTechPost<\/a>.<\/p>","protected":false},"excerpt":{"rendered":"<p>In this tutorial, we build a fully stateful personal tutor agent that moves beyond short-lived chat interactions and learns continuously over time. We design the system to persist user preferences, track weak learning areas, and selectively recall only relevant past context when responding. By combining durable storage, semantic retrieval, and adaptive prompting, we demonstrate how an agent can behave more like a long-term tutor than a stateless chatbot. Also, we focus on keeping the agent self-managed, context-aware, and able to improve its guidance without requiring the user to repeat information. 
Check out the\u00a0Full Codes here.<\/p>","protected":false}}