{"id":41935,"date":"2025-10-03T06:48:56","date_gmt":"2025-10-03T06:48:56","guid":{"rendered":"https:\/\/youzum.net\/thinking-machines-launches-tinker-a-low-level-training-api-that-abstracts-distributed-llm-fine-tuning-without-hiding-the-knobs\/"},"modified":"2025-10-03T06:48:56","modified_gmt":"2025-10-03T06:48:56","slug":"thinking-machines-launches-tinker-a-low-level-training-api-that-abstracts-distributed-llm-fine-tuning-without-hiding-the-knobs","status":"publish","type":"post","link":"https:\/\/youzum.net\/zh\/thinking-machines-launches-tinker-a-low-level-training-api-that-abstracts-distributed-llm-fine-tuning-without-hiding-the-knobs\/","title":{"rendered":"Thinking Machines Launches Tinker: A Low-Level Training API that Abstracts Distributed LLM Fine-Tuning without Hiding the Knobs"},"content":{"rendered":"<p>Thinking Machines has released <strong>Tinker<\/strong>, a Python API that lets researchers and engineers write training loops locally while the platform executes them on managed distributed GPU clusters. The pitch is narrow and technical: keep full control of data, objectives, and optimization steps; hand off scheduling, fault tolerance, and multi-node orchestration. The service is in private beta with a waitlist and starts free, moving to usage-based pricing \u201cin the coming weeks.\u201d<\/p>\n<h3 class=\"wp-block-heading\"><strong>Alright, but tell me what it is?<\/strong><\/h3>\n<p>Tinker exposes <strong>low-level primitives<\/strong>\u2014not high-level \u201ctrain()\u201d wrappers. Core calls include <code>forward_backward<\/code>, <code>optim_step<\/code>, <code>save_state<\/code>, and <code>sample<\/code>, giving users direct control over gradient computation, optimizer stepping, checkpointing, and evaluation\/inference inside custom loops. 
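<\/p>\n<p>To make the loop shape concrete, here is a minimal, self-contained sketch. It is hypothetical: a stub class stands in for the real training client (Tinker is in private beta), and only the four primitive names above are taken from the platform; the constructor, arguments, and return values are assumptions for illustration.<\/p>\n
```python
# Hypothetical sketch of a Tinker-style custom training loop.
# StubTrainingClient is a stand-in written for illustration: only the four
# primitive names (forward_backward, optim_step, save_state, sample) come
# from the platform; everything else is assumed.

class StubTrainingClient:
    def __init__(self, base_model):
        self.base_model = base_model   # e.g. 'Llama-3.2-1B'
        self.step = 0
        self.checkpoints = []

    def forward_backward(self, batch):
        # Real API: computes the loss and accumulates gradients remotely.
        return {'loss': 1.0 / (self.step + 1)}  # fake, monotonically decreasing

    def optim_step(self):
        # Real API: applies one optimizer update to the LoRA weights.
        self.step += 1

    def save_state(self, name):
        # Real API: persists trainer state for resumption or export.
        self.checkpoints.append(name)

    def sample(self, prompt, max_tokens=32):
        # Real API: generates tokens from the current weights for evaluation.
        return 'sample from {} at step {}'.format(self.base_model, self.step)

def train(client, batches, checkpoint_every=2):
    losses = []
    for i, batch in enumerate(batches):
        out = client.forward_backward(batch)        # gradient computation
        client.optim_step()                         # optimizer step
        losses.append(out['loss'])
        if (i + 1) % checkpoint_every == 0:
            client.save_state('ckpt-%d' % (i + 1))  # checkpointing
    return losses

client = StubTrainingClient(base_model='Llama-3.2-1B')
losses = train(client, batches=[['a'], ['b'], ['c'], ['d']])
```
<p>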
A typical workflow: instantiate a <strong>LoRA<\/strong> training client against a base model (e.g., Llama-3.2-1B), iterate <code>forward_backward<\/code>\/<code>optim_step<\/code>, persist state, then obtain a sampling client to evaluate or export weights.<\/p>\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large is-resized\"><img fetchpriority=\"high\" decoding=\"async\" width=\"1024\" height=\"495\" data-attachment-id=\"75036\" data-permalink=\"https:\/\/www.marktechpost.com\/2025\/10\/02\/thinking-machines-launches-tinker-a-low-level-training-api-that-abstracts-distributed-llm-fine-tuning-without-hiding-the-knobs\/screenshot-2025-10-02-at-9-48-39-pm\/\" data-orig-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2025\/10\/Screenshot-2025-10-02-at-9.48.39-PM.png\" data-orig-size=\"1138,550\" data-comments-opened=\"1\" data-image-meta='{\"aperture\":\"0\",\"credit\":\"\",\"camera\":\"\",\"caption\":\"\",\"created_timestamp\":\"0\",\"copyright\":\"\",\"focal_length\":\"0\",\"iso\":\"0\",\"shutter_speed\":\"0\",\"title\":\"\",\"orientation\":\"0\"}' data-image-title=\"Screenshot 2025-10-02 at 9.48.39\u202fPM\" data-image-description=\"\" data-image-caption=\"\" data-medium-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2025\/10\/Screenshot-2025-10-02-at-9.48.39-PM-300x145.png\" data-large-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2025\/10\/Screenshot-2025-10-02-at-9.48.39-PM-1024x495.png\" src=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2025\/10\/Screenshot-2025-10-02-at-9.48.39-PM-1024x495.png\" alt=\"\" class=\"wp-image-75036\" \/><figcaption class=\"wp-element-caption\">https:\/\/thinkingmachines.ai\/tinker\/<\/figcaption><\/figure>\n<\/div>\n<h3 class=\"wp-block-heading\"><strong>Key Features<\/strong><\/h3>\n<ul class=\"wp-block-list\">\n<li><strong>Open-weights model coverage.<\/strong> Fine-tune families such as <strong>Llama<\/strong> and <strong>Qwen<\/strong>, including large 
mixture-of-experts variants (e.g., <strong>Qwen3-235B-A22B<\/strong>).<\/li>\n<li><strong>LoRA-based post-training.<\/strong> Tinker implements <strong>Low-Rank Adaptation (LoRA)<\/strong> rather than full fine-tuning; their technical note (\u201cLoRA Without Regret\u201d) argues LoRA can match full FT for many practical workloads\u2014especially RL\u2014under the right setup. <\/li>\n<li><strong>Portable artifacts.<\/strong> Download trained adapter weights for use outside Tinker (e.g., with your preferred inference stack\/provider).<\/li>\n<\/ul>\n<h3 class=\"wp-block-heading\"><strong>What runs on it?<\/strong><\/h3>\n<p>The Thinking Machines team positions Tinker as a <strong>managed post-training platform<\/strong> for open-weights models from small LLMs up to large mixture-of-experts systems; for example, <strong>Qwen3-235B-A22B<\/strong> is a supported model. Switching models is intentionally minimal\u2014change a string identifier and rerun. Under the hood, runs are scheduled on Thinking Machines\u2019 internal clusters; the LoRA approach lets runs share compute pools, improving utilization and lowering overhead.<\/p>\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large is-resized\"><img decoding=\"async\" width=\"1024\" height=\"456\" data-attachment-id=\"75039\" data-permalink=\"https:\/\/www.marktechpost.com\/2025\/10\/02\/thinking-machines-launches-tinker-a-low-level-training-api-that-abstracts-distributed-llm-fine-tuning-without-hiding-the-knobs\/screenshot-2025-10-02-at-9-49-03-pm-2\/\" data-orig-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2025\/10\/Screenshot-2025-10-02-at-9.49.03-PM-1.png\" data-orig-size=\"2238,996\" data-comments-opened=\"1\" data-image-meta='{\"aperture\":\"0\",\"credit\":\"\",\"camera\":\"\",\"caption\":\"\",\"created_timestamp\":\"0\",\"copyright\":\"\",\"focal_length\":\"0\",\"iso\":\"0\",\"shutter_speed\":\"0\",\"title\":\"\",\"orientation\":\"0\"}' data-image-title=\"Screenshot 2025-10-02 at 
9.49.03\u202fPM\" data-image-description=\"\" data-image-caption=\"\" data-medium-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2025\/10\/Screenshot-2025-10-02-at-9.49.03-PM-1-300x134.png\" data-large-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2025\/10\/Screenshot-2025-10-02-at-9.49.03-PM-1-1024x456.png\" src=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2025\/10\/Screenshot-2025-10-02-at-9.49.03-PM-1-1024x456.png\" alt=\"\" class=\"wp-image-75039\" \/><figcaption class=\"wp-element-caption\">https:\/\/thinkingmachines.ai\/tinker\/<\/figcaption><\/figure>\n<\/div>\n<h3 class=\"wp-block-heading\"><strong>Tinker Cookbook: Reference Training Loops and Post-Training Recipes<\/strong><\/h3>\n<p>To reduce boilerplate while keeping the core API lean, the team published the <strong><a href=\"https:\/\/github.com\/thinking-machines-lab\/tinker-cookbook\" target=\"_blank\" rel=\"noreferrer noopener\">Tinker Cookbook<\/a><\/strong> (Apache-2.0). It contains ready-to-use reference loops for <strong>supervised learning<\/strong> and <strong>reinforcement learning<\/strong>, plus worked examples for <strong>RLHF (three-stage SFT \u2192 reward modeling \u2192 policy RL)<\/strong>, <strong>math-reasoning rewards<\/strong>, <strong>tool-use \/ retrieval-augmented tasks<\/strong>, <strong>prompt distillation<\/strong>, and <strong>multi-agent<\/strong> setups. The repo also ships utilities for LoRA hyperparameter calculation and integrations for evaluation (e.g., InspectAI). <\/p>\n<h3 class=\"wp-block-heading\"><strong>Who\u2019s already using it?<\/strong><\/h3>\n<p>Early users include groups at <strong>Princeton<\/strong> (G\u00f6del prover team), <strong>Stanford<\/strong> (Rotskoff Chemistry), <strong>UC Berkeley<\/strong> (SkyRL, async off-policy multi-agent\/tool-use RL), and <strong>Redwood Research<\/strong> (RL on Qwen3-32B for control tasks). 
<\/p>\n<p>Tinker is currently in <strong>private beta<\/strong> with <strong>waitlist sign-up<\/strong>. The service is <strong>free to start<\/strong>, with <strong>usage-based pricing<\/strong> planned shortly; organizations are asked to contact the team directly for onboarding.<\/p>\n<h3 class=\"wp-block-heading\"><strong>My thoughts \/ comments<\/strong><\/h3>\n<p>I like that Tinker exposes low-level primitives (<code>forward_backward<\/code>, <code>optim_step<\/code>, <code>save_state<\/code>, <code>sample<\/code>) instead of a monolithic <code>train()<\/code>\u2014it keeps objective design, reward shaping, and evaluation under my control while offloading multi-node orchestration to their managed clusters. The LoRA-first posture is pragmatic for cost and turnaround, and their own analysis argues LoRA can match full fine-tuning when configured correctly, but I\u2019d still want transparent logs, deterministic seeds, and per-step telemetry to verify reproducibility and monitor drift. The Cookbook\u2019s RLHF and SL reference loops are useful starting points, yet I\u2019ll judge the platform on throughput stability, checkpoint portability, and guardrails for data governance (PII handling, audit trails) during real workloads.<\/p>\n<p>Overall I prefer Tinker\u2019s open, flexible API: it lets me customize open-weight LLMs via explicit training-loop primitives while the service handles distributed execution. 
Compared with closed systems, this preserves algorithmic control (losses, RLHF workflows, data handling) and lowers the barrier for new practitioners to experiment and iterate.<\/p>\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n<p>Check out the\u00a0<strong><a href=\"https:\/\/thinkingmachines.ai\/blog\/announcing-tinker\/\" target=\"_blank\" rel=\"noreferrer noopener\">Technical details<\/a> <\/strong>and<strong> sign up for the waitlist\u00a0<a href=\"https:\/\/form.typeform.com\/to\/jH2xNWIg\" target=\"_blank\" rel=\"noreferrer noopener\">here<\/a>. If you\u2019re a university or organization looking for wide-scale access, contact\u00a0<a href=\"mailto:tinker@thinkingmachines.ai\">tinker@thinkingmachines.ai<\/a><\/strong>.\u00a0<\/p>\n<p>Feel free to check out our\u00a0<strong><mark><a href=\"https:\/\/github.com\/Marktechpost\/AI-Tutorial-Codes-Included\" target=\"_blank\" rel=\"noreferrer noopener\">GitHub Page for Tutorials, Codes and Notebooks<\/a><\/mark><\/strong>.\u00a0Also,\u00a0feel free to follow us on\u00a0<strong><a href=\"https:\/\/x.com\/intent\/follow?screen_name=marktechpost\" target=\"_blank\" rel=\"noreferrer noopener\"><mark>Twitter<\/mark><\/a><\/strong>\u00a0and don\u2019t forget to join our\u00a0<strong><a href=\"https:\/\/www.reddit.com\/r\/machinelearningnews\/\" target=\"_blank\" rel=\"noreferrer noopener\">100k+ ML SubReddit<\/a><\/strong>\u00a0and subscribe to\u00a0<strong><a href=\"https:\/\/www.aidevsignals.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">our Newsletter<\/a><\/strong>. Wait, 
are you on telegram?\u00a0<strong><a href=\"https:\/\/t.me\/machinelearningresearchnews\" target=\"_blank\" rel=\"noreferrer noopener\">now you can join us on telegram as well.<\/a><\/strong><\/p>\n<p>The post <a href=\"https:\/\/www.marktechpost.com\/2025\/10\/02\/thinking-machines-launches-tinker-a-low-level-training-api-that-abstracts-distributed-llm-fine-tuning-without-hiding-the-knobs\/\">Thinking Machines Launches Tinker: A Low-Level Training API that Abstracts Distributed LLM Fine-Tuning without Hiding the Knobs<\/a> appeared first on <a href=\"https:\/\/www.marktechpost.com\/\">MarkTechPost<\/a>.<\/p>","protected":false},"excerpt":{"rendered":"<p>Thinking Machines has released Tinker, a Python API that lets researchers and engineers write training loops locally while the platform executes them on managed distributed GPU clusters. The pitch is narrow and technical: keep full control of data, objectives, and optimization steps; hand off scheduling, fault tolerance, and multi-node orchestration. The service is in private beta with a waitlist and starts free, moving to usage-based pricing \u201cin the coming weeks.\u201d Alright, but tell me what it is? Tinker exposes low-level primitives\u2014not high-level \u201ctrain()\u201d wrappers. Core calls include forward_backward, optim_step, save_state, and sample, giving users direct control over gradient computation, optimizer stepping, checkpointing, and evaluation\/inference inside custom loops. A typical workflow: instantiate a LoRA training client against a base model (e.g., Llama-3.2-1B), iterate forward_backward\/optim_step, persist state, then obtain a sampling client to evaluate or export weights. https:\/\/thinkingmachines.ai\/tinker\/ Key Features Open-weights model coverage. Fine-tune families such as Llama and Qwen, including large mixture-of-experts variants (e.g., Qwen3-235B-A22B). LoRA-based post-training. 
[\u2026] 
The post Thinking Machines Launches Tinker: A Low-Level Training API that Abstracts Distributed LLM Fine-Tuning without Hiding the Knobs appeared first on MarkTechPost.<\/p>","protected":false},"author":2,"featured_media":41936,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"pmpro_default_level":"","site-sidebar-layout":"default","site-content-layout":"","ast-site-content-layout":"","site-content-style":"default","site-sidebar-style":"default","ast-global-header-display":"","ast-banner-title-visibility":"","ast-main-header-display":"","ast-hfb-above-header-display":"","ast-hfb-below-header-display":"","ast-hfb-mobile-header-display":"","site-post-title":"","ast-breadcrumbs-content":"","ast-featured-img":"","footer-sml-layout":"","theme-transparent-header-meta":"","adv-header-id-meta":"","stick-header-meta":"","header-above-stick-meta":"","header-main-stick-meta":"","header-below-stick-meta":"","astra-migrate-meta-layouts":"default","ast-page-background-enabled":"default","ast-page-background-meta":{"desktop":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"ast-content-background-meta":{"desktop":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"_pvb_checkbox_block_on_post":false,"footnotes":""},"categories":[52,5,7,1],"tags":[],"class_list":["post-41935","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-club","category-committee","category-news","category-uncategorized","pmpro-has-access"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v25.3 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Thinking Machines Launches Tinker: A Low-Level Training API that Abstracts Distributed LLM Fine-Tuning without Hiding the Knobs - YouZum<\/title>\n<meta name=\"description\" content=\"\u0e01\u0e34\u0e08\u0e01\u0e23\u0e23\u0e21\u0e40\u0e01\u0e35\u0e48\u0e22\u0e27\u0e01\u0e31\u0e1a\u0e42\u0e14\u0e23\u0e19\" \/>\n<meta name=\"robots\" 
content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/youzum.net\/zh\/thinking-machines-launches-tinker-a-low-level-training-api-that-abstracts-distributed-llm-fine-tuning-without-hiding-the-knobs\/\" \/>\n<meta property=\"og:locale\" content=\"zh_CN\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Thinking Machines Launches Tinker: A Low-Level Training API that Abstracts Distributed LLM Fine-Tuning without Hiding the Knobs - YouZum\" \/>\n<meta property=\"og:description\" content=\"\u0e01\u0e34\u0e08\u0e01\u0e23\u0e23\u0e21\u0e40\u0e01\u0e35\u0e48\u0e22\u0e27\u0e01\u0e31\u0e1a\u0e42\u0e14\u0e23\u0e19\" \/>\n<meta property=\"og:url\" content=\"https:\/\/youzum.net\/zh\/thinking-machines-launches-tinker-a-low-level-training-api-that-abstracts-distributed-llm-fine-tuning-without-hiding-the-knobs\/\" \/>\n<meta property=\"og:site_name\" content=\"YouZum\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/DroneAssociationTH\/\" \/>\n<meta property=\"article:published_time\" content=\"2025-10-03T06:48:56+00:00\" \/>\n<meta name=\"author\" content=\"admin NU\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"\u4f5c\u8005\" \/>\n\t<meta name=\"twitter:data1\" content=\"admin NU\" \/>\n\t<meta name=\"twitter:label2\" content=\"\u9884\u8ba1\u9605\u8bfb\u65f6\u95f4\" \/>\n\t<meta name=\"twitter:data2\" content=\"4 \u5206\" \/>\n<script type=\"application\/ld+json\" 
class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/youzum.net\/thinking-machines-launches-tinker-a-low-level-training-api-that-abstracts-distributed-llm-fine-tuning-without-hiding-the-knobs\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/youzum.net\/thinking-machines-launches-tinker-a-low-level-training-api-that-abstracts-distributed-llm-fine-tuning-without-hiding-the-knobs\/\"},\"author\":{\"name\":\"admin NU\",\"@id\":\"https:\/\/yousum.gpucore.co\/#\/schema\/person\/97fa48242daf3908e4d9a5f26f4a059c\"},\"headline\":\"Thinking Machines Launches Tinker: A Low-Level Training API that Abstracts Distributed LLM Fine-Tuning without Hiding the Knobs\",\"datePublished\":\"2025-10-03T06:48:56+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/youzum.net\/thinking-machines-launches-tinker-a-low-level-training-api-that-abstracts-distributed-llm-fine-tuning-without-hiding-the-knobs\/\"},\"wordCount\":700,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/yousum.gpucore.co\/#organization\"},\"image\":{\"@id\":\"https:\/\/youzum.net\/thinking-machines-launches-tinker-a-low-level-training-api-that-abstracts-distributed-llm-fine-tuning-without-hiding-the-knobs\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/youzum.net\/wp-content\/uploads\/2025\/10\/Screenshot-2025-10-02-at-9.48.39-PM-1024x495-mMp3kr.png\",\"articleSection\":[\"AI\",\"Committee\",\"News\",\"Uncategorized\"],\"inLanguage\":\"zh-Hans\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/youzum.net\/thinking-machines-launches-tinker-a-low-level-training-api-that-abstracts-distributed-llm-fine-tuning-without-hiding-the-knobs\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/youzum.net\/thinking-machines-launches-tinker-a-low-level-training-api-that-abstracts-distributed-llm-fine-tuning-without-hiding-the-knobs\/\",\"url\":\"https:\/\/youzum.net\/thinking-machines-launches-tinker-a-low-level-train
ing-api-that-abstracts-distributed-llm-fine-tuning-without-hiding-the-knobs\/\",\"name\":\"Thinking Machines Launches Tinker: A Low-Level Training API that Abstracts Distributed LLM Fine-Tuning without Hiding the Knobs - YouZum\",\"isPartOf\":{\"@id\":\"https:\/\/yousum.gpucore.co\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/youzum.net\/thinking-machines-launches-tinker-a-low-level-training-api-that-abstracts-distributed-llm-fine-tuning-without-hiding-the-knobs\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/youzum.net\/thinking-machines-launches-tinker-a-low-level-training-api-that-abstracts-distributed-llm-fine-tuning-without-hiding-the-knobs\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/youzum.net\/wp-content\/uploads\/2025\/10\/Screenshot-2025-10-02-at-9.48.39-PM-1024x495-mMp3kr.png\",\"datePublished\":\"2025-10-03T06:48:56+00:00\",\"description\":\"\u0e01\u0e34\u0e08\u0e01\u0e23\u0e23\u0e21\u0e40\u0e01\u0e35\u0e48\u0e22\u0e27\u0e01\u0e31\u0e1a\u0e42\u0e14\u0e23\u0e19\",\"breadcrumb\":{\"@id\":\"https:\/\/youzum.net\/thinking-machines-launches-tinker-a-low-level-training-api-that-abstracts-distributed-llm-fine-tuning-without-hiding-the-knobs\/#breadcrumb\"},\"inLanguage\":\"zh-Hans\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/youzum.net\/thinking-machines-launches-tinker-a-low-level-training-api-that-abstracts-distributed-llm-fine-tuning-without-hiding-the-knobs\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"zh-Hans\",\"@id\":\"https:\/\/youzum.net\/thinking-machines-launches-tinker-a-low-level-training-api-that-abstracts-distributed-llm-fine-tuning-without-hiding-the-knobs\/#primaryimage\",\"url\":\"https:\/\/youzum.net\/wp-content\/uploads\/2025\/10\/Screenshot-2025-10-02-at-9.48.39-PM-1024x495-mMp3kr.png\",\"contentUrl\":\"https:\/\/youzum.net\/wp-content\/uploads\/2025\/10\/Screenshot-2025-10-02-at-9.48.39-PM-1024x495-mMp3kr.png\",\"width\":1024,\"height\":495},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/y
ouzum.net\/thinking-machines-launches-tinker-a-low-level-training-api-that-abstracts-distributed-llm-fine-tuning-without-hiding-the-knobs\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/youzum.net\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Thinking Machines Launches Tinker: A Low-Level Training API that Abstracts Distributed LLM Fine-Tuning without Hiding the Knobs\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/yousum.gpucore.co\/#website\",\"url\":\"https:\/\/yousum.gpucore.co\/\",\"name\":\"YouSum\",\"description\":\"\",\"publisher\":{\"@id\":\"https:\/\/yousum.gpucore.co\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/yousum.gpucore.co\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"zh-Hans\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/yousum.gpucore.co\/#organization\",\"name\":\"Drone Association Thailand\",\"url\":\"https:\/\/yousum.gpucore.co\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"zh-Hans\",\"@id\":\"https:\/\/yousum.gpucore.co\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/youzum.net\/wp-content\/uploads\/2024\/11\/tranparent-logo.png\",\"contentUrl\":\"https:\/\/youzum.net\/wp-content\/uploads\/2024\/11\/tranparent-logo.png\",\"width\":300,\"height\":300,\"caption\":\"Drone Association Thailand\"},\"image\":{\"@id\":\"https:\/\/yousum.gpucore.co\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.facebook.com\/DroneAssociationTH\/\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/yousum.gpucore.co\/#\/schema\/person\/97fa48242daf3908e4d9a5f26f4a059c\",\"name\":\"admin 
NU\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"zh-Hans\",\"@id\":\"https:\/\/yousum.gpucore.co\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/youzum.net\/wp-content\/uploads\/avatars\/2\/1746849356-bpfull.png\",\"contentUrl\":\"https:\/\/youzum.net\/wp-content\/uploads\/avatars\/2\/1746849356-bpfull.png\",\"caption\":\"admin NU\"},\"url\":\"https:\/\/youzum.net\/zh\/members\/adminnu\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Thinking Machines Launches Tinker: A Low-Level Training API that Abstracts Distributed LLM Fine-Tuning without Hiding the Knobs - YouZum","description":"\u0e01\u0e34\u0e08\u0e01\u0e23\u0e23\u0e21\u0e40\u0e01\u0e35\u0e48\u0e22\u0e27\u0e01\u0e31\u0e1a\u0e42\u0e14\u0e23\u0e19","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/youzum.net\/zh\/thinking-machines-launches-tinker-a-low-level-training-api-that-abstracts-distributed-llm-fine-tuning-without-hiding-the-knobs\/","og_locale":"zh_CN","og_type":"article","og_title":"Thinking Machines Launches Tinker: A Low-Level Training API that Abstracts Distributed LLM Fine-Tuning without Hiding the Knobs - YouZum","og_description":"\u0e01\u0e34\u0e08\u0e01\u0e23\u0e23\u0e21\u0e40\u0e01\u0e35\u0e48\u0e22\u0e27\u0e01\u0e31\u0e1a\u0e42\u0e14\u0e23\u0e19","og_url":"https:\/\/youzum.net\/zh\/thinking-machines-launches-tinker-a-low-level-training-api-that-abstracts-distributed-llm-fine-tuning-without-hiding-the-knobs\/","og_site_name":"YouZum","article_publisher":"https:\/\/www.facebook.com\/DroneAssociationTH\/","article_published_time":"2025-10-03T06:48:56+00:00","author":"admin NU","twitter_card":"summary_large_image","twitter_misc":{"\u4f5c\u8005":"admin NU","\u9884\u8ba1\u9605\u8bfb\u65f6\u95f4":"4 
\u5206"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/youzum.net\/thinking-machines-launches-tinker-a-low-level-training-api-that-abstracts-distributed-llm-fine-tuning-without-hiding-the-knobs\/#article","isPartOf":{"@id":"https:\/\/youzum.net\/thinking-machines-launches-tinker-a-low-level-training-api-that-abstracts-distributed-llm-fine-tuning-without-hiding-the-knobs\/"},"author":{"name":"admin NU","@id":"https:\/\/yousum.gpucore.co\/#\/schema\/person\/97fa48242daf3908e4d9a5f26f4a059c"},"headline":"Thinking Machines Launches Tinker: A Low-Level Training API that Abstracts Distributed LLM Fine-Tuning without Hiding the Knobs","datePublished":"2025-10-03T06:48:56+00:00","mainEntityOfPage":{"@id":"https:\/\/youzum.net\/thinking-machines-launches-tinker-a-low-level-training-api-that-abstracts-distributed-llm-fine-tuning-without-hiding-the-knobs\/"},"wordCount":700,"commentCount":0,"publisher":{"@id":"https:\/\/yousum.gpucore.co\/#organization"},"image":{"@id":"https:\/\/youzum.net\/thinking-machines-launches-tinker-a-low-level-training-api-that-abstracts-distributed-llm-fine-tuning-without-hiding-the-knobs\/#primaryimage"},"thumbnailUrl":"https:\/\/youzum.net\/wp-content\/uploads\/2025\/10\/Screenshot-2025-10-02-at-9.48.39-PM-1024x495-mMp3kr.png","articleSection":["AI","Committee","News","Uncategorized"],"inLanguage":"zh-Hans","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/youzum.net\/thinking-machines-launches-tinker-a-low-level-training-api-that-abstracts-distributed-llm-fine-tuning-without-hiding-the-knobs\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/youzum.net\/thinking-machines-launches-tinker-a-low-level-training-api-that-abstracts-distributed-llm-fine-tuning-without-hiding-the-knobs\/","url":"https:\/\/youzum.net\/thinking-machines-launches-tinker-a-low-level-training-api-that-abstracts-distributed-llm-fine-tuning-without-hiding-the-knobs\/","name":"Thinking Machines Launches 