{"id":11838,"date":"2025-05-11T02:44:09","date_gmt":"2025-05-11T02:44:09","guid":{"rendered":"https:\/\/youzum.net\/enterprise-ai-without-gpu-burn-salesforces-xgen-small-optimizes-for-context-cost-and-privacy\/"},"modified":"2025-05-11T02:44:09","modified_gmt":"2025-05-11T02:44:09","slug":"enterprise-ai-without-gpu-burn-salesforces-xgen-small-optimizes-for-context-cost-and-privacy","status":"publish","type":"post","link":"https:\/\/youzum.net\/ja\/enterprise-ai-without-gpu-burn-salesforces-xgen-small-optimizes-for-context-cost-and-privacy\/","title":{"rendered":"Enterprise AI Without GPU Burn: Salesforce\u2019s xGen-small Optimizes for Context, Cost, and Privacy"},"content":{"rendered":"<p>Language processing in enterprise environments faces critical challenges as business workflows increasingly depend on synthesising information from diverse sources, including internal documentation, code repositories, research reports, and real-time data streams. While recent advances in large language models have delivered impressive capabilities, this progress comes with significant downsides: skyrocketing per-request costs, constant hardware upgrade requirements, and increased data privacy risks.\u00a0<\/p>\n<p>Pursuing ever-larger model architectures has demonstrated diminishing returns, with the accelerating energy demands potentially constraining future AI development. 
Modern enterprises now require balanced solutions that deliver comprehensive long-context comprehension while maintaining efficient processing, predictable low-cost serving capabilities, and robust privacy guarantees\u2014a combination that <a href=\"https:\/\/www.marktechpost.com\/2025\/01\/12\/what-are-small-language-models-slms\/\" target=\"_blank\">small language models<\/a> are uniquely positioned to provide despite the complex, high-volume inference demands characteristic of today\u2019s business applications.<\/p>\n<p>Traditional approaches to extending language model capabilities beyond their inherent context limitations have relied on several workaround methods. Retrieval-augmented generation (<a href=\"https:\/\/www.marktechpost.com\/2024\/11\/25\/retrieval-augmented-generation-rag-deep-dive-into-25-different-types-of-rag\/\" target=\"_blank\">RAG<\/a>) systems pull relevant information from external knowledge bases to supplement model inputs. External tool calls enable models to access specialised functions outside their parameters. Memory mechanisms artificially persist information across conversation turns. While functional, these techniques represent brittle \u201cstitching\u201d solutions that add complexity and potential failure points to processing pipelines.\u00a0<\/p>\n<p>Context window extensions in larger models attempted to address these limitations but introduced significant computational overhead. Each method fundamentally acknowledges the same critical need: genuine long-context processing capabilities that allow models to handle entire documents, sustained conversations, code repositories, and research reports in a single forward pass rather than through fragmented processing. 
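As a concrete illustration of the retrieval step that RAG systems bolt on, here is a minimal sketch. The overlap-based scoring and the function names are illustrative stand-ins (production systems use dense embeddings and a vector index), not the design of any particular product:

```python
def retrieve(query, documents, k=2):
    """Rank documents by word overlap with the query; return the top k.
    Real RAG systems score with dense embeddings, but the plumbing is the same."""
    query_terms = set(query.lower().split())
    ranked = sorted(
        documents,
        key=lambda doc: len(query_terms & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query, documents):
    """Stitch retrieved passages into the model input -- the 'supplement model
    inputs' step, and one of the seams that native long context removes."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

Every such seam (retrieval, tool calls, memory) is a place where the pipeline can silently drop or mangle context, which is the fragility the next paragraph describes.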
These stopgap approaches highlight why native extended context is essential\u2014it eliminates architectural complexity while maintaining information coherence throughout processing.<\/p>\n<p>Salesforce AI Research has developed <strong>xGen-small<\/strong>, an enterprise-ready compact language model for efficient long-context processing. This solution combines domain-focused data curation, scalable pre-training, length-extension techniques, instruction fine-tuning, and reinforcement learning to deliver high-performance enterprise AI capabilities with predictable low costs, addressing the critical balance businesses require between capability and operational efficiency.<\/p>\n<p>xGen-small\u2019s architecture employs a \u201csmall but long\u201d strategy that fundamentally inverts the traditional scale-up paradigm. Rather than increasing parameter counts, this approach deliberately shrinks model size while precisely refining data distributions toward enterprise-relevant domains and training protocols. This architectural philosophy demands comprehensive expertise across multiple development stages and components working in concert through a vertically integrated pipeline.\u00a0<\/p>\n<p>The framework begins with meticulous raw data curation followed by scalable pre-training optimised for efficient processing. Sophisticated length-extension mechanisms enable the compact model to handle extensive contexts while targeted post-training and reinforcement learning techniques enhance performance in enterprise-specific tasks. This architecture delivers strategic advantages for business applications by providing cost efficiency, robust privacy safeguards, and long-context understanding without the resource requirements of larger models, creating a sustainable pathway for deploying Enterprise AI at scale with predictable operational characteristics.<\/p>\n<p>xGen-small\u2019s development pipeline integrates multiple stages into a streamlined workflow. 
Starting with a multi-trillion-token corpus, the process applies rigorous filtering and quality controls before large-scale TPU pre-training with optimised learning schedules. Targeted length-extension techniques expand context capacity, while task-specific post-training and reward-based reinforcement learning refine model capabilities.<\/p>\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter is-resized\"><img decoding=\"async\" src=\"https:\/\/lh7-rt.googleusercontent.com\/docsz\/AD_4nXeNvCapTIs56e7Of9_eU_0YS7J0YxklbtrCe-pzBym5qFRnxlo8yqiLceHf-9G9nkWnLDKbXp7NJL05epIqeb7ENZ_D1ooRH5TNBDFNvhX2JnsJhjHB9UYCslKGZegYa1Y1kKKC?key=Juu7aFD_jX89BP6iCbz8AQ\" alt=\"\" \/><\/figure>\n<\/div>\n<p>Data curation for xGen-small began with harvesting a corpus substantially larger than the final eight trillion training tokens. The pipeline applied fast heuristic filters to remove spam, followed by a two-stage quality assessment using classifier ensembles. <strong><em>Exact hashing and fuzzy fingerprinting eliminated near-duplicates,<\/em><\/strong> while careful balancing of general data with specialised content for code, mathematics, and natural language optimised performance. Extensive ablation studies refined this curation approach to maximise factual accuracy and overall usefulness.<\/p>\n<p>Pre-training of xGen-small utilises TPU v5p pods with Jaxformer v8 library, implementing FSDP, sequence-parallel attention, and splash kernels for maximum efficiency. The multi-phase learning rate schedule optimises training dynamics. At the same time, a carefully balanced data mixture combines code corpora, natural language examples, mathematical texts, and high-quality filtered content to capture both diversity and domain expertise.<\/p>\n<p>xGen-small demonstrates competitive performance against leading baselines in its size class. 
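The deduplication step in the curation pipeline pairs exact hashing with fuzzy fingerprinting; a common way to realise the fuzzy half is MinHash over character shingles. The shingle size, signature width, and any threshold applied on top are illustrative placeholders here, not the values used for xGen-small:

```python
import hashlib

def normalize(text):
    """Collapse whitespace and case before hashing."""
    return " ".join(text.split()).lower()

def exact_key(text):
    """Exact-duplicate check: identical normalised texts share one hash."""
    return hashlib.sha1(normalize(text).encode()).hexdigest()

def shingles(text, k=5):
    """Character k-grams that form the basis of the fuzzy fingerprint."""
    t = normalize(text)
    return {t[i:i + k] for i in range(max(1, len(t) - k + 1))}

def minhash_signature(text, num_hashes=64):
    """Per slot, keep the smallest salted hash over the shingles; the fraction
    of matching slots between two signatures estimates Jaccard similarity."""
    return [
        min(
            int.from_bytes(hashlib.sha1(f"{seed}:{s}".encode()).digest()[:8], "big")
            for s in shingles(text)
        )
        for seed in range(num_hashes)
    ]

def estimated_jaccard(sig_a, sig_b):
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)
```

Documents whose signatures agree above a chosen threshold are treated as near-duplicates and dropped, which is what keeps a multi-trillion-token corpus from rewarding the model for memorising repeats.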
The strategic blending of diverse data types\u2014including low-entropy code, high-entropy natural language, mathematical content, and classifier-filtered high-quality subsets\u2014delivers exceptional results across evaluation metrics while maintaining the model\u2019s compact, efficient architecture. This approach successfully balances processing efficiency with robust performance capabilities required for enterprise applications.<\/p>\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter is-resized\"><img decoding=\"async\" src=\"https:\/\/lh7-rt.googleusercontent.com\/docsz\/AD_4nXdFwjuZgIY_YOnRbPzJBvXIsx-hhEV5Z8AcP6-Sj5fhkQG4j8YK4pZ5KiK9ZgpWkpi1KxZWvVIHZ9Wsk1roWbgne35Tjalh1xQz2818JL2wZzQzGnm6uHvnKHhZp-OOY1hW8v4GGg?key=Juu7aFD_jX89BP6iCbz8AQ\" alt=\"\" \/><\/figure>\n<\/div>\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter is-resized\"><img decoding=\"async\" src=\"https:\/\/lh7-rt.googleusercontent.com\/docsz\/AD_4nXdgF86ueKyNY6nAPtXjRmdsqU1PGszu7MVEKr3E7C7nAbFwJ0aqMuRdJu3VwgCuJDWg-irhovlKFA8PsGPHcjCRfp__oKtfW3SSCwDoN6ckb2FtU1-WDqkp40zscY5aNnctIAsI?key=Juu7aFD_jX89BP6iCbz8AQ\" alt=\"\" \/><\/figure>\n<\/div>\n<p>Performance evaluations demonstrate xGen-small\u2019s exceptional long-context capabilities, with the 9B model achieving state-of-the-art results on the RULER benchmark and the 4B model securing second place in its class. Unlike competitors whose performance degrades significantly at extended context lengths, xGen maintains consistent performance from 4K to 128K tokens. 
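The "multi-phase learning rate schedule" cited for pre-training is not detailed in the post; a common three-phase shape is linear warmup, a long cosine decay, and a short final anneal. All hyperparameters below are illustrative placeholders, not Salesforce's values:

```python
import math

def multi_phase_lr(step, total_steps, peak_lr=3e-4, mid_lr=6e-5, final_lr=3e-6,
                   warmup_frac=0.01, anneal_frac=0.1):
    """Three-phase schedule: linear warmup to peak_lr, cosine decay to mid_lr
    over the bulk of training, then a short linear anneal to final_lr."""
    warmup_end = int(total_steps * warmup_frac)
    anneal_start = int(total_steps * (1 - anneal_frac))
    if step < warmup_end:                                    # phase 1: warmup
        return peak_lr * (step + 1) / warmup_end
    if step < anneal_start:                                  # phase 2: cosine decay
        progress = (step - warmup_end) / max(1, anneal_start - warmup_end)
        return mid_lr + 0.5 * (peak_lr - mid_lr) * (1 + math.cos(math.pi * progress))
    progress = (step - anneal_start) / max(1, total_steps - anneal_start)
    return mid_lr + (final_lr - mid_lr) * progress           # phase 3: final anneal
```

The final low-rate anneal is a common trick for squeezing out a last round of loss improvement before the checkpoint is frozen for length extension and post-training.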
This stability comes from a sophisticated length-extension strategy using two-stage extension (32K then 128K), over-length training to 256K, and sequence parallelism to manage memory constraints efficiently, delivering reliable performance across the entire context spectrum.<\/p>\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter is-resized\"><img decoding=\"async\" src=\"https:\/\/lh7-rt.googleusercontent.com\/docsz\/AD_4nXeLq54o4avUgf8b8c8qzYyqcgdVOKNYPesVCiObEB_ZqPXe1lslcrqKp3bNQTWbCNiTWMVUiBuG3rydCmfeiJGh3HEREyWccTAR7MkErJcirVngj0o1wbeGnSlrUUflF8zRnHRZuw?key=Juu7aFD_jX89BP6iCbz8AQ\" alt=\"\" \/><\/figure>\n<\/div>\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter is-resized\"><img decoding=\"async\" src=\"https:\/\/lh7-rt.googleusercontent.com\/docsz\/AD_4nXfS16Q7F2DfGPfIayOlhUJm5k_1pF7eO4jv0g1xpGgV2Rbp9-UczKuaLQ9jxpCwSZN0TEcdO9u-xIZ_T2cDeyLh3DQ_kRirUqVVw5xYl7aNZwiQNNrqafQIQTExgcK302WUpTHo?key=Juu7aFD_jX89BP6iCbz8AQ\" alt=\"\" \/><\/figure>\n<\/div>\n<p>Post-training transforms xGen-small base models into comprehensive instruction models through a two-stage process. First, supervised fine-tuning uses a diverse, high-quality instruction dataset spanning mathematics, coding, safety, and general-purpose domains to establish core behaviours and alignment. Subsequently, large-scale reinforcement learning refines the model\u2019s policy, particularly enhancing reasoning capabilities. 
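The two-stage length extension described above (32K, then 128K) is given only at a high level; one standard mechanism for staged extension is rescaling the RoPE base frequency ("NTK-aware" scaling). Whether xGen-small uses exactly this is not stated, so the following is a sketch under assumed values (head dimension 128, base 10,000):

```python
def rope_inverse_frequencies(dim, base):
    """Per-pair inverse frequencies used by rotary position embeddings."""
    return [base ** (-2.0 * i / dim) for i in range(dim // 2)]

def extend_base(base, old_ctx, new_ctx, dim):
    """'NTK-aware' style rescaling: raise the RoPE base so the lowest
    frequencies rotate at the new context length roughly as they did at
    the old one, preserving short-range behaviour."""
    scale = new_ctx / old_ctx
    return base * scale ** (dim / (dim - 2))

head_dim, base = 128, 10_000.0                           # assumed, not published
stage1 = extend_base(base, 4_096, 32_768, head_dim)      # stage one: extend to 32K
stage2 = extend_base(stage1, 32_768, 131_072, head_dim)  # stage two: extend to 128K
```

Each stage would be followed by continued training on long sequences at the new length (the post mentions over-length training to 256K), with sequence parallelism keeping per-device activation memory manageable.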
This approach delivers exceptional performance in complex reasoning domains like mathematics, coding, and STEM applications while maintaining consistent instruction-following abilities across general tasks.<\/p>\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter is-resized\"><img decoding=\"async\" src=\"https:\/\/lh7-rt.googleusercontent.com\/docsz\/AD_4nXdpXtuVRb-BQ-Frr9MGdaYiOZ1vQdN9MY3HuW9Zof6f3aS8a0s6rfjrljwTUci24aM_CJfAD7PrF4ruBVxevpMpZQhF-A_D7YuzU0rPFP2VwOGGdIg3kEq4MYG8OpjWeuuK2o6CKQ?key=Juu7aFD_jX89BP6iCbz8AQ\" alt=\"\" \/><\/figure>\n<\/div>\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter is-resized\"><img decoding=\"async\" src=\"https:\/\/lh7-rt.googleusercontent.com\/docsz\/AD_4nXfmXfsb3i-Z0cMCrU1IXZK7bVfVJgzsDd_OMnPfF8Jpnaug7jAibLgqeqMq1SvgahEP4TVHhFs4vRtoVhaYyw6jIknYsGVGpmzpIUeteXISHEGE5qiN2G-6DsUC5UTCkDiNhM7RdA?key=Juu7aFD_jX89BP6iCbz8AQ\" alt=\"\" \/><\/figure>\n<\/div>\n<p>The development of xGen-small demonstrates that deliberately constraining model size while extending context capacity creates optimal solutions for enterprise AI applications. This \u201csmall but long\u201d approach significantly reduces inference costs and hardware requirements while enabling seamless processing of extensive internal knowledge sources without external retrieval dependencies. Through an integrated pipeline of meticulous data curation, scalable pre-training, targeted length-extension, and reinforcement learning, these compact models match or exceed larger counterparts\u2019 performance. 
This architecture provides businesses with a predictable, sustainable, cost-effective, and privacy-preserving framework for deploying AI at enterprise scale.<\/p>\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n<p>Check out the<strong> <a href=\"https:\/\/huggingface.co\/Salesforce\/xgen-small-r\" target=\"_blank\" rel=\"noreferrer noopener\">Model on Hugging Face <\/a>and <a href=\"https:\/\/www.salesforce.com\/blog\/xgen-small-enterprise-ready-small-language-models\/\" target=\"_blank\" rel=\"noreferrer noopener\">Technical details<\/a>.<\/strong> Also,\u00a0don\u2019t forget to follow us on\u00a0<strong><a href=\"https:\/\/x.com\/intent\/follow?screen_name=marktechpost\" target=\"_blank\" rel=\"noreferrer noopener\"><mark>Twitter<\/mark><\/a><\/strong>.<\/p>\n<p><strong>Here\u2019s a brief overview of what we\u2019re building at Marktechpost:<\/strong><\/p>\n<ul class=\"wp-block-list\">\n<li><strong>ML News Community \u2013<a href=\"https:\/\/www.reddit.com\/r\/machinelearningnews\/\" target=\"_blank\" rel=\"noreferrer noopener\">\u00a0r\/machinelearningnews<\/a>\u00a0(92k+ members)<\/strong><\/li>\n<li><strong>Newsletter\u2013\u00a0<a href=\"https:\/\/minicon.marktechpost.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">airesearchinsights.com\/<\/a>(30k+ subscribers)<\/strong><\/li>\n<li><strong>miniCON AI Events \u2013\u00a0<a href=\"https:\/\/minicon.marktechpost.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">minicon.marktechpost.com<\/a><\/strong><\/li>\n<li><strong>AI Reports &amp; Magazines \u2013\u00a0<a href=\"https:\/\/magazine.marktechpost.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">magazine.marktechpost.com<\/a><\/strong><\/li>\n<li><strong>AI Dev &amp; Research News \u2013\u00a0<a href=\"https:\/\/marktechpost.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">marktechpost.com<\/a>\u00a0(1M+ monthly readers)<\/strong><\/li>\n<li><strong><a href=\"https:\/\/forms.gle\/cnXafrh6Be8UigQ68\" target=\"_blank\" 
rel=\"noreferrer noopener\">Partner with us<\/a><\/strong><\/li>\n<\/ul>\n<p>The post <a href=\"https:\/\/www.marktechpost.com\/2025\/05\/09\/enterprise-ai-without-gpu-burn-salesforces-xgen-small-optimizes-for-context-cost-and-privacy\/\">Enterprise AI Without GPU Burn: Salesforce\u2019s xGen-small Optimizes for Context, Cost, and Privacy<\/a> appeared first on <a href=\"https:\/\/www.marktechpost.com\/\">MarkTechPost<\/a>.<\/p>","protected":false},"excerpt":{"rendered":"<p>Language processing in enterprise environments faces critical challenges as business workflows increasingly depend on synthesising information from diverse sources, including internal documentation, code repositories, research reports, and real-time data streams. While recent advances in large language models have delivered impressive capabilities, this progress comes with significant downsides: skyrocketing per-request costs, constant hardware upgrade requirements, and increased data privacy risks.\u00a0 Pursuing ever-larger model architectures has demonstrated diminishing returns, with the accelerating energy demands potentially constraining future AI development. Modern enterprises now require balanced solutions that deliver comprehensive long-context comprehension while maintaining efficient processing, predictable low-cost serving capabilities, and robust privacy guarantees\u2014a combination that small language models are uniquely positioned to provide despite the complex, high-volume inference demands characteristic of today\u2019s business applications. Traditional approaches to extending language model capabilities beyond their inherent context limitations have relied on several workaround methods. Retrieval-augmented generation (RAG) systems pull relevant information from external knowledge bases to supplement model inputs. External tool calls enable models to access specialised functions outside their parameters. 
[\u2026] The post Enterprise AI Without GPU Burn: Salesforce\u2019s xGen-small Optimizes for Context, Cost, and Privacy appeared first on 
MarkTechPost.<\/p>","protected":false},"author":2,"featured_media":11839,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"pmpro_default_level":"","site-sidebar-layout":"default","site-content-layout":"","ast-site-content-layout":"","site-content-style":"default","site-sidebar-style":"default","ast-global-header-display":"","ast-banner-title-visibility":"","ast-main-header-display":"","ast-hfb-above-header-display":"","ast-hfb-below-header-display":"","ast-hfb-mobile-header-display":"","site-post-title":"","ast-breadcrumbs-content":"","ast-featured-img":"","footer-sml-layout":"","theme-transparent-header-meta":"","adv-header-id-meta":"","stick-header-meta":"","header-above-stick-meta":"","header-main-stick-meta":"","header-below-stick-meta":"","astra-migrate-meta-layouts":"default","ast-page-background-enabled":"default","ast-page-background-meta":{"desktop":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"ast-content-background-meta":{"desktop":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"_pvb_checkbox_block_on_post":false,"footnotes":""},"categories":[52,5,7,1],"tags":[],"class_list":["post-11838","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-club","category-committee","category-news","category-uncategorized","pmpro-has-access"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v25.3 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Enterprise AI Without GPU Burn: Salesforce\u2019s xGen-small Optimizes for Context, Cost, and Privacy - YouZum<\/title>\n<meta name=\"description\" content=\"\u0e01\u0e34\u0e08\u0e01\u0e23\u0e23\u0e21\u0e40\u0e01\u0e35\u0e48\u0e22\u0e27\u0e01\u0e31\u0e1a\u0e42\u0e14\u0e23\u0e19\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, 
max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/youzum.net\/ja\/enterprise-ai-without-gpu-burn-salesforces-xgen-small-optimizes-for-context-cost-and-privacy\/\" \/>\n<meta property=\"og:locale\" content=\"ja_JP\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Enterprise AI Without GPU Burn: Salesforce\u2019s xGen-small Optimizes for Context, Cost, and Privacy - YouZum\" \/>\n<meta property=\"og:description\" content=\"\u0e01\u0e34\u0e08\u0e01\u0e23\u0e23\u0e21\u0e40\u0e01\u0e35\u0e48\u0e22\u0e27\u0e01\u0e31\u0e1a\u0e42\u0e14\u0e23\u0e19\" \/>\n<meta property=\"og:url\" content=\"https:\/\/youzum.net\/ja\/enterprise-ai-without-gpu-burn-salesforces-xgen-small-optimizes-for-context-cost-and-privacy\/\" \/>\n<meta property=\"og:site_name\" content=\"YouZum\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/DroneAssociationTH\/\" \/>\n<meta property=\"article:published_time\" content=\"2025-05-11T02:44:09+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/youzum.net\/wp-content\/uploads\/2025\/05\/AD_4nXeNvCapTIs56e7Of9_eU_0YS7J0YxklbtrCe-pzBym5qFRnxlo8yqiLceHf-9G9nkWnLDKbXp7NJL05epIqeb7ENZ_D1ooRH5TNBDFNvhX2JnsJhjHB9UYCslKGZegYa1Y1kKKC-AKymKd.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1386\" \/>\n\t<meta property=\"og:image:height\" content=\"400\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"admin NU\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"\u57f7\u7b46\u8005\" \/>\n\t<meta name=\"twitter:data1\" content=\"admin NU\" \/>\n\t<meta name=\"twitter:label2\" content=\"\u63a8\u5b9a\u8aad\u307f\u53d6\u308a\u6642\u9593\" \/>\n\t<meta name=\"twitter:data2\" content=\"5\u5206\" \/>\n<script type=\"application\/ld+json\" 
class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/youzum.net\/enterprise-ai-without-gpu-burn-salesforces-xgen-small-optimizes-for-context-cost-and-privacy\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/youzum.net\/enterprise-ai-without-gpu-burn-salesforces-xgen-small-optimizes-for-context-cost-and-privacy\/\"},\"author\":{\"name\":\"admin NU\",\"@id\":\"https:\/\/yousum.gpucore.co\/#\/schema\/person\/97fa48242daf3908e4d9a5f26f4a059c\"},\"headline\":\"Enterprise AI Without GPU Burn: Salesforce\u2019s xGen-small Optimizes for Context, Cost, and Privacy\",\"datePublished\":\"2025-05-11T02:44:09+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/youzum.net\/enterprise-ai-without-gpu-burn-salesforces-xgen-small-optimizes-for-context-cost-and-privacy\/\"},\"wordCount\":1035,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/yousum.gpucore.co\/#organization\"},\"image\":{\"@id\":\"https:\/\/youzum.net\/enterprise-ai-without-gpu-burn-salesforces-xgen-small-optimizes-for-context-cost-and-privacy\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/youzum.net\/wp-content\/uploads\/2025\/05\/AD_4nXeNvCapTIs56e7Of9_eU_0YS7J0YxklbtrCe-pzBym5qFRnxlo8yqiLceHf-9G9nkWnLDKbXp7NJL05epIqeb7ENZ_D1ooRH5TNBDFNvhX2JnsJhjHB9UYCslKGZegYa1Y1kKKC-AKymKd.png\",\"articleSection\":[\"AI\",\"Committee\",\"News\",\"Uncategorized\"],\"inLanguage\":\"ja\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/youzum.net\/enterprise-ai-without-gpu-burn-salesforces-xgen-small-optimizes-for-context-cost-and-privacy\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/youzum.net\/enterprise-ai-without-gpu-burn-salesforces-xgen-small-optimizes-for-context-cost-and-privacy\/\",\"url\":\"https:\/\/youzum.net\/enterprise-ai-without-gpu-burn-salesforces-xgen-small-optimizes-for-context-cost-and-privacy\/\",\"name\":\"Enterprise AI Without GPU Burn: Salesforce\u2019s xGen-small Optimizes for 