<h1>Google AI Releases VaultGemma: The Largest and Most Capable Open Model (1B-parameters) Trained from Scratch with Differential Privacy</h1>
<p><em>Published 2025-09-14</em></p>
<p>Google AI Research and DeepMind have released <strong>VaultGemma 1B</strong>, the largest open-weight large language model trained entirely with <strong>differential privacy (DP)</strong>. The release is a major step toward building AI models that are both powerful and privacy-preserving.</p>
<h3 class="wp-block-heading"><strong>Why Do We Need Differential Privacy in LLMs?</strong></h3>
<p>Large language models trained on vast web-scale datasets are prone to <strong>memorization attacks</strong>, in which sensitive or personally identifiable information can be extracted from the model. Studies have shown that verbatim training data can resurface, especially in open-weight releases.</p>
<p>Differential privacy offers a <strong>mathematical guarantee</strong> that strictly bounds the influence any single training example can have on the model. 
Unlike approaches that apply DP only during fine-tuning, VaultGemma enforces <strong>full private pretraining</strong>, ensuring that privacy protection begins at the foundational level.</p>
<figure class="aligncenter"><img src="https://www.marktechpost.com/wp-content/uploads/2025/09/Screenshot-2025-09-13-at-12.12.46-AM-1-1024x547.png" alt="VaultGemma overview" width="1024" height="547" /><figcaption>Source: https://services.google.com/fh/files/blogs/vaultgemma_tech_report.pdf</figcaption></figure>
<h3 class="wp-block-heading"><strong>What Is the Architecture of VaultGemma?</strong></h3>
<p>VaultGemma is architecturally similar to earlier Gemma models but optimized for private training.</p>
<ul class="wp-block-list">
<li><strong>Model size</strong>: 
1B parameters, 26 layers.</li>
<li><strong>Transformer type</strong>: Decoder-only.</li>
<li><strong>Activations</strong>: GeGLU with a feed-forward dimension of 13,824.</li>
<li><strong>Attention</strong>: Multi-Query Attention (MQA) with a global span of 1024 tokens.</li>
<li><strong>Normalization</strong>: RMSNorm in a pre-norm configuration.</li>
<li><strong>Tokenizer</strong>: SentencePiece with a 256K vocabulary.</li>
</ul>
<p>A notable change is the <strong>reduction of sequence length to 1024 tokens</strong>, which lowers compute costs and enables larger batch sizes under DP constraints.</p>
<figure class="aligncenter"><img src="https://www.marktechpost.com/wp-content/uploads/2025/09/1200x1200-infographics-5-1024x1024.png" alt="VaultGemma infographic" width="1024" height="1024" /></figure>
<h3 class="wp-block-heading"><strong>What Data Was Used for Training?</strong></h3>
<p>VaultGemma was trained on the <strong>same 13-trillion-token dataset</strong> as Gemma 2, composed primarily of English text from web documents, code, and scientific articles.</p>
<p><strong>The dataset underwent several filtering stages to:</strong></p>
<ul class="wp-block-list">
<li>Remove unsafe or sensitive content.</li>
<li>Reduce personal-information exposure.</li>
<li>Prevent evaluation-data contamination.</li>
</ul>
<p>This filtering supports both safety and fair benchmarking.</p>
<h3 class="wp-block-heading"><strong>How Was Differential Privacy Applied?</strong></h3>
<p>VaultGemma was trained with <strong>DP-SGD (Differentially Private Stochastic Gradient Descent)</strong>, which clips per-example gradients and adds calibrated Gaussian noise. The implementation was built on <strong>JAX Privacy</strong> and introduced several optimizations for scalability:</p>
<ul class="wp-block-list">
<li><strong>Vectorized per-example clipping</strong> for parallel efficiency.</li>
<li><strong>Gradient accumulation</strong> to simulate large batches.</li>
<li><strong>Truncated Poisson subsampling</strong> integrated into the data loader for efficient on-the-fly sampling.</li>
</ul>
<p>The model achieves a <strong>formal DP guarantee</strong> of (ε ≤ 2.0, δ ≤ 1.1e−10) at the sequence level (1024 tokens).</p>
<h3 class="wp-block-heading"><strong>How Do Scaling Laws Work for Private Training?</strong></h3>
<p>Training large models under DP constraints requires new scaling strategies.</p>
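<p>As a rough illustration of the DP-SGD mechanics described above, the sketch below clips per-example gradients, sums them, and adds Gaussian noise scaled by the clip norm before averaging; the closing comment notes why the effective noise shrinks as the batch grows, which is the tradeoff these scaling strategies navigate. This is a minimal NumPy sketch, not the JAX Privacy implementation: the gradients and clip norm are toy values, and only the noise multiplier echoes the 0.614 reported for VaultGemma.</p>

```python
import numpy as np

def dp_sgd_update(per_example_grads, clip_norm, noise_multiplier, rng):
    """One DP-SGD step: clip each example's gradient to an L2 bound,
    sum the clipped gradients, add Gaussian noise calibrated to the
    clip bound, then average over the batch."""
    clipped = [
        g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
        for g in per_example_grads
    ]
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(
        0.0, noise_multiplier * clip_norm, size=clipped[0].shape
    )
    # The averaged noise has standard deviation sigma * C / B, so doubling
    # the batch size B halves the noise per update; this is why DP training
    # favors very large batches (and shorter sequences to afford them).
    return noisy_sum / len(per_example_grads)

rng = np.random.default_rng(0)
grads = [np.array([3.0, 4.0]), np.array([0.1, -0.2])]  # toy per-example gradients
update = dp_sgd_update(grads, clip_norm=1.0, noise_multiplier=0.614, rng=rng)
```

<p>Setting the noise multiplier to zero reduces the update to a plain clipped-gradient average, which makes the clipping step easy to check in isolation.</p>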
<p>The VaultGemma team developed <strong>DP-specific scaling laws</strong> with three innovations:</p>
<ol class="wp-block-list">
<li><strong>Optimal learning-rate modeling</strong> using quadratic fits across training runs.</li>
<li><strong>Parametric extrapolation of loss values</strong> to reduce reliance on intermediate checkpoints.</li>
<li><strong>Semi-parametric fits</strong> to generalize across model size, training steps, and noise-batch ratios.</li>
</ol>
<p>This methodology enabled precise prediction of achievable loss and efficient resource use on the TPUv6e training cluster.</p>
<h3 class="wp-block-heading"><strong>What Were the Training Configurations?</strong></h3>
<p>VaultGemma was trained on <strong>2048 TPUv6e chips</strong> using GSPMD partitioning and MegaScale XLA compilation.</p>
<ul class="wp-block-list">
<li><strong>Batch size</strong>: ~518K tokens.</li>
<li><strong>Training iterations</strong>: 100,000.</li>
<li><strong>Noise multiplier</strong>: 0.614.</li>
</ul>
<p>The achieved loss was within 1% of the DP scaling law's prediction, validating the approach.</p>
<h3 class="wp-block-heading"><strong>How Does VaultGemma Perform Compared to Non-Private Models?</strong></h3>
<p><strong>On academic benchmarks, VaultGemma trails its non-private counterparts but still shows strong utility:</strong></p>
<ul class="wp-block-list">
<li><strong>ARC-C</strong>: 26.45 vs. 38.31 (Gemma-3 1B).</li>
<li><strong>PIQA</strong>: 68.0 vs. 70.51 (GPT-2 1.5B).</li>
<li><strong>TriviaQA (5-shot)</strong>: 11.24 vs. 39.75 (Gemma-3 1B).</li>
</ul>
<p>These results suggest that DP-trained models are currently comparable to <strong>non-private models from about five years ago</strong>.</p>
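<p>For a rough sense of that utility gap, here is the relative difference implied by the scores quoted above, computed as (baseline - VaultGemma) / baseline. Note that the baselines differ per row (Gemma-3 1B vs. GPT-2 1.5B), so these are indicative rather than apples-to-apples comparisons.</p>

```python
# Relative utility gap implied by the benchmark scores quoted above:
# (baseline - VaultGemma) / baseline, per benchmark.
scores = {
    "ARC-C":    (26.45, 38.31),  # (VaultGemma, Gemma-3 1B)
    "PIQA":     (68.00, 70.51),  # (VaultGemma, GPT-2 1.5B)
    "TriviaQA": (11.24, 39.75),  # (VaultGemma, Gemma-3 1B)
}
gaps = {name: (base - dp) / base for name, (dp, base) in scores.items()}
for name, gap in gaps.items():
    print(f"{name}: {gap:.0%} relative gap")
```

<p>The gap is small on commonsense PIQA (about 4%) but large on knowledge-heavy TriviaQA (about 72%), consistent with the "five years behind" characterization.</p>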
<p>Importantly, memorization tests found <strong>no detectable training-data leakage</strong> from VaultGemma, unlike from non-private Gemma models.</p>
<figure class="aligncenter"><img src="https://www.marktechpost.com/wp-content/uploads/2025/09/Screenshot-2025-09-13-at-12.11.31-AM-1024x267.png" alt="Memorization test results" width="1024" height="267" /><figcaption>Source: https://services.google.com/fh/files/blogs/vaultgemma_tech_report.pdf</figcaption></figure>
<h3 class="wp-block-heading"><strong>Summary</strong></h3>
<p>VaultGemma 1B demonstrates that large-scale language models can be trained with rigorous differential-privacy guarantees without becoming impractical to use.</p>
<p>While a utility gap remains compared to non-private counterparts, the release of both the model and its training methodology gives the community a strong foundation for advancing private AI. This work signals a shift toward building models that are not only capable but also inherently safe, transparent, and privacy-preserving.</p>
<hr class="wp-block-separator has-alpha-channel-opacity" />
<p>Check out the <strong><a href="https://services.google.com/fh/files/blogs/vaultgemma_tech_report.pdf">Paper</a></strong>, the <strong><a href="https://huggingface.co/google/vaultgemma-1b">Model on Hugging Face</a></strong>, and the <strong><a href="https://research.google/blog/vaultgemma-the-worlds-most-capable-differentially-private-llm/">Technical Details</a></strong>.</p>
<p>The post <a href="https://www.marktechpost.com/2025/09/13/google-ai-releases-vaultgemma-the-largest-and-most-capable-open-model-1b-parameters-trained-from-scratch-with-differential-privacy/">Google AI Releases VaultGemma: The Largest and 
Most Capable Open Model (1B-parameters) Trained from Scratch with Differential Privacy</a> appeared first on <a href="https://www.marktechpost.com/">MarkTechPost</a>.</p>
ial-privacy\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/youzum.net\/"},{"@type":"ListItem","position":2,"name":"Google AI Releases VaultGemma: The Largest and Most Capable Open Model (1B-parameters) Trained from Scratch with Differential Privacy"}]},{"@type":"WebSite","@id":"https:\/\/yousum.gpucore.co\/#website","url":"https:\/\/yousum.gpucore.co\/","name":"YouSum","description":"","publisher":{"@id":"https:\/\/yousum.gpucore.co\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/yousum.gpucore.co\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"it-IT"},{"@type":"Organization","@id":"https:\/\/yousum.gpucore.co\/#organization","name":"Drone Association Thailand","url":"https:\/\/yousum.gpucore.co\/","logo":{"@type":"ImageObject","inLanguage":"it-IT","@id":"https:\/\/yousum.gpucore.co\/#\/schema\/logo\/image\/","url":"https:\/\/youzum.net\/wp-content\/uploads\/2024\/11\/tranparent-logo.png","contentUrl":"https:\/\/youzum.net\/wp-content\/uploads\/2024\/11\/tranparent-logo.png","width":300,"height":300,"caption":"Drone Association Thailand"},"image":{"@id":"https:\/\/yousum.gpucore.co\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/DroneAssociationTH\/"]},{"@type":"Person","@id":"https:\/\/yousum.gpucore.co\/#\/schema\/person\/97fa48242daf3908e4d9a5f26f4a059c","name":"admin NU","image":{"@type":"ImageObject","inLanguage":"it-IT","@id":"https:\/\/yousum.gpucore.co\/#\/schema\/person\/image\/","url":"https:\/\/youzum.net\/wp-content\/uploads\/avatars\/2\/1746849356-bpfull.png","contentUrl":"https:\/\/youzum.net\/wp-content\/uploads\/avatars\/2\/1746849356-bpfull.png","caption":"admin 
NU"},"url":"https:\/\/youzum.net\/it\/members\/adminnu\/"}]}},"rttpg_featured_image_url":{"full":["https:\/\/youzum.net\/wp-content\/uploads\/2025\/09\/Screenshot-2025-09-13-at-12.12.46-AM-1-1024x547-4hvwJn.webp",1024,547,false],"landscape":["https:\/\/youzum.net\/wp-content\/uploads\/2025\/09\/Screenshot-2025-09-13-at-12.12.46-AM-1-1024x547-4hvwJn.webp",1024,547,false],"portraits":["https:\/\/youzum.net\/wp-content\/uploads\/2025\/09\/Screenshot-2025-09-13-at-12.12.46-AM-1-1024x547-4hvwJn.webp",1024,547,false],"thumbnail":["https:\/\/youzum.net\/wp-content\/uploads\/2025\/09\/Screenshot-2025-09-13-at-12.12.46-AM-1-1024x547-4hvwJn-150x150.webp",150,150,true],"medium":["https:\/\/youzum.net\/wp-content\/uploads\/2025\/09\/Screenshot-2025-09-13-at-12.12.46-AM-1-1024x547-4hvwJn-300x160.webp",300,160,true],"large":["https:\/\/youzum.net\/wp-content\/uploads\/2025\/09\/Screenshot-2025-09-13-at-12.12.46-AM-1-1024x547-4hvwJn.webp",1024,547,false],"1536x1536":["https:\/\/youzum.net\/wp-content\/uploads\/2025\/09\/Screenshot-2025-09-13-at-12.12.46-AM-1-1024x547-4hvwJn.webp",1024,547,false],"2048x2048":["https:\/\/youzum.net\/wp-content\/uploads\/2025\/09\/Screenshot-2025-09-13-at-12.12.46-AM-1-1024x547-4hvwJn.webp",1024,547,false],"trp-custom-language-flag":["https:\/\/youzum.net\/wp-content\/uploads\/2025\/09\/Screenshot-2025-09-13-at-12.12.46-AM-1-1024x547-4hvwJn-18x10.webp",18,10,true],"woocommerce_thumbnail":["https:\/\/youzum.net\/wp-content\/uploads\/2025\/09\/Screenshot-2025-09-13-at-12.12.46-AM-1-1024x547-4hvwJn-300x300.webp",300,300,true],"woocommerce_single":["https:\/\/youzum.net\/wp-content\/uploads\/2025\/09\/Screenshot-2025-09-13-at-12.12.46-AM-1-1024x547-4hvwJn-600x321.webp",600,321,true],"woocommerce_gallery_thumbnail":["https:\/\/youzum.net\/wp-content\/uploads\/2025\/09\/Screenshot-2025-09-13-at-12.12.46-AM-1-1024x547-4hvwJn-100x100.webp",100,100,true]},"rttpg_author":{"display_name":"admin 
NU","author_link":"https:\/\/youzum.net\/it\/members\/adminnu\/"},"rttpg_comment":0,"rttpg_category":"<a href=\"https:\/\/youzum.net\/it\/category\/ai-club\/\" rel=\"category tag\">AI<\/a> <a href=\"https:\/\/youzum.net\/it\/category\/committee\/\" rel=\"category tag\">Committee<\/a> <a href=\"https:\/\/youzum.net\/it\/category\/news\/\" rel=\"category tag\">News<\/a> <a href=\"https:\/\/youzum.net\/it\/category\/uncategorized\/\" rel=\"category tag\">Uncategorized<\/a>","rttpg_excerpt":"Google AI Research and DeepMind have released VaultGemma 1B, the largest open-weight large language model trained entirely with differential privacy (DP). This development is a major step toward building AI models that are both powerful and privacy-preserving. Why Do We Need Differential Privacy in LLMs? Large language models trained on vast web-scale datasets are prone&hellip;","_links":{"self":[{"href":"https:\/\/youzum.net\/it\/wp-json\/wp\/v2\/posts\/38049","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/youzum.net\/it\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/youzum.net\/it\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/youzum.net\/it\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/youzum.net\/it\/wp-json\/wp\/v2\/comments?post=38049"}],"version-history":[{"count":0,"href":"https:\/\/youzum.net\/it\/wp-json\/wp\/v2\/posts\/38049\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/youzum.net\/it\/wp-json\/wp\/v2\/media\/38050"}],"wp:attachment":[{"href":"https:\/\/youzum.net\/it\/wp-json\/wp\/v2\/media?parent=38049"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/youzum.net\/it\/wp-json\/wp\/v2\/categories?post=38049"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/youzum.net\/it\/wp-json\/wp\/v2\/tags?post=38049"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}