{"id":37865,"date":"2025-09-13T06:32:59","date_gmt":"2025-09-13T06:32:59","guid":{"rendered":"https:\/\/youzum.net\/ibm-ai-research-releases-two-english-granite-embedding-models-both-based-on-the-modernbert-architecture\/"},"modified":"2025-09-13T06:32:59","modified_gmt":"2025-09-13T06:32:59","slug":"ibm-ai-research-releases-two-english-granite-embedding-models-both-based-on-the-modernbert-architecture","status":"publish","type":"post","link":"https:\/\/youzum.net\/de\/ibm-ai-research-releases-two-english-granite-embedding-models-both-based-on-the-modernbert-architecture\/","title":{"rendered":"IBM AI Research Releases Two English Granite Embedding Models, Both Based on the ModernBERT Architecture"},"content":{"rendered":"<p>IBM has quietly built a strong presence in the open-source AI ecosystem, and its latest release shows why it shouldn\u2019t be overlooked. The company has introduced two new embedding models\u2014<strong>granite-embedding-english-r2<\/strong> and <strong>granite-embedding-small-english-r2<\/strong>\u2014designed specifically for high-performance retrieval and RAG (retrieval-augmented generation) systems. These models are not only compact and efficient but also licensed under <strong>Apache 2.0<\/strong>, making them ready for commercial deployment.<\/p>\n<h3 class=\"wp-block-heading\"><strong>What Models Did IBM Release?<\/strong><\/h3>\n<p>The two models target different compute budgets. The larger <strong>granite-embedding-english-r2<\/strong> has 149 million parameters with an embedding size of 768, built on a 22-layer ModernBERT encoder. Its smaller counterpart, <strong>granite-embedding-small-english-r2<\/strong>, comes in at just 47 million parameters with an embedding size of 384, using a 12-layer ModernBERT encoder.<\/p>\n<p>Despite their differences in size, both support a maximum context length of <strong>8192 tokens<\/strong>, a major upgrade from the first-generation Granite embeddings. 
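As a minimal sketch of how these embeddings are typically used for retrieval (the model ID comes from the Hugging Face link at the end of this post; the encode-and-rank flow shown is the standard sentence-transformers pattern, not an IBM-specific API, and a synthetic fallback keeps the sketch runnable when the model is not available locally):

```python
# Hedged sketch: embed a query and documents, then rank by cosine similarity.
# Model ID "ibm-granite/granite-embedding-english-r2" is from the post's links;
# everything else is the generic sentence-transformers usage pattern.
import numpy as np

docs = [
    "Granite R2 supports a context length of 8192 tokens.",
    "ModernBERT alternates global and local attention layers.",
]
query = "How long a context do the Granite R2 embeddings support?"

try:
    from sentence_transformers import SentenceTransformer
    model = SentenceTransformer("ibm-granite/granite-embedding-english-r2")
    doc_vecs = np.asarray(model.encode(docs))   # expected shape (2, 768)
    query_vec = np.asarray(model.encode([query]))[0]
except Exception:
    # Offline fallback: random vectors stand in for real embeddings so the
    # ranking code below still runs; the resulting scores are meaningless.
    rng = np.random.default_rng(0)
    doc_vecs = rng.normal(size=(2, 768))
    query_vec = rng.normal(size=768)

def l2norm(x):
    # Cosine similarity reduces to a dot product after L2 normalization.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

scores = l2norm(doc_vecs) @ l2norm(query_vec)
best = int(np.argmax(scores))
print(docs[best])
```

The smaller granite-embedding-small-english-r2 drops in the same way, producing 384-dimensional vectors instead of 768.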
This long-context capability makes them highly suitable for enterprise workloads involving long documents and complex retrieval tasks.<\/p>\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large is-resized\"><img fetchpriority=\"high\" decoding=\"async\" width=\"1024\" height=\"350\" data-attachment-id=\"74497\" data-permalink=\"https:\/\/www.marktechpost.com\/2025\/09\/12\/ibm-ai-research-releases-two-english-granite-embedding-models-both-based-on-the-modernbert-architecture\/screenshot-2025-09-12-at-8-28-05-pm-2\/\" data-orig-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2025\/09\/Screenshot-2025-09-12-at-8.28.05-PM-1.png\" data-orig-size=\"1368,468\" data-comments-opened=\"1\" data-image-meta='{\"aperture\":\"0\",\"credit\":\"\",\"camera\":\"\",\"caption\":\"\",\"created_timestamp\":\"0\",\"copyright\":\"\",\"focal_length\":\"0\",\"iso\":\"0\",\"shutter_speed\":\"0\",\"title\":\"\",\"orientation\":\"0\"}' data-image-title=\"Screenshot 2025-09-12 at 8.28.05\u202fPM\" data-image-description=\"\" data-image-caption=\"\" data-medium-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2025\/09\/Screenshot-2025-09-12-at-8.28.05-PM-1-300x103.png\" data-large-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2025\/09\/Screenshot-2025-09-12-at-8.28.05-PM-1-1024x350.png\" src=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2025\/09\/Screenshot-2025-09-12-at-8.28.05-PM-1-1024x350.png\" alt=\"\" class=\"wp-image-74497\" \/><figcaption class=\"wp-element-caption\">https:\/\/arxiv.org\/abs\/2508.21085<\/figcaption><\/figure>\n<\/div>\n<h3 class=\"wp-block-heading\"><strong>What\u2019s Inside the Architecture?<\/strong><\/h3>\n<p>Both models are built on the <strong>ModernBERT<\/strong> backbone, which introduces several optimizations:<\/p>\n<ul class=\"wp-block-list\">\n<li><strong>Alternating global and local attention<\/strong> to balance efficiency with long-range dependencies.<\/li>\n<li><strong>Rotary positional 
embeddings (RoPE)<\/strong> tuned for positional interpolation, enabling longer context windows.<\/li>\n<li><strong>FlashAttention 2<\/strong> to improve memory usage and throughput at inference time.<\/li>\n<\/ul>\n<p>IBM also trained these models with a <strong>multi-stage pipeline<\/strong>. The process started with masked language pretraining on a two-trillion-token dataset sourced from web, Wikipedia, PubMed, BookCorpus, and internal IBM technical documents. This was followed by <strong>context extension from 1k to 8k tokens<\/strong>, <strong>contrastive learning with distillation from Mistral-7B<\/strong>, and <strong>domain-specific tuning<\/strong> for conversational, tabular, and code retrieval tasks.<\/p>\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large is-resized\"><a href=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2025\/09\/infograp-1200x700-7-scaled.png\"><img decoding=\"async\" width=\"1024\" height=\"597\" data-attachment-id=\"74504\" data-permalink=\"https:\/\/www.marktechpost.com\/2025\/09\/12\/ibm-ai-research-releases-two-english-granite-embedding-models-both-based-on-the-modernbert-architecture\/infograp-1200x700-8\/\" data-orig-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2025\/09\/infograp-1200x700-7-scaled.png\" data-orig-size=\"2560,1493\" data-comments-opened=\"1\" data-image-meta='{\"aperture\":\"0\",\"credit\":\"\",\"camera\":\"\",\"caption\":\"\",\"created_timestamp\":\"0\",\"copyright\":\"\",\"focal_length\":\"0\",\"iso\":\"0\",\"shutter_speed\":\"0\",\"title\":\"\",\"orientation\":\"0\"}' data-image-title=\"infograp 1200\u00d7700\" data-image-description=\"\" data-image-caption=\"\" data-medium-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2025\/09\/infograp-1200x700-7-300x175.png\" data-large-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2025\/09\/infograp-1200x700-7-1024x597.png\" 
src=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2025\/09\/infograp-1200x700-7-1024x597.png\" alt=\"\" class=\"wp-image-74504\" \/><\/a><\/figure>\n<\/div>\n<h3 class=\"wp-block-heading\"><strong>How Do They Perform on Benchmarks?<\/strong><\/h3>\n<p>The Granite R2 models deliver strong results across widely used retrieval benchmarks. On <strong>MTEB-v2<\/strong> and <strong>BEIR<\/strong>, the larger granite-embedding-english-r2 outperforms similarly sized models like BGE Base, E5, and Arctic Embed. The smaller model, granite-embedding-small-english-r2, achieves accuracy close to models two to three times larger, making it particularly attractive for latency-sensitive workloads.<\/p>\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large is-resized\"><img decoding=\"async\" width=\"1024\" height=\"538\" data-attachment-id=\"74499\" data-permalink=\"https:\/\/www.marktechpost.com\/2025\/09\/12\/ibm-ai-research-releases-two-english-granite-embedding-models-both-based-on-the-modernbert-architecture\/screenshot-2025-09-12-at-8-28-36-pm-2\/\" data-orig-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2025\/09\/Screenshot-2025-09-12-at-8.28.36-PM-1.png\" data-orig-size=\"1576,828\" data-comments-opened=\"1\" data-image-meta='{\"aperture\":\"0\",\"credit\":\"\",\"camera\":\"\",\"caption\":\"\",\"created_timestamp\":\"0\",\"copyright\":\"\",\"focal_length\":\"0\",\"iso\":\"0\",\"shutter_speed\":\"0\",\"title\":\"\",\"orientation\":\"0\"}' data-image-title=\"Screenshot 2025-09-12 at 8.28.36\u202fPM\" data-image-description=\"\" data-image-caption=\"\" data-medium-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2025\/09\/Screenshot-2025-09-12-at-8.28.36-PM-1-300x158.png\" data-large-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2025\/09\/Screenshot-2025-09-12-at-8.28.36-PM-1-1024x538.png\" src=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2025\/09\/Screenshot-2025-09-12-at-8.28.36-PM-1-1024x538.png\" 
alt=\"\" class=\"wp-image-74499\" \/><figcaption class=\"wp-element-caption\">https:\/\/arxiv.org\/abs\/2508.21085<\/figcaption><\/figure>\n<\/div>\n<p>Both models also perform well in specialized domains:<\/p>\n<ul class=\"wp-block-list\">\n<li><strong>Long-document retrieval (MLDR, LongEmbed)<\/strong> where 8k context support is critical.<\/li>\n<li><strong>Table retrieval tasks (OTT-QA, FinQA, OpenWikiTables)<\/strong> where structured reasoning is required.<\/li>\n<li><strong>Code retrieval (CoIR)<\/strong>, handling both text-to-code and code-to-text queries.<\/li>\n<\/ul>\n<h3 class=\"wp-block-heading\"><strong>Are They Fast Enough for Large-Scale Use?<\/strong><\/h3>\n<p>Efficiency is one of the standout aspects of these models. On an Nvidia H100 GPU, the <strong>granite-embedding-small-english-r2<\/strong> encodes nearly <strong>200 documents per second<\/strong>, which is significantly faster than BGE Small and E5 Small. The larger granite-embedding-english-r2 also reaches <strong>144 documents per second<\/strong>, outperforming many ModernBERT-based alternatives.<\/p>\n<p>Crucially, these models remain practical even on CPUs, allowing enterprises to run them in less GPU-intensive environments. This balance of <strong>speed, compact size, and retrieval accuracy<\/strong> makes them highly adaptable for real-world deployment.<\/p>\n<h3 class=\"wp-block-heading\"><strong>What Does This Mean for Retrieval in Practice?<\/strong><\/h3>\n<p>IBM\u2019s Granite Embedding R2 models demonstrate that embedding systems don\u2019t need massive parameter counts to be effective. They combine <strong>long-context support, benchmark-leading accuracy, and high throughput<\/strong> in compact architectures. 
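Throughput numbers like the ones quoted above are easy to sanity-check on your own hardware. A small, hedged timing harness (the `encode` argument is any callable that embeds a batch of texts, e.g. a SentenceTransformer's `encode`; the trivial stand-in encoder below only keeps the sketch self-contained and yields no meaningful rate):

```python
# Back-of-envelope documents-per-second measurement for a batch encoder.
import time

def docs_per_second(encode, docs, batch_size=32):
    start = time.perf_counter()
    for i in range(0, len(docs), batch_size):
        encode(docs[i:i + batch_size])
    elapsed = max(time.perf_counter() - start, 1e-9)  # guard against zero
    return len(docs) / elapsed

# Stand-in encoder; swap in a real model's encode for meaningful numbers.
fake_encode = lambda batch: [[0.0] * 384 for _ in batch]
rate = docs_per_second(fake_encode, ["some document text"] * 256)
print(f"{rate:.0f} docs/sec")
```

Measuring on your target hardware and batch size matters: the H100 figures cited here will not transfer directly to CPU deployments, even though the models remain practical there.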
For companies building retrieval pipelines, knowledge management systems, or RAG workflows, Granite R2 provides a <strong>production-ready, commercially viable alternative<\/strong> to existing open-source options.<\/p>\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"600\" data-attachment-id=\"74495\" data-permalink=\"https:\/\/www.marktechpost.com\/2025\/09\/12\/ibm-ai-research-releases-two-english-granite-embedding-models-both-based-on-the-modernbert-architecture\/screenshot-2025-09-12-at-8-27-36-pm-2\/\" data-orig-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2025\/09\/Screenshot-2025-09-12-at-8.27.36-PM-1.png\" data-orig-size=\"1916,1122\" data-comments-opened=\"1\" data-image-meta='{\"aperture\":\"0\",\"credit\":\"\",\"camera\":\"\",\"caption\":\"\",\"created_timestamp\":\"0\",\"copyright\":\"\",\"focal_length\":\"0\",\"iso\":\"0\",\"shutter_speed\":\"0\",\"title\":\"\",\"orientation\":\"0\"}' data-image-title=\"Screenshot 2025-09-12 at 8.27.36\u202fPM\" data-image-description=\"\" data-image-caption=\"\" data-medium-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2025\/09\/Screenshot-2025-09-12-at-8.27.36-PM-1-300x176.png\" data-large-file=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2025\/09\/Screenshot-2025-09-12-at-8.27.36-PM-1-1024x600.png\" src=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2025\/09\/Screenshot-2025-09-12-at-8.27.36-PM-1-1024x600.png\" alt=\"\" class=\"wp-image-74495\" \/><figcaption class=\"wp-element-caption\">https:\/\/arxiv.org\/abs\/2508.21085<\/figcaption><\/figure>\n<\/div>\n<h3 class=\"wp-block-heading\"><strong>Summary<\/strong><\/h3>\n<p>In short, IBM\u2019s Granite Embedding R2 models strike an effective balance between compact design, long-context capability, and strong retrieval performance. 
With throughput optimized for both GPU and CPU environments, and an Apache 2.0 license that enables unrestricted commercial use, they present a practical alternative to bulkier open-source embeddings. For enterprises deploying RAG, search, or large-scale knowledge systems, Granite R2 stands out as an efficient and production-ready option.<\/p>\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n<p>Check out the\u00a0<strong><a href=\"https:\/\/arxiv.org\/abs\/2508.21085\" target=\"_blank\" rel=\"noreferrer noopener\">Paper<\/a>, <a href=\"https:\/\/huggingface.co\/ibm-granite\/granite-embedding-small-english-r2\" target=\"_blank\" rel=\"noreferrer noopener\">granite-embedding-small-english-r2<\/a><\/strong> and <strong><a href=\"https:\/\/huggingface.co\/ibm-granite\/granite-embedding-english-r2\" target=\"_blank\" rel=\"noreferrer noopener\">granite-embedding-english-r2<\/a><em>.<\/em><\/strong>\u00a0Feel free to check out our\u00a0<strong><mark><a href=\"https:\/\/github.com\/Marktechpost\/AI-Tutorial-Codes-Included\" target=\"_blank\" rel=\"noreferrer noopener\">GitHub Page for Tutorials, Codes and Notebooks<\/a><\/mark><\/strong>.\u00a0Also,\u00a0feel free to follow us on\u00a0<strong><a href=\"https:\/\/x.com\/intent\/follow?screen_name=marktechpost\" target=\"_blank\" rel=\"noreferrer noopener\"><mark>Twitter<\/mark><\/a><\/strong>\u00a0and don\u2019t forget to join our\u00a0<strong><a href=\"https:\/\/www.reddit.com\/r\/machinelearningnews\/\" target=\"_blank\" rel=\"noreferrer noopener\">100k+ ML SubReddit<\/a><\/strong>\u00a0and Subscribe to\u00a0<strong><a href=\"https:\/\/www.aidevsignals.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">our Newsletter<\/a><\/strong>.<\/p>\n<p>The post <a href=\"https:\/\/www.marktechpost.com\/2025\/09\/12\/ibm-ai-research-releases-two-english-granite-embedding-models-both-based-on-the-modernbert-architecture\/\">IBM AI Research Releases Two English Granite Embedding Models, Both Based on the ModernBERT 
Architecture<\/a> appeared first on <a href=\"https:\/\/www.marktechpost.com\/\">MarkTechPost<\/a>.<\/p>","protected":false},"excerpt":{"rendered":"<p>IBM has introduced two new embedding models\u2014granite-embedding-english-r2 and granite-embedding-small-english-r2\u2014built on the ModernBERT architecture, supporting 8192-token contexts, and released under Apache 2.0 for high-performance retrieval and RAG systems. 
The post IBM AI Research Releases Two English Granite Embedding Models, Both Based on the ModernBERT Architecture appeared first on MarkTechPost.<\/p>","protected":false},"author":2,"featured_media":37866,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"pmpro_default_level":"","site-sidebar-layout":"default","site-content-layout":"","ast-site-content-layout":"","site-content-style":"default","site-sidebar-style":"default","ast-global-header-display":"","ast-banner-title-visibility":"","ast-main-header-display":"","ast-hfb-above-header-display":"","ast-hfb-below-header-display":"","ast-hfb-mobile-header-display":"","site-post-title":"","ast-breadcrumbs-content":"","ast-featured-img":"","footer-sml-layout":"","theme-transparent-header-meta":"","adv-header-id-meta":"","stick-header-meta":"","header-above-stick-meta":"","header-main-stick-meta":"","header-below-stick-meta":"","astra-migrate-meta-layouts":"default","ast-page-background-enabled":"default","ast-page-background-meta":{"desktop":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"ast-content-background-meta":{"desktop":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"_pvb_checkbox_block_on_post":false,"footnotes":""},"categories":[52,5,7,1],"tags":[],"class_list":["post-37865","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-club","category-committee","category-news","category-uncategorized","pmpro-has-access"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v25.3 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>IBM AI Research Releases Two English Granite Embedding Models, Both Based on the ModernBERT Architecture - YouZum<\/title>\n<meta name=\"description\" content=\"\u0e01\u0e34\u0e08\u0e01\u0e23\u0e23\u0e21\u0e40\u0e01\u0e35\u0e48\u0e22\u0e27\u0e01\u0e31\u0e1a\u0e42\u0e14\u0e23\u0e19\" \/>\n<meta name=\"robots\" content=\"index, follow, 
max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/youzum.net\/de\/ibm-ai-research-releases-two-english-granite-embedding-models-both-based-on-the-modernbert-architecture\/\" \/>\n<meta property=\"og:locale\" content=\"de_DE\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"IBM AI Research Releases Two English Granite Embedding Models, Both Based on the ModernBERT Architecture - YouZum\" \/>\n<meta property=\"og:description\" content=\"\u0e01\u0e34\u0e08\u0e01\u0e23\u0e23\u0e21\u0e40\u0e01\u0e35\u0e48\u0e22\u0e27\u0e01\u0e31\u0e1a\u0e42\u0e14\u0e23\u0e19\" \/>\n<meta property=\"og:url\" content=\"https:\/\/youzum.net\/de\/ibm-ai-research-releases-two-english-granite-embedding-models-both-based-on-the-modernbert-architecture\/\" \/>\n<meta property=\"og:site_name\" content=\"YouZum\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/DroneAssociationTH\/\" \/>\n<meta property=\"article:published_time\" content=\"2025-09-13T06:32:59+00:00\" \/>\n<meta name=\"author\" content=\"admin NU\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Verfasst von\" \/>\n\t<meta name=\"twitter:data1\" content=\"admin NU\" \/>\n\t<meta name=\"twitter:label2\" content=\"Gesch\u00e4tzte Lesezeit\" \/>\n\t<meta name=\"twitter:data2\" content=\"3\u00a0Minuten\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/youzum.net\/ibm-ai-research-releases-two-english-granite-embedding-models-both-based-on-the-modernbert-architecture\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/youzum.net\/ibm-ai-research-releases-two-english-granite-embedding-models-both-based-on-the-modernbert-architecture\/\"},\"author\":{\"name\":\"admin 
NU\",\"@id\":\"https:\/\/yousum.gpucore.co\/#\/schema\/person\/97fa48242daf3908e4d9a5f26f4a059c\"},\"headline\":\"IBM AI Research Releases Two English Granite Embedding Models, Both Based on the ModernBERT Architecture\",\"datePublished\":\"2025-09-13T06:32:59+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/youzum.net\/ibm-ai-research-releases-two-english-granite-embedding-models-both-based-on-the-modernbert-architecture\/\"},\"wordCount\":659,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/yousum.gpucore.co\/#organization\"},\"image\":{\"@id\":\"https:\/\/youzum.net\/ibm-ai-research-releases-two-english-granite-embedding-models-both-based-on-the-modernbert-architecture\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/youzum.net\/wp-content\/uploads\/2025\/09\/Screenshot-2025-09-12-at-8.28.05-PM-1-1024x350-YQUnmW.png\",\"articleSection\":[\"AI\",\"Committee\",\"News\",\"Uncategorized\"],\"inLanguage\":\"de\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/youzum.net\/ibm-ai-research-releases-two-english-granite-embedding-models-both-based-on-the-modernbert-architecture\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/youzum.net\/ibm-ai-research-releases-two-english-granite-embedding-models-both-based-on-the-modernbert-architecture\/\",\"url\":\"https:\/\/youzum.net\/ibm-ai-research-releases-two-english-granite-embedding-models-both-based-on-the-modernbert-architecture\/\",\"name\":\"IBM AI Research Releases Two English Granite Embedding Models, Both Based on the ModernBERT Architecture - 
YouZum\",\"isPartOf\":{\"@id\":\"https:\/\/yousum.gpucore.co\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/youzum.net\/ibm-ai-research-releases-two-english-granite-embedding-models-both-based-on-the-modernbert-architecture\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/youzum.net\/ibm-ai-research-releases-two-english-granite-embedding-models-both-based-on-the-modernbert-architecture\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/youzum.net\/wp-content\/uploads\/2025\/09\/Screenshot-2025-09-12-at-8.28.05-PM-1-1024x350-YQUnmW.png\",\"datePublished\":\"2025-09-13T06:32:59+00:00\",\"description\":\"\u0e01\u0e34\u0e08\u0e01\u0e23\u0e23\u0e21\u0e40\u0e01\u0e35\u0e48\u0e22\u0e27\u0e01\u0e31\u0e1a\u0e42\u0e14\u0e23\u0e19\",\"breadcrumb\":{\"@id\":\"https:\/\/youzum.net\/ibm-ai-research-releases-two-english-granite-embedding-models-both-based-on-the-modernbert-architecture\/#breadcrumb\"},\"inLanguage\":\"de\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/youzum.net\/ibm-ai-research-releases-two-english-granite-embedding-models-both-based-on-the-modernbert-architecture\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"de\",\"@id\":\"https:\/\/youzum.net\/ibm-ai-research-releases-two-english-granite-embedding-models-both-based-on-the-modernbert-architecture\/#primaryimage\",\"url\":\"https:\/\/youzum.net\/wp-content\/uploads\/2025\/09\/Screenshot-2025-09-12-at-8.28.05-PM-1-1024x350-YQUnmW.png\",\"contentUrl\":\"https:\/\/youzum.net\/wp-content\/uploads\/2025\/09\/Screenshot-2025-09-12-at-8.28.05-PM-1-1024x350-YQUnmW.png\",\"width\":1024,\"height\":350},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/youzum.net\/ibm-ai-research-releases-two-english-granite-embedding-models-both-based-on-the-modernbert-architecture\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/youzum.net\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"IBM AI Research Releases Two English Granite 
Embedding Models, Both Based on the ModernBERT Architecture\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/yousum.gpucore.co\/#website\",\"url\":\"https:\/\/yousum.gpucore.co\/\",\"name\":\"YouSum\",\"description\":\"\",\"publisher\":{\"@id\":\"https:\/\/yousum.gpucore.co\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/yousum.gpucore.co\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"de\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/yousum.gpucore.co\/#organization\",\"name\":\"Drone Association Thailand\",\"url\":\"https:\/\/yousum.gpucore.co\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"de\",\"@id\":\"https:\/\/yousum.gpucore.co\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/youzum.net\/wp-content\/uploads\/2024\/11\/tranparent-logo.png\",\"contentUrl\":\"https:\/\/youzum.net\/wp-content\/uploads\/2024\/11\/tranparent-logo.png\",\"width\":300,\"height\":300,\"caption\":\"Drone Association Thailand\"},\"image\":{\"@id\":\"https:\/\/yousum.gpucore.co\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.facebook.com\/DroneAssociationTH\/\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/yousum.gpucore.co\/#\/schema\/person\/97fa48242daf3908e4d9a5f26f4a059c\",\"name\":\"admin NU\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"de\",\"@id\":\"https:\/\/yousum.gpucore.co\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/youzum.net\/wp-content\/uploads\/avatars\/2\/1746849356-bpfull.png\",\"contentUrl\":\"https:\/\/youzum.net\/wp-content\/uploads\/avatars\/2\/1746849356-bpfull.png\",\"caption\":\"admin NU\"},\"url\":\"https:\/\/youzum.net\/de\/members\/adminnu\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"IBM AI Research Releases Two English Granite Embedding Models, Both Based on the ModernBERT Architecture - YouZum","description":"\u0e01\u0e34\u0e08\u0e01\u0e23\u0e23\u0e21\u0e40\u0e01\u0e35\u0e48\u0e22\u0e27\u0e01\u0e31\u0e1a\u0e42\u0e14\u0e23\u0e19","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/youzum.net\/de\/ibm-ai-research-releases-two-english-granite-embedding-models-both-based-on-the-modernbert-architecture\/","og_locale":"de_DE","og_type":"article","og_title":"IBM AI Research Releases Two English Granite Embedding Models, Both Based on the ModernBERT Architecture - YouZum","og_description":"\u0e01\u0e34\u0e08\u0e01\u0e23\u0e23\u0e21\u0e40\u0e01\u0e35\u0e48\u0e22\u0e27\u0e01\u0e31\u0e1a\u0e42\u0e14\u0e23\u0e19","og_url":"https:\/\/youzum.net\/de\/ibm-ai-research-releases-two-english-granite-embedding-models-both-based-on-the-modernbert-architecture\/","og_site_name":"YouZum","article_publisher":"https:\/\/www.facebook.com\/DroneAssociationTH\/","article_published_time":"2025-09-13T06:32:59+00:00","author":"admin NU","twitter_card":"summary_large_image","twitter_misc":{"Verfasst von":"admin NU","Gesch\u00e4tzte Lesezeit":"3\u00a0Minuten"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/youzum.net\/ibm-ai-research-releases-two-english-granite-embedding-models-both-based-on-the-modernbert-architecture\/#article","isPartOf":{"@id":"https:\/\/youzum.net\/ibm-ai-research-releases-two-english-granite-embedding-models-both-based-on-the-modernbert-architecture\/"},"author":{"name":"admin NU","@id":"https:\/\/yousum.gpucore.co\/#\/schema\/person\/97fa48242daf3908e4d9a5f26f4a059c"},"headline":"IBM AI Research Releases Two English Granite Embedding Models, Both Based on the ModernBERT 
Architecture","datePublished":"2025-09-13T06:32:59+00:00","mainEntityOfPage":{"@id":"https:\/\/youzum.net\/ibm-ai-research-releases-two-english-granite-embedding-models-both-based-on-the-modernbert-architecture\/"},"wordCount":659,"commentCount":0,"publisher":{"@id":"https:\/\/yousum.gpucore.co\/#organization"},"image":{"@id":"https:\/\/youzum.net\/ibm-ai-research-releases-two-english-granite-embedding-models-both-based-on-the-modernbert-architecture\/#primaryimage"},"thumbnailUrl":"https:\/\/youzum.net\/wp-content\/uploads\/2025\/09\/Screenshot-2025-09-12-at-8.28.05-PM-1-1024x350-YQUnmW.png","articleSection":["AI","Committee","News","Uncategorized"],"inLanguage":"de","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/youzum.net\/ibm-ai-research-releases-two-english-granite-embedding-models-both-based-on-the-modernbert-architecture\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/youzum.net\/ibm-ai-research-releases-two-english-granite-embedding-models-both-based-on-the-modernbert-architecture\/","url":"https:\/\/youzum.net\/ibm-ai-research-releases-two-english-granite-embedding-models-both-based-on-the-modernbert-architecture\/","name":"IBM AI Research Releases Two English Granite Embedding Models, Both Based on the ModernBERT Architecture - 
YouZum","isPartOf":{"@id":"https:\/\/yousum.gpucore.co\/#website"},"primaryImageOfPage":{"@id":"https:\/\/youzum.net\/ibm-ai-research-releases-two-english-granite-embedding-models-both-based-on-the-modernbert-architecture\/#primaryimage"},"image":{"@id":"https:\/\/youzum.net\/ibm-ai-research-releases-two-english-granite-embedding-models-both-based-on-the-modernbert-architecture\/#primaryimage"},"thumbnailUrl":"https:\/\/youzum.net\/wp-content\/uploads\/2025\/09\/Screenshot-2025-09-12-at-8.28.05-PM-1-1024x350-YQUnmW.png","datePublished":"2025-09-13T06:32:59+00:00","description":"\u0e01\u0e34\u0e08\u0e01\u0e23\u0e23\u0e21\u0e40\u0e01\u0e35\u0e48\u0e22\u0e27\u0e01\u0e31\u0e1a\u0e42\u0e14\u0e23\u0e19","breadcrumb":{"@id":"https:\/\/youzum.net\/ibm-ai-research-releases-two-english-granite-embedding-models-both-based-on-the-modernbert-architecture\/#breadcrumb"},"inLanguage":"de","potentialAction":[{"@type":"ReadAction","target":["https:\/\/youzum.net\/ibm-ai-research-releases-two-english-granite-embedding-models-both-based-on-the-modernbert-architecture\/"]}]},{"@type":"ImageObject","inLanguage":"de","@id":"https:\/\/youzum.net\/ibm-ai-research-releases-two-english-granite-embedding-models-both-based-on-the-modernbert-architecture\/#primaryimage","url":"https:\/\/youzum.net\/wp-content\/uploads\/2025\/09\/Screenshot-2025-09-12-at-8.28.05-PM-1-1024x350-YQUnmW.png","contentUrl":"https:\/\/youzum.net\/wp-content\/uploads\/2025\/09\/Screenshot-2025-09-12-at-8.28.05-PM-1-1024x350-YQUnmW.png","width":1024,"height":350},{"@type":"BreadcrumbList","@id":"https:\/\/youzum.net\/ibm-ai-research-releases-two-english-granite-embedding-models-both-based-on-the-modernbert-architecture\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/youzum.net\/"},{"@type":"ListItem","position":2,"name":"IBM AI Research Releases Two English Granite Embedding Models, Both Based on the ModernBERT 
Architecture"}]},{"@type":"WebSite","@id":"https:\/\/yousum.gpucore.co\/#website","url":"https:\/\/yousum.gpucore.co\/","name":"YouSum","description":"","publisher":{"@id":"https:\/\/yousum.gpucore.co\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/yousum.gpucore.co\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"de"},{"@type":"Organization","@id":"https:\/\/yousum.gpucore.co\/#organization","name":"Drone Association Thailand","url":"https:\/\/yousum.gpucore.co\/","logo":{"@type":"ImageObject","inLanguage":"de","@id":"https:\/\/yousum.gpucore.co\/#\/schema\/logo\/image\/","url":"https:\/\/youzum.net\/wp-content\/uploads\/2024\/11\/tranparent-logo.png","contentUrl":"https:\/\/youzum.net\/wp-content\/uploads\/2024\/11\/tranparent-logo.png","width":300,"height":300,"caption":"Drone Association Thailand"},"image":{"@id":"https:\/\/yousum.gpucore.co\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/DroneAssociationTH\/"]},{"@type":"Person","@id":"https:\/\/yousum.gpucore.co\/#\/schema\/person\/97fa48242daf3908e4d9a5f26f4a059c","name":"admin NU","image":{"@type":"ImageObject","inLanguage":"de","@id":"https:\/\/yousum.gpucore.co\/#\/schema\/person\/image\/","url":"https:\/\/youzum.net\/wp-content\/uploads\/avatars\/2\/1746849356-bpfull.png","contentUrl":"https:\/\/youzum.net\/wp-content\/uploads\/avatars\/2\/1746849356-bpfull.png","caption":"admin 
NU"},"url":"https:\/\/youzum.net\/de\/members\/adminnu\/"}]}},"rttpg_featured_image_url":{"full":["https:\/\/youzum.net\/wp-content\/uploads\/2025\/09\/Screenshot-2025-09-12-at-8.28.05-PM-1-1024x350-YQUnmW.png",1024,350,false],"landscape":["https:\/\/youzum.net\/wp-content\/uploads\/2025\/09\/Screenshot-2025-09-12-at-8.28.05-PM-1-1024x350-YQUnmW.png",1024,350,false],"portraits":["https:\/\/youzum.net\/wp-content\/uploads\/2025\/09\/Screenshot-2025-09-12-at-8.28.05-PM-1-1024x350-YQUnmW.png",1024,350,false],"thumbnail":["https:\/\/youzum.net\/wp-content\/uploads\/2025\/09\/Screenshot-2025-09-12-at-8.28.05-PM-1-1024x350-YQUnmW-150x150.png",150,150,true],"medium":["https:\/\/youzum.net\/wp-content\/uploads\/2025\/09\/Screenshot-2025-09-12-at-8.28.05-PM-1-1024x350-YQUnmW-300x103.png",300,103,true],"large":["https:\/\/youzum.net\/wp-content\/uploads\/2025\/09\/Screenshot-2025-09-12-at-8.28.05-PM-1-1024x350-YQUnmW.png",1024,350,false],"1536x1536":["https:\/\/youzum.net\/wp-content\/uploads\/2025\/09\/Screenshot-2025-09-12-at-8.28.05-PM-1-1024x350-YQUnmW.png",1024,350,false],"2048x2048":["https:\/\/youzum.net\/wp-content\/uploads\/2025\/09\/Screenshot-2025-09-12-at-8.28.05-PM-1-1024x350-YQUnmW.png",1024,350,false],"trp-custom-language-flag":["https:\/\/youzum.net\/wp-content\/uploads\/2025\/09\/Screenshot-2025-09-12-at-8.28.05-PM-1-1024x350-YQUnmW-18x6.png",18,6,true],"woocommerce_thumbnail":["https:\/\/youzum.net\/wp-content\/uploads\/2025\/09\/Screenshot-2025-09-12-at-8.28.05-PM-1-1024x350-YQUnmW-300x300.png",300,300,true],"woocommerce_single":["https:\/\/youzum.net\/wp-content\/uploads\/2025\/09\/Screenshot-2025-09-12-at-8.28.05-PM-1-1024x350-YQUnmW-600x205.png",600,205,true],"woocommerce_gallery_thumbnail":["https:\/\/youzum.net\/wp-content\/uploads\/2025\/09\/Screenshot-2025-09-12-at-8.28.05-PM-1-1024x350-YQUnmW-100x100.png",100,100,true]},"rttpg_author":{"display_name":"admin 
NU","author_link":"https:\/\/youzum.net\/de\/members\/adminnu\/"},"rttpg_comment":0,"rttpg_category":"<a href=\"https:\/\/youzum.net\/de\/category\/ai-club\/\" rel=\"category tag\">AI<\/a> <a href=\"https:\/\/youzum.net\/de\/category\/committee\/\" rel=\"category tag\">Committee<\/a> <a href=\"https:\/\/youzum.net\/de\/category\/news\/\" rel=\"category tag\">News<\/a> <a href=\"https:\/\/youzum.net\/de\/category\/uncategorized\/\" rel=\"category tag\">Uncategorized<\/a>","rttpg_excerpt":"IBM has quietly built a strong presence in the open-source AI ecosystem, and its latest release shows why it shouldn\u2019t be overlooked. The company has introduced two new embedding models\u2014granite-embedding-english-r2 and granite-embedding-small-english-r2\u2014designed specifically for high-performance retrieval and RAG (retrieval-augmented generation) systems. These models are not only compact and efficient but also licensed under Apache 2.0,&hellip;","_links":{"self":[{"href":"https:\/\/youzum.net\/de\/wp-json\/wp\/v2\/posts\/37865","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/youzum.net\/de\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/youzum.net\/de\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/youzum.net\/de\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/youzum.net\/de\/wp-json\/wp\/v2\/comments?post=37865"}],"version-history":[{"count":0,"href":"https:\/\/youzum.net\/de\/wp-json\/wp\/v2\/posts\/37865\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/youzum.net\/de\/wp-json\/wp\/v2\/media\/37866"}],"wp:attachment":[{"href":"https:\/\/youzum.net\/de\/wp-json\/wp\/v2\/media?parent=37865"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/youzum.net\/de\/wp-json\/wp\/v2\/categories?post=37865"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/youzum.net\/de\/wp-json\/wp\/v2\/tags?post=37865"}],"curies":[{"name":"wp","href":"htt
ps:\/\/api.w.org\/{rel}","templated":true}]}}