{"id":66329,"date":"2026-01-24T11:07:39","date_gmt":"2026-01-24T11:07:39","guid":{"rendered":"https:\/\/youzum.net\/how-machine-learning-and-semantic-embeddings-reorder-cve-vulnerabilities-beyond-raw-cvss-scores\/"},"modified":"2026-01-24T11:07:39","modified_gmt":"2026-01-24T11:07:39","slug":"how-machine-learning-and-semantic-embeddings-reorder-cve-vulnerabilities-beyond-raw-cvss-scores","status":"publish","type":"post","link":"https:\/\/youzum.net\/es\/how-machine-learning-and-semantic-embeddings-reorder-cve-vulnerabilities-beyond-raw-cvss-scores\/","title":{"rendered":"How Machine Learning and Semantic Embeddings Reorder CVE Vulnerabilities Beyond Raw CVSS Scores"},"content":{"rendered":"<p>In this tutorial, we build an AI-assisted vulnerability scanner that goes beyond static CVSS scoring and instead learns to prioritize vulnerabilities using semantic understanding and machine learning. We treat vulnerability descriptions as rich linguistic artifacts, embed them using modern sentence transformers, and combine these representations with structural metadata to produce a data-driven priority score. Also, we demonstrate how security teams can shift from rule-based triage to adaptive, explainable, ML-driven risk assessment. 
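The composite ranking computed later in the tutorial (in `predict_priority`) blends the classifier's top-severity probability (weight 0.4) with the normalized predicted CVSS score (weight 0.6). A minimal standalone sketch of that blend, using made-up probabilities and scores in place of trained-model outputs:

```python
import numpy as np

def blend_priority(p_top_severity, predicted_cvss):
    # Weighted blend used by the tutorial's predict_priority:
    # 0.4 * probability of the highest-severity class + 0.6 * (CVSS / 10)
    p = np.asarray(p_top_severity, dtype=float)
    cvss = np.asarray(predicted_cvss, dtype=float)
    return 0.4 * p + 0.6 * (cvss / 10.0)

# Toy inputs standing in for model outputs
print(blend_priority([0.9, 0.1, 0.5], [9.8, 3.1, 7.5]))
```

With these toy numbers, a critical-looking CVE (probability 0.9, CVSS 9.8) scores about 0.948 while a low one (0.1, CVSS 3.1) scores about 0.226, so sorting by the blended score matches intuition.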
Check out the\u00a0<strong><a href=\"https:\/\/github.com\/Marktechpost\/AI-Tutorial-Codes-Included\/blob\/main\/Security\/ai_assisted_vulnerability_prioritization_ml_nlp_marktechpost.py\" target=\"_blank\" rel=\"noreferrer noopener\">FULL CODES here<\/a><\/strong>.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\"no-line-numbers\"><code class=\"no-wrap language-php\">print(\"Installing required packages...\")\nimport subprocess\nimport sys\n\n\npackages = [\n   'sentence-transformers',\n   'scikit-learn',\n   'pandas',\n   'numpy',\n   'matplotlib',\n   'seaborn',\n   'requests'\n]\n\n\nfor package in packages:\n   subprocess.check_call([sys.executable, '-m', 'pip', 'install', '-q', package])\n\n\nimport requests\nimport pandas as pd\nimport numpy as np\nfrom datetime import datetime, timedelta\nimport json\nimport re\nfrom collections import Counter\nimport warnings\nwarnings.filterwarnings('ignore')\n\n\nfrom sentence_transformers import SentenceTransformer\nfrom sklearn.ensemble import RandomForestClassifier, GradientBoostingRegressor\nfrom sklearn.cluster import KMeans\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import classification_report, mean_squared_error\n\n\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\n\nprint(\"\u2713 All packages installed successfully!\\n\")<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We install and load all required 
NLP, machine learning, and visualization libraries for the end-to-end pipeline. We ensure the runtime is fully self-contained and ready to execute in Colab or similar notebook environments. It establishes a reproducible foundation for the scanner. Check out the\u00a0<strong><a href=\"https:\/\/github.com\/Marktechpost\/AI-Tutorial-Codes-Included\/blob\/main\/Security\/ai_assisted_vulnerability_prioritization_ml_nlp_marktechpost.py\" target=\"_blank\" rel=\"noreferrer noopener\">FULL CODES here<\/a><\/strong>.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\"no-line-numbers\"><code class=\"no-wrap language-php\">class CVEDataFetcher:\n   def __init__(self):\n       self.base_url = \"https:\/\/services.nvd.nist.gov\/rest\/json\/cves\/2.0\"\n\n\n   def fetch_recent_cves(self, days=30, max_results=100):\n       print(f\"Fetching CVEs from last {days} days...\")\n\n\n       end_date = datetime.now()\n       start_date = end_date - timedelta(days=days)\n\n\n       params = {\n           'pubStartDate': start_date.strftime('%Y-%m-%dT00:00:00.000'),\n           'pubEndDate': end_date.strftime('%Y-%m-%dT23:59:59.999'),\n           'resultsPerPage': min(max_results, 2000)\n       }\n\n\n       try:\n           response = requests.get(self.base_url, params=params, timeout=30)\n           response.raise_for_status()\n           data = response.json()\n\n\n           cves = []\n           for item in data.get('vulnerabilities', [])[:max_results]:\n  
             cve = item.get('cve', {})\n               cve_id = cve.get('id', 'Unknown')\n\n\n               descriptions = cve.get('descriptions', [])\n               description = next((d['value'] for d in descriptions if d['lang'] == 'en'), 'No description')\n\n\n               metrics = cve.get('metrics', {})\n               cvss_v3 = metrics.get('cvssMetricV31', [{}])[0].get('cvssData', {})\n               cvss_v2 = metrics.get('cvssMetricV2', [{}])[0].get('cvssData', {})\n\n\n               base_score = cvss_v3.get('baseScore') or cvss_v2.get('baseScore') or 0.0\n               severity = cvss_v3.get('baseSeverity') or 'UNKNOWN'\n\n\n               published = cve.get('published', '')\n               references = cve.get('references', [])\n\n\n               cves.append({\n                   'cve_id': cve_id,\n                   'description': description,\n                   'cvss_score': float(base_score),\n                   'severity': severity,\n                   'published': published,\n                   'reference_count': len(references),\n                   'attack_vector': cvss_v3.get('attackVector', 'UNKNOWN'),\n                   'attack_complexity': cvss_v3.get('attackComplexity', 'UNKNOWN'),\n                   'privileges_required': cvss_v3.get('privilegesRequired', 'UNKNOWN'),\n                   'user_interaction': cvss_v3.get('userInteraction', 'UNKNOWN')\n               })\n\n\n           print(f\"\u2713 Fetched {len(cves)} CVEs\\n\")\n           return pd.DataFrame(cves)\n\n\n       except Exception as e:\n           print(f\"Error fetching CVEs: {e}\")\n           return self._generate_sample_data(max_results)\n\n\n   def _generate_sample_data(self, n=50):\n       print(\"Using sample CVE data for demonstration...\\n\")\n\n\n       sample_descriptions = [\n           \"A buffer overflow vulnerability in the network driver allows remote code execution\",\n           \"SQL injection vulnerability in web application login form enables 
unauthorized access\",\n           \"Cross-site scripting (XSS) vulnerability in user input validation\",\n           \"Authentication bypass in admin panel due to weak session management\",\n           \"Remote code execution via deserialization of untrusted data\",\n           \"Path traversal vulnerability allows reading arbitrary files\",\n           \"Privilege escalation through improper input validation\",\n           \"Denial of service through resource exhaustion in API endpoint\",\n           \"Information disclosure via error messages exposing sensitive data\",\n           \"Memory corruption vulnerability in image processing library\",\n           \"Command injection in file upload functionality\",\n           \"Integer overflow leading to heap buffer overflow\",\n           \"Use-after-free vulnerability in memory management\",\n           \"Race condition in multi-threaded application\",\n           \"Cryptographic weakness in password storage mechanism\"\n       ]\n\n\n       severities = ['LOW', 'MEDIUM', 'HIGH', 'CRITICAL']\n       attack_vectors = ['NETWORK', 'ADJACENT', 'LOCAL', 'PHYSICAL']\n       complexities = ['LOW', 'HIGH']\n\n\n       data = []\n       for i in range(n):\n           severity = np.random.choice(severities, p=[0.1, 0.3, 0.4, 0.2])\n           score_ranges = {'LOW': (0.1, 3.9), 'MEDIUM': (4.0, 6.9), 'HIGH': (7.0, 8.9), 'CRITICAL': (9.0, 10.0)}\n\n\n           data.append({\n               'cve_id': f'CVE-2024-{10000+i}',\n               'description': np.random.choice(sample_descriptions),\n               'cvss_score': np.random.uniform(*score_ranges[severity]),\n               'severity': severity,\n               'published': (datetime.now() - timedelta(days=np.random.randint(1, 30))).isoformat(),\n               'reference_count': np.random.randint(1, 10),\n               'attack_vector': np.random.choice(attack_vectors),\n               'attack_complexity': np.random.choice(complexities),\n               
'privileges_required': np.random.choice(['NONE', 'LOW', 'HIGH']),\n               'user_interaction': np.random.choice(['NONE', 'REQUIRED'])\n           })\n\n\n       return pd.DataFrame(data)<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We implement a robust CVE ingestion component that pulls recent vulnerabilities directly from the NVD API. We normalize raw CVE records into structured features while gracefully falling back to synthetic data when API access fails. It allows the tutorial to remain runnable while reflecting real-world challenges in data ingestion. Check out the\u00a0<strong><a href=\"https:\/\/github.com\/Marktechpost\/AI-Tutorial-Codes-Included\/blob\/main\/Security\/ai_assisted_vulnerability_prioritization_ml_nlp_marktechpost.py\" target=\"_blank\" rel=\"noreferrer noopener\">FULL CODES here<\/a><\/strong>.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\"no-line-numbers\"><code class=\"no-wrap language-php\">class VulnerabilityFeatureExtractor:\n   def __init__(self):\n       print(\"Loading sentence transformer model...\")\n       self.model = SentenceTransformer('all-MiniLM-L6-v2')\n       print(\"\u2713 Model loaded\\n\")\n\n\n       self.critical_keywords = {\n           'execution': ['remote code execution', 'rce', 'execute', 'arbitrary code'],\n           'injection': ['sql injection', 'command injection', 'code injection'],\n           'authentication': ['bypass', 'authentication', 'authorization'],\n           
'overflow': ['buffer overflow', 'heap overflow', 'stack overflow'],\n           'exposure': ['information disclosure', 'data leak', 'exposure'],\n       }\n\n\n   def extract_semantic_features(self, descriptions):\n       print(\"Generating semantic embeddings...\")\n       embeddings = self.model.encode(descriptions, show_progress_bar=True)\n       return embeddings\n\n\n   def extract_keyword_features(self, df):\n       print(\"Extracting keyword features...\")\n\n\n       for category, keywords in self.critical_keywords.items():\n           df[f'has_{category}'] = df['description'].apply(\n               lambda x: any(kw in x.lower() for kw in keywords)\n           ).astype(int)\n\n\n       df['desc_length'] = df['description'].apply(len)\n       df['word_count'] = df['description'].apply(lambda x: len(x.split()))\n\n\n       return df\n\n\n   def encode_categorical_features(self, df):\n       print(\"Encoding categorical features...\")\n\n\n       categorical_cols = ['attack_vector', 'attack_complexity', 'privileges_required', 'user_interaction']\n\n\n       for col in categorical_cols:\n           dummies = pd.get_dummies(df[col], prefix=col)\n           df = pd.concat([df, dummies], axis=1)\n\n\n       return df<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We transform unstructured vulnerability descriptions into dense semantic embeddings using a sentence-transformer model. We also extract keyword-based risk indicators and textual statistics that capture exploit intent and complexity. Together, these features bridge linguistic context with quantitative ML inputs. 
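The keyword flags described above reduce to simple substring checks; a self-contained sketch of the same logic on a single description, using a subset of the tutorial's `critical_keywords` categories:

```python
# Subset of the tutorial's critical_keywords mapping
critical_keywords = {
    'execution': ['remote code execution', 'rce', 'execute', 'arbitrary code'],
    'injection': ['sql injection', 'command injection', 'code injection'],
    'overflow': ['buffer overflow', 'heap overflow', 'stack overflow'],
}

def keyword_flags(description):
    # Mirror extract_keyword_features: 1 if any keyword of the category appears
    text = description.lower()
    return {f'has_{cat}': int(any(kw in text for kw in kws))
            for cat, kws in critical_keywords.items()}

print(keyword_flags("A buffer overflow in the network driver allows remote code execution"))
# → {'has_execution': 1, 'has_injection': 0, 'has_overflow': 1}
```

In the full pipeline this dictionary becomes one binary `has_<category>` column per category via `DataFrame.apply`.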
Check out the\u00a0<strong><a href=\"https:\/\/github.com\/Marktechpost\/AI-Tutorial-Codes-Included\/blob\/main\/Security\/ai_assisted_vulnerability_prioritization_ml_nlp_marktechpost.py\" target=\"_blank\" rel=\"noreferrer noopener\">FULL CODES here<\/a><\/strong>.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\"no-line-numbers\"><code class=\"no-wrap language-php\">class VulnerabilityPrioritizer:\n   def __init__(self):\n       self.severity_classifier = RandomForestClassifier(n_estimators=100, random_state=42)\n       self.score_predictor = GradientBoostingRegressor(n_estimators=100, random_state=42)\n       self.scaler = StandardScaler()\n       self.feature_cols = None\n\n\n   def prepare_features(self, df, embeddings):\n       numeric_features = ['reference_count', 'desc_length', 'word_count']\n       keyword_features = [col for col in df.columns if col.startswith('has_')]\n       categorical_features = [col for col in df.columns if any(col.startswith(prefix) for prefix in ['attack_vector_', 'attack_complexity_', 'privileges_required_', 'user_interaction_'])]\n       self.feature_cols = numeric_features + keyword_features + categorical_features\n       X_structured = df[self.feature_cols].values\n       X_embeddings = embeddings\n       X_combined = np.hstack([X_structured, X_embeddings])\n       return X_combined\n\n\n   def train_models(self, X, y_severity, y_score):\n       print(\"\\nTraining ML models...\")\n       X_scaled = 
self.scaler.fit_transform(X)\n       X_train, X_test, y_sev_train, y_sev_test, y_score_train, y_score_test = train_test_split(\n           X_scaled, y_severity, y_score, test_size=0.2, random_state=42\n       )\n       self.severity_classifier.fit(X_train, y_sev_train)\n       sev_pred = self.severity_classifier.predict(X_test)\n       self.score_predictor.fit(X_train, y_score_train)\n       score_pred = self.score_predictor.predict(X_test)\n       print(\"\\n--- Severity Classification Report ---\")\n       print(classification_report(y_sev_test, sev_pred))\n       print(f\"\\n--- CVSS Score Prediction ---\")\n       print(f\"RMSE: {np.sqrt(mean_squared_error(y_score_test, score_pred)):.2f}\")\n       return X_scaled\n\n\n   def predict_priority(self, X):\n       X_scaled = self.scaler.transform(X)\n       severity_pred = self.severity_classifier.predict_proba(X_scaled)\n       score_pred = self.score_predictor.predict(X_scaled)\n       severity_weight = severity_pred[:, -1] * 0.4\n       score_weight = (score_pred \/ 10.0) * 0.6\n       priority_score = severity_weight + score_weight\n       return priority_score, severity_pred, score_pred\n\n\n   def get_feature_importance(self):\n       importance = self.score_predictor.feature_importances_\n       n_structured = len(self.feature_cols)\n       structured_importance = importance[:n_structured]\n       embedding_importance = importance[n_structured:]\n       feature_imp_df = pd.DataFrame({\n           'feature': self.feature_cols,\n           'importance': structured_importance\n       }).sort_values('importance', ascending=False)\n       return feature_imp_df, embedding_importance.mean()<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We train supervised models to predict both vulnerability severity classes and CVSS-like scores from learned features. We combine structured metadata with embeddings to create a hybrid feature space and derive a composite priority score. 
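In `prepare_features`, structured columns and the embedding matrix are simply concatenated column-wise; a shape-only sketch with random stand-ins (the 12 structured columns are illustrative, while 384 is the actual embedding width of all-MiniLM-L6-v2):

```python
import numpy as np

rng = np.random.default_rng(0)
n_cves = 8
X_structured = rng.random((n_cves, 12))    # numeric + keyword + one-hot columns
X_embeddings = rng.random((n_cves, 384))   # sentence-embedding vectors

# Column-wise concatenation, as in prepare_features
X_combined = np.hstack([X_structured, X_embeddings])
print(X_combined.shape)  # → (8, 396)
```

Keeping the structured block first is what later lets `get_feature_importance` split the importance vector at `len(self.feature_cols)` into named features versus embedding dimensions.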
This is where the scanner learns how to rank vulnerabilities beyond static heuristics. Check out the\u00a0<strong><a href=\"https:\/\/github.com\/Marktechpost\/AI-Tutorial-Codes-Included\/blob\/main\/Security\/ai_assisted_vulnerability_prioritization_ml_nlp_marktechpost.py\" target=\"_blank\" rel=\"noreferrer noopener\">FULL CODES here<\/a><\/strong>.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\"no-line-numbers\"><code class=\"no-wrap language-php\">class VulnerabilityAnalyzer:\n   def __init__(self, n_clusters=5):\n       self.n_clusters = n_clusters\n       self.kmeans = KMeans(n_clusters=n_clusters, random_state=42)\n\n\n   def cluster_vulnerabilities(self, embeddings):\n       print(f\"\\nClustering vulnerabilities into {self.n_clusters} groups...\")\n       clusters = self.kmeans.fit_predict(embeddings)\n       return clusters\n\n\n   def analyze_clusters(self, df, clusters):\n       df['cluster'] = clusters\n       print(\"\\n--- Cluster Analysis ---\")\n       for i in range(self.n_clusters):\n           cluster_df = df[df['cluster'] == i]\n           print(f\"\\nCluster {i} ({len(cluster_df)} vulnerabilities):\")\n           print(f\"  Avg CVSS Score: {cluster_df['cvss_score'].mean():.2f}\")\n           print(f\"  Severity Distribution: {cluster_df['severity'].value_counts().to_dict()}\")\n           print(f\"  Top keywords: \", end=\"\")\n           all_words = ' '.join(cluster_df['description'].values).lower()\n           words 
= re.findall(r'\\b[a-z]{4,}\\b', all_words)\n           common = Counter(words).most_common(5)\n           print(', '.join([w for w, _ in common]))\n       return df<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We cluster vulnerabilities based on embedding similarity to uncover recurring exploit patterns. We analyze each cluster to understand dominant attack themes, severity distributions, and common exploit terminology. It helps surface systemic risks rather than isolated issues. Check out the\u00a0<strong><a href=\"https:\/\/github.com\/Marktechpost\/AI-Tutorial-Codes-Included\/blob\/main\/Security\/ai_assisted_vulnerability_prioritization_ml_nlp_marktechpost.py\" target=\"_blank\" rel=\"noreferrer noopener\">FULL CODES here<\/a><\/strong>.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\"no-line-numbers\"><code class=\"no-wrap language-php\">def visualize_results(df, priority_scores, feature_importance):\n   fig, axes = plt.subplots(2, 3, figsize=(18, 10))\n   fig.suptitle('Vulnerability Scanner - ML Analysis Dashboard', fontsize=16, fontweight='bold')\n   axes[0, 0].hist(priority_scores, bins=30, color='crimson', alpha=0.7, edgecolor='black')\n   axes[0, 0].set_xlabel('Priority Score')\n   axes[0, 0].set_ylabel('Frequency')\n   axes[0, 0].set_title('Priority Score Distribution')\n   axes[0, 0].axvline(np.percentile(priority_scores, 75), color='orange', linestyle='--', label='75th percentile')\n   axes[0, 0].legend()\n   axes[0, 
1].scatter(df['cvss_score'], priority_scores, alpha=0.6, c=priority_scores, cmap='RdYlGn_r', s=50)\n   axes[0, 1].set_xlabel('CVSS Score')\n   axes[0, 1].set_ylabel('ML Priority Score')\n   axes[0, 1].set_title('CVSS vs ML Priority')\n   axes[0, 1].plot([0, 10], [0, 1], 'k--', alpha=0.3)\n   severity_counts = df['severity'].value_counts()\n   colors = {'CRITICAL': 'darkred', 'HIGH': 'red', 'MEDIUM': 'orange', 'LOW': 'yellow'}\n   axes[0, 2].bar(severity_counts.index, severity_counts.values, color=[colors.get(s, 'gray') for s in severity_counts.index])\n   axes[0, 2].set_xlabel('Severity')\n   axes[0, 2].set_ylabel('Count')\n   axes[0, 2].set_title('Severity Distribution')\n   axes[0, 2].tick_params(axis='x', rotation=45)\n   top_features = feature_importance.head(10)\n   axes[1, 0].barh(top_features['feature'], top_features['importance'], color='steelblue')\n   axes[1, 0].set_xlabel('Importance')\n   axes[1, 0].set_title('Top 10 Feature Importance')\n   axes[1, 0].invert_yaxis()\n   if 'cluster' in df.columns:\n       cluster_counts = df['cluster'].value_counts().sort_index()\n       axes[1, 1].bar(cluster_counts.index, cluster_counts.values, color='teal', alpha=0.7)\n       axes[1, 1].set_xlabel('Cluster')\n       axes[1, 1].set_ylabel('Count')\n       axes[1, 1].set_title('Vulnerability Clusters')\n   attack_vector_counts = df['attack_vector'].value_counts()\n   axes[1, 2].pie(attack_vector_counts.values, labels=attack_vector_counts.index, autopct='%1.1f%%', startangle=90)\n   axes[1, 2].set_title('Attack Vector Distribution')\n   plt.tight_layout()\n   plt.show()\n\n\ndef main():\n   print(\"=\"*70)\n   print(\"AI-ASSISTED VULNERABILITY SCANNER WITH ML PRIORITIZATION\")\n   print(\"=\"*70)\n   print()\n   fetcher = CVEDataFetcher()\n   df = fetcher.fetch_recent_cves(days=30, max_results=50)\n   print(f\"Dataset Overview:\")\n   print(f\"  Total CVEs: {len(df)}\")\n   print(f\"  Date Range: {df['published'].min()[:10]} to {df['published'].max()[:10]}\")\n   
print(f\"  Severity Breakdown: {df['severity'].value_counts().to_dict()}\")\n   print()\n   feature_extractor = VulnerabilityFeatureExtractor()\n   embeddings = feature_extractor.extract_semantic_features(df['description'].tolist())\n   df = feature_extractor.extract_keyword_features(df)\n   df = feature_extractor.encode_categorical_features(df)\n   prioritizer = VulnerabilityPrioritizer()\n   X = prioritizer.prepare_features(df, embeddings)\n   severity_map = {'LOW': 0, 'MEDIUM': 1, 'HIGH': 2, 'CRITICAL': 3, 'UNKNOWN': 1}\n   y_severity = df['severity'].map(severity_map).values\n   y_score = df['cvss_score'].values\n   X_scaled = prioritizer.train_models(X, y_severity, y_score)\n   priority_scores, severity_probs, score_preds = prioritizer.predict_priority(X)\n   df['ml_priority_score'] = priority_scores\n   df['predicted_score'] = score_preds\n   analyzer = VulnerabilityAnalyzer(n_clusters=5)\n   clusters = analyzer.cluster_vulnerabilities(embeddings)\n   df = analyzer.analyze_clusters(df, clusters)\n   feature_imp, emb_imp = prioritizer.get_feature_importance()\n   print(f\"\\n--- Feature Importance ---\")\n   print(feature_imp.head(10))\n   print(f\"\\nAverage embedding importance: {emb_imp:.4f}\")\n   print(\"\\n\" + \"=\"*70)\n   print(\"TOP 10 PRIORITY VULNERABILITIES\")\n   print(\"=\"*70)\n   top_vulns = df.nlargest(10, 'ml_priority_score')[['cve_id', 'cvss_score', 'ml_priority_score', 'severity', 'description']]\n   for idx, row in top_vulns.iterrows():\n       print(f\"\\n{row['cve_id']} [Priority: {row['ml_priority_score']:.3f}]\")\n       print(f\"  CVSS: {row['cvss_score']:.1f} | Severity: {row['severity']}\")\n       print(f\"  {row['description'][:100]}...\")\n   print(\"\\n\\nGenerating visualizations...\")\n   visualize_results(df, priority_scores, feature_imp)\n   print(\"\\n\" + \"=\"*70)\n   print(\"ANALYSIS COMPLETE\")\n   print(\"=\"*70)\n   print(f\"\\nResults summary:\")\n   print(f\"  High Priority (&gt;0.7): {(priority_scores &gt; 0.7).sum()} 
vulnerabilities\")\n   print(f\"  Medium Priority (0.4-0.7): {((priority_scores &gt;= 0.4) &amp; (priority_scores &lt;= 0.7)).sum()}\")\n   print(f\"  Low Priority (&lt;0.4): {(priority_scores &lt; 0.4).sum()}\")\n   return df, prioritizer, analyzer\n\n\nif __name__ == \"__main__\":\n   results_df, prioritizer, analyzer = main()\n   print(\"\\n\u2713 All analyses completed successfully!\")\n   print(\"\\nYou can now:\")\n   print(\"  - Access results via 'results_df' DataFrame\")\n   print(\"  - Use 'prioritizer' to predict new vulnerabilities\")\n   print(\"  - Explore 'analyzer' for clustering insights\")<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We generate a visual analysis dashboard that presents priority distributions, feature importance, clusters, and attack vectors. We execute the complete pipeline, rank the highest-priority vulnerabilities, and summarize actionable insights. It turns raw model outputs into decision-ready intelligence.<\/p>\n<p>In conclusion, we demonstrated how vulnerability management can evolve from static scoring to intelligent prioritization using machine learning and semantic analysis. By combining embeddings, metadata, clustering, and explainability, we created a system that better reflects real-world exploit risk and operational urgency. 
It lays the groundwork for adaptive security pipelines where prioritization improves continuously as new vulnerability data emerges.<\/p>\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n<p>Check out the\u00a0<strong><a href=\"https:\/\/github.com\/Marktechpost\/AI-Tutorial-Codes-Included\/blob\/main\/Security\/ai_assisted_vulnerability_prioritization_ml_nlp_marktechpost.py\" target=\"_blank\" rel=\"noreferrer noopener\">FULL CODES here<\/a><\/strong>.<\/p>\n<p>The post <a href=\"https:\/\/www.marktechpost.com\/2026\/01\/23\/how-machine-learning-and-semantic-embeddings-reorder-cve-vulnerabilities-beyond-raw-cvss-scores\/\">How Machine Learning and Semantic Embeddings Reorder CVE Vulnerabilities Beyond Raw CVSS Scores<\/a> appeared first on <a href=\"https:\/\/www.marktechpost.com\/\">MarkTechPost<\/a>.<\/p>","protected":false},"excerpt":{"rendered":"<p>In this tutorial, we build an AI-assisted vulnerability scanner that goes beyond static CVSS scoring and instead learns to prioritize vulnerabilities using semantic understanding and machine learning. 
We treat vulnerability descriptions as rich linguistic artifacts, embed them using modern sentence transformers, and combine these representations with structural metadata to produce a data-driven priority score. Also, we demonstrate how security teams can shift from rule-based triage to adaptive, explainable, ML-driven risk assessment. Check out the\u00a0FULL CODES here. Copy CodeCopiedUse a different Browser print(&#8220;Installing required packages&#8230;&#8221;) import subprocess import sys packages = [ &#8216;sentence-transformers&#8217;, &#8216;scikit-learn&#8217;, &#8216;pandas&#8217;, &#8216;numpy&#8217;, &#8216;matplotlib&#8217;, &#8216;seaborn&#8217;, &#8216;requests&#8217; ] for package in packages: subprocess.check_call([sys.executable, &#8216;-m&#8217;, &#8216;pip&#8217;, &#8216;install&#8217;, &#8216;-q&#8217;, package]) import requests import pandas as pd import numpy as np from datetime import datetime, timedelta import json import re from collections import Counter import warnings warnings.filterwarnings(&#8216;ignore&#8217;) from sentence_transformers import SentenceTransformer from sklearn.ensemble import RandomForestClassifier, GradientBoostingRegressor from sklearn.cluster import KMeans from sklearn.preprocessing import StandardScaler from sklearn.model_selection import train_test_split from sklearn.metrics import classification_report, mean_squared_error import matplotlib.pyplot as plt import seaborn as sns print(&#8220;\u2713 All packages installed successfully!n&#8221;) We install and load all required NLP, machine learning, and visualization libraries for the end-to-end pipeline. We ensure the runtime is fully self-contained and ready to execute in Colab or similar notebook environments. It establishes a reproducible foundation for the scanner. Check out the\u00a0FULL CODES here. 
```python
class CVEDataFetcher:
    def __init__(self):
        self.base_url = "https://services.nvd.nist.gov/rest/json/cves/2.0"

    def fetch_recent_cves(self, days=30, max_results=100):
        print(f"Fetching CVEs from last {days} days...")
        end_date = datetime.now()
        start_date = end_date - timedelta(days=days)
        params = {
            'pubStartDate': start_date.strftime('%Y-%m-%dT00:00:00.000'),
            'pubEndDate': end_date.strftime('%Y-%m-%dT23:59:59.999'),
            'resultsPerPage': min(max_results, 2000)
        }
        try:
            response = requests.get(self.base_url, params=params, timeout=30)
            response.raise_for_status()
            data = response.json()
            cves = []
            for item in data.get('vulnerabilities', [])[:max_results]:
                cve = item.get('cve', {})
                cve_id = cve.get('id', 'Unknown')
                descriptions = cve.get('descriptions', [])
                description = next((d['value'] for d in descriptions if d['lang'] == 'en'), 'No description')
                metrics = cve.get('metrics', {})
                cvss_v3 = metrics.get('cvssMetricV31', [{}])[0].get('cvssData', {})
                cvss_v2 = metrics.get('cvssMetricV2', [{}])[0].get('cvssData', {})
                base_score = cvss_v3.get('baseScore') or cvss_v2.get('baseScore') or 0.0
                severity = cvss_v3.get('baseSeverity') or 'UNKNOWN'
                published = cve.get('published', '')
                references = cve.get('references', [])
                cves.append({
                    'cve_id': cve_id,
                    'description': description,
                    'cvss_score': float(base_score),
                    'severity': severity,
                    'published': published,
                    'reference_count': len(references),
                    'attack_vector': cvss_v3.get('attackVector', 'UNKNOWN'),
                    'attack_complexity': cvss_v3.get('attackComplexity', 'UNKNOWN'),
                    'privileges_required': cvss_v3.get('privilegesRequired', 'UNKNOWN'),
                    'user_interaction': cvss_v3.get('userInteraction', 'UNKNOWN')
                })
            print(f"✓ Fetched {len(cves)} CVEs\n")
            return pd.DataFrame(cves)
        except Exception as e:
            print(f"Error fetching CVEs: {e}")
            return self._generate_sample_data(max_results)

    def _generate_sample_data(self, n=50):
        print("Using sample CVE data for demonstration...\n")
        sample_descriptions = [
            "A buffer overflow vulnerability in the network driver allows remote code execution",
            "SQL injection vulnerability in web application login form enables unauthorized access",
            "Cross-site scripting (XSS) vulnerability in user input validation",
            "Authentication bypass in admin panel due to weak session management",
            "Remote code execution via deserialization of untrusted data",
            "Path traversal vulnerability allows reading arbitrary files",
            "Privilege escalation through improper input validation",
            "Denial of service through resource exhaustion in API endpoint",
            "Information disclosure via error messages exposing sensitive data",
            "Memory corruption vulnerability in image processing library",
            "Command injection in file upload functionality",
            "Integer overflow leading to heap buffer overflow",
            "Use-after-free vulnerability in memory management",
            "Race condition in multi-threaded application",
            "Cryptographic weakness in password storage mechanism"
        ]
        severities = ['LOW', 'MEDIUM', 'HIGH', 'CRITICAL']
        attack_vectors = ['NETWORK', 'ADJACENT', 'LOCAL', 'PHYSICAL']
        complexities = ['LOW', 'HIGH']
        data = []
        for i in range(n):
            severity = np.random.choice(severities, p=[0.1, 0.3, 0.4, 0.2])
            score_ranges = {'LOW': (0.1, 3.9), 'MEDIUM': (4.0, 6.9), 'HIGH': (7.0, 8.9), 'CRITICAL': (9.0, 10.0)}
            data.append({
                'cve_id': f'CVE-2024-{10000+i}',
                'description': np.random.choice(sample_descriptions),
                'cvss_score': np.random.uniform(*score_ranges[severity]),
                'severity': severity,
                'published': (datetime.now() - timedelta(days=np.random.randint(1, 30))).isoformat(),
                'reference_count': np.random.randint(1, 10),
                'attack_vector': np.random.choice(attack_vectors),
                'attack_complexity': np.random.choice(complexities),
                'privileges_required': np.random.choice(['NONE', 'LOW', 'HIGH']),
                'user_interaction': np.random.choice(['NONE', 'REQUIRED'])
            })
        return pd.DataFrame(data)
```

We implement a robust CVE ingestion component that pulls recent vulnerabilities directly from the NVD API. We normalize raw CVE records into structured features and gracefully fall back to synthetic data when API access fails, which keeps the tutorial runnable while reflecting real-world challenges in data ingestion. Check out the FULL CODES here.
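To make the normalization step concrete, here is a minimal, self-contained sketch of how one NVD entry is flattened into a feature row. The `record` dict below is a hand-made stand-in for a single element of the API's `vulnerabilities` array, not real API output:

```python
# Hypothetical stand-in for one entry of the NVD 2.0 API's `vulnerabilities` array.
record = {
    "cve": {
        "id": "CVE-2024-0001",
        "descriptions": [
            {"lang": "es", "value": "..."},
            {"lang": "en", "value": "Buffer overflow allows remote code execution"},
        ],
        "metrics": {
            "cvssMetricV31": [{"cvssData": {"baseScore": 9.8,
                                            "baseSeverity": "CRITICAL",
                                            "attackVector": "NETWORK"}}]
        },
        "references": [{"url": "https://example.com/advisory"}],
    }
}

cve = record["cve"]
# Prefer the English description; fall back to a placeholder.
description = next((d["value"] for d in cve.get("descriptions", [])
                    if d["lang"] == "en"), "No description")
# Prefer CVSS v3.1 data; fall back to v2, then to 0.0.
cvss_v3 = cve.get("metrics", {}).get("cvssMetricV31", [{}])[0].get("cvssData", {})
cvss_v2 = cve.get("metrics", {}).get("cvssMetricV2", [{}])[0].get("cvssData", {})
base_score = cvss_v3.get("baseScore") or cvss_v2.get("baseScore") or 0.0

row = {
    "cve_id": cve.get("id", "Unknown"),
    "description": description,
    "cvss_score": float(base_score),
    "severity": cvss_v3.get("baseSeverity") or "UNKNOWN",
    "reference_count": len(cve.get("references", [])),
}
print(row)
```

The chained `or` fallbacks are what let the same code handle older CVEs that only carry CVSS v2 metrics, or records with no metrics at all.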
```python
class VulnerabilityFeatureExtractor:
    def __init__(self):
        print("Loading sentence transformer model...")
        self.model = SentenceTransformer('all-MiniLM-L6-v2')
        print("✓ Model loaded\n")
        self.critical_keywords = {
            'execution': ['remote code execution', 'rce', 'execute', 'arbitrary code'],
            'injection': ['sql injection', 'command injection', 'code injection'],
            'authentication': ['bypass', 'authentication', 'authorization'],
            'overflow': ['buffer overflow', 'heap overflow', 'stack overflow'],
            'exposure': ['information disclosure', 'data leak', 'exposure'],
        }

    def extract_semantic_features(self, descriptions):
        print("Generating semantic embeddings...")
        embeddings = self.model.encode(descriptions, show_progress_bar=True)
        return embeddings

    def extract_keyword_features(self, df):
        print("Extracting keyword features...")
        for category, keywords in self.critical_keywords.items():
            df[f'has_{category}'] = df['description'].apply(
                lambda x: any(kw in x.lower() for kw in keywords)
            ).astype(int)
        df['desc_length'] = df['description'].apply(len)
        df['word_count'] = df['description'].apply(lambda x: len(x.split()))
        return df

    def encode_categorical_features(self, df):
        print("Encoding categorical features...")
        categorical_cols = ['attack_vector', 'attack_complexity',
                            'privileges_required', 'user_interaction']
        for col in categorical_cols:
            dummies = pd.get_dummies(df[col], prefix=col)
            df = pd.concat([df, dummies], axis=1)
        return df
```

We transform unstructured vulnerability descriptions into dense semantic embeddings using a sentence-transformer model. We also extract keyword-based risk indicators and textual statistics that capture exploit intent and complexity. Together, these features bridge linguistic context with quantitative ML inputs. Check out the FULL CODES here.

```python
class VulnerabilityPrioritizer:
    def __init__(self):
        self.severity_classifier = RandomForestClassifier(n_estimators=100, random_state=42)
        self.score_predictor = GradientBoostingRegressor(n_estimators=100, random_state=42)
        self.scaler = StandardScaler()
        self.feature_cols = None

    def prepare_features(self, df, embeddings):
        numeric_features = ['reference_count', 'desc_length', 'word_count']
        keyword_features = [col for col in df.columns if col.startswith('has_')]
        categorical_features = [col for col in df.columns
                                if any(col.startswith(prefix) for prefix in
                                       ['attack_vector_', 'attack_complexity_',
                                        'privileges_required_', 'user_interaction_'])]
        self.feature_cols = numeric_features + keyword_features + categorical_features
        X_structured = df[self.feature_cols].values
        X_embeddings = embeddings
        X_combined = np.hstack([X_structured, X_embeddings])
        return X_combined

    def train_models(self, X, y_severity, y_score):
        print("\nTraining ML models...")
        X_scaled = self.scaler.fit_transform(X)
        X_train, X_test, y_sev_train, y_sev_test, y_score_train, y_score_test = train_test_split(
            X_scaled, y_severity, y_score, test_size=0.2, random_state=42
        )
        self.severity_classifier.fit(X_train, y_sev_train)
        sev_pred = self.severity_classifier.predict(X_test)
        self.score_predictor.fit(X_train, y_score_train)
        score_pred = self.score_predictor.predict(X_test)
        print("\n--- Severity Classification Report ---")
        print(classification_report(y_sev_test, sev_pred))
        print("\n--- CVSS Score Prediction ---")
        print(f"RMSE: {np.sqrt(mean_squared_error(y_score_test, score_pred)):.2f}")
        return X_scaled

    def predict_priority(self, X):
        X_scaled = self.scaler.transform(X)
        severity_pred = self.severity_classifier.predict_proba(X_scaled)
        score_pred = self.score_predictor.predict(X_scaled)
        severity_weight = severity_pred[:, -1] * 0.4
        score_weight = (score_pred / 10.0) * 0.6
        priority_score = severity_weight + score_weight
        return priority_score, severity_pred, score_pred

    def get_feature_importance(self):
        importance = self.score_predictor.feature_importances_
        n_structured = len(self.feature_cols)
        structured_importance = importance[:n_structured]
        embedding_importance = importance[n_structured:]
        feature_imp_df = pd.DataFrame({
            'feature': self.feature_cols,
            'importance': structured_importance
        }).sort_values('importance', ascending=False)
        return feature_imp_df, embedding_importance.mean()
```

We train supervised models on the combined structured-plus-embedding feature matrix: a random forest classifier for severity and a gradient boosting regressor for the CVSS score. We then blend the classifier's high-severity probability (40%) with the normalized predicted score (60%) into a single priority score, and expose feature importances to keep the ranking explainable.
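The fusion inside `predict_priority` is easy to check with plain numbers. A minimal sketch, using hypothetical model outputs rather than predictions from the trained models:

```python
import numpy as np

# Sketch of the priority fusion: 40% weight on the classifier's probability
# of the highest-severity class, 60% on the predicted CVSS score normalized
# to [0, 1]. The arrays below are hypothetical model outputs.
severity_proba = np.array([
    [0.7, 0.3],   # mostly low severity
    [0.1, 0.9],   # mostly high severity
])
score_pred = np.array([3.0, 9.5])  # predicted CVSS base scores

priority = severity_proba[:, -1] * 0.4 + (score_pred / 10.0) * 0.6
print(priority)  # the high-severity, high-score CVE ranks first when sorted descending
```

Because both terms live in [0, 1], the blended score stays in [0, 1] as well, which makes rankings comparable across batches.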
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"ast-content-background-meta":{"desktop":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"_pvb_checkbox_block_on_post":false,"footnotes":""},"categories":[52,5,7,1],"tags":[],"class_list":["post-66329","post","type-post","status-publish","format-standard","hentry","category-ai-club","category-committee","category-news","category-uncategorized","pmpro-has-access"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v25.3 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>How Machine Learning and Semantic Embeddings Reorder CVE Vulnerabilities Beyond Raw CVSS Scores - YouZum<\/title>\n<meta name=\"description\" content=\"\u0e01\u0e34\u0e08\u0e01\u0e23\u0e23\u0e21\u0e40\u0e01\u0e35\u0e48\u0e22\u0e27\u0e01\u0e31\u0e1a\u0e42\u0e14\u0e23\u0e19\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/youzum.net\/es\/how-machine-learning-and-semantic-embeddings-reorder-cve-vulnerabilities-beyond-raw-cvss-scores\/\" \/>\n<meta property=\"og:locale\" content=\"es_ES\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"How Machine Learning and Semantic Embeddings Reorder CVE Vulnerabilities Beyond Raw CVSS Scores - YouZum\" \/>\n<meta property=\"og:description\" content=\"\u0e01\u0e34\u0e08\u0e01\u0e23\u0e23\u0e21\u0e40\u0e01\u0e35\u0e48\u0e22\u0e27\u0e01\u0e31\u0e1a\u0e42\u0e14\u0e23\u0e19\" \/>\n<meta property=\"og:url\" content=\"https:\/\/youzum.net\/es\/how-machine-learning-and-semantic-embeddings-reorder-cve-vulnerabilities-beyond-raw-cvss-scores\/\" \/>\n<meta property=\"og:site_name\" content=\"YouZum\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/DroneAssociationTH\/\" \/>\n<meta property=\"article:published_time\" 
content=\"2026-01-24T11:07:39+00:00\" \/>\n<meta name=\"author\" content=\"admin NU\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Escrito por\" \/>\n\t<meta name=\"twitter:data1\" content=\"admin NU\" \/>\n\t<meta name=\"twitter:label2\" content=\"Tiempo de lectura\" \/>\n\t<meta name=\"twitter:data2\" content=\"12 minutos\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/youzum.net\/how-machine-learning-and-semantic-embeddings-reorder-cve-vulnerabilities-beyond-raw-cvss-scores\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/youzum.net\/how-machine-learning-and-semantic-embeddings-reorder-cve-vulnerabilities-beyond-raw-cvss-scores\/\"},\"author\":{\"name\":\"admin NU\",\"@id\":\"https:\/\/yousum.gpucore.co\/#\/schema\/person\/97fa48242daf3908e4d9a5f26f4a059c\"},\"headline\":\"How Machine Learning and Semantic Embeddings Reorder CVE Vulnerabilities Beyond Raw CVSS Scores\",\"datePublished\":\"2026-01-24T11:07:39+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/youzum.net\/how-machine-learning-and-semantic-embeddings-reorder-cve-vulnerabilities-beyond-raw-cvss-scores\/\"},\"wordCount\":530,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/yousum.gpucore.co\/#organization\"},\"articleSection\":[\"AI\",\"Committee\",\"News\",\"Uncategorized\"],\"inLanguage\":\"es\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/youzum.net\/how-machine-learning-and-semantic-embeddings-reorder-cve-vulnerabilities-beyond-raw-cvss-scores\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/youzum.net\/how-machine-learning-and-semantic-embeddings-reorder-cve-vulnerabilities-beyond-raw-cvss-scores\/\",\"url\":\"https:\/\/youzum.net\/how-machine-learning-and-semantic-embeddings-reorder-cve-vulnerabilities-beyond-raw-cvss-scores\/\",\"name\":\"How Machine 
Learning and Semantic Embeddings Reorder CVE Vulnerabilities Beyond Raw CVSS Scores - YouZum\",\"isPartOf\":{\"@id\":\"https:\/\/yousum.gpucore.co\/#website\"},\"datePublished\":\"2026-01-24T11:07:39+00:00\",\"description\":\"\u0e01\u0e34\u0e08\u0e01\u0e23\u0e23\u0e21\u0e40\u0e01\u0e35\u0e48\u0e22\u0e27\u0e01\u0e31\u0e1a\u0e42\u0e14\u0e23\u0e19\",\"breadcrumb\":{\"@id\":\"https:\/\/youzum.net\/how-machine-learning-and-semantic-embeddings-reorder-cve-vulnerabilities-beyond-raw-cvss-scores\/#breadcrumb\"},\"inLanguage\":\"es\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/youzum.net\/how-machine-learning-and-semantic-embeddings-reorder-cve-vulnerabilities-beyond-raw-cvss-scores\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/youzum.net\/how-machine-learning-and-semantic-embeddings-reorder-cve-vulnerabilities-beyond-raw-cvss-scores\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/youzum.net\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"How Machine Learning and Semantic Embeddings Reorder CVE Vulnerabilities Beyond Raw CVSS Scores\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/yousum.gpucore.co\/#website\",\"url\":\"https:\/\/yousum.gpucore.co\/\",\"name\":\"YouSum\",\"description\":\"\",\"publisher\":{\"@id\":\"https:\/\/yousum.gpucore.co\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/yousum.gpucore.co\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"es\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/yousum.gpucore.co\/#organization\",\"name\":\"Drone Association 
Thailand\",\"url\":\"https:\/\/yousum.gpucore.co\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"es\",\"@id\":\"https:\/\/yousum.gpucore.co\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/youzum.net\/wp-content\/uploads\/2024\/11\/tranparent-logo.png\",\"contentUrl\":\"https:\/\/youzum.net\/wp-content\/uploads\/2024\/11\/tranparent-logo.png\",\"width\":300,\"height\":300,\"caption\":\"Drone Association Thailand\"},\"image\":{\"@id\":\"https:\/\/yousum.gpucore.co\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.facebook.com\/DroneAssociationTH\/\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/yousum.gpucore.co\/#\/schema\/person\/97fa48242daf3908e4d9a5f26f4a059c\",\"name\":\"admin NU\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"es\",\"@id\":\"https:\/\/yousum.gpucore.co\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/youzum.net\/wp-content\/uploads\/avatars\/2\/1746849356-bpfull.png\",\"contentUrl\":\"https:\/\/youzum.net\/wp-content\/uploads\/avatars\/2\/1746849356-bpfull.png\",\"caption\":\"admin NU\"},\"url\":\"https:\/\/youzum.net\/es\/members\/adminnu\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"How Machine Learning and Semantic Embeddings Reorder CVE Vulnerabilities Beyond Raw CVSS Scores - YouZum","description":"\u0e01\u0e34\u0e08\u0e01\u0e23\u0e23\u0e21\u0e40\u0e01\u0e35\u0e48\u0e22\u0e27\u0e01\u0e31\u0e1a\u0e42\u0e14\u0e23\u0e19","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/youzum.net\/es\/how-machine-learning-and-semantic-embeddings-reorder-cve-vulnerabilities-beyond-raw-cvss-scores\/","og_locale":"es_ES","og_type":"article","og_title":"How Machine Learning and Semantic Embeddings Reorder CVE Vulnerabilities Beyond Raw CVSS Scores - YouZum","og_description":"\u0e01\u0e34\u0e08\u0e01\u0e23\u0e23\u0e21\u0e40\u0e01\u0e35\u0e48\u0e22\u0e27\u0e01\u0e31\u0e1a\u0e42\u0e14\u0e23\u0e19","og_url":"https:\/\/youzum.net\/es\/how-machine-learning-and-semantic-embeddings-reorder-cve-vulnerabilities-beyond-raw-cvss-scores\/","og_site_name":"YouZum","article_publisher":"https:\/\/www.facebook.com\/DroneAssociationTH\/","article_published_time":"2026-01-24T11:07:39+00:00","author":"admin NU","twitter_card":"summary_large_image","twitter_misc":{"Escrito por":"admin NU","Tiempo de lectura":"12 minutos"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/youzum.net\/how-machine-learning-and-semantic-embeddings-reorder-cve-vulnerabilities-beyond-raw-cvss-scores\/#article","isPartOf":{"@id":"https:\/\/youzum.net\/how-machine-learning-and-semantic-embeddings-reorder-cve-vulnerabilities-beyond-raw-cvss-scores\/"},"author":{"name":"admin NU","@id":"https:\/\/yousum.gpucore.co\/#\/schema\/person\/97fa48242daf3908e4d9a5f26f4a059c"},"headline":"How Machine Learning and Semantic Embeddings Reorder CVE Vulnerabilities Beyond Raw CVSS 
Scores","datePublished":"2026-01-24T11:07:39+00:00","mainEntityOfPage":{"@id":"https:\/\/youzum.net\/how-machine-learning-and-semantic-embeddings-reorder-cve-vulnerabilities-beyond-raw-cvss-scores\/"},"wordCount":530,"commentCount":0,"publisher":{"@id":"https:\/\/yousum.gpucore.co\/#organization"},"articleSection":["AI","Committee","News","Uncategorized"],"inLanguage":"es","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/youzum.net\/how-machine-learning-and-semantic-embeddings-reorder-cve-vulnerabilities-beyond-raw-cvss-scores\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/youzum.net\/how-machine-learning-and-semantic-embeddings-reorder-cve-vulnerabilities-beyond-raw-cvss-scores\/","url":"https:\/\/youzum.net\/how-machine-learning-and-semantic-embeddings-reorder-cve-vulnerabilities-beyond-raw-cvss-scores\/","name":"How Machine Learning and Semantic Embeddings Reorder CVE Vulnerabilities Beyond Raw CVSS Scores - YouZum","isPartOf":{"@id":"https:\/\/yousum.gpucore.co\/#website"},"datePublished":"2026-01-24T11:07:39+00:00","description":"\u0e01\u0e34\u0e08\u0e01\u0e23\u0e23\u0e21\u0e40\u0e01\u0e35\u0e48\u0e22\u0e27\u0e01\u0e31\u0e1a\u0e42\u0e14\u0e23\u0e19","breadcrumb":{"@id":"https:\/\/youzum.net\/how-machine-learning-and-semantic-embeddings-reorder-cve-vulnerabilities-beyond-raw-cvss-scores\/#breadcrumb"},"inLanguage":"es","potentialAction":[{"@type":"ReadAction","target":["https:\/\/youzum.net\/how-machine-learning-and-semantic-embeddings-reorder-cve-vulnerabilities-beyond-raw-cvss-scores\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/youzum.net\/how-machine-learning-and-semantic-embeddings-reorder-cve-vulnerabilities-beyond-raw-cvss-scores\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/youzum.net\/"},{"@type":"ListItem","position":2,"name":"How Machine Learning and Semantic Embeddings Reorder CVE Vulnerabilities Beyond Raw CVSS 
Scores"}]},{"@type":"WebSite","@id":"https:\/\/yousum.gpucore.co\/#website","url":"https:\/\/yousum.gpucore.co\/","name":"YouSum","description":"","publisher":{"@id":"https:\/\/yousum.gpucore.co\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/yousum.gpucore.co\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"es"},{"@type":"Organization","@id":"https:\/\/yousum.gpucore.co\/#organization","name":"Drone Association Thailand","url":"https:\/\/yousum.gpucore.co\/","logo":{"@type":"ImageObject","inLanguage":"es","@id":"https:\/\/yousum.gpucore.co\/#\/schema\/logo\/image\/","url":"https:\/\/youzum.net\/wp-content\/uploads\/2024\/11\/tranparent-logo.png","contentUrl":"https:\/\/youzum.net\/wp-content\/uploads\/2024\/11\/tranparent-logo.png","width":300,"height":300,"caption":"Drone Association Thailand"},"image":{"@id":"https:\/\/yousum.gpucore.co\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/DroneAssociationTH\/"]},{"@type":"Person","@id":"https:\/\/yousum.gpucore.co\/#\/schema\/person\/97fa48242daf3908e4d9a5f26f4a059c","name":"admin NU","image":{"@type":"ImageObject","inLanguage":"es","@id":"https:\/\/yousum.gpucore.co\/#\/schema\/person\/image\/","url":"https:\/\/youzum.net\/wp-content\/uploads\/avatars\/2\/1746849356-bpfull.png","contentUrl":"https:\/\/youzum.net\/wp-content\/uploads\/avatars\/2\/1746849356-bpfull.png","caption":"admin NU"},"url":"https:\/\/youzum.net\/es\/members\/adminnu\/"}]}},"rttpg_featured_image_url":null,"rttpg_author":{"display_name":"admin NU","author_link":"https:\/\/youzum.net\/es\/members\/adminnu\/"},"rttpg_comment":0,"rttpg_category":"<a href=\"https:\/\/youzum.net\/es\/category\/ai-club\/\" rel=\"category tag\">AI<\/a> <a href=\"https:\/\/youzum.net\/es\/category\/committee\/\" rel=\"category tag\">Committee<\/a> <a 
href=\"https:\/\/youzum.net\/es\/category\/news\/\" rel=\"category tag\">News<\/a> <a href=\"https:\/\/youzum.net\/es\/category\/uncategorized\/\" rel=\"category tag\">Uncategorized<\/a>","rttpg_excerpt":"In this tutorial, we build an AI-assisted vulnerability scanner that goes beyond static CVSS scoring and instead learns to prioritize vulnerabilities using semantic understanding and machine learning. We treat vulnerability descriptions as rich linguistic artifacts, embed them using modern sentence transformers, and combine these representations with structural metadata to produce a data-driven priority score. Also,&hellip;","_links":{"self":[{"href":"https:\/\/youzum.net\/es\/wp-json\/wp\/v2\/posts\/66329","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/youzum.net\/es\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/youzum.net\/es\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/youzum.net\/es\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/youzum.net\/es\/wp-json\/wp\/v2\/comments?post=66329"}],"version-history":[{"count":0,"href":"https:\/\/youzum.net\/es\/wp-json\/wp\/v2\/posts\/66329\/revisions"}],"wp:attachment":[{"href":"https:\/\/youzum.net\/es\/wp-json\/wp\/v2\/media?parent=66329"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/youzum.net\/es\/wp-json\/wp\/v2\/categories?post=66329"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/youzum.net\/es\/wp-json\/wp\/v2\/tags?post=66329"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}