{"id":74538,"date":"2026-03-01T11:59:23","date_gmt":"2026-03-01T11:59:23","guid":{"rendered":"https:\/\/youzum.net\/a-complete-end-to-end-coding-guide-to-mlflow-experiment-tracking-hyperparameter-optimization-model-evaluation-and-live-model-deployment\/"},"modified":"2026-03-01T11:59:23","modified_gmt":"2026-03-01T11:59:23","slug":"a-complete-end-to-end-coding-guide-to-mlflow-experiment-tracking-hyperparameter-optimization-model-evaluation-and-live-model-deployment","status":"publish","type":"post","link":"https:\/\/youzum.net\/th\/a-complete-end-to-end-coding-guide-to-mlflow-experiment-tracking-hyperparameter-optimization-model-evaluation-and-live-model-deployment\/","title":{"rendered":"A Complete End-to-End Coding Guide to MLflow Experiment Tracking, Hyperparameter Optimization, Model Evaluation, and Live Model Deployment"},"content":{"rendered":"<p>In this tutorial, we build a complete, production-grade ML experimentation and deployment workflow using <a href=\"https:\/\/github.com\/mlflow\/mlflow\"><strong>MLflow<\/strong><\/a>. We start by launching a dedicated MLflow Tracking Server with a structured backend and artifact store, enabling us to track experiments in a scalable, reproducible manner. We then train multiple machine learning models using a nested hyperparameter sweep while automatically logging parameters, metrics, and model artifacts. We enhance the experiment by logging diagnostic visualizations, evaluating the best model using MLflow\u2019s built-in evaluation framework, and storing detailed evaluation results for future analysis. 
We also deploy the trained model using MLflow\u2019s native serving capabilities and interact with it via a REST API, demonstrating how MLflow bridges the gap between experimentation and real-world model deployment.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\"no-line-numbers\"><code class=\"no-wrap language-php\">!pip -q install \"mlflow&gt;=3.0.0\" scikit-learn pandas numpy matplotlib requests\n\n\nimport os\nimport time\nimport json\nimport shutil\nimport socket\nimport signal\nimport subprocess\nfrom pathlib import Path\n\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport requests\n\n\nfrom sklearn.datasets import load_breast_cancer\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.metrics import (\n   roc_auc_score,\n   accuracy_score,\n   precision_score,\n   recall_score,\n   f1_score,\n   confusion_matrix,\n   ConfusionMatrixDisplay,\n)\n\n\nimport mlflow\nimport mlflow.sklearn\nfrom mlflow.models.signature import infer_signature\n\n\ndef _is_port_open(host: str, port: int, timeout_s: float = 0.2) -&gt; bool:\n   with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:\n       s.settimeout(timeout_s)\n       return s.connect_ex((host, port)) == 0\n\n\ndef _wait_for_http(url: str, timeout_s: int = 30) -&gt; None:\n   t0 = 
time.time()\n   last_err = None\n   while time.time() - t0 &lt; timeout_s:\n       try:\n           r = requests.get(url, timeout=1)\n           if r.status_code &lt; 500:\n               return\n       except Exception as e:\n           last_err = e\n       time.sleep(0.5)\n   raise RuntimeError(f\"Server not ready at {url}. Last error: {last_err}\")\n\n\ndef _safe_kill(proc: subprocess.Popen):\n   if proc is None:\n       return\n   try:\n       proc.terminate()\n       try:\n           proc.wait(timeout=5)\n       except subprocess.TimeoutExpired:\n           proc.kill()\n   except Exception:\n       pass<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We install all required dependencies and import the complete MLflow, scikit-learn, and system libraries needed for experiment tracking and deployment. We define utility functions that allow us to check port availability, wait for server readiness, and safely terminate background processes. We establish the foundational infrastructure to ensure our MLflow tracking server and model-serving components operate reliably in the Colab environment.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\"no-line-numbers\"><code class=\"no-wrap language-php\">BASE_DIR = Path(\"\/content\/mlflow_colab_demo\").resolve()\nBACKEND_DB = BASE_DIR \/ \"mlflow.db\"\nARTIFACT_ROOT = BASE_DIR \/ \"mlartifacts\"\nos.makedirs(BASE_DIR, exist_ok=True)\nos.makedirs(ARTIFACT_ROOT, exist_ok=True)\n\n\nHOST = \"127.0.0.1\"\nPORT 
= 5000\nTRACKING_URI = f\"http:\/\/{HOST}:{PORT}\"\n\n\nif _is_port_open(HOST, PORT):\n   for p in range(5001, 5015):\n       if not _is_port_open(HOST, p):\n           PORT = p\n           TRACKING_URI = f\"http:\/\/{HOST}:{PORT}\"\n           break\n\n\nprint(\"Using TRACKING_URI:\", TRACKING_URI)\nprint(\"Backend DB:\", BACKEND_DB)\nprint(\"Artifact root:\", ARTIFACT_ROOT)\n\n\nserver_cmd = [\n   \"mlflow\",\n   \"server\",\n   \"--host\", HOST,\n   \"--port\", str(PORT),\n   \"--backend-store-uri\", f\"sqlite:\/\/\/{BACKEND_DB}\",\n   \"--default-artifact-root\", str(ARTIFACT_ROOT),\n]\n\n\nmlflow_server = subprocess.Popen(\n   server_cmd,\n   stdout=subprocess.PIPE,\n   stderr=subprocess.STDOUT,\n   text=True,\n)\n\n\n_wait_for_http(TRACKING_URI, timeout_s=45)\nmlflow.set_tracking_uri(TRACKING_URI)\nprint(\"MLflow server is up.\")\n\n\nEXPERIMENT_NAME = \"colab-advanced-mlflow-sklearn\"\nmlflow.set_experiment(EXPERIMENT_NAME)<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We configure the MLflow backend storage and artifact directories to create a structured, persistent experiment-tracking environment. We launch the MLflow Tracking Server with a SQLite database and a local artifact store, enabling full experiment logging and management. 
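As a quick aside, the port probe that `_is_port_open` performs can be exercised in isolation. This standalone sketch (host and the throwaway listener are illustrative) shows why `connect_ex` works as a readiness check:

```python
# Minimal standalone sketch of the probe used by _is_port_open above:
# connect_ex() returns 0 only when something is listening at (host, port).
import socket

def is_port_open(host: str, port: int, timeout_s: float = 0.2) -> bool:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout_s)
        return s.connect_ex((host, port)) == 0

# Bind a throwaway listener on an OS-assigned free port to demonstrate it.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # port 0: the OS picks a free port
srv.listen(1)
port = srv.getsockname()[1]
was_open = is_port_open("127.0.0.1", port)
srv.close()
print(was_open)  # True while the listener was accepting connections
```

The same check drives both the tracking-server port selection here and the model-server port selection later.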
We connect our notebook to the running MLflow server and initialize a dedicated experiment that will organize all training runs and associated metadata.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\"no-line-numbers\"><code class=\"no-wrap language-python\">data = load_breast_cancer(as_frame=True)\ndf = data.frame.copy()\ntarget_col = \"target\"\n\n\nX = df.drop(columns=[target_col])\ny = df[target_col].astype(int)\n\n\n# Hold out a stratified test split used by the sweep and final evaluation\nX_train, X_test, y_train, y_test = train_test_split(\n   X, y, test_size=0.25, stratify=y, random_state=42\n)\n\n\nmlflow.sklearn.autolog(\n   log_input_examples=False,\n   log_model_signatures=False,\n   silent=True\n)\n\n\nC_VALUES = [0.01, 0.1, 1.0, 3.0]\nSOLVERS = [\"liblinear\", \"lbfgs\"]\n\n\nbest = {\"auc\": -1.0, \"run_id\": None, \"params\": None}<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We load the dataset and prepare the training and testing splits required for machine learning experimentation. We enable MLflow autologging, allowing automatic tracking of parameters, metrics, and model artifacts without manual intervention. 
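The `C_VALUES` and `SOLVERS` lists defined above span eight configurations in total; a quick standard-library sketch confirms the enumeration the nested child runs will cover:

```python
# Enumerate every (C, solver) pair in the sweep grid defined above.
import itertools

C_VALUES = [0.01, 0.1, 1.0, 3.0]
SOLVERS = ["liblinear", "lbfgs"]

grid = list(itertools.product(C_VALUES, SOLVERS))
print(len(grid))  # 8 child runs in total
```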
We define the hyperparameter search space and initialize the structure to identify and store the best-performing model configuration.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\"no-line-numbers\"><code class=\"no-wrap language-php\">with mlflow.start_run(run_name=\"parent_sweep_run\") as parent_run:\n   mlflow.log_param(\"dataset\", \"sklearn_breast_cancer\")\n   mlflow.log_param(\"n_features\", X_train.shape[1])\n   mlflow.log_param(\"n_train\", X_train.shape[0])\n   mlflow.log_param(\"n_test\", X_test.shape[0])\n\n\n   for C in C_VALUES:\n       for solver in SOLVERS:\n           with mlflow.start_run(run_name=f\"child_C={C}_solver={solver}\", nested=True) as child_run:\n               pipe = Pipeline([\n                   (\"scaler\", StandardScaler()),\n                   (\"clf\", LogisticRegression(\n                       C=C,\n                       solver=solver,\n                       penalty=\"l2\",\n                       max_iter=2000,\n                       random_state=42\n                   ))\n               ])\n\n\n               pipe.fit(X_train, y_train)\n               proba = pipe.predict_proba(X_test)[:, 1]\n               pred = (proba &gt;= 0.5).astype(int)\n\n\n               auc = roc_auc_score(y_test, proba)\n               acc = accuracy_score(y_test, pred)\n               prec = precision_score(y_test, pred, zero_division=0)\n               rec = recall_score(y_test, pred, zero_division=0)\n          
     f1 = f1_score(y_test, pred, zero_division=0)\n\n\n               mlflow.log_metrics({\n                   \"test_auc\": float(auc),\n                   \"test_accuracy\": float(acc),\n                   \"test_precision\": float(prec),\n                   \"test_recall\": float(rec),\n                   \"test_f1\": float(f1),\n               })\n\n\n               cm = confusion_matrix(y_test, pred)\n               disp = ConfusionMatrixDisplay(cm, display_labels=data.target_names)\n               fig, ax = plt.subplots(figsize=(5, 4))\n               disp.plot(ax=ax, values_format=\"d\")\n               ax.set_title(f\"Confusion Matrix (C={C}, solver={solver})\")\n               cm_path = BASE_DIR \/ \"confusion_matrix.png\"\n               fig.tight_layout()\n               fig.savefig(cm_path, dpi=140)\n               plt.close(fig)\n               mlflow.log_artifact(str(cm_path), artifact_path=\"diagnostics\")\n\n\n               if auc &gt; best[\"auc\"]:\n                   best.update({\n                       \"auc\": float(auc),\n                       \"run_id\": child_run.info.run_id,\n                       \"params\": {\"C\": C, \"solver\": solver}\n                   })\n\n\n   mlflow.log_dict(best, \"best_run_summary.json\")\n   print(\"Best config:\", best)<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We perform a nested hyperparameter sweep, training multiple models within a structured parent-child run hierarchy. We compute performance metrics and log them alongside diagnostic artifacts, such as confusion matrices, to enable detailed analysis of experiments. 
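The sweep keeps a running `best` record by comparing each child run's AUC as it finishes; the same selection can be expressed as a single `max()` over collected results. The AUC values below are made up purely for illustration:

```python
# Selecting the best configuration from sweep results (illustrative values).
results = [
    {"auc": 0.991, "params": {"C": 0.01, "solver": "liblinear"}},
    {"auc": 0.995, "params": {"C": 1.0, "solver": "lbfgs"}},
    {"auc": 0.993, "params": {"C": 3.0, "solver": "liblinear"}},
]
best = max(results, key=lambda r: r["auc"])
print(best["params"])  # {'C': 1.0, 'solver': 'lbfgs'}
```

The incremental `if auc > best["auc"]` form used in the sweep is equivalent, but avoids holding every result in memory and lets each child run's ID be captured as soon as it wins.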
We continuously monitor model performance and update our tracking structure to identify the best configuration across all training runs.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\"no-line-numbers\"><code class=\"no-wrap language-php\">best_C = best[\"params\"][\"C\"]\nbest_solver = best[\"params\"][\"solver\"]\n\n\nfinal_pipe = Pipeline([\n   (\"scaler\", StandardScaler()),\n   (\"clf\", LogisticRegression(\n       C=best_C,\n       solver=best_solver,\n       penalty=\"l2\",\n       max_iter=2000,\n       random_state=42\n   ))\n])\n\n\nwith mlflow.start_run(run_name=\"final_model_run\") as final_run:\n   final_pipe.fit(X_train, y_train)\n\n\n   proba = final_pipe.predict_proba(X_test)[:, 1]\n   pred = (proba &gt;= 0.5).astype(int)\n\n\n   metrics = {\n       \"test_auc\": float(roc_auc_score(y_test, proba)),\n       \"test_accuracy\": float(accuracy_score(y_test, pred)),\n       \"test_precision\": float(precision_score(y_test, pred, zero_division=0)),\n       \"test_recall\": float(recall_score(y_test, pred, zero_division=0)),\n       \"test_f1\": float(f1_score(y_test, pred, zero_division=0)),\n   }\n   mlflow.log_metrics(metrics)\n   mlflow.log_params({\"C\": best_C, \"solver\": best_solver, \"model\": \"LogisticRegression+StandardScaler\"})\n\n\n   input_example = X_test.iloc[:5].copy()\n   signature = infer_signature(input_example, final_pipe.predict_proba(input_example)[:, 1])\n\n\n   model_info = 
mlflow.sklearn.log_model(\n       sk_model=final_pipe,\n       artifact_path=\"model\",\n       signature=signature,\n       input_example=input_example,\n       registered_model_name=None,\n   )\n\n\n   print(\"Final run_id:\", final_run.info.run_id)\n   print(\"Logged model URI:\", model_info.model_uri)\n\n\n   eval_df = X_test.copy()\n   eval_df[\"label\"] = y_test.values\n\n\n   eval_result = mlflow.models.evaluate(\n       model=model_info.model_uri,\n       data=eval_df,\n       targets=\"label\",\n       model_type=\"classifier\",\n       evaluators=\"default\",\n   )\n\n\n   eval_summary = {\n       \"metrics\": {k: float(v) if isinstance(v, (int, float, np.floating)) else str(v)\n                   for k, v in eval_result.metrics.items()},\n       \"artifacts\": {k: str(v) for k, v in eval_result.artifacts.items()},\n   }\n   mlflow.log_dict(eval_summary, \"evaluation\/eval_summary.json\")<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We train the final model using the best hyperparameters identified during the experiment sweep and log it with a proper signature and input example. We evaluate the model using MLflow\u2019s built-in evaluation framework, which generates detailed metrics and evaluation artifacts. 
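`mlflow.log_dict` needs JSON-serializable values, which is why the summary above coerces NumPy scalars to plain floats before logging. A standalone sketch of that coercion (the metric names and values here are example data):

```python
# Coerce NumPy scalar metrics into JSON-safe plain Python types.
import json
import numpy as np

raw_metrics = {"roc_auc": np.float64(0.996), "n_rows": 143}  # example values
safe = {k: float(v) if isinstance(v, (int, float, np.floating)) else str(v)
        for k, v in raw_metrics.items()}
encoded = json.dumps(safe)
print(encoded)
```

Anything that is not numeric falls through to `str(v)`, which is how the evaluation artifact paths are captured in the summary as well.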
We store the evaluation summary within MLflow, ensuring the final model is fully documented, reproducible, and ready for deployment.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\"no-line-numbers\"><code class=\"no-wrap language-python\">SERVE_PORT = 6000\nif _is_port_open(HOST, SERVE_PORT):\n   for p in range(6001, 6020):\n       if not _is_port_open(HOST, p):\n           SERVE_PORT = p\n           break\n\n\nMODEL_URI = model_info.model_uri\n\n\nserve_cmd = [\n   \"mlflow\", \"models\", \"serve\",\n   \"-m\", MODEL_URI,\n   \"-p\", str(SERVE_PORT),\n   \"--host\", HOST,\n   \"--env-manager\", \"local\"\n]\n\n\nmlflow_serve = subprocess.Popen(\n   serve_cmd,\n   stdout=subprocess.PIPE,\n   stderr=subprocess.STDOUT,\n   text=True,\n)\n\n\nserve_url = f\"http:\/\/{HOST}:{SERVE_PORT}\/invocations\"\n_wait_for_http(f\"http:\/\/{HOST}:{SERVE_PORT}\", timeout_s=60)\nprint(\"Model server is up at:\", serve_url)\n\n\npayload = {\n   \"dataframe_split\": {\n       \"columns\": list(X_test.columns),\n       \"data\": X_test.iloc[:3].values.tolist()\n   }\n}\n\n\nr = requests.post(\n   serve_url,\n   headers={\"Content-Type\": \"application\/json\"},\n   data=json.dumps(payload),\n   timeout=10\n)\nprint(\"Serve status:\", r.status_code)\nprint(\"Predictions (probabilities or outputs):\", r.text)\n\n\nprint(\"\\nOpen the MLflow UI by visiting:\", TRACKING_URI)\nprint(\"Artifacts are stored under:\", 
ARTIFACT_ROOT)<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We deploy the trained MLflow model as a live REST API service using MLflow\u2019s native serving infrastructure. We send a test request to the deployed model endpoint to verify that the model responds correctly and produces predictions in real time. We complete the full machine learning lifecycle by transitioning from experiment tracking to live model deployment within a unified MLflow workflow.<\/p>\n<p>In conclusion, we established a fully integrated ML lifecycle pipeline using MLflow, covering experiment tracking, hyperparameter optimization, artifact logging, model evaluation, and live model serving. We created a structured environment in which every training run is tracked, reproducible, and auditable, enabling efficient experimentation and model comparison. We leveraged MLflow\u2019s model packaging and serving infrastructure to transform trained models into deployable services with minimal effort. By completing this workflow, we demonstrated how MLflow functions as a central orchestration layer for managing machine learning systems, enabling scalable, reproducible, and production-ready ML pipelines entirely within a cloud-based notebook environment.<\/p>\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n<p>Check out the\u00a0<strong><a href=\"https:\/\/github.com\/Marktechpost\/AI-Tutorial-Codes-Included\/blob\/main\/MLFlow%20for%20LLM%20Evaluation\/mlflow_experiment_tracking_and_model_serving_Marktechpost.ipynb\" target=\"_blank\" rel=\"noreferrer noopener\">Full Codes here<\/a>.\u00a0<\/strong>Also,\u00a0feel free to follow us on\u00a0<strong><a href=\"https:\/\/x.com\/intent\/follow?screen_name=marktechpost\" target=\"_blank\" rel=\"noreferrer noopener\"><mark>Twitter<\/mark><\/a><\/strong>\u00a0and don\u2019t forget to join our\u00a0<strong><a href=\"https:\/\/www.reddit.com\/r\/machinelearningnews\/\" target=\"_blank\" rel=\"noreferrer noopener\">120k+ ML 
SubReddit<\/a><\/strong>\u00a0and Subscribe to\u00a0<strong><a href=\"https:\/\/www.aidevsignals.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">our Newsletter<\/a><\/strong>. Wait! Are you on Telegram?\u00a0<strong><a href=\"https:\/\/t.me\/machinelearningresearchnews\" target=\"_blank\" rel=\"noreferrer noopener\">Now you can join us on Telegram as well.<\/a><\/strong><\/p>\n<p>The post <a href=\"https:\/\/www.marktechpost.com\/2026\/03\/01\/a-complete-end-to-end-coding-guide-to-mlflow-experiment-tracking-hyperparameter-optimization-model-evaluation-and-live-model-deployment\/\">A Complete End-to-End Coding Guide to MLflow Experiment Tracking, Hyperparameter Optimization, Model Evaluation, and Live Model Deployment<\/a> appeared first on <a href=\"https:\/\/www.marktechpost.com\/\">MarkTechPost<\/a>.<\/p>","protected":false},"excerpt":{"rendered":"<p>In this tutorial, we build a complete, production-grade ML experimentation and deployment workflow using MLflow. We start by launching a dedicated MLflow Tracking Server with a structured backend and artifact store, enabling us to track experiments in a scalable, reproducible manner. We then train multiple machine learning models using a nested hyperparameter sweep while automatically logging parameters, metrics, and model artifacts. We enhance the experiment by logging diagnostic visualizations, evaluating the best model using MLflow\u2019s built-in evaluation framework, and storing detailed evaluation results for future analysis. We also deploy the trained model using MLflow\u2019s native serving capabilities and interact with it via a REST API, demonstrating how MLflow bridges the gap between experimentation and real-world model deployment. 
<\/p>","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"pmpro_default_level":"","site-sidebar-layout":"default","site-content-layout":"","ast-site-content-layout":"","site-content-style":"default","site-sidebar-style":"default","ast-global-header-display":"","ast-banner-title-visibility":"","ast-main-header-display":"","ast-hfb-above-header-display":"","ast-hfb-below-header-display":"","ast-hfb-mobile-header-display":"","site-post-title":"","ast-breadcrumbs-content":"","ast-featured-img":"","footer-sml-layout":"","theme-transparent-header-meta":"","adv-header-id-meta":"","stick-header-meta":"","header-above-stick-meta":"","header-main-stick-meta":"","header-below-stick-meta":"","astra-migrate-meta-layouts":"default","ast-page-background-enabled":"default","ast-page-background-meta":{"desktop":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"ast-content-background-meta":{"desktop":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"_pvb_checkbox_block_on_post":false,"footnotes":""},"categories":[52,5,7,1],"tags":[],"class_list":["post-74538","post","type-post","status-publish","format-standard","hentry","category-ai-club","category-committee","category-news","category-uncategorized","pmpro-has-access"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v25.3 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>A Complete End-to-End Coding Guide to MLflow Experiment Tracking, 
Hyperparameter Optimization, Model Evaluation, and Live Model Deployment - YouZum<\/title>\n<meta name=\"description\" content=\"\u0e01\u0e34\u0e08\u0e01\u0e23\u0e23\u0e21\u0e40\u0e01\u0e35\u0e48\u0e22\u0e27\u0e01\u0e31\u0e1a\u0e42\u0e14\u0e23\u0e19\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/youzum.net\/th\/a-complete-end-to-end-coding-guide-to-mlflow-experiment-tracking-hyperparameter-optimization-model-evaluation-and-live-model-deployment\/\" \/>\n<meta property=\"og:locale\" content=\"th_TH\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"A Complete End-to-End Coding Guide to MLflow Experiment Tracking, Hyperparameter Optimization, Model Evaluation, and Live Model Deployment - YouZum\" \/>\n<meta property=\"og:description\" content=\"\u0e01\u0e34\u0e08\u0e01\u0e23\u0e23\u0e21\u0e40\u0e01\u0e35\u0e48\u0e22\u0e27\u0e01\u0e31\u0e1a\u0e42\u0e14\u0e23\u0e19\" \/>\n<meta property=\"og:url\" content=\"https:\/\/youzum.net\/th\/a-complete-end-to-end-coding-guide-to-mlflow-experiment-tracking-hyperparameter-optimization-model-evaluation-and-live-model-deployment\/\" \/>\n<meta property=\"og:site_name\" content=\"YouZum\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/DroneAssociationTH\/\" \/>\n<meta property=\"article:published_time\" content=\"2026-03-01T11:59:23+00:00\" \/>\n<meta name=\"author\" content=\"admin NU\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"admin NU\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"9 \u0e19\u0e32\u0e17\u0e35\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/youzum.net\/a-complete-end-to-end-coding-guide-to-mlflow-experiment-tracking-hyperparameter-optimization-model-evaluation-and-live-model-deployment\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/youzum.net\/a-complete-end-to-end-coding-guide-to-mlflow-experiment-tracking-hyperparameter-optimization-model-evaluation-and-live-model-deployment\/\"},\"author\":{\"name\":\"admin NU\",\"@id\":\"https:\/\/yousum.gpucore.co\/#\/schema\/person\/97fa48242daf3908e4d9a5f26f4a059c\"},\"headline\":\"A Complete End-to-End Coding Guide to MLflow Experiment Tracking, Hyperparameter Optimization, Model Evaluation, and Live Model Deployment\",\"datePublished\":\"2026-03-01T11:59:23+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/youzum.net\/a-complete-end-to-end-coding-guide-to-mlflow-experiment-tracking-hyperparameter-optimization-model-evaluation-and-live-model-deployment\/\"},\"wordCount\":687,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/yousum.gpucore.co\/#organization\"},\"articleSection\":[\"AI\",\"Committee\",\"News\",\"Uncategorized\"],\"inLanguage\":\"th\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/youzum.net\/a-complete-end-to-end-coding-guide-to-mlflow-experiment-tracking-hyperparameter-optimization-model-evaluation-and-live-model-deployment\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/youzum.net\/a-complete-end-to-end-coding-guide-to-mlflow-experiment-tracking-hyperparameter-optimization-model-evaluation-and-live-model-deployment\/\",\"url\":\"https:\/\/youzum.net\/a-complete-end-to-end-coding-guide-to-mlflow-experiment-tracking-hyperparameter-optimization-model-evaluation-and-live-model-deployment\/\",\"name\":\"A Complete End-to-End Coding 
Guide to MLflow Experiment Tracking, Hyperparameter Optimization, Model Evaluation, and Live Model Deployment - YouZum\",\"isPartOf\":{\"@id\":\"https:\/\/yousum.gpucore.co\/#website\"},\"datePublished\":\"2026-03-01T11:59:23+00:00\",\"description\":\"\u0e01\u0e34\u0e08\u0e01\u0e23\u0e23\u0e21\u0e40\u0e01\u0e35\u0e48\u0e22\u0e27\u0e01\u0e31\u0e1a\u0e42\u0e14\u0e23\u0e19\",\"breadcrumb\":{\"@id\":\"https:\/\/youzum.net\/a-complete-end-to-end-coding-guide-to-mlflow-experiment-tracking-hyperparameter-optimization-model-evaluation-and-live-model-deployment\/#breadcrumb\"},\"inLanguage\":\"th\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/youzum.net\/a-complete-end-to-end-coding-guide-to-mlflow-experiment-tracking-hyperparameter-optimization-model-evaluation-and-live-model-deployment\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/youzum.net\/a-complete-end-to-end-coding-guide-to-mlflow-experiment-tracking-hyperparameter-optimization-model-evaluation-and-live-model-deployment\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/youzum.net\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"A Complete End-to-End Coding Guide to MLflow Experiment Tracking, Hyperparameter Optimization, Model Evaluation, and Live Model Deployment\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/yousum.gpucore.co\/#website\",\"url\":\"https:\/\/yousum.gpucore.co\/\",\"name\":\"YouSum\",\"description\":\"\",\"publisher\":{\"@id\":\"https:\/\/yousum.gpucore.co\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/yousum.gpucore.co\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"th\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/yousum.gpucore.co\/#organization\",\"name\":\"Drone Association 
Thailand\",\"url\":\"https:\/\/yousum.gpucore.co\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"th\",\"@id\":\"https:\/\/yousum.gpucore.co\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/youzum.net\/wp-content\/uploads\/2024\/11\/tranparent-logo.png\",\"contentUrl\":\"https:\/\/youzum.net\/wp-content\/uploads\/2024\/11\/tranparent-logo.png\",\"width\":300,\"height\":300,\"caption\":\"Drone Association Thailand\"},\"image\":{\"@id\":\"https:\/\/yousum.gpucore.co\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.facebook.com\/DroneAssociationTH\/\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/yousum.gpucore.co\/#\/schema\/person\/97fa48242daf3908e4d9a5f26f4a059c\",\"name\":\"admin NU\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"th\",\"@id\":\"https:\/\/yousum.gpucore.co\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/youzum.net\/wp-content\/uploads\/avatars\/2\/1746849356-bpfull.png\",\"contentUrl\":\"https:\/\/youzum.net\/wp-content\/uploads\/avatars\/2\/1746849356-bpfull.png\",\"caption\":\"admin NU\"},\"url\":\"https:\/\/youzum.net\/th\/members\/adminnu\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"A Complete End-to-End Coding Guide to MLflow Experiment Tracking, Hyperparameter Optimization, Model Evaluation, and Live Model Deployment - YouZum","description":"\u0e01\u0e34\u0e08\u0e01\u0e23\u0e23\u0e21\u0e40\u0e01\u0e35\u0e48\u0e22\u0e27\u0e01\u0e31\u0e1a\u0e42\u0e14\u0e23\u0e19","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/youzum.net\/th\/a-complete-end-to-end-coding-guide-to-mlflow-experiment-tracking-hyperparameter-optimization-model-evaluation-and-live-model-deployment\/","og_locale":"th_TH","og_type":"article","og_title":"A Complete End-to-End Coding Guide to MLflow Experiment Tracking, Hyperparameter Optimization, Model Evaluation, and Live Model Deployment - YouZum","og_description":"\u0e01\u0e34\u0e08\u0e01\u0e23\u0e23\u0e21\u0e40\u0e01\u0e35\u0e48\u0e22\u0e27\u0e01\u0e31\u0e1a\u0e42\u0e14\u0e23\u0e19","og_url":"https:\/\/youzum.net\/th\/a-complete-end-to-end-coding-guide-to-mlflow-experiment-tracking-hyperparameter-optimization-model-evaluation-and-live-model-deployment\/","og_site_name":"YouZum","article_publisher":"https:\/\/www.facebook.com\/DroneAssociationTH\/","article_published_time":"2026-03-01T11:59:23+00:00","author":"admin NU","twitter_card":"summary_large_image","twitter_misc":{"Written by":"admin NU","Est. 
reading time":"9 \u0e19\u0e32\u0e17\u0e35"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/youzum.net\/a-complete-end-to-end-coding-guide-to-mlflow-experiment-tracking-hyperparameter-optimization-model-evaluation-and-live-model-deployment\/#article","isPartOf":{"@id":"https:\/\/youzum.net\/a-complete-end-to-end-coding-guide-to-mlflow-experiment-tracking-hyperparameter-optimization-model-evaluation-and-live-model-deployment\/"},"author":{"name":"admin NU","@id":"https:\/\/yousum.gpucore.co\/#\/schema\/person\/97fa48242daf3908e4d9a5f26f4a059c"},"headline":"A Complete End-to-End Coding Guide to MLflow Experiment Tracking, Hyperparameter Optimization, Model Evaluation, and Live Model Deployment","datePublished":"2026-03-01T11:59:23+00:00","mainEntityOfPage":{"@id":"https:\/\/youzum.net\/a-complete-end-to-end-coding-guide-to-mlflow-experiment-tracking-hyperparameter-optimization-model-evaluation-and-live-model-deployment\/"},"wordCount":687,"commentCount":0,"publisher":{"@id":"https:\/\/yousum.gpucore.co\/#organization"},"articleSection":["AI","Committee","News","Uncategorized"],"inLanguage":"th","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/youzum.net\/a-complete-end-to-end-coding-guide-to-mlflow-experiment-tracking-hyperparameter-optimization-model-evaluation-and-live-model-deployment\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/youzum.net\/a-complete-end-to-end-coding-guide-to-mlflow-experiment-tracking-hyperparameter-optimization-model-evaluation-and-live-model-deployment\/","url":"https:\/\/youzum.net\/a-complete-end-to-end-coding-guide-to-mlflow-experiment-tracking-hyperparameter-optimization-model-evaluation-and-live-model-deployment\/","name":"A Complete End-to-End Coding Guide to MLflow Experiment Tracking, Hyperparameter Optimization, Model Evaluation, and Live Model Deployment - 
YouZum","isPartOf":{"@id":"https:\/\/yousum.gpucore.co\/#website"},"datePublished":"2026-03-01T11:59:23+00:00","description":"\u0e01\u0e34\u0e08\u0e01\u0e23\u0e23\u0e21\u0e40\u0e01\u0e35\u0e48\u0e22\u0e27\u0e01\u0e31\u0e1a\u0e42\u0e14\u0e23\u0e19","breadcrumb":{"@id":"https:\/\/youzum.net\/a-complete-end-to-end-coding-guide-to-mlflow-experiment-tracking-hyperparameter-optimization-model-evaluation-and-live-model-deployment\/#breadcrumb"},"inLanguage":"th","potentialAction":[{"@type":"ReadAction","target":["https:\/\/youzum.net\/a-complete-end-to-end-coding-guide-to-mlflow-experiment-tracking-hyperparameter-optimization-model-evaluation-and-live-model-deployment\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/youzum.net\/a-complete-end-to-end-coding-guide-to-mlflow-experiment-tracking-hyperparameter-optimization-model-evaluation-and-live-model-deployment\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/youzum.net\/"},{"@type":"ListItem","position":2,"name":"A Complete End-to-End Coding Guide to MLflow Experiment Tracking, Hyperparameter Optimization, Model Evaluation, and Live Model Deployment"}]},{"@type":"WebSite","@id":"https:\/\/yousum.gpucore.co\/#website","url":"https:\/\/yousum.gpucore.co\/","name":"YouSum","description":"","publisher":{"@id":"https:\/\/yousum.gpucore.co\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/yousum.gpucore.co\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"th"},{"@type":"Organization","@id":"https:\/\/yousum.gpucore.co\/#organization","name":"Drone Association 
Thailand","url":"https:\/\/yousum.gpucore.co\/","logo":{"@type":"ImageObject","inLanguage":"th","@id":"https:\/\/yousum.gpucore.co\/#\/schema\/logo\/image\/","url":"https:\/\/youzum.net\/wp-content\/uploads\/2024\/11\/tranparent-logo.png","contentUrl":"https:\/\/youzum.net\/wp-content\/uploads\/2024\/11\/tranparent-logo.png","width":300,"height":300,"caption":"Drone Association Thailand"},"image":{"@id":"https:\/\/yousum.gpucore.co\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/DroneAssociationTH\/"]},{"@type":"Person","@id":"https:\/\/yousum.gpucore.co\/#\/schema\/person\/97fa48242daf3908e4d9a5f26f4a059c","name":"admin NU","image":{"@type":"ImageObject","inLanguage":"th","@id":"https:\/\/yousum.gpucore.co\/#\/schema\/person\/image\/","url":"https:\/\/youzum.net\/wp-content\/uploads\/avatars\/2\/1746849356-bpfull.png","contentUrl":"https:\/\/youzum.net\/wp-content\/uploads\/avatars\/2\/1746849356-bpfull.png","caption":"admin NU"},"url":"https:\/\/youzum.net\/th\/members\/adminnu\/"}]}},"rttpg_featured_image_url":null,"rttpg_author":{"display_name":"admin NU","author_link":"https:\/\/youzum.net\/th\/members\/adminnu\/"},"rttpg_comment":0,"rttpg_category":"<a href=\"https:\/\/youzum.net\/th\/category\/ai-club\/\" rel=\"category tag\">AI<\/a> <a href=\"https:\/\/youzum.net\/th\/category\/committee\/\" rel=\"category tag\">Committee<\/a> <a href=\"https:\/\/youzum.net\/th\/category\/news\/\" rel=\"category tag\">News<\/a> <a href=\"https:\/\/youzum.net\/th\/category\/uncategorized\/\" rel=\"category tag\">Uncategorized<\/a>","rttpg_excerpt":"In this tutorial, we build a complete, production-grade ML experimentation and deployment workflow using MLflow. We start by launching a dedicated MLflow Tracking Server with a structured backend and artifact store, enabling us to track experiments in a scalable, reproducible manner. 
We then train multiple machine learning models using a nested hyperparameter sweep while automatically&hellip;","_links":{"self":[{"href":"https:\/\/youzum.net\/th\/wp-json\/wp\/v2\/posts\/74538","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/youzum.net\/th\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/youzum.net\/th\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/youzum.net\/th\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/youzum.net\/th\/wp-json\/wp\/v2\/comments?post=74538"}],"version-history":[{"count":0,"href":"https:\/\/youzum.net\/th\/wp-json\/wp\/v2\/posts\/74538\/revisions"}],"wp:attachment":[{"href":"https:\/\/youzum.net\/th\/wp-json\/wp\/v2\/media?parent=74538"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/youzum.net\/th\/wp-json\/wp\/v2\/categories?post=74538"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/youzum.net\/th\/wp-json\/wp\/v2\/tags?post=74538"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}