{"id":87596,"date":"2026-05-02T15:53:49","date_gmt":"2026-05-02T15:53:49","guid":{"rendered":"https:\/\/youzum.net\/a-coding-implementation-of-end-to-end-brain-decoding-from-meg-signals-using-neuralset-and-deep-learning-for-predicting-linguistic-features\/"},"modified":"2026-05-02T15:53:49","modified_gmt":"2026-05-02T15:53:49","slug":"a-coding-implementation-of-end-to-end-brain-decoding-from-meg-signals-using-neuralset-and-deep-learning-for-predicting-linguistic-features","status":"publish","type":"post","link":"https:\/\/youzum.net\/zh\/a-coding-implementation-of-end-to-end-brain-decoding-from-meg-signals-using-neuralset-and-deep-learning-for-predicting-linguistic-features\/","title":{"rendered":"A Coding Implementation of End-to-End Brain Decoding from MEG Signals Using NeuralSet and Deep Learning for Predicting Linguistic Features"},"content":{"rendered":"<p>In this tutorial, we explore how we can decode linguistic features directly from brain signals using a modern <a href=\"https:\/\/github.com\/facebookresearch\/neuroai\"><strong>neuroAI<\/strong><\/a> pipeline. We work with MEG data and build an end-to-end system that transforms raw neural activity into meaningful predictions, in this case, estimating word length from brain responses. We set up the environment, load and process neural events, design a custom feature extractor, and construct a structured data pipeline using NeuralSet. From there, we train a convolutional neural network to learn patterns in the temporal and spatial structure of MEG signals. Throughout the process, we focus on building a clean, modular workflow that mirrors real-world neuroAI research practices.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\"no-line-numbers\"><code class=\"no-wrap language-php\">import subprocess, sys, importlib, pkgutil\ndef pip_install(*pkgs):\n   print(f\"pip install {' '.join(pkgs)} ...\")\n   r = subprocess.run([sys.executable, \"-m\", \"pip\", \"install\", \"-q\", *pkgs],\n                      capture_output=True, text=True)\n   if r.returncode != 0:\n       print(\"pip STDOUT:\", r.stdout[-2000:])\n       print(\"pip STDERR:\", r.stderr[-2000:])\n       raise RuntimeError(\"pip install failed; see output above.\")\n   print(\"  ok\")\n\n\npip_install(\"numpy&gt;=2.0,&lt;2.3\")\npip_install(\"neuralset\")\npip_install(\"neuralfetch\")\n\n\nimport numpy as np\nfrom numpy._core.umath import _center\nprint(f\"numpy {np.__version__} OK\")\n\n\nimport warnings, typing as tp\nwarnings.filterwarnings(\"ignore\")\nimport pandas as pd\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.utils.data import DataLoader\nimport matplotlib.pyplot as plt\n\n\nimport neuralset as ns\nfrom neuralset import extractors as ext_mod<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We install and validate all required dependencies, ensuring critical packages such as NumPy and NeuralSet are properly configured. We perform a quick NumPy check to avoid runtime issues later in the pipeline. We then import all core libraries needed for data processing, modeling, and visualization.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\"no-line-numbers\"><code class=\"no-wrap language-php\">def deep_import(pkg_name: str):\n   try:\n       pkg = importlib.import_module(pkg_name)\n   except Exception as e:\n       print(f\"<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/26a0.png\" alt=\"\u26a0\" class=\"wp-smiley\" \/>  could not import {pkg_name}: {e}\")\n       return\n   if not hasattr(pkg, \"__path__\"):\n       return\n   for m in pkgutil.walk_packages(pkg.__path__, prefix=pkg_name + \".\"):\n       try:\n           importlib.import_module(m.name)\n       except Exception:\n           pass\n\n\ndeep_import(\"neuralfetch\")\ndeep_import(\"neuralset\")\n\n\ntorch.manual_seed(0); np.random.seed(0)\n\n\ncatalog = ns.Study.catalog()\nprint(f\"n{len(catalog)} studies registered.\")\npreferred = [\"Fake2025Meg\", \"Test2025Meg\", \"Test2023Meg\"]\nstudy_name = next((n for n in preferred if n in catalog), None)\n\n\nif study_name is None:\n   meg_studies = [n for n, c in catalog.items() if \"Meg\" in c.neuro_types()]\n   study_name = meg_studies[0] if meg_studies else None\n\n\nif study_name is None:\n   raise RuntimeError(\n       \"No MEG study available. Catalog: \"\n       f\"{sorted(catalog.keys())[:20]}\u2026  \"\n       \"Install neuralfetch correctly (pip install neuralfetch) and re-run.\"\n   )\nprint(f\"\u2192 Using study: {study_name}\")<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We dynamically import all submodules from NeuralFetch and NeuralSet to ensure that all available studies are properly registered. We seed the random number generator for reproducibility and inspect the study catalog to identify available MEG datasets. We then select an appropriate study to use as the foundation for our pipeline.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\"no-line-numbers\"><code class=\"no-wrap language-php\">class CharCount(ext_mod.BaseStatic):\n   event_types: tp.Literal[\"Word\"] = \"Word\"\n   def get_static(self, event) -&gt; torch.Tensor:\n       return torch.tensor([float(len(event.text))], dtype=torch.float32)\n\n\nprint(\"nBuilding chain...\")\nchain = ns.Chain(steps=[\n   {\"name\": study_name, \"path\": str(ns.CACHE_FOLDER)},\n   {\"name\": \"QueryEvents\", \"query\": \"type in ['Word', 'Meg']\"},\n])\nevents = chain.run()\nprint(f\"  \u2192 {len(events)} events; types={sorted(events.type.unique().tolist())}\")\nprint(f\"  \u2192 Words: {(events.type=='Word').sum()} | \"\n     f\"timelines: {events.timeline.nunique()}\")\nprint(\"nSample words:\")\nprint(events[events.type=='Word'][[\"start\",\"duration\",\"text\",\"timeline\"]]\n     .head(5).to_string(index=False))\n\n\nprint(\"nBuilding segmenter...\")\nsegmenter = ns.dataloader.Segmenter(\n   extractors={\n       \"meg\":        {\"name\": \"MegExtractor\", \"frequency\": 100.0},\n       \"char_count\": CharCount(aggregation=\"trigger\"),\n   },\n   trigger_query=\"type == 'Word'\",\n   start=-0.2, duration=0.8,\n   drop_incomplete=True,\n)\ndataset = segmenter.apply(events)\nprint(f\"  \u2192 SegmentDataset: {len(dataset)} segments\")\n\n\ns0 = dataset[0]\nprint(f\"nSingle item:n  meg        : {tuple(s0.data['meg'].shape)}\")\nprint(f\"  char_count : {s0.data['char_count'].item()}  \"\n     f\"(word: {s0.segments[0].trigger.text!r})\")<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We define a custom extractor that computes the character count of each word event, enabling us to create a supervised learning target. We build a processing chain to load and filter relevant events from the selected study. We then segment the MEG signals around word events and construct a dataset ready for modeling.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\"no-line-numbers\"><code class=\"no-wrap language-php\">rng  = np.random.RandomState(42)\nperm = rng.permutation(len(dataset))\nn_tr, n_va = int(0.70*len(dataset)), int(0.15*len(dataset))\ntrain_ds = dataset.select(perm[:n_tr])\nval_ds   = dataset.select(perm[n_tr:n_tr+n_va])\ntest_ds  = dataset.select(perm[n_tr+n_va:])\nprint(f\"nSplit | train={len(train_ds)}  val={len(val_ds)}  test={len(test_ds)}\")\n\n\nmk = lambda d, sh: DataLoader(d, batch_size=32, shuffle=sh,\n                             collate_fn=d.collate_fn, drop_last=False)\ntrain_loader, val_loader, test_loader = mk(train_ds, True), mk(val_ds, False), mk(test_ds, False)\n\n\nprobe = next(iter(train_loader))\nn_ch, n_t = probe.data[\"meg\"].shape[-2:]\nprint(f\"  \u2192 batch[meg]  shape: {tuple(probe.data['meg'].shape)}\")\nprint(f\"  \u2192 batch[char] shape: {tuple(probe.data['char_count'].shape)}\")\n\n\nclass MEGDecoder(nn.Module):\n   def __init__(self, n_channels: int, mid: int = 64):\n       super().__init__()\n       self.spatial   = nn.Conv1d(n_channels, mid, 1)\n       self.bn0       = nn.BatchNorm1d(mid)\n       self.temporal1 = nn.Conv1d(mid, mid, 7, padding=3)\n       self.bn1       = nn.BatchNorm1d(mid)\n       self.temporal2 = nn.Conv1d(mid, mid\/\/2, 7, padding=3)\n       self.bn2       = nn.BatchNorm1d(mid\/\/2)\n       self.pool      = nn.AdaptiveAvgPool1d(1)\n       self.head      = nn.Linear(mid\/\/2, 1)\n       self.drop      = nn.Dropout(0.3)\n   def forward(self, x):\n       x = F.gelu(self.bn0(self.spatial(x)))\n       x = F.gelu(self.bn1(self.temporal1(x)))\n       x = self.drop(x)\n       x = F.gelu(self.bn2(self.temporal2(x)))\n       return self.head(self.pool(x).squeeze(-1)).squeeze(-1)\n\n\ndevice = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\nmodel  = MEGDecoder(n_channels=n_ch).to(device)\nprint(f\"nDevice: {device} | params: {sum(p.numel() for p in model.parameters()):,}\")\n\n\ntrain_targets = torch.cat([b.data[\"char_count\"].squeeze(-1) for b in train_loader])\ny_mean, y_std = train_targets.mean().item(), train_targets.std().item() + 1e-6\nprint(f\"Target  \u03bc={y_mean:.2f}  \u03c3={y_std:.2f}\")\n\n\ndef prep(batch):\n   x = batch.data[\"meg\"].to(device).float()\n   y = batch.data[\"char_count\"].squeeze(-1).to(device).float()\n   x = (x - x.mean(-1, keepdim=True)) \/ (x.std(-1, keepdim=True) + 1e-6)\n   y = (y - y_mean) \/ y_std\n   return x, y<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We split the dataset into training, validation, and test sets to ensure proper model evaluation. We create data loaders and inspect batch shapes to confirm correct data formatting. We then define a convolutional neural network and prepare normalized inputs and targets for stable training.<\/p>\n<div class=\"dm-code-snippet dark dm-normal-version default no-background-mobile\">\n<div class=\"control-language\">\n<div class=\"dm-buttons\">\n<div class=\"dm-buttons-left\">\n<div class=\"dm-button-snippet red-button\"><\/div>\n<div class=\"dm-button-snippet orange-button\"><\/div>\n<div class=\"dm-button-snippet green-button\"><\/div>\n<\/div>\n<div class=\"dm-buttons-right\"><a><span class=\"dm-copy-text\">Copy Code<\/span><span class=\"dm-copy-confirmed\">Copied<\/span><span class=\"dm-error-message\">Use a different Browser<\/span><\/a><\/div>\n<\/div>\n<pre class=\"no-line-numbers\"><code class=\"no-wrap language-php\">EPOCHS  = 15\nopt     = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-4)\nsched   = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=EPOCHS)\nloss_fn = nn.MSELoss()\nhist    = {\"tr\": [], \"va\": [], \"r\": []}\n\n\ndef pearson(a, b):\n   a, b = a - a.mean(), b - b.mean()\n   return (a*b).sum() \/ (a.norm()*b.norm() + 1e-8)\n\n\nprint(\"n\" + \"=\"*64)\nprint(f\"{'Epoch':&gt;5} | {'train':&gt;9} | {'val':&gt;9} | {'val_r':&gt;7}\")\nprint(\"=\"*64)\nfor ep in range(EPOCHS):\n   model.train(); tr = []\n   for batch in train_loader:\n       x, y = prep(batch)\n       loss = loss_fn(model(x), y)\n       opt.zero_grad(); loss.backward()\n       torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)\n       opt.step(); tr.append(loss.item())\n   sched.step()\n\n\n   model.eval(); va, P, T = [], [], []\n   with torch.no_grad():\n       for batch in val_loader:\n           x, y = prep(batch); p = model(x)\n           va.append(loss_fn(p, y).item()); P.append(p.cpu()); T.append(y.cpu())\n   P, T = torch.cat(P), torch.cat(T)\n   r = pearson(P, T).item()\n   hist[\"tr\"].append(np.mean(tr)); hist[\"va\"].append(np.mean(va)); hist[\"r\"].append(r)\n   print(f\"{ep+1:&gt;5d} | {np.mean(tr):&gt;9.4f} | {np.mean(va):&gt;9.4f} | {r:&gt;+7.3f}\")\n\n\nmodel.eval(); P, T = [], []\nwith torch.no_grad():\n   for batch in test_loader:\n       x, y = prep(batch)\n       P.append(model(x).cpu()); T.append(y.cpu())\nP, T = torch.cat(P), torch.cat(T)\ntest_r   = pearson(P, T).item()\ntest_mse = ((P - T) ** 2).mean().item()\nprint(f\"nTEST  |  Pearson r = {test_r:+.3f}   MSE = {test_mse:.3f}\")\nprint(f\"(Synthetic-MEG signals are random by design \u2014 small\/zero r is expected.)\")\n\n\nfig, ax = plt.subplots(1, 3, figsize=(15, 4))\nax[0].plot(hist[\"tr\"], label=\"train\"); ax[0].plot(hist[\"va\"], label=\"val\")\nax[0].set(xlabel=\"Epoch\", ylabel=\"MSE\", title=\"Loss curves\"); ax[0].legend(); ax[0].grid(alpha=.3)\nax[1].plot(hist[\"r\"], color=\"C2\"); ax[1].axhline(0, color=\"k\", ls=\"--\", alpha=.4)\nax[1].set(xlabel=\"Epoch\", ylabel=\"Pearson r\", title=\"Validation correlation\"); ax[1].grid(alpha=.3)\nm = float(max(T.abs().max(), P.abs().max()))\nax[2].scatter(T.numpy(), P.numpy(), s=10, alpha=.35)\nax[2].plot([-m, m], [-m, m], \"k--\", alpha=.4)\nax[2].set(xlabel=\"True (z-scored char count)\", ylabel=\"Predicted\",\n         title=f\"Test predictions (r = {test_r:+.3f})\"); ax[2].grid(alpha=.3)\nplt.tight_layout(); plt.show()\n\n\nprint(\"n<img decoding=\"async\" src=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/2705.png\" alt=\"\u2705\" class=\"wp-smiley\" \/> Tutorial complete!\")\nprint(f\"  \u2022 Study used        : {study_name}\")\nprint(f\"  \u2022 Pipeline          : Chain \u2192 Segmenter \u2192 SegmentDataset \u2192 DataLoader\")\nprint(f\"  \u2022 Custom extractor  : CharCount (subclass of BaseStatic)\")\nprint(f\"  \u2022 Built-in extractor: MegExtractor @ 100 Hz\")\nprint(f\"  \u2022 Model             : 1\u00d71 spatial conv + 2 temporal convs + linear head\")<\/code><\/pre>\n<\/div>\n<\/div>\n<p>We train the neural network using a structured training loop with loss tracking and learning rate scheduling. We evaluate the model on the validation and test sets using metrics such as MSE and Pearson\u2019s correlation. Also, we visualize training performance and predictions to understand how well the model learns from the data.<\/p>\n<p>In conclusion, we demonstrated how we can bridge neural data and language understanding using deep learning. We implemented a full pipeline, from raw event extraction to model training and evaluation, while maintaining flexibility through reusable components like chains, segmenters, and extractors. Although we worked with synthetic MEG signals, the framework we built is directly applicable to real-world datasets and more complex decoding tasks. This exercise highlights how we can combine neuroscience, machine learning, and structured pipelines to advance interpretable brain decoding systems, laying a strong foundation for more advanced neuroAI applications.<\/p>\n<hr class=\"wp-block-separator aligncenter has-alpha-channel-opacity is-style-wide\" \/>\n<p>Check out\u00a0the\u00a0<strong><a href=\"https:\/\/github.com\/Marktechpost\/AI-Agents-Projects-Tutorials\/blob\/main\/Deep%20Learning\/meg_brain_decoding_neuralset_cnn_Marktechpost.ipynb\" target=\"_blank\" rel=\"noreferrer noopener\">Full Codes and Notebook here<\/a><\/strong>.<strong>\u00a0<\/strong>Also,\u00a0feel free to follow us on\u00a0<strong><a href=\"https:\/\/x.com\/intent\/follow?screen_name=marktechpost\" target=\"_blank\" rel=\"noreferrer noopener\"><mark>Twitter<\/mark><\/a><\/strong>\u00a0and don\u2019t forget to join our\u00a0<strong><a href=\"https:\/\/www.reddit.com\/r\/machinelearningnews\/\" target=\"_blank\" rel=\"noreferrer noopener\">130k+ ML SubReddit<\/a><\/strong>\u00a0and Subscribe to\u00a0<strong><a href=\"https:\/\/www.aidevsignals.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">our Newsletter<\/a><\/strong>. Wait! are you on telegram?\u00a0<strong><a href=\"https:\/\/t.me\/machinelearningresearchnews\" target=\"_blank\" rel=\"noreferrer noopener\">now you can join us on telegram as well.<\/a><\/strong><\/p>\n<p>Need to partner with us for promoting your GitHub Repo OR Hugging Face Page OR Product Release OR Webinar etc.?\u00a0<strong><a href=\"https:\/\/forms.gle\/MTNLpmJtsFA3VRVd9\" target=\"_blank\" rel=\"noreferrer noopener\"><mark>Connect with us<\/mark><\/a><\/strong><\/p>\n<p>The post <a href=\"https:\/\/www.marktechpost.com\/2026\/05\/01\/a-coding-implementation-of-end-to-end-brain-decoding-from-meg-signals-using-neuralset-and-deep-learning-for-predicting-linguistic-features\/\">A Coding Implementation of End-to-End Brain Decoding from MEG Signals Using NeuralSet and Deep Learning for Predicting Linguistic Features<\/a> appeared first on <a href=\"https:\/\/www.marktechpost.com\/\">MarkTechPost<\/a>.<\/p>","protected":false},"excerpt":{"rendered":"<p>In this tutorial, we explore how we can decode linguistic features directly from brain signals using a modern neuroAI pipeline. We work with MEG data and build an end-to-end system that transforms raw neural activity into meaningful predictions, in this case, estimating word length from brain responses. We set up the environment, load and process neural events, design a custom feature extractor, and construct a structured data pipeline using NeuralSet. From there, we train a convolutional neural network to learn patterns in the temporal and spatial structure of MEG signals. Throughout the process, we focus on building a clean, modular workflow that mirrors real-world neuroAI research practices. Copy CodeCopiedUse a different Browser import subprocess, sys, importlib, pkgutil def pip_install(*pkgs): print(f&#8221;pip install {&#8216; &#8216;.join(pkgs)} &#8230;&#8221;) r = subprocess.run([sys.executable, &#8220;-m&#8221;, &#8220;pip&#8221;, &#8220;install&#8221;, &#8220;-q&#8221;, *pkgs], capture_output=True, text=True) if r.returncode != 0: print(&#8220;pip STDOUT:&#8221;, r.stdout[-2000:]) print(&#8220;pip STDERR:&#8221;, r.stderr[-2000:]) raise RuntimeError(&#8220;pip install failed; see output above.&#8221;) print(&#8221; ok&#8221;) pip_install(&#8220;numpy&gt;=2.0,&lt;2.3&#8221;) pip_install(&#8220;neuralset&#8221;) pip_install(&#8220;neuralfetch&#8221;) import numpy as np from numpy._core.umath import _center print(f&#8221;numpy {np.__version__} OK&#8221;) import warnings, typing as tp warnings.filterwarnings(&#8220;ignore&#8221;) import pandas as pd import torch import torch.nn as nn import torch.nn.functional as F from torch.utils.data import DataLoader import matplotlib.pyplot as plt import neuralset as ns from neuralset import extractors as ext_mod We install and validate all required dependencies, ensuring critical packages such as NumPy and NeuralSet are properly configured. We perform a quick NumPy check to avoid runtime issues later in the pipeline. We then import all core libraries needed for data processing, modeling, and visualization. Copy CodeCopiedUse a different Browser def deep_import(pkg_name: str): try: pkg = importlib.import_module(pkg_name) except Exception as e: print(f&#8221; could not import {pkg_name}: {e}&#8221;) return if not hasattr(pkg, &#8220;__path__&#8221;): return for m in pkgutil.walk_packages(pkg.__path__, prefix=pkg_name + &#8220;.&#8221;): try: importlib.import_module(m.name) except Exception: pass deep_import(&#8220;neuralfetch&#8221;) deep_import(&#8220;neuralset&#8221;) torch.manual_seed(0); np.random.seed(0) catalog = ns.Study.catalog() print(f&#8221;n{len(catalog)} studies registered.&#8221;) preferred = [&#8220;Fake2025Meg&#8221;, &#8220;Test2025Meg&#8221;, &#8220;Test2023Meg&#8221;] study_name = next((n for n in preferred if n in catalog), None) if study_name is None: meg_studies = [n for n, c in catalog.items() if &#8220;Meg&#8221; in c.neuro_types()] study_name = meg_studies[0] if meg_studies else None if study_name is None: raise RuntimeError( &#8220;No MEG study available. Catalog: &#8221; f&#8221;{sorted(catalog.keys())[:20]}\u2026 &#8221; &#8220;Install neuralfetch correctly (pip install neuralfetch) and re-run.&#8221; ) print(f&#8221;\u2192 Using study: {study_name}&#8221;) We dynamically import all submodules from NeuralFetch and NeuralSet to ensure that all available studies are properly registered. We seed the random number generator for reproducibility and inspect the study catalog to identify available MEG datasets. We then select an appropriate study to use as the foundation for our pipeline. Copy CodeCopiedUse a different Browser class CharCount(ext_mod.BaseStatic): event_types: tp.Literal[&#8220;Word&#8221;] = &#8220;Word&#8221; def get_static(self, event) -&gt; torch.Tensor: return torch.tensor([float(len(event.text))], dtype=torch.float32) print(&#8220;nBuilding chain&#8230;&#8221;) chain = ns.Chain(steps=[ {&#8220;name&#8221;: study_name, &#8220;path&#8221;: str(ns.CACHE_FOLDER)}, {&#8220;name&#8221;: &#8220;QueryEvents&#8221;, &#8220;query&#8221;: &#8220;type in [&#8216;Word&#8217;, &#8216;Meg&#8217;]&#8221;}, ]) events = chain.run() print(f&#8221; \u2192 {len(events)} events; types={sorted(events.type.unique().tolist())}&#8221;) print(f&#8221; \u2192 Words: {(events.type==&#8217;Word&#8217;).sum()} | &#8221; f&#8221;timelines: {events.timeline.nunique()}&#8221;) print(&#8220;nSample words:&#8221;) print(events[events.type==&#8217;Word&#8217;][[&#8220;start&#8221;,&#8221;duration&#8221;,&#8221;text&#8221;,&#8221;timeline&#8221;]] .head(5).to_string(index=False)) print(&#8220;nBuilding segmenter&#8230;&#8221;) segmenter = ns.dataloader.Segmenter( extractors={ &#8220;meg&#8221;: {&#8220;name&#8221;: &#8220;MegExtractor&#8221;, &#8220;frequency&#8221;: 100.0}, &#8220;char_count&#8221;: CharCount(aggregation=&#8221;trigger&#8221;), }, trigger_query=&#8221;type == &#8216;Word'&#8221;, start=-0.2, duration=0.8, drop_incomplete=True, ) dataset = segmenter.apply(events) print(f&#8221; \u2192 SegmentDataset: {len(dataset)} segments&#8221;) s0 = dataset[0] print(f&#8221;nSingle item:n meg : {tuple(s0.data[&#8216;meg&#8217;].shape)}&#8221;) print(f&#8221; char_count : {s0.data[&#8216;char_count&#8217;].item()} &#8221; f&#8221;(word: {s0.segments[0].trigger.text!r})&#8221;) We define a custom extractor that computes the character count of each word event, enabling us to create a supervised learning target. We build a processing chain to load and filter relevant events from the selected study. We then segment the MEG signals around word events and construct a dataset ready for modeling. Copy CodeCopiedUse a different Browser rng = np.random.RandomState(42) perm = rng.permutation(len(dataset)) n_tr, n_va = int(0.70*len(dataset)), int(0.15*len(dataset)) train_ds = dataset.select(perm[:n_tr]) val_ds = dataset.select(perm[n_tr:n_tr+n_va]) test_ds = dataset.select(perm[n_tr+n_va:]) print(f&#8221;nSplit | train={len(train_ds)} val={len(val_ds)} test={len(test_ds)}&#8221;) mk = lambda d, sh: DataLoader(d, batch_size=32, shuffle=sh, collate_fn=d.collate_fn, drop_last=False) train_loader, val_loader, test_loader = mk(train_ds, True), mk(val_ds, False), mk(test_ds, False) probe = next(iter(train_loader)) n_ch, n_t = probe.data[&#8220;meg&#8221;].shape[-2:] print(f&#8221; \u2192 batch[meg] shape: {tuple(probe.data[&#8216;meg&#8217;].shape)}&#8221;) print(f&#8221; \u2192 batch[char] shape: {tuple(probe.data[&#8216;char_count&#8217;].shape)}&#8221;) class MEGDecoder(nn.Module): def __init__(self, n_channels: int, mid: int = 64): super().__init__() self.spatial = nn.Conv1d(n_channels, mid, 1) self.bn0 = nn.BatchNorm1d(mid) self.temporal1 = nn.Conv1d(mid, mid, 7, padding=3) self.bn1 = nn.BatchNorm1d(mid) self.temporal2 = nn.Conv1d(mid, mid\/\/2, 7, padding=3) self.bn2 = nn.BatchNorm1d(mid\/\/2) self.pool = nn.AdaptiveAvgPool1d(1) self.head = nn.Linear(mid\/\/2, 1) self.drop = nn.Dropout(0.3) def forward(self, x): x = F.gelu(self.bn0(self.spatial(x))) x = F.gelu(self.bn1(self.temporal1(x))) x = self.drop(x) x = F.gelu(self.bn2(self.temporal2(x))) return self.head(self.pool(x).squeeze(-1)).squeeze(-1) device = torch.device(&#8220;cuda&#8221; if torch.cuda.is_available() else &#8220;cpu&#8221;) model = MEGDecoder(n_channels=n_ch).to(device) print(f&#8221;nDevice: {device} | params: {sum(p.numel() for p in model.parameters()):,}&#8221;) train_targets = torch.cat([b.data[&#8220;char_count&#8221;].squeeze(-1) for b in train_loader]) y_mean, y_std = train_targets.mean().item(), train_targets.std().item() + 1e-6 print(f&#8221;Target \u03bc={y_mean:.2f} \u03c3={y_std:.2f}&#8221;) def prep(batch): x = batch.data[&#8220;meg&#8221;].to(device).float() y = batch.data[&#8220;char_count&#8221;].squeeze(-1).to(device).float() x = (x &#8211; x.mean(-1, keepdim=True)) \/ (x.std(-1, keepdim=True) + 1e-6) y = (y &#8211; y_mean) \/ y_std return x, y We split the dataset into training, validation, and test sets to ensure proper model evaluation. We create data loaders and inspect batch shapes to confirm correct data formatting. We then define a convolutional neural network and prepare normalized inputs and targets for stable training. Copy CodeCopiedUse a different Browser EPOCHS = 15 opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-4) sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=EPOCHS) loss_fn = nn.MSELoss() hist = {&#8220;tr&#8221;: [], &#8220;va&#8221;: [], &#8220;r&#8221;: []} def pearson(a, b): a, b = a &#8211; a.mean(), b &#8211; b.mean() return (a*b).sum() \/ (a.norm()*b.norm() + 1e-8) print(&#8220;n&#8221; + &#8220;=&#8221;*64) print(f&#8221;{&#8216;Epoch&#8217;:&gt;5} | {&#8216;train&#8217;:&gt;9} | {&#8216;val&#8217;:&gt;9} | {&#8216;val_r&#8217;:&gt;7}&#8221;) print(&#8220;=&#8221;*64) for ep in range(EPOCHS): model.train(); tr = [] for batch in train_loader: x, y = prep(batch) loss = loss_fn(model(x), y) opt.zero_grad(); loss.backward() torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0) opt.step(); tr.append(loss.item()) sched.step() model.eval(); va, P, T = [], [], [] with torch.no_grad(): for batch in val_loader: x, y = prep(batch); p = model(x) va.append(loss_fn(p, y).item()); P.append(p.cpu()); T.append(y.cpu()) P, T = torch.cat(P), torch.cat(T) r = pearson(P, T).item() hist[&#8220;tr&#8221;].append(np.mean(tr)); hist[&#8220;va&#8221;].append(np.mean(va)); hist[&#8220;r&#8221;].append(r) print(f&#8221;{ep+1:&gt;5d} | {np.mean(tr):&gt;9.4f} | {np.mean(va):&gt;9.4f} | {r:&gt;+7.3f}&#8221;) model.eval(); P, T = [], [] with torch.no_grad(): for batch in test_loader: x, y = prep(batch) P.append(model(x).cpu()); T.append(y.cpu()) P, T = torch.cat(P), torch.cat(T) test_r = pearson(P, T).item() test_mse = ((P &#8211; T) ** 2).mean().item() print(f&#8221;nTEST | Pearson r = {test_r:+.3f} MSE = {test_mse:.3f}&#8221;) print(f&#8221;(Synthetic-MEG signals are random by design \u2014 small\/zero r is expected.)&#8221;) fig, ax = plt.subplots(1, 3, figsize=(15, 4))<\/p>","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"pmpro_default_level":"","site-sidebar-layout":"default","site-content-layout":"","ast-site-content-layout":"","site-content-style":"default","site-sidebar-style":"default","ast-global-header-display":"","ast-banner-title-visibility":"","ast-main-header-display":"","ast-hfb-above-header-display":"","ast-hfb-below-header-display":"","ast-hfb-mobile-header-display":"","site-post-title":"","ast-breadcrumbs-content":"","ast-featured-img":"","footer-sml-layout":"","theme-transparent-header-meta":"","adv-header-id-meta":"","stick-header-meta":"","header-above-stick-meta":"","header-main-stick-meta":"","header-below-stick-meta":"","astra-migrate-meta-layouts":"default","ast-page-background-enabled":"default","ast-page-background-meta":{"desktop":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"ast-content-background-meta":{"desktop":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"_pvb_checkbox_block_on_post":false,"footnotes":""},"categories":[52,5,7,1],"tags":[],"class_list":["post-87596","post","type-post","status-publish","format-standard","hentry","category-ai-club","category-committee","category-news","category-uncategorized","pmpro-has-access"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v25.3 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>A Coding Implementation of End-to-End Brain Decoding from MEG Signals Using NeuralSet and Deep Learning for Predicting Linguistic Features - YouZum<\/title>\n<meta name=\"description\" content=\"\u0e01\u0e34\u0e08\u0e01\u0e23\u0e23\u0e21\u0e40\u0e01\u0e35\u0e48\u0e22\u0e27\u0e01\u0e31\u0e1a\u0e42\u0e14\u0e23\u0e19\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/youzum.net\/zh\/a-coding-implementation-of-end-to-end-brain-decoding-from-meg-signals-using-neuralset-and-deep-learning-for-predicting-linguistic-features\/\" \/>\n<meta property=\"og:locale\" content=\"zh_CN\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"A Coding Implementation of End-to-End Brain Decoding from MEG Signals Using NeuralSet and Deep Learning for Predicting Linguistic Features - YouZum\" \/>\n<meta property=\"og:description\" content=\"\u0e01\u0e34\u0e08\u0e01\u0e23\u0e23\u0e21\u0e40\u0e01\u0e35\u0e48\u0e22\u0e27\u0e01\u0e31\u0e1a\u0e42\u0e14\u0e23\u0e19\" \/>\n<meta property=\"og:url\" content=\"https:\/\/youzum.net\/zh\/a-coding-implementation-of-end-to-end-brain-decoding-from-meg-signals-using-neuralset-and-deep-learning-for-predicting-linguistic-features\/\" \/>\n<meta property=\"og:site_name\" content=\"YouZum\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/DroneAssociationTH\/\" \/>\n<meta property=\"article:published_time\" content=\"2026-05-02T15:53:49+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/26a0.png\" \/>\n<meta name=\"author\" content=\"admin NU\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"\u4f5c\u8005\" \/>\n\t<meta name=\"twitter:data1\" content=\"admin NU\" \/>\n\t<meta name=\"twitter:label2\" content=\"\u9884\u8ba1\u9605\u8bfb\u65f6\u95f4\" \/>\n\t<meta name=\"twitter:data2\" content=\"9 \u5206\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/youzum.net\/a-coding-implementation-of-end-to-end-brain-decoding-from-meg-signals-using-neuralset-and-deep-learning-for-predicting-linguistic-features\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/youzum.net\/a-coding-implementation-of-end-to-end-brain-decoding-from-meg-signals-using-neuralset-and-deep-learning-for-predicting-linguistic-features\/\"},\"author\":{\"name\":\"admin NU\",\"@id\":\"https:\/\/yousum.gpucore.co\/#\/schema\/person\/97fa48242daf3908e4d9a5f26f4a059c\"},\"headline\":\"A Coding Implementation of End-to-End Brain Decoding from MEG Signals Using NeuralSet and Deep Learning for Predicting Linguistic Features\",\"datePublished\":\"2026-05-02T15:53:49+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/youzum.net\/a-coding-implementation-of-end-to-end-brain-decoding-from-meg-signals-using-neuralset-and-deep-learning-for-predicting-linguistic-features\/\"},\"wordCount\":590,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/yousum.gpucore.co\/#organization\"},\"image\":{\"@id\":\"https:\/\/youzum.net\/a-coding-implementation-of-end-to-end-brain-decoding-from-meg-signals-using-neuralset-and-deep-learning-for-predicting-linguistic-features\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/26a0.png\",\"articleSection\":[\"AI\",\"Committee\",\"News\",\"Uncategorized\"],\"inLanguage\":\"zh-Hans\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/youzum.net\/a-coding-implementation-of-end-to-end-brain-decoding-from-meg-signals-using-neuralset-and-deep-learning-for-predicting-linguistic-features\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/youzum.net\/a-coding-implementation-of-end-to-end-brain-decoding-from-meg-signals-using-neuralset-and-deep-learning-for-predicting-linguistic-features\/\",\"url\":\"https:\/\/youzum.net\/a-coding-implementation-of-end-to-end-brain-decoding-from-meg-signals-using-neuralset-and-deep-learning-for-predicting-linguistic-features\/\",\"name\":\"A Coding Implementation of End-to-End Brain Decoding from MEG Signals Using NeuralSet and Deep Learning for Predicting Linguistic Features - YouZum\",\"isPartOf\":{\"@id\":\"https:\/\/yousum.gpucore.co\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/youzum.net\/a-coding-implementation-of-end-to-end-brain-decoding-from-meg-signals-using-neuralset-and-deep-learning-for-predicting-linguistic-features\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/youzum.net\/a-coding-implementation-of-end-to-end-brain-decoding-from-meg-signals-using-neuralset-and-deep-learning-for-predicting-linguistic-features\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/26a0.png\",\"datePublished\":\"2026-05-02T15:53:49+00:00\",\"description\":\"\u0e01\u0e34\u0e08\u0e01\u0e23\u0e23\u0e21\u0e40\u0e01\u0e35\u0e48\u0e22\u0e27\u0e01\u0e31\u0e1a\u0e42\u0e14\u0e23\u0e19\",\"breadcrumb\":{\"@id\":\"https:\/\/youzum.net\/a-coding-implementation-of-end-to-end-brain-decoding-from-meg-signals-using-neuralset-and-deep-learning-for-predicting-linguistic-features\/#breadcrumb\"},\"inLanguage\":\"zh-Hans\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/youzum.net\/a-coding-implementation-of-end-to-end-brain-decoding-from-meg-signals-using-neuralset-and-deep-learning-for-predicting-linguistic-features\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"zh-Hans\",\"@id\":\"https:\/\/youzum.net\/a-coding-implementation-of-end-to-end-brain-decoding-from-meg-signals-using-neuralset-and-deep-learning-for-predicting-linguistic-features\/#primaryimage\",\"url\":\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/26a0.png\",\"contentUrl\":\"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/26a0.png\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/youzum.net\/a-coding-implementation-of-end-to-end-brain-decoding-from-meg-signals-using-neuralset-and-deep-learning-for-predicting-linguistic-features\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/youzum.net\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"A Coding Implementation of End-to-End Brain Decoding from MEG Signals Using NeuralSet and Deep Learning for Predicting Linguistic Features\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/yousum.gpucore.co\/#website\",\"url\":\"https:\/\/yousum.gpucore.co\/\",\"name\":\"YouSum\",\"description\":\"\",\"publisher\":{\"@id\":\"https:\/\/yousum.gpucore.co\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/yousum.gpucore.co\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"zh-Hans\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/yousum.gpucore.co\/#organization\",\"name\":\"Drone Association Thailand\",\"url\":\"https:\/\/yousum.gpucore.co\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"zh-Hans\",\"@id\":\"https:\/\/yousum.gpucore.co\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/youzum.net\/wp-content\/uploads\/2024\/11\/tranparent-logo.png\",\"contentUrl\":\"https:\/\/youzum.net\/wp-content\/uploads\/2024\/11\/tranparent-logo.png\",\"width\":300,\"height\":300,\"caption\":\"Drone Association Thailand\"},\"image\":{\"@id\":\"https:\/\/yousum.gpucore.co\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.facebook.com\/DroneAssociationTH\/\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/yousum.gpucore.co\/#\/schema\/person\/97fa48242daf3908e4d9a5f26f4a059c\",\"name\":\"admin NU\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"zh-Hans\",\"@id\":\"https:\/\/yousum.gpucore.co\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/youzum.net\/wp-content\/uploads\/avatars\/2\/1746849356-bpfull.png\",\"contentUrl\":\"https:\/\/youzum.net\/wp-content\/uploads\/avatars\/2\/1746849356-bpfull.png\",\"caption\":\"admin NU\"},\"url\":\"https:\/\/youzum.net\/zh\/members\/adminnu\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"A Coding Implementation of End-to-End Brain Decoding from MEG Signals Using NeuralSet and Deep Learning for Predicting Linguistic Features - YouZum","description":"\u0e01\u0e34\u0e08\u0e01\u0e23\u0e23\u0e21\u0e40\u0e01\u0e35\u0e48\u0e22\u0e27\u0e01\u0e31\u0e1a\u0e42\u0e14\u0e23\u0e19","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/youzum.net\/zh\/a-coding-implementation-of-end-to-end-brain-decoding-from-meg-signals-using-neuralset-and-deep-learning-for-predicting-linguistic-features\/","og_locale":"zh_CN","og_type":"article","og_title":"A Coding Implementation of End-to-End Brain Decoding from MEG Signals Using NeuralSet and Deep Learning for Predicting Linguistic Features - YouZum","og_description":"\u0e01\u0e34\u0e08\u0e01\u0e23\u0e23\u0e21\u0e40\u0e01\u0e35\u0e48\u0e22\u0e27\u0e01\u0e31\u0e1a\u0e42\u0e14\u0e23\u0e19","og_url":"https:\/\/youzum.net\/zh\/a-coding-implementation-of-end-to-end-brain-decoding-from-meg-signals-using-neuralset-and-deep-learning-for-predicting-linguistic-features\/","og_site_name":"YouZum","article_publisher":"https:\/\/www.facebook.com\/DroneAssociationTH\/","article_published_time":"2026-05-02T15:53:49+00:00","og_image":[{"url":"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/26a0.png","type":"","width":"","height":""}],"author":"admin NU","twitter_card":"summary_large_image","twitter_misc":{"\u4f5c\u8005":"admin NU","\u9884\u8ba1\u9605\u8bfb\u65f6\u95f4":"9 \u5206"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/youzum.net\/a-coding-implementation-of-end-to-end-brain-decoding-from-meg-signals-using-neuralset-and-deep-learning-for-predicting-linguistic-features\/#article","isPartOf":{"@id":"https:\/\/youzum.net\/a-coding-implementation-of-end-to-end-brain-decoding-from-meg-signals-using-neuralset-and-deep-learning-for-predicting-linguistic-features\/"},"author":{"name":"admin NU","@id":"https:\/\/yousum.gpucore.co\/#\/schema\/person\/97fa48242daf3908e4d9a5f26f4a059c"},"headline":"A Coding Implementation of End-to-End Brain Decoding from MEG Signals Using NeuralSet and Deep Learning for Predicting Linguistic Features","datePublished":"2026-05-02T15:53:49+00:00","mainEntityOfPage":{"@id":"https:\/\/youzum.net\/a-coding-implementation-of-end-to-end-brain-decoding-from-meg-signals-using-neuralset-and-deep-learning-for-predicting-linguistic-features\/"},"wordCount":590,"commentCount":0,"publisher":{"@id":"https:\/\/yousum.gpucore.co\/#organization"},"image":{"@id":"https:\/\/youzum.net\/a-coding-implementation-of-end-to-end-brain-decoding-from-meg-signals-using-neuralset-and-deep-learning-for-predicting-linguistic-features\/#primaryimage"},"thumbnailUrl":"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/26a0.png","articleSection":["AI","Committee","News","Uncategorized"],"inLanguage":"zh-Hans","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/youzum.net\/a-coding-implementation-of-end-to-end-brain-decoding-from-meg-signals-using-neuralset-and-deep-learning-for-predicting-linguistic-features\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/youzum.net\/a-coding-implementation-of-end-to-end-brain-decoding-from-meg-signals-using-neuralset-and-deep-learning-for-predicting-linguistic-features\/","url":"https:\/\/youzum.net\/a-coding-implementation-of-end-to-end-brain-decoding-from-meg-signals-using-neuralset-and-deep-learning-for-predicting-linguistic-features\/","name":"A Coding Implementation of End-to-End Brain Decoding from MEG Signals Using NeuralSet and Deep Learning for Predicting Linguistic Features - YouZum","isPartOf":{"@id":"https:\/\/yousum.gpucore.co\/#website"},"primaryImageOfPage":{"@id":"https:\/\/youzum.net\/a-coding-implementation-of-end-to-end-brain-decoding-from-meg-signals-using-neuralset-and-deep-learning-for-predicting-linguistic-features\/#primaryimage"},"image":{"@id":"https:\/\/youzum.net\/a-coding-implementation-of-end-to-end-brain-decoding-from-meg-signals-using-neuralset-and-deep-learning-for-predicting-linguistic-features\/#primaryimage"},"thumbnailUrl":"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/26a0.png","datePublished":"2026-05-02T15:53:49+00:00","description":"\u0e01\u0e34\u0e08\u0e01\u0e23\u0e23\u0e21\u0e40\u0e01\u0e35\u0e48\u0e22\u0e27\u0e01\u0e31\u0e1a\u0e42\u0e14\u0e23\u0e19","breadcrumb":{"@id":"https:\/\/youzum.net\/a-coding-implementation-of-end-to-end-brain-decoding-from-meg-signals-using-neuralset-and-deep-learning-for-predicting-linguistic-features\/#breadcrumb"},"inLanguage":"zh-Hans","potentialAction":[{"@type":"ReadAction","target":["https:\/\/youzum.net\/a-coding-implementation-of-end-to-end-brain-decoding-from-meg-signals-using-neuralset-and-deep-learning-for-predicting-linguistic-features\/"]}]},{"@type":"ImageObject","inLanguage":"zh-Hans","@id":"https:\/\/youzum.net\/a-coding-implementation-of-end-to-end-brain-decoding-from-meg-signals-using-neuralset-and-deep-learning-for-predicting-linguistic-features\/#primaryimage","url":"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/26a0.png","contentUrl":"https:\/\/s.w.org\/images\/core\/emoji\/17.0.2\/72x72\/26a0.png"},{"@type":"BreadcrumbList","@id":"https:\/\/youzum.net\/a-coding-implementation-of-end-to-end-brain-decoding-from-meg-signals-using-neuralset-and-deep-learning-for-predicting-linguistic-features\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/youzum.net\/"},{"@type":"ListItem","position":2,"name":"A Coding Implementation of End-to-End Brain Decoding from MEG Signals Using NeuralSet and Deep Learning for Predicting Linguistic Features"}]},{"@type":"WebSite","@id":"https:\/\/yousum.gpucore.co\/#website","url":"https:\/\/yousum.gpucore.co\/","name":"YouSum","description":"","publisher":{"@id":"https:\/\/yousum.gpucore.co\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/yousum.gpucore.co\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"zh-Hans"},{"@type":"Organization","@id":"https:\/\/yousum.gpucore.co\/#organization","name":"Drone Association Thailand","url":"https:\/\/yousum.gpucore.co\/","logo":{"@type":"ImageObject","inLanguage":"zh-Hans","@id":"https:\/\/yousum.gpucore.co\/#\/schema\/logo\/image\/","url":"https:\/\/youzum.net\/wp-content\/uploads\/2024\/11\/tranparent-logo.png","contentUrl":"https:\/\/youzum.net\/wp-content\/uploads\/2024\/11\/tranparent-logo.png","width":300,"height":300,"caption":"Drone Association Thailand"},"image":{"@id":"https:\/\/yousum.gpucore.co\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/DroneAssociationTH\/"]},{"@type":"Person","@id":"https:\/\/yousum.gpucore.co\/#\/schema\/person\/97fa48242daf3908e4d9a5f26f4a059c","name":"admin NU","image":{"@type":"ImageObject","inLanguage":"zh-Hans","@id":"https:\/\/yousum.gpucore.co\/#\/schema\/person\/image\/","url":"https:\/\/youzum.net\/wp-content\/uploads\/avatars\/2\/1746849356-bpfull.png","contentUrl":"https:\/\/youzum.net\/wp-content\/uploads\/avatars\/2\/1746849356-bpfull.png","caption":"admin NU"},"url":"https:\/\/youzum.net\/zh\/members\/adminnu\/"}]}},"rttpg_featured_image_url":null,"rttpg_author":{"display_name":"admin NU","author_link":"https:\/\/youzum.net\/zh\/members\/adminnu\/"},"rttpg_comment":0,"rttpg_category":"<a href=\"https:\/\/youzum.net\/zh\/category\/ai-club\/\" rel=\"category tag\">AI<\/a> <a href=\"https:\/\/youzum.net\/zh\/category\/committee\/\" rel=\"category tag\">Committee<\/a> <a href=\"https:\/\/youzum.net\/zh\/category\/news\/\" rel=\"category tag\">News<\/a> <a href=\"https:\/\/youzum.net\/zh\/category\/uncategorized\/\" rel=\"category tag\">Uncategorized<\/a>","rttpg_excerpt":"In this tutorial, we explore how we can decode linguistic features directly from brain signals using a modern neuroAI pipeline. We work with MEG data and build an end-to-end system that transforms raw neural activity into meaningful predictions, in this case, estimating word length from brain responses. We set up the environment, load and process&hellip;","_links":{"self":[{"href":"https:\/\/youzum.net\/zh\/wp-json\/wp\/v2\/posts\/87596","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/youzum.net\/zh\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/youzum.net\/zh\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/youzum.net\/zh\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/youzum.net\/zh\/wp-json\/wp\/v2\/comments?post=87596"}],"version-history":[{"count":0,"href":"https:\/\/youzum.net\/zh\/wp-json\/wp\/v2\/posts\/87596\/revisions"}],"wp:attachment":[{"href":"https:\/\/youzum.net\/zh\/wp-json\/wp\/v2\/media?parent=87596"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/youzum.net\/zh\/wp-json\/wp\/v2\/categories?post=87596"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/youzum.net\/zh\/wp-json\/wp\/v2\/tags?post=87596"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}