YouZum

News

Mistral launches new code embedding model that outperforms OpenAI and Cohere in real-world retrieval tasks

Mistral’s Codestral Embed will help make RAG use cases faster and find duplicate code segments...

Mistral just updated its open source Small model from 3.1 to 3.2: here’s why

The fact that it is made by a French startup and compliant with EU rules...

Mistral AI’s new coding assistant takes direct aim at GitHub Copilot

Mistral AI launches enterprise coding assistant with on-premise deployment to challenge GitHub Copilot, targeting corporate...

Mistral AI Unveils Mistral Medium 3.1: Enhancing AI with Superior Performance and Usability

Mistral AI has introduced Mistral Medium 3.1, setting new standards in multimodal intelligence, enterprise readiness...

Mistral AI Introduces Mistral Code: A Customizable AI Coding Assistant for Enterprise Workflows

Mistral AI announced the release of Mistral Code, an AI-powered coding assistant tailored for enterprise...

MiroMind-M1: An Open-Source Advancement in Mathematical Reasoning via Context-Aware Multi-Stage Policy Optimization

arXiv:2507.14683v1: Large language models have recently evolved from fluent text generation...

MiroMind-M1: Advancing Open-Source Mathematical Reasoning via Context-Aware Multi-Stage Reinforcement Learning

Large language models (LLMs) have recently demonstrated remarkable progress in multi-step reasoning, establishing mathematical problem-solving...

MinosEval: Distinguishing Factoid and Non-Factoid for Tailored Open-Ended QA Evaluation with LLMs

arXiv:2506.15215v1: Open-ended question answering (QA) is a key task for evaluating...

MiniMax AI Releases MiniMax-M1: A 456B Parameter Hybrid Model for Long-Context and Reinforcement Learning (RL) Tasks

The Challenge of Long-Context Reasoning in AI Models: Large reasoning models are not only designed...

Mind the Gap: A Review of Arabic Post-Training Datasets and Their Limitations

arXiv:2507.14688v1: Post-training has emerged as a crucial technique for aligning pre-trained...

Middo: Model-Informed Dynamic Data Optimization for Enhanced LLM Fine-Tuning via Closed-Loop Learning

arXiv:2508.21589v1: Supervised Fine-Tuning (SFT) of Large Language Models (LLMs) fundamentally relies on...
