
Mistral launches new model Codestral Embed that outperforms OpenAI and Cohere in real-world retrieval tasks

 


With demand for enterprise retrieval augmented generation (RAG) on the rise, the opportunity is ripe for model providers to offer their take on embedding models.

French AI company Mistral threw its hat into the ring with Codestral Embed, its first embedding model, which it said outperforms existing embedding models on benchmarks like SWE-Bench.

The model specializes in code and, according to Mistral, “performs especially well for retrieval use cases on real-world code data.” It is available to developers for $0.15 per million tokens.

The company said Codestral Embed “significantly outperforms leading code embedders” such as Voyage Code 3, Cohere Embed v4.0 and OpenAI’s embedding model, Text Embedding 3 Large.

Codestral Embed, part of Mistral’s Codestral family of coding models, can make embeddings that transform code and data into numerical representations for RAG.

“Codestral Embed can output embeddings with different dimensions and precisions, and the figure below illustrates the trade-offs between retrieval quality and storage costs,” Mistral said in a blog post. “Codestral Embed with dimension 256 and int8 precision still performs better than any model from our competitors. The dimensions of our embeddings are ordered by relevance. For any integer target dimension n, you can choose to keep the first n dimensions for a smooth trade-off between quality and cost.”
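The truncate-then-quantize trade-off Mistral describes can be sketched in a few lines. The snippet below is an illustrative example, not Mistral’s implementation: the `truncate_and_quantize` helper, the 1536-dimension input size, and the symmetric int8 scaling are all assumptions for demonstration.

```python
import numpy as np

def truncate_and_quantize(embedding, n_dims=256):
    """Keep the first n_dims dimensions, then quantize to int8.

    Because the model orders dimensions by relevance, truncation
    is a simple slice of the leading components.
    """
    truncated = np.asarray(embedding, dtype=np.float32)[:n_dims]
    # Re-normalize so cosine similarity stays meaningful after truncation.
    truncated /= np.linalg.norm(truncated) or 1.0
    # Map unit-norm floats in [-1, 1] onto the symmetric int8 range.
    return np.clip(np.round(truncated * 127), -127, 127).astype(np.int8)

# A mock full-precision embedding standing in for real model output.
full = np.random.default_rng(0).normal(size=1536)
compact = truncate_and_quantize(full, n_dims=256)
print(compact.shape, compact.dtype)  # (256,) int8
```

At 256 int8 values, each vector needs 256 bytes instead of the 6,144 bytes a full 1,536-dimension float32 vector would occupy, which is the storage-versus-quality trade-off the quoted passage refers to.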

Mistral tested the model on several benchmarks, including SWE-Bench and Text2Code from GitHub. In both cases, the company said Codestral Embed outperformed leading embedding models.

[Figures: benchmark results on SWE-Bench and Text2Code]

Use cases

Mistral said Codestral Embed is optimized for “high-performance code retrieval” and semantic understanding. The company said the model works best for at least four kinds of use cases: RAG, semantic code search, similarity search and code analytics.

Embedding models generally target RAG use cases, as they can facilitate faster information retrieval for tasks or agentic processes. Therefore, it’s not surprising that Codestral Embed would focus on that.

The model can also perform semantic code search, allowing developers to find code snippets using natural language. This use case works well for developer tool platforms, documentation systems and coding copilots. Codestral Embed can also help developers identify duplicated or near-duplicate code segments, which can be helpful for enterprises with policies on code reuse.

The model supports semantic clustering, which involves grouping code based on its functionality or structure. This use case helps with analyzing repositories, categorizing code and finding patterns in code architecture.

1. Retrieval-augmented generation

Codestral Embed facilitates rapid and efficient context retrieval for code completion, editing, or explanation tasks. It is ideal for AI-powered software engineering in copilots or coding agent frameworks.
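The retrieval step of such a pipeline can be sketched as ranking candidate snippets by cosine similarity to a query embedding. The example below is illustrative only: the `embed` function is a hypothetical stand-in that produces deterministic pseudo-random vectors, whereas in practice the vectors would come from an embedding API such as Codestral Embed, so the rankings here carry no real semantic meaning.

```python
import zlib
import numpy as np

def embed(text, dim=64):
    """Hypothetical stand-in for an embedding API call.

    Seeds a PRNG from a checksum of the text so the same input
    always yields the same unit-length vector.
    """
    rng = np.random.default_rng(zlib.crc32(text.encode()))
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

def retrieve(query, corpus, top_k=2):
    """Return the top_k corpus entries most similar to the query."""
    q = embed(query)
    # Vectors are unit-norm, so the dot product is cosine similarity.
    ranked = sorted(corpus, key=lambda doc: float(q @ embed(doc)), reverse=True)
    return ranked[:top_k]

snippets = [
    "def add(a, b): return a + b",
    "def read_file(path): ...",
    "class Stack: ...",
]
context = retrieve("function that sums two numbers", snippets)
```

In a real RAG setup, the retrieved snippets would then be inserted into the prompt of a code-generation model as context for completion, editing or explanation.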

2. Semantic code search

Embed enables accurate search of relevant code snippets from natural language or code queries. It is suitable for use within developer tools, documentation systems, and copilots.

3. Similarity search and duplicate detection

The model’s embeddings can be used to identify near-duplicate or functionally similar code segments, even with significant lexical variation. This supports use cases such as identifying reusable code to avoid duplicates, or detecting copy-paste reuse to enforce licensing policies.
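Given embeddings, near-duplicate detection reduces to thresholding pairwise cosine similarity. Below is a minimal sketch; the toy 3-dimension vectors stand in for real code embeddings, and the 0.95 threshold is an arbitrary choice for illustration.

```python
import numpy as np

def near_duplicates(vectors, threshold=0.95):
    """Flag index pairs whose cosine similarity meets the threshold."""
    V = np.asarray(vectors, dtype=np.float64)
    V = V / np.linalg.norm(V, axis=1, keepdims=True)
    sims = V @ V.T  # pairwise cosine similarities of unit-norm rows
    n = len(V)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if sims[i, j] >= threshold]

# The first two vectors are nearly identical, so only that pair is flagged.
vecs = [[1.0, 0.0, 0.0], [0.99, 0.05, 0.0], [0.0, 1.0, 0.0]]
print(near_duplicates(vecs, threshold=0.95))  # [(0, 1)]
```

Because embeddings capture function rather than surface form, two snippets can be flagged as similar even when variable names and formatting differ, which is what makes this approach robust to the “significant lexical variation” mentioned above.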

4. Semantic clustering and code analytics

Codestral Embed supports unsupervised grouping of code based on functionality or structure. This is useful for analyzing repository composition, identifying emergent architecture patterns, or feeding into automated documentation and categorization systems.
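Any standard clustering algorithm can operate on the embedding vectors. The sketch below runs a minimal k-means over toy 2-dimension points standing in for real embeddings of, say, I/O helpers versus math utilities; it is a didactic example, not production code.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Minimal k-means: assign points to nearest center, recompute centers."""
    X = np.asarray(X, dtype=np.float64)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Label each point by its closest center (squared Euclidean distance).
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    return labels

# Two well-separated groups of mock embedding vectors.
X = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 4.9]]
labels = kmeans(X, k=2)
```

The resulting cluster labels could then feed downstream analytics, such as summarizing what each functional group of a repository contains.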

Competition is increasing in the embedding space

Mistral has been on a roll with releasing new models and agentic tools. It released Mistral Medium 3, a mid-sized version of its flagship large language model (LLM), which currently powers its enterprise-focused platform Le Chat Enterprise.

It also announced the Agents API, which allows developers to access tools for creating agents that perform real-world tasks and orchestrate multiple agents.

Mistral’s moves to offer more model options to developers have not gone unnoticed in developer spaces. Some on X note that Mistral’s timing in releasing Codestral Embed is “coming on the heels of increased competition.”
