Mistral launches new model Codestral Embed that outperforms OpenAI and Cohere in real-world retrieval tasks
With demand for enterprise retrieval augmented generation (RAG) on the rise, the opportunity is ripe for model providers to offer their take on embedding models.
French AI company Mistral threw its hat into the ring with Codestral Embed, its first embedding model, which it said outperforms existing embedding models on benchmarks like SWE-Bench.
The model specializes in code and “performs especially well for retrieval use cases on real-world code data.” The model is available to developers for $0.15 per million tokens.
The company said Codestral Embed “significantly outperforms leading code embedders” such as Voyage Code 3, Cohere Embed v4.0 and OpenAI’s Text Embedding 3 Large.
Codestral Embed, part of Mistral’s Codestral family of coding models, can make embeddings that transform code and data into numerical representations for RAG.
“Codestral Embed can output embeddings with different dimensions and precisions, and the figure below illustrates the trade-offs between retrieval quality and storage costs,” Mistral said in a blog post. “Codestral Embed with dimension 256 and int8 precision still performs better than any model from our competitors. The dimensions of our embeddings are ordered by relevance. For any integer target dimension n, you can choose to keep the first n dimensions for a smooth trade-off between quality and cost.”
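The truncation scheme Mistral describes, keeping only the first n dimensions of a relevance-ordered embedding, can be sketched in a few lines. This is an illustrative example with a random vector, not Mistral’s implementation; the int8 scaling scheme shown here is an assumption.

```python
import numpy as np

def truncate_embedding(vec: np.ndarray, n: int) -> np.ndarray:
    """Keep the first n dimensions (Mistral says dimensions are ordered by relevance)."""
    return vec[:n]

def quantize_int8(vec: np.ndarray) -> np.ndarray:
    """Map float values to int8 by scaling against the max absolute value (assumed scheme)."""
    scale = np.abs(vec).max() or 1.0
    return np.round(vec / scale * 127).astype(np.int8)

# A full-size float32 embedding of 1536 dimensions costs 1536 * 4 = 6144 bytes.
full = np.random.default_rng(0).normal(size=1536).astype(np.float32)

# Truncated to 256 dimensions and quantized to int8: 256 bytes, a 24x storage saving.
compact = quantize_int8(truncate_embedding(full, 256))
print(compact.nbytes)  # 256
```

The point of the relevance ordering is that this truncation is a no-op on the storage side: smaller indexes simply slice the stored vectors, trading some retrieval quality for cost.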
Mistral tested the model on several benchmarks, including SWE-Bench and Text2Code from GitHub. In both cases, the company said Codestral Embed outperformed leading embedding models.
Use cases
Mistral said Codestral Embed is optimized for “high-performance code retrieval” and semantic understanding. The company said the model works best for at least four kinds of use cases: RAG, semantic code search, similarity search and code analytics.
Embedding models generally target RAG use cases, as they can facilitate faster information retrieval for tasks or agentic processes. Therefore, it’s not surprising that Codestral Embed would focus on that.
The model can also perform semantic code search, allowing developers to find code snippets using natural language. This use case works well for developer tool platforms, documentation systems and coding copilots. Codestral Embed can also help developers identify duplicated code segments or similar code strings, which can be helpful for enterprises with policies regarding reused code.
The model also supports semantic clustering, which involves grouping code based on its functionality or structure. This use case can help teams analyze repositories, categorize code and find patterns in code architecture.
1. Retrieval-augmented generation
Codestral Embed facilitates rapid and efficient context retrieval for code completion, editing, or explanation tasks. It is ideal for AI-powered software engineering in copilots or coding agent frameworks.
2. Semantic code search
Embed enables accurate search of relevant code snippets from natural language or code queries. It is suitable for use within developer tools, documentation systems, and copilots.
3. Similarity search and duplicate detection
The model’s embeddings can be used to identify near-duplicate or functionally similar code segments, even with significant lexical variation. This supports use cases such as identifying reusable code to avoid duplicates, or detecting copy-paste reuse to enforce licensing policies.
4. Semantic clustering and code analytics
Codestral Embed supports unsupervised grouping of code based on functionality or structure. This is useful for analyzing repository composition, identifying emergent architecture patterns, or feeding into automated documentation and categorization systems.
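All four use cases above ultimately reduce to nearest-neighbor search over embedding vectors. A minimal sketch of that core operation, using stand-in random vectors rather than real model output (in practice the vectors would come from an embedding API):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def top_k(query: np.ndarray, corpus: list[np.ndarray], k: int = 3) -> list[int]:
    """Indices of the k corpus embeddings most similar to the query."""
    scores = [cosine_similarity(query, v) for v in corpus]
    return sorted(range(len(corpus)), key=lambda i: scores[i], reverse=True)[:k]

# Stand-in vectors; real ones would be embeddings of code snippets.
rng = np.random.default_rng(1)
corpus = [rng.normal(size=8) for _ in range(5)]
query = corpus[2] + 0.01 * rng.normal(size=8)  # a near-duplicate of snippet 2

print(top_k(query, corpus, k=1))
```

For duplicate detection, the same search runs corpus-against-corpus with a similarity threshold; for RAG, the top-k snippets are passed to an LLM as context.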
Competition is increasing in the embedding space
Mistral has been on a roll with releasing new models and agentic tools. It released Mistral Medium 3, a medium version of its flagship large language model (LLM), which currently powers its enterprise-focused platform Le Chat Enterprise.
It also announced the Agents API, which allows developers to access tools for creating agents that perform real-world tasks and orchestrate multiple agents.
Mistral’s moves to offer more model options to developers have not gone unnoticed in developer spaces. Some on X note that Mistral’s timing in releasing Codestral Embed is “coming on the heels of increased competition.”