New vision model from Cohere runs on two GPUs, beats top-tier VLMs on visual tasks


The rise of Deep Research features and other AI-powered analysis has spawned a wave of models and services designed to simplify that process and read more of the documents businesses actually use. 

Canadian AI company Cohere is banking on its models, including a newly released visual model, to make the case that Deep Research features should also be optimized for enterprise use cases. 

The company has released Command A Vision, a visual model specifically targeting enterprise use cases, built on the back of its Command A model. The 112-billion-parameter model can “unlock valuable insights from visual data, and make highly accurate, data-driven decisions through document optical character recognition (OCR) and image analysis,” the company says.

“Whether it’s interpreting product manuals with complex diagrams or analyzing photographs of real-world scenes for risk detection, Command A Vision excels at tackling the most demanding enterprise vision challenges,” the company said in a blog post.


This means Command A Vision can read and analyze the most common types of images enterprises need: graphs, charts, diagrams, scanned documents and PDFs. 

Since it’s built on Command A’s architecture, Command A Vision requires two or fewer GPUs, just like the text model. The vision model also retains Command A’s text capabilities, so it can read the words that appear within images, and it understands at least 23 languages. Cohere said that, unlike other models, Command A Vision reduces the total cost of ownership for enterprises and is fully optimized for retrieval use cases for businesses. 
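For readers who want a sense of what a deployment of that size looks like in practice, here is a minimal sketch using Hugging Face Transformers. The repository ID is an assumption based on Cohere’s public naming conventions, and the image URL is a placeholder; this illustrates the general pattern, not Cohere’s official serving recipe.

```python
# Hypothetical multi-GPU deployment sketch (not Cohere's official recipe).
# The repo id below is an assumption -- verify the actual id before running.
import torch
from transformers import AutoProcessor, AutoModelForImageTextToText

MODEL_ID = "CohereLabs/command-a-vision-07-2025"  # assumed repo id

processor = AutoProcessor.from_pretrained(MODEL_ID)
model = AutoModelForImageTextToText.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # halves memory versus fp32
    device_map="auto",           # shards layers across visible GPUs
)

messages = [{
    "role": "user",
    "content": [
        {"type": "image", "url": "https://example.com/chart.png"},  # placeholder
        {"type": "text", "text": "Summarize the trend shown in this chart."},
    ],
}]

inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

output = model.generate(**inputs, max_new_tokens=256)
print(processor.decode(output[0], skip_special_tokens=True))
```

As a rough sanity check, 112 billion parameters at 16-bit precision amounts to roughly 224 GB of weights alone, so a two-GPU deployment implies either very high-memory cards or quantized weights.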

How Cohere architected Command A Vision

Cohere said it followed a LLaVA architecture to build its Command A models, including the visual model. The architecture splits an image into tiles and converts each tile’s visual features into soft vision tokens. 

These tiles are passed into the Command A text tower, “a dense, 111B parameters textual LLM,” the company said. “In this manner, a single image consumes up to 3,328 tokens.”
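Cohere doesn’t spell out how that 3,328-token ceiling decomposes, but a LLaVA-style tiling scheme makes the arithmetic easy to sanity-check. The per-tile figures below are illustrative assumptions that happen to reproduce the stated maximum:

```python
# Back-of-the-envelope check on the per-image token budget.
# Cohere only states the 3,328-token ceiling; the tile count and
# per-tile token figure below are assumptions for illustration.
TOKENS_PER_TILE = 256   # assumed soft vision tokens emitted per tile
MAX_TILES = 13          # assumed, e.g. 12 crops plus 1 global thumbnail

max_image_tokens = TOKENS_PER_TILE * MAX_TILES
assert max_image_tokens == 3328  # matches the stated per-image maximum
print(f"Up to {max_image_tokens} tokens per image")
```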

Cohere said it trained the visual model in three stages: vision-language alignment, supervised fine-tuning (SFT) and post-training reinforcement learning from human feedback (RLHF).

“This approach enables the mapping of image encoder features to the language model embedding space,” the company said. “In contrast, during the SFT stage, we simultaneously trained the vision encoder, the vision adapter and the language model on a diverse set of instruction-following multimodal tasks.”
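Read literally, the quoted stages differ mainly in which components are trainable at each point. A schematic of that freezing schedule, using invented module names, might look like the following; it illustrates the description rather than reproducing Cohere’s actual training code.

```python
# Schematic of the three-stage freezing schedule described above.
# Module names (vision_encoder, vision_adapter, language_model) are
# invented for illustration; this is not Cohere's training code.
import torch.nn as nn

def set_trainable(module: nn.Module, trainable: bool) -> None:
    for param in module.parameters():
        param.requires_grad = trainable

def configure_stage(model, stage: str) -> None:
    if stage == "alignment":
        # Stage 1: only the adapter learns, mapping image-encoder
        # features into the language model's embedding space.
        set_trainable(model.vision_encoder, False)
        set_trainable(model.vision_adapter, True)
        set_trainable(model.language_model, False)
    elif stage == "sft":
        # Stage 2: encoder, adapter, and LLM are trained simultaneously
        # on instruction-following multimodal tasks.
        set_trainable(model.vision_encoder, True)
        set_trainable(model.vision_adapter, True)
        set_trainable(model.language_model, True)
    elif stage == "rlhf":
        # Stage 3: post-training with human feedback; the article gives
        # no component-level detail, so assume the LLM is updated.
        set_trainable(model.language_model, True)
    else:
        raise ValueError(f"Unknown stage: {stage}")
```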

Visualizing enterprise AI 

Benchmark tests showed Command A Vision outperforming other models with similar visual capabilities. 

Cohere pitted Command A Vision against OpenAI’s GPT-4.1, Meta’s Llama 4 Maverick, Mistral’s Pixtral Large and Mistral Medium 3 across nine benchmark tests. The company did not say whether it tested the model against Mistral’s OCR-focused API, Mistral OCR.

Command A Vision outscored the other models on tests such as ChartQA, OCRBench, AI2D and TextVQA. Overall, Command A Vision posted an average score of 83.1%, compared with 78.6% for GPT-4.1, 80.5% for Llama 4 Maverick and 78.3% for Mistral Medium 3. 

Most large language models (LLMs) these days are multimodal, meaning they can generate or understand visual media like photos and videos. Enterprises, however, tend to work with graphical documents such as charts and PDFs, and extracting information from these unstructured sources often proves difficult. 

With Deep Research on the rise, the importance of bringing in models capable of reading, analyzing and even downloading unstructured data has grown.

Cohere also said it’s offering Command A Vision with open weights, in hopes that enterprises looking to move away from closed or proprietary models will start using its products. So far, there is some interest from developers.

