By Eduardo Baptista
BEIJING, April 24 (Reuters) – Chinese startup DeepSeek on Friday released a preview version of V4, its new artificial intelligence model adapted to run on Huawei chips, marking another step in China’s push to build a self-sufficient AI ecosystem.
Here is what we know so far about the long-awaited open-source offering.
V4 MODEL CHARACTERISTICS
DeepSeek said V4 is designed to work with agent frameworks including Claude Code and OpenClaw, reflecting the industry shift away from prompt-based chatbots towards models that can complete complex, multi-step tasks with less human input.
V4 comes in two versions: the more powerful and more expensive Pro, and the cheaper, lighter Flash.
Pro is positioned as a higher-end model with performance comparable to leading closed-source systems, particularly in agentic coding, world knowledge, STEM (science, technology, engineering and mathematics) and competitive programming.
In maximum reasoning mode, Pro outperforms all open-source models, though it still trails frontier closed-source systems such as Google’s Gemini 3.1 Pro and OpenAI’s GPT-5.4 in some areas, according to a DeepSeek paper released alongside the model.
“DeepSeek-V4-Pro Max … redefines the state of the art for open models, outperforming its predecessors in core tasks,” DeepSeek said.
Flash matches Pro's reasoning ability in some areas while running faster and at lower cost, but it has weaker world knowledge and performs worse on more demanding agent-based tasks.
Both versions support a 1-million-token context window, matching the expansion DeepSeek introduced with V3 in February. DeepSeek said V4’s architecture is designed to reduce compute and memory costs for long-context use.
ADAPTED FOR HUAWEI CHIPS
A key change from earlier DeepSeek releases is that V4 was adapted for Huawei’s most advanced Ascend AI chips.
Reuters reported in February that DeepSeek had not shared its new model with U.S. chipmakers for performance tuning, instead granting early access to domestic companies such as Huawei, despite previously working closely with Nvidia’s technical staff.
Hours after the preview release, Huawei said V4 is fully supported on its Ascend 950-based supernode clusters, and that its chips were used for part of V4-Flash’s training.
“Through close technical collaboration … the entire Ascend supernode product line now supports the DeepSeek-V4 series models,” Huawei said.
DeepSeek’s earlier V3 and R1 models were trained on Nvidia chips. The company did not say whether the same applied to V4.
SELF-SUFFICIENCY PUSH AND LIMITS
Lian Jye Su, chief analyst at tech research firm Omdia, said the partnership shows DeepSeek models can deliver similar performance on both Huawei and Nvidia hardware.
“The popularity of DeepSeek in the domestic Chinese market encouraged Huawei to optimize the model for its hardware, and this, in turn, lowers the barriers for Chinese developers and companies to build AI apps entirely on domestic solutions,” he said.
He added that Huawei still trails Nvidia technologically, and moving developers away from Nvidia’s ecosystem remains difficult. Even so, he said, “DeepSeek’s pivot reveals real, tangible progress toward AI infrastructure self-sufficiency.”
DeepSeek also faces compute constraints under U.S. export controls on Nvidia chips and chipmaking equipment. The company said Pro can cost up to 12 times more than Flash because of “constraints in high-end compute capacity,” limiting current Pro service availability.
DeepSeek said Pro pricing could fall sharply once Huawei Ascend 950 supernodes are deployed at scale in the second half of the year.
(Reporting by Eduardo Baptista; additional reporting by Liam Mo; Editing by Mark Potter)