DeepSeek V4-pro carries 1.6 trillion parameters; a Huawei-adapted variant signals a workaround to US chip controls
Briefing
DeepSeek R1's release triggered a single-day Nvidia selloff exceeding $500bn in market cap as markets priced the implication that frontier AI performance could be achieved at a fraction of assumed training cost. V4 arrives with the same open-source strategy and adds a Huawei hardware variant, compounding that shock rather than repeating it.
The Biden administration imposed sweeping export controls on advanced semiconductors to China, targeting Nvidia A100 and H100 chips. The policy was premised on hardware scarcity constraining Chinese AI development. DeepSeek's Huawei-adapted V4 is the first public evidence that a frontier Chinese lab has credibly engineered around that constraint.
The post-dot-com period showed that infrastructure buildout cycles can persist even as the revenue models funding them erode: fiber was laid long after bandwidth pricing collapsed. The current dynamic, in which hyperscaler capex accelerates even as open-source models commoditize closed-model pricing, carries a structurally similar ROIC risk.

ServiceNow's 16% plunge on AI displacement fears, with IBM software revenue missing alongside it, confirms enterprise IT budgets are already reallocating away from incumbent SaaS toward AI infrastructure. DeepSeek V4's open-source release at frontier parity accelerates this reallocation by lowering the cost floor for capable AI, reducing the stickiness of closed-model contract commitments.

Microsoft's voluntary buyout of up to 7% of its US workforce, framed as capital redeployment into $140bn AI infrastructure spend, is directly exposed to DeepSeek V4: if open-source frontier models compress closed-API pricing, the revenue thesis that justifies converting payroll into capex weakens before the workforce reduction is complete.
SK Hynix's record Q1 profit arriving in line with estimates, rather than ahead of them, indicates AI memory demand is already fully priced into consensus. DeepSeek V4's efficiency claims, if validated, would reduce per-inference memory bandwidth requirements, introducing a downside scenario for HBM demand volumes that the current SK Hynix valuation does not price.