ALIBABA-W (09988.HK; BABA.US) unveiled its Qwen3 series of AI foundation models under Tongyi Qianwen, comprising two Mixture of Experts (MoE) models and six dense models with parameter counts ranging from 0.6B to 235B. The flagship Qwen3-235B-A22B is competitive with leading models such as DeepSeek-R1, OpenAI's o1 and o3-mini, xAI's Grok-3, and Google's Gemini-2.5-Pro, excelling in coding, math, and general-capability benchmarks. The compact MoE model Qwen3-30B-A3B outperforms QwQ-32B while using only about 10% of its active parameters, and the small Qwen3-4B model matches the performance of Qwen2.5-72B-Instruct.