China's social platform RedNote (Xiaohongshu) has released its first open-source foundation model, "dots.llm1". It is a 142-billion-parameter mixture-of-experts (MoE) model that activates only 14 billion parameters during inference, maintaining high performance while sharply reducing training and inference costs.

The most notable feature of dots.llm1 is its use of 11.2 trillion tokens of non-synthetic, high-quality training data, which is rare among currently available open-source foundation models.

In Chinese-language benchmarks, dots.llm1 performed exceptionally well, achieving an average score of 91.3 and surpassing DeepSeek's open-source V2 and V3 models as well as Alibaba's open-source Qwen2.5 32B and 72B.
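The roughly 10:1 gap between total and active parameters comes from the MoE design: a router picks a small subset of expert feed-forward networks for each token, so only those experts' weights are used in that forward pass. The snippet below is a minimal sketch of top-k expert routing in PyTorch; the layer sizes, expert count, and `top_k` value are illustrative assumptions, not dots.llm1's actual configuration.

```python
# Minimal sketch of top-k mixture-of-experts routing (illustrative only).
# Dimensions, expert count, and top_k are assumptions for demonstration,
# not the actual dots.llm1 configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model=512, d_hidden=2048, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts)  # scores each expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, d_hidden),
                nn.GELU(),
                nn.Linear(d_hidden, d_model),
            )
            for _ in range(num_experts)
        )

    def forward(self, x):  # x: (tokens, d_model)
        scores = self.router(x)                          # (tokens, num_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)   # keep only top-k experts per token
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e in range(len(self.experts)):
                mask = idx[:, k] == e                    # tokens routed to expert e in slot k
                if mask.any():
                    out[mask] += weights[mask, k:k + 1] * self.experts[e](x[mask])
        return out

# Only top_k of num_experts feed-forward blocks run per token, so the
# parameters touched per token are a fraction of the total parameter count.
tokens = torch.randn(4, 512)
print(TopKMoE()(tokens).shape)  # torch.Size([4, 512])
```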