Google Research Publishes Compression Algorithm TurboQuant to Cut AI Models' Memory Consumption
According to foreign media reports, Google Research announced on Tuesday (the 24th) TurboQuant, a compression algorithm that requires no pre-training and can compress the KV cache of large language models (LLMs) down to 3 bits without compromising model accuracy. In benchmarks on NVIDIA (NVDA.US) H100 graphics processing units (GPUs), 4-bit TurboQuant delivered up to an 8x performance improvement when computing attention logits compared with unquantized 32-bit key-values, while reducing KV-cache memory at least 6-fold.
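As a rough sanity check on those figures, here is a back-of-the-envelope KV-cache sizing sketch. The model shape and quantization group size below are hypothetical illustrations, not details from the article or from Google's actual benchmark:

```python
# Back-of-the-envelope KV-cache sizing. The model shape (layers, heads,
# head_dim) and group size are hypothetical, chosen only to illustrate
# why 4-bit values land between 6x and 8x smaller than 32-bit once
# per-group quantization constants are counted.

def kv_cache_bytes(seq_len, n_layers, n_heads, head_dim,
                   bits_per_value, scale_bits=0, group_size=1):
    """Bytes for keys + values, plus per-group quantization constants."""
    n_values = 2 * seq_len * n_layers * n_heads * head_dim  # K and V
    data_bits = n_values * bits_per_value
    # Each group of `group_size` values stores one scale constant.
    overhead_bits = (n_values // group_size) * scale_bits
    return (data_bits + overhead_bits) / 8

# Hypothetical 7B-class model with a 32k-token context window.
shape = dict(seq_len=32_768, n_layers=32, n_heads=32, head_dim=128)

fp32 = kv_cache_bytes(**shape, bits_per_value=32)
# 4-bit values with a 16-bit scale per group of 64 values.
int4 = kv_cache_bytes(**shape, bits_per_value=4, scale_bits=16, group_size=64)

print(f"fp32 cache:  {fp32 / 2**30:.1f} GiB")
print(f"4-bit cache: {int4 / 2**30:.1f} GiB  ({fp32 / int4:.1f}x smaller)")
```

Under these assumed numbers the raw 32-to-4-bit ratio is 8x, and the per-group scales pull the end-to-end saving down toward the "at least 6x" the article cites.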

The KV cache stores previously computed attention data so that a large language model does not have to recompute it at every token-generation step. As context windows keep growing, these caches are becoming a major memory bottleneck. Traditional vector quantization methods can shrink the cache, but because the quantization constants must be stored alongside the compressed data, each value carries a small overhead of a few bits, and with larger context windows this overhead accumulates. The TurboQuant algorithm eliminates that bottleneck.
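The overhead described above can be shown with a generic per-group uniform quantizer. This is a minimal sketch of the general technique, not Google's actual TurboQuant scheme: each group of 4-bit codes must keep its own scale constant, so the effective cost per value exceeds 4 bits.

```python
# Minimal per-group uniform quantization sketch (a generic illustration,
# not Google's TurboQuant). The scales returned alongside the codes are
# the "quantization constants" that must be stored with the data.
import numpy as np

def quantize_groups(x, bits=4, group_size=64):
    """Quantize a 1-D float array to signed `bits`-bit codes per group."""
    x = x.reshape(-1, group_size)
    qmax = 2 ** (bits - 1) - 1                    # 7 for 4-bit
    scales = np.abs(x).max(axis=1, keepdims=True) / qmax
    scales[scales == 0] = 1.0                     # avoid divide-by-zero
    codes = np.clip(np.round(x / scales), -qmax - 1, qmax).astype(np.int8)
    return codes, scales.astype(np.float16)       # both must be stored

def dequantize_groups(codes, scales):
    return (codes * scales).reshape(-1)

rng = np.random.default_rng(0)
x = rng.standard_normal(4096).astype(np.float32)
codes, scales = quantize_groups(x)
x_hat = dequantize_groups(codes, scales)

# Storage cost: 4 bits per code plus one fp16 scale per 64 values,
# i.e. the per-value overhead the article describes.
bits_per_value = 4 + 16 / 64
print(f"effective bits/value: {bits_per_value}")
print(f"max reconstruction error: {np.abs(x - x_hat).max():.3f}")
```

At a group size of 64 the overhead is modest (0.25 extra bits per value here), but shrinking the group to improve accuracy inflates it, which is the trade-off the article says TurboQuant avoids.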

Memory stocks Sandisk (SDNK.US) and Micron (MU.US) fell 3.5% and 3.4% respectively overnight (the 25th). (fc/j) (Real-time Streaming US Stocks Quote; Except All OTC quotes are at least 15 minutes delayed.)

AASTOCKS News

Copyright (C) AASTOCKS.com Limited 2000. All rights reserved.
Disclaimer: AASTOCKS.com Ltd, HKEx Information Services Limited, its holding companies and/or any subsidiaries of such holding companies endeavour to ensure the accuracy and reliability of the Information provided but do not guarantee its accuracy or reliability and accept no liability (whether in tort or contract or otherwise) for any loss or damage arising from any inaccuracies or omissions.