Nvidia's KV Cache Transform Coding (KVTC) compresses the LLM key-value cache by 20x without model changes, cutting GPU memory costs and reducing time-to-first-token by up to 8x for multi-turn AI applications.
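The blurb above does not detail KVTC's internals, but the general idea of transform coding can be sketched generically: apply a decorrelating transform (here a DCT-II, a common stand-in; Nvidia's actual transform may differ), quantize the coefficients to shrink them, then invert the transform at read time. Everything below (the toy "KV vector", the step size, the DCT choice) is an illustrative assumption, not KVTC itself:

```python
import math

def dct2(x):
    # Orthonormal DCT-II of a 1-D list: decorrelates smooth signals
    n = len(x)
    out = []
    for k in range(n):
        s = sum(x[i] * math.cos(math.pi * (i + 0.5) * k / n) for i in range(n))
        scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
        out.append(scale * s)
    return out

def idct2(c):
    # Inverse of the orthonormal DCT-II above
    n = len(c)
    out = []
    for i in range(n):
        s = c[0] * math.sqrt(1 / n)
        s += sum(c[k] * math.sqrt(2 / n) * math.cos(math.pi * (i + 0.5) * k / n)
                 for k in range(1, n))
        out.append(s)
    return out

def quantize(c, step):
    # Uniform scalar quantization: small integers compress far better than floats
    return [round(v / step) for v in c]

def dequantize(q, step):
    return [v * step for v in q]

# Toy stand-in for one cached key/value vector (assumption: smooth values)
kv = [math.sin(0.3 * i) for i in range(16)]
step = 0.02
q = quantize(dct2(kv), step)          # what would actually be stored/entropy-coded
rec = idct2(dequantize(q, step))      # reconstruction at attention time
err = max(abs(a - b) for a, b in zip(kv, rec))
```

The compression comes from storing the small quantized integers `q` instead of the raw floats; the orthonormal transform bounds the reconstruction error by the quantization step.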
Abstract: Consider the case where consecutive blocks of N letters of a semi-infinite individual sequence X over a finite alphabet are being compressed into binary sequences by some one-to-one mapping.
A Neural Arithmetic Compression framework for high-entropy IoT log strings, built by pairing a character-level GRU with arithmetic coding.
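The pairing described above works by feeding a character model's next-symbol probabilities into an arithmetic coder. A minimal sketch follows, with two stated simplifications: the GRU is replaced by a fixed probability table (a real system would update `probs` per character from the model's output), and exact `Fraction` arithmetic is used for clarity rather than the finite-precision renormalizing coder a production system needs:

```python
from fractions import Fraction

def make_model(probs):
    # Turn a probability table into cumulative [low, high) intervals per symbol
    model, c = {}, Fraction(0)
    for ch, p in probs.items():
        model[ch] = (c, c + p)
        c += p
    return model

def encode(msg, model):
    # Narrow [low, high) by each symbol's interval; any point inside
    # the final interval identifies the whole message
    low, high = Fraction(0), Fraction(1)
    for ch in msg:
        span = high - low
        c_lo, c_hi = model[ch]
        low, high = low + span * c_lo, low + span * c_hi
    return (low + high) / 2

def decode(code, length, model):
    # Replay the same interval narrowing, picking the symbol whose
    # interval contains the (rescaled) code point
    out = []
    low, high = Fraction(0), Fraction(1)
    for _ in range(length):
        span = high - low
        v = (code - low) / span
        for ch, (c_lo, c_hi) in model.items():
            if c_lo <= v < c_hi:
                out.append(ch)
                low, high = low + span * c_lo, low + span * c_hi
                break
    return "".join(out)

# Assumed stand-in for the GRU's predictions: a static character distribution
probs = {"a": Fraction(1, 2), "b": Fraction(1, 4), "c": Fraction(1, 4)}
model = make_model(probs)
code = encode("abcab", model)
restored = decode(code, 5, model)
```

The better the character model's probabilities match the log data, the narrower the intervals stay and the fewer bits the final code point needs, which is exactly where a trained GRU earns its keep on high-entropy strings.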
Abstract: With the rapid development of SAR imaging technology and its expanding applications across missile-borne, airborne, and spaceborne platforms, higher requirements are imposed on the real-time ...