Adamas: Hadamard Sparse Attention for Efficient Long-Context Inference (Paper 2510.18413, published Oct 21)
Long-Context Attention Benchmark: From Kernel Efficiency to Distributed Context Parallelism (Paper 2510.17896, published Oct 19)