iFlame: Interleaving Full and Linear Attention for Efficient Mesh Generation

Hanxiao Wang, Biao Zhang, Weize Quan, Dong-Ming Yan, Peter Wonka

CASIA, KAUST

An efficient, unconditional mesh generative model trainable on a single GPU

Abstract

This paper describes a novel transformer-based network architecture for large mesh generation. While attention-based models have demonstrated remarkable performance in mesh generation, their quadratic computational complexity limits scalability, particularly for high-resolution 3D data. Conversely, linear attention mechanisms offer lower computational costs but often struggle to capture long-range dependencies, resulting in suboptimal outcomes.
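For context, the standard per-layer asymptotics behind this trade-off can be summarized as follows (textbook complexity results for sequence length $n$ and head dimension $d$, with constants omitted; these are not measurements from the paper):

\[
\begin{aligned}
\text{full softmax attention:} \quad & \mathcal{O}(n^{2}d)\ \text{time}, \qquad \mathcal{O}(nd)\ \text{KV cache during decoding},\\
\text{kernelized linear attention:} \quad & \mathcal{O}(nd^{2})\ \text{time}, \qquad \mathcal{O}(d^{2})\ \text{recurrent state}.
\end{aligned}
\]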

To address this trade-off, we propose an interleaving autoregressive mesh generation framework that combines the efficiency of linear attention with the expressive power of standard attention mechanisms. To further improve efficiency and exploit the inherent structure of mesh representations, we integrate this interleaving approach into an hourglass architecture.

Our approach reduces training time while achieving performance comparable to pure attention-based models. To improve inference efficiency, we implement a caching algorithm that nearly doubles inference speed and reduces the KV cache size by seven-eighths compared to the original Transformer. We evaluate our framework on ShapeNet and Objaverse, demonstrating its ability to generate high-quality 3D meshes efficiently. Our results indicate that the proposed interleaving framework effectively balances computational efficiency and generative performance, making it a practical solution for mesh generation. Training on 39k Objaverse meshes with up to 4k faces each takes only 2 days on 4 GPUs.

Method Overview

Figure: The iFlame pipeline architecture.

iFlame is a highly efficient, unconditional mesh generation model trainable on a single GPU. It combines the efficiency of linear attention with the expressive power of standard attention mechanisms through an interleaving framework, and integrates this approach into an hourglass architecture to further improve efficiency and leverage the inherent structure of mesh representations.
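A minimal sketch of the interleaving idea is given below. It is illustrative only: the linear-attention variant (a causal kernelized formulation), the 1:1 alternation ratio, the layer count, and all dimensions are assumptions, and the hourglass down-/up-sampling around the stack is omitted.

# Illustrative sketch of interleaved full/linear attention; not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CausalLinearAttention(nn.Module):
    # Kernelized attention: softmax(QK^T)V is replaced by phi(Q)(phi(K)^T V)
    # with causal prefix sums, so the cost grows linearly in sequence length.
    def __init__(self, dim, heads=8):
        super().__init__()
        self.heads = heads
        self.qkv = nn.Linear(dim, dim * 3, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):
        b, n, d = x.shape
        h, hd = self.heads, d // self.heads
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q, k, v = (t.view(b, n, h, hd).transpose(1, 2) for t in (q, k, v))
        q, k = F.elu(q) + 1, F.elu(k) + 1                  # feature map phi(.)
        # Prefix sums enforce causality; a chunked scan would avoid
        # materializing the (n, hd, hd) tensor in a real implementation.
        kv = torch.einsum("bhnd,bhne->bhnde", k, v).cumsum(dim=2)
        z = 1.0 / (torch.einsum("bhnd,bhnd->bhn", q, k.cumsum(dim=2)) + 1e-6)
        out = torch.einsum("bhnd,bhnde,bhn->bhne", q, kv, z)
        return self.proj(out.transpose(1, 2).reshape(b, n, d))


class Block(nn.Module):
    def __init__(self, dim, full_attention):
        super().__init__()
        self.full = full_attention
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.attn = (nn.MultiheadAttention(dim, 8, batch_first=True)
                     if full_attention else CausalLinearAttention(dim))
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))

    def forward(self, x):
        h = self.norm1(x)
        if self.full:                                      # quadratic but expressive
            mask = torch.triu(torch.ones(x.size(1), x.size(1),
                                         dtype=torch.bool, device=x.device), 1)
            h, _ = self.attn(h, h, h, attn_mask=mask)
        else:                                              # linear and cheap
            h = self.attn(h)
        x = x + h
        return x + self.mlp(self.norm2(x))


class InterleavedStack(nn.Module):
    # Every second block uses full attention; the rest use linear attention.
    def __init__(self, dim=512, depth=8):
        super().__init__()
        self.blocks = nn.ModuleList(
            [Block(dim, full_attention=(i % 2 == 1)) for i in range(depth)])

    def forward(self, x):                                  # x: (batch, seq, dim)
        for blk in self.blocks:
            x = blk(x)
        return x


tokens = torch.randn(1, 256, 512)                          # dummy token embeddings
print(InterleavedStack()(tokens).shape)                    # torch.Size([1, 256, 512])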

Key contributions of our approach include:

  • An interleaving autoregressive mesh generation framework
  • Integration with an hourglass architecture for improved efficiency
  • A novel caching algorithm that nearly doubles inference speed (illustrated by the sketch after this list)
  • Reduction of KV cache size by seven-eighths compared to original Transformers
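The seven-eighths figure is the paper's; the toy bookkeeping below only illustrates the general mechanism that makes such savings possible. During incremental decoding, a full-attention layer must append one key/value pair per generated token, while a linear-attention layer carries a constant-size running state, so interleaving (together with the shortened sequences inside the hourglass) leaves only a fraction of layers with a growing cache. Class names, dimensions, and the step count are illustrative, not taken from the paper.

# Toy comparison of per-layer decoding memory; assumed bookkeeping, not the paper's algorithm.
import torch


class FullAttentionCache:
    # Grows with the number of generated tokens: one (k, v) pair per step.
    def __init__(self):
        self.k, self.v = [], []

    def append(self, k_t, v_t):
        self.k.append(k_t)
        self.v.append(v_t)

    def numel(self):
        return sum(t.numel() for t in self.k) + sum(t.numel() for t in self.v)


class LinearAttentionState:
    # Constant size regardless of sequence length: a (dim x dim) matrix of
    # prefix sums plus a dim-sized normalizer.
    def __init__(self, dim):
        self.s = torch.zeros(dim, dim)
        self.z = torch.zeros(dim)

    def update(self, k_t, v_t):
        self.s += torch.outer(k_t, v_t)
        self.z += k_t

    def numel(self):
        return self.s.numel() + self.z.numel()


dim, steps = 512, 10_000                       # illustrative width and sequence length
full, linear = FullAttentionCache(), LinearAttentionState(dim)
for _ in range(steps):
    k_t, v_t = torch.randn(dim), torch.randn(dim)
    full.append(k_t, v_t)                      # memory keeps growing
    linear.update(k_t, v_t)                    # memory stays fixed
print(f"full-attention cache:   {full.numel():>12,} values")    # 10,240,000
print(f"linear-attention state: {linear.numel():>12,} values")  # 262,656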

Performance Comparison

Figure: Performance comparison of our iFlame architecture. (a) Our model achieves 1.8× higher inference throughput (81.9 vs. 45.0 tokens/s). (b) Our model keeps KV cache usage low (0.8 GB), whereas full attention requires 8.3× more memory when generating 4,000 faces. (c, d, e) Compared to baseline methods on ShapeNet with 2B training tokens, our model reduces training time by 46% (227 min vs. 422 min), requires 38% less GPU memory during training (28 GB vs. 45 GB per GPU), and maintains comparable face accuracy (78.1% vs. 78.3%).

BibTeX


@article{wang2025iflameinterleavinglinearattention,
  title         = {{iFlame}: Interleaving Full and Linear Attention for Efficient Mesh Generation},
  author        = {Hanxiao Wang and Biao Zhang and Weize Quan and Dong-Ming Yan and Peter Wonka},
  year          = {2025},
  eprint        = {2503.16653},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CV},
}