Our research at the SwAPP lab focuses primarily on the intersection of HPC and AI (for our latest publications, please see the publication page):
AI for HPC
- Performance analysis and auto-tuning: Smart AutoTuner [ICPP'25], [PACT'24], [NeurIPS'23], [HPDC'23], [IPDPS'23], [IPDPS'22].
- AI-assisted HPC code translation, transformation and optimization: ConTraPh [ICS'25], AutoParLLM [NAACL'25], CodeRosetta [NeurIPS'24].
- Code representation and graph-based learning in HPC and compilers: PCEBench [IPDPS'25], PERFOGRAPH [NeurIPS'23], ParaGraph [IPDPS-W'24].
- Profiling, parallel patterns, parallelization, and optimal parallel configurations: [PPoPP'26], ConTraPh [ICS'25], AutoParLLM [NAACL'25], DiscoPoP, OMPGPT, OpenMPDart [SC'24], [Euro-Par'24], [MLSys'23], [ICS'19].
- Correctness for parallel programming: [IWOMP'25], [Correctness'25], [IPDPS'24].
HPC for AI — Efficient and Scalable Learning and Inference
- Training acceleration of DNNs, LLMs, and foundation models: OSF [HPDC'25], MassiveGNN [Cluster'24], FFM [LREC-Coling'24].
- Model compression and optimization: GNN-RL pipeline, [ICML'22 – long presentation], [ICCV'21].
- Inference acceleration: PipeInfer [SC'24], Boda, [EuroSys'20], [Euro-Par'19 – best paper], [TACO'19].
- Resource-aware federated learning: FedGLAD [SEC'25], [CCGRID'25], RaFL [Euro-Par'24], [CCGRID'23], [SC'22].
Scientific Machine Learning
- GNN and foundation models for hydro-ecological models: HydroGAT [SIGSPATIAL'25], HydroGNN [HPC-Asia-W'23], [NeurIPS-W'21].
- Foundation models for computational materials science: SAM-I-Am.
- Long-context LLMs for scientific data comprehension.