
Welcome to the Parallel Architecture, System, and Algorithm Lab


University of California, Merced
The Parallel Architecture, System, and Algorithm (PASA) Lab in Electrical Engineering and Computer Science at the University of California, Merced performs research in core technologies for large-scale parallel systems. The core theme of our research is how to enable scalable and efficient execution of applications (especially machine learning and artificial intelligence workloads) on increasingly complex large-scale parallel systems. Our work creates innovations in runtime systems, architecture, performance modeling, and programming models; we also investigate the impact of novel architectures (e.g., CXL-based memory and accelerators with massive parallelism) on the design of applications and runtimes. Our goal is to improve the performance, reliability, energy efficiency, and productivity of large-scale parallel systems. PASA is part of the High Performance Computing Systems and Architecture Group at UC Merced.
See our Research and Publications pages for more information about our work. For information about our group members, see our People page.
[10/2023] Our paper on dynamic neural network training ("Enabling Large Dynamic Neural Network Training with Learning-based Memory Management") is accepted to HPCA'24.
[8/2023] Undergraduate student Elgin Li joined PASA to gain research experience. Welcome Elgin!
[6/2023] Our project on tensor networks is funded by the NSF PPoSS program. Thanks to our collaborators at NCSU and Oregon State!
[6/2023] NSF funds us to establish an IUCRC center, the Center for Memory System Research (CEMSYS)! We look forward to future collaboration among UC Merced, UC Davis, and industry partners.
[5/2023] We summarized the discussions at the 1st Workshop for Heterogeneous and Composable Memory and published the summary as a SIGARCH blog post.
[3/2023] A paper "Auto-HPCnet: An Automatic Framework to Build Neural Network-based Surrogate for High-Performance Computing Applications" is accepted to HPDC'23.
[2/2023] Co-chaired the Workshop for Heterogeneous and Composable Memory.
[2/2023] Invited as a panelist for the panel on "Challenges and Solutions for the upcoming Extreme Heterogeneity Era" at the International Workshop on Extreme Heterogeneity Solutions associated with PPoPP'23.
[12/2022] Welcome new PhD students, Bin Ma and Jianbo Wu!
[11/2022] Dong Li was invited as a panelist in the panel on "AI for HPC" at the Seventh International Workshop on Extreme Scale Programming Models and Middleware associated with SC'22.
[11/2022] Dong Li co-chaired the AI for Scientific Applications workshop associated with SC'22.
[11/2022] Thanks to Lawrence Livermore National Laboratory (LLNL) for supporting our research on heterogeneous memory!
[11/2022] Thanks to Oracle for supporting our research on large-scale AI model training!
[11/2022] Group meeting with DarchR Lab@UC Davis for research collaboration.
[11/2022] A paper "Merchandiser: Data Placement on Heterogeneous Memory for Task-Parallel HPC Applications with Load-Balance Awareness" is accepted to PPoPP'23.
[11/2022] Prof. Jian Huang@UIUC visited us and gave a talk "The ISA of Future Storage Systems: Interface, Specialization and Approach".
[9/2022] A paper on using heterogeneous memory to train GNNs (titled "Betty: Enabling Large-Scale GNN Training with Batch-Level Graph Partitioning") is accepted to ASPLOS'23.
[9/2022] Prof. Tsung-Wei Huang@Utah visited us. Welcome!
[8/2022] Welcome new PhD student, Jin Huang!
[8/2022] Thanks to SK hynix for supporting our research on heterogeneous memory.
[6/2022] A paper on mixed-precision AI model training is accepted to ATC'22.
[5/2022] Dong Xu and Jie Liu will go to SK hynix and Tencent for summer internships. Congratulations!