Award Abstract # 2046444
CAREER: An adaptive framework to accelerate real-time workloads in heterogeneous and reconfigurable environments

NSF Org: CCF
Division of Computing and Communication Foundations
Recipient: THE RESEARCH FOUNDATION FOR THE STATE UNIVERSITY OF NEW YORK
Initial Amendment Date: February 23, 2021
Latest Amendment Date: July 14, 2023
Award Number: 2046444
Award Instrument: Continuing Grant
Program Manager: Almadena Chtchelkanova
achtchel@nsf.gov
(703) 292-7498
CCF Division of Computing and Communication Foundations
CSE Directorate for Computer and Information Science and Engineering
Start Date: September 1, 2021
End Date: August 31, 2026 (Estimated)
Total Intended Award Amount: $533,113.00
Total Awarded Amount to Date: $314,800.00
Funds Obligated to Date: FY 2021 = $208,210.00
FY 2023 = $106,590.00
History of Investigator:
  • Zhenhua Liu (Principal Investigator)
    zhenhua.liu@stonybrook.edu
Recipient Sponsored Research Office: SUNY at Stony Brook
W5510 FRANKS MELVILLE MEMORIAL L
STONY BROOK
NY  US  11794-0001
(631)632-9949
Sponsor Congressional District: 01
Primary Place of Performance: SUNY at Stony Brook
Stony Brook University
Stony Brook
NY  US  11794-0001
Primary Place of Performance Congressional District: 01
Unique Entity Identifier (UEI): M746VC6XMNH9
Parent UEI:
NSF Program(s): CSR-Computer Systems Research, Software & Hardware Foundation
Primary Program Source: 01002122DB NSF RESEARCH & RELATED ACTIVITIES
01002324DB NSF RESEARCH & RELATED ACTIVITIES
Program Reference Code(s): 1045, 7354, 7942
Program Element Code(s): 735400, 779800
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

Artificial intelligence and machine learning are enabling real-time decisions based on live data for interactive scientific discovery and mission-critical applications such as autonomous driving and the smart grid. These workloads are increasingly powered by heterogeneous and even reconfigurable accelerators. The reconfigurability and heterogeneity of accelerators, together with stringent performance requirements and complex dependencies in real-time workloads, bring daunting operational challenges. If left unaddressed, these issues would slow down scientific discovery and waste substantial computing resources and energy. This project will develop a heterogeneity- and reconfigurability-aware framework that accelerates real-time artificial intelligence and machine learning without hurting other workloads. It will benefit society by improving the efficiency of costly computing systems, saving taxpayers' money and making better use of existing investments. Real-time artificial intelligence and machine learning powered by the framework can better serve society, e.g., by accelerating scientific discovery and enabling data-driven control. The project will also bring innovative education, outreach, and training opportunities for both academic and industrial participants, helping train the next generation of researchers and practitioners.


Today, managing heterogeneous and reconfigurable systems for diverse workloads with high resource utilization and performance guarantees is an extremely challenging task. This project will design and implement an adaptive framework that automatically detects, profiles, and analyzes both workloads and accelerators on the fly. Based on this information, it adaptively reconfigures them to match resource capabilities with workload needs. Global and local optimization will be used to accommodate multiple types of workloads and to drive the configuring, partitioning, placement, scheduling, and execution of the models in each workload. The developed framework will provide provable performance even with partial information in unknown environments, which is urgently needed given the ever-increasing system complexity and volatility of workloads. Novel global resource allocation policies will be developed based on optimization techniques to provide performance guarantees such as fairness, strategyproofness, and Pareto efficiency. Throughout the project, a reciprocal methodology is envisioned: the framework accelerates artificial intelligence/machine learning workloads, and artificial intelligence/machine learning techniques in turn enable the framework.
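
To make the profile-then-reconfigure control loop concrete, the minimal sketch below pairs (stubbed) online workload profiling with a simple max-min fair split of one accelerator's capacity and a reconfiguration step. All names here (WorkloadProfile, Accelerator, profile_workloads, reconfigure) and the water-filling fairness rule are illustrative assumptions for this write-up, not the project's actual interfaces or allocation policies.

```python
# Illustrative sketch of an adaptive profile-then-reconfigure loop.
# All class names, fields, and the fairness rule are hypothetical; they only
# demonstrate the kind of control loop described in the abstract.

from dataclasses import dataclass
from typing import Dict, List


@dataclass
class WorkloadProfile:
    name: str
    demand: float          # measured resource demand (e.g., accelerator units)
    latency_slo_ms: float  # real-time latency target


@dataclass
class Accelerator:
    name: str
    capacity: float        # available resource units
    reconfigurable: bool


def profile_workloads() -> List[WorkloadProfile]:
    """Stand-in for online detection/profiling of live workloads."""
    return [
        WorkloadProfile("inference-A", demand=4.0, latency_slo_ms=10.0),
        WorkloadProfile("training-B", demand=8.0, latency_slo_ms=100.0),
    ]


def max_min_fair_share(demands: Dict[str, float], capacity: float) -> Dict[str, float]:
    """Water-filling max-min fair allocation of a single resource pool."""
    alloc = {k: 0.0 for k in demands}
    remaining = dict(demands)
    left = capacity
    while remaining and left > 1e-9:
        share = left / len(remaining)
        satisfied = [k for k, d in remaining.items() if d <= share]
        if not satisfied:
            # No workload can be fully satisfied: split what is left equally.
            for k in remaining:
                alloc[k] += share
            break
        for k in satisfied:
            alloc[k] += remaining[k]
            left -= remaining.pop(k)
    return alloc


def reconfigure(acc: Accelerator, allocation: Dict[str, float]) -> None:
    """Stand-in for partitioning/placement on a reconfigurable accelerator."""
    for workload, share in allocation.items():
        print(f"{acc.name}: assign {share:.1f} units to {workload}")


if __name__ == "__main__":
    accelerator = Accelerator("fpga-0", capacity=10.0, reconfigurable=True)
    profiles = profile_workloads()
    demands = {p.name: p.demand for p in profiles}
    reconfigure(accelerator, max_min_fair_share(demands, accelerator.capacity))
```

In the envisioned framework, the allocation step would be replaced by the proposed global policies with fairness, strategyproofness, and Pareto-efficiency guarantees, and the reconfiguration step by accelerator-specific partitioning, placement, and scheduling decisions.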

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH

Liu, Yu and Mao, Yingling and Liu, Zhenhua and Yang, Yuanyuan. "Deep Learning-Assisted Online Task Offloading for Latency Minimization in Heterogeneous Mobile Edge." IEEE Transactions on Mobile Computing, 2023. https://doi.org/10.1109/TMC.2023.3285882
Maghakian, Jessica and Lee, Russell and Hajiesmaili, Mohammad and Li, Jian and Sitaraman, Ramesh and Liu, Zhenhua. "Applied Online Algorithms with Heterogeneous Predictors." International Conference on Machine Learning, 2023.
Liu, Yu and Mao, Yingling and Liu, Zhenhua and Ye, Fan and Yang, Yuanyuan. "Joint Task Offloading and Resource Allocation in Heterogeneous Edge Environments." Proceedings IEEE INFOCOM, 2023.
Liu, Yu and Mao, Yingling and Shang, Xiaojun and Liu, Zhenhua and Yang, Yuanyuan. "Energy-Aware Online Task Offloading and Resource Allocation for Mobile Edge Computing." Proceedings of the International Conference on Distributed Computing Systems, 2023.
Shang, Xiaojun and Mao, Yingling and Liu, Yu and Huang, Yaodong and Liu, Zhenhua and Yang, Yuanyuan. "Online Container Scheduling for Data-intensive Applications in Serverless Edge Computing." Proceedings IEEE INFOCOM, 2023.
