Authors: WEN Xin; ZENG Tao; LI Chun-bo[1]; XU Zi-chen (School of Mathematics and Computer Science, Nanchang University, Nanchang 330031, China)
Affiliation: [1] School of Mathematics and Computer Science, Nanchang University, Nanchang 330031, Jiangxi, China
Source: Computer Engineering & Science, 2024, No. 7, pp. 1210-1217 (8 pages)
Funding: National Key R&D Program of China (2022YFB4501703); Key R&D Program of the Jiangxi Provincial Department of Science and Technology (20212BBE53004); Jiangxi Provincial Science and Technology Special Fund of Nanchang University (ZBG20230418043); Jiangxi Provincial Graduate Innovation Fund (YC2023-B010).
Abstract: Model inference services are being widely deployed as large-model technology advances, and building stable, reliable architectural support for them has become a focus for cloud service providers. Serverless computing is a cloud computing paradigm with fine-grained resources and a high abstraction level. It offers advantages such as on-demand billing and elastic scalability, which can effectively improve the computational efficiency of model inference services. However, model inference service workflows are multi-stage in nature, making it difficult for an independent serverless computing framework to ensure optimal execution of every stage. Therefore, the key problem to be addressed is how to leverage the performance characteristics of different serverless computing frameworks to switch each stage of a model inference service workflow online and reduce the overall execution time. This paper discusses the problem of switching model inference services across different serverless computing frameworks. Firstly, pre-trained models are used to construct model inference service functions and derive the performance characteristics of heterogeneous serverless computing frameworks. Secondly, machine learning is employed to build a binary classification model that, combined with these performance characteristics, realizes a prototype framework for online switching of model inference services. Finally, a testing platform is established to generate model inference service workflows and evaluate the performance of the online switching framework prototype. Preliminary experimental results indicate that, compared with an independent serverless computing framework, the online switching framework prototype can reduce the execution time of model inference service workflows by up to 57%.
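The switching idea described in the abstract can be sketched in miniature: per workflow stage, a binary classifier trained on framework performance characteristics predicts which of two serverless frameworks will execute that stage faster. Everything below is an illustrative assumption, not the paper's implementation: the framework labels ("framework_A", "framework_B"), the stage features (payload size, model size, warm-instance availability), and the simple threshold classifier standing in for the paper's learned model are all hypothetical.

```python
# Minimal sketch of per-stage online framework switching.
# A binary label of 1 means the (hypothetical) framework_B was faster
# for that stage; 0 means framework_A was faster.
from dataclasses import dataclass

@dataclass
class StageFeatures:
    payload_mb: float  # input size handed to the stage (assumed feature)
    model_mb: float    # size of the pre-trained model the stage loads
    warm: bool         # whether a warm instance is likely available

def _score(f: StageFeatures) -> float:
    # Collapse features into one score with illustrative weights;
    # a real system would learn these from measured latencies.
    return f.payload_mb + 0.5 * f.model_mb - (10.0 if f.warm else 0.0)

def train_threshold_classifier(samples):
    """Fit a one-dimensional threshold classifier on (features, label)
    pairs; this stands in for the binary classification model built
    from heterogeneous framework performance characteristics."""
    best_thr, best_acc = 0.0, -1.0
    for thr in sorted(_score(f) for f, _ in samples):
        acc = sum((_score(f) > thr) == (y == 1) for f, y in samples)
        if acc > best_acc:
            best_thr, best_acc = thr, acc
    return lambda f: 1 if _score(f) > best_thr else 0

# Synthetic training data: large, cold stages favor framework_B.
train = [
    (StageFeatures(1.0, 50.0, True), 0),
    (StageFeatures(2.0, 80.0, True), 0),
    (StageFeatures(64.0, 500.0, False), 1),
    (StageFeatures(128.0, 800.0, False), 1),
]
classify = train_threshold_classifier(train)

def pick_framework(stage: StageFeatures) -> str:
    """Online decision point: route the stage before it executes."""
    return "framework_B" if classify(stage) else "framework_A"

print(pick_framework(StageFeatures(100.0, 600.0, False)))  # framework_B
print(pick_framework(StageFeatures(1.5, 60.0, True)))      # framework_A
```

The design point the sketch illustrates is that the decision is made per stage rather than once per workflow, which is what allows a multi-stage inference pipeline to shorten its overall execution time when no single framework dominates every stage.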
CLC number: TP302 [Automation and Computer Technology - Computer System Architecture]