Authors: Ding Jiahua; Zhai Yahong[1]; Xu Longyan[1] (School of Electrical & Information Engineering, Hubei University of Automotive Technology, Shiyan 442002, China)
Affiliation: [1] School of Electrical and Information Engineering, Hubei University of Automotive Technology, Shiyan 442002, Hubei, China
Source: Journal of Hubei University of Automotive Technology, 2023, No. 3, pp. 27-30, 38 (5 pages)
Fund: Key Project of the Scientific Research Program of the Hubei Provincial Department of Education (D20211802).
Abstract: In a centralized resource management structure, the base station, acting as the decision node, incurs a high delay when collecting data. A distributed autonomous scheme for joint channel and power-level allocation based on deep reinforcement learning was therefore proposed. First, the vehicle-to-infrastructure (V2I) link allocation scheme was determined; then each vehicle-to-vehicle (V2V) link selected its channel and transmission power according to its interaction with the environment, the reward function was calculated, the Q-network was updated, and a policy was selected. The results show that the V2V links satisfy the link delay constraint and capacity requirement while reducing interference to the V2I links, thus improving communication performance.
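The abstract outlines a per-V2V-link agent that picks a joint (channel, power level) action, receives a reward, and updates a Q-network. The sketch below illustrates that loop in a generic deep Q-learning form; the network size, state dimension, number of channels and power levels, and the update details are illustrative assumptions, not the authors' exact formulation.

```python
# Hypothetical sketch of the per-V2V-link agent described in the abstract.
# All dimensions and hyperparameters here are assumed for illustration.
import random
import torch
import torch.nn as nn

NUM_CHANNELS = 4      # assumed number of sub-channels reusable by V2V links
NUM_POWER_LEVELS = 3  # assumed discrete transmit-power levels
STATE_DIM = 8         # assumed local observation size (e.g. CSI, interference, delay budget)
NUM_ACTIONS = NUM_CHANNELS * NUM_POWER_LEVELS  # joint (channel, power) action space


class QNetwork(nn.Module):
    """Maps a local V2V observation to Q-values over joint channel/power actions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, NUM_ACTIONS),
        )

    def forward(self, state):
        return self.net(state)


def select_action(q_net, state, epsilon):
    """Epsilon-greedy choice; the action index encodes a (channel, power-level) pair."""
    if random.random() < epsilon:
        action = random.randrange(NUM_ACTIONS)
    else:
        with torch.no_grad():
            action = int(q_net(state).argmax().item())
    return divmod(action, NUM_POWER_LEVELS)  # -> (channel, power_level)


def dqn_update(q_net, target_net, optimizer, batch, gamma=0.99):
    """One temporal-difference update of the Q-network from a replay mini-batch."""
    states, actions, rewards, next_states = batch
    q_pred = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        q_next = target_net(next_states).max(dim=1).values
    loss = nn.functional.mse_loss(q_pred, rewards + gamma * q_next)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this reading, the reward would combine the V2V delay/capacity requirements with the interference caused to V2I links, so that maximizing the return steers each link toward channels and power levels that satisfy its own constraints while protecting V2I transmissions.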
Classification Codes: TN929.5 [Electronics and Telecommunications - Communication and Information Systems]; TP181 [Electronics and Telecommunications - Information and Communication Engineering]