Authors: Qianwen Li, Peng Zhang, Handong Yao, Zhiwei Chen, Xiaopeng Li
Affiliations: [1] School of Environmental, Civil, Agricultural and Mechanical Engineering, University of Georgia, Athens 30602, USA; [2] Department of Civil and Environmental Engineering, University of Wisconsin–Madison, Madison 53706, USA; [3] Department of Civil, Architectural, and Environmental Engineering, Drexel University, Philadelphia 19104, USA
Source: Journal of Intelligent and Connected Vehicles, 2024, No. 2, pp. 86–96 (11 pages)
Funding: Sponsored by the National Science Foundation (CMMI #1558887 and CMMI #1932452).
Abstract: Motivated by the promising benefits of connected and autonomous vehicles (CAVs) in improving fuel efficiency, mitigating congestion, and enhancing safety, numerous theoretical models have been proposed to plan CAV multiple-step trajectories (time-specific speed/location trajectories) to accomplish various operations. However, limited efforts have been made to develop proper trajectory control techniques that regulate vehicle movements to follow multiple-step trajectories, or to test the performance of theoretical trajectory planning models with field experiments. Without an effective control method, the benefits of theoretical models for CAV trajectory planning can be difficult to harvest. This study proposes an online learning-based model predictive vehicle trajectory control structure to follow time-specific speed and location profiles. Unlike the single-step controllers that dominate the literature, a multiple-step model predictive controller is adopted to control the vehicle's longitudinal movements for higher accuracy. Because the model predictive controller's output (speed) cannot be interpreted directly by vehicles, a reinforcement learning agent is used to convert the speed value to the vehicle's direct control variable (i.e., throttle/brake). The reinforcement learning agent captures real-time changes in the operating environment, which is valuable in saving parameter calibration resources and improving trajectory control accuracy. A line tracking controller keeps vehicles on track. The proposed control structure is tested using reduced-scale robot cars, and its adaptivity is demonstrated by changing the vehicle load. Experiments on two fundamental CAV platoon operations (i.e., platooning and split) then show the effectiveness of the proposed trajectory control structure in regulating robot movements to follow time-specific reference trajectories.
Keywords: connected and autonomous vehicles (CAVs); reinforcement learning; physical tests; time-specific speed and location; longitudinal and lateral control
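The abstract describes a two-layer longitudinal structure: a multiple-step model predictive layer outputs a speed command, and a learning agent converts that speed into a throttle/brake value while adapting online to the plant (e.g., a changed vehicle load). The sketch below illustrates that idea only; it is not the paper's implementation. The plant model, horizon, candidate speed range, and gains are all illustrative assumptions, and a simple adaptive-gain learner stands in for the paper's reinforcement learning agent.

```python
import numpy as np

DT = 0.1        # control interval (s); illustrative value
K_TRUE = 2.0    # "unknown" plant gain: v_next = v + K_TRUE * throttle (assumed model)

def mpc_speed(ref_pos, x0, dt=DT, horizon=3):
    """Multiple-step predictive layer: pick the candidate speed whose
    constant-speed rollout best matches the next `horizon` reference positions."""
    candidates = np.linspace(0.0, 2.0, 81)              # m/s, reduced-scale range
    preds = x0 + np.outer(candidates, dt * np.arange(1, horizon + 1))
    costs = ((preds - ref_pos[:horizon]) ** 2).sum(axis=1)
    return float(candidates[np.argmin(costs)])

class ThrottleAgent:
    """Maps the speed command to a throttle/brake value in [-1, 1].
    An adaptive gain stands in for the RL agent: when the plant responds
    differently than expected (e.g., heavier load), the gain is corrected."""
    def __init__(self, gain=0.3, lr=0.2):
        self.gain, self.lr = gain, lr

    def act(self, v_cmd, v_meas):
        return float(np.clip(self.gain * (v_cmd - v_meas), -1.0, 1.0))

    def update(self, throttle, dv_expected, dv_realized):
        if abs(throttle) > 1e-3:                        # avoid dividing by ~0
            self.gain += self.lr * (dv_expected - dv_realized) / throttle

# Follow a time-specific reference: a 1 m/s constant-speed position profile.
steps, horizon = 200, 3
ref = 1.0 * DT * np.arange(1, steps + horizon + 1)      # positions at t+1, t+2, ...

x, v, agent = 0.0, 0.0, ThrottleAgent()
for t in range(steps):
    v_cmd = mpc_speed(ref[t:t + horizon], x)            # speed command (upper layer)
    u = agent.act(v_cmd, v)                             # throttle/brake (lower layer)
    v_next = v + K_TRUE * u                             # plant response
    agent.update(u, v_cmd - v, v_next - v)              # learn from realized change
    v, x = v_next, x + v_next * DT

print(round(agent.gain, 2), round(v, 2))                # gain settles near 1/K_TRUE, v near 1.0
```

Changing `K_TRUE` mid-run (the analogue of changing the vehicle load) shows the point of the learning layer: the fixed MPC layer keeps planning speeds, while the lower-layer gain re-adapts so the commanded speed changes are actually realized.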