SegVeg: Segmenting RGB Images into Green and Senescent Vegetation by Combining Deep and Shallow Methods  (Cited by: 3)


Authors: Mario Serouart, Simon Madec, Etienne David, Kaaviya Velumani, Raul Lopez-Lozano, Marie Weiss, Frederic Baret

Affiliations: [1] Arvalis, Institut du végétal, 228 route de l'aérodrome - CS 40509, 84914 Avignon Cedex 9, France; [2] INRAE, Avignon Université, UMR EMMAH, UMT CAPTE, 228 route de l'aérodrome - CS 40509, 84914 Avignon Cedex 9, France; [3] CIRAD, UMR TETIS, F-34398 Montpellier, France; [4] Hiphen SAS, 228 route de l'aérodrome - CS 40509, 84914 Avignon Cedex 9, France

Source: Plant Phenomics, 2022, No. 1, pp. 26-42 (17 pages)

Funding: The study was partly supported by several projects, including ANR PHENOME (Programme d'investissement d'avenir), Digitag (PIA Institut Convergences Agriculture Numérique, ANR-16-CONV-0004), CASDAR LITERAL, and P2S2, funded by CNES.

Abstract: Pixel segmentation of high-resolution RGB images into chlorophyll-active or non-active vegetation classes is a first step often required before estimating key traits of interest. We have developed the SegVeg approach for semantic segmentation of RGB images into three classes (background, green vegetation, and senescent vegetation). This is achieved in two steps: a U-net model is first trained on a very large dataset to separate whole vegetation from the background. The green and senescent vegetation pixels are then separated using an SVM, a shallow machine learning technique, trained over a selection of pixels extracted from the images. The performance of the SegVeg approach is then compared to that of a 3-class U-net model trained with weak supervision over RGB images segmented by SegVeg as ground-truth masks. Results show that the SegVeg approach segments the three classes accurately. However, some confusion is observed, mainly between the background and senescent vegetation, particularly over the dark and bright regions of the images. The U-net model achieves similar performance, with a slight degradation over the green vegetation: the SVM pixel-based approach provides a more precise delineation of the green and senescent patches than the convolutional nature of U-net allows. Using the components of several color spaces improves the classification of vegetation pixels into green and senescent. Finally, the models are used to predict the fraction of the three classes over whole images or regularly spaced grid pixels. Results show that the green fraction is very well estimated (R² = 0.94) by the SegVeg model, while the senescent and background fractions show slightly degraded performance (R² = 0.70 and 0.73, respectively), with a mean 95% confidence error interval of 2.7% and 2.1% for the senescent vegetation and background, versus 1% for the green vegetation. We have made SegVeg publicly available as a ready-to-use script and model, along with the entire annotated grid-pixel dataset. We thus hope to render segmentation accessible to a broad audience by requiring neither […]
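The second step of the pipeline described above, classifying individual vegetation pixels as green or senescent with an SVM over features from several color spaces, can be sketched as follows. This is a minimal illustration, not the authors' released code: the feature set (RGB plus HSV components and an excess-green index) and the toy training pixels are assumptions standing in for the paper's multi-color-space features and annotated grid-pixel dataset.

```python
# Hypothetical sketch of SegVeg's pixel-level green/senescent SVM step.
# Feature choices and training pixels are illustrative assumptions.
import colorsys

import numpy as np
from sklearn.svm import SVC


def pixel_features(rgb):
    """Build per-pixel features from RGB values in [0, 1]:
    R, G, B, plus H, S, V and an excess-green index (2G - R - B)."""
    feats = []
    for r, g, b in rgb:
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        exg = 2 * g - r - b  # common greenness cue
        feats.append([r, g, b, h, s, v, exg])
    return np.array(feats)


# Toy labeled pixels: a few green and a few senescent (yellow/brown) samples.
green = np.array([[0.2, 0.6, 0.2], [0.1, 0.5, 0.1], [0.3, 0.7, 0.2]])
senescent = np.array([[0.7, 0.6, 0.3], [0.6, 0.5, 0.2], [0.8, 0.7, 0.4]])
X = pixel_features(np.vstack([green, senescent]))
y = np.array([1, 1, 1, 0, 0, 0])  # 1 = green, 0 = senescent

# A shallow classifier suffices once vegetation is isolated by the U-net.
clf = SVC(kernel="linear").fit(X, y)
print(clf.predict(pixel_features(np.array([[0.15, 0.55, 0.15]]))))
```

In the full approach, this classifier would only be applied to pixels the first-stage U-net has already labeled as vegetation, so the SVM never has to distinguish senescent tissue from similarly colored soil background.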


Classification: TP391.41 [Automation and Computer Technology - Computer Application Technology]
