Authors: Bas Peters, Eldad Haber, Keegan Lensink
Affiliations: [1] Computational Geosciences Inc., Vancouver, V6H 3Y4, BC, Canada; [2] The University of British Columbia, Department of Earth, Ocean, and Atmospheric Sciences, Vancouver, V6T 1Z4, BC, Canada
Source: Artificial Intelligence in Geosciences, 2024, Issue 1, pp. 269-281 (13 pages)
Abstract: The large spatial/temporal/frequency scale of geoscience and remote-sensing datasets causes memory issues when using convolutional neural networks for (sub-)surface data segmentation. Recently developed fully reversible or fully invertible networks can largely avoid memory limitations by recomputing the states during the backward pass through the network. This results in a low, fixed memory requirement for storing network states, as opposed to the typical linear memory growth with network depth. This work focuses on a fully invertible network based on the telegraph equation. While reversibility saves most of the memory that the data occupy in deep networks, the convolutional kernels can take up most of the memory if fully invertible networks contain multiple invertible pooling/coarsening layers. We address the explosion in the number of convolutional kernels by combining fully invertible networks with layers that hold the convolutional kernels directly in compressed form. A second challenge is that an invertible network outputs a tensor of the same size as its input. This property prevents the straightforward application of invertible networks to tasks that map between different input-output dimensions, need to map to outputs with more channels than present in the input data, or require outputs at a lower/higher resolution than the input data. However, we show that by employing invertible networks in a non-standard fashion, we can still use them for these tasks. Examples in hyperspectral land-use classification, airborne geophysical surveying, and seismic imaging illustrate that we can input large data volumes in one chunk and do not need to work on small patches, use dimensionality reduction, or employ methods that classify a patch to a single central pixel.
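The memory-saving mechanism the abstract describes — recomputing network states during the backward pass instead of storing them — can be illustrated with a minimal additive-coupling reversible block. This is a generic sketch of the reversibility idea, not the paper's telegraph-equation architecture; the functions `f` and `g` are placeholder nonlinearities chosen only for illustration.

```python
import numpy as np

# Placeholder sub-functions: reversibility of the block holds for ANY f and g,
# since each update is an additive change to one half of the state.
def f(x):
    return np.tanh(x)

def g(x):
    return 0.5 * x**2

def forward(x1, x2):
    # Additive coupling: update halves alternately.
    y1 = x1 + f(x2)
    y2 = x2 + g(y1)
    return y1, y2

def inverse(y1, y2):
    # Inputs are recomputed exactly from the outputs, so activations need
    # not be stored during training: the state memory is fixed with depth.
    x2 = y2 - g(y1)
    x1 = y1 - f(x2)
    return x1, x2

rng = np.random.default_rng(0)
x1, x2 = rng.standard_normal(4), rng.standard_normal(4)
r1, r2 = inverse(*forward(x1, x2))
print(np.allclose(x1, r1) and np.allclose(x2, r2))  # True
```

Note that the inverse reconstructs the input exactly (up to floating-point error), which is why a reversible network's memory for states does not grow with the number of layers.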
Keywords: Invertible neural networks; Large-scale deep learning; Memory-efficient deep learning