Therefore, we extract per-pixel context maps from the original input frames, warp them together with the input frames, and then feed them into the synthesis network, as shown in Figure 5-1. We choose to extract the context information by using the response of the conv1 layer of ResNet-18 [39]. Each pixel in the input frame thus has a context vector that describes its 7×7 neighborhood.
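To make the context extraction concrete, the following is a minimal PyTorch sketch, assuming the standard torchvision ResNet-18 with ImageNet weights. Switching conv1 to stride 1 is one plausible way to keep its response at the input resolution so that every pixel receives a 64-dimensional context vector; this stride change is an assumption for illustration, not necessarily the exact implementation used here.

```python
import torch
import torchvision

# Sketch of per-pixel context extraction (assumed configuration):
# conv1 of ResNet-18 has a 7x7 kernel, 64 output channels, padding 3.
resnet18 = torchvision.models.resnet18(weights="IMAGENET1K_V1")
conv1 = resnet18.conv1
conv1.stride = (1, 1)  # keep the response at full input resolution

@torch.no_grad()
def extract_context(frame: torch.Tensor) -> torch.Tensor:
    """Map a (B, 3, H, W) frame to a (B, 64, H, W) context map,
    one 64-d vector per pixel describing its 7x7 neighborhood."""
    return conv1(frame)
```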

The deep-learning approach extends the GridNet architecture to generate the final interpolation result from the two pre-warped frames together with their context maps [45]. Rather than consisting of a single sequence of consecutive layers like a typical neural network, GridNet processes features in rows and columns, as shown in Figure 5-1. The layers in each row form a stream in which the feature resolution stays constant. Each stream processes information at a different scale, and the columns connect the streams with downsampling and upsampling layers so that they can exchange information. This generalizes the typical encoder-decoder architecture, in which features are processed along a single path [46][47]. In contrast, GridNet learns on its own how to combine information across scales, which makes it well suited to pixel-wise problems, where global low-resolution information can guide local high-resolution predictions. In [29], Simon Niklaus and Feng Liu modified the horizontal and vertical connections of the GridNet architecture, as shown in Figure 5-2. In addition, the method adopts parametric rectified linear units (PReLU) to improve training and uses bilinear upsampling to avoid checkerboard artifacts [48][49]. Note that our configuration of three streams at three scales leads to a relatively small receptive field for the network [50]. Nevertheless, it handles occlusion and complex motion well, because the pre-warping has already compensated for the motion.
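For illustration, below is a minimal sketch of a GridNet-style network with three streams at three scales, using PReLU activations and bilinear upsampling as described above. The channel widths, the lateral block design, and the 134-channel input (two pre-warped RGB frames plus two 64-channel context maps) are assumptions for this sketch, not the exact configuration of [29].

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def lateral(ch):
    # One horizontal block within a stream: resolution is preserved.
    return nn.Sequential(
        nn.Conv2d(ch, ch, 3, padding=1), nn.PReLU(),
        nn.Conv2d(ch, ch, 3, padding=1), nn.PReLU(),
    )

def down(cin, cout):
    # Column connection to the next coarser stream (stride-2 conv).
    return nn.Sequential(nn.Conv2d(cin, cout, 3, stride=2, padding=1), nn.PReLU())

class Up(nn.Module):
    # Column connection to the next finer stream: bilinear upsampling
    # followed by a conv, which avoids checkerboard artifacts.
    def __init__(self, cin, cout):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.PReLU())
    def forward(self, x):
        x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=False)
        return self.conv(x)

class TinyGridNet(nn.Module):
    """Three streams (rows) at three scales; the columns exchange
    information by downsampling on the way in and upsampling on the
    way out, so coarse features can guide fine predictions.
    Assumes H and W divisible by 4; widths (32/64/96) are illustrative."""
    def __init__(self, cin=134, cout=3, chs=(32, 64, 96)):
        super().__init__()
        c0, c1, c2 = chs
        self.head = nn.Conv2d(cin, c0, 3, padding=1)
        self.d01, self.d12 = down(c0, c1), down(c1, c2)
        self.row0a, self.row1a, self.row2 = lateral(c0), lateral(c1), lateral(c2)
        self.u21, self.u10 = Up(c2, c1), Up(c1, c0)
        self.row1b, self.row0b = lateral(c1), lateral(c0)
        self.tail = nn.Conv2d(c0, cout, 3, padding=1)

    def forward(self, x):
        r0 = self.row0a(self.head(x))       # fine stream
        r1 = self.row1a(self.d01(r0))       # middle stream
        r2 = self.row2(self.d12(r1))        # coarse stream
        r1 = self.row1b(r1 + self.u21(r2))  # coarse guides middle
        r0 = self.row0b(r0 + self.u10(r1))  # middle guides fine
        return self.tail(r0)
```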

The training method is implemented in PyTorch, building on the PWC-Net [39] implementation: the necessary cost-volume layers are implemented in CUDA, cuDNN's grid sampler is used to perform the warping involved, and the pre-warping algorithm is likewise implemented in CUDA.

For the bidirectional optical flow, specifically, the optical flow from view 0 → view 2 is used to produce view 1, and in the same way the flow from view 2 → view 0 is used to produce view 1. Given these two pre-warped frames, existing bidirectional methods combine them by weighted blending to obtain the interpolated result, yielding another view 1. The deep-learning method instead develops a more flexible approach by training a synthesis neural network that takes the two pre-warped images as input and directly generates the final output, without resorting to pixel-wise blending.
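As a sketch of the warping and blending involved, the snippet below uses torch.nn.functional.grid_sample (the grid sampler that PyTorch dispatches to cuDNN) for backward warping, followed by the fixed weighted blending that the synthesis network replaces. The halfway scaling of the flows and the equal blending weights are illustrative assumptions; the actual pre-warping described above is a custom CUDA implementation, which this backward-warping stand-in only approximates.

```python
import torch
import torch.nn.functional as F

def backward_warp(frame: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Warp `frame` (B, C, H, W) by `flow` (B, 2, H, W) via grid_sample.
    flow[:, 0] is the horizontal displacement in pixels, flow[:, 1]
    the vertical one."""
    b, _, h, w = frame.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=frame.device, dtype=frame.dtype),
        torch.arange(w, device=frame.device, dtype=frame.dtype),
        indexing="ij",
    )
    # Displaced sampling positions, normalized to [-1, 1] as
    # grid_sample expects (with align_corners=True).
    x = (xs + flow[:, 0]) / max(w - 1, 1) * 2 - 1
    y = (ys + flow[:, 1]) / max(h - 1, 1) * 2 - 1
    grid = torch.stack((x, y), dim=-1)  # (B, H, W, 2)
    return F.grid_sample(frame, grid, mode="bilinear",
                         padding_mode="border", align_corners=True)

def blend_midpoint(view0, view2, flow02, flow20):
    """Classical bidirectional interpolation at t = 0.5: warp view 0
    and view 2 halfway toward each other and blend with equal weights.
    The synthesis network replaces exactly this fixed blending step."""
    warped0 = backward_warp(view0, 0.5 * flow02)  # view 0 -> view 1
    warped2 = backward_warp(view2, 0.5 * flow20)  # view 2 -> view 1
    return 0.5 * warped0 + 0.5 * warped2
```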