224 results for convolution
Abstract:
MSC 2010: 30C45, 30A20, 34A40
Abstract:
MSC 2010: 44A35, 35L20, 35J05, 35J25
Abstract:
Ivan Dimovski, Yulian Tsankov - An extension of Duhamel's principle is proposed. For finding explicit solutions of nonlocal boundary value problems of this type, an operational calculus based on a nonclassical two-dimensional convolution is developed. A problem of this type is the Bitsadze-Samarskii problem.
Abstract:
Ivan Dimovski, Yulian Tsankov - In this paper an exact solution of the Bitsadze-Samarskii problem (1) for the Laplace equation is found, using an operational calculus based on a nonclassical two-dimensional convolution. This exact solution may be viewed as a means of summing the nonharmonic sine series of the solution obtained by the Fourier method.
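For orientation, in one standard formulation (the paper's exact setting may differ), the Bitsadze-Samarskii problem for the Laplace equation on the unit square replaces one boundary condition with a nonlocal condition tying a boundary line to a parallel interior line:

    \Delta u = 0 \quad \text{in } (0,1) \times (0,1),
    u(x, 0) = f_0(x), \quad u(x, 1) = f_1(x), \quad u(0, y) = g(y),
    u(1, y) = u(\xi, y), \quad 0 < \xi < 1.

It is this nonlocal fourth condition that rules out a direct classical Fourier expansion and leads to the nonharmonic sine series mentioned above.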
Abstract:
Ivan Hristov Dimovski, Yulian Tsankov Tsankov - Direct operational calculi are constructed for functions u(x, y, t) continuous on a domain of the form D = [0, a] × [0, b] × [0, ∞). Along with the classical Duhamel convolution, the construction uses two nonclassical convolutions for the operators ∂²x and ∂²y. These three one-dimensional convolutions are combined into a single three-dimensional convolution u ∗ v in C(D). Instead of the approach of J. Mikusiński, based on convolution quotients, an alternative approach is developed using the multiplier quotients of the convolution algebra (C(D), ∗).
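For context, a multiplier of the convolution algebra (C(D), ∗) is a linear operator M: C(D) → C(D) satisfying

    M(u \ast v) = (Mu) \ast v \quad \text{for all } u, v \in C(D),

and the multiplier quotients are formal fractions M/N of multipliers in which N is not a divisor of zero. They play the role of Mikusiński's convolution quotients u/v while keeping the functions themselves out of the quotient construction (this follows the general multiplier-theory construction; the paper's precise definitions may differ).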
Abstract:
Ivan Hr. Dimovski, Yulian Ts. Tsankov - A method is proposed for finding explicit solutions of a class of two-dimensional heat equations with nonlocal conditions in the space variables. The method rests on a direct three-dimensional operational calculus: the classical Duhamel convolution is combined with two nonclassical convolutions for the operators ∂xx and ∂yy into a single three-dimensional convolution. The corresponding operational calculus uses multiplier quotients, which make it possible to extend Duhamel's principle to the space variables and to find explicit solutions of the boundary value problems under consideration. The general considerations are applied to the case of boundary conditions of Ionkin type, and explicit closed-form solutions are obtained.
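The classical Duhamel convolution referred to throughout these abstracts is the operation

    (u \ast v)(t) = \int_0^t u(t - \tau)\, v(\tau)\, d\tau,

and Duhamel's principle expresses the solution for time-dependent boundary data f(t) through the solution v for unit step data as

    u(x, t) = \frac{\partial}{\partial t} \int_0^t v(x, t - \tau)\, f(\tau)\, d\tau.

The nonclassical convolutions for ∂xx and ∂yy are specific to these papers and are not reproduced here.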
Abstract:
In this paper, we develop a new entropic matching kernel for weighted graphs by aligning depth-based representations. We demonstrate that this kernel can be seen as an aligned subtree kernel that incorporates explicit subtree correspondences, and thus addresses the drawback of neglecting the relative locations between substructures that arises in the R-convolution kernels. Experiments on standard datasets demonstrate that our kernel outperforms state-of-the-art graph kernels in classification accuracy.
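For context, an R-convolution kernel sums a base kernel over all pairs of substructures from the two graphs, with no constraint on where those substructures sit; the sketch below (all names and the toy decomposition are illustrative, not the paper's aligned subtree kernel) shows that structure and why relative locations are ignored.

    # A minimal sketch of a generic R-convolution graph kernel: decompose each
    # graph into substructures and sum a delta base kernel over all pairs.
    # Function and variable names are illustrative only.
    from collections import Counter

    def substructures(graph):
        # Placeholder decomposition: label each vertex by its sorted neighbor
        # ids, a crude stand-in for depth-based subtree representations.
        return [tuple(sorted(graph[v])) for v in graph]

    def r_convolution_kernel(g1, g2):
        # k(G1, G2) = sum over all substructure pairs of a delta base kernel.
        # No correspondence between substructure *locations* is enforced,
        # which is the drawback the aligned subtree kernel addresses.
        s1, s2 = Counter(substructures(g1)), Counter(substructures(g2))
        return sum(c1 * s2[s] for s, c1 in s1.items())

    # Toy usage: graphs as adjacency dicts with integer vertex ids.
    g1 = {0: [1], 1: [0, 2], 2: [1]}
    g2 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    print(r_convolution_kernel(g1, g2))  # counts matching substructure pairs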
Abstract:
We propose a family of attributed graph kernels based on mutual information measures, i.e., the Jensen-Tsallis (JT) q-differences (for q ∈ [1,2]) between probability distributions over the graphs. To this end, we first assign a probability to each vertex of the graph through a continuous-time quantum walk (CTQW). We then adopt the tree-index approach [1] to strengthen the original vertex labels, and we show how the CTQW can induce a probability distribution over these strengthened labels. We show that our JT kernel (for q = 1) overcomes the shortcoming of discarding non-isomorphic substructures arising in the R-convolution kernels. Moreover, we prove that the proposed JT kernels generalize the Jensen-Shannon graph kernel [2] (for q = 1) and the classical subtree kernel [3] (for q = 2), respectively. Experimental evaluations demonstrate the effectiveness and efficiency of the JT kernels.
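For reference, one common form of the quantities involved (normalizations in the paper may differ): the Tsallis entropy of a distribution P and the Jensen-Tsallis q-difference between P and Q are

    S_q(P) = \frac{1}{q - 1} \Bigl( 1 - \sum_i p_i^q \Bigr), \qquad
    JT_q(P, Q) = S_q\!\left( \frac{P + Q}{2} \right) - \frac{S_q(P) + S_q(Q)}{2},

with S_q recovering the Shannon entropy, and JT_q the Jensen-Shannon divergence, in the limit q → 1, consistent with the generalization claimed above.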
Abstract:
MSC 2010: 44A35, 44A45, 44A40, 35K20, 35K05
Abstract:
MSC 2010: 35R11, 44A10, 44A20, 26A33, 33C45
Abstract:
2000 Mathematics Subject Classification: 49J52, 49J50, 58C20, 26B09.
Abstract:
As one of the most popular deep learning models, the convolutional neural network (CNN) has achieved huge success in image information extraction. Traditionally, a CNN is trained by supervised learning on labeled data and used as a classifier by adding a classification layer at the end. Its capability for extracting image features is largely limited by the difficulty of assembling a large labeled training dataset. In this paper, we propose a new unsupervised learning CNN model, which uses a convolutional sparse auto-encoder (CSAE) algorithm to pre-train the CNN. Instead of using labeled natural images for CNN training, the CSAE algorithm trains the CNN with unlabeled artificial images, which enables easy expansion of the training data and unsupervised learning. The CSAE algorithm is especially designed for extracting complex features from specific objects such as Chinese characters. After the features of the artificial images are extracted by the CSAE algorithm, the learned parameters are used to initialize the first convolutional layer of the CNN, and the CNN model is then fine-tuned on scene image patches with a linear classifier. The new CNN model is applied to Chinese scene text detection and is evaluated on a multilingual image dataset that labels Chinese, English, and numeral texts separately. A detection precision gain of more than 10% is observed over two comparison CNN models.
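A minimal sketch of the two-stage idea, assuming PyTorch and illustrative layer sizes (this is not the authors' implementation; every module, hyperparameter, and the toy data below are stand-ins): pre-train a convolutional sparse auto-encoder on unlabeled images, then copy its encoder filters into the first convolutional layer of a classifier CNN before fine-tuning.

    # Stage 1: unsupervised pre-training of a convolutional sparse auto-encoder.
    import torch
    import torch.nn as nn

    class ConvSparseAE(nn.Module):
        def __init__(self, channels=1, features=32):
            super().__init__()
            self.encoder = nn.Conv2d(channels, features, kernel_size=5, padding=2)
            self.decoder = nn.ConvTranspose2d(features, channels, kernel_size=5, padding=2)

        def forward(self, x):
            code = torch.relu(self.encoder(x))
            return self.decoder(code), code

    def pretrain(ae, unlabeled, epochs=5, sparsity=1e-3):
        opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
        mse = nn.MSELoss()
        for _ in range(epochs):
            for x in unlabeled:
                recon, code = ae(x)
                # Reconstruction loss plus an L1 sparsity penalty on the code.
                loss = mse(recon, x) + sparsity * code.abs().mean()
                opt.zero_grad()
                loss.backward()
                opt.step()

    ae = ConvSparseAE()
    unlabeled = [torch.rand(8, 1, 32, 32) for _ in range(10)]  # stand-in data
    pretrain(ae, unlabeled)

    # Stage 2: copy the learned filters into the classifier's first conv layer,
    # then fine-tune the whole network on labeled scene image patches.
    classifier = nn.Sequential(
        nn.Conv2d(1, 32, kernel_size=5, padding=2), nn.ReLU(),
        nn.MaxPool2d(2), nn.Flatten(),
        nn.Linear(32 * 16 * 16, 2),  # linear classifier: text / non-text
    )
    classifier[0].weight.data.copy_(ae.encoder.weight.data)
    classifier[0].bias.data.copy_(ae.encoder.bias.data)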
Abstract:
In 1972 the ionized cluster beam (ICB) deposition technique was introduced as a new method for thin film deposition. At that time the use of clusters was postulated to enhance film nucleation and adatom surface mobility, resulting in high quality films. Although a few researchers reported singly ionized clusters containing 10²-10³ atoms, others were unable to repeat their work. The consensus now is that the film effects in the early investigations were due to self-ion bombardment rather than clusters. Subsequently, in early 1992, synthesis of large zinc clusters without the use of a carrier gas was demonstrated by Gspann and repeated in our laboratory. Clusters resulted from very significant changes in two source parameters: crucible pressure was increased from the earlier 2 Torr to several thousand Torr, and a converging-diverging nozzle 18 mm long and 0.4 mm in diameter at the throat was used in place of the 1 mm x 1 mm nozzle of the early work. While this is practical for zinc and other high vapor pressure materials, it remains impractical for many materials of industrial interest such as gold, silver, and aluminum. The work presented here describes results using gold and silver at pressures of around 1 and 50 Torr in order to study the effect of pressure and nozzle shape. Significant numbers of large clusters were not detected. Deposited films were studied by atomic force microscopy (AFM) for roughness analysis and by X-ray diffraction. Nanometer-size islands of zinc deposited on flat silicon substrates by ICB were also studied by AFM; the number of atoms/cm² was calculated and compared to data from Rutherford backscattering spectrometry (RBS). To improve the agreement between the AFM and RBS data, convolution and deconvolution algorithms were implemented to study and simulate the interaction between tip and sample in atomic force microscopy. The deconvolution algorithm takes into account the physical volume occupied by the tip, resulting in an image that is a more accurate representation of the surface. One method increasingly used to study deposited films, both during and after growth, is ellipsometry. Ellipsometry is a surface analytical technique used to determine the optical properties and thickness of thin films; in situ measurements can be made through the windows of a deposition chamber. A method was developed for determining the optical properties of a film that is sensitive only to the growing film and accommodates underlying interfacial layers, multiple unknown underlayers, and other unknown substrates. This method is carried out by making an initial ellipsometry measurement well past the real interface and defining a virtual interface in the vicinity of this measurement.
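The tip-sample interaction described here is commonly modeled with grey-scale morphology: the AFM image is approximately the dilation of the true surface by the tip shape, and eroding the image with the tip gives a surface estimate that accounts for the tip's physical volume. A sketch under that standard model (not necessarily the dissertation's exact algorithm; the tip shape and island are illustrative):

    # Morphological model of AFM imaging: the measured image is the grey-scale
    # dilation of the surface by the tip; eroding the image with the same tip
    # yields a tip-corrected ("deconvolved") surface estimate.
    import numpy as np
    from scipy.ndimage import grey_dilation, grey_erosion

    def parabolic_tip(radius=5):
        # Illustrative inverted-parabola tip apex; real tips are calibrated.
        y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
        return -(x**2 + y**2) / (2.0 * radius)

    tip = parabolic_tip()
    surface = np.zeros((64, 64))
    surface[30:34, 30:34] = 10.0          # a nanometer-scale island

    image = grey_dilation(surface, structure=tip)    # simulated AFM scan
    estimate = grey_erosion(image, structure=tip)    # tip-corrected surface

    # The estimate narrows the apparent island broadened by the tip volume.
    print((surface > 1).sum(), (image > 1).sum(), (estimate > 1).sum())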
Abstract:
The contributions of this dissertation are the development of two new interrelated approaches to video data compression: (1) a level-refined motion estimation and subband compensation method for effective motion estimation and motion compensation; (2) a shift-invariant sub-decimation decomposition method to overcome the deficiency of the decimation process in motion estimation caused by the shift-variance of the wavelet transform. The enormous amount of data generated by digital video creates an intense need for efficient video compression techniques to conserve storage space and minimize bandwidth utilization. The main idea of video compression is to reduce the interpixel redundancies within and between video frames by applying motion estimation and motion compensation (MEMC) in combination with spatial transform coding. To locate the global minimum of the matching criterion function reliably, hierarchical motion estimation with coarse-to-fine resolution refinements using the discrete wavelet transform is applied, owing to its intrinsic multiresolution and scalability properties. Because most of the energy is concentrated in the low-resolution subbands and decreases in the high-resolution subbands, a new approach called the level-refined motion estimation and subband compensation (LRSC) method is proposed. It exploits the possible intrablocks in the subbands for lower-entropy coding while keeping the low computational load of level-refined motion estimation, thus achieving both temporal compression quality and computational simplicity. Since circular convolution is applied in the wavelet transform to obtain the decomposed subframes without coefficient expansion, a symmetric-extended wavelet transform is designed for the finite-length frame signals, giving more accurate motion estimation without discontinuous boundary distortions. Although wavelet-transformed coefficients still contain spatial-domain information, motion estimation in the wavelet domain is not as straightforward as in the spatial domain because of the shift variance of the decimation process of the wavelet transform. A new approach called the sub-decimation decomposition method is proposed, which maintains motion consistency between the original frame and the decomposed subframes, thereby improving wavelet-domain video compression through shift-invariant motion estimation and compensation.
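The boundary-handling trade-off between circular convolution and symmetric extension can be seen directly in PyWavelets (a standard library, though not necessarily the one used in the dissertation); the ramp signal and wavelet choice below are illustrative:

    # Circular convolution ('periodization') keeps the coefficient count equal
    # to the signal length but wraps the boundaries, while symmetric extension
    # avoids the artificial boundary jump at the cost of a few extra
    # coefficients -- the trade-off discussed in the abstract.
    import numpy as np
    import pywt

    frame_line = np.linspace(0.0, 1.0, 64)  # a ramp: ends differ, so wrapping hurts

    cA_per, cD_per = pywt.dwt(frame_line, 'db4', mode='periodization')
    cA_sym, cD_sym = pywt.dwt(frame_line, 'db4', mode='symmetric')

    # The wrap-around shows up as large detail coefficients at the boundary.
    print(len(cD_per), np.abs(cD_per[[0, -1]]).round(3))
    print(len(cD_sym), np.abs(cD_sym[[0, -1]]).round(3))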
Abstract:
The distribution and abundance of the American crocodile (Crocodylus acutus) in the Florida Everglades is dependent on the timing, amount, and location of freshwater flow. One of the goals of the Comprehensive Everglades Restoration Plan (CERP) is to restore historic freshwater flows to American crocodile habitat throughout the Everglades. To predict the impacts on the crocodile population from planned restoration activities, we created a stage-based spatially explicit crocodile population model that incorporated regional hydrology models and American crocodile research and monitoring data. Growth and survival were influenced by salinity, water depth, and density-dependent interactions. A stage-structured spatial model was used with discrete spatial convolution to direct crocodiles toward attractive sources where conditions were favorable. The model predicted that CERP would have both positive and negative impacts on American crocodile growth, survival, and distribution. Overall, crocodile populations across south Florida were predicted to decrease approximately 3 % with the implementation of CERP compared to future conditions without restoration, but local increases up to 30 % occurred in the Joe Bay area near Taylor Slough, and local decreases up to 30 % occurred in the vicinity of Buttonwood Canal due to changes in salinity and freshwater flows.
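Discrete spatial convolution in this kind of stage-structured model typically redistributes individuals each time step by convolving the population grid with a dispersal kernel and then weighting toward attractive cells; a generic sketch (grids, kernel, and the suitability field are hypothetical, not the CERP model's):

    # Generic dispersal step using discrete spatial convolution: individuals
    # spread according to a normalized kernel, then are biased toward cells
    # with higher habitat suitability. All values here are illustrative.
    import numpy as np
    from scipy.signal import convolve2d

    def dispersal_step(pop, suitability, kernel):
        spread = convolve2d(pop, kernel, mode='same', boundary='fill')
        # Bias the spread population toward attractive cells, conserving totals.
        weighted = spread * suitability
        return weighted * (pop.sum() / weighted.sum())

    kernel = np.ones((3, 3)) / 9.0                 # uniform nearest-cell dispersal
    pop = np.zeros((20, 20)); pop[10, 10] = 100.0  # initial cohort in one cell
    suitability = np.random.default_rng(0).random((20, 20))  # e.g. salinity-driven

    pop = dispersal_step(pop, suitability, kernel)
    print(round(pop.sum(), 2))  # total population conserved: 100.0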