404 results for regularization


Relevance: 20.00%

Publisher:

Abstract:

Calculating the potentials on the heart's epicardial surface from the body surface potentials constitutes one form of inverse problem in electrocardiography (ECG). Since these problems are ill-posed, one approach is to use zero-order Tikhonov regularization, where the squared norms of both the residual and the solution are minimized, with a relative weight determined by the regularization parameter. In this paper, we used three different methods to choose the regularization parameter in the inverse solutions of ECG: the L-curve, generalized cross validation (GCV), and the discrepancy principle (DP). Among them, the GCV method has received less attention in ECG inverse problems than the other two. Since the DP approach requires knowledge of the noise norm, we used a model function to estimate it. The performance of the methods was compared using a concentric sphere model and a real-geometry heart-torso model, with a distribution of current dipoles placed inside the heart model as the source. Gaussian measurement noise was added to the body surface potentials. The results show that all three methods produce good inverse solutions at low noise levels; as the noise increases, however, the DP approach produces better results than the L-curve and GCV methods, particularly in the real-geometry model. Both the GCV and L-curve methods perform well in low- to medium-noise situations.
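A minimal sketch (not the paper's code), assuming a discretized transfer matrix A mapping epicardial to body surface potentials and a measured vector b: it computes zero-order Tikhonov solutions from the SVD and selects the parameter by the GCV criterion; the returned residual and solution norms can be plotted against each other to locate the L-curve corner.

```python
import numpy as np

def tikhonov_svd(A, b, lambdas):
    """Zero-order Tikhonov solutions x = argmin ||Ax - b||^2 + lam^2 ||x||^2."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    sols, res, xnorm = [], [], []
    for lam in lambdas:
        x = Vt.T @ (s * beta / (s**2 + lam**2))   # filtered SVD expansion
        sols.append(x)
        res.append(np.linalg.norm(A @ x - b))     # L-curve x-axis
        xnorm.append(np.linalg.norm(x))           # L-curve y-axis
    return sols, np.array(res), np.array(xnorm)

def gcv_choice(A, b, lambdas):
    """Pick lam minimizing G(lam) = ||A x - b||^2 / trace(I - A A_lam^+)^2."""
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    m = A.shape[0]
    extra = np.linalg.norm(b)**2 - np.linalg.norm(beta)**2  # part of b outside range(U)
    scores = []
    for lam in lambdas:
        f = s**2 / (s**2 + lam**2)                # Tikhonov filter factors
        scores.append((np.sum(((1 - f) * beta)**2) + extra) / (m - np.sum(f))**2)
    return lambdas[int(np.argmin(scores))]
```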

Relevance: 20.00%

Publisher:

Abstract:

It is well known that the addition of noise to the input data of a neural network during training can, in some circumstances, lead to significant improvements in generalization performance. Previous work has shown that such training with noise is equivalent to a form of regularization in which an extra term is added to the error function. However, the regularization term, which involves second derivatives of the error function, is not bounded below, and so can lead to difficulties if used directly in a learning algorithm based on error minimization. In this paper we show that, for the purposes of network training, the regularization term can be reduced to a positive definite form which involves only first derivatives of the network mapping. For a sum-of-squares error function, the regularization term belongs to the class of generalized Tikhonov regularizers. Direct minimization of the regularized error function provides a practical alternative to training with noise.
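The equivalence can be made concrete in code. A minimal PyTorch sketch, assuming a single-output network `net`, inputs `x`, targets `t`, and an optimizer `opt` (all illustrative names): the first step trains on noise-corrupted inputs, while the second adds the equivalent positive-definite first-derivative penalty (sigma^2/2)||dy/dx||^2 directly to the sum-of-squares error.

```python
import torch

def noisy_step(net, x, t, opt, sigma=0.1):
    """One training step with Gaussian noise added to the inputs."""
    opt.zero_grad()
    y = net(x + sigma * torch.randn_like(x))
    loss = 0.5 * ((y - t) ** 2).sum()
    loss.backward()
    opt.step()
    return loss.item()

def regularized_step(net, x, t, opt, sigma=0.1):
    """Equivalent step: explicit first-derivative (generalized Tikhonov) penalty."""
    opt.zero_grad()
    x = x.clone().requires_grad_(True)
    y = net(x)
    loss = 0.5 * ((y - t) ** 2).sum()
    # For a single-output network, the noise-induced regularizer reduces to
    # (sigma^2 / 2) * ||dy/dx||^2, which involves only first derivatives.
    g = torch.autograd.grad(y.sum(), x, create_graph=True)[0]
    loss = loss + 0.5 * sigma ** 2 * (g ** 2).sum()
    loss.backward()
    opt.step()
    return loss.item()
```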

Relevance: 20.00%

Publisher:

Abstract:

In this paper we consider four alternative approaches to complexity control in feed-forward networks based respectively on architecture selection, regularization, early stopping, and training with noise. We show that there are close similarities between these approaches and we argue that, for most practical applications, the technique of regularization should be the method of choice.
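As a hedged illustration of the recommended technique, the sketch below adds a quadratic weight penalty (the simplest regularizer of this family) to a sum-of-squares error; the architecture and the coefficient `nu` are arbitrary choices, not taken from the paper.

```python
import torch
import torch.nn as nn

def regularized_loss(net, y, t, nu=1e-3):
    """Sum-of-squares error plus a quadratic weight-decay regularizer."""
    penalty = sum((p ** 2).sum() for p in net.parameters())
    return 0.5 * ((y - t) ** 2).sum() + 0.5 * nu * penalty

# Illustrative network; nu controls the effective model complexity.
net = nn.Sequential(nn.Linear(10, 32), nn.Tanh(), nn.Linear(32, 1))
```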

Relevance: 20.00%

Publisher:

Abstract:

Learning user interests from online social networks helps to better understand user behaviors and provides useful guidance for designing user-centric applications. Apart from analyzing users' online content, it is also important to consider users' social connections in the social Web. Graph regularization methods have been widely used in various text mining tasks to leverage the graph structure information extracted from data. Previous graph regularization methods operate under the cluster assumption: nearby nodes are more similar, and nodes on the same structure (typically referred to as a cluster or a manifold) are likely to be similar. We argue that learning user interests from complex, sparse, and dynamic social networks should instead be based on the link structure assumption, under which node similarities are evaluated from local link structures rather than from explicit links between two nodes. We propose a regularization framework based on the relation bipartite graph, which can be constructed from any type of relation. Using Twitter as our case study, we evaluate the proposed framework on social networks built from retweet relations. Both quantitative and qualitative experiments show that our method outperforms several competitive baselines in learning user interests over a set of predefined topics, and gives superior results on retweet prediction and topical authority identification. © 2014 ACM.
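A schematic NumPy sketch, not the paper's implementation, of the generic graph-regularization objective such frameworks build on: min_F ||F - Y||^2 + lam * tr(F^T L F), solved in closed form. Here W is an affinity matrix that, in the paper's setting, would be derived from the relation bipartite graph (e.g., retweet relations) rather than from explicit links.

```python
import numpy as np

def graph_regularized_interests(W, Y, lam=1.0):
    """W: (n, n) symmetric node affinities; Y: (n, k) observed topic scores."""
    d = W.sum(axis=1)
    L = np.diag(d) - W                     # combinatorial graph Laplacian
    n = W.shape[0]
    # Setting the gradient (F - Y) + lam * L @ F to zero gives the closed form:
    return np.linalg.solve(np.eye(n) + lam * L, Y)
```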

Relevance: 20.00%

Publisher:

Abstract:

AMS subject classification: 65K10, 49M07, 90C25, 90C48.

Relevance: 20.00%

Publisher:

Abstract:

The present study attempts to assess the level of consistency in the orthographic systems of selected sixteenth- and seventeenth-century printers and to trace the influence that normative writings could potentially have exerted on them. The approach taken here draws on the philological tradition of examining and comparing several texts written in the same language but produced at different times. The study discusses the orthography of the editions of The Schoole of Vertue, a manual of good conduct for children, published between 1557 and 1687. The orthographic variables taken into account fall under two criteria: the distribution and functional load of the selected graphemes, and the indication of vowel length.

Relevance: 20.00%

Publisher:

Abstract:

Imaging technologies are widely used in application fields such as the natural sciences, engineering, medicine, and the life sciences. A broad class of imaging problems reduces to solving ill-posed inverse problems (IPs). Traditional strategies for solving these ill-posed IPs rely on variational regularization methods, which are based on the minimization of suitable energies and make use of knowledge about the image formation model (forward operator) and prior knowledge about the solution, but fall short in incorporating knowledge directly from data. The more recent learned approaches, on the other hand, can easily learn the intricate statistics of images from large datasets, but lack a systematic method for incorporating prior knowledge about the image formation model. The main purpose of this thesis is to discuss data-driven image reconstruction methods that combine the benefits of these two reconstruction strategies for the solution of highly nonlinear ill-posed inverse problems. Mathematical formulations and numerical approaches for imaging IPs, covering linear as well as strongly nonlinear problems, are described. More specifically, we address the Electrical Impedance Tomography (EIT) reconstruction problem by unrolling the regularized Gauss-Newton method and integrating the regularization learned by a data-adaptive neural network. Furthermore, we investigate the solution of nonlinear ill-posed IPs by introducing a deep-PnP framework that integrates a graph convolutional denoiser into the proximal Gauss-Newton method, with a practical application to EIT, a recently introduced and promising imaging technique. Efficient algorithms are then applied to the limited-electrode problem in EIT, combining compressive sensing techniques and deep learning strategies. Finally, a transformer-based neural network architecture is adapted to restore the noisy solution of the computed tomography problem recovered using the filtered back-projection method.
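A schematic sketch of the regularized Gauss-Newton step that such unrolling is built around; `forward`, `jacobian`, and `learned_reg` are hypothetical placeholders (in the thesis, the regularization is learned by a data-adaptive neural network, e.g., for the EIT forward map).

```python
import numpy as np

def gauss_newton_step(x, y, forward, jacobian, learned_reg, alpha=1e-2):
    """One regularized Gauss-Newton update for y = forward(x) + noise."""
    J = jacobian(x)                 # (m, n) Jacobian of the forward operator
    r = y - forward(x)              # data residual
    R = learned_reg(x)              # (n, n) regularization matrix (learned)
    # Regularized normal equations: (J^T J + alpha * R) dx = J^T r
    dx = np.linalg.solve(J.T @ J + alpha * R, J.T @ r)
    return x + dx
```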

Relevance: 20.00%

Publisher:

Abstract:

Ill-conditioned inverse problems frequently arise in the life sciences, particularly in the context of image deblurring and medical image reconstruction. These problems have been addressed through iterative variational algorithms, which regularize the reconstruction by adding prior knowledge about the problem's solution. Despite the theoretical reliability of these methods, their practical utility is constrained by the time required to converge. Recently, the advent of neural networks has allowed the development of reconstruction algorithms that compute highly accurate solutions with minimal time demands. Regrettably, it is well known that neural networks are sensitive to unexpected noise, and the quality of their reconstructions quickly deteriorates when the input is slightly perturbed. Modern efforts to address this challenge have led to massive neural network architectures, but this approach is unsustainable from both ecological and economic standpoints. The recently introduced GreenAI paradigm argues that developing sustainable neural network models is essential for practical applications. In this thesis, we aim to bridge the gap between theory and practice by introducing a novel framework that combines the reliability of model-based iterative algorithms with the speed and accuracy of end-to-end neural networks. Additionally, we demonstrate that our framework yields results comparable to state-of-the-art methods while using relatively small, sustainable models. In the first part of the thesis, we discuss the proposed framework from a theoretical perspective: we provide an extension of classical regularization theory, applicable when neural networks are employed to solve inverse problems, and show that there exists a trade-off between accuracy and stability. Furthermore, we demonstrate the effectiveness of our methods in common life-science scenarios. In the second part, we begin extending the proposed method into the probabilistic domain, analyzing some properties of deep generative models and revealing their potential applicability to ill-posed inverse problems.

Relevance: 10.00%

Publisher:

Abstract:

In a 4D chiral Thirring model we analyze the possibility that radiative corrections may produce spontaneous breaking of Lorentz and CPT symmetry. By studying the effective potential, we verified that the chiral current $\bar{\psi}\gamma^{\mu}\gamma_{5}\psi$ may assume a nonzero vacuum expectation value, which triggers Lorentz and CPT violation. Furthermore, by making fluctuations about the minimum of the potential we dynamically induce a bumblebee-like model containing a Chern-Simons term.

Relevance: 10.00%

Publisher:

Abstract:

Within the superfield formalism, we study the ultraviolet properties of three-dimensional supersymmetric quantum electrodynamics. The theory is shown to be finite at all loop orders in a particular gauge.

Relevance: 10.00%

Publisher:

Abstract:

We solve the operator ordering problem for the quantum continuous integrable su(1,1) Landau-Lifshitz model and give a prescription for obtaining the quantum trace identities and the spectrum of the higher-order local charges. We also show that this method, based on operator regularization and renormalization, guarantees quantum integrability as well as the construction of self-adjoint extensions, and can be used as an alternative to the discretization procedure; unlike the latter, it is based only on integrable representations. © 2010 American Institute of Physics. [doi:10.1063/1.3509374]

Relevance: 10.00%

Publisher:

Abstract:

In this article, a new hybrid model for estimating the pore size distribution of micro- and mesoporous materials is developed and tested against adsorption data for nitrogen, oxygen, and argon on ordered mesoporous materials reported in the literature. For the micropore region, the model uses the Dubinin-Radushkevich (DR) isotherm with the Chen-Yang modification. A recent isotherm model of the authors for nonporous materials, which uses a continuum-mechanical model for the multilayer region and the Unilan model for the submonolayer region, has been extended to adsorption in mesopores. The experimental data are inverted using regularization to obtain the pore size distribution. The present model was found to be successful in predicting the pore size distribution of pure as well as binary physical mixtures of MCM-41 synthesized with different templates, with results in agreement with those from the XRD method and nonlocal density functional theory. It was found that various other recent methods, as well as the classical Broekhoff and de Boer method, underpredict the pore diameter of MCM-41. The present model has been successfully applied to MCM-48, SBAs, CMK, KIT, HMS, FSM, MTS, mesoporous fly ash, and a large number of other regular mesoporous materials.
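A generic sketch of the regularized-inversion step described above: the adsorption integral q(p) = ∫ K(p, w) f(w) dw is discretized and the pore size distribution f is recovered by non-negative least squares with second-difference smoothing. K, q, and the smoothing operator are illustrative; the paper's actual kernel is its hybrid DR/multilayer local isotherm.

```python
import numpy as np
from scipy.optimize import nnls

def invert_psd(K, q, lam=1e-2):
    """Solve min_{f >= 0} ||K f - q||^2 + lam * ||D f||^2 by stacking."""
    n = K.shape[1]
    D = np.diff(np.eye(n), 2, axis=0)        # second-difference smoothing operator
    A = np.vstack([K, np.sqrt(lam) * D])     # augmented least-squares system
    b = np.concatenate([q, np.zeros(D.shape[0])])
    f, _ = nnls(A, b)                        # non-negativity keeps f a valid PSD
    return f
```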

Relevance: 10.00%

Publisher:

Abstract:

The characterization of three commercial activated carbons was carried out using the adsorption of various compounds in the aqueous phase. For this purpose the generalized adsorption isotherm was employed, and a modification of the Dubinin-Radushkevich pore filling model, incorporating repulsive contributions to the pore potential as well as bulk liquid-phase nonideality, was used as the local isotherm. Eight different flavor compounds were used as adsorbates, and the isotherms were jointly fitted to yield a common pore size distribution for each carbon. The bulk liquid-phase nonideality was incorporated through the UNIFAC activity coefficient model, and the repulsive contribution to the pore potential through the Steele 10-4-3 potential model. The mean micropore network coordination number for each carbon was also determined from the fitted saturation capacity based on percolation theory. Good agreement between the model and the experimental data was observed. In addition, excellent agreement was observed between the bimodal gamma pore size distribution and the regularization-based density functional theory pore size distribution obtained from argon adsorption, supporting the validity of the model. The results show that liquid-phase adsorption, using adsorptive molecules of different sizes, can be an effective means of characterizing the pore size distribution as well as connectivity. Alternatively, if the carbon pore size distribution is independently known, the method can be used to measure critical molecular sizes. © 2001 Elsevier Science.