972 results for Kikuchi approximations
Abstract:
This article proposes computing sensitivities of upper tail probabilities of random sums by the saddlepoint approximation. The sensitivity considered is the derivative of the upper tail probability with respect to the parameter of the summation-index distribution. Random sums with Poisson- or Geometric-distributed summation indices and Gamma- or Weibull-distributed summands are considered. The score method with importance sampling is considered as an alternative approximation. Numerical studies show that both the saddlepoint approximation and the score method with importance sampling are very accurate, but the saddlepoint approximation is substantially faster to compute. The suggested saddlepoint approximation can therefore be used conveniently in various scientific problems.
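The kind of tail approximation described above can be sketched for one of the cases mentioned (Poisson summation index, Gamma summands). The following is a minimal illustration of a Lugannani–Rice-type saddlepoint approximation to P(S > x) for a compound Poisson–Gamma sum; the function names, the bisection root-finder, and the parameterization are assumptions for illustration, not the authors' implementation.

```python
import math

# Sketch: saddlepoint (Lugannani-Rice) tail approximation for
# S = X_1 + ... + X_N, N ~ Poisson(lam), X_i ~ Gamma(shape a, rate b).
# All names and the root-finding scheme are illustrative assumptions.

def cgf(s, lam, a, b):
    # Cumulant generating function K(s) = lam * (M_X(s) - 1),
    # with Gamma MGF M_X(s) = (1 - s/b)^(-a), valid for s < b
    return lam * ((1.0 - s / b) ** (-a) - 1.0)

def cgf1(s, lam, a, b):
    # First derivative K'(s)
    return lam * a / b * (1.0 - s / b) ** (-a - 1.0)

def cgf2(s, lam, a, b):
    # Second derivative K''(s)
    return lam * a * (a + 1.0) / b ** 2 * (1.0 - s / b) ** (-a - 2.0)

def solve_saddlepoint(x, lam, a, b):
    # Solve K'(s) = x by bisection; K' is increasing on (-inf, b)
    lo, hi = -1e6, b - 1e-12
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if cgf1(mid, lam, a, b) < x:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def tail_prob(x, lam, a, b):
    # Lugannani-Rice: P(S > x) ~ 1 - Phi(w) + phi(w) * (1/u - 1/w),
    # for x away from the mean E[S] = lam * a / b (w, u -> 0 at the mean)
    s_hat = solve_saddlepoint(x, lam, a, b)
    w = math.copysign(
        math.sqrt(2.0 * (s_hat * x - cgf(s_hat, lam, a, b))), s_hat)
    u = s_hat * math.sqrt(cgf2(s_hat, lam, a, b))
    phi = math.exp(-0.5 * w * w) / math.sqrt(2.0 * math.pi)
    Phi = 0.5 * math.erfc(-w / math.sqrt(2.0))
    return 1.0 - Phi + phi * (1.0 / u - 1.0 / w)
```

For example, with lam = a = b = 1 (a compound Poisson sum of unit exponentials), `tail_prob(3.0, 1.0, 1.0, 1.0)` is close to the exact tail probability of roughly 0.094.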
Abstract:
This article provides importance sampling algorithms for computing the probabilities of various types of ruin of spectrally negative Lévy risk processes: ruin over the infinite time horizon, ruin within a finite time horizon, and ruin past a finite time horizon. For the special case of the compound Poisson process perturbed by diffusion, algorithms are provided for computing the probabilities of ruin by creeping (i.e. induced by the diffusion term) and by jumping (i.e. induced by a claim amount). It is shown that these algorithms have either bounded relative error or logarithmic efficiency as t, x → ∞, where t > 0 is the time horizon and x > 0 is the starting point of the risk process, with y = t/x held constant and assumed either below or above a certain constant.
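The exponential-tilting idea behind such importance sampling algorithms can be sketched in the simplest classical setting: the (non-perturbed) compound Poisson risk process with exponential claims, where ruin is certain under the tilted law and the likelihood-ratio estimator has bounded relative error. This is an illustrative stand-in, not the paper's algorithm; all names and parameter choices are assumptions.

```python
import math
import random

# Sketch: importance sampling (exponential tilting at the adjustment
# coefficient) for the infinite-horizon ruin probability of a classical
# Cramer-Lundberg process with Exp(beta) claims, Poisson rate lam,
# premium rate c. Requires the net-profit condition c > lam / beta.

def ruin_prob_is(x, lam, beta, c, n_paths=20000, seed=0):
    rng = random.Random(seed)
    gamma = beta - lam / c               # adjustment (Lundberg) coefficient
    lam_t = lam * beta / (beta - gamma)  # claim arrival rate under tilting
    beta_t = beta - gamma                # claim size rate under tilting
    total = 0.0
    for _ in range(n_paths):
        s = 0.0                          # claim-surplus process
        while s <= x:                    # ruin is certain under the tilted law
            s -= c * rng.expovariate(lam_t)  # premium earned between claims
            s += rng.expovariate(beta_t)     # tilted claim amount
        total += math.exp(-gamma * s)    # likelihood ratio at the ruin time
    return total / n_paths
```

For exponential claims the exact ruin probability is (lam / (c * beta)) * exp(-gamma * x), so the estimator can be checked directly; its values are bounded by exp(-gamma * x), which is what gives the bounded relative error mentioned in the abstract.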
Abstract:
A large-deviations-type approximation to the probability of ruin within a finite time for the compound Poisson risk process perturbed by diffusion is derived. This approximation is based on the saddlepoint method and generalizes the approximation for the non-perturbed risk process by Barndorff-Nielsen and Schmidli (Scand Actuar J 1995(2):169–186, 1995). An importance sampling approximation to this probability of ruin is also provided. Numerical illustrations assess the accuracy of the saddlepoint approximation, using importance sampling as a benchmark. The relative deviations between the saddlepoint approximation and importance sampling are very small, even for extremely small probabilities of ruin, and the saddlepoint approximation is substantially faster to compute.
Abstract:
The saddlepoint method provides accurate approximations for the distributions of many test statistics and estimators, and for important probabilities arising in various stochastic models. The saddlepoint approximation is a large-deviations technique that is substantially more accurate than limiting normal or Edgeworth approximations, especially in the presence of very small sample sizes or very small probabilities. The outstanding accuracy of the saddlepoint approximation can be explained by the fact that it has bounded relative error.
Abstract:
We report two cases of Iso Kikuchi syndrome: a newborn female 21 days of age and a 62-year-old male. These two cases illustrate this unusual congenital abnormality of the nails at the extremes of life.
Abstract:
The technique of Abstract Interpretation [11] has allowed the development of sophisticated program analyses that are provably correct and practical. The semantic approximations produced by such analyses have traditionally been applied to optimization during program compilation. Recently, however, novel and promising applications of semantic approximations have been proposed in the more general context of program validation and debugging [3,9,7].
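The core idea of abstract interpretation can be illustrated with a tiny example. The sketch below analyzes integer multiplication over the abstract "sign" domain; this particular domain and the names used are assumptions chosen for illustration, and the cited framework is far more general.

```python
# Sketch of abstract interpretation over the sign domain: concrete
# integers are abstracted to their sign, and operations are re-executed
# on the abstract values in a provably sound way.

NEG, ZERO, POS, TOP = "-", "0", "+", "T"  # sign domain (TOP = unknown sign)

def abstract(n):
    # Abstraction function: map a concrete integer to its sign
    return ZERO if n == 0 else (POS if n > 0 else NEG)

def abs_mul(a, b):
    # Abstract multiplication, sound w.r.t. concrete multiplication:
    # abstract(x * y) == abs_mul(abstract(x), abstract(y))
    if ZERO in (a, b):
        return ZERO          # anything times zero is zero
    if TOP in (a, b):
        return TOP           # an unknown sign propagates
    return POS if a == b else NEG
```

Soundness here means the abstract result always covers the concrete one, e.g. `abs_mul(abstract(-3), abstract(4))` is `NEG`, matching the sign of -12.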
Abstract:
Abstract is not available.
Abstract:
In this work, we consider the Minimum Weight Pseudo-Triangulation (MWPT) problem of a given set of n points in the plane. Globally optimal pseudo-triangulations with respect to the weight, as optimization criteria, are difficult to be found by deterministic methods, since no polynomial algorithm is known. We show how the Ant Colony Optimization (ACO) metaheuristic can be used to find high quality pseudo-triangulations of minimum weight. We present the experimental and statistical study based on our own set of instances since no reference to benchmarks for these problems were found in the literature. Throughout the experimental evaluation, we appraise the ACO metaheuristic performance for MWPT problem.
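The ACO mechanism itself (pheromone trails, heuristic visibility, evaporation and deposit) can be sketched generically. Since constructing pseudo-triangulations is beyond a short illustration, the skeleton below applies the same mechanism to a small Euclidean tour problem as a stand-in for the MWPT setting; all parameter names and default values are assumptions.

```python
import math
import random

# Generic ACO skeleton on a small Euclidean tour problem (a stand-in for
# the MWPT construction; the pheromone/heuristic machinery is the same).

def aco_tour(points, n_ants=10, n_iters=50, alpha=1.0, beta=2.0,
             rho=0.5, seed=0):
    rng = random.Random(seed)
    n = len(points)
    dist = [[math.dist(points[i], points[j]) for j in range(n)]
            for i in range(n)]
    tau = [[1.0] * n for _ in range(n)]          # pheromone trails
    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            tour = [rng.randrange(n)]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:
                i = tour[-1]
                # selection probability ~ pheromone^alpha * (1/dist)^beta
                cand = [(j, tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta)
                        for j in unvisited]
                r = rng.random() * sum(w for _, w in cand)
                acc = 0.0
                for j, w in cand:
                    acc += w
                    if acc >= r:
                        break
                tour.append(j)
                unvisited.remove(j)
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        for i in range(n):                       # evaporation
            for j in range(n):
                tau[i][j] *= 1.0 - rho
        for tour, length in tours:               # deposit, favoring short tours
            for k in range(n):
                i, j = tour[k], tour[(k + 1) % n]
                tau[i][j] += 1.0 / length
                tau[j][i] += 1.0 / length
    return best_tour, best_len
```

On the four corners of a unit square the skeleton recovers the perimeter tour of length 4, illustrating how pheromone reinforcement concentrates on low-weight solutions.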
Abstract:
Mixtures of polynomials (MoPs) are a non-parametric density estimation technique especially designed for hybrid Bayesian networks with continuous and discrete variables. Algorithms to learn one- and multi-dimensional (marginal) MoPs from data have recently been proposed. In this paper we introduce two methods for learning MoP approximations of conditional densities from data. Both approaches are based on learning MoP approximations of the joint density and the marginal density of the conditioning variables, but they differ as to how the MoP approximation of the quotient of the two densities is found. We illustrate and study the methods using data sampled from known parametric distributions, and we demonstrate their applicability by learning models based on real neuroscience data. Finally, we compare the performance of the proposed methods with an approach for learning mixtures of truncated basis functions (MoTBFs). The empirical results show that the proposed methods generally yield models that are comparable to or significantly better than those found using the MoTBF-based method.
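The quotient construction mentioned above can be illustrated on a hand-picked example. A MoP density is piecewise polynomial, and a conditional density can be formed as the quotient of a joint MoP and the marginal MoP of the conditioning variable; the polynomial pieces below are chosen by hand for illustration, not learned from data as in the paper.

```python
# Sketch: conditional density as the quotient of a joint polynomial
# density and the marginal of the conditioning variable. The example
# density f(x, y) = 4xy on [0, 1]^2 is an illustrative assumption.

def poly_eval(coeffs, x):
    # Horner evaluation of coeffs[0] + coeffs[1]*x + coeffs[2]*x**2 + ...
    r = 0.0
    for c in reversed(coeffs):
        r = r * x + c
    return r

def joint_mop(x, y):
    # f(x, y) = 4*x*y on [0, 1]^2, a product of polynomial pieces
    return poly_eval([0.0, 4.0], x) * poly_eval([0.0, 1.0], y)

def marginal_mop(x):
    # f(x) = 2*x on [0, 1], the marginal of the joint above
    return poly_eval([0.0, 2.0], x)

def conditional_mop(y, x):
    # f(y | x) = f(x, y) / f(x), defined where the marginal is positive
    fx = marginal_mop(x)
    return joint_mop(x, y) / fx if fx > 0.0 else 0.0
```

For this joint density the exact conditional is f(y | x) = 2y, which the quotient reproduces; the learning problem in the paper is precisely how to approximate such a quotient by a MoP when the densities are estimated from data.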
Abstract:
In this paper we present different error measurements with the aim of evaluating the quality of the approximations generated by the GNG3D method for mesh simplification. The first phase of this method consists of the execution of the GNG3D algorithm, described in the paper. The primary goal of this phase is to obtain a simplified set of vertices representing the best approximation of the original 3D object. In the reconstruction phase we use the information provided by the optimization algorithm to reconstruct the faces, thus obtaining the optimized mesh. The implementation of three error functions, named Eavg, Emax, and Esur, permits us to control the error of the simplified model, as shown in the examples studied.
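Two of the three error functions can be sketched as average and worst-case distances from the original vertices to the simplified model. Whether the paper measures distance to the simplified vertex set or to the reconstructed surface is not stated in the abstract, so the point-to-nearest-vertex form below is an assumption, and only Eavg and Emax are illustrated.

```python
import math

# Sketch: Eavg and Emax error measures for a simplified model, taken
# here (an assumption) as distances to the nearest simplified vertex.

def nearest_dist(p, vertices):
    # Distance from point p to its nearest vertex of the simplified model
    return min(math.dist(p, v) for v in vertices)

def e_avg(original, simplified):
    # Eavg: mean distance from the original vertices to the simplified set
    return sum(nearest_dist(p, simplified) for p in original) / len(original)

def e_max(original, simplified):
    # Emax: worst-case (one-sided, Hausdorff-style) distance
    return max(nearest_dist(p, simplified) for p in original)
```

Monitoring both values while removing vertices is what lets a simplification method trade mesh size against a controlled approximation error.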
Abstract:
Light confinement and control of an optical field have numerous applications in telecommunications for optical signal processing. When the wavelength of the electromagnetic field is on the order of the period of a photonic microstructure, the field undergoes reflection, refraction, and coherent scattering. This produces photonic bandgaps: forbidden frequency regions, or spectral stop bands, where light cannot exist. Dielectric perturbations that break the perfect periodicity of these structures produce what is analogous to an impurity state in the bandgap of a semiconductor. The defect modes that exist at discrete frequencies within the photonic bandgap are spatially localized about the cavity-defects in the photonic crystal. In this thesis the properties of two tight-binding approximations (TBAs) are investigated in one-dimensional and two-dimensional coupled-cavity photonic crystal structures. We require an efficient and simple approach that ensures the continuity of the electromagnetic field across dielectric interfaces in complex structures. We therefore develop E- and D-TBAs to calculate the modes in finite 1D and 2D two-defect coupled-cavity photonic crystal structures. In the E- and D-TBAs we expand the coupled-cavity E-modes in terms of the individual E- and D-modes, respectively. We investigate the dependence of the defect modes, their frequencies, and their quality factors on the relative placement of the defects in the photonic crystal structures. We then elucidate the differences between the two TBA formulations and describe the conditions under which each formulation may be more robust when encountering a dielectric perturbation. Our 1D analysis showed that the 1D modes were sensitive to the structure geometry.
The antisymmetric D-mode amplitudes show that the D-TBA did not capture the correct (tangential E-field) boundary conditions; however, the D-TBA did not yield significantly poorer results than the E-TBA. Our 2D analysis reveals that the E- and D-TBAs produced nearly identical mode profiles for every structure. Plots of the relative difference between the E- and D-mode amplitudes show that the D-TBA did capture the correct (normal E-field) boundary conditions. We found that the 2D TBA coupled-cavity mode calculations were 125-150 times faster than an FDTD calculation for the same two-defect photonic crystal structure. Notwithstanding this efficiency, the appropriateness of either TBA was found to depend on the geometry of the structure and on the mode(s), i.e. on whether the mode has a large normal or tangential component.