954 results for Convergence Analysis
Abstract:
In this paper we develop a new approach to sparse principal component analysis (sparse PCA). We propose two single-unit and two block optimization formulations of the sparse PCA problem, aimed at extracting a single sparse dominant principal component of a data matrix or more components at once, respectively. While the initial formulations involve nonconvex functions and are therefore computationally intractable, we rewrite them in the form of an optimization program involving maximization of a convex function on a compact set. The dimension of the search space is reduced dramatically if the data matrix has many more columns (variables) than rows. We then propose and analyze a simple gradient method suited for the task. It appears that our algorithm has the best convergence properties when either the objective function or the feasible set is strongly convex, which is the case with our single-unit formulations and can be enforced in the block case. Finally, we demonstrate numerically, on a set of random and gene-expression test problems, that our approach outperforms existing algorithms both in the quality of the obtained solution and in computational speed.
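To make the single-unit iteration concrete, here is a minimal sketch, assuming a data matrix A with observations in rows and variables in columns and an l1 sparsity parameter gamma; it is a power-type gradient iteration in the spirit of the method described above, not the authors' published implementation.

```python
import numpy as np

def sparse_pc_l1(A, gamma, iters=200, seed=0):
    """Sketch of a single-unit, power-type gradient iteration for
    l1-penalized sparse PCA.  A has observations in rows and variables
    in columns; gamma >= 0 controls sparsity (illustrative only)."""
    rng = np.random.default_rng(seed)
    n, p = A.shape
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)                    # iterate lives on the unit sphere in R^n
    for _ in range(iters):
        t = A.T @ x                           # projections of the current direction
        w = np.sign(t) * np.maximum(np.abs(t) - gamma, 0.0)   # soft-thresholding
        g = A @ w                             # ascent direction in the (small) row space
        nrm = np.linalg.norm(g)
        if nrm == 0.0:                        # gamma too large: every variable is killed
            break
        x = g / nrm                           # maximize the convex objective over the sphere
    t = A.T @ x
    z = np.sign(t) * np.maximum(np.abs(t) - gamma, 0.0)
    nz = np.linalg.norm(z)
    return z / nz if nz > 0 else z            # sparse loading vector (unit norm if nonzero)
```

With gamma = 0 the loop reduces to the ordinary power method on A Aᵀ and returns the dense leading loading vector; increasing gamma drives more loadings exactly to zero.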
Abstract:
The basic idea of the finite element beam propagation method (FE-BPM) is described. It is applied to calculate the fundamental mode of a channel plasmon polariton (CPP) waveguide to confirm its validity. Both the field distribution and the effective index of the fundamental mode are given by the method. The convergence speed demonstrates the advantage and stability of this method. A plasmonic waveguide with a dielectric strip deposited on a metal substrate is then investigated, and the group velocity is found to be negative for the fundamental mode of this kind of waveguide. The numerical result shows that the power flow direction is opposite to that of the phase velocity.
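The sign relation between power flow and phase velocity can be checked numerically once a dispersion relation ω(β) is available, e.g. from repeated mode solves; the tabulated dispersion data below is purely illustrative (a made-up backward-wave branch), not taken from the paper.

```python
import numpy as np

# Hypothetical dispersion data for a fundamental mode: propagation
# constant beta (rad/m) versus angular frequency omega (rad/s).
beta = np.linspace(1.2e7, 1.6e7, 41)
omega = 2.4e15 - 5.0e-2 * (beta - 1.2e7) ** 2 / 1e7   # illustrative backward-wave branch

v_phase = omega / beta                 # phase velocity, along +beta here
v_group = np.gradient(omega, beta)     # group velocity = d(omega)/d(beta)

# Opposite signs indicate that energy (power flow) travels against the phase.
backward = np.sign(v_group) != np.sign(v_phase)
print(f"backward-wave points: {backward.sum()} of {backward.size}")
```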
Abstract:
The Expectation-Maximization (EM) algorithm is an iterative approach to maximum likelihood parameter estimation. Jordan and Jacobs (1993) recently proposed an EM algorithm for the mixture of experts architecture of Jacobs, Jordan, Nowlan and Hinton (1991) and the hierarchical mixture of experts architecture of Jordan and Jacobs (1992). They showed empirically that the EM algorithm for these architectures yields significantly faster convergence than gradient ascent. In the current paper we provide a theoretical analysis of this algorithm. We show that the algorithm can be regarded as a variable metric algorithm whose search direction has a positive projection onto the gradient of the log likelihood. We also analyze the convergence of the algorithm and provide an explicit expression for the convergence rate. In addition, we describe an acceleration technique that yields a significant speedup in simulation experiments.
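For readers unfamiliar with the iterative structure being analysed, here is a minimal, self-contained EM loop for a two-component 1-D Gaussian mixture: a much simpler model than the mixture-of-experts architecture, but with the same E-step/M-step pattern and monotone log-likelihood.

```python
import numpy as np

def em_gmm_1d(x, iters=100, tol=1e-8, seed=0):
    """EM for a two-component 1-D Gaussian mixture; returns the parameters
    and the log-likelihood trace (monotonically non-decreasing)."""
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, size=2, replace=False).astype(float)
    var = np.full(2, x.var())
    pi = np.array([0.5, 0.5])
    ll_trace = []
    for _ in range(iters):
        # E-step: responsibilities of each component for each point
        dens = np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        joint = pi * dens
        ll = np.log(joint.sum(axis=1)).sum()
        resp = joint / joint.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixing weights, means and variances
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        if ll_trace and ll - ll_trace[-1] < tol:   # convergence in log-likelihood
            ll_trace.append(ll)
            break
        ll_trace.append(ll)
    return pi, mu, var, ll_trace

x = np.concatenate([np.random.default_rng(1).normal(-2, 1, 500),
                    np.random.default_rng(2).normal(3, 0.5, 500)])
pi, mu, var, ll = em_gmm_1d(x)
print(mu, f"converged in {len(ll)} iterations")
```

Each pass performs one E-step and one M-step; the non-decreasing log-likelihood trace is exactly the quantity whose convergence rate the theoretical analysis characterises.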
Abstract:
The Saliency Network proposed by Shashua and Ullman is a well-known approach to the problem of extracting salient curves from images while performing gap completion. This paper analyzes the Saliency Network. The Saliency Network is attractive for several reasons. First, the network generally prefers long and smooth curves over short or wiggly ones. While computing saliencies, the network also fills in gaps with smooth completions and tolerates noise. Finally, the network is locally connected, and its size is proportional to the size of the image. Nevertheless, our analysis reveals certain weaknesses with the method. In particular, we show cases in which the most salient element does not lie on the perceptually most salient curve. Furthermore, in some cases the saliency measure changes its preferences when curves are scaled uniformly. Also, we show that for certain fragmented curves the measure prefers large gaps over a few small gaps of the same total size. In addition, we analyze the time complexity required by the method. We show that the number of steps required for convergence in serial implementations is quadratic in the size of the network, and in parallel implementations is linear in the size of the network. We discuss problems due to coarse sampling of the range of possible orientations. We show that with proper sampling the complexity of the network becomes cubic in the size of the network. Finally, we consider the possibility of using the Saliency Network for grouping. We show that the Saliency Network recovers the most salient curve efficiently, but it has problems with identifying any salient curve other than the most salient one.
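As a rough illustration of the kind of local, iterative update the complexity analysis refers to, the following is a simplified one-dimensional caricature of the saliency recurrence, with the orientation coupling collapsed to a constant; it is not the Shashua-Ullman network itself, only a sketch of its serial cost of roughly N sweeps over N elements.

```python
import numpy as np

def saliency_chain(sigma, rho, sweeps=None):
    """Simplified 1-D caricature of a saliency recurrence of the form
    E_i <- sigma_i + rho_i * (propagated neighbour saliency),
    with the orientation-dependent coupling collapsed to a constant 1.
    sigma_i = 1 on curve elements, 0 on gaps; rho_i < 1 attenuates gaps."""
    n = len(sigma)
    E = np.array(sigma, dtype=float)
    sweeps = sweeps or n                       # ~n serial sweeps of n elements: O(n^2) work
    for _ in range(sweeps):
        nxt = np.concatenate([E[1:], [0.0]])   # saliency propagated from the right neighbour
        E = sigma + rho * nxt
    return E

# A fragmented curve: 1s are edge elements, 0s are gaps to be completed.
sigma = np.array([1, 1, 1, 0, 0, 1, 1, 1, 1, 1], dtype=float)
rho = np.where(sigma > 0, 1.0, 0.7)            # gaps attenuate the propagated saliency
print(saliency_chain(sigma, rho).round(2))
```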
Abstract:
Numerical approximation of the long time behavior of a stochastic differential equation (SDE) is considered. Error estimates for time-averaging estimators are obtained and then used to show that the stationary behavior of the numerical method converges to that of the SDE. The error analysis is based on using an associated Poisson equation for the underlying SDE. The main advantages of this approach are its simplicity and universality. It works equally well for a range of explicit and implicit schemes, including those with simple simulation of random variables, and for hypoelliptic SDEs. To simplify the exposition, we consider only the case where the state space of the SDE is a torus, and we study only smooth test functions. However, we anticipate that the approach can be applied more widely. An analogy between our approach and Stein's method is indicated. Some practical implications of the results are discussed.
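The kind of estimator being analysed can be written down in a few lines: integrate the SDE with an explicit scheme and average a smooth test function along the trajectory. The drift, diffusion and test function below are illustrative choices on the one-dimensional torus (stationary density proportional to exp(cos x)), not the paper's examples.

```python
import numpy as np

def time_average_em(b, sigma, phi, x0=0.0, dt=1e-2, T=1e3, seed=0):
    """Euler-Maruyama on the torus [0, 2*pi) with the time-averaging
    estimator (1/T) * integral of phi(X_t) dt, approximated by a Riemann sum."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x, acc = x0, 0.0
    for _ in range(n):
        dw = rng.normal(0.0, np.sqrt(dt))
        x = (x + b(x) * dt + sigma(x) * dw) % (2 * np.pi)   # wrap onto the torus
        acc += phi(x) * dt
    return acc / T

# Illustrative coefficients: gradient drift with stationary density ~ exp(cos x).
b = lambda x: -np.sin(x)          # drift
sigma = lambda x: np.sqrt(2.0)    # additive noise
phi = lambda x: np.cos(x)         # smooth test function
print(time_average_em(b, sigma, phi))
```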
Abstract:
Computational modelling of dynamic fluid-structure interaction (DFSI) is problematic, since computational fluid dynamics (CFD) is conventionally solved using finite volume (FV) methods while computational structural mechanics (CSM) is based entirely on finite element (FE) methods. Hence, progress in modelling the emerging multi-physics problem of dynamic fluid-structure interaction in a consistent manner is frustrated, and significant computational convergence problems may be encountered in transferring and filtering data from one mesh and solution procedure to another, unless the fluid-structure coupling is one way, very weak, or both. This paper sets out the solution procedure for modelling the multi-physics dynamic fluid-structure interaction problem within a single software framework, PHYSICA, using finite volume, unstructured mesh (FV-UM) procedures, and focuses on some of the problems and issues that have to be resolved for time-accurate, closely coupled dynamic fluid-structure flutter analysis.
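As a schematic of a closely coupled partitioned solution procedure (not the PHYSICA implementation), a time step with coupling sub-iterations and under-relaxation can be sketched as below; the single-degree-of-freedom solver callbacks are placeholders for the FV fluid and structural solvers.

```python
import numpy as np

def coupled_fsi_step(fluid_solve, structure_solve, d0, omega=0.5,
                     max_subiters=50, tol=1e-6):
    """One time step of a partitioned, closely coupled FSI scheme:
    iterate fluid load -> structural displacement until the interface
    displacement stops changing, with under-relaxation factor omega."""
    d = d0.copy()
    for k in range(max_subiters):
        load = fluid_solve(d)                     # fluid solve on the mesh deformed by d
        d_new = structure_solve(load)             # structural response to the fluid load
        if np.linalg.norm(d_new - d) < tol * (1 + np.linalg.norm(d)):
            return d_new, k + 1                   # converged sub-iteration loop
        d = (1 - omega) * d + omega * d_new       # under-relax to stabilise the coupling
    raise RuntimeError("coupling sub-iterations did not converge")

# Placeholder single-DOF 'solvers' standing in for the fluid and structure codes.
fluid_solve = lambda d: 1.0 - 0.3 * d             # interface load depends on position
structure_solve = lambda f: 0.8 * f               # linear compliant structure
d, iters = coupled_fsi_step(fluid_solve, structure_solve, np.array([0.0]))
print(d, iters)
```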
Abstract:
A new rigorous numerical-analytical technique based upon the Galerkin method with entire-domain basis functions has been developed and applied to the study of periodic aperture arrays containing multiple dissimilar apertures of complex shapes in a stratified medium. The rapid, uniform convergence of the solutions has enabled a comprehensive parametric study of complex array arrangements. The developed theory has revealed new effects of aperture shape and layout on the array performance. The physical mechanisms underlying the TM-wave resonances and Luebbers' anomaly have been explained for the first time.
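To make the "entire-domain basis" idea concrete in a toy setting, the sketch below applies a Galerkin projection with global sine basis functions to a 1-D model boundary-value problem; it is not the paper's aperture-array formulation, only an illustration of why smooth global bases converge rapidly.

```python
import numpy as np

def galerkin_global_sine(f, n_modes, x):
    """Galerkin solution of -u'' = f on (0,1), u(0) = u(1) = 0, using the
    entire-domain basis phi_k(x) = sin(k*pi*x).  For this self-adjoint
    problem the stiffness matrix is diagonal, so each coefficient is
    c_k = <f, phi_k> / ((k*pi)^2 * <phi_k, phi_k>)."""
    xs = np.linspace(0.0, 1.0, 2001)
    dx = xs[1] - xs[0]
    u = np.zeros_like(x)
    for k in range(1, n_modes + 1):
        phi = np.sin(k * np.pi * xs)
        fk = np.sum(f(xs) * phi) * dx             # quadrature for <f, phi_k>
        ck = fk / ((k * np.pi) ** 2 * 0.5)        # <phi_k, phi_k> = 1/2 on (0,1)
        u += ck * np.sin(k * np.pi * x)
    return u

f = lambda s: np.ones_like(s)                     # uniform load; exact u = s*(1-s)/2
x = np.linspace(0.0, 1.0, 101)
for n in (1, 3, 5, 9):
    err = np.max(np.abs(galerkin_global_sine(f, n, x) - x * (1 - x) / 2))
    print(n, f"max error = {err:.2e}")            # rapid convergence with few modes
```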
Abstract:
Previous papers have noted the difficulty of obtaining neural models that are stable under simulation when trained using prediction-error-based methods. Here the differences between series-parallel and parallel identification structures for training neural models are investigated. The effect of the error surface shape on training convergence and simulation performance is analysed using a standard algorithm operating in both training modes. A combined series-parallel/parallel training scheme is proposed, aiming to provide a more effective means of obtaining accurate neural simulation models. Simulation examples show that the combined scheme is advantageous in circumstances where the solution space is known or suspected to be complex.
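The distinction between the two identification structures is whether the regressor uses measured or model-predicted past outputs. The sketch below shows the two prediction modes for a generic one-step model g, with a mismatched linear toy model standing in for a trained neural network; it is an illustration of the concept, not the paper's training scheme.

```python
import numpy as np

def series_parallel(g, y_meas, u):
    """One-step-ahead (series-parallel) prediction: the regressor uses
    the *measured* past output y_meas[t-1]."""
    return np.array([g(y_meas[t - 1], u[t - 1]) for t in range(1, len(y_meas))])

def parallel(g, y0, u):
    """Simulation (parallel) prediction: the regressor feeds back the
    model's *own* previous output, so errors can accumulate or diverge."""
    y_hat = [y0]
    for t in range(1, len(u) + 1):
        y_hat.append(g(y_hat[-1], u[t - 1]))
    return np.array(y_hat[1:])

# Toy first-order plant and an (imperfect) identified model.
rng = np.random.default_rng(0)
u = rng.uniform(-1, 1, 200)
y = np.zeros(201)
for t in range(1, 201):
    y[t] = 0.9 * y[t - 1] + 0.5 * u[t - 1]
g_hat = lambda y_prev, u_prev: 0.85 * y_prev + 0.5 * u_prev   # mismatched model

e_sp = y[1:] - series_parallel(g_hat, y, u)
e_p = y[1:] - parallel(g_hat, y[0], u)
print(f"series-parallel RMS error: {np.sqrt(np.mean(e_sp**2)):.3f}")
print(f"parallel (simulation) RMS error: {np.sqrt(np.mean(e_p**2)):.3f}")
```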
Abstract:
Reliable prediction of long-term medical device performance using computer simulation requires consideration of variability in the surgical procedure, as well as patient-specific factors. However, even deterministic simulation of long-term failure processes for such devices is time- and resource-consuming, so including variability can lead to excessive times to achieve useful predictions. This study investigates the use of an accelerated probabilistic framework for predicting the likely performance envelope of a device and applies it to femoral prosthesis loosening in cemented hip arthroplasty.
A creep and fatigue damage failure model for bone cement, in conjunction with an interfacial fatigue model for the implant–cement interface, was used to simulate loosening of a prosthesis within a cement mantle. A deterministic set of trial simulations was used to account for variability of a set of surgical and patient factors, and a response surface method was used to perform and accelerate a Monte Carlo simulation to achieve an estimate of the likely range of prosthesis loosening. The proposed framework was used to conceptually investigate the influence of prosthesis selection and surgical placement on prosthesis migration.
Results demonstrate that the response surface method is capable of dramatically reducing the time needed to achieve convergence in the mean and variance of the predicted response variables. A critical requirement for realistic predictions is the size and quality of the initial training dataset used to generate the response surface, and further work is required to establish recommendations for a minimum number of initial trials. Results of this conceptual application predicted that loosening was sensitive to implant size and femoral width. Furthermore, different rankings of implant performance were predicted when only individual simulations (e.g. an average condition) were used to rank implants compared with when stochastic simulations were used. In conclusion, the proposed framework provides a viable approach to predicting realistic ranges of loosening behaviour for orthopaedic implants in reduced timeframes compared with conventional Monte Carlo simulations.
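A minimal version of such an accelerated framework: fit a polynomial response surface to a small deterministic trial set, then run the Monte Carlo sampling on the cheap surrogate rather than the full simulation. The "expensive simulation", the two factors and their distributions below are stand-ins for illustration, not the cement-damage model or the study's variables.

```python
import numpy as np

def expensive_simulation(width, size):
    """Stand-in for a long-running loosening simulation: returns a scalar
    'migration' response for a given femoral width and implant size."""
    return 0.2 + 0.05 * width - 0.08 * size + 0.01 * width * size

# 1. Small deterministic trial set (e.g. a factorial design over the factors).
widths = np.linspace(35.0, 55.0, 5)
sizes = np.linspace(1.0, 5.0, 5)
W, S = np.meshgrid(widths, sizes)
X = np.column_stack([W.ravel(), S.ravel()])
y = np.array([expensive_simulation(w, s) for w, s in X])

# 2. Fit a quadratic response surface by least squares.
def design(X):
    w, s = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(w), w, s, w * s, w**2, s**2])
coef, *_ = np.linalg.lstsq(design(X), y, rcond=None)

# 3. Monte Carlo on the surrogate: cheap enough for many samples.
rng = np.random.default_rng(0)
samples = np.column_stack([rng.normal(45.0, 4.0, 100_000),      # femoral width
                           rng.integers(1, 6, 100_000)])        # implant size choice
migration = design(samples.astype(float)) @ coef
print(f"mean = {migration.mean():.3f}, 95th percentile = {np.quantile(migration, 0.95):.3f}")
```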
Abstract:
In this paper, we present a Statistical Shape Model for Human Figure Segmentation in gait sequences. Point Distribution Models (PDM) generally use Principal Component Analysis (PCA) to describe the main directions of variation in the training set. However, PCA assumes a number of restrictions on the data that do not always hold. In this work, we explore the potential of Independent Component Analysis (ICA) as an alternative shape decomposition for PDM-based human figure segmentation. The resulting shape model enables accurate estimation of human figures despite segmentation errors in the input silhouettes and exhibits good convergence properties.
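The two decompositions differ only in the basis extracted from the aligned training shapes. A minimal sketch follows, assuming shapes are already aligned and flattened to (x1, y1, ..., xN, yN) vectors and using scikit-learn's FastICA as one possible ICA implementation; the synthetic training set is purely illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

def shape_models(shapes, n_modes=8):
    """Build PCA- and ICA-based linear shape models from aligned training
    shapes (rows = shapes, columns = flattened landmark coordinates).
    Either model reconstructs a shape as mean + modes.T @ params."""
    mean = shapes.mean(axis=0)
    centred = shapes - mean

    pca = PCA(n_components=n_modes).fit(centred)
    pca_modes = pca.components_                 # orthogonal directions of maximum variance

    ica = FastICA(n_components=n_modes, whiten="unit-variance", random_state=0)
    ica.fit(centred)
    ica_modes = ica.mixing_.T                   # statistically independent modes
    return mean, pca_modes, ica_modes

# Synthetic 'silhouette' training set: 30 shapes with 40 landmarks each.
rng = np.random.default_rng(0)
shapes = rng.normal(size=(30, 80))
mean, pca_modes, ica_modes = shape_models(shapes)
b = pca_modes @ (shapes[0] - mean)              # project a shape onto the PCA model
print(np.linalg.norm(mean + pca_modes.T @ b - shapes[0]))   # reconstruction residual
```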
Abstract:
Community structure depends on both deterministic and stochastic processes. However, patterns of community dissimilarity (e.g. differences in species composition) are difficult to interpret in terms of the relative roles of these processes. Local communities can be more dissimilar (divergence), less dissimilar (convergence), or as dissimilar as a hypothetical control based on either null or neutral models. However, several mechanisms may result in the same pattern, or act concurrently to generate a pattern, and much recent research has focused on unravelling these mechanisms and their relative contributions. Using a simulation approach, we addressed the effect of a complex but realistic spatial structure in the distribution of the niche axis, and we analysed patterns of species co-occurrence and beta diversity as measured by dissimilarity indices (e.g. the Jaccard index) against either expectations under a null model or neutral dynamics (i.e. with the niche effect switched off). The strength of niche processes, dispersal, and environmental noise interacted strongly, so that niche-driven dynamics may produce local communities that either diverge or converge depending on the combination of these factors. Thus, a fundamental result is that, in real systems, interacting processes of community assembly can be disentangled only by measuring traits such as niche breadth and dispersal. The ability to detect the signal of the niche also depended on the spatial resolution of the sampling strategy, which must account for the multiple-scale spatial patterns in the niche axis. Notably, some of the patterns we observed correspond to patterns of community dissimilarity previously observed in the field, and suggest mechanistic explanations for them or the data required to resolve them. Our framework offers a synthesis of the patterns of community dissimilarity produced by the interaction of deterministic and stochastic determinants of community assembly in a spatially explicit and complex context.
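The dissimilarity-versus-null comparison at the heart of this approach can be reproduced in a few lines: compute pairwise Jaccard dissimilarities for a site-by-species presence/absence matrix and compare the observed mean against a permutation null that shuffles occurrences while preserving species frequencies. The random community matrix and the particular null model below are illustrative choices, not the paper's simulation.

```python
import numpy as np

def mean_jaccard(comm):
    """Mean pairwise Jaccard dissimilarity between the rows (sites) of a
    presence/absence site-by-species matrix."""
    n = comm.shape[0]
    d = []
    for i in range(n):
        for j in range(i + 1, n):
            shared = np.logical_and(comm[i], comm[j]).sum()
            union = np.logical_or(comm[i], comm[j]).sum()
            d.append(1.0 - shared / union if union else 0.0)
    return np.mean(d)

def null_distribution(comm, n_rand=199, seed=0):
    """Null model: shuffle each species' occurrences across sites,
    preserving species frequencies but destroying site structure."""
    rng = np.random.default_rng(seed)
    out = np.empty(n_rand)
    for r in range(n_rand):
        shuffled = np.array([rng.permutation(col) for col in comm.T]).T
        out[r] = mean_jaccard(shuffled)
    return out

rng = np.random.default_rng(1)
comm = (rng.random((20, 50)) < 0.3).astype(int)      # 20 sites x 50 species (illustrative)
obs = mean_jaccard(comm)
null = null_distribution(comm)
# Divergence: observed >> null; convergence: observed << null.
print(f"observed = {obs:.3f}, null mean = {null.mean():.3f}, "
      f"p(null >= obs) = {(null >= obs).mean():.3f}")
```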
Abstract:
Developed countries, led by the EU and the US, have consistently called for ‘deeper integration’ over the course of the past three decades, i.e., the convergence of ‘behind-the-border’ or domestic policies and rules such as services, competition, public procurement, intellectual property (“IP”) and so forth. Following the collapse of the Doha Development Round, the EU and the US have pursued this push for deeper integration by entering into deep and comprehensive free trade agreements (“DCFTAs”) that are comprehensive insofar as they are not limited to tariffs but extend to regulatory trade barriers. More recently, the EU and the US launched negotiations on a Transatlantic Trade and Investment Partnership (“TTIP”) and a Trade in Services Agreement (“TISA”), which put tackling barriers resulting from divergences in domestic regulation in the area of services at the very top of the agenda. Should these agreements come to pass, they may well set the template for the rules of international trade and define the core features of domestic services market regulation. This article examines the regulatory disciplines in the area of services included in existing EU and US DCFTAs from a comparative perspective in order to delineate possible similarities and divergences and assess the extent to which these DCFTAs can shed some light on the possible outcome and limitations of future trade negotiations in services. It also discusses the potential impact of such negotiations on developing countries and, more generally, on the multilateral process.
Abstract:
The Arc-Length Method is a solution procedure that enables a generic non-linear problem to pass limit points. Some examples are provided of mode-jumping problem solutions obtained using a commercial finite element package, and further investigations are carried out on a simple structure for which the numerical solution can be compared with an analytical one. It is shown that the Arc-Length Method is not reliable when bifurcations are present in the primary equilibrium path; the presence of very sharp snap-backs or special boundary conditions may also cause convergence difficulty at limit points. An improvement to the predictor used in the incremental procedure is suggested, together with a reliable criterion for selecting either solution of the quadratic arc-length constraint. The gap that is sometimes observed between the experimental load level of mode-jumping and its arc-length prediction is explained through an example.
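The root-selection issue arises because the spherical arc-length constraint gives a quadratic equation in the load-increment correction; one common criterion is to take the root whose updated increment points "forward", i.e. has the larger dot product with the previous converged increment. The sketch below is a generic helper in Crisfield-style notation (with the load-vector scaling absorbed into psi), not the paper's specific predictor or selection criterion.

```python
import numpy as np

def arclength_load_correction(du_bar, du_t, Du_prev, Dlam_prev, dl, psi=1.0):
    """Solve the spherical arc-length constraint
        ||Du_prev + du_bar + dlam*du_t||^2 + psi^2*(Dlam_prev + dlam)^2 = dl^2
    for the load correction dlam, picking the root whose updated increment
    has the larger dot product with the previous converged increment."""
    a1 = du_t @ du_t + psi**2
    a2 = 2.0 * (du_t @ (Du_prev + du_bar) + psi**2 * Dlam_prev)
    a3 = (Du_prev + du_bar) @ (Du_prev + du_bar) + psi**2 * Dlam_prev**2 - dl**2
    disc = a2**2 - 4.0 * a1 * a3
    if disc < 0.0:                           # no real root: the step length is too large
        raise RuntimeError("arc-length constraint has no real root; cut dl")
    roots = [(-a2 + s * np.sqrt(disc)) / (2.0 * a1) for s in (+1.0, -1.0)]

    def forwardness(dlam):
        # Dot product of the updated increment with the previous one.
        Du_new = Du_prev + du_bar + dlam * du_t
        return Du_new @ Du_prev + psi**2 * (Dlam_prev + dlam) * Dlam_prev

    return max(roots, key=forwardness)
```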
Abstract:
This paper revisits work on the socio-political amplification of risk, which predicts that those living in developing countries are exposed to greater risk than residents of developed nations. This prediction contrasts with the neoliberal expectation that market-driven improvements in working conditions within industrialising/developing nations will lead to global convergence of hazard exposure levels. It also contradicts the assumption of risk-society theorists that there will be a ubiquitous increase in risk exposure across the globe, which will primarily affect technically more advanced countries. Reviewing qualitative evidence on the impact of structural adjustment reforms in industrialising countries, the export of waste and hazardous waste recycling to these countries, and new patterns of domestic industrialisation, the paper suggests that workers in industrialising countries continue to face far greater levels of hazard exposure than those in developed countries. This view is confirmed by examination of a data set of 105 major multi-fatality industrial disasters from 1971 to 2000. The paper concludes that there is empirical support for the predictions of socio-political amplification of risk theory, which finds clear expression in the data as a consistent pattern of significantly greater fatality rates per industrial incident in industrialising/developing countries.
Abstract:
The I/Q mismatches in quadrature radio receivers result in finite and usually insufficient image rejection, greatly degrading performance. In this paper we present a detailed analysis of the Blind Source Separation (BSS) based mismatch corrector in terms of its structure, convergence and performance. The results indicate that the mismatch can be effectively compensated during normal operation as well as in rapidly changing environments. Since the compensation is carried out before any modulation-specific processing, the proposed method works with all standard modulation formats and is amenable to low-power implementations.
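A simple second-order-statistics blind corrector (one common approach, not necessarily the BSS structure analysed in the paper) estimates the gain and phase imbalance from the I/Q cross-correlation and power ratio and then re-orthogonalises the branches. The gain and phase error values in the test are hypothetical.

```python
import numpy as np

def blind_iq_correct(i, q):
    """Blind gain/phase imbalance correction using second-order statistics:
    decorrelate Q from I (Gram-Schmidt), then equalise the branch powers.
    For a circular transmit signal this restores E[I*Q] = 0 and E[I^2] = E[Q^2]."""
    c = np.mean(i * q) / np.mean(i * i)     # phase-imbalance estimate (cross-correlation)
    q_dec = q - c * i                       # remove the I leakage from the Q branch
    g = np.sqrt(np.mean(i * i) / np.mean(q_dec * q_dec))   # gain-imbalance estimate
    return i, g * q_dec

# Illustrative test: a circular QPSK-like signal through an imbalanced front end.
rng = np.random.default_rng(0)
x = (rng.choice([-1, 1], 10_000) + 1j * rng.choice([-1, 1], 10_000)) / np.sqrt(2)
gain_err, phase_err = 1.2, np.deg2rad(5.0)                 # hypothetical mismatch
i_rx = x.real
q_rx = gain_err * (np.sin(phase_err) * x.real + np.cos(phase_err) * x.imag)
i_c, q_c = blind_iq_correct(i_rx, q_rx)
y = i_c + 1j * q_c
sig_to_err_db = 10 * np.log10(np.mean(np.abs(y)**2) / np.mean(np.abs(y - x)**2))
print(f"signal-to-residual-error after correction: {sig_to_err_db:.1f} dB")
```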