982 results for local minimum spanning tree (LMST)


Relevance:

100.00%

Publisher:

Abstract:

We propose a new approach for the inversion of anisotropic P-wave data based on Monte Carlo methods combined with a multigrid approach. Simulated annealing facilitates objective minimization of the functional characterizing the misfit between observed and predicted traveltimes, as controlled by the Thomsen anisotropy parameters (epsilon, delta). Cycling between finer and coarser grids enhances the computational efficiency of the inversion process, thus accelerating the convergence of the solution while acting as a regularization of the inverse problem. Multigrid perturbation samples the probability density function without requiring the user to adjust tuning parameters, which increases the probability that the preferred global, rather than a poor local, minimum is attained. Undertaking multigrid refinement and Monte Carlo search in parallel produces more robust convergence than does the initially more intuitive approach of completing them sequentially. We demonstrate the usefulness of the new multigrid Monte Carlo (MGMC) scheme by applying it to (a) synthetic, noise-contaminated data reflecting an isotropic subsurface of constant slowness, horizontally layered geologic media and discrete subsurface anomalies; and (b) a crosshole seismic data set acquired by previous authors at the Reskajeage test site in Cornwall, UK. Inverted distributions of slowness (s) and the Thomsen anisotropy parameters (epsilon, delta) compare favourably with those obtained previously using a popular matrix-based method. Reconstruction of the Thomsen epsilon parameter is particularly robust compared to that of slowness and the Thomsen delta parameter, even in the face of complex subsurface anomalies. The Thomsen epsilon and delta parameters have enhanced sensitivities to bulk-fabric and fracture-based anisotropies in the TI medium at Reskajeage. Because reconstruction of slowness (s) is intimately linked to that of epsilon and delta in the MGMC scheme, inverted images of phase velocity reflect the integrated effects of these two modes of anisotropy. The new MGMC technique thus promises to facilitate rapid inversion of crosshole P-wave data for seismic slownesses and the Thomsen anisotropy parameters, with minimal user input in the inversion process.
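
The following is a minimal, illustrative sketch (not the authors' code) of the core idea behind the MGMC scheme: simulated annealing over a vector of model parameters, with Metropolis acceptance and perturbations applied alternately at a coarse (block) and a fine (single-cell) scale. The misfit function, the user-supplied forward model `predict`, the block size, step sizes and cooling schedule are all assumptions made for illustration.

```python
import numpy as np

def misfit(model, observed, predict):
    """Sum-of-squares traveltime misfit (stand-in for a real forward model)."""
    return np.sum((predict(model) - observed) ** 2)

def mgmc_anneal(model, observed, predict, n_iter=5000, t0=1.0, seed=0):
    """Simulated annealing with multigrid-style perturbations.

    Perturbations alternate between coarse blocks (several cells moved together)
    and single fine cells, in the spirit of cycling between grids.
    """
    rng = np.random.default_rng(seed)
    cur = model.astype(float).copy()
    e_cur = misfit(cur, observed, predict)
    best, e_best = cur.copy(), e_cur
    for k in range(n_iter):
        temp = t0 * (1.0 - k / n_iter)                # linear cooling schedule (assumed)
        trial = cur.copy()
        if k % 2 == 0 and trial.size >= 4:            # "coarse" update: perturb a block of cells
            i = rng.integers(0, trial.size // 4)
            trial[4 * i:4 * i + 4] += rng.normal(0.0, 0.05, size=4)
        else:                                         # "fine" update: perturb a single cell
            trial[rng.integers(0, trial.size)] += rng.normal(0.0, 0.01)
        e_trial = misfit(trial, observed, predict)
        accept = e_trial < e_cur or rng.random() < np.exp(-(e_trial - e_cur) / max(temp, 1e-9))
        if accept:                                    # Metropolis acceptance rule
            cur, e_cur = trial, e_trial
            if e_cur < e_best:
                best, e_best = cur.copy(), e_cur
    return best, e_best
```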

Relevance:

100.00%

Publisher:

Abstract:

Aim We carried out a phylogeographic study across the range of the herbaceous plant species Monotropa hypopitys L. in North America to determine whether its current disjunct distribution is due to recolonization from separate eastern and western refugia after the Last Glacial Maximum (LGM). Location North America: Pacific Northwest and north-eastern USA/south-eastern Canada. Methods Palaeodistribution modelling was carried out to determine suitable climatic regions for M. hypopitys at the LGM. We analysed between 155 and 176 individuals from 39 locations spanning the species' entire range in North America. Sequence data were obtained for the chloroplast rps2 gene (n=168) and for the nuclear ITS region (n=158). Individuals were also genotyped for eight microsatellite loci (n=176). Interpolation of diversity values was used to visualize the range-wide distribution of genetic diversity for each of the three marker classes. Minimum spanning networks were constructed showing the relationships between the rps2 and ITS haplotypes, and the geographical distributions of these haplotypes were plotted. The numbers of genetic clusters based on the microsatellite data were estimated using Bayesian clustering approaches. Results The palaeodistribution modelling indicated suitable climate envelopes for M. hypopitys at the LGM in both the Pacific Northwest and south-eastern USA. High levels of genetic diversity and endemic haplotypes were found in Oregon, the Alexander Archipelago, Wisconsin, and in the south-eastern part of the species' distribution range. Main conclusions Our results suggest a complex recolonization history for M. hypopitys in North America, involving persistence in separate eastern and western refugia. A generally high degree of congruence between the different marker classes analysed indicated the presence of multiple refugia, with at least two refugia in each area. In the west, putative refugia were identified in Oregon and the Alexander Archipelago, whereas eastern refugia may have been located in the southern part of the species' current distribution, as well as in the 'Driftless Area'. These findings are in contrast to a previous study on the related species Orthilia secunda, which has a similar disjunct distribution to M. hypopitys, but which appears to have recolonized solely from western refugia. © 2011 Blackwell Publishing Ltd.
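
As a concrete illustration of the minimum-spanning-network step (not the software used in the study), the sketch below builds a minimum spanning tree over pairwise Hamming distances between aligned haplotype sequences using Prim's algorithm; the example sequences are invented.

```python
def hamming(a, b):
    """Number of differing sites between two aligned haplotype sequences."""
    return sum(x != y for x, y in zip(a, b))

def minimum_spanning_tree(haplotypes):
    """Prim's algorithm over pairwise Hamming distances.

    Returns a list of (haplotype_i, haplotype_j, distance) edges. Ties are
    broken arbitrarily, which is why published haplotype networks often retain
    alternative connections as a 'minimum spanning network'.
    """
    names = list(haplotypes)
    in_tree, edges = {names[0]}, []
    while len(in_tree) < len(names):
        best = min(
            ((u, v, hamming(haplotypes[u], haplotypes[v]))
             for u in in_tree for v in names if v not in in_tree),
            key=lambda e: e[2],
        )
        in_tree.add(best[1])
        edges.append(best)
    return edges

# Illustrative (hypothetical) haplotype sequences
haps = {"H1": "ACGTACGT", "H2": "ACGTACGA", "H3": "ACGAACGA", "H4": "TCGAACGA"}
print(minimum_spanning_tree(haps))
```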

Relevance:

100.00%

Publisher:

Abstract:

We consider the problem of self-healing in peer-to-peer networks that are under repeated attack by an omniscient adversary. We assume that, over a sequence of rounds, an adversary either inserts a node with arbitrary connections or deletes an arbitrary node from the network. The network responds to each such change by quick “repairs,” which consist of adding or deleting a small number of edges. These repairs essentially preserve closeness of nodes after adversarial deletions, without increasing node degrees by too much, in the following sense. At any point in the algorithm, nodes v and w whose distance would have been l in the graph formed by considering only the adversarial insertions (not the adversarial deletions), will be at distance at most l log n in the actual graph, where n is the total number of vertices seen so far. Similarly, at any point, a node v whose degree would have been d in the graph with adversarial insertions only, will have degree at most 3d in the actual graph. Our distributed data structure, which we call the Forgiving Graph, has low latency and bandwidth requirements. The Forgiving Graph improves on the Forgiving Tree distributed data structure from Hayes et al. (2008) in the following ways: 1) it ensures low stretch over all pairs of nodes, while the Forgiving Tree only ensures low diameter increase; 2) it handles both node insertions and deletions, while the Forgiving Tree only handles deletions; 3) it requires only a very simple and minimal initialization phase, while the Forgiving Tree initially requires construction of a spanning tree of the network.
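
The following sketch does not implement the Forgiving Graph itself; it merely checks, on small hypothetical graphs, the two guarantees stated above: distances in the actual graph are at most log n times the distances in the insertion-only graph, and degrees grow by at most a factor of three.

```python
from collections import deque
import math

def bfs_dist(adj, src):
    """Single-source shortest-path distances in an unweighted graph (dict of neighbour sets)."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def check_guarantees(adj_insert_only, adj_actual):
    """Verify the stretch and degree bounds stated in the abstract:
    dist_actual(v, w) <= dist_insert_only(v, w) * log2(n) and
    deg_actual(v) <= 3 * deg_insert_only(v), assuming both graphs share a vertex set."""
    n = len(adj_insert_only)
    log_n = max(math.log2(n), 1.0)
    for v in adj_actual:
        if len(adj_actual[v]) > 3 * len(adj_insert_only[v]):
            return False
        d_ideal = bfs_dist(adj_insert_only, v)
        d_real = bfs_dist(adj_actual, v)
        for w, l in d_ideal.items():
            if l > 0 and d_real.get(w, math.inf) > l * log_n:
                return False
    return True

# Tiny hypothetical graphs: an insertion-only path and a repaired actual graph.
g_ideal = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
g_actual = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(check_guarantees(g_ideal, g_actual))
```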

Relevance:

100.00%

Publisher:

Abstract:

We consider the problem of self-healing in peer-to-peer networks that are under repeated attack by an omniscient adversary. We assume that, over a sequence of rounds, an adversary either inserts a node with arbitrary connections or deletes an arbitrary node from the network. The network responds to each such change by quick "repairs," which consist of adding or deleting a small number of edges. These repairs essentially preserve closeness of nodes after adversarial deletions, without increasing node degrees by too much, in the following sense. At any point in the algorithm, nodes v and w whose distance would have been l in the graph formed by considering only the adversarial insertions (not the adversarial deletions), will be at distance at most l log n in the actual graph, where n is the total number of vertices seen so far. Similarly, at any point, a node v whose degree would have been d in the graph with adversarial insertions only, will have degree at most 3d in the actual graph. Our distributed data structure, which we call the Forgiving Graph, has low latency and bandwidth requirements. The Forgiving Graph improves on the Forgiving Tree distributed data structure from Hayes et al. (2008) in the following ways: 1) it ensures low stretch over all pairs of nodes, while the Forgiving Tree only ensures low diameter increase; 2) it handles both node insertions and deletions, while the Forgiving Tree only handles deletions; 3) it requires only a very simple and minimal initialization phase, while the Forgiving Tree initially requires construction of a spanning tree of the network. © Springer-Verlag 2012.

Relevance:

100.00%

Publisher:

Abstract:

A number of neural networks can be formulated as linear-in-the-parameters models. Training such networks can then be transformed into a model selection problem, where a compact model is selected from all the candidates using subset selection algorithms. Forward selection methods are popular fast subset selection approaches; however, they may only produce suboptimal models and can become trapped in a local minimum. More recently, a two-stage fast recursive algorithm (TSFRA) combining forward selection and backward model refinement has been proposed to improve the compactness and generalization performance of the model. This paper proposes unified two-stage orthogonal least squares methods instead of the fast recursive-based methods. In contrast to the TSFRA, this paper derives a new simplified relationship between the forward and the backward stages to avoid repetitive computations, using the inherent orthogonal properties of the least squares methods. Furthermore, a new term exchanging scheme for backward model refinement is introduced to reduce computational demand. Finally, given the error reduction ratio criterion, effective and efficient forward and backward subset selection procedures are proposed. Extensive examples are presented to demonstrate the improved model compactness achieved by the proposed technique in comparison with some popular methods.
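
For orientation, here is a minimal sketch of the classical forward stage only (orthogonal least squares selection by the error reduction ratio); the two-stage backward refinement and the term-exchange scheme proposed in the paper are not shown, and all names and settings are illustrative.

```python
import numpy as np

def ols_forward_selection(P, y, n_terms):
    """Forward subset selection by the error reduction ratio (ERR).

    P: (n_samples, n_candidates) candidate regressor matrix
    y: (n_samples,) target vector
    Returns the indices of the selected columns, in selection order, and their ERRs.
    """
    selected, err_values = [], []
    W = []                                    # orthogonalized versions of selected regressors
    yTy = float(y @ y)
    remaining = list(range(P.shape[1]))
    for _ in range(n_terms):
        best_idx, best_err, best_w = None, -1.0, None
        for j in remaining:
            w = P[:, j].astype(float).copy()
            for wk in W:                      # Gram-Schmidt against already selected terms
                w -= (wk @ P[:, j]) / (wk @ wk) * wk
            denom = float(w @ w)
            if denom < 1e-12:                 # candidate is (nearly) linearly dependent
                continue
            err = (w @ y) ** 2 / (denom * yTy)
            if err > best_err:
                best_idx, best_err, best_w = j, err, w
        if best_idx is None:
            break
        selected.append(best_idx)
        err_values.append(best_err)
        W.append(best_w)
        remaining.remove(best_idx)
    return selected, err_values
```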

Relevance:

100.00%

Publisher:

Abstract:

We consider some problems of the calculus of variations on time scales. We begin by focusing on two inverse extremal problems on arbitrary time scales. First, using the Euler-Lagrange equation and the strengthened Legendre condition, we derive a general form for a variational functional that attains a local minimum at a given point of the vector space. Furthermore, we prove a necessary condition for a dynamic integro-differential equation to be an Euler-Lagrange equation. New and interesting results for the discrete and quantum calculus are obtained as particular cases. Afterwards, we prove Euler-Lagrange type equations and transversality conditions for generalized infinite horizon problems. Next we investigate the composition of a certain scalar function with delta and nabla integrals of a vector-valued field. Euler-Lagrange equations in integral form, transversality conditions, and necessary optimality conditions for isoperimetric problems, on an arbitrary time scale, are proved. Finally, two applications of time scales in economics, with interesting results, are presented. In the first, we consider a firm that wants to program its production and investment policies to reach a given production rate and to maximize its future market competitiveness. The model describing the firm's activities is studied in two different ways: using classical discretizations, and applying discrete versions of our results on time scales; we then compare the cost functional values obtained from the two approaches. The second problem is more complex and relates the rate of inflation, p, and the rate of unemployment, u, which inflict a social loss. Using known relations between p, u, and the expected rate of inflation π, we rewrite the social loss function as a function of π. We present this model in the time scale framework and find an optimal path π that minimizes the total social loss over a given time interval.
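
For orientation only, here is the basic single-variable delta-variational problem and its Euler-Lagrange equation in a commonly stated form (the thesis treats more general settings, including nabla integrals, infinite horizon and isoperimetric problems):

```latex
% Basic problem of the delta calculus of variations on a time scale \mathbb{T}:
\[
  \mathcal{L}[y] = \int_a^b L\bigl(t,\, y^{\sigma}(t),\, y^{\Delta}(t)\bigr)\,\Delta t
  \;\longrightarrow\; \min .
\]
% A weak local minimizer satisfies the delta Euler-Lagrange equation
\[
  \frac{\Delta}{\Delta t}\, L_{y^{\Delta}}\bigl(t,\, y^{\sigma}(t),\, y^{\Delta}(t)\bigr)
  = L_{y^{\sigma}}\bigl(t,\, y^{\sigma}(t),\, y^{\Delta}(t)\bigr)
  \quad \text{for all admissible } t,
\]
% which reduces to the classical Euler-Lagrange equation when \mathbb{T} = \mathbb{R}
% and to its discrete analogue when \mathbb{T} = \mathbb{Z}.
```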

Relevance:

100.00%

Publisher:

Abstract:

Several alternative approaches have been discussed: Levenberg-Marquardt (unsatisfactory convergence speed and a tendency to become trapped in local minima), the bacterial algorithm (problems with large dimensionality, i.e. speed), and clustering (no safe criterion for the number of clusters, plus the dimensionality problem).

Relevance:

100.00%

Publisher:

Abstract:

This paper proposes a method for analysing the operational complexity in supply chains by using an entropic measure based on information theory. The proposed approach estimates the operational complexity at each stage of the supply chain and analyses the changes between stages. In this paper a stage is identified by the exchange of data and/or material. Through this analysis the method identifies the stages where operational complexity is generated and how it is propagated (exported, imported, generated or absorbed). Central to the method is the identification of a reference point within the supply chain: the point where the operational complexity is at a local minimum along the data transfer stages. Such a point can be thought of as a ‘sink’ for turbulence generated in the supply chain. Where it exists, it has the merit of stabilising the supply chain by attenuating uncertainty. However, the location of the reference point is also a matter of choice: if the preferred location is other than the current one, this is a trigger for management action, and the analysis can help decide appropriate remedial action. More generally, the approach can assist logistics management by highlighting problem areas. An industrial application is presented to demonstrate the applicability of the method.
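
A toy numerical illustration of the entropic idea (not the paper's exact formulation): compute a Shannon entropy for each data-transfer stage from the probabilities of its observed states, then locate the stage where that entropy is a local minimum, which plays the role of the reference point. The stage distributions below are invented.

```python
import math

def stage_entropy(state_probs):
    """Shannon entropy (bits) of the observed states at one supply-chain stage."""
    return -sum(p * math.log2(p) for p in state_probs if p > 0)

def reference_point(stage_probabilities):
    """Index of a stage whose entropy is a local minimum along the chain.

    stage_probabilities: list of probability distributions, one per
    data-transfer stage, ordered along the supply chain.
    """
    h = [stage_entropy(p) for p in stage_probabilities]
    for i in range(1, len(h) - 1):
        if h[i] <= h[i - 1] and h[i] <= h[i + 1]:
            return i, h
    # fall back to the global minimum if no interior local minimum exists
    return min(range(len(h)), key=h.__getitem__), h

# Hypothetical state distributions for four stages (e.g. on-time / late / very late)
stages = [
    [0.5, 0.3, 0.2],
    [0.7, 0.2, 0.1],
    [0.9, 0.08, 0.02],
    [0.6, 0.3, 0.1],
]
print(reference_point(stages))
```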

Relevance:

100.00%

Publisher:

Abstract:

Optimization of wave functions in quantum Monte Carlo is a difficult task because the statistical uncertainty inherent to the technique makes the absolute determination of the global minimum difficult. To optimize these wave functions we generate a large number of possible minima using many independently generated Monte Carlo ensembles and perform a conjugate gradient optimization. Then we construct histograms of the resulting nominally optimal parameter sets and "filter" them to identify which parameter sets "go together" to generate a local minimum. We follow with correlated-sampling verification runs to find the global minimum. We illustrate this technique for variance and variational energy optimization for a variety of wave functions for small systems. For such optimized wave functions we calculate the variational energy and variance as well as various non-differential properties. The optimizations are either on par with or superior to determinations in the literature. Furthermore, we show that this technique is sufficiently robust that for molecules one may determine the optimal geometry at the same time as one optimizes the variational energy.
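
A schematic sketch of the "histogram and filter" step under stated assumptions: many independent optimizations of a noisy stand-in objective are run (Nelder-Mead is used here instead of the conjugate-gradient optimizer described above, to keep the sketch gradient-free, and scipy is assumed available), each optimized parameter is histogrammed, and only runs falling in the most populated bins are kept.

```python
import numpy as np
from scipy.optimize import minimize

def noisy_objective(p, rng):
    """Stand-in for a Monte Carlo variance estimate: a smooth bowl plus noise."""
    return (p[0] - 1.0) ** 2 + (p[1] + 0.5) ** 2 + 0.05 * rng.normal()

def histogram_filter(n_runs=200, n_bins=20, seed=1):
    rng = np.random.default_rng(seed)
    optima = []
    for _ in range(n_runs):                 # one optimization per independent "ensemble"
        x0 = rng.uniform(-3, 3, size=2)
        res = minimize(noisy_objective, x0, args=(rng,), method="Nelder-Mead")
        optima.append(res.x)
    optima = np.array(optima)
    keep = np.ones(len(optima), dtype=bool)
    for j in range(optima.shape[1]):        # filter each parameter on its modal histogram bin
        counts, edges = np.histogram(optima[:, j], bins=n_bins)
        m = np.argmax(counts)
        keep &= (optima[:, j] >= edges[m]) & (optima[:, j] <= edges[m + 1])
    # average the surviving parameter sets as the candidate minimum
    return optima[keep].mean(axis=0) if keep.any() else optima.mean(axis=0)

print(histogram_filter())
```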

Relevance:

100.00%

Publisher:

Abstract:

The enigmatic heavy fermion URu2Si2, which is the subject of this thesis, has attracted intensive theoretical and experimental research since 1984, when it was first reported by Schlabitz et al. at a conference [1]. Previous bulk property measurements clearly showed that one second-order phase transition occurs at the Hidden Order temperature THO ≈ 17.5 K and another second-order phase transition, the superconducting transition, occurs at Tc ≈ 1 K. Though twenty-eight years have passed, the mechanisms behind these two phase transitions remain unclear. Perfect crystals do not exist; different kinds of crystal defects can have considerable effects on crystalline properties. Some of these defects can be eliminated, and hence the crystalline quality improved, by annealing. Previous publications showed that some bulk properties of URu2Si2 exhibit significant differences between as-grown and annealed samples. The present study shows that annealing URu2Si2 has considerable effects on the resistivity and the DC magnetization. The effects of annealing on the resistivity are characterized by examining how the Residual Resistivity Ratio (RRR), the fitting parameters to an expression for the temperature dependence of the resistivity, the temperatures of the local maximum and local minimum of the resistivity at the Hidden Order phase transition, and the Hidden Order transition width ∆THO change after annealing. The plots of one key fitting parameter, the onset temperature of the Hidden Order transition, and ∆THO vs RRR are compared with those of Matsuda et al. [2]. Different media used to mount the samples have some impact on how effectively the samples are cooled, because the media have different thermal conductivities. The DC magnetization around the superconducting transition is presented for one unannealed sample under fields of 25 Oe and 50 Oe and one annealed sample under fields of 0 Oe and 25 Oe. The DC field-dependent magnetization of the annealed Sample1-1 shows the field dependence typical of a Type-II superconductor. The lower critical field Hc1 is relatively high, which may be due to flux pinning by crystal defects.

Relevance:

100.00%

Publisher:

Abstract:

In this work, I mainly study an abelian Higgs model in 2+1 dimensions, in which a scalar field interacts with a gauge field. Topological defects, called vortices, are created when the potential has a minimum that spontaneously breaks the U(1) symmetry. In 3+1 dimensions, these vortices become one-dimensional defects; they appear, for example, in condensed matter as magnetic flux lines in type-II superconductors. I analyse how the energy of the static solutions depends on the parameters of the model, and in particular on the winding number of the vortex. For the usual choice of potential (a quartic polynomial, the so-called "BPS" case), the relation between the masses of the two fields leads to two types of behaviour: type I if the mass of the gauge field is larger than that of the scalar field, and type II in the opposite case. Depending on the case, the dependence of the energy on the winding number, n, indicates whether the vortices tend to attract or repel one another, respectively. When the trapped flux is large, the vortices exhibit a thin-wall profile, which allows certain simplifications in the analysis. The potential, a sixth-order polynomial (the "non-BPS" case), is chosen such that the centre of the vortex lies in the true vacuum (the absolute minimum of the potential) while at infinity the scalar field sits in the false vacuum (a local minimum of the potential). The decay rate has already been estimated with a semiclassical approximation to show the impact of topological defects on the stability of the false vacuum. The project first consists in establishing numerically the existence of classically stable vortices. My contribution was then an analysis of the model parameters revealing the energetic behaviour of these vortices as a function of the winding number. This behaviour turns out to differ from the "BPS" case: the mass ratio fails to describe the behaviour observed numerically.

Relevance:

100.00%

Publisher:

Abstract:

In this master's thesis we used numerical methods such as molecular dynamics (the LAMMPS code) and kinetic ART. The latter is an off-lattice kinetic Monte Carlo algorithm with on-the-fly construction of the event catalogue that incorporates all elastic effects exactly. In the first part, we compared and evaluated various algorithms for finding the global minimum on the potential energy surface of complex materials. The algorithms chosen are essentially those that use the Bell-Evans-Polanyi principle to explore the potential energy surface. This study allowed us to understand, on the one hand, the steps needed for a complex material to escape from one local minimum towards another and, on the other hand, how to steer the search so as to find the global minimum quickly. In addition, this work led us to understand the strength of these methods for the kinetics of the structural evolution of such complex materials. In the second part, we set up a simulation tool (the ReaxFF potential coupled with kinetic ART) capable of studying the steps and processes of silicon oxidation over times long enough to be comparable with experiment. To validate the system, we performed tests on the first stages of silicon oxidation; the results obtained are in agreement with the literature. This tool will be used to understand the true oxidation processes and the possible transitions of oxygen atoms at the silicon surface, together with the associated energy barriers, questions that remain challenges for the microelectronics industry.

Relevance:

100.00%

Publisher:

Abstract:

The synthesis of doubly thermoresponsive PPO-PMPC-PNIPAM triblock copolymer gelators by atom transfer radical polymerization using a PPO-based macroinitiator is described. Provided that the PPO block is sufficiently long, dynamic light scattering and differential scanning calorimetry studies confirm the presence of two separate thermal transitions corresponding to micellization and gelation, as expected. However, these ABC-type triblock copolymers proved to be rather inefficient gelators: free-standing gels at 37 degrees C required a triblock copolymer concentration of around 20 wt%. This gelator performance should be compared with copolymer concentrations of 6-7 wt% required for the PNIPAM-PMPC-PNIPAM triblock copolymers reported previously. Clearly, the separation of micellar self-assembly from gel network formation does not lead to enhanced gelator efficiencies, at least for this particular system. Nevertheless, there are some features of interest in the present study. In particular, close inspection of the viscosity vs temperature plot obtained for a PPO43-PMPC160-PNIPAM81 triblock copolymer revealed a local minimum in viscosity. This is consistent with intramicelle collapse of the outer PNIPAM blocks prior to the development of the intermicelle hydrophobic interactions that are a prerequisite for macroscopic gelation.

Relevance:

100.00%

Publisher:

Abstract:

Differential Evolution (DE) is a tool for efficient optimisation, and it belongs to the class of evolutionary algorithms, which includes Evolution Strategies and Genetic Algorithms. DE algorithms work well when the population covers the entire search space, and they have been shown to be effective on a large range of classical optimisation problems. However, an undesirable behaviour arises when all members of the population lie within the basin of attraction of a local optimum (a local minimum or local maximum), because in this situation the population cannot escape from it. This paper proposes a modification of the standard mechanisms of the DE algorithm in order to change the exploration vs. exploitation balance and improve its behaviour.
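
For reference, here is a minimal sketch of the standard DE/rand/1/bin scheme that the paper modifies (the proposed modification itself is not shown); the population size, F, CR and the test objective are illustrative choices.

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=30, F=0.8, CR=0.9, n_gen=200, seed=0):
    """Standard DE/rand/1/bin: mutation a + F*(b - c), binomial crossover, greedy selection.

    f: objective to minimize; bounds: sequence of (lower, upper) limits per dimension.
    """
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)
    dim = len(bounds)
    pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(pop_size, dim))
    fitness = np.array([f(x) for x in pop])
    for _ in range(n_gen):
        for i in range(pop_size):
            idx = rng.choice([j for j in range(pop_size) if j != i], size=3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), bounds[:, 0], bounds[:, 1])
            cross = rng.random(dim) < CR            # binomial crossover mask
            cross[rng.integers(dim)] = True         # guarantee at least one mutated gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial <= fitness[i]:               # greedy one-to-one selection
                pop[i], fitness[i] = trial, f_trial
    best = np.argmin(fitness)
    return pop[best], fitness[best]

# Once every member sits in one basin, the difference vectors b - c shrink and the
# population can stall there, which is the behaviour discussed in the abstract above.
print(differential_evolution(lambda x: np.sum(x ** 2), [(-5, 5)] * 3))
```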

Relevance:

100.00%

Publisher:

Abstract:

Two algorithms are presented for finding the point on a non-rational or rational Bezier curve whose normal vector passes through a given external point. The algorithms are based on the Bezier curve generation algorithms: de Casteljau's algorithm for non-rational Bezier curves and Farin's recursion for rational Bezier curves, respectively. Orthogonal projections from the external point are used to guide the directional search in the proposed iterative algorithms. Using Lyapunov's method, it is shown that each algorithm converges to a local minimum in both the non-rational and rational cases. It is also shown that, on convergence, the distance from the point on the curve to the external point reaches a local minimum for both approaches. Illustrative examples are included to demonstrate the effectiveness of the proposed approaches.
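
The sketch below is not the Lyapunov-guided algorithms of the paper; it only illustrates the underlying task with assumed helpers: evaluate the curve via de Casteljau's algorithm and find a parameter where the squared distance to the external point is locally minimal, here by coarse sampling followed by golden-section refinement.

```python
import numpy as np

def de_casteljau(control_points, t):
    """Evaluate a (non-rational) Bezier curve at parameter t by de Casteljau's algorithm."""
    pts = np.array(control_points, dtype=float)
    while len(pts) > 1:
        pts = (1 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

def closest_point(control_points, p, n_samples=200, n_refine=50):
    """Foot point of p on the curve: coarse sampling of the squared distance,
    then golden-section refinement around the best sample (a local minimum)."""
    p = np.asarray(p, dtype=float)

    def dist2(t):
        return float(np.sum((de_casteljau(control_points, t) - p) ** 2))

    ts = np.linspace(0.0, 1.0, n_samples)
    k = min(range(n_samples), key=lambda i: dist2(ts[i]))
    lo, hi = ts[max(k - 1, 0)], ts[min(k + 1, n_samples - 1)]
    phi = (np.sqrt(5) - 1) / 2                      # golden-section ratio
    for _ in range(n_refine):
        a, b = hi - phi * (hi - lo), lo + phi * (hi - lo)
        if dist2(a) < dist2(b):
            hi = b
        else:
            lo = a
    t = 0.5 * (lo + hi)
    return t, de_casteljau(control_points, t)

# Hypothetical cubic Bezier control polygon and external point
ctrl = [(0, 0), (1, 2), (3, 3), (4, 0)]
print(closest_point(ctrl, (2.0, 2.5)))
```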