912 results for Simulated annealing algorithms
Abstract:
Study carried out during a stay at the Karolinska University Hospital, Sweden, between March and June 2006. In stereotactic body radiation therapy (SBRT) of lung tumours, dose calculation with the available treatment planning systems faces two main problems: the limited accuracy of the calculation algorithms in the presence of tissues with very different densities, and the patient's breathing motion during treatment. The aim of this work was to simulate, with the Monte Carlo (MC) code PENELOPE, the dose distribution in lung tumours for representative SBRT cases, taking respiratory motion into account, and to compare the results with those of several treatment planning systems. Representative SBRT cases treated at the Karolinska University Hospital were studied. The radiation beams were simulated with the PENELOPE code and used to obtain MC dose profiles. The results for the static case (no respiratory motion) show that, compared with MC, the dose (Gy/MU) calculated by the planning systems in the tumour is accurate to within 2-3%. In the interface region between tumour and lung tissue, planning systems based on the pencil-beam (PB) algorithm overestimate the dose by about 10%, whereas the collapsed-cone (CC) algorithm underestimates it by 3-4%. The MC simulation of respiratory motion indicates that the planning-system results are sufficiently accurate in the tumour, although at the interface the dose is underestimated more than in the static case. These results are consistent with the 15 years of clinical experience acquired at the Karolinska University Hospital. The results have been published in the journal Acta Oncologica.
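As an aside on technique: a common way to fold periodic breathing motion into a static dose profile is to average the profile over the displacement distribution (equivalently, to convolve it with the motion PDF). The sketch below is a minimal 1-D illustration with made-up geometry and motion parameters, not the PENELOPE simulation itself:

```python
import numpy as np

# Hypothetical 1-D static dose profile (Gy/MU) on a 0.1 cm grid.
x = np.arange(-5.0, 5.0, 0.1)                 # position along one axis (cm)
static_dose = np.exp(-0.5 * (x / 1.5) ** 2)   # stand-in for a planner/MC profile

# Sample tumour displacement over one breathing cycle, e.g. sinusoidal
# motion with 1 cm peak-to-peak amplitude (assumed, not from the study).
t = np.linspace(0.0, 2.0 * np.pi, 1000, endpoint=False)
displacement = 0.5 * np.sin(t)                # cm

# Motion-averaged dose: average the static profile shifted by each sampled
# displacement (equivalent to convolving with the motion PDF).
moving_dose = np.mean(
    [np.interp(x - d, x, static_dose) for d in displacement], axis=0
)

print("max static dose:", static_dose.max())
print("max motion-averaged dose:", moving_dose.max())
```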
Abstract:
Given a model that can be simulated, conditional moments at a trial parameter value can be calculated with high accuracy by applying kernel smoothing methods to a long simulation. With such conditional moments in hand, standard method of moments techniques can be used to estimate the parameter. Since conditional moments are calculated using kernel smoothing rather than simple averaging, it is not necessary that the model be simulable subject to the conditioning information that is used to define the moment conditions. For this reason, the proposed estimator is applicable to general dynamic latent variable models. Monte Carlo results show that the estimator performs well in comparison to other estimators that have been proposed for estimation of general DLV models.
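To convey the idea (this is not the authors' code), the sketch below estimates a conditional moment from a long simulation with a Nadaraya-Watson kernel smoother and plugs it into a method-of-moments objective for a toy AR(1) model; the model, the instrumenting choice, and the common-random-numbers seeding are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def simulate(theta, n=20_000, seed=1):
    """Long simulation of a toy AR(1) model y_t = theta * y_{t-1} + eps_t."""
    rng = np.random.default_rng(seed)  # fixed seed: common random numbers
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = theta * y[t - 1] + rng.normal()
    return y

def kernel_conditional_mean(x_sim, y_sim, x_eval, h=0.3):
    """Nadaraya-Watson estimate of E[y | x] at the points x_eval."""
    w = np.exp(-0.5 * ((x_eval[:, None] - x_sim[None, :]) / h) ** 2)
    return (w * y_sim).sum(axis=1) / w.sum(axis=1)

# "Observed" data generated at a known parameter, for illustration only.
data = simulate(theta=0.6, n=500, seed=42)
x_obs, y_obs = data[:-1], data[1:]

def mm_objective(theta):
    sim = simulate(theta)
    m = kernel_conditional_mean(sim[:-1], sim[1:], x_obs)  # E[y_t | y_{t-1}]
    g = np.mean(x_obs * (y_obs - m))   # moment condition, instrumented by x
    return g ** 2

est = minimize_scalar(mm_objective, bounds=(0.0, 0.95), method="bounded")
print("estimated theta:", est.x)
```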
Abstract:
The goal of this project is the prediction of packet loss. To this end, the channel must be modelled, so that we can determine whether a transmission arrives successfully or not. First, rate adaptation algorithms were studied; these algorithms improve the performance of the communication, and the simulation program is therefore based on some of them. In parallel, measurements of the terrestrial channel were captured in order to build the model. Finally, a much more complete program was used to simulate the behaviour of a transmission with the physical channel model, taking certain effects, such as collisions, into account. A more realistic result was thus obtained, with which the feasibility of a link between the terrestrial channel and the satellite channel was analysed theoretically, with the aim of creating a hybrid network.
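For illustration, bursty packet loss over a modelled channel is often emulated with a two-state Gilbert-Elliott chain; the sketch below uses assumed transition and loss probabilities, not the measured terrestrial-channel model from this project:

```python
import random

# Gilbert-Elliott channel: a two-state Markov chain with a "good" and a
# "bad" state, each with its own packet-loss probability (values assumed).
P_GOOD_TO_BAD = 0.05
P_BAD_TO_GOOD = 0.30
LOSS_GOOD = 0.01
LOSS_BAD = 0.40

def simulate_channel(n_packets, seed=1):
    rng = random.Random(seed)
    good = True
    losses = 0
    for _ in range(n_packets):
        if rng.random() < (LOSS_GOOD if good else LOSS_BAD):
            losses += 1
        # State transition before the next packet.
        if good and rng.random() < P_GOOD_TO_BAD:
            good = False
        elif not good and rng.random() < P_BAD_TO_GOOD:
            good = True
    return losses / n_packets

print("simulated packet-loss rate:", simulate_channel(100_000))
```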
Abstract:
We study the properties of the well-known Replicator Dynamics when applied to a finitely repeated version of the Prisoners' Dilemma game. We characterize the behavior of the dynamics under strongly simplifying assumptions (i.e. only 3 strategies are available) and show that the basin of attraction of defection shrinks as the number of repetitions increases. After discussing the difficulties involved in trying to relax these 'strongly simplifying assumptions', we approach the same model by means of simulations based on genetic algorithms. The resulting simulations describe a behavior of the system very close to the one predicted by the replicator dynamics, without imposing any of the assumptions of the analytical model. Our main conclusion is that analytical and computational models are good complements for research in the social sciences. Indeed, while on the one hand computational models are extremely useful for extending the scope of the analysis to complex scenarios…
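For intuition, the replicator dynamics on three stand-in strategies (always-defect, always-cooperate, tit-for-tat) in a T-round Prisoners' Dilemma can be integrated directly; the stage payoffs and parameters below are illustrative, not necessarily the paper's exact specification:

```python
import numpy as np

R, S, Tt, P = 3, 0, 5, 1   # stage payoffs: reward, sucker, temptation, punishment
T = 10                     # number of repetitions (illustrative)

# Row player's total payoff over T rounds; strategy order: AllD, AllC, TFT.
A = np.array([
    [P * T,           Tt * T, Tt + P * (T - 1)],  # AllD vs (AllD, AllC, TFT)
    [S * T,           R * T,  R * T],             # AllC
    [S + P * (T - 1), R * T,  R * T],             # TFT
], dtype=float)

def replicator(x, steps=5000, dt=0.01):
    for _ in range(steps):
        f = A @ x                      # fitness of each strategy
        x = x + dt * x * (f - x @ f)   # replicator update (Euler step)
        x = np.clip(x, 0, None); x /= x.sum()
    return x

print(replicator(np.array([0.3, 0.4, 0.3])))  # final shares (AllD, AllC, TFT)
```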
Abstract:
The algorithmic approach to data modelling has developed rapidly in recent years; in particular, methods based on data mining and machine learning have been used in a growing number of applications. These methods follow a data-driven methodology, aiming to provide the best possible generalization and predictive ability instead of concentrating on the properties of the data model. One of the most successful groups of such methods is known as Support Vector algorithms. Following the fruitful developments in applying Support Vector algorithms to spatial data, this paper introduces a new extension of the traditional support vector regression (SVR) algorithm that allows for the simultaneous modelling of environmental data at several spatial scales. The joint influence of environmental processes presenting different patterns at different scales is learned automatically from the data, providing the optimal mixture of short- and large-scale models. The method is adaptive to the spatial scale of the data and can therefore provide an efficient means of modelling local anomalies that typically arise in the early phase of an environmental emergency. However, the proposed approach still requires some prior knowledge of the possible existence of such short-scale patterns, which is a possible limitation for its implementation in early warning systems. The purpose of this paper is to present the multi-scale SVR model and to illustrate its use with an application to the mapping of 137Cs activity, given the measurements taken in the region of Briansk following the Chernobyl accident.
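One simple way to let an SVR mix two spatial scales is to train it on a precomputed kernel that sums a short-range and a long-range RBF; the sketch below uses scikit-learn with illustrative bandwidths and synthetic data, and is only loosely inspired by the multi-scale formulation described here:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)

# Synthetic 2-D spatial data: a smooth trend plus a local anomaly.
X = rng.uniform(0, 10, size=(300, 2))
y = np.sin(X[:, 0] / 3.0) + 2.0 * np.exp(-((X - 5.0) ** 2).sum(axis=1))

# Two RBF kernels at different spatial scales, mixed with weight w (assumed).
gamma_large, gamma_short, w = 0.05, 2.0, 0.5
def mixed_kernel(A, B):
    return ((1 - w) * rbf_kernel(A, B, gamma=gamma_large)
            + w * rbf_kernel(A, B, gamma=gamma_short))

model = SVR(kernel="precomputed", C=10.0, epsilon=0.01)
model.fit(mixed_kernel(X, X), y)

X_test = rng.uniform(0, 10, size=(5, 2))
print(model.predict(mixed_kernel(X_test, X)))  # kernel against training set
```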
Abstract:
In traditional criminal investigation, uncertainties are often dealt with using a combination of common sense, practical considerations and experience, but rarely with tailored statistical models. For example, in some countries, in order to search for a given profile in the national DNA database, a simple trace must yield allelic information for six or more of the ten SGM Plus loci. If the profile does not contain this much information, it cannot be searched against the national DNA database (NDNAD). This requirement (of a result at six or more loci) is not based on a statistical approach, but rather on the feeling that six or more would be sufficient. A statistical approach, however, would be more rigorous and objective, and would sensibly take into consideration factors such as the probability of adventitious matches relative to the actual database size and/or the investigator's requirements. This research was therefore undertaken to establish scientific foundations for the use of partial SGM Plus profiles (or similar) in investigation.
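The kind of calculation advocated here can be illustrated with a back-of-the-envelope formula: if a partial profile has random-match probability p and the database holds N profiles, the chance of at least one adventitious match is 1 − (1 − p)^N, assuming independence. The numbers in the sketch below are purely hypothetical:

```python
# Probability of at least one adventitious (chance) match when searching a
# partial profile against a database, assuming independent profiles.
def adventitious_match_prob(p_match: float, db_size: int) -> float:
    return 1.0 - (1.0 - p_match) ** db_size

# Hypothetical random-match probabilities for 4, 6, and 8 typed loci.
for n_loci, p in [(4, 1e-4), (6, 1e-7), (8, 1e-10)]:
    risk = adventitious_match_prob(p, db_size=5_000_000)
    print(f"{n_loci} loci: P(at least one chance match) = {risk:.3g}")
```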
Abstract:
The goal of the present work was to assess the feasibility of using a pseudo-inverse and null-space optimization approach to model the biomechanics of the shoulder. The method was applied to a simplified musculoskeletal shoulder model. The mechanical system consisted of the arm, and the external forces were the arm weight, 6 scapulo-humeral muscles and the reaction at the glenohumeral joint, which was modelled as a spherical joint. Muscle wrapping around the humeral head, assumed spherical, was taken into account. The dynamical equations were solved with a Lagrangian approach. The mathematical redundancy of the mechanical system was resolved in two steps: a pseudo-inverse optimization to minimize the squared muscle stress, and a null-space optimization to restrict the muscle forces to physiological limits. Several movements were simulated. The mathematical and numerical aspects of the constrained redundancy problem were solved efficiently by the proposed method. The predicted muscle moment arms were consistent with cadaveric measurements, and the joint reaction force was consistent with in vivo measurements. This preliminary work demonstrates that the developed algorithm has great potential for more complex musculoskeletal modelling of the shoulder joint. In particular, it could be extended to a non-spherical joint model, allowing for the natural translation of the humeral head in the glenoid fossa.
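The two-step redundancy resolution can be sketched in a few lines of numpy: a pseudo-inverse gives the minimum-norm forces satisfying the torque balance M f = τ, and a null-space correction then pulls forces toward physiological bounds without disturbing the balance. The matrices below are random stand-ins, not the shoulder model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in moment-arm matrix (3 torque components, 6 muscles) and target
# joint torque; the real model would derive these from the geometry.
M = rng.normal(size=(3, 6))
tau = rng.normal(size=3)
f_min, f_max = 0.0, 50.0      # hypothetical physiological force limits (N)

# Step 1: pseudo-inverse -> minimum-norm forces satisfying M f = tau.
f0 = np.linalg.pinv(M) @ tau

# Step 2: null-space correction. Any f = f0 + Z a with Z spanning null(M)
# still satisfies M f = tau exactly; pick a to pull f toward the bounds.
Z = np.linalg.svd(M)[2][3:].T                  # (6, 3) null-space basis
a = np.linalg.lstsq(Z, np.clip(f0, f_min, f_max) - f0, rcond=None)[0]
f = f0 + Z @ a   # one correction step; a full solver would iterate or use a QP

print("torque residual:", np.linalg.norm(M @ f - tau))  # ~0 by construction
print("muscle forces:", f)
```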
Abstract:
In this paper, we develop numerical algorithms with small storage and operation-count requirements for the computation of invariant tori in Hamiltonian systems (exact symplectic maps and Hamiltonian vector fields). The algorithms are based on the parameterization method and follow closely the proof of the KAM theorem given in [LGJV05] and [FLS07]. They essentially consist of solving, by a Newton method, a functional equation satisfied by the invariant tori. Using some geometric identities, a Newton step can be performed with little storage and few operations. In this paper we focus on the numerical aspects of the algorithms (speed, storage and stability) and refer to the papers above for the rigorous results. We show how to compute efficiently both maximal invariant tori and whiskered tori, together with the associated invariant stable and unstable manifolds of whiskered tori. Moreover, we present fast algorithms for iterating the quasi-periodic cocycles and computing the invariant bundles, which is a preliminary step for the computation of whiskered invariant tori; since quasi-periodic cocycles appear in other contexts, this section may be of independent interest. The numerical methods presented here allow primary and secondary invariant KAM tori to be computed in a unified way; secondary tori are invariant tori that can be contracted to a periodic orbit. We present some preliminary results which ensure that the methods are indeed implementable and fast. We postpone optimized implementations and results on the breakdown of invariant tori to a future paper.
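To convey the flavour of solving an invariance equation in Fourier space, the sketch below computes the hull function of an invariant circle of the standard map with a naive linearized iteration; it is illustrative only and lacks the quadratically convergent Newton step and the generality of the algorithms described here:

```python
import numpy as np

# Invariant circle of the standard map via its hull function h(θ) = θ + u(θ),
# which satisfies h(θ+ω) − 2h(θ) + h(θ−ω) = −(K/2π)·sin(2π h(θ)).
omega = (np.sqrt(5) - 1) / 2      # golden-mean rotation number (Diophantine)
K = 0.3                           # map parameter, well below breakdown
N = 256                           # grid / Fourier modes
theta = np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N)  # integer wave numbers

u = np.zeros(N)
for it in range(500):
    g = -(K / (2 * np.pi)) * np.sin(2 * np.pi * (theta + u))
    g_hat = np.fft.fft(g)
    denom = 2 * np.cos(2 * np.pi * k * omega) - 2     # small denominators
    u_hat = np.zeros(N, dtype=complex)
    u_hat[k != 0] = g_hat[k != 0] / denom[k != 0]     # linearized equation
    u_new = np.fft.ifft(u_hat).real
    if np.max(np.abs(u_new - u)) < 1e-13:
        break
    u = u_new

# Verify the invariance equation on the grid using spectral shifts.
shift = lambda f, s: np.fft.ifft(np.fft.fft(f) * np.exp(2j * np.pi * k * s)).real
residual = shift(u, omega) - 2 * u + shift(u, -omega) - g
print(f"stopped after {it} iterations, max residual {np.abs(residual).max():.2e}")
```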
Abstract:
Defining an efficient training set is one of the most delicate phases for the success of remote sensing image classification routines. The complexity of the problem, limited temporal and financial resources, and high intraclass variance can make an algorithm fail if it is trained on a suboptimal dataset. Active learning aims at building efficient training sets by iteratively improving the model performance through sampling: a user-defined heuristic ranks the unlabeled pixels according to a function of the uncertainty of their class membership, and the user is then asked to provide labels for the most uncertain pixels. This paper reviews and tests the main families of active learning algorithms: committee-based, large-margin, and posterior probability-based. For each family, the most recent advances in the remote sensing community are discussed and some heuristics are detailed and tested. Several challenging remote sensing scenarios are considered, including very high spatial resolution and hyperspectral image classification. Finally, guidelines for choosing a suitable architecture are provided for new and/or inexperienced users.
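As a concrete instance of a posterior probability-based heuristic, breaking-ties sampling can be written in a few lines with scikit-learn; the sketch below runs on synthetic data, with the "oracle" simply reading off the known labels:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=10, n_classes=3,
                           n_informative=6, random_state=0)
labeled = list(range(20))                      # small initial training set
pool = [i for i in range(len(y)) if i not in labeled]

clf = LogisticRegression(max_iter=1000)
for _ in range(10):
    clf.fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[pool])
    top2 = np.sort(proba, axis=1)[:, -2:]      # two largest class posteriors
    margin = top2[:, 1] - top2[:, 0]           # breaking-ties criterion
    picks = np.argsort(margin)[:10]            # most ambiguous samples
    for i in sorted(picks, reverse=True):      # query the "oracle"
        labeled.append(pool.pop(i))

print("final training-set size:", len(labeled))
print("accuracy on remaining pool:", clf.score(X[pool], y[pool]))
```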
Abstract:
The broad resonances underlying the entire 1H NMR spectrum of the brain, ascribed to macromolecules, can influence metabolite quantification. At the intermediate field strength of 3 T, distinct approaches to the determination of the macromolecule signal, previously used at either 1.5 T or at 7 T and higher, may become equivalent. The aim of this study was to evaluate, at 3 T in healthy subjects using LCModel, the impact on metabolite quantification of two different macromolecule approaches: (i) experimentally measured macromolecules; and (ii) mathematically estimated macromolecules. Although small but significant differences in metabolite quantification (up to 23% for glutamate) were noted for some metabolites, 10 metabolites were quantified reproducibly with both approaches with a Cramér-Rao lower bound below 20%, and the neurochemical profiles were therefore similar. We conclude that the mathematical approximation can provide a sufficiently accurate and reproducible estimate of the macromolecule contribution to the 1H spectrum at 3 T.
Abstract:
We demonstrate that RecA protein can mediate annealing of complementary DNA strands in vitro by at least two different mechanisms. The first annealing mechanism predominates under conditions where RecA protein causes coaggregation of single-stranded DNA (ssDNA) molecules and where RecA-free ssDNA stretches are present on both reaction partners. Under these conditions annealing can take place between locally concentrated protein-free complementary sequences. Other DNA aggregating agents like histone H1 or ethanol stimulate annealing by the same mechanism. The second mechanism of RecA-mediated annealing of complementary DNA strands is best manifested when preformed saturated RecA-ssDNA complexes interact with protein-free ssDNA. In this case, annealing can occur between the ssDNA strand resident in the complex and the ssDNA strand that interacts with the preformed RecA-ssDNA complex. Here, the action of RecA protein reflects its specific recombination promoting mechanism. This mechanism enables DNA molecules resident in the presynaptic RecA-DNA complexes to be exposed for hydrogen bond formation with DNA molecules contacting the presynaptic RecA-DNA filament.
Abstract:
Given a model that can be simulated, conditional moments at a trial parameter value can be calculated with high accuracy by applying kernel smoothing methods to a long simulation. With such conditional moments in hand, standard method of moments techniques can be used to estimate the parameter. Because conditional moments are calculated using kernel smoothing rather than simple averaging, it is not necessary that the model be simulable subject to the conditioning information that is used to define the moment conditions. For this reason, the proposed estimator is applicable to general dynamic latent variable models. It is shown that as the number of simulations diverges, the estimator is consistent and a higher-order expansion reveals the stochastic difference between the infeasible GMM estimator based on the same moment conditions and the simulated version. In particular, we show how to adjust standard errors to account for the simulations. Monte Carlo results show how the estimator may be applied to a range of dynamic latent variable (DLV) models, and that it performs well in comparison to several other estimators that have been proposed for DLV models.