40 results for markov chains monte carlo methods
Abstract:
This thesis addresses on-road vehicle detection and tracking with a monocular vision system. This problem has attracted the attention of the automotive industry and the research community as it is the first step for driver assistance and collision avoidance systems and for eventual autonomous driving. Although much effort has been devoted to it in recent years, no satisfactory solution has yet been devised and it therefore remains an active research issue. The main challenges for vision-based vehicle detection and tracking are the high variability among vehicles, the dynamically changing background due to camera motion, and the real-time processing requirement. In this thesis, a unified approach using statistical methods is presented for vehicle detection and tracking that tackles these issues. The approach is divided into three primary tasks, i.e., vehicle hypothesis generation, hypothesis verification, and vehicle tracking, which are performed sequentially. Nevertheless, the exchange of information between processing blocks is fostered so that the maximum degree of adaptation to changes in the environment can be achieved and the computational cost is alleviated. Two complementary strategies are proposed to address the first task, i.e., hypothesis generation, based respectively on appearance and geometry analysis. To this end, the use of a rectified domain in which the perspective is removed from the original image is especially interesting, as it allows for fast image scanning and coarse hypothesis generation. The final vehicle candidates are produced using a collaborative framework between the original and the rectified domains. A supervised classification strategy is adopted for the verification of the hypothesized vehicle locations. In particular, state-of-the-art methods for feature extraction are evaluated and new descriptors are proposed by exploiting knowledge of vehicle appearance. Due to the lack of appropriate public databases, a new database is generated and the classification performance of the descriptors is extensively tested on it. Finally, a methodology for the fusion of the different classifiers is presented and the best combinations are discussed. The core of the proposed approach is a Bayesian tracking framework using particle filters. Contributions are made to its three key elements: the inference algorithm, the dynamic model and the observation model. In particular, the use of a Markov chain Monte Carlo method is proposed for sampling, which circumvents the exponential complexity increase of traditional particle filters, thus making joint multiple-vehicle tracking affordable. On the other hand, the aforementioned rectified domain allows for the definition of a constant-velocity dynamic model, since it preserves the smooth motion of vehicles on highways. Finally, a multiple-cue observation model is proposed that not only accounts for vehicle appearance but also integrates the available information from the analysis in the previous blocks. The proposed approach is proven to run in near real time on a general-purpose PC and to deliver outstanding results compared to traditional methods.
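To illustrate the sampling idea, the following is a minimal sketch of a Metropolis-Hastings step inside a particle filter, assuming a constant-velocity state [x, y, vx, vy] and a Gaussian placeholder standing in for the thesis's multiple-cue observation model; `likelihood`, `mcmc_filter_step` and all parameters are illustrative, not the thesis implementation.

```python
# A minimal sketch (not the thesis code) of an MCMC-based particle filter step:
# constant-velocity prediction followed by Metropolis-Hastings sampling of the
# posterior instead of importance resampling.
import numpy as np

rng = np.random.default_rng(0)

def likelihood(state, measurement, sigma=1.0):
    # Placeholder observation model: Gaussian around the measured position.
    return np.exp(-np.sum((state[:2] - measurement) ** 2) / (2 * sigma**2))

def mcmc_filter_step(particles, measurement, dt=1.0, n_iters=500, step=0.5):
    """One time step: propagate with constant velocity, then sample the
    posterior with a Metropolis-Hastings chain."""
    F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1.]])
    predicted = particles @ F.T + rng.normal(0, 0.1, particles.shape)
    current = predicted[rng.integers(len(predicted))]
    chain = []
    for _ in range(n_iters):
        proposal = current + rng.normal(0, step, current.shape)
        # Acceptance ratio from the likelihood only; a full MCMC particle
        # filter would also include the predictive prior term.
        a = likelihood(proposal, measurement) / (likelihood(current, measurement) + 1e-300)
        if rng.random() < a:
            current = proposal
        chain.append(current.copy())
    return np.array(chain[n_iters // 2:])  # discard burn-in

particles = rng.normal(0, 1, (200, 4))
posterior = mcmc_filter_step(particles, measurement=np.array([1.0, 2.0]))
print(posterior.mean(axis=0))
```

For joint multiple-vehicle tracking, the state vector stacks all vehicles and the chain explores that joint space, at a cost that grows roughly linearly with the number of targets rather than exponentially as in importance-sampling particle filters.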
Abstract:
This study applies a methodology for obtaining derived frequency laws (of peak outflow discharges and maximum reservoir levels reached) within a Monte Carlo simulation environment, for their inclusion in a dam risk analysis model. Their behaviour is compared with that of frequency laws obtained using traditional techniques.
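As a toy illustration of deriving a frequency law by Monte Carlo (the inflow distribution, the routing rule and every number below are assumptions for the sketch, not the study's model):

```python
# A minimal sketch: estimate a derived frequency law (exceedance curve of peak
# outflow) by Monte Carlo sampling. All distributions and values are toy assumptions.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
peak_inflow = rng.gumbel(loc=500.0, scale=150.0, size=n)   # m3/s, assumed flood law
attenuation = rng.uniform(0.6, 0.9, size=n)                # toy reservoir routing effect
peak_outflow = peak_inflow * attenuation

# Empirical frequency law: peak outflow vs. return period T = 1 / P(exceedance).
sorted_q = np.sort(peak_outflow)[::-1]
T = (n + 1) / (np.arange(n) + 1)      # Weibull plotting positions
for target in (10, 100, 1000):
    q = np.interp(target, T[::-1], sorted_q[::-1])
    print(f"T = {target:5d} yr -> peak outflow ~ {q:7.1f} m3/s")
```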
Abstract:
Monte Carlo simulations have been carried out to study the effect of temperature on the growth kinetics of a circular grain. This work demonstrates the importance of roughening fluctuations on the growth dynamics. As predicted by d=3 theories of domain kinetics, the circular domain shrinks linearly with time as A(t) = A(0) − αt, where A(0) and A(t) are the initial and instantaneous areas, respectively. However, in contrast to d=3, the slope α is strongly temperature dependent for T ≥ 0.6T_C, since the effect of thermal fluctuations is stronger in d=2 than in d=3. An analytical theory which takes the thermal fluctuations into account agrees with the T dependence of the Monte Carlo data in this regime, and this model shows that these fluctuations are responsible for the strong temperature dependence of the growth rate for d=2. Our results are particularly relevant to the problem of domain growth in surface science.
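For context, the linear area law quoted above follows from the standard curvature-driven (Allen-Cahn) picture; a background sketch, not taken from the paper:

```latex
% Curvature-driven shrinkage of a circular domain (standard background):
% the interface moves with velocity proportional to its curvature, v = -M\kappa,
% and a circle of radius R has \kappa = 1/R, so
\[
  \frac{dR}{dt} = -\frac{M}{R}
  \quad\Longrightarrow\quad
  \frac{dA}{dt} = \frac{d(\pi R^2)}{dt} = 2\pi R\,\frac{dR}{dt} = -2\pi M \equiv -\alpha ,
\]
% giving A(t) = A(0) - \alpha t. The abstract's point is that in d = 2 thermal
% roughening fluctuations renormalize the effective mobility M, making \alpha
% strongly temperature dependent.
```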
Abstract:
A Monte Carlo computer simulation technique, in which a continuum system is modeled employing a discrete lattice, has been applied to the problem of recrystallization. Primary recrystallization is modeled under conditions where the degree of stored energy is varied and nucleation occurs homogeneously (without regard for position in the microstructure). The nucleation rate is chosen as site saturated. Temporal evolution of the simulated microstructures is analyzed to provide the time dependence of the recrystallized volume fraction and grain sizes. The recrystallized volume fraction shows sigmoidal variations with time. The data are approximately fit by the Johnson-Mehl-Avrami equation with the expected exponents; however, significant deviations are observed for both small and large recrystallized volume fractions. Under constant-rate nucleation conditions, the propensity for irregular grain shapes is decreased and the density of two-sided grains increases.
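For reference, the Johnson-Mehl-Avrami(-Kolmogorov) form that the simulated data are fitted to is the standard one; the exponent values noted below are textbook background, not the paper's results:

```latex
% Johnson-Mehl-Avrami(-Kolmogorov) kinetics (textbook form, for reference):
\[
  X(t) = 1 - \exp\!\left(-k\,t^{\,n}\right),
\]
% where X is the recrystallized volume fraction, k a rate constant, and n the
% Avrami exponent. For interfaces growing at constant velocity in d dimensions,
% n = d for site-saturated nucleation and n = d + 1 for a constant nucleation
% rate; the abstract reports deviations from this fit at small and large X.
```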
Abstract:
The energy and specific energy absorbed in the main cell compartments (nucleus and cytoplasm) in typical radiobiology experiments are usually estimated by calculations, as they are not accessible to direct measurement. In most of this work, the cell geometry is modelled using a combination of simple mathematical volumes. We propose a method based on high-resolution confocal imaging and ion beam analysis (IBA) in order to import realistic cell nucleus geometries into Monte Carlo simulations and thus take into account the variety of geometries encountered in a typical cell population. Seventy-six cell nuclei have been imaged using confocal microscopy and their chemical composition has been measured using IBA. A cellular phantom was created from these data using the ImageJ image analysis software and imported into the Geant4 Monte Carlo simulation toolkit. Total energy and specific energy distributions in the 76 cell nuclei have been calculated for two types of irradiation protocols: a 3 MeV alpha-particle microbeam used for targeted irradiation, and a 239Pu alpha source used for large-angle random irradiation. Qualitative images of the energy deposited along the particle tracks have been produced and show good agreement with images of DNA double-strand break signalling proteins obtained experimentally. The methodology presented in this paper provides microdosimetric quantities calculated from realistic cellular volumes. It is based on open-source software that is publicly available.
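For reference, the microdosimetric quantity computed above has the standard ICRU definition (background, not specific to this paper):

```latex
% Specific energy (standard ICRU microdosimetry definition):
\[
  z = \frac{\varepsilon}{m},
\]
% where \varepsilon is the energy imparted to the volume of interest (here a
% cell nucleus) and m is its mass. z is stochastic: its distribution over the
% 76 imaged nuclei is what the Geant4 calculation provides.
```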
Abstract:
The aim of this work is to optimize a Monte Carlo (MC) kernel for intraoperative electron radiation therapy (IOERT), making it compatible with intraoperative usage, and to integrate it within an existing IOERT-dedicated treatment planning system (TPS).
Abstract:
Kinetic Monte Carlo (KMC) is a widely used technique to simulate the evolution of radiation damage inside solids. Despite the fact that this technique was developed several decades ago, there is no established, easy-to-access simulation tool for researchers interested in this field, unlike in the case of molecular dynamics or density functional theory calculations. In fact, scientists must develop their own tools or use unmaintained ones in order to perform these types of simulations. To fulfil this need, we have developed MMonCa, the Modular Monte Carlo simulator. MMonCa has been developed using professional C++ programming techniques and has been built on top of an interpreted language to provide a powerful yet flexible, robust yet customizable and easy-to-access modern simulator. Both non-lattice and lattice KMC modules have been developed. We present the MMonCa simulator at this conference for the first time. Along with other (more detailed) contributions in this meeting, the versatility of MMonCa to study a number of problems in different materials (particularly Fe and W) subject to a wide range of conditions will be shown. Regarding KMC simulations, we have studied neutron-generated cascade evolution in Fe (as a model material). Starting with a Frenkel pair distribution, we have followed the defect evolution up to 450 K. Comparison with previous simulations and experiments shows excellent agreement. Furthermore, we have studied a more complex system (He-irradiated W:C) using a previous parametrization [1]. He irradiation at 4 K followed by isochronal annealing steps up to 500 K has been simulated with MMonCa. The He energy was 400 eV or 3 keV. In the first case, no damage is associated with the He implantation, whereas in the second a significant Frenkel pair concentration (evolving into complex clusters) is associated with the He ions. We have been able to explain He desorption both in the absence and in the presence of Frenkel pairs, and we have also applied MMonCa to high He doses and fluxes at elevated temperatures. He migration and trapping dominate the kinetics of He desorption. These processes will be discussed and compared to experimental results. [1] C.S. Becquart et al., J. Nucl. Mater. 403 (2010) 75.
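The core loop of any KMC code is the residence-time (BKL/Gillespie) algorithm; here is a minimal sketch, where the two-event rate table, prefactor and activation energies are illustrative assumptions, not MMonCa's parameters:

```python
# A minimal sketch of the residence-time (BKL/Gillespie) algorithm underlying
# kinetic Monte Carlo. The event catalogue below is a toy example.
import math
import random

random.seed(0)
kB = 8.617e-5          # Boltzmann constant, eV/K
T = 450.0              # K, as in the cascade-annealing example
prefactor = 1e13       # attempt frequency, 1/s (assumed)

# Event catalogue: activation energies (eV) for, e.g., vacancy and SIA jumps.
events = {"vacancy_jump": 0.67, "interstitial_jump": 0.34}

t = 0.0
for step in range(5):
    rates = {e: prefactor * math.exp(-Ea / (kB * T)) for e, Ea in events.items()}
    R = sum(rates.values())
    # Pick an event with probability proportional to its rate.
    r = random.random() * R
    acc = 0.0
    for e, k in rates.items():
        acc += k
        if r <= acc:
            chosen = e
            break
    else:
        chosen = e  # floating-point guard: fall back to the last event
    # Advance the clock by an exponentially distributed residence time.
    t += -math.log(1.0 - random.random()) / R
    print(f"step {step}: {chosen:18s} t = {t:.3e} s")
```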
Abstract:
Minor actinides (MAs) transmutation is a main design objective of advanced nuclear systems such as Generation IV Sodium Fast Reactors (SFRs). In advanced fuel cycles, MA contents in final high-level waste packages are main contributors to short-term heat production as well as to long-term radiotoxicity. Therefore, MA transmutation would have an impact on repository designs and would reduce the environmental burden of nuclear energy. In order to predict such consequences, Monte Carlo (MC) transport codes are used in reactor design tasks, and they are important complements and references for routinely used deterministic computational tools. In this paper, two promising Monte Carlo transport-coupled depletion codes, EVOLCODE and SERPENT, are used to examine the impact of MA burning strategies in a 3600 MWth SFR core. The core concept proposal for MA loading in two configurations is the result of an optimization effort upon a preliminary reference design to reduce the reactivity insertion caused by sodium voiding, one of the main concerns of this technology. The objective of this paper is twofold. Firstly, the efficiencies of the two core configurations for MA transmutation are addressed and evaluated in terms of actinide mass changes and reactivity coefficients. Results are compared with those without MA loading. Secondly, a comparison of the two codes is provided. The discrepancies in the results are quantified and discussed.
Abstract:
Using the Monte Carlo method, the behavior of a system of true hard cylinders has been studied. Values of the length-to-breadth ratio L/D and packing fraction η have been chosen similar to those of real nematic liquid crystals. Results include the radial distribution function g(r), the structure factor S(k), and the orientational order parameter M. These results lead to the conclusion that the hard cylinder model may be a useful reference for real mesomorphic phases.
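The Monte Carlo scheme for hard-core particles reduces to trial moves accepted only when no overlap is created; the sketch below uses 2D hard disks as a simplified stand-in (the overlap test for true cylinders is geometrically more involved), and all parameters are illustrative:

```python
# A minimal sketch of Metropolis Monte Carlo for hard-core particles,
# using 2D hard disks as a simplified analogue of hard cylinders.
import numpy as np

rng = np.random.default_rng(2)
N, L, sigma = 32, 10.0, 1.0          # particles, box side, disk diameter
pos = rng.uniform(0, L, (N, 2))      # random start may contain overlaps;
                                     # production runs start from a lattice

def overlaps(i, trial):
    d = pos - trial
    d -= L * np.round(d / L)         # periodic boundary conditions
    r2 = np.sum(d * d, axis=1)
    r2[i] = np.inf                   # ignore self
    return np.any(r2 < sigma**2)

accepted = 0
for sweep in range(200):
    for i in range(N):
        trial = (pos[i] + rng.uniform(-0.1, 0.1, 2)) % L
        if not overlaps(i, trial):   # hard core: accept iff no overlap
            pos[i] = trial
            accepted += 1
print("acceptance ratio:", accepted / (200 * N))
```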
Abstract:
In activation calculations, there are several approaches to quantifying uncertainties: deterministic, by means of sensitivity analysis, and stochastic, by means of Monte Carlo. Here, two different Monte Carlo approaches for nuclear data uncertainty are presented. The first one is Total Monte Carlo (TMC). The second one is a Monte Carlo sampling of the covariance information included in the nuclear data libraries to propagate these uncertainties throughout the activation calculations; we call this approach Covariance Uncertainty Propagation (CUP). This work presents both approaches and their differences. They are also compared by means of an ADS activation calculation in which the cross-section uncertainties of 239Pu and 241Pu are propagated.
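The covariance-sampling idea can be sketched in a few lines: draw correlated cross-section perturbations from a covariance matrix and push each sample through the calculation. The one-group depletion model, the cross sections and the covariance below are toy assumptions, not the paper's data:

```python
# A minimal sketch of CUP-style covariance sampling for uncertainty propagation.
import numpy as np

rng = np.random.default_rng(3)
phi, t = 1e15, 3.15e7                       # flux (n/cm2/s) and time (s), assumed
sigma_mean = np.array([1.8e-24, 2.5e-24])   # capture cross-sections (cm2), toy
rel_cov = np.array([[0.04**2, 0.5 * 0.04 * 0.06],
                    [0.5 * 0.04 * 0.06, 0.06**2]])  # toy relative covariance
cov = rel_cov * np.outer(sigma_mean, sigma_mean)

samples = rng.multivariate_normal(sigma_mean, cov, size=10_000)
# Propagate each sample through a simple exponential depletion,
# N/N0 = exp(-sigma * phi * t), standing in for the full activation calculation.
depletion = np.exp(-samples * phi * t)
mean, std = depletion.mean(axis=0), depletion.std(axis=0)
print("N/N0 mean:", mean, "rel. uncertainty:", std / mean)
```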
Abstract:
In this work, we introduce the Object Kinetic Monte Carlo (OKMC) simulator MMonCa and simulate the defect evolution in three different materials. We start by explaining the theory of OKMC and showing some details of how this theory is implemented, by creating generic structures and algorithms for the objects that we want to simulate. We then successfully reproduce results for defect evolution in iron, silicon and tungsten using our simulator, and compare them with available experimental data and similar simulations. The comparisons validate MMonCa, showing that it is powerful and flexible enough to be customized and used to study the damage evolution of defects in a wide range of solid materials.
Abstract:
Meta-analysis of erythrocyte volume at altitude
Abstract:
Subtraction of Ictal SPECT Co-registered to MRI (SISCOM) is an imaging technique used to localize the epileptogenic focus in patients with intractable partial epilepsy. The aim of this study was to determine the accuracy of the registration algorithms involved in SISCOM analysis using FocusDET, a new user-friendly application. To this end, Monte Carlo simulation was employed to generate realistic SPECT studies. Simulated sinograms were reconstructed using the Filtered Back-Projection (FBP) algorithm and an Ordered Subsets Expectation Maximization (OSEM) reconstruction method that included compensation for all degradations. Errors in SPECT-SPECT and SPECT-MRI registration were evaluated by comparing the theoretical and actual transforms. Patient studies with well-localized epilepsy were also included in the registration assessment. Global registration errors, including SPECT-SPECT and SPECT-MRI registration errors, were less than 1.2 mm on average and in no case exceeded the voxel size (3.32 mm) of the SPECT studies. Although images reconstructed using OSEM led to lower registration errors than images reconstructed with FBP, the differences between OSEM and FBP reconstructions were less than 0.2 mm on average. This indicates that correction for degradations does not play a major role in the SISCOM process, thereby facilitating the application of the methodology in centers where OSEM is not implemented with correction for all degradations. These findings, together with those obtained by clinicians from patients via MRI, interictal and ictal SPECT and video-EEG, show that FocusDET is a robust application for performing SISCOM analysis in clinical practice.
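The evaluation principle (comparing the known simulated transform with the one the algorithm recovers) can be sketched as follows; the 2D rigid transforms and point cloud are illustrative, not FocusDET internals:

```python
# A minimal sketch of quantifying a registration error: apply the ground-truth
# and the estimated transforms to a set of points and measure the displacement.
import numpy as np

rng = np.random.default_rng(4)

def rigid(theta_deg, t):
    th = np.deg2rad(theta_deg)
    R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    return R, np.asarray(t, float)

R_true, t_true = rigid(10.0, [5.0, -2.0])      # known simulated transform
R_est,  t_est  = rigid(10.3, [5.2, -1.9])      # transform found by registration

pts = rng.uniform(-50, 50, (1000, 2))          # points spread over the volume
err = (pts @ R_true.T + t_true) - (pts @ R_est.T + t_est)
print("mean registration error (mm):", np.linalg.norm(err, axis=1).mean())
```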
Abstract:
Ion beam therapy is a valuable method for the treatment of deep-seated and radio-resistant tumors thanks to the favorable depth-dose distribution characterized by the Bragg peak. Hadrontherapy facilities take advantage of the specific ion range, resulting in a highly conformal dose in the target volume, while the dose in critical organs is reduced compared to photon therapy. The necessity to monitor the delivery precision, i.e. the ion range, is unquestionable, and thus different approaches have been investigated, such as the detection of prompt photons or of annihilation photons from positron-emitting nuclei created during the therapeutic treatment. Based on the measurement of the induced β+ activity, our group has developed various in-beam PET prototypes: the one under test is composed of two planar detector heads, each consisting of four modules with a total active area of 10 × 10 cm2. A single detector module is made of a LYSO crystal matrix coupled to a position-sensitive photomultiplier and is read out by dedicated front-end electronics. A preliminary data taking was performed at the Italian National Centre for Oncological Hadron Therapy (CNAO, Pavia), using proton beams in the energy range of 93–112 MeV impinging on a plastic phantom. The measured activity profiles are presented and compared with simulated profiles based on the Monte Carlo FLUKA package.
Abstract:
The terahertz region of the electromagnetic spectrum (100 GHz to 10 THz) presents a wide range of applications such as radio astronomy, molecular spectroscopy, medicine, security and radar, among others. The main obstacles for the development of these applications are the high production cost of systems working at these frequencies, high maintenance, large volume and low reliability. Among the different THz technologies, Schottky technology plays an important role due to its maturity and the inherent simplicity of these devices. Besides, Schottky diodes can operate at both room and cryogenic temperatures, with high efficiency in multipliers and moderate noise temperature in mixers. This PhD thesis is mainly concerned with the analysis of the physical processes responsible for the characteristics of the electrical response and noise of Schottky diodes, as well as the analysis and design of frequency multipliers and mixers at millimeter and submillimeter wavelengths. The first part of the thesis deals with the analysis of the physical phenomena limiting the electrical performance of GaAs and GaN Schottky diodes and their noise performance. To carry out this analysis, a Monte Carlo model of the diode has been used as a reference due to the high accuracy and reliability of this model at millimeter and submillimeter wavelengths. Besides, the Monte Carlo model provides a direct description of the noise spectra of the devices without the need for any additional analytical or empirical model. Physical phenomena like velocity saturation, carrier inertia, dependence of the electron mobility on the epilayer length, plasma resonance and nonlocal effects in time and space have been analysed. Also, a complete analysis of the current noise spectra of GaAs and GaN Schottky diodes operating under static and time-varying conditions is presented in this part of the thesis. The obtained results provide a better understanding of the electrical and noise responses of Schottky diodes under high-frequency and/or high-electric-field conditions. These results have also helped to determine the limitations of numerical and analytical models used in the analysis of the electrical and noise responses of these devices. The second part of the thesis is devoted to the analysis of frequency multipliers and mixers by means of an in-house circuit simulation tool based on the harmonic balance technique. Different lumped equivalent-circuit, drift-diffusion and Monte Carlo models have been considered in this analysis. The Monte Carlo model coupled to the harmonic balance technique has been used as a reference to evaluate the limitations and range of validity of lumped equivalent-circuit and drift-diffusion models for the design of frequency multipliers and mixers. A remarkable feature of this reference simulation tool is that it enables the design of Schottky circuits from both electrical and noise considerations. The simulation results presented in this part of the thesis for both multipliers and mixers have been compared with measured results available in the literature. In addition, the Monte Carlo simulation tool allows the analysis and design of circuits above 1 THz.
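As background for the lumped equivalent-circuit description mentioned above, a standard quasi-static Schottky diode model (textbook relations, not taken from the thesis) is:

```latex
% Standard lumped Schottky diode model (textbook background, assumed here):
% thermionic-emission I-V law with ideality factor \eta and series resistance R_s,
\[
  I(V) = I_s \left[ \exp\!\left( \frac{q\,(V - I R_s)}{\eta\, k_B T} \right) - 1 \right],
\]
% together with a bias-dependent junction capacitance
\[
  C_j(V) = \frac{C_{j0}}{\sqrt{1 - V/V_{bi}}},
\]
% where I_s is the saturation current and V_{bi} the built-in potential.
% Monte Carlo device models are used precisely where this quasi-static picture
% breaks down (velocity saturation, carrier inertia, plasma resonance).
```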