978 results for Monte Carlo simulations


Relevance: 90.00%

Abstract:

Smoothing splines are a popular approach to non-parametric regression problems. We use periodic smoothing splines to fit a periodic signal plus noise model to data for which we assume there are underlying circadian patterns. In the smoothing spline methodology, choosing an appropriate smoothness parameter is an important step in practice. In this paper, we draw a connection between smoothing splines and REACT estimators that motivates new criteria for choosing the smoothness parameter. The new criteria are compared to three existing methods, namely cross-validation, generalized cross-validation, and the generalized maximum likelihood criterion, by a Monte Carlo simulation and by an application to the study of circadian patterns. In most of the situations presented in the simulations, including the practical example, the new criteria outperform the three existing ones.
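As a concrete illustration of smoothness-parameter selection, the sketch below fits a periodic (circular) penalized smoother and chooses the penalty weight by generalized cross-validation, one of the existing criteria mentioned above. The signal, the second-difference penalty, and the parameter grid are illustrative; this is not the paper's REACT-based criterion.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 96
t = np.linspace(0, 2 * np.pi, n, endpoint=False)
truth = np.sin(t) + 0.5 * np.cos(2 * t)          # underlying circadian-like signal
y = truth + rng.normal(0, 0.3, n)

# Circular second-difference penalty (periodic boundary conditions).
D = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
D[0, -1] = D[-1, 0] = 1.0
P = D.T @ D

def gcv(lam):
    S = np.linalg.inv(np.eye(n) + lam * P)       # smoother ("hat") matrix
    resid = y - S @ y
    return np.mean(resid ** 2) / (1.0 - np.trace(S) / n) ** 2

lams = np.logspace(-2, 4, 60)
lam_best = lams[int(np.argmin([gcv(l) for l in lams]))]
fit = np.linalg.inv(np.eye(n) + lam_best * P) @ y
```

The same scaffolding works for any criterion: only the scoring function over the λ grid changes.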

Relevance: 90.00%

Abstract:

Generalized linear mixed models (GLMMs) are generalized linear models with normally distributed random effects in the linear predictor. Penalized quasi-likelihood (PQL), an approximate method of inference in GLMMs, involves repeated fitting of linear mixed models with “working” dependent variables and iterative weights that depend on parameter estimates from the previous cycle of iteration. The generality of PQL, and its implementation in commercially available software, have encouraged the application of GLMMs in many scientific fields. Caution is needed, however, since PQL may sometimes yield badly biased estimates of variance components, especially with binary outcomes. Recent developments in numerical integration, including adaptive Gaussian quadrature, higher-order Laplace expansions, stochastic integration, and Markov chain Monte Carlo (MCMC) algorithms, provide attractive alternatives to PQL for approximate likelihood inference in GLMMs. Analyses of some well-known datasets, and simulations based on these analyses, suggest that PQL still performs remarkably well in comparison with more elaborate procedures in many practical situations. Adaptive Gaussian quadrature is a viable alternative for nested designs where the numerical integration is limited to a small number of dimensions. Higher-order Laplace approximations hold the promise of accurate inference more generally. MCMC is likely the method of choice for the most complex problems that involve high-dimensional integrals.
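Gauss-Hermite quadrature, the non-adaptive core of the quadrature alternative mentioned above, can be sketched for the marginal likelihood of a single cluster in a random-intercept logistic GLMM. The data and parameter values below are invented for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cluster_marginal_lik(y, x, beta0, beta1, sigma, n_quad=20):
    # Gauss-Hermite rule:  integral of f(b) N(b; 0, sigma^2) db
    # is approximated by  (1/sqrt(pi)) * sum_k w_k f(sqrt(2)*sigma*t_k).
    t, w = np.polynomial.hermite.hermgauss(n_quad)
    b = np.sqrt(2.0) * sigma * t
    eta = beta0 + beta1 * x[None, :] + b[:, None]   # linear predictor per node
    p = sigmoid(eta)
    lik_given_b = np.prod(p ** y * (1 - p) ** (1 - y), axis=1)
    return (w @ lik_given_b) / np.sqrt(np.pi)

y = np.array([1, 0, 1, 1])                # binary outcomes in one cluster
x = np.array([0.0, 1.0, 2.0, 3.0])        # a single covariate
L = cluster_marginal_lik(y, x, beta0=-0.5, beta1=0.4, sigma=1.0)
```

Adaptive quadrature recenters and rescales the nodes around each cluster's mode, which is what keeps the node count small in nested designs.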

Relevance: 90.00%

Abstract:

Recurrent event data are largely characterized by the rate function, but smoothing techniques for estimating the rate function have never been rigorously developed or studied in the statistical literature. This paper considers the moment and least squares methods for estimating the rate function from recurrent event data. Under an independent censoring assumption on the recurrent event process, we study statistical properties of the proposed estimators and propose bootstrap procedures for bandwidth selection and for approximating confidence intervals in the estimation of the occurrence rate function. We find that the moment method, without resmoothing via a smaller bandwidth, produces a curve with nicks at the censoring times, whereas the least squares method has no such problem. Furthermore, the asymptotic variance of the least squares estimator is shown to be smaller under regularity conditions. However, in the implementation of the bootstrap procedures, the moment method is computationally more efficient than the least squares method because the former uses condensed bootstrap data. The performance of the proposed procedures is studied through Monte Carlo simulations and an epidemiological example on intravenous drug users.
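A moment-type kernel estimator of the occurrence rate divides smoothed event counts by the number of subjects still under observation. The sketch below, with simulated rate-2 Poisson processes and an Epanechnikov kernel, is a schematic of that idea rather than the paper's exact estimator.

```python
import numpy as np

def rate_estimate(event_times, censor_times, grid, h):
    """Moment-type kernel rate estimate: smoothed event counts divided
    by the at-risk count at each grid point."""
    def K(u):                                    # Epanechnikov kernel
        return 0.75 * np.clip(1.0 - u ** 2, 0.0, None)
    num = np.zeros_like(grid)
    for times in event_times:
        for s in times:
            num += K((grid - s) / h) / h
    at_risk = np.array([(censor_times >= g).sum() for g in grid])
    return num / np.maximum(at_risk, 1)

rng = np.random.default_rng(1)
censor = rng.uniform(5.0, 10.0, size=50)                       # censoring times
events = [np.sort(rng.uniform(0.0, c, size=rng.poisson(2.0 * c)))
          for c in censor]                                      # rate-2 processes
grid = np.linspace(0.5, 4.5, 30)
lam_hat = rate_estimate(events, censor, grid, h=0.5)
```

The nicks mentioned in the abstract appear when the at-risk denominator drops at a censoring time inside the kernel window; resmoothing or the least squares variant avoids them.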

Relevance: 90.00%

Abstract:

In this dissertation, the problem of creating effective control algorithms for large-scale Adaptive Optics (AO) systems for the new generation of giant optical telescopes is addressed. The effectiveness of AO control algorithms is evaluated in several respects, such as computational complexity, compensation error rejection, and robustness, i.e. reasonable insensitivity to system imperfections. The results of this research are summarized as follows:

1. Robustness study of the Sparse Minimum Variance Pseudo Open Loop Controller (POLC) for multi-conjugate adaptive optics (MCAO). An AO system model that accounts for various system errors has been developed and applied to check the stability and performance of the POLC algorithm, one of the most promising approaches for future AO systems control. Numerous simulations show that, despite the initial assumption that exact system knowledge is necessary for the POLC algorithm to work, it is highly robust against various system errors.

2. Predictive Kalman Filter (KF) and Minimum Variance (MV) control algorithms for MCAO. The limiting performance of the non-dynamic MV and dynamic KF-based phase estimation algorithms for MCAO has been evaluated via Monte Carlo simulations. The validity of a simple near-Markov autoregressive phase dynamics model has been tested, and its ability to predict the turbulence phase has been demonstrated for both single- and multi-conjugate AO. It has also been shown that the more complicated KF approach yields no performance improvement over the much simpler MV algorithm in the case of MCAO.

3. Sparse predictive Minimum Variance control algorithm for MCAO. A temporal prediction stage has been added to the non-dynamic MV control algorithm in such a way that no additional computational burden is introduced. Simulations confirm that phase prediction makes it possible to significantly reduce the system sampling rate, and thus the overall computational complexity, while keeping the system stable and effectively compensating for measurement and control latencies.
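The KF-versus-MV comparison rests on modeling the turbulence phase as a near-Markov autoregressive process. A minimal sketch of that idea, assuming a scalar AR(1) phase model with illustrative noise variances (none of these numbers come from the dissertation), is a one-step Kalman predictor:

```python
import numpy as np

rng = np.random.default_rng(2)
a, q, r = 0.99, 0.02, 0.1      # AR(1) coefficient, process / measurement noise variances
N = 5000
phi = np.zeros(N)              # "true" turbulence phase, AR(1) dynamics
for k in range(1, N):
    phi[k] = a * phi[k - 1] + rng.normal(0, np.sqrt(q))
y = phi + rng.normal(0, np.sqrt(r), N)   # noisy wavefront-sensor-like measurements

# Scalar Kalman filter producing one-step phase predictions.
x_hat, P = 0.0, 1.0
pred_err, naive_err = [], []
for k in range(N - 1):
    K = P / (P + r)                      # measurement update
    x_hat = x_hat + K * (y[k] - x_hat)
    P = (1 - K) * P
    x_pred = a * x_hat                   # time update: one-step prediction
    P = a * a * P + q
    pred_err.append((x_pred - phi[k + 1]) ** 2)
    naive_err.append((y[k] - phi[k + 1]) ** 2)
    x_hat = x_pred
```

The filter's one-step prediction should beat using the latest noisy measurement directly, which is the essence of adding a prediction stage to compensate for loop latency.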

Relevance: 90.00%

Abstract:

BEAMnrc, a code for simulating medical linear accelerators based on EGSnrc, has been benchmarked and used extensively in the scientific literature and is therefore often considered the gold standard for Monte Carlo simulations in radiotherapy applications. However, its long computation times make it too slow for clinical routine use, and often even for research purposes, without a large investment in computing resources. VMC++ is a much faster code thanks to its intensive use of variance reduction techniques and a much faster implementation of the condensed history technique for charged particle transport. A research version of this code is also capable of simulating the full head of linear accelerators operated in photon mode (excluding multileaf collimators and hard and dynamic wedges). In this work, a validation of the full head simulation at 6 and 18 MV is performed by simulating with VMC++ and BEAMnrc the addition of one head component at a time and comparing the resulting phase space files. For the comparison, photon and electron fluence, photon energy fluence, mean energy, and photon spectra are considered. The largest absolute differences are found in the energy fluences. For all the simulations of the different head components, very good agreement (differences in energy fluences between VMC++ and BEAMnrc <1%) is obtained. Only one particular case at 6 MV shows a somewhat larger energy fluence difference of 1.4%. Dosimetrically, these phase space differences imply agreement between both codes at the <1% level, making the VMC++ head module suitable for full head simulations with a considerable gain in efficiency and without loss of accuracy.
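A phase-space comparison of the kind described can be sketched as binning energy fluence and taking per-bin differences. The snippet below uses mock gamma-distributed energy samples in place of real BEAMnrc/VMC++ phase-space files; the distributions, unit weights, and bin edges are illustrative only.

```python
import numpy as np

def energy_fluence_spectrum(energies, weights, bins):
    """Binned, normalized energy fluence (sum of E*w per energy bin)
    from a phase-space sample of particle energies and weights."""
    ef, _ = np.histogram(energies, bins=bins, weights=energies * weights)
    return ef / ef.sum()

rng = np.random.default_rng(3)
bins = np.linspace(0.0, 10.0, 41)          # MeV, illustrative binning
# Two mock phase-space samples standing in for the outputs of two codes.
e_a = rng.gamma(2.0, 1.0, 200_000)
e_b = rng.gamma(2.0, 1.0, 200_000)
f_a = energy_fluence_spectrum(e_a, np.ones_like(e_a), bins)
f_b = energy_fluence_spectrum(e_b, np.ones_like(e_b), bins)
max_abs_diff = np.abs(f_a - f_b).max()     # per-bin discrepancy measure
```

Real comparisons would read particle type, energy, position, and statistical weight from the phase-space files and compare fluence, energy fluence, mean energy, and spectra per region.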

Relevance: 90.00%

Abstract:

Electrical Power Assisted Steering (EPAS) will likely be used in future automotive power steering systems. The sinusoidal brushless DC (BLDC) motor has been identified as one of the most suitable actuators for the EPAS application. Motor characteristic variations, indicated by variations of motor parameters such as the coil resistance and the torque constant, directly impart inaccuracies into a control scheme based on nominal parameter values, and the performance of the whole system suffers. The motor controller must address the time-varying motor characteristics and maintain performance over its long service life. In this dissertation, four adaptive control algorithms for BLDC motors are explored. The first algorithm engages a simplified inverse dq-coordinate dynamics controller and solves for the parameter errors with the q-axis current (iq) feedback from several past sampling steps. The controller parameter values are updated by slow integration of the parameter errors. Improvements such as dynamic approximation, speed approximation, and Gram-Schmidt orthonormalization are discussed for better estimation performance. The second algorithm uses both the d-axis current (id) and the q-axis current (iq) feedback for parameter estimation, since id always accompanies iq. Stochastic conditions for unbiased estimation are shown through Monte Carlo simulations. Study of the first two adaptive algorithms indicates that parameter estimation performance can be improved by using more history data. The Extended Kalman Filter (EKF), a representative recursive estimation algorithm, is then investigated for the BLDC motor application. Simulation results validate the superior estimation performance of the EKF; however, its computational complexity and stability may be barriers to practical implementation.
The fourth algorithm is a model reference adaptive control (MRAC) that utilizes the desired motor characteristics as a reference model. Its stability is guaranteed by Lyapunov's direct method. Simulation shows superior performance in terms of convergence speed and current tracking. These algorithms are compared in closed-loop simulation with an EPAS model and a motor speed control application. The MRAC is identified as the most promising candidate controller because of its combination of superior performance and low computational complexity. A BLDC motor controller developed with the dq-coordinate model cannot be implemented without several supplemental functions such as the coordinate transformation and a DC-to-AC current encoding scheme. A quasi-physical BLDC motor model is developed to study the practical implementation issues of the dq-coordinate control strategy, such as initialization and rotor angle transducer resolution. This model can also be beneficial during first-stage development in automotive BLDC motor applications.
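The MRAC idea can be illustrated on a first-order plant with a Lyapunov-rule adaptation law. This is a generic sketch, not the dissertation's motor model: the plant constants, reference-model pole, adaptation gain, and square-wave reference below are all arbitrary.

```python
import numpy as np

# MRAC (Lyapunov rule) for a first-order plant  y' = -a_p*y + b_p*u
# with a_p, b_p unknown to the controller (b_p > 0 assumed).
a_p, b_p = 2.0, 1.5          # "true" plant parameters
a_m = 4.0                    # reference model:  ym' = -a_m*ym + a_m*r
gamma = 5.0                  # adaptation gain
dt, T = 1e-3, 20.0
n = int(T / dt)

y = ym = 0.0
th_r = th_y = 0.0            # adaptive feedforward / feedback gains
errs = []
for k in range(n):
    r = 1.0 if (k * dt) % 4.0 < 2.0 else -1.0   # square-wave reference
    u = th_r * r + th_y * y
    e = y - ym
    # Lyapunov adaptation laws (V = e^2/2 + b_p*(th_err^2)/(2*gamma))
    th_r -= gamma * e * r * dt
    th_y -= gamma * e * y * dt
    y += (-a_p * y + b_p * u) * dt              # plant (Euler step)
    ym += (-a_m * ym + a_m * r) * dt            # reference model
    errs.append(e * e)

early = np.mean(errs[: n // 10])
late = np.mean(errs[-n // 10:])
```

Once the gains converge to their matching values, the closed loop reproduces the reference model exactly, so the tracking error at the end of the run should be far smaller than during the initial transient.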

Relevance: 90.00%

Abstract:

Water distribution systems are vital lifeline facilities, especially during recovery after earthquakes. In this paper, a framework for the seismic serviceability of water systems is discussed that includes the fragility evaluation of the water sources of water distribution networks. A case study is also presented on the performance of a water system under different levels of seismic hazard. The seismic serviceability of a water supply system modeled in EPANET is evaluated under various levels of seismic hazard. The assessment process is based on hydraulic analysis and Monte Carlo simulations, implemented with empirical fragility data provided by the American Lifeline Alliance (ALA, 2001) for both pipelines and water facilities. Represented by the Seismic Serviceability Index (Cornell University, 2008), the serviceability of the water distribution system is evaluated under earthquakes with return periods of 72, 475, and 2475 years. The system serviceability under each level of earthquake hazard is compared with and without considering the seismic fragility of the water source. The results show that the seismic serviceability of the water system decreases as the return period of the seismic hazard grows, and decreases further once the seismic fragility of the water source is considered. The results reveal the importance of considering the seismic fragility of water sources, and the growing dependence of system performance on the seismic resilience of the water source under severe earthquakes.
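The hydraulic-plus-Monte-Carlo assessment can be caricatured with a toy connectivity model: sample pipe failures from fragility-derived probabilities and average the fraction of demand nodes still served. The routes and failure probabilities below are invented for illustration and are not the ALA (2001) fragility data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy network: each demand node is served through a fixed route of pipes.
routes = [[0, 1], [0, 2], [0, 2, 3], [0, 1, 4]]    # pipe indices per node
p_fail = np.array([0.02, 0.05, 0.05, 0.10, 0.10])  # per-pipe failure probability
                                                   # (would come from fragility curves
                                                   # at a given hazard level)

def serviceability(n_trials=20_000):
    """Monte Carlo estimate of the fraction of demand nodes served."""
    served = 0.0
    for _ in range(n_trials):
        ok = rng.random(len(p_fail)) > p_fail      # surviving pipes this trial
        served += np.mean([all(ok[i] for i in r) for r in routes])
    return served / n_trials

ssi = serviceability()
```

A full analysis would rerun the hydraulic solver per damage state and weight nodes by demand; raising the hazard level raises `p_fail` and drives the index down, as the paper reports.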

Relevance: 90.00%

Abstract:

Certain fatty acid N-alkyl amides from the medicinal plant Echinacea activate cannabinoid type-2 (CB2) receptors. In this study we show that the CB2-binding Echinacea constituents dodeca-2E,4E-dienoic acid isobutylamide (1) and dodeca-2E,4E,8Z,10Z-tetraenoic acid isobutylamide (2) form micelles in aqueous medium. In contrast, micelle formation is not observed for undeca-2E-ene-8,10-diynoic acid isobutylamide (3), which does not bind to CB2, or structurally related endogenous cannabinoids, such as arachidonoyl ethanolamine (anandamide). The critical micelle concentration (CMC) range of 1 and 2 was determined by fluorescence spectroscopy as 200-300 and 7400-10000 nM, respectively. The size of premicelle aggregates, micelles, and supermicelles was studied by dynamic light scattering. Microscopy images show that compound 1, but not 2, forms globular and rod-like supermicelles with radii of approximately 75 nm. The self-assembling N-alkyl amides partition between themselves and the CB2 receptor, and aggregation of N-alkyl amides thus determines their in vitro pharmacological effects. Molecular mechanics by Monte Carlo simulations of the aggregation process support the experimental data, suggesting that both 1 and 2 can readily aggregate into premicelles, but only 1 spontaneously assembles into larger aggregates. These findings have important implications for biological studies with this class of compounds.

Relevance: 90.00%

Abstract:

The physics of the operation of single-electron tunneling devices (SEDs) and single-electron tunneling transistors (SETs), especially those with multiple nanometer-sized islands, has remained poorly understood despite intensive experimental and theoretical research. This computational study examines the current-voltage (IV) characteristics of multi-island single-electron devices using a newly developed multi-island transport simulator (MITS) based on semi-classical tunneling theory and kinetic Monte Carlo simulation. The dependence of device characteristics on physical device parameters is explored, and the physical mechanisms that lead to the Coulomb blockade (CB) and Coulomb staircase (CS) characteristics are proposed. Simulations using MITS demonstrate that the overall IV characteristics of a device with a random distribution of islands result from a complex interplay between the factors that affect the tunneling rates and are fixed a priori (e.g. island sizes, island separations, temperature, gate bias) and the evolving charge state of the system, which changes as the source-drain bias (VSD) is changed. With increasing VSD, a multi-island device has to overcome multiple discrete energy barriers (up-steps) before it reaches the threshold voltage (Vth). Beyond Vth, current flow is rate-limited by slow junctions, which leads to the CS structures in the IV characteristic. Each step in the CS is characterized by a unique distribution of island charges with an associated distribution of tunneling probabilities. MITS simulation studies of one-dimensional (1D) disordered chains show that longer chains are better suited for switching applications, as Vth increases with chain length. They also retain CS structures at higher temperatures better than shorter chains.
In sufficiently disordered 2D systems, we demonstrate that there may exist a dominant conducting path (DCP) for conduction, which makes the 2D device behave as a quasi-1D device. The existence of a DCP is sensitive to the device structure, but is robust with respect to changes in temperature, gate bias, and VSD. A side gate in 1D and 2D systems can effectively control Vth. We argue that devices with smaller island sizes and narrower junctions may be better suited for practical applications, especially at room temperature.
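The kinetic Monte Carlo engine underlying a simulator like MITS can be sketched in miniature for a single island between two junctions; the slower junction rate-limits the current, which is the mechanism behind the staircase steps described above. The rates below are arbitrary, and the orthodox-theory energetics that would make them state- and bias-dependent are omitted.

```python
import numpy as np

rng = np.random.default_rng(5)

def kmc_current(rate_in, rate_out, n_events=200_000):
    """Minimal kinetic Monte Carlo for sequential tunneling through one
    island: an electron hops on with rate_in (blocked while occupied)
    and off with rate_out. Returns average current in electrons/second."""
    t, n, transferred = 0.0, 0, 0
    for _ in range(n_events):
        rates = [rate_in if n == 0 else 0.0, rate_out if n == 1 else 0.0]
        total = sum(rates)
        t += rng.exponential(1.0 / total)        # exponential waiting time
        if rng.random() * total < rates[0]:      # pick event proportional to rate
            n = 1                                # electron tunnels onto island
        else:
            n = 0                                # electron tunnels off to drain
            transferred += 1
    return transferred / t

I = kmc_current(1e9, 2e9)    # slow entry junction limits the current
```

The expected current is the series combination rate_in*rate_out/(rate_in + rate_out); in a multi-island device the rate table is recomputed after every event from the updated charge state.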

Relevance: 90.00%

Abstract:

Transparent and translucent objects involve both light reflection and transmission at surfaces. This paper presents a physically based transmission model for rough surfaces. The surface is assumed to be locally smooth, and statistical techniques are applied to calculate light transmission through a local illumination area. We have obtained an analytical expression for single scattering. The analytical model has been compared to our Monte Carlo simulations as well as to previous simulations, and good agreement has been achieved. The presented model has potential applications in the realistic rendering of transparent and translucent objects.

Relevance: 90.00%

Abstract:

During the second half of the 20th century, untreated sewage released from housing and industry into natural waters led to the degradation of many freshwater lakes and reservoirs worldwide. To mitigate eutrophication, wastewater treatment plants, including Fe-induced phosphorus precipitation, were implemented throughout the industrialized world, leading to reoligotrophication in many freshwater lakes. To understand and assess the effects of reoligotrophication on primary productivity, we analyzed 28 years of 14C assimilation rates, as well as other biotic and abiotic parameters, such as global radiation, nutrient concentrations, and plankton densities in peri-alpine Lake Lucerne, Switzerland. Using a simple productivity-light relationship, we estimated continuous primary production and discuss the relation between productivity and the observed limnological parameters. Furthermore, we assessed the uncertainty of our modeling approach, which is based on monthly 14C assimilation measurements, using Monte Carlo simulations. The results confirm that monthly sampling of productivity is sufficient for identifying long-term trends and that conservation management has successfully improved water quality during the past three decades by reducing nutrient levels and primary production in the lake. However, even though nutrient concentrations have remained constant in recent years, annual primary production still varies significantly from year to year. Moreover, although nutrient concentrations have decreased by more than an order of magnitude, primary production has decreased only slightly. These results suggest that primary production correlates well with nutrient availability, but meteorological conditions cause interannual variability regardless of the trophic status of the lake. Accordingly, in oligotrophic freshwaters, meteorological forcing may reduce productivity, affecting the entire food chain of the ecosystem.
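A productivity-light relationship with Monte Carlo uncertainty propagation, of the kind used here for the 14C-based estimates, can be sketched as follows. The saturating functional form and every parameter value below are illustrative stand-ins, not the study's fitted values.

```python
import numpy as np

rng = np.random.default_rng(6)

# Saturating productivity-light relationship  P = Pmax * I / (I + Ik),
# with parameter uncertainty propagated by Monte Carlo.
days = np.arange(365)
I = 150 + 120 * np.sin(2 * np.pi * (days - 80) / 365)   # daily irradiance, W/m^2

n_mc = 5000
Pmax = rng.normal(1.0, 0.15, n_mc)     # gC m^-2 d^-1, mimicking monthly-fit spread
Ik = rng.normal(100.0, 20.0, n_mc)     # half-saturation irradiance, W/m^2

annual = (Pmax[:, None] * I[None, :] / (I[None, :] + Ik[:, None])).sum(axis=1)
lo, med, hi = np.percentile(annual, [2.5, 50, 97.5])
```

The width of the (lo, hi) interval is the uncertainty attributable to fitting the parameters from monthly samples; the study's conclusion is that this interval is still narrow enough to resolve the long-term trend.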

Relevance: 90.00%

Abstract:

Localized Magnetic Resonance Spectroscopy (MRS) is in widespread use for clinical brain research. Standard acquisition sequences for one-dimensional spectra suffer from substantial overlap of the spectral contributions of many metabolites. Therefore, specially tuned editing sequences or two-dimensional acquisition schemes are applied to extend the information content. Tuning specific acquisition parameters allows the sequences to be made more efficient or more specific for certain target metabolites. Cramér-Rao bounds have been used in other fields for the optimization of experiments and are shown here to be very useful as design criteria for localized MRS sequence optimization. The principle is illustrated for one- and two-dimensional MRS, in particular the 2D separation experiment, where the usual restriction to equidistant echo time spacings and equal acquisition times per echo time can be abolished. Particular emphasis is placed on optimizing experiments for the quantification of GABA and glutamate. The basic principles are verified by Monte Carlo simulations and in vivo by repeated acquisitions of generalized two-dimensional separation brain spectra obtained from healthy subjects, expanded by bootstrapping for better definition of the quantification uncertainties.
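The Cramér-Rao machinery reduces to inverting a Fisher information matrix built from model derivatives. Below is a minimal sketch for a single decaying resonance in white Gaussian noise; the model, sampling grid, and noise level are illustrative stand-ins for a real MRS signal model.

```python
import numpy as np

def crlb(model, theta, t, sigma, eps=1e-6):
    """Cramér-Rao lower bounds (std-dev) for unbiased estimates of the
    parameters of a real-valued signal model in white Gaussian noise."""
    theta = np.asarray(theta, float)
    J = np.empty((len(t), len(theta)))
    for i in range(len(theta)):          # numerical Jacobian, central differences
        d = np.zeros_like(theta)
        d[i] = eps
        J[:, i] = (model(theta + d, t) - model(theta - d, t)) / (2 * eps)
    fisher = J.T @ J / sigma ** 2
    return np.sqrt(np.diag(np.linalg.inv(fisher)))

# Toy MRS-like model: one decaying resonance, parameters (amplitude, T2).
def fid(theta, t):
    amp, T2 = theta
    return amp * np.exp(-t / T2) * np.cos(2 * np.pi * 50.0 * t)

t = np.linspace(0.0, 0.5, 512)
bounds = crlb(fid, [1.0, 0.08], t, sigma=0.05)
```

Evaluating such bounds across candidate echo-time spacings or per-echo acquisition times is what turns them into a sequence-design criterion.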

Relevance: 90.00%

Abstract:

The Genesis mission Solar Wind Concentrator was built to enhance fluences of solar wind by an average of 20x over the 2.3 years that the mission exposed substrates to the solar wind. The Concentrator targets survived the hard landing upon return to Earth and were used to determine the isotopic composition of solar-wind (and hence solar) oxygen and nitrogen. Here we report on the flight operation of the instrument and on simulations of its performance. Concentration and fractionation patterns obtained from simulations are given for He, Li, N, O, Ne, Mg, Si, S, and Ar in SiC targets, and are compared with measured concentrations and isotope ratios for the noble gases. Carbon is also modeled for a Si target. Predicted differences in instrumental fractionation between elements are discussed. Additionally, as the Concentrator was designed only for ions ≤22 AMU, implications of analyzing elements as heavy as argon are discussed. Post-flight simulations of instrumental fractionation as a function of radial position on the targets incorporate solar-wind velocity and angular distributions measured in flight, and predict fractionation patterns for various elements and isotopes of interest. A tighter angular distribution, mostly due to better spacecraft spin stability than assumed in pre-flight modeling, results in a steeper isotopic fractionation gradient between the center and the perimeter of the targets. Using the distribution of solar-wind velocities encountered during flight, which are higher than those used in pre-flight modeling, results in elemental abundance patterns slightly less peaked at the center. Mean fractionations trend with atomic mass, with differences relative to the measured isotopes of neon of +4.1±0.9 ‰/amu for Li, between -0.4 and +2.8 ‰/amu for C, +1.9±0.7 ‰/amu for N, +1.3±0.4 ‰/amu for O, -7.5±0.4 ‰/amu for Mg, -8.9±0.6 ‰/amu for Si, and -22.0±0.7 ‰/amu for S (uncertainties reflect Monte Carlo statistics).
The slopes of the fractionation trends depend to first order only on the relative differential mass ratio, Δm/m. This article and a companion paper (Reisenfeld et al. 2012, this issue) provide post-flight information necessary for the analysis of the Genesis solar wind samples, and thus serve to complement the Space Science Review volume, The Genesis Mission (v. 105, 2003).

Relevance: 90.00%

Abstract:

A high-resolution α, x-ray, and γ-ray coincidence spectroscopy experiment was conducted at the GSI Helmholtzzentrum für Schwerionenforschung. Thirty correlated α-decay chains were detected following the fusion-evaporation reaction 48Ca + 243Am. The observations are consistent with previous assignments of similar decay chains to element Z=115. For the first time, precise spectroscopy allows the derivation of excitation schemes of isotopes along the decay chains starting with elements Z>112. Comprehensive Monte Carlo simulations accompany the data analysis, and nuclear structure models provide a first-level interpretation.

Relevance: 90.00%

Abstract:

Polar molecular crystals seem to contradict a quantum mechanical statement according to which no stationary state of a system features a permanent electrical polarization. By stationary we understand here an ensemble for which thermal averaging applies. In the language of statistical mechanics, we thus have to ask for the thermal expectation value of the polarization in molecular crystals. Nucleation aggregates and growing crystal surfaces can provide the single degree of freedom for polar molecules required to average the polarization. By means of group theoretical reasoning and Monte Carlo simulations, we show that such systems thermalize into a bi-polar state featuring zero bulk polarity. A two-domain, i.e. bi-polar, state is obtained because the boundaries set up opposing effective electrical fields. The described phenomena can be understood as a process of partial ergodicity restoration. Experimentally, a bi-polar state of molecular crystals was demonstrated using phase-sensitive second harmonic generation and scanning pyroelectric microscopy.