971 results for Monte-carlo Simulations
Abstract:
In recent years, spatial variability modeling of soil parameters using random field theory has gained distinct importance in geotechnical analysis. In the present study, the commercially available finite difference numerical code FLAC 5.0 is used to model the permeability parameter as a spatially correlated, log-normally distributed random variable, and its influence on steady state seepage flow and on slope stability analysis is studied. Considering the case of a 5.0 m high cohesive-frictional soil slope of 30 degrees, a range of coefficients of variation (CoV%) from 60 to 90% in the permeability values, and different values of correlation distance in the range of 0.5-15 m, parametric studies, using Monte Carlo simulations, are performed to examine the following three aspects: (i) the effect of stochastic soil permeability on the statistics of seepage flow in comparison to the analytic (Dupuit's) solution available for a uniformly constant permeability; (ii) the strain and deformation pattern; and (iii) the stability of the given slope assessed in terms of factor of safety (FS). The results obtained in this study are useful for understanding the role of permeability variations in slope stability analysis under different slope conditions and material properties. (C) 2009 Elsevier B.V. All rights reserved.
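The random-field setup described above can be sketched outside FLAC. The following is a minimal illustration, not the paper's 2-D finite difference model: it draws realizations of a log-normally distributed permeability field on a 1-D grid with an exponential autocorrelation (grid size, spacing, and mean permeability are assumptions for the example), and checks that the pooled samples recover the target CoV.

```python
import numpy as np

# Sketch: sample a spatially correlated log-normal permeability field.
def lognormal_field(n, dx, mean_k, cov_pct, corr_len, rng):
    # Underlying normal parameters from the target log-normal mean and CoV.
    cv = cov_pct / 100.0
    sigma_ln = np.sqrt(np.log(1.0 + cv**2))
    mu_ln = np.log(mean_k) - 0.5 * sigma_ln**2
    # Exponential correlation matrix over a 1-D grid of n points.
    x = np.arange(n) * dx
    rho = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    L = np.linalg.cholesky(rho + 1e-10 * np.eye(n))  # jitter for stability
    z = L @ rng.standard_normal(n)
    return np.exp(mu_ln + sigma_ln * z)

rng = np.random.default_rng(0)
samples = np.array([lognormal_field(50, 0.5, 1e-6, 80, 5.0, rng)
                    for _ in range(2000)])
print(samples.std() / samples.mean())  # close to the target CoV of 0.8
```

Each realization would then feed one seepage/stability analysis in a Monte Carlo loop.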
Abstract:
The aim of this paper is to provide a Bayesian formulation of the so-called magnitude-based inference approach to quantifying and interpreting effects, and, in a case study example, to provide accurate probabilistic statements that correspond to the intended magnitude-based inferences. The model is described in the context of a published small-scale athlete study which employed a magnitude-based inference approach to compare the effect of two altitude training regimens (live high-train low (LHTL) and intermittent hypoxic exposure (IHE)) on running performance and blood measurements of elite triathletes. The posterior distributions, and corresponding point and interval estimates, for the parameters and associated effects and comparisons of interest, were estimated using Markov chain Monte Carlo simulations. The Bayesian analysis was shown to provide more direct probabilistic comparisons of treatments and to be able to identify small effects of interest. The approach avoided asymptotic assumptions and overcame issues such as multiple testing. Bayesian analysis of unscaled effects showed a probability of 0.96 that LHTL yields a substantially greater increase in hemoglobin mass than IHE, a 0.93 probability of a substantially greater improvement in running economy, and a greater than 0.96 probability that both IHE and LHTL yield a substantially greater improvement in maximum blood lactate concentration compared to a placebo. The conclusions are consistent with those obtained using the 'magnitude-based inference' approach that has been promoted in the field. The paper demonstrates that a fully Bayesian analysis is a simple and effective way of analysing small effects, providing a rich set of results that are straightforward to interpret in terms of probabilistic statements.
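Probabilistic statements of the kind reported above (e.g. "a probability of 0.96 of a substantially greater increase") reduce to tail fractions of posterior draws. A toy sketch, with a hypothetical normal posterior standing in for the study's MCMC output and an assumed "smallest meaningful effect" threshold:

```python
import numpy as np

# Hypothetical posterior draws of a treatment difference (e.g. LHTL minus
# IHE); the study used full MCMC, here a normal stands in for illustration.
rng = np.random.default_rng(1)
delta_draws = rng.normal(loc=3.0, scale=1.5, size=100_000)

threshold = 0.5  # assumed smallest practically meaningful effect
# Probability of a "substantial" effect = fraction of draws beyond threshold.
p_substantial = np.mean(delta_draws > threshold)
print(round(p_substantial, 2))
```

With real MCMC output, `delta_draws` would simply be the stored chain for the effect of interest.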
Abstract:
In this paper, new results and insights are derived for the performance of multiple-input, single-output systems with beamforming at the transmitter, when the channel state information is quantized and sent to the transmitter over a noisy feedback channel. It is assumed that there exists a per-antenna power constraint at the transmitter; hence, the equal gain transmission (EGT) beamforming vector is quantized and sent from the receiver to the transmitter. The loss in received signal-to-noise ratio (SNR) relative to perfect beamforming is analytically characterized, and it is shown that at high rates, the overall distortion can be expressed as the sum of the quantization-induced distortion and the channel error-induced distortion, and that the asymptotic performance depends on the error-rate behavior of the noisy feedback channel as the number of codepoints gets large. The optimum density of codepoints (also known as the point density) that minimizes the overall distortion subject to a boundedness constraint is shown to be the same as the point density for a noiseless feedback channel, i.e., the uniform density. The binary symmetric channel with random index assignment is a special case of the analysis, and it is shown that as the number of quantized bits gets large the distortion approaches that obtained with random beamforming. The accuracy of the theoretical expressions obtained is verified through Monte Carlo simulations.
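The SNR-loss comparison lends itself to a quick Monte Carlo check. The sketch below is an illustration, not the paper's analysis: antenna count, bit budget, and a noiseless feedback link are assumptions. Each EGT phase is quantized uniformly, and the retained fraction of the perfect-EGT SNR is averaged over Rayleigh fading.

```python
import numpy as np

# Monte Carlo estimate of the SNR retained by phase-quantized EGT.
def egt_snr_loss(n_tx=4, bits=3, trials=50_000, seed=0):
    rng = np.random.default_rng(seed)
    # i.i.d. Rayleigh channel vectors, one row per trial.
    h = (rng.standard_normal((trials, n_tx)) +
         1j * rng.standard_normal((trials, n_tx))) / np.sqrt(2)
    step = 2 * np.pi / 2**bits
    phase_q = np.round(np.angle(h) / step) * step   # uniform phase quantizer
    w = np.exp(1j * phase_q) / np.sqrt(n_tx)        # quantized EGT vector
    snr_q = np.abs(np.sum(np.conj(h) * w, axis=1))**2
    snr_perfect = np.sum(np.abs(h), axis=1)**2 / n_tx  # unquantized EGT
    return np.mean(snr_q / snr_perfect)

print(egt_snr_loss())  # fraction of the perfect-EGT SNR retained
```

Increasing `bits` drives the ratio toward 1, mirroring the quantization-induced distortion term vanishing at high rates.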
Abstract:
We describe a noniterative method for recovering optical absorption coefficient distribution from the absorbed energy map reconstructed using simulated and noisy boundary pressure measurements. The source reconstruction problem is first solved for the absorbed energy map corresponding to single- and multiple-source illuminations from the side of the imaging plane. It is shown that the absorbed energy map and the absorption coefficient distribution, recovered from the single-source illumination with a large variation in photon flux distribution, have signal-to-noise ratios comparable to those of the reconstructed parameters from a more uniform photon density distribution corresponding to multiple-source illuminations. The absorbed energy map is input as absorption coefficient times photon flux in the time-independent diffusion equation (DE) governing photon transport to recover the photon flux in a single step. The recovered photon flux is used to compute the optical absorption coefficient distribution from the absorbed energy map. In the absence of experimental data, we obtain the boundary measurements through Monte Carlo simulations, and we attempt to address the possible limitations of the DE model in the overall reconstruction procedure.
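The noniterative step above rests on the relation H = mu_a * Phi (absorbed energy equals absorption coefficient times photon flux), so once Phi is recovered, mu_a follows by pointwise division. A synthetic 1-D illustration, where the exponential flux profile is a stand-in for an actual diffusion-equation solution:

```python
import numpy as np

mu_a_true = np.linspace(0.01, 0.05, 64)   # hypothetical absorption map
phi = np.exp(-np.cumsum(mu_a_true))       # stand-in for the DE-recovered flux
H = mu_a_true * phi                       # absorbed energy map
mu_a_rec = H / phi                        # one-step, noniterative recovery
print(np.allclose(mu_a_rec, mu_a_true))   # True
```

In the actual procedure, Phi comes from solving the time-independent diffusion equation with H as input, and H itself from the photoacoustic source reconstruction.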
Abstract:
In this paper, the effects of energy quantization on different single-electron transistor (SET) circuits (logic inverter, current-biased circuits, and hybrid MOS-SET circuits) are analyzed through analytical modeling and Monte Carlo simulations. It is shown that energy quantization mainly increases the Coulomb blockade area and Coulomb blockade oscillation periodicity, and thus affects the SET circuit performance. A new model for the noise margin of the SET inverter is proposed, which includes the energy quantization effects. Using the noise margin as a metric, the robustness of the SET inverter is studied against the effects of energy quantization. An analytical expression is developed, which explicitly defines the maximum energy quantization (termed the "quantization threshold") that an SET inverter can withstand before its noise margin falls below a specified tolerance level. The effects of energy quantization are further studied for the current-biased negative differential resistance (NDR) circuit and the hybrid SET-MOS circuit. A new model for the conductance of the NDR characteristics is also formulated that explains the energy quantization effects.
Abstract:
Nucleation is the first step of a first order phase transition, in which a new phase emerges. The two main categories of nucleation are homogeneous nucleation, where the new phase forms within a uniform substance, and heterogeneous nucleation, where nucleation occurs on a pre-existing surface. In this thesis the main attention is paid to heterogeneous nucleation. The thesis treats nucleation phenomena from two theoretical perspectives: the classical nucleation theory and the statistical mechanical approach. The formulation of the classical nucleation theory relies on equilibrium thermodynamics and uses macroscopically determined quantities to describe the properties of small nuclei, sometimes consisting of just a few molecules. The statistical mechanical approach is based on interactions between single molecules and does not rest on the same assumptions as the classical theory. This work gathers the present theoretical knowledge of heterogeneous nucleation and utilizes it in computational model studies. A new exact molecular approach to heterogeneous nucleation was introduced and tested by Monte Carlo simulations. The results obtained from the molecular simulations were interpreted by means of the concepts of the classical nucleation theory. Numerical calculations were carried out for a variety of substances nucleating on different substrates. The classical theory of heterogeneous nucleation was employed in calculations of one-component nucleation of water on newsprint paper, Teflon and cellulose film, and binary nucleation of water-n-propanol and water-sulphuric acid mixtures on silver nanoparticles. The results were compared with experimental results. The molecular simulation studies involved homogeneous nucleation of argon and heterogeneous nucleation of argon on a planar platinum surface.
It was found that the use of a microscopic contact angle as a fitting parameter in calculations based on the classical theory of heterogeneous nucleation leads to a fair agreement between the theoretical predictions and experimental results. In the presented cases the microscopic angle was found to be always smaller than the contact angle obtained from macroscopic measurements. Furthermore, molecular Monte Carlo simulations revealed that the concept of the geometrical contact parameter in heterogeneous nucleation calculations can work surprisingly well even for very small clusters.
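The role of the contact angle in the classical theory can be made concrete with the standard geometric factor for nucleation on a flat substrate: the homogeneous energy barrier is reduced by f(m) = (2 + m)(1 - m)^2 / 4, with m = cos(theta) the contact parameter. A smaller (microscopic) contact angle thus predicts a lower barrier, consistent with the fitting described above.

```python
import math

# Classical-theory barrier reduction factor for a cap on a flat substrate.
def barrier_reduction(theta_deg):
    m = math.cos(math.radians(theta_deg))       # contact parameter m = cos(theta)
    return (2 + m) * (1 - m) ** 2 / 4           # f(m) = DeltaG_het / DeltaG_hom

for theta in (0, 60, 90, 180):
    print(theta, round(barrier_reduction(theta), 4))
```

Complete wetting (theta = 0) removes the barrier entirely, theta = 90 degrees halves it, and theta = 180 degrees recovers the homogeneous limit.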
Abstract:
Aims: To develop and validate tools to estimate residual noise covariance in Planck frequency maps, quantify signal error effects, and compare different techniques to produce low-resolution maps. Methods: We derive analytical estimates of the covariance of the residual noise contained in low-resolution maps produced using a number of map-making approaches. We test these analytical predictions using Monte Carlo simulations and assess their impact on angular power spectrum estimation. We use simulations to quantify the level of signal errors incurred in the different resolution downgrading schemes considered in this work. Results: We find an excellent agreement between the optimal residual noise covariance matrices and Monte Carlo noise maps. For destriping map-makers, the extent of agreement is dictated by the knee frequency of the correlated noise component and the chosen baseline offset length. Signal striping is shown to be insignificant when properly dealt with. In map resolution downgrading, we find that a carefully selected window function is required to reduce aliasing to the sub-percent level at multipoles ell > 2Nside, where Nside is the HEALPix resolution parameter. We show that sufficient characterization of the residual noise is unavoidable if one is to draw reliable constraints on large scale anisotropy. Conclusions: We have described how to compute the low-resolution maps, with a controlled sky signal level, and a reliable estimate of the covariance of the residual noise. We have also presented a method to smooth the residual noise covariance matrices to describe the noise correlations in smoothed, bandwidth limited maps.
Abstract:
A laminated composite plate model based on first order shear deformation theory is implemented using the finite element method. Matrix cracks are introduced into the finite element model by considering changes in the A, B and D matrices of composites. The effects of different boundary conditions, laminate types and ply angles on the behavior of composite plates with matrix cracks are studied. Finally, the effect of material property uncertainty, which is important for composite materials, on the composite plate is investigated using Monte Carlo simulations. Probabilistic estimates of damage detection reliability in composite plates are made for static and dynamic measurements. It is found that the effect of uncertainty must be considered for accurate damage detection in composite structures. The estimates of variance obtained for observable system properties due to uncertainty can be used for developing more robust damage detection algorithms. (C) 2010 Elsevier Ltd. All rights reserved.
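The Monte Carlo uncertainty-propagation step above can be illustrated with a much simpler response than the finite element plate model: the sketch below (load, geometry, and the 5% scatter in Young's modulus are assumptions for the example) propagates material scatter through a cantilever tip deflection w = P L^3 / (3 E I) and reports the resulting response variability, the kind of variance estimate a detection threshold would be set against.

```python
import numpy as np

rng = np.random.default_rng(4)
P, L, I = 100.0, 1.0, 1e-6                  # hypothetical load and geometry
E = rng.normal(70e9, 0.05 * 70e9, 50_000)   # 5% CoV in Young's modulus

w = P * L**3 / (3.0 * E * I)                # Monte Carlo sample of deflections
print(w.mean(), w.std() / w.mean())         # response CoV, roughly 5% here
```

For the plate problem, each sample would instead re-run the finite element model with perturbed ply properties.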
Abstract:
In the present study, results of reliability analyses of four selected rehabilitated earth dam sections, i.e., Chang, Tapar, Rudramata, and Kaswati, under pseudostatic loading conditions, are presented. Using the response surface methodology, in combination with the first order reliability method and numerical analysis, the reliability index (beta) values are obtained and the results are interpreted in conjunction with conventional factor of safety values. The influence of considering variability in the input soil shear strength parameters, horizontal seismic coefficient (alpha(h)), and location of reservoir full level on the stability assessment of the earth dam sections is discussed in the probabilistic framework. A comparison of results with those obtained from another method of reliability analysis, viz., Monte Carlo simulations combined with the limit equilibrium approach, provided a basis for discussing the stability of earth dams in probabilistic terms, and the results of the analysis suggest that the considered earth dam sections are reliable and are expected to perform satisfactorily.
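The reliability index can be illustrated at textbook level, separate from the response-surface analysis itself: for a linear limit state g = R - S with independent normal resistance R and load S, beta = (mu_R - mu_S) / sqrt(sigma_R^2 + sigma_S^2) and the failure probability is P_f = Phi(-beta). The numbers below are illustrative, not values from the dam sections.

```python
from math import erf, sqrt

# First-order reliability index for a linear limit state g = R - S.
def reliability_index(mu_r, sig_r, mu_s, sig_s):
    return (mu_r - mu_s) / sqrt(sig_r**2 + sig_s**2)

def normal_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

beta = reliability_index(mu_r=1.5, sig_r=0.2, mu_s=1.0, sig_s=0.15)
print(round(beta, 2), normal_cdf(-beta))   # beta = 2.0 for these inputs
```

Monte Carlo with limit equilibrium, the comparison method mentioned above, estimates the same P_f by counting sampled realizations with g < 0.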
Abstract:
In this study, we derive a fast, novel time-domain algorithm to compute the nth-order moment of the power spectral density of the photoelectric current as measured in laser-Doppler flowmetry (LDF). It is well established in the LDF literature that these moments are closely related to fundamental physiological parameters, i.e. concentration of moving erythrocytes and blood flow. In particular, we take advantage of the link between moments in the Fourier domain and fractional derivatives in the temporal domain. Using Parseval's theorem, we establish an exact analytical equivalence between the time-domain expression and the conventional frequency-domain counterpart. Moreover, we demonstrate the appropriateness of estimating the zeroth-, first- and second-order moments using Monte Carlo simulations. Finally, we briefly discuss the feasibility of implementing the proposed algorithm in hardware.
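The Parseval link used above is easy to verify numerically for the zeroth-order moment: the sum of the discrete PSD equals the signal energy computed directly in the time domain. (Higher-order moments involve the fractional-derivative machinery of the paper; a white-noise signal stands in for the photocurrent here.)

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(4096)          # stand-in for the photoelectric current
X = np.fft.fft(x)
periodogram = np.abs(X)**2 / len(x)    # discrete PSD estimate

m0_freq = periodogram.sum()            # zeroth moment, frequency domain
m0_time = np.sum(x**2)                 # same quantity, time domain (Parseval)
print(np.isclose(m0_freq, m0_time))    # True
```

The nth-order moment would weight `periodogram` by frequency to the nth power before summing, which is where the fractional time-domain derivative enters.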
Abstract:
In this article, the problem of two Unmanned Aerial Vehicles (UAVs) cooperatively searching an unknown region is addressed. The search region is discretized into hexagonal cells and each cell is assumed to possess an uncertainty value. The UAVs have to cooperatively search these cells taking limited endurance, sensor and communication range constraints into account. Due to limited endurance, the UAVs need to return to the base station for refuelling and also need to select a base station when multiple base stations are present. This article proposes a route planning algorithm that takes endurance time constraints into account and uses game theoretical strategies to reduce the uncertainty. The route planning algorithm selects only those cells that ensure the agent will return to any one of the available bases. A set of paths is formed using these cells, from which the game theoretical strategies select a path that yields maximum uncertainty reduction. We explore non-cooperative Nash, cooperative and security strategies from game theory to enhance the search effectiveness. Monte-Carlo simulations are carried out which show the superiority of the game theoretical strategies over the greedy strategy for different look-ahead step length paths. Within the game theoretical strategies, non-cooperative Nash and cooperative strategies perform similarly in an ideal case, but the Nash strategy performs better than the cooperative strategy when the perceived information is different. We also propose a heuristic based on partitioning of the search space into sectors to reduce computational overhead without performance degradation.
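The greedy baseline that the game-theoretic strategies are compared against can be sketched in a few lines. This is an illustration only: a square grid stands in for the article's hexagonal cells, and endurance and base-return constraints are omitted. At each step the agent moves to the neighboring cell with the highest uncertainty and clears it on visit.

```python
import random

# Greedy uncertainty-reduction search over a cell map.
def greedy_search(uncertainty, start, steps, neighbors):
    pos, total_reduced = start, 0.0
    for _ in range(steps):
        # Move to the most uncertain neighboring cell.
        pos = max(neighbors(pos), key=lambda c: uncertainty.get(c, 0.0))
        total_reduced += uncertainty.get(pos, 0.0)
        uncertainty[pos] = 0.0        # cell searched: its uncertainty is cleared
    return total_reduced

# Hypothetical square-grid stand-in for the hexagonal cells.
rng = random.Random(5)
grid = {(i, j): rng.random() for i in range(6) for j in range(6)}

def nbrs(cell):
    i, j = cell
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    return [(i + di, j + dj) for di, dj in moves if (i + di, j + dj) in grid]

reduced = greedy_search(grid, (0, 0), 20, nbrs)
print(round(reduced, 3))
```

The game-theoretic planners differ in replacing the one-step `max` with a payoff computed jointly over both UAVs' candidate look-ahead paths.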
Abstract:
The likelihood ratio test of cointegration rank is the most widely used test for cointegration. Many studies have shown that its finite sample distribution is not well approximated by the limiting distribution. The article introduces, and evaluates by Monte Carlo simulation experiments, bootstrap and fast double bootstrap (FDB) algorithms for the likelihood ratio test. It finds that the performance of the bootstrap test is very good. The more sophisticated FDB produces a further improvement in cases where the performance of the asymptotic test is very unsatisfactory and the ordinary bootstrap does not work as well as it might. Furthermore, the Monte Carlo simulations provide a number of guidelines on when the bootstrap and FDB tests can be expected to work well. Finally, the tests are applied to US interest rate and international stock price series. It is found that the asymptotic test tends to overestimate the cointegration rank, while the bootstrap and FDB tests choose the correct cointegration rank.
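The mechanics of a bootstrap test can be shown with a much simpler statistic than the likelihood ratio for cointegration rank; the sketch below uses a mean test purely for illustration. The null distribution of the statistic is approximated by resampling data recentered to satisfy the null, and the p-value is the fraction of bootstrap statistics at least as extreme as the observed one.

```python
import numpy as np

# Generic bootstrap p-value for H0: mean = 0, using |sample mean| as statistic.
def bootstrap_pvalue(sample, n_boot=5000, seed=0):
    rng = np.random.default_rng(seed)
    stat_obs = abs(sample.mean())
    centered = sample - sample.mean()      # impose the null (mean zero)
    stats = np.array([
        abs(rng.choice(centered, size=len(sample), replace=True).mean())
        for _ in range(n_boot)
    ])
    return np.mean(stats >= stat_obs)      # fraction at least as extreme

rng = np.random.default_rng(3)
null_data = rng.standard_normal(100)       # data consistent with the null
print(bootstrap_pvalue(null_data))
```

The FDB refines this by bootstrapping the bootstrap p-value itself, at the cost of a second resampling layer.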
Abstract:
Purpose: To assess the effect of ultrasound modulation of near infrared (NIR) light on the quantification of the scattering coefficient in tissue-mimicking biological phantoms. Methods: A unique method to estimate the phase of the modulated NIR light, making use of only time averaged intensity measurements with a charge coupled device camera, is used in this investigation. These experimental measurements from tissue-mimicking biological phantoms are used to estimate the differential pathlength, in turn leading to estimation of the optical scattering coefficient. A Monte-Carlo model-based numerical estimation of phase under ultrasound modulation is performed to verify the experimental results. Results: The results indicate that the ultrasound modulation of NIR light enhances the effective scattering coefficient. The observed effective scattering coefficient enhancement in tissue-mimicking viscoelastic phantoms increases with increasing ultrasound drive voltage. The same trend is noticed as the ultrasound modulation frequency approaches the natural vibration frequency of the phantom material. The contrast enhancement is less for the stiffer (larger storage modulus) tissue, mimicking a tumor necrotic core, compared to normal tissue. Conclusions: The ultrasound modulation of the insonified region leads to an increase in the effective number of scattering events experienced by NIR light, increasing the measured phase and causing the enhancement in the effective scattering coefficient. The ultrasound modulation of NIR light could provide better estimation of the scattering coefficient. The observed local enhancement of the effective scattering coefficient, in the ultrasound focal region, is validated using both experimental measurements and Monte-Carlo simulations. (C) 2010 American Association of Physicists in Medicine. [DOI: 10.1118/1.3456441]
Abstract:
We study the equilibrium properties of the nearest-neighbor Ising antiferromagnet on a triangular lattice in the presence of a staggered field conjugate to one of the degenerate ground states. Using a mapping of the ground states of the model without the staggered field to dimer coverings on the dual lattice, we classify the ground states into sectors specified by the number of "strings." We show that the effect of the staggered field is to generate long-range interactions between strings. In the limiting case of the antiferromagnetic coupling constant J becoming infinitely large, we prove the existence of a phase transition in this system and obtain a finite lower bound for the transition temperature. For finite J, we study the equilibrium properties of the system using Monte Carlo simulations with three different dynamics. We find that in all three cases, equilibration times for low-field values increase rapidly with system size at low temperatures. Due to this difficulty in equilibrating sufficiently large systems at low temperatures, our finite-size scaling analysis of the numerical results does not permit a definite conclusion about the existence of a phase transition for finite values of J. A surprising feature of the system is the fact that, unlike usual glassy systems, a zero-temperature quench almost always leads to the ground state, while a slow cooling does not.
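A minimal Metropolis sketch of the model class studied above: a nearest-neighbor Ising antiferromagnet on a triangular lattice, represented here as a square grid with one added diagonal bond (giving each site six neighbors), with single-spin-flip dynamics at zero staggered field. This is one possible choice of dynamics; the study compares three, and the lattice size, temperature and sweep count below are assumptions for the example.

```python
import numpy as np

# Single-spin-flip Metropolis for the triangular-lattice Ising AFM,
# with energy E = +J * sum over nearest-neighbor bonds of s_i * s_j.
def metropolis(L=12, J=1.0, T=2.0, sweeps=200, seed=0):
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=(L, L))
    nbrs = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (-1, -1)]  # 6 neighbors
    for _ in range(sweeps * L * L):
        i, j = rng.integers(L, size=2)
        h = sum(s[(i + di) % L, (j + dj) % L] for di, dj in nbrs)
        dE = -2.0 * J * s[i, j] * h          # energy change on flipping s[i,j]
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i, j] = -s[i, j]
    return s

def energy_per_site(s, J=1.0):
    e = 0.0
    for di, dj in [(1, 0), (0, 1), (1, 1)]:  # each bond counted once
        e += J * np.sum(s * np.roll(np.roll(s, -di, 0), -dj, 1))
    return e / s.size

spins = metropolis()
print(energy_per_site(spins))  # negative once short-range AF order develops
```

The staggered field of the paper would add a sublattice-dependent term to `dE`; the frustration of the triangular AFM is what bounds the energy per site below by -J rather than -3J.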
Abstract:
The temperature dependence of the critical micelle concentration (CMC) and a closed-loop coexistence curve are obtained, via Monte Carlo simulations, in the water-surfactant limit of a two-dimensional version of a statistical mechanical model for microemulsions. The CMC and the coexistence curve reproduce various experimental trends as functions of the couplings. In the oil-surfactant limit, there is a conventional coexistence curve with an upper consolute point that allows for a region of three-phase coexistence between oil-rich, water-rich and microemulsion phases.