135 results for Simulations de Monte-Carlo


Relevance: 90.00%

Publisher:

Abstract:

In this paper, new results and insights are derived for the performance of multiple-input, single-output systems with beamforming at the transmitter, when the channel state information is quantized and sent to the transmitter over a noisy feedback channel. It is assumed that there exists a per-antenna power constraint at the transmitter; hence, the equal gain transmission (EGT) beamforming vector is quantized and sent from the receiver to the transmitter. The loss in received signal-to-noise ratio (SNR) relative to perfect beamforming is analytically characterized, and it is shown that at high rates, the overall distortion can be expressed as the sum of the quantization-induced distortion and the channel error-induced distortion, and that the asymptotic performance depends on the error-rate behavior of the noisy feedback channel as the number of codepoints gets large. The optimum density of codepoints (also known as the point density) that minimizes the overall distortion subject to a boundedness constraint is shown to be the same as the point density for a noiseless feedback channel, i.e., the uniform density. The binary symmetric channel with random index assignment is a special case of the analysis, and it is shown that as the number of quantized bits gets large, the distortion approaches that obtained with random beamforming. The accuracy of the theoretical expressions obtained is verified through Monte Carlo simulations.
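The two-term distortion decomposition described above can be probed numerically. Below is a hypothetical Monte Carlo sketch, not the paper's code: the Rayleigh channel, the 4-bit uniform phase quantizer, and the bitwise BSC feedback model are our assumptions.

```python
# Hypothetical Monte Carlo sketch -- not the paper's code. The Rayleigh
# channel, uniform phase quantizer, and bitwise BSC feedback are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def egt_snr_loss(n_tx=4, bits=4, p=0.0, trials=5000):
    """Average received-SNR loss relative to perfect EGT beamforming."""
    levels = 2 ** bits
    bit_vals = 2 ** np.arange(bits)
    loss = 0.0
    for _ in range(trials):
        # i.i.d. Rayleigh-fading channel coefficients
        h = (rng.standard_normal(n_tx) + 1j * rng.standard_normal(n_tx)) / np.sqrt(2)
        snr_perfect = np.sum(np.abs(h)) ** 2 / n_tx          # ideal EGT gain
        # uniform quantization of each antenna phase to `bits` bits
        idx = np.round(np.angle(h) * levels / (2 * np.pi)).astype(int) % levels
        # noisy feedback: each index bit flips independently (BSC, crossover p)
        mask = (rng.random((n_tx, bits)) < p).astype(int) @ bit_vals
        w = np.exp(2j * np.pi * (idx ^ mask) / levels) / np.sqrt(n_tx)
        loss += snr_perfect - np.abs(np.vdot(h, w)) ** 2
    return loss / trials
```

With `p = 0` only the quantization-induced distortion remains; raising `p` adds the channel error-induced term, mirroring the decomposition in the abstract.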

Relevance: 90.00%

Publisher:

Abstract:

We describe a noniterative method for recovering the optical absorption coefficient distribution from the absorbed energy map reconstructed using simulated and noisy boundary pressure measurements. The source reconstruction problem is first solved for the absorbed energy map corresponding to single- and multiple-source illuminations from the side of the imaging plane. It is shown that the absorbed energy map and the absorption coefficient distribution, recovered from the single-source illumination with a large variation in photon flux distribution, have signal-to-noise ratios comparable to those of the reconstructed parameters from a more uniform photon density distribution corresponding to multiple-source illuminations. The absorbed energy map is input as absorption coefficient times photon flux in the time-independent diffusion equation (DE) governing photon transport to recover the photon flux in a single step. The recovered photon flux is used to compute the optical absorption coefficient distribution from the absorbed energy map. In the absence of experimental data, we obtain the boundary measurements through Monte Carlo simulations, and we attempt to address the possible limitations of the DE model in the overall reconstruction procedure.

Relevance: 90.00%

Publisher:

Abstract:

A generalized technique is proposed for modeling the effects of process variations on dynamic power by directly relating the variations in process parameters to variations in the dynamic power of a digital circuit. The dynamic power of a 2-input NAND gate is characterized by mixed-mode simulations, to be used as a library element for 65 nm gate-length technology. The proposed methodology is demonstrated with a multiplier circuit built using the NAND gate library, by characterizing its dynamic power through Monte Carlo analysis. The statistical technique of Response Surface Methodology (RSM), using Design of Experiments (DOE) and the Least Squares Method (LSM), is employed to generate a "hybrid model" for gate power that accounts for simultaneous variations in multiple process parameters. We demonstrate that our hybrid-model-based statistical design approach results in considerable savings in the power budget of low-power CMOS designs with an error of less than 1%, and with significant reductions in uncertainty, of at least 6X on a normalized basis, against worst-case design.
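As a toy illustration of the RSM/DOE/LSM step, the sketch below fits a second-order response surface over a 3-level full-factorial design by least squares. The quadratic `toy_power` model and coded factor levels are invented stand-ins, not the paper's 65 nm NAND-gate library data.

```python
# Illustrative RSM sketch: the quadratic toy_power model and coded factor
# levels are invented stand-ins for the paper's 65 nm NAND-gate library data.
import itertools
import numpy as np

def toy_power(L, V):
    # assumed smooth dependence of gate power on two coded process parameters
    return 1.0 + 0.8 * L - 0.5 * V + 0.3 * L * V + 0.2 * L ** 2

# 3-level full-factorial design (DOE) in coded units {-1, 0, +1}
pts = np.array(list(itertools.product([-1.0, 0.0, 1.0], repeat=2)))
y = np.array([toy_power(L, V) for L, V in pts])

# second-order model matrix: 1, L, V, L*V, L^2, V^2
X = np.column_stack([np.ones(len(pts)), pts[:, 0], pts[:, 1],
                     pts[:, 0] * pts[:, 1], pts[:, 0] ** 2, pts[:, 1] ** 2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # LSM fit of the response surface
```

Because the toy response is itself quadratic, the fitted surface reproduces it exactly; on simulated power data the residual would instead measure model adequacy.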

Relevance: 90.00%

Publisher:

Abstract:

In this paper, the effects of energy quantization on different single-electron transistor (SET) circuits (logic inverter, current-biased circuits, and hybrid MOS-SET circuits) are analyzed through analytical modeling and Monte Carlo simulations. It is shown that energy quantization mainly increases the Coulomb blockade area and the Coulomb blockade oscillation periodicity, and thus affects SET circuit performance. A new model for the noise margin of the SET inverter is proposed, which includes the energy quantization effects. Using the noise margin as a metric, the robustness of the SET inverter is studied against the effects of energy quantization. An analytical expression is developed that explicitly defines the maximum energy quantization (termed the "quantization threshold") that an SET inverter can withstand before its noise margin falls below a specified tolerance level. The effects of energy quantization are further studied for the current-biased negative differential resistance (NDR) circuit and the hybrid SET-MOS circuit. A new model for the conductance of the NDR characteristics is also formulated that explains the energy quantization effects.

Relevance: 90.00%

Publisher:

Abstract:

A laminated composite plate model based on first-order shear deformation theory is implemented using the finite element method. Matrix cracks are introduced into the finite element model by considering changes in the A, B and D matrices of composites. The effects of different boundary conditions, laminate types and ply angles on the behavior of composite plates with matrix cracks are studied. Finally, the effect of material property uncertainty, which is important for composite materials, on the composite plate is investigated using Monte Carlo simulations. Probabilistic estimates of damage detection reliability in composite plates are made for static and dynamic measurements. It is found that the effect of uncertainty must be considered for accurate damage detection in composite structures. The estimates of variance obtained for observable system properties due to uncertainty can be used for developing more robust damage detection algorithms. (C) 2010 Elsevier Ltd. All rights reserved.
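The variance-propagation idea can be sketched in a few lines. The ply modulus, its scatter, and the thickness below are placeholder values, not the paper's laminate model: Monte Carlo sampling of an uncertain modulus yields the scatter of a bending-stiffness observable of the kind used to judge damage-detection reliability.

```python
# Rough sketch -- the ply modulus, its scatter, and the thickness are
# placeholder values, not the paper's laminate model.
import numpy as np

rng = np.random.default_rng(8)
n = 100_000
E1 = rng.normal(140e9, 7e9, n)     # longitudinal modulus, assumed 5% COV
t = 0.002                          # plate thickness (m)
D11 = E1 * t ** 3 / 12.0           # bending stiffness of a homogeneous strip
cov = np.std(D11) / np.mean(D11)   # scatter passed on to an observable property
```

Since the map is linear here, the output COV matches the input COV; for a full cracked-laminate model the sampled variance of each observable would be estimated the same way.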

Relevance: 90.00%

Publisher:

Abstract:

In the present study, results of reliability analyses of four selected rehabilitated earth dam sections, i.e., Chang, Tapar, Rudramata, and Kaswati, under pseudostatic loading conditions are presented. Using the response surface methodology in combination with the first-order reliability method and numerical analysis, the reliability index (beta) values are obtained and the results are interpreted in conjunction with conventional factor of safety values. The influence of considering variability in the input soil shear strength parameters, horizontal seismic coefficient (alpha(h)), and location of reservoir full level on the stability assessment of the earth dam sections is discussed in the probabilistic framework. A comparison of results with those obtained from another method of reliability analysis, viz., Monte Carlo simulations combined with the limit equilibrium approach, provided a basis for discussing the stability of earth dams in probabilistic terms, and the results of the analysis suggest that the considered earth dam sections are reliable and are expected to perform satisfactorily.
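A minimal sketch of the Monte Carlo route to a reliability index follows. The pseudostatic infinite-slope factor-of-safety model and every parameter value are illustrative assumptions, not the Chang/Tapar/Rudramata/Kaswati sections.

```python
# Hedged sketch: the pseudostatic infinite-slope model and all parameter
# values are illustrative, not the paper's dam sections.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
phi = np.deg2rad(rng.normal(32.0, 2.0, n))   # friction angle, uncertain
c = rng.lognormal(np.log(12.0), 0.25, n)     # cohesion (kPa), uncertain
slope = np.deg2rad(25.0)                     # slope inclination
gamma, H, kh = 18.0, 8.0, 0.12               # unit weight, depth, seismic coeff.

W = gamma * H
# factor of safety: shear resistance over (static + pseudostatic) demand
fos = (c + W * np.cos(slope) ** 2 * np.tan(phi)) / \
      (W * np.sin(slope) * np.cos(slope) + kh * W * np.cos(slope) ** 2)

beta_rel = (fos.mean() - 1.0) / fos.std()    # reliability index (FoS = 1 limit)
pf = np.mean(fos < 1.0)                      # Monte Carlo failure probability
```

The index computed this way can then be set against the conventional factor of safety, as the abstract does for the response-surface results.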

Relevance: 90.00%

Publisher:

Abstract:

In this study, we derive a fast, novel time-domain algorithm to compute the nth-order moment of the power spectral density of the photoelectric current as measured in laser-Doppler flowmetry (LDF). It is well established in the LDF literature that these moments are closely related to fundamental physiological parameters, i.e. the concentration of moving erythrocytes and blood flow. In particular, we take advantage of the link between moments in the Fourier domain and fractional derivatives in the temporal domain. Using Parseval's theorem, we establish an exact analytical equivalence between the time-domain expression and the conventional frequency-domain counterpart. Moreover, we demonstrate the appropriateness of estimating the zeroth-, first- and second-order moments using Monte Carlo simulations. Finally, we briefly discuss the feasibility of implementing the proposed algorithm in hardware.
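The Parseval equivalence can be checked numerically. This sketch is illustrative rather than the authors' implementation: by Parseval's theorem, the nth-order spectral moment equals the energy of the fractional derivative of order n/2, which we construct with a spectral filter.

```python
# Numerical check of the moment identity (illustrative, not the authors'
# implementation): by Parseval, the nth-order spectral moment equals the
# energy of the fractional derivative of order n/2, built here spectrally.
import numpy as np

rng = np.random.default_rng(2)
N = 1024
x = rng.standard_normal(N)               # toy photocurrent record
X = np.fft.fft(x)
w = 2 * np.pi * np.fft.fftfreq(N)        # angular-frequency grid

M_freq, M_time = [], []
for n in (0, 1, 2):
    # frequency-domain moment of the periodogram |X|^2 / N
    M_freq.append(np.sum(np.abs(w) ** n * np.abs(X) ** 2) / N)
    # time-domain route: fractional derivative of order n/2, then its energy
    d = np.fft.ifft(np.abs(w) ** (n / 2) * X)
    M_time.append(np.sum(np.abs(d) ** 2))
```

The two routes agree to machine precision for n = 0, 1, 2, which is the equivalence the abstract establishes analytically.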

Relevance: 90.00%

Publisher:

Abstract:

In this article, the problem of two Unmanned Aerial Vehicles (UAVs) cooperatively searching an unknown region is addressed. The search region is discretized into hexagonal cells and each cell is assumed to possess an uncertainty value. The UAVs have to cooperatively search these cells, taking limited endurance, sensor and communication range constraints into account. Due to limited endurance, the UAVs need to return to the base station for refuelling and also need to select a base station when multiple base stations are present. This article proposes a route planning algorithm that takes endurance time constraints into account and uses game theoretical strategies to reduce the uncertainty. The route planning algorithm selects only those cells that ensure the agent will return to any one of the available bases. A set of paths is formed using these cells, from which the game theoretical strategies select a path that yields maximum uncertainty reduction. We explore non-cooperative Nash, cooperative and security strategies from game theory to enhance the search effectiveness. Monte-Carlo simulations are carried out which show the superiority of the game theoretical strategies over a greedy strategy for different look-ahead step length paths. Within the game theoretical strategies, the non-cooperative Nash and cooperative strategies perform similarly in an ideal case, but the Nash strategy performs better than the cooperative strategy when the perceived information is different. We also propose a heuristic based on partitioning of the search space into sectors to reduce computational overhead without performance degradation.
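The look-ahead path scoring can be illustrated with a toy single-agent version. A square grid of invented uncertainty values stands in for the hexagonal cell map, and exhaustive enumeration replaces the game-theoretic selection.

```python
# Toy sketch: a square grid of invented uncertainty values stands in for the
# hexagonal cell map, and exhaustive look-ahead replaces the game strategies.
import itertools
import numpy as np

rng = np.random.default_rng(7)
U = rng.random((6, 6))                        # per-cell uncertainty
moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def best_path(start, steps):
    """Exhaustively score all in-bounds move sequences of a given length."""
    best, best_score = None, -1.0
    for seq in itertools.product(moves, repeat=steps):
        pos, score, seen, ok = start, 0.0, set(), True
        for dx, dy in seq:
            pos = (pos[0] + dx, pos[1] + dy)
            if not (0 <= pos[0] < 6 and 0 <= pos[1] < 6):
                ok = False
                break
            if pos not in seen:               # revisited cells add nothing
                score += U[pos]
                seen.add(pos)
        if ok and score > best_score:
            best, best_score = seq, score
    return best, best_score

_, s1 = best_path((3, 3), 1)                  # greedy, one-step look-ahead
_, s3 = best_path((3, 3), 3)                  # longer look-ahead path
```

Longer look-ahead can never reduce the achievable uncertainty reduction here, which is the qualitative trend the Monte-Carlo comparison in the abstract reports.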

Relevance: 90.00%

Publisher:

Abstract:

Purpose: To assess the effect of ultrasound modulation of near infrared (NIR) light on the quantification of the scattering coefficient in tissue-mimicking biological phantoms. Methods: A unique method to estimate the phase of the modulated NIR light, making use of only time-averaged intensity measurements with a charge coupled device camera, is used in this investigation. These experimental measurements from tissue-mimicking biological phantoms are used to estimate the differential pathlength, in turn leading to estimation of the optical scattering coefficient. A Monte-Carlo-model-based numerical estimation of phase under ultrasound modulation is performed to verify the experimental results. Results: The results indicate that the ultrasound modulation of NIR light enhances the effective scattering coefficient. The observed effective scattering coefficient enhancement in tissue-mimicking viscoelastic phantoms increases with increasing ultrasound drive voltage. The same trend is noticed as the ultrasound modulation frequency approaches the natural vibration frequency of the phantom material. The contrast enhancement is less for the stiffer (larger storage modulus) tissue, mimicking a tumor necrotic core, compared to the normal tissue. Conclusions: The ultrasound modulation of the insonified region leads to an increase in the effective number of scattering events experienced by NIR light, increasing the measured phase and causing the enhancement in the effective scattering coefficient. The ultrasound modulation of NIR light could provide better estimation of the scattering coefficient. The observed local enhancement of the effective scattering coefficient, in the ultrasound focal region, is validated using both experimental measurements and Monte-Carlo simulations. (C) 2010 American Association of Physicists in Medicine. [DOI: 10.1118/1.3456441]

Relevance: 90.00%

Publisher:

Abstract:

We study the equilibrium properties of the nearest-neighbor Ising antiferromagnet on a triangular lattice in the presence of a staggered field conjugate to one of the degenerate ground states. Using a mapping of the ground states of the model without the staggered field to dimer coverings on the dual lattice, we classify the ground states into sectors specified by the number of "strings." We show that the effect of the staggered field is to generate long-range interactions between strings. In the limiting case of the antiferromagnetic coupling constant J becoming infinitely large, we prove the existence of a phase transition in this system and obtain a finite lower bound for the transition temperature. For finite J, we study the equilibrium properties of the system using Monte Carlo simulations with three different dynamics. We find that in all three cases, equilibration times for low field values increase rapidly with system size at low temperatures. Due to this difficulty in equilibrating sufficiently large systems at low temperatures, our finite-size scaling analysis of the numerical results does not permit a definite conclusion about the existence of a phase transition for finite values of J. A surprising feature of the system is the fact that, unlike in usual glassy systems, a zero-temperature quench almost always leads to the ground state, while a slow cooling does not.
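The zero-temperature quench mentioned at the end is easy to reproduce in miniature. This minimal sketch (lattice size, J, and sweep count are our choices, and the staggered field is omitted) runs single-spin-flip dynamics that accept only non-positive energy changes on the triangular-lattice antiferromagnet.

```python
# Minimal sketch (lattice size, J and sweep count are our choices): a
# zero-temperature quench of the triangular-lattice Ising antiferromagnet,
# H = J * sum_<ij> s_i s_j with J > 0, without the staggered field. The
# triangular lattice is a square grid with one extra diagonal bond.
import numpy as np

rng = np.random.default_rng(3)
L, J, sweeps = 12, 1.0, 300
s = rng.choice([-1, 1], size=(L, L))
nbrs = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (-1, -1)]   # 6 neighbors

def energy_per_site(s):
    e = 0.0
    for dx, dy in [(1, 0), (0, 1), (1, 1)]:   # each bond counted once
        e += J * np.sum(s * np.roll(np.roll(s, dx, 0), dy, 1))
    return e / s.size

e0 = energy_per_site(s)
for _ in range(sweeps * L * L):
    i, j = rng.integers(L, size=2)
    h = sum(s[(i + dx) % L, (j + dy) % L] for dx, dy in nbrs)
    if s[i, j] * h >= 0:                      # flip cost dE = -2*J*s*h <= 0
        s[i, j] = -s[i, j]
e1 = energy_per_site(s)
```

The ground-state energy is -J per site (each elementary triangle contributes its minimum of -J/2 through shared bonds), and the quench ends close to it, consistent with the abstract's observation.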

Relevance: 90.00%

Publisher:

Abstract:

The temperature dependence of the critical micelle concentration (CMC) and a closed-loop coexistence curve are obtained, via Monte Carlo simulations, in the water-surfactant limit of a two-dimensional version of a statistical mechanical model for microemulsions. The CMC and the coexistence curve reproduce various experimental trends as functions of the couplings. In the oil-surfactant limit, there is a conventional coexistence curve with an upper consolute point that allows for a region of three-phase coexistence between oil-rich, water-rich and microemulsion phases.

Relevance: 90.00%

Publisher:

Abstract:

We compare magnetovolume effects in bulk and nanoparticles by performing Monte Carlo simulations of a spin-analogous model with coupled spatial and magnetic degrees of freedom and chemical disorder. We find that correlations between surface and bulk atoms lead, with decreasing particle size, to a substantial modification of the magnetic and elastic behavior at low temperatures.

Relevance: 90.00%

Publisher:

Abstract:

The problem of time variant reliability analysis of existing structures subjected to stationary random dynamic excitations is considered. The study assumes that samples of dynamic response of the structure, under the action of external excitations, have been measured at a set of sparse points on the structure. The utilization of these measurements in updating reliability models, postulated prior to making any measurements, is considered. This is achieved by using dynamic state estimation methods which combine results from Markov process theory and Bayes' theorem. The uncertainties present in measurements as well as in the postulated model for the structural behaviour are accounted for. The samples of external excitations are taken to emanate from known stochastic models and allowance is made for ability (or lack of it) to measure the applied excitations. The future reliability of the structure is modeled using expected structural response conditioned on all the measurements made. This expected response is shown to have a time varying mean and a random component that can be treated as being weakly stationary. For linear systems, an approximate analytical solution for the problem of reliability model updating is obtained by combining theories of the discrete Kalman filter and level crossing statistics. For the case of nonlinear systems, the problem is tackled by combining particle filtering strategies with data based extreme value analysis. In all these studies, the governing stochastic differential equations are discretized using the strong forms of Ito-Taylor discretization schemes. The possibility of using conditional simulation strategies, when applied external actions are measured, is also considered. The proposed procedures are exemplified by considering the reliability analysis of a few low-dimensional dynamical systems based on synthetically generated measurement data.
The performance of the procedures developed is also assessed based on a limited amount of pertinent Monte Carlo simulations. (C) 2010 Elsevier Ltd. All rights reserved.
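The linear-system pipeline can be sketched schematically. A scalar Gauss-Markov response and made-up numbers stand in for the paper's structural models, and the crossing-rate scale `nu0` is an assumed constant rather than the Rice-formula value.

```python
# Schematic example: a scalar Gauss-Markov response and made-up numbers stand
# in for the paper's structural models; nu0 (the crossing-rate scale) is assumed.
import numpy as np

rng = np.random.default_rng(4)
a, q, r = 0.95, 0.1, 0.5                 # transition, process var., meas. var.
n = 200
x_true = np.zeros(n)
for k in range(1, n):                    # postulated linear response model
    x_true[k] = a * x_true[k - 1] + rng.normal(0.0, np.sqrt(q))
y = x_true + rng.normal(0.0, np.sqrt(r), n)    # sparse, noisy measurements

# discrete Kalman filter: predict with the model, update with the data
xh, P, est = 0.0, 1.0, []
for k in range(n):
    xh, P = a * xh, a * a * P + q                 # predict
    K = P / (P + r)                               # Kalman gain
    xh, P = xh + K * (y[k] - xh), (1.0 - K) * P   # update
    est.append(xh)

# updated response statistics feed a Poisson out-crossing reliability estimate
sigma = np.sqrt(q / (1.0 - a * a))       # stationary response std
b, T, nu0 = 3.0 * sigma, 1000.0, 0.05    # threshold, duration, rate scale
nu = nu0 * np.exp(-b ** 2 / (2.0 * sigma ** 2))   # Rice-form crossing rate
pf = 1.0 - np.exp(-nu * T)               # probability of at least one crossing
```

The filtered estimate tracks the response more closely than the raw measurements, and its statistics drive the level-crossing reliability figure, mirroring the Kalman-plus-crossing combination for linear systems above.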

Relevance: 90.00%

Publisher:

Abstract:

Methodologies are presented for minimization of risk in a river water quality management problem. A risk minimization model is developed to minimize the risk of low water quality along a river in the face of conflict among various stakeholders. The model consists of three parts: a water quality simulation model, a risk evaluation model with uncertainty analysis and an optimization model. Sensitivity analysis, First Order Reliability Analysis (FORA) and Monte-Carlo simulations are performed to evaluate the fuzzy risk of low water quality. Fuzzy multiobjective programming is used to formulate the multiobjective model. Probabilistic Global Search Lausanne (PGSL), a global search algorithm developed recently, is used for solving the resulting non-linear optimization problem. The algorithm is based on the assumption that better sets of points are more likely to be found in the neighborhood of good sets of points, therefore intensifying the search in the regions that contain good solutions. Another model is developed for risk minimization, which deals with only the moments of the generated probability density functions of the water quality indicators. Suitable skewness values of water quality indicators, which lead to low fuzzy risk, are identified. Results of the models are compared with the results of a deterministic fuzzy waste load allocation model (FWLAM), when the methodologies are applied to the case study of the Tunga-Bhadra river system in southern India, with a steady state BOD-DO model. The fractional removal levels resulting from the risk minimization model are slightly higher, but result in a significant reduction in risk of low water quality. (c) 2005 Elsevier Ltd. All rights reserved.
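The Monte-Carlo risk evaluation step can be illustrated with a steady-state BOD-DO model. The Streeter-Phelps coefficients and the DO standard below are assumed values, not the Tunga-Bhadra calibration.

```python
# Illustrative sketch: the Streeter-Phelps coefficients and the DO standard
# are assumed values, not the Tunga-Bhadra calibration.
import numpy as np

rng = np.random.default_rng(5)
n = 50_000
kd = rng.normal(0.35, 0.05, n)       # BOD decay rate (1/day), uncertain
ka = rng.normal(0.60, 0.08, n)       # reaeration rate (1/day), uncertain
L0, D0, t = 20.0, 1.0, 1.0           # initial BOD (mg/L), deficit, travel time
Cs, C_std = 9.0, 4.0                 # saturation DO, DO standard (mg/L)

# steady-state Streeter-Phelps oxygen deficit at travel time t
D = (kd * L0 / (ka - kd)) * (np.exp(-kd * t) - np.exp(-ka * t)) \
    + D0 * np.exp(-ka * t)
do = Cs - D
risk = np.mean(do < C_std)           # Monte Carlo risk of low water quality
```

An optimizer such as the one described above would then choose fractional removal levels (here folded into `L0`) to drive this risk down.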

Relevance: 90.00%

Publisher:

Abstract:

Artificial neural networks (ANNs) have shown great promise in modeling circuit parameters for computer-aided design applications. Leakage currents, which depend on process parameters, supply voltage and temperature, can be modeled accurately with ANNs. However, the complex nature of the ANN model, with the standard sigmoidal activation functions, does not allow analytical expressions for its mean and variance. We propose the use of a new activation function that allows us to derive an analytical expression for the mean and a semi-analytical expression for the variance of the ANN-based leakage model. To the best of our knowledge, this is the first result in this direction. Our neural network model also includes the voltage and temperature as input parameters, thereby enabling voltage- and temperature-aware statistical leakage analysis (SLA). All existing SLA frameworks are closely tied to the exponential polynomial leakage model and hence fail to work with sophisticated ANN models. In this paper, we also set up an SLA framework that can efficiently work with these ANN models. Results show that the cumulative distribution function of the leakage current of ISCAS'85 circuits can be predicted accurately, with the error in mean and standard deviation, compared to Monte Carlo-based simulations, being less than 1% and 2% respectively across a range of voltage and temperature values.
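The key idea, an activation whose Gaussian-input mean is analytic, can be sketched with an exponential activation. The `exp()` neuron here is our stand-in for the paper's custom activation, and the weights and parameter statistics are arbitrary assumptions.

```python
# Sketch of the key idea: exp() is our stand-in for the paper's custom
# activation, and the weights/statistics below are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(6)
w = np.array([0.3, -0.2, 0.1])      # neuron weights
b = 0.05                            # bias
mu = np.array([0.0, 0.1, -0.1])     # process-parameter mean
S = np.diag([0.04, 0.09, 0.01])     # process-parameter covariance

# lognormal mean formula: E[exp(w.x + b)] for Gaussian x
analytic = np.exp(w @ mu + b + 0.5 * w @ S @ w)

x = rng.multivariate_normal(mu, S, size=200_000)
mc = np.mean(np.exp(x @ w + b))     # Monte Carlo estimate of the same mean
```

The closed-form mean matches the Monte Carlo estimate, which is precisely the property that lets an SLA framework avoid sampling the ANN model.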