971 results for MONTE-CARLO SIMULATION
Abstract:
A reliable method for service life estimation of the structural element is a prerequisite for service life design. A new methodology for durability-based service life estimation of reinforced concrete flexural elements with respect to chloride-induced corrosion of reinforcement is proposed. The methodology takes into consideration the fuzzy and random uncertainties associated with the variables involved in service life estimation by using a hybrid method combining the vertex method of fuzzy set theory with the Monte Carlo simulation technique. It is also shown how to determine the bounds for the characteristic value of failure probability from the resulting fuzzy set for failure probability with minimal computational effort. Using the methodology, the bounds for the characteristic value of failure probability for a reinforced concrete T-beam bridge girder have been determined. The service life of the structural element is determined by comparing the upper bound of the characteristic value of failure probability with the target failure probability. The methodology will be useful for durability-based service life design and also for making decisions regarding in-service inspections.
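As a hedged illustration of the kind of computation this abstract describes, the minimal sketch below combines the vertex method (evaluating at the endpoints of an alpha-cut interval of a fuzzy variable) with crude Monte Carlo simulation over the random variables. The Fick's-law chloride ingress model, the distributions, and all numerical values are assumptions for illustration only, not the paper's data or limit state.

import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(0)
n = 200_000
t = 20.0                                      # exposure time, years (assumed)

# Random variables (illustrative distributions, not the paper's data)
cover = rng.normal(50.0, 5.0, n)              # concrete cover, mm
D = rng.lognormal(np.log(30.0), 0.3, n)       # chloride diffusion coefficient, mm^2/yr
Cs = rng.lognormal(np.log(3.0), 0.2, n)       # surface chloride content, % binder

def pf(c_crit):
    # Crude Monte Carlo estimate of P[chloride at rebar depth exceeds c_crit]
    c_rebar = Cs * erfc(cover / (2.0 * np.sqrt(D * t)))
    return np.mean(c_rebar > c_crit)

# Vertex method: evaluate at the two endpoints of an alpha-cut interval of the
# fuzzy critical chloride content to bound the failure probability
c_crit_interval = (0.35, 0.55)                # assumed alpha-cut, % binder
lo, hi = sorted(pf(v) for v in c_crit_interval)
print("failure probability bounds at this alpha-cut: [%.4f, %.4f]" % (lo, hi))

Repeating the vertex evaluation over several alpha-cuts would trace out a fuzzy set for the failure probability, from which bounds on a characteristic value could be read off.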
Abstract:
This paper presents a study of the wave propagation responses in composite structures in an uncertain environment. Here, the main aim of the work is to quantify the effect of uncertainty in the wave propagation responses at high frequencies. The material properties are considered uncertain and the analysis is performed using Neumann expansion blended with Monte Carlo simulation under the environment of spectral finite element method. The material randomness is included in the conventional wave propagation analysis by different distributions (namely, the normal and the Weibul distribution) and their effect on wave propagation in a composite beam is analyzed. The numerical results presented investigates the effect of material uncertainties on different parameters, namely, wavenumber and group speed, which are relevant in the wave propagation analysis. The effect of the parameters, such as fiber orientation, lay-up sequence, number of layers, and the layer thickness on the uncertain responses due to dynamic impulse load, is thoroughly analyzed. Significant changes are observed in the high frequency responses with the variation in the above parameters, even for a small coefficient of variation. High frequency impact loads are applied and a number of interesting results are presented, which brings out the true effects of uncertainty in the high frequency responses. [DOI: 10.1115/1.4003945]
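A minimal sketch of the Neumann-expansion-within-Monte-Carlo idea follows, shown on a small generic stiffness system rather than the paper's spectral finite element model; the matrices, the proportional perturbation model, and the 5% coefficient of variation are assumptions for illustration.

import numpy as np

rng = np.random.default_rng(1)

# Nominal stiffness of a small system (illustrative, not spectral-element matrices)
K0 = np.array([[4.0, -1.0, 0.0],
               [-1.0, 4.0, -1.0],
               [0.0, -1.0, 4.0]])
f = np.array([1.0, 0.0, 0.0])
K0_inv = np.linalg.inv(K0)

def neumann_solve(dK, terms=4):
    # Approximate (K0 + dK)^-1 f by a truncated Neumann series
    u = K0_inv @ f
    term = u.copy()
    for _ in range(terms - 1):
        term = -K0_inv @ (dK @ term)
        u = u + term
    return u

# Monte Carlo over a random material perturbation (e.g. an uncertain modulus)
errs = []
for _ in range(5000):
    eps = rng.normal(0.0, 0.05)              # 5% coefficient of variation (assumed)
    dK = eps * K0                            # proportional random perturbation
    u_neu = neumann_solve(dK)
    u_ref = np.linalg.solve(K0 + dK, f)
    errs.append(np.linalg.norm(u_neu - u_ref) / np.linalg.norm(u_ref))

print("mean relative error of the Neumann expansion:", np.mean(errs))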
Abstract:
Finite element modeling can be a useful tool for predicting the behavior of composite materials and arriving at desirable filler contents for maximizing mechanical performance. In the present study, to corroborate finite element analysis results, quantitative information on the effect of reinforcing polypropylene (PP) with various proportions of nanoclay (in the range of 3-9% by weight) is obtained through experiments; in particular, attention is paid to the Young's modulus, tensile strength and failure strain. Micromechanical finite element analysis combined with Monte Carlo simulation has been carried out to establish the validity of the modeling procedure and the accuracy of prediction by comparing against experimentally determined stiffness moduli of nanocomposites. In the same context, predictions of Young's modulus yielded by theoretical micromechanics-based models are compared with experimental results. Macromechanical modeling was done to capture the non-linear stress-strain behavior, including failure, observed in experiments, as this is deemed to be a more viable tool for analyzing products made of nanocomposites, including dynamic applications. (C) 2011 Elsevier Ltd. All rights reserved.
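A hedged sketch of combining a micromechanics-based modulus prediction with Monte Carlo sampling of uncertain constituent properties follows. The Halpin-Tsai model is used here only as a representative micromechanical model, and the moduli, aspect ratio, and volume fraction are illustrative assumptions rather than the paper's measured values.

import numpy as np

rng = np.random.default_rng(2)

def halpin_tsai(Em, Ef, vf, zeta=2.0 * 25.0):
    # Halpin-Tsai composite modulus; zeta ~ 2*(aspect ratio), aspect ratio 25 assumed
    eta = (Ef / Em - 1.0) / (Ef / Em + zeta)
    return Em * (1.0 + zeta * eta * vf) / (1.0 - eta * vf)

# Illustrative inputs for a PP/nanoclay system (values assumed)
n = 100_000
Em = rng.normal(1.5, 0.1, n)       # matrix modulus, GPa
Ef = rng.normal(180.0, 20.0, n)    # filler modulus, GPa
vf = 0.012                          # filler volume fraction (assumed, ~3 wt%)

Ec = halpin_tsai(Em, Ef, vf)
print("Young's modulus: mean = %.2f GPa, 95%% interval = [%.2f, %.2f] GPa"
      % (Ec.mean(), np.percentile(Ec, 2.5), np.percentile(Ec, 97.5)))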
Abstract:
This paper proposes an algorithm for joint data detection and tracking of the dominant singular mode of a time-varying channel at the transmitter and receiver of a time division duplex multiple input multiple output beamforming system. The method proposed is a modified expectation maximization algorithm which utilizes an initial estimate to track the dominant modes of the channel at the transmitter and the receiver blindly, and simultaneously detects the unknown data. Furthermore, the estimates are constrained to be within a confidence interval of the previous estimate in order to improve the tracking performance and mitigate the effect of error propagation. Monte-Carlo simulation results of the symbol error rate and the mean square inner product between the estimated and the true singular vector are plotted to show the performance benefits offered by the proposed method compared to existing techniques.
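The sketch below illustrates only the Monte-Carlo symbol-error-rate evaluation of dominant-mode beamforming with an imperfect estimate of the singular vector; it does not implement the paper's modified expectation maximization tracker, and the i.i.d. Rayleigh channel, perturbation level, and QPSK signalling are assumptions.

import numpy as np

rng = np.random.default_rng(3)

def ser_mc(nt=4, nr=4, snr_db=0.0, n_frames=2000, n_sym=50, est_noise=0.0):
    # Monte Carlo symbol error rate for beamforming on the dominant singular
    # mode of a Rayleigh MIMO channel; est_noise=0 gives the perfect-CSI reference
    snr = 10.0 ** (snr_db / 10.0)
    qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2.0)
    errors = total = 0
    for _ in range(n_frames):
        H = (rng.normal(size=(nr, nt)) + 1j * rng.normal(size=(nr, nt))) / np.sqrt(2.0)
        U, s, Vh = np.linalg.svd(H)
        u, v = U[:, 0], Vh[0].conj()                  # dominant left/right modes
        if est_noise > 0.0:
            v = v + est_noise * (rng.normal(size=nt) + 1j * rng.normal(size=nt))
            v = v / np.linalg.norm(v)
        idx = rng.integers(0, 4, n_sym)
        x = qpsk[idx]
        noise = (rng.normal(size=(nr, n_sym)) + 1j * rng.normal(size=(nr, n_sym))) / np.sqrt(2.0)
        y = np.sqrt(snr) * H @ np.outer(v, x) + noise
        r = u.conj() @ y                              # receive combining
        g = np.sqrt(snr) * (u.conj() @ H @ v)         # effective complex gain
        det = np.argmin(np.abs(r[:, None] - g * qpsk[None, :]), axis=1)
        errors += int(np.sum(det != idx))
        total += n_sym
    return errors / total

print("SER, true dominant mode :", ser_mc(est_noise=0.0))
print("SER, noisy mode estimate:", ser_mc(est_noise=0.3))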
Abstract:
This paper considers the problem of weak signal detection in the presence of navigation data bits for Global Navigation Satellite System (GNSS) receivers. Typically, a set of partial coherent integration outputs is non-coherently accumulated to combat the effects of model uncertainties such as the presence of navigation data bits and/or frequency uncertainty, resulting in a sub-optimal test statistic. In this work, the test statistic for weak signal detection is derived from the likelihood ratio in the presence of navigation data bits. It is highlighted that averaging the likelihood-ratio-based test statistic over the prior distributions of the unknown data bits and the carrier phase uncertainty leads to the conventional Post Detection Integration (PDI) technique for detection. To improve the performance in the presence of model uncertainties, a novel cyclostationarity-based sub-optimal PDI technique is proposed. The test statistic is analytically characterized, and shown to be robust to the presence of navigation data bits and to frequency, phase and noise uncertainties. Monte Carlo simulation results illustrate the validity of the theoretical results and the superior performance offered by the proposed detector in the presence of model uncertainties.
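A hedged sketch of the conventional non-coherent PDI test statistic mentioned above (the sum of squared magnitudes of partial coherent integration outputs) under unknown data-bit signs and carrier phase, with a Monte Carlo estimate of detection probability. The simplified baseband model and all parameter values are assumptions; the proposed cyclostationarity-based detector is not reproduced here.

import numpy as np

rng = np.random.default_rng(4)

def pdi_statistic(cn0_dbhz, present=True, n_blocks=20, t_coh=0.02):
    # Conventional non-coherent PDI: sum |coherent block output|^2.
    # Simplified baseband model with random data-bit signs and carrier phase.
    snr_block = 10.0 ** (cn0_dbhz / 10.0) * t_coh if present else 0.0
    amp = np.sqrt(2.0 * snr_block)                    # unit-variance noise per dim
    bits = rng.choice([-1.0, 1.0], n_blocks)          # unknown navigation data bits
    phase = rng.uniform(0.0, 2.0 * np.pi)             # unknown carrier phase
    noise = rng.normal(size=n_blocks) + 1j * rng.normal(size=n_blocks)
    y = amp * bits * np.exp(1j * phase) + noise
    return np.sum(np.abs(y) ** 2)

# Monte Carlo estimate of detection probability at a fixed false-alarm rate
n_mc = 20_000
h0 = np.array([pdi_statistic(18.0, present=False) for _ in range(n_mc)])
thresh = np.quantile(h0, 0.999)                       # Pfa ~ 1e-3
h1 = np.array([pdi_statistic(18.0, present=True) for _ in range(n_mc)])
print("estimated Pd at Pfa = 1e-3:", np.mean(h1 > thresh))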
Abstract:
A few variance reduction schemes are proposed within the broad framework of a particle filter as applied to the problem of structural system identification. Whereas the first scheme uses a directional descent step, possibly of the Newton or quasi-Newton type, within the prediction stage of the filter, the second relies on replacing the more conventional Monte Carlo simulation involving pseudorandom sequences with one using quasi-random sequences along with a Brownian bridge discretization while representing the process noise terms. As evidenced through the derivations and subsequent numerical work on the identification of a shear frame, the combined effect of the proposed approaches in yielding variance-reduced estimates of the model parameters appears to be quite noticeable. DOI: 10.1061/(ASCE)EM.1943-7889.0000480. (C) 2013 American Society of Civil Engineers.
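A minimal sketch of the ingredients of the second scheme: Brownian process-noise paths generated from a scrambled Sobol (quasi-random) sequence through a Brownian bridge discretization. The use of scipy's qmc module, the path length, and the time step are assumptions for illustration, not the paper's filter setup.

import numpy as np
from scipy.stats import norm, qmc

def brownian_bridge_paths(n_steps, n_paths, dt=1.0, seed=5):
    # Build Brownian-motion paths from a scrambled Sobol sequence using the
    # Brownian bridge discretization (n_steps must be a power of two)
    sobol = qmc.Sobol(d=n_steps, scramble=True, seed=seed)
    z = norm.ppf(sobol.random(n_paths))               # quasi-random N(0,1) draws
    T = n_steps * dt
    w = np.zeros((n_paths, n_steps + 1))
    w[:, n_steps] = np.sqrt(T) * z[:, 0]              # fix the terminal value first
    k, dim = 1, 1
    while k < n_steps:                                # fill dyadic midpoints
        step = n_steps // k
        for j in range(k):
            l, r = j * step, (j + 1) * step
            m = (l + r) // 2
            mean = 0.5 * (w[:, l] + w[:, r])
            var = (r - m) * (m - l) / (r - l) * dt
            w[:, m] = mean + np.sqrt(var) * z[:, dim]
            dim += 1
        k *= 2
    return w

w = brownian_bridge_paths(n_steps=8, n_paths=1024)
dW = np.diff(w, axis=1)                               # process-noise increments
print("increment variance (should be close to dt):", dW.var())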
Abstract:
The problem of updating the reliability of instrumented structures based on measured response under random dynamic loading is considered. A solution strategy within the framework of a Monte Carlo simulation based dynamic state estimation method and Girsanov's transformation for variance reduction is developed. For linear Gaussian state space models, the solution is developed based on a continuous version of the Kalman filter, while, for non-linear and (or) non-Gaussian state space models, bootstrap particle filters are adopted. The controls to implement the Girsanov transformation are developed by solving a constrained non-linear optimization problem. Numerical illustrations include studies on a multi-degree-of-freedom linear system and non-linear systems with geometric and (or) hereditary non-linearities and non-stationary random excitations.
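A hedged sketch of the underlying variance-reduction idea: sample the Gaussian process noise under a shifted measure and correct each sample with the Girsanov-type likelihood-ratio weight. Here the target is a first-passage probability of a toy linear oscillator with a constant, heuristically chosen shift rather than the paper's optimization-based controls; all numbers are illustrative.

import numpy as np

rng = np.random.default_rng(6)

def first_passage_prob(shift, n_sims=4000, n_steps=500, dt=0.01, level=0.5):
    # Estimate P[max |x| > level] for a white-noise driven linear oscillator.
    # Noise increments are drawn as N(shift, 1) and reweighted back to N(0, 1).
    wn, zeta = 2.0 * np.pi, 0.05
    est = 0.0
    for _ in range(n_sims):
        u = rng.normal(size=n_steps) + shift          # biased noise increments
        # log Radon-Nikodym weight of N(0,1) relative to N(shift,1) over the path
        log_lr = np.sum(-shift * u + 0.5 * shift ** 2)
        x = v = 0.0
        hit = False
        for uk in u:
            a = -2.0 * zeta * wn * v - wn ** 2 * x + uk / np.sqrt(dt)
            v += a * dt
            x += v * dt
            if abs(x) > level:
                hit = True
                break
        if hit:
            est += np.exp(log_lr)
    return est / n_sims

print("importance-sampling estimate:", first_passage_prob(shift=0.4))
print("crude Monte Carlo estimate:  ", first_passage_prob(shift=0.0))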
Abstract:
The random eigenvalue problem arises in frequency and mode shape determination for a linear system with uncertainties in structural properties. Among several methods of characterizing this random eigenvalue problem, one computationally fast method that gives good accuracy is a weak formulation using polynomial chaos expansion (PCE). In this method, the eigenvalues and eigenvectors are expanded in PCE, and the residual is minimized by a Galerkin projection. The goals of the current work are (i) to implement this PCE-characterized random eigenvalue problem in the dynamic response calculation under random loading and (ii) to explore the computational advantages and challenges. In the proposed method, the response quantities are also expressed in PCE, followed by a Galerkin projection. A numerical comparison with a perturbation method and Monte Carlo simulation shows that when the loading has a random amplitude but deterministic frequency content, the proposed method gives more accurate results than a first-order perturbation method and accuracy comparable to Monte Carlo simulation in a lower computational time. However, as the frequency content of the loading becomes random, or for general random process loadings, the method loses its accuracy and computational efficiency. Issues in implementation, limitations, and further challenges are also addressed.
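The sketch below reproduces only the two baselines used in such comparisons, a first-order perturbation estimate and Monte Carlo simulation of a random eigenvalue, on an assumed two-degree-of-freedom system; it does not implement the PCE/Galerkin formulation itself, and all numerical values are illustrative.

import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(7)

M = np.diag([1.0, 1.5])                      # assumed mass matrix
def K(k1, k2=800.0):                          # stiffness with one random spring
    return np.array([[k1 + k2, -k2],
                     [-k2, k2]])

k1_mean, k1_std = 1000.0, 100.0

# First-order perturbation about the mean stiffness
lam0, phi = eigh(K(k1_mean), M)              # mass-normalized eigenvectors
dK = np.array([[1.0, 0.0], [0.0, 0.0]])      # dK/dk1
sens = np.array([phi[:, i] @ dK @ phi[:, i] for i in range(2)])
pert_mean, pert_std = lam0, np.abs(sens) * k1_std

# Monte Carlo reference
samples = np.array([eigh(K(k1), M, eigvals_only=True)
                    for k1 in rng.normal(k1_mean, k1_std, 20_000)])
print("mode 1 eigenvalue: perturbation %.1f +/- %.1f, Monte Carlo %.1f +/- %.1f"
      % (pert_mean[0], pert_std[0], samples[:, 0].mean(), samples[:, 0].std()))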
Abstract:
This paper addresses the problem of finding outage-optimal power control policies for wireless energy harvesting sensor (EHS) nodes with automatic repeat request (ARQ)-based packet transmissions. The power control policy of the EHS specifies the transmission power for each packet transmission attempt, based on all the information available at the EHS. In particular, the acknowledgement (ACK) or negative acknowledgement (NACK) messages received provide the EHS with partial information about the channel state. We solve the problem of finding an optimal power control policy by casting it as a partially observable Markov decision process (POMDP). We study the structure of the optimal power policy in two ways. First, for the special case of binary power levels at the EHS, we show that the optimal policy for the underlying Markov decision process (MDP) when the channel state is observable is a threshold policy in the battery state. Second, we benchmark the performance of the EHS by rigorously analyzing the outage probability of a general fixed-power transmission scheme, where the EHS uses a predetermined power level at each slot within the frame. Monte Carlo simulation results illustrate the performance of the POMDP approach and verify the accuracy of the analysis. They also show that the POMDP solutions can significantly outperform conventional ad hoc approaches.
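A hedged sketch comparing the fixed-power benchmark with a simple threshold-in-battery-state policy for an energy harvesting sensor under Bernoulli energy arrivals and Rayleigh fading. The policies, battery size, and SNR threshold are illustrative assumptions and do not reproduce the paper's POMDP solution or its outage analysis.

import numpy as np

rng = np.random.default_rng(8)

def simulate(policy, n_slots=100_000, b_max=10, harvest_p=0.3,
             snr_scale=1.0, snr_min=1.0):
    # Fraction of slots in outage; policy(b) gives the energy units to spend
    # from battery level b, capped by the available energy.
    battery, outages = 0, 0
    for _ in range(n_slots):
        battery = min(b_max, battery + (rng.random() < harvest_p))
        p = min(policy(battery), battery)
        battery -= p
        gain = rng.exponential(snr_scale)     # Rayleigh-fading power gain
        if p * gain < snr_min:                # outage if received SNR too low
            outages += 1
    return outages / n_slots

fixed = lambda b: 1                                            # fixed-power benchmark
threshold = lambda b: 2 if b >= 5 else (1 if b >= 1 else 0)    # threshold in battery state
print("fixed power     :", simulate(fixed))
print("threshold policy:", simulate(threshold))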
Abstract:
The uncertainty in material properties and traffic characterization in the design of flexible pavements has led to significant efforts in recent years to incorporate reliability methods and probabilistic design procedures for the design, rehabilitation, and maintenance of pavements. In the mechanistic-empirical (ME) design of pavements, despite the fact that there are multiple failure modes, the design criteria applied in the majority of analytical pavement design methods guard only against fatigue cracking and subgrade rutting, which are usually considered as independent failure events. This study carries out the reliability analysis of a flexible pavement section for these failure criteria based on the first-order reliability method (FORM), the second-order reliability method (SORM), and crude Monte Carlo simulation. Through a sensitivity analysis, the most critical parameter affecting the design reliability for both fatigue and rutting failure criteria was identified as the surface layer thickness. However, reliability analysis in pavement design is most useful if it can be efficiently and accurately applied to components of pavement design and the combination of these components in an overall system analysis. The study shows that for the pavement section considered, there is a high degree of dependence between the two failure modes, and demonstrates that the probability of simultaneous occurrence of failures can be almost as high as the probability of the component failures. Thus, the need to consider system reliability in pavement analysis is highlighted, and the study indicates that improving pavement performance should be approached by reducing this undesirable event of simultaneous failure and not merely by considering the more critical failure mode. Furthermore, this probability of simultaneous occurrence of failures is seen to increase considerably with small increments in the mean traffic loads, which also results in wider system reliability bounds. The study also advocates the use of narrow bounds on the probability of failure, which provide a better estimate of the probability of failure, as validated against results obtained from Monte Carlo simulation (MCS).
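A hedged sketch of the crude Monte Carlo part of such a study: two correlated limit states standing in for fatigue and rutting, their component and joint failure probabilities, and the simple series-system bounds. The limit-state expressions and distributions below are invented for illustration and are not the ME design models of the paper.

import numpy as np

rng = np.random.default_rng(9)
n = 1_000_000

# Shared random inputs (illustrative): layer thickness, modulus, traffic
h = rng.normal(150.0, 12.0, n)                 # surface thickness, mm
E = rng.lognormal(np.log(3000.0), 0.2, n)      # asphalt modulus, MPa
N = rng.lognormal(np.log(2e6), 0.35, n)        # traffic repetitions

# Stand-in limit states: allowable repetitions minus traffic demand
g_fatigue = 4e6 * (h / 150.0) ** 3 * (E / 3000.0) ** 0.8 - N
g_rutting = 5e6 * (h / 150.0) ** 2.5 / (E / 3000.0) ** 0.2 - N

f1, f2 = g_fatigue < 0, g_rutting < 0
p1, p2, p12 = f1.mean(), f2.mean(), (f1 & f2).mean()
p_sys = (f1 | f2).mean()
print("P(fatigue) = %.4f  P(rutting) = %.4f  P(both) = %.4f" % (p1, p2, p12))
print("system failure probability:", p_sys,
      "simple bounds:", max(p1, p2), "to", min(p1 + p2, 1.0))

Because both limit states share the same traffic and thickness variables, the joint failure probability comes out close to the component probabilities, mirroring the dependence effect highlighted in the abstract.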
Abstract:
The problem of identification of multi-component and (or) spatially varying earthquake support motions based on measured responses in instrumented structures is considered. The governing equations of motion are cast in the state space form and a time domain solution to the input identification problem is developed based on the Kalman and particle filtering methods. The method allows for noise in measured responses, imperfections in the mathematical model for the structure, and possible nonlinear behavior of the structure. The unknown support motions are treated as hypothetical additional system states and a prior model for these motions is taken to be given in terms of white noise processes. For linear systems, the solution is developed within the Kalman filtering framework while, for nonlinear systems, Monte Carlo simulation based particle filtering tools are employed. In the latter case, the question of controlling sampling variance based on the idea of Rao-Blackwellization is also explored. Illustrative examples include identification of multi-component and spatially varying support motions in linear/nonlinear structures.
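A minimal sketch of the linear-system case: the unknown support motion is appended to the state vector with a white-noise (random-walk) prior and estimated by a standard Kalman filter from noisy acceleration measurements of an assumed single-degree-of-freedom structure. The structure, excitation, and tuning values are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(10)

# SDOF structure driven by an unknown support acceleration a_g; state z = [x, v, a_g]
wn, zeta, dt, n = 2.0 * np.pi, 0.05, 0.005, 4000
A = np.array([[1.0, dt, 0.0],
              [-wn**2 * dt, 1.0 - 2.0 * zeta * wn * dt, -dt],
              [0.0, 0.0, 1.0]])                       # a_g modeled as a random walk
H = np.array([[-wn**2, -2.0 * zeta * wn, -1.0]])      # measured relative acceleration
Q = np.diag([0.0, 0.0, 0.05])                         # white-noise prior on a_g (assumed tuning)
R = np.array([[0.01]])

# Simulate "measured" data with a sinusoidal support motion
t = np.arange(n) * dt
ag_true = 0.8 * np.sin(2.0 * np.pi * 0.8 * t)
x = v = 0.0
y = np.zeros(n)
for k in range(n):
    acc = -2.0 * zeta * wn * v - wn**2 * x - ag_true[k]
    y[k] = acc + rng.normal(0.0, 0.1)
    x, v = x + v * dt, v + acc * dt

# Kalman filter with the augmented state
zf, P = np.zeros(3), np.eye(3)
ag_est = np.zeros(n)
for k in range(n):
    zf, P = A @ zf, A @ P @ A.T + Q                   # predict
    S = H @ P @ H.T + R
    Kg = P @ H.T @ np.linalg.inv(S)                   # Kalman gain
    zf = zf + (Kg @ (y[k] - H @ zf)).ravel()          # update
    P = (np.eye(3) - Kg @ H) @ P
    ag_est[k] = zf[2]

print("RMS error of the identified support motion:",
      np.sqrt(np.mean((ag_est[n // 2:] - ag_true[n // 2:]) ** 2)))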
Abstract:
This paper addresses the problem of finding optimal power control policies for wireless energy harvesting sensor (EHS) nodes with automatic repeat request (ARQ)-based packet transmissions. The EHS harvests energy from the environment according to a Bernoulli process; and it is required to operate within the constraint of energy neutrality. The EHS obtains partial channel state information (CSI) at the transmitter through the link-layer ARQ protocol, via the ACK/NACK feedback messages, and uses it to adapt the transmission power for the packet (re)transmission attempts. The underlying wireless fading channel is modeled as a finite state Markov chain with known transition probabilities. Thus, the goal of the power management policy is to determine the best power setting for the current packet transmission attempt, so as to maximize a long-run expected reward such as the expected outage probability. The problem is addressed in a decision-theoretic framework by casting it as a partially observable Markov decision process (POMDP). Due to the large size of the state-space, the exact solution to the POMDP is computationally expensive. Hence, two popular approximate solutions are considered, which yield good power management policies for the transmission attempts. Monte Carlo simulation results illustrate the efficacy of the approach and show that the approximate solutions significantly outperform conventional approaches.
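A hedged sketch of one building block of the POMDP formulation described above: the belief update over a finite-state Markov channel from ACK/NACK feedback. The two-state channel, its transition probabilities, and the per-state packet success probabilities are assumptions; no approximate POMDP solver is implemented here.

import numpy as np

# Two-state Markov channel with known transition probabilities (assumed values)
P_trans = np.array([[0.9, 0.1],      # bad  -> {bad, good}
                    [0.2, 0.8]])     # good -> {bad, good}

def packet_success_prob(state, power):
    # Assumed packet success probabilities per channel state and power level
    table = {(0, 1): 0.2, (0, 2): 0.5, (1, 1): 0.7, (1, 2): 0.95}
    return table[(state, power)]

def belief_update(belief, power, ack):
    # POMDP belief update over the channel state from one ACK/NACK observation
    predicted = belief @ P_trans                       # channel evolves
    like = np.array([packet_success_prob(s, power) for s in (0, 1)])
    like = like if ack else 1.0 - like                 # observation likelihood
    posterior = predicted * like
    return posterior / posterior.sum()

b = np.array([0.5, 0.5])
for power, ack in [(1, False), (2, True), (1, True)]:
    b = belief_update(b, power, ack)
    print("belief over {bad, good} channel states:", np.round(b, 3))

A power management policy would then map this belief (together with the battery state) to the transmit power for the next attempt.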
Abstract:
Stochastic modelling is a useful way of simulating complex hard-rock aquifers as hydrological properties (permeability, porosity etc.) can be described using random variables with known statistics. However, very few studies have assessed the influence of topological uncertainty (i.e. the variability of thickness of conductive zones in the aquifer), probably because it is not easy to retrieve accurate statistics of the aquifer geometry, especially in a hard-rock context. In this paper, we assessed the potential of using geophysical surveys to describe the geometry of a hard-rock aquifer in a stochastic modelling framework. The study site was a small experimental watershed in South India, where the aquifer consisted of a clayey to loamy-sandy zone (regolith) underlain by a conductive fissured rock layer (protolith) and the unweathered gneiss (bedrock) at the bottom. The spatial variability of the thickness of the regolith and fissured layers was estimated from electrical resistivity tomography (ERT) profiles, which were performed along a few cross sections in the watershed. For stochastic analysis using Monte Carlo simulation, the generated random layer thickness was made conditional on the available data from the geophysics. In order to simulate steady state flow in the irregular domain with variable geometry, we used an isoparametric finite element method to discretize the flow equation over an unstructured grid with irregular hexahedral elements. The results indicated that the spatial variability of the layer thickness had a significant effect on reducing the simulated effective steady seepage flux and that using the conditional simulations reduced the uncertainty of the simulated seepage flux. In conclusion, combining information on the aquifer geometry obtained from geophysical surveys with stochastic modelling is a promising methodology to improve the simulation of groundwater flow in complex hard-rock aquifers. (C) 2013 Elsevier B.V. All rights reserved.
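A minimal sketch of conditional stochastic simulation of layer thickness along one cross section: the thickness is modelled as a Gaussian random field with an exponential covariance and conditioned on a few ERT-derived values. The measurement locations, values, and covariance parameters are assumptions for illustration, not the study's data.

import numpy as np

rng = np.random.default_rng(11)

# Grid along a cross section (m) and assumed ERT-derived regolith thicknesses
xg = np.linspace(0.0, 500.0, 101)
x_obs = np.array([50.0, 200.0, 350.0, 480.0])
t_obs = np.array([12.0, 18.0, 9.0, 15.0])      # thickness at observed points, m

mean, sill, corr_len = 14.0, 9.0, 120.0        # prior statistics (illustrative)

def cov(a, b):
    # Exponential covariance between two sets of locations
    return sill * np.exp(-np.abs(a[:, None] - b[None, :]) / corr_len)

# Conditional Gaussian: condition the grid values on the observed points
C_oo = cov(x_obs, x_obs) + 1e-6 * np.eye(len(x_obs))
C_go = cov(xg, x_obs)
w = C_go @ np.linalg.inv(C_oo)
cond_mean = mean + w @ (t_obs - mean)
cond_cov = cov(xg, xg) - w @ C_go.T

# Monte Carlo: draw conditional realizations of the layer thickness
L = np.linalg.cholesky(cond_cov + 1e-6 * np.eye(len(xg)))
realizations = cond_mean + (L @ rng.normal(size=(len(xg), 500))).T
print("thickness spread at x = 250 m:", realizations[:, 50].std(),
      "vs unconditional:", np.sqrt(sill))

Each realization would then define the geometry of one Monte Carlo flow simulation; conditioning on the geophysics narrows the spread of the simulated thickness, and hence of the seepage flux.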
Abstract:
Solid-solid collapse transitions in open framework structures are ubiquitous in nature. The real difficulty in understanding detailed microscopic aspects of such transitions in molecular systems arises from the interplay between the different energy and length scales involved, often mediated through a solvent. In this work we employ Monte-Carlo simulation to study the collapse transition in a model molecular system interacting via both isotropic and anisotropic interactions having different length and energy scales. The model we use is known as Mercedes-Benz (MB), which, for a specific set of parameters, sustains two solid phases: honeycomb and oblique. In order to study the temperature-induced collapse transition, we start with a metastable honeycomb solid and induce the transition by increasing temperature. The high-density oblique solid so formed has two characteristic length scales corresponding to the isotropic and anisotropic parts of the interaction potential. Contrary to common belief and classical nucleation theory, interestingly, we find linear strip-like nucleating clusters having significantly different order and average coordination number than the bulk stable phase. In the early stage of growth, the cluster grows as a linear strip, followed by branched and ring-like strips. The geometry of the growing cluster is a consequence of the delicate balance between the two types of interactions, which enables the dominance of stabilizing energy over destabilizing surface energy. The nucleus of the stable oblique phase is wetted by intermediate-order particles, which minimizes the surface free energy. In the case of the pressure-induced transition at low temperature, the collapsed state is a disordered solid. The disordered solid phase has diverse local quasi-stable structures along with oblique-solid-like domains. (C) 2013 AIP Publishing LLC.
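As a hedged illustration of the simulation machinery only, the sketch below performs Metropolis Monte Carlo displacement moves for a small two-dimensional system with a plain Lennard-Jones pair potential; it omits the anisotropic hydrogen-bond arms of the Mercedes-Benz model, and all parameters are illustrative.

import numpy as np

rng = np.random.default_rng(12)

def lj_energy(pos, box, eps=1.0, sigma=1.0):
    # Total Lennard-Jones energy with minimum-image periodic boundaries
    e = 0.0
    for i in range(len(pos) - 1):
        d = pos[i + 1:] - pos[i]
        d -= box * np.round(d / box)
        r2 = np.sum(d * d, axis=1)
        sr6 = (sigma ** 2 / r2) ** 3
        e += np.sum(4.0 * eps * (sr6 ** 2 - sr6))
    return e

# Small 2D system started from a square lattice (illustrative parameters)
box, T, n_side, n_sweeps, step = 6.0, 0.7, 6, 200, 0.15
g = (np.arange(n_side) + 0.5) * box / n_side
pos = np.array([[xx, yy] for xx in g for yy in g])
energy = lj_energy(pos, box)
accepted = 0
for _ in range(n_sweeps):
    for i in range(len(pos)):
        trial = pos.copy()
        trial[i] = (trial[i] + rng.uniform(-step, step, 2)) % box
        de = lj_energy(trial, box) - energy
        # Metropolis acceptance at temperature T (k_B = 1)
        if de < 0.0 or rng.random() < np.exp(-de / T):
            pos, energy, accepted = trial, energy + de, accepted + 1
print("acceptance ratio:", accepted / (n_sweeps * len(pos)))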