971 results for MONTE-CARLO SIMULATION
Abstract:
This paper arose from the work carried out for the Cullen/Uff Joint Inquiry into Train Protection Systems. It is concerned with the problem of evaluating the benefits of safety enhancements intended to avoid rare but catastrophic accidents, and with the role of Operations Research in that process. The problems include both input values and the representation of outcomes. A key input is the value of life. This paper briefly discusses why the value of life might vary from incident to incident and reviews alternative estimates before producing a 'best estimate' for rail. When the occurrence of an event is uncertain, the normal method is to apply a single 'expected' value. This paper argues that a more effective way of representing such situations is through Monte-Carlo simulation and demonstrates the methodology on a case study of the decision as to whether advanced train protection (ATP) should have been installed on a route to the west of London. This paper suggests that the output is more informative than traditional cost-benefit appraisals or engineering event tree approaches. It also shows that, unlike the result obtained with the traditional approach, the value of ATP on this route would be positive over 50% of the time.
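As an illustration of the contrast the abstract draws between a single 'expected' value and a full Monte Carlo representation, the sketch below samples rare accident occurrences and reports both the mean net benefit and the share of runs in which the safety enhancement pays off. It is not the paper's model: the accident rate, fatality distribution, value of life and ATP cost are invented placeholders, and only the structure of the appraisal follows the described approach.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 20_000                   # Monte Carlo trials
YEARS = 30                   # appraisal horizon (assumed)
ACCIDENT_RATE = 0.03         # assumed preventable accidents per year on the route
FATALITIES_MEAN = 6.0        # assumed mean fatalities per accident
VALUE_OF_LIFE = 3.0e6        # assumed value of a prevented fatality
ATP_COST = 10.0e6            # assumed cost of installing and maintaining ATP

# Sample the number of prevented accidents over the horizon, then the
# fatalities avoided in each, and convert the total to a monetary benefit.
accidents = rng.poisson(ACCIDENT_RATE * YEARS, size=N)
benefits = np.array([
    VALUE_OF_LIFE * rng.poisson(FATALITIES_MEAN, size=k).sum()
    for k in accidents
])
net_benefit = benefits - ATP_COST

print(f"expected net benefit: {net_benefit.mean():,.0f}")
print(f"share of trials with positive value of ATP: {(net_benefit > 0).mean():.1%}")
```

Reporting the whole distribution (here, the share of positive outcomes) rather than only the expected value is the point of the Monte Carlo representation.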
Abstract:
The rejoining kinetics of double-stranded DNA fragments, along with measurements of residual damage after postirradiation incubation, are often used as indicators of the biological relevance of the damage induced by ionizing radiation of different qualities. Although it is widely accepted that high-LET radiation-induced double-strand breaks (DSBs) tend to rejoin with kinetics slower than low-LET radiation-induced DSBs, possibly due to the complexity of the DSB itself, the nature of a slowly rejoining DSB-containing DNA lesion remains unknown. Using an approach that combines pulsed-field gel electrophoresis (PFGE) of fragmented DNA from human skin fibroblasts and a recently developed Monte Carlo simulation of radiation-induced DNA breakage and rejoining kinetics, we have tested the role of DSB-containing DNA lesions in the 8-kbp-5.7-Mbp fragment size range in determining the DSB rejoining kinetics. It is found that with low-LET X rays or high-LET alpha particles, DSB rejoining kinetics data obtained with PFGE can be computer-simulated assuming that DSB rejoining kinetics does not depend on spacing of breaks along the chromosomes. After analysis of DNA fragmentation profiles, the rejoining kinetics of X-ray-induced DSBs could be fitted by two components: a fast component with a half-life of 0.9 +/- 0.5 h and a slow component with a half-life of 16 +/- 9 h. For alpha particles, a fast component with a half-life of 0.7 +/- 0.4 h and a slow component with a half-life of 12 +/- 5 h along with a residual fraction of unrepaired breaks accounting for 8% of the initial damage were observed. In summary, it is shown that genomic proximity of breaks along a chromosome does not determine the rejoining kinetics, so the slowly rejoining breaks induced with higher frequencies after exposure to high-LET radiation (0.37 +/- 0.12) relative to low-LET radiation (0.22 +/- 0.07) can be explained on the basis of lesion complexity at the nanometer scale, known as locally multiply damaged sites. (c) 2005 by Radiation Research Society.
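The reported kinetics correspond to a biexponential decay with an optional residual of unrepaired breaks. The sketch below encodes that functional form using the half-lives, slow fractions and residual quoted in the abstract; the exact split between components (and whether the quoted slow fraction includes the residual) is an assumption made here for illustration.

```python
import numpy as np

def fraction_unrejoined(t_hours, f_fast, t_half_fast, t_half_slow, residual=0.0):
    """Fraction of initial DSBs still unrejoined at time t (biexponential decay)."""
    f_slow = 1.0 - f_fast - residual
    return (f_fast * 0.5 ** (t_hours / t_half_fast)
            + f_slow * 0.5 ** (t_hours / t_half_slow)
            + residual)

t = np.linspace(0.0, 24.0, 7)
# X rays: half-lives 0.9 h (fast) and 16 h (slow); slow fraction 0.22, so the
# fast fraction is taken as 0.78 (assumed split, no residual).
xray = fraction_unrejoined(t, f_fast=0.78, t_half_fast=0.9, t_half_slow=16.0)
# Alpha particles: 0.7 h and 12 h, slow fraction 0.37 plus an 8% residual,
# leaving an assumed fast fraction of 0.55.
alpha = fraction_unrejoined(t, f_fast=0.55, t_half_fast=0.7, t_half_slow=12.0,
                            residual=0.08)

for ti, x, a in zip(t, xray, alpha):
    print(f"t = {ti:5.1f} h   X rays: {x:.2f}   alpha: {a:.2f}")
```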
Abstract:
Reliable prediction of long-term medical device performance using computer simulation requires consideration of variability in the surgical procedure as well as patient-specific factors. However, even deterministic simulation of long-term failure processes for such devices is time- and resource-consuming, so including variability can lead to excessive times to achieve useful predictions. This study investigates the use of an accelerated probabilistic framework for predicting the likely performance envelope of a device and applies it to femoral prosthesis loosening in cemented hip arthroplasty.
A creep and fatigue damage failure model for bone cement, in conjunction with an interfacial fatigue model for the implant–cement interface, was used to simulate loosening of a prosthesis within a cement mantle. A deterministic set of trial simulations was used to account for variability of a set of surgical and patient factors, and a response surface method was used to perform and accelerate a Monte Carlo simulation to achieve an estimate of the likely range of prosthesis loosening. The proposed framework was used to conceptually investigate the influence of prosthesis selection and surgical placement on prosthesis migration.
Results demonstrate that the response surface method is capable of dramatically reducing the time to achieve convergence in the mean and variance of the predicted response variables. A critical requirement for realistic predictions is the size and quality of the initial training dataset used to generate the response surface, and further work is required to establish recommendations for the minimum number of initial trials. Results of this conceptual application predicted that loosening was sensitive to implant size and femoral width. Furthermore, different rankings of implant performance were predicted when only individual simulations (e.g. an average condition) were used to rank implants than when stochastic simulations were used. In conclusion, the proposed framework provides a viable approach to predicting realistic ranges of loosening behaviour for orthopaedic implants in reduced timeframes compared with conventional Monte Carlo simulations.
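A minimal sketch of the accelerated probabilistic framework described above: a small set of deterministic trials trains a polynomial response surface, which then stands in for the expensive simulation during the Monte Carlo stage. The "expensive simulation", the two input factors and their distributions are placeholders, not the actual creep/fatigue damage model.

```python
import numpy as np

rng = np.random.default_rng(1)

def expensive_simulation(implant_size, femoral_width):
    """Placeholder for a long-running finite-element loosening prediction."""
    return 0.3 - 0.05 * implant_size + 0.08 * femoral_width \
           + 0.02 * implant_size * femoral_width

# 1) Deterministic training trials over assumed factor ranges.
n_train = 25
X = rng.uniform([1.0, 35.0], [5.0, 55.0], size=(n_train, 2))
y = np.array([expensive_simulation(*x) for x in X])

# 2) Fit a quadratic response surface by least squares.
def design(X):
    s, w = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(s), s, w, s * w, s**2, w**2])

coef, *_ = np.linalg.lstsq(design(X), y, rcond=None)

# 3) Monte Carlo on the surrogate (essentially free per sample).
n_mc = 200_000
samples = np.column_stack([rng.normal(3.0, 0.8, n_mc),     # implant size (assumed)
                           rng.normal(45.0, 4.0, n_mc)])   # femoral width (assumed)
migration = design(samples) @ coef

print(f"mean predicted migration: {migration.mean():.3f}")
print(f"95% performance envelope: {np.percentile(migration, [2.5, 97.5])}")
```

The quality of the envelope depends entirely on step 1, which is the training-dataset issue the abstract flags.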
Abstract:
An important issue in risk analysis is the distinction between epistemic and aleatory uncertainties. In this paper, the use of distinct representation formats for aleatory and epistemic uncertainties is advocated, the latter being modelled by sets of possible values. Modern uncertainty theories based on convex sets of probabilities are known to be instrumental for hybrid representations where aleatory and epistemic components of uncertainty remain distinct. Simple uncertainty representation techniques based on fuzzy intervals and p-boxes are used in practice. This paper outlines a risk analysis methodology from elicitation of knowledge about parameters to decision. It proposes an elicitation methodology where the chosen representation format depends on the nature and the amount of available information. Uncertainty propagation methods then blend Monte Carlo simulation and interval analysis techniques. Nevertheless, results provided by these techniques, often in terms of probability intervals, may be too complex for a decision-maker to interpret, and we therefore propose to compute a unique indicator of the likelihood of risk, called the confidence index. It explicitly accounts for the decision-maker’s attitude in the face of ambiguity. This step takes place at the end of the risk analysis process, when no further collection of evidence is possible that might reduce the ambiguity due to epistemic uncertainty. This last feature stands in contrast with the Bayesian methodology, where epistemic uncertainties on input parameters are modelled by single subjective probabilities at the beginning of the risk analysis process.
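A stripped-down sketch of the hybrid propagation idea: the aleatory input is sampled by Monte Carlo, the epistemic input is carried as an interval, so each trial yields an interval and the analysis ends with a probability interval and a single confidence index. The model, the bounds, the threshold and the Hurwicz-style weighting used for the index are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(2)

N = 50_000
THRESHOLD = 10.0                                        # failure if output exceeds this
exposure = rng.lognormal(mean=1.0, sigma=0.5, size=N)   # aleatory variable (assumed)
k_lo, k_hi = 1.5, 3.0                                   # epistemic factor, known only as an interval

# Interval propagation inside the Monte Carlo loop (monotone model: out = k * x).
out_lo, out_hi = k_lo * exposure, k_hi * exposure
p_low = np.mean(out_lo > THRESHOLD)    # risk if the epistemic factor is favourable
p_high = np.mean(out_hi > THRESHOLD)   # risk if it is unfavourable

alpha = 0.6   # decision-maker's pessimism weight in the face of ambiguity (assumed)
confidence_index = alpha * p_high + (1.0 - alpha) * p_low

print(f"probability interval: [{p_low:.3f}, {p_high:.3f}]")
print(f"confidence index (alpha={alpha}): {confidence_index:.3f}")
```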
Abstract:
Radiocarbon-dated sediment cores from six lakes in the Ahklun Mountains, south-western Alaska, were used to interpolate the ages of late Quaternary tephra beds ranging in age from 25.4 to 0.4 ka. The lakes are located downwind of the Aleutian Arc and Alaska Peninsula volcanoes in the northern Bristol Bay area between 159° and 161°W at around 60°N. Sedimentation-rate age models for each lake were based on a published spline-fit procedure that uses Monte Carlo simulation to determine age model uncertainty. In all, 62 ¹⁴C ages were used to construct the six age models, including 23 ages presented here for the first time. The age model from Lone Spruce Pond is based on 18 ages, and is currently the best-resolved Holocene age model available from the region, with an average 2σ age uncertainty of about ±109 years over the past 14.5 ka. The sedimentary sequence from Lone Spruce Pond contains seven tephra beds, more than previously found in any other lake in the area. Of the 26 radiocarbon-dated tephra beds at the six lakes and from a soil pit, seven are correlated between two or more sites based on their ages. The major-element geochemistry of glass shards from most of these tephra beds supports the age-based correlations. The remaining tephra beds appear to be present at only one site based on their unique geochemistry or age. The 5.8 ka tephra is similar to the widespread Aniakchak tephra [3.7 ± 0.2 (1σ) ka], but can be distinguished conclusively based on its trace-element geochemistry. The 3.1 and 0.4 ka tephras have glass major- and trace-element geochemical compositions indistinguishable from prominent Aniakchak tephra, and might represent redeposited beds. Only two tephra beds are found in all lakes: the Aniakchak tephra (3.7 ± 0.2 ka) and Tephra B (6.1 ± 0.3 ka). The tephra beds can be used as chronostratigraphic markers for other sedimentary sequences in the region, including cores from Cascade and Sunday lakes, which were previously undated and were analyzed in this study to correlate with the new regional tephrostratigraphy. © 2012 John Wiley & Sons, Ltd.
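A toy version of a Monte Carlo age-depth model of the kind referred to above: each dated horizon's age is resampled from its uncertainty, a monotonic age-depth relation is built through the sampled points, and the spread of interpolated ages at a tephra depth gives the age uncertainty. Depths, ages and errors are invented, and the published spline-fit procedure is considerably more sophisticated than this linear interpolation.

```python
import numpy as np

rng = np.random.default_rng(3)

depth_cm = np.array([10.0, 60.0, 120.0, 200.0, 310.0])   # dated horizons (invented)
age_ka   = np.array([0.5, 2.1, 5.9, 9.8, 14.5])          # calibrated ages (invented)
err_ka   = np.array([0.05, 0.10, 0.12, 0.20, 0.25])      # 1-sigma errors (invented)
tephra_depth = 150.0                                       # depth of a tephra bed to date

ages_at_tephra = []
for _ in range(5000):
    sampled = rng.normal(age_ka, err_ka)          # resample each dated age
    sampled = np.maximum.accumulate(sampled)      # enforce monotonic deposition
    ages_at_tephra.append(np.interp(tephra_depth, depth_cm, sampled))

ages_at_tephra = np.array(ages_at_tephra)
lo, hi = np.percentile(ages_at_tephra, [2.5, 97.5])
print(f"tephra age: {np.median(ages_at_tephra):.2f} ka  (2-sigma range {lo:.2f}-{hi:.2f} ka)")
```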
Abstract:
A two-thermocouple sensor characterization method for use in variable flow applications is proposed. Previous offline methods for constant velocity flow are extended using sliding data windows and polynomials to accommodate variable velocity. Analysis of Monte-Carlo simulation studies confirms that the unbiased and consistent parameter estimator outperforms alternatives in the literature and has the added advantage of not requiring a priori knowledge of the time constant ratio of thermocouples. Experimental results from a test rig are also presented. © 2008 The Institute of Measurement and Control.
Abstract:
We propose a low-complexity closed-loop spatial multiplexing method with limited feedback over multi-input-multi-output (MIMO) fading channels. The transmit adaptation is simply performed by selecting transmit antennas (or substreams) by comparing their signal-to-noise ratios to a given threshold with a fixed nonadaptive constellation and fixed transmit power per substream. We analyze the performance of the proposed system by deriving closed-form expressions for spectral efficiency, average transmit power, and bit error rate (BER). Depending on practical system design constraints, the threshold is chosen to maximize the spectral efficiency (or minimize the average BER) subject to average transmit power and average BER (or spectral efficiency) constraints, respectively. We present numerical and Monte Carlo simulation results that validate our analysis. Compared to open-loop spatial multiplexing and other approaches that select the best antenna subset in spatial multiplexing, the numerical results illustrate that the proposed technique obtains significant power gains for the same BER and spectral efficiency. We also provide numerical results that show improvement over rate-adaptive orthogonal space-time block coding, which requires highly complex constellation adaptation. We analyze the impact of feedback delay using analytical and Monte Carlo approaches. The proposed approach is arguably the simplest possible adaptive spatial multiplexing system from an implementation point of view. However, our approach and analysis can be extended to other systems using multiple constellations and power levels.
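The threshold-based selection rule lends itself to a direct Monte Carlo sketch: each substream is activated only when its instantaneous SNR exceeds the threshold, with a fixed constellation and fixed per-substream power, and spectral efficiency and BER are averaged over fading realizations. The BPSK BER expression and all parameter values below are illustrative assumptions, not the paper's system settings.

```python
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(4)

N_TX, TRIALS = 4, 200_000
AVG_SNR_DB, THRESH_DB = 10.0, 6.0
avg_snr = 10 ** (AVG_SNR_DB / 10)
thresh = 10 ** (THRESH_DB / 10)
BITS_PER_SYMBOL = 1                          # fixed, non-adaptive BPSK constellation

# Rayleigh fading: per-substream instantaneous SNR is exponentially distributed.
snr = rng.exponential(avg_snr, size=(TRIALS, N_TX))
active = snr > thresh                        # one bit of feedback per substream

spectral_eff = BITS_PER_SYMBOL * active.sum(axis=1).mean()
ber = 0.5 * erfc(np.sqrt(snr))               # BPSK bit error rate at a given SNR
avg_ber = ber[active].mean()                 # averaged over active substreams only

print(f"average spectral efficiency    : {spectral_eff:.2f} bit/s/Hz")
print(f"average BER (active substreams): {avg_ber:.2e}")
```

Sweeping the threshold trades spectral efficiency against BER, which is the optimization the abstract describes.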
Abstract:
This work investigates the end-to-end performance of randomized distributed space-time codes with complex Gaussian distribution, when employed in a wireless relay network. The relaying nodes are assumed to adopt a decode-and-forward strategy and transmissions are affected by small and large scale fading phenomena. Extremely tight, analytical approximations of the end-to-end symbol error probability and of the end-to-end outage probability are derived and successfully validated through Monte-Carlo simulation. For the high signal-to-noise ratio regime, a simple, closed-form expression for the symbol error probability is further provided.
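A minimal example of the kind of Monte Carlo check used to validate such analysis, reduced here to a single decode-and-forward relay over Rayleigh fading, where the end-to-end link is in outage if either hop falls below the SNR threshold. The randomized space-time coding, large-scale fading and network topology of the paper are not reproduced; this is only the baseline sampling-versus-closed-form comparison.

```python
import numpy as np

rng = np.random.default_rng(5)

TRIALS = 1_000_000
gamma_bar_1, gamma_bar_2 = 8.0, 12.0      # average SNR of the two hops (assumed, linear)
gamma_th = 4.0                            # outage threshold (assumed, linear)

g1 = rng.exponential(gamma_bar_1, TRIALS)  # Rayleigh fading -> exponential SNR
g2 = rng.exponential(gamma_bar_2, TRIALS)
outage_mc = np.mean((g1 < gamma_th) | (g2 < gamma_th))

# Closed form for this simple case: P_out = 1 - (1 - P1)(1 - P2),
# with P_i = 1 - exp(-gamma_th / gamma_bar_i) for exponential SNR.
p1 = 1 - np.exp(-gamma_th / gamma_bar_1)
p2 = 1 - np.exp(-gamma_th / gamma_bar_2)
outage_analytic = 1 - (1 - p1) * (1 - p2)

print(f"Monte Carlo outage : {outage_mc:.4f}")
print(f"analytical outage  : {outage_analytic:.4f}")
```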
Abstract:
Accretion disk winds are thought to produce many of the characteristic features seen in the spectra of active galactic nuclei (AGNs) and quasi-stellar objects (QSOs). These outflows also represent a natural form of feedback between the central supermassive black hole and its host galaxy. The mechanism for driving this mass loss remains unknown, although radiation pressure mediated by spectral lines is a leading candidate. Here, we calculate the ionization state of, and emergent spectra for, the hydrodynamic simulation of a line-driven disk wind previously presented by Proga & Kallman. To achieve this, we carry out a comprehensive Monte Carlo simulation of the radiative transfer through, and energy exchange within, the predicted outflow. We find that the wind is much more ionized than originally estimated. This is in part because it is much more difficult to shield any wind regions effectively when the outflow itself is allowed to reprocess and redirect ionizing photons. As a result, the calculated spectrum that would be observed from this particular outflow solution would not contain the ultraviolet spectral lines that are observed in many AGN/QSOs. Furthermore, the wind is so highly ionized that line driving would not actually be efficient. This does not necessarily mean that line-driven winds are not viable. However, our work does illustrate that in order to arrive at a self-consistent model of line-driven disk winds in AGN/QSO, it will be critical to include a more detailed treatment of radiative transfer and ionization in the next generation of hydrodynamic simulations.
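At its core, a Monte Carlo radiative-transfer calculation propagates photon packets by sampling optical depths and scattering events. The toy below does this for a 1D slab with isotropic scattering only; the actual calculation tracks ionization state and energy exchange through a multidimensional wind, which this sketch does not attempt.

```python
import numpy as np

rng = np.random.default_rng(6)

N_PACKETS = 50_000
TAU_SLAB = 3.0                     # total vertical optical depth of the slab (assumed)

escaped_top = 0
for _ in range(N_PACKETS):
    tau_pos, mu = 0.0, 1.0         # packet starts at the base, moving upward
    while True:
        tau_step = -np.log(rng.random())       # sample optical depth to next event
        tau_pos += mu * tau_step
        if tau_pos >= TAU_SLAB:
            escaped_top += 1                   # packet leaves the slab
            break
        if tau_pos <= 0.0:
            break                              # packet returns to the base
        mu = 2.0 * rng.random() - 1.0          # isotropic re-emission direction

print(f"fraction of packets escaping the slab: {escaped_top / N_PACKETS:.3f}")
```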
Abstract:
Traditional experimental economics methods often consume enormous resources in the form of qualified human participants, and the inconsistency of a participant’s decisions across repeated trials prevents sensitivity analyses. The problem can be solved if computer agents are capable of generating behaviors similar to those of the given participants in experiments. An experimental-economics-based analysis method is presented to extract deep information from questionnaire data and emulate any number of participants. Taking customers’ willingness to purchase electric vehicles (EVs) as an example, multi-layer correlation information is extracted from a limited number of questionnaires. Agents mimicking the surveyed potential customers are modelled by matching the probabilistic distributions of their willingness embedded in the questionnaires. The validity of both the model and the algorithm is confirmed by comparing the agent-based Monte Carlo simulation results with the questionnaire-based deduction results. With the aid of the agent models, the effects of minority agents with specific preferences on the results are also discussed.
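A skeletal sketch of the emulation idea: build an empirical willingness distribution from questionnaire answers, instantiate any number of agents by resampling it, and run a Monte Carlo purchase simulation. The questionnaire data, price premiums and single-attribute willingness model below are fabricated placeholders; the paper's multi-layer correlation extraction is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(7)

# Placeholder questionnaire data: each respondent's maximum acceptable EV price
# premium (in thousands), standing in for the multi-layer survey information.
questionnaire_premiums = np.array([2, 5, 5, 8, 10, 10, 12, 15, 20, 25], float)

def spawn_agents(n):
    """Emulate n potential customers by resampling the empirical distribution."""
    return rng.choice(questionnaire_premiums, size=n, replace=True)

def simulate_uptake(price_premium, n_agents=100_000):
    agents = spawn_agents(n_agents)
    return np.mean(agents >= price_premium)      # share of agents willing to purchase

for premium in (3, 8, 15):
    print(f"premium {premium:>2}k: expected uptake {simulate_uptake(premium):.1%}")
```

Because the agents are cheap, the same simulation can be rerun with perturbed inputs, which is the sensitivity analysis that repeated human trials cannot support.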
Abstract:
This paper proposes a continuous-time Markov chain (CTMC) based sequential analytical approach for composite generation and transmission system reliability assessment. The basic idea is to construct a CTMC model for the composite system and to perform sequential analyses based on it. Various kinds of reliability indices can be obtained, including expectation, variance, frequency, duration and probability distribution. To reduce the dimension of the state space, the traditional CTMC modeling approach is modified by merging all high-order contingencies into a single state, which can be calculated by Monte Carlo simulation (MCS). A state mergence technique is then developed to integrate all normal states and further reduce the dimension of the CTMC model. Moreover, a time discretization method is presented for the CTMC model calculation. Case studies are performed on the RBTS and a modified IEEE 300-bus test system. The results indicate that sequential reliability assessment can be performed by the proposed approach. Compared with the traditional sequential Monte Carlo simulation method, the proposed method is more efficient, especially for small-scale or highly reliable power systems.
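For context, the sketch below shows the traditional sequential Monte Carlo simulation that the proposed CTMC approach is benchmarked against: components alternate between up and down states with exponentially distributed sojourn times, and reliability indices are read off the synthetic chronology. The two-unit system, rates and load level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)

HOURS = 1_000_000                      # length of the synthetic chronology
units = [                              # (capacity MW, failure rate /h, repair rate /h)
    (60.0, 1 / 1500.0, 1 / 50.0),
    (60.0, 1 / 1200.0, 1 / 40.0),
]
LOAD = 70.0                            # constant demand, MW (assumed)

def synthetic_capacity(cap, lam, mu, horizon):
    """Chronological up/down capacity profile of one unit, sampled hourly."""
    profile = np.zeros(int(horizon))
    state, t = 1, 0.0                  # start in the up state
    while t < horizon:
        dwell = rng.exponential(1 / lam if state else 1 / mu)
        lo, hi = int(t), min(int(t + dwell), int(horizon))
        profile[lo:hi] = cap if state else 0.0
        t += dwell
        state ^= 1                     # toggle up/down
    return profile

total = sum(synthetic_capacity(c, l, m, HOURS) for c, l, m in units)
deficit = np.maximum(LOAD - total, 0.0)

print(f"loss-of-load probability     : {(deficit > 0).mean():.4f}")
print(f"time-averaged unserved power : {deficit.mean():.3f} MW")
```

The long chronology needed to see rare deficits is exactly the inefficiency in very reliable systems that the CTMC-based approach aims to avoid.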
Abstract:
With the increasing utilization of combined heat and power (CHP) plants, electrical, gas, and thermal systems are becoming tightly integrated in the urban energy system (UES). However, the three systems are usually planned and operated separately, ignoring their interactions and coordination. To address this issue, the coupling points of the different systems in the UES are described by the energy hub model. With this model, an integrated load curtailment method is proposed for the UES. A Monte Carlo simulation based approach is then developed to assess the reliability of the coordinated energy supply systems. Based on this approach, a reliability-optimal energy hub planning method is proposed to accommodate higher renewable energy penetration. Numerical studies indicate that the proposed approach is able to quantify the UES reliability for different structures. Also, an optimal energy hub planning scheme can be determined to ensure the reliability of the UES under high renewable penetration.
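An illustrative sketch of Monte Carlo reliability sampling around an energy hub: component availabilities are sampled per trial, a simple dispatch converts carrier inputs (grid electricity, gas) into electricity and heat through assumed efficiencies, and the share of trials in which both loads are met gives a supply reliability. The hub layout, efficiencies, capacities and availabilities are all assumptions, not the paper's test system.

```python
import numpy as np

rng = np.random.default_rng(9)

TRIALS = 50_000
ELEC_LOAD, HEAT_LOAD = 40.0, 30.0          # MW electric, MW thermal (assumed)
GRID_CAP, GAS_CAP = 50.0, 80.0             # carrier input limits (assumed)
ETA_T, ETA_CHP_E, ETA_CHP_H, ETA_B = 0.98, 0.35, 0.45, 0.90   # assumed efficiencies

# Sample availability of transformer, CHP unit and gas boiler in every trial.
avail = rng.random((TRIALS, 3)) < np.array([0.995, 0.97, 0.99])

served = 0
for a_t, a_chp, a_b in avail:
    # Simple dispatch heuristic: CHP covers heat first, the boiler takes the
    # remaining gas, and the transformer plus CHP electricity serve the
    # electrical load.
    gas_to_chp = min(HEAT_LOAD / ETA_CHP_H, GAS_CAP) if a_chp else 0.0
    heat = gas_to_chp * ETA_CHP_H + (ETA_B * (GAS_CAP - gas_to_chp) if a_b else 0.0)
    elec = gas_to_chp * ETA_CHP_E + (ETA_T * GRID_CAP if a_t else 0.0)
    served += (heat >= HEAT_LOAD) and (elec >= ELEC_LOAD)

print(f"probability that both loads are fully served: {served / TRIALS:.4f}")
```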
Secure D2D Communication in Large-Scale Cognitive Cellular Networks: A Wireless Power Transfer Model
Abstract:
In this paper, we investigate secure device-to-device (D2D) communication in energy harvesting large-scale cognitive cellular networks. The energy-constrained D2D transmitter harvests energy from multiantenna-equipped power beacons (PBs), and communicates with the corresponding receiver using the spectrum of the primary base stations (BSs). We introduce a power transfer model and an information signal model to enable wireless energy harvesting and secure information transmission. In the power transfer model, three wireless power transfer (WPT) policies are proposed: 1) co-operative power beacons (CPB) power transfer, 2) best power beacon (BPB) power transfer, and 3) nearest power beacon (NPB) power transfer. To characterize the power transfer reliability of the proposed three policies, we derive new expressions for the exact power outage probability. Moreover, the analysis of the power outage probability is extended to the case when PBs are equipped with large antenna arrays. In the information signal model, we present a new comparative framework with two receiver selection schemes: 1) best receiver selection (BRS), where the receiver with the strongest channel is selected; and 2) nearest receiver selection (NRS), where the nearest receiver is selected. To assess the secrecy performance, we derive new analytical expressions for the secrecy outage probability and the secrecy throughput considering the two receiver selection schemes using the proposed WPT policies. We present Monte Carlo simulation results to corroborate our analysis and show: 1) secrecy performance improves with increasing densities of PBs and D2D receivers due to larger multiuser diversity gain; 2) CPB achieves better secrecy performance than BPB and NPB but consumes more power; and 3) BRS achieves better secrecy performance than NRS but demands more instantaneous feedback and overhead. A pivotal conclusion is reached that with an increasing number of antennas at the PBs, NPB offers secrecy performance comparable to that of BPB but with lower complexity.
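A bare-bones Monte Carlo sketch of a secrecy-outage estimate of the kind used to corroborate such analysis: Rayleigh-faded links to the legitimate D2D receiver and to an eavesdropper, with a secrecy outage declared when the instantaneous secrecy capacity falls below the target rate. The wireless power transfer, receiver selection and stochastic-geometry layout are omitted; all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(10)

TRIALS = 1_000_000
avg_snr_legit, avg_snr_eave = 15.0, 3.0     # average SNRs, linear (assumed)
R_s = 1.0                                    # target secrecy rate, bit/s/Hz (assumed)

gamma_b = rng.exponential(avg_snr_legit, TRIALS)   # legitimate D2D link SNR
gamma_e = rng.exponential(avg_snr_eave, TRIALS)    # eavesdropper link SNR

secrecy_capacity = np.maximum(np.log2(1 + gamma_b) - np.log2(1 + gamma_e), 0.0)
p_so = np.mean(secrecy_capacity < R_s)
throughput = R_s * (1 - p_so)                      # simple secrecy-throughput proxy

print(f"secrecy outage probability: {p_so:.4f}")
print(f"secrecy throughput        : {throughput:.3f} bit/s/Hz")
```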
Abstract:
In this paper, a novel super-resolution nanolens, based on the photon nanojet effect through dielectric nanostructures at visible wavelengths, is proposed. The nanolens is made from the polymer SU-8 and consists of an array of parallel semi-cylinders. This paper focuses on the design of the lens by numerical simulation with the finite-difference time-domain method and on its nanofabrication by grayscale electron beam lithography combined with a casting/bonding/lift-off transfer process. Monte Carlo simulation of the injected charge distribution, together with development modeling, was applied to define the resultant 3D profile in PMMA used as the template for the lens shape. After the casting/bonding/lift-off process, the fabricated nanolens in SU-8 has the desired lens shape, very close to that of the PMMA template, indicating that the pattern transfer process developed in this work can be reliably applied not only to the fabrication of the lens but also to other 3D nanopatterns in general. The light distribution through the lens near its surface was initially characterized by a scanning near-field optical microscope, showing a well-defined focused image of the designed grating lines. This focusing function supports the prospect of developing a novel nanolithography technique based on the photon nanojet effect.
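A heavily simplified toy of Monte Carlo exposure modelling for electron-beam lithography: electrons take straight free-flight segments of exponentially distributed length between scattering events, accumulate random angular deflections, and the lateral positions of the scattering events are histogrammed as a crude proxy for the injected charge distribution. Step length, deflection spread and the 2D geometry are arbitrary placeholders, not a physical model of PMMA exposure.

```python
import numpy as np

rng = np.random.default_rng(11)

N_ELECTRONS, N_EVENTS = 10_000, 40
MEAN_FREE_PATH = 20.0          # nm between scattering events (placeholder)
DEFLECTION_SIGMA = 0.4         # rad of angular spread per event (placeholder)

radii = []
for _ in range(N_ELECTRONS):
    x, theta = 0.0, 0.0                        # enter the resist heading straight down
    for _ in range(N_EVENTS):
        step = rng.exponential(MEAN_FREE_PATH)
        x += step * np.sin(theta)              # lateral displacement of this segment
        radii.append(abs(x))                   # record where charge is deposited
        theta += rng.normal(0.0, DEFLECTION_SIGMA)

hist, edges = np.histogram(radii, bins=10, range=(0.0, 300.0))
for lo, hi, n in zip(edges[:-1], edges[1:], hist):
    print(f"{lo:5.0f}-{hi:5.0f} nm : {n} deposition events")
```

A development model would then threshold the accumulated dose to predict the resulting 3D resist profile.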