970 results for Monte Carlo simulations
Abstract:
We demonstrate quantitative optical property and elastic property imaging from ultrasound-assisted optical tomography data. The measurements, which are the modulation depth M and phase φ of the speckle pattern, are shown to depend sensitively on these properties of the object in the insonified focal region of the ultrasound (US) transducer. We demonstrate that Young's modulus (E) can be recovered from the resonance observed in plots of M versus ω (the US frequency), and the optical absorption (μ_a) and scattering (μ_s) coefficients from the measured differential phase changes. All experimental observations are also verified using Monte Carlo simulations. (c) 2012 Society of Photo-Optical Instrumentation Engineers (SPIE). [DOI: 10.1117/1.JBO.17.10.101507]
Abstract:
A combination of ab initio and classical Monte Carlo simulations is used to investigate the effects of functional groups on methane binding. Using Møller-Plesset (MP2) calculations, we obtain the binding energies for benzene functionalized with NH2, OH, CH3, COOH, and H2PO3 and identify the methane binding sites. In all cases, the preferred binding sites are located above the benzene plane in the vicinity of the benzene carbon atom attached to the functional group. Functional groups enhance methane binding relative to benzene (-6.39 kJ/mol), with the largest enhancement observed for H2PO3 (-8.37 kJ/mol), followed by COOH and CH3 (-7.77 kJ/mol). Adsorption isotherms are obtained for edge-functionalized bilayer graphene nanoribbons using grand canonical Monte Carlo simulations with a five-site methane model. Excess adsorption and heats of adsorption at 298 K for pressures up to 40 bar are obtained with functional group concentrations ranging from 3.125 to 6.25 mol % for graphene edges functionalized with OH, NH2, and COOH. The functional groups are found to act as preferred adsorption sites, and in the case of COOH the local methane density in the vicinity of the functional group is found to exceed that of bare graphene. The largest enhancement in the excess methane adsorbed, 44.5%, is observed for COOH-functionalized nanoribbons when compared with H-terminated ribbons. The corresponding enhancements for OH- and NH2-functionalized ribbons are 10.5% and 3.7%, respectively. The excess adsorption across functional groups reflects the trends observed in the binding energies from the MP2 calculations. Our study reveals that functionalization of specific sites can have a significant effect on the local adsorption characteristics and can be used as a design strategy to tailor materials with enhanced methane storage capacity.
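As a brief illustration of the grand canonical Monte Carlo machinery used in such adsorption studies, the sketch below implements the standard particle insertion/deletion acceptance rules for a toy Lennard-Jones adsorbate in a periodic box. The box size, temperature, chemical potential, and cutoff are arbitrary placeholders in reduced units, not the five-site methane model or the functionalized-nanoribbon system of the paper.

```python
# Minimal grand canonical Monte Carlo (GCMC) sketch for a Lennard-Jones
# adsorbate in a periodic box, in reduced units (sigma = epsilon = kB = 1,
# de Broglie wavelength = 1).  All parameters are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)
L, T, mu, r_cut = 8.0, 1.2, -3.0, 3.0      # box edge, temperature, chem. potential, cutoff
beta, V = 1.0 / T, L**3

def pair_energy(pos, x):
    """LJ energy of a particle at x with all particles in pos (minimum image)."""
    if len(pos) == 0:
        return 0.0
    d = pos - x
    d -= L * np.round(d / L)               # minimum-image convention
    r2 = np.sum(d * d, axis=1)
    r2 = r2[r2 < r_cut**2]
    inv6 = (1.0 / r2) ** 3
    return np.sum(4.0 * (inv6**2 - inv6))

pos = []                                    # list of particle coordinates
for step in range(20000):
    if rng.random() < 0.5:                  # attempt insertion
        x = rng.uniform(0.0, L, 3)
        dU = pair_energy(np.array(pos), x)
        N = len(pos)
        if rng.random() < min(1.0, V / (N + 1) * np.exp(beta * (mu - dU))):
            pos.append(x)
    elif len(pos) > 0:                      # attempt deletion
        i = rng.integers(len(pos))
        others = np.array(pos[:i] + pos[i + 1:])
        dU = -pair_energy(others, pos[i])   # energy change on removal
        N = len(pos)
        if rng.random() < min(1.0, N / V * np.exp(-beta * (mu + dU))):
            pos.pop(i)

print("number of adsorbed particles in the final configuration:", len(pos))
```

In a production run one would equilibrate, then average N over many sampled configurations (and subtract the bulk-gas loading at the same conditions) to obtain the excess adsorption reported above.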
Abstract:
The study extends the first-order reliability method (FORM) and inverse FORM to update reliability models for existing, statically loaded structures based on measured responses. Solutions based on Bayes' theorem, Markov chain Monte Carlo simulations, and inverse reliability analysis are developed. The case of linear systems with Gaussian uncertainties and linear performance functions is shown to be exactly solvable. FORM- and inverse-reliability-based methods are subsequently developed to deal with more general problems. The proposed procedures are implemented by combining Matlab-based reliability modules with finite element models residing in the Abaqus software. Numerical illustrations on linear and nonlinear frames are presented. (c) 2012 Elsevier Ltd. All rights reserved.
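For the linear Gaussian case noted above, the failure probability has a closed form, which makes it a convenient sanity check for any sampling-based scheme. The sketch below compares a crude Monte Carlo estimate with the exact FORM result P_f = Φ(-β) for a hypothetical linear performance function; the coefficients are made up for illustration and are not taken from the paper.

```python
# Crude Monte Carlo check of the exact FORM result for a linear performance
# function g(X) = a0 + a^T X with independent standard normal X.
# Here beta = a0 / ||a|| and P_f = Phi(-beta) exactly.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
a0, a = 3.0, np.array([1.0, -2.0, 0.5])        # hypothetical limit state g = a0 + a.X
beta = a0 / np.linalg.norm(a)                  # reliability index
pf_exact = norm.cdf(-beta)

n = 2_000_000
x = rng.standard_normal((n, len(a)))
pf_mc = np.mean(a0 + x @ a < 0.0)              # failure event: g(X) < 0

print(f"beta = {beta:.3f}, exact P_f = {pf_exact:.3e}, MC estimate = {pf_mc:.3e}")
```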
Abstract:
In this paper, we study duty cycling and power management in a network of energy harvesting sensor (EHS) nodes. We consider a one-hop network, where K EHS nodes send data to a destination over a wireless fading channel. The goal is to find the optimum duty cycling and power scheduling across the nodes that maximizes the average sum data rate, subject to energy neutrality at each node. We adopt a two-stage approach to simplify the problem. In the inner stage, we solve the problem of optimal duty cycling of the nodes, subject to the short-term power constraint set by the outer stage. The outer stage sets the short-term power constraints on the inner stage to maximize the long-term expected sum data rate, subject to long-term energy neutrality at each node. Albeit suboptimal, our solutions turn out to have a surprisingly simple form: the duty cycle allotted to each node by the inner stage is simply the fractional allotted power of that node relative to the total allotted power. The sum power allotted is a clipped version of the sum harvested power across all the nodes. The average sum throughput thus ultimately depends only on the sum harvested power and its statistics. We illustrate the performance improvement offered by the proposed solution compared to other naive schemes via Monte-Carlo simulations.
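The structure of the proposed solution lends itself to a very compact simulation. The toy sketch below allots duty cycles in proportion to each node's share of the allotted power and clips the sum transmit power at a fixed level; the exponential harvesting and Rayleigh fading statistics, the clipping level, and the per-slot rate expression are illustrative assumptions rather than the paper's exact system model.

```python
# Toy Monte Carlo illustration of the two-stage structure described above:
# duty cycles proportional to allotted power, with the sum transmit power equal
# to a clipped version of the sum harvested power.  Harvesting statistics,
# fading model, and clipping level are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(2)
K, n_frames = 4, 100_000
P_clip = 2.0                                   # assumed clipping level on the sum power

rates = []
for _ in range(n_frames):
    harvested = rng.exponential(1.0, K)        # assumed i.i.d. exponential harvesting
    P_sum = min(harvested.sum(), P_clip)       # clipped sum power
    P_k = harvested / harvested.sum() * P_sum  # per-node allotted power
    duty = P_k / P_sum                         # duty cycle = fractional allotted power
    h = rng.exponential(1.0, K)                # Rayleigh fading power gains
    # node k transmits at power P_k / duty_k = P_sum during its fraction of the frame
    rates.append(np.sum(duty * np.log2(1.0 + h * P_sum)))

print("average sum rate (bits/s/Hz):", np.mean(rates))
```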
Abstract:
Compressive Sampling Matching Pursuit (CoSaMP) is one of the popular greedy methods in the emerging field of Compressed Sensing (CS). In addition to its appealing empirical performance, CoSaMP also has strong theoretical guarantees of convergence. In this paper, we propose a modification of CoSaMP that adaptively chooses the dimension of the search space in each iteration, using a threshold-based approach. Using Monte Carlo simulations, we show that this modification improves the reconstruction capability of the CoSaMP algorithm for both clean and noisy measurements. From empirical observations, we also propose an optimum value of the threshold to use in applications.
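To make the modification concrete, here is a minimal numpy sketch of a CoSaMP-style iteration in which the candidate support added at each step is chosen by a threshold on the proxy magnitudes (a fraction of the largest entry) instead of a fixed set of 2s entries. The problem sizes, the threshold value tau, and the stopping rule are illustrative assumptions, not the optimized threshold proposed in the paper.

```python
# CoSaMP-style recovery with a threshold-based choice of the candidate support:
# entries whose proxy magnitude exceeds tau * max|proxy| are added, instead of
# always adding the 2s largest.  Sizes, tau, and the stopping rule are illustrative.
import numpy as np

def cosamp_threshold(A, y, s, tau=0.5, n_iter=30):
    n = A.shape[1]
    x = np.zeros(n)
    for _ in range(n_iter):
        proxy = A.T @ (y - A @ x)                      # signal proxy from the residual
        cand = np.flatnonzero(np.abs(proxy) >= tau * np.abs(proxy).max())
        support = np.union1d(cand, np.flatnonzero(x))  # merge with the current support
        z = np.zeros(n)
        z[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
        keep = np.argsort(np.abs(z))[-s:]              # prune to the s largest entries
        x = np.zeros(n)
        x[keep] = z[keep]
        if np.linalg.norm(y - A @ x) < 1e-6 * np.linalg.norm(y):
            break
    return x

# small synthetic test: s-sparse signal, Gaussian measurement matrix
rng = np.random.default_rng(3)
n, m, s = 256, 80, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
x_hat = cosamp_threshold(A, A @ x_true, s)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```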
Abstract:
Future space-based gravity wave (GW) experiments such as the Big Bang Observatory (BBO), with their excellent projected one-sigma angular resolution, will measure the luminosity distance to a large number of GW sources to high precision, and the redshifts of the single galaxies lying in the narrow solid angles towards the sources will provide the redshifts of the GW sources. One-sigma BBO beams contain the actual source in only 68% of the cases; the beams that do not contain the source may contain a spurious single galaxy, leading to misidentification. To increase the probability of the source falling within the beam, larger beams have to be considered, which decreases the chances of finding single galaxies in the beams. Saini et al. [T.D. Saini, S.K. Sethi, and V. Sahni, Phys. Rev. D 81, 103009 (2010)] argued, largely analytically, that identifying even a small number of GW source galaxies furnishes a rough distance-redshift relation, which could be used to further resolve sources that have multiple objects in the angular beam. In this work we develop this idea further by introducing a self-calibrating iterative scheme which works in conjunction with Monte Carlo simulations to determine the luminosity distance to GW sources with progressively greater accuracy. This iterative scheme allows one to determine the equation of state of dark energy to within an accuracy of a few percent for a gravity wave experiment with a beam width an order of magnitude larger than BBO's (and therefore a far poorer angular resolution). This is achieved with no prior information about the nature of dark energy from other data sets such as type Ia supernovae, baryon acoustic oscillations, or the cosmic microwave background. DOI: 10.1103/PhysRevD.87.083001
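The distance-redshift relation being self-calibrated here is the standard luminosity distance; for a spatially flat universe with matter density Ω_m and a constant dark energy equation of state w, it takes the textbook form (quoted for context, not derived in the paper)

```latex
d_L(z) \;=\; \frac{c\,(1+z)}{H_0}\int_0^z
\frac{dz'}{\sqrt{\Omega_m\,(1+z')^{3} \;+\; (1-\Omega_m)\,(1+z')^{3(1+w)}}},
```

so luminosity distances to even a modest set of securely identified GW host galaxies, combined with their redshifts, constrain w.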
Abstract:
Fast and efficient channel estimation is key to achieving high data rate performance in mobile and vehicular communication systems, where the channel is fast time-varying. To this end, this work proposes and optimizes channel-dependent training schemes for reciprocal Multiple-Input Multiple-Output (MIMO) channels with beamforming (BF) at the transmitter and receiver. First, assuming that Channel State Information (CSI) is available at the receiver, a channel-dependent Reverse Channel Training (RCT) signal is proposed that enables efficient estimation of the BF vector at the transmitter with a minimum training duration of only one symbol. In contrast, conventional orthogonal training requires a minimum training duration equal to the number of receive antennas. A tight approximation to the capacity lower bound on the system is derived, which is used as a performance metric to optimize the parameters of the RCT. Next, assuming that CSI is available at the transmitter, a channel-dependent forward-link training signal is proposed and its power and duration are optimized with respect to an approximate capacity lower bound. Monte Carlo simulations illustrate the significant performance improvement offered by the proposed channel-dependent training schemes over the existing channel-agnostic orthogonal training schemes.
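One way to picture a one-symbol reverse training step under channel reciprocity is sketched below: the receiver transmits a single training symbol along its combining vector, and the transmitter forms its beamforming vector from the conjugate of what it observes. This is a generic, idealized illustration (perfect reciprocity, a single beam, unoptimized training power), not the specific RCT signal design or the capacity-bound optimization of the paper.

```python
# Generic one-symbol reverse-training illustration for a reciprocal MIMO link:
# the receiver sends one training symbol along its combining vector w_r, and the
# transmitter forms a beamforming vector from the conjugate of the observed
# effective channel.  Idealized sketch only; not the paper's optimized RCT scheme.
import numpy as np

rng = np.random.default_rng(4)
Nt, Nr, snr_train = 4, 2, 10.0                        # antennas and training SNR (linear)

H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)

# receiver picks its combining vector (here: dominant left singular vector of H)
U, S, Vh = np.linalg.svd(H)
w_r = U[:, 0]

# reverse link: receiver transmits conj(w_r) through H^T (reciprocity), one symbol only
noise = (rng.standard_normal(Nt) + 1j * rng.standard_normal(Nt)) / np.sqrt(2 * snr_train)
y_rev = H.T @ np.conj(w_r) + noise                    # observed at the transmitter

w_t = np.conj(y_rev) / np.linalg.norm(y_rev)          # transmit beamformer estimate

# effective scalar channel after BF at both ends, vs. the ideal (perfect-CSI) value
g_est = np.abs(np.conj(w_r) @ H @ w_t)
print(f"beamforming gain: estimated {g_est:.3f} vs ideal {S[0]:.3f}")
```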
Abstract:
Motivated by experiments on Josephson junction arrays in a magnetic field and ultracold interacting atoms in an optical lattice in the presence of a "synthetic" orbital magnetic field, we study the "fully frustrated" Bose-Hubbard model and quantum XY model with half a flux quantum per lattice plaquette. Using Monte Carlo simulations and the density matrix renormalization group method, we show that these kinetically frustrated boson models admit three phases at integer filling: a weakly interacting chiral superfluid phase with staggered loop currents which spontaneously break time-reversal symmetry, a conventional Mott insulator at strong coupling, and a remarkable "chiral Mott insulator" (CMI) with staggered loop currents sandwiched between them at intermediate correlation. We discuss how the CMI state may be viewed as an exciton condensate or a vortex supersolid, study a Jastrow variational wave function which captures its correlations, present results for the boson momentum distribution across the phase diagram, and consider various experimental implications of our phase diagram. Finally, we consider generalizations to a staggered-flux Bose-Hubbard model and a two-dimensional (2D) version of the CMI in weakly coupled ladders.
Abstract:
In this study, the free energy barriers for homogeneous crystal nucleation in a system that exhibits a eutectic point are computed using Monte Carlo simulations. The system studied is a binary hard-sphere mixture with a diameter ratio of 0.85 between the smaller and larger hard spheres. The simulations of crystal nucleation are performed for the entire range of fluid compositions. The free energy barrier is found to be highest near the eutectic point, where it is nearly five times that for the pure fluid, which slows down the nucleation rate by a factor of 10^-31. These free energy barriers are some of the highest ever computed using simulations. For most of the conditions studied, the composition of the critical nucleus corresponds to one of the two thermodynamically stable solid phases. However, near the eutectic point, the nucleation barrier is lowest for the formation of the metastable random hexagonal close-packed (rhcp) solid phase, with a composition lying in the two-phase region of the phase diagram. The fluid-to-solid phase transition is hypothesized to proceed via formation of a metastable rhcp phase followed by phase separation into the respective stable fcc solid phases.
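The extreme sensitivity of the rate to the barrier height follows from classical nucleation theory, in which the steady-state nucleation rate depends exponentially on the barrier (a standard relation, quoted here for context):

```latex
J \;=\; J_0\,\exp\!\left(-\frac{\Delta G^{*}}{k_B T}\right),
\qquad
\frac{J_{\mathrm{eutectic}}}{J_{\mathrm{pure}}}
\;\approx\;
\exp\!\left(-\frac{\Delta G^{*}_{\mathrm{eutectic}} - \Delta G^{*}_{\mathrm{pure}}}{k_B T}\right),
```

so a barrier roughly five times that of the pure fluid suppresses the rate by tens of orders of magnitude, assuming the kinetic prefactor J_0 changes comparatively little.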
Abstract:
The solid phase formed by a binary mixture of oppositely charged colloidal particles can be either substitutionally ordered or substitutionally disordered, depending on the nature and strength of the interactions among the particles. In this work, we use Monte Carlo molecular simulations along with the Gibbs-Duhem integration technique to map out the inter-particle interactions that favor the formation of substitutionally ordered crystalline phases from a fluid phase. The inter-particle interactions are modeled using the hard-core Yukawa potential, but the method can be easily extended to other systems of interest. The study yields a map of interactions whose regions indicate the type of crystalline aggregate that forms upon the phase transition.
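For reference, a commonly used parameterization of the hard-core Yukawa pair potential in studies of oppositely charged colloids is (the sign convention and the exact form used in the paper may differ)

```latex
\beta u_{ij}(r) \;=\;
\begin{cases}
\infty, & r < \sigma,\\[4pt]
\pm\,\epsilon\;\dfrac{\exp\!\left[-\kappa\,(r-\sigma)\right]}{r/\sigma}, & r \ge \sigma,
\end{cases}
```

with the minus sign (attraction) for unlike-charged pairs, the plus sign for like-charged pairs, ε the contact value, and κ the inverse screening length.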
Abstract:
Wavelet coefficients based on spatial wavelets are used as damage indicators to identify the damage location as well as the size of the damage in a laminated composite beam with localized matrix cracks. A finite element model of the composite beam is used in conjunction with a matrix-crack-based damage model to simulate the damaged composite beam structure. The modes of vibration of the beam are analyzed using the wavelet transform in order to identify the location and the extent of the damage by sensing the local perturbations at the damage locations. The location of the damage is identified by a sudden change in the spatial distribution of the wavelet coefficients. Monte Carlo simulations (MCS) are used to investigate the effect of ply-level uncertainty in composite material properties, such as ply longitudinal stiffness, transverse stiffness, shear modulus, and Poisson's ratio, on the damage detection parameter, namely the wavelet coefficient. In this study, numerical simulations are carried out for single and multiple damage cases. It is observed that spatial wavelets can be used as a reliable damage detection tool for composite beams with localized matrix cracks, which can result from low-velocity impact damage.
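A minimal illustration of the wavelet-coefficient damage indicator is sketched below: a synthetic mode shape with a small local slope discontinuity is analyzed with a continuous spatial wavelet transform, and the damage location is recovered as the peak of the coefficient magnitude. The mode shape, perturbation size, wavelet, and scale are placeholders chosen for visibility, not the finite element and matrix-crack models used in the paper.

```python
# Spatial wavelet coefficients as a damage indicator: a synthetic first mode
# shape of a simply supported beam is given a small slope discontinuity at the
# assumed damage location, and the location is recovered from the peak of the
# CWT coefficient magnitude.  All quantities are illustrative placeholders.
import numpy as np
import pywt

x = np.linspace(0.0, 1.0, 500)                    # normalized beam coordinate
mode = np.sin(np.pi * x)                          # healthy first bending mode
damage_loc = 0.6
mode += 0.05 * np.where(x > damage_loc, x - damage_loc, 0.0)   # local slope change

coefs, _ = pywt.cwt(mode, scales=[6], wavelet="gaus4")          # one fine spatial scale
indicator = np.abs(coefs[0])
interior = slice(30, -30)                         # ignore boundary effects at the supports
est = x[interior][np.argmax(indicator[interior])]

print(f"assumed damage at x = {damage_loc:.2f}, indicator peak at x = {est:.2f}")
```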
Abstract:
A binary mixture of oppositely charged colloidal particles can self-assemble into either a substitutionally ordered or a substitutionally disordered crystalline phase, depending on the nature and strength of the interactions among the particles. An earlier study mapped out favorable inter-particle interactions for the formation of substitutionally ordered crystalline phases from a fluid phase using Monte Carlo molecular simulations along with the Gibbs-Duhem integration technique. In this paper, those studies are extended to determine the effect of the fluid phase composition on the formation of substitutionally ordered solid phases.
Abstract:
The current study analyzes the leachate distribution in the Orchard Hills Landfill, Davis Junction, Illinois, using a two-phase flow model to assess, through reliability analysis, the influence of variability in hydraulic conductivity on the effectiveness of the existing leachate recirculation system and its operations. Numerical modeling, using a finite-difference code, is performed with due consideration to the spatial variation of the hydraulic conductivity of the municipal solid waste (MSW). An inhomogeneous and anisotropic waste condition is assumed because it is a more realistic representation of the MSW. For the reliability analysis, the landfill is divided into 10 MSW layers with different mean values of vertical and horizontal hydraulic conductivity (decreasing from top to bottom), and a parametric study is performed by taking the coefficients of variation (COVs) as 50, 100, 150, and 200%. Monte Carlo simulations are performed to obtain statistical information (mean and COV) on three output parameters: (1) the wetted area of the MSW, (2) the maximum induced pore pressure, and (3) the leachate outflow. The results of the reliability analysis are used to determine the influence of hydraulic conductivity on the effectiveness of the leachate recirculation and are discussed in the light of a deterministic approach. The study is useful in understanding the efficiency of the leachate recirculation system. (C) 2013 American Society of Civil Engineers.
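As a small illustration of how the hydraulic conductivity variability enters such a Monte Carlo study, the sketch below draws lognormal conductivity values for each waste layer from a prescribed mean and COV; the layer means, the COV, and the lognormal choice are assumptions for illustration, and the two-phase flow simulation itself is not reproduced.

```python
# Sampling layer hydraulic conductivities for Monte Carlo reliability analysis:
# each MSW layer gets a lognormal K with prescribed mean and coefficient of
# variation (COV).  The layer means (decreasing with depth) and the COV are
# illustrative assumptions; the two-phase flow model itself is not shown.
import numpy as np

rng = np.random.default_rng(5)

def lognormal_samples(mean, cov, size):
    """Lognormal samples with the given arithmetic mean and COV (= std/mean)."""
    sigma_ln = np.sqrt(np.log(1.0 + cov**2))
    mu_ln = np.log(mean) - 0.5 * sigma_ln**2
    return rng.lognormal(mu_ln, sigma_ln, size)

n_realizations = 1000
layer_means = np.logspace(-5, -7, 10)        # m/s, decreasing from top to bottom (assumed)
cov = 1.0                                    # e.g. COV = 100%

# one (n_realizations x n_layers) table of vertical conductivities;
# each row would feed one deterministic two-phase flow run of the landfill model
K_v = np.column_stack([lognormal_samples(m, cov, n_realizations) for m in layer_means])

print("sample mean / target mean per layer:", np.round(K_v.mean(axis=0) / layer_means, 2))
```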
Abstract:
In this paper, we consider the problem of finding a spectrum hole of a specified bandwidth in a given wide band of interest. We propose a new, simple, and easily implementable sub-Nyquist sampling scheme for signal acquisition and a spectrum hole search algorithm that exploits sparsity in the primary spectral occupancy in the frequency domain by testing a group of adjacent subbands in a single test. The sampling scheme deliberately introduces aliasing during signal acquisition, resulting in a signal that is the sum of the signals from adjacent subbands. Energy-based hypothesis tests are used to provide an occupancy decision over the group of subbands, and this forms the basis of the proposed algorithm to find contiguous spectrum holes. We extend this framework to a multi-stage sensing algorithm that can be employed in a variety of spectrum sensing scenarios, including non-contiguous spectrum hole search. Further, we provide the analytical means to optimize the hypothesis tests with respect to the detection thresholds, number of samples, and group size so as to minimize the detection delay under a given error rate constraint. Depending on the sparsity and SNR, the proposed algorithms can lead to significantly lower detection delays compared with a conventional bin-by-bin energy detection scheme; the latter is in fact a special case of the group test with the group size set to 1. We validate our analytical results via Monte Carlo simulations.
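The group test at the heart of the search can be illustrated in a few lines: the energies of G adjacent subbands are summed and compared against a threshold, and only groups that appear occupied need finer, bin-by-bin inspection. The band plan, SNR, and threshold below are illustrative assumptions, and the deliberate-aliasing acquisition front end of the paper is not modeled.

```python
# Group energy test for spectrum hole search: sum the per-subband energies over
# groups of G adjacent subbands and declare a group occupied if its energy
# exceeds a threshold; only groups that look occupied need finer inspection.
# Band plan, SNR, and threshold are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(6)
n_bins, G, n_samples_per_bin = 32, 4, 64
occupied = np.zeros(n_bins, bool)
occupied[[3, 4, 11, 25]] = True                    # sparse primary occupancy (assumed)

# per-subband energy: chi-square distributed noise energy, plus signal in occupied bins
noise_energy = rng.chisquare(2 * n_samples_per_bin, n_bins) / 2.0
signal_energy = 3.0 * n_samples_per_bin * occupied # assumed per-bin SNR
energy = noise_energy + signal_energy

group_energy = energy.reshape(-1, G).sum(axis=1)
threshold = 1.15 * G * n_samples_per_bin           # assumed, set for a target false-alarm rate
group_occupied = group_energy > threshold

print("groups declared free (each spans", G, "subbands):", np.flatnonzero(~group_occupied))
print("true fully-free groups:", np.flatnonzero(~occupied.reshape(-1, G).any(axis=1)))
```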
Abstract:
In this paper, a nonlinear suboptimal detector whose performance in heavy-tailed noise is significantly better than that of the matched filter is proposed. The detector consists of a nonlinear wavelet denoising filter to enhance the signal-to-noise ratio, followed by a replica correlator. The performance of the detector is investigated through an asymptotic theoretical analysis as well as Monte Carlo simulations. The proposed detector offers the following advantages over the optimal (in the Neyman-Pearson sense) detector: it is easier to implement, and it is more robust with respect to errors in modeling the probability distribution of the noise.
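A bare-bones version of the detector structure described above is sketched here: soft-threshold wavelet denoising of the received data followed by correlation with a replica of the known signal, compared against a plain matched filter under heavy-tailed (Student-t) noise. The waveform, wavelet, threshold rule, and noise model are illustrative assumptions, not the paper's design or its asymptotic analysis.

```python
# Wavelet-denoise-then-correlate detector vs. a plain matched filter (replica
# correlator) in heavy-tailed noise.  Signal, wavelet, threshold rule, and the
# Student-t noise model are illustrative assumptions only.
import numpy as np
import pywt

rng = np.random.default_rng(7)
n = 256
t = np.arange(n)
s = np.sin(2 * np.pi * 0.05 * t) * np.exp(-((t - n / 2) ** 2) / (2 * 40.0**2))  # known replica

def denoise(x, wavelet="db4", level=4):
    """Soft-threshold wavelet denoising with a universal-style threshold."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745            # robust noise scale estimate
    thr = sigma * np.sqrt(2.0 * np.log(len(x)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

def trial(signal_present):
    noise = 0.8 * rng.standard_t(df=2.0, size=n)              # heavy-tailed noise
    x = noise + (s if signal_present else 0.0)
    return np.dot(denoise(x), s), np.dot(x, s)                # proposed detector vs matched filter

n_trials = 2000
h1 = np.array([trial(True) for _ in range(n_trials)])
h0 = np.array([trial(False) for _ in range(n_trials)])
for k, name in enumerate(["wavelet-denoised correlator", "matched filter"]):
    thr = np.quantile(h0[:, k], 0.99)                         # ~1% false-alarm threshold
    print(f"{name}: detection probability = {np.mean(h1[:, k] > thr):.2f}")
```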