917 results for Perfect simulation
Abstract:
We explicitly construct a stationary coupling attaining Ornstein's d̄-distance between ordered pairs of binary chains of infinite order. Our main tool is a representation of the transition probabilities of the coupled bivariate chain of infinite order as a countable mixture of Markov transition probabilities of increasing order. Under suitable conditions on the loss of memory of the chains, this representation implies that the coupled chain can be represented as a concatenation of i.i.d. sequences of bivariate finite random strings of symbols. The perfect simulation algorithm is based on the fact that we can identify the first regeneration point to the left of the origin almost surely.
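The construction above is for chains of infinite order; as a hedged illustration of the regeneration-based perfect-simulation idea it builds on, the sketch below implements the classical Propp-Wilson coupling-from-the-past scheme for a small finite-state Markov chain. The transition matrix is a made-up example, and the code is not the paper's algorithm.

```python
import numpy as np

# Minimal coupling-from-the-past (Propp-Wilson) sketch for a small
# finite-state Markov chain. P is a made-up example; the paper's
# construction handles chains of infinite order via regeneration points.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])
n = P.shape[0]

def update(state, u):
    """Update rule: invert the CDF of row `state` at the uniform u."""
    return int(np.searchsorted(np.cumsum(P[state]), u))

def cftp(rng):
    T = 1
    us = []                            # u_{-T}, ..., u_{-1}
    while True:
        # Extend the *reused* randomness further into the past.
        us = list(rng.uniform(size=T - len(us))) + us
        # Run every starting state from time -T to 0 with the same noise.
        states = list(range(n))
        for u in us:
            states = [update(s, u) for s in states]
        if len(set(states)) == 1:      # all trajectories coalesced:
            return states[0]           # exact draw from the stationary law
        T *= 2                         # otherwise, look further back

rng = np.random.default_rng(0)
samples = [cftp(rng) for _ in range(2000)]
print(np.bincount(samples) / len(samples))  # compare to the stationary vector
```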
Abstract:
We prove that, once a perfect simulation algorithm is available for a stationary and ergodic random field F taking values in S^{Z^d}, with S a bounded subset of R^n, convergence in the mean ergodic theorem occurs exponentially fast for F. Applications from (non-equilibrium) statistical mechanics and interacting particle systems are presented.
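Schematically, the statement is an exponential concentration bound for ergodic averages; the display below is only a hedged paraphrase under assumed regularity of the observable, with the exact rate variable and constants left to the paper.

```latex
\[
  \mathbb{P}\!\left(
    \left|\frac{1}{|\Lambda_n|}\sum_{x\in\Lambda_n} f(F_x)
          - \mathbb{E}\,f(F_0)\right| \ge \varepsilon
  \right) \;\le\; C(\varepsilon)\, e^{-c(\varepsilon)\, n},
  \qquad C(\varepsilon),\, c(\varepsilon) > 0,
\]
% where f is a suitable bounded observable and
% \Lambda_n \uparrow \mathbb{Z}^d is an increasing sequence of boxes.
```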
Abstract:
We introduce and study a class of infinite-horizon nonzero-sum non-cooperative stochastic games with infinitely many interacting agents, using ideas from statistical mechanics. First we show, in the general case of asymmetric interactions, the existence of a strategy that allows any player to eliminate losses after a finite random time. In the special case of symmetric interactions, we also prove that, as time goes to infinity, the game converges to a Nash equilibrium. Moreover, assuming that all agents adopt the same strategy, and using arguments related to those leading to perfect simulation algorithms, we prove spatial mixing and ergodicity. In turn, ergodicity allows us to prove “fixation”, i.e. that players adopt a constant strategy after a finite time. The resulting dynamics is related to zero-temperature Glauber dynamics on random graphs of possibly infinite volume.
Abstract:
Is it possible to perfectly simulate a signature, in the particular and challenging case where the signature is simple? A set of signatures from six writers, considered simple on the basis of stated criteria, was sampled. These signatures were given to forgers, who were asked to produce freehand simulations. Among these simulations, those that reproduced the features of the reference signatures were submitted for evaluation to forensic document experts through proficiency testing. The results suggest that there is no perfect simulation. With the supplementary aim of assessing the influence of forgers' skills on the results, forgers were selected from three distinct populations differing according to professional criteria. The results indicate some differences in graphical capabilities between individuals. However, no trend could be established regarding age, degrees, years of practice, or time dedicated to the exercise. The findings show that simulation is easier when a graphical compatibility exists between the forger's own writing and the signature to be reproduced. Moreover, a general difficulty in preserving proportions and slant, as well as the shape of capital letters and initials, was noticed.
Abstract:
A wide range of tests for heteroskedasticity has been proposed in the econometric and statistics literature. Although a few exact homoskedasticity tests are available, the commonly employed procedures are generally based on asymptotic approximations that may not provide good size control in finite samples. There have been a number of recent studies seeking to improve the reliability of common heteroskedasticity tests using Edgeworth, Bartlett, jackknife and bootstrap methods, yet the latter remain approximate. In this paper, we describe a solution to the problem of controlling the size of homoskedasticity tests in linear regression contexts. We study procedures based on the standard test statistics [e.g., the Goldfeld-Quandt, Glejser, Bartlett, Cochran, Hartley, Breusch-Pagan-Godfrey, White and Szroeter criteria] as well as tests for autoregressive conditional heteroskedasticity (ARCH-type models). We also suggest several extensions of the existing procedures (sup-type and combined test statistics) to allow for unknown breakpoints in the error variance. We exploit the technique of Monte Carlo tests to obtain provably exact p-values for both the standard and the newly suggested tests. We show that the MC test procedure conveniently solves the intractable null distribution problem, in particular the problems raised by the sup-type and combined test statistics as well as (when relevant) unidentified nuisance parameters under the null hypothesis. The proposed method works in exactly the same way with both Gaussian and non-Gaussian disturbance distributions [such as heavy-tailed or stable distributions]. The performance of the procedures is examined by simulation. The Monte Carlo experiments conducted focus on: (1) ARCH, GARCH, and ARCH-in-mean alternatives; (2) the case where the variance increases monotonically with (i) one exogenous variable or (ii) the mean of the dependent variable; (3) grouped heteroskedasticity; and (4) breaks in variance at unknown points. We find that the proposed tests achieve perfect size control and have good power.
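A hedged sketch of the Monte Carlo test technique the paper exploits: because standard residual-based statistics are pivotal under the Gaussian null, simulating the statistic with artificial i.i.d. normal errors yields a provably exact p-value. The Goldfeld-Quandt-type split below is an illustrative choice, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

def gq_stat(y, X):
    """Goldfeld-Quandt-type statistic: ratio of residual sums of squares
    from the two halves of the sample (illustrative choice)."""
    n = len(y) // 2
    def rss(yy, XX):
        beta, *_ = np.linalg.lstsq(XX, yy, rcond=None)
        r = yy - XX @ beta
        return r @ r
    return rss(y[n:], X[n:]) / rss(y[:n], X[:n])

def mc_pvalue(y, X, n_rep=999):
    """Exact Monte Carlo p-value: the statistic is pivotal under the
    Gaussian null (beta and sigma cancel in the residuals), so i.i.d.
    N(0,1) errors reproduce its null distribution exactly."""
    s_obs = gq_stat(y, X)
    s_sim = np.array([gq_stat(rng.standard_normal(len(y)), X)
                      for _ in range(n_rep)])
    return (1 + np.sum(s_sim >= s_obs)) / (n_rep + 1)

# Toy check: homoskedastic data should give a uniform-ish p-value.
n = 60
X = np.column_stack([np.ones(n), np.linspace(0, 1, n)])
y = X @ np.array([1.0, 2.0]) + rng.standard_normal(n)
print(mc_pvalue(y, X))
```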
Abstract:
Initialising the ocean's internal variability for decadal predictability studies is a new area of research, and a variety of ad hoc methods are currently proposed. In this study, we explore how nudging with sea surface temperature (SST) and salinity (SSS) can reconstruct the three-dimensional variability of the ocean in a perfect model framework. This approach builds on the hypothesis that oceanic processes themselves will transport the surface information into the ocean interior, as seen in ocean-only simulations. Five nudged simulations are designed to reconstruct a 150-year “target” simulation, defined as a portion of a long control simulation. The nudged simulations differ in the variables restored to, SST or SST + SSS, and in the area where the nudging is applied. The strength of the heat flux feedback is diagnosed from observations, and the restoring coefficients for SSS use the same time-scale. We observed that this choice prevents spurious convection at high latitudes and near the sea-ice border when nudging both SST and SSS. In the tropics, nudging the SST is enough to reconstruct the tropical atmosphere circulation and the associated dynamical and thermodynamical impacts on the underlying ocean. In the tropical Pacific Ocean, the temperature profiles show a significant correlation from the surface down to 2,000 m, due to dynamical adjustment of the isopycnals. At mid-to-high latitudes, SSS nudging is required to reconstruct both the temperature and the salinity below the seasonal thermocline. This is particularly true in the North Atlantic, where adding SSS nudging makes it possible to reconstruct the deep convection regions of the target. By initiating a previously documented 20-year cycle of the model, the SST + SSS nudging is also able to reproduce most of the AMOC variations, a key source of decadal predictability. Reconstruction at depth does not significantly improve with the amount of time spent nudging; the efficiency of the surface nudging depends rather on the period and events considered. The joint SST + SSS nudging applied everywhere is the most efficient approach. It ensures that the right water masses are formed at the right surface density, with the subsequent circulation, subduction and deep convection transporting them to depth. The results of this study underline the potential key role of SSS for decadal predictability and further make the case for sustained large-scale observations of this field.
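A minimal sketch of the surface nudging (Newtonian relaxation) described above: the surface field gains a restoring tendency −(X − X_target)/τ. The time step and restoring time-scale below are illustrative; in the study, τ is diagnosed from the observed heat-flux feedback.

```python
import numpy as np

def nudge_step(field, target, dt, tau, tendency=0.0):
    """One explicit time step: free model tendency plus the Newtonian
    restoring term -(field - target)/tau."""
    return field + dt * (tendency - (field - target) / tau)

# e.g. relax a 2D SST anomaly toward a target with a 60-day time-scale
dt, tau = 86400.0, 60 * 86400.0          # 1-day step; tau is illustrative
sst = np.full((4, 4), 2.0)               # initial 2 K anomaly
target = np.zeros((4, 4))
for _ in range(180):                     # 180 days of nudging
    sst = nudge_step(sst, target, dt, tau)
print(sst[0, 0])                         # ~ 2*exp(-180/60) ≈ 0.1 K
```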
Abstract:
Increasing costs and competitive business strategies are pushing sawmill enterprises to optimize their process management. Organizational decisions mainly concentrate on performance and on reducing operational costs in order to maintain profit margins. Despite many efforts, effective utilization of resources, optimal planning, and maximum productivity in the sawmill remain challenging for the sawmill industry. Many researchers have proposed simulation models in combination with optimization techniques to address problems of integrated logistics optimization. The combination of simulation and optimization identifies the optimal strategy by simulating all the complex behaviours of the system under consideration, including objectives and constraints. During the past decade, a large number of studies simulated operational inefficiencies in order to find optimal solutions. This paper reviews recent developments and challenges associated with simulation and optimization techniques, and is intended to provide a solid foundation for the authors' further work on optimizing sawmill yard operations.
Abstract:
The objective of this paper is the numerical study of the behavior of reinforced concrete beams and columns through non-linear numerical simulations. The numerical analysis is based on the finite element method as implemented in CASTEM 2000. This program uses a perfectly elastoplastic constitutive model for the steel, the Drucker-Prager model for the concrete, and the Newton-Raphson method for the solution of the non-linear systems. The work concentrates on determining equilibrium curves for the beams and force-strain curves for the columns. The numerical responses are compared with experimental results from the literature in order to check the reliability of the numerical analyses.
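A generic sketch of the Newton-Raphson equilibrium iteration mentioned above, with stand-in routines for the internal-force and tangent-stiffness assembly that CASTEM 2000 performs internally (the toy one-degree-of-freedom material is purely illustrative):

```python
import numpy as np

def newton_load_step(u0, f_ext, internal_force, tangent_stiffness,
                     tol=1e-10, max_iter=25):
    """Newton-Raphson iteration for one load step: solve
    internal_force(u) = f_ext starting from u0. The two callables stand
    in for the finite element assembly routines."""
    u = u0.copy()
    for _ in range(max_iter):
        r = f_ext - internal_force(u)          # out-of-balance force
        if np.linalg.norm(r) < tol * (1 + np.linalg.norm(f_ext)):
            return u
        du = np.linalg.solve(tangent_stiffness(u), r)
        u += du
    raise RuntimeError("Newton-Raphson did not converge")

# Toy 1-DOF softening "material": f_int(u) = k*u/(1+u) with k = 100
f_int = lambda u: 100 * u / (1 + u)
k_tan = lambda u: np.array([[100 / (1 + u[0])**2]])
u = newton_load_step(np.zeros(1), np.array([50.0]),
                     lambda u: f_int(u[0:1]), k_tan)
print(u)   # u solving 100u/(1+u) = 50, i.e. u = 1
```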
Abstract:
The biggest advantage of plasma immersion ion implantation (PIII) is the capability of treating objects with irregular geometry without complex manipulation of the target holder. The effectiveness of this approach relies on the uniformity of the incident ion dose. Unfortunately, perfect dose uniformity is usually difficult to achieve when treating samples of complex shape; the problems arise from the non-uniform plasma density and the expansion of the plasma sheath. A particle-in-cell computer simulation is used to study the time-dependent evolution of the plasma sheath surrounding two-dimensional objects during the plasma immersion ion implantation process. Before the implantation phase starts, a steady-state nitrogen plasma is established inside the simulation volume by ionizing the precursor gas with primary electrons. The plasma self-consistently evolves to a non-uniform density distribution, which is used as the initial density distribution for the implantation phase. As a result, we obtain a more realistic description of the plasma sheath expansion and dynamics. The ion current density on the target, the average impact energy, and the trajectories of the implanted ions were calculated for three geometrical shapes. Large deviations from a uniform dose distribution were observed for targets with irregular shapes. In addition, the effect of secondary electron emission was included in the simulation, and no qualitative modifications to the sheath dynamics were noticed. However, the energetic secondary electrons drastically change the net plasma balance and also pose a significant X-ray hazard. Finally, an axial magnetic field was added to the calculations, and the possibility of magnetic insulation of the secondary electrons was demonstrated.
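For orientation, the skeleton of the particle-in-cell loop (charge deposition, field solve, field gather, particle push) is sketched below in 1D with periodic boundaries and normalized units. The actual study is two-dimensional and adds absorbing target surfaces, a pulsed bias, ionization and secondary electrons, none of which is modeled here.

```python
import numpy as np

# Skeleton of the PIC loop in 1D, normalized units (plasma frequency = 1),
# electrons moving over a fixed neutralizing ion background.
L, ng, npart, dt = 2 * np.pi, 64, 20000, 0.05
dx = L / ng
rng = np.random.default_rng(2)
x = rng.uniform(0, L, npart)
v = 0.01 * np.sin(x)                       # small velocity perturbation
qm, weight = -1.0, L / npart               # charge/mass; macro-particle weight

k = 2 * np.pi * np.fft.rfftfreq(ng, d=dx)  # wavenumbers for the Poisson solve

for step in range(200):
    # 1) deposit charge density on the grid (nearest-grid-point weighting)
    idx = (x / dx).astype(int) % ng
    rho = 1.0 - np.bincount(idx, minlength=ng) * weight / dx  # ions - electrons
    # 2) solve Poisson's equation  phi'' = -rho  spectrally
    rho_k = np.fft.rfft(rho)
    phi_k = np.zeros_like(rho_k)
    phi_k[1:] = rho_k[1:] / k[1:]**2
    E = -np.gradient(np.fft.irfft(phi_k, ng), dx)
    # 3) gather the field at the particles and 4) push (velocity, then position)
    v += qm * E[idx] * dt
    x = (x + v * dt) % L

print(np.std(v))   # the perturbation oscillates at the plasma frequency
```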
Abstract:
Many techniques based on data drawn by the Ranked Set Sampling (RSS) scheme assume that the ranking of observations is perfect. It is therefore essential to develop methods for testing this assumption. In this article, we propose a parametric location-scale free test for assessing the assumption of perfect ranking. The results of a simulation study in the two special cases of normal and exponential distributions indicate that the proposed test performs well in comparison with its leading competitors.
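A minimal sketch of the ranked set sampling scheme whose perfect-ranking assumption the proposed test checks. Ranking on a noisy surrogate (the rank_noise knob, an illustrative device) models imperfect ranking; the test itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

def rss_sample(draw, m, cycles, rank_noise=0.0):
    """Ranked set sampling: per cycle, draw m sets of m units, judgment-rank
    each set, and measure the r-th smallest unit of the r-th set.
    rank_noise > 0 perturbs the ranking variable, i.e. imperfect ranking."""
    out = []
    for _ in range(cycles):
        for r in range(m):
            units = draw(m)
            # rank on a noisy surrogate; noise 0 means perfect ranking
            surrogate = units + rank_noise * rng.standard_normal(m)
            out.append(units[np.argsort(surrogate)[r]])
    return np.array(out)

perfect = rss_sample(lambda n: rng.standard_normal(n), m=3, cycles=400)
sloppy  = rss_sample(lambda n: rng.standard_normal(n), m=3, cycles=400,
                     rank_noise=1.0)
print(perfect.mean(), sloppy.mean())  # both unbiased; efficiency differs
```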
Abstract:
In this paper we consider the adsorption of argon on the surface of graphitized thermal carbon black and in slit pores at temperatures ranging from subcritical to supercritical conditions, by the method of grand canonical Monte Carlo simulation. Attention is paid to the variation of the adsorbed density as the temperature crosses the critical point. The adsorbed density versus pressure (bulk density) exhibits interesting behavior at temperatures in the vicinity of and above the critical point, and also at extremely high pressures. Isotherms at temperatures greater than the critical temperature exhibit a clear maximum, and near the critical temperature this maximum is a very sharp spike. Under supercritical conditions and very high pressure the excess adsorbed density decreases towards zero for a graphite surface, while for slit pores a negative excess density is possible at extremely high pressures. For imperfect pores (defined as pores that cannot accommodate an integral number of parallel layers under moderate conditions) the pressure at which the excess pore density becomes negative is lower than for perfect pores, due to the packing effect in those imperfect pores. However, at extremely high pressure molecules can be packed in parallel layers once the chemical potential is great enough to overcome the repulsions among adsorbed molecules.
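A hedged sketch of the grand canonical Monte Carlo insertion/deletion moves underlying such simulations, in the standard Frenkel-Smit form at fixed (mu, V, T). The bare Lennard-Jones fluid and all parameters are illustrative; the paper's model adds argon-graphite wall potentials, which are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)

# GCMC insertion/deletion for a toy 3D Lennard-Jones fluid in a periodic
# cube (reduced units; thermal wavelength set to 1).
L, T, mu = 6.0, 1.5, -3.0
beta, V = 1.0 / T, L**3

def du_particle(pos, i, coords):
    """LJ interaction energy of a particle at pos with all others
    (minimum image, cutoff at L/2); i is its own index, or None."""
    if len(coords) == 0:
        return 0.0
    d = coords - pos
    d -= L * np.round(d / L)                  # minimum image convention
    r2 = np.einsum('ij,ij->i', d, d)
    r2 = np.delete(r2, i) if i is not None else r2
    r2 = r2[r2 < (L / 2)**2]
    inv6 = 1.0 / r2**3
    return float(np.sum(4 * (inv6**2 - inv6)))

coords = rng.uniform(0, L, (20, 3))
for step in range(20000):
    N = len(coords)
    if rng.random() < 0.5:                    # attempt an insertion
        new = rng.uniform(0, L, 3)
        dU = du_particle(new, None, coords)
        if rng.random() < min(1.0, V / (N + 1) * np.exp(beta * (mu - dU))):
            coords = np.vstack([coords, new])
    elif N > 0:                               # attempt a deletion
        i = rng.integers(N)
        dU = -du_particle(coords[i], i, coords)   # energy change on removal
        if rng.random() < min(1.0, N / V * np.exp(-beta * mu - beta * dU)):
            coords = np.delete(coords, i, axis=0)

print(len(coords) / V)                        # density at this (mu, T)
```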
Abstract:
We model nongraphitized carbon black surfaces and investigate the adsorption of argon on these surfaces using grand canonical Monte Carlo simulation. In this model, the nongraphitized surface is represented as a stack of graphene layers with some carbon atoms of the top graphene layer randomly removed. The percentage of surface carbon atoms removed and the effective size of the defects (created by the removal) are the key parameters characterizing the nongraphitized surface. The patterns of the adsorption isotherm and the isosteric heat are studied in particular, as functions of these surface parameters as well as of pressure and temperature. It is shown that the adsorption isotherm exhibits step-like behavior on a perfect graphite surface and becomes smoother on nongraphitized surfaces. Regarding the isosteric heat versus loading, for graphitized thermal carbon black we observe an increase in the heat over the submonolayer coverage, then a sharp decline as the second layer starts to form, beyond which it increases slightly. On the other hand, the isosteric heat versus loading for a highly nongraphitized surface shows a general decline with loading, due to the energetic heterogeneity of the surface. Only when the fluid-fluid interaction is greater than the surface energetic factor do we see a minimum-maximum in the isosteric heat versus loading. These simulation results for the isosteric heat agree well with the experimental results on the graphitization of Spheron 6 (Polley, M. H.; Schaeffer, W. D.; Smith, W. R. J. Phys. Chem. 1953, 57, 469; Beebe, R. A.; Young, D. M. J. Phys. Chem. 1954, 58, 93). Adsorption isotherms and isosteric heats in pores whose walls have defects are also studied by simulation, and the patterns of the isotherm and isosteric heat could be used as a fingerprint of the surface.
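In GCMC, the isosteric heat is commonly computed from cross-fluctuations of energy and particle number; the sketch below implements that standard fluctuation estimator (reduced units), to be applied to the (U, N) series recorded during a run. Whether the paper uses exactly this estimator is not stated in the abstract.

```python
import numpy as np

def isosteric_heat(U, N, T):
    """Standard GCMC fluctuation estimator of the isosteric heat
    (reduced units, k_B = 1):
        q_st = T - (<U N> - <U><N>) / (<N^2> - <N>^2)
    U, N are the per-configuration total energy and particle number."""
    U, N = np.asarray(U, float), np.asarray(N, float)
    cov_un = np.mean(U * N) - U.mean() * N.mean()
    var_n = np.mean(N * N) - N.mean()**2
    return T - cov_un / var_n

# e.g. applied to the (U, N) time series recorded during a GCMC run:
# q = isosteric_heat(U_samples, N_samples, T=1.5)
```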
Abstract:
The evolution of cellular systems towards third generation (3G) or IMT-2000 shows a clear tendency toward W-CDMA as the standard access method, as ETSI decisions have shown. However, questions remain about the capacity improvements and the suitability of this access method. One of the aspects that worries developers and researchers planning the third generation is the growing use of the Internet and increasingly bandwidth-hungry applications. This work evaluates the performance of a W-CDMA system simulated on a PC using coverage maps generated with DC-Cell, a GIS-based planning tool developed by the Technical University of Valencia, Spain. The maps are exported to MATLAB and used in the model. The simulated system consists of several microcells in a downtown area. We analyse the interference from users in the same cell and in adjacent cells, and its effect on the system, assuming perfect control in each cell. The traffic generated by the simulator comprises voice and data. This model allows us to work with more accurate coverage and is a good approach for analysing the multiple access interference (MAI) problem in microcellular systems with irregular coverage. Finally, we compare the results obtained with the performance of a similar system using TDMA.
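For context, a hedged sketch of the textbook single-service uplink Eb/I0 budget under perfect power control, with an other-to-own-cell interference ratio f. This closed-form approximation is not the paper's GIS-based coverage model, and all numbers below are illustrative.

```python
import numpy as np

def uplink_ebio(K, pg_db, f=0.55, snr_db=10.0):
    """Textbook CDMA uplink Eb/I0 under perfect power control:
    pg_db = processing gain W/R in dB, f = other-to-own-cell interference
    ratio, snr_db = one user's received SNR over thermal noise.
    MAI enters as (K-1) equal-power in-cell users plus f*K out-of-cell."""
    g = 10 ** (pg_db / 10)
    snr = 10 ** (snr_db / 10)
    return 10 * np.log10(g / ((K - 1) + f * K + 1 / snr))

# maximum users keeping Eb/I0 >= 5 dB at 21 dB processing gain
K = 1
while uplink_ebio(K + 1, pg_db=21.0) >= 5.0:
    K += 1
print(K)   # ~26 users for these illustrative numbers
```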
Abstract:
In this work, the energy response functions of a CdTe detector were obtained by Monte Carlo (MC) simulation in the energy range from 5 to 160 keV, using the PENELOPE code. The response calculations include the carrier transport features and the detector resolution. The computed energy response function was validated through comparison with experimental results obtained with ²⁴¹Am and ¹⁵²Eu sources. In order to investigate the influence of the correction by the detector response in the diagnostic energy range, x-ray spectra were measured using a CdTe detector (model XR-100T, Amptek) and then corrected by the energy response of the detector using the stripping procedure. Results showed that CdTe exhibits a good energy response at low energies (below 40 keV), showing only small distortions in the measured spectra. For energies below about 80 keV, the escape of Cd- and Te-K x-rays produces significant distortions in the measured x-ray spectra. For higher energies, the most important corrections are the detector efficiency and the carrier trapping effects. After correction by the energy response, the measured spectra are in good agreement with those provided by a theoretical model from the literature. Finally, our results showed that detailed knowledge of the response function and a proper correction procedure are fundamental for obtaining more accurate spectra from which quality parameters (i.e., half-value layer and homogeneity coefficient) can be determined.
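A hedged sketch of a stripping correction of the kind described: with a response matrix R whose column j is the simulated detector response to a monoenergetic line in energy bin j (no response above the incident energy), working from the top channel down and subtracting each line's partial-response tail amounts to back-substitution. The matrix layout is an assumption for illustration; the paper's responses come from PENELOPE.

```python
import numpy as np

def strip_spectrum(measured, R):
    """Stripping correction. R[:, j] is the normalized detector response
    to monoenergetic photons in bin j, with R[j, j] the full-energy-peak
    efficiency. Working from the top channel down, attribute the counts
    in bin j to the incident line at j, then subtract that line's
    low-energy tail (escape, trapping) from all lower channels."""
    m = measured.astype(float).copy()
    n = len(m)
    incident = np.zeros(n)
    for j in range(n - 1, -1, -1):
        incident[j] = m[j] / R[j, j]            # counts of the j-line
        m[:j] -= incident[j] * R[:j, j]         # remove its tail below j
    return incident

# For a triangular response (no up-scatter), this is exactly
# back-substitution of R @ incident = measured.
```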