29 results for time of simulation
Abstract:
What is the time-optimal way of using a set of control Hamiltonians to obtain a desired interaction? Vidal, Hammerer, and Cirac [Phys. Rev. Lett. 88, 237902 (2002)] have obtained a set of powerful results characterizing the time-optimal simulation of a two-qubit quantum gate using a fixed interaction Hamiltonian and fast local control over the individual qubits. How practically useful are these results? We prove that there are two-qubit Hamiltonians such that time-optimal simulation requires infinitely many steps of evolution, each infinitesimally small, and thus is physically impractical. A procedure is given to determine which two-qubit Hamiltonians have this property, and we show that almost all Hamiltonians do. Finally, we determine some bounds on the penalty that must be paid in the simulation time if the number of steps is fixed at a finite number, and show that the cost in simulation time is not too great.
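As a rough illustration of the canonical-parameter machinery behind these results (a sketch under our own assumptions, not code from the paper), the snippet below extracts the nonlocal Pauli coefficient matrix of a two-qubit Hamiltonian and reads off the magnitudes of its canonical interaction coefficients via a singular value decomposition; in the Vidal-Hammerer-Cirac framework these coefficients govern time-optimal simulation. Sign bookkeeping for the canonical form is omitted.

```python
# Hedged sketch: canonical interaction coefficients of a two-qubit Hamiltonian.
import numpy as np

# Pauli matrices
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [X, Y, Z]

def nonlocal_coefficients(H):
    """Return the 3x3 matrix M with M[i, j] = Tr[(sigma_i (x) sigma_j) H] / 4."""
    M = np.zeros((3, 3))
    for i, si in enumerate(paulis):
        for j, sj in enumerate(paulis):
            M[i, j] = np.real(np.trace(np.kron(si, sj) @ H)) / 4.0
    return M

# Example: H = X(x)X + 0.5 Y(x)Y + 0.2 Z(x)Z plus an arbitrary local term
H = (np.kron(X, X) + 0.5 * np.kron(Y, Y) + 0.2 * np.kron(Z, Z)
     + 0.3 * np.kron(Z, I2))
M = nonlocal_coefficients(H)
# Up to local unitaries and signs, the singular values of M give the canonical
# coefficients |c1| >= |c2| >= |c3| that enter the time-optimality results.
c = np.linalg.svd(M, compute_uv=False)
print("canonical coefficients (magnitudes):", np.round(c, 3))
```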
Abstract:
We present finite element simulations of temperature gradient driven rock alteration and mineralization in fluid-saturated porous rock masses. In particular, we explore the significance of production/annihilation terms in the mass balance equations and the dependence of the spatial patterns of rock alteration upon the ratio of the roll-over time of large-scale convection cells to the relaxation time of the chemical reactions. Special concepts such as the gradient reaction criterion or rock alteration index (RAI) are discussed in light of the present, more general theory. In order to validate the finite element simulation, we derive an analytical solution for the rock alteration index of a benchmark problem on a two-dimensional rectangular domain. Since the geometry and boundary conditions of the benchmark problem can be easily and exactly modelled, the analytical solution is also useful for validating other numerical methods, such as the finite difference method and the boundary element method, when they are used to deal with this kind of problem. Finally, the potential of the theory is illustrated by means of finite element studies related to coupled flow problems in materially homogeneous and inhomogeneous porous rock masses. (C) 1998 Elsevier Science S.A. All rights reserved.
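As an illustrative sketch only: one common way of evaluating a rock alteration index on a simulation grid is as the scalar product of the pore-fluid velocity and the temperature gradient. Both that definition and the analytic velocity and temperature fields below (a single convection roll) are our own assumptions, not the paper's benchmark solution.

```python
# Hedged sketch: evaluating an RAI-style quantity, RAI = u . grad(T), on a grid.
import numpy as np

nx, ny = 101, 51
x = np.linspace(0.0, 2.0, nx)   # horizontal extent (dimensionless)
y = np.linspace(0.0, 1.0, ny)   # vertical extent (dimensionless)
X, Y = np.meshgrid(x, y, indexing="ij")

# Illustrative single convection-roll velocity field (divergence-free)
ux = -np.sin(np.pi * X) * np.cos(np.pi * Y)
uy = np.cos(np.pi * X) * np.sin(np.pi * Y)

# Illustrative temperature field: conductive profile plus a convective perturbation
T = (1.0 - Y) + 0.1 * np.sin(np.pi * X) * np.sin(np.pi * Y)

# Temperature gradient by central differences
dTdx, dTdy = np.gradient(T, x, y)

RAI = ux * dTdx + uy * dTdy   # u . grad(T); sign flags alteration regimes
print("RAI range: %.3f to %.3f" % (RAI.min(), RAI.max()))
```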
Abstract:
Objective: General practitioner recall of the 1992-96 'Stay on Your Feet' (SOYF) program and its influence on practice were surveyed five years post-intervention to gauge the sustainability of the SOYF General Practice (GP) component. Methods: A survey assessed which SOYF components were still in existence, current practice related to falls prevention, and interest in professional development. All general practitioners (GPs) situated within the boundaries of a rural Area Health Service were mailed a survey in late 2001. Results: The response rate was 66.5% (139/209). Of 117 GPs in practice at the time of SOYF, 80.2% reported having heard of SOYF and 74.4% of those felt it had influenced practice. Half (50.9%) still had a copy of the SOYF GP resource and, of those, 58.6% used it at least 'occasionally'. Three-quarters of GPs surveyed (75.2%) checked medications 'most/almost all' of the time with patients over 60 years; 46.7% assessed falls risk factors; 41.3% gave advice; and 22.6% referred to allied health practitioners. GPs indicated a strong interest in falls prevention-related professional development. There was no significant association between use of the SOYF resource package and any of the current falls prevention practices (chi-square tests, all p>0.05). Conclusions and implications: There was high recall of SOYF and a general belief that it had influenced practice. There was little indication that use of the resource had any lasting influence on GPs' practices. In future, careful thought needs to go into designing a program with the potential to effect long-term change in GPs' falls prevention practice.
Abstract:
Activated sludge models are used extensively in the study of wastewater treatment processes. While various commercial implementations of these models are available, many people need to code the models themselves using the simulation packages available to them. Quality assurance of such models is difficult. While benchmarking problems have been developed and are available, comparing simulation data with that of commercial models leads only to the detection, not the isolation, of errors, and identifying the errors in the code is time-consuming. In this paper, we address the problem by developing a systematic and largely automated approach to the isolation of coding errors. There are three steps: first, possible errors are classified according to their place in the model structure and a feature matrix is established for each class of errors; second, an observer is designed to generate residuals such that each class of errors imposes a subspace, spanned by its feature matrix, on the residuals; finally, localising the residuals in a subspace isolates the coding errors. The algorithm proved capable of rapidly and reliably isolating a variety of single and simultaneous errors in a case study using the ASM1 activated sludge model, in which a newly coded model was verified against a known implementation. The method is also applicable to the simultaneous verification of any two independent implementations and is therefore useful in commercial model development.
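The isolation step can be pictured with a minimal sketch (our own illustration, not the paper's observer design): given a residual vector and a feature matrix per error class, the class whose subspace leaves the smallest unexplained component is reported. The feature matrices below are random placeholders.

```python
# Hedged sketch of subspace-based isolation of coding-error classes.
import numpy as np

def isolate_error(residual, feature_matrices):
    """Return the error class whose feature subspace best explains the residual.

    feature_matrices: dict mapping class name -> (n x m) matrix whose columns
    span the residual subspace associated with that class of coding error.
    """
    scores = {}
    for name, F in feature_matrices.items():
        # Least-squares projection of the residual onto span(F)
        coeffs, *_ = np.linalg.lstsq(F, residual, rcond=None)
        unexplained = np.linalg.norm(residual - F @ coeffs)
        scores[name] = unexplained
    return min(scores, key=scores.get), scores

# Toy example: three hypothetical error classes in a 4-dimensional residual space
rng = np.random.default_rng(0)
features = {
    "stoichiometry": rng.standard_normal((4, 2)),
    "kinetics":      rng.standard_normal((4, 2)),
    "mass_balance":  rng.standard_normal((4, 1)),
}
residual = features["kinetics"] @ np.array([1.5, -0.7])  # residual lying in that subspace
isolated, scores = isolate_error(residual, features)
print("isolated class:", isolated)
```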
Abstract:
The study of viral-based processes is hampered by (a) their complex, transient nature, (b) the instability of products, and (c) the lack of accurate diagnostic assays. Here, we describe the use of real-time quantitative polymerase chain reaction to characterize baculoviral infection. Baculovirus DNA content doubles every 1.7 h from 6 h post-infection until replication is halted at the onset of budding. No dynamic equilibrium exists between replication and release, and the kinetics are independent of the cell density at the time of infection. No more than 16% of the intracellular virus copies bud from the cell. (C) 2002 John Wiley & Sons, Inc. Biotechnol Bioeng 77: 476-480, 2002; DOI 10.1002/bit.10126.
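The reported kinetics translate into a simple exponential model. In the sketch below, only the 1.7 h doubling time and the 6 h post-infection start are taken from the abstract; the initial copy number and the cut-off time at which budding halts replication are assumed values for illustration.

```python
# Hedged illustration: exponential DNA replication with a 1.7 h doubling time,
# starting 6 h post-infection and halting at an assumed onset of budding.
def dna_copies(t_hours, n0=10.0, t_start=6.0, doubling_time=1.7, t_stop=20.0):
    """Intracellular baculovirus DNA copies per cell at t hours post-infection."""
    t_eff = min(max(float(t_hours), t_start), t_stop)  # replication window only
    return n0 * 2.0 ** ((t_eff - t_start) / doubling_time)

for t in (6, 10, 14, 18, 22):
    print(f"{t:2d} h p.i.: {dna_copies(t):10.1f} copies/cell")
```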
Abstract:
This study investigates whether different diurnal types (morning versus evening) differ in their estimation of time duration at different times of the day. Given that the performance of morning and evening types is typically best at their preferred times of day, assuming different diurnal trends in subjective alertness (arousal?) for morning and evening types, and adopting the attentional gate model of time duration estimation, it was predicted that morning types would tend to underestimate durations and be more accurate in the morning, whereas the opposite pattern was expected for evening types. Nineteen morning types, 18 evening types and 18 intermediate types were drawn from a large sample (N=1175) of undergraduates administered the Early/Late Preference Scale. Groups performed a time duration estimation task using the production method for estimating 20-s unfilled intervals at two times of day: 08:00 and 18:30. The median absolute error, median directional error and frequency of under- and overestimation were analysed using repeated-measures ANOVA. While all differences were statistically non-significant, the following trends were observed: morning types performed better than evening types; participants overestimated in the morning and underestimated in the evening; and participants were more accurate later in the day. It was concluded that these trends are inconsistent with a relationship between subjective alertness and time duration estimation but consistent with a possible relationship between time duration estimation and diurnal body temperature fluctuations. (C) 2002 Elsevier Ltd. All rights reserved.
Abstract:
Statistical tests of Load-Unload Response Ratio (LURR) signals are carried out in order to verify the statistical robustness of the previous studies using the Lattice Solid Model (MORA et al., 2002b). In each case, 24 groups of samples with the same macroscopic parameters (tidal perturbation amplitude A, period T and tectonic loading rate k) but different particle arrangements are employed. Results of uni-axial compression experiments show that before the normalized time of catastrophic failure, the ensemble average LURR value rises significantly, in agreement with the observations of high LURR prior to large earthquakes. In shearing tests, two parameters are found to control the correlation between earthquake occurrence and tidal stress. The first, A/(kT), controls the phase shift between the peak seismicity rate and the peak amplitude of the perturbation stress; as this parameter increases, the phase shift decreases. The second, AT/k, controls the height of the probability density function (Pdf) of modeled seismicity; as this parameter increases, the Pdf becomes sharper and narrower, indicating strong triggering. Statistical studies of LURR signals in shearing tests also suggest that, except in strong-triggering cases, where LURR cannot be calculated because of sparse data in the unloading cycles, larger events are more likely than smaller ones to occur during periods of high LURR, supporting the LURR hypothesis.
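For orientation, a LURR-style statistic is commonly computed as the ratio of Benioff strain (square root of energy) released while the perturbing stress is increasing (loading) to that released while it is decreasing (unloading). The sketch below applies that definition to a synthetic catalogue and should not be read as the paper's exact formulation.

```python
# Hedged sketch of a LURR-style calculation for a sinusoidal perturbing stress.
import numpy as np

def lurr(event_times, event_energies, period_T):
    """Ratio of Benioff strain released during loading vs unloading half-cycles."""
    t = np.asarray(event_times, dtype=float)
    e = np.asarray(event_energies, dtype=float)
    stress_rate = np.cos(2.0 * np.pi * t / period_T)  # sign of d/dt of A*sin(2*pi*t/T)
    loading = stress_rate > 0.0
    benioff = np.sqrt(e)
    y_plus = benioff[loading].sum()
    y_minus = benioff[~loading].sum()
    return y_plus / y_minus if y_minus > 0 else np.inf

# Synthetic catalogue: events slightly more energetic during loading half-cycles
rng = np.random.default_rng(1)
times = rng.uniform(0.0, 100.0, size=500)
energies = 10.0 ** rng.uniform(0.0, 3.0, size=500)
energies[np.cos(2.0 * np.pi * times / 12.0) > 0] *= 1.5  # mild loading bias
print("LURR =", round(lurr(times, energies, period_T=12.0), 2))
```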
Abstract:
Subsequent to the influential paper of [Chan, K.C., Karolyi, G.A., Longstaff, F.A., Sanders, A.B., 1992. An empirical comparison of alternative models of the short-term interest rate. Journal of Finance 47, 1209-1227], the generalised method of moments (GMM) has been a popular technique for estimation and inference relating to continuous-time models of the short-term interest rate. GMM has been widely employed to estimate model parameters and to assess the goodness-of-fit of competing short-rate specifications. The current paper conducts a series of simulation experiments to document the bias and precision of GMM estimates of short-rate parameters, as well as the size and power of the J-test of over-identifying restrictions [Hansen, L.P., 1982. Large sample properties of generalised method of moments estimators. Econometrica 50, 1029-1054]. While the J-test appears to have appropriate size and good power in sample sizes commonly encountered in the short-rate literature, GMM estimates of the speed of mean reversion are shown to be severely biased. Consequently, it is dangerous to draw strong conclusions about the strength of mean reversion using GMM. In contrast, the parameter capturing the levels effect, which is important in differentiating between competing short-rate specifications, is estimated with little bias. (c) 2006 Elsevier B.V. All rights reserved.
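For readers unfamiliar with the setup, a minimal sketch of the CKLS-style estimation problem follows: an Euler-discretised short-rate path dr = (alpha + beta*r) dt + sigma*r^gamma dW and the four standard moment conditions evaluated at the generating parameters. Parameter values, sample size and time step are illustrative assumptions, not the paper's experimental design.

```python
# Hedged sketch: simulate a CKLS-type short rate and evaluate GMM moment conditions.
import numpy as np

def simulate_ckls(alpha, beta, sigma, gamma, r0=0.05, n=2000, dt=1.0 / 12.0, seed=0):
    rng = np.random.default_rng(seed)
    r = np.empty(n)
    r[0] = r0
    for t in range(n - 1):
        drift = (alpha + beta * r[t]) * dt
        diff = sigma * max(r[t], 1e-8) ** gamma * np.sqrt(dt) * rng.standard_normal()
        r[t + 1] = max(r[t] + drift + diff, 1e-8)  # keep the rate positive
    return r

def gmm_moments(params, r, dt=1.0 / 12.0):
    """Sample averages of the four CKLS moment conditions at the given parameters."""
    alpha, beta, sigma, gamma = params
    eps = r[1:] - r[:-1] - (alpha + beta * r[:-1]) * dt   # discretised drift residual
    v = sigma ** 2 * r[:-1] ** (2.0 * gamma) * dt          # conditional variance
    f = np.column_stack([eps, eps * r[:-1], eps ** 2 - v, (eps ** 2 - v) * r[:-1]])
    return f.mean(axis=0)

true_params = (0.04, -0.6, 0.8, 1.5)
r = simulate_ckls(*true_params)
print("moment conditions at true parameters:", np.round(gmm_moments(true_params, r), 6))
```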
Abstract:
In this paper we utilise a stochastic address model of broadcast oligopoly markets to analyse the Australian broadcast television market. In particular, we examine the effect of the presence of a single government market participant in this market. An examination of the dynamics of the simulations demonstrates that the presence of a government market participant can simultaneously generate positive outcomes for viewers as well as for other market suppliers. Further examination of simulation dynamics indicates that privatisation of the government market participant results in reduced viewer choice and diversity. We also demonstrate that additional private market participants would not result in significant benefits to viewers.
Abstract:
The published requirements for accurate measurement of heat transfer at the interface between two bodies have been reviewed. A strategy for reliable measurement has been established, based on the depth of the temperature sensors in the medium, on the inverse method parameters and on the time response of the sensors. Sources of both deterministic and stochastic errors have been investigated and a method to evaluate them has been proposed, with the help of a normalisation technique. The key normalisation variables are the duration of the heat input and the maximum heat flux density. An example of the application of this technique in the field of high-pressure die casting is demonstrated. The normalisation study, coupled with prior determination of the heat input duration, makes it possible to determine the optimum location for the sensors, along with an acceptable sampling rate and the thermocouples' critical response time (as well as any filter characteristics). Results from the gauge are used to assess the suitability of the initial design choices. In particular, the unavoidable response time of the thermocouples is estimated by comparison with the normalised simulation. (c) 2006 Elsevier Ltd. All rights reserved.
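The role of the sensor response time can be pictured with a minimal sketch: a thermocouple modelled as a first-order lag applied to a normalised temperature signal. The pulse shape, its duration and the time constant below are illustrative assumptions, not values from the study.

```python
# Hedged sketch: first-order sensor lag acting on a normalised temperature trace.
import numpy as np

def first_order_lag(t, signal, tau):
    """Discrete first-order sensor response: dTm/dt = (T - Tm) / tau."""
    measured = np.zeros_like(signal)
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        measured[i] = measured[i - 1] + dt / tau * (signal[i - 1] - measured[i - 1])
    return measured

t = np.linspace(0.0, 5.0, 2001)              # time normalised by heat-input duration
T_true = np.clip(t, 0.0, 1.0) - np.clip(t - 1.0, 0.0, None) * 0.2
T_true = np.clip(T_true, 0.0, None)          # rise during heat input, slow decay after
T_meas = first_order_lag(t, T_true, tau=0.1)  # lag equal to 10% of the pulse duration

lag_error = np.max(np.abs(T_true - T_meas))
print(f"peak error due to sensor response time: {lag_error:.3f} (normalised units)")
```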
Abstract:
Despite the identification of SRY as the testis-determining gene in mammals, the genetic interactions controlling the earliest steps of male sex determination remain poorly understood. In particular, the molecular lesions underlying a high proportion of human XY gonadal dysgenesis, XX maleness and XX true hermaphroditism remain undiscovered. A number of screens have identified candidate genes whose expression is modulated during testis or ovary differentiation in mice, but these screens have used whole gonads, consisting of multiple cell types, or stages of gonadal development well beyond the time of sex determination. We describe here a novel reporter mouse line that expresses enhanced green fluorescent protein under the control of an Sf1 promoter fragment, marking Sertoli and granulosa cell precursors during the critical period of sex determination. These cells were purified from gonads of male and female transgenic embryos at 10.5 dpc (shortly after Sry transcription is activated) and 11.5 dpc (when Sox9 transcription begins), and their transcriptomes analysed using Affymetrix genome arrays. We identified 266 genes, including Dhh, Fgf9 and Ptgds, that were upregulated and 50 genes that were downregulated in 11.5 dpc male somatic gonad cells only, and 242 genes, including Fst, that were upregulated in 11.5 dpc female somatic gonad cells only. The majority of these genes are novel and lack identifiable homology, and several human orthologues were found to map to chromosomal loci implicated in disorders of sexual development. These genes represent an important resource with which to piece together the earliest steps of sex determination and gonad development, and provide new candidates for mutation searching in human sexual dysgenesis syndromes.
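Purely as an illustration of the selection logic described here (not the authors' analysis pipeline), the sketch below picks out genes upregulated in 11.5 dpc male somatic gonad cells but not in females; the column names and the two-fold threshold are assumptions.

```python
# Hedged sketch: "upregulated in male only" selection from per-gene expression ratios.
import pandas as pd

def male_specific_upregulated(df, fold=2.0):
    """Genes up in male 11.5 vs 10.5 dpc by >= fold, but not up in female."""
    up_male = df["male_11.5_vs_10.5"] >= fold
    up_female = df["female_11.5_vs_10.5"] >= fold
    return df.loc[up_male & ~up_female].index.tolist()

# Toy expression ratios (illustrative numbers only)
expr = pd.DataFrame(
    {"male_11.5_vs_10.5": [3.1, 2.4, 1.1, 4.0],
     "female_11.5_vs_10.5": [1.0, 2.8, 1.2, 0.9]},
    index=["Dhh", "GeneB", "GeneC", "Fgf9"],
)
print(male_specific_upregulated(expr))  # -> ['Dhh', 'Fgf9'] with these toy values
```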
Abstract:
A fundamental question about the perception of time is whether the neural mechanisms underlying temporal judgements are universal and centralized in the brain, or modality-specific and distributed. Time perception has traditionally been thought to be entirely dissociated from spatial vision. Here we show that the apparent duration of a dynamic stimulus can be manipulated in a local region of visual space by adapting to oscillatory motion or flicker. This implicates spatially localized temporal mechanisms in duration perception. We do not see concomitant changes in the time of onset or offset of the test patterns, demonstrating a direct local effect on duration perception rather than an indirect effect on the time course of neural processing. The effects of adaptation on duration perception can also be dissociated from motion or flicker perception per se. Although 20 Hz adaptation reduces both the apparent temporal frequency and duration of a 10 Hz test stimulus, 5 Hz adaptation increases apparent temporal frequency but has little effect on duration perception. We conclude that there is a peripheral, spatially localized, essentially visual component involved in sensing the duration of visual events.
Abstract:
Stochastic simulation is a recognised tool for quantifying the spatial distribution of geological uncertainty and risk in earth science and engineering. Metals mining is an area where simulation technologies are used extensively; however, applications in the coal mining industry have been limited, largely because of the lack of a systematic demonstration of the capabilities these techniques offer for problem solving in coal mining. This paper presents two broad and technically distinct areas of application in coal mining. The first deals with the use of simulation in the quantification of uncertainty in coal seam attributes and risk assessment to assist coal resource classification, and drillhole spacing optimisation to meet pre-specified risk levels at a required confidence. The second application presents the use of stochastic simulation in the quantification of fault risk, an area of particular interest to underground coal mining, and documents the performance of the approach. The examples presented demonstrate the advantages and positive contribution that stochastic simulation approaches bring to the coal mining industry.