926 results for Bayesian Mixture Model, Cavalieri Method, Trapezoidal Rule


Relevance: 30.00%

Abstract:

A new ball mill scale-up procedure is developed. This procedure has been validated using seven sets of full-scale ball mill data. The largest ball mills in these data have diameters (inside liners) of 6.58 m. The procedure can predict the 80% passing size of the circuit product to within ±6% of the measured value, with a precision of ±11% (one standard deviation); the re-circulating load to within ±33% of the mass-balanced value (this error margin is within the uncertainty associated with the determination of the re-circulating load); and the mill power to within ±5% of the measured value. This procedure is applicable for the design of ball mills which are preceded by autogenous (AG) mills, semi-autogenous (SAG) mills, crushers and flotation circuits. The new procedure is more precise and more accurate than Bond's method for ball mill scale-up. This procedure contains no efficiency correction relating to the mill diameter, which suggests that, within the range of mill diameters studied, milling efficiency does not vary with mill diameter. This is in contrast with Bond's equation: Bond claimed that milling efficiency increases with mill diameter. (C) 2001 Elsevier Science Ltd. All rights reserved.

Relevance: 30.00%

Abstract:

Item noise models of recognition assert that interference at retrieval is generated by the words from the study list. Context noise models of recognition assert that interference at retrieval is generated by the contexts in which the test word has appeared. The authors introduce the bind cue decide model of episodic memory, a Bayesian context noise model, and demonstrate how it can account for data from the item noise and dual-processing approaches to recognition memory. From the item noise perspective, list strength and list length effects, the mirror effect for word frequency and concreteness, and the effects of the similarity of other words in a list are considered. From the dual-processing perspective, process dissociation data on the effects of length, temporal separation of lists, strength, and diagnosticity of context are examined. The authors conclude that the context noise approach to recognition is a viable alternative to existing approaches. (PsycINFO Database Record (c) 2008 APA, all rights reserved)
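Since the abstract does not spell the model out, a toy sketch may help fix ideas. The code below is not the actual bind cue decide model; it only illustrates, with invented parameters (`n_contexts`, `p_match_old`, `p_match_new`), how a Bayesian recognition decision can be driven by matches between the test context and the contexts stored for the test word, so that interference comes from the word's own history rather than from the other study-list words.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sketch only, not the bind cue decide model's equations: the test cue
# is compared with the contexts in which the word has been stored. All
# parameter values are invented for illustration.
n_contexts = 50      # hypothetical number of stored context features
p_match_old = 0.7    # feature-match probability if the word was studied
p_match_new = 0.4    # feature-match probability for an unstudied word

def posterior_old(matches, prior=0.5):
    """P(word was studied | number of matching context features)."""
    like_old = p_match_old**matches * (1 - p_match_old)**(n_contexts - matches)
    like_new = p_match_new**matches * (1 - p_match_new)**(n_contexts - matches)
    return prior * like_old / (prior * like_old + (1 - prior) * like_new)

# Decide "old" when the posterior exceeds 0.5.
old_item = rng.binomial(n_contexts, p_match_old)
new_item = rng.binomial(n_contexts, p_match_new)
print(posterior_old(old_item), posterior_old(new_item))
```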

Relevance: 30.00%

Abstract:

Pasminco Century Mine has developed a geophysical logging system to provide new data for ore mining/grade control and the generation of Short Term Models for mine planning. Previous work indicated the applicability of petrophysical logging for lithology prediction; however, the automation of the method was not considered reliable enough for the development of a mining model. A test survey was undertaken using two diamond-drilled control holes and eight percussion holes. All holes were logged with natural gamma, magnetic susceptibility and density. Calibration of the LogTrans auto-interpretation software using only natural gamma and magnetic susceptibility indicated that both lithology and stratigraphy could be predicted. Development of a capability to enforce stratigraphic order within LogTrans increased the reliability and accuracy of interpretations. After the completion of a feasibility program, Century Mine has invested in a dedicated logging vehicle to log blast holes as well as for use in in-fill drilling programs. Future refinement of the system may lead to the development of GPS-controlled excavators for mining ore.
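LogTrans's own discriminant rules are not given in the abstract, so the following toy nearest-centroid classifier is only a sketch of the auto-interpretation idea, using the two logs the calibration relied on (natural gamma and magnetic susceptibility); all class names and centroid values are invented.

```python
import numpy as np

# Toy nearest-centroid lithology classifier, not LogTrans itself.
# Centroid values (gamma API, magnetic susceptibility) are invented.
centroids = {
    "shale":     np.array([120.0, 0.8]),
    "siltstone": np.array([ 80.0, 0.4]),
    "ore":       np.array([ 40.0, 1.5]),
}

def classify(sample):
    """Assign a logged interval to the nearest lithology centroid."""
    return min(centroids, key=lambda k: np.linalg.norm(sample - centroids[k]))

print(classify(np.array([45.0, 1.4])))   # -> "ore"
```

Enforcing stratigraphic order, as described in the abstract, would amount to constraining the admissible class sequence down-hole rather than classifying each interval independently.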

Relevance: 30.00%

Abstract:

Understanding the genetic architecture of quantitative traits can greatly assist the design of strategies for their manipulation in plant-breeding programs. For a number of traits, genetic variation can be the result of segregation of a few major genes and many polygenes (minor genes). The joint segregation analysis (JSA) is a maximum-likelihood approach for fitting segregation models through the simultaneous use of phenotypic information from multiple generations. Our objective in this paper was to use computer simulation to quantify the power of the JSA method for testing the mixed-inheritance model for quantitative traits when it was applied to the six basic generations: both parents (P1 and P2), F1, F2, and both backcross generations (B1 and B2) derived from crossing the F1 to each parent. A total of 1968 genetic model-experiment scenarios were considered in the simulation study to quantify the power of the method. Factors that interacted to influence the power of the JSA method to correctly detect genetic models were: (1) whether there were one or two major genes in combination with polygenes, (2) the heritability of the major genes and polygenes, (3) the level of dispersion of the major genes and polygenes between the two parents, and (4) the number of individuals examined in each generation (population size). The greatest levels of power were observed for the genetic models defined with simple inheritance; e.g., the power was greater than 90% for the one-major-gene model, regardless of the population size and major-gene heritability. Lower levels of power were observed for the genetic models with complex inheritance (major genes and polygenes), low heritability, small population sizes and a large dispersion of favourable genes between the two parents; e.g., the power was less than 5% for the two-major-gene model with a heritability value of 0.3 and population sizes of 100 individuals. The JSA methodology was then applied to a previously studied sorghum data-set to investigate the genetic control of the putative drought-resistance trait, osmotic adjustment, in three crosses. The previous study concluded that there were two major genes segregating for osmotic adjustment in the three crosses. Application of the JSA method resulted in a change in the proposed genetic model: the presence of the two major genes was confirmed, with the addition of an unspecified number of polygenes.
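To make the fitting idea concrete, here is a hedged sketch (in our own notation, not the paper's) of the mixture likelihood that a segregation analysis maximizes for a single generation, assuming one major gene observed in an F2; JSA extends this by taking the product of such terms across all six generations, with the genotype frequencies appropriate to each.

```latex
% One major gene in an F2: genotypes AA, Aa, aa occur with frequencies
% 1/4, 1/2, 1/4, and polygenic plus environmental variation is normal.
L(\mu_{AA},\mu_{Aa},\mu_{aa},\sigma^2)
  = \prod_{i=1}^{n} \sum_{g \in \{AA,Aa,aa\}} f_g \,
    \frac{1}{\sqrt{2\pi\sigma^2}}
    \exp\!\left(-\frac{(y_i-\mu_g)^2}{2\sigma^2}\right),
\qquad f_{AA}=\tfrac14,\; f_{Aa}=\tfrac12,\; f_{aa}=\tfrac14 .
```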

Relevance: 30.00%

Abstract:

The purpose of this study was to develop a newborn piglet model of hypoxia/ischaemia which would better emulate the clinical situation in the asphyxiated human neonate and produce a consistent degree of histopathological injury following the insult. One-day-old piglets (n = 18) were anaesthetised with a mixture of propofol (10 mg/kg/h) and alfentanil (55.5 µg/kg/h) i.v. The piglets were intubated and ventilated. Physiological variables were monitored continuously. Hypoxia was induced by decreasing the inspired oxygen (FiO2) to 3-4% and adjusting FiO2 to maintain the cerebral function monitor peak amplitude at ≤5 µV. The duration of the mild insult was 20 min, while the severe insult was 30 min, which included 10 min where the blood pressure was allowed to fall below 70% of baseline. Control piglets (n = 4 of 18) were subjected to the same protocol except for the hypoxic/ischaemic insult. The piglets were allowed to recover from anaesthesia and were then euthanased 72 h after the insult. The brains were perfusion-fixed, removed and embedded in paraffin. Coronal sections were stained with haematoxylin/eosin. A blinded observer examined the frontal and parietal cortex, hippocampus, basal ganglia, thalamus and cerebellum for the degree of damage. The total mean histology score for the five areas of the brain for the severe insult was 15.6 ± 4.4 (mean ± S.D., n = 7), whereas no damage was seen in either the mild insult (n = 4) or control groups. This 'severe damage' model produces a consistent level of damage and will prove useful for examining potential neuroprotective therapies in the neonatal brain. (C) 2001 Elsevier Science B.V. All rights reserved.

Relevance: 30.00%

Abstract:

Lateral ventricular volumes based on segmented brain MR images can be significantly underestimated if partial volume effects are not considered. This is because a group of voxels in the neighborhood of the lateral ventricles is often mis-classified as gray matter voxels due to partial volume effects. This group of voxels is actually a mixture of ventricular cerebro-spinal fluid and white matter, and therefore a portion of it should be included as part of the lateral ventricular structure. In this note, we describe an automated method for the measurement of lateral ventricular volumes on segmented brain MR images. Image segmentation was carried out using a combination of intensity correction and thresholding. The method features a procedure for addressing mis-classified voxels in the surroundings of the lateral ventricles. A detailed analysis showed that lateral ventricular volumes could be underestimated by 10 to 30%, depending upon the size of the lateral ventricular structure, if mis-classified voxels were not included. Validation of the method was done through comparison with averaged manually traced volumes. Finally, the merit of the method is demonstrated in the evaluation of the rate of lateral ventricular enlargement. (C) 2001 Elsevier Science Inc. All rights reserved.
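The paper's exact procedure is not reproduced here, but the correction idea can be sketched as follows: gray-matter-labelled voxels adjacent to the ventricular CSF mask are treated as partial-volume voxels, and an assumed fraction of their volume is credited to the ventricles. The function and parameter names (`ventricle_volume`, `pv_fraction`) are ours, not the paper's.

```python
import numpy as np
from scipy import ndimage

# Sketch of the correction idea, not the paper's exact algorithm: voxels
# labelled gray matter that touch the ventricular CSF mask are treated as
# CSF/white-matter partial-volume voxels, and an assumed fraction of
# their volume (pv_fraction) is added to the ventricular volume.
def ventricle_volume(labels, voxel_vol, csf=1, gray=2, pv_fraction=0.5):
    """Ventricular volume (e.g. in mm^3) with a simple partial-volume correction."""
    csf_mask = labels == csf
    # Gray-labelled voxels adjacent to ventricular CSF are the suspects.
    boundary = (labels == gray) & ndimage.binary_dilation(csf_mask)
    return (csf_mask.sum() + pv_fraction * boundary.sum()) * voxel_vol
```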

Relevance: 30.00%

Abstract:

Reaching out to grasp an object (prehension) is a deceptively elegant and skilled behavior. The movement prior to object contact can be described as having two components [1]: the movement of the hand to an appropriate location for gripping the object, the transport component, and the opening and closing of the aperture between the fingers as they prepare to grip the target, the grasp component. The grasp component is sensitive to the size of the object, so that a larger grasp aperture is formed for wider objects [1]; the maximum grasp aperture (MGA) is a little wider than the width of the target object and occurs later in the movement for larger objects [1, 2]. We present a simple model that can account for the temporal relationship between the transport and grasp components. We report the results of an experiment providing empirical support for our rule of thumb. The model provides a simple, but plausible, account of a neural control strategy that has been the center of debate over the last two decades.
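The model itself is not given in the abstract; purely as a point of reference, a common empirical summary in the prehension literature is a roughly linear scaling of maximum grip aperture with object width, sketched below with invented coefficients.

```python
# Toy numerical summary, not the authors' model: MGA scales roughly
# linearly with object width, with a slope below 1 plus a safety margin.
# Both coefficients here are invented for illustration.
def mga_mm(object_width_mm, slope=0.8, margin_mm=20.0):
    return slope * object_width_mm + margin_mm

print(mga_mm(40.0))   # a 40 mm object -> about a 52 mm peak aperture
```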

Relevance: 30.00%

Abstract:

In many occupational safety interventions, the objective is to reduce the injury incidence as well as the mean claims cost once injury has occurred. The claims cost data within a period typically contain a large proportion of zero observations (no claim). The distribution thus comprises a point mass at 0 mixed with a non-degenerate parametric component. Essentially, the likelihood function can be factorized into two orthogonal components. These two components relate respectively to the effect of covariates on the incidence of claims and the magnitude of claims, given that claims are made. Furthermore, the longitudinal nature of the intervention inherently imposes some correlation among the observations. This paper introduces a zero-augmented gamma random effects model for analysing longitudinal data with many zeros. Adopting the generalized linear mixed model (GLMM) approach reduces the original problem to the fitting of two independent GLMMs. The method is applied to evaluate the effectiveness of a workplace risk assessment teams program, trialled within the cleaning services of a Western Australian public hospital.
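In our own notation (a sketch, not the paper's exact specification), the factorization can be written as follows, with \(\delta_i\) indicating a positive claim and \(g\) a gamma density; covariates and random effects enter \(p_i\) through a logit link and \(\mu_i\) through a log link, giving the two independent GLMMs described above.

```latex
% delta_i = 1{y_i > 0} indicates a claim, p_i the claim probability,
% and g(.; mu_i, nu) a gamma density for the positive claim costs.
L \;=\; \prod_{i} (1-p_i)^{1-\delta_i}\, \bigl[\, p_i\, g(y_i;\mu_i,\nu) \,\bigr]^{\delta_i}
  \;=\; \underbrace{\prod_{i} (1-p_i)^{1-\delta_i} p_i^{\delta_i}}_{\text{incidence}}
  \;\times\;
  \underbrace{\prod_{i:\,\delta_i=1} g(y_i;\mu_i,\nu)}_{\text{magnitude}} .
```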

Relevance: 30.00%

Abstract:

A pairing model for nucleons, introduced by Richardson in 1966, which describes proton-neutron pairing as well as proton-proton and neutron-neutron pairing, is re-examined in the context of the quantum inverse scattering method. Specifically, the model is shown to be integrable through the explicit construction of its conserved operators. We determine the eigenvalues of these operators in terms of the Bethe ansatz, which in turn leads to an expression for the energy eigenvalues of the Hamiltonian.
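For readers unfamiliar with the quantum inverse scattering method, the generic mechanism (standard QISM, not specific to this model) is that a one-parameter family of commuting transfer matrices generates the conserved operators:

```latex
% T_0(u) is the monodromy matrix built from the R-matrix; the trace is
% over the auxiliary space 0.
t(u) = \operatorname{tr}_0 T_0(u), \qquad [\,t(u),\,t(v)\,] = 0 \quad \forall\, u, v ,
```

so that expanding \(t(u)\) in the spectral parameter yields mutually commuting conserved operators, whose Bethe-ansatz eigenvalues then determine the spectrum.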

Relevance: 30.00%

Abstract:

A new integrable model which is a variant of the one-dimensional Hubbard model is proposed. The integrability of the model is verified by presenting the associated quantum R-matrix which satisfies the Yang-Baxter equation. We argue that the new model possesses the SO(4) algebra symmetry, which contains a representation of the eta-pairing SU(2) algebra and a spin SU(2) algebra. Additionally, the algebraic Bethe ansatz is studied by means of the quantum inverse scattering method. The spectrum of the Hamiltonian, eigenvectors, as well as the Bethe ansatz equations, are discussed. (C) 2002 American Institute of Physics.
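For reference, the Yang-Baxter equation in its general (non-difference) form, the relevant one for Hubbard-type R-matrices, acting on \(V \otimes V \otimes V\) with subscripts marking the two factors on which each R acts:

```latex
R_{12}(u_1,u_2)\, R_{13}(u_1,u_3)\, R_{23}(u_2,u_3)
 \;=\; R_{23}(u_2,u_3)\, R_{13}(u_1,u_3)\, R_{12}(u_1,u_2) .
```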

Relevance: 30.00%

Abstract:

This paper deals with atomic systems coupled to a structured reservoir of quantum EM field modes, with particular relevance to atoms interacting with the field in photonic band gap materials. The case of high-Q cavities has been treated elsewhere using Fano diagonalization based on a quasimode approach, showing that the cavity quasimodes are responsible for the pseudomodes introduced to treat non-Markovian behaviour. The paper considers a simple model of a photonic band gap case, where the spatially dependent permittivity consists of a constant term plus a small spatially periodic term that leads to a narrow band gap in the spectrum of mode frequencies. Most treatments of photonic band gap materials are based on the true modes, obtained numerically by solving the Helmholtz equation for the actual spatially periodic permittivity. Here the field modes are first treated in terms of a simpler quasimode approach, in which the quasimodes are plane waves associated with the constant permittivity term. Couplings between the quasimodes occur owing to the small periodic term in the permittivity, with selection rules for the coupled modes being related to the reciprocal lattice vectors. This produces a field Hamiltonian in quasimode form. A matrix diagonalization method may be applied to relate true-mode annihilation operators to those for the quasimodes. The atomic transitions are coupled to all the quasimodes; the true-mode atom-EM field coupling constants (one-photon Rabi frequencies) are related to those for the quasimodes, and expressions are also obtained for the true-mode density. The results for the one-photon Rabi frequencies differ from those assumed in other work. Expressions for atomic decay rates are obtained using the Fermi golden rule, although these are valid only well away from the band gaps.
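The Fermi golden rule expression referred to above has the standard form (our notation): with \(g(\omega_0)\) the atom-field coupling (one-photon Rabi frequency) at the transition frequency \(\omega_0\) and \(\rho(\omega_0)\) the density of true modes,

```latex
\Gamma(\omega_0) \;=\; 2\pi\, \bigl|g(\omega_0)\bigr|^{2}\, \rho(\omega_0) ,
```

which fails near the band edges, where the mode density varies too rapidly for the Markovian (golden rule) approximation, consistent with the caveat in the abstract.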

Relevance: 30.00%

Abstract:

We compare two different approaches to the control of the dynamics of a continuously monitored open quantum system. The first is Markovian feedback, as introduced in quantum optics by Wiseman and Milburn [Phys. Rev. Lett. 70, 548 (1993)]. The second is feedback based on an estimate of the system state, developed recently by Doherty and Jacobs [Phys. Rev. A 60, 2700 (1999)]. Here we choose to call it, for brevity, Bayesian feedback. For systems with nonlinear dynamics, we expect these two methods of feedback control to give markedly different results. The simplest possible nonlinear system is a driven and damped two-level atom, so we choose this as our model system. The monitoring is taken to be homodyne detection of the atomic fluorescence, and the control is by modulating the driving. The aim of the feedback in both cases is to stabilize the internal state of the atom as close as possible to an arbitrarily chosen pure state, in the presence of inefficient detection and other forms of decoherence. Our results (obtained without recourse to stochastic simulations) prove that Bayesian feedback is never inferior, and is usually superior, to Markovian feedback. However, it would be far more difficult to implement than Markovian feedback and it loses its superiority when obvious simplifying approximations are made. It is thus not clear which form of feedback would be better in the face of inevitable experimental imperfections.
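For orientation, the conditioning equation underlying both schemes is the standard stochastic master equation for homodyne detection (in Wiseman-Milburn conventions; \(\eta\) is the detection efficiency, \(c\) the atomic lowering operator scaled by the decay rate, and \(dW\) a Wiener increment):

```latex
d\rho_c = -i[H,\rho_c]\,dt + \mathcal{D}[c]\rho_c\,dt
          + \sqrt{\eta}\,\mathcal{H}[c]\rho_c\,dW,
\qquad
\mathcal{D}[c]\rho = c\rho c^{\dagger} - \tfrac12\{c^{\dagger}c,\rho\}, \quad
\mathcal{H}[c]\rho = c\rho + \rho c^{\dagger}
  - \operatorname{Tr}\!\bigl[(c+c^{\dagger})\rho\bigr]\rho .
```

Markovian feedback feeds the homodyne current back directly onto the driving, whereas Bayesian feedback integrates an equation of this form to maintain a state estimate and conditions the control on that estimate.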

Relevance: 30.00%

Abstract:

A systematic method for constructing trigonometric R-matrices corresponding to the (multiplicity-free) tensor product of any two affinizable representations of a quantum algebra or superalgebra has been developed by the Brisbane group and its collaborators. This method has been referred to as the Tensor Product Graph Method. Here we describe applications of this method to untwisted and twisted quantum affine superalgebras.

Relevance: 30.00%

Abstract:

Control of chaotic instability in a simplified model of a spinning spacecraft with dissipation is achieved using an algorithm derived from Lyapunov's second method. The control method is implemented on a realistic spacecraft parameter configuration which has been found to exhibit chaotic instability for a range of forcing amplitudes and frequencies when a sinusoidally varying torque is applied to the spacecraft. Such a torque may arise in practice from an unbalanced rotor or from vibrations in appendages. Numerical simulations are performed and the results are studied by means of time histories, phase space, Poincaré maps, Lyapunov characteristic exponents and bifurcation diagrams. (C) 2002 Elsevier Science Ltd. All rights reserved.
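In outline (standard Lyapunov theory, not the paper's specific control law), the second method seeks a control input making some scalar function \(V\) of the deviation \(x\) from the target motion satisfy

```latex
V(x) > 0 \;\;\text{for}\;\; x \neq 0, \qquad
\dot V(x) = \nabla V(x)\cdot \dot x \,\le\, 0 ,
```

with strict decrease giving asymptotic stability; choosing the control so as to force \(\dot V \le 0\) is what suppresses the chaotic divergence.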

Relevance: 30.00%

Abstract:

The majority of the world's population now resides in urban environments, and information on the internal composition and dynamics of these environments is essential to enable preservation of certain standards of living. Remotely sensed data, especially the global coverage of moderate-spatial-resolution satellites such as Landsat, the Indian Resource Satellite and the Système Pour l'Observation de la Terre (SPOT), offer a highly useful data source for mapping the composition of these cities and examining their changes over time. The utility and range of applications for remotely sensed data in urban environments could be improved with a more appropriate conceptual model relating urban environments to the sampling resolutions of imaging sensors and processing routines. Hence, the aim of this work was to take the Vegetation-Impervious surface-Soil (VIS) model of urban composition and match it with the most appropriate image-processing methodology to deliver information on VIS composition for urban environments. Several approaches were evaluated for mapping the urban composition of Brisbane city (south-east Queensland, Australia) using Landsat 5 Thematic Mapper data and 1:5000 aerial photographs. The methods evaluated were: image classification; interpretation of aerial photographs; and constrained linear mixture analysis. Over 900 reference sample points on four transects were extracted from the aerial photographs and used as a basis to check the output of the classification and mixture analysis. Distinctive zonations of VIS related to urban composition were found in the per-pixel classification and aggregated air-photo interpretation; however, significant spectral confusion also resulted between classes. In contrast, the VIS fraction images produced from the mixture analysis enabled distinctive densities of commercial, industrial and residential zones within the city to be clearly defined, based on their relative amount of vegetation cover. The soil fraction image served as an index for areas being (re)developed. The logical match of a low (L)-resolution spectral mixture analysis approach with the moderate spatial resolution image data ensured that the processing model matched the spectrally heterogeneous nature of urban environments at the scale of Landsat Thematic Mapper data.
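As an illustration of constrained linear mixture analysis (a sketch, not the paper's implementation), each pixel spectrum can be unmixed into non-negative, sum-to-one fractions of vegetation, impervious-surface and soil endmembers; the endmember reflectances below are invented, and the sum-to-one constraint is imposed through a standard penalty row.

```python
import numpy as np
from scipy.optimize import nnls

# Sketch of constrained linear spectral unmixing. Endmember reflectances
# for the six reflective TM bands are invented for illustration.
E = np.array([          # columns: vegetation, impervious, soil
    [0.04, 0.10, 0.12],
    [0.06, 0.12, 0.18],
    [0.05, 0.14, 0.24],
    [0.45, 0.20, 0.30],
    [0.22, 0.25, 0.38],
    [0.10, 0.22, 0.34],
])

def unmix(pixel, weight=100.0):
    """Solve min ||E f - pixel|| subject to f >= 0, sum(f) = 1 (penalty form)."""
    A = np.vstack([E, weight * np.ones((1, 3))])   # append sum-to-one row
    b = np.append(pixel, weight)
    f, _ = nnls(A, b)
    return f / f.sum()                             # remove tiny numerical drift

print(unmix(np.array([0.07, 0.09, 0.10, 0.33, 0.24, 0.16])))
```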