148 results for Joint market simulation


Relevance: 20.00%

Publisher:

Abstract:

When examining a rock mass, joint sets and their orientations can play a significant role in how the rock mass will behave. To identify the joint sets present in the rock mass, the orientations of individual fracture planes can be measured on exposed rock faces and the resulting data examined for heterogeneity. In this article, the expectation-maximization algorithm is used to fit mixtures of Kent component distributions to the fracture data to aid in the identification of joint sets. An additional uniform component is also included in the model to accommodate the noise present in the data.
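
As a rough illustration of this approach, the sketch below runs a simple EM loop over fracture-pole unit vectors. It uses the simpler von Mises-Fisher distribution as a stand-in for the Kent components, together with a uniform component on the sphere for noise; the initialisation, the concentration update and the cap on concentration are illustrative assumptions rather than the article's procedure.

```python
# Minimal EM sketch for clustering fracture-pole orientations (unit vectors on the
# sphere) into joint sets. The article fits Kent distributions; here the simpler
# von Mises-Fisher (vMF) distribution stands in for each component, plus a uniform
# component for noise. All parameter choices below are illustrative assumptions.
import numpy as np

def vmf_pdf(x, mu, kappa):
    # vMF density on the 2-sphere: kappa / (4*pi*sinh(kappa)) * exp(kappa * mu.x)
    c = kappa / (4.0 * np.pi * np.sinh(kappa))
    return c * np.exp(kappa * (x @ mu))

def em_joint_sets(x, n_sets, n_iter=100):
    """x: (n, 3) array of unit pole vectors; returns component parameters and weights."""
    n = len(x)
    rng = np.random.default_rng(0)
    mus = x[rng.choice(n, n_sets, replace=False)].copy()  # initial mean directions
    kappas = np.full(n_sets, 10.0)                         # initial concentrations
    weights = np.full(n_sets + 1, 1.0 / (n_sets + 1))      # last slot = uniform noise
    uniform_density = 1.0 / (4.0 * np.pi)
    for _ in range(n_iter):
        # E-step: responsibilities of each vMF component and of the noise component
        dens = np.column_stack(
            [vmf_pdf(x, mus[k], kappas[k]) for k in range(n_sets)]
            + [np.full(n, uniform_density)]
        )
        resp = weights * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: update mixing weights, mean directions and concentrations
        weights = resp.mean(axis=0)
        for k in range(n_sets):
            r = resp[:, k] @ x
            r_norm = np.linalg.norm(r)
            mus[k] = r / r_norm
            r_bar = r_norm / resp[:, k].sum()
            # Banerjee-style approximation, capped to avoid overflow in this sketch
            kappas[k] = min(r_bar * (3 - r_bar**2) / (1 - r_bar**2), 100.0)
    return mus, kappas, weights
```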

Relevance: 20.00%

Publisher:

Abstract:

The step size determines the accuracy of a discrete element simulation. Because the position and velocity updates use a pre-calculated table, step size control cannot rely on the usual integration formulas. A step size control scheme for use with the table-driven velocity and position calculation instead uses the difference between the result of one big step and that of two small steps. This variable time step method automatically chooses a suitable time step size for each particle at each step according to the conditions. Simulation using the fixed time step method is compared with simulation using the variable time step method. The difference in computation time for the same accuracy using a variable step size (compared to the fixed step) depends on the particular problem; for a simple test case the times are roughly similar. However, the variable step size gives the required accuracy on the first run, whereas a fixed step size may require several runs to check the simulation accuracy, or a conservative step size that results in longer run times. (C) 2001 Elsevier Science Ltd. All rights reserved.
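
A minimal sketch of the underlying step-doubling idea is given below: the state is advanced once with a big step and twice with half steps, the difference serves as the error estimate, and the step size is grown or shrunk accordingly. The simple symplectic-Euler update, tolerance and adjustment factors are assumptions for illustration, not the paper's table-driven scheme.

```python
# Illustrative sketch of step-doubling time step control: advance a particle's state
# once with step h and again with two steps of h/2, use the difference as an error
# estimate, and adjust h. state is a (position, velocity) pair of numpy arrays.
import numpy as np

def advance(state, h, accel):
    """One explicit (symplectic Euler) update of (position, velocity)."""
    pos, vel = state
    vel = vel + h * accel(pos)
    pos = pos + h * vel
    return pos, vel

def adaptive_step(state, h, accel, tol=1e-6, h_min=1e-6, h_max=1e-2):
    while True:
        big = advance(state, h, accel)                              # one big step
        half = advance(advance(state, h / 2, accel), h / 2, accel)  # two half steps
        err = np.linalg.norm(big[0] - half[0])                      # position difference
        if err <= tol or h <= h_min:
            # accept the more accurate two-half-step result; allow h to grow
            return half, min(h * 1.5, h_max)
        h = max(h / 2, h_min)                                       # reject, retry with smaller step
```

In this sketch each particle would carry its own current step size h and call adaptive_step once per cycle, which is the sense in which the step size is chosen per particle.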

Relevance: 20.00%

Publisher:

Abstract:

Activated sludge models are used extensively in the study of wastewater treatment processes. While various commercial implementations of these models are available, many people need to code the models themselves using the simulation packages available to them. Quality assurance of such models is difficult. While benchmarking problems have been developed and are available, comparing simulation data with that of commercial models leads only to the detection, not the isolation, of errors, and identifying the errors in the code is time-consuming. In this paper, we address the problem by developing a systematic and largely automated approach to the isolation of coding errors. There are three steps: firstly, possible errors are classified according to their place in the model structure and a feature matrix is established for each class of errors. Secondly, an observer is designed to generate residuals, such that each class of errors imposes a subspace, spanned by its feature matrix, on the residuals. Finally, localising the residuals in a subspace isolates coding errors. The algorithm proved capable of rapidly and reliably isolating a variety of single and simultaneous errors in a case study using the ASM 1 activated sludge model. In this paper a newly coded model was verified against a known implementation. The method is also applicable to simultaneous verification of any two independent implementations, and hence is useful in commercial model development.
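
The isolation step can be pictured with the short sketch below: given the residual produced by the observer and a feature matrix for each candidate error class, the residual is projected onto each class subspace and the class whose subspace explains it best is reported. The observer design and the feature matrices themselves are assumed to be available and are not part of this illustration.

```python
# Minimal sketch of the isolation step: project the observer's residual onto each
# error class's feature subspace and pick the class that explains it best. The
# observer and the feature matrices are assumed given (an assumption here).
import numpy as np

def isolate_error_class(residual, feature_matrices):
    """residual: (m,) array; feature_matrices: dict mapping class name -> (m, q) array."""
    best_class, best_misfit = None, np.inf
    for name, F in feature_matrices.items():
        # least-squares projection of the residual onto span(F)
        coeffs, *_ = np.linalg.lstsq(F, residual, rcond=None)
        misfit = np.linalg.norm(residual - F @ coeffs) / np.linalg.norm(residual)
        if misfit < best_misfit:
            best_class, best_misfit = name, misfit
    return best_class, best_misfit
```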

Relevance: 20.00%

Publisher:

Abstract:

Computational simulations of the title reaction are presented, covering a temperature range from 300 to 2000 K. At lower temperatures we find that initial formation of the cyclopropene complex by addition of methylene to acetylene is irreversible, as is the stabilisation process via collisional energy transfer. Product branching between propargyl and the stable isomers is predicted at 300 K as a function of pressure for the first time. At intermediate temperatures (1200 K), complex temporal evolution involving multiple steady states begins to emerge. At high temperatures (2000 K) the timescale for subsequent unimolecular decay of thermalized intermediates begins to impinge on the timescale for reaction of methylene, such that the rate of formation of propargyl product does not admit a simple analysis in terms of a single time-independent rate constant until the methylene supply becomes depleted. Likewise, at the elevated temperatures the thermalized intermediates cannot be regarded as irreversible product channels. Our solution algorithm involves spectral propagation of a symmetrised version of the discretized master equation matrix, and is implemented in a high precision environment which makes hitherto unachievable low-temperature modelling a reality.
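
The propagation scheme can be illustrated with the brief sketch below: assuming detailed balance with respect to equilibrium populations f, the rate matrix is symmetrised as S = F^{-1/2} M F^{1/2} (F = diag(f)), diagonalised once, and the populations at any time follow by exponentiating the eigenvalues. The sketch uses ordinary double precision purely for illustration, whereas the article works in a high-precision environment.

```python
# Sketch of spectral propagation of a master equation dp/dt = M p. Assuming detailed
# balance with equilibrium populations f (all f_i > 0), S = F^{-1/2} M F^{1/2} is
# symmetric, so one symmetric eigendecomposition gives the populations at any time:
# p(t) = F^{1/2} U exp(L t) U^T F^{-1/2} p(0). Double precision is used only for
# illustration; the article's implementation uses extended precision.
import numpy as np

def propagate(M, f, p0, times):
    """M: (n, n) rate matrix obeying detailed balance w.r.t. f; p0: initial populations."""
    f_sqrt = np.sqrt(f)
    S = (M * f_sqrt[None, :]) / f_sqrt[:, None]   # symmetrised rate matrix
    evals, U = np.linalg.eigh(S)                  # one diagonalisation serves all times
    q0 = U.T @ (p0 / f_sqrt)
    return [f_sqrt * (U @ (np.exp(evals * t) * q0)) for t in times]
```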

Relevance: 20.00%

Publisher:

Abstract:

The QU-GENE Computing Cluster (QCC) is a hardware and software solution to the automation and speedup of large QU-GENE (QUantitative GENEtics) simulation experiments that are designed to examine the properties of genetic models, particularly those that involve factorial combinations of treatment levels. QCC automates the management of the distribution of components of the simulation experiments among the networked single-processor computers to achieve the speedup.
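
The distribution pattern behind such an experiment can be sketched as follows: enumerate every factorial combination of treatment levels and farm the runs out to workers. The sketch below uses Python multiprocessing on a single machine purely to illustrate the idea (QCC distributes runs across networked computers), and the factor names and the run function are placeholders, not QU-GENE's actual interface.

```python
# Sketch of farming out a factorial simulation experiment: enumerate every
# combination of treatment levels and distribute the runs across worker processes.
# run_one_experiment and the factor levels are hypothetical placeholders.
import itertools
from multiprocessing import Pool

FACTORS = {                      # hypothetical treatment factors and levels
    "heritability": [0.1, 0.3, 0.5],
    "population_size": [100, 500],
    "selection_intensity": [0.05, 0.10, 0.20],
}

def run_one_experiment(combo):
    # placeholder for a single simulation run with this treatment combination
    return dict(zip(FACTORS, combo)), 0.0

if __name__ == "__main__":
    combos = list(itertools.product(*FACTORS.values()))
    with Pool() as pool:
        results = pool.map(run_one_experiment, combos)   # one run per combination
```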

Relevance: 20.00%

Publisher:

Abstract:

We developed a general model to assess patient activity within the primary and secondary health-care sectors following a dermatology outpatient consultation. Based on observed variables from the UK teledermatology trial, the model showed that up to 11 doctor-patient interactions occurred before a patient was ultimately discharged from care. In a cohort of 1000 patients, the average number of health-care visits was 2.4 (range 1-11). Simulation analysis suggested that the most important parameter affecting the total number of doctor-patient interactions is patient discharge from care following the initial consultation. This implies that resources should be concentrated in this area. The introduction of teledermatology (either realtime or store-and-forward) changes the values of the model parameters. The model provides a quantitative tool for planning the future provision of dermatology health care.
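
The pathway model can be illustrated with a small Monte Carlo sketch: after the initial consultation a patient is either discharged or seen again, up to a cap on interactions. The discharge probabilities below are illustrative placeholders, not the observed values from the trial.

```python
# Monte Carlo sketch of the patient-pathway model: after the initial consultation a
# patient is either discharged or referred back for another visit, up to a cap.
# The probabilities are illustrative placeholders, not the trial's observed values.
import random

def simulate_visits(p_discharge_first=0.6, p_discharge_later=0.5, max_visits=11):
    visits = 1                                   # the initial consultation
    p = p_discharge_first
    while visits < max_visits and random.random() > p:
        visits += 1                              # a further doctor-patient interaction
        p = p_discharge_later
    return visits

cohort = [simulate_visits() for _ in range(1000)]
print(sum(cohort) / len(cohort))                 # mean visits per patient in the cohort
```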

Relevance: 20.00%

Publisher:

Abstract:

Knee joint-position sensitivity has been shown to decline with increasing age, with much of the research reported in the literature investigating this age effect in non-weight-bearing (NWB) conditions. However, little data is available for the more functional weight-bearing conditions. The objective of this study was to identify the influence of age on the accuracy and nature of knee joint-position sense (JPS) in both full weight-bearing (FWB) and partial weight-bearing (PWB) conditions, and to determine the effect of lower-extremity dominance on knee JPS. Sixty healthy subjects from three age groups (young: 20-35 years, middle-aged: 40-55 years, and older: 60-75 years) were assessed. Tests were conducted on both the right and left legs to examine the ability of subjects to correctly reproduce knee angles in an active criterion-active repositioning paradigm. Knee angles were measured in degrees using an electromagnetic tracking device, the Polhemus 3Space Fastrak, which detected the positions of sensors placed on the test limb. Errors in FWB knee joint repositioning did not increase with age, but significant age-related increases in knee joint-repositioning error were found in PWB. Elderly subjects tended to overshoot the criterion angle more often than subjects from the young and middle-aged groups. Subjects in all three age groups performed better in FWB than in PWB. Differences between the stance-dominant (STD) and skill-dominant (SKD) legs did not reach significance. Results demonstrated that, for normal pain-free individuals, there is no age-related decline in knee JPS in FWB, although an age effect does exist in PWB. This outcome challenges the current view that a generalised decline in knee joint proprioception occurs with age. In addition, lower-limb dominance is not a factor in the acuity of knee JPS.

Relevance: 20.00%

Publisher:

Abstract:

Computer simulation was used to suggest potential selection strategies for beef cattle breeders with different mixes of clients between two potential markets. The traditional market paid on the basis of carcass weight (CWT), while a new market considered marbling grade in addition to CWT as a basis for payment. Both markets instituted discounts for CWT in excess of 340 kg and light carcasses below 300 kg. Herds were simulated for each price category on the carcass weight grid for the new market. This enabled the establishment of phenotypic relationships among the traits examined [CWT, percent intramuscular fat (IMF), carcass value in the traditional market, carcass value in the new market, and the expected proportion of progeny in elite price cells in the new market pricing grid]. The appropriateness of breeding goals was assessed on the basis of client satisfaction. Satisfaction was determined by the equitable distribution of available stock between markets combined with the assessment of the utility of the animal within the market to which it was assigned. The best goal for breeders with predominantly traditional clients was a CWT in excess of 330 kg, while that for breeders with predominantly new market clients was a CWT of between 310 and 329 kg and with a marbling grade of AAA in the Ontario carcass pricing system. For breeders who wished to satisfy both new and traditional clients, the optimal CWT was 310-329 kg and the optimal marbling grade was AA-AAA. This combination resulted in satisfaction levels of greater than 75% among clients, regardless of the distribution of the clients between the traditional and new marketplaces.
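
A simplified sketch of this kind of simulation is given below: carcass weight and intramuscular fat are drawn from an assumed distribution and each carcass is valued on the two grids, with discounts outside the 300-340 kg band and a marbling premium in the new market. All prices, premiums and distribution parameters are illustrative assumptions, not the values of the Ontario grid or of the study.

```python
# Sketch of valuing simulated carcasses on two pricing grids. Carcass weight (CWT, kg)
# and intramuscular fat (IMF, %) are drawn from an assumed bivariate normal; the base
# price, the discounts outside the 300-340 kg band, and the marbling premiums are
# illustrative numbers only.
import numpy as np

rng = np.random.default_rng(1)
mean = [320.0, 5.0]                          # assumed means for CWT and IMF
cov = [[400.0, 10.0], [10.0, 1.0]]           # assumed (co)variances
cwt, imf = rng.multivariate_normal(mean, cov, size=1000).T

def traditional_value(cwt):
    price = 4.00                              # $/kg, illustrative base price
    price = price - np.where(cwt > 340, 0.40, 0.0)   # heavy-carcass discount
    price = price - np.where(cwt < 300, 0.60, 0.0)   # light-carcass discount
    return price * cwt

def new_market_value(cwt, imf):
    marbling_premium = np.where(imf >= 6.0, 0.30, np.where(imf >= 4.0, 0.15, 0.0))
    return traditional_value(cwt) + marbling_premium * cwt

print(traditional_value(cwt).mean(), new_market_value(cwt, imf).mean())
```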

Relevance: 20.00%

Publisher:

Abstract:

The development of cropping systems simulation capabilities world-wide, combined with easy access to powerful computing, has resulted in a plethora of agricultural models and, consequently, model applications. Nonetheless, the scientific credibility of such applications and their relevance to farming practice is still being questioned. Our objective in this paper is to highlight some of the model applications from which benefits for farmers were or could be obtained via changed agricultural practice or policy. Changed on-farm practice due to the direct contribution of modelling, while keenly sought after, may in some cases be less achievable than a contribution via agricultural policies. This paper is intended to give some guidance for future model applications. It is not a comprehensive review of model applications, nor is it intended to discuss modelling in the context of social science or extension policy. Rather, we take snapshots around the globe to 'take stock' and to demonstrate that well-defined financial and environmental benefits can be obtained on-farm from the use of models. We highlight the importance of 'relevance' and hence the importance of true partnerships between all stakeholders (farmers, scientists, advisers) for the successful development and adoption of simulation approaches. Specifically, we address some key points that are essential for successful model applications, such as: (1) issues to be addressed must be neither trivial nor obvious; (2) a modelling approach must reduce complexity rather than proliferate choices in order to aid the decision-making process; and (3) the cropping systems must be sufficiently flexible to allow management interventions based on insights gained from models. The pros and cons of normative approaches (e.g. decision support software that can reach a wide audience quickly but is often poorly contextualized for any individual client) versus model applications within the context of an individual client's situation will also be discussed. We suggest that a tandem approach is necessary whereby the latter is used in the early stages of model application for confidence building amongst client groups. This paper focuses on five specific regions that differ fundamentally in terms of environment and socio-economic structure, and hence in their requirements for successful model applications. Specifically, we will give examples from Australia and South America (high climatic variability, large areas, low input, technologically advanced); Africa (high climatic variability, small areas, low input, subsistence agriculture); India (high climatic variability, small areas, medium-level inputs, technologically progressing); and Europe (relatively low climatic variability, small areas, high input, technologically advanced). The contrast between Australia and Europe will further demonstrate how successful model applications are strongly influenced by the policy framework within which producers operate. We suggest that this might eventually lead to better adoption of fully integrated systems approaches and result in the development of resilient farming systems that are in tune with current climatic conditions and are adaptable to biophysical and socioeconomic variability and change. (C) 2001 Elsevier Science Ltd. All rights reserved.