907 results for Simulation Design
Abstract:
A generalised model for the prediction of single char particle gasification dynamics, accounting for multi-component mass transfer with chemical reaction, heat transfer, structure evolution and peripheral fragmentation, is developed in this paper. Maxwell-Stefan analysis is uniquely applied to both micropores and macropores within the framework of the dusty-gas model to account for the bidisperse nature of the char, which differs significantly from conventional models based on a single pore type. The peripheral fragmentation and random-pore correlation incorporated into the model enable prediction of structure/reactivity relationships. The occurrence of chemical reaction within the boundary layer reported by Biggs and Agarwal (Chem. Eng. Sci. 52 (1997) 941) has been confirmed through an analysis of the CO/CO2 product ratio obtained from model simulations. However, it is also quantitatively observed that the significance of the boundary layer reaction diminishes notably with decreasing oxygen concentration in the flue gas, operating pressure and film thickness. Computations have also shown that, in the presence of diffusional gradients, peripheral fragmentation occurs at the particle surface in the early stages of conversion, after which conversion quickens significantly owing to the reduced particle size. Results showing the early commencement of peripheral fragmentation at relatively low overall conversion, obtained from a large number of simulations, agree well with experimental observations reported by Feng and Bhatia (Energy & Fuels 14 (2000) 297). Comprehensive analysis of the simulation results is carried out based on well-accepted physical principles to rationalise the model predictions. (C) 2001 Elsevier Science Ltd. All rights reserved.
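The structure/reactivity link mentioned above is commonly expressed through the random pore model of Bhatia and Perlmutter; the sketch below is a minimal version of that correlation, not necessarily the exact form used in this paper. The structural parameter value in the usage note is illustrative.

```python
import math

def random_pore_rate(X, psi):
    """Normalised gasification rate r(X)/r(0) under the random pore model.

    X   : carbon conversion, 0 <= X < 1
    psi : structural parameter (pore overlap); for psi > 2 the rate
          passes through a maximum at intermediate conversion as pore
          surfaces grow before coalescing.
    """
    if not 0.0 <= X < 1.0:
        raise ValueError("conversion X must lie in [0, 1)")
    return (1.0 - X) * math.sqrt(1.0 - psi * math.log(1.0 - X))
```

For example, with psi = 5 the normalised rate rises above its initial value at low conversion and then falls away as the particle is consumed.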
Abstract:
We developed a general model to assess patient activity within the primary and secondary health-care sectors following a dermatology outpatient consultation. Based on observed variables from the UK teledermatology trial, the model showed that up to 11 doctor-patient interactions occurred before a patient was ultimately discharged from care. In a cohort of 1000 patients, the average number of health-care visits was 2.4 (range 1-11). Simulation analysis suggested that the most important parameter affecting the total number of doctor-patient interactions is patient discharge from care following the initial consultation. This implies that resources should be concentrated in this area. The introduction of teledermatology (either real-time or store-and-forward) changes the values of the model parameters. The model provides a quantitative tool for planning the future provision of dermatology health-care.
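A pathway model of this kind lends itself to a compact Monte Carlo sketch. The version below assumes a fixed per-visit discharge probability and the observed cap of 11 interactions; the probability value is an illustrative assumption, not the trial's estimate.

```python
import random
import statistics

def simulate_cohort(n_patients=1000, p_discharge=0.42, max_visits=11, seed=1):
    """Count doctor-patient interactions per patient until discharge.

    After each visit the patient is discharged with probability
    p_discharge (an illustrative value, not the trial's parameter);
    pathways are capped at max_visits, as observed in the model.
    """
    rng = random.Random(seed)
    visits = []
    for _ in range(n_patients):
        v = 1
        while rng.random() > p_discharge and v < max_visits:
            v += 1
        visits.append(v)
    return visits

visits = simulate_cohort()
mean_visits = statistics.mean(visits)
```

The sensitivity result in the abstract corresponds to varying p_discharge and observing its dominant effect on mean_visits.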
Abstract:
Objective: To describe and analyse the study design and manuscript deficiencies in original research articles submitted to Emergency Medicine. Methods: This was a retrospective, analytical study. Articles were enrolled if the reports of the Section Editor and two reviewers were available. Data were extracted from these reports only. Outcome measures were the mean number and nature of the deficiencies and the mean reviewers’ assessment score. Results: Fifty-seven articles were evaluated (28 accepted for publication, 19 rejected, 10 pending revision). The mean (± SD) number of deficiencies was 18.1 ± 6.9, 16.4 ± 6.5 and 18.4 ± 6.7 for all articles, articles accepted for publication and articles rejected, respectively (P = 0.31 between accepted and rejected articles). The mean assessment scores (0–10) were 5.5 ± 1.5, 5.9 ± 1.5 and 4.7 ± 1.4 for all articles, articles accepted for publication and articles rejected, respectively. Accepted articles had a significantly higher assessment score than rejected articles (P = 0.006). For each group, there was a negative correlation between the number of deficiencies and the mean assessment score (P < 0.05). Significantly more rejected articles ‘… did not further our knowledge’ (P = 0.0014) and ‘… did not describe background information adequately’ (P = 0.049). Many rejected articles had ‘… findings that were not clinically or socially significant’ (P = 0.07). Common deficiencies among all articles included ambiguity of the methods (77%) and results (68%), conclusions not warranted by the data (72%), poor referencing (56%), inadequate study design description (51%), unclear tables (49%), an overly long discussion (49%), limitations of the study not described (51%), inadequate definition of terms (49%) and subject selection bias (40%). Conclusions: Researchers should undertake studies that are likely to further our knowledge and be clinically or socially significant.
Deficiencies in manuscript preparation are more frequent than mistakes in study design and execution. Specific training or assistance in manuscript preparation is indicated.
Abstract:
Understanding the genetic architecture of quantitative traits can greatly assist the design of strategies for their manipulation in plant-breeding programs. For a number of traits, genetic variation can be the result of segregation of a few major genes and many polygenes (minor genes). The joint segregation analysis (JSA) is a maximum-likelihood approach for fitting segregation models through the simultaneous use of phenotypic information from multiple generations. Our objective in this paper was to use computer simulation to quantify the power of the JSA method for testing the mixed-inheritance model for quantitative traits when it was applied to the six basic generations: both parents (P-1 and P-2), F-1, F-2, and both backcross generations (B-1 and B-2) derived from crossing the F-1 to each parent. A total of 1968 genetic model-experiment scenarios were considered in the simulation study to quantify the power of the method. Factors that interacted to influence the power of the JSA method to correctly detect genetic models were: (1) whether there were one or two major genes in combination with polygenes, (2) the heritability of the major genes and polygenes, (3) the level of dispersion of the major genes and polygenes between the two parents, and (4) the number of individuals examined in each generation (population size). The greatest levels of power were observed for the genetic models defined with simple inheritance; e.g., the power was greater than 90% for the one major gene model, regardless of the population size and major-gene heritability. Lower levels of power were observed for the genetic models with complex inheritance (major genes and polygenes), low heritability, small population sizes and a large dispersion of favourable genes between the two parents; e.g., the power was less than 5% for the two major-gene model with a heritability value of 0.3 and population sizes of 100 individuals.
The JSA methodology was then applied to a previously studied sorghum data-set to investigate the genetic control of the putative drought resistance-trait osmotic adjustment in three crosses. The previous study concluded that there were two major genes segregating for osmotic adjustment in the three crosses. Application of the JSA method resulted in a change in the proposed genetic model. The presence of the two major genes was confirmed with the addition of an unspecified number of polygenes.
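Power figures such as those above come from Monte Carlo simulation: generate data under a known model, apply the test, and count rejections. The full JSA likelihood machinery is involved, so the sketch below uses a much simpler stand-in test — an F2/F1 variance-ratio statistic, whose critical value is itself estimated under the null — purely to illustrate the power-estimation logic. The generation structure, effect sizes and test are illustrative assumptions, not the JSA procedure.

```python
import random
import statistics

def var_ratio(rng, a, n=100, sigma_e=1.0):
    """Simulate one experiment and return var(F2)/var(F1).

    a : additive effect of a hypothetical major gene (a = 0 is the null).
    F1 individuals are genetically uniform; F2 genotypes segregate
    1/4 AA : 1/2 Aa : 1/4 aa with effects +a, 0, -a plus normal noise,
    so a segregating major gene inflates the F2 variance.
    """
    f1 = [rng.gauss(0.0, sigma_e) for _ in range(n)]
    f2 = []
    for _ in range(n):
        u = rng.random()
        g = a if u < 0.25 else (-a if u > 0.75 else 0.0)
        f2.append(g + rng.gauss(0.0, sigma_e))
    return statistics.variance(f2) / statistics.variance(f1)

def power(a, n=100, n_sim=400, alpha=0.05, seed=7):
    """Monte Carlo power of detecting the major gene.

    The critical value is the (1 - alpha) quantile of the statistic
    simulated under the null; power is the rejection rate under effect a.
    """
    rng = random.Random(seed)
    null = sorted(var_ratio(rng, 0.0, n) for _ in range(n_sim))
    crit = null[int((1.0 - alpha) * n_sim) - 1]
    hits = sum(var_ratio(rng, a, n) > crit for _ in range(n_sim))
    return hits / n_sim
```

As in the study, power rises with effect size (here, major-gene heritability in disguise) and with population size n.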
Abstract:
HLA-A*0201 transgenic, H-2D(b)/mouse beta2-microglobulin double-knockout mice were used to compare and optimize the immunogenic potential of 17 HIV-1-derived, HLA-A*0201-restricted epitopic peptides. A tyrosine substitution in position 1 of the epitopic peptides, which increases both their affinity for and their capacity to stabilize the HLA-A*0201 molecule, was introduced in a significant proportion of them, after verifying that such modifications enhance their immunogenicity with respect to their natural antigenicity. Based on these results, a 13-polyepitope construct was inserted in the pre-S2 segment of the hepatitis B middle glycoprotein and used for DNA immunization. Long-lasting CTL responses against most of the inserted epitopes could be elicited simultaneously in a single animal, with cross-recognition in several cases of their most common natural variants.
Abstract:
An inverse, current density mapping (CDM) method has been developed for the design of elliptical cross-section MRI magnets. The method provides a rapid prototyping system for unusual magnet designs, as it generates a 3D current density in response to a set of target field and geometric constraints. The emphasis of this work is on the investigation of new elliptical coil structures for clinical MRI magnets. The effect of the elliptical aspect ratio on magnet performance is investigated. Viable designs are generated for symmetric, asymmetric and open architecture elliptical magnets using the new method. Clinically relevant attributes such as reduced stray field and large homogeneous regions relative to total magnet length are included in the design process and investigated in detail. The preliminary magnet designs have several novel features.
Abstract:
In this work, a new method of optimization is successfully applied to the theoretical design of compact, actively shielded, clinical MRI magnets. The problem is formulated as a two-step process in which the desired current densities on multiple, co-axial surface layers are first calculated by solving Fredholm equations of the first kind. Non-linear optimization methods with inequality constraints are then invoked to fit practical magnet coils to the desired current densities. The current density approach allows rapid prototyping of unusual magnet designs. The emphasis of this work is on the optimal design of short, actively-shielded MRI magnets for whole-body imaging. Details of the hybrid numerical model are presented, and the model is used to investigate compact, symmetric, and asymmetric MRI magnets. Magnet designs are presented for actively-shielded, symmetric magnets of coil length 1.0 m, which is considerably shorter than currently available designs of comparable dsv size. Novel, actively-shielded, asymmetric magnet designs are also presented in which the beginning of a 50-cm dsv is positioned just 11 cm from the end of the coil structure, allowing much improved access to the patient and reduced patient claustrophobia. Magn Reson Med 45:331540, 2001. (C) 2001 Wiley-Liss, Inc.
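The first step of the two-step approach — computing a current distribution that reproduces a target field — can be miniaturised as a one-dimensional analogue: given target field values on the axis, find loop currents by ridge-regularised least squares. The geometry, the normalised units and the regularisation below are illustrative assumptions, not the paper's formulation (which solves Fredholm equations of the first kind over surface layers).

```python
def loop_field(z, z0, radius=1.0):
    """On-axis axial field of a circular current loop at z0 (normalised
    units: the mu0*I/2 prefactor is absorbed into the current)."""
    d = z - z0
    return radius ** 2 / (radius ** 2 + d ** 2) ** 1.5

def solve(M, v):
    """Solve M x = v by Gaussian elimination with partial pivoting."""
    n = len(v)
    M = [row[:] for row in M]
    v = v[:]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        v[k], v[p] = v[p], v[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n):
                M[i][j] -= f * M[k][j]
            v[i] -= f * v[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (v[i] - s) / M[i][i]
    return x

def design_currents(loop_pos, target_z, b_target, lam=1e-9):
    """Inverse step in miniature: loop currents whose superposed on-axis
    field matches b_target at target_z, via the regularised normal
    equations (A^T A + lam I) c = A^T b."""
    m, n = len(target_z), len(loop_pos)
    A = [[loop_field(z, z0) for z0 in loop_pos] for z in target_z]
    AtA = [[sum(A[k][i] * A[k][j] for k in range(m))
            + (lam if i == j else 0.0) for j in range(n)] for i in range(n)]
    Atb = [sum(A[k][i] * b_target[k] for k in range(m)) for i in range(n)]
    return solve(AtA, Atb)
```

A uniform target over a small central region (a toy "dsv") is then matched closely by five loops; the second step of the paper — fitting real coils under inequality constraints — is not attempted here.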
Abstract:
New designs for force-minimized compact high-field clinical MRI magnets are described. The design method is a modified simulated annealing (SA) procedure which includes Maxwell forces in the error function to be minimized. This permits an automated force reduction in the magnet designs while controlling the overall dimensions of the system. As SA optimization requires many iterations to achieve a final design, it is important that each iteration in the procedure is rapid. We have therefore developed a rapid force calculation algorithm. Novel designs for short 3- and 4-T clinical MRI systems are presented in which force reduction has been invoked. The final designs provide large homogeneous regions and reduced stray fields in remarkably short magnets. A shielded 4-T design that is approximately 30% shorter than current designs is presented. This novel magnet generates a full 50-cm diameter homogeneous region.
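The force-penalised annealing idea can be caricatured in a few lines: a composite error combining field inhomogeneity over a target region with a weighted force term, minimised by SA with a geometric cooling schedule. Everything in the sketch is a stand-in — the inverse-square "force" between neighbouring coils, the field model, the weights and the schedule are illustrative assumptions, not the paper's rapid Maxwell-force calculation.

```python
import math
import random

def axial_field(z, z0, r):
    """On-axis field of a current loop of radius r at z0 (normalised)."""
    return r * r / (r * r + (z - z0) ** 2) ** 1.5

def error(pos, w_force=0.05, targets=(-0.2, -0.1, 0.0, 0.1, 0.2)):
    """Composite error: field inhomogeneity over the target region plus
    a weighted toy force term (inverse-square attraction between
    neighbouring coils stands in for the real Maxwell-force integral)."""
    field = [sum(axial_field(z, p, 0.5) for p in pos) for z in targets]
    mean = sum(field) / len(field)
    inhom = sum((f - mean) ** 2 for f in field) / len(field)
    force = sum(1.0 / ((pos[i + 1] - pos[i]) ** 2 + 1e-6)
                for i in range(len(pos) - 1))
    return inhom + w_force * force

def anneal(pos, steps=4000, t0=1.0, seed=3):
    """Simulated annealing over coil positions: geometric cooling,
    Gaussian perturbations, uphill moves accepted with exp(-dE/T)."""
    rng = random.Random(seed)
    best = cur = list(pos)
    e_cur = e_best = error(cur)
    for k in range(steps):
        t = t0 * 0.999 ** k
        cand = [p + rng.gauss(0.0, 0.05) for p in cur]
        e_cand = error(cand)
        if e_cand < e_cur or rng.random() < math.exp(-(e_cand - e_cur) / t):
            cur, e_cur = cand, e_cand
            if e_cur < e_best:
                best, e_best = cand, e_cand
    return best, e_best
```

The tension the paper exploits is visible even here: the inhomogeneity term pulls coils together while the force term pushes them apart, and SA searches the trade-off without gradients.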
Abstract:
Agricultural ecosystems and their associated business and government systems are diverse and varied. They range from farms, to input supply businesses, to marketing and government policy systems, among others. These systems are dynamic and responsive to fluctuations in climate. Skill in climate prediction offers considerable opportunities to managers via its potential to realise system improvements (i.e. increased food production and profit and/or reduced risks). Realising these opportunities, however, is not straightforward as the forecasting skill is imperfect and approaches to applying the existing skill to management issues have not been developed and tested extensively. While there has been much written about impacts of climate variability, there has been relatively little done in relation to applying knowledge of climate predictions to modify actions ahead of likely impacts. However, a considerable body of effort in various parts of the world is now being focused on this issue of applying climate predictions to improve agricultural systems. In this paper, we outline the basis for climate prediction, with emphasis on the El Nino-Southern Oscillation phenomenon, and catalogue experiences at field, national and global scales in applying climate predictions to agriculture. These diverse experiences are synthesised to derive general lessons about approaches to applying climate prediction in agriculture. The case studies have been selected to represent a diversity of agricultural systems and scales of operation. They also represent the on-going activities of some of the key research and development groups in this field around the world. The case studies include applications at field/farm scale to dryland cropping systems in Australia, Zimbabwe, and Argentina. This spectrum covers resource-rich and resource-poor farming with motivations ranging from profit to food security. 
At national and global scales we consider possible applications of climate prediction in commodity forecasting (wheat in Australia) and examine the implications for global wheat trade and prices associated with the global consequences of climate prediction. In cataloguing these experiences we note some general lessons. Foremost is the value of an interdisciplinary systems approach in connecting disciplinary knowledge in a manner most suited to decision-makers. This approach often includes scenario analysis based on simulation with credible models as a key aspect of the learning process. Interaction among researchers, analysts and decision-makers is vital in the development of effective applications, as all of the players learn. Issues associated with the balance between information demand and supply, as well as appreciation of the awareness limitations of decision-makers, analysts, and scientists, are highlighted. It is argued that understanding and communicating decision risks is one of the keys to successful applications of climate prediction. We consider that advances of the future will be made by better connecting agricultural scientists and practitioners with the science of climate prediction. Professions involved in decision making must take a proactive role in the development of climate forecasts if the design and use of climate predictions are to reach their full potential. (C) 2001 Elsevier Science Ltd. All rights reserved.
Abstract:
The development of cropping systems simulation capabilities world-wide, combined with easy access to powerful computing, has resulted in a plethora of agricultural models and, consequently, model applications. Nonetheless, the scientific credibility of such applications and their relevance to farming practice is still being questioned. Our objective in this paper is to highlight some of the model applications from which benefits for farmers were or could be obtained via changed agricultural practice or policy. Changed on-farm practice due to the direct contribution of modelling, while keenly sought after, may in some cases be less achievable than a contribution via agricultural policies. This paper is intended to give some guidance for future model applications. It is not a comprehensive review of model applications, nor is it intended to discuss modelling in the context of social science or extension policy. Rather, we take snapshots around the globe to 'take stock' and to demonstrate that well-defined financial and environmental benefits can be obtained on-farm from the use of models. We highlight the importance of 'relevance' and hence the importance of true partnerships between all stakeholders (farmers, scientists, advisers) for the successful development and adoption of simulation approaches. Specifically, we address some key points that are essential for successful model applications, such as: (1) issues to be addressed must be neither trivial nor obvious; (2) a modelling approach must reduce complexity rather than proliferate choices in order to aid the decision-making process; and (3) the cropping systems must be sufficiently flexible to allow management interventions based on insights gained from models. The pros and cons of normative approaches (e.g. decision support software that can reach a wide audience quickly but is often poorly contextualized for any individual client) versus model applications within the context of an individual client's situation will also be discussed. We suggest that a tandem approach is necessary, whereby the latter is used in the early stages of model application for confidence building amongst client groups. This paper focuses on five specific regions that differ fundamentally in terms of environment and socio-economic structure and hence in their requirements for successful model applications. Specifically, we will give examples from Australia and South America (high climatic variability, large areas, low input, technologically advanced); Africa (high climatic variability, small areas, low input, subsistence agriculture); India (high climatic variability, small areas, medium-level inputs, technologically progressing); and Europe (relatively low climatic variability, small areas, high input, technologically advanced). The contrast between Australia and Europe will further demonstrate how successful model applications are strongly influenced by the policy framework within which producers operate. We suggest that this might eventually lead to better adoption of fully integrated systems approaches and result in the development of resilient farming systems that are in tune with current climatic conditions and are adaptable to biophysical and socioeconomic variability and change. (C) 2001 Elsevier Science Ltd. All rights reserved.
Abstract:
Recent years have seen the introduction of new and varied designs of activated sludge plants. With increasing needs for higher efficiencies and lower costs, the possibility of a plant that operates more effectively has created the need for tools that can be used to evaluate and compare designs at the design stage. One such tool is the operating space diagram. The aim of this paper is to present this tool and demonstrate its application and relevance to design using a simple case study. In the case study, use of the operating space diagram suggested changes in design that would improve the flexibility of the process. It was also useful for designing suitable control strategies.
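An operating space diagram is, at heart, a feasibility map over two operating variables: scan a grid, test each point against the plant's constraints, and plot the region where all are satisfied. The sketch below is generic; the variable names (sludge age, recycle ratio) and the three constraints are invented placeholders for illustration, not the case study's actual limits.

```python
def operating_space(x_range, y_range, constraints, n=50):
    """Map a 2-D operating space: return grid points satisfying every
    constraint.

    x_range, y_range : (lo, hi) bounds for the two operating variables
    constraints      : iterable of predicates f(x, y) -> bool
    """
    (x0, x1), (y0, y1) = x_range, y_range
    feasible = []
    for i in range(n):
        for j in range(n):
            x = x0 + (x1 - x0) * i / (n - 1)
            y = y0 + (y1 - y0) * j / (n - 1)
            if all(c(x, y) for c in constraints):
                feasible.append((x, y))
    return feasible

# Placeholder constraints: x = sludge age (d), y = recycle ratio.
constraints = [
    lambda x, y: x * (1.0 + y) > 8.0,  # enough solids retention for nitrification
    lambda x, y: x < 20.0,             # avoid excessive sludge age
    lambda x, y: 0.25 < y < 1.5,       # pumping limits on the recycle stream
]
region = operating_space((1.0, 30.0), (0.0, 2.0), constraints)
```

The "flexibility" argument in the abstract corresponds to the size of this feasible region: a design change that enlarges it leaves the operator more room to respond to disturbances.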
Abstract:
Two hazard risk assessment matrices for the ranking of occupational health risks are described. The qualitative matrix uses qualitative measures of probability and consequence to determine risk assessment codes for hazard-disease combinations. A walk-through survey of an underground metalliferous mine and concentrator is used to demonstrate how the qualitative matrix can be applied to determine priorities for the control of occupational health hazards. The semi-quantitative matrix uses attributable risk as a quantitative measure of probability and uses qualitative measures of consequence. A practical application of this matrix is the determination of occupational health priorities using existing epidemiological studies. Calculated attributable risks from epidemiological studies of hazard-disease combinations in mining and minerals processing are used as examples. These historic response data do not reflect the risks associated with current exposures. A method using current exposure data, known exposure-response relationships and the semi-quantitative matrix is proposed for more accurate and current risk rankings.
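A qualitative matrix of this kind reduces to a lookup from probability and consequence ratings to a risk assessment code, which can then rank hazard-disease combinations. The sketch below uses a generic 5x5 scheme with invented scale labels, cut-offs and hazard examples; the paper's actual matrices may differ.

```python
# Qualitative rating scales (illustrative assumptions, not the paper's).
PROBABILITY = {"rare": 1, "unlikely": 2, "possible": 3,
               "likely": 4, "almost certain": 5}
CONSEQUENCE = {"negligible": 1, "minor": 2, "moderate": 3,
               "major": 4, "catastrophic": 5}

def risk_code(probability, consequence):
    """Map qualitative probability/consequence ratings to a risk
    assessment code: 1 (act immediately) .. 4 (manage by routine).
    The score cut-offs are invented for illustration."""
    score = PROBABILITY[probability] * CONSEQUENCE[consequence]
    if score >= 15:
        return 1
    if score >= 8:
        return 2
    if score >= 4:
        return 3
    return 4

def rank_hazards(hazards):
    """Order (hazard, probability, consequence) tuples by priority
    (lowest risk code first)."""
    return sorted(hazards, key=lambda h: risk_code(h[1], h[2]))

# Hypothetical hazard-disease combinations for a walk-through survey.
hazards = [
    ("silica dust - silicosis", "likely", "major"),
    ("noise - hearing loss", "almost certain", "moderate"),
    ("vibration - HAVS", "possible", "minor"),
]
ranking = rank_hazards(hazards)
```

The semi-quantitative variant described in the abstract would replace the PROBABILITY lookup with attributable risks taken from epidemiological studies, leaving the rest of the ranking logic unchanged.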