180 results for indicators' improvement method
Abstract:
The emphasis of this work is on the optimal design of MRI magnets incorporating both superconducting coils and ferromagnetic rings. The work is directed at the automated design of MRI magnet systems containing superconducting wire and both 'cold' and 'warm' iron. Details of the optimization procedure are given, and the results show combined superconducting and iron-material MRI magnets with excellent field characteristics. Strong, homogeneous central magnetic fields are produced with little stray or external field leakage. The field calculations are performed using a semi-analytical method for both current-coil and iron-material sources. Design examples for symmetric, open and asymmetric clinical MRI magnets containing both superconducting coils and ferromagnetic material are presented.
Abstract:
Numerical modeling of the eddy currents induced in the human body by the pulsed field gradients in MRI presents a difficult computational problem. It requires an efficient and accurate computational method for high spatial resolution analyses at relatively low input frequencies. In this article, a new technique is described which allows the finite difference time domain (FDTD) method to be applied efficiently over a very large frequency range, including low frequencies, which is not possible with conventional FDTD-based methods. A method of implementing streamline gradients in FDTD is presented, along with comparative analyses showing that correct source injection in the FDTD simulation plays a crucial role in obtaining accurate solutions. In particular, making use of the derivative of the input source waveform is shown to provide distinct accuracy benefits over direct source injection. In the method, no alterations to the properties of either the source or the transmission media are required. The method is essentially frequency independent, and the source injection method has been verified against examples with analytical solutions. Results are presented showing the spatial distribution of gradient-induced electric fields and eddy currents in a complete body model.
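The source-injection idea can be illustrated with a very small one-dimensional FDTD sketch. Everything below (grid size, pulse shape, source position) is an assumption for illustration only; this is not the authors' streamline-gradient or whole-body implementation, it merely shows a derivative-of-waveform "soft" source inside an otherwise standard Yee update. By linearity, time-integrating the resulting fields recovers the response to the original waveform.

```python
import numpy as np

# Minimal 1-D FDTD (Yee) sketch in free space illustrating soft injection of
# the *time derivative* of the input waveform rather than the waveform itself.
# All numerical choices here are arbitrary assumptions for illustration.

eps0, mu0, c0 = 8.854e-12, 4e-7 * np.pi, 3e8
nz, dz = 400, 1e-3                      # number of cells and cell size (m)
dt = dz / (2 * c0)                      # time step within the Courant limit
nt = 1500
src = nz // 2                           # source cell index

ez = np.zeros(nz)                       # electric field
hy = np.zeros(nz)                       # magnetic field

t0, spread = 80 * dt, 20 * dt           # Gaussian input waveform (assumed)
waveform = lambda t: np.exp(-0.5 * ((t - t0) / spread) ** 2)
d_waveform = lambda t: -(t - t0) / spread**2 * waveform(t)   # analytic derivative

for n in range(nt):
    t = n * dt
    hy[:-1] += dt / (mu0 * dz) * (ez[1:] - ez[:-1])          # update H from E
    ez[1:-1] += dt / (eps0 * dz) * (hy[1:-1] - hy[:-2])      # update E from H
    ez[src] += d_waveform(t) * dt                            # derivative soft source

print(ez.max())   # sample output; in practice the fields would be post-processed
```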
Abstract:
In this paper the diffusion and flow of carbon tetrachloride, benzene and n-hexane through a commercial activated carbon are studied by a differential permeation method. The pressure range extends from very low pressures to pressures at which significant capillary condensation occurs. Helium, as a non-adsorbing gas, is used to determine the characteristics of the porous medium. For adsorbing gases and vapors, the motion of adsorbed molecules in small pores gives rise to a sharp increase in permeability at very low pressures. The interplay between a decrease in permeability due to the saturation of small pores with adsorbed molecules and an increase due to viscous flow in larger pores with pressure can lead to a minimum in the plot of total permeability versus pressure. This phenomenon is observed for n-hexane at 30 °C. At relative pressures of 0.1-0.8, where gaseous viscous flow dominates, the permeability is a linear function of pressure. Since activated carbon has a wide pore size distribution, the mobility mechanism of the adsorbed molecules differs from pore to pore. In very small pores, where adsorbate molecules fill the pore, the permeability decreases with increasing pressure, while in intermediate pores the permeability increases with pressure due to the increasing build-up of layers of adsorbed molecules. For even larger pores, transport is mostly due to diffusion and flow of free molecules, which gives rise to a permeability that is linear in pressure. (C) 2002 Elsevier Science Ltd. All rights reserved.
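A hedged sketch of why the viscous contribution makes the apparent permeability linear in pressure: for Poiseuille flow of an ideal gas in a cylindrical pore of radius r the viscous term scales with the mean pressure. Here N is molar flux, B the apparent permeability, R_g the gas constant, T temperature, μ the gas viscosity, ε porosity and τ tortuosity; the surface-flow and Knudsen terms are schematic placeholders, not the paper's full pore-size-distribution treatment.

```latex
% Schematic decomposition only; not the paper's detailed model.
\begin{align}
  N &= -\frac{B(\bar{p})}{R_g T}\,\frac{\mathrm{d}p}{\mathrm{d}x}, \\
  B(\bar{p}) &\approx
    \underbrace{B_{\mathrm{s}}(\bar{p})}_{\text{adsorbed-phase (surface) flow}}
  + \underbrace{B_{\mathrm{K}}}_{\text{Knudsen diffusion}}
  + \underbrace{\frac{\varepsilon\, r^{2}\,\bar{p}}{8\,\mu\,\tau}}_{\text{viscous (Poiseuille) flow}}
\end{align}
```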
Abstract:
Indicators are valuable tools used to measure progress towards a desired health outcome. Increased awareness of the public health burden due to injury has lead to a concomitant interest in monitoring the impact of national initiatives that aim to reduce the size of the burden. Several injury indicators have now been proposed. This study examines the ability of each of the suggested indicators to reflect the nature and extent of the burden of non-fatal injury. A criterion validity, population-based, prospective cohort study was conducted in Brisbane, a sub-tropical Metropolitan City on the eastern seaboard of Australia, over a 12-month period between 1 January and 31 December 1998. Neither the presence of a long bone fracture nor the need for hospitalisation for 4 or more days were sensitive or specific indicators for 'serious' or major injury as defined by the 'Gold Standard' Injury Severity Score (ISS). Subsequent analysis, using other public health outcome measures demonstrated that the major component of the illness burden of injury was in fact due to 'minor' not serious injury. However, the suggested indicators demonstrated low sensitivity and specificity for these outcomes as well. The results of the study support the need to include at least all hospitalisations in any population-based measure of injury and not attempt to simplify the indicator to a more convenient measure aimed at identifying just those cases of,serious' injury.
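For concreteness, the criterion-validity comparison reduces to a standard sensitivity/specificity calculation of each candidate indicator against the ISS-based definition of serious injury. The sketch below uses invented counts, not the Brisbane study data, and the ISS > 15 threshold is stated here only as the conventional definition of major trauma.

```python
# Hedged illustration of the criterion-validity calculation: sensitivity and
# specificity of a candidate indicator (e.g. presence of a long bone fracture)
# against a gold standard of serious injury (conventionally ISS > 15).
# The counts below are hypothetical.

def sensitivity_specificity(tp, fp, fn, tn):
    """tp/fp/fn/tn: indicator vs. gold-standard cross-tabulation counts."""
    sensitivity = tp / (tp + fn)   # proportion of serious injuries flagged by the indicator
    specificity = tn / (tn + fp)   # proportion of non-serious cases not flagged
    return sensitivity, specificity

# hypothetical 2x2 table
print(sensitivity_specificity(tp=40, fp=300, fn=60, tn=2600))
```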
Abstract:
Landscape metrics are widely applied in landscape ecology to quantify landscape structure. However, many are poorly tested and require rigorous validation if they are to serve as reliable indicators of habitat loss and fragmentation, such as Montreal Process Indicator 1.1e. We apply landscape ecology theory, supported by exploratory and confirmatory statistical techniques, to empirically test landscape metrics for reporting Montreal Process Indicator 1.1e in continuous dry eucalypt forests of sub-tropical Queensland, Australia. Target biota examined included: the Yellow-bellied Glider (Petaurus australis); the diversity of nectar- and sap-feeding glider species, including P. australis, the Sugar Glider P. breviceps, the Squirrel Glider P. norfolcensis and the Feathertail Glider Acrobates pygmaeus; six diurnal forest bird species; total diurnal bird species diversity; and the density of nectar-feeding diurnal bird species. Two scales of influence were considered: the stand scale (2 ha) and a series of radial landscape extents (500 m-2 km; 78-1250 ha) surrounding each fauna transect. For all biota, stand-scale structural and compositional attributes were found to be more influential than landscape metrics. For the Yellow-bellied Glider, the proportion of trace habitats with a residual element of old spotted-gum/ironbark eucalypt trees was a significant landscape metric at the 2 km landscape extent. This is a measure of habitat loss rather than habitat fragmentation. For the diversity of nectar- and sap-feeding glider species, the proportion of trace habitats with a high coefficient of variation in patch size at the 750 m extent was a significant landscape metric. None of the landscape metrics tested was important for diurnal forest birds. We conclude that no single landscape metric adequately captures the response of the region's forest biota per se. This poses a major challenge to regional reporting of Montreal Process Indicator 1.1e, fragmentation of forest types.
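As a point of clarification, the 'coefficient of variation in patch size' metric referred to above is simply the dispersion of patch areas relative to their mean (one common convention uses the population standard deviation). The patch areas below are invented purely to illustrate the calculation, not taken from the study.

```python
# Toy illustration of the coefficient of variation (CV) of patch size within a
# landscape extent; values are hypothetical.
import statistics

patch_areas_ha = [3.2, 11.5, 0.8, 27.4, 6.1]   # hypothetical patch areas (ha)
cv = statistics.pstdev(patch_areas_ha) / statistics.mean(patch_areas_ha)
print(f"coefficient of variation of patch size: {cv:.2f}")
```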
Abstract:
Many large-scale stochastic systems, such as telecommunications networks, can be modelled using a continuous-time Markov chain. However, it is frequently the case that a satisfactory analysis of their time-dependent, or even equilibrium, behaviour is impossible. In this paper, we propose a new method of analysing Markovian models, whereby the existing transition structure is replaced by a more amenable one. Using rates of transition given by the equilibrium expected rates of the corresponding transitions of the original chain, we are able to approximate its behaviour. We present two formulations of the idea of expected rates. The first provides a method for analysing time-dependent behaviour, while the second provides a highly accurate means of analysing equilibrium behaviour. We illustrate our approach with reference to a variety of models, giving particular attention to queueing and loss networks. (C) 2003 Elsevier Ltd. All rights reserved.
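A minimal sketch of the 'equilibrium expected rate' quantity referred to above, computed for a toy continuous-time Markov chain (an M/M/1/3 queue chosen arbitrarily for illustration). How these rates are then used to construct the simplified replacement chain is the subject of the paper and is not reproduced here.

```python
import numpy as np

# Toy CTMC: M/M/1/3 queue with assumed arrival rate 1.0 and service rate 1.5.
# We compute the stationary distribution and the equilibrium expected rate of
# the service-completion transitions.

lam, mu, N = 1.0, 1.5, 3
Q = np.zeros((N + 1, N + 1))            # generator matrix
for i in range(N + 1):
    if i < N:
        Q[i, i + 1] = lam               # arrival transitions
    if i > 0:
        Q[i, i - 1] = mu                # service transitions
    Q[i, i] = -Q[i].sum()

# stationary distribution: solve pi Q = 0 subject to sum(pi) = 1
A = np.vstack([Q.T, np.ones(N + 1)])
b = np.append(np.zeros(N + 1), 1.0)
pi = np.linalg.lstsq(A, b, rcond=None)[0]

# equilibrium expected rate of service transitions: sum_i pi_i * q_i(service)
expected_service_rate = sum(pi[i] * mu for i in range(1, N + 1))
print(pi, expected_service_rate)
```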
Abstract:
Caustis blakei is an attractive cut foliage plant harvested from the wild in Australia and marketed under the name of koala fern. Previous attempts to propagate large numbers of this plant have been unsuccessful. The effect of four light irradiances on organogenesis from compact and friable callus of C. blakei was studied for 21 wk. Both callus types produced numerous primordial shoots but many failed to develop into green plantlets. However, significantly more primordial shoots and green plantlets developed on the friable callus than on the compact callus, and significantly more green plantlets were regenerated under the higher photon irradiances of 200 and 300 µmol m⁻² s⁻¹ than under the lower irradiances of 100 and 150 µmol m⁻² s⁻¹. The compact callus produced its maximum number of green plantlets early in the experiment (after 9 wk), while the friable callus continued to produce primordial shoots and green plantlets throughout the period of the experiment, and reached its maximum production of green plantlets at 21 wk under the irradiance of 300 µmol m⁻² s⁻¹. Organogenesis from friable callus under high irradiance (300 µmol m⁻² s⁻¹) offers an efficient propagation method for C. blakei.
Abstract:
A supersweet sweet corn hybrid, Pacific H5, was planted at weekly intervals (P-1 to P-5) in spring in south-eastern Queensland. All plantings were harvested at the same time, resulting in immature seed for the last planting (P-5). The seed was handled by three methods: manual harvest and processing (M-1), manual harvest and mechanical processing (M-2), and mechanical harvest and processing (M-3), and later graded into three sizes (small, medium and large). After eight months' storage at 12-14 °C, seed was maintained at 30 °C with bimonthly monitoring of germination for fourteen months and assessment of seed damage at the end of this period. Seed quality was greatest for M-1 and was reduced by mechanical processing but not by mechanical harvesting. Large and medium seed had higher germination due to greater storage reserves, but also suffered more seed damage during mechanical processing. Immature seed from the premature harvest (P-5) had poor quality, especially when processed mechanically, reinforcing the need for harvested seed to be physiologically mature.
Abstract:
Trials conducted in Queensland, Australia between 1997 and 2002 demonstrated that fungicides belonging to the triazole group were the most effective in minimising the severity of infection of sorghum by Claviceps africana, the causal agent of sorghum ergot. Triadimenol (as Bayfidan 250EC) at 0.125 kg a.i./ha was the most effective fungicide. A combination of the systemic activated resistance compound acibenzolar-S-methyl (as Bion 50WG) at 0.05 kg a.i./ha and mancozeb (as Penncozeb 750DF) at 1.5 kg a.i./ha has the potential to provide protection against the pathogen should triazole-resistant isolates be detected. Timing and method of fungicide application are important. Our results suggest that the triazole fungicides have no systemic activity in sorghum panicles, necessitating multiple applications from first anthesis to the end of flowering, whereas acibenzolar-S-methyl is most effective when applied 4 days before flowering. The flat fan nozzles tested in the trials provided higher levels of protection against C. africana and greater droplet deposition on panicles than the tested hollow cone nozzles. Application of triadimenol by fixed-wing aircraft was as efficacious as application through a tractor-mounted boom spray.
Abstract:
A high definition, finite difference time domain (HD-FDTD) method is presented in this paper. This new method allows the FDTD method to be efficiently applied over a very large frequency range including low frequencies, which are problematic for conventional FDTD methods. In the method, no alterations to the properties of either the source or the transmission media are required. The method is essentially frequency independent and has been verified against analytical solutions within the frequency range 50 Hz-1 GHz. As an example of the lower frequency range, the method has been applied to the problem of induced eddy currents in the human body resulting from the pulsed magnetic field gradients of an MRI system. The new method only requires approximately 0.3% of the source period to obtain an accurate solution. (C) 2003 Elsevier Science Inc. All rights reserved.
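As a rough, purely illustrative reading of the '0.3% of the source period' figure, assuming a 1 kHz gradient switching frequency (the abstract does not state one):

```latex
% Illustrative arithmetic only; the 1 kHz source frequency is an assumption.
T = \frac{1}{f} = \frac{1}{1\ \mathrm{kHz}} = 1\ \mathrm{ms},
\qquad
0.3\% \times T = 0.003 \times 1\ \mathrm{ms} = 3\ \mu\mathrm{s}
\ \text{of simulated time per solution.}
```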
Abstract:
Background: Congestive heart failure (CHF) is an increasingly prevalent poor-prognosis condition for which effective interventions are available. It is therefore important to determine the extent to which patients with CHF receive appropriate care in Australian hospitals and identify ways for improving suboptimal care, if it exists. Aim: To evaluate the quality of in-hospital acute care of patients with CHF using explicit quality indicators based on published guidelines. Methods: A retrospective case note review was performed, involving 216 patients admitted to three teaching hospitals in Brisbane, Queensland, Australia, between October 2000 and April 2001. Outcome measures were process-of-care quality indicators calculated as proportions of all, or strongly eligible (ideal), patients who received specific interventions. Results: Assessment of underlying causes and acute precipitating factors was undertaken in 86% and 76% of patients, respectively, and objective evaluation of left ventricular function was performed in 62% of patients. Prophylaxis for deep venous thrombosis (DVT) was used in only 29% of ideal patients. Proportions of ideal patients receiving pharmacological treatments at discharge were: (i) angiotensin-converting enzyme inhibitors (ACEi) (82%), (ii) target doses of ACEi (61%), (iii) alternative vasodilators in patients ineligible for ACEi (20%), (iv) beta-blockers (40%) and (v) warfarin (46%). Conclusions: Opportunities exist for improving quality of in-hospital care of patients with CHF, particularly for optimal prescribing of: (i) DVT prophylaxis, (ii) ACEi, (iii) second-line vasodilators, (iv) beta-blockers and (v) warfarin. More research is needed to identify methods for improving quality of in-hospital care.
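Each of the process-of-care indicators above is a simple proportion: eligible ('ideal') patients who received the intervention divided by all eligible patients. The sketch below uses invented records; the field names are assumptions, not the study's data dictionary.

```python
# Hedged sketch of a process-of-care quality indicator computed as the
# proportion of eligible ('ideal') patients who received an intervention.
# Records and field names are hypothetical.

records = [
    {"eligible_acei": True,  "received_acei": True},
    {"eligible_acei": True,  "received_acei": False},
    {"eligible_acei": False, "received_acei": False},
    {"eligible_acei": True,  "received_acei": True},
]

def quality_indicator(records, eligible_key, received_key):
    eligible = [r for r in records if r[eligible_key]]
    if not eligible:
        return None                # indicator undefined when no patients are eligible
    return sum(r[received_key] for r in eligible) / len(eligible)

print(quality_indicator(records, "eligible_acei", "received_acei"))  # 2/3 ~= 0.67
```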
Abstract:
Blasting has been the most frequently used method for rock breakage since black powder was first used to fragment rocks more than two hundred years ago. This paper is an attempt to reassess standard design techniques used in blasting by providing an alternative approach to blast design. The new approach has been termed asymmetric blasting. Based on providing real-time rock recognition through the capacity of measurement while drilling (MWD) techniques, asymmetric blasting is an approach to dealing with rock properties as they occur in nature, i.e., randomly and asymmetrically spatially distributed. It is well accepted that the performance of basic mining operations, such as excavation and crushing, relies on a broken rock mass which has been pre-conditioned by the blast. By pre-conditioned we mean well fragmented, sufficiently loose and with an adequate muckpile profile. These muckpile characteristics affect loading and hauling [1]. The influence of blasting does not end there. Under the Mine to Mill paradigm, blasting has a significant leverage on downstream operations such as crushing and milling. There is a body of evidence that blasting affects mineral liberation [2]. Thus, the importance of blasting has increased from simply fragmenting and loosening the rock mass to a broader role that encompasses many aspects of mining and affects the cost of the end product. A new approach is proposed in this paper which facilitates this trend: 'to treat non-homogeneous media (rock mass) in a non-homogeneous manner (an asymmetrical pattern) in order to achieve an optimal result (in terms of muckpile size distribution).' It is postulated that there are no logical reasons (besides the current lack of means to infer rock mass properties in the blind zones of the bench, and onsite precedents) for drilling a regular blast pattern over a rock mass that is inherently heterogeneous. Real and theoretical examples of such a method are presented.
Abstract:
Most finite element packages use the Newmark algorithm for time integration of structural dynamics. Various algorithms have been proposed to better optimize the high-frequency dissipation of this algorithm. Hulbert and Chung proposed both implicit and explicit forms of the generalized-alpha method. The algorithms optimize high-frequency dissipation effectively, and despite recent work on algorithms that possess momentum-conserving/energy-dissipative properties in a non-linear context, the generalized-alpha method remains an efficient way to solve many problems, especially with adaptive timestep control. However, the implicit and explicit algorithms use incompatible parameter sets and cannot be used together in a spatial partition, whereas this can be done for the Newmark algorithm, as Hughes and Liu demonstrated, and for the HHT-alpha algorithm developed from it. The present paper shows that the explicit generalized-alpha method can be rewritten so that it becomes compatible with the implicit form. All four algorithmic parameters can be matched between the explicit and implicit forms. An element interface between implicit and explicit partitions can then be used, analogous to that devised by Hughes and Liu to extend the Newmark method. The stability of the explicit/implicit algorithm is examined in a linear context and found to exceed that of the explicit partition. The element partition is significantly less dissipative of intermediate frequencies than one using the HHT-alpha method. The explicit algorithm can also be rewritten so that the discrete equation of motion evaluates forces from displacements and velocities found at the predicted mid-point of a cycle. Copyright (C) 2003 John Wiley & Sons, Ltd.
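For reference, here is a minimal sketch of the implicit generalized-alpha update for a linear single-degree-of-freedom system, parameterised by a user-specified spectral radius in the usual Chung-Hulbert fashion. This is only the scalar implicit scheme, not the paper's compatible explicit/implicit element partition, and the example system and load are arbitrary assumptions.

```python
# Hedged sketch: implicit generalized-alpha time integration for a linear
# SDOF system  m*a + c*v + k*d = f(t), with algorithmic constants derived
# from the spectral radius rho_inf.

def generalized_alpha(m, c, k, f, d0, v0, dt, nsteps, rho_inf=0.8):
    a_m = (2.0 * rho_inf - 1.0) / (rho_inf + 1.0)   # alpha_m
    a_f = rho_inf / (rho_inf + 1.0)                 # alpha_f
    gamma = 0.5 - a_m + a_f
    beta = 0.25 * (1.0 - a_m + a_f) ** 2

    d, v = d0, v0
    a = (f(0.0) - c * v0 - k * d0) / m              # consistent initial acceleration
    history = [(0.0, d, v, a)]

    for n in range(nsteps):
        t_n, t_np1 = n * dt, (n + 1) * dt
        t_mid = (1.0 - a_f) * t_np1 + a_f * t_n     # load evaluated at t_{n+1-alpha_f}

        # effective scalar coefficient multiplying the new acceleration
        lhs = (m * (1.0 - a_m) + c * (1.0 - a_f) * gamma * dt
               + k * (1.0 - a_f) * beta * dt * dt)
        # Newmark predictors and alpha-weighted known terms
        d_pred = d + dt * v + dt * dt * (0.5 - beta) * a
        v_pred = v + dt * (1.0 - gamma) * a
        rhs = (f(t_mid) - m * a_m * a
               - c * (a_f * v + (1.0 - a_f) * v_pred)
               - k * (a_f * d + (1.0 - a_f) * d_pred))

        a_new = rhs / lhs
        d = d_pred + dt * dt * beta * a_new
        v = v_pred + dt * gamma * a_new
        a = a_new
        history.append((t_np1, d, v, a))
    return history

# usage: lightly damped oscillator under a step load (all values assumed)
hist = generalized_alpha(m=1.0, c=0.1, k=4.0, f=lambda t: 1.0,
                         d0=0.0, v0=0.0, dt=0.05, nsteps=200)
print(hist[-1])
```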
Abstract:
This communication describes an electromagnetic model of a radial line planar antenna consisting of a radial guide with one central probe and many peripheral probes arranged in concentric circles, feeding an array of antenna elements such as patches or wire curls. The model takes into account interactions between the coupling probes while assuming isolation of the radiating elements. Based on this model, computer programs are developed to determine the equivalent circuit parameters of the feed network and the radiation pattern of the radial line planar antenna. Comparisons are made between the present model and the two-probe model developed earlier by other researchers.