958 results for index method
Abstract:
A new thermodynamic approach has been developed in this paper to analyze adsorption in slitlike pores. The equilibrium is described by two thermodynamic conditions: the Helmholtz free energy must be minimal, and the grand potential functional at that minimum must be negative. This approach has led to local isotherms that describe adsorption in the form of a single layer or two layers near the pore walls. In narrow pores, local isotherms have one step that can be either very sharp but continuous or discontinuous and bench-like over a definite range of pore width. The latter reflects a so-called 0 → 1 monolayer transition. In relatively wide pores, local isotherms have two steps, of which the first corresponds to the appearance of two layers near the pore walls, while the second corresponds to the filling of the space between these layers. All features of the local isotherms are in agreement with results obtained from density functional theory and Monte Carlo simulations. The approach is used for determining pore size distributions of carbon materials. We illustrate this with benzene adsorption data on activated carbon at 20, 50, and 80 °C, argon adsorption on activated carbon Norit ROX at 87.3 K, and nitrogen adsorption on activated carbon Norit R1 at 77.3 K.
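As a rough formal sketch of the two equilibrium conditions stated above (standard density-functional notation assumed here; the symbols are not taken from the paper): writing F[\rho] for the Helmholtz free energy functional of the adsorbate density \rho in the pore and \mu for the bulk chemical potential, equilibrium and stability of the adsorbed configuration can be written as

\[
\frac{\delta F[\rho]}{\delta \rho(\mathbf{r})} = \mu ,
\qquad
\Omega = F[\rho] - \mu \int \rho(\mathbf{r})\, \mathrm{d}\mathbf{r} < 0 ,
\]

where the first condition locates the minimum of the free energy and the second states that the grand potential at that minimum is negative, i.e. the filled (adsorbed) configuration is favoured over the empty pore.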
Abstract:
Aims: To characterise chronic lateral epicondylalgia using the McGill Pain Questionnaire, Visual Analog Scales for pain and function, and Quantitative Sensory Tests; and to examine the relationship between these tests in a population with chronic lateral epicondylalgia. Method: Fifty-six patients (29 female, 27 male) diagnosed with unilateral lateral epicondylalgia of 18.7 months' mean duration (range 1-300), with a mean age of 50.7 years (range 27-73), participated in this study. Each participant underwent assessment with the McGill Pain Questionnaire (MPQ), Visual Analog Scales (VAS) for pain and function, and Quantitative Sensory Tests (QST) including thermal and pressure pain thresholds, pain-free grip strength, and neuromeningeal tissue testing via the upper limb tension test 2b (ULTT 2b). Results: Moderate correlation (r = .338-.514, p = .000-.013) was found between all indices of the MPQ and the VAS for pain experienced in the previous 24 hours and week. Thermal pain threshold was found to be significantly higher in males. A significant poor to moderate correlation was found between the Pain Rating Index (PRI) in the sensory category of the MPQ and ULTT 2b scores (r = .353, p = .038). There was no other significant correlation between MPQ and QST data. Pain-free grip strength was poorly yet significantly correlated with duration of pathology (r = .318, p = .038). Conclusion: The findings of this study are in agreement with others (Melzack and Katz, 1994) regarding the multidimensional nature of pain in a condition conventionally conceived as a musculoskeletal pain state. The findings also suggest that use of only one pain measurement tool is unlikely to provide a thorough clinical picture of the pain experienced with chronic lateral epicondylalgia.
Abstract:
Objective: To determine item, subscale and total score agreement on the Frenchay Activities Index (FAI) between stroke patients and proxies six months after discharge from rehabilitation. Design: Prospective study design. Setting/subjects: Fifty patient-proxy pairs, interviewed separately, in the patient's residence. Main outcome measures: Modified FAI using 13 items. Individual FAI items, subscales and total score agreement as measured by weighted kappa and intraclass correlation coefficients (ICC). Results: Excellent agreement was found for the total FAI (ICC 0.87, 95% confidence interval (CI) 0.78-0.93) and for the domestic (ICC 0.85, 95% CI 0.73-0.91) and outdoor (ICC 0.87, 95% CI 0.78-0.95) subscales, with moderate agreement for the work/leisure subscale (ICC 0.63, 95% CI 0.34-0.78). For the individual FAI items, good, moderate, fair and poor agreement was found for five, three, four and one item, respectively. The best agreement was for objective items of preparing meals, washing-up, washing clothes, shopping and driving. The poorest agreement was for participation in hobbies, social outings and heavy housework. Scoring biases associated with patient or proxy demographic characteristics were found: female proxies, and those who were spouses, scored patients lower on domestic activities; male patients, and those who were younger, scored themselves higher on outdoor activities; and higher patient FIM scores were positively correlated with higher FAI scores. Conclusions: While total and subscale agreement on the FAI was high, individual item agreement varied. Proxy scores should be used with caution due to bias.
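As an illustration of the agreement statistic reported above, the sketch below computes a two-way, absolute-agreement, single-measure intraclass correlation coefficient (often written ICC(2,1)) for patient-proxy score pairs. The data and variable names are hypothetical, and the exact ICC variant used in the study is not specified here.

    import numpy as np

    def icc_2_1(scores):
        """Two-way random-effects, absolute-agreement, single-measure ICC.
        scores: (n_subjects, k_raters) array; e.g. column 0 = patient, column 1 = proxy."""
        n, k = scores.shape
        grand = scores.mean()
        ss_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum()   # between-subjects
        ss_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum()   # between-raters
        ss_err = ((scores - grand) ** 2).sum() - ss_rows - ss_cols
        ms_rows = ss_rows / (n - 1)
        ms_cols = ss_cols / (k - 1)
        ms_err = ss_err / ((n - 1) * (k - 1))
        return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

    # Hypothetical patient/proxy FAI totals for five pairs (illustrative only)
    pairs = np.array([[30, 28], [22, 25], [35, 34], [18, 15], [27, 27]], dtype=float)
    print(round(icc_2_1(pairs), 2))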
Abstract:
Many large-scale stochastic systems, such as telecommunications networks, can be modelled using a continuous-time Markov chain. However, it is frequently the case that a satisfactory analysis of their time-dependent, or even equilibrium, behaviour is impossible. In this paper, we propose a new method of analyzing Markovian models, whereby the existing transition structure is replaced by a more amenable one. Using rates of transition given by the equilibrium expected rates of the corresponding transitions of the original chain, we are able to approximate its behaviour. We present two formulations of the idea of expected rates. The first provides a method for analysing time-dependent behaviour, while the second provides a highly accurate means of analysing equilibrium behaviour. We shall illustrate our approach with reference to a variety of models, giving particular attention to queueing and loss networks. (C) 2003 Elsevier Ltd. All rights reserved.
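A rough illustration of the ingredients involved (not the authors' construction): for a small continuous-time Markov chain with generator Q, the sketch below computes the equilibrium distribution and the equilibrium expected rate of a chosen class of transitions, the kind of quantity used to build a replacement transition structure. The three-state generator is an invented example (an M/M/1/2 queue with arrival rate 1 and service rate 2).

    import numpy as np

    def stationary_distribution(Q):
        """Solve pi Q = 0 with sum(pi) = 1 for an irreducible generator Q."""
        n = Q.shape[0]
        A = np.vstack([Q.T, np.ones(n)])      # append the normalisation constraint
        b = np.zeros(n + 1)
        b[-1] = 1.0
        pi, *_ = np.linalg.lstsq(A, b, rcond=None)
        return pi

    # Hypothetical example: M/M/1/2 queue, states 0..2, arrival rate 1, service rate 2
    Q = np.array([[-1.0,  1.0,  0.0],
                  [ 2.0, -3.0,  1.0],
                  [ 0.0,  2.0, -2.0]])
    pi = stationary_distribution(Q)

    # Equilibrium expected rate of the "arrival" transitions (state i -> i + 1)
    arrival_rate = sum(pi[i] * Q[i, i + 1] for i in range(len(pi) - 1))
    print(pi.round(3), round(arrival_rate, 3))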
Abstract:
A supersweet sweet corn hybrid, Pacific H5, was planted at weekly intervals (P-1 to P-5) in spring in South-Eastern Queensland. All plantings were harvested at the same time, resulting in immature seed for the last planting (P-5). The seed was handled by three methods: manual harvest and processing (M-1), manual harvest and mechanical processing (M-2), and mechanical harvest and processing (M-3), and later graded into three sizes (small, medium and large). After eight months' storage at 12-14 °C, seed was maintained at 30 °C with bimonthly monitoring of germination for fourteen months, and seed damage was assessed at the end of this period. Seed quality was greatest for M-1 and was reduced by mechanical processing but not by mechanical harvesting. Large and medium seed had higher germination due to greater storage reserves, but also sustained more seed damage during mechanical processing. Immature seed from premature harvest (P-5) had poor quality, especially when processed mechanically, reinforcing the need for harvested seed to be physiologically mature.
Abstract:
Trials conducted in Queensland, Australia between 1997 and 2002 demonstrated that fungicides belonging to the triazole group were the most effective in minimising the severity of infection of sorghum by Claviceps africana, the causal agent of sorghum ergot. Triadimenol (as Bayfidan 250EC) at 0.125 kg a.i./ha was the most effective fungicide. A combination of the systemic activated resistance compound acibenzolar-S-methyl (as Bion 50WG) at 0.05 kg a.i./ha and mancozeb (as Penncozeb 750DF) at 1.5 kg a.i./ha has the potential to provide protection against the pathogen, should triazole-resistant isolates be detected. Timing and method of fungicide application are important. Our results suggest that the triazole fungicides have no systemic activity in sorghum panicles, necessitating multiple applications from first anthesis to the end of flowering, whereas acibenzolar-S-methyl is most effective when applied 4 days before flowering. The flat fan nozzles tested in the trials provided higher levels of protection against C. africana and greater droplet deposition on panicles than the tested hollow cone nozzles. Application of triadimenol by a fixed-wing aircraft was as efficacious as application through a tractor-mounted boom spray.
Abstract:
A high-definition finite-difference time-domain (HD-FDTD) method is presented in this paper. This new method allows the FDTD method to be efficiently applied over a very large frequency range, including low frequencies, which are problematic for conventional FDTD methods. In the method, no alterations to the properties of either the source or the transmission media are required. The method is essentially frequency independent and has been verified against analytical solutions within the frequency range 50 Hz-1 GHz. As an example of the lower frequency range, the method has been applied to the problem of induced eddy currents in the human body resulting from the pulsed magnetic field gradients of an MRI system. The new method only requires approximately 0.3% of the source period to obtain an accurate solution. (C) 2003 Elsevier Science Inc. All rights reserved.
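For background, the sketch below is a minimal conventional 1-D Yee FDTD update loop in free space; it is not the HD-FDTD scheme itself, and the grid size, time step and source are arbitrary illustrative choices.

    import numpy as np

    C0, MU0, EPS0 = 299_792_458.0, 4e-7 * np.pi, 8.854e-12
    nz, nt = 400, 300
    dz = 1e-3                                # 1 mm cells
    dt = dz / (2 * C0)                       # Courant number 0.5, stable

    Ex = np.zeros(nz)                        # E on integer nodes
    Hy = np.zeros(nz - 1)                    # H on half-integer nodes

    for n in range(nt):
        Hy -= dt / (MU0 * dz) * (Ex[1:] - Ex[:-1])            # dHy/dt = -(1/mu0) dEx/dz
        Ex[1:-1] -= dt / (EPS0 * dz) * (Hy[1:] - Hy[:-1])     # dEx/dt = -(1/eps0) dHy/dz
        Ex[nz // 2] += np.exp(-((n - 60) / 20.0) ** 2)        # soft Gaussian source
        # Grid ends are left at zero (PEC-like); no absorbing boundary in this sketch

    print(f"peak |Ex| after {nt} steps: {np.abs(Ex).max():.3f}")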
Abstract:
Blasting has been the most frequently used method for rock breakage since black powder was first used to fragment rocks, more than two hundred years ago. This paper is an attempt to reassess standard design techniques used in blasting by providing an alternative approach to blast design. The new approach has been termed asymmetric blasting. Based on providing real-time rock recognition through the capacity of measurement-while-drilling (MWD) techniques, asymmetric blasting is an approach to dealing with rock properties as they occur in nature, i.e., randomly and asymmetrically spatially distributed. It is well accepted that the performance of basic mining operations, such as excavation and crushing, relies on a broken rock mass which has been pre-conditioned by the blast. By pre-conditioned we mean well fragmented, sufficiently loose and with an adequate muckpile profile. These muckpile characteristics affect loading and hauling [1]. The influence of blasting does not end there. Under the Mine to Mill paradigm, blasting has a significant leverage on downstream operations such as crushing and milling. There is a body of evidence that blasting affects mineral liberation [2]. Thus, the importance of blasting has increased from simply fragmenting and loosening the rock mass to a broader role that encompasses many aspects of mining and affects the cost of the end product. A new approach is proposed in this paper which facilitates this trend: to treat non-homogeneous media (the rock mass) in a non-homogeneous manner (an asymmetrical pattern) in order to achieve an optimal result (in terms of muckpile size distribution). It is postulated that there are no logical reasons (besides the current lack of means to infer rock mass properties in the blind zones of the bench, and onsite precedents) for drilling a regular blast pattern over a rock mass that is inherently heterogeneous. Real and theoretical examples of such a method are presented.
Abstract:
Most finite element packages use the Newmark algorithm for time integration of structural dynamics. Various algorithms have been proposed to better optimize the high frequency dissipation of this algorithm. Hulbert and Chung proposed both implicit and explicit forms of the generalized alpha method. The algorithms optimize high frequency dissipation effectively, and despite recent work on algorithms that possess momentum conserving/energy dissipative properties in a non-linear context, the generalized alpha method remains an efficient way to solve many problems, especially with adaptive timestep control. However, the implicit and explicit algorithms use incompatible parameter sets and cannot be used together in a spatial partition, whereas this can be done for the Newmark algorithm, as Hughes and Liu demonstrated, and for the HHT-alpha algorithm developed from it. The present paper shows that the explicit generalized alpha method can be rewritten so that it becomes compatible with the implicit form. All four algorithmic parameters can be matched between the explicit and implicit forms. An element interface between implicit and explicit partitions can then be used, analogous to that devised by Hughes and Liu to extend the Newmark method. The stability of the explicit/implicit algorithm is examined in a linear context and found to exceed that of the explicit partition. The element partition is significantly less dissipative of intermediate frequencies than one using the HHT-alpha method. The explicit algorithm can also be rewritten so that the discrete equation of motion evaluates forces from displacements and velocities found at the predicted mid-point of a cycle. Copyright (C) 2003 John Wiley & Sons, Ltd.
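For reference, in the notation commonly used for the Chung-Hulbert generalized alpha method (which may differ in detail from the paper's), the implicit form advances the semi-discrete equation of motion as

\[
M\,a_{n+1-\alpha_m} + C\,v_{n+1-\alpha_f} + K\,d_{n+1-\alpha_f} = F(t_{n+1-\alpha_f}),
\]
\[
d_{n+1} = d_n + \Delta t\, v_n + \Delta t^2\left[\left(\tfrac{1}{2}-\beta\right)a_n + \beta\,a_{n+1}\right],
\qquad
v_{n+1} = v_n + \Delta t\left[(1-\gamma)\,a_n + \gamma\,a_{n+1}\right],
\]

with weighted quantities of the form x_{n+1-\alpha} = (1-\alpha)\,x_{n+1} + \alpha\,x_n. Matching the four parameters \alpha_m, \alpha_f, \beta and \gamma between the implicit and explicit forms is what permits an element interface between implicit and explicit partitions.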
Abstract:
This communication describes an electromagnetic model of a radial line planar antenna consisting of a radial guide with one central probe and many peripheral probes arranged in concentric circles, feeding an array of antenna elements such as patches or wire curls. The model takes into account interactions between the coupling probes while assuming isolation of the radiating elements. Based on this model, computer programs are developed to determine the equivalent circuit parameters of the feed network and the radiation pattern of the radial line planar antenna. Comparisons are made between the present model and the two-probe model developed earlier by other researchers.
Abstract:
Adiabatic self-heating tests were carried out on five New Zealand coal samples ranging in rank from lignite to high-volatile bituminous. Kinetic parameters of oxidation were obtained from the self-heating curves assuming Arrhenius behaviour. The activation energy E (kJ mol(-1)) and the pre-exponential factor A (s(-1)) were determined in the temperature range of 70-140 °C. The activation energy exhibited a definite rank relationship, with a minimum E of 55 kJ mol(-1) occurring at a Suggate rank of approximately 6.2, corresponding to subbituminous C. Either side of this rank there was a noticeable increase in the activation energy, indicating lower reactivity of the coal. A similar rank trend was also observed in the R-70 self-heating rate index values, which were taken from the initial portion of the self-heating curve from 40 to 70 °C. From these results it is clear that the adiabatic method is capable of providing reliable kinetic parameters of coal oxidation.
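As an illustration (not the paper's procedure or data), the sketch below generates a synthetic adiabatic self-heating curve from an assumed Arrhenius rate and then recovers the activation energy from the slope of ln(dT/dt) against 1/T, which is the usual way kinetic parameters are extracted from such curves. The lumped pre-exponential used here has units of K/s; converting it to the rate-constant A (s(-1)) quoted above would additionally require the heat of reaction and heat capacity, which this sketch does not attempt.

    import numpy as np

    R = 8.314                                  # J mol^-1 K^-1
    E_true, A_lumped = 55_000.0, 7.0e4         # assumed values for the synthetic curve
    dt = 10.0                                  # s

    # Integrate dT/dt = A_lumped * exp(-E/(R*T)) from 70 °C up to 140 °C
    T = [343.15]
    while T[-1] < 413.15:
        T.append(T[-1] + dt * A_lumped * np.exp(-E_true / (R * T[-1])))
    T = np.array(T)

    # Arrhenius fit: ln(dT/dt) = ln(A_lumped) - E/(R*T), i.e. linear in 1/T
    dTdt = np.gradient(T, dt)
    slope, intercept = np.polyfit(1.0 / T, np.log(dTdt), 1)
    print(f"recovered E = {-slope * R / 1000:.1f} kJ/mol")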
Abstract:
The role of sunscreens in preventing skin cancer and melanoma is the focus of ongoing research. Currently, there is no objective measure which can be used in field studies to determine whether a person has applied sunscreen to their skin, and researchers must use indirect assessments such as questionnaires. We sought to develop a rapid, non-invasive method for identifying sunscreen on the skin for use in epidemiological studies. Our basic method is to swab the skin, elute any residues which have been adsorbed onto the swab by rinsing in ethanol, and submit the eluted washings for spectrophotometric analysis. In a controlled study, we applied 0.1 ml of sunscreen to a 50 cm(2) grid on both forearms of 21 volunteers. Each forearm was allocated one of 10 different sunscreen brands. The skin was swabbed after intervals of 20 min, 1 h, 2 h and 4 h. In a field study conducted among 12 children aged 2-4 years attending a child care centre, sunscreen was applied to the faces of half the children. Swabs were then taken from the face and back of all children without knowledge of sunscreen status. In the controlled study, sunscreen was clearly detectable up to 2 h after application for all brands containing organic sunscreen, and marginally detectable at 4 h. In the field study, this method correctly identified all children with and without sunscreen. We conclude that spectrophotometric analysis of skin swabs can reliably detect the presence of sunscreen on the skin for up to 2 h after application. (C) 2002 Elsevier Science B.V. All rights reserved.
Abstract:
Free-space optical interconnects (FSOIs), made up of dense arrays of vertical-cavity surface-emitting lasers, photodetectors and microlenses can be used for implementing high-speed and high-density communication links, and hence replace the inferior electrical interconnects. A major concern in the design of FSOIs is minimization of the optical channel cross talk arising from laser beam diffraction. In this article we introduce modifications to the mode expansion method of Tanaka et al. [IEEE Trans. Microwave Theory Tech. MTT-20, 749 (1972)] to make it an efficient tool for modelling and design of FSOIs in the presence of diffraction. We demonstrate that our modified mode expansion method has accuracy similar to the exact solution of the Huygens-Kirchhoff diffraction integral in cases of both weak and strong beam clipping, and that it is much more accurate than the existing approximations. The strength of the method is twofold: first, it is applicable in the region of pronounced diffraction (strong beam clipping) where all other approximations fail and, second, unlike the exact-solution method, it can be efficiently used for modelling diffraction on multiple apertures. These features make the mode expansion method useful for design and optimization of free-space architectures containing multiple optical elements inclusive of optical interconnects and optical clock distribution systems. (C) 2003 Optical Society of America.
Abstract:
This paper considers the question of which is better: the batch or the continuous activated sludge process? It is an important question because dissension still exists in the wastewater industry as to the relative merits of each process. A review of perceived differences between the processes from the point of view of two related disciplines, process engineering and biotechnology, is presented together with the results of previous comparative studies. These reviews highlight possible areas where more understanding is required. This is provided in the paper by application of the flexibility index to two case studies. The flexibility index is a useful process design tool that measures the ability of a process to cope with long-term changes in operation.