308 results for Coefficient


Relevance:

10.00%

Publisher:

Abstract:

This work investigates the computer modelling of the photochemical formation of smog products such as ozone and aerosol in a system containing toluene, NOx and water vapour. In particular, the problem of modelling this process in the Commonwealth Scientific and Industrial Research Organization (CSIRO) smog chambers, which utilize outdoor exposure, is addressed. The primary requirement for such modelling is a knowledge of the photolytic rate coefficients. Photolytic rate coefficients of species other than NO2 are often related to JNO2 (the rate coefficient for the photolysis of NO2) by a simple factor, but for outdoor chambers this method is prone to error, as the diurnal profiles may not be similar in shape. Three methods for the calculation of diurnal JNO2 are investigated. The most suitable method for incorporation into a general model is found to be one which determines the photolytic rate coefficients for NO2, as well as several other species, from actinic flux, absorption cross section and quantum yield data. A computer model based on this method was developed to calculate in-chamber photolysis rate coefficients for the CSIRO smog chambers, in which ex-chamber rate coefficients are adjusted for variations in light intensity caused by transmittance through the Teflon walls, albedo from the chamber floor and radiation attenuation due to clouds. The photochemical formation of secondary aerosol is investigated in a series of toluene-NOx experiments performed in the CSIRO smog chambers. Three stages of aerosol formation are identified in plots of total particulate volume versus time: a delay period in which no significant mass of aerosol is formed, a regime of rapid aerosol formation (regime 1) and a second regime of slowed aerosol formation (regime 2). Two models developed from the experimental data are presented.
One model is empirically based on observations of discrete stages of aerosol formation and readily allows aerosol growth profiles to be calculated. The second model is based on an adaptation of published toluene photooxidation mechanisms and provides some chemical information about the oxidation products. Both models compare favourably against the experimental data. The gross effects of precursor concentrations (toluene, NOx and H2O) and ambient conditions (temperature, photolysis rate) on the formation of secondary aerosol are also investigated, primarily using the mechanism model. An increase in [NOx]0 results in an increased delay time, rate of aerosol formation in regime 1 and volume of aerosol formed in regime 1, due to increased formation of dinitrocresol and furanone products. An increase in toluene results in a decrease in the delay time and an increase in the rate of aerosol formation in regime 1, due to enhanced reactivity from the toluene products, such as the radicals from the photolysis of benzaldehyde. Water vapour has very little effect on the formation of aerosol volume, except that rates are slightly increased owing to additional OH radicals from the reaction of water with O(1D) from ozone photolysis. Increased temperature results in an increased volume of aerosol formed in regime 1 (increased dinitrocresol formation), while an increased photolysis rate results in an increased rate of aerosol formation in regime 1. Both the rate and volume of aerosol formed in regime 2 are increased by increased temperature or photolysis rate. Both models indicate that the yield of secondary particulates from hydrocarbons (mass concentration of aerosol formed/mass concentration of hydrocarbon precursor) is proportional to the ratio [NOx]0/[hydrocarbon]0.
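The flux-based method singled out above can be sketched numerically: the photolysis rate coefficient is the integral over wavelength of actinic flux, absorption cross section and quantum yield, and the ex-chamber value is then scaled for chamber effects. All numbers below (flux, cross section, quantum-yield cut-off, correction factors) are illustrative placeholders, not CSIRO chamber data.

```python
import numpy as np

# J = integral over wavelength of actinic flux * absorption cross section
# * quantum yield. Flat flux and cross section, step quantum yield:
# illustrative placeholders only.
wavelengths = np.linspace(300e-9, 420e-9, 121)             # m
actinic_flux = np.full_like(wavelengths, 1.0e18)           # photons m^-2 s^-1 per m
cross_section = np.full_like(wavelengths, 2.0e-23)         # m^2 molecule^-1
quantum_yield = np.where(wavelengths < 398e-9, 1.0, 0.0)   # step cut-off

def photolysis_rate(flux, sigma, phi, wl):
    """Trapezoidal integration of J = integral F(lambda) sigma(lambda) phi(lambda) dlambda."""
    integrand = flux * sigma * phi
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(wl)))

j_no2 = photolysis_rate(actinic_flux, cross_section, quantum_yield, wavelengths)

# In-chamber value: scale the ex-chamber J for wall transmittance, floor
# albedo and cloud attenuation, as in the chamber model described above.
transmittance, albedo_factor, cloud_factor = 0.90, 1.10, 0.80  # illustrative
j_in_chamber = j_no2 * transmittance * albedo_factor * cloud_factor
```

In the actual model each correction factor would itself vary diurnally; here they are fixed scalars purely to show the structure of the calculation.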

Relevance:

10.00%

Publisher:

Abstract:

Hydrogel polymers are used for the manufacture of soft (or disposable) contact lenses worldwide today, but have a tendency to dehydrate on the eye. In vitro methods that can probe the potential for a given hydrogel polymer to dehydrate in vivo are much sought after. Nuclear magnetic resonance (NMR) has been shown to be effective in characterising water mobility and binding in similar systems (Barbieri, Quaglia et al., 1998; Larsen, Huff et al., 1990; Peschier, Bouwstra et al., 1993), predominantly through measurement of the spin-lattice relaxation time (T1), the spin-spin relaxation time (T2) and the water diffusion coefficient (D). The aim of this work was to use NMR to quantify the molecular behaviour of water in a series of commercially available contact lens hydrogels, and relate these measurements to the binding and mobility of the water, and ultimately the potential for the hydrogel to dehydrate. As a preliminary study, in vitro evaporation rates were measured for a set of commercial contact lens hydrogels. Following this, comprehensive measurement of the temperature and water content dependencies of T1, T2 and D was performed for a series of commercial hydrogels that spanned the spectrum of equilibrium water content (EWC) and common compositions of contact lenses that are manufactured today. To quantify material differences, the data were then modelled based on theory that had been used for similar systems in the literature (Walker, Balmer et al., 1989; Hills, Takacs et al., 1989). The differences were related to differences in water binding and mobility. The evaporative results suggested that the EWC of the material was important in determining a material's potential to dehydrate in this way. Similarly, the NMR water self-diffusion coefficient was also found to be largely (if not wholly) determined by the WC.
A specific binding model confirmed that the WC was the dominant factor in determining the diffusive behaviour, but also suggested that subtle differences existed between the materials used, based on their equilibrium WC (EWC). However, an alternative modified free volume model suggested that only the current water content of the material was important in determining the diffusive behaviour, and not the equilibrium water content. It was shown that T2 relaxation was dominated by chemical exchange between water and exchangeable polymer protons, for materials that contained such protons. The data were analysed using a proton exchange model, and the results were again reasonably correlated with EWC. Specifically, it was found that the average water mobility increased with increasing EWC, approaching that of free water. The T1 relaxation was also shown to be reasonably well described by the same model. The main conclusion that can be drawn from this work is that the hydrogel EWC is an important parameter, which largely determines the behaviour of water in the gel. A higher EWC results in a hydrogel with water that behaves more like bulk water on average, or is less strongly 'bound' on average, compared with a lower-EWC material. Based on the set of materials used, significant differences due to composition (for materials of the same or similar water content) could not be found. Similar studies could be used in the future to highlight hydrogels that deviate significantly from this 'average' behaviour, and may therefore have the least/greatest potential to dehydrate on the eye.
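The proton-exchange picture described above can be illustrated with a minimal two-site fast-exchange model, in which the observed relaxation rate is the population-weighted average of the free-water and polymer-associated rates. The relaxation times and the assumption that the bound fraction scales as (1 − EWC) are illustrative, not fitted values from this work.

```python
# Fast-exchange two-site model for the observed water T2 in a hydrogel.
# Illustrative values only: t2_free ~ bulk water, t2_bound ~ polymer-associated.

def t2_observed(ewc, t2_free=2.0, t2_bound=0.005):
    """Population-weighted T2 (s) for equilibrium water fraction `ewc` (0-1).

    Fast exchange: 1/T2_obs = p_free/T2_free + p_bound/T2_bound,
    with the bound proton fraction taken as (1 - ewc) for illustration.
    """
    p_free, p_bound = ewc, 1.0 - ewc
    rate = p_free / t2_free + p_bound / t2_bound
    return 1.0 / rate

# Higher EWC -> water behaves more like bulk water (longer observed T2),
# consistent with the trend reported above.
low_ewc_t2 = t2_observed(0.38)
high_ewc_t2 = t2_observed(0.75)
```

A real analysis would also include the exchange-rate and chemical-shift terms of the proton exchange model; this sketch shows only the averaging that drives the EWC trend.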

Relevance:

10.00%

Publisher:

Abstract:

Radioactive wastes are by-products of the use of radiation technologies. As with many technologies, the wastes must be disposed of in a safe manner so as to minimise risk to human health. This study examines the requirements for a hypothetical repository and develops decision-making techniques to permit the establishment of a shallow ground burial facility to receive an inventory of low-level radioactive wastes. Australia's overall inventory is used as an example. Essential and desirable siting criteria are developed and applied to Australia's Northern Territory, resulting in the selection of three candidate sites for laboratory investigations into soil behaviour. The essential quantifiable factors which govern radionuclide migration, and ultimately influence radiation doses following facility closure, are reviewed. Simplified batch and column procedures were developed to enable laboratory determination of distribution and retardation coefficient values for use in one-dimensional advection-dispersion transport equations. Batch and column experiments were conducted with Australian soils sampled from the three identified candidate sites, using a radionuclide representative of the current national low-level radioactive waste inventory. The experimental results are discussed and site soil performance compared. The experimental results are subsequently used to compare the relative radiation health risks between each of the three sites investigated. A recommendation is made as to the preferred site to construct an engineered near-surface burial facility to receive the Australian low-level radioactive waste inventory.
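The batch-derived distribution coefficient enters the one-dimensional advection-dispersion transport equation through a retardation coefficient, conventionally R = 1 + (ρb/θ)Kd. A minimal sketch follows; the soil parameters are illustrative, not the measured values for the three candidate sites.

```python
# Retardation coefficient linking batch sorption (Kd) to 1-D
# advection-dispersion transport. Parameter values are illustrative.

def retardation(bulk_density, porosity, kd):
    """R = 1 + (rho_b / theta) * Kd; rho_b in kg/L, Kd in L/kg, theta dimensionless."""
    return 1.0 + (bulk_density / porosity) * kd

def retarded_velocity(pore_water_velocity, r):
    """Effective radionuclide migration velocity, v / R."""
    return pore_water_velocity / r

r = retardation(bulk_density=1.6, porosity=0.4, kd=10.0)       # -> 41
v_nuclide = retarded_velocity(pore_water_velocity=0.1, r=r)     # m/day, illustrative
```

A strongly sorbing soil (large Kd) gives a large R, so the radionuclide front moves far more slowly than the pore water, which is the property compared between the candidate site soils.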

Relevance:

10.00%

Publisher:

Abstract:

This thesis applies Monte Carlo techniques to the study of X-ray absorptiometric methods of bone mineral measurement. These studies seek to obtain information that can be used in efforts to improve the accuracy of bone mineral measurements. A Monte Carlo computer code for X-ray photon transport at diagnostic energies has been developed from first principles. This development was undertaken as there was no readily available code which included electron binding energy corrections for incoherent scattering, and one of the objectives of the project was to study the effects of including these corrections in Monte Carlo models. The code includes the main Monte Carlo program plus utilities for dealing with input data. A number of geometrical subroutines which can be used to construct complex geometries have also been written. The accuracy of the Monte Carlo code has been evaluated against the predictions of theory and the results of experiments. The results show a high correlation with theoretical predictions. In comparisons of model results with those of direct experimental measurements, agreement to within the model and experimental variances is obtained. The code is an accurate and valid modelling tool. A study of the significance of including electron binding energy corrections for incoherent scatter in the Monte Carlo code has been made. The results show this significance to be very dependent upon the type of application. The most significant effect is a reduction of low-angle scatter flux for high atomic number scatterers. To apply the Monte Carlo code effectively to the study of bone mineral density measurement by photon absorptiometry, the results must be considered in the context of a theoretical framework for the extraction of energy dependent information from planar X-ray beams. Such a theoretical framework is developed, and the two-dimensional nature of tissue decomposition based on attenuation measurements alone is explained.
This theoretical framework forms the basis for analytical models of bone mineral measurement by dual energy X-ray photon absorptiometry techniques. Monte Carlo models of dual energy X-ray absorptiometry (DEXA) have been established. These models have been used to study the contribution of scattered radiation to the measurements. It has been demonstrated that the measurement geometry has a significant effect upon the scatter contribution to the detected signal. For the geometry of the models studied in this work, the scatter has no significant effect upon the results of the measurements. The model has also been used to study a proposed technique which involves dual energy X-ray transmission measurements plus a linear measurement of the distance along the ray path. This is designated as the DPA(+) technique. The addition of the linear measurement enables the tissue decomposition to be extended to three components; bone mineral, fat and lean soft tissue are the components considered here. The results of the model demonstrate that the measurement of bone mineral using this technique is stable over a wide range of soft tissue compositions, and hence would indicate the potential to overcome a major problem of the two-component DEXA technique. However, the results also show that the accuracy of the DPA(+) technique is highly dependent upon the composition of the non-mineral components of bone, and that its precision is poorer (approximately twice the coefficient of variation) than that of standard DEXA measurements. These factors may limit the usefulness of the technique. These studies illustrate the value of Monte Carlo computer modelling of quantitative X-ray measurement techniques. The Monte Carlo models of bone densitometry measurement have:
1. demonstrated the significant effects of the measurement geometry upon the contribution of scattered radiation to the measurements;
2. demonstrated that the statistical precision of the proposed DPA(+) three-tissue-component technique is poorer than that of the standard DEXA two-tissue-component technique;
3. demonstrated that the proposed DPA(+) technique has difficulty providing accurate simultaneous measurement of body composition in terms of a three-component model of fat, lean soft tissue and bone mineral; and
4. provided a knowledge base for input to decisions about development (or otherwise) of a physical prototype DPA(+) imaging system.
The Monte Carlo computer code, data, utilities and associated models represent a set of significant, accurate and valid modelling tools for quantitative studies of physical problems in the fields of diagnostic radiology and radiography.
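The two-component decomposition at the heart of DEXA can be sketched as a 2×2 linear solve: log-attenuation measurements at two energies yield the areal densities of bone mineral and soft tissue (the DPA(+) technique adds the path-length measurement as a third equation). The attenuation coefficients and densities below are illustrative placeholders, not values from this work.

```python
import numpy as np

# Two-component tissue decomposition underlying DEXA: for a scatter-free
# ideal beam, ln(I0/I) at each energy is a linear combination of the
# areal densities, so two energies determine two components.
# Mass attenuation coefficients (cm^2/g) are illustrative placeholders.
mu = np.array([[0.60, 0.25],    # low energy:  [bone, soft tissue]
               [0.30, 0.18]])   # high energy: [bone, soft tissue]

true_areal = np.array([1.2, 20.0])   # g/cm^2: bone mineral, soft tissue (illustrative)
log_atten = mu @ true_areal          # simulated ideal log-attenuation measurements

recovered = np.linalg.solve(mu, log_atten)   # decomposition step
```

Scatter adds an unmodelled term to `log_atten`, which is why the abstract's finding that geometry controls the scatter contribution matters for the accuracy of this solve.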

Relevance:

10.00%

Publisher:

Abstract:

In this paper, a static synchronous series compensator (SSSC), along with a fixed capacitor, is used to avoid torsional mode instability in a series compensated transmission system. A 48-step harmonic neutralized inverter is used for the realization of the SSSC. The system under consideration is the IEEE first benchmark model on SSR analysis. The system stability is studied both through eigenvalue analysis and EMTDC/PSCAD simulation studies. It is shown that the combination of the SSSC and the fixed capacitor improves the synchronizing power coefficient. The presence of the fixed capacitor ensures increased damping of small signal oscillations. At higher levels of fixed capacitor compensation, a damping controller is required to stabilize the torsional modes of SSR.

Relevance:

10.00%

Publisher:

Abstract:

The typical daily decision-making process of individuals regarding use of the transport system involves mainly three types of decisions: mode choice, departure time choice and route choice. This paper focuses on the mode and departure time choice processes and studies different model specifications for a combined mode and departure time choice model. The paper compares different sets of explanatory variables as well as different model structures to capture the correlation among alternatives and taste variations among commuters. The main hypothesis tested in this paper is that departure time alternatives are also correlated by the amount of delay. Correlation among different alternatives is confirmed by analysing different nesting structures as well as error component formulations. Random coefficient logit models confirm the presence of random taste heterogeneity across commuters. Mixed nested logit models are estimated to jointly account for the random taste heterogeneity and the correlation among different alternatives. Results indicate that accounting for the random taste heterogeneity as well as inter-alternative correlation improves the model performance.
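The discrete-choice machinery referred to above rests on logit choice probabilities. A minimal multinomial version (ignoring the nesting structures and random coefficients estimated in the paper) can be sketched as follows; the alternative names and utilities are illustrative assumptions only.

```python
import math

# Multinomial logit for combined mode / departure-time alternatives:
# P_i = exp(V_i) / sum_j exp(V_j). Nested and mixed logit generalise this
# by correlating alternatives and randomising the coefficients.

def logit_probabilities(utilities):
    """Choice probabilities from a dict of systematic utilities."""
    m = max(utilities.values())                         # stabilise the exponentials
    exp_v = {alt: math.exp(v - m) for alt, v in utilities.items()}
    total = sum(exp_v.values())
    return {alt: e / total for alt, e in exp_v.items()}

# Illustrative utilities for four combined alternatives.
probs = logit_probabilities({
    "car_peak": -0.5,
    "car_offpeak": -0.8,
    "transit_peak": -1.2,
    "transit_offpeak": -1.0,
})
```

The paper's hypothesis that departure-time alternatives sharing similar delay are correlated is exactly what this flat model cannot express, motivating the nested and error-component structures it estimates.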

Relevance:

10.00%

Publisher:

Abstract:

The Streaming SIMD Extensions (SSE) are a special feature embedded in the Intel Pentium III and IV classes of microprocessors, enabling the execution of SIMD-type operations to exploit data parallelism. This article presents improvements to the computational performance of a railway network simulator by means of SSE. Voltage and current at various points of the supply system to an electrified railway line are crucial for design, daily operation and planning. With computer simulation, their time-variations can be obtained by solving a matrix equation whose size mainly depends upon the number of trains present in the system. A large coefficient matrix, the result of a congested railway line, inevitably leads to heavier computational demand and hence slows the simulation. With the special architectural features of the latest processors on PC platforms, significant speed-up in computations can be achieved.

Relevance:

10.00%

Publisher:

Abstract:

A novel model for the potentiostatic discharge of primary alkaline battery cathodes is presented. The model is used to simulate discharges resulting from the stepped potential electrochemical spectroscopy (SPECS) of primary alkaline battery cathodes, and the results are validated with experimental data. We show that a model based on a single (or mean) reaction framework can be used to simulate multi-reaction discharge behaviour, and we develop a consistent functional modification to the kinetic equation of the model that allows for this. The model is used to investigate the effects that the initial exchange current density, i00, and the diffusion coefficient for protons in electrolytic manganese dioxide (EMD), DH+, have on SPECS discharge. The behaviour observed is consistent with the idea that individual reduction reactions, within the multi-reaction reduction behaviour of EMD, have distinct i00 and DH+ values.

Relevance:

10.00%

Publisher:

Abstract:

The link between measured sub-saturated hygroscopicity and the cloud activation potential of secondary organic aerosol particles produced by the chamber photo-oxidation of α-pinene, in the presence or absence of ammonium sulphate seed aerosol, was investigated using two models of varying complexity. A simple single-hygroscopicity-parameter model and a more complex model (incorporating surface effects) were used to assess the detail required to predict the cloud condensation nucleus (CCN) activity from the sub-saturated water uptake. Sub-saturated water uptake measured by three hygroscopicity tandem differential mobility analyser (HTDMA) instruments was used to determine the water activity for use in the models. The predicted CCN activity was compared to the activation potential measured with a continuous-flow CCN counter. Reconciliation of the more complex model formulation with the measured cloud activation could be achieved with widely different assumed surface tension behaviours of the growing droplet; the outcome was entirely determined by the instrument used as the source of water activity data. This unreliable derivation of the water activity as a function of solute concentration from sub-saturated hygroscopicity data indicates a limitation in the use of such data for predicting the cloud condensation nucleus behaviour of particles with a significant organic fraction. Similarly, the ability of the simpler single-parameter model to predict cloud activation behaviour depended on the instrument used to measure sub-saturated hygroscopicity and on the relative humidity used to provide the model input. However, agreement was observed for inorganic salt solution particles, which were measured by all instruments in agreement with theory. Given the differences in data between validated and extensively used HTDMA instruments, the detail required to predict the CCN activity from sub-saturated hygroscopicity cannot be stated with certainty.
In order to narrow the gap between measurements of hygroscopic growth and CCN activity, the processes involved must be understood and the instrumentation extensively quality assured. Owing to the differences in HTDMA data, it is impossible to say from the results presented here whether: i) surface tension suppression occurs; ii) bulk-to-surface partitioning is important; or iii) the water activity coefficient changes significantly as a function of the solute concentration.
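The single-hygroscopicity-parameter approach mentioned above can be sketched with a κ-Köhler-type relation: κ is recovered from an HTDMA growth factor at a known water activity, and an approximate critical supersaturation follows from it. The Kelvin-term constants and all input values below are illustrative assumptions, not data from this study.

```python
import math

# Single-hygroscopicity-parameter (kappa-Koehler-type) sketch.
# gf = diameter growth factor, water_activity ~ RH/100 (Kelvin effect
# neglected in the sub-saturated step). Illustrative values only.

def kappa_from_gf(gf, water_activity):
    """kappa = (gf^3 - 1) * (1 - a_w) / a_w."""
    return (gf**3 - 1.0) * (1.0 - water_activity) / water_activity

def critical_supersaturation(d_dry, kappa, surface_tension=0.072, temp=298.15):
    """Approximate s_c (%) via S_c = exp(sqrt(4 A^3 / (27 kappa d^3)))."""
    m_w, rho_w, r_gas = 0.018015, 997.0, 8.314                 # water properties, SI
    a = 4.0 * surface_tension * m_w / (r_gas * temp * rho_w)    # Kelvin parameter, m
    return 100.0 * (math.exp(math.sqrt(4.0 * a**3 / (27.0 * kappa * d_dry**3))) - 1.0)

kappa = kappa_from_gf(gf=1.25, water_activity=0.90)   # ~0.106
s_c_100nm = critical_supersaturation(100e-9, kappa)
```

The assumed surface tension enters the critical-supersaturation step directly, which is why the widely different surface tension behaviours discussed above translate into different predicted CCN activity.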

Relevance:

10.00%

Publisher:

Abstract:

Purpose: To analyze the repeatability of measuring nerve fiber length (NFL) from images of the human corneal subbasal nerve plexus using semiautomated software. Methods: Images were captured from the corneas of 50 subjects with type 2 diabetes mellitus who showed varying severity of neuropathy, using the Heidelberg Retina Tomograph 3 with Rostock Corneal Module. Semiautomated nerve analysis software was independently used by two observers to determine NFL from images of the subbasal nerve plexus. This procedure was undertaken on two occasions, 3 days apart. Results: The intraclass correlation coefficient values were 0.95 (95% confidence intervals: 0.92–0.97) for individual subjects and 0.95 (95% confidence intervals: 0.74–1.00) for observers. Bland-Altman plots of the NFL values indicated a reduced spread of data with lower NFL values. The overall spread of data was less for (a) the observer who was more experienced at analyzing nerve fiber images and (b) the second measurement occasion. Conclusions: Semiautomated measurement of NFL in the subbasal nerve fiber layer is highly repeatable. Repeatability can be enhanced by using more experienced observers. It may be possible to markedly improve repeatability when measuring this anatomic structure using fully automated image analysis software.
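The repeatability statistic used above, the intraclass correlation coefficient, can be computed directly from a subjects × observers table. A minimal sketch of the two-way absolute-agreement, single-measurement form, ICC(2,1), follows; the toy NFL-like values are illustrative only.

```python
# ICC(2,1): two-way random effects, absolute agreement, single measurement,
# computed from the standard ANOVA mean squares. Toy data, not study data.

def icc_2_1(data):
    """ICC(2,1) from an n-subjects x k-raters list of lists."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(row[j] for row in data) / n for j in range(k)]
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)      # subjects
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)      # raters
    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

perfect = icc_2_1([[10.0, 10.0], [14.0, 14.0], [18.0, 18.0]])   # identical raters
noisy = icc_2_1([[10.0, 11.0], [14.0, 13.5], [18.0, 17.0]])     # small disagreements
```

With identical observer measurements the statistic equals 1; between-observer noise pulls it below 1, which is the quantity the study reports as 0.95.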

Relevance:

10.00%

Publisher:

Abstract:

Purpose. To investigate evidence-based visual field size criteria for referral of low-vision (LV) patients for mobility rehabilitation. Methods. One hundred and nine participants with LV and 41 age-matched participants with normal sight (NS) were recruited. The LV group was heterogeneous with diverse causes of visual impairment. We measured binocular kinetic visual fields with the Humphrey Field Analyzer and mobility performance on an obstacle-rich, indoor course. Mobility was assessed as percent preferred walking speed (PPWS) and number of obstacle-contact errors. The weighted kappa coefficient of association (κr) was used to discriminate LV participants with both unsafe and inefficient mobility from those with adequate mobility on the basis of their visual field size for the full sample and for subgroups according to type of visual field loss and whether or not the participants had previously received orientation and mobility training. Results. LV participants with both PPWS <38% and errors >6 on our course were classified as having inadequate (inefficient and unsafe) mobility compared with NS participants. Mobility appeared to be first compromised when the visual field was less than about 1.2 steradians (sr; solid angle of a circular visual field of about 70° diameter). Visual fields <0.23 and 0.63 sr (31 to 52° diameter) discriminated patients with at-risk mobility for the full sample and across the two subgroups. A visual field of 0.05 sr (15° diameter) discriminated those with critical mobility. Conclusions. 
Our study suggests that: practitioners should be alert to potential mobility difficulties when the visual field is less than about 1.2 sr (70° diameter); assessment for mobility rehabilitation may be warranted when the visual field is constricted to about 0.23 to 0.63 sr (31 to 52° diameter) depending on the nature of their visual field loss and previous history (at risk); and mobility rehabilitation should be conducted before the visual field is constricted to 0.05 sr (15° diameter; critical).
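The steradian figures quoted above correspond to circular field diameters through the solid-angle formula Ω = 2π(1 − cos θ), where θ is the field's half-angle. A short check reproduces the ~1.2 sr (70°) and ~0.05 sr (15°) correspondences.

```python
import math

# Solid angle of a circular visual field of a given diameter:
# Omega = 2 * pi * (1 - cos(theta)), theta = diameter / 2.

def field_solid_angle(diameter_deg):
    """Solid angle (steradians) of a circular field of the given diameter (degrees)."""
    half_angle = math.radians(diameter_deg / 2.0)
    return 2.0 * math.pi * (1.0 - math.cos(half_angle))

sr_70 = field_solid_angle(70.0)   # ~1.14 sr, quoted as ~1.2 sr above
sr_15 = field_solid_angle(15.0)   # ~0.05 sr, the critical-mobility threshold
```

The same function converts the at-risk range (0.23 to 0.63 sr) back to the 31° to 52° diameters cited in the abstract.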

Relevance:

10.00%

Publisher:

Abstract:

Objective: To identify agreement levels between conventional longitudinal evaluation of change (post–pre) and patient-perceived change (post–then-test) in health-related quality of life. Design: A prospective cohort investigation with two assessment points (baseline and six-month follow-up) was implemented. Setting: Community rehabilitation setting. Subjects: Frail older adults accessing community-based rehabilitation services. Intervention: Nil as part of this investigation. Main measures: Conventional longitudinal change in health-related quality of life was considered the difference between standard EQ-5D assessments completed at baseline and follow-up. To evaluate patient-perceived change, a 'then test' was also completed at the follow-up assessment. This required participants to report (from their current perspective) how they believe their health-related quality of life was at baseline (using the EQ-5D). Patient-perceived change was considered the difference between 'then test' and standard follow-up EQ-5D assessments. Results: The mean (SD) age of participants was 78.8 (7.3) years. Of the 70 participants, 62 (89%) data sets were complete and included in the analysis. Agreement between conventional (post–pre) and patient-perceived (post–then-test) change was low to moderate (EQ-5D utility intraclass correlation coefficient (ICC) = 0.41, EQ-5D visual analogue scale (VAS) ICC = 0.21). Neither approach inferred greater change than the other (utility P = 0.925, VAS P = 0.506). Mean (95% confidence interval (CI)) conventional change in EQ-5D utility and VAS was 0.140 (0.045, 0.236) and 8.8 (3.3, 14.3) respectively, while patient-perceived change was 0.147 (0.055, 0.238) and 6.4 (1.7, 11.1) respectively. Conclusions: Substantial disagreement exists between conventional longitudinal evaluation of change in health-related quality of life and patient-perceived change (as measured using a then test) within individuals.

Relevance:

10.00%

Publisher:

Abstract:

In this paper, two ideal formation models of serrated chips, the symmetric formation model and the unilateral right-angle formation model, are established for the first time. Based on these ideal models and the related adiabatic shear theory of serrated chip formation, the theoretical relationship among average tooth pitch, average tooth height and chip thickness is obtained. Further, the theoretical relation between the passivation coefficient of the chip sawtooth and the chip thickness compression ratio is deduced as well. The comparison between these theoretical prediction curves and experimental data shows good agreement, which validates the robustness of the ideal chip formation models and the correctness of the theoretical derivation. The proposed ideal models provide a simple but effective theoretical basis for subsequent research on serrated chip morphology. Finally, the influences of the principal cutting factors on serrated chip formation are discussed on the basis of a series of finite element simulation results, yielding practical advice for controlling serrated chips in engineering applications.

Relevance:

10.00%

Publisher:

Abstract:

This paper is concerned with some plane strain and axially symmetric free surface problems which arise in the study of static granular solids that satisfy the Coulomb-Mohr yield condition. Such problems are inherently nonlinear, and hence difficult to attack analytically. Given that a Coulomb friction condition holds on a solid boundary, it is shown that the angle at which a free surface may attach to the boundary depends only on the angle of wall friction, assuming the stresses are all continuous at the attachment point and that the coefficient of cohesion is nonzero. As a model problem, the formation of stable cohesive arches in hoppers is considered. This undesirable phenomenon is an obstacle to flow, and occurs when the hopper outlet is too small. Typically, engineers are concerned with predicting the critical outlet size for a given hopper and granular solid, so that for hoppers with outlets larger than this critical value, arching cannot occur. This is a topic of considerable practical interest, with most accepted engineering methods being conservative in nature. Here, the governing equations in two limiting cases (small cohesion and high angle of internal friction) are considered directly. No information on the critical outlet size is found; however, solutions for the shape of the free boundary (the arch) are presented, for both plane and axially symmetric geometries.

Relevance:

10.00%

Publisher:

Abstract:

Diffusion is the process that leads to the mixing of substances as a result of spontaneous and random thermal motion of individual atoms and molecules. It was first detected by the Scottish botanist Robert Brown in 1827, and the phenomenon became known as 'Brownian motion'. More specifically, the motion observed by Brown was translational diffusion – thermal motion resulting in random variations of the position of a molecule. This type of motion was given a correct theoretical interpretation in 1905 by Albert Einstein, who derived the relationship between temperature, the viscosity of the medium, the size of the diffusing molecule, and its diffusion coefficient. It is translational diffusion that is indirectly observed in MR diffusion-tensor imaging (DTI). The relationship obtained by Einstein provides the physical basis for using translational diffusion to probe the microscopic environment surrounding the molecule.
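Einstein's 1905 relationship is the Stokes–Einstein equation, D = kB·T / (6π·η·r). A short numerical sketch follows; the effective molecular radius used is a rough illustrative value, not a measured one.

```python
import math

# Stokes-Einstein relation: translational diffusion coefficient of a
# spherical particle of radius r in a medium of viscosity eta at
# temperature T. The radius below is a rough illustrative value.

def stokes_einstein(temp_k, viscosity_pa_s, radius_m):
    """D = k_B * T / (6 * pi * eta * r), in m^2/s."""
    k_b = 1.380649e-23  # Boltzmann constant, J/K
    return k_b * temp_k / (6.0 * math.pi * viscosity_pa_s * radius_m)

# Order-of-magnitude estimate for water self-diffusion at body temperature
# (eta of water at ~37 C, illustrative effective radius).
d_water = stokes_einstein(temp_k=310.0, viscosity_pa_s=0.69e-3, radius_m=1.0e-10)
```

The estimate lands on the order of 10⁻⁹ m²/s, the scale of the diffusion coefficients probed in DTI, illustrating how temperature, viscosity and molecular size jointly set D.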