256 results for Drag coefficients


Relevance: 10.00%

Abstract:

This paper examines the feasibility of automating dragline bucket excavators used to strip overburden from open-cut mines. In particular, the automatic control of bucket carry angle and bucket trajectory is addressed. Open-loop dynamics of a 1:20 scale model dragline bucket are identified through measurement of the frequency response between carry angle and drag motor input voltage. A strategy for automatic control of carry angle is devised and implemented using bucket angle and rate feedback. System compensation and tuning are explained, and closed-loop frequency and time responses are measured.
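As a rough illustration of the control strategy described above (not the paper's identified model or gains), the sketch below regulates a carry-angle error on an assumed second-order, pendulum-like plant using angle and rate feedback; the plant parameters, gains and initial condition are placeholders.

```python
# Minimal sketch (not the paper's model): carry-angle regulation with angle and
# rate feedback on an assumed second-order, pendulum-like plant.
# Plant: J*theta'' + c*theta' + k*theta = b*u   (all parameters are illustrative)
import numpy as np

J, c, k, b = 1.0, 0.4, 2.0, 1.5        # assumed plant parameters
Kp, Kd = 8.0, 3.0                       # assumed angle and rate feedback gains
dt, T = 0.001, 10.0
theta, omega = np.deg2rad(15.0), 0.0    # initial carry-angle error and rate
ref = 0.0                               # regulate carry angle to the reference

for _ in range(int(T / dt)):
    u = Kp * (ref - theta) - Kd * omega          # angle + rate feedback law
    alpha = (b * u - c * omega - k * theta) / J  # angular acceleration
    omega += alpha * dt
    theta += omega * dt

print(f"final carry-angle error: {np.rad2deg(theta):.3f} deg")
```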

Relevance: 10.00%

Abstract:

The aim of this study was to design and validate an interviewer-administered pelvic floor questionnaire that integrates bladder, bowel and sexual function, pelvic organ prolapse, severity, bothersomeness and condition-specific quality of life. Validation testing of the questionnaire was performed using data from 106 urogynaecological patients and a separately sampled community cohort of 49 women. Missing data did not exceed 2% for any question. It distinguished community and urogynaecological populations regarding pelvic floor dysfunction. The bladder domain correlated with the short version of the Urogenital Distress Inventory, bowel function with an established bowel questionnaire and prolapse symptoms with the International Continence Society prolapse quantification. Sexual function assessment reflected scores on the McCoy Female Sexuality Questionnaire. Cronbach’s α coefficients were acceptable in all domains. Kappa coefficients of agreement for the test–retest analyses varied from 0.5 to 1.0. The interviewer-administered pelvic floor questionnaire assessed pelvic floor function in a reproducible and valid fashion in a typical urogynaecological clinic.
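For readers unfamiliar with the reliability statistic mentioned, the snippet below shows how Cronbach's alpha for one questionnaire domain could be computed from an item-response matrix; the simulated 0-3 item scores are placeholders, not the study data.

```python
# Illustrative only: Cronbach's alpha for one questionnaire domain,
# computed from an item-response matrix (rows = respondents, columns = items).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    k = items.shape[1]                         # number of items in the domain
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the domain total score
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
demo = rng.integers(0, 4, size=(106, 6)).astype(float)  # hypothetical 0-3 item scores
print(f"Cronbach's alpha = {cronbach_alpha(demo):.2f}")
```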

Relevance: 10.00%

Abstract:

Background Older adults may find it problematic to attend hospital appointments due to the difficulty associated with travelling to, within and from a hospital facility for the purpose of a face-to-face assessment. This study aims to investigate equivalence between telephone and face-to-face administration of the Frenchay Activities Index (FAI) and the EuroQol-5D (EQ-5D) generic health-related quality of life instrument amongst an older adult population. Methods Patients aged >65 (n = 53) who had been discharged to the community following an acute hospital admission underwent telephone administration of the FAI and EQ-5D instruments seven days prior to attending a hospital outpatient appointment, where they completed a face-to-face administration of these instruments. Results Overall, 40 subjects' datasets were complete for both assessments and included in the analysis. The FAI items had high levels of agreement between the two modes of administration (item kappas ranged from 0.73 to 1.00), as did the EQ-5D (item kappas ranged from 0.67 to 0.83). For the FAI, EQ-5D VAS and EQ-5D utility score, intraclass correlation coefficients were 0.94, 0.58 and 0.82 respectively, with paired t-tests indicating no significant systematic difference (p = 0.100, p = 0.690 and p = 0.290 respectively). Conclusion Telephone administration of the FAI and EQ-5D instruments provides results comparable to face-to-face administration amongst older adults deemed to have cognitive functioning intact at a basic level, indicating that this is a suitable alternative approach for collecting this information.
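A minimal sketch of the agreement statistics used in the study (item-level kappa and a paired t-test) is given below; the telephone and face-to-face scores are simulated placeholders, and scipy and scikit-learn are assumed to be available.

```python
# Illustrative agreement checks (not the study data): item-level Cohen's kappa
# between telephone and face-to-face responses, plus a paired t-test on the scores.
import numpy as np
from scipy.stats import ttest_rel
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(1)
phone = rng.integers(0, 4, size=40)              # hypothetical item scores by phone
face = np.where(rng.random(40) < 0.85, phone,    # mostly agreeing face-to-face scores
                rng.integers(0, 4, size=40))

kappa = cohen_kappa_score(phone, face)
t, p = ttest_rel(phone.astype(float), face.astype(float))
print(f"item kappa = {kappa:.2f}, paired t-test p = {p:.3f}")
```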

Relevance: 10.00%

Abstract:

Disability following a stroke can impose various restrictions on patients’ attempts at participating in life roles. The measurement of social participation, for instance, is important in estimating recovery and assessing quality of care at the community level. Thus, the identification of factors influencing social participation is essential in developing effective measures for promoting the reintegration of stroke survivors into the community. Data were collected from 188 stroke survivors (mean age 71.7 years) 12 months after discharge from a stroke rehabilitation hospital. Of these survivors, 128 (61%) had suffered a first-ever stroke, and 81 (43%) had a right hemisphere lesion. Most (n = 156, 83%) were living in their own home, though 32 (17%) were living in residential care facilities. Path analysis was used to test a hypothesized model of participation restriction which included the direct and indirect effects between social, psychological and physical outcomes and demographic variables. Participation restriction was the dependent variable. Exogenous independent variables were age, functional ability, living arrangement and gender. Endogenous independent variables were depressive symptoms, state self-esteem and social support satisfaction. The path coefficients showed that functional ability had the largest direct effect on participation restriction. The results also showed that more depressive symptoms, low state self-esteem, female gender, older age and living in a residential care facility had a direct effect on participation restriction. The explanatory variables accounted for 71% of the variance in participation restriction. Prediction models have empirical and practical applications, such as suggesting important factors to be considered in promoting stroke recovery. The findings suggest that interventions offered over the course of rehabilitation should be aimed at improving functional ability and promoting psychological aspects of recovery. These are likely to help stroke survivors resume or maximize their social participation so that they may fulfill productive and positive life roles.
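The following sketch illustrates the general idea behind regression-based path analysis (standardized direct paths plus one indirect effect); the variables, effect sizes and sample are simulated placeholders, not the study's fitted model.

```python
# Minimal regression-based sketch of path analysis: standardized path
# coefficients into the outcome and one indirect effect (all data simulated).
import numpy as np

rng = np.random.default_rng(2)
n = 188
func = rng.normal(size=n)                          # functional ability (hypothetical)
dep = -0.5 * func + rng.normal(scale=0.8, size=n)  # depressive symptoms
partic = -0.6 * func + 0.3 * dep + rng.normal(scale=0.6, size=n)  # restriction

z = lambda x: (x - x.mean()) / x.std()
X = np.column_stack([z(func), z(dep)])
beta, *_ = np.linalg.lstsq(X, z(partic), rcond=None)            # direct paths into outcome
a, *_ = np.linalg.lstsq(z(func)[:, None], z(dep), rcond=None)   # path: function -> depression

print(f"direct paths (function, depression): {beta.round(2)}")
print(f"indirect effect of function via depression: {(a[0] * beta[1]):.2f}")
```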

Relevance: 10.00%

Abstract:

Films of piezoelectric PVDF and P(VDF-TrFE) were exposed to vacuum UV (115-300 nm VUV) and γ-radiation to investigate how these two forms of radiation affect the chemical, morphological, and piezoelectric properties of the polymers. The extent of crosslinking was almost identical in both polymers after γ-irradiation, but surprisingly, was significantly higher for the TrFE copolymer after VUV-irradiation. Changes in the melting behavior were also more significant in the TrFE copolymer after VUV-irradiation due to both surface and bulk crosslinking, compared with only surface crosslinking for the PVDF films. The piezoelectric properties (measured using d33 piezoelectric coefficients and D-E hysteresis loops) were unchanged in the PVDF homopolymer, while the TrFE copolymer exhibited narrower D-E loops after exposure to either γ- or VUV-radiation. The more severe damage to the TrFE copolymer in comparison with the PVDF homopolymer after VUV-irradiation is explained by different energy deposition characteristics. The short wavelength, highly energetic photons are undoubtedly absorbed in the surface layers of both polymers, and we propose that while the longer wavelength components of the VUV-radiation are absorbed by the bulk of the TrFE copolymer causing crosslinking, they are transmitted harmlessly in the PVDF homopolymer.

Relevance: 10.00%

Abstract:

Smart materials, such as thin-film piezoelectric polymers, are interesting for potential applications on Gossamer spacecraft. This investigation aims to predict the performance and long-term stability of the piezoelectric properties of poly(vinylidene fluoride) (PVDF) and its copolymers under conditions simulating the low-Earth-orbit environment. To examine the effects of temperature on the piezoelectric properties of PVDF, poly(vinylidene fluoride-co-trifluoroethylene), and poly(vinylidene fluoride-co-hexafluoropropylene), the d33 piezoelectric coefficients were measured up to 160 °C, and the electric displacement/electric field (D–E) hysteresis loops were measured from −80 to +110 °C. The room-temperature d33 coefficient of PVDF homopolymer films, annealed at 50, 80, and 125 °C, dropped rapidly within a few days of thermal exposure and then remained unchanged. In contrast, the TrFE copolymer exhibited greater thermal stability than the homopolymer, with d33 remaining almost unchanged up to 125 °C. The HFP copolymer exhibited poor retention of d33 at temperatures above 80 °C. In situ D–E loop measurements from −80 to +110 °C showed that the remanent polarization of the TrFE copolymer was more stable than that of the PVDF homopolymer. D–E hysteresis loop and d33 results were also compared with the deflection of the PVDF homopolymer and TrFE copolymer bimorphs tested over a wide temperature range.

Relevance: 10.00%

Abstract:

The effects of simulated low Earth orbit conditions on vinylidene fluoride-based thin-film piezoelectrics for use in lightweight, large surface area spacecraft applications such as telescope mirrors and antennae are presented. The environmental factors considered as having the greatest potential to cause damage are temperature, atomic oxygen and vacuum UV radiation. Using the piezoelectric strain coefficients and bimorph deflection measurements, the piezoelectric performance over the temperature range -100 to +150°C was studied. The effects of simultaneous AO/VUV exposure were also examined and the films characterized by their piezoelectric, surface, and thermal properties. Two fluorinated piezoelectric polymers, poly(vinylidene fluoride) and poly(vinylidene fluoride-co-trifluoroethylene), were adversely affected at elevated temperatures due to depoling caused by randomization of the dipole orientation, while AO/VUV exposure contributed little to depoling but did cause significant surface erosion and, in the case of P(VDF-TrFE), bulk crosslinking. These results highlight the importance of materials selection for use in space environments.

Relevance: 10.00%

Abstract:

In this paper, the optimal design of an active flow control device, the Shock Control Bump (SCB), on the suction and pressure sides of a transonic aerofoil to reduce transonic total drag is investigated. Two optimisation test cases are conducted using different advanced Evolutionary Algorithms (EAs): the first optimiser is the Hierarchical Asynchronous Parallel Evolutionary Algorithm (HAPMOEA), based on canonical Evolutionary Strategies (ES); the second optimiser is HAPMOEA hybridised with a well-known game strategy, the Nash game. Numerical results show that the SCB significantly reduces the drag, by 30% when compared to the baseline design. In addition, the use of a Nash game strategy as a pre-conditioner of global control saves up to 90% of the computational cost when compared to the first optimiser, HAPMOEA.
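As a toy illustration of the evolutionary search involved (not HAPMOEA itself, and with a made-up drag surrogate in place of CFD evaluation), a simple evolution strategy over assumed bump parameters might look like this:

```python
# Toy evolution strategy on an assumed drag surrogate, sketching the kind of
# search an EA optimiser performs; objective and bump parameterisation are
# illustrative placeholders, not the paper's CFD-based evaluation.
import numpy as np

def drag_surrogate(x):                       # x = [bump height, position, length]
    target = np.array([0.4, 0.55, 0.2])      # assumed optimum for the toy problem
    return 1.0 + np.sum((x - target) ** 2)   # stand-in for total drag

rng = np.random.default_rng(3)
parent, sigma = rng.random(3), 0.2
for _ in range(200):
    kids = parent + sigma * rng.normal(size=(10, 3))       # 10 offspring per generation
    best = kids[np.argmin([drag_surrogate(k) for k in kids])]
    if drag_surrogate(best) < drag_surrogate(parent):
        parent = best                                      # elitist replacement
    sigma *= 0.99                                          # simple step-size decay

print("best bump parameters:", parent.round(3), "drag:", round(drag_surrogate(parent), 4))
```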

Relevance: 10.00%

Abstract:

LiteSteel beam (LSB) is a new cold-formed steel hollow flange channel beam. The unique LSB section is produced by a patented manufacturing process involving simultaneous cold-forming and dual electric resistance welding. To date, limited research has been undertaken on the shear buckling behaviour of LSBs with torsionally rigid, rectangular hollow flanges. For the shear design of LSB web panels, their elastic shear buckling strength must be determined accurately, including the potential post-buckling strength. Currently the elastic shear buckling coefficients of web panels are determined by conservatively assuming that the web panels are simply supported at the junction between the flange and web elements. Therefore, finite element analyses were carried out to investigate the elastic shear buckling behaviour of LSB sections, including the effect of the true support conditions at the junction between their flange and web elements. An improved equation for the higher elastic shear buckling coefficient of LSBs was developed and included in the shear capacity equations of Australian cold-formed steel codes. Predicted ultimate shear capacities were compared with available experimental results; both showed considerable improvement in the shear capacities of LSBs. A study of the shear flow distribution of LSBs was also undertaken prior to the elastic buckling analysis study. This paper presents the details of this investigation and its results, including the shear flow distribution of LSBs.
Keywords: LiteSteel beam, Elastic shear buckling, Shear flow, Cold-formed steel structures, Slender web, Hollow flanges.
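For context, the classical elastic shear buckling stress of a web panel takes the standard plate-buckling form below, where the shear buckling coefficient k_v encodes the support conditions at the flange-to-web junction (k_v = 5.34 for a long, simply supported panel, and higher for fixed edges); the improved LSB-specific coefficient developed in the paper is not reproduced here.

```latex
% Elastic shear buckling stress of a web panel of clear depth d_1 and thickness t_w;
% k_v is the shear buckling coefficient set by the edge support conditions.
\tau_{cr} = \frac{k_v \, \pi^2 E}{12\,(1 - \nu^2)\,(d_1/t_w)^2}
```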

Relevance: 10.00%

Abstract:

Advances in symptom management strategies through a better understanding of cancer symptom clusters depend on the identification of symptom clusters that are valid and reliable. The purpose of this exploratory research was to investigate alternative analytical approaches to identify symptom clusters for patients with cancer, using readily accessible statistical methods, and to justify which methods of identification may be appropriate for this context. Three studies were undertaken: (1) a systematic review of the literature, to identify analytical methods commonly used for symptom cluster identification for cancer patients; (2) a secondary data analysis to identify symptom clusters and compare alternative methods, as a guide to best practice approaches in cross-sectional studies; and (3) a secondary data analysis to investigate the stability of symptom clusters over time. The systematic literature review identified, in the 10 years prior to March 2007, 13 cross-sectional studies implementing multivariate methods to identify cancer-related symptom clusters. The methods commonly used to group symptoms were exploratory factor analysis, hierarchical cluster analysis and principal components analysis. Common factor analysis methods were recommended as the best practice cross-sectional methods for cancer symptom cluster identification. A comparison of alternative common factor analysis methods was conducted in a secondary analysis of a sample of 219 ambulatory cancer patients with mixed diagnoses, assessed within one month of commencing chemotherapy treatment. Principal axis factoring, unweighted least squares and image factor analysis identified five consistent symptom clusters, based on patient self-reported distress ratings of 42 physical symptoms. Extraction of an additional cluster was necessary when using alpha factor analysis to determine clinically relevant symptom clusters. The recommended approaches for symptom cluster identification using data that are not multivariate normal were: principal axis factoring or unweighted least squares for factor extraction, followed by oblique rotation; and use of the scree plot and Minimum Average Partial procedure to determine the number of factors. In contrast to other studies, which typically interpret pattern coefficients alone, in these studies symptom clusters were determined on the basis of structure coefficients. This approach was adopted for the stability of the results, as structure coefficients are correlations between factors and symptoms unaffected by the correlations between factors. Symptoms could be associated with multiple clusters, as a foundation for investigating potential interventions. The stability of these five symptom clusters was investigated in separate common factor analyses, 6 and 12 months after chemotherapy commenced. Five qualitatively consistent symptom clusters were identified over time (Musculoskeletal-discomforts/lethargy, Oral-discomforts, Gastrointestinal-discomforts, Vasomotor-symptoms, Gastrointestinal-toxicities), but at 12 months two additional clusters were determined (Lethargy and Gastrointestinal/digestive symptoms). Future studies should include physical, psychological, and cognitive symptoms. Further investigation of the identified symptom clusters is required for validation, to examine causality, and potentially to suggest interventions for symptom management. Future studies should use longitudinal analyses to investigate change in symptom clusters, the influence of patient-related factors, and the impact on outcomes (e.g., daily functioning) over time.
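As a rough sketch of one step in this methodology, inspecting eigenvalues of the symptom correlation matrix (a scree plot) to judge how many factors to extract, the code below uses simulated stand-ins for the 42 distress ratings; it is not the study's principal axis factoring or Minimum Average Partial procedure.

```python
# Eigenvalues of the symptom correlation matrix for a scree plot, used to judge
# how many clusters/factors to extract. Data are simulated stand-ins.
import numpy as np

rng = np.random.default_rng(4)
n_patients, n_symptoms = 219, 42
latent = rng.normal(size=(n_patients, 5))                  # 5 underlying clusters
loadings = rng.normal(scale=0.6, size=(5, n_symptoms))
ratings = latent @ loadings + rng.normal(scale=1.0, size=(n_patients, n_symptoms))

corr = np.corrcoef(ratings, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
print("leading eigenvalues for the scree plot:", eigvals[:8].round(2))
print("factors with eigenvalue > 1:", int((eigvals > 1).sum()))
```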

Relevance: 10.00%

Abstract:

The performance of an adaptive filter may be studied through the behaviour of the optimal and adaptive coefficients in a given environment. This thesis investigates the performance of finite impulse response adaptive lattice filters for two classes of input signals: (a) frequency modulated signals with polynomial phases of order p in complex Gaussian white noise (as nonstationary signals), and (b) impulsive autoregressive processes with alpha-stable distributions (as non-Gaussian signals). Initially, an overview is given of linear prediction and adaptive filtering. The convergence and tracking properties of the stochastic gradient algorithms are discussed for stationary and nonstationary input signals. It is explained that the stochastic gradient lattice algorithm has many advantages over the least-mean square algorithm. Some of these advantages are having a modular structure, easily guaranteed stability, less sensitivity to the eigenvalue spread of the input autocorrelation matrix, and easy quantization of filter coefficients (normally called reflection coefficients). We then characterize the performance of the stochastic gradient lattice algorithm for frequency modulated signals through the optimal and adaptive lattice reflection coefficients. This is a difficult task due to the nonlinear dependence of the adaptive reflection coefficients on the preceding stages and the input signal. To ease the derivations, we assume that the reflection coefficients of each stage are independent of the inputs to that stage. Then the optimal lattice filter is derived for frequency modulated signals. This is performed by computing the optimal values of residual errors, reflection coefficients, and recovery errors. Next, we show the tracking behaviour of adaptive reflection coefficients for frequency modulated signals. This is carried out by computing the tracking model of these coefficients for the stochastic gradient lattice algorithm on average. The second-order convergence of the adaptive coefficients is investigated by modeling the theoretical asymptotic variance of the gradient noise at each stage. The accuracy of the analytical results is verified by computer simulations. Using the previous analytical results, we show a new property, the polynomial-order reducing property of adaptive lattice filters. This property may be used to reduce the order of the polynomial phase of input frequency modulated signals. Considering two examples, we show how this property may be used in processing frequency modulated signals. In the first example, a detection procedure is carried out on a frequency modulated signal with a second-order polynomial phase in complex Gaussian white noise. We show that, using this technique, a better probability of detection is obtained for the reduced-order phase signals compared to that of the traditional energy detector. Also, it is empirically shown that the distribution of the gradient noise in the first adaptive reflection coefficients approximates the Gaussian law. In the second example, the instantaneous frequency of the same observed signal is estimated. We show that by using this technique a lower mean square error is achieved for the estimated frequencies at high signal-to-noise ratios in comparison to that of the adaptive line enhancer. The performance of adaptive lattice filters is then investigated for the second type of input signals, i.e., impulsive autoregressive processes with alpha-stable distributions. The concept of alpha-stable distributions is first introduced. We discuss that the stochastic gradient algorithm, which gives desirable results for finite variance input signals (like frequency modulated signals in noise), does not achieve fast convergence for infinite variance stable processes (due to the use of the minimum mean-square error criterion). To deal with such problems, the concepts of the minimum dispersion criterion and fractional lower order moments, and recently developed algorithms for stable processes, are introduced. We then study the possibility of using the lattice structure for impulsive stable processes. Accordingly, two new algorithms, the least-mean P-norm lattice algorithm and its normalized version, are proposed for lattice filters based on fractional lower order moments. Simulation results show that, using the proposed algorithms, faster convergence speeds are achieved for parameter estimation of autoregressive stable processes with low to moderate degrees of impulsiveness in comparison to many other algorithms. Also, we discuss the effect of the impulsiveness of stable processes on generating some misalignment between the estimated parameters and the true values. Due to the infinite variance of stable processes, the performance of the proposed algorithms is only investigated using extensive computer simulations.
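A minimal sketch of a stochastic-gradient (adaptive) lattice predictor is given below to make the reflection-coefficient update concrete; the order, step size, normalisation and AR(2) test signal are illustrative choices, not the thesis's algorithms or settings.

```python
# Sketch of a stochastic-gradient adaptive lattice predictor for a real-valued
# input: per-stage forward/backward prediction errors and reflection-coefficient
# updates normalised by a running power estimate.
import numpy as np

def gal_lattice(x, order=4, mu=0.05, eps=1e-6):
    k = np.zeros(order)                    # reflection coefficients
    b_prev = np.zeros(order + 1)           # backward errors from the previous sample
    power = np.full(order, eps)            # running input power per stage
    for n in range(len(x)):
        f = np.empty(order + 1)
        b = np.empty(order + 1)
        f[0] = b[0] = x[n]
        for m in range(1, order + 1):
            f[m] = f[m - 1] + k[m - 1] * b_prev[m - 1]
            b[m] = b_prev[m - 1] + k[m - 1] * f[m - 1]
            # normalised stochastic-gradient update of the reflection coefficient
            power[m - 1] = 0.99 * power[m - 1] + f[m - 1] ** 2 + b_prev[m - 1] ** 2
            k[m - 1] -= mu * (f[m] * b_prev[m - 1] + b[m] * f[m - 1]) / power[m - 1]
        b_prev = b.copy()
    return k

rng = np.random.default_rng(5)
x = np.zeros(4000)                         # AR(2) test process driven by white noise
for n in range(2, len(x)):
    x[n] = 1.2 * x[n - 1] - 0.6 * x[n - 2] + rng.normal()
print("adapted reflection coefficients:", gal_lattice(x).round(3))
```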

Relevance: 10.00%

Abstract:

The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on the computation, transmission, and storage costs. This decomposition structure is based on analysis of the information packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and the human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criteria as well as the sensitivities in human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training and multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all of the source vectors without the need to project these on the lattice outermost shell, while it properly maintains a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only the cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training and multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of the reconstructed images. For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of the reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high quality reconstructed images with better compression ratios than other available algorithms. To evaluate the proposed algorithms, their objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
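A bare-bones illustration of the general pipeline (wavelet subbands followed by coarser quantisation of the detail coefficients) is sketched below using PyWavelets; it stands in for, and is much simpler than, the thesis's fixed wavelet-packet structure and piecewise-uniform pyramid LVQ.

```python
# Wavelet decomposition of a stand-in image, coarser scalar quantisation of the
# detail subbands, and reconstruction error; not the thesis's algorithm.
import numpy as np
import pywt

rng = np.random.default_rng(6)
image = rng.random((128, 128))                       # stand-in for a fingerprint image

coeffs = pywt.wavedec2(image, "bior4.4", level=3)    # approximation + detail subbands
step_fine, step_coarse = 0.02, 0.1                   # assumed quantisation steps

quantised = [np.round(coeffs[0] / step_fine) * step_fine]   # keep the approximation finer
for details in coeffs[1:]:
    quantised.append(tuple(np.round(d / step_coarse) * step_coarse for d in details))

reconstructed = pywt.waverec2(quantised, "bior4.4")
rmse = np.sqrt(np.mean((image - reconstructed[:128, :128]) ** 2))
print(f"reconstruction RMSE after subband quantisation: {rmse:.4f}")
```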

Relevance: 10.00%

Abstract:

OneSteel Australian Tube Mills has recently developed a new hollow flange channel cold-formed section, known as the LiteSteel Beam (LSB). The innovative LSB sections have the beneficial characteristics of torsionally rigid closed rectangular flanges combined with economical fabrication processes from a single strip of high strength steel. They combine the stability of hot-rolled steel sections with the high strength-to-weight ratio of conventional cold-formed steel sections. The LSB sections are commonly used as flexural members in residential, industrial and commercial buildings. In order to ensure safe and efficient designs of LSBs, many research studies have been undertaken on the flexural behaviour of LSBs. However, no research has been undertaken on the shear behaviour of LSBs. Therefore this thesis investigated the ultimate shear strength behaviour of LSBs with and without web openings, including their elastic buckling and post-buckling characteristics, using both experimental and finite element analyses, and developed accurate shear design rules. Currently the elastic shear buckling coefficients of web panels are determined by conservatively assuming that the web panels are simply supported at the junction between the web and flange elements. Therefore finite element analyses were conducted first to investigate the elastic shear buckling behaviour of LSBs and to determine the true support condition at the junction between their web and flange elements. An equation for the higher elastic shear buckling coefficient of LSBs was developed and included in the shear capacity equations of the cold-formed steel structures code, AS/NZS 4600. Predicted shear capacities from the modified equations and the available experimental results demonstrated the improvements to the shear capacities of LSBs due to the presence of a higher level of fixity at the LSB flange-to-web juncture. A detailed study into the shear flow distribution of LSBs was also undertaken prior to the elastic buckling analysis study. The experimental study of ten LSB sections included 42 shear tests of LSBs with aspect ratios of 1.0 and 1.5 that were loaded at midspan until failure. Both single and back-to-back LSB arrangements were used. Test specimens were chosen such that all three types of shear failure (shear yielding, inelastic and elastic shear buckling) occurred in the tests. Experimental results showed that the current cold-formed steel design rules are very conservative for the shear design of LSBs. Significant improvements to web shear buckling occurred due to the presence of rectangular hollow flanges, while considerable post-buckling strength was also observed. Experimental results were presented and compared with corresponding predictions from the current design rules. Appropriate improvements have been proposed for the shear strength of LSBs based on the AISI (2007) design equations and test results. Suitable design rules were also developed in the direct strength method (DSM) format. This thesis also includes the shear test results of cold-formed lipped channel beams from LaBoube and Yu (1978a), and the new design rules developed based on them using the same approach used for LSBs. Finite element models of LSBs in shear were also developed to investigate the ultimate shear strength behaviour of LSBs including their elastic and post-buckling characteristics. They were validated by comparing their results with experimental test results. Details of the finite element models of LSBs, the nonlinear analysis results and their comparisons with experimental results are presented in this thesis. Finite element analysis results showed that the current cold-formed steel design rules are very conservative for the shear design of LSBs. They also confirmed other experimental findings relating to the elastic and post-buckling shear strength of LSBs. A detailed parametric study based on the validated experimental finite element model was undertaken to develop an extensive shear strength database, which was then used to confirm the accuracy of the new shear strength equations proposed in this thesis. Experimental and numerical studies were also undertaken to investigate the shear behaviour of LSBs with web openings. Twenty-six shear tests were first undertaken using a three-point loading arrangement. It was found that the AS/NZS 4600 and Shan et al.'s (1997) design equations are conservative for the shear design of LSBs with web openings, while McMahon et al.'s (2008) design equation is unconservative. Experimental finite element models of LSBs with web openings were then developed and validated by comparing their results with experimental test results. The developed nonlinear finite element model was found to predict the shear capacity of LSBs with web openings with very good accuracy. Improved design equations have been proposed for the shear capacity of LSBs with web openings based on both the experimental and FEA parametric study results. This thesis presents the details of the experimental and numerical studies of the shear behaviour and strength of LSBs with and without web openings, and the results, including the developed accurate design rules.

Relevance: 10.00%

Abstract:

This work investigates the computer modelling of the photochemical formation of smog products such as ozone and aerosol, in a system containing toluene, NOx and water vapour. In particular, the problem of modelling this process in the Commonwealth Scientific and Industrial Research Organization (CSIRO) smog chambers, which utilize outdoor exposure, is addressed. The primary requirement for such modelling is a knowledge of the photolytic rate coefficients. Photolytic rate coefficients of species other than NO2 are often related to JNO2 (the rate coefficient for the photolysis of NO2) by a simple factor, but for outdoor chambers this method is prone to error as the diurnal profiles may not be similar in shape. Three methods for the calculation of diurnal JNO2 are investigated. The most suitable method for incorporation into a general model is found to be one which determines the photolytic rate coefficients for NO2, as well as several other species, from actinic flux, absorption cross section and quantum yields. A computer model was developed, based on this method, to calculate in-chamber photolysis rate coefficients for the CSIRO smog chambers, in which ex-chamber rate coefficients are adjusted by accounting for variation in light intensity by transmittance through the Teflon walls, albedo from the chamber floor and radiation attenuation due to clouds. The photochemical formation of secondary aerosol is investigated in a series of toluene-NOx experiments, which were performed in the CSIRO smog chambers. Three stages of aerosol formation, in plots of total particulate volume versus time, are identified: a delay period in which no significant mass of aerosol is formed, a regime of rapid aerosol formation (regime 1) and a second regime of slowed aerosol formation (regime 2). Two models are presented which were developed from the experimental data. One model is empirically based on observations of discrete stages of aerosol formation and readily allows aerosol growth profiles to be calculated. The second model is based on an adaptation of published toluene photooxidation mechanisms and provides some chemical information about the oxidation products. Both models compare favorably against the experimental data. The gross effects of precursor concentrations (toluene, NOx and H2O) and ambient conditions (temperature, photolysis rate) on the formation of secondary aerosol are also investigated, primarily using the mechanism model. An increase in [NOx]0 results in an increased delay time, rate of aerosol formation in regime 1 and volume of aerosol formed in regime 1. This is due to increased formation of dinitrocresol and furanone products. An increase in toluene results in a decrease in the delay time and an increase in the rate of aerosol formation in regime 1, due to enhanced reactivity from the toluene products, such as the radicals from the photolysis of benzaldehyde. Water vapour has very little effect on the formation of aerosol volume, except that rates are slightly increased due to more OH radicals from reaction with O(1D) from ozone photolysis. Increased temperature results in an increased volume of aerosol formed in regime 1 (increased dinitrocresol formation), while an increased photolysis rate results in an increased rate of aerosol formation in regime 1. Both the rate and volume of aerosol formed in regime 2 are increased by increased temperature or photolysis rate. Both models indicate that the yield of secondary particulates from hydrocarbons (mass concentration of aerosol formed/mass concentration of hydrocarbon precursor) is proportional to the ratio [NOx]0/[hydrocarbon]0.
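The photolysis-rate calculation referred to above amounts to integrating the product of actinic flux, absorption cross-section and quantum yield over wavelength, J = ∫ F(λ) σ(λ) φ(λ) dλ; the sketch below uses made-up spectra (not NO2 data) purely to show the arithmetic.

```python
# Schematic photolysis rate coefficient: trapezoidal integration of
# flux * cross-section * quantum yield over wavelength. Spectra are placeholders.
import numpy as np

wavelength = np.linspace(300e-9, 420e-9, 121)                           # m
flux = 1e27 * np.exp(-((wavelength - 380e-9) / 40e-9) ** 2)             # photons m^-2 s^-1 m^-1
cross_section = 5e-23 * np.exp(-((wavelength - 400e-9) / 50e-9) ** 2)   # m^2
quantum_yield = np.clip((420e-9 - wavelength) / 80e-9, 0.0, 1.0)        # dimensionless

integrand = flux * cross_section * quantum_yield
j = np.sum(0.5 * (integrand[:-1] + integrand[1:]) * np.diff(wavelength))  # s^-1
print(f"photolysis rate coefficient J ~ {j:.2e} s^-1")
```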

Relevance: 10.00%

Abstract:

Bioelectrical impedance analysis (BIA) is a method of body composition analysis first investigated in 1962 which has recently received much attention from a number of research groups. The reasons for this recent interest are its advantages (viz: inexpensive, non-invasive and portable) and also the increasing interest in the diagnostic value of body composition analysis. The concept utilised by BIA to predict body water volumes is the proportional relationship for a simple cylindrical conductor (volume ∝ length²/resistance), which allows the volume to be predicted from the measured resistance and length. Most of the research to date has measured the body's resistance to the passage of a 50 kHz AC current to predict total body water (TBW). Several research groups have investigated the application of AC currents at lower frequencies (e.g. 5 kHz) to predict extracellular water (ECW). However, all research to date using BIA to predict body water volumes has used the impedance measured at a discrete frequency or frequencies. This thesis investigates the variation of impedance and phase of biological systems over a range of frequencies and describes the development of a swept frequency bioimpedance meter which measures impedance and phase at 496 frequencies ranging from 4 kHz to 1 MHz. The impedance of any biological system varies with the frequency of the applied current. The graph of reactance vs resistance yields a circular arc, with the resistance decreasing with increasing frequency and the reactance increasing from zero to a maximum then decreasing to zero. Computer programs were written to analyse the measured impedance spectrum and determine the impedance, Zc, at the characteristic frequency (the frequency at which the reactance is a maximum). The fitted locus of the measured data was extrapolated to determine the resistance, R0, at zero frequency; a value that cannot be measured directly using surface electrodes. The theoretical basis for selecting these impedance values (Zc and R0) to predict TBW and ECW is presented. Studies were conducted on a group of normal healthy animals (n=42) in which TBW and ECW were determined by the gold standard of isotope dilution. The prediction quotients L²/Zc and L²/R0 (L=length) yielded standard errors of 4.2% and 3.2% respectively, and were found to be significantly better than previously reported, empirically determined prediction quotients derived from measurements at a single frequency. The prediction equations established in this group of normal healthy animals were applied to a group of animals with abnormally low fluid levels (n=20), and also to a group with an abnormal balance of extracellular to intracellular fluids (n=20). In both cases the equations using L²/Zc and L²/R0 accurately and precisely predicted TBW and ECW. This demonstrated that the technique developed using multiple frequency bioelectrical impedance analysis (MFBIA) can accurately predict both TBW and ECW in both normal and abnormal animals (with standard errors of the estimate of 6% and 3% for TBW and ECW respectively). Isotope dilution techniques were used to determine TBW and ECW in a group of 60 healthy human subjects (male and female, aged between 18 and 45). Whole body impedance measurements were recorded on each subject using the MFBIA technique, and the correlations between body water volumes (TBW and ECW) and height²/impedance (for all measured frequencies) were compared. The prediction quotients H²/Zc and H²/R0 (H=height) again yielded the highest correlations with TBW and ECW respectively, with corresponding standard errors of 5.2% and 10%. The values of the correlation coefficients obtained in this study were very similar to those recently reported by others. It was also observed that in healthy human subjects the impedance measured at virtually any frequency yielded correlations not significantly different from those obtained from the MFBIA quotients. This phenomenon has been reported by other research groups and emphasises the need to validate the technique by investigating its application in one or more groups with abnormalities in fluid levels. The clinical application of MFBIA was trialled and its capability of detecting lymphoedema (an excess of extracellular fluid) was investigated. The MFBIA technique was demonstrated to be significantly more sensitive (P<.05) in detecting lymphoedema than the current technique of circumferential measurements. MFBIA was also shown to provide valuable information describing the changes in the quantity of muscle mass of the patient during the course of the treatment. The determination of body composition (viz TBW and ECW) by MFBIA has been shown to be a significant improvement on previous bioelectrical impedance techniques. The merit of the MFBIA technique is evidenced in its accurate, precise and valid application in animal groups with a wide variation in body fluid volumes and balances. The multiple frequency bioelectrical impedance analysis technique developed in this study provides accurate and precise estimates of body composition (viz TBW and ECW) regardless of the individual's state of health.
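An exploratory sketch of the Cole-plot analysis described above is given below: it simulates a Cole-type impedance spectrum, fits a circle to the reactance-resistance locus, and extrapolates R0 and Zc; the model parameters are illustrative, not measured values.

```python
# Simulate a Cole-type impedance spectrum, fit a circle to the reactance-vs-
# resistance locus (Kasa fit), and extrapolate the zero-frequency resistance R0
# plus the characteristic-frequency impedance Zc. Parameters are illustrative.
import numpy as np

# Cole model: Z(w) = Rinf + (R0 - Rinf) / (1 + (j*w*tau)**alpha)
R0, Rinf, tau, alpha = 600.0, 350.0, 1.0 / (2 * np.pi * 40e3), 0.85
freqs = np.logspace(np.log10(4e3), 6, 496)               # 4 kHz to 1 MHz, 496 points
w = 2 * np.pi * freqs
Z = Rinf + (R0 - Rinf) / (1 + (1j * w * tau) ** alpha)
R, X = Z.real, -Z.imag                                    # plot -reactance vs resistance

# Algebraic (Kasa) circle fit: R^2 + X^2 + a*R + b*X + c = 0
A = np.column_stack([R, X, np.ones_like(R)])
a, b, c = np.linalg.lstsq(A, -(R**2 + X**2), rcond=None)[0]
cx, cy = -a / 2, -b / 2
radius = np.sqrt(cx**2 + cy**2 - c)

# Intersections of the fitted circle with the real axis give Rinf and R0
half_chord = np.sqrt(radius**2 - cy**2)
R0_fit = cx + half_chord
Zc = abs(Z[np.argmax(X)])                                 # impedance at maximum reactance
print(f"extrapolated R0 ~ {R0_fit:.1f} ohm, Zc ~ {Zc:.1f} ohm")
```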