948 results for Coefficient


Relevance: 10.00%

Abstract:

The LiteSteel beam (LSB) is a new cold-formed steel hollow flange channel beam. The unique LSB section is produced by a patented manufacturing process involving simultaneous cold-forming and dual electric resistance welding. To date, limited research has been undertaken on the shear buckling behaviour of LSBs with torsionally rigid, rectangular hollow flanges. For the shear design of LSB web panels, their elastic shear buckling strength must be determined accurately, including the potential post-buckling strength. Currently, the elastic shear buckling coefficients of web panels are determined by conservatively assuming that the web panels are simply supported at the junction between the flange and web elements. Therefore, finite element analyses were carried out to investigate the elastic shear buckling behaviour of LSB sections, including the effect of the true support conditions at the junction between their flange and web elements. An improved equation for the higher elastic shear buckling coefficient of LSBs was developed and included in the shear capacity equations of Australian cold-formed steel codes. Predicted ultimate shear capacities were compared with available experimental results, and both showed considerable improvements in the shear capacities of LSBs. A study of the shear flow distribution of LSBs was also undertaken prior to the elastic buckling analysis study. This paper presents the details of this investigation and its results, including the shear flow distribution of LSBs.

Keywords: LiteSteel beam, elastic shear buckling, shear flow, cold-formed steel structures, slender web, hollow flanges.
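For context, the conservative simply supported assumption the abstract refers to corresponds to the classical plate shear buckling coefficient k_v = 5.34 + 4/(a/d1)^2, which feeds the elastic buckling stress tau_cr = k_v pi^2 E / (12 (1 - nu^2) (d1/tw)^2). A minimal sketch follows; the web slenderness and aspect ratio are hypothetical, and the paper's improved LSB-specific coefficient equation is not reproduced here.

```python
import math

def shear_buckling_coefficient(aspect_ratio):
    """Classical shear buckling coefficient k_v for a simply supported
    rectangular web panel (aspect_ratio = a/d1 = panel length / web depth)."""
    if aspect_ratio >= 1.0:
        return 5.34 + 4.0 / aspect_ratio**2
    return 4.0 + 5.34 / aspect_ratio**2

def elastic_shear_buckling_stress(k_v, E, nu, slenderness):
    """tau_cr = k_v * pi^2 * E / (12 * (1 - nu^2) * (d1/tw)^2)."""
    return k_v * math.pi**2 * E / (12.0 * (1.0 - nu**2) * slenderness**2)

# Hypothetical web panel: E = 200,000 MPa, nu = 0.3, d1/tw = 60, a/d1 = 1.0
k_ss = shear_buckling_coefficient(1.0)   # conservative simply supported value
tau_cr = elastic_shear_buckling_stress(k_ss, 200e3, 0.3, 60.0)
print(f"k_v = {k_ss:.2f}, tau_cr = {tau_cr:.1f} MPa")
```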

Relevance: 10.00%

Abstract:

Background: There is increasing interest in measuring quality of life (QOL) in clinical settings and in clinical trials. None of the commonly used QOL instruments has been validated for use postnatally. Aim: To assess the psychometric properties of the 26-item WHOQOL-BREF among women following childbirth. Methods: Using a prospective cohort design, we recruited 320 women within the first few days of childbirth. At six weeks postpartum, participants were asked to complete the WHOQOL-BREF, the Edinburgh Postnatal Depression Scale and the Australian Unity Wellbeing Index. Validation of the WHOQOL-BREF included an analysis of internal consistency, discriminant validity, convergent validity and an examination of the domain structure. Results: 221 (69.1%) women returned their six-week questionnaire. All domains of the WHOQOL-BREF met reliability standards (alpha coefficients exceeding 0.70). The questionnaire discriminated well between known groups (depressed and non-depressed women, P < 0.001) and demonstrated satisfactory correlations with the Australian Unity Wellbeing Index (r > 0.45). The domain structure of the WHOQOL-BREF was also valid in this population of new mothers, with moderate to high correlations between individual items and the domain structure to which the items were originally assigned. Conclusion: The WHOQOL-BREF is a well-accepted and valid instrument in this population and may be used in postnatal clinical settings or for assessing intervention effects in research studies.
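Internal consistency in studies of this kind is typically reported as Cronbach's alpha, the coefficient behind the "exceeding 0.70" criterion above. A minimal sketch of the calculation, using correlated toy scores rather than the study's data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Toy data: 50 respondents x 4 items on a 1-5 scale (not the WHOQOL-BREF data)
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(50, 1))               # shared "true" level
scores = np.clip(base + rng.integers(-1, 2, size=(50, 4)), 1, 5).astype(float)
print(f"alpha = {cronbach_alpha(scores):.2f}")        # reliability standard: >= 0.70
```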

Relevance: 10.00%

Abstract:

Nanoindentation is a useful technique for probing the mechanical properties of bone, and finite element (FE) modelling of the indentation allows inverse determination of elasto-plastic constitutive properties. However, FE simulations to date have assumed frictionless contact between indenter and bone. The aim of this study was to explore the effect of friction in simulations of bone nanoindentation. Two-dimensional axisymmetric FE simulations were performed using a spheroconical indenter of tip radius 0.6 μm and angle 90°. The coefficient of friction between indenter and bone was varied between 0.0 (frictionless) and 0.3. Isotropic linear elasticity was used in all simulations, with bone elastic modulus E = 13.56 GPa and Poisson's ratio ν = 0.3. Plasticity was incorporated using both Drucker-Prager and von Mises yield surfaces. Friction had a modest effect on the predicted force-indentation curve for both von Mises and Drucker-Prager plasticity, reducing maximum indenter displacement by 10% and 20% respectively as the friction coefficient was increased from zero to 0.3 (at a maximum indenter force of 5 mN). However, friction has a much greater effect on predicted pile-up after indentation, reducing predicted pile-up from 0.27 μm to 0.11 μm with a von Mises model, and from 0.09 μm to 0.02 μm with Drucker-Prager plasticity. We conclude that it is important to include friction in nanoindentation simulations of bone.
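Friction does not enter the closed-form elastic contact solution at all, which is one reason an elasto-plastic FE model is needed here. As a baseline sanity check only (not the paper's FE model), a Hertzian spherical-contact estimate using the elastic constants quoted above:

```python
import numpy as np

def hertz_force(h, R, E, nu):
    """Hertzian elastic contact of a rigid sphere on a half-space:
    F = (4/3) * E_star * sqrt(R) * h^(3/2), with E_star = E / (1 - nu^2)."""
    E_star = E / (1.0 - nu**2)
    return (4.0 / 3.0) * E_star * np.sqrt(R) * h**1.5

# Bone properties from the abstract: E = 13.56 GPa, nu = 0.3; tip radius 0.6 um
h = np.linspace(0, 50e-9, 6)                # indentation depth (m)
F = hertz_force(h, 0.6e-6, 13.56e9, 0.3)    # contact force (N)
for hi, Fi in zip(h, F):
    print(f"h = {hi * 1e9:5.1f} nm -> F = {Fi * 1e6:.3f} uN")
```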

Relevance: 10.00%

Abstract:

Transport regulators consider that, with respect to pavement damage, heavy vehicles (HVs) are the riskiest vehicles on the road network. That HV suspension design contributes to road and bridge damage has been recognised for some decades. This thesis deals with some aspects of HV suspension characteristics, particularly (but not exclusively) air suspensions. It covers the development of low-cost in-service HV suspension testing, the effects of larger-than-industry-standard longitudinal air lines and the characteristics of on-board mass (OBM) systems for HVs. All these areas, whilst seemingly disparate, seek to inform the management of HVs, reduce their impact on the network asset and/or provide a measurement mechanism for worn HV suspensions. A number of project management groups at the State and National level in Australia have been, and will be, presented with the results of the project that resulted in this thesis. This should serve to inform their activities applicable to this research. A number of HVs were tested for various characteristics. These tests were used to form a number of conclusions about HV suspension behaviours. Wheel forces from road test data were analysed. A “novel roughness” measure was developed and applied to the road test data to determine dynamic load sharing, amongst other research outcomes. Further, it was proposed that this approach could inform future development of pavement models incorporating roughness and peak wheel forces. Left/right variations in wheel forces and wheel force variations for different speeds were also presented. This led to some conclusions regarding suspension and wheel force frequencies, their transmission to the pavement and repetitive wheel loads in the spatial domain. An improved method of determining dynamic load sharing was developed and presented. It used the correlation coefficient between two elements of an HV to determine dynamic load sharing. This was validated against a mature dynamic load sharing metric, the dynamic load sharing coefficient (de Pont, 1997). This was the first time that the technique of measuring correlation between elements on an HV had been used for a test case vs. a control case for two different sized air lines. Improved dynamic load sharing at the air springs was shown for the test case of the large longitudinal air lines. The statistically significant improvement in dynamic load sharing at the air springs from larger longitudinal air lines varied from approximately 30 percent to 80 percent. Dynamic load sharing at the wheels was improved only for low air line flow events for the test case of larger longitudinal air lines. Statistically significant improvements to some suspension metrics across the range of test speeds and “novel roughness” values were evident from the use of larger longitudinal air lines, but these were not uniform. Of note were improvements to suspension metrics involving peak dynamic forces ranging from below the error margin to approximately 24 percent. Abstract models of HV suspensions were developed from the results of some of the tests. Those models were used to propose further development of, and future directions of research into, further gains in HV dynamic load sharing, through alterations to currently available damping characteristics combined with implementation of large longitudinal air lines.
In-service testing of HV suspensions was found to be possible within a documented range from below the error margin to an error of approximately 16 percent. These results were in comparison with either the manufacturer’s certified data or test results replicating the Australian standard for “road-friendly” HV suspensions, Vehicle Standards Bulletin 11. OBM accuracy testing and development of tamper evidence from OBM data were detailed for over 2000 individual data points across twelve test and control OBM systems from eight suppliers installed on eleven HVs. The results indicated that 95 percent of contemporary OBM systems available in Australia are accurate to +/- 500 kg. The total variation in OBM linearity, after three outliers in the data were removed, was 0.5 percent. A tamper indicator and other OBM metrics that could be used by jurisdictions to determine tamper events were developed and documented. That OBM systems could be used as one vector for in-service testing of HV suspensions was one of a number of synergies between the seemingly disparate streams of this project.
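The correlation-based dynamic load sharing measure described above presumably reduces to a Pearson correlation between force signals from two suspension elements. A minimal sketch on synthetic signals (not project data), alongside the dynamic load coefficient (DLC), a common companion suspension metric; de Pont's dynamic load sharing coefficient itself is not reproduced here:

```python
import numpy as np

def dynamic_load_sharing_correlation(force_a, force_b):
    """Pearson correlation between two wheel/air-spring force signals,
    used as a dynamic load sharing indicator: values near 1 suggest the
    two elements rise and fall together (good sharing)."""
    return np.corrcoef(force_a, force_b)[0, 1]

def dynamic_load_coefficient(force):
    """DLC = standard deviation / mean of the dynamic wheel force, a
    widely used 'road-friendliness' metric."""
    return force.std(ddof=1) / force.mean()

# Synthetic signals standing in for recorded forces (not the thesis data)
rng = np.random.default_rng(1)
t = np.linspace(0, 10, 1000)
base = 50e3 + 5e3 * np.sin(2 * np.pi * 1.5 * t)   # ~1.5 Hz body-bounce component
left = base + rng.normal(0, 1e3, t.size)
right = base + rng.normal(0, 1e3, t.size)
print(f"correlation = {dynamic_load_sharing_correlation(left, right):.3f}")
print(f"DLC (left)  = {dynamic_load_coefficient(left):.3f}")
```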

Relevance: 10.00%

Abstract:

World economies increasingly demand reliable and economical power supply and distribution. To achieve this aim the majority of power systems are becoming interconnected, with several power utilities supplying the one large network. One problem that occurs in a large interconnected power system is the regular occurrence of system disturbances which can result in the creation of intra-area oscillating modes. These modes can be regarded as the transient responses of the power system to excitation, which are generally characterised as decaying sinusoids. For a power system operating ideally, these transient responses would have a “ring-down” time of 10-15 seconds. Sometimes equipment failures disturb the ideal operation of power systems, and oscillating modes with ring-down times greater than 15 seconds arise. The larger settling times associated with such “poorly damped” modes cause substantial power flows between generation nodes, resulting in significant physical stresses on the power distribution system. If these modes are not just poorly damped but “negatively damped”, catastrophic failures of the system can occur. To ensure the stability and security of large power systems, the potentially dangerous oscillating modes generated from disturbances (such as equipment failure) must be quickly identified. The power utility must then apply appropriate damping control strategies. In power system monitoring there exist two facets of critical interest. The first is the estimation of modal parameters for a power system in normal, stable operation. The second is the rapid detection of any substantial changes to this normal, stable operation (because of equipment breakdown, for example). Most work to date has concentrated on the first of these two facets, i.e. on modal parameter estimation. Numerous modal parameter estimation techniques have been proposed and implemented, but all have limitations [1-13]. One of the key limitations of all existing parameter estimation methods is the fact that they require very long data records to provide accurate parameter estimates. This is a particularly significant problem after a sudden detrimental change in damping. One simply cannot afford to wait long enough to collect the large amounts of data required for existing parameter estimators. Motivated by this gap in the current body of knowledge and practice, the research reported in this thesis focuses heavily on rapid detection of changes (i.e. on the second facet mentioned above). This thesis reports on a number of new algorithms which can rapidly flag whether or not there has been a detrimental change to a stable operating system. It will be seen that the new algorithms enable sudden modal changes to be detected within quite short time frames (typically about 1 minute), using data from power systems in normal operation. The new methods reported in this thesis are summarised below.

The Energy Based Detector (EBD): The rationale for this method is that the modal disturbance energy is greater for lightly damped modes than it is for heavily damped modes (because the latter decay more rapidly). Sudden changes in modal energy, then, imply sudden changes in modal damping. Because the method relies on data from power systems in normal operation, the modal disturbances are random. Accordingly, the disturbance energy is modelled as a random process (with the parameters of the model being determined from the power system under consideration). A threshold is then set based on the statistical model.
The energy method is very simple to implement and is computationally efficient. It is, however, only able to determine whether or not a sudden modal deterioration has occurred; it cannot identify which mode has deteriorated. For this reason the method is particularly well suited to smaller interconnected power systems that involve only a single mode.

Optimal Individual Mode Detector (OIMD): As discussed in the previous paragraph, the energy detector can only determine whether or not a change has occurred; it cannot flag which mode is responsible for the deterioration. The OIMD seeks to address this shortcoming. It uses optimal detection theory to test for sudden changes in individual modes. In practice, one can have an OIMD operating for all modes within a system, so that changes in any of the modes can be detected. Like the energy detector, the OIMD is based on a statistical model and a subsequently derived threshold test.

The Kalman Innovation Detector (KID): This detector is an alternative to the OIMD. Unlike the OIMD, however, it does not explicitly monitor individual modes. Rather, it relies on a key property of a Kalman filter, namely that the Kalman innovation (the difference between the estimated and observed outputs) is white as long as the Kalman filter model is valid. A Kalman filter model is set to represent a particular power system. If some event in the power system (such as equipment failure) causes a sudden change to the power system, the Kalman model will no longer be valid and the innovation will no longer be white. Furthermore, if there is a detrimental system change, the innovation spectrum will display strong peaks at frequency locations associated with the changes. Hence the innovation spectrum can be monitored both to set off an “alarm” when a change occurs and to identify which modal frequency has given rise to the change. The threshold for alarming is based on the simple Chi-Squared PDF for a normalised white noise spectrum [14, 15]. While the method can identify the mode which has deteriorated, it does not necessarily indicate whether there has been a frequency or damping change. The PPM discussed next can monitor frequency changes and so can provide some discrimination in this regard.

The Polynomial Phase Method (PPM): In [16] the cubic phase (CP) function was introduced as a tool for revealing frequency related spectral changes. This thesis extends the cubic phase function to a generalised class of polynomial phase functions which can reveal frequency related spectral changes in power systems. A statistical analysis of the technique is performed. When applied to power system analysis, the PPM can provide knowledge of sudden shifts in frequency through both the new frequency estimate and the polynomial phase coefficient information. This knowledge can then be cross-referenced with other detection methods to provide improved detection benchmarks.
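A minimal sketch of the KID-style whiteness check on a Kalman innovation sequence, following the chi-squared thresholding idea described above. The sampling rate, alarm level and synthetic data are assumptions, not values from the thesis:

```python
import numpy as np
from scipy.stats import chi2

def innovation_alarm(innovation, dt=0.1, alpha=0.001):
    """Whiteness test on a Kalman innovation sequence. For zero-mean white
    noise of variance s2, each periodogram ordinate scaled by 2/s2 is
    approximately chi-squared with 2 degrees of freedom, so bins exceeding
    the (1 - alpha) quantile flag a spectral peak (a suspect mode)."""
    n = innovation.size
    s2 = innovation.var(ddof=1)
    spec = np.abs(np.fft.rfft(innovation - innovation.mean()))**2 / n
    stat = 2.0 * spec / s2
    threshold = chi2.ppf(1.0 - alpha, df=2)
    freqs = np.fft.rfftfreq(n, d=dt)
    return freqs[stat > threshold]        # frequencies (Hz) driving the alarm

# Innovation contaminated by a poorly damped 1.2 Hz mode (synthetic)
rng = np.random.default_rng(2)
t = np.arange(0, 60, 0.1)                 # ~1 minute of data at 10 Hz
innov = rng.normal(0, 1, t.size) + 0.8 * np.sin(2 * np.pi * 1.2 * t)
print("alarm at frequencies (Hz):", innovation_alarm(innov))
```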

Relevance: 10.00%

Abstract:

The performance of an adaptive filter may be studied through the behaviour of the optimal and adaptive coefficients in a given environment. This thesis investigates the performance of finite impulse response adaptive lattice filters for two classes of input signals: (a) frequency modulated signals with polynomial phases of order p in complex Gaussian white noise (as nonstationary signals), and (b) impulsive autoregressive processes with alpha-stable distributions (as non-Gaussian signals). Initially, an overview is given of linear prediction and adaptive filtering. The convergence and tracking properties of the stochastic gradient algorithms are discussed for stationary and nonstationary input signals. It is explained that the stochastic gradient lattice algorithm has many advantages over the least-mean square algorithm. These advantages include a modular structure, easily guaranteed stability, less sensitivity to the eigenvalue spread of the input autocorrelation matrix, and easy quantization of the filter coefficients (normally called reflection coefficients). We then characterize the performance of the stochastic gradient lattice algorithm for frequency modulated signals through the optimal and adaptive lattice reflection coefficients. This is a difficult task due to the nonlinear dependence of the adaptive reflection coefficients on the preceding stages and the input signal. To ease the derivations, we assume that the reflection coefficients of each stage are independent of the inputs to that stage. Then the optimal lattice filter is derived for frequency modulated signals. This is performed by computing the optimal values of residual errors, reflection coefficients, and recovery errors. Next, we show the tracking behaviour of adaptive reflection coefficients for frequency modulated signals. This is carried out by computing the average tracking model of these coefficients for the stochastic gradient lattice algorithm. The second-order convergence of the adaptive coefficients is investigated by modeling the theoretical asymptotic variance of the gradient noise at each stage. The accuracy of the analytical results is verified by computer simulations. Using the previous analytical results, we show a new property, the polynomial order reducing property of adaptive lattice filters. This property may be used to reduce the order of the polynomial phase of input frequency modulated signals. Considering two examples, we show how this property may be used in processing frequency modulated signals. In the first example, a detection procedure is carried out on a frequency modulated signal with a second-order polynomial phase in complex Gaussian white noise. We show that, using this technique, a better probability of detection is obtained for the reduced-order phase signals compared to that of the traditional energy detector. Also, it is empirically shown that the distribution of the gradient noise in the first adaptive reflection coefficients approximates the Gaussian law. In the second example, the instantaneous frequency of the same observed signal is estimated. We show that by using this technique a lower mean square error is achieved for the estimated frequencies at high signal-to-noise ratios in comparison to that of the adaptive line enhancer. The performance of adaptive lattice filters is then investigated for the second type of input signals, i.e., impulsive autoregressive processes with alpha-stable distributions.
The concept of alpha-stable distributions is first introduced. We discuss how the stochastic gradient algorithm, which performs well for finite variance input signals (like frequency modulated signals in noise), does not converge quickly for infinite variance stable processes (due to its use of the minimum mean-square error criterion). To deal with such problems, the concepts of the minimum dispersion criterion and fractional lower order moments, and recently developed algorithms for stable processes, are introduced. We then study the possibility of using the lattice structure for impulsive stable processes. Accordingly, two new algorithms, the least-mean P-norm lattice algorithm and its normalized version, are proposed for lattice filters based on the fractional lower order moments. Simulation results show that using the proposed algorithms, faster convergence speeds are achieved for parameter estimation of autoregressive stable processes with low to moderate degrees of impulsiveness in comparison to many other algorithms. Also, we discuss the effect of the impulsiveness of stable processes in generating misalignment between the estimated parameters and the true values. Due to the infinite variance of stable processes, the performance of the proposed algorithms is only investigated using extensive computer simulations.
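A minimal single-stage sketch of the stochastic gradient (gradient adaptive) lattice update discussed above, showing the reflection coefficient tracking its optimum for a simple AR(1) input; the step size and sign conventions are one common choice, not necessarily the thesis's:

```python
import numpy as np

def gal_single_stage(x, mu=0.01):
    """One-stage gradient adaptive lattice (GAL) predictor. Per sample:
    f_1(n) = f_0(n) + k * b_0(n-1)
    b_1(n) = b_0(n-1) + k * f_0(n)
    k <- k - mu * (f_1(n) * b_0(n-1) + b_1(n) * f_0(n))
    where f_0(n) = b_0(n) = x(n)."""
    k, b_prev = 0.0, 0.0
    k_track = np.empty(x.size)
    for n, f0 in enumerate(x):
        b0_delayed = b_prev              # b_0(n-1)
        f1 = f0 + k * b0_delayed
        b1 = b0_delayed + k * f0
        k -= mu * (f1 * b0_delayed + b1 * f0)
        k = float(np.clip(k, -1.0, 1.0)) # keep the lattice stable
        b_prev = f0                      # b_0(n) for the next step
        k_track[n] = k
    return k_track

# AR(1) test signal x(n) = 0.8 x(n-1) + w(n); under this sign convention
# the optimal first reflection coefficient is approximately -0.8
rng = np.random.default_rng(3)
w = rng.normal(0, 1, 5000)
x = np.zeros_like(w)
for n in range(1, x.size):
    x[n] = 0.8 * x[n - 1] + w[n]
print(f"final reflection coefficient: {gal_single_stage(x)[-1]:.3f}")
```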

Relevance: 10.00%

Abstract:

OneSteel Australian Tube Mills has recently developed a new hollow flange channel cold-formed section, known as the LiteSteel Beam (LSB). The innovative LSB sections have the beneficial characteristics of torsionally rigid closed rectangular flanges combined with economical fabrication processes from a single strip of high strength steel. They combine the stability of hot-rolled steel sections with the high strength-to-weight ratio of conventional cold-formed steel sections. The LSB sections are commonly used as flexural members in residential, industrial and commercial buildings. In order to ensure safe and efficient designs of LSBs, many research studies have been undertaken on the flexural behaviour of LSBs. However, no research has been undertaken on the shear behaviour of LSBs. Therefore, this thesis investigated the ultimate shear strength behaviour of LSBs with and without web openings, including their elastic buckling and post-buckling characteristics, using both experimental and finite element analyses, and developed accurate shear design rules. Currently, the elastic shear buckling coefficients of web panels are determined by conservatively assuming that the web panels are simply supported at the junction between the web and flange elements. Therefore, finite element analyses were first conducted to investigate the elastic shear buckling behaviour of LSBs and to determine the true support condition at the junction between their web and flange elements. An equation for the higher elastic shear buckling coefficient of LSBs was developed and included in the shear capacity equations in the cold-formed steel structures code, AS/NZS 4600. Predicted shear capacities from the modified equations and the available experimental results demonstrated the improvements to the shear capacities of LSBs due to the presence of a higher level of fixity at the LSB flange-to-web juncture. A detailed study into the shear flow distribution of LSBs was also undertaken prior to the elastic buckling analysis study. The experimental study of ten LSB sections included 42 shear tests of LSBs with aspect ratios of 1.0 and 1.5 that were loaded at midspan until failure. Both single and back-to-back LSB arrangements were used. Test specimens were chosen such that all three types of shear failure (shear yielding, inelastic and elastic shear buckling) occurred in the tests. Experimental results showed that the current cold-formed steel design rules are very conservative for the shear design of LSBs. Significant improvements to web shear buckling occurred due to the presence of rectangular hollow flanges, while considerable post-buckling strength was also observed. Experimental results were presented and compared with corresponding predictions from the current design rules. Appropriate improvements have been proposed for the shear strength of LSBs based on AISI (2007) design equations and test results. Suitable design rules were also developed under the direct strength method (DSM) format. This thesis also includes the shear test results of cold-formed lipped channel beams from LaBoube and Yu (1978a), and the new design rules developed based on them using the same approach used with LSBs. Finite element models of LSBs in shear were also developed to investigate the ultimate shear strength behaviour of LSBs including their elastic and post-buckling characteristics. They were validated by comparing their results with experimental test results.
Details of the finite element models of LSBs, the nonlinear analysis results and their comparisons with experimental results are presented in this thesis. Finite element analysis results showed that the current cold-formed steel design rules are very conservative for the shear design of LSBs. They also confirmed other experimental findings relating to the elastic and post-buckling shear strength of LSBs. A detailed parametric study based on the validated experimental finite element model was undertaken to develop an extensive shear strength database, which was then used to confirm the accuracy of the new shear strength equations proposed in this thesis. Experimental and numerical studies were also undertaken to investigate the shear behaviour of LSBs with web openings. Twenty-six shear tests were first undertaken using a three-point loading arrangement. It was found that AS/NZS 4600 and Shan et al.'s (1997) design equations are conservative for the shear design of LSBs with web openings, while McMahon et al.'s (2008) design equation is unconservative. Experimental finite element models of LSBs with web openings were then developed and validated by comparing their results with experimental test results. The developed nonlinear finite element model was found to predict the shear capacity of LSBs with web openings with very good accuracy. Improved design equations have been proposed for the shear capacity of LSBs with web openings based on both experimental and FEA parametric study results. This thesis presents the details of the experimental and numerical studies of the shear behaviour and strength of LSBs with and without web openings, together with their results, including the accurate design rules developed.
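The DSM-format shear rules referred to above are commonly expressed through the shear slenderness lambda_v = sqrt(Vy/Vcr). A minimal sketch of the standard three-branch form without tension field action; the thesis's improved LSB equations are not reproduced, and the capacities below are hypothetical:

```python
import math

def dsm_shear_capacity(Vy, Vcr):
    """Direct strength method (DSM) shear capacity without tension field
    action, in the usual AISI/AS 4600 format:
      lambda_v = sqrt(Vy / Vcr)
      Vv = Vy                      for lambda_v <= 0.815
      Vv = 0.815 * sqrt(Vcr * Vy)  for 0.815 < lambda_v <= 1.227
      Vv = Vcr                     for lambda_v >  1.227"""
    lam = math.sqrt(Vy / Vcr)
    if lam <= 0.815:
        return Vy                     # shear yielding
    if lam <= 1.227:
        return 0.815 * math.sqrt(Vcr * Vy)   # inelastic buckling
    return Vcr                        # elastic buckling

# Hypothetical web panel: Vy = 120 kN, Vcr = 100 kN -> inelastic branch
print(f"Vv = {dsm_shear_capacity(120.0, 100.0):.1f} kN")
```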

Relevance: 10.00%

Abstract:

The collective purpose of these two studies was to determine a link between the VO2 slow component and the muscle activation patterns that occur during cycling. Six male subjects performed an incremental cycle ergometer exercise test to determine a sub-TVENT (i.e. 80% of TVENT) and supra-TVENT (TVENT + 0.75*(VO2 max - TVENT)) work load. These two constant work loads were subsequently performed on either three or four occasions for 8 min each, with VO2 captured on a breath-by-breath basis for every test, and EMG of eight major leg muscles collected on one occasion. EMG was collected for the first 10 s of every 30 s period, except for the very first 10 s period. The VO2 data were interpolated, time aligned, averaged and smoothed for both intensities. Three models were then fitted to the VO2 data to determine the kinetics responses. One of these models was mono-exponential, while the other two were bi-exponential. A second time delay parameter was the only difference between the two bi-exponential models. An F-test was used to determine significance between the bi-exponential models using the residual sum of squares term for each model. EMG was integrated to obtain one value for each 10 s period, per muscle. The EMG data were analysed by a two-way repeated measures ANOVA. A correlation was also used to determine significance between VO2 and IEMG. The VO2 data during the sub-TVENT intensity were best described by a mono-exponential response. In contrast, during supra-TVENT exercise the two bi-exponential models best described the VO2 data. The resultant F-test revealed no significant difference between the two models and therefore demonstrated that the slow component was not delayed relative to the onset of the primary component. Furthermore, only two parameters were deemed to be significantly different based upon the two models. This is in contrast to other findings. The EMG data, for most muscles, appeared to follow the same pattern as VO2 during both intensities of exercise. On most occasions, the correlation coefficient demonstrated significance. Although some muscles demonstrated the same relative increase in IEMG based upon increases in intensity and duration, it cannot be assumed that these muscles increase their contribution to VO2 in a similar fashion. Larger muscles with a higher percentage of type II muscle fibres would have a larger increase in VO2 over the same increase in intensity.
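The model comparison described above (mono- versus bi-exponential fits judged by an F-test on residual sums of squares) can be sketched as follows; the response functions, starting values and synthetic data are illustrative assumptions, not the study's breath-by-breath data:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import f as f_dist

def mono_exp(t, A, tau, td, base):
    """Primary VO2 component: single delayed exponential rise."""
    return base + A * (1 - np.exp(-(t - td) / tau)) * (t >= td)

def bi_exp(t, A1, tau1, td1, A2, tau2, td2, base):
    """Primary plus slow component, each with its own time delay."""
    y = base + A1 * (1 - np.exp(-(t - td1) / tau1)) * (t >= td1)
    return y + A2 * (1 - np.exp(-(t - td2) / tau2)) * (t >= td2)

def f_test(rss_simple, p_simple, rss_full, p_full, n):
    """F-test on residual sums of squares of nested models."""
    F = ((rss_simple - rss_full) / (p_full - p_simple)) / (rss_full / (n - p_full))
    return F, 1 - f_dist.cdf(F, p_full - p_simple, n - p_full)

# Synthetic supra-threshold response with a genuine slow component
rng = np.random.default_rng(4)
t = np.arange(0, 480, 5.0)
y = bi_exp(t, 2.0, 25, 15, 0.5, 120, 90, 1.0) + rng.normal(0, 0.05, t.size)

p1, _ = curve_fit(mono_exp, t, y, p0=[2, 30, 10, 1])
p2, _ = curve_fit(bi_exp, t, y, p0=[2, 30, 10, 0.3, 100, 80, 1])
rss1 = ((y - mono_exp(t, *p1))**2).sum()
rss2 = ((y - bi_exp(t, *p2))**2).sum()
F, p = f_test(rss1, 4, rss2, 7, t.size)
print(f"F = {F:.1f}, p = {p:.2g}")   # small p favours the bi-exponential model
```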

Relevance: 10.00%

Abstract:

This work investigates the computer modelling of the photochemical formation of smog products such as ozone and aerosol, in a system containing toluene, NOx and water vapour. In particular, the problem of modelling this process in the Commonwealth Scientific and Industrial Research Organization (CSIRO) smog chambers, which utilize outdoor exposure, is addressed. The primary requirement for such modelling is a knowledge of the photolytic rate coefficients. Photolytic rate coefficients of species other than NO2 are often related to J_NO2 (the rate coefficient for the photolysis of NO2) by a simple factor, but for outdoor chambers, this method is prone to error as the diurnal profiles may not be similar in shape. Three methods for the calculation of diurnal J_NO2 are investigated. The most suitable method for incorporation into a general model is found to be one which determines the photolytic rate coefficients for NO2, as well as several other species, from actinic flux, absorption cross sections and quantum yields. A computer model was developed, based on this method, to calculate in-chamber photolysis rate coefficients for the CSIRO smog chambers, in which ex-chamber rate coefficients are adjusted by accounting for variation in light intensity by transmittance through the Teflon walls, albedo from the chamber floor and radiation attenuation due to clouds. The photochemical formation of secondary aerosol is investigated in a series of toluene-NOx experiments, which were performed in the CSIRO smog chambers. Three stages of aerosol formation, in plots of total particulate volume versus time, are identified: a delay period in which no significant mass of aerosol is formed, a regime of rapid aerosol formation (regime 1) and a second regime of slowed aerosol formation (regime 2). Two models are presented which were developed from the experimental data. One model is empirically based on observations of discrete stages of aerosol formation and readily allows aerosol growth profiles to be calculated. The second model is based on an adaptation of published toluene photooxidation mechanisms and provides some chemical information about the oxidation products. Both models compare favourably against the experimental data. The gross effects of precursor concentrations (toluene, NOx and H2O) and ambient conditions (temperature, photolysis rate) on the formation of secondary aerosol are also investigated, primarily using the mechanism model. An increase in [NOx]0 results in an increased delay time, rate of aerosol formation in regime 1 and volume of aerosol formed in regime 1. This is due to increased formation of dinitrocresol and furanone products. An increase in toluene results in a decrease in the delay time and an increase in the rate of aerosol formation in regime 1, due to enhanced reactivity from the toluene products, such as the radicals from the photolysis of benzaldehyde. Water vapour has very little effect on the formation of aerosol volume, except that rates are slightly increased due to more OH radicals from reaction with O(1D) from ozone photolysis. Increased temperature results in an increased volume of aerosol formed in regime 1 (increased dinitrocresol formation), while an increased photolysis rate results in an increased rate of aerosol formation in regime 1. Both the rate and volume of aerosol formed in regime 2 are increased by increased temperature or photolysis rate.

Both models indicate that the yield of secondary particulates from hydrocarbons (mass concentration of aerosol formed/mass concentration of hydrocarbon precursor) is proportional to the ratio [NOx]0/[hydrocarbon]0.
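The actinic flux/cross section/quantum yield method described above amounts to evaluating J = integral of F(lambda) * sigma(lambda) * phi(lambda) over wavelength. A minimal numerical sketch with made-up spectral data; the real calculation uses measured spectra plus the chamber transmittance, albedo and cloud corrections described above:

```python
import numpy as np

def photolysis_rate(wavelengths_nm, actinic_flux, cross_section, quantum_yield):
    """J = integral of F(lambda) * sigma(lambda) * phi(lambda) d(lambda),
    with F in photons cm^-2 s^-1 nm^-1, sigma in cm^2 and phi dimensionless,
    giving J in s^-1."""
    return np.trapz(actinic_flux * cross_section * quantum_yield, wavelengths_nm)

# Illustrative (made-up) spectral data over the NO2 photolysis window
wl = np.linspace(300, 420, 25)                 # wavelength (nm)
F = 1e14 * np.exp(-((wl - 380) / 60.0)**2)     # actinic flux (assumed shape)
sigma = 4e-19 * np.ones_like(wl)               # absorption cross section (cm^2)
phi = np.where(wl < 398, 1.0, 0.0)             # crude quantum yield cut-off
print(f"J_NO2 ~ {photolysis_rate(wl, F, sigma, phi):.2e} s^-1")
```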

Relevance: 10.00%

Abstract:

Hydrogel polymers are used for the manufacture of soft (or disposable) contact lenses worldwide today, but have a tendency to dehydrate on the eye. In vitro methods that can probe the potential for a given hydrogel polymer to dehydrate in vivo are much sought after. Nuclear magnetic resonance (NMR) has been shown to be effective in characterising water mobility and binding in similar systems (Barbieri, Quaglia et al., 1998; Larsen, Huff et al., 1990; Peschier, Bouwstra et al., 1993), predominantly through measurement of the spin-lattice relaxation time (T1), the spin-spin relaxation time (T2) and the water diffusion coefficient (D). The aim of this work was to use NMR to quantify the molecular behaviour of water in a series of commercially available contact lens hydrogels, and relate these measurements to the binding and mobility of the water, and ultimately the potential for the hydrogel to dehydrate. As a preliminary study, in vitro evaporation rates were measured for a set of commercial contact lens hydrogels. Following this, comprehensive measurement of the temperature and water content dependencies of T1, T2 and D was performed for a series of commercial hydrogels that spanned the spectrum of equilibrium water content (EWC) and common compositions of contact lenses that are manufactured today. To quantify material differences, the data were then modelled based on theory that had been used for similar systems in the literature (Walker, Balmer et al., 1989; Hills, Takacs et al., 1989). The differences were related to differences in water binding and mobility. The evaporative results suggested that the EWC of the material was important in determining a material's potential to dehydrate in this way. Similarly, the NMR water self-diffusion coefficient was also found to be largely (if not wholly) determined by the water content (WC). A specific binding model confirmed that the WC was the dominant factor in determining the diffusive behaviour, but also suggested that subtle differences existed between the materials used, based on their equilibrium WC (EWC). However, an alternative modified free volume model suggested that only the current water content of the material was important in determining the diffusive behaviour, and not the equilibrium water content. It was shown that T2 relaxation was dominated by chemical exchange between water and exchangeable polymer protons for materials that contained exchangeable polymer protons. The data were analysed using a proton exchange model, and the results were again reasonably correlated with EWC. Specifically, it was found that the average water mobility increased with increasing EWC, approaching that of free water. The T1 relaxation was also shown to be reasonably well described by the same model. The main conclusion that can be drawn from this work is that the hydrogel EWC is an important parameter, which largely determines the behaviour of water in the gel. A higher EWC results in a hydrogel whose water behaves more like bulk water on average, or is less strongly 'bound' on average, compared with a lower EWC material. Based on the set of materials used, significant differences due to composition (for materials of the same or similar water content) could not be found. Similar studies could be used in the future to highlight hydrogels that deviate significantly from this 'average' behaviour, and may therefore have the least/greatest potential to dehydrate on the eye.
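Water self-diffusion coefficients of the kind reported above are commonly extracted from pulsed-gradient spin-echo attenuation via the Stejskal-Tanner relation S = S0 exp(-b D). A minimal fitting sketch on synthetic data (not the hydrogel measurements):

```python
import numpy as np

def fit_diffusion_coefficient(b_values, signal):
    """Pulsed-gradient spin-echo attenuation S = S0 * exp(-b * D):
    a linear fit of ln(S) against b gives -D as the slope."""
    slope, _ = np.polyfit(b_values, np.log(signal), 1)
    return -slope

# Synthetic attenuation data for D = 1.5e-9 m^2/s; free water at room
# temperature is ~2.3e-9 m^2/s, so this mimics a 'partly bound' gel
rng = np.random.default_rng(5)
b = np.linspace(0, 3e9, 8)                         # b-values (s/m^2)
S = np.exp(-b * 1.5e-9) * (1 + rng.normal(0, 0.01, b.size))
print(f"D = {fit_diffusion_coefficient(b, S):.2e} m^2/s")
```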

Relevance: 10.00%

Abstract:

Radioactive wastes are by-products of the use of radiation technologies. As with many technologies, the wastes are required to be disposed of in a safe manner so as to minimise risk to human health. This study examines the requirements for a hypothetical repository and develops techniques for decision making to permit the establishment of a shallow ground burial facility to receive an inventory of low-level radioactive wastes. Australia’s overall inventory is used as an example. Essential and desirable siting criteria are developed and applied to Australia's Northern Territory, resulting in the selection of three candidate sites for laboratory investigations into soil behaviour. The essential quantifiable factors which govern radionuclide migration and ultimately influence radiation doses following facility closure are reviewed. Simplified batch and column procedures were developed to enable laboratory determination of distribution and retardation coefficient values for use in one-dimensional advection-dispersion transport equations. Batch and column experiments were conducted with Australian soils sampled from the three identified candidate sites using a radionuclide representative of the current national low-level radioactive waste inventory. The experimental results are discussed and site soil performance compared. The experimental results are subsequently used to compare the relative radiation health risks between each of the three sites investigated. A recommendation is made as to the preferred site to construct an engineered near-surface burial facility to receive the Australian low-level radioactive waste inventory.
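The batch distribution coefficient and the corresponding retardation coefficient for the one-dimensional advection-dispersion equation can be sketched as follows; the formulas are the standard ones, but all numbers are hypothetical rather than results from the candidate-site soils:

```python
def batch_kd(c0, c_eq, volume_L, mass_kg):
    """Batch distribution coefficient Kd = (C0 - Ceq)/Ceq * V/m (L/kg):
    the equilibrium ratio of sorbed to dissolved radionuclide."""
    return (c0 - c_eq) / c_eq * volume_L / mass_kg

def retardation_coefficient(kd_L_per_kg, bulk_density_kg_per_L, porosity):
    """R = 1 + (rho_b / theta) * Kd, the factor by which radionuclide
    transport lags the groundwater in 1-D advection-dispersion transport."""
    return 1.0 + bulk_density_kg_per_L / porosity * kd_L_per_kg

# Hypothetical batch test: 100 Bq/L reduced to 20 Bq/L by 0.005 kg of soil
# in 0.030 L of solution; assumed bulk density 1.6 kg/L, porosity 0.35
kd = batch_kd(100.0, 20.0, 0.030, 0.005)
print(f"Kd = {kd:.1f} L/kg")
print(f"R  = {retardation_coefficient(kd, 1.6, 0.35):.0f}")
```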

Relevance: 10.00%

Abstract:

This thesis applies Monte Carlo techniques to the study of X-ray absorptiometric methods of bone mineral measurement. These studies seek to obtain information that can be used in efforts to improve the accuracy of the bone mineral measurements. A Monte Carlo computer code for X-ray photon transport at diagnostic energies has been developed from first principles. This development was undertaken as there was no readily available code which included electron binding energy corrections for incoherent scattering, and one of the objectives of the project was to study the effects of inclusion of these corrections in Monte Carlo models. The code includes the main Monte Carlo program plus utilities for dealing with input data. A number of geometrical subroutines which can be used to construct complex geometries have also been written. The accuracy of the Monte Carlo code has been evaluated against the predictions of theory and the results of experiments. The results show a high correlation with theoretical predictions. In comparisons of model results with those of direct experimental measurements, agreement to within the model and experimental variances is obtained. The code is an accurate and valid modelling tool. A study of the significance of inclusion of electron binding energy corrections for incoherent scatter in the Monte Carlo code has been made. The results show this significance to be very dependent upon the type of application. The most significant effect is a reduction of low angle scatter flux for high atomic number scatterers. To effectively apply the Monte Carlo code to the study of bone mineral density measurement by photon absorptiometry, the results must be considered in the context of a theoretical framework for the extraction of energy dependent information from planar X-ray beams. Such a theoretical framework is developed and the two-dimensional nature of tissue decomposition based on attenuation measurements alone is explained. This theoretical framework forms the basis for analytical models of bone mineral measurement by dual energy X-ray photon absorptiometry techniques. Monte Carlo models of dual energy X-ray absorptiometry (DEXA) have been established. These models have been used to study the contribution of scattered radiation to the measurements. It has been demonstrated that the measurement geometry has a significant effect upon the scatter contribution to the detected signal. For the geometry of the models studied in this work the scatter has no significant effect upon the results of the measurements. The model has also been used to study a proposed technique which involves dual energy X-ray transmission measurements plus a linear measurement of the distance along the ray path. This is designated as the DPA(+) technique. The addition of the linear measurement enables the tissue decomposition to be extended to three components. Bone mineral, fat and lean soft tissue are the components considered here. The results of the model demonstrate that the measurement of bone mineral using this technique is stable over a wide range of soft tissue compositions and hence would indicate the potential to overcome a major problem of the two-component DEXA technique. However, the results also show that the accuracy of the DPA(+) technique is highly dependent upon the composition of the non-mineral components of bone and has poorer precision (approximately twice the coefficient of variation) than the standard DEXA measurements. These factors may limit the usefulness of the technique.
These studies illustrate the value of Monte Carlo computer modelling of quantitative X-ray measurement techniques. The Monte Carlo models of bone densitometry measurement have:
1. demonstrated the significant effects of the measurement geometry upon the contribution of scattered radiation to the measurements,
2. demonstrated that the statistical precision of the proposed DPA(+) three tissue component technique is poorer than that of the standard DEXA two tissue component technique,
3. demonstrated that the proposed DPA(+) technique has difficulty providing accurate simultaneous measurement of body composition in terms of a three component model of fat, lean soft tissue and bone mineral,
4. provided a knowledge base for input to decisions about development (or otherwise) of a physical prototype DPA(+) imaging system.
The Monte Carlo computer code, data, utilities and associated models represent a set of significant, accurate and valid modelling tools for quantitative studies of physical problems in the fields of diagnostic radiology and radiography.
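The two-component decomposition underlying DEXA reduces, for a planar dual-energy measurement, to solving a 2x2 linear system in the component area densities. A minimal sketch with assumed mass attenuation coefficients; these are illustrative values, not those used in the Monte Carlo models:

```python
import numpy as np

def dual_energy_decomposition(log_atten, mass_atten):
    """Solve the 2x2 DEXA system for area densities x_j (g/cm^2):
    ln(I0/I)_E = sum_j (mu/rho)_{E,j} * x_j at two energies E1, E2,
    for components j = bone mineral, soft tissue."""
    return np.linalg.solve(mass_atten, log_atten)

# Assumed mass attenuation coefficients (cm^2/g) at two beam energies;
# rows: low/high energy, columns: bone mineral, soft tissue
mu_rho = np.array([[2.80, 0.25],
                   [0.60, 0.18]])
log_I = np.array([1.05, 0.35])        # measured ln(I0/I) at the two energies
x_bone, x_soft = dual_energy_decomposition(log_I, mu_rho)
print(f"bone mineral: {x_bone:.2f} g/cm^2, soft tissue: {x_soft:.2f} g/cm^2")
```

Adding a third component (fat), as the DPA(+) technique does, makes this 2x2 system underdetermined, which is why the extra path-length measurement is needed to close the system.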

Relevance: 10.00%

Abstract:

In this paper, a static synchronous series compensator (SSSC), along with a fixed capacitor, is used to avoid torsional mode instability in a series compensated transmission system. A 48-step harmonic neutralized inverter is used for the realization of the SSSC. The system under consideration is the IEEE First Benchmark Model for SSR analysis. The system stability is studied both through eigenvalue analysis and EMTDC/PSCAD simulation studies. It is shown that the combination of the SSSC and the fixed capacitor improves the synchronizing power coefficient. The presence of the fixed capacitor ensures increased damping of small signal oscillations. At higher levels of fixed capacitor compensation, a damping controller is required to stabilize the torsional modes of SSR.
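Eigenvalue analysis of the kind mentioned above screens the linearised system's state matrix for poorly or negatively damped modes. A minimal sketch on a toy state matrix; the IEEE First Benchmark Model's actual matrices are not reproduced here:

```python
import numpy as np

def modal_damping(A):
    """Small-signal stability screen: eigenvalues of the state matrix A give
    sigma +/- j*omega per oscillatory mode; the damping ratio is
    zeta = -sigma / sqrt(sigma^2 + omega^2), negative for an unstable mode."""
    eigs = np.linalg.eigvals(A)
    modes = []
    for lam in eigs[np.imag(eigs) > 0]:          # one of each conjugate pair
        sigma, omega = lam.real, lam.imag
        zeta = -sigma / np.hypot(sigma, omega)
        modes.append((omega / (2 * np.pi), zeta))  # (frequency Hz, damping)
    return modes

# Toy two-mode state matrix standing in for a linearised benchmark system:
# block [[s, w], [-w, s]] has eigenvalues s +/- j*w
A = np.array([[-0.5,  120.0, 0.0,    0.0],
              [-120.0, -0.5, 0.0,    0.0],
              [0.0,     0.0, 0.2,  160.0],
              [0.0,     0.0, -160.0, 0.2]])
for f_hz, zeta in modal_damping(A):
    flag = "UNSTABLE" if zeta < 0 else "stable"
    print(f"mode at {f_hz:5.1f} Hz: zeta = {zeta:+.4f} ({flag})")
```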