965 results for Malthusian parameter


Relevance:

10.00%

Publisher:

Abstract:

Accurate estimation of input parameters is essential to ensure the accuracy and reliability of hydrologic and water quality modelling. Calibration is an approach to obtaining accurate input parameters by comparing observed and simulated results. However, the calibration approach is limited as it is only applicable to catchments where monitoring data is available. Therefore, a methodology to estimate appropriate model input parameters is critical, particularly for catchments where monitoring data is not available. In the research study discussed in the paper, pollutant build-up parameters derived from catchment field investigations and from model calibration using MIKE URBAN are compared for three catchments in Southeast Queensland, Australia. Additionally, the sensitivity of MIKE URBAN input parameters was analysed. It was found that Reduction Factor is the most sensitive parameter for peak flow and total runoff volume estimation, whilst Build-up rate is the most sensitive parameter for TSS load estimation. Consequently, these input parameters should be determined accurately in hydrologic and water quality simulations using MIKE URBAN. Furthermore, an empirical equation was derived for Southeast Queensland, Australia, for converting build-up parameters obtained from catchment field investigations into MIKE URBAN input build-up parameters. This provides guidance for accommodating regional variations in the estimation of input parameters for catchment modelling using MIKE URBAN where monitoring data is not available.
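
The abstract does not reproduce the empirical conversion equation or the MIKE URBAN parameter set, so the sketch below is only a generic illustration: an exponential pollutant build-up curve of the kind commonly used in urban stormwater models, together with a simple one-at-a-time sensitivity sweep. All parameter names and values are hypothetical.

```python
import numpy as np

def pollutant_buildup(days, buildup_rate, max_buildup):
    """Generic exponential pollutant build-up curve (e.g. kg/ha).

    A common formulation in urban stormwater models; the actual MIKE URBAN
    parameterisation and the thesis's empirical conversion equation are not
    reproduced here.
    """
    return max_buildup * (1.0 - np.exp(-buildup_rate * days))

def one_at_a_time_sensitivity(base_params, run_model, delta=0.10):
    """Perturb each input parameter by +/-10% and report a normalised local
    sensitivity: relative change in output per relative change in the parameter."""
    base_output = run_model(**base_params)
    sensitivity = {}
    for name, value in base_params.items():
        hi = dict(base_params, **{name: value * (1 + delta)})
        lo = dict(base_params, **{name: value * (1 - delta)})
        sensitivity[name] = (run_model(**hi) - run_model(**lo)) / (2 * delta * base_output)
    return sensitivity

# Hypothetical example: TSS build-up after 7 antecedent dry days.
params = {"buildup_rate": 0.4, "max_buildup": 60.0}   # illustrative values only
print(one_at_a_time_sensitivity(params, lambda **p: pollutant_buildup(7.0, **p)))
```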

Relevance:

10.00%

Publisher:

Abstract:

World economies increasingly demand reliable and economical power supply and distribution. To achieve this aim, the majority of power systems are becoming interconnected, with several power utilities supplying one large network. One problem that occurs in a large interconnected power system is the regular occurrence of system disturbances which can result in the creation of intra-area oscillating modes. These modes can be regarded as the transient responses of the power system to excitation, which are generally characterised as decaying sinusoids. For an ideally operating power system, these transient responses would have a “ring-down” time of 10-15 seconds. Sometimes equipment failures disturb the ideal operation of power systems and oscillating modes with ring-down times greater than 15 seconds arise. The larger settling times associated with such “poorly damped” modes cause substantial power flows between generation nodes, resulting in significant physical stresses on the power distribution system. If these modes are not just poorly damped but “negatively damped”, catastrophic failures of the system can occur. To ensure system stability and security of large power systems, the potentially dangerous oscillating modes generated from disturbances (such as equipment failure) must be quickly identified. The power utility must then apply appropriate damping control strategies. In power system monitoring there exist two facets of critical interest. The first is the estimation of modal parameters for a power system in normal, stable operation. The second is the rapid detection of any substantial changes to this normal, stable operation (because of equipment breakdown, for example). Most work to date has concentrated on the first of these two facets, i.e. on modal parameter estimation. Numerous modal parameter estimation techniques have been proposed and implemented, but all have limitations [1-13]. One of the key limitations of all existing parameter estimation methods is the fact that they require very long data records to provide accurate parameter estimates. This is a particularly significant problem after a sudden detrimental change in damping. One simply cannot afford to wait long enough to collect the large amounts of data required for existing parameter estimators. Motivated by this gap in the current body of knowledge and practice, the research reported in this thesis focuses heavily on rapid detection of changes (i.e. on the second facet mentioned above). This thesis reports on a number of new algorithms which can rapidly flag whether or not there has been a detrimental change to a stable operating system. It will be seen that the new algorithms enable sudden modal changes to be detected within quite short time frames (typically about 1 minute), using data from power systems in normal operation. The new methods reported in this thesis are summarised below. The Energy Based Detector (EBD): The rationale for this method is that the modal disturbance energy is greater for lightly damped modes than it is for heavily damped modes (because the latter decay more rapidly). Sudden changes in modal energy, then, imply sudden changes in modal damping. Because the method relies on data from power systems in normal operation, the modal disturbances are random. Accordingly, the disturbance energy is modelled as a random process (with the parameters of the model being determined from the power system under consideration). A threshold is then set based on the statistical model.
The energy method is very simple to implement and is computationally efficient. It is, however, only able to determine whether or not a sudden modal deterioration has occurred; it cannot identify which mode has deteriorated. For this reason the method is particularly well suited to smaller interconnected power systems that involve only a single mode. Optimal Individual Mode Detector (OIMD): As discussed in the previous paragraph, the energy detector can only determine whether or not a change has occurred; it cannot flag which mode is responsible for the deterioration. The OIMD seeks to address this shortcoming. It uses optimal detection theory to test for sudden changes in individual modes. In practice, one can have an OIMD operating for all modes within a system, so that changes in any of the modes can be detected. Like the energy detector, the OIMD is based on a statistical model and a subsequently derived threshold test. The Kalman Innovation Detector (KID): This detector is an alternative to the OIMD. Unlike the OIMD, however, it does not explicitly monitor individual modes. Rather it relies on a key property of a Kalman filter, namely that the Kalman innovation (the difference between the estimated and observed outputs) is white as long as the Kalman filter model is valid. A Kalman filter model is set to represent a particular power system. If some event in the power system (such as equipment failure) causes a sudden change to the power system, the Kalman model will no longer be valid and the innovation will no longer be white. Furthermore, if there is a detrimental system change, the innovation spectrum will display strong peaks at frequency locations associated with the changes. Hence the innovation spectrum can be monitored both to set off an “alarm” when a change occurs and to identify which modal frequency has given rise to the change. The threshold for alarming is based on the simple Chi-Squared PDF for a normalised white noise spectrum [14, 15]. While the method can identify the mode which has deteriorated, it does not necessarily indicate whether there has been a frequency or damping change. The PPM discussed next can monitor frequency changes and so can provide some discrimination in this regard. The Polynomial Phase Method (PPM): In [16] the cubic phase (CP) function was introduced as a tool for revealing frequency related spectral changes. This thesis extends the cubic phase function to a generalised class of polynomial phase functions which can reveal frequency related spectral changes in power systems. A statistical analysis of the technique is performed. When applied to power system analysis, the PPM can provide knowledge of sudden shifts in frequency through both the new frequency estimate and the polynomial phase coefficient information. This knowledge can then be cross-referenced with other detection methods to provide improved detection benchmarks.
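
As a concrete illustration of the Kalman Innovation Detector idea, the sketch below checks an innovation sequence for whiteness: if the filter model is valid, the normalised periodogram ordinates are approximately chi-squared distributed, so any ordinate exceeding a chi-squared threshold both raises an alarm and points to the offending frequency. This is a generic sketch of the principle only, not the detector developed in the thesis; the sampling rate, threshold level and test signal are all hypothetical.

```python
import numpy as np
from scipy.stats import chi2
from scipy.signal import periodogram

def innovation_alarm(innovations, fs=10.0, alpha=1e-3):
    """Flag spectral peaks in a Kalman innovation sequence.

    If the filter model is valid the innovations are (approximately) white,
    and each normalised periodogram ordinate is roughly chi-squared with
    2 degrees of freedom.  Ordinates exceeding the (1 - alpha) quantile
    indicate a model mismatch at that frequency.
    """
    f, pxx = periodogram(innovations, fs=fs)
    sigma2 = np.var(innovations)
    normalised = pxx * fs / (2.0 * sigma2)          # white noise gives mean ~1
    threshold = chi2.ppf(1.0 - alpha, df=2) / 2.0
    flagged = f[normalised > threshold]
    return flagged, normalised, threshold

# Hypothetical usage: a white sequence plus a weak 0.4 Hz mode appearing mid-record.
rng = np.random.default_rng(0)
n, fs = 600, 10.0
e = rng.standard_normal(n)
t = np.arange(n) / fs
e[n // 2:] += 0.8 * np.sin(2 * np.pi * 0.4 * t[n // 2:])
print(innovation_alarm(e, fs=fs)[0])   # frequencies (Hz) that trip the alarm
```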

Relevance:

10.00%

Publisher:

Abstract:

The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on the computation, transmission, and storage costs. This decomposition structure is based on analysis of the information packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and the human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criterion as well as the sensitivities in human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training and multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all of the source vectors without the need to project these on the lattice outermost shell, while it properly maintains a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only the cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training and multiquantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images.
For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high quality reconstructed images with better compression ratios than other available algorithms. To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
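
The abstract mentions a least squares fit of a generalised Gaussian model to the wavelet coefficient statistics but does not give the estimator itself. The sketch below is a generic nonlinear least-squares fit of the generalised Gaussian density to a coefficient histogram; the bin count, parameter bounds and synthetic data are illustrative assumptions, not the formulation used in the thesis.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import gamma

def ggd_pdf(x, alpha, beta):
    """Generalised Gaussian density with scale alpha and shape beta
    (beta = 2 gives a Gaussian, beta = 1 a Laplacian)."""
    return beta / (2.0 * alpha * gamma(1.0 / beta)) * np.exp(-(np.abs(x) / alpha) ** beta)

def fit_ggd(coeffs, bins=101):
    """Least-squares fit of the GGD to a histogram of subband coefficients."""
    density, edges = np.histogram(coeffs, bins=bins, density=True)
    centres = 0.5 * (edges[:-1] + edges[1:])
    p0 = (np.std(coeffs), 1.0)                      # initial guess: scale, shape
    (alpha, beta), _ = curve_fit(ggd_pdf, centres, density, p0=p0,
                                 bounds=([1e-6, 0.1], [np.inf, 10.0]))
    return alpha, beta

# Hypothetical usage with Laplacian-like synthetic "wavelet coefficients".
rng = np.random.default_rng(1)
coeffs = rng.laplace(scale=2.0, size=20000)
print(fit_ggd(coeffs))    # shape estimate should come out near 1
```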

Relevance:

10.00%

Publisher:

Abstract:

Despite recent developments in fixed-film combined biological nutrient removal (BNR) technology, fixed-film systems (i.e., biofilters) are still at the early stages of development and their application has been limited to a few laboratory-scale experiments. Achieving enhanced biological phosphorus removal in fixed-film systems requires exposing the micro-organisms and the waste stream to alternating anaerobic/aerobic or anaerobic/anoxic conditions in cycles. The concept of cycle duration (CD) as a process control parameter is unique to fixed-film BNR systems, has not been previously investigated, and can be used to optimise the performance of such systems. The CD refers to the elapsed time before the biomass is re-exposed to the same environmental conditions in cycles. Fixed-film systems offer many advantages over suspended growth systems, such as reduced operating costs, simplicity of operation, absence of sludge recycling problems, and compactness. The control of nutrient discharges to water bodies improves water quality and fish production, and allows water reuse. The main objective of this study was to develop a fundamental understanding of the effect of CD on the transformations of nutrients in fixed-film biofilter systems subjected to alternating aeration/no-aeration cycles. A fixed-film biofilter system consisting of three up-flow biofilters connected in series was developed and tested. The first and third biofilters were operated in a cyclic mode in which the biomass was subjected to aeration/no-aeration cycles. The influent was simulated aquaculture wastewater whose composition was based on actual water quality parameters of aquaculture wastewater from a prawn grow-out facility. The influent contained 8.5-9.3 mg/L ammonia-N, 8.5-8.7 mg/L phosphate-P, and 45-50 mg/L acetate. Two independent studies were conducted at two biofiltration rates to evaluate and confirm the effect of CD on nutrient transformations in the biofilter system for application in aquaculture. A third study was conducted to enhance denitrification in the system using an external carbon source at a rate varying from 0 to 24 mL/min. The CD was varied in the range of 0.25-120 hours for the first two studies and fixed at 12 hours for the third study. This study identified the CD as an important process control parameter that can be used to optimise the performance of full-scale fixed-film systems for BNR, which represents a novel contribution in this field of research. The CD resulted in environmental conditions that inhibited or enhanced nutrient transformations. The effect of CD on BNR in fixed-film systems in terms of phosphorus biomass saturation and depletion has been established. Short CDs did not permit the establishment of anaerobic activity in the un-aerated biofilter and thus inhibited phosphorus release. Long CDs resulted in extended anaerobic activity and thus in active phosphorus release. Long CDs, however, resulted in depleting the biomass phosphorus reservoir in the releasing biofilter and saturating the biomass phosphorus reservoir in the up-taking biofilter in the cycle. This phosphorus biomass saturation/depletion phenomenon imposes a practical limit on how short or long the CD can be. The CD should end just before saturation or depletion occurs; for the system and biofiltration rates tested, the optimal CD was 12 hours.
The system achieved limited net phosphorus removal due to the limited sludge wasting and the lack of an external carbon supply during phosphorus uptake. The phosphorus saturation and depletion reflected the need to extract phosphorus from the phosphorus-rich micro-organisms, for example through back-washing. The major challenges of achieving phosphorus removal in the system included: (1) overcoming the deterioration in the performance of the system during the transition period following the start of each new cycle; and (2) wasting excess phosphorus-saturated biomass following the aeration cycle. Denitrification occurred in poorly aerated sections of the third biofilter and generally declined as the CD increased and as time progressed within each individual cycle. Denitrification and phosphorus uptake were supported by an internal organic carbon source, and the addition of an external carbon source (acetate) to the third biofilter improved the denitrification efficiency of the system from 18.4% without supplemental carbon to 88.7% when the carbon dose reached 24 mL/min. The removal of TOC and nitrification improved as the CD increased, as a result of the reduction in the frequency of transition periods between the cycles. A conceptual design of an effective fixed-film BNR biofilter system for the treatment of the influent simulated aquaculture wastewater was proposed based on the findings of the study.
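
To make the cycle duration concept concrete, the sketch below generates the alternating aeration/no-aeration state for a given CD, assuming two equal half-phases per cycle (an interpretation, since the abstract defines CD only as the elapsed time before the biomass is re-exposed to the same conditions). It is purely illustrative and not the control logic of the experimental rig.

```python
def aeration_state(t_hours, cycle_duration, aerated_first=True):
    """Return True if a biofilter is in its aeration phase at time t (hours).

    Assumes each aeration or no-aeration phase lasts CD / 2, so the biomass
    is re-exposed to the same condition every CD hours.
    """
    half = cycle_duration / 2.0
    in_first_half = (t_hours % cycle_duration) < half
    return in_first_half if aerated_first else not in_first_half

# Example: a 12 h CD (the reported optimum) with biofilters 1 and 3 out of phase.
for t in (0, 3, 6, 9, 12):
    print(t, aeration_state(t, 12, aerated_first=True),
          aeration_state(t, 12, aerated_first=False))
```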

Relevance:

10.00%

Publisher:

Abstract:

Shell structures find use in many fields of engineering, notably the structural, mechanical, aerospace and nuclear-reactor disciplines. Axisymmetric shell structures are used as dome-type roofs, hyperbolic cooling towers, silos for the storage of grain, oil and industrial chemicals, and water tanks. Despite their thin walls, these structures derive strength from their curvature. The generally high strength-to-weight ratio of the shell form, combined with its inherent stiffness, has formed the basis of this vast application. With advances in computation technology, the finite element method and optimisation techniques, structural engineers have extremely versatile tools for the optimum design of such structures. Optimisation of shell structures can result not only in improved designs, but also in a large saving of material. The finite element method, being a general numerical procedure that can treat any shell problem to any desired degree of accuracy, requires several runs in order to obtain a complete picture of the effect of one parameter on the shell structure. This redesign/re-analysis cycle has been achieved via structural optimisation in the present research, and MSC/NASTRAN (a commercially available finite element code) has been used in this context for volume optimisation of axisymmetric shell structures under axisymmetric and non-axisymmetric loading conditions. The parametric study of different axisymmetric shell structures has revealed that the hyperbolic shape is the most economical solution for shells of revolution. To establish this, axisymmetric loading (self-weight and hydrostatic pressure) and non-axisymmetric loading (wind pressure and earthquake dynamic forces) have been modelled in a graphical pre- and post-processor (PATRAN), analyses have been performed with two finite element codes (ABAQUS and NASTRAN), numerical model verification studies have been performed, and the optimum material volume required in the walls of cylindrical, conical, parabolic and hyperbolic forms of axisymmetric shell structures has been evaluated and reviewed. Free vibration and transient earthquake analyses of hyperbolic shells have been performed once it was established that the hyperbolic shape is the most economical under all possible loading conditions. The effects of important parameters of hyperbolic shell structures (shell wall thickness, height and curvature) have been evaluated, and empirical relationships have been developed to estimate an approximate value of the lowest (first) natural frequency of vibration. The outcome of this thesis has been the generation of new research information on the performance characteristics of axisymmetric shell structures that will facilitate improved designs of shells with a better choice of shapes and enhanced levels of economy and performance. Key words: Axisymmetric shell structures, Finite element analysis, Volume optimisation, Free vibration, Transient response.
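
The thesis compares the material volume of cylindrical, conical, parabolic and hyperbolic shells through FE-based optimisation, which cannot be reproduced from the abstract. The sketch below only illustrates the underlying geometric quantity, the wall volume of a thin shell of revolution (mid-surface area times thickness), for a few candidate shapes of equal height and base radius; the radius functions and dimensions are hypothetical and no loading or FE analysis is involved.

```python
import numpy as np
from scipy.integrate import quad

def wall_volume(radius_fn, height, thickness):
    """Approximate wall volume of a thin axisymmetric shell as
    (area of the mid-surface of revolution) x (wall thickness).

    Surface area of revolution: 2*pi * integral of r(z) * sqrt(1 + r'(z)^2) dz.
    """
    dr = lambda z, h=1e-6: (radius_fn(z + h) - radius_fn(z - h)) / (2 * h)
    integrand = lambda z: 2 * np.pi * radius_fn(z) * np.sqrt(1 + dr(z) ** 2)
    area, _ = quad(integrand, 0.0, height)
    return area * thickness

# Hypothetical shapes sharing the same height, base radius, top radius and thickness.
H, R_base, R_top, t = 100.0, 40.0, 25.0, 0.3          # metres, illustrative only
shapes = {
    "cylindrical": lambda z: R_base,
    "conical":     lambda z: R_base + (R_top - R_base) * z / H,
    "hyperbolic":  lambda z: np.sqrt(R_top**2 + (R_base**2 - R_top**2) * (1 - z / H) ** 2),
}
for name, r in shapes.items():
    print(f"{name:12s} wall volume ~ {wall_volume(r, H, t):9.1f} m^3")
```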

Relevance:

10.00%

Publisher:

Abstract:

The LiteSteel Beam (LSB) is a new hollow flange channel section developed by OneSteel Australian Tube Mills using a patented Dual Electric Resistance Welding technique. The LSB has a unique geometry consisting of torsionally rigid rectangular hollow flanges and a relatively slender web. It is commonly used as rafters, floor joists, bearers and roof beams in residential, industrial and commercial buildings. It is on average 40% lighter than traditional hot-rolled steel beams of equivalent performance. LSB flexural members are subject to a relatively new lateral distortional buckling mode, which reduces the member moment capacity. Unlike the commonly observed lateral torsional buckling of steel beams, lateral distortional buckling of LSBs is characterised by simultaneous lateral deflection, twist and web distortion. Current member moment capacity design rules for lateral distortional buckling in AS/NZS 4600 (SA, 2005) do not include the effect of the section geometry of hollow flange beams although its effect is considered to be important. Therefore detailed experimental and finite element analyses (FEA) were carried out to investigate the lateral distortional buckling behaviour of LSBs including the effect of section geometry. The results showed that the current design rules in AS/NZS 4600 (SA, 2005) are over-conservative in the inelastic lateral buckling region. New improved design rules were therefore developed for LSBs based on both FEA and experimental results. A geometrical parameter (K), defined as the ratio of the flange torsional rigidity to the major axis flexural rigidity of the web (GJf/EIx,web), was identified as the critical parameter affecting the lateral distortional buckling of hollow flange beams. The effect of section geometry was then included in the new design rules using this parameter (K). The new design rule developed by including this parameter was found to be accurate in calculating the member moment capacities of not only LSBs, but also other types of hollow flange steel beams such as Hollow Flange Beams (HFBs), Monosymmetric Hollow Flange Beams (MHFBs) and Rectangular Hollow Flange Beams (RHFBs). The inelastic reserve bending capacity of LSBs had not been investigated previously, although past section moment capacity tests revealed that inelastic reserve bending capacity is present in LSBs. However, the Australian and American cold-formed steel design codes limit the section moment capacity to the first yield moment. Therefore both experimental studies and FEA were carried out to investigate the section moment capacity behaviour of LSBs. A comparison of the section moment capacity results from FEA, experiments and current cold-formed steel design codes showed that compact and non-compact LSB sections classified based on AS 4100 (SA, 1998) have some inelastic reserve capacity, while slender LSBs do not have any inelastic reserve capacity beyond their first yield moment. It was found that Shifferaw and Schafer’s (2008) proposed equations and the Eurocode 3 Part 1.3 (ECS, 2006) design equations can be used to include the inelastic bending capacities of compact and non-compact LSBs in design. As a simple design approach, the section moment capacity of compact LSB sections can be taken as 1.10 times their first yield moment, while it is the first yield moment for non-compact sections. For slender LSB sections, current cold-formed steel codes can be used to predict their section moment capacities.
It was believed that the use of transverse web stiffeners could improve the lateral distortional buckling moment capacities of LSBs. However, currently there are no design equations to predict the elastic lateral distortional buckling and member moment capacities of LSBs with web stiffeners under uniform moment conditions. Therefore, a detailed study was conducted using FEA to simulate both experimental and ideal conditions of LSB flexural members. It was shown that the use of 3 to 5 mm steel plate stiffeners welded or screwed to the inner faces of the top and bottom flanges of LSBs at third span points and supports provided an optimum web stiffener arrangement. Suitable design rules were developed to calculate the improved elastic buckling and ultimate moment capacities of LSBs with these optimum web stiffeners. A design rule using the geometrical parameter K was also developed to improve the accuracy of ultimate moment capacity predictions. This thesis presents the details and results of the experimental and numerical studies of the section and member moment capacities of LSBs conducted in this research. It includes the recommendations made regarding the accuracy of current design rules as well as the new design rules for lateral distortional buckling. The new design rules include the effects of section geometry of hollow flange steel beams. This thesis also developed a method of using web stiffeners to reduce the lateral distortional buckling effects, and associated design rules to calculate the improved moment capacities.
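
As a small worked illustration of two quantities quoted in the abstract, the sketch below evaluates the geometrical parameter K = GJf/(EIx,web) and the simple design approach for section moment capacity (1.10 times the first yield moment for compact sections, the first yield moment for non-compact sections). The section property values are hypothetical placeholders, not actual LSB properties.

```python
def geometric_parameter_K(G, J_flange, E, I_x_web):
    """K = flange torsional rigidity / major-axis flexural rigidity of the web,
    the parameter identified in the thesis as governing lateral distortional
    buckling of hollow flange beams."""
    return (G * J_flange) / (E * I_x_web)

def section_moment_capacity(first_yield_moment, section_class):
    """Simple design approach quoted in the abstract: 1.10 * My for compact
    sections, My for non-compact sections.  Slender sections are referred to
    the cold-formed steel code rules and are not handled here."""
    if section_class == "compact":
        return 1.10 * first_yield_moment
    if section_class == "non-compact":
        return first_yield_moment
    raise ValueError("use the cold-formed steel code rules for slender sections")

# Hypothetical section properties (not actual LSB values).
E, G = 200e3, 80e3                  # MPa
J_flange, I_x_web = 1.2e5, 6.0e6    # mm^4
print(geometric_parameter_K(G, J_flange, E, I_x_web))
print(section_moment_capacity(28.5e6, "compact"))   # N.mm, illustrative My
```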

Relevance:

10.00%

Publisher:

Abstract:

The concept of radar was developed for the estimation of the distance (range) and velocity of a target from a receiver. The distance measurement is obtained by measuring the time taken for the transmitted signal to propagate to the target and return to the receiver. The target's velocity is determined by measuring the Doppler-induced frequency shift of the returned signal caused by the rate of change of the time-delay from the target. As researchers further developed conventional radar systems it became apparent that additional information was contained in the backscattered signal and that this information could in fact be used to describe the shape of the target itself. This is because a target can be considered to be a collection of individual point scatterers, each of which has its own velocity and time-delay. Delay-Doppler parameter estimation of each of these point scatterers thus corresponds to a mapping of the target's range and cross-range, thus producing an image of the target. Much research has been done in this area since the early radar imaging work of the 1960s. At present there are two main categories into which radar imaging falls. The first of these is related to the case where the backscattered signal is considered to be deterministic. The second is related to the case where the backscattered signal is of a stochastic nature. In both cases the information which describes the target's scattering function is extracted by the use of the ambiguity function, a function which correlates the backscattered signal in time and frequency with the transmitted signal. In practical situations, it is often necessary to have the transmitter and the receiver of the radar system sited at different locations. The problem in these situations is that a reference signal must then be present in order to calculate the ambiguity function. This causes an additional problem in that detailed phase information about the transmitted signal is then required at the receiver. It is this latter problem which has led to the investigation of radar imaging using time-frequency distributions. As will be shown in this thesis, the phase information about the transmitted signal can be extracted from the backscattered signal using time-frequency distributions. The principal aim of this thesis was the development, and subsequent discussion, of the theory of radar imaging using time-frequency distributions. Consideration is first given to the case where the target is diffuse, i.e. where the backscattered signal has temporal stationarity and a spatially white power spectral density. The complementary situation is also investigated, i.e. where the target is no longer diffuse, but some degree of correlation exists between the time-frequency points. Computer simulations are presented to demonstrate the concepts and theories developed in the thesis. For the proposed radar system to be practically realisable, both the time-frequency distributions and the associated algorithms developed must be able to be implemented in a timely manner. For this reason an optical architecture is proposed. This architecture is specifically designed to obtain the required time and frequency resolution when using laser radar imaging. The complex light amplitude distributions produced by this architecture have been computer simulated using an optical compiler.
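
The delay-Doppler mapping described above can be illustrated with a brute-force narrowband cross-ambiguity calculation: correlate the received signal against the transmitted waveform for a grid of trial Doppler shifts and look for the peak. This is only a generic sketch of the ambiguity function idea, not the time-frequency or optical processing developed in the thesis; the chirp, sampling rate and target parameters are hypothetical.

```python
import numpy as np

def cross_ambiguity(tx, rx, fs, doppler_bins=256, max_doppler=500.0):
    """Narrowband cross-ambiguity surface |A(tau, f)| between a transmitted
    and a received signal: remove a trial Doppler shift, then correlate in
    delay.  A brute-force illustration of the delay-Doppler mapping."""
    n = len(tx)
    t = np.arange(n) / fs
    dopplers = np.linspace(-max_doppler, max_doppler, doppler_bins)
    surface = np.empty((doppler_bins, 2 * n - 1), dtype=float)
    for i, fd in enumerate(dopplers):
        shifted = rx * np.exp(-2j * np.pi * fd * t)     # remove a trial Doppler
        surface[i] = np.abs(np.correlate(shifted, tx, mode="full"))
    delays = (np.arange(2 * n - 1) - (n - 1)) / fs
    return delays, dopplers, surface

# Hypothetical point target: delay of 40 samples and a 120 Hz Doppler shift.
fs, n = 10e3, 512
t = np.arange(n) / fs
tx = np.exp(1j * np.pi * 8e4 * t**2)                     # linear FM chirp (~4 kHz sweep)
rx = np.zeros(n, dtype=complex)
rx[40:] = tx[:-40] * np.exp(2j * np.pi * 120.0 * t[40:])
delays, dopplers, surf = cross_ambiguity(tx, rx, fs)
i, j = np.unravel_index(np.argmax(surf), surf.shape)
print(delays[j] * fs, dopplers[i])                       # ~40 samples, ~120 Hz
```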

Relevance:

10.00%

Publisher:

Abstract:

The collective purpose of these two studies was to determine a link between the VO2 slow component and the muscle activation patterns that occur during cycling. Six male subjects performed an incremental cycle ergometer exercise test to determine a sub-TVENT (i.e. 80% of TVENT) and a supra-TVENT (TVENT + 0.75*(VO2 max - TVENT)) work load. These two constant work loads were subsequently performed on either three or four occasions for 8 min each, with VO2 captured on a breath-by-breath basis for every test, and EMG of eight major leg muscles collected on one occasion. EMG was collected for the first 10 s of every 30 s period, except for the very first 10 s period. The VO2 data was interpolated, time aligned, averaged and smoothed for both intensities. Three models were then fitted to the VO2 data to determine the kinetic responses. One of these models was mono-exponential, while the other two were bi-exponential. A second time delay parameter was the only difference between the two bi-exponential models. An F-test was used to determine significance between the bi-exponential models using the residual sum of squares term for each model. EMG was integrated to obtain one value for each 10 s period, per muscle. The EMG data was analysed by a two-way repeated measures ANOVA. A correlation was also used to determine significance between VO2 and IEMG. The VO2 data during the sub-TVENT intensity was best described by a mono-exponential response. In contrast, during supra-TVENT exercise the two bi-exponential models best described the VO2 data. The resultant F-test revealed no significant difference between the two models and therefore demonstrated that the slow component was not delayed relative to the onset of the primary component. Furthermore, only two parameters were deemed to be significantly different based upon the two models. This is in contrast to other findings. The EMG data, for most muscles, appeared to follow the same pattern as VO2 during both intensities of exercise. On most occasions, the correlation coefficient demonstrated significance. Although some muscles demonstrated the same relative increase in IEMG based upon increases in intensity and duration, it cannot be assumed that these muscles increase their contribution to VO2 in a similar fashion. Larger muscles with a higher percentage of type II muscle fibres would have a larger increase in VO2 over the same increase in intensity.
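
The model comparison described here can be sketched generically: fit a mono-exponential and a bi-exponential (primary plus slow component) response to breath-by-breath data and compare them with an extra-sum-of-squares F-test on the residual sums of squares. The functional forms, parameter names and synthetic data below are illustrative assumptions, not the exact models fitted in the study.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import f as f_dist

def mono_exp(t, base, amp, tau, td):
    """Single-exponential VO2 response with one time delay."""
    y = np.full_like(t, base, dtype=float)
    on = t > td
    y[on] += amp * (1 - np.exp(-(t[on] - td) / tau))
    return y

def bi_exp(t, base, a1, tau1, td1, a2, tau2, td2):
    """Primary plus slow component, each with its own amplitude,
    time constant and time delay."""
    return mono_exp(t, base, a1, tau1, td1) + mono_exp(t, 0.0, a2, tau2, td2)

def extra_sum_of_squares_f_test(rss_small, p_small, rss_big, p_big, n):
    """F-test comparing nested models via their residual sums of squares."""
    F = ((rss_small - rss_big) / (p_big - p_small)) / (rss_big / (n - p_big))
    return F, 1 - f_dist.cdf(F, p_big - p_small, n - p_big)

# Hypothetical breath-by-breath data: supra-threshold response with a slow component.
rng = np.random.default_rng(2)
t = np.arange(0, 480, 5.0)                               # 8 min, 5 s bins
truth = bi_exp(t, 0.9, 1.6, 25.0, 15.0, 0.35, 120.0, 90.0)
y = truth + rng.normal(0, 0.05, t.size)

p_mono, _ = curve_fit(mono_exp, t, y, p0=[0.9, 1.8, 30, 10], maxfev=10000)
p_bi, _ = curve_fit(bi_exp, t, y, p0=[0.9, 1.5, 30, 10, 0.3, 100, 60], maxfev=20000)
rss1 = np.sum((y - mono_exp(t, *p_mono)) ** 2)
rss2 = np.sum((y - bi_exp(t, *p_bi)) ** 2)
print(extra_sum_of_squares_f_test(rss1, 4, rss2, 7, t.size))
```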

Relevance:

10.00%

Publisher:

Abstract:

Hydrogel polymers are used for the manufacture of soft (or disposable) contact lenses worldwide today, but have a tendency to dehydrate on the eye. In vitro methods that can probe the potential for a given hydrogel polymer to dehydrate in vivo are much sought after. Nuclear magnetic resonance (NMR) has been shown to be effective in characterising water mobility and binding in similar systems (Barbieri, Quaglia et al., 1998, Larsen, Huff et al., 1990, Peschier, Bouwstra et al., 1993), predominantly through measurement of the spin-lattice relaxation time (T1), the spin-spin relaxation time (T2) and the water diffusion coefficient (D). The aim of this work was to use NMR to quantify the molecular behaviour of water in a series of commercially available contact lens hydrogels, and relate these measurements to the binding and mobility of the water, and ultimately the potential for the hydrogel to dehydrate. As a preliminary study, in vitro evaporation rates were measured for a set of commercial contact lens hydrogels. Following this, comprehensive measurement of the temperature and water content dependencies of T1, T2 and D was performed for a series of commercial hydrogels that spanned the spectrum of equilibrium water content (EWC) and common compositions of contact lenses that are manufactured today. To quantify material differences, the data were then modelled based on theory that had been used for similar systems in the literature (Walker, Balmer et al., 1989, Hills, Takacs et al., 1989). The differences were related to differences in water binding and mobility. The evaporative results suggested that the EWC of the material was important in determining a material's potential to dehydrate in this way. Similarly, the NMR water self-diffusion coefficient was also found to be largely (if not wholly) determined by the water content. A specific binding model confirmed that the water content was the dominant factor in determining the diffusive behaviour, but also suggested that subtle differences existed between the materials used, based on their EWC. However, an alternative modified free volume model suggested that only the current water content of the material was important in determining the diffusive behaviour, and not the equilibrium water content. It was shown that T2 relaxation was dominated by chemical exchange between water and exchangeable polymer protons for materials that contained exchangeable polymer protons. The data was analysed using a proton exchange model, and the results were again reasonably correlated with EWC. Specifically, it was found that the average water mobility increased with increasing EWC, approaching that of free water. The T1 relaxation was also shown to be reasonably well described by the same model. The main conclusion that can be drawn from this work is that the hydrogel EWC is an important parameter, which largely determines the behaviour of water in the gel. A higher EWC results in a hydrogel with water that behaves more like bulk water on average, or is less strongly 'bound' on average, compared with a lower EWC material. Based on the set of materials used, significant differences due to composition (for materials of the same or similar water content) could not be found. Similar studies could be used in the future to highlight hydrogels that deviate significantly from this 'average' behaviour, and may therefore have the least/greatest potential to dehydrate on the eye.
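
The NMR quantities discussed here are typically extracted by simple exponential fits. The sketch below shows generic examples: a mono-exponential T2 decay fitted to echo amplitudes, and a diffusion coefficient fitted to Stejskal-Tanner signal attenuation. The pulse parameters, values and noise levels are hypothetical; the thesis's own relaxation and exchange models are more elaborate.

```python
import numpy as np
from scipy.optimize import curve_fit

def t2_decay(te, s0, t2):
    """Mono-exponential transverse relaxation: S(TE) = S0 * exp(-TE / T2)."""
    return s0 * np.exp(-te / t2)

def stejskal_tanner(b, s0, d):
    """Pulsed-field-gradient signal attenuation: S(b) = S0 * exp(-b * D)."""
    return s0 * np.exp(-b * d)

rng = np.random.default_rng(3)

# Hypothetical CPMG echo amplitudes (T2 ~ 80 ms) with noise.
te = np.linspace(10e-3, 400e-3, 20)                     # s
echoes = t2_decay(te, 1.0, 0.080) + rng.normal(0, 0.01, te.size)
(t2_s0, t2_est), _ = curve_fit(t2_decay, te, echoes, p0=[1.0, 0.05])

# Hypothetical PGSE attenuation (D ~ 1.5e-9 m^2/s, of the order of free water).
b = np.linspace(0, 3e9, 12)                             # s/m^2
signal = stejskal_tanner(b, 1.0, 1.5e-9) + rng.normal(0, 0.01, b.size)
(d_s0, d_est), _ = curve_fit(stejskal_tanner, b, signal, p0=[1.0, 1e-9])

print(f"T2 ~ {t2_est*1e3:.1f} ms, D ~ {d_est:.2e} m^2/s")
```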

Relevance:

10.00%

Publisher:

Abstract:

A review of the main rolling models is conducted to assess their suitability for modelling the foil rolling process. Two such models are Fleck and Johnson's Hertzian model and Fleck, Johnson, Mear and Zhang's Influence Function model. Both of these models are approximated through the use of perturbation methods, which decreases the computation time compared with the full numerical solution. The Hertzian model was approximated using the ratio of the yield stress of the strip to the plane-strain Young's modulus of the rolls as the small perturbation parameter. The Influence Function model approximation takes advantage of the solution of the well-known Aerofoil Integral Equation to gain an insight into how the choice of interior boundary points affects the stability of the numerical solution of the model's equations. These approximations require less computation than their full models and, in the case of the Hertzian approximation, introduce only a small error in the predictions of roll force and roll torque. Hence the Hertzian approximate method is suitable for on-line control. The predictions from the Influence Function approximation underestimate those from the numerical results. The approximation of the pressure in the plastic reduction regions is the main source of this error, and a better approximation there would improve the predictions.
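
The rolling models' governing equations are not given in the abstract; the block below only illustrates, in generic form, what a regular perturbation expansion in the small parameter named above (strip yield stress over plane-strain roll modulus) looks like, and how roll force would then be recovered order by order. The symbols p, F and the integration limits are assumptions for illustration, not the thesis's formulation.

```latex
% Regular perturbation expansion in the small parameter named in the abstract,
% epsilon = sigma_Y / E' (strip yield stress over plane-strain roll modulus).
\[
  \epsilon = \frac{\sigma_Y}{E'}, \qquad
  p(x) = p_0(x) + \epsilon\, p_1(x) + \epsilon^2 p_2(x) + O(\epsilon^3),
\]
\[
  F = \int_{x_1}^{x_2} p(x)\,\mathrm{d}x
    \approx \int_{x_1}^{x_2} p_0(x)\,\mathrm{d}x
    + \epsilon \int_{x_1}^{x_2} p_1(x)\,\mathrm{d}x ,
\]
% so roll force (and, similarly, roll torque) is recovered order by order,
% which is what makes the approximation cheap enough for on-line control.
```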

Relevance:

10.00%

Publisher:

Abstract:

This dissertation is primarily an applied statistical modelling investigation, motivated by a case study comprising real data and real questions. Theoretical questions on the modelling and computation of normalization constants arose from pursuit of these data analytic questions. The essence of the thesis can be described as follows. Consider binary data observed on a two-dimensional lattice. A common problem with such data is the ambiguity of the zeroes recorded. These may represent a zero response given some threshold (presence) or that the threshold has not been triggered (absence). Suppose that the researcher wishes to estimate the effects of covariates on the binary responses, whilst taking into account underlying spatial variation, which is itself of some interest. This situation arises in many contexts, and the dingo, cypress and toad case studies described in the motivation chapter are examples of this. Two main approaches to modelling and inference are investigated in this thesis. The first is frequentist and based on generalized linear models, with spatial variation modelled by using a block structure or by smoothing the residuals spatially. The EM algorithm can be used to obtain point estimates, coupled with bootstrapping or asymptotic MLE estimates for standard errors. The second approach is Bayesian and based on a three- or four-tier hierarchical model, comprising a logistic regression with covariates for the data layer, a binary Markov random field (MRF) for the underlying spatial process, and suitable priors for the parameters in these main models. The three-parameter autologistic model is a particular MRF of interest. Markov chain Monte Carlo (MCMC) methods comprising hybrid Metropolis/Gibbs samplers are suitable for computation in this situation. Model performance can be gauged by MCMC diagnostics. Model choice can be assessed by incorporating another tier in the modelling hierarchy. This requires evaluation of a normalization constant, a notoriously difficult problem. The difficulty with estimating the normalization constant for the MRF can be overcome by using a path integral approach, although this is a highly computationally intensive method. Different methods of estimating ratios of normalization constants (NCs) are investigated, including importance sampling Monte Carlo (ISMC), dependent Monte Carlo based on MCMC simulations (MCMC), and reverse logistic regression (RLR). I develop an idea that is present, though not fully developed, in the literature, and propose the integrated mean canonical statistic (IMCS) method for estimating log NC ratios for binary MRFs. The IMCS method falls within the framework of the newly identified path sampling methods of Gelman & Meng (1998) and outperforms ISMC, MCMC and RLR. It also does not rely on simplifying assumptions, such as ignoring spatio-temporal dependence in the process. A thorough investigation is made of the application of IMCS to the three-parameter autologistic model. This work introduces the background computations required for the full implementation of the four-tier model in Chapter 7. Two different extensions of the three-tier model to a four-tier version are investigated. The first extension incorporates temporal dependence in the underlying spatio-temporal process. The second extension allows the successes and failures in the data layer to depend on time. The MCMC computational method is extended to incorporate the extra layer.
A major contribution of the thesis is the development, for the first time, of a fully Bayesian approach to inference for these hierarchical models.
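
To make the path sampling idea concrete, the sketch below estimates a log normalization constant ratio for a single-parameter autologistic (Ising-type) model on a small lattice: for an exponential family p(x) proportional to exp(theta * t(x)), d log Z / d theta = E_theta[t(x)], so the log NC ratio is the integral of the mean canonical statistic over theta, estimated here with a basic Gibbs sampler and the trapezoidal rule. This is a simplified, single-parameter illustration under stated assumptions, not the thesis's three-parameter IMCS implementation.

```python
import numpy as np

def gibbs_mean_stat(theta, shape=(10, 10), sweeps=200, burn=50, seed=0):
    """Estimate E_theta[t(x)] for p(x) proportional to exp(theta * t(x)),
    where t(x) is the number of like-valued nearest-neighbour pairs on the
    lattice, using a plain single-site Gibbs sampler (free boundaries)."""
    rng = np.random.default_rng(seed)
    n, m = shape
    x = rng.integers(0, 2, size=shape)
    stats = []
    for sweep in range(sweeps):
        for i in range(n):
            for j in range(m):
                nbrs = []
                if i > 0:     nbrs.append(x[i - 1, j])
                if i < n - 1: nbrs.append(x[i + 1, j])
                if j > 0:     nbrs.append(x[i, j - 1])
                if j < m - 1: nbrs.append(x[i, j + 1])
                s = sum(nbrs)                      # neighbours currently equal to 1
                # t(x) gains s like-pairs if x_ij = 1, len(nbrs) - s if x_ij = 0
                p1 = 1.0 / (1.0 + np.exp(-theta * (2 * s - len(nbrs))))
                x[i, j] = rng.random() < p1
        if sweep >= burn:
            like_pairs = np.sum(x[:-1, :] == x[1:, :]) + np.sum(x[:, :-1] == x[:, 1:])
            stats.append(like_pairs)
    return float(np.mean(stats))

def log_nc_ratio(theta0, theta1, grid=11, **kwargs):
    """Path sampling: log Z(theta1) - log Z(theta0) = integral over theta of
    E_theta[t(x)], evaluated here with the trapezoidal rule."""
    thetas = np.linspace(theta0, theta1, grid)
    means = np.array([gibbs_mean_stat(th, **kwargs) for th in thetas])
    return float(np.sum(0.5 * (means[1:] + means[:-1]) * np.diff(thetas)))

print(log_nc_ratio(0.0, 0.4))   # illustrative run on a 10 x 10 lattice
```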