970 results for presence only
Abstract:
Habitat models are widely used in ecology, but there are relatively few studies of rare species, primarily because of a paucity of survey records and the lack of robust means of assessing the accuracy of modelled spatial predictions. We investigated the potential of compiled ecological data for developing habitat models for Macadamia integrifolia, a vulnerable mid-stratum tree endemic to lowland subtropical rainforests of southeast Queensland, Australia. We compared the performance of two binomial models, Classification and Regression Trees (CART) and Generalised Additive Models (GAM), developed from presence records and available absence data, with Maximum Entropy (MAXENT) models developed from presence records and background data. The GAM model was the best performer across the range of evaluation measures employed, but all models were assessed as potentially useful for informing in situ conservation of M. integrifolia. A significant loss of M. integrifolia habitat has occurred (p < 0.05), with only 37% of former (pre-clearing) habitat remaining in 2003. Remnant patches are significantly smaller, have larger edge-to-area ratios and are more isolated from each other than in pre-clearing configurations (p < 0.05). Whilst the network of suitable habitat patches is still largely intact, there are numerous smaller patches that are more isolated in the contemporary landscape than they were before clearing. These results suggest that in situ conservation of M. integrifolia may be best achieved through a landscape approach that considers the relative contribution of small remnant habitat fragments to the species as a whole, as well as facilitating connectivity among the entire network of habitat patches.
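Since the abstract's key methodological contrast is between presence-absence fitting (CART, GAM) and presence-background fitting (MAXENT), here is a minimal sketch of that contrast on synthetic data, using scikit-learn's plain logistic regression as a stand-in for both model families; the covariates, sample sizes and simulated landscape are illustrative assumptions, not the study's data.

```python
# Sketch: presence-absence vs presence-background model fitting.
# Logistic regression stands in for the GAM/MAXENT fits; all data synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Environmental covariates at surveyed sites (e.g. rainfall, elevation).
X_presence = rng.normal(1.0, 1.0, size=(60, 2))   # sites with species records
X_absence = rng.normal(-1.0, 1.0, size=(60, 2))   # surveyed sites without records

# (i) presence-absence model (the CART/GAM setting).
X_pa = np.vstack([X_presence, X_absence])
y_pa = np.r_[np.ones(60), np.zeros(60)]
pa_model = LogisticRegression().fit(X_pa, y_pa)

# (ii) presence-background model (the MAXENT setting): contrast presences
# against random "background" points drawn across the whole landscape.
X_background = rng.normal(0.0, 1.5, size=(500, 2))
X_pb = np.vstack([X_presence, X_background])
y_pb = np.r_[np.ones(60), np.zeros(500)]
pb_model = LogisticRegression(class_weight="balanced").fit(X_pb, y_pb)

# Evaluate both on held-out presence-absence data, as the study does.
X_test = np.vstack([rng.normal(1.0, 1.0, (30, 2)), rng.normal(-1.0, 1.0, (30, 2))])
y_test = np.r_[np.ones(30), np.zeros(30)]
for name, m in [("presence-absence", pa_model), ("presence-background", pb_model)]:
    print(name, "AUC:", round(roc_auc_score(y_test, m.predict_proba(X_test)[:, 1]), 3))
```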
Abstract:
This paper presents the details of an experimental study on the shear behaviour and strength of a recently developed, cold-formed steel hollow flange channel beam known as the LiteSteel Beam (LSB). The new LSB sections with rectangular hollow flanges are produced using a patented manufacturing process involving simultaneous cold-forming and dual electric resistance welding, and are commonly used as flexural members in buildings. However, no research had been undertaken on the shear behaviour of LSBs. Therefore, a detailed experimental study involving 36 shear tests was undertaken to investigate the shear behaviour of 10 different LSB sections. Simply supported test specimens of LSBs with aspect ratios of 1.0 and 1.5 were loaded at midspan until failure, using both single and back-to-back LSB arrangements. Test specimens were chosen such that all three types of shear failure (shear yielding, inelastic shear buckling and elastic shear buckling) occurred in the tests. Comparison of the experimental results with corresponding predictions from the current Australian and North American cold-formed steel design rules showed that the current design rules are very conservative for the shear design of LSBs. The presence of the rectangular hollow flanges significantly improved web shear buckling capacity, and considerable post-buckling strength was also observed. Appropriate improvements to the shear strength rules for LSBs have been proposed based on the design equations in the North American Specification. When reduced-height web side plates or only one web side plate was used, the shear capacity of the LSB was reduced; details of these tests and their results are also presented in this paper. Keywords: LiteSteel beam, Shear strength, Shear tests, Cold-formed steel structures, Direct strength method, Slender web, Hollow flanges.
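For readers unfamiliar with the design quantities involved, the sketch below computes the classical shear yield capacity and elastic shear buckling stress for a slender web panel, in the style of the cold-formed steel rules the paper evaluates; the section dimensions, steel grade and the simply supported buckling coefficient are illustrative assumptions, not LSB catalogue values or the paper's proposed equations. The paper's finding that the hollow flanges improve web shear buckling corresponds, in these terms, to a higher effective buckling coefficient than the simply supported value used here.

```python
# Sketch: classical shear capacity checks for a thin web panel.
# Dimensions and steel grade below are illustrative only.
import math

E, nu, fy = 200e3, 0.3, 450.0   # MPa; steel properties (illustrative grade)
d1, tw = 250.0, 2.0             # mm; clear web depth and thickness
a = 250.0                       # mm; shear span, so aspect ratio a/d1 = 1.0

# Shear buckling coefficient for a simply supported web panel (a/d1 >= 1).
kv = 5.34 + 4.0 / (a / d1) ** 2

# Elastic shear buckling stress of the web plate.
tau_cr = kv * math.pi**2 * E / (12 * (1 - nu**2) * (d1 / tw) ** 2)

# Shear yield stress and the resulting capacities.
tau_y = 0.6 * fy
V_yield = tau_y * d1 * tw / 1e3                  # kN, shear yielding
V_design = min(tau_cr, tau_y) * d1 * tw / 1e3    # kN, buckling-governed if tau_cr < tau_y

print(f"kv = {kv:.2f}, tau_cr = {tau_cr:.1f} MPa, tau_y = {tau_y:.1f} MPa")
print(f"V_yield = {V_yield:.1f} kN, governing V = {V_design:.1f} kN")
```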
Abstract:
In this study, the host sensitivity and specificity of the JCV and BKV polyomaviruses were evaluated by testing wastewater/fecal samples from nine host groups in Southeast Queensland, Australia. JCV and BKV were detected in 48 human wastewater samples collected from primary and secondary effluent, suggesting that these viruses are highly sensitive markers of human wastewater. Of the 81 animal wastewater/fecal samples tested, 80 were PCR negative for this marker; only one sample, from pig wastewater, was positive. Accordingly, the overall host specificity of these viruses for differentiating between human and animal wastewater/fecal samples was 0.99. To our knowledge, this is the first study in Australia to report the high specificity of the JCV and BKV polyomaviruses. To evaluate the field application of these viruses for detecting human fecal pollution, 20 environmental samples were collected from a coastal river. Of the 20 samples tested, 15% and 70% exceeded the regulatory guidelines for E. coli and enterococci levels in marine waters, respectively. In all, 5 (25%) samples were PCR positive for JCV and BKV, indicating the presence of human fecal pollution in the studied river. The results suggest that JCV and BKV detection using PCR could be a useful tool for identifying human-sourced fecal pollution in coastal waters.
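The reported 0.99 host specificity follows directly from the counts given in the abstract; a quick check:

```python
# Specificity = true negatives / (true negatives + false positives),
# using the animal sample counts reported in the abstract.
animal_samples = 81
animal_negative = 80          # one pig wastewater sample was PCR positive
specificity = animal_negative / animal_samples
print(f"host specificity = {specificity:.2f}")   # 0.99, as reported
```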
Abstract:
Calcium oxalate (CaOx) is the most intractable scale component to remove from sugar mill evaporators by either mechanical or chemical means. The operating conditions of sugar mill evaporators should preferentially favour the formation of the thermodynamically stable calcium oxalate monohydrate (COM), yet analyses of scale deposits from different sugar factories have shown that calcium oxalate dihydrate (COD) is usually the predominant phase, and in some cases is the only hydrate formed. The effects of trans-aconitic, succinic and acetic acids, all of which are present in sugarcane juice, and of ethylenediaminetetraacetic acid disodium salt (EDTA) on the growth of CaOx crystals have been examined by scanning electron microscopy (SEM), transmission electron microscopy (TEM), X-ray powder diffraction (XRD) and thermogravimetric analysis (TGA). In the presence of sugar, trans-aconitic acid, which constitutes two-thirds of the organic acid component of sugarcane juice, resulted in the formation of COD and COM in a 3:1 ratio. EDTA was the most effective additive at promoting the formation of COD, followed by trans-aconitic acid, then acetic acid and lastly succinic acid.
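As a side note on how TGA distinguishes the two hydrates (a worked calculation of ours, not taken from the paper), the expected dehydration mass-loss step differs markedly between COM and COD:

```python
# Theoretical water-loss steps that let TGA distinguish calcium oxalate
# monohydrate (COM) from dihydrate (COD). Worked example, not from the paper.
M_CaC2O4 = 40.08 + 2 * 12.011 + 4 * 15.999   # anhydrous calcium oxalate, g/mol
M_H2O = 18.015

for name, n_water in [("COM", 1), ("COD", 2)]:
    M_hydrate = M_CaC2O4 + n_water * M_H2O
    loss = n_water * M_H2O / M_hydrate * 100
    print(f"{name}: expected dehydration mass loss = {loss:.1f}%")
# COM ~12.3%, COD ~22.0%: the size of the first TGA mass-loss step
# indicates which hydrate dominates a scale sample.
```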
Abstract:
Hand-held mobile phone use while driving is illegal throughout Australia, yet many drivers persist with this behaviour. This study aims to understand the internal, driver-related and external, situational factors influencing drivers’ willingness to use a hand-held mobile phone while driving. Drawing on a sample of 160 university students, the study utilised the Theory of Planned Behaviour (TPB) to examine a range of belief-based constructs. Additionally, the drivers’ personality traits of neuroticism and extroversion were measured with the Neuroticism Extroversion Openness-Five Factor Inventory (NEO-FFI). In relation to the external, situational factors, four driving-related scenarios, intended to evoke differing levels of reported stress, were devised for the study; these manipulated drivers’ time urgency (low versus high) and passenger presence (alone versus with friends). In each scenario, drivers’ willingness to use a mobile phone was measured. Hierarchical regression analyses across the four driving scenarios found that, overall, the TPB components significantly accounted for drivers’ willingness to use a mobile phone above and beyond the demographic variables. Subjective norms, however, were a significant predictor of drivers’ willingness only in situations where the drivers were driving alone. Generally, neuroticism and extroversion did not significantly predict drivers’ willingness above and beyond the TPB and demographic variables. Overall, the findings broaden our understanding of the internal and external factors influencing drivers’ willingness to use a hand-held mobile phone while driving despite the illegality of this behaviour. The findings may have important practical implications for better informing road safety campaigns targeting drivers’ mobile phone use which, in turn, may contribute to a reduction in the extent to which mobile phone use contributes to road crashes.
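The hierarchical regression design described here enters demographic variables first and the TPB constructs second, testing whether the TPB adds explanatory power. A minimal sketch of that procedure, with synthetic data and illustrative column names (not the study's measures), might look like this:

```python
# Sketch of hierarchical (blockwise) regression: demographics entered first,
# then TPB constructs; the change in R^2 shows the TPB's added contribution.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 160
df = pd.DataFrame({
    "age": rng.integers(17, 40, n),
    "gender": rng.integers(0, 2, n),
    "attitude": rng.normal(0, 1, n),
    "subjective_norm": rng.normal(0, 1, n),
    "pbc": rng.normal(0, 1, n),   # perceived behavioural control
})
df["willingness"] = (0.5 * df.attitude + 0.3 * df.subjective_norm
                     + 0.2 * df.pbc + rng.normal(0, 1, n))

step1 = sm.OLS(df.willingness, sm.add_constant(df[["age", "gender"]])).fit()
step2 = sm.OLS(df.willingness, sm.add_constant(
    df[["age", "gender", "attitude", "subjective_norm", "pbc"]])).fit()

print(f"R2 step 1 (demographics): {step1.rsquared:.3f}")
print(f"R2 step 2 (+ TPB):        {step2.rsquared:.3f}")
print(f"delta R2:                 {step2.rsquared - step1.rsquared:.3f}")
```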
Abstract:
Aims: To develop clinical protocols for acquiring PET images, performing CT-PET registration and defining tumour volumes based on the PET image data, for radiotherapy for lung cancer patients, and then to test these protocols with respect to levels of accuracy and reproducibility. Method: A phantom-based quality assurance study of the processes associated with using registered CT and PET scans for tumour volume definition was conducted to: (1) investigate image acquisition and manipulation techniques for registering and contouring CT and PET images in a radiotherapy treatment planning system, and (2) determine technology-based errors in the registration and contouring processes. The outcomes of the phantom-based quality assurance study were used to determine clinical protocols. Protocols were developed for (1) acquiring patient PET image data for incorporation into the 3DCRT process, particularly for ensuring that the patient is positioned in their treatment position; (2) CT-PET image registration techniques; and (3) GTV definition using the PET image data. The developed clinical protocols were tested using retrospective clinical trials to assess the levels of inter-user variability that may be attributed to the use of these protocols. A Siemens Somatom Open Sensation 20-slice CT scanner and a Philips Allegro stand-alone PET scanner were used to acquire the images for this research. The Philips Pinnacle3 treatment planning system was used to perform the image registration and contouring of the CT and PET images. Results: Both the attenuation-corrected and transmission images obtained from standard whole-body PET staging clinical scanning protocols were acquired and imported into the treatment planning system for the phantom-based quality assurance study. Protocols for manipulating the PET images in the treatment planning system, particularly for quantifying uptake in volumes of interest and setting window levels for accurate geometric visualisation, were determined. The automatic registration algorithms were found to have sub-voxel levels of accuracy, with transmission scan-based CT-PET registration more accurate than emission scan-based registration of the phantom images. Respiration-induced image artifacts were not found to influence registration accuracy, while inadequate pre-registration overlap of the CT and PET images was found to result in large registration errors. A threshold value based on a percentage of the maximum uptake within a volume of interest was found to accurately contour the different features of the phantom despite the lower spatial resolution of the PET images. Appropriate selection of the threshold value is dependent on target-to-background ratios and the presence of respiratory motion. The results from the phantom-based study were used to design, implement and test clinical CT-PET fusion protocols. The patient PET image acquisition protocols enabled patients to be successfully identified and positioned in their radiotherapy treatment position during the acquisition of their whole-body PET staging scan. While automatic registration techniques were found to reduce inter-user variation compared to manual techniques, there was no significant difference in the registration outcomes for transmission or emission scan-based registration of the patient images using the protocol.
Tumour volumes contoured on registered patient CT-PET images, using the tested threshold values and viewing windows determined from the phantom study, demonstrated less inter-user variation for the primary tumour volume contours than those contoured using only the patient’s planning CT scans. Conclusions: The developed clinical protocols allow a patient’s whole-body PET staging scan to be incorporated, manipulated and quantified in the treatment planning process to improve the accuracy of gross tumour volume localisation in 3D conformal radiotherapy for lung cancer. Image registration protocols that factor in potential software-based errors, combined with adequate user training, are recommended to increase the accuracy and reproducibility of registration outcomes. A semi-automated adaptive threshold contouring technique incorporating a PET windowing protocol accurately defines the geometric edge of a tumour volume using PET image data from a stand-alone PET scanner, including 4D target volumes.
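The percentage-of-maximum threshold contouring described in the results can be sketched in a few lines; the 40% threshold and the toy volume below are illustrative assumptions, since the thesis emphasises that the appropriate value depends on target-to-background ratio and respiratory motion:

```python
# Sketch: percentage-of-maximum threshold contouring on a PET volume.
import numpy as np

def threshold_contour(pet_volume, roi_mask, fraction=0.40):
    """Return a binary tumour mask: voxels inside the region of interest
    whose uptake is at least `fraction` of the ROI's maximum uptake."""
    roi_values = np.where(roi_mask, pet_volume, -np.inf)
    suv_max = roi_values.max()
    return roi_values >= fraction * suv_max

# Toy example: a bright "lesion" inside a noisy background volume.
rng = np.random.default_rng(2)
vol = rng.normal(1.0, 0.2, size=(32, 32, 32))
vol[12:18, 12:18, 12:18] += 8.0                  # hot lesion
roi = np.zeros_like(vol, dtype=bool)
roi[8:24, 8:24, 8:24] = True                     # clinician-drawn ROI
mask = threshold_contour(vol, roi)
print("contoured voxels:", int(mask.sum()))      # ~6^3 = 216 expected
```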
Abstract:
Recently it has been shown that the consumption of a diet high in saturated fat is associated with impaired insulin sensitivity and an increased incidence of type 2 diabetes. In contrast, diets that are high in monounsaturated fatty acids (MUFAs) or polyunsaturated fatty acids (PUFAs), especially very long chain n-3 fatty acids (FAs), are protective against disease. However, the molecular mechanisms by which saturated FAs induce the insulin resistance and hyperglycaemia associated with metabolic syndrome and type 2 diabetes are not clearly defined. It is possible that saturated FAs act through alternative mechanisms, compared to MUFAs and PUFAs, to regulate hepatic gene expression and metabolism. It is proposed that, like MUFAs and PUFAs, saturated FAs regulate the transcription of target genes. To test this hypothesis, hepatic gene expression analysis was undertaken in a human hepatoma cell line, Huh-7, after exposure to the saturated FA palmitate. These experiments showed that palmitate is an effective regulator of gene expression for a wide variety of genes: a total of 162 genes were differentially expressed in response to palmitate. These changes not only affected the expression of genes related to nutrient transport and metabolism, they also extended to other cellular functions including cytoskeletal architecture, cell growth, protein synthesis and the oxidative stress response. In addition, this thesis has shown that palmitate exposure altered the expression patterns of several genes that have previously been identified in the literature as markers of risk of disease development, including CVD, hypertension, obesity and type 2 diabetes. The altered gene expression patterns associated with an increased risk of disease include apolipoprotein-B100 (apo-B100), apo-CIII, plasminogen activator inhibitor 1, insulin-like growth factor-I and insulin-like growth factor binding protein 3. This thesis reports the first observation that palmitate directly signals in cultured human hepatocytes to regulate the expression of genes involved in energy metabolism as well as other important genes. Prolonged exposure to long-chain saturated FAs reduces glucose phosphorylation and glycogen synthesis in the liver. Decreased glucose metabolism leads to elevated rates of lipolysis, resulting in increased release of free FAs. Free FAs have a negative effect on insulin action on the liver, which in turn results in increased gluconeogenesis and systemic dyslipidaemia. It has been postulated that disruption of glucose transport and insulin secretion by prolonged excessive FA availability might be a non-genetic factor that has contributed to the staggering rise in the prevalence of type 2 diabetes. As glucokinase (GK) is a key regulatory enzyme of hepatic glucose metabolism, changes in its activity may alter flux through the glycolytic and de novo lipogenic pathways and result in hyperglycaemia and ultimately insulin resistance. This thesis investigated the effects of saturated FAs on the promoter activity of the glycolytic enzyme GK, and on various transcription factors that may influence the regulation of GK gene expression. These experiments showed that palmitate is capable of decreasing GK promoter activity. In addition, quantitative real-time PCR showed that palmitate incubation may also regulate GK gene expression through a known FA-sensitive transcription factor, sterol regulatory element binding protein-1c (SREBP-1c), which upregulates GK transcription.
To parallel the investigations into the mechanisms of FA molecular signalling, further studies of the effect of FAs on metabolic pathway flux were performed. Although certain FAs reduce SREBP-1c transcription in vitro, it is unclear whether this results in decreased GK activity in vivo, where positive effectors of SREBP-1c such as insulin are also present. Under these conditions, it is uncertain whether the inhibitory effects of FAs would be overcome by insulin. The effects of combinations of FAs, insulin and glucose on glucose phosphorylation and metabolism were examined in cultured primary rat hepatocytes, at concentrations that mimic those in the portal circulation after a meal. It was found that total GK activity was unaffected by an increased concentration of insulin, but palmitate and eicosapentaenoic acid significantly lowered total GK activity in the presence of insulin. Despite the fact that total GK enzyme activity was reduced in response to FA incubation, GK enzyme translocation from the inactive, nuclear-bound state to the active, cytoplasmic state was unaffected. Interestingly, none of the FAs tested inhibited glucose phosphorylation or the rate of glycolysis when insulin was present. These results suggest that in the presence of insulin the levels of the active, unbound cytoplasmic GK are sufficient to buffer a slight decrease in GK enzyme activity and the decreased promoter activity caused by FA exposure. Although a high-fat diet has been associated with impaired hepatic glucose metabolism, there is no evidence from this thesis that FAs themselves directly modulate flux through the glycolytic pathway in isolated primary hepatocytes when insulin is also present. Therefore, although FAs affected the expression of a wide range of genes, including GK, this did not affect glycolytic flux in the presence of insulin. However, it may be possible that a saturated FA-induced decrease in GK enzyme activity, when combined with the onset of insulin resistance, may promote the dysregulation of glucose homeostasis and the subsequent development of hyperglycaemia, metabolic syndrome and type 2 diabetes.
Abstract:
This thesis deals with the problem of instantaneous frequency (IF) estimation of sinusoidal signals. This topic plays a significant role in signal processing and communications. Depending on the type of signal, two major approaches are considered. For IF estimation of single-tone or digitally modulated sinusoidal signals (such as frequency shift keying signals), the approach of digital phase-locked loops (DPLLs) is considered; this is Part I of the thesis. For FM signals, the approach of time-frequency analysis is considered; this is Part II of the thesis. In Part I we utilized sinusoidal DPLLs with a non-uniform sampling scheme, as this type is widely used in communication systems. The digital tanlock loop (DTL) introduced significant advantages over other existing DPLLs, and in the last 10 years many efforts have been made to improve its performance. However, this loop and all of its modifications utilize a Hilbert transformer (HT) to produce a signal-independent 90-degree phase-shifted version of the input signal. The Hilbert transformer can only be realized approximately, using a finite impulse response (FIR) digital filter. This realization introduces further complexity in the loop, in addition to approximations and frequency limitations on the input signal. We have tried to avoid the practical difficulties associated with the conventional tanlock scheme while keeping its advantages. A time delay is utilized in the tanlock scheme of the DTL to produce a signal-dependent phase shift, giving rise to the time-delay digital tanlock loop (TDTL). Fixed point theorems are used to analyze the behavior of the new loop. As such, TDTL combines the two major approaches in DPLLs: the non-linear approach of the sinusoidal DPLL based on fixed point analysis, and the linear tanlock approach based on arctan phase detection. TDTL preserves the main advantages of the DTL despite its reduced structure. An application of TDTL to FSK demodulation is also considered. The idea of replacing the HT by a time delay may be of interest in other signal processing systems; hence we have analyzed and compared the behaviors of the HT and the time delay in the presence of additive Gaussian noise. Based on this analysis, the behavior of the first- and second-order TDTLs has been analyzed in additive Gaussian noise. Since DPLLs need time for locking, they are normally not efficient in tracking the continuously changing frequencies of non-stationary signals, i.e. signals with time-varying spectra. Non-stationary signals are of importance in synthetic and real-life applications; an example is the frequency-modulated (FM) signals widely used in communication systems. Part II of this thesis is dedicated to the IF estimation of non-stationary signals. For such signals the classical spectral techniques break down, due to the time-varying nature of their spectra, and more advanced techniques must be utilized. For the IF estimation of non-stationary signals there are two major approaches: parametric and non-parametric. We chose the non-parametric approach, which is based on time-frequency analysis. This approach is computationally less expensive and more effective in dealing with multicomponent signals, which are the main aim of this part of the thesis. A time-frequency distribution (TFD) of a signal is a two-dimensional transformation of the signal to the time-frequency domain. Multicomponent signals can be identified by multiple energy peaks in the time-frequency domain.
Many real-life and synthetic signals are of a multicomponent nature, and there is little in the literature concerning IF estimation of such signals; this is why we have concentrated on multicomponent signals in Part II. An adaptive algorithm for IF estimation using quadratic time-frequency distributions has been analyzed, and a class of time-frequency distributions more suitable for this purpose has been proposed. The kernels of this class are time-only, or one-dimensional, rather than the time-lag (two-dimensional) kernels, and hence this class has been named the T-class. If the parameters of these TFDs are properly chosen, they are more efficient than the existing fixed-kernel TFDs in terms of resolution (energy concentration around the IF) and artifact reduction. The T-distributions have been used in the adaptive IF algorithm and proved to be efficient in tracking rapidly changing frequencies. They also enable direct amplitude estimation for the components of a multicomponent signal.
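As a minimal illustration of the non-parametric approach taken in Part II, the sketch below estimates the IF of a linear FM signal by peak-tracking a time-frequency distribution; a plain spectrogram stands in for the quadratic T-class TFDs developed in the thesis, and all signal parameters are illustrative:

```python
# Sketch: non-parametric IF estimation by tracking the energy peak of a
# time-frequency distribution (plain spectrogram used as the TFD here).
import numpy as np
from scipy.signal import spectrogram

fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
# Linear FM (chirp) test signal: IF sweeps from 50 Hz to 250 Hz.
phase = 2 * np.pi * (50 * t + 50 * t**2)
x = np.cos(phase)

f, tt, Sxx = spectrogram(x, fs=fs, nperseg=256, noverlap=192)
if_estimate = f[np.argmax(Sxx, axis=0)]      # peak frequency per time slice

if_true = 50 + 100 * tt                      # d(phase)/dt / (2*pi)
print("max abs IF error (Hz):", np.abs(if_estimate - if_true).max())
```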
Abstract:
This dissertation is primarily an applied statistical modelling investigation, motivated by a case study comprising real data and real questions. Theoretical questions on the modelling and computation of normalization constants arose from pursuit of these data-analytic questions. The essence of the thesis can be described as follows. Consider binary data observed on a two-dimensional lattice. A common problem with such data is the ambiguity of the recorded zeroes: a zero may represent a zero response given some threshold (presence), or that the threshold has not been triggered (absence). Suppose that the researcher wishes to estimate the effects of covariates on the binary responses whilst taking into account underlying spatial variation, which is itself of some interest. This situation arises in many contexts, and the dingo, cypress and toad case studies described in the motivation chapter are examples of this. Two main approaches to modelling and inference are investigated in this thesis. The first is frequentist and based on generalized linear models, with spatial variation modelled by using a block structure or by smoothing the residuals spatially. The EM algorithm can be used to obtain point estimates, coupled with bootstrapping or asymptotic MLE estimates for standard errors. The second approach is Bayesian and based on a three- or four-tier hierarchical model, comprising a logistic regression with covariates for the data layer, a binary Markov random field (MRF) for the underlying spatial process, and suitable priors for the parameters in these main models. The three-parameter autologistic model is a particular MRF of interest. Markov chain Monte Carlo (MCMC) methods comprising hybrid Metropolis/Gibbs samplers are suitable for computation in this situation. Model performance can be gauged by MCMC diagnostics, and model choice can be assessed by incorporating another tier in the modelling hierarchy. This requires evaluation of a normalization constant, a notoriously difficult problem. The difficulty of estimating the normalization constant for the MRF can be overcome by using a path integral approach, although this is a highly computationally intensive method. Different methods of estimating ratios of normalization constants (NCs) are investigated, including importance sampling Monte Carlo (ISMC), dependent Monte Carlo based on MCMC simulations (MCMC), and reverse logistic regression (RLR). I develop an idea that is present, though not fully developed, in the literature, and propose the integrated mean canonical statistic (IMCS) method for estimating log NC ratios for binary MRFs. The IMCS method falls within the framework of the newly identified path sampling methods of Gelman & Meng (1998) and outperforms ISMC, MCMC and RLR. It also does not rely on simplifying assumptions, such as ignoring spatio-temporal dependence in the process. A thorough investigation is made of the application of IMCS to the three-parameter autologistic model. This work introduces the background computations required for the full implementation of the four-tier model in Chapter 7. Two different extensions of the three-tier model to a four-tier version are investigated: the first incorporates temporal dependence in the underlying spatio-temporal process, while the second allows the successes and failures in the data layer to depend on time. The MCMC computational method is extended to incorporate the extra layer.
A major contribution of the thesis is the development, for the first time, of a fully Bayesian approach to inference for these hierarchical models. Note: The author of this thesis has agreed to make it open access but invites people downloading the thesis to send her an email via the 'Contact Author' function.
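The path sampling idea underlying IMCS can be illustrated on a toy exponential-family model where the answer is known exactly: since d/dtheta log Z(theta) = E_theta[S(x)] for canonical statistic S, integrating Monte Carlo estimates of the mean canonical statistic along a path in theta yields the log NC ratio. The sketch below uses independent binary sites so that E_theta[S] can be sampled directly; for the autologistic MRF of the thesis, those expectations would instead come from MCMC:

```python
# Toy path sampling / thermodynamic integration for a log ratio of
# normalization constants (the idea behind IMCS). Model: n independent
# binary sites with p(x) proportional to exp(theta * sum(x)), so
# Z(theta) = (1 + e^theta)^n and the exact answer is available.
import numpy as np

rng = np.random.default_rng(3)
n = 100
grid = np.linspace(0.0, 1.0, 21)          # path from theta = 0 to theta = 1

def mean_canonical_stat(theta, n_draws=5000):
    # E_theta[S(x)] with S(x) = sum(x); each site is Bernoulli(sigmoid(theta)).
    p = np.exp(theta) / (1 + np.exp(theta))
    return rng.binomial(1, p, size=(n_draws, n)).sum(axis=1).mean()

# Trapezoidal integration of the mean canonical statistic along the path.
means = np.array([mean_canonical_stat(th) for th in grid])
log_ratio_est = float(np.sum((means[1:] + means[:-1]) / 2 * np.diff(grid)))

log_ratio_exact = n * (np.log(1 + np.e) - np.log(2.0))
print(f"path sampling estimate of log Z(1)/Z(0): {log_ratio_est:.2f}")
print(f"exact value:                             {log_ratio_exact:.2f}")
```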
Abstract:
In order to effect permanent wound closure in burns patients suffering from full-thickness wounds, replacing their skin via split-thickness autografting is essential. Dermal substitutes used in conjunction with widely meshed split-thickness autografts (with or without cultured keratinocytes) reduce scarring at the donor and recipient sites of burns patients by reducing the demand for autologous skin (both surface area and thickness), without compromising dermal delivery at the wound face. Tissue-engineered products such as Integra consist of a dermal template which is rapidly remodelled to form a neodermis, at which time the temporary silicone outer layer is removed and replaced with an autologous split-thickness skin graft. Whilst provision of a thick tissue-engineered dermis at full-thickness burn sites reduces scarring, it is hampered by delays in vascularisation, which result in clinical failure. The ultimate success of any skin graft product depends on a number of basic factors including adherence and haemostasis; in the case of viable tissue grafts, success ultimately depends on restoration of a normal blood supply, and hence this study. Ultimately, the goal of this research is to improve the therapeutic properties of tissue replacements through impregnation with growth factors aimed at stimulating the migration and proliferation of microvascular endothelial cells into the donor tissue post grafting. For the purpose of my Masters, the aim was to evaluate the responsiveness of a dermal microvascular endothelial cell line to growth factors and haemostatic factors in the presence of the glycoprotein vitronectin. Vitronectin formed the backbone of my hypothesis and research due to its association with both epithelial and, more specifically, endothelial migration and proliferation. Early work using a platform technology referred to as VitroGro (Tissue Therapies Ltd), which comprises vitronectin-bound BP5/IGF-1, aided keratinocyte proliferation. I hypothesised that this result would translate to another epithelium: endothelium. However, VitroGro had no effect on endothelial proliferation or migration. Vitronectin increases the presence of Fibroblast Growth Factor (FGF) and Vascular Endothelial Growth Factor (VEGF) receptors, enhancing cell responsiveness to their respective ligands. So, although VEGF receptor expression in the Human Microvascular Endothelial Cell line 1 (HMEC-1) is generally low, it was hypothesised that exposure to vitronectin would up-regulate this receptor. HMEC-1 migration, but not proliferation, was enhanced by vitronectin-bound VEGF, as well as by vitronectin-bound Epidermal Growth Factor (EGF), both of which could be used to stimulate microvascular endothelial cell migration for the purpose of transplantation. In addition to vitronectin's synergy with various growth factors, it has also been shown to play a role in haemostasis. Vitronectin binds thrombin-antithrombin III (TAT) to form a trimeric complex that takes on many of the attributes of vitronectin, such as heparin affinity, which results in its adherence to the endothelium via heparan sulfate proteoglycans (HSP), followed by unaltered transcytosis through the endothelium, and ultimately its removal from the circulation. This has been documented as a mechanism designed to remove thrombin from the circulation; equally, it could be argued that it is a mechanism for delivering vitronectin to the matrix.
My results show that matrix-bound vitronectin dramatically alters the effect that conformationally altered antithrombin III (cATIII) has on the proliferation of microvascular endothelial cells: cATIII stimulates HMEC-1 proliferation in the presence of matrix-bound vitronectin, as opposed to inhibiting proliferation in its absence. Binding vitronectin to tissues and organs prior to transplant, in the presence of cATIII, will have a profound effect on microvascular infiltration of the graft, by preventing occlusion of existing vessels whilst stimulating the migration and proliferation of endothelium within the tissue.
Abstract:
The present paper focuses on some interesting classes of process-control games, where winning essentially means successfully controlling the process. A master for one of these games is an agent who plays a winning strategy. In this paper we investigate situations in which even a complete model (given by a program) of a particular game does not provide enough information to synthesize—even incrementally—a winning strategy. However, if in addition to getting a program, a machine may also watch masters play winning strategies, then the machine is able to incrementally learn a winning strategy for the given game. Studied are successful learning from arbitrary masters and from pedagogically useful selected masters. It is shown that selected masters are strictly more helpful for learning than are arbitrary masters. Both for learning from arbitrary masters and for learning from selected masters, though, there are cases where one can learn programs for winning strategies from masters but not if one is required to learn a program for the master's strategy itself. Both for learning from arbitrary masters and for learning from selected masters, one can learn strictly more by watching m+1 masters than one can learn by watching only m. Last, a simulation result is presented where the presence of a selected master reduces the complexity from infinitely many semantic mind changes to finitely many syntactic ones.