243 results for Risks Assessment Methods
Abstract:
Purpose: The purpose of this paper is to identify changes in bank lending criteria due to the GFC and to explore the associated impacts on new housing supply in Queensland, Australia. Design/methodology/approach: This research involves a survey of each of Australia’s big four banks, as well as two prominent arrangers of development finance. Data on key lending criteria were collected at three stages: pre-GFC, during the GFC, and in the GFC recovery. Findings: The GFC has resulted in a retraction of funds available for residential development. The few institutions still lending are selecting only the best credit risks by way of restrictive loan covenants, including low loan-to-value ratios, high cash equity requirements, regional “no go” zones, and a demonstrated borrower track record. The ability of developers to proceed with new housing developments is constrained by their inability to obtain sufficient finance. Research limitations/implications: This research uses survey data, together with an understanding of the project finance process, to extrapolate impacts on the residential development industry across Queensland. No regional or sub-market analysis is included. Future research will include subsequent surveys to track any loosening of credit policies over time, and sub-market sector analysis. Practical implications: The inability to obtain project finance is identified as a key constraint on new housing supply. This research will inform policy makers and provide important quantitative evidence of the importance of the availability of development finance in the housing supply chain. Social implications: Queensland is facing a supply shortfall which, if not corrected, may lead to upward pressure on house prices and falling housing affordability. Originality/value: There is very little academic research on development funding. This research is unique in linking bank lending criteria to new housing supply and demonstrating the impact on the development industry.
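The tightened covenants described above (low loan-to-value ratios, high cash equity requirements) amount to a simple screening rule. The sketch below illustrates such a check; the threshold values are purely illustrative assumptions, not the surveyed lenders' actual criteria:

```python
def passes_covenants(loan, end_value, cash_equity, total_cost,
                     max_lvr=0.65, min_equity_ratio=0.25):
    """Screen a development-finance application against illustrative
    post-GFC-style covenants: a loan-to-value ratio cap and a minimum
    cash equity share of total project cost (thresholds are assumed)."""
    lvr = loan / end_value                  # loan-to-value ratio
    equity_ratio = cash_equity / total_cost # borrower's cash stake
    return lvr <= max_lvr and equity_ratio >= min_equity_ratio
```

For example, a developer seeking $6m against a $10m end value with $3m cash equity in a $10m project would pass these illustrative thresholds, while a higher-geared application would not.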
Abstract:
Virtual methods to assess the fitting of a fracture fixation plate have been proposed recently, but with limitations such as simplified fit criteria or manual data processing. This study aims to automate a fit analysis procedure using clinically based criteria, and then to analyse the results further for borderline fit cases. Three-dimensional (3D) models of 45 bones and of a precontoured distal tibial plate were used to assess the fitting of the plate automatically. A Matlab program was developed to automatically measure the shortest distance between the bone and the plate at three regions of interest, as well as a plate-bone angle. The measured values, including the fit assessment results, were recorded in a spreadsheet as part of the batch-process routine. An automated fit analysis procedure will enable the processing of larger bone datasets in a significantly shorter time, providing data more representative of the target population for plate shape design and validation. As a result, better fitting plates can be manufactured and made available to surgeons, thereby reducing the risk and cost associated with complications or corrective procedures. This, in turn, is expected to translate into improved patients' quality of life.
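The shortest-distance measurement at the heart of the automated fit check can be sketched in Python (the study used a Matlab program; the function name, region definitions and the 2 mm gap threshold here are illustrative assumptions):

```python
import numpy as np

def plate_fit(bone_pts, plate_pts, regions, max_gap_mm=2.0):
    """For each region of interest on the plate, find the shortest
    plate-to-bone distance and flag the region as fitting when that
    gap is within an assumed clinical threshold."""
    results = {}
    for name, idx in regions.items():
        pts = plate_pts[idx]                 # (m, 3) plate points in region
        # pairwise distances from each plate point to every bone point
        d = np.linalg.norm(pts[:, None, :] - bone_pts[None, :, :], axis=2)
        gap = float(d.min())                 # shortest plate-to-bone distance
        results[name] = {"min_gap_mm": gap, "fits": gap <= max_gap_mm}
    return results
```

Batch-processing a dataset then reduces to looping this over all bone models and writing each `results` dict as a spreadsheet row.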
Abstract:
The research objective of this thesis was to contribute to Bayesian statistical methodology, in risk assessment and in spatial and spatio-temporal modelling, by modelling error structures using complex hierarchical models. Specifically, I hoped to consider two applied areas and to use these applications as a springboard for developing new statistical methods, as well as undertaking analyses which might answer particular applied questions. Thus, this thesis considers a series of models, firstly in the context of risk assessments for recycled water, and secondly in the context of water usage by crops. The two problems were risk assessment analyses for wastewater and, secondly, a four-dimensional dataset assessing differences between cropping systems over time and over three spatial dimensions. The aim was to use the simplicity and insight afforded by Bayesian networks to develop appropriate models for risk scenarios, and to use Bayesian hierarchical models to explore the necessarily complex modelling of four-dimensional agricultural data. The specific objectives of the research were: to develop a method for calculating credible intervals for the point estimates of Bayesian networks; to develop a model structure incorporating all the experimental uncertainty associated with various constants, thereby allowing the calculation of more credible credible intervals for a risk assessment; to model a single day’s data from the agricultural dataset in a way that satisfactorily captured the complexities of the data; to build a model for several days’ data, in order to consider how the full data might be modelled; and finally to build a model for the full four-dimensional dataset and to consider the time-varying nature of the contrast of interest, having satisfactorily accounted for possible spatial and temporal autocorrelations.
This work forms five papers, two of which have been published, two submitted, and the final paper still in draft. The first two objectives were met by recasting the risk assessments as directed acyclic graphs (DAGs). In the first case, we elicited uncertainty for the conditional probabilities needed by the Bayesian net, incorporated these into a corresponding DAG, and used Markov chain Monte Carlo (MCMC) to find credible intervals for all the scenarios and outcomes of interest. In the second case, we incorporated the experimental data underlying the risk assessment constants into the DAG, and also treated some of that data as an ‘errors-in-variables’ problem [Fuller, 1987]. This illustrated a simple method for incorporating experimental error into risk assessments. In considering one day of the three-dimensional agricultural data, it became clear that geostatistical models or conditional autoregressive (CAR) models over the three dimensions were not the best way to approach the data. Instead, CAR models were used with neighbours only in the same depth layer. This gave flexibility to the model, allowing both the spatially structured and unstructured variances to differ at all depths. We call this model the CAR layered model. Given the experimental design, the fixed part of the model could have been modelled as a set of means by treatment and by depth, but doing so allows little insight into how the treatment effects vary with depth. Hence, a number of essentially non-parametric approaches were taken to examine the effects of depth on treatment, with the model of choice incorporating an errors-in-variables approach for depth in addition to a non-parametric smooth. The statistical contribution here was the introduction of the CAR layered model; the applied contribution was the analysis of moisture over depth and the estimation of the contrast of interest together with its credible intervals.
These models were fitted using WinBUGS [Lunn et al., 2000]. The work in the fifth paper deals with the fact that, for large datasets, the use of WinBUGS becomes more problematic because of its highly correlated term-by-term updating. In this work, we introduce a Gibbs sampler with block updating for the CAR layered model. The Gibbs sampler was implemented by Chris Strickland using pyMCMC [Strickland, 2010]. This framework is then used to consider five days’ data, and we show that soil moisture for all the various treatments reaches levels particular to each treatment at a depth of 200 cm and thereafter stays constant, albeit with variance increasing with depth. In an analysis across three spatial dimensions and across time, there are many interactions of time and the spatial dimensions to be considered. Hence, we chose to use a daily model and to repeat the analysis at all time points, effectively creating an interaction model of time by the daily model. Such an approach allows great flexibility. However, it does not allow insight into the way in which the parameter of interest varies over time. Hence, a two-stage approach was also used, with estimates from the first stage being analysed as a set of time series. We see this spatio-temporal interaction model as a useful approach to data measured across three spatial dimensions and time, since it does not assume additivity of the random spatial or temporal effects.
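The defining feature of the CAR layered model is its neighbourhood structure: each site has neighbours only within its own depth layer. A sketch of building such an adjacency structure follows; the regular grid layout and 4-neighbour rule are illustrative assumptions, not the thesis's actual experimental layout:

```python
def layered_car_adjacency(nx, ny, nlayers):
    """Adjacency lists for a CAR 'layered' prior: sites are grid cells
    replicated over depth layers, and neighbours are the 4-adjacent
    cells in the SAME layer only (no across-depth neighbours), so the
    spatial variance parameters can differ freely between depths."""
    def sid(layer, i, j):                    # site index from (layer, row, col)
        return layer * nx * ny + i * ny + j

    adj = {sid(l, i, j): [] for l in range(nlayers)
           for i in range(nx) for j in range(ny)}
    for l in range(nlayers):
        for i in range(nx):
            for j in range(ny):
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < nx and 0 <= jj < ny:
                        adj[sid(l, i, j)].append(sid(l, ii, jj))
    return adj
```

Because no edge crosses a layer boundary, fitting a CAR prior per layer on this graph lets both the structured and unstructured variances vary with depth, which is the flexibility the CAR layered model provides.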
Abstract:
Preterm infants commence breastfeeding when health-care professionals deem them to be ready. However, the optimal timing for commencement of breastfeeding is unclear, and there is currently little guidance for neonatal care providers on when to initiate breastfeeding among preterm infants. A mixed-methods study was conducted to develop and test the Preterm Sucking Readiness (PTSR) scale in four phases. The first phase involved a chart audit to explore the use of age as a criterion, by investigating when preterm infants meet feeding milestones as well as other factors that may affect an infant’s readiness to engage in nutritive sucking behaviour. The second phase utilised focus groups to explore and define how neonatal care providers decide when to commence breastfeeding. To gain consensus on the criteria mentioned by the focus groups, a Delphi survey involving neonatal providers across Australia and New Zealand was conducted in phase 3. Phase 4 involved an observational study used to test the six-item PTSR. The age at which specific feeding milestones were reached was consistent with what has previously been described in the literature. The chart audit showed that the time to the first feeding attempt in the preterm infant population was affected by gestational age at birth, birth weight, and specific interventions. Staff also considered age along with other criteria when deciding when to initiate feeding. Consensus on nine criteria for inclusion in the six-item PTSR was achieved using the Delphi technique. Three items of the PTSR showed significant differences between the preterm and full-term infant groups. Only two items, feeding-readiness behaviour and low pulse oximetry during handling, explained the variance in breastfeeding behaviour. Inter-rater reliability ranged from moderate to very good for the PTSR items.
The results of this study indicate the importance of assessing behavioural cues as an indication of breastfeeding readiness in the preterm infant population, once an infant is deemed physiologically stable. Age continues to be a factor in some clinicians' decisions to commence breastfeeding. However, age alone cannot be used to decide if an infant is ready to engage in breastfeeding. Further research is needed to confirm these findings.
Abstract:
Objective: To critically appraise the Biodex System 4 isokinetic dynamometer for strength assessment of children. Methods: Appraisal was based on experiences from two independent laboratories involving testing of 213 children. Issues were recorded and the manufacturer was consulted regarding appropriate solutions. Results: The dynamometer had insufficient height adjustment for alignment of the knee for some children, requiring the construction of padding to better fit the child within the dynamometer. Potential for entrapment of the non-testing leg was evident in the passive and eccentric modes and a leg bracket restraint was constructed. Automated gravity correction did not operate when protocols were linked or data was exported to an external device. Conclusions: Limitations were noted, some of which were applicable to knee strength testing in general and others which were specific to use with children. However, most of these obstacles could be overcome, making the Biodex System 4 suitable for assessment of knee strength in children.
Abstract:
In 2009 the Australian Federal and State governments are expected to have spent some AU$30 billion procuring infrastructure projects. For governments with finite resources but many competing projects, formal capital rationing is achieved through the use of Business Cases. These Business Cases articulate the merits of investing in particular projects, along with the estimated costs and risks of each project. Despite the sheer size and impact of infrastructure projects, there is very little research in Australia, or internationally, on the performance of these projects against the Business Case assumptions made when the decision to invest is taken. If such assumptions (particularly cost assumptions) are not met, then there is serious potential for the misallocation of Australia’s finite financial resources. This research addresses this important gap in the literature by using combined quantitative and qualitative research methods to examine the actual performance of 14 major Australian government infrastructure projects. The research findings are controversial as they challenge widely held perceptions of the effectiveness of certain infrastructure delivery practices. Despite this controversy, the research has had a significant impact on the field and has been described as ‘outstanding’ and ‘definitive’ (Alliancing Association of Australasia), ‘one of the first of its kind’ (Infrastructure Partnerships of Australia) and ‘making a critical difference to infrastructure procurement’ (Victorian Department of Treasury). The implications for practice have been profound, including the withdrawal by Government of various infrastructure procurement guidelines, the formulation of new infrastructure policies by several state governments, and the preparation of new infrastructure guidelines that substantially reflect the research findings.
Building on the practical research, a more rigorous academic investigation focussed on the comparative cost uplift of various project delivery strategies was submitted to Australia’s premier academic management conference, the Australian and New Zealand Academy of Management (ANZAM) Annual Conference. This paper has been accepted for the 2010 ANZAM National Conference following a process of double blind peer review with reviewers rating the paper’s overall contribution as "Excellent" and "Good".
Abstract:
Overcoming many of the constraints to early stage investment in biofuels production from sugarcane bagasse in Australia requires an understanding of the complex technical, economic and systemic challenges associated with the transition of established sugar industry structures from single-product agri-businesses to new diversified multi-product biorefineries. While positive investment decisions in new infrastructure require technically feasible solutions and the attainment of project economic investment thresholds, many other systemic factors will influence the investment decision. These factors include the interrelationships between feedstock availability and energy use, competing product alternatives, technology acceptance, and perceptions of project uncertainty and risk. This thesis explores the feasibility of a new cellulosic ethanol industry in Australia based on the large sugarcane fibre (bagasse) resource available. The research explores industry feasibility from multiple angles, including the challenges of integrating ethanol production into an established sugarcane processing system, scoping the economic drivers and key variables relating to bioethanol projects, and considering the impact of emerging technologies in improving industry feasibility. The opportunities available from pilot-scale technology demonstration are also addressed. Systems analysis techniques are used to explore the interrelationships between the existing sugarcane industry and the developing cellulosic biofuels industry. This analysis has resulted in the development of a conceptual framework for a bagasse-based cellulosic ethanol industry in Australia, which is used to assess the uncertainty in key project factors and investment risk.
The analysis showed that the fundamental issue affecting investment in a cellulosic ethanol industry from sugarcane in Australia is the uncertainty in the future price of ethanol, and that government support to reduce the risks associated with early stage investment is likely to be necessary to promote commercialisation of this novel technology. Comprehensive techno-economic models have been developed and used to assess the potential quantum of ethanol production from sugarcane in Australia, to assess the feasibility of a soda-based biorefinery at the Racecourse Sugar Mill in Mackay, Queensland, and to assess the feasibility of reducing the cost of production of fermentable sugars through the in-planta expression of cellulases in sugarcane. These assessments show that ethanol from sugarcane in Australia has the potential to make a significant contribution to reducing Australia’s reliance on fossil transportation fuels, and that economically viable projects exist depending upon assumptions relating to product price, ethanol taxation arrangements and greenhouse gas emission reduction incentives. The conceptual design and development of a novel pilot-scale cellulosic ethanol research and development facility is also reported in this thesis. The establishment of this facility enables the technical and economic feasibility of new technologies to be assessed in a multi-partner, collaborative environment. As a key outcome of this work, the study has delivered a facility that will enable novel cellulosic ethanol technologies to be assessed in a low investment-risk environment, reducing the potential risks associated with early stage investment in commercial projects and hence promoting more rapid technology uptake.
While the study has focussed on an exploration of the feasibility of a commercial cellulosic ethanol industry from sugarcane in Australia, many of the same key issues will be of relevance to other sugarcane industries throughout the world seeking diversification of revenue through the implementation of novel cellulosic ethanol technologies.
Abstract:
Bioinformatics involves the analysis of biological data such as DNA sequences, microarrays and protein-protein interaction (PPI) networks. Its two main objectives are the identification of genes or proteins and the prediction of their functions. Biological data often contain uncertain and imprecise information. Fuzzy theory provides useful tools for dealing with this type of information and has therefore played an important role in the analysis of biological data. In this thesis, we aim to develop new fuzzy techniques and apply them to DNA microarrays and PPI networks. We focus on three problems: (1) clustering of microarrays; (2) identification of disease-associated genes in microarrays; and (3) identification of protein complexes in PPI networks. The first part of the thesis aims to detect, by the fuzzy C-means (FCM) method, clustering structures in DNA microarrays corrupted by noise. Because of the presence of noise, some clustering structures found in random data may not have any biological significance. In this part, we propose to combine FCM with empirical mode decomposition (EMD) for clustering microarray data. The purpose of EMD is to reduce, and preferably remove, the effect of noise, resulting in what is known as denoised data. We call this method the fuzzy C-means method with empirical mode decomposition (FCM-EMD). We applied this method to yeast and serum microarrays, and silhouette values were used to assess the quality of clustering. The results indicate that the clustering structures of denoised data are more reasonable, implying that genes have a tighter association with their clusters. Furthermore, we found that estimation of the fuzzy parameter m, which is a difficult step, can to some extent be avoided by analysing denoised microarray data. The second part aims to identify disease-associated genes from DNA microarray data generated under different conditions, e.g., from patients and normal people.
We developed a type-2 fuzzy membership (FM) function for the identification of disease-associated genes. This approach was applied to diabetes and lung cancer data, and a comparison with the original FM test was carried out. Among the ten best-ranked diabetes genes identified by the type-2 FM test, seven have been confirmed as diabetes-associated genes according to gene description information in GenBank and the published literature. An additional gene is further identified. Among the ten best-ranked genes identified in the lung cancer data, seven are confirmed to be associated with lung cancer or its treatment. The type-2 FM-d values are significantly different, which makes the identifications more convincing than those of the original FM test. The third part of the thesis aims to identify protein complexes in large interaction networks. Identification of protein complexes is crucial to understanding the principles of cellular organisation and to predicting protein functions. In this part, we propose a novel method which combines fuzzy clustering and interaction probability to identify overlapping and non-overlapping community structures in PPI networks, and then to detect protein complexes in these sub-networks. Our method is based on both the fuzzy relation model and the graph model. We applied the method to several PPI networks and compared it with a popular protein complex identification method, the clique percolation method. For the same data, we detected more protein complexes. We also applied our method to two social networks. The results show that our method works well for detecting sub-networks and gives a reasonable understanding of these communities.
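The fuzzy C-means step underlying FCM-EMD can be sketched as a minimal alternating update of memberships and centroids. This is a generic FCM sketch, not the thesis's implementation; the EMD denoising stage is omitted, and the fuzzifier m = 2 and the test data are illustrative assumptions:

```python
import numpy as np

def fcm(X, c=2, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy C-means: alternate between updating fuzzy
    memberships U (rows sum to 1) and weighted centroids."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)            # normalise memberships
    for _ in range(n_iter):
        W = U ** m                               # fuzzified weights
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))           # closer centre => higher membership
        U /= U.sum(axis=1, keepdims=True)
    return U, centers
```

In the FCM-EMD pipeline, `X` would be the EMD-denoised expression profiles, and cluster quality would then be scored with silhouette values.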
Abstract:
High-resolution thermogravimetric analysis (TGA) has attracted much attention in the synthesis of organoclays and their applications. In this study, organoclays were synthesised through ion exchange of a single cationic surfactant for sodium ions, and characterised by methods including X-ray diffraction (XRD) and TGA. The changes in the surface properties of montmorillonite and of the surfactant-intercalated organoclays were determined using XRD through the changes in basal spacing. TGA was applied to obtain further information on the configuration and structural changes of the organoclays during thermal decomposition. Four distinct decomposition steps appear in the differential thermogravimetric (DTG) curves. The observed TG steps relate to the arrangement of the surfactant molecules intercalated in montmorillonite, and the thermal analysis indicates the thermal stability of the surfactant-modified clays. This investigation provides new insights into the properties of organoclays and is important for the synthesis and processing of organoclays for environmental applications.
Abstract:
This paper reviews the current state of the application of infrared methods, particularly mid-infrared (mid-IR) and near-infrared (NIR), for the evaluation of the structural and functional integrity of articular cartilage. While a considerable amount of research has been conducted on tissue characterisation using mid-IR, it is almost certain that full-thickness cartilage assessment is not feasible with this method. In contrast, the considerably greater penetration capacity of NIR suggests that it is a suitable candidate for full-thickness cartilage evaluation. Nevertheless, significant research is still required to improve the specificity and clinical applicability of the method if it is to be used to distinguish between functional and dysfunctional cartilage.
Abstract:
Background: Comprehensive geriatric assessment has been shown to improve patient outcomes, but the geriatricians who deliver it are in short supply. A web-based method of comprehensive geriatric assessment has been developed with the potential to improve access to specialist geriatric expertise. The current study aims to test the reliability and safety of comprehensive geriatric assessment performed “online” in making geriatric triage decisions. It will also explore the accuracy of the procedure in identifying common geriatric syndromes, and its cost relative to conventional “live” consultations. Methods/Design: The study population will consist of 270 acutely hospitalized patients referred for geriatric consultation at three sites. Paired assessments (live and online) will be conducted by independent, blinded geriatricians and the level of agreement examined. This will be compared with the level of agreement between two independent, blinded geriatricians each consulting with the patient in person (i.e. “live”). Agreement between the triage decisions from live-live assessments and from live-online assessments will be calculated using kappa statistics, as will agreement between the online and live detection of common geriatric syndromes. Resource use data will be collected for online and live-live assessments to allow comparison between the two procedures. Discussion: If the online approach is found to be less precise than live assessment, further analysis will seek to identify patient subgroups where disagreement is more likely. This may enable a protocol to be developed that avoids unsafe clinical decisions at a distance. Trial registration number: ACTRN12611000936921
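The planned kappa analysis quantifies agreement between paired triage decisions beyond what chance alone would produce. A minimal Cohen's kappa for two raters can be sketched as follows (the triage labels in the usage note are illustrative, not the trial's actual categories):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for the agreement
    expected by chance from each rater's marginal label frequencies."""
    n = len(rater_a)
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n    # observed
    ca, cb = Counter(rater_a), Counter(rater_b)
    pe = sum(ca[k] * cb[k] for k in ca) / n ** 2              # chance-expected
    return (po - pe) / (1 - pe)
```

For example, two geriatricians triaging the same four patients as `["home", "rehab", "home", "acute"]` and `["home", "rehab", "acute", "acute"]` agree on 3 of 4 cases, giving a kappa of about 0.64 once chance agreement is removed.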
Abstract:
Background: The increasing popularity and use of the internet make it an attractive option for providing health information and treatment, including for alcohol and other drug (AOD) use. There is limited research examining how people identify and access information about AOD use online, or how they assess the usefulness of the information presented. This study examined the strategies that individuals used to identify and navigate a range of AOD websites, along with their attitudes concerning presentation and content. Methods: Members of the general community in Brisbane and Roma (Queensland, Australia) were invited to participate in a 30-minute search of the internet for sites related to AOD use, followed by a focus group discussion. Fifty-one subjects participated in the study across nine focus groups. Results: Participants spent a maximum of 6.5 minutes on any one website, and less if the user was under 25 years of age. Time spent was as little as 2 minutes if the website was not the first accessed. Participants recommended that AOD-related websites have an engaging home or index page which quickly and accurately portrays the site’s objectives and provides clear site navigation options. Website content should clearly match the title and description of the site used by internet search engines. Participants supported the development of a portal for AOD websites, suggesting that it would greatly facilitate access and navigation. Treatment programs delivered online were initially viewed with caution, apparently because of limited understanding of what constitutes online treatment, including its potential efficacy. Conclusions: A range of recommendations arise from this study regarding the design and development of websites, particularly those related to AOD use. These include prudent use of text and information on any one webpage, the use of graphics and colours, and clear, uncluttered navigation options.
Implications for future website development are discussed.
Abstract:
Background: In-depth investigations of crash risks inform prevention and safety promotion programmes. Traditionally, such investigations are conducted using exposure-controlled or case-control methodology. However, these studies need either observational data for control cases or exogenous exposure data, such as vehicle-kilometres travelled, entry flow, or the product of conflicting flows for a particular traffic location or site. These data are not readily available and often require extensive data collection effort on a system-wide basis. Aim: The objective of this research is to propose an alternative methodology for investigating the crash risks of a road user group in different circumstances using readily available traffic police crash data. Methods: This study employs a combination of a log-linear model and the quasi-induced exposure technique to estimate the crash risks of a road user group. While the log-linear model reveals the significant interactions, and thus the prevalence of crashes of a road user group under various sets of traffic, environmental and roadway factors, the quasi-induced exposure technique estimates the relative exposure of that road user group for the same set of explanatory variables. The combination of these two techniques therefore provides relative measures of crash risk under various roadway, environmental and traffic conditions. The proposed methodology is illustrated using five years of Brisbane motorcycle crash data. Results: Interpretation of the results for different combinations of interacting factors shows that the poor conspicuity of motorcycles is a predominant cause of motorcycle crashes. The inability of other drivers to correctly judge the speed and distance of an oncoming motorcyclist is also evident in right-of-way violation motorcycle crashes at intersections. Discussion and Conclusions: The combination of a log-linear model and the quasi-induced exposure technique is a promising methodology that can be applied to better estimate the crash risks of other road users.
This study also highlights the importance of considering interaction effects in order to better understand hazardous situations. A further study comparing the proposed methodology with the case-control method would be useful.
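The quasi-induced exposure idea is that not-at-fault drivers in two-unit crashes proxy for exposure, so a group's relative crash risk can be estimated from crash records alone: its share of at-fault involvements divided by its share of not-at-fault involvements. A sketch with hypothetical counts (the figures below are illustrative, not the Brisbane data):

```python
def relative_risk_quasi_induced(at_fault, not_at_fault, group):
    """Relative accident involvement ratio for one road-user group:
    share of at-fault crashes divided by share of not-at-fault
    (exposure-proxy) crashes. A ratio above 1 indicates
    over-involvement relative to exposure."""
    p_fault = at_fault[group] / sum(at_fault.values())
    p_exposure = not_at_fault[group] / sum(not_at_fault.values())
    return p_fault / p_exposure
```

With hypothetical counts of 30 at-fault and 10 not-at-fault motorcycle involvements out of 100 crashes each, motorcyclists would show a relative risk of 3.0, i.e. three times the involvement expected from their exposure share.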
Abstract:
Navigational safety analysis relying on collision statistics is often hampered by the low number of observations. A promising alternative approach that overcomes this problem is proposed in this paper. By analysing critical vessel interactions, this approach proactively measures collision risk in port waters. The proposed method is illustrated through quantitative measurement of collision risks in Singapore port fairways, and validated by examining correlations between the measured risks and those perceived by pilots. This method is an ethically appealing alternative to collision-based analysis for fast, reliable and effective safety assessment, and thus possesses great potential for managing collision risks in port waters.