363 results for Predictive Mean Squared Efficiency
Abstract:
Analytical expressions are derived for the mean and variance of estimates of the bispectrum of a real time series, assuming a cosinusoidal model. The effects of spectral leakage, inherent in the discrete Fourier transform operation when the modes present in the signal have a nonintegral number of wavelengths in the record, are included in the analysis. A single phase-coupled triad of modes can cause the bispectrum to have a nonzero mean value over the entire region of computation owing to leakage. The variance of bispectral estimates in the presence of leakage has contributions from individual modes and from triads of phase-coupled modes. Time-domain windowing reduces the leakage. The theoretical expressions for the mean and variance of bispectral estimates are derived in terms of a function dependent on an arbitrary symmetric time-domain window applied to the record, the number of data points, and the statistics of the phase coupling among triads of modes. The theoretical results are verified by numerical simulations for simple test cases and applied to laboratory data to examine phase coupling in a hypothesis-testing framework.
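As a rough illustration of the estimator discussed above, the sketch below forms a direct bispectral estimate by averaging the triple product X(f1)X(f2)X*(f1+f2) over windowed segments of a simulated record containing one phase-coupled triad. The segment length, Hann window and noise level are illustrative choices, not the paper's exact settings.

```python
import numpy as np

def bispectrum_estimate(x, nfft=128, window=np.hanning):
    """Direct bispectral estimate averaged over non-overlapping segments.

    Segmenting, the Hann window and nfft are illustrative choices; the
    estimator itself is the standard triple-product average
    B(f1, f2) = E[ X(f1) X(f2) conj(X(f1 + f2)) ].
    """
    x = np.asarray(x, dtype=float)
    nseg = len(x) // nfft
    w = window(nfft)
    nb = nfft // 2            # compute over the principal frequency region
    B = np.zeros((nb, nb), dtype=complex)
    for k in range(nseg):
        seg = x[k * nfft:(k + 1) * nfft]
        seg = (seg - seg.mean()) * w          # time-domain windowing reduces leakage
        X = np.fft.fft(seg)
        for i in range(nb):
            for j in range(nb):
                if i + j < nfft:
                    B[i, j] += X[i] * X[j] * np.conj(X[i + j])
    return B / max(nseg, 1)

# Example: a phase-coupled triad (f1, f2, f1+f2) gives a bispectral peak near (f1, f2)
fs, n = 256, 4096
t = np.arange(n) / fs
f1, f2 = 20.0, 35.0
phi1, phi2 = 0.3, 1.1
x = (np.cos(2 * np.pi * f1 * t + phi1)
     + np.cos(2 * np.pi * f2 * t + phi2)
     + np.cos(2 * np.pi * (f1 + f2) * t + phi1 + phi2)   # phase-coupled component
     + 0.1 * np.random.randn(n))
B = bispectrum_estimate(x, nfft=256)
print(np.unravel_index(np.abs(B).argmax(), B.shape))      # peak near bins (20, 35)
```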
Abstract:
Multivariate volatility forecasts are an important input in many financial applications, in particular portfolio optimisation problems. Given the number of models available and the range of loss functions to discriminate between them, selecting the optimal forecasting model is clearly challenging. The aim of this thesis is to thoroughly investigate how effective many commonly used statistical (MSE and QLIKE) and economic (portfolio variance and portfolio utility) loss functions are at discriminating between competing multivariate volatility forecasts. An analytical investigation of the loss functions is performed to determine whether they identify the correct forecast as the best forecast. This is followed by an extensive simulation study that examines the ability of the loss functions to consistently rank forecasts, and their statistical power within tests of predictive ability. For the tests of predictive ability, the model confidence set (MCS) approach of Hansen, Lunde and Nason (2003, 2011) is employed. An empirical study then investigates whether the simulation findings hold in a realistic setting. In light of these earlier studies, a major empirical study seeks to identify the set of superior multivariate volatility forecasting models from 43 models that use either daily squared returns or realised volatility to generate forecasts. This study also assesses how the choice of volatility proxy affects the ability of the statistical loss functions to discriminate between forecasts. Analysis of the loss functions shows that QLIKE, MSE and portfolio variance can discriminate between multivariate volatility forecasts, while portfolio utility cannot. An examination of the effective loss functions shows that they can all identify the correct forecast at a point in time; however, their ability to discriminate between competing forecasts varies. QLIKE is identified as the most effective loss function, followed by portfolio variance and then MSE. The major empirical analysis reports that the optimal set of multivariate volatility forecasting models includes forecasts generated from both daily squared returns and realised volatility. Furthermore, it finds that the volatility proxy affects the statistical loss functions’ ability to discriminate between forecasts in tests of predictive ability. These findings deepen our understanding of how to choose between competing multivariate volatility forecasts.
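For reference, the following sketch evaluates the statistical (MSE, QLIKE) and portfolio-variance loss functions named above against a covariance proxy for two competing forecasts. The specific matrix forms are common conventions for these losses and may differ in detail from those used in the thesis.

```python
import numpy as np

def mse_loss(H_forecast, Sigma_proxy):
    """Matrix MSE: squared Frobenius norm of the forecast error."""
    d = H_forecast - Sigma_proxy
    return np.sum(d * d)

def qlike_loss(H_forecast, Sigma_proxy):
    """Multivariate QLIKE: log|H| + tr(H^{-1} Sigma); minimised when H = Sigma."""
    sign, logdet = np.linalg.slogdet(H_forecast)
    return logdet + np.trace(np.linalg.solve(H_forecast, Sigma_proxy))

def min_variance_portfolio_loss(H_forecast, Sigma_proxy):
    """Portfolio-variance loss: variance under the proxy of the global
    minimum-variance portfolio constructed from the forecast."""
    n = H_forecast.shape[0]
    ones = np.ones(n)
    w = np.linalg.solve(H_forecast, ones)
    w /= w.sum()
    return w @ Sigma_proxy @ w

# Toy comparison of two competing forecasts against a realised-covariance proxy
Sigma = np.array([[1.0, 0.3], [0.3, 2.0]])       # proxy (e.g. realised covariance)
H_good = np.array([[1.1, 0.25], [0.25, 1.9]])    # close to the proxy
H_bad = np.array([[2.0, -0.5], [-0.5, 1.0]])     # badly misspecified
for name, loss in [("MSE", mse_loss), ("QLIKE", qlike_loss),
                   ("Portfolio variance", min_variance_portfolio_loss)]:
    print(name, loss(H_good, Sigma), loss(H_bad, Sigma))
```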
Abstract:
In recent years, the development of Unmanned Aerial Vehicles (UAVs) has become a significant growing segment of the global aviation industry. These vehicles are developed with the intention of operating in regions where the presence of onboard human pilots is either too risky or unnecessary. Their popularity with both the military and civilian sectors has seen UAVs used in a diverse range of applications, from reconnaissance and surveillance tasks for the military to civilian uses such as aid relief and monitoring tasks. Efficient energy utilisation on a UAV is essential to its functioning, often to achieve the operational goals of range, endurance and other specific mission requirements. Due to the limitations of the space available and the mass budget on the UAV, there is often a delicate balance between the onboard energy available (i.e. fuel) and achieving the operational goals. This thesis presents an investigation of methods for increasing the energy efficiency of UAVs. One method is the development of a Mission Waypoint Optimisation (MWO) procedure for a small fixed-wing UAV, focusing on improving onboard fuel economy. MWO takes a pre-specified set of waypoints and modifies them within certain limits to achieve its optimisation objectives of minimising/maximising specific parameters. A simulation model of a UAV was developed in the MATLAB Simulink environment, utilising the AeroSim Blockset and the in-built Aerosonde UAV block and its parameters. This simulation model was separately integrated with a multi-objective Evolutionary Algorithm (MOEA) optimiser and a Sequential Quadratic Programming (SQP) solver to perform single-objective and multi-objective optimisation of a set of real-world waypoints in order to minimise onboard fuel consumption. The results of both procedures show potential for reducing fuel consumption on a UAV in a flight mission. Additionally, a parallel Hybrid-Electric Propulsion System (HEPS) for a small fixed-wing UAV incorporating an Ideal Operating Line (IOL) control strategy was developed. An IOL analysis of an Aerosonde engine was performed, and the most efficient points of operation for this engine (i.e. those providing the greatest torque output for the least fuel consumption) were determined. Simulation models of the components in a HEPS were designed and constructed in the MATLAB Simulink environment. It was demonstrated through simulation that a UAV with the current HEPS configuration was capable of achieving a fuel saving of 6.5% compared to the internal combustion engine (ICE)-only configuration. These components form the basis for the development of a complete simulation model of a Hybrid-Electric UAV (HEUAV).
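A schematic sketch of the single-objective waypoint optimisation step is given below, using SciPy's SLSQP solver in place of the Simulink/AeroSim flight model. The fuel-consumption function is a crude stand-in for the Aerosonde simulation, and the ±500 m limit on waypoint movement is an assumed illustrative bound, not a figure from the thesis.

```python
import numpy as np
from scipy.optimize import minimize

# Given waypoints (x, y in metres); in the thesis these come from a real mission plan.
waypoints = np.array([[0.0, 0.0], [4000.0, 1000.0], [8000.0, -500.0], [12000.0, 0.0]])

def fuel_consumed(flat_wp):
    """Placeholder cost: the actual work evaluates fuel via the Simulink/AeroSim
    flight simulation; here fuel is crudely proxied by path length plus a turn penalty."""
    wp = flat_wp.reshape(-1, 2)
    legs = np.diff(wp, axis=0)
    dist = np.sum(np.linalg.norm(legs, axis=1))
    headings = np.arctan2(legs[:, 1], legs[:, 0])
    turn_penalty = np.sum(np.abs(np.diff(headings))) * 200.0
    return dist + turn_penalty

# Each interior waypoint may move at most +/- 500 m from its nominal position
# (assumed limit); the first and last waypoints are held fixed.
bounds = []
for i, (x, y) in enumerate(waypoints):
    if i in (0, len(waypoints) - 1):
        bounds += [(x, x), (y, y)]
    else:
        bounds += [(x - 500, x + 500), (y - 500, y + 500)]

res = minimize(fuel_consumed, waypoints.ravel(), method="SLSQP", bounds=bounds)
print("baseline cost:", fuel_consumed(waypoints.ravel()))
print("optimised cost:", res.fun)
print("optimised waypoints:\n", res.x.reshape(-1, 2))
```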
Abstract:
PURPOSE: To examine the relationship between contact lens (CL) case contamination and various potential predictive factors. METHODS: 74 subjects were fitted with lotrafilcon B (CIBA Vision) CLs on a daily wear basis for 1 month. Subjects were randomly assigned one of two polyhexamethylene biguanide (PHMB) preserved disinfecting solutions with the corresponding regular lens case. Clinical evaluations were conducted at lens delivery and after 1 month, when cases were collected for microbial culture. A CL care non-compliance score was determined through administration of a questionnaire, and the volume of solution used was calculated for each subject. Data were examined using backward stepwise binary logistic regression. RESULTS: 68% of cases were contaminated; 35% were moderately or heavily contaminated and 36% contained gram-negative bacteria. Case contamination was significantly associated with subjective dryness symptoms (OR 4.22, CI 1.37–13.01) (P<0.05). There was no association between contamination and subject age, ethnicity, gender, average wearing time, amount of solution used, non-compliance score, CL power or subjective redness (P>0.05). The effect of lens care system on case contamination approached significance (P=0.07). Failure to rinse the case with disinfecting solution following CL insertion (OR 2.51, CI 0.52–12.09) and not air drying the case (OR 2.31, CI 0.39–13.35) were positively associated with contamination; however, these associations did not reach statistical significance. CONCLUSIONS: Our results suggest that case contamination may influence subjective comfort. It is difficult to predict the development of case contamination from a variety of clinical factors. The efficacy of CL solutions, bacterial resistance to disinfection and biofilm formation are likely to play a role. Further evaluation of these factors will improve our understanding of the development of case contamination and its clinical impact.
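For readers unfamiliar with how odds ratios of the kind reported above are obtained, the brief sketch below fits a binary logistic regression with statsmodels and derives odds ratios with 95% confidence intervals. The data frame and variable names are hypothetical, and the backward stepwise elimination used in the study is not reproduced.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical data frame: one row per subject, 1 = case contaminated.
rng = np.random.default_rng(0)
n = 74
df = pd.DataFrame({
    "contaminated": rng.integers(0, 2, n),
    "dryness": rng.integers(0, 2, n),        # subjective dryness symptoms
    "age": rng.normal(30, 8, n),
    "noncompliance": rng.normal(3, 1, n),
})

X = sm.add_constant(df[["dryness", "age", "noncompliance"]])
fit = sm.Logit(df["contaminated"], X).fit(disp=False)

# Odds ratios and 95% confidence intervals from the fitted coefficients
or_table = pd.DataFrame({
    "OR": np.exp(fit.params),
    "CI_low": np.exp(fit.conf_int()[0]),
    "CI_high": np.exp(fit.conf_int()[1]),
    "p": fit.pvalues,
})
print(or_table)
```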
Abstract:
The concept of recovery is now widely promoted as the guiding principle for the provision of mental health services in Australia and overseas. While there is increasing pressure on service providers to ensure that services are recovery oriented, the way in which recovery-based practice is operationalized at the coalface presents a number of challenges. These are discussed in the context of five key questions that address (i) the appropriateness of recovery as a focus for service delivery, (ii) the distinction between recovery as a process and an outcome, (iii) the assessment of recovery initiatives, (iv) the alignment of recovery with current service delivery models, and (v) the risks associated with recovery-based practice. It is argued that these questions provide a framework for a debate that must extend beyond patients and providers of mental health services to the broader public, whose attitudes will ultimately determine the possibilities and limits of recovery-oriented practice.
Abstract:
INTRODUCTION: Workforce planning for first aid and medical coverage of mass gatherings is hampered by limited research. In particular, the characteristics and likely presentation patterns of low-volume mass gatherings of between several hundred and several thousand people are poorly described in the existing literature. OBJECTIVES: This study was conducted to: 1. Describe key patient and event characteristics of medical presentations at a series of mass gatherings, including events smaller than those previously described in the literature; 2. Determine whether event type and event size affect the mean number of patients presenting for treatment per event, and specifically, whether the 1:2,000 deployment rule used by St John Ambulance Australia is appropriate; and 3. Identify factors that are predictive of injury at mass gatherings. METHODS: A retrospective, observational, case-series design was used to examine all cases treated by two Divisions of St John Ambulance (Queensland) in the greater metropolitan Brisbane region over a three-year period (01 January 2002-31 December 2004). Data were obtained from routinely collected patient treatment forms completed by St John officers at the time of treatment. Event-related data (e.g., weather, event size) were obtained from event forms designed for this study. Outcome measures included: total and average number of patient presentations for each event; event type; and event size category. Descriptive analyses were conducted using chi-square tests, and mean presentations per event and event type were investigated using Kruskal-Wallis tests. Logistic regression analyses were used to identify variables independently associated with injury presentation (compared with non-injury presentations). RESULTS: Over the three-year study period, St John Ambulance officers treated 705 patients over 156 separate events. The mean number of patients who presented with any medical condition at small events (less than or equal to 2,000 attendees) did not differ significantly from that of large (>2,000 attendees) events (4.44 vs. 4.67, F = 0.72, df = 1, 154, p = 0.79). Logistic regression analyses indicated that presentation with an injury compared with non-injury was independently associated with male gender, winter season, and sporting events, even after adjusting for relevant variables. CONCLUSIONS: In this study of low-volume mass gatherings, a similar number of patients sought medical treatment at small (<2,000 patrons) and large (>2,000 patrons) events. This demonstrates that for low-volume mass gatherings, planning based solely on anticipated event size may be flawed and could lead to inappropriate levels of first-aid coverage. This study also highlights the importance of considering other factors, such as event type and patient characteristics, when determining appropriate first-aid resourcing for low-volume events. Additionally, identification of factors predictive of injury presentations at mass gatherings has the potential to significantly enhance the ability of event coordinators to plan effective prevention strategies and response capability for these events.
Abstract:
This chapter provides an analysis of feedback from key stakeholders, collected as part of a research project, on the problems and tensions evident in the collective work practices of learning advisers employed in learning assistance services at an Australian metropolitan university (Peach, 2003). The term 'learning assistance' is used in the Australian higher education sector generally to refer to student support services that include assistance with academic writing and other study skills. The aim of the study was to help learning advisers and other key stakeholders develop a better understanding of the work activity with a view to using this understanding to generate improvements in service provision. Over twenty problems and associated tensions were identified through stakeholder feedback; however, the focus of this chapter is the analysis of tensions related to a cluster of problems referred to as cost-efficiency versus quality service. Theoretical modelling derived from the tools made available through cultural-historical activity theory and expansive visibilisation (Engestrom and Miettinen, 1999), together with excerpts from data, is used to illustrate how different understandings of the purpose of learning assistance services impact on the work practices of learning advisers and create problems and tensions in relation to the type of service available (including use of technology), the level of service available, and learning adviser workload.
Abstract:
The motivation for this study stems from the results reported in the Excellence in Research for Australia (ERA) 2010 report. The report showed that only 12 universities performed research at or above international standards, of which the Group of Eight (G8) universities filled the top eight spots. While the performance of universities was based on the number of research outputs, total research income and other quantitative indicators, no measure of efficiency or productivity was considered. The objectives of this paper are twofold. First, to provide a review of the research performance of 37 Australian universities using the data envelopment analysis (DEA) bootstrap approach of Simar and Wilson (2007). Second, to determine the sources of productivity drivers by regressing the efficiency scores against a set of environmental variables.
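A minimal sketch of the core DEA calculation is shown below: an output-oriented, variable-returns-to-scale efficiency score obtained by solving a linear program with SciPy. The input/output data are invented, and the Simar-Wilson bootstrap and second-stage regression used in the paper are not shown.

```python
import numpy as np
from scipy.optimize import linprog

def dea_output_efficiency(X, Y, o):
    """Output-oriented, variable-returns-to-scale DEA score for unit `o`.

    X: (n_units, n_inputs), Y: (n_units, n_outputs).
    Returns phi >= 1; efficiency is usually reported as 1/phi.
    """
    n, m = X.shape
    s = Y.shape[1]
    # decision variables: [phi, lambda_1, ..., lambda_n]; maximise phi
    c = np.zeros(1 + n)
    c[0] = -1.0
    A_ub, b_ub = [], []
    for i in range(m):                        # inputs: sum_j lambda_j x_ji <= x_oi
        A_ub.append(np.concatenate(([0.0], X[:, i])))
        b_ub.append(X[o, i])
    for r in range(s):                        # outputs: phi*y_or - sum_j lambda_j y_jr <= 0
        A_ub.append(np.concatenate(([Y[o, r]], -Y[:, r])))
        b_ub.append(0.0)
    A_eq = [np.concatenate(([0.0], np.ones(n)))]   # VRS convexity: sum lambda = 1
    b_eq = [1.0]
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

# Invented example: inputs = (staff, research income), outputs = (publications)
X = np.array([[100.0, 5.0], [120.0, 8.0], [80.0, 4.0], [150.0, 10.0]])
Y = np.array([[200.0], [260.0], [150.0], [240.0]])
for o in range(len(X)):
    phi = dea_output_efficiency(X, Y, o)
    print(f"unit {o}: phi = {phi:.3f}, efficiency = {1 / phi:.3f}")
```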
Abstract:
Hypertrophic scars arise when there is an overproduction of collagen during wound healing. They are often associated with poor regulation of the rate of programmed cell death (apoptosis) of the cells synthesizing the collagen, or with an exuberant inflammatory response that prolongs collagen production and increases wound contraction. Severe contractures that occur, for example, after a deep burn can cause loss of function, especially if the wound is over a joint such as the elbow or knee. Recently, we have developed a morphoelastic mathematical model for dermal repair that incorporates the chemical, cellular and mechanical aspects of dermal wound healing. Using this model, we examine pathological scarring in dermal repair, first by assuming a smaller than usual apoptotic rate for myofibroblasts, and then by considering a prolonged inflammatory response, in an attempt to determine a possible optimal intervention strategy to promote normal repair or terminate the fibrotic scarring response. Our model predicts that in both cases it is best to apply the intervention strategy early in the wound healing response. Further, the earlier an intervention is made, the less aggressive the intervention required. Finally, if intervention is conducted at a late time during healing, a significant intervention is required; however, there is a threshold concentration of the drug or therapy applied, above which minimal further improvement to wound repair is obtained.
Abstract:
Networked control systems (NCSs) offer many advantages over conventional control; however, they also present challenging problems such as network-induced delay and packet losses. This paper proposes an approach of predictive compensation for simultaneous network-induced delays and packet losses. Unlike the majority of existing NCS control methods, the proposed approach addresses the co-design of both network and controller. It also relaxes the requirements for precise process models and a full understanding of NCS network dynamics. For a series of possible sensor-to-actuator delays, the controller computes a series of corresponding redundant control values. It then sends those control values in a single packet to the actuator. Upon receiving the control packet, the actuator measures the actual sensor-to-actuator delay and computes the control signals from the control packet. When packet dropout occurs, the actuator utilizes past control packets to generate an appropriate control signal. The effectiveness of the approach is demonstrated through examples.
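The packet-based compensation logic described above can be sketched as follows: the controller precomputes a control value for each candidate sensor-to-actuator delay and sends them in one packet, while the actuator selects the value matching the measured delay or falls back to the most recent packet on dropout. The plant model, control law and delay bound below are illustrative assumptions, not the paper's design.

```python
import random

MAX_DELAY_STEPS = 3   # assumed bound on sensor-to-actuator delay (in sample periods)

def controller_packet(x_measured, setpoint, plant_predict, control_law):
    """Build one packet: a redundant control value for each candidate delay d.

    For delay d the controller predicts the state d steps ahead (simplified here)
    and computes the matching control value."""
    packet = []
    x_pred = x_measured
    for d in range(MAX_DELAY_STEPS + 1):
        u_d = control_law(x_pred, setpoint)
        packet.append(u_d)
        x_pred = plant_predict(x_pred, u_d)   # roll the prediction one step forward
    return packet

def actuator_select(packet, measured_delay, last_packet):
    """Pick the control value matching the measured delay; on packet loss reuse
    the most recent packet, indexing one step further into the future."""
    if packet is None:                        # dropout: fall back to stored packet
        packet = last_packet
        measured_delay = min(measured_delay + 1, MAX_DELAY_STEPS)
    return packet[min(measured_delay, MAX_DELAY_STEPS)]

# Toy first-order plant x+ = 0.9 x + 0.5 u, proportional control toward the setpoint
plant = lambda x, u: 0.9 * x + 0.5 * u
ctrl = lambda x, r: 0.8 * (r - x)

x, r, last_pkt = 0.0, 1.0, [0.0] * (MAX_DELAY_STEPS + 1)
for k in range(20):
    pkt = controller_packet(x, r, plant, ctrl)
    if random.random() < 0.2:                 # 20% packet loss
        pkt = None
    delay = random.randint(0, MAX_DELAY_STEPS)
    u = actuator_select(pkt, delay, last_pkt)
    last_pkt = pkt if pkt is not None else last_pkt
    x = plant(x, u)
print("final state:", round(x, 3), "(setpoint 1.0)")
```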
Abstract:
The main aim of this thesis is to analyse and optimise a public hospital Emergency Department. The Emergency Department (ED) is a complex system with limited resources and a high demand for these resources. Adding to the complexity is the stochastic nature of almost every element and characteristic in the ED. The interaction with other functional areas also complicates the system, as these areas have a huge impact on the ED while the ED is powerless to change them. It is therefore imperative that operations research (OR) be applied to the ED to improve its performance within the constraints of the system. The main characteristics of the system to optimise included tardiness, adherence to waiting time targets, access block and length of stay. A validated and verified simulation model was built to model the real-life system. This enabled detailed analysis of resources and flow without disruption to the actual ED. A wide range of different policies for the ED and a variety of resources were investigated. Of particular interest were the number and type of beds in the ED and the shift times of physicians. One point worth noting was that neither of these resources works in isolation, and for optimisation of the system both resources need to be investigated in tandem. The ED was likened to a flow shop scheduling problem, with the patients and beds being synonymous with the jobs and machines typically found in manufacturing problems. This enabled an analytic scheduling approach. Constructive heuristics were developed to reactively schedule the system in real time, and these were able to improve the performance of the system. Metaheuristics that optimised the system were also developed and analysed. An innovative hybrid Simulated Annealing and Tabu Search algorithm was developed that outperformed both simulated annealing and tabu search algorithms by combining some of their features. The new algorithm achieves a better solution and does so in a shorter time.
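A compact sketch of a hybrid simulated annealing / tabu search loop of the general kind described is given below: tabu-filtered neighbourhood moves combined with SA-style acceptance of non-improving candidates. The objective and neighbourhood are toy stand-ins rather than the ED scheduling model, and the combination of features shown is one plausible reading of such a hybrid, not the thesis algorithm itself.

```python
import math
import random
from collections import deque

def hybrid_sa_tabu(objective, neighbour, x0, iters=2000,
                   t0=10.0, cooling=0.995, tabu_len=50):
    """Hybrid metaheuristic: tabu-filtered neighbourhood moves with
    simulated-annealing acceptance of non-improving candidates."""
    x, fx = x0, objective(x0)
    best, fbest = x, fx
    tabu = deque(maxlen=tabu_len)
    t = t0
    for _ in range(iters):
        cand = neighbour(x)
        if tuple(cand) in tabu and objective(cand) >= fbest:
            continue                           # tabu move with no aspiration
        fcand = objective(cand)
        # SA acceptance: always take improvements, sometimes take worse moves
        if fcand <= fx or random.random() < math.exp((fx - fcand) / max(t, 1e-9)):
            x, fx = cand, fcand
            tabu.append(tuple(x))
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest

# Toy scheduling-flavoured objective: order jobs to minimise total weighted position
weights = [5, 1, 4, 2, 3, 6, 2, 4]
def cost(perm):
    return sum(w * (i + 1) for i, w in enumerate(weights[j] for j in perm))
def swap_neighbour(perm):
    p = list(perm)
    i, j = random.sample(range(len(p)), 2)
    p[i], p[j] = p[j], p[i]
    return p

start = list(range(len(weights)))
best, fbest = hybrid_sa_tabu(cost, swap_neighbour, start)
print(best, fbest)
```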
Abstract:
Proteases regulate a spectrum of diverse physiological processes, and dysregulation of proteolytic activity drives a plethora of pathological conditions. Understanding protease function is essential to appreciating many aspects of normal physiology and progression of disease. Consequently, development of potent and specific inhibitors of proteolytic enzymes is vital to provide tools for the dissection of protease function in biological systems and for the treatment of diseases linked to aberrant proteolytic activity. The studies in this thesis describe the rational design of potent inhibitors of three proteases that are implicated in disease development. Additionally, key features of the interaction of proteases and their cognate inhibitors or substrates are analysed and a series of rational inhibitor design principles are expounded and tested. Rational design of protease inhibitors relies on a comprehensive understanding of protease structure and biochemistry. Analysis of known protease cleavage sites in proteins and peptides is a commonly used source of such information. However, model peptide substrate and protein sequences have widely differing levels of backbone constraint and hence can adopt highly divergent structures when binding to a protease’s active site. This may result in identical sequences in peptides and proteins having different conformations and diverse spatial distribution of amino acid functionalities. Regardless of this, protein and peptide cleavage sites are often regarded as being equivalent. One of the key findings in the following studies is a definitive demonstration of the lack of equivalence between these two classes of substrate and invalidation of the common practice of using the sequences of model peptide substrates to predict cleavage of proteins in vivo. Another important feature for protease substrate recognition is subsite cooperativity. This type of cooperativity is commonly referred to as protease or substrate binding subsite cooperativity and is distinct from allosteric cooperativity, where binding of a molecule distant from the protease active site affects the binding affinity of a substrate. Subsite cooperativity may be intramolecular where neighbouring residues in substrates are interacting, affecting the scissile bond’s susceptibility to protease cleavage. Subsite cooperativity can also be intermolecular where a particular residue’s contribution to binding affinity changes depending on the identity of neighbouring amino acids. Although numerous studies have identified subsite cooperativity effects, these findings are frequently ignored in investigations probing subsite selectivity by screening against diverse combinatorial libraries of peptides (positional scanning synthetic combinatorial library; PS-SCL). This strategy for determining cleavage specificity relies on the averaged rates of hydrolysis for an uncharacterised ensemble of peptide sequences, as opposed to the defined rate of hydrolysis of a known specific substrate. Further, since PS-SCL screens probe the preference of the various protease subsites independently, this method is inherently unable to detect subsite cooperativity. However, mean hydrolysis rates from PS-SCL screens are often interpreted as being comparable to those produced by single peptide cleavages. 
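The point about positional averaging can be made concrete with a toy example. In the sketch below (invented rates, two subsites with two residues each), a strongly cooperative sequence is the fastest-cleaved substrate, yet the PS-SCL-style positional averages point to a different, merely adequate sequence; only ranking individually synthesised sequences, as in an SML screen, recovers it.

```python
import numpy as np

# Invented hydrolysis rates for every P2-P1 combination (arbitrary units).
# (A, K) is a strongly cooperative pair; the (G, ...) sequences are uniformly decent.
rates = {
    ("A", "R"): 1.0, ("A", "K"): 10.0,
    ("G", "R"): 8.0, ("G", "K"): 8.0,
}

# PS-SCL-style readout: average rate with one position fixed and the other mixed.
psscl_pick = []
for pos, residues in [(0, "AG"), (1, "RK")]:
    means = {res: np.mean([v for k, v in rates.items() if k[pos] == res])
             for res in residues}
    print(f"position {pos}: positional averages {means}")
    psscl_pick.append(max(means, key=means.get))

# SML-style readout: rank every individually synthesised sequence.
sml_best = max(rates, key=rates.get)
print("PS-SCL predicted best:", tuple(psscl_pick), "rate", rates[tuple(psscl_pick)])
print("true best (SML):      ", sml_best, "rate", rates[sml_best])
```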
Before this study, no large systematic evaluation had been made to determine the level of correlation between protease selectivity as predicted by screening against a library of combinatorial peptides and cleavage of individual peptides. This subject is specifically explored in the studies described here. In order to establish whether PS-SCL screens could accurately determine the substrate preferences of proteases, a systematic comparison of data from PS-SCLs with libraries containing individually synthesised peptides (sparse matrix library; SML) was carried out. These SML libraries were designed to include all possible sequence combinations of the residues that were suggested to be preferred by a protease using the PS-SCL method. SML screening against the three serine proteases kallikrein 4 (KLK4), kallikrein 14 (KLK14) and plasmin revealed highly preferred peptide substrates that could not have been deduced by PS-SCL screening alone. Comparing protease subsite preference profiles from screens of the two types of peptide libraries showed that the most preferred substrates were not detected by PS-SCL screening, as a consequence of intermolecular cooperativity being negated by the very nature of PS-SCL screening. Sequences that are highly favoured as a result of intermolecular cooperativity achieve optimal protease subsite occupancy, and thereby interact with very specific determinants of the protease. Identifying these substrate sequences is important since they may be used to produce potent and selective inhibitors of proteolytic enzymes. This study found that highly favoured substrate sequences that relied on intermolecular cooperativity allowed for the production of potent inhibitors of KLK4, KLK14 and plasmin. Peptide aldehydes based on preferred plasmin sequences produced high-affinity transition state analogue inhibitors for this protease. The most potent of these maintained specificity over plasma kallikrein (known to have a very similar substrate preference to plasmin). Furthermore, the efficiency of this inhibitor in blocking fibrinolysis in vitro was comparable to aprotinin, which previously saw clinical use to reduce perioperative bleeding. One substrate sequence particularly favoured by KLK4 was substituted into the 14-amino-acid circular sunflower trypsin inhibitor (SFTI). This resulted in a highly potent and selective inhibitor (SFTI-FCQR) which attenuated protease-activated receptor signalling by KLK4 in vitro. Moreover, SFTI-FCQR and paclitaxel synergistically reduced growth of ovarian cancer cells in vitro, making this inhibitor a lead compound for further therapeutic development. Similar incorporation of a preferred KLK14 amino acid sequence into the SFTI scaffold produced a potent inhibitor for this protease. However, the conformationally constrained SFTI backbone enforced a different intramolecular cooperativity, which masked a KLK14-specific determinant. As a consequence, the level of selectivity achievable was lower than that found for the KLK4 inhibitor. Standard mechanism inhibitors such as SFTI rely on a stable acyl-enzyme intermediate for high-affinity binding. This is achieved by a conformationally constrained canonical binding loop that allows for reformation of the scissile peptide bond after cleavage. Amino acid substitutions within the inhibitor to target a particular protease may compromise structural determinants that support the rigidity of the binding loop and thereby prevent the engineered inhibitor from reaching its full potential.
An in silico analysis was carried out to examine the potential for further improvements to the potency and selectivity of the SFTI-based KLK4 and KLK14 inhibitors. Molecular dynamics simulations suggested that the substitutions within SFTI required to target KLK4 and KLK14 had compromised the intramolecular hydrogen bond network of the inhibitor and caused a concomitant loss of binding loop stability. Furthermore, in silico amino acid substitution revealed a consistent correlation between a higher frequency of formation and number of internal hydrogen bonds in SFTI variants and lower inhibition constants. These predictions allowed for the production of second-generation inhibitors with enhanced binding affinity toward both targets and highlighted the importance of considering intramolecular cooperativity effects when engineering proteins or circular peptides to target proteases. The findings from this study show that although PS-SCLs are a useful tool for high-throughput screening of approximate protease preference, later refinement by SML screening is needed to reveal optimal subsite occupancy due to cooperativity in substrate recognition. This investigation has also demonstrated the importance of maintaining structural determinants of backbone constraint and conformation when engineering standard mechanism inhibitors for new targets. Combined, these results show that backbone conformation and amino acid cooperativity have more prominent roles than previously appreciated in determining substrate/inhibitor specificity and binding affinity. The three key inhibitors designed during this investigation are now being developed as lead compounds for cancer chemotherapy, control of fibrinolysis and cosmeceutical applications. These compounds form the basis of a portfolio of intellectual property which will be further developed in the coming years.
Abstract:
The issue of a more sustainable environment has been the aim of many governments and institutions for decades. Current research and literature have shown the continuing impact of global development and population increases on the planet as a whole. Issues such as carbon emissions, global warming, resource sustainability, industrial pollution, waste management and the decline in scarce resources, including food, are now realities and are being addressed at various levels. All levels of government, business and the public now share equal responsibility for maintaining a sustainable environment. Although these issues of global warming, climate change and the overuse of scarce resources are well documented and constantly covered in all media forms, public attitudes to them vary significantly. Despite being aware of these issues, many individuals consider that the problem is one for governments to tackle and that their individual efforts are not important or necessary. In many cases individuals are concerned with sustainability, but are either not in a position to take action due to economic circumstances or are not prepared to offset sustainability gains with personal interests...
Abstract:
This paper studies the missing covariate problem, which is often encountered in survival analysis. Three covariate imputation methods are employed in the study, and the effectiveness of each method is evaluated within the hazard prediction framework. Data from a typical engineering asset are used in the case study. Covariate values in some time steps are deliberately discarded to generate an incomplete covariate set. It is found that although the mean imputation method is simpler than the others for solving missing covariate problems, the results it produces can differ substantially from the true values of the missing covariates. This study also shows that, in general, results obtained from the regression method are more accurate than those of the mean imputation method, but at the cost of higher computational expense. The Gaussian Mixture Model (GMM) method is found to be the most effective of the three in terms of both computational efficiency and prediction accuracy.
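A small sketch comparing the three imputation strategies on an artificially masked covariate column is given below; scikit-learn's GaussianMixture stands in for the GMM-based imputation, and the hazard-prediction step that follows in the paper is not reproduced. The data, masking rate and mixture size are invented for illustration.

```python
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import LinearRegression
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Two correlated covariates observed over time; mask some values of the second.
n = 300
c1 = rng.normal(0, 1, n)
c2 = 0.8 * c1 + rng.normal(0, 0.4, n)
missing = rng.random(n) < 0.2                      # ~20% deliberately discarded
c2_obs = np.where(missing, np.nan, c2)

# 1) Mean imputation: replace every missing value with the observed mean.
mean_imp = np.where(missing, np.nanmean(c2_obs), c2_obs)

# 2) Regression imputation: predict the missing covariate from the observed one.
reg = LinearRegression().fit(c1[~missing].reshape(-1, 1), c2[~missing])
reg_imp = np.where(missing, reg.predict(c1.reshape(-1, 1)), c2_obs)

# 3) GMM imputation: fit a mixture to complete cases, then impute with the
#    conditional mean of c2 given c1 under the fitted mixture.
gmm = GaussianMixture(n_components=2, random_state=0).fit(
    np.column_stack([c1[~missing], c2[~missing]]))

def gmm_conditional_mean(x1):
    means, covs, w = gmm.means_, gmm.covariances_, gmm.weights_
    # responsibility of each component for the observed coordinate
    resp = np.array([w[k] * norm.pdf(x1, means[k, 0], np.sqrt(covs[k, 0, 0]))
                     for k in range(len(w))])
    resp /= resp.sum()
    cond = [means[k, 1] + covs[k, 1, 0] / covs[k, 0, 0] * (x1 - means[k, 0])
            for k in range(len(w))]
    return float(np.dot(resp, cond))

gmm_imp = c2_obs.copy()
gmm_imp[missing] = [gmm_conditional_mean(v) for v in c1[missing]]

for name, imp in [("mean", mean_imp), ("regression", reg_imp), ("GMM", gmm_imp)]:
    rmse = np.sqrt(np.mean((imp[missing] - c2[missing]) ** 2))
    print(f"{name:10s} imputation RMSE on masked values: {rmse:.3f}")
```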