869 results for PROPORTIONAL HAZARD AND ACCELERATED FAILURE MODELS
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-04
Abstract:
Risk assessment systems for introduced species are being developed and applied globally, but methods for rigorously evaluating them are still in their infancy. We explore classification and regression tree models as an alternative to the current Australian Weed Risk Assessment system, and demonstrate how the performance of screening tests for unwanted alien species may be quantitatively compared using receiver operating characteristic (ROC) curve analysis. The optimal classification tree model for predicting weediness included just four out of a possible 44 attributes of introduced plants examined, namely: (i) intentional human dispersal of propagules; (ii) evidence of naturalization beyond the native range; (iii) evidence of being a weed elsewhere; and (iv) a high level of domestication. Intentional human dispersal of propagules in combination with evidence of naturalization beyond a plant's native range led to the strongest prediction of weediness. A high level of domestication in combination with no evidence of naturalization mitigated the likelihood of an introduced plant becoming a weed through intentional human dispersal of propagules. Unlikely intentional human dispersal of propagules combined with no evidence of being a weed elsewhere led to the lowest predicted probability of weediness. The failure to include intrinsic plant attributes in the model suggests either that these attributes are not useful general predictors of weediness, or that the data and analysis were inadequate to elucidate the underlying relationship(s). This concurs with the historical pessimism about whether we will ever be able to accurately predict invasive plants. Given the apparent importance of propagule pressure (the number of individuals of a species released), future attempts at evaluating screening model performance for identifying unwanted plants need to account for propagule pressure when collating and/or analysing datasets. The classification tree had a cross-validated sensitivity of 93.6% and specificity of 36.7%. Based on the area under the ROC curve, the performance of the classification tree in correctly classifying plants as weeds or non-weeds (area under ROC curve = 0.83 ± 0.021 SE) was slightly inferior to that of the current risk assessment system in use (area under ROC curve = 0.89 ± 0.018 SE), although it requires many fewer questions to be answered.
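A minimal sketch of how a screening model of this kind can be evaluated with ROC analysis, assuming a binary weed/non-weed outcome and a handful of binary predictor attributes (the column names and simulated data below are illustrative placeholders, not the study's dataset):

```python
# Illustrative only: fit a small classification tree to hypothetical screening
# data and evaluate it with the area under the ROC curve (AUC).
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 300
# Hypothetical binary attributes standing in for the four retained predictors.
X = pd.DataFrame({
    "human_dispersal": rng.integers(0, 2, n),
    "naturalized":     rng.integers(0, 2, n),
    "weed_elsewhere":  rng.integers(0, 2, n),
    "domesticated":    rng.integers(0, 2, n),
})
# Hypothetical outcome loosely driven by the first two attributes plus noise.
y = (((X["human_dispersal"] == 1) & (X["naturalized"] == 1))
     | (rng.random(n) < 0.1)).astype(int)

tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=20)
# Cross-validated predicted probabilities, analogous to the study's cross-validation.
prob = cross_val_predict(tree, X, y, cv=10, method="predict_proba")[:, 1]
print("Cross-validated AUC:", round(roc_auc_score(y, prob), 3))
```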
Abstract:
Background It has been suggested that community treatment orders (CTOs) will prevent readmission to hospital, but controlled studies have been inconclusive. We aimed to test the hypothesis that hospital discharges made subject to CTOs are associated with a reduced risk of readmission. The use of such a measure is likely to change after its introduction as clinicians acquire familiarity with it, and we also tested the hypothesis that the characteristics of patients subject to CTOs changed over time in the first decade of their use in Victoria, Australia. Method A database from Victoria, Australia (total population 4.8 million) was used. Cox proportional hazard models compared the hazard ratios of readmission to hospital before the end of the study period (1992-2000) for 16,216 discharges subject to a CTO and 112,211 not subject to a CTO. Results Community treatment orders used on discharge from a first admission to hospital were associated with a higher risk of readmission, but CTOs following subsequent admissions were associated with a lower readmission risk. The risk also declined over the study period. Conclusions The effect of using a CTO depends on the patient's history. At a population level, the introduction of CTOs may not reduce readmission to hospital, and their impact may change over time.
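For reference, analyses of this kind rest on the Cox proportional hazards model, in which the hazard of readmission at time t for a discharge with covariates x (CTO status, admission history, and so on) is a baseline hazard scaled by covariate effects, and the exponentiated coefficient of the CTO indicator is the reported hazard ratio. A standard statement of the model (generic notation, not the paper's own):

\[
h(t \mid x) = h_0(t)\,\exp(\beta_1 x_{\mathrm{CTO}} + \beta_2 x_2 + \dots + \beta_p x_p),
\qquad
\mathrm{HR}_{\mathrm{CTO}} = \frac{h(t \mid x_{\mathrm{CTO}}=1,\, x_{-})}{h(t \mid x_{\mathrm{CTO}}=0,\, x_{-})} = e^{\beta_1}.
\]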
Abstract:
Purpose: The effectiveness of synchronous carboplatin, etoposide, and radiation therapy in improving survival was evaluated by comparison of a matched set of historic control subjects with patients treated in a prospective Phase II study that used synchronous chemotherapy and radiation and adjuvant chemotherapy. Patients and Methods: Patients were included in the analysis if they had disease localized to the primary site and nodes, and they were required to have at least one of the following high-risk features: recurrence after initial therapy, involved nodes, primary size greater than 1 cm, or gross residual disease after surgery. All patients who received chemotherapy were treated in a standardized fashion as part of a Phase II study (Trans-Tasman Radiation Oncology Group TROG 96:07) from 1997 to 2001. Radiation was delivered to the primary site and nodes to a dose of 50 Gy in 25 fractions over 5 weeks, and synchronous carboplatin (AUC 4.5) and etoposide, 80 mg/m² i.v. on Days 1 to 3, were given in Weeks 1, 4, 7, and 10. The historic group represents a single institution's experience from 1988 to 1996, treated with surgery and radiation alone; patients were included if they fulfilled the eligibility criteria of TROG 96:07. Patients with occult cutaneous disease were not included for the purpose of this analysis. Because of imbalances in the prognostic variables between the two treatment groups, comparisons were made by application of Cox's proportional hazards modeling. Overall survival, disease-specific survival, locoregional control, and distant control were used as endpoints for the study. Results: Of the 102 patients who had high-risk Stage I and II disease, 40 were treated with chemotherapy (TROG 96:07) and 62 were treated without chemotherapy (historic control subjects). When Cox's proportional hazards modeling was applied, the only significant factors for overall survival were recurrent disease, age, and the presence of residual disease. For disease-specific survival, recurrent disease was the only significant factor. Primary site on the lower limb had an adverse effect on locoregional control. For distant control, the only significant factor was residual disease. Conclusions: The multivariate analysis suggests chemotherapy has no effect on survival, but because of the wide confidence limits, a chemotherapy effect cannot be excluded. A study of this size is inadequately powered to detect small improvements in survival, and a larger randomized study remains the only way to truly confirm whether chemotherapy improves the results in high-risk MCC. © 2006 Elsevier Inc.
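As a minimal sketch of how such a Cox proportional hazards comparison can be run in practice, assuming the lifelines Python package and a toy table of survival times and covariates (the column names and values are placeholders, not the TROG 96:07 data):

```python
# Illustrative Cox proportional hazards fit with the lifelines package.
# The column names and values are toy placeholders for covariates such as age,
# recurrent disease, and treatment group; they are not the study's data.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "time_months":  [12, 30, 45,  8, 60, 22, 50,  5, 36, 18, 40, 27],
    "died":         [ 1,  0,  1,  1,  0,  0,  0,  1,  1,  0,  0,  1],
    "age":          [72, 65, 58, 80, 61, 70, 55, 77, 66, 59, 68, 63],
    "recurrent":    [ 1,  0,  0,  1,  0,  1,  0,  1,  1,  0,  1,  0],
    "chemotherapy": [ 0,  1,  1,  0,  1,  1,  1,  0,  0,  1,  0,  1],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_months", event_col="died")
cph.print_summary()  # hazard ratios (exp(coef)) with confidence intervals
```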
Abstract:
Academia Sinica, founded in 1928, is the leading research institute in Taiwan. Its Office of Technology Transfer (OTT) was established in 1998 and made great efforts to dramatically turn around the technology transfer activity of Academia Sinica, especially in biotechnology. Academia Sinica has more than 80 cases of experience in biotechnology transfer with companies in Taiwanese industry over the past five years. The purpose of this study is to identify potential success and failure factors for biotechnology transfer in Taiwan. Eight cases were studied through in-depth interviews. The results of the analysis were used to design two surveys to further investigate 81 cases (48 successful and 33 failed) of biotechnology transfer in Academia Sinica from 1999 to 2003. The results indicated that 10 of the 14 success factors were cited in more than 40% of the cases as contributing to the success of technology transfer. By contrast, only 5 out of 16 key factors were present in more than 30% of the failure cases.
Abstract:
Aims Technological advances in cardiac imaging have led to dramatic increases in test utilization and consumption of a growing proportion of cardiovascular healthcare costs. The opportunity costs of strategies favouring exercise echocardiography or SPECT imaging have been incompletely evaluated. Methods and results We examined prognosis and cost-effectiveness of exercise echocardiography (n=4884) vs. SPECT (n=4637) imaging in stable, intermediate-risk, chest pain patients. Ischaemia extent was defined as the number of vascular territories with echocardiographic wall motion or SPECT perfusion abnormalities. Cox proportional hazard models were employed to assess time to cardiac death or myocardial infarction (MI). Total cardiovascular costs were summed (discounted and inflation-corrected) throughout follow-up, and cost-effectiveness was expressed as the cost per life-year saved (LYS). In higher-risk patients (≥2% annual event risk), SPECT ischaemia was associated with earlier and greater utilization of coronary revascularization (P < 0.0001), resulting in an incremental cost-effectiveness ratio of $32 381/LYS. Conclusion Health care policies aimed at allocating limited resources can be effectively guided by applying clinical and economic outcomes evidence. A strategy aimed at cost-effective testing would support using echocardiography in low-risk patients with suspected coronary disease, whereas higher-risk patients benefit from referral to SPECT imaging.
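The incremental cost-effectiveness ratio quoted above is conventionally the difference in (discounted) costs between the two strategies divided by the difference in life-years saved; in generic notation (not the study's own symbols):

\[
\mathrm{ICER} = \frac{C_{\mathrm{SPECT}} - C_{\mathrm{echo}}}{E_{\mathrm{SPECT}} - E_{\mathrm{echo}}}
\quad [\$\ \text{per life-year saved}],
\]

so a ratio of $32 381/LYS means that each additional life-year gained under the SPECT-guided strategy cost roughly $32 000 more than under the echocardiography-guided strategy.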
Abstract:
Development data for eggs and pupae of Xyleborus fornicatus Eichh. (Coleoptera: Scolytidae), the shot-hole borer of tea in Sri Lanka, at constant temperatures were used to evaluate a linear and seven nonlinear models of insect development. Model evaluation was based on fit to the data (residual sum of squares and coefficient of determination or coefficient of nonlinear regression), the number of measurable parameters, the biological value of the fitted coefficients, and accuracy in the estimation of thresholds. Of the nonlinear models, the Lactin model fitted the experimental data well and, along with the linear model, can be used to describe the temperature-dependent development of this species.
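As an illustration, one common parameterisation of the Lactin model can be fitted to constant-temperature development rates with nonlinear least squares; the temperatures, rates, and starting values below are hypothetical, not the X. fornicatus data:

```python
# Illustrative fit of one common form of the Lactin development-rate model,
#   r(T) = exp(rho*T) - exp(rho*Tmax - (Tmax - T)/delta) + lam,
# to hypothetical constant-temperature data using nonlinear least squares.
import numpy as np
from scipy.optimize import curve_fit

def lactin(T, rho, Tmax, delta, lam):
    return np.exp(rho * T) - np.exp(rho * Tmax - (Tmax - T) / delta) + lam

# Hypothetical mean development rates (1/days) at constant temperatures (deg C).
T = np.array([12, 15, 18, 21, 24, 27, 30, 32, 34], dtype=float)
rate = np.array([0.010, 0.020, 0.034, 0.050, 0.066, 0.079, 0.085, 0.082, 0.055])

p0 = [0.01, 38.0, 3.0, -1.05]                       # rough starting values
params, _ = curve_fit(lactin, T, rate, p0=p0, maxfev=20000)
rho, Tmax, delta, lam = params
print(f"rho={rho:.4f}  Tmax={Tmax:.1f}  delta={delta:.2f}  lambda={lam:.3f}")

# Residual sum of squares, one of the fit criteria used for model comparison.
rss = np.sum((rate - lactin(T, *params)) ** 2)
print("RSS:", rss)
```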
Abstract:
There is growing interest in the use of context-awareness as a technique for developing pervasive computing applications that are flexible, adaptable, and capable of acting autonomously on behalf of users. However, context-awareness introduces a variety of software engineering challenges. In this paper, we address these challenges by proposing a set of conceptual models designed to support the software engineering process, including context modelling techniques, a preference model for representing context-dependent requirements, and two programming models. We also present a software infrastructure and software engineering process that can be used in conjunction with our models. Finally, we discuss a case study that demonstrates the strengths of our models and software engineering approach with respect to a set of software quality metrics.
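As a purely hypothetical illustration of the kind of context-dependent preference the paper's preference model is meant to capture (the class and function names below are invented for this sketch, not the paper's API), a requirement can be expressed as a rated rule that selects different behaviour depending on the observed context:

```python
# Hypothetical sketch of a context-dependent preference: a requirement such as
# "notify me quietly when I am in a meeting" is resolved against the current
# context. All names and structures here are invented for illustration.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Context:
    location: str
    activity: str

@dataclass
class Preference:
    condition: Callable[[Context], bool]   # predicate over the context
    choice: str                            # behaviour selected when it holds
    rating: float                          # strength of the preference

def resolve(prefs: List[Preference], ctx: Context, default: str = "ring") -> str:
    applicable = [p for p in prefs if p.condition(ctx)]
    return max(applicable, key=lambda p: p.rating).choice if applicable else default

prefs = [
    Preference(lambda c: c.activity == "meeting", "vibrate", 0.9),
    Preference(lambda c: c.location == "home", "ring", 0.5),
]
print(resolve(prefs, Context(location="office", activity="meeting")))  # -> vibrate
```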
Abstract:
Linear models reach their limitations in applications with nonlinearities in the data. In this paper, new empirical evidence is provided on the relative Euro inflation forecasting performance of linear and non-linear models. The well-established and widely used univariate ARIMA and multivariate VAR models are used as linear forecasting models, whereas neural networks (NN) are used as non-linear forecasting models. The level of subjectivity in the NN building process is kept to a minimum in an attempt to exploit the full potential of the NN. It is also investigated whether the historically poor performance of the theoretically superior measure of the monetary services flow, Divisia, relative to the traditional Simple Sum measure could be attributed to a certain extent to the evaluation of these indices within a linear framework. The results obtained suggest that non-linear models provide better within-sample and out-of-sample forecasts, and that linear models are simply a subset of them. The Divisia index also outperforms the Simple Sum index when evaluated in a non-linear framework. © 2005 Taylor & Francis Group Ltd.
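A minimal sketch of such a linear-versus-nonlinear forecasting comparison, using statsmodels' ARIMA as the linear benchmark and a small multilayer perceptron as the nonlinear model on a simulated series (the series and model settings are illustrative, not the Euro inflation data or the paper's NN design):

```python
# Illustrative linear (ARIMA) vs nonlinear (neural network) out-of-sample comparison.
# The simulated series and model settings are placeholders, not the study's data.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
n = 220
e = rng.normal(0, 0.1, n)
y = np.zeros(n)
for t in range(2, n):                              # a mildly nonlinear AR process
    y[t] = 0.6 * y[t-1] - 0.2 * y[t-2] ** 2 + e[t]

train, test = y[:200], y[200:]

# Linear benchmark: ARIMA(2,0,0), forecast over the hold-out horizon.
arima = ARIMA(train, order=(2, 0, 0)).fit()
arima_fc = arima.forecast(steps=len(test))

# Nonlinear model: MLP on two lagged inputs, iterated one-step-ahead forecasts.
X = np.column_stack([train[1:-1], train[:-2]])
nn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X, train[2:])
hist, nn_fc = list(train), []
for _ in test:
    pred = nn.predict([[hist[-1], hist[-2]]])[0]
    nn_fc.append(pred)
    hist.append(pred)

print("ARIMA MSE:", mean_squared_error(test, arima_fc))
print("NN MSE:   ", mean_squared_error(test, nn_fc))
```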
Abstract:
Signal integration determines cell fate on the cellular level, affects cognitive processes and affective responses on the behavioural level, and is likely to be involved in psychoneurobiological processes underlying mood disorders. Interactions between stimuli may be subject to time effects. Time-dependencies of interactions between stimuli typically lead to complex cell responses and complex responses on the behavioural level. We show that both three-factor models and time series models can be used to uncover such time-dependencies. However, we argue that for short longitudinal data the three-factor modelling approach is more suitable. In order to illustrate both approaches, we re-analysed previously published short longitudinal data sets. We found that in human embryonic kidney (HEK) 293 cells the interaction effect in the regulation of extracellular signal-regulated kinase (ERK) 1 signalling activation by insulin and epidermal growth factor is subject to a time effect and dramatically decays at peak values of ERK activation. In contrast, we found that the interaction effect induced by hypoxia and tumour necrosis factor-alpha on the transcriptional activity of the human cyclo-oxygenase-2 promoter in HEK293 cells is time-invariant, at least in the first 12-h time window after stimulation. Furthermore, we applied the three-factor model to previously reported animal studies. In these studies, memory storage was found to be subject to an interaction effect of the beta-adrenoceptor agonist clenbuterol and certain antagonists acting on the alpha-1-adrenoceptor / glucocorticoid-receptor system. Our model-based analysis suggests that the interaction effect is relevant only if the antagonist drug is administered within a critical time window.
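A minimal sketch of a three-factor analysis of the kind described, assuming a long-format table with two stimulus factors and a time factor; a significant three-way interaction term indicates a time-dependent interaction (the column names and simulated responses are illustrative, not the re-analysed data sets):

```python
# Illustrative three-factor model: two stimuli (A, B) and time, with interactions.
# A significant A:B:time term indicates that the A-by-B interaction changes over time.
# The simulated data are placeholders, not the re-analysed experimental data sets.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
rows = []
for a in (0, 1):
    for b in (0, 1):
        for time in (5, 15, 30, 60):                # minutes after stimulation
            for _ in range(4):                       # replicates
                # synergy between A and B that decays at later time points
                synergy = 2.0 * a * b * np.exp(-time / 20)
                rows.append({"A": a, "B": b, "time": time,
                             "response": a + 0.5 * b + synergy + rng.normal(0, 0.3)})
df = pd.DataFrame(rows)

model = smf.ols("response ~ C(A) * C(B) * C(time)", data=df).fit()
print(model.summary().tables[1])                     # inspect the C(A):C(B):C(time) rows
```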
Abstract:
Jackson (2005) developed a hybrid model of personality and learning, known as the learning styles profiler (LSP), which was designed to span the biological, socio-cognitive, and experiential research foci of personality and learning research. The hybrid model argues that functional and dysfunctional learning outcomes can best be understood in terms of how cognitions and experiences control, discipline, and re-express the biologically based scale of sensation-seeking. In two studies with part-time workers undertaking tertiary education (N=137 and 58), established models of approach and avoidance from each of the three different research foci were compared with Jackson's hybrid model in their predictiveness of leadership, work, and university outcomes using self-report and supervisor ratings. Results showed that the hybrid model was generally optimal and, as hypothesized, that goal orientation was a mediator of sensation-seeking on outcomes (work performance, university performance, leader behaviours, and counterproductive work behaviour). Our studies suggest that the hybrid model has considerable promise as a predictor of work and educational outcomes as well as dysfunctional outcomes.
Abstract:
Hazard and operability (HAZOP) studies on chemical process plants are very time-consuming, and often tedious, tasks. The requirement for HAZOP studies is that a team of experts systematically analyse every conceivable process deviation, identifying possible causes and any hazards that may result. The systematic nature of the task, and the fact that some team members may be unoccupied for much of the time, can lead to tedium, which in turn may lead to serious errors or omissions. Fault trees are an aid to HAZOP: they present the system failure logic graphically, so that the study team can readily assimilate its findings. Fault trees are also useful for identifying design weaknesses, and may additionally be used to estimate the likelihood of hazardous events occurring. The one drawback of fault trees is that they are difficult to generate by hand, because of the sheer size and complexity of modern process plants. The work in this thesis proposes a computer-based method to aid the development of fault trees for chemical process plants. The aim is to produce concise, structured fault trees that are easy for analysts to understand. Standard plant input-output equation models for major process units are modified so that they include ancillary units and pipework. This results in a reduction in the number of nodes required to represent a plant. Control loops and protective systems are modelled as operators which act on process variables. This modelling maintains the functionality of loops, making fault tree generation easier and improving the structure of the fault trees produced. A method, called event ordering, is proposed which allows the magnitude of deviations of controlled or measured variables to be defined in terms of the control loops and protective systems with which they are associated.
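As a generic illustration of the fault-tree representation the thesis builds on (not the thesis's own generation algorithm), a small tree of AND/OR gates over independent basic events can be evaluated for the probability of the top event; the events and probabilities below are invented for the sketch:

```python
# Generic fault-tree sketch: AND/OR gates over independent basic events,
# evaluated for the probability of the top event. The events and probabilities
# are invented for illustration, not taken from the thesis.

def p_and(*children):        # AND gate: every input must fail
    p = 1.0
    for c in children:
        p *= c
    return p

def p_or(*children):         # OR gate: at least one input fails
    p = 1.0
    for c in children:
        p *= 1.0 - c
    return 1.0 - p

# Basic-event probabilities (per demand) for a simple overpressure scenario.
control_valve_fails_open = 1e-3
pressure_sensor_fails    = 5e-4
operator_misses_alarm    = 1e-2
relief_valve_fails       = 1e-3

detection_fails  = p_or(pressure_sensor_fails, operator_misses_alarm)
protection_fails = p_and(detection_fails, relief_valve_fails)
top_event        = p_and(control_valve_fails_open, protection_fails)

print(f"P(unmitigated overpressure) ~= {top_event:.2e}")
```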
Abstract:
How are innovative new business models established if organizations constantly compare themselves against existing criteria and expectations? The objective is to address this question from the perspective of innovators and their ability to redefine established expectations and evaluation criteria. The research questions ask whether there are discernible patterns of discursive action through which innovators theorize institutional change, and what role such theorizations play in mobilizing support and realizing change projects. These questions are investigated through a case study on a critical area of enterprise computing software, Java application servers. In the present case, business practices and models were already well established among incumbents, with critical market areas allocated to a few dominant firms. Fringe players started experimenting with a new business approach of selling services around freely available open-source application servers. While most new players struggled, one new entrant succeeded in leading incumbents to adopt and compete on the new model. The case demonstrates that innovative and substantially new models and practices are established in organizational fields when innovators are able to redefine expectations and evaluation criteria within an organizational field. The study addresses the theoretical paradox of embedded agency. Actors who are embedded in prevailing institutional logics and structures find it hard to perceive potentially disruptive opportunities that fall outside existing ways of doing things. Changing prevailing institutional logics and structures requires strategic and institutional work aimed at overcoming barriers to innovation. The study addresses this problem through the lens of (new) institutional theory, using a discourse methodology to trace the process through which innovators were able to establish a new social and business model in the field.
Abstract:
Common approaches to IP-traffic modelling have featured the use of stochastic models, based on the Markov property, which can be classified into black-box and white-box models according to the approach used for modelling traffic. White-box models are simple to understand, transparent, and have a physical meaning attributed to each of the associated parameters. To exploit this key advantage, this thesis explores the use of simple classic continuous-time Markov models based on a white-box approach, to model not only the network traffic statistics but also the source behaviour with respect to the network and application. The thesis is divided into two parts. The first part focuses on the use of simple Markov and semi-Markov traffic models, starting from the simplest two-state model and moving up to n-state models with Poisson and non-Poisson statistics. The thesis then introduces the convenient-to-use, mathematically derived Gaussian Markov models, which are used to model the measured network IP traffic statistics. As one of its most significant contributions, the thesis establishes the significance of second-order density statistics, revealing that, in contrast to first-order density statistics, they carry much more unique information on traffic sources and behaviour. The thesis then exploits Gaussian Markov models to model these unique features and finally shows how the use of simple classic Markov models, coupled with second-order density statistics, provides an excellent tool for capturing maximum traffic detail, which in itself is the essence of good traffic modelling. The second part of the thesis studies the ON-OFF characteristics of VoIP traffic with reference to accurate measurements of the ON and OFF periods, made from a large multi-lingual database of over 100 hours' worth of VoIP call recordings. The impact of the language, prosodic structure, and speech rate of the speaker on the statistics of the ON-OFF periods is analysed and relevant conclusions are presented. Finally, an ON-OFF VoIP source model with log-normal transitions is contributed as an ideal candidate for modelling VoIP traffic, and the results of this model are compared with those of previously published work.
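A minimal sketch of the kind of two-state ON-OFF source with log-normally distributed sojourn times described above; the log-normal parameters and packet interval are illustrative placeholders, not the values estimated from the call-recording database:

```python
# Illustrative ON-OFF VoIP source: alternating talkspurt (ON) and silence (OFF)
# periods with log-normally distributed durations, emitting a packet every 20 ms
# while ON. The distribution parameters are placeholders, not fitted values.
import numpy as np

rng = np.random.default_rng(3)

def simulate_on_off(duration_s, on_mu=-0.5, on_sigma=0.8,
                    off_mu=-0.3, off_sigma=0.9, pkt_interval=0.02):
    """Return packet emission times (seconds) over a call of duration_s seconds."""
    t, on, packets = 0.0, True, []
    while t < duration_s:
        length = rng.lognormal(on_mu if on else off_mu,
                               on_sigma if on else off_sigma)
        if on:
            packets.extend(np.arange(t, min(t + length, duration_s), pkt_interval))
        t += length
        on = not on
    return np.array(packets)

pkts = simulate_on_off(60.0)
print(f"{len(pkts)} packets in 60 s, mean rate {len(pkts) / 60:.1f} pkt/s")
```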