950 results for AFT Models for Crash Duration Survival Analysis
Abstract:
Improvement of intra-ventricular dyssynchrony (IVD) in patients undergoing bi-ventricular pacing is associated with clinical improvement, but little is known about the relationship between IVD and prognosis. We sought to determine whether IVD influences long-term outcome in patients with known or suspected coronary artery disease (CAD). Tissue Doppler imaging was performed in 184 patients (aged 61±10 years, 67% male) prior to dobutamine echo. From the velocity curves, the interval between QRS onset and maximal systolic velocity (Ts) was measured in the basal septal, lateral, inferior and anterior segments. The maximal difference in Ts between segments (TsMax) was used as a measure of IVD. The standard deviation of Ts across all segments (TsSD) and the septal-lateral difference (TsSL) were also calculated. Patients were followed up for a median interval of 5 years, and a Cox model was used for survival analysis. The median wall motion index (WMI) was 1.3 (IQR 1.0–1.8) at rest and 1.4 (IQR 1.3–1.9) at stress. The table shows IVD parameters. Forty-one deaths occurred during follow-up. Patients who died during follow-up showed greater IVD than survivors. WMI at rest (p = 0.03) and at peak stress (p = 0.02), TsSD (p = 0.06), TsSL (p = 0.02) and TsMax (p = 0.05), but not QRS width, were univariate predictors of mortality. TsSL was the only independent predictor of death (p = 0.01). Therefore, IVD is common in patients with known or suspected CAD, and patients with more IVD have reduced long-term survival, independent of WMI.
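Since the abstract describes univariate Cox screening followed by a multivariable model, a hedged sketch of that workflow with the Python lifelines package may help. The data and column names (TsSD, TsSL, TsMax, WMI_rest) are synthetic placeholders mirroring the abstract, not the study's data.

```python
# Illustrative univariate-then-multivariable Cox screening (synthetic data).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 184
df = pd.DataFrame({
    "years": rng.exponential(5.0, n),        # follow-up time
    "died": rng.integers(0, 2, n),           # 1 = death observed
    "TsSD": rng.normal(30, 10, n),           # dyssynchrony indices (ms)
    "TsSL": rng.normal(40, 20, n),
    "TsMax": rng.normal(80, 25, n),
    "WMI_rest": rng.normal(1.3, 0.4, n),
})

for var in ["TsSD", "TsSL", "TsMax", "WMI_rest"]:   # univariate screen
    cph = CoxPHFitter().fit(df[["years", "died", var]],
                            duration_col="years", event_col="died")
    print(var, "p =", float(cph.summary.loc[var, "p"]))

# multivariable model: all candidate predictors together
CoxPHFitter().fit(df, duration_col="years", event_col="died").print_summary()
```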
Abstract:
Risk and knowledge are two concepts and components of business management which have so far been studied almost independently. This is especially true where risk management (RM) is conceived mainly in financial terms, as, for example, in the financial institutions sector. Financial institutions are affected by internal and external changes, with the consequent accommodation to new business models, new regulations and new global competition that includes new big players. These changes induce financial institutions to develop different methodologies for managing risk, such as the enterprise risk management (ERM) approach, in order to adopt a holistic view of risk management and, consequently, to deal with different types of risk, levels of risk appetite, and policies in risk management. However, the methodologies for analysing risk do not explicitly include knowledge management (KM). This research examines the potential relationships between KM and two RM concepts: perceived quality of risk control and perceived value of ERM. To identify how KM concepts can positively influence RM concepts, a literature review of KM and its processes and of RM and its processes was performed. From this literature review, eight hypotheses were formulated and analysed using a classification into people, process and technology variables. The data for this research were gathered from a survey of risk-management employees in financial institutions, and 121 responses were analysed. The analysis of the data was based on multivariate techniques, more specifically stepwise regression analysis. The results showed that the perceived quality of risk control is significantly associated with the following variables: perceived quality of risk knowledge sharing, perceived quality of communication among people, web channel functionality, and risk management information system functionality. However, relationships between the KM variables and the perceived value of ERM could not be identified because of the low performance of the models describing these relationships. The analysis reveals important insights into the potential KM support to RM, such as: the better the adoption of KM people and technology actions, the better the perceived quality of risk control. Equally, the results suggest that the quality of risk control and the benefits of ERM follow different patterns, given that there is no correlation between the two concepts and that the KM variables influence each concept differently. The ERM scenario is different from that of risk control because ERM, as an answer to RM failures and an adaptation to new regulation in financial institutions, has led organizations to adopt new processes, technologies, and governance models. Thus, the search for factors influencing the perceived value of ERM implementation needs additional analysis, because improvements in individual RM processes do not have the same effect on the perceived value of ERM. Based on these model results and the literature review, the basis of the ERKMAS (Enterprise Risk Knowledge Management System) is presented.
Abstract:
The research is concerned with the application of computer simulation techniques to study the performance of reinforced concrete columns in a fire environment. The effect of three different concrete constitutive models, incorporated in the computer simulation, on the structural response of reinforced concrete columns exposed to fire is investigated. The material models differed mainly in the formulation of the mechanical properties of concrete. The results from the simulation clearly illustrate that a more realistic response of a reinforced concrete column exposed to fire is given by a constitutive model with a transient creep or appropriate strain effect. The relative effect of the three concrete material models is assessed through a parametric study, carried out using the results from a series of analyses on columns heated on three sides, which produces substantial thermal gradients. Three different loading conditions were used on the columns: axial loading, and eccentric loading inducing moments in the same sense and in the opposite sense to those induced by the thermal gradient. An axially loaded column heated on four sides was also considered. The computer modelling technique adopted separated the thermal and structural responses into two distinct computer programs. A finite element heat transfer analysis was used to determine the thermal response of the reinforced concrete columns when exposed to the ISO 834 furnace environment. The temperature distribution histories obtained were then used in conjunction with a structural response program. The effect of the occurrence of spalling on the structural behaviour of reinforced concrete columns is also investigated. There is general recognition of the potential problems of spalling, but there has been no real investigation into what effect spalling has on the fire resistance of reinforced concrete members. In an attempt to address this situation, a method has been developed to model concrete columns exposed to fire which incorporates the effect of spalling. A total of 224 computer simulations were undertaken, varying the amounts of concrete lost during a specified period of exposure to fire. An array of six percentages of spalling was chosen for one range of simulations, while a two-stage progressive spalling regime was used for a second range. The quantification of the reduction in fire resistance of the columns against the amount of spalling, the heating and loading patterns, and the time at which the concrete spalls indicates that the amount of spalling is the most significant variable in the reduction of fire resistance.
Abstract:
Practitioners assess the performance of entities in increasingly large and complicated datasets. If non-parametric models, such as Data Envelopment Analysis (DEA), were ever considered simple push-button technologies, this is impossible when many variables are available or when data have to be compiled from several sources. This paper introduces the 'COOPER-framework', a comprehensive model for carrying out non-parametric projects. The framework consists of six interrelated phases: Concepts and objectives, On structuring data, Operational models, Performance comparison model, Evaluation, and Result and deployment. Each phase describes the necessary steps a researcher should consider for a well-defined and repeatable analysis. The COOPER-framework provides the novice analyst with guidance, structure and advice for a sound non-parametric analysis. The more experienced analyst benefits from a checklist ensuring that important issues are not forgotten. In addition, the use of a standardized framework makes non-parametric assessments more reliable, more repeatable, more manageable, faster and less costly. © 2010 Elsevier B.V. All rights reserved.
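The paper's subject is the project process rather than the mathematics; still, a hedged sketch of the kind of non-parametric model the framework organizes (an input-oriented CCR DEA efficiency score, posed as a linear program with scipy) may make the "Operational models" phase concrete. The data below are purely illustrative.

```python
# Input-oriented CCR DEA efficiency via linear programming (illustrative).
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y, o):
    """Efficiency of DMU o. X: (m, n) inputs; Y: (s, n) outputs; columns = DMUs."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                 # minimise theta
    A_in = np.c_[-X[:, [o]], X]                 # X @ lam <= theta * x_o
    A_out = np.c_[np.zeros((s, 1)), -Y]         # Y @ lam >= y_o
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[:, o]],
                  bounds=[(None, None)] + [(0.0, None)] * n)
    return res.fun                              # theta in (0, 1]; 1 = efficient

X = np.array([[2.0, 3.0, 6.0], [4.0, 2.0, 7.0]])   # 2 inputs, 3 DMUs
Y = np.array([[1.0, 1.0, 1.0]])                    # 1 output
print([round(dea_ccr_input(X, Y, o), 3) for o in range(3)])
```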
Abstract:
An original method and technology of systemological «Unit-Function-Object» (UFO) analysis for solving complex, ill-structured problems is proposed. This visual grapho-analytical UFO technology combines, for the first time, the capabilities and advantages of the system and object approaches, and can be used for business reengineering and for information systems design. UFO-technology procedures are formalized by pattern-theory methods and developed by embedding systemological conceptual classification models into system-object analysis and software tools. The technology is based on natural classification; it helps to investigate deep semantic regularities of a subject domain and to take the most objective account of the essential properties of system classes. The systemological knowledge models are based on a method that, for the first time, synthesizes system and classification analysis. This allows the creation of a new generation of CASE toolkits for organizational modelling, providing companies with sustainable development and competitive advantages.
Abstract:
This thesis studies survival analysis techniques that deal with censoring to produce predictive tools for the risk of endovascular aortic aneurysm repair (EVAR) re-intervention. Censoring indicates that some patients do not continue follow-up, so their outcome class is unknown. Existing methods for dealing with censoring have drawbacks and cannot handle the high censoring of the two EVAR datasets collected. Therefore, this thesis presents a new solution to high censoring by modifying an approach that was previously incapable of differentiating between risk groups of aortic complications. Feature selection (FS) becomes complicated with censoring. Most survival FS methods depend on Cox's model; however, machine learning classifiers (MLC) are preferred. Few methods have adopted MLC to perform survival FS, and they cannot be used with high censoring. This thesis proposes two FS methods which use MLC to evaluate features. Both FS methods use the new solution to deal with censoring, combining factor analysis with a greedy stepwise FS search that allows eliminated features to enter the FS process. The first FS method searches for the best neural network configuration and subset of features. The second approach combines support vector machines, neural networks, and K-nearest-neighbour classifiers using simple and weighted majority voting to construct a multiple classifier system (MCS) that improves on the performance of the individual classifiers. It presents a new hybrid FS process that uses the MCS as a wrapper method and merges it with an iterated feature-ranking filter method to further reduce the features. The proposed techniques outperformed FS methods based on Cox's model, such as the Akaike and Bayesian information criteria and the least absolute shrinkage and selection operator (LASSO), in terms of the log-rank test's p-values, sensitivity, and concordance. This shows that the proposed techniques are more powerful in correctly predicting the risk of re-intervention and, consequently, enable doctors to set patients' appropriate future observation plans.
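As a rough illustration of the voting-based MCS described above, the sketch below combines SVM, neural-network, and K-nearest-neighbour base learners with simple and weighted majority voting via scikit-learn. The dataset, weights, and hyperparameters are placeholders, not those of the thesis, which additionally wraps the MCS inside a feature-selection loop.

```python
# Simple and weighted majority voting over three base classifiers (illustrative).
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

base = [("svm", SVC()),
        ("nn", MLPClassifier(max_iter=2000, random_state=0)),
        ("knn", KNeighborsClassifier(n_neighbors=5))]

simple = VotingClassifier(base, voting="hard")                       # one vote each
weighted = VotingClassifier(base, voting="hard", weights=[2, 1, 1])  # weighted votes
for name, mcs in [("simple", simple), ("weighted", weighted)]:
    mcs.fit(X_tr, y_tr)
    print(name, "accuracy:", round(mcs.score(X_te, y_te), 3))
```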
Abstract:
Considering the so-called "multinomial discrete choice" (MDC) model, this paper focuses on the estimation problem of its parameters. In particular, the basic question arises of how to carry out point and interval estimation of the parameters when the model is mixed, i.e. includes both individual-specific and choice-specific explanatory variables, and a standard MDC computer program is not available for use. The basic idea behind the solution is to use the Cox proportional hazards method of survival analysis, which is available in any standard statistical package; provided the data structure satisfies certain special requirements, it yields the desired MDC solutions. The paper describes the features of the data set to be analysed.
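A hedged sketch of the trick the paper describes, under the classical equivalence between the conditional-logit (MDC) likelihood and a stratified Cox partial likelihood: each choice situation becomes one stratum, every alternative gets the same nominal duration, and only the chosen alternative is flagged as an event. The data below are simulated from a utility-maximisation model; individual-specific variables would additionally have to be interacted with alternative dummies, as in the paper's mixed case.

```python
# Conditional-logit estimation via a stratified Cox model (illustrative).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
rows = []
for i in range(200):                         # 200 choice situations
    cost = rng.uniform(1, 5, 3)              # choice-specific covariates
    tmin = rng.uniform(10, 60, 3)
    util = -1.0 * cost - 0.05 * tmin + rng.gumbel(size=3)  # Gumbel errors -> logit
    chosen = np.argmax(util)
    for a in range(3):
        rows.append({"individual": i, "cost": cost[a], "time_min": tmin[a],
                     "chosen": int(a == chosen), "duration": 1.0})
df = pd.DataFrame(rows)

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="chosen",
        strata=["individual"])               # one stratum per choice set
cph.print_summary()                          # betas ~ (-1.0, -0.05)
```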
Financial aid and the persistence of associate of arts graduates transferring to a senior university
Abstract:
This study examined the effects of financial aid on the persistence of associate of arts graduates transferring to a senior university in one of four consecutive fall semesters (1998-2001). Situated in an international metropolitan area in the southeastern United States, the institution where the study was conducted is a large public research university identified as a Hispanic Serving Institution. Archival databases served as the source of information on the academic and social background of the 4,669 participants in the study. Data from institutional financial aid records were pooled with the data in the student administrative system. For purposes of this study, persistence was defined as ongoing progress until completion of the baccalaureate degree. The student social background variables used in the study were gender, ethnicity, age, and income, with GPA and part-time or full-time enrollment status being the academic variables. The amount and type of aid, including grants, loans, scholarships, and work study, were incorporated in the models to determine the effect of financial aid on the persistence of these transfer students. Because the dependent variable, persistence, had three possible outcomes (graduated, still enrolled, dropped out), multinomial logistic regression was the appropriate technique for analyzing the data; four multinomial models were employed in the analysis. Findings suggest that grants awarded on the basis of students' financial need, and loans, were effective in encouraging the persistence of students, but scholarships and work study were not.
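Because the outcome has three categories, the study's multinomial logistic regression can be sketched with statsmodels as below. The data are synthetic stand-ins (the real study pooled archival records on 4,669 transfer students), and the variable names are illustrative.

```python
# Multinomial logistic regression with a three-category outcome (illustrative).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "gpa": rng.normal(3.0, 0.5, n),
    "grant_amt": rng.gamma(2.0, 800.0, n),     # need-based grant dollars
    "loan_amt": rng.gamma(2.0, 1000.0, n),
    "full_time": rng.integers(0, 2, n),
})
# 0 = dropped out, 1 = still enrolled, 2 = graduated (synthetic labels)
df["outcome"] = rng.integers(0, 3, n)

X = sm.add_constant(df[["gpa", "grant_amt", "loan_amt", "full_time"]])
fit = sm.MNLogit(df["outcome"], X).fit(disp=False)
print(fit.summary())    # one coefficient set per non-reference outcome
```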
Abstract:
In the past two decades, multi-agent systems (MAS) have emerged as a new paradigm for conceptualizing large and complex distributed software systems. A multi-agent system view provides a natural abstraction for both the structure and the behavior of modern-day software systems. Although there were many conceptual frameworks for using multi-agent systems, there was no well-established and widely accepted method for modeling them. This dissertation research addressed the representation and analysis of multi-agent systems based on model-oriented formal methods. The objective was to provide a systematic approach for studying MAS at an early stage of system development to ensure the quality of design. Given that there was no well-defined formal model directly supporting agent-oriented modeling, this study was centered on three main topics: (1) adapting a well-known formal model, predicate transition nets (PrT nets), to support MAS modeling; (2) formulating a modeling methodology to ease the construction of formal MAS models; and (3) developing a technique to support machine analysis of formal MAS models using model checking technology. PrT nets were extended to include the notions of dynamic structure, agent communication and coordination to support agent-oriented modeling. An aspect-oriented technique was developed to address the modularity of agent models and the compositionality of incremental analysis. A set of translation rules was defined to systematically translate formal MAS models to concrete models that can be verified through the model checker SPIN (Simple Promela Interpreter). This dissertation presents the framework developed for modeling and analyzing MAS, including a well-defined process model based on nested PrT nets, and a comprehensive methodology to guide the construction and analysis of formal MAS models.
Abstract:
During the past decade, there has been a dramatic increase by postsecondary institutions in providing academic programs and course offerings in a multitude of formats and venues (Biemiller, 2009; Kucsera & Zimmaro, 2010; Lang, 2009; Mangan, 2008). Strategies pertaining to the reapportionment of course-delivery seat time have been a major facet of these institutional initiatives, most notably within many open-door 2-year colleges. Often, these enrollment-management decisions are driven by the desire to increase market share, optimize the usage of finite facility capacity, and contain costs, especially during these economically turbulent times. So, while enrollments have surged to the point where nearly one in three 18-to-24-year-old U.S. undergraduates are community college students (Pew Research Center, 2009), graduation rates, on average, still remain distressingly low (Complete College America, 2011). Among the learning-theory constructs related to seat-time reapportionment efforts is the cognitive phenomenon commonly referred to as the spacing effect: the degree to which learning is enhanced by a series of shorter, separated sessions as opposed to fewer, more massed episodes. This ex post facto study explored whether seat time in a postsecondary developmental-level algebra course is significantly related to course success; course-enrollment persistence; and, longitudinally, the time to successfully complete a general-education-level mathematics course. Hierarchical logistic regression and discrete-time survival analysis were used to perform a multi-level, multivariable analysis of a student cohort (N = 3,284) enrolled at a large, multi-campus, urban community college. The subjects were retrospectively tracked over a 2-year longitudinal period. The study found that students in long seat-time classes tended to withdraw earlier and more often than did their peers in short seat-time classes (p < .05). Additionally, a model comprising nine statistically significant covariates (all with p-values less than .01) was constructed. However, no longitudinal seat-time group differences were detected, nor was there sufficient statistical evidence to conclude that seat time was predictive of developmental-level course success. A principal aim of this study was to demonstrate to educational leaders, researchers, and institutional-research/business-intelligence professionals the advantages and computational practicability of survival analysis, an underused but more powerful way to investigate changes in students over time.
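For readers unfamiliar with the discrete-time survival analysis the study advocates, a minimal sketch follows: student records are expanded into person-period rows, and the per-term hazard of withdrawal is modelled with logistic regression. Data and variable names are synthetic placeholders, not the study's cohort.

```python
# Discrete-time survival analysis via person-period logistic regression (illustrative).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
students = pd.DataFrame({
    "sid": range(n),
    "short_seat_time": rng.integers(0, 2, n),   # 1 = short seat-time class
    "last_term": rng.integers(1, 7, n),         # term of event or censoring
    "event": rng.integers(0, 2, n),             # 1 = withdrew, 0 = censored
})

# person-period expansion: one row per student per term at risk
pp = students.loc[students.index.repeat(students["last_term"])].copy()
pp["term"] = pp.groupby("sid").cumcount() + 1
pp["y"] = ((pp["term"] == pp["last_term"]) & (pp["event"] == 1)).astype(int)

# term dummies trace out the baseline hazard; covariates shift it
fit = smf.logit("y ~ C(term) + short_seat_time", data=pp).fit(disp=False)
print(fit.summary())
```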
Abstract:
The prevalence of waterpipe smoking exceeds that of cigarettes among adolescents in the Middle East, where waterpipe smoking is believed to be less harmful, less addictive, and a safer alternative to cigarettes. This dissertation tested the gateway hypothesis that waterpipe can provide a bridge to initiating cigarette smoking, identified the predictors of cigarette smoking progression, and identified predictors of waterpipe smoking progression among a school-based sample of Jordanian adolescents (mean age ± SD, 12.7 ± 0.61 years at baseline). Data for this research were drawn from the Irbid Longitudinal Study of smoking behavior, Jordan (2008-2011). Grouped-time survival analysis showed that waterpipe smoking was associated with a higher risk of cigarette smoking initiation compared to never smoking (P < 0.001), and this association was dose dependent (P < 0.001). Predictors of cigarette smoking progression were peer smoking and attending public schools for boys, siblings' smoking for girls, and the urge to smoke for both genders. Predictors of waterpipe smoking progression were enrollment in public schools, frequent physical activity, and low refusal self-efficacy for boys; and ever smoking cigarettes and friends' and siblings' waterpipe smoking for girls. Awareness of the harms of waterpipe among boys, and seeing warning labels on tobacco packs among girls, were protective against waterpipe smoking progression. In conclusion, waterpipe can serve as a gateway to cigarette smoking initiation among adolescents. Waterpipe and cigarette smoking progression among initiators was solely family-related among girls and mainly peer-related among boys. The distinct gender differences for both cigarette and waterpipe smoking among Jordanian adolescents in Irbid call for culture- and gender-specific prevention interventions to stop the progression of smoking among initiators.
Abstract:
Deep-sea manganese nodules are considered important natural resources for the future because of their Ni, Cu and Co contents. Their different shapes cannot be clearly correlated with their chemical composition; surface constitution, however, can be associated with the metal contents. A classification of the nodules is suggested on the basis of these results. The iron content of the nodules shows a striking relation to the physical properties (e.g. density and porosity). The method of density measurement is the reason for this covariance: the investigation of freeze-dried nodular substance does not give this result. The Fe-rich nodules lose more hydration water than the Fe-poor ones during heat drying. The reason for this effect is the difference in crystallinity and, correspondingly, in particle size. The mean particle size is calculated on the basis of geometrical models. X-ray diffraction analysis also confirms the variation of crystallinity with the Fe content. The internal nodular textures likewise show characteristic distinctions.
Abstract:
BACKGROUND: The prevalence of residual shunt in patients after device closure of atrial septal defect and its impact on long-term outcome have not been previously defined. METHODS: From a prospective, single-institution registry of 408 patients, we selected individuals with agitated saline studies performed 1 year after closure. Baseline echocardiographic, invasive hemodynamic, and comorbidity data were compared to identify contributors to residual shunt. Survival was determined by review of the medical records and the Social Security Death Index. Survival analysis according to shunt status included construction of Kaplan-Meier curves and Cox proportional hazards modeling. RESULTS: Among 213 analyzed patients, 27% were men and the age at repair was 47 ± 17 years. Thirty patients (14%) had residual shunt at 1 year. Residual shunt was more common with Helex (22%) and CardioSEAL/STARFlex (40%) occluder devices than with Amplatzer devices (9%; P = .005). Residual shunts were also more common in whites (79% vs 46%, P = .004). At 7.3 ± 3.3 years of follow-up, 13 patients (6%) had died, including 8 (5%) with Amplatzer, 5 (25%) with CardioSEAL/STARFlex, and 0 with Helex devices. Patients with residual shunting had a higher hazard of death (20% vs 4%, P = .001; hazard ratio 4.95 [1.59-14.90]). In an exploratory multivariable analysis, residual shunting, age, hypertension, coronary artery disease, and diastolic dysfunction were associated with death. CONCLUSIONS: Residual shunt after atrial septal defect device closure is common and adversely impacts long-term survival.
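The survival analysis named in the Methods can be sketched with lifelines as below: Kaplan-Meier curves stratified by residual-shunt status, followed by a Cox proportional hazards model. Synthetic data stand in for the 213-patient registry.

```python
# Kaplan-Meier curves plus a Cox proportional hazards model (illustrative).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

rng = np.random.default_rng(0)
n = 213
df = pd.DataFrame({
    "years": rng.exponential(8.0, n).clip(0.1, 12.0),   # follow-up time
    "died": rng.integers(0, 2, n),
    "residual_shunt": rng.integers(0, 2, n),
    "age": rng.normal(47, 17, n),
})

kmf = KaplanMeierFitter()
for shunt, grp in df.groupby("residual_shunt"):
    kmf.fit(grp["years"], grp["died"], label=f"residual_shunt={shunt}")
    print(kmf.median_survival_time_)

cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="died")
cph.print_summary()     # hazard ratio for residual_shunt, adjusted for age
```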
Abstract:
Background: RAS mutations predict resistance to anti-epidermal growth factor receptor (EGFR) monoclonal antibodies in metastatic colorectal cancer. We analysed RAS mutations in non-metastatic rectal cancer patients treated with or without cetuximab within the EXPERT-C trial.
Methods: Ninety of 149 patients with tumours available for analysis were KRAS/BRAF wild-type and had been randomly assigned to capecitabine plus oxaliplatin (CAPOX) followed by chemoradiotherapy, surgery and adjuvant CAPOX, or to the same regimen plus cetuximab (CAPOX-C). Of these, four had a mutation of NRAS exon 3, and 84 were retrospectively analysed for additional KRAS (exon 4) and NRAS (exons 2/4) mutations using bi-directional Sanger sequencing. The effect of cetuximab on study end-points in the RAS wild-type population was analysed.
Results: Eleven (13%) of 84 patients initially classified as KRAS/BRAF wild-type were found to have a mutation in KRAS exon 4 (11%) or NRAS exons 2/4 (2%). Overall, 78/149 (52%) assessable patients were RAS wild-type (CAPOX, n = 40; CAPOX-C, n = 38). In this population, after a median follow-up of 63.8 months, in line with the initial analysis, the addition of cetuximab was associated with numerically higher, but not statistically significant, rates of complete response (15.8% versus 7.5%, p = 0.31), 5-year progression-free survival (75.5% versus 67.5%, hazard ratio (HR) 0.61, p = 0.25) and 5-year overall survival (83.8% versus 70%, HR 0.54, p = 0.20).
Conclusions: RAS mutations beyond KRAS exons 2 and 3 were identified in 17% of locally advanced rectal cancer patients. Given the small sample size, no definitive conclusions on the effect of additional RAS mutations on cetuximab treatment in this setting can be drawn, and further investigation of RAS in larger studies is warranted.
Abstract:
Contaminating tumour cells in apheresis products have been shown to influence the outcome of patients with multiple myeloma (MM) undergoing autologous stem cell transplantation (APBSCT). Gene scanning of clonally rearranged VDJ segments of the immunoglobulin heavy-chain gene (VDJH) is a reproducible and easy-to-perform technique that can be optimised for clinical laboratories. We used it to analyse the aphereses of 27 MM patients with clonally detectable VDJH segments undergoing APBSCT; 14 of them yielded monoclonal peaks in at least one apheresis product. A positive result was not related to any pre-transplant characteristic except age at diagnosis (lower in patients with negative products, P = 0.04). Moreover, a better pre-transplant response tended to be associated with a negative result (P = 0.069). Patients with clonally free products were more likely to obtain a better response to transplant (complete remission, 54% vs 28%; >90% reduction in the M-component, 93% vs 43%, P = 0.028). In addition, patients transplanted with polyclonal products had longer progression-free survival (39 vs 19 months, P = 0.037) and overall survival (81% vs 28% at 5 years, P = 0.045) than those transplanted with monoclonal apheresis products. In summary, gene scanning of apheresis products is a useful and clinically relevant technique in transplanted MM patients.
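The progression-free survival comparison reported above is a two-group log-rank test; a minimal lifelines sketch on invented data is shown below (13 vs 14 patients, as in the abstract, but the durations are simulated).

```python
# Two-group log-rank comparison of progression-free survival (illustrative).
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
pfs_poly = rng.exponential(39.0, 13)     # months, polyclonal apheresis group
pfs_mono = rng.exponential(19.0, 14)     # months, monoclonal apheresis group
obs_poly = np.ones_like(pfs_poly)        # 1 = progression observed
obs_mono = np.ones_like(pfs_mono)

res = logrank_test(pfs_poly, pfs_mono,
                   event_observed_A=obs_poly, event_observed_B=obs_mono)
print("chi2 =", round(res.test_statistic, 2), "p =", round(res.p_value, 3))
```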