907 results for Complexity score
Abstract:
The increasing number of sequences stored in genomic databases has made sequential analysis infeasible. Parallel computing has therefore brought its power to Bioinformatics through parallel algorithms for aligning and analyzing sequences, improving mainly the running time of these algorithms. In many situations, the parallel strategy also helps reduce the computational complexity of large problems. This work presents results obtained with an implementation of a parallel score-estimating technique for the score matrix calculation stage, which is the first stage of a progressive multiple sequence alignment. The performance and quality of the parallel score estimation are compared with the results of a dynamic programming approach, also implemented in parallel. This comparison shows a significant reduction in running time. Moreover, the quality of the final alignment produced by the new strategy is analyzed and compared with that of the dynamic programming approach.
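The abstract does not include implementation details, but the parallel structure of the score-matrix stage is easy to illustrate. Below is a minimal Python sketch that distributes pairwise score computations across worker processes; the `pair_score` function is a placeholder identity score, not the estimation technique evaluated in the paper.

```python
# Minimal sketch: parallel computation of the pairwise score matrix that
# precedes a progressive multiple sequence alignment. The scoring function
# below is a simple placeholder (fraction of matching positions), NOT the
# estimation technique evaluated in the paper.
from itertools import combinations
from multiprocessing import Pool

def pair_score(args):
    i, j, a, b = args
    # Placeholder score: identity over the aligned prefix of the two sequences.
    n = min(len(a), len(b))
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return i, j, matches / n if n else 0.0

def score_matrix(seqs, workers=4):
    tasks = [(i, j, seqs[i], seqs[j])
             for i, j in combinations(range(len(seqs)), 2)]
    m = [[0.0] * len(seqs) for _ in seqs]
    with Pool(workers) as pool:
        for i, j, s in pool.map(pair_score, tasks):
            m[i][j] = m[j][i] = s
    return m

if __name__ == "__main__":
    print(score_matrix(["ACGTACGT", "ACGTTCGT", "TTGTACGA"], workers=2))
```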
Abstract:
The SYNTAX score (SXscore), an anatomy-based scoring tool reflecting the complexity of coronary anatomy, has established itself as an important long-term prognostic factor in patients undergoing percutaneous coronary intervention (PCI). The incorporation of clinical factors may further augment the utility of the SXscore for longer-term risk stratification of individual patients with respect to clinical outcomes.
Abstract:
The aim of this study was to compare standard plaster models with their digital counterparts for the applicability of the Index of Complexity, Outcome, and Need (ICON). Study models of 30 randomly selected patients were generated: 30 pre-treatment (T0) and 30 post-treatment (T1). Two examiners, calibrated in the ICON, scored the digital and plaster models. The overall ICON scores were evaluated for reliability and reproducibility using kappa statistics and reliability coefficients. The reliability values for the total and weighted ICON scores were generally high for the T0 sample (range 0.83-0.95) but lower for the T1 sample (range 0.55-0.85). Differences in total ICON score between plaster and digital models were mostly statistically non-significant (P values ranging from 0.07 to 0.19), except for observer 1 in the T1 sample. No statistically different values were found for the total ICON score on either plaster or digital models. ICON scores performed on computer-based models appear to be as accurate and reliable as ICON scores on plaster models.
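For readers unfamiliar with the agreement statistics mentioned above, the following toy Python example computes Cohen's kappa between two examiners' ratings using scikit-learn; the ratings are invented for illustration and are not data from the study.

```python
# Toy illustration of inter-examiner agreement with Cohen's kappa;
# the ratings below are invented and not from the study.
from sklearn.metrics import cohen_kappa_score

examiner_1 = [2, 3, 3, 1, 4, 2, 3, 1]  # hypothetical score bands
examiner_2 = [2, 3, 2, 1, 4, 2, 3, 2]

kappa = cohen_kappa_score(examiner_1, examiner_2)
print(f"Cohen's kappa: {kappa:.2f}")  # 1.0 = perfect agreement, 0 = chance level
```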
Abstract:
OBJECTIVES The purpose of this study was to compare the 2-year safety and effectiveness of new- versus early-generation drug-eluting stents (DES) according to the severity of coronary artery disease (CAD) as assessed by the SYNTAX (Synergy between Percutaneous Coronary Intervention with Taxus and Cardiac Surgery) score. BACKGROUND New-generation DES are considered the standard of care in patients with CAD undergoing percutaneous coronary intervention. However, there are few data investigating the effects of new- over early-generation DES according to the anatomic complexity of CAD. METHODS Patient-level data from 4 contemporary, all-comers trials were pooled. The primary device-oriented clinical endpoint was the composite of cardiac death, myocardial infarction, or ischemia-driven target-lesion revascularization (TLR). The principal effectiveness and safety endpoints were TLR and definite stent thrombosis (ST), respectively. Adjusted hazard ratios (HRs) with 95% confidence intervals (CIs) were calculated at 2 years for overall comparisons, as well as stratified for patients with lower (SYNTAX score ≤11) and higher complexity (SYNTAX score >11). RESULTS A total of 6,081 patients were included in the study. New-generation DES (n = 4,554) compared with early-generation DES (n = 1,527) reduced the primary endpoint (HR: 0.75 [95% CI: 0.63 to 0.89]; p = 0.001) without interaction (p = 0.219) between patients with lower (HR: 0.86 [95% CI: 0.64 to 1.16]; p = 0.322) versus higher CAD complexity (HR: 0.68 [95% CI: 0.54 to 0.85]; p = 0.001). In patients with SYNTAX score >11, new-generation DES significantly reduced TLR (HR: 0.36 [95% CI: 0.26 to 0.51]; p < 0.001) and definite ST (HR: 0.28 [95% CI: 0.15 to 0.55]; p < 0.001) to a greater extent than in the low-complexity group (TLR p for interaction = 0.059; ST p for interaction = 0.013). New-generation DES decreased the risk of cardiac mortality in patients with SYNTAX score >11 (HR: 0.45 [95% CI: 0.27 to 0.76]; p = 0.003) but not in patients with SYNTAX score ≤11 (p for interaction = 0.042). CONCLUSIONS New-generation DES improve clinical outcomes compared with early-generation DES, with greater safety and effectiveness in patients with SYNTAX score >11.
Abstract:
BACKGROUND Diabetes mellitus and angiographic coronary artery disease complexity are intertwined and unfavorably affect prognosis after percutaneous coronary interventions, but their relative impact on long-term outcomes after percutaneous coronary intervention with drug-eluting stents remains controversial. This study determined drug-eluting stents outcomes in relation to diabetic status and coronary artery disease complexity as assessed by the Synergy Between PCI With Taxus and Cardiac Surgery (SYNTAX) score. METHODS AND RESULTS In a patient-level pooled analysis from 4 all-comers trials, 6081 patients were stratified according to diabetic status and according to the median SYNTAX score ≤11 or >11. The primary end point was major adverse cardiac events, a composite of cardiac death, myocardial infarction, and clinically indicated target lesion revascularization within 2 years. Diabetes mellitus was present in 1310 patients (22%), and new-generation drug-eluting stents were used in 4554 patients (75%). Major adverse cardiac events occurred in 173 diabetics (14.5%) and 436 nondiabetic patients (9.9%; P<0.001). In adjusted Cox regression analyses, SYNTAX score and diabetes mellitus were both associated with the primary end point (P<0.001 and P=0.028, respectively; P for interaction, 0.07). In multivariable analyses, diabetic versus nondiabetic patients had higher risks of major adverse cardiac events (hazard ratio, 1.25; 95% confidence interval, 1.03-1.53; P=0.026) and target lesion revascularization (hazard ratio, 1.54; 95% confidence interval, 1.18-2.01; P=0.002) but similar risks of cardiac death (hazard ratio, 1.41; 95% confidence interval, 0.96-2.07; P=0.08) and myocardial infarction (hazard ratio, 0.89; 95% confidence interval, 0.64-1.22; P=0.45), without significant interaction with SYNTAX score ≤11 or >11 for any of the end points. CONCLUSIONS In this population treated with predominantly new-generation drug-eluting stents, diabetic patients were at increased risk for repeat target-lesion revascularization consistently across the spectrum of disease complexity. The SYNTAX score was an independent predictor of 2-year outcomes but did not modify the respective effect of diabetes mellitus. CLINICAL TRIAL REGISTRATION URL: http://www.clinicaltrials.gov. Unique identifiers: NCT00297661, NCT00389220, NCT00617084, and NCT01443104.
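As an illustration of the kind of interaction analysis described above (diabetic status crossed with SYNTAX stratum in an adjusted Cox model), the following Python sketch uses the lifelines library on simulated data; all variable names and values are hypothetical, and the model is far simpler than the adjusted analyses reported in the study.

```python
# Sketch of a Cox model with a diabetes x SYNTAX-stratum interaction term,
# in the spirit of the adjusted analyses described above. The data are
# simulated; column names are hypothetical placeholders.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 200
diabetes = rng.integers(0, 2, n)
syntax_high = rng.integers(0, 2, n)          # 1 if SYNTAX score > 11

# Simulated event times: higher hazard for diabetic, high-SYNTAX patients.
baseline = rng.exponential(700, n)
time = baseline / (1 + 0.4 * diabetes + 0.6 * syntax_high)
event = (time < 730).astype(int)             # observed within 2 years
time = np.minimum(time, 730)                 # administrative censoring at 2 years

df = pd.DataFrame({"time": time, "event": event,
                   "diabetes": diabetes, "syntax_high": syntax_high})
df["dm_x_syntax"] = df["diabetes"] * df["syntax_high"]

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
# The dm_x_syntax row corresponds to the interaction test.
print(cph.summary[["exp(coef)", "p"]])
```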
Abstract:
Background: Following incomplete spinal cord injury (iSCI), descending drive is impaired, possibly leading to a decrease in the complexity of gait. To test the hypothesis that iSCI impairs gait coordination and decreases locomotor complexity, we collected 3D joint angle kinematics and muscle parameters from rats with a sham or an incomplete spinal cord injury. Methods: 12 adult female Long-Evans rats, 6 sham and 6 with mild-moderate T8 iSCI, were tested 4 weeks following injury. The Basso Beattie Bresnahan locomotor score was used to verify injury severity. Animals had reflective markers placed on the bony prominences of their limb joints and were filmed in 3D while walking on a treadmill. Joint angles and segment motion were analyzed quantitatively, and the complexity of joint angle trajectories and of the overall gait were calculated using permutation entropy and principal component analysis, respectively. Following treadmill testing, the animals were euthanized and hindlimb muscles removed. Excised muscles were tested for mass, density, fiber length, pennation angle, and relaxed sarcomere length. Results: Muscle parameters were similar between groups, with no evidence of muscle atrophy. The animals showed overextension of the ankle, which was compensated for by a decreased range of motion at the knee. Left-right coordination was altered, leading to left and right knee movements that were entirely out of phase, with one joint moving while the other remained stationary. Movement patterns remained symmetric. Permutation entropy measures indicated changes in complexity on a joint-specific basis, with the largest changes at the ankle. No significant difference was seen using principal component analysis. Rats were able to achieve stable weight-bearing locomotion at reasonable speeds on the treadmill despite these deficiencies. Conclusions: A decrease in supraspinal control following iSCI causes a loss of complexity of ankle kinematics. This loss can occur entirely from the loss of supraspinal control, in the absence of muscle atrophy, and can be quantified using permutation entropy. Joint-specific differences in kinematic complexity may be attributed to different sources of motor control. This work indicates the importance of the ankle for rehabilitation interventions following spinal cord injury.
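Permutation entropy, the complexity measure used in this study, is simple to compute from a joint-angle time series. The sketch below implements the standard Bandt-Pompe formulation in Python; the embedding order and delay are analysis choices, and the values used in the study are not stated in the abstract.

```python
# Minimal permutation entropy (Bandt & Pompe) for a 1-D signal such as a
# joint-angle trajectory. Order and delay are analysis parameters; the
# settings used in the study are not given in the abstract.
import math
from collections import Counter

def permutation_entropy(signal, order=3, delay=1):
    patterns = Counter()
    n = len(signal) - (order - 1) * delay
    for i in range(n):
        window = signal[i : i + order * delay : delay]
        # Ordinal pattern: the rank order of the samples in the window.
        patterns[tuple(sorted(range(order), key=window.__getitem__))] += 1
    total = sum(patterns.values())
    h = -sum((c / total) * math.log2(c / total) for c in patterns.values())
    return h / math.log2(math.factorial(order))  # normalized to [0, 1]

angle = [0.0, 0.4, 0.9, 0.7, 0.2, -0.3, -0.6, -0.2, 0.3, 0.8]
print(round(permutation_entropy(angle, order=3), 3))
```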
Abstract:
We generalize the classical notion of Vapnik–Chervonenkis (VC) dimension to ordinal VC-dimension, in the context of logical learning paradigms. Logical learning paradigms encompass the numerical learning paradigms commonly studied in Inductive Inference. A logical learning paradigm is defined as a set W of structures over some vocabulary, and a set D of first-order formulas that represent data. The sets of models of ϕ in W, where ϕ varies over D, generate a natural topology over W. We show that if D is closed under boolean operators, then the notion of ordinal VC-dimension offers a perfect characterization of the problem of predicting the truth of the members of D in a member of W, with an ordinal bound on the number of mistakes. This shows that the notion of VC-dimension has a natural interpretation in Inductive Inference when cast into a logical setting. We also study the relationships between predictive complexity, selective complexity (a variation on predictive complexity), and mind change complexity. The assumptions that D is closed under boolean operators and that W is compact often play a crucial role in establishing connections between these concepts. We then consider a computable setting with effective versions of the complexity measures, and show that the equivalence between ordinal VC-dimension and predictive complexity fails. More precisely, we prove that the effective ordinal VC-dimension of a paradigm can be defined when all other effective notions of complexity are undefined. On a better note, when W is compact, all effective notions of complexity are defined, though they are not related as in the noncomputable version of the framework.
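For context, the classical notion being generalized can be stated briefly: a family 𝒞 of subsets of X shatters a finite set S ⊆ X if every subset of S arises as the intersection of S with some member of 𝒞. This is the standard definition, not the ordinal variant introduced in the paper.

```latex
% Classical VC dimension (standard definition; the paper generalizes
% this notion to ordinal-valued mistake bounds).
\mathrm{VCdim}(\mathcal{C})
  = \sup \{\, |S| : S \subseteq X \text{ finite},\;
               \{\, C \cap S : C \in \mathcal{C} \,\} = 2^{S} \,\}
```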
Abstract:
Carrier frequency offset (CFO) and I/Q mismatch can cause significant performance degradation in OFDM systems. Estimating and compensating for them is generally difficult because they are entangled in the received signal. In this paper, we propose low-complexity estimation and compensation schemes at the receiver that are robust to a wide range of CFO and I/Q mismatch values, although performance degrades slightly for very small CFO. These schemes consist of three steps: forming a cosine estimator free of I/Q mismatch interference, estimating the I/Q mismatch using the estimated cosine value, and forming a sine estimator from samples after I/Q mismatch compensation. The estimators are based on the observation that an estimate of the cosine serves far better as the basis for I/Q mismatch estimation than an estimate of the CFO derived from the cosine function. Simulation results show that the proposed schemes improve system performance significantly and are robust to CFO and I/Q mismatch.
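The three-step estimators are specific to this paper, but the underlying I/Q mismatch signal model is standard. The Python sketch below shows the common baseband model y = αx + βx* and its textbook compensation, assuming α and β are already known; it illustrates the impairment being addressed, not the paper's proposed low-complexity estimators, and it omits CFO entirely.

```python
# Standard I/Q imbalance model and its textbook compensation. This
# illustrates the impairment the paper addresses, NOT the paper's
# estimators. Gain g and phase phi are arbitrary example values.
import numpy as np

rng = np.random.default_rng(1)
x = (rng.choice([-1, 1], 64) + 1j * rng.choice([-1, 1], 64)) / np.sqrt(2)  # QPSK

g, phi = 1.05, np.deg2rad(3)              # example gain/phase mismatch
alpha = (1 + g * np.exp(-1j * phi)) / 2   # direct-signal coefficient
beta = (1 - g * np.exp(1j * phi)) / 2     # image (conjugate) coefficient
y = alpha * x + beta * np.conj(x)         # received signal with I/Q mismatch

# Compensation, assuming alpha and beta have been estimated:
x_hat = (np.conj(alpha) * y - beta * np.conj(y)) / (abs(alpha)**2 - abs(beta)**2)
print(np.max(np.abs(x_hat - x)))          # ~0: mismatch removed
```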
Abstract:
This document reviews international and national practices in investment decision-support tools for road asset management. Efforts were concentrated on identifying the analytic frameworks, evaluation methodologies, and criteria adopted by current tools, with particular attention to how current approaches support Triple Bottom Line decision-making.

Benefit Cost Analysis and Multiple Criteria Analysis are the principal methodologies supporting decision-making in Road Asset Management. The complexity of their application differs significantly across international practices, and there is continuing discussion among practitioners and researchers as to which is more appropriate. It is suggested that the two approaches be regarded as complementary rather than competing. Multiple Criteria Analysis may be particularly helpful in the early stages of project development, such as strategic planning. Benefit Cost Analysis is used most widely for project prioritisation and for selecting the final project from among a set of alternatives.

Benefit Cost Analysis is a useful tool for investment decision-making from an economic perspective. An extension of the approach, which includes social and environmental externalities, is currently used to support Triple Bottom Line decision-making in the road sector. However, several issues in its application deserve attention. First, there is a need for a degree of commonality in how social and environmental externalities are considered, which may be achieved by aggregating best practices; the level of detail should differ across decision-making levels, and it is intended to develop a generic framework to coordinate the range of existing practices. A standard framework would also help reduce the double counting that appears in some current practices. Caution is also needed regarding the methods used to value social and environmental externalities. A number of methods, such as market price, resource costs, and Willingness to Pay, were found in the review; the use of unreasonable monetisation methods has, in some cases, discredited Benefit Cost Analysis in the eyes of decision-makers and the public. Some social externalities, such as employment and regional economic impacts, are generally omitted in current practice due to a lack of information and credible models; it may be more appropriate to consider them in qualitative form within a Multiple Criteria Analysis. Consensus has been reached internationally on considering noise and air pollution, yet Australian practice generally omits these externalities. Equity, whether between regions or between social groups (income, age, gender, disability, etc.), is an important consideration in Road Asset Management, but current practice lacks a well-developed quantitative measure for equity issues, and more research is needed on this point.

Although Multiple Criteria Analysis has been used for decades, there is no generally accepted framework for choosing modelling methods and treating the various externalities, so different analysts are unlikely to reach consistent conclusions about a policy measure. Some current practices favour methods that can prioritise alternatives, such as Goal Programming, Goal Achievement Matrix, and the Analytic Hierarchy Process; others simply present the various impacts to decision-makers to characterise the projects. Weighting and scoring systems are critical in most Multiple Criteria Analyses, yet the processes of assigning weights and scores have been criticised as highly arbitrary and subjective, so the process should be as transparent as possible. Obtaining weights and scores by consulting local communities is common practice, but is likely to bias results towards local interests. Interactive approaches can help decision-makers elaborate their preferences, but the computational burden of a large-scale problem, such as a large state road network, may cause decision-makers to lose interest during the solution process. Current practices tend to use cardinal or ordinal scales to measure non-monetised externalities, and distorted valuations can occur when variables measured in physical units are converted to scales: for example, if decibels of noise are converted linearly to a scale of -4 to +4, the difference between 3 and 4 represents a far greater increase in discomfort than the increase from 0 to 1. Assigning different weights to individual scores is suggested as a remedy (a toy illustration of the weighted-scoring mechanism follows below). Due to overlapping goals, double counting also appears in some Multiple Criteria Analyses; the situation can be improved by carefully selecting and defining investment goals and criteria. Other issues, such as the treatment of time effects and the incorporation of risk and uncertainty, have received scant attention in current practice. This report suggests establishing a common analytic framework to deal with these issues.
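The weighted-scoring step criticised above is easy to make concrete. The following toy Python example aggregates invented criterion scores on a -4 to +4 scale with invented weights; it is purely illustrative of the mechanism, not of any tool reviewed in the document.

```python
# Toy weighted-scoring aggregation as used in Multiple Criteria Analysis;
# criteria, weights, and scores are invented for illustration. Note the
# issue raised above: equal scale steps need not represent equal impacts.
weights = {"cost": 0.4, "safety": 0.3, "noise": 0.2, "equity": 0.1}
# Scores on a -4..+4 scale for two hypothetical road projects.
projects = {
    "Project A": {"cost": 2, "safety": 3, "noise": -1, "equity": 0},
    "Project B": {"cost": 3, "safety": 1, "noise": 2, "equity": 1},
}
for name, scores in projects.items():
    total = sum(weights[c] * s for c, s in scores.items())
    print(f"{name}: weighted score = {total:.2f}")
```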
Abstract:
New product development projects are experiencing increasing internal and external project complexity. Complexity leadership theory proposes that external complexity requires adaptive and enabling leadership, which facilitates opportunity recognition (OR). We ask whether internal complexity also requires OR for increased adaptability. We extend a model of EO and OR to conclude that internal complexity may require more careful OR. This means that leaders of technically or structurally complex projects need to evaluate opportunities more carefully than those in projects with external or technological complexity.
Abstract:
Aim – To develop and assess the predictive capabilities of a statistical model that relates routinely collected Trauma Injury Severity Score (TRISS) variables to length of hospital stay (LOS) in survivors of traumatic injury. Method – Retrospective cohort study of adults who sustained a serious traumatic injury and survived until discharge from Auckland City, Middlemore, Waikato, or North Shore Hospitals between 2002 and 2006. Cube-root-transformed LOS was analysed using two-level mixed-effects regression models. Results – 1498 eligible patients were identified, 1446 (97%) injured by a blunt mechanism and 52 (3%) by a penetrating mechanism. For blunt mechanism trauma, 1096 (76%) were male, the average age was 37 years (range: 15-94 years), and LOS and TRISS score information was available for 1362 patients. Spearman’s correlation and the median absolute prediction error between LOS and the original TRISS model were ρ=0.31 and 10.8 days, respectively; between LOS and the final multivariable two-level mixed-effects regression model they were ρ=0.38 and 6.0 days. Insufficient data were available for the analysis of penetrating mechanism models. Conclusions – Neither the original TRISS model nor the refined model has sufficient ability to accurately or reliably predict LOS. Additional predictor variables for LOS and other indicators of morbidity need to be considered.
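The modelling approach described can be sketched in a few lines of Python with statsmodels: cube-root-transformed LOS regressed on predictor variables, with hospital as the grouping (random-effect) level. The data frame, column names, and simulated values below are hypothetical placeholders, not the study data.

```python
# Sketch of the described analysis: cube-root-transformed length of stay
# regressed on TRISS-style variables with hospital as a random grouping
# level. Data and column names are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 300
df = pd.DataFrame({
    "hospital": rng.choice(["AucklandCity", "Middlemore",
                            "Waikato", "NorthShore"], n),
    "age": rng.integers(15, 95, n),
    "triss": rng.uniform(0.5, 1.0, n),    # TRISS probability of survival
})
df["los"] = rng.gamma(2.0, 5.0, n) * (1.5 - df["triss"])  # simulated LOS, days
df["los_cuberoot"] = np.cbrt(df["los"])   # the cube-root transform

model = smf.mixedlm("los_cuberoot ~ age + triss", df, groups=df["hospital"])
result = model.fit()
print(result.summary())
```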