930 results for classification algorithm
Abstract:
Background Schizophrenia has been associated with semantic memory impairment and previous studies report a difficulty in accessing semantic category exemplars (Moelter et al. 2005 Schizophr Res 78:209–217). The anterior temporal cortex (ATC) has been implicated in the representation of semantic knowledge (Rogers et al. 2004 Psychol Rev 111(1):205–235). We conducted a high-field (4T) fMRI study with the Category Judgment and Substitution Task (CJAST), an analogue of the Hayling test. We hypothesised that differential activation of the temporal lobe would be observed in schizophrenia patients versus controls. Methods Eight schizophrenia patients (7M:1F) and eight matched controls performed the CJAST, involving a randomised series of 55 common nouns (from five semantic categories) across three conditions: semantic categorisation, anomalous categorisation and word reading. High-resolution 3D T1-weighted images and GE EPI with BOLD contrast and sparse temporal sampling were acquired on a 4T Bruker MedSpec system. Image processing and analyses were performed with SPM2. Results Differential activation in the left ATC was found for anomalous categorisation relative to category judgment, in patients versus controls. Conclusions We examined semantic memory deficits in schizophrenia using a novel fMRI task. Since the ATC corresponds to an area involved in accessing abstract semantic representations (Moelter et al. 2005), these results suggest that schizophrenia patients utilise the same neural network as healthy controls, but that it is compromised in the patients, and the different ATC activity might be attributable to a weakening of category-to-category associations.
Abstract:
Extended gcd computation is interesting in itself. It also plays a fundamental role in other calculations. We present a new algorithm for solving the extended gcd problem. This algorithm has a particularly simple description and is practical. It also provides refined bounds on the size of the multipliers obtained.
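For background only (this is not the algorithm proposed in the abstract), a minimal sketch of the classical extended Euclidean algorithm shows what "extended gcd" computes: the gcd together with the Bézout multipliers whose size the new algorithm bounds more tightly.

```python
def extended_gcd(a: int, b: int) -> tuple[int, int, int]:
    """Classical extended Euclidean algorithm (iterative).

    Returns (g, x, y) with g = gcd(a, b) and a*x + b*y == g.
    Shown only as background; the abstract's algorithm refines the
    multiplier bounds beyond what this textbook version guarantees.
    """
    old_r, r = a, b
    old_x, x = 1, 0
    old_y, y = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
        old_y, y = y, old_y - q * y
    return old_r, old_x, old_y

# Example: gcd(240, 46) = 2 = 240*(-9) + 46*47
print(extended_gcd(240, 46))
```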
Abstract:
Qu-Prolog is an extension of Prolog which performs meta-level computations over object languages, such as predicate calculi and lambda-calculi, which have object-level variables, and quantifier or binding symbols creating local scopes for those variables. As in Prolog, the instantiable (meta-level) variables of Qu-Prolog range over object-level terms, and in addition other Qu-Prolog syntax denotes the various components of the object-level syntax, including object-level variables. Further, the meta-level operation of substitution into object-level terms is directly represented by appropriate Qu-Prolog syntax. Again as in Prolog, the driving mechanism in Qu-Prolog computation is a form of unification, but this is substantially more complex than for Prolog because of Qu-Prolog's greater generality, and especially because substitution operations are evaluated during unification. In this paper, the Qu-Prolog unification algorithm is specified, formalised and proved correct. Further, the analysis of the algorithm is carried out in a framework which straightforwardly allows the 'completeness' of the algorithm to be proved: though fully explicit answers to unification problems are not always provided, no information is lost in the unification process.
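For contrast, the sketch below implements only ordinary first-order (Prolog-style) unification with an occurs check; it does not attempt Qu-Prolog's additional machinery for object-level variables, binders and substitution evaluation. The term representation (uppercase strings as variables, (functor, args) tuples as compound terms) is an assumption of the example.

```python
# Minimal sketch of ordinary first-order unification with occurs check.
# Qu-Prolog's algorithm is substantially richer; this is baseline Prolog-style
# unification only, using an assumed term representation.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    # Follow variable bindings to their current value.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    t = walk(t, subst)
    if t == v:
        return True
    if isinstance(t, tuple):
        return any(occurs(v, a, subst) for a in t[1])
    return False

def unify(t1, t2, subst=None):
    """Return a most general unifier (a dict) or None if unification fails."""
    subst = {} if subst is None else subst
    t1, t2 = walk(t1, subst), walk(t2, subst)
    if t1 == t2:
        return subst
    if is_var(t1):
        return None if occurs(t1, t2, subst) else {**subst, t1: t2}
    if is_var(t2):
        return unify(t2, t1, subst)
    if isinstance(t1, tuple) and isinstance(t2, tuple):
        f1, args1 = t1
        f2, args2 = t2
        if f1 != f2 or len(args1) != len(args2):
            return None
        for a1, a2 in zip(args1, args2):
            subst = unify(a1, a2, subst)
            if subst is None:
                return None
        return subst
    return None

# f(X, g(a)) unified with f(b, g(Y)) gives {X: b, Y: a}.
print(unify(("f", ["X", ("g", [("a", [])])]),
            ("f", [("b", []), ("g", ["Y"])])))
```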
Abstract:
In studies assessing the trends in coronary events, such as the World Health Organization (WHO) MONICA Project (multinational MONItoring of trends and determinants of CArdiovascular disease), the main emphasis has been on coronary deaths and non-fatal definite myocardial infarctions (MI). It is, however, possible that the proportion of milder MIs may be increasing because of improvements in treatment and reductions in levels of risk factors. We used the MI register data of the WHO MONICA Project to investigate several definitions for mild non-fatal MIs that would be applicable in various settings and could be used to assess trends in milder coronary events. Of 38 populations participating in the WHO MONICA MI register study, more than half registered a sufficiently wide spectrum of events that it was possible to identify subsets of milder cases. The event rates and case fatality rates of MI are clearly dependent on the spectrum of non-fatal MIs that are included. On clinical grounds we propose that the original MONICA category "non-fatal possible MI" could be divided into two groups: "non-fatal probable MI" and "prolonged chest pain." Non-fatal probable MIs are cases which, in addition to "typical symptoms," have electrocardiogram (ECG) or enzyme changes suggesting cardiac ischemia, but not severe enough to fulfil the criteria for non-fatal definite MI. In more than half of the MONICA Collaborating Centers, the registration of MI covers these milder events reasonably well. Proportions of non-fatal probable MIs vary less between populations than do proportions of non-fatal possible MIs. Also, rates of non-fatal probable MI are somewhat more highly correlated with rates of fatal events and non-fatal definite MI. These findings support the validity of the category of non-fatal probable MI. In each center the increase in event rates and the decrease in case fatality due to the inclusion of non-fatal probable MI were larger for women than for men. For the WHO MONICA Project and other epidemiological studies the proposed category of non-fatal probable MIs can be used for assessing trends in rates of milder MI. Copyright (C) 1997 Elsevier Science Inc.
Abstract:
The classification rules of linear discriminant analysis are defined by the true mean vectors and the common covariance matrix of the populations from which the data come. Because these true parameters are generally unknown, they are commonly estimated by the sample mean vector and covariance matrix of the data in a training sample randomly drawn from each population. However, these sample statistics are notoriously susceptible to contamination by outliers, a problem compounded by the fact that the outliers may be invisible to conventional diagnostics. High-breakdown estimation is a procedure designed to remove this cause for concern by producing estimates that are immune to serious distortion by a minority of outliers, regardless of their severity. In this article we motivate and develop a high-breakdown criterion for linear discriminant analysis and give an algorithm for its implementation. The procedure is intended to supplement rather than replace the usual sample-moment methodology of discriminant analysis either by providing indications that the dataset is not seriously affected by outliers (supporting the usual analysis) or by identifying apparently aberrant points and giving resistant estimators that are not affected by them.
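To illustrate the general idea of substituting resistant estimates for the sample moments (not the authors' specific high-breakdown criterion), the sketch below plugs minimum covariance determinant (MCD) location and scatter estimates into the usual nearest-Mahalanobis-distance discriminant rule; the use of scikit-learn's MinCovDet and the simulated data are assumptions of the example.

```python
# Hypothetical sketch: replace the sample moments in linear discriminant
# analysis with high-breakdown (MCD) estimates of location and scatter.
# This is not the paper's criterion, only an illustration of substituting
# resistant estimators for the classical ones.
import numpy as np
from sklearn.covariance import MinCovDet

def robust_lda_fit(X_list):
    """X_list: one (n_k, p) array per class. Returns robust class means and
    a pooled robust covariance estimate."""
    means, covs, ns = [], [], []
    for X in X_list:
        mcd = MinCovDet().fit(X)          # high-breakdown location/scatter
        means.append(mcd.location_)
        covs.append(mcd.covariance_)
        ns.append(len(X))
    # Pool the per-class robust covariances, weighted by class size.
    pooled = sum(n * C for n, C in zip(ns, covs)) / sum(ns)
    return np.array(means), pooled

def robust_lda_predict(x, means, pooled):
    """Assign x to the class with the smallest robust Mahalanobis distance
    (equivalent to the linear discriminant rule with equal priors)."""
    inv = np.linalg.inv(pooled)
    d2 = [(x - m) @ inv @ (x - m) for m in means]
    return int(np.argmin(d2))

rng = np.random.default_rng(0)
X0 = rng.normal([0, 0], 1.0, size=(60, 2))
X1 = rng.normal([4, 4], 1.0, size=(60, 2))
X0[:5] = [20, -20]                        # a few gross outliers in class 0
means, pooled = robust_lda_fit([X0, X1])
print(robust_lda_predict(np.array([0.5, 0.2]), means, pooled))  # expect 0
```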
Abstract:
The aim of a clinical classification of pulmonary hypertension (PH) is to group together different manifestations of disease sharing similarities in pathophysiologic mechanisms, clinical presentation, and therapeutic approaches. In 2003, during the 3rd World Symposium on Pulmonary Hypertension, the clinical classification of PH initially adopted in 1998 during the 2nd World Symposium was slightly modified. During the 4th World Symposium held in 2008, it was decided to maintain the general architecture and philosophy of the previous clinical classifications. The modifications adopted during this meeting principally concern Group 1, pulmonary arterial hypertension (PAH). This subgroup includes patients with PAH with a family history or patients with idiopathic PAH with germline mutations (e.g., bone morphogenetic protein receptor-2, activin receptor-like kinase type 1, and endoglin). In the new classification, schistosomiasis and chronic hemolytic anemia appear as separate entities in the subgroup of PAH associated with identified diseases. Finally, it was decided to place pulmonary venoocclusive disease and pulmonary capillary hemangiomatosis in a separate group, distinct from but very close to Group 1 (now called Group 1′). Thus, Group 1 of PAH is now more homogeneous. (J Am Coll Cardiol 2009;54:S43-54) (C) 2009 by the American College of Cardiology Foundation
Abstract:
Background-Prasugrel is a novel thienopyridine that reduces new or recurrent myocardial infarctions (MIs) compared with clopidogrel in patients with acute coronary syndrome undergoing percutaneous coronary intervention. This effect must be balanced against an increased bleeding risk. We aimed to characterize the effect of prasugrel with respect to the type, size, and timing of MI using the universal classification of MI. Methods and Results-We studied 13 608 patients with acute coronary syndrome undergoing percutaneous coronary intervention randomized to prasugrel or clopidogrel and treated for 6 to 15 months in the Trial to Assess Improvement in Therapeutic Outcomes by Optimizing Platelet Inhibition With Prasugrel-Thrombolysis in Myocardial Infarction (TRITON-TIMI 38). Each MI underwent supplemental classification as spontaneous, secondary, or sudden cardiac death (types 1, 2, and 3) or procedure related (types 4 and 5), and events occurring early and after 30 days were examined. Prasugrel significantly reduced the overall risk of MI (7.4% versus 9.7%; hazard ratio [HR], 0.76; 95% confidence interval [CI], 0.67 to 0.85; P < 0.0001). This benefit was present for procedure-related MIs (4.9% versus 6.4%; HR, 0.76; 95% CI, 0.66 to 0.88; P = 0.0002) and nonprocedural (type 1, 2, or 3) MIs (2.8% versus 3.7%; HR, 0.72; 95% CI, 0.59 to 0.88; P = 0.0013) and consistently across MI size, including MIs with a biomarker peak >= 5 times the reference limit (HR, 0.74; 95% CI, 0.64 to 0.86; P = 0.0001). In landmark analyses starting at 30 days, patients treated with prasugrel had a lower risk of any MI (2.9% versus 3.7%; HR, 0.77; P = 0.014), including nonprocedural MI (2.3% versus 3.1%; HR, 0.74; 95% CI, 0.60 to 0.92; P = 0.0069). Conclusion-Treatment with prasugrel compared with clopidogrel for up to 15 months in patients with acute coronary syndrome undergoing percutaneous coronary intervention significantly reduces the risk of MIs that are procedure related and spontaneous and those that are small and large, including new MIs occurring during maintenance therapy. (Circulation. 2009; 119: 2758-2764.)
Abstract:
OBJECTIVE To examine cortical thickness and volumetric changes in the cortex of patients with polymicrogyria, using an automated image analysis algorithm. METHODS Cortical thickness of patients with polymicrogyria was measured using magnetic resonance imaging (MRI) cortical surface-based analysis and compared with age- and sex-matched healthy subjects. We studied 3 patients with disorder of cortical development (DCD), classified as polymicrogyria, and 15 controls. Two experienced neuroradiologists performed a conventional visual assessment of the MRIs. The same data were analyzed using an automated algorithm for tissue segmentation and classification. Group and individual average maps of cortical thickness differences were produced by cortical surface-based statistical analysis. RESULTS Patients with polymicrogyria showed increased thickness of the cortex in the same areas identified as abnormal by the radiologists. We also identified a reduction in the volume and thickness of cortex within additional areas of apparently normal cortex relative to controls. CONCLUSIONS Our findings indicate that there may be regions of reduced cortical thickness, which appear normal on radiological analysis, in the cortex of patients with polymicrogyria. This finding suggests that alterations in neuronal migration may have an impact on the formation of cortical areas that are visually normal. These areas are associated with, or occur concurrently with, polymicrogyria.
Abstract:
An algorithm for explicit integration of structural dynamics problems with multiple time steps is proposed that averages accelerations to obtain subcycle states at a nodal interface between regions integrated with different time steps. With integer time step ratios, the resulting subcycle updates at the interface sum to give the same effect as a central difference update over a major cycle. The algorithm is shown to have good accuracy and, in linear elastic analysis, stability properties similar to those of constant-velocity subcycling algorithms. The implementation of a generalised form of the algorithm with non-integer time step ratios is presented. (C) 1997 by John Wiley & Sons, Ltd.
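As context (a sketch only, not the proposed multi-timestep scheme), the single-timestep central difference update that the interface subcycle updates must sum to over a major cycle looks like this; the two-degree-of-freedom spring-mass system and its parameters are illustrative assumptions.

```python
# Minimal sketch of the single-timestep central difference update that the
# subcycling algorithm reproduces over a major cycle. The 2-DOF spring-mass
# chain and the load are illustrative assumptions, not the paper's problem.
import numpy as np

M = np.diag([1.0, 1.0])                       # lumped masses
K = np.array([[2.0, -1.0], [-1.0, 1.0]])      # spring stiffness matrix
f_ext = np.array([0.0, 1.0])                  # constant external load

dt = 0.01                                     # must satisfy dt < 2/omega_max
u = np.zeros(2)                               # displacements
v_half = np.zeros(2)                          # mid-step velocities
Minv = np.linalg.inv(M)

for step in range(1000):
    a = Minv @ (f_ext - K @ u)                # accelerations at time n
    v_half = v_half + dt * a                  # v_{n+1/2} = v_{n-1/2} + dt*a_n
    u = u + dt * v_half                       # u_{n+1}   = u_n + dt*v_{n+1/2}

print(u)   # undamped response oscillates about the static solution [1, 2]
```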
Abstract:
The popular Newmark algorithm, used for implicit direct integration of structural dynamics, is extended by means of a nodal partition to permit use of different timesteps in different regions of a structural model. The algorithm developed has as a special case an explicit-explicit subcycling algorithm previously reported by Belytschko, Yen and Mullen. That algorithm has been shown, in the absence of damping or other energy dissipation, to exhibit instability over narrow timestep ranges that become narrower as the number of degrees of freedom increases, making them unlikely to be encountered in practice. The present algorithm avoids such instabilities in the case of a one-to-two timestep ratio (two subcycles), achieving unconditional stability in an exponential sense for a linear problem. However, with three or more subcycles, the trapezoidal rule exhibits stability that becomes conditional, falling towards that of the central difference method as the number of subcycles increases. Instabilities over narrow timestep ranges, which become narrower as the model size increases, also appear with three or more subcycles. However, by moving the partition between timesteps one row of elements into the region suitable for integration with the larger timestep, the unstable timestep ranges become extremely narrow, even in simple systems with a few degrees of freedom. Accuracy is also improved. Use of a version of the Newmark algorithm that dissipates high frequencies minimises or eliminates these narrow bands of instability. Viscous damping is also shown to remove these instabilities, at the expense of having more effect on the low-frequency response.
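For reference (again a sketch, not the partitioned multi-timestep algorithm), the underlying single-timestep Newmark trapezoidal rule (beta = 1/4, gamma = 1/2) for a linear system M a + C v + K u = f can be written as below; the two-degree-of-freedom example values are assumptions.

```python
# Minimal sketch of the standard (single-timestep) Newmark trapezoidal rule
# (beta = 1/4, gamma = 1/2) for a linear system M a + C v + K u = f.
# This is the base integrator only; the paper's nodal partition and
# multi-timestep subcycling are not reproduced here.
import numpy as np

def newmark_step(M, C, K, f_next, u, v, a, dt, beta=0.25, gamma=0.5):
    """Advance (u, v, a) by one step of the Newmark method."""
    Keff = K + gamma / (beta * dt) * C + 1.0 / (beta * dt**2) * M
    rhs = (f_next
           + M @ (u / (beta * dt**2) + v / (beta * dt) + (1/(2*beta) - 1) * a)
           + C @ (gamma / (beta * dt) * u + (gamma/beta - 1) * v
                  + dt * (gamma/(2*beta) - 1) * a))
    u_next = np.linalg.solve(Keff, rhs)
    a_next = (u_next - u) / (beta * dt**2) - v / (beta * dt) - (1/(2*beta) - 1) * a
    v_next = v + dt * ((1 - gamma) * a + gamma * a_next)
    return u_next, v_next, a_next

# Undamped 2-DOF example (illustrative values, not from the paper).
M = np.diag([1.0, 1.0])
C = np.zeros((2, 2))
K = np.array([[2.0, -1.0], [-1.0, 1.0]])
f = np.array([0.0, 1.0])

u = np.zeros(2); v = np.zeros(2)
a = np.linalg.solve(M, f - K @ u)             # consistent initial acceleration
for _ in range(1000):
    u, v, a = newmark_step(M, C, K, f, u, v, a, dt=0.1)
print(u)                                      # oscillates about [1, 2]
```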
Abstract:
We propose a simulated-annealing-based genetic algorithm for solving model parameter estimation problems. The algorithm incorporates advantages of both genetic algorithms and simulated annealing. Tests on computer-generated synthetic data that closely resemble optical constants of a metal were performed to compare the efficiency of plain genetic algorithms against the simulated-annealing-based genetic algorithm. These tests assess the ability of the algorithms to find the global minimum and the accuracy of the values obtained for model parameters. Finally, the algorithm with the best performance is used to fit the model dielectric function to data for platinum and aluminum. (C) 1997 Optical Society of America.
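As a hypothetical illustration of combining the two ideas (not the authors' algorithm or test problems), the sketch below runs a plain genetic algorithm in which mutated offspring replace their parents only after a Metropolis acceptance test at a gradually cooled temperature; the Rosenbrock function stands in for the least-squares misfit of a dielectric-function model, and all parameter values are assumptions.

```python
# Hypothetical sketch of a simulated-annealing-based genetic algorithm:
# a plain GA (selection, crossover, mutation) in which a child replaces its
# parent only if it passes a Metropolis acceptance test at a temperature
# that is cooled each generation.
import math
import random

def fitness(p):
    x, y = p
    return (1 - x)**2 + 100 * (y - x**2)**2   # Rosenbrock: minimum 0 at (1, 1)

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(p, scale):
    return [x + random.gauss(0, scale) for x in p]

def sa_ga(pop_size=40, generations=300, t0=1.0, cooling=0.97):
    pop = [[random.uniform(-2, 2), random.uniform(-2, 2)] for _ in range(pop_size)]
    temp = t0
    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            a, b = random.sample(pop, 2)
            parent = min(a, b, key=fitness)           # tournament selection
            child = mutate(crossover(a, b), scale=0.1)
            delta = fitness(child) - fitness(parent)
            # Metropolis acceptance: always keep improvements, sometimes
            # accept worse children while the temperature is high.
            if delta < 0 or random.random() < math.exp(-delta / temp):
                new_pop.append(child)
            else:
                new_pop.append(parent)
        pop = new_pop
        temp = max(temp * cooling, 1e-3)              # cooling schedule
    return min(pop, key=fitness)

best = sa_ga()
print(best, fitness(best))    # fitness should drop toward 0 near (1, 1)
```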