778 results for predictive algorithm
Abstract:
Natural selection favors the survival and reproduction of the organisms best adapted to their environment. The selection mechanism in evolutionary algorithms mimics this process, aiming to create environmental conditions in which artificial organisms can evolve to solve the problem at hand. This paper proposes a new selection scheme for evolutionary multiobjective optimization. A similarity measure defining the concept of neighborhood is a key feature of the proposed selection. Contrary to commonly used approaches, which are usually defined on the basis of distances between either individuals or weight vectors, similarity and neighborhood are considered here in terms of the angle between individuals in the objective space: the smaller the angle, the more similar the individuals. This notion is exploited during both mating and environmental selection. Convergence is ensured by minimizing the distances from individuals to a reference point, whereas diversity is preserved by maximizing the angles between neighboring individuals. Experimental results reveal highly competitive performance and useful characteristics of the proposed selection. Its strong diversity-preserving ability allows it to produce significantly better performance on some problems when compared with state-of-the-art algorithms.
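A minimal sketch of the angle-based similarity idea described above, assuming a population given as objective vectors and a single reference point; the function names and the greedy removal rule are illustrative simplifications, not the paper's exact scheme:

    import numpy as np

    def angle(u, v, ref):
        """Angle (radians) between two objective vectors, taken relative to a reference point."""
        a, b = u - ref, v - ref
        cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        return np.arccos(np.clip(cos, -1.0, 1.0))

    def environmental_selection(objectives, ref, k):
        """Greedy sketch: keep k individuals, discarding one member of the most
        similar (smallest-angle) pair at a time, preferring to keep the one
        closer to the reference point (convergence) while spreading angles (diversity)."""
        pop = list(range(len(objectives)))
        while len(pop) > k:
            # find the pair with the smallest pairwise angle (most similar neighbours)
            pairs = [(angle(objectives[i], objectives[j], ref), i, j)
                     for idx, i in enumerate(pop) for j in pop[idx + 1:]]
            _, i, j = min(pairs)
            # drop whichever of the two is farther from the reference point
            drop = i if np.linalg.norm(objectives[i] - ref) > np.linalg.norm(objectives[j] - ref) else j
            pop.remove(drop)
        return pop

    # toy usage: 6 bi-objective points, keep 3
    pts = np.array([[1.0, 5.0], [1.2, 4.8], [2.0, 3.0], [3.0, 2.0], [4.8, 1.2], [5.0, 1.0]])
    print(environmental_selection(pts, ref=np.zeros(2), k=3))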
Abstract:
Invasive aspergillosis (IA) is a life-threatening fungal disease commonly diagnosed among individuals with immunological deficits, namely hematological patients undergoing chemotherapy or allogeneic hematopoietic stem cell transplantation. Vaccines are not available, and despite improved diagnosis and antifungal therapy, the treatment of IA is associated with a poor outcome. Importantly, the risk of infection and its clinical outcome vary significantly even among patients with similar predisposing clinical factors and microbiological exposure. Recent insights into antifungal immunity have further highlighted the complexity of host-fungus interactions and the multiple pathogen-sensing systems activated to control infection. How to decode this information into clinical practice remains, however, a challenging issue in medical mycology. Here, we address recent advances in our understanding of the host-fungus interaction and discuss the application of this knowledge in potential strategies aimed at moving toward personalized diagnostics and treatment (theranostics) in immunocompromised patients. Ultimately, the integration of individual traits into a clinically applicable process to predict the risk and progression of disease, and the efficacy of antifungal prophylaxis and therapy, holds the promise of a pioneering innovation benefiting patients at risk of IA.
Abstract:
Integrated master's dissertation in Engenharia e Gestão de Sistemas de Informação (Information Systems Engineering and Management)
Abstract:
Olive oil quality grading is traditionally assessed through human sensory evaluation of positive and negative attributes (olfactory, gustatory, and final olfactory-gustatory sensations). However, it is not guaranteed that trained panelists can correctly classify monovarietal extra-virgin olive oils according to olive cultivar. In this work, the potential application of human (sensory panelists) and artificial (electronic tongue) sensory evaluation of olive oils was studied with the aim of discriminating eight single-cultivar extra-virgin olive oils. Linear discriminant, partial least squares discriminant, and sparse partial least squares discriminant analyses were evaluated. The best predictive classification was obtained using linear discriminant analysis with a simulated annealing selection algorithm. A low-level data fusion approach (18 electronic tongue signals and nine sensory attributes) enabled 100% correct leave-one-out cross-validation classification, improving on the discrimination capability of the sensor profiles or sensory attributes used individually (70% and 57% leave-one-out correct classifications, respectively). Human sensory evaluation and electronic tongue analysis may therefore be used as complementary tools, allowing successful monovarietal olive oil discrimination.
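For illustration, a hedged sketch of leave-one-out cross-validated linear discriminant analysis on low-level fused data of the kind described above, using scikit-learn; the sample counts and randomly generated feature blocks are placeholders, and the simulated annealing variable selection step is omitted:

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    rng = np.random.default_rng(0)

    # hypothetical stand-ins: 40 oil samples, 18 e-tongue signals and 9 sensory attributes
    tongue = rng.normal(size=(40, 18))
    sensory = rng.normal(size=(40, 9))
    cultivar = rng.integers(0, 8, size=40)        # 8 single-cultivar classes

    # low-level data fusion: simply concatenate the two feature blocks
    fused = np.hstack([tongue, sensory])

    # leave-one-out cross-validated correct-classification rate
    lda = LinearDiscriminantAnalysis()
    acc = cross_val_score(lda, fused, cultivar, cv=LeaveOneOut()).mean()
    print(f"LOO-CV accuracy: {acc:.2f}")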
Abstract:
The Amazon várzeas are an important component of the Amazon biome, but anthropic and climatic impacts have been leading to forest loss and interruption of essential ecosystem functions and services. The objectives of this study were to evaluate the capability of the Landsat-based Detection of Trends in Disturbance and Recovery (LandTrendr) algorithm to characterize changes in várzea forest cover in the Lower Amazon, and to analyze the potential of spectral and temporal attributes for classifying forest loss as either natural or anthropogenic. We used a time series of 37 Landsat TM and ETM+ images acquired between 1984 and 2009. The LandTrendr algorithm was used to detect forest cover change together with the attributes "start year", "magnitude", and "duration" of the changes, as well as "NDVI at the end of series". Detection was restricted to areas identified as having forest cover at the start and/or end of the time series. We used the Support Vector Machine (SVM) algorithm to classify the extracted attributes, differentiating between anthropogenic and natural forest loss. Detection reliability was consistently high for change events along the Amazon River channel, but variable for changes within the floodplain. Spectral-temporal trajectories faithfully represented the nature of changes in floodplain forest cover, corroborating field observations. We estimated anthropogenic forest losses to be larger (1,071 ha) than natural losses (884 ha), with a global classification accuracy of 94%. We conclude that the LandTrendr algorithm is a reliable tool for studies of forest dynamics throughout the floodplain.
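A small sketch of how the four LandTrendr attributes named above could feed an SVM that separates anthropogenic from natural loss, using scikit-learn; the training rows and labels are hypothetical, not the study's data:

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    # hypothetical training table: one row per detected change event with the four
    # LandTrendr attributes cited in the abstract
    X = np.array([
        # start_year, magnitude, duration, ndvi_end
        [1988, 0.45, 2, 0.30],   # e.g. abrupt loss, low NDVI afterwards
        [1995, 0.20, 8, 0.65],   # gradual change, forest partially recovers
        [2005, 0.50, 1, 0.25],
        [1990, 0.15, 10, 0.70],
    ])
    y = np.array([1, 0, 1, 0])   # 1 = anthropogenic loss, 0 = natural loss

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    clf.fit(X, y)
    print(clf.predict([[2001, 0.40, 2, 0.35]]))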
Abstract:
PURPOSE: To evaluate the efficacy of a systematic model of care for patients with chest pain and no ST-segment elevation in the emergency room. METHODS: Of 1,003 patients submitted to an algorithm-guided diagnostic investigation stratified by probability of acute ischemic syndrome, we analyzed 600 without ST-segment elevation, who were enrolled in the diagnostic routes of medium (route 2) and low (route 3) probability of ischemic syndrome. RESULTS: In route 2 we found 17% acute myocardial infarction (AMI) and 43% unstable angina, whereas in route 3 the rates were 2% and 7%, respectively. Patients with a normal/non-specific ECG had a 6% probability of AMI, and those with a negative first CKMB had 7%; combining the two findings only reduced it to 4%. In route 2 the diagnosis of AMI could only be ruled out with serial CKMB measurement up to 9 hours, whereas in route 3 it could be done within 3 hours. The sensitivity and negative predictive value of the admission CKMB for AMI were 52% and 93%, respectively. About one half of the patients with unstable angina did not show objective ischemic changes on admission. CONCLUSION: The use of a systematic model of care in patients with chest pain offers the opportunity of preventing inappropriate release of patients with ACI and reduces unnecessary admissions. However, some patients, even with a normal ECG, should not be released on the basis of a negative first CKMB. Serial measurement of CKMB up to 9 hours is necessary in patients with medium probability of AMI.
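A hedged sketch of the route-dependent observation rule implied above (serial CKMB up to 9 hours in route 2 versus 3 hours in route 3); the function names and the discharge condition are illustrative, not the published protocol:

    def ckmb_observation_hours(route):
        """Observation window for serial CKMB sampling, per the abstract:
        route 2 (medium probability) requires sampling up to 9 h,
        route 3 (low probability) up to 3 h."""
        return {2: 9, 3: 3}[route]

    def may_discharge(route, hours_observed, all_ckmb_negative, ecg_normal):
        """Illustrative rule only: even with a normal ECG and a negative first
        CKMB, discharge waits until the full serial window is negative."""
        return all_ckmb_negative and ecg_normal and hours_observed >= ckmb_observation_hours(route)

    print(may_discharge(route=2, hours_observed=3, all_ckmb_negative=True, ecg_normal=True))  # False
    print(may_discharge(route=2, hours_observed=9, all_ckmb_negative=True, ecg_normal=True))  # True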
Abstract:
OBJECTIVE: Risk stratification of patients with nonsustained ventricular tachycardia (NSVT) and chronic chagasic cardiomyopathy (CCC). METHODS: Seventy-eight patients with CCC and NSVT were studied consecutively and prospectively. All patients underwent 24-hour Holter monitoring, radioisotopic ventriculography, left ventricular angiography, and electrophysiologic study with programmed ventricular stimulation. RESULTS: Sustained monomorphic ventricular tachycardia (SMVT) was induced in 25 patients (32%), NSVT in 20 (25.6%), and ventricular fibrillation in 4 (5.1%). In 29 patients (37.2%) no arrhythmia was inducible. During a 55.7-month follow-up, 22 patients (28.2%) died: 16 from sudden death, 2 from nonsudden cardiac death, and 4 from noncardiac death. Logistic regression analysis showed that inducibility was the main independent variable predicting the occurrence of subsequent events and cardiac death (odds ratios of 2.56 and 2.17, respectively). The Mantel-Haenszel chi-square test showed that survival probability was significantly lower in the inducible group than in the noninducible group. The percentage of patients free of events was significantly higher in the noninducible group. CONCLUSION: Induction of SMVT during programmed ventricular stimulation was a predictor of arrhythmia occurrence, cardiac death, and overall mortality in patients with CCC and NSVT.
Abstract:
OBJECTIVE: To determine, in arrhythmogenic right ventricular cardiomyopathy, the value of QT interval dispersion for identifying the induction of sustained ventricular tachycardia in the electrophysiological study or the risk of sudden cardiac death. METHODS: We assessed QT interval dispersion in the 12-lead electrocardiogram of 26 patients with arrhythmogenic right ventricular cardiomyopathy and in 16 controls of similar age and sex, and analyzed its association with sustained ventricular tachycardia and sudden cardiac death. RESULTS (mean ± SD): QT interval dispersion was 53.8 ± 14.1 ms in patients and 35.0 ± 10.6 ms in the control group (p = 0.001). In patients with induction of ventricular tachycardia it was 52.5 ± 13.8 ms, and without induction, 57.5 ± 12.8 ms (p = 0.420). Over a mean follow-up of 41 ± 11 months, five sudden cardiac deaths occurred; QT interval dispersion in this group was 62.0 ± 17.8 ms, and in the others it was 51.9 ± 12.8 ms (p = 0.852). Using a cutoff ≥ 60 ms to define an increased degree of QT interval dispersion, we were able to identify patients at risk of sudden cardiac death with a sensitivity of 60%, a specificity of 57%, and positive and negative predictive values of 25% and 85%, respectively. CONCLUSION: Patients with arrhythmogenic right ventricular cardiomyopathy have a significant increase in the degree of QT interval dispersion when compared with the healthy population. However, it did not identify patients with induction of ventricular tachycardia in the electrophysiological study and showed a very low predictive value for defining the risk of sudden cardiac death in the population studied.
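The four reported figures follow from a standard 2×2 table for the dichotomized ≥ 60 ms cutoff; the sketch below uses counts reconstructed to be consistent with the 26-patient cohort and the percentages above, not taken from the paper:

    def diagnostic_metrics(tp, fp, fn, tn):
        """Standard definitions for a dichotomized marker such as
        QT-interval dispersion >= 60 ms versus sudden cardiac death."""
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn),
        }

    # hypothetical 2x2 counts consistent with 26 patients and 5 sudden deaths
    print(diagnostic_metrics(tp=3, fp=9, fn=2, tn=12))
    # -> sensitivity 0.60, specificity ~0.57, PPV 0.25, NPV ~0.86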
Abstract:
Decision support models in intensive care units are developed to support the medical staff in their decision-making process. However, the optimization of these models is particularly difficult to carry out due to their dynamic, complex, and multidisciplinary nature. Thus, there is constant research and development of new algorithms capable of extracting knowledge from large volumes of data in order to obtain better predictive results than the current algorithms. To test the optimization techniques, a case study with real data provided by the INTCare project was explored. This dataset concerns extubation cases. Several models, such as Evolutionary Fuzzy Rule Learning, Lazy Learning, Decision Trees, and many others, were analysed in order to detect early extubation. The hybrid Decision Trees Genetic Algorithm, Supervised Classifier System, and KNNAdaptive obtained the highest accuracy rates (93.2%, 93.1%, and 92.97%, respectively), showing their feasibility for working in a real environment.
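As an illustration of the base decision-tree step (the hybrid Decision Trees Genetic Algorithm itself is not reproduced here), a hedged sketch with scikit-learn on simulated stand-in features rather than INTCare data:

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(42)

    # hypothetical vital-sign style features for ventilated patients (not INTCare data)
    X = rng.normal(size=(200, 5))                    # e.g. oxygenation, respiratory rate, etc.
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # 1 = candidate for early extubation

    tree = DecisionTreeClassifier(max_depth=4, random_state=0)
    acc = cross_val_score(tree, X, y, cv=10).mean()
    print(f"10-fold CV accuracy: {acc:.3f}")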
Abstract:
This paper presents a model predictive current control applied to a proposed single-phase five-level active rectifier (FLAR). This current control strategy uses the discrete-time nature of the active rectifier to define its state in each sampling interval. Although the switching frequency is not constant, the strategy allows the current to follow the reference with low total harmonic distortion (THDF). The implementation of the active rectifier used to obtain the experimental results is described in detail throughout the paper, covering the circuit topology, the principle of operation, the power theory, and the current control strategy. The experimental results confirm the robustness and good performance (low current THDF and controlled output voltage) of the proposed single-phase FLAR operating with model predictive current control.
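A minimal sketch of a generic finite-control-set model predictive current control loop of the kind described above, assuming a simplified single-phase L-R model and five evenly spaced converter voltage levels; all parameters are placeholders, not the paper's prototype:

    import numpy as np

    # simplified single-phase L-R model parameters (assumed, not the paper's converter)
    L, R, TS = 5e-3, 0.1, 50e-6
    VDC = 400.0
    LEVELS = np.array([-1.0, -0.5, 0.0, 0.5, 1.0]) * VDC   # five-level converter voltages

    def predict_current(i_k, v_grid, v_conv):
        """Forward-Euler discretization of L di/dt = v_grid - v_conv - R i."""
        return i_k + (TS / L) * (v_grid - v_conv - R * i_k)

    def fcs_mpc_step(i_k, i_ref_next, v_grid):
        """Pick the converter level whose one-step current prediction is closest
        to the reference (cost = squared current error)."""
        costs = [(i_ref_next - predict_current(i_k, v_grid, v)) ** 2 for v in LEVELS]
        return LEVELS[int(np.argmin(costs))]

    # one illustrative step: sinusoidal reference in phase with the grid voltage
    print(fcs_mpc_step(i_k=2.0, i_ref_next=2.5, v_grid=230 * np.sqrt(2) * np.sin(0.1)))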
Abstract:
OBJECTIVE: To investigate preoperative predictive factors of severe perioperative intercurrent events and in-hospital mortality in coronary artery bypass graft (CABG) surgery, and to develop specific risk-prediction models for these events, mainly those that can be modified in the preoperative period. METHODS: We prospectively studied 453 patients who underwent CABG. Factors independently associated with the events of interest were determined with multiple logistic regression and the Cox proportional hazards regression model. RESULTS: The mortality rate was 11.3% (51/453), and 21.2% of the patients had one or more perioperative intercurrent events. In the final model, the following variables remained associated with the risk of intercurrent events: age ≥ 70 years, female sex, hospitalization via SUS (Sistema Único de Saúde, the Brazilian public health system), cardiogenic shock, ischemia, and dependence on dialysis. In the multiple logistic regression for in-hospital mortality, the following variables entered the risk-prediction model: age ≥ 70 years, female sex, hospitalization via SUS, diabetes, renal dysfunction, and cardiogenic shock. In the Cox regression model for death within 7 days of surgery, the following variables remained associated with mortality: age ≥ 70 years, female sex, cardiogenic shock, and hospitalization via SUS. CONCLUSION: Aspects linked to the structure of the Brazilian health system emerged as factors with great impact on the results obtained, indicating that the events investigated also depend on factors unrelated to the patient's intrinsic condition.
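For illustration, a hedged Cox proportional hazards sketch using the lifelines library on simulated data, with covariates named after those retained in the abstract's 7-day model; none of the numbers reproduce the study:

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(1)
    n = 200

    # hypothetical post-CABG cohort (not the study's data)
    age_ge_70 = rng.integers(0, 2, n)
    female = rng.integers(0, 2, n)
    shock = rng.binomial(1, 0.15, n)
    sus = rng.binomial(1, 0.6, n)

    # simulated exponential survival times whose hazard grows with the covariates
    hazard = 0.02 * np.exp(0.8 * age_ge_70 + 0.4 * female + 1.2 * shock + 0.5 * sus)
    time = rng.exponential(1.0 / hazard)
    df = pd.DataFrame({
        "time": np.minimum(time, 7.0),            # administrative censoring at 7 days
        "death": (time <= 7.0).astype(int),
        "age_ge_70": age_ge_70, "female": female,
        "cardiogenic_shock": shock, "sus_admission": sus,
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="time", event_col="death")
    print(cph.summary[["exp(coef)"]])             # hazard ratios per covariate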
Abstract:
OBJECTIVE: To analyze the predictive factors of complications after coronary stent implantation in a consecutive cohort study. METHODS: Clinical and angiographic characteristics related to the procedure were analyzed, and the incidence of major cardiovascular complications (myocardial infarction, urgent surgery, new angioplasty, death) during the in-hospital phase was recorded. Data were stored in an Access database and analyzed using the SPSS 6.0 statistical package and a stepwise backwards multiple logistic regression model. RESULTS: One thousand and eighteen patients (mean age 61 ± 11 years, 29% female) underwent 1,070 stent implantations. The angiographic success rate was 96.8%, the clinical success rate was 91%, and the incidence of major cardiovascular complications was 7.9%. The variables independently associated with major cardiovascular complications, with their respective odds ratios (OR), were: rescue stent, OR = 5.1 (2.7-9.6); filamentary stent, OR = 4.5 (2.2-9.1); first-generation tubular stent, OR = 2.4 (1.2-4.6); multiple stents, OR = 3 (1.6-5.6); lesion complexity, OR = 2.4 (1.1-5.1); and thrombus, OR = 2 (1.1-3.5). CONCLUSION: The results stress the importance of angiographic and technique-related variables in the risk of complications and draw attention to the influence of the stent's design on the result of the procedure.
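A hedged sketch of how odds ratios of this kind are obtained from a fitted logistic regression model, using statsmodels on simulated data (the stepwise selection step is omitted); the predictor names echo the abstract, but the rows are invented:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(7)
    n = 1018   # cohort size from the abstract; the rows themselves are simulated

    # hypothetical binary predictors echoing the abstract's variables
    rescue_stent = rng.binomial(1, 0.05, n)
    multiple_stents = rng.binomial(1, 0.05, n)
    thrombus = rng.binomial(1, 0.10, n)

    # simulated outcome: major cardiovascular complication
    logit = -3.0 + 1.6 * rescue_stent + 1.1 * multiple_stents + 0.7 * thrombus
    complication = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    X = sm.add_constant(np.column_stack([rescue_stent, multiple_stents, thrombus]))
    fit = sm.Logit(complication, X).fit(disp=False)

    # odds ratios with 95% confidence intervals, in the format reported above
    print(np.exp(fit.params))        # first entry is the intercept
    print(np.exp(fit.conf_int()))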
Abstract:
Today's advances in computing power are driven by the parallel processing capabilities of available hardware architectures. These architectures enable the acceleration of algorithms when the algorithms are properly parallelized and exploit the specific processing power of the underlying architecture; however, converting an algorithm into its parallel form is complex, and that form is specific to each type of parallel hardware. Most current general-purpose processors integrate several cores on a single chip, resulting in what is known as a Symmetric Multi-Processor (SMP) unit; nowadays even desktop computers use multicore processors, and the industry trend is to increase the number of integrated cores as technology matures. On the other hand, Graphics Processor Units (GPU), originally designed to handle only video processing, have emerged as interesting alternatives for algorithm acceleration: currently available GPUs can run on the order of 200 to 400 parallel processing threads. Scientific computing can be implemented on this hardware thanks to the programmability of the newer devices, denoted General Processing Graphics Processor Units (GPGPU). However, unlike the SMPs mentioned above, GPGPUs are not general-purpose devices: they offer limited memory per board compared with general-purpose processors, and the type of parallel processing performed must suit the hardware for its use to be productive. Finally, Field Programmable Gate Arrays (FPGA) are programmable devices that can perform large numbers of operations in parallel with low latency and deep pipelines, and can therefore be used to implement specific algorithms that must run at very high speeds; however, their programming is harder than software approaches, and testing the algorithm instantiated in the device is typically time-consuming. In this context, where several alternatives for speeding up algorithms are available, our work aims at determining the specific features of each of these architectures and developing the know-how required to accelerate algorithm execution on them. Starting from the characteristics of the hardware, we seek to determine the properties a parallel algorithm must have in order to be accelerated, which of these hardware types is most suitable for its implementation, and how to combine them so that they complement each other beneficially. In particular, we consider the level of data dependence, the need for synchronization during parallel processing, the size of the data to be processed, and the complexity of parallel programming on each type of hardware.
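A tiny Python illustration of the data-dependence criterion mentioned above: an element-wise operation with no dependences spreads naturally over SMP cores, while a loop-carried dependence (here a naive prefix sum) resists such parallelization; the example is illustrative only:

    from multiprocessing import Pool

    def square(x):
        return x * x

    def independent_map(data):
        """No data dependence: each element can go to a different core (SMP/GPU friendly)."""
        with Pool() as pool:
            return pool.map(square, data)

    def prefix_sum(data):
        """Loop-carried dependence: each step needs the previous result,
        so a naive port to parallel hardware gains nothing."""
        out, acc = [], 0
        for x in data:
            acc += x
            out.append(acc)
        return out

    if __name__ == "__main__":
        print(independent_map(range(8)))
        print(prefix_sum(range(8)))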
Abstract:
Background: End-stage kidney disease patients continue to have markedly increased cardiovascular disease morbidity and mortality. Analysis of genetic factors connected with the renin-angiotensin system that influence the survival of patients with end-stage kidney disease supports the ongoing search for improved outcomes. Objective: To assess survival and its association with polymorphisms of renin-angiotensin system genes (angiotensin I-converting enzyme insertion/deletion and angiotensinogen M235T) in patients undergoing hemodialysis. Methods: This observational study was designed to examine the role of renin-angiotensin system genes. We analyzed 473 chronic hemodialysis patients in four dialysis units in the state of Rio de Janeiro. Survival rates were calculated by the Kaplan-Meier method, and the differences between the curves were evaluated by the Tarone-Ware, Peto-Prentice, and log-rank tests. We also used logistic regression analysis and the multinomial model. A p value ≤ 0.05 was considered statistically significant. The local medical ethics committee approved this study. Results: The mean age of the patients was 45.8 years. The overall survival rate was 48% at 11 years. The major causes of death were cardiovascular diseases (34%) and infections (15%). Logistic regression analysis found statistical significance for the following variables: age (p = 0.000038), TT angiotensinogen genotype (p = 0.08261), and family income greater than five times the minimum wage (p = 0.03089), the latter being a protective factor. Conclusions: The survival of hemodialysis patients is likely to be influenced by the TT genotype of the angiotensinogen M235T gene.
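For illustration, a hedged Kaplan-Meier and log-rank sketch with the lifelines library, grouping simulated patients by an assumed TT versus non-TT angiotensinogen genotype; the follow-up times are invented, not the study's data:

    import numpy as np
    from lifelines import KaplanMeierFitter
    from lifelines.statistics import logrank_test

    rng = np.random.default_rng(3)
    n = 473   # cohort size from the abstract; follow-up times are simulated

    tt_genotype = rng.binomial(1, 0.2, n)                           # 1 = angiotensinogen M235T TT
    time = rng.exponential(np.where(tt_genotype == 1, 8.0, 12.0))   # years on hemodialysis
    observed = time <= 11.0                                         # death observed within 11 years
    time = np.minimum(time, 11.0)                                   # administrative censoring

    kmf = KaplanMeierFitter()
    for label, mask in [("TT", tt_genotype == 1), ("non-TT", tt_genotype == 0)]:
        kmf.fit(time[mask], event_observed=observed[mask], label=label)
        print(label, "11-year survival:", round(float(kmf.survival_function_.iloc[-1, 0]), 2))

    res = logrank_test(time[tt_genotype == 1], time[tt_genotype == 0],
                       event_observed_A=observed[tt_genotype == 1],
                       event_observed_B=observed[tt_genotype == 0])
    print("log-rank p-value:", res.p_value)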
Abstract:
Background: Vascular remodeling, the dynamic dimensional change in the face of stress, can assume different directions as well as magnitudes in atherosclerotic disease. Classical measurements rely on reference segments at a distance, risking inappropriate comparison between unlike vessel portions. Objective: To explore a new method for quantifying vessel remodeling, based on the comparison between a given target segment and its inferred normal dimensions. Methods: Geometric parameters and plaque composition were determined in 67 patients using three-vessel intravascular ultrasound with virtual histology (IVUS-VH). Coronary vessel remodeling at the cross-section (n = 27,639) and lesion (n = 618) levels was assessed using classical metrics and a novel analytic algorithm based on the fractional vessel remodeling index (FVRI), which quantifies the total change in arterial wall dimensions relative to the estimated normal dimension of the vessel. A prediction model was built to estimate the normal dimension of the vessel for calculation of the FVRI. Results: According to the new algorithm, the "Ectatic" remodeling pattern was the least common, "Complete compensatory" remodeling was present in approximately half of the instances, and "Negative" and "Incomplete compensatory" remodeling types were detected in the remainder. Compared with a traditional diagnostic scheme, the FVRI-based classification seemed to better discriminate plaque composition by IVUS-VH. Conclusion: Quantitative assessment of coronary remodeling using target segment dimensions offers a promising approach to evaluating the vessel response to plaque growth or regression.
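Since the abstract does not give the exact formula, the sketch below assumes a plausible form of the FVRI, namely the fractional change of the measured vessel area relative to its estimated normal dimension, together with an illustrative mapping to the four remodeling categories; both the formula and the thresholds are assumptions, not the published definition:

    def fractional_vessel_remodeling_index(observed_vessel_area, predicted_normal_area):
        """Hypothetical form of the FVRI: fractional change of the measured vessel
        area relative to its estimated normal (plaque-free) dimension. The exact
        published definition may differ."""
        return (observed_vessel_area - predicted_normal_area) / predicted_normal_area

    def classify_remodeling(fvri, plaque_burden_fraction, tol=0.05):
        """Illustrative mapping to the abstract's categories (thresholds assumed)."""
        if fvri > plaque_burden_fraction + tol:
            return "Ectatic"
        if abs(fvri - plaque_burden_fraction) <= tol:
            return "Complete compensatory"
        if fvri < 0:
            return "Negative"
        return "Incomplete compensatory"

    # e.g. a cross-section whose vessel area grew 12% while plaque occupies 10% of it
    print(classify_remodeling(fractional_vessel_remodeling_index(16.8, 15.0), 0.10))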