994 results for Correlation algorithm
Abstract:
This paper presents a single-phase Series Active Power Filter (Series APF) for mitigating the harmonic content of the load voltage while keeping the DC-side voltage regulated without the support of an auxiliary voltage source. The proposed control algorithm eliminates the additional voltage source otherwise needed to regulate the DC voltage, and the adopted topology dispenses with a coupling transformer to interface the series active power filter with the electrical power grid. The paper describes the control strategy, which encompasses the grid synchronization scheme, the compensation voltage calculation, the damping algorithm, and the dead-time compensation. The topology and control strategy of the series active power filter were evaluated in simulation software, and simulation results are presented. Experimental results obtained with a laboratory prototype validate the theoretical assumptions and remain within the harmonic spectrum limits imposed by the international recommendations of the IEEE-519 Standard.
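As a rough illustration of one piece of such a control strategy, the compensation voltage calculation can be sketched as subtracting the extracted fundamental component from the measured grid voltage, so that the series filter injects (ideally) only the harmonic content. The Python sketch below shows that idea only; it is not the paper's actual control law, and the single-bin DFT extraction, sampling rate, and signal names are assumptions.

```python
import numpy as np

def fundamental_component(v, fs, f0=50.0):
    """Estimate the fundamental (f0) component of one cycle of samples v,
    sampled at fs, via a single-bin DFT; returns the reconstructed waveform."""
    n = len(v)
    t = np.arange(n) / fs
    c = 2.0 / n * np.sum(v * np.exp(-2j * np.pi * f0 * t))   # complex amplitude at f0
    return np.real(c * np.exp(2j * np.pi * f0 * t))

def compensation_voltage(v_grid, fs, f0=50.0):
    """Series APF reference (sketch): inject the difference between the distorted
    grid voltage and its fundamental, so the load ideally sees a clean sinusoid."""
    return v_grid - fundamental_component(v_grid, fs, f0)

# usage: one 50 Hz cycle sampled at 10 kHz with added 5th-harmonic distortion
fs, f0 = 10_000, 50.0
t = np.arange(int(fs / f0)) / fs
v = 325 * np.sin(2 * np.pi * f0 * t) + 30 * np.sin(2 * np.pi * 5 * f0 * t)
v_comp = compensation_voltage(v, fs, f0)   # ≈ the 5th-harmonic content only
```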
Abstract:
Natural selection favors the survival and reproduction of organisms that are best adapted to their environment. The selection mechanism in evolutionary algorithms mimics this process, aiming to create environmental conditions in which artificial organisms can evolve to solve the problem at hand. This paper proposes a new selection scheme for evolutionary multiobjective optimization. A similarity measure that defines the concept of neighborhood is the key feature of the proposed selection. Contrary to commonly used approaches, which are usually defined on the basis of distances between either individuals or weight vectors, similarity and neighborhood are here based on the angle between individuals in the objective space: the smaller the angle, the more similar the individuals. This notion is exploited during both mating and environmental selection. Convergence is ensured by minimizing the distances from individuals to a reference point, whereas diversity is preserved by maximizing the angles between neighboring individuals. Experimental results reveal highly competitive performance and useful characteristics of the proposed selection. Its strong diversity-preserving ability yields significantly better performance on some problems when compared with state-of-the-art algorithms.
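To make the angle-based neighborhood concrete, here is a simplified Python sketch (not the authors' exact procedure): objectives are assumed normalized so the origin plays the role of the reference point, similarity between individuals is the angle between their objective vectors, and environmental selection repeatedly discards, from the most similar pair, the member farther from the reference point.

```python
import numpy as np

def angles(F):
    """Pairwise angles (radians) between objective vectors in F (n x m),
    measured from the origin, assumed here to be the ideal/reference point."""
    norms = np.linalg.norm(F, axis=1, keepdims=True)
    cos = np.clip((F @ F.T) / (norms @ norms.T), -1.0, 1.0)
    return np.arccos(cos)

def environmental_selection(F, k):
    """Keep k individuals: repeatedly find the most similar pair (smallest angle)
    and discard the member farther from the reference point."""
    keep = list(range(len(F)))
    ang = angles(F)
    dist = np.linalg.norm(F, axis=1)          # convergence indicator
    while len(keep) > k:
        sub = ang[np.ix_(keep, keep)]
        np.fill_diagonal(sub, np.inf)
        i, j = np.unravel_index(np.argmin(sub), sub.shape)
        a, b = keep[i], keep[j]
        keep.remove(a if dist[a] > dist[b] else b)   # keep the more converged one
    return keep

# usage: 6 points on a 2-objective front (minimization), keep 4
F = np.array([[0.05, 1.0], [0.1, 0.95], [0.5, 0.5],
              [0.55, 0.48], [0.9, 0.2], [1.0, 0.05]])
print(environmental_selection(F, 4))
```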
Abstract:
Integrated master's dissertation in Industrial Electronics and Computers Engineering
Abstract:
The Amazon várzeas are an important component of the Amazon biome, but anthropic and climatic impacts have been leading to forest loss and to the interruption of essential ecosystem functions and services. The objectives of this study were to evaluate the capability of the Landsat-based Detection of Trends in Disturbance and Recovery (LandTrendr) algorithm to characterize changes in várzea forest cover in the Lower Amazon, and to analyze the potential of spectral and temporal attributes to classify forest loss as either natural or anthropogenic. We used a time series of 37 Landsat TM and ETM+ images acquired between 1984 and 2009. We used the LandTrendr algorithm to detect forest cover change and the attributes of "start year", "magnitude", and "duration" of the changes, as well as "NDVI at the end of series". Detection was restricted to areas identified as having forest cover at the start and/or end of the time series. We used the Support Vector Machine (SVM) algorithm to classify the extracted attributes, differentiating between anthropogenic and natural forest loss. Detection reliability was consistently high for change events along the Amazon River channel, but variable for changes within the floodplain. Spectral-temporal trajectories faithfully represented the nature of changes in floodplain forest cover, corroborating field observations. We estimated anthropogenic forest losses (1,071 ha) to be larger than natural losses (884 ha), with an overall classification accuracy of 94%. We conclude that the LandTrendr algorithm is a reliable tool for studies of forest dynamics throughout the floodplain.
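As a hedged illustration of the classification step, the sketch below feeds the four LandTrendr attributes named above into a standard SVM classifier; the kernel choice, parameters, and the synthetic placeholder data are assumptions, not taken from the paper.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Hypothetical table of LandTrendr change attributes per disturbed segment:
# columns = [start_year, magnitude, duration, ndvi_end]; labels: 1 = anthropogenic, 0 = natural
rng = np.random.default_rng(0)
X = np.column_stack([
    rng.integers(1984, 2010, 200),    # start year of the detected change
    rng.uniform(0.1, 0.8, 200),       # spectral magnitude of the change
    rng.integers(1, 10, 200),         # duration of the change (years)
    rng.uniform(0.2, 0.9, 200),       # NDVI at the end of the series
])
y = rng.integers(0, 2, 200)           # placeholder labels (would come from field reference data)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
print(cross_val_score(clf, X, y, cv=5).mean())   # cross-validated overall accuracy
```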
Abstract:
A modified version of the metallic-phase pseudofermion dynamical theory (PDT) of the 1D Hubbard model is introduced for the spin dynamical correlation functions of the half-filled 1D Hubbard model in its Mott–Hubbard insulator phase. The Mott–Hubbard insulator-phase PDT is applied to the study of the model's longitudinal and transverse spin dynamical structure factors at finite magnetic field h, focusing in particular on the singularities at excitation energies in the vicinity of the lower thresholds. The relation of our theoretical results to both condensed-matter and ultra-cold-atom systems is discussed.
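For reference, the longitudinal and transverse spin dynamical structure factors referred to in the abstract are conventionally defined as follows (standard textbook definitions, assumed here rather than quoted from the paper):

```latex
S^{zz}(k,\omega) = \frac{1}{N}\sum_{j,j'} e^{-ik(j-j')}
  \int_{-\infty}^{\infty}\! dt\, e^{i\omega t}\,
  \langle \hat{S}^{z}_{j}(t)\,\hat{S}^{z}_{j'}(0)\rangle ,
\qquad
S^{+-}(k,\omega) = \frac{1}{N}\sum_{j,j'} e^{-ik(j-j')}
  \int_{-\infty}^{\infty}\! dt\, e^{i\omega t}\,
  \langle \hat{S}^{+}_{j}(t)\,\hat{S}^{-}_{j'}(0)\rangle .
```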
Abstract:
Correlations between the elliptic or triangular flow coefficients vm (m = 2 or 3) and other flow harmonics vn (n = 2 to 5) are measured using √sNN = 2.76 TeV Pb+Pb collision data collected in 2010 by the ATLAS experiment at the LHC, corresponding to an integrated luminosity of 7 μb⁻¹. The vm-vn correlations are measured at midrapidity as a function of centrality and, for events within the same centrality interval, as a function of the event ellipticity or triangularity defined in a forward rapidity region. For events within the same centrality interval, v3 is found to be anticorrelated with v2, and this anticorrelation is consistent with similar anticorrelations between the corresponding eccentricities ϵ2 and ϵ3. On the other hand, v4 is observed to increase strongly with v2, and v5 to increase strongly with both v2 and v3. The trend and strength of the vm-vn correlations for n = 4 and 5 are found to disagree with the ϵm-ϵn correlations predicted by initial-geometry models. Instead, these correlations are found to be consistent with the combined effects of a linear contribution to vn and a nonlinear term that is a function of v2² or of v2v3, as predicted by hydrodynamic models. A simple two-component fit is used to separate these two contributions. The extracted linear and nonlinear contributions to v4 and v5 are found to be consistent with previously measured event-plane correlations.
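Schematically, the two-component decomposition described above can be written as shown below; this is a sketch consistent with the abstract's wording, and the precise parametrization of the fit (for example whether the two terms are combined linearly or in quadrature) is specified in the paper itself.

```latex
v_4 \simeq v_4^{\mathrm{L}} + \chi_4\, v_2^{2},
\qquad
v_5 \simeq v_5^{\mathrm{L}} + \chi_5\, v_2 v_3 ,
```

where v^L denotes the linear, initial-geometry-driven contribution and χ quantifies the nonlinear hydrodynamic response coupling to the lower harmonics.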
Abstract:
[INTRODUCTION] Accurate preoperative rectal cancer staging is crucial to the correct management of the disease. Despite great controversy around this issue, pelvic magnetic resonance (MR) is regarded as the standard imaging modality. This work aimed to evaluate the accuracy of magnetic resonance in preoperative rectal cancer staging by comparison with the anatomopathological results. METHODS We calculated sensitivity, specificity, and positive (PPV) and negative (NPV) predictive values for each T and N category. We evaluated the agreement between both staging methods using Cohen's weighted kappa (Kw), and through ROC curves we evaluated the accuracy of magnetic resonance in rectal cancer staging. RESULTS 41 patients met the inclusion criteria. We obtained an accuracy of 43.9% for T and 61% for N staging. The respective sensitivity, specificity, positive and negative predictive values were 33.3%, 94.7%, 33.3% and 94.7% for T1; 62.5%, 32%, 37.0% and 57.1% for T2; 31.8%, 79%, 63.6% and 50% for T3; and 27.8%, 87%, 62.5% and 60.6% for N. Agreement between T and N staging and the anatomopathological results was poor. The ROC curves indicated that magnetic resonance is ineffective in rectal cancer staging. CONCLUSION Magnetic resonance has moderate accuracy in rectal cancer staging, and the major difficulty lies in differentiating T2 from T3.
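The per-category metrics quoted above follow the standard definitions in terms of true/false positives and negatives (these formulas are standard, not specific to this study):

```latex
\text{Sensitivity} = \frac{TP}{TP + FN}, \qquad
\text{Specificity} = \frac{TN}{TN + FP}, \qquad
\text{PPV} = \frac{TP}{TP + FP}, \qquad
\text{NPV} = \frac{TN}{TN + FN}.
```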
Abstract:
Executive functioning (EF), which is considered to govern complex cognition, and verbal memory (VM) are constructs assumed to be related. However, the magnitude of the association between EF and VM is not known, nor how sociodemographic and psychological factors may affect this relationship, including in normal aging. In this study, we assessed different EF and VM parameters via a battery of neurocognitive/psychological tests and performed a Canonical Correlation Analysis (CCA) to explore the connection between these constructs in a sample of middle-aged and older healthy individuals without cognitive impairment (N = 563, 50+ years of age). The analysis revealed a positive and moderate association between EF and VM, independently of gender, age, education, global cognitive performance level, and mood. These results confirm that EF presents a significant association with VM performance.
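A minimal Python sketch of the CCA step, with hypothetical score matrices standing in for the EF and VM test batteries (the data and dimensions below are placeholders, not the study's):

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Hypothetical score matrices: rows = participants,
# X = executive-functioning test scores, Y = verbal-memory test scores.
rng = np.random.default_rng(1)
X = rng.normal(size=(563, 4))                     # e.g. 4 EF measures
Y = 0.4 * X[:, :2] + rng.normal(size=(563, 2))    # e.g. 2 VM measures, partly related

cca = CCA(n_components=1)
Xc, Yc = cca.fit_transform(X, Y)                  # paired canonical variates
r = np.corrcoef(Xc[:, 0], Yc[:, 0])[0, 1]         # first canonical correlation
print(f"first canonical correlation ≈ {r:.2f}")
```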
Abstract:
OBJECTIVE - To evaluate the cardiac abnormalities and their evolution during the course of the acquired immunodeficiency syndrome, as well as to correlate clinical and pathological data. METHODS - Twenty-one patients, admitted to the hospital with the diagnosis of acquired immunodeficiency syndrome, were prospectively studied and followed until their death. Age ranged from 19 to 42 years (17 males). ECG and echocardiogram were also obtained every six months. After death, macro- and microscopic examinations were also performed. RESULTS - The most frequent causes of referral to the hospital were: diarrhea or repeated pneumonias, tuberculosis, toxoplasmosis or Kaposi sarcoma. The most frequent findings were acute or chronic pericarditis (42%) and dilated cardiomyopathy (19%). Four patients died of cardiac problems: infective endocarditis, pericarditis with pericardial effusion, bacterial myocarditis and infection by Toxoplasma gondii. CONCLUSION - Severe cardiac abnormalities were the cause of death in some patients. In the majority of the patients, a good correlation existed between clinical and anatomical-pathological data. Cardiac evaluation was important to detect early manifestations and treat them accordingly, even in asymptomatic patients.
Abstract:
OBJECTIVE: To evaluate the influence of systolic or diastolic dysfunction, or both, on congestive heart failure functional class. METHODS: Thirty-six consecutive patients with a clinical diagnosis of congestive heart failure and sinus rhythm, seen between September and November 1998, answered an adapted questionnaire on tolerance to physical activity for determination of NYHA functional class. The patients were studied with transthoracic Doppler echocardiography. Two groups were compared: group 1 (19 patients in functional classes I and II) and group 2 (17 patients in functional classes III and IV). RESULTS: The mean ejection fraction was significantly higher in group 1 (44.84% ± 8.04% vs. 32.59% ± 11.48%, p=0.0007). The mean ratio of early to late (atrial) maximum diastolic filling velocities (E/A) of the left ventricle was significantly smaller in group 1 (1.07 ± 0.72 vs. 1.98 ± 1.49, p=0.03). The mean maximum systolic pulmonary venous velocity (S) was significantly higher in group 1 (53.53 ± 12.02 cm/s vs. 43.41 ± 13.55 cm/s, p=0.02). The mean ratio of maximum systolic to diastolic pulmonary venous velocities was significantly higher in group 1 (1.52 ± 0.48 vs. 1.08 ± 0.48, p=0.01). Pseudo-normal and restrictive diastolic patterns predominated in group 2 (58.83% in group 2 vs. 21.06% in group 1, p=0.03). CONCLUSION: Both the systolic dysfunction index and the patterns of diastolic dysfunction evaluated by Doppler echocardiography worsened with the evolution of congestive heart failure.
Abstract:
OBJECTIVE: To evaluate the performance of the turbidimetric method of C-reactive protein (CRP) as a measure of low-grade inflammation in patients admitted with non-ST elevation acute coronary syndromes (ACS). METHODS: Serum samples obtained at hospital arrival from 68 patients (66±11 years, 40 men), admitted with unstable angina or non-ST elevation acute myocardial infarction were used to measure CRP by the methods of nephelometry and turbidimetry. RESULTS: The medians of C-reactive protein by the turbidimetric and nephelometric methods were 0.5 mg/dL and 0.47 mg/dL, respectively. A strong linear association existed between the 2 methods, according to the regression coefficient (b=0.75; 95% C.I.=0.70-0.80) and correlation coefficient (r=0.96; P<0.001). The mean difference between the nephelometric and turbidimetric CRP was 0.02 ± 0.91 mg/dL, and 100% agreement between the methods in the detection of high CRP was observed. CONCLUSION: In patients with non-ST elevation ACS, CRP values obtained by turbidimetry show a strong linear association with the method of nephelometry and perfect agreement in the detection of high CRP.
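A small Python sketch of this kind of method comparison, combining linear regression and a Bland-Altman bias with 95% limits of agreement; the paired measurements below are synthetic placeholders, not the study data:

```python
import numpy as np
from scipy import stats

# Hypothetical paired CRP measurements (mg/dL) on the same serum samples
rng = np.random.default_rng(2)
neph = rng.lognormal(mean=-0.7, sigma=0.8, size=68)        # nephelometric CRP
turb = 0.75 * neph + rng.normal(scale=0.05, size=68)       # turbidimetric CRP

slope, intercept, r, p, se = stats.linregress(neph, turb)  # linear association between methods
diff = neph - turb
bias = diff.mean()                                         # Bland-Altman mean difference
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))
print(f"b={slope:.2f}, r={r:.2f}, bias={bias:.2f} mg/dL, 95% LoA={loa}")
```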
Abstract:
OBJECTIVE: To verify the association of serum markers of myocardial injury, such as troponin I, creatine kinase, and creatine kinase MB isoenzyme, and inflammatory markers, such as tumor necrosis factor alpha (TNF-alpha), C-reactive protein, and the erythrocyte sedimentation rate, in the perioperative period of cardiac surgery, with the occurrence of possible postpericardiotomy syndrome. METHODS: This was a cohort study of 96 patients undergoing cardiac surgery, assessed at 4 different time points: the day before surgery (D0); the 3rd postoperative day (D3); between the 7th and 10th postoperative days (D7-10); and the 30th postoperative day (D30). At each time point, we evaluated demographic variables (sex and age), surgical variables (type and duration of surgery, extracorporeal circulation), and serum levels of the markers of myocardial injury and inflammatory response. RESULTS: Of all patients, 12 (12.5%) met the clinical criteria for a diagnosis of postpericardiotomy syndrome, and their mean age was 10.3 years lower than that of the others (P=0.02). The serum markers of tissue injury and inflammatory response did not differ significantly between the 2 groups. No significant difference was found regarding either surgery duration or extracorporeal circulation. CONCLUSION: The patients who met the clinical criteria for postpericardiotomy syndrome were significantly younger than the others. Serum markers of tissue injury and inflammatory response did not differ in the clinically affected group and did not correlate with the type or duration of surgery or with extracorporeal circulation.
Abstract:
Today's advances in high-performance computing are driven by the parallel processing capabilities of available hardware architectures. These architectures enable the acceleration of algorithms when those algorithms are properly parallelized and exploit the specific processing power of the underlying architecture. Most current processors target general-purpose computing and integrate several processor cores on a single chip, resulting in what is known as a Symmetric Multiprocessing (SMP) unit. Nowadays even desktop computers make use of multicore processors, and the industry trend is to increase the number of integrated processor cores as technology matures. On the other hand, Graphics Processor Units (GPU), originally designed to handle only video processing, have emerged as interesting alternatives for implementing algorithm acceleration. Currently available GPUs are able to run from 200 to 400 threads in parallel. Scientific computing can be implemented on this hardware thanks to the programmability of new GPUs, which have come to be known as General Processing Graphics Processor Units (GPGPU). However, GPGPUs offer little memory compared with that available to general-purpose processors, and their use is only productive for certain types of parallel processing, so the implementation of algorithms needs to be addressed carefully. Finally, Field Programmable Gate Arrays (FPGA) are programmable devices which can implement hardware logic with low latency, high parallelism and deep pipelines. These devices can be used to implement specific algorithms that need to run at very high speeds; however, their programming is harder than software approaches and debugging is typically time-consuming. In this context, where several alternatives for speeding up algorithms are available, our work aims at determining the main features of these architectures and developing the required know-how to accelerate algorithm execution on them. We look at identifying those algorithms that may fit better on a given architecture, as well as at combining architectures so that they complement each other. Starting from the hardware characteristics, we determine the properties a parallel algorithm must have in order to be accelerated; these properties in turn indicate which of these hardware types is most suitable for its implementation. In particular, we take into account the degree of data dependence, the need for synchronization during parallel processing, the size of the data to be processed, and the complexity of parallel programming on each type of hardware.
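A toy Python illustration of the data-dependence criterion mentioned above: an element-wise kernel with no dependences parallelizes trivially (SMP threads, GPGPU thread blocks, or FPGA pipelines), whereas a loop-carried recurrence forces sequential execution unless it is restructured. The example is illustrative only and is not drawn from the work described.

```python
import numpy as np

x = np.random.rand(1_000_000)

# Independent element-wise work: no data dependence, trivially parallel;
# the kind of kernel that maps well onto SMP threads or GPGPU thread blocks.
y = np.sqrt(x) * np.sin(x)

# Loop-carried dependence: each step needs the previous result, so the loop
# cannot be split across threads without restructuring (e.g. a parallel scan).
acc = np.empty_like(x)
acc[0] = x[0]
for i in range(1, len(x)):
    acc[i] = 0.9 * acc[i - 1] + x[i]
```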
Abstract:
In Tbilisi, based on data from complex monitoring of light-ion concentration, radon, and sub-micron aerosols, a feedback effect between the intensity of ionizing radiation and the light-ion content of the atmosphere is found.
Abstract:
Background: The equations for predicting maximal oxygen uptake (VO2max or VO2peak) currently used in cardiopulmonary exercise testing (CPET) software in Brazil have not been adequately validated. These equations are very important for the diagnostic capacity of this method. Objective: To build and validate a Brazilian Equation (BE) for prediction of VO2peak and compare it with the equation cited by Jones (JE) and the Wasserman algorithm (WA). Methods: Treadmill CPET (breath by breath) was performed on 3119 individuals. The construction group (CG) of the equation consisted of 2495 healthy participants; the other 624 individuals were allocated to the external validation group (EVG). The BE (derived from a multivariate regression model) considered age, gender, body mass index (BMI) and physical activity level. The same equation was also tested in the EVG. Dispersion graphs and Bland-Altman analyses were built. Results: In the CG, the mean age was 42.6 years, 51.5% were male, the mean BMI was 27.2, and the physical activity level distribution was 51.3% sedentary, 44.4% active and 4.3% athletes. A strong correlation was observed between the BE and the VO2peak measured by CPET (r = 0.807). In contrast, differences were found between the mean VO2peak predicted by the JE and WA and the VO2peak measured by CPET, as well as the one obtained from the BE (p = 0.001). Conclusion: The BE yields VO2peak values close to those directly measured by CPET, while the Jones and Wasserman predictions differ significantly from the measured VO2peak.
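A hedged sketch of how such a prediction equation can be derived: a multivariate linear regression of VO2peak on age, sex, BMI, and physical activity level. All data and coefficients below are synthetic placeholders; the actual Brazilian Equation coefficients are reported in the paper.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

# Hypothetical training table mirroring the predictors named in the abstract:
# age (years), sex (1 = male), BMI (kg/m^2), activity level (0 sedentary, 1 active, 2 athlete)
rng = np.random.default_rng(3)
n = 2495
X = np.column_stack([
    rng.uniform(20, 70, n),
    rng.integers(0, 2, n),
    rng.uniform(18, 35, n),
    rng.choice([0, 1, 2], n, p=[0.51, 0.45, 0.04]),
])
# Synthetic "measured" VO2peak (mL/kg/min), only so the example runs end to end:
vo2 = 50 - 0.3 * X[:, 0] + 6 * X[:, 1] - 0.4 * X[:, 2] + 4 * X[:, 3] + rng.normal(0, 4, n)

model = LinearRegression().fit(X, vo2)     # multivariate prediction equation
pred = model.predict(X)
print(model.intercept_, model.coef_, r2_score(vo2, pred))
```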