852 results for H-Infinity Time-Varying Adaptive Algorithm


Relevance:

30.00%

Publisher:

Abstract:

A Work Project, presented as part of the requirements for the award of a Master's Degree in Finance from the NOVA – School of Business and Economics.

Relevance:

30.00%

Publisher:

Abstract:

Dissertation submitted in fulfillment of the requirements for the Degree of Master in Biomedical Engineering.

Relevance:

30.00%

Publisher:

Abstract:

Classical serological screening assays for Chagas disease are time-consuming and subjective. The objective of the present work was to evaluate the enzyme immunoassay (ELISA) methodology and to propose an algorithm for Chagas disease screening in blood banks. A total of 7,999 blood donor samples were screened by both reverse passive hemagglutination (RPHA) and indirect immunofluorescence assay (IFA). Samples reactive on RPHA and/or IFA were submitted to supplementary RPHA, IFA and complement fixation (CFA) tests. This strategy allowed us to create a panel of 60 samples with which to evaluate ELISA kits from three different manufacturers. The sensitivity of screening by IFA and by the three ELISAs was 100%, and specificity was higher with the ELISA methodology. For Chagas disease, ELISA appears to be the best test for blood donor screening because it showed high sensitivity and specificity, is not subjective, and can be automated. It was therefore possible to propose an algorithm for screening samples and confirming donor results at the blood bank.
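As an aside, the screening flow described in this abstract (screen every donation with RPHA and IFA; samples reactive on either assay go on to supplementary RPHA, IFA and CFA testing) can be written as a one-line decision rule. The sketch below is purely illustrative; the function name and return strings are hypothetical, and the abstract does not detail the final confirmatory algorithm.

```python
def screen_donor_sample(rpha_reactive: bool, ifa_reactive: bool) -> str:
    """Illustrative sketch of the screening flow described in the abstract:
    samples reactive on RPHA and/or IFA are referred to supplementary
    RPHA, IFA and complement fixation (CFA) testing; the rest are released."""
    if rpha_reactive or ifa_reactive:
        return "refer to supplementary RPHA, IFA and CFA testing"
    return "non-reactive: release unit"


# Example: a sample reactive only on IFA is still referred for confirmation.
print(screen_donor_sample(rpha_reactive=False, ifa_reactive=True))
```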

Relevance:

30.00%

Publisher:

Abstract:

The continued increase in the availability of economic data in recent years and, more importantly, the possibility of constructing higher-frequency time series have fostered the use (and development) of statistical and econometric techniques to treat them more accurately. This paper presents an exposition of structural time series models, by which a time series can be decomposed as the sum of trend, seasonal and irregular components. In addition to a detailed analysis of univariate specifications, we also address the SUTSE multivariate case and the issue of cointegration. Finally, recursive estimation and smoothing by means of the Kalman filter algorithm are described, covering the different stages from initialisation to parameter estimation.
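For concreteness, the sketch below implements the Kalman filter and fixed-interval smoother for the simplest structural specification, the local level model y_t = mu_t + eps_t, mu_t = mu_{t-1} + eta_t. It is a minimal illustration written for this listing, not the paper's code, and the variance values are arbitrary assumptions.

```python
import numpy as np

def local_level_kalman(y, sigma2_eps=1.0, sigma2_eta=0.1, a0=0.0, p0=1e6):
    """Kalman filter and RTS smoother for the local level model
    y_t = mu_t + eps_t,  mu_t = mu_{t-1} + eta_t."""
    n = len(y)
    a_filt = np.zeros(n)          # filtered state estimates mu_{t|t}
    p_filt = np.zeros(n)          # filtered state variances P_{t|t}
    a_pred_all = np.zeros(n)      # one-step-ahead predictions mu_{t|t-1}
    p_pred_all = np.zeros(n)      # one-step-ahead variances P_{t|t-1}
    a_pred, p_pred = a0, p0       # (quasi-)diffuse initialisation
    for t in range(n):
        a_pred_all[t], p_pred_all[t] = a_pred, p_pred
        f = p_pred + sigma2_eps                    # innovation variance
        k = p_pred / f                             # Kalman gain
        a_filt[t] = a_pred + k * (y[t] - a_pred)   # update with observation
        p_filt[t] = p_pred * (1.0 - k)
        a_pred = a_filt[t]                         # identity state transition
        p_pred = p_filt[t] + sigma2_eta
    # Fixed-interval (Rauch-Tung-Striebel) smoother, run backwards.
    a_smooth, p_smooth = a_filt.copy(), p_filt.copy()
    for t in range(n - 2, -1, -1):
        j = p_filt[t] / p_pred_all[t + 1]
        a_smooth[t] = a_filt[t] + j * (a_smooth[t + 1] - a_pred_all[t + 1])
        p_smooth[t] = p_filt[t] + j**2 * (p_smooth[t + 1] - p_pred_all[t + 1])
    return a_filt, a_smooth

# Example: recover a slowly drifting level from noisy observations.
rng = np.random.default_rng(0)
level = np.cumsum(rng.normal(0, 0.3, 200))
y = level + rng.normal(0, 1.0, 200)
filtered, smoothed = local_level_kalman(y, sigma2_eps=1.0, sigma2_eta=0.09)
```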

Relevance:

30.00%

Publisher:

Abstract:

Human Activity Recognition systems require objective and reliable methods that can be used in the daily routine and must offer consistent results according to the performed activities. These systems are under development and offer objective and personalized support for several applications, such as the healthcare area. This thesis aims to create a framework for human activity recognition based on accelerometry signals. Some new features and techniques inspired by audio recognition methodology are introduced in this work, namely the Log Scale Power Bandwidth and the application of Markov models. Forward Feature Selection was adopted as the feature selection algorithm in order to improve clustering performance and limit computational demands; this method selects the most suitable set of features for activity recognition in accelerometry from a 423-dimensional feature vector. Several machine learning algorithms were applied to the accelerometry databases used – the FCHA and PAMAP databases – and showed promising results in activity recognition. The developed algorithm set constitutes a valuable contribution to the development of reliable methods for evaluating movement disorders for diagnosis and treatment applications.
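Forward Feature Selection itself is a generic greedy wrapper: starting from an empty set, it repeatedly adds the single feature that most improves cross-validated performance and stops when no candidate helps. The sketch below illustrates the idea on synthetic data; the classifier, scoring and stopping rule are assumptions and are not taken from the thesis.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def forward_feature_selection(X, y, estimator, max_features=10, cv=5):
    """Greedy forward selection: add the feature that best improves CV accuracy."""
    remaining = list(range(X.shape[1]))
    selected, best_score = [], -np.inf
    while remaining and len(selected) < max_features:
        scores = []
        for f in remaining:
            cols = selected + [f]
            score = cross_val_score(estimator, X[:, cols], y, cv=cv).mean()
            scores.append((score, f))
        score, f = max(scores)            # best candidate this round
        if score <= best_score:           # stop when no feature improves the score
            break
        selected.append(f)
        remaining.remove(f)
        best_score = score
    return selected, best_score

# Example on synthetic data with many candidate features (stand-in for the
# high-dimensional accelerometry feature vector described in the abstract).
X, y = make_classification(n_samples=400, n_features=50, n_informative=8, random_state=0)
feats, acc = forward_feature_selection(X, y, KNeighborsClassifier(), max_features=8)
print(feats, round(acc, 3))
```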

Relevance:

30.00%

Publisher:

Abstract:

INTRODUCTION: HTLV-1/2 screening among blood donors commonly utilizes an enzyme-linked immunosorbent assay (EIA), followed by a confirmatory method such as Western blot (WB) if the EIA is positive. However, this algorithm yields a high rate of inconclusive results and is expensive. METHODS: Two qualitative real-time PCR assays were developed to detect HTLV-1 and HTLV-2, and a total of 318 samples were tested (152 blood donors, 108 asymptomatic carriers, 26 HAM/TSP patients and 30 seronegative individuals). RESULTS: The sensitivity and specificity of PCR in comparison with WB results were 99.4% and 98.5%, respectively. The PCR tests were more efficient for identifying the virus type, detecting HTLV-2 infection and resolving inconclusive cases. CONCLUSIONS: Because real-time PCR is sensitive and practical and costs much less than WB, this technique can be used as a confirmatory test for HTLV in blood banks, as a replacement for WB.
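For reference, the sensitivity and specificity quoted above are simple proportions computed against the confirmatory result (here, Western blot) taken as the reference standard. A minimal calculation is sketched below with illustrative counts that merely reproduce figures of the reported order; they are not the study's actual 2x2 table.

```python
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP),
    with the confirmatory test (e.g. Western blot) as the reference."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical counts, for illustration only (not the study's data).
sens, spec = sensitivity_specificity(tp=167, fn=1, tn=132, fp=2)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")
```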

Relevance:

30.00%

Publisher:

Abstract:

It is difficult to avoid the "smart systems" topic when discussing smart prevention and, similarly, it is difficult to address smart systems without focusing on their ability to learn. Along the same line of thought, in the current reality it seems a Herculean task (or an irreparable omission) to approach the topic of certified occupational health and safety management systems (OHSMS) without discussing integrated management systems (IMSs). The available data suggest that OHSMS seldom operate as the single management system (MS) in a company, so any statement concerning OHSMS should mainly be interpreted from an integrated perspective. A major distinction can be drawn between generic systems that learn, i.e., systems that have "memory", and those that do not. The former are often described as adaptive, since they take past events into account to deal with novel, similar and future events, modifying their structure to enable success in their environment. These systems often present nonlinear behavior and considerable uncertainty in the forecasting of some events. This paper seeks to portray, for the first time as far as we could determine, IMSs as complex adaptive systems (CASs) by listing their properties and dissecting the features that enable them to evolve and self-organize in order to holistically fulfil the requirements of different stakeholders and thus thrive by assuring the successful sustainability of a company. Based on the literature review carried out, this is the first time that IMSs have been characterized as CASs, which may develop fruitful synergies both for the MS and the CAS communities. Through a thorough literature review, and based on concepts embedded in the "DNA" of the subsystems' implementation standards, the specific aim is to identify, determine and discuss the properties of a generic IMS that should be considered in order to classify it as a CAS.

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a single-phase Series Active Power Filter (Series APF) for mitigation of the load voltage harmonic content while maintaining the voltage on the DC side regulated without the support of a voltage source. The proposed series active power filter control algorithm eliminates the additional voltage source used to regulate the DC voltage, and with the adopted topology no coupling transformer is needed to interface the series active power filter with the electrical power grid. The paper describes the control strategy, which encompasses the grid synchronization scheme, the compensation voltage calculation, the damping algorithm and the dead-time compensation. The topology and control strategy of the series active power filter were evaluated in simulation software, and simulation results are presented. Experimental results, obtained with a laboratory prototype, validate the theoretical assumptions and are within the harmonic spectrum limits imposed by the international recommendations of the IEEE 519 standard.
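As an illustration of the compensation-voltage calculation mentioned above, the sketch below synthesises a distorted supply voltage, extracts its fundamental over one grid cycle, and takes the compensation reference as the difference between the measured voltage and that fundamental. It is a generic, simplified sketch (no PLL, damping or dead-time compensation) and not the paper's implementation; all waveform values are assumed.

```python
import numpy as np

f0, fs = 50.0, 10_000.0            # grid frequency and sampling rate (assumed)
n = int(fs / f0)                   # samples per fundamental cycle
t = np.arange(n) / fs

# Distorted supply voltage: fundamental plus 5th and 7th harmonics (illustrative).
v_grid = (325 * np.sin(2 * np.pi * f0 * t)
          + 30 * np.sin(2 * np.pi * 5 * f0 * t)
          + 20 * np.sin(2 * np.pi * 7 * f0 * t))

# Extract the fundamental by correlating with sine/cosine references over one cycle.
sin_ref = np.sin(2 * np.pi * f0 * t)
cos_ref = np.cos(2 * np.pi * f0 * t)
a1 = 2 / n * np.dot(v_grid, sin_ref)
b1 = 2 / n * np.dot(v_grid, cos_ref)
v_fund = a1 * sin_ref + b1 * cos_ref

# Series compensation reference: cancel everything except the fundamental,
# so the load-side voltage v_grid - v_comp equals v_fund.
v_comp = v_grid - v_fund
v_load = v_grid - v_comp
print(f"harmonic RMS before: {np.std(v_grid - v_fund):.1f} V, "
      f"after: {np.std(v_load - v_fund):.1f} V")
```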

Relevance:

30.00%

Publisher:

Abstract:

The Amazon várzeas are an important component of the Amazon biome, but anthropogenic and climatic impacts have been leading to forest loss and interruption of essential ecosystem functions and services. The objectives of this study were to evaluate the capability of the Landsat-based Detection of Trends in Disturbance and Recovery (LandTrendr) algorithm to characterize changes in várzea forest cover in the Lower Amazon, and to analyze the potential of spectral and temporal attributes to classify forest loss as either natural or anthropogenic. We used a time series of 37 Landsat TM and ETM+ images acquired between 1984 and 2009. We used the LandTrendr algorithm to detect forest cover change and the attributes of "start year", "magnitude", and "duration" of the changes, as well as "NDVI at the end of series". Detection was restricted to areas identified as having forest cover at the start and/or end of the time series. We used the Support Vector Machine (SVM) algorithm to classify the extracted attributes, differentiating between anthropogenic and natural forest loss. Detection reliability was consistently high for change events along the Amazon River channel, but variable for changes within the floodplain. Spectral-temporal trajectories faithfully represented the nature of changes in floodplain forest cover, corroborating field observations. We estimated anthropogenic forest losses to be larger (1,071 ha) than natural losses (884 ha), with a global classification accuracy of 94%. We conclude that the LandTrendr algorithm is a reliable tool for studies of forest dynamics throughout the floodplain.
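The classification step described above, separating anthropogenic from natural forest loss using the per-event change attributes (start year, magnitude, duration, end-of-series NDVI), maps onto a standard SVM pipeline. The snippet below is a generic sketch with synthetic attribute values and an arbitrary labelling rule; it is not the study's code or data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical attribute table: one row per detected change event with
# columns [start_year, magnitude, duration, ndvi_end].
rng = np.random.default_rng(42)
X = np.column_stack([
    rng.integers(1984, 2010, 500),     # start year of the change segment
    rng.uniform(0.05, 0.9, 500),       # spectral magnitude of the change
    rng.integers(1, 10, 500),          # duration of the segment in years
    rng.uniform(0.1, 0.8, 500),        # NDVI at the end of the series
])
# Arbitrary synthetic rule standing in for real labels:
# abrupt, strong changes are tagged as anthropogenic (1), the rest as natural (0).
y = ((X[:, 1] > 0.4) & (X[:, 2] <= 3)).astype(int)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", round(clf.score(X_test, y_test), 2))
```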

Relevance:

30.00%

Publisher:

Abstract:

The real-time data acquisition process is fundamental to provide appropriate services and improve health professionals' decision-making. In this paper, a pervasive, adaptive data acquisition architecture for medical devices (e.g., vital signs monitors, ventilators and sensors) is presented. The architecture was deployed in a real context, an Intensive Care Unit, where it provides clinical data in real time to the INTCare system. The gateway is composed of several agents able to collect a set of patient variables (vital signs, ventilation) across the network; the paper uses the ventilation acquisition process as an example. The clients are installed on a machine near the patient's bed and connected to the ventilators, and the monitored data are sent to a multithreading server which records them in the database using Health Level Seven (HL7) protocols. The agents associated with the gateway are able to collect, analyse, interpret and store the data in the repository. The gateway includes a fault-tolerant system that ensures data are stored in the database even if the agents are disconnected. The gateway is pervasive, universal and interoperable, and it is able to adapt to any service using streaming data.
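To give a flavour of the gateway's server side, the sketch below runs a multithreaded TCP server that accepts pipe-delimited readings from bedside agents and persists them in a local SQLite store, so stored readings survive if an agent later disconnects. It is a deliberately simplified stand-in: the real system exchanges Health Level Seven (HL7) messages and writes to the INTCare database, and the message layout, port and file name here are invented.

```python
import socketserver
import sqlite3
import threading

DB_PATH = "gateway.db"          # hypothetical local store
_lock = threading.Lock()        # serialise writes from handler threads

def init_db() -> None:
    with sqlite3.connect(DB_PATH) as con:
        con.execute("CREATE TABLE IF NOT EXISTS readings "
                    "(patient_id TEXT, variable TEXT, value REAL, ts TEXT)")

class ReadingHandler(socketserver.StreamRequestHandler):
    def handle(self) -> None:
        # Expected (invented) line format: patient_id|variable|value|timestamp
        for raw in self.rfile:
            fields = raw.decode().strip().split("|")
            if len(fields) != 4:
                continue                      # ignore malformed messages
            patient_id, variable, value, ts = fields
            with _lock, sqlite3.connect(DB_PATH) as con:
                con.execute("INSERT INTO readings VALUES (?, ?, ?, ?)",
                            (patient_id, variable, float(value), ts))

if __name__ == "__main__":
    init_db()
    with socketserver.ThreadingTCPServer(("0.0.0.0", 9100), ReadingHandler) as srv:
        srv.serve_forever()                   # one thread per connected agent
```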

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVE - To assess mortality and the psychological repercussions of the prolonged waiting time for candidates for heart surgery. METHODS - From July 1999 to May 2000, using a standardized questionnaire, we carried out standardized interviews and semi-structured psychological interviews with 484 patients with coronary heart disease, 121 patients with valvular heart disease, and 100 patients with congenital heart disease. RESULTS - The mortality coefficients (deaths per 100 patients/year) were as follows: coronary heart disease, 5.6; valvular heart disease, 12.8; and congenital heart disease, 3.1 (p<0.0001). The survival curve was lower for patients with valvular heart disease than for patients with coronary or congenital heart disease (p<0.001). The accumulated probability of not undergoing surgery was higher in patients with valvular heart disease than in the other patients (p<0.001), and, among the patients with valvular heart disease, this probability was higher in females than in males (p<0.01). Several patients experienced intense anxiety and attributed adaptive problems in their affective, professional, and social lives to not undergoing surgery. CONCLUSION - Mortality was high, and even higher among the patients with valvular heart disease, with negative psychological and social repercussions.

Relevance:

30.00%

Publisher:

Abstract:

Today's advances in computing power are driven by the parallel processing capabilities of available hardware architectures. These architectures enable the acceleration of algorithms when the algorithms are properly parallelized and exploit the specific processing power of the underlying architecture; however, converting an algorithm into its parallel form is complex, and that form is specific to each type of parallel hardware. Most current general-purpose processors integrate several processor cores on a single chip, resulting in what is known as a Symmetric Multi-Processor (SMP) unit; nowadays even desktop computers use multicore processors, and the industry trend is to increase the number of integrated cores as technology matures. On the other hand, Graphics Processor Units (GPU), originally designed to handle only video processing, have developed their computing power through multiple internal processing units and have emerged as interesting alternatives for algorithm acceleration; current GPUs can run on the order of 200 to 400 parallel processing threads. Scientific computing can be implemented on this hardware thanks to the programmability of newer devices, known as General Processing Graphics Processor Units (GPGPU). Unlike the SMP processors mentioned above, however, GPGPUs are not general purpose: each board offers a limited amount of memory, and the type of parallel processing they perform must suit the device for their use to be productive. Finally, Field Programmable Gate Arrays (FPGA) are programmable-logic devices able to perform large numbers of operations in parallel, with low latency and deep pipelines, so they can be used to implement specific algorithms that must run at very high speed; their drawback is that programming and testing the algorithm instantiated on the device is considerably harder than software approaches and typically time-consuming. Given this diversity of parallel processors, our work aims to analyse the specific characteristics of each of them and their impact on the structure of algorithms, so that their use can yield processing performance commensurate with the resources employed and so that they can be combined in a mutually beneficial way. Specifically, starting from the characteristics of the hardware, we determine the properties a parallel algorithm must have in order to be accelerated; the characteristics of the parallel algorithms, in turn, determine which of these new types of hardware is most suitable for their instantiation. In particular, we take into account the level of data dependency, the need for synchronization during parallel processing, the size of the data to be processed, and the complexity of parallel programming on each type of hardware.
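As a tiny illustration of the SMP case discussed above, in which a data-parallel task with no dependencies between items maps naturally onto multiple cores, the sketch below spreads an embarrassingly parallel workload over a process pool. It only illustrates the concept; GPGPU and FPGA versions of the same computation would be structured entirely differently.

```python
import math
from multiprocessing import Pool

def heavy_kernel(x: float) -> float:
    """Stand-in for a compute-bound, data-independent task (no shared state)."""
    acc = 0.0
    for k in range(1, 20_000):
        acc += math.sin(x / k)
    return acc

if __name__ == "__main__":
    data = [float(i) for i in range(256)]
    with Pool() as pool:                      # one worker per available core by default
        results = pool.map(heavy_kernel, data, chunksize=16)
    print(len(results), "items processed in parallel")
```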

Relevance:

30.00%

Publisher:

Abstract:

We say the endomorphism problem is solvable for an element W of a free group F if it can be decided effectively whether, given U in F, there is an endomorphism Φ of F sending W to U. This work analyzes an approach due to C. Edmunds and improved by C. Sims. We prove that this approach yields an efficient algorithm for solving the endomorphism problem when W is a two-generator word, running in time polynomial in the length of U. This result gives a polynomial-time algorithm for solving, in free groups, two-variable equations in which all the variables occur on one side of the equality and all the constants on the other side.
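Stated formally (a restatement of the abstract, added here for clarity):

```latex
% Endomorphism problem for a fixed word W in the free group F:
% decide, for an input U, whether some endomorphism of F maps W to U.
\[
\text{Given } U \in F,\ \text{decide whether } \exists\, \varphi \in \operatorname{End}(F)
\ \text{such that}\ \varphi(W) = U .
\]
% The abstract's result: when W is a two-generator word, this problem is
% decidable in time polynomial in |U|, the length of U.
```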

Relevance:

30.00%

Publisher:

Abstract:

When speech is degraded, word report is higher for semantically coherent sentences (e.g., her new skirt was made of denim) than for anomalous sentences (e.g., her good slope was done in carrot). Such increased intelligibility is often described as resulting from "top-down" processes, reflecting an assumption that higher-level (semantic) neural processes support lower-level (perceptual) mechanisms. We used time-resolved sparse fMRI to test for top-down neural mechanisms, measuring activity while participants heard coherent and anomalous sentences presented in speech envelope/spectrum noise at varying signal-to-noise ratios (SNR). The timing of BOLD responses to more intelligible speech provides evidence of hierarchical organization, with earlier responses in peri-auditory regions of the posterior superior temporal gyrus than in more distant temporal and frontal regions. Despite Sentence content × SNR interactions in the superior temporal gyrus, prefrontal regions respond after auditory/perceptual regions. Although we cannot rule out top-down effects, this pattern is more compatible with a purely feedforward or bottom-up account, in which the results of lower-level perceptual processing are passed to inferior frontal regions. Behavioral and neural evidence that sentence content influences perception of degraded speech does not necessarily imply "top-down" neural processes.