961 results for Fast Algorithm
Abstract:
This paper presents a step-count algorithm designed to work in real time with low computational power. This proposal is our first step in the development of an indoor navigation system based on Pedestrian Dead Reckoning (PDR). We present two approaches to this problem and compare them based on their step-counting error, as well as their suitability for use in a real-time system.
Abstract:
This paper presents an ankle-mounted Inertial Navigation System (INS) used to estimate the distance traveled by a pedestrian. This distance is estimated from the number of steps taken by the user. The proposed method uses force sensors to enhance the results obtained from the INS. Experimental results have shown that, depending on the step frequency, the traveled-distance error varies between 2.7% and 5.6%.
Abstract:
The application of compressive sensing (CS) to hyperspectral images has been an active area of research in recent years, both in terms of hardware and of signal processing algorithms. However, CS algorithms can be computationally very expensive due to the extremely large volumes of data collected by imaging spectrometers, a fact that compromises their use in applications under real-time constraints. This paper proposes four efficient implementations of hyperspectral coded aperture (HYCA) for CS on commodity graphics processing units (GPUs): two of them, termed P-HYCA and P-HYCA-FAST, and two additional implementations for its constrained version (CHYCA), termed P-CHYCA and P-CHYCA-FAST. The HYCA algorithm exploits the high correlation among the spectral bands of hyperspectral data sets and the generally low number of endmembers needed to explain the data, which largely reduces the number of measurements necessary to correctly reconstruct the original data. The proposed P-HYCA and P-CHYCA implementations have been developed using the compute unified device architecture (CUDA) and the cuFFT library. Moreover, in the P-HYCA-FAST and P-CHYCA-FAST implementations this library has been replaced by a fast iterative method, which leads to very significant speedup factors that help meet real-time requirements. The proposed algorithms are evaluated not only in terms of reconstruction error for different compression ratios but also in terms of computational performance, using two different NVIDIA GPU architectures: 1) GeForce GTX 590 and 2) GeForce GTX TITAN. Experiments conducted using both simulated and real data reveal considerable acceleration factors and good results in the task of compressing remotely sensed hyperspectral data sets.
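The abstract's core idea, that high inter-band correlation and a small endmember set make far fewer measurements sufficient, is the standard compressive-sensing argument. The sketch below illustrates it in generic form (random Gaussian measurements of a sparse signal, recovered by iterative soft thresholding); it is not the HYCA algorithm itself, and all dimensions and parameters are illustrative.

```python
import numpy as np

def soft_threshold(x, t):
    """Elementwise soft-thresholding operator."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam=0.05, n_iter=500):
    """Recover a sparse x from y = A x via iterative soft thresholding (ISTA)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + A.T @ (y - A @ x) / L, lam / L)
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                        # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)    # random Gaussian measurement matrix
y = A @ x_true                              # m << n compressed measurements
x_hat = ista(A, y)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```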
Abstract:
This paper introduces a new method to blindly unmix hyperspectral data, termed dependent component analysis (DECA). This method decomposes a hyperspectral image into a collection of reflectance (or radiance) spectra of the materials present in the scene (endmember signatures) and the corresponding abundance fractions at each pixel. DECA assumes that each pixel is a linear mixture of the endmember signatures weighted by the corresponding abundance fractions. These abundances are modeled as mixtures of Dirichlet densities, thus enforcing the constraints on abundance fractions imposed by the acquisition process, namely non-negativity and constant sum. The mixing matrix is inferred by a generalized expectation-maximization (GEM) type algorithm. This method overcomes the limitations of unmixing methods based on independent component analysis (ICA) and on geometry-based approaches. The effectiveness of the proposed method is illustrated using simulated data based on U.S.G.S. laboratory spectra and real hyperspectral data collected by the AVIRIS sensor over Cuprite, Nevada.
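The linear mixing model stated in the abstract (each pixel equals the endmember signatures weighted by non-negative, sum-to-one abundances drawn from Dirichlet densities) can be simulated directly. The sketch below builds this forward model only; DECA's GEM inference is not reproduced, and all sizes and spectra are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)
n_bands, n_endmembers, n_pixels = 200, 3, 1000

# Endmember signatures (columns of the mixing matrix M); random stand-ins
# for laboratory reflectance spectra.
M = rng.uniform(0.0, 1.0, size=(n_bands, n_endmembers))

# Abundances drawn from a Dirichlet density: non-negative and summing to one
# at every pixel, matching the constraints imposed by the acquisition process.
A = rng.dirichlet(alpha=[2.0, 2.0, 2.0], size=n_pixels).T   # (n_endmembers, n_pixels)

Y = M @ A + 0.01 * rng.normal(size=(n_bands, n_pixels))     # observed pixels
print(Y.shape, A.sum(axis=0)[:5])   # abundance columns sum to ~1
```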
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa in fulfilment of the requirements for the degree of Master in Electrical and Computer Engineering.
Abstract:
Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
The recent changes concerning consumers' active participation in the efficient management of load devices, both in their own interest and in the interest of the network operator, namely in the context of demand response, lead to the need for improved algorithms and tools. A continuous consumption optimization algorithm has been improved in order to better manage shifted demand. The algorithm has been implemented in a simulation and user-interaction tool that can be integrated into a previously developed multi-agent smart grid simulator, and that can also integrate several optimization algorithms to manage real and simulated loads. The case study in this paper highlights the advantages of the proposed algorithm and the benefits of using the developed simulation and user-interaction tool.
Abstract:
The integration of the Smart Grid concept into the electric grid creates the need for the active participation of small and medium players. This active participation can be achieved through decentralized decisions, in which the end consumer manages loads according to the Smart Grid's needs. Load management must respect users' preferences, wishes and needs; however, these can change when users are faced with exceptional events. This paper proposes the integration of exceptional events into the SCADA House Intelligent Management (SHIM) system developed by the authors, to handle machine learning issues in the domestic consumption context. An illustrative application and a learning case study are provided in this paper.
Abstract:
With the widespread use of technology in everyday life, localization systems have grown in popularity, owing to the wide range of functionalities they provide and the applications they serve. However, most positioning systems do not work properly in indoor environments, hindering the development of localization applications in these settings. Accelerometers are widely used in inertial localization systems because of the information they provide about the accelerations experienced by a body. Accordingly, this work analyses the acceleration signal provided by an accelerometer and proposes a step-detection technique that, in future applications, can serve as a resource for computing a user's position inside a building. In this sense, this work aims to contribute to the analysis and identification of the acceleration signal obtained at the foot, in order to determine the duration of a step and the number of steps taken. To achieve this goal, a set of 12 acceleration recordings (for normal walking, fast walking and running), collected by a mobile system from an accelerometer, was analysed in Matlab. From this exploratory study it became possible to present a step-counting algorithm based on peak detection combined with median and low-pass Butterworth filters, which produced good results. To validate the information obtained in this phase, a set of experimental tests was then carried out on 33 newly collected walking and running recordings. The number of steps taken, the mean step and stride times, and the error percentage were the variables under study. An error of 1% was obtained over the full set of recordings of 20, 100, 500 and 1000 steps using the proposed step-counting method. Despite the difficulties observed in analysing the acceleration signals for running, the proposed algorithm performed well, achieving values close to those expected. The results obtained show that the study goal was achieved successfully. Nevertheless, further research is suggested in order to extend these results in other directions.
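The pipeline described above (median filter, low-pass Butterworth filter, peak detection) translates naturally into a few lines of scipy. The sketch below is a minimal illustration under assumed parameter values (sampling rate, cutoff frequency, minimum step interval); it is not the thesis's tuned Matlab implementation.

```python
import numpy as np
from scipy.signal import medfilt, butter, filtfilt, find_peaks

def count_steps(accel_mag, fs, cutoff_hz=3.0, min_step_interval_s=0.3):
    """Count steps in an acceleration-magnitude signal by peak detection.

    Pipeline follows the abstract: median filter to remove spikes, then a
    low-pass Butterworth filter, then peak detection. Parameter values are
    illustrative, not the thesis's tuned settings.
    """
    smoothed = medfilt(accel_mag, kernel_size=5)            # remove impulsive noise
    b, a = butter(4, cutoff_hz / (fs / 2), btype="low")     # 4th-order low-pass
    filtered = filtfilt(b, a, smoothed)
    peaks, _ = find_peaks(filtered,
                          height=filtered.mean(),           # crude dynamic threshold
                          distance=int(min_step_interval_s * fs))
    return len(peaks), peaks

# Synthetic walk: ~2 steps/s for 10 s, sampled at 100 Hz, plus noise.
fs = 100
t = np.arange(0, 10, 1 / fs)
accel = 1.0 + np.sin(2 * np.pi * 2.0 * t) + 0.1 * np.random.randn(t.size)
n_steps, _ = count_steps(accel, fs)
print("steps detected:", n_steps)   # expect ~20
```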
Abstract:
Work presented within the scope of the Master's programme in Informatics Engineering, as a partial requirement for obtaining the degree of Master in Informatics Engineering.
Abstract:
In this paper we present the operational matrices of the left Caputo fractional derivative, the right Caputo fractional derivative and the Riemann–Liouville fractional integral for shifted Legendre polynomials. We develop an accurate numerical algorithm to solve the two-sided space–time fractional advection–dispersion equation (FADE) based on a spectral shifted Legendre tau (SLT) method combined with the derived shifted Legendre operational matrices. The fractional derivatives are described in the Caputo sense. We propose a spectral SLT method for both the temporal and spatial discretizations of the two-sided space–time FADE. This technique reduces the two-sided space–time FADE to a system of algebraic equations, which simplifies the problem. Numerical experiments are carried out to confirm the spectral accuracy and efficiency of the proposed algorithm. By selecting relatively few Legendre polynomial degrees, we are able to obtain very accurate approximations, demonstrating the utility of the new approach over other numerical methods.
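For reference, the Caputo derivatives mentioned in the abstract have the following standard definitions; the two-sided FADE shown after them is a representative form from the general literature, not necessarily the exact equation treated in the paper.

```latex
% Left Caputo fractional derivative of order \alpha, with n-1 < \alpha \le n:
{}^{C}_{a}D^{\alpha}_{t} f(t)
  = \frac{1}{\Gamma(n-\alpha)} \int_{a}^{t} (t-\tau)^{\,n-\alpha-1} f^{(n)}(\tau)\, d\tau .

% Right Caputo fractional derivative, defined symmetrically on [t, b]:
{}^{C}_{t}D^{\alpha}_{b} f(t)
  = \frac{(-1)^{n}}{\Gamma(n-\alpha)} \int_{t}^{b} (\tau-t)^{\,n-\alpha-1} f^{(n)}(\tau)\, d\tau .

% Representative two-sided space–time FADE (coefficients illustrative):
{}^{C}_{0}D^{\beta}_{t} u(x,t)
  = -v\,\frac{\partial u}{\partial x}
    + d_{+}\,{}_{0}D^{\alpha}_{x} u(x,t)
    + d_{-}\,{}_{x}D^{\alpha}_{L} u(x,t)
    + s(x,t), \qquad 0 < \beta \le 1,\; 1 < \alpha \le 2 .
```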
Abstract:
In this paper, we formulate the electricity retailers' short-term decision-making problem in a liberalized retail market as a multi-objective optimization model. Retailers with light physical assets, such as generation and storage units in the distribution network, are considered. Following advances in smart grid technologies, electricity retailers are becoming able to employ incentive-based demand response (DR) programs, in addition to their physical assets, to effectively manage the risks of market price and load variations. In this model, DR scheduling is performed simultaneously with the dispatch of generation and storage units. The ultimate goal is to find the optimal values of the hourly financial incentives offered to end-users. The proposed model considers the capacity obligations imposed on retailers by the grid operator. The profit-seeking retailer also aims to minimize peak demand in order to avoid high capacity charges in the form of grid tariffs or penalties. The non-dominated sorting genetic algorithm II (NSGA-II), a fast and elitist multi-objective evolutionary algorithm, is used to solve the multi-objective problem. A case study is solved to illustrate the efficient performance of the proposed methodology. Simulation results show the effectiveness of the model for designing incentive-based DR programs and indicate the efficiency of NSGA-II in solving the retailers' multi-objective problem.
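The core of NSGA-II is its fast non-dominated sort, which partitions a population into successive Pareto fronts. A minimal Python sketch is given below, with an illustrative bi-objective toy population standing in for the retailer's cost and peak-demand objectives; it is not the paper's full model.

```python
def dominates(p, q):
    """True if objective vector p Pareto-dominates q (minimization)."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def fast_non_dominated_sort(objs):
    """NSGA-II's fast non-dominated sort: partition solutions into Pareto fronts."""
    n = len(objs)
    S = [[] for _ in range(n)]         # indices each solution dominates
    counts = [0] * n                   # how many solutions dominate each index
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(objs[i], objs[j]):
                S[i].append(j)
            elif dominates(objs[j], objs[i]):
                counts[i] += 1
        if counts[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in S[i]:
                counts[j] -= 1
                if counts[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]

# Toy bi-objective population (cost, peak demand); values illustrative only.
population = [(3.0, 5.0), (2.0, 6.0), (4.0, 4.0), (3.5, 5.5), (2.0, 7.0)]
print(fast_non_dominated_sort(population))   # [[0, 1, 2], [3, 4]]
```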
Abstract:
BACKGROUND: Quantitation of serum HBeAg is not commonly used to monitor viral response to therapy in chronic hepatitis B. METHODS: In this study, 21 patients receiving various therapies were followed, and their viral response was monitored by concomitant viral load and HBeAg quantitation in order to study the meaning and the kinetics of both parameters. RESULTS: It was possible to distinguish three different patterns of viral response. The first was characterized by a simultaneous decrease in serum HBV DNA and HBeAg. The second was characterized by a decrease in serum HBeAg but persistent detection of HBV DNA. The third was characterized by undetectable HBV DNA with persistent HBeAg positivity, which points to a non-response (Pattern III-B), except when HBeAg levels show a slow but steady drop, characterizing a "slow responder" patient (Pattern III-A). CONCLUSIONS: The first pattern is compatible with a viral response. Long-term HBeAg seropositivity with a slow and persistent decrease (Pattern III-A) is also compatible with a viral response and calls for prolonging antiviral treatment.
Abstract:
Dissertation presented in fulfilment of the requirements for the degree of Master in Political Science and International Relations, in the specialization area of Globalization and Environment.
Abstract:
The purpose of this work is to present an algorithm for solving nonlinear constrained optimization problems, using the filter method with the inexact restoration (IR) approach. In the IR approach, two independent phases are performed in each iteration: the feasibility phase and the optimality phase. The first directs the iterative process toward the feasible region, i.e. it finds a point with a smaller constraint violation. The optimality phase starts from this point, and its goal is to optimize the objective function within the space of satisfied constraints. To evaluate the solution approximations in each iteration, a scheme based on the filter method is used in both phases of the algorithm. This method replaces merit functions based on penalty schemes, avoiding the related difficulties, such as estimating the penalty parameter and the non-differentiability of some merit functions. The filter method is implemented in the context of a line-search globalization technique. A set of more than two hundred AMPL test problems is solved. The algorithm developed is compared with the LOQO and NPSOL software packages.
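A filter, in this sense, is a list of (constraint violation, objective value) pairs against which trial points are tested, instead of a penalty-based merit function. The sketch below shows a Fletcher–Leyffer-style acceptance test and filter update under assumed margin parameters; the paper's exact rule may differ.

```python
def acceptable(point, filter_entries, beta=0.99, gamma=1e-4):
    """Filter acceptance test (a sketch, not the paper's exact rule).

    point = (h, f): constraint violation h and objective value f.
    Accepted if, against every filter entry (h_j, f_j), it sufficiently
    improves either feasibility or optimality.
    """
    h, f = point
    return all(h <= beta * h_j or f <= f_j - gamma * h_j
               for h_j, f_j in filter_entries)

def add_to_filter(point, filter_entries):
    """Insert an accepted point, discarding entries it dominates."""
    h, f = point
    kept = [(h_j, f_j) for h_j, f_j in filter_entries
            if not (h <= h_j and f <= f_j)]
    kept.append(point)
    return kept

# Illustrative use: a trial iterate with violation 0.05 and objective 1.2.
filt = [(0.5, 1.0), (0.1, 2.0)]
trial = (0.05, 1.2)
if acceptable(trial, filt):
    filt = add_to_filter(trial, filt)
print(filt)   # the dominated entry (0.1, 2.0) has been removed
```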