973 results for Iterative Closest Point (ICP) Algorithm


Relevance:

20.00%

Publisher:

Abstract:

This work falls within the scope of the research project "Impact and Effects of External Evaluation on Non-Higher Education Schools" (FCT PTDC/CPE-CED/116674/2010), carried out by six Portuguese universities. Specifically, it analyzes the results of a questionnaire survey sent to all directors of educational units in mainland Portugal. In this text we analyze the results concerning the perspectives of the Northern Region of Portugal, the region geographically closest to the Universidade do Minho. The results point to a moderately positive perception of the External Evaluation of Schools.

Relevance:

20.00%

Publisher:

Abstract:

The Electromagnetism-like (EM) algorithm is a population-based stochastic global optimization algorithm that uses an attraction-repulsion mechanism to move sample points towards the optimum. In this paper, an implementation of the EM algorithm in the Matlab environment is proposed as a useful function for practitioners and for those who want to experiment with a new global optimization solver. A set of benchmark problems is solved in order to evaluate the performance of the implemented method when compared with other stochastic methods available in the Matlab environment. The results confirm that our implementation is a competitive alternative both in terms of numerical results and performance. Finally, a case study based on a parameter estimation problem of a biological system shows that the EM implementation can be applied with promising results in the control optimization area.
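
As a concrete illustration of the attraction-repulsion mechanism the abstract describes, the sketch below implements one EM move in Python rather than Matlab. It is not the authors' implementation: the charge and force formulas follow the original Birbil and Fang formulation, and all names, the step rule, and the bound handling are assumptions made here.

```python
import numpy as np

def em_step(X, f, lb, ub, rng):
    """One attraction-repulsion move over a population X of shape (m, n),
    minimizing f over the box [lb, ub] (a sketch, not the paper's code)."""
    m, n = X.shape
    vals = np.array([f(x) for x in X])
    best = vals.argmin()
    # Charges: points with better objective values get larger charges.
    denom = np.sum(vals - vals[best]) or 1.0
    q = np.exp(-n * (vals - vals[best]) / denom)
    F = np.zeros_like(X)
    for i in range(m):
        for j in range(m):
            if i == j:
                continue
            d = X[j] - X[i]
            # Attraction towards better points, repulsion from worse ones.
            sign = 1.0 if vals[j] < vals[i] else -1.0
            F[i] += sign * q[i] * q[j] * d / (d @ d + 1e-12)
    X_new = X.copy()
    for i in range(m):
        if i == best:                    # keep the current best point fixed
            continue
        Fi = F[i] / (np.linalg.norm(F[i]) + 1e-12)
        X_new[i] = np.clip(X[i] + rng.random() * Fi * (ub - lb), lb, ub)
    return X_new
```

Iterating em_step from a random initial population and tracking the best point gives the basic solver loop.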

Relevance:

20.00%

Publisher:

Abstract:

In this paper, we propose an extension of the firefly algorithm (FA) to multi-objective optimization. FA is a swarm intelligence optimization algorithm, inspired by the flashing behavior of fireflies at night, that is capable of computing global solutions to continuous optimization problems. Our proposal relies on a fitness assignment scheme that gives lower fitness values to the positions of fireflies that correspond to non-dominated points with a smaller aggregation of objective function distances to the minimum values. Furthermore, FA randomness is based on the spread metric to reduce the gaps between consecutive non-dominated solutions. Results from preliminary computational experiments show that our proposal produces a dense and well-distributed approximate Pareto front with a large number of points.
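
Since the fitness assignment scheme is the technical core of the proposal, the sketch below gives one plausible reading of it in Python. The paper's exact formula is not reproduced here; the dominance test, the normalization, and the function name are assumptions.

```python
import numpy as np

def fitness(objs):
    """Assign fitness to fireflies from their objective vectors (rows of
    objs, minimization): non-dominated points get lower fitness, ordered
    by the aggregated distance to the per-objective minimum values."""
    m, _ = objs.shape
    ideal = objs.min(axis=0)             # per-objective minimum values
    agg = np.sum(objs - ideal, axis=1)   # aggregated distance to the minima
    dominated = np.array([
        any(np.all(objs[j] <= objs[i]) and np.any(objs[j] < objs[i])
            for j in range(m))
        for i in range(m)
    ])
    # Dominated fireflies start one band above the non-dominated ones;
    # within a band, smaller aggregated distance means better fitness.
    return dominated.astype(float) + agg / (agg.max() + 1e-12)
```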

Relevance:

20.00%

Publisher:

Abstract:

This paper presents a single-phase Series Active Power Filter (Series APF) that mitigates the load voltage harmonic content while keeping the DC-side voltage regulated without the support of an external voltage source. The proposed control algorithm eliminates the need for an additional voltage source to regulate the DC voltage, and the adopted topology dispenses with a coupling transformer to interface the series active power filter with the electrical power grid. The paper describes the control strategy, which encapsulates the grid synchronization scheme, the compensation voltage calculation, the damping algorithm, and the dead-time compensation. The topology and control strategy of the series active power filter have been evaluated in simulation, and simulation results are presented. Experimental results, obtained with a laboratory prototype, validate the theoretical assumptions and are within the harmonic spectrum limits imposed by the international recommendations of the IEEE-519 Standard.
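
For a flavor of the compensation voltage calculation stage only, here is a toy Python version that isolates the fundamental of the grid voltage by single-cycle Fourier correlation. The paper's grid synchronization, damping, and dead-time compensation stages are not reproduced, and every name and parameter below is an assumption.

```python
import numpy as np

def compensation_voltage(v_grid, f1, fs):
    """Toy compensation reference: estimate the fundamental of the grid
    voltage over the most recent cycle and return v_fund - v_grid, so
    that series injection leaves the load with the fundamental only."""
    n = int(fs / f1)                 # samples per fundamental cycle
    t = np.arange(n) / fs
    s = np.sin(2 * np.pi * f1 * t)
    c = np.cos(2 * np.pi * f1 * t)
    cycle = v_grid[-n:]              # most recent full cycle
    a = 2.0 / n * (cycle @ s)        # in-phase fundamental component
    b = 2.0 / n * (cycle @ c)        # quadrature fundamental component
    v_fund = a * s + b * c
    return v_fund - cycle            # cancels the harmonic content
```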

Relevance:

20.00%

Publisher:

Abstract:

Integrated master's dissertation in Industrial Electronics and Computers Engineering

Relevance:

20.00%

Publisher:

Abstract:

The Amazon várzeas are an important component of the Amazon biome, but anthropic and climatic impacts have been leading to forest loss and interruption of essential ecosystem functions and services. The objectives of this study were to evaluate the capability of the Landsat-based Detection of Trends in Disturbance and Recovery (LandTrendr) algorithm to characterize changes in várzea forest cover in the Lower Amazon, and to analyze the potential of spectral and temporal attributes to classify forest loss as either natural or anthropogenic. We used a time series of 37 Landsat TM and ETM+ images acquired between 1984 and 2009. We used the LandTrendr algorithm to detect forest cover change and the attributes of "start year", "magnitude", and "duration" of the changes, as well as "NDVI at the end of series". Detection was restricted to areas identified as having forest cover at the start and/or end of the time series. We used the Support Vector Machine (SVM) algorithm to classify the extracted attributes, differentiating between anthropogenic and natural forest loss. Detection reliability was consistently high for change events along the Amazon River channel, but variable for changes within the floodplain. Spectral-temporal trajectories faithfully represented the nature of changes in floodplain forest cover, corroborating field observations. We estimated anthropogenic forest losses to be larger (1,071 ha) than natural losses (884 ha), with a global classification accuracy of 94%. We conclude that the LandTrendr algorithm is a reliable tool for studies of forest dynamics throughout the floodplain.
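
As an illustration of the classification stage, the snippet below trains an SVM on the four change attributes named in the abstract. It is a generic scikit-learn sketch, not the authors' pipeline; the file names, kernel, and C value are assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# One row per detected change event (hypothetical files):
# [start_year, magnitude, duration, ndvi_at_end_of_series]
X = np.loadtxt("change_attributes.csv", delimiter=",")
y = np.loadtxt("labels.csv", dtype=int)   # 0 = natural, 1 = anthropogenic

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print(cross_val_score(clf, X, y, cv=5).mean())  # estimated overall accuracy
```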

Relevance:

20.00%

Publisher:

Abstract:

For a given self-map f of M, a closed smooth connected and simply-connected manifold of dimension m ≥ 4, we provide an algorithm for estimating the values of the topological invariant D_r^m[f], which equals the minimal number of r-periodic points in the smooth homotopy class of f. Our results are based on the combinatorial scheme for computing D_r^m[f] introduced by G. Graff and J. Jezierski [J. Fixed Point Theory Appl. 13 (2013), 63–84]. An open-source implementation of the algorithm, programmed in C++, is publicly available at http://www.pawelpilarczyk.com/combtop/.

Relevance:

20.00%

Publisher:

Abstract:

Integrated master's dissertation in Industrial Electronics and Computers Engineering

Relevance:

20.00%

Publisher:

Abstract:

Master's dissertation in Systems Engineering

Relevance:

20.00%

Publisher:

Abstract:

Doctoral thesis in Science and Engineering of Polymers and Composites

Relevance:

20.00%

Publisher:

Abstract:

OBJECTIVE - To identify the anaerobic threshold and respiratory compensation point in patients with heart failure. METHODS - The study comprised 42 men, divided according to functional class (FC) as follows: group I (GI), 15 patients in FC I; group II (GII), 15 patients in FC II; and group III (GIII), 12 patients in FC III. Patients underwent a treadmill cardiopulmonary exercise test during which expired gases were analyzed. RESULTS - The heart rates (in bpm) at the anaerobic threshold were: GI, 122±27; GII, 117±17; GIII, 114±22. At the respiratory compensation point, the heart rates (in bpm) were: GI, 145±33; GII, 133±14; GIII, 123±22. The heart rates at the respiratory compensation point differed significantly between GI and GIII. Oxygen consumption (VO2, in mL/kg/min) at the anaerobic threshold was: GI, 13.6±3.25; GII, 10.77±1.89; GIII, 8.7±1.44; at the respiratory compensation point it was: GI, 19.1±2.2; GII, 14.22±2.63; GIII, 10.27±1.85. CONCLUSION - Patients with stable functional class I, II, and III heart failure reached the anaerobic threshold and the respiratory compensation point at different levels of oxygen consumption and heart rate. The role played by these thresholds in physical activity for this group of patients needs to be better clarified.

Relevance:

20.00%

Publisher:

Abstract:

Master's dissertation in Children's, Family and Succession Law

Relevance:

20.00%

Publisher:

Abstract:

Today's advances in high-performance computing are driven by the parallel processing capabilities of available hardware architectures. These architectures enable the acceleration of algorithms when the algorithms are properly parallelized to exploit the specific processing power of the underlying architecture. Most current general-purpose processors integrate several processor cores on a single chip, resulting in what is known as a Symmetric Multiprocessing (SMP) unit; nowadays even desktop computers use multicore processors, and the industry trend is to integrate ever more cores as technology matures. Graphics Processor Units (GPU), originally designed to handle only video processing, have in turn emerged as interesting alternatives for algorithm acceleration: currently available GPUs can run on the order of 200 to 400 parallel processing threads, and scientific computing can be implemented on this hardware thanks to the programmability of the new GPUs, which have come to be known as General Processing Graphics Processor Units (GPGPU). However, GPGPUs are not general-purpose devices: they offer little memory compared with general-purpose processors, so the implementation of algorithms must be addressed carefully. Finally, Field Programmable Gate Arrays (FPGA) are programmable devices that can perform large numbers of operations in parallel, implementing hardware logic with low latency, high parallelism, and deep pipelines; they can be used to implement specific algorithms that need to run at very high speeds, but they are harder to program than software approaches and debugging is typically time-consuming. Given this diversity of parallel processors, our work aims at determining the specific features of each of these architectures, their impact on algorithm structure, and the know-how required to accelerate algorithm execution on them, identifying which algorithms fit best on a given architecture and how the architectures can be combined so that they complement each other beneficially. In particular, we take into account the degree of data dependence, the need for synchronization during parallel processing, the size of the data to be processed, and the complexity of parallel programming on each type of hardware.
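
To make the hardware contrast concrete, the sketch below shows the easiest case discussed above: a dependence-free data-parallel map on an SMP machine, written with Python's multiprocessing. It is an illustration added here, not part of the abstract; workloads with data dependences or synchronization needs would require a different decomposition, which is where the GPU and FPGA trade-offs enter.

```python
# Illustrative SMP sketch (not from the abstract): an embarrassingly
# parallel map with no data dependence and no synchronization, the
# easiest workload to accelerate on a multicore processor.
from multiprocessing import Pool

def work(x: int) -> int:
    # Independent per-element computation.
    return sum(i * i for i in range(x))

if __name__ == "__main__":
    data = list(range(1000, 1100))
    with Pool() as pool:                # one worker per available core
        results = pool.map(work, data)  # scatter inputs, gather results
    print(sum(results))
```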

Relevance:

20.00%

Publisher:

Abstract:

Indications for percutaneous coronary intervention (PCI) continue to evolve owing to the continuous development of the technology, broader selection criteria for patients and lesions, and new evidence from clinical trials. Considerable controversy was generated by the main results of the COURAGE trial (Clinical Outcomes Utilizing Revascularization and Aggressive Drug Evaluation), which showed no difference in long-term outcome for stable patients with coronary disease randomized to an initial strategy of PCI plus optimized medical treatment versus optimized medical treatment alone. In patients with chronic stable angina, medical treatment remains the cornerstone and should be optimized in all patients, while the main achievable goals of PCI are to reduce or prevent symptoms, reduce the need for subsequent procedures, and relieve ischemia. In patients with stable coronary artery disease (CAD), however, no reduction in the incidence of death or myocardial infarction was observed, and these limitations of PCI in this clinical setting need to be emphasized. The message of the COURAGE trial can be refined on the basis of the recent nuclear and angiographic substudies: patients with significant residual ischemia under optimized medical treatment should be considered for treatment with PCI, since residual ischemia is associated with a higher probability of death and myocardial infarction. These findings, however, need to be confirmed by prospective evaluation before wider acceptance by the interventional community.

Relevance:

20.00%

Publisher:

Abstract:

Object-oriented simulation, mechatronic systems, non-iterative algorithm, electric components, piezo-actuator, symbolic computation, Maple, Sparse-Tableau, library of components