961 results for Fast Algorithm
Abstract:
In this paper, we propose an extension of the firefly algorithm (FA) to multi-objective optimization. FA is a swarm intelligence optimization algorithm, inspired by the flashing behavior of fireflies at night, that is capable of computing global solutions to continuous optimization problems. Our proposal relies on a fitness assignment scheme that assigns lower fitness values to firefly positions corresponding to non-dominated points with a smaller aggregate of objective function distances to the minimum values. Furthermore, the randomness in FA is driven by the spread metric, in order to reduce the gaps between consecutive non-dominated solutions. Results from preliminary computational experiments show that our proposal produces a dense and well-distributed approximation of the Pareto front with a large number of points.
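For context, a minimal Python sketch of the canonical firefly position update that this proposal builds on is given below, assuming a minimization setting where lower brightness values are better. The paper's multi-objective fitness assignment and spread-based randomness are not reproduced here; `beta0`, `gamma`, and `alpha` are the usual FA parameters, not values from the paper.

```python
import math
import random

def firefly_step(positions, brightness, beta0=1.0, gamma=1.0, alpha=0.2):
    """One iteration of the canonical firefly algorithm update.

    positions  -- list of points (lists of floats) in the search space
    brightness -- fitness values, lower is better (matching the fitness
                  assignment scheme described in the abstract)
    """
    n, dim = len(positions), len(positions[0])
    new_positions = [p[:] for p in positions]
    for i in range(n):
        for j in range(n):
            if brightness[j] < brightness[i]:  # firefly j is "brighter"
                r2 = sum((positions[i][d] - positions[j][d]) ** 2
                         for d in range(dim))
                # attractiveness decays with squared distance
                beta = beta0 * math.exp(-gamma * r2)
                for d in range(dim):
                    new_positions[i][d] += (
                        beta * (positions[j][d] - positions[i][d])
                        + alpha * (random.random() - 0.5))  # random walk term
    return new_positions
```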
Abstract:
This paper presents a single-phase Series Active Power Filter (Series APF) for mitigation of the load voltage harmonic content, while keeping the voltage on the DC side regulated without the support of a voltage source. The proposed control algorithm eliminates the need for an additional voltage source to regulate the DC voltage, and the adopted topology requires no coupling transformer to interface the series active power filter with the electrical power grid. The paper describes the control strategy, which encompasses the grid synchronization scheme, the compensation voltage calculation, the damping algorithm, and the dead-time compensation. The topology and control strategy of the series active power filter were evaluated in simulation software, and simulation results are presented. Experimental results, obtained with a laboratory prototype, validate the theoretical assumptions and fall within the harmonic limits recommended by the IEEE 519 standard.
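As a rough illustration of the compensation voltage idea only (not the paper's actual algorithm, which also involves grid synchronization, damping, and dead-time compensation), the sketch below extracts the fundamental of the measured grid voltage with a single-bin DFT over the most recent cycle and returns the difference to be injected in series, so that the load ideally sees a clean sinusoid. The function name and sampling setup are hypothetical.

```python
import numpy as np

def compensation_voltage(v_grid, fs, f1=50.0):
    """Series APF compensation reference (simplified sketch).

    v_grid -- NumPy array of sampled grid voltage (distorted)
    fs     -- sampling frequency in Hz
    f1     -- fundamental frequency in Hz
    Returns the voltage to inject in series: v_load = v_grid + v_comp.
    """
    n = int(round(fs / f1))          # samples per fundamental cycle
    t = np.arange(len(v_grid)) / fs
    window = v_grid[-n:]             # most recent full cycle
    tw = t[-n:]
    # single-bin DFT: cosine and sine amplitudes of the fundamental
    a = 2.0 / n * np.sum(window * np.cos(2 * np.pi * f1 * tw))
    b = 2.0 / n * np.sum(window * np.sin(2 * np.pi * f1 * tw))
    v_fund = a * np.cos(2 * np.pi * f1 * t) + b * np.sin(2 * np.pi * f1 * t)
    return v_fund - v_grid           # cancels the harmonic content
```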
Abstract:
Natural selection favors the survival and reproduction of organisms that are best adapted to their environment. The selection mechanism in evolutionary algorithms mimics this process, aiming to create environmental conditions in which artificial organisms can evolve to solve the problem at hand. This paper proposes a new selection scheme for evolutionary multiobjective optimization. A key feature of the proposed selection is the similarity measure that defines the concept of neighborhood. Contrary to commonly used approaches, which are usually defined on the basis of distances between either individuals or weight vectors, we suggest measuring similarity and neighborhood by the angle between individuals in the objective space: the smaller the angle, the more similar the individuals. This notion is exploited during both mating and environmental selection. Convergence is ensured by minimizing the distances from individuals to a reference point, whereas diversity is preserved by maximizing the angles between neighboring individuals. Experimental results reveal highly competitive performance and useful characteristics of the proposed selection. Its strong diversity-preserving ability allows it to produce significantly better results on some problems when compared with state-of-the-art algorithms.
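A minimal sketch of the angle-based similarity at the core of this selection scheme follows. Translating the objective vectors by an ideal/reference point before measuring the angle is our assumption; the abstract does not spell out the exact construction.

```python
import math

def angle(f_a, f_b, ideal):
    """Angle between two individuals in objective space, measured after
    translating both by an ideal/reference point (an assumption)."""
    u = [a - z for a, z in zip(f_a, ideal)]
    v = [b - z for b, z in zip(f_b, ideal)]
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(y * y for y in v))
    if nu == 0.0 or nv == 0.0:
        return 0.0  # degenerate case: an individual sits on the ideal point
    dot = sum(x * y for x, y in zip(u, v))
    cos_t = max(-1.0, min(1.0, dot / (nu * nv)))
    return math.acos(cos_t)  # smaller angle => more similar individuals
```

Environmental selection would then, per the abstract, keep individuals that are close to the reference point (convergence) while maximizing the angles to already-selected neighbors (diversity).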
Abstract:
The main features of most components consist of simple basic functional geometries: planes, cylinders, spheres, and cones. Recognizing the shape and position of these geometries is essential for the dimensional characterization of components and makes an important contribution to the product life cycle, particularly in the manufacturing and inspection processes of the final product. This work aims to establish an algorithm that automatically recognizes such geometries, without operator intervention. Using differential geometry, large volumes of data can be processed and the basic functional geometries recognized. The original data can be obtained by rapid acquisition methods, such as 3D scanning or photography, and then converted into Cartesian coordinates. The satisfaction of intrinsic decision conditions allows the different geometries to be identified quickly, without operator intervention. Since inspection is generally a time-consuming task, this method reduces operator involvement in the process. The algorithm was first tested on geometric data generated in MATLAB and then on sets of data points acquired with a coordinate measuring machine and a 3D scanner on real physical surfaces. A comparison of measuring times is presented to show the advantage of the method. The results validated the suitability and potential of the proposed algorithm.
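As an illustration of the kind of decision conditions involved, the sketch below classifies a sampled surface patch from estimated principal curvatures. These criteria are a plausible reconstruction from differential geometry, not the paper's exact conditions, and assume the curvatures are ordered so that |k1| >= |k2| at each sample point.

```python
import statistics

def classify_surface(k1_vals, k2_vals, tol=1e-3):
    """Classify a patch as plane/sphere/cylinder/cone from per-point
    principal curvature estimates (k1_vals, k2_vals: lists of floats).

    Plausible decision conditions:
      plane    -- both curvatures ~0 everywhere
      sphere   -- k1 ~ k2, constant and nonzero
      cylinder -- one curvature ~0, the other constant and nonzero
      cone     -- one curvature ~0, the other varying over the patch
    """
    mean1, mean2 = statistics.mean(k1_vals), statistics.mean(k2_vals)
    spread1 = statistics.pstdev(k1_vals)
    if abs(mean1) < tol and abs(mean2) < tol:
        return "plane"
    if abs(mean1 - mean2) < tol and spread1 < tol:
        return "sphere"
    if abs(mean2) < tol and spread1 < tol:
        return "cylinder"
    if abs(mean2) < tol:
        return "cone"
    return "unclassified"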
Abstract:
The current study describes, for the first time, the in vitro phosphorylation of a human hair keratin using a protein kinase. Phosphorylation of the keratin was demonstrated by 31P NMR (Nuclear Magnetic Resonance) and Diffuse Reflectance Infrared Fourier Transform (DRIFT) techniques. Phosphorylation induced a 2.5-fold increase in adsorption capacity within the first 10 minutes for a cationic moiety, Methylene Blue (MB). The MB adsorption process was thoroughly described by several isothermal models. Reconstructed fluorescence microscopy images depict the distinct amounts of dye bound to the differently treated hair. The results of this work suggest that the enzymatic phosphorylation of keratins may have significant implications for hair shampooing and conditioning, where short application times of cationic components are of prime importance.
Abstract:
The Amazon várzeas are an important component of the Amazon biome, but anthropic and climatic impacts have been leading to forest loss and to the interruption of essential ecosystem functions and services. The objectives of this study were to evaluate the capability of the Landsat-based Detection of Trends in Disturbance and Recovery (LandTrendr) algorithm to characterize changes in várzea forest cover in the Lower Amazon, and to analyze the potential of spectral and temporal attributes to classify forest loss as either natural or anthropogenic. We used a time series of 37 Landsat TM and ETM+ images acquired between 1984 and 2009. We used the LandTrendr algorithm to detect forest cover change, along with the attributes "start year", "magnitude", and "duration" of the changes, as well as "NDVI at the end of series". Detection was restricted to areas identified as having forest cover at the start and/or end of the time series. We used the Support Vector Machine (SVM) algorithm to classify the extracted attributes, differentiating between anthropogenic and natural forest loss. Detection reliability was consistently high for change events along the Amazon River channel, but variable for changes within the floodplain. Spectral-temporal trajectories faithfully represented the nature of changes in floodplain forest cover, corroborating field observations. We estimated anthropogenic forest losses (1,071 ha) to be larger than natural losses (884 ha), with a global classification accuracy of 94%. We conclude that the LandTrendr algorithm is a reliable tool for studies of forest dynamics throughout the floodplain.
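A minimal sketch of the classification stage follows, using scikit-learn's SVM on the four extracted attributes. The feature values below are hypothetical placeholders, not data from the study.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# columns: start_year, magnitude, duration, ndvi_at_end_of_series
X = [
    [1999, 0.45, 2, 0.31],  # hypothetical anthropogenic loss event
    [2005, 0.20, 6, 0.55],  # hypothetical natural loss event
]
y = ["anthropogenic", "natural"]

# scale the attributes, then fit an RBF-kernel SVM
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)

# classify a new change event from its LandTrendr attributes
print(clf.predict([[2001, 0.40, 3, 0.35]]))
```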
Abstract:
The relaxivity displayed by Gd3+ chelates immobilized onto gold nanoparticles results from a complex interplay between nanoparticle size, water exchange rate, and chelate structure. In this work we study how the length of the -thioalkyl linkers, anchoring fast-water-exchanging Gd3+ chelates onto gold nanoparticles, affects the relaxivity of the immobilized chelates. Gold nanoparticles functionalized with Gd3+ chelates of mercaptoundecanoyl and lipoyl amide conjugates of the DO3A-N-(-amino)propionate chelator were prepared and studied as potential contrast agents (CA) for MRI. High relaxivities per chelate, on the order of 28-38 mM-1s-1 (30 MHz, 25 ºC), were attained thanks to the simultaneous optimization of the rotational correlation time and of the water exchange rate. Fast local rotational motions of the immobilized chelates around the connecting linkers (internal flexibility) still limit the attainable relaxivity. The degree of internal flexibility of the immobilized chelates does not seem to be correlated with the length of the connecting linkers. Biodistribution and MRI studies in mice suggest that the in vivo behavior of the gold nanoparticles is determined mainly by size: small nanoparticles (HD = 3.9 nm) undergo fast renal clearance and avoid the RES organs, while larger nanoparticles (HD = 4.8 nm) undergo predominantly hepatobiliary excretion. The high relaxivities, together with chelate and nanoparticle stability and fast renal clearance in vivo, suggest that functionalized gold nanoparticles hold great potential for further investigation as MRI contrast agents. This study contributes to understanding the effect of linker length on the relaxivity of gold nanoparticles functionalized with Gd3+ complexes, and is a relevant contribution towards "design rules" for nanostructures functionalized with Gd3+ chelates as contrast agents for MRI and multimodal imaging.
Abstract:
A highly robust hydrogel device made from a single biopolymer formulation is reported. Owing to the presence of covalent and non-covalent crosslinks, these engineered systems were able to (i) sustain a compressive strength of ca. 20 MPa, (ii) quickly recover upon unloading, and (iii) encapsulate cells with high viability rates.
Abstract:
Load-bearing soft tissues such as cartilage, blood vessels and muscles are able to withstand a remarkable compressive stress of several MPa without fracturing. Interestingly, most of these structural tissues are mainly composed of water, and in this regard hydrogels, as highly hydrated 3D-crosslinked polymeric networks, constitute a promising class of materials to repair lesions in these tissues. Although several approaches can be employed to tailor the mechanical properties of artificial hydrogels to mimic those of biological tissues, critical issues regarding, for instance, their biocompatibility and recoverability after loading are often neglected. Therefore, an innovative hydrogel device made only of chitosan (CHI) was developed for the repair of robust biological tissues. These systems were fabricated through a dual-crosslinking process comprising a photo-crosslinking and an ionic-crosslinking step. The obtained CHI-based hydrogels exhibited an outstanding compressive strength of ca. 20 MPa at 95% strain, which is several orders of magnitude higher than that of the individual components and close to the values found in native soft tissues. Additionally, both crosslinking processes occur rapidly and under physiological conditions, enabling cell encapsulation, as confirmed by high cell survival rates (ca. 80%). Furthermore, in contrast with conventional hydrogels, these networks quickly recover upon unloading and are able to keep their mechanical properties under physiological conditions as a result of their non-swelling nature.
Abstract:
Today's advances in computing power are driven by parallel processing, given the characteristics of the new hardware architectures. Using this hardware appropriately accelerates the execution of algorithms (programs); however, properly converting an algorithm into its parallel form is complex, and that parallel form is, in turn, specific to each type of parallel hardware. The most common general-purpose processors today are multicore processors, also called Symmetric Multi-Processors (SMP). It is now hard to find a desktop processor without some degree of SMP-style parallelism, and the development trend is toward processors with ever more cores. Graphics Processor Units (GPU), in turn, have grown their computing power by integrating multiple processing units, to the point that current GPU boards commonly support 200 to 400 parallel processing threads. These processors are very fast and specialized for the task they were designed for, mainly video processing. However, since this kind of processing has much in common with scientific computing, these devices have been reoriented under the name General Processing Graphics Processor Unit (GPGPU). Unlike the SMPs mentioned above, GPGPUs are not general-purpose: their use is constrained by the limited memory available on each board and by the type of parallel processing required for their use to be productive. Finally, Field Programmable Gate Arrays (FPGA) are programmable logic devices capable of performing large numbers of operations in parallel, with low latency and deep pipelines, so they can be used to implement specific algorithms that must run at very high speeds; their drawback is the complexity of programming and testing the algorithm instantiated on the device. Given this diversity of parallel processors, our work focuses on analyzing the specific characteristics of each, and their impact on the structure of algorithms, so that processing performance scales with the resources used and the architectures can be combined to complement one another. Specifically, starting from the characteristics of the hardware, we determine the properties a parallel algorithm must have in order to be accelerated; the characteristics of the parallel algorithms, in turn, determine which of these new types of hardware is most suitable for their implementation. In particular, we take into account the degree of data dependence, the need for synchronization during parallel processing, the size of the data to be processed, and the complexity of parallel programming on each type of hardware.
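As a small illustration of the data-dependence point raised above, the Python sketch below (chosen for brevity, not as one of the architectures discussed) parallelizes an independent per-element computation across SMP cores; a loop-carried dependence would not split this way.

```python
from multiprocessing import Pool

def heavy(x):
    # stand-in for an expensive computation with no dependence
    # on any other element of the input
    return sum(i * i for i in range(x))

if __name__ == "__main__":
    data = [100_000] * 8
    with Pool() as pool:                 # one worker per available core
        results = pool.map(heavy, data)  # independent => parallel speedup
    # By contrast, acc[i] = f(acc[i-1]) is inherently sequential: the
    # data dependence, not the hardware, limits the achievable speedup.
    print(len(results))
```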
Abstract:
How long does it take to learn another language? How many words do you need to learn? Are languages within the reach of everybody? Which teachers should you choose, and which teachers should you avoid? These are some of the questions you ask yourself when you start learning a new language. The Word Brain provides the answers. If you have learned foreign languages in the past, consider reading it. If you or your children need to learn languages in the future, you must read it. What you will discover in two hours will forever change the way you see languages and language learning. The principles of The Word Brain are timeless. Our children's grandchildren will follow them when they discover the people of our planet.
Abstract:
Emotion, audition, event-related potentials, MMN, multidimensional scaling, timbre, perception
Abstract:
Background: Vascular remodeling, the dynamic dimensional change of a vessel in the face of stress, can assume different directions as well as magnitudes in atherosclerotic disease. Classical measurements rely on reference segments at a distance, risking inappropriate comparisons between dissimilar vessel portions. Objective: To explore a new method for quantifying vessel remodeling, based on the comparison between a given target segment and its inferred normal dimensions. Methods: Geometric parameters and plaque composition were determined in 67 patients using three-vessel intravascular ultrasound with virtual histology (IVUS-VH). Coronary vessel remodeling at the cross-section (n = 27,639) and lesion (n = 618) levels was assessed using classical metrics and a novel analytic algorithm based on the fractional vessel remodeling index (FVRI), which quantifies the total change in arterial wall dimensions relative to the estimated normal dimension of the vessel. A prediction model was built to estimate the normal dimension of the vessel for the calculation of FVRI. Results: According to the new algorithm, the "Ectatic" remodeling pattern was the least common, "Complete compensatory" remodeling was present in approximately half of the instances, and "Negative" and "Incomplete compensatory" remodeling types were detected in the remainder. Compared to a traditional diagnostic scheme, the FVRI-based classification seemed to better discriminate plaque composition by IVUS-VH. Conclusion: Quantitative assessment of coronary remodeling using target segment dimensions offers a promising approach to evaluating the vessel response to plaque growth/regression.
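A minimal sketch of the FVRI concept follows, under the assumption that it is computed as the change in vessel cross-sectional area relative to the predicted normal area of the same segment; the published formula and the regression model used to predict the normal dimension may differ.

```python
def fractional_vessel_remodeling_index(vessel_area, predicted_normal_area):
    """FVRI sketch: change in vessel cross-sectional area expressed as a
    fraction of the segment's estimated normal dimension (an assumed
    formulation, not taken verbatim from the paper)."""
    return (vessel_area - predicted_normal_area) / predicted_normal_area

# e.g. a measured area of 16.2 mm^2 against a predicted normal area of
# 13.5 mm^2 gives FVRI = 0.2, i.e. a 20% outward change in wall dimensions
print(fractional_vessel_remodeling_index(16.2, 13.5))
```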
Abstract:
In this paper we investigate various algorithms for performing the Fast Fourier Transform (FFT) and Inverse Fast Fourier Transform (IFFT), and techniques for maximizing FFT/IFFT execution speed, such as pipelining, parallel processing, and the use of memory structures holding pre-computed values (look-up tables, LUT) or other dedicated hardware components (usually multipliers). Furthermore, we discuss the hardware architectures that best suit the various FFT/IFFT algorithms, along with their ability to exploit parallel processing with minimal data dependencies in the FFT/IFFT calculations. Another approach considered in this paper is the application of the integrated processing-in-memory Intelligent RAM (IRAM) chip to high-speed FFT/IFFT computing. The results of the assessment study emphasize that the execution speed of the FFT/IFFT algorithms is tightly connected to the capability of the FFT/IFFT hardware to support the parallelism offered by the given algorithm. We therefore suggest that the basic Discrete Fourier Transform (DFT)/Inverse Discrete Fourier Transform (IDFT) can also deliver high performance when run on specialized FFT/IFFT hardware that exploits the parallelism inherent in the DFT/IDFT operations. The proposed improvements include simplified multiplications over symbols given in a polar coordinate system, the use of sine and cosine look-up tables, and an approach for performing parallel addition of N input symbols.
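As an illustration of the look-up-table idea mentioned above, the sketch below implements a direct DFT whose twiddle factors come from precomputed sine/cosine tables; it shows the LUT principle only and is not the paper's hardware design.

```python
import math

def dft_lut(x):
    """Direct DFT using precomputed sine/cosine look-up tables.

    The N twiddle factors W_N^k = cos(2*pi*k/N) - j*sin(2*pi*k/N) are
    computed once and reused, so the inner loop needs only table
    look-ups and multiply-accumulate operations.
    """
    N = len(x)
    cos_lut = [math.cos(2 * math.pi * k / N) for k in range(N)]
    sin_lut = [math.sin(2 * math.pi * k / N) for k in range(N)]
    X = []
    for k in range(N):
        acc = 0j
        for n in range(N):
            idx = (k * n) % N  # twiddle exponent wraps modulo N
            acc += x[n] * complex(cos_lut[idx], -sin_lut[idx])
        X.append(acc)
    return X
```

In hardware, the N accumulations in the inner loop are independent per output bin, which is exactly the parallelism the abstract proposes to exploit with parallel addition of the N input symbols.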