994 results for Harmonic balance algorithm
Abstract:
The Electromagnetism-like (EM) algorithm is a population-based stochastic global optimization algorithm that uses an attraction-repulsion mechanism to move sample points towards the optimum. In this paper, an implementation of the EM algorithm in the Matlab environment is proposed as a useful function for practitioners and for those who want to experiment with a new global optimization solver. A set of benchmark problems is solved in order to evaluate the performance of the implemented method against other stochastic methods available in the Matlab environment. The results confirm that our implementation is a competitive alternative both in terms of numerical results and performance. Finally, a case study based on a parameter estimation problem of a biological system shows that the EM implementation can be applied with promising results in the control optimization area.
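As a rough illustration of the attraction-repulsion mechanism this abstract describes, below is a minimal Python sketch of one EM move, following the standard charge and force definitions of the EM heuristic of Birbil and Fang; it is not the paper's Matlab function, and the function names, step-size choice, and bounds handling are assumptions.

```python
import numpy as np

def em_step(X, f, bounds, rng):
    """One attraction-repulsion move of an Electromagnetism-like (EM)
    heuristic. X: (m, n) population; bounds: (n, 2) box constraints."""
    m, n = X.shape
    fx = np.array([f(x) for x in X])
    best = fx.argmin()
    # Charge: better (lower-f) points carry more charge.
    denom = np.sum(fx - fx[best]) or 1.0
    q = np.exp(-n * (fx - fx[best]) / denom)
    X_new = X.copy()
    for i in range(m):
        if i == best:                      # keep the incumbent fixed
            continue
        force = np.zeros(n)
        for j in range(m):
            if j == i:
                continue
            d = X[j] - X[i]
            dist2 = d @ d + 1e-12
            if fx[j] < fx[i]:              # attraction towards better point
                force += d * q[i] * q[j] / dist2
            else:                          # repulsion from worse point
                force -= d * q[i] * q[j] / dist2
        norm = np.linalg.norm(force)
        if norm > 0:
            X_new[i] = np.clip(X[i] + rng.random() * force / norm,
                               bounds[:, 0], bounds[:, 1])
    return X_new

# Example: one move on the sphere function in [-5, 5]^2.
rng = np.random.default_rng(0)
bounds = np.array([[-5.0, 5.0], [-5.0, 5.0]])
X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(10, 2))
X = em_step(X, lambda x: x @ x, bounds, rng)
```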
Abstract:
In this paper, we propose an extension of the firefly algorithm (FA) to multi-objective optimization. FA is a swarm intelligence optimization algorithm, inspired by the flashing behavior of fireflies at night, that is capable of computing global solutions to continuous optimization problems. Our proposal relies on a fitness assignment scheme that gives lower fitness values to the positions of fireflies corresponding to non-dominated points with a smaller aggregation of objective function distances to the minimum values. Furthermore, FA randomness is based on the spread metric, in order to reduce the gaps between consecutive non-dominated solutions. Results from preliminary computational experiments show that our proposal yields a dense and well-distributed approximation of the Pareto front with a large number of points.
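The fitness assignment is only summarized in the abstract; the following Python sketch shows one plausible reading of it, under the assumption that "aggregation of objective function distances to the minimum values" means the sum of componentwise distances to the per-objective minima. All names and the dominance penalty are hypothetical.

```python
import numpy as np

def assign_fitness(F):
    """F: (m, k) matrix of objective values (minimization).
    Non-dominated points receive a fitness equal to the aggregated
    distance of their objectives to the per-objective minima, so the
    'closest' non-dominated points rank first; dominated points are
    pushed behind all non-dominated ones."""
    m = len(F)
    ideal = F.min(axis=0)                      # per-objective minimum values
    agg = np.abs(F - ideal).sum(axis=1)        # aggregated distances
    dominated = np.array([
        any(np.all(F[j] <= F[i]) and np.any(F[j] < F[i]) for j in range(m))
        for i in range(m)
    ])
    fitness = agg.copy()
    fitness[dominated] += agg.max() + 1.0      # penalize dominated points
    return fitness

# Example: the middle point is dominated and gets the worst fitness.
F = np.array([[1.0, 4.0], [3.0, 3.5], [2.0, 2.0]])
print(assign_fitness(F))
```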
Abstract:
Natural selection favors the survival and reproduction of organisms that are best adapted to their environment. The selection mechanism in evolutionary algorithms mimics this process, aiming to create environmental conditions in which artificial organisms can evolve to solve the problem at hand. This paper proposes a new selection scheme for evolutionary multiobjective optimization. The similarity measure that defines the concept of neighborhood is a key feature of the proposed selection. Contrary to commonly used approaches, which are usually defined on the basis of distances between either individuals or weight vectors, we suggest basing similarity and neighborhood on the angle between individuals in the objective space: the smaller the angle, the more similar the individuals. This notion is exploited during the mating and environmental selections. Convergence is ensured by minimizing the distances from individuals to a reference point, whereas diversity is preserved by maximizing the angles between neighboring individuals. Experimental results reveal a highly competitive performance and useful characteristics of the proposed selection: its strong diversity-preserving ability produces significantly better performance on some problems when compared with state-of-the-art algorithms.
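To make the angle-based neighborhood concrete, here is a hedged Python sketch: an angle measure between objective vectors (smaller angle, more similar individuals) and an illustrative environmental truncation that drops, from the most similar pair, the individual farther from the reference point. The paper's actual procedure may differ; the choice of `ref`, the full pairwise recomputation, and the tie handling are assumptions.

```python
import numpy as np

def angle(a, b, ref):
    """Angle between two objective vectors measured from a reference
    point; the smaller the angle, the more similar the individuals."""
    u, v = a - ref, b - ref
    c = u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)
    return np.arccos(np.clip(c, -1.0, 1.0))

def environmental_selection(F, ref, mu):
    """Illustrative truncation: while the archive is too large, find the
    pair with the smallest angle (the most similar neighbors) and drop
    the member farther from the reference point (worse convergence),
    which keeps the survivors angularly spread (diversity)."""
    idx = list(range(len(F)))
    while len(idx) > mu:
        ang = {(i, j): angle(F[i], F[j], ref)
               for p, i in enumerate(idx) for j in idx[p + 1:]}
        i, j = min(ang, key=ang.get)
        drop = i if np.linalg.norm(F[i] - ref) > np.linalg.norm(F[j] - ref) else j
        idx.remove(drop)
    return idx

# Example: keep 3 of 5 points, reference at the ideal point.
F = np.array([[1.0, 9.0], [2.0, 8.5], [4.0, 5.0], [8.0, 2.0], [9.0, 1.5]])
print(environmental_selection(F, F.min(axis=0), mu=3))
```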
Abstract:
Integrated master's dissertation in Biomedical Engineering
Abstract:
The Amazon várzeas are an important component of the Amazon biome, but anthropic and climatic impacts have been leading to forest loss and to the interruption of essential ecosystem functions and services. The objectives of this study were to evaluate the capability of the Landsat-based Detection of Trends in Disturbance and Recovery (LandTrendr) algorithm to characterize changes in várzea forest cover in the Lower Amazon, and to analyze the potential of spectral and temporal attributes to classify forest loss as either natural or anthropogenic. We used a time series of 37 Landsat TM and ETM+ images acquired between 1984 and 2009. We used the LandTrendr algorithm to detect forest cover change, together with the attributes "start year", "magnitude", and "duration" of the changes, as well as "NDVI at the end of series". Detection was restricted to areas identified as having forest cover at the start and/or end of the time series. We used the Support Vector Machine (SVM) algorithm to classify the extracted attributes, differentiating between anthropogenic and natural forest loss. Detection reliability was consistently high for change events along the Amazon River channel, but variable for changes within the floodplain. Spectral-temporal trajectories faithfully represented the nature of changes in floodplain forest cover, corroborating field observations. We estimated anthropogenic forest losses to be larger (1,071 ha) than natural losses (884 ha), with a global classification accuracy of 94%. We conclude that the LandTrendr algorithm is a reliable tool for studies of forest dynamics throughout the floodplain.
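Since the abstract names the exact attribute set fed to the SVM, a minimal scikit-learn sketch of that classification step might look as follows; the feature values and labels below are placeholders rather than data from the study, and the RBF kernel and feature scaling are assumptions.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# One row per detected change event, with the four attributes named in
# the abstract: start year, magnitude, duration, NDVI at end of series.
# Values and labels are placeholders (1 = anthropogenic, 0 = natural).
X = np.array([[1995.0, 0.42, 2.0, 0.31],
              [2003.0, 0.18, 6.0, 0.65],
              [1989.0, 0.55, 1.0, 0.22],
              [1998.0, 0.12, 8.0, 0.70]])
y = np.array([1, 0, 1, 0])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.predict([[1999.0, 0.35, 3.0, 0.40]]))
```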
Abstract:
Based on a broad agreement between the local governments of all political parties and the provincial government, on December 22, 2004 the Legislature of Córdoba passed the provincial regionalization law, Ley orgánica de regionalización provincial Nº 9206. With development as its goal, Law Nº 9206 introduced two far-reaching innovations: first, the creation of 26 new regions, one for each existing department; second, the recognition of Regional Communities (Comunidades Regionales), to be joined by the autonomous decision of municipalities and communes and governed by the municipal mayors and communal presidents of each region. In 2005, 212 municipalities and 152 communes (85% of the provincial total) joined 23 Regional Communities, which also agreed on their regional management priorities and on a Regional Development Indicator, with the technical assistance of the Programa de Fortalecimiento Institucional de Municipios (PROFIM) of the UCC. On that basis, in 2006 the universities of Córdoba, under agreements with the provincial government, advised 17 Regional Communities on the design of development policies. This research examines regionalization in Córdoba, critically analyzing its advances and delays from its launch (December 2004) to the change of authorities (December 2007), with the aim of recommending corrections.
Abstract:
Today's advances in high-performance computing are driven by the parallel processing capabilities of available hardware architectures. These architectures enable the acceleration of algorithms when the algorithms are properly parallelized and exploit the specific processing power of the underlying architecture. Most current general-purpose processors integrate several cores on a single chip, resulting in what is known as a Symmetric Multiprocessing (SMP) unit; nowadays even desktop computers use multicore processors, and the industry trend is to increase the number of integrated cores as technology matures. Graphics Processor Units (GPU), originally designed to handle only video processing, have emerged as interesting alternatives for algorithm acceleration: currently available GPUs can run from 200 to 400 threads in parallel. Scientific computing can exploit this hardware thanks to the programmability of the new GPUs, known as General Processing Graphics Processor Units (GPGPU). However, GPGPUs offer little memory compared with general-purpose processors, so the implementation of algorithms must be addressed carefully. Finally, Field Programmable Gate Arrays (FPGA) are programmable devices that can implement hardware logic with low latency, high parallelism, and deep pipelines. These devices can be used to implement specific algorithms that need to run at very high speeds, but they are harder to program than software approaches and debugging is typically time-consuming. In this context, where several alternatives for speeding up algorithms are available, our work aims at determining the main features of these architectures and developing the know-how required to accelerate algorithm execution on them. Starting from the characteristics of the hardware, we seek to determine the properties a parallel algorithm must have in order to be accelerated, and which of these types of hardware best fits a given algorithm. In particular, we consider the degree of data dependence, the need for synchronization during parallel processing, the size of the data to be processed, and the complexity of parallel programming on each type of hardware, so that the architectures can be combined in a way that makes their complementation beneficial.
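As a small illustration of the point about data dependence, the sketch below runs an embarrassingly parallel kernel on SMP cores with Python's multiprocessing; a kernel with cross-element dependencies or frequent synchronization would not scale this way, which is precisely the property the work sets out to characterize. The workload is a placeholder.

```python
from multiprocessing import Pool

def kernel(n):
    # Stand-in for a compute-bound task with no cross-element data
    # dependence -- the easiest case to map onto SMP cores.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    work = [200_000] * 64
    with Pool() as pool:              # one worker per available core
        results = pool.map(kernel, work)
    print(len(results), "tasks done")
```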
Abstract:
The twin objectives of the work described were to construct nutrient balance models (NBMs) for a range of Irish animal production systems and to evaluate their potential as a means of estimating the nutrient composition of farm wastes. The NBM has three components. The first is the intake of nutrients in the animal's diet. The second is retention, the nutrients the animal retains for the production of milk, meat or eggs. The third is the balance, the difference between nutrient intake and retention. Data on intake levels and their nutrient value for dairy cow, beef cattle, pig and poultry systems were assembled; literature searches and interviews with national experts were the primary sources of information. NBMs were then constructed for each production system. Summary tables of the nutrient values of the common diet constituents used in Irish animal production systems, the nutrient composition of the animal products, and the NBMs (nutrient intake, retention and excretion) for a range of production systems were assembled. These represent the first comprehensive data set of this type for Irish animal production systems. There was generally good agreement between the derived NBM values and those published in the literature. The NBMs were validated on a number of farms. Data on animal numbers, fertiliser use, concentrate inputs and production output were recorded on seven farms. Using these data, a nutrient input/output balance was constructed for each farm and compared with the NBM estimate of the farm nutrient balance. The results showed good agreement between the measured balance and the NBM estimate, particularly for the pig and poultry farms. However, the validation emphasised the inherent risks associated with NBMs: the average values used for feed intake and production parameters in the NBMs may under- or over-estimate actual nutrient balances on individual farms where these variables are substantially different. On the grassland farms there was a poor correlation between the input/output estimate and the NBM, possibly resulting from the omission of the soil's contribution to the nutrient balance. Nevertheless, the results indicate that the NBMs developed are a potentially useful tool for estimating nutrient balances. They will also serve to highlight the significant fraction of the nutrient inputs into farming systems that is retained on the farm. The potential of the NBM as a means of estimating the nutrient composition of farm wastes was evaluated on two farms. Feed intake and composition, animal production and slurry production were monitored during the indoor winter feeding period, and slurry samples were taken for analysis. The appropriate NBMs were used to estimate the nutrient balance for each farm, and the nutrient content of the slurry produced was calculated. There was good agreement between the NBM estimates and the measured values. This preliminary evaluation suggests that the NBM can provide the farmer with a simple means of estimating the nutrient value of slurry.
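The three-component structure of the NBM (intake, retention, balance) reduces to simple arithmetic; a minimal sketch follows, with illustrative placeholder figures rather than values from the study.

```python
from dataclasses import dataclass

@dataclass
class NutrientBalance:
    """The three NBM components: balance (excretion) = intake - retention."""
    intake: float     # nutrient ingested in the diet (kg/animal/year)
    retention: float  # nutrient retained in milk, meat or eggs

    @property
    def balance(self) -> float:
        return self.intake - self.retention

# Hypothetical phosphorus budget for a dairy cow (placeholder figures):
cow_p = NutrientBalance(intake=28.0, retention=9.5)
print(f"P excreted to manure/slurry: {cow_p.balance:.1f} kg/year")
```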
Abstract:
Magdeburg, Univ., Faculty of Mathematics, Diss., 2011
Abstract:
Magdeburg, Univ., Faculty of Mathematics, Diss., 2006
Abstract:
Background: Vascular remodeling, the dynamic dimensional change in the face of stress, can assume different directions as well as magnitudes in atherosclerotic disease. Classical measurements rely on reference segments at a distance, risking inappropriate comparison between unlike vessel portions. Objective: To explore a new method for quantifying vessel remodeling, based on the comparison between a given target segment and its inferred normal dimensions. Methods: Geometric parameters and plaque composition were determined in 67 patients using three-vessel intravascular ultrasound with virtual histology (IVUS-VH). Coronary vessel remodeling at the cross-section (n = 27,639) and lesion (n = 618) levels was assessed using classical metrics and a novel analytic algorithm based on the fractional vessel remodeling index (FVRI), which quantifies the total change in arterial wall dimensions relative to the estimated normal dimension of the vessel. A prediction model was built to estimate the normal dimension of the vessel for the calculation of FVRI. Results: According to the new algorithm, the "Ectatic" remodeling pattern was least common, "Complete compensatory" remodeling was present in approximately half of the instances, and the "Negative" and "Incomplete compensatory" remodeling types were detected in the remainder. Compared to a traditional diagnostic scheme, the FVRI-based classification seemed to better discriminate plaque composition by IVUS-VH. Conclusion: Quantitative assessment of coronary remodeling using target segment dimensions offers a promising approach to evaluating the vessel response to plaque growth/regression.
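The abstract does not give the FVRI formula; one plausible reading, in which the index is the change in the wall dimension expressed as a fraction of the estimated normal dimension, is sketched below. Both the formula and the numbers are assumptions for illustration only.

```python
def fvri(wall_area: float, normal_wall_area: float) -> float:
    """Hypothetical fractional vessel remodeling index: the change in
    arterial wall dimension as a fraction of the estimated normal
    dimension of the vessel (formula assumed, not from the paper)."""
    return (wall_area - normal_wall_area) / normal_wall_area

# Illustrative cross-sections (mm^2, made-up values):
print(fvri(18.4, 15.0))   # > 0: expansive (compensatory) remodeling
print(fvri(13.1, 15.0))   # < 0: negative (constrictive) remodeling
```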
Abstract:
Magdeburg, Univ., Faculty of Mathematics, Diss., 2009
Abstract:
Magdeburg, Univ., Faculty of Mathematics, Diss., 2011
Abstract:
Magdeburg, Univ., Faculty of Medicine, Diss., 2011