971 results for Meta heuristic algorithm


Relevance:

20.00%

Publisher:

Abstract:

OBJECTIVES: To evaluate the use of inhaled nitric oxide (NO) in the management of persistent pulmonary hypertension of the newborn. METHODS: Computerized bibliographic search on MEDLINE, CURRENT CONTENTS and LILACS covering the period from January 1990 to March 1998, plus a review of the references of all papers found on the subject. Only randomized clinical trials comparing inhaled nitric oxide with conventional treatment were included. OUTCOMES STUDIED: death, requirement for extracorporeal membrane oxygenation (ECMO), systemic oxygenation, central nervous system complications, and development of chronic pulmonary disease. The methodological quality of the studies was evaluated with a quality score system on a 13-point scale. RESULTS: For infants without congenital diaphragmatic hernia, inhaled NO did not change mortality (typical odds ratio: 1.04; 95% CI: 0.6 to 1.8); the need for ECMO was reduced (relative risk: 0.73; 95% CI: 0.60 to 0.90), and oxygenation was improved (PaO2 by a mean of 53.3 mm Hg; 95% CI: 44.8 to 61.4; oxygenation index by a mean of -12.2; 95% CI: -14.1 to -9.9). For infants with congenital diaphragmatic hernia, mortality, requirement for ECMO, and oxygenation were not changed. For all infants, central nervous system complications and the incidence of chronic pulmonary disease did not change. CONCLUSIONS: Inhaled NO improves oxygenation and reduces the requirement for ECMO only in newborns with persistent pulmonary hypertension who do not have diaphragmatic hernia. The risks of central nervous system complications and of chronic pulmonary disease were not affected by inhaled NO.
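As an illustration of the kind of pooling behind a "typical odds ratio", the sketch below computes a fixed-effect (inverse-variance) pooled odds ratio with a 95% confidence interval from per-trial 2x2 counts. The counts and the choice of inverse-variance (Woolf) weighting are illustrative assumptions, not data or methods taken from the reviewed trials.

```python
import math

# Hypothetical per-trial counts: (deaths_NO, survivors_NO, deaths_control, survivors_control)
trials = [(4, 26, 5, 25), (7, 55, 9, 51), (3, 29, 3, 30)]  # placeholder values

weights, log_ors = [], []
for a, b, c, d in trials:
    log_or = math.log((a * d) / (b * c))   # per-trial log odds ratio
    var = 1 / a + 1 / b + 1 / c + 1 / d    # Woolf variance of the log odds ratio
    log_ors.append(log_or)
    weights.append(1 / var)

pooled = sum(w * lo for w, lo in zip(weights, log_ors)) / sum(weights)
se = math.sqrt(1 / sum(weights))
print(f"pooled OR = {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(pooled - 1.96 * se):.2f} to {math.exp(pooled + 1.96 * se):.2f})")
```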

Relevance:

20.00%

Publisher:

Abstract:

Contains abstract

Relevance:

20.00%

Publisher:

Abstract:

Taking performance studies as a starting point for the analysis of terrorism, this dissertation reflects on tactics of embodiment, reperformance, and meta-theatre, three concepts that help explain how art assimilates terrorism and understands itself in relation to it. On the one hand, it presents official documents that reveal a conflict over the definition of terrorism, reflecting on "state" and "counter-state" terrorism. On the other hand, drawing on the analysis of the Surveillance Camera Players and of the performance Three Posters, and of artists such as Hasan Elahy and Alyson Wyper, the dissertation argues that art reperforms the "tactics of representation" and the media staging of terrorism, namely panoptic theatre, torture as performance, and martyrs' video testimonies as portraits and video performances.

Relevance:

20.00%

Publisher:

Abstract:

Ship tracking systems allow maritime organizations concerned with safety at sea to obtain information on the current location and route of merchant vessels. Thanks to space technology, the geographical coverage of ship tracking platforms has increased significantly in recent years, from radar-based near-shore traffic monitoring towards a worldwide picture of the maritime traffic situation. The long-range tracking systems currently in operation allow ship position data to be stored over many years: a valuable source of knowledge about the shipping routes between different ocean regions. The outcome of this Master project is a software prototype for estimating the most operated shipping route between any two geographical locations. The analysis is based on historical ship positions acquired with long-range tracking systems. The proposed approach makes use of a Genetic Algorithm applied to a training set of relevant ship positions extracted from the long-term tracking database of the European Maritime Safety Agency (EMSA). The analysis of some representative shipping routes is presented, and the quality of the results and their operational applications are assessed by a Maritime Safety expert.
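The abstract does not detail the genetic encoding, so the sketch below only illustrates the general idea under assumptions of our own: a candidate route is a short sequence of waypoints between the two endpoints, and fitness rewards routes whose waypoints lie close to many historical position reports while penalizing route length. All coordinates and parameters are placeholders.

```python
import math
import random

# Hypothetical inputs: historical ship positions and the two endpoints (lat, lon).
positions = [(36.1, -5.4), (35.9, -5.0), (36.0, -4.1), (36.3, -2.9)]  # placeholders
start, end = (36.0, -6.0), (36.5, -2.0)
N_WAYPOINTS, POP, GENS = 4, 30, 200

def fitness(route):
    """Reward historical positions near the route's waypoints, penalize length."""
    pts = [start] + route + [end]
    near = sum(1 for p in positions if min(math.dist(p, q) for q in pts) < 0.3)
    length = sum(math.dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1))
    return near - 0.1 * length

def random_route():
    return [(random.uniform(35.5, 37.0), random.uniform(-6.0, -2.0))
            for _ in range(N_WAYPOINTS)]

pop = [random_route() for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]                    # truncation selection
    children = []
    while len(children) < POP - len(parents):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, N_WAYPOINTS)
        child = a[:cut] + b[cut:]               # one-point crossover
        if random.random() < 0.2:               # mutation: jitter one waypoint
            i = random.randrange(N_WAYPOINTS)
            child[i] = (child[i][0] + random.gauss(0, 0.1),
                        child[i][1] + random.gauss(0, 0.1))
        children.append(child)
    pop = parents + children

best_route = max(pop, key=fitness)
```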

Relevance:

20.00%

Publisher:

Abstract:

The present paper reports the precipitation process of Al3Sc structures in an aluminum-scandium alloy, simulated with a synchronous parallel kinetic Monte Carlo (spkMC) algorithm. The spkMC implementation is based on the vacancy diffusion mechanism. To filter the raw data generated by the spkMC simulations, the density-based spatial clustering of applications with noise (DBSCAN) method was employed. The spkMC and DBSCAN algorithms were implemented in the C language using the MPI library. The simulations were conducted on the SeARCH cluster located at the University of Minho. The Al3Sc precipitation was successfully simulated at the atomistic scale with spkMC, and DBSCAN proved to be a valuable aid in identifying the precipitates through a cluster analysis of the simulation results. The results are in good agreement with those reported in the literature for sequential kinetic Monte Carlo (kMC) simulations. The parallel implementation of kMC provided a 4x speedup over the sequential version.
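The paper implements both spkMC and DBSCAN in C with MPI; as a quick illustration of the same cluster analysis, the sketch below applies scikit-learn's DBSCAN to placeholder 3D atom coordinates, with each dense group of Sc atoms reported as one precipitate. The eps and min_samples values are assumptions, not the paper's parameters.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Placeholder 3D coordinates standing in for Sc atoms from a simulation snapshot.
rng = np.random.default_rng(0)
precip_a = rng.normal(loc=(5.0, 5.0, 5.0), scale=0.4, size=(60, 3))
precip_b = rng.normal(loc=(20.0, 12.0, 8.0), scale=0.4, size=(80, 3))
dissolved = rng.uniform(0.0, 30.0, size=(40, 3))        # isolated atoms = noise
atoms = np.vstack([precip_a, precip_b, dissolved])

# eps ~ a few lattice spacings; min_samples sets the smallest precipitate accepted.
labels = DBSCAN(eps=1.5, min_samples=10).fit_predict(atoms)

n_precipitates = len(set(labels)) - (1 if -1 in labels else 0)
sizes = [int(np.sum(labels == k)) for k in range(n_precipitates)]
print(f"precipitates found: {n_precipitates}, sizes (atoms): {sizes}")
```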

Relevance:

20.00%

Publisher:

Abstract:

The Electromagnetism-like (EM) algorithm is a population-based stochastic global optimization algorithm that uses an attraction-repulsion mechanism to move sample points towards the optimum. In this paper, an implementation of the EM algorithm in the Matlab environment is proposed as a useful function for practitioners and for those who want to experiment with a new global optimization solver. A set of benchmark problems is solved in order to evaluate the performance of the implemented method when compared with other stochastic methods available in the Matlab environment. The results confirm that our implementation is a competitive alternative both in terms of numerical results and performance. Finally, a case study based on a parameter estimation problem of a biological system shows that the EM implementation can be applied with promising results in the control optimization area.
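As a rough illustration of the attraction-repulsion mechanism (in Python rather than Matlab), the sketch below follows the standard EM formulation rather than the paper's specific code: charges derived from objective values, pairwise attraction towards better points and repulsion from worse ones, and a random-length move along the normalized resultant force.

```python
import numpy as np

def em_optimize(f, lb, ub, pop=20, iters=200, seed=0):
    """Minimal Electromagnetism-like (EM) sketch for minimizing f over a box."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = lb.size
    x = rng.uniform(lb, ub, size=(pop, dim))
    for _ in range(iters):
        fx = np.apply_along_axis(f, 1, x)
        best = fx.argmin()
        spread = (fx - fx[best]).sum() or 1.0           # avoid division by zero
        q = np.exp(-dim * (fx - fx[best]) / spread)     # better point -> larger charge
        force = np.zeros_like(x)
        for i in range(pop):
            for j in range(pop):
                if i == j:
                    continue
                d = x[j] - x[i]
                dist2 = d @ d + 1e-12
                if fx[j] < fx[i]:                        # attraction towards better point
                    force[i] += d * q[i] * q[j] / dist2
                else:                                    # repulsion from worse point
                    force[i] -= d * q[i] * q[j] / dist2
        for i in range(pop):
            if i == best:                                # keep the current best in place
                continue
            step = force[i] / (np.linalg.norm(force[i]) + 1e-12)
            x[i] = np.clip(x[i] + rng.uniform() * step * (ub - lb), lb, ub)
    fx = np.apply_along_axis(f, 1, x)
    i = fx.argmin()
    return x[i], fx[i]

# Usage: minimize a simple sphere function on [-5, 5]^2.
x_best, f_best = em_optimize(lambda v: float((v ** 2).sum()), [-5, -5], [5, 5])
```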

Relevance:

20.00%

Publisher:

Abstract:

In this paper, we propose an extension of the firefly algorithm (FA) to multi-objective optimization. FA is a swarm intelligence optimization algorithm, inspired by the flashing behavior of fireflies at night, that is capable of computing global solutions to continuous optimization problems. Our proposal relies on a fitness assignment scheme that gives lower fitness values to the positions of fireflies corresponding to non-dominated points with a smaller aggregation of objective function distances to the minimum values. Furthermore, FA randomness is based on the spread metric to reduce the gaps between consecutive non-dominated solutions. The results obtained from preliminary computational experiments show that our proposal yields a dense and well-distributed approximate Pareto front with a large number of points.
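One minimal reading of that fitness assignment is sketched below under assumptions of our own: non-dominated points are ranked by the aggregated distance of their objectives to the per-objective minima, and dominated points are pushed behind every front member with a penalty. It is an interpretation for illustration, not the paper's exact formulation.

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return bool(np.all(a <= b) and np.any(a < b))

def assign_fitness(objs):
    """Lower fitness is better: non-dominated points are ranked by the aggregated
    distance to the ideal point; dominated points get an additional penalty."""
    objs = np.asarray(objs, dtype=float)
    n = len(objs)
    ideal = objs.min(axis=0)                                  # per-objective minima
    nondom = np.array([not any(dominates(objs[j], objs[i]) for j in range(n) if j != i)
                       for i in range(n)])
    agg = np.abs(objs - ideal).sum(axis=1)                    # aggregated distance
    penalty = agg.max() + 1.0
    return np.where(nondom, agg, agg + penalty)

# Usage on a few 2-objective points; the smallest values mark the preferred fireflies.
print(assign_fitness([[1.0, 4.0], [2.0, 2.0], [4.0, 1.0], [3.0, 3.5]]))
```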

Relevance:

20.00%

Publisher:

Abstract:

This paper presents a single-phase Series Active Power Filter (Series APF) for mitigating the harmonic content of the load voltage while keeping the DC-side voltage regulated without the support of an auxiliary voltage source. The proposed series active power filter control algorithm eliminates the need for an additional voltage source to regulate the DC voltage, and with the adopted topology no coupling transformer is used to interface the series active power filter with the electrical power grid. The paper describes the control strategy, which encompasses the grid synchronization scheme, the compensation voltage calculation, the damping algorithm and the dead-time compensation. The topology and control strategy of the series active power filter have been evaluated in simulation software, and simulation results are presented. Experimental results, obtained with a laboratory prototype, validate the theoretical assumptions and are within the harmonic spectrum limits recommended by the IEEE 519 standard.
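The abstract does not give the compensation voltage calculation itself, so the sketch below only illustrates the general principle under assumptions of our own: estimate the fundamental component of the distorted grid voltage (here with a simple sine/cosine correlation standing in for the grid-synchronization scheme) and take the remaining harmonic content, with opposite sign, as the voltage the series APF must inject so that the load sees only the fundamental.

```python
import numpy as np

# Hypothetical sampled grid voltage: 50 Hz fundamental plus 5th and 7th harmonics.
fs, f1 = 10_000, 50.0                          # sampling and fundamental frequency [Hz]
t = np.arange(0, 0.2, 1 / fs)                  # ten fundamental cycles
v_grid = (325 * np.sin(2 * np.pi * f1 * t)
          + 25 * np.sin(2 * np.pi * 5 * f1 * t)
          + 15 * np.sin(2 * np.pi * 7 * f1 * t))

# Estimate the fundamental by correlating with unit sine/cosine templates over an
# integer number of cycles (harmonics are orthogonal to them and drop out).
n = int(fs / f1) * 10
s, c = np.sin(2 * np.pi * f1 * t[:n]), np.cos(2 * np.pi * f1 * t[:n])
a = 2 / n * np.dot(v_grid[:n], s)
b = 2 / n * np.dot(v_grid[:n], c)
v_fund = a * np.sin(2 * np.pi * f1 * t) + b * np.cos(2 * np.pi * f1 * t)

# Compensation voltage reference: injected in series so the load voltage = fundamental.
v_comp = v_fund - v_grid
```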

Relevance:

20.00%

Publisher:

Abstract:

Natural selection favors the survival and reproduction of organisms that are best adapted to their environment. The selection mechanism in evolutionary algorithms mimics this process, aiming to create environmental conditions in which artificial organisms can evolve to solve the problem at hand. This paper proposes a new selection scheme for evolutionary multiobjective optimization. The similarity measure that defines the concept of neighborhood is a key feature of the proposed selection. Contrary to commonly used approaches, usually defined on the basis of distances between either individuals or weight vectors, we suggest defining similarity and neighborhood based on the angle between individuals in the objective space: the smaller the angle, the more similar the individuals. This notion is exploited during the mating and environmental selections. Convergence is ensured by minimizing the distances from individuals to a reference point, whereas diversity is preserved by maximizing the angles between neighboring individuals. Experimental results reveal a highly competitive performance and useful characteristics of the proposed selection. Its strong diversity-preserving ability allows it to produce significantly better performance on some problems when compared with state-of-the-art algorithms.
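The sketch below shows the two quantities such a selection works with: the distance from each individual to a reference point (convergence) and the angle between pairs of individuals measured from that point (diversity). Taking the ideal point as the reference is an assumption made here for illustration; the paper's exact reference point and selection rules are not reproduced.

```python
import numpy as np

def angle(a, b, ref):
    """Angle (radians) between two individuals in objective space, seen from ref."""
    u, v = np.asarray(a, float) - ref, np.asarray(b, float) - ref
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

objs = np.array([[1.0, 4.0], [2.0, 2.0], [4.0, 1.0]])     # placeholder objective vectors
ref = objs.min(axis=0)                                     # ideal point as reference

dist_to_ref = np.linalg.norm(objs - ref, axis=1)           # smaller -> better convergence
pairwise_angles = [(i, j, angle(objs[i], objs[j], ref))
                   for i in range(len(objs)) for j in range(i + 1, len(objs))]
# Selection would keep individuals with small dist_to_ref while maximizing the
# smallest angle between neighbors, i.e. discarding near-duplicate directions.
```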

Relevance:

20.00%

Publisher:

Abstract:

The Amazon várzeas are an important component of the Amazon biome, but anthropic and climatic impacts have been leading to forest loss and to the interruption of essential ecosystem functions and services. The objectives of this study were to evaluate the capability of the Landsat-based Detection of Trends in Disturbance and Recovery (LandTrendr) algorithm to characterize changes in várzea forest cover in the Lower Amazon, and to analyze the potential of spectral and temporal attributes to classify forest loss as either natural or anthropogenic. We used a time series of 37 Landsat TM and ETM+ images acquired between 1984 and 2009. We used the LandTrendr algorithm to detect forest cover change and to extract the attributes "start year", "magnitude", and "duration" of the changes, as well as "NDVI at the end of the series". Detection was restricted to areas identified as having forest cover at the start and/or end of the time series. We used the Support Vector Machine (SVM) algorithm to classify the extracted attributes, differentiating between anthropogenic and natural forest loss. Detection reliability was consistently high for change events along the Amazon River channel, but variable for changes within the floodplain. Spectral-temporal trajectories faithfully represented the nature of changes in floodplain forest cover, corroborating field observations. We estimated anthropogenic forest losses to be larger (1,071 ha) than natural losses (884 ha), with a global classification accuracy of 94%. We conclude that the LandTrendr algorithm is a reliable tool for studies of forest dynamics throughout the floodplain.
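A compact way to reproduce the classification step is sketched below with scikit-learn: each detected change event becomes one feature row (start year, magnitude, duration, NDVI at the end of the series) and an SVM separates anthropogenic from natural loss. The feature values, labels and RBF-kernel settings are placeholders, not the study's trained model.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder training rows: [start year, magnitude, duration (years), NDVI at end];
# label 1 = anthropogenic forest loss, 0 = natural forest loss.
X = np.array([
    [1999, 0.45, 1, 0.30],
    [2004, 0.20, 6, 0.55],
    [2007, 0.50, 2, 0.25],
    [1992, 0.15, 8, 0.60],
])
y = np.array([1, 0, 1, 0])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X, y)

# Classify a new change event extracted from the LandTrendr trajectory analysis.
print(clf.predict([[2009, 0.40, 2, 0.28]]))
```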

Relevance:

20.00%

Publisher:

Abstract:

OBJECTIVE: To assess the percentage of patients meeting the targets recommended by the III Brazilian Society of Cardiology Guidelines on Dyslipidemia in a low-income population, and to determine whether this percentage differed among high-risk patients according to age (<75 years vs. ≥75 years). METHODS: We consecutively analyzed 190 patients, divided into two groups: 51 patients at low or intermediate risk (G I) and 139 at high risk for coronary artery disease (G II). The sample consisted of low-income patients (69% had a family income of 1 to 2 minimum wages), whose lipid-lowering therapy was supplied irregularly by the State. RESULTS: G I and G II had, respectively, mean ages of 70.1±13.7 and 68.5±10.6 years, with 13.7% and 62.6% men. Among G II patients, 30% had LDL cholesterol within the recommended targets, and the frequency of patients meeting the targets was significantly lower in individuals aged 75 years or older than in those under 75 (16% vs. 30%, p=0.04). CONCLUSION: In a predominantly low-income population without continuous State support for obtaining statins, attainment of the LDL-cholesterol targets recommended by the III Brazilian Society of Cardiology Guidelines on Dyslipidemia is low, and it is even lower in very elderly patients with a high-risk profile for atherosclerosis.

Relevance:

20.00%

Publisher:

Abstract:

Systematic reviews with meta-analysis of studies on diagnostic tests or prognostic factors are research tools still under development. The aim of this text is to describe, step by step, the methodology of systematic review and meta-analysis of this type of study. The literature on the topic was reviewed, compiling the recommendations and organizing the text into: a) introduction; b) a detailed description of the eight steps to be followed; c) how to publish a systematic review with meta-analysis; and d) conclusion. The methods of systematic review are described in detail, with a critical analysis of the methods for statistical pooling of results, emphasizing the use of the summary receiver operating characteristic (SROC) curve. References are provided for the details of each statistical technique used in the meta-analysis. We conclude that systematic reviews with meta-analysis of diagnostic tests or prognostic factors are valuable for pooling data from several studies on the same topic, reducing bias and increasing the statistical power of primary research.
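As one concrete example of the statistical pooling discussed, the sketch below fits a summary ROC curve with the Moses-Littenberg linear-regression approach, which is one common (though not the only) way to build an SROC curve; the per-study 2x2 counts are placeholders.

```python
import numpy as np

# Placeholder 2x2 counts per primary study: (TP, FP, FN, TN).
studies = [(45, 5, 10, 90), (30, 8, 12, 70), (60, 15, 5, 120), (25, 4, 15, 56)]

# Sensitivity and 1 - specificity with a 0.5 continuity correction.
tpr = np.array([(tp + 0.5) / (tp + fn + 1.0) for tp, fp, fn, tn in studies])
fpr = np.array([(fp + 0.5) / (fp + tn + 1.0) for tp, fp, fn, tn in studies])
logit = lambda p: np.log(p / (1 - p))

# Moses-Littenberg regression: D = a + b*S, with D = logit(TPR) - logit(FPR)
# and S = logit(TPR) + logit(FPR).
D = logit(tpr) - logit(fpr)
S = logit(tpr) + logit(fpr)
b, a = np.polyfit(S, D, 1)

# SROC curve: expected sensitivity over a grid of false-positive rates (assumes b != 1).
fpr_grid = np.linspace(0.01, 0.99, 99)
tpr_sroc = 1 / (1 + np.exp(-(a + (1 + b) * logit(fpr_grid)) / (1 - b)))
```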

Relevance:

20.00%

Publisher:

Abstract:

Today's advances in computing power come from parallelism in processing, given the characteristics of current hardware architectures. Using this hardware properly accelerates the algorithms being executed; however, converting an algorithm into an adequate parallel form is complex, and that parallel form is specific to each type of parallel hardware. Most current general-purpose processors are multicore parallel processors, also known as Symmetric Multi-Processors (SMP). It is now hard to find a desktop processor without some degree of SMP-style parallelism, and the trend is towards processors with an ever larger number of cores. Graphics Processing Units (GPU), in turn, have increased their computing power by incorporating many processing units, to the point that current GPU boards commonly offer 200 to 400 parallel processing threads. These processors are very fast and specific to the task for which they were designed, mainly video processing; because this kind of processing has much in common with scientific computing, these devices have been repurposed under the name General Purpose Graphics Processing Unit (GPGPU). Unlike the SMP processors mentioned above, GPGPUs are not general purpose and have limitations for general use, owing to the amount of memory available on each board and to the kind of parallel processing required for their use to be productive. Field Programmable Gate Arrays (FPGA) are programmable logic devices capable of performing large numbers of operations in parallel, so they can be used to implement specific algorithms that exploit the parallelism they offer; their drawback is the complexity of programming and testing the algorithm instantiated on the device. Faced with this diversity of parallel processors, the goal of our work is to analyze the specific characteristics of each of them and their impact on the structure of algorithms, so that their use yields processing performance in line with the resources employed, and to combine them so that they complement each other beneficially. Specifically, starting from the hardware characteristics, we aim to determine the properties a parallel algorithm must have in order to be accelerated; these properties in turn determine which of these new kinds of hardware is the most suitable for its instantiation. In particular, we take into account the degree of data dependence, the need for synchronization during parallel processing, the size of the data to be processed, and the complexity of parallel programming on each type of hardware.
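As a small counterpart to the discussion of data dependence and synchronization, the sketch below runs an embarrassingly parallel map on an SMP machine with Python's multiprocessing module: each element is processed independently, so no synchronization is needed and the work spreads across the available cores. The kernel and data are placeholders.

```python
from multiprocessing import Pool

def heavy_kernel(x: int) -> int:
    """Independent per-element work: no data dependence between iterations."""
    total = 0
    for i in range(10_000):
        total += (x * i) % 7
    return total

if __name__ == "__main__":
    data = list(range(1_000))
    with Pool() as pool:                        # one worker process per available core
        results = pool.map(heavy_kernel, data)  # embarrassingly parallel map
    print(sum(results))
```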

Relevance:

20.00%

Publisher:

Abstract:

In the contemporary development of normative proposals for democracy, it is possible to identify a recovery of the territorial dimension of socio-political processes, a rediscovery of the local driven by the interest in explaining, and proposing solutions to, the new challenges that inequity, crises of the accumulation regime, and crises in the modes of social coordination pose to contemporary societies. In this context, the local not only acquires a new prominence in relation to concerns about sustainable development, but also as a sphere which, for reasons of "proximity", constitutes the "natural" space for the realization of democracy. The relation between state and society, the possibilities of making participation effective, and the generation of conditions that make citizens' control of political power feasible all seem to find better conditions of realization at the local level. Nevertheless, and despite the advances described, there have been few attempts to examine in depth the specificity of local democracy that address and articulate theoretical and conceptual reflections allowing the identification of basic normative principles that can be tested empirically against concrete local institutions and practices. The virtues of local democracy are generally taken for granted, without problematizing their particularities, their relation to a general theory of democracy, their relation to territory at different scales, or their manifestation in the institutions and practices of social actors. The project holds that, starting from the reconstruction and problematization of the conceptual and philosophical origins underpinning the theory of local democracy, it is possible to identify a normative theoretical framework specific to it, one that enables the recognition of the meta-requisites needed both for its realization and for its contribution to strengthening the democratic regime in general. Establishing these requisites will make it possible to build analytical matrices for the empirical study of institutional designs and practices, as well as of the processes of constitution, reproduction, and contestation of such arrangements and practices by social actors. The project proposes to design analytical matrices that articulate different levels and dimensions of analysis of local democracy and that are applicable to case studies of medium-sized cities in Latin America.