932 results for efficient algorithm


Relevance:

20.00%

Publisher:

Abstract:

The concepts involved in sustainable textile fashion, which demand sound knowledge of raw materials, processes, end-use properties and distribution circuits, among others, can determine both the way a textile product is designed and the behavior of the consumer with regard to lifestyle and buying decisions. The life of a textile product integrates raw materials, their processing, distribution, use by the consumer and the destination of the product after its useful lifetime, that is, its complete life cycle. It is very important to recognize the power of consumers to influence parameters related to sustainability, namely when they decide how, when and why they buy, and afterwards through the attitudes they adopt during and after use. The conscious act of consumption involves ethical, ecological and technical knowledge, in which the concern is the overall life cycle of the fashion product and not exclusively the aesthetic and symbolic values strongly related to its ephemeral nature. The present work proposes the classification of textile products by means of an innovative label aiming to establish a rating related to the life of fashion products, using parameters considered to have a special impact on the life cycle, such as textile fibers, processing conditions, generated wastes, commercialization circuits, durability and cleaning procedures. This label for sustainable fashion products aims to provide stakeholders with informed attitudes and correct decisions in order to promote the objectives of sustainable fashion among designers, consumers and industry experts.

Relevance:

20.00%

Publisher:

Abstract:

Today's advances in computing power are driven by the parallel processing capabilities of available hardware architectures. These architectures accelerate algorithms when the algorithms are properly parallelized and exploit the specific processing features of the underlying architecture; converting an algorithm into its parallel form is complex, and the resulting form is specific to each type of parallel hardware. Most current general-purpose processors are multicore parts that integrate several processor cores on a single chip, resulting in what is known as a Symmetric Multi-Processor (SMP) unit. Nowadays it is hard to find a desktop processor without some degree of SMP-style parallelism, and the industry trend is to integrate ever more cores as the technology matures. Graphics Processor Units (GPUs), originally designed only for video processing, have developed their computing power by integrating multiple processing units, to the point that current boards can run on the order of 200 to 400 parallel processing threads. These processors are very fast and highly specialized for the task they were designed for, but because that task has much in common with scientific computing, they have been reoriented under the name General Processing Graphics Processor Unit (GPGPU). Unlike the SMP processors mentioned above, GPGPUs are not general purpose: they offer limited on-board memory and require a particular style of parallel processing to be used productively, so algorithm implementations need to be addressed carefully. Finally, Field Programmable Gate Arrays (FPGAs) are programmable logic devices capable of performing large numbers of operations in parallel, with low latency and deep pipelines, and can be used to implement specific algorithms that must run at very high speed; their drawback is that programming and testing an algorithm instantiated on the device is harder than software approaches and typically time-consuming. Given this diversity of parallel processors, our work aims at analyzing the specific characteristics of each of them, their impact on the structure of algorithms so that their use yields processing performance in line with the resources employed, and ways of combining them so that they complement each other. Specifically, starting from the hardware characteristics, we determine the properties a parallel algorithm must have in order to be accelerated; these properties in turn determine which of these hardware types is best suited for its implementation. In particular, we consider the level of data dependence, the need for synchronization during parallel processing, the size of the data to be processed, and the complexity of parallel programming on each type of hardware.
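As an illustration of the data-dependence criterion discussed above, the following minimal sketch (not taken from the work itself; the per-element computation and the use of Python's multiprocessing module are assumptions) shows how an embarrassingly parallel workload maps directly onto an SMP multicore processor:

```python
# Minimal sketch: running an independent (data-parallel) computation on an
# SMP multicore processor. Because each element is processed with no data
# dependence on the others, the work can be split across cores directly.
from multiprocessing import Pool


def process_element(x: float) -> float:
    # Stand-in for an expensive, independent per-element computation.
    return sum(x ** k / (k + 1) for k in range(1000))


if __name__ == "__main__":
    data = [float(i) for i in range(10_000)]

    # Serial baseline.
    serial = [process_element(x) for x in data]

    # Parallel version: the pool distributes chunks of `data` across cores.
    with Pool() as pool:
        parallel = pool.map(process_element, data, chunksize=256)

    assert serial == parallel
```

A workload with strong data dependence or frequent synchronization would not decompose this cleanly, which is exactly the kind of property the abstract proposes to use when matching algorithms to SMP, GPGPU, or FPGA targets.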

Relevance:

20.00%

Publisher:

Abstract:

Similarity-based operations, similarity join, similarity grouping, data integration

Relevance:

20.00%

Publisher:

Abstract:

Magdeburg, Univ., Faculty of Mathematics, Diss., 2012

Relevance:

20.00%

Publisher:

Abstract:

Background: Vascular remodeling, the dynamic dimensional change in the face of stress, can assume different directions as well as magnitudes in atherosclerotic disease. Classical measurements rely on reference segments at a distance, risking inappropriate comparison between dissimilar vessel portions. Objective: To explore a new method for quantifying vessel remodeling, based on the comparison between a given target segment and its inferred normal dimensions. Methods: Geometric parameters and plaque composition were determined in 67 patients using three-vessel intravascular ultrasound with virtual histology (IVUS-VH). Coronary vessel remodeling at cross-section (n = 27,639) and lesion (n = 618) levels was assessed using classical metrics and a novel analytic algorithm based on the fractional vessel remodeling index (FVRI), which quantifies the total change in arterial wall dimensions relative to the estimated normal dimension of the vessel. A prediction model was built to estimate the normal dimension of the vessel for calculation of FVRI. Results: According to the new algorithm, the "Ectatic" remodeling pattern was least common, "Complete compensatory" remodeling was present in approximately half of the instances, and "Negative" and "Incomplete compensatory" remodeling types were detected in the remainder. Compared to a traditional diagnostic scheme, FVRI-based classification seemed to better discriminate plaque composition by IVUS-VH. Conclusion: Quantitative assessment of coronary remodeling using target segment dimensions offers a promising approach to evaluate the vessel response to plaque growth/regression.
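The abstract describes FVRI only qualitatively. One illustrative way such a fractional index can be written (an assumption for the reader's orientation, not the authors' published definition) is as the relative difference between the measured vessel cross-sectional area and the area predicted by the normal-dimension model:

```latex
% Illustrative only: A_obs is the measured vessel cross-sectional area,
% A_norm the area predicted by the normal-dimension model; the published
% FVRI definition may differ.
\[
  \mathrm{FVRI} \;=\; \frac{A_{\mathrm{obs}} - A_{\mathrm{norm}}}{A_{\mathrm{norm}}}
\]
```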

Relevance:

20.00%

Publisher:

Abstract:

Magdeburg, Univ., Faculty of Computer Science, Diss., 2015

Relevance:

20.00%

Publisher:

Abstract:

...This dissertation shows how we can build database management systems that use heterogeneous processors efficiently and reliably to accelerate query processing. To this end, we examine typical design decisions of coprocessor-accelerated database management systems and, building on them, derive a generic architecture for such systems. Our investigations show that one of the most important problems for such database management systems is deciding which operators of a query should be executed on which processor...
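As a purely illustrative sketch of the operator-placement problem described above (the greedy cost model, the two-device setup, and all cost figures are assumptions, not the dissertation's actual design), a system might assign each query operator to a processor from simple runtime estimates:

```python
# Minimal sketch of per-operator processor placement in a coprocessor-
# accelerated DBMS. The cost figures and the two-device model are assumed
# for illustration only.
from dataclasses import dataclass


@dataclass
class Operator:
    name: str
    cpu_cost_ms: float      # estimated runtime on the CPU
    gpu_cost_ms: float      # estimated runtime on the GPU (kernel only)
    transfer_ms: float      # PCIe transfer cost if the input is not yet on the GPU


def place(operators: list[Operator]) -> list[tuple[str, str]]:
    """Greedily assign each operator to the device with the lower estimated cost."""
    plan = []
    data_on_gpu = False
    for op in operators:
        gpu_total = op.gpu_cost_ms + (0.0 if data_on_gpu else op.transfer_ms)
        if gpu_total < op.cpu_cost_ms:
            plan.append((op.name, "GPU"))
            data_on_gpu = True
        else:
            plan.append((op.name, "CPU"))
            data_on_gpu = False
    return plan


if __name__ == "__main__":
    query = [
        Operator("scan", cpu_cost_ms=12.0, gpu_cost_ms=3.0, transfer_ms=8.0),
        Operator("filter", cpu_cost_ms=5.0, gpu_cost_ms=1.0, transfer_ms=8.0),
        Operator("aggregate", cpu_cost_ms=2.0, gpu_cost_ms=1.5, transfer_ms=8.0),
    ]
    print(place(query))
```

Even this toy version shows why the decision is non-trivial: data transfer costs couple the placement choices of neighboring operators, so a per-operator greedy rule is only one of several possible strategies.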

Relevance:

20.00%

Publisher:

Abstract:

The parameterized expectations algorithm (PEA) involves a long simulation and a nonlinear least squares (NLS) fit, both embedded in a loop. Both steps are natural candidates for parallelization. This note shows that parallelization can lead to substantial speedups for the PEA. I provide example code for a simple model that can serve as a template for parallelizing more interesting models, as well as a download link for an image of a bootable CD that allows a cluster to be created and the example code to be executed in minutes, with no need to install any software.
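One common way to parallelize the long simulation step in a loop of this kind (an assumption about the general approach; the note's actual example code is not reproduced here) is to run independent simulation blocks on separate processes and pool the simulated data before the NLS fit, as in this minimal sketch where the law of motion and all parameter names are placeholders:

```python
# Minimal sketch: parallelizing the simulation step of a PEA-style loop by
# running independent simulation blocks in separate processes and pooling
# the results before the nonlinear least squares fit. The model (a trivial
# AR(1)-style recursion) and all parameters are placeholders.
import numpy as np
from multiprocessing import Pool

N_BLOCKS = 4           # number of worker processes / simulation blocks
BLOCK_LENGTH = 50_000  # periods simulated per block


def simulate_block(seed: int) -> np.ndarray:
    """Simulate one independent block of the model given a PRNG seed."""
    rng = np.random.default_rng(seed)
    x = np.empty(BLOCK_LENGTH)
    x[0] = 1.0
    for t in range(1, BLOCK_LENGTH):
        # Placeholder law of motion; a real PEA would evaluate the current
        # parameterized approximation of the conditional expectation here.
        x[t] = 0.95 * x[t - 1] + rng.normal(scale=0.01)
    return x


if __name__ == "__main__":
    with Pool(N_BLOCKS) as pool:
        blocks = pool.map(simulate_block, range(N_BLOCKS))
    data = np.concatenate(blocks)
    # The pooled simulated data would then feed the NLS fit, and the outer
    # loop repeats until the expectation parameters converge.
    print(data.shape)
```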

Relevance:

20.00%

Publisher:

Abstract:

We implement a family of efficient proposals to share benefits generated in environments with externalities. These proposals extend the Shapley value to games with externalities and are parametrized through the method by which the externalities are averaged. We construct two slightly different mechanisms: one for environments with negative externalities and the other for positive externalities. We show that the subgame perfect equilibrium outcomes of these mechanisms coincide with the sharing proposals.
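For reference, the proposals above extend the classical Shapley value, which for a game without externalities with player set N and characteristic function v assigns to player i the payoff below; how the externalities are then averaged in the extension is a design choice the abstract parametrizes but does not spell out.

```latex
% Classical Shapley value (no externalities); the proposals extend this to
% games with externalities by averaging over how externalities are treated.
\[
  \phi_i(v) \;=\; \sum_{S \subseteq N \setminus \{i\}}
    \frac{|S|!\,\bigl(|N|-|S|-1\bigr)!}{|N|!}\,
    \bigl(v(S \cup \{i\}) - v(S)\bigr)
\]
```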

Relevance:

20.00%

Publisher:

Abstract:

We study situations of allocating positions or jobs to students or workers based on priorities. An example is the assignment of medical students to hospital residencies on the basis of one or several entrance exams. For markets without couples, e.g., for "undergraduate student placement," acyclicity is a necessary and sufficient condition for the existence of a fair and efficient placement mechanism (Ergin, 2002). We show that in the presence of couples, which introduces complementarities into the students' preferences, acyclicity is still necessary, but not sufficient (Theorem 4.1). A second necessary condition (Theorem 4.2) is "priority-togetherness" of couples. A priority structure that satisfies both necessary conditions is called pt-acyclic. For student placement problems where all quotas are equal to one we characterize pt-acyclicity (Lemma 5.1) and show that it is a sufficient condition for the existence of a fair and efficient placement mechanism (Theorem 5.1). If in addition to pt-acyclicity we require "reallocation-" and "vacancy-fairness" for couples, the so-called dictator-bidictator placement mechanism is the unique fair and efficient placement mechanism (Theorem 5.2). Finally, for general student placement problems, we show that pt-acyclicity may not be sufficient for the existence of a fair and efficient placement mechanism (Examples 5.4, 5.5, and 5.6). We identify a sufficient condition such that the so-called sequential placement mechanism produces a fair and efficient allocation (Theorem 5.3).

Relevance:

20.00%

Publisher:

Abstract:

Economic activities, both at the macro and micro level, often entail widespread externalities. This in turn leads to disputes regarding the compensation levels due to the various parties affected. We propose a general, yet simple, method of deciding upon the distribution of the gains (costs) of cooperation in the presence of externalities. This method is shown to be the unique one satisfying several desirable properties. Furthermore, we illustrate the use of this method to resolve the sharing of benefits generated by international climate control agreements.

Relevance:

20.00%

Publisher:

Abstract:

In this paper, we suggest a simple sequential mechanism whose subgame perfect equilibria give rise to efficient networks. Moreover, the payoffs received by the agents coincide with their Shapley value in an appropriately defined cooperative game.

Relevance:

20.00%

Publisher:

Abstract:

In this paper, we consider two classes of economic environments. In the first type, agents are faced with the task of providing local public goods that will benefit some or all of them. In the second type, economic activity takes place via the formation of links. Agents need both to form a network and to decide how to share the output generated. For both scenarios, we suggest a bidding mechanism whereby agents bid for the right to decide upon the organization of the economic activity. The subgame perfect equilibria of this game generate efficient outcomes.