15 results for Load Balancing

at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain


Relevance: 60.00%

Publisher:

Abstract:

With the increased computing power of processing nodes, more and more data-intensive applications, such as bioinformatics applications, will come to be executed on non-dedicated clusters. Non-dedicated clusters are characterized by their ability to combine the execution of local users' applications with scientific or commercial applications executed in parallel. Knowing what effect data-intensive applications produce when mixed with other types (batch, interactive, SRT, etc.) in non-dedicated environments enables the development of more efficient scheduling policies. Some I/O-intensive applications are based on the MapReduce paradigm, where the environments that use them, such as Hadoop, handle data locality and load balancing automatically and work with distributed file systems. Hadoop's performance can be improved without increasing hardware costs by tuning several key configuration parameters to the cluster's specifications, the input data size and the processing complexity. Tuning these parameters can be too complex for the user and/or administrator, but it aims to guarantee more adequate performance. This work proposes evaluating the impact of I/O-intensive applications on job scheduling in non-dedicated clusters under the MPI and MapReduce paradigms.
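
The parameter-tuning idea described above can be sketched as a simple heuristic. The derivation rules below are illustrative assumptions, not Hadoop's actual defaults; only the two configuration keys themselves (`mapreduce.job.reduces`, `mapreduce.task.io.sort.mb`) are real Hadoop properties.

```python
# Illustrative heuristic (assumed, not Hadoop's real logic): derive two key
# MapReduce tuning parameters from cluster specs and input size.

def tune_hadoop_params(nodes, cores_per_node, ram_mb_per_node, input_gb):
    """Return a dict of hypothetical Hadoop configuration settings."""
    total_cores = nodes * cores_per_node
    # One reduce wave using ~95% of the cluster's cores (a common rule of thumb).
    reducers = max(1, int(total_cores * 0.95))
    # Larger inputs benefit from bigger sort buffers, capped by per-task memory.
    task_mem_mb = ram_mb_per_node // cores_per_node
    sort_mb = min(task_mem_mb // 2, 64 + 8 * input_gb)
    return {
        "mapreduce.job.reduces": reducers,
        "mapreduce.task.io.sort.mb": int(sort_mb),
    }

params = tune_hadoop_params(nodes=8, cores_per_node=4,
                            ram_mb_per_node=8192, input_gb=10)
```

The point of the sketch is that such settings depend jointly on cluster shape and input size, which is why manual tuning is hard for users and administrators.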

Relevance: 60.00%

Publisher:

Abstract:

High-performance computing is currently used in a multitude of scientific fields, where the problems studied are solved by means of parallel/distributed applications. These applications require great computing capacity, whether because of the complexity of the problems or the need to handle real-time situations. The resources and high computational capacity of the parallel systems on which these applications run must therefore be exploited in order to obtain good performance. However, achieving this performance for an application running on a given system is a hard task that requires a high degree of expertise, especially for applications with dynamic behavior or on heterogeneous systems. In these cases, automatic and dynamic performance improvement of applications is currently considered the best approach to performance analysis. This research work falls within this field of study, and its main objective is to dynamically tune, by means of MATE (Monitoring, Analysis and Tuning Environment), an MPI application used in high-performance computing that follows a Master/Worker paradigm. The tuning techniques integrated into MATE were developed from the study of a performance model that captures the bottlenecks typical of Master/Worker applications: load balancing and the number of workers. Running the chosen application under the dynamic control of MATE and the implemented tuning strategy made it possible to observe how the application's behavior adapts to the current conditions of the system on which it runs, thereby improving its performance.
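
A minimal sketch of the kind of Master/Worker performance model such tuning relies on, assuming a simple cost T(n) = W/n + n*o (total work W divided among n workers, plus a per-worker overhead o). This is an illustration of the "number of workers" bottleneck, not MATE's actual model.

```python
import math

# Hypothetical cost model (assumed, not MATE's): T(n) = W/n + n*o.
# The analytic minimum is at n = sqrt(W/o); we clamp to the available workers
# and pick the better integer neighbor.

def optimal_workers(total_work, per_worker_overhead, max_workers):
    n = math.sqrt(total_work / per_worker_overhead)
    candidates = {max(1, min(max_workers, math.floor(n))),
                  max(1, min(max_workers, math.ceil(n)))}
    cost = lambda k: total_work / k + k * per_worker_overhead
    return min(candidates, key=cost)

best = optimal_workers(total_work=1000.0, per_worker_overhead=0.1, max_workers=64)
```

A dynamic tuner would re-evaluate such a model at run time as measured work and overhead change, adjusting the worker count accordingly.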

Relevance: 60.00%

Publisher:

Abstract:

Many parallel/distributed applications currently exist for which SPMD is the most widely used paradigm. Obtaining good performance in a parallel application of this type is one of the main challenges, given the large number of existing applications. This goal is not easy to achieve, since there is a wide variety of hardware configurations, and the nature of the problems, as well as the way they are implemented, can also vary. Consequently, if the software/hardware combination is not considered properly, problems inherent to an iterative application without a control hierarchy defined according to this paradigm can arise. In SPMD, all processes execute the same code but compute a different section of the input data. One solution to a potential performance problem is to propose a load-balancing strategy to even out the computation among the different processes. In this work we analyze the CG benchmark with heterogeneous loads in order to detect possible performance problems in a real application. One factor that determines performance in this application is the number of nonzero elements contained in the matrix section assigned to each process. We determine that it is possible to define a load-balancing strategy that can be implemented dynamically, and we show experimentally that the application's performance can be improved significantly with such a strategy.
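
The nonzero-based balancing idea can be sketched as follows: split the matrix rows into contiguous blocks so that each process receives roughly the same number of nonzeros rather than the same number of rows. This greedy partitioning is a hypothetical illustration, not necessarily the paper's exact strategy.

```python
# Sketch (assumed strategy): assign contiguous row blocks so each process
# gets roughly the same number of nonzero elements.

def balance_rows(nnz_per_row, nprocs):
    total = sum(nnz_per_row)
    target = total / nprocs
    parts, current, acc = [], [], 0
    for i, nnz in enumerate(nnz_per_row):
        current.append(i)
        acc += nnz
        # Close this block once it reaches the per-process target,
        # leaving at least one block for the remaining processes.
        if acc >= target and len(parts) < nprocs - 1:
            parts.append(current)
            current, acc = [], 0
    parts.append(current)
    return parts

# Heterogeneous row densities: equal row counts would be imbalanced here.
parts = balance_rows([5, 1, 1, 5, 1, 1, 5, 1], 2)
```

With equal row counts each process would get 4 rows but 12 vs 8 nonzeros; balancing by nonzeros evens out the computation.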

Relevance: 60.00%

Publisher:

Abstract:

In this paper, a method for enhancing current QoS routing methods by means of QoS protection is presented. In an MPLS network, the segments (links) to be protected are predefined, and an LSP request involves, apart from establishing a working path, creating a specific type of backup path (local, reverse or global). Different QoS parameters, such as network load balancing, resource optimization and minimization of LSP request rejection, should be considered. QoS protection is defined as a function of QoS parameters such as packet loss, restoration time and resource optimization. A framework to add QoS protection to many of the current QoS routing algorithms is introduced. A backup decision module to select the most suitable protection method is formulated, and different case studies are analyzed.
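
A backup decision module of the kind described can be sketched as a weighted scoring over the three protection types. The profile numbers and weights below are invented for illustration; the paper's actual decision logic is not reproduced here.

```python
# Hypothetical backup decision module: score each protection type on packet
# loss, restoration time and resource usage (all normalized to [0, 1], lower
# is better), then pick the minimum weighted score.

PROFILES = {
    # protection type: (packet_loss, restoration_time, resource_usage)
    "local":   (0.1, 0.1, 0.9),   # fastest recovery, most backup resources
    "reverse": (0.3, 0.4, 0.6),
    "global":  (0.6, 0.8, 0.3),   # slowest recovery, fewest resources
}

def select_protection(w_loss, w_time, w_res):
    score = lambda p: w_loss * p[0] + w_time * p[1] + w_res * p[2]
    return min(PROFILES, key=lambda name: score(PROFILES[name]))
```

Shifting the weights toward restoration time favors local protection; weighting resource optimization favors global protection, which matches the trade-off the abstract describes.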

Relevance: 60.00%

Publisher:

Abstract:

The achievable region approach seeks solutions to stochastic optimisation problems by: (i) characterising the space of all possible performances (the achievable region) of the system of interest, and (ii) optimising the overall system-wide performance objective over this space. This is radically different from conventional formulations based on dynamic programming. The approach is explained with reference to a simple two-class queueing system. Powerful new methodologies due to the authors and co-workers are deployed to analyse a general multiclass queueing system with parallel servers and then to develop an approach to optimal load distribution across a network of interconnected stations. Finally, the approach is used for the first time to analyse a class of intensity control problems.
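
Step (ii) above can be illustrated in miniature: once the achievable region is characterised as a polytope, a linear performance objective attains its minimum at a vertex, so optimisation reduces to comparing vertices. The vertex coordinates below are hypothetical numbers standing in for, e.g., the mean waits under the two strict-priority rules of a two-class queue.

```python
# Sketch of optimizing a linear objective over an achievable region given by
# its vertices (illustrative numbers, not from the paper).

def best_vertex(vertices, costs):
    value = lambda v: sum(c * x for c, x in zip(costs, v))
    return min(vertices, key=value)

# Hypothetical two-class system: each vertex is (mean wait class 1, class 2).
vertices = [(1.0, 4.0),   # priority to class 1
            (3.0, 2.0)]   # priority to class 2
```

Changing the cost weights flips which priority rule is optimal, which is the essence of optimising over the achievable region rather than over policies directly.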

Relevance: 60.00%

Publisher:

Abstract:

In a previous paper, a novel Generalized Multiobjective Multitree model (GMM-model) was proposed. This model considers, for the first time, multitree-multicast load balancing with splitting in a multiobjective context, whose mathematical solution is a whole Pareto-optimal set that can include more solutions than it had been possible to find in the publications surveyed. To solve the GMM-model, this paper proposes a multiobjective evolutionary algorithm (MOEA) inspired by the Strength Pareto Evolutionary Algorithm (SPEA). Experimental results considering up to 11 different objectives are presented for the well-known NSF network, with two simultaneous data flows.
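
The Pareto-optimal set mentioned above is defined by the standard dominance relation, sketched here for minimization objectives (this is the general notion, not the GMM-model's specific objective functions):

```python
# Pareto dominance for minimization: a dominates b if a is at least as good
# in every objective and strictly better in at least one.

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the objective vectors not dominated by any other."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

front = pareto_front([(1, 5), (2, 2), (5, 1), (3, 3), (6, 6)])
```

An MOEA such as SPEA maintains an archive of exactly such non-dominated solutions while evolving the population toward the front.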

Relevance: 60.00%

Publisher:

Abstract:

Peer-reviewed

Relevance: 30.00%

Publisher:

Abstract:

Current parallel applications running on clusters require an interconnection network to perform communications among all available computing nodes. Communication imbalance can produce network congestion, reducing throughput and increasing latency, which degrades overall system performance. On the other hand, parallel applications running on these networks possess representative stages that allow their characterization, as well as repetitive behavior that can be identified on the basis of this characterization. This work presents Predictive and Distributed Routing Balancing (PR-DRB), a new method developed to gradually control network congestion, based on path expansion, traffic distribution and effective traffic load, in order to maintain low latency values. PR-DRB monitors message latencies on intermediate routers, makes decisions about alternative paths, and records the communication patterns encountered during congestion. Based on the repetitiveness of applications, the best solutions recorded are reapplied when a saved communication pattern reappears. Traffic congestion experiments were conducted to evaluate the performance of the method, and improvements were observed.
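
The "record and reapply" idea behind PR-DRB can be sketched as a memo of the best alternative path seen for each communication pattern. This is a deliberately simplified illustration; PR-DRB's actual pattern identification and router-level mechanics are not modeled here.

```python
# Simplified sketch of PR-DRB's repetitiveness exploitation: remember the
# lowest-latency alternative path found for each communication pattern and
# reapply it when the same pattern reappears.

class PathMemo:
    def __init__(self):
        self.best = {}  # pattern -> (path, latency)

    def record(self, pattern, path, latency):
        prev = self.best.get(pattern)
        if prev is None or latency < prev[1]:
            self.best[pattern] = (path, latency)

    def lookup(self, pattern):
        entry = self.best.get(pattern)
        return entry[0] if entry else None

memo = PathMemo()
memo.record(("A", "B"), ["A", "X", "B"], 5.0)   # first alternative tried
memo.record(("A", "B"), ["A", "Y", "B"], 3.0)   # better one found later
```

On the next occurrence of the ("A", "B") pattern, the router would reapply the recorded path instead of re-searching under congestion.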

Relevance: 20.00%

Publisher:

Abstract:

A new 'Consent Commons' licensing framework is proposed, complementing Creative Commons, to clarify the permissions given for using and reusing clinical and non-clinical digital recordings of people (patients and non-patients) for educational purposes. Consent Commons is a sophisticated expression of ethically based 'digital professionalism', which recognises the rights of patients, carers, their families, teachers, clinicians, students and members of the public to have some say in how their digital recordings are used (including refusing or withdrawing their consent), and is necessary in order to ensure the long-term sustainability of teaching materials, including Open Educational Resources (OER). Consent Commons can ameliorate uncertainty about the status of educational resources depicting people, and protect institutions from legal risk by developing robust and sophisticated policies and promoting best practice in managing their information.

Relevance: 20.00%

Publisher:

Abstract:

Several European telecommunications regulatory agencies have recently introduced a fixed capacity charge (flat rate) to regulate access to the incumbent's network. The purpose of this paper is to show that the optimal capacity charge and the optimal access-minute charge analysed by Armstrong, Doyle, and Vickers (1996) have a similar structure and imply the same payment for the entrant. I extend the analysis to the case where there is a competitor with market power. In this case, the optimal capacity charge should be modified to prevent the entrant from cream-skimming the market by fixing a longer or a shorter peak period than the optimal one. Finally, I consider a multiproduct setting, where the effect of product differentiation is exacerbated.

Relevance: 20.00%

Publisher:

Abstract:

The fate of a small oral dose of protein given to overnight-starved rats was studied. After 3 h, 62 per cent of the protein amino acids had been absorbed. Most of the absorbed N entered the bloodstream through the portal vein in the form of amino acids, but urea and ammonia were also present. About one-quarter of all absorbed N was carried as lymph amino acids. The liver was able to take up all portal free ammonia and a large proportion of portal amino acids, releasing urea. The hepatic N balance was negative, indicating active proteolysis and a net loss of liver protein.


Relevance: 20.00%

Publisher:

Abstract:

The General Assembly Line Balancing Problem with Setups (GALBPS) was recently defined in the literature. It adds sequence-dependent setup time considerations to the classical Simple Assembly Line Balancing Problem (SALBP) as follows: whenever a task is assigned next to another at the same workstation, a setup time must be added to compute the global workstation time, thereby providing the task sequence inside each workstation. This paper proposes over 50 priority-rule-based heuristic procedures to solve GALBPS, many of which are an improvement upon heuristic procedures published to date.
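The station-loading rule described above can be sketched in a toy form: tasks are taken in a given priority order and packed into workstations, adding the sequence-dependent setup time whenever a task follows another at the same station. Precedence constraints and the actual priority rules of the paper are omitted; this only illustrates how setups enter the workstation time.

```python
# Toy GALBPS-style station loading (assumed simplification: precedence
# constraints ignored, priority order given as input).

def load_stations(order, duration, setup, cycle_time):
    stations, current, load = [], [], 0.0
    for task in order:
        # Setup applies only when the task follows another at the same station.
        extra = duration[task] + (setup[(current[-1], task)] if current else 0.0)
        if current and load + extra > cycle_time:
            stations.append(current)               # close the full station
            current, load = [], 0.0
            extra = duration[task]                 # no setup for a station opener
        current.append(task)
        load += extra
    if current:
        stations.append(current)
    return stations

duration = {"a": 3.0, "b": 3.0, "c": 4.0}
setup = {("a", "b"): 1.0, ("b", "c"): 1.0}
stations = load_stations(["a", "b", "c"], duration, setup, cycle_time=7.0)
```

Note that without the setup of 1.0 between "b" and "c", task "c" still would not fit in the first station here; in tighter instances the setups alone can force an extra station, which is what distinguishes GALBPS from SALBP.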

Relevance: 20.00%

Publisher:

Abstract:

Random problem distributions have played a key role in the study and design of algorithms for constraint satisfaction and Boolean satisfiability, as well as in our understanding of problem hardness, beyond standard worst-case complexity. We consider random problem distributions from a highly structured problem domain that generalizes the Quasigroup Completion problem (QCP) and Quasigroup with Holes (QWH), a widely used domain that captures the structure underlying a range of real-world applications. Our problem domain is also a generalization of the well-known Sudoku puzzle: we consider Sudoku instances of arbitrary order, with the additional generalization that the block regions can have rectangular shape, in addition to the standard square shape. We evaluate the computational hardness of Generalized Sudoku instances for different parameter settings. Our experimental hardness results show that we can generate instances that are considerably harder than QCP/QWH instances of the same size. More interestingly, we show the impact of different balancing strategies on problem hardness. We also provide insights into backbone variables in Generalized Sudoku instances and how they correlate to problem hardness.
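
The generalized structure described above, an n x n grid with n = r * c and r-by-c rectangular blocks, can be made concrete with a validity checker (the checker follows directly from the definition; the example grid uses a standard cyclic-shift construction):

```python
# Generalized Sudoku validity: every row, column, and r-by-c block of an
# n x n grid (n = r * c) must contain the symbols 1..n exactly once.

def is_valid_generalized_sudoku(grid, r, c):
    n = r * c
    symbols = set(range(1, n + 1))
    rows_ok = all(set(row) == symbols for row in grid)
    cols_ok = all({grid[i][j] for i in range(n)} == symbols for j in range(n))
    blocks_ok = all(
        {grid[br + i][bc + j] for i in range(r) for j in range(c)} == symbols
        for br in range(0, n, r) for bc in range(0, n, c)
    )
    return rows_ok and cols_ok and blocks_ok

# A valid order-6 instance (2x3 blocks) built by cyclic shifts.
r, c = 2, 3
n = r * c
grid = [[(c * (i % r) + i // r + j) % n + 1 for j in range(n)] for i in range(n)]
ok = is_valid_generalized_sudoku(grid, r, c)

# Swapping two cells in a row breaks two columns but not the row itself.
bad = [row[:] for row in grid]
bad[0][0], bad[0][1] = bad[0][1], bad[0][0]
```

Instance generators for hardness experiments typically start from such a complete grid and punch "holes" to obtain a completion problem.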

Relevance: 20.00%

Publisher:

Abstract:

The objective of this paper was to show the potential additional insight that results from adding greenhouse gas (GHG) emissions to plant performance evaluation criteria, such as the effluent quality (EQI) and operational cost (OCI) indices, when evaluating (plant-wide) control/operational strategies in wastewater treatment plants (WWTPs). The proposed GHG evaluation is based on a set of comprehensive dynamic models that estimate the most significant potential on-site and off-site sources of CO2, CH4 and N2O. The study calculates and discusses the changes in EQI, OCI and GHG emissions as a consequence of varying four process variables: (i) the set point of the aeration control in the activated sludge section; (ii) the removal efficiency of total suspended solids (TSS) in the primary clarifier; (iii) the temperature in the anaerobic digester; and (iv) the control of the flow of anaerobic digester supernatants coming from sludge treatment. Based on the assumptions built into the model structures, simulation results highlight the potential undesirable effect of increased GHG production when carrying out local energy optimization of the aeration system in the activated sludge section and energy recovery from the anaerobic digester. Although off-site CO2 emissions may decrease, the effect is counterbalanced by increased N2O emissions, especially since N2O has a 300-fold stronger greenhouse effect than CO2. The reported results emphasize the importance and usefulness of using multiple evaluation criteria to compare and evaluate (plant-wide) control strategies in a WWTP for more informed operational decision making.
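
The CO2-vs-N2O trade-off noted above can be made concrete by folding the gases into a single CO2-equivalent figure. The 300x factor for N2O comes from the abstract; the CH4 factor of 25 and all emission numbers are assumptions for illustration only.

```python
# Global warming potentials: N2O factor of ~300 is cited in the text;
# the CH4 value of 25 is a common literature value, assumed here.
GWP = {"CO2": 1.0, "CH4": 25.0, "N2O": 300.0}

def co2_equivalent(emissions_kg):
    """Fold per-gas emissions (kg) into one CO2-equivalent total (kg)."""
    return sum(GWP[gas] * kg for gas, kg in emissions_kg.items())

# Hypothetical scenario: local energy optimization lowers CO2 slightly
# but raises N2O, leaving the plant worse off in CO2-equivalents.
before = co2_equivalent({"CO2": 1000.0, "N2O": 1.0})
after = co2_equivalent({"CO2": 900.0, "N2O": 1.5})
```

A 10% CO2 saving is wiped out by half a kilogram of extra N2O, which is why single-criterion optimization can mislead operational decisions.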