988 results for cluster algorithms
Abstract:
Sugar is a specific commodity and one of the largest contributors to the agricultural gross domestic product of developing countries, through both domestic consumption and international trade. Worldwide, sugar is obtained industrially from sugar beet (Beta vulgaris) and sugar cane (Saccharum officinarum), the only significant sources traded (...). The international sugar market is one of the most highly distorted agricultural commodity markets. Trade in raw and refined sugar is generally characterised by significant and widespread domestic support, along with market-distorting policies such as guaranteed minimum payments to producers, production and marketing controls, price regulation, tariffs, import quotas and export subsidies (...). Broadly, two types of sugar market can be distinguished: the protected market and the free market. The protected market consists of preferential agreements and long-term contracts, including the system of quotas between different countries. In general, sugar prices show strong fluctuations driven by economic factors, speculation, political change, recessions and climatic effects (...). Nevertheless, a clear trend towards the globalisation of markets has been observed in recent years, and the sugar market is no exception. Efforts have recently focused on partially liberalising some of the most important markets, as in the cases of the USA, the EU, Brazil and Australia (...). The growing demand for renewable energy sources, notably biofuels, has also significantly affected the trade and production dynamics of some sectors, in particular the sugar sector. Sugar cane has become the main raw material for producing bioethanol, and producer countries have accordingly undergone a recent shift in their production and trade structures, seeking products with greater added value in order to improve their competitive position. Under this scheme, sugar-producing industries, especially in developing countries, face enormous challenges in converting their comparative advantages into competitive advantages. The cost structure and yield of each domestic sugar industry are among the main drivers of its competitiveness and of the determination of future centres of production and export growth. Against this background, the present study addresses the main determinants of the competitiveness of the Colombian sugar industry, 98.07 per cent of which is concentrated in the cluster of the geographical valley of the Cauca river.
Abstract:
In many practical situations, batching of similar jobs to avoid setups is performed while constructing a schedule. This paper addresses the problem of non-preemptively scheduling independent jobs in a two-machine flow shop with the objective of minimizing the makespan. Jobs are grouped into batches. A sequence-independent batch setup time on each machine is required before the first job is processed, and when a machine switches from processing a job in some batch to a job of another batch. Besides its practical interest, this problem is a direct generalization of the classical two-machine flow shop problem with no grouping of jobs, which can be solved optimally by Johnson's well-known algorithm. The problem under investigation is known to be NP-hard. We propose two O(n log n) time heuristic algorithms. The first heuristic, which creates a schedule with minimum total setup time by forcing all jobs in the same batch to be sequenced in adjacent positions, has a worst-case performance ratio of 3/2. By allowing each batch to be split into at most two sub-batches, a second heuristic is developed which has an improved worst-case performance ratio of 4/3. © 1998 The Mathematical Programming Society, Inc. Published by Elsevier Science B.V.
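For reference, the classical two-machine flow shop that this problem generalizes is solved optimally by Johnson's rule; a minimal sketch (the job data are illustrative, and the batching extension studied in the paper is not reproduced):

```python
def johnsons_rule(jobs):
    """Johnson's rule for the two-machine flow shop F2 || Cmax.

    jobs: list of (a, b) pairs, the processing times on machines
    1 and 2. Returns job indices in a makespan-optimal order.
    """
    # Jobs faster on machine 1 go first, in increasing a;
    # jobs faster on machine 2 go last, in decreasing b.
    front = sorted((i for i, (a, b) in enumerate(jobs) if a <= b),
                   key=lambda i: jobs[i][0])
    back = sorted((i for i, (a, b) in enumerate(jobs) if a > b),
                  key=lambda i: jobs[i][1], reverse=True)
    return front + back

def makespan(jobs, order):
    """Completion time of the last job on machine 2."""
    t1 = t2 = 0
    for i in order:
        a, b = jobs[i]
        t1 += a                # machine 1 finishes job i
        t2 = max(t2, t1) + b   # machine 2 starts once both are ready
    return t2

jobs = [(3, 6), (5, 2), (1, 2), (6, 6), (7, 5)]
order = johnsons_rule(jobs)
print(order, makespan(jobs, order))   # [2, 0, 3, 4, 1] 24
```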
Abstract:
The performance of load-sharing algorithms for heterogeneous distributed systems is investigated by simulation. The systems considered are networks of workstations (nodes) which differ in processing power. Two parameters are proposed for characterising system heterogeneity, namely the variance and skew of the distribution of processing power among the network nodes. A variety of networks are investigated, with the same number of nodes and total processing power but with the processing power distributed differently among the nodes. Two load-sharing algorithms are evaluated, at overall system loadings of 50% and 90%, using job response time as the performance metric. Comparison is made with the ideal situation of ‘perfect sharing’, where it is assumed that communication delays are zero and that complete knowledge is available about job lengths and the loading at the different nodes, so that an arriving job can be sent to the node where it will be completed in the shortest time. The algorithms studied are based on those already in use for homogeneous networks, but were adapted to take account of system heterogeneity. Both algorithms take into account the differences in the processing powers of the nodes in their location policies, but differ in the extent to which they ‘discriminate’ against the slower nodes. It is seen that the relative performance of the two is strongly influenced by the system utilisation and the distribution of processing power among the nodes.
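The two proposed heterogeneity parameters can be computed directly from the list of node speeds; the sketch below uses the conventional moment definitions of variance and skewness, which may differ in detail from the paper's exact formulas:

```python
import statistics

def heterogeneity(speeds):
    """Variance and skew of the processing-power distribution
    across nodes (population moments; illustrative definitions)."""
    mean = statistics.fmean(speeds)
    var = statistics.pvariance(speeds, mu=mean)
    sd = var ** 0.5
    skew = (sum((s - mean) ** 3 for s in speeds)
            / (len(speeds) * sd ** 3)) if sd > 0 else 0.0
    return var, skew

# Two 8-node networks with the same total power, distributed differently.
print(heterogeneity([10] * 8))                   # homogeneous: both zero
print(heterogeneity([38, 6, 6, 6, 6, 6, 6, 6]))  # one fast node: high skew
```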
Abstract:
Three parallel optimisation algorithms, for use in the context of multilevel graph partitioning of unstructured meshes, are described. The first, interface optimisation, reduces the computation to a set of independent optimisation problems in interface regions. The next, alternating optimisation, is a restriction of this technique in which mesh entities are only allowed to migrate between subdomains in one direction. The third treats the gain as a potential field and uses the concept of relative gain for selecting appropriate vertices to migrate. The three methods are compared and seen to produce partitions of very high global quality, very rapidly. The results are also compared with those of another partitioning tool and shown to be of higher quality, although taking longer to compute.
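The migration gain is standard in graph partitioning; the "relative gain" selection rule sketched below is one plausible reading (a vertex moves only if its gain beats the opposing gains of its boundary neighbours, so adjacent vertices never swap past each other) and is not taken verbatim from the paper:

```python
def gain(v, target, adj, part):
    """Edge-cut gain of moving v to subdomain `target`:
    cut edges removed minus cut edges created (unit edge weights)."""
    to_target = sum(1 for u in adj[v] if part[u] == target)
    internal = sum(1 for u in adj[v] if part[u] == part[v])
    return to_target - internal

def relative_gain_moves(adj, part):
    """One sweep: a boundary vertex migrates only if its gain beats
    (with an id tie-break) the opposing gain of every neighbour across
    the boundary, preventing oscillating swaps between neighbours."""
    moves = []
    for v in adj:
        for q in {part[u] for u in adj[v] if part[u] != part[v]}:
            g = gain(v, q, adj, part)
            rivals = [(gain(u, part[v], adj, part), u)
                      for u in adj[v] if part[u] == q]
            if g > 0 and all(g > rg or (g == rg and v < u)
                             for rg, u in rivals):
                moves.append((v, q))
                break
    return moves

# A 6-vertex path with a deliberately bad alternating partition.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
part = {0: 0, 1: 1, 2: 0, 3: 1, 4: 0, 5: 1}
print(relative_gain_moves(adj, part))   # [(1, 0)]: only vertex 1 wins
```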
Abstract:
This paper discusses load-balancing issues when using heterogeneous cluster computers. There is a growing trend towards the use of commodity microprocessor clusters. Although today's microprocessors have reached a theoretical peak performance in the range of one GFLOP/s, heterogeneous clusters of commodity processors are amongst the most challenging parallel systems to program efficiently. We will outline an approach for optimising the performance of parallel mesh-based applications for heterogeneous cluster computers and present case studies with the GeoFEM code. The focus is on application cost monitoring and load balancing using the DRAMA library.
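The DRAMA library's interfaces are not reproduced here; the underlying idea of heterogeneous load balancing can be illustrated by sizing each node's share of mesh elements in proportion to its measured processing power (all numbers below are illustrative):

```python
def proportional_shares(n_elements, node_speeds):
    """Target number of mesh elements per node, proportional to
    node speed, with remainders assigned by largest fraction."""
    total = sum(node_speeds)
    raw = [n_elements * s / total for s in node_speeds]
    shares = [int(r) for r in raw]
    # Hand the leftover elements to the largest fractional parts.
    leftover = n_elements - sum(shares)
    order = sorted(range(len(raw)), key=lambda i: raw[i] - shares[i],
                   reverse=True)
    for i in order[:leftover]:
        shares[i] += 1
    return shares

# 4-node heterogeneous cluster; speeds in arbitrary benchmark units.
print(proportional_shares(100000, [1.0, 1.0, 2.5, 3.5]))
# -> [12500, 12500, 31250, 43750]
```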
Abstract:
The factors that are driving the development and use of grids and grid computing, such as size, dynamic features, distribution and heterogeneity, are also pushing service quality issues to the forefront. These include performance, reliability and security. Although grid middleware can address some of these issues on a wider scale, it has also become imperative to ensure adequate service provision at the local level. Load sharing in clusters can contribute to the provision of a high-quality service by exploiting both static and dynamic information. This paper presents a load-sharing scheme that can satisfy grid computing requirements. It follows a proactive, non-preemptive and distributed approach. Load information is gathered continuously, before it is needed, and a task is allocated to the most appropriate node for execution. Performance and reliability are enhanced by the decentralised nature of the scheme and the symmetric roles of the nodes. In addition, the scheme exhibits transparency characteristics that facilitate integration with the grid.
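A minimal sketch of the proactive idea under assumed data structures: load reports are collected ahead of time, and an incoming task simply goes to the node with the lowest estimated completion time (the class and its fields are illustrative, not the paper's protocol):

```python
import time

class LoadSharer:
    """Keeps the latest load report per node (gathered proactively,
    before any task arrives) and allocates each task to the node
    with the smallest estimated backlog per unit of speed."""

    def __init__(self, speeds, stale_after=5.0):
        self.speeds = speeds            # node -> processing power
        self.reports = {}               # node -> (queued_work, timestamp)
        self.stale_after = stale_after

    def update(self, node, queued_work):
        self.reports[node] = (queued_work, time.monotonic())

    def allocate(self, task_size):
        now = time.monotonic()
        best, best_eta = None, float("inf")
        for node, (work, ts) in self.reports.items():
            if now - ts > self.stale_after:
                continue                # ignore stale information
            eta = (work + task_size) / self.speeds[node]
            if eta < best_eta:
                best, best_eta = node, eta
        return best

ls = LoadSharer({"n1": 1.0, "n2": 2.0})
ls.update("n1", 4.0)
ls.update("n2", 12.0)
print(ls.allocate(2.0))   # n1: (4+2)/1 = 6; n2: (12+2)/2 = 7 -> "n1"
```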
Abstract:
We consider a knapsack problem to minimize a symmetric quadratic function. We demonstrate that this symmetric quadratic knapsack problem is relevant to two problems of single machine scheduling: the problem of minimizing the weighted sum of the completion times with a single machine non-availability interval under the non-resumable scenario; and the problem of minimizing the total weighted earliness and tardiness with respect to a common small due date. We develop a polynomial-time approximation algorithm that delivers a constant worst-case performance ratio for a special form of the symmetric quadratic knapsack problem. We adapt that algorithm to our scheduling problems and achieve better performance. For the problems under consideration, no fixed-ratio approximation algorithms were previously known.
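In generic form, a quadratic knapsack problem of this kind can be written as below (the paper treats a special symmetric case of the objective):

```latex
% Generic quadratic knapsack with a symmetric matrix Q = (q_{ij}).
\min_{x \in \{0,1\}^n} \; \sum_{i=1}^{n}\sum_{j=1}^{n} q_{ij}\, x_i x_j
  \;+\; \sum_{j=1}^{n} c_j x_j
\qquad \text{subject to} \qquad \sum_{j=1}^{n} a_j x_j \le b .
```

In the first scheduling application, $x_j$ can be read as deciding whether job $j$ completes before the non-availability interval, with the knapsack constraint expressing that those jobs fit before it; the precise coefficients follow from expanding the total weighted completion time $\sum_j w_j C_j$, though the paper's exact encoding is not reproduced here.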
Abstract:
In this paper, we first demonstrate that the classical Purcell's vector method when combined with row pivoting yields a consistently small growth factor in comparison to the well-known Gauss elimination method, the Gauss–Jordan method and the Gauss–Huard method with partial pivoting. We then present six parallel algorithms of the Purcell method that may be used for direct solution of linear systems. The algorithms differ in ways of pivoting and load balancing. We recommend algorithms V and VI for their reliability and algorithms III and IV for good load balance if local pivoting is acceptable. Some numerical results are presented.
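The growth factor referred to here is the standard measure of element growth during elimination: with $a_{ij}^{(k)}$ denoting the matrix entries after step $k$,

```latex
\rho = \frac{\max_{i,j,k}\,\lvert a_{ij}^{(k)} \rvert}
            {\max_{i,j}\,\lvert a_{ij} \rvert},
```

so a consistently small $\rho$ indicates that no entry is amplified excessively, which is the sense in which the row-pivoted Purcell method is reported to be stable.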
Abstract:
Distributed applications are being deployed on ever-increasing scale and with ever-increasing functionality. Due to the accompanying increase in behavioural complexity, self-management abilities, such as self-healing, have become core requirements. A key challenge is the smooth embedding of such functionality into our systems. Natural distributed systems such as ant colonies have evolved highly efficient behaviour. These emergent systems achieve high scalability through the use of low-complexity communication strategies and are highly robust through large-scale replication of simple, anonymous entities. Ways to engineer this fundamentally non-deterministic behaviour for use in distributed applications are being explored. An emergent, dynamic cluster-management scheme, which forms part of a hierarchical resource-management architecture, is presented. The architecture has been influenced by natural biological systems, which embed self-healing behaviour at several levels. The resulting system is a simple, lightweight and highly robust platform on which cluster-based autonomic applications can be deployed.
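A toy sketch of the self-healing flavour described, under assumed mechanics (heartbeat-style gossip among anonymous nodes; none of this is taken from the paper's design):

```python
class EmergentCluster:
    """Toy self-healing membership: anonymous nodes gossip heartbeats;
    a node whose heartbeat is unseen for `ttl` rounds is presumed dead
    and silently dropped from the membership. Illustrative only."""

    def __init__(self, nodes, ttl=3):
        self.last_seen = {n: 0 for n in nodes}
        self.ttl = ttl
        self.round = 0

    def tick(self, alive):
        """One gossip round: `alive` is the set of nodes heard from."""
        self.round += 1
        for n in alive:
            self.last_seen[n] = self.round
        dead = [n for n, seen in self.last_seen.items()
                if self.round - seen >= self.ttl]
        for n in dead:
            del self.last_seen[n]   # self-healing: evict without a master
        return dead

c = EmergentCluster({"a", "b", "c"})
for _ in range(3):
    print(c.tick({"a", "b"}))   # "c" is evicted after 3 silent rounds
```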
Abstract:
The paper considers an on-line single machine scheduling problem where the goal is to minimize the makespan. The jobs are partitioned into families and a setup is performed every time the machine starts processing a batch of jobs of the same family. The scheduler is aware of the number of families and knows the setup time of each family, although information about a job only becomes available when that job is released. We give a lower bound on the competitive ratio of any on-line algorithm. Moreover, for the case of two families, we provide an algorithm with a competitive ratio that achieves this lower bound. As the number of families increases, the lower bound approaches 2, and we give a simple algorithm with a competitive ratio of 2.
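The paper's algorithms are not reproduced here; the naive greedy baseline below merely illustrates the setting (batch all released jobs of one family after a single setup) and carries no competitive-ratio guarantee:

```python
def greedy_online_makespan(jobs, setup):
    """jobs: list of (release_time, family, processing_time);
    setup: dict family -> setup time. Whenever the machine is free,
    batch every released, unscheduled job of the family with the
    most released work, paying one setup per batch."""
    pending = sorted(jobs)              # by release time
    t, i, waiting = 0.0, 0, {}
    while i < len(pending) or any(waiting.values()):
        # Admit jobs released by time t.
        while i < len(pending) and pending[i][0] <= t:
            _, f, p = pending[i]
            waiting.setdefault(f, []).append(p)
            i += 1
        ready = {f: ps for f, ps in waiting.items() if ps}
        if not ready:
            t = pending[i][0]           # idle until the next release
            continue
        f = max(ready, key=lambda f: sum(ready[f]))
        t += setup[f] + sum(waiting.pop(f))
    return t

jobs = [(0, "A", 3), (0, "B", 2), (1, "A", 4)]
print(greedy_online_makespan(jobs, {"A": 1, "B": 1}))   # 12
```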
Abstract:
Accurate representation of the coupled effects between turbulent fluid flow with a free surface, heat transfer, solidification, and mold deformation has been shown to be necessary for the realistic prediction of several defects in castings and also for determining the final crystalline structure. A core component of the computational modeling of casting processes involves mold filling, which is the most computationally intensive aspect of casting simulation at the continuum level. Considering the complex geometries involved in shape casting, the evolution of the free surface, gas entrapment, and the entrainment of oxide layers into the casting make this a very challenging task in every respect. Despite well over 30 years of effort in developing algorithms, this is by no means a closed subject. In this article, we will review the full range of computational methods used, from unstructured finite-element (FE) and finite-volume (FV) methods through fully structured and block-structured approaches utilizing the cut-cell family of techniques to capture the geometric complexity inherent in shape casting. This discussion will include the challenges of generating rapid solutions on high-performance parallel cluster technology and how mold filling links in with the full spectrum of physics involved in shape casting. Finally, some indications as to novel techniques emerging now that can address genuinely arbitrarily complex geometries are briefly outlined and their advantages and disadvantages are discussed.
Abstract:
The intrinsic independent features of the optimal codebook cubes searching process in fractal video compression systems are examined and exploited. The design of a suitable parallel algorithm reflecting the concept is presented. The Message Passing Interface (MPI) is chosen to be the communication tool for the implementation of the parallel algorithm on distributed memory parallel computers. Experimental results show that the parallel algorithm is able to reduce the compression time and achieve a high speed-up without changing the compression ratio and the quality of the decompressed image. A scalability test was also performed, and the results show that this parallel algorithm is scalable.
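A sketch of this decomposition using mpi4py as a stand-in for the MPI bindings actually used; the matching-error function and data shapes are toy placeholders, not the paper's fractal codec:

```python
# Run with: mpiexec -n 4 python codebook_search.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

rng = np.random.default_rng(42)          # same seed -> same data on every rank
codebook = rng.random((1024, 64))        # candidate domain cubes (toy data)
block = rng.random(64)                   # one range block to encode

# Each process searches an independent, interleaved slice of the codebook.
mine = codebook[rank::size]
errs = ((mine - block) ** 2).sum(axis=1)  # matching error per candidate
k = int(errs.argmin())
local_best = (float(errs.min()), rank + k * size)   # (error, global index)

# Combine the per-process winners into the global optimum.
best_err, best_idx = comm.allreduce(local_best, op=MPI.MIN)
if rank == 0:
    print(f"best codebook cube {best_idx}, error {best_err:.4f}")
```

Because each slice is searched independently, no communication is needed until the final reduction, which is consistent with the high speed-up and scalability the abstract reports.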