963 results for Running time


Relevance:

60.00%

Publisher:

Abstract:

The paper considers single machine due date assignment and scheduling problems with n jobs, in which the due dates are obtained from the processing times by adding a positive slack q. A schedule is feasible if there are no tardy jobs and the job sequence respects given precedence constraints. The value of q is chosen so as to minimize a function ϕ(F,q) which is non-decreasing in each of its arguments, where F is a certain non-decreasing earliness penalty function. Once q is chosen or fixed, the corresponding scheduling problem is to find a feasible schedule with the minimum value of the function F. In the case of arbitrary precedence constraints the problems under consideration are shown to be NP-hard in the strong sense, even for F being total earliness. If the precedence constraints are defined by a series-parallel graph, both the scheduling and the due date assignment problems are proved solvable in polynomial time, provided that F is either the sum of linear functions or the sum of exponential functions. The running time of the algorithms can be reduced further if the jobs are independent. Scope and purpose: We consider single machine due date assignment and scheduling problems and design fast algorithms for their solution under a wide range of assumptions. The problems under consideration arise in production planning when management is faced with the problem of setting realistic due dates for a number of orders. The due dates of the orders are determined by increasing the time needed for their fulfillment by a common positive slack. If the slack is set large enough, the due dates can easily be met, producing a good image of the firm; this, however, may result in substantial holding costs for the finished products before they are delivered to the customers. The objective is to explore the trade-off between the size of the slack and the resulting holding costs for the early orders.
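
To make the model concrete, here is a minimal Python sketch (not the paper's series-parallel algorithm) of the slack due-date rule the abstract describes: due dates d_j = p_j + q, feasibility means no tardy jobs, and for a fixed job sequence the smallest feasible slack already minimizes any ϕ that is non-decreasing in F and q. The example sequence and cost function ϕ(F, q) = F + 5q are illustrative assumptions.

    def total_earliness(p_seq, q):
        """Total earliness F for a fixed sequence and slack q; None if infeasible."""
        t, F = 0, 0
        for p in p_seq:
            t += p                     # completion time C_j
            d = p + q                  # due date d_j = p_j + q
            if t > d:
                return None            # a tardy job makes the schedule infeasible
            F += d - t                 # earliness E_j = d_j - C_j
        return F

    def best_slack(p_seq, phi):
        # q must satisfy q >= C_j - p_j for every job; since every E_j (and
        # hence F) is non-decreasing in q and phi is non-decreasing in both
        # arguments, the smallest feasible q is optimal for this sequence.
        q = max(sum(p_seq[:i + 1]) - p for i, p in enumerate(p_seq))
        F = total_earliness(p_seq, q)
        return q, F, phi(F, q)

    print(best_slack([3, 1, 2], phi=lambda F, q: F + 5 * q))   # -> (4, 5, 25)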

Relevance:

60.00%

Publisher:

Abstract:

A FORTRAN 90 program is presented which calculates the total cross sections, and the electron energy spectra of the singly and doubly differential cross sections, for the single ionization of neutral target atoms ranging from hydrogen up to and including argon. The code is applicable to both high- and low-Z projectile impact in fast ion-atom collisions. The theoretical models provided for the program user are based on two quantum mechanical approximations which have proved very successful in the study of ionization in ion-atom collisions: the continuum-distorted-wave (CDW) and continuum-distorted-wave eikonal-initial-state (CDW-EIS) approximations. The codes presented here extend previously published codes for single ionization of target hydrogen [Crothers and McCartney, Comput. Phys. Commun. 72 (1992) 288], target helium [Nesbitt, O'Rourke and Crothers, Comput. Phys. Commun. 114 (1998) 385] and target atoms ranging from lithium to neon [O'Rourke, McSherry and Crothers, Comput. Phys. Commun. 131 (2000) 129]. Cross sections for all of these target atoms may be obtained as limiting cases from the present code.
Title of program: ARGON
Catalogue identifier: ADSE
Program summary URL: http://cpc.cs.qub.ac.uk/cpc/summaries/ADSE
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Licensing provisions: none
Computers for which the program is designed and others on which it is operable: four by 200 MHz Pentium Pro Linux server, DEC Alpha 21164; four by 400 MHz Pentium II Xeon 450 Linux server, IBM SP2 and SUN Enterprise 3500
Installations: Queen's University, Belfast
Operating systems under which the program has been tested: Red Hat Linux 5.2, Digital UNIX Version 4.0d, AIX, Solaris SunOS 5.7
Compilers: PGI workstations, DEC CAMPUS
Programming language used: FORTRAN 90 with MPI directives
No. of bits in a word: 64, except on Linux servers 32
Number of processors used: any number
Has the code been vectorized or parallelized? Parallelized using MPI
No. of bytes in distributed program, including test data, etc.: 32 189
Distribution format: tar gzip file
Keywords: single ionization, cross sections, continuum-distorted-wave model, continuum-distorted-wave eikonal-initial-state model, target atoms, wave treatment
Nature of physical problem: The code calculates total and differential cross sections for the single ionization of target atoms ranging from hydrogen up to and including argon by both light and heavy ion impact.
Method of solution: ARGON allows the user to calculate the cross sections using either the CDW or CDW-EIS [J. Phys. B 16 (1983) 3229] models within the wave treatment.
Restrictions on the complexity of the program: Both the CDW and CDW-EIS models are two-state perturbative approximations.
Typical running time: Times vary according to input data and number of processors. For one processor, the test input data for double differential cross sections (40 points) took less than one second, whereas the test input for total cross sections (20 points) took 32 minutes.
Unusual features of the program: none
(C) 2003 Elsevier B.V. All rights reserved.

Relevance:

60.00%

Publisher:

Abstract:

This paper concerns randomized leader election in synchronous distributed networks. A distributed leader election algorithm is presented for complete n-node networks that runs in O(1) rounds and, with high probability, uses only O(√n log^{3/2} n) messages to elect a unique leader. This algorithm is then extended to solve leader election on any connected non-bipartite n-node graph G in O(τ(G)) time and O(τ(G) √n log^{3/2} n) messages, where τ(G) is the mixing time of a random walk on G. The above result implies highly efficient (sublinear running time and messages) leader election algorithms for networks with small mixing times, such as expanders and hypercubes. In contrast, previous leader election algorithms had at least linear message complexity even in complete graphs. Moreover, super-linear message lower bounds are known for time-efficient deterministic leader election algorithms. Finally, an almost-tight lower bound is presented for randomized leader election, showing that Ω(√n) messages are needed for any O(1)-time leader election algorithm which succeeds with high probability. It is also shown that Ω(n^{1/3}) messages are needed by any leader election algorithm that succeeds with high probability, regardless of the number of rounds. We view our results as a step towards understanding the randomized complexity of leader election in distributed networks.
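
The message savings come from sampling: only a few random candidates compete, and each one talks to a random "referee" sample of roughly √n·log n nodes, so any two candidates share a referee with high probability (a birthday-paradox argument). The following self-contained Python toy imitates that idea; the constants, the two-phase referee protocol, and the random ranks are illustrative assumptions, not the paper's exact algorithm.

    import math
    import random

    def elect(n, c=3, seed=1):
        rng = random.Random(seed)
        # Phase 0: each node becomes a candidate with prob. ~ c*log(n)/n,
        # so only O(log n) candidates arise with high probability.
        p = min(1.0, c * math.log(n) / n)
        candidates = [v for v in range(n) if rng.random() < p] or [0]
        rank = {v: rng.random() for v in candidates}   # random ranks break ties
        k = int(math.sqrt(n) * math.log(n))            # referee sample size
        refs = {v: rng.sample(range(n), min(k, n)) for v in candidates}
        # Phase 1: every candidate registers its rank with its referees.
        best_seen = {}
        for v in candidates:
            for r in refs[v]:
                best_seen[r] = min(best_seen.get(r, 1.0), rank[v])
        # Phase 2: a candidate survives only if none of its referees saw a
        # smaller rank; overlapping samples kill all but the minimum w.h.p.
        return [v for v in candidates
                if all(best_seen[r] >= rank[v] for r in refs[v])]

    print(elect(10_000))   # almost always a single surviving leader

Total traffic here is about |candidates| · k = O(√n log² n) messages, already sublinear; the paper's tighter O(√n log^{3/2} n) bound needs a more careful protocol.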

Relevance:

60.00%

Publisher:

Abstract:

Performance evaluation of parallel software and architectural exploration of innovative hardware support face a common challenge on emerging manycore platforms: they are limited by the slow running time and low accuracy of software simulators. Manycore FPGA prototypes are difficult to build, but they offer great rewards: software running on such prototypes runs orders of magnitude faster than on current simulators, and researchers gain significant architectural insight during the modeling process. We use the Formic FPGA prototyping board [1], which specifically targets scalable and cost-efficient multi-board prototyping, to build and test a 64-board model of a 512-core, MicroBlaze-based, non-coherent hardware prototype with a full network-on-chip in a 3D-mesh topology. We expand the hardware architecture to include the ARM Versatile Express platform and build a 520-core heterogeneous prototype of 8 Cortex-A9 cores and 512 MicroBlaze cores. We then develop an MPI library for the prototype and evaluate it extensively using several bare-metal and MPI benchmarks. We find that our processor prototype is highly scalable, faithfully models single-chip multicore architectures, and is a very efficient platform for parallel programming research, being 50,000 times faster than software simulation.
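
As a flavor of the kind of microbenchmark such an evaluation uses, here is a standard MPI ping-pong latency test in Python with mpi4py. This is purely illustrative: the prototype runs its own bare-metal MPI library on MicroBlaze and Cortex-A9 cores, not mpi4py.

    # run with: mpiexec -n 2 python pingpong.py
    import time
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    msg = bytearray(1024)              # 1 KiB payload
    reps = 10_000

    comm.Barrier()
    t0 = time.perf_counter()
    for _ in range(reps):
        if rank == 0:
            comm.Send(msg, dest=1)     # ping
            comm.Recv(msg, source=1)   # pong
        elif rank == 1:
            comm.Recv(msg, source=0)
            comm.Send(msg, dest=0)
    comm.Barrier()

    if rank == 0:
        rtt = (time.perf_counter() - t0) / reps
        print(f"1 KiB round trip: {rtt * 1e6:.1f} us")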

Relevance:

60.00%

Publisher:

Abstract:

This paper concerns randomized leader election in synchronous distributed networks. A distributed leader election algorithm is presented for complete n-node networks that runs in O(1) rounds and, with high probability, uses only O(√n log^{3/2} n) messages to elect a unique leader. When considering the "explicit" variant of leader election, where eventually every node knows the identity of the leader, our algorithm yields the asymptotically optimal bounds of O(1) rounds and O(n) messages. This algorithm is then extended to one solving leader election on any connected non-bipartite n-node graph G in O(τ(G)) time and O(τ(G) √n log^{3/2} n) messages, where τ(G) is the mixing time of a random walk on G. The above result implies highly efficient (sublinear running time and messages) leader election algorithms for networks with small mixing times, such as expanders and hypercubes. In contrast, previous leader election algorithms had at least linear message complexity even in complete graphs. Moreover, super-linear message lower bounds are known for time-efficient deterministic leader election algorithms. Finally, we present an almost matching lower bound for randomized leader election, showing that Ω(√n) messages are needed for any leader election algorithm that succeeds with probability at least 1/e + ε, for any small constant ε > 0. We view our results as a step towards understanding the randomized complexity of leader election in distributed networks.

Relevance:

60.00%

Publisher:

Abstract:

Authenticated encryption algorithms protect both the confidentiality and integrity of messages in a single processing pass. We show how to utilize the L◦P ◦S transform of the Russian GOST R 34.11-2012 standard hash “Streebog” to build an efficient, lightweight algorithm for Authenticated Encryption with Associated Data (AEAD) via the Sponge construction. The proposed algorithm “StriBob” has attractive security properties, is faster than the Streebog hash alone, twice as fast as the GOST 28147-89 encryption algorithm, and requires only a modest amount of running-time memory. StriBob is a Round 1 candidate in the CAESAR competition.
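
For readers unfamiliar with the Sponge construction, the sketch below shows the duplex encrypt-and-authenticate shape in Python. It is a toy: the permutation is a SHA-256 stand-in rather than Streebog's L◦P◦S transform, the rate/capacity split is arbitrary, and nothing here is secure or part of StriBob itself.

    import hashlib

    RATE, CAP = 16, 16                       # toy split: 32-byte state

    def permute(state: bytes) -> bytes:
        # Illustrative stand-in; StriBob instead builds its permutation
        # from the LPS rounds of the Streebog hash (GOST R 34.11-2012).
        return hashlib.sha256(state).digest()

    def duplex(state: bytes, block: bytes) -> bytes:
        """XOR a block into the rate part of the state, then permute."""
        rate = bytes(a ^ b for a, b in zip(state[:RATE], block.ljust(RATE, b'\0')))
        return permute(rate + state[RATE:])

    def aead_encrypt(key: bytes, nonce: bytes, ad: bytes, msg: bytes):
        state = permute(key.ljust(RATE, b'\0') + nonce.ljust(CAP, b'\0'))
        for i in range(0, len(ad), RATE):            # absorb associated data
            state = duplex(state, ad[i:i + RATE])
        ct = b''
        for i in range(0, len(msg), RATE):           # keystream from the rate part
            block = msg[i:i + RATE]
            ct += bytes(a ^ b for a, b in zip(block, state))
            state = duplex(state, block)             # absorb plaintext (duplexing)
        return ct, permute(state)[:16]               # squeeze a 128-bit tag

    ct, tag = aead_encrypt(b'key', b'nonce', b'header', b'attack at dawn')
    print(ct.hex(), tag.hex())

Decryption mirrors this pass: regenerate the keystream, absorb the recovered plaintext, and reject the message unless the recomputed tag matches.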

Relevance:

60.00%

Publisher:

Abstract:

In recent years we have witnessed a change in how information is made available online. The emergence of the web for everyone made it easy to edit, publish, and share information, generating a considerable increase in its volume. Systems quickly appeared that allow this information to be collected and shared and that, besides enabling the collection of resources, also let users describe them using tags or comments. The automatic organization of this information is one of the greatest challenges of the current web. Although several clustering algorithms exist, the trade-off between effectiveness (forming groups that make sense) and efficiency (running in acceptable time) is hard to achieve. This research therefore asks whether an automatic document-clustering system becomes more effective when a social classification system is integrated into it. We analyze and discuss two methods, based on the k-means algorithm, for document clustering that allow social tagging to be integrated into the process. The first integrates the tags directly into the Vector Space Model; the second proposes using the tags to select the initial seeds. The first method allows the tags to be weighted according to their occurrence in the document through a parameter called the Social Slider. This method was built on a prediction model suggesting that, under cosine similarity, documents that share tags move closer together while those that do not move further apart. The second method gave rise to an algorithm we call k-C, which, in addition to selecting the initial seeds through a tag network, also changes how the new centroids are computed at each iteration. The change to the centroid computation followed from a reflection on the use of Euclidean distance and cosine similarity in the k-means clustering algorithm. For evaluating the algorithms, two further algorithms were proposed: an "automatic ground truth" algorithm and the MCI algorithm. The first detects the structure of the data when it is unknown, and the second is an internal evaluation measure based on the cosine similarity between each document and its nearest document. Preliminary results suggest that the first method of integrating tags into the VSM has more impact on the k-means algorithm than on the k-C algorithm. Moreover, the results show no correlation between the choice of the SS parameter and cluster quality. The remaining tests were therefore conducted using only the k-C algorithm (without tag integration in the VSM), and the results indicate that this algorithm tends to generate more effective clusters.
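
A minimal Python sketch of the first method may help: tags are added to the bag-of-words vector with a weight given by the Social Slider parameter, and documents are clustered by cosine-similarity k-means. The toy corpus, the weight value, and the random seeding are illustrative assumptions; the k-C algorithm's tag-network seeding and modified centroid update are only hinted at in the comments.

    import math
    import random
    from collections import Counter

    def vectorize(words, tags, ss):
        """Bag-of-words vector with tags weighted by the Social Slider ss."""
        v = Counter(words)
        for t in tags:
            v[t] += ss                   # ss = 0 ignores tags entirely
        return v

    def cos(a, b):
        dot = sum(a[w] * b[w] for w in a if w in b)
        na = math.sqrt(sum(x * x for x in a.values()))
        nb = math.sqrt(sum(x * x for x in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def kmeans_cos(docs, k, iters=10, seed=4):
        centroids = random.Random(seed).sample(docs, k)   # (k-C would instead
        for _ in range(iters):                            #  seed from a tag network)
            clusters = [[] for _ in range(k)]
            for d in docs:
                clusters[max(range(k), key=lambda i: cos(d, centroids[i]))].append(d)
            centroids = [sum(c, Counter()) if c else centroids[i]
                         for i, c in enumerate(clusters)]
        return clusters

    docs = [vectorize(["optical", "network"], ["networking"], ss=2),
            vectorize(["packet", "routing"], ["networking"], ss=2),
            vectorize(["graph", "coloring"], ["algorithms"], ss=2),
            vectorize(["sorting", "lists"], ["algorithms"], ss=2)]
    for c in kmeans_cos(docs, k=2):
        print([sorted(d) for d in c])

With ss = 2, two documents sharing a tag have positive cosine similarity even without shared words, which is exactly the attraction effect the prediction model describes.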

Relevance:

60.00%

Publisher:

Abstract:

Network survivability is a very interesting field of technical study as well as a critical concern in network design. Given that more and more data are carried over communication networks, a single failure can disrupt millions of users and cause millions of dollars in lost revenue. Network protection techniques provide spare capacity in a network and automatically reroute flows around a failure using that available capacity. This thesis addresses the design of survivable optical networks using protection schemes based on p-cycles. More precisely, path-protecting p-cycles are studied in the context of link failures. Our study focuses on the placement of p-cycle protection structures, assuming that the working paths for all requests are defined a priori. Most existing work relies on heuristics or on solution methods that struggle with large instances. The goal of this thesis is twofold. On the one hand, we propose models and solution methods capable of tackling larger problem instances than those previously reported in the literature. On the other hand, thanks to new algorithms, we are able to produce optimal or near-optimal solutions. To this end, we rely on column generation, a technique well suited to solving large-scale linear programs. In this work, column generation is used as an intelligent way of implicitly enumerating promising cycles. We first propose formulations for the master and pricing problems, together with a first column generation algorithm for designing networks protected by path-protecting p-cycles. The algorithm obtains better solutions, within reasonable time, than existing methods. Next, a more compact formulation is proposed for the pricing problem, and we present a new hierarchical decomposition method that greatly improves the overall efficiency of the algorithm. Regarding integer solutions, we propose two heuristic methods that find good solutions. We also carry out a systematic comparison between p-cycles and classical shared protection schemes, using unified column-generation-based formulations to obtain high-quality results. We then empirically evaluate the directed and undirected versions of p-cycles for link protection as well as path protection under asymmetric traffic scenarios, and show the additional protection cost incurred when bidirectional systems are used in such scenarios. Finally, we study a column generation formulation for the design of p-cycle networks under availability requirements and obtain the first lower bounds for this problem.
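
The mechanics of column generation can be shown compactly on a classic toy, the cutting-stock problem; the loop structure (a restricted master over known columns, plus a pricing problem that returns a column with negative reduced cost) is the same one used to enumerate promising p-cycles implicitly, though the model below is illustrative and unrelated to optical networks. The sketch assumes NumPy and SciPy.

    import numpy as np
    from scipy.optimize import linprog

    widths, demand, roll = [3, 5, 7], [25, 20, 18], 16   # toy instance

    def price(duals):
        """Pricing: the most valuable feasible cutting pattern under the duals
        (brute force here; real pricing problems are solved as knapsacks or,
        for p-cycles, as constrained cycle searches)."""
        best = np.zeros(len(widths), dtype=int)
        def rec(i, left, cur):
            nonlocal best
            if i == len(widths):
                if duals @ cur > duals @ best:
                    best = cur.copy()
                return
            for k in range(left // widths[i] + 1):
                cur[i] = k
                rec(i + 1, left - k * widths[i], cur)
            cur[i] = 0
        rec(0, roll, np.zeros(len(widths), dtype=int))
        return best

    cols = [np.eye(len(widths), dtype=int)[i] for i in range(len(widths))]
    while True:
        A = np.array(cols).T                     # restricted master: min sum x
        res = linprog(np.ones(len(cols)), A_ub=-A, b_ub=-np.array(demand),
                      method="highs")            # s.t. A x >= demand, x >= 0
        duals = -res.ineqlin.marginals           # dual prices of the demands
        col = price(duals)
        if duals @ col <= 1 + 1e-9:              # no column with negative
            break                                # reduced cost: LP optimal
        cols.append(col)
    print(f"LP bound {res.fun:.2f} rolls using {len(cols)} columns")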

Relevance:

60.00%

Publisher:

Abstract:

In this thesis, we address the minimum-cardinality connected dominating set problem. We focus in particular on developing solution methods based on constraint programming and integer programming. We present a heuristic and several exact methods that can also be used as heuristics if their running time is limited. In particular, we describe an algorithm based on Benders decomposition, another combining Benders decomposition with an iterative investigation strategy, a variant of the latter using constraint programming, and finally a method relying solely on constraint programming. Experimental results show that these methods are effective, improving on the methods known in the literature. In particular, Benders decomposition combined with the iterative investigation strategy yields the best results.
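
For orientation, the following Python sketch is a simple greedy baseline for the problem, of the kind exact methods are benchmarked against; it is not one of the Benders or constraint programming methods developed in the thesis, and the example graph is an arbitrary assumption.

    def greedy_cds(adj):
        """Grow a connected dominating set greedily: starting from the
        highest-degree vertex, repeatedly add the frontier vertex that
        dominates the most not-yet-dominated vertices."""
        start = max(adj, key=lambda v: len(adj[v]))
        cds, dominated = {start}, {start} | set(adj[start])
        while len(dominated) < len(adj):
            frontier = {u for v in cds for u in adj[v]} - cds
            best = max(frontier, key=lambda u: len(set(adj[u]) - dominated))
            cds.add(best)
            dominated |= {best} | set(adj[best])
        return cds

    ring = {v: [(v - 1) % 6, (v + 1) % 6] for v in range(6)}   # a 6-cycle
    print(greedy_cds(ring))                                    # 4 vertices

Greedy gives no optimality guarantee (here 4 vertices happens to be optimal for a 6-cycle, but in general the gap can be large), which is precisely why exact methods such as Benders decomposition are of interest.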

Relevance:

60.00%

Publisher:

Abstract:

The markup language XML is used to annotate documents and has established itself as a standard data-exchange format. This creates the need not only to store and transfer XML documents as plain text files, but also to persist them in a better-structured form, for instance in dedicated XML databases or in relational databases. Relational databases have so far used two fundamentally different approaches: XML documents are either stored unchanged as binary or character-string objects, or they are split up so that they can be stored, normalized, in conventional relational tables (so-called "flattening" or "shredding" of the hierarchical structure). This dissertation pursues a new approach that represents a middle ground between the existing solutions and draws on the evolved SQL standard. SQL:2003 defines complex structured and collection types (tuples, arrays, lists, sets, multisets) that allow XML documents to be mapped onto relational structures in a way that preserves their hierarchical composition. This offers two advantages. On the one hand, proven technologies from the relational database world remain fully available. On the other hand, the SQL:2003 types preserve the inherent tree structure of XML documents, so it is not necessary to reassemble them on demand via expensive joins over tuples that are usually normalized and spread across several tables. This work first clarifies fundamental questions about suitable, efficient ways of mapping XML documents onto SQL:2003-conformant data types. Building on this, a suitable, reversible mapping procedure is developed, implemented, and analyzed within a prototype application. In designing the mapping procedure, particular attention is paid to its usability with an existing, mature relational database management system (DBMS). Since commercial DBMSs support SQL:2003 only incompletely so far, the individual systems must be examined for their suitability for the mapping procedure to be implemented; among the products considered, the DBMS IBM Informix turns out to offer the best support for complex structured and collection types. To better assess the performance of the procedure, the work measures the implementation's runtime requirements and its working-memory and database-storage footprint, and evaluates the results.
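
The core idea, keeping the tree intact as nested row and collection values instead of shredding it, can be illustrated in a few lines of Python (the dissertation of course targets SQL:2003 ROW/LIST/MULTISET columns in a relational DBMS such as IBM Informix, not Python objects):

    import xml.etree.ElementTree as ET

    def to_row(elem):
        """Map an element to a (tag, attributes, text, children) 'row';
        the ordered child list plays the role of a nested SQL:2003
        collection, so no joins are needed to reassemble the tree."""
        return (elem.tag,
                dict(elem.attrib),
                (elem.text or '').strip(),
                [to_row(child) for child in elem])

    doc = ET.fromstring(
        "<order id='7'><item sku='a1'>2</item><item sku='b2'>5</item></order>")
    print(to_row(doc))
    # ('order', {'id': '7'}, '', [('item', {'sku': 'a1'}, '2', []),
    #                             ('item', {'sku': 'b2'}, '5', [])])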

Relevance:

60.00%

Publisher:

Abstract:

This thesis describes Optimist, an optimizing compiler for the Concurrent Smalltalk language developed by the Concurrent VLSI Architecture Group. Optimist compiles Concurrent Smalltalk to the assembly language of the Message-Driven Processor (MDP). The compiler includes numerous optimization techniques such as dead code elimination, dataflow analysis, constant folding, move elimination, concurrency analysis, duplicate code merging, tail forwarding, use of register variables, as well as various MDP-specific optimizations in the code generator. The MDP presents some unique challenges and opportunities for compilation. Due to the MDP's small memory size, it is critical that the size of the generated code be as small as possible. The MDP is an inherently concurrent processor with efficient mechanisms for sending and receiving messages; the compiler takes advantage of these mechanisms. The MDP's tagged architecture allows very efficient support of object-oriented languages such as Concurrent Smalltalk. The initial goals for the MDP were to have the MDP execute about twenty instructions per method and contain 4096 words of memory. This compiler shows that these goals are too optimistic -- most methods are longer, both in terms of code size and running time. Thus, the memory size of the MDP should be increased.
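
To illustrate just one of the listed optimizations, here is a constant-folding pass written against Python's ast module; it is a didactic stand-in, since Optimist itself folds constants in Concurrent Smalltalk code while emitting MDP assembly.

    import ast

    class ConstantFolder(ast.NodeTransformer):
        """Replace arithmetic on literal operands with the computed literal."""
        def visit_BinOp(self, node):
            self.generic_visit(node)             # fold the children first
            ops = {ast.Add: lambda a, b: a + b,
                   ast.Sub: lambda a, b: a - b,
                   ast.Mult: lambda a, b: a * b}
            if (type(node.op) in ops
                    and isinstance(node.left, ast.Constant)
                    and isinstance(node.right, ast.Constant)):
                value = ops[type(node.op)](node.left.value, node.right.value)
                return ast.copy_location(ast.Constant(value), node)
            return node

    tree = ConstantFolder().visit(ast.parse("x = (2 + 3) * 4 - y"))
    print(ast.unparse(ast.fix_missing_locations(tree)))   # -> x = 20 - y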

Relevance:

60.00%

Publisher:

Abstract:

The IPLV overall coefficient, introduced by the Air-Conditioning and Refrigeration Institute (ARI) of America, reflects the part-load running status of the air-conditioning system's host (the chiller) only; no comparable coefficient has been developed to reflect the whole air-conditioning system under part load. In this research, the running-time proportions of air-conditioning systems under part load were obtained by analyzing energy consumption data recorded during practical operation in public buildings in Chongqing, using methods based on the month-by-month statistical distribution of the buildings' energy consumption. Compared with the fixed weighting numbers of IPLV, the part-load operation coefficient derived in this research not only shows the status of the refrigerating host but also reflects, and allows calculation of, the energy efficiency of the whole air-conditioning system. Because the coefficient results from processing and analyzing practical running data, it captures the actual running status of the region and building type under study. The method also differs from the model analysis that yields the IPLV weighting numbers in that it can produce both the four standard load proportions and the part-load operation coefficient of the air-conditioning system at any load rate, as required.
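
For reference, the ARI rating combines chiller COPs at the 100/75/50/25 % load points with fixed weights; the study's coefficient replaces those weights with measured running-time proportions. A sketch, where the measured proportions and COP values are made-up illustrations rather than the Chongqing data:

    IPLV_WEIGHTS = (0.01, 0.42, 0.45, 0.12)   # ARI 550/590 weights for
                                              # 100 %, 75 %, 50 %, 25 % load

    def part_load_coefficient(cops, weights):
        """Weighted average COP over the four part-load bands."""
        return sum(w * c for w, c in zip(weights, cops))

    cops = (5.0, 5.4, 5.1, 4.2)               # example chiller COPs per band
    measured = (0.10, 0.35, 0.40, 0.15)       # running-time proportions from
                                              # monitored consumption data
    print(part_load_coefficient(cops, IPLV_WEIGHTS))   # standard IPLV
    print(part_load_coefficient(cops, measured))       # whole-system coefficient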

Relevance:

60.00%

Publisher:

Abstract:

Results from both experimental measurements and 3D numerical simulations of a Ground Source Heat Pump (GSHP) system in a UK climate are presented. Experimental measurements of a horizontal-coupled slinky GSHP were undertaken at Talbot Cottage, Drayton St Leonard, Oxfordshire, UK, and the measured thermophysical properties of the in situ soil were used in the CFD model. The thermal performance of the slinky heat exchangers of the horizontal-coupled GSHP system was investigated for different coil diameters and slinky interval distances using a validated 3D model. Monitoring the performance of the GSHP system over a two-month period showed that the COP decreased with running time; the average COP of the horizontal-coupled GSHP was 2.5. The numerical predictions showed no significant difference in the specific heat extraction of the slinky heat exchanger for different coil diameters; however, the larger the coil diameter, the higher the heat extraction per metre length of soil. The specific heat extraction also increased, while the heat extraction per metre length of soil decreased, as the central interval distance between coils increased.

Relevance:

60.00%

Publisher:

Abstract:

Direct outdoor-air cooling contributes significantly not only to improving indoor air quality but also to saving energy. Its full use reduces the water chiller's running time, especially in stores, where the cooling load remains much higher for much longer than in other buildings. A novel air-conditioning system named the Combined Variable Air Volume (CVAV) system, which combines a normal AHU with a separate outdoor-air supply system, is proposed by the authors. The most attractive feature of the system is its full utilization of the cooling capacity and freshness of outdoor air during the transition periods of the year. Based on the dynamic cooling loads of typical shopping malls in four cities in cold climates in China, obtained with the aid of DOE-2, the possibility of increasing the outdoor-air volume of the CVAV system in the transition period instead of operating the water chillers was confirmed. Moreover, a new concept, the Direct Outdoor Air Cooling Efficiency (DOACE), was defined as the ratio of the cooling capacity delivered by outdoor air to that of the water chiller, indicating the degree of utilization of outdoor air. The DOACE of the CVAV system was calculated and compared with that of conventional all-air constant-volume air-conditioning systems. The results showed that the CVAV system has much greater energy-saving potential, with a 10%-19% higher DOACE, and that it is an energy-efficient system that can also improve indoor air quality.
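
Since DOACE is defined in the abstract as a ratio of cooling capacities, a small sketch suffices; the sensible-heat estimate and the numbers are generic textbook assumptions, not values from the paper.

    RHO, CP = 1.2, 1.005          # air density (kg/m^3), specific heat (kJ/kg K)

    def outdoor_air_cooling_kw(flow_m3_s, t_room_c, t_out_c):
        """Sensible cooling delivered by outdoor air, in kW."""
        return max(0.0, RHO * flow_m3_s * CP * (t_room_c - t_out_c))

    def doace(q_outdoor_air, q_chiller):
        """Direct Outdoor Air Cooling Efficiency: outdoor air's cooling
        contribution relative to the water chiller's."""
        return q_outdoor_air / q_chiller

    q_oa = outdoor_air_cooling_kw(10, 26, 18)   # 10 m^3/s at 18 degC into 26 degC
    print(f"{q_oa:.1f} kW free cooling, DOACE = {doace(q_oa * 8, 4000):.2f}")
    # 8 h of that outdoor-air cooling vs. 4000 kWh delivered by the chiller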

Relevance:

60.00%

Publisher:

Abstract:

A bipartite graph G = (V, W, E) is convex if there exists an ordering of the vertices of W such that, for each v ∈ V, the neighbors of v are consecutive in W. We describe both a sequential and a BSP/CGM algorithm to find a maximum independent set in a convex bipartite graph. The sequential algorithm improves on the running time of the previously known algorithm, and the BSP/CGM algorithm is a parallel version of the sequential one. The complexity of the algorithms does not depend on |W|.
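
A standard route to a maximum independent set in bipartite graphs goes through a maximum matching (by König's theorem, a maximum independent set has |V| + |W| − |M| vertices). For convex instances the matching itself has a neat greedy solution, sketched below in Python; this is the classical approach, not necessarily the improved algorithm of the paper.

    import heapq

    def max_matching_convex(intervals, w_size):
        """Greedy maximum matching in a convex bipartite graph, where
        intervals[i] = (l, r) says vertex i of V is adjacent to every
        w in [l, r]. Scan W in order, always matching the available
        V-vertex whose interval ends first (optimal for convex graphs)."""
        by_start = sorted(range(len(intervals)), key=lambda i: intervals[i][0])
        heap, j, matching = [], 0, {}
        for w in range(w_size):
            while j < len(by_start) and intervals[by_start[j]][0] <= w:
                i = by_start[j]
                heapq.heappush(heap, (intervals[i][1], i))  # key: right endpoint
                j += 1
            while heap and heap[0][0] < w:                  # interval expired
                heapq.heappop(heap)
            if heap:
                matching[w] = heapq.heappop(heap)[1]
        return matching

    iv = [(0, 1), (0, 0), (2, 3)]          # 3 V-vertices over W = {0, 1, 2, 3}
    m = max_matching_convex(iv, 4)
    print(len(m), "matched; independent set size =", 3 + 4 - len(m))

Note that this sketch scans all of W, whereas the algorithms in the paper achieve complexity independent of |W|.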