75 results for NETTRA-PG1-FIFO


Relevance:

10.00%

Publisher:

Abstract:

Data-intensive Grid applications require huge data transfers between geographically separated grid computing nodes, where computing jobs are executed. A grid network that employs optical wavelength-division multiplexing (WDM) technology and optical switches to interconnect computing resources with dynamically provisioned, multi-gigabit-rate lightpaths is called a Lambda Grid network. A computing task may be executed on any of several computing nodes that possess the necessary resources. To reflect reality in job scheduling, the allocation of network resources for data transfer should be taken into consideration; however, few scheduling methods consider communication contention on Lambda Grids. In this paper, we investigate the joint scheduling problem, considering both optical network and computing resources in a Lambda Grid network. The objective of our work is to maximize the total number of jobs that can be scheduled in the network. An adaptive routing algorithm is proposed and implemented to accomplish the communication tasks of every job submitted to the network, and four heuristics (FIFO, ESTF, LJF, RS) are implemented for scheduling the computational tasks. Simulation results demonstrate the feasibility and efficiency of the proposed solution.
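As an illustration of the simplest of the four heuristics, the FIFO rule can be sketched as assigning each submitted job, in arrival order, to the earliest-available computing node. This is a minimal Python sketch under the assumption of identical nodes, not the paper's implementation, and it ignores the data-transfer and lightpath-allocation side of the problem:

```python
import heapq

def fifo_schedule(jobs, num_nodes):
    """Assign jobs to compute nodes in submission (FIFO) order.

    jobs: list of (job_id, duration); num_nodes: identical nodes.
    Returns {job_id: (node, start, finish)}.
    """
    # Min-heap of (time the node becomes free, node_id).
    free_at = [(0, n) for n in range(num_nodes)]
    heapq.heapify(free_at)
    schedule = {}
    for job_id, duration in jobs:
        t, node = heapq.heappop(free_at)     # earliest-available node
        schedule[job_id] = (node, t, t + duration)
        heapq.heappush(free_at, (t + duration, node))
    return schedule
```

For example, three jobs of durations 3, 2 and 1 on two nodes finish at times 3, 2 and 3 respectively: the third job waits for node 1 to free up at time 2.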

Relevance:

10.00%

Publisher:

Abstract:

Web content hosting, in which a Web server stores and provides Web access to documents for different customers, is becoming increasingly common. For example, a single web server can host webpages for several companies and individuals. Traditionally, Web Service Providers (WSPs) give all customers the same level of performance (best-effort service); most service differentiation has been in the pricing structure (individual vs. business rates) or the connectivity type (dial-up access vs. leased line, etc.). This report presents DiffServer, a program that implements two simple, server-side, application-level mechanisms (server-centric and client-centric) to provide different levels of web service. The experiments show that this additional layer of abstraction between the client and the Apache web server adds little overhead under light load, and that the average waiting time of high-priority requests decreases significantly once priorities are assigned, compared with a FIFO approach.
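The core idea of serving high-priority requests first can be sketched with a priority queue that preserves FIFO order among requests of equal priority. This is an illustrative sketch, not DiffServer's actual code:

```python
import heapq
import itertools

class PriorityDispatcher:
    """Serve queued requests by customer priority (lower value = higher
    priority); FIFO order is preserved among equal priorities."""

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # tie-breaker preserving arrival order

    def enqueue(self, request, priority):
        heapq.heappush(self._heap, (priority, next(self._seq), request))

    def next_request(self):
        if not self._heap:
            return None
        return heapq.heappop(self._heap)[2]
```

With a plain FIFO queue, a premium request stuck behind a burst of best-effort traffic waits for all of it; here it is dispatched as soon as the server is free.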

Relevance:

10.00%

Publisher:

Abstract:

ALICE, an experiment at CERN's LHC, is specialized in analyzing lead-ion collisions. ALICE will study the properties of quark-gluon plasma, a state of matter in which quarks and gluons, under conditions of very high temperature and density, are no longer confined inside hadrons. Such a state of matter probably existed just after the Big Bang, before particles such as protons and neutrons were formed. The SDD detector, one of the ALICE subdetectors, is part of the ITS, which is composed of six cylindrical layers with the innermost one attached to the beam pipe. The ITS tracks and identifies particles near the interaction point, and it also helps align the tracks of the particles detected by the outer detectors. The two middle ITS layers contain all 260 SDD detectors. A multichannel readout board called CARLOSrx simultaneously receives the data coming from 12 SDD detectors; in total, 24 CARLOSrx boards are needed to read the data coming from all the SDD modules (detector plus front-end electronics). CARLOSrx packs the data arriving from the front-end electronics over optical links, stores them in a large data FIFO, and then sends them to the DAQ system. Each CARLOSrx consists of two boards: one, called CARLOSrx data, reads the data coming from the SDD detectors and configures the FEE; the other, called CARLOSrx clock, distributes the clock signal to all the FEE. This thesis describes the hardware design and firmware features of both the CARLOSrx data and CARLOSrx clock boards, which handle the whole SDD readout chain. A description of the software tools needed to test and configure the front-end electronics is presented at the end of the thesis.

Relevance:

10.00%

Publisher:

Abstract:

This contribution applies the Ant Colony System (ACS) algorithm to the sequencing of cross-transfer shuttles (Querverteil-Wagen) in a warehouse. We extend the basic Ant Colony Optimization (ACO) algorithm to minimize the processing time of a set of transport orders for the shuttles. Compared with a greedy algorithm, the ACO algorithm is competitive and fast. In many warehouse management systems, transport orders are executed according to the FIFO (first-in, first-out) principle. In this contribution, the ACO algorithm is used instead to build an optimal sequence of transport orders.
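A minimal ACO-style sketch of the idea (probabilistic construction of job sequences guided by pheromone trails, with evaporation and reinforcement of the best sequence) might look as follows. The parameter names and update rule are generic ACO conventions, not the authors' implementation:

```python
import random

def aco_sequence(dist, n_ants=20, n_iters=50, alpha=1.0, beta=2.0,
                 rho=0.1, seed=0):
    """Order jobs to minimise total travel time between consecutive jobs.

    dist[i][j] = time to move from job i's location to job j's.
    Returns (best_sequence, best_cost).
    """
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]   # pheromone trails
    best_seq, best_cost = None, float("inf")
    for _ in range(n_iters):
        for _ant in range(n_ants):
            start = rng.randrange(n)
            seq = [start]
            unvisited = set(range(n)) - {start}
            while unvisited:
                i = seq[-1]
                cand = list(unvisited)
                # Pheromone-weighted, distance-discounted random choice.
                weights = [tau[i][j] ** alpha / (dist[i][j] + 1e-9) ** beta
                           for j in cand]
                j = rng.choices(cand, weights=weights)[0]
                seq.append(j)
                unvisited.remove(j)
            cost = sum(dist[a][b] for a, b in zip(seq, seq[1:]))
            if cost < best_cost:
                best_seq, best_cost = seq, cost
        # Evaporate everywhere, then reinforce the best sequence found.
        for row in tau:
            for j in range(n):
                row[j] *= 1.0 - rho
        for a, b in zip(best_seq, best_seq[1:]):
            tau[a][b] += 1.0 / best_cost
    return best_seq, best_cost
```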

Relevance:

10.00%

Publisher:

Abstract:

Simulation shows that the maximum achievable throughput of aisle rack warehouses operated under the two common storage strategies, cross-aisle distribution and FIFO, can initially drop markedly after an aisle that was temporarily out of service is brought back into operation. Theoretical considerations of how the storage units of one article are distributed across the individual aisles are used to describe the resulting changes in that distribution, the underlying causes, and the consequences they force on the operating behaviour of the warehouse. Simulation experiments demonstrate the influence of outage duration and assortment breadth on the consequences of an aisle failure, and a general hypothesis about the possible causal relationships is formulated. One way to prevent the negative failure consequences described is to determine an appropriate substitute retrieval sequence.

Relevance:

10.00%

Publisher:

Abstract:

IS1296, a new insertion sequence belonging to the IS3 family of insertion elements, has been identified in Mycoplasma mycoides subsp. mycoides (Mmm) biotype small colony (SC), the agent of contagious bovine pleuropneumonia (CBPP). IS1296 is 1485 bp long and has 30-bp inverted repeats. It contains two open reading frames, ORFA and ORFB, which show significant similarities to the ORFs that encode the transposase function of IS elements of the IS3 family, in particular IS150 of Escherichia coli. IS1296 is present in 19 copies in Mmm SC-type strain PG1 and in 18 copies in a recently isolated field strain L2. It seems to transpose at low frequency in Mmm SC. IS1296 is also present in 5 copies in Mmm biotype large colony (LC)-type strain Y-goat, and in two copies in Mycoplasma sp. 'bovine group 7' reference strain PG50. It is, however, not present in other species of the 'mycoides cluster' or other closely related Mycoplasma sp. of ruminants.

Relevance:

10.00%

Publisher:

Abstract:

I propose that the last-in, first-out (LIFO) inventory valuation method needs to be reevaluated. I will evaluate the impact of the LIFO method on the earnings of publicly traded companies with a LIFO reserve over the past 10 years. I will begin my proposal with the history of how LIFO became an acceptable valuation method and discuss its significance within the accounting profession. Next, I will describe the LIFO, first-in, first-out (FIFO), and weighted-average inventory valuation methods and explore the differences among them. More specifically, I will explore the arguments for and against the use of the LIFO method and the potential shift towards financial standards that do not allow LIFO (a standard adopted and influenced by the International Financial Accounting Standards Board). Data will be collected from Compustat for publicly traded companies with a LIFO reserve for the past 10 years. I will document which firms use LIFO, analyze trends relating to LIFO usage and LIFO reserves (the difference in the cost of inventory between using LIFO and FIFO), and evaluate the effect on earnings. The purpose of this research is to evaluate the accuracy of LIFO in portraying earnings and to estimate how much tax has gone uncollected over the years because of the use of LIFO. Moreover, I will offer an opinion as to whether U.S. GAAP should adopt a standard similar to IFRS and ban the LIFO method.
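The earnings effect the proposal studies comes from the difference in cost of goods sold (COGS) under the two methods. A small illustrative sketch computes COGS under FIFO and LIFO for the same purchase history; in a period of rising unit costs, LIFO yields higher COGS and therefore lower reported earnings and taxes:

```python
from collections import deque

def cogs(purchases, units_sold, method):
    """Cost of goods sold for a sequence of inventory purchases.

    purchases: list of (units, unit_cost) in purchase order.
    method: 'FIFO' consumes the oldest cost layers first,
            'LIFO' the most recent ones.
    """
    layers = deque(purchases)
    total, remaining = 0.0, units_sold
    while remaining > 0:
        units, cost = layers.popleft() if method == "FIFO" else layers.pop()
        take = min(units, remaining)
        total += take * cost
        remaining -= take
        if units > take:  # put the partially consumed layer back
            if method == "FIFO":
                layers.appendleft((units - take, cost))
            else:
                layers.append((units - take, cost))
    return total
```

For 10 units bought at 1.00 followed by 10 at 2.00, selling 10 units costs 10.00 under FIFO but 20.00 under LIFO; the 10.00 gap is exactly the LIFO reserve after the sale.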

Relevance:

10.00%

Publisher:

Abstract:

This final degree project, "High-level modeling with SystemC" ("Modelado de alto nivel con SystemC"), models several modules of an MPEG-2 video encoder using the SystemC digital-system description language at the TLM (Transaction Level Modeling) abstraction level. SystemC is a C++-based language for describing digital systems; it provides routines and libraries that implement data types, structures, and processes for modeling digital hardware, and is described in [GLMS02]. TLM separates the communication between modules from their functionality: it emphasises what data flows from where to where over the exact implementation of the transfer. TLM and an implementation example are described in [RSPF] and [HG]. The architecture of the model is based on the MVIP-2 encoder described in [Gar04]. The modelled modules are:
· IVIDEOH: filters the input video in the horizontal dimension and writes the filtered video to memory.
· IVIDEOV: reads the video filtered by IVIDEOH, filters it in the vertical dimension, and writes the result to memory.
· DCT: reads the video filtered by IVIDEOV, applies the discrete cosine transform, and stores the transformed video in memory.
· QUANT: reads the video transformed by DCT, quantises it, and stores the result in memory.
· IQUANT: reads the video quantised by QUANT, applies the inverse quantisation, and stores the result in memory.
· IDCT: reads the video processed by IQUANT, applies the inverse cosine transform, and stores the result in memory.
· IMEM: interface between the modules above and the memory; it manages simultaneous memory requests and guarantees exclusive access to the memory at each instant.
These modules appear in grey in Figure 1, which shows the architecture of the model (see the project PDF). The figure also contains modules in white; these are test modules added to simulate and exercise the model:
· CAMARA: simulates a black-and-white camera; it reads the luminance from a video file and sends it to the model through a FIFO.
· FIFO: interface between the camera and the model; it buffers the data sent by the camera until IVIDEOH reads it.
· CONTROL: sequences the video-processing modules; each module signals CONTROL when it finishes processing a video frame, and CONTROL then starts whichever modules are needed to continue the encoding.
· RAM: simulates a RAM memory with a programmable access delay.
For testing, video files with the output of each processing module, message files, and a trace file showing the sequencing of the processors were also generated. The project concludes that SystemC makes it fairly easy to model digital systems (prior knowledge of C++ and object-oriented programming is required) and supports models at a higher abstraction level than the RTL usual in Verilog and VHDL; in this project, the TLM.
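The handshake between CAMARA, the FIFO and IVIDEOH that TLM abstracts away can be illustrated with a toy bounded-FIFO model. This is a Python analogue for illustration only; the project itself is written in SystemC:

```python
from collections import deque

class BoundedFifo:
    """Toy model of the CAMARA -> IVIDEOH channel: the producer may only
    write when there is room, the consumer may only read when data is
    available (the handshake that TLM abstracts into a transaction)."""

    def __init__(self, depth):
        self.depth = depth
        self.buf = deque()

    def can_write(self):
        return len(self.buf) < self.depth

    def can_read(self):
        return bool(self.buf)

    def write(self, sample):
        assert self.can_write(), "producer must wait: FIFO full"
        self.buf.append(sample)

    def read(self):
        assert self.can_read(), "consumer must wait: FIFO empty"
        return self.buf.popleft()
```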

Relevance:

10.00%

Publisher:

Abstract:

"UILU-ENG 77 1703."

Relevance:

10.00%

Publisher:

Abstract:

This work demonstrates how the PHC Manufactor system fits the company under study, Ciclo Fapril, presenting the planning options the system offers, the difficulties the company will face and, where possible, how to overcome the obstacles the system imposes. In a second part, several heuristics are studied (namely FIFO, Processing Time, EDD, MOR and LOR) to determine which one best suits the company, so that the agreed deadlines can be met. The heuristic with the best results was then applied, and some changes were made to the processing times of the work centres to improve their ability to respond to orders. At the end of this study it became clear that EDD scheduling was the best fit for the company. It also became clear that the AS and AT work centres have the lowest productivity, and that their productivity should therefore be increased in order to raise overall productivity.
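The EDD (earliest due date) rule that performed best in the study can be sketched as sorting the queued jobs of a work centre by due date; on a single machine this minimises the maximum lateness (Jackson's rule). A minimal sketch, illustrative rather than the PHC Manufactor implementation:

```python
def schedule_lateness(jobs, key):
    """Sequence jobs on one work centre by `key`; report max lateness.

    jobs: list of (name, processing_time, due_date).
    Returns (ordered job names, maximum lateness). Negative lateness
    means every job finished before its due date.
    """
    order = sorted(jobs, key=key)   # stable sort: ties keep queue order
    t, worst = 0, float("-inf")
    for name, proc, due in order:
        t += proc
        worst = max(worst, t - due)
    return [j[0] for j in order], worst

# EDD sorts by due date; FIFO keeps submission order (constant key).
```

For three jobs (a: 2h due 9, b: 1h due 3, c: 3h due 6), FIFO order a-b-c has maximum lateness 0, while EDD order b-c-a finishes every job 2 hours or more early.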

Relevance:

10.00%

Publisher:

Abstract:

With the development of electronic devices, more and more mobile clients are connected to the Internet, and they generate massive amounts of data every day. We live in an age of "Big Data", generating data of hundred-million-record magnitude daily. By analyzing this data and making predictions, we can draw up better development plans. Traditional computation frameworks cannot meet this demand, which is why Hadoop was put forward. The paper first introduces the background and development status of Hadoop, compares MapReduce in Hadoop 1.0 with YARN in Hadoop 2.0, and analyzes their advantages and disadvantages. Because resource management is the core role of YARN, the paper then studies the resource allocation module, including resource management, the resource allocation algorithm, the resource preemption model, and the whole resource scheduling process from requesting resources to completing the allocation. It also introduces and compares the FIFO Scheduler, Capacity Scheduler, and Fair Scheduler. The main work of this paper is researching and analyzing the Dominant Resource Fairness (DRF) algorithm of YARN and putting forward a maximum-resource-utilization algorithm based on it; the paper also suggests improvements to unreasonable aspects of the resource preemption model. Emphasizing "fairness" during resource allocation is the core concept of the DRF algorithm in YARN. Because the cluster serves multiple users and multiple resources, each user's resource request is also multidimensional. The DRF algorithm divides a user's resources into the dominant resource and normal resources: for a user, the dominant resource is the one whose share is highest among all requested resources, and the others are normal resources. The DRF algorithm requires the dominant resource share of each user to be equal.
But in cases where different users' dominant resource amounts differ greatly, emphasizing "fairness" is not suitable and cannot raise the resource utilization of the cluster. By analyzing such cases, this thesis puts forward a new allocation algorithm based on DRF. The new algorithm still takes "fairness" into consideration, but maximizing resource utilization becomes its main principle and goal. Comparing the results of DRF and the new DRF-based algorithm shows that the new algorithm achieves higher resource utilization than DRF. The last part of the thesis installs a YARN environment and uses the Scheduler Load Simulator (SLS) to simulate the cluster environment.
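The dominant-share computation at the heart of DRF can be sketched with the classic progressive-filling loop: repeatedly grant one task to the user whose dominant share is currently smallest. This is an illustrative sketch of the published DRF algorithm, not YARN's scheduler code:

```python
def drf_allocate(demands, capacity):
    """Progressive-filling DRF allocation.

    demands: {user: {resource: per-task amount}}
    capacity: {resource: cluster total}
    Returns {user: number of tasks granted}.
    """
    used = {r: 0 for r in capacity}
    alloc = {u: {r: 0 for r in capacity} for u in demands}
    tasks = {u: 0 for u in demands}

    def dominant_share(u):
        # Highest fraction of any cluster resource held by user u.
        return max(alloc[u][r] / capacity[r] for r in capacity)

    while True:
        # Users whose next task still fits in the remaining capacity.
        feasible = [u for u in demands
                    if all(used[r] + demands[u][r] <= capacity[r]
                           for r in capacity)]
        if not feasible:
            return tasks
        u = min(feasible, key=dominant_share)  # lowest dominant share wins
        for r in capacity:
            used[r] += demands[u][r]
            alloc[u][r] += demands[u][r]
        tasks[u] += 1
```

With a 9-CPU, 18-GB cluster, a user needing (1 CPU, 4 GB) per task and one needing (3 CPU, 1 GB) per task end up with 3 and 2 tasks respectively, equalising their dominant shares at 2/3.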

Relevance:

10.00%

Publisher:

Abstract:

Toxoplasma gondii is the causative protozoan agent of toxoplasmosis, a common infection that is distributed worldwide. Studies have revealed strongly clonal strains in North America and Europe and greater genetic diversity among South American strains. Our study aimed to differentiate the pathogenicity and sulfadiazine resistance of three T. gondii isolates obtained from livestock intended for human consumption. The cytopathic effects of the T. gondii isolates were evaluated. The pathogenicity was determined by polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP) using a CS3 marker and in a rodent model in vivo. Phenotypic sulfadiazine resistance was measured using a kinetic curve of drug activity in Swiss mice. IgM and IgG were measured by ELISA, and the dihydropteroate synthase (DHPS) gene sequence was analysed. The cytopathic effects and the PCR-RFLP profiles from chickens indicated a different infection source. The Ck3 isolate displayed more cytopathic effects in vitro than the Ck2 and ME49 strains. Additionally, the Ck2 isolate induced a differential humoral immune response compared to ME49. The Ck3 and Pg1 isolates, but not the Ck2 isolate, showed sulfadiazine resistance in the sensitivity assay. We did not find any DHPS gene polymorphisms in the mouse samples. These atypical pathogenicity and sulfadiazine resistance profiles were not previously reported and serve as a warning to local health authorities.

Relevance:

10.00%

Publisher:

Abstract:

This chapter presents the process followed to simulate different queueing schemes (FIFO, PQ and LLQ) at a basic level in order to provide QoS in a network. It also presents plots that support subsequent analysis of the data obtained in the simulation.
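A basic version of such a simulation can be sketched as a single-server queue served either in FIFO order or under strict non-preemptive priority queueing (PQ); the per-class average waits make the QoS effect visible. This is an illustrative sketch only, not the chapter's simulation setup:

```python
import heapq

def simulate(arrivals, discipline):
    """Single-server queue simulation.

    arrivals: list of (arrival_time, service_time, class_id),
              class 0 = highest priority.
    discipline: 'FIFO' or 'PQ' (strict, non-preemptive priority).
    Returns {class_id: mean waiting time}.
    """
    pending = sorted(arrivals, key=lambda x: x[0])  # stable: ties keep order
    queue, waits = [], {}
    t, seq, i = 0.0, 0, 0
    while i < len(pending) or queue:
        # Admit everything that has arrived by the current time.
        while i < len(pending) and pending[i][0] <= t:
            a, s, c = pending[i]
            key = (c, seq) if discipline == "PQ" else (seq,)
            heapq.heappush(queue, (key, a, s, c))
            seq += 1
            i += 1
        if not queue:               # server idle: jump to next arrival
            t = pending[i][0]
            continue
        _, a, s, c = heapq.heappop(queue)
        waits.setdefault(c, []).append(t - a)
        t += s
    return {c: sum(w) / len(w) for c, w in waits.items()}
```

With three equal jobs arriving together (two of class 1, one of class 0), FIFO makes the class-0 job wait for both others, while PQ serves it first.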

Relevance:

10.00%

Publisher:

Abstract:

Red blood cells (RBCs) and platelets are examples of perishable items with a fixed shelf life. Recent studies show that transfusing fresh RBCs may lead to improved patient outcomes. In addition, to better manage their inventory, hospitals prefer to receive fresh RBCs and platelets. Therefore, as well as minimizing outdates and shortages, reducing the average age of issue is a key performance criterion for blood banks. The issuing policy in a perishable inventory system has a substantial impact on the age of issue and on outdate and shortage rates. Although several studies have compared the last-in, first-out (LIFO) and first-in, first-out (FIFO) policies for perishable products, only a few have considered the situation of blood banks, where replenishment is not controllable. In this study, we examine various issuing policies for a perishable inventory system with uncontrollable replenishment and outline a modified FIFO policy. Our proposed modified FIFO policy partitions the inventory into two parts such that the first part holds the items with age less than a threshold. It then applies the FIFO policy within each part and the LIFO policy between the parts. We present two approximation techniques to estimate the average age of issue, the average time between successive outdates, and the average time between successive shortages under the modified FIFO policy. Our analysis shows that in several cases, whether the objective is a single economic function or a multiobjective model, the modified FIFO policy outperforms the FIFO and LIFO policies.
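The modified FIFO policy described above can be sketched as follows: units younger than a threshold form a fresh part, the rest an old part; issuing is FIFO inside each part and LIFO between the parts, so the fresh part is drawn from first. This is an illustrative reading of the policy, with a hypothetical `issue_modified_fifo` helper, not the authors' code:

```python
def issue_modified_fifo(ages, threshold):
    """Issue one unit from a perishable inventory under modified FIFO.

    ages: list of unit ages currently in stock.
    Units with age < threshold form the 'fresh' part; FIFO applies
    inside each part (oldest unit of the part first) and LIFO between
    the parts (the fresh part, stocked most recently, is used first).
    Returns (issued_age, remaining_ages); issued_age is None if empty.
    """
    fresh = [a for a in ages if a < threshold]
    pool = fresh if fresh else [a for a in ages if a >= threshold]
    if not pool:
        return None, ages
    unit = max(pool)            # oldest unit within the chosen part
    remaining = list(ages)
    remaining.remove(unit)
    return unit, remaining
```

For stock aged [1, 3, 8, 12] days and a 7-day threshold, the issue order is 3, 1, 12, 8: the fresh part is exhausted first (lowering the average age of issue), yet stock still rotates oldest-first within each part.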