888 results for Grid computing environment
Abstract:
Graduate program in Agronomy (Genetics and Plant Breeding) - FCAV
Abstract:
This dissertation presents a technique for the detection and diagnosis of incipient faults. Such faults cause changes in the behavior of the system under investigation, which are reflected in changes in the parameter values of its representative mathematical model. As a test platform, a model of an industrial system was built in the Matlab/Simulink computing environment, consisting of a dynamic plant composed of two interconnected tanks. The plant was modeled using the physical equations that describe the system dynamics. The fault applied to the system represents a gradual constriction of the outlet pipe of one of the tanks, slowly reducing the cross-section of that pipe by up to 20%. Fault detection was performed through real-time estimation of the parameters of Auto-Regressive models with Exogenous inputs (ARX) using Fuzzy and Recursive Least Squares (RLS) estimators. The diagnosis of the pipe clogging percentage was obtained by a fuzzy parameter-tracking system fed back by the integral of the detection residual. Using this methodology, it was possible to detect and diagnose the simulated fault at three different operating points of the system. Of the two techniques tested, the RLS method performed well only at detecting the fault, whereas the method using fuzzy-supervised estimation performed better at both detecting and diagnosing the faults applied to the system, confirming the proposal of this work.
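A minimal sketch of the recursive least squares (RLS) step this abstract relies on, written in Python rather than the dissertation's Matlab/Simulink environment; the model orders, forgetting factor, and variable names are illustrative assumptions. Drift of the estimated ARX parameters away from their fault-free values is what serves as the detection residual.

```python
import numpy as np

def rls_arx(u, y, na=2, nb=2, lam=0.98):
    """Recursive least squares estimation of ARX parameters.

    Model: y[k] = -a1*y[k-1] - ... - a_na*y[k-na]
                 + b1*u[k-1] + ... + b_nb*u[k-nb]
    A sustained drift of the estimates away from their nominal
    (fault-free) values can be used as a fault-detection residual.
    """
    u, y = np.asarray(u, float), np.asarray(y, float)
    n = na + nb
    theta = np.zeros(n)            # parameter estimates
    P = np.eye(n) * 1e3            # covariance matrix
    history = []
    for k in range(max(na, nb), len(y)):
        phi = np.concatenate([-y[k - na:k][::-1], u[k - nb:k][::-1]])
        K = P @ phi / (lam + phi @ P @ phi)    # estimator gain
        eps = y[k] - phi @ theta               # one-step prediction error
        theta = theta + K * eps
        P = (P - np.outer(K, phi @ P)) / lam   # covariance update
        history.append(theta.copy())
    return np.array(history)
```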
Abstract:
The web services (WS) technology provides a comprehensive solution for representing, discovering, and invoking services in a wide variety of environments, including Service Oriented Architectures (SOA) and grid computing systems. At the core of WS technology lie a number of XML-based standards, such as the Simple Object Access Protocol (SOAP), that have successfully ensured WS extensibility, transparency, and interoperability. Nonetheless, there is an increasing demand to enhance WS performance, which is severely impaired by XML's verbosity. SOAP communications produce considerable network traffic, making them unfit for distributed, loosely coupled, and heterogeneous computing environments such as the open Internet. Also, they introduce higher latency and processing delays than other technologies, like Java RMI and CORBA. WS research has recently focused on SOAP performance enhancement. Many approaches build on the observation that SOAP message exchange usually involves highly similar messages (those created by the same implementation usually have the same structure, and those sent from a server to multiple clients tend to show similarities in structure and content). Similarity evaluation and differential encoding have thus emerged as SOAP performance enhancement techniques. The main idea is to identify the common parts of SOAP messages, to be processed only once, avoiding a large amount of overhead. Other approaches investigate nontraditional processor architectures, including micro- and macro-level parallel processing solutions, so as to further increase the processing rates of SOAP/XML software toolkits. This survey paper provides a concise, yet comprehensive review of the research efforts aimed at SOAP performance enhancement. A unified view of the problem is provided, covering almost every phase of SOAP processing, ranging over message parsing, serialization, deserialization, compression, multicasting, security evaluation, and data/instruction-level processing.
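To make the differential-encoding idea concrete, here is a toy Python sketch (not any of the surveyed toolkits): structurally similar SOAP messages are transmitted as a small set of line-level differences against a template already known to both endpoints. The getQuote payload is a hypothetical example, and real systems operate on the XML structure and handle messages whose line counts differ; identical structure is assumed here for brevity.

```python
TEMPLATE = """<soap:Envelope><soap:Body>
  <getQuote><symbol>ACME</symbol></getQuote>
</soap:Body></soap:Envelope>"""

def encode_diff(message: str, template: str = TEMPLATE):
    """Return only the (line index, new line) pairs that differ from the template."""
    pairs = zip(message.splitlines(), template.splitlines())
    return [(i, line) for i, (line, ref) in enumerate(pairs) if line != ref]

def decode_diff(delta, template: str = TEMPLATE) -> str:
    """Rebuild the full message by patching the shared template."""
    lines = template.splitlines()
    for i, line in delta:
        lines[i] = line
    return "\n".join(lines)

msg = TEMPLATE.replace("ACME", "INITECH")
delta = encode_diff(msg)          # only the changed line travels on the wire
assert decode_diff(delta) == msg
```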
Abstract:
Design and installation of a virtual-store web service in a cluster computing environment to provide high availability and load balancing for that service. The infrastructure required for this deployment will be virtual, using KVM as the virtualization platform. The tasks are organized into: 1. Tasks for creating a load-balancing cluster, in which one machine receives client requests and redirects them to the web servers according to their load. 2. Tasks for creating a high-availability cluster so that, in case of failure, there is always a database server able to serve the requests of the web servers.
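A compact Python sketch of one possible dispatching policy for the load-balancing cluster described above (a least-connections director); the thesis deploys this on KVM-based virtual infrastructure with real balancer software, so the server names and the policy shown here are only illustrative.

```python
class LeastConnectionsBalancer:
    """Toy director: sends each request to the web server that currently
    holds the fewest active connections."""

    def __init__(self, servers):
        self.active = {s: 0 for s in servers}

    def acquire(self):
        server = min(self.active, key=self.active.get)
        self.active[server] += 1
        return server

    def release(self, server):
        self.active[server] -= 1

lb = LeastConnectionsBalancer(["web1", "web2", "web3"])
target = lb.acquire()   # proxy the client request to `target`, then:
lb.release(target)
```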
Abstract:
The IceCube neutrino telescope, located at the South Pole, detects high-energy neutrinos via the weak interaction through charged and neutral currents. The analysis is based on a comparison with Monte Carlo simulations, whose production is coordinated globally. In Mainz, simulations were run for the first time within the architecture of the Worldwide LHC Computing Grid (WLCG), which opens up the possibility of distributing Monte Carlo computations to other German computing farms (CEs) with IceCube authorization. Atmospheric muons are recorded at a rate of more than 1000 events per second. A correct interpretation of this dominant signal, which has to be reduced by a factor of 10^6 to extract the actual neutrino signal, is therefore of great importance. Dedicated simulations with the CORSIKA software environment were carried out to determine the production height of atmospheric muons as a function of energy and incidence angle. IceCube muon rates were compared with weather data from the European Centre for Medium-Range Weather Forecasts (ECMWF), and correlations between seasonal as well as short-term variations of the atmospheric temperature and the muon rates were demonstrated. In addition, a search for periodic effects in the atmosphere, caused for example by meteorological gravity waves, was carried out on the IceCube data by means of a Fourier analysis. So far, no significant evidence for the existence of gravity waves at the South Pole could be established.
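The Fourier-analysis step mentioned at the end of the abstract can be pictured with the short Python sketch below: the muon-rate time series is detrended and its amplitude spectrum inspected for peaks that would indicate periodic atmospheric modulations. The sampling interval and the synthetic data are assumptions for illustration only.

```python
import numpy as np

def rate_spectrum(rates, dt_hours=1.0):
    """Amplitude spectrum of a muon-rate time series sampled every dt_hours."""
    x = np.asarray(rates, dtype=float)
    x = x - x.mean()                              # remove the constant offset
    amp = np.abs(np.fft.rfft(x)) / len(x)         # amplitude per frequency bin
    freq = np.fft.rfftfreq(len(x), d=dt_hours)    # cycles per hour
    return freq, amp

# hypothetical hourly event counts over 30 days (~1000 events per second)
rates = 3.6e6 + np.random.normal(0.0, 2e4, size=24 * 30)
freq, amp = rate_spectrum(rates)                  # look for peaks in amp
```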
Abstract:
In particle physics, carrying out data analysis requires large computing and storage capacity. The LHC Computing Grid is a globally distributed computing infrastructure and, at the same time, a set of services developed by a large community of physicists and computer scientists, spread across computing centres all over the world. This infrastructure has proven its value in the analysis of the data collected during Run-1 of the LHC, playing a fundamental role in the discovery of the Higgs boson. Today, Cloud computing is emerging as a new computing paradigm for accessing large amounts of resources shared by numerous scientific communities. Given the technical specifications required for Run-2 (and beyond) of the LHC, the scientific community is interested in contributing to the development of Cloud technologies and in verifying whether they can provide a complementary approach, or even a valid alternative, to the existing technological solutions. The aim of this thesis is to test a Cloud infrastructure and compare its performance with the LHC Computing Grid. Chapter 1 contains a general overview of the Standard Model. Chapter 2 describes the LHC accelerator and the experiments operating at it, with particular attention to the CMS experiment. Chapter 3 deals with computing in high-energy physics and examines the Grid and Cloud paradigms. Chapter 4, the last of this work, reports the results of my work on the comparative analysis of Grid and Cloud performance.
Abstract:
The aim of this thesis is to study the feasibility of analysing the associated ttH production of the Higgs boson with two top quarks in the CMS experiment, and to evaluate the functionality and features of the next generation of toolkits for distributed analysis at CMS (CRAB version 3) for carrying out such an analysis. In the field of top-quark physics, ttH production is particularly interesting, above all because it represents the only opportunity to study the t-H vertex directly without having to make assumptions about possible contributions from physics beyond the Standard Model. Preparation for this analysis is crucial at this moment, before the start of LHC Run-2 in 2015. To be ready for such a study, the technical implications of carrying out a complete analysis in a distributed computing environment such as the Grid should not be underestimated. For this reason, an analysis of the CRAB3 tool itself (now available in a pre-production version) and a direct performance comparison with CRAB2 are presented and discussed. Suggestions and advice for an analysis team that may eventually be involved in this study are also collected and documented. Chapter 1 introduces high-energy physics at the LHC and the CMS experiment. Chapter 2 discusses the CMS computing model and the Grid distributed analysis system. Chapter 3 briefly presents the physics of the top quark and the Higgs boson. Chapter 4 is devoted to the preparation of the analysis from the point of view of the Grid tools (CRAB3 vs CRAB2). Chapter 5 presents and discusses a feasibility study for an analysis of the ttH channel in terms of selection efficiency.
Abstract:
The 5th generation of mobile networking introduces the concept of “network slicing”: the network will be “sliced” horizontally, and each slice will comply with different requirements in terms of network parameters such as bandwidth and latency. This technology is built on logical rather than physical resources and relies on the virtual network as the core concept for obtaining a logical resource. Network Function Virtualisation (NFV) provides the concept of logical resources for a virtual network function, enabling the concept of a virtual network; it relies on Software Defined Networking (SDN) as the main technology to realize the virtual network as a resource, and it also defines the concept of a virtual network infrastructure with all the components needed to meet the network slicing requirements. SDN itself uses cloud computing technology to realize the virtual network infrastructure, and NFV likewise uses virtual computing resources to enable the deployment of virtual network functions instead of requiring custom hardware and software for each network function. The key to network slicing is the differentiation of slices in terms of Quality of Service (QoS) parameters, which relies on the possibility of QoS management in a cloud computing environment. QoS in cloud computing denotes the levels of performance, reliability, and availability offered. QoS is fundamental for cloud users, who expect providers to deliver the advertised quality characteristics, and for cloud providers, who need to find the right tradeoff between the QoS levels they can offer and operational costs. While QoS properties received constant attention before the advent of cloud computing, the performance heterogeneity and resource isolation mechanisms of cloud platforms have significantly complicated QoS analysis, deployment, prediction, and assurance. This is prompting several researchers to investigate automated QoS management methods that can leverage the high programmability of hardware and software resources in the cloud.
Abstract:
Neurally adjusted ventilatory assist (NAVA) delivers airway pressure (P(aw)) in proportion to the electrical activity of the diaphragm (EAdi) using an adjustable proportionality constant (NAVA level, cm·H(2)O/μV). During systematic increases in the NAVA level, feedback-controlled down-regulation of the EAdi results in a characteristic two-phased response in P(aw) and tidal volume (Vt). The transition from the 1st to the 2nd response phase allows identification of adequate unloading of the respiratory muscles with NAVA (NAVA(AL)). We aimed to develop and validate a mathematical algorithm to identify NAVA(AL). P(aw), Vt, and EAdi were recorded while systematically increasing the NAVA level in 19 adult patients. In a multistep approach, inspiratory P(aw) peaks were first identified by dividing the EAdi into inspiratory portions using Gaussian mixture modeling. Two polynomials were then fitted onto the curves of both P(aw) peaks and Vt. The beginning of the P(aw) and Vt plateaus, and thus NAVA(AL), was identified at the minimum of the squared polynomial derivative and the polynomial fitting errors. A graphical user interface was developed in the Matlab computing environment. Median NAVA(AL) visually estimated by 18 independent physicians was 2.7 (range 0.4 to 5.8) cm·H(2)O/μV and identified by our model was 2.6 (range 0.6 to 5.0) cm·H(2)O/μV. NAVA(AL) identified by our model was below the range of visually estimated NAVA(AL) in two instances and was above in one instance. We conclude that our model identifies NAVA(AL) in most instances with acceptable accuracy for application in clinical routine and research.
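A much-simplified Python sketch of the plateau-detection idea; the validated algorithm works on both the P(aw)-peak and Vt curves and also folds in the polynomial fitting errors, and the polynomial degree and function names below are assumptions, not the authors' implementation.

```python
import numpy as np

def plateau_onset(nava_levels, paw_peaks, deg=3):
    """Locate the NAVA level where the Paw-peak curve flattens.

    A polynomial is fitted to the Paw peaks as a function of the NAVA level,
    and the level that minimises the squared polynomial derivative is taken
    as the onset of the plateau (a crude stand-in for NAVA_AL).
    """
    nava_levels = np.asarray(nava_levels, dtype=float)
    coeffs = np.polyfit(nava_levels, paw_peaks, deg)
    slopes = np.polyval(np.polyder(coeffs), nava_levels)
    return nava_levels[np.argmin(slopes ** 2)]
```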
Abstract:
This article gives an overview of the methods used in the low-level analysis of gene expression data generated using DNA microarrays. This type of experiment makes it possible to determine relative levels of nucleic acid abundance in a set of tissues or cell populations for thousands of transcripts or loci simultaneously. Careful statistical design and analysis are essential to improve the efficiency and reliability of microarray experiments throughout the data acquisition and analysis process. This includes the design of probes, the experimental design, the image analysis of scanned microarray images, the normalization of fluorescence intensities, the assessment of the quality of microarray data and the incorporation of quality information in subsequent analyses, the combination of information across arrays and across sets of experiments, the discovery and recognition of patterns in expression at the single-gene and multiple-gene levels, and the assessment of the significance of these findings, given that the data contain a great deal of noise and thus random features. For all of these components, access to a flexible and efficient statistical computing environment is an essential aspect.
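One representative low-level step, normalization of intensities across arrays, can be sketched in a few lines of Python; quantile normalization is only one of several normalization methods such a review covers, and the implementation below ignores ties for brevity.

```python
import numpy as np

def quantile_normalize(intensities):
    """Quantile-normalize a genes x arrays intensity matrix so that all
    arrays share the same intensity distribution."""
    x = np.asarray(intensities, dtype=float)
    ranks = x.argsort(axis=0).argsort(axis=0)       # rank of each value within its array
    reference = np.sort(x, axis=0).mean(axis=1)     # mean distribution across arrays
    return reference[ranks]                         # map ranks back to reference values
```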
Abstract:
How do developers and designers of a new technology make sense of intended users? The critical groundwork for user-centred technology development begins not with actual users' exposure to the technological artefact but much earlier, with designers' and developers' vision of future users. Thus, anticipating intended users is critical to technology uptake. We conceptualise the anticipation of intended users as a form of prospective sensemaking in technology development. Employing a narrative analytical approach and drawing on four key communities in the development of Grid computing, we reconstruct how each community anticipated the intended Grid user. Based on our findings, we conceptualise user anticipation in terms of two key dimensions, namely the intended possibility to inscribe user needs into the technological artefact as well as the intended scope of the application domain. In turn, these dimensions allow us to develop an initial typology of intended user concepts that might provide a key building block towards a generic typology of intended users.
Abstract:
The future Internet is expected to be composed of a mesh of interoperable web services accessible from all over the web. This approach has not yet caught on since global user-service interaction is still an open issue. This paper states one vision with regard to next-generation front-end Web 2.0 technology that will enable integrated access to services, contents and things in the future Internet. In this paper, we illustrate how front-ends that wrap traditional services and resources can be tailored to the needs of end users, converting end users into prosumers (creators and consumers of service-based applications). To do this, we propose an architecture that end users without programming skills can use to create front-ends, consult catalogues of resources tailored to their needs, easily integrate and coordinate front-ends and create composite applications to orchestrate services in their back-end. The paper includes a case study illustrating that current user-centred web development tools are at a very early stage of evolution. We provide statistical data on how the proposed architecture improves these tools. This paper is based on research conducted by the Service Front End (SFE) Open Alliance initiative.
Abstract:
Energy consumption in data centers is nowadays a critical concern because of its dramatic environmental and economic impact. Over recent years, several approaches have been proposed to tackle the energy/cost optimization problem, but most of them have failed to provide an analytical model that targets both the static and dynamic optimization domains for complex heterogeneous data centers. This paper proposes and solves an optimization problem for the energy-driven configuration of a heterogeneous data center. It also proposes a new mechanism for task allocation and workload distribution. The combination of both approaches outperforms previously published results in the field of energy minimization in heterogeneous data centers and opens up a promising area of research.
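The paper's analytical model is not reproduced here, but the flavour of energy-driven task allocation can be conveyed with a small greedy sketch in Python; the power model (idle plus load-proportional cost) and all parameters are assumptions, not the authors' formulation.

```python
def allocate(tasks, servers):
    """Greedy energy-aware placement: each task goes to the feasible server
    whose marginal power increase is smallest; a server's idle power is
    'paid' only the first time it is switched on.

    tasks   : list of CPU demands
    servers : list of dicts {"cap": capacity, "p_idle": watts, "p_busy": watts per unit load}
    """
    load = [0.0] * len(servers)
    powered = [False] * len(servers)
    placement = []
    for t in sorted(tasks, reverse=True):             # place the largest tasks first
        best, best_cost = None, float("inf")
        for i, s in enumerate(servers):
            if load[i] + t > s["cap"]:
                continue                              # does not fit on this server
            cost = t * s["p_busy"] + (0.0 if powered[i] else s["p_idle"])
            if cost < best_cost:
                best, best_cost = i, cost
        if best is None:
            raise ValueError("task does not fit on any server")
        load[best] += t
        powered[best] = True
        placement.append(best)
    return placement
```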
Abstract:
Quality of service (QoS) can be a critical element for achieving the business goals of a service provider, for the acceptance of a service by the user, or for guaranteeing service characteristics in a composition of services, where a service is defined as either a software or a software-support (i.e., infrastructural) service which is available on any type of network or electronic channel. The goal of this article is to compare the approaches to QoS description in the literature, where several models and metamodels are included. We consider a large spectrum of models and metamodels to describe service quality, ranging from ontological approaches for defining quality measures, metrics, and dimensions, to metamodels enabling the specification of quality-based service requirements and capabilities as well as of SLAs (Service-Level Agreements) and SLA templates for service provisioning. Our survey is performed by inspecting the characteristics of the available approaches to reveal which are the consolidated ones and which are specific to given aspects, and to analyze where the need for further research and investigation lies. The approaches illustrated here have been selected based on a systematic review of conference proceedings and journals spanning various research areas in computer science and engineering, including: distributed, information, and telecommunication systems, networks and security, and service-oriented and grid computing.
Abstract:
In the context of the European project FI-WARE, the CoNWeT Lab (laboratory of the ETSI Informáticos at UPM) has implemented the WStore web platform, a reference implementation of the Store Generic Enabler belonging to that project. The goal of FI-WARE is to create the core platform of the Future Internet (IoF) with the intention of increasing Europe's global competitiveness in IT. The project introduces an innovative infrastructure for the creation and distribution of digital services over the Internet. WStore offers service providers a platform where they can publish their offerings and from which customers can access them. These providers offer web services, applications, widgets, and data sets in the same way that Google offers applications in the Google Play online store or Apple in the App Store. WStore is currently implemented as a web platform, so an organization wishing to offer the store service needs to install the software on its own server and have a domain from which to offer its products. The objective of this work is to migrate WStore to a cloud computing environment so that a single instance offers the service to organizations that want their own platform, over which they will have total control as if it were running on their own infrastructure. To this end, a version of WStore is implemented that is deployed on a cloud infrastructure and offered as Software as a Service. The implementation includes a series of code modules that can optionally be added during the installation process if the installed instance is to be multi-tenant. In addition, this work studies and tests the tools that MongoDB provides for scalability and high availability, replica sets and sharding, for deploying the multi-tenant WStore platform on a cloud infrastructure with a scalable, highly available database.
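As a hint of what the MongoDB side of such a deployment looks like, the sketch below uses pymongo to enable sharding for a WStore database and to shard a collection on a hypothetical tenant_id field; it assumes an already-deployed sharded cluster (whose shards are replica sets, providing high availability) reachable through a mongos router, and the host, database, and collection names are illustrative.

```python
from pymongo import MongoClient

# Connect through the mongos router of an already-deployed sharded cluster;
# each shard is a replica set, which provides the high-availability part.
client = MongoClient("mongodb://mongos.example.org:27017")

# Enable sharding for the WStore database and shard a collection on a
# hypothetical tenant_id field so that each tenant's documents can be
# distributed across shards.
client.admin.command("enableSharding", "wstore")
client.admin.command("shardCollection", "wstore.offerings",
                     key={"tenant_id": 1})
```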