820 results for: sistema distribuito data-grid cloud computing CERN LHC Hazelcast Elasticsearch


Relevance: 100.00%

Abstract:

Applying location-focused data protection law within the context of a location-agnostic cloud computing framework is fraught with difficulties. While the Proposed EU Data Protection Regulation introduces many changes to the current data protection framework, the complexities of data processing in the cloud involve various layers of actors and intermediaries that have not been properly addressed. This leaves gaps in the regulation when it is analyzed in cloud scenarios. This paper gives a brief overview of the provisions of the regulation that will have an impact on cloud transactions and addresses the missing links. It is hoped that these loopholes will be reconsidered before the final version of the law is passed, in order to avoid unintended consequences.

Relevance: 100.00%

Abstract:

Managing large medical image collections is an increasingly important and demanding issue in many hospitals and other medical settings. A huge amount of such information is generated daily, which requires robust and agile systems. In this paper we present a distributed multi-agent system capable of managing very large medical image datasets. In this approach, agents extract low-level information from images and store it in a data structure implemented in a relational database. The data structure can also store semantic information related to images and particular regions. A distinctive aspect of our work is that a single image can be divided so that the resulting sub-images can be stored and managed separately by different agents to improve performance in data access and processing. The system also offers the possibility of applying region-based operations and filters to images, facilitating image classification. These operations can be performed directly on the data structures in the database.
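As a rough illustration of the idea of dividing an image so that the resulting sub-images can be stored and managed by different agents, the following Python sketch tiles an image and records per-tile metadata in a relational table. The schema, the round-robin agent assignment and the mean-intensity feature are assumptions made for the example, not the paper's actual design.

```python
# Minimal sketch (not the paper's implementation): tiling an image into
# sub-images that could be handled by separate agents, with per-region
# metadata kept in a relational table. Table and column names are invented.
import sqlite3
import numpy as np

def split_into_tiles(image, tile_size):
    """Yield (row, col, tile) for non-overlapping tiles of a 2-D image array."""
    h, w = image.shape[:2]
    for r in range(0, h, tile_size):
        for c in range(0, w, tile_size):
            yield r, c, image[r:r + tile_size, c:c + tile_size]

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE sub_image (
                    image_id TEXT, tile_row INTEGER, tile_col INTEGER,
                    agent_id INTEGER, mean_intensity REAL)""")

agents = [0, 1, 2, 3]                     # hypothetical agent pool
image = np.random.rand(512, 512)          # stand-in for a medical image
for i, (r, c, tile) in enumerate(split_into_tiles(image, 128)):
    conn.execute("INSERT INTO sub_image VALUES (?, ?, ?, ?, ?)",
                 ("img-001", r, c, agents[i % len(agents)], float(tile.mean())))
conn.commit()
```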

Relevance: 100.00%

Abstract:

In the biomedical field an immense quantity of images is generated every day. Managing them requires robust and agile information systems, which in turn demand large amounts of computational resources. This article presents a cloud computing service capable of handling large collections of biomedical images. Thanks to this service, organisations and users can manage their biomedical images without needing to own large computing resources. The service uses a distributed multi-agent system in which images are processed and the regions they contain, together with their features, are extracted and stored in a data structure. A novel feature of the system is that a single image can be divided, and the resulting sub-images can be stored separately by different agents. This feature helps to improve the system's performance when searching for and retrieving the stored images.

Relevance: 100.00%

Abstract:

Dissertation submitted to the Instituto Politécnico de Castelo Branco in fulfilment of the requirements for the degree of Master in Software Development and Interactive Systems, carried out under the scientific supervision of Doutor Osvaldo Arede dos Santos, Adjunct Professor in the Unidade Técnico-Científica de Informática of the Escola Superior de Tecnologia, Instituto Politécnico de Castelo Branco.

Relevance: 100.00%

Abstract:

Dissertation submitted to the Instituto Politécnico de Castelo Branco in fulfilment of the requirements for the degree of Master in Software Development and Interactive Systems, carried out under the scientific supervision of Doutor Fernando Reinaldo Silva Garcia Ribeiro and Doutor José Carlos Meireles Monteiro Metrôlho, Adjunct Professors in the Unidade Técnico-Científica de Informática of the Escola Superior de Tecnologia, Instituto Politécnico de Castelo Branco.

Relevance: 100.00%

Abstract:

Data are an invaluable resource for every organisation. On the one hand this information must be managed through classic operational systems; on the other it must be analysed to obtain insights that can guide business decisions. One of the fundamental tools supporting business decisions is the data warehouse. This work is the result of an internship carried out with the company Injenia S.r.l. The internship focused on the optimisation of a data warehouse that the company sells as an add-on module of a software product called Interacta. Over time this data warehouse, Interacta Analytics, has shown significant architectural and performance problems. The architecture currently used to create and manage data within Interacta Analytics follows a batch approach; the central goal of the study is therefore to find alternative batch solutions that save both money and time, while also exploring a possible transition to a streaming architecture. The tools used in this research also had to remain in line with the technologies used for Interacta, namely the services of the Google Cloud Platform. After a brief discussion of the theoretical background of this area, the work describes how the main software works and the logical structure of the analytics module. Finally, the experimental work is presented: first an analysis of the main weaknesses of the as-is system, then the formulation and evaluation of four improved batch designs and two streaming designs. As stated in the conclusions of the research, these greatly improve the performance of the analytics system in terms of processing time, total cost and architectural simplicity, in particular thanks to the use of the serverless container and FaaS services of Google's cloud platform.
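As a purely illustrative contrast between the batch approach currently used by Interacta Analytics and a streaming alternative, the following Python sketch recomputes an aggregate from scratch (batch) versus updating it per event, as a FaaS handler triggered once per message might do. The event shape and the counted metric are invented for the example.

```python
# Illustrative sketch only: batch recomputation vs. incremental/streaming
# aggregation. Not taken from Interacta Analytics; event fields are invented.
from collections import defaultdict

def batch_rebuild(all_events):
    """Batch style: periodically recompute every aggregate from scratch."""
    totals = defaultdict(int)
    for event in all_events:
        totals[event["post_id"]] += 1
    return totals

class StreamingAggregator:
    """Streaming style: update the aggregate as each event arrives,
    e.g. from inside a FaaS handler triggered per message."""
    def __init__(self):
        self.totals = defaultdict(int)

    def on_event(self, event):
        self.totals[event["post_id"]] += 1

events = [{"post_id": "a"}, {"post_id": "b"}, {"post_id": "a"}]
agg = StreamingAggregator()
for e in events:
    agg.on_event(e)              # would be invoked once per message in FaaS
assert agg.totals == batch_rebuild(events)
```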

Relevance: 100.00%

Abstract:

We investigate a neutrino mass model in which the neutrino data is accounted for by bilinear R-parity violating supersymmetry with anomaly mediated supersymmetry breaking. We focus on the CERN Large Hadron Collider (LHC) phenomenology, studying the reach of generic supersymmetry search channels with leptons, missing energy and jets. A special feature of this model is the existence of long-lived neutralinos and charginos which decay inside the detector, leading to detached vertices. We demonstrate that the largest reach is obtained in the displaced vertices channel and that practically all of the reasonable parameter space will be covered with an integrated luminosity of 10 fb⁻¹. We also compare the displaced vertex reaches of the LHC and Tevatron.

Relevance: 100.00%

Abstract:

The constant evolution of the Internet, its increasing use and its growing entanglement with private and public activities, with a strong impact on their survival, has given rise to an emerging technology. Through cloud computing it is possible to abstract users from the layers below the business, so that they focus only on what matters most to manage, with the advantage of being able to grow (or shrink) resources as needed. The cloud paradigm arises from the need to optimise IT resources and is an emergent, rapidly expanding technology. In this regard, after a study of the most common cloud platforms and of the current deployment of these technologies at the Institute of Biomedical Sciences of Abel Salazar and the Faculty of Pharmacy of Oporto University, an evolution is proposed in order to meet certain requirements in the context of cloud computing.

Relevance: 100.00%

Abstract:

Lunacloud is a cloud service provider with offices in Portugal, Spain, France and the UK that focuses on delivering reliable, elastic and low-cost cloud Infrastructure as a Service (IaaS) solutions. The company currently relies on a proprietary IaaS platform - the Parallels Automation for Cloud Infrastructure (PACI) - and wishes to expand and integrate other IaaS solutions seamlessly, namely open source solutions. This is the challenge addressed in this thesis. The proposal, which was fostered by the Eurocloud Portugal Association, contributes to the promotion of interoperability and standardisation in Cloud Computing. The goal is to investigate, propose and develop an interoperable open source solution with standard interfaces for the integrated management of IaaS Cloud Computing resources, based on new as well as existing abstraction libraries or frameworks. The solution should provide both Web and application programming interfaces. The research conducted consisted of two surveys covering existing open source IaaS platforms and PACI (features and API) and open source IaaS abstraction solutions. The first study focussed on the characteristics of the most popular open source IaaS platforms, namely OpenNebula, OpenStack, CloudStack and Eucalyptus, as well as PACI, and included a thorough inventory of the provided Application Programming Interfaces (API), i.e., the offered operations, followed by a comparison of these platforms in order to establish their similarities and dissimilarities. The second study, on existing open source interoperability solutions, included the analysis and comparison of existing abstraction libraries and frameworks. The approach proposed and adopted, which was supported by the conclusions of the surveys carried out, reuses an existing open source abstraction solution - the Apache Deltacloud framework. Deltacloud relies on the development of software driver modules to interface with different IaaS platforms, officially provides and supports drivers for sixteen IaaS platforms, including OpenNebula and OpenStack, and allows the development of new provider drivers. The latter functionality was used to develop a new Deltacloud driver for PACI. Furthermore, Deltacloud provides a Web dashboard and REpresentational State Transfer (REST) API interfaces. To evaluate the adopted solution, a test bed integrating OpenNebula, OpenStack and PACI nodes was assembled and deployed. The tests conducted involved elapsed-time and data-payload measurements via the Deltacloud framework as well as via the pre-existing IaaS platform APIs. The Deltacloud framework behaved as expected, i.e., it introduced additional delays but no substantial overheads. Both the Web and the REST interfaces were tested and showed identical measurements. The developed interoperable solution for the seamless integration and provision of IaaS resources from the PACI, OpenNebula and OpenStack IaaS platforms fulfils the specified requirements, i.e., it provides Lunacloud with the ability to expand the range of adopted IaaS platforms and offers a Web dashboard and REST API for their integrated management. The contributions of this work include the surveys and comparisons made, the selection of the abstraction framework and, last but not least, the PACI driver developed.
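As a rough sketch of what integrated management through Deltacloud looks like from a client's point of view, the snippet below lists instances via the REST API of a Deltacloud server front-ending one of the supported platforms (or a driver such as the PACI one developed in this work). The URL, port and credentials are placeholders; this assumes a Deltacloud server running locally with its default /api root.

```python
# Hedged sketch: listing instances through a Deltacloud server front-ending
# an IaaS platform (e.g. OpenNebula, OpenStack or PACI). URL, port and
# credentials below are placeholders, not values from the thesis.
import requests

DELTACLOUD_URL = "http://localhost:3001/api"   # assumed Deltacloud API root
AUTH = ("cloud-user", "cloud-password")        # credentials of the backing IaaS

resp = requests.get(f"{DELTACLOUD_URL}/instances",
                    auth=AUTH,
                    headers={"Accept": "application/xml"})
resp.raise_for_status()
print(resp.text)   # XML listing of the instances exposed by the selected driver
```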

Relevance: 100.00%

Abstract:

In recent years, vehicular cloud computing (VCC) has emerged as a new technology that is being used in a wide range of multimedia-based healthcare applications. In VCC, vehicles act as intelligent machines that collect and transfer healthcare data to local or global sites for storage and computation, as vehicles have comparatively limited storage and computation power for handling multimedia files. However, due to dynamic changes in topology and the lack of centralized monitoring points, this information can be altered or misused. Such security breaches can result in disastrous consequences such as loss of life or financial fraud. Therefore, to address these issues, a learning automata-assisted distributive intrusion detection system based on clustering is designed. Although the proposed scheme can be applied in a number of domains, a multimedia-based healthcare application is taken to illustrate it. In the proposed scheme, learning automata (LA) are assumed to be stationed on the vehicles; they take clustering decisions intelligently and select one of the members of the group as a cluster-head. The cluster-heads then assist in efficient storage and dissemination of information through a cloud-based infrastructure. To secure the proposed scheme from malicious activities, a standard cryptographic technique is used in which the automaton learns from the environment and takes adaptive decisions to identify any malicious activity in the network. A reward or penalty is given by the stochastic environment in which an automaton performs its actions, and the automaton updates its action probability vector after receiving the reinforcement signal from the environment. The proposed scheme was evaluated using extensive simulations on ns-2 with SUMO. The results obtained indicate that the proposed scheme yields an improvement of 10% in the detection rate of malicious nodes when compared with existing schemes.
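For concreteness, the following Python sketch shows the standard linear reward-penalty update through which a learning automaton adapts its action probability vector from the environment's reinforcement signal. The learning rates and the action set are illustrative; the paper's exact scheme and parameters are not reproduced here.

```python
# Sketch of the standard linear reward-penalty (L_RP) update for a learning
# automaton's action probability vector; step sizes are illustrative only.
import random

class LearningAutomaton:
    def __init__(self, n_actions, a=0.1, b=0.05):
        self.p = [1.0 / n_actions] * n_actions   # action probability vector
        self.a, self.b = a, b                     # reward / penalty step sizes

    def choose(self):
        """Pick an action according to the current probability vector."""
        return random.choices(range(len(self.p)), weights=self.p)[0]

    def update(self, chosen, rewarded):
        """Update the probability vector from the environment's signal."""
        r = len(self.p)
        for j in range(r):
            if rewarded:
                self.p[j] = (self.p[j] + self.a * (1 - self.p[j])
                             if j == chosen else (1 - self.a) * self.p[j])
            else:
                self.p[j] = ((1 - self.b) * self.p[j] if j == chosen
                             else self.b / (r - 1) + (1 - self.b) * self.p[j])

la = LearningAutomaton(n_actions=3)
action = la.choose()
la.update(action, rewarded=True)   # reinforcement signal from the environment
```

In the intrusion detection setting, the `rewarded` flag would be driven by the reinforcement signal, e.g. whether the monitored behaviour turned out to be legitimate.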

Relevance: 100.00%

Abstract:

Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.

Relevance: 100.00%

Abstract:

Dissertation presented as a partial requirement for obtaining the degree of Master in Statistics and Information Management.

Relevance: 100.00%

Abstract:

From a narratological perspective, this paper aims to address the theoretical issues concerning the functioning of the so-called «narrative bifurcation» in data presentation and information retrieval. Its use in cyberspace calls for a reassessment of it as a storytelling device. Films have shown its fundamental role in the creation of suspense. Interactive fiction and games have unveiled the possibility of plots with multiple choices, giving continuity to cinema's split-screen experiences. Using practical examples, this paper shows how this storytelling tool returns to its primitive form and ends up conditioning cloud computing interface design.

Relevance: 100.00%

Abstract:

This paper presents a proposal for a management model based on reliability requirements concerning Cloud Computing (CC). The proposal was based on a literature review focused on the problems, challenges and ongoing studies related to the safety and reliability of Information Systems (IS) in this technological environment. This literature review examined the existing obstacles and challenges from the point of view of respected authors on the subject. The main issues are addressed and structured as a model, called the "Trust Model for Cloud Computing environment". This is a proactive proposal that aims to organize and discuss management solutions for the CC environment, with a view to improving the reliability with which IS applications operate, for both providers and their customers. Central to trust, one of the CC challenges is the development of models for mutual audit management agreements, so that a formal relationship involving the relevant legal responsibilities can be established. To establish and control the appropriate contractual requirements, it is necessary to adopt technologies that can collect the data needed to inform risk decisions, such as access usage, security controls, location and other references related to the use of the service. In this process, cloud service providers and consumers themselves must have metrics and controls to support cloud-use management in compliance with the SLAs agreed between the parties. Organizing these studies and disseminating them in the market as a conceptual model capable of establishing parameters to regulate a reliable relationship between providers and users of IT services in the CC environment is a useful instrument to guide providers, developers and users towards offering secure and reliable services and applications.
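As a purely hypothetical illustration of the kind of metric-versus-SLA check such controls imply, the sketch below compares collected usage data with agreed thresholds; the metric names and thresholds are invented for the example and are not part of the proposed model.

```python
# Hypothetical sketch: checking collected cloud-usage metrics against SLA
# thresholds. Metric names and threshold values are invented for illustration.
SLA_THRESHOLDS = {"availability_pct": 99.5, "max_failed_logins": 10}

def check_sla_compliance(collected_metrics):
    """Return the list of SLA clauses violated by the collected metrics."""
    violations = []
    if collected_metrics["availability_pct"] < SLA_THRESHOLDS["availability_pct"]:
        violations.append("availability below agreed level")
    if collected_metrics["failed_logins"] > SLA_THRESHOLDS["max_failed_logins"]:
        violations.append("suspicious access usage")
    return violations

print(check_sla_compliance({"availability_pct": 99.2, "failed_logins": 3}))
```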

Relevance: 100.00%

Abstract:

Grid is a hardware and software infrastructure that provides dependable, consistent, pervasive, and inexpensive access to high-end computational resources. Grid enables access to the resources but it does not guarantee any quality of service. Moreover, Grid does not provide performance isolation; the job of one user can influence the performance of another user's job. Another problem with Grid is that its users belong to the scientific community and their jobs require specific, customized software environments. Providing the perfect environment to the user is very difficult in Grid because of its dispersed and heterogeneous nature. Cloud computing, on the other hand, provides full customization and control, but there is no simple procedure for submitting user jobs as there is in Grid. Grid computing can provide customized resources and performance to the user using virtualization. A virtual machine can join the Grid as an execution node, or it can be submitted as a job with user jobs inside it. Where the first method gives quality of service and performance isolation, the second method additionally provides customization and administration. In this thesis, a solution is proposed to enable virtual machine reuse, which provides performance isolation together with customization and administration. The same virtual machine can be used for several jobs. In the proposed solution, customized virtual machines join the Grid pool on user request. The proposed solution describes two scenarios to achieve this goal. In the first scenario, users submit their customized virtual machine as a job; the virtual machine joins the Grid pool when it is powered on. In the second scenario, user-customized virtual machines are preconfigured in the execution system and join the Grid pool on user request. Condor and VMware Server are used to deploy and test the scenarios. Condor supports virtual machine jobs: scenario 1 is deployed using the Condor VM universe, while the second scenario uses the VMware VIX API to script powering the remote virtual machines on and off, as sketched below. The experimental results show that, since scenario 2 does not need to transfer the virtual machine image, the virtual machine becomes live in the pool faster. In scenario 1, the virtual machine runs as a Condor job, so it is easy to administer; the only pitfall of scenario 1 is the network traffic.
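The following Python sketch illustrates the spirit of scenario 2: powering a preconfigured virtual machine on and off with the vmrun utility shipped with the VMware VIX tooling, so that the machine joins and leaves the pool on request. The paths are placeholders and the remote-host options used against VMware Server are omitted; this is a sketch of the approach, not the thesis's actual scripts.

```python
# Rough sketch of scenario 2: starting and stopping a preconfigured VM with
# the vmrun utility (part of the VMware VIX tooling). The .vmx path is a
# placeholder; remote-host options for VMware Server are omitted here.
import subprocess

VMX_PATH = "/vms/grid-worker-01/grid-worker-01.vmx"   # hypothetical VM config

def power_on(vmx_path):
    """Start the VM headless so it can join the Condor/Grid pool."""
    subprocess.run(["vmrun", "start", vmx_path, "nogui"], check=True)

def power_off(vmx_path):
    """Shut the VM down once the user's jobs are finished."""
    subprocess.run(["vmrun", "stop", vmx_path, "soft"], check=True)

if __name__ == "__main__":
    power_on(VMX_PATH)
    # ... jobs run on the VM while it is part of the pool ...
    power_off(VMX_PATH)
```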