758 results for cloud computing.
Abstract:
ICTs are inseparable from on-site museography and indispensable in fixed and mobile networked museography. In too many cases, technological prostheses have been installed to give the cultural space a veneer of modernity, forgetting that technology must be at the service of the content, so that it becomes invisible and seamlessly interwoven with traditional museography. Mobile interfaces can merge the on-site and online museum and accompany people beyond the physical space. That merger must start from a narrative database open to tangible and intangible works from other museums, so that the limitations of the physical museum are not carried over to the virtual one. In the on-site museum, immersive hypermedia installations make sense when they enable innovative cultural experiences. Interactivity (virtual relationships) must coexist with interaction (physical and personal relationships) and be at the service of all people, starting from the premise that every one of us has limitations. Working across disciplines helps us understand the museum better so as to put it at the service of people.
Abstract:
Cyberspace is a highly complex conflict arena because it is in constant evolution. Neither the European Union nor any other actor in the international system is safe from threats originating in cyberspace. Yet the steps the EU has taken in the field of cybersecurity are far from sufficient. Europe needs a cybersecurity Strategy that is truly capable of integrating the different national Strategies. Greater determination, greater resources and better instruments are urgently needed to enable the Union to implement genuinely effective crisis management and cyberconflict prevention.
Abstract:
Development of the Internet-of-Services will be hampered by heterogeneous Internet-of-Things infrastructures: inconsistency in communicating with participating objects, connectivity between them, topology definition and data transfer, access via cloud computing for data storage, and so on. Our proposed solutions are applicable to a random topology scenario and allow multi-operational sensor networks to be established out of single networks, as well as single-service networks formed with the participation of multiple networks, thus allowing virtual links to be created and resources to be shared. The designed layers are context-aware, application-oriented, and capable of representing physical objects to a management system, along with discovery of services. The reliability issue is addressed by deploying the IETF-supported IEEE 802.15.4 network model for low-rate wireless personal area networks. The flow-sensor achieved better results than the typical sensor in terms of reachability, throughput, energy consumption and diversity gain, and performance can be improved further by allowing the maximum number of multicast groups.
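As a rough illustration of the evaluation setting described above, the sketch below (in Python, with the node count, field size and radio range assumed rather than taken from the thesis) builds a random sensor topology and measures multi-hop reachability, one of the metrics on which the flow-sensor and the typical sensor are compared.

```python
# Hypothetical sketch: random sensor topology and multi-hop reachability.
import random
from collections import deque

def random_topology(n=50, field=100.0, radio_range=20.0, seed=1):
    random.seed(seed)
    pos = {i: (random.uniform(0, field), random.uniform(0, field)) for i in range(n)}
    links = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            dx, dy = pos[i][0] - pos[j][0], pos[i][1] - pos[j][1]
            if (dx * dx + dy * dy) ** 0.5 <= radio_range:  # within radio range
                links[i].add(j)
                links[j].add(i)
    return links

def reachability(links, source=0):
    # Fraction of nodes reachable from the source via multi-hop links (BFS).
    seen, queue = {source}, deque([source])
    while queue:
        u = queue.popleft()
        for v in links[u] - seen:
            seen.add(v)
            queue.append(v)
    return len(seen) / len(links)

print(reachability(random_topology()))
```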
Abstract:
As IT infrastructures grow in complexity and Internet of Things (IoT) scenarios become pervasive, the need emerges for new computational models based on autonomous entities capable of achieving high-level goals by interacting with one another, supported by infrastructures such as Fog Computing, for its proximity to the data sources, and Cloud Computing, which offers complex back-end analytical services able to deliver results to millions of users. These new scenarios call for rethinking the way software is designed and developed from an agile perspective. The activities of developer teams (Dev) should be tightly coupled with those of the teams supporting the Cloud (Ops), following the new methodologies known today as DevOps. However, given the lack of adequate abstractions at the programming-language level, IoT developers are often led to follow bottom-up development approaches that are frequently inadequate for coping with the complexity of applications in this domain and the heterogeneity of the software components that make them up. Since the monolithic applications of the past appear hard to scale and manage in a multi-user Cloud environment, many consider it necessary to adopt a new architectural style in which an application is seen as a composition of microservices, each dedicated to a specific application capability and each under the responsibility of a small developer team, from problem analysis to deployment and management. Since there is not yet a single, shared definition of microservices and of the other concepts emerging from IoT and the Cloud, nor are there specialized languages for this domain, defining custom metamodels, combined with the automatic generation of the glue code towards the infrastructure, could help a development team raise the level of abstraction, encapsulating implementation details in a company software factory. Thanks to software production systems based on Model Driven Software Development (MDSD), the currently missing top-down approach can be recovered, allowing attention to be focused on the business logic of the applications. The thesis presents an example of this possible approach, starting from the idea that an IoT application is, first of all, a distributed software system in which the interaction between active components (modelled as actors) plays a fundamental role.
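The closing idea, that an IoT application is a distributed system of interacting active components modelled as actors, can be sketched minimally as below. This is an illustrative Python/asyncio sketch, not the metamodel or code generator discussed in the thesis; the class and message names are invented for the example.

```python
# Minimal actor sketch: an active component with a mailbox processing messages.
import asyncio

class Actor:
    def __init__(self, name):
        self.name = name
        self.mailbox = asyncio.Queue()

    async def send(self, msg):
        await self.mailbox.put(msg)

    async def run(self):
        while True:
            msg = await self.mailbox.get()
            if msg is None:          # poison pill: stop the actor
                break
            await self.handle(msg)

    async def handle(self, msg):
        print(f"{self.name} received {msg}")

async def main():
    aggregator = Actor("aggregator")
    task = asyncio.create_task(aggregator.run())
    await aggregator.send({"sensor": "temp-01", "value": 21.5})  # a device reading
    await aggregator.send(None)
    await task

asyncio.run(main())
```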
Abstract:
Twitter is the biggest social network in the world, and every day millions of tweets are posted and discussed, expressing various views and opinions. A large variety of research activities have been conducted to study how these opinions can be clustered and analyzed, so that tendencies can be uncovered. Due to the inherent weaknesses of tweets - very short texts and very informal styles of writing - it is rather hard to analyze tweet data with good performance and accuracy. In this paper, we approach the problem from another angle - using a two-layer structure to analyze the Twitter data: LDA with topic map modelling. The experimental results demonstrate that this approach represents progress in Twitter data analysis. However, more experiments with this method are needed to confirm that accurate analytic results can be maintained.
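For readers unfamiliar with the first layer of the proposed structure, the following hedged sketch fits an LDA model on a few tweet-like texts using scikit-learn; the tiny corpus, the parameters and the absence of the topic-map layer are simplifications for this example, not the paper's setup.

```python
# Illustrative LDA topic extraction on short, tweet-like texts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = [
    "cloud outage hits major provider again",
    "loving the new photo filters on this app",
    "provider promises better uptime after outage",
    "filters make my photos look amazing",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(tweets)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)            # per-tweet topic mixture

terms = vec.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = [terms[i] for i in comp.argsort()[-3:]]  # three highest-weight terms
    print(f"topic {k}: {top}")
```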
Abstract:
The growing demand for large-scale virtualization environments, such as those used in cloud computing, has led to a need for efficient management of computing resources. RAM is one of the most heavily demanded resources in these environments, and is usually the main factor limiting the number of virtual machines that can run on a physical host. Recently, hypervisors have introduced mechanisms for transparent memory sharing between virtual machines in order to reduce the total demand for system memory. These mechanisms “merge” similar pages detected in multiple virtual machines into the same physical memory, using a copy-on-write mechanism in a manner that is transparent to the guest systems. The objective of this study is to present an overview of these mechanisms and to evaluate their performance and effectiveness. Results for two popular hypervisors (VMware and KVM) using different guest operating systems (Linux and Windows) and different workloads (synthetic and real) are presented herein. The results show significant performance differences between hypervisors according to the guest system workloads and execution time.
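On KVM/Linux hosts, the transparent sharing mechanism discussed above is Kernel Samepage Merging (KSM), whose counters are exposed under /sys/kernel/mm/ksm/. The sketch below is illustrative only; it does not reproduce the study's benchmarks, and VMware's equivalent mechanism is not observable this way.

```python
# Read KSM counters to estimate how many guest pages are currently deduplicated.
from pathlib import Path

KSM = Path("/sys/kernel/mm/ksm")

def ksm_stats():
    stats = {}
    for name in ("pages_shared", "pages_sharing", "pages_unshared", "full_scans"):
        f = KSM / name
        if f.exists():
            stats[name] = int(f.read_text())
    return stats

stats = ksm_stats()
if stats.get("pages_shared"):
    # Each shared physical page backs 'pages_sharing' guest pages via copy-on-write.
    ratio = stats["pages_sharing"] / stats["pages_shared"]
    print(f"average sharing ratio: {ratio:.2f}")
print(stats)
```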
Abstract:
The air-sea flux of greenhouse gases (e.g. carbon dioxide, CO2) is a critical part of the climate system and a major factor in the biogeochemical development of the oceans. More accurate and higher resolution calculations of these gas fluxes are required if we are to fully understand and predict our future climate. Satellite Earth observation is able to provide large spatial scale datasets that can be used to study gas fluxes. However, the large storage requirements needed to host such data can restrict its use by the scientific community. Fortunately, the development of cloud computing can provide a solution. Here we describe an open source air-sea CO2 flux processing toolbox called the ‘FluxEngine’, designed for use on a cloud-computing infrastructure. The toolbox allows users to easily generate global and regional air-sea CO2 flux data from model, in situ and Earth observation data, and its air-sea gas flux calculation is user configurable. Its current installation on the Nephalae cloud allows users to easily exploit more than 8 terabytes of climate-quality Earth observation data for the derivation of gas fluxes. The resultant NetCDF data output files contain >20 data layers containing the various stages of the flux calculation along with process indicator layers to aid interpretation of the data. This paper describes the toolbox design and the verification of the air-sea CO2 flux calculations, demonstrates the use of the tools for studying global and shelf-sea air-sea fluxes, and outlines future developments.
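The underlying calculation is the bulk flux F = k · α · (pCO2_water − pCO2_air). The sketch below works through that formula in Python with an assumed quadratic wind-speed parameterisation for k and illustrative input numbers; it is not FluxEngine's configurable implementation.

```python
# Worked bulk air-sea CO2 flux example (assumed parameterisation and values).

def gas_transfer_velocity(u10, schmidt=660.0):
    # Quadratic wind-speed parameterisation (cm/hr), scaled to the CO2 Schmidt number.
    return 0.251 * u10 ** 2 * (660.0 / schmidt) ** 0.5

def co2_flux(u10, pco2_water, pco2_air, solubility=0.035):
    """Flux in mol m-2 yr-1 (positive = outgassing).

    solubility in mol L-1 atm-1; pCO2 values in microatmospheres.
    """
    k_cm_hr = gas_transfer_velocity(u10)
    k_m_yr = k_cm_hr * 1e-2 * 24 * 365            # cm/hr -> m/yr
    dpco2_atm = (pco2_water - pco2_air) * 1e-6    # uatm -> atm
    return k_m_yr * (solubility * 1e3) * dpco2_atm  # mol L-1 -> mol m-3

# Example: moderate wind, undersaturated surface water (ocean takes up CO2).
print(co2_flux(u10=7.5, pco2_water=360.0, pco2_air=400.0))
```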
Abstract:
The big data era has dramatically transformed our lives; however, security incidents such as data breaches can put sensitive data (e.g. photos, identities, genomes) at risk. To protect users' data privacy, there is growing interest in building secure cloud computing systems, which keep sensitive data inputs hidden, even from computation providers. Conceptually, secure cloud computing systems leverage cryptographic techniques (e.g. secure multiparty computation) and trusted hardware (e.g. secure processors) to instantiate a “secure” abstract machine consisting of a CPU and encrypted memory, so that an adversary cannot learn information through either the computation within the CPU or the data in the memory. Unfortunately, evidence has shown that side channels (e.g. memory accesses, timing, and termination) in such a “secure” abstract machine may leak highly sensitive information, including the cryptographic keys that form the root of trust for these secure systems. This thesis broadly expands the investigation of a research direction called trace oblivious computation, in which programming language techniques are employed to prevent side-channel information leakage. We demonstrate the feasibility of trace oblivious computation by formalizing and building several systems: GhostRider, a hardware-software co-design that provides a hardware-based trace oblivious computing solution; SCVM, an automatic RAM-model secure computation system; and ObliVM, a programming framework that makes it easier for programmers to develop such applications. All of these systems enjoy formal security guarantees while demonstrating performance better than prior systems by one to several orders of magnitude.
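The core idea of trace-oblivious computation can be illustrated with a toy example: replace a secret-dependent branch with a selection whose memory and control-flow trace is identical for both secret values. The Python sketch below is purely illustrative (Python itself gives no timing guarantees) and is not taken from GhostRider, SCVM or ObliVM.

```python
# Toy contrast between a leaky branch and an oblivious selection.

def leaky_max(secret_a, secret_b):
    if secret_a > secret_b:      # branch taken depends on secret data
        return secret_a
    return secret_b

def oblivious_max(secret_a, secret_b):
    # Both operands are always read and combined via a mask, so the
    # control-flow and memory trace does not depend on which value is larger.
    cond = int(secret_a > secret_b)   # 1 or 0
    return cond * secret_a + (1 - cond) * secret_b

assert oblivious_max(3, 7) == leaky_max(3, 7) == 7
```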
Abstract:
We are pleased to present a new issue of Revista Bibliotecas. It includes research of interest on the following topics: librarian archetypes, information literacy, and cloud computing. In the article devoted to archetypes, the author argues that the library profession, like other disciplines, is exposed to a series of stereotypes. Such impressions and beliefs, driven by prejudice, keep people from venturing into a fascinating discipline and, as a consequence, affect first-year student enrolment. The article describes the situation of the Escuela de Bibliotecología, Documentación e Información at the Universidad Nacional and puts forward a teaching proposal aimed at improving how secondary-school students perceive librarianship.
Abstract:
Recent years have seen growing adoption of cloud computing technologies, first by individuals and small companies and more recently by large organizations. This technology has underpinned the emergence of a set of new trends, such as the Internet of Things connecting our personal devices and wearables to social networks, big data processes that make it possible to profile customer behaviour, and integrated service desks that make life easier for citizens. However, as with every disruptive trend, the opportunities come with a set of new risks that need to be weighed. Although this path is becoming practically inevitable for a large share of companies and government entities, its adoption must be subject to permanent assessment and monitoring of the associated advantages and risks. To that end, it is essential that organizations equip themselves with efficient risk management, so that they can characterize risks (identify, analyse and quantify them) and move towards this new paradigm in a safe and methodical way. If they do not, the risks become evident, ranging from a possible loss of competitiveness against their peers to a loss of trust from customers and business partners, potentially culminating in total business inactivity. This master's thesis develops a generic risk analysis based on ISO 31000:2009 and puts forward a proposed risk register that can support decision-making by those responsible for private or state organizations when contracting and maintaining cloud computing services.
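As a hedged illustration of the kind of risk-register entry such a proposal might contain, the sketch below models a simple ISO 31000-style register in Python; the fields, the 5x5 scoring scale and the example risks are assumptions for illustration, not the thesis's actual template.

```python
# Illustrative cloud-adoption risk register with likelihood x impact scoring.
from dataclasses import dataclass

@dataclass
class RiskEntry:
    risk_id: str
    description: str
    likelihood: int      # 1 (rare) .. 5 (almost certain)
    impact: int          # 1 (negligible) .. 5 (severe)
    treatment: str

    @property
    def level(self) -> int:
        return self.likelihood * self.impact

register = [
    RiskEntry("R01", "Vendor lock-in on a single cloud provider", 4, 3,
              "favour portable services; plan an exit strategy"),
    RiskEntry("R02", "Data breach of customer records in the cloud", 2, 5,
              "encrypt at rest; review provider SLA and certifications"),
]

# Highest-level risks first, to support the decision-making described above.
for r in sorted(register, key=lambda r: r.level, reverse=True):
    print(r.risk_id, r.level, r.description)
```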
Abstract:
This document describes the efforts taken to create a generic computing solution for the most recurrent problems found in the production of two-dimensional, sprite-based videogames running on mobile platforms. The developed system is a web application that fits within the cloud-computing paradigm and therefore enjoys all of its advantages in terms of data security, accessibility and application maintainability. In addition to the functional issues, the system is also studied in terms of its internal software architecture, since it was planned and implemented with the aim of attaining an easy-to-maintain application that is both scalable and adaptable. Furthermore, an algorithm is also proposed that aims to find an optimized solution to the spatial distribution problem of several rectangular areas, with no overlapping and no dimensional restrictions, either on the final arrangement or on the arranged areas.
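The rectangle-arrangement problem mentioned above (placing sprite frames without overlaps) can be approximated with a simple shelf-packing heuristic, sketched below in Python. Unlike the algorithm proposed in the work, this sketch assumes a fixed atlas width to keep the example short.

```python
# Shelf packing: place rectangles left to right, opening a new shelf when a row fills.

def shelf_pack(rects, atlas_width):
    """rects: list of (w, h); returns list of (x, y, w, h) placements."""
    placements, x, y, shelf_h = [], 0, 0, 0
    for w, h in sorted(rects, key=lambda r: r[1], reverse=True):  # tallest first
        if x + w > atlas_width:          # start a new shelf below the current one
            x, y, shelf_h = 0, y + shelf_h, 0
        placements.append((x, y, w, h))
        x += w
        shelf_h = max(shelf_h, h)
    return placements

print(shelf_pack([(32, 32), (64, 16), (16, 48), (48, 24)], atlas_width=96))
```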
Abstract:
The diversity in the way cloud providers offer their services, formulate their SLAs, present their QoS, or support different technologies makes the portability and interoperability of cloud applications very difficult and favours the well-known vendor lock-in problem. We propose a model for describing cloud applications and the resources they require in an agnostic way, independent of providers and resources, in which individual application modules, and entire applications, may be re-deployed using different services without modification. To support this model, and following the proposal of a variety of cross-cloud application management tools by different authors, we propose going one step further in the unification of cloud services with a management approach in which IaaS and PaaS services are integrated into a unified interface. We provide support for deploying applications whose components are distributed over different cloud providers, using IaaS and PaaS services interchangeably.
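A provider-agnostic application description in the spirit of this model might look like the sketch below: modules declare the kind of service they require (IaaS or PaaS) but not a concrete provider, so a deployer can map them to different clouds. The structure and field names are illustrative assumptions, not the proposal's actual schema.

```python
# Illustrative provider-agnostic application model and a trivial deployer.
application = {
    "name": "webshop",
    "modules": [
        {
            "name": "frontend",
            "artifact": "frontend.war",
            "requires": {"service": "PaaS", "runtime": "java", "min_instances": 2},
        },
        {
            "name": "orders-db",
            "artifact": "postgres-13",
            "requires": {"service": "IaaS", "cpu": 2, "ram_gb": 8, "disk_gb": 100},
        },
    ],
    "relations": [("frontend", "connects_to", "orders-db")],
}

def deploy(app, provider_for):
    # provider_for maps each module to whichever concrete cloud offer was chosen.
    for module in app["modules"]:
        target = provider_for[module["name"]]
        print(f"deploying {module['name']} ({module['requires']['service']}) on {target}")

deploy(application, {"frontend": "provider-A PaaS", "orders-db": "provider-B IaaS"})
```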
Abstract:
This essay is presented as a master's thesis within the information technology law programme. It examines different business models whose common characteristic is the commercialization of data in the context of information technology. The commercial practices observed are little known, and one of the objectives is to inform the reader about how these practices work. In order to properly frame the issues, the essay first discusses the theoretical concepts of privacy and personal data protection. Once this overview is drawn, the practices of data brokerage, cloud computing and analytics solutions are dissected. In the course of this description, the legal issues raised by each aspect of the practice in question are examined. Finally, the last chapter of the essay is devoted to two issues, namely the role of consent and data security, which do not stem from a specific commercial practice but are above all direct consequences of the evolution of information technology.
Abstract:
The increasing needs for computational power in areas such as weather simulation, genomics or Internet applications have led to the sharing of geographically distributed and heterogeneous resources from commercial data centers and scientific institutions. Research in the areas of utility, grid and cloud computing, together with improvements in network and hardware virtualization, has resulted in methods to locate and use resources to rapidly provision virtual environments in a flexible manner, while lowering costs for consumers and providers. However, there is still a lack of methodologies to enable efficient and seamless sharing of resources among institutions. In this work, we concentrate on the problem of executing parallel scientific applications across distributed resources belonging to separate organizations. Our approach can be divided into three main points. First, we define and implement an interoperable grid protocol to distribute job workloads among partners with different middleware and execution resources. Second, we research and implement different policies for virtual resource provisioning and job-to-resource allocation, taking advantage of their cooperation to improve execution cost and performance. Third, we explore the consequences of on-demand provisioning and allocation in the problem of site selection for the execution of parallel workloads, and propose new strategies to reduce job slowdown and overall cost.
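As a hedged illustration of the third point, the Python sketch below selects an execution site for a parallel job by weighing cost against expected queue wait; the site data, weights and scoring rule are invented for the example and are not the policies evaluated in the work.

```python
# Toy site-selection policy: pick the cheapest adequate site, penalising queue wait.
sites = [
    {"name": "org-A-cluster", "free_cores": 64,  "cost_per_core_hour": 0.05, "queue_wait_h": 2.0},
    {"name": "org-B-cloud",   "free_cores": 256, "cost_per_core_hour": 0.12, "queue_wait_h": 0.1},
]

def select_site(job_cores, job_hours, weight_cost=1.0, weight_wait=1.0):
    candidates = [s for s in sites if s["free_cores"] >= job_cores]
    def score(s):
        cost = job_cores * job_hours * s["cost_per_core_hour"]
        return weight_cost * cost + weight_wait * s["queue_wait_h"]
    return min(candidates, key=score)["name"] if candidates else None

print(select_site(job_cores=128, job_hours=4))
```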