828 results for cloud computing resources
Abstract:
The big data era has dramatically transformed our lives; however, security incidents such as data breaches can put sensitive data (e.g., photos, identities, genomes) at risk. To protect users' data privacy, there is growing interest in building secure cloud computing systems, which keep sensitive data inputs hidden, even from computation providers. Conceptually, secure cloud computing systems leverage cryptographic techniques (e.g., secure multiparty computation) and trusted hardware (e.g., secure processors) to instantiate a “secure” abstract machine consisting of a CPU and encrypted memory, so that an adversary cannot learn information through either the computation within the CPU or the data in the memory. Unfortunately, evidence has shown that side channels (e.g., memory accesses, timing, and termination) in such a “secure” abstract machine may leak highly sensitive information, including the cryptographic keys that form the root of trust for these secure systems. This thesis broadly expands the investigation of a research direction called trace oblivious computation, in which programming language techniques are employed to prevent side-channel information leakage. We demonstrate the feasibility of trace oblivious computation by formalizing and building several systems: GhostRider, a hardware-software co-design that provides a hardware-based trace-oblivious computing solution; SCVM, an automatic RAM-model secure computation system; and ObliVM, a programming framework that helps programmers develop applications. All of these systems enjoy formal security guarantees while outperforming prior systems by one to several orders of magnitude.
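To make the access-pattern side channel mentioned above concrete, the sketch below contrasts a data-dependent array lookup with a linear-scan lookup whose memory trace does not depend on the secret index. This is only an illustration of the idea; it is not the ORAM-based construction used by GhostRider, SCVM, or ObliVM, and the function names are hypothetical.

```python
# Illustrative sketch only: a naive data-dependent lookup versus a linear-scan
# "oblivious" lookup whose access pattern is independent of the secret index.
# This is NOT the thesis's construction; names and structure are hypothetical.

def leaky_read(table, secret_index):
    # The address touched reveals secret_index to anyone observing memory traffic.
    return table[secret_index]

def oblivious_read(table, secret_index):
    # Touch every slot in a fixed order; select the wanted one arithmetically.
    # The observable access pattern is the same for every secret_index.
    result = 0
    for i, value in enumerate(table):
        match = int(i == secret_index)   # 0 or 1; in hardware this is a mux
        result = match * value + (1 - match) * result
    return result

if __name__ == "__main__":
    data = [7, 13, 42, 99]
    assert oblivious_read(data, 2) == leaky_read(data, 2) == 42
```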
Abstract:
We are pleased to present a new issue of Revista Bibliotecas. It includes research of interest on the following topics: librarian archetypes, information literacy, and cloud computing. In the article devoted to archetypes, the author argues that the library profession, like other disciplines, is exposed to a series of stereotypes. These impressions and beliefs lead people, moved by prejudice, not to venture into a fascinating discipline and, as a consequence, affect first-year student enrolment. The article describes the situation of the Escuela de Bibliotecología, Documentación e Información at the Universidad Nacional and puts forward a didactic proposal aimed at improving the perception of library science among secondary school students.
Abstract:
Recent years have seen growing adoption of cloud computing technologies, initially by individuals and small companies and more recently by large organizations. This technology has underpinned the emergence of a set of new trends, such as the Internet of Things connecting our personal devices and wearables to social networks, big data processes that make it possible to profile customer behaviour, and integrated public services that make life easier for citizens. However, as with every disruptive trend, the opportunities it brings come with a set of new risks that must be weighed. Although this path has become practically inevitable for a large share of companies and government entities, its adoption should be subject to permanent evaluation and monitoring of the associated advantages and risks. To that end, it is essential that organizations equip themselves with efficient risk management, so that they can characterize the risks (identify, analyse, and quantify them) and move towards this new paradigm in a safe and methodical way. If they do not, the risks become evident, ranging from a possible loss of competitiveness against their peers and a loss of trust from customers and business partners, to complete business inactivity. This master's thesis develops a generic risk analysis based on the ISO 31000:2009 standard and proposes a risk register that can support decision-making processes for contracting and maintaining cloud computing services by managers of private or public organizations.
Abstract:
This work describes the efforts taken to create a generic computing solution for the most recurrent problems found in the production of two-dimensional, sprite-based videogames running on mobile platforms. The developed system is a web application that fits within the cloud computing paradigm and therefore enjoys all of its advantages in terms of data safety, accessibility, and application maintainability. In addition to the functional issues, the system is also studied in terms of its internal software architecture, since it was planned and implemented with the goal of attaining an application that is easy to maintain, scalable, and adaptable. Furthermore, an algorithm is also proposed that aims to find an optimized solution to the space distribution problem of several rectangular areas, with no overlapping and no dimensional restrictions on either the final arrangement or the arranged areas.
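The abstract above does not detail the proposed packing algorithm; purely as a point of reference, the following is a minimal sketch of a simple shelf-packing heuristic for the same kind of problem (arranging rectangles, such as sprites in a texture atlas, without overlap). The fixed atlas width and the sprite sizes are assumptions made for the example, not taken from the thesis.

```python
# Minimal sketch of a shelf-packing heuristic for arranging rectangles
# (e.g., sprites in a texture atlas) without overlap. The thesis's own
# algorithm is not described in the abstract; this only illustrates the
# problem, with an atlas width chosen for the example.

def shelf_pack(rects, atlas_width):
    """rects: list of (w, h); returns a list of (x, y, w, h) placements."""
    placements = []
    shelf_y = 0        # top of the current shelf
    shelf_h = 0        # height of the tallest rectangle on the shelf
    cursor_x = 0       # next free x position on the shelf
    # Sorting by decreasing height tends to reduce wasted space.
    for w, h in sorted(rects, key=lambda r: r[1], reverse=True):
        if cursor_x + w > atlas_width:      # start a new shelf
            shelf_y += shelf_h
            cursor_x, shelf_h = 0, 0
        placements.append((cursor_x, shelf_y, w, h))
        cursor_x += w
        shelf_h = max(shelf_h, h)
    return placements

if __name__ == "__main__":
    sprites = [(64, 64), (32, 48), (128, 32), (48, 48)]
    for x, y, w, h in shelf_pack(sprites, atlas_width=128):
        print(f"sprite {w}x{h} placed at ({x}, {y})")
```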
Abstract:
This essay is presented as a master's thesis within the information technology law programme. It examines various business models whose common characteristic is the commercialization of data in the context of information technology. The commercial practices observed are little known, and one of the objectives is to inform the reader about how these practices work. In order to frame the issues properly, the essay first discusses the theoretical concepts of privacy and personal data protection. Once this overview is drawn, the practices of "data brokerage", "cloud computing", and "analytics" solutions are dissected. Throughout this description, the legal issues raised by each aspect of the practice in question are examined. Finally, the last chapter of the essay is devoted to two issues, the role of consent and data security, which are not tied to a specific commercial practice but are above all direct consequences of the evolution of information technology.
Abstract:
In the presented thesis work, the meshfree method with distance fields was coupled with the lattice Boltzmann method to obtain solutions of fluid-structure interaction problems. The work involved the development and implementation of numerical algorithms, data structures, and software. Numerical and computational properties of the coupling algorithm combining the meshfree method with distance fields and the lattice Boltzmann method were investigated. The convergence and accuracy of the methodology were validated against analytical solutions. The research focused on fluid-structure interaction solutions in complex, mesh-resistant domains, as both the lattice Boltzmann method and the meshfree method with distance fields are particularly adept in these situations. Furthermore, the fluid solution provided by the lattice Boltzmann method is massively scalable, allowing extensive use of cutting-edge parallel computing resources to accelerate this phase of the solution process. The meshfree method with distance fields allows for exact satisfaction of boundary conditions, making it possible to exactly capture the effects of the fluid field on the solid structure.
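For readers unfamiliar with how distance fields yield exact boundary conditions, the sketch below shows the standard solution-structure idea in one dimension: writing the approximation as u = g + φ·w, where φ vanishes on the boundary, guarantees that the prescribed Dirichlet values hold exactly regardless of the numerical part w. The domain, boundary values, and stand-in w are assumptions for illustration, not the thesis's actual formulation.

```python
# Minimal 1D illustration of exact Dirichlet boundary conditions via a
# distance field, in the spirit of solution structures u = g + phi * w:
# phi vanishes on the boundary, so u matches the prescribed values there
# no matter what the numerical part w is. The domain [0, 1] and boundary
# values below are assumptions for the example, not from the thesis.

def phi(x):                 # (approximate) distance field, zero at x=0 and x=1
    return x * (1.0 - x)

def g(x):                   # any smooth interpolation of the boundary data
    u_left, u_right = 2.0, 5.0
    return u_left + (u_right - u_left) * x

def u(x, w):                # solution structure: boundary-exact by construction
    return g(x) + phi(x) * w(x)

if __name__ == "__main__":
    some_w = lambda x: 10.0 * x**2 - 3.0    # stand-in for the computed field
    assert abs(u(0.0, some_w) - 2.0) < 1e-12
    assert abs(u(1.0, some_w) - 5.0) < 1e-12
    print("boundary values are satisfied exactly for any w")
```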
Abstract:
The popularity of cloud computing has led to a dramatic increase in the number of data centers in the world. Ever-increasing computational demands, along with the slowdown in technology scaling, have ushered in an era of power-limited servers. Techniques such as near-threshold computing (NTC) can be used to improve energy efficiency in the post-Dennard-scaling era. This paper describes an architecture based on FD-SOI process technology for near-threshold operation in servers. Our work explores the trade-offs in energy and performance when running a wide range of applications found in private and public clouds, ranging from traditional scale-out applications, such as web search or media streaming, to virtualized banking applications. Our study demonstrates the benefits of near-threshold operation and proposes several directions to synergistically increase the energy proportionality of a near-threshold server.
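As a rough, back-of-the-envelope illustration of the energy/performance trade-off discussed above, the sketch below applies the standard dynamic-energy relation E_dyn ∝ C·V² to an assumed drop from nominal to near-threshold supply voltage. The voltage and frequency figures are illustrative assumptions, not measurements from the paper.

```python
# Back-of-the-envelope sketch of the near-threshold trade-off: dynamic energy
# per operation scales roughly with C * V^2 while achievable frequency drops
# as the supply approaches the threshold voltage. All numbers below are
# illustrative assumptions, not results from the paper.

def dynamic_energy_ratio(v_scaled, v_nominal):
    # E_dyn ~ C * V^2, so the ratio is independent of the (unknown) capacitance.
    return (v_scaled / v_nominal) ** 2

if __name__ == "__main__":
    v_nominal = 0.9          # volts, assumed nominal supply
    v_near_threshold = 0.5   # volts, assumed near-threshold supply
    f_nominal_ghz = 2.0      # assumed nominal frequency
    f_nt_ghz = 0.6           # assumed near-threshold frequency (steep drop)

    e_ratio = dynamic_energy_ratio(v_near_threshold, v_nominal)
    perf_ratio = f_nt_ghz / f_nominal_ghz
    print(f"energy/op: {e_ratio:.2f}x of nominal "
          f"({(1 - e_ratio) * 100:.0f}% dynamic-energy saving)")
    print(f"single-thread performance: {perf_ratio:.2f}x of nominal")
```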
Abstract:
Objectives: To discuss how current research in the area of smart homes and ambient assisted living will be influenced by the use of big data. Methods: A scoping review of literature published in scientific journals and conference proceedings was performed, focusing on smart homes, ambient assisted living and big data over the years 2011-2014. Results: The health and social care market has lagged behind other markets when it comes to the introduction of innovative IT solutions, and it faces a number of challenges as the use of big data increases. First, there is a need for a sustainable and trustworthy information chain through which the needed information can be transferred from all producers to all consumers in a structured way. Second, there is a need for big data strategies and policies to manage the new situation in which information is handled and transferred independently of where the expertise is located. Finally, there is an opportunity to develop new and innovative business models for a market that supports cloud computing, social media, crowdsourcing, etc. Conclusions: The interdisciplinary area of big data, smart homes and ambient assisted living is no longer of interest only to IT developers; it is also of interest to decision makers, as customers make more informed choices among today's services. In the future it will be important to make information usable for managers and to improve decision making, to tailor smart home services based on big data, to develop new business models, to increase competition, and to identify policies that ensure privacy, security and liability.
Abstract:
Scientific research is increasingly data-intensive, relying more and more upon advanced computational resources to answer the questions most pressing to society at large. This report presents findings from a brief descriptive survey sent to a sample of 342 leading researchers at the University of Washington (UW), Seattle, Washington in 2010 and 2011 as the first stage of the larger National Science Foundation project “Interacting with Cyberinfrastructure in the Face of Changing Science.” The survey assesses these researchers' use of advanced computational resources, data, and software in their research. We present high-level findings that describe UW researchers' demographics, interdisciplinarity, research groups, data use, and software and computational use, including software development and use, data storage and transfer activities, collaboration tools, and computing resources. These findings offer insights into the state of computational resources in use during this time period, as well as a look at the data intensiveness of UW researchers' work.
Abstract:
The 10th European Conference on Information Systems Management is being held at the University of Evora, Portugal on 8/9 September 2016. The Conference Chair is Paulo Silva and the Programme Chairs are Prof. Rui Quaresma and Prof. António Guerreiro. ECISM provides an opportunity for individuals researching and working in the broad field of information systems management, including IT evaluation, to come together to exchange ideas and discuss current research in the field. This has developed into a particularly important forum for the present era, in which the challenges of managing information and evaluating the effectiveness of related technologies are constantly evolving in the world of Big Data and Cloud Computing. We hope that this year's conference will provide you with plenty of opportunities to share your expertise with colleagues from around the world. The keynote speakers for the Conference are Carlos Zorrinho from the Portuguese Delegation and Isabel Ramos from the University of Minho, Portugal. ECISM 2016 received an initial submission of 84 abstracts. After the double-blind peer review process, 25 academic papers, 7 PhD research papers, 3 Masters research papers and 5 work-in-progress papers have been accepted for publication in these Conference Proceedings. These papers represent research from around the world, including Belgium, Brazil, China, Czech Republic, Kazakhstan, Malaysia, New Zealand, Norway, Oman, Poland, Portugal, South Africa, Sweden, The Netherlands, UK and Vietnam.
Abstract:
A High-Performance Computing (HPC) job dispatcher is a critical piece of software that assigns finite computing resources to submitted jobs. This resource assignment over time is known as the on-line job dispatching problem in HPC systems. Because the problem is on-line, solutions must be computed in real time, and the time required to compute them cannot exceed a certain threshold without affecting normal system operation. In addition, a job dispatcher must deal with considerable uncertainty: submission times, the number of requested resources, and the duration of jobs. Heuristic-based techniques have been used broadly in HPC systems and achieve (sub-)optimal solutions in a short time. However, their scheduling and resource allocation components are separate, which produces decoupled decisions that may cause a loss of performance. Optimization-based techniques are used less often for this problem, although they can significantly improve the performance of HPC systems at the expense of higher computation time. Nowadays, HPC systems are used for modern applications, such as big data analytics and predictive model building, that generally consist of many short jobs. This information, however, is unknown at dispatching time, and job dispatchers need to process large numbers of such jobs quickly while ensuring high Quality-of-Service (QoS) levels. Constraint Programming (CP) has been shown to be an effective approach for tackling job dispatching problems. However, state-of-the-art CP-based job dispatchers are unable to meet the challenges of on-line dispatching, such as generating dispatching decisions within a short time and integrating current and past information about the hosting system. For these reasons, we propose CP-based dispatchers that are better suited to HPC systems running modern applications: they generate on-line dispatching decisions within an acceptable time and make effective use of job duration predictions to improve QoS levels, especially for workloads dominated by short jobs.
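To give a concrete, if simplified, picture of what a CP formulation of job dispatching looks like, the sketch below models node capacity with a cumulative constraint and minimizes makespan, assuming the Google OR-Tools CP-SAT solver. The job data, core capacity, solve-time limit, and objective are illustrative; this is not the thesis's dispatcher.

```python
# Minimal sketch of a CP model for job dispatching, assuming Google OR-Tools
# CP-SAT. It schedules a few jobs with estimated durations on a machine with a
# fixed number of cores via a cumulative constraint and minimizes makespan.
# The jobs, capacity, and objective are illustrative, not the thesis's model.
from ortools.sat.python import cp_model

def dispatch(jobs, cores, horizon):
    """jobs: list of (duration, requested_cores); returns start times or None."""
    model = cp_model.CpModel()
    starts, ends, intervals, demands = [], [], [], []
    for i, (duration, req) in enumerate(jobs):
        start = model.NewIntVar(0, horizon, f"start_{i}")
        end = model.NewIntVar(0, horizon, f"end_{i}")
        intervals.append(model.NewIntervalVar(start, duration, end, f"job_{i}"))
        starts.append(start)
        ends.append(end)
        demands.append(req)
    model.AddCumulative(intervals, demands, cores)   # never exceed core count
    makespan = model.NewIntVar(0, horizon, "makespan")
    model.AddMaxEquality(makespan, ends)
    model.Minimize(makespan)

    solver = cp_model.CpSolver()
    solver.parameters.max_time_in_seconds = 1.0      # on-line: bounded solve time
    if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
        return [solver.Value(s) for s in starts]
    return None

if __name__ == "__main__":
    # (duration, cores requested) -- many short jobs, as in modern workloads
    jobs = [(3, 2), (2, 1), (2, 3), (5, 1), (1, 2)]
    print(dispatch(jobs, cores=4, horizon=20))
```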
Abstract:
The modern industrial environment is populated by a myriad of intelligent devices that collaborate to accomplish the numerous business processes in place at production sites. The close collaboration between humans and work machines poses interesting new challenges that industry must overcome in order to implement the new digital policies demanded by the industrial transition. The Industry 5.0 movement is a companion revolution to the earlier Industry 4.0, and it rests on three characteristics that any industrial sector should have and pursue: human centrality, resilience, and sustainability. The fifth industrial revolution cannot be realized without building on Industry 4.0-enabled platforms. The common feature in the development of this kind of platform is the need to integrate the information and operational layers. Our thesis work focuses on the implementation of a platform that addresses all the digitization features foreseen by the fourth industrial revolution, making IT/OT convergence inside production plants an improvement rather than a risk. Furthermore, we added modular features to our platform that enable the Industry 5.0 vision. We supported human centrality through mobile crowdsensing techniques, and resilience and sustainability through pluggable cloud computing services combined with data coming from the crowd. We achieved important and encouraging results in all the domains in which we conducted our experiments. Our IT/OT convergence-enabled platform exhibits the performance needed to satisfy the strict requirements of production sites. The multi-layer capability of the framework enables the exploitation of data not strictly coming from work machines, allowing a closer interaction between the company, its employees, and its customers.
Abstract:
The term Artificial Intelligence has acquired a lot of baggage since its introduction, and in its current incarnation it is synonymous with Deep Learning. The sudden availability of data and computing resources has opened the gates to myriad applications. Not all are created equal, though, and problems can arise, especially in fields not closely related to the tasks that concern the tech companies that spearheaded DL. The perspective of practitioners seems to be changing, however. Human-Centric AI has emerged in the last few years as a new way of thinking about DL and AI applications from the ground up, with special attention to their relationship with humans. The goal is to design systems that integrate gracefully into established workflows, since in many real-world scenarios AI may not be good enough to completely replace humans; often such replacement may even be unneeded or undesirable. Another important perspective comes from Andrew Ng, a DL pioneer, who recently started shifting the focus of development from “better models” towards better, and smaller, data. He calls this approach Data-Centric AI. Without downplaying the importance of pushing the state of the art in DL, we must recognize that if the goal is creating a tool for humans to use, more raw performance may not translate into more utility for the final user. A Human-Centric approach is compatible with a Data-Centric one, and we find that the two overlap nicely when human expertise is used as the driving force behind data quality. This thesis documents a series of case studies in which these approaches were employed, to different extents, to guide the design and implementation of intelligent systems. We found that human expertise proved crucial in improving datasets and models. The last chapter includes a slight deviation, with studies on the pandemic, while still preserving the human- and data-centric perspective.