758 results for Cloud Computing
Abstract:
This essay is presented as a master's thesis within the information technology law programme. The thesis examines different business models that share the common characteristic of commercializing data in the context of information technology. The commercial practices observed are little known, and one of the objectives is to inform the reader about how these practices work. In order to properly situate the issues at stake, this essay first discusses the theoretical concepts of privacy and the protection of personal information. Once this overview is complete, the practices of data brokerage, cloud computing and analytics solutions are examined in detail. Throughout this description, the legal issues raised by each aspect of the practice in question are studied. Finally, the last chapter of this essay is devoted to two issues, namely the role of consent and data security, which do not stem from any specific commercial practice but are above all direct consequences of the evolution of information technology.
Abstract:
The popularity of cloud computing has led to a dramatic increase in the number of data centers in the world. Ever-increasing computational demands, along with the slowdown in technology scaling, have ushered in an era of power-limited servers. Techniques such as near-threshold computing (NTC) can be used to improve energy efficiency in the post-Dennard scaling era. This paper describes an architecture based on the FD-SOI process technology for near-threshold operation in servers. Our work explores the trade-offs in energy and performance when running a wide range of applications found in private and public clouds, ranging from traditional scale-out applications, such as web search or media streaming, to virtualized banking applications. Our study demonstrates the benefits of near-threshold operation and proposes several directions to synergistically increase the energy proportionality of a near-threshold server.
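As a rough, first-order illustration of why near-threshold operation improves energy efficiency (a textbook CMOS approximation, not a model taken from this paper): dynamic energy per operation falls quadratically with the supply voltage, while the achievable frequency collapses as V_dd approaches the threshold voltage, so throughput has to be recovered with more, slower cores.

```latex
% First-order CMOS scaling sketch (illustrative assumption, not the paper's model).
% Dynamic energy per operation:
E_{\mathrm{dyn}} \approx \alpha \, C_{\mathrm{eff}} \, V_{dd}^{2}
% Maximum frequency near threshold (alpha-power law, 1 < \gamma \le 2):
f_{\max} \propto \frac{(V_{dd} - V_{th})^{\gamma}}{V_{dd}}
% Example: dropping V_dd from 0.9 V to 0.5 V cuts E_dyn by (0.5/0.9)^2 ~ 0.31,
% roughly a 3x saving per operation, while the frequency loss stretches execution
% time and lets leakage energy erode part of that benefit.
```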
Abstract:
Objectives: To discuss how current research in the area of smart homes and ambient assisted living will be influenced by the use of big data. Methods: A scoping review of literature published in scientific journals and conference proceedings was performed, focusing on smart homes, ambient assisted living and big data over the years 2011-2014. Results: The health and social care market has lagged behind other markets when it comes to the introduction of innovative IT solutions, and the market faces a number of challenges as the use of big data increases. First, there is a need for a sustainable and trustworthy information chain through which the needed information can be transferred from all producers to all consumers in a structured way. Second, there is a need for big data strategies and policies to manage the new situation in which information is handled and transferred independently of where the expertise is located. Finally, there is an opportunity to develop new and innovative business models for a market that supports cloud computing, social media, crowdsourcing, etc. Conclusions: The interdisciplinary area of big data, smart homes and ambient assisted living is no longer of interest only to IT developers; it is also of interest to decision makers, as customers make more informed choices among today's services. In the future it will be important to make information usable for managers and to improve decision making, to tailor smart home services based on big data, to develop new business models, to increase competition, and to identify policies that ensure privacy, security and liability.
Abstract:
The 10th European Conference on Information Systems Management is being held at The University of Evora, Portugal on 8/9 September 2016. The Conference Chair is Paulo Silva and the Programme Chairs are Prof. Rui Quaresma and Prof. António Guerreiro. ECISM provides an opportunity for individuals researching and working in the broad field of information systems management, including IT evaluation, to come together to exchange ideas and discuss current research in the field. This has developed into a particularly important forum for the present era, where the modern challenges of managing information and evaluating the effectiveness of related technologies are constantly evolving in the world of Big Data and Cloud Computing. We hope that this year's conference will provide you with plenty of opportunities to share your expertise with colleagues from around the world. The keynote speakers for the Conference are Carlos Zorrinho from the Portuguese Delegation and Isabel Ramos from the University of Minho, Portugal. ECISM 2016 received an initial submission of 84 abstracts. After the double-blind peer review process, 25 academic papers, 7 PhD research papers, 3 Masters research papers and 5 work-in-progress papers have been accepted for publication in these Conference Proceedings. These papers represent research from around the world, including Belgium, Brazil, China, Czech Republic, Kazakhstan, Malaysia, New Zealand, Norway, Oman, Poland, Portugal, South Africa, Sweden, The Netherlands, UK and Vietnam.
Abstract:
This paper presents a study and experimental tests assessing the viability of using multiple wireless technologies in urban traffic light controllers in a Smart City environment. Communication drivers, different types of antennas, data acquisition methods and data processing for monitoring the network are presented. The sensor and actuator modules are connected in a local area network through two distinct low-power wireless networks using the 868 MHz and 2.4 GHz frequency bands. All data communications using 868 MHz go through a Moteino. Various tests are carried out to assess the most advantageous features of each communication type. The experimental results show better range for the 868 MHz solution, whereas 2.4 GHz has the advantage of a self-regenerating mesh network. The pros and cons of both communication methods are presented.
Abstract:
As computational models in fields such as medicine and engineering become more refined, resource requirements increase. Initially, these needs have been met using parallel computing and HPC clusters. However, such systems are often costly and lack flexibility. HPC users are therefore tempted to move to elastic HPC using cloud services. One difficulty in making this transition is that HPC and cloud systems are different, and performance may vary. The purpose of this study is to evaluate cloud services as a means to minimise both cost and computation time for large-scale simulations, and to identify which system properties have the most significant impact on performance. Our simulation results show that, while virtual CPU (vCPU) performance is satisfactory, network throughput may lead to difficulties.
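A small sketch of the kind of measurement that exposes such network limits (host address, port and transfer size below are illustrative assumptions, not the study's setup): time a fixed-size TCP transfer between two instances and derive the effective throughput.

```python
import socket, time

PAYLOAD_MB = 64      # illustrative transfer size
CHUNK = 1 << 20      # 1 MiB per send

def serve(port=5001):
    """Run on the receiving instance: accept one connection and drain it."""
    with socket.create_server(("", port)) as srv:
        conn, _ = srv.accept()
        with conn:
            while conn.recv(CHUNK):
                pass

def measure(host, port=5001):
    """Run on the sending instance: return approximate throughput in Mbit/s."""
    data = b"\0" * CHUNK
    start = time.time()
    with socket.create_connection((host, port)) as s:
        for _ in range(PAYLOAD_MB):
            s.sendall(data)
    return PAYLOAD_MB * 8 / (time.time() - start)

# On instance A: serve()
# On instance B: print(measure("10.0.0.12"))   # hypothetical private IP of instance A
```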
Abstract:
With the development of wearable and mobile computing technology, more and more people are starting to use sleep-tracking tools to collect personal sleep data on a daily basis, aiming to understand and improve their sleep. While sleep quality is influenced by many factors in a person's lifestyle context, such as exercise, diet and steps walked, existing tools simply visualize sleep data on a dashboard rather than analyze those data in combination with contextual factors. Hence many people find it difficult to make sense of their sleep data. In this paper, we present a cloud-based intelligent computing system named SleepExplorer that incorporates sleep domain knowledge and association rule mining for automated analysis of personal sleep data in light of contextual factors. Experiments show that the same contextual factors can play a distinct role in the sleep of different people, and that SleepExplorer can help users discover the factors most relevant to their personal sleep.
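To make the idea concrete, here is a minimal sketch of mining simple "contextual factor => good sleep" rules (field names, toy data and thresholds are illustrative assumptions, not SleepExplorer's implementation): each rule is scored by its support and confidence over the nightly records.

```python
# Toy nightly records: contextual factors plus a sleep-quality label (illustrative).
nights = [
    {"exercised": True,  "late_caffeine": False, "steps_over_8k": True,  "good_sleep": True},
    {"exercised": False, "late_caffeine": True,  "steps_over_8k": False, "good_sleep": False},
    {"exercised": True,  "late_caffeine": False, "steps_over_8k": False, "good_sleep": True},
    {"exercised": False, "late_caffeine": True,  "steps_over_8k": True,  "good_sleep": False},
]

def rule_stats(records, factor, outcome="good_sleep"):
    """Support and confidence of the rule: factor present => outcome present."""
    n = len(records)
    both = sum(1 for r in records if r[factor] and r[outcome])
    factor_count = sum(1 for r in records if r[factor])
    support = both / n
    confidence = both / factor_count if factor_count else 0.0
    return support, confidence

for factor in ("exercised", "late_caffeine", "steps_over_8k"):
    s, c = rule_stats(nights, factor)
    print(f"{factor} => good_sleep: support={s:.2f}, confidence={c:.2f}")
```

Ranking factors by confidence (after filtering by a minimum support) is one plausible way such a tool could surface the factors most relevant to an individual's sleep.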
Abstract:
Current trends in broadband mobile networks point towards placing different capabilities at the edge of the mobile network in a centralised way. On the one hand, the split of the eNB between baseband processing units and remote radio heads makes it possible to process some of the protocols in centralised premises, likely with virtualised resources. On the other hand, mobile edge computing makes use of processing and storage capabilities close to the air interface in order to deploy optimised services with minimum delay. The confluence of both trends is a hot topic in the definition of future 5G networks. The full centralisation of both technologies in cloud data centres imposes stringent requirements on the fronthaul connections in terms of throughput and latency. Therefore, all those cells with limited network access would not be able to offer these types of services. This paper proposes a solution for these cases, based on placing processing and storage capabilities close to the remote units, which is especially well suited for the deployment of clusters of small cells. The proposed cloud-enabled small cells include a highly efficient microserver with a limited set of virtualised resources offered to the cluster of small cells. As a result, a light data centre is created and used in common for deploying centralised eNB and mobile edge computing functionalities. The paper covers the proposed architecture, with special focus on the integration of both aspects, and possible scenarios of application.
Abstract:
Emerging from the challenge to reduce energy consumption in buildings is the need for energy simulation to be used more effectively to support integrated decision making in early design. As a critical response to a Green Star case study, we present DEEPA, a parametric modeling framework that enables architects and engineers to work at the same semantic level to generate shared models for energy simulation. A cloud-based toolkit provides web and data services for parametric design software that automate the process of simulating and tracking design alternatives, by linking building geometry more directly to analysis inputs. Data, semantics, models and simulation results can be shared on the fly. This allows the complex relationships between architecture, building services and energy consumption to be explored in an integrated manner, and decisions to be made collaboratively.
Abstract:
Software as a Service (SaaS) in the Cloud has recently become increasingly significant among software users and providers. A SaaS that is delivered as a composite application has many benefits, including reduced delivery costs, flexible offers of the SaaS functions and decreased subscription cost for users. However, this approach introduces a new problem in managing the resources allocated to the composite SaaS. The resource allocation made at the initial stage may become overloaded or wasted due to the dynamic environment of the Cloud. A typical data center resource manager usually triggers a placement reconfiguration for the SaaS in order to maintain its performance as well as to minimize the resources used. Existing approaches to this problem often ignore the underlying dependencies between SaaS components. In addition, the reconfiguration also has to comply with the SaaS constraints in terms of its resource requirements, placement requirements and its SLA. To tackle this problem, this paper proposes a penalty-based Grouping Genetic Algorithm for clustering multiple composite SaaS components in the Cloud. The main objective is to minimize the resources used by the SaaS by clustering its components without violating any constraints. Experimental results demonstrate the feasibility and scalability of the proposed algorithm.
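To illustrate what "penalty-based" can mean in this setting, here is a minimal fitness-function sketch (component specifications, VM capacities and the penalty weight are assumptions for illustration, not the authors' implementation): a candidate grouping of SaaS components onto VMs is scored by its VM cost plus penalties for violated capacity and co-location constraints, so infeasible groupings are driven out by the genetic search.

```python
# Candidate solution: assignment[i] = index of the VM hosting component i.
# All figures below are illustrative assumptions.
components = [
    {"cpu": 2, "mem": 4, "must_colocate_with": 1},    # depends on component 1
    {"cpu": 1, "mem": 2, "must_colocate_with": None},
    {"cpu": 4, "mem": 8, "must_colocate_with": None},
]
vms = [
    {"cpu_cap": 4, "mem_cap": 8,  "cost": 1.0},
    {"cpu_cap": 8, "mem_cap": 16, "cost": 2.5},
]
PENALTY = 100.0  # large enough that any violation outweighs a cost saving

def fitness(assignment):
    """Total cost of the VMs used plus penalties for violations (lower is better)."""
    cost = sum(vms[v]["cost"] for v in set(assignment))
    violations = 0
    for v in set(assignment):
        cpu = sum(components[c]["cpu"] for c, host in enumerate(assignment) if host == v)
        mem = sum(components[c]["mem"] for c, host in enumerate(assignment) if host == v)
        violations += max(0, cpu - vms[v]["cpu_cap"]) + max(0, mem - vms[v]["mem_cap"])
    for c, spec in enumerate(components):
        dep = spec["must_colocate_with"]
        if dep is not None and assignment[c] != assignment[dep]:
            violations += 1
    return cost + PENALTY * violations

print(fitness([1, 1, 1]))  # feasible grouping on the larger VM -> 2.5
print(fitness([0, 0, 0]))  # overloads the smaller VM -> heavily penalised
```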
Abstract:
In the modern connected world, pervasive computing has become a reality. Thanks to the ubiquity of mobile computing devices and emerging cloud-based services, users stay permanently connected to their data. This introduces a slew of new security challenges, including the problem of multi-device key management and single-sign-on architectures. One solution to this problem is the use of secure side channels for authentication, including the visual channel as a proof of vicinity. However, existing approaches often assume confidentiality of the visual channel, or provide only insufficient means of mitigating man-in-the-middle attacks. In this work, we introduce QR-Auth, a two-step, 2D-barcode-based authentication scheme for mobile devices which aims specifically at key management and key sharing across devices in a pervasive environment. It requires minimal user interaction and therefore provides better usability than most existing schemes, without compromising security. We show how our approach fits into existing authorization delegation and one-time-password generation schemes, and that it is resilient to man-in-the-middle attacks.
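A minimal sketch of the general pattern behind barcode-mediated device authorisation (a generic HMAC challenge-response under an assumed pre-shared key, not QR-Auth's actual protocol): the primary device displays a short-lived random challenge as a QR code, and an already-authorised device proves possession of the key by returning a MAC over that challenge.

```python
import hashlib, hmac, os, time

SHARED_KEY = os.urandom(32)  # assumed to be provisioned on the authorised device

def make_challenge():
    """Issuer side: random nonce plus timestamp, to be encoded into the QR code."""
    return {"nonce": os.urandom(16).hex(), "issued": int(time.time())}

def respond(challenge, key=SHARED_KEY):
    """Authorised device: prove key possession by MACing the scanned challenge."""
    msg = f'{challenge["nonce"]}|{challenge["issued"]}'.encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def verify(challenge, response, key=SHARED_KEY, max_age=60):
    """Issuer side: constant-time comparison plus a freshness window against replay."""
    if time.time() - challenge["issued"] > max_age:
        return False
    return hmac.compare_digest(respond(challenge, key), response)

ch = make_challenge()            # rendered as a QR code on the primary device
print(verify(ch, respond(ch)))   # True when answered by the key holder
```

Note that a bare challenge-response like this does not by itself defeat an attacker relaying the barcode, which is exactly the kind of weakness the scheme described above aims to address.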
Abstract:
Guaranteeing Quality of Service (QoS) with minimum computation cost is the most important objective of cloud-based MapReduce computations. Minimizing the total computation cost of cloud-based MapReduce computations is done through MapReduce placement optimization. MapReduce placement optimization approaches can be classified into two categories: homogeneous MapReduce placement optimization and heterogeneous MapReduce placement optimization. It is generally believed that heterogeneous MapReduce placement optimization is more effective than homogeneous MapReduce placement optimization in reducing the total running cost of cloud-based MapReduce computations. This paper proposes a new approach to the heterogeneous MapReduce placement optimization problem. In this new approach, the heterogeneous MapReduce placement optimization problem is transformed into a constrained combinatorial optimization problem and is solved by an innovative constructive algorithm. Experimental results show that the running cost of the cloud-based MapReduce computation platform using this new approach is 24.3%-44.0% lower than that using the most popular homogeneous MapReduce placement approach, and 2.0%-36.2% lower than that using the heterogeneous MapReduce placement approach not considering the spare resources from the existing MapReduce computations. The experimental results have also demonstrated the good scalability of this new approach.
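As an illustration of what a constructive algorithm for placement can look like (a generic greedy sketch over assumed task and VM descriptions, not the authors' algorithm): tasks are placed one at a time, reusing spare capacity on already-paid-for VMs before renting the cheapest new VM type that fits.

```python
# Greedy constructive placement sketch (illustrative; not the paper's algorithm).
vm_types = [
    {"name": "small", "slots": 2, "price": 0.10},   # hourly price, illustrative
    {"name": "large", "slots": 8, "price": 0.34},
]
tasks = [1, 2, 1, 3, 2, 1]                                 # slot demand per MapReduce task
spare = [{"name": "existing", "slots": 3, "price": 0.0}]   # unused, already-paid capacity

def place(tasks, spare, vm_types):
    pool = [dict(vm) for vm in spare]            # start from the free spare capacity
    rented, cost = [], 0.0
    for demand in sorted(tasks, reverse=True):   # largest tasks first
        host = next((vm for vm in pool if vm["slots"] >= demand), None)
        if host is None:                         # rent the cheapest type that fits
            host = dict(min((v for v in vm_types if v["slots"] >= demand),
                            key=lambda v: v["price"]))
            pool.append(host)
            rented.append(host["name"])
            cost += host["price"]
        host["slots"] -= demand
    return rented, cost

print(place(tasks, spare, vm_types))   # VMs rented and the extra hourly cost incurred
```

Filling the spare slots first reflects what the abstract describes as considering the spare resources of existing MapReduce computations.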
Abstract:
Computer generated holography is an extremely demanding and complex task when it comes to providing realistic reconstructions with full parallax, occlusion, and shadowing. We present an algorithm designed for data-parallel computing on modern graphics processing units to alleviate the computational burden. We apply Gaussian interpolation to create a continuous surface representation from discrete input object points. The algorithm maintains a potential occluder list for each individual hologram plane sample to keep the number of visibility tests to a minimum. We experimented with two approximations that simplify and accelerate occlusion computation. It is observed that letting several neighboring hologram-plane samples share visibility information on object points leads to significantly faster computation without causing noticeable artifacts in the reconstructed images. Computing a reduced sample set via nonuniform sampling is also found to be an effective acceleration technique. © 2009 Optical Society of America.
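As a small illustration of Gaussian interpolation over scattered object points (a generic NumPy sketch with assumed arrays and an arbitrary smoothing width, not the paper's GPU implementation): each query position receives a Gaussian-weighted average of the depths of the input points.

```python
import numpy as np

def gaussian_interpolate(query_xy, points_xy, points_z, sigma=0.5):
    """Gaussian-weighted average of scattered depth samples at query positions.

    query_xy: (M, 2) positions, points_xy: (N, 2) object points, points_z: (N,) depths.
    sigma is an illustrative smoothing width.
    """
    d2 = ((query_xy[:, None, :] - points_xy[None, :, :]) ** 2).sum(axis=-1)  # (M, N)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ points_z) / w.sum(axis=1)

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
z = np.array([0.2, 0.5, 0.9])
queries = np.array([[0.25, 0.25], [0.8, 0.1]])
print(gaussian_interpolate(queries, pts, z))
```

Because every query-point pair is evaluated independently, this weighting maps naturally onto the data-parallel GPU execution the paper targets.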
Abstract:
With the availability of a wide range of cloud Virtual Machines (VMs), it is difficult to determine which VMs can maximise the performance of an application. Benchmarking is commonly used to this end for capturing the performance of VMs. Most cloud benchmarking techniques are heavyweight: time-consuming processes that have to benchmark the entire VM in order to obtain accurate benchmark data. Such benchmarks cannot be used in real-time on the cloud and incur extra costs even before an application is deployed.
In this paper, we present lightweight cloud benchmarking techniques that execute quickly and can be used in near real-time on the cloud. The exploration of lightweight benchmarking techniques is facilitated by the development of DocLite - Docker Container-based Lightweight Benchmarking. DocLite is built on the Docker container technology, which allows a user-defined portion (such as memory size and the number of CPU cores) of the VM to be benchmarked. DocLite operates in two modes: in the first mode, containers are used to benchmark a small portion of the VM to generate performance ranks; in the second mode, historic benchmark data is used along with the first mode as a hybrid to generate VM ranks. The generated ranks are evaluated against three scientific high-performance computing applications. The proposed techniques are up to 91 times faster than a heavyweight technique which benchmarks the entire VM. It is observed that the first mode can generate ranks with over 90% and 86% accuracy for sequential and parallel execution of an application, respectively. The hybrid mode improves the correlation slightly, but the first mode is sufficient for benchmarking cloud VMs.
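A minimal sketch of the underlying mechanism of benchmarking only a user-defined slice of a VM (the image, resource limits and benchmark command are illustrative assumptions, not DocLite's code): Docker's --cpus and --memory flags confine a containerised benchmark to a portion of the host's resources.

```python
import subprocess, time

def run_limited_benchmark(image="python:3.11-slim", cpus="2", memory="1g",
                          cmd=("python", "-c", "print(sum(i * i for i in range(10**7)))")):
    """Run a benchmark command in a container pinned to a slice of the VM.

    The image, limits and CPU-bound command are illustrative; any containerised
    benchmark could be substituted. Returns wall-clock time and command output.
    """
    docker_cmd = ["docker", "run", "--rm",
                  f"--cpus={cpus}", f"--memory={memory}",
                  image, *cmd]
    start = time.time()
    result = subprocess.run(docker_cmd, capture_output=True, text=True, check=True)
    return time.time() - start, result.stdout

elapsed, output = run_limited_benchmark()
print(f"benchmark finished in {elapsed:.1f}s")
```

Running the same container at several sizes (e.g. 1, 2 and 4 cores) across VM types and comparing the timings is one plausible way to produce the kind of performance ranks described above.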
Abstract:
Existing benchmarking methods are time-consuming processes, as they typically benchmark the entire Virtual Machine (VM) in order to generate accurate performance data, making them less suitable for real-time analytics. The research in this paper aims to surmount this challenge by presenting DocLite - a Docker Container-based Lightweight benchmarking tool. DocLite explores lightweight cloud benchmarking methods for rapidly executing benchmarks in near real-time. DocLite is built on the Docker container technology, which allows a user-defined memory size and number of CPU cores of the VM to be benchmarked. The tool incorporates two benchmarking methods: the first, referred to as the native method, employs containers to benchmark a small portion of the VM and generate performance ranks, and the second uses historic benchmark data along with the native method as a hybrid to generate VM ranks. The proposed methods are evaluated on three use cases and are observed to be up to 91 times faster than benchmarking the entire VM. In both methods, small containers provide the same quality of rankings as a large container. The native method generates ranks with over 90% and 86% accuracy for sequential and parallel execution of an application compared against benchmarking the whole VM. The hybrid method did not improve the quality of the rankings significantly.