827 results for user-controlled cloud computing
Abstract:
Provenance plays a pivotal role in tracing the origin of something and determining how and why it occurred. With the emergence of the cloud and the benefits it encompasses, there has been a rapid proliferation of services being adopted by commercial and government sectors. However, trust and security concerns for such services are on an unprecedented scale. Currently, these services expose very little of their internal workings to their customers; this can cause accountability and compliance issues, especially in the event of a fault or error, when customers and providers are left pointing fingers at each other. Provenance-based traceability provides a means to address part of this problem by capturing and querying events that occurred in the past, to understand how and why they took place. However, due to the complexity of the cloud infrastructure, current provenance models lack the expressiveness required to describe the inner workings of a cloud service. For a complete solution, a provenance-aware policy language is also required, so that operators and users can define policies for compliance purposes. Current policy standards do not cater for such requirements. To address these issues, in this paper we propose a provenance (traceability) model, cProv, and a provenance-aware policy language, cProvl, to capture traceability data and express policies for validation against the model. For the implementation, we have extended the XACML 3.0 architecture to support provenance, and provided a translator that converts cProvl policies and requests into XACML form.
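To make the translation step concrete, here is a minimal, hypothetical Python sketch of lowering a cProvl-style rule to an XACML 3.0 Policy element; the rule fields, the urn:cprovl function identifier and the wasGeneratedBy condition string are invented for illustration, since the abstract does not specify the cProvl grammar or the translator's interface:

    # Hypothetical sketch: translating a cProvl-style rule into an XACML 3.0
    # <Policy> element. The dict-based cProvl rule below is invented; the
    # paper's actual grammar and translator are not shown in the abstract.
    import xml.etree.ElementTree as ET

    XACML_NS = "urn:oasis:names:tc:xacml:3.0:core:schema:wd-17"

    def translate_rule(rule):
        """Map a dict-based cProvl rule (hypothetical) onto an XACML Policy."""
        policy = ET.Element("Policy", {
            "xmlns": XACML_NS,
            "PolicyId": rule["id"],
            "RuleCombiningAlgId":
                "urn:oasis:names:tc:xacml:3.0:rule-combining-algorithm:deny-overrides",
            "Version": "1.0",
        })
        ET.SubElement(policy, "Target")
        xrule = ET.SubElement(policy, "Rule", {
            "RuleId": rule["id"] + ":r1",
            "Effect": rule["effect"],          # "Permit" or "Deny"
        })
        # Encode the provenance condition (e.g. "wasGeneratedBy(doc, svc)") as
        # an argument to a hypothetical provenance-evaluation function that a
        # provenance-aware PDP extension would interpret.
        cond = ET.SubElement(xrule, "Condition")
        apply_ = ET.SubElement(cond, "Apply",
                               {"FunctionId": "urn:cprovl:function:prov-eval"})
        val = ET.SubElement(apply_, "AttributeValue",
                            {"DataType": "http://www.w3.org/2001/XMLSchema#string"})
        val.text = rule["provenance_condition"]
        return policy

    rule = {"id": "cprovl:audit-trace", "effect": "Permit",
            "provenance_condition": "wasGeneratedBy(report, billingService)"}
    print(ET.tostring(translate_rule(rule), encoding="unicode"))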
Abstract:
Brown dwarfs and giant gas extrasolar planets have cold atmospheres with rich chemical compositions from which mineral cloud particles form. Their properties, like particle sizes and material composition, vary with height, and the mineral cloud particles are charged due to triboelectric processes in such dynamic atmospheres. The dynamics of the atmospheric gas is driven by the irradiating host star and/or by the rotation of the object, which changes during its lifetime. Thermal gas ionisation in these ultra-cool but dense atmospheres allows electrostatic interactions and magnetic coupling of a substantial atmosphere volume. Combined with a strong magnetic field, a chromosphere and aurorae might form, as suggested by radio and X-ray observations of brown dwarfs. Non-equilibrium processes like cosmic-ray ionisation and discharge processes in clouds will increase the local pool of free electrons in the gas. Cosmic rays and lightning discharges also alter the composition of the local atmospheric gas such that tracer molecules might be identified. Cosmic rays affect the atmosphere through air showers in a certain volume, which we modelled with a 3D Monte Carlo radiative transfer code in order to visualise their spatial extent. Given a certain degree of thermal ionisation of the atmospheric gas, we suggest that electron attachment is too inefficient at charging mineral cloud particles to cause their electrostatic disruption. Cloud particles will therefore not be destroyed by Coulomb explosion at the local temperatures in the collisionally dominated brown dwarf and giant gas planet atmospheres. However, the cloud particles are destroyed electrostatically in regions with strong gas ionisation. Such cloud holes would, however, be too small, and might occur too far inside the cloud, to mimic the effect of, e.g., magnetic field induced star spots.
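The abstract does not state which disruption criterion is applied; a standard form, assumed here as a sketch, compares the electrostatic surface stress of a grain of radius a carrying charge q with the material tensile strength S:

    \[
      E_{\mathrm{surf}} = \frac{q}{4\pi\varepsilon_0 a^{2}}, \qquad
      p_{\mathrm{E}} = \frac{\varepsilon_0 E_{\mathrm{surf}}^{2}}{2}
                     = \frac{q^{2}}{32\pi^{2}\varepsilon_0 a^{4}},
    \]
    \[
      \text{disruption if } p_{\mathrm{E}} > S
      \quad\Longleftrightarrow\quad
      q > q_{\mathrm{crit}} = \sqrt{32\pi^{2}\varepsilon_0 S}\; a^{2}.
    \]

Under this assumed form, inefficient electron attachment keeps q far below the critical charge, consistent with the conclusion above.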
Abstract:
A primary goal of context-aware systems is delivering the right information at the right place and right time to users, in order to enable them to make effective decisions and improve their quality of life. There are three key requirements for achieving this goal: determining what information is relevant, personalizing it based on the users’ context (location, preferences, behavioral history, etc.), and delivering it to them in a timely manner without an explicit request. These requirements create a paradigm that we term “Proactive Context-aware Computing”. Most existing context-aware systems fulfill only a subset of these requirements. Many of these systems focus only on personalization of the requested information based on the users’ current context. Moreover, they are often designed for specific domains. In addition, most existing systems are reactive: the user requests some information and the system delivers it. These systems are not proactive, i.e., they cannot anticipate users’ intent and behavior and act proactively without an explicit request. In order to overcome these limitations, we need to conduct a deeper analysis and enhance our understanding of context-aware systems that are generic, universal, proactive and applicable to a wide variety of domains. To support this dissertation, we explore several directions. Clearly, the most significant sources of information about users today are smartphones. A large amount of users’ context can be acquired through them, and they can be used as an effective means to deliver information to users. In addition, social media such as Facebook, Flickr and Foursquare provide a rich and powerful platform for mining users’ interests, preferences and behavioral history. We employ the ubiquity of smartphones and the wealth of information available from social media to address the challenge of building proactive context-aware systems. We have implemented and evaluated several approaches, including some as part of the Rover framework, to achieve the paradigm of Proactive Context-aware Computing. Rover is a context-aware research platform that has been evolving for the last six years. Since location is one of the most important elements of context for users, we have developed ‘Locus’, an indoor localization, tracking and navigation system for multi-story buildings. Other important dimensions of users’ context include the activities that they are engaged in. To this end, we have developed ‘SenseMe’, a system that leverages the smartphone and its multiple sensors to perform multidimensional context and activity recognition for users. As part of the ‘SenseMe’ project, we also conducted an exploratory study of the privacy, trust, risk and other concerns of users with smartphone-based personal sensing systems and applications. To determine what information would be relevant to users’ situations, we have developed ‘TellMe’, a system that employs a new, flexible and scalable approach based on Natural Language Processing techniques to perform bootstrapped discovery and ranking of relevant information in context-aware systems. In order to personalize the relevant information, we have also developed an algorithm and system for mining a broad range of users’ preferences from their social network profiles and activities.
To recommend new information to users based on their past behavior and context history (such as visited locations, activities and time), we have developed a recommender system and approach for performing multi-dimensional collaborative recommendations using tensor factorization. For the timely delivery of personalized, relevant information, it is essential to anticipate and predict users’ behavior. To this end, we have developed a unified infrastructure within the Rover framework and implemented several novel approaches and algorithms that employ various contextual features and state-of-the-art machine learning techniques to build diverse behavioral models of users. Examples of generated models include classifying users’ semantic places and mobility states, predicting their availability for accepting calls on smartphones, and inferring their device charging behavior. Finally, to enable proactivity in context-aware systems, we have also developed a planning framework based on Hierarchical Task Network (HTN) planning. Together, these works provide a major push in the direction of proactive context-aware computing.
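As an illustration of the tensor-factorization idea only (the dissertation's exact algorithm and features are not given in this abstract), the following minimal Python sketch factorizes a sparse (user, location, time-slot) tensor of visit counts via a rank-R CP decomposition trained with SGD; all dimensions, hyperparameters and data are placeholders:

    # Minimal CP tensor-factorization sketch for multi-dimensional
    # collaborative recommendation (a generic baseline, not the thesis's
    # method). Observed (user, location, time) visit counts are fit by three
    # rank-R factor matrices; reconstructed scores rank unvisited entries.
    import numpy as np

    def cp_factorize(entries, shape, rank=8, lr=0.05, reg=0.01, epochs=50, seed=0):
        """entries: list of ((u, l, t), value) observations."""
        rng = np.random.default_rng(seed)
        U, L, T = (rng.normal(scale=0.1, size=(n, rank)) for n in shape)
        for _ in range(epochs):
            for (u, l, t), x in entries:
                err = x - np.sum(U[u] * L[l] * T[t])   # residual on this entry
                gu = -err * (L[l] * T[t]) + reg * U[u]
                gl = -err * (U[u] * T[t]) + reg * L[l]
                gt = -err * (U[u] * L[l]) + reg * T[t]
                U[u] -= lr * gu; L[l] -= lr * gl; T[t] -= lr * gt
        return U, L, T

    # Toy data: 4 users x 5 locations x 3 time slots, a few observed visits.
    obs = [((0, 1, 2), 3.0), ((0, 2, 0), 1.0), ((1, 1, 2), 2.0), ((3, 4, 1), 5.0)]
    U, L, T = cp_factorize(obs, shape=(4, 5, 3))
    score = np.sum(U[1] * L[4] * T[1])   # predicted affinity: user 1, loc 4, slot 1
    print(round(float(score), 3))

Reconstructed scores for unobserved (user, location, time) triples can then be ranked to produce context-aware recommendations.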
Abstract:
Designing for users rather than with users, instead of taking them on board in the process, is still a common practice in technology design and innovation. Design for inclusion aims to define and understand end-users, their needs and their context of use and, by doing so, to ensure that end-users are catered for and included, while the results are geared towards universality of use. We describe the central role of end-user and designer participation, immersion and perspective in building user-driven solutions. These approaches provided a critical understanding of the counterpart's role. Designers could understand what the users' needs were, experience physical impairments, and see the interaction with the environment from the other's perspective. Users could understand the challenges of designing for physical impairments, build a sense of ownership of the technology, and explore it from a creative perspective. This understanding of the peer's role (user or designer), needs and perspective enhanced user participation and inclusion.
Abstract:
Consumer electronics has evolved rapidly during the past five years. Many products have started to include “smart home” capabilities, enabling communication and interoperability between various smart devices. Ever more devices and sensors can be remotely controlled and monitored through cloud services. While smart home systems have become very affordable to the average consumer compared to the early solutions of decades ago, there are still many issues that need to be fixed or improved upon: energy efficiency, connectivity with other devices and applications, security and privacy concerns, reliability, and response time. This paper focuses on designing Internet of Things (IoT) node and platform architectures that take these issues into account, notes other currently used solutions, and selects technologies in order to provide a better solution. The node architecture aims for energy efficiency and modularity, while the platform architecture's goals are scalability, portability, maintainability, performance, and modularity. Moreover, the platform architecture attempts to improve user experience by providing higher reliability and lower response time than alternative platforms. The architectures were developed iteratively using a development process involving research, planning, design, implementation, testing, and analysis. Additionally, they were documented using Kruchten's 4+1 view model, which is used to describe the use cases and the different views of the architectures. The node architecture consists of energy-efficient hardware (an FC3180 microprocessor and a CC2520 RF transceiver), a modular operating system (Contiki), and a communication protocol (AllJoyn) used to provide better interoperability with other IoT devices and applications. The platform architecture provides reliable, low-response-time control, monitoring, and initial setup capabilities by utilizing web technologies on various devices such as smartphones, tablets, and computers. Furthermore, an optional cloud service is provided for controlling devices and monitoring sensors remotely, utilizing scalable, high-performance technologies in the backend to enable low response time and high reliability.
Abstract:
In this thesis, tool support is addressed for the combined disciplines of model-based testing and performance testing. Model-based testing (MBT) utilizes abstract behavioral models to automate test generation, thus decreasing the time and cost of test creation. MBT is a functional testing technique, focusing on output, behavior, and functionality. Performance testing, by contrast, is non-functional and is concerned with responsiveness and stability under various load conditions. MBPeT (Model-Based Performance evaluation Tool) is one such tool: it utilizes probabilistic models, representing dynamic real-world user behavior patterns, to generate synthetic workload against a System Under Test (SUT) and in turn carries out performance analysis based on key performance indicators (KPIs). Developed at Åbo Akademi University, the MBPeT tool currently comprises a downloadable command-line tool as well as a graphical user interface. The goal of this thesis project is twofold: 1) to extend the existing MBPeT tool by deploying it as a web-based application, thereby removing the requirement of local installation, and 2) to design a user interface for this web application that adds new user interaction paradigms to the existing feature set of the tool. All phases of the MBPeT process are realized via this single web deployment location, including probabilistic model creation, test configuration, test session execution against a SUT with real-time monitoring of user-configurable metrics, and final test report generation and display. This web application (MBPeT Dashboard) is implemented in the Java programming language on top of the Vaadin framework for rich internet application development. The Vaadin framework handles the complicated web communication processes and front-end technologies, freeing developers to implement the business logic as well as the user interface in pure Java. A number of experiments were run in a case-study environment to validate the functionality of the newly developed Dashboard application as well as the scalability of the implemented solution in handling multiple concurrent users. The results support a successful solution with regard to the functional and performance criteria defined, while improvements and optimizations are suggested to further increase both.
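To make the load-generation idea concrete, here is a minimal Python sketch, not MBPeT's actual model format or engine, of a probabilistic user model as a Markov chain over user actions; the MODEL transition probabilities and the sut.example URLs are invented placeholders:

    # Sketch of probabilistic-model-driven load generation: virtual users walk
    # a Markov chain of actions, issue requests against placeholder SUT
    # endpoints, and collect per-action response times as a simple KPI.
    import random, time, urllib.request

    MODEL = {                      # hypothetical probabilistic user model
        "browse": [("browse", 0.5), ("search", 0.3), ("exit", 0.2)],
        "search": [("browse", 0.6), ("buy", 0.2), ("exit", 0.2)],
        "buy":    [("exit", 1.0)],
    }
    URLS = {"browse": "http://sut.example/",
            "search": "http://sut.example/search",
            "buy":    "http://sut.example/buy"}    # placeholder SUT endpoints

    def run_virtual_user(rng):
        state, samples = "browse", []
        while state != "exit":
            t0 = time.perf_counter()
            try:
                urllib.request.urlopen(URLS[state], timeout=5).read()
            except OSError:
                pass                               # SUT unreachable in this sketch
            samples.append((state, time.perf_counter() - t0))
            actions, weights = zip(*MODEL[state])
            state = rng.choices(actions, weights)[0]
        return samples

    print(run_virtual_user(random.Random(42)))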
Abstract:
Part 5: Service Orientation in Collaborative Networks
Abstract:
User Quality of Experience (QoE) is a subjective entity and difficult to measure. One important aspect of it, User Experience (UX), corresponds to the sensory and emotional state of a user. For a user interacting through a User Interface (UI), precise information on how they are using the UI can contribute to understanding their UX, and thereby their QoE. In addition to a user's use of the UI, such as clicking, scrolling, touching, or selecting, other real-time digital information about the user, such as from smartphone sensors (e.g. accelerometer, light level) and physiological sensors (e.g. heart rate, ECG, EEG), could contribute to understanding UX. Baran is a framework designed to capture, record, manage and analyse the User Digital Imprint (UDI), which is the data structure containing all user context information. Baran simplifies the process of collecting experimental information in Human-Computer Interaction (HCI) studies by recording comprehensive real-time data for any UI experiment and making the data available as a standard UDI data structure. This paper presents an overview of the Baran framework and provides an example of its use to record user interaction and perform some basic analysis of the interaction.
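As a purely hypothetical illustration of what a UDI-style structure could look like (the paper's actual UDI schema is not given in this abstract), consider the following Python sketch, where each entry timestamps one observation from a UI event stream or a context sensor:

    # Hypothetical UDI-style record: a timestamped, source-tagged event log
    # mixing UI interactions and sensor readings. Field names are invented.
    from dataclasses import dataclass, field
    import json, time

    @dataclass
    class UDIEvent:
        timestamp: float          # seconds since epoch
        source: str               # e.g. "ui.click", "sensor.accelerometer"
        payload: dict             # source-specific data

    @dataclass
    class UserDigitalImprint:
        user_id: str
        events: list = field(default_factory=list)

        def record(self, source, **payload):
            self.events.append(UDIEvent(time.time(), source, payload))

        def to_json(self):
            return json.dumps([vars(e) for e in self.events])

    udi = UserDigitalImprint("user-001")
    udi.record("ui.click", element="submit-button", x=412, y=96)
    udi.record("sensor.accelerometer", x=0.01, y=-0.02, z=9.81)
    print(udi.to_json())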
Abstract:
An overview is given of a user interaction monitoring and analysis framework called BaranC. Monitoring and analysing human-digital interaction is an essential part of developing a user model as the basis for investigating user experience. The primary human-digital interaction, such as on a laptop or smartphone, is best understood and modelled in the wider context of the user and their environment. The BaranC framework provides monitoring and analysis capabilities that not only record all user interaction with a digital device (e.g. a smartphone), but also collect all available context data (such as from sensors in the digital device itself, a fitness band, or a smart appliance). The data collected by BaranC is recorded as a User Digital Imprint (UDI), which is, in effect, the user model and provides the basis for data analysis. BaranC provides functionality that is useful for user experience studies, user interface design evaluation, and user assistance services. An important concern for personal data is privacy, and the framework gives the user full control over the monitoring, storing and sharing of their data.
Abstract:
Mobile and wireless networks have long exploited mobility prediction, focused on predicting the future location of given users, to perform more efficient network resource management. In this paper, we present a new approach in which we provide predictions as a probability distribution of the likelihood of moving to a set of future locations. This approach provides wireless services with a greater amount of knowledge and enables them to perform more effectively. We present a framework for the evaluation of this new type of predictor, and develop two new predictors, HEM and G-Stat. We evaluate our predictors' accuracy in predicting future cells for mobile users using two large geolocation data sets, from MDC [11], [12] and Crawdad [13]. We show that our predictors can successfully predict with an average inaccuracy as low as 2.2% in certain scenarios.
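The HEM and G-Stat algorithms are not described in this abstract, but the following generic first-order Markov baseline in Python illustrates the output format: a probability distribution over candidate next cells rather than a single predicted location:

    # Generic distribution-style mobility predictor (a baseline sketch, not
    # HEM or G-Stat): transition counts from a user's cell history yield a
    # probability distribution over next cells.
    from collections import Counter, defaultdict

    def train(trajectory):
        counts = defaultdict(Counter)
        for cur, nxt in zip(trajectory, trajectory[1:]):
            counts[cur][nxt] += 1
        return counts

    def predict_distribution(counts, current_cell):
        c = counts[current_cell]
        total = sum(c.values())
        return {cell: n / total for cell, n in c.items()} if total else {}

    history = ["A", "B", "A", "C", "A", "B", "B", "A", "B"]
    model = train(history)
    print(predict_distribution(model, "A"))   # {'B': 0.75, 'C': 0.25}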
Abstract:
In a general-purpose cloud system, efficiencies are yet to be gained from supporting diverse applications and their requirements within a storage system used for a private cloud. Supporting such diverse requirements poses a significant challenge in a storage system that supports fine-grained configuration of a variety of parameters. This paper uses the Ceph distributed file system, and in particular its global parameters, to show how a single changed parameter can affect the performance of a range of access patterns when tested with an OpenStack cloud system.
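As a sketch of the kind of experiment described, the Python snippet below sweeps one global parameter and measures several access patterns with the standard rados bench tool; the pool name and the swept parameter are placeholders (the paper's exact parameters are not named in this abstract), and the ceph config set form assumes a recent Ceph release (older releases used injectargs):

    # Hedged sketch: vary one Ceph global parameter, then benchmark write,
    # sequential-read and random-read access patterns with `rados bench`.
    import subprocess

    POOL = "benchpool"                             # placeholder pool
    PARAM, VALUES = "osd_op_threads", ["2", "8"]   # hypothetical parameter sweep

    def bench(mode, seconds=30):
        cmd = ["rados", "bench", "-p", POOL, str(seconds), mode]
        if mode == "write":
            cmd.append("--no-cleanup")   # keep objects so seq/rand reads have data
        return subprocess.run(cmd, capture_output=True, text=True,
                              check=True).stdout

    for value in VALUES:
        subprocess.run(["ceph", "config", "set", "global", PARAM, value],
                       check=True)
        for pattern in ["write", "seq", "rand"]:
            report = bench(pattern)
            print(f"{PARAM}={value} pattern={pattern}\n{report.splitlines()[-1]}")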
Abstract:
Early definitions of the smart building focused almost entirely on the technology aspect and did not suggest user interaction at all; indeed, today we would attribute them more to the concept of the automated building. In this sense, control of comfort conditions inside buildings is a well-investigated problem, since it has a direct effect on users' productivity and an indirect effect on energy saving. From the users' perspective, a typical environment can be considered comfortable if it is capable of providing adequate thermal comfort, visual comfort, indoor air quality, and acoustic comfort. In recent years, the scientific community has dealt with many challenges, especially from a technological point of view. For instance, smart sensing devices, the internet, and communication technologies have enabled a new paradigm called edge computing, which brings computation and data storage closer to the location where they are needed in order to improve response times and save bandwidth. This has allowed us to improve services, sustainability and decision making. Many solutions have been implemented, such as smart classrooms, control of the thermal conditions of a building, and monitoring of HVAC data for the energy efficiency of a campus. Though these projects contribute to the realization of the smart campus, a framework for the smart campus is yet to be determined. These new technologies have also introduced new research challenges: within this thesis work, some of the principal open challenges are faced, proposing a new conceptual framework, technologies and tools to move the actual implementation of smart campuses forward. With this in mind, several problems known in the literature have been investigated: occupancy detection, noise monitoring for acoustic comfort, context awareness inside the building, indoor wayfinding, strategic deployment for air quality, and book preservation.
Abstract:
Modern scientific discoveries are driven by an insatiable demand for computational resources. High-Performance Computing (HPC) systems are an aggregation of computing power that delivers considerably higher performance than a typical desktop computer can provide, in order to solve large problems in science, engineering, or business. An HPC room in a datacenter is a complex, controlled environment that hosts thousands of computing nodes which consume electrical power in the range of megawatts, all of which is ultimately transformed into heat. Although a datacenter contains sophisticated cooling systems, our studies provide quantitative evidence of thermal bottlenecks under real-life production workloads, showing the presence of significant spatial and temporal thermal and power heterogeneity. Therefore, minor thermal issues or anomalies can potentially start a chain of events that leads to an imbalance between the amount of heat generated by the computing nodes and the heat removed by the cooling system, giving rise to thermal hazards. Although thermal anomalies are rare events, detecting and predicting them in time is vital to avoid damage to IT and facility equipment and outages of the datacenter, with severe societal and business losses. For this reason, automated approaches to detect thermal anomalies in datacenters have considerable potential. This thesis analyzes and characterizes the power and thermal behavior of a Tier-0 datacenter (CINECA) during production and under abnormal thermal conditions. A Deep Learning (DL)-powered thermal hazard prediction framework is then proposed. The proposed models are validated against real thermal hazard events reported for the studied HPC cluster while in production. To the best of my knowledge, this thesis is the first empirical study of thermal anomaly detection and prediction techniques on a real large-scale HPC system. For this thesis, I used a large-scale dataset of monitoring data from tens of thousands of sensors, collected over around 24 months at a sampling interval of around 20 seconds.
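As an illustration only (the thesis's actual architecture is not given in this abstract), the following PyTorch sketch trains a small LSTM to predict whether a thermal hazard occurs within a future horizon from a window of multi-sensor readings; layer sizes, thresholds and the synthetic data are placeholders:

    # Sketch of DL-based thermal-hazard prediction: an LSTM reads a window of
    # per-node sensor readings and outputs P(hazard in the next horizon).
    import torch
    import torch.nn as nn

    class ThermalHazardPredictor(nn.Module):
        def __init__(self, n_sensors=16, hidden=64):
            super().__init__()
            self.lstm = nn.LSTM(n_sensors, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, x):                  # x: (batch, window, n_sensors)
            _, (h, _) = self.lstm(x)
            return torch.sigmoid(self.head(h[-1]))

    model = ThermalHazardPredictor()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCELoss()

    # Synthetic stand-in for monitoring data: 32 windows, 120 samples, 16 sensors.
    x = torch.randn(32, 120, 16)
    y = (x[:, -10:, :].mean(dim=(1, 2)) > 0.1).float().unsqueeze(1)

    for _ in range(5):                         # tiny training loop for illustration
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    print(float(loss))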
Abstract:
Embedding intelligence in extreme edge devices allows distilling raw data acquired from sensors into actionable information directly on IoT end-nodes. This computing paradigm, in which end-nodes no longer depend entirely on the Cloud, offers undeniable benefits, driving a large research area (TinyML) that aims to deploy leading Machine Learning (ML) algorithms on microcontroller-class devices. To fit the limited memory storage capability of these tiny platforms, full-precision Deep Neural Networks (DNNs) are compressed by representing their data down to byte and sub-byte formats in the integer domain, yielding Quantized Neural Networks (QNNs). However, the current generation of microcontroller systems can barely cope with the computing requirements of QNNs. This thesis tackles the challenge from many perspectives, presenting solutions at both the software and hardware levels and exploiting parallelism, heterogeneity and software programmability to guarantee high flexibility and high energy-performance proportionality. The first contribution, PULP-NN, is an optimized software computing library for QNN inference on parallel ultra-low-power (PULP) clusters of RISC-V processors, showing one order of magnitude improvement in performance and energy efficiency compared to current state-of-the-art (SoA) STM32 microcontroller units (MCUs) based on ARM Cortex-M cores. The second contribution is XpulpNN, a set of RISC-V domain-specific instruction set architecture (ISA) extensions for sub-byte integer arithmetic computation. The solution, including the ISA extensions and the micro-architecture to support them, achieves energy efficiency comparable with dedicated DNN accelerators and surpasses the efficiency of SoA ARM Cortex-M based MCUs, such as the low-end STM32L4 and the high-end STM32H7 devices, by up to three orders of magnitude. To overcome the von Neumann bottleneck while guaranteeing the highest flexibility, the final contribution integrates an Analog In-Memory Computing accelerator into the PULP cluster, creating a fully programmable heterogeneous fabric that demonstrates end-to-end inference of SoA MobileNetV2 models, showing two orders of magnitude performance improvement over current SoA analog/digital solutions.
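PULP-NN's real kernels are optimized C running on RISC-V clusters, but the following small NumPy sketch illustrates the sub-byte arithmetic these contributions accelerate: 4-bit weights are packed two per byte to halve storage, then unpacked for an integer dot product:

    # Illustration of sub-byte (int4) storage and arithmetic; not PULP-NN code.
    import numpy as np

    def pack_int4(w):
        """Pack signed 4-bit values (-8..7) two per byte, low nibble first."""
        u = (np.asarray(w, dtype=np.int8) & 0x0F).astype(np.uint8)
        if len(u) % 2:
            u = np.append(u, 0)
        return (u[0::2] | (u[1::2] << 4)).astype(np.uint8)

    def unpack_int4(packed, n):
        lo = (packed & 0x0F).astype(np.int8)
        hi = ((packed >> 4) & 0x0F).astype(np.int8)
        u = np.empty(2 * len(packed), dtype=np.int8)
        u[0::2], u[1::2] = lo, hi
        u[u > 7] -= 16                    # sign-extend the 4-bit values
        return u[:n]

    w = [3, -4, 7, -8, 1]
    x = np.array([2, 5, -1, 4, 3], dtype=np.int8)
    packed = pack_int4(w)                 # 3 bytes instead of 5
    acc = int(np.dot(unpack_int4(packed, len(w)).astype(np.int32),
                     x.astype(np.int32)))
    print(len(packed), acc)               # 3, -50

On real MCU-class hardware the unpack-and-accumulate step dominates, which is precisely what XpulpNN-style ISA extensions move into single instructions.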