847 results for Cloud Computing, attori, piattaforme, Pattern, Orleans


Relevance: 100.00%

Abstract:

Emerging technologies such as cloud computing and mobile devices are creating an unprecedented opportunity to enhance the educational system, letting educators customize and improve the learning experience and letting students acquire knowledge regardless of where they are. Moreover, gamification techniques make it possible to encourage and motivate students to learn arduous subjects by making the experience more engaging. Mobile games can be a perfect vehicle to support this enhanced learning experience. This project covers the design and development of a highly scalable, high-performance cloud architecture, as well as the iOS client that uses it, in order to support a new version of Temporis, a mobile multiplayer game focused on ordering time-based events (e.g. history, art, sports, entertainment and literature) on a timeline; Temporis is currently available on Google Play. This work describes the development of the new version (Temporis v.2.0), providing details on the improvements and on the adaptation of the original Temporis. In particular, it describes the new backend, written in Go on Google App Engine and designed to support thousands of users, along with other features such as how to deliver push notifications from the platform. Finally, it explains the iOS client of Temporis v.2.0, developed with the latest and most relevant technologies, paying special attention to Swift (Apple's new programming language, which is safe and fast), the Functional Reactive Paradigm (which helps build highly interactive apps while minimizing bugs) and the VIPER architecture (an architecture that follows the SOLID principles, enforces separation of concerns and makes it easy to reuse code on other platforms).
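As an illustration of the kind of backend endpoint described above, the sketch below shows a minimal Go HTTP handler of the sort that could run on Google App Engine to receive a player's timeline answer. It is a hypothetical example: the route, the Answer fields and the response format are assumptions, not the actual Temporis API, and push-notification delivery is only hinted at in a comment.

```go
// Hypothetical sketch: an HTTP endpoint for submitting a player's timeline
// ordering. Names (path, struct fields) are illustrative, not taken from the
// actual Temporis backend.
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// Answer is a player's proposed ordering of event IDs for one round.
type Answer struct {
	PlayerID string   `json:"player_id"`
	RoundID  string   `json:"round_id"`
	EventIDs []string `json:"event_ids"` // events in the order the player placed them
}

func submitAnswer(w http.ResponseWriter, r *http.Request) {
	var a Answer
	if err := json.NewDecoder(r.Body).Decode(&a); err != nil {
		http.Error(w, "bad request", http.StatusBadRequest)
		return
	}
	// A real backend would score and persist the answer and queue a push
	// notification for the opponent; both steps are omitted here.
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(map[string]string{"status": "received", "round_id": a.RoundID})
}

func main() {
	http.HandleFunc("/api/answers", submitAnswer)
	// On App Engine the port would normally come from the PORT environment variable.
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```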

Relevance: 100.00%

Abstract:

Cloud Agile Manufacturing is a new paradigm proposed in this article. Its main objective is to offer industrial production systems as a service, so that users can access any functionality available in the manufacturing cloud (process design, production, management, business integration, factory virtualization, etc.) without knowledge of, or at least without having to be experts in, managing the required resources. The proposal takes advantage of many of the benefits offered by technologies and models such as Business Process Management (BPM), Cloud Computing, Service-Oriented Architectures (SOA) and ontologies. It takes as its starting point the Semantic Industrial Machinery as a Service (SIMaaS) model proposed in previous work, which facilitates the effective integration of industrial machinery into a computing environment by offering it as a network service. The work also includes an analysis of the benefits and disadvantages of the proposal.
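To make the idea of offering machinery as a network service more concrete, the sketch below models machine capabilities as data that a manufacturing-cloud broker could match against a requested operation. All type names, capabilities and figures are invented for illustration; this is not the SIMaaS interface.

```go
// Illustrative only: describing a machine's capabilities as data so that a
// manufacturing-cloud broker can match a requested operation against the
// available machinery.
package main

import "fmt"

// Capability describes one operation a networked machine can perform.
type Capability struct {
	Operation   string // e.g. "milling", "drilling"
	MaxDepthMM  float64
	CostPerHour float64
}

type Machine struct {
	ID           string
	Capabilities []Capability
}

// match returns the machines able to perform the requested operation at the depth needed.
func match(fleet []Machine, operation string, depthMM float64) []string {
	var out []string
	for _, m := range fleet {
		for _, c := range m.Capabilities {
			if c.Operation == operation && c.MaxDepthMM >= depthMM {
				out = append(out, m.ID)
				break
			}
		}
	}
	return out
}

func main() {
	fleet := []Machine{
		{"cnc-01", []Capability{{"milling", 40, 25}}},
		{"cnc-02", []Capability{{"drilling", 80, 18}, {"milling", 25, 20}}},
	}
	fmt.Println("can mill to 30 mm:", match(fleet, "milling", 30))
}
```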

Relevance: 100.00%

Abstract:

This paper proposes a new manufacturing paradigm, which we call Cloud Agile Manufacturing, whose principal objective is to offer industrial production systems as a service. Users can thus access any functionality available in the manufacturing cloud (process design, production, management, business integration, factory virtualization, etc.) without knowledge of, or at least without having to be experts in, managing the required resources. The proposal takes advantage of many of the benefits offered by technologies and models such as Business Process Management (BPM), Cloud Computing, Service-Oriented Architectures (SOA) and ontologies. It takes as its starting point the Semantic Industrial Machinery as a Service (SIMaaS) model proposed in previous work, which facilitates the effective integration of industrial machinery into a computing environment by offering it as a network service. The work also includes an analysis of the benefits and disadvantages of the proposal.

Relevance: 100.00%

Abstract:

We present the results of a study that collected, compared and analyzed the terms and conditions of a number of cloud services vis-a-vis privacy and data protection. First, we assembled a list of factors that comprehensively capture cloud companies' treatment of user data with regard to privacy and data protection; then, we assessed how various cloud services of different types protect their users in the collection, retention, and use of their data, as well as in the disclosure to law enforcement authorities. This commentary provides comparative and aggregate analysis of the results.

Relevance: 100.00%

Abstract:

An on-line survey of experts was conducted to solicit their views on policy priorities in the area of information and communication technologies (ICT) in the Caribbean. The experts considered the goal to “promote teacher training in the use of ICTs in the classroom” to be the highest priority, followed by goals to “reduce the cost of broadband services” and “promote the use of ICT in emergency and disaster prevention, preparedness and response.” Goals in the areas of cybercrime, e-commerce, e-government, universal service funds, consumer protection, and on-line privacy rounded out the top 10. Some of the lowest-ranked goals were those related to coordinating the management of infrastructure changes. These included the switchover for digital terrestrial television (DTT) and digital FM radio, cloud computing for government ICT, the introduction of satellite-based internet services, and the installation of content distribution networks (CDNs). Initiatives aimed at using ICT to promote specific industries, or specific means of promoting the digital economy, tended toward the centre of the rankings. Thus, a general pattern emerged which elevated the importance of focusing on how ICT is integrated into the broader society, with economic issues a lower priority, and concerns about coordination on infrastructure issues lower still.

Relevance: 100.00%

Abstract:

Cybercrime and related malicious activity in our increasingly digital world have become more prevalent and sophisticated, evading traditional security mechanisms. Digital forensics has been proposed to help investigate, understand and eventually mitigate such attacks. The practice of digital forensics, however, is still fraught with various challenges, the most prominent of which include the increasing amounts of data and the diversity of digital evidence sources appearing in digital investigations. Mobile devices and cloud infrastructures are an interesting specimen, as they inherently exhibit these challenging circumstances and are becoming more prevalent in digital investigations today. Additionally, they embody further characteristics such as large volumes of data from multiple sources, dynamic sharing of resources, limited individual device capabilities and the presence of sensitive data. This combined set of circumstances makes digital investigations in mobile and cloud environments particularly challenging. This is not helped by the fact that digital forensics today still involves manual, time-consuming tasks in the processes of identifying evidence, performing evidence acquisition and correlating multiple diverse sources of evidence in the analysis phase. Furthermore, industry-standard tools are largely evidence-oriented, have limited support for evidence integration and only automate certain precursory tasks, such as indexing and text searching. In this study, efficiency, in the form of reduced time and human labour, is sought in digital investigations in highly networked environments through the automation of certain activities in the digital forensic process. To this end, requirements are outlined and an architecture is designed for an automated system that performs digital forensics in highly networked mobile and cloud environments. Part of the remote evidence acquisition activity of this architecture is built and tested on several mobile devices in terms of speed and reliability. A method for integrating multiple diverse evidence sources in an automated manner, supporting correlation and automated reasoning, is developed and tested. Finally, the proposed architecture is reviewed and enhancements are proposed to further automate it by introducing decentralization, particularly within the storage and processing functionality. This decentralization also improves machine-to-machine communication, supporting several digital investigation processes enabled by the architecture by harnessing the properties of various peer-to-peer overlays. Remote evidence acquisition helps to improve efficiency (the time and effort involved) in digital investigations by removing the need for proximity to the evidence. Experiments show that a single-TCP-connection client-server paradigm does not offer the required scalability and reliability for remote evidence acquisition and that a multi-TCP-connection paradigm is required. The automated integration, correlation and reasoning over multiple diverse evidence sources demonstrated in the experiments improves speed and reduces the human effort needed in the analysis phase by removing the need for time-consuming manual correlation.
Finally, informed by published scientific literature, the proposed enhancements for further decentralizing the Live Evidence Information Aggregator (LEIA) architecture offer a platform for increased machine-to-machine communication, thereby enabling automation and reducing the need for manual human intervention.
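The multi-connection acquisition result suggests the following kind of client. The sketch below is a minimal, hypothetical illustration (not the LEIA implementation) of pulling a remote evidence image over several concurrent TCP connections by requesting byte ranges; the URL, image size and chunk count are placeholders.

```go
// Minimal sketch: acquiring a remote evidence image over several parallel TCP
// connections by requesting byte ranges. Not the LEIA implementation.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"sync"
)

func fetchRange(url string, start, end int64, buf []byte, wg *sync.WaitGroup) {
	defer wg.Done()
	req, _ := http.NewRequest("GET", url, nil)
	req.Header.Set("Range", fmt.Sprintf("bytes=%d-%d", start, end))
	// Concurrent requests to the same host open separate TCP connections.
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Printf("range %d-%d failed: %v", start, end, err)
		return
	}
	defer resp.Body.Close()
	if _, err := io.ReadFull(resp.Body, buf); err != nil {
		log.Printf("range %d-%d incomplete: %v", start, end, err)
	}
}

func main() {
	const url = "http://device.example/evidence.img" // placeholder address
	const size, chunks = int64(1 << 20), int64(4)    // assume a 1 MiB image and 4 connections
	data := make([]byte, size)
	var wg sync.WaitGroup
	per := size / chunks
	for i := int64(0); i < chunks; i++ {
		start, end := i*per, (i+1)*per-1
		if i == chunks-1 {
			end = size - 1
		}
		wg.Add(1)
		go fetchRange(url, start, end, data[start:end+1], &wg)
	}
	wg.Wait()
	os.WriteFile("evidence.img", data, 0o600)
}
```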

Relevance: 100.00%

Abstract:

The enormous potential of cloud computing for improved and cost-effective service has generated unprecedented interest in its adoption. However, a potential cloud user faces numerous risks regarding service requirements, the cost implications of failure and uncertainty about cloud providers' ability to meet service level agreements. These risks hinder cloud adoption. We extend work on goal-oriented requirements engineering (GORE) and obstacles to inform the adoption process, arguing that prioritising obstacles and resolving them is core to mitigating risks in adoption. We propose a novel systematic method for prioritising obstacles and their resolution tactics using the Analytical Hierarchy Process (AHP), and we provide an example to demonstrate the applicability and effectiveness of the approach. To assess the AHP-based choice of resolution tactics, we support the method with stability and sensitivity analysis. Copyright 2014 ACM.
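As a rough illustration of the AHP step, the sketch below derives priority weights for three invented obstacles from a pairwise comparison matrix using the row geometric-mean approximation, one common way to approximate the principal eigenvector; the obstacles and judgments are made up and this is not necessarily the exact procedure used in the paper.

```go
// Minimal sketch of deriving AHP priority weights from a pairwise comparison
// matrix via the row geometric-mean approximation. Example data is invented.
package main

import (
	"fmt"
	"math"
)

func ahpWeights(m [][]float64) []float64 {
	n := len(m)
	gm := make([]float64, n)
	sum := 0.0
	for i, row := range m {
		p := 1.0
		for _, v := range row {
			p *= v
		}
		gm[i] = math.Pow(p, 1.0/float64(n)) // geometric mean of the row
		sum += gm[i]
	}
	for i := range gm {
		gm[i] /= sum // normalize so the weights add up to 1
	}
	return gm
}

func main() {
	obstacles := []string{"unclear SLA", "data lock-in", "cost uncertainty"}
	// m[i][j]: how much more important obstacle i is than obstacle j (Saaty scale).
	m := [][]float64{
		{1, 3, 5},
		{1.0 / 3, 1, 2},
		{1.0 / 5, 1.0 / 2, 1},
	}
	for i, w := range ahpWeights(m) {
		fmt.Printf("%-20s %.3f\n", obstacles[i], w)
	}
}
```

A full AHP treatment would also check the consistency ratio of the judgments before trusting the resulting weights.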

Relevance: 100.00%

Abstract:

A paper included in the proceedings of the National Conference "Образованието в информационното общество" (Education in the Information Society), Plovdiv, May 2012.

Relevance: 100.00%

Abstract:

Work on human self-awareness is the basis for a framework to develop computational systems that can adaptively manage complex, dynamic tradeoffs at runtime. An architectural case study in cloud computing illustrates the framework's potential benefits.

Relevance: 100.00%

Abstract:

To benefit from the advantages that cloud computing brings to the IT industry, management policies must be implemented as part of the operation of the cloud. For example, policies can be specified for energy management, to reduce the cost of running the IT system, or for security, to handle users' privacy. As cloud platforms are large, manual enforcement of policies is not scalable; hence, autonomic approaches to management policies have recently received considerable attention. These approaches allow the specification of rules that are executed by rule engines. The process of rule creation starts with the interpretation of policies drafted by high-level managers; technical IT staff then translate such policies into the operational activities that implement them. This process can start from a textual declarative description and, after numerous steps, terminates in a set of rules to be executed on a rule engine. To simplify these steps and to bridge the considerable gap between declarative policies and executable rules, we propose a domain-specific language called CloudMPL. We also design a method for automatically transforming the rules captured in CloudMPL into rules for the popular Drools rule engine. As policies change over time, code generation will reduce the time required to implement them. In addition, using a declarative language for writing the specifications is expected to make the authoring of rules easier. We demonstrate the use of the CloudMPL language on a running example extracted from an energy-consumption management case study.
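The policy-to-rule idea can be illustrated with a toy sketch: a declarative policy record is "compiled" into an executable condition/action pair and evaluated against sample monitoring data. This is neither CloudMPL syntax nor the Drools API; both are stand-ins invented for illustration.

```go
// Conceptual sketch only: turning a declarative policy description into an
// executable rule, in the spirit of policy-to-rule-engine generation.
package main

import "fmt"

// Policy is a declarative statement such as
// "when average host power exceeds 300 W, consolidate VMs".
type Policy struct {
	Metric    string
	Threshold float64
	Action    string
}

// Rule is the executable form: a condition over metrics plus an action.
type Rule struct {
	When func(metrics map[string]float64) bool
	Then func()
}

// compile is the (drastically simplified) generation step.
func compile(p Policy) Rule {
	return Rule{
		When: func(m map[string]float64) bool { return m[p.Metric] > p.Threshold },
		Then: func() { fmt.Println("firing action:", p.Action) },
	}
}

func main() {
	p := Policy{Metric: "avg_power_watts", Threshold: 300, Action: "consolidate-vms"}
	r := compile(p)
	metrics := map[string]float64{"avg_power_watts": 342} // sample monitoring data
	if r.When(metrics) {
		r.Then()
	}
}
```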

Relevance: 100.00%

Abstract:

This dissertation presents and evaluates a methodology for scheduling medical application workloads in virtualized computing environments. Such environments are being widely adopted by providers of "cloud computing" services. In the context of provisioning resources for medical applications, such environments allow users to deploy applications on distributed computing resources while keeping their data secure. Furthermore, higher level services that further abstract the infrastructure-related issues can be built on top of such infrastructures. For example, a medical imaging service can allow medical professionals to process their data in the cloud, relieving them of the burden of having to deploy and manage these resources themselves. In this work, we focus on issues related to scheduling scientific workloads in virtualized environments. We build upon the knowledge base of traditional parallel job scheduling to address the specific case of medical applications while harnessing the benefits afforded by virtualization technology. To this end, we provide the following contributions: (1) An in-depth analysis of the execution characteristics of the target applications when run in virtualized environments. (2) A performance prediction methodology applicable to the target environment. (3) A scheduling algorithm that harnesses application knowledge and virtualization-related benefits to provide strong scheduling performance and quality of service guarantees. In the process of addressing these pertinent issues for our target user base (i.e. medical professionals and researchers), we provide insight that benefits a large community of scientific application users in industry and academia. Our execution time prediction and scheduling methodologies are implemented and evaluated on a real system running popular scientific applications. We find that we are able to predict the execution time of a number of these applications with an average error of 15%. Our scheduling methodology, which is tested with medical image processing workloads, is compared to that of two baseline scheduling solutions, and we find that it outperforms them by 20–30% in terms of both the number of jobs processed and resource utilization, without violating any deadlines. We conclude that our solution is a viable approach to supporting the computational needs of medical users, even if the cloud computing paradigm is not widely adopted in its current form.
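As a simplified illustration of deadline-aware placement (not the dissertation's algorithm), the sketch below orders jobs by earliest deadline and assigns each to the machine with the smallest predicted completion time, skipping placements that would miss the deadline; the job names, predicted runtimes and deadlines are invented.

```go
// Illustrative sketch: an earliest-deadline-first dispatcher that places each
// job on the machine that becomes free soonest, rejecting placements that
// would miss the job's deadline. Predicted runtimes would come from a separate
// prediction model.
package main

import (
	"fmt"
	"sort"
)

type Job struct {
	Name      string
	Predicted float64 // predicted runtime in minutes
	Deadline  float64 // minutes from now
}

func main() {
	machineFree := []float64{0, 0, 0} // time at which each machine becomes free
	jobs := []Job{
		{"mri-segmentation", 30, 60},
		{"ct-reconstruction", 45, 50},
		{"dose-simulation", 20, 120},
	}
	// Earliest deadline first.
	sort.Slice(jobs, func(i, j int) bool { return jobs[i].Deadline < jobs[j].Deadline })
	for _, j := range jobs {
		best := 0
		for m := range machineFree {
			if machineFree[m] < machineFree[best] {
				best = m
			}
		}
		finish := machineFree[best] + j.Predicted
		if finish > j.Deadline {
			fmt.Printf("%-18s would miss its deadline (finish %.0f > %.0f)\n", j.Name, finish, j.Deadline)
			continue
		}
		machineFree[best] = finish
		fmt.Printf("%-18s -> machine %d, finishes at t=%.0f\n", j.Name, best, finish)
	}
}
```

The 15% prediction error and the 20–30% gains reported above come from the dissertation's own evaluation; the toy above only illustrates the placement idea.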

Relevance: 100.00%

Abstract:

Cloud computing is a paradigm that enables simple, pervasive network access to shared, configurable computing resources. Such resources can be offered on demand to users in a pay-per-use model. With the advance of this paradigm, a single service offered by a cloud platform might not be enough to meet all the requirements of clients, so it becomes necessary to compose services provided by different cloud platforms. However, current cloud platforms are not implemented using common standards; each one has its own APIs and development tools, which is a barrier to composing services. In this context, Cloud Integrator, a service-oriented middleware platform, provides an environment that facilitates the development and execution of multi-cloud applications, which are compositions of services from different cloud platforms represented by abstract workflows. However, Cloud Integrator has some limitations: (i) applications are executed locally; (ii) users cannot specify an application in terms of its inputs and outputs; and (iii) experienced users cannot directly determine the concrete Web services that will perform the workflow. To deal with these limitations, this work proposes Cloud Stratus, a middleware platform that extends Cloud Integrator and offers different ways to specify an application: as an abstract workflow or as a complete/partial execution flow. The platform enables application deployment on cloud virtual machines, so that several users can access it over the Internet. It also supports the access and management of virtual machines on different cloud platforms and provides service monitoring mechanisms and assessment of QoS parameters. Cloud Stratus was validated through a case study consisting of an application that uses different services provided by different cloud platforms, and it was also evaluated through experiments that analyze the performance of its processes.
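The notion of an abstract workflow bound to concrete services can be illustrated with the hypothetical sketch below, in which each abstract activity is bound at run time to the candidate service with the best observed QoS; the activities, services and latency figures are invented and the selection rule is deliberately simplistic.

```go
// Hypothetical sketch: binding each abstract workflow activity to one of
// several concrete services from different cloud platforms, here simply by
// lowest observed latency. Names and figures are invented.
package main

import "fmt"

type Service struct {
	Name     string
	Platform string
	Latency  float64 // observed average latency in ms
}

// bind picks, for each abstract activity, the candidate with the lowest latency.
func bind(workflow []string, candidates map[string][]Service) map[string]Service {
	plan := make(map[string]Service)
	for _, activity := range workflow {
		best := candidates[activity][0]
		for _, s := range candidates[activity][1:] {
			if s.Latency < best.Latency {
				best = s
			}
		}
		plan[activity] = best
	}
	return plan
}

func main() {
	workflow := []string{"store-image", "run-ocr"}
	candidates := map[string][]Service{
		"store-image": {{"blob-a", "platform-A", 35}, {"blob-b", "platform-B", 22}},
		"run-ocr":     {{"ocr-a", "platform-A", 120}, {"ocr-c", "platform-C", 90}},
	}
	for activity, s := range bind(workflow, candidates) {
		fmt.Printf("%s -> %s on %s\n", activity, s.Name, s.Platform)
	}
}
```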

Relevance: 100.00%

Abstract:

Cloud computing can be defined as a distributed computing model through which resources (hardware, storage, development platforms and communication) are shared as paid services, accessible with minimal management effort and interaction. A great benefit of this model is that it enables the use of various providers (e.g. a multi-cloud architecture) to compose a set of services in order to obtain an optimal configuration for performance and cost. However, multi-cloud use is hindered by the problem of cloud lock-in, i.e. the dependency of an application on a particular cloud platform. It is commonly addressed by three strategies: (i) the use of an intermediate layer between consumers of cloud services and providers; (ii) the use of standardized interfaces to access the cloud; or (iii) the use of models with open specifications. This work outlines an approach to evaluating these strategies. The evaluation found that, despite the advances made, none of these strategies actually solves the cloud lock-in problem. In this sense, this work proposes the use of Semantic Web technologies to avoid cloud lock-in, in which RDF models are used to specify the features of a cloud and are managed by SPARQL queries. To this end, this work: (i) presents an evaluation model that quantifies the cloud lock-in problem; (ii) evaluates cloud lock-in across three multi-cloud solutions and three cloud platforms; (iii) proposes the use of RDF and SPARQL for managing cloud resources; (iv) presents the Cloud Query Manager (CQM), a SPARQL server that implements the proposal; and (v) compares three multi-cloud solutions with CQM in terms of response time and effectiveness in resolving cloud lock-in.
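The sketch below illustrates, under assumed names, how RDF descriptions of cloud resources could be queried through the standard SPARQL protocol over HTTP, which is the style of access a CQM-like server could expose; the endpoint URL and the ex: vocabulary are placeholders, not CQM's actual interface.

```go
// Minimal sketch of querying RDF descriptions of cloud resources through the
// standard SPARQL protocol over HTTP. Endpoint and vocabulary are placeholders.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"net/url"
	"strings"
)

func main() {
	endpoint := "http://localhost:3030/clouds/sparql" // hypothetical SPARQL endpoint
	query := `
        PREFIX ex: <http://example.org/cloud#>
        SELECT ?provider ?service WHERE {
            ?service a ex:StorageService ;
                     ex:offeredBy ?provider .
        }`
	form := url.Values{"query": {strings.TrimSpace(query)}}
	req, _ := http.NewRequest("POST", endpoint, strings.NewReader(form.Encode()))
	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
	req.Header.Set("Accept", "application/sparql-results+json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // JSON result bindings for ?provider and ?service
}
```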

Relevance: 100.00%

Abstract:

Multi-cloud applications are composed of services offered by multiple cloud platforms, where the user/developer has full knowledge of the use of those platforms. The use of multiple cloud platforms avoids the following problems: (i) vendor lock-in, i.e. the dependency of an application on a particular cloud platform, which is harmful in the case of degradation or failure of platform services or of price increases for service usage; and (ii) degradation or failure of the application due to fluctuations in the quality of service (QoS) provided by a cloud platform, or due to the failure of any service. In a multi-cloud scenario it is possible to replace a failed service, or one with QoS problems, with an equivalent service from another cloud platform. For an application to adopt a multi-cloud perspective, it is necessary to create mechanisms that select which cloud services/platforms should be used in accordance with the requirements determined by the programmer/user. In this context, the major challenges in developing such applications include: (i) the choice of which underlying services and cloud computing platforms should be used, based on the user requirements defined in terms of functionality and quality; (ii) the need to continually monitor dynamic information (such as response time, availability and price) related to cloud services, in addition to the wide variety of services; and (iii) the need to adapt the application if QoS violations affect user-defined requirements. This PhD thesis proposes an approach for the dynamic adaptation of multi-cloud applications, to be applied when a service is unavailable or when the requirements set by the user/developer indicate that another available multi-cloud configuration meets them more efficiently. The proposed strategy is composed of two phases. The first phase consists of application modeling, exploiting the capacity to represent commonalities and variability in the context of the Software Product Line (SPL) paradigm. In this phase, an extended feature model is used to specify the cloud service configuration to be used by the application (commonalities) and the different possible providers for each service (variability); furthermore, the non-functional requirements associated with cloud services are specified as properties of this model, describing dynamic information about these services. The second phase consists of an autonomic process based on the MAPE-K control loop, which is responsible for optimally selecting a multi-cloud configuration that meets the established requirements and for performing the adaptation. The proposed adaptation strategy is independent of the programming technique used to perform the adaptation; in this work we implement it using several techniques, such as aspect-oriented programming, context-oriented programming, and component- and service-oriented programming. Based on the proposed steps, we assess: (i) whether the modeling process and the specification of non-functional requirements can ensure effective monitoring of user satisfaction; (ii) whether the optimal selection process yields significant gains compared to a sequential approach; and (iii) which techniques offer the best trade-off between development effort/modularity and performance.
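A much-simplified sketch of a MAPE-K style loop for this kind of adaptation is shown below: it monitors a response-time metric, analyzes it against a user-defined requirement, plans a replacement provider from a small knowledge base and executes the switch. Providers, metrics and thresholds are invented, and the real approach involves an extended feature model and optimal selection that this toy omits.

```go
// Simplified MAPE-K style loop for multi-cloud adaptation: monitor a QoS
// metric, analyze it against the requirement, plan a replacement provider and
// execute the switch. All data is invented for illustration.
package main

import (
	"fmt"
	"time"
)

type Provider struct {
	Name       string
	ResponseMs float64
}

func main() {
	requirementMs := 200.0 // user-defined maximum response time
	current := Provider{"provider-A", 150}
	alternatives := []Provider{{"provider-B", 120}, {"provider-C", 180}}

	for i := 0; i < 3; i++ { // a few iterations instead of an endless loop
		// Monitor: in a real system this value would come from probes on the running service.
		current.ResponseMs += 40 // simulate gradual degradation

		// Analyze.
		if current.ResponseMs <= requirementMs {
			fmt.Printf("%s within QoS (%.0f ms)\n", current.Name, current.ResponseMs)
		} else {
			// Plan: choose the best alternative from the knowledge base.
			best := alternatives[0]
			for _, p := range alternatives[1:] {
				if p.ResponseMs < best.ResponseMs {
					best = p
				}
			}
			// Execute.
			fmt.Printf("%s violates QoS (%.0f ms); switching to %s\n", current.Name, current.ResponseMs, best.Name)
			current = best
		}
		time.Sleep(10 * time.Millisecond) // stand-in for the monitoring interval
	}
}
```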

Relevance: 100.00%

Abstract:

Cloud computing enables independent end users and applications to share data and pooled resources, possibly located in geographically distributed data centers, in a fully transparent way. This need is particularly felt by scientific applications, which must exploit distributed resources in an efficient and scalable way to process large amounts of data. This paper proposes an open solution for deploying a Platform as a Service (PaaS) over a set of multi-site data centers by applying open-source virtualization tools to facilitate operation among virtual machines while optimizing the usage of distributed resources. An experimental testbed is set up in an OpenStack environment and evaluated with different types of TCP sample connections to demonstrate the functionality of the proposed solution and to obtain throughput measurements in relation to relevant design parameters.
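For flavour, the toy sketch below times a single TCP transfer to a local sink and reports throughput, the kind of raw measurement such a testbed would gather at much larger scale; it runs over the loopback interface and is not the paper's OpenStack setup.

```go
// Toy illustration: measuring the throughput of one TCP connection by
// streaming a fixed amount of data to a local sink and timing the transfer.
package main

import (
	"fmt"
	"io"
	"log"
	"net"
	"time"
)

func main() {
	ln, err := net.Listen("tcp", "127.0.0.1:0") // sink on an ephemeral port
	if err != nil {
		log.Fatal(err)
	}
	go func() {
		conn, _ := ln.Accept()
		io.Copy(io.Discard, conn) // discard everything the client sends
		conn.Close()
	}()

	conn, err := net.Dial("tcp", ln.Addr().String())
	if err != nil {
		log.Fatal(err)
	}
	const total = 64 << 20 // 64 MiB sample transfer
	buf := make([]byte, 64<<10)
	start := time.Now()
	for sent := 0; sent < total; sent += len(buf) {
		if _, err := conn.Write(buf); err != nil {
			log.Fatal(err)
		}
	}
	conn.Close()
	elapsed := time.Since(start).Seconds()
	fmt.Printf("approximate throughput: %.1f MiB/s\n", float64(total)/(1<<20)/elapsed)
}
```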