804 results for Cloud Computing, Software-as-a-Service (SaaS), SaaS Multi-Tenant, Windows Azure
Abstract:
Software applications created on top of the service-oriented architecture (SOA) are increasingly popular, but testing them remains a challenge. In this paper, a framework named TASSA for testing the functional and non-functional behaviour of service-based applications is presented. The paper focuses on the concept of design-time testing, the corresponding testing approach and the architectural integration of the constituent TASSA tools. The individual TASSA tools, with sample validation scenarios, have already been presented along with a general view of their relation. This paper's contribution is the structured testing approach, based on the integral use of the tools, and their architectural integration. The framework is based on SOA principles and is composable depending on user requirements.
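The combined functional/non-functional flavour of design-time testing can be shown in a small sketch. The following Python test is illustrative only: the endpoint URL, expected payload and latency budget are invented assumptions, not details of TASSA.

    # Minimal sketch of combined functional / non-functional service testing.
    # The endpoint and expectations below are hypothetical placeholders.
    import time
    import unittest
    import urllib.request

    SERVICE_URL = "http://localhost:8080/quote"  # assumed test endpoint
    LATENCY_BUDGET_S = 0.5                       # assumed non-functional requirement

    class ServiceTest(unittest.TestCase):
        def test_functional_and_latency(self):
            start = time.monotonic()
            with urllib.request.urlopen(SERVICE_URL, timeout=5) as resp:
                body = resp.read()
                elapsed = time.monotonic() - start
                # Functional check: the service answers successfully.
                self.assertEqual(resp.status, 200)
                self.assertIn(b"price", body)
            # Non-functional check: the response meets the latency budget.
            self.assertLess(elapsed, LATENCY_BUDGET_S)

    if __name__ == "__main__":
        unittest.main()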
Abstract:
In this paper we evaluate and compare two representative and popular distributed processing engines for large-scale big data analytics: Spark and the graph-based engine GraphLab. We design a benchmark suite including representative algorithms and datasets to compare the performance of the computing engines in terms of running time, memory and CPU usage, and network and I/O overhead. The benchmark suite is tested on both a local computer cluster and virtual machines in the cloud. By varying the number of computers and the amount of memory, we examine the scalability of the computing engines with increasing computing resources (such as CPU and memory). We also run a cross-evaluation of generic and graph-based analytic algorithms over graph-processing and generic platforms to identify the potential performance degradation if only one processing engine is available. It is observed that both computing engines show good scalability as computing resources increase. While GraphLab largely outperforms Spark for graph algorithms, its running time for non-graph algorithms is close to Spark's. Additionally, the running time of Spark for graph algorithms on cloud virtual machines is observed to increase by almost 100% compared to local computer clusters.
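At its core, a benchmark of this kind times the same algorithm on each engine while sampling resource usage per run. The sketch below is not the authors' suite; it is a generic harness with a placeholder workload standing in for the paper's algorithms, showing how running time and peak memory might be recorded.

    # Generic benchmarking-harness sketch; the workload is a placeholder,
    # not one of the paper's algorithms.
    import resource
    import time

    def run_benchmark(name, workload, repetitions=3):
        """Time a workload and report peak memory (Unix-only via getrusage)."""
        timings = []
        for _ in range(repetitions):
            start = time.perf_counter()
            workload()
            timings.append(time.perf_counter() - start)
        peak_rss_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
        print(f"{name}: best={min(timings):.3f}s "
              f"mean={sum(timings)/len(timings):.3f}s peak_rss={peak_rss_kb} kB")

    if __name__ == "__main__":
        # Placeholder standing in for e.g. PageRank on Spark or GraphLab.
        run_benchmark("toy-sum", lambda: sum(range(10_000_000)))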
Abstract:
The phenomenal growth of the Internet has connected us to a vast amount of computation and information resources around the world. However, making use of these resources is difficult due to the unparalleled massiveness, high communication latency, shared-nothing architecture and unreliable connections of the Internet. In this dissertation, we present a distributed software agent approach, which brings a new distributed problem-solving paradigm to Internet computing research with an enhanced client-server scheme, inherent scalability and heterogeneity. Our study discusses the role of a distributed software agent in Internet computing and classifies it into three major categories by the objects it interacts with: computation agent, information agent and interface agent. The discussion of the problem domain and the deployment of the computation agent and the information agent are presented with the analysis, design and implementation of experimental systems in high-performance Internet computing and in scalable Web searching. In the computation agent study, high-performance Internet computing is achieved with our proposed Java massive computation agent (JAM) model. We analyzed the JAM computing scheme and built a brute-force ciphertext decryption prototype. In the information agent study, we discuss the scalability problem of existing Web search engines and design an approach to Web searching with distributed collaborative index agents. This approach can be used to construct a more accurate, reusable and scalable solution to deal with the growth of the Web and of the information on the Web. Our research reveals that with the deployment of distributed software agents in Internet computing, we gain a more cost-effective approach to making better use of the gigantic network of computation and information resources on the Internet. The case studies in our research show that we are now able to solve many practically hard or previously unsolvable problems caused by the inherent difficulties of Internet computing.
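The JAM-style division of a brute-force search among agents can be sketched as keyspace partitioning: each agent scans a disjoint key range and reports a hit. In the toy example below, a single-byte XOR cipher stands in for the dissertation's actual cipher, and the partitioning scheme is an illustrative assumption.

    # Toy sketch of partitioning a brute-force key search among agents.
    # The single-byte XOR "cipher" is a placeholder for a real one.
    def xor_decrypt(ciphertext: bytes, key: int) -> bytes:
        return bytes(b ^ key for b in ciphertext)

    def agent_search(ciphertext: bytes, key_range: range, known_plaintext: bytes):
        """One agent scans its disjoint share of the keyspace."""
        for key in key_range:
            if known_plaintext in xor_decrypt(ciphertext, key):
                return key
        return None

    if __name__ == "__main__":
        secret = bytes(b ^ 0x5A for b in b"attack at dawn")
        n_agents = 4
        shares = [range(i * 64, (i + 1) * 64) for i in range(n_agents)]
        hits = [agent_search(secret, share, b"attack") for share in shares]
        print("recovered key:", next(k for k in hits if k is not None))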
Abstract:
This dissertation studies the context-aware application with its proposed algorithms at client side. The required context-aware infrastructure is discussed in depth to illustrate that such an infrastructure collects the mobile user’s context information, registers service providers, derives mobile user’s current context, distributes user context among context-aware applications, and provides tailored services. The approach proposed tries to strike a balance between the context server and mobile devices. The context acquisition is centralized at the server to ensure the reusability of context information among mobile devices, while context reasoning remains at the application level. Hence, a centralized context acquisition and distributed context reasoning are viewed as a better solution overall. The context-aware search application is designed and implemented at the server side. A new algorithm is proposed to take into consideration the user context profiles. By promoting feedback on the dynamics of the system, any prior user selection is now saved for further analysis such that it may contribute to help the results of a subsequent search. On the basis of these developments at the server side, various solutions are consequently provided at the client side. A proxy software-based component is set up for the purpose of data collection. This research endorses the belief that the proxy at the client side should contain the context reasoning component. Implementation of such a component provides credence to this belief in that the context applications are able to derive the user context profiles. Furthermore, a context cache scheme is implemented to manage the cache on the client device in order to minimize processing requirements and other resources (bandwidth, CPU cycle, power). Java and MySQL platforms are used to implement the proposed architecture and to test scenarios derived from user’s daily activities. To meet the practical demands required of a testing environment without the impositions of a heavy cost for establishing such a comprehensive infrastructure, a software simulation using a free Yahoo search API is provided as a means to evaluate the effectiveness of the design approach in a most realistic way. The integration of Yahoo search engine into the context-aware architecture design proves how context aware application can meet user demands for tailored services and products in and around the user’s environment. The test results show that the overall design is highly effective, providing new features and enriching the mobile user’s experience through a broad scope of potential applications.
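The client-side context cache described above is essentially a bounded cache with expiry, sized to conserve bandwidth, CPU cycles and power. A minimal sketch follows, assuming an LRU eviction policy and a per-entry time-to-live (both illustrative choices, not details taken from the dissertation).

    # Minimal LRU-with-TTL cache sketch for client-side context data.
    # Eviction policy and TTL are assumptions for illustration.
    import time
    from collections import OrderedDict

    class ContextCache:
        def __init__(self, capacity=64, ttl_s=300.0):
            self.capacity, self.ttl_s = capacity, ttl_s
            self._entries = OrderedDict()  # key -> (value, stored_at)

        def put(self, key, value):
            self._entries[key] = (value, time.monotonic())
            self._entries.move_to_end(key)
            if len(self._entries) > self.capacity:
                self._entries.popitem(last=False)  # evict least recently used

        def get(self, key):
            item = self._entries.get(key)
            if item is None:
                return None
            value, stored_at = item
            if time.monotonic() - stored_at > self.ttl_s:
                del self._entries[key]  # expired context is stale
                return None
            self._entries.move_to_end(key)  # mark as recently used
            return value

    cache = ContextCache(capacity=2, ttl_s=60)
    cache.put("location", "office")
    print(cache.get("location"))  # -> "office"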
Abstract:
The Internet of Things and single-board computing are rapidly expanding fields today, and ARM architectures are currently the dominant players in this area. Operating systems and software are evolving to keep up with this change and with the new use cases these technologies introduce. In this thesis we address the porting of the Sabayon Linux distribution to these architectures, the creation of an infrastructure for releasing the images, and the compilation of the software packages.
Abstract:
The European CloudSME project, which incorporated 24 European SMEs alongside five academic partners, finished its funded phase in March 2016. This presentation will provide a summary of the results of the project and will analyze the challenges and differences in developing "SME Gateways" when compared to "Science Gateways". CloudSME started in 2013 with the aim of developing a cloud-based simulation platform for manufacturing and engineering SMEs. The project was built around industry use-cases, five of which were incorporated in the project from the start, with seven more added as an outcome of an open call in January 2015. CloudSME utilized science-gateway-related technologies, such as the commercial CloudBroker Platform and the WS-PGRADE/gUSE Gateway Framework, that were developed in the preceding SCI-BUS project. As its most important outcome, the project successfully implemented 12 industry-quality demonstrators that showcase how SMEs in the manufacturing and engineering sector can utilize cloud-based simulation services. Some of these solutions are already market-ready and are currently being rolled out by the software vendor companies. Others require further fine-tuning and the implementation of commercial interfaces before being put on the market. The CloudSME use-cases came from a very wide application spectrum. The project implemented, for example, an open marketplace for micro-breweries to optimize their production and distribution processes, an insole design validation service to be used by podiatrists and shoe manufacturers, a generic stock management solution for manufacturing SMEs, and also several "classical" high-performance computing case studies, such as fluid dynamics simulations for model helicopter design and dual-fuel internal combustion engine simulation. As the project generated significant impact and interest in the manufacturing sector, 10 CloudSME stakeholders established a follow-up company called CloudSME UG for the future commercialization of the results. Besides the success stories, this talk will also highlight the difficulties of transferring the outcomes of an academic research project to real commercial applications. The different mindsets and approaches of academic and industry partners presented a real challenge for the CloudSME project, with some interesting and valuable lessons learnt. The academic way of supporting SMEs did not always work well with the rather different working practices and culture of many participants. Also, the quality of support for operational solutions required by the SMEs is well beyond the typical support services academic institutions are prepared for. Finally, a clear lack of trust in academic solutions compared to commercial ones was also evident. The talk will highlight some of these challenges, underpinned by the implementation of the CloudSME use-cases.
Abstract:
Smartphones have undergone a remarkable evolution over the last few years, from simple calling devices to full-fledged computing devices where multiple services and applications run concurrently. Unfortunately, battery capacity increases at a much slower pace and has become a main bottleneck for Internet-connected smartphones. Several software-based techniques have been proposed in the literature for improving battery life. The most common techniques include data compression, packet aggregation or batch scheduling, offloading partial computations to the cloud, and periodically switching off interfaces (e.g., WiFi or 3G/4G) for short intervals. However, there has been no focus on eliminating the energy waste of background applications that extensively utilize smartphone resources such as CPU, memory, GPS, and the WiFi or 3G/4G data connection. In this paper, we propose an Application State Proxy (ASP) that suppresses/stops applications on smartphones and maintains their presence on another network device. The applications are resumed/restarted on the smartphone only in case of an event, such as a new message arrival. We present the key requirements for the ASP service and different possible architectural designs. In short, the ASP concept can significantly improve the battery life of smartphones by minimizing the resource usage of background applications.
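The ASP idea can be sketched as a small state machine: the proxy keeps an application's network presence alive while the app itself is suspended, and triggers a resume only when a real event arrives. The sketch below is an illustrative reduction of the concept; the class, method names and event model are invented for the example.

    # Illustrative sketch of the Application State Proxy (ASP) concept.
    # Names and the event model are invented for this example.
    class AppStateProxy:
        """Maintains presence for suspended apps; resumes them on events."""
        def __init__(self):
            self.suspended = {}  # app name -> saved session state

        def suspend(self, app_name, session_state):
            # The app stops consuming CPU/radio on the phone;
            # the proxy keeps its network presence alive.
            self.suspended[app_name] = session_state
            print(f"{app_name}: suspended, presence maintained by proxy")

        def on_event(self, app_name, event):
            # Only a real event (e.g. new message) wakes the app on the phone.
            state = self.suspended.pop(app_name, None)
            if state is not None:
                print(f"{app_name}: resumed with state {state!r} due to {event!r}")

    proxy = AppStateProxy()
    proxy.suspend("messenger", {"unread": 0})
    proxy.on_event("messenger", "new message arrival")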
Abstract:
Many cloud-based applications employ a data centre as a central server to process data that is generated by edge devices, such as smartphones, tablets and wearables. This model places ever-increasing demands on communication and computational infrastructure, with an inevitable adverse effect on Quality-of-Service and Quality-of-Experience. The concept of Edge Computing is predicated on moving some of this computational load towards the edge of the network to harness computational capabilities that are currently untapped in edge nodes, such as base stations, routers and switches. This position paper considers the challenges and opportunities that arise out of this new direction in the computing landscape.
Abstract:
Provenance plays a pivotal role in tracing the origin of something and determining how and why it occurred. With the emergence of the cloud and the benefits it encompasses, there has been a rapid proliferation of services being adopted by commercial and government sectors. However, trust and security concerns for such services are on an unprecedented scale. Currently, these services expose very little of their internal workings to their customers; this can cause accountability and compliance issues, and in the event of a fault or error, customers and providers are left pointing fingers at each other. Provenance-based traceability provides a means to address part of this problem by capturing and querying past events to understand how and why they took place. However, due to the complexity of the cloud infrastructure, current provenance models lack the expressiveness required to describe the inner workings of a cloud service. For a complete solution, a provenance-aware policy language is also required for operators and users to define policies for compliance purposes; current policy standards do not cater for such a requirement. To address these issues, in this paper we propose a provenance (traceability) model, cProv, and a provenance-aware policy language, cProvl, to capture traceability data and express policies for validation against the model. For the implementation, we have extended the XACML 3.0 architecture to support provenance and provided a translator that converts cProvl policies and requests into their XACML equivalents.
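Since the abstract does not spell out cProv's syntax, the flavour of provenance-based traceability can still be shown generically: capture events with causal links, then query backwards to answer "how and why". A minimal sketch follows; the record fields and the query are assumptions for illustration, not the cProv model itself.

    # Generic provenance-capture sketch; the record structure is an
    # assumption for illustration, not the cProv model.
    from dataclasses import dataclass

    @dataclass
    class ProvRecord:
        entity: str     # what was produced or affected
        activity: str   # what happened
        agent: str      # who or what caused it
        caused_by: str | None = None  # entity this one was derived from

    log = [
        ProvRecord("vm-42", "provisioned", "scheduler"),
        ProvRecord("disk-7", "attached", "storage-svc", caused_by="vm-42"),
        ProvRecord("fault-1", "io-error", "disk-7", caused_by="disk-7"),
    ]

    def trace_origin(entity, records):
        """Walk causal links backwards to explain how an entity came to be."""
        chain, current = [], entity
        while current is not None:
            rec = next((r for r in records if r.entity == current), None)
            if rec is None:
                break
            chain.append(rec)
            current = rec.caused_by
        return chain

    for rec in trace_origin("fault-1", log):
        print(f"{rec.entity} <- {rec.activity} by {rec.agent}")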
Abstract:
Many-core systems are emerging from the need for more computational power and power efficiency. However, many issues still surround many-core systems: they need specialized software before they can be fully utilized, and the hardware itself may differ from conventional computational systems. To gain efficiency from a many-core system, programs need to be parallelized. In many-core systems the cores are small and less powerful than the cores used in traditional computing, so running a conventional program is not an efficient option. Also, in Network-on-Chip-based processors, the network might get congested and the cores might work at different speeds. In this thesis, a dynamic load balancing method is proposed and tested on the Intel 48-core Single-Chip Cloud Computer by parallelizing a fault simulator. The maximum speedup is difficult to obtain due to severe bottlenecks in the system. In order to exploit all the available parallelism of the Single-Chip Cloud Computer, a runtime approach capable of dynamically balancing the load during the fault simulation process is used. The proposed dynamic fault simulation approach on the Single-Chip Cloud Computer shows up to 45X speedup compared to a serial fault simulation approach. Many-core systems can draw enormous amounts of power, and if this power is not controlled properly, the system might get damaged. One way to manage power is to set a power budget for the system. But if this power is drawn by just a few of the many cores, those few cores get extremely hot and might get damaged. Due to the increase in power density, multiple thermal sensors are deployed on the chip area to provide real-time temperature feedback for thermal management techniques. Thermal sensor accuracy is extremely prone to intra-die process variation and aging phenomena. These factors lead to a situation where thermal sensor values drift from the nominal values, which necessitates efficient calibration techniques being applied before the sensor values are used. In addition, cores in modern many-core systems support dynamic voltage and frequency scaling. Thermal sensors located on cores are sensitive to the core's current voltage level, meaning that dedicated calibration is needed for each voltage level. In this thesis, a general-purpose software-based auto-calibration approach is also proposed to calibrate thermal sensors across a range of voltage levels.
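The dynamic load balancing idea can be illustrated with a shared work queue: instead of statically assigning fault-simulation jobs to cores, idle workers pull the next job, so faster cores naturally take more work. In the sketch below, the per-fault task is a placeholder and ordinary worker processes stand in for SCC cores.

    # Work-queue sketch of dynamic load balancing; the per-fault task is a
    # placeholder, and worker processes stand in for SCC cores.
    from multiprocessing import Pool

    def simulate_fault(fault_id: int) -> tuple[int, bool]:
        """Placeholder for simulating one fault; returns (id, detected?)."""
        return fault_id, (fault_id % 3 == 0)

    if __name__ == "__main__":
        faults = list(range(1000))
        with Pool(processes=8) as pool:
            # imap_unordered hands out work as workers become idle,
            # balancing load even when cores run at different speeds.
            results = list(pool.imap_unordered(simulate_fault, faults, chunksize=16))
        detected = sum(1 for _, hit in results if hit)
        print(f"fault coverage: {detected}/{len(faults)}")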
Abstract:
In this thesis, tool support is addressed for the combined disciplines of model-based testing and performance testing. Model-based testing (MBT) utilizes abstract behavioral models to automate test generation, thus decreasing the time and cost of test creation. MBT is a functional testing technique, focusing on output, behavior, and functionality. Performance testing, however, is non-functional and is concerned with responsiveness and stability under various load conditions. MBPeT (Model-Based Performance evaluation Tool) is one such tool, which utilizes probabilistic models, representing dynamic real-world user behavior patterns, to generate synthetic workload against a System Under Test (SUT) and in turn carry out performance analysis based on key performance indicators (KPIs). Developed at Åbo Akademi University, the MBPeT tool currently comprises a downloadable command-line-based tool as well as a graphical user interface. The goal of this thesis project is two-fold: 1) to extend the existing MBPeT tool by deploying it as a web-based application, thereby removing the requirement of local installation, and 2) to design a user interface for this web application which will add new user interaction paradigms to the existing feature set of the tool. All phases of the MBPeT process will be realized via this single web deployment location, including probabilistic model creation, test configuration, test session execution against a SUT with real-time monitoring of user-configurable metrics, and final test report generation and display. This web application (MBPeT Dashboard) is implemented in the Java programming language on top of the Vaadin framework for rich internet application development. The Vaadin framework handles the complicated web communication processes and front-end technologies, freeing developers to implement the business logic as well as the user interface in pure Java. A number of experiments are run in a case study environment to validate the functionality of the newly developed Dashboard application as well as the scalability of the implemented solution in handling multiple concurrent users. The results support a successful solution with regard to the functional and performance criteria defined, while improvements and optimizations are suggested to increase both of these factors.
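The core of the MBPeT approach, generating synthetic load from a probabilistic user model, can be sketched as a random walk over a transition table. The model below (states, probabilities and step limit) is invented for illustration and is not taken from the tool.

    # Sketch of workload generation from a probabilistic user model.
    # The transition table is an illustrative assumption.
    import random

    # state -> list of (next_state, probability)
    MODEL = {
        "browse": [("browse", 0.5), ("search", 0.3), ("buy", 0.1), ("exit", 0.1)],
        "search": [("browse", 0.4), ("buy", 0.4), ("exit", 0.2)],
        "buy":    [("browse", 0.3), ("exit", 0.7)],
    }

    def simulate_user(max_steps=20):
        """Walk the model, yielding the action trace of one synthetic user."""
        state, trace = "browse", []
        for _ in range(max_steps):
            trace.append(state)
            nxt, probs = zip(*MODEL[state])
            state = random.choices(nxt, weights=probs)[0]
            if state == "exit":
                break
        return trace

    for user in range(3):
        print(f"user {user}: {' -> '.join(simulate_user())}")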
Abstract:
Current trends in broadband mobile networks point towards the placement of different capabilities at the edge of the mobile network in a centralised way. On one hand, the split of the eNB between baseband processing units and remote radio heads makes it possible to process some of the protocols in centralised premises, likely with virtualised resources. On the other hand, mobile edge computing makes use of processing and storage capabilities close to the air interface in order to deploy optimised services with minimum delay. The confluence of both trends is a hot topic in the definition of future 5G networks. The full centralisation of both technologies in cloud data centres imposes stringent requirements on the fronthaul connections in terms of throughput and latency; therefore, cells with limited network access would not be able to offer these types of services. This paper proposes a solution for these cases, based on the placement of processing and storage capabilities close to the remote units, which is especially well suited for the deployment of clusters of small cells. The proposed cloud-enabled small cells include a highly efficient microserver with a limited set of virtualised resources offered to the cluster of small cells. As a result, a light data centre is created and commonly used for deploying centralised eNB and mobile edge computing functionalities. The paper covers the proposed architecture, with special focus on the integration of both aspects, and possible scenarios of application.
Abstract:
Technologies are currently being developed to make operating-system-level virtualization more efficient, among them the Docker suite, which makes it possible to manage processes as if they were virtual machines. In addition, clustering mechanisms such as Kubernetes make it possible to connect multiple machines, have them communicate with one another, and make them appear as a single monolithic server to external users. The combination of operating-system-level virtualization and clustering makes it possible to build servers as powerful as monolithic ones, but cheaper and better able to adapt to external demand. Given the enormous amount of data and computing power needed to manage communications and interactions between users and web services, many companies cannot afford to invest in a proprietary server and its maintenance, so they rent the necessary resources, which constitute the so-called "cloud", i.e., the set of servers that companies make available to their customers. The migration of services from physical machines to the cloud has changed the way services themselves are viewed: they are no longer seen as monolithic software but as microservices that interact with one another. The communication infrastructure that allows microservices to communicate is called a service mesh, and its separation of concerns recalls SDN technology. The behaviour of the Istio service-mesh software installed in a Kubernetes cluster was studied. Metrics on memory occupancy, CPU usage, transmitted packets, errors, and latency were collected and compared with those obtained from a cluster on which Istio was not installed. The study shows that, in a production-oriented cluster, the service mesh offered by Istio provides many tools for network control at the cost of a slightly higher demand for hardware resources.
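The comparison performed in the thesis boils down to measuring the same metrics on two clusters and computing the relative overhead. A minimal sketch of that final step follows, using invented sample values in place of the collected measurements.

    # Sketch of the overhead comparison between a cluster with Istio and a
    # baseline cluster; the sample values below are invented placeholders.
    baseline = {"cpu_millicores": 210, "memory_mib": 520, "p99_latency_ms": 18.0}
    with_istio = {"cpu_millicores": 265, "memory_mib": 690, "p99_latency_ms": 21.5}

    for metric in baseline:
        delta = with_istio[metric] - baseline[metric]
        pct = 100.0 * delta / baseline[metric]
        print(f"{metric}: {baseline[metric]} -> {with_istio[metric]} (+{pct:.1f}%)")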