898 results for Cloud OS, cloud operating system, cloud computing


Relevance:

100.00%

Publisher:

Abstract:

In Mark Weiser's vision of ubiquitous computing, computers disappear from the users' focus and interact seamlessly with other computers and users in order to provide information and services. This shift away from direct human-computer interaction requires applications to interact in another way, without bothering the user. Context is the information that can be used to characterize the situation of persons, locations, or other objects relevant to an application. Context-aware applications are capable of monitoring and exploiting knowledge about external operating conditions; they can adapt their behaviour based on the retrieved information and thus replace, at least to some extent, the missing user interaction. Context awareness can therefore be assumed to be an important ingredient of applications in ubiquitous computing environments. However, context management in these environments must reflect their specific characteristics, for example distribution, mobility, resource-constrained devices, and heterogeneity of context sources.

Modern mobile devices are equipped with fast processors, sufficient memory, and several sensors, such as a Global Positioning System (GPS) receiver, a light sensor, or an accelerometer. Since many applications in ubiquitous computing environments can exploit context information to enhance their service to the user, these devices are highly useful for context-aware applications. Additionally, context reasoners and external context providers can be incorporated. Several context sensors, reasoners, and providers may offer the same type of information, but they can differ in the quality (e.g. accuracy) and representation (e.g. a position given as coordinates or as an address) of the offered information, and in the cost (such as battery consumption) of providing it.

To simplify the development of context-aware applications, developers should be able to access context information transparently, without dealing with the underlying context-access techniques and distribution aspects. They should rather be able to express which kind of information they require, which quality criteria this information should fulfil, and how much its provision may cost (not only in monetary terms but also in energy or performance). For this purpose, application developers and developers of context providers need a common language and vocabulary to specify which information they require or provide, respectively. These descriptions and criteria then have to be matched, which is likely to require transforming the provided information so that it fulfils the criteria of the context-aware application. As more than one provider may fulfil the criteria, a selection process is also required, in which the system trades off the quality of context and cost of each provider against the quality of context requested by the consumer. This selection allows context sources to be activated only when required: explicitly selecting context services, and thereby dynamically activating and deactivating local context providers, reduces resource consumption because unused context sensors are switched off.

One promising solution is a middleware that provides appropriate support based on the principles of service-oriented computing, such as loose coupling, abstraction, reusability, and discoverability of context providers. This allows context sensors, context reasoners, and external context providers to be abstracted as context services. In this thesis we present our solution, consisting of a context model and ontology, a context offer and query language, a comprehensive matching and mediation process, and a selection service. The matching and mediation process and the selection service in particular differ from existing work. The matching and mediation process allows mediation chains to be established autonomously in order to transfer information from an offered representation into a requested representation. In contrast to other approaches, the selection service does not select a single service for a single request; rather, it selects a set of services that fulfils all requests, which also facilitates the sharing of services. The approach is extensively reviewed against the different requirements, and a set of demonstrators shows its usability in real-world scenarios.
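To make the matching and selection concrete, here is a minimal Python sketch of the idea: offers advertise type, representation, quality, and cost; queries state criteria; a mediator table bridges representations; and a greedy pass selects a provider set that favours already-activated services. All names (ContextOffer, select_providers, the mediator table) and the selection heuristic are invented for illustration and are not the thesis's actual language or algorithm.

```python
from dataclasses import dataclass

@dataclass
class ContextOffer:
    """A context service advertising what it provides, at which quality and cost."""
    provider: str
    info_type: str        # e.g. "position"
    representation: str   # e.g. "wgs84" or "address"
    accuracy: float       # provider-reported quality (higher is better)
    cost: float           # e.g. battery drain per reading

@dataclass
class ContextQuery:
    """A context consumer stating what it needs and which criteria must hold."""
    info_type: str
    representation: str
    min_accuracy: float
    max_cost: float

# Hypothetical mediators that transform one representation into another.
MEDIATORS = {("wgs84", "address"): "reverse_geocoder"}

def matches(offer, query):
    """An offer matches if types agree, quality and cost criteria hold, and
    the representation is identical or reachable via a mediator."""
    if offer.info_type != query.info_type:
        return False
    if offer.accuracy < query.min_accuracy or offer.cost > query.max_cost:
        return False
    return (offer.representation == query.representation
            or (offer.representation, query.representation) in MEDIATORS)

def select_providers(offers, queries):
    """Pick one offer per query, preferring cheap, accurate, shareable providers.
    Reusing an already-activated provider costs nothing extra, which models
    service sharing and lets unused sensors stay switched off."""
    active, selection = set(), {}
    for q in queries:
        candidates = [o for o in offers if matches(o, q)]
        if not candidates:
            continue
        best = min(candidates,
                   key=lambda o: (0 if o.provider in active else o.cost,
                                  -o.accuracy))
        active.add(best.provider)
        selection[q.info_type, q.representation] = best
    return selection

offers = [
    ContextOffer("gps", "position", "wgs84", accuracy=0.95, cost=5.0),
    ContextOffer("cell", "position", "wgs84", accuracy=0.40, cost=1.0),
]
queries = [ContextQuery("position", "address", min_accuracy=0.8, max_cost=6.0)]
print(select_providers(offers, queries))  # selects "gps", mediated to an address
```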

Relevance:

100.00%

Publisher:

Abstract:

Increasingly, distributed systems are being used to host all manner of applications. While these platforms provide a relatively cheap and effective means of executing applications, so far there has been little work on developing tools and utilities that can help application developers understand problems with the supporting software or the executing applications. To fully understand why an application executing on a distributed system is not behaving as expected, it is important that not only the application but also the underlying middleware and the operating system are analysed; otherwise issues could be missed, and overall performance profiling and fault diagnosis would certainly be harder. We believe that one approach to profiling and analysing distributed systems and their applications is via the plethora of log files generated at runtime. In this paper we report on a system (Slogger) that utilises various emerging Semantic Web technologies to gather the heterogeneous log files generated by the various layers in a distributed system and unify them in a common data store. Once unified, the log data can be queried and visualised in order to highlight potential problems or issues occurring in the supporting software or the application itself.
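As a rough illustration of the Slogger idea, the sketch below parses two heterogeneous log formats into a common store of subject-predicate-object triples and runs a unified query over them. Plain Python stands in for the Semantic Web stack (RDF store, SPARQL queries) that the paper describes; the predicates and log formats are hypothetical.

```python
import re
from datetime import datetime

# A common store of (subject, predicate, object) triples, standing in for
# the RDF store a system like Slogger would populate.
triples = []

def add_event(source, ts, level, message):
    event = f"event:{len(triples) // 4}"  # four triples per event
    triples.extend([
        (event, "log:source",  source),
        (event, "log:time",    ts),
        (event, "log:level",   level),
        (event, "log:message", message),
    ])

# Two heterogeneous log formats from different layers of the stack.
def parse_syslog(line):                  # "2024-01-01T10:00:02 ERROR disk full"
    ts, level, msg = line.split(" ", 2)
    add_event("os", datetime.fromisoformat(ts), level, msg)

def parse_middleware(line):              # "[10:00:03] broker: WARN queue slow"
    t, comp, level, msg = re.match(r"\[(.+)\] (\w+): (\w+) (.*)", line).groups()
    add_event(f"middleware/{comp}",
              datetime.fromisoformat("2024-01-01T" + t), level, msg)

parse_syslog("2024-01-01T10:00:02 ERROR disk full")
parse_middleware("[10:00:03] broker: WARN queue slow")

def query(levels=("WARN", "ERROR")):
    """A query over the unified store: all non-INFO events in time order,
    regardless of which layer produced them."""
    events = {}
    for s, p, o in triples:
        events.setdefault(s, {})[p] = o
    hits = [e for e in events.values() if e["log:level"] in levels]
    return sorted(hits, key=lambda e: e["log:time"])

for e in query():
    print(e["log:time"], e["log:source"], e["log:level"], e["log:message"])
```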

Relevance:

100.00%

Publisher:

Abstract:

A new methodology was created to measure the energy consumption and related greenhouse gas (GHG) emissions of a computer operating system (OS) across different device platforms. The methodology involved the direct power measurement of devices under different activity states. In order to cover all aspects of an OS, the methodology included measurements in various OS modes while, uniquely, also incorporating measurements taken when running an array of defined software activities, so as to include the OS's application management features. The methodology was demonstrated on a laptop and a phone that could each run multiple OSs, and the results confirmed that the OS can significantly impact the energy consumption of a device. In particular, recent versions of the Microsoft Windows OS were tested, highlighting significant differences between OS versions on the same hardware. The developed methodology could enable a greater awareness of energy consumption during both the software development and software marketing processes.
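The core computation behind such a methodology is simple arithmetic: per-state mean power times time spent in each state gives energy, and an emission factor converts energy to GHG emissions. A hedged sketch with invented numbers:

```python
# Per-state mean power draw (watts), as a direct power meter might report
# them for one OS on one device. All numbers here are made up for illustration.
power_w = {"idle": 8.0, "video_playback": 14.5, "file_copy": 12.0, "sleep": 1.2}

# Assumed daily usage profile (hours per state) and grid emission factor.
hours_per_day = {"idle": 4, "video_playback": 2, "file_copy": 0.5, "sleep": 17.5}
KG_CO2E_PER_KWH = 0.4  # hypothetical grid average

daily_kwh = sum(power_w[s] * hours_per_day[s] for s in power_w) / 1000
annual_kwh = daily_kwh * 365
annual_co2e = annual_kwh * KG_CO2E_PER_KWH

print(f"{daily_kwh:.3f} kWh/day, {annual_kwh:.1f} kWh/yr, "
      f"{annual_co2e:.1f} kg CO2e/yr")
# -> 0.088 kWh/day, 32.1 kWh/yr, 12.8 kg CO2e/yr
```

Comparing two OSs on the same hardware then reduces to running the same profile against two power tables.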

Relevance:

100.00%

Publisher:

Abstract:

The TCABR data analysis and acquisition system has been upgraded to support a joint research programme using remote participation technologies. The architecture of the new system uses the Java language as its programming environment. Since application parameters and hardware in a joint experiment are complex, with a large variability of components and requirements, the solutions need to be flexible and modular, independent of operating system and computer architecture. To describe and organize the information on all the components and the connections among them, the systems are developed using Extensible Markup Language (XML) technology. The communication between clients and servers uses remote procedure calls (RPC) based on XML (XML-RPC). The integration of Java, XML, and XML-RPC makes it easy to develop a standard data and communication access layer between users and laboratories using common software libraries and a Web application. The libraries allow data retrieval using the same methods for all user laboratories in the joint collaboration, and the Web application provides a simple graphical user interface (GUI). The TCABR tokamak team, in collaboration with the IPFN (Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico, Universidade Técnica de Lisboa), is implementing these remote participation technologies. The first version was tested at the Joint Experiment on TCABR (TCABRJE), a Host Laboratory Experiment organized in cooperation with the IAEA (International Atomic Energy Agency) in the framework of the IAEA Coordinated Research Project (CRP) on "Joint Research Using Small Tokamaks".
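Python's standard library happens to include XML-RPC support, which makes it easy to sketch the shape of such a data access layer: every laboratory retrieves data through the same remote call, independent of OS and architecture. The method name get_signal and the returned structure are hypothetical, not the paper's actual API (which is Java-based).

```python
import threading
import xmlrpc.client
from xmlrpc.server import SimpleXMLRPCServer

# Server side: a laboratory exposes a data-retrieval method over XML-RPC.
def get_signal(shot: int, name: str):
    """Return a (hypothetical) diagnostic signal for a given shot number."""
    demo = {"plasma_current": [0.0, 0.1, 0.4, 0.2]}
    return {"shot": shot, "name": name, "samples": demo.get(name, [])}

server = SimpleXMLRPCServer(("localhost", 8000), logRequests=False)
server.register_function(get_signal, "get_signal")
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: any laboratory in the collaboration uses the same call;
# the XML encoding and transport are handled by the RPC layer.
proxy = xmlrpc.client.ServerProxy("http://localhost:8000")
print(proxy.get_signal(27000, "plasma_current"))
server.shutdown()
```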

Relevance:

100.00%

Publisher:

Abstract:

Software-based Distributed Shared Memory (DSM) systems have been the focus of considerable research effort, primarily on improving performance and consistency protocols. Unfortunately, computer clusters present a number of challenges for any DSM system that are not solvable through consistency protocols alone. These challenges concern the ability of DSM systems to adjust to load fluctuations, to cope with computers being added to or removed from the cluster, to deal with faults, and to use DSM objects larger than the available physical memory. We present here a proposal for the Synergy Distributed Shared Memory System and its integration with the virtual memory, group communication, and process migration services of the Genesis Cluster Operating System.
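To illustrate the kind of coherence bookkeeping a DSM service performs, here is a toy write-invalidate protocol for a single DSM object, simulated in one process. Synergy's actual protocol and its integration with virtual memory are far richer; this sketch only shows the core idea, and all names are invented.

```python
class DSMObject:
    """One shared object replicated across nodes, kept coherent by
    invalidating cached copies before a write is granted."""

    def __init__(self, value):
        self.owner = None            # node holding the writable copy
        self.copies = {}             # node -> cached value
        self.value = value           # "home" copy

    def read(self, node):
        if node not in self.copies:  # read miss: fetch a shared copy
            self.copies[node] = self.value
        return self.copies[node]

    def write(self, node, value):
        # Invalidate all other cached copies before granting write access.
        for other in list(self.copies):
            if other != node:
                del self.copies[other]
        self.owner = node
        self.value = value
        self.copies[node] = value

obj = DSMObject(0)
print(obj.read("nodeA"))   # 0; nodeA now caches a copy
print(obj.read("nodeB"))   # 0; nodeB caches one too
obj.write("nodeA", 42)     # nodeB's copy is invalidated
print(obj.read("nodeB"))   # 42; re-fetched after invalidation
```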

Relevance:

100.00%

Publisher:

Abstract:

Considerable research and development has been invested in software Distributed Shared Memory (DSM). The primary focus of this work has traditionally been on high performance and consistency protocols. Unfortunately, clusters present a number of challenges for any DSM system that are not solvable through consistency protocols alone. These challenges concern the ability of DSM systems to adjust to load fluctuations, to cope with computers being added to or removed from the cluster, to deal with faults, and to use DSM objects larger than the available physical memory. This paper introduces the Synergy DSM System and its integration with the virtual memory, group communication, and process migration services of the Genesis Cluster Operating System.

Relevance:

100.00%

Publisher:

Abstract:

The human body has been used to illustrate an Autonomic Computing system: one that possesses self-knowledge, self-configuration, self-optimization, self-healing, and self-protection, together with knowledge of its environment and user-friendliness. Autonomic Computing was identified by IBM as one of the Grand Challenges. Many researchers and research groups have responded positively to the challenge by initiating research around one or two of the characteristics identified by IBM as the requirements for Autonomic Computing. One of the areas that could benefit from the comprehensive approach of the Autonomic Computing vision is parallel processing on non-dedicated clusters. This paper shows a general design of services and an initial implementation of a system that moves parallel processing on clusters into the computing mainstream using the Autonomic Computing vision.
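A minimal sketch of the autonomic monitor-analyse-act loop such a system might run over cluster nodes, combining self-healing (restarting failed nodes) with self-optimization (rebalancing load). The node model, thresholds, and actions are entirely invented and are not the paper's design.

```python
import random

# Toy cluster state: each workstation has a load level and a liveness flag.
random.seed(1)
nodes = {f"node{i}": {"load": random.uniform(0, 1), "alive": True}
         for i in range(4)}
nodes["node2"]["alive"] = False  # inject a fault

def autonomic_cycle(nodes, high=0.8, low=0.3):
    """One pass of a monitor-analyse-act loop over the cluster."""
    for name, n in nodes.items():
        if not n["alive"]:                       # self-healing
            print(f"restarting {name}")
            n.update(alive=True, load=0.0)
    busy = [n for n in nodes.values() if n["load"] > high]
    idle = [n for n in nodes.values() if n["load"] < low]
    for src, dst in zip(busy, idle):             # self-optimization
        moved = (src["load"] - dst["load"]) / 2
        src["load"] -= moved
        dst["load"] += moved
        print(f"migrated {moved:.2f} load units")

autonomic_cycle(nodes)
print({k: round(v["load"], 2) for k, v in nodes.items()})
```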

Relevance:

100.00%

Publisher:

Abstract:

Dynamic deployment of Web services is a term frequently used to describe the selection and deployment of a service to a grid host. Although current grid systems (such as Globus) provide dynamic deployment, the requirements of the service being deployed are not considered; truly dynamic deployment therefore cannot be achieved, as the services deployed are restricted to the grid system used. We present a dynamic deployment mechanism as part of self-configuration in a service-oriented grid environment. The mechanism takes the requirements of the service into consideration, including parameters such as the operating system required to execute the service, the required software libraries, any additional required software packages, price, and Quality of Service (QoS) parameters.
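The requirement-aware selection can be sketched as a simple capability check followed by ranking. The field names and the price-based ranking below are illustrative, not the paper's model.

```python
# A service's deployment requirements, covering the parameters listed above:
# OS, libraries, extra packages, price ceiling, and a QoS constraint.
service_req = {
    "os": "linux",
    "libraries": {"libssl", "libxml2"},
    "packages": {"jre"},
    "max_price": 0.10,   # per hour, hypothetical
    "min_uptime": 0.99,  # QoS, hypothetical
}

hosts = [
    {"name": "gridA", "os": "linux", "libraries": {"libssl", "libxml2", "zlib"},
     "packages": {"jre"}, "price": 0.08, "uptime": 0.995},
    {"name": "gridB", "os": "windows", "libraries": {"libssl"},
     "packages": set(), "price": 0.05, "uptime": 0.97},
]

def can_host(host, req):
    """A host qualifies only if every hard requirement is satisfied."""
    return (host["os"] == req["os"]
            and req["libraries"] <= host["libraries"]   # subset check
            and req["packages"] <= host["packages"]
            and host["price"] <= req["max_price"]
            and host["uptime"] >= req["min_uptime"])

candidates = [h for h in hosts if can_host(h, service_req)]
best = min(candidates, key=lambda h: h["price"])  # cheapest qualifying host
print("deploy to", best["name"])                  # -> gridA
```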

Relevance:

100.00%

Publisher:

Abstract:

The future of computing lies with distributed systems, i.e. networks of workstations controlled by modern distributed operating systems. By supporting load balancing and parallel execution, the overall performance of a distributed system can be improved dramatically. Process migration, the act of moving a running process from a highly loaded machine to a lightly loaded machine, can be used to support load balancing, parallel execution, reliability, and so on. This thesis identifies the problems past process migration facilities have had and determines the differing strategies that can be used to resolve them. This analysis has led to a new design philosophy: the design of a process migration facility and the design of an operating system should be conducted in parallel.

Modern distributed operating systems follow the microkernel and client/server paradigms. Applying these design paradigms, in conjunction with the requirements of both process migration and a distributed operating system, results in a system where each resource is controlled by a separate server process. However, a process is a complex resource composed of simple resources such as data structures, an address space, and communication state. For this reason, a process migration facility does not migrate the resources of a process directly; instead, it requests the appropriate servers to transfer them. This novel solution yields a modular, high-performance facility that is easy to create, debug, and maintain, and the design easily accommodates multiple migration strategies.

In order to verify the validity of this design, a process migration facility was developed and tested within RHODOS (ResearcH Oriented Distributed Operating System). RHODOS is a modern microkernel and client/server based distributed operating system. In RHODOS, a process is composed of at least three separate resources: process state, maintained by a process manager; address space, maintained by a memory manager; and communication state, maintained by an InterProcess Communication Manager (IPCM). The RHODOS multiple-strategy migration manager utilises the services of the process, memory, and IPC managers to migrate the resources of a process. Performance testing of this facility indicates that the design is as fast as or faster than existing systems running on faster hardware. Furthermore, by studying the results of the performance testing, the conditions under which a particular strategy should be employed have been identified.

This thesis also addresses heterogeneous process migration. The current trend is to have islands of homogeneous workstations amid a sea of heterogeneity. From this situation, and from the current literature on the topic, heterogeneous process migration can be seen as too inefficient for general use; instead, only homogeneous workstations should be used for process migration. This implies a need to locate homogeneous workstations. Entities called traders, which store and disseminate knowledge about the resources of several workstations, should be used to provide resource discovery. Resource discovery enables the detection of homogeneous workstations to which processes can be migrated.
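The central design point, that a migration manager orchestrates resource servers rather than moving a process itself, can be sketched as follows. The classes echo the RHODOS managers named above but are toy stand-ins, not RHODOS code.

```python
class ResourceServer:
    """A server owning one kind of process resource (state, memory, IPC)."""

    def __init__(self, kind):
        self.kind = kind
        self.table = {}  # pid -> resource

    def export(self, pid):
        return self.table.pop(pid)

    def install(self, pid, resource):
        self.table[pid] = resource

class Node:
    def __init__(self, name):
        self.name = name
        # Mirrors the RHODOS split: process, memory, and IPC managers.
        self.servers = {k: ResourceServer(k)
                        for k in ("process", "memory", "ipc")}

def migrate(pid, src, dst):
    """The migration manager never touches resources directly: it asks each
    server on the source to export its share of the process and the peer
    server on the destination to install it."""
    for kind, server in src.servers.items():
        dst.servers[kind].install(pid, server.export(pid))
    print(f"pid {pid} migrated: {src.name} -> {dst.name}")

a, b = Node("loaded"), Node("idle")
a.servers["process"].install(7, {"pc": 0x4000, "regs": [0] * 8})
a.servers["memory"].install(7, bytearray(4096))   # toy address space
a.servers["ipc"].install(7, ["msg-queue-3"])      # toy communication state
migrate(7, a, b)
print(sorted(b.servers["process"].table))          # pid 7 now lives on b
```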

Relevance:

100.00%

Publisher:

Abstract:

Current attempts to manage parallel applications on Clusters of Workstations (COWs) have generally either followed the parallel execution environment approach or been extensions to existing network operating systems; neither provides a complete or satisfactory solution. The efficient and transparent management of parallelism within the COW environment requires enhanced methods of process instantiation, mapping of parallel processes to workstations, maintenance of process relationships, process communication facilities, and process coordination mechanisms. The aim of this research is to synthesise, design, develop, and experimentally study a system capable of efficiently and transparently managing SPMD parallelism on a COW. Such a system should both improve the performance of SPMD-based parallel programs and relieve programmers from involvement in parallelism management, allowing them to concentrate on application programming. It is also the aim of this research to show that these objectives are best achieved by adding new special services to, and exploiting the existing services of, a client/server and microkernel based distributed operating system. To achieve these goals, the research methods of experimental computer science were employed.

In order to specify the scope of this project, this work investigated the issues related to parallel processing on COWs and surveyed a number of relevant systems, including PVM, NOW, and MOSIX. It was shown that although systems such as MOSIX provide a number of good services related to parallelism management, none of the surveyed systems forms a complete solution. The problems identified with these systems include: instantiation services that are not suited to parallel processing; duplication of services between the parallelism management environment and the operating system; and poor levels of transparency.

A high-performance and transparent system capable of managing the execution of SPMD parallel applications was synthesised, and the specific services of process instantiation, process mapping, and process interaction were detailed. The process instantiation service designed here provides the capability to instantiate parallel processes using either creation or duplication methods, and also supports multiple and group-based instantiation, which is specifically designed for SPMD parallel processing. The process mapping service combines process allocation and dynamic load balancing to ensure that the load of a COW remains balanced, not only when a parallel program is initialised but also during its execution. The process interaction service transparently maintains process relationships, communication, and coordination between parallel processes regardless of their location within the COW. The combination of these services constitutes an original architecture and organisation of a system capable of fully managing the execution of SPMD parallel applications on a COW. A logical design of a parallelism management system was then derived from the synthesised system, and it was shown that it should ideally be based on a distributed operating system employing the client/server model, which provides the level of transparency, modularity, and flexibility necessary for a complete parallelism management system.

The services identified in the synthesised system were mapped to a set of server processes: a Process Instantiation Server providing advanced multiple and group-based process creation and duplication; a Process Mapping Server combining load collection, process allocation, and dynamic load balancing services; and a Process Interaction Server providing transparent interprocess communication and coordination. A Process Migration Server was also identified as vital to support both the instantiation and mapping servers. The RHODOS client/server and microkernel based distributed operating system was selected for the detailed design and implementation of this parallelism management system. RHODOS was enhanced to provide the required servers, resulting in the development of the REX Manager, Global Scheduler, and Process Migration Manager to provide the services of process instantiation, mapping, and migration, respectively. The process interaction services were already provided within RHODOS and required only some extensions to the existing Process Manager and IPC Manager.

Through a variety of experiments it was shown that when this system was used to support the execution of SPMD parallel applications, overall execution times were improved, especially when the multiple and group-based instantiation services were employed. The RHODOS PMS was also shown to greatly reduce the programming burden experienced by users when writing SPMD parallel applications, by providing a small set of powerful primitives specially designed to support parallel processing. The system was also shown to be applicable to, and has been used in, a variety of other research areas such as Distributed Shared Memory, parallelising compilers, and assisting the port of PVM to the RHODOS system. The RHODOS Parallelism Management System (PMS) provides a unique and creative solution to the problem of transparently and efficiently controlling the execution of SPMD parallel applications on COWs. Combining advanced services such as multiple and group-based process creation and duplication, combined process allocation and dynamic load balancing, and complete COW-wide transparency produces a totally new system that addresses many of the problems not addressed by other systems.
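As one small illustration of group-based instantiation combined with load-balanced mapping, the sketch below places a group of N identical SPMD processes on the currently least-loaded workstations in a single scheduling decision. The scheduler logic and load model are invented, not the RHODOS Global Scheduler's actual policy.

```python
import heapq

def map_spmd_group(program, n, loads, cost=1.0):
    """Place n instances of `program`, each on the currently lightest node.
    Group-based placement avoids n separate instantiation requests and keeps
    the cluster balanced as each new process adds load."""
    heap = [(load, node) for node, load in loads.items()]
    heapq.heapify(heap)
    placement = []
    for rank in range(n):
        load, node = heapq.heappop(heap)
        placement.append((f"{program}[{rank}]", node))
        heapq.heappush(heap, (load + cost, node))  # account for the new process
    return placement

loads = {"ws1": 0.2, "ws2": 1.5, "ws3": 0.4}       # hypothetical current loads
for proc, node in map_spmd_group("solver", 5, loads):
    print(proc, "->", node)
# solver[0] -> ws1, solver[1] -> ws3, solver[2] -> ws1, ...
```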

Relevance:

100.00%

Publisher:

Abstract:

Social networking platforms such as Facebook are becoming pervasive in education, along with mobile applications (apps) and mobile devices. Students are using these technologies and apps to organise their learning material, and social media accessed via apps is the most popular activity among college students. In this paper we discuss how teachers could take advantage of the Facebook social media platform to promote a community-based learning environment that is flexible, portable, and challenging. We describe how this could be achieved with no restriction to any particular mobile device brand or operating system, and how students would simply bring their own device (BYOD).

Relevance:

100.00%

Publisher:

Abstract:

The evolution of mobile technologies and their integration into education across the world has shifted learning to a new environment delivered via various mobile platforms. Educational institutions globally are failing to identify specific mobile technology initiatives and strategies with which to evaluate these technologies and to expose both students and teachers to the potential they engender. This panel will undertake a cross-country comparison among culturally diverse countries: Turkey, UAE, USA, Lebanon, Iceland, Israel, Japan, and Germany. Questions will be raised, such as: Why are some countries branding mobile learning, making their integration of these technologies device specific, app specific, and operating system specific? Is this the right approach? The digital gap between countries will be discussed, and availability, access, barriers, and limitations in some countries are described. We will try to identify the similarities, differences, and challenges among these countries.

Relevance:

100.00%

Publisher:

Abstract:

Mobile computing is taking educational institutions into a new era of instruction. Educational institutions globally are opting for new mobile devices to integrate, and it seems that the vast majority are integrating the iPad without even looking at other options; in doing so, they are unintentionally branding mobile learning. We believe that mobile learning should not be branded, restricted, or made device specific, operating system specific, or brand specific. This paper is based on a global panel discussion entitled "Is this an iPad Revolution or a Mobile Learning Revolution?". It also presents an argument as to why the iPad is dominating in education, with a focus on the current iPad initiatives in the UAE; a few possible reasons why educational institutions are opting for the iPad are discussed, along with recommendations on what educational institutions should know before deciding on this integration.

Relevance:

100.00%

Publisher:

Abstract:

In this paper, the hardware and software design for using a TF card in debugging an embedded system is described. The hardware platform is based on a PXA310 application processor, and the open-source Android operating system is used as the software platform. The design of the connection circuit between the application processor and the TF card is introduced first. Secondly, the design of the TF card driver program and the method by which the Android system mounts the TF card are described. The driver program uses the SPI operation mode and the FAT32 file system, and the porting of the FAT32 file system is presented in more detail. Finally, the system debugging is described and test results are given for the TF card used in the video data acquisition unit of a video monitoring system. It is shown that high-speed data exchange and good universality can be obtained by using a TF card to download the system image during development and debugging. The TF card used in debugging can then serve as mass storage in the embedded product without changing the design used for debugging, and it also makes it convenient for users to upgrade the operating system.
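In SPI mode, an SD/TF card receives 6-byte command frames protected by a CRC-7 checksum. The sketch below (in Python for illustration; the actual driver would be C in the kernel) builds such frames; CMD0's well-known frame 40 00 00 00 00 95 serves as a sanity check.

```python
def crc7(data: bytes) -> int:
    """CRC-7 (polynomial x^7 + x^3 + 1) as used by SD/MMC command frames.
    Returns the CRC already shifted into bits 7..1 of the final frame byte."""
    t = 0
    for byte in data:
        t ^= byte
        for _ in range(8):
            if t & 0x80:
                t ^= 0x89
            t = (t << 1) & 0xFF
    return t

def sd_command(index: int, arg: int) -> bytes:
    """Build a 6-byte SD command frame for SPI mode: start bits plus command
    index, 32-bit big-endian argument, then CRC-7 plus the end bit."""
    frame = bytes([0x40 | index]) + arg.to_bytes(4, "big")
    return frame + bytes([crc7(frame) | 0x01])

# CMD0 (GO_IDLE_STATE) resets the card into SPI mode; CMD17
# (READ_SINGLE_BLOCK) reads one data block.
print(sd_command(0, 0).hex())    # -> 400000000095
print(sd_command(17, 0x200).hex())  # read the block at address 0x200
```

On top of block reads like CMD17, the FAT32 layer then only needs to interpret the boot sector, FAT, and directory structures, which is what makes the porting described above tractable.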

Relevance:

100.00%

Publisher:

Abstract:

To simplify computer management, various administration systems based on wired connections adopt advanced techniques to manage software configuration. Nevertheless, the strong coupling between hardware and software makes such management machine-specific, besides penalizing computational mobility and ubiquity. These issues degrade scalability, flexibility, and the ease of installing and maintaining distributed applications. This article presents an environment for centralized wireless network management named WSE-OS (Wireless Sharing Environment - Operating Systems): a model based on Virtual Desktop Infrastructure (VDI) that combines virtualization techniques and secure remote access systems to create a distributed architecture serving as the basis for a management system. WSE-OS can replicate operating system images over a wireless network and offers hardware abstraction to its clients, making management more flexible and independent of wired connections. Results obtained in this work indicate that WSE-OS can disseminate, through a single software configuration, the execution of operating system images on client computers. WSE-OS can also be used as a management tool for operating systems in a wireless network.
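One way to picture hash-verified image replication over an unreliable wireless link is the sketch below: a server publishes a manifest of per-chunk hashes, and a client re-requests any chunk that fails verification. The chunk size, manifest format, and retry loop are invented for illustration and are not WSE-OS's actual protocol.

```python
import hashlib

CHUNK = 4096  # bytes per transfer unit; made-up size

def make_manifest(image: bytes):
    """Server side: split an OS image into chunks and hash each one, so
    clients can verify every piece received over the wireless link."""
    chunks = [image[i:i + CHUNK] for i in range(0, len(image), CHUNK)]
    return [hashlib.sha256(c).hexdigest() for c in chunks], chunks

def replicate(manifest, fetch):
    """Client side: fetch chunks, re-requesting any whose hash mismatches."""
    image = bytearray()
    for idx, digest in enumerate(manifest):
        while True:
            chunk = fetch(idx)
            if hashlib.sha256(chunk).hexdigest() == digest:
                image.extend(chunk)
                break  # verified; move on to the next chunk
    return bytes(image)

image = bytes(range(256)) * 100           # stand-in for an OS image
manifest, chunks = make_manifest(image)
corrupted_once = {1}                      # simulate one bad wireless transfer

def fetch(idx):
    if idx in corrupted_once:
        corrupted_once.discard(idx)
        return b"\x00" * len(chunks[idx])  # garbled packet, first attempt only
    return chunks[idx]

assert replicate(manifest, fetch) == image
print("image replicated and verified:", len(image), "bytes")
```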