78 results for Distributed operating systems (Computers)
in CentAUR: Central Archive at the University of Reading - UK
Abstract:
Semiotics is the study of signs. The application of semiotics in information systems design is based on the notion that information systems are organizations within which agents deploy signs in the form of actions according to a set of norms. An analysis of the relationships among the agents, their actions and the norms would give a better specification of the system. A distributed multimedia system (DMMS) can be viewed as a system consisting of many dynamic, self-controlled normative agents engaging in complex interaction and processing of multimedia information. This paper reports the work of applying the semiotic approach to the design and modeling of DMMS, with emphasis on using semantic analysis under the semiotic framework. A semantic model of DMMS describing various components and their ontological dependencies is presented, which then serves as a design model and is implemented in a semantic database. Benefits of using the semantic database are discussed with reference to various design scenarios.
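As a rough illustration of the kind of model the abstract describes, the sketch below encodes components with ontological dependencies in a toy semantic store. All names (Affordance, SemanticDB) and the cascading-removal rule are assumptions for illustration, not the paper's design.

```python
# Hypothetical sketch of components with ontological dependencies in a
# toy semantic database; names and behaviour are illustrative only.

from dataclasses import dataclass, field

@dataclass
class Affordance:
    """A component (agent or action) that depends on its antecedents."""
    name: str
    antecedents: list = field(default_factory=list)  # ontological dependencies

class SemanticDB:
    """Toy semantic database: stores affordances and enforces dependencies."""
    def __init__(self):
        self._store = {}

    def add(self, affordance: Affordance):
        # An affordance may only exist while its antecedents exist.
        for dep in affordance.antecedents:
            if dep not in self._store:
                raise ValueError(f"missing antecedent: {dep}")
        self._store[affordance.name] = affordance

    def remove(self, name: str):
        # Removing an antecedent removes every dependent affordance too.
        dependents = [a.name for a in self._store.values() if name in a.antecedents]
        for d in dependents:
            self.remove(d)
        self._store.pop(name, None)

db = SemanticDB()
db.add(Affordance("user"))
db.add(Affordance("session", antecedents=["user"]))
db.add(Affordance("video_stream", antecedents=["session"]))
db.remove("user")          # cascades: session and video_stream disappear
assert not db._store
```

The cascade on removal mirrors the semantic-analysis convention that an affordance cannot outlive its ontological antecedents.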
Abstract:
A new methodology was created to measure the energy consumption and related greenhouse gas (GHG) emissions of a computer operating system (OS) across different device platforms. The methodology involved the direct power measurement of devices under different activity states. In order to cover all aspects of an OS, the methodology included measurements in various OS modes while, uniquely, also incorporating measurements taken when running an array of defined software activities, so as to include OS application-management features. The methodology was demonstrated on a laptop and a phone that could each run multiple OSs; the results confirmed that the OS can significantly affect the energy consumption of devices. In particular, new versions of the Microsoft Windows OS were tested, highlighting significant differences between OS versions on the same hardware. The developed methodology could enable a greater awareness of energy consumption during both the software development and software marketing processes.
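A minimal sketch of the kind of calculation such a methodology implies: integrate sampled device power over an activity state, then convert the energy to CO2-equivalent emissions. The emission factor below is an illustrative placeholder, not a figure from the paper.

```python
# Convert sampled device power (watts) for one OS activity state into
# energy (kWh) and GHG emissions (kg CO2e). The grid factor is assumed.

GRID_FACTOR_KG_PER_KWH = 0.5  # assumed grid carbon intensity (kg CO2e/kWh)

def energy_kwh(power_samples_w, interval_s):
    """Mean power times duration, for evenly spaced samples."""
    duration_h = len(power_samples_w) * interval_s / 3600.0
    mean_power_kw = sum(power_samples_w) / len(power_samples_w) / 1000.0
    return mean_power_kw * duration_h

def ghg_kg(power_samples_w, interval_s, factor=GRID_FACTOR_KG_PER_KWH):
    return energy_kwh(power_samples_w, interval_s) * factor

# e.g. one hour of 1-second samples in an "idle" OS state at ~12 W:
idle = [12.0] * 3600
print(f"{energy_kwh(idle, 1.0):.3f} kWh, {ghg_kg(idle, 1.0):.3f} kg CO2e")
```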
Abstract:
User interaction within a virtual environment may take various forms: a teleconferencing application will require users to speak to each other (Geak, 1993); with computer-supported co-operative working, an engineer may wish to pass an object to another user for examination; in a battlefield simulation (McDonough, 1992), users might exchange fire. In all cases it is necessary for the actions of one user to be presented to the others sufficiently quickly to allow realistic interaction. In this paper we take a fresh look at the approach of virtual reality operating systems by tackling the underlying issues of creating real-time multi-user environments.
Abstract:
Importance measures in reliability engineering are used to identify weak areas of a system and to signify the roles of components in either causing or contributing to the proper functioning of the system. Traditional importance measures for multistate systems mainly concern the reliability importance of an individual component and seldom consider the utility performance of the system. This paper extends the joint importance concepts for two components from the binary-system case to the multistate-system case. A joint structural importance and a joint reliability importance are defined on the basis of the performance utility of the system. The joint structural importance measures the relationship between two components when the reliabilities of the components are not available; the joint reliability importance applies when the reliabilities of the components are given. The properties of the importance measures are also investigated, and a case study of an offshore electrical power generation system is given.
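For reference, the binary case that the abstract generalises from is conventionally defined as the mixed partial derivative of the system reliability function; the paper's multistate extension (not reproduced here) replaces system reliability with expected performance utility.

```latex
% h(\mathbf{p}) is system reliability as a function of component
% reliabilities; (1_i,\mathbf{p}) pins component i to the working state.
\[
  \mathrm{JRI}(i,j)
    = \frac{\partial^{2} h(\mathbf{p})}{\partial p_i\,\partial p_j}
    = h(1_i,1_j,\mathbf{p}) - h(1_i,0_j,\mathbf{p})
      - h(0_i,1_j,\mathbf{p}) + h(0_i,0_j,\mathbf{p}).
\]
% JRI > 0: one component working makes the other more important
% (they reinforce); JRI < 0: they substitute for each other.
```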
Abstract:
Traditionally, applications and tools supporting collaborative computing have been designed only with personal computers in mind and support a limited range of computing and network platforms. These applications are therefore not well equipped to deal with network heterogeneity and, in particular, do not cope well with dynamic network topologies. Progress in this area must be made if we are to fulfil the needs of users and support the diversity, mobility, and portability that are likely to characterise group work in the future. This paper describes a groupware platform called Coco that is designed to support collaboration in a heterogeneous network environment. The work demonstrates that progress in the development of generic supporting groupware is achievable, even in the context of heterogeneous and dynamic networks. In particular, it demonstrates the progress made in the development of an underlying communications infrastructure, building on peer-to-peer concepts and topologies to improve scalability and robustness.
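As an illustration only (the protocol and names below are assumptions, not Coco's design), a peer-to-peer membership layer of the kind the abstract alludes to can tolerate dynamic topologies by having peers gossip their peer lists:

```python
# Toy gossip-based membership: each peer exchanges its view of the
# overlay with a random partner, so the network tolerates churn.

import random

class Peer:
    def __init__(self, peer_id):
        self.peer_id = peer_id
        self.known = set()          # ids of peers this node can reach

    def gossip_with(self, other):
        # Exchange views; both sides learn each other's peers.
        merged = self.known | other.known | {self.peer_id, other.peer_id}
        self.known = merged - {self.peer_id}
        other.known = merged - {other.peer_id}

peers = [Peer(i) for i in range(8)]
for _ in range(20):                 # a few random gossip rounds
    a, b = random.sample(peers, 2)
    a.gossip_with(b)
print(sorted(peers[0].known))       # converges towards full membership
```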
Abstract:
Multiple cooperating robot systems may be required to take up a closely coupled configuration in order to perform a task. An example is extended baseline stereo (EBS), which requires two robots to establish and maintain a constrained kinematic relationship to each other for a certain period of time. In this paper we report on the development of a networked robotics framework for modular, distributed robot systems that supports the creation of such configurations. The framework incorporates a query mechanism to locate modules distributed across the two robot systems. The work presented in this paper introduces special mechanisms to model the kinematic constraint and its instantiation. The EBS configuration is used as a case study and experimental implementation to demonstrate the approach.
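A hedged sketch of a module-query mechanism of the kind the abstract describes (all names are illustrative, not the framework's API): modules register their capabilities, and a constraint object binds modules on two robots into an EBS pair.

```python
# Illustrative module registry plus a kinematic-constraint record.

from dataclasses import dataclass

@dataclass
class Module:
    robot: str
    name: str
    capabilities: frozenset

class Registry:
    """Distributed in the paper; a single dict suffices for the sketch."""
    def __init__(self):
        self.modules = []

    def register(self, module):
        self.modules.append(module)

    def query(self, capability):
        return [m for m in self.modules if capability in m.capabilities]

@dataclass
class KinematicConstraint:
    """E.g. hold the stereo baseline between two cameras at a fixed length."""
    a: Module
    b: Module
    baseline_m: float

reg = Registry()
reg.register(Module("robot1", "cam_left", frozenset({"camera"})))
reg.register(Module("robot2", "cam_right", frozenset({"camera"})))
left, right = reg.query("camera")
ebs = KinematicConstraint(left, right, baseline_m=0.75)
print(ebs)
```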
Abstract:
The problem of state estimation occurs in many applications of fluid flow. For example, to produce a reliable weather forecast it is essential to find the best possible estimate of the true state of the atmosphere. To find this best estimate a nonlinear least squares problem has to be solved subject to dynamical system constraints. Usually this is solved iteratively by an approximate Gauss–Newton method where the underlying discrete linear system is in general unstable. In this paper we propose a new method for deriving low order approximations to the problem based on a recently developed model reduction method for unstable systems. To illustrate the theoretical results, numerical experiments are performed using a two-dimensional Eady model – a simple model of baroclinic instability, which is the dominant mechanism for the growth of storms at mid-latitudes. It is a suitable test model to show the benefit that may be obtained by using model reduction techniques to approximate unstable systems within the state estimation problem.
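The abstract does not state the cost function, but the standard strong-constraint formulation it refers to has this form, with the Gauss-Newton method solving a sequence of linearised subproblems:

```latex
% x_0: initial state; M_k: (unstable) model propagator; H_k: observation
% operator; x^b: background state; B, R_k: error covariances; y_k: observations.
\[
  \min_{x_0}\; J(x_0) = \tfrac{1}{2}\,(x_0 - x^b)^{\mathsf T} B^{-1} (x_0 - x^b)
    + \tfrac{1}{2} \sum_{k=0}^{N} \big(y_k - \mathcal{H}_k(x_k)\big)^{\mathsf T}
      R_k^{-1} \big(y_k - \mathcal{H}_k(x_k)\big),
  \qquad x_{k+1} = \mathcal{M}_k(x_k).
\]
% Gauss-Newton replaces M_k and H_k with their Jacobians at the current
% iterate and minimises the resulting quadratic; model reduction then
% approximates these (generally unstable) linear systems by low-order ones.
```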
Abstract:
Mobile devices can enhance undergraduate research projects and students' research capabilities. The use of mobile devices such as tablet computers will not automatically make undergraduates better researchers, but their use should make investigations, writing, and publishing more effective and may even save students time. We have explored some of the possibilities of using "tablets" and "smartphones" to aid the research and inquiry process in geography and bioscience fieldwork. We provide two case studies as illustrations of how students working in small research groups use mobile devices to gather and analyze primary data in field-based inquiry. Since April 2010, Apple's iPad has changed the way people behave in the digital world and how they access their music, watch videos, or read their email, much as the entrepreneurs Steve Jobs and Jonathan Ive intended. Now, with "apps" and "the cloud" and the ubiquitous references to them appearing in the press and on TV, academics' use of tablets is also having an impact on education and research. In our discussion we will refer to the use of smartphones such as the iPhone, iPod, and Android devices under the term "tablet". Android and Microsoft devices may not offer the same facilities as the iPad/iPhone, but many app producers now provide versions for several operating systems. Smartphones are becoming more affordable and ubiquitous (Melhuish and Falloon 2010), but a recent study of undergraduate students (Woodcock et al. 2012, 1) found that many students who own smartphones are "largely unaware of their potential to support learning". Importantly, however, students were found to be "interested in and open to the potential as they become familiar with the possibilities" (Woodcock et al. 2012). Smartphones and iPads could be better utilized than laptops when conducting research in the field because of their portability (Welsh and France 2012). It is imperative for faculty to provide their students with opportunities to discover and employ the potential uses of mobile devices in their learning. However, it is not only the convenience of the iPad, tablet devices, or smartphones that we wish to promote, but also a way of thinking and behaving digitally. We essentially suggest that making a tablet the center of research increases the connections between related research activities.
Abstract:
Among the most influential and popular data mining methods is the k-Means algorithm for cluster analysis. Techniques for improving the efficiency of k-Means have been explored largely in two main directions. First, the amount of computation can be significantly reduced by adopting geometrical constraints and an efficient data structure, notably a multidimensional binary search tree (KD-Tree); these techniques reduce the number of distance computations the algorithm performs at each iteration. The second direction is parallel processing, where data and computation loads are distributed over many processing nodes. However, little work has been done to provide a parallel formulation of the efficient sequential techniques based on KD-Trees. Such approaches are expected to have an irregular distribution of computation load and can suffer from load imbalance, an issue that has so far limited the adoption of these efficient k-Means variants in parallel computing environments. In this work, we provide a parallel formulation of the KD-Tree based k-Means algorithm for distributed memory systems and address its load-balancing issue. Three solutions have been developed and tested: two approaches are based on a static partitioning of the data set, and a third incorporates a dynamic load-balancing policy.
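A compact sketch of the pruning idea behind KD-Tree based k-Means, in the spirit of filtering algorithms rather than the paper's actual code: if one centroid's farthest possible distance to a node's bounding box is smaller than every other centroid's nearest possible distance, the whole node is assigned in bulk without visiting its points.

```python
# KD-tree-style pruning for the k-Means assignment step (illustrative).

import numpy as np

def min_dist2(c, lo, hi):
    """Squared distance from centroid c to the nearest point of box [lo, hi]."""
    d = np.maximum(lo - c, 0) + np.maximum(c - hi, 0)
    return float(d @ d)

def max_dist2(c, lo, hi):
    """Squared distance from c to the farthest corner of box [lo, hi]."""
    d = np.maximum(hi - c, c - lo)
    return float(d @ d)

def assign(points, centroids, counts, sums, leaf=16):
    lo, hi = points.min(axis=0), points.max(axis=0)
    far = np.array([max_dist2(c, lo, hi) for c in centroids])
    near = np.array([min_dist2(c, lo, hi) for c in centroids])
    best = int(far.argmin())
    if (far[best] < np.delete(near, best)).all():
        counts[best] += len(points)            # prune: bulk-assign the node
        sums[best] += points.sum(axis=0)
        return
    axis = int(np.argmax(hi - lo))             # split along widest dimension
    mask = points[:, axis] <= np.median(points[:, axis])
    if len(points) <= leaf or mask.all() or not mask.any():
        d2 = ((points[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        for p, j in zip(points, d2.argmin(axis=1)):
            counts[j] += 1
            sums[j] += p
        return
    assign(points[mask], centroids, counts, sums, leaf)
    assign(points[~mask], centroids, counts, sums, leaf)

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 2))
C = X[rng.choice(len(X), 4, replace=False)].copy()
for _ in range(10):                            # Lloyd iterations with pruning
    counts, sums = np.zeros(len(C)), np.zeros_like(C)
    assign(X, C, counts, sums)
    C = sums / np.maximum(counts[:, None], 1)
```

The load-imbalance problem the abstract raises is visible here: how much of the tree each worker can prune depends on the data it holds, so a static partition can leave some nodes with far more distance computations than others.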
Abstract:
A full assessment of para-virtualization is important because, without knowledge of the various overheads, users cannot judge whether using virtualization is a good idea. In this paper we assess the overheads of running various benchmarks on bare-metal and on para-virtualization, and we also examine the overheads of turning on monitoring and logging. The knowledge gained from assessing these benchmarks on different systems will help a range of users understand the use of virtualization systems.

We assess the overheads of using Xen, VMware, KVM and Citrix (see Table 1); these virtualization systems are used extensively by cloud users. We use various Netlib benchmarks, developed by the University of Tennessee at Knoxville (UTK) and Oak Ridge National Laboratory (ORNL). To assess each virtualization system, we run the benchmarks on bare-metal, then on the para-virtualization, and finally with monitoring and logging turned on. The latter is important because users are interested in the Service Level Agreements (SLAs) used by cloud providers, and logging is a means of assessing the services bought and used from commercial providers. We assess the virtualization systems on three different platforms: the Thamesblue supercomputer, the Hactar cluster and an IBM JS20 blade server (see Table 2), all of which are servers available at the University of Reading.

A functional virtualization system is multi-layered and is driven by its privileged components. A virtualization system can host multiple guest operating systems, each running in its own domain, and it schedules virtual CPUs and memory within each virtual machine (VM) to make the best use of the available resources; the guest operating system schedules each application accordingly. Virtualization can be deployed as full virtualization or para-virtualization, and both approaches can be used across a variety of virtualized systems. Full virtualization provides a total abstraction of the underlying physical system and creates a new virtual system in which guest operating systems can run; no modifications are needed in the guest OS or application, which are unaware of the virtualized environment and run normally. Para-virtualization is OS-assisted virtualization: the guest operating system is modified so that it is aware it is running on virtualized hardware rather than on bare hardware, which enables near-native performance. In para-virtualization, the device drivers in the guest operating system coordinate with the device drivers of the host operating system, reducing performance overheads. The use of para-virtualization [0] is intended to avoid the bottleneck associated with the slow hardware interrupts that exist when full virtualization is employed.
It has been shown [0] that para-virtualization does not impose significant performance overhead in high performance computing, and this in turn has implications for the use of cloud computing for hosting HPC applications. The "apparent" improvement in virtualization has led us to formulate the hypothesis that certain classes of HPC applications should be able to execute in a cloud environment with minimal performance degradation. In order to support this hypothesis, it is first necessary to define exactly what is meant by a "class" of application, and secondly to observe application performance both within a virtual machine and when executing on bare hardware. A further potential complication is associated with the need for cloud service providers to support Service Level Agreements (SLAs), so that system utilisation can be audited.
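An overhead measurement in the spirit of this methodology can be as simple as timing the same benchmark on bare-metal and inside the para-virtualized guest; the sketch below is illustrative, and the './linpack' binary is a placeholder standing in for a Netlib benchmark, not a name from the paper.

```python
# Time repeated runs of a benchmark command; run once on bare-metal and
# once inside the guest, then compare the means to estimate overhead.

import subprocess, time, statistics

def run_benchmark(cmd, repeats=5):
    """Return mean and stdev of wall-clock runtime of `cmd` in seconds."""
    times = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        times.append(time.perf_counter() - t0)
    return statistics.mean(times), statistics.stdev(times)

# Hypothetical invocation on the bare-metal host:
bare_mean, bare_sd = run_benchmark(["./linpack"])
# Repeat the same call inside the para-virtualized guest, then:
# overhead_pct = 100 * (vm_mean - bare_mean) / bare_mean
```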
Abstract:
This article reviews current technological developments, particularly Peer-to-Peer technologies and Distributed Data Systems, and their value to community memory projects, particularly those concerned with the preservation of the cultural, literary and administrative data of cultures which have suffered genocide or are at risk of genocide. It draws attention to the comparatively good representation online of genocide-denial groups and to changes in the technological strategies of Holocaust-denial and other far-right groups. It draws on the author's work in providing IT support for a UK-based Non-Governmental Organization providing support for survivors of the genocide in Rwanda.
Abstract:
Synchronous collaborative systems allow geographically distributed users to form a virtual work environment, enabling cooperation between peers and enriching human interaction. The technology facilitating this interaction has been studied for several years, and various solutions are available at present. In this paper, we discuss our experiences with one such widely adopted technology, namely the Access Grid [1]. We identify key problem areas encountered in using this technology and propose our solutions to tackle these issues appropriately. Moreover, we propose the integration of Access Grid with an Application Sharing tool developed by the authors. Our approach allows these integrated tools to utilise the enhanced features provided by our underlying dynamic transport layer.