854 results for Multics (Computer operating system)
Abstract:
A multi-channel gated integrator and a PXI-based data acquisition system have been developed for nuclear detector arrays with hundreds of detector units. The multi-channel gated integrator can be controlled by a programmable Cl controller. The PXI-DAQ system consists of an NI PXI-1033 chassis with several PXI-DAQ cards. The system software has a user-friendly GUI written in C using LabWindows/CVI under the Windows XP operating system. The performance of the PXI-DAQ system is very reliable, and it is capable of handling event rates of up to 40 kHz. (C) 2010 Elsevier B.V. All rights reserved.
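To illustrate what a gated integrator computes, here is a toy software model of one channel (the waveform values and gate positions are invented, and the real system performs this integration in analogue hardware before digitisation):

    #include <stdio.h>

    /* Toy software model of one gated-integrator channel: input samples are
       summed only while the gate is open, mimicking per-event charge
       integration (the real system does this in analogue hardware). */
    static double integrate(const double *samples, int n,
                            int gate_open, int gate_close)
    {
        double sum = 0.0;
        for (int i = gate_open; i < gate_close && i < n; i++)
            sum += samples[i];
        return sum;
    }

    int main(void)
    {
        /* A detector-like pulse on a noisy baseline (values invented). */
        double pulse[10] = {0.1, 0.0, 2.5, 4.0, 3.1, 1.2, 0.3, 0.1, 0.0, 0.1};
        printf("integrated charge = %.1f\n", integrate(pulse, 10, 2, 7));
        return 0;
    }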
Abstract:
Building on a summary of prior work and the particular testing requirements of secure operating systems, this paper proposes the concept of the degenerate test set (DTS) and designs an efficient method, based on security state transitions and model checking, for generating test sets. Taking state transitions as the objects of reduction, the method merges identical state transitions and eliminates redundant properties from the requirement set while generating test cases with model-checking techniques, thereby reducing the size of the test set. On this basis, the paper examines the validity of a test suite when an individual test case fails and improves the DTS generation algorithm accordingly. Experimental results show that the method effectively reduces redundancy in the test set.
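A minimal sketch of the reduction idea described above, with an invented transition encoding (the paper's actual data structures and model-checking machinery are not shown): test cases whose state transitions are identical are merged, so each distinct transition is exercised once.

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical encoding of a security state transition: (from, event, to). */
    struct transition { int from; int event; int to; };

    /* A test case exercises one transition; suites generated by repeated
       model-checking runs may contain duplicates. */
    struct test_case { struct transition t; const char *name; };

    /* Merge test cases whose transitions are identical, keeping the first
       occurrence; returns the reduced suite size. */
    static int reduce_suite(struct test_case *suite, int n)
    {
        int kept = 0;
        for (int i = 0; i < n; i++) {
            int dup = 0;
            for (int j = 0; j < kept; j++)
                if (memcmp(&suite[i].t, &suite[j].t, sizeof suite[i].t) == 0) {
                    dup = 1;
                    break;
                }
            if (!dup)
                suite[kept++] = suite[i];  /* one test per distinct transition */
        }
        return kept;
    }

    int main(void)
    {
        struct test_case suite[] = {
            { {0, 1, 2}, "tc_login_ok"  },
            { {0, 1, 2}, "tc_login_dup" },  /* identical transition: redundant */
            { {2, 3, 0}, "tc_logout"    },
        };
        printf("reduced suite: %d cases\n", reduce_suite(suite, 3));  /* 2 */
        return 0;
    }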
Abstract:
This paper describes a real-time multitasking management system for a teleoperated mobile manipulator robot. The system was designed and implemented with the iRMX II real-time multitasking operating system on a computer platform combining Multibus and Bitbus. It provides the teleoperated robot with essential management services such as data communication, human-machine interfacing, and task execution, enabling the robot to carry out key control functions including manipulator operation and vehicle motion.
Abstract:
The Jellybean Machine is a scalable MIMD concurrent processor consisting of special-purpose RISC processors loosely coupled into a low-latency network. I have developed an operating system to provide the supportive environment required to efficiently coordinate the collective power of the distributed processing elements. The system services are developed in detail, and may be of interest to other designers of fine-grain, distributed-memory processing networks.
Abstract:
This thesis presents SodaBot, a general-purpose software agent user-environment and construction system. Its primary component is the basic software agent --- a computational framework for building agents which is essentially an agent operating system. We also present a new language for programming the basic software agent, whose primitives are designed around human-level descriptions of agent activity. Via this programming language, users can easily implement a wide range of typical software agent applications, e.g. personal on-line assistants and meeting-scheduling agents. The SodaBot system has been implemented and tested, and its description comprises the bulk of this thesis.
Abstract:
Server performance has become a crucial issue for improving the overall performance of the World-Wide Web. This paper describes Webmonitor, a tool for evaluating and understanding server performance, and presents new results for a realistic workload. Webmonitor measures activity and resource consumption, both within the kernel and in HTTP processes running in user space. Webmonitor is implemented using an efficient combination of sampling and event-driven techniques that exhibit low overhead. Our initial implementation is for the Apache World-Wide Web server running on the Linux operating system. We demonstrate the utility of Webmonitor by measuring and understanding the performance of a Pentium-based PC acting as a dedicated WWW server. Our workload uses a file size distribution with a heavy tail. This captures the fact that Web servers must concurrently handle some requests for large audio and video files, and a large number of requests for small documents, containing text or images. Our results show that in a Web server saturated by client requests, over 90% of the time spent handling HTTP requests is spent in the kernel. Furthermore, keeping TCP connections open, as required by TCP, causes a factor of 2-9 increase in the elapsed time required to service an HTTP request. Data gathered from Webmonitor provide insight into the causes of this performance penalty. Specifically, we observe a significant increase in resource consumption along three dimensions: the number of HTTP processes running at the same time, CPU utilization, and memory utilization. These results emphasize the important role of operating system and network protocol implementation in determining Web server performance.
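As an illustration of mixing the two measurement styles, here is a minimal user-space sketch, not Webmonitor itself: a SIGPROF timer samples a state flag at fixed CPU-time intervals while a counter is incremented at each request event.

    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/time.h>

    /* Event-driven counter: incremented exactly when a request completes. */
    static volatile sig_atomic_t requests_handled = 0;

    /* Sampling counters: incremented on each profiling tick, split by a flag. */
    static volatile sig_atomic_t busy = 0;  /* toy stand-in for a state flag */
    static volatile sig_atomic_t samples_busy = 0, samples_idle = 0;

    static void on_tick(int sig)
    {
        (void)sig;
        if (busy) samples_busy++; else samples_idle++;
    }

    int main(void)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = on_tick;
        sigaction(SIGPROF, &sa, NULL);

        /* Sample every 10 ms of consumed CPU time (user + system). */
        struct itimerval it = { {0, 10000}, {0, 10000} };
        setitimer(ITIMER_PROF, &it, NULL);

        for (long i = 0; i < 50000000; i++) {  /* stand-in for request handling */
            busy = 1;
            /* ... parse request, send reply ... */
            busy = 0;
            requests_handled++;                /* event-driven measurement */
        }
        printf("events=%ld busy=%ld idle=%ld\n",
               (long)requests_handled, (long)samples_busy, (long)samples_idle);
        return 0;
    }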
Abstract:
Statistical Rate Monotonic Scheduling (SRMS) is a generalization of the classical RMS results of Liu and Layland [LL73] to periodic tasks with highly variable execution times and statistical QoS requirements. The main tenet of SRMS is that the variability in task resource requirements can be smoothed through aggregation to yield guaranteed QoS. This aggregation is done over time for a given task and across multiple tasks for a given period of time. Like RMS, SRMS has two components: a feasibility test and a scheduling algorithm. The SRMS feasibility test ensures that it is possible for a given periodic task set to share a given resource without violating any of the statistical QoS constraints imposed on each task in the set. The SRMS scheduling algorithm consists of two parts: a job admission controller and a scheduler. The SRMS scheduler is a simple, preemptive, fixed-priority scheduler. The SRMS job admission controller manages the QoS delivered to the various tasks through admit/reject and priority assignment decisions. In particular, it ensures the important property of task isolation, whereby tasks do not infringe on each other. In this paper we present the design and implementation of SRMS within the KURT Linux operating system [HSPN98, SPH98, Sri98]. KURT Linux supports conventional tasks as well as real-time tasks. It provides a mechanism for transitioning from normal Linux scheduling to a mixed scheduling of conventional and real-time tasks, and to a focused mode in which only real-time tasks are scheduled. We give an overview of the technical issues we had to overcome in order to integrate SRMS into KURT Linux and present the API we have developed for scheduling periodic real-time tasks using SRMS.
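A highly simplified sketch of the admission idea, with invented budget bookkeeping rather than the actual SRMS algebra or the KURT Linux API: each task accrues an allowance per aggregation window, and a job is admitted only if its demand fits the remaining allowance, which preserves task isolation.

    #include <stdio.h>

    /* Toy per-task state for statistical admission control (the budget
       bookkeeping here is invented for illustration). */
    struct task {
        double period;     /* shorter period => higher fixed priority (RMS) */
        double allowance;  /* cycles the task may use per aggregation window */
        double used;       /* cycles consumed in the current window */
    };

    /* Admit a job iff its demand fits the remaining allowance; rejected jobs
       are skipped or demoted, so overruns cannot infringe on other tasks. */
    static int admit(struct task *t, double demand)
    {
        if (t->used + demand <= t->allowance) {
            t->used += demand;
            return 1;
        }
        return 0;
    }

    /* New aggregation window: aggregating over time smooths variable demand. */
    static void new_window(struct task *t) { t->used = 0.0; }

    int main(void)
    {
        struct task t = { 10.0, 30.0, 0.0 };
        double jobs[] = { 12.0, 9.0, 15.0, 4.0 };  /* highly variable demands */
        for (int i = 0; i < 4; i++)
            printf("job %d: %s\n", i,
                   admit(&t, jobs[i]) ? "admitted" : "rejected");
        new_window(&t);
        return 0;
    }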
Abstract:
Under high loads, a Web server may be servicing many hundreds of connections concurrently. In traditional Web servers, the question of the order in which concurrent connections are serviced has been left to the operating system. In this paper we ask whether servers might provide better service by using non-traditional service ordering. In particular, for the case when a Web server is serving static files, we examine the costs and benefits of a policy that gives preferential service to short connections. We start by assessing the scheduling behavior of a commonly used server (Apache running on Linux) with respect to connection size and show that it does not appear to provide preferential service to short connections. We then examine the potential performance improvements of a policy that does favor short connections (shortest-connection-first). We show that mean response time can be improved by factors of four or five under shortest-connection-first, as compared to an (Apache-like) size-independent policy. Finally we assess the costs of shortest-connection-first scheduling in terms of unfairness (i.e., the degree to which long connections suffer). We show that under shortest-connection-first scheduling, long connections pay very little penalty. This surprising result can be understood as a consequence of heavy-tailed Web server workloads, in which most connections are small, but most server load is due to the few large connections. We support this explanation using analysis.
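A minimal sketch of the policy under assumed data structures (a production server would integrate this with its event loop and keep a priority queue): each scheduling step services the connection with the fewest bytes remaining.

    #include <stdio.h>

    /* Toy connection table: for static files the response size is known up
       front, so "shortest connection" = fewest bytes left to send. */
    struct conn { int fd; long bytes_left; };

    /* Linear scan for clarity; a real server would keep a min-heap keyed on
       remaining bytes. */
    static int pick_shortest(const struct conn *c, int n)
    {
        int best = -1;
        for (int i = 0; i < n; i++)
            if (c[i].bytes_left > 0 &&
                (best < 0 || c[i].bytes_left < c[best].bytes_left))
                best = i;
        return best;  /* -1 when every connection has finished */
    }

    int main(void)
    {
        struct conn conns[] = { {3, 9000}, {4, 1200}, {5, 80} };
        int i;
        while ((i = pick_shortest(conns, 3)) >= 0) {
            long chunk = conns[i].bytes_left < 4096 ? conns[i].bytes_left : 4096;
            conns[i].bytes_left -= chunk;  /* send one chunk, then re-decide */
            printf("served fd %d, %ld bytes left\n",
                   conns[i].fd, conns[i].bytes_left);
        }
        return 0;
    }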
Abstract:
Gemstone Team FACE
Abstract:
This paper presents a proactive approach to load sharing and describes the architecture of a scheme, Concert, based on this approach. A proactive approach is characterized by a shift of emphasis from reacting to load imbalance to avoiding its occurrence. In contrast, in a reactive load sharing scheme, activity is triggered when a processing node is either overloaded or underloaded. The main drawback of this approach is that a load imbalance is allowed to develop before costly corrective action is taken. Concert is a load sharing scheme for loosely-coupled distributed systems. Under this scheme, load and task behaviour information is collected and cached in advance of when it is needed. Concert uses Linux as a platform for development. Implemented partially in kernel space and partially in user space, it achieves transparency to users and applications whilst keeping the extent of kernel modifications to a minimum. Non-preemptive task transfers are used exclusively, motivated by lower complexity, lower overheads and faster transfers. The goal is to minimize the average response-time of tasks. Concert is compared with other schemes by considering the level of transparency it provides with respect to users, tasks and the underlying operating system.
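A sketch of the proactive idea with invented data structures, not Concert's code: load reports are cached as they arrive, so a transfer target can be chosen from the cache at task-creation time, without probing peers after an imbalance has already developed.

    #include <stdio.h>
    #include <time.h>

    /* Cached load report for a peer node, refreshed as periodic reports
       arrive (names and fields invented for illustration). */
    struct peer { int id; double load; time_t updated; };

    #define STALE_AFTER 5  /* seconds after which a cached report is ignored */

    /* Record a load report; in a proactive scheme this happens continuously,
       in advance of any transfer decision. */
    static void record_report(struct peer *p, double load)
    {
        p->load = load;
        p->updated = time(NULL);
    }

    /* Choose a transfer target from the cache alone: no probing is needed at
       task-creation time. */
    static int pick_target(const struct peer *peers, int n, double local_load)
    {
        int best = -1;
        time_t now = time(NULL);
        for (int i = 0; i < n; i++) {
            if (now - peers[i].updated > STALE_AFTER)
                continue;  /* distrust stale information */
            if (peers[i].load < local_load &&
                (best < 0 || peers[i].load < peers[best].load))
                best = i;
        }
        return best;  /* -1: run the task locally */
    }

    int main(void)
    {
        struct peer peers[2] = { {1, 0.0, 0}, {2, 0.0, 0} };
        record_report(&peers[0], 0.9);
        record_report(&peers[1], 0.2);
        int t = pick_target(peers, 2, 0.7);
        if (t >= 0)
            printf("transfer new task to node %d\n", peers[t].id);
        else
            printf("run task locally\n");
        return 0;
    }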
Abstract:
The scalability of a computer system is its response to growth; it depends on the system's hardware, its operating system, and the applications it is running. Most distributed systems technology today still depends on bus-based shared memory, which does not scale well, while systems based on grid or hypercube schemes require significantly fewer connections than a full interconnection, which would exhibit a quadratic growth rate. The rapid convergence of mobile communication, digital broadcasting and network infrastructures calls for rich multimedia content that is adaptive and responsive to the needs of individuals, businesses and public organisations. This paper discusses the emergence of mobile multimedia systems and provides an overview of the issues surrounding the design and delivery of multimedia content to mobile devices.
Abstract:
Purpose – To present key challenges associated with the evolution of system-in-package (SiP) technologies, and to present technical work in reliability modelling and embedded test that addresses these challenges. Design/methodology/approach – Key challenges have been identified from the electronics and integrated MEMS industrial sectors. Solutions for optimising the reliability of a typical assembly process and reducing the cost of production test have been studied through simulation and modelling studies based on technology data released by NXP, in collaboration with the EDA tool vendors Coventor and Flomerics. Findings – Characterised models that deliver spatial- and material-dependent reliability data that can be used to optimise the robustness of SiP assemblies, together with results that indicate the relative contributions of various structural variables; an initial analytical model for solder ball reliability; and a solution for embedding a low-cost test for a capacitive RF-MEMS switch, identified as an SiP component presenting a key test challenge. Research limitations/implications – Results will contribute to the further development of NXP wafer-level system-in-package technology. Limitations are that feedback on the implementation of the recommendations and physical characterisation of the embedded test solution are not yet available. Originality/value – Both the methodology and the associated studies on the structural reliability of an industrial SiP technology are unique. The analytical model for solder ball life is new, as is the embedded test solution for the RF-MEMS switch.
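The paper's analytical model is not reproduced in the abstract; as an illustration of the general form such models take, here is a Coffin-Manson-type low-cycle fatigue relation commonly used as a starting point for solder-joint life estimation (the constants are placeholders, not values from the paper):

    #include <math.h>
    #include <stdio.h>

    /* Coffin-Manson-type low-cycle fatigue relation: Nf = C * (d_eps_p)^(-n),
       where d_eps_p is the plastic strain range per thermal cycle. The
       constants below are placeholders, not values from the paper. */
    static double cycles_to_failure(double d_eps_p, double C, double n)
    {
        return C * pow(d_eps_p, -n);
    }

    int main(void)
    {
        /* Halving the plastic strain range multiplies the life by 2^n. */
        printf("Nf = %.0f cycles\n", cycles_to_failure(0.01, 0.5, 2.0));
        return 0;
    }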
Abstract:
WebCom-G is a fledgling Grid Operating System, designed to provide independent service access through interoperability with existing middlewares. It offers an expressive programming model that automatically handles task synchronisation; load balancing, fault tolerance, and task allocation are handled at the WebCom-G system level without burdening the application writer. These characteristics, together with the ability of its computing model to mix evaluation strategies to match the characteristics of the geographically dispersed facilities and the overall problem-solving environment, make WebCom-G a promising grid middleware candidate.
Abstract:
An H-file is used to convey information from the inner region to the outer region in R-matrix computations. HBrowse is a workstation tool for displaying a graphical abstraction of a local or remote R-matrix H-file. While it is published as a stand-alone tool for post-processing the output of R-matrix inner-region computations, it also forms part of the Graphical R-matrix Atomic Collision Environment (GRACE). HBrowse is written in C and OSF/Motif for the UNIX operating system. (C) 2000 Elsevier Science B.V. All rights reserved.
Abstract:
Enhancing sampling and analyzing simulations are central issues in molecular simulation. Recently, we introduced PLUMED, an open-source plug-in that provides some of the most popular molecular dynamics (MD) codes with implementations of a variety of enhanced sampling algorithms and collective variables (CVs). The rapid changes in this field, in particular new directions in enhanced sampling and dimensionality reduction together with new hardware, require a code that is more flexible and more efficient. We therefore present here PLUMED 2, a complete rewrite of the code in an object-oriented programming language (C++). This new version introduces greater flexibility and greater modularity, which both extend its core capabilities and make it far easier to add new methods and CVs. It also has a simpler interface with the MD engines and provides a single software library containing both tools and core facilities. Ultimately, the new code better serves the ever-growing community of users and contributors in coping with the new challenges arising in the field.
Program summary
Program title: PLUMED 2
Catalogue identifier: AEEE_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEE_v2_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Yes
No. of lines in distributed program, including test data, etc.: 700646
No. of bytes in distributed program, including test data, etc.: 6618136
Distribution format: tar.gz
Programming language: ANSI-C++.
Computer: Any computer capable of running an executable produced by a C++ compiler.
Operating system: Linux and other Unix operating systems.
Has the code been vectorized or parallelized?: Yes, parallelized using MPI.
RAM: Depends on the number of atoms, the method chosen and the collective variables used.
Classification: 3, 7.7, 23.
Catalogue identifier of previous version: AEEE_v1_0.
Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 1961.
External routines: GNU libmatheval, Lapack, Blas, MPI. (C) 2013 Elsevier B.V. All rights reserved.