809 results for Secure operating system
Abstract:
Taking a 5-axis parallel machine tool as the test platform, and targeting parallel machine tools of different configurations, an open CNC system for parallel machine tools running on the Windows platform was developed, using a dual-CPU "PMAC + IPC" hardware architecture and Visual C++ 6.0 as the software platform. This paper describes the hardware structure and software composition of the CNC system and discusses the key technologies involved in developing the CNC software.
Abstract:
Based on the QNX real-time multitasking operating system, a robot software system was designed. The system employs two shared data areas, RTM (real-time monitoring) and DCP (device control), and uses QNX's message-passing mechanism to communicate between RTM and DCP through the communication interfaces of the two shared data areas. This data-isolation mechanism ensures the modularity and extensibility of the program.
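The RTM/DCP isolation described above, with modules that never touch each other's data directly and communicate only through message passing, can be sketched abstractly. The sketch below is plain Python with invented names, not QNX code; thread-safe queues stand in for QNX's send/receive mechanism and the shared-data-area interfaces.

```python
import queue

class Module:
    """A module (e.g. RTM or DCP) whose state is private; all coordination
    goes through message queues, never through shared mutable state."""

    def __init__(self, name, inbox, outbox):
        self.name = name
        self.inbox = inbox      # messages arriving from the peer module
        self.outbox = outbox    # messages sent to the peer module
        self.state = {}         # private data area, isolated from the peer

    def send(self, key, value):
        self.outbox.put((key, value))

    def receive(self):
        key, value = self.inbox.get(timeout=1)
        self.state[key] = value  # only messages can alter local state
        return key, value

def demo():
    rtm_to_dcp, dcp_to_rtm = queue.Queue(), queue.Queue()
    rtm = Module("RTM", inbox=dcp_to_rtm, outbox=rtm_to_dcp)
    dcp = Module("DCP", inbox=rtm_to_dcp, outbox=dcp_to_rtm)
    rtm.send("target_position", 42.0)  # monitor issues a command
    dcp.receive()                      # device controller consumes it
    dcp.send("status", "moving")       # device controller reports back
    rtm.receive()
    return rtm.state, dcp.state
```

Because each module's state changes only via its own `receive`, either side can be replaced or extended without touching the other, which is the modularity/extensibility benefit the abstract claims for the data-isolation design.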
Abstract:
This paper describes a real-time multitasking management system for a teleoperated mobile work robot. The system was implemented with the iRMX II real-time multitasking operating system on a computer combining a Multibus and a Bitbus. It provides the teleoperated robot with essential management services, including data communication, the human-machine interface, and task execution, enabling the robot to carry out key control functions such as manipulator operation and vehicle motion.
Abstract:
OS/2 2.1 is one of the operating systems of choice for today's client/server computing model. This paper describes in detail the features of the IBM OS/2 2.1 operating system that support client/server computing.
Abstract:
The technique of energy extraction using groundwater source heat pumps, as a sustainable way of low-grade thermal energy utilization, has been widely used since the mid-1990s. Based on the basic theories of groundwater flow and heat transfer, and by employing two analytic models, the relationship between the thermal breakthrough time for a production well and the factors that affect it is analyzed, and the impact on the geo-temperature field of heat transfer by conduction and convection, under different groundwater velocity conditions, is discussed. A mathematical model coupling the equations for groundwater flow with those for heat transfer was developed. The impact of energy mining using a single-well supply-and-return system on the geo-temperature field under different hydrogeological conditions, well structures, withdrawal-and-reinjection rates, and natural groundwater flow velocities was quantitatively simulated using the finite-difference simulator HST3D. Theoretical analyses of the simulated results were also made. The simulated results for the single-well system indicate that neither the permeability nor the porosity of a homogeneous aquifer has a significant effect on the temperature of the production segment, provided that the production and injection capacity of each well in the aquifers involved can meet the designed value. If there is an interlayer of lower permeability than the main aquifer between the production and injection segments, the temperature changes of the production segment will decrease. The thicker the interlayer and the lower its permeability, the longer the thermal breakthrough time of the production segment and the smaller its temperature changes.
According to the above modeling, it can also be found that the temperature changes of the production segment decline with an increase in the aquifer thickness, the distance between the production and injection screens, or the regional groundwater flow velocity, or with a decrease in the production-and-reinjection rate. For an aquifer of constant thickness, continuously increasing the screen lengths of the production and injection segments may reduce the distance between the production and injection screens, and the temperature changes of the production segment will consequently increase. Based on the simulation results of the single-well system, the parameters that significantly influence heat transfer and the geo-temperature field were chosen for the doublet-system simulation. It is indicated that the temperature changes of the pumping well will decrease as the aquifer thickness, the distance between the well pair, and/or the screen lengths of the doublet increase. In the case of a low-permeability interlayer embedded in the main aquifer, if the screens of the pumping and injection wells are installed below and above the interlayer, respectively, the temperature changes of the pumping well will be smaller than without the interlayer. The lower the permeability of the interlayer, the smaller the temperature changes. The simulation results also indicate that the lower the pumping-and-reinjection rate, the greater the temperature changes of the pumping well. It can also be found that if the producer and the injector are chosen reasonably, the temperature changes of the pumping well will decline as the regional groundwater flow velocity increases.
Compared with the case in which the groundwater flow direction is perpendicular to the well pair, if the regional flow is directed from the pumping well to the injection well, the temperature changes of the pumping well are relatively smaller. Based on the above simulation study, a case history was conducted using data from an operating system in Beijing. By means of the conceptual model and the mathematical model, a 3-D simulation model was developed, and the hydrogeological parameters and thermal properties were calibrated. The calibrated model was used to predict the evolution of the geo-temperature field over the next five years. The simulation results indicate that the calibrated model can represent the hydrogeological conditions and the nature of the aquifers. It can also be found that the temperature fronts in highly permeable aquifers move very fast and the radii of temperature influence are large. Comparatively, the temperature changes in clay layers are smaller, and there is an obvious lag in the temperature changes. Under the current energy mining load, the temperature of the pumping wells will increase by 0.7°C by the end of the next five years. The above case study may provide a reliable basis for the scientific management of the operating system studied.
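HST3D solves the fully coupled 3-D flow and heat-transport problem; purely as a toy illustration of the conduction/convection competition behind thermal breakthrough, the sketch below integrates a 1-D advection-conduction equation with an explicit upwind finite-difference scheme. All parameter values and the function name are invented for illustration and are not taken from the study.

```python
def breakthrough_profile(v, n=100, steps=2000, D=0.1, dx=1.0, dt=0.5,
                         T0=15.0, Tin=5.0):
    """1-D temperature profile: cold water injected at x=0 into an aquifer
    initially at T0, with conduction (D) and advection at seepage velocity v.
    Explicit scheme; dt respects D*dt/dx^2 <= 0.5 and v*dt/dx <= 1."""
    T = [T0] * n
    T[0] = Tin                              # injection boundary held cold
    for _ in range(steps):
        Tn = T[:]
        for i in range(1, n - 1):
            conduction = D * (T[i + 1] - 2 * T[i] + T[i - 1]) / dx**2
            advection = -v * (T[i] - T[i - 1]) / dx   # upwind, flow in +x
            Tn[i] = T[i] + dt * (conduction + advection)
        Tn[-1] = Tn[-2]                     # open outflow boundary
        T = Tn
    return T
```

Consistent with the analytic discussion in the abstract, a higher seepage velocity carries the injected cold front to an observation point sooner, shortening the thermal breakthrough time there, while at low velocity conduction alone produces a much slower, smoother temperature change.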
Abstract:
A prototype presentation system base is described. It offers mechanisms, tools, and ready-made parts for building user interfaces. A general user interface model underlies the base, organized around the concept of a presentation: a visible text or graphic for conveying information. The base and model emphasize domain independence and style independence, to apply to the widest possible range of interfaces. The primitive presentation system model treats the interface as a system of processes maintaining a semantic relation between an application data base and a presentation data base, the symbolic screen description containing presentations. A presenter continually updates the presentation data base from the application data base. The user manipulates presentations with a presentation editor. A recognizer translates the user's presentation manipulations into application data base commands. The primitive presentation system can be extended to model more complex systems by attaching additional presentation systems. To illustrate the model's generality and descriptive capabilities, extended model structures for several existing user interfaces are discussed. The base provides support for building the application and presentation data bases, linked together into a single, uniform network, including descriptions of classes of objects as well as the objects themselves. The base provides an initial presentation data base network, graphics to continually display it, and editing functions. A variety of tools and mechanisms help create and control presenters and recognizers. To demonstrate the base's utility, three interfaces to an operating system were constructed, embodying different styles: icons, menus, and graphical annotation.
Abstract:
In view of the constant growth of writings on didactic and educational problems, it is necessary to create an efficient system of scientific educational information. Through a network of school and pedagogical libraries, this system will provide creative teachers with materials that facilitate selection and access, enriching the teachers' methodological base and their own intellectual potential. Such a well-organized and efficiently operating system at the level of the school superintendent's office, whose links will be educational institutions as well as those that improve the teaching methods of the teaching staff, may be of great informational and practical importance in the present age of rapid transformations. It will become an instrument that makes possible contact with pedagogical writings and the improvement of the qualifications of the teaching staff.
Abstract:
Server performance has become a crucial issue for improving the overall performance of the World-Wide Web. This paper describes Webmonitor, a tool for evaluating and understanding server performance, and presents new results for a realistic workload. Webmonitor measures activity and resource consumption, both within the kernel and in HTTP processes running in user space. Webmonitor is implemented using an efficient combination of sampling and event-driven techniques that exhibit low overhead. Our initial implementation is for the Apache World-Wide Web server running on the Linux operating system. We demonstrate the utility of Webmonitor by measuring and understanding the performance of a Pentium-based PC acting as a dedicated WWW server. Our workload uses a file size distribution with a heavy tail. This captures the fact that Web servers must concurrently handle some requests for large audio and video files, and a large number of requests for small documents, containing text or images. Our results show that in a Web server saturated by client requests, over 90% of the time spent handling HTTP requests is spent in the kernel. Furthermore, keeping TCP connections open, as required by TCP, causes a factor of 2-9 increase in the elapsed time required to service an HTTP request. Data gathered from Webmonitor provide insight into the causes of this performance penalty. Specifically, we observe a significant increase in resource consumption along three dimensions: the number of HTTP processes running at the same time, CPU utilization, and memory utilization. These results emphasize the important role of operating system and network protocol implementation in determining Web server performance.
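Webmonitor's measurement machinery is kernel-level instrumentation combining sampling and event-driven techniques. Purely as an illustration of how sampling recovers time shares such as the over-90%-in-kernel figure, here is a minimal sketch; the trace, the function name, and the numbers are invented, and this is not Webmonitor's implementation.

```python
from collections import Counter

def sample_profile(state_trace, sample_every):
    """Estimate the fraction of time spent in each state by inspecting the
    state only at fixed sampling points, rather than recording every event."""
    samples = state_trace[::sample_every]   # low-overhead periodic samples
    counts = Counter(samples)
    total = len(samples)
    return {state: count / total for state, count in counts.items()}

# Invented trace of where the server is executing, one entry per tick:
trace = ["kernel"] * 90 + ["user"] * 10
profile = sample_profile(trace, sample_every=5)
```

The trade-off this sketches is the one the abstract describes: sampling keeps overhead low (here, 1 in 5 ticks inspected) at the cost of statistical rather than exact attribution, while event-driven hooks capture exact transitions at higher cost.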
Abstract:
Statistical Rate Monotonic Scheduling (SRMS) is a generalization of the classical RMS results of Liu and Layland [LL73] for periodic tasks with highly variable execution times and statistical QoS requirements. The main tenet of SRMS is that the variability in task resource requirements can be smoothed through aggregation to yield guaranteed QoS. This aggregation is done over time for a given task and across multiple tasks for a given period of time. Like RMS, SRMS has two components: a feasibility test and a scheduling algorithm. The SRMS feasibility test ensures that it is possible for a given periodic task set to share a given resource without violating any of the statistical QoS constraints imposed on each task in the set. The SRMS scheduling algorithm consists of two parts: a job admission controller and a scheduler. The SRMS scheduler is a simple, preemptive, fixed-priority scheduler. The SRMS job admission controller manages the QoS delivered to the various tasks through admit/reject and priority assignment decisions. In particular, it ensures the important property of task isolation, whereby tasks do not infringe on each other. In this paper we present the design and implementation of SRMS within the KURT Linux Operating System [HSPN98, SPH 98, Sri98]. KURT Linux supports conventional tasks as well as real-time tasks. It provides a mechanism for transitioning from normal Linux scheduling to a mixed scheduling of conventional and real-time tasks, and to a focused mode where only real-time tasks are scheduled. We overview the technical issues that we had to overcome in order to integrate SRMS into KURT Linux and present the API we have developed for scheduling periodic real-time tasks using SRMS.
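SRMS itself is statistical, but it builds on the classical deterministic result of Liu and Layland. A minimal sketch of that classical utilization-bound feasibility test, which SRMS generalizes (this is not the SRMS test, which additionally accounts for execution-time variability and QoS), might look like:

```python
def rms_feasible(tasks):
    """Liu and Layland sufficient schedulability test for classical RMS.

    tasks: list of (worst_case_execution_time, period) pairs.
    A periodic task set is schedulable under fixed rate-monotonic
    priorities if total utilization <= n * (2^(1/n) - 1)."""
    n = len(tasks)
    utilization = sum(c / p for c, p in tasks)
    return utilization <= n * (2 ** (1 / n) - 1)
```

For two tasks the bound is about 0.828, so a set with utilization 0.375 passes while one with utilization 1.125 is rejected; the test is sufficient but not necessary, so sets above the bound may still be schedulable in particular cases.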
Abstract:
Under high loads, a Web server may be servicing many hundreds of connections concurrently. In traditional Web servers, the question of the order in which concurrent connections are serviced has been left to the operating system. In this paper we ask whether servers might provide better service by using non-traditional service ordering. In particular, for the case when a Web server is serving static files, we examine the costs and benefits of a policy that gives preferential service to short connections. We start by assessing the scheduling behavior of a commonly used server (Apache running on Linux) with respect to connection size and show that it does not appear to provide preferential service to short connections. We then examine the potential performance improvements of a policy that does favor short connections (shortest-connection-first). We show that mean response time can be improved by factors of four or five under shortest-connection-first, as compared to an (Apache-like) size-independent policy. Finally we assess the costs of shortest-connection-first scheduling in terms of unfairness (i.e., the degree to which long connections suffer). We show that under shortest-connection-first scheduling, long connections pay very little penalty. This surprising result can be understood as a consequence of heavy-tailed Web server workloads, in which most connections are small, but most server load is due to the few large connections. We support this explanation using analysis.
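The intuition behind shortest-connection-first can be seen in a tiny batch model: with one large job and several small ones (the sizes below are invented, and this is not the paper's simulation), serving the small jobs first sharply cuts mean response time while barely delaying the large job.

```python
def mean_response_time(sizes, order):
    """Mean response time when jobs, all arriving at t=0, run to completion
    on one server in the given order (no preemption)."""
    clock, total = 0.0, 0.0
    for i in order:
        clock += sizes[i]       # job i finishes at the current clock
        total += clock
    return total / len(sizes)

sizes = [10, 1, 1, 1, 1]        # one large job, many small ones (heavy-tail-ish)

# Size-independent service: take jobs in arrival order, large job first.
arrival_order = mean_response_time(sizes, range(len(sizes)))

# Shortest-first: small jobs overtake the large one.
sjf_order = sorted(range(len(sizes)), key=lambda i: sizes[i])
shortest_first = mean_response_time(sizes, sjf_order)
```

Here the mean drops from 12.0 to 4.8, while the large job finishes at t=14 instead of t=10, a modest relative penalty. This mirrors the paper's finding: when most load comes from a few large connections, favoring the many short ones helps the average a lot and hurts the long ones very little.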
Abstract:
This paper presents a proactive approach to load sharing and describes the architecture of a scheme, Concert, based on this approach. A proactive approach is characterized by a shift of emphasis from reacting to load imbalance to avoiding its occurrence. In contrast, in a reactive load sharing scheme, activity is triggered when a processing node is either overloaded or underloaded. The main drawback of this approach is that a load imbalance is allowed to develop before costly corrective action is taken. Concert is a load sharing scheme for loosely-coupled distributed systems. Under this scheme, load and task behaviour information is collected and cached in advance of when it is needed. Concert uses Linux as a platform for development. Implemented partially in kernel space and partially in user space, it achieves transparency to users and applications whilst keeping the extent of kernel modifications to a minimum. Non-preemptive task transfers are used exclusively, motivated by lower complexity, lower overheads and faster transfers. The goal is to minimize the average response-time of tasks. Concert is compared with other schemes by considering the level of transparency it provides with respect to users, tasks and the underlying operating system.
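The proactive idea, collecting and caching load information in advance of when it is needed, can be caricatured in a few lines. The class, method, and node names below are invented and bear no relation to Concert's actual implementation, which lives partly in kernel space; the point is only the decision-time/collection-time split.

```python
class LoadSharingSketch:
    """Toy proactive scheme: placement decisions read a cached load snapshot
    collected ahead of time, instead of probing nodes at decision time."""

    def __init__(self, loads):
        self.loads = dict(loads)    # true current load per node
        self.cache = dict(loads)    # snapshot collected in advance

    def refresh_cache(self):
        # Proactive collection step, run periodically in the background.
        self.cache = dict(self.loads)

    def place_task(self, cost):
        # Fast decision: consult only the cache, no probing round-trip.
        node = min(self.cache, key=self.cache.get)
        self.loads[node] += cost    # the real load changes immediately
        return node
```

The sketch also exposes the scheme's central trade-off: between refreshes the cache is stale, so placements can pile onto a node that was lightly loaded when last sampled, which is why the collection interval matters.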
Abstract:
The scalability of a computer system is its response to growth. It also depends on the system's hardware, its operating system, and the applications it is running. Most distributed systems technology today still depends on bus-based shared memory, which does not scale well, while systems based on grid or hypercube schemes require significantly fewer connections than a full interconnection, which would exhibit a quadratic growth rate. The rapid convergence of mobile communication, digital broadcasting, and network infrastructures calls for rich multimedia content that is adaptive and responsive to the needs of individuals, businesses, and public organisations. This paper discusses the emergence of mobile multimedia systems and provides an overview of the issues regarding the design and delivery of multimedia content to mobile devices.
Abstract:
WebCom-G is a fledgling Grid Operating System, designed to provide independent service access through interoperability with existing middlewares. It offers an expressive programming model that automatically handles task synchronisation; load balancing, fault tolerance, and task allocation are handled at the WebCom-G system level without burdening the application writer. These characteristics, together with the ability of its computing model to mix evaluation strategies to match the characteristics of the geographically dispersed facilities and of the overall problem-solving environment, make WebCom-G a promising grid middleware candidate.
Abstract:
An H-file is used to convey information from the inner region to the outer region in R-matrix computations. HBrowse is a workstation tool for displaying a graphical abstraction of a local or remote R-matrix H-file. While it is published as a stand-alone tool for post-processing the output from R-matrix inner-region computations, it also forms part of the Graphical R-matrix Atomic Collision Environment (GRACE). HBrowse is written in C and OSF/Motif for the UNIX operating system. (C) 2000 Elsevier Science B.V. All rights reserved.
Abstract:
Enhancing sampling and analyzing simulations are central issues in molecular simulation. Recently, we introduced PLUMED, an open-source plug-in that provides some of the most popular molecular dynamics (MD) codes with implementations of a variety of different enhanced sampling algorithms and collective variables (CVs). The rapid changes in this field, in particular new directions in enhanced sampling and dimensionality reduction together with new hardware, require a code that is more flexible and more efficient. We therefore present PLUMED 2 here: a complete rewrite of the code in an object-oriented programming language (C++). This new version introduces greater flexibility and greater modularity, which both extends its core capabilities and makes it far easier to add new methods and CVs. It also has a simpler interface with the MD engines and provides a single software library containing both tools and core facilities. Ultimately, the new code better serves the ever-growing community of users and contributors in coping with the new challenges arising in the field.
Program summary
Program title: PLUMED 2
Catalogue identifier: AEEE_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEE_v2_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Yes
No. of lines in distributed program, including test data, etc.: 700646
No. of bytes in distributed program, including test data, etc.: 6618136
Distribution format: tar.gz
Programming language: ANSI-C++.
Computer: Any computer capable of running an executable produced by a C++ compiler.
Operating system: Linux operating system, Unix OSs.
Has the code been vectorized or parallelized?: Yes, parallelized using MPI.
RAM: Depends on the number of atoms, the method chosen and the collective variables used.
Classification: 3, 7.7, 23.
Catalogue identifier of previous version: AEEE_v1_0.
Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 1961.
External routines: GNU libmatheval, LAPACK, BLAS, MPI. (C) 2013 Elsevier B.V. All rights reserved.