83 results for Unix
Abstract:
This paper describes the design of a concise, general-purpose remote service language (RSL) developed by the authors for the purpose of providing a "remote transformation service". Based on RSL, the logical structure of a remote service system for heterogeneous environments (HERSS) is constructed, and remote service system software for a SUN (UNIX)/VAX (VMS) heterogeneous distributed computer network was designed and implemented. RSL simplifies the task faced by remote users of a heterogeneous distributed computer network when using network resources, and the HERSS designed in this way offers good openness and transparency.
Abstract:
This book originally accompanied a 2-day course on using the LATEX typesetting system. It has been extensively revised and updated and can now be used for self-study or in the classroom. It is aimed at users of Linux, Macintosh, or Microsoft Windows, but it can be used with LATEX systems on any platform, including other Unix workstations, mainframes, and even your Personal Digital Assistant (PDA).
Abstract:
This paper presents work on document retrieval based on first-time participation in the CLEF 2001 monolingual retrieval task using French. The experiment findings indicated that Okapi, the text retrieval system in use, can successfully be used for non-English text retrieval. A great deal of internal pre-processing is required in the basic search system for conversion into Okapi access formats; various shell scripts were written to achieve this conversion in a UNIX environment, without which overall performance would have been significantly impeded. Based on the experiment findings using Okapi, which was originally designed for English, it was clear that, although most European languages share conventional word boundaries and variant word morphemes formed by the addition of suffixes, there is a significant difference between French and English retrieval depending on the adaptation of the indexing and search strategies in use. No sophisticated method for higher recall and precision, such as stemming techniques, phrase translation or de-compounding, was employed for the experiment, and our results were correspondingly poor. Future participation would include more refined query translation tools.
Abstract:
Models and software products have been developed for modelling, simulation and prediction of different correlations in materials science, including: 1. the correlation between processing parameters and properties in titanium alloys and γ-titanium aluminides; 2. time–temperature–transformation (TTT) diagrams for titanium alloys; 3. corrosion resistance of titanium alloys; 4. surface hardness and microhardness profiles of nitrocarburised layers; 5. fatigue stress-life (S–N) diagrams for Ti–6Al–4V alloys. The programs are based on trained artificial neural networks. For each particular case an appropriate combination of inputs and outputs is chosen, and very good model performance is achieved. Graphical user interfaces (GUI) are provided for easy use of the models, and interactive text versions have also been developed. The models are combined and integrated into a software package built in a modular fashion. The software products are available in versions for different platforms, including Windows 95/98/2000/NT, UNIX and Apple Macintosh. A description of the software products is given to demonstrate that they are convenient and powerful tools for practical applications in solving various problems in materials science. Examples of optimisation of alloy compositions, processing parameters and working conditions are illustrated, and an option for using the software in a materials selection procedure is described.
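The abstract does not specify the network architectures used; purely as a generic illustration of what evaluating a trained feed-forward network involves, the following minimal C sketch maps a few inputs (e.g. processing parameters) to one predicted property through a single hidden layer. All sizes, weights and inputs are invented placeholders, not the published models.

    #include <math.h>
    #include <stdio.h>

    #define N_IN  3
    #define N_HID 4

    static double sigmoid(double x) { return 1.0 / (1.0 + exp(-x)); }

    /* Evaluate a single-hidden-layer feed-forward network. The weights would
     * normally be obtained by training; here they are placeholders. */
    static double predict(const double in[N_IN],
                          const double w1[N_HID][N_IN], const double b1[N_HID],
                          const double w2[N_HID], double b2)
    {
        double out = b2;
        for (int j = 0; j < N_HID; j++) {
            double s = b1[j];
            for (int i = 0; i < N_IN; i++)
                s += w1[j][i] * in[i];
            out += w2[j] * sigmoid(s);
        }
        return out;
    }

    int main(void)
    {
        /* Placeholder weights and inputs purely for illustration. */
        double w1[N_HID][N_IN] = {{0.2,-0.1,0.4},{0.5,0.3,-0.2},{-0.3,0.1,0.2},{0.1,0.1,0.1}};
        double b1[N_HID] = {0.0, 0.1, -0.1, 0.0};
        double w2[N_HID] = {0.7, -0.4, 0.3, 0.2};
        double in[N_IN]  = {0.5, 0.2, 0.8};   /* e.g. normalised process parameters */

        printf("predicted property: %f\n", predict(in, w1, b1, w2, 0.05));
        return 0;
    }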
Abstract:
We present a novel approach to goal recognition based on a two-stage paradigm of graph construction and analysis. First, a graph structure called a Goal Graph is constructed to represent the observed actions, the state of the world, and the achieved goals, as well as various connections between these nodes at consecutive time steps. Then, the Goal Graph is analysed at each time step to recognise those partially or fully achieved goals that are consistent with the actions observed so far. The Goal Graph analysis also reveals valid plans for the recognised goals or parts of these goals. Our approach to goal recognition does not need a plan library; it therefore does not suffer from the problems of acquiring and hand-coding large plan libraries, nor from searching a plan space of exponential size. We describe two algorithms for Goal Graph construction and analysis in this paradigm. Both algorithms are provably sound, polynomial-time, and polynomial-space. The number of goals recognised by our algorithms is usually very small after a sequence of observed actions has been processed, so the sequence of observed actions is well explained by the recognised goals with little ambiguity. We have evaluated these algorithms in the UNIX domain, in which excellent performance has been achieved in terms of accuracy, efficiency, and scalability.
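The paper defines the Goal Graph formally; purely as an illustration of the kind of layered structure described (action, proposition and goal nodes tagged with time steps and connected across levels), a hypothetical C sketch might look like the following. Node kinds, labels and fields are assumptions for illustration, not the authors' implementation.

    #include <stdio.h>

    /* Hypothetical sketch of a layered Goal Graph: nodes represent observed
     * actions, world-state propositions, and goals, each tagged with the time
     * step (graph level) at which it appears; edges connect related nodes in
     * adjacent levels. */
    typedef enum { NODE_PROPOSITION, NODE_ACTION, NODE_GOAL } NodeKind;

    typedef struct GGNode {
        NodeKind        kind;
        const char     *label;       /* e.g. "at(paper.tex,/tmp)" or "cp"  */
        int             time_step;   /* level in the graph                 */
        struct GGNode  *edges[8];    /* links to nodes at adjacent levels  */
        int             n_edges;
    } GGNode;

    static void add_edge(GGNode *from, GGNode *to)
    {
        from->edges[from->n_edges++] = to;
    }

    int main(void)
    {
        /* One observed action supporting a proposition, which in turn supports
         * a goal: the analysis stage would walk such edges backwards to check
         * which goals are consistent with the observations so far. */
        GGNode act  = { NODE_ACTION,      "cp paper.tex /tmp",          1, {0}, 0 };
        GGNode prop = { NODE_PROPOSITION, "at(paper.tex,/tmp)",         1, {0}, 0 };
        GGNode goal = { NODE_GOAL,        "file-backed-up(paper.tex)",  1, {0}, 0 };

        add_edge(&act, &prop);
        add_edge(&prop, &goal);
        printf("goal '%s' supported via %d edge(s)\n", goal.label, prop.n_edges);
        return 0;
    }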
Abstract:
A FORTRAN 90 program is presented which calculates the total cross sections, and the electron energy spectra of the singly and doubly differential cross sections, for the single ionization of neutral target atoms ranging from hydrogen up to and including argon. The code is applicable to both high- and low-Z projectile impact in fast ion-atom collisions. The theoretical models provided for the program user are based on two quantum mechanical approximations which have proved very successful in the study of ionization in ion-atom collisions: the continuum-distorted-wave (CDW) and continuum-distorted-wave eikonal-initial-state (CDW-EIS) approximations. The codes presented here extend previously published codes for single ionization of target hydrogen [Crothers and McCartney, Comput. Phys. Commun. 72 (1992) 288], target helium [Nesbitt, O'Rourke and Crothers, Comput. Phys. Commun. 114 (1998) 385] and target atoms ranging from lithium to neon [O'Rourke, McSherry and Crothers, Comput. Phys. Commun. 131 (2000) 129]. Cross sections for all of these target atoms may be obtained as limiting cases from the present code.
Program summary
Title of program: ARGON
Catalogue identifier: ADSE
Program summary URL: http://cpc.cs.qub.ac.uk/cpc/summaries/ADSE
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Licensing provisions: none
Computer for which the program is designed and others on which it is operable: Four by 200 MHz Pro Pentium Linux server, DEC Alpha 21164; Four by 400 MHz Pentium 2 Xeon 450 Linux server, IBM SP2 and SUN Enterprise 3500
Installations: Queen's University, Belfast
Operating systems under which the program has been tested: Red Hat Linux 5.2, Digital UNIX Version 4.0d, AIX, Solaris SunOS 5.7
Compilers: PGI workstations, DEC CAMPUS
Programming language used: FORTRAN 90 with MPI directives
No. of bits in a word: 64, except on Linux servers 32
Number of processors used: any number
Has the code been vectorized or parallelized?: Parallelized using MPI
No. of bytes in distributed program, including test data, etc.: 32 189
Distribution format: tar gzip file
Keywords: Single ionization, cross sections, continuum-distorted-wave model, continuum-distorted-wave eikonal-initial-state model, target atoms, wave treatment
Nature of physical problem: The code calculates total and differential cross sections for the single ionization of target atoms ranging from hydrogen up to and including argon by both light and heavy ion impact.
Method of solution: ARGON allows the user to calculate the cross sections using either the CDW or CDW-EIS [J. Phys. B 16 (1983) 3229] models within the wave treatment.
Restrictions on the complexity of the program: Both the CDW and CDW-EIS models are two-state perturbative approximations.
Typical running time: Times vary according to input data and number of processors. For one processor the test input data for double differential cross sections (40 points) took less than one second, whereas the test input for total cross sections (20 points) took 32 minutes.
Unusual features of the program: none
(C) 2003 Elsevier B.V. All rights reserved.
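As general background (standard definitions, not formulas specific to the ARGON code), the doubly and singly differential cross sections mentioned above are related to the total single-ionization cross section by integration over the ejected-electron emission angle and energy:

    % integrate the doubly differential cross section over emission angle to
    % obtain the electron energy spectrum, and over energy to obtain the total
    \frac{d\sigma}{dE} = \int \frac{d^{2}\sigma}{dE\,d\Omega}\, d\Omega ,
    \qquad
    \sigma_{\mathrm{tot}} = \int_{0}^{\infty} \frac{d\sigma}{dE}\, dE .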
Abstract:
An H-file is used to convey information from the inner region to the outer region in R-matrix computations. HBrowse is a workstation tool for displaying a graphical abstraction of a local or remote R-matrix H-file. While it is published as a stand-alone tool for post-processing the output from R-matrix inner-region computations, it also forms part of the Graphical R-matrix Atomic Collision Environment (GRACE). HBrowse is written in C and OSF/Motif for the UNIX operating system. (C) 2000 Elsevier Science B.V. All rights reserved.
Abstract:
Enhancing sampling and analyzing simulations are central issues in molecular simulation. Recently, we introduced PLUMED, an open-source plug-in that provides some of the most popular molecular dynamics (MD) codes with implementations of a variety of different enhanced sampling algorithms and collective variables (CVs). The rapid changes in this field, in particular new directions in enhanced sampling and dimensionality reduction together with new hardware, require a code that is more flexible and more efficient. We therefore present here PLUMED 2, a complete rewrite of the code in an object-oriented programming language (C++). This new version introduces greater flexibility and greater modularity, which both extends its core capabilities and makes it far easier to add new methods and CVs. It also has a simpler interface with the MD engines and provides a single software library containing both tools and core facilities. Ultimately, the new code better serves the ever-growing community of users and contributors in coping with the new challenges arising in the field.
Program summary
Program title: PLUMED 2
Catalogue identifier: AEEE_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEE_v2_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Yes
No. of lines in distributed program, including test data, etc.: 700646
No. of bytes in distributed program, including test data, etc.: 6618136
Distribution format: tar.gz
Programming language: ANSI-C++.
Computer: Any computer capable of running an executable produced by a C++ compiler.
Operating system: Linux operating system, Unix OSs.
Has the code been vectorized or parallelized?: Yes, parallelized using MPI.
RAM: Depends on the number of atoms, the method chosen and the collective variables used.
Classification: 3, 7.7, 23.
Catalogue identifier of previous version: AEEE_v1_0.
Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 1961.
External routines: GNU libmatheval, Lapack, BLAS, MPI.
(C) 2013 Elsevier B.V. All rights reserved.
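The PLUMED 2 abstract above describes a plug-in that an MD engine calls each step, handing over coordinates and receiving bias forces back through a single library interface. The following deliberately simplified C sketch illustrates that general idea; every name, signature and the toy harmonic "collective variable" are invented for illustration and are not the actual PLUMED API.

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical plug-in handle created once by the MD engine. */
    typedef struct {
        int    n_atoms;
        double timestep;
    } plugin;

    static plugin *plugin_create(int n_atoms, double timestep)
    {
        plugin *p = malloc(sizeof *p);
        p->n_atoms = n_atoms;
        p->timestep = timestep;
        return p;
    }

    /* Called every MD step: the engine supplies positions, the library computes
     * a collective variable and adds a bias force in place. Here the "CV" is
     * just the x coordinate of atom 0 with a harmonic restraint on it. */
    static void plugin_calc_step(plugin *p, long step,
                                 const double *pos, double *forces, double *bias)
    {
        const double k = 10.0, x0 = 1.0;     /* illustrative restraint only */
        double cv = pos[0];
        *bias = 0.5 * k * (cv - x0) * (cv - x0);
        forces[0] += -k * (cv - x0);
        (void)step; (void)p;
    }

    int main(void)
    {
        double pos[3] = { 1.2, 0.0, 0.0 }, forces[3] = { 0 }, bias = 0.0;
        plugin *p = plugin_create(1, 0.002);

        plugin_calc_step(p, 0, pos, forces, &bias);   /* one step of the MD loop */
        printf("bias energy %.3f, bias force on x %.3f\n", bias, forces[0]);

        free(p);
        return 0;
    }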
Abstract:
Stratigraphic columns (SC) are the most useful and common way to represent field descriptions (e.g., grain size, thickness of rock packages, and fossil and lithological components) of rock sequences and well logs. In these representations the width of the SC varies according to grain size (i.e., the wider the stratum, the coarser the rock (Miall 1990; Tucker 2011)), and the thickness of each layer is represented on the vertical axis of the diagram. Typically these representations are drawn 'manually' using vector graphics editors (e.g., Adobe Illustrator®, CorelDRAW®, Inkscape). Various programs now plot SCs automatically, but there are no versatile open-source tools, and it is very difficult both to store and to analyse stratigraphic information. This document presents Stratigraphic Data Analysis in R (SDAR), an analytical package designed both to plot and to facilitate the analysis of stratigraphic data in R (R Core Team 2014). SDAR uses simple stratigraphic data and takes advantage of the flexible plotting tools available in R to produce detailed SCs. The main benefits of SDAR are: (i) it generates accurate and complete SC plots including multiple features (e.g., sedimentary structures, samples, fossil content, color, structural data, contacts between beds); (ii) it is developed in a free software environment for statistical computing and graphics; (iii) it runs on a wide variety of platforms (i.e., UNIX, Windows, and MacOS); (iv) both plotting and analysis functions can be executed directly from R's command-line interface (CLI), which enables users to integrate SDAR's functions with the many other add-on packages available for R from The Comprehensive R Archive Network (CRAN).
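As an illustration of the plotting convention described above (layer thickness on the vertical axis, column width proportional to grain size), the following small C sketch renders a text-mode stratigraphic column from a few hypothetical beds. The bed data and the grain-size scale are invented for demonstration and have nothing to do with SDAR's actual input format.

    #include <stdio.h>

    /* One bed: thickness in metres and a grain-size rank
     * (1 = clay ... 6 = gravel). Values below are invented. */
    struct bed { double thickness_m; int grain_size; const char *lithology; };

    int main(void)
    {
        struct bed column[] = {          /* listed from base to top */
            { 2.0, 1, "mudstone"  },
            { 1.5, 3, "siltstone" },
            { 3.0, 5, "sandstone" },
        };
        int n = sizeof column / sizeof column[0];

        /* Draw from the top of the column down: one text row per 0.5 m of
         * thickness, bar width proportional to grain size (coarser = wider). */
        for (int i = n - 1; i >= 0; i--) {
            int rows  = (int)(column[i].thickness_m / 0.5);
            int width = column[i].grain_size * 3;
            for (int r = 0; r < rows; r++) {
                for (int c = 0; c < width; c++) putchar('#');
                if (r == 0) printf("  %s (%.1f m)", column[i].lithology,
                                   column[i].thickness_m);
                putchar('\n');
            }
        }
        return 0;
    }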
Abstract:
Affiliation: Centre Robert-Cedergren for Bioinformatics and Genomics, Université de Montréal, & Department of Biochemistry, Université de Montréal
Abstract:
These freely available lecture notes accompany a lecture of the same name given by Prof. Dr. Lutz Wegner until the summer semester of 2007. Until 1999 it ran under the somewhat misleading title "Selected Topics in Computer Networks". The notes cover IPC in UNIX-based computer networks, including general knowledge of the process environment, the fork and exec system calls, lock files, signals, pipes, the message concept (message queues), semaphores, shared memory, remote procedure calls, sockets, and threads. Each concept is discussed with small examples written in C. The source code is available on our systems (for AIX, LINUX, Solaris). The lecture and the notes are based on the excellent book by John Shapley Gray, "Interprocess Communications in UNIX" (1998), and on the Linux-adapted edition of the same book, "Interprocess Communications in LINUX" (2003).
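The course examples themselves are not reproduced here; as a minimal sketch of the kind of small C example the notes describe, the following program connects a parent and a child process through a pipe, one of the IPC mechanisms listed above.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    /* Minimal pipe example: the parent writes a message, the child reads it. */
    int main(void)
    {
        int fd[2];
        if (pipe(fd) == -1) { perror("pipe"); exit(EXIT_FAILURE); }

        pid_t pid = fork();
        if (pid == -1) { perror("fork"); exit(EXIT_FAILURE); }

        if (pid == 0) {                      /* child: read end of the pipe */
            char buf[64];
            close(fd[1]);
            ssize_t n = read(fd[0], buf, sizeof buf - 1);
            if (n > 0) { buf[n] = '\0'; printf("child received: %s\n", buf); }
            close(fd[0]);
            _exit(EXIT_SUCCESS);
        } else {                             /* parent: write end of the pipe */
            const char *msg = "hello from the parent";
            close(fd[0]);
            if (write(fd[1], msg, strlen(msg)) == -1) perror("write");
            close(fd[1]);
            wait(NULL);                      /* reap the child */
        }
        return 0;
    }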
Abstract:
Presentation at the 1997 Dagstuhl Seminar "Evaluation of Multimedia Information Retrieval", Norbert Fuhr, Keith van Rijsbergen, Alan F. Smeaton (eds.), Dagstuhl Seminar Report 175, 14.04. - 18.04.97 (9716). - Abstract: This presentation will introduce ESCHER, a database editor which supports visualization in non-standard applications in engineering, science, tourism and the entertainment industry. It was originally based on the extended nested relational data model and is currently being extended to include object-relational properties such as inheritance, object types, integrity constraints and methods. It serves as a research platform for areas such as multimedia and visual information systems, QBE-like queries, computer-supported cooperative work (CSCW) and novel storage techniques. In its role as a visual information system, a database editor must support browsing and navigation. ESCHER provides this access to data by means of so-called fingers, which generalize the cursor paradigm of graphical and text editors. On the graphical display, a finger is reflected by a colored area which corresponds to the object the finger is currently pointing at. In a table, more than one finger may point to objects; one of them is the active finger and is used for navigating through the table. The talk will mostly concentrate on giving examples of this type of navigation and will discuss some of the architectural needs for fast object traversal and display. ESCHER is available as public-domain software from our ftp site in Kassel. The portable C source can easily be compiled for any machine running UNIX and OSF/Motif, in particular our working environments, IBM RS/6000 and Intel-based LINUX systems. A port to Tcl/Tk is under way.
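Purely to illustrate the "finger" idea described above, a generalized cursor that records which object in a nested table is currently selected, a hypothetical C sketch follows. The structure and field names are invented and do not reflect ESCHER's actual source.

    #include <stdio.h>

    /* Hypothetical sketch of a "finger": a generalized cursor into a nested
     * table. It records the row/column chosen at each nesting level, so the
     * display can highlight the object it points at and navigation can move
     * it to a neighbouring object. */
    #define MAX_DEPTH 8

    struct finger {
        int row[MAX_DEPTH];   /* row selected at each nesting level    */
        int col[MAX_DEPTH];   /* column selected at each nesting level */
        int depth;            /* how deep into the nested relation     */
        int active;           /* the active finger drives navigation   */
    };

    int main(void)
    {
        /* A finger two levels deep: row 3/column 1 of the outer table, then
         * row 0/column 2 of the nested relation stored in that cell. */
        struct finger f = { {3, 0}, {1, 2}, 2, 1 };
        printf("active=%d depth=%d -> outer (%d,%d), inner (%d,%d)\n",
               f.active, f.depth, f.row[0], f.col[0], f.row[1], f.col[1]);
        return 0;
    }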
Abstract:
The JModels suite consists of a number of models of aspects of the Earth system, all of which can be run from the JModels website. They are written in the Java language for maximum portability and are capable of running on most computing platforms, including Windows, MacOS and Unix/Linux. The models are controlled via graphical user interfaces (GUI), so no knowledge of computer programming is required to run them. The models currently available from the JModels website are: ocean phosphorus cycle; ocean nitrogen and phosphorus cycles; ocean silicon and phosphorus cycles; ocean and atmosphere carbon cycle; and an energy radiation balance model (under development). The main purpose of the models is to investigate how material and energy cycles of the Earth system are regulated and controlled by different feedbacks. While the central focus is on these feedbacks and Earth system stabilisation, the models can also be used in other ways. These resources have been developed by the National Oceanography Centre, Southampton, in a project led by Toby Tyrrell and Andrew Yool, with a focus on how the Earth system works.
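The JModels equations are not given here; as a generic illustration of the kind of feedback-regulated reservoir the text refers to, the following C sketch integrates a single well-mixed box whose sink scales with its inventory, so the reservoir relaxes toward a steady state. The values and the simple dM/dt = input - k*M form are invented placeholders, not the JModels formulation.

    #include <stdio.h>

    /* Generic one-box model: dM/dt = input - k * M. Because the sink grows with
     * the inventory M, the box is driven toward the steady state M* = input / k,
     * a simple example of a stabilising negative feedback. */
    int main(void)
    {
        double M = 1.0;            /* initial inventory (arbitrary units)    */
        const double input = 0.5;  /* constant external supply per time step */
        const double k = 0.05;     /* fractional loss per time step          */
        const double dt = 1.0;

        for (int step = 0; step <= 200; step++) {
            if (step % 50 == 0)
                printf("t=%3d  M=%.3f (steady state %.3f)\n", step, M, input / k);
            M += (input - k * M) * dt;
        }
        return 0;
    }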
Abstract:
Infographic providing a timeline of important events in the history of open-source software since the fifties. It also includes statistics on OSS licenses, usage in business, and reasons for participating in an OSS community.
Abstract:
Introduction to UNIX processes