22 results for Modula-2 (Computer program language)
Abstract:
Programming is a skill which requires knowledge of both the basic constructs of the computer language used and techniques employing these constructs. How these are used in any given application is determined intuitively, and this intuition is based on experience of programs already written. One aim of this book is to describe the techniques and give practical examples of the techniques in action - to provide some experience. Another aim of the book is to show how a program should be developed, in particular how a relatively large program should be tackled in a structured manner. These aims are accomplished essentially by describing the writing of one large program, a diagram generator package, in which a number of useful programming techniques are employed. Also, the book provides a useful program, with an in-built manual describing not only what the program does, but also how it does it, with full source code listings. This means that the user can, if required, modify the package to meet particular requirements. A floppy disk is available from the publishers containing the program, including listings of the source code. All the programs are written in Modula-2, using JPI's Top Speed Modula-2 system running on IBM-PCs and compatibles. This language was chosen as it is an ideal language for implementing large programs and it is the main language taught in the Cybernetics Department at the University of Reading. There are some aspects of the Top Speed implementation which are not standard, so suitable comments are given when these occur. Although implemented in Modula-2, many of the techniques described here are appropriate to other languages, such as Pascal or C. The book and programs are based on a second-year undergraduate course taught at Reading to Cybernetics students, entitled Algorithms and Data Structures. Useful techniques are described for the reader to use, and applications where they are appropriate are recommended, but detailed analyses of the techniques are not given.
Abstract:
Individual identification via DNA profiling is important in molecular ecology, particularly in the case of noninvasive sampling. A key quantity in determining the number of loci required is the probability of identity (PI_ave), the probability of observing two copies of any profile in the population. Previously this has been calculated assuming no inbreeding or population structure. Here we introduce formulae that account for these factors, whilst also accounting for relatedness structure in the population. These formulae are implemented in API-CALC 1.0, which calculates PI_ave for either a specified value or a range of values of F_IS and F_ST.
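The abstract does not reproduce the formulae themselves. As a point of reference, the classical per-locus probability of identity for an outbred, unstructured population (the baseline that API-CALC 1.0 extends with F_IS and F_ST corrections) can be sketched as below; the function names and allele frequencies are illustrative only.

```python
from itertools import combinations

def pi_locus(allele_freqs):
    """Classical per-locus probability of identity under Hardy-Weinberg
    equilibrium: PI = sum_i p_i^4 + sum_{i<j} (2 p_i p_j)^2.
    This is the baseline case with no inbreeding (F_IS = 0) and no
    population structure (F_ST = 0); the corrections implemented in
    API-CALC 1.0 are not reproduced here."""
    homozygotes = sum(p ** 4 for p in allele_freqs)
    heterozygotes = sum((2 * p * q) ** 2 for p, q in combinations(allele_freqs, 2))
    return homozygotes + heterozygotes

def pi_multilocus(loci):
    """Multi-locus PI assuming independent loci: product of per-locus values."""
    total = 1.0
    for freqs in loci:
        total *= pi_locus(freqs)
    return total

# Hypothetical allele frequencies at three microsatellite loci.
loci = [
    [0.4, 0.3, 0.2, 0.1],
    [0.5, 0.25, 0.25],
    [0.6, 0.2, 0.1, 0.1],
]
print(pi_multilocus(loci))
```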
Abstract:
Selection rules and matrix elements are derived for Coriolis interactions between vibrational levels due to rotation about (x, y) axes in symmetric top molecules. The theory is developed in detail for the case of interaction between an A1 and an E species vibrational level in a C3v molecule; perturbations to both the positions and the intensities of the rovibration transitions in the spectrum are considered. A computer program has been written which calculates exactly the perturbed spectrum of two interacting rovibration bands according to this model, the results being presented directly by a graph plotter connected to the computer. This has been used to interpret perturbations observed in two pairs of interacting fundamentals in the spectrum of CH3F (ν2 - ν5 and ν3 - ν6) and one pair in CD3Cl (ν2 - ν5). The resulting analysis of the observed spectrum leads to new values for some vibration-rotation interaction constants and also leads to a unique determination of the sign relationship between the dipole moment derivatives in each pair of interacting normal vibrations. These sign relations are summarized in Figs. 8, 12, and 15.
Abstract:
The vibration-rotation Raman spectrum of the ν2 and ν5 fundamentals of CH3F is reported, from 1320 to 1640 cm−1, with a resolution of about 0.3 cm−1. The Coriolis resonance between the two bands leads to many perturbation-allowed transitions. Where the resonance is still sufficiently weak that the quantum number K′ retains its meaning, perturbation-allowed transitions are observed for all values of ΔK from +4 to −4; in regions of strong resonance, however, we can only say that the observed transitions obey the selection rule Δ(k−l) = 0 or ±3. The spectrum has been analyzed by band contour simulation using a computer program based on exact diagonalization of the Hamiltonian within the ν2, ν5 vibrational levels, and improved vibration-rotation constants for these bands are reported. The relative magnitudes and relative signs of polarizability derivatives involved in these vibrations are also reported.
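The working equations are not given in the abstract, but the heart of such a band-contour calculation is the exact diagonalization of a small effective Hamiltonian coupling near-degenerate rovibrational levels. A minimal two-level sketch is shown below; the function name, the energies and the Coriolis coupling element are arbitrary illustrative values, not constants from the CH3F analysis.

```python
import numpy as np

def coriolis_two_level(e1, e2, w):
    """Diagonalize a 2x2 effective Hamiltonian for two interacting
    rovibrational levels: H = [[e1, w], [w, e2]], all quantities in cm^-1.
    Returns the perturbed term values and the mixing coefficients; the
    squared mixing coefficients also govern how intensity is shared between
    the perturbed transitions."""
    h = np.array([[e1, w], [w, e2]], dtype=float)
    energies, vectors = np.linalg.eigh(h)  # exact diagonalization
    return energies, vectors

# Arbitrary illustrative numbers, not values from the CH3F analysis.
levels, mixing = coriolis_two_level(1475.0, 1468.0, 3.5)
print(levels)   # perturbed level positions
print(mixing)   # columns are the mixed eigenvectors
```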
Abstract:
The rovibration partition function of CH4 was calculated in the temperature range of 100-1000 K using well-converged energy levels that were calculated by vibrational-rotational configuration interaction using the Watson Hamiltonian for total angular momenta J=0-50 and the MULTIMODE computer program. The configuration state functions are products of ground-state occupied and virtual modals obtained using the vibrational self-consistent field method. The Gilbert and Jordan potential energy surface was used for the calculations. The resulting partition function was used to test the harmonic oscillator approximation and the separable-rotation approximation. The harmonic oscillator, rigid-rotator approximation is in error by a factor of 2.3 at 300 K, but we also propose a separable-rotation approximation that is accurate within 2% from 100 to 1000 K.
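For orientation, the basic operation described, a direct sum of Boltzmann factors over computed rovibrational levels with their (2J + 1) rotational degeneracy, can be sketched as follows. The level list is hypothetical, and nuclear-spin statistical weights, which matter for CH4, are omitted for simplicity.

```python
import math

K_CM = 0.6950348  # Boltzmann constant in cm^-1 per kelvin (rounded)

def partition_function(levels, temperature):
    """Rovibrational partition function Q(T) = sum_i g_i * exp(-E_i / (k_B T)),
    with energies E_i in cm^-1 measured from the lowest level and degeneracies
    g_i = (2J_i + 1) for the rotational part. Nuclear-spin weights are
    deliberately ignored in this illustrative sketch."""
    return sum((2 * j + 1) * math.exp(-energy / (K_CM * temperature))
               for j, energy in levels)

# Hypothetical (J, E/cm^-1) pairs standing in for converged energy levels.
levels = [(0, 0.0), (1, 10.5), (2, 31.4), (3, 62.9)]
print(partition_function(levels, 300.0))
```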
Abstract:
Once unit-cell dimensions have been determined from a powder diffraction data set and therefore the crystal system is known (e.g. orthorhombic), the method presented by Markvardsen, David, Johnson & Shankland [Acta Cryst. (2001), A57, 47-54] can be used to generate a table ranking the extinction symbols of the given crystal system according to probability. Markvardsen et al. tested a computer program (ExtSym) implementing the method against Pawley refinement outputs generated using the TF12LS program [David, Ibberson & Matthewman (1992). Report RAL-92-032. Rutherford Appleton Laboratory, Chilton, Didcot, Oxon, UK]. Here, it is shown that ExtSym can be used successfully with many well known powder diffraction analysis packages, namely DASH [David, Shankland, van de Streek, Pidcock, Motherwell & Cole (2006). J. Appl. Cryst. 39, 910-915], FullProf [Rodriguez-Carvajal (1993). Physica B, 192, 55-69], GSAS [Larson & Von Dreele (1994). Report LAUR 86-748. Los Alamos National Laboratory, New Mexico, USA], PRODD [Wright (2004). Z. Kristallogr. 219, 1-11] and TOPAS [Coelho (2003). Bruker AXS GmbH, Karlsruhe, Germany]. In addition, a precise description of the optimal input for ExtSym is given to enable other software packages to interface with ExtSym and to allow the improvement/modification of existing interfacing scripts. ExtSym takes as input the powder data in the form of integrated intensities and error estimates for these intensities. The output returned by ExtSym is demonstrated to be strongly dependent on the accuracy of these error estimates and the reason for this is explained. ExtSym is tested against a wide range of data sets, confirming the algorithm to be very successful at ranking the published extinction symbol as the most likely.
Abstract:
In 1989, the computer programming language POP-11 was 21 years old. This book looks at the reasons behind its invention, and traces its rise from an experimental language to a major AI language, playing a major part in many innovative projects. There is a chapter on the inventor of the language, Robin Popplestone, and a discussion of the applications of POP-11 in a variety of areas. The efficiency of AI programming is covered, along with a comparison between POP-11 and other programming languages. The book concludes by reviewing the standardization of POP-11 into POP91.
Abstract:
Natural ventilation relies on natural driving forces that are less controllable, so it requires more careful artificial control, and its prediction, design and analysis therefore become more important. This paper presents both theoretical and numerical simulations for predicting the natural ventilation flow in a two-zone building with multiple openings that is subject to combined natural forces. To our knowledge, these are the first analytical solutions obtained so far for a building with more than one zone, where each zone may have more than two openings. The analytical solution offers a possibility for validating a multi-zone airflow program. The computer program MIX is employed to conduct the numerical simulation, and good agreement is achieved. Different airflow modes are identified and some design recommendations are also provided.
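The analytical solutions themselves are not reproduced in the abstract. As background, multi-zone airflow models of this kind typically relate the flow through each opening to the pressure difference across it with an orifice equation and then enforce mass balance in every zone; a minimal sketch of the orifice relation, with an assumed discharge coefficient and air density, is given below.

```python
import math

RHO_AIR = 1.2   # kg/m^3, assumed air density
C_D = 0.6       # assumed discharge coefficient

def opening_flow(area_m2, delta_p_pa):
    """Volume flow rate (m^3/s) through a single opening for a given pressure
    difference, using the orifice equation Q = Cd * A * sqrt(2*|dP|/rho).
    The sign of the result follows the sign of the pressure difference.
    A multi-zone model solves for the zone pressures that make the flows
    through all openings of each zone balance."""
    magnitude = C_D * area_m2 * math.sqrt(2.0 * abs(delta_p_pa) / RHO_AIR)
    return math.copysign(magnitude, delta_p_pa)

# Illustrative numbers only: a 0.5 m^2 opening with a 2 Pa pressure difference.
print(opening_flow(0.5, 2.0))
```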
Abstract:
In the summer of 1982, the ICLCUA CAFS Special Interest Group defined three subject areas for working party activity. These were: 1) interfaces with compilers and databases, 2) end-user language facilities and display methods, and 3) text-handling and office automation. The CAFS SIG convened one working party to address the first subject with the following terms of reference: 1) review facilities and map requirements onto them, 2) "Database or CAFS" or "Database on CAFS", 3) training needs for users to bridge to new techniques, and 4) repair specifications to cover gaps in software. The working party interpreted the topic broadly as the data processing professional's, rather than the end-user's, view of and relationship with CAFS. This report is the result of the working party's activities. For good reasons, the report's content exceeds the terms of reference in their strictest sense. For example, we examine QUERYMASTER, which ICL deems an end-user tool, from both the DP and end-user perspectives. First, it is the only interface to CAFS in the current SV201. Second, it is necessary for the DP department to understand the end-user's interface to CAFS. Third, the other subjects have not yet been addressed by other active working parties.
Abstract:
Genetic data obtained on population samples convey information about their evolutionary history. Inference methods can extract part of this information but they require sophisticated statistical techniques that have been made available to the biologist community (through computer programs) only for simple and standard situations typically involving a small number of samples. We propose here a computer program (DIY ABC) for inference based on approximate Bayesian computation (ABC), in which scenarios can be customized by the user to fit many complex situations involving any number of populations and samples. Such scenarios involve any combination of population divergences, admixtures and population size changes. DIY ABC can be used to compare competing scenarios, estimate parameters for one or more scenarios and compute bias and precision measures for a given scenario and known values of parameters (the current version applies to unlinked microsatellite data). This article describes the key methods used in the program and outlines its main features. The analysis of one simulated and one real dataset, both with complex evolutionary scenarios, illustrates the main possibilities of DIY ABC.
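The abstract does not spell out the ABC machinery itself. The generic rejection form of approximate Bayesian computation, on which programs of this kind build (DIY ABC adds user-defined scenarios, scenario comparison and other refinements not shown here), can be sketched as follows; the toy model and parameter names are illustrative.

```python
import random

def abc_rejection(observed_stats, prior_sampler, simulate, distance,
                  n_draws=10000, tolerance=0.1):
    """Generic ABC rejection sampler: draw parameters from the prior, simulate
    data, and keep the draws whose summary statistics fall within `tolerance`
    of the observed statistics. This is only the textbook rejection scheme,
    not the full method implemented in DIY ABC."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sampler()
        stats = simulate(theta)
        if distance(stats, observed_stats) <= tolerance:
            accepted.append(theta)
    return accepted

# Toy example: infer the mean of a normal distribution from its sample mean.
observed = 1.3
posterior = abc_rejection(
    observed_stats=observed,
    prior_sampler=lambda: random.uniform(-5.0, 5.0),
    simulate=lambda mu: sum(random.gauss(mu, 1.0) for _ in range(50)) / 50,
    distance=lambda a, b: abs(a - b),
)
print(len(posterior), sum(posterior) / max(len(posterior), 1))
```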
Abstract:
We describe a general likelihood-based 'mixture model' for inferring phylogenetic trees from gene-sequence or other character-state data. The model accommodates cases in which different sites in the alignment evolve in qualitatively distinct ways, but does not require prior knowledge of these patterns or partitioning of the data. We call this qualitative variability in the pattern of evolution across sites "pattern-heterogeneity" to distinguish it both from a homogeneous process of evolution and from one characterized principally by differences in rates of evolution. We present studies to show that the model correctly retrieves the signals of pattern-heterogeneity from simulated gene-sequence data, and we apply the method to protein-coding genes and to a ribosomal 12S data set. The mixture model outperforms conventional partitioning in both these data sets. We implement the mixture model such that it can simultaneously detect rate- and pattern-heterogeneity. The model simplifies to a homogeneous model or a rate-variability model as special cases, and therefore always performs at least as well as these two approaches, and often considerably improves upon them. We make the model available within a Bayesian Markov-chain Monte Carlo framework for phylogenetic inference, as an easy-to-use computer program.
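The central likelihood combination of such a site-mixture model, a weighted sum over pattern components at each site with the product over sites taken on a log scale, can be sketched as below. The per-site likelihoods under each component would in practice come from a tree-likelihood calculation (e.g. Felsenstein pruning), which is not reproduced here; the numbers shown are invented.

```python
import math

def mixture_log_likelihood(site_likelihoods, weights):
    """Log-likelihood of an alignment under a site mixture model:
    L = prod_over_sites( sum_k w_k * P(site | component k) ).
    `site_likelihoods` is a list of per-site lists, one likelihood per mixture
    component (each would normally be computed by pruning on the tree);
    `weights` are the mixture proportions and should sum to 1."""
    log_l = 0.0
    for per_component in site_likelihoods:
        site_l = sum(w * l for w, l in zip(weights, per_component))
        log_l += math.log(site_l)
    return log_l

# Hypothetical per-site likelihoods under two pattern components.
sites = [[0.012, 0.004], [0.001, 0.009], [0.020, 0.018]]
print(mixture_log_likelihood(sites, weights=[0.7, 0.3]))
```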
Abstract:
When a computer program requires legitimate access to confidential data, the question arises whether such a program may illegally reveal sensitive information. This paper proposes a policy model to specify what information flow is permitted in a computational system. The security definition, which is based on a general notion of information lattices, allows various representations of information to be used in the enforcement of secure information flow in deterministic or nondeterministic systems. A flexible semantics-based analysis technique is presented, which uses the input-output relational model induced by an attacker's observational power to compute the information released by the computational system. An illustrative attacker model demonstrates the use of the technique to develop a termination-sensitive analysis. The technique allows the development of various information flow analyses, parametrised by the attacker's observational power, which can be used to enforce 'what' declassification policies.
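The notion of an information lattice referred to here follows the standard lattice model of secure information flow, in which information may only flow from a label to one at least as high in the lattice order. A minimal two-point sketch, with labels and ordering chosen purely for illustration rather than taken from the paper's formalism, is:

```python
# Minimal lattice of security labels: LOW is below HIGH.
LOW, HIGH = "low", "high"
ORDER = {(LOW, LOW), (LOW, HIGH), (HIGH, HIGH)}   # pairs (a, b) with a below-or-equal b

def flow_permitted(source_label, target_label):
    """A flow from source to target is permitted only when the source label is
    below (or equal to) the target label in the lattice; information may move
    up the lattice but never down. Declassification policies relax this by
    naming what information may move down."""
    return (source_label, target_label) in ORDER

def join(a, b):
    """Least upper bound of two labels in this two-point lattice: the label of
    any value computed from both inputs."""
    return HIGH if HIGH in (a, b) else LOW

print(flow_permitted(LOW, HIGH))    # True: public data may flow to a secret sink
print(flow_permitted(HIGH, LOW))    # False: secret data may not flow to a public sink
print(join(LOW, HIGH))              # 'high'
```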
Abstract:
This paper summarizes the theory of simple cumulative risks, for example the risk of food poisoning from the consumption of a series of portions of tainted food. Problems concerning such risks are extraordinarily difficult for naïve individuals, and the paper explains the reasons for this difficulty. It describes how naïve individuals usually attempt to estimate cumulative risks, and it outlines a computer program that models these methods. This account predicts that estimates can be improved if problems of cumulative risk are framed so that individuals can focus on the appropriate subset of cases. The paper reports two experiments that corroborated this prediction. They also showed that whether problems are stated in terms of frequencies (80 out of 100 people got food poisoning) or in terms of percentages (80% of people got food poisoning) did not reliably affect accuracy.
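The normative arithmetic behind such cumulative risks is the complement rule: if each portion independently carries probability p of causing food poisoning, the risk over n portions is 1 - (1 - p)^n. A small sketch with illustrative numbers:

```python
def cumulative_risk(per_event_probability, n_events):
    """Probability of at least one occurrence over n independent events,
    1 - (1 - p)^n: the complement of the event never happening."""
    return 1.0 - (1.0 - per_event_probability) ** n_events

# E.g. a 5% risk per portion over 10 portions of tainted food (illustrative numbers).
print(cumulative_risk(0.05, 10))   # about 0.40
```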
Abstract:
This study investigates transfer at the third-language (L3) initial state, testing between the following possibilities: (1) the first language (L1) transfer hypothesis (an L1 effect for all adult acquisition), (2) the second language (L2) transfer hypothesis, where the L2 blocks L1 transfer (often referred to in the recent literature as the ‘L2 status factor’; Williams and Hammarberg, 1998), and (3) the Cumulative Enhancement Model (Flynn et al., 2004), which proposes selective transfer from all previous linguistic knowledge. We provide data from successful English-speaking learners of L2 Spanish at the initial state of acquiring L3 French and L3 Italian relating to properties of the Null-Subject Parameter (e.g. Chomsky, 1981; Rizzi, 1982). We compare these groups to each other, as well as to groups of English learners of L2 French and L2 Italian at the initial state, and conclude that the data are consistent with the predictions of the ‘L2 status factor’. However, we discuss an alternative possible interpretation based on (psycho)typologically-motivated transfer (borrowing from Kellerman, 1983), providing a methodology for future research in this domain to meaningfully tease apart the ‘L2 status factor’ from this alternative account.