60 results for Precise Description
in CentAUR: Central Archive, University of Reading - UK
Abstract:
Once unit-cell dimensions have been determined from a powder diffraction data set and therefore the crystal system is known (e.g. orthorhombic), the method presented by Markvardsen, David, Johnson & Shankland [Acta Cryst. (2001), A57, 47-54] can be used to generate a table ranking the extinction symbols of the given crystal system according to probability. Markvardsen et al. tested a computer program (ExtSym) implementing the method against Pawley refinement outputs generated using the TF12LS program [David, Ibberson & Matthewman (1992). Report RAL-92-032. Rutherford Appleton Laboratory, Chilton, Didcot, Oxon, UK]. Here, it is shown that ExtSym can be used successfully with many well-known powder diffraction analysis packages, namely DASH [David, Shankland, van de Streek, Pidcock, Motherwell & Cole (2006). J. Appl. Cryst. 39, 910-915], FullProf [Rodriguez-Carvajal (1993). Physica B, 192, 55-69], GSAS [Larson & Von Dreele (1994). Report LAUR 86-748. Los Alamos National Laboratory, New Mexico, USA], PRODD [Wright (2004). Z. Kristallogr. 219, 1-11] and TOPAS [Coelho (2003). Bruker AXS GmbH, Karlsruhe, Germany]. In addition, a precise description of the optimal input for ExtSym is given to enable other software packages to interface with ExtSym and to allow the improvement/modification of existing interfacing scripts. ExtSym takes as input the powder data in the form of integrated intensities and error estimates for these intensities. The output returned by ExtSym is demonstrated to be strongly dependent on the accuracy of these error estimates, and the reason for this is explained. ExtSym is tested against a wide range of data sets, confirming the algorithm to be very successful at ranking the published extinction symbol as the most likely. (C) 2008 International Union of Crystallography. Printed in Singapore. All rights reserved.
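The dependence on error estimates can be illustrated with a schematic sketch. This is not the ExtSym algorithm itself, and all intensities, model values and sigmas below are invented: two candidate extinction hypotheses are ranked by the Gaussian log-likelihood of a set of integrated intensities, and inflating the error estimates collapses the margin between them.

```python
import math

# Schematic only: not ExtSym's algorithm. Rank two extinction hypotheses by
# Gaussian log-likelihood of integrated intensities; all numbers are invented.

def log_likelihood(obs, model, sigma):
    """Sum of Gaussian log-densities of observed intensities about a model."""
    return sum(-0.5 * ((o - m) / s) ** 2 - math.log(s * math.sqrt(2 * math.pi))
               for o, m, s in zip(obs, model, sigma))

obs = [10.2, 0.3, 9.8, 0.4, 10.1]        # integrated intensities from a fit
absent = [10.0, 0.0, 10.0, 0.0, 10.0]    # hypothesis: reflections 2, 4 extinct
present = [10.0, 5.0, 10.0, 5.0, 10.0]   # hypothesis: no systematic absences

good = [0.3] * 5                         # realistic error estimates
bad = [3.0] * 5                          # grossly inflated error estimates

margin_good = log_likelihood(obs, absent, good) - log_likelihood(obs, present, good)
margin_bad = log_likelihood(obs, absent, bad) - log_likelihood(obs, present, bad)
assert margin_good > margin_bad > 0.0    # poor sigmas blur the ranking
```

Because the log-likelihood margin scales with 1/sigma², overstating the uncertainties by a factor of ten shrinks the discrimination between hypotheses a hundredfold, which is consistent with the strong dependence on error-estimate accuracy reported above.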
Abstract:
It is generally assumed that the variability of neuronal morphology has an important effect on both the connectivity and the activity of the nervous system, but this effect has not been thoroughly investigated. Neuroanatomical archives represent a crucial tool to explore structure–function relationships in the brain. We are developing computational tools to describe, generate, store and render large sets of three-dimensional neuronal structures in a format that is compact, quantitative, accurate and readily accessible to the neuroscientist. Single-cell neuroanatomy can be characterized quantitatively at several levels. In computer-aided neuronal tracing files, a dendritic tree is described as a series of cylinders, each represented by diameter, spatial coordinates and the connectivity to other cylinders in the tree. This ‘Cartesian’ description constitutes a completely accurate mapping of dendritic morphology but it bears little intuitive information for the neuroscientist. In contrast, a classical neuroanatomical analysis characterizes neuronal dendrites on the basis of the statistical distributions of morphological parameters, e.g. maximum branching order or bifurcation asymmetry. This description is intuitively more accessible, but it only yields information on the collective anatomy of a group of dendrites, i.e. it is not complete enough to provide a precise ‘blueprint’ of the original data. We are adopting a third, intermediate level of description, which consists of the algorithmic generation of neuronal structures within a certain morphological class based on a set of ‘fundamental’, measured parameters. This description is as intuitive as a classical neuroanatomical analysis (parameters have an intuitive interpretation), and as complete as a Cartesian file (the algorithms generate and display complete neurons). The advantages of the algorithmic description of neuronal structure are immense.
If an algorithm can measure the values of a handful of parameters from an experimental database and generate virtual neurons whose anatomy is statistically indistinguishable from that of their real counterparts, a great deal of data compression and amplification can be achieved. Data compression results from the quantitative and complete description of thousands of neurons with a handful of statistical distributions of parameters. Data amplification is possible because, from a set of experimental neurons, many more virtual analogues can be generated. This approach could allow one, in principle, to create and store a neuroanatomical database containing data for an entire human brain in a personal computer. We are using two programs, L-NEURON and ARBORVITAE, to investigate systematically the potential of several different algorithms for the generation of virtual neurons. Using these programs, we have generated anatomically plausible virtual neurons for several morphological classes, including guinea pig cerebellar Purkinje cells and cat spinal cord motor neurons. These virtual neurons are stored in an online electronic archive of dendritic morphology. This process highlights the potential and the limitations of the ‘computational neuroanatomy’ strategy for neuroscience databases.
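The ‘Cartesian’ cylinder description mentioned above can be made concrete with a small sketch. The field names and helper below are our own illustration, loosely modelled on SWC-style tracing files, not any specific archive's format; it also shows how a ‘classical’ statistic such as branching order falls out of the same data.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative only: a dendritic tree stored as connected cylinders, each with
# a diameter, spatial coordinates and a link to its parent cylinder.

@dataclass
class Cylinder:
    ident: int
    x: float
    y: float
    z: float
    diameter: float
    parent: Optional[int]          # id of the parent cylinder (None at the root)

def branch_order(tree: dict, ident: int) -> int:
    """A classical morphological statistic: ancestors between a cylinder and the root."""
    order = 0
    while tree[ident].parent is not None:
        ident = tree[ident].parent
        order += 1
    return order

# A toy dendrite: soma -> stem -> two daughter branches.
cyls = [Cylinder(1, 0, 0, 0, 10.0, None),
        Cylinder(2, 5, 0, 0, 2.0, 1),
        Cylinder(3, 8, 2, 0, 1.0, 2),
        Cylinder(4, 8, -2, 0, 1.0, 2)]
tree = {c.ident: c for c in cyls}
assert branch_order(tree, 4) == 2
```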
Abstract:
The automatic transformation of sequential programs for efficient execution on parallel computers involves a number of analyses and restructurings of the input. Some of these analyses are based on computing array sections, a compact description of a range of array elements. Array sections describe the set of array elements that are either read or written by program statements. These sections can be compactly represented using shape descriptors such as regular sections, simple sections, or generalized convex regions. However, binary operations such as Union performed on these representations do not satisfy a straightforward closure property, e.g., if the operands to Union are convex, the result may be nonconvex. Approximations are resorted to in order to satisfy this closure property. These approximations introduce imprecision in the analyses and, furthermore, the imprecisions resulting from successive operations have a cumulative effect. Delayed merging is a technique suggested and used in some of the existing analyses to minimize the effects of approximation. However, this technique does not guarantee an exact solution in a general setting. This article presents a generalized technique to precisely compute Union, which can overcome these imprecisions.
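The closure problem described above can be seen in one dimension with a minimal sketch (our own illustration, not the article's technique): the exact union of two disjoint regular sections is non-convex, so a convex representation must over-approximate it, admitting elements that neither operand contains.

```python
# Illustrative only: 1-D regular sections as closed intervals [lo, hi].
# Unioning two disjoint intervals exactly gives a non-convex set, so a
# convex shape descriptor must fall back to the enclosing interval.

def convex_union(a, b):
    """Over-approximate the union of two intervals by their convex hull."""
    return (min(a[0], b[0]), max(a[1], b[1]))

a, b = (0, 3), (10, 13)            # sections accessed by two statements
hull = convex_union(a, b)          # the approximation used to stay convex

exact = set(range(a[0], a[1] + 1)) | set(range(b[0], b[1] + 1))
approx = set(range(hull[0], hull[1] + 1))
spurious = approx - exact          # elements introduced by the approximation
assert hull == (0, 13)
assert spurious == set(range(4, 10))
```

Each further Union against such a hull can only widen it, which is the cumulative imprecision the abstract refers to.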
Abstract:
FAMOUS is an ocean-atmosphere general circulation model of low resolution, capable of simulating approximately 120 years of model climate per wallclock day using current high performance computing facilities. It uses most of the same code as HadCM3, a widely used climate model of higher resolution and computational cost, and has been tuned to reproduce the same climate reasonably well. FAMOUS is useful for climate simulations where the computational cost makes the application of HadCM3 unfeasible, either because of the length of simulation or the size of the ensemble desired. We document a number of scientific and technical improvements to the original version of FAMOUS. These improvements include changes to the parameterisations of ozone and sea-ice which alleviate a significant cold bias from high northern latitudes and the upper troposphere, and the elimination of volume-averaged drifts in ocean tracers. A simple model of the marine carbon cycle has also been included. A particular goal of FAMOUS is to conduct millennial-scale paleoclimate simulations of Quaternary ice ages; to this end, a number of useful changes to the model infrastructure have been made.
Abstract:
This article describes the development and evaluation of the U.K.’s new High-Resolution Global Environmental Model (HiGEM), which is based on the latest climate configuration of the Met Office Unified Model, known as the Hadley Centre Global Environmental Model, version 1 (HadGEM1). In HiGEM, the horizontal resolution has been increased to 0.83° latitude × 1.25° longitude for the atmosphere, and 1/3° × 1/3° globally for the ocean. Multidecadal integrations of HiGEM, and the lower-resolution HadGEM, are used to explore the impact of resolution on the fidelity of climate simulations. Generally, SST errors are reduced in HiGEM. Cold SST errors associated with the path of the North Atlantic drift improve, and warm SST errors are reduced in upwelling stratocumulus regions where the simulation of low-level cloud is better at higher resolution. The ocean model in HiGEM allows ocean eddies to be partially resolved, which dramatically improves the representation of sea surface height variability. In the Southern Ocean, most of the heat transport in HiGEM is achieved by resolved eddy motions, which replace the parameterized eddy heat transport in the lower-resolution model. HiGEM is also able to more realistically simulate small-scale features in the wind stress curl around islands and oceanic SST fronts, which may have implications for oceanic upwelling and ocean biology. Higher resolution in both the atmosphere and the ocean allows coupling to occur on small spatial scales. In particular, the small-scale interaction recently seen in satellite imagery between the atmosphere and tropical instability waves in the tropical Pacific Ocean is realistically captured in HiGEM. Tropical instability waves play a role in improving the simulation of the mean state of the tropical Pacific, which has important implications for climate variability.
In particular, all aspects of the simulation of ENSO (spatial patterns, the time scales at which ENSO occurs, and global teleconnections) are much improved in HiGEM.
Abstract:
Perchlorate-reducing bacteria fractionate chlorine stable isotopes, giving a powerful approach to monitor the extent of microbial consumption of perchlorate in contaminated sites undergoing remediation or in natural perchlorate-containing sites. This study reports the full experimental data and methodology used to re-evaluate the chlorine isotope fractionation of perchlorate reduction (Δ37Cl(Cl⁻–ClO4⁻)) in duplicate culture experiments of Azospira suillum strain PS at 37 degrees C, previously reported without a supporting data set by Coleman et al. [Coleman, M.L., Ader, M., Chaudhuri, S., Coates, J.D., 2003. Microbial Isotopic Fractionation of Perchlorate Chlorine. Appl. Environ. Microbiol. 69, 4997-5000] in a reconnaissance study, with the goal of increasing the accuracy and precision of the isotopic fractionation determination. The method, fully described here for the first time, allows the determination of a higher-precision Δ37Cl(Cl⁻–ClO4⁻) value, either from the accumulated chloride content and isotopic composition or from the residual perchlorate content and isotopic composition. The result sets agree perfectly, within error, giving an average Δ37Cl(Cl⁻–ClO4⁻) = -14.94 ± 0.15‰. Complementary use of chloride and perchlorate data allowed the identification and rejection of poor-quality data by applying mass and isotopic balance checks. This precise Δ37Cl(Cl⁻–ClO4⁻) value can serve as a reference point for comparison with future in situ or microcosm studies, but we also note its similarity to the theoretical equilibrium isotopic fractionation between a hypothetical chlorine species of redox state +6 and perchlorate at 37 degrees C, and suggest that the first electron transfer during perchlorate reduction may occur at isotopic equilibrium between an enzyme-bound chlorine and perchlorate. (C) 2008 Elsevier B.V. All rights reserved.
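How an enrichment factor of this kind translates into measurable isotopic shifts can be sketched with the standard Rayleigh fractionation model (the sketch is ours, not from the paper; the initial composition is an assumed illustrative value, and only the enrichment factor echoes the magnitude reported above).

```python
# Illustrative Rayleigh model: delta-37Cl of residual perchlorate as a
# function of the fraction f remaining, for an enrichment factor epsilon.
# delta0 is an assumed starting composition, not a measured value.

def delta_residual(delta0, f, epsilon):
    """Rayleigh distillation: delta of the residual pool after fraction f remains."""
    return (delta0 + 1000.0) * f ** (epsilon / 1000.0) - 1000.0

epsilon = -14.94        # per mil, of the order reported in the study
delta0 = 0.0            # per mil, assumed initial delta-37Cl of perchlorate

# As reduction proceeds (f falls), the residual perchlorate grows heavier,
# because the light isotope is preferentially consumed (epsilon < 0).
d_half = delta_residual(delta0, 0.5, epsilon)
assert d_half > delta0
```

At 50% consumption the residual pool is enriched by roughly 10‰, which is why the extent of microbial perchlorate consumption can be monitored isotopically.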
Abstract:
Microbial processes in soil are moisture, nutrient and temperature dependent and, consequently, accurate calculation of soil temperature is important for modelling nitrogen processes. Microbial activity in soil occurs even at sub-zero temperatures so that, in northern latitudes, a method to calculate soil temperature under snow cover and in frozen soils is required. This paper describes a new and simple model to calculate daily values for soil temperature at various depths in both frozen and unfrozen soils. The model requires four parameters: average soil thermal conductivity, specific heat capacity of soil, specific heat capacity due to freezing and thawing, and an empirical snow parameter. Precipitation, air temperature and snow depth (measured or calculated) are needed as input variables. The proposed model was applied to five sites in different parts of Finland representing different climates and soil types. Observed soil temperatures at depths of 20 and 50 cm (September 1981-August 1990) were used for model calibration. The calibrated model was then tested using observed soil temperatures from September 1990 to August 2001. R² values for the calibration period varied between 0.87 and 0.96 at a depth of 20 cm and between 0.78 and 0.97 at 50 cm. R² values for the testing period were between 0.87 and 0.94 at a depth of 20 cm and between 0.80 and 0.98 at 50 cm. Thus, despite the simplifications made, the model was able to simulate soil temperature at these study sites. This simple model simulates soil temperature well in the uppermost soil layers, where most of the nitrogen processes occur. The small number of parameters required means that the model is suitable for addition to catchment-scale models.
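A daily update of this general kind can be sketched as follows. This is emphatically not the paper's equations: the structure (conduction toward air temperature, an extra apparent heat capacity near 0 °C to represent freezing and thawing, and exponential damping of the signal under snow) and every parameter value are assumptions for illustration only.

```python
import math

# Hypothetical sketch of a four-parameter daily soil-temperature model.
# All values are illustrative, not calibrated.

def step(T_soil, T_air, snow_depth_m, depth_m,
         k=0.8, C=1.0e6, C_ice=4.0e6, f_snow=-3.0, dt=86400.0):
    """Advance soil temperature (degC) at a given depth by one day."""
    # Extra apparent heat capacity while water freezes/thaws near 0 degC:
    C_app = C + (C_ice if -5.0 < T_soil < 0.0 else 0.0)
    # Relax toward the air temperature by heat conduction:
    T_new = T_soil + dt * k / (C_app * (2.0 * depth_m) ** 2) * (T_air - T_soil)
    # Snow cover damps the soil temperature toward 0 degC:
    if snow_depth_m > 0.0:
        T_new *= math.exp(f_snow * snow_depth_m)
    return T_new

T = 5.0
for T_air in [2.0, 0.0, -4.0, -8.0]:   # a snow-free cooling spell
    T = step(T, T_air, snow_depth_m=0.0, depth_m=0.2)
# Latent-heat buffering keeps the soil just below zero while the air is at -8.
assert -2.0 < T < 0.0
```

The same mechanism explains why a freezing/thawing heat capacity and a snow parameter are needed at northern latitudes: without them, modelled soil temperature would track air temperature far too closely in winter.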
Abstract:
Previous work has established the value of goal-oriented approaches to requirements engineering. Achieving clarity and agreement about stakeholders’ goals and assumptions is critical for building successful software systems and managing their subsequent evolution. In general, this decision-making process requires stakeholders to understand the implications of decisions outside the domains of their own expertise. Hence it is important to support goal negotiation and decision making with description languages that are both precise and expressive, yet easy to grasp. This paper presents work in progress to develop a pattern language for describing goal refinement graphs. The language has a simple graphical notation, which is supported by a prototype editor tool, and a symbolic notation based on modal logic.
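A goal refinement graph of the kind the paper's pattern language describes can be represented minimally as follows (the data structure and AND/OR satisfaction rule below are a generic sketch of goal modelling, not the paper's notation or tool).

```python
from dataclasses import dataclass, field

# Illustrative only: a goal refined into subgoals. An AND-refined goal is
# satisfied only when all of its subgoals are; an OR-refined goal when any is.

@dataclass
class Goal:
    name: str
    kind: str = "AND"                       # "AND" or "OR" refinement
    subgoals: list = field(default_factory=list)
    satisfied: bool = False                 # leaf goals are marked directly

def is_satisfied(g: Goal) -> bool:
    if not g.subgoals:
        return g.satisfied
    check = all if g.kind == "AND" else any
    return check(is_satisfied(s) for s in g.subgoals)

# Hypothetical stakeholder goals:
encrypt = Goal("TransmitEncrypted", satisfied=True)
audit = Goal("LogAccess", satisfied=False)
secure = Goal("KeepDataSecure", kind="AND", subgoals=[encrypt, audit])
assert not is_satisfied(secure)   # the AND-goal fails while auditing is unmet
```

Even this toy shows why a precise notation helps negotiation: whether `KeepDataSecure` demands both subgoals or either one is exactly the kind of assumption stakeholders must agree on.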
Abstract:
Variational calculations of the vibration–rotation energy levels of many isotopomers of HCN are reported, for J=0, 1, and 2, extending up to approximately 8 quanta of each of the stretching vibrations and 14 quanta of the bending mode. The force field, which is represented as a polynomial expansion in Morse coordinates for the bond stretches and even powers of the angle bend, has been refined by least squares to fit simultaneously all observed data on the Σ and Π state vibrational energies, and the Σ state rotational constants, for both HCN and DCN. The observed vibrational energies are fitted to roughly ±0.5 cm−1, and the rotational constants to roughly ±0.0001 cm−1. The force field has been used to predict the vibration–rotation spectra of many isotopomers of HCN up to 25 000 cm−1. The results are consistent with the axis‐switching assignments of some weak overtone bands reported recently by Jonas, Yang, and Wodtke, and they also fit and provide the assignment for recent observations by Romanini and Lehmann of very weak absorption bands above 20 000 cm−1.
Abstract:
We report the results of variational calculations of the rovibrational energy levels of HCN for J = 0, 1 and 2, where we reproduce all the ca. 100 observed vibrational states for all observed isotopic species, with energies up to 18 000 cm$^{-1}$, to about $\pm $1 cm$^{-1}$, and the corresponding rotational constants to about $\pm $0.001 cm$^{-1}$. We use a Hamiltonian expressed in internal coordinates r$_{1}$, r$_{2}$ and $\theta $, using the exact expression for the kinetic energy operator T obtained by direct transformation from the cartesian representation. The potential energy V is expressed as a polynomial expansion in the Morse coordinates y$_{i}$ for the bond stretches and the interbond angle $\theta $. The basis functions are built as products of appropriately scaled Morse functions in the bond-stretches and Legendre or associated Legendre polynomials of cos $\theta $ in the angle bend, and we evaluate matrix elements by Gauss quadrature. The Hamiltonian matrix is factorized using the full rovibrational symmetry, and the basis is contracted to an optimized form; the dimensions of the final Hamiltonian matrix vary from 240 $\times $ 240 to 1000 $\times $ 1000. We believe that our calculation is converged to better than 1 cm$^{-1}$ at 18 000 cm$^{-1}$. Our potential surface is expressed in terms of 31 parameters, about half of which have been refined by least squares to optimize the fit to the experimental data. The advantages and disadvantages and the future potential of calculations of this type are discussed.
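The core of such a calculation, representing the Hamiltonian as a matrix and diagonalizing it, can be shown in a one-dimensional toy version (the paper's calculation is three-dimensional with Morse/Legendre product bases and Gauss quadrature; the sketch below instead uses a simple finite-difference grid for a single Morse stretch, with illustrative parameters and hbar = m = 1, so the eigenvalues can be checked against the analytic Morse spectrum).

```python
import numpy as np

# Toy 1-D bound-state calculation for a Morse oscillator, hbar = m = 1.
# D (well depth) and a (range parameter) are illustrative choices.
D, a = 10.0, 1.0
x = np.linspace(-2.0, 12.0, 700)            # coordinate grid
dx = x[1] - x[0]

V = D * (1.0 - np.exp(-a * x)) ** 2         # Morse potential
# Kinetic operator -1/2 d^2/dx^2 by central finite differences:
T = (np.diag(np.full(len(x), 1.0 / dx**2))
     + np.diag(np.full(len(x) - 1, -0.5 / dx**2), 1)
     + np.diag(np.full(len(x) - 1, -0.5 / dx**2), -1))
H = T + np.diag(V)                          # Hamiltonian matrix

levels = np.linalg.eigvalsh(H)[:2]          # two lowest eigenvalues
# Analytic Morse levels: omega*(n+1/2) - (a^2/2)*(n+1/2)^2 with omega = a*sqrt(2D)
omega = a * np.sqrt(2.0 * D)
exact = [omega * (n + 0.5) - (n + 0.5) ** 2 / 2.0 for n in range(2)]
assert np.allclose(levels, exact, atol=1e-2)
```

Convergence here is controlled by the grid spacing and extent, just as the real calculation's convergence is controlled by the size of the contracted basis.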
Abstract:
Feed samples received by commercial analytical laboratories are often undefined or mixed varieties of forages, originate from various agronomic or geographical areas of the world, are mixtures (e.g., total mixed rations) and are often described incompletely or not at all. Six unified single-equation approaches to predict the metabolizable energy (ME) value of feeds determined in sheep fed at maintenance ME intake were evaluated utilizing 78 individual feeds representing 17 different forages, grains, protein meals and by-product feedstuffs. The predictive approaches evaluated were two each from National Research Council [National Research Council (NRC), Nutrient Requirements of Dairy Cattle, seventh revised ed. National Academy Press, Washington, DC, USA, 2001], University of California at Davis (UC Davis) and ADAS (Stratford, UK). Slopes and intercepts for the two ADAS approaches that utilized in vitro digestibility of organic matter and either measured gross energy (GE), or a prediction of GE from component assays, and one UC Davis approach, based upon in vitro gas production and some component assays, differed from both unity and zero, respectively, while this was not the case for the two NRC and one UC Davis approach. However, within these latter three approaches, the goodness of fit (r²) increased from the NRC approach utilizing lignin (0.61) to the NRC approach utilizing 48 h in vitro digestion of neutral detergent fibre (NDF; 0.72) and to the UC Davis approach utilizing a 30 h in vitro digestion of NDF (0.84). The reason for the difference between the precision of the NRC procedures was the failure of assayed lignin values to accurately predict 48 h in vitro digestion of NDF.
However, differences among the six predictive approaches in the number of supporting assays, and their costs, as well as the fact that the NRC approach is actually three related equations requiring categorical description of feeds (making them unsuitable for mixed feeds) while the ADAS and UC Davis approaches are single equations, suggest that the procedure of choice will vary dependent upon local conditions, specific objectives and the feedstuffs to be evaluated. In contrast to the evaluation of the procedures among feedstuffs, no procedure was able to consistently discriminate the ME values of individual feeds within feedstuffs determined in vivo, suggesting that an accurate and precise ME predictive approach, among and within feeds, may remain to be identified. (C) 2004 Elsevier B.V. All rights reserved.
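The kind of comparison reported above, ranking single-equation predictors by goodness of fit, can be sketched schematically (the numbers below are invented for illustration and are not the study's data; they merely mimic one assay tracking measured ME more closely than another).

```python
# Illustrative only: compare r^2 of two single-predictor least-squares fits
# of measured ME against different component assays. All data are invented.

def r_squared(xs, ys):
    """Squared Pearson correlation = r^2 of the least-squares line of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy * sxy / (sxx * syy)

me = [8.0, 9.5, 10.1, 11.2, 12.0, 13.1]   # measured ME (hypothetical, MJ/kg DM)
ndf_dig = [40, 48, 52, 60, 65, 71]        # assay that tracks ME closely
lignin = [90, 72, 88, 60, 58, 70]         # noisier assay

assert r_squared(ndf_dig, me) > r_squared(lignin, me)
```

A high r² among feedstuffs, as this sketch yields for the first predictor, still says nothing about discriminating individual feeds within a feedstuff, which is the limitation the abstract closes on.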