968 results for Computation laboratories
Abstract:
The use of 'balanced' Ca, Mg, and K ratios, as prescribed by the basic cation saturation ratio (BCSR) concept, persists among some private soil-testing laboratories as a basis for interpreting soil analytical data. This review examines the suitability of the BCSR concept as a method for the interpretation of soil analytical data. According to the BCSR concept, maximum plant growth will be achieved only when the soil's exchangeable Ca, Mg, and K concentrations are approximately 65% Ca, 10% Mg, and 5% K (termed the 'ideal soil'). This 'ideal soil' was originally proposed by Firman Bear and co-workers in New Jersey (USA) during the 1940s as a method of reducing luxury K uptake by alfalfa (Medicago sativa L.). At about the same time, William Albrecht, working in Missouri (USA), concluded through his own investigations that plants require a soil with a high Ca saturation for optimal growth. Whilst it now appears that several of Albrecht's experiments were fundamentally flawed, the BCSR ('balanced soil') concept has been widely promoted, with claims that the prescribed cationic ratios provide optimum chemical, physical, and biological soil properties. Our examination of data from numerous studies (particularly those of Albrecht and Bear themselves) suggests that, within the ranges commonly found in soils, the chemical, physical, and biological fertility of a soil is generally not influenced by the ratios of Ca, Mg, and K. The data do not support the claims of the BCSR concept, and its continued promotion will result in the inefficient use of resources in agriculture and horticulture.
Abstract:
We show that a two-level atom interacting with an extremely weak squeezed vacuum can display resonance fluorescence spectra that are qualitatively different from those obtainable using fields with a classical analogue. We consider first the free-space situation with monochromatic excitation, and then discuss a bichromatically driven two-level atom in a cavity as a practical scenario for experimentally detecting the anomalous features predicted. We show that in the bad-cavity limit, the anomalous spectral features appear for a weak squeezed vacuum and large frequency differences of the bichromatic field, conditions which are easily accessible in laboratories. The advantage of bichromatic, as opposed to monochromatic, excitation is that there is no coherent scattering at line centre which could obscure the observations. A scaling law, N ∼ Ω⁴, is derived which relates the squeezed photon number N to the Rabi frequency Ω at which the anomalous features appear. (C) 1998 Elsevier Science B.V.
Abstract:
The cost of spatial join processing can be very high because of the large sizes of spatial objects and the computation-intensive spatial operations. While parallel processing seems a natural solution to this problem, it is not clear how spatial data can be partitioned for this purpose. Various spatial data partitioning methods are examined in this paper. A framework combining the data-partitioning techniques used by most parallel join algorithms in relational databases with the filter-and-refine strategy for spatial operation processing is proposed for parallel spatial join processing. Object duplication caused by multi-assignment in spatial data partitioning can result in extra CPU cost as well as extra communication cost. We find that the key to overcoming this problem is to preserve spatial locality in task decomposition. We show in this paper that a near-optimal speedup can be achieved for parallel spatial join processing using our new algorithms.
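As an illustrative aside, the sketch below shows the general shape of grid partitioning with multi-assignment and a locality-preserving duplicate-suppression rule in the filter step. Everything here (the names Rect, partition, and filter_step, the self-join setting, and the reference-point rule) is our own assumption for illustration, not the paper's algorithms.

```python
# Minimal sketch: grid partitioning with multi-assignment, then a filter
# step that suppresses duplicate candidate pairs without communication.
from dataclasses import dataclass

@dataclass(frozen=True)
class Rect:
    """Axis-aligned bounding box of a spatial object."""
    id: int
    x0: float
    y0: float
    x1: float
    y1: float

def partition(objects, cell):
    """Assign each object to every grid cell its bounding box overlaps
    (multi-assignment); objects spanning cell borders are duplicated."""
    buckets = {}
    for r in objects:
        i0, i1 = int(r.x0 // cell), int(r.x1 // cell)
        j0, j1 = int(r.y0 // cell), int(r.y1 // cell)
        for i in range(i0, i1 + 1):
            for j in range(j0, j1 + 1):
                buckets.setdefault((i, j), []).append(r)
    return buckets

def overlaps(a, b):
    return a.x0 <= b.x1 and b.x0 <= a.x1 and a.y0 <= b.y1 and b.y0 <= a.y1

def filter_step(buckets, cell):
    """Filter phase of filter-and-refine (self-join for brevity): report
    each candidate pair exactly once by emitting it only from the cell
    containing the top-left corner of the pair's overlap region."""
    for (i, j), rects in buckets.items():
        for m, a in enumerate(rects):
            for b in rects[m + 1:]:
                if overlaps(a, b):
                    px, py = max(a.x0, b.x0), max(a.y0, b.y0)
                    if int(px // cell) == i and int(py // cell) == j:
                        yield (a.id, b.id)
```

Because each bucket is processed independently, buckets can be handed to separate workers; the reference-point test keeps duplicated objects from producing duplicated output pairs.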
Abstract:
We investigate in detail the effects of a QND vibrational number measurement made on single ions in a recently proposed measurement scheme for the vibrational state of a register of ions in a linear rf trap [C. D'Helon and G. J. Milburn, Phys. Rev. A 54, 5141 (1996)]. The performance of the measurement shows some interesting patterns which are closely related to searching.
Abstract:
Coset enumeration is a most important procedure for investigating finitely presented groups. We present a practical parallel procedure for coset enumeration on shared-memory processors. The shared-memory architecture is particularly interesting because such parallel computation is both faster and cheaper: the lower cost comes when the program requires large amounts of memory, and additional CPUs allow us to reduce the time during which the expensive memory is in use. Rather than report on a suite of test cases, we take a single, typical case and analyze the performance factors in depth. The parallelization is achieved through a master-slave architecture. This results in an interesting phenomenon, whereby the CPU time is divided into a sequential and a parallel portion, and the parallel part demonstrates a speedup that is linear in the number of processors. We describe an early version for which only 40% of the program was parallelized, and we describe how this was modified to achieve 90% parallelization while using 15 slave processors and a master. In the latter case, a sequential time of 158 seconds was reduced to 29 seconds using 15 slaves.
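As a rough plausibility check (our arithmetic, not the paper's), the reported timing is consistent with Amdahl's law:

```latex
% Amdahl's law check (our arithmetic): parallel fraction p = 0.9,
% N = 15 slave processors.
\[
  S(N) \;=\; \frac{1}{(1-p) + p/N}
       \;=\; \frac{1}{0.1 + 0.9/15}
       \;=\; \frac{1}{0.16}
       \;=\; 6.25
\]
% An ideal run would thus take 158/6.25 \approx 25 s; the observed 29 s
% is consistent once master-slave coordination overhead is allowed for.
```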
Abstract:
A method for the accurate computation of the current densities produced in a wide-runged bi-planar radio-frequency coil is presented. The device has applications in magnetic resonance imaging. There is a set of opposing primary rungs, symmetrically placed on parallel planes, and a similar arrangement of rungs on two parallel planes surrounding the primary serves as a shield. Current densities induced in these primary and shielding rungs are calculated to a high degree of accuracy using an integral-equation approach combined with the inverse finite Hilbert transform. Once these densities are known, accurate electric and magnetic fields are computed without difficulty. Some test results are shown. The method is so rapid that it can be incorporated into optimization software. Some preliminary fields produced by optimized coils are presented.
Abstract:
Expokit provides a set of routines aimed at computing matrix exponentials. More precisely, it computes either a small matrix exponential in full, the action of a large sparse matrix exponential on an operand vector, or the solution of a system of linear ODEs with constant inhomogeneity. The backbone of the sparse routines consists of matrix-free Krylov subspace projection methods (the Arnoldi and Lanczos processes), which is why the toolkit is capable of coping with sparse matrices of large dimension. The software handles real and complex matrices and provides specific routines for symmetric and Hermitian matrices. The computation of matrix exponentials is a numerical issue of critical importance in the area of Markov chains, where furthermore the computed solution is subject to probabilistic constraints. In addition to addressing general matrix exponentials, particular attention is given to the computation of transient states of Markov chains.
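To make the idea concrete, here is a minimal sketch (ours, not Expokit's actual code) of the Krylov approximation underlying the sparse routines: the exponential of the large matrix is never formed; instead exp(tA)v is approximated from a small projected matrix.

```python
# Sketch of the Arnoldi-based approximation exp(t*A) @ v, assuming a
# real matrix A; Expokit's production routines add error control and
# time-stepping that are omitted here.
import numpy as np
from scipy.linalg import expm

def expm_krylov(A, v, t=1.0, m=30):
    """Approximate exp(t*A) @ v from an m-dimensional Krylov subspace."""
    n = len(v)
    m = min(m, n)
    beta = np.linalg.norm(v)
    V = np.zeros((n, m + 1))        # orthonormal Krylov basis
    H = np.zeros((m + 1, m))        # projected (Hessenberg) matrix
    V[:, 0] = v / beta
    for j in range(m):              # Arnoldi process
        w = A @ V[:, j]
        for i in range(j + 1):      # modified Gram-Schmidt
            H[i, j] = w @ V[:, i]
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:     # happy breakdown: subspace is exact
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    # exp(t*A) v  ~=  beta * V_m @ exp(t*H_m) @ e_1
    E = expm(t * H[:m, :m])         # only a small m-by-m exponential
    return beta * V[:, :m] @ E[:, 0]

# Small dense check against the full exponential.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200)) / 20
v = rng.standard_normal(200)
print(np.linalg.norm(expm_krylov(A, v) - expm(A) @ v))  # small for modest m
```

For sparse A, the same approximation is what makes the computation feasible; in SciPy the analogous task is performed by scipy.sparse.linalg.expm_multiply.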
Abstract:
There are some interesting connections between the theory of quantum computation and quantum measurement. As an illustration, we present a scheme in which an ion trap quantum computer can be used to make arbitrarily accurate measurements of the quadrature phase variables for the collective vibrational motion of the ions. We also discuss some more general aspects of quantum computation and measurement in terms of the Feynman-Deutsch principle.
Abstract:
Krylov subspace techniques have been shown to yield robust methods for the numerical computation of large sparse matrix exponentials and especially the transient solutions of Markov chains. The attractiveness of these methods results from the fact that they allow us to compute the action of a matrix exponential operator on an operand vector without having to compute, explicitly, the matrix exponential in isolation. In this paper we compare a Krylov-based method with some of the current approaches used for computing transient solutions of Markov chains. After a brief synthesis of the features of the methods used, wide-ranging numerical comparisons are performed on a Power Challenge Array supercomputer on three different models. (C) 1999 Elsevier Science B.V. All rights reserved. AMS Classification: 65F99; 65L05; 65U05.
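As a hedged illustration of the problem class (a toy chain of ours, not one of the paper's three models), the transient distribution p(t) = p(0)exp(tQ) of a small continuous-time Markov chain can be computed through the action of the exponential:

```python
# Transient solution of a toy 3-state continuous-time Markov chain.
# expm_multiply applies the exponential's action to a vector, so the
# (dense) matrix exp(tQ) itself is never formed.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import expm_multiply

# Infinitesimal generator Q (each row sums to zero).
Q = csr_matrix(np.array([[-2.0,  2.0,  0.0],
                         [ 1.0, -3.0,  2.0],
                         [ 0.0,  1.0, -1.0]]))
p0 = np.array([1.0, 0.0, 0.0])          # start in state 0

t = 0.5
# p(t) = p(0) exp(tQ), computed as exp(t Q^T) applied to p(0).
pt = expm_multiply(Q.T.tocsr() * t, p0)
print(pt, pt.sum())                      # a probability vector: sums to 1
```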
Abstract:
Burnside asked questions about periodic groups in his influential paper of 1902. The study of groups with exponent six is a special case of the study of the Burnside questions on which there has been significant progress, and it has contributed a number of worthwhile aspects to the theory of groups, in particular to computation related to groups. Finitely generated groups with exponent six are finite. We investigate the nature of the relations required to provide proofs of finiteness for some groups with exponent six. We give upper and lower bounds for the number of sixth powers needed to define the largest 2-generator group with exponent six. We solve related questions about other groups with exponent six using substantial computations, which we explain.
Abstract:
We use the finite element method to solve the coupled problem between convective pore-fluid flow, heat transfer, and mineralization in layered hydrothermal systems with upward throughflow. In particular, we present the improved rock alteration index (IRAI) concept for predicting the most probable precipitation and dissolution regions of gold (Au) minerals in such systems. To validate the numerical method used in the computation, analytical solutions to a benchmark problem have been derived. After the numerical method is validated, it is used to investigate the pattern of pore-fluid flow, the distribution of temperature, and the mineralization pattern of gold minerals in a layered hydrothermal system with upward throughflow. The related numerical results demonstrate that the IRAI concept is useful and applicable for predicting the most probable precipitation and dissolution regions of gold (Au) minerals in hydrothermal systems. (C) 2000 Elsevier Science S.A. All rights reserved.
Abstract:
In this paper we present a model of specification-based testing of interactive systems. This model provides the basis for a framework to guide such testing. Interactive systems are traditionally decomposed into a functionality component and a user interface component; this distinction is termed dialogue separation and is the underlying basis for conceptual and architectural models of such systems. Correctness involves both proper behaviour of the user interface and proper computation by the underlying functionality. Specification-based testing is one method used to increase confidence in correctness, but it has had limited application to interactive system development to date.
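A minimal sketch of the idea, under our own assumptions (this is not the paper's framework): with dialogue separation, the functional core can be tested directly against executable specification clauses, independently of any user-interface component.

```python
# Illustrative only: a functional core tested against its specification,
# with no user-interface code involved. The account example is ours.
def functional_core(balance: int, withdrawal: int) -> int:
    """Underlying functionality: apply a withdrawal to a balance."""
    if withdrawal < 0 or withdrawal > balance:
        raise ValueError("invalid withdrawal")
    return balance - withdrawal

def test_core_against_spec():
    # Specification clauses written as executable assertions.
    assert functional_core(100, 30) == 70      # post: balance' = balance - w
    try:
        functional_core(100, 200)              # pre: w <= balance must hold
        assert False, "overdraft should be rejected"
    except ValueError:
        pass                                   # rejection is the specified behaviour

test_core_against_spec()
```

Testing the user-interface component (that it issues the right calls and renders the right states) would then be a separate activity against the dialogue specification.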
Abstract:
Hedley et al. (1982) developed what has become the most widely used (and modified) phosphorus (P) fractionation technique. It consists of sequential extraction of increasingly less phytoavailable P pools. Extracts are centrifuged at up to 25000 g (RCF) and filtered to 0.45 µm to ensure that soil is not lost between extractions. In attempting to transfer this method to laboratories with limited facilities, it was considered that the need for access to high-speed centrifuges and the cost of frequent filtration might prevent adoption of this P fractionation technique. The modified method presented here was developed to simplify methodology, reduce cost, and thereby increase the accessibility of P fractionation technology. It provides quantitative recovery of soil between extractions, using low-speed centrifugation without filtration. This is achieved by increasing the ionic strength of dilute extracts, through the addition of NaCl, to flocculate clay particles. Addition of NaCl does not change the amount of P extracted. Flocculation with low-speed centrifugation produced extracts comparable with those having undergone filtration (0.025 µm). A malachite green colorimetric method was adopted for inorganic P determination, as this simple manual method provides high sensitivity with negligible interference from other anions. This approach can also be used for total P following digestion; alternatively, non-discriminatory methods such as inductively coupled plasma atomic emission spectroscopy may be employed.
Abstract:
We analyze the fidelity of teleportation protocols, as a function of resource entanglement, for three kinds of two-mode oscillator states: states with fixed total photon number, number states entangled at a beam splitter, and the two-mode squeezed vacuum state. We define corresponding teleportation protocols for each case, including phase noise to model the degraded entanglement of each resource.
Abstract:
Continuous-valued recurrent neural networks can learn mechanisms for processing context-free languages. The dynamics of such networks is usually based on damped oscillation around fixed points in state space and requires that the dynamical components are arranged in certain ways. It is shown that qualitatively similar dynamics, with similar constraints, hold for aⁿbⁿcⁿ, a context-sensitive language. The additional difficulty with aⁿbⁿcⁿ, compared with the context-free language aⁿbⁿ, consists of 'counting up' and 'counting down' letters simultaneously. The network solution is to oscillate in two principal dimensions, one for counting up and one for counting down. This study focuses on the dynamics employed by the sequential cascaded network, in contrast to the simple recurrent network, and on the use of backpropagation through time. Found solutions generalize well beyond the training data; however, learning is not reliable. The contribution of this study lies in demonstrating how the dynamics in recurrent neural networks that process context-free languages can also be employed in processing some context-sensitive languages (traditionally thought of as requiring additional computational resources). This continuity of mechanism between language classes contributes to our understanding of neural networks in modelling language learning and processing.
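For readers unfamiliar with the language, here is a symbolic (non-neural) sketch of the simultaneous counting involved, using a simple two-counter scan of our own devising rather than anything from the paper:

```python
# Recognize a^n b^n c^n (n >= 1) in one left-to-right scan: while reading
# the b's, the machine counts DOWN against the a's already seen and
# counts UP for the c's still to come, i.e. both directions at once.
def is_anbncn(s: str) -> bool:
    i, n = 0, len(s)
    up = 0
    while i < n and s[i] == "a":        # counting up on a's
        up += 1
        i += 1
    down, expect_c = up, 0
    while i < n and s[i] == "b":        # down against a's, up for c's
        down -= 1
        expect_c += 1
        i += 1
    if down != 0:                       # #b must equal #a
        return False
    while i < n and s[i] == "c":        # counting down on c's
        expect_c -= 1
        i += 1
    return i == n and expect_c == 0 and up > 0

assert is_anbncn("aabbcc") and not is_anbncn("aabbc") and not is_anbncn("abcabc")
```

The single counter sufficing for aⁿbⁿ corresponds to one oscillation dimension; the second, simultaneously running counter here mirrors the second principal dimension the networks are found to use.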