147 results for Precision
Abstract:
Diffuse optical tomographic image reconstruction uses advanced numerical models that are computationally too costly to implement in real time. Graphics processing units (GPUs) offer massive parallelization on the desktop that can accelerate these computations. An open-source GPU-accelerated linear algebra library package is used to compute the most intensive matrix-matrix calculations and matrix decompositions involved in solving the system of linear equations. These open-source functions were integrated into the existing frequency-domain diffuse optical image reconstruction algorithms to evaluate the acceleration capability of the GPU (NVIDIA Tesla C1060) with increasing reconstruction problem sizes. These studies indicate that single-precision computations are sufficient for diffuse optical tomographic image reconstruction. The acceleration per iteration can be up to a factor of 40 using GPUs compared to traditional CPUs in the case of three-dimensional reconstruction, where the reconstruction problem is more underdetermined, making GPUs attractive in clinical settings. The current limitation of these GPUs is the available onboard memory (4 GB), which restricts the reconstruction to no more than 13,377 optical parameters. (C) 2010 Society of Photo-Optical Instrumentation Engineers. [DOI: 10.1117/1.3506216]
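As a rough illustration of the single- versus double-precision question, the sketch below (not the authors' code; the Jacobian, regularization and problem size are invented) compares float32 and float64 solutions of a regularized normal-equation update of the kind used in model-based reconstruction:

```python
# Minimal sketch: compare single- and double-precision solutions of a
# regularized normal-equation update. All quantities are stand-ins.
import numpy as np

rng = np.random.default_rng(0)
m, n = 2000, 500                       # measurements x unknown optical parameters (illustrative)
J = rng.standard_normal((m, n))        # stand-in for a Jacobian
b = rng.standard_normal(m)             # stand-in for the data residual
lam = 1e-2                             # Tikhonov regularization (assumed)

def update(dtype):
    Jd = J.astype(dtype)
    bd = b.astype(dtype)
    A = Jd.T @ Jd + lam * np.eye(n, dtype=dtype)   # intensive matrix-matrix product
    return np.linalg.solve(A, Jd.T @ bd)           # dense linear solve

dx32 = update(np.float32)
dx64 = update(np.float64)
rel = np.linalg.norm(dx32 - dx64) / np.linalg.norm(dx64)
print(f"float32 vs float64 relative difference: {rel:.2e}")
```

On a GPU, the same matrix-matrix product and dense solve map onto accelerated BLAS/LAPACK-style routines (for example, CuPy exposes cupy.linalg.solve), and single precision halves the memory footprint while running substantially faster on hardware such as the Tesla C1060, whose double-precision throughput is much lower.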
Abstract:
The constructional details of an 18-bit binary inductive voltage divider (IVD) for a.c. bridge applications are described. Simplified construction with fewer windings, and interconnection of the windings through SPDT solid-state relays instead of DPDT relays, improves the reliability of the IVD. High accuracy for most precision measurements is achieved without D/A converters. Checks for self-consistency in voltage division show that the error is less than 2 counts in 2^18.
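For scale, and assuming a "count" refers to one step of the 2^18 division, the quoted self-consistency figure corresponds to a relative error of roughly:

```latex
\frac{2}{2^{18}} = \frac{2}{262\,144} \approx 7.6 \times 10^{-6} \approx 7.6\ \text{ppm}
```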
Abstract:
Woolley's revolutionary proposal that quantum mechanics does not sanction the concept of "molecular structure" - which is but a "metaphor" - has fundamental implications for physical organic chemistry. On the one hand, the Uncertainty Principle limits the precision with which transition-state structures may be defined; on the other, extension of the structure concept to the transition state may be unviable. Attempts to define transition states have indeed caused controversy. Consequences for molecular recognition, and a mechanistic classification, are also discussed.
Abstract:
One of the main disturbances in EEG signals is EMG artefacts generated by muscle movements. In this paper, the use of a linear-phase FIR digital low-pass filter with finite-wordlength-precision coefficients, designed using the compensation procedure, is proposed to minimise EMG artefacts in contaminated EEG signals. To make the filtering more effective, different structures are used, i.e. cascading, twicing and sharpening (apart from simple low-pass filtering) of the designed FIR filter. Modifications are proposed to the twicing and sharpening structures to regain the linear-phase characteristics that are lost in conventional twicing and sharpening operations. The efficacy of all these transformed filters in minimising EMG artefacts is studied, using SNR improvement as a performance measure for simulated signals. Time plots of the signals are also compared. The studies show that the modified sharpening structure is superior in performance to all the other proposed methods. These algorithms have also been applied to a real, recorded EMG-contaminated EEG signal. Comparison of time plots, and also of the output SNR, shows that the proposed modified sharpened structure works better in minimising EMG artefacts than the other methods considered.
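A minimal sketch of the twicing and sharpening transformations referred to above is given below; the prototype filter, cutoff and sampling rate are assumptions, the explicit delay alignment stands in for the linear-phase-restoring modifications, and finite-wordlength coefficient quantization is not modelled:

```python
# Minimal sketch (not the paper's design): a linear-phase low-pass FIR prototype
# and delay-aligned "twicing" (2H - H^2) and "sharpening" (3H^2 - 2H^3)
# transformations of its impulse response.
import numpy as np
from scipy.signal import firwin, lfilter

fs = 256.0                                   # assumed EEG sampling rate (Hz)
h = firwin(numtaps=51, cutoff=30.0, fs=fs)   # prototype low-pass FIR, H(z)
d = (len(h) - 1) // 2                        # group delay of the prototype

h2 = np.convolve(h, h)                       # H^2
h3 = np.convolve(h2, h)                      # H^3

# Align group delays so the combined responses stay symmetric (linear phase).
h1_al = np.r_[np.zeros(d), h,  np.zeros(len(h2) - len(h)  - d)]
h2_al = np.r_[np.zeros(d), h2, np.zeros(len(h3) - len(h2) - d)]

h_twice = 2 * h1_al - h2                     # twicing
h_sharp = 3 * h2_al - 2 * h3                 # sharpening

# Apply the sharpened filter to a synthetic EEG-like signal with broadband noise
t = np.arange(0, 2, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t)             # 10 Hz alpha-like component
rng = np.random.default_rng(1)
noisy = eeg + 0.5 * rng.standard_normal(t.size)   # stand-in EMG contamination
denoised = lfilter(h_sharp, [1.0], noisy)
```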
Abstract:
The physics potential of e+e- linear colliders is summarized in this report. These machines are planned to operate in the first phase at a center-of-mass energy of 500 GeV, before being scaled up to about 1 TeV. In the second phase of operation, a final energy of about 2 TeV is expected. The machines will allow us to perform precision tests of the heavy particles in the Standard Model, the top quark and the electroweak bosons. They are ideal facilities for exploring the properties of Higgs particles, in particular in the intermediate mass range. New vector bosons and novel matter particles in extended gauge theories can be searched for and studied thoroughly. The machines provide unique opportunities for the discovery of particles in supersymmetric extensions of the Standard Model: the spectrum of Higgs particles, the supersymmetric partners of the electroweak gauge and Higgs bosons, and the matter particles. High-precision analyses of their properties and interactions will allow for extrapolations to energy scales close to the Planck scale where gravity becomes significant. In alternative scenarios, such as compositeness models, novel matter particles and interactions can be discovered and investigated in the energy range above the existing colliders up to the TeV scale. Whatever scenario is realized in Nature, the discovery potential of e+e- linear colliders and the high precision with which the properties of particles and their interactions can be analyzed define an exciting physics program complementary to hadron machines. (C) 1998 Elsevier Science B.V. All rights reserved.
Abstract:
The cis-regulatory regions on DNA serve as binding sites for proteins such as transcription factors and RNA polymerase. The combinatorial interaction of these proteins plays a crucial role in transcription initiation, which is an important point of control in the regulation of gene expression. We present here an analysis of the performance of an in silico method for predicting cis-regulatory regions in the plant genomes of Arabidopsis (Arabidopsis thaliana) and rice (Oryza sativa) on the basis of free energy of DNA melting. For protein-coding genes, we achieve recall and precision of 96% and 42% for Arabidopsis and 97% and 31% for rice, respectively. For noncoding RNA genes, the program gives recall and precision of 94% and 75% for Arabidopsis and 95% and 90% for rice, respectively. Moreover, 96% of the false-positive predictions were located in noncoding regions of primary transcripts, out of which 20% were found in the first intron alone, indicating possible regulatory roles. The predictions for orthologous genes from the two genomes showed a good correlation with respect to prediction scores and promoter organization. Comparison of our results with an existing program for promoter prediction in plant genomes indicates that our method shows improved prediction capability.
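For reference, the recall and precision figures quoted above are the standard counts-based quantities; a small sketch (with invented counts chosen only to reproduce numbers of the same order as the Arabidopsis protein-coding results) is:

```python
# Minimal sketch (not the paper's evaluation code): precision and recall from
# true-positive, false-positive and false-negative counts. Counts are invented.
def precision_recall(tp: int, fp: int, fn: int) -> tuple:
    precision = tp / (tp + fp)   # fraction of predicted regions that are correct
    recall = tp / (tp + fn)      # fraction of true regions that are recovered
    return precision, recall

# Hypothetical counts: 1000 true regions, 960 recovered, 1326 spurious predictions
p, r = precision_recall(tp=960, fp=1326, fn=40)
print(f"precision = {p:.0%}, recall = {r:.0%}")   # 42% and 96%
```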
Abstract:
A shear-flexible 4-noded finite element formulation, having five mechanical degrees of freedom per node, is presented for modeling the dynamic as well as the static thermal response of laminated composites containing distributed piezoelectric layers. This element has been developed to have one electrical degree of freedom per piezoelectric layer. The mass, stiffness and thermo-electro-mechanical coupling effects of the actuator and sensor layers have been considered. Numerical studies have been conducted to investigate both the sensory and active responses of piezoelectric composite beam and plate structures. It is concluded that both the thermal and pyroelectric effects are important and need to be considered in the precision distributed control of intelligent structures.
Abstract:
We demonstrate a technique for precisely measuring hyperfine intervals in alkali atoms. The atoms form a three-level system in the presence of a strong control laser and a weak probe laser. The dressed states created by the control laser show significant linewidth reduction. We have developed a technique for Doppler-free spectroscopy that enables the separation between the dressed states to be measured with high accuracy even in room temperature atoms. The states go through an avoided crossing as the detuning of the control laser is changed from positive to negative. By studying the separation as a function of detuning, the center of the level-crossing diagram is determined with high precision, which yields the hyperfine interval. Using room temperature Rb vapor, we obtain a precision of 44 kHz. This is a significant improvement over the current precision of about 1 MHz.
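A schematic of this level-crossing analysis might look like the sketch below, which fits the dressed-state separation sqrt((delta - delta0)^2 + Omega^2) against control-laser detuning on synthetic data; delta0, Omega and the noise level are invented for illustration, with delta0 playing the role of the line centre from which the hyperfine interval would be read off:

```python
# Minimal sketch (synthetic data): locate the centre of an avoided crossing by
# fitting the dressed-state separation against control-laser detuning.
import numpy as np
from scipy.optimize import curve_fit

def separation(delta, delta0, omega):
    return np.sqrt((delta - delta0) ** 2 + omega ** 2)

rng = np.random.default_rng(0)
detuning = np.linspace(-40.0, 40.0, 81)       # MHz, control-laser detuning
true_center, true_rabi = 3.2, 12.0            # MHz, invented values
measured = separation(detuning, true_center, true_rabi) \
           + 0.05 * rng.standard_normal(detuning.size)

popt, pcov = curve_fit(separation, detuning, measured, p0=[0.0, 10.0])
center, rabi = popt
center_err = np.sqrt(pcov[0, 0])
print(f"fitted centre = {center:.3f} +/- {center_err:.3f} MHz")
```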
Abstract:
Thanks to advances in sensor technology, today we have many applications (space-borne imaging, medical imaging, etc.) where images of large sizes are generated. Straightforward application of wavelet techniques to such images involves certain difficulties. Embedded coders such as EZW and SPIHT require that the wavelet transform of the full image be buffered for coding. Since the transform coefficients also need to be stored at high precision, buffering requirements for large images become prohibitively high. In this paper, we first devise a technique for embedded coding of large images using zero trees with reduced memory requirements. A 'strip buffer' capable of holding a few lines of wavelet coefficients from all the subbands belonging to the same spatial location is employed. A pipeline architecture for a line-based implementation of the above technique is then proposed. Further, an efficient algorithm to extract an encoded bitstream corresponding to a region of interest in the image has also been developed. Finally, the paper describes a strip-based non-embedded coding scheme which uses a single-pass algorithm, in order to handle high input data rates. (C) 2002 Elsevier Science B.V. All rights reserved.
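The strip-buffer idea can be caricatured as below; this is a deliberately simplified sketch (one decomposition level, no boundary overlap between strips, no zerotree entropy coding), intended only to show that memory usage is bounded by one strip of coefficients rather than the whole transformed image:

```python
# Minimal sketch (not the paper's coder): transform and quantize a large image
# strip by strip, so only one strip's wavelet coefficients are held in memory.
import numpy as np
import pywt

def encode_in_strips(image: np.ndarray, strip_height: int = 32, step: float = 4.0):
    for top in range(0, image.shape[0], strip_height):
        strip = image[top:top + strip_height, :]
        cA, (cH, cV, cD) = pywt.dwt2(strip, "haar")      # one-level 2-D DWT of the strip
        for band in (cA, cH, cV, cD):
            q = np.round(band / step).astype(np.int16)   # coarse uniform quantization
            yield q                                      # hand off to an entropy coder
        # the strip buffer is released before the next strip is read

image = np.random.default_rng(0).integers(0, 256, size=(512, 512)).astype(np.float32)
n_blocks = sum(1 for _ in encode_in_strips(image))
print("quantized subband blocks emitted:", n_blocks)
```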
Abstract:
We analyse the Roy equations for the lowest partial waves of elastic ππ scattering. In the first part of the paper, we review the mathematical properties of these equations as well as their phenomenological applications. In particular, the experimental situation concerning the contributions from intermediate energies and the evaluation of the driving terms are discussed in detail. We then demonstrate that the two S-wave scattering lengths a00 and a02 are the essential parameters in the low energy region: once these are known, the available experimental information determines the behaviour near threshold to within remarkably small uncertainties. An explicit numerical representation for the energy dependence of the S- and P-waves is given, and it is shown that the threshold parameters of the D- and F-waves are also fixed very sharply in terms of a00 and a02. In agreement with earlier work, which is reviewed in some detail, we find that the Roy equations admit physically acceptable solutions only within a band of the (a00, a02) plane. We show that the data on the reactions e+e− → ππ and τ → ππν reduce the width of this band quite significantly. Furthermore, we discuss the relevance of the decay K → ππeν in restricting the allowed range of a00, preparing the ground for an analysis of the forthcoming precision data on this decay and on pionic atoms. We expect these to reduce the uncertainties in the two basic low energy parameters very substantially, so that a meaningful test of the chiral perturbation theory predictions will become possible.
Abstract:
In this work, we evaluate performance of a real-world image processing application that uses a cross-correlation algorithm to compare a given image with a reference one. The algorithm processes individual images represented as 2-dimensional matrices of single-precision floating-point values using O(n^4) operations involving dot-products and additions. We implement this algorithm on a nVidia GTX 285 GPU using CUDA, and also parallelize it for the Intel Xeon (Nehalem) and IBM Power7 processors, using both manual and automatic techniques. Pthreads and OpenMP with SSE and VSX vector intrinsics are used for the manually parallelized version, while a state-of-the-art optimization framework based on the polyhedral model is used for automatic compiler parallelization and optimization. The performance of this algorithm on the nVidia GPU suffers from: (1) a smaller shared memory, (2) unaligned device memory access patterns, (3) expensive atomic operations, and (4) weaker single-thread performance. On commodity multi-core processors, the application dataset is small enough to fit in caches, and when parallelized using a combination of task and short-vector data parallelism (via SSE/VSX) or through fully automatic optimization from the compiler, the application matches or beats the performance of the GPU version. The primary reasons for better multi-core performance include larger and faster caches, higher clock frequency, higher on-chip memory bandwidth, and better compiler optimization and support for parallelization. The best performing versions on the Power7, Nehalem, and GTX 285 run in 1.02s, 1.82s, and 1.75s, respectively. These results conclusively demonstrate that, under certain conditions, it is possible for a FLOP-intensive structured application running on a multi-core processor to match or even beat the performance of an equivalent GPU version.
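The kernel being benchmarked is ordinary 2-D cross-correlation; a CPU reference in single precision might look like the sketch below (sizes are illustrative, and this is not the benchmarked implementation):

```python
# Minimal sketch: 2-D cross-correlation of an image against a reference
# template in single precision, on the CPU.
import numpy as np
from scipy.signal import correlate2d

rng = np.random.default_rng(0)
image = rng.standard_normal((128, 128)).astype(np.float32)
reference = rng.standard_normal((32, 32)).astype(np.float32)

# Direct cross-correlation: every output point is a 32x32 dot product plus
# accumulation, which is where the O(n^4) operation count comes from.
corr = correlate2d(image, reference, mode="valid")
peak = np.unravel_index(np.argmax(corr), corr.shape)
print("best alignment offset:", peak)
```

The paper's GPU/CUDA and SSE/VSX versions parallelize this same dot-product-and-add kernel across threads and vector lanes.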
Abstract:
Learning to rank from relevance judgments is an active research area. Itemwise score regression, pairwise preference satisfaction, and listwise structured learning are the major techniques in use. Listwise structured learning has been applied recently to optimize important non-decomposable ranking criteria like AUC (area under the ROC curve) and MAP (mean average precision). We propose new, almost-linear-time algorithms to optimize for two other criteria widely used to evaluate search systems: MRR (mean reciprocal rank) and NDCG (normalized discounted cumulative gain), in the max-margin structured learning framework. We also demonstrate that, for different ranking criteria, one may need to use different feature maps. Search applications should not be optimized in favor of a single criterion, because they need to cater to a variety of queries; e.g., MRR is best for navigational queries, while NDCG is best for informational queries. A key contribution of this paper is to fold multiple ranking loss functions into a multi-criteria max-margin optimization. The result is a single, robust ranking model that is close to the best accuracy of learners trained on individual criteria. In fact, experiments over the popular LETOR and TREC data sets show that, contrary to conventional wisdom, a test criterion is often not best served by training with the same individual criterion.
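For concreteness, the two criteria targeted by the proposed algorithms can be computed per query as in the sketch below (standard definitions; the relevance labels are invented):

```python
# Minimal sketch: MRR and NDCG for a single query, given relevance labels
# listed in ranked order. Not the paper's structured learner.
import math

def mrr(relevant):
    """Reciprocal rank of the first relevant item (binary labels, ranked order)."""
    for i, rel in enumerate(relevant, start=1):
        if rel > 0:
            return 1.0 / i
    return 0.0

def ndcg(gains, k=None):
    """Normalized discounted cumulative gain at k (graded labels, ranked order)."""
    k = len(gains) if k is None else k
    def dcg(g):
        return sum((2 ** rel - 1) / math.log2(i + 1)
                   for i, rel in enumerate(g[:k], start=1))
    ideal = dcg(sorted(gains, reverse=True))
    return dcg(gains) / ideal if ideal > 0 else 0.0

ranked_labels = [0, 2, 1, 0, 3]   # hypothetical graded relevance of a ranked list
print(mrr([1 if g > 0 else 0 for g in ranked_labels]))   # 0.5
print(ndcg(ranked_labels, k=5))
```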
Abstract:
Theoretical approaches are of fundamental importance for predicting the potential impact of waste disposal facilities on groundwater contamination. Appropriate design parameters are, in general, estimated by fitting theoretical models to field monitoring or laboratory experimental data. Double-reservoir diffusion (transient through-diffusion) experiments are generally conducted in the laboratory to estimate the mass transport parameters of the proposed barrier material. These design parameters are usually estimated by manual parameter-adjusting techniques (also called eye-fitting), for example using Pollute. In this work, an automated inverse model is developed to estimate the mass transport parameters from transient through-diffusion experimental data. The proposed inverse model uses the particle swarm optimization (PSO) algorithm, which is inspired by the social behaviour of animals searching for food. A finite difference numerical solution of the transient through-diffusion mathematical model is integrated with the PSO algorithm to solve the inverse problem of parameter estimation. The working principle of the new solver is demonstrated by estimating mass transport parameters from published transient through-diffusion experimental data. The estimated values are compared with those obtained by the existing procedure. The present technique is robust and efficient, and the mass transport parameters are obtained with very good precision in less time.
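A bare-bones version of a PSO-based inverse solver is sketched below; the forward model is a stand-in exponential rather than the transient through-diffusion finite-difference solution, and the parameter names, bounds and swarm settings are assumptions:

```python
# Minimal sketch (not the authors' solver): particle swarm optimization fitting
# two "mass transport parameters" by minimizing a data misfit.
import numpy as np

rng = np.random.default_rng(0)

def model(params, t):
    d_eff, retardation = params          # hypothetical parameter names
    return np.exp(-d_eff * t) / retardation

t_obs = np.linspace(0.1, 10.0, 30)
true = np.array([0.35, 2.0])
c_obs = model(true, t_obs) + 0.002 * rng.standard_normal(t_obs.size)

def misfit(params):
    return np.sum((model(params, t_obs) - c_obs) ** 2)

# Standard PSO loop with inertia, cognitive and social terms
n_particles, n_iter, dim = 30, 200, 2
lo, hi = np.array([0.01, 0.1]), np.array([2.0, 10.0])
x = rng.uniform(lo, hi, size=(n_particles, dim))
v = np.zeros_like(x)
pbest, pbest_val = x.copy(), np.array([misfit(p) for p in x])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)
    vals = np.array([misfit(p) for p in x])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("estimated parameters:", gbest)
```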
Abstract:
Over the past few years, studies of cultured neuronal networks have opened up avenues for understanding the ion channels, receptor molecules, and synaptic plasticity that may form the basis of learning and memory. Hippocampal neurons from rats are dissociated and cultured on a surface containing a grid of 64 electrodes. The signals from these 64 electrodes are acquired using a fast data acquisition system, MED64 (Alpha MED Sciences, Japan), at a sampling rate of 20 K samples per second with a precision of 16 bits per sample. A few minutes of acquired data runs into a few hundred megabytes. The data processing for the neural analysis is highly compute-intensive because the volume of data is huge. The major processing requirements are noise removal, pattern recovery, pattern matching, clustering and so on. In order to interface a neuronal colony to the physical world, these computations need to be performed in real time. A single processor, such as a desktop computer, may not be adequate to meet these computational requirements. Parallel computing is a method used to satisfy the real-time computational requirements of a neuronal system that interacts with an external world, while increasing the flexibility and scalability of the application. In this work, we developed a parallel neuronal system using a multi-node digital signal processing system. With 8 processors, the system is able to compute and map incoming signals, segmented over a period of 200 ms, into an action in a trained cluster system in real time.
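As an illustration of a channel-parallel layout, the sketch below distributes the 64 electrode channels of one 200 ms segment across 8 worker processes for band-pass denoising; the filter band and settings are assumptions, and this is not the MED64/DSP implementation:

```python
# Minimal sketch: split 64 electrode channels across 8 workers so that noise
# filtering of a 200 ms segment keeps pace with the 16-bit, 20 kS/s acquisition.
import numpy as np
from multiprocessing import Pool
from scipy.signal import butter, filtfilt

FS = 20_000                    # samples per second per electrode
SEGMENT = FS // 5              # 200 ms worth of samples

def denoise(channel: np.ndarray) -> np.ndarray:
    """Band-pass one electrode channel to suppress drift and high-frequency noise."""
    b, a = butter(4, [300.0, 3000.0], btype="band", fs=FS)   # spike band (assumed)
    return filtfilt(b, a, channel.astype(np.float32))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    segment = rng.integers(-2**15, 2**15, size=(64, SEGMENT)).astype(np.int16)
    with Pool(processes=8) as pool:                  # mirrors the 8-processor setup
        filtered = pool.map(denoise, list(segment))  # one channel per task
    print(len(filtered), filtered[0].shape)
```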