900 results for Precision timed machines
Abstract:
Objects viewed through transparent sheets with residual non-parallelism and irregularity appear shifted and distorted. This distortion is measured in terms of the angular and binocular deviation of an object viewed through the transparent sheet. The angular and binocular deviations introduced are particularly important in the context of aircraft windscreens and canopies, as they can interfere with pilots' decision making, especially while landing, leading to accidents. In this work, we have developed an instrument to measure both the angular and binocular deviations introduced by transparent sheets. This instrument is especially useful in the qualification of aircraft windscreens and canopies. It measures the deviation of the geometrical shadow cast by a periodic dot pattern trans-illuminated by the light beam distorted by the transparent test specimen, compared to the reference pattern. Accurate quantification of the shift in the pattern is obtained by cross-correlating the reference shadow pattern with the specimen shadow pattern and measuring the location of the correlation peak. The instrument is convenient to use and computes both angular and binocular deviation with an accuracy of better than ±0.1 mrad (≈0.036 mrad), and has excellent repeatability, with an error of less than 2%. © 2012 American Institute of Physics. http://dx.doi.org/10.1063/1.4769756
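As a rough illustration of the correlation-peak step described above, here is a minimal Python sketch, assuming two grayscale shadow images held as NumPy arrays; the function name and the pixel-to-angle conversion at the end are illustrative, not the instrument's actual code.

    import numpy as np
    from scipy.signal import correlate

    def pattern_shift(reference, specimen):
        """Return (dy, dx) shift in pixels of the specimen pattern w.r.t. the reference."""
        ref = reference - reference.mean()
        spec = specimen - specimen.mean()
        corr = correlate(spec, ref, mode="full", method="fft")
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # With mode="full", zero shift puts the correlation peak at index (H - 1, W - 1).
        return peak[0] - (ref.shape[0] - 1), peak[1] - (ref.shape[1] - 1)

    # Illustrative conversion to an angular deviation (small-angle approximation),
    # with PIXEL_PITCH_M and DISTANCE_M as assumed geometry constants:
    # dy, dx = pattern_shift(ref_img, spec_img)
    # theta_rad = np.hypot(dy, dx) * PIXEL_PITCH_M / DISTANCE_M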
Abstract:
The possibility of establishing an accurate relative chronology of early solar system events based on the decay of short-lived 26Al to 26Mg (half-life of 0.72 Myr) depends on the level of homogeneity (or heterogeneity) of 26Al and Mg isotopes. However, this level is difficult to constrain precisely because of the very high precision needed for the determination of isotopic ratios, typically ±5 ppm. In this study, we report for the first time a detailed analytical protocol developed for high-precision in situ Mg isotopic measurements (25Mg/24Mg and 26Mg/24Mg ratios, as well as the 26Mg excess) by MC-SIMS. As the data reduction process is critical for both the accuracy and the precision of the final isotopic results, factors such as the Faraday cup (FC) background drift and matrix effects on instrumental fractionation have been investigated. Indeed, these instrumental effects on the measured Mg isotope ratios can be as large as or larger than the variations we are looking for to constrain the initial distribution of 26Al and Mg isotopes in the early solar system. Our results show that they definitely are limiting factors for the precision of Mg isotopic compositions, and that an under- or over-correction of both FC background instabilities and instrumental isotopic fractionation leads to significant bias in δ25Mg, δ26Mg and Δ26Mg values (for example, olivines not corrected for FC background drifts display Δ26Mg values that can differ by as much as 10 ppm from the properly corrected value). The new data reduction process described here can then be applied to meteoritic samples (components of chondritic meteorites, for instance) to accurately establish their relative chronology of formation.
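For reference, the delta and excess notation used above follows the standard convention; one common form, assuming the exponential mass-fractionation law with an exponent β ≈ 0.511 (a standard choice in the literature, not a detail taken from this abstract), is:

    \delta^{25}\mathrm{Mg} = \left[\frac{(^{25}\mathrm{Mg}/^{24}\mathrm{Mg})_{\mathrm{measured}}}{(^{25}\mathrm{Mg}/^{24}\mathrm{Mg})_{\mathrm{reference}}} - 1\right]\times 10^{3}\ (\text{per mil}),
    \qquad
    \Delta^{26}\mathrm{Mg} = \delta^{26}\mathrm{Mg} - \left[\left(1 + \frac{\delta^{25}\mathrm{Mg}}{10^{3}}\right)^{1/\beta} - 1\right]\times 10^{3}

The Δ26Mg term thus isolates the radiogenic excess from 26Al decay after removing mass-dependent fractionation.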
Abstract:
When stimulated by a point source of cyclic AMP, a starved amoeba of Dictyostelium discoideum responds by putting out a hollow balloon-like membrane extension followed by a pseudopod. The effect of the stimulus is to influence the position where either of these protrusions is made on the cell, rather than to cause them to be made. Because the pseudopod forms perpendicular to the cell surface, its location is a measure of the precision with which the cell can locate the cAMP source. Cells beyond 1 h of starvation respond non-randomly, with a precision that improves steadily thereafter. A cell that is starved for 1-2 h can locate the source accurately 43% of the time; if starved for 6-7 h, 87% of the time. The response always has a high scatter; population-level heterogeneity reflects stochasticity in single-cell behaviour. From the angular distribution of the response, its maximum information content is estimated to be 2-3 bits. In summary, we quantitatively demonstrate the stochastic nature of the directional response and the increase in its accuracy over time.
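One simple way to attach a bit value to directional accuracy is to compare the entropy of the observed pseudopod-angle distribution with a uniform prior over the circle. The Python sketch below illustrates that idea under our own assumptions (a uniform prior and an arbitrary bin count); it is not the paper's estimator.

    import numpy as np

    def directional_information(angles_rad, n_bins=16):
        """Information (bits) = entropy of a uniform circular prior
        minus entropy of the observed response distribution."""
        hist, _ = np.histogram(angles_rad, bins=n_bins, range=(-np.pi, np.pi))
        p = hist / hist.sum()
        p = p[p > 0]
        h_response = -np.sum(p * np.log2(p))
        return np.log2(n_bins) - h_response

    # Angles tightly clustered toward the source yield a few bits;
    # a uniform (random) response yields ~0 bits.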
Abstract:
Realistic and real-time computational simulation of soft biological organs (e.g., liver, kidney) is necessary when one tries to build a quality surgical simulator that can simulate surgical procedures involving these organs. Since realistic simulation of these soft biological organs should account for both nonlinear material behavior and large deformation, achieving realistic simulations in real time using continuum mechanics based numerical techniques necessitates a supercomputer or a high-end computer cluster, which is costly. Hence there is a need to employ soft computing techniques like Support Vector Machines (SVMs), which can perform function approximation and hence achieve physically realistic simulations in real time using just a desktop computer. The present work simulates a pig liver in real time. The liver is assumed to be homogeneous, isotropic, and hyperelastic. Hyperelastic material constants are taken from the literature. An SVM is employed to achieve realistic simulations in real time, using just a desktop computer. The code for the SVM is obtained from [1]. The SVM is trained using a dataset generated by performing hyperelastic analyses on the liver geometry with the commercial finite element software package ANSYS. The methodology closely follows that of [2], except that [2] uses Artificial Neural Networks (ANNs) whereas the present work uses SVMs to achieve realistic simulations in real time. Results indicate the speed and accuracy obtained by employing the SVM for the targeted realistic and real-time simulation of the liver.
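The surrogate-model idea reads roughly as follows in Python; this is a minimal sketch assuming a precomputed FEM dataset of load-displacement pairs, with toy data and illustrative SVR hyperparameters standing in for the paper's ANSYS-generated training set.

    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    # Stand-in for FEM training data: applied load (N) -> nodal displacement (mm);
    # the tanh response mimics a stiffening nonlinear material, for illustration only.
    loads = rng.uniform(0.0, 5.0, size=(200, 1))
    disp = 2.1 * np.tanh(0.8 * loads[:, 0]) + rng.normal(0, 0.01, 200)

    model = SVR(kernel="rbf", C=100.0, epsilon=0.005).fit(loads, disp)

    # At simulation time, evaluating the trained SVR is cheap enough for
    # real-time rates on a desktop machine:
    print(model.predict([[1.5]]))  # predicted displacement for a 1.5 N load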
Abstract:
This paper presents a multi-class support vector machine (SVM) approach for locating and diagnosing faults in electric power distribution feeders with penetration of Distributed Generation (DG). The proposed approach is based on the three-phase voltage and current measurements that are available at all the sources, i.e., at the substation and at the DG connection points. To illustrate the proposed methodology, a practical distribution feeder emanating from a 132/11 kV grid substation in India, with loads and a suitable number of DGs at different locations, is considered. To show the effectiveness of the proposed methodology, practical situations in distribution systems (DS), such as all types of faults with a wide range of fault locations, source short circuit (SSC) levels and fault impedances, are considered in the studies. The proposed fault location scheme is capable of accurately identifying the fault type, the location of the faulted feeder section and the fault impedance. The results demonstrate the feasibility of applying the proposed method in practical smart grid distribution automation (DA) for fault diagnosis.
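The classification stage might look like the following Python sketch: a multi-class SVM mapping three-phase voltage/current features to a fault type. The feature layout, labels, and random stand-in data are ours for illustration, not the paper's dataset.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(1)
    # e.g. |V_a|,|V_b|,|V_c|,|I_a|,... measured at the substation and DG buses
    X = rng.normal(size=(300, 12))
    # e.g. 0 = L-G, 1 = L-L, 2 = L-L-G, 3 = three-phase fault
    y = rng.integers(0, 4, size=300)

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0)).fit(X, y)
    print(clf.predict(X[:5]))  # predicted fault types for five measurement vectors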
Abstract:
SARAS is a correlation spectrometer purpose-designed for precision measurements of the cosmic radio background and of faint features in the sky spectrum at long wavelengths that arise from redshifted 21 cm radiation from gas in the reionization epoch. SARAS operates in the octave band 87.5-175 MHz. We present herein the system design, arguing for a complex correlation spectrometer concept. The SARAS design concept provides a differential measurement between the antenna temperature and that of an internal reference termination, with measurements in switched system states allowing for cancellation of additive contaminants from a large part of the signal flow path, including the digital spectrometer. A switched noise injection scheme provides absolute spectral calibration. Additionally, we argue for an electrically small frequency-independent antenna over an absorber ground. Various critical design features that aid in the avoidance of systematics, and in providing calibration products for the parametrization of other unavoidable systematics, are described and the rationale discussed. The signal flow and processing is analyzed, and the response to the noise temperatures of the antenna, reference termination and amplifiers is computed. Multi-path propagation arising from internal reflections is considered in the analysis, which includes a harmonic series of internal reflections. We opine that the SARAS design concept is advantageous for precision measurement of the absolute cosmic radio background spectrum; therefore, the design features and analysis methods presented here are expected to serve as a basis for implementations tailored to measurements of a multiplicity of features in the background sky at long wavelengths, which may arise from events in the dark ages and the subsequent reionization era.
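The cancellation of additive contaminants by switching can be illustrated schematically (a simplification of ours, not the full SARAS signal model). With system gain g and an additive receiver noise temperature T_rx, the two switch states give

    P_1 = g\,(T_\mathrm{A} + T_\mathrm{rx}), \qquad
    P_2 = g\,(T_\mathrm{ref} + T_\mathrm{rx}), \qquad
    P_1 - P_2 = g\,(T_\mathrm{A} - T_\mathrm{ref}),

so the additive receiver term drops out of the difference, and a further state with injected calibration noise T_cal fixes the gain scale g.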
Abstract:
One of the challenges in accurately estimating the Worst Case Execution Time (WCET) of executables is to accurately predict their cache behaviour. Various techniques have been developed to predict the cache contents at different program points to estimate the execution time of memory-accessing instructions. One of the most widely used techniques is Abstract Interpretation based Must Analysis, which determines the cache blocks guaranteed to be present in the cache, and hence provides safe estimation of cache hits and misses. However, Must Analysis is highly imprecise, and platforms using Must Analysis have been known to produce blown-up WCET estimates. In our work, we propose to use May Analysis to assist the Must Analysis cache update and make it more precise. We prove the safety of our approach and provide examples where our Improved Must Analysis provides better precision. Further, we also detect a serious flaw in the original Persistence Analysis, and use Must and May Analysis to assist the Persistence Analysis cache update, to make it safe and more precise than the known solutions to the problem.
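To make the Must/May distinction concrete, here is a toy Python sketch of abstract cache states in the style of abstract-interpretation cache analysis for an LRU cache: a state maps a memory block to a bound on its age. Only the joins at control-flow merges are shown; a full analysis also needs the age-shifting update, and the exact representation here is ours, not this paper's.

    def must_join(a, b):
        """Must: keep blocks guaranteed present on both paths, with the worst (max) age."""
        return {blk: max(a[blk], b[blk]) for blk in a.keys() & b.keys()}

    def may_join(a, b):
        """May: keep blocks possibly present on either path, with the best (min) age."""
        return {blk: min(a.get(blk, 99), b.get(blk, 99)) for blk in a.keys() | b.keys()}

    s1 = {"x": 0, "y": 2}
    s2 = {"x": 1, "z": 0}
    print(must_join(s1, s2))  # {'x': 1}: only x is a guaranteed hit
    print(may_join(s1, s2))   # {'x': 0, 'y': 2, 'z': 0}: any of these may be cached

The proposal above uses the May state (blocks that can possibly be cached) to rule out impossible contents during the Must update, tightening the guaranteed-hit set.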
Abstract:
Realization of cloud computing has been possible due to the availability of virtualization technologies on commodity platforms. Measuring resource usage on virtualized servers is difficult because the performance counters used for resource accounting are not virtualized. Hence, many of the prevalent virtualization technologies like Xen, VMware and KVM use host-specific CPU usage monitoring, which is coarse grained. In this paper, we present a performance monitoring tool for KVM based virtual machines, which measures the CPU overhead incurred by the hypervisor on behalf of the virtual machine, along with the CPU usage of the virtual machine itself. This fine-grained resource usage information, provided by the above tool, can be used in diverse situations like resource provisioning to support performance-related QoS requirements, identification of bottlenecks during VM placement, and resource profiling of applications in cloud environments. We demonstrate a use case of this tool by measuring the performance of web servers hosted on a KVM based virtualized server.
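A hedged sketch of the accounting idea on Linux/KVM (not the paper's tool) follows: a guest's vCPUs run as threads of a QEMU process, so per-VM hypervisor-side overhead can be approximated as the process's total CPU time minus the CPU time of the vCPU threads. The /proc field layout is standard; the vCPU-thread naming convention is an assumption about typical QEMU builds.

    import os

    def cpu_ticks(stat_path):
        # utime + stime are fields 14 and 15 of /proc/*/stat; split after the
        # parenthesized command name, which may itself contain spaces.
        with open(stat_path) as f:
            fields = f.read().rsplit(")", 1)[1].split()
        return int(fields[11]) + int(fields[12])

    def vm_cpu_breakdown(qemu_pid):
        task_dir = f"/proc/{qemu_pid}/task"
        total = cpu_ticks(f"/proc/{qemu_pid}/stat")  # aggregated over all threads
        vcpu = 0
        for tid in os.listdir(task_dir):
            with open(f"{task_dir}/{tid}/comm") as f:
                name = f.read().strip()
            if "KVM" in name:  # vCPU threads are typically named e.g. "CPU 0/KVM"
                vcpu += cpu_ticks(f"{task_dir}/{tid}/stat")
        return total, vcpu, total - vcpu  # last term ~ non-vCPU (overhead) time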
Abstract:
The two-pion contribution from low energies to the muon magnetic moment anomaly, although small, has a large relative uncertainty, since in this region the experimental data on the cross sections are neither sufficient nor precise enough. It is therefore of interest to see whether the precision can be improved by means of additional theoretical information on the pion electromagnetic form factor, which controls the leading-order contribution. In the present paper, we address this problem by exploiting analyticity and unitarity of the form factor in a parametrization-free approach that uses as input the phase in the elastic region, known with high precision from the Fermi-Watson theorem and Roy equations for ππ elastic scattering. The formalism also includes experimental measurements of the modulus in the region 0.65-0.70 GeV, taken from the most recent e⁺e⁻ → π⁺π⁻ experiments, and recent measurements of the form factor on the spacelike axis. By combining the results obtained with inputs from CMD2, SND, BABAR, and KLOE, we make the predictions a_μ^{ππ,LO}[2m_π, 0.30 GeV] = (0.553 ± 0.004) × 10⁻¹⁰ and a_μ^{ππ,LO}[0.30 GeV, 0.63 GeV] = (133.083 ± 0.837) × 10⁻¹⁰. These are consistent with the other recent determinations and have slightly smaller errors.
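For orientation, the leading-order dispersive evaluation that such analyses build on is the standard one from the general muon g-2 literature (quoted here as background, not as this paper's specific formalism):

    a_\mu^{\pi\pi,\mathrm{LO}} = \frac{1}{4\pi^{3}} \int_{4m_\pi^{2}}^{\infty} \mathrm{d}s\, K(s)\, \sigma^{0}_{\pi\pi}(s),
    \qquad
    \sigma^{0}_{\pi\pi}(s) = \frac{\pi\alpha^{2}}{3s}\, \beta_\pi^{3}(s)\, \lvert F_\pi(s) \rvert^{2},

where K(s) is the known QED kernel and β_π(s) = (1 − 4m_π²/s)^{1/2}; this makes explicit how the modulus of the form factor F_π(s) controls the contribution over each energy window.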
Abstract:
Multi-GPU machines are being increasingly used in high-performance computing. Each GPU in such a machine has its own memory and does not share the address space either with the host CPU or with other GPUs. Hence, applications utilizing multiple GPUs have to manually allocate and manage data on each GPU. Existing works that propose to automate data allocation for GPUs have limitations and inefficiencies in terms of allocation sizes, exploiting reuse, transfer costs, and scalability. We propose a scalable and fully automatic data allocation and buffer management scheme for affine loop nests on multi-GPU machines, which we call the Bounding-Box-based Memory Manager (BBMM). BBMM can perform, at runtime, standard set operations like union, intersection, and difference, as well as finding subset and superset relations, on hyperrectangular regions of array data (bounding boxes). It uses these operations, along with some compiler assistance, to identify, allocate, and manage the data required by applications in terms of disjoint bounding boxes. This allows it to (1) allocate exactly or nearly as much data as is required by the computations running on each GPU, (2) efficiently track buffer allocations and hence maximize data reuse across tiles and minimize data transfer overhead, and (3) as a result, maximize utilization of the combined memory on multi-GPU machines. BBMM can work with any choice of parallelizing transformations, computation placement, and scheduling schemes, whether static or dynamic. Experiments run on a four-GPU machine with various scientific programs showed that BBMM reduces data allocations on each GPU by up to 75% compared to current allocation schemes, yields performance of at least 88% of manually written code, and allows excellent weak scaling.
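A minimal Python sketch of the hyperrectangle operations such a manager relies on is given below (the class and method names are ours). Note that an exact set difference of two boxes is generally not itself a box, which is one reason a scheme like BBMM manages data as collections of disjoint bounding boxes.

    class Box:
        def __init__(self, lo, hi):  # inclusive index bounds per dimension
            self.lo, self.hi = tuple(lo), tuple(hi)

        def contains(self, other):  # superset test
            return all(a <= c and d <= b for a, b, c, d in
                       zip(self.lo, self.hi, other.lo, other.hi))

        def intersect(self, other):  # None if the boxes are disjoint
            lo = tuple(map(max, self.lo, other.lo))
            hi = tuple(map(min, self.hi, other.hi))
            return Box(lo, hi) if all(a <= b for a, b in zip(lo, hi)) else None

        def hull(self, other):  # smallest single box covering the union
            return Box(tuple(map(min, self.lo, other.lo)),
                       tuple(map(max, self.hi, other.hi)))

    a, b = Box((0, 0), (63, 63)), Box((32, 32), (95, 95))
    print(a.intersect(b).lo, a.hull(b).hi)  # (32, 32) (95, 95)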
Abstract:
This work analyses the unique spatio-temporal alteration of the deposition pattern of evaporating nanoparticle-laden droplets resting on a hydrophobic surface through targeted low-frequency substrate vibrations. External excitation near the lowest resonant mode (n = 2) of the droplet initially de-pins and then subsequently re-pins the droplet edge, creating pseudo-hydrophilicity (low contact angle). Vibration subsequently induces droplet shape oscillations (cyclic elongation and flattening), resulting in strong flow recirculation. This strong radially outward liquid flow augments nanoparticle transport, vaporization, and agglomeration near the pinned edge, resulting in much reduced drying time at certain characteristic oscillation frequencies. The resultant deposit exhibits a much flatter structure with a sharp, well-defined peripheral wedge topology as compared to natural drying. Such controlled manipulation of transport enables tailoring of the structural and topological morphology of the deposits and offers possible routes towards controlling the formation and drying timescales, which are crucial for applications ranging from pharmaceutics to surface patterning. © 2014 AIP Publishing LLC.
Abstract:
The problem addressed in this paper is sound, scalable, demand-driven null-dereference verification for Java programs. Our approach consists conceptually of a base analysis, plus two major extensions for enhanced precision. The base analysis is a dataflow analysis wherein we propagate formulas in the backward direction from a given dereference, and compute a necessary condition at the entry of the program for the dereference to be potentially unsafe. The extensions are motivated by the presence of certain "difficult" constructs in real programs, e.g., virtual calls with too many candidate targets, and library method calls, which need excessive time to be analyzed fully. The base analysis is hence configured to skip such a difficult construct when it is encountered, by dropping all information tracked so far that could potentially be affected by the construct. Our extensions are essentially more precise ways to account for the effect of these constructs on the information being tracked, without requiring full analysis of these constructs. The first extension is a novel scheme to transmit formulas along certain kinds of def-use edges, while the second extension is based on using manually constructed backward-direction summary functions of library methods. We have implemented our approach and applied it to a set of real-life benchmarks. The base analysis is on average able to declare about 84% of the dereferences in each benchmark as safe, while the two extensions push this number up to 91%. © 2014 Elsevier B.V. All rights reserved.
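A very small Python sketch of backward propagation of a null-ness condition through straight-line assignments, weakest-precondition style, is shown below; real analyses of this kind also handle the heap, method calls, and branching control flow, and the statement forms here are ours for illustration.

    def backward(stmts, cond):
        """cond: set of variables that must all be null for the dereference to fail."""
        for lhs, rhs in reversed(stmts):
            if lhs in cond:
                cond = cond - {lhs}
                if rhs == "new":     # a fresh allocation is never null:
                    return None      # condition unsatisfiable -> dereference safe
                if rhs != "null":    # copy: the null requirement transfers to rhs
                    cond = cond | {rhs}
        return cond                  # necessary condition at program entry

    # For "p = q; r = new", dereferencing r is safe, while dereferencing p
    # is potentially unsafe exactly when q is null at entry:
    stmts = [("p", "q"), ("r", "new")]
    print(backward(stmts, {"r"}))  # None -> safe
    print(backward(stmts, {"p"}))  # {'q'}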
Abstract:
An online computing server, Online_DPI (where DPI denotes the diffraction precision index), has been created to calculate the 'Cruickshank DPI' value for a given three-dimensional protein or macromolecular structure. It also estimates the atomic coordinate error for all the atoms available in the structure. It is an easy-to-use web server that enables users to visualize the computed values dynamically on the client machine. Users can provide the Protein Data Bank (PDB) identification code or upload the three-dimensional atomic coordinates from the client machine. The computed DPI value for the structure and the atomic coordinate errors for all the atoms are included in the revised PDB file. Further, users can graphically view the atomic coordinate error along with the 'temperature factors' (i.e. atomic displacement parameters). In addition, the computing engine is interfaced with an up-to-date local copy of the Protein Data Bank. New entries are updated every week, and thus users can access all the structures available in the Protein Data Bank. The computing engine is freely accessible online at http://cluster.physics.iisc.ernet.in/dpi/.
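For reference, the Cruickshank DPI in its commonly quoted form (reproduced from the crystallographic literature as background, with symbols as usually defined there, not taken from this abstract) is

    \mathrm{DPI} = \sigma(x, B_{\mathrm{avg}}) = \left(\frac{N_i}{p}\right)^{1/2} C^{-1/3}\, R\, d_{\mathrm{min}},

where N_i is the number of fully occupied atomic sites, p the difference between the number of observations and the number of refined parameters, C the fractional completeness of the data, R the conventional R factor, and d_min the resolution limit.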
Abstract:
The power of X-ray crystal structure analysis as a technique is to 'see where the atoms are'. The results are extensively used by a wide variety of research communities. However, this 'seeing where the atoms are' can give a false sense of security unless the precision of the placement of the atoms has been taken into account. Indeed, the presentation of bond distances and angles to a false precision (i.e. to too many decimal places) is commonplace. This article has three themes. Firstly, a basis for a proper representation of protein crystal structure results is detailed and demonstrated with respect to analyses of Protein Data Bank entries. The basis for establishing the precision of placement of each atom in a protein crystal structure is non-trivial. Secondly, a knowledge base harnessing such a descriptor of precision is presented. It is applied here to the case of salt bridges, i.e. ion pairs, in protein structures; this is the most fundamental place to start with such structure-precision representations, since salt bridges are one of the tenets of protein structure stability. Ion pairs also play a central role in protein oligomerization, molecular recognition of ligands and substrates, allosteric regulation, domain motion and alpha-helix capping. A new knowledge base, SBPS (Salt Bridges in Protein Structures), takes these structural precisions into account and is the first of its kind. The third theme of the article is to indicate natural extensions of the need for such a description of precision, such as those involving metalloproteins and the determination of the protonation states of ionizable amino acids. Overall, it is also noted that this work and these examples are relevant to protein three-dimensional structure molecular graphics software.
Abstract:
We present up-to-date electroweak fits of various Randall-Sundrum (RS) models. We consider the bulk RS, deformed RS, and custodial RS models. For the bulk RS case we find the lightest Kaluza-Klein (KK) mode of the gauge boson to be ~8 TeV, while for the custodial case it is ~3 TeV. The deformed model is the least fine-tuned of all, and can give a good fit for KK masses < 2 TeV depending on the choice of the model parameters. We also comment on the fine-tuning in each case.