147 results for Precision
Abstract:
This work analyses the unique spatio-temporal alteration of the deposition pattern of evaporating nanoparticle-laden droplets resting on a hydrophobic surface through targeted low-frequency substrate vibrations. External excitation near the lowest resonant mode (n = 2) of the droplet initially de-pins and then re-pins the droplet edge, creating pseudo-hydrophilicity (a low contact angle). The vibration subsequently induces droplet shape oscillations (cyclic elongation and flattening), resulting in strong flow recirculation. This strong radially outward liquid flow augments nanoparticle transport, vaporization and agglomeration near the pinned edge, resulting in a much reduced drying time at certain characteristic oscillation frequencies. The resultant deposit exhibits a much flatter structure with a sharp, well-defined peripheral wedge topology compared to natural drying. Such controlled manipulation of transport enables tailoring of the structural and topological morphology of the deposits and offers possible routes towards controlling the formation and drying timescales, which are crucial for applications ranging from pharmaceutics to surface patterning. (C) 2014 AIP Publishing LLC.
Abstract:
The problem addressed in this paper is sound, scalable, demand-driven null-dereference verification for Java programs. Our approach consists conceptually of a base analysis plus two major extensions for enhanced precision. The base analysis is a dataflow analysis wherein we propagate formulas in the backward direction from a given dereference and compute a necessary condition at the entry of the program for the dereference to be potentially unsafe. The extensions are motivated by the presence of certain "difficult" constructs in real programs, e.g., virtual calls with too many candidate targets and library method calls, which would require excessive analysis time to analyze fully. The base analysis is hence configured to skip such a difficult construct when it is encountered, by dropping all information tracked so far that could potentially be affected by the construct. Our extensions are essentially more precise ways to account for the effect of these constructs on the information being tracked, without requiring full analysis of the constructs. The first extension is a novel scheme to transmit formulas along certain kinds of def-use edges, while the second is based on manually constructed backward-direction summary functions of library methods. We have implemented our approach and applied it to a set of real-life benchmarks. The base analysis is on average able to declare about 84% of the dereferences in each benchmark as safe, while the two extensions push this number up to 91%. (C) 2014 Elsevier B.V. All rights reserved.
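As a minimal illustration of the backward, demand-driven style of analysis described above (a toy sketch over a made-up three-address IR, not the paper's implementation), the following propagates a "may be null" condition backward from a dereference and drops everything it tracks when it meets an opaque, "difficult" call:

```python
# Toy backward null-dereference check. Instructions are (op, dst, src) tuples;
# starting from a dereference of `deref_var`, we track the condition
# "one of these variables is null" and walk the program backward.

def backward_null_check(instrs, deref_var):
    """Return the necessary condition at entry for `deref_var` to be null
    at the dereference: a set of variables that must be null
    (empty set => proven safe, None => unknown/potentially unsafe)."""
    tracked = {deref_var}
    for op, dst, src in reversed(instrs):
        if op == "assign" and dst in tracked:    # x = y: substitute y for x
            tracked = (tracked - {dst}) | {src}
        elif op == "new" and dst in tracked:     # x = new T(): x is non-null
            tracked = tracked - {dst}
            if not tracked:
                return set()                     # condition is false: safe
        elif op == "call" and dst in tracked:    # opaque call: "difficult"
            return None                          # drop tracked info, give up
    return tracked

# Example: b = new T(); a = b; a.f  -->  dereference proven safe.
instrs = [("new", "b", None), ("assign", "a", "b")]
print(backward_null_check(instrs, "a"))          # set()
```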
Abstract:
An online computing server, Online_DPI (where DPI denotes the diffraction precision index), has been created to calculate the 'Cruickshank DPI' value for a given three-dimensional protein or macromolecular structure. It also estimates the atomic coordinate error for all the atoms available in the structure. It is an easy-to-use web server that enables users to visualize the computed values dynamically on the client machine. Users can provide the Protein Data Bank (PDB) identification code or upload the three-dimensional atomic coordinates from the client machine. The computed DPI value for the structure and the atomic coordinate errors for all the atoms are included in the revised PDB file. Further, users can graphically view the atomic coordinate error along with 'temperature factors' (i.e. atomic displacement parameters). In addition, the computing engine is interfaced with an up-to-date local copy of the Protein Data Bank. New entries are updated every week, and thus users can access all the structures available in the Protein Data Bank. The computing engine is freely accessible online at http://cluster.physics.iisc.ernet.in/dpi/.
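For reference, here is a small sketch of the commonly quoted form of Cruickshank's DPI; the exact variant and data handling used by Online_DPI may differ, and the example numbers are made up:

```python
import math

def cruickshank_dpi(n_atoms, n_obs, n_params, completeness, r_factor, d_min):
    """Cruickshank's diffraction-component precision index in the commonly
    quoted form  DPI = sqrt(N_atoms / p) * C^(-1/3) * R * d_min,
    with p = n_obs - n_params and C the fractional completeness.
    Returns an estimate of the average coordinate error in angstroms."""
    p = n_obs - n_params
    if p <= 0:
        raise ValueError("refinement not overdetermined (n_obs <= n_params)")
    return math.sqrt(n_atoms / p) * completeness ** (-1.0 / 3.0) * r_factor * d_min

# Illustrative numbers for a mid-resolution structure:
print(round(cruickshank_dpi(n_atoms=2000, n_obs=25000, n_params=8000,
                            completeness=0.95, r_factor=0.19, d_min=2.0), 3))
```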
Abstract:
The power of X-ray crystal structure analysis as a technique is to 'see where the atoms are'. The results are extensively used by a wide variety of research communities. However, this 'seeing where the atoms are' can give a false sense of security unless the precision of the placement of the atoms has been taken into account. Indeed, the presentation of bond distances and angles to a false precision (i.e. to too many decimal places) is commonplace. This article has three themes. Firstly, a basis for a proper representation of protein crystal structure results is detailed and demonstrated with respect to analyses of Protein Data Bank entries. The basis for establishing the precision of placement of each atom in a protein crystal structure is non-trivial. Secondly, a knowledge base harnessing such a descriptor of precision is presented. It is applied here to the case of salt bridges, i.e. ion pairs, in protein structures; this is the most fundamental place to start with such structure-precision representations since salt bridges are one of the tenets of protein structure stability. Ion pairs also play a central role in protein oligomerization, molecular recognition of ligands and substrates, allosteric regulation, domain motion and alpha-helix capping. A new knowledge base, SBPS (Salt Bridges in Protein Structures), takes these structural precisions into account and is the first of its kind. The third theme of the article is to indicate natural extensions of the need for such a description of precision, such as those involving metalloproteins and the determination of the protonation states of ionizable amino acids. Overall, it is also noted that this work and these examples are also relevant to protein three-dimensional structure molecular graphics software.
Abstract:
We present up-to-date electroweak fits of various Randall-Sundrum (RS) models. We consider the bulk RS, the deformed RS and the custodial RS models. For the bulk RS case we find the lightest Kaluza-Klein (KK) mode of the gauge boson to be ~8 TeV, while for the custodial case it is ~3 TeV. The deformed model is the least fine-tuned of all, giving a good fit for KK masses < 2 TeV depending on the choice of the model parameters. We also comment on the fine-tuning in each case.
Abstract:
Hydrogen bonds in biological macromolecules play significant structural and functional roles. They are the key contributors to most of the interactions without which no living system exists. In view of this, a web-based computing server, the Hydrogen Bonds Computing Server (HBCS), has been developed to compute hydrogen-bond interactions and their standard deviations for any given macromolecular structure. The computing server is connected to a locally maintained Protein Data Bank (PDB) archive. Thus, the user can calculate the above parameters for any deposited structure, and options have also been provided for the user to upload a structure in PDB format from the client machine. In addition, the server has been interfaced with the molecular viewers Jmol and JSmol to visualize the hydrogen-bond interactions. The proposed server is freely available and accessible via the World Wide Web at http://bioserver1.physics.iisc.ernet.in/hbcs/.
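By way of illustration, here is a minimal sketch of the kind of geometric donor-hydrogen-acceptor test a hydrogen-bond server applies; the distance and angle cutoffs below are common textbook choices, not necessarily those used by HBCS:

```python
import math

def is_hydrogen_bond(donor, hydrogen, acceptor,
                     max_da_dist=3.5, min_dha_angle=120.0):
    """donor/hydrogen/acceptor are (x, y, z) coordinates in angstroms.
    Accepts if the donor-acceptor distance and the D-H...A angle both
    fall within the (illustrative) cutoffs."""
    def angle_at(b, a, c):                      # angle a-b-c at vertex b, degrees
        u = [a[i] - b[i] for i in range(3)]
        v = [c[i] - b[i] for i in range(3)]
        dot = sum(u[i] * v[i] for i in range(3))
        nu = math.sqrt(sum(x * x for x in u))
        nv = math.sqrt(sum(x * x for x in v))
        return math.degrees(math.acos(dot / (nu * nv)))
    return (math.dist(donor, acceptor) <= max_da_dist and
            angle_at(hydrogen, donor, acceptor) >= min_dha_angle)

# Roughly linear N-H...O geometry: qualifies under these cutoffs.
print(is_hydrogen_bond((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.9, 0.1, 0.0)))
```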
Abstract:
This paper presents the programming of an FPGA (Field Programmable Gate Array) to emulate the dynamics of DC machines. The FPGA allows high-speed real-time simulation with high precision. The described design includes a block-diagram representation of the DC machine, which contains all the arithmetic and logical operations. The real-time simulation of the machine on the FPGA is controlled through user interfaces: a keypad interface, an online LCD display and a digital-to-analog converter. This approach enables emulation of different electrical machines by changing the parameters. A separately excited DC machine is implemented and experimental results are presented.
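As a sketch of what such an emulator computes, the following integrates the textbook separately excited DC machine equations with a fixed-step forward-Euler rule, the sort of update a hardware implementation would evaluate each cycle; the parameter values are illustrative, not taken from the paper:

```python
# Textbook separately excited DC machine model:
#   La * dia/dt = Va - Ra*ia - Ke*w     (armature circuit)
#   J  * dw/dt  = Kt*ia - B*w - TL      (mechanical equation)

def dc_machine_step(ia, w, Va, TL, dt=1e-4,
                    Ra=1.0, La=0.05, Ke=0.5, Kt=0.5, J=0.01, B=0.001):
    """One forward-Euler step; returns updated (armature current, speed)."""
    dia = (Va - Ra * ia - Ke * w) / La
    dw = (Kt * ia - B * w - TL) / J
    return ia + dt * dia, w + dt * dw

# Spin up from rest at 100 V with no load torque:
ia, w = 0.0, 0.0
for _ in range(20000):            # 2 s of simulated time
    ia, w = dc_machine_step(ia, w, Va=100.0, TL=0.0)
print(round(ia, 2), round(w, 1))  # near steady state: small ia, w close to Va/Ke
```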
Abstract:
The use of two liquid crystals as solvents in the determination of molecular structure has been demonstrated for systems that do not yield structural information from studies in a single solvent because the spectra are deceptively simple, with the result that not all the spectral parameters can be derived with reasonable precision. The specific system studied was 2-(p-bromophenyl)-4,6-dichloropyrimidine, for which relative inter-proton distances have been determined from the proton NMR spectra in two nematic solvents.
Abstract:
The emission from neutral hydrogen (HI) clouds in the post-reionization era (z <= 6), too faint to be individually detected, is present as a diffuse background in all low-frequency radio observations below 1420 MHz. The angular and frequency fluctuations of this radiation (~1 mK) are an important future probe of the large-scale structures in the Universe. We show that such observations are a very effective probe of the background cosmological model and the perturbed Universe. In our study we focus on the possibility of determining the redshift-space distortion parameter beta, the coordinate distance r_nu, and its derivative with respect to redshift r_nu'. Using reasonable estimates for the observational uncertainties and configurations representative of ongoing and upcoming radio interferometers, we predict parameter estimation at a precision comparable with supernova Ia observations and galaxy redshift surveys, across a wide range in redshift that is only partially accessed by other probes. Future HI observations of the post-reionization era present a new technique, complementing several existing ones, to probe the expansion history and to elucidate the nature of dark energy.
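A generic Fisher-matrix forecast of the kind alluded to above can be sketched as follows; the toy observable, fiducial parameters and noise level are illustrative stand-ins, not the paper's HI power spectrum model:

```python
import numpy as np

def fisher_matrix(model, theta0, sigma, x, eps=1e-6):
    """F_ij = sum_k (dmu_k/dtheta_i)(dmu_k/dtheta_j) / sigma_k^2,
    with derivatives taken numerically around the fiducial theta0."""
    n = len(theta0)
    grads = []
    for i in range(n):
        tp, tm = np.array(theta0, float), np.array(theta0, float)
        tp[i] += eps
        tm[i] -= eps
        grads.append((model(tp, x) - model(tm, x)) / (2 * eps))
    F = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            F[i, j] = np.sum(grads[i] * grads[j] / sigma**2)
    return F

# Toy power-spectrum-like observable depending on two parameters (beta, r):
model = lambda th, k: th[1] * (1.0 + th[0] * k) ** 2
k = np.linspace(0.1, 1.0, 50)
F = fisher_matrix(model, [0.5, 10.0], sigma=np.full(50, 0.5), x=k)
print(np.sqrt(np.diag(np.linalg.inv(F))))   # forecast 1-sigma errors
```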
Abstract:
With many innovations in process technology, forging is establishing itself as a precision manufacturing process: since forging is used to produce complex shapes in difficult materials, it requires dies of complex configuration made of high-strength, wear-resistant materials. Extensive research and development work is being undertaken internationally to analyse the stresses in forging dies and the flow of material in forged components. Identification of the location, size and shape of dead-metal zones is required for component design. Further, knowledge of the strain distribution in the flowing metal indicates the degree to which the component is being work-hardened. Such information is helpful in the selection of process parameters such as dimensional allowances and interface lubrication, as well as in the determination of post-forging operations such as heat treatment and machining. In the presently reported work, the effect of aperture width and initial specimen height on the strain distribution in the plane-strain extrusion forging of machined lead billets is observed: the distortion of grids inscribed on the face of the specimen gives the strain distribution. The stress-equilibrium approach is used to optimise a model of flow in extrusion forging, which is found to be effective in estimating the size of the dead-metal zone. The work carried out so far indicates that the methodology of using the stress-equilibrium approach to develop models of flow in closed-die forging can be a useful tool in component, process and die design.
Abstract:
A fuzzy-logic-based centralized control algorithm for irrigation canals is presented. The purpose of the algorithm is to control the downstream discharge and water level of pools in the canal by adjusting the discharge release from the upstream end and the gate settings. The algorithm is based on inversion in space of the dynamic wave model (Saint-Venant equations), wherein the momentum equation is replaced by a fuzzy rule-based model while the continuity equation is retained in its complete form. The fuzzy rule-based model is developed by fuzzification of a new mathematical model for wave velocity, the derivational details of which are given. The advantages of the fuzzy control algorithm over other conventional control algorithms are described: it is transparent and intuitive, and no linearizations of the governing equations are involved. The timing of the algorithm and the method of computation are explained. It is shown that tuning is easy and the computations are straightforward. The algorithm provides stable, realistic and robust outputs. Its disadvantage is reduced precision in the outputs due to the approximation inherent in fuzzy logic. Feedback control logic is adopted to eliminate error caused by system disturbances as well as error caused by the reduced precision in the outputs. The algorithm is tested by applying it to a water-level control problem in a fictitious canal with a single pool and also in a real canal with a series of pools. The results obtained from the algorithm are found to be comparable to those obtained from conventional control algorithms.
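To illustrate the idea of replacing a governing relation with a fuzzy rule base, here is a minimal rule-based sketch with weighted-average defuzzification; the membership functions, rules and values are invented for illustration and are not the paper's rule base for wave velocity:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_wave_velocity(depth):
    # Rule base: IF depth is low THEN v ~ 1; medium -> 2; high -> 3.
    rules = [
        (tri(depth, 0.0, 0.5, 1.5), 1.0),
        (tri(depth, 0.5, 1.5, 2.5), 2.0),
        (tri(depth, 1.5, 2.5, 4.0), 3.0),
    ]
    num = sum(w * v for w, v in rules)   # weighted-average
    den = sum(w for w, _ in rules)       # defuzzification
    return num / den if den > 0 else 0.0

print(fuzzy_wave_velocity(1.0))  # interpolates between the low and medium rules
```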
Abstract:
Theoretical approaches are of fundamental importance in predicting the potential impact of waste disposal facilities on groundwater contamination. Appropriate design parameters are generally estimated by fitting theoretical models to data gathered from field monitoring or laboratory experiments. Transient through-diffusion tests are generally conducted in the laboratory to estimate the mass transport parameters of the proposed barrier material. These parameters are usually estimated either by approximate eye-fitting calibration or by combining the solution of the direct problem with any available gradient-based technique. In this work, an automated, gradient-free solver is developed to estimate the mass transport parameters of a transient through-diffusion model. The proposed inverse model uses a particle swarm optimization (PSO) algorithm that is based on the social behavior of animals searching for food sources. The finite-difference numerical solution of the forward model is integrated with the PSO algorithm to solve the inverse problem of parameter estimation. The working principle of the new solver is demonstrated and mass transport parameters are estimated from laboratory through-diffusion experimental data. An inverse model based on a standard gradient-based technique is formulated for comparison with the proposed solver. A detailed comparative study is carried out between the conventional methods and the proposed solver. The present automated technique is found to be very efficient and robust, and the mass transport parameters are obtained with great precision.
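A minimal sketch of gradient-free PSO for such inverse parameter estimation is given below; the quadratic toy misfit stands in for the through-diffusion forward model, which the paper actually solves by finite differences:

```python
import random

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Standard PSO: particles track their personal best and the global
    best, with inertia w and attraction coefficients c1 (personal), c2
    (social). Returns the best parameter vector found."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    gbest = pbest[min(range(n_particles), key=lambda i: pbest_val[i])][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < objective(gbest):
                    gbest = pos[i][:]
    return gbest

# Toy misfit with known optimum at (2.0, 0.5):
misfit = lambda p: (p[0] - 2.0) ** 2 + (p[1] - 0.5) ** 2
print(pso(misfit, bounds=[(0.0, 5.0), (0.0, 1.0)]))
```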
Abstract:
Data-flow analysis is an integral part of any aggressive optimizing compiler. We propose a framework for improving the precision of data-flow analysis in the presence of complex control flow. We initially perform data-flow analysis to determine those control-flow merges which cause the loss in data-flow analysis precision. The control-flow graph of the program is then restructured such that performing data-flow analysis on the restructured graph gives more precise results. The proposed framework is both simple, involving the familiar notion of product automata, and general, since it is applicable to any forward data-flow analysis. Apart from proving that our restructuring process is correct, we also show that restructuring is effective in that it necessarily leads to more optimization opportunities. Furthermore, the framework handles the trade-off between the increase in data-flow precision and the code-size increase inherent in restructuring. We show that determining an optimal restructuring is NP-hard, and we propose and evaluate a greedy strategy. The framework has been implemented in the Scale research compiler and instantiated for the specific problem of constant propagation. On the SPECINT 2000 benchmark suite we observe an average speedup of 4% in running time over the Wegman-Zadeck conditional constant propagation algorithm and 2% over a purely path-profile-guided approach.
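The precision loss that motivates the restructuring can be seen in a few lines: at a control-flow merge, constant propagation takes the lattice meet, so a variable survives as a constant only if it is the same constant on every incoming path. A toy sketch (not the paper's framework):

```python
BOTTOM = "NAC"   # "not a constant"

def merge(envs):
    """Lattice meet of per-path environments mapping variable -> constant.
    A variable stays constant only if all paths agree on its value."""
    out = {}
    for var in set().union(*envs):
        vals = {env.get(var, BOTTOM) for env in envs}
        out[var] = vals.pop() if len(vals) == 1 else BOTTOM
    return out

# x is 1 on one path and 2 on the other: the merged environment loses it,
# so `y = x + 1` after the merge cannot be folded; duplicating the merge
# node per incoming path (as restructuring does) keeps each constant alive.
path1 = {"x": 1, "z": 7}
path2 = {"x": 2, "z": 7}
print(merge([path1, path2]))   # {'x': 'NAC', 'z': 7}
```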
Abstract:
The application of Gaussian quadrature (GQ) procedures to the evaluation of i-E curves in linear sweep voltammetry is advocated. It is shown that a high degree of precision is achieved with these methods, and the values obtained through GQ are in good agreement with (and even better than) the values reported in the literature by Nicholson and Shain, for example. Another welcome feature of GQ is that it can also be interpreted as an elegant, efficient analytic approximation scheme. A comparison of the values obtained by this approach and by a recent scheme based on series approximation proposed by Oldham is made, and excellent agreement is shown to exist.
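A minimal sketch of the Gauss-Legendre rule underlying such GQ procedures is shown below; the smooth integrand is illustrative, not the actual i-E convolution kernel:

```python
import numpy as np

def gauss_legendre(f, a, b, n):
    """Integrate f over [a, b] with an n-point Gauss-Legendre rule."""
    x, w = np.polynomial.legendre.leggauss(n)   # nodes/weights on [-1, 1]
    t = 0.5 * (b - a) * x + 0.5 * (b + a)       # map nodes to [a, b]
    return 0.5 * (b - a) * np.sum(w * f(t))

f = lambda t: np.exp(-t) / np.sqrt(1.0 + t)    # smooth stand-in kernel
exact = gauss_legendre(f, 0.0, 5.0, 200)       # high-order reference value
for n in (4, 8, 16):
    print(n, abs(gauss_legendre(f, 0.0, 5.0, n) - exact))  # error falls fast
```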
Abstract:
A global recursive bisection algorithm is described for computing the complex zeros of a polynomial. It has complexity O(n^3 p), where n is the degree of the polynomial and p the bit-precision requirement. If n processors are available, it can be realized in parallel with complexity O(n^2 p); it can also be implemented using exact arithmetic. A combined Wilf-Hansen algorithm is suggested for reduction in complexity.
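An illustrative, simplified version of such global bisection (not the paper's algorithm) can be built on the argument principle: count the zeros inside a rectangle from the winding number of p(z) around its boundary, and recursively split any rectangle that contains zeros:

```python
import cmath

def poly(coeffs, z):                           # coeffs: highest degree first
    result = 0j
    for c in coeffs:
        result = result * z + c
    return result

def winding(coeffs, x0, x1, y0, y1, samples=400):
    """Zeros enclosed by the rectangle, via the winding number of p(z)
    around its boundary (assumes no zero lies on the boundary)."""
    corners = [complex(x0, y0), complex(x1, y0), complex(x1, y1), complex(x0, y1)]
    total = 0.0
    for a, b in zip(corners, corners[1:] + corners[:1]):
        prev = cmath.phase(poly(coeffs, a))
        for k in range(1, samples + 1):
            cur = cmath.phase(poly(coeffs, a + (b - a) * k / samples))
            d = cur - prev
            while d > cmath.pi:
                d -= 2 * cmath.pi
            while d < -cmath.pi:
                d += 2 * cmath.pi
            total += d
            prev = cur
    return round(total / (2 * cmath.pi))

def bisect(coeffs, x0, x1, y0, y1, tol=1e-3, out=None):
    out = [] if out is None else out
    if winding(coeffs, x0, x1, y0, y1) == 0:
        return out                             # prune empty rectangles
    if max(x1 - x0, y1 - y0) < tol:
        out.append(complex((x0 + x1) / 2, (y0 + y1) / 2))
        return out
    xm, ym = (x0 + x1) / 2, (y0 + y1) / 2      # split into four sub-rectangles
    for bx in ((x0, xm), (xm, x1)):
        for by in ((y0, ym), (ym, y1)):
            bisect(coeffs, bx[0], bx[1], by[0], by[1], tol, out)
    return out

# z^2 + 1 has zeros at +i and -i (asymmetric box keeps zeros off cut lines):
print(bisect([1, 0, 1], -2.1, 2.2, -2.3, 2.4))
```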