66 results for STANDARD AUTOMATED PERIMETRY
Abstract:
A completely automated temperature-programmed reaction (TPR) system for carrying out gas-solid catalytic reactions under atmospheric flow conditions has been fabricated to study CO and hydrocarbon oxidation, and NO reduction. The system consists of an all-stainless-steel UHV system, a quadrupole mass spectrometer SX200 (VG Scientific), a tubular furnace and micro-reactor, a temperature controller, a versatile gas handling system, and a data acquisition and analysis system. The performance of the system has been tested under standard experimental conditions for CO oxidation over well-characterized Ce1-x-y(La/Y)yO2-δ catalysts. Three-way catalysis, converting CO, NO, and C2H2 to CO2, N2, and H2O, was tested over this catalyst, which shows complete removal of the pollutants below 325 °C. Fixed oxide-ion defects in Pt-substituted Ce1-y(La/Y)yO2-y/2 show higher catalytic activity than Pt-ion-substituted CeO2.
Abstract:
This is a summary of the Beyond the Standard Model (including model building) working group of the WHEPP-X workshop held at Chennai from January 3 to 15, 2008.
Abstract:
Theoretical approaches are of fundamental importance for predicting the potential impact of waste disposal facilities on ground water contamination. Appropriate design parameters are generally estimated by fitting theoretical models to data gathered from field monitoring or laboratory experiments. Transient through-diffusion tests are generally conducted in the laboratory to estimate the mass transport parameters of the proposed barrier material. These parameters are usually estimated either by approximate eye-fitting calibration or by combining the solution of the direct problem with available gradient-based techniques. In this work, an automated, gradient-free solver is developed to estimate the mass transport parameters of a transient through-diffusion model. The proposed inverse model uses a particle swarm optimization (PSO) algorithm that is based on the social behavior of animals searching for food sources. The finite difference numerical solution of the forward model is integrated with the PSO algorithm to solve the inverse problem of parameter estimation. The working principle of the new solver is demonstrated and mass transport parameters are estimated from laboratory through-diffusion experimental data. An inverse model based on a standard gradient-based technique is formulated for comparison with the proposed solver. A detailed comparative study is carried out between the conventional methods and the proposed solver. The present automated technique is found to be very efficient and robust, and the mass transport parameters are obtained with great precision.
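As a rough illustration of the gradient-free estimation strategy described above, the sketch below fits parameters of a stand-in forward model with a basic PSO loop. The forward model, the parameter names (D_eff, retardation), and all numerical settings are hypothetical placeholders, not the paper's transient through-diffusion finite-difference solver; any forward solver mapping parameters to predicted concentrations could be substituted.

# Minimal sketch of gradient-free parameter estimation with particle swarm
# optimization (PSO). The forward model is a stand-in exponential decay,
# NOT the through-diffusion finite-difference model of the paper.
import numpy as np

rng = np.random.default_rng(0)

def forward_model(params, t):
    """Hypothetical stand-in for the transient through-diffusion solver."""
    D_eff, retardation = params
    return np.exp(-D_eff * t / retardation)

def misfit(params, t, observed):
    return np.sum((forward_model(params, t) - observed) ** 2)

def pso(objective, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
    lo, hi = np.array(bounds).T
    dim = len(bounds)
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # positions
    v = np.zeros_like(x)                               # velocities
    pbest = x.copy()
    pbest_val = np.array([objective(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Synthetic "laboratory" data for demonstration only.
t = np.linspace(0.1, 10.0, 50)
data = forward_model([0.8, 2.0], t) + 0.01 * rng.standard_normal(t.size)
best, err = pso(lambda p: misfit(p, t, data), bounds=[(0.01, 5.0), (0.1, 10.0)])
print(best, err)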
Abstract:
An experimental technique is proposed for the estimation of crack length as well as the crack closure/opening stress during fatigue crack growth. A specially designed, single-cantilever crack-opening-displacement gauge is used to monitor these variables during fatigue crack propagation testing. The technique was experimentally validated through electron fractography.
Abstract:
This work deals with the formulation and implementation of finite deformation viscoplasticity within the framework of stress-based hybrid finite element methods. Hybrid elements, which are based on a two-field variational formulation, are much less susceptible to locking than conventional displacement-based elements. The conventional return-mapping scheme cannot be used in the context of hybrid stress methods, since the stress is known, and the strain and the internal plastic variables have to be recovered using this known stress field. We discuss the formulation and implementation of the consistent tangent tensor and the return-mapping algorithm within the context of the hybrid method. We demonstrate the efficacy of the algorithm on a wide range of problems.
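To make the inversion concrete, here is a schematic one-dimensional sketch of recovering the strain and the plastic variable from a known stress, using backward Euler on an assumed Perzyna-type overstress law with linear hardening (tension only). The material law and every parameter value are illustrative assumptions; this is not the paper's finite-deformation hybrid formulation.

# Schematic 1-D illustration of recovering strain and the plastic variable from a
# KNOWN stress, the key inversion needed by stress-based hybrid elements.
def recover_state_from_stress(sigma, eps_p_n, dt,
                              E=200e3, sigma_y0=250.0, H=1e3,
                              gamma=1.0, m=2.0, tol=1e-10, max_iter=50):
    """Backward-Euler update: solve r(d) = d - dt*gamma*<f(d)>^m = 0 for the
    plastic-strain increment d by Newton iteration, given the stress sigma."""
    d = 0.0
    for _ in range(max_iter):
        f = (sigma - sigma_y0 - H * (eps_p_n + d)) / sigma_y0  # overstress
        phi = max(f, 0.0) ** m
        r = d - dt * gamma * phi
        if abs(r) < tol:
            break
        dphi = m * max(f, 0.0) ** (m - 1) * (-H / sigma_y0) if f > 0 else 0.0
        dr = 1.0 - dt * gamma * dphi
        d -= r / dr
    eps_p = eps_p_n + d
    eps_total = sigma / E + eps_p      # elastic + plastic strain
    return eps_total, eps_p

print(recover_state_from_stress(sigma=300.0, eps_p_n=0.0, dt=1e-2))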
Abstract:
Two typical alternative conformations for double-stranded polynucleotides with the Watson-Crick base-pairing scheme are presented. These types avoid tangling of the chains. Representative models of these types, with two different views to show the similarity and dissimilarity between these models and the Watson-Crick model, are given.
Abstract:
The application of computer-aided inspection, integrated with coordinate measuring machines and laser scanners, to inspect manufactured aircraft parts using robust registration of two point datasets is a subject of active research in computational metrology. This paper presents a novel approach to automated inspection by matching shapes based on the modified iterative closest point (ICP) method to define a criterion for the acceptance or rejection of a part. This procedure improves upon existing methods by eliminating the need for constructing either a tessellated or smooth representation of the inspected part and the requirement for a priori knowledge of approximate registration and correspondence between the points representing the computer-aided design dataset and the part to be inspected. In addition, this procedure establishes a better measure of error between the two matched datasets. The use of localized region-based triangulation is proposed for tracking the error. The approach described improves the convergence of the ICP technique with a dramatic decrease in computational effort. Experimental results obtained by implementing this proposed approach using both synthetic and practical data show that the present method is efficient and robust, validating the algorithm and demonstrating its potential for use in engineering applications.
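For orientation, the sketch below shows a plain ICP loop with an SVD-based rigid alignment step operating directly on two point clouds. It is a generic textbook ICP under simple assumptions, not the paper's modified variant or its region-based error tracking; the acceptance test is only indicated by a comment.

# Minimal sketch of the point-set matching idea: iterative closest point (ICP)
# with an SVD-based rigid alignment step, applied directly to two point clouds
# (no tessellation or prior correspondence).
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(P, Q):
    """Least-squares rotation R and translation t mapping P onto Q (Kabsch)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def icp(measured, cad, n_iter=50, tol=1e-9):
    """Align the measured (scanned) points to the CAD point set."""
    src = measured.copy()
    tree = cKDTree(cad)
    prev_err = np.inf
    for _ in range(n_iter):
        dist, idx = tree.query(src)            # closest-point correspondence
        R, t = best_rigid_transform(src, cad[idx])
        src = src @ R.T + t
        err = np.mean(dist ** 2)
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return src, err   # aligned points and mean-square error (compare to a tolerance to accept/reject)

# Tiny synthetic check: a rotated, shifted copy of the CAD points should align closely.
rng = np.random.default_rng(0)
cad_pts = rng.random((200, 3))
theta = 0.1
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
aligned, err = icp(cad_pts @ Rz.T + 0.05, cad_pts)
print(err)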
Abstract:
Non-standard finite difference methods (NSFDM) introduced by Mickens [Non-standard Finite Difference Models of Differential Equations, World Scientific, Singapore, 1994] are interesting alternatives to the traditional finite difference and finite volume methods. When applied to linear hyperbolic conservation laws, these methods reproduce exact solutions. In this paper, the NSFDM is first extended to hyperbolic systems of conservation laws, by a novel utilization of the decoupled equations using characteristic variables. In the second part of this paper, the NSFDM is studied for its efficacy in application to nonlinear scalar hyperbolic conservation laws. The original NSFDMs introduced by Mickens (1994) were not in conservation form, which is an important feature for capturing discontinuities at the right locations. Mickens [Construction and analysis of a non-standard finite difference scheme for the Burgers–Fisher equations, Journal of Sound and Vibration 257 (4) (2002) 791–797] recently introduced a NSFDM in conservative form. This method captures shock waves exactly, without any numerical dissipation. In this paper, this algorithm is tested for the case of expansion waves with sonic points and is found to generate unphysical expansion shocks. As a remedy to this defect, we use the strategy of composite schemes [R. Liska, B. Wendroff, Composite schemes for conservation laws, SIAM Journal on Numerical Analysis 35 (6) (1998) 2250–2271], in which the accurate NSFDM is used as the basic scheme and a localized relaxation NSFDM is used as the supporting scheme, which acts like a filter. Relaxation schemes introduced by Jin and Xin [The relaxation schemes for systems of conservation laws in arbitrary space dimensions, Communications on Pure and Applied Mathematics 48 (1995) 235–276] are based on relaxation systems which replace the nonlinear hyperbolic conservation laws by a semi-linear system with a stiff relaxation term. The relaxation parameter (λ) is chosen locally on the three-point stencil of the grid, which makes the proposed method more efficient. This composite scheme overcomes the problem of unphysical expansion shocks and captures shock waves with an accuracy better than the upwind relaxation scheme, as demonstrated by the test cases, together with comparisons with popular numerical methods such as the Roe scheme and ENO schemes.
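The composite-scheme strategy can be illustrated generically: run an accurate but oscillatory scheme for several steps and periodically apply a dissipative scheme as a filter. The sketch below does this for inviscid Burgers' equation using Richtmyer Lax-Wendroff as the accurate scheme and Lax-Friedrichs as the filter, with periodic boundaries; these schemes are stand-ins for illustration only, not the NSFDM/relaxation-NSFDM pair developed in the paper.

# Generic demonstration of the composite-scheme idea on inviscid Burgers' equation:
# several steps of an accurate scheme (Richtmyer Lax-Wendroff) followed by one
# step of a dissipative scheme (Lax-Friedrichs) acting as a filter.
import numpy as np

f = lambda u: 0.5 * u ** 2          # Burgers flux

def lax_wendroff(u, lam):           # Richtmyer two-step form, periodic BCs
    up = np.roll(u, -1)
    uh_r = 0.5 * (u + up) - 0.5 * lam * (f(up) - f(u))      # u at j+1/2
    uh_l = np.roll(uh_r, 1)                                  # u at j-1/2
    return u - lam * (f(uh_r) - f(uh_l))

def lax_friedrichs(u, lam):
    up, um = np.roll(u, -1), np.roll(u, 1)
    return 0.5 * (up + um) - 0.5 * lam * (f(up) - f(um))

def composite(u, lam, n_steps, k=4):
    """Every k-th step uses the dissipative filter; the rest use the accurate scheme."""
    for n in range(n_steps):
        u = lax_friedrichs(u, lam) if (n + 1) % k == 0 else lax_wendroff(u, lam)
    return u

x = np.linspace(0.0, 1.0, 201)
u0 = np.where(x < 0.5, 1.0, 0.0)     # step data steepening into a shock
dt, dx = 0.002, x[1] - x[0]          # CFL number dt/dx = 0.4 for |u| <= 1
u_final = composite(u0.copy(), dt / dx, n_steps=100)
print(u_final.min(), u_final.max())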
Abstract:
In this paper, we exploit the idea of decomposition to match buyers and sellers in an electronic exchange for trading large volumes of homogeneous goods, where the buyers and sellers specify marginal-decreasing piecewise constant price curves to capture volume discounts. Such exchanges are relevant for automated trading in many e-business applications. The problem of determining winners and Vickrey prices in such exchanges is known to have a worst-case complexity equal to that of as many as (1 + m + n) NP-hard problems, where m is the number of buyers and n is the number of sellers. Our method decomposes the overall exchange problem into two separate and simpler problems: 1) a forward auction and 2) a reverse auction, each of which turns out to be a generalized knapsack problem. In the proposed approach, we first determine the quantity of units to be traded between the sellers and the buyers using fast heuristics developed by us. Next, we solve the forward auction and the reverse auction using fully polynomial time approximation schemes available in the literature. The proposed approach has worst-case polynomial time complexity, and our experimentation shows that the approach produces good quality solutions to the problem. Note to Practitioners - In recent times, electronic marketplaces have provided an efficient way for businesses and consumers to trade goods and services. The use of innovative mechanisms and algorithms has made it possible to improve the efficiency of electronic marketplaces by enabling optimization of revenues for the marketplace and of utilities for the buyers and sellers. In this paper, we look at single-item, multiunit electronic exchanges. These are electronic marketplaces where buyers submit bids and sellers submit asks for multiple units of a single item. We allow buyers and sellers to specify volume discounts using suitable functions. Such exchanges are relevant for high-volume business-to-business trading of standard products, such as silicon wafers, very large-scale integrated chips, desktops, telecommunications equipment, commoditized goods, etc. The problem of determining winners and prices in such exchanges is known to involve solving many NP-hard problems. Our paper exploits the familiar idea of decomposition, uses certain algorithms from the literature, and develops two fast heuristics to solve the problem in a near optimal way in worst-case polynomial time.
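As a toy illustration of the first stage of such a decomposition, the sketch below greedily matches buyer bid steps (sorted high to low) against seller ask steps (sorted low to high) to decide how many units change hands. The curves, prices, and the greedy rule are simplified stand-ins, not the paper's heuristics, and the subsequent forward/reverse auction (generalized knapsack) stages are not shown.

# Toy illustration of the decomposition's first stage: decide how many units
# change hands by matching bid steps against ask steps while the bid covers the ask.
def traded_quantity(buyer_curves, seller_curves):
    """Each curve is a list of (unit_price, quantity) steps, piecewise constant."""
    bids = sorted((s for c in buyer_curves for s in c), key=lambda s: -s[0])
    asks = sorted((s for c in seller_curves for s in c), key=lambda s: s[0])
    traded, i, j = 0, 0, 0
    bid_left, ask_left = 0, 0
    while True:
        if bid_left == 0:
            if i == len(bids):
                break
            bid_price, bid_left = bids[i]; i += 1
        if ask_left == 0:
            if j == len(asks):
                break
            ask_price, ask_left = asks[j]; j += 1
        if bid_price < ask_price:       # no more profitable trades
            break
        q = min(bid_left, ask_left)
        traded += q
        bid_left -= q
        ask_left -= q
    return traded

# Example: two buyers and two sellers with marginal-decreasing (volume-discount) bid steps.
buyers = [[(10, 5), (8, 5)], [(9, 4)]]
sellers = [[(6, 6), (9, 6)], [(7, 3)]]
print(traded_quantity(buyers, sellers))   # 9 units trade in this toy instance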
Abstract:
Understanding the shape and size of different features of the human body from scanned data is necessary for automated design and evaluation of product ergonomics. In this paper, a computational framework is presented for automatic detection and recognition of important facial feature regions from scanned head and shoulder polyhedral models. A noise-tolerant methodology is proposed, using discrete curvature computations, band-pass filtering, and morphological operations, for isolation of the primary feature regions of the face, namely, the eyes, nose, and mouth. The spatial disposition of the critical points of these isolated feature regions is analyzed for the recognition of these critical points as the standard landmarks associated with the primary facial features. A number of clinically identified landmarks lie on the facial midline. An efficient algorithm for detection and processing of the midline, using a point sampling technique, is also presented. The results obtained using data from more than 20 subjects are verified through visualization and physical measurements. Color-based and triangle-skewness-based schemes for isolation of geometrically nonprominent features and the ear region are also presented. [DOI: 10.1115/1.3330420]
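One ingredient of such a pipeline can be sketched with a discrete Gaussian-curvature estimate by angle deficit at each mesh vertex, followed by a simple band-pass threshold to flag candidate feature vertices. This is a generic curvature computation under the assumption of a closed triangle mesh (boundary vertices would need separate handling); the filtering, morphological cleanup, and landmark recognition stages of the paper are not reproduced here.

# Sketch of one ingredient of the pipeline: discrete (Gaussian) curvature by
# angle deficit at each vertex of a triangle mesh, then a band-pass threshold.
import numpy as np

def angle_deficit_curvature(vertices, triangles):
    """vertices: (n,3) array; triangles: (m,3) integer array of vertex indices."""
    deficit = np.full(len(vertices), 2.0 * np.pi)
    for tri in triangles:
        for k in range(3):
            i, j, l = tri[k], tri[(k + 1) % 3], tri[(k + 2) % 3]
            e1 = vertices[j] - vertices[i]
            e2 = vertices[l] - vertices[i]
            cos_a = np.dot(e1, e2) / (np.linalg.norm(e1) * np.linalg.norm(e2))
            deficit[i] -= np.arccos(np.clip(cos_a, -1.0, 1.0))
    return deficit                      # ~0 on flat regions, large near sharp features

def band_pass_vertices(curvature, low, high):
    """Indices of vertices whose curvature magnitude falls inside [low, high]."""
    mag = np.abs(curvature)
    return np.where((mag >= low) & (mag <= high))[0]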
Abstract:
In this paper, we propose a Self-Adaptive Migration Model for Genetic Algorithms, where the population size, the number of crossover points, and the mutation rate for each population are set adaptively. Further, the migration of individuals between populations is decided dynamically. This paper gives a mathematical schema analysis of the method, showing that the algorithm exploits previously discovered knowledge for a more focused and concentrated search of heuristically high-yielding regions, while simultaneously performing a highly explorative search on the other regions of the search space. The effective performance of the algorithm is then shown using standard testbed functions, in comparison with the island model GA (IGA) and the simple GA (SGA).
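A generic flavor of the approach is sketched below: an island-model GA with ring migration of each island's best individual and a per-island mutation rate that is raised on stagnation and lowered on improvement, minimizing the standard sphere testbed function. The adaptation and migration rules, the parameter values, and the test function are simple stand-ins, not the specific self-adaptive rules analyzed in the paper.

# Generic sketch of an island-model GA with migration and a per-island adaptive mutation rate.
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):                          # standard testbed function (minimize)
    return np.sum(x ** 2, axis=-1)

def evolve(pop, mut_rate, fitness):
    n, d = pop.shape
    f = fitness(pop)
    a, b = rng.integers(0, n, (2, n))                      # tournament selection
    parents = np.where((f[a] < f[b])[:, None], pop[a], pop[b])
    mask = rng.random((n, d)) < 0.5                        # uniform crossover
    children = np.where(mask, parents, np.roll(parents, 1, axis=0))
    children += (rng.random((n, d)) < mut_rate) * rng.normal(0, 0.1, (n, d))  # mutation
    return children

def island_ga(n_islands=4, pop_size=40, dim=10, gens=200, migrate_every=20):
    pops = [rng.uniform(-5, 5, (pop_size, dim)) for _ in range(n_islands)]
    mut = [0.1] * n_islands
    best_prev = [np.inf] * n_islands
    for g in range(gens):
        for k in range(n_islands):
            pops[k] = evolve(pops[k], mut[k], sphere)
            best = sphere(pops[k]).min()
            # adapt mutation: explore more if stagnating, exploit more if improving
            mut[k] = min(0.5, mut[k] * 1.1) if best >= best_prev[k] else max(0.01, mut[k] * 0.9)
            best_prev[k] = best
        if (g + 1) % migrate_every == 0:
            for k in range(n_islands):        # ring migration of each island's best
                donor = pops[k][sphere(pops[k]).argmin()].copy()
                nxt = (k + 1) % n_islands
                pops[nxt][sphere(pops[nxt]).argmax()] = donor
    return min(sphere(p).min() for p in pops)

print(island_ga())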
Abstract:
Background: MHC/HLA class II molecules are important components of the immune system and play a critical role in processes such as phagocytosis. Understanding peptide recognition properties of the hundreds of MHC class II alleles is essential to appreciate determinants of antigenicity and ultimately to predict epitopes. While there are several methods for epitope prediction, each differing in their success rates, there are no reports so far in the literature that systematically characterize the binding sites at the structural level and infer recognition profiles from them. Results: Here we report a new approach to compare the binding sites of MHC class II molecules using their three dimensional structures. We use a specifically tuned version of our recent algorithm, PocketMatch. We show that our methodology is useful for classification of MHC class II molecules based on similarities or differences among their binding sites. A new module has been used to define binding sites in MHC molecules. Comparison of binding sites of 103 MHC molecules, both at the whole groove and individual sub-pocket levels, has been carried out, and their clustering patterns analyzed. While clusters largely agree with serotypic classification, deviations from it and several new insights are obtained from our study. We also present how differences in sub-pockets of molecules associated with a pair of autoimmune diseases, narcolepsy and rheumatoid arthritis, were captured by PocketMatch(13). Conclusion: The systematic framework for understanding structural variations in MHC class II molecules enables large scale comparison of binding grooves and sub-pockets, which is likely to have direct implications for predicting epitopes and understanding peptide binding preferences.
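To convey the flavor of alignment-free binding-site comparison, the sketch below represents each site by its sorted list of pairwise distances between representative points and scores two sites by the fraction of distances that match within a tolerance. This is a heavily simplified, hypothetical stand-in: it omits the chemical-group typing and the binned distance lists used by the actual PocketMatch algorithm.

# Highly simplified illustration of comparing two binding sites as sorted
# pairwise-distance lists, scored by the fraction of distances matched within a tolerance.
import numpy as np

def sorted_distances(coords):
    """coords: (n,3) array of representative atom positions of one site."""
    diff = coords[:, None, :] - coords[None, :, :]
    d = np.sqrt((diff ** 2).sum(-1))
    return np.sort(d[np.triu_indices(len(coords), k=1)])

def match_fraction(site_a, site_b, tol=0.5):
    """Greedy two-pointer matching of two sorted distance lists (in angstroms)."""
    da, db = sorted_distances(site_a), sorted_distances(site_b)
    i = j = matched = 0
    while i < len(da) and j < len(db):
        if abs(da[i] - db[j]) <= tol:
            matched += 1; i += 1; j += 1
        elif da[i] < db[j]:
            i += 1
        else:
            j += 1
    return 2.0 * matched / (len(da) + len(db))   # 1.0 for identical sites

# A site compared with a slightly perturbed copy of itself scores close to 1.0.
rng = np.random.default_rng(2)
site = rng.random((15, 3)) * 10.0
print(match_fraction(site, site + rng.normal(0, 0.05, site.shape)))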
Abstract:
Here the design and operation of a novel transmission electron microscope (TEM) triboprobe instrument with real-time vision control for advanced in situ electron microscopy are demonstrated. The NanoLAB triboprobe incorporates a new high-stiffness coarse slider design for increased stability and positioning performance. This is linked with an advanced software control system which introduces new and flexible in situ experimental functional testing modes, plus an automated vision-control feedback system. This advancement in instrumentation design unlocks the possibility of performing a range of new dynamic nanoscale materials tests, including novel friction and fatigue experiments inside the electron microscope.