995 results for SCHWINGER MULTICHANNEL METHOD
Abstract:
Surface electrodes must be switched for boundary data collection in electrical impedance tomography (EIT). Parallel digital data bits are required to operate the multiplexers generally used for electrode switching in EIT: the more electrodes an EIT system has, the more digital data bits are needed. For a sixteen-electrode system, 16 parallel digital data bits are required to operate the multiplexers in the opposite or neighbouring current injection methods. In this paper a common ground current injection method is proposed for EIT and the resulting resistivity imaging is studied. The common ground method needs only two analog multiplexers, each of which requires only 4 digital data bits; hence only 8 digital bits are needed to switch the 16 surface electrodes. Results show that the USB-based data acquisition system sequentially generates the digital data required to operate the multiplexers in the common ground current injection method. The profile of the boundary data collected from a practical phantom shows that the multiplexers operate in the required sequence under the common ground current injection protocol. The voltage peaks obtained for all the inhomogeneity configurations are found at the correct positions in the boundary data matrix, which confirms the sequential operation of the multiplexers. Resistivity images reconstructed from the boundary data collected from the practical phantom with different configurations also show that the entire digital data generation module functions properly. The reconstructed images and their image parameters confirm that the boundary data are successfully acquired by the DAQ system, which in turn indicates sequential and proper operation of the multiplexers.
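The bit-count arithmetic above can be sketched in code. This is a hypothetical illustration, not the paper's firmware: the electrode numbering, the rule of skipping the driven electrode, and the bit packing are my own assumptions; only the log2(16) = 4 select bits per multiplexer follows from the abstract.

```python
def control_words(n_electrodes=16):
    """Yield (current_mux_bits, voltage_mux_bits) select words.

    In a common ground scheme one electrode is driven while each of the
    remaining electrodes is measured against a fixed ground, so each of
    the two multiplexers needs only log2(16) = 4 select bits - 8 digital
    bits in total, versus 16 for opposite/neighbouring protocols.
    """
    bits = n_electrodes.bit_length() - 1  # 4 bits for 16 electrodes
    for drive in range(n_electrodes):
        for sense in range(n_electrodes):
            if sense == drive:
                continue  # do not measure on the driven electrode
            yield format(drive, f"0{bits}b"), format(sense, f"0{bits}b")

words = list(control_words())
```

Enumerating the words shows the full scan needs 16 x 15 = 240 switching states, each encoded in just two 4-bit select fields.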
Abstract:
The solvated metal atom dispersion (SMAD) method has been used for the synthesis of colloids of metal nanoparticles. It is a bottom-up approach involving condensation of metal atoms in low-temperature solvent matrices in a SMAD reactor maintained at 77 K. Warming the matrix results in a slurry of metal atoms that interact with one another to form particles that grow in size. The organic solvent solvates the particles and acts as a weak capping agent that halts or slows the growth process to a certain extent. The as-prepared colloid consists of metal nanoparticles that are quite polydisperse. In a process termed digestive ripening, addition of a capping agent to the polydisperse as-prepared colloid renders it highly monodisperse under either ambient or thermal conditions. In this as yet poorly understood process, smaller particles grow and larger ones diminish in size until the system attains uniformity in size and a dynamic equilibrium is established. Using the SMAD method in combination with the digestive ripening process, highly monodisperse metal, core-shell, alloy, and composite nanoparticles have been synthesized. This article reviews our contributions, together with some literature reports, on this methodology to realize various nanostructured materials.
Abstract:
A combination of chemical and thermal annealing techniques has been employed to synthesize a rarely reported nanocup structure of Mn-doped ZnO in good yield. Nanocup structures are obtained by isochronally annealing, in a furnace in the 200-500 degrees C temperature range for 2 h, powder samples consisting of nanosheets synthesized chemically at room temperature. Strong excitonic absorption in the UV and photoluminescence (PL) emission in the UV-visible regions are observed in all the samples at room temperature. The sample annealed at 300 degrees C exhibits strong PL emission in the UV due to near-band-edge emission, along with very weak defect-related emissions in the visible region. The synthesized samples have been found to exhibit stable optical properties over 10 months, demonstrating a unique feature of the presented technique for synthesizing nanocup structures. (C) 2012 Elsevier B.V. All rights reserved.
Abstract:
In many real-world prediction problems the output is a structured object like a sequence, a tree, or a graph. Such problems range from natural language processing to computational biology and computer vision, and have been tackled using algorithms referred to as structured output learning algorithms. We consider the problem of structured classification. In the last few years, large margin classifiers like support vector machines (SVMs) have shown much promise for structured output learning. The related optimization problem is a convex quadratic program (QP) with a large number of constraints, which makes the problem intractable for large data sets. This paper proposes a fast sequential dual method (SDM) for structural SVMs. The method makes repeated passes over the training set and optimizes the dual variables associated with one example at a time. The use of additional heuristics makes the proposed method more efficient. We present an extensive empirical evaluation of the proposed method on several sequence learning problems. Our experiments on large data sets demonstrate that the proposed method is an order of magnitude faster than state-of-the-art methods like the cutting-plane method and stochastic gradient descent (SGD). Further, SDM reaches steady-state generalization performance faster than SGD. The proposed SDM is thus a useful alternative for large-scale structured output learning.
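The "one dual variable block per example, one example at a time" idea can be illustrated with a much simpler relative of the structural-SVM dual: a single sequential pass of dual coordinate descent for a plain binary L1-loss SVM. This is a sketch of the general mechanism only; the paper's structural-SVM dual has many constraints per example and is not reproduced here.

```python
import numpy as np

def dual_coordinate_pass(X, y, alpha, w, C=1.0):
    """One sequential pass over the training set, optimizing the dual
    variable of one example at a time (binary L1-loss SVM dual)."""
    for i in range(len(y)):
        g = y[i] * np.dot(w, X[i]) - 1.0           # dual gradient for example i
        q = np.dot(X[i], X[i])                     # diagonal entry Q_ii
        if q > 0:
            a_new = min(max(alpha[i] - g / q, 0.0), C)  # box projection
            w += (a_new - alpha[i]) * y[i] * X[i]  # keep w = sum a_i y_i x_i
            alpha[i] = a_new
    return alpha, w
```

Because the primal weight vector is maintained incrementally, each update costs only O(d), which is what makes repeated sequential passes cheap compared with solving the full QP.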
Abstract:
With the rapid scaling down of semiconductor process technology, process variation aware circuit design has become essential. Several statistical models have been proposed to deal with process variation. We propose an accurate BSIM model for handling variability in 45 nm CMOS technology. The MOSFET is designed to meet the low standby power technology specification of the International Technology Roadmap for Semiconductors (ITRS). Variations of the process parameters annealing temperature, oxide thickness, halo dose, and tilt angle of the halo implant are considered for the model development, with one parameter varied at a time. The model is validated by matching its performance against device simulation results; the reported error is less than 10%. (C) 2012 Society of Photo-Optical Instrumentation Engineers (SPIE).
Abstract:
The β-phase of polyvinylidene fluoride (PVDF) is well known for its piezoelectric properties. PVDF films have been developed using the solvent cast method. The films thus produced are in the α-phase. The α-phase is transformed into the piezoelectric β-phase when the film is hot-stretched with different stretching factors at different temperatures. The films are then characterized in terms of their mechanical properties and the surface morphological changes during the α- to β-phase transformation, using X-ray diffraction, differential scanning calorimetry, Raman spectra, infrared spectra, tensile testing, and scanning electron microscopy. The films showed increased crystallinity with stretching at temperatures up to 80 °C. The optimum conditions to achieve the β-phase are discussed in detail. The fabricated PVDF sensors have been tested for free vibration and impact on a plate structure, and their response is compared with a conventional piezoelectric wafer-type sensor. The resonant and antiresonant peaks in the frequency response of the PVDF sensor match well with those of lead zirconate titanate wafer sensors. Effective piezoelectric properties and the variations in the frequency response spectra under free vibration and impact loading conditions are reported. POLYM. ENG. SCI., 00:000–000, 2012. © 2012 Society of Plastics Engineers
Abstract:
A methodology for measurement of planar liquid volume fraction in dense sprays using a combination of Planar Laser-Induced Fluorescence (PLIF) and Particle/Droplet Imaging Analysis (PDIA) is presented in this work. The PLIF images are corrected for loss of signal intensity due to laser sheet scattering, absorption, and auto-absorption. The key aspect of this work is the simultaneous solution of the equations involving the corrected PLIF signal and the liquid volume fraction, from which a quantitative estimate of the planar liquid volume fraction is obtained. The corrected PLIF signal and the corrected planar Mie scattering can also be used together to obtain the Sauter Mean Diameter (SMD) distribution, using data from the PDIA technique at a particular location for calibration. This methodology is applied to non-evaporating sprays of diesel and a more viscous pure plant oil at an injection pressure of 1000 bar and a gas pressure of 30 bar in a high pressure chamber. These two fuels are selected because their viscosity values are very different, with consequently very different spray structures. The spatial distribution of liquid volume fraction and SMD is obtained for the two fuels. The proposed method is validated by comparing the liquid volume fraction obtained by the current method with data from the PDIA technique. (C) 2012 Elsevier Inc. All rights reserved.
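The PLIF/Mie ratio idea behind the SMD step, fluorescence scaling with droplet volume (d³) and Mie scattering with surface area (d²), so that their ratio is proportional to SMD, can be sketched as follows. A minimal illustration only, assuming corrected and co-registered images and a single PDIA calibration point; the function name and the zero-guarding are my own, not the paper's.

```python
import numpy as np

def smd_field(plif, mie, smd_ref, ref_idx):
    """SMD map from the ratio of a corrected PLIF image (~ d^3) to a
    corrected Mie image (~ d^2), calibrated by one PDIA reference point.

    plif, mie : 2-D arrays of corrected signal intensities
    smd_ref   : SMD measured by PDIA at pixel ref_idx
    """
    ratio = np.where(mie > 0, plif / np.maximum(mie, 1e-12), 0.0)
    k = smd_ref / ratio[ref_idx]   # calibration constant from the PDIA point
    return k * ratio
```

One PDIA point fixes the unknown proportionality constant; the whole planar SMD distribution then follows from the two images alone.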
Abstract:
The paper focuses on reliability-based design of bridge abutments subjected to earthquake loading. A planar failure surface is used in conjunction with a pseudo-dynamic approach to compute the seismic active earth pressures on the bridge abutment. The proposed pseudo-dynamic method considers the effects of strain localization in the backfill soil, the associated post-peak reduction in shear resistance from peak to residual values along a previously formed failure plane, the phase difference in shear waves, and soil amplification, along with the horizontal seismic accelerations. Four modes of stability, viz. sliding, overturning, eccentricity, and bearing capacity of the foundation soil, are considered in the reliability analysis. The influence of various design parameters on the seismic reliability indices against the four modes of failure is presented, following the suggestions of the Japan Road Association, the Caltrans Bridge Design Specifications, and the U.S. Department of the Army.
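For readers unfamiliar with reliability indices, a generic Monte Carlo estimate of the index β for a single limit state (e.g. sliding, with failure defined as g(X) ≤ 0) can be sketched as below. This is a textbook-style stand-in, not the paper's pseudo-dynamic formulation: the limit state, distributions, and sample size are illustrative assumptions.

```python
import numpy as np
from statistics import NormalDist

def reliability_index(limit_state, sample, n=100_000, seed=0):
    """Monte Carlo reliability index beta = -Phi^{-1}(Pf), where failure
    is the event g(X) <= 0 and Pf is its estimated probability."""
    rng = np.random.default_rng(seed)
    x = sample(rng, n)                       # draw n realizations of X
    pf = np.mean(limit_state(x) <= 0.0)      # failure probability estimate
    pf = min(max(pf, 1e-9), 1 - 1e-9)        # guard the inverse CDF
    return -NormalDist().inv_cdf(pf)
```

For a resistance R ~ N(10, 1) against a load S ~ N(5, 1) the exact index is 5/√2 ≈ 3.54, which the sampler reproduces to within Monte Carlo noise.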
Abstract:
Faraday-type electromagnetic flow meters are employed for measuring the flow rate of liquid sodium in fast breeder reactors. The calibration of such flow meters is rather difficult owing to the elaborate arrangements required. On the other hand, the theoretical approach requires the solution of two coupled electromagnetic partial differential equations, with the flow profile and the applied magnetic field as inputs; this is also quite involved owing to the 3D nature of the problem. Alternatively, a Galerkin finite element method based numerical solution is suggested in the literature as an attractive option for the required calibration. Based on the same, a computer code on the MATLAB platform has been developed in this work with both 20- and 27-node brick elements. The boundary conditions are correctly defined and several intermediate validation exercises are carried out. Finally it is shown that the sensitivities predicted by the code for flow meters of four different dimensions agree well with the results given by the analytical expression, thereby providing strong validation. The sensitivity for higher flow rates, for which no analytical approach exists, is shown to decrease with increasing flow velocity.
Abstract:
Purpose: To develop a computationally efficient automated method for the optimal choice of the regularization parameter in diffuse optical tomography. Methods: The least-squares QR (LSQR)-type method that uses Lanczos bidiagonalization is known to be computationally efficient in performing the reconstruction procedure in diffuse optical tomography. It is deployed here within an optimization procedure that uses the simplex method to find the optimal regularization parameter. The proposed LSQR-type method is compared with traditional methods such as the L-curve, generalized cross-validation (GCV), and the recently proposed minimal residual method (MRM)-based choice of regularization parameter, using numerical and experimental phantom data. Results: The results indicate that the proposed LSQR-type and MRM-based methods perform similarly in terms of reconstructed image quality, and both are superior to the L-curve and GCV-based methods. The computational complexity of the proposed method is at least five times lower than that of the MRM-based method, making it an optimal technique. Conclusions: The LSQR-type method overcomes the computationally expensive nature of the MRM-based automated way of finding the optimal regularization parameter in diffuse optical tomographic imaging, making it more suitable for real-time deployment. (C) 2013 American Association of Physicists in Medicine. http://dx.doi.org/10.1118/1.4792459
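The overall structure, an inner regularized solve wrapped in an outer 1-D search over the regularization parameter, can be sketched as follows. Everything here is a simplified stand-in: a dense Tikhonov solve replaces LSQR with Lanczos bidiagonalization, a log-scale ternary search replaces the simplex method, and the L-curve-style objective is purely illustrative (the paper's own selection criterion is not reproduced).

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Closed-form regularized solution (dense stand-in for LSQR)."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

def pick_lambda(A, b, lo=1e-6, hi=1e2, iters=60):
    """Outer 1-D search for the regularization parameter (a crude
    stand-in for the paper's simplex-method optimization)."""
    def objective(lam):
        x = tikhonov_solve(A, b, lam)
        # product of residual norm and solution norm (illustrative criterion)
        return np.linalg.norm(A @ x - b) * np.linalg.norm(x)
    for _ in range(iters):  # ternary search on a log scale
        m1, m2 = lo * (hi / lo) ** (1 / 3), lo * (hi / lo) ** (2 / 3)
        if objective(m1) < objective(m2):
            hi = m2
        else:
            lo = m1
    return (lo * hi) ** 0.5
```

The point of the paper's contribution is that the inner solve is cheap, so the outer search, whatever its exact criterion, stays far cheaper than MRM's repeated full reconstructions.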
Abstract:
This study presents an overview of seismic microzonation and existing methodologies, together with a newly proposed methodology covering all aspects. Earlier seismic microzonation methods focused on parameters that affect structures or foundation-related problems, but seismic microzonation has generally been recognized as an important component of urban planning and disaster management. Seismic microzonation should therefore evaluate all possible earthquake hazards and represent them by their spatial distribution. This paper presents a new methodology for seismic microzonation based on the location of the study area and the possible associated hazards. The new method consists of seven important steps, with a defined output for each step, and these steps are linked with one another. Addressing a single step and its respective result, as is widely practiced, may not constitute seismic microzonation. This paper also presents the importance of geotechnical aspects in seismic microzonation and how they affect the final map. For the case study, seismic hazard values at rock level are estimated from the seismotectonic parameters of the region using deterministic and probabilistic seismic hazard analysis. Surface-level hazard values are estimated from a site-specific study and local site effects based on site classification/characterization. The liquefaction hazard is estimated using standard penetration test data. These hazard parameters are integrated in a Geographical Information System (GIS) using the Analytic Hierarchy Process (AHP) and used to estimate a hazard index. The hazard index is arrived at by following a multi-criteria evaluation technique, AHP, in which each theme and its features are assigned weights and then ranked according to a consensus opinion about their relative significance to the seismic hazard. The hazard values are integrated through spatial union to obtain the deterministic microzonation map and the probabilistic microzonation map for a specific return period. Seismological parameters are widely used for microzonation rather than geotechnical parameters, but these studies show that the hazard index values depend on site-specific geotechnical parameters.
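The AHP weighting step described above can be sketched numerically: theme weights come from the principal eigenvector of a pairwise comparison matrix, and the hazard index is a weighted overlay of normalized theme ranks. The comparison values and the rank normalization below are illustrative assumptions, not the paper's actual consensus judgments.

```python
import numpy as np

def ahp_weights(pairwise):
    """Theme weights from the principal eigenvector of an AHP
    pairwise comparison matrix (entry [i, j] = importance of i over j)."""
    vals, vecs = np.linalg.eig(pairwise)
    v = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return v / v.sum()                      # normalize to sum to 1

def hazard_index(theme_ranks, weights):
    """Weighted overlay: normalize each theme's ranks to [0, 1],
    then combine with the AHP weights (one row per map cell)."""
    ranks = np.asarray(theme_ranks, dtype=float)
    ranks = ranks / ranks.max(axis=0)
    return ranks @ weights
```

For a two-theme example where theme 1 is judged three times as significant as theme 2, the weights come out as 0.75 and 0.25, and each cell's index is the corresponding weighted sum.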
Abstract:
Structural Support Vector Machines (SSVMs) have recently gained wide prominence in classifying structured and complex objects like parse trees, image segments, and Part-of-Speech (POS) tags. Typical learning algorithms used in training SSVMs result in model parameter vectors residing in a large-dimensional feature space. Such a high-dimensional model parameter vector contains many non-zero components, which often leads to slow prediction and storage issues. Hence there is a need for sparse parameter vectors containing very few non-zero components. The L1 regularizer and the elastic net regularizer have traditionally been used to obtain sparse model parameters. Though L1-regularized structural SVMs have been studied in the past, the use of the elastic net regularizer for structural SVMs has not yet been explored. In this work, we formulate the elastic net SSVM and propose a sequential alternating proximal algorithm to solve the dual formulation. We compare the proposed method with existing methods for L1-regularized structural SVMs. Experiments on large-scale benchmark datasets show that the proposed dual elastic net SSVM trained using the sequential alternating proximal algorithm scales well and yields highly sparse model parameters while achieving comparable generalization performance. The proposed algorithm is thus a competitive choice when elastic net regularized structural SVMs are applied to very large datasets.
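The sparsity mechanism of the elastic net penalty is visible in its proximal operator, the basic building block a proximal algorithm applies repeatedly: soft-thresholding from the L1 part (which zeroes small components) followed by uniform shrinkage from the L2 part. A minimal sketch assuming the penalty l1·|w| + (l2/2)·w²; the paper's actual sequential alternating algorithm on the SSVM dual is not reproduced here.

```python
import numpy as np

def elastic_net_prox(z, l1, l2):
    """Proximal operator of w -> l1*|w| + (l2/2)*w^2, applied
    componentwise: soft-threshold by l1, then shrink by 1/(1 + l2)."""
    return np.sign(z) * np.maximum(np.abs(z) - l1, 0.0) / (1.0 + l2)
```

Components smaller in magnitude than l1 map exactly to zero, which is where the "highly sparse model parameters" reported in the experiments come from; the L2 term keeps the surviving components stable.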
Abstract:
Lead telluride micro- and nanostructures have been grown on silicon and glass substrates by simple thermal evaporation of PbTe in a high vacuum of 3 x 10(-5) mbar. Growth was carried out for two different distances between the evaporation source and the substrates. The synthesized products consist of nanorods and micro-towers for source-substrate distances of 2.4 cm and 3.4 cm, respectively. X-ray diffraction and transmission electron microscopy studies confirmed the crystalline nature of the nanorods and micro-towers. The nanorods grew by the vapor-solid mechanism. Each micro-tower consists of nanoplatelets and is capped with a spherical catalyst particle at its end, suggesting that the growth proceeds via the vapor-liquid-solid (VLS) mechanism. An EDS spectrum recorded on the tip of a micro-tower showed the presence of Pb and Te, confirming the self-catalytic VLS growth of the micro-towers. These results open up novel synthesis routes for PbTe nano- and microstructures for various applications.