990 results for Vector sensor
Abstract:
Novel chromogenic thiourea-based sensors 4,4'-bis-[3-(4-nitrophenyl)thiourea] diphenyl ether 1 and 4,4'-bis-[3-(4-nitrophenyl)thiourea] diphenyl methane 2, each bearing a nitrophenyl group as the signaling unit, have been synthesized and characterized by spectroscopic techniques and X-ray crystallography. Both sensors show visual detection as well as UV-vis and NMR spectral changes in the presence of fluoride and cyanide anions in organic solvent and in aqueous medium. The absorption spectra indicated that the host-guest complex forms in a 1:2 stoichiometric ratio. (C) 2010 Elsevier B.V. All rights reserved.
Abstract:
A novel dodecagonal space vector structure for induction motor drives is presented in this paper. It consists of two dodecagons, with the radius of the outer one twice that of the inner one. Compared to existing dodecagonal space vector structures, the proposed topology achieves the same PWM output voltage quality at half the inverter switching frequency and half the device ratings. At the same time, the other benefits of existing dodecagonal space vector structures are retained, including the extension of the linear modulation range and the elimination of all 6n +/- 1 harmonics (n = odd) from the phase voltage. The proposed structure is realized by feeding an open-end winding induction motor with two conventional three-level inverters. A detailed calculation of the PWM timings for switching the space vector points is also presented. Simulation and experimental results indicate the possible application of the proposed idea to high-power drives.
Abstract:
Optimizing the energy consumption of existing synchronization mechanisms can lead to substantial gains in network lifetime in Wireless Sensor Networks (WSNs). In this paper, we analyze ERBS and TPSN, two existing synchronization algorithms for WSNs that take widely different approaches, and compare their performance in large-scale WSNs, each consisting of a different type of platform and varying node density. We then propose a novel algorithm, PROBESYNC, which takes advantage of the difference in power required to transmit and receive a message and addresses the shortcomings of ERBS and TPSN. This leads to considerable improvement in energy conservation and enhanced lifetime of large-scale WSNs.
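TPSN-style protocols build on pairwise clock synchronization via a two-way message exchange. As a minimal sketch of that underlying principle (not the PROBESYNC algorithm itself, and with illustrative timestamp values), the clock offset and propagation delay can be estimated from the four timestamps of one exchange:

```python
def tpsn_offset_delay(t1, t2, t3, t4):
    """Estimate clock offset and propagation delay from one two-way
    exchange, as in TPSN-style pairwise synchronization.
    t1: child sends (child clock), t2: parent receives (parent clock),
    t3: parent replies (parent clock), t4: child receives (child clock)."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0  # parent clock minus child clock
    delay = ((t2 - t1) + (t4 - t3)) / 2.0   # one-way propagation delay
    return offset, delay

# Hypothetical scenario: child's clock runs 5 ms ahead, one-way delay 2 ms.
offset, delay = tpsn_offset_delay(100.0, 97.0, 98.0, 105.0)
```

With these numbers the estimate recovers an offset of -5 ms (child ahead) and a delay of 2 ms, which is why the exchange cancels the (symmetric) propagation delay out of the offset estimate.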
Abstract:
The determination of settlement of shallow foundations on cohesionless soil is an important task in geotechnical engineering. Available methods for determining settlement are not reliable. In this study, the support vector machine (SVM), a learning algorithm based on statistical learning theory, has been used to predict the settlement of shallow foundations on cohesionless soil. SVM performs regression by introducing an ε-insensitive loss function. A thorough sensitivity analysis has been carried out to ascertain which parameters have the greatest influence on settlement. The study shows that SVM has the potential to be a useful and practical tool for predicting the settlement of shallow foundations on cohesionless soil.
Abstract:
We describe here a novel sensor for cGMP based on the GAF domain of the cGMP-binding, cGMP-specific phosphodiesterase 5 (PDE5) using bioluminescence resonance energy transfer (BRET). The wild type GAFa domain, capable of binding cGMP with high affinity, and a mutant (GAFaF163A) unable to bind cGMP were cloned as fusions between GFP and Rluc for BRET2 assays. BRET2 ratios of the wild type GAFa fusion protein, but not GAFaF163A, increased in the presence of cGMP but not cAMP. Higher basal BRET2 ratios were observed in cells expressing the wild type GAFa domain than in cells expressing GAFaF163A. This was correlated with elevated basal intracellular levels of cGMP, indicating that the GAF domain could act as a sink for cGMP. The tandem GAF domains in full length PDE5 could also sequester cGMP when the catalytic activity of PDE5 was inhibited. Therefore, these results describe a cGMP sensor utilizing BRET2 technology and experimentally demonstrate the reservoir of cGMP that can be present in cells that express cGMP-binding GAF domain-containing proteins. PDE5 is the target for the anti-impotence drug sildenafil citrate; therefore, this GAF-BRET2 sensor could be used for the identification of novel compounds that inhibit cGMP binding to the GAF domain, thereby regulating PDE5 catalytic activity.
Abstract:
We consider a scenario in which a wireless sensor network is formed by randomly deploying n sensors to measure some spatial function over a field, with the objective of computing a function of the measurements and communicating it to an operator station. We restrict ourselves to the class of type-threshold functions (as defined in the work of Giridhar and Kumar, 2005), of which max, min, and indicator functions are important examples; our discussions are couched in terms of the max function. We view the problem as one of message-passing distributed computation over a geometric random graph. The network is assumed to be synchronous, and the sensors synchronously measure values and then collaborate to compute and deliver the function computed with these values to the operator station. Computation algorithms differ in (1) the communication topology assumed and (2) the messages that the nodes need to exchange in order to carry out the computation. The focus of our paper is to establish (in probability) scaling laws for the time and energy complexity of the distributed function computation over random wireless networks, under the assumption of centralized contention-free scheduling of packet transmissions. First, without any constraint on the computation algorithm, we establish scaling laws for the computation time and energy expenditure for one-time maximum computation. We show that for an optimal algorithm, the computation time and energy expenditure scale, respectively, as Θ(√(n/log n)) and Θ(n) asymptotically as the number of sensors n → ∞. Second, we analyze the performance of three specific computation algorithms that may be used in specific practical situations, namely, the tree algorithm, multihop transmission, and the Ripple algorithm (a type of gossip algorithm), and obtain scaling laws for the computation time and energy expenditure as n → ∞.
In particular, we show that the computation time for these algorithms scales as Θ(√(n/log n)), Θ(n), and Θ(√(n log n)), respectively, whereas the energy expended scales as Θ(n), Θ(√(n/log n)), and Θ(√(n log n)), respectively. Finally, simulation results are provided to show that our analysis indeed captures the correct scaling. The simulations also yield estimates of the constant multipliers in the scaling laws. Our analyses throughout assume a centralized optimal scheduler, and hence, our results can be viewed as providing bounds for the performance with practical distributed schedulers.
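The tree algorithm's aggregation step can be sketched in a few lines: each node forwards the maximum of its own reading and the partial maxima received from its children, so the root ends up with the global max. A minimal sequential sketch of this in-network aggregation (topology and readings are made up for illustration; the paper's analysis concerns its time/energy scaling, not this code):

```python
def tree_max(children, values, node):
    """Max of all sensor readings in the subtree rooted at `node`:
    each node combines its own value with its children's partial maxima,
    which is the aggregation rule of the tree algorithm for one-shot max."""
    return max([values[node]] +
               [tree_max(children, values, c) for c in children.get(node, [])])

# Hypothetical 6-node tree; node 0 is the root at the operator station.
children = {0: [1, 2], 1: [3, 4], 2: [5]}
values = {0: 3, 1: 7, 2: 1, 3: 9, 4: 2, 5: 6}
result = tree_max(children, values, 0)
```

Because each of the n - 1 tree edges carries exactly one (bounded-size) message, the energy expenditure of this scheme is Θ(n), matching the scaling reported above.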
Abstract:
Extensible Markup Language (XML) has emerged as a medium for interoperability over the Internet. As the number of documents published in the form of XML increases, there is a need for selective dissemination of XML documents based on user interests. In the proposed technique, a combination of Adaptive Genetic Algorithms and a multi-class Support Vector Machine (SVM) is used to learn a user model. Based on feedback from the users, the system automatically adapts to the user's preferences and interests. The user model and a similarity metric are used for selective dissemination of a continuous stream of XML documents. Experimental evaluations performed over a wide range of XML documents indicate that the proposed approach significantly improves the performance of the selective dissemination task with respect to accuracy and efficiency.
Abstract:
We present a generic method/model for multi-objective design optimization of laminated composite components based on the vector evaluated particle swarm optimization (VEPSO) algorithm. VEPSO is a novel, co-evolutionary multi-objective variant of the popular particle swarm optimization (PSO) algorithm. In the current work, a modified version of the VEPSO algorithm for discrete variables has been developed and implemented successfully for the multi-objective design optimization of composites. The problem is formulated with the multiple objectives of minimizing the weight and total cost of the composite component while achieving a specified strength. The primary optimization variables are the number of layers, the stacking sequence (the orientation of the layers), and the thickness of each layer. Classical lamination theory is used to determine the stresses in the component, and the design is evaluated against three failure criteria: the failure-mechanism-based criterion, the maximum stress criterion, and the Tsai-Wu criterion. The optimization method is validated for a number of different loading configurations: uniaxial, biaxial, and bending loads. The design optimization has been carried out both for variable stacking sequences and for fixed standard stacking schemes, and a comparative study of the different design configurations evolved is presented. (C) 2007 Elsevier Ltd. All rights reserved.
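The co-evolutionary idea behind VEPSO is that each swarm optimizes one objective while its velocity update is guided by the best position found by the *other* swarm. A heavily simplified one-dimensional, continuous-variable sketch of this exchange (the paper's version handles discrete laminate variables; swarm sizes, coefficients, and objectives here are illustrative):

```python
import random

def vepso(f1, f2, iters=60, n=10):
    """Minimal 1-D VEPSO sketch: two swarms, one per objective.
    Each particle's velocity is attracted toward the OTHER swarm's
    best position -- the cross-swarm information exchange that drives
    solutions toward trade-offs between the two objectives."""
    p1 = [random.uniform(-2, 2) for _ in range(n)]
    p2 = [random.uniform(-2, 2) for _ in range(n)]
    v1, v2 = [0.0] * n, [0.0] * n
    best1, best2 = min(p1, key=f1), min(p2, key=f2)
    for _ in range(iters):
        for i in range(n):
            # inertia term + attraction toward the other swarm's best
            v1[i] = 0.7 * v1[i] + 1.5 * random.random() * (best2 - p1[i])
            p1[i] += v1[i]
            v2[i] = 0.7 * v2[i] + 1.5 * random.random() * (best1 - p2[i])
            p2[i] += v2[i]
        best1 = min(p1 + [best1], key=f1)  # bests only ever improve
        best2 = min(p2 + [best2], key=f2)
    return best1, best2

# Two conflicting objectives whose Pareto-optimal set is the interval [0, 1].
b1, b2 = vepso(lambda x: x * x, lambda x: (x - 1) ** 2)
```

In a weight-versus-cost laminate problem, f1 and f2 would be the weight and cost of a candidate stacking sequence, with infeasible (failed) designs penalized.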
Abstract:
The change in extension-twist coupling due to delamination in antisymmetric laminates is measured experimentally. Experimental results are compared with results from an analytical expression existing in the literature and with finite element analysis. The application of the Macro-Fiber Composite (MFC), developed at the NASA Langley Research Center, for sensing delamination in the laminates is investigated. While many applications using the MFC as an actuator have been reported in the literature, here its use as a twist sensor is studied. The envisaged real-life application is structural health monitoring of laminated composite flexbeams, taking advantage of the symmetry in the structure. Apart from defect detection under symmetric conditions, other methods of health monitoring for the same structure are reported for further validation. Results show that the MFC works well as a sensor.
Abstract:
A chenodeoxycholic acid based K+ ion sensor has been designed using a modular approach in which a fluorophore and a cation receptor are attached to the bile acid backbone. In the absence of K+, the fluorescence of the molecule is quenched because of through-space, photo-induced electron transfer from the aza-crown unit. Fluorescence enhancement was observed upon titration with K+ (and with other alkali metal ions as well). In methanol, good selectivity towards the sensing of K+ has been observed.
Abstract:
This paper addresses the challenges of flood mapping using multispectral images. Quantitative flood mapping is critical for flood damage assessment and management. Remote sensing images obtained from various satellite or airborne sensors provide valuable data for this application, from which information on the extent of the flood can be extracted. However, the great challenge in the data interpretation is to achieve more reliable flood extent mapping, including both the fully inundated areas and the 'wet' areas where trees and houses are partly covered by water. This is a typical combined pure-pixel and mixed-pixel problem. In this paper, a recently developed extended Support Vector Machines method for spectral unmixing has been applied to generate an integrated map showing both pure pixels (fully inundated areas) and mixed pixels (trees and houses partly covered by water). The outputs were compared with the conventional mean-based linear spectral mixture model, and better performance was demonstrated on a subset of Landsat ETM+ data recorded at the Daly River Basin, NT, Australia, on 3 March 2008, after a flood event.
Abstract:
Fuel cells are emerging as alternative green power producers both for large-scale power production and for use in automobiles. Hydrogen is seen as the best option as a fuel; however, hydrogen fuel cells require recirculation of unspent hydrogen. A supersonic ejector is an apt device for recirculation in the operating regimes of a hydrogen fuel cell, and optimal ejectors have to be designed to achieve the best performance. The use of the vector evaluated particle swarm optimization technique to optimize supersonic ejectors, with a focus on hydrogen recirculation in fuel cells, is presented here. Two parameters, compression ratio and efficiency, have been identified as the objective functions to be optimized. Their relation to the operating and design parameters of the ejector is obtained by a control-volume-based analysis using a constant-area mixing approximation. The independent parameters considered are the area ratio and the exit Mach number of the nozzle. The optimization is carried out at a particular entrainment ratio and results in a set of nondominated solutions, the Pareto front. A set of such curves can be used for choosing the optimal design parameters of the ejector.
Abstract:
Novel switching sequences can be employed in space-vector-based pulsewidth modulation (PWM) of voltage source inverters. Different switching sequences are evaluated and compared in terms of inverter switching loss. A hybrid PWM technique named minimum switching loss PWM is proposed, which reduces the inverter switching loss compared to conventional space vector PWM (CSVPWM) and discontinuous PWM techniques at a given average switching frequency. Further, four space-vector-based hybrid PWM techniques are proposed that reduce line current distortion as well as switching loss in motor drives compared to CSVPWM. Theoretical and experimental results are presented.
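All of the sequences compared start from the same CSVPWM dwell-time calculation: within a sector, the reference vector is synthesized from the two adjacent active vectors and the zero vector. A sketch of that standard calculation (textbook CSVPWM, not the hybrid sequences proposed in the paper; the numeric values are illustrative):

```python
import math

def svpwm_dwell_times(v_ref, alpha, v_dc, t_s):
    """Dwell times for conventional space vector PWM within one sector:
    t1, t2 are the times on the two adjacent active vectors, t0 the
    zero-vector time. alpha is the reference-vector angle measured
    inside the sector (0 .. pi/3), v_dc the DC bus voltage, t_s the
    subcycle (half switching period)."""
    k = math.sqrt(3) * v_ref / v_dc * t_s
    t1 = k * math.sin(math.pi / 3 - alpha)
    t2 = k * math.sin(alpha)
    t0 = t_s - t1 - t2
    return t1, t2, t0

# At the linear-range limit v_ref = v_dc / sqrt(3) and alpha = 30 deg,
# the zero-vector time shrinks to zero (t1 = t2 = t_s / 2).
t1, t2, t0 = svpwm_dwell_times(400 / math.sqrt(3), math.radians(30), 400, 100e-6)
```

The proposed hybrid techniques differ not in these dwell times but in how the zero-vector time t0 is split and in the order in which the vectors are applied, which is what changes the switching loss and current ripple.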
Abstract:
The motivation behind the fusion of Intrusion Detection Systems is the realization that, with increasing traffic and increasingly complex attacks, no present-day stand-alone Intrusion Detection System can meet the demand for a very high detection rate together with an extremely low false positive rate. Multi-sensor fusion can meet these requirements by refining the combined response of different Intrusion Detection Systems. In this paper, we show a sensor fusion design technique that best utilizes the useful responses from multiple sensors through an appropriate adjustment of the fusion threshold. The threshold is generally chosen according to past experience or by an expert system. In this paper, we show that choosing the threshold bounds according to the Chebyshev inequality performs better. This approach also helps to solve the problem of scalability and has the advantage of failsafe capability. The paper theoretically models the fusion of Intrusion Detection Systems to prove the improvement in performance, supplemented with empirical evaluation. The combination of complementary sensors is shown to detect more attacks than the individual components. Since the individual sensors chosen detect sufficiently different attacks, their results can be merged for improved performance. The combination is done in different ways: (i) taking all the alarms from each system and avoiding duplications, (ii) taking alarms from each system by fixing threshold bounds, and (iii) rule-based fusion with a priori knowledge of the individual sensor performance. A number of evaluation metrics are used, and the results indicate an overall enhancement in the performance of the combined detector using sensor fusion incorporating the threshold bounds, and significantly better performance using simple rule-based fusion.
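The attraction of a Chebyshev-based threshold is that it bounds the false-alarm rate using only the mean and variance of the benign-traffic score, with no distributional assumption. A minimal sketch of that bound (the score statistics here are made up; this illustrates the inequality, not the paper's full fusion scheme): since P(|X - mu| >= k*sigma) <= 1/k^2, setting the threshold at mu + k*sigma with k = sqrt(1/p) caps the benign-exceedance probability at p.

```python
import math

def chebyshev_threshold(mean, std, max_false_rate):
    """Upper threshold on the fused anomaly score such that, by
    Chebyshev's inequality P(|X - mu| >= k*sigma) <= 1/k^2, the
    probability of a benign score exceeding the threshold is at most
    max_false_rate, regardless of the score's distribution."""
    k = math.sqrt(1.0 / max_false_rate)
    return mean + k * std

# Hypothetical benign fused-score statistics: mean 0.2, std 0.05;
# cap the false-alarm probability at 1% -> k = 10.
t = chebyshev_threshold(0.2, 0.05, 0.01)
```

Because the bound is distribution-free it is conservative: for well-behaved score distributions the actual false-alarm rate at this threshold is usually far below the requested cap.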