976 results for Automatic detection
Abstract:
This study outlines the quantification of low levels of Alicyclobacillus acidoterrestris in pure cultures, since this bacterium is not inactivated by pasteurization and may remain in processed foods and beverages. Electroconductive polymer-modified fluorine-doped tin oxide (FTO) electrodes and multiple nanoparticle labels were used for biosensing. The detection of A. acidoterrestris in pure cultures was performed by reverse transcription polymerase chain reaction (RT-PCR), and the sensitivity was further increased by asymmetric nested RT-PCR with electrochemical detection for quantification of the amplicon. The quantification of nested RT-PCR products by Ag/Au-based electrochemical detection was able to detect 2 colony forming units per mL (CFU mL(-1)) of spores in pure culture, and low detection and quantification limits (7.07 and 23.6 nM, respectively) were obtained for the A. acidoterrestris target in the electrochemical detection bioassay.
Abstract:
This article describes an effective microchip protocol based on electrophoretic separation and electrochemical detection for highly sensitive and rapid measurements of nitrate ester explosives, including ethylene glycol dinitrate (EGDN), pentaerythritol tetranitrate (PETN), propylene glycol dinitrate (PGDN) and glyceryl trinitrate (nitroglycerin, NG). Factors influencing the separation and detection processes were examined and optimized. Under the optimal separation conditions, obtained using a 15 mM borate buffer (pH 9.2) containing 20 mM SDS and a separation voltage of 1500 V, the four nitrate ester explosives were separated in less than 3 min. The glassy-carbon amperometric detector (operated at -0.9 V vs. Ag/AgCl) offers convenient cathodic detection down to the picogram level, with detection limits of 0.5 ppm and 0.3 ppm for PGDN and NG, respectively, along with good repeatability (RSD of 1.8-2.3%; n = 6) and linearity (over the 10-60 ppm range). Such effective microchip operation offers great promise for field screening of nitrate ester explosives and for supporting various counter-terrorism surveillance activities.
Abstract:
A simple and easy approach to produce polymeric microchips with integrated copper electrodes for capacitively coupled contactless conductivity detection (C4D) is described. Copper electrodes were fabricated using a printed circuit board (PCB) as an inexpensive source of a thin metal layer. The electrode layout was first drawn and laser-printed on a wax paper sheet. The toner layer deposited on the paper sheet was thermally transferred to the PCB surface, working as a mask for wet chemical etching of the copper layer. After the etching step, the toner was removed with acetonitrile-dampened cotton. A poly(ethylene terephthalate) (PET) film coated with a thin thermo-sensitive adhesive layer was used to laminate the PCB plate, providing the insulating layer over the electrodes required for C4D measurements. Electrophoresis microchannels were fabricated in poly(dimethylsiloxane) (PDMS) by soft lithography and reversibly sealed against the PET film. These hybrid PDMS/PET chips exhibited a stable electroosmotic mobility of 4.25 +/- 0.04 x 10(-4) cm(2) V(-1) s(-1), at pH 6.1, over fifty runs. Efficiencies ranging from 1127 to 1690 theoretical plates were obtained for inorganic cations.
Abstract:
An implementation of a computational tool to generate summaries from source texts by means of a connectionist approach (artificial neural networks) is presented. Among the contributions that this work intends to bring to natural language processing research, the use of a more biologically plausible connectionist architecture and training procedure for automatic summarization is emphasized. The choice relies on the expectation that it may bring an increase in computational efficiency when compared with the so-called biologically implausible algorithms.
Abstract:
This paper proposes a three-stage offline approach to detect, identify, and correct series and shunt branch parameter errors. In Stage 1, the branches suspected of having parameter errors are identified through an Identification Index (II). The II of a branch is the ratio between the number of measurements adjacent to that branch whose normalized residuals are higher than a specified threshold value and the total number of measurements adjacent to that branch. Using several measurement snapshots, in Stage 2 the suspicious parameters are estimated, in a simultaneous multiple-state-and-parameter estimation, via an augmented state and parameter estimator that extends the V-theta state vector to include the suspicious parameters. Stage 3 validates the estimates obtained in Stage 2 via a conventional weighted least squares estimator. Several simulation results (with IEEE bus systems) have demonstrated the reliability of the proposed approach in dealing with single and multiple parameter errors in adjacent and non-adjacent branches, as well as in parallel transmission lines with series compensation. Finally, the proposed approach is confirmed by tests performed on the Hydro-Quebec TransEnergie network.
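To make the Stage 1 screening concrete, the following is a minimal Python sketch of the Identification Index computation: for each branch, the II is the fraction of adjacent measurements whose normalized residual exceeds a threshold. The residual values, branch and measurement names, the 3-sigma threshold and the 0.5 suspicion cut-off are illustrative assumptions, not values taken from the paper.

import numpy as np

# normalized residuals of the measurements (toy values, one gross/parameter-error effect)
r_norm = {"P12": 4.1, "Q12": 3.6, "V2": 0.5, "P23": 0.8, "P13": 1.1}

# measurements adjacent to each branch (toy topology)
adjacent = {
    "branch_1-2": ["P12", "Q12", "V2"],
    "branch_2-3": ["P23", "V2"],
    "branch_1-3": ["P13"],
}

THRESHOLD = 3.0  # assumed 3-sigma cut-off on normalized residuals

def identification_index(branch_meas, residuals, threshold=THRESHOLD):
    """Fraction of adjacent measurements whose |normalized residual| exceeds the threshold."""
    flagged = sum(abs(residuals[m]) > threshold for m in branch_meas)
    return flagged / len(branch_meas)

for branch, meas in adjacent.items():
    ii = identification_index(meas, r_norm)
    tag = "  <- suspicious (assumed II > 0.5 rule)" if ii > 0.5 else ""
    print(f"{branch}: II = {ii:.2f}{tag}")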
Abstract:
The main objective of this paper is to relieve power system engineers of the burden of the complex and time-consuming process of power system stabilizer (PSS) tuning. To achieve this goal, the paper proposes an automatic process for computerized tuning of PSSs, based on an iterative process that uses a linear matrix inequality (LMI) solver to find the PSS parameters. It is shown in the paper that PSS tuning can be written as a search problem over a non-convex feasible set. The proposed algorithm solves this feasibility problem using an iterative LMI approach and a suitable initial condition, corresponding to a PSS designed for the nominal operating condition only (a quite simple task, since the required phase compensation is uniquely defined). Some knowledge about PSS tuning is also incorporated in the algorithm through the specification of bounds defining the allowable PSS parameters. The application of the proposed algorithm to a benchmark test system and the nonlinear simulation of the resulting closed-loop models demonstrate the efficiency of this algorithm. (C) 2009 Elsevier Ltd. All rights reserved.
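The abstract does not give implementation details, but the kind of sub-problem repeated inside such an iterative LMI procedure can be sketched: for a fixed candidate controller, check whether the closed-loop state matrix admits a Lyapunov certificate P > 0 with A_cl' P + P A_cl < 0. The sketch below uses the cvxpy modelling package and a toy 2x2 matrix standing in for a power system model; it illustrates an LMI feasibility test only, not the paper's full tuning algorithm.

import numpy as np
import cvxpy as cp

# Toy closed-loop state matrix for a fixed candidate PSS (assumed, not a real power system model)
A_cl = np.array([[-0.5, 1.0],
                 [-2.0, -0.3]])
n = A_cl.shape[0]

P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),                       # P positive definite
               A_cl.T @ P + P @ A_cl << -eps * np.eye(n)]  # Lyapunov LMI

prob = cp.Problem(cp.Minimize(0), constraints)             # pure feasibility problem
prob.solve()

print("LMI feasible (candidate controller certified stable):", prob.status == cp.OPTIMAL)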
Abstract:
The design of supplementary damping controllers to mitigate the effects of electromechanical oscillations in power systems is a highly complex and time-consuming process, which requires a significant amount of knowledge on the part of the designer. In this study, the authors propose an automatic technique that takes the burden of tuning the controller parameters away from the power engineer and places it on the computer. Unlike other approaches that do the same based on robust control theories or evolutionary computing techniques, the proposed procedure uses an optimisation algorithm that works over a formulation of the classical tuning problem in terms of bilinear matrix inequalities. Using this formulation, it is possible to apply linear matrix inequality solvers to find a solution to the tuning problem via an iterative process, with the advantage that these solvers are widely available and have well-known convergence properties. The proposed algorithm is applied to tune the parameters of supplementary controllers for thyristor-controlled series capacitors placed in the New England/New York benchmark test system, aiming at improving the damping factor of inter-area modes under several different operating conditions. The results of the linear analysis are validated by non-linear simulation and demonstrate the effectiveness of the proposed procedure.
Abstract:
The main purpose of this paper is to present the architecture of an automated system that allows monitoring and tracking, in real time (online), the possible occurrence of faults and electromagnetic transients observed in primary power distribution networks. Through the interconnection of this automated system to the utility operation center, it will be possible to provide an efficient tool to assist decision-making by the operation center. In short, the desired purpose is to have all the tools necessary to identify, almost instantaneously, the occurrence of faults and transient disturbances in the primary power distribution system, as well as to determine their respective origin and probable location. The compiled results from the application of this automated system show that the developed techniques provide accurate results, identifying and locating several occurrences of faults observed in the distribution system.
Abstract:
This paper proposes an optimal sensitivity approach applied to the tertiary loop of automatic generation control. The approach is based on the non-linear perturbation theorem. From an optimal operation point obtained by an optimal power flow, a new optimal operation point is determined directly after a perturbation, i.e., without the need for an iterative process. This new optimal operation point satisfies the constraints of the problem for small perturbations in the loads. The participation factors and the voltage set points of the automatic voltage regulators (AVR) of the generators are determined by the optimal sensitivity technique, considering the effects of active power loss minimization and the network constraints. The participation factors and voltage set points of the generators are supplied directly to a computational program for dynamic simulation of automatic generation control, named the power sensitivity mode. Test results are presented to show the good performance of this approach. (C) 2008 Elsevier B.V. All rights reserved.
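The central idea, that a new optimum can be obtained directly from sensitivities instead of re-running the iterative optimization, can be illustrated with a first-order update. The base-case optimum, the sensitivity matrix and the load perturbation below are toy values, not the paper's formulation.

import numpy as np

x_opt = np.array([1.02, 0.98, 150.0, 120.0])   # base-case optimum: |V1|, |V2|, Pg1, Pg2 (toy)
S = np.array([[0.001, 0.0005],                 # assumed sensitivity matrix dx/d(load)
              [0.0008, 0.001],
              [0.6, 0.3],
              [0.4, 0.7]])
delta_load = np.array([5.0, -2.0])             # small load perturbation (MW)

x_new = x_opt + S @ delta_load                 # direct, non-iterative update of the optimum
print("updated voltage set points and dispatch:", np.round(x_new, 4))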
Abstract:
In this study, the innovation approach is used to estimate the total measurement error associated with power system state estimation. This is required because the power system equations are strongly correlated with each other and, as a consequence, part of the measurement errors is masked. For that purpose, an innovation index (II), which quantifies the new information a measurement contains, is proposed. A critical measurement is the limiting case of a measurement with low II: it has a zero II and its error is totally masked. In other words, such a measurement does not bring any innovation to the gross error test. Using the II of a measurement, the gross error masked by the state estimation is recovered and the total gross error of that measurement is composed. Instead of the classical normalized measurement residual amplitude, the corresponding normalized composed measurement residual amplitude is used in the gross error detection and identification test, but with m degrees of freedom. The gross error processing turns out to be very simple to implement, requiring only a few adaptations to existing state estimation software. The IEEE 14-bus system is used to validate the proposed gross error detection and identification test.
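For context, the classical normalized-residual test that the composed residual extends can be sketched as follows. The linearized measurement model, covariances and 3-sigma threshold are toy assumptions, and the innovation-index/composed-residual refinement itself is not reproduced here.

import numpy as np

# Toy linearized measurement model z = H x + e with a gross error in the last measurement
H = np.array([[1.0, 0.0],
              [1.0, -1.0],
              [0.0, 1.0],
              [2.0, 1.0]])
R = np.diag([0.01, 0.02, 0.01, 0.02])      # measurement error covariance (assumed)
z = np.array([1.00, 0.10, 0.90, 4.00])     # consistent with x = [1.0, 0.9] except the last value

W = np.linalg.inv(R)
G = H.T @ W @ H                            # gain matrix
x_hat = np.linalg.solve(G, H.T @ W @ z)    # weighted least squares estimate
r = z - H @ x_hat                          # measurement residuals

S = R - H @ np.linalg.inv(G) @ H.T         # residual covariance
r_norm = np.abs(r) / np.sqrt(np.diag(S))   # normalized residuals

suspect = int(np.argmax(r_norm))
print("normalized residuals:", np.round(r_norm, 2))
if r_norm[suspect] > 3.0:                  # assumed 3-sigma threshold
    print(f"gross error suspected in measurement {suspect} (largest normalized residual)")
else:
    print("no gross error detected")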
Abstract:
On-line leak detection is a major concern for the safe operation of pipelines. Acoustic and mass balance methods are the most important and most extensively applied technologies in field problems. The objective of this work is to compare these leak detection methods with respect to a given reference situation, i.e., the same pipeline and monitoring signals acquired at the inlet and outlet ends. Experimental tests were conducted in a 749 m long laboratory pipeline transporting water as the working fluid. The instrumentation included pressure transducers and electromagnetic flowmeters. Leaks were simulated by opening solenoid valves placed at known positions and previously calibrated to produce known average leak flow rates. Results have clearly shown the limitations and advantages of each method. It is also quite clear that acoustic and mass balance technologies are, in fact, complementary. In general, an acoustic leak detection system sends out an alarm more rapidly and locates the leak more precisely, provided that the rupture of the pipeline occurs abruptly enough. On the other hand, a mass balance leak detection method is capable of quantifying the leak flow rate very accurately and of detecting progressive leaks.
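As a rough illustration of the mass balance principle (not the specific algorithm used in these experiments), a leak can be flagged when the rolling average of inlet minus outlet flow exceeds a threshold. All flow values, the window length and the alarm threshold below are synthetic assumptions.

import numpy as np

rng = np.random.default_rng(0)
n = 600                                        # 10 minutes of 1 Hz flow samples
q_in = 10.0 + 0.05 * rng.standard_normal(n)    # inlet flow (L/s) with measurement noise
q_out = 10.0 + 0.05 * rng.standard_normal(n)   # outlet flow (L/s)
q_out[300:] -= 0.4                             # simulated leak of 0.4 L/s starting at t = 300 s

window = 30                                    # 30 s rolling window
imbalance = np.convolve(q_in - q_out, np.ones(window) / window, mode="valid")

THRESHOLD = 0.2                                # assumed alarm threshold (L/s)
alarm_idx = int(np.argmax(imbalance > THRESHOLD))
if imbalance[alarm_idx] > THRESHOLD:
    print(f"leak alarm around t ~ {alarm_idx + window - 1} s, "
          f"estimated leak flow ~ {imbalance[-1]:.2f} L/s")
else:
    print("no leak detected")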
Abstract:
Leakage reduction in water supply systems and distribution networks has been an increasingly important issue in the water industry, since leaks and ruptures result in major physical and economic losses. Hydraulic transient solvers can be used in system operational diagnosis, namely for leak detection purposes, due to their capability to describe the dynamic behaviour of the systems and to provide substantial amounts of data. In this research work, the association of hydraulic transient analysis with an optimisation model, through inverse transient analysis (ITA), has been used for leak detection and location in an experimental facility containing PVC pipes. Observed transient pressure data have been used for testing ITA. A key factor for the success of the leak detection technique used is the accurate calibration of the transient solver, namely adequate boundary conditions and the description of energy dissipation effects, since PVC pipes are characterised by a viscoelastic mechanical response. Results have shown that leaks were located within 4-15% of the total length of the pipeline, depending on the discretisation of the system model.
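Conceptually, ITA searches for the leak parameters that minimize the mismatch between observed and simulated pressures. The sketch below shows that inverse loop with a deliberately simple steady-state stand-in for the transient solver; the sensor positions, the toy forward model and the grid resolution are assumptions, and a real implementation would couple the search with a calibrated viscoelastic transient solver as described above.

import numpy as np

L_PIPE = 749.0                               # pipe length (m), from the experimental facility
SENSORS = np.array([100.0, 400.0, 700.0])    # assumed pressure sensor positions (m)

def toy_pressure_model(leak_pos, leak_size):
    """Toy forward model: extra head loss downstream of the leak (illustration only)."""
    base = 50.0 - 0.01 * SENSORS
    extra = leak_size * np.clip(SENSORS - leak_pos, 0.0, None) / L_PIPE
    return base - extra

# "Observed" pressures generated from a known leak at 250 m, plus noise
rng = np.random.default_rng(1)
p_obs = toy_pressure_model(250.0, 2.0) + 0.01 * rng.standard_normal(SENSORS.size)

# Inverse problem: grid search over candidate leak position and size
positions = np.linspace(0.0, L_PIPE, 150)
sizes = np.linspace(0.0, 5.0, 100)
best = min((np.sum((toy_pressure_model(x, s) - p_obs) ** 2), x, s)
           for x in positions for s in sizes)

print(f"estimated leak at {best[1]:.0f} m (size {best[2]:.2f}), "
      f"location error {abs(best[1] - 250.0) / L_PIPE * 100:.1f}% of pipe length")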
Abstract:
In this paper, a framework for the detection of human skin in digital images is proposed. The framework is composed of a training phase and a detection phase. A skin class model is learned during the training phase by processing several training images in a hybrid and incremental fuzzy learning scheme. This scheme combines unsupervised and supervised learning: unsupervised learning, by fuzzy clustering, to obtain clusters of color groups from the training images; and supervised learning to select the groups that represent skin color. At the end of the training phase, aggregation operators are used to combine the selected groups into a skin model. In the detection phase, the learned skin model is used to detect human skin efficiently. Experimental results show robust and accurate human skin detection performed by the proposed framework.
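A heavily simplified sketch of the training/detection idea (not the authors' hybrid incremental scheme) is given below: training pixel colors are grouped by fuzzy c-means, clusters whose centers look skin-like are kept (a crude stand-in for the supervised selection step), and a pixel is classified as skin when its membership in a kept cluster exceeds a threshold. All data, the cluster-selection rule and the thresholds are assumptions.

import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=50, seed=0):
    """Plain fuzzy c-means; returns cluster centers and the membership matrix."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        U = 1.0 / (d ** (2 / (m - 1)) * np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
    return centers, U

# toy training pixels in normalized RGB: one skin-like group and one background group
rng = np.random.default_rng(1)
skin = rng.normal([0.8, 0.6, 0.5], 0.03, size=(200, 3))
background = rng.normal([0.2, 0.4, 0.7], 0.05, size=(200, 3))
X = np.vstack([skin, background])

centers, _ = fuzzy_cmeans(X, c=2)
skin_clusters = [j for j, v in enumerate(centers) if v[0] > v[2]]  # assumed "red > blue" skin rule

def is_skin(pixel, threshold=0.6):
    d = np.linalg.norm(pixel - centers, axis=1) + 1e-9
    u = (d ** -2) / np.sum(d ** -2)          # fuzzy memberships for m = 2
    return max(u[j] for j in skin_clusters) > threshold

print(is_skin(np.array([0.78, 0.62, 0.48])), is_skin(np.array([0.25, 0.40, 0.72])))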
Abstract:
Acoustic resonances are observed in high-pressure discharge lamps operated with AC power modulated at frequencies in the kilohertz range. This paper describes an optical resonance detection method for high-intensity discharge lamps using computer-controlled cameras and image processing software. Experimental results showing acoustic resonances in high-pressure sodium lamps are presented.
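The abstract does not detail the image processing pipeline, but one plausible sketch is to track the mean brightness of the arc region across camera frames and look for a strong spectral peak, which would indicate flicker caused by an acoustic resonance. The frame rate, the synthetic frames and the detection rule below are assumptions for illustration only.

import numpy as np

FPS = 2000                                   # assumed high-speed camera frame rate
t = np.arange(0, 1.0, 1.0 / FPS)

# synthetic 32x32 frames whose brightness flickers at 780 Hz (simulated resonance)
flicker = 0.2 * np.sin(2 * np.pi * 780 * t)
frames = np.ones((len(t), 32, 32)) * (1.0 + flicker[:, None, None])

brightness = frames.mean(axis=(1, 2))        # per-frame mean intensity of the arc region
spectrum = np.abs(np.fft.rfft(brightness - brightness.mean()))
freqs = np.fft.rfftfreq(len(brightness), d=1.0 / FPS)

peak = freqs[np.argmax(spectrum)]
resonant = spectrum.max() > 10 * np.median(spectrum)     # assumed peak-prominence rule
print(f"dominant flicker component at ~{peak:.0f} Hz",
      "-> possible acoustic resonance" if resonant else "-> stable arc")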
Abstract:
Sigma phase is a deleterious phase that can form in duplex stainless steels during heat treatment or welding. To track this transformation, the ferrite and sigma phase percentages and the hardness were measured on samples of UNS S31803 duplex stainless steel submitted to heat treatment. These results were compared with measurements obtained from ultrasound and eddy current techniques, i.e., velocity and impedance, respectively. Additionally, backscattered signals produced by wave propagation were acquired during ultrasonic inspection, as well as magnetic Barkhausen noise during magnetic inspection. Both signal types were processed via a combination of detrended fluctuation analysis (DFA) and principal component analysis (PCA). The techniques used proved to be sensitive to changes in the samples related to sigma phase formation due to heat treatment. Furthermore, these methods have the advantage of being nondestructive. (C) 2010 Elsevier B.V. All rights reserved.
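A minimal sketch of the DFA step is given below, applied to a synthetic stand-in for a backscattered ultrasonic or Barkhausen signal; the window sizes are assumptions, and the subsequent PCA stage over DFA features is only indicated in a comment.

import numpy as np

def dfa_fluctuations(x, scales):
    """Return the DFA fluctuation function F(s) for the given window sizes."""
    y = np.cumsum(x - np.mean(x))                      # integrated (profile) signal
    F = []
    for s in scales:
        n_win = len(y) // s
        segs = y[: n_win * s].reshape(n_win, s)
        t = np.arange(s)
        res_sq = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)               # local linear trend in each window
            res_sq.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(res_sq)))
    return np.array(F)

rng = np.random.default_rng(0)
signal = rng.standard_normal(4096)                     # synthetic stand-in for an acquired signal
scales = np.array([16, 32, 64, 128, 256])

F = dfa_fluctuations(signal, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]    # DFA scaling exponent
print("fluctuation function:", np.round(F, 3), "| alpha ~", round(alpha, 2))

# In the spirit of the paper, F(s) or alpha computed for each sample would then be stacked
# into a feature matrix and reduced by PCA (e.g. sklearn.decomposition.PCA) for comparison
# across heat-treatment conditions.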