873 results for Data acquisition
Abstract:
An automatic procedure with a high current-density anodic electrodissolution unit (HDAE) is proposed for the determination of aluminium, copper and zinc in non-ferrous alloys by flame atomic absorption spectrometry, based on direct solid analysis. It consists of solenoid valve-based commutation in a flow-injection system for on-line sample electrodissolution and calibration with one multi-element standard, an electrolytic cell equipped with two electrodes (a silver needle acts as cathode, and the sample as anode), and an intelligent unit. The latter is assembled in a PC-compatible microcomputer for instrument control and for data acquisition and processing. General management of the process is achieved by use of software written in Pascal. Electrolyte compositions, flow rates, commutation times, applied current and electrolysis time were investigated. A 0.5 mol l(-1) HNO3 solution was selected as electrolyte and 300 A/cm(2) as the continuous current pulse. The performance of the proposed system was evaluated by analysing aluminium in Al-alloy samples, and copper/zinc in brass and bronze samples, respectively. The system handles about 50 samples per hour. Results are precise (R.S.D. < 2%) and in agreement with those obtained by ICP-AES and spectrophotometry at a 95% confidence level.
Abstract:
Grinding is a finishing process in machining operations, and the topology of the grinding tool is responsible for producing the desired result on the surface of the machined material. The tool topology is modeled in the dressing process, and precision is therefore extremely important. This study presents a solution for monitoring the dressing process, using a digital signal processor (DSP) operating in real time to detect the optimal dressing moment. To confirm the efficiency of the DSP monitoring, the results were compared with those of a data acquisition system (DAQ) and offline processing. The method employed here consisted of analyzing the acoustic emission and electrical power signals by applying the DPO and DPKS parameters. The analysis of the results allowed us to conclude that the application of the DPO and DPKS parameters can be substituted by processing of the mean acoustic emission signal, thus reducing the computational effort.
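The abstract's conclusion, that the DPO and DPKS parameters can be replaced by processing the mean acoustic emission signal, amounts to a very cheap per-pass statistic. A minimal sketch of that idea follows; the signal shapes and window size are hypothetical, and the DPO/DPKS formulas themselves are not given in the abstract:

```python
import numpy as np

def mean_ae_per_pass(ae_signal, samples_per_pass):
    """Mean of the rectified AE signal over each dressing pass; a flat
    profile across successive passes suggests the wheel is fully dressed."""
    n_passes = len(ae_signal) // samples_per_pass
    trimmed = np.asarray(ae_signal[: n_passes * samples_per_pass])
    return np.abs(trimmed).reshape(n_passes, samples_per_pass).mean(axis=1)

# Hypothetical data: 10 passes of 2000 samples each, AE activity decaying
rng = np.random.default_rng(0)
ae = rng.normal(0.0, 1.0, 20_000) * np.linspace(2.0, 1.0, 20_000)
levels = mean_ae_per_pass(ae, samples_per_pass=2_000)
print(levels)  # monitoring would stop dressing once successive means level off
```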
Abstract:
Background: Obstructive sleep apnea (OSA) is a respiratory disease characterized by the collapse of the extrathoracic airway and has important social implications related to accidents and cardiovascular risk. The main objective of the present study was to investigate whether the drop in expiratory flow and the volume expired in 0.2 s during the application of negative expiratory pressure (NEP) are associated with the presence and severity of OSA in a population of professional interstate bus drivers who travel medium and long distances.
Methods/Design: An observational, analytic study will be carried out involving adult male subjects of an interstate bus company. Those who agree to participate will undergo a detailed patient history, physical examination involving determination of blood pressure, anthropometric data, circumference measurements (hips, waist and neck), tonsils and Mallampati index. Moreover, specific questionnaires addressing sleep apnea and excessive daytime sleepiness will be administered. Data acquisition will be completely anonymous. Following the medical examination, the participants will perform a spirometry, NEP test and standard overnight polysomnography. The NEP test is performed through the administration of negative pressure at the mouth during expiration. This is a practical test performed while awake and requires little cooperation from the subject. In the absence of expiratory flow limitation, the increase in the pressure gradient between the alveoli and open upper airway caused by NEP results in an increase in expiratory flow.
Discussion: Despite the abundance of scientific evidence, OSA is still underdiagnosed in the general population. In addition, diagnostic procedures are expensive, and predictive criteria are still unsatisfactory. Because increased upper airway collapsibility is one of the main determinants of OSA, the response to the application of NEP could be a predictor of this disorder. With the enrollment of this study protocol, the expectation is to encounter predictive NEP values for different degrees of OSA in order to contribute toward an early diagnosis of this condition and reduce its impact and complications among commercial interstate bus drivers.
Abstract:
Concept drift is a problem of increasing importance in machine learning and data mining. Data sets under analysis are no longer only static databases, but also data streams in which concepts and data distributions may not be stable over time. However, most learning algorithms produced so far are based on the assumption that data come from a fixed distribution, so they are not suitable for handling concept drift. Moreover, some concept drift applications require fast response, which means an algorithm must always be (re)trained with the latest available data. But the process of labeling data is usually expensive and/or time consuming when compared to unlabeled data acquisition, so only a small fraction of the incoming data may be effectively labeled. Semi-supervised learning methods may help in this scenario, as they use both labeled and unlabeled data in the training process. However, most of them are also based on the assumption that the data is static. Therefore, semi-supervised learning with concept drift is still an open challenge in machine learning. Recently, a particle competition and cooperation approach was used to realize graph-based semi-supervised learning from static data. In this paper, we extend that approach to handle data streams and concept drift. The result is a passive algorithm using a single classifier, which naturally adapts to concept changes, without any explicit drift detection mechanism. Its built-in mechanisms provide a natural way of learning from new data, gradually forgetting older knowledge as older labeled data items become less influential on the classification of newer data items. Some computer simulations are presented, showing the effectiveness of the proposed method.
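The particle competition and cooperation algorithm itself is not detailed in the abstract. As a stand-in, the sketch below illustrates only the passive-adaptation idea it describes: older labeled items gradually lose influence, so the classifier forgets outdated concepts without any explicit drift detector. The decayed-weight nearest-neighbour rule is an assumption for illustration, not the paper's method:

```python
import numpy as np

class DecayingWeighted1NN:
    """Toy passive drift adapter: labeled items lose influence with age,
    mimicking the gradual-forgetting behaviour described in the paper
    (this is NOT the particle competition algorithm itself)."""

    def __init__(self, half_life=200.0, max_items=1000):
        self.half_life, self.max_items = half_life, max_items
        self.X, self.y, self.t, self.clock = [], [], [], 0

    def learn(self, x, label):
        self.X.append(np.asarray(x, float)); self.y.append(label); self.t.append(self.clock)
        if len(self.X) > self.max_items:               # bound memory for streams
            self.X.pop(0); self.y.pop(0); self.t.pop(0)

    def predict(self, x):
        self.clock += 1                                # stream time advances
        if not self.X:
            return None
        age = self.clock - np.asarray(self.t, float)
        w = 0.5 ** (age / self.half_life)              # exponential forgetting
        d = np.linalg.norm(np.asarray(self.X) - np.asarray(x, float), axis=1)
        return self.y[int(np.argmax(w / (d + 1e-9)))]  # fresher and nearer wins
```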
Abstract:
We outline a method for registration of images of cross sections using the concepts of the Generalized Hough Transform (GHT). The approach may be useful in situations where automation is a concern. To overcome the known noise problems of the traditional GHT, we have implemented a slightly modified version of the basic algorithm. The modification consists of eliminating points of no interest before the accumulation step of the algorithm. This procedure minimizes the number of accumulation points while reducing the probability of spurious peaks appearing. We also apply image warping techniques to interpolate images between cross sections, which is needed when the distance between sampled sections is too large. We then suggest that the registration step with the GHT can help automate the interpolation by simplifying the correspondence between image points. Some results are shown.
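For concreteness, here is a translation-only sketch of the GHT voting step together with the paper's modification of discarding uninteresting points before accumulation. The gradient-based pruning criterion and all inputs are hypothetical; rotation and scale are omitted:

```python
import numpy as np

def prune(points, scores, threshold):
    """The paper's modification: drop points of no interest (low edge
    score here, as an assumed criterion) before accumulation."""
    return points[scores > threshold]

def ght_translation(template_pts, image_pts, shape):
    """Translation-only GHT: every (image point, template point) pair
    votes for the template reference location that would align them."""
    acc = np.zeros(shape, dtype=np.int32)
    ref = template_pts.mean(axis=0)
    for p in image_pts:
        for q in template_pts:
            r, c = np.round(p - q + ref).astype(int)
            if 0 <= r < shape[0] and 0 <= c < shape[1]:
                acc[r, c] += 1                 # fewer points -> fewer spurious peaks
    peak = np.unravel_index(acc.argmax(), acc.shape)
    return np.array(peak) - ref                # estimated translation

# Hypothetical cross-section contours: image is the template shifted by (5, 7)
tpl = np.array([[10, 10], [10, 20], [20, 10], [20, 20]], float)
img = tpl + np.array([5.0, 7.0])
print(ght_translation(tpl, img, shape=(64, 64)))  # -> [5. 7.]
```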
Abstract:
The acquisition and update of Geographic Information System (GIS) data are typically carried out using aerial or satellite imagery. Since new roads are usually linked to the pre-existing georeferenced road network, the extraction of pre-existing road segments may provide good hypotheses for the updating process. This paper addresses the problem of extracting georeferenced roads from images and formulating hypotheses for the presence of new road segments. Our approach proceeds in three steps. First, salient points are identified and measured along roads from a map or GIS database by an operator or an automatic tool. These salient points are then projected onto the image space and the errors inherent in this process are calculated. In the second step, the georeferenced roads are extracted from the image using a dynamic programming (DP) algorithm. The projected salient points and corresponding error estimates are used as input for this extraction process. Finally, the road center axes extracted in the previous step are analyzed to identify potential new segments attached to the extracted, pre-existing ones. This analysis is performed using a combination of edge-based and correlation-based algorithms. In this paper we present our approach and early implementation results.
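As a rough illustration of the dynamic programming step, the sketch below finds a minimum-cost left-to-right path through a cost image in which road pixels are assumed cheap. The actual algorithm works on projected salient points with a merit function not given in the abstract; this is a 1-D simplification:

```python
import numpy as np

def dp_road_path(cost, col_window=3):
    """Minimum-cost left-to-right path through a cost image where road
    pixels are assumed cheap; a 1-D simplification of DP road extraction."""
    rows, cols = cost.shape
    acc, back = cost.copy(), np.zeros(cost.shape, dtype=int)
    for j in range(1, cols):
        for i in range(rows):
            lo, hi = max(0, i - col_window), min(rows, i + col_window + 1)
            k = lo + int(np.argmin(acc[lo:hi, j - 1]))  # best predecessor row
            acc[i, j] = cost[i, j] + acc[k, j - 1]
            back[i, j] = k
    path = [int(np.argmin(acc[:, -1]))]
    for j in range(cols - 1, 0, -1):                    # trace back to column 0
        path.append(int(back[path[-1], j]))
    return path[::-1]                                   # road row per column

# Hypothetical cost image: a dark, slightly wavy "road" on a bright background
rows, cols = 40, 60
cost = np.ones((rows, cols))
for j in range(cols):
    cost[20 + int(3 * np.sin(j / 9.0)), j] = 0.0
print(dp_road_path(cost)[:10])  # follows the wavy road row by row
```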
Abstract:
An overview is given of the possibility of controlling the status of circuit breakers (CBs) in a substation with the use of a knowledge base that relates some of the operating magnitudes, mixing status variables with time variables and fuzzy sets. It is shown that even when not all of the magnitudes to be controlled can be included in the analysis, it is possible to control the desired status while supervising some important magnitudes such as voltage, power factor and harmonic distortion, as well as the present status.
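A toy Mamdani-style rule of the kind such a knowledge base might contain, supervising voltage, power factor and harmonic distortion before permitting a breaker operation; all membership breakpoints below are illustrative, not from the paper:

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def close_permission(voltage_pu, power_factor, thd):
    """Rule sketch: permission to close is high only when voltage is near
    nominal AND power factor is acceptable AND distortion is low."""
    v_ok = tri(voltage_pu, 0.90, 1.00, 1.10)
    pf_ok = tri(power_factor, 0.80, 1.00, 1.20)  # effectively saturates at unity
    thd_low = tri(thd, -0.08, 0.00, 0.08)        # peak at zero distortion
    return min(v_ok, pf_ok, thd_low)             # AND = min (Mamdani)

print(close_permission(voltage_pu=0.98, power_factor=0.92, thd=0.03))  # -> 0.6
```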
Abstract:
The grinding process is usually the last finishing process of a precision component in the manufacturing industries. This process is used for manufacturing parts of different materials, so it demands results such as low roughness, dimensional and shape error control, and optimum tool life, with minimum cost and time. Damage to the parts is very expensive, since the previous processes and the grinding itself are rendered useless when the part is damaged at this stage. This work aims to investigate the efficiency of digital signal processing tools applied to acoustic emission signals in order to detect thermal damage in the grinding process. To accomplish this goal, an experimental work was carried out in 15 runs on a surface grinding machine operating with an aluminum oxide grinding wheel and ABNT 1045 and VC131 steels. The acoustic emission signals were acquired from a fixed sensor placed on the workpiece holder. A high sampling rate acquisition system at 2.5 MHz was used to collect the raw acoustic emission instead of the root mean square value usually employed. In each test, the AE data were analyzed off-line, with the results compared to the inspection of each workpiece for burn and other metallurgical anomalies. A number of statistical signal processing tools have been evaluated.
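Since the raw AE is collected at 2.5 MHz instead of the usual RMS signal, a natural first offline step is to reduce it to block statistics. A minimal sketch with a hypothetical 1 ms window and simulated burn burst:

```python
import numpy as np

def block_rms(raw_ae, fs=2.5e6, window_s=1e-3):
    """Reduce raw high-rate AE to an RMS time series (1 ms windows here);
    keeping the raw signal, as in the paper, lets richer statistics be
    computed offline from the same record."""
    n = int(fs * window_s)                 # samples per window
    m = len(raw_ae) // n
    blocks = np.asarray(raw_ae[: m * n]).reshape(m, n)
    return np.sqrt((blocks ** 2).mean(axis=1))

# Hypothetical record: 0.1 s of noise with a burn-like burst in the middle
rng = np.random.default_rng(0)
ae = rng.normal(0, 1, 250_000)
ae[100_000:120_000] *= 4.0                 # energy rise, as burn would cause
print(block_rms(ae).round(2))              # the burst stands out in the RMS series
```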
Abstract:
Optimized allocation of Phasor Measurement Units (PMUs) allows control, monitoring and accurate operation of electric power distribution systems, improving reliability and service quality. Good results have been obtained for transmission systems using fault location techniques based on voltage measurements. Building on these techniques and performing optimized PMU allocation, it is possible to develop a fault locator for electric power distribution systems that provides accurate results. The PMU allocation problem has combinatorial features related to the number of devices that can be allocated and the candidate locations for allocation. A tabu search algorithm is the technique proposed to carry out the PMU allocation. Applied to a real-life 141-bus urban distribution feeder, this technique significantly improved the fault location results. © 2004 IEEE.
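A compact sketch of how a tabu search over PMU placements might look. The observability objective `covered`, the toy feeder topology and the tenure value are all hypothetical stand-ins for the paper's formulation:

```python
import random

def tabu_pmu_allocation(n_buses, n_pmus, covered, iters=300, tenure=10):
    """Tiny tabu search over PMU placements: swap one PMU bus per move,
    forbid reversing a recent swap, keep the best placement seen."""
    current = set(random.sample(range(n_buses), n_pmus))
    best, best_val = set(current), covered(current)
    tabu = {}                                    # (bus_out, bus_in) -> expiry
    for it in range(iters):
        moves = []
        for out in current:
            for inb in set(range(n_buses)) - current:
                if tabu.get((out, inb), -1) >= it:
                    continue                     # move is tabu for now
                cand = (current - {out}) | {inb}
                moves.append((covered(cand), out, inb, cand))
        if not moves:
            break
        val, out, inb, current = max(moves, key=lambda m: m[0])
        tabu[(inb, out)] = it + tenure           # forbid the reverse swap
        if val > best_val:
            best, best_val = set(current), val
    return best, best_val

# Hypothetical radial feeder: a PMU observes its own bus and its neighbours
n = 14
adj = {i: {i, max(0, i - 1), min(n - 1, i + 1)} for i in range(n)}
cov = lambda buses: len(set().union(*(adj[b] for b in buses))) if buses else 0
random.seed(1)
print(tabu_pmu_allocation(n, 3, cov))            # a placement observing 9 of 14 buses
```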
Abstract:
Systematic errors can have a significant effect on GPS observables. In medium and long baselines, the major systematic error sources are ionospheric and tropospheric refraction and GPS satellite orbit errors; in short baselines, multipath is more relevant. These errors degrade the accuracy of positioning accomplished by GPS, which is a critical problem for high-precision GPS positioning applications. Recently, a method has been suggested to mitigate these errors: the semiparametric model and the penalised least squares technique. It uses a natural cubic spline to model the errors as a function which varies smoothly in time. The systematic error functions, ambiguities and station coordinates are estimated simultaneously. As a result, the ambiguities and the station coordinates are estimated with better reliability and accuracy than with the conventional least squares method.
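The paper models the systematic errors with a natural cubic spline under penalised least squares; the closest self-contained analogue is a Whittaker-style smoother with a second-difference penalty, sketched below on hypothetical residuals (the smoothing weight and the simulated multipath signal are illustrative):

```python
import numpy as np

def penalized_smooth(y, lam=500.0):
    """Whittaker-style penalized least squares: solve
    min ||y - f||^2 + lam * ||D2 f||^2 for a smooth error function f,
    a discrete analogue of the natural cubic spline model."""
    n = len(y)
    D2 = np.diff(np.eye(n), n=2, axis=0)      # second-difference operator
    return np.linalg.solve(np.eye(n) + lam * D2.T @ D2, y)

# Hypothetical residuals: smooth multipath-like bias plus measurement noise
t = np.linspace(0.0, 1.0, 200)
rng = np.random.default_rng(1)
resid = 0.02 * np.sin(2 * np.pi * t) + rng.normal(0.0, 0.005, 200)
bias = penalized_smooth(resid)                # recovered systematic error function
print(float(np.abs(bias - 0.02 * np.sin(2 * np.pi * t)).max()))
```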
Abstract:
An intelligent system that emulates human decision behaviour based on visual data acquisition is proposed. The approach is useful in applications where images are used to supply information to specialists who will choose suitable actions. An artificial neural classifier aids a fuzzy decision support system to deal with uncertainty and imprecision present in available information. Advantages of both techniques are exploited complementarily. As an example, this method was applied in automatic focus checking and adjustment in video monitor manufacturing. Copyright © 2005 IFAC.
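One way to read the hybrid architecture: a neural classifier scores the image, and fuzzy rules turn the score and its uncertainty into an action, deferring to a specialist when the evidence is imprecise. A toy sketch with invented memberships and rule set, not the paper's system:

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def focus_decision(sharpness, uncertainty):
    """Map a (hypothetical) classifier's sharpness score and its
    uncertainty to an action; unsure cases are deferred to a human."""
    blurred = tri(sharpness, -0.5, 0.0, 0.5)
    sharp = tri(sharpness, 0.5, 1.0, 1.5)
    confident = tri(uncertainty, -0.4, 0.0, 0.4)
    rules = {
        "accept monitor": min(sharp, confident),
        "adjust focus": min(blurred, confident),
        "refer to specialist": 1.0 - confident,
    }
    return max(rules, key=rules.get), rules

print(focus_decision(sharpness=0.9, uncertainty=0.1))  # -> accept monitor
```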
Abstract:
This work aims to investigate the efficiency of digital signal processing tools applied to acoustic emission signals in order to detect thermal damage in grinding processes. To accomplish this goal, an experimental work was carried out in 15 runs on a surface grinding machine operating with an aluminum oxide grinding wheel and ABNT 1045 steel as the work material. The acoustic emission signals were acquired from a fixed sensor placed on the workpiece holder. A high sampling rate data acquisition system working at 2.5 MHz was used to collect the raw acoustic emission instead of the root mean square value usually employed. Several statistical analyses have been shown to be effective in detecting burn, such as the root mean square (RMS), correlation of the AE, constant false alarm rate (CFAR), ratio of power (ROP) and mean-value deviance (MVD). However, the CFAR, ROP, kurtosis and correlation of the AE proved more sensitive than the RMS. Copyright © 2006 by ABCM.
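A sketch of per-window statistics of the kind compared in the paper, computed from the raw AE record. The window length and the spectral band used for the ratio of power are assumptions, and the CFAR detector (a threshold test applied to such statistics) is omitted:

```python
import numpy as np

def ae_window_stats(raw_ae, fs=2.5e6, window_s=1e-3, band=(0.2, 0.4)):
    """Per-window RMS, kurtosis and ratio of power (ROP) from raw AE.
    ROP is taken as the fraction of spectral power inside a band given
    as fractions of the Nyquist frequency; burn shows up as jumps in
    these series."""
    n = int(fs * window_s)
    m = len(raw_ae) // n
    w = np.asarray(raw_ae[: m * n]).reshape(m, n)
    rms = np.sqrt((w ** 2).mean(axis=1))
    c = w - w.mean(axis=1, keepdims=True)
    kurt = (c ** 4).mean(axis=1) / (c.var(axis=1) ** 2 + 1e-30)
    spec = np.abs(np.fft.rfft(w, axis=1)) ** 2
    lo, hi = (int(f * spec.shape[1]) for f in band)
    rop = spec[:, lo:hi].sum(axis=1) / (spec.sum(axis=1) + 1e-30)
    return rms, kurt, rop

# Hypothetical raw record: white noise -> kurtosis near 3, flat ROP
rng = np.random.default_rng(0)
rms, kurt, rop = ae_window_stats(rng.normal(0, 1, 50_000))
print(rms.shape, kurt.mean().round(1), rop.mean().round(2))
```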
Abstract:
We present a search for associated Higgs boson production in the process pp̄→WH→WWW*→l±νl′±ν′ +X in final states containing two like-sign isolated electrons or muons (e±e±, e±μ±, or μ±μ±). The search is based on D0 run II data samples corresponding to integrated luminosities of 360-380pb-1. No excess is observed over the predicted standard model background. We set 95% C.L. upper limits on σ(pp̄→WH) ×Br(H→WW*) between 3.2 and 2.8 pb for Higgs boson masses from 115 to 175 GeV. © 2006 The American Physical Society.
Abstract:
The aim of this work was to prepare an overview of the microstructures present in high-speed steel, focused on the crystallography of the carbides. High-speed steels are currently obtained by casting, powder metallurgy and, more recently, spray forming. High-speed steels have a high hardness resulting from their microstructure, which consists of a steel matrix (martensite and ferrite) in which carbides of different crystal structures, chemical compositions, morphologies and sizes are embedded. These carbides are commonly named MxC, where M represents one or more metallic atoms, and they can be identified by X-ray diffraction considering M as a single metallic atom. This work discusses, on the basis of first principles of crystallography, the validity of this identification when other atoms in the structure are considered substitutional. Furthermore, some requirements are discussed for data acquisition that allow Rietveld refinement to be applied to carbide crystallography and phase amount determination.