145 results for Data acquisition card
Abstract:
The D0 experiment enjoyed a very successful data-collection run at the Fermilab Tevatron collider between 1992 and 1996. Since then, the detector has been upgraded to take advantage of improvements to the Tevatron and to enhance its physics capabilities. We describe the new elements of the detector, including the silicon microstrip tracker, central fiber tracker, solenoidal magnet, preshower detectors, forward muon detector, and forward proton detector. The uranium/liquid-argon calorimeters and central muon detector, remaining from Run 1, are discussed briefly. We also present the associated electronics, triggering, and data acquisition systems, along with the design and implementation of software specific to D0. (c) 2006 Elsevier B.V. All rights reserved.
Abstract:
The Compact Muon Solenoid (CMS) detector is described. The detector operates at the Large Hadron Collider (LHC) at CERN. It was conceived to study proton-proton (and lead-lead) collisions at a centre-of-mass energy of 14 TeV (5.5 TeV nucleon-nucleon) and at luminosities up to 10^34 cm^-2 s^-1 (10^27 cm^-2 s^-1). At the core of the CMS detector sits a high-magnetic-field and large-bore superconducting solenoid surrounding an all-silicon pixel and strip tracker, a lead-tungstate scintillating-crystal electromagnetic calorimeter, and a brass-scintillator sampling hadron calorimeter. The iron yoke of the flux-return is instrumented with four stations of muon detectors covering most of the 4π solid angle. Forward sampling calorimeters extend the pseudorapidity coverage to high values (|η| <= 5), assuring very good hermeticity. The overall dimensions of the CMS detector are a length of 21.6 m, a diameter of 14.6 m and a total weight of 12500 t.
Construction of a chamber for in situ monitoring of the drying process of gels and porous solids
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
One way to verify the efficiency of methods for estimating reference evapotranspiration (ETo) is to compare them with the standard method. This study compares three ETo estimation methods: Solar Radiation (RS), Makkink (MAK) and Class A Pan (TCA) against the Penman-Monteith (PM) method, over two distinct periods of the citrus crop development stages, using fortnightly mean data for the winter-spring and summer-autumn periods. The research was carried out on a citrus farm in Araraquara, SP (Brazil), where an automated weather station and a Class A pan were installed. The automated weather station provided measurements of global solar radiation, net radiation, air temperature, relative air humidity and wind speed. Regression analysis indicates that, for the TCA method, the regression model y = bx can be used, where y represents EToPM and x EToTCA. For the other methods analyzed, the most suitable model was y = bx + a. The results obtained in this study show that the TCA method overestimated ETo by 26% in the summer-autumn period and by 24% in the winter-spring period. The MAK method underestimated ETo in both periods, while the RS method overestimated it.
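The two regression models compared above can be sketched in a few lines. The numbers below are hypothetical fortnightly ETo means, not the study's data; the point is the difference between fitting y = bx (forced through the origin, as used for the Class A pan) and y = bx + a (ordinary least squares, as used for the other methods):

```python
import numpy as np

# Hypothetical fortnightly ETo means (mm/day) -- NOT the study's data.
# x: ETo from an alternative method, y: ETo from Penman-Monteith.
x = np.array([2.1, 2.8, 3.5, 4.2, 4.9, 5.6])
y = np.array([1.7, 2.3, 2.9, 3.4, 4.0, 4.5])

# Model y = b*x (forced through the origin): closed form b = sum(x*y) / sum(x*x)
b_origin = np.sum(x * y) / np.sum(x * x)

# Model y = b*x + a: ordinary least squares fit (highest degree coefficient first)
b_full, a_full = np.polyfit(x, y, 1)

print(b_origin, b_full, a_full)
```

A slope above 1 in either model would indicate that the alternative method underestimates EToPM, and below 1 that it overestimates it.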
Abstract:
This study aims to verify the influence of GPS data-collection time on height determination. The height survey was carried out using the static relative positioning method, with two single-frequency GPS receivers, at different occupation times (30, 15, 10 and 5 minutes) and a recording rate of two seconds. The heights obtained with the GPS receivers were compared with heights determined by trigonometric leveling with a total station. The results showed that occupation times shorter than 30 minutes (15, 10 and 5 minutes) are also adequate for obtaining centimeter-level differences in the analyzed heights. Even considering the precision of conventional surveying methods, this study demonstrates that the Global Positioning System (GPS) can be used accurately in height surveys, provided that the geoid undulation is modeled.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
An automatic procedure with a high current-density anodic electrodissolution unit (HDAE) is proposed for the determination of aluminium, copper and zinc in non-ferrous alloys by flame atomic absorption spectrometry, based on direct solid analysis. It consists of solenoid valve-based commutation in a flow-injection system for on-line sample electrodissolution and calibration with one multi-element standard, an electrolytic cell equipped with two electrodes (a silver needle acts as cathode and the sample as anode), and an intelligent unit. The latter is assembled in a PC-compatible microcomputer for instrument control, and for data acquisition and processing. General management of the process is achieved by use of software written in Pascal. Electrolyte compositions, flow rates, commutation times, applied current and electrolysis time were investigated. A 0.5 mol l^-1 HNO3 solution was selected as electrolyte and 300 A/cm^2 as the continuous current pulse. The performance of the proposed system was evaluated by analysing aluminium in Al-alloy samples, and copper/zinc in brass and bronze samples, respectively. The system handles about 50 samples per hour. Results are precise (R.S.D. < 2%) and in agreement with those obtained by ICP-AES and spectrophotometry at a 95% confidence level.
Abstract:
Grinding is a finishing process in machining operations, and the topology of the grinding tool is responsible for producing the desired result on the surface of the machined material. The tool topology is modeled in the dressing process, and precision is therefore extremely important. This study presents a solution for monitoring the dressing process, using a digital signal processor (DSP) operating in real time to detect the optimal dressing moment. To confirm the monitoring efficiency by DSP, the results were compared with those of a data acquisition system (DAQ) and offline processing. The method employed here consisted of analyzing the acoustic emission and electrical power signals by applying the DPO and DPKS parameters. The analysis of the results allowed us to conclude that the application of the DPO and DPKS parameters can be substituted by processing of the mean acoustic emission signal, thus reducing the computational effort.
Abstract:
Background: Obstructive sleep apnea (OSA) is a respiratory disease characterized by the collapse of the extrathoracic airway and has important social implications related to accidents and cardiovascular risk. The main objective of the present study was to investigate whether the drop in expiratory flow and the volume expired in 0.2 s during the application of negative expiratory pressure (NEP) are associated with the presence and severity of OSA in a population of professional interstate bus drivers who travel medium and long distances.
Methods/Design: An observational, analytic study will be carried out involving adult male subjects of an interstate bus company. Those who agree to participate will undergo a detailed patient history, physical examination involving determination of blood pressure, anthropometric data, circumference measurements (hips, waist and neck), tonsils and Mallampati index. Moreover, specific questionnaires addressing sleep apnea and excessive daytime sleepiness will be administered. Data acquisition will be completely anonymous. Following the medical examination, the participants will perform spirometry, the NEP test and standard overnight polysomnography. The NEP test is performed through the administration of negative pressure at the mouth during expiration. It is a practical test performed while awake and requires little cooperation from the subject. In the absence of expiratory flow limitation, the increase in the pressure gradient between the alveoli and the open upper airway caused by NEP results in an increase in expiratory flow.
Discussion: Despite the abundance of scientific evidence, OSA is still underdiagnosed in the general population. In addition, diagnostic procedures are expensive and predictive criteria are still unsatisfactory. Because increased upper airway collapsibility is one of the main determinants of OSA, the response to the application of NEP could be a predictor of this disorder. Through this study protocol, the expectation is to find predictive NEP values for different degrees of OSA, in order to contribute toward an early diagnosis of this condition and reduce its impact and complications among commercial interstate bus drivers.
Abstract:
Concept drift is a problem of increasing importance in machine learning and data mining. Data sets under analysis are no longer only static databases, but also data streams in which concepts and data distributions may not be stable over time. However, most learning algorithms produced so far assume that data comes from a fixed distribution, so they are not suitable for handling concept drift. Moreover, some concept drift applications require fast response, which means an algorithm must always be (re)trained with the latest available data. But the process of labeling data is usually expensive and/or time-consuming compared to unlabeled data acquisition, so only a small fraction of the incoming data may be effectively labeled. Semi-supervised learning methods may help in this scenario, as they use both labeled and unlabeled data in the training process. However, most of them also assume that the data is static. Therefore, semi-supervised learning with concept drift is still an open challenge in machine learning. Recently, a particle competition and cooperation approach was used to realize graph-based semi-supervised learning from static data. In this paper, we extend that approach to handle data streams and concept drift. The result is a passive algorithm using a single classifier, which naturally adapts to concept changes without any explicit drift detection mechanism. Its built-in mechanisms provide a natural way of learning from new data, gradually forgetting older knowledge as older labeled data items become less influential on the classification of newer data items. Some computer simulations are presented, showing the effectiveness of the proposed method.
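The gradual-forgetting idea described above can be sketched with a simple recency weighting. This is an illustrative mechanism, not the paper's particle competition model; the half-life parameter is a hypothetical choice:

```python
import numpy as np

def recency_weights(ages, half_life=50.0):
    """Weight labeled items by age (in number of arrivals since they were seen):
    older items decay smoothly, mimicking gradual forgetting without any
    explicit drift detection mechanism."""
    return 0.5 ** (np.asarray(ages, dtype=float) / half_life)

# an item seen half_life arrivals ago carries half the influence of a fresh one
print(recency_weights([0, 50, 100]))
```

Such weights could multiply each labeled item's contribution in any stream classifier, so that older labels fade out as the concept drifts.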
Abstract:
We outline a method for registration of images of cross sections using the concepts of the Generalized Hough Transform (GHT). The approach may be useful in situations where automation is a concern. To overcome known noise problems of the traditional GHT, we have implemented a slightly modified version of the basic algorithm. The modification consists of eliminating points of no interest before the accumulation step of the algorithm. This procedure minimizes the number of accumulation points while reducing the probability of spurious peaks appearing. Also, we apply image warping techniques to interpolate images among cross sections. This is needed where the spacing between sections is too large. We then suggest that the GHT registration step can help automate the interpolation by simplifying the correspondence between image points. Some results are shown.
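As a rough illustration of GHT-style accumulation (translation-only, on a synthetic point set rather than real cross-section images), each image point votes for candidate reference positions and the accumulator peak recovers the displacement; the modification described above would filter out points of no interest before this voting step:

```python
import numpy as np

def ght_translation(template_pts, image_pts, shape):
    """Minimal translation-only Generalized Hough Transform:
    each image point votes for candidate template reference positions."""
    acc = np.zeros(shape, dtype=int)
    ref = template_pts.mean(axis=0)              # template reference point
    for p in image_pts:
        for t in template_pts:
            c = np.round(p - (t - ref)).astype(int)  # candidate reference cell
            if 0 <= c[0] < shape[0] and 0 <= c[1] < shape[1]:
                acc[c[0], c[1]] += 1
    return acc

# square of edge points shifted by (5, 7); points "of no interest" would be
# eliminated before this voting step, as in the modified algorithm
tpl = np.array([[0, 0], [0, 2], [2, 0], [2, 2]], dtype=float)
img = tpl + np.array([5.0, 7.0])
acc = ght_translation(tpl, img, (20, 20))
peak = np.unravel_index(np.argmax(acc), acc.shape)
print(peak)
```

The accumulator peak lands at the shifted reference point, and pre-filtering the voting points shrinks both the vote count and the chance of spurious secondary peaks.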
Abstract:
The acquisition and update of Geographic Information System (GIS) data are typically carried out using aerial or satellite imagery. Since new roads are usually linked to the georeferenced, pre-existing road network, the extraction of pre-existing road segments may provide good hypotheses for the updating process. This paper addresses the problem of extracting georeferenced roads from images and formulating hypotheses for the presence of new road segments. Our approach proceeds in three steps. First, salient points are identified and measured along roads from a map or GIS database by an operator or an automatic tool. These salient points are then projected onto image-space and the errors inherent in this process are calculated. In the second step, the georeferenced roads are extracted from the image using a dynamic programming (DP) algorithm. The projected salient points and corresponding error estimates are used as input for this extraction process. Finally, the road center axes extracted in the previous step are analyzed to identify potential new segments attached to the extracted, pre-existing ones. This analysis is performed using a combination of edge-based and correlation-based algorithms. In this paper we present our approach and early implementation results.
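The dynamic programming step can be illustrated with a minimal sketch: finding a minimum-cost path across a cost image, where low cost would correspond to road-like pixels. This is a generic DP formulation on a toy array, not the paper's actual algorithm or cost function:

```python
import numpy as np

def dp_path(cost):
    """Left-to-right dynamic programming: minimum-cost path through a cost
    image, advancing one column at a time with row steps of -1, 0 or +1."""
    rows, cols = cost.shape
    acc = cost.astype(float).copy()          # accumulated cost table
    back = np.zeros((rows, cols), dtype=int) # backpointers to previous column
    for j in range(1, cols):
        for i in range(rows):
            lo, hi = max(0, i - 1), min(rows, i + 2)
            k = lo + int(np.argmin(acc[lo:hi, j - 1]))
            acc[i, j] = cost[i, j] + acc[k, j - 1]
            back[i, j] = k
    path = [int(np.argmin(acc[:, -1]))]      # best end row, then backtrack
    for j in range(cols - 1, 0, -1):
        path.append(int(back[path[-1], j]))
    return path[::-1]

# a low-cost ridge along row 2 of a high-cost background
img = np.full((5, 6), 9.0)
img[2, :] = 1.0
print(dp_path(img))  # -> [2, 2, 2, 2, 2, 2]
```

In the road-extraction setting, the cost image would be derived from the grey values around the projected salient points, so the optimal path follows the road axis.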
Abstract:
The grinding process is usually the last finishing process of a precision component in the manufacturing industries. This process is used for manufacturing parts of different materials, so it demands results such as low roughness, control of dimensional and shape errors, and optimum tool life, with minimum cost and time. Damage to the parts is very expensive, since the previous processes and the grinding itself are rendered useless when the part is damaged at this stage. This work aims to investigate the efficiency of digital signal processing tools applied to acoustic emission signals in order to detect thermal damage in the grinding process. To accomplish this goal, experimental work was carried out in 15 runs on a surface grinding machine operating with an aluminum oxide grinding wheel and ABNT 1045 and VC131 steels. The acoustic emission signals were acquired from a fixed sensor placed on the workpiece holder. A high-sampling-rate acquisition system at 2.5 MHz was used to collect the raw acoustic emission signal instead of the root mean square value usually employed. In each test, the AE data were analyzed off-line, with the results compared to inspection of each workpiece for burn and other metallurgical anomalies. A number of statistical signal processing tools have been evaluated.
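The contrast between the raw signal and the usual RMS value can be sketched as a windowed RMS reduction. The signal below is synthetic and the window length arbitrary; the study itself works on raw 2.5 MHz data:

```python
import numpy as np

def windowed_rms(signal, win):
    """Reduce a raw high-rate AE signal to one RMS value per window,
    the statistic conventionally recorded instead of the raw waveform."""
    n = len(signal) // win
    blocks = np.asarray(signal[: n * win], dtype=float).reshape(n, win)
    return np.sqrt(np.mean(blocks ** 2, axis=1))

# constant-amplitude alternating signal: RMS equals the amplitude
sig = np.tile([0.5, -0.5], 8)        # 16 samples
print(windowed_rms(sig, 4))          # -> [0.5 0.5 0.5 0.5]
```

Working on the raw signal preserves spectral and statistical detail that this reduction discards, which is why the high-sampling-rate acquisition matters for burn detection.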
Abstract:
Optimized allocation of Phasor Measurement Units (PMUs) allows control, monitoring and accurate operation of electric power distribution systems, improving reliability and service quality. Good results are obtained for transmission systems using fault location techniques based on voltage measurements. Building on these techniques and performing optimized PMU allocation, it is possible to develop a fault locator for electric power distribution systems that provides accurate results. The PMU allocation problem has a combinatorial character, related to the number of devices that can be allocated and the candidate locations for allocation. A tabu search algorithm is proposed to carry out the PMU allocation. Applied to a real-life 141-bus urban distribution feeder, this technique significantly improved the fault location results. © 2004 IEEE.
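A minimal tabu search over device placements can be sketched as follows. The objective function and bus numbering are toy assumptions, not the paper's feeder model; the point is the move/tabu/aspiration structure:

```python
import random

def tabu_search(cost, n, k, iters=200, tenure=5, seed=0):
    """Minimal tabu search over k-device placements on n buses:
    neighborhood = swap one allocated bus for a free one; recently removed
    buses are tabu unless reinserting them improves the best solution."""
    rng = random.Random(seed)
    current = set(rng.sample(range(n), k))
    best, best_val = set(current), cost(current)
    tabu = {}                                 # bus -> iteration it stays tabu until
    for it in range(iters):
        moves = []
        for out in current:
            for inn in set(range(n)) - current:
                cand = (current - {out}) | {inn}
                val = cost(cand)
                if tabu.get(inn, -1) < it or val < best_val:  # aspiration rule
                    moves.append((val, out, inn, cand))
        if not moves:
            break
        val, out, inn, cand = min(moves, key=lambda m: m[0])  # best allowed move
        current = cand
        tabu[out] = it + tenure
        if val < best_val:
            best, best_val = set(cand), val
    return best, best_val

# toy objective: place 2 devices as close as possible to buses 3 and 9
target = {3, 9}
cost = lambda s: sum(min(abs(b - t) for t in s) for b in target)
sol, val = tabu_search(cost, n=12, k=2)
print(sorted(sol), val)
```

In the allocation problem proper, the cost function would score fault-location accuracy over the feeder rather than this toy distance sum.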
Abstract:
Systematic errors can have a significant effect on GPS observables. In medium and long baselines, the major systematic error sources are ionospheric and tropospheric refraction and GPS satellite orbit errors, whereas in short baselines multipath is more relevant. These errors degrade the accuracy of positioning accomplished with GPS, which is a critical problem for high-precision GPS positioning applications. Recently, a method has been suggested to mitigate these errors: the semiparametric model and the penalised least squares technique. It uses a natural cubic spline to model the errors as a function that varies smoothly in time. The systematic error functions, ambiguities and station coordinates are estimated simultaneously. As a result, the ambiguities and the station coordinates are estimated with better reliability and accuracy than with the conventional least squares method.
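The penalised least squares idea can be sketched with a discrete roughness penalty: a second-difference operator standing in for the natural-cubic-spline roughness integral, applied here to a synthetic series rather than GPS observables, and without the parametric (ambiguity/coordinate) part of the full semiparametric model:

```python
import numpy as np

def smooth_errors(y, lam=100.0):
    """Penalised least squares smoother: minimise
    ||y - g||^2 + lam * ||D2 @ g||^2, where D2 takes second differences --
    a discrete stand-in for the cubic-spline roughness penalty used to
    model slowly varying systematic errors."""
    n = len(y)
    D2 = np.diff(np.eye(n), 2, axis=0)       # (n-2) x n second-difference operator
    return np.linalg.solve(np.eye(n) + lam * D2.T @ D2, np.asarray(y, float))

# a purely linear series has zero roughness, so it passes through unchanged
t = np.arange(10.0)
print(np.allclose(smooth_errors(t), t))  # -> True
```

Larger values of lam force the estimated error function to vary more smoothly in time, which is the trade-off the penalised technique tunes.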