936 results for Data Acquisition Methods.
Abstract:
The purpose of this lecture is to review recent developments in data analysis, initialization and data assimilation. The development of three-dimensional multivariate schemes has been very timely because of their suitability for handling the many different types of observations available during FGGE. Great progress has been made in the initialization of global models with the aid of the non-linear normal mode technique. In spite of this progress, however, several fundamental problems remain unsatisfactorily solved. Of particular importance are the initialization of the divergent wind fields in the Tropics and the search for proper ways to initialize weather systems driven by non-adiabatic processes. The unsatisfactory ways in which such processes are currently initialized lead to excessively long spin-up times.
Abstract:
Remote sensing observations often have correlated errors, but the correlations are typically ignored in data assimilation for numerical weather prediction. The assumption of zero correlations is often used with data thinning methods, resulting in a loss of information. As operational centres move towards higher-resolution forecasting, there is a requirement to retain data providing detail on appropriate scales. Thus an alternative approach to dealing with observation error correlations is needed. In this article, we consider several approaches to approximating observation error correlation matrices: diagonal approximations, eigendecomposition approximations and Markov matrices. These approximations are applied in incremental variational assimilation experiments with a 1-D shallow water model using synthetic observations. Our experiments quantify analysis accuracy in comparison with a reference or ‘truth’ trajectory, as well as with analyses using the ‘true’ observation error covariance matrix. We show that it is often better to include an approximate correlation structure in the observation error covariance matrix than to incorrectly assume error independence. Furthermore, by choosing a suitable matrix approximation, it is feasible and computationally cheap to include error correlation structure in a variational data assimilation algorithm.
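As an illustration of the three families of approximation considered in this article, the sketch below (Python/NumPy; the grid size, length scale and SOAR-shaped "true" correlation are invented stand-ins, not the article's experimental configuration) builds a correlated observation error matrix and the three approximations: diagonal, truncated eigendecomposition, and first-order Markov.

```python
import numpy as np

# Hypothetical 1-D observation grid; a SOAR-shaped "true" correlation
# structure stands in for the real instrument error statistics.
n = 50                           # number of observations (assumed)
L = 5.0                          # correlation length scale (assumed)
d = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
R_true = (1.0 + d / L) * np.exp(-d / L)    # SOAR correlation matrix

# 1. Diagonal approximation: discard all off-diagonal correlations.
R_diag = np.diag(np.diag(R_true))

# 2. Eigendecomposition approximation (one common variant): keep the
#    K leading eigenpairs and replace the trailing eigenvalues by
#    their mean, so the matrix stays full rank and positive definite.
K = 10
vals, vecs = np.linalg.eigh(R_true)        # ascending eigenvalues
vals_approx = vals.copy()
vals_approx[:-K] = vals[:-K].mean()
R_eig = (vecs * vals_approx) @ vecs.T

# 3. Markov matrix: first-order autoregressive correlations rho^|i-j|.
rho = np.exp(-1.0 / L)
R_markov = rho ** d

for name, R in [("diag", R_diag), ("eig", R_eig), ("markov", R_markov)]:
    err = np.linalg.norm(R - R_true) / np.linalg.norm(R_true)
    print(f"{name:>6}: relative Frobenius error {err:.3f}")
```

The Markov form is attractive in a variational setting because the inverse of an AR(1) correlation matrix is tridiagonal, so applying it inside the cost function is cheap.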
Abstract:
Data assimilation methods which avoid the assumption of Gaussian error statistics are being developed for geoscience applications. We investigate how the relaxation of the Gaussian assumption affects the impact observations have within the assimilation process. The effect of non-Gaussian observation error (described by the likelihood) is compared with that of a non-Gaussian prior, studied in previously published work. The observation impact is measured in three ways: the sensitivity of the analysis to the observations, the mutual information, and the relative entropy. These three measures have all been studied in the case of Gaussian data assimilation and, in this case, have a known analytical form. It is shown that the analysis sensitivity can also be derived analytically when at least one of the prior or likelihood is Gaussian. This derivation shows an interesting asymmetry in the relationship between analysis sensitivity and analysis error covariance when the two different sources of non-Gaussian structure are considered (likelihood vs. prior). This is illustrated for a simple scalar case and used to infer the effect of the non-Gaussian structure on mutual information and relative entropy, which are more natural choices of metric in non-Gaussian data assimilation. It is concluded that approximating non-Gaussian error distributions as Gaussian can give significantly erroneous estimates of observation impact. The magnitude of the error depends not only on the nature of the non-Gaussian structure, but also on the metric used to measure the observation impact and the source of the non-Gaussian structure.
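For orientation, in the fully Gaussian (linear) case the three measures have well-known closed forms. The block below states them in standard variational notation (prior mean x^b and covariance B, analysis mean x^a and covariance A, observation operator H, observation error covariance R, gain K, state dimension n); this notation is assumed here and may differ from the article's own symbols.

```latex
% Analysis covariance and gain (linear-Gaussian estimation)
\mathbf{A} = (\mathbf{I} - \mathbf{K}\mathbf{H})\,\mathbf{B}, \qquad
\mathbf{K} = \mathbf{B}\mathbf{H}^{\mathsf{T}}
  \left(\mathbf{H}\mathbf{B}\mathbf{H}^{\mathsf{T}} + \mathbf{R}\right)^{-1}

% Sensitivity of the analysis, in observation space, to the observations
\mathbf{S} = \frac{\partial(\mathbf{H}\mathbf{x}^{a})}{\partial\mathbf{y}}
  = \mathbf{H}\mathbf{K}

% Mutual information: half the log ratio of prior to posterior volumes
\mathrm{MI} = \frac{1}{2}\,\ln\frac{\det\mathbf{B}}{\det\mathbf{A}}

% Relative entropy (KL divergence) of the analysis N(x^a, A) with
% respect to the prior N(x^b, B): a signal term plus a dispersion term
\mathrm{RE} = \frac{1}{2}\,(\mathbf{x}^{a}-\mathbf{x}^{b})^{\mathsf{T}}
  \mathbf{B}^{-1}(\mathbf{x}^{a}-\mathbf{x}^{b})
  + \frac{1}{2}\left[\ln\frac{\det\mathbf{B}}{\det\mathbf{A}}
  + \operatorname{tr}\!\left(\mathbf{A}\mathbf{B}^{-1}\right) - n\right]
```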
Abstract:
Astronomy has evolved almost exclusively through the use of spectroscopic and imaging techniques, operated separately. With the development of modern technologies, it is possible to obtain data cubes that combine both techniques simultaneously, producing images with spectral resolution. Extracting information from them can be quite complex, and hence the development of new methods of data analysis is desirable. We present a method for the analysis of data cubes (data from single-field observations, containing two spatial dimensions and one spectral dimension) that uses Principal Component Analysis (PCA) to express the data in a form of reduced dimensionality, facilitating efficient information extraction from very large data sets. PCA transforms the system of correlated coordinates into a system of uncorrelated coordinates ordered by principal components of decreasing variance. The new coordinates are referred to as eigenvectors, and the projections of the data on to these coordinates produce images we call tomograms. The association of the tomograms (images) with the eigenvectors (spectra) is important for the interpretation of both. The eigenvectors are mutually orthogonal, and this property is fundamental for their handling and interpretation. When the data cube contains objects that present uncorrelated physical phenomena, the orthogonality of the eigenvectors may be instrumental in separating and identifying them. By handling eigenvectors and tomograms, one can enhance features, extract noise, compress data, extract spectra, etc. We applied the method, for illustration purposes only, to the central region of the low-ionization nuclear emission region (LINER) galaxy NGC 4736, and demonstrate that it has a type 1 active nucleus, not known before. Furthermore, we show that this nucleus is displaced from the centre of the stellar bulge.
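A minimal sketch of the decomposition described above (Python/NumPy; the cube dimensions and the random stand-in data are invented, and this is not the authors' pipeline): each pixel's spectrum becomes a row of a data matrix, the eigenvectors of its covariance are the eigenspectra, and the projections, reshaped back into images, are the tomograms.

```python
import numpy as np

# Hypothetical data cube: nx x ny spatial pixels, nl spectral channels.
nx, ny, nl = 30, 30, 200
cube = np.random.default_rng(0).normal(size=(nx, ny, nl))  # stand-in data

# Flatten the spatial dimensions: each row is the spectrum of one pixel.
X = cube.reshape(nx * ny, nl)
X = X - X.mean(axis=0)                 # remove the mean spectrum

# Eigendecomposition of the spectral covariance matrix.
cov = (X.T @ X) / (X.shape[0] - 1)
vals, vecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
order = np.argsort(vals)[::-1]         # reorder by decreasing variance
vals, vecs = vals[order], vecs[:, order]

# The eigenvectors are "eigenspectra"; projecting the data on to them
# and reshaping gives one tomogram (image) per principal component.
tomograms = (X @ vecs).reshape(nx, ny, nl)

print("variance in first 3 components: "
      f"{vals[:3].sum() / vals.sum():.1%}")
```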
Abstract:
In this paper a new parametric method to deal with discrepant experimental results is developed. The method is based on the fit of a probability density function to the data. The paper also compares the characteristics of different methods used to deduce recommended values and uncertainties from a discrepant set of experimental data. The methods are applied to the published half-lives of ¹³⁷Cs and ⁹⁰Sr, and special emphasis is given to the deduced confidence intervals. The results are analysed with respect to two fundamental properties expected of an experimental result: the probability content of the confidence intervals and the statistical consistency between different recommended values. The recommended values and uncertainties for the ¹³⁷Cs and ⁹⁰Sr half-lives are 10,984 (24) days and 10,523 (70) days, respectively.
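For context, a common conventional procedure for deducing a recommended value from a discrepant set (whether it is among the specific methods compared in the paper is not stated here) is an inverse-variance weighted mean with a Birge-ratio inflation of the uncertainty. The sketch below uses invented placeholder measurements, not the evaluated half-life data.

```python
import numpy as np

# Invented placeholder measurements (value, 1-sigma uncertainty);
# NOT the evaluated 137Cs or 90Sr data.
values = np.array([10970.0, 11010.0, 10940.0, 11025.0])
sigmas = np.array([15.0, 20.0, 25.0, 18.0])

w = 1.0 / sigmas**2
mean = np.sum(w * values) / np.sum(w)        # weighted mean
u = np.sqrt(1.0 / np.sum(w))                 # internal uncertainty

# Birge ratio: reduced chi-squared of the set about the weighted mean.
# If > 1, the set is discrepant and the uncertainty is inflated.
nu = len(values) - 1
birge = np.sqrt(np.sum(w * (values - mean)**2) / nu)
u_rec = u * max(1.0, birge)

print(f"recommended value: {mean:.0f} +/- {u_rec:.0f} days")
```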
Abstract:
In this work, mixed oxides were synthesized by two methods: the polymeric precursor method and gel combustion. Lanthanum nickelate, lanthanum cobaltate and lanthanum cuprate were synthesized by the polymeric precursor method, treated at 300 °C for 2 hours and calcined at 800 °C for 6 h in an air atmosphere. The gel-combustion method, using urea and citric acid as fuels, produced for each fuel the following oxides: lanthanum ferrate, lanthanum cobaltate and lanthanum cobalt ferrate, which were submitted to a microwave-assisted combustion process for a maximum of 10 min. The samples were characterized by thermogravimetric analysis, X-ray diffraction, N2 physisorption (BET method) and scanning electron microscopy. The catalytic depolymerization reactions of poly(methyl methacrylate) were performed in a silica reactor with a catalytic and heating system, equipped with a data acquisition system and a gas chromatograph. Among the catalysts synthesized by the polymeric precursor method, lanthanum cuprate was the best for the depolymerization of the recycled polymer, reaching 100% conversion in the shortest time (554 min), while lanthanum nickelate was the best for the pure polymer, with 100% conversion in the shortest time (314 min). For the gel-combustion method using urea as fuel, the best result for the pure polymer was obtained with lanthanum ferrate, with 100% conversion in the shortest time (657 min), and for the recycled polymer with lanthanum cobaltate, with 100% conversion in the shortest time (779 min). Using citric acid, the best result for the pure polymer was obtained with lanthanum ferrate, with 100% conversion in the shortest time (821 min), and for the recycled polymer with lanthanum ferrate, with 98.28% conversion in the shortest time (635 min).
Abstract:
The D0 experiment enjoyed a very successful data-collection run at the Fermilab Tevatron collider between 1992 and 1996. Since then, the detector has been upgraded to take advantage of improvements to the Tevatron and to enhance its physics capabilities. We describe the new elements of the detector, including the silicon microstrip tracker, central fiber tracker, solenoidal magnet, preshower detectors, forward muon detector, and forward proton detector. The uranium/liquid-argon calorimeters and central muon detector, remaining from Run 1, are discussed briefly. We also present the associated electronics, triggering, and data acquisition systems, along with the design and implementation of software specific to D0.
Abstract:
The Compact Muon Solenoid (CMS) detector is described. The detector operates at the Large Hadron Collider (LHC) at CERN. It was conceived to study proton-proton (and lead-lead) collisions at a centre-of-mass energy of 14 TeV (5.5 TeV nucleon-nucleon) and at luminosities up to 10³⁴ cm⁻² s⁻¹ (10²⁷ cm⁻² s⁻¹). At the core of the CMS detector sits a high-magnetic-field and large-bore superconducting solenoid surrounding an all-silicon pixel and strip tracker, a lead-tungstate scintillating-crystal electromagnetic calorimeter, and a brass-scintillator sampling hadron calorimeter. The iron yoke of the flux-return is instrumented with four stations of muon detectors covering most of the 4π solid angle. Forward sampling calorimeters extend the pseudorapidity coverage to high values (|η| ≤ 5), assuring very good hermeticity. The overall dimensions of the CMS detector are a length of 21.6 m, a diameter of 14.6 m and a total weight of 12,500 t.
Abstract:
One way to verify the efficiency of methods for estimating reference evapotranspiration (ETo) is to compare them with a standard method. The aim of this work is to compare three ETo estimation methods, Solar Radiation (RS), Makkink (MAK) and Class A Pan (TCA), against the Penman-Monteith (PM) method, in two distinct periods of the development stages of a citrus crop, using fortnightly mean data for the winter-spring and summer-autumn periods. The research was carried out on a citrus farm in Araraquara, SP, Brazil, where an automated weather station and a Class A pan were installed. The automated weather station provided measurements of global solar radiation, net radiation, air temperature, relative humidity and wind speed. Regression analysis indicates that, for the TCA method, the regression model y = bx can be used, where y represents EToPM and x EToTCA. For the other methods analysed, the most suitable model was y = bx + a. The results of this study show that the TCA method overestimated ETo by 26% in the summer-autumn period and by 24% in the winter-spring period. The MAK method underestimated ETo in both periods, whereas the RS method overestimated it.
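For illustration, the two regression models mentioned above can be fitted by ordinary least squares. The sketch below (Python/NumPy, with invented fortnightly ETo pairs standing in for the station data) fits the forced-through-origin model y = bx, used for the TCA method, and the full model y = bx + a, used for the other methods.

```python
import numpy as np

# Invented fortnightly ETo pairs (mm/day): x = alternative method,
# y = Penman-Monteith reference. Stand-ins for the Araraquara data.
x = np.array([2.1, 2.8, 3.5, 4.2, 4.9, 5.6])
y = np.array([1.8, 2.3, 2.7, 3.3, 3.9, 4.5])

# Model 1: y = b x (no intercept), as used for the Class A pan method.
b1 = np.linalg.lstsq(x[:, None], y, rcond=None)[0][0]

# Model 2: y = b x + a, as used for the other methods.
A = np.column_stack([x, np.ones_like(x)])
(b2, a2), *_ = np.linalg.lstsq(A, y, rcond=None)

print(f"y = {b1:.3f} x")
print(f"y = {b2:.3f} x + {a2:.3f}")
```

A slope b below 1 in the first model indicates that the alternative method overestimates ETo relative to PM, which is how the 24-26% pan overestimation reads off the fitted coefficient.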
Abstract:
The aim of this study is to verify the influence of the data-collection time of GPS receivers on height determination. The altimetric survey was carried out using the static relative positioning method, with two single-frequency GPS receivers, for different occupation times (30, 15, 10 and 5 minutes) at a recording rate of two seconds. The heights obtained with the GPS receivers were compared with heights determined by trigonometric levelling with a total station. The results showed that occupation times shorter than 30 minutes (15, 10 and 5 minutes) are also adequate for obtaining centimetre-level differences in the heights analysed. Even considering the precision of conventional topographic methods, this study demonstrates that the Global Positioning System (GPS) can be used accurately in altimetric surveys, provided that the geoid undulation is modelled.
Abstract:
An automatic procedure with a high current-density anodic electrodissolution unit (HDAE) is proposed for the determination of aluminium, copper and zinc in non-ferrous alloys by flame atomic absorption spectrometry, based on direct solid analysis. It consists of solenoid valve-based commutation in a flow-injection system for on-line sample electrodissolution and calibration with one multi-element standard, an electrolytic cell equipped with two electrodes (a silver needle acting as cathode and the sample as anode), and an intelligent unit. The latter is assembled in a PC-compatible microcomputer for instrument control and for data acquisition and processing. General management of the process is achieved with software written in Pascal. Electrolyte compositions, flow rates, commutation times, applied current and electrolysis time were investigated. A 0.5 mol l⁻¹ HNO3 solution was selected as the electrolyte and 300 A cm⁻² as the continuous current pulse. The performance of the proposed system was evaluated by analysing aluminium in Al-alloy samples, and copper and zinc in brass and bronze samples, respectively. The system handles about 50 samples per hour. Results are precise (R.S.D. < 2%) and in agreement with those obtained by ICP-AES and spectrophotometry at the 95% confidence level.
Abstract:
Background: Obstructive sleep apnea (OSA) is a respiratory disease characterized by the collapse of the extrathoracic airway and has important social implications related to accidents and cardiovascular risk. The main objective of the present study was to investigate whether the drop in expiratory flow and the volume expired in 0.2 s during the application of negative expiratory pressure (NEP) are associated with the presence and severity of OSA in a population of professional interstate bus drivers who travel medium and long distances.
Methods/Design: An observational, analytic study will be carried out involving adult male subjects of an interstate bus company. Those who agree to participate will undergo a detailed patient history and a physical examination involving determination of blood pressure, anthropometric data, circumference measurements (hips, waist and neck), tonsils and the Mallampati index. Moreover, specific questionnaires addressing sleep apnea and excessive daytime sleepiness will be administered. Data acquisition will be completely anonymous. Following the medical examination, the participants will undergo spirometry, the NEP test and standard overnight polysomnography. The NEP test is performed through the administration of negative pressure at the mouth during expiration. It is a practical test performed while awake and requires little cooperation from the subject. In the absence of expiratory flow limitation, the increase in the pressure gradient between the alveoli and the open upper airway caused by NEP results in an increase in expiratory flow.
Discussion: Despite the abundance of scientific evidence, OSA is still underdiagnosed in the general population. In addition, diagnostic procedures are expensive, and predictive criteria are still unsatisfactory. Because increased upper airway collapsibility is one of the main determinants of OSA, the response to the application of NEP could be a predictor of this disorder. With the enrollment of this study protocol, the expectation is to find predictive NEP values for different degrees of OSA, in order to contribute toward an early diagnosis of this condition and reduce its impact and complications among commercial interstate bus drivers.
Abstract:
Concept drift is a problem of increasing importance in machine learning and data mining. The data sets under analysis are no longer only static databases, but also data streams in which concepts and data distributions may not be stable over time. However, most learning algorithms produced so far are based on the assumption that data come from a fixed distribution, so they are not suitable for handling concept drift. Moreover, some concept drift applications require fast response, which means an algorithm must always be (re)trained with the latest available data. But the process of labeling data is usually expensive and/or time consuming compared to unlabeled data acquisition, so only a small fraction of the incoming data may be effectively labeled. Semi-supervised learning methods may help in this scenario, as they use both labeled and unlabeled data in the training process. However, most of them are also based on the assumption that the data are static. Therefore, semi-supervised learning with concept drift is still an open challenge in machine learning. Recently, a particle competition and cooperation approach was used to perform graph-based semi-supervised learning on static data. In this paper, we extend that approach to handle data streams and concept drift. The result is a passive algorithm using a single classifier, which naturally adapts to concept changes without any explicit drift-detection mechanism. Its built-in mechanisms provide a natural way of learning from new data, gradually forgetting older knowledge as older labeled data items become less influential on the classification of newer data items. Computer simulations are presented, showing the effectiveness of the proposed method.
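The particle competition and cooperation mechanism itself is too involved for an abstract-sized example, but the passive-forgetting behaviour described above can be illustrated with a toy (an invented illustration, not the authors' algorithm): a stream classifier in which the weight of older labeled items decays exponentially, so newer evidence gradually wins without any explicit drift detector.

```python
import numpy as np

class DecayingKNN:
    """Toy stream classifier: weighted nearest-neighbour voting over
    labeled items whose influence decays exponentially with age
    (passive drift handling; NOT the particle competition method)."""

    def __init__(self, decay=0.95):
        self.decay = decay
        self.items = []            # (x, label, weight) triples

    def learn(self, x, label):
        # Age the existing knowledge, then store the new labeled item.
        self.items = [(xi, yi, wi * self.decay)
                      for xi, yi, wi in self.items]
        self.items.append((np.asarray(x, dtype=float), label, 1.0))

    def predict(self, x):
        x = np.asarray(x, dtype=float)
        # Score each class by age-weighted inverse-distance evidence.
        scores = {}
        for xi, yi, wi in self.items:
            d = np.linalg.norm(x - xi) + 1e-9
            scores[yi] = scores.get(yi, 0.0) + wi / d
        return max(scores, key=scores.get)

# The concept drifts: the same region of feature space changes class,
# and the classifier follows it as old evidence fades.
clf = DecayingKNN(decay=0.8)
for _ in range(20):
    clf.learn([0.0, 0.0], "A")     # old concept
for _ in range(20):
    clf.learn([0.1, 0.0], "B")     # new concept in the same region
print(clf.predict([0.05, 0.0]))    # -> "B" once the drift dominates
```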