953 results for DATA QUALITY
Abstract:
OPAL is an English national programme that takes scientists into the community to investigate environmental issues. Biological monitoring plays a pivotal role, covering: i) soil and earthworms; ii) air, lichens and tar spot on sycamore; iii) water and aquatic invertebrates; iv) biodiversity and hedgerows; v) climate, clouds and thermal comfort. Each survey has been developed by an interdisciplinary team and tested by the voluntary, statutory and community sectors. Data are submitted via the web and instantly mapped. Preliminary results are presented, together with a discussion of data quality and uncertainty. Communities also investigate local pollution issues, ranging from nitrogen deposition on heathlands to traffic emissions on roadside vegetation. Over 200,000 people have participated so far, including over 1000 schools and 1000 voluntary groups. Benefits include a substantial and growing database on biodiversity and habitat condition, much of it from previously unsampled sites, particularly in urban areas, and a more engaged public.
Abstract:
We review the scientific literature since the 1960s to examine the evolution of modeling tools and observations that have advanced understanding of global stratospheric temperature changes. Observations show overall cooling of the stratosphere during the period for which they are available (since the late 1950s and late 1970s from radiosondes and satellites, respectively), interrupted by episodes of warming associated with volcanic eruptions, and superimposed on variations associated with the solar cycle. There has been little global mean temperature change since about 1995. The temporal and vertical structure of these variations is reasonably well explained by models that include changes in greenhouse gases, ozone, volcanic aerosols, and solar output, although there are significant uncertainties in the temperature observations and regarding the nature and influence of past changes in stratospheric water vapor. As a companion to a recent WIREs review of tropospheric temperature trends, this article identifies areas of commonality and contrast between the tropospheric and stratospheric trend literature. For example, the increased attention over time to radiosonde and satellite data quality has contributed to better characterization of uncertainty in observed trends both in the troposphere and in the lower stratosphere, and has highlighted the relative lack of attention to observations in the middle and upper stratosphere. In contrast to the relatively unchanging expectations of surface and tropospheric warming primarily induced by greenhouse gas increases, stratospheric temperature change expectations have arisen from experiments with a wider variety of model types, showing more complex trend patterns associated with a greater diversity of forcing agents.
Abstract:
The long observational record is critical to our understanding of the Earth’s climate, but most observing systems were not developed with a climate objective in mind. As a result, tremendous efforts have gone into assessing and reprocessing the data records to improve their usefulness in climate studies. The purpose of this paper is both to review recent progress in reprocessing and reanalyzing observations and to summarize the challenges that must be overcome in order to improve our understanding of climate and variability. Reprocessing improves data quality through closer scrutiny and improved retrieval techniques for individual observing systems, while reanalysis merges many disparate observations with models through data assimilation; both aim to provide a climatology of Earth processes. Many challenges remain, such as tracking the improvement of processing algorithms and limited spatial coverage. Reanalyses have fostered significant research, yet reliable global trends in many physical fields are not yet attainable, despite significant advances in data assimilation and numerical modeling. Oceanic reanalyses have made significant advances in recent years, but are discussed here only in terms of progress toward integrated Earth system analyses. Climate data sets are generally adequate for process studies and large-scale climate variability. Communication of the strengths, limitations and uncertainties of reprocessed observations and reanalysis data, not only among the community of developers but also with the extended research community, including new generations of researchers and decision makers, is crucial for further advancement of the observational data records. It must be emphasized that careful investigation of the data and processing methods is required to use the observations appropriately.
Abstract:
This special issue is focused on the assessment of algorithms for the observation of Earth’s climate from environmental satellites. Climate data records derived by remote sensing are increasingly a key source of insight into the workings of and changes in Earth’s climate system. Producers of data sets must devote considerable effort and expertise to maximise the true climate signals in their products and minimise effects of data processing choices and changing sensors. A key choice is the selection of algorithm(s) for classification and/or retrieval of the climate variable. Within the European Space Agency Climate Change Initiative, science teams undertook systematic assessment of algorithms for a range of essential climate variables. The papers in the special issue report some of these exercises (for ocean colour, aerosol, ozone, greenhouse gases, clouds, soil moisture, sea surface temperature and glaciers). The contributions show that assessment exercises must be designed with care, considering issues such as the relative importance of different aspects of data quality (accuracy, precision, stability, sensitivity, coverage, etc.), the availability and degree of independence of validation data and the limitations of validation in characterising some important aspects of data (such as long-term stability or spatial coherence). As well as requiring a significant investment of expertise and effort, systematic comparisons are found to be highly valuable. They reveal the relative strengths and weaknesses of different algorithmic approaches under different observational contexts, and help ensure that scientific conclusions drawn from climate data records are not influenced by observational artifacts, but are robust.
Abstract:
Variations in the spatial configuration of the interstellar magnetic field (ISMF) near the Sun can be constrained by comparing the ISMF direction at the heliosphere found from the Interstellar Boundary Explorer (IBEX) spacecraft observations of a "Ribbon" of energetic neutral atoms (ENAs), with the ISMF direction derived from optical polarization data for stars within ~40 pc. Using interstellar polarization observations toward ~30 nearby stars within ~90° of the heliosphere nose, we find that the best fits to the polarization position angles are obtained for a magnetic pole directed toward ecliptic coordinates of λ, β ~ 263°, 37° (or galactic coordinates of l, b ~ 38°, 23°), with uncertainties of ±35° based on the broad minimum of the best fits and the range of data quality. This magnetic pole is 33° from the magnetic pole that is defined by the center of the arc of the ENA Ribbon. The IBEX ENA Ribbon is seen in sight lines that are perpendicular to the ISMF as it drapes over the heliosphere. The similarity of the polarization and Ribbon directions for the local ISMF suggests that the local field is coherent over scale sizes of tens of parsecs. The ISMF vector direction is nearly perpendicular to the flow of local interstellar material (ISM) through the local standard of rest, supporting a possible local ISM origin related to an evolved expanding magnetized shell. The local ISMF direction is found to have a curious geometry with respect to the cosmic microwave background dipole moment.
Abstract:
Since the last century, the Six Sigma strategy has been the focus of study for many researchers, and among its findings is the importance of data processing for error-free manufacturing. This work therefore focuses on the importance of data quality in an enterprise. To that end, a descriptive-exploratory study of seventeen compounding pharmacies in Rio Grande do Norte was undertaken with the objective of building a baseline structural model to classify enterprises according to their databases. Statistical methods such as cluster and discriminant analyses were applied to a questionnaire built for this specific study. The data collected identified four groups, with strong and weak characteristics for each group that differentiate them from one another.
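To illustrate the cluster-then-discriminant workflow described above, the short Python sketch below groups synthetic questionnaire scores with k-means and then checks the separation of the groups with a linear discriminant analysis. The scores, the number of questionnaire items, the choice of four clusters and the use of scikit-learn are assumptions made only for this example; they do not reproduce the study's instrument or results.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(42)

# Hypothetical questionnaire scores (1-5) for 17 pharmacies on 8 data-quality items;
# purely synthetic stand-ins for the study's real questionnaire data.
scores = rng.integers(1, 6, size=(17, 8)).astype(float)

# Step 1: cluster analysis groups the pharmacies by their score profiles.
groups = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)

# Step 2: discriminant analysis checks how well the scores separate the groups.
lda = LinearDiscriminantAnalysis().fit(scores, groups)
print("group assignments        :", groups)
print("reclassification accuracy:", lda.score(scores, groups))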
Abstract:
The Brazilian Geodetic Network started to be established in the early 1940s, employing classical surveying methods such as triangulation and trilateration. With the introduction of satellite positioning systems such as TRANSIT and GPS, the network was densified. The data were adjusted using a variety of methods, introducing distortions in the network that need to be understood. In this work, we analyze and interpret case studies in an attempt to understand the distortions in the Brazilian network. For each case, we performed the network adjustment using the GHOST software suite. The results show that the distortion is least sensitive to the removal of invar baselines in the classical network. The network would be more affected by the absence of Laplace stations and Doppler control points, with differences of up to 4.5 m.
Abstract:
The CMS High-Level Trigger (HLT) is responsible for ensuring that data samples with potentially interesting events are recorded with high efficiency and good quality. This paper gives an overview of the HLT and focuses on its commissioning using cosmic rays. The selection of triggers that were deployed is presented and the online grouping of triggered events into streams and primary datasets is discussed. Tools for online and offline data quality monitoring for the HLT are described, and the operational performance of the muon HLT algorithms is reviewed. The average time taken for the HLT selection and its dependence on detector and operating conditions are presented. The HLT performed reliably and helped provide a large dataset. This dataset has proven to be invaluable for understanding the performance of the trigger and the CMS experiment as a whole. © 2010 IOP Publishing Ltd and SISSA.
Abstract:
Acoustic Doppler current profilers are currently the main option for flow measurement and hydrodynamic monitoring of streams, replacing traditional methods. The spread of such equipment is mainly due to its operational advantages, which range from faster measurements to the greater detail and amount of information generated about the hydrodynamics of hydrometric sections. As with traditional methods and equipment, the use of acoustic Doppler profilers should be guided by the pursuit of data quality, since these data are the basis for the design and management of water resources structures and systems. In this sense, the paper presents an analysis of the measurement uncertainties of a hydrometric campaign carried out on the Sapucaí River (Piranguinho-MG) using two different Doppler profilers - a 1200 kHz Rio Grande ADCP and a Qmetrix Qliner. Ten measurements were performed consecutively with each instrument, following quality protocols from the literature, and a Type A uncertainty analysis (statistical analysis of several independent observations of the input quantity under the same conditions) was then carried out. The ADCP and Qliner measurements presented standard uncertainties of 0.679% and 0.508% of their respective means. These results are satisfactory and acceptable when compared with references in the literature, indicating that the use of Doppler profilers is valid for the expansion and upgrading of streamflow measurement networks and the generation of hydrological data.
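As an illustration of the Type A evaluation mentioned above, the Python sketch below computes the standard uncertainty of the mean from a set of repeated discharge measurements and expresses it as a percentage of the mean. The ten values and variable names are invented for the example; they are not the Sapucaí River data.

import numpy as np

# Hypothetical repeated discharge measurements (m^3/s) from a single instrument.
discharges = np.array([52.1, 51.8, 52.4, 52.0, 51.9, 52.3, 52.2, 51.7, 52.5, 52.0])

n = discharges.size
mean = discharges.mean()
s = discharges.std(ddof=1)             # sample standard deviation
u_type_a = s / np.sqrt(n)              # Type A standard uncertainty of the mean
u_relative = 100.0 * u_type_a / mean   # as a percentage of the mean

print(f"mean discharge     : {mean:.3f} m^3/s")
print(f"Type A uncertainty : {u_type_a:.3f} m^3/s ({u_relative:.3f} % of the mean)")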
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Pós-graduação em Alimentos e Nutrição - FCFAR
Abstract:
Pós-graduação em Geociências e Meio Ambiente - IGCE
Abstract:
During the process of knowledge extraction from databases, some problems may be encountered, such as the absence of a given value of an attribute. The occurrence of this problem can have harmful effects on the final results of the process, since it directly affects the quality of the data submitted to a machine learning algorithm. In the literature, several proposals have been put forward to circumvent this damage; among them is data imputation, which estimates a plausible value to replace the missing one. Following this line of solutions to the missing-value problem, several works were analyzed and some observations were made, such as the limited use of synthetic datasets that simulate the main missing-data mechanisms and a recent trend toward the use of bio-inspired algorithms to treat the problem. Based on this scenario, this dissertation presents a data imputation method based on particle swarm optimization, still little explored in the area, and applies it to synthetically generated datasets that consider the main missing-data mechanisms: MAR, MCAR and NMAR. The results obtained when comparing different configurations of the method with two other well-known methods in the area (KNNImpute and SVMImpute) are promising for its use in the treatment of missing values, since it achieved the best results in most of the experiments performed.
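For illustration only, the sketch below implements a generic global-best particle swarm optimization over the vector of missing cells of a synthetic dataset with values removed completely at random (MCAR). The fitness function (distance of each completed row to its nearest fully observed row), the swarm settings and the data are assumptions made here to obtain a runnable example; they do not reproduce the formulation proposed in the dissertation.

import numpy as np

rng = np.random.default_rng(0)

# Complete synthetic data, then remove values completely at random (MCAR).
X_true = rng.normal(size=(200, 4)) @ rng.normal(size=(4, 4))
mask = rng.random(X_true.shape) < 0.1            # ~10% of cells missing
X = X_true.copy()
X[mask] = np.nan

def fitness(candidate, X, mask):
    # Complete the matrix with one candidate vector of imputed values and measure
    # how far each completed row lies from its nearest fully observed row.
    Xc = X.copy()
    Xc[mask] = candidate
    complete_rows = Xc[~mask.any(axis=1)]
    return sum(np.linalg.norm(complete_rows - Xc[i], axis=1).min()
               for i in np.where(mask.any(axis=1))[0])

# Plain global-best PSO over the vector of missing cells (illustrative settings).
n_missing = int(mask.sum())
n_particles, iterations, w, c1, c2 = 20, 50, 0.7, 1.5, 1.5
start = np.broadcast_to(np.nanmean(X, axis=0), X.shape)[mask]   # column means
pos = start + rng.normal(scale=0.5, size=(n_particles, n_missing))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([fitness(p, X, mask) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iterations):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([fitness(p, X, mask) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

X_imputed = X.copy()
X_imputed[mask] = gbest
rmse = np.sqrt(np.mean((X_imputed[mask] - X_true[mask]) ** 2))
print(f"RMSE of the PSO-imputed values against the true values: {rmse:.3f}")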
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)