16 results for datasets

in CentAUR: Central Archive at the University of Reading - UK


Relevance: 20.00%

Abstract:

Data from four recent reanalysis projects [ECMWF, NCEP-NCAR, NCEP-Department of Energy (DOE), NASA] have been diagnosed at the scale of synoptic weather systems using an objective feature tracking method. The tracking statistics indicate that, overall, the reanalyses correspond very well in the Northern Hemisphere (NH) lower troposphere, although differences in the spatial distribution of mean intensities show that the ECMWF reanalysis is systematically stronger in the main storm track regions but weaker around major orographic features. A direct comparison of the track ensembles indicates a number of systems with a broad range of intensities that compare well among the reanalyses. In addition, a number of small-scale weak systems are found that have no correspondence among the reanalyses or that only correspond upon relaxing the matching criteria, indicating possible differences in location and/or temporal coherence. These are distributed throughout the storm tracks, particularly in the regions known for small-scale activity, such as secondary development regions and the Mediterranean. For the Southern Hemisphere (SH), agreement is found to be generally less consistent in the lower troposphere, with significant differences in both track density and mean intensity. The systems that correspond between the various reanalyses are considerably reduced, and those that do not match span a broad range of storm intensities. Relaxing the matching criteria indicates that there is a larger degree of uncertainty in both the location of systems and their intensities compared with the NH. At upper-tropospheric levels, significant differences in the level of activity occur between the ECMWF reanalysis and the other reanalyses in both the NH and SH winters. This occurs due to a lack of coherence in the apparent propagation of the systems in ERA15 and appears most acute above 500 hPa. This is probably due to the use of optimal interpolation data assimilation in ERA15. Also shown are results based on using the same techniques to diagnose tropical easterly wave activity. Results indicate that the wave activity is sensitive not only to the resolution and assimilation methods used but also to the model formulation.
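
The matching step described above can be sketched in a few lines. This is not the objective feature-tracking method used in the study, only an illustration under assumed thresholds (max_mean_sep_km and min_overlap are invented parameters) of how tracks from two reanalyses might be declared the same system; "relaxing the matching criteria" then corresponds to loosening these thresholds.

```python
import numpy as np

def great_circle_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two points given in degrees."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dlon = np.radians(lon2 - lon1)
    cos_angle = np.sin(p1) * np.sin(p2) + np.cos(p1) * np.cos(p2) * np.cos(dlon)
    return 6371.0 * np.arccos(np.clip(cos_angle, -1.0, 1.0))

def tracks_match(track_a, track_b, max_mean_sep_km=500.0, min_overlap=0.6):
    """Declare two tracks the same system if they overlap in time for a
    minimum fraction of their points and their mean separation over the
    common times stays below a threshold. Tracks map time -> (lat, lon)."""
    common = sorted(set(track_a) & set(track_b))
    if len(common) < min_overlap * min(len(track_a), len(track_b)):
        return False
    seps = [great_circle_km(*track_a[t], *track_b[t]) for t in common]
    return float(np.mean(seps)) <= max_mean_sep_km
```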

Relevance: 20.00%

Abstract:

RothC and Century are two of the most widely used soil organic matter (SOM) models. However, there are few examples of specific parameterisation of these models for environmental conditions in East Africa. The aim of this study was, therefore, to evaluate the ability of RothC and Century to estimate changes in soil organic carbon (SOC) resulting from varying land use/management practices for the climate and soil conditions found in Kenya. The study used climate, soils and crop data from a long-term experiment (1976-2001) carried out at the Kabete site at the Kenya National Agricultural Research Laboratories (NARL, located in a semi-humid region) and data from a 13-year experiment carried out in Machang'a (Embu District, located in a semi-arid region). The NARL experiment included various fertiliser (0, 60 and 120 kg of N and P2O5 ha(-1)), farmyard manure (FYM - 5 and 10 t ha(-1)) and plant residue treatments, in a variety of combinations. The Machang'a experiment involved a fertiliser (51 kg N ha(-1)) and a FYM (0, 5 and 10 t ha(-1)) treatment with both monocropping and intercropping. At Kabete both models showed a fair to good fit to measured data, although Century simulations for treatments with high levels of FYM were better than those without. At the Machang'a site with monocrops, both models showed a fair to good fit to measured data for all treatments. However, the fit of both models (especially RothC) to measured data for intercropping treatments at Machang'a was much poorer. Further model development for intercrop systems is recommended. Both models can be useful tools in soil C predictions, provided time series of measured soil C and crop production data are available for validating model performance against local or regional agricultural crops.
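
Both RothC and Century share the same core assumption: soil carbon pools decompose by first-order kinetics, with the decay rate scaled by climate-dependent modifiers. The sketch below is a toy single-pool update only; the parameter k, the rate modifier and the monthly step are placeholders, not the calibrated pool structures of either model.

```python
def soc_pool_step(carbon, k, rate_modifier, plant_input, fym_input, dt=1.0 / 12):
    """One monthly update of a generic first-order soil-carbon pool:
    dC/dt = inputs - k * m * C, where m combines temperature and moisture
    rate modifiers in the style of RothC/Century. Units: t C per hectare."""
    decomposition = k * rate_modifier * carbon * dt
    return carbon + (plant_input + fym_input) * dt - decomposition
```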

Relevance: 20.00%

Abstract:

Liquid chromatography-mass spectrometry (LC-MS) datasets can be compared or combined following chromatographic alignment. Here we describe a simple solution to the specific problem of aligning one LC-MS dataset and one LC-MS/MS dataset, acquired on separate instruments from an enzymatic digest of a protein mixture, using feature extraction and a genetic algorithm. First, the LC-MS dataset is searched within a few ppm of the calculated theoretical masses of peptides confidently identified by LC-MS/MS. A piecewise linear function is then fitted to these matched peptides using a genetic algorithm with a fitness function that is insensitive to incorrect matches but sufficiently flexible to adapt to the discrete shifts common when comparing LC datasets. We demonstrate the utility of this method by aligning ion trap LC-MS/MS data with accurate LC-MS data from an FTICR mass spectrometer and show how hybrid datasets can improve peptide and protein identification by combining the speed of the ion trap with the mass accuracy of the FTICR, similar to using a hybrid ion trap-FTICR instrument. We also show that the high resolving power of FTICR can improve precision and linear dynamic range in quantitative proteomics. The alignment software, msalign, is freely available as open source.
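
A minimal sketch of the fitness idea (not the msalign source; the knot handling, tolerance and names are assumptions): the genetic algorithm evolves the y-coordinates of the knots of a piecewise linear retention-time mapping, and the fitness counts matched peptide pairs that agree within a tolerance rather than summing squared residuals, so incorrect matches cannot dominate the fit.

```python
import numpy as np

def piecewise_linear(t, knots_x, knots_y):
    """Piecewise linear mapping from one retention-time axis to the other."""
    return np.interp(t, knots_x, knots_y)

def robust_fitness(knots_y, knots_x, rt_ms, rt_msms, tol=0.5):
    """Score a candidate mapping by the number of matched pairs whose
    aligned retention times agree within tol (minutes); outliers from
    incorrect mass matches simply fail the test instead of skewing it."""
    aligned = piecewise_linear(rt_msms, knots_x, knots_y)
    return int(np.sum(np.abs(aligned - rt_ms) < tol))
```

A GA would then mutate and recombine candidate knots_y vectors, keeping those with the highest count.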

Relevance: 20.00%

Abstract:

Virtual Reality (VR) is widely used in visualizing medical datasets. This interest has emerged due to the usefulness of its techniques and features, which include immersion, collaboration, and interactivity. In a medical visualization context, immersion is important because it allows users to interact directly and closely with detailed structures in medical datasets. Collaboration, on the other hand, is beneficial because it gives medical practitioners the chance to share their expertise and offer feedback and advice in a more effective and intuitive way. Interactivity is crucial in medical visualization and simulation systems because responsive and instantaneous actions are key attributes of applications such as surgical simulations. In this paper we present a case study that investigates the use of VR in a collaborative networked CAVE environment from a medical volumetric visualization perspective. The study presents a networked CAVE application built to visualize and interact with volumetric datasets. We summarize the advantages of such an application and the potential benefits of our system, and describe the aspects related to this application area and the relevant issues of such implementations.

Relevance: 20.00%

Abstract:

In the earth sciences, data are commonly cast on complex grids in order to model irregular domains such as coastlines, or to evenly distribute grid points over the globe. It is common for a scientist to wish to re-cast such data onto a grid that is more amenable to manipulation, visualization, or comparison with other data sources. The complexity of the grids presents a significant technical difficulty to the regridding process. In particular, the regridding of complex grids may suffer from severe performance issues, in the worst case scaling with the product of the sizes of the source and destination grids. We present a mechanism for the fast regridding of such datasets, based upon the construction of a spatial index that allows fast searching of the source grid. We discover that the most efficient spatial index under test (in terms of memory usage and query time) is a simple look-up table. A kd-tree implementation was found to be faster to build and to give similar query performance at the expense of a larger memory footprint. Using our approach, we demonstrate that regridding of complex data may proceed at speeds sufficient to permit regridding on-the-fly in an interactive visualization application, or in a Web Map Service implementation. For large datasets with complex grids the new mechanism is shown to significantly outperform algorithms used in many scientific visualization packages.
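
The look-up table index can be sketched as follows (an illustration, not the paper's implementation; the bucket resolution and the planar distance used for ranking are simplifying assumptions).

```python
import numpy as np
from collections import defaultdict

class LookupTableIndex:
    """Coarse regular look-up table over scattered source points (lon/lat
    in degrees). Each bucket stores the indices of the source points that
    fall inside it; queries scan outward from the target's bucket."""

    def __init__(self, lons, lats, nx=360, ny=180):
        self.lons = np.asarray(lons, dtype=float)
        self.lats = np.asarray(lats, dtype=float)
        self.nx, self.ny = nx, ny
        self.cells = defaultdict(list)
        for i, (lo, la) in enumerate(zip(self.lons, self.lats)):
            self.cells[self._cell(lo, la)].append(i)

    def _cell(self, lon, lat):
        return (int((lon % 360.0) / 360.0 * self.nx) % self.nx,
                min(int((lat + 90.0) / 180.0 * self.ny), self.ny - 1))

    def nearest(self, lon, lat):
        """Approximate nearest source point: grow the search ring until a
        bucket holds candidates, then rank by planar distance. A careful
        version would also check one further ring and use great-circle
        distance, including longitude wrap-around."""
        cx, cy = self._cell(lon, lat)
        for ring in range(max(self.nx, self.ny)):
            cand = [j
                    for dx in range(-ring, ring + 1)
                    for dy in range(-ring, ring + 1)
                    if 0 <= cy + dy < self.ny
                    for j in self.cells.get(((cx + dx) % self.nx, cy + dy), [])]
            if cand:
                d2 = (self.lons[cand] - lon) ** 2 + (self.lats[cand] - lat) ** 2
                return cand[int(np.argmin(d2))]
        return -1
```

Because every query is a handful of dictionary look-ups, the cost no longer scales with the product of the source and destination grid sizes.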

Relevance: 20.00%

Abstract:

Sting jets are transient coherent mesoscale strong wind features that can cause damaging surface wind gusts in extratropical cyclones. Currently, we have only limited knowledge of their climatological characteristics. To represent sting jets, numerical weather prediction models must resolve slantwise motions with horizontal scales of tens of kilometres and vertical scales of just a few hundred metres. Hence, the climatological characteristics of sting jets and the associated extratropical cyclones cannot be determined by searching for sting jets in low-resolution datasets such as reanalyses. A diagnostic is presented and evaluated for the detection, in low-resolution datasets, of atmospheric regions from which sting jets may originate. Previous studies have shown that conditional symmetric instability (CSI) is present in all storms studied with sting jets, while otherwise similar, rapidly developing storms without CSI do not develop sting jets. Therefore, we assume that the release of CSI is needed for sting jets to develop. While this instability will not be released in a physically realistic way in low-resolution models (and hence sting jets are unlikely to occur), it is hypothesized that the signature of this instability (combined with other criteria that restrict analysis to moist mid-tropospheric regions in the neighbourhood of a secondary cold front) can be used to identify cyclones in which sting jets occurred in reality. The diagnostic is evaluated, and appropriate parameter thresholds defined, by applying it to three case studies simulated at two resolutions (with CSI release resolved only in the higher-resolution simulation).
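
A common indicator of CSI is negative saturated moist potential vorticity (MPV*). The snippet below is only an illustrative composition of the kind of criteria described; the variable names and thresholds are placeholders, not the calibrated values established by the paper.

```python
import numpy as np

def sting_jet_precursor(mpv_star, rel_humidity, pressure,
                        rh_min=0.8, p_top=400e2, p_bottom=700e2):
    """Flag grid points satisfying an illustrative sting-jet precursor:
    a CSI signature (MPV* < 0) in moist mid-tropospheric air. Pressure
    in Pa; thresholds are placeholders. The further restriction to the
    neighbourhood of the secondary cold front is omitted here."""
    return ((mpv_star < 0.0) & (rel_humidity > rh_min)
            & (pressure > p_top) & (pressure < p_bottom))
```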

Relevance: 20.00%

Abstract:

There is a rising demand for the quantitative performance evaluation of automated video surveillance. To advance research in this area, it is essential that comparisons between detection and tracking approaches can be drawn and that improvements to existing methods can be measured. There are a number of challenges related to the proper evaluation of motion segmentation, tracking, event recognition, and other components of a video surveillance system that are unique to the video surveillance community. These include the volume of data that must be evaluated, the difficulty in obtaining ground truth data, the definition of appropriate metrics, and achieving meaningful comparison of diverse systems. This chapter provides descriptions of useful benchmark datasets and their availability to the computer vision community. It outlines some ground truth and evaluation techniques, and provides links to useful resources. It concludes by discussing the future direction for benchmark datasets and their associated processes.

Relevance: 20.00%

Abstract:

Advances in hardware and software technology enable us to collect, store and distribute large quantities of data on a very large scale. Automatically discovering and extracting hidden knowledge in the form of patterns from these large data volumes is known as data mining. Data mining technology is not only a part of business intelligence, but is also used in many other application areas such as research, marketing and financial analytics. For example, medical scientists can use patterns extracted from historic patient data to determine whether a new patient is likely to respond positively to a particular treatment; marketing analysts can use extracted patterns from customer data for future advertisement campaigns; finance experts have an interest in patterns that forecast the development of certain stock market shares for investment recommendations. However, extracting knowledge in the form of patterns from massive data volumes imposes a number of computational challenges in terms of processing time, memory, bandwidth and power consumption. These challenges have led to the development of parallel and distributed data analysis approaches and the utilisation of Grid and Cloud computing. This chapter gives an overview of parallel and distributed computing approaches and how they can be used to scale up data mining to large datasets.
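
A generic map-reduce-style sketch of the data-parallel approach described here (the partitioning scheme and the pair-counting task are illustrative assumptions, not a specific system from the chapter): each worker mines its own data partition, and the partial results are then merged.

```python
from collections import Counter
from itertools import combinations
from multiprocessing import Pool

def local_pair_counts(transactions):
    """Map step: count co-occurring item pairs within one data partition."""
    counts = Counter()
    for items in transactions:
        counts.update(combinations(sorted(set(items)), 2))
    return counts

def parallel_frequent_pairs(partitions, min_support, workers=4):
    """Reduce step: merge per-partition counts, then keep frequent pairs."""
    with Pool(workers) as pool:
        partials = pool.map(local_pair_counts, partitions)
    total = Counter()
    for counts in partials:
        total.update(counts)
    return {pair: n for pair, n in total.items() if n >= min_support}
```

On platforms that spawn rather than fork worker processes, the driver call belongs under an if __name__ == "__main__" guard.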

Relevance: 20.00%

Abstract:

Global syntheses of palaeoenvironmental data are required to test climate models under conditions different from the present. Data sets for this purpose contain data from spatially extensive networks of sites. The data are either directly comparable to model output or readily interpretable in terms of modelled climate variables. Data sets must contain sufficient documentation to distinguish between raw (primary) and interpreted (secondary, tertiary) data, to evaluate the assumptions involved in interpretation of the data, to exercise quality control, and to select data appropriate for specific goals. Four data bases for the Late Quaternary, documenting changes in lake levels since 30 kyr BP (the Global Lake Status Data Base), vegetation distribution at 18 kyr and 6 kyr BP (BIOME 6000), aeolian accumulation rates during the last glacial-interglacial cycle (DIRTMAP), and tropical terrestrial climates at the Last Glacial Maximum (the LGM Tropical Terrestrial Data Synthesis) are summarised. Each has been used to evaluate simulations of Last Glacial Maximum (LGM: 21 calendar kyr BP) and/or mid-Holocene (6 cal. kyr BP) environments. Comparisons have demonstrated that changes in radiative forcing and orography due to orbital and ice-sheet variations explain the first-order, broad-scale (in space and time) features of global climate change since the LGM. However, atmospheric models forced by 6 cal. kyr BP orbital changes with unchanged surface conditions fail to capture quantitative aspects of the observed climate, including the greatly increased magnitude and northward shift of the African monsoon during the early to mid-Holocene. Similarly, comparisons with palaeoenvironmental datasets show that atmospheric models have underestimated the magnitude of cooling and drying of much of the land surface at the LGM. The inclusion of feedbacks due to changes in ocean- and land-surface conditions at both times, and atmospheric dust loading at the LGM, appears to be required in order to produce a better simulation of these past climates. The development of Earth system models incorporating the dynamic interactions among ocean, atmosphere, and vegetation is therefore mandated by Quaternary science results as well as climatological principles. For greatest scientific benefit, this development must be paralleled by continued advances in palaeodata analysis and synthesis, which in turn will help to define questions that call for new focused data collection efforts.

Relevance: 20.00%

Abstract:

The Indian monsoon is an important component of Earth's climate system, accurate forecasting of its mean rainfall being essential for regional food and water security. Accurate measurement of the rainfall is essential for various water-related applications, the evaluation of numerical models and detection and attribution of trends, but a variety of different gridded rainfall datasets are available for these purposes. In this study, six gridded rainfall datasets are compared against the India Meteorological Department (IMD) gridded rainfall dataset, chosen as the most representative of the observed system due to its high gauge density. The datasets comprise those based solely on rain gauge observations and those merging rain gauge data with satellite-derived products. Various skill scores and subjective comparisons are carried out for the Indian region during the south-west monsoon season (June to September). Relative biases and skill metrics are documented at all-India and sub-regional scales. In the gauge-based (land-only) category, Asian Precipitation-Highly-Resolved Observational Data Integration Towards Evaluation of water resources (APHRODITE) and Global Precipitation Climatology Center (GPCC) datasets perform better relative to the others in terms of a variety of skill metrics. In the merged category, the Global Precipitation Climatology Project (GPCP) dataset is shown to perform better than the Climate Prediction Center Merged Analysis of Precipitation (CMAP) for the Indian monsoon in terms of various metrics, when compared with the IMD gridded data. Most of the datasets have difficulty in representing rainfall over orographic regions including the Western Ghats mountains, in north-east India and the Himalayan foothills. The wide range of skill scores seen among the datasets and even the change of sign of bias found in some years are causes of concern. This uncertainty between datasets is largest in north-east India. These results will help those studying the Indian monsoon region to select an appropriate dataset depending on their application and focus of research.
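
Skill metrics of the kind referred to here can be as simple as relative bias, root-mean-square error and correlation against the reference grid (an illustrative selection; the study's exact metric suite is not reproduced).

```python
import numpy as np

def skill_scores(candidate, reference):
    """Relative bias, RMSE and Pearson correlation of a candidate rainfall
    grid against a reference (e.g. IMD), over common valid points."""
    c, r = np.ravel(candidate), np.ravel(reference)
    ok = np.isfinite(c) & np.isfinite(r)
    c, r = c[ok], r[ok]
    rel_bias = (c.mean() - r.mean()) / r.mean()
    rmse = np.sqrt(np.mean((c - r) ** 2))
    corr = np.corrcoef(c, r)[0, 1]
    return rel_bias, rmse, corr
```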

Relevance: 20.00%

Abstract:

Sea surface temperature (SST) datasets have been generated from satellite observations for the period 1991–2010, intended for use in climate science applications. Attributes of the datasets specifically relevant to climate applications are: first, independence from in situ observations; second, effort to ensure homogeneity and stability through the time-series; third, context-specific uncertainty estimates attached to each SST value; and, fourth, provision of estimates of both skin SST (the fundamental measurement, relevant to air-sea fluxes) and SST at standard depth and local time (partly model mediated, enabling comparison with historical in situ datasets). These attributes in part reflect requirements solicited from climate data users prior to and during the project. Datasets consisting of SSTs on satellite swaths are derived from the Along-Track Scanning Radiometers (ATSRs) and Advanced Very High Resolution Radiometers (AVHRRs). These are then used as sole SST inputs to a daily, spatially complete, analysis SST product, with a latitude-longitude resolution of 0.05° and good discrimination of ocean surface thermal features. A product user guide is available, linking to reports describing the datasets' algorithmic basis, validation results, format, uncertainty information and experimental use in trial climate applications. Future versions of the datasets will span at least 1982–2015, better addressing the need in many climate applications for stable records of global SST that are at least 30 years in length.
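
As a sketch of how per-value uncertainty estimates of this kind might be used when averaging SSTs (illustrative only; the actual products define their own uncertainty components and correlation structure): independent error components shrink with the number of values averaged, while a fully correlated component does not.

```python
import numpy as np

def mean_sst_uncertainty(sigma_random, sigma_systematic):
    """Propagate per-pixel uncertainties into the uncertainty of a cell
    mean: random components combine as sqrt(sum sigma^2)/N, while a fully
    correlated component averages unchanged. Inputs are numpy arrays."""
    n = sigma_random.size
    u_random = np.sqrt(np.sum(sigma_random ** 2)) / n
    u_systematic = np.mean(sigma_systematic)
    return np.hypot(u_random, u_systematic)
```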

Relevance: 20.00%

Abstract:

Aims. Although the time of the Maunder minimum (1645–1715) is widely known as a period of extremely low solar activity, it is still debated whether solar activity during that period might have been moderate, or even higher than in the current solar cycle (number 24). We have revisited all existing evidence and datasets, both direct and indirect, to assess the level of solar activity during the Maunder minimum. Methods. We discuss the East Asian naked-eye sunspot observations, the telescopic solar observations, the fraction of sunspot active days, the latitudinal extent of sunspot positions, auroral sightings at high latitudes, cosmogenic radionuclide data, as well as solar eclipse observations for that period. We also consider peculiar features of the Sun (very strong hemispheric asymmetry of sunspot location, unusual differential rotation and the lack of a K-corona) that imply a special mode of solar activity during the Maunder minimum. Results. The level of solar activity during the Maunder minimum is reassessed on the basis of all available datasets. Conclusions. We conclude that solar activity was indeed at an exceptionally low level during the Maunder minimum. Although the exact level is still unclear, it was definitely lower than during the Dalton minimum of around 1800 and significantly below that of the current solar cycle 24. Claims of a moderate-to-high level of solar activity during the Maunder minimum are rejected with a high confidence level.

Relevance: 20.00%

Abstract:

This paper presents the two datasets (ARENA and P5) and the challenge that form a part of the PETS 2015 workshop. The datasets consist of scenarios recorded using multiple visual and thermal sensors. The scenarios in the ARENA dataset involve different staged activities around a parked vehicle in a parking lot in the UK, and those in the P5 dataset involve different staged activities around the perimeter of a nuclear power plant in Sweden. The scenarios of each dataset are grouped into 'Normal', 'Warning' and 'Alarm' categories. The Challenge specifically includes tasks that account for different steps in a video understanding system: Low-Level Video Analysis (object detection and tracking), Mid-Level Video Analysis ('atomic' event detection) and High-Level Video Analysis ('complex' event detection). The evaluation methodology used for the Challenge includes well-established measures.

Relevance: 20.00%

Abstract:

This paper presents a quantitative evaluation of a tracking system on the PETS 2015 Challenge datasets using well-established performance measures. Using existing tools, the tracking system implements an end-to-end pipeline that includes object detection, tracking and post-processing stages. The evaluation results are presented on the provided sequences of both the ARENA and P5 datasets of the PETS 2015 Challenge. The results show an encouraging performance of the tracker in terms of accuracy, but also a tendency towards cardinality errors and ID changes on both datasets. Moreover, the analysis shows better performance of the tracker on visible imagery than on thermal imagery.
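
A widely used example of such measures (not necessarily the exact suite used in the paper) is the CLEAR-MOT accuracy score, which combines exactly the failure modes noted above: misses and false positives (cardinality errors) and identity switches.

```python
def mota(false_negatives, false_positives, id_switches, num_gt):
    """Multiple Object Tracking Accuracy (CLEAR-MOT): 1 minus the sum of
    misses, false positives and identity switches, normalised by the
    total number of ground-truth objects across all frames."""
    return 1.0 - (false_negatives + false_positives + id_switches) / num_gt
```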