948 results for Field data analyser
Abstract:
This article describes work undertaken by the VERA project to investigate how archaeologists work with information technology (IT) on excavation sites. We used a diary study to research the usual patterns of behaviour of archaeologists digging the Silchester Roman town site during the summer of 2007. Although recording had previously been undertaken using pen and paper, during the 2007 season part of the dig was dedicated to trials of IT, and archaeologists used digital pens and paper and Nokia N800 handheld PDAs to record their work. The goal of the trial was to see whether it was possible to record data from the dig whilst still on site, rather than waiting until after the excavation to enter it into the Integrated Archaeological Database (IADB), and to determine whether the archaeologists found the new technology helpful. The digital pens were a success; the N800s, however, were not, given the extreme conditions on site. Our findings confirmed that it was important that technology should fit in well with the work being undertaken, rather than being used for its own sake, and should respect established workflows. We also found that the quality of the data being entered was a recurrent concern, as was the reliability of the infrastructure and equipment.
Abstract:
In the decade since OceanObs'99, great advances have been made in the field of ocean data dissemination. The use of Internet technologies has transformed the landscape: users can now find, evaluate and access data rapidly and securely using only a web browser. This paper describes the current state of the art in dissemination methods for ocean data, focussing particularly on ocean observations from in situ and remote sensing platforms. We discuss current efforts being made to improve the consistency of delivered data and to increase the potential for automated integration of diverse datasets. An important recent development is the adoption of open standards from the Geographic Information Systems community; we discuss the current impact of these new technologies and their future potential. We conclude that new approaches will indeed be necessary to exchange data more effectively and forge links between communities, but these approaches must be evaluated critically through practical tests, and existing ocean data exchange technologies must be used to their best advantage. Investment in key technology components, cross-community pilot projects and the enhancement of end-user software tools will be required in order to assess and demonstrate the value of any new technology.
Abstract:
Airborne Light Detection And Ranging (LIDAR) provides accurate height information for objects on the earth, which has made LIDAR increasingly popular in terrain and land surveying. In particular, LIDAR data offer vital features for land-cover classification, an important task in many application domains. In this paper, an unsupervised approach based on an improved fuzzy Markov random field (FMRF) model is developed, in which the LIDAR data, co-registered images acquired by optical sensors (aerial color and near-infrared images) and other derived features are fused effectively to improve the ability of the LIDAR system to perform accurate land-cover classification. In the proposed FMRF model-based approach, spatial contextual information is incorporated by modeling the image as a Markov random field (MRF), and fuzzy logic is introduced at the same time to reduce the errors caused by hard classification. Moreover, a Lagrange-multiplier (LM) algorithm is employed to calculate a maximum a posteriori (MAP) estimate for the classification. The experimental results show that fusing the height data and optical images is particularly suited to land-cover classification. The proposed approach works very well for classification from airborne LIDAR data fused with its co-registered optical images, and the average accuracy is improved to 88.9%.
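The abstract does not give the model equations, so the sketch below is only a rough, hypothetical illustration of the kind of fusion and spatial regularisation it describes: a stacked LIDAR-height-plus-optical feature image is clustered with a fuzzy, neighbourhood-smoothed update. It is a simplification, not the authors' FMRF/Lagrange-multiplier scheme, and all function names and parameters are assumptions.

```python
import numpy as np

def fuzzy_mrf_classify(features, n_classes=4, beta=0.5, m=2.0, n_iter=20, seed=0):
    """Spatially regularised fuzzy clustering of an (H, W, B) feature stack,
    where the B bands are LIDAR height plus co-registered optical channels
    scaled to comparable ranges.  beta weights the neighbourhood (MRF-style)
    smoothness term and m is the fuzziness exponent."""
    rng = np.random.default_rng(seed)
    H, W, B = features.shape
    X = features.reshape(-1, B)
    centres = X[rng.choice(len(X), n_classes, replace=False)]   # initial class centres
    U = np.full((H, W, n_classes), 1.0 / n_classes)             # fuzzy memberships

    for _ in range(n_iter):
        # squared distance of every pixel to every class centre
        d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1).reshape(H, W, n_classes)
        # neighbourhood average of memberships (4-connected) acts as a soft spatial prior
        pad = np.pad(U, ((1, 1), (1, 1), (0, 0)), mode="edge")
        nbr = (pad[:-2, 1:-1] + pad[2:, 1:-1] + pad[1:-1, :-2] + pad[1:-1, 2:]) / 4.0
        cost = d2 + beta * (1.0 - nbr)                # penalise disagreement with neighbours
        U = (1.0 / np.maximum(cost, 1e-12)) ** (1.0 / (m - 1))
        U /= U.sum(axis=2, keepdims=True)
        w = U.reshape(-1, n_classes) ** m             # update centres from fuzzy memberships
        centres = (w.T @ X) / w.sum(axis=0)[:, None]

    return U.argmax(axis=2), U                        # hard labels and fuzzy memberships
```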
Abstract:
Multiple linear regression is used to diagnose the signal of the 11-yr solar cycle in zonal-mean zonal wind and temperature in the 40-yr ECMWF Re-Analysis (ERA-40) dataset. The results of previous studies are extended to 2008 using data from ECMWF operational analyses. This analysis confirms that the solar signal found in previous studies is distinct from that of volcanic aerosol forcing resulting from the eruptions of El Chichón and Mount Pinatubo, but it highlights the potential for confusion between the solar signal and lower-stratospheric temperature trends. A correction to an error that is present in previous results of Crooks and Gray, stemming from the use of a single daily analysis field rather than monthly averaged data, is also presented.
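As a hedged illustration of the regression framework (the abstract does not list the exact ERA-40 regressor set, so the regressors below are the kind commonly used in such studies and are assumptions), a minimal multiple linear regression in Python:

```python
import numpy as np

def regress_solar_signal(y, solar, aerosol, enso, time):
    """Regress a zonal-mean monthly time series y onto solar, volcanic-aerosol,
    ENSO and linear-trend regressors; all inputs are 1-D arrays of equal length."""
    X = np.column_stack([
        np.ones_like(y),       # intercept
        solar,                 # 11-yr solar-cycle index (e.g. F10.7)
        aerosol,               # stratospheric aerosol optical depth
        enso,                  # ENSO index
        time - time.mean(),    # linear trend term
    ])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs, y - X @ coeffs                      # regression coefficients and residuals
```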
Abstract:
Accurate estimates for the fall speed of natural hydrometeors are vital if their evolution in clouds is to be understood quantitatively. In this study, laboratory measurements of the terminal velocity vt for a variety of ice particle models settling in viscous fluids, along with wind-tunnel and field measurements of ice particles settling in air, have been analyzed and compared to common methods of computing vt from the literature. It is observed that while these methods work well for a number of particle types, they fail for particles with open geometries, specifically those for which the area ratio Ar is small (Ar is defined as the area of the particle projected normal to the flow divided by the area of a circumscribing disc). In particular, the fall speeds of stellar and dendritic crystals, needles, open bullet rosettes, and low-density aggregates are all overestimated. These particle types are important in many cloud types: aggregates in particular often dominate snow precipitation at the ground and vertically pointing Doppler radar measurements. Based on the laboratory data, a simple modification to previous computational methods, based on the area ratio, is proposed. This new method collapses the available drag data onto an approximately universal curve, and the resulting errors in the computed fall speeds relative to the tank data are less than 25% in all cases. Comparison with the (much more scattered) measurements of ice particles falling in air shows strong support for this new method, with the area ratio bias apparently eliminated.
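A minimal sketch of the general Best-number/Reynolds-number approach to computing fall speed, with the projected area ratio used to modify the drag in the spirit of the abstract. The boundary-layer constants delta0 and C0 below are commonly quoted values, not necessarily those of this paper, and the function name and defaults are assumptions:

```python
import numpy as np

def fall_speed(mass, D_max, area_ratio, rho_air=1.0, eta=1.7e-5, g=9.81,
               delta0=8.0, C0=0.35):
    """Terminal fall speed [m s-1] of an ice particle from a Best-number /
    Reynolds-number relation with an area-ratio modification.
    mass in kg, maximum dimension D_max in m, area_ratio = projected area
    divided by the area of the circumscribing disc."""
    # modified Best (Davies) number: dividing by sqrt(area_ratio) reduces the
    # effective drag of open, low-area-ratio particles
    X = 8.0 * mass * g * rho_air / (np.pi * eta**2 * np.sqrt(area_ratio))
    # Reynolds number from a boundary-layer drag parameterisation
    Re = (delta0**2 / 4.0) * (np.sqrt(1.0 + 4.0 * np.sqrt(X)
                                      / (delta0**2 * np.sqrt(C0))) - 1.0)**2
    return eta * Re / (rho_air * D_max)
```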
Abstract:
This paper describes a method that employs Earth Observation (EO) data to calculate spatiotemporal estimates of soil heat flux, G, using a physically based method (the Analytical Method). The method involves a harmonic analysis of land surface temperature (LST) data. It also requires an estimate of near-surface soil thermal inertia; this property depends on soil textural composition and varies as a function of soil moisture content. The EO data needed to drive the model equations, and the ground-based data required to verify the method, were obtained over the Fakara domain within the African Monsoon Multidisciplinary Analysis (AMMA) program. LST estimates (3 km × 3 km, one image every 15 min) were derived from MSG-SEVIRI data. Soil moisture estimates were obtained from ENVISAT-ASAR data, while estimates of leaf area index, LAI (to calculate the effect of the canopy on G, largely due to radiation extinction), were obtained from SPOT-HRV images. The variation of these variables over the Fakara domain, and the implications for values of G derived from them, are discussed. Results showed that this method provides reliable large-scale spatiotemporal estimates of G. Variations in G could largely be explained by the variability in the model input variables. Furthermore, it was shown that this method is relatively insensitive to model parameters related to the vegetation or soil texture. However, the strong sensitivity of thermal inertia to soil moisture content at low values of relative saturation (<0.2) means that in arid or semi-arid climates accurate estimates of surface soil moisture content are of utmost importance if reliable estimates of G are to be obtained. This method has the potential to improve large-scale evaporation estimates, to aid land surface model prediction and to advance research that aims to explain failure in energy balance closure of meteorological field studies.
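The Analytical Method referred to above expresses G as a harmonic series of the surface temperature scaled by the thermal inertia. The sketch below fits diurnal harmonics to an LST series and phase-shifts each by pi/4; it omits the canopy-extinction and soil-moisture corrections discussed in the paper, and the function name and defaults are assumptions:

```python
import numpy as np

def soil_heat_flux(lst, dt, thermal_inertia, n_harmonics=3):
    """Harmonic-analysis estimate of soil heat flux G [W m-2] from a diurnal
    LST series.  lst in K, dt = sampling interval in s (e.g. 900 s for 15-min
    imagery), thermal_inertia in J m-2 K-1 s-1/2."""
    t = np.arange(len(lst)) * dt
    omega = 2.0 * np.pi / 86400.0                     # fundamental (diurnal) frequency
    # least-squares fit of T(t) = T0 + sum_n [a_n sin(n w t) + b_n cos(n w t)]
    cols = [np.ones_like(t)]
    for n in range(1, n_harmonics + 1):
        cols += [np.sin(n * omega * t), np.cos(n * omega * t)]
    coeffs, *_ = np.linalg.lstsq(np.column_stack(cols), lst, rcond=None)

    G = np.zeros_like(t)
    for n in range(1, n_harmonics + 1):
        a, b = coeffs[2 * n - 1], coeffs[2 * n]
        A_n, phi_n = np.hypot(a, b), np.arctan2(b, a)
        # each harmonic of G leads the corresponding temperature harmonic by pi/4
        G += thermal_inertia * A_n * np.sqrt(n * omega) * np.sin(n * omega * t + phi_n + np.pi / 4.0)
    return G
```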
Abstract:
Airflow through urban environments is one of the most important factors affecting human health, outdoor and indoor thermal comfort, air quality and the energy performance of buildings. This paper presents a study of the effects of wind-induced airflows through urban built form using statistical analysis. The data employed in the analysis are from year-long simultaneous field measurements conducted at the University of Reading campus in the United Kingdom. In this study, the association between typical architectural forms and the wind environment is investigated; such forms include a street canyon, a semi-closure, a courtyard form and a relatively open space in a low-rise building complex. The measured data capture wind speed and wind direction at six representative locations, and statistical analysis identifies key factors describing the effects of built form on the resulting airflows. Factor analysis of the measured data identified meteorological and architectural-layout factors as the key factors. The derivation of these factors and their variation with the studied built forms are presented in detail.
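Since the measured dataset is not reproduced here, the following factor-analysis sketch uses random numbers as a stand-in for the year of observations; the variable layout, number of factors and library choice are assumptions, shown only to illustrate how such key factors can be extracted:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Stand-in for a year of hourly records: columns would be wind speeds at the
# six monitored locations plus reference meteorological variables.
rng = np.random.default_rng(1)
obs = rng.normal(size=(8760, 8))

fa = FactorAnalysis(n_components=2, random_state=0)
scores = fa.fit_transform(obs)       # factor scores for each observation
loadings = fa.components_            # how each measured variable loads on each factor
print(np.round(loadings, 2))
```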
Abstract:
Using the record of 30 flank eruptions over the last 110 years at Nyamuragira, we have tested the relationship between the eruption dynamics and the local stress field. There are two groups of eruptions based on their duration (shorter or longer than 80 days) that are also clustered in space and time. We find that the eruptions fed by dykes parallel to the East African Rift Valley have longer durations (and larger volumes) than those fed by dykes with other orientations. This is compatible with a model for compressible magma transported through an elastic-walled dyke in a differential stress field from an over-pressured reservoir (Woods et al., 2006). The observed pattern of eruptive fissures is consistent with a local stress field modified by a northwest-trending, right-lateral slip fault that is part of the northern transfer zone of the Kivu Basin rift segment. We have also re-tested the stochastic eruption models for Nyamuragira of Burt et al. (1994) with the new data. The time-predictable, pressure-threshold model remains the best fit and is consistent with the typically observed declining rate of sulphur dioxide emission during the first few days of eruption, with lava emitted from a depressurising, closed, crustal reservoir. The 2.4-fold increase in long-term eruption rate that occurred after 1977 is confirmed in the new analysis. Since that change, the record has been dominated by short-duration eruptions fed by dykes perpendicular to the Rift. We suggest that the intrusion of a major dyke during the 1977 volcano-tectonic event at neighbouring Nyiragongo volcano inhibited subsequent dyke formation on the southern flanks of Nyamuragira, and this may also have resulted in more dykes reaching the surface elsewhere. Thus the sudden change in output was the result of a changed stress field that forced more of the deep magma supply to the surface. Another volcano-tectonic event in 2002 may also have changed the magma output rate at Nyamuragira.
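As a purely illustrative sketch (not the analysis of Burt et al., 1994), a time-predictable model can be fitted by regressing each repose interval on the volume of the preceding eruption, giving a long-term supply rate Q and a forecast of the next onset; the function name and least-squares form are assumptions:

```python
import numpy as np

def time_predictable_fit(onset_years, volumes_km3):
    """Fit repose_i ~ volume_i / Q through the origin and forecast the next onset.
    onset_years and volumes_km3 are equal-length sequences ordered in time."""
    onsets = np.asarray(onset_years, dtype=float)
    v = np.asarray(volumes_km3, dtype=float)[:-1]     # volumes of eruptions 1..n-1
    reposes = np.diff(onsets)                         # repose following each of those eruptions
    Q = (v @ v) / (v @ reposes)                       # least-squares long-term supply rate
    next_onset = onsets[-1] + volumes_km3[-1] / Q
    return Q, next_onset
```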
Abstract:
The organization of non-crystalline polymeric materials at a local level, namely on a spatial scale between a few and 100 Å, is still unclear in many respects. The determination of the local structure in terms of the configuration and conformation of the polymer chain, and of the packing characteristics of the chain in the bulk material, represents a challenging problem. Data from wide-angle diffraction experiments are very difficult to interpret due to the very large amount of information that they carry, that is, the large number of correlations present in the diffraction patterns. We describe new approaches that permit a detailed analysis of the complex neutron diffraction patterns characterizing polymer melts and glasses. The coupling of different computer modelling strategies with neutron scattering data over a wide Q range allows the extraction of detailed quantitative information on the structural arrangements of the materials of interest. Proceeding from modelling routes as diverse as force field calculations, single-chain modelling and reverse Monte Carlo, we show the successes and pitfalls of each approach in describing model systems, which illustrate the need to attack the data analysis problem simultaneously from several fronts.
Abstract:
We present a new approach that allows the determination and refinement of force field parameters for the description of disordered macromolecular systems from experimental neutron diffraction data obtained over a large Q range. The procedure is based on tight coupling between experimentally derived structure factors and computer modelling. By separating the potential into terms representing bond stretching, angle bending and torsional rotation respectively, and treating each term in turn, the various potential parameters are extracted directly from experiment. The procedure is illustrated on molten polytetrafluoroethylene.
Abstract:
We present a new approach that allows the determination of force-field parameters for the description of disordered macromolecular systems from experimental neutron diffraction data obtained over a large Q range. The procedure is based on a tight coupling between experimentally derived structure factors and computer modelling. We separate the molecular potential into non-interacting terms representing respectively bond stretching, angle bending and torsional rotation. The parameters for each of the potentials are extracted directly from experimental data through comparison of the experimental structure factor and those derived from atomistic level molecular models. The viability of these force fields is assessed by comparison of predicted large-scale features such as the characteristic ratio. The procedure is illustrated on molten poly(ethylene) and poly(tetrafluoroethylene).
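The coupled structure-factor/modelling fit described in these two abstracts cannot be reproduced in a few lines, so the sketch below shows a much simpler stand-in for the idea of extracting a single potential term from a measured distribution: a Boltzmann inversion of a bond-length distribution into a harmonic stretching constant. The function name, temperature and units are assumptions:

```python
import numpy as np

def harmonic_bond_from_distribution(r, p_r, temperature=600.0):
    """Invert a normalised bond-length distribution p(r) into a harmonic term
    V(r) = 0.5*k*(r - r0)**2 via V(r) = -kB*T*ln p(r), then fit a parabola
    around the minimum.  r in Angstrom, output k in kcal mol-1 Angstrom-2."""
    kB = 0.0019872                                    # kcal mol-1 K-1
    V = -kB * temperature * np.log(np.maximum(p_r, 1e-12))
    V -= V.min()
    i0 = int(np.argmin(V))                            # grid point of the potential minimum
    window = slice(max(i0 - 5, 0), i0 + 6)            # small window for the quadratic fit
    c2, c1, _ = np.polyfit(r[window], V[window], 2)
    r0 = -c1 / (2.0 * c2)                             # equilibrium bond length
    k = 2.0 * c2                                      # harmonic force constant
    return r0, k
```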
Abstract:
Recently, major processor manufacturers have announced a dramatic shift in their paradigm to increase computing power over the coming years. Instead of focusing on faster clock speeds and more powerful single-core CPUs, the trend clearly goes towards multi-core systems. This will also result in a paradigm shift for the development of algorithms for computationally expensive tasks, such as data mining applications. Obviously, work on parallel algorithms is not new per se, but concentrated efforts in the many application domains are still missing. Multi-core systems, but also clusters of workstations and even large-scale distributed computing infrastructures, provide new opportunities and pose new challenges for the design of parallel and distributed algorithms. Since data mining and machine learning systems rely on high performance computing systems, research on the corresponding algorithms must be at the forefront of parallel algorithm research in order to keep pushing data mining and machine learning applications to be more powerful and, especially for the former, interactive. To bring together researchers and practitioners working in this exciting field, a workshop on parallel data mining was organized as part of PKDD/ECML 2006 (Berlin, Germany). The six contributions selected for the program describe various aspects of data mining and machine learning approaches featuring low to high degrees of parallelism: The first contribution addresses the classic problem of distributed association rule mining and focuses on communication efficiency to improve the state of the art. After this, a parallelization technique for speeding up decision tree construction by means of thread-level parallelism for shared memory systems is presented. The next paper discusses the design of a parallel approach to the frequent subgraph mining problem for distributed memory systems. This approach is based on a hierarchical communication topology to solve issues related to multi-domain computational environments. The fourth paper describes the combined use and customization of software packages to facilitate top-down parallelism in the tuning of Support Vector Machines (SVM), and the next contribution presents an interesting idea concerning parallel training of Conditional Random Fields (CRFs) and motivates their use in labeling sequential data. The last contribution focuses on very efficient feature selection: it describes a parallel algorithm for feature selection from random subsets. Selecting the papers included in this volume would not have been possible without the help of an international Program Committee that provided detailed reviews for each paper. We would also like to thank Matthew Otey, who helped with publicity for the workshop.
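As a toy illustration of the kind of low-end parallelism mentioned above (not any of the workshop papers' algorithms), the snippet below scores random feature subsets concurrently with threads; the scoring criterion, names and parameters are assumptions:

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def subset_score(X, y, subset):
    """Mean absolute correlation of the selected columns of X with the target y."""
    return float(np.mean([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in subset]))

def parallel_feature_selection(X, y, n_subsets=32, subset_size=5, workers=4, seed=0):
    """Score random feature subsets in parallel and return the best one."""
    rng = np.random.default_rng(seed)
    subsets = [rng.choice(X.shape[1], subset_size, replace=False) for _ in range(n_subsets)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        scores = list(pool.map(lambda s: subset_score(X, y, s), subsets))
    best = int(np.argmax(scores))
    return subsets[best], scores[best]
```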
Abstract:
The Mitigation Options for Phosphorus and Sediment (MOPS) project investigated the effectiveness of within-field control measures (tramline management, straw residue management, type and direction of cultivation, and vegetative buffers) in mitigating sediment and phosphorus loss from winter-sown combinable cereal crops at three case study sites. To determine the cost of the approaches, simple financial spreadsheet models were constructed at both farm and regional levels. Taking into account crop areas, crop rotation margins per hectare were calculated to reflect the costs of crop establishment, fertiliser and agro-chemical applications, harvesting, and the associated labour and machinery costs. Variable and operating costs associated with each mitigation option were then incorporated to demonstrate the impact on the relevant crop enterprise and crop rotation margins. These costs were then compared to runoff, sediment and phosphorus loss data obtained from monitoring hillslope-length scale field plots. Each of the mitigation options explored in this study had potential for reducing sediment and phosphorus losses from arable land under cereal crops. Sediment losses were reduced by between 9 kg ha−1 and as much as 4780 kg ha−1, with corresponding reductions in phosphorus loss of between 0.03 kg ha−1 and 2.89 kg ha−1. In percentage terms, reductions in phosphorus were between 9% and 99%. Impacts on crop rotation margins also varied. Minimum tillage resulted in cost savings (up to £50 ha−1), whilst other options showed increased costs (up to £19 ha−1 for straw residue incorporation). Overall, the results indicate that each of the options has potential for on-farm implementation. However, tramline management appeared to have the greatest potential for reducing runoff, sediment and phosphorus losses from arable land (between 69% and 99%) and is likely to be considered cost-effective with only a small additional cost of £2–4 ha−1, although further work is needed to evaluate alternative tramline management methods. Tramline management is also the only option not incorporated within current policy mechanisms associated with reducing soil erosion and phosphorus loss and, in light of its potential, is an approach that should be encouraged once further evidence is available.
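A simple worked example of how the margin and plot-monitoring figures quoted above can be combined into a rough cost per unit of phosphorus retained; this is only an illustration, not the project's own appraisal method:

```python
def pounds_per_kg_p_avoided(extra_cost_per_ha, p_loss_avoided_kg_per_ha):
    """Cost of a mitigation option per kg of phosphorus loss avoided, per hectare."""
    return extra_cost_per_ha / p_loss_avoided_kg_per_ha

# Tramline management: roughly £2-4 per ha extra cost and up to ~2.9 kg P per ha
# avoided at the best-performing plots, i.e. on the order of £1 per kg P avoided.
print(pounds_per_kg_p_avoided(3.0, 2.9))
```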
Abstract:
Climate-G is a large-scale distributed testbed devoted to climate change research. It is an unfunded effort, started in 2008, involving a wide community in both Europe and the US. The testbed is an interdisciplinary effort involving partners from several institutions and joining expertise in the fields of climate change and computational science. Its main goal is to allow scientists to carry out geographical and cross-institutional data discovery, access, analysis, visualization and sharing of climate data. It represents an attempt to address, in a real environment, challenging data and metadata management issues. This paper presents a complete overview of the Climate-G testbed, highlighting the most important results that have been achieved since the beginning of the project.
Abstract:
Liquid clouds play a profound role in the global radiation budget, but it is difficult to remotely retrieve their vertical profile. Ordinary narrow field-of-view (FOV) lidars receive a strong return from such clouds, but the information is limited to the first few optical depths. Wide-angle multiple-FOV lidars can isolate radiation scattered multiple times before returning to the instrument, often penetrating much deeper into the cloud than the singly scattered signal. These returns potentially contain information on the vertical profile of the extinction coefficient, but are challenging to interpret due to the lack of a fast radiative transfer model for simulating them. This paper describes a variational algorithm that incorporates a fast forward model based on the time-dependent two-stream approximation, and its adjoint. Application of the algorithm to simulated data from a hypothetical airborne three-FOV lidar with a maximum footprint width of 600 m suggests that this approach should be able to retrieve the extinction structure down to an optical depth of around 6, and the total optical depth up to at least 35, depending on the maximum lidar FOV. The convergence behavior of Gauss-Newton and quasi-Newton optimization schemes is compared. We then present results from an application of the algorithm to observations of stratocumulus by the 8-FOV airborne “THOR” lidar. It is demonstrated how the averaging kernel can be used to diagnose the effective vertical resolution of the retrieved profile, and therefore the depth to which information on the vertical structure can be recovered. This work enables exploitation of returns from spaceborne lidar and radar subject to multiple scattering more rigorously than previously possible.
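The paper's fast two-stream forward model and its adjoint are not reproducible here, but the variational machinery around them is generic. The sketch below shows a Gauss-Newton iteration for the usual two-term cost function, with placeholder forward-model and Jacobian callables (assumptions, not the paper's model), and how an averaging kernel follows from the same matrices:

```python
import numpy as np

def gauss_newton_retrieval(y, forward, jacobian, x0, xa, B_inv, R_inv, n_iter=10):
    """Minimise J(x) = (y - H(x))^T R^-1 (y - H(x)) + (x - xa)^T B^-1 (x - xa)
    by Gauss-Newton.  'forward' evaluates H(x); 'jacobian' returns dH/dx
    (e.g. obtained from an adjoint model)."""
    x = x0.copy()
    for _ in range(n_iter):
        K = jacobian(x)                               # Jacobian of the forward model at x
        A = K.T @ R_inv @ K + B_inv                   # Gauss-Newton approximation to the Hessian
        g = K.T @ R_inv @ (y - forward(x)) - B_inv @ (x - xa)
        x = x + np.linalg.solve(A, g)                 # Newton-type step
    S = np.linalg.inv(A)                              # approximate retrieval error covariance
    averaging_kernel = S @ K.T @ R_inv @ K            # rows diagnose the effective vertical resolution
    return x, S, averaging_kernel
```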