49 results for DIGITAL DATA
Abstract:
This paper maps the carbonate geochemistry of the Makgadikgadi Pans region of northern Botswana from moderate resolution (500 m pixels) remotely sensed data, to assess the impact of various geomorphological processes on surficial carbonate distribution. Previous palaeo-environmental studies have demonstrated that the pans have experienced several highstands during the Quaternary, forming calcretes around shoreline embayments. The pans are also a significant regional source of dust, and some workers have suggested that surficial carbonate distributions may be controlled, in part, by wind regime. Field studies of carbonate deposits in the region have also highlighted the importance of fluvial and groundwater processes in calcrete formation. However, due to the large area involved and problems of accessibility, the carbonate distribution across the entire Makgadikgadi basin remains poorly understood. The MODIS instrument permits mapping of carbonate distribution over large areas; comparison with estimates from Landsat Thematic Mapper data shows reasonable agreement, and there is good agreement with estimates from laboratory analysis of field samples. The results suggest that palaeo-lake highstands, reconstructed here using the SRTM 3 arc-second digital elevation model, have left behind surficial carbonate deposits, which can be mapped by the MODIS instrument. Copyright (c) 2006 John Wiley & Sons, Ltd.
Abstract:
Soil data and reliable soil maps are imperative for environmental management, conservation and policy. Data from historical point surveys, e.g. experiment site data and farmers' fields, can serve this purpose. However, legacy soil information is not necessarily collected for spatial analysis and mapping, such that the data may not have immediately useful geo-references. Methods are required to utilise these historical soil databases so that we can produce quantitative maps of soil properties to assess spatial and temporal trends, but also to assess where future sampling is required. This paper discusses two such databases: the Representative Soil Sampling Scheme, which has monitored the agricultural soil in England and Wales from 1969 to 2003 (between 400 and 900 bulked soil samples were taken annually from different agricultural fields); and the former State Chemistry Laboratory, Victoria, Australia, where between 1973 and 1994 approximately 80,000 soil samples were submitted for analysis by farmers. Previous statistical analyses have been performed using administrative regions (with sharp boundaries) for both databases, which are largely unrelated to natural features. For a more detailed spatial analysis that can be linked to climate and terrain attributes, gradual variation of these soil properties should be described. Geostatistical techniques such as ordinary kriging are suited to this. This paper describes the format of the databases and initial approaches as to how they can be used for digital soil mapping. For this paper we have selected soil pH to illustrate the analyses for both databases.
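The ordinary kriging mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation; the spherical variogram parameters are placeholders, not values fitted to either database.

```python
import numpy as np

def spherical(h, nugget=0.0, sill=1.0, rng=50.0):
    """Spherical variogram model (illustrative parameters)."""
    h = np.asarray(h, dtype=float)
    g = np.where(h < rng,
                 nugget + (sill - nugget) * (1.5 * h / rng - 0.5 * (h / rng) ** 3),
                 sill)
    return np.where(h == 0.0, 0.0, g)  # gamma(0) = 0 by definition

def ordinary_kriging(xy, z, x0):
    """Predict the value at location x0 from samples (xy, z)."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    # Kriging system: variogram matrix bordered by the unbiasedness constraint
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = spherical(d)
    A[n, n] = 0.0
    b = np.append(spherical(np.linalg.norm(xy - np.asarray(x0), axis=1)), 1.0)
    w = np.linalg.solve(A, b)  # n weights plus a Lagrange multiplier
    return float(w[:n] @ z)
```

With a zero nugget the predictor is exact at the sample locations, which makes a quick sanity check: kriging a soil pH value at one of its own sample points returns that measured value.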
Abstract:
The elucidation of spatial variation in the landscape can indicate potential wildlife habitats or breeding sites for vectors, such as ticks or mosquitoes, which cause a range of diseases. Information from remotely sensed data could aid the delineation of vegetation distribution on the ground in areas where local knowledge is limited. The data from digital images are often difficult to interpret because of pixel-to-pixel variation, that is, noise, and complex variation at more than one spatial scale. Landsat Enhanced Thematic Mapper Plus (ETM+) and Satellite Pour l'Observation de la Terre (SPOT) image data were analyzed for an area close to Douna in Mali, West Africa. The variograms of the normalized difference vegetation index (NDVI) from both types of image data were nested. The parameters of the nested variogram function from the Landsat ETM+ data were used to design the sampling for a ground survey of soil and vegetation data. Variograms of the soil and vegetation data showed that their variation was anisotropic and their scales of variation were similar to those of NDVI from the SPOT data. The short- and long-range components of variation in the SPOT data were filtered out separately by factorial kriging. The map of the short-range component appears to represent the patterns of vegetation and associated shallow slopes and drainage channels of the tiger bush system. The map of the long-range component also appeared to relate to broader patterns in the tiger bush and to gentle undulations in the topography. The results suggest that the types of image data analyzed in this study could be used to identify areas with more moisture in semiarid regions that could support wildlife and also be potential vector breeding sites.
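The two quantities at the core of this analysis can be computed as follows; this is a minimal sketch in which the semivariogram is taken along one image direction only, not the full anisotropic, nested-model analysis described in the abstract.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index, per pixel."""
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / (nir + red + 1e-12)  # small epsilon avoids 0/0

def semivariogram_row(img, max_lag):
    """Empirical semivariance gamma(h) along image rows for h = 1..max_lag."""
    return np.array([0.5 * np.mean((img[:, h:] - img[:, :-h]) ** 2)
                     for h in range(1, max_lag + 1)])
```

A nested structure would show up here as the experimental curve rising in two distinct stages, one per scale of variation, which is what motivates fitting a sum of two variogram models.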
Abstract:
The ground surface net solar radiation is the energy that drives physical and chemical processes at the ground surface. In this paper, multi-spectral data from the Landsat-5 TM, topographic data from a gridded digital elevation model, field measurements, and the atmosphere model LOWTRAN 7 are used to estimate surface net solar radiation over the FIFE site. First, an improved method is presented and used for calculating total surface incoming radiation. Then, surface albedo is integrated from surface reflectance factors derived from remotely sensed data from Landsat-5 TM. Finally, surface net solar radiation is calculated by subtracting surface upwelling radiation from the total surface incoming radiation.
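The final step described above reduces to a simple per-pixel calculation. The sketch below assumes upwelling shortwave radiation is albedo times incoming radiation; the narrow-to-broadband weights are illustrative placeholders, not the coefficients used in the paper.

```python
import numpy as np

def broadband_albedo(band_reflectances, weights):
    """Broadband albedo as a weighted mean of band reflectance factors
    (weights here are illustrative, not the paper's coefficients)."""
    w = np.asarray(weights, dtype=float)
    return float(np.asarray(band_reflectances, dtype=float) @ w / w.sum())

def net_solar(incoming, albedo):
    """Net = incoming minus upwelling, with upwelling = albedo * incoming."""
    return incoming * (1.0 - albedo)
```

For example, 800 W/m^2 of incoming radiation over a surface with albedo 0.25 yields a net of 600 W/m^2.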
Abstract:
LIght Detection And Ranging (LIDAR) data for terrain and land surveying has contributed to many environmental, engineering and civil applications. However, the analysis of Digital Surface Models (DSMs) from complex LIDAR data is still challenging. Commonly, the first task in investigating LIDAR point clouds is to separate ground and object points as a preparatory step for further object classification. In this paper, the authors present a novel unsupervised segmentation algorithm, skewness balancing, to separate object and ground points efficiently from high-resolution LIDAR point clouds by exploiting statistical moments. The results presented in this paper demonstrate its robustness and its potential for commercial applications.
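The idea behind skewness balancing can be sketched as follows: bare-ground elevations are roughly symmetrically distributed, while buildings and vegetation add a positive tail, so the highest returns are peeled off until the sample skewness drops to zero or below. This is a simplified reading of the approach, not the authors' implementation.

```python
import numpy as np

def skewness(z):
    """Sample skewness (third standardized moment)."""
    z = np.asarray(z, dtype=float)
    s = z.std()
    return 0.0 if s == 0 else float(np.mean((z - z.mean()) ** 3) / s ** 3)

def skewness_balancing(z):
    """Return a boolean mask over the points: True = ground, False = object."""
    z = np.asarray(z, dtype=float)
    ground = np.ones(len(z), dtype=bool)
    for i in np.argsort(z)[::-1]:          # visit highest elevations first
        if skewness(z[ground]) <= 0.0:     # distribution balanced: stop
            break
        ground[i] = False                  # peel off an object return
    return ground
```

The appeal of the moment-based criterion is that it needs no threshold tuned per scene: the stopping condition adapts to whatever ground distribution the tile contains.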
Abstract:
Recent severe flooding in the UK has highlighted the need for better information on flood risk, increasing the pressure on engineers to enhance the capabilities of computer models for flood prediction. This paper evaluates the benefits to be gained from the use of remotely sensed data to support flood modelling. The remotely sensed data available can be used either to produce high-resolution digital terrain models (DTMs) (light detection and ranging (Lidar) data), or to generate accurate inundation mapping of past flood events (airborne synthetic aperture radar (SAR) data and aerial photography). The paper reports on the modelling of real flood events that occurred at two UK sites on the rivers Severn and Ouse. At these sites a combination of remotely sensed data and recorded hydrographs was available. It is concluded first that Lidar-generated DTMs support the generation of considerably better models and enhance the visualisation of model results, and second that flood outlines obtained from airborne SAR or aerial images help develop an appreciation of the hydraulic behaviour of important model components, and facilitate model validation. The need for further research is highlighted by a number of limitations, namely: the difficulties in obtaining an adequate representation of hydraulically important features such as embankment crests and walls; uncertainties in the validation data; and difficulties in extracting flood outlines from airborne SAR images in urban areas.
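A Lidar DTM supports even the crudest screening of flood extent: flag cells below a planar water surface, restricted to those hydraulically connected to the channel. The sketch below is a toy illustration of that idea, not the hydraulic models evaluated in the paper.

```python
from collections import deque

import numpy as np

def connected_inundation(dtm, water_level, seed):
    """Flood cells below water_level that are 4-connected to the seed cell."""
    flooded = np.zeros(dtm.shape, dtype=bool)
    if dtm[seed] >= water_level:
        return flooded
    queue = deque([seed])
    flooded[seed] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if (0 <= rr < dtm.shape[0] and 0 <= cc < dtm.shape[1]
                    and not flooded[rr, cc] and dtm[rr, cc] < water_level):
                flooded[rr, cc] = True
                queue.append((rr, cc))
    return flooded
```

Even this toy model shows why embankment crests matter so much: a low-lying area behind a correctly resolved embankment stays dry, but if the DTM smooths the crest below the water level the whole basin floods spuriously.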
Abstract:
This research examines dynamics associated with new representational technologies in complex organizations through a study of the use of a Single Model Environment, prototyping and simulation tools in the mega-project to construct Terminal 5 at Heathrow Airport, London. The ambition of the client, BAA, was to change industrial practices, reducing project costs and time to delivery through new contractual arrangements and new digitally-enabled collaborative ways of working. The research highlights changes over time and addresses two areas of 'turbulence' in the use of: 1) technologies, where there is a dynamic tension between desires to constantly improve, change and update digital technologies and the need to standardise practices, maintaining and defending the overall integrity of the system; and 2) representations, where dynamics result from the responsibilities and liabilities associated with sharing of digital representations and a lack of trust in the validity of data from other firms. These dynamics are tracked across three stages of this well-managed and innovative project and indicate the generic need to treat digital infrastructure as an ongoing strategic issue.
Abstract:
We argue the case for a new branch of mathematics and its applications: Mathematics for the Digital Society. There is a challenge for mathematics, a strong “pull” from new and emerging commercial and public activities; and a need to train and inspire a generation of quantitative scientists who will seek careers within the associated sectors. Although now going through an early phase of boiling up, prior to scholarly distillation, we discuss how data rich activities and applications may benefit from a wide range of continuous and discrete models, methods, analysis and inference. In ten years' time such applications will be commonplace and associated courses may be embedded within the undergraduate curriculum.
Abstract:
The ASTER Global Digital Elevation Model (GDEM) has made elevation data at 30 m spatial resolution freely available, enabling reinvestigation of morphometric relationships derived from limited field data using much larger sample sizes. These data are used to analyse a range of morphometric relationships derived for dunes (between dune height, spacing, and equivalent sand thickness) in the Namib Sand Sea, which was chosen because there are a number of extant studies that could be used for comparison with the results. The relative accuracy of GDEM for capturing dune height and shape was tested against multiple individual ASTER DEM scenes and against field surveys, highlighting the smoothing of the dune crest and resultant underestimation of dune height, and the omission of the smallest dunes, because of the 30 m sampling of ASTER DEM products. It is demonstrated that morphometric relationships derived from GDEM data are broadly comparable with relationships derived by previous methods, across a range of different dune types. The data confirm patterns of dune height, spacing and equivalent sand thickness mapped previously in the Namib Sand Sea, but add new detail to these patterns.
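A profile-based sketch of the morphometry involved: picking crests and troughs along a DEM transect yields dune height and crest-to-crest spacing. This is a hypothetical minimal version for illustration; the study's measurements were made on the full 2-D GDEM, where exactly this 30 m sampling smooths crests and underestimates height.

```python
import numpy as np

def dune_metrics(profile, cell=30.0):
    """Crest heights and crest-to-crest spacings (m) from a 1-D DEM transect.
    cell is the ground sampling distance, e.g. 30 m for ASTER GDEM."""
    z = np.asarray(profile, dtype=float)
    crests = [i for i in range(1, len(z) - 1) if z[i - 1] < z[i] > z[i + 1]]
    troughs = np.array([i for i in range(1, len(z) - 1)
                        if z[i - 1] > z[i] < z[i + 1]])
    heights = []
    for c in crests:
        flanks = []                      # base level from flanking troughs
        left, right = troughs[troughs < c], troughs[troughs > c]
        if left.size:
            flanks.append(z[left[-1]])
        if right.size:
            flanks.append(z[right[0]])
        heights.append(z[c] - np.mean(flanks))
    spacing = np.diff(crests) * cell
    return np.array(heights), spacing
```

On a synthetic sinusoidal dune field the recovered height and spacing match the generating wave, which is the kind of consistency check the GDEM results were subjected to against individual ASTER DEM scenes and field surveys.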
Abstract:
This paper discusses many of the issues associated with formally publishing data in academia, focusing primarily on the structures that need to be put in place for peer review and formal citation of datasets. Data publication is becoming increasingly important to the scientific community, as it will provide a mechanism for those who create data to receive academic credit for their work and will allow the conclusions arising from an analysis to be more readily verifiable, thus promoting transparency in the scientific process. Peer review of data will also provide a mechanism for ensuring the quality of datasets, and we provide suggestions on the types of activities one expects to see in the peer review of data. A simple taxonomy of data publication methodologies is presented and evaluated, and the paper concludes with a discussion of dataset granularity, transience and semantics, along with a recommended human-readable citation syntax.
Abstract:
Active robot force control requires some form of dynamic inner loop control for stability. The author considers the implementation of position-based inner loop control on an industrial robot fitted with encoders only. It is shown that high gain velocity feedback for such a robot, which is effectively stationary when in contact with a stiff environment, involves problems beyond the usual caveats on the effects of unknown environment stiffness. It is shown that it is possible for the controlled joint to become chaotic at very low velocities if encoder edge timing data are used for velocity measurement. The results obtained indicate that there is a lower limit on controlled velocity when encoders are the only means of joint measurement. This lower limit to speed is determined by the desired amount of loop gain, which is itself determined by the severity of the nonlinearities present in the drive system.
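The edge-timing velocity estimate analysed above has this basic form; the encoder resolution below is an assumed example, not a figure from the paper.

```python
import math

def edge_timing_velocity(edge_times, counts_per_rev=2000):
    """Angular velocity (rad/s) from intervals between successive encoder
    edges; one edge corresponds to 2*pi/counts_per_rev radians."""
    step = 2.0 * math.pi / counts_per_rev
    return [step / (t1 - t0) for t0, t1 in zip(edge_times, edge_times[1:])]
```

The sketch makes the low-speed problem visible: as the joint slows, the interval between edges grows without bound, so the estimate is refreshed ever more rarely and in coarse quantized steps. Fed back through a high velocity gain, these jumps dominate the control signal, which is the mechanism behind the chaotic behaviour the author reports.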
Abstract:
Written for communications and electronic engineers, technicians and students, this book begins with an introduction to data communications, and goes on to explain the concept of layered communications. Other chapters deal with physical communications channels, baseband digital transmission, analog data transmission, error control and data compression codes, physical layer standards, the data link layer, the higher layers of the protocol hierarchy, and local area networks (LANs). Finally, the book explores some likely future developments.
Abstract:
Climate modeling is a complex process, requiring accurate and complete metadata in order to identify, assess and use climate data stored in digital repositories. The preservation of such data is increasingly important given the development of increasingly complex models to predict the effects of global climate change. The EU METAFOR project has developed a Common Information Model (CIM) to describe climate data and the models and modelling environments that produce this data. There is a wide degree of variability between different climate models and modelling groups. To accommodate this, the CIM has been designed to be highly generic and flexible, with extensibility built in. METAFOR describes the climate modelling process simply as "an activity undertaken using software on computers to produce data." This process has been described as separate UML packages (and, ultimately, XML schemas). This fairly generic structure can be paired with more specific "controlled vocabularies" in order to restrict the range of valid CIM instances. The CIM will aid digital preservation of climate models as it will provide an accepted standard structure for the model metadata. Tools to write and manage CIM instances, and to allow convenient and powerful searches of CIM databases, are also under development. Community buy-in of the CIM has been achieved through a continual process of consultation with the climate modelling community, and through the METAFOR team's development of a questionnaire that will be used to collect the metadata for the Intergovernmental Panel on Climate Change's (IPCC) Coupled Model Intercomparison Project Phase 5 (CMIP5) model runs.
Abstract:
With the advent of mass digitization projects, such as the Google Book Search, a peculiar shift has occurred in the way that copyright works are dealt with. Contrary to what has so far been the case, works are turned into machine-readable data to be automatically processed for various purposes without the expression of works being displayed to the public. In the Google Book Settlement Agreement, this new kind of usage is referred to as ‘non-display uses’ of digital works. The legitimacy of these uses has not yet been tested by Courts and does not comfortably fit in the current copyright doctrine, plainly because the works are not used as works but as something else, namely as data. Since non-display uses may prove to be a very lucrative market in the near future, with the potential to affect the way people use copyright works, we examine non-display uses under the prism of copyright principles to determine the boundaries of their legitimacy. Through this examination, we provide a categorization of the activities carried out under the heading of ‘non-display uses’, we examine their lawfulness under the current copyright doctrine and approach the phenomenon from the spectrum of data protection law that could apply, by analogy, to the use of copyright works as processable data.