978 results for Geospatial data
Abstract:
Multibeam data were collected without operator supervision on R/V Polarstern cruise ANT-XVI/3 along track lines of approximately 6700 NM. Data were acquired during transits and stationary work in the Weddell Sea off the Ekstrom Ice Shelf and the Jelbart Ice Shelf, and in the South Atlantic Ocean. An area of 140 x 140 km was surveyed with 15 km transect spacing at about 49.5°S and 20°E. The multibeam sonar system Hydrosweep DS-2 was operated with 59 beams and a 90° aperture angle. Data quality may be reduced during periods of bad weather or adverse sea ice conditions. The dataset contains raw, unprocessed data and may therefore contain errors and blunders in depth and position.
Abstract:
Multibeam data were measured during R/V Polarstern cruise ANT-XXII/3 along track lines of approximately 8000 NM total length during transits and partly during stationary work. Data were acquired on a transect along the Greenwich meridian, across the Weddell Sea from Kapp Norvegia to Joinville Island, across the Powell Basin, as well as in the Drake Passage and west of the Antarctic Peninsula. Short bathymetric surveys were carried out on the continental slope off Kapp Norvegia and Fimbulisen, and in the area of the Weddell Abyssal Plain. The multibeam sonar system Hydrosweep DS-2 was operated mainly in the HDBE softbeam mode with 240 depth values per swath and a receiving coverage of 100°. The refraction correction was applied using CTD profiles or the system's own cross-fan calibration. Data quality may be reduced during periods of bad weather or adverse sea ice conditions. The dataset contains raw, unprocessed data and may therefore contain errors and blunders in depth and position.
Abstract:
Multibeam data were measured during R/V Polarstern cruise ANT-XIX/1 on track lines of about 5,200 NM total length in the Atlantic Ocean during the transit from Bremerhaven to Cape Town. The multibeam sonar system Hydrosweep DS-2 was operated with 59 beams and a 90° aperture angle. The refraction correction was applied using the system's own cross-fan calibration. Data quality may be reduced during periods of bad weather. The dataset contains raw, unprocessed data and may therefore contain errors and blunders in depth and position.
Abstract:
Recent developments in service-oriented and distributed computing have created exciting opportunities for the integration of models in service chains to create the Model Web. This offers the potential for orchestrating web data and processing services in complex chains, a flexible approach that exploits the increased access to products and tools, and the scalability offered by the Web. However, the uncertainty inherent in data and models must be quantified and communicated in an interoperable way in order for its effects to be effectively assessed as errors propagate through complex automated model chains. We describe a proposed set of tools for handling, characterizing and communicating uncertainty in this context, and show how they can be used to 'uncertainty-enable' Web Services in a model chain. An example implementation is presented, which combines environmental and publicly contributed data to produce estimates of sea-level air pressure, with estimates of uncertainty that incorporate the effects of model approximation as well as the uncertainty inherent in the observational and derived data.
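As a rough illustration of the idea behind such uncertainty-enabled chains (not the paper's actual services or encodings), the following Python sketch propagates observational uncertainty through a two-step model chain by Monte Carlo sampling; the service functions, error magnitudes and the barometric reduction step are hypothetical stand-ins.

    import numpy as np

    rng = np.random.default_rng(42)

    def interpolate_pressure(station_pressure_hpa):
        # Hypothetical "service" 1: spatial interpolation, whose model-approximation
        # error is represented here as additive Gaussian noise.
        return station_pressure_hpa + rng.normal(0.0, 0.3, size=station_pressure_hpa.shape)

    def reduce_to_sea_level(pressure_hpa, elevation_m, temperature_c):
        # Hypothetical "service" 2: barometric reduction of station pressure to sea level.
        temperature_k = temperature_c + 273.15
        return pressure_hpa * np.exp(elevation_m / (29.3 * temperature_k))

    # Uncertain inputs, each described by a mean and a standard deviation:
    # observed pressure, station elevation and temperature.
    n_samples = 10_000
    pressure = rng.normal(990.0, 0.5, n_samples)     # hPa
    elevation = rng.normal(120.0, 2.0, n_samples)    # m
    temperature = rng.normal(15.0, 1.0, n_samples)   # deg C

    # Push the Monte Carlo ensemble through the chain; the spread of the output
    # sample summarises the uncertainty propagated through both steps.
    sea_level = reduce_to_sea_level(interpolate_pressure(pressure), elevation, temperature)
    print(f"sea-level pressure: {sea_level.mean():.2f} +/- {sea_level.std():.2f} hPa")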
Abstract:
Remote sensing data are routinely used in ecology to investigate the relationship between landscape pattern, as characterised by land use and land cover maps, and ecological processes. Multiple factors related to the representation of geographic phenomena have been shown to affect the characterisation of landscape pattern, resulting in spatial uncertainty. This study statistically investigated the effect of the interaction between landscape spatial pattern and geospatial processing methods, unlike most papers, which consider the effect of each factor only in isolation. This is important since data used to calculate landscape metrics typically undergo a series of data abstraction processing tasks that are rarely performed in isolation. The geospatial processing methods tested were the aggregation method and the choice of pixel size used to aggregate data. These were compared with two components of landscape pattern: spatial heterogeneity and the proportion of land cover class area. The interactions and their effect on the final land cover map were described using landscape metrics to measure landscape pattern and classification accuracy (the response variables). All landscape metrics and classification accuracy were shown to be affected by both landscape pattern and processing methods. Large variability in the response of those variables and interactions between the explanatory variables were observed. However, even though interactions occurred, they only affected the magnitude of the difference in landscape metric values. Thus, provided that the same processing methods are used, landscapes should retain their ranking when their landscape metrics are compared. For example, highly fragmented landscapes will always have larger values for the landscape metric "number of patches" than less fragmented landscapes. But the magnitude of the difference between landscapes may change, and absolute values of landscape metrics may therefore need to be interpreted with caution. The explanatory variables with the largest effects were spatial heterogeneity and pixel size; these tended to produce large main effects and large interactions. The high variability in the response variables and the interaction of the explanatory variables indicate that it would be difficult to make generalisations about the impact of processing on landscape pattern, as only two processing methods were tested and untested processing methods will potentially result in even greater spatial uncertainty. © 2013 Elsevier B.V.
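As a minimal, self-contained sketch of the kind of interaction the study quantifies (not its actual data or workflow), the following Python example aggregates a synthetic binary land cover map to coarser pixel sizes with a majority rule and reports how one landscape metric, the number of patches, changes; the map, aggregation factors and 4-connectivity rule are illustrative assumptions.

    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(0)

    def number_of_patches(binary_map):
        # Count 4-connected patches of the focal class (value 1).
        _, n_patches = ndimage.label(binary_map)
        return n_patches

    def aggregate_majority(binary_map, factor):
        # Majority-rule aggregation: each factor x factor block becomes one coarse
        # pixel whose class is the majority class within the block.
        h, w = binary_map.shape
        blocks = binary_map[: h - h % factor, : w - w % factor]
        blocks = blocks.reshape(h // factor, factor, w // factor, factor)
        return (blocks.mean(axis=(1, 3)) > 0.5).astype(int)

    # Synthetic fragmented landscape: 30% cover of the focal class.
    fine_map = (rng.random((240, 240)) < 0.3).astype(int)

    for factor in (1, 2, 4, 8):
        coarse = fine_map if factor == 1 else aggregate_majority(fine_map, factor)
        print(f"pixel size x{factor}: {number_of_patches(coarse)} patches")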
Abstract:
Modern geographical databases, which are at the core of geographic information systems (GIS), store a rich set of aspatial attributes in addition to geographic data. Typically, aspatial information comes in textual and numeric form. Retrieving information constrained on both spatial and aspatial data from geodatabases gives GIS users the ability to perform more interesting spatial analyses and lets applications support composite location-aware searches; for example, in a real estate database: “Find the homes for sale nearest to my current location that have a backyard and whose prices are between $50,000 and $80,000”. Efficient processing of such queries requires combined indexing strategies over multiple types of data. Existing spatial query engines commonly apply a two-filter approach (a spatial filter followed by a nonspatial filter, or vice versa), which can incur large performance overheads. At the same time, the amount of geolocation data in databases has grown rapidly, due in part to advances in geolocation technologies (e.g., GPS-enabled smartphones) that allow users to associate location data with objects or events. This growth poses data ingestion challenges for practical GIS databases handling large data volumes. In this dissertation, we first show how indexing spatial data with R-trees (a typical data pre-processing task) can be scaled with MapReduce, a widely adopted parallel programming model for data-intensive problems. The evaluation of our algorithms in a Hadoop cluster showed close to linear scalability in building R-tree indexes. Subsequently, we develop efficient algorithms for processing spatial queries with aspatial conditions. Novel techniques for simultaneously indexing spatial, textual and numeric data are developed to that end. Experimental evaluations with large, real-world spatial datasets measured query response times within the sub-second range for most cases, and up to a few seconds for a small number of cases, which is reasonable for interactive applications. Overall, these results show that the MapReduce parallel model is suitable for indexing tasks in spatial databases, and that an adequate combination of spatial and aspatial attribute indexes can attain acceptable response times for interactive spatial queries with constraints on aspatial data.
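The following sketch is not the dissertation's combined index or its MapReduce algorithms; it merely illustrates, with the Python rtree package and a handful of made-up records, the two-filter pattern the abstract contrasts against: a spatial filter via an R-tree followed by an aspatial refinement on price and a backyard attribute.

    from rtree import index

    # Toy real-estate records: id -> (lon, lat, price_usd, has_backyard).
    homes = {
        1: (-97.74, 30.27, 75_000, True),
        2: (-97.70, 30.25, 120_000, True),
        3: (-97.76, 30.30, 60_000, False),
        4: (-97.73, 30.28, 55_000, True),
    }

    # Build an R-tree over point geometries (degenerate bounding boxes).
    idx = index.Index()
    for home_id, (lon, lat, _, _) in homes.items():
        idx.insert(home_id, (lon, lat, lon, lat))

    # Spatial filter: nearest candidates to the query location...
    query_location = (-97.75, 30.28, -97.75, 30.28)
    candidates = idx.nearest(query_location, num_results=10)

    # ...followed by the aspatial filter on price range and backyard attribute.
    results = [
        home_id
        for home_id in candidates
        if 50_000 <= homes[home_id][2] <= 80_000 and homes[home_id][3]
    ]
    print("matching homes:", results)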
Abstract:
Over 50% of the world's population live within 3 km of rivers and lakes, highlighting the ongoing importance of freshwater resources to human health and societal well-being. Although the world's standing waters (natural lakes and reservoirs) cover c. 3.5% of the Earth's non-glaciated land mass, trends in their environmental quality are poorly understood, at least in comparison with rivers, and so evaluation of their current condition and sensitivity to change are global priorities. Here it is argued that a geospatial approach harnessing existing global datasets, along with new-generation remote sensing products, offers the basis to characterise trajectories of change in lake properties, e.g. water quality, physical structure, hydrological regime and ecological behaviour. This approach furthermore provides the evidence base to understand the relative importance of climatic forcing and/or changing catchment processes; land cover and soil moisture data, coupled with climate data, provide the basis to model regional water balance and runoff estimates over time. Using examples derived primarily from the Danube Basin but also from other parts of the world, we demonstrate the power of the approach and its utility for assessing the sensitivity of lake systems to environmental change, and hence for better managing these key resources in the future.
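As a toy illustration of characterising a trajectory of change in one lake property (with made-up numbers, not the datasets used here), the sketch below fits a linear trend to a hypothetical annual lake-surface water temperature series derived from remote sensing.

    import numpy as np

    # Hypothetical annual mean lake-surface water temperature (deg C) for one lake,
    # as might be derived from a remote sensing product.
    years = np.arange(2000, 2020)
    lswt = 12.0 + 0.04 * (years - 2000) + np.random.default_rng(1).normal(0, 0.3, years.size)

    # Least-squares linear trend as a first-order trajectory of change.
    slope, intercept = np.polyfit(years, lswt, deg=1)
    print(f"warming trend: {slope * 10:.2f} deg C per decade")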
Abstract:
The main focus of this thesis was to gain a better understanding of the dynamics of risk perception and its influence on people's evacuation behavior. Another major focus was to improve our knowledge of geospatial and temporal variations in risk perception and hurricane evacuation behavior. A longitudinal dataset of more than eight hundred households was collected following two major hurricane events, Ivan and Katrina. The longitudinal survey data were geocoded and integrated with a geospatial database composed of distance, elevation and hazard parameters with respect to each respondent's household location. A set of Bivariate Probit (BP) models suggests that geospatial variables had significant influence in explaining hurricane risk perception and evacuation behavior during both hurricanes. The findings also indicated that people made their evacuation decisions in coherence with their risk perception. In addition, people updated their hurricane evacuation decisions in a subsequent similar event.
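The thesis's exact model specification and data are not reproduced here; the sketch below only illustrates, with synthetic data, how a bivariate probit likelihood for two correlated binary outcomes (risk perceived, evacuated) and a single hypothetical geospatial covariate can be set up and maximised in Python.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import multivariate_normal

    rng = np.random.default_rng(7)

    # Synthetic data: one covariate (e.g. standardised distance to coast) and two
    # correlated binary outcomes.
    n = 300
    x = rng.normal(size=n)
    X = np.column_stack([np.ones(n), x])
    errors = rng.multivariate_normal([0, 0], [[1.0, 0.5], [0.5, 1.0]], size=n)
    y1 = (0.3 - 0.8 * x + errors[:, 0] > 0).astype(int)   # risk perceived
    y2 = (0.1 - 0.6 * x + errors[:, 1] > 0).astype(int)   # evacuated

    def neg_log_likelihood(params):
        # Bivariate probit: each observation contributes the bivariate normal
        # orthant probability Phi2(q1*Xb1, q2*Xb2; q1*q2*rho), with qj = 2*yj - 1.
        b1, b2, rho = params[0:2], params[2:4], np.tanh(params[4])
        q1, q2 = 2 * y1 - 1, 2 * y2 - 1
        a, b = q1 * (X @ b1), q2 * (X @ b2)
        ll = 0.0
        for ai, bi, ri in zip(a, b, q1 * q2 * rho):
            p = multivariate_normal.cdf([ai, bi], mean=[0.0, 0.0],
                                        cov=[[1.0, ri], [ri, 1.0]])
            ll += np.log(max(p, 1e-300))
        return -ll

    fit = minimize(neg_log_likelihood, x0=np.zeros(5), method="Nelder-Mead")
    print("betas:", fit.x[:4].round(2), "rho:", np.tanh(fit.x[4]).round(2))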
Abstract:
Big Data Analytics is an emerging field, as massive storage and computing capabilities have been made available by advanced e-infrastructures. The Earth and environmental sciences are likely to benefit from Big Data Analytics techniques that support the processing of the large number of Earth Observation datasets currently acquired and generated through observations and simulations. However, Earth Science data and applications present specific challenges in terms of the relevance of geospatial information, the wide heterogeneity of data models and formats, and the complexity of processing. Big Earth Data Analytics therefore requires specifically tailored techniques and tools. The EarthServer Big Earth Data Analytics engine offers a solution for coverage-type datasets, built around high-performance array database technology and the adoption and enhancement of standards for service interaction (OGC WCS and WCPS). The EarthServer solution, driven by requirements collected from scientific communities and international initiatives, provides a holistic approach that ranges from query languages and scalability up to mobile access and visualization. The result is demonstrated and validated through the development of lighthouse applications in the Marine, Geology, Atmospheric, Planetary and Cryospheric science domains.
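As an illustration of the service-interaction style the abstract refers to, the sketch below sends a WCPS query to a WCS endpoint over HTTP; the endpoint URL, coverage name and subset are placeholders rather than EarthServer's actual services, and the exact request parameters may differ between deployments.

    import requests

    # Hypothetical array-database OGC endpoint.
    ENDPOINT = "https://example.org/rasdaman/ows"

    # WCPS query: average of a (hypothetical) sea-surface temperature coverage
    # over one spatio-temporal subset, encoded as CSV.
    wcps_query = """
    for $c in (SeaSurfaceTemperature)
    return encode(
        avg($c[Lat(50:60), Long(-10:0), ansi("2015-01-01":"2015-12-31")]),
        "csv")
    """

    response = requests.get(
        ENDPOINT,
        params={
            "service": "WCS",
            "version": "2.0.1",
            "request": "ProcessCoverages",
            "query": wcps_query,
        },
        timeout=60,
    )
    response.raise_for_status()
    print(response.text)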
Abstract:
In today's world, geospatial information is demanded in near real time, which requires the speed at which these data are processed and made available to users to be higher than ever. To keep up with this ever-increasing pace, analysts must find ways to increase their productivity. At the same time, demand for new analysts is high, and current training methods are long and can be costly. Through the use of human-computer interaction and basic networking systems, this paper explores new ways to increase efficiency in data processing and analyst training.