853 results for Maps Data processing
Abstract:
The advancement of GPS technology has made it possible to use GPS devices not only as orientation and navigation tools, but also as tools to track spatiotemporal information. GPS tracking data can be broadly applied in location-based services, such as studying the spatial distribution of economic activity, transportation routing and planning, traffic management and environmental control. Therefore, knowledge of how to process the data from a standard GPS device is crucial for further use. Previous studies have considered various issues of this data processing in isolation. This paper, however, aims to outline a general procedure for processing GPS tracking data. The procedure is illustrated step by step by processing real-world GPS data of car movements in Borlänge, in central Sweden.
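As a rough illustration of the kind of pre-processing such a procedure involves, the sketch below cleans a raw GPS track by dropping fixes that imply an implausible speed; the field names and the 60 m/s threshold are illustrative assumptions, not the paper's choices.

```python
import math
from dataclasses import dataclass

@dataclass
class Fix:
    t: float    # timestamp, seconds
    lat: float  # degrees
    lon: float  # degrees

def haversine_m(a: Fix, b: Fix) -> float:
    """Great-circle distance between two fixes in metres."""
    R = 6371000.0
    p1, p2 = math.radians(a.lat), math.radians(b.lat)
    dp = p2 - p1
    dl = math.radians(b.lon - a.lon)
    h = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(h))

def clean_track(fixes, max_speed_mps=60.0):
    """Keep fixes in time order, dropping duplicates and speed outliers."""
    kept = []
    for f in sorted(fixes, key=lambda x: x.t):
        if kept:
            dt = f.t - kept[-1].t
            if dt <= 0 or haversine_m(kept[-1], f) / dt > max_speed_mps:
                continue  # duplicate timestamp or implausible jump
        kept.append(f)
    return kept
```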
Abstract:
Serving as a powerful tool for extracting localized variations in non-stationary signals, wavelet transforms (WTs) have been introduced into traffic engineering applications; however, some important theoretical fundamentals are still lacking. In particular, there is little guidance on selecting an appropriate WT across potential transport applications. The research described in this paper contributes uniquely to the literature by first describing a numerical experiment that demonstrates the shortcomings of commonly used data processing techniques in traffic engineering (i.e., averaging, moving averaging, second-order differencing, oblique cumulative curves, and the short-time Fourier transform). It then mathematically describes the WT's ability to detect singularities in traffic data. Next, the selection of a suitable WT for a particular research topic in traffic engineering is discussed in detail by objectively and quantitatively comparing candidate wavelets' performance in a numerical experiment. Finally, based on several case studies using both loop detector data and vehicle trajectories, it is shown that selecting a suitable wavelet largely depends on the specific research topic, and that the Mexican hat wavelet generally gives a satisfactory performance in detecting singularities in traffic and vehicular data.
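As a hint of how such singularity detection works in practice, the following minimal sketch implements a continuous wavelet transform with a sampled Mexican hat wavelet in plain NumPy; the scales, the toy speed series and the step-change example are illustrative assumptions rather than the paper's experiments.

```python
import numpy as np

def mexican_hat(width, scale):
    """Sampled Mexican hat (Ricker) wavelet of a given scale."""
    t = np.arange(width) - (width - 1) / 2.0
    x = t / scale
    return (1.0 - x**2) * np.exp(-x**2 / 2.0)

def cwt_mexh(signal, scales):
    """Continuous wavelet transform: one row of coefficients per scale."""
    out = np.empty((len(scales), len(signal)))
    for i, s in enumerate(scales):
        w = mexican_hat(min(10 * int(s), len(signal)), s)
        out[i] = np.convolve(signal, w, mode="same")
    return out

# Toy example: an abrupt drop in 'speed' produces large coefficients near the change point.
speed = np.concatenate([np.full(200, 80.0), np.full(200, 30.0)]) + np.random.randn(400)
coeffs = cwt_mexh(speed, scales=[4, 8, 16])
change_point = int(np.argmax(np.abs(coeffs[2])))  # close to index 200
```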
Abstract:
This paper describes a safety data recording and analysis system that has been developed to capture safety occurrences, including precursors, using high-definition forward-facing video from train cabs and data from other train-borne systems. The paper describes the data processing model and how events detected through data analysis are related to an underlying socio-technical model of accident causation. The integrated approach to safety data recording and analysis ensures that systemic factors which condition, influence or potentially contribute to an occurrence are captured for both safety occurrences and precursor events, providing a rich tapestry of antecedent causal factors that can significantly improve learning around accident causation. This can ultimately benefit railways through the development of targeted and more effective countermeasures, better risk models and more effective use and prioritization of safety funds. Level crossing occurrences are a key focus in this paper, with data analysis scenarios describing causal factors around near-miss occurrences. The paper concludes with a discussion of how the system can also be applied to other types of railway safety occurrences.
Abstract:
Increasingly large-scale applications are generating an unprecedented amount of data. However, the growing gap between computation and I/O capacity on high-end computing (HEC) machines creates a severe bottleneck for data analysis. Instead of moving data from its source to the output storage, in-situ analytics processes output data while simulations are running. However, in-situ data analysis incurs considerably more computing resource contention with the simulations, and such contention severely degrades simulation performance on HEC platforms. Since different data processing strategies have different impacts on performance and cost, there is a consequent need for flexibility in the placement of data analytics. In this paper, we explore and analyze several potential data-analytics placement strategies along the I/O path. To determine the best strategy for reducing data movement in a given situation, we propose a flexible data analytics (FlexAnalytics) framework. Based on this framework, a FlexAnalytics prototype system is developed for analytics placement. The FlexAnalytics system enhances the scalability and flexibility of the current I/O stack on HEC platforms and is useful for data pre-processing, runtime data analysis and visualization, as well as for large-scale data transfer. Two use cases – scientific data compression and remote visualization – are applied in the study to verify the performance of FlexAnalytics. Experimental results demonstrate that the FlexAnalytics framework increases data transfer bandwidth and improves application end-to-end transfer performance.
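A back-of-the-envelope model makes the placement trade-off concrete; the throughputs and compression ratio below are illustrative placeholders, not FlexAnalytics' measured values or its actual decision logic.

```python
def transfer_time_s(data_gb, bandwidth_gbps,
                    compress=False, compress_gbps=2.0, ratio=0.3):
    """Rough end-to-end time for moving simulation output off the compute nodes.

    If `compress` is True, data is reduced in situ before transfer; the
    compression throughput and ratio here are hypothetical numbers.
    """
    if not compress:
        return data_gb / bandwidth_gbps
    return data_gb / compress_gbps + (data_gb * ratio) / bandwidth_gbps

# In-situ reduction pays off when the I/O path, not the CPU, is the bottleneck.
raw = transfer_time_s(100, bandwidth_gbps=1.0)                      # 100 s
insitu = transfer_time_s(100, bandwidth_gbps=1.0, compress=True)    # 80 s
```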
Abstract:
During the last decades there has been a global shift in forest management from a focus solely on timber management to ecosystem management that endorses all aspects of forest functions: ecological, economic and social. This has resulted in a paradigm shift from sustained yield to the sustained diversity of values, goods and benefits obtained at the same time, introducing new temporal and spatial scales into forest resource management. The purpose of the present dissertation was to develop methods that enable these spatial and temporal scales to be introduced into the storage, processing, access and utilization of forest resource data. The methods developed are based on a conceptual view of a forest as a hierarchically nested collection of objects that can have a dynamically changing set of attributes. The temporal aspect of the methods consists of lifetime management for the objects and their attributes and of a temporal succession linking the objects together. Development of the forest resource data processing method concentrated on the extensibility and configurability of the data content and model calculations, allowing a diverse set of processing operations to be executed within the same framework. The contribution of this dissertation to the utilisation of multi-scale forest resource data lies in the development of a reference data generation method to support forest inventory methods in approaching single-tree resolution.
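A minimal sketch of such a data model, using hypothetical class and field names rather than the dissertation's implementation, could look like this: nested objects whose attributes carry validity intervals and whose succession links express temporal change.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Attribute:
    name: str
    value: float
    valid_from: int                 # e.g. inventory year
    valid_to: Optional[int] = None  # None = still current

@dataclass
class ForestObject:
    """A node in the hierarchy, e.g. region -> estate -> stand -> tree."""
    oid: str
    children: list = field(default_factory=list)
    attributes: list = field(default_factory=list)
    successor: Optional["ForestObject"] = None  # temporal succession link

    def value_at(self, name: str, year: int) -> Optional[float]:
        """Attribute value valid in the given year, if any."""
        for a in self.attributes:
            if a.name == name and a.valid_from <= year and (a.valid_to is None or year < a.valid_to):
                return a.value
        return None
```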
Abstract:
Remote sensing provides methods to infer land cover information over large geographical areas at a variety of spatial and temporal resolutions. Land cover is input data for a range of environmental models, and information on land cover dynamics is required for monitoring the implications of global change. Such data are also essential in support of environmental management and policymaking. Boreal forests are a key component of the global climate and a major sink of carbon. The northern latitudes are expected to experience a disproportionate and rapid warming, which can have a major impact on vegetation at forest limits. This thesis examines the use of optical remote sensing for estimating aboveground biomass, leaf area index (LAI), tree cover and tree height in the boreal forests and the tundra-taiga transition zone in Finland. Continuous fields of forest attributes are required, for example, to improve the mapping of forest extent. The thesis focuses on studying the feasibility of satellite data at multiple spatial resolutions, assessing the potential of multispectral, multiangular and multitemporal information, and provides a regional evaluation of global land cover data. Preprocessed ASTER, MISR and MODIS products are the principal satellite data. The reference data consist of field measurements, forest inventory data and fine resolution land cover maps. Fine resolution studies demonstrate that statistical relationships between biomass and satellite data are relatively strong in single-species, low-biomass mountain birch forests in comparison to higher-biomass coniferous stands. The combination of forest stand data and fine resolution ASTER images provides a method for biomass estimation using medium resolution MODIS data. Multiangular data improve the accuracy of land cover mapping in the sparsely forested tundra-taiga transition zone, particularly in mires. Similarly, multitemporal data improve the accuracy of coarse resolution tree cover estimates in comparison to single-date data. Furthermore, the peak of the growing season is not necessarily the optimal time for land cover mapping in the northern boreal regions. The evaluated coarse resolution land cover data sets have considerable shortcomings in northernmost Finland and should be used with caution in similar regions. Quantitative reference data and upscaling methods for integrating multiresolution data are required for calibrating statistical models and evaluating land cover data sets. The preprocessed image products have potential for wider use, as they can considerably reduce the time and effort spent on data processing.
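The general upscaling idea, fitting a statistical model against fine-resolution reference data and aggregating predictions to a coarser grid, can be sketched as follows; the band layout, block size and linear model are assumptions for illustration, not the thesis's actual estimators.

```python
import numpy as np

def fit_biomass_model(reflectance, biomass):
    """Least-squares fit of plot biomass (t/ha) against fine-resolution band reflectances."""
    X = np.column_stack([np.ones(len(biomass)), reflectance])  # intercept + bands
    coef, *_ = np.linalg.lstsq(X, biomass, rcond=None)
    return coef

def predict_and_aggregate(coef, fine_bands, block=4):
    """Predict biomass per fine pixel, then average over block x block windows
    to emulate a coarser (e.g. MODIS-like) grid."""
    h, w, nb = fine_bands.shape
    X = np.column_stack([np.ones(h * w), fine_bands.reshape(-1, nb)])
    fine_pred = (X @ coef).reshape(h, w)
    h2, w2 = h // block * block, w // block * block
    return fine_pred[:h2, :w2].reshape(h2 // block, block, w2 // block, block).mean(axis=(1, 3))
```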
The Intelligent Measuring Sub-System in the Computer Integrated and Flexible Laser Processing System
Abstract:
Based on the computer integrated and flexible laser processing system, an intelligent measuring sub-system was developed. A novel model has been built to compensate for deviations of the main frame, and a newly developed 3-D laser tracker system is applied to adjust the accuracy of the system. By analyzing the characteristics of the various automobile dies that are the main processing objects of the laser processing system, the types of surface and border that need to be measured and processed are classified. According to the different types of surface and border, a 2-D adaptive measuring method based on Bézier curves and a 3-D adaptive measuring method based on spline curves were developed. For the data processing, a new 3-D probe compensation method is described in detail. Measuring experiments and laser processing experiments are carried out to verify the methods. All the methods have been applied in the computer integrated and flexible laser processing system invented by the Institute of Mechanics, CAS.
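As background on the curve machinery involved, a minimal evaluation of a 2-D Bézier curve by de Casteljau's algorithm is sketched below; the control points and uniform sampling are illustrative, and the system's actual adaptive measuring method is not reproduced here.

```python
def bezier_point(ctrl, t):
    """Evaluate a 2-D Bézier curve at parameter t in [0, 1] (de Casteljau's algorithm)."""
    pts = [tuple(p) for p in ctrl]
    while len(pts) > 1:
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]

def sample_curve(ctrl, n=50):
    """Uniform parameter sampling; an adaptive scheme would refine where curvature is high."""
    return [bezier_point(ctrl, i / (n - 1)) for i in range(n)]

# Cubic example: four control points of a hypothetical die border segment.
border = sample_curve([(0, 0), (10, 25), (40, 25), (50, 0)])
```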
Abstract:
Statistical analysis of diffusion tensor imaging (DTI) data requires a computational framework that is both numerically tractable (to account for the high dimensional nature of the data) and geometric (to account for the nonlinear nature of diffusion tensors). Building upon earlier studies exploiting a Riemannian framework to address these challenges, the present paper proposes a novel metric and an accompanying computational framework for DTI data processing. The proposed approach grounds the signal processing operations in interpolating curves. Well-chosen interpolating curves are shown to provide a computational framework that is at the same time tractable and information-relevant for DTI processing. In addition, and in contrast to earlier methods, it provides an interpolation method which preserves anisotropy, a central piece of information carried by diffusion tensor data. © 2013 Springer Science+Business Media New York.
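For context, the widely used log-Euclidean interpolation of symmetric positive-definite tensors, which earlier Riemannian approaches rely on and which is not the paper's proposed metric, can be sketched as follows; the example tensors are illustrative.

```python
import numpy as np
from scipy.linalg import expm, logm

def interp_tensors_log_euclidean(D1, D2, t):
    """Interpolate two symmetric positive-definite diffusion tensors.

    Standard log-Euclidean scheme: a straight line in the matrix-logarithm domain.
    It avoids the determinant swelling of plain linear interpolation, but is known
    to reduce anisotropy along the path, which motivates anisotropy-preserving
    alternatives such as the one proposed in the paper.
    """
    L1, L2 = logm(D1), logm(D2)
    return expm((1.0 - t) * L1 + t * L2)

D1 = np.diag([3.0, 1.0, 1.0])   # strongly anisotropic tensor
D2 = np.diag([1.0, 1.0, 1.0])   # isotropic tensor
D_mid = interp_tensors_log_euclidean(D1, D2, 0.5)
```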
Abstract:
When used to determine the total electron content (TEC), which may be the most important ionospheric parameter, worldwide GPS observations have brought a revolutionary change to ionospheric science. Retrieving GPS TEC involves three data processing steps: (1) estimating the slant TEC from measurements of the GPS signals; (2) mapping the slant TEC into vertical TEC; and (3) interpolating the vertical TEC onto grid points. In this dissertation we focus our attention on the second step, the theory and method of mapping slant TEC into vertical TEC. This is conventionally done by multiplying the slant TEC by a mapping function, which is usually determined from a model of the electron density profile. Study of the vertical TEC mapping function is therefore of significance for GPS TEC measurement. This paper first briefly reviews the three steps of the GPS TEC retrieval process. We then compare vertical TEC mapping functions calculated from the electron density profiles of ionospheric models with those retrieved from worldwide GPS TEC observations, and we perform a statistical analysis of the observational mapping functions. The main work and results are as follows. 1. We calculated the vertical TEC mapping functions for both the single layer model (SLM) and the Chapman model, and discussed how the ionospheric height modulates the mapping functions. In the case of the SLM, we examine the control exerted on the mapping function by the ionospheric altitude, i.e., the layer height hipp, and find that the mapping function decreases rapidly as hipp increases. For the Chapman model we also study the control of the mapping function by the ionospheric altitude, indicated by the peak electron density height hmF2, and by the scale height H, which represents the thickness of the ionosphere. The mapping function likewise decreases rapidly as hmF2 increases, and it also decreases as H increases. 2. We then estimate mapping functions from GPS observations and compare them with those calculated from the electron density models. We first propose a new method to estimate mapping functions from GPS TEC data. This method is then used to retrieve the observational mapping function from both the slant TEC (TECS) provided by the International GPS Service (IGS) and the vertical TEC provided by the JPL Global Ionospheric Maps (GIMs). Comparing the observational mapping functions with those calculated from the SLM and Chapman electron density models, we find that the observational mapping functions are much smaller than the model mapping functions when the zenith angle is large enough. We attribute this to the effect of the plasmasphere, which lies above about 1000 km. 3. We statistically analyze the observational mapping functions, using data from 1999-2007, and reveal their climatological variations. The main results are as follows. (1) The observational mapping functions decrease markedly with the decline of solar activity, as represented by the F10.7 index. (2) In the annual variations of the observational mapping functions, a semiannual component is found at low latitudes, and remarkable seasonal variations at mid- and high latitudes. (3) The diurnal variation of the observational mapping functions is such that they are large in the daytime and small at night, becoming extremely small in the early morning before sunrise. (4) The observational mapping functions change with latitude: they are smaller at lower latitudes and larger at higher latitudes. All of the above variations of the observational mapping functions are explained by the existence of the plasmasphere, which changes more slowly with time and more rapidly with latitude than the ionosphere does. In summary, our study of the vertical TEC mapping function implies that the ionosphere height has a modulating effect on the mapping function. We first propose the concept of 'observational mapping functions' and provide a new method to calculate them. This is important for improving TEC mapping, and it may also make it possible to retrieve plasmaspheric information from GPS observations.
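The single layer model mapping function discussed in the abstract can be written explicitly; the sketch below uses the common obliquity form with an illustrative layer height, and reproduces the reported tendency of the mapping function to decrease as hipp increases.

```python
import numpy as np

def slm_mapping_function(zenith_deg, hipp_km=450.0, re_km=6371.0):
    """Single layer model (SLM) obliquity factor relating slant and vertical TEC.

    sin(z') = Re / (Re + hipp) * sin(z),  M(z) = 1 / cos(z').
    """
    z = np.radians(zenith_deg)
    sin_zp = re_km / (re_km + hipp_km) * np.sin(z)
    return 1.0 / np.sqrt(1.0 - sin_zp**2)

# Raising the assumed layer height lowers the mapping function at large zenith angles,
# consistent with the abstract's finding that M decreases as hipp increases.
m_low_layer = slm_mapping_function(70.0, hipp_km=350.0)   # ~2.20
m_high_layer = slm_mapping_function(70.0, hipp_km=550.0)  # ~1.99
```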
Abstract:
Huelse, M., Barr, D. R. W., Dudek, P.: Cellular Automata and non-static image processing for embodied robot systems on a massively parallel processor array. In: Adamatzky, A. et al. (eds.) AUTOMATA 2008, Theory and Applications of Cellular Automata. Luniver Press, 2008, pp. 504-510. Sponsorship: EPSRC
Abstract:
Plants exhibit different developmental strategies than animals; these are characterized by a tight linkage between environmental conditions and development. As plants have neither specialized sensory organs nor a nervous system, intercellular regulators are essential for their development. Recently, major advances have been made in understanding how intercellular regulation is achieved in plants on a molecular level. Plants use a variety of molecules for intercellular regulation: hormones are used as systemic signals that are interpreted at the individual-cell level; receptor peptide-ligand systems regulate local homeostasis; moving transcriptional regulators act in a switch-like manner over small and large distances. Together, these mechanisms coherently coordinate developmental decisions with resource allocation and growth.
Abstract:
BACKGROUND: Historically, only partial assessments of data quality have been performed in clinical trials, for which the most common method of measuring database error rates has been to compare the case report form (CRF) to database entries and count discrepancies. Importantly, errors arising from medical record abstraction and transcription are rarely evaluated as part of such quality assessments. Electronic Data Capture (EDC) technology has had a further impact, as the paper CRFs typically leveraged for quality measurement are not used in EDC processes. METHODS AND PRINCIPAL FINDINGS: The National Institute on Drug Abuse Treatment Clinical Trials Network has developed, implemented, and evaluated methodology for holistically assessing data quality in EDC trials. We characterize the average source-to-database error rate (14.3 errors per 10,000 fields) for the first year of use of the new evaluation method. This error rate was significantly lower than the average of published error rates for source-to-database audits, and was similar to CRF-to-database error rates reported in the published literature. We attribute this largely to the absence of medical record abstraction on the trials we examined, and to an outpatient setting characterized by less acute patient conditions. CONCLUSIONS: Historically, medical record abstraction has been the most significant source of error by an order of magnitude, and it should be measured and managed during the course of clinical trials. Source-to-database error rates are highly dependent on the amount of structured data collection in the clinical setting and on the complexity of the medical record, dependencies that should be considered when developing data quality benchmarks.
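As a simple illustration of how such a rate is expressed, the sketch below converts an audit count into errors per 10,000 fields with a normal-approximation confidence interval; the counts are illustrative values that happen to yield 14.3 per 10,000 fields, not data from the trials.

```python
import math

def error_rate_per_10k(n_errors, n_fields):
    """Point estimate and a 95% normal-approximation interval, per 10,000 fields."""
    p = n_errors / n_fields
    se = math.sqrt(p * (1 - p) / n_fields)
    return (p * 10_000,
            max(0.0, (p - 1.96 * se) * 10_000),
            (p + 1.96 * se) * 10_000)

# Hypothetical audit: 143 discrepancies in 100,000 audited fields -> 14.3 per 10,000.
rate, lower, upper = error_rate_per_10k(143, 100_000)
```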