924 results for spatial data analysis
Abstract:
Polistine wasps are important in Neotropical ecosystems due to their ubiquity and diversity, yet inventories have not adequately considered the spatial attributes of collected specimens. Spatial data on biodiversity are important for studying and mitigating anthropogenic impacts on natural ecosystems and for protecting species. We described and analyzed local-scale spatial patterns in the collecting records of wasp species, as well as the spatial variation of diversity descriptors, in a 2500-hectare area of Amazon forest in Brazil. Rare species comprised the largest fraction of the fauna. Close-range spatial effects were detected for most of the more common species, with clustering of presence data at short distances. Larger spatial-lag effects could also be identified in some species, probably constituting cases of exogenous autocorrelation and candidates for explanations based on environmental factors. In a few cases, significant or near-significant correlations were found between five species (of Agelaia, Angiopolybia, and Mischocyttarus) and three studied environmental variables: distance to the nearest stream, terrain altitude, and type of forest canopy. However, associations between these factors and biodiversity variables were generally low. When they were used as predictors of polistine richness in a multiple linear regression, only the coefficient for the forest-canopy variable was significant. Some level of prediction of wasp diversity variables can therefore be attained from environmental variables, especially vegetation structure. Large-scale landscape and regional studies should be planned to address this issue.
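As a purely illustrative sketch of the final analysis step named in this abstract (a multiple linear regression of richness on the three environmental variables), the following Python snippet uses hypothetical file and column names and is not the authors' code:

```python
# Minimal sketch (not the authors' code): regressing polistine species richness
# on three environmental predictors, as described in the abstract.
# File and column names (richness, dist_stream, altitude, canopy) are hypothetical;
# canopy is assumed to be numerically coded.
import pandas as pd
import statsmodels.api as sm

plots = pd.read_csv("wasp_plots.csv")           # hypothetical plot-level table
X = sm.add_constant(plots[["dist_stream", "altitude", "canopy"]])
y = plots["richness"]

model = sm.OLS(y, X).fit()
print(model.summary())    # inspect which coefficients are significant
```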
Abstract:
Whether for investigative or intelligence purposes, crime analysts often face the need to analyse the spatiotemporal distribution of crimes or of traces left by suspects. This article presents a visualisation methodology supporting recurrent practical analytical tasks, such as the detection of crime series or the analysis of traces left by digital devices such as mobile phones or GPS units. The proposed approach has led to the development of a dedicated tool that has proven its effectiveness in real inquiries and intelligence practice. It supports a more fluent visual analysis of the collected data and may provide critical clues to support police operations, as exemplified by the presented case studies.
Abstract:
The paper deals with the development and application of a generic methodology for the automatic processing (mapping and classification) of environmental data. The General Regression Neural Network (GRNN) is considered in detail and is proposed as an efficient tool for spatial data mapping (regression). The Probabilistic Neural Network (PNN) is considered as an automatic tool for spatial classification. The automatic tuning of isotropic and anisotropic GRNN/PNN models using a cross-validation procedure is presented. Results are compared with the k-Nearest-Neighbours (k-NN) interpolation algorithm using an independent validation data set. Real case studies are based on decision-oriented mapping and classification of radioactively contaminated territories.
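The GRNN named above is, in essence, Nadaraya-Watson kernel regression whose kernel width is tuned by cross-validation; a minimal, illustrative Python sketch (not the paper's implementation, with training and query arrays assumed given) is:

```python
# Illustrative sketch of an isotropic GRNN (Nadaraya-Watson kernel regression)
# with the kernel width sigma tuned by leave-one-out cross-validation.
import numpy as np
from scipy.spatial.distance import cdist

def grnn_predict(xy_train, z_train, xy_query, sigma):
    d2 = cdist(xy_query, xy_train, "sqeuclidean")
    w = np.exp(-d2 / (2.0 * sigma**2))              # Gaussian kernel weights
    return (w @ z_train) / np.clip(w.sum(axis=1), 1e-12, None)

def loo_rmse(xy, z, sigma):
    d2 = cdist(xy, xy, "sqeuclidean")
    w = np.exp(-d2 / (2.0 * sigma**2))
    np.fill_diagonal(w, 0.0)                        # leave each point out
    pred = (w @ z) / np.clip(w.sum(axis=1), 1e-12, None)
    return np.sqrt(np.mean((pred - z) ** 2))

# choose sigma minimizing the leave-one-out error on the training set, e.g.:
# sigmas = np.logspace(-2, 1, 30)
# best_sigma = min(sigmas, key=lambda s: loo_rmse(xy_train, z_train, s))
```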
Abstract:
The Office of Special Investigations at the Iowa Department of Transportation (DOT) collects falling weight deflectometer (FWD) data on a regular basis to evaluate pavement structural conditions. The primary objective of this study was to develop a fully automated software system for rapid processing of the FWD data, along with a user manual. The software system automatically reads the raw data collected by the JILS-20 type FWD machine that the Iowa DOT owns, then processes and analyzes the collected data with the rapid prediction algorithms developed during the phase I study. The system smoothly integrates the FWD data analysis algorithms with the computer program used to collect the pavement deflection data. It can be used to assess pavement condition, estimate remaining pavement life, and eventually help the Iowa DOT pavement management team assess pavement rehabilitation strategies. This report describes the developed software in detail and can also be used as a user manual for conducting simulation studies and detailed analyses.
Abstract:
Geophysical techniques can help to bridge the inherent gap, with regard to spatial resolution and range of coverage, that plagues classical hydrological methods. This has led to the emergence of the new and rapidly growing field of hydrogeophysics. Given the differing sensitivities of various geophysical techniques to hydrologically relevant parameters, and their inherent trade-off between resolution and range, the fundamental usefulness of multi-method hydrogeophysical surveys for reducing uncertainties in data analysis and interpretation is widely accepted. A major challenge arising from such endeavors is the quantitative integration of the resulting vast and diverse database in order to obtain a unified model of the probed subsurface region that is internally consistent with all available data. To address this problem, we have developed a strategy for hydrogeophysical data integration based on Monte-Carlo-type conditional stochastic simulation that we consider particularly suitable for local-scale studies characterized by high-resolution, high-quality datasets. Monte-Carlo-based optimization techniques are flexible and versatile, can account for a wide variety of data and constraints of differing resolution and hardness, and thus have the potential to provide, in a geostatistical sense, highly detailed and realistic models of the pertinent target-parameter distributions. Compared with more conventional approaches of this kind, our approach significantly advances the way in which the larger-scale deterministic information resolved by the hydrogeophysical data is accounted for, which represents an inherently problematic, and as yet unresolved, aspect of Monte-Carlo-type conditional simulation techniques. We present the results of applying our algorithm to the integration of porosity logs and tomographic crosshole georadar data to generate stochastic realizations of the local-scale porosity structure. Our procedure is first tested on pertinent synthetic data and then applied to corresponding field data collected at the Boise Hydrogeophysical Research Site near Boise, Idaho, USA.
Abstract:
The book presents the state of the art in machine learning algorithms (artificial neural networks of different architectures, support vector machines, etc.) as applied to the classification and mapping of spatially distributed environmental data. Basic geostatistical algorithms are presented as well. New trends in machine learning and their application to spatial data are described, and real case studies based on environmental and pollution data are carried out. The book provides a CD-ROM with the Machine Learning Office software, including sample data sets, which allows both students and researchers to put the concepts into practice quickly.
Abstract:
Global positioning systems (GPS) offer a cost-effective and efficient method for inputting and updating transportation data. The spatial locations of objects provided by GPS are easily integrated into geographic information systems (GIS), and the storage, manipulation, and analysis of spatial data are relatively simple in a GIS. However, many data storage and reporting methods at transportation agencies rely on linear referencing methods (LRMs); consequently, GPS data must be able to link with linear referencing. Unfortunately, the two systems are fundamentally incompatible in the way data are collected, integrated, and manipulated. In order for the spatial data collected using GPS to be integrated into a linear referencing system or shared among LRMs, a number of issues need to be addressed. This report documents and evaluates several of those issues and offers recommendations. To evaluate the issues associated with integrating GPS data with an LRM, a pilot study was created. For the pilot study, point features, a linear datum, and a spatial representation of an LRM were created for six test roadway segments located within the boundaries of the pilot study conducted by the Iowa Department of Transportation linear referencing system project team. Various issues in integrating point features with an LRM, or between LRMs, are discussed and recommendations provided. The accuracy of GPS is discussed, including issues such as point features mapping to the wrong segment. Another topic is the loss of spatial information that occurs when a three-dimensional or two-dimensional spatial point feature is converted to a one-dimensional representation on an LRM; recommendations such as storing point features as spatial objects where necessary, or preserving information such as coordinates and elevation, are suggested. The lack of spatial accuracy characteristic of most cartography, on which LRMs are often based, is another topic discussed; the associated issues include linear and horizontal offset error. The final topic discussed is the transfer of point feature data between LRMs.
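The one-dimensional conversion issue discussed above can be illustrated with a small, self-contained sketch that projects a 2-D GPS point onto a route polyline and returns its measure along the route plus the lateral offset that is discarded in a purely linear representation; this is only a conceptual illustration, not the pilot-study software:

```python
# Conceptual sketch: snap a GPS point to a route polyline and report its
# one-dimensional measure (distance along the route) and lateral offset.
# Real LRS software also handles datums, route identifiers, and event tables.
import math

def locate_on_route(route, px, py):
    """route: list of (x, y) vertices; returns (measure, offset)."""
    best = (float("inf"), 0.0)          # (offset, measure) of best match so far
    measure_so_far = 0.0
    for (x1, y1), (x2, y2) in zip(route[:-1], route[1:]):
        dx, dy = x2 - x1, y2 - y1
        seg_len = math.hypot(dx, dy)
        if seg_len == 0:
            t = 0.0
        else:
            t = max(0.0, min(1.0, ((px - x1) * dx + (py - y1) * dy) / seg_len**2))
        cx, cy = x1 + t * dx, y1 + t * dy          # closest point on this segment
        offset = math.hypot(px - cx, py - cy)
        if offset < best[0]:
            best = (offset, measure_so_far + t * seg_len)
        measure_so_far += seg_len
    return best[1], best[0]

# Example: locate_on_route([(0, 0), (100, 0), (200, 50)], 120.0, 8.0)
```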
Abstract:
The paper presents some contemporary approaches to spatial environmental data analysis. The main topics concern decision-oriented problems of environmental spatial data mining and modeling: valorization and representativity of data with the help of exploratory data analysis, spatial predictions, probabilistic and risk mapping, and the development and application of conditional stochastic simulation models. The innovative part of the paper presents an integrated/hybrid model: machine learning (ML) residuals sequential simulations (MLRSS). The models are based on multilayer perceptron and support vector regression ML algorithms used for modeling long-range spatial trends, combined with sequential simulation of the residuals. ML algorithms deliver non-linear solutions to spatially non-stationary problems, which are difficult for the geostatistical approach. Geostatistical tools (variography) are used to characterize the performance of the ML algorithms by analyzing the quality and quantity of the spatially structured information extracted from the data. Sequential simulations provide an efficient assessment of uncertainty and spatial variability. A case study of the Chernobyl fallout illustrates the performance of the proposed model. It is shown that probability mapping, provided by the combination of ML data-driven and geostatistical model-based approaches, can be used efficiently in the decision-making process.
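A conceptual sketch of the hybrid decomposition described above (an ML model for the long-range trend, with residuals handed over to geostatistics) is shown below; the residual step is reduced here to an empirical variogram, whereas the paper uses sequential simulation of the residuals, and none of this is the authors' code:

```python
# Conceptual sketch of the trend/residual split behind MLRSS-type models:
# z(x) = ML_trend(x) + residual(x), with the residuals analysed by variography.
import numpy as np
from sklearn.neural_network import MLPRegressor

def fit_trend_and_residuals(xy, z):
    trend = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000).fit(xy, z)
    residuals = z - trend.predict(xy)        # to be modeled/simulated geostatistically
    return trend, residuals

def empirical_variogram(xy, r, lag_edges):
    # semivariance of residuals r per distance bin (bins assumed non-empty)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    g = 0.5 * (r[:, None] - r[None, :]) ** 2
    return [g[(d >= a) & (d < b)].mean() for a, b in zip(lag_edges[:-1], lag_edges[1:])]
```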
Abstract:
Dual-trap optical tweezers are often used for high-resolution measurements in single-molecule biophysics. Such measurements can be hindered by the presence of extraneous noise sources, the most prominent of which is the coupling of fluctuations along different spatial directions, which may affect any optical-tweezers setup. In this article, we analyze, from both the theoretical and the experimental points of view, the most common source of these couplings in dual-trap optical-tweezers setups: the misalignment of traps and tether. We give criteria to distinguish different kinds of misalignment, to estimate their quantitative relevance, and to include them in the data analysis. The experimental data are obtained in a, to our knowledge, novel dual-trap optical-tweezers setup that directly measures forces. For the case in which misalignment is negligible, we provide a method to measure the stiffness of traps and tether based on variance analysis; this method can be seen as a calibration technique valid beyond the linear trap region. Our analysis is then employed to measure the persistence length of dsDNA tethers of three different lengths spanning two orders of magnitude. The effective persistence length of such tethers is shown to decrease with the contour length, in accordance with previous studies.
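For orientation, the standard linear-regime relation behind variance-based stiffness calibration is the equipartition theorem; the method in the article extends this idea beyond the linear trap region, so the following is background rather than the paper's result:

```latex
% Equipartition-based stiffness estimate (standard linear-trap relation).
\[
  \tfrac{1}{2}\, k_{\mathrm{trap}} \langle x^{2} \rangle = \tfrac{1}{2}\, k_{B} T
  \quad\Longrightarrow\quad
  k_{\mathrm{trap}} = \frac{k_{B} T}{\mathrm{Var}(x)},
\]
% where $x$ is the bead displacement from the trap centre, $T$ the bath
% temperature and $k_{B}$ Boltzmann's constant.
```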
Abstract:
We present a participant study that compares biological data exploration tasks using volume renderings of laser confocal microscopy data across three environments that vary in level of immersion: a desktop, a fishtank, and a cave system. For the tasks, data, and visualization approach used in our study, we found that subjects qualitatively preferred and quantitatively performed better in the cave compared with the fishtank and desktop. Subjects performed real-world biological data analysis tasks that emphasized understanding spatial relationships, including characterizing the general features in a volume, identifying colocated features, and reporting geometric relationships such as whether clusters of cells were coplanar. After analyzing data in each environment, subjects were asked to choose the environment in which they wanted to analyze additional data sets; subjects uniformly selected the cave environment.
Abstract:
In general, laboratory activities are costly in terms of time, space, and money. As such, the ability to provide realistically simulated laboratory data that enables students to practice data analysis techniques as a complementary activity would be expected to reduce these costs while opening up very interesting possibilities. In the present work, a novel methodology is presented for the design of analytical chemistry instrumental analysis exercises that can be automatically personalized for each student and evaluated immediately. The proposed system provides each student with a different set of experimental data, generated randomly while satisfying a set of constraints, rather than using data obtained from actual laboratory work. This allows the instructor to provide students with a set of practical problems that complement their regular laboratory work, along with the corresponding feedback provided by the system's automatic evaluation process. To this end, the Goodle Grading Management System (GMS), an innovative web-based educational tool for automating the collection and assessment of practical exercises for engineering and scientific courses, was developed. The proposed methodology takes full advantage of the Goodle GMS fusion code architecture. The design of a particular exercise is provided ad hoc by the instructor and requires basic Matlab knowledge. The system has been employed with satisfactory results in several university courses. To demonstrate the automatic evaluation process, three exercises are presented in detail. The first involves a linear regression analysis of data and the calculation of the quality parameters of an instrumental analysis method. The second and third exercises address two different comparison tests: a comparison test of the mean and a paired t-test.
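A rough illustration of the first exercise type described above (randomized calibration data plus the regression quality parameters a student might report) is given below in Python; the real system is Matlab-based and built on the Goodle GMS, and the chosen constraints and figures of merit here are assumptions for illustration only:

```python
# Sketch: generate random calibration data under simple constraints, then
# compute the linear fit and common quality parameters of the method.
import numpy as np

rng = np.random.default_rng()                        # different data per student
conc = np.linspace(0.5, 10.0, 8)                     # standard concentrations
slope_true = rng.uniform(0.08, 0.15)
intercept_true = rng.uniform(0.00, 0.05)
signal = intercept_true + slope_true * conc + rng.normal(0.0, 0.004, conc.size)

slope, intercept = np.polyfit(conc, signal, 1)       # least-squares calibration line
resid = signal - (intercept + slope * conc)
s_yx = np.sqrt(np.sum(resid**2) / (conc.size - 2))   # standard error of the fit
lod = 3.3 * s_yx / slope                             # limit of detection (assumed criterion)
loq = 10.0 * s_yx / slope                            # limit of quantification
print(f"slope={slope:.4f}, intercept={intercept:.4f}, LOD={lod:.2f}, LOQ={loq:.2f}")
```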
Abstract:
Since the advent of mechanized farming and the intensive use of agricultural machinery and implements on farms, soils have been subjected to greater machinery traffic loads, which can increase soil compaction. The aim of this study was to evaluate the spatial variability of soil mechanical resistance to penetration (RP) in the 0.00-0.10, 0.10-0.20, 0.20-0.30, and 0.30-0.40 m layers, using geostatistics, in an area cultivated with mango on a Haplic Vertisol of the northeastern semi-arid region, with a mobile unit equipped with an electronic penetrometer. The RP data were collected at 56 points in a 3 ha area, and random soil samples were collected to determine soil moisture and texture. For the RP data analysis we used descriptive statistics and geostatistics. The soil mechanical resistance to penetration showed high variability, with spherical and exponential semivariogram models fitted in the different layers. We found that 42% of the area in the 0.10-0.20 m layer showed RP values above 2.70 MPa. Maximum RP values were found in the 0.19-0.27 m layer, predominating in 56% of the area.
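The geostatistical step named in this abstract can be illustrated with a short sketch of an empirical semivariogram of RP values and a spherical model; the binning and any parameter values are hypothetical, and this is not the study's software:

```python
# Sketch: empirical semivariogram of penetration-resistance (RP) observations
# and the spherical model commonly fitted to it.
import numpy as np

def empirical_semivariogram(coords, values, lag_edges):
    # coords: (n, 2) point locations; values: (n,) RP readings
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    g = 0.5 * (values[:, None] - values[None, :]) ** 2
    return np.array([g[(d >= a) & (d < b)].mean()
                     for a, b in zip(lag_edges[:-1], lag_edges[1:])])

def spherical_model(h, nugget, sill, rng_):
    # classical spherical semivariogram: rises to the sill at the range rng_
    h = np.asarray(h, dtype=float)
    inside = nugget + (sill - nugget) * (1.5 * h / rng_ - 0.5 * (h / rng_) ** 3)
    return np.where(h < rng_, inside, sill)
```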
Abstract:
Reliability analysis is a well-established branch of statistics that deals with the statistical study of different aspects of the lifetimes of a system of components. As pointed out earlier, a major part of the theory and applications of reliability analysis has been developed using measures defined in terms of the distribution function. In the opening chapters of the thesis, we have described some attractive features of quantile functions and the relevance of their use in reliability analysis. Motivated by the works of Parzen (1979), Freimer et al. (1988), and Gilchrist (2000), who indicated the scope of quantile functions in reliability analysis, and as a follow-up to the systematic study in this connection by Nair and Sankaran (2009), in the present work we have tried to extend their ideas to develop the necessary theoretical framework for lifetime data analysis. In Chapter 1, we have given the relevance and scope of the study and a brief outline of the work carried out. Chapter 2 of this thesis is devoted to the presentation of various concepts and brief reviews of them, which are useful for the discussions in the subsequent chapters. In the introduction of Chapter 4, we have pointed out the role of ageing concepts in reliability analysis and in identifying life distributions. In Chapter 6, we have studied the first two L-moments of residual life and their relevance in various applications of reliability analysis. We have shown that the first L-moment of the residual function is equivalent to the vitality function, which has been widely discussed in the literature. In Chapter 7, we have defined the percentile residual life in reversed time (RPRL) and derived its relationship with the reversed hazard rate (RHR). We have discussed the characterization problem of RPRL and demonstrated with an example that the RPRL for a given percentile level does not determine the distribution uniquely.
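For orientation, a few of the standard quantile-based quantities underlying this framework (following the quantile-based reliability literature such as Nair and Sankaran (2009)) can be written as follows; this is a summary of textbook definitions, not a result of the thesis:

```latex
% Standard quantile-based definitions (background only).
\[
  Q(u) = \inf\{x : F(x) \ge u\}, \qquad q(u) = Q'(u),
\]
\[
  \text{hazard quantile function: } H(u) = \frac{1}{(1-u)\, q(u)}, \qquad
  \text{vitality function: } V(t) = E\!\left[X \mid X > t\right] = t + m(t),
\]
% where m(t) is the mean residual life and the reversed hazard rate is
% \lambda^{*}(t) = f(t)/F(t).
```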
Abstract:
Atmospheric surface boundary layer parameters varied anomalously in response to the annular solar eclipse of 15 January 2010 over Cochin. It was the longest annular solar eclipse to have occurred over South India, and of high intensity. As it occurred during the noon hours, it is considered much more significant owing to its effects in all regions of the atmosphere, including the ionosphere. Since insolation is the main driving factor responsible for the anomalous changes in the surface layer during an annular solar eclipse, the coincidence of the 15 January 2010 eclipse with noon made it particularly valuable for understanding the dynamics of the atmosphere during the eclipse period. The sonic anemometer provides the zonal, meridional, and vertical wind components as well as the air temperature at a temporal resolution of 1 s. Different surface boundary layer parameters and turbulent fluxes were computed by applying the eddy correlation technique to the high-resolution station data. The surface boundary layer parameters computed from the sonic anemometer data during this period are momentum flux, sensible heat flux, turbulent kinetic energy, frictional velocity (u*), the variance of temperature, and the variances of the u, v, and w wind components. In order to compare the results, a control run was made using the data of the previous day as well as the next day. Over the period of the annular solar eclipse, all the above surface boundary layer parameters varied anomalously when compared with the control run. From the observations we note that the momentum flux was 0.1 N m-2 instead of the mean value of 0.2 N m-2 during the eclipse. The sensible heat flux anomalously decreased to 50 W m-2 instead of the mean value of 200 W m-2 at the time of the solar eclipse. The turbulent kinetic energy decreased to 0.2 m2 s-2 from the mean value of 1 m2 s-2. The frictional velocity decreased to 0.05 m s-1 instead of the mean value of 0.2 m s-1. The present study is aimed at understanding the dynamics of the surface layer over a tropical coastal station in response to an annular solar eclipse occurring during the noon hours. Key words: annular solar eclipse, surface boundary layer, sonic anemometer
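A minimal sketch of the eddy-correlation computation named above, applied to 1 Hz sonic-anemometer series, is given below; the array names, the constant air density and heat capacity, and the omission of coordinate rotation, detrending, and quality control are simplifying assumptions:

```python
# Sketch: surface-layer turbulence statistics from sonic-anemometer series
# u, v, w (m/s) and T (K), given as NumPy arrays over the averaging period.
import numpy as np

def eddy_covariance_fluxes(u, v, w, T, rho=1.2, cp=1004.0):
    up, vp, wp, Tp = (x - x.mean() for x in (u, v, w, T))    # turbulent fluctuations
    uw, vw = np.mean(up * wp), np.mean(vp * wp)
    momentum_flux = -rho * uw                                # N m-2
    sensible_heat = rho * cp * np.mean(wp * Tp)              # W m-2
    tke = 0.5 * (up.var() + vp.var() + wp.var())             # m2 s-2
    u_star = (uw**2 + vw**2) ** 0.25                         # m s-1
    return momentum_flux, sensible_heat, tke, u_star
```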