964 results for Point Data
Abstract:
In this thesis, I study the changing landscape and human environment of the Mätäjoki Valley, West Helsinki, using reconstructions and predictive modelling. The study is part of a larger project funded by the City of Helsinki aiming to map the past of the Mätäjoki Valley. The changes in landscape from an archipelago in the Ancylus Lake to a river valley are studied from 10,000 to 2,000 years ago. Alongside shore displacement, we look at the changing environment from a human perspective and predict the locations of dwelling sites at various times. As a result, two map series were produced that show how the landscape changed and where habitation is predicted. To back them up, we have also reviewed what previous research says about the history of the waterways, climate, vegetation and archaeology. The changing landscape of the river valley is reconstructed using GIS methods. For this purpose, a new laser point data set was used and at the same time tested in the context of landscape modelling. Dwelling sites were modelled with logistic regression analysis. The spatial predictive model combines data on the locations of the known dwelling sites, environmental factors and shore displacement data. The predictions were visualised as raster maps that show the predicted habitation 3,000 and 5,000 years ago. The aim of these maps was to help archaeologists identify potential spots for human activity. The produced landscape reconstructions clarified previous shore displacement studies of the Mätäjoki region and provided new information on the location of the shoreline. The shore displacement history of the Mätäjoki Valley comprises the following stages: 1. The northernmost hills of the Mätäjoki Valley rose from the Ancylus Lake approximately 10,000 years ago; shore displacement was fast during the following thousand years. 2. The area was an archipelago with a relatively steady shoreline 9,000-7,000 years ago; 8,000 years ago the shoreline drew back in the middle and southern parts of the river valley because of the transgression of the Litorina Sea. 3. Mätäjoki was a sheltered bay of the Litorina Sea 6,000-5,000 years ago; the Vantaanjoki River started to flow into the Mätäjoki Valley approximately 5,000 years ago. 4. The sediment plains in the southern part of the river valley rose from the sea rather quickly 5,000-3,000 years ago; salt water still pushed its way into the southernmost part of the valley 4,000 years ago. 5. The shoreline receded to the Pitäjänmäki rapids, where it stayed for at least a thousand years, 3,000-2,000 years ago. The predictive models predicted the locations of dwelling sites moderately well. The most accurate predictions were found on the eastern shore and in the Malminkartano area. Of the environmental variables, sand and aspect of slope were found to have the best predictive power. From the results of this study we can conclude that the Mätäjoki Valley was a favourable place to live, especially 6,000-5,000 years ago when the climate was mild and the vegetation lush. The laser point data set used here works best in shore displacement studies located in rural areas, or when further specific palaeogeographic or hydrologic analysis of the research area is not needed.
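For readers unfamiliar with the approach, the sketch below shows the general shape of such a spatial predictive model: a logistic regression fitted to presence/absence labels per raster cell. The predictors and data are synthetic stand-ins for the thesis's variables (sand, aspect of slope, distance to the reconstructed shoreline), not its actual code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
# Synthetic stand-ins for the thesis's predictors, one row per raster cell.
X = np.column_stack([
    rng.integers(0, 2, n).astype(float),   # sand present (0/1)
    rng.uniform(0, 360, n),                # aspect of slope (degrees)
    rng.uniform(0, 2000, n),               # distance to reconstructed shoreline (m)
])
# Hypothetical labels: known dwelling sites favour sandy cells near the shore.
logit = 1.5 * X[:, 0] - 0.002 * X[:, 2]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

model = LogisticRegression().fit(X, y)
p = model.predict_proba(X)[:, 1]           # predicted probability per cell
# Writing p back onto the raster grid yields the prediction map for one epoch
# (e.g. 5000 BP); repeating with epoch-specific shorelines gives the series.
print(f"mean predicted probability: {p.mean():.2f}")
```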
Abstract:
Periglacial processes act in cold, non-glacial regions where landscape development is mainly controlled by frost activity. Circa 25 percent of the Earth's surface can be considered periglacial. Geographical information systems, combined with advanced statistical modelling methods, provide an efficient tool and a new theoretical perspective for the study of cold environments. The aims of this study were: 1) to model and predict the abundance of periglacial phenomena in a subarctic environment with statistical modelling, 2) to investigate the most important factors affecting the occurrence of these phenomena with hierarchical partitioning, 3) to compare two widely used statistical modelling methods, generalized linear models and generalized additive models, 4) to study the effect of modelling resolution on prediction, and 5) to study how a spatially continuous prediction can be obtained from point data. The observational data of this study consist of 369 points that were collected during the summers of 2009 and 2010 in the study area in Kilpisjärvi, northern Lapland. The periglacial phenomena of interest were cryoturbation, slope processes, weathering, deflation, nivation and fluvial processes. The features were modelled using generalized linear models (GLM) and generalized additive models (GAM) based on Poisson errors. The abundance of periglacial features was predicted from these models onto a spatial grid with a resolution of one hectare. The most important environmental factors were examined with hierarchical partitioning. The effect of modelling resolution was investigated in a small independent study area with a spatial resolution of 0.01 hectare. The models explained 45-70% of the occurrence of periglacial phenomena. When spatial variables were added to the models, the amount of explained deviance was considerably higher, which signalled a geographical trend structure. The ability of the models to predict periglacial phenomena was assessed with independent evaluation data. Spearman's correlation between the observed and predicted values varied from 0.258 to 0.754. Based on explained deviance and the results of hierarchical partitioning, the most important environmental variables were mean altitude, vegetation and mean slope angle. The effect of modelling resolution was clear: too coarse a resolution caused a loss of information, while a finer resolution brought out more localized variation. The ability of the models to explain and predict periglacial phenomena in the study area was mostly good and moderate, respectively. Differences between the modelling methods were small, although the explained deviance was higher with the GLMs than the GAMs. In turn, the GAMs produced more realistic spatial predictions. The single most important environmental variable controlling the occurrence of periglacial phenomena was mean altitude, which correlated strongly with many other explanatory variables. The ongoing global warming will have a great impact especially on cold environments at high latitudes, and for this reason an important research topic in the near future will be the response of periglacial environments to a warming climate.
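As an illustration of the modelling approach, the sketch below fits a Poisson-error GLM to synthetic feature counts and computes explained deviance against a null model. The predictors and data are invented stand-ins, not the study's observations.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 369  # matches the number of field observations in the study
# Hypothetical predictors: mean altitude (m), mean slope angle (deg),
# and a vegetation index.
X = np.column_stack([
    rng.uniform(600, 1100, n),
    rng.uniform(0, 35, n),
    rng.uniform(0, 1, n),
])
# Synthetic counts of periglacial features per observation point.
mu = np.exp(-4 + 0.004 * X[:, 0] + 0.02 * X[:, 1])
y = rng.poisson(mu)

glm = sm.GLM(y, sm.add_constant(X), family=sm.families.Poisson()).fit()
print(glm.params.round(4))

# Explained deviance, as reported in the thesis, compares the fitted model's
# deviance with that of a null (intercept-only) model.
null = sm.GLM(y, np.ones((n, 1)), family=sm.families.Poisson()).fit()
d2 = 1 - glm.deviance / null.deviance
print(f"explained deviance D2 = {d2:.2f}")
```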
Abstract:
Compared with conventional P-wave data, multi-component seismic data provide markedly more information and can thus improve the quality of reservoir evaluation, e.g. formation evaluation. With PS waves, better imaging results can be obtained, especially in areas with gas chimneys or high-velocity formations. However, the signal-to-noise ratio of multi-component seismic data is normally lower than that of conventional P-wave data, and the frequency range of the converted wave is close to that of the surface wave, which makes removing the surface wave more difficult. Stacking common reflection point data extracted from common conversion point data is a hard problem, and the S-wave static correction of common receiver point PS-wave data is not easy either. In short, the processing of multi-component seismic data is more complicated than that of P-wave data. This paper presents work addressing the problems mentioned above. (1) Based on the AVO behaviour of converted waves, velocity spectra of converted waves are computed using Sarkar's generalized semblance method, which takes the AVO factor into account in velocity analysis. (2) A smooth offset-division normal moveout method is developed: first, stacking velocities are scanned in different offset divisions for a given t0; hyperbolas are then obtained from these stacking velocities and used to compute the travel time for every trace; finally, the normal moveout is interpolated between two t0 values for every trace. (3) A stepwise offset-division normal moveout method is developed. It is similar to the smooth offset-division method; the main difference is that quadratic curves, sixth-order curves or fractional curves are used to fit the hyperbolas. (4) Four types of travel-time versus distance functions in inhomogeneous media, whose velocity or slowness varies with depth or vertical travel time, are discussed and used to approximate reflection travel times. The errors in ray path and travel time based on these functions were analysed for four layered models, showing that effective NMO results can be obtained on both synthetic and real data. (5) Based on the ghost-source theory, the velocity model of the converted PS wave can be treated like that of the P wave, so the converted-wave travel time can be approximated using four equivalent velocity functions in which velocity or slowness varies linearly with depth or vertical travel time. Combined with P-wave velocity analysis, the converted-wave data can then be corrected directly to the P-wave vertical travel time. The improvements in normal moveout of converted waves are demonstrated with numerical examples and real data. (6) Methods to compute the conversion point location in vertically inhomogeneous media based on linear functions of velocity or slowness versus depth or vertical travel time are introduced, along with three ways to choose the appropriate equivalent velocity method: velocity fitting, travel-time approximation and semblance coefficient methods.
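For context, conventional NMO correction rests on the hyperbolic travel-time relation t(x) = sqrt(t0^2 + x^2/v^2). The sketch below applies it to a single synthetic trace; it illustrates only the generic hyperbolic moveout that the paper's offset-division methods refine, not those methods themselves.

```python
import numpy as np

def nmo_traveltime(t0, offset, v_stack):
    """Hyperbolic two-way travel time: t(x) = sqrt(t0^2 + (x / v)^2)."""
    return np.sqrt(t0**2 + (offset / v_stack) ** 2)

def nmo_correct(trace, dt, offset, v_of_t0):
    """Map one trace to zero offset: each output sample t0 takes the recorded
    amplitude found at the predicted moveout time t(x, t0)."""
    t0 = np.arange(len(trace)) * dt
    t = nmo_traveltime(t0, offset, v_of_t0(t0))
    return np.interp(t, t0, trace, left=0.0, right=0.0)

# One synthetic trace with a single event: t0 = 0.8 s, v = 2000 m/s, x = 1200 m.
dt, n, offset = 0.004, 500, 1200.0
trace = np.zeros(n)
trace[int(round(nmo_traveltime(0.8, offset, 2000.0) / dt))] = 1.0
corrected = nmo_correct(trace, dt, offset, lambda t0: np.full_like(t0, 2000.0))
print("event at sample", int(np.argmax(corrected)), "expected", int(round(0.8 / dt)))
```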
Abstract:
The results of a study aimed at determining the most important experimental parameters for automated, quantitative analysis of solid dosage form pharmaceuticals (seized and model 'ecstasy' tablets) are reported. Data obtained with a macro-Raman spectrometer were complemented by micro-Raman measurements, which gave information on particle size and provided excellent data for developing statistical models of the sampling errors associated with collecting data as a series of grid points on the tablets' surface. Spectra recorded at single points on the surface of seized MDMA-caffeine-lactose tablets with a Raman microscope (λex = 785 nm, 3 µm diameter spot) were typically dominated by one or other of the three components, consistent with Raman mapping data which showed the drug and caffeine microcrystals were ca 40 µm in diameter. Spectra collected with a microscope from eight points on a 200 µm grid were combined, and in the resultant spectra the average value of the Raman band intensity ratio used to quantify the MDMA:caffeine ratio, μr, was 1.19 with an unacceptably high standard deviation, σr, of 1.20. In contrast, with a conventional macro-Raman system (150 µm spot diameter), combined eight-grid-point data gave μr = 1.47 with σr = 0.16. A simple statistical model which could be used to predict σr under the various conditions used was developed. The model showed that the decrease in σr on moving to a 150 µm spot was too large to be due entirely to the increased spot diameter, but was consistent with the increased sampling volume that arose from a combination of the larger spot size and depth of focus in the macroscopic system. With the macro-Raman system, combining 64 grid points (0.5 mm spacing and 1-2 s accumulation per point) to give a single averaged spectrum for a tablet was found to be a practical balance between minimizing sampling errors and keeping overhead times at an acceptable level. The effectiveness of this sampling strategy was also tested by quantitative analysis of a set of model ecstasy tablets prepared from MDEA-sorbitol (0-30% by mass MDEA). A simple univariate calibration model of averaged 64-point data had R² = 0.998 and an r.m.s. standard error of prediction of 1.1%, whereas data obtained by sampling just four points on the same tablet showed deviations from the calibration of up to 5%.
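The effect the paper models, sampling error shrinking as more grid points are averaged, can be illustrated with a toy Monte Carlo simulation. The spot-to-spot variability below is an invented value, not the paper's fitted model.

```python
import numpy as np

rng = np.random.default_rng(2)

def ratio_sd(points_per_tablet, spot_sd=1.2, n_tablets=20000):
    """Spread of the band intensity ratio when the spectra of
    points_per_tablet grid points are averaged before ratioing. Each spot's
    MDMA and caffeine intensities vary log-normally, mimicking spots
    dominated by single microcrystals; spot_sd is illustrative only."""
    mdma = rng.lognormal(0.0, spot_sd, (n_tablets, points_per_tablet))
    caff = rng.lognormal(0.0, spot_sd, (n_tablets, points_per_tablet))
    r = mdma.mean(axis=1) / caff.mean(axis=1)   # ratio of averaged spectra
    return r.std()

for n in (1, 8, 64):
    print(f"{n:3d} grid points: sigma_r ~ {ratio_sd(n):.2f}")
```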
Abstract:
Nitrogen dioxide (NO2) is known to act as an environmental trigger for many respiratory illnesses. As a pollutant it is difficult to map accurately, as concentrations can vary greatly over small distances. In this study three geostatistical techniques were compared, producing maps of NO2 concentrations in the United Kingdom (UK). The primary data source for each technique was NO2 point data, generated from background automatic monitoring and background diffusion tubes, which are analysed by different laboratories on behalf of local councils and authorities in the UK. The techniques used were simple kriging (SK), ordinary kriging (OK) and simple kriging with a locally varying mean (SKlm). SK and OK make use of the primary variable only. SKlm differs in that it utilises additional data to inform prediction, and hence potentially reduces uncertainty. The secondary data source was oxides of nitrogen (NOx) derived from dispersion modelling outputs, at 1 km × 1 km resolution for the UK. These data were used to define the locally varying mean in SKlm, using two regression approaches: (i) global regression (GR) and (ii) geographically weighted regression (GWR). Based upon summary statistics and cross-validation prediction errors, SKlm using GWR-derived local means produced the most accurate predictions. Therefore, using GWR to inform SKlm was beneficial in this study.
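A minimal sketch of the GWR step is given below: at each prediction location a distance-weighted regression of NO2 on NOx supplies the locally varying mean. The kernel, bandwidth and data are illustrative assumptions, not the study's configuration.

```python
import numpy as np

def gwr_local_mean(xy_obs, no2_obs, nox_obs, xy_pred, nox_pred, bandwidth):
    """At each prediction location, fit a distance-weighted regression of
    NO2 on NOx and evaluate it with the local NOx value; the result is the
    'locally varying mean' used by SKlm."""
    means = np.empty(len(xy_pred))
    A = np.column_stack([np.ones_like(nox_obs), nox_obs])
    for i, (p, nox_p) in enumerate(zip(xy_pred, nox_pred)):
        d2 = np.sum((xy_obs - p) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * bandwidth**2))      # Gaussian kernel weights
        sw = np.sqrt(w)[:, None]
        beta, *_ = np.linalg.lstsq(A * sw, no2_obs * sw.ravel(), rcond=None)
        means[i] = beta[0] + beta[1] * nox_p
    return means

# Synthetic demonstration: NO2 responds to NOx with a spatially drifting offset.
rng = np.random.default_rng(6)
xy = rng.uniform(0, 100, (200, 2))
nox = rng.uniform(5, 80, 200)
no2 = 5 + 0.4 * nox + 0.05 * xy[:, 0] + rng.normal(0, 2, 200)
grid = np.array([[25.0, 50.0], [75.0, 50.0]])
print(gwr_local_mean(xy, no2, nox, grid, np.array([30.0, 30.0]), bandwidth=20.0))
```

In SKlm, simple kriging is then applied to the residuals (observed NO2 minus this local mean), and the kriged residual is added back to the local mean at each map location.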
Abstract:
The main objective of this research was to examine the relationship between surface electromyographic (SEMG) spike activity and force. The secondary objective was to determine to what extent subcutaneous tissue impacts the high-frequency component of the signal, as well as to examine the relationship between measures of SEMG spike shape and their traditional time- and frequency-domain analogues. A total of 96 participants (46 males and 50 females), ranging in age from 18 to 35 years, generated three 5-second isometric step contractions at each force level of 40, 60, 80, and 100 percent of maximal voluntary contraction (MVC). The presentation of the contractions was balanced across subjects. The right arm of the subject was positioned in the sagittal plane, with the shoulder and elbow flexed to 90 degrees. The elbow rested on a support in a neutral position (mid-pronation/mid-supination) and was placed within a wrist cuff, fastened below the styloid process. The wrist cuff was attached to a load cell (JR3 Inc., Woodland, CA) recording the force produced. Biceps brachii activity was monitored with a pair of Ag/AgCl recording electrodes (Grass F-E9, Astro-Med Inc., West Warwick, RI) placed in a bipolar configuration, with an interelectrode distance (IED) of 2 cm, distal to the motor point. Data analysis was performed on a 1-second window of data in the middle of the 5-second contraction. The results indicated that all spike shape measures exhibited significant (p < 0.01) differences as force increased from 40 to 100% MVC. The spike shape measures suggest that increased motor unit (MU) recruitment was responsible for increasing force up to 80% MVC. The results suggested that further increases in force relied on MU synchronization. The results also revealed that subcutaneous tissue (skinfold thickness) had no relationship (r = 0.02; p > 0.05) with the mean number of peaks per spike (MNPPS), the high-frequency component of the signal. Mean spike amplitude (MSA) and mean spike frequency (MSF) were highly correlated with their traditional measures, root mean square (RMS) and mean power frequency (MPF), respectively (r = 0.99; r = 0.97; p < 0.01).
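To make the spike-shape measures concrete, the sketch below computes simplified versions of MSA, MSF and MNPPS from a synthetic signal. The operational definitions here are plausible stand-ins rather than the thesis's exact ones.

```python
import numpy as np
from scipy.signal import find_peaks

def spike_shape_measures(semg, fs):
    """Simplified spike-shape measures: mean spike amplitude (MSA), mean
    spike frequency (MSF) and mean number of peaks per spike (MNPPS).
    A 'spike' is taken as a positive local maximum; peaks are all local
    maxima of the rectified signal."""
    spikes, _ = find_peaks(semg, height=0)            # candidate spike apexes
    msa = semg[spikes].mean() if spikes.size else 0.0
    msf = spikes.size / (len(semg) / fs)              # spikes per second
    peaks, _ = find_peaks(np.abs(semg))               # all local maxima
    mnpps = peaks.size / spikes.size if spikes.size else 0.0
    return msa, msf, mnpps

fs = 2048                                 # assumed sampling rate, Hz
rng = np.random.default_rng(3)
# 1-second analysis window, as in the study; synthetic interference signal.
semg = rng.normal(0, 1, fs) * np.hanning(fs)
print(spike_shape_measures(semg, fs))
```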
Abstract:
In this qualitative investigation, the researcher examined the experiences of 10 teachers as they implemented a classroom management model called the Respect Circle. Through interviews and journal entries, the writer sought to understand how the participating teachers developed their classroom management practice, using the Respect Circle as a reference point. Data collection occurred over a 10-week period from October to December. The findings of this study demonstrate the multifaceted and complex nature of classroom management. Participants identified relationships with their students as the premier factor in establishing classroom management. Additionally, proactivity, professional reflection, adaptability, and consistency figured prominently in the classroom management approaches taken by the participating teachers. Using the experiences and suggestions of the participants as a springboard, the Respect Circle model was revised. The findings underline areas of concern regarding classroom management and suggest that teachers want a respectful, structured yet flexible model upon which to base their classroom management. Suggestions for teachers, new and experienced; school administrators; and developers of classroom management courses are provided.
Abstract:
Oil rig mooring lines have traditionally consisted of chain and wire rope. As production has moved into deeper water it has proved advantageous to incorporate sections of fibre rope into the mooring lines. However, this has highlighted torsional interaction problems that can occur when ropes of different types are joined together. This paper describes a method by which the torsional properties of ropes can be modelled and can then be used to calculate the rotation and torque for two ropes connected in series. The method uses numerical representations of the torsional characteristics of both the ropes, and equates the torque generated in each rope under load to determine the rotation at the connection point. Data from rope torsional characterization tests have been analysed to derive constants used in the numerical model. Constants are presented for: a six-strand wire rope; a torque-balanced fibre rope; and a fibre rope that has been designed to be torque-matched to stranded wire rope. The calculation method has been verified by comparing predicted rotations with measured test values. Worked examples are given for a six-strand wire rope connected, firstly, to a torque-balanced fibre rope that offers little rotational restraint, and, secondly, to a fibre rope whose torsional properties are matched to that of the wire rope.
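A minimal sketch of the calculation method follows: two hypothetical torque-twist characterisations are equated to solve for the rotation at the connection point. The constants and functional form are placeholders, not the paper's fitted values.

```python
from scipy.optimize import brentq

# Hypothetical torque models M(T, phi), proportional to tension T (N) and
# linear in twist phi (turns/metre); stand-ins for the paper's fitted
# numerical representations.
def torque_wire(T, phi):     # six-strand wire rope: large torque at zero twist
    return T * (8.0e-3 + 1.2e-3 * phi)

def torque_fibre(T, phi):    # torque-balanced fibre rope: little restraint
    return T * (0.2e-3 * phi)

def connection_rotation(T):
    """Rotation at the joint where the torques balance: the wire rope
    unlays by -phi while the fibre rope twists up by +phi, so we solve
    torque_wire(T, -phi) = torque_fibre(T, phi) for phi."""
    return brentq(lambda phi: torque_wire(T, -phi) - torque_fibre(T, phi),
                  0.0, 50.0)

# With torque proportional to tension, the balance rotation is the same at
# every tension; nonlinear characterisations would make it load-dependent.
print(f"rotation at connection: {connection_rotation(200e3):.2f} turns/m")
```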
Abstract:
Fire is a major management issue in the southwestern United States. Three spatial models of fire risk for Coconino County, Northern Arizona. These models were generated using thematic data layers depicting vegetation, elevation, wind speed and direction, and precipitation for January (winter), June (summer), and July (start of monsoon season). ArcGIS 9.0 was used to weight attributes in raster layers to reflect their influence on fire risk and to interpolate raster data layers from point data. Final models were generated using the raster calculator in the Spatial Analyst extension of ArcGIS 9.0. Ultimately, the unique combinations of variables resulted in three different models illustrating the change in fire risk during the year.
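The raster-calculator step amounts to a weighted overlay. The sketch below reproduces the idea with random reclassified layers and invented weights, not the study's calibrated values.

```python
import numpy as np

rng = np.random.default_rng(4)
shape = (200, 200)                       # raster grid dimensions

# Inputs reclassified to a common 1-5 risk scale, as in a weighted overlay;
# the wind and precipitation layers would be interpolated from point data.
vegetation = rng.integers(1, 6, shape)
elevation  = rng.integers(1, 6, shape)
wind       = rng.integers(1, 6, shape)
precip     = rng.integers(1, 6, shape)

weights = {"veg": 0.4, "elev": 0.1, "wind": 0.3, "precip": 0.2}
fire_risk = (weights["veg"] * vegetation + weights["elev"] * elevation +
             weights["wind"] * wind + weights["precip"] * precip)
# Equivalent to the raster-calculator expression
# 0.4*veg + 0.1*elev + 0.3*wind + 0.2*precip evaluated cell by cell.
print(fire_risk.min(), fire_risk.max())
```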
Abstract:
Ecological disturbances may be caused by a range of biotic and abiotic factors. Among these are disturbances that result from human activities, such as the introduction of exotic plants and land management activities. This dissertation addresses both of these types of disturbance in ecosystems in the Upper Peninsula of Michigan. Invasive plants are a significant cause of disturbance at Pictured Rocks National Lakeshore. Management of invasive plants depends on understanding what areas are at risk of being invaded, what the consequences of an invasion are for native plant communities, and how effective different tools are for managing the invasive species. A series of risk models is described that predict three stages of invasion (introduction, establishment and spread) for eight invasive plant species at Pictured Rocks National Lakeshore. These models are specific to this location and include species for which models have not previously been produced. The models were tested by collecting point data throughout the park to demonstrate their effectiveness for future detection of invasive plants. Work to describe the impacts and management of invasive plants focused on spotted knapweed in the sensitive Grand Sable Dunes area of Pictured Rocks National Lakeshore. Impacts of spotted knapweed were assessed by comparing vegetation communities in areas with varying amounts of spotted knapweed. This work showed significant increases in species diversity in invaded areas, apparently as a result of a number of non-dune species that have become established in areas invaded by spotted knapweed. An experiment was carried out to compare annual spot application of two herbicides, Milestone® and Transline®, to target spotted knapweed, including an assessment of the impacts of this type of treatment on non-target species. There was no difference in the effectiveness of the two herbicides, and both significantly reduced the density of spotted knapweed during the course of the study. Areas treated with herbicide developed a higher percent cover of grasses during the study, and suffered limited negative impacts on some sensitive dune species, such as beach pea and dune stitchwort, and on some other non-dune species, such as hawkweed. The use of these herbicides to reduce the density of spotted knapweed appears to be feasible over large scales.
Abstract:
Development of a Sensorimotor Algorithm Able to Deal with Unforeseen Pushes and Its Implementation Based on VHDL is the title of my thesis, which concludes my Bachelor degree at the Escuela Técnica Superior de Ingeniería y Sistemas de Telecomunicación of the Universidad Politécnica de Madrid. It covers the work I did in the Neurorobotics Research Laboratory of the Beuth Hochschule für Technik Berlin during my ERASMUS year in 2015. This thesis is focused on the field of robotics, specifically on an electronic circuit called the Cognitive Sensorimotor Loop (CSL) and its control algorithm, written in the VHDL hardware description language. What makes the CSL special is its ability to operate a motor both as a sensor and as an actuator. This way, it is possible to achieve a balanced position in any of the robot joints (e.g. the robot manages to stand) without needing any conventional sensor: the back electromotive force (EMF) induced in the motor coils is measured, and the control algorithm responds depending on its magnitude. The CSL circuit consists mainly of an analog-to-digital converter (ADC) and a driver. The ADC is a delta-sigma modulator, which generates a series of bits whose proportion of 1's and 0's is proportional to the back EMF. The control algorithm, running on an FPGA, processes the bit frame and outputs a signal for the driver. The driver, which has an H-bridge topology, lets the motor rotate in both directions while supplying it with the power needed. The objective of this thesis is to document the experiments and overall work done on push-ignoring contractive sensorimotor algorithms, i.e. sensorimotor algorithms that ignore large-magnitude forces (compared to gravity) applied over a short time interval to a pendulum system. This main objective is divided into two sub-objectives: (1) developing a system based on parameterized thresholds and (2) developing a system based on a push-bypassing filter. System (1) contains a module that outputs a signal blocking the main sensorimotor algorithm when a push is detected. This module takes several parameters as inputs, e.g. the back-EMF increment required to consider a force a push, or the time interval between samples. System (2) consists of a low-pass infinite impulse response (IIR) digital filter. It cuts off any frequency faster than a characteristic push oscillation. This filter required an intensive study of how to implement certain functions and data types (fixed- or floating-point data) not supported by the standard VHDL packages. Once this was achieved, the next challenge was to simplify the solution as much as possible without using unofficial user-made packages. Both systems exhibited a series of advantages and disadvantages of interest for the document: stability, reaction time, simplicity and computational load are some of the many factors studied in the designed systems. Finally, some additions to the systems are also documented: a VGA visual interface, a module that compensates the offset of the ADC, and the implementation of a bank of MIDI faders, among others.
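A brief sketch of the push-bypassing filter idea from system (2) follows, written in Python for readability rather than the fixed-point VHDL of the thesis; the sample rate, cut-off and signal are illustrative assumptions.

```python
import numpy as np

def iir_lowpass(x, alpha):
    """First-order IIR low-pass: y[n] = alpha*x[n] + (1 - alpha)*y[n-1].
    Smaller alpha gives a lower cut-off frequency."""
    y = np.empty_like(x, dtype=float)
    acc = 0.0
    for n, xn in enumerate(x):
        acc = alpha * xn + (1.0 - alpha) * acc
        y[n] = acc
    return y

fs = 1000                                        # assumed sample rate, Hz
t = np.arange(2 * fs) / fs
back_emf = 0.3 * np.sin(2 * np.pi * 0.5 * t)     # slow, gravity-like movement
back_emf[500:520] += 2.0                         # short, large push
filtered = iir_lowpass(back_emf, alpha=0.01)
# The slow component passes through; the brief push is strongly attenuated,
# so the sensorimotor loop keeps reacting to gravity but ignores the push.
print(f"raw peak {back_emf.max():.2f} -> filtered peak {filtered.max():.2f}")
```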
Abstract:
A k-NN query finds the k nearest neighbors of a given point in a point database. When it is sufficient to measure object distance using the Euclidean distance, the key to efficient k-NN query processing is to fetch and check the distances of a minimum number of points from the database. For many applications, such as vehicle movement along road networks or rover and animal movement along terrain surfaces, the distance is only meaningful when it is along a valid movement path. For this type of k-NN query, the focus of efficient query processing is to minimize the cost of computing distances using the environment data (such as the road network data and the terrain data), which can be several orders of magnitude larger than the point data. Efficient processing of k-NN queries based on the Euclidean distance or the road network distance has been investigated extensively in the past. In this paper, we investigate the problem of surface k-NN query processing, where the distance is calculated from the shortest path along a terrain surface. This problem is very challenging, as the terrain data can be very large and the computational cost of finding shortest paths is very high. We propose an efficient solution based on multiresolution terrain models. Our approach eliminates the costly process of finding shortest paths by ranking objects using estimated lower and upper bounds of distance on multiresolution terrain models.
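The pruning idea can be sketched as follows. The bound and distance functions are assumed callables supplied by a multiresolution terrain model, not an API from the paper; the only requirement is lower <= exact <= upper.

```python
import heapq

def surface_knn(query, candidates, k, lower_bound, upper_bound, exact_dist):
    """Bound-based k-NN ranking: cheap lower/upper distance bounds from a
    coarse terrain model prune candidates, so the expensive exact surface
    shortest-path distance is computed for few objects."""
    # The k-th smallest upper bound is a cutoff: no object whose lower
    # bound exceeds it can be among the k nearest neighbours.
    ubs = sorted(upper_bound(query, c) for c in candidates)
    cutoff = ubs[k - 1]
    survivors = [c for c in candidates if lower_bound(query, c) <= cutoff]
    # Exact surface distances only for the surviving candidates.
    return heapq.nsmallest(k, survivors, key=lambda c: exact_dist(query, c))
```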
Abstract:
INTAMAP is a web processing service for the automatic interpolation of measured point data. Requirements were (i) using open standards for spatial data, such as those developed in the context of the Open Geospatial Consortium (OGC), (ii) using a suitable environment for statistical modelling and computation, and (iii) producing an open source solution. The system couples the 52°North web processing service, accepting data in the form of an Observations and Measurements (O&M) document, with a computing back-end realized in the R statistical environment. The probability distribution of interpolation errors is encoded with UncertML, a new markup language for encoding uncertain data. Automatic interpolation needs to be useful for a wide range of applications, and the algorithms have been designed to cope with anisotropies and extreme values. In the light of the INTAMAP experience, we discuss the lessons learnt.
Abstract:
INTAMAP is a Web Processing Service for the automatic spatial interpolation of measured point data. Requirements were (i) using open standards for spatial data such as developed in the context of the Open Geospatial Consortium (OGC), (ii) using a suitable environment for statistical modelling and computation, and (iii) producing an integrated, open source solution. The system couples an open-source Web Processing Service (developed by 52°North), accepting data in the form of standardised XML documents (conforming to the OGC Observations and Measurements standard) with a computing back-end realised in the R statistical environment. The probability distribution of interpolation errors is encoded with UncertML, a markup language designed to encode uncertain data. Automatic interpolation needs to be useful for a wide range of applications and the algorithms have been designed to cope with anisotropy, extreme values, and data with known error distributions. Besides a fully automatic mode, the system can be used with different levels of user control over the interpolation process.
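The automatic-interpolation workflow (fit a variogram without user input, then krige) can be sketched as below. INTAMAP's actual back-end is R-based, so this Python version is purely illustrative; it fits the variogram to all point pairs, whereas a real system would bin the cloud and select among candidate models.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.spatial.distance import cdist

def exp_variogram(h, nugget, sill, rang):
    """Exponential variogram model."""
    return nugget + sill * (1.0 - np.exp(-h / rang))

def auto_krige(xy, z, xy_new):
    """Automatically fit a variogram, then apply ordinary kriging."""
    h = cdist(xy, xy)
    gamma = 0.5 * (z[:, None] - z[None, :]) ** 2      # semivariance cloud
    p0 = [1e-3, float(gamma.mean()), float(h.mean())]
    (nug, sill, rang), _ = curve_fit(exp_variogram, h.ravel(), gamma.ravel(),
                                     p0=p0, maxfev=20000)
    # Ordinary kriging system [Gamma 1; 1 0][w; mu] = [gamma0; 1].
    n = len(z)
    K = np.ones((n + 1, n + 1))
    K[:n, :n] = exp_variogram(h, nug, sill, rang)
    K[-1, -1] = 0.0
    k = np.ones((n + 1, len(xy_new)))
    k[:n] = exp_variogram(cdist(xy, xy_new), nug, sill, rang)
    w = np.linalg.solve(K, k)
    return w[:n].T @ z                                # kriging predictions

rng = np.random.default_rng(5)
xy = rng.uniform(0, 10, (60, 2))
z = np.sin(xy[:, 0]) + 0.1 * rng.normal(size=60)      # synthetic measurements
grid = np.column_stack([np.linspace(0, 10, 5), np.full(5, 5.0)])
print(auto_krige(xy, z, grid).round(2))
```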