954 results for Spatial Information
Abstract:
The collection of spatial information to quantify changes to the state and condition of the environment is a fundamental component of the conservation or sustainable utilization of tropical and subtropical forests. Age is an important structural attribute of old-growth forests influencing biological diversity in Australian eucalypt forests. Aerial photograph interpretation has traditionally been used for mapping the age and structure of forest stands. However, this method is subjective and cannot accurately capture the fine to landscape-scale variation necessary for ecological studies. Identification and mapping of fine to landscape-scale vegetative structural attributes will allow the compilation of information associated with Montreal Process indicators 1b and 1d, which seek to determine linkages between age structure and the diversity and abundance of forest fauna populations. This project integrated measurements of structural attributes derived from a canopy-height elevation model with results from a geometrical-optical/spectral mixture analysis model to map forest age structure at a landscape scale. The availability of multiple-scale data allows the transfer of high-resolution attributes to landscape-scale monitoring. Multispectral image data were obtained from a DMSV (Digital Multi-Spectral Video) sensor over St Mary's State Forest in Southeast Queensland, Australia. Local scene variance levels for different forest types calculated from the DMSV data were used to optimize the tree density and canopy size output in a geometric-optical model applied to a Landsat Thematic Mapper (TM) data set. Airborne laser scanner data obtained over the project area were used to calibrate a digital filter to extract tree heights from a digital elevation model that was derived from scanned colour stereopairs. The modelled estimates of tree height, crown size, and tree density were used to produce a decision-tree classification of forest successional stage at a landscape scale.
The results obtained (72% accuracy) were limited in validation, but demonstrate the potential of the multi-scale methodology to provide spatial information for forestry policy objectives (i.e., monitoring forest age structure).
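The final classification step can be pictured as a small rule-based decision tree over the three modelled attributes. The sketch below is hypothetical: the thresholds, stage labels and attribute units are illustrative assumptions, not values from the study.

```python
# Hypothetical rule-based decision tree assigning a forest successional
# stage from modelled stand attributes (tree height in m, crown size in m,
# tree density in stems/ha). Thresholds are illustrative only.

def classify_stage(height_m, crown_m, density_ha):
    """Return a successional-stage label from three structural attributes."""
    if height_m < 15:
        # Short stands: dense regrowth vs sparse early regeneration
        return "regrowth" if density_ha > 800 else "early"
    if crown_m >= 8 and density_ha < 300:
        # Tall, large-crowned, sparse stands are typical of old growth
        return "old-growth"
    return "mature"

stands = [(10, 3, 1000), (28, 9, 150), (22, 6, 450)]
print([classify_stage(*s) for s in stands])
```

Applied per pixel over the modelled height, crown and density layers, a tree of this shape yields a wall-to-wall successional-stage map.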
Abstract:
Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies
Abstract:
After ischemic stroke, the ischemic damage to brain tissue evolves over time and with an uneven spatial distribution. Early irreversible changes occur in the ischemic core, whereas, in the penumbra, which receives more collateral blood flow, the damage is milder and delayed. A better characterization of the penumbra, irreversibly damaged and healthy tissues is needed to understand the mechanisms involved in tissue death. MRSI is a powerful tool for this task if the scan time can be decreased whilst maintaining high sensitivity. Therefore, we made improvements to a ¹H MRSI protocol to study middle cerebral artery occlusion in mice. The spatial distribution of changes in the neurochemical profile was investigated, with an effective spatial resolution of 1.4 μL, applying the protocol on a 14.1-T magnet. The acquired maps included the difficult-to-separate glutamate and glutamine resonances and, to our knowledge, the first mapping of the metabolites γ-aminobutyric acid and glutathione in vivo, within a metabolite measurement time of 45 min. The maps were in excellent agreement with findings from single-voxel spectroscopy and offer spatial information at a scan time acceptable for most animal models. The metabolites measured differed with respect to the temporal evolution of their concentrations and the localization of these changes. Specifically, lactate and N-acetylaspartate concentration changes largely overlapped with the T2-hyperintense region visualized with MRI, whereas changes in cholines and glutathione affected the entire middle cerebral artery territory. Glutamine maps showed elevated levels in the ischemic striatum until 8 h after reperfusion, and until 24 h in cortical tissue, indicating differences in excitotoxic effects and secondary energy failure in these tissue types. Copyright © 2011 John Wiley & Sons, Ltd.
Abstract:
We aimed to determine whether human subjects' reliance on different sources of spatial information encoded in different frames of reference (i.e., egocentric versus allocentric) affects their performance, decision time and memory capacity in a short-term spatial memory task performed in the real world. Subjects were asked to play the Memory game (a.k.a. the Concentration game) without an opponent, in four different conditions that controlled for the subjects' reliance on egocentric and/or allocentric frames of reference for the elaboration of a spatial representation of the image locations enabling maximal efficiency. We report experimental data from young adult men and women, and describe a mathematical model to estimate human short-term spatial memory capacity. We found that short-term spatial memory capacity was greatest when an egocentric spatial frame of reference enabled subjects to encode and remember the image locations. However, when egocentric information was not reliable, short-term spatial memory capacity was greater and decision time shorter when an allocentric representation of the image locations with respect to distant objects in the surrounding environment was available, as compared to when only a spatial representation encoding the relationships between the individual images, independent of the surrounding environment, was available. Our findings thus further demonstrate that changes in viewpoint produced by the movement of images placed in front of a stationary subject are not equivalent to the movement of the subject around stationary images. We discuss possible limitations of classical neuropsychological and virtual reality experiments of spatial memory, which typically restrict the sensory information normally available to human subjects in the real world.
Abstract:
This study assesses gender differences in spatial and non-spatial relational learning and memory in adult humans behaving freely in a real-world, open-field environment. In Experiment 1, we tested the use of proximal landmarks as conditional cues allowing subjects to predict the location of rewards hidden in one of two sets of three distinct locations. Subjects were tested in two different conditions: (1) when local visual cues marked the potentially-rewarded locations, and (2) when no local visual cues marked the potentially-rewarded locations. We found that only 17 of 20 adults (8 males, 9 females) used the proximal landmarks to predict the locations of the rewards. Although females exhibited higher exploratory behavior at the beginning of testing, males and females discriminated the potentially-rewarded locations similarly when local visual cues were present. Interestingly, when the spatial and local information conflicted in predicting the reward locations, males considered both spatial and local information, whereas females ignored the spatial information. However, in the absence of local visual cues females discriminated the potentially-rewarded locations as well as males. In Experiment 2, subjects (9 males, 9 females) were tested with three asymmetrically-arranged rewarded locations, which were marked by local cues on alternate trials. Again, females discriminated the rewarded locations as well as males in the presence or absence of local cues. In sum, although particular aspects of task performance might differ between genders, we found no evidence that women have poorer allocentric spatial relational learning and memory abilities than men in a real-world, open-field environment.
Abstract:
Global positioning systems (GPS) offer a cost-effective and efficient method to input and update transportation data. The spatial location of objects provided by GPS is easily integrated into geographic information systems (GIS). The storage, manipulation, and analysis of spatial data are also relatively simple in a GIS. However, many data storage and reporting methods at transportation agencies rely on linear referencing methods (LRMs); consequently, GPS data must be able to link with linear referencing. Unfortunately, the two systems are fundamentally incompatible in the way data are collected, integrated, and manipulated. In order for the spatial data collected using GPS to be integrated into a linear referencing system or shared among LRMs, a number of issues need to be addressed. This report documents and evaluates several of those issues and offers recommendations. In order to evaluate the issues associated with integrating GPS data with an LRM, a pilot study was created. To perform the pilot study, point features, a linear datum, and a spatial representation of an LRM were created for six test roadway segments that were located within the boundaries of the pilot study conducted by the Iowa Department of Transportation linear referencing system project team. Various issues in integrating point features with an LRM or between LRMs are discussed and recommendations provided. The accuracy of the GPS is discussed, including issues such as point features mapping to the wrong segment. Another topic is the loss of spatial information that occurs when a three-dimensional or two-dimensional spatial point feature is converted to a one-dimensional representation on an LRM. Recommendations such as storing point features as spatial objects if necessary or preserving information such as coordinates and elevation are suggested. The lack of spatial accuracy characteristic of most cartography, on which LRMs are often based, is another topic discussed.
The associated issues include linear and horizontal offset error. The final topic discussed is some of the issues in transferring point feature data between LRMs.
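The core conversion the report discusses, from a 2-D GPS point to a linear-referencing measure plus a horizontal offset, can be sketched as a nearest projection onto a route polyline. This is a minimal illustration assuming planar coordinates in metres; real LRMs also carry route identifiers, calibration points and measure anchors.

```python
# Hypothetical sketch: snap a GPS point to a route polyline and return
# (measure along the route, horizontal offset), both in the same planar
# units as the input coordinates.
import math

def point_to_measure(route, pt):
    """route: list of (x, y) vertices; pt: (x, y). Returns (measure, offset)."""
    px, py = pt
    best = (float("inf"), 0.0)  # (offset, measure) of nearest projection
    measure_so_far = 0.0
    for (x1, y1), (x2, y2) in zip(route, route[1:]):
        dx, dy = x2 - x1, y2 - y1
        seg_len = math.hypot(dx, dy)
        # Parameter t of the perpendicular foot, clamped to the segment
        t = max(0.0, min(1.0, ((px - x1) * dx + (py - y1) * dy) / seg_len**2))
        fx, fy = x1 + t * dx, y1 + t * dy
        offset = math.hypot(px - fx, py - fy)
        if offset < best[0]:
            best = (offset, measure_so_far + t * seg_len)
        measure_so_far += seg_len
    return best[1], best[0]

route = [(0, 0), (100, 0), (100, 50)]
print(point_to_measure(route, (40, 3)))  # measure 40.0 m, offset 3.0 m
```

The "wrong segment" problem mentioned in the report shows up here directly: a GPS point whose error exceeds the spacing between nearby routes can project onto the wrong polyline, which is why the report recommends preserving the original coordinates.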
Abstract:
Simple reaction times (RTs) to auditory-somatosensory (AS) multisensory stimuli are facilitated over their unisensory counterparts both when stimuli are delivered to the same location and when separated. In two experiments we addressed the possibility that top-down and/or task-related influences can dynamically impact the spatial representations mediating these effects and the extent to which multisensory facilitation will be observed. Participants performed a simple detection task in response to auditory, somatosensory, or simultaneous AS stimuli that in turn were either spatially aligned or misaligned by lateralizing the stimuli. Additionally, we also informed the participants that they would be retrogradely queried (one-third of trials) regarding the side where a given stimulus in a given sensory modality was presented. In this way, we sought to have participants attending to all possible spatial locations and sensory modalities, while nonetheless having them perform a simple detection task. Experiment 1 provided no cues prior to stimulus delivery. Experiment 2 included spatially uninformative cues (50% of trials). In both experiments, multisensory conditions significantly facilitated detection RTs with no evidence for differences according to spatial alignment (though general benefits of cuing were observed in Experiment 2). Facilitated detection occurs even when attending to spatial information. Performance with probes, quantified using sensitivity (d'), was impaired following multisensory trials in general and significantly more so following misaligned multisensory trials. This indicates that spatial information is not available, despite being task-relevant. The collective results support a model wherein early AS interactions may result in a loss of spatial acuity for unisensory information.
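The probe sensitivity reported above is the standard signal-detection index d' = z(hit rate) − z(false-alarm rate), where z is the inverse of the standard normal CDF. A minimal sketch with made-up trial counts:

```python
# Sensitivity index d' from signal-detection theory, computed with the
# standard library's inverse normal CDF. Counts below are illustrative.
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    z = NormalDist().inv_cdf
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return z(hit_rate) - z(fa_rate)

print(round(d_prime(40, 10, 10, 40), 3))  # → 1.683
```

Lower d' following misaligned multisensory trials, as reported here, means probes after those trials were discriminated less reliably, not merely answered more slowly.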
Abstract:
Directional cell growth requires that cells read and interpret shallow chemical gradients, but how the gradient directional information is identified remains elusive. We use single-cell analysis and mathematical modeling to define the cellular gradient decoding network in yeast. Our results demonstrate that the spatial information of the gradient signal is read locally within the polarity site complex using double-positive feedback between the GTPase Cdc42 and trafficking of the receptor Ste2. Spatial decoding critically depends on low Cdc42 activity, which is maintained by the MAPK Fus3 through sequestration of the Cdc42 activator Cdc24. Deregulated Cdc42 or Ste2 trafficking prevents gradient decoding and leads to mis-oriented growth. Our work reveals how a conserved set of components assembles a network integrating signal intensity and directionality to decode the spatial information contained in chemical gradients.
Abstract:
Vineyards vary over space and time, making geomatics technologies ideally suited to study terroir. This study applied geomatics technologies - GPS, remote sensing and GIS - to characterize the spatial variability at Stratus Vineyards in the Niagara Region. The concept of spatial terroir was used to visualize, monitor and analyze the spatial and temporal variability of variables that influence grape quality. Spatial interpolation and spatial autocorrelation were used to measure the pattern demonstrated by soil moisture, leaf water potential, vine vigour, soil composition and grape composition on two Cabernet Franc blocks and one Chardonnay block. All variables demonstrated some spatial variability within and between the vineyard blocks and over time. Soil moisture exhibited the most significant spatial clustering and was temporally stable. Geomatics technologies provided valuable spatial information related to the natural spatial variability at Stratus Vineyards and can be used to inform and influence vineyard management decisions.
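Spatial clustering of the kind reported for soil moisture is commonly measured with global Moran's I, which is positive when nearby locations carry similar values. A minimal sketch, assuming a simple binary distance-band weight matrix (the study's actual weighting scheme is not given here):

```python
# Global Moran's I with a binary distance-band weight matrix: pairs of
# points within `threshold` of each other are neighbours (weight 1).
import math

def morans_i(coords, values, threshold):
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    num = s0 = 0.0
    for i in range(n):
        for j in range(n):
            if i != j and math.dist(coords[i], coords[j]) <= threshold:
                num += dev[i] * dev[j]
                s0 += 1.0
    den = sum(d * d for d in dev)
    return (n / s0) * (num / den)

# Neighbours share similar values -> positive I (spatial clustering)
coords = [(0, 0), (1, 0), (2, 0), (3, 0)]
values = [1.0, 1.2, 3.0, 3.1]
print(morans_i(coords, values, threshold=1.0))
```

Applied per sampling date to a variable such as soil moisture, a consistently positive I across dates is what "spatially clustered and temporally stable" means in practice.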
Abstract:
The human visual ability to perceive depth looks like a puzzle. We perceive three-dimensional spatial information quickly and efficiently by using the binocular stereopsis of our eyes and, what is more important, the knowledge of the most common objects that we acquire through experience. Nowadays, fully modelling the behaviour of our brain is out of reach, which is why the huge problem of 3D perception and, further, interpretation is split into a sequence of easier problems. A lot of research in robot vision aims to obtain 3D information about the surrounding scene. Most of this research is based on modelling the stereopsis of humans by using two cameras as if they were two eyes. This method is known as stereo vision; it has been widely studied in the past, is being studied at present, and a lot of work will surely be done on it in the future, which allows us to affirm that this topic is one of the most interesting ones in computer vision. The stereo vision principle is based on obtaining the three-dimensional position of an object point from the positions of its projections in both camera image planes. However, before inferring 3D information, the mathematical models of both cameras have to be known. This step is known as camera calibration and is broadly described in the thesis. Perhaps the most important problem in stereo vision is the determination of the pair of homologous points in the two images, known as the correspondence problem; it is also one of the most difficult problems to solve and is currently investigated by many researchers. The epipolar geometry allows us to reduce the correspondence problem, and an approach to it is described in the thesis. Nevertheless, it does not solve the problem completely, as many considerations have to be taken into account; for example, some points have no correspondence, due to a surface occlusion or simply to a projection out of the camera's field of view.
The interest of the thesis is focused on structured light, which has been considered one of the most frequently used techniques to reduce the problems related to stereo vision. Structured light is based on the relationship between a projected light pattern and its projection as captured by an image sensor. The deformations between the pattern projected onto the scene and the one captured by the camera make it possible to obtain three-dimensional information about the illuminated scene. This technique has been widely used in applications such as 3D object reconstruction, robot navigation, quality control, and so on. Although the projection of regular patterns solves the problem of points without a match, it does not solve the problem of multiple matching, which forces the use of computationally expensive algorithms to search for the correct matches. In recent years, another structured light technique has increased in importance. This technique is based on codifying the light projected onto the scene so that it can be used as a tool to obtain a unique match: as each token of light is imaged by the camera, we have to read its label (decode the pattern) in order to solve the correspondence problem. The advantages and disadvantages of stereo vision versus structured light, together with a survey of coded structured light, are presented and discussed. The work carried out in the frame of this thesis has permitted us to present a new coded structured light pattern that solves the correspondence problem uniquely and robustly. Uniquely, because each token of light is coded by a different word, which removes the problem of multiple matching. Robustly, because the pattern is coded using the position of each token of light with respect to both coordinate axes. Algorithms and experimental results are included in the thesis. The reader can see examples of the 3D measurement of static objects, and of the more complicated measurement of moving objects.
The technique can be used in both cases, as the pattern is coded by a single projection shot; it can therefore be used in several applications of robot vision. Our interest is focused on the mathematical study of the camera and pattern projector models. We are also interested in how these models can be obtained by calibration, and how they can be used to obtain three-dimensional information from two corresponding points. Furthermore, we have studied structured light and coded structured light, and we have presented a new coded structured light pattern. However, in this thesis we started from the assumption that the corresponding points could be well segmented from the captured image. Computer vision constitutes a huge problem, and a lot of work is being done at all levels of human vision modelling, starting from (a) image acquisition; (b) image enhancement, filtering and processing; and (c) image segmentation, which involves thresholding, thinning, contour detection, texture and colour analysis, and so on. The interest of this thesis starts at the next step, usually known as depth perception or 3D measurement.
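For the simplest case of the camera models discussed, a rectified stereo pair, recovering a 3-D point from two corresponding image points reduces to triangulation from the disparity. The sketch below assumes illustrative intrinsics (focal length f in pixels, principal point (cx, cy)) and baseline B in metres; it is not the calibration procedure of the thesis, just the textbook rectified case.

```python
# Rectified-stereo triangulation: corresponding points share the same
# image row, so the disparity d = uL - uR alone fixes the depth Z.

def triangulate(uL, vL, uR, f, B, cx, cy):
    d = uL - uR                 # disparity along the rectified epipolar line
    Z = f * B / d               # depth (metres)
    X = (uL - cx) * Z / f       # lateral position
    Y = (vL - cy) * Z / f       # vertical position
    return X, Y, Z

# Illustrative values: f = 1000 px, baseline 0.1 m, principal point (640, 360)
print(triangulate(uL=700, vL=400, uR=650, f=1000.0, B=0.1, cx=640, cy=360))
```

Coded structured light replaces the second camera with a projector: the decoded token label plays the role of the right-image coordinate, and the same triangulation geometry applies.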
Abstract:
The resolution of remotely sensed data is becoming increasingly fine, and there are now many sources of data with a pixel size of 1 m x 1 m. This produces huge amounts of data that have to be stored, processed and transmitted. For environmental applications this resolution possibly provides far more data than are needed: data overload. This poses the question: how much is too much? We have explored two resolutions of data - 20 m pixel SPOT data and 1 m pixel Computerized Airborne Multispectral Imaging System (CAMIS) data from Fort A. P. Hill (Virginia, USA) - using the variogram of geostatistics. For both we used the normalized difference vegetation index (NDVI). Three scales of spatial variation were identified in both the SPOT and 1 m data: there was some overlap at the intermediate spatial scales of about 150 m and of 500-600 m. We subsampled the 1 m data, and scales of variation of about 30 m and of 300 m were identified consistently until the separation between pixel centroids was 15 m (or 1 in 225 pixels). At this stage, spatial scales of about 100 m and 600 m were described, which suggested that only now was there a real difference in the amount of spatial information available from an environmental perspective. These latter were similar spatial scales to those identified from the SPOT image. We have also analysed 1 m CAMIS data from Fort Story (Virginia, USA) for comparison, and the outcome is similar. From these analyses it seems that a pixel size of 20 m is adequate for many environmental applications, and that if more detail is required the higher-resolution data could be subsampled to a 10 m separation between pixel centroids without any serious loss of information. This reduces significantly the amount of data that needs to be stored, transmitted and analysed, and has important implications for data compression.
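The variogram analysis above can be illustrated with a minimal empirical semivariogram: for each lag bin, gamma(h) is half the mean squared difference between all pixel pairs separated by roughly that distance. The transect values below are made up, and the binning assumes unit grid spacing.

```python
# Empirical semivariogram: gamma(h) = 0.5 * mean[(z_i - z_j)^2] over pairs
# whose separation falls in lag bin k (bin k collects d ~ (k+1)*lag_width).
import math

def empirical_variogram(coords, values, lag_width, n_lags):
    sums = [0.0] * n_lags
    counts = [0] * n_lags
    n = len(values)
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(coords[i], coords[j])
            k = round(d / lag_width) - 1
            if 0 <= k < n_lags:
                sums[k] += (values[i] - values[j]) ** 2
                counts[k] += 1
    return [0.5 * s / c if c else None for s, c in zip(sums, counts)]

# 1-D transect of NDVI-like values with short-range structure:
# gamma rises with lag until the scale of variation is reached
coords = [(x, 0) for x in range(10)]
values = [0.2, 0.25, 0.3, 0.6, 0.65, 0.7, 0.3, 0.25, 0.2, 0.22]
print(empirical_variogram(coords, values, lag_width=1.0, n_lags=4))
```

The "scales of spatial variation" in the abstract correspond to lag distances at which a fitted variogram levels off; comparing variograms of full-resolution and subsampled data is exactly how the authors judged when information was lost.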
Spatial reference of black capuchin monkeys in Brazilian Atlantic Forest: egocentric or allocentric?
Abstract:
Wild primates occupy large home ranges and travel long distances to reach goals. However, how primates are able to remember goal locations and travel efficiently is unclear. Few studies present consistent results regarding what reference system primates use to navigate, and what kind of spatial information they recognize. We analysed the pattern of navigation of one wild group of black capuchin monkeys, Cebus nigritus, in the Atlantic Forest for 100 days in Carlos Botelho State Park (PECB), Brazil. We tested predictions based on the alternative hypotheses that black capuchin monkeys navigate using a sequence of landmarks as an egocentric reference system or an allocentric reference system, or both, depending on the availability of food resources. The group location was recorded using a GPS device collecting coordinates at 5 min intervals, and route maps were generated using ArcView v9.3.1. The study group travelled along habitual routes during less than 30% of our study sample, and revisited resources from different starting points, using different paths and routes, even when prominent landmarks near feeding locations were not visible. The study group used habitual routes more frequently when high-quality foods were scarce, and navigated using different paths when revisiting food sources. Results support the hypothesis that black capuchin monkeys at PECB navigate using both egocentric and allocentric systems of reference, depending on the quality and distribution of the food resources they find. (C) 2010 The Association for the Study of Animal Behaviour. Published by Elsevier Ltd. All rights reserved.
Abstract:
In the last couple of decades we have witnessed a reappraisal of spatial design-based techniques. Usually, the spatial information regarding the locations of the individuals of a population has been used to develop efficient sampling designs. This thesis aims at offering a new technique, for inference on both individual values and global population values, that is able to employ the spatial information available before sampling at the estimation level, by rewriting a deterministic interpolator under a design-based framework. The proposed point estimator of the individual values is treated both in the case of finite spatial populations and in that of continuous spatial domains, while the theory on the estimator of the global population value covers the finite-population case only. A fairly broad simulation study compares the results of the point estimator with the simple random sampling without replacement estimator in predictive form and with kriging, which is the benchmark technique for inference on spatial data. The Monte Carlo experiment is carried out on populations generated according to different superpopulation methods in order to control different aspects of the spatial structure. The simulation outcomes point out that the proposed point estimator behaves almost like the kriging predictor regardless of the parameters adopted for generating the populations, especially for low sampling fractions. Moreover, the use of the spatial information substantially improves design-based spatial inference on individual values.
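As an illustration of the kind of deterministic interpolator that can be recast in a design-based framework, here is a minimal inverse-distance-weighting predictor of an individual value at an unsampled location. The thesis does not specify which interpolator it rewrites; IDW is an assumed stand-in, and all coordinates and values below are made up.

```python
# Inverse-distance-weighting (IDW) prediction: the value at an unsampled
# location is a weighted average of sampled values, with weights d^(-power).
import math

def idw_predict(sample_coords, sample_values, target, power=2.0):
    num = den = 0.0
    for xy, v in zip(sample_coords, sample_values):
        d = math.dist(xy, target)
        if d == 0:
            return v          # exact interpolation at sampled locations
        w = d ** -power
        num += w * v
        den += w
    return num / den

sample_coords = [(0, 0), (10, 0), (0, 10)]
sample_values = [1.0, 3.0, 5.0]
print(idw_predict(sample_coords, sample_values, target=(2, 2)))
```

Unlike kriging, IDW needs no variogram model, which is what makes a deterministic interpolator attractive when only design-based (randomization) assumptions are admitted; the design-based machinery then supplies the variance estimation that the interpolator itself lacks.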