932 results for spatial information processing theories
Abstract:
Summarizing topological relations is fundamental to many spatial applications, including spatial query optimization. In this paper, we present several novel techniques to effectively construct cell-density-based spatial histograms for range (window) summarizations restricted to the four most important topological relations: contains, contained, overlap, and disjoint. We first present a novel framework to construct a multiscale histogram composed of multiple Euler histograms with the guarantee of exact summarization results for aligned windows in constant time. Then we present an approximate algorithm, with approximation ratio 19/12, to minimize the storage space of such multiscale Euler histograms, although the problem is generally NP-hard. To conform to a limited storage space where only k Euler histograms are allowed, an effective algorithm is presented to construct multiscale histograms that achieve high accuracy. Finally, we present a new approximate algorithm, running in constant time, to query an Euler histogram when exact answers cannot be guaranteed. Our extensive experiments on both synthetic and real-world datasets demonstrate that the approximate multiscale histogram techniques may improve the accuracy of existing techniques by several orders of magnitude while retaining cost efficiency, and that the exact multiscale histogram technique requires storage space only linearly proportional to the number of cells for the real datasets.
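As a concrete illustration of the Euler-histogram idea underlying this framework, the sketch below keeps counts for cells, edges, and vertices on a doubled lattice; an alternating-sign sum over an aligned window then counts intersecting rectangular objects exactly. This is a minimal single-histogram toy, not the paper's multiscale construction, and the class and method names are invented for the demo.

```python
import numpy as np

class EulerHistogram2D:
    """Toy Euler histogram over an n x n grid of cells.

    Histogram nodes live on a (2n-1) x (2n-1) lattice: even/even
    indices are cells, odd/odd are vertices, mixed parity are edges.
    """
    def __init__(self, n):
        self.n = n
        self.h = np.zeros((2 * n - 1, 2 * n - 1), dtype=int)
        # Euler-characteristic signs: +1 for cells and vertices,
        # -1 for edges (mixed-parity nodes).
        r = np.arange(2 * n - 1)
        self.sign = np.where((r[:, None] % 2) == (r[None, :] % 2), 1, -1)

    def insert(self, r1, r2, c1, c2):
        """Add one object covering cell rows r1..r2 and cols c1..c2:
        increment every node of its footprint on the doubled lattice."""
        self.h[2 * r1:2 * r2 + 1, 2 * c1:2 * c2 + 1] += 1

    def count_intersecting(self, r1, r2, c1, c2):
        """Exact number of objects intersecting the aligned window:
        each intersecting object's clipped footprint has signed sum 1."""
        sl = (slice(2 * r1, 2 * r2 + 1), slice(2 * c1, 2 * c2 + 1))
        return int((self.h[sl] * self.sign[sl]).sum())
```

For example, after inserting objects on cells (0..1, 0..1) and (2..3, 2..3) of a 4 x 4 grid, a window over rows 1..2 and cols 1..2 reports 2 intersecting objects in constant time per node.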
River basin surveillance using remotely sensed data: a water resources information management system
Abstract:
This thesis describes the development of an operational river basin water resources information management system. The river or drainage basin is the fundamental unit of the system, both in the modelling and prediction of hydrological processes and in the monitoring of the effect of catchment management policies. A primary concern of the study is the collection of sufficient and sufficiently accurate information to model hydrological processes. Remote sensing, in combination with conventional point source measurement, can be a valuable source of information, but is often overlooked by hydrologists due to the cost of acquisition and processing. This thesis describes a number of cost-effective methods of acquiring remotely sensed imagery, from airborne video survey to real-time ingestion of meteorological satellite data. Inexpensive micro-computer systems and peripherals are used throughout to process and manipulate the data. Spatial information systems provide a means of integrating these data with topographic and thematic cartographic data, and historical records. For the system to have any real potential, the data must be stored in a readily accessible format and be easily manipulated within the database. The design of efficient man-machine interfaces and the use of software engineering methodologies are therefore included in this thesis as a major part of the design of the system. The use of low-cost technologies, from micro-computers to video cameras, enables the introduction of water resources information management systems into developing countries where the potential benefits are greatest.
Abstract:
The leadership categorisation theory suggests that followers rely on a hierarchical cognitive structure in perceiving leaders and the leadership process, which consists of three levels: superordinate, basic and subordinate. The predominant view is that followers rely on Implicit Leadership Theories (ILTs) at the basic level in making judgments about managers. The thesis examines whether this presumption is true by proposing and testing two competing conceptualisations: the congruence between basic-level ILTs (general leader) and perceptions of the actual manager, and between subordinate-level ILTs (job-specific leader) and the actual manager. The conceptualisation at the job-specific level builds on context-related assertions of the ILT explanatory models: leadership categorisation, information processing and connectionist network theories. Further, the thesis addresses the effects of ILT congruence at the group level. The hypothesised model suggests that Leader-Member Exchange (LMX) will act as a mediator between ILT congruence and outcomes. Three studies examined the proposed model. The first was cross-sectional with 175 students reporting on work experience during a 1-year industrial placement. The second was longitudinal and had a sample of 343 students engaging in a business simulation in groups with formal leadership. The final study was a cross-sectional survey in several organisations with a sample of 178. A novel approach was taken to congruence analysis: the hypothesised models were tested using Latent Congruence Modelling (LCM), which accounts for measurement error and overcomes the majority of limitations of traditional approaches.
The first two studies confirm the traditional theorised view that employees rely on basic-level ILTs in making judgments about their managers with important implications, and show that LMX mediates the relationship between ILT congruence and work-related outcomes (performance, job satisfaction, well-being, task satisfaction, intragroup conflict, group satisfaction, team realness, team-member exchange, group performance). The third study confirms this with conflict, well-being, self-rated performance and commitment as outcomes.
Abstract:
In the future, competitors will have more and more opportunities to buy the same information; therefore companies' competitiveness will not primarily depend on how much information they possess, but rather on how well they can "translate" it into their own language. This study aims to examine the factors that have the most significant impact on the degree to which market studies are utilised by companies. Most of the work in this area has studied the use of information in strategic decisions a priori. This paper, while reflecting on the findings of research on organisational theories of information processing, aims to bridge this gap. It proposes and tests a new conceptual framework that examines the use of managerial market research information in decision-making and knowledge creation within one single model. Survey data collected from all the top-income business enterprises in Hungary indicate that market research findings are efficiently incorporated into the marketing information system only if the marketing manager trusts the researcher and believes that the market study is of high quality. Decision-makers are more likely to learn from market studies facilitating the resolution of some specific problem than from descriptive studies of a more general nature.
Abstract:
The question as to whether people totally blind since infancy process allocentric or ‘external’ spatial information like the sighted has caused considerable debate within the literature. Due to the extreme rarity of the population, researchers have often included individuals with Retinopathy of Prematurity (RoP – over-oxygenation at birth) within the sample. However, RoP is inextricably confounded with prematurity per se, and prematurity, without visual disability, has been associated with spatial processing difficulties. In this experiment, blindfolded sighted participants and two groups of functionally totally blind participants heard text descriptions from a survey (allocentric) or route (egocentric) perspective. One blind group lost their sight due to RoP and a second group before 24 months of age. The accuracy of participants’ mental representations derived from the text descriptions was assessed via questions and maps. The RoP participants had lower scores than the sighted and early blind, who performed similarly. In other words, it was not visual impairment alone that resulted in impaired allocentric spatial performance in this task, but visual impairment together with RoP. This finding may help explain the contradictions within the existing literature on the role of vision in allocentric spatial processing.
Abstract:
New and promising treatments for coronary heart disease are enabled by vascular scaffolds made of poly(L-lactic acid) (PLLA), as demonstrated by Abbott Vascular’s bioresorbable vascular scaffold. PLLA is a semicrystalline polymer whose degree of crystallinity and crystalline microstructure depend on the thermal and deformation history during processing. In turn, the semicrystalline morphology determines scaffold strength and biodegradation time. However, spatially-resolved information about the resulting material structure (crystallinity and crystal orientation) is needed to interpret in vivo observations.
The first manufacturing step of the scaffold is tube expansion in a process similar to injection blow molding. Spatial uniformity of the tube microstructure is essential for the consistent production and performance of the final scaffold. For implantation into the artery, solid-state deformation below the glass transition temperature is imposed on a laser-cut subassembly to crimp it into a small diameter. Regions of localized strain during crimping are implicated in deployment behavior.
To examine the semicrystalline microstructure development of the scaffold, we employed complementary techniques of scanning electron and polarized light microscopy, wide-angle X-ray scattering, and X-ray microdiffraction. These techniques enabled us to assess the microstructure at the micro and nano length scale. The results show that the expanded tube is very uniform in the azimuthal and axial directions and that radial variations are more pronounced. The crimping step dramatically changes the microstructure of the subassembly by imposing extreme elongation and compression. Spatial information on the degree and direction of chain orientation from X-ray microdiffraction data gives insight into the mechanism by which the PLLA dissipates the stresses during crimping, without fracture. Finally, analysis of the microstructure after deployment shows that it is inherited from the crimping step and contributes to the scaffold’s successful implantation in vivo.
Abstract:
The "SNARC effect" refers to the finding that people respond faster to small numbers with the left hand and to large numbers with the right hand. This effect is often explained by hypothesizing that numbers are represented from left to right in ascending order (the Mental Number Line). However, the SNARC effect may not depend on quantitative information, but on other factors such as the order in which numbers are typically represented from left to right in our culture. Four experiments were performed to test this hypothesis. In the first experiment, the concept of spatial association was extended to nonnumeric mathematical symbols: the minus and plus symbols. These symbols were presented as fixation points in a spatial compatibility paradigm. The results demonstrated an opposite influence of the two symbols on the target stimulus: the minus symbol tends to favor the target presented on the left, while the plus symbol favors the target presented on the right, demonstrating that spatial association can emerge in the absence of a numerical context. In the last three experiments, the relationship between quantity and order was evaluated using normal numbers and mirror numbers. Although mirror numbers denote quantity, they are not encountered in a left-to-right spatial organization. In Experiments 1 and 2, participants performed a magnitude classification task with mirror and normal numbers presented together (Experiment 1) or separately (Experiment 2). In Experiment 3, participants performed a new task in which quantity information processing was not required: the mirror judgment task. The results show that participants access the quantity of both normal and mirror numbers, but only the normal numbers are spatially organized from left to right. In addition, the physical similarity between the numbers, used as a predictor variable in the last three experiments, showed that the physical characteristics of numbers influenced participants' reaction times.
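The standard way to quantify the effect described above is to regress, per participant, the right-minus-left reaction-time difference (dRT) on digit magnitude; a reliably negative slope indicates a SNARC effect. A minimal sketch with fabricated data (the values below are invented solely for the demo, not results from these experiments):

```python
def snarc_slope(digits, drt):
    """Ordinary least-squares slope of dRT (right-hand RT minus
    left-hand RT, in ms) regressed on digit magnitude. A negative
    slope is the classic SNARC signature: small numbers answered
    faster with the left hand, large numbers with the right."""
    n = len(digits)
    mx = sum(digits) / n
    my = sum(drt) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(digits, drt))
    var = sum((x - mx) ** 2 for x in digits)
    return cov / var

# Fabricated illustration data: dRT falls as magnitude grows.
digits = [1, 2, 3, 4, 6, 7, 8, 9]
drt = [40, 30, 22, 10, -12, -20, -31, -38]
```

On these fabricated values the slope comes out at roughly -10 ms per unit of magnitude, i.e. a strong SNARC-like pattern.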
Abstract:
Universidade Estadual de Campinas. Faculdade de Educação Física
Abstract:
The parallel mutation-selection evolutionary dynamics, in which mutation and replication are independent events, is solved exactly in the case that the Malthusian fitnesses associated with the genomes are described by the random energy model (REM) and by a ferromagnetic version of the REM. The solution method uses the mapping of the evolutionary dynamics onto a quantum Ising chain in a transverse field and the Suzuki-Trotter formalism to calculate the transition probabilities between configurations at different times. We find that in the case of the REM landscape the dynamics can exhibit three distinct regimes: pure diffusion or stasis for short times, depending on the fitness of the initial configuration, and a spin-glass regime for large times. The transition between these dynamical regimes is marked by discontinuities in the mean fitness as well as in the overlap with the initial reference sequence. The relaxation to equilibrium is described by an inverse time decay. In the ferromagnetic REM, we find, in addition to these three regimes, a ferromagnetic regime where the overlap and the mean fitness are frozen. In this case, the system relaxes to equilibrium in a finite time. The relevance of our results to information processing aspects of evolution is discussed.
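The mapping mentioned above can be sketched in its standard form (the well-known correspondence for the parallel, or Crow-Kimura, model; the notation here is illustrative and not necessarily the authors' own):

```latex
% Parallel mutation-selection dynamics for genotype frequencies
% p_\sigma, with \sigma \in \{-1,+1\}^N and mutation rate \mu:
\dot{p}_\sigma
  = p_\sigma\left[W(\sigma) - \bar{W}(t)\right]
  + \mu \sum_{i=1}^{N}\left[p_{\sigma^{(i)}} - p_\sigma\right],
\qquad \sigma^{(i)} = \sigma \text{ with spin } i \text{ flipped.}
% Passing to unnormalized variables x_\sigma removes the nonlinear
% mean-fitness term \bar{W}(t), giving \dot{x} = H x with
H = \operatorname{diag} W
  + \mu \sum_{i=1}^{N} \sigma_i^{x} - \mu N \mathbb{1},
% i.e. an Ising chain in a transverse field of strength \mu whose
% diagonal "energies" are the random fitnesses W(\sigma), to which
% the Suzuki-Trotter decomposition can then be applied.
```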
Abstract:
This paper develops an interactive approach for exploratory spatial data analysis. Measures of attribute similarity and spatial proximity are combined in a clustering model to support the identification of patterns in spatial information. Relationships between the developed clustering approach, spatial data mining and choropleth display are discussed. Analysis of property crime rates in Brisbane, Australia is presented. A surprising finding in this research is that there are substantial inconsistencies in standard choropleth display options found in two widely used commercial geographical information systems, both in terms of definition and performance. The comparative results demonstrate the usefulness and appeal of the developed approach in a geographical information system environment for exploratory spatial data analysis.
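The core of such a clustering model is a dissimilarity that blends attribute similarity with spatial proximity. The sketch below uses a simple weighted blend and naive single-linkage agglomeration; the weighting scheme, the Euclidean metrics, and all names are illustrative assumptions, not the paper's model.

```python
import math

def combined_distance(a, b, alpha=0.5):
    """Blend attribute dissimilarity with spatial proximity.
    Each region is a tuple (x, y, value); alpha weights the
    attribute term against the spatial term."""
    d_attr = abs(a[2] - b[2])
    d_spatial = math.hypot(a[0] - b[0], a[1] - b[1])
    return alpha * d_attr + (1 - alpha) * d_spatial

def agglomerate(regions, k, alpha=0.5):
    """Naive single-linkage agglomeration down to k clusters,
    repeatedly merging the closest pair of clusters under the
    combined distance. Returns lists of region indices."""
    clusters = [[i] for i in range(len(regions))]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(combined_distance(regions[p], regions[q], alpha)
                        for p in clusters[i] for q in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters[j]
        del clusters[j]
    return clusters
```

With regions that are both spatially close and attribute-similar, e.g. two tight groups of crime-rate tracts, the blend recovers the intuitive two-cluster partition.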
Abstract:
On the basis of a spatially distributed sediment budget across a large basin, costs of achieving certain sediment reduction targets in rivers were estimated. A range of investment prioritization scenarios were tested to identify the most cost-effective strategy to control suspended sediment loads. The scenarios were based on successively introducing more information from the sediment budget. The relationship between spatial heterogeneity of contributing sediment sources on cost effectiveness of prioritization was investigated. Cost effectiveness was shown to increase with sequential introduction of sediment budget terms. The solution which most decreased cost was achieved by including spatial information linking sediment sources to the downstream target location. This solution produced cost curves similar to those derived using a genetic algorithm formulation. Appropriate investment prioritization can offer large cost savings because the magnitude of the costs can vary by several times depending on what type of erosion source or sediment delivery mechanism is targeted. Target settings which only consider the erosion source rates can potentially result in spending more money than random management intervention for achieving downstream targets. Coherent spatial patterns of contributing sediment emerge from the budget model and its many inputs. The heterogeneity in these patterns can be summarized in a succinct form. This summary was shown to be consistent with the cost difference between local and regional prioritization for three of four test catchments. To explain the effect for the fourth catchment, the detail of the individual sediment sources needed to be taken into account.
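The prioritization logic described above can be sketched as a greedy ranking of control works by cost per tonne delivered to the downstream target, which is where the delivery-ratio (spatial linkage) term enters. The source schema and all figures below are invented for the illustration; the paper's budget model and scenarios are far richer.

```python
def prioritize(sources, target_reduction):
    """Greedy selection of sediment-control works ranked by cost per
    tonne *delivered* to the downstream target, not per tonne eroded.
    Each source is (name, treatable_tonnes, cost_per_tonne,
    delivery_ratio). Returns (plan, tonnes_delivered, total_cost)."""
    ranked = sorted(sources, key=lambda s: s[2] / s[3])
    plan, delivered, cost = [], 0.0, 0.0
    for name, tonnes, unit_cost, ratio in ranked:
        if delivered >= target_reduction:
            break
        # Treat only as much as still needed at the target.
        take = min(tonnes, (target_reduction - delivered) / ratio)
        plan.append((name, take))
        delivered += take * ratio
        cost += take * unit_cost
    return plan, delivered, cost
```

Note how a cheap-to-treat source with a low delivery ratio (e.g. a distant hillslope) is deprioritized once delivery to the target is accounted for, mirroring the paper's finding that targeting erosion rates alone can be worse than random intervention.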
Abstract:
The collection of spatial information to quantify changes to the state and condition of the environment is a fundamental component of conservation or sustainable utilization of tropical and subtropical forests. Age is an important structural attribute of old-growth forests influencing biological diversity in Australian eucalypt forests. Aerial photograph interpretation has traditionally been used for mapping the age and structure of forest stands. However, this method is subjective and is not able to accurately capture the fine to landscape scale variation necessary for ecological studies. Identification and mapping of fine to landscape scale vegetative structural attributes will allow the compilation of information associated with Montreal Process indicators 1b and 1d, which seek to determine linkages between age structure and the diversity and abundance of forest fauna populations. This project integrated measurements of structural attributes derived from a canopy-height elevation model with results from a geometrical-optical/spectral mixture analysis model to map forest age structure at a landscape scale. The availability of multiple-scale data allows the transfer of high-resolution attributes to landscape scale monitoring. Multispectral image data were obtained from a DMSV (Digital Multi-Spectral Video) sensor over St Mary's State Forest in Southeast Queensland, Australia. Local scene variance levels for different forest types calculated from the DMSV data were used to optimize the tree density and canopy size output in a geometric-optical model applied to a Landsat Thematic Mapper (TM) data set. Airborne laser scanner data obtained over the project area were used to calibrate a digital filter to extract tree heights from a digital elevation model that was derived from scanned colour stereopairs. The modelled estimates of tree height, crown size, and tree density were used to produce a decision-tree classification of forest successional stage at a landscape scale.
The results obtained (72% accuracy) were limited in validation, but demonstrate the potential of the multi-scale methodology to provide spatial information for forestry policy objectives (i.e., monitoring forest age structure).
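A decision-tree classification of successional stage from modelled structural attributes has the general shape sketched below. The class names and threshold values here are hypothetical placeholders, not the calibrated rules from the study.

```python
def successional_stage(height_m, crown_m, stems_per_ha):
    """Threshold decision tree assigning a stand to a successional
    stage from modelled tree height, crown size and stem density.
    All cut-offs and labels are invented for illustration."""
    if height_m < 15:            # short stands: recently disturbed
        return "regrowth"
    if crown_m < 8:              # tall but small-crowned
        return "juvenile" if stems_per_ha > 300 else "mature"
    # tall, large-crowned stands: sparse ones are oldest
    return "senescent" if stems_per_ha < 150 else "mature"
```

Each modelled pixel (or stand polygon) would be pushed through such a tree to produce the landscape-scale age-structure map.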
Abstract:
Map algebra is a data model and simple functional notation to study the distribution and patterns of spatial phenomena. It uses a uniform representation of space as discrete grids, which are organized into layers. This paper discusses extensions to map algebra to handle neighborhood operations with a new data type called a template. Templates provide general windowing operations on grids to enable spatial models for cellular automata, mathematical morphology, and local spatial statistics. A programming language for map algebra that incorporates templates and special processing constructs is described. The programming language is called MapScript. Example program scripts are presented that perform diverse neighborhood analyses: descriptive, model-based, and process-based.
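The template idea can be sketched as a focal (neighborhood) operation over a grid layer. The `focal` function, the `(dr, dc, weight)` offset encoding, and `mean3` are invented for this illustration and are not MapScript syntax.

```python
def focal(grid, template, op=sum):
    """Apply a neighborhood template to every cell of a grid layer,
    map-algebra style. `template` is a list of (dr, dc, weight)
    offsets; cells whose template falls off the grid are left None."""
    rows, cols = len(grid), len(grid[0])
    out = [[None] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            vals, ok = [], True
            for dr, dc, w in template:
                rr, cc = r + dr, c + dc
                if not (0 <= rr < rows and 0 <= cc < cols):
                    ok = False
                    break
                vals.append(w * grid[rr][cc])
            if ok:
                out[r][c] = op(vals)
    return out

# A 3x3 mean template (all weights 1/9), e.g. for local smoothing.
mean3 = [(dr, dc, 1 / 9) for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
```

The same `focal` machinery covers the other uses the abstract lists: a cellular-automaton rule is `op` over 0/1 neighbors, erosion/dilation are `min`/`max` templates, and a local variance statistic is a custom `op`.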
Abstract:
Land related information about the Earth's surface is commonly found in two forms: (1) map information and (2) satellite image data. Satellite imagery provides a good visual picture of what is on the ground, but complex image processing is required to interpret features in an image scene. Increasingly, methods are being sought to integrate the knowledge embodied in map information into the interpretation task, or, alternatively, to bypass interpretation and perform biophysical modeling directly on derived data sources. A cartographic modeling language, as a generic map analysis package, is suggested as a means to integrate geographical knowledge and imagery in a process-oriented view of the Earth. Specialized cartographic models may be developed by users, which incorporate mapping information in performing land classification. In addition, a cartographic modeling language may be enhanced with operators suited to processing remotely sensed imagery. We demonstrate the usefulness of a cartographic modeling language for pre-processing satellite imagery, and define two new cartographic operators that evaluate image neighborhoods as post-processing operations to interpret thematic map values. The language and operators are demonstrated with an example image classification task.
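One plausible form of a neighborhood operator for post-processing thematic map values is a majority filter, which relabels each cell to the dominant class around it. This is a sketch of the general idea only; the abstract does not specify the paper's two operators, and the function name and interface are invented.

```python
from collections import Counter

def majority_filter(classmap, radius=1):
    """Post-process a thematic class map by relabelling each cell to
    the majority class within a (2*radius+1)-wide neighborhood,
    removing isolated misclassified cells ("salt-and-pepper" noise)."""
    rows, cols = len(classmap), len(classmap[0])
    out = [row[:] for row in classmap]
    for r in range(rows):
        for c in range(cols):
            votes = Counter()
            for rr in range(max(0, r - radius), min(rows, r + radius + 1)):
                for cc in range(max(0, c - radius), min(cols, c + radius + 1)):
                    votes[classmap[rr][cc]] += 1
            out[r][c] = votes.most_common(1)[0][0]
    return out
```

Applied after a per-pixel classification, a single anomalous class label surrounded by a uniform class is absorbed into its neighborhood, which is the kind of interpretation step the post-processing operators serve.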