15 results for grid-based spatial data
in Aston University Research Archive
Abstract:
Analyzing geographical patterns by collocating events, objects or their attributes has a long history in surveillance and monitoring, and is particularly applied in environmental contexts, such as ecology or epidemiology. The identification of patterns or structures at some scales can be addressed using spatial statistics, particularly marked point process methodologies. Classification and regression trees are also related to this goal of finding "patterns" by deducing the hierarchy of influence of variables on a dependent outcome. Such variable selection methods have been applied to spatial data, but often without explicitly acknowledging the spatial dependence. Many methods routinely used in exploratory point pattern analysis are 2nd-order statistics, used in a univariate context, though there is also a wide literature on modelling methods for multivariate point pattern processes. This paper proposes an exploratory approach for multivariate spatial data using higher-order statistics built from co-occurrences of events or marks given by the point processes. A spatial entropy measure, derived from these multinomial distributions of co-occurrences at a given order, constitutes the basis of the proposed exploratory methods. © 2010 Elsevier Ltd.
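A minimal sketch of the kind of entropy measure described, assuming the neighbouring event pairs have already been extracted from the point process: the Shannon entropy of the empirical multinomial distribution of co-occurring marks at order 2. The marks and pairs below are invented for illustration, not taken from the paper.

```python
import math
from collections import Counter

def cooccurrence_entropy(pairs):
    """Shannon entropy of the empirical multinomial distribution of
    co-occurring marks at order 2. Pairs are treated as unordered."""
    counts = Counter(tuple(sorted(p)) for p in pairs)
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())

# Invented marks for neighbouring event pairs, e.g. species labels.
pairs = [("A", "A"), ("A", "B"), ("B", "B"), ("B", "A")]
h = cooccurrence_entropy(pairs)
```

Low entropy indicates that a few co-occurrence types dominate (spatial structure); entropy near the maximum indicates co-occurrences close to uniform.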
Abstract:
Context - Diffusion tensor imaging (DTI) studies in adults with bipolar disorder (BD) indicate altered white matter (WM) in the orbitomedial prefrontal cortex (OMPFC), potentially underlying abnormal prefrontal corticolimbic connectivity and mood dysregulation in BD. Objective - To use tract-based spatial statistics (TBSS) to examine WM skeleton (ie, the most compact whole-brain WM) in subjects with BD vs healthy control subjects. Design - Cross-sectional, case-control, whole-brain DTI using TBSS. Setting - University research institute. Participants - Fifty-six individuals, 31 having a DSM-IV diagnosis of BD type I (mean age, 35.9 years [age range, 24-52 years]) and 25 controls (mean age, 29.5 years [age range, 19-52 years]). Main Outcome Measures - Fractional anisotropy (FA) and longitudinal and radial diffusivities in subjects with BD vs controls (covarying for age) and their relationships with clinical and demographic variables. Results - Subjects with BD vs controls had significantly greater FA (t > 3.0, P = .05 corrected) in the left uncinate fasciculus (reduced radial diffusivity distally and increased longitudinal diffusivity centrally), left optic radiation (increased longitudinal diffusivity), and right anterothalamic radiation (no significant diffusivity change). Subjects with BD vs controls had significantly reduced FA (t > 3.0, P = .05 corrected) in the right uncinate fasciculus (greater radial diffusivity). Among subjects with BD, significant negative correlations (P < .01) were found between age and FA in bilateral uncinate fasciculi and in the right anterothalamic radiation, as well as between medication load and FA in the left optic radiation. Decreased FA (P < .01) was observed in the left optic radiation and in the right anterothalamic radiation among subjects with BD taking vs those not taking mood stabilizers, as well as in the left optic radiation among depressed vs remitted subjects with BD.
Subjects having BD with vs without lifetime alcohol or other drug abuse had significantly decreased FA in the left uncinate fasciculus. Conclusions - To our knowledge, this is the first study to use TBSS to examine WM in subjects with BD. Subjects with BD vs controls showed greater WM FA in the left OMPFC that diminished with age and with alcohol or other drug abuse, as well as reduced WM FA in the right OMPFC. Mood stabilizers and depressed episodes were associated with reduced WM FA in left-sided sensory visual processing regions among subjects with BD. Abnormal right vs left asymmetry in FA in OMPFC WM among subjects with BD, likely reflecting increased proportions of left-sided longitudinally aligned and right-sided obliquely aligned myelinated fibers, may represent a biologic mechanism for mood dysregulation in BD.
Abstract:
DUE TO INCOMPLETE PAPERWORK, ONLY AVAILABLE FOR CONSULTATION AT ASTON UNIVERSITY LIBRARY AND INFORMATION SERVICES WITH PRIOR ARRANGEMENT
Abstract:
Most current 3D landscape visualisation systems either use bespoke hardware solutions, or offer a limited amount of interaction and detail when used in real-time mode. We are developing a modular, data-driven 3D visualisation system that can be readily customised to specific requirements. By utilising the latest software engineering methods and bringing a dynamic data-driven approach to geospatial data visualisation, we will deliver an unparalleled level of customisation in near-photorealistic, real-time 3D landscape visualisation. In this paper we present the system framework and describe how it employs data-driven techniques. In particular, we discuss how data-driven approaches are applied to the spatiotemporal management aspect of the application framework, and describe the advantages these convey. © Springer-Verlag Berlin Heidelberg 2006.
Abstract:
Most object-based approaches to Geographical Information Systems (GIS) have concentrated on the representation of geometric properties of objects in terms of fixed geometry. In our road traffic marking application domain we have a requirement to represent the static locations of the road markings but also to enforce the associated regulations, which are typically geometric in nature. For example, a give way line of a pedestrian crossing in the UK must be within 1100-3000 mm of the edge of the crossing pattern. In previous studies of the application of spatial rules (often called 'business logic') in GIS, emphasis has been placed on the representation of topological constraints and data integrity checks. There is very little GIS literature that describes models for geometric rules, although there are some examples in the Computer Aided Design (CAD) literature. This paper introduces some of the ideas from so-called variational CAD models to the GIS application domain, and extends these using a Geography Markup Language (GML) based representation. In our application we have an additional requirement: the geometric rules are often changed and vary from country to country, so they should be represented in a flexible manner. In this paper we describe an elegant solution to the representation of geometric rules, such as requiring lines to be offset from other objects. The method uses a feature-property model embraced in GML 3.1 and extends the possible relationships in feature collections to permit the application of parameterized geometric constraints to sub features. We show the parametric rule model we have developed and discuss the advantage of using simple parametric expressions in the rule base. We discuss the possibilities and limitations of our approach and relate our data model to GML 3.1. © 2006 Springer-Verlag Berlin Heidelberg.
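The core idea of a parameterised rule base can be sketched without the paper's GML 3.1 representation: rules are stored as per-country data and evaluated against measured geometry. The dict schema, names and the UK values below follow the give way example above and are illustrative only.

```python
# Geometric rules held as data so they can vary by country and change
# without code changes (illustrative schema, not the paper's GML model).
RULES = {
    "UK": {"feature": "give_way_line", "min_mm": 1100, "max_mm": 3000},
}

def offset_ok(country, feature, offset_mm):
    """Check a measured offset against the parameterised rule base."""
    rule = RULES[country]
    return rule["feature"] == feature and rule["min_mm"] <= offset_mm <= rule["max_mm"]
```

Adding a rule for another country is then a data edit, which is the flexibility requirement the abstract describes.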
Abstract:
Very large spatially-referenced datasets, for example, those derived from satellite-based sensors which sample across the globe or large monitoring networks of individual sensors, are becoming increasingly common and more widely available for use in environmental decision making. In large or dense sensor networks, huge quantities of data can be collected over small time periods. In many applications the generation of maps, or predictions at specific locations, from the data in (near) real-time is crucial. Geostatistical operations such as interpolation are vital in this map-generation process, and in emergency situations the resulting predictions need to be available almost instantly, so that decision makers can make informed decisions and define risk and evacuation zones. It is also helpful when analysing data in less time critical applications, for example when interacting directly with the data for exploratory analysis, that the algorithms are responsive within a reasonable time frame. Performing geostatistical analysis on such large spatial datasets can present a number of problems, particularly in the case where maximum likelihood methods are used. Although the storage requirements only scale linearly with the number of observations in the dataset, the computational complexity, in terms of memory and speed, scales quadratically and cubically respectively. Most modern commodity hardware has at least 2 processor cores, if not more. Other mechanisms for allowing parallel computation, such as Grid-based systems, are also becoming increasingly commonly available. However, currently there seems to be little interest in exploiting this extra processing power within the context of geostatistics. In this paper we review the existing parallel approaches for geostatistics.
By recognising that different natural parallelisms exist and can be exploited depending on whether the dataset is sparsely or densely sampled with respect to the range of variation, we introduce two contrasting novel implementations of parallel algorithms based on approximating the data likelihood, extending the methods of Vecchia [1988] and Tresp [2000]. Using parallel maximum likelihood variogram estimation and parallel prediction algorithms we show that computational time can be significantly reduced. We demonstrate this with both sparsely sampled data and densely sampled data on a variety of architectures, ranging from the common dual core processor, found in many modern desktop computers, to large multi-node supercomputers. To highlight the strengths and weaknesses of the different methods we employ synthetic data sets and go on to show how the methods allow maximum likelihood based inference on the exhaustive Walker Lake data set.
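The cost argument and the likelihood-approximation idea can be sketched as follows, under simplifying assumptions: a 1D squared-exponential covariance, and a plain block-independent factorisation standing in for the paper's Vecchia- and Tresp-style approximations. Exact evaluation is O(n^2) in memory and O(n^3) in time; the block version splits the likelihood into terms that could be evaluated on separate cores.

```python
import numpy as np

def gp_loglik(y, x, lengthscale=1.0, noise=0.1):
    """Exact Gaussian-process log-likelihood with a squared-exponential
    covariance: O(n^2) memory and O(n^3) time in the number of points."""
    d = np.abs(x[:, None] - x[None, :])
    K = np.exp(-0.5 * (d / lengthscale) ** 2) + noise * np.eye(len(x))
    sign, logdet = np.linalg.slogdet(K)
    alpha = np.linalg.solve(K, y)
    return -0.5 * (y @ alpha + logdet + len(y) * np.log(2.0 * np.pi))

def block_loglik(y, x, n_blocks=4, **kw):
    """Block-independent approximation: the likelihood factorises over
    blocks, so each term can be evaluated on a separate core."""
    blocks = np.array_split(np.arange(len(x)), n_blocks)
    return sum(gp_loglik(y[i], x[i], **kw) for i in blocks)
```

With n_blocks = 1 the approximation is exact; increasing n_blocks trades accuracy for a reduction in the cubic cost and exposes the parallelism the abstract describes.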
Abstract:
A method is described which enables the spatial pattern of discrete objects in histological sections of brain tissue to be determined. The method can be applied to cell bodies, sections of blood vessels or the characteristic lesions which develop in the brain of patients with neurodegenerative disorders. The density of the histological feature under study is measured in a series of contiguous sample fields arranged in a grid or transect. Data from adjacent sample fields are added together to provide density data for larger field sizes. A plot of the variance/mean ratio (V/M) of the data versus field size reveals whether the objects are distributed randomly, uniformly or in clusters. If the objects are clustered, the analysis determines whether the clusters are randomly or regularly distributed and the mean size of the clusters. In addition, if two different histological features are clustered, the analysis can determine whether their clusters are in phase, out of phase or unrelated to each other. To illustrate the method, the spatial patterns of senile plaques and neurofibrillary tangles were studied in histological sections of brain tissue from patients with Alzheimer's disease.
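The variance/mean ratio analysis described above can be sketched directly; the counts below are invented, and the field-merging step mirrors the addition of adjacent sample fields to give larger field sizes.

```python
import statistics

def variance_mean_ratio(counts):
    """V/M of object counts per sample field: ~1 for a random (Poisson)
    pattern, < 1 for a regular pattern, > 1 for a clustered pattern."""
    return statistics.variance(counts) / statistics.mean(counts)

def merge_fields(counts, factor=2):
    """Add adjacent fields together to give counts at a larger field
    size, as in the contiguous-field method described above."""
    return [sum(counts[i:i + factor])
            for i in range(0, len(counts) - factor + 1, factor)]

# Invented lesion counts along a transect of contiguous fields.
counts = [10, 0, 9, 1, 11, 0, 8, 2]
vmr_small = variance_mean_ratio(counts)                 # > 1: clustered
vmr_large = variance_mean_ratio(merge_fields(counts))   # at doubled field size
```

Plotting V/M against field size, as the abstract describes, shows a peak near the mean cluster size when objects are clustered.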
Abstract:
This thesis addresses the problem of information hiding in low dimensional digital data, focussing on issues of privacy and security in Electronic Patient Health Records (EPHRs). The thesis proposes a new security protocol based on data hiding techniques for EPHRs. This thesis contends that embedding of sensitive patient information inside the EPHR is the most appropriate solution currently available to resolve the issues of security in EPHRs. Watermarking techniques are applied to one-dimensional time series data such as the electroencephalogram (EEG) to show that they add a level of confidence (in terms of privacy and security) in an individual’s diverse bio-profile (the digital fingerprint of an individual’s medical history), ensure belief that the data being analysed does indeed belong to the correct person, and also that it is not being accessed by unauthorised personnel. Embedding information inside single channel biomedical time series data is more difficult than the standard application for images due to the reduced redundancy. A data hiding approach which has an in-built capability to protect against illegal data snooping is developed. The capability of this secure method is enhanced by embedding not just a single message but multiple messages into an example one-dimensional EEG signal. Embedding multiple messages of similar characteristics, for example identities of clinicians accessing the medical record, helps in creating a log of access, while embedding multiple messages of dissimilar characteristics into an EPHR enhances confidence in the use of the EPHR. The novel method of embedding multiple messages of both similar and dissimilar characteristics into a single channel EEG demonstrated in this thesis shows how this embedding of data boosts the implementation and use of the EPHR securely.
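A toy sketch of time-series data hiding for orientation only: least-significant-bit embedding in integer-quantised samples. This is the simplest possible scheme, not the thesis's snooping-resistant multi-message method, and the sample values are invented.

```python
def embed_lsb(samples, bits):
    """Hide a bit string in the least-significant bits of integer-quantised
    signal samples, one bit per sample (toy illustration)."""
    marked = list(samples)
    for i, b in enumerate(bits):
        marked[i] = (marked[i] & ~1) | b
    return marked

def extract_lsb(samples, n_bits):
    """Recover the first n_bits embedded bits."""
    return [s & 1 for s in samples[:n_bits]]

# Invented quantised EEG samples; the payload could encode a clinician ID.
eeg = [100, 101, 102, 103, 104]
marked = embed_lsb(eeg, [1, 0, 1])
```

Each sample changes by at most one quantisation step, which is the sense in which low-redundancy 1D signals constrain how much can be hidden imperceptibly.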
Abstract:
Indicators which summarise the characteristics of spatiotemporal data coverages significantly simplify quality evaluation, decision making and justification processes by providing a number of quality cues that are easy to manage and avoiding information overflow. Criteria which are commonly prioritised in evaluating spatial data quality and assessing a dataset’s fitness for use include lineage, completeness, logical consistency, positional accuracy, temporal and attribute accuracy. However, user requirements may go far beyond these broadly accepted spatial quality metrics, to incorporate specific and complex factors which are less easily measured. This paper discusses the results of a study of high level user requirements in geospatial data selection and data quality evaluation. It reports on the geospatial data quality indicators which were identified as user priorities, and which can potentially be standardised to enable intercomparison of datasets against user requirements. We briefly describe the implications for tools and standards to support the communication and intercomparison of data quality, and the ways in which these can contribute to the generation of a GEO label.
Abstract:
To investigate investment behaviour the present study applies panel data techniques, in particular the Arellano-Bond (1991) GMM estimator, based on data on Estonian manufacturing firms from the period 1995-1999. We employ the model of optimal capital accumulation in the presence of convex adjustment costs. The main research findings are that domestic companies seem to be financially more constrained than those where foreign investors are present, and that smaller firms are more constrained than their larger counterparts.
Abstract:
The IEEE 802.15.4 standard has recently been developed for low power wireless personal area networks. It can find many applications for smart grid, such as data collection, monitoring and control functions. The performance of 802.15.4 networks has been widely studied in the literature. However, the main focus has been on modeling throughput performance with frame collisions. In this paper we propose an analytic model which can capture the impact of frame collisions as well as frame corruptions due to channel bit errors. With this model the frame length can be carefully selected to improve system performance. The analytic model can also be used to study 802.15.4 networks with interference from other co-located networks, such as IEEE 802.11 and Bluetooth networks. © 2011 Springer-Verlag.
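The frame-length trade-off the abstract mentions can be sketched under a simplified assumption of independent bit errors and no collisions (the paper's full model also covers CSMA/CA collisions). The 127-byte cap is the 802.15.4 maximum PHY payload; the 48-bit overhead figure is an illustrative assumption, not taken from the standard.

```python
def frame_success(n_bits, ber):
    """Probability that an n_bits frame survives independent bit errors."""
    return (1.0 - ber) ** n_bits

def best_payload(ber, header_bits=48, max_payload_bits=8 * 127):
    """Payload length (bits, whole bytes) maximising expected delivered
    payload per transmission attempt, given a fixed per-frame overhead."""
    return max(range(8, max_payload_bits + 1, 8),
               key=lambda p: p * frame_success(p + header_bits, ber))
```

Long frames amortise the header but are more likely to be corrupted, so the optimal frame length shrinks as the channel bit error rate rises.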
Abstract:
This paper is a cross-national study testing a framework relating cultural descriptive norms to entrepreneurship in a sample of 40 nations. Based on data from the Global Leadership and Organizational Behavior Effectiveness project, we identify two higher-order dimensions of culture – socially supportive culture (SSC) and performance-based culture (PBC) – and relate them to entrepreneurship rates and associated supply-side and demand-side variables available from the Global Entrepreneurship Monitor. Findings provide strong support for a social capital/SSC and supply-side variable explanation of entrepreneurship rate. PBC predicts demand-side variables, such as opportunity existence and the quality of formal institutions to support entrepreneurship.
Abstract:
This article presents a new method for data collection in regional dialectology based on site-restricted web searches. The method measures the usage and determines the distribution of lexical variants across a region of interest using common web search engines, such as Google or Bing. The method involves estimating the proportions of the variants of a lexical alternation variable over a series of cities by counting the number of webpages that contain the variants on newspaper websites originating from these cities through site-restricted web searches. The method is evaluated by mapping the 26 variants of 10 lexical variables with known distributions in American English. In almost all cases, the maps based on site-restricted web searches align closely with traditional dialect maps based on data gathered through questionnaires, demonstrating the accuracy of this method for the observation of regional linguistic variation. However, unlike collecting dialect data using traditional methods, which is a relatively slow process, the use of site-restricted web searches allows for dialect data to be collected from across a region as large as the United States in a matter of days.
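The estimation step described above reduces to computing per-city proportions of each variant from hit counts returned by site-restricted searches. A minimal sketch; the cities and counts are invented, and the search-engine querying itself is assumed to have been done.

```python
def variant_proportions(hit_counts):
    """Per-city proportions of each variant of a lexical alternation,
    estimated from site-restricted search hit counts."""
    return {
        city: {v: n / sum(counts.values()) for v, n in counts.items()}
        for city, counts in hit_counts.items()
    }

# Invented counts for the classic "soda" / "pop" alternation, e.g. from
# newspaper-site searches restricted to each city's paper.
props = variant_proportions({
    "Boston": {"soda": 80, "pop": 20},
    "Detroit": {"soda": 25, "pop": 75},
})
```

Mapping these proportions across many cities is what yields the dialect maps the article compares against questionnaire-based surveys.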