15 results for Spatial analysis of geographical data
in Aston University Research Archive
Abstract:
This paper presents the results of a multivariate spatial analysis of 38 vowel formant variables in the language of 402 informants from 236 cities across the contiguous United States, based on the acoustic data from the Atlas of North American English (Labov, Ash & Boberg, 2006). The results of the analysis both confirm and challenge the results of the Atlas. Most notably, while the analysis identifies patterns similar to those of the Atlas in the West and the Southeast, it finds that the Midwest and the Northeast are distinct dialect regions that are considerably stronger than the traditional Midland and Northern dialect regions identified in the Atlas. The analysis also finds evidence that a western vowel shift is actively shaping the language of the Western United States.
Abstract:
This thesis presents the results of an investigation into the merits of analysing magnetoencephalographic (MEG) data in the context of dynamical systems theory. MEG covers both the methods for measuring the minute magnetic flux variations at the scalp that result from neuro-electric activity in the neocortex, and the techniques required to process and extract useful information from these measurements. As a result of its unique mode of action - directly measuring neuronal activity via the resulting magnetic field fluctuations - MEG possesses a number of useful qualities that could potentially make it a powerful addition to any brain researcher's arsenal. Unfortunately, MEG research has so far failed to fulfil its early promise, its progress hindered by a variety of factors. Conventionally, the analysis of MEG has been dominated by the search for activity in certain spectral bands - the so-called alpha, delta, beta, etc. bands that are commonly referred to in both academic and lay publications. Other efforts have centred on generating optimal fits of "equivalent current dipoles" that best explain the observed field distribution. Many of these approaches carry the implicit assumption that the dynamics which give rise to the observed time series are linear, despite a variety of reasons to suspect that nonlinearity might be present in MEG recordings. By using methods that allow for nonlinear dynamics, the research described in this thesis avoids these restrictive linearity assumptions.
A crucial concept underpinning this project is the belief that MEG recordings are mere observations of the evolution of the true underlying state, which is unobservable and is assumed to reflect some abstract cognitive state of the brain. Further, we maintain that it is unreasonable to expect these processes to be adequately described in the traditional way: as a linear sum of a large number of frequency generators. One of the main objectives of this thesis is to show that much more effective and powerful analysis of MEG can be achieved if the presence of both linear and nonlinear characteristics is assumed from the outset. Our position is that the combined action of a relatively small number of these generators, coupled with external and dynamic noise sources, is more than sufficient to account for the complexity observed in MEG recordings.
Another problem that has plagued MEG researchers is the extremely low signal-to-noise ratio of the recordings. Because the magnetic flux variations resulting from actual cortical processes can be extremely minute, the measuring devices used in MEG are necessarily extremely sensitive; the unfortunate side effect is that even commonplace phenomena such as the Earth's geomagnetic field can easily swamp the signals of interest. This problem is commonly addressed by averaging over a large number of recordings, but averaging has notable drawbacks: in particular, high-frequency activity of interest is difficult to synchronise across recordings and is often cancelled out by the averaging process. Further obstacles are the high cost and low portability of state-of-the-art multichannel machines, with the result that the use of MEG has hitherto been restricted to large institutions able to afford the costs associated with procuring and maintaining them.
In this project, we seek to address these issues by working almost exclusively with single channel, unaveraged MEG data. We demonstrate the applicability of a variety of methods originating from the fields of signal processing, dynamical systems, information theory and neural networks, to the analysis of MEG data. It is noteworthy that while modern signal processing tools such as independent component analysis, topographic maps and latent variable modelling have enjoyed extensive success in a variety of research areas from financial time series modelling to the analysis of sun spot activity, their use in MEG analysis has thus far been extremely limited. It is hoped that this work will help to remedy this oversight.
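To make the dynamical-systems viewpoint concrete, the Python sketch below shows a time-delay embedding of a single-channel time series, a standard first step when a recording is treated as an observation of an underlying state. The signal, embedding dimension and lag are illustrative assumptions, not values taken from the thesis.

    # Minimal sketch of a time-delay embedding of a single-channel recording.
    # The array name `meg` and the parameters dim/lag are illustrative only.
    import numpy as np

    def delay_embed(x, dim, lag):
        """Return the delay-embedding matrix of a 1-D signal x.

        Each row is the vector (x[t], x[t+lag], ..., x[t+(dim-1)*lag]).
        """
        n = len(x) - (dim - 1) * lag
        if n <= 0:
            raise ValueError("signal too short for this dim/lag")
        return np.column_stack([x[i * lag : i * lag + n] for i in range(dim)])

    # Synthetic signal standing in for an unaveraged MEG channel.
    rng = np.random.default_rng(0)
    t = np.arange(5000)
    meg = np.sin(0.07 * t) + 0.3 * rng.standard_normal(t.size)  # toy data
    states = delay_embed(meg, dim=5, lag=10)
    print(states.shape)  # (4960, 5): reconstructed state vectors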
Abstract:
The purpose of this paper is to investigate the technological development of electronic inventory solutions from the perspective of patent analysis. We first applied the International Patent Classification to identify the top categories of data-processing technologies and their corresponding top patenting countries. We then identified the core technologies by calculating a citation strength for each patent and applying a standard-deviation criterion. To eliminate core innovations that have no reference relationships with the other core patents, the relevance strengths between core technologies were also evaluated. Our findings provide market intelligence not only for the research and development community but also for decision making on advanced inventory solutions.
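As a rough illustration of this kind of cutoff, the Python sketch below flags "core" patents whose citation counts exceed the mean by one standard deviation. The toy citation table and the exact cutoff rule are assumptions for illustration and need not match the paper's own citation-strength measure.

    # Illustrative sketch: flag "core" patents by a mean-plus-one-standard-
    # deviation cutoff on citation counts (toy data and assumed rule).
    import statistics

    citations = {   # patent id -> number of forward citations (toy data)
        "US001": 3, "US002": 41, "US003": 7, "US004": 55, "US005": 2,
        "US006": 18, "US007": 4, "US008": 66, "US009": 9, "US010": 5,
    }

    mean = statistics.mean(citations.values())
    sd = statistics.stdev(citations.values())
    threshold = mean + sd

    core = {pid: c for pid, c in citations.items() if c >= threshold}
    print(f"cutoff = {threshold:.1f}; core patents: {sorted(core)}")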
Abstract:
Background: The Framework Method is becoming an increasingly popular approach to the management and analysis of qualitative data in health research. However, there is confusion about its potential application and limitations. Discussion: The article discusses when it is appropriate to adopt the Framework Method and explains the procedure for using it in multi-disciplinary health research teams, or those that involve clinicians, patients and lay people. The stages of the method are illustrated using examples from a published study. Summary: Used effectively, with the leadership of an experienced qualitative researcher, the Framework Method is a systematic and flexible approach to analysing qualitative data and is appropriate for use in research teams even where not all members have previous experience of conducting qualitative research. © 2013 Gale et al.; licensee BioMed Central Ltd.
Abstract:
A principal components analysis of neuropathological data from 79 Alzheimer's disease (AD) cases was performed to determine whether there was evidence for subtypes of the disease. Two principal components were extracted from the data, accounting for 72% and 12% of the total variance, respectively. The results suggested that 1) AD was heterogeneous, but subtypes could not be clearly defined; 2) the heterogeneity, in part, reflected disease onset; 3) familial cases did not constitute a distinct subtype of AD; and 4) there were two forms of late-onset AD, one of which was associated with less senile plaque and neurofibrillary tangle development but with a greater degree of brain atherosclerosis.
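A minimal Python sketch of this kind of analysis, extracting two principal components from a cases-by-variables matrix, is given below. The random data, the number of variables and the standardisation step are illustrative assumptions, not the study's data.

    # Sketch: two principal components from a 79-cases-by-12-variables matrix
    # of toy "neuropathological" measurements.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    X = rng.normal(size=(79, 12))               # 79 cases x 12 toy variables

    X_std = StandardScaler().fit_transform(X)   # PCA on standardised variables
    pca = PCA(n_components=2).fit(X_std)

    print(pca.explained_variance_ratio_)        # variance explained per component
    scores = pca.transform(X_std)               # case scores on the two components
    print(scores.shape)                         # (79, 2)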
Abstract:
Substantial altimetry datasets collected by different satellites have only become available during the past five years, but the future will bring a variety of new altimetry missions, both parallel and consecutive in time. The characteristics of each dataset vary with the orbital height and inclination of the spacecraft, as well as with the technical properties of the radar instrument. An integral analysis of datasets with different properties offers advantages in terms of both data quantity and data quality. This thesis is concerned with the development of the means for such integral analysis, in particular for dynamic solutions in which precise orbits for the satellites are computed simultaneously. The first half of the thesis discusses the theory and numerical implementation of dynamic multi-satellite altimetry analysis. The most important aspect of this analysis is the application of dual-satellite altimetry crossover points as a bi-directional tracking data type in simultaneous orbit solutions. The central problem is that the spatial and temporal distributions of the crossovers are in conflict with the time-organised nature of traditional solution methods. Their application to the adjustment of the orbits of both satellites involved in a dual crossover therefore requires several fundamental changes to the classical least-squares prediction/correction methods. The second part of the thesis applies the developed numerical techniques to the problems of precise orbit computation and gravity field adjustment, using the altimetry datasets of ERS-1 and TOPEX/Poseidon. Although these two datasets can be considered less compatible than those of planned future satellite missions, the results obtained adequately illustrate the merits of a simultaneous solution technique. In particular, the geographically correlated orbit error is partially observable from a dataset consisting of crossover differences between two sufficiently different altimetry datasets, while being unobservable from the analysis of the altimetry data of either satellite individually. This error signal, which has a substantial gravity-induced component, can be employed advantageously in simultaneous solutions for the two satellites in which the harmonic coefficients of the gravity field model are also estimated.
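As a deliberately simplified illustration of why a dual-satellite crossover difference acts as a bi-directional observation, the Python sketch below gives each satellite a single radial bias parameter and adjusts both from simulated crossover differences. The bias values, noise level and least-squares setup are toy assumptions standing in for the full dynamic solution described in the thesis.

    # Toy sketch: crossover differences h1 - h2 as observations that touch
    # the parameters of BOTH satellites at once.
    import numpy as np

    rng = np.random.default_rng(5)
    true_bias = np.array([0.12, -0.05])       # assumed radial biases (m), satellites 1 and 2

    n_xovers = 500
    noise = rng.normal(0.0, 0.04, n_xovers)   # toy altimeter + sea-state noise
    d = true_bias[0] - true_bias[1] + noise   # simulated crossover differences

    # Each crossover row involves both satellites' parameters: [+1, -1].
    A = np.tile([1.0, -1.0], (n_xovers, 1))
    x, *_ = np.linalg.lstsq(A, d, rcond=None)
    print("estimated bias difference:", x[0] - x[1])   # close to 0.17 m
    # In this toy setup only the difference of the two biases is determined by
    # the crossovers; the full dynamic solution in the thesis estimates far
    # richer orbit and gravity parameters from the combined data.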
Abstract:
Analysis of variance (ANOVA) is the most efficient method available for the analysis of experimental data. Analysis of variance is a method of considerable complexity and subtlety, with many different variations, each of which applies in a particular experimental context. Hence, it is possible to apply the wrong type of ANOVA to data and, therefore, to draw an erroneous conclusion from an experiment. This article reviews the types of ANOVA most likely to arise in clinical experiments in optometry including the one-way ANOVA ('fixed' and 'random effect' models), two-way ANOVA in randomised blocks, three-way ANOVA, and factorial experimental designs (including the varieties known as 'split-plot' and 'repeated measures'). For each ANOVA, the appropriate experimental design is described, a statistical model is formulated, and the advantages and limitations of each type of design discussed. In addition, the problems of non-conformity to the statistical model and determination of the number of replications are considered. © 2002 The College of Optometrists.
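For the simplest design mentioned above, the Python sketch below runs a one-way ('fixed effect') ANOVA with SciPy on three toy treatment groups; the data are invented purely for illustration.

    # Sketch of a one-way ('fixed effect') ANOVA on three toy treatment groups.
    from scipy import stats

    group_a = [12.1, 11.8, 13.0, 12.6, 12.9]
    group_b = [13.5, 14.1, 13.8, 14.4, 13.9]
    group_c = [12.4, 12.8, 13.1, 12.2, 12.9]

    f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")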
Abstract:
Despite considerable and growing interest in academic researchers and practising managers jointly generating knowledge (which we term ‘co-production’), our searches of the management literature revealed few articles based on primary data or multiple cases. Given the increasing commitment to co-production by academics, managers and those funding research, it seems important to strengthen the evidence base about practice and performance in co-production. Literature on collaborative research was reviewed to develop a framework to structure the analysis of these data and to relate findings to the limited body of prior research on collaborative research practice and performance. This paper presents empirical data from four completed, large-scale co-production projects. Despite major differences between the cases, we find that the key success factors and the indicators of performance are remarkably similar. We demonstrate many complex influences between factors, between outcomes, and between factors and outcomes, and discuss the features that are distinctive to co-production. Our empirical findings are broadly consonant with the prior literature, but go further in trying to understand the consequences of success factors for performance. A second contribution of this paper is the development of a conceptually and methodologically rigorous process for investigating collaborative research, linking process and performance. The paper closes with a discussion of the study’s limitations and opportunities for further research.
Abstract:
The identification of disease clusters in space or space-time is of vital importance for public health policy and action. In the case of methicillin-resistant Staphylococcus aureus (MRSA), it is particularly important to distinguish between community and health care-associated infections, and to identify reservoirs of infection. A total of 832 cases of MRSA in the West Midlands (UK) were tested for clustering and evidence of community transmission, after being geo-located to the centroids of UK unit postcodes (postal areas roughly equivalent to Zip+4 zip code areas). An age-stratified analysis was also carried out at the coarser spatial resolution of UK Census Output Areas. Stochastic simulation and kernel density estimation were combined to identify significant local clusters of MRSA (p<0.025), which were supported by SaTScan spatial and spatio-temporal scan statistics. In order to investigate local sampling effort, a spatial 'random labelling' approach was used, with MRSA as cases and MSSA (methicillin-sensitive S. aureus) as controls. Heavy sampling was in general a response to MRSA outbreaks, which in turn appeared to be associated with medical care environments. The significance of the clusters identified by kernel estimation was independently supported by information on the locations and client groups of nursing homes, and by preliminary molecular typing of isolates. In the absence of occupational/lifestyle data on patients, the assumption was made that an individual's location and consequent risk are adequately represented by their residential postcode. The problems of this assumption are discussed, with recommendations for future data collection.
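The Python sketch below illustrates the general idea of combining kernel density estimation with a random-labelling Monte Carlo test for case/control point data. The synthetic coordinates, grid, bandwidth and number of simulations are assumptions and do not reproduce the study's analysis.

    # Random-labelling Monte Carlo test: the kernel density of case locations is
    # compared against densities from repeated relabellings of the pooled points.
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(2)
    cases = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(60, 2))      # toy "MRSA" points
    controls = rng.normal(loc=[0.5, 0.5], scale=1.0, size=(140, 2))  # toy "MSSA" points
    pooled = np.vstack([cases, controls])
    n_cases = len(cases)

    grid = np.mgrid[-2:2:40j, -2:2:40j].reshape(2, -1)               # evaluation grid

    observed = gaussian_kde(cases.T)(grid)

    n_sims = 199
    exceed = np.zeros(grid.shape[1])
    for _ in range(n_sims):
        idx = rng.choice(len(pooled), size=n_cases, replace=False)   # random relabelling
        sim = gaussian_kde(pooled[idx].T)(grid)
        exceed += sim >= observed

    p_local = (exceed + 1) / (n_sims + 1)    # pointwise Monte Carlo p-values
    print((p_local < 0.025).sum(), "grid cells with locally elevated case density")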
Abstract:
A method is described for determining the spatial pattern of any quantitatively measurable histological feature in sections of brain tissue, and is compared with a previously described method. A measurement of a histological feature such as density, area, amount or load is obtained for a series of contiguous sample fields. The regression coefficient (β) is calculated from the measurements taken in pairs, first in pairs of adjacent samples and then in pairs of samples at increasing degrees of separation, i.e. separated by 2, 3, 4, ..., n units. A plot of β versus the degree of separation between the pairs of sample fields reveals whether the histological feature is distributed randomly, uniformly or in clusters. If the feature is clustered, the analysis determines whether the clusters are randomly or regularly distributed, the mean size of the clusters and the spacing of the clusters. The method is simple to apply and interpret and is illustrated using simulated data and studies of the spatial patterns of blood vessels in the cerebral cortex of normal brain, the degree of vacuolation of the cortex in patients with Creutzfeldt-Jakob disease (CJD) and the characteristic lesions present in Alzheimer's disease (AD). Copyright (C) 2000 Elsevier Science B.V.
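A minimal Python sketch of this pair-regression procedure is given below: for each separation d the measurement in one field is regressed on the measurement d fields away, and the resulting coefficients can be plotted against d. The simulated transect, and the choice of which member of each pair is treated as the predictor, are illustrative assumptions.

    # Sketch: regression coefficient beta of x[i+d] on x[i] for d = 1..max_sep.
    import numpy as np

    def beta_vs_separation(values, max_sep):
        """Regression coefficients for pairs at increasing separations."""
        values = np.asarray(values, dtype=float)
        betas = []
        for d in range(1, max_sep + 1):
            x, y = values[:-d], values[d:]
            slope = np.polyfit(x, y, 1)[0]   # least-squares regression coefficient
            betas.append(slope)
        return betas

    # Toy transect of 200 contiguous fields with clusters about 10 fields wide.
    rng = np.random.default_rng(3)
    transect = np.repeat(rng.poisson(5, 20), 10) + rng.normal(0, 1, 200)
    for d, b in zip(range(1, 11), beta_vs_separation(transect, 10)):
        print(f"separation {d:2d}: beta = {b:+.2f}")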
Abstract:
A method is described which enables the spatial pattern of discrete objects in histological sections of brain tissue to be determined. The method can be applied to cell bodies, sections of blood vessels or the characteristic lesions which develop in the brain of patients with neurodegenerative disorders. The density of the histological feature under study is measured in a series of contiguous sample fields arranged in a grid or transect. Data from adjacent sample fields are added together to provide density data for larger field sizes. A plot of the variance/mean ratio (V/M) of the data versus field size reveals whether the objects are distributed randomly, uniformly or in clusters. If the objects are clustered, the analysis determines whether the clusters are randomly or regularly distributed and the mean size of the clusters. In addition, if two different histological features are clustered, the analysis can determine whether their clusters are in phase, out of phase or unrelated to each other. To illustrate the method, the spatial patterns of senile plaques and neurofibrillary tangles were studied in histological sections of brain tissue from patients with Alzheimer's disease.
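The Python sketch below illustrates the variance/mean analysis described above: counts from adjacent fields are pooled into successively larger blocks and the V/M ratio is computed at each block size. The simulated counts are illustrative only.

    # Sketch: variance/mean (V/M) ratio versus field (block) size for a transect
    # of counts, pooling adjacent fields into doubling block sizes.
    import numpy as np

    def vm_by_block_size(counts, max_doublings=5):
        """Return {block size: variance/mean ratio} for doubling block sizes."""
        counts = np.asarray(counts, dtype=float)
        results = {}
        for k in range(max_doublings + 1):
            size = 2 ** k
            n_blocks = len(counts) // size
            blocks = counts[: n_blocks * size].reshape(n_blocks, size).sum(axis=1)
            results[size] = blocks.var(ddof=1) / blocks.mean()
        return results

    # Toy transect: clustered counts, with dense patches about 8 fields wide.
    rng = np.random.default_rng(4)
    transect = rng.poisson(np.repeat(rng.choice([1, 10], 16), 8))
    for size, vm in vm_by_block_size(transect, 4).items():
        print(f"field size {size:2d}: V/M = {vm:.2f}")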
Abstract:
Visual mental imagery is a complex process that may be influenced by the content of mental images. Neuropsychological evidence from patients with hemineglect suggests that in the imagery domain environments and objects may be represented separately and may be selectively affected by brain lesions. In the present study, we used functional magnetic resonance imaging (fMRI) to assess the possibility of neural segregation among mental images depicting parts of an object, of an environment (imagined from a first-person perspective), and of a geographical map, using both a mass univariate and a multivariate approach. Data show that different brain areas are involved in different types of mental images. Imagining an environment relies mainly on regions known to be involved in navigational skills, such as the retrosplenial complex and parahippocampal gyrus, whereas imagining a geographical map mainly requires activation of the left angular gyrus, known to be involved in the representation of categorical relations. Imagining a familiar object mainly requires activation of parietal areas involved in visual space analysis in both the imagery and the perceptual domain. We also found that the pattern of activity in most of these areas specifically codes for the spatial arrangement of the parts of the mental image. Our results clearly demonstrate a functional neural segregation for different contents of mental images and suggest that visuospatial information is coded by different patterns of activity in brain areas involved in visual mental imagery. Hum Brain Mapp 36:945-958, 2015.