17 results for Mapping time
in Aston University Research Archive
Abstract:
The automatic interpolation of environmental monitoring network data, such as air quality or radiation levels, in a real-time setting poses a number of practical and theoretical questions. Among the problems found are (i) dealing with and communicating the uncertainty of predictions, (ii) automatic (hyper)parameter estimation, (iii) monitoring network heterogeneity, (iv) dealing with outlying extremes, and (v) quality control. In this paper we discuss these issues in light of the Spatial Interpolation Comparison exercise held in 2004.
Abstract:
The standard GTM (generative topographic mapping) algorithm assumes that the data on which it is trained consists of independent, identically distributed (iid) vectors. For time series, however, the iid assumption is a poor approximation. In this paper we show how the GTM algorithm can be extended to model time series by incorporating it as the emission density in a hidden Markov model. Since GTM has discrete hidden states we are able to find a tractable EM algorithm, based on the forward-backward algorithm, to train the model. We illustrate the performance of GTM through time using flight recorder data from a helicopter.
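To make the training procedure concrete, the sketch below implements the forward-backward recursions that give the E-step posteriors for an HMM with K discrete states, here standing in for the GTM latent grid points. The function and variable names are illustrative, and the paper's actual model couples these posteriors to the GTM likelihood in the M-step; this is a generic sketch, not the authors' code.

```python
import numpy as np
from scipy.special import logsumexp

def forward_backward(log_emission, log_A, log_pi):
    """log_emission: (T, K) log p(x_t | state k); log_A: (K, K) log transition
    probabilities; log_pi: (K,) log initial distribution. Returns the (T, K)
    posterior state probabilities (the E-step responsibilities)."""
    T, K = log_emission.shape
    log_alpha = np.zeros((T, K))
    log_beta = np.zeros((T, K))
    log_alpha[0] = log_pi + log_emission[0]
    for t in range(1, T):                      # forward pass
        log_alpha[t] = log_emission[t] + logsumexp(
            log_alpha[t - 1][:, None] + log_A, axis=0)
    for t in range(T - 2, -1, -1):             # backward pass
        log_beta[t] = logsumexp(
            log_A + (log_emission[t + 1] + log_beta[t + 1])[None, :], axis=1)
    log_gamma = log_alpha + log_beta           # normalise to posteriors
    log_gamma -= logsumexp(log_gamma, axis=1, keepdims=True)
    return np.exp(log_gamma)
```

Because the GTM's hidden states are discrete, these recursions stay tractable: the cost per time step is O(K²), independent of the data dimensionality.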
Abstract:
We have used MALDI-MS imaging (MALDI-MSI) to monitor the time-dependent appearance and loss of signals when tissue slices are brought rapidly to room temperature for short to medium periods of time. Sections from mouse brain were cut in a cryostat microtome, placed on a MALDI target and allowed to warm to room temperature for 30 s to 3 h. Sections were then refrozen, fixed by ethanol treatment and analysed by MALDI-MSI. The intensities of a range of markers were seen to vary across the time course, both increasing and decreasing, with the intensity of some markers changing significantly within 30 s; markers also showed tissue-location-specific evolution. The markers resulting from this autolysis were compared directly to those that evolved in a comparable 16 h on-tissue trypsin digest, and the markers that evolved in the two studies were seen to be substantially different. These changes offer an important additional level of location-dependent information for mapping changes and seeking disease-dependent biomarkers in the tissue. They also indicate that considerable care is required to allow comparison of biomarkers between MALDI-MSI experiments, and they have implications for the standard practice of thaw-mounting multiple tissue sections onto MALDI-MS targets.
Abstract:
A case study demonstrates the use of a process-based approach to change regarding the implementation of an information system for road traffic accident reporting in a UK police force. The supporting tools of process mapping and business process simulation are used in the change process and assist in communicating the current process design and people's roles in the overall performance of that design. The simulation model is also used to predict the performance of new designs incorporating the use of information technology. The approach is seen to have a number of advantages in the context of a public sector organisation. These include the ability for personnel to move from a traditional grouping of staff in occupational groups, with relationships defined by reporting requirements, to a view of their role in a process which delivers a performance to a customer. By running the simulation through time it is also possible to gauge how changes at an operational level can lead to the meeting of strategic targets over time. The ability of simulation to prove new designs was also seen as particularly important in a government agency where past failures of information technology investments had contributed to a more risk-averse approach to their implementation. © 2004 Elsevier Ltd. All rights reserved.
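As a loose, self-contained illustration of the kind of business process simulation described, the sketch below runs accident reports through a single processing resource over a working day and reports the mean waiting time. The arrival and service rates are invented placeholders, not figures from the case study.

```python
import heapq, random

# Minimal discrete-event sketch: accident reports arrive at random and
# queue for one processing resource. Rates below are illustrative only.
random.seed(1)
ARRIVAL_MEAN, SERVICE_MEAN, HORIZON = 2.0, 1.5, 8.0 * 60  # minutes

events, t, free_at, waits = [], 0.0, 0.0, []
while t < HORIZON:                      # generate exponential inter-arrivals
    t += random.expovariate(1.0 / ARRIVAL_MEAN)
    heapq.heappush(events, t)
while events:
    arrival = heapq.heappop(events)
    start = max(arrival, free_at)       # wait if the resource is busy
    free_at = start + random.expovariate(1.0 / SERVICE_MEAN)
    waits.append(start - arrival)

print(f"reports: {len(waits)}, mean wait: {sum(waits)/len(waits):.2f} min")
```

Running such a model through time, with different resource levels or redesigned process steps, is what lets operational changes be traced through to strategic performance targets.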
Abstract:
This paper proposes, and begins to evaluate, a format of brainstorming-type activity which aims to release the creativity of participants and encourage them to learn about a wider range of issues in more detail. The format does this through providing a two-stage brainstorming session. After the first brainstorm, participants have an opportunity both to piggy-back off other people's ideas (i.e. create new ideas by synthesising other people's ideas into their own perspectives) and to share causal links to build a causal map with the brainstormed ideas. Five causal mapping sessions with organisations have been analysed. Findings suggest that ideas shared when piggy-backing are often highly creative and unique for the participant who shared them. Piggy-backing and causal linking also seem to provide effective opportunities for individual learning, as participants have time to reflect upon other people's perspectives and share their own views on those.
Abstract:
Automatically generating maps of a measured variable of interest can be problematic. In this work we focus on the monitoring network context, where observations are collected and reported by a network of sensors and are then transformed into interpolated maps for use in decision making. Using traditional geostatistical methods, estimating the covariance structure of data collected in an emergency situation can be difficult. Variogram determination, whether by method-of-moments estimators or by maximum likelihood, is very sensitive to extreme values. Even when a monitoring network is in a routine mode of operation, sensors can sporadically malfunction and report extreme values. If this extreme data destabilises the model, causing the covariance structure of the observed data to be incorrectly estimated, the generated maps will be of little value, and the uncertainty estimates in particular will be misleading. Marchant and Lark [2007] propose a REML estimator for the covariance, which is shown to work on small data sets with a manual selection of the damping parameter in the robust likelihood. We show how this can be extended to allow treatment of large data sets, together with an automated approach to all parameter estimation. The projected process kriging framework of Ingram et al. [2007] is extended to allow the use of robust likelihood functions, including the two-component Gaussian and the Huber function. We show how our algorithm is further refined to reduce the computational complexity while at the same time minimising any loss of information. To show the benefits of this method, we use data collected from radiation monitoring networks across Europe. We compare our results to those obtained from traditional kriging methodologies and include comparisons with Box-Cox transformations of the data. We discuss the issue of whether to treat or ignore extreme values, making the distinction between the robust methods, which ignore outliers, and transformation methods, which treat them as part of the (transformed) process. Using a case study based on an extreme radiological event over a large area, we show how radiation data collected from monitoring networks can be analysed automatically and then used to generate reliable maps to inform decision making. We show the limitations of the methods and discuss potential extensions to remedy these.
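For context on why robustness matters here, the sketch below computes the classical method-of-moments (Matheron) variogram estimate on synthetic data and shows how a single extreme sensor reading inflates it; the robust REML approach developed in the paper is designed to avoid exactly this failure mode. All data, field, and bin choices below are illustrative.

```python
import numpy as np

def empirical_variogram(coords, values, bin_edges):
    """Matheron method-of-moments estimator: mean of 0.5*(z_i - z_j)^2
    over point pairs, binned by separation distance."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)      # each pair counted once
    d, sq = d[iu], sq[iu]
    return np.array([sq[(d >= lo) & (d < hi)].mean()
                     for lo, hi in zip(bin_edges[:-1], bin_edges[1:])])

rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(200, 2))
values = np.sin(coords[:, 0] / 20) + 0.1 * rng.standard_normal(200)
bins = np.arange(0, 60, 10)
clean = empirical_variogram(coords, values, bins)
values[0] += 50.0                               # one outlying sensor reading
dirty = empirical_variogram(coords, values, bins)
print(np.round(clean, 3), np.round(dirty, 3))   # the outlier inflates gamma
```

The squared differences give the single corrupted observation quadratic influence on every bin it participates in, which is why downstream covariance estimates, and hence the maps, become unreliable.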
Abstract:
Detailed transport studies in plasmas require the solution of the time evolution of many different initial positions of test particles in the phase space of the systems to be investigated. To reduce this amount of numerical work, one would like to replace the integration of the time-continuous system with a mapping.
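The Chirikov standard map is a textbook example of the kind of phase-space mapping meant here (it is not the specific map derived in this work): one discrete iteration stands in for many steps of continuous-time integration of a test particle.

```python
import numpy as np

def standard_map(theta, p, K=0.9, steps=1000):
    """Iterate the Chirikov standard map on the torus; K sets chaos strength."""
    traj = np.empty((steps, 2))
    for n in range(steps):
        p = (p + K * np.sin(theta)) % (2 * np.pi)
        theta = (theta + p) % (2 * np.pi)
        traj[n] = theta, p
    return traj

orbit = standard_map(theta=1.0, p=0.5)
print(orbit[:3])   # one mapping step replaces many integration steps
```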
Abstract:
In this paper we discuss a fast Bayesian extension to kriging algorithms which has been used successfully for fast, automatic mapping in emergency conditions in the Spatial Interpolation Comparison 2004 (SIC2004) exercise. The application of kriging to automatic mapping raises several issues, such as robustness, scalability, speed and parameter estimation. Various ad hoc solutions have been proposed and used extensively, but they lack a sound theoretical basis. In this paper we show how observations can be projected onto a representative subset of the data without losing significant information. This allows the complexity of the algorithm to grow as O(nm²), where n is the total number of observations and m is the size of the subset of the observations retained for prediction. The main contribution of this paper is to further extend this projective method through the application of space-limited covariance functions, which can be used as an alternative to the commonly used covariance models. In many real-world applications the correlation between observations essentially vanishes beyond a certain separation distance, so it makes sense to use a covariance model that encompasses this belief, since this leads to sparse covariance matrices for which optimised sparse matrix techniques can be used. In the presence of extreme values we show that space-limited covariance functions offer an additional benefit: they maintain the smoothness locally but at the same time lead to a more robust, and compact, global model. We show the performance of this technique, coupled with the sparse extension to the kriging algorithm, on synthetic data and outline a number of computational benefits such an approach brings. To test the relevance to automatic mapping we apply the method to the data used in a recent comparison of interpolation techniques (SIC2004) to map the levels of background ambient gamma radiation. © Springer-Verlag 2007.
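A minimal sketch of the space-limited covariance idea, assuming a compactly supported Wendland-type function (the paper's exact covariance families may differ): because the covariance is exactly zero beyond the cutoff range, only near neighbours contribute and the matrix can be assembled directly in sparse form.

```python
import numpy as np
from scipy import sparse
from scipy.spatial import cKDTree

def sparse_covariance(coords, rho=10.0, sigma2=1.0):
    """Covariance matrix under a Wendland C2 model, zero beyond range rho."""
    tree = cKDTree(coords)
    dist = tree.sparse_distance_matrix(tree, max_distance=rho)
    rows, cols, vals = [], [], []
    for (i, j), r in dist.items():
        if i == j:
            continue                    # diagonal added explicitly below
        t = r / rho
        rows.append(i); cols.append(j)
        vals.append(sigma2 * (1 - t) ** 4 * (4 * t + 1))   # Wendland C2
    n = len(coords)
    C = sparse.csr_matrix((vals, (rows, cols)), shape=(n, n))
    return C + sigma2 * sparse.identity(n, format="csr")

coords = np.random.default_rng(0).uniform(0, 100, size=(500, 2))
C = sparse_covariance(coords)
print(f"{C.nnz / 500**2:.1%} of entries non-zero")   # sparsity from cutoff
```

Sparse Cholesky or conjugate-gradient solvers then exploit this structure, which is where the computational benefits described above come from.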
Abstract:
The INTAMAP FP6 project has developed an interoperable framework for real-time automatic mapping of critical environmental variables by extending spatial statistical methods and employing open, web-based data exchange protocols and visualisation tools. This paper gives an overview of the underlying problem and of the project, and discusses which problems it has solved and which open problems seem most relevant to address next. The interpolation problem that INTAMAP solves is the generic problem of spatial interpolation of environmental variables without user interaction, based on measurements of e.g. PM10, rainfall or gamma dose rate, at arbitrary locations or over a regular grid covering the area of interest. It deals with problems of varying spatial resolution of measurements, the interpolation of averages over larger areas, and with providing information on the interpolation error to the end-user. In addition, monitoring network optimisation is addressed in a non-automatic context.
Abstract:
This thesis introduces and develops a novel real-time predictive maintenance system to estimate the machine system parameters using the motion current signature. Recently, motion current signature analysis has been addressed as an alternative to the use of sensors for monitoring internal faults of a motor. A maintenance system based upon the analysis of the motion current signature avoids the need for the implementation and maintenance of expensive motion sensing technology. By developing nonlinear dynamical analysis for the motion current signature, the research described in this thesis implements a novel real-time predictive maintenance system for current and future manufacturing machine systems. A crucial concept underpinning this project is that the motion current signature contains information relating to the machine system parameters and that this information can be extracted using nonlinear mapping techniques, such as neural networks. Towards this end, a proof-of-concept procedure is performed, which substantiates this concept. A simulation model, TuneLearn, is developed to simulate the large amount of training data required by the neural network approach. Statistical validation and verification of the model are performed to ascertain confidence in the simulated motion current signature. The validation experiment concludes that, although the simulation model generates a good macro-dynamical mapping of the motion current signature, it fails to accurately map the micro-dynamical structure due to the lack of knowledge regarding the performance of higher-order and nonlinear factors, such as backlash and compliance. Failure of the simulation model to determine the micro-dynamical structure suggests the presence of nonlinearity in the motion current signature. This motivated us to perform surrogate data testing for nonlinearity in the motion current signature. Results confirm the presence of nonlinearity in the motion current signature, thereby motivating the use of nonlinear techniques for further analysis. Outcomes of the experiment show that nonlinear noise reduction combined with the linear reverse algorithm offers precise machine system parameter estimation using the motion current signature for the implementation of the real-time predictive maintenance system. Finally, a linear reverse algorithm, BJEST, is developed and applied to the motion current signature to estimate the machine system parameters.
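The surrogate data test mentioned above can be illustrated with phase-randomised surrogates, which preserve a signal's power spectrum while destroying any nonlinear structure; if a nonlinear statistic of the original signal falls outside the distribution of the same statistic over the surrogates, nonlinearity is indicated. The signal and test statistic below are placeholders, not the thesis's motion current data or its exact test.

```python
import numpy as np

def phase_randomised_surrogate(x, rng):
    """Same amplitude spectrum as x, random Fourier phases."""
    spec = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, size=spec.shape)
    phases[0] = 0.0        # keep the mean (DC component) untouched
    phases[-1] = 0.0       # keep the Nyquist bin real for even-length x
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=len(x))

rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 40, 1024)) ** 3          # placeholder signal
stat = lambda s: np.mean(s[:-1] ** 2 * s[1:])      # crude nonlinear statistic
null = [stat(phase_randomised_surrogate(x, rng)) for _ in range(99)]
print(stat(x), np.percentile(null, [2.5, 97.5]))   # outside the CI -> nonlinear
```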
Abstract:
Tonal, textural and contextual properties are used in manual photointerpretation of remotely sensed data. This study has used these three attributes to produce a lithological map of semi-arid northwest Argentina by semi-automatic computer classification procedures applied to remotely sensed data. Three different types of satellite data were investigated: LANDSAT MSS, TM and SIR-A imagery. Supervised classification procedures using tonal features only produced poor classification results. LANDSAT MSS produced classification accuracies in the range of 40 to 60%, while accuracies of 50 to 70% were achieved using LANDSAT TM data. The addition of SIR-A data produced increases in the classification accuracy. The increased classification accuracy of TM over the MSS is because of the better discrimination of geological materials afforded by the middle infrared bands of the TM sensor. The maximum likelihood classifier consistently produced classification accuracies 10 to 15% higher than either the minimum-distance-to-means or the decision tree classifier; this improved accuracy was obtained at the cost of greatly increased processing time. A new type of classifier, the spectral shape classifier, which is computationally as fast as a minimum-distance-to-means classifier, is described. However, the results for this classifier were disappointing, being lower in most cases than those of the minimum distance or decision tree procedures. The classification results using only tonal features were felt to be unacceptably poor, so textural attributes were investigated. Texture is an important attribute used by photogeologists to discriminate lithology. In the case of TM data, texture measures were found to increase the classification accuracy by up to 15%. However, in the case of the LANDSAT MSS data the use of texture measures did not provide any significant increase in the accuracy of classification. For TM data, it was found that second-order texture, especially the SGLDM-based measures, produced the highest classification accuracy. Contextual post-processing was found to increase classification accuracy and improve the visual appearance of classified output by removing isolated misclassified pixels which tend to clutter classified images. Simple contextual features, such as mode filters, were found to outperform more complex features such as gravitational filters or minimal-area replacement methods. Generally, the larger the size of the filter, the greater the increase in accuracy. Production rules were used to build a knowledge-based system which used tonal and textural features to identify sedimentary lithologies in each of the two test sites. The knowledge-based system was able to identify six out of ten lithologies correctly.
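For illustration, the sketch below implements the minimum-distance-to-means classifier used as the speed baseline above: each pixel is assigned the class whose mean spectral signature is nearest in Euclidean distance, which is why it is so much cheaper than maximum likelihood (no class covariance matrices are estimated or inverted). The band values and class counts are synthetic placeholders.

```python
import numpy as np

def minimum_distance_classify(pixels, class_means):
    """pixels: (N, bands); class_means: (C, bands) -> (N,) class labels."""
    d = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=-1)
    return d.argmin(axis=1)

rng = np.random.default_rng(0)
means = rng.uniform(0, 255, size=(6, 7))       # 6 lithologies, 7 TM bands
pixels = means[rng.integers(0, 6, 1000)] + 5 * rng.standard_normal((1000, 7))
labels = minimum_distance_classify(pixels, means)
print(np.bincount(labels))                     # pixels assigned per class
```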
Abstract:
Early, lesion-based models of language processing suggested that semantic and phonological processes are associated with distinct temporal and parietal regions respectively, with frontal areas more indirectly involved. Contemporary spatial brain mapping techniques have not supported such clear-cut segregation, with strong evidence of activation in left temporal areas by both processes and disputed evidence of involvement of frontal areas in both processes. We suggest that combining spatial information with temporal and spectral data may allow a closer scrutiny of the differential involvement of closely overlapping cortical areas in language processing. Using beamforming techniques to analyze magnetoencephalography data, we localized the neuronal substrates underlying primed responses to nouns requiring either phonological or semantic processing, and examined the associated measures of time and frequency in those areas where activation was common to both tasks. Power changes in the beta (14-30 Hz) and gamma (30-50 Hz) frequency bands were analyzed in pre-selected time windows of 350-550 and 500-700 ms. In left temporal regions, both tasks elicited power changes in the same time window (350-550 ms), but with different spectral characteristics: low beta (14-20 Hz) for the phonological task and high beta (20-30 Hz) for the semantic task. In frontal areas (BA10), both tasks elicited power changes in the gamma band (30-50 Hz), but in different time windows: 500-700 ms for the phonological task and 350-550 ms for the semantic task. In the left inferior parietal area (BA40), both tasks elicited changes in the 20-30 Hz beta frequency band but in different time windows: 350-550 ms for the phonological task and 500-700 ms for the semantic task. Our findings suggest that, where spatial measures may indicate overlapping areas of involvement, additional beamforming techniques can demonstrate differential activation in the time and frequency domains. © 2012 McNab, Hillebrand, Swithenby and Rippon.
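A minimal sketch of the core measurement in this design: the mean spectral power of a virtual-sensor time course restricted to a chosen time window and frequency band, such as low beta at 350-550 ms. The signal here is synthetic; a real analysis would apply this to beamformer output and add baseline correction and statistics.

```python
import numpy as np

def band_power(signal, fs, t_window, f_band, t0=0.0):
    """Mean spectral power of signal within t_window (s) and f_band (Hz)."""
    i0 = int((t_window[0] - t0) * fs)
    i1 = int((t_window[1] - t0) * fs)
    seg = signal[i0:i1] * np.hanning(i1 - i0)       # taper the window
    freqs = np.fft.rfftfreq(len(seg), d=1 / fs)
    power = np.abs(np.fft.rfft(seg)) ** 2
    mask = (freqs >= f_band[0]) & (freqs <= f_band[1])
    return power[mask].mean()

fs = 600.0                                          # Hz, an assumed MEG rate
t = np.arange(0, 1.0, 1 / fs)
sig = np.sin(2 * np.pi * 17 * t) \
    + 0.5 * np.random.default_rng(0).standard_normal(t.size)
print(band_power(sig, fs, (0.350, 0.550), (14, 20)))  # low-beta, 350-550 ms
```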
Abstract:
We analyze the steady-state propagation of optical pulses in fiber transmission systems with lumped nonlinear optical devices (NODs) placed periodically in the line. For the first time to our knowledge, a theoretical model is developed to describe the transmission regime with a quasilinear pulse evolution along the transmission line and the point action of NODs. We formulate the mapping problem for pulse propagation in a unit cell of the line and show that in the particular application to nonlinear optical loop mirrors, the steady-state pulse characteristics predicted by the theory accurately reproduce the results of direct numerical simulations.
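For orientation, the standard power-transfer characteristic of a nonlinear optical loop mirror (in the Doran-Wood form) is reproduced below; the point action of such a NOD in a mapping model is described by a transfer function of this general kind, though the paper's specific formulation should be taken from the paper itself.

```latex
% Textbook NOLM power-transfer characteristic (Doran-Wood form).
% \rho: coupler splitting ratio, \gamma: fibre nonlinearity coefficient,
% L: loop length, P: instantaneous input power.
\[
  T(P) \;=\; \frac{P_{\mathrm{out}}}{P_{\mathrm{in}}}
       \;=\; 1 - 2\rho(1-\rho)\left[1 + \cos\!\bigl((1-2\rho)\,\gamma P L\bigr)\right]
\]
```

For \(\rho \neq 1/2\), low-power radiation is largely reflected while the pulse peak is transmitted, which is the switching behaviour exploited when NOLMs are placed periodically along the line.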
Abstract:
This paper presents and demonstrates a method for using magnetic resonance imaging to measure the local pressure of a fluid saturating a porous medium. The method is tested both in a static system of packed silica gel and in saturated sintered glass cylinders experiencing fluid flow. The fluid used contains 3% gas in the form of gas-filled 1,2-distearoyl-sn-glycero-3-phosphocholine (C18:0, MW: 790.16) liposomes of 3 μm average diameter, suspended in 5% glycerol and 0.5% methyl cellulose with water. Preliminary studies at 2.35 T demonstrate relative magnetic resonance signal changes of 20% per bar in bulk fluid for an echo time TE=40 ms, and 6-10% per bar in consolidated porous media for TE=10 ms, over the range 0.8-1.8 bar for a spatial resolution of 0.1 mm³ and a temporal resolution of 30 s. The stability of this solution in relation to applied pressure and methods for improving sensitivity are discussed. © 2007 Elsevier Inc. All rights reserved.
Abstract:
Monitoring land-cover changes on sites of conservation importance allows environmental problems to be detected, solutions to be developed and the effectiveness of actions to be assessed. However, the remoteness of many sites or a lack of resources means these data are frequently not available. Remote sensing may provide a solution, but large-scale mapping and change detection may not be appropriate, necessitating site-level assessments. These need to be easy to undertake, rapid and cheap. We present an example of a Web-based solution based on free and open-source software and standards (including PostGIS, OpenLayers, Web Map Services, Web Feature Services and GeoServer) to support assessments of land-cover change (and validation of global land-cover maps). Authorised users are provided with means to assess land-cover visually and may optionally provide uncertainty information at various levels: from a general rating of their confidence in an assessment to a quantification of the proportions of land-cover types within a reference area. Versions of this tool have been developed for the TREES-3 initiative (Simonetti, Beuchle and Eva, 2011), which monitors tropical land-cover change through ground-truthing at latitude/longitude degree confluence points, and for monitoring of change within and around Important Bird Areas (IBAs) by BirdLife International and the Royal Society for the Protection of Birds (RSPB). In this paper we present results from the second of these applications. We also present further details on the potential use of the land-cover change assessment tool on sites of recognised conservation importance, in combination with NDVI and other time series data from the eStation (a system for receiving, processing and disseminating environmental data). We show how the tool can be used to increase the usability of earth observation data by local stakeholders and experts, and assist in evaluating the impact of protection regimes on land-cover change.
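As a concrete illustration of the open standards listed above, the sketch below builds a WMS 1.1.1 GetMap request of the kind an OpenLayers front end would issue to a GeoServer instance. The endpoint and layer name are hypothetical placeholders; the GetMap parameters themselves are standard OGC WMS.

```python
from urllib.parse import urlencode

# Assemble a standard WMS 1.1.1 GetMap request. "sitemonitor:land_cover"
# and the example.org endpoint are invented for illustration.
params = {
    "service": "WMS",
    "version": "1.1.1",
    "request": "GetMap",
    "layers": "sitemonitor:land_cover",       # hypothetical layer name
    "bbox": "36.0,-3.5,37.0,-2.5",            # lon/lat extent of interest
    "srs": "EPSG:4326",
    "width": 512,
    "height": 512,
    "format": "image/png",
}
url = "https://example.org/geoserver/wms?" + urlencode(params)
print(url)   # fetch with any HTTP client, or point an OpenLayers layer at it
```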