11 results for space use

in Aston University Research Archive


Relevance: 30.00%

Abstract:

The deficiencies of stationary models applied to financial time series are well documented. A special form of non-stationarity, where the underlying generator switches between (approximately) stationary regimes, seems particularly appropriate for financial markets. We use dynamic switching (modelled by a hidden Markov model) combined with a linear dynamical system in a hybrid switching state space model (SSSM), and discuss the practical details of training such models with the variational EM algorithm due to Ghahramani and Hinton (1998). The performance of the SSSM is evaluated on several financial data sets and it is shown to improve on a number of existing benchmark methods.
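The generative side of such a model can be illustrated with a minimal sketch: a hidden Markov chain selects which of two linear dynamical regimes produces the next observation. The regime parameters and transition matrix below are illustrative assumptions, not values fitted in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-regime switching state space model: a hidden Markov
# chain picks the active regime; each regime is a simple AR(1) system.
A = [0.99, 0.80]              # per-regime AR coefficients (assumed)
q = [0.05, 0.40]              # per-regime state-noise std (assumed)
P = np.array([[0.98, 0.02],   # HMM transition matrix between regimes
              [0.05, 0.95]])

def simulate(T=500):
    s, x = 0, 0.0
    states, xs = [], []
    for _ in range(T):
        s = rng.choice(2, p=P[s])                    # regime switch
        x = A[s] * x + q[s] * rng.standard_normal()  # linear dynamics
        states.append(int(s)); xs.append(x)
    return np.array(states), np.array(xs)

states, xs = simulate()
```

Training such a model with variational EM then amounts to alternating between inferring the hidden regime sequence and the continuous state, and re-estimating the regime parameters.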

Relevance: 30.00%

Abstract:

To make vision possible, the visual nervous system must represent the most informative features in the light pattern captured by the eye. Here we use Gaussian scale-space theory to derive a multiscale model for edge analysis and test it in perceptual experiments. At all scales there are two stages of spatial filtering. An odd-symmetric, Gaussian first derivative filter provides the input to a Gaussian second derivative filter. Crucially, the output at each stage is half-wave rectified before feeding forward to the next. This creates nonlinear channels selectively responsive to one edge polarity while suppressing spurious or "phantom" edges. The two stages have properties analogous to simple and complex cells in the visual cortex. Edges are found as peaks in a scale-space response map that is the output of the second stage. The position and scale of the peak response identify the location and blur of the edge. The model predicts, remarkably accurately, our results on human perception of edge location and blur for a wide range of luminance profiles, including the surprising finding that blurred edges look sharper when their length is made shorter. The model enhances our understanding of early vision by integrating computational, physiological, and psychophysical approaches. © ARVO.
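The two filtering stages can be sketched in 1-D as follows. The filter scales and the test stimulus are illustrative assumptions; they are not the values fitted in the paper.

```python
import numpy as np

def gauss_deriv(sigma, order):
    # Sampled derivative-of-Gaussian kernels for use with np.convolve.
    r = int(4 * sigma)
    t = np.arange(-r, r + 1, dtype=float)
    g = np.exp(-t**2 / (2 * sigma**2))
    g /= g.sum()
    if order == 1:
        return -t / sigma**2 * g               # g'(t), odd-symmetric
    return (t**2 - sigma**2) / sigma**4 * g    # g''(t), even-symmetric

def edge_response(signal, sigma1=2.0, sigma2=2.0):
    # Stage 1: Gaussian first-derivative filter, half-wave rectified.
    r1 = np.maximum(np.convolve(signal, gauss_deriv(sigma1, 1), "same"), 0)
    # Stage 2: Gaussian second-derivative filter, rectified again; the
    # rectifications suppress "phantom" edges of the wrong polarity.
    return np.maximum(-np.convolve(r1, gauss_deriv(sigma2, 2), "same"), 0)

x = np.linspace(-5, 5, 201)
blurred_edge = 1 / (1 + np.exp(-x / 0.5))          # blurred step at x = 0
peak = int(np.argmax(edge_response(blurred_edge)))  # index of the edge
```

The peak of the second-stage response lands at the centre of the blurred step (index 100 here), which is how the model reads off edge location; repeating the analysis over a range of scales gives the blur estimate.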

Relevance: 30.00%

Abstract:

Whilst some authors have portrayed the Internet as a powerful tool for business and political institutions, others have highlighted the potential of this technology for those vying to constrain or counter-balance the power of organizations, through e-collectivism and on-line action. What appears to be emerging is a contested space that has the potential to simultaneously enhance the power of organizations, whilst also acting as an enabling technology for the empowerment of grass-roots networks. In this struggle, organizations are fighting for the retention of "old economy" positions, as well as the development of "new economy" power-bases. In realizing these positions, organizations and institutions are strategizing and manoeuvring in order to shape on-line networks and communications. For example, the on-line activities of individuals can be contained through various technological means, such as surveillance, and the structuring of the virtual world through the use of portals and "walled gardens". However, loose groupings of individuals are also strategizing to ensure there is a liberation of their communication paths and practices, and to maintain the potential for mobilization within and across traditional boundaries. In this article, the unique nature and potential of the Internet are evaluated, and the struggle over this contested virtual space is explored.

Relevance: 30.00%

Abstract:

In this paper we discuss a fast Bayesian extension to kriging algorithms which has been used successfully for fast, automatic mapping in emergency conditions in the Spatial Interpolation Comparison 2004 (SIC2004) exercise. The application of kriging to automatic mapping raises several issues such as robustness, scalability, speed and parameter estimation. Various ad hoc solutions have been proposed and used extensively but they lack a sound theoretical basis. In this paper we show how observations can be projected onto a representative subset of the data, without losing significant information. This allows the complexity of the algorithm to grow as O(nm^2), where n is the total number of observations and m is the size of the subset of the observations retained for prediction. The main contribution of this paper is to further extend this projective method through the application of space-limited covariance functions, which can be used as an alternative to the commonly used covariance models. In many real world applications the correlation between observations essentially vanishes beyond a certain separation distance. Thus it makes sense to use a covariance model that encompasses this belief, since this leads to sparse covariance matrices for which optimised sparse matrix techniques can be used. In the presence of extreme values we show that space-limited covariance functions offer an additional benefit: they maintain the smoothness locally but at the same time lead to a more robust, and compact, global model. We show the performance of this technique coupled with the sparse extension to the kriging algorithm on synthetic data and outline a number of computational benefits such an approach brings. To test the relevance to automatic mapping we apply the method to the data used in a recent comparison of interpolation techniques (SIC2004) to map the levels of background ambient gamma radiation. © Springer-Verlag 2007.
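The sparsity argument can be illustrated with one standard compactly supported covariance, the spherical model, which is exactly zero beyond a range r. This is only an example of a space-limited family; the paper's specific choice of covariance function may differ.

```python
import numpy as np

def spherical_cov(h, r=1.0, sill=1.0):
    # Spherical covariance: positive definite in 1-3 dimensions and
    # identically zero for separations h >= r.
    h = np.asarray(h, dtype=float)
    c = sill * (1 - 1.5 * h / r + 0.5 * (h / r) ** 3)
    return np.where(h < r, c, 0.0)

x = np.linspace(0, 10, 200)          # 1-D observation locations
H = np.abs(x[:, None] - x[None, :])  # pairwise separation distances
K = spherical_cov(H, r=1.0)          # covariance matrix of observations
sparsity = np.mean(K == 0.0)         # large fraction of exact zeros
```

Because most entries of K are exactly zero rather than merely small, sparse Cholesky or conjugate-gradient solvers can be applied directly, which is the computational benefit the abstract refers to.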

Relevance: 30.00%

Abstract:

The generation of very short range forecasts of precipitation in the 0-6 h time window is traditionally referred to as nowcasting. Most existing nowcasting systems essentially extrapolate radar observations in some manner; however, very few systems account for the uncertainties involved. Thus deterministic forecasts are produced, which are of limited use when decisions must be made, since they carry no measure of confidence or spread of the forecast. This paper develops a Bayesian state space modelling framework for quantitative precipitation nowcasting which is probabilistic from conception. The model treats the observations (radar) as noisy realisations of the underlying true precipitation process, recognising that this process can never be completely known, and thus must be represented probabilistically. In the model presented here the dynamics of the precipitation are dominated by advection, so this is a probabilistic extrapolation forecast. The model is designed in such a way as to minimise the computational burden, while maintaining a full, joint representation of the probability density function of the precipitation process. The update and evolution equations avoid the need to sample, so only one model needs to be run, as opposed to the more traditional ensemble route. It is shown that the model works well on both simulated and real data, but that further work is required before the model can be used operationally. © 2004 Elsevier B.V. All rights reserved.
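The evolution step of such an advection-dominated probabilistic forecast can be caricatured as: shift the mean rain field by a known velocity and inflate the variance to represent growing forecast uncertainty. This is only a sketch of the evolution equations; the paper's model maintains a full joint density and a Bayesian radar update, neither of which is shown here.

```python
import numpy as np

def advect_step(mean, var, shift=1, q=0.1):
    # Pure advection of the mean field (periodic 1-D domain for
    # simplicity), plus process noise q inflating the variance.
    return np.roll(mean, shift), var + q

mean = np.zeros(50); mean[10] = 5.0   # a single rain cell at index 10
var = np.full(50, 0.5)                # initial pointwise uncertainty
for _ in range(3):                    # three forecast steps
    mean, var = advect_step(mean, var)
# Rain cell has moved to index 13; uncertainty has grown everywhere.
```

Because mean and variance are propagated in closed form, no ensemble of model runs is needed, which mirrors the computational argument in the abstract.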

Relevance: 30.00%

Abstract:

There has been a dramatic change in the U.K. government policy regarding the establishment of new towns. The emphasis is now on the redevelopment of existing cities rather than on building new ones. This has created an urgent need to carry out detailed surveys and inventories of many aspects of urban land use in metropolitan areas: this study concentrates on just one aspect - urban open space. In the first stage, 1:10,000 scale black and white and 1:10,000 scale colour infra-red aerial photographs were compared to establish the type and amount of open space information which could be obtained from each source. The advantages of using colour infra-red photography were clearly demonstrated in this comparison. The second stage was the use of colour infra-red photography as the sole source of data to survey and map the urban open space of a sample area in Merseyside Metropolitan County. This sample area comprised eleven 1/4 km2 squares, on each of which a grid of 20m x 20m cells was placed to record, directly from the photography, 625 sets of data. Each set of data recorded the type and amount of open space, its surface cover, maintenance status and management. The data recorded were fed into a computer and a suite of programs was developed to provide output in both computer map and statistical form, for each of the eleven 1/4 km2 sample areas. The third stage involved a comparison of open space data with socio-economic status. Merseyside County Planning Authority had previously conducted a socio-economic survey of the county, and this information was used to identify the socio-economic status of the population in the eleven 1/4 km2 areas of this project. This comparison revealed many interesting and useful relationships between the provision of urban open space and socio-economic status.

Relevance: 30.00%

Abstract:

This thesis records the design and development of an electrically driven, air to water, vapour compression heat pump of nominally 6 kW heat output, for residential space heating. The study was carried out on behalf of GEC Research Ltd through the Interdisciplinary Higher Degrees Scheme at Aston University. A computer based mathematical model of the vapour compression cycle was produced as a design aid, to enable the effects of component design changes or variations in operating conditions to be predicted. This model is supported by performance testing of the major components, which revealed that improvements in the compressor isentropic efficiency offer the greatest potential for further increases in cycle COPh. The evaporator was designed from first principles, and is based on wire-wound heat transfer tubing. Two evaporators, of air-side areas 10.27 and 16.24 m2, were tested in a temperature and humidity controlled environment, demonstrating that the benefits of the large coil are greater heat pump heat output and lower noise levels. A systematic study of frost growth rates suggested that this problem is most severe at the conditions of saturated air at 0°C combined with low condenser water temperature. A dynamic simulation model was developed to predict the in-service performance of the heat pump. This study confirmed the importance of an adequate radiator area for heat pump installations. A prototype heat pump was designed and manufactured, consisting of a hermetic reciprocating compressor, a coaxial tube condenser and a helically coiled evaporator, using Refrigerant 22. The prototype was field tested in a domestic environment for one and a half years. The installation included a comprehensive monitoring system. Initial problems were encountered with defrosting and compressor noise, both of which were solved. The unit then operated throughout the 1985/86 heating season without further attention, producing a COPh of 2.34.

Relevance: 30.00%

Abstract:

A new 3D implementation of a hybrid model based on the analogy with two-phase hydrodynamics has been developed for the simulation of liquids at the microscale. The idea of the method is to smoothly combine the atomistic description in the molecular dynamics zone with the Landau-Lifshitz fluctuating hydrodynamics representation in the rest of the system, in the framework of macroscopic conservation laws, through the use of a single "zoom-in" user-defined function s that has the meaning of a partial concentration in the two-phase analogy model. In comparison with our previous works, the implementation has been extended to full 3D simulations for a range of atomistic models in GROMACS, from argon to water, in equilibrium conditions with a constant or a spatially variable function s. Preliminary results of simulating the diffusion of a small peptide in water are also reported.
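The "zoom-in" idea can be sketched as a blending function s(x) that equals 1 in the atomistic core, 0 in the pure-hydrodynamics region, and varies smoothly across a transition shell. The smoothstep form and radii below are assumptions for illustration; the method only requires that s be a user-defined partial-concentration field.

```python
import numpy as np

def blend(x, r_md=1.0, r_hd=3.0):
    # s(x): 1 inside radius r_md (fully atomistic), 0 beyond r_hd
    # (fully hydrodynamic), smoothstep in between. Radii are assumed.
    t = np.clip((x - r_md) / (r_hd - r_md), 0.0, 1.0)
    return 1.0 - t * t * (3.0 - 2.0 * t)

# A local quantity could then be mixed as
#   value = s * atomistic_value + (1 - s) * hydrodynamic_value
# subject to the macroscopic conservation laws coupling the two zones.
```

The smooth, monotone transition avoids an abrupt interface between the molecular dynamics zone and the fluctuating-hydrodynamics continuum.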

Relevance: 30.00%

Abstract:

In global policy documents, the language of Technology-Enhanced Learning (TEL) now firmly structures a perception of educational technology which 'subsumes' terms like Networked Learning and e-Learning. Embedded in these three words though is a deterministic, economic assumption that technology has now enhanced learning, and will continue to do so. In a market-driven, capitalist society this is a 'trouble free', economically focused discourse which suggests there is no need for further debate about what the use of technology achieves in learning. Yet this raises a problem too: if technology achieves goals for human beings, then in education we are now simply counting on 'use of technology' to enhance learning. This closes the door on a necessary and ongoing critical pedagogical conversation that reminds us it is people that design learning, not technology. Furthermore, such discourse provides a vehicle for those with either strong hierarchical, or neoliberal agendas to make simplified claims politically, in the name of technology. This chapter is a reflection on our use of language in the educational technology community through a corpus-based Critical Discourse Analysis (CDA). In analytical examples that are 'loaded' with economic expectation, we can notice how the policy discourse of TEL narrows conversational space for learning, so that people may struggle to recognise their own subjective being in this language. Through the lens of Lieras's externality, desubjectivisation and closure (Lieras, 1996) we might examine possible effects of this discourse and seek a more emancipatory approach. A return to discussing Networked Learning is suggested, as a first step towards a more multi-directional conversation than TEL, one that acknowledges the interrelatedness of technology, language and learning in people's practice. Secondly, a reconsideration of how we write policy for educational technology is recommended, with a critical focus on how people learn, rather than on what technology is assumed to enhance.

Relevance: 30.00%

Abstract:

A ground-based laser system for space-debris cleaning will use powerful laser pulses that can self-focus while propagating through the atmosphere. We demonstrate that for the relevant laser parameters, this self-focusing can noticeably decrease the laser intensity on the target. We show that the detrimental effect can be, to a great extent, compensated for by applying the optimal initial beam defocusing. The effect of laser elevation on the system performance is discussed.
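As a rough orientation to the scales involved, Kerr self-focusing becomes significant when the pulse power exceeds the critical power P_cr ≈ 3.77 λ² / (8π n₀ n₂). The wavelength and the nonlinear index of air below are commonly quoted approximate values, not parameters taken from the paper.

```python
import math

# Back-of-envelope estimate of the critical power for Kerr
# self-focusing in air. All numbers are assumptions for illustration.
lam = 1.064e-6   # laser wavelength, m (assumed near-IR system)
n0 = 1.0         # linear refractive index of air
n2 = 3e-23       # Kerr nonlinear index of air, m^2/W (approximate)

P_cr = 3.77 * lam**2 / (8 * math.pi * n0 * n2)   # watts
# P_cr comes out at a few gigawatts, so the powerful pulses needed for
# debris removal sit well above threshold and self-focus in the
# atmosphere, motivating the compensating initial defocus.
```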

Relevance: 30.00%

Abstract:

This research focuses on automatically adapting a search engine's size in response to fluctuations in query workload. Deploying a search engine in an Infrastructure as a Service (IaaS) cloud facilitates allocating or deallocating computer resources to or from the engine. Our solution is an adaptive search engine that repeatedly re-evaluates its load and, when appropriate, switches over to a different number of active processors. We focus on three aspects, broken out into three sub-problems: Continually determining the Number of Processors (CNP), the New Grouping Problem (NGP) and the Regrouping Order Problem (ROP). CNP is the problem of determining, in the light of changes in the query workload, the ideal number of processors p to keep active at any given time. NGP arises once a change in the number of processors has been decided: it must then be determined which groups of search data will be distributed across the processors. ROP is the problem of redistributing this data onto processors while keeping the engine responsive and minimising the switchover time and the incurred network load. We propose solutions for these sub-problems. For NGP we propose an algorithm for incrementally adjusting the index to fit the varying number of virtual machines. For ROP we present an efficient method for redistributing data among processors while keeping the search engine responsive. For CNP, we propose an algorithm that determines the new size of the search engine by re-evaluating its load. We tested the solution's performance using a custom-built prototype search engine deployed in the Amazon EC2 cloud. Our experiments show that, compared with computing the index from scratch, the incremental algorithm speeds up index computation 2-10 times while maintaining similar search performance. The chosen redistribution method is 25% to 50% faster than other methods and reduces the network load by around 30%. For CNP we present a deterministic algorithm that shows a good ability to determine a new size for the search engine. When combined, these algorithms give an adapting algorithm that is able to adjust the search engine size under a variable workload.
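A CNP-style resizing rule can be sketched as a simple load threshold with hysteresis, so the engine does not oscillate between sizes. The per-processor capacity and thresholds below are assumed values for illustration, not figures from the thesis.

```python
def choose_processors(query_rate, current_p, capacity=100.0,
                      up=0.8, down=0.4):
    # Hypothetical CNP-style rule: compare current load against an
    # upper and lower threshold and scale out/in by one processor.
    # capacity = queries/sec one processor can serve (assumed).
    load = query_rate / (current_p * capacity)
    if load > up:                       # overloaded: scale out
        return current_p + 1
    if load < down and current_p > 1:   # underloaded: scale in
        return current_p - 1
    return current_p                    # within band: keep size

p = choose_processors(450.0, 4)   # load 1.125 > 0.8, so grow to 5
```

The hysteresis band (scale out above 80% load, in below 40%) is what keeps switchovers, and hence the NGP/ROP regrouping cost, infrequent.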