19 results for Shift-and-add algorithms
in Aston University Research Archive
Abstract:
Magnification factors specify the extent to which the area of a small patch of the latent (or 'feature') space of a topographic mapping is magnified on projection to the data space, and are of considerable interest in both neuro-biological and data analysis contexts. Previous attempts to consider magnification factors for the self-organizing map (SOM) algorithm have been hindered because the mapping is only defined at discrete points (given by the reference vectors). In this paper we consider the batch version of SOM, for which a continuous mapping can be defined, as well as the Generative Topographic Mapping (GTM) algorithm of Bishop et al. (1997), which has been introduced as a probabilistic formulation of the SOM. We show how the techniques of differential geometry can be used to determine magnification factors as continuous functions of the latent space coordinates. The results are illustrated here using a problem involving the identification of crab species from morphological data.
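For a continuous mapping from latent space to data space (as available for the batch SOM and the GTM), the magnification factor can be expressed through the Jacobian of the mapping. A minimal sketch of the standard differential-geometry relation, with notation assumed rather than taken from the paper:

```latex
% y : latent space -> data space is the continuous mapping (batch SOM or GTM),
% J_{di} = \partial y_d / \partial x_i is its Jacobian, and g = J^{\top} J is
% the metric induced on the latent space.
g_{ij} = \sum_{d} \frac{\partial y_d}{\partial x_i}\,
                  \frac{\partial y_d}{\partial x_j}
       = \big(J^{\top} J\big)_{ij},
\qquad
\frac{\mathrm{d}A'}{\mathrm{d}A} = \sqrt{\det\!\big(J^{\top} J\big)}.
```

Here dA is the area of a small latent-space patch and dA' the area of its image in the data space; for the GTM the Jacobian is available in closed form, since the mapping is a linear combination of fixed basis functions, y(x) = W\phi(x).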
Abstract:
Many tracking algorithms have difficulty dealing with occlusions and background clutter, and consequently do not converge to an appropriate solution. Tracking based on the mean shift algorithm has shown robust performance in many circumstances but still fails when, for example, encountering dramatic intensity or colour changes in a pre-defined neighbourhood. In this paper, we present a robust tracking algorithm that integrates the advantages of mean shift tracking with those of tracking local invariant features. These features are integrated into the mean shift formulation so that tracking is performed based both on mean shift and feature probability distributions, coupled with an expectation maximisation scheme. Experimental results show robust tracking performance on a series of complicated real image sequences. © 2010 IEEE.
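As an illustration of the core update that such trackers share, the sketch below implements a plain mean shift iteration over a per-pixel weight map; the names, the circular window, and the flat kernel are assumptions, and the paper's coupling with local invariant-feature probabilities and expectation maximisation is not reproduced here:

```python
import numpy as np

def mean_shift_step(weights, y, radius):
    """One mean shift iteration: move the window centre y towards the weighted
    centroid of the per-pixel likelihoods inside a circular neighbourhood."""
    h, w = weights.shape
    ys, xs = np.mgrid[0:h, 0:w]
    mask = (ys - y[0]) ** 2 + (xs - y[1]) ** 2 <= radius ** 2
    total = weights[mask].sum()
    if total == 0:
        return y  # no support in the window; keep the current centre
    return np.array([(ys[mask] * weights[mask]).sum() / total,
                     (xs[mask] * weights[mask]).sum() / total])

def track(weights, y0, radius=20, tol=0.5, max_iter=30):
    """Iterate mean shift steps until the window centre stops moving."""
    y = np.asarray(y0, dtype=float)
    for _ in range(max_iter):
        y_new = mean_shift_step(weights, y, radius)
        if np.linalg.norm(y_new - y) < tol:
            break
        y = y_new
    return y
```

In practice `weights` would be a back-projection of the target's colour or feature model onto the current frame, recomputed at every frame.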
Abstract:
In this paper we examine the relation between ownership structure and operating performance for European maritime firms. Using a sample of 266 firm-year observations, during the period 2002–2004, we provide evidence that operating performance is positively related with foreign held shares and investment corporation held shares, indicating better investor protection from managerial opportunism. We also find no relation between operating performance and employee held shares, suggesting no relation between employee commitment and firms’ economic performance. Furthermore, we find no relation between operating performance and government held shares, indicating that government may not adequately protect shareholders’ interests from managerial opportunism. Finally, we do find a positive relation between operating performance and portfolio held shares for code law maritime firms but not for common law maritime firms. Results are robust after adjusting for various firm and country risk characteristics. Overall, our results on the importance of the ownership structure are new to this setting and add to a large body of evidence linking ownership characteristics to corporate performance.
Abstract:
This thesis presents details on both theoretical and experimental aspects of UV-written fibre gratings. The main body of the thesis deals with the design, fabrication and testing of telecommunication optical fibre grating devices, but an accurate theoretical analysis of intra-core fibre gratings is also presented. For more than a decade, fibre gratings have been used extensively in the telecommunication field (for instance as filters, dispersion compensators, and add/drop multiplexers). Gratings for telecommunication must conform to very high fabrication standards, as the presence of any imperfection raises the noise level in the transmission system, compromising its ability to deliver an intelligible sequence of bits to the receiver. Strong side-lobe suppression and a high, sharp reflection profile are therefore necessary characteristics. A fundamental part of the theoretical and experimental work reported in this thesis concerns apodisation. The physical principle of apodisation is introduced, and a number of apodisation techniques, experimental results, and numerical optimisations of the shading functions and of all the practical parameters involved in the fabrication are detailed. The measurement of chromatic dispersion in fibres and FBGs is detailed and an estimation of its accuracy is given. An overview of the possible methods for the fabrication of tunable fibre gratings is given before a new dispersion compensator device, based on the action of a distributed strain on a linearly chirped FBG, is described. It is shown that tuning of the second- and third-order dispersion of the grating can be obtained by using a specially designed multipoint bending rig. Experiments on the recompression of optical pulses travelling long distances are detailed for 10 Gb/s and 40 Gb/s. The characterisation of a new kind of double-section LPG fabricated on a metal-clad coated fibre is reported. The fabrication of the device is made easier by writing the grating directly through the metal coating. This device may be used to overcome the recoating problems associated with standard LPGs written in step-index fibre. It can also be used as a sensor for simultaneous measurements of temperature and surrounding-medium refractive index.
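As background to the chirped-FBG dispersion compensator described above, the standard textbook relations below indicate why a chirped grating (and, by extension, a strain-tuned one) produces a wavelength-dependent delay; the notation is generic and not taken from the thesis:

```latex
% Bragg condition for a grating of local period \Lambda(z) in a fibre of
% effective index n_{\mathrm{eff}}:
\lambda_B(z) = 2\, n_{\mathrm{eff}}\, \Lambda(z).
% In a chirped grating each wavelength is reflected at a different position
% z(\lambda) along the grating, so the round-trip group delay, and hence the
% dispersion, become wavelength dependent:
\tau(\lambda) \approx \frac{2\, n_{\mathrm{eff}}\, z(\lambda)}{c},
\qquad
D = \frac{\mathrm{d}\tau}{\mathrm{d}\lambda}.
```

Applying a distributed strain changes the local period \Lambda(z), and therefore the mapping z(\lambda) and the resulting dispersion, which is the tuning mechanism exploited by the bending rig.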
Abstract:
The rapid developments in computer technology have resulted in a widespread use of discrete event dynamic systems (DEDSs). This type of system is complex because it exhibits properties such as concurrency, conflict and non-determinism. It is therefore important to model and analyse such systems before implementation to ensure safe, deadlock-free and optimal operation. This thesis investigates current modelling techniques and describes Petri net theory in more detail. It reviews top-down, bottom-up and hybrid Petri net synthesis techniques that are used to model large systems, and introduces an object-oriented methodology to enable modelling of larger and more complex systems. Designs obtained by this methodology are modular, easy to understand and allow re-use of designs. Control is the next logical step in the design process. This thesis reviews recent developments in the control of DEDSs and investigates the use of Petri nets in the design of supervisory controllers. The scheduling of exclusive use of resources is investigated, an efficient Petri net based scheduling algorithm is designed, and a re-configurable controller is proposed. To enable the analysis and control of large and complex DEDSs, an object-oriented C++ software toolkit was developed and used to implement a Petri net analysis tool, Petri net scheduling and control algorithms. Finally, the methodology was applied to two industrial DEDSs: a prototype can-sorting machine developed by Eurotherm Controls Ltd., and a semiconductor testing plant belonging to SGS Thomson Microelectronics Ltd.
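A minimal sketch of the place/transition structure and firing rule on which Petri net analysis and scheduling tools of this kind are built; the class and method names are illustrative, not those of the thesis's C++ toolkit:

```python
from dataclasses import dataclass, field

@dataclass
class PetriNet:
    """Minimal place/transition net: each transition maps place names to the
    tokens it consumes (pre) and produces (post)."""
    marking: dict                               # place -> token count
    pre: dict = field(default_factory=dict)     # transition -> {place: tokens consumed}
    post: dict = field(default_factory=dict)    # transition -> {place: tokens produced}

    def enabled(self, t):
        return all(self.marking.get(p, 0) >= n for p, n in self.pre[t].items())

    def fire(self, t):
        if not self.enabled(t):
            raise ValueError(f"transition {t!r} is not enabled")
        for p, n in self.pre[t].items():
            self.marking[p] -= n
        for p, n in self.post[t].items():
            self.marking[p] = self.marking.get(p, 0) + n

# A resource shared exclusively between activities: 'start' is enabled only
# while the resource token is present, which is the basis of such schedulers.
net = PetriNet(marking={"idle": 1, "busy": 0, "resource": 1},
               pre={"start": {"idle": 1, "resource": 1}, "stop": {"busy": 1}},
               post={"start": {"busy": 1}, "stop": {"idle": 1, "resource": 1}})
net.fire("start")
print(net.marking)   # {'idle': 0, 'busy': 1, 'resource': 0}
```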
Reconstructing the past? Low German and the creating of regional identity in public language display
Abstract:
This article deals with language contact between a dominant standard language - German - and a lesser-used variety - Low German - in a situation in which the minoritised language is threatened by language shift and language loss. It analyses the application of Low German in forms of public language display and the self-presentation of the community in tourism brochures, focusing on bilingual linguistic practices on the one hand and on underlying discourses on the other. It reveals that top-down and bottom-up approaches to implementing Low German in public language display show a remarkable homogeneity, thus creating a regional 'brand'. The article asks whether a raised level of visibility will in itself guarantee better chances for linguistic maintenance and survival of the threatened language. © 2011 Taylor & Francis.
Abstract:
Since 1988, quasi-markets have been introduced into many areas of social policy in the UK; the NHS internal market is one example. Markets operate by price signals. The NHS internal market, if it is to operate efficiently, requires purchasers and providers to respond to price signals. The research hypothesis is that cost accounting methods can be developed to enable healthcare contracts to be priced on a cost basis in a manner which will facilitate the achievement of economic efficiency in the NHS internal market. Surveys of hospitals in 1991 and 1994 established the cost methods adopted in deriving the prices for healthcare contracts in the first year of the market and three years on. An in-depth view of the costing-for-pricing process was gained through case studies. Hospitals had inadequate cost information on which to price healthcare contracts at the inception of the internal market: prices did not reflect the relative performance of healthcare providers sufficiently closely to enable the market's espoused efficiency aims to be achieved. Price variations were often due to differing costing approaches rather than to efficiency. Furthermore, price comparisons were often meaningless because of inadequate definition of the services (products). In April 1993, the NHS Executive issued guidance on costing for contracting to all NHS providers in an attempt to improve the validity of price comparisons between alternative providers. The case studies and the 1994 survey show that although price comparison has improved, considerable problems remain. Consistency is not assured, and the problem of adequate product definition is still to be solved. Moreover, the case studies clearly highlight the mismatch between rigid, full-cost pricing rules and both the financial management considerations at local level and the emerging internal market(s). Incentives exist to cost-shift, and healthcare prices can easily be manipulated. In the search for a new health policy paradigm to replace traditional bureaucratic provision, cost-based pricing cannot be used to ensure a more efficient allocation of healthcare resources.
Abstract:
This thesis is concerned with the investigation, by nuclear magnetic resonance spectroscopy, of the molecular interactions occurring in mixtures of benzene and cyclohexane to which either chloroform or deutero-chloroform has been added. The effect of the added polar molecule on the liquid structure has been studied using spin-lattice relaxation time, 1H chemical shift, and nuclear Overhauser effect measurements. The main purpose of the work has been to validate a model for molecular interaction involving local ordering of benzene around chloroform. A chemical method for removing dissolved oxygen from samples has been developed to encompass a number of types of sample, including quantitative mixtures, and its superiority over conventional deoxygenation techniques is shown. A set of spectrometer conditions, the use of which produces the minimal variation in peak height in the steady state, is presented. To separate the general diluting effects of deutero-chloroform from its effects due to the production of local order, a series of mixtures involving carbon tetrachloride, instead of deutero-chloroform, has been used as non-interacting references. The effect of molecular interaction is shown to be explainable using a solvation model, whereas an approach involving 1:1 complex formation is shown not to account for the observations. It is calculated that each solvation shell, based on deutero-chloroform, contains about twelve molecules of benzene or cyclohexane. The equations produced to account for the T1 variations have been adapted to account for the 1H chemical shift variations in the same system. The shift measurements are shown to substantiate the solvent cage model with a cage capacity of twelve molecules around each chloroform molecule. Nuclear Overhauser effect data have been analysed quantitatively in a manner consistent with the solvation model. The results show that discrete shells only exist when the mole fraction of deutero-chloroform is below about 0.08.
Abstract:
With the ability to collect and store increasingly large datasets on modern computers comes the need to process the data in a way that is useful to a geostatistician or application scientist. Although the storage requirements only scale linearly with the number of observations in the dataset, the computational complexity in terms of memory and speed scales quadratically and cubically, respectively, for likelihood-based geostatistics. Various methods have been proposed and are extensively used in an attempt to overcome these complexity issues. This thesis introduces a number of principled techniques for treating large datasets, with an emphasis on three main areas: reduced-complexity covariance matrices, sparsity in the covariance matrix, and parallel algorithms for distributed computation. These techniques are presented individually, but it is also shown how they can be combined to produce techniques for further improving computational efficiency.
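The cubic cost referred to above comes from factorising the n × n covariance matrix inside the Gaussian-process log-likelihood; a minimal NumPy/SciPy sketch of that baseline computation, in which the squared-exponential covariance and the parameter names are assumptions rather than the thesis's models:

```python
import numpy as np
from scipy.spatial.distance import cdist

def gp_log_likelihood(X, y, variance, length_scale, nugget):
    """Log-likelihood for likelihood-based geostatistics (a Gaussian process).
    Building K costs O(n^2) memory; the Cholesky factorisation costs O(n^3)
    time, which is the bottleneck the thesis addresses."""
    n = len(y)
    d = cdist(X, X)                                             # pairwise distances
    K = variance * np.exp(-0.5 * (d / length_scale) ** 2) + nugget * np.eye(n)
    L = np.linalg.cholesky(K)                                   # O(n^3)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))         # K^{-1} y via triangular solves
    log_det = 2.0 * np.sum(np.log(np.diag(L)))
    return -0.5 * (y @ alpha + log_det + n * np.log(2.0 * np.pi))
```

Reduced-complexity covariances, sparsity, and distributed computation each attack a different term of this cost profile.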
Abstract:
The aims of the project were twofold: 1) to investigate classification procedures for remotely sensed digital data, in order to develop modifications to existing algorithms and propose novel classification procedures; and 2) to investigate and develop algorithms for contextual enhancement of classified imagery in order to increase classification accuracy. The following classifiers were examined: box, decision tree, minimum distance, and maximum likelihood. In addition, the following algorithms were developed during the course of the research: deviant distance, look-up table, and an automated decision tree classifier using expert systems technology. Clustering techniques for unsupervised classification were also investigated. The contextual enhancements investigated were: mode filters, small-area replacement and Wharton's CONAN algorithm. Additionally, methods for noise- and edge-based declassification and contextual reclassification, non-probabilistic relaxation, and relaxation based on Markov chain theory were developed. The advantages of per-field classifiers and Geographical Information Systems were investigated. The conclusions presented suggest suitable combinations of classifier and contextual enhancement, given user accuracy requirements and time constraints. These were then tested for validity using a different data set. A brief examination of the utility of the recommended contextual algorithms for reducing the effects of data noise was also carried out.
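Of the classifiers listed, the minimum distance classifier is the simplest to state; a small sketch under assumed array shapes (pixels as rows of band values), not the project's own implementation:

```python
import numpy as np

def train_minimum_distance(pixels, labels):
    """Mean spectral vector per class from labelled training pixels
    (pixels: shape (n, bands); labels: shape (n,))."""
    return {c: pixels[labels == c].mean(axis=0) for c in np.unique(labels)}

def classify_minimum_distance(pixels, class_means):
    """Assign each pixel to the class whose mean spectrum is nearest (Euclidean)."""
    classes = list(class_means)
    means = np.stack([class_means[c] for c in classes])          # (k, bands)
    d = np.linalg.norm(pixels[:, None, :] - means[None], axis=2) # (n, k)
    return np.array(classes)[d.argmin(axis=1)]
```

Contextual enhancements such as mode filtering would then operate on the resulting label image rather than on the raw spectra.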
Abstract:
This study investigated the variability of response associated with various perimetric techniques, with the aim of improving the clinical interpretation of automated static threshold perimetry. Evaluation of a third generation of perimetric threshold algorithms (SITA) demonstrated a reduction in test duration of approximately 50%, both in normal subjects and in glaucoma patients. SITA produced a slightly higher, but clinically insignificant, Mean Sensitivity than the previous generations of algorithms. This was associated with a decreased between-subject variability in sensitivity and, hence, lower confidence intervals for normality. In glaucoma, the SITA algorithms gave rise to more statistically significant visual field defects and a similar between-visit repeatability to the Full Threshold and FASTPAC algorithms. The higher estimated sensitivity observed with SITA compared to Full Threshold and FASTPAC was not attributed to a reduction in the fatigue effect. The investigation of a novel method of maintaining patient fixation, a roving fixation target which paused immediately prior to the stimulus presentation, revealed a greater degree of fixational instability with the roving fixation target compared to the conventional static fixation target. Previous experience with traditional white-on-white perimetry did not eradicate the learning effect in short-wavelength automated perimetry (SWAP) in a group of ocular hypertensive patients. The learning effect was smaller in an experienced group of patients than in a naive group, but was still large enough to require that patients undertake a series of at least three familiarisation tests with SWAP.
Abstract:
INTAMAP is a Web Processing Service for the automatic spatial interpolation of measured point data. Requirements were (i) using open standards for spatial data such as developed in the context of the Open Geospatial Consortium (OGC), (ii) using a suitable environment for statistical modelling and computation, and (iii) producing an integrated, open source solution. The system couples an open-source Web Processing Service (developed by 52°North), accepting data in the form of standardised XML documents (conforming to the OGC Observations and Measurements standard) with a computing back-end realised in the R statistical environment. The probability distribution of interpolation errors is encoded with UncertML, a markup language designed to encode uncertain data. Automatic interpolation needs to be useful for a wide range of applications and the algorithms have been designed to cope with anisotropy, extreme values, and data with known error distributions. Besides a fully automatic mode, the system can be used with different levels of user control over the interpolation process.
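A hedged sketch of how a client might hand point observations to such a service over HTTP; the endpoint URL is a placeholder, and the WPS Execute envelope required by the real service is deliberately omitted:

```python
import requests  # generic HTTP client; not part of the INTAMAP code base

WPS_URL = "https://example.org/intamap/wps"   # hypothetical endpoint, not the real service

# The measured point data, encoded as an OGC Observations & Measurements
# XML document (schema details omitted here).
with open("observations.xml", "rb") as f:
    om_document = f.read()

# In practice the O&M document is embedded in a WPS Execute request that names
# the interpolation process; this sketch simply posts the payload as-is.
response = requests.post(WPS_URL, data=om_document,
                         headers={"Content-Type": "text/xml"})
response.raise_for_status()
print(response.text)   # interpolated field plus UncertML-encoded uncertainty
```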
Abstract:
The study here highlights the potential that analytical methods based on Knowledge Discovery in Databases (KDD) methodologies have to aid both the resolution of unstructured marketing/business problems and the process of scholarly knowledge discovery. The authors present and discuss the application of KDD in these situations prior to the presentation of an analytical method based on fuzzy logic and evolutionary algorithms, developed to analyze marketing databases and uncover relationships among variables. A detailed implementation on a pre-existing data set illustrates the method. © 2012 Published by Elsevier Inc.
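As a toy illustration of the kind of machinery involved, combining fuzzy memberships with an evolutionary search, the sketch below evolves the parameters of a single fuzzy rule over two hypothetical marketing variables; the variable names, fitness measure, and operators are assumptions, not the authors' published method:

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular fuzzy membership rising from a to a peak at b and falling to c."""
    return np.clip(np.minimum((x - a) / (b - a + 1e-9),
                              (c - x) / (c - b + 1e-9)), 0.0, 1.0)

def rule_support(params, spend, churn):
    """Degree to which 'IF spend is LOW THEN churn' holds: mean product of the
    antecedent membership and the observed (0/1) outcome."""
    a, b, c = np.sort(params)
    return float(np.mean(triangular(spend, a, b, c) * churn))

def evolve(spend, churn, pop_size=40, generations=60, seed=0):
    """Toy evolutionary search over the membership parameters of one fuzzy rule:
    truncation selection plus Gaussian mutation."""
    rng = np.random.default_rng(seed)
    lo, hi = spend.min(), spend.max()
    pop = rng.uniform(lo, hi, size=(pop_size, 3))
    for _ in range(generations):
        fitness = np.array([rule_support(p, spend, churn) for p in pop])
        parents = pop[np.argsort(fitness)[-pop_size // 2:]]      # keep the best half
        children = parents + rng.normal(0.0, 0.05 * (hi - lo), parents.shape)
        pop = np.vstack([parents, children])
    return np.sort(max(pop, key=lambda p: rule_support(p, spend, churn)))

# Synthetic example data (hypothetical marketing variables).
rng = np.random.default_rng(1)
spend = rng.gamma(2.0, 50.0, size=500)
churn = ((spend < 60) & (rng.random(500) < 0.7)).astype(float)
print(evolve(spend, churn))   # fitted (a, b, c) of the LOW-spend membership
```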
Abstract:
The standard reference clinical score quantifying average Parkinson's disease (PD) symptom severity is the Unified Parkinson's Disease Rating Scale (UPDRS). At present, UPDRS is determined by the subjective clinical evaluation of the patient's ability to adequately cope with a range of tasks. In this study, we extend recent findings that UPDRS can be objectively assessed to clinically useful accuracy using simple, self-administered speech tests, without requiring the patient's physical presence in the clinic. We apply a wide range of known speech signal processing algorithms to a large database (approx. 6000 recordings from 42 PD patients, recruited to a six-month, multi-centre trial) and propose a number of novel, nonlinear signal processing algorithms which reveal pathological characteristics in PD more accurately than existing approaches. Robust feature selection algorithms select the optimal subset of these algorithms, which is fed into non-parametric regression and classification algorithms, mapping the signal processing algorithm outputs to UPDRS. We demonstrate rapid, accurate replication of the UPDRS assessment with clinically useful accuracy (about 2 UPDRS points difference from the clinicians' estimates, p < 0.001). This study supports the viability of frequent, remote, cost-effective, objective, accurate UPDRS telemonitoring based on self-administered speech tests. This technology could facilitate large-scale clinical trials into novel PD treatments.
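A compact sketch of the final stage of such a pipeline, mapping per-recording speech features to UPDRS with off-the-shelf feature selection and non-parametric regression; the synthetic data, feature count, and estimator choices are placeholders, not the study's algorithms or dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Placeholder data: one row of dysphonia measures per recording and a
# clinician-assessed UPDRS score; shapes and values are synthetic.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))
y = rng.uniform(0.0, 100.0, size=200)

model = make_pipeline(
    SelectKBest(score_func=f_regression, k=10),                # stand-in for robust feature selection
    RandomForestRegressor(n_estimators=300, random_state=0),   # non-parametric regression
)
mae = -cross_val_score(model, X, y, cv=5,
                       scoring="neg_mean_absolute_error").mean()
print(f"cross-validated MAE: {mae:.1f} UPDRS points")
```

On real feature/UPDRS pairs, the cross-validated mean absolute error plays the role of the "UPDRS points difference from the clinicians' estimates" quoted in the abstract.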
Abstract:
In this paper, we propose a resource allocation scheme to minimize transmit power for multicast orthogonal frequency division multiple access systems. The proposed scheme allows users to have different symbol error rates (SER) across subcarriers and guarantees an average bit error rate and transmission rate for all users. We first provide an algorithm to determine the optimal bits and target SER on each subcarrier. Because the worst-case complexity of the optimal algorithm is exponential, we further propose a suboptimal algorithm that separately assigns bits and adjusts the SER with lower complexity. Numerical results show that the proposed algorithm can effectively improve the performance of multicast orthogonal frequency division multiple access systems, and that the performance of the suboptimal algorithm is close to that of the optimal one. Copyright © 2012 John Wiley & Sons, Ltd.
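For orientation, a classic greedy (Hughes-Hartogs-style) bit-loading sketch that minimises transmit power for a single user is given below; the power model, SNR-gap constant, and parameter names are assumptions, and the paper's joint per-subcarrier SER adjustment for multicast users is not reproduced here:

```python
import numpy as np

def required_power(bits, gain, gamma=9.8):
    """Transmit power needed to carry `bits` bits on a subcarrier with channel
    gain `gain`; gamma is an SNR-gap-style constant standing in for the target
    error-rate requirement (illustrative model only)."""
    return gamma * (2.0 ** bits - 1.0) / gain

def greedy_bit_loading(gains, target_bits, max_bits_per_carrier=8):
    """Greedy loading: repeatedly add one bit to the subcarrier whose
    incremental power cost is smallest until the rate target is met."""
    n = len(gains)
    bits = np.zeros(n, dtype=int)
    power = np.zeros(n)
    for _ in range(target_bits):
        cost = np.full(n, np.inf)
        for k in range(n):
            if bits[k] < max_bits_per_carrier:
                cost[k] = required_power(bits[k] + 1, gains[k]) - power[k]
        k = int(np.argmin(cost))
        bits[k] += 1
        power[k] += cost[k]
    return bits, power.sum()

gains = np.random.rand(16) + 0.1                     # per-subcarrier channel gains
bits, total_power = greedy_bit_loading(gains, target_bits=48)
print(bits, total_power)
```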