51 results for "the SIMPLE algorithm"


Relevance: 90.00%

Abstract:

The generative topographic mapping (GTM) model was introduced by Bishop et al. (1998, Neural Comput. 10(1), 215-234) as a probabilistic reformulation of the self-organizing map (SOM). It offers a number of advantages compared with the standard SOM, and has already been used in a variety of applications. In this paper we report on several extensions of the GTM, including an incremental version of the EM algorithm for estimating the model parameters, the use of local subspace models, extensions to mixed discrete and continuous data, semi-linear models which permit the use of high-dimensional manifolds whilst avoiding computational intractability, Bayesian inference applied to hyper-parameters, and an alternative framework for the GTM based on Gaussian processes. All of these developments directly exploit the probabilistic structure of the GTM, thereby allowing the underlying modelling assumptions to be made explicit. They also highlight the advantages of adopting a consistent probabilistic framework for the formulation of pattern recognition algorithms.
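For readers unfamiliar with the model, the density at the heart of the GTM can be written as follows (the standard batch formulation from the cited paper; the extensions listed above modify this basic form):

```latex
% GTM density: a uniform mixture over K latent grid points z_k, each
% mapped into D-dimensional data space by y(z; W) = W phi(z) and
% blurred by an isotropic Gaussian with inverse variance beta.
p(\mathbf{x} \mid \mathbf{W}, \beta)
  = \frac{1}{K} \sum_{k=1}^{K}
    \left( \frac{\beta}{2\pi} \right)^{D/2}
    \exp\!\left( -\frac{\beta}{2}\,
      \bigl\lVert \mathbf{W}\boldsymbol{\phi}(\mathbf{z}_k) - \mathbf{x} \bigr\rVert^{2} \right)
```

Because this is a constrained Gaussian mixture, W and beta can be fitted by EM; the incremental variant mentioned above updates the sufficient statistics one data point at a time rather than after a full pass through the data.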

Relevance: 90.00%

Abstract:

Most traditional methods for extracting the relationships between two time series are based on cross-correlation. In a non-linear non-stationary environment, these techniques are not sufficient. We show in this paper how to use hidden Markov models (HMMs) to identify the lag (or delay) between different variables for such data. We first present a method using maximum likelihood estimation and propose a simple algorithm which is capable of identifying associations between variables. We also adopt an information-theoretic approach and develop a novel procedure for training HMMs to maximise the mutual information between delayed time series. Both methods are successfully applied to real data. We model the oil drilling process with HMMs and estimate a crucial parameter, namely the lag for return.
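As a rough, model-free illustration of scoring candidate delays by an information criterion (a histogram-based baseline of our own, not the authors' HMM training procedure):

```python
import numpy as np

def mutual_information(a, b, bins=16):
    """Histogram estimate of I(a; b) in nats."""
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

def estimate_lag(x, y, max_lag):
    """Return the delay d in [0, max_lag] maximising I(x_t ; y_{t+d})."""
    scores = [mutual_information(x[: len(x) - d] if d else x, y[d:])
              for d in range(max_lag + 1)]
    return int(np.argmax(scores))
```

Unlike plain cross-correlation, the MI score is sensitive to non-linear dependence between the delayed series, which is why an information-theoretic criterion suits the non-linear, non-stationary setting described above.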

Relevance: 90.00%

Abstract:

This is the first paper to examine the microstructure of the Irish Stock Market empirically, and is motivated by the adoption, on June 7th, of Xetra, the modern pan-European auction trading system. Prior to this the exchange utilized an antiquated floor-based system. This change was an important event for the market, as a rich literature suggests that the trading system exerts a strong influence over the behavior of security returns. We apply the ICSS algorithm of Inclan and Tiao (1994) to discover whether the change of trading system caused a shift in unconditional volatility at the time Xetra was introduced. Because the trading mechanism can influence volatility in a number of ways, we also estimate the partial adjustment coefficients of the Amihud and Mendelson (1987) model prior and subsequent to the introduction of Xetra. Although we find no evidence of volatility changes associated with the introduction of Xetra, we do find evidence of an increase in the speed of adjustment (JEL: G15).
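For context, the Amihud and Mendelson (1987) model referred to above is usually written as a partial adjustment of the observed (log) price P_t towards an intrinsic value V_t that follows a random walk:

```latex
% g is the partial adjustment coefficient: g = 1 means full adjustment,
% 0 < g < 1 underadjustment, g > 1 overreaction; u_t is pricing noise.
P_t - P_{t-1} = g\,(V_t - P_{t-1}) + u_t,
\qquad V_t = V_{t-1} + e_t
```

An estimated g moving closer to full adjustment after the switch to Xetra is what the paper reads as an increase in the speed of adjustment.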

Relevance: 90.00%

Abstract:

Purpose – This paper sets out to study a production-planning problem for printed circuit board (PCB) assembly, where a company may run a number of assembly lines producing several product types in large volume. Design/methodology/approach – Pure integer linear programming models are formulated for assigning product types to assembly lines (the line assignment problem), with the objective of minimizing the total production cost. In this approach, the unrealistic assignments that hampered previous formulations are avoided by incorporating several constraints into the model. A genetic algorithm is developed to solve the line assignment problem. Findings – The procedure for applying the genetic algorithm to the problem and a numerical example illustrating the models are provided. The algorithm is also shown to be effective and efficient in dealing with the problem. Originality/value – This paper studies the line assignment problem arising in a PCB manufacturing company in which the production volume is high.
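A minimal sketch of a genetic algorithm for this kind of assignment problem is given below. The encoding, the operators and the `cost` table are illustrative assumptions of ours, not the paper's model, which also carries the constraints mentioned above:

```python
import random

def ga_line_assignment(cost, n_lines, pop_size=50, generations=200, p_mut=0.1):
    """Toy GA: a chromosome assigns each product type to one line.

    cost[p][l] is an assumed production cost of product p on line l.
    """
    n_prod = len(cost)

    def total_cost(chrom):
        return sum(cost[p][l] for p, l in enumerate(chrom))

    pop = [[random.randrange(n_lines) for _ in range(n_prod)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=total_cost)               # elitist selection
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_prod)
            child = a[:cut] + b[cut:]          # one-point crossover
            if random.random() < p_mut:        # point mutation
                child[random.randrange(n_prod)] = random.randrange(n_lines)
            children.append(child)
        pop = survivors + children
    return min(pop, key=total_cost)
```

In the paper's setting, infeasible chromosomes would additionally be penalised or repaired so that the constraints ruling out unrealistic assignments are respected.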

Relevance: 90.00%

Abstract:

A chip shooter machine for electronic component assembly has a movable feeder carrier holding components, a movable X-Y table carrying a printed circuit board (PCB), and a rotary turret with multiple assembly heads. This paper presents a hybrid genetic algorithm to optimize the sequence of component placements for a chip shooter machine. The objective of the problem is to minimize the total traveling distance of the X-Y table, and hence of the board. The genetic algorithm developed in the paper hybridizes the nearest-neighbor heuristic and an iterated swap procedure, a new improvement heuristic. We have compared the performance of the hybrid genetic algorithm with that of the approach proposed by other researchers, and have demonstrated that our algorithm is superior in terms of the distance traveled by the X-Y table or the board.
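The abstract does not spell out the iterated swap procedure; the following generic pairwise-swap improvement (our reading, not the authors' exact neighbourhood) conveys the flavour of such a heuristic:

```python
import random

def tour_length(seq, dist):
    """Total travel of the X-Y table over a placement sequence."""
    return sum(dist[seq[i]][seq[i + 1]] for i in range(len(seq) - 1))

def iterated_swap(seq, dist, iters=1000):
    """Swap two random placements; keep the swap only if it shortens
    the table's travel, otherwise revert."""
    best = tour_length(seq, dist)
    for _ in range(iters):
        i, j = random.sample(range(len(seq)), 2)
        seq[i], seq[j] = seq[j], seq[i]
        trial = tour_length(seq, dist)
        if trial < best:
            best = trial
        else:
            seq[i], seq[j] = seq[j], seq[i]   # revert the swap
    return seq, best
```

In a hybrid GA, a local improvement step like this is typically applied to offspring produced by crossover before they re-enter the population.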

Relevance: 90.00%

Abstract:

This paper presents a hybrid genetic algorithm that simultaneously optimizes the sequence of component placements on a printed circuit board and the assignment of component types to feeders for a pick-and-place machine with multiple stationary feeders, a fixed board table and a movable placement head. The objective of the problem is to minimize the total travelling distance, or the travelling time, of the placement head. The genetic algorithm developed in the paper hybridizes different search heuristics, including the nearest-neighbour heuristic, the 2-opt heuristic, and an iterated swap procedure, a new improvement heuristic. Compared with the results obtained by other researchers, the performance of the hybrid genetic algorithm is superior in terms of the distance travelled by the placement head.
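Of the listed heuristics, 2-opt is standard and easy to sketch; a minimal version for an open placement path, with `dist` an assumed distance matrix between placement positions:

```python
def two_opt(seq, dist):
    """Reverse a segment whenever that shortens the placement head's
    path; repeat until no improving reversal remains."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(seq) - 1):
            for j in range(i + 1, len(seq)):
                removed = dist[seq[i - 1]][seq[i]] + (
                    dist[seq[j]][seq[j + 1]] if j + 1 < len(seq) else 0)
                added = dist[seq[i - 1]][seq[j]] + (
                    dist[seq[i]][seq[j + 1]] if j + 1 < len(seq) else 0)
                if added < removed:
                    seq[i:j + 1] = reversed(seq[i:j + 1])
                    improved = True
    return seq
```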

Relevance: 90.00%

Abstract:

Our PhD study focuses on the role of aspectual marking in expressing simultaneity of events in Tunisian Arabic as a first language, French as a first language, and French as a second language by Tunisian learners at different acquisitional stages. We examine how the explicit markers of on-goingness qa:’id and «en train de», in Tunisian Arabic and in French respectively, are used to express this temporal relation, in competition with the simple forms: the prefixed verb form in Tunisian Arabic and the présent de l’indicatif in French. We use a complex verbal task of retelling simultaneous events sharing an interval on the time axis, based on eight videos presenting two situations happening in parallel. Two types of simultaneity are exploited: perfect simultaneity (the two situations run parallel to each other) and inclusion (one situation is framed by the other). Our informants in French and in Tunisian Arabic have two profiles: highly educated and low-educated speakers. We show that the participants' response to the retelling task varies according to their profile, and so does their use of the on-goingness devices in the expression of simultaneity. The differences observed between the two profile groups are explained by the degree to which the speakers have developed a habit of responding to tasks, a skill typically acquired during schooling. We notice overall that qa:’id and «en train de» are less frequent in the data than the simple forms. However, both are employed to play discursive roles that go beyond the proposition level. We postulate that despite the features shared by Tunisian Arabic and French in marking the concept of on-goingness, namely the presence of explicit lexical, not fully grammaticalised markers competing with other non-marked forms, the way they are used in the discourse of simultaneous events shows clear differences. We explain that «en train de» plays a more contrastive role than qa:’id, and that its use in discourse obeys a stricter rule. In the inclusion type of simultaneity, it is used to construe the 'framing' event that encloses the second event. In construing perfectly simultaneous events, when both «en train de» and the présent de l’indicatif are used, the proposition with «en train de» generally precedes the proposition with the présent de l’indicatif, and not the other way around. qa:’id obeys a similar but less strict rule, as it can be used interchangeably with the simple form regardless of the order of the propositions. The contrastive analysis of French L1 and L2 reveals learners' deviations from native speakers' use of on-goingness devices: they overgeneralise «en train de» and apply different rules to the interaction of the marked and unmarked forms in discourse. Learners do not master its role in discourse even at advanced stages of acquisition, despite its possible emergence around the basic and intermediate varieties. We conclude that native speakers' use of «en train de» involves mastering its role at the macro-structure level. This feature, not explicitly available to learners in the input, may persistently challenge L2 acquisition of the periphrasis.

Relevance: 90.00%

Abstract:

Magnetoencephalography (MEG) is a non-invasive brain imaging technique with the potential for very high temporal and spatial resolution of neuronal activity. The main stumbling block for the technique has been that the estimation of a neuronal current distribution, based on sensor data outside the head, is an inverse problem with an infinity of possible solutions. Many inversion techniques exist, all using different a priori assumptions in order to reduce the number of possible solutions. Although all techniques can be thoroughly tested in simulation, implicit in the simulations are the experimenter's own assumptions about realistic brain function. To date, the only way to test the validity of inversions based on real MEG data has been through direct surgical validation, or through comparison with invasive primate data. In this work, we constructed a null hypothesis that the reconstruction of neuronal activity contains no information on the distribution of the cortical grey matter. To test this, we repeatedly compared rotated sections of grey matter with a beamformer estimate of neuronal activity to generate a distribution of mutual information values. The significance of the comparison between the un-rotated anatomical information and the electrical estimate was subsequently assessed against this distribution. We found that there was significant (P < 0.05) anatomical information contained in the beamformer images across a number of frequency bands. Based on the limited data presented here, we can say that the assumptions behind the beamformer algorithm are not unreasonable for the visual-motor task investigated.
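A hedged sketch of the rotation test as described (the MI estimator, in-plane rotations via scipy, and the array names are illustrative assumptions; the study used rotated sections of the grey-matter distribution):

```python
import numpy as np
from scipy import ndimage

def rotation_null_test(grey, beamformer, n_rotations=200, bins=16, seed=0):
    """Compare MI(grey, beamformer) against a null distribution built
    from randomly rotated copies of the grey-matter map."""
    rng = np.random.default_rng(seed)

    def mutual_info(a, b):
        joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        p = joint / joint.sum()
        px = p.sum(axis=1, keepdims=True)
        py = p.sum(axis=0, keepdims=True)
        nz = p > 0
        return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

    observed = mutual_info(grey, beamformer)
    null = np.array([
        mutual_info(ndimage.rotate(grey, rng.uniform(0, 360),
                                   reshape=False, order=1), beamformer)
        for _ in range(n_rotations)])
    return observed, float((null >= observed).mean())   # MI and p-value
```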

Relevance: 90.00%

Abstract:

A multi-scale model of edge coding based on normalized Gaussian derivative filters successfully predicts perceived scale (blur) for a wide variety of edge profiles [Georgeson, M. A., May, K. A., Freeman, T. C. A., & Hesse, G. S. (in press). From filters to features: Scale-space analysis of edge and blur coding in human vision. Journal of Vision]. Our model spatially differentiates the luminance profile, half-wave rectifies the 1st derivative, and then differentiates twice more, to give the 3rd derivative of all regions with a positive gradient. This process is implemented by a set of Gaussian derivative filters with a range of scales. Peaks in the inverted normalized 3rd derivative across space and scale indicate the positions and scales of the edges. The edge contrast can be estimated from the height of the peak. The model provides a veridical estimate of the scale and contrast of edges that have a Gaussian integral profile. Therefore, since scale and contrast are independent stimulus parameters, the model predicts that the perceived value of either of these parameters should be unaffected by changes in the other. This prediction was found to be incorrect: reducing the contrast of an edge made it look sharper, and increasing its scale led to a decrease in the perceived contrast. Our model can account for these effects when the simple half-wave rectifier after the 1st derivative is replaced by a smoothed threshold function described by two parameters. For each subject, one pair of parameters provided a satisfactory fit to the data from all the experiments presented here and in the accompanying paper [May, K. A. & Georgeson, M. A. (2007). Added luminance ramp alters perceived edge blur and contrast: A critical test for derivative-based models of edge coding. Vision Research, 47, 1721-1731]. Thus, when we allow for the visual system's insensitivity to very shallow luminance gradients, our multi-scale model can be extended to edge coding over a wide range of contrasts and blurs. © 2007 Elsevier Ltd. All rights reserved.
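A minimal 1-D sketch of the derivative cascade just described is given below. The fine-scale sigma, the sigma-squared normalisation, and the omission of the smoothed-threshold stage are simplifying assumptions of ours, not the published model's exact parameters:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def find_edge(lum, sigmas, sigma1=1.0):
    """1st derivative at a fine scale, half-wave rectification, then
    two further derivatives at each scale; the peak of the inverted,
    scale-normalised 3rd derivative gives edge position and scale."""
    g1 = gaussian_filter1d(lum.astype(float), sigma1, order=1)
    rect = np.maximum(g1, 0.0)              # half-wave rectifier
    stack = np.array([-(s ** 2) * gaussian_filter1d(rect, s, order=2)
                      for s in sigmas])     # inverted 3rd derivative
    k, x = np.unravel_index(np.argmax(stack), stack.shape)
    return x, sigmas[k], stack[k, x]        # position, scale, peak height
```

Replacing `np.maximum(g1, 0.0)` with a smooth two-parameter threshold, as the abstract describes, is what lets the model reproduce the observed interactions between contrast and perceived blur.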

Relevance: 90.00%

Abstract:

Marr's work offered guidelines on how to investigate vision (the theory - algorithm - implementation distinction), as well as specific proposals on how vision is done. Many of the latter have inevitably been superseded, but the approach was inspirational and remains so. Marr saw the computational study of vision as tightly linked to psychophysics and neurophysiology, but the last twenty years have seen some weakening of that integration. Because feature detection is a key stage in early human vision, we have returned to basic questions about representation of edges at coarse and fine scales. We describe an explicit model in the spirit of the primal sketch, but tightly constrained by psychophysical data. Results from two tasks (location-marking and blur-matching) point strongly to the central role played by second-derivative operators, as proposed by Marr and Hildreth. Edge location and blur are evaluated by finding the location and scale of the Gaussian-derivative 'template' that best matches the second-derivative profile ('signature') of the edge. The system is scale-invariant, and accurately predicts blur-matching data for a wide variety of 1-D and 2-D images. By finding the best-fitting scale, it implements a form of local scale selection and circumvents the knotty problem of integrating filter outputs across scales. [Supported by BBSRC and the Wellcome Trust]

Relevance: 90.00%

Abstract:

A novel direct integration technique for the Manakov-PMD equation for the simulation of polarisation mode dispersion (PMD) in optical communication systems is demonstrated and shown to be numerically as efficient as the commonly used coarse-step method. The main advantage of direct integration of the Manakov-PMD equation over the coarse-step method is the higher accuracy of the PMD model. The new algorithm uses precomputed M(ω) matrices to increase the computational speed compared to a full integration, without loss of accuracy. Simulation results for the probability density function (PDF) of the differential group delay (DGD) and the autocorrelation function (ACF) of the polarisation dispersion vector, for varying numbers of precomputed M(ω) matrices, are compared to analytical models and to results from the coarse-step method. It is shown that the coarse-step method reproduces the statistical properties of PMD in optical fibres significantly less accurately than direct integration of the Manakov-PMD equation.
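For contrast, the coarse-step baseline can be sketched as a concatenation of randomly oriented birefringent sections, with the DGD read off from the eigenvalues of the Jones matrix at two closely spaced frequencies. All parameter values below are illustrative, and this is the benchmark method, not the paper's direct integration:

```python
import numpy as np

def coarse_step_dgd(n_sections=50, tau_section=0.5e-12, domega=1e9,
                    trials=2000, seed=0):
    """Monte Carlo DGD statistics from a coarse-step fibre model."""
    rng = np.random.default_rng(seed)

    def jones(w, thetas):
        J = np.eye(2, dtype=complex)
        for theta in thetas:
            # random rotation of the local birefringence axes
            R = np.array([[np.cos(theta), -np.sin(theta)],
                          [np.sin(theta),  np.cos(theta)]])
            # linear birefringent section with fixed per-section DGD
            B = np.diag([np.exp(0.5j * w * tau_section),
                         np.exp(-0.5j * w * tau_section)])
            J = B @ R @ J
        return J

    dgds = np.empty(trials)
    for t in range(trials):
        thetas = rng.uniform(0.0, np.pi, n_sections)
        M = jones(domega, thetas) @ np.linalg.inv(jones(0.0, thetas))
        lam1, lam2 = np.linalg.eigvals(M)
        dgds[t] = abs(np.angle(lam1 / lam2)) / domega   # DGD estimate
    return dgds    # its histogram should approach a Maxwellian PDF
```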

Relevance: 90.00%

Abstract:

This review suggests an evidence-based algorithm for sequential testing in infective endocarditis. It discusses blood culture and the merits and drawbacks of serology in making the diagnosis. Newer techniques are briefly reviewed. The proposed algorithm will complement the Duke criteria in clinical practice. © 2003 The British Infection Society. Published by Elsevier Science Ltd. All rights reserved.

Relevance: 90.00%

Abstract:

We report the effect of a range of monovalent sodium salts on the molecular equilibrium swelling of a simple synthetic microphase separated poly(methyl methacrylate)-block-poly(2-(diethylamino)ethyl methacrylate)-block-poly(methyl methacrylate) (PMMA88-b-PDEA223-b-PMMA88) pH-responsive hydrogel. Sodium acetate, sodium chloride, sodium bromide, sodium iodide, sodium nitrate and sodium thiocyanate were selected for study at controlled ionic strength and pH; all salts are taken from the Hofmeister series (HS). The influence of the anions on the expansion of the hydrogel was found to follow the reverse order of the classical HS. The expansion ratio of the gel measured in solutions containing the simple sodium halide salts (NaCl, NaBr, and NaI) was found to be strongly related to parameters which describe the interaction of the ion with water: surface charge density, viscosity coefficient, and entropy of hydration. A global study which also included nonspherical ions (NaAce, NaNO3 and NaSCN) showed the strongest correlation with the viscosity coefficient. Our results are interpreted in terms of the Collins model [1], where larger ions have more mobile water in the first hydration cage immediately surrounding the gel, therefore making them more adhesive to the surface of the stationary phase of the gel and ultimately reducing the level of expansion.

Relevance: 90.00%

Abstract:

A number of researchers have investigated the application of neural networks to visual recognition, with much of the emphasis placed on exploiting the network's ability to generalise. However, despite the benefits of such an approach, it is not at all obvious how to develop networks capable of recognising objects subject to changes in rotation, translation and viewpoint. In this study, we suggest that a possible solution to this problem can be found by studying aspects of visual psychology and, in particular, perceptual organisation. For example, it appears that grouping together lines based upon perceptually significant features can facilitate viewpoint-independent recognition. The work presented here identifies simple grouping measures based on parallelism and connectivity, and shows how multi-layer perceptrons (MLPs) can be trained to detect and determine the perceptual significance of any group presented. In this way, it is shown how MLPs trained via backpropagation to perform individual grouping tasks can be brought together into a novel, large-scale network capable of determining the perceptual significance of the whole input pattern. Finally, the applicability of such significance values for recognition is investigated, and results indicate that both the MLP and the Kohonen Feature Map can be trained to recognise simple shapes described in terms of perceptual significances. This study has also provided an opportunity to investigate aspects of the backpropagation algorithm, particularly its ability to generalise; we report the results of various generalisation tests. In applying the backpropagation algorithm to certain problems, we found a deficiency in performance with the standard learning algorithm. An improvement in performance could, however, be obtained when suitable modifications were made to the algorithm. The modifications and consequent results are reported here.
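For readers who want the baseline algorithm in concrete form, here is a minimal one-hidden-layer perceptron trained by standard backpropagation; a generic illustration, not the grouping networks or the modified learning rule reported in the study:

```python
import numpy as np

def train_mlp(X, y, hidden=8, lr=0.1, epochs=5000, seed=0):
    """One-hidden-layer MLP, sigmoid units, squared-error loss.
    X: (n, d) inputs; y: (n, 1) targets in [0, 1]."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1));          b2 = np.zeros(1)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        h = sig(X @ W1 + b1)                  # forward pass
        out = sig(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)   # output-layer delta
        d_h = (d_out @ W2.T) * h * (1 - h)    # backpropagated hidden delta
        W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)
    return W1, b1, W2, b2

# Example: the network learns XOR, a task a single layer cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
params = train_mlp(X, y)
```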

Relevance: 90.00%

Abstract:

This work sets out to evaluate the potential benefits and pitfalls in using a priori information to help solve the Magnetoencephalographic (MEG) inverse problem. In chapter one the forward problem in MEG is introduced, together with a scheme that demonstrates how a priori information can be incorporated into the inverse problem. Chapter two contains a literature review of techniques currently used to solve the inverse problem. Emphasis is put on the kind of a priori information used by each of these techniques and the ease with which additional constraints can be applied. The formalism of the FOCUSS algorithm is shown to allow the incorporation of a priori information in an insightful and straightforward manner. In chapter three it is described how anatomical constraints, in the form of a realistically shaped source space, can be extracted from a subject’s Magnetic Resonance Image (MRI). The use of such constraints relies on accurate co-registration of the MEG and MRI co-ordinate systems. Variations of the two main co-registration approaches, based on fiducial markers or on surface matching, are described, and the accuracy and robustness of a surface-matching algorithm are evaluated. Figures of merit introduced in chapter four are shown to give insight into the limitations of a typical measurement set-up and the potential value of a priori information. It is shown in chapter five that constrained dipole fitting and FOCUSS outperform unconstrained dipole fitting when data with low SNR are used; however, errors in the constraints can reduce this advantage. Finally, it is demonstrated in chapter six that the results of different localisation techniques give corroborative evidence about the location and activation sequence of the human visual cortical areas underlying the first 125 ms of the visual magnetic evoked response recorded with a whole-head neuromagnetometer.
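The basic FOCUSS iteration referred to above can be sketched as a re-weighted minimum-norm loop; a simplified form in the spirit of Gorodnitsky and Rao, with regularisation and the anatomical weighting discussed in the thesis omitted:

```python
import numpy as np

def focuss(L, b, n_iter=20, x0=None, eps=1e-12):
    """FOCUSS sketch. L: lead-field matrix (sensors x sources),
    b: measured field. Each pass re-weights a minimum-norm solution
    by the previous estimate, focusing it onto few sources; a priori
    information enters through the seed x0 or additional weights."""
    x = np.ones(L.shape[1]) if x0 is None else x0.astype(float)
    for _ in range(n_iter):
        W = np.diag(x)                        # weights from last estimate
        q = np.linalg.pinv(L @ W) @ b         # weighted minimum-norm fit
        x = W @ q                             # re-weighted source estimate
        x[np.abs(x) < eps] = 0.0              # prune vanished sources
    return x
```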