899 results for the SIMPLE algorithm


Relevance: 90.00%

Abstract:

The local image representation produced by early stages of visual analysis is uninformative regarding spatially extensive textures and surfaces. We know little about the cortical algorithm used to combine local information over space, and still less about the area over which it can operate. But such operations are vital to support perception of real-world objects and scenes. Here, we deploy a novel reverse-correlation technique to measure the extent of spatial pooling for target regions of different areas placed either in the central visual field, or more peripherally. Stimuli were large arrays of micropatterns, with their contrasts perturbed individually on an interval-by-interval basis. By comparing trial-by-trial observer responses with the predictions of computational models, we show that substantial regions (up to 13 carrier cycles) of a stimulus can be monitored in parallel by summing contrast over area. This summing strategy is very different from the more widely assumed signal selection strategy (a MAX operation), and suggests that neural mechanisms representing extensive visual textures can be recruited by attention. We also demonstrate that template resolution is much less precise in the parafovea than in the fovea, consistent with recent accounts of crowding. © 2014 The Authors.
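The summation-versus-MAX comparison described above can be sketched as a toy decision model. The patch count, noise level, and increment size below are hypothetical illustration values, not the study's stimulus parameters:

```python
import random

random.seed(0)

def trial_response(contrasts, increment_idx, increment, rule):
    """Decision variable for one interval: pool the perturbed local
    contrasts either by summing over the target region or by a MAX."""
    noisy = [c + random.gauss(0, 0.05) for c in contrasts]
    for i in increment_idx:
        noisy[i] += increment
    return sum(noisy) if rule == "sum" else max(noisy)

def percent_correct(rule, n_patches=16, n_trials=2000, increment=0.02):
    """2AFC simulation: the interval with the diffuse contrast increment
    is 'chosen' when its decision variable exceeds the null interval's."""
    correct = 0
    base = [0.5] * n_patches
    targets = range(n_patches)  # increment spread over the whole region
    for _ in range(n_trials):
        signal = trial_response(base, targets, increment, rule)
        null = trial_response(base, [], 0.0, rule)
        correct += signal > null
    return correct / n_trials

# A small increment spread over many micropatterns favours summation:
print("sum:", percent_correct("sum"))
print("max:", percent_correct("max"))
```

With a diffuse increment, the summing observer accumulates signal across all patches while the MAX observer sees only a tiny shift in the extreme value, which is the qualitative signature the reverse-correlation analysis exploits.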

Relevance: 90.00%

Abstract:

The fractional Fourier transform (FrFT) is used for the solution of the diffraction integral in optics. A scanning approach is proposed for finding the optimal FrFT order. In this way, the process of diffraction computing is speeded up. The basic algorithm and the intermediate results at each stage are demonstrated.
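The scanning idea can be sketched as a coarse-to-fine search over the FrFT order. The quality metric below is a stand-in toy function (computing the actual diffraction field is outside the scope of this sketch); in practice it would be some figure of merit of the field after an FrFT of the given order:

```python
def sharpness(order):
    # Stand-in quality metric with a maximum at the (hypothetical)
    # optimal order 0.73; a real implementation would evaluate the
    # FrFT of this order and score the resulting field.
    return -(order - 0.73) ** 2

def scan_optimal_order(metric, lo=0.0, hi=2.0, coarse=0.1, fine=0.001):
    """Coarse-to-fine scan for the FrFT order maximizing a quality metric."""
    # Coarse pass over the whole admissible range of orders.
    best = lo
    a = lo
    while a <= hi:
        if metric(a) > metric(best):
            best = a
        a += coarse
    # Fine pass in the neighbourhood of the coarse optimum.
    a = max(lo, best - coarse)
    stop = min(hi, best + coarse)
    while a <= stop:
        if metric(a) > metric(best):
            best = a
        a += fine
    return best

print(round(scan_optimal_order(sharpness), 3))
```

The two-pass scan is what makes the approach fast: only a handful of expensive FrFT evaluations are needed in the coarse pass before the search is narrowed.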

Relevance: 90.00%

Abstract:

Let H be a real Hilbert space and T a maximal monotone operator on H. A well-known algorithm for solving the problem (P) “find x ∈ H such that 0 ∈ Tx”, developed by R. T. Rockafellar [16], is the proximal point algorithm. Several generalizations have been considered by various authors: the introduction of a perturbation, of a variable metric in the perturbed algorithm, of a pseudo-metric in place of the classical regularization, and so on. We summarize some of these extensions by simultaneously taking into account a pseudo-metric as regularization and a perturbation in an inexact version of the algorithm.
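A minimal sketch of the basic (unperturbed, exact) proximal point iteration, taking T = ∂|·| on H = ℝ so that the resolvent (I + cT)⁻¹ is the soft-thresholding operator; the step size c is an arbitrary illustrative choice:

```python
def resolvent_abs(y, c):
    """Resolvent (I + c*T)^(-1) for T = subdifferential of |x|:
    the soft-thresholding operator."""
    if y > c:
        return y - c
    if y < -c:
        return y + c
    return 0.0

def proximal_point(x0, c=0.5, n_iter=50):
    """Rockafellar's proximal point iteration x_{k+1} = (I + c T)^{-1} x_k."""
    x = x0
    for _ in range(n_iter):
        x = resolvent_abs(x, c)
    return x

print(proximal_point(10.0))  # converges to 0.0, the unique zero of T
```

The generalizations the abstract summarizes would modify this skeleton: an inexact version tolerates an error in each resolvent evaluation, and a pseudo-metric version replaces the identity in (I + cT)⁻¹.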

Relevance: 90.00%

Abstract:

Link quality-based rate adaptation has been widely used for IEEE 802.11 networks. However, network performance is affected by both link quality and random channel access. Selecting transmit modes for optimal link throughput can cause medium access control (MAC) throughput loss. In this paper, we investigate this issue and propose a generalised cross-layer rate adaptation algorithm, which jointly considers link quality and channel access to optimise network throughput. The objective is to examine the potential benefits of cross-layer design. An efficient analytic model is proposed to evaluate rate adaptation algorithms under dynamic channel and multi-user access environments. The proposed algorithm is compared to a link-throughput-optimisation-based algorithm. It is found that rate adaptation by optimising link-layer throughput can result in a large performance loss, which cannot be compensated for by optimising the MAC access mechanism alone. Results show that cross-layer design can achieve consistent and considerable performance gains of up to 20%, and deserves to be exploited in practical designs for IEEE 802.11 networks.
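The gap between link-optimal and MAC-optimal mode selection can be illustrated with a toy two-mode model. The rates, error rates, payload size, and the fixed per-attempt overhead below are hypothetical numbers, not the paper's analytic model:

```python
def link_throughput(rate_mbps, per):
    """Physical-layer view: goodput ignoring channel-access overhead."""
    return rate_mbps * (1.0 - per)

def mac_throughput(rate_mbps, per, payload_bits=12000, overhead_us=2000.0):
    """Cross-layer view: contention, backoff, and header overhead
    (a hypothetical 2 ms per attempt) is paid regardless of the rate."""
    airtime_us = overhead_us + payload_bits / rate_mbps  # Mbps = bits/us
    return payload_bits * (1.0 - per) / airtime_us

# Hypothetical modes: a robust low rate and a lossy high rate.
modes = [(6.0, 0.0), (54.0, 0.5)]

best_link = max(modes, key=lambda m: link_throughput(*m))
best_mac = max(modes, key=lambda m: mac_throughput(*m))
print("link-optimal rate:", best_link[0])  # 54.0
print("MAC-optimal rate:", best_mac[0])    # 6.0
```

Because the fixed access overhead dominates the airtime at high rates, the mode that maximises link goodput is not the one that maximises throughput once channel access is modelled, which is the loss the cross-layer algorithm avoids.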

Relevance: 90.00%

Abstract:

Bayesian algorithms pose a limit on the performance that learning algorithms can achieve. Natural selection should guide the evolution of information processing systems towards those limits. What can we learn from this evolution, and what properties do the intermediate stages have? While this question is too general to permit any answer, progress can be made by restricting the class of information processing systems under study. We present analytical and numerical results for the evolution of on-line algorithms for learning from examples for neural network classifiers, which may or may not include a hidden layer. The analytical results are obtained by solving a variational problem to determine the learning algorithm that leads to maximum generalization ability. Simulations using evolutionary programming, for programs that implement learning algorithms, confirm and expand the results. The principal result is not just that the evolution is towards a Bayesian limit; indeed, that limit is essentially reached. In addition, we find that evolution is driven by the discovery of useful structures or combinations of variables and operators, and that the temporal order in which such combinations are discovered is the same across different runs. The main result is that combinations that signal the surprise brought by an example always arise before combinations that serve to gauge the performance of the learning algorithm. These latter structures can be used to implement annealing schedules. The temporal ordering can also be understood analytically by carrying out the functional optimization in restricted functional spaces. We also show that there are data suggesting that the appearance of these traits follows the same temporal ordering in biological systems. © 2006 American Institute of Physics.

Relevance: 90.00%

Abstract:

Gastroesophageal reflux disease (GERD) is a common cause of chronic cough. For the diagnosis and treatment of GERD, it is desirable to quantify the temporal correlation between cough and reflux events. Cough episodes can be identified on esophageal manometric recordings as short-duration, rapid pressure rises. The present study aims at facilitating the detection of coughs by proposing an algorithm for the classification of cough events using manometric recordings. The algorithm detects cough episodes based on digital filtering, slope and amplitude analysis, and duration of the event. The algorithm has been tested on in vivo data acquired using a single-channel intra-esophageal manometric probe that comprises a miniature white-light interferometric fiber optic pressure sensor. Experimental results demonstrate the feasibility of using the proposed algorithm for identifying cough episodes based on real-time recordings using a single-channel pressure catheter. The presented work can be integrated with commercial reflux pH/impedance probes to facilitate simultaneous 24-hour ambulatory monitoring of cough and reflux events, with the ultimate goal of quantifying the temporal correlation between the two types of events.
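The slope/amplitude/duration classification can be sketched on a synthetic pressure trace. The thresholds below are illustrative placeholders, not the study's clinical values:

```python
def detect_coughs(pressure, fs=100.0, slope_thresh=50.0,
                  amp_thresh=40.0, max_duration_s=1.0):
    """Flag short-duration, rapid pressure rises as candidate cough
    events. Thresholds (mmHg/s, mmHg, s) are illustrative only."""
    events = []
    i = 1
    n = len(pressure)
    while i < n:
        slope = (pressure[i] - pressure[i - 1]) * fs
        if slope > slope_thresh:           # rapid rise found
            start = i
            peak = pressure[i]
            # follow the event until pressure returns near baseline
            while i < n and pressure[i] > pressure[start - 1] + 1.0:
                peak = max(peak, pressure[i])
                i += 1
            duration = (i - start) / fs
            if (peak - pressure[start - 1] > amp_thresh
                    and duration < max_duration_s):
                events.append(start / fs)  # event onset time in seconds
        i += 1
    return events

# Synthetic trace: baseline, one sharp 60 mmHg spike (cough-like),
# then a slow ramp (peristalsis-like, rejected by the slope test).
trace = [0.0] * 100
trace += [60.0] * 20                     # abrupt rise, 0.2 s wide
trace += [0.0] * 100
trace += [i * 0.3 for i in range(200)]   # slow rise over 2 s
print(detect_coughs(trace))              # one event, near t = 1.0 s
```

The duration and slope criteria are what separate coughs from slower esophageal pressure changes, mirroring the role of the filtering stage in the proposed algorithm.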

Relevance: 90.00%

Abstract:

Nonlinearity plays a critical role in the intra-cavity dynamics of high-pulse-energy fiber lasers. Management of the intra-cavity nonlinear dynamics is the key to increasing the output pulse energy in such laser systems. Here, we examine the impact of the order of the intra-cavity elements on the energy of the generated pulses in an all-normal-dispersion mode-locked ring fiber laser cavity. In mathematical terms, the nonlinear light dynamics in the resonator makes the operators corresponding to the action of the laser elements (active and passive fiber, out-coupler, saturable absorber) non-commuting, so the order of their appearance in the cavity matters. For a simple all-normal-dispersion ring fiber laser design with varying cavity length, we find the ordering of the cavity elements that leads to maximum output pulse energy.
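The non-commutativity argument can be illustrated with two toy scalar energy maps, one for a saturated gain element and one for a saturable absorber. All coefficients are hypothetical and the maps are drastic simplifications of the real field dynamics:

```python
def gain(e, g0=10.0, e_sat=1.0):
    """Saturated amplifier: small pulses see full gain, large ones less."""
    return e * (1.0 + g0 / (1.0 + e / e_sat))

def absorber(e, loss=0.5, e_sat=2.0):
    """Saturable absorber: transmission improves with pulse energy."""
    return e * (1.0 - loss / (1.0 + e / e_sat))

e0 = 0.5
print(gain(absorber(e0)))   # absorber first, then gain
print(absorber(gain(e0)))   # gain first, then absorber
```

Because both maps are nonlinear, the two orderings produce different output energies, which is exactly why the placement of elements around the ring affects the achievable pulse energy.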

Relevance: 90.00%

Abstract:

* The research is partly supported by the INTAS project 04-77-7173, http://www.intas.be

Relevance: 90.00%

Abstract:

The cost and limited flexibility of traditional approaches to 11kV network reinforcement threaten to constrain the uptake of low carbon technologies. Ofgem has released £500m of funding for DNOs to trial innovative techniques and share the learning with the rest of the industry. One of the techniques under study is the addition of energy storage at key substations to help with peak load lopping. This paper looks in detail at the sizing algorithm for use in the assessment of alternatives to traditional reinforcement, and investigates a method of sizing a battery for use on a network that takes into account load growth, capacity fade and battery lifecycle issues. A further complication to the analysis is the method of operation of the battery system and how this affects the depth of discharge (DoD). The proposed method is being trialled on a section of 11kV network in the Milton Keynes Central area, and the simulation results are presented in this paper.

Relevance: 90.00%

Abstract:

We propose an adaptive algorithm for solving sets of similar scheduling problems using learning technology. It is devised to combine the merits of an exact algorithm based on the mixed graph model with those of heuristics oriented towards real-world scheduling problems. The former can ensure high solution quality by means of an implicit exhaustive enumeration of the feasible schedules. The latter can be developed for certain types of problems by exploiting their peculiarities. The main idea of the learning technology is to produce effective (in performance measure) and efficient (in computational time) heuristics by adapting local decisions to the scheduling problems under consideration. Adaptation is realized at the learning stage by solving a set of sample scheduling problems with a branch-and-bound algorithm and structuring the acquired knowledge with pattern-recognition apparatus.
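Implicit enumeration with a lower bound can be illustrated on a generic single-machine weighted-completion-time instance (this is a stand-in example, not the paper's mixed-graph model):

```python
def branch_and_bound(jobs):
    """Implicit enumeration of schedules with a simple lower bound:
    cost of the fixed prefix plus, for each unscheduled job, its
    weight times its earliest possible completion time."""
    n = len(jobs)
    best = [float("inf"), None]

    def bound(prefix_cost, t, remaining):
        lb = prefix_cost
        for j in remaining:
            p, w = jobs[j]
            lb += w * (t + p)  # job j can finish no earlier than t + p_j
        return lb

    def search(prefix, prefix_cost, t, remaining):
        if not remaining:
            if prefix_cost < best[0]:
                best[0], best[1] = prefix_cost, prefix
            return
        if bound(prefix_cost, t, remaining) >= best[0]:
            return  # prune: this branch cannot beat the incumbent
        for j in remaining:
            p, w = jobs[j]
            search(prefix + [j], prefix_cost + w * (t + p), t + p,
                   remaining - {j})

    search([], 0, 0, frozenset(range(n)))
    return best[0], best[1]

# Jobs as (processing_time, weight); the WSPT order is known optimal
# for this objective, so the result is easy to check by hand.
cost, order = branch_and_bound([(3, 1), (1, 4), (2, 2), (4, 3)])
print(cost, order)  # 41 [1, 2, 3, 0]
```

The learning technology described above would sit on top of such a search, turning the branching decisions observed on sample problems into fast heuristic rules.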

Relevance: 90.00%

Abstract:

* This research was partially supported by the Latvian Science Foundation under grant No.02-86d.

Relevance: 90.00%

Abstract:

We describe a method (the BIDIMS algorithm) for displaying multivariate objects in a two-dimensional structure in which the sum of differences between the properties of objects and those of their nearest neighbors is minimal. Under this ordering, the basic regularities of the object set become evident. Moreover, such structures (tables) have high inductive power: many latent properties of objects can be predicted from their coordinates in the table. The opportunities of the method are illustrated by the two-dimensional ordering of the chemical elements; the resulting table practically coincides with Mendeleev's periodic table.
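The ordering objective can be sketched as a greedy pairwise-swap descent on a grid. This is a simplification (the actual BIDIMS procedure is not specified here), and the single numeric property per object is a toy stand-in for a multivariate description:

```python
import random

def neighbour_cost(grid, values):
    """Sum of absolute property differences between horizontally
    and vertically adjacent cells."""
    rows, cols = len(grid), len(grid[0])
    cost = 0.0
    for r in range(rows):
        for c in range(cols):
            v = values[grid[r][c]]
            if c + 1 < cols:
                cost += abs(v - values[grid[r][c + 1]])
            if r + 1 < rows:
                cost += abs(v - values[grid[r + 1][c]])
    return cost

def bidimensional_order(values, rows, cols, sweeps=50, seed=1):
    """Place objects on a grid, then accept any pairwise swap that
    lowers the total neighbour dissimilarity, until no swap helps."""
    rng = random.Random(seed)
    ids = list(range(len(values)))
    rng.shuffle(ids)
    grid = [ids[r * cols:(r + 1) * cols] for r in range(rows)]
    cells = [(r, c) for r in range(rows) for c in range(cols)]
    best_cost = neighbour_cost(grid, values)
    for _ in range(sweeps):
        improved = False
        for i in range(len(cells)):
            for j in range(i + 1, len(cells)):
                (r1, c1), (r2, c2) = cells[i], cells[j]
                grid[r1][c1], grid[r2][c2] = grid[r2][c2], grid[r1][c1]
                cost = neighbour_cost(grid, values)
                if cost < best_cost:
                    best_cost = cost
                    improved = True
                else:  # undo swaps that do not lower the cost
                    grid[r1][c1], grid[r2][c2] = grid[r2][c2], grid[r1][c1]
        if not improved:
            break
    return grid, best_cost

# Sixteen objects with one numeric property each: the 4x4 ordering
# settles into a smooth gradient, a toy analogue of a periodic table.
grid, cost = bidimensional_order(list(range(16)), 4, 4)
print(cost)
```

The inductive use of such a table follows directly: an object's unknown property can be interpolated from the properties of its grid neighbours.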

Relevance: 90.00%

Abstract:

In this article a new approach to the optimization of estimate-calculating algorithms is suggested. It can be used to find a correct algorithm of minimal complexity within the algebraic approach to pattern recognition.

Relevance: 90.00%

Abstract:

Transition P systems are computational models based on basic features of biological membranes and the observation of biochemical processes. In these models, a membrane contains a multiset of objects, which evolve according to given evolution rules. In the field of Transition P system implementation, the need has been identified to determine how long the application of the active evolution rules in a membrane will take. In addition, having time estimates for rule application makes it possible to take important decisions related to the design of hardware/software architectures. In this paper we propose a new evolution-rule application algorithm oriented towards the implementation of Transition P systems. The algorithm is sequential and has linear complexity in the number of evolution rules. Moreover, it obtains smaller execution times than the preceding algorithms, and is therefore very appropriate for implementing Transition P systems on sequential devices.
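A sequential, linear-pass rule-application step can be sketched as follows. This is a deterministic simplification: rules are applied maximally in list order, without priorities and without the nondeterminism of full Transition P systems:

```python
def apply_rules(multiset, rules):
    """One sequential evolution step: each rule (consume, produce) is
    applied as many times as the region's multiset allows, in a single
    linear pass over the rule list."""
    produced = {}
    for consume, produce in rules:
        # maximal applicability: floor of available/required per symbol
        times = min(multiset.get(sym, 0) // k for sym, k in consume.items())
        if times == 0:
            continue
        for sym, k in consume.items():
            multiset[sym] -= k * times
        for sym, k in produce.items():
            produced[sym] = produced.get(sym, 0) + k * times
    # products only become visible after the whole step, as in P systems
    for sym, k in produced.items():
        multiset[sym] = multiset.get(sym, 0) + k
    return multiset

# Region with 5 a's and 3 b's; rules: a -> bb and ab -> c.
region = {"a": 5, "b": 3}
rules = [({"a": 1}, {"b": 2}), ({"a": 1, "b": 1}, {"c": 1})]
print(apply_rules(region, rules))  # {'a': 0, 'b': 13}
```

Each rule is visited exactly once, so the step is linear in the number of rules, which is the complexity property the abstract highlights.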

Relevance: 90.00%

Abstract:

In this paper, a new method for offline handwriting recognition is presented. A robust algorithm for handwriting segmentation is described, with the help of which individual characters can be segmented from a word selected from the paragraph of handwritten text given as an image to the module. Each segmented character is then converted into a column vector of 625 values, which is fed into an advanced neural network setup stored in the form of text files. Four networks are designed, each a quadruple-layered neural network with 625 input neurons and 26 output neurons, one per character from a to z. The outputs of all four networks are fed into a genetic algorithm, developed using the concepts of correlation, which optimizes the overall network and provides recognized outputs with an accuracy of 71%.
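The data flow from segmented character to classification can be sketched as follows. The hidden-layer sizes are assumptions, and the weights are random and untrained, so the prediction here is meaningless; the sketch only shows the 625-input/26-output pipeline shape:

```python
import random

def segment_to_vector(char_image):
    """Flatten a 25x25 binary character image into the column vector
    of 625 values fed to each network."""
    return [px for row in char_image for px in row]

def forward(vector, layers):
    """Pass a vector through a small multilayer network (ReLU units;
    random, untrained weights, purely to show the data flow)."""
    activ = vector
    for weights in layers:
        activ = [max(0.0, sum(w * a for w, a in zip(row, activ)))
                 for row in weights]
    return activ

rng = random.Random(0)

def layer(n_in, n_out):
    return [[rng.uniform(-0.05, 0.05) for _ in range(n_in)]
            for _ in range(n_out)]

# 625 inputs -> two hidden layers -> 26 outputs (one per letter a-z);
# hidden sizes 64 and 32 are arbitrary illustrative choices.
layers = [layer(625, 64), layer(64, 32), layer(32, 26)]
image = [[rng.randint(0, 1) for _ in range(25)] for _ in range(25)]
vec = segment_to_vector(image)
scores = forward(vec, layers)
predicted = chr(ord("a") + scores.index(max(scores)))
print(len(vec), len(scores), predicted)
```

In the described system, four such networks run in parallel and a correlation-based genetic algorithm combines their 26-way outputs into the final recognized character.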