24 results for distributed combination of classifiers

in Aston University Research Archive


Relevance: 100.00%

Abstract:

We studied the rules by which visual responses to luminous targets are combined across the two eyes. Previous work has found very different forms of binocular combination for targets defined by increments and by decrements of luminance, with decrement data implying a severe nonlinearity before binocular combination. We ask whether this difference is due to the luminance of the target, the luminance of the background, or the sign of the luminance excursion. We estimated the pre-binocular nonlinearity (power exponent) by fitting a computational model to ocular equibrightness matches. The severity of the nonlinearity had a monotonic dependence on the signed difference between target and background luminance. For dual targets, in which there was both a luminance increment and a luminance decrement (e.g. contrast), perception was governed largely by the decrement. The asymmetry in the nonlinearities derived from the subjective matching data made a clear prediction for visual performance: there should be more binocular summation for detecting luminance increments than for detecting luminance decrements. This prediction was confirmed by the results of a subsequent experiment. We discuss the relation between these results and luminance nonlinearities such as a logarithmic transform, as well as the involvement of contemporary model architectures of binocular vision.
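The link between the pre-binocular nonlinearity and binocular summation can be made concrete with a minimal sketch (not the authors' fitted model): if each eye applies a power-law transducer with exponent gamma before linear summation, and detection occurs at a fixed criterion response, the predicted binocular summation ratio is 2^(1/gamma). A more severe nonlinearity (larger gamma, as found here for decrements) therefore predicts less binocular summation.

```python
def summation_ratio(gamma: float) -> float:
    """Predicted binocular summation ratio when each eye's signal passes
    through a power-law transducer s**gamma before linear summation.
    Monocular threshold: s**gamma = c; binocular: 2*s**gamma = c,
    so the threshold ratio is 2**(1/gamma)."""
    return 2.0 ** (1.0 / gamma)

# A linear transducer (gamma = 1) predicts the full factor-of-2 summation;
# a compressive/accelerating stage with gamma = 2 predicts only sqrt(2).
linear = summation_ratio(1.0)
severe = summation_ratio(2.0)
```

This is exactly the pattern reported above: the shallower nonlinearity inferred for increments goes with more binocular summation, the steeper one for decrements with less.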

Relevance: 100.00%

Abstract:

Flow control in computer communication systems is generally a multi-layered structure, consisting of several mechanisms operating independently at different levels. Evaluation of the performance of networks in which different flow control mechanisms act simultaneously is an important area of research, and is examined in depth in this thesis. This thesis presents the modelling of a finite-resource computer communication network equipped with three levels of flow control, based on closed queueing network theory. The flow control mechanisms considered are: end-to-end control of virtual circuits, network access control of external messages at the entry nodes, and hop-level control between nodes. The model is solved by a heuristic technique, based on an equivalent reduced network and heuristic extensions to the mean value analysis algorithm. The method has significant computational advantages, and overcomes the limitations of the exact methods. It can be used to solve large network models with finite buffers and many virtual circuits. The model and its heuristic solution are validated by simulation. The interaction between the three levels of flow control is investigated. A queueing model is developed for the admission delay on virtual circuits with end-to-end control, in which messages arrive from independent Poisson sources. The selection of the optimum window limit is considered. Several advanced network access schemes are postulated to improve the network performance as well as that of selected traffic streams, and numerical results are presented. A model for the dynamic control of input traffic is developed. Based on Markov decision theory, an optimal control policy is formulated. Numerical results are given, and throughput-delay performance is shown to be better with dynamic control than with static control.
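The thesis extends the mean value analysis (MVA) algorithm heuristically; the exact single-class recursion it builds on can be sketched in a few lines. The three-station network below is hypothetical, standing in for a closed network of virtual circuits.

```python
def mva(service_times, visit_ratios, n_customers):
    """Exact Mean Value Analysis for a single-class closed queueing network
    of FCFS stations. Returns the throughput and per-station mean queue
    lengths at the given customer population."""
    K = len(service_times)
    q = [0.0] * K                       # mean queue lengths at population n-1
    x = 0.0
    for n in range(1, n_customers + 1):
        # Arrival theorem: an arriving customer sees the queue lengths of a
        # network with one customer fewer, so residence time scales with them.
        r = [service_times[k] * (1.0 + q[k]) for k in range(K)]
        x = n / sum(visit_ratios[k] * r[k] for k in range(K))   # throughput
        q = [x * visit_ratios[k] * r[k] for k in range(K)]      # Little's law
    return x, q

# Hypothetical 3-station network with 5 circulating messages
# (service times in seconds, relative visit ratios).
throughput, queues = mva([0.05, 0.02, 0.08], [1.0, 2.0, 1.0], 5)
```

By Little's law the queue lengths always sum to the population, which makes a convenient sanity check on any heuristic extension of the recursion.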

Relevance: 100.00%

Abstract:

The detection of signals in the presence of noise is one of the most basic and important problems encountered by communication engineers. Although the literature abounds with analyses of communications in Gaussian noise, relatively little work has appeared dealing with communications in non-Gaussian noise. In this thesis several digital communication systems disturbed by non-Gaussian noise are analysed. The thesis is divided into two main parts. In the first part, a filtered-Poisson impulse noise model is utilized to calculate error probability characteristics of a linear receiver operating in additive impulsive noise. First, the effect that non-Gaussian interference has on the performance of a receiver that has been optimized for Gaussian noise is determined. The factors affecting the choice of modulation scheme so as to minimize the detrimental effects of non-Gaussian noise are then discussed. In the second part, a new theoretical model of impulsive noise that fits well with the observed statistics of noise in radio channels below 100 MHz has been developed. This empirical noise model is applied to the detection of known signals in the presence of noise to determine the optimal receiver structure. The performance of such a detector has been assessed and is found to depend on the signal shape, the time-bandwidth product, as well as the signal-to-noise ratio. The optimal signal to minimize the probability of error of the detector is determined. Attention is then turned to the problem of threshold detection. Detector structure, large-sample performance and robustness against errors in the detector parameters are examined. Finally, estimators of such parameters as the occurrence of an impulse and the parameters in an empirical noise model are developed for the case of an adaptive system with slowly varying conditions.
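A filtered-Poisson noise process of the general kind used in the first part can be sampled straightforwardly: impulses arrive at Poisson times, carry random amplitudes, and are smeared by a causal filter. The exponential filter and all parameter values below are illustrative choices, not the thesis's model.

```python
import math
import random

def filtered_poisson_noise(rate, duration, dt, tau, amp_scale, seed=0):
    """Sample a filtered-Poisson impulse noise waveform: Poisson-timed
    impulses with exponentially distributed amplitudes and random sign,
    each convolved with a causal exponential filter exp(-t/tau).
    All parameters are illustrative, not values fitted in the thesis."""
    rng = random.Random(seed)
    n_steps = round(duration / dt)
    noise = [0.0] * n_steps
    t = rng.expovariate(rate)               # first Poisson arrival time
    while t < duration:
        amp = rng.choice([-1.0, 1.0]) * rng.expovariate(1.0 / amp_scale)
        for k in range(int(t / dt), n_steps):
            noise[k] += amp * math.exp(-(k * dt - t) / tau)
        t += rng.expovariate(rate)          # next inter-arrival gap
    return noise

# 0.2 s of noise at 1 kHz sampling, ~50 impulses per second on average.
samples = filtered_poisson_noise(rate=50.0, duration=0.2, dt=1e-3,
                                 tau=5e-3, amp_scale=1.0)
```

The heavy tails of such a process are what degrade a receiver optimized for Gaussian noise, motivating the impulse-robust detectors studied in the second part.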

Relevance: 100.00%

Abstract:

The subject of investigation of the present research is the use of smart hydrogels with fibre optic sensor technology. The aim was to develop a cost-effective sensor platform for the detection of water in hydrocarbon media, and of dissolved inorganic analytes, namely potassium, calcium and aluminium. The fibre optic sensors in this work depend upon the use of hydrogels either to entrap chemotropic agents or to respond to external environmental changes by changing their inherent properties, such as refractive index (RI). A review of current fibre optic technology for sensing showed that the main principles utilised are either the measurement of signal loss or of a change in the wavelength of the light transmitted through the system. The signal loss principle relies on changing the conditions required for total internal reflection to occur. Hydrogels are cross-linked polymer networks that swell but do not dissolve in aqueous environments. Smart hydrogels are synthetic materials that exhibit additional properties to those inherent in their structure. In order to control the non-inherent properties, the hydrogels were fabricated with the addition of chemotropic agents. For the detection of water, hydrogels of low refractive index were synthesized using fluorinated monomers. Sulfonated monomers were used for their extreme hydrophilicity as a means of water sensing through an RI change. To enhance the sensing capability of the hydrogel, chemotropic agents, such as pH indicators and cobalt salts, were used. The system comprises the smart hydrogel coated onto an exposed section of the fibre optic core, connected to the interrogation system measuring the difference in the signal. Information obtained was analysed using purpose-designed software. The developed sensor platform showed that an increase in the target species caused an increase in the signal lost from the sensor system, allowing detection of the target species.
The system has potential applications in areas such as clinical point of care, water detection in fuels and the detection of dissolved ions in the water industry.
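The signal-loss principle mentioned above comes down to Snell's law: total internal reflection at the exposed core only holds for rays striking the boundary beyond the critical angle, which depends on the coating's refractive index. A minimal sketch, with illustrative index values (the silica-core index of ~1.457 is a standard figure, not one from this work):

```python
import math

def critical_angle_deg(n_core, n_external):
    """Critical angle (degrees from the interface normal) for total internal
    reflection at the fibre core boundary. If the external medium's index
    reaches the core's, total internal reflection is no longer possible."""
    if n_external >= n_core:
        return None
    return math.degrees(math.asin(n_external / n_core))

# As the hydrogel coating's refractive index rises toward that of the core,
# the critical angle grows, fewer guided rays satisfy the reflection
# condition, and more light is lost -- the measured signal change.
angle_low = critical_angle_deg(1.457, 1.34)   # illustrative low-RI coating
angle_high = critical_angle_deg(1.457, 1.42)  # illustrative higher-RI coating
```

This is why an RI change in the swelling hydrogel translates directly into a measurable change in transmitted signal.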

Relevance: 100.00%

Abstract:

Substantial altimetry datasets collected by different satellites have only become available during the past five years, but the future will bring a variety of new altimetry missions, both parallel and consecutive in time. The characteristics of each produced dataset vary with the different orbital heights and inclinations of the spacecraft, as well as with the technical properties of the radar instrument. An integral analysis of datasets with different properties offers advantages both in terms of data quantity and data quality. This thesis is concerned with the development of the means for such integral analysis, in particular for dynamic solutions in which precise orbits for the satellites are computed simultaneously. The first half of the thesis discusses the theory and numerical implementation of dynamic multi-satellite altimetry analysis. The most important aspect of this analysis is the application of dual satellite altimetry crossover points as a bi-directional tracking data type in simultaneous orbit solutions. The central problem is that the spatial and temporal distributions of the crossovers are in conflict with the time-organised nature of traditional solution methods. Their application to the adjustment of the orbits of both satellites involved in a dual crossover therefore requires several fundamental changes to the classical least-squares prediction/correction methods. The second part of the thesis applies the developed numerical techniques to the problems of precise orbit computation and gravity field adjustment, using the altimetry datasets of ERS-1 and TOPEX/Poseidon. Although the two datasets can be considered less compatible than those of planned future satellite missions, the obtained results adequately illustrate the merits of a simultaneous solution technique.
In particular, the geographically correlated orbit error is partially observable from a dataset consisting of crossover differences between two sufficiently different altimetry datasets, while being unobservable from the analysis of altimetry data of both satellites individually. This error signal, which has a substantial gravity-induced component, can be employed advantageously in simultaneous solutions for the two satellites in which also the harmonic coefficients of the gravity field model are estimated.

Relevance: 100.00%

Abstract:

Many planning and control tools, especially network analysis, have been developed in the last four decades. The majority of them were created in military organizations to solve the problem of planning and controlling research and development projects. The original version of the network model (i.e. C.P.M./PERT) was transplanted to the construction industry without consideration of the special nature and environment of construction projects. It suited the purpose of setting targets and defining objectives, but it failed to satisfy the requirements of detailed planning and control at the site level. Several analytical and heuristic rule-based methods were designed and combined with the structure of C.P.M. to eliminate its deficiencies. None of them provides a complete solution to the problem of resource, time and cost control. VERT was designed to deal with new ventures; it is suitable for project evaluation at the development stage. CYCLONE, on the other hand, is concerned with the design and micro-analysis of the production process. This work presents an extensive critical review of the available planning techniques and addresses the problem of planning for site operation and control. Based on an outline of the nature of site control, this research developed a simulation-based network model which combines parts of the logic of both VERT and CYCLONE. Several new nodes were designed to model the availability and flow of resources, the overhead and operating costs, and special nodes for evaluating time and cost. A large software package was written to handle the input, the simulation process and the output of the model. This package is designed to run on any microcomputer using the MS-DOS operating system. Data from real-life projects were used to demonstrate the capability of the technique.
Finally, conclusions are drawn regarding the features and limitations of the proposed model, and recommendations for future work are outlined at the end of this thesis.
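The C.P.M. scheduling logic that the abstract says was transplanted into construction can be sketched as the classic forward/backward pass: earliest times forward, latest times backward, and zero-float activities form the critical path. The small activity network below is hypothetical, not from the thesis.

```python
def critical_path(activities):
    """Classic C.P.M. forward/backward pass over an activity-on-node network.
    `activities` maps name -> (duration, [predecessor names]). Returns the
    project duration and the set of zero-float (critical) activities."""
    # Forward pass: earliest start (es) and earliest finish (ef).
    es, ef = {}, {}
    remaining = dict(activities)
    while remaining:
        for name, (dur, preds) in list(remaining.items()):
            if all(p in ef for p in preds):
                es[name] = max((ef[p] for p in preds), default=0.0)
                ef[name] = es[name] + dur
                del remaining[name]
    project = max(ef.values())
    # Backward pass in descending earliest-finish order, so every
    # successor's latest start is known before its predecessors need it.
    lf, ls = {}, {}
    for name in sorted(ef, key=ef.get, reverse=True):
        succs = [s for s, (_, ps) in activities.items() if name in ps]
        lf[name] = min((ls[s] for s in succs), default=project)
        ls[name] = lf[name] - activities[name][0]
    critical = {n for n in activities if abs(ls[n] - es[n]) < 1e-9}
    return project, critical

# Hypothetical five-activity site network (durations in days).
dur, crit = critical_path({
    "excavate": (3, []), "foundations": (4, ["excavate"]),
    "frame": (6, ["foundations"]), "services": (5, ["foundations"]),
    "finish": (2, ["frame", "services"]),
})
```

The thesis's complaint is precisely that this deterministic pass says nothing about resource availability or cost flow at the site level, which is what its simulation nodes add.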

Relevance: 100.00%

Abstract:

Cleavage by the proteasome is responsible for generating the C terminus of T-cell epitopes. Modeling the process of proteasome cleavage as part of a multi-step algorithm for T-cell epitope prediction will reduce the number of non-binders and increase the overall accuracy of the predictive algorithm. Quantitative matrix-based models for prediction of the proteasome cleavage sites in a protein were developed using a training set of 489 naturally processed T-cell epitopes (nonamer peptides) associated with HLA-A and HLA-B molecules. The models were validated using an external test set of 227 T-cell epitopes. The performance of the models was good, identifying 76% of the C-termini correctly. The best model of proteasome cleavage was incorporated as the first step in a three-step algorithm for T-cell epitope prediction, where subsequent steps predicted TAP affinity and MHC binding using previously derived models.
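A quantitative matrix model of this kind scores a short residue window around each candidate cleaved bond by summing position-specific weights. The weights, window size and threshold below are invented for illustration; the thesis derives its matrices from the 489-epitope training set.

```python
# Hypothetical weights for a 4-residue window (positions P2, P1, P1', P2'
# around the cleaved bond); unlisted residues contribute zero.
WEIGHTS = [
    {"L": 0.9, "F": 0.7, "K": -0.2},   # P2
    {"L": 1.2, "Y": 1.0, "P": -1.5},   # P1 (residue N-terminal to the bond)
    {"A": 0.3, "G": 0.1, "W": -0.4},   # P1'
    {"S": 0.2, "D": -0.3},             # P2'
]

def cleavage_score(window: str) -> float:
    """Additive matrix score for a 4-residue window."""
    return sum(WEIGHTS[i].get(res, 0.0) for i, res in enumerate(window))

def predicted_sites(sequence: str, threshold: float = 1.0):
    """Window start indices whose score meets the (hypothetical) threshold,
    i.e. predicted proteasomal cleavage sites."""
    return [i for i in range(len(sequence) - 3)
            if cleavage_score(sequence[i:i + 4]) >= threshold]
```

Used as the first step of the three-step algorithm, such a filter discards candidate nonamers whose C-terminus is unlikely to be generated at all, before TAP affinity and MHC binding are predicted.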

Relevance: 100.00%

Abstract:

In recent years, there has been an increasing interest in learning a distributed representation of word sense. Traditional context clustering based models usually require careful tuning of model parameters, and typically perform worse on infrequent word senses. This paper presents a novel approach which addresses these limitations by first initializing the word sense embeddings through learning sentence-level embeddings from WordNet glosses using a convolutional neural network. The initialized word sense embeddings are used by a context clustering based model to generate the distributed representations of word senses. Our learned representations outperform the publicly available embeddings on 2 out of 4 metrics in the word similarity task, and 6 out of 13 subtasks in the analogical reasoning task.
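The two stages can be illustrated with a toy sketch in which the CNN gloss encoder is replaced by a simple average of (invented) word vectors: each sense embedding is initialized from its gloss, then each context is assigned to the most similar sense and that sense's embedding is nudged toward it, one online clustering step at a time.

```python
def mean_vec(words, word_vecs, dim=3):
    """Average the vectors of known words (a stand-in for the gloss CNN)."""
    vs = [word_vecs[w] for w in words if w in word_vecs]
    if not vs:
        return [0.0] * dim
    return [sum(v[d] for v in vs) / len(vs) for d in range(dim)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def assign_and_update(context_words, sense_embs, word_vecs, lr=0.5):
    """Assign a context to its most similar sense, then move that sense
    embedding toward the context vector. Returns the chosen sense index."""
    c = mean_vec(context_words, word_vecs)
    best = max(range(len(sense_embs)), key=lambda s: dot(sense_embs[s], c))
    sense_embs[best] = [(1 - lr) * e + lr * x
                        for e, x in zip(sense_embs[best], c)]
    return best

# Toy 3-d vectors for two senses of "bank"; the word lists stand in for
# WordNet glosses. All numbers are invented for illustration.
word_vecs = {"money": [1.0, 0.0, 0.0], "loan": [1.0, 0.2, 0.0],
             "river": [0.0, 1.0, 0.0], "water": [0.0, 1.0, 0.2]}
sense_embs = [mean_vec(["money", "loan"], word_vecs),
              mean_vec(["river", "water"], word_vecs)]
```

Gloss-based initialization is what gives rare senses a sensible starting point before any contexts have been clustered, which is the paper's answer to the infrequent-sense problem.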

Relevance: 100.00%

Abstract:

Binocular combination for first-order (luminance-defined) stimuli has been widely studied, but we know rather little about this binocular process for spatial modulations of contrast (second-order stimuli). We used phase-matching and amplitude-matching tasks to assess binocular combination of second-order phase and modulation depth simultaneously. With fixed modulation in one eye, we found that binocularly perceived phase was shifted, and perceived amplitude increased almost linearly as modulation depth in the other eye increased. At larger disparities, the phase shift was larger and the amplitude change was smaller. The degree of interocular correlation of the carriers had no influence. These results can be explained by an initial extraction of the contrast envelopes before binocular combination (consistent with the lack of dependence on carrier correlation) followed by a weighted linear summation of second-order modulations in which the weights (gains) for each eye are driven by the first-order carrier contrasts, as previously found for first-order binocular combination. Perceived modulation depth fell markedly with increasing phase disparity, unlike previous findings that perceived first-order contrast was almost independent of phase disparity. We present a simple revision to a widely used interocular gain-control theory that unifies first- and second-order binocular summation with a single principle, contrast-weighted summation, and we further elaborate the model for first-order combination. Conclusion: second-order combination is controlled by first-order contrast.
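Weighted linear summation of two sinusoidal modulations is phasor addition, so the perceived phase and amplitude fall out of a few lines of trigonometry. The simple normalizing gain rule w = c / (c_left + c_right) below is an illustrative assumption, not the paper's fitted gain-control equation.

```python
import math

def binocular_phase_amp(m_left, phi_left, m_right, phi_right, c_left, c_right):
    """Contrast-weighted phasor summation of second-order modulations:
    each eye's modulation (depth m, phase phi) is weighted by a gain driven
    by its first-order carrier contrast c, then summed linearly.
    Returns the binocularly perceived (amplitude, phase)."""
    wl = c_left / (c_left + c_right)
    wr = c_right / (c_left + c_right)
    x = wl * m_left * math.cos(phi_left) + wr * m_right * math.cos(phi_right)
    y = wl * m_left * math.sin(phi_left) + wr * m_right * math.sin(phi_right)
    return math.hypot(x, y), math.atan2(y, x)

# Equal carriers, equal modulation depth, +/-22.5 deg phase disparity:
# perceived phase lands midway, and perceived amplitude is reduced
# relative to the zero-disparity case.
amp, phase = binocular_phase_amp(0.5, math.radians(22.5),
                                 0.5, math.radians(-22.5), 0.3, 0.3)
amp0, _ = binocular_phase_amp(0.5, 0.0, 0.5, 0.0, 0.3, 0.3)
```

The amplitude loss at nonzero disparity is the phasor-geometry counterpart of the reported fall in perceived modulation depth with phase disparity.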

Relevance: 100.00%

Abstract:

The increase in renewable energy generators introduced into the electricity grid is putting pressure on its stability and management, as predictions of renewable energy sources cannot be accurate or fully controlled. This, with the additional pressure of fluctuations in demand, presents a problem more complex than the current methods of controlling electricity distribution were designed for. A global approximate and distributed optimisation method for power allocation that accommodates uncertainties and volatility is suggested and analysed. It is based on a probabilistic method known as message passing [1], which has deep links to statistical physics methodology. This principled method of optimisation is based on local calculations and inherently accommodates uncertainties; it is of modest computational complexity and provides good approximate solutions. We consider uncertainty and fluctuations drawn from a Gaussian distribution and incorporate them into the message-passing algorithm. We see the effect that increasing uncertainty has on the transmission cost, and how the placement of volatile nodes within a grid, such as renewable generators or consumers, affects it.
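One reason Gaussian fluctuations fold neatly into local message-passing computations is that quadratic costs absorb them in closed form: for demand D ~ N(mu, sigma), the expected cost of supplying x is E[(x - D)^2] = (x - mu)^2 + sigma^2, i.e. the deterministic cost plus a variance penalty. The sketch below (an illustration of this general principle, not the paper's algorithm) verifies the identity by Monte Carlo.

```python
import random

def expected_quadratic_cost(x, mu, sigma):
    """Closed form: E[(x - D)^2] with D ~ N(mu, sigma)."""
    return (x - mu) ** 2 + sigma ** 2

def monte_carlo_cost(x, mu, sigma, n=200_000, seed=1):
    """Empirical average of (x - D)^2 over Gaussian demand samples."""
    rng = random.Random(seed)
    return sum((x - rng.gauss(mu, sigma)) ** 2 for _ in range(n)) / n

analytic = expected_quadratic_cost(1.2, 1.0, 0.5)   # 0.04 + 0.25 = 0.29
estimate = monte_carlo_cost(1.2, 1.0, 0.5)
```

The sigma^2 term is exactly how "increasing uncertainty" raises the transmission cost: each volatile node contributes its variance to the objective regardless of the allocation chosen around it.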

Relevance: 100.00%

Abstract:

Energy crop production is considered environmentally benign and socially acceptable, offering ecological benefits over fossil fuels through its contribution to the reduction of greenhouse gases and acidifying emissions. Energy crops are subject to persistent policy support by the EU, despite their limited or even marginally negative impact on the greenhouse effect. The present study endeavors to optimize the agricultural income generated by energy crops in a remote and disadvantaged region, with the assistance of linear programming. The optimization concerns the income created from soybean, sunflower (proxy for an energy crop), and corn. Different policy scenarios imposed restrictions on the value of the subsidies as a proxy for EU policy tools, the value of inputs (costs of capital and labor) and different irrigation conditions. The results indicate that the area and the imports per energy crop remain unchanged, independently of the policy scenario enacted. Furthermore, corn cultivation contributes the most to income maximization, whereas the implemented CAP policy plays an incremental role in the uptake of an energy crop. A key implication is that alternative forms of motivation, beyond the financial ones, should be provided to farmers if extensive use of energy crops is to be achieved.
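The kind of linear programme described here can be sketched with invented numbers: maximize gross margin from two crops subject to land and irrigation-water limits. With two decision variables the optimum lies at a vertex of the feasible region, so enumerating the vertices suffices; none of the margins or water requirements below come from the study.

```python
def solve_two_crop_lp(margin, land, water_use, water_limit):
    """Maximize m1*x + m2*y subject to x + y <= land,
    w1*x + w2*y <= water_limit, x, y >= 0, by vertex enumeration.
    `margin` and `water_use` are per-hectare figures for the two crops."""
    m1, m2 = margin
    w1, w2 = water_use
    # Candidate vertices: origin, single-crop corners, and the intersection
    # of the land and water constraints (when it exists and is feasible).
    verts = [(0.0, 0.0),
             (min(land, water_limit / w1), 0.0),
             (0.0, min(land, water_limit / w2))]
    det = w2 - w1
    if abs(det) > 1e-12:
        x = (w2 * land - water_limit) / det
        y = land - x
        if x >= 0.0 and y >= 0.0:
            verts.append((x, y))
    feasible = [(x, y) for x, y in verts
                if x + y <= land + 1e-9
                and w1 * x + w2 * y <= water_limit + 1e-9]
    return max(feasible, key=lambda v: m1 * v[0] + m2 * v[1])

# Hypothetical data: corn earns more per hectare but needs twice the water.
best = solve_two_crop_lp(margin=(900.0, 600.0), land=100.0,
                         water_use=(6.0, 3.0), water_limit=450.0)
```

Re-running the solver under different subsidy or input-cost scenarios (shifting the margins) is the mechanism by which the study tests whether the optimal cropping pattern actually responds to policy.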

Relevance: 100.00%

Abstract:

Distributed representations (DR) of cortical channels are pervasive in models of spatio-temporal vision. A central idea that underpins current innovations of DR stems from the extension of 1-D phase into 2-D images. Neurophysiological evidence, however, provides tenuous support for a quadrature representation in the visual cortex, since even-phase visual units are associated with broader orientation tuning than odd-phase visual units (J. Neurophys., 88, 455–463, 2002). We demonstrate that the application of the steering theorems to a 2-D definition of phase afforded by the Riesz Transform (IEEE Trans. Sig. Proc., 49, 3136–3144), extended to include a Scale Transform, allows one to smoothly interpolate across 2-D phase and pass from circularly symmetric to orientation-tuned visual units, and from more narrowly tuned odd-symmetric units to even ones. Steering across 2-D phase and scale can be orthogonalized via a linearizing transformation. Using the tilt after-effect as an example, we argue that effects of visual adaptation can be better explained via an orthogonal rather than a channel-specific representation of visual units. This is because the orthogonal representation can explicitly account for isotropic and cross-orientation adaptation effects, and both direct and indirect tilt after-effects can be explained from it.

Relevance: 100.00%

Abstract:

A new approach is described herein, where neutron reflectivity measurements that probe changes in the density profile of thin films as they absorb material from the gas phase have been combined with a Love wave based gravimetric assay that measures the mass of absorbed material. This combination of techniques not only determines the spatial distribution of absorbed molecules, but also reveals the amount of void space within the thin film (a quantity that can be difficult to assess using neutron reflectivity measurements alone). The uptake of organic solvent vapours into spun cast films of polystyrene has been used as a model system with a view to this method having the potential for extension to the study of other systems. These could include, for example, humidity sensors, hydrogel swelling, biomolecule adsorption or transformations of electroactive and chemically reactive thin films. This is the first ever demonstration of combined neutron reflectivity and Love wave-based gravimetry and the experimental caveats, limitations and scope of the method are explored and discussed in detail.
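The way the two measurements combine to expose void space is simple arithmetic: reflectivity yields the film thickness (hence volume per unit area), the Love wave device yields the areal mass, and the ratio gives an effective film density to compare against the bulk material. The numbers below are invented for illustration, not data from the paper.

```python
def void_fraction(thickness_nm, areal_mass_ng_mm2, bulk_density_g_cm3):
    """Fraction of the film volume unoccupied by the polymer, from the
    film thickness (neutron reflectivity) and areal mass (Love wave
    gravimetry), compared with the bulk polymer density."""
    thickness_cm = thickness_nm * 1e-7
    areal_mass_g_cm2 = areal_mass_ng_mm2 * 1e-9 * 100.0   # ng/mm^2 -> g/cm^2
    film_density = areal_mass_g_cm2 / thickness_cm
    return 1.0 - film_density / bulk_density_g_cm3

# Illustrative: a 100 nm polystyrene film (bulk density ~1.05 g/cm^3)
# weighing 94.5 ng/mm^2 implies an effective density of 0.945 g/cm^3,
# i.e. a 10% void fraction.
vf = void_fraction(100.0, 94.5, 1.05)
```

Neither technique alone fixes the void fraction: reflectivity constrains the density profile but not the absolute mass, and gravimetry gives mass with no spatial information, which is why the combination is informative.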

Relevance: 100.00%

Abstract:

A study has been made of the coalescence of secondary dispersions in a fibrous bed. The literature pertaining to the formation, hydrodynamic behaviour and methods of separation of droplets less than one hundred micrometres in diameter has been reviewed, with particular reference to fibrous bed coalescers. The main operating parameters were identified as inlet drop size distribution, phase ratio, superficial velocity, and the thickness and voidage of the bed. A recirculatory rig with interchangeable fibrous bed pads was designed and operated with toluene-water dispersions generated by a combination of centrifugal pumps. Inlet drop sizes were analysed using a Coulter Counter and outlet drops were sized photographically. A novel technique, involving conductivity measurements at different planes in the bed, was developed to measure hold-up distribution. Single-phase and two-phase flow pressure drops were correlated by a Blake-Kozeny type equation. Exit drop size was independent of inlet drop size distribution and phase ratio, but a function of superficial velocity and packing thickness. Average bed hold-up was independent of inlet drop size distribution and phase ratio, but decreased with increasing superficial velocity. Hold-up was not evenly distributed in the bed: the highest value occurred at the inlet, followed by a sharp drop at approximately 1.2 × 10⁻² m. Hold-up remained constant throughout the rest of the bed until the exit plane, where it increased. From the results, a mechanism is postulated involving: (a) capture of the inlet drops followed by interdrop coalescence until an equilibrium value is reached; (b) equilibrium-size droplets flowing as rivulets through the intermediate portion of the bed; and (c) each rivulet forming droplets at the exit face, which detach by a 'drip point' mechanism.
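The Blake-Kozeny correlation mentioned above is the laminar-flow pressure-drop relation for packed beds (the viscous term of the Ergun equation). A sketch with the standard constant of 150 and illustrative operating values, not the thesis's fitted constants:

```python
def blake_kozeny_dp(mu, velocity, d_fibre, voidage, bed_depth, k=150.0):
    """Blake-Kozeny pressure drop (Pa) across a packed bed:
    dP = k * mu * v * (1 - eps)^2 * L / (d^2 * eps^3),
    for superficial velocity v (m/s), fluid viscosity mu (Pa.s),
    effective fibre/particle diameter d (m), voidage eps and depth L (m)."""
    eps = voidage
    return (k * mu * velocity * (1.0 - eps) ** 2 * bed_depth
            / (d_fibre ** 2 * eps ** 3))

# Illustrative case: water (mu ~ 1e-3 Pa.s) through a 25 mm deep bed of
# ~20 micron fibres at 90% voidage and 10 mm/s superficial velocity.
dp = blake_kozeny_dp(mu=1.0e-3, velocity=0.01, d_fibre=20e-6,
                     voidage=0.9, bed_depth=0.025)
dp_double_v = blake_kozeny_dp(mu=1.0e-3, velocity=0.02, d_fibre=20e-6,
                              voidage=0.9, bed_depth=0.025)
```

The linear dependence on superficial velocity is the signature of the laminar regime; correlating the measured single-phase and two-phase pressure drops against this form is how the bed's effective flow resistance was characterised.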