155 results for Information display systems


Relevance: 80.00%

Abstract:

Field emission from a series of tetrahedrally bonded amorphous-carbon (ta-C) films, deposited in a filtered cathodic vacuum arc, has been measured. The threshold field for emission and the achievable current densities have been investigated as a function of sp3/sp2 bonding ratio and nitrogen content. Typical as-grown undoped ta-C films have threshold fields of the order of 10-15 V/μm, and optimally nitrogen-doped films exhibited fields as low as 5 V/μm. In order to gain further understanding of the mechanism of field emission, the films were also subjected to H2, Ar, and O2 plasma treatments and were also deposited onto substrates of different work function. The threshold field, emission current, and emission site density were all significantly improved by the plasma treatments, but little dependence of these properties on the work function of the substrate was observed. This suggests that the main barrier to emission in these films is at the front surface.

Relevance: 80.00%

Abstract:

We have developed a novel human facial tracking system that operates in real time at video frame rate without needing any special hardware. The approach is based on Lie algebra and uses three-dimensional feature points on the targeted human face. It is assumed that a rough estimate of the facial model (the relative coordinates of the three-dimensional feature points) is known. First, the initial feature positions on the face are determined using a model-fitting technique. Tracking then proceeds in the following sequence: (1) capture a new video frame and render the feature points onto the image plane; (2) search for the new positions of the feature points on the image plane; (3) obtain the Euclidean matrix from the displacement vectors and the three-dimensional information of the points; and (4) rotate and translate the feature points using the Euclidean matrix, and render the new points onto the image plane. The key algorithm of this tracker is the estimation of the Euclidean matrix by a least-squares technique based on Lie algebra. The resulting tracker performed very well on the task of tracking a human face.
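To make the least-squares step concrete, the following is a minimal sketch (not the authors' implementation): the pinhole projection is linearised around the current pose, a twist xi in the Lie algebra se(3) is estimated from the observed image-plane displacements of the feature points, and the exponential map turns it into the Euclidean matrix used in steps (3)-(4). The focal length, point coordinates, and displacement values below are placeholders.

```python
import numpy as np
from scipy.linalg import expm

def skew(w):
    """3x3 skew-symmetric matrix with skew(w) @ x == np.cross(w, x)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def estimate_euclidean_update(X, dp, f=1.0):
    """X: (N, 3) feature points in camera coordinates; dp: (N, 2) measured
    image-plane displacements; f: focal length (assumed known).
    Returns a 4x4 Euclidean (rigid-motion) matrix."""
    rows, rhs = [], []
    for (x, y, z), d in zip(X, dp):
        # Jacobian of the pinhole projection (f*x/z, f*y/z) w.r.t. the 3-D point.
        Jproj = (f / z) * np.array([[1.0, 0.0, -x / z],
                                    [0.0, 1.0, -y / z]])
        # A small twist xi = (v, w) moves the point by v + w x X.
        rows.append(Jproj @ np.hstack([np.eye(3), -skew([x, y, z])]))
        rhs.append(d)
    A, b = np.vstack(rows), np.concatenate(rhs)
    xi, *_ = np.linalg.lstsq(A, b, rcond=None)    # least-squares twist estimate
    v, w = xi[:3], xi[3:]
    twist = np.zeros((4, 4))
    twist[:3, :3], twist[:3, 3] = skew(w), v
    return expm(twist)                            # exponential map: se(3) -> SE(3)

# Steps (3)-(4) of one tracking iteration with placeholder data.
X = np.array([[0.1, 0.2, 1.0], [-0.2, 0.1, 1.2], [0.0, -0.1, 0.9], [0.2, -0.2, 1.1]])
dp = 0.01 * np.random.default_rng(0).standard_normal((4, 2))   # stand-in displacements
T = estimate_euclidean_update(X, dp)
X_new = (T @ np.hstack([X, np.ones((4, 1))]).T).T[:, :3]
```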

Relevance: 80.00%

Abstract:

This paper presents a novel technique for reconstructing an outdoor sculpture from an uncalibrated image sequence acquired around it using a hand-held camera. The technique introduced here uses only the silhouettes of the sculpture for both motion estimation and model reconstruction, and neither corner detection nor matching is necessary. This is very important, as most sculptures are composed of smooth textureless surfaces, and hence their silhouettes are very often the only information available from their images. Moreover, unlike previous work, the proposed technique does not require the camera motion to be perfectly circular (e.g., a turntable sequence). It employs an image rectification step before the motion estimation step to obtain a rough estimate of the camera motion, which is only approximately circular. A refinement process is then applied to obtain the true general motion of the camera. This allows the technique to handle large outdoor sculptures which cannot be rotated on a turntable, making it much more practical and flexible.

Relevance: 80.00%

Abstract:

We have used novel liquid crystals with extremely large flexoelectric coefficients in a range of ultra-fast photonic/display modes, namely: (1) the uniform lying helix, which leads to in-plane-switching, birefringence-based displays with 100 μs switching times at low fields (2-5 V/μm), wide viewing angle and analogue or grey-scale capability; (2) the uniform standing helix, using planar surface alignment and in-plane fields, with sub-millisecond response times and optical contrasts in excess of 5000:1 with a perfect black "off state"; (3) the wide-temperature-range blue phase, which leads to field-controlled reflective color; and (4) high-slope-efficiency, wide-wavelength-range, tunable, narrow-linewidth microscopic liquid crystal lasers.

Relevance: 80.00%

Abstract:

An adaptive lens, which has variable focus and is rapidly controllable with simple low-power electronics, has numerous applications in optical telecommunications devices, 3D display systems, miniature cameras and adaptive optics. The University of Durham is developing a range of adaptive liquid crystal lenses, and here we describe work on the construction of modal liquid crystal lenses. This type of lens was first described by Naumov [1] and further developed by others [2-4]. In this system, a spatially varying and circularly symmetric voltage profile can be generated across a liquid-crystal cell, generating a lens-like refractive index profile. Such devices are simple in design and do not require a pixellated structure. The shape and focussing power of the lens can be controlled by varying the applied electric field and frequency. Results show adaptive lenses operating at optical wavelengths with continuously variable focal lengths from infinity to 70 cm. Switching speeds are of the order of 1 second between focal positions. Manufacturing methods for our adaptive lenses are presented, together with the latest results on the performance of these devices.

Relevance: 80.00%

Abstract:

We present the Gaussian Process Density Sampler (GPDS), an exchangeable generative model for use in nonparametric Bayesian density estimation. Samples drawn from the GPDS are consistent with exact, independent samples from a fixed density function that is a transformation of a function drawn from a Gaussian process prior. Our formulation allows us to infer an unknown density from data using Markov chain Monte Carlo, which gives samples from the posterior distribution over density functions and from the predictive distribution on data space. We can also infer the hyperparameters of the Gaussian process. We compare this density modeling technique to several existing techniques on a toy problem and a skull-reconstruction task.
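As a rough illustration of the generative view described above, the sketch below assumes a logistic squashing function, a standard-normal base density and a squared-exponential covariance (none of which are specified in this abstract): the latent function is instantiated lazily under the GP prior at each proposed point, and the proposal is accepted with probability given by the squashed function value, so accepted points are draws from the modulated density. It is written for clarity rather than efficiency (the conditioning cost grows with the number of proposals).

```python
import numpy as np

rng = np.random.default_rng(0)

def se_kernel(A, B, lengthscale=1.0, variance=4.0):
    d = A[:, None] - B[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gpds_samples(n_samples, jitter=1e-8):
    X = np.empty(0)      # locations where the latent function is instantiated
    g = np.empty(0)      # corresponding latent function values
    accepted = []
    while len(accepted) < n_samples:
        x = rng.standard_normal()                 # proposal from the base density
        if X.size == 0:
            mu, var = 0.0, se_kernel(np.array([x]), np.array([x]))[0, 0]
        else:
            # Condition the GP on all previously instantiated values.
            K = se_kernel(X, X) + jitter * np.eye(X.size)
            k = se_kernel(X, np.array([x]))[:, 0]
            sol = np.linalg.solve(K, k)
            mu = sol @ g
            var = max(se_kernel(np.array([x]), np.array([x]))[0, 0] - k @ sol, jitter)
        gx = mu + np.sqrt(var) * rng.standard_normal()
        X, g = np.append(X, x), np.append(g, gx)
        if rng.random() < 1.0 / (1.0 + np.exp(-gx)):   # accept w.p. sigmoid(g(x))
            accepted.append(x)
    return np.array(accepted)

draws = gpds_samples(200)
```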

Relevance: 80.00%

Abstract:

Synapses exhibit an extraordinary degree of short-term malleability, with release probabilities and effective synaptic strengths changing markedly over multiple timescales. From the perspective of a fixed computational operation in a network, this seems like a most unacceptable degree of added variability. We suggest an alternative theory according to which short-term synaptic plasticity plays a normatively justifiable role. This theory starts from the commonplace observation that the spiking of a neuron is an incomplete, digital report of the analog quantity that contains all the critical information, namely its membrane potential. We suggest that a synapse solves the inverse problem of estimating the presynaptic membrane potential from the spikes it receives, acting as a recursive filter. We show that the dynamics of short-term synaptic depression closely resemble those required for optimal filtering, and that they indeed support high-quality estimation. Under this account, the local postsynaptic potential and the level of synaptic resources track the (scaled) mean and variance of the estimated presynaptic membrane potential. We make experimentally testable predictions for how the statistics of subthreshold membrane potential fluctuations and the form of the spiking nonlinearity should be related to the properties of short-term plasticity in any particular cell type.
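To illustrate the estimation problem (though not the paper's synaptic dynamics), the sketch below uses a generic bootstrap particle filter: the presynaptic membrane potential follows Ornstein-Uhlenbeck dynamics, spikes are emitted through an exponential rate nonlinearity, and the filter tracks the posterior mean and variance of the potential from the spike train alone. All parameter values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
T, dt = 2000, 1e-3
tau, sigma_u = 0.02, 1.0        # OU time constant (s) and stationary s.d.
g0, beta = 10.0, 1.5            # baseline rate (Hz) and spike-rate gain (placeholders)

# Simulate the presynaptic potential and the spikes it emits.
u = np.zeros(T)
spikes = np.zeros(T, dtype=bool)
for t in range(1, T):
    u[t] = u[t - 1] - dt / tau * u[t - 1] + sigma_u * np.sqrt(2 * dt / tau) * rng.standard_normal()
    spikes[t] = rng.random() < g0 * np.exp(beta * u[t]) * dt

# Bootstrap particle filter: estimate the potential from the spike train alone.
P = 500
particles = sigma_u * rng.standard_normal(P)
post_mean, post_var = np.zeros(T), np.zeros(T)
for t in range(T):
    particles += -dt / tau * particles + sigma_u * np.sqrt(2 * dt / tau) * rng.standard_normal(P)
    p_spike = g0 * np.exp(beta * particles) * dt
    w = p_spike if spikes[t] else 1.0 - p_spike   # Bernoulli spike likelihood
    w = np.clip(w, 1e-12, None)
    w /= w.sum()
    post_mean[t] = w @ particles
    post_var[t] = w @ (particles - post_mean[t]) ** 2
    particles = particles[rng.choice(P, P, p=w)]  # resample
```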

Relevance: 80.00%

Abstract:

We introduce a Gaussian process model of functions which are additive. An additive function is one which decomposes into a sum of low-dimensional functions, each depending on only a subset of the input variables. Additive GPs generalize both Generalized Additive Models and the standard GP models which use squared-exponential kernels. Hyperparameter learning in this model can be seen as Bayesian Hierarchical Kernel Learning (HKL). We introduce an expressive but tractable parameterization of the kernel function, which allows efficient evaluation of all input interaction terms, whose number is exponential in the input dimension. The additional structure discoverable by this model results in increased interpretability, as well as state-of-the-art predictive power in regression tasks.
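A hedged sketch of such a kernel parameterization is given below: per-dimension squared-exponential base kernels are combined into interaction terms of every order using the Newton-Girard identities for elementary symmetric polynomials, so evaluation is polynomial in the input dimension even though the number of interaction terms is exponential. The specific base kernel and hyperparameter values are assumptions for illustration.

```python
import numpy as np

def base_kernels(X1, X2, lengthscales):
    """One squared-exponential kernel per input dimension: shape (D, N1, N2)."""
    diff = X1[:, None, :] - X2[None, :, :]
    return np.exp(-0.5 * (diff / lengthscales) ** 2).transpose(2, 0, 1)

def additive_kernel(X1, X2, lengthscales, order_variances):
    z = base_kernels(X1, X2, lengthscales)                    # per-dimension kernels
    R = len(order_variances)                                  # highest interaction order
    p = [np.sum(z ** k, axis=0) for k in range(R + 1)]        # power sums p_k
    e = [np.ones_like(z[0])]                                  # e_0 = 1
    for n in range(1, R + 1):                                 # Newton-Girard recursion
        e.append(sum((-1) ** (k - 1) * e[n - k] * p[k] for k in range(1, n + 1)) / n)
    # The order-n elementary symmetric polynomial sums every n-way interaction term.
    return sum(v * e[n + 1] for n, v in enumerate(order_variances))

X = np.random.default_rng(2).standard_normal((5, 3))
K = additive_kernel(X, X, lengthscales=np.ones(3), order_variances=[1.0, 0.5, 0.1])
```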

Relevance: 80.00%

Abstract:

In standard Gaussian process regression, input locations are assumed to be noise-free. We present a simple yet effective GP model for training on input points corrupted by i.i.d. Gaussian noise. To make computations tractable we use a local linear expansion about each input point. This allows the input noise to be recast as output noise proportional to the squared gradient of the GP posterior mean. The input noise variances are inferred from the data as extra hyperparameters; they are trained alongside other hyperparameters by the usual method of maximisation of the marginal likelihood. Training uses an iterative scheme, which alternates between optimising the hyperparameters and calculating the posterior gradient. Analytic predictive moments can then be found for Gaussian-distributed test points. We compare our model to others over a range of different regression problems and show that it improves over current methods.
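A toy one-dimensional sketch of this idea (not the paper's full training procedure) is shown below: the input noise is folded into a per-point output-noise term proportional to the squared slope of the current posterior mean, and the fit is alternated with the gradient computation. Hyperparameters are fixed here rather than learned by marginal-likelihood maximisation.

```python
import numpy as np

rng = np.random.default_rng(3)
ell, sf2, sy2, sx2 = 0.5, 1.0, 0.01, 0.05   # lengthscale, signal var, output/input noise (assumed)

def k(A, B):
    return sf2 * np.exp(-0.5 * ((A[:, None] - B[None, :]) / ell) ** 2)

# Noisy-input data: targets come from f(true x), but only x + noise is observed.
x_true = np.linspace(-3, 3, 60)
x_obs = x_true + np.sqrt(sx2) * rng.standard_normal(60)
y = np.sin(2 * x_true) + np.sqrt(sy2) * rng.standard_normal(60)

noise = np.full(60, sy2)                     # start with output noise only
for _ in range(5):                           # alternate: fit, then gradient correction
    K = k(x_obs, x_obs) + np.diag(noise)
    alpha = np.linalg.solve(K, y)
    # Analytic slope of the posterior mean at the training inputs.
    dk = -(x_obs[:, None] - x_obs[None, :]) / ell ** 2 * k(x_obs, x_obs)
    grad = dk @ alpha
    noise = sy2 + sx2 * grad ** 2            # input noise recast as output noise

x_star = np.linspace(-3, 3, 200)
mean_star = k(x_star, x_obs) @ np.linalg.solve(k(x_obs, x_obs) + np.diag(noise), y)
```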

Relevance: 80.00%

Abstract:

Computational analyses of dendritic computations often assume stationary inputs to neurons, ignoring the pulsatile nature of spike-based communication between neurons and the moment-to-moment fluctuations caused by such spiking inputs. Conversely, circuit computations with spiking neurons are usually formalized without regard to the rich nonlinear nature of dendritic processing. Here we address the computational challenge faced by neurons that compute and represent analogue quantities but communicate with digital spikes, and show that reliable computation of even purely linear functions of inputs can require the interplay of strongly nonlinear subunits within the postsynaptic dendritic tree. Our theory predicts a matching of dendritic nonlinearities and synaptic weight distributions to the joint statistics of presynaptic inputs. This approach suggests normative roles for some puzzling forms of nonlinear dendritic dynamics and plasticity.

Relevance: 80.00%

Abstract:

Storing a new pattern in a palimpsest memory system comes at the cost of interfering with the memory traces of previously stored items. Knowing the age of a pattern thus becomes critical for recalling it faithfully. This implies that there should be a tight coupling between estimates of age, as a form of familiarity, and the neural dynamics of recollection, something which current theories omit. Using a normative model of autoassociative memory, we show that a dual memory system, consisting of two interacting modules for familiarity and recollection, achieves the best performance for both recollection and recognition. This finding provides a new window onto actively contentious psychological and neural aspects of recognition memory.
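The palimpsest effect itself can be illustrated with a generic toy model (not the paper's normative dual-system model): a Hopfield-style autoassociative memory with exponentially decaying weights stores a stream of random patterns, and recall quality falls off with the age of the stored pattern, which is why an age (familiarity) estimate becomes useful for faithful recall.

```python
import numpy as np

rng = np.random.default_rng(4)
N, n_patterns, decay = 200, 60, 0.95        # network size, stored patterns, weight decay

patterns = rng.choice([-1, 1], size=(n_patterns, N))
W = np.zeros((N, N))
for p in patterns:                          # newer patterns gradually overwrite older ones
    W = decay * W + np.outer(p, p) / N
np.fill_diagonal(W, 0)

def recall(cue, steps=20):
    s = cue.copy()
    for _ in range(steps):                  # synchronous sign updates
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

# Overlap with the stored pattern as a function of its age (0 = most recent).
for age in [0, 5, 20, 40]:
    p = patterns[n_patterns - 1 - age]
    cue = p * np.where(rng.random(N) < 0.1, -1, 1)   # corrupt 10% of the bits
    print(age, recall(cue) @ p / N)
```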

Relevance: 80.00%

Abstract:

A number of recent scientific and engineering problems require signals to be decomposed into a product of a slowly varying positive envelope and a quickly varying carrier whose instantaneous frequency also varies slowly over time. Although signal processing provides algorithms for so-called amplitude- and frequency-demodulation (AFD), there are well-known problems with all of the existing methods. Motivated by the fact that AFD is ill-posed, we approach the problem using probabilistic inference. The new approach, called probabilistic amplitude and frequency demodulation (PAFD), models the instantaneous frequency using an auto-regressive generalization of the von Mises distribution, and the envelopes using Gaussian auto-regressive dynamics with a positivity constraint. A novel form of expectation propagation is used for inference. We demonstrate that although PAFD is computationally demanding, it outperforms previous approaches on synthetic and real signals in clean, noisy and missing-data settings.
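One plausible reading of the forward model, sketched below purely for illustration (the inference scheme and exact parameterization are not reproduced here): the envelope is a latent Gaussian AR(1) process pushed through a softplus to enforce positivity, the instantaneous frequency drifts slowly via von Mises steps, and the observed signal is the envelope times the cosine of the accumulated phase plus noise.

```python
import numpy as np

rng = np.random.default_rng(5)
T = 2000
lam_a, sig_a = 0.995, 0.02      # AR(1) coefficient and innovation s.d. of the envelope state
kappa = 5000.0                  # concentration of the von Mises frequency drift (placeholder)

a_latent = np.zeros(T)          # latent envelope state
omega = np.full(T, 0.2)         # instantaneous frequency (radians per sample)
theta = np.zeros(T)             # accumulated phase
for t in range(1, T):
    a_latent[t] = lam_a * a_latent[t - 1] + sig_a * rng.standard_normal()
    # Slowly varying frequency: von Mises steps centred on the previous value.
    omega[t] = np.mod(rng.vonmises(omega[t - 1], kappa), 2 * np.pi)
    theta[t] = theta[t - 1] + omega[t]

envelope = np.log1p(np.exp(4.0 * a_latent))     # softplus keeps the envelope positive
signal = envelope * np.cos(theta) + 0.05 * rng.standard_normal(T)
```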

Relevance: 80.00%

Abstract:

We present a new co-clustering problem of images and visual features. The problem involves a set of non-object images in addition to a set of object images and features to be co-clustered. Co-clustering is performed in a way that maximises discrimination of object images from non-object images, thus emphasising discriminative features. This provides a way of obtaining perceptual joint clusters of object images and features. We tackle the problem by simultaneously boosting multiple strong classifiers which compete for images by their expertise. Each boosting classifier is an aggregation of weak learners, i.e., simple visual features. The obtained classifiers are useful for object detection tasks which exhibit multimodalities, e.g., multi-category and multi-view object detection tasks. Experiments on a set of pedestrian images and a face data set demonstrate that the method yields intuitive image clusters with associated features and is far superior to conventional boosting classifiers in object detection tasks.