942 results for Subpixel precision
Abstract:
We present a model for early vision tasks such as denoising, super-resolution, deblurring, and demosaicing. The model provides a resolution-independent representation of discrete images which admits a truly rotationally invariant prior. The model generalizes several existing approaches: variational methods, finite element methods, and discrete random fields. The primary contribution is a novel energy functional, not previously written down, which combines the discrete measurements from pixels with a continuous-domain world viewed through continuous-domain point-spread functions. The value of the functional is that simple priors (such as total variation and its generalizations) on the continuous-domain world become realistic priors on the sampled images. We show that despite its apparent complexity, optimization of this model depends on just a few computational primitives, which, although tedious to derive, can now be reused in many domains. We define a set of optimization algorithms which greatly overcome the apparent complexity of this model and make its practical application possible. New experimental results include infinite-resolution upsampling and a method for obtaining subpixel superpixels. © 2012 IEEE.
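As a rough illustration of how a total-variation prior of the kind mentioned above acts on sampled images, here is a minimal smoothed-TV denoiser by gradient descent. This is only a toy discrete sketch: the paper's actual functional couples a continuous-domain image to pixel measurements through point-spread functions, which is not reproduced here, and the function name and parameter values are illustrative assumptions.

```python
import numpy as np

def tv_denoise(y, lam=0.1, step=0.05, iters=200, eps=1e-8):
    """Minimise E(u) = 0.5*||u - y||^2 + lam * TV(u) by gradient descent,
    with TV smoothed as sum of sqrt(|grad u|^2 + eps) so it is differentiable.
    Toy illustration only; boundary handling is deliberately simple."""
    u = y.astype(float).copy()
    for _ in range(iters):
        # forward differences (zero-padded at the far edge)
        gx = np.diff(u, axis=1, append=u[:, -1:])
        gy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(gx**2 + gy**2 + eps)      # smoothed |grad u|
        px, py = gx / mag, gy / mag
        # divergence of the normalised gradient field (backward differences;
        # np.roll wraps at the boundary, acceptable for this toy example)
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= step * ((u - y) - lam * div)       # descend the energy gradient
    return u
```

On a piecewise-constant image with additive noise, the TV term suppresses the noise while largely preserving edges, which is exactly the behaviour that makes such priors attractive for the sampled-image setting the abstract describes.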
Abstract:
There are several reasons for monitoring underground structures, and they have already been discussed many times, e.g. from the viewpoint of ageing or of the state after an accidental event such as the flooding of the Prague metro in 2002. Monitoring of the Prague metro is realized within the framework of an international research project sponsored by ESF-S3T. The monitoring methods used in Prague are either classical or new and still developing. The reason for using different monitoring methods is the different precision of each method, and also to allow cross-checking between them and their evaluation. Specifically, we use convergence measurement, tiltmeters, crackmeters, geophysical methods, laser scanning, computer vision and, finally, installed MEMS monitoring devices. In the paper, more details of each method and the obtained results are presented. The monitoring methods are complemented by wireless data collection and transfer for real-time monitoring. © 2012 Taylor & Francis Group.
Abstract:
Most quasi-static ultrasound elastography methods image only the axial strain, derived from displacements measured in the direction of ultrasound propagation. In other directions, the beam lacks high resolution phase information and displacement estimation is therefore less precise. However, these estimates can be improved by steering the ultrasound beam through multiple angles and combining displacements measured along the different beam directions. Previously, beamsteering has only considered the 2D case to improve the lateral displacement estimates. In this paper, we extend this to 3D using a simulated 2D array to steer both laterally and elevationally in order to estimate the full 3D displacement vector over a volume. The method is tested on simulated and phantom data using a simulated 6-10 MHz array, and the precision of displacement estimation is measured with and without beamsteering. In simulations, we found a statistically significant improvement in the precision of lateral and elevational displacement estimates: lateral precision 35.69 μm unsteered, 3.70 μm steered; elevational precision 38.67 μm unsteered, 3.64 μm steered. Similar results were found in the phantom data: lateral precision 26.51 μm unsteered, 5.78 μm steered; elevational precision 28.92 μm unsteered, 11.87 μm steered. We conclude that volumetric 3D beamsteering improves the precision of lateral and elevational displacement estimates. © 2012 Elsevier B.V. All rights reserved.
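Combining displacements measured along multiple steered beam directions, as described above, amounts to solving a small least-squares problem: each beam measures (approximately) the projection of the true displacement onto its unit direction vector. The sketch below shows that combination step only, under that projection assumption; the paper's actual per-beam estimator and weighting are not reproduced, and the function name is illustrative.

```python
import numpy as np

def combine_steered_measurements(directions, projections, weights=None):
    """Recover a 3-D displacement vector from per-beam axial estimates.

    Stacking the unit beam directions u_i gives the overdetermined linear
    system U d = p, where p_i is the displacement measured along beam i.
    Solved by (optionally weighted) least squares."""
    U = np.array(directions, dtype=float)            # (n_beams, 3), copied
    U /= np.linalg.norm(U, axis=1, keepdims=True)    # ensure unit vectors
    p = np.asarray(projections, dtype=float)         # (n_beams,)
    if weights is not None:                          # weight rows by sqrt(w)
        w = np.sqrt(np.asarray(weights, dtype=float))
        U, p = U * w[:, None], p * w
    d, *_ = np.linalg.lstsq(U, p, rcond=None)
    return d
```

At least three non-coplanar beam directions are needed for the system to determine all three displacement components, which is why lateral and elevational steering are both required for a full 3D vector.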
Abstract:
Ubiquitous in-building Real Time Location Systems (RTLS) today are limited by costly active radio frequency identification (RFID) tags and the short-range portal readers of low-cost passive RFID tags. We, however, present a novel technology that locates RFID tags using a new approach based on (a) minimising RFID fading using antenna diversity, frequency dithering, phase dithering and narrow beam-width antennas, (b) measuring a combination of RSSI and phase shift in the coherently received tag backscatter signals, and (c) selective use of information from the system, applying weighting techniques to minimise error. These techniques make it possible to locate tags to an accuracy of less than one metre. This breakthrough will enable, for the first time, the low-cost tagging of items and the possibility of locating them with relatively high precision.
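One generic way to fuse weighted range measurements of the kind the abstract alludes to is linearised trilateration: subtracting the first anchor's range equation from the others removes the quadratic term in the unknown position, leaving a linear system solvable by weighted least squares. This is a standard textbook technique, sketched here as an assumption about how such measurements could be combined; the paper's actual estimator is not described in the abstract, and all names are hypothetical.

```python
import numpy as np

def trilaterate(anchors, ranges, weights=None):
    """Weighted least-squares 2-D trilateration from range estimates.

    From |x - a_i|^2 = r_i^2, subtracting the i = 0 equation gives the
    linear system  2 (a_i - a_0) . x = |a_i|^2 - |a_0|^2 - r_i^2 + r_0^2."""
    A = np.asarray(anchors, dtype=float)     # (n, 2) reader/anchor positions
    r = np.asarray(ranges, dtype=float)      # (n,) estimated ranges
    M = 2.0 * (A[1:] - A[0])
    b = (np.sum(A[1:]**2, axis=1) - np.sum(A[0]**2)
         - r[1:]**2 + r[0]**2)
    if weights is not None:                  # down-weight unreliable ranges
        w = np.sqrt(np.asarray(weights, dtype=float)[1:])
        M, b = M * w[:, None], b * w
    x, *_ = np.linalg.lstsq(M, b, rcond=None)
    return x
```

The per-measurement weights are where confidence information (e.g. derived from fading mitigation or signal quality) would enter such a scheme.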
Abstract:
A common approach to visualise multidimensional data sets is to map every data dimension to a separate visual feature. It is generally assumed that such visual features can be judged independently from each other. However, we have recently shown that interactions between features do exist [Hannus et al. 2004; van den Berg et al. 2005]. In those studies, we first determined the individual colour and size contrast or colour and orientation contrast necessary to achieve a fixed level of discrimination performance in single-feature search tasks. These contrasts were then used in a conjunction search task in which the target was defined by a combination of a colour and a size or a colour and an orientation. We found that in conjunction search, despite the matched feature discriminability, subjects significantly more often chose an item with the correct colour than one with the correct size or orientation. This finding may have consequences for visualisation: the saliency of information coded by objects' size or orientation may change when there is a need to simultaneously search for a colour that codes another aspect of the information. In the present experiment, we studied whether a colour bias can also be found in a more complex and continuous task. Subjects had to search for a target in a node-link diagram consisting of 50 nodes, while their eye movements were being tracked. Each node was assigned a random colour and size (from a range of 10 possible values with fixed perceptual distances). We found that when we base the distances on the mean threshold contrasts that were determined in our previous experiments, the fixated nodes tend to resemble the target colour more than the target size (Figure 1a). This indicates that despite the perceptual matching, colour is judged with greater precision than size during conjunction search. We also found that when we double the size contrast (i.e. the distances between the 10 possible node sizes), this effect disappears (Figure 1b).
Our findings confirm that the previously found decrease in salience of other features during colour conjunction search is also present in more complex (more 'visualisation-realistic') visual search tasks. The asymmetry in visual search behaviour can be compensated for by manipulating step sizes (perceptual distances) within feature dimensions. Our results therefore also imply that feature hierarchies are not completely fixed and may be adapted to the requirements of a particular visualisation. Copyright © 2005 by the Association for Computing Machinery, Inc.
Abstract:
Change detection is a classic paradigm that has been used for decades to argue that working memory can hold no more than a fixed number of items ("item-limit models"). Recent findings force us to consider the alternative view that working memory is limited by the precision in stimulus encoding, with mean precision decreasing with increasing set size ("continuous-resource models"). Most previous studies that used the change detection paradigm have ignored effects of limited encoding precision by using highly discriminable stimuli and only large changes. We conducted two change detection experiments (orientation and color) in which change magnitudes were drawn from a wide range, including small changes. In a rigorous comparison of five models, we found no evidence of an item limit. Instead, human change detection performance was best explained by a continuous-resource model in which encoding precision is variable across items and trials even at a given set size. This model accounts for comparison errors in a principled, probabilistic manner. Our findings sharply challenge the theoretical basis for most neural studies of working memory capacity.
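The continuous-resource idea the abstract favours can be made concrete with a small simulation: mean encoding precision falls with set size as a power law, per-item precision varies randomly around that mean, and the observer reports a change when any measured change exceeds a criterion. All parameter values below (power-law exponent, gamma shape, criterion) are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

def simulate_change_detection(set_size, change_mag, j1=30.0, alpha=1.0,
                              trials=2000, criterion=0.5, rng=None):
    """Hit/false-alarm rate of a toy variable-precision observer.

    Mean precision: J_bar = j1 * set_size**(-alpha).  Per-item precision is
    gamma-distributed around J_bar (variable across items and trials), and
    measurement noise has std 1/sqrt(J).  The sqrt(2) factor reflects
    comparing two noisy encodings (pre- and post-change displays).
    Returns the fraction of trials on which "change" is reported."""
    rng = rng or np.random.default_rng()
    j_bar = j1 * set_size ** (-alpha)
    J = rng.gamma(shape=2.0, scale=j_bar / 2.0, size=(trials, set_size))
    sigma = 1.0 / np.sqrt(J)
    true_change = np.zeros((trials, set_size))
    true_change[:, 0] = change_mag            # at most one item changes
    measured = true_change + np.sqrt(2.0) * sigma * rng.standard_normal(
        (trials, set_size))
    return float((np.abs(measured).max(axis=1) > criterion).mean())
```

Running this across a range of change magnitudes reproduces the qualitative point of the abstract: with small changes included, performance is graded by precision rather than gated by a fixed item limit.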
Abstract:
This paper investigates the development of miniature McKibben actuators. Due to their compliance, high actuation force, and precision, these actuators are interesting on the one hand for medical applications such as prostheses and surgical instruments, and on the other hand for industrial applications such as assembly robots. During this research, pneumatic McKibben actuators have been miniaturized to an outside diameter of 1.5 mm and a length ranging from 22 mm to 62 mm. These actuators are able to achieve forces of 6 N and strains of up to about 15% at a supply pressure of 1 MPa. The maximal actuation speed of the actuators measured during this research is more than 350 mm/s. Furthermore, positioning experiments with a laser interferometer and a PI controller revealed that these actuators are able to achieve sub-micron positioning resolution. © 2010 Published by Elsevier B.V. All rights reserved.
Abstract:
In order to improve the power density of microactuators, recent research focuses on the applicability of fluidic actuation at the microscale. The main difficulties encountered in the development of small fluidic actuators are related to production tolerances and assembly requirements. In addition, these actuators tend to comprise highly three-dimensional parts, which are incompatible with traditional microproduction technologies. This paper presents accurate production and novel assembly techniques for the development of a hydraulic microactuator. Some of the presented techniques are widespread in precision mechanics, but have not yet been introduced in micromechanics. A prototype hydraulic microactuator with a bore of 1 mm and a length of 13 mm has been fabricated and tested. Measurements showed that this actuator is able to generate a force density of more than 0.23 N mm⁻² and a work density of 0.18 mJ mm⁻³ at a driving pressure of 550 kPa, which is remarkable considering the small dimensions of the actuator. © 2005 IOP Publishing Ltd.
Abstract:
We consider a method for approximate inference in hidden Markov models (HMMs). The method circumvents the need to evaluate conditional densities of observations given the hidden states. It may be considered an instance of Approximate Bayesian Computation (ABC) and it involves the introduction of auxiliary variables valued in the same space as the observations. The quality of the approximation may be controlled to arbitrary precision through a parameter ε > 0. We provide theoretical results which quantify, in terms of ε, the ABC error in approximation of expectations of additive functionals with respect to the smoothing distributions. Under regularity assumptions, this error is of order nε, where n is the number of time steps over which smoothing is performed. For numerical implementation, we adopt the forward-only sequential Monte Carlo (SMC) scheme of [14] and quantify the combined error from the ABC and SMC approximations. This forms some of the first quantitative results for ABC methods which jointly treat the ABC and simulation errors, with a finite number of data and simulated samples. © Taylor & Francis Group, LLC.
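The core ABC idea above can be sketched as a bootstrap particle filter in which, instead of evaluating the observation density, each particle simulates a pseudo-observation (the auxiliary variable) and is weighted by whether it lands within ε of the datum. This is a minimal sketch of that mechanism only, under a simple indicator kernel; the paper's forward-only SMC smoother and error analysis are considerably more involved, and all names are illustrative.

```python
import numpy as np

def abc_particle_filter(y, f_sample, g_sample, n_particles=500, eps=0.5, rng=None):
    """ABC bootstrap particle filter: the observation density g(y|x) is
    never evaluated.  f_sample(x, rng) samples the next hidden state;
    g_sample(x, rng) samples a pseudo-observation given state x.
    Returns the sequence of approximate filtering means."""
    rng = rng or np.random.default_rng()
    x = np.zeros(n_particles)                       # simple fixed initialisation
    means = []
    for t, yt in enumerate(y):
        if t > 0:                                   # propagate through f
            x = np.array([f_sample(xi, rng) for xi in x])
        u = np.array([g_sample(xi, rng) for xi in x])   # auxiliary variables
        w = (np.abs(u - yt) < eps).astype(float)        # ABC indicator kernel
        if w.sum() == 0:
            w[:] = 1.0                              # all rejected: keep all
        w /= w.sum()
        means.append(float(np.dot(w, x)))
        x = x[rng.choice(n_particles, n_particles, p=w)]  # multinomial resample
    return np.array(means)
```

Shrinking ε tightens the approximation (consistent with the error bound quoted above) at the cost of rejecting more pseudo-observations, which is the usual ABC accuracy/efficiency trade-off.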
Abstract:
The article discusses the progress and issues related to transparent oxide semiconductor (TOS) TFTs for advanced display and imaging applications. Amorphous oxide semiconductors continue to spark new technological developments in transparent electronics on a multitude of non-conventional substrates. Applications range from high-frame-rate interactive displays with embedded imaging to flexible electronics, where speed and transparency are essential requirements. TOS TFTs exhibit high transparency as well as high electron mobility even when fabricated at room temperature. Compared to conventional a-Si TFT technology, TOS TFTs have higher mobility and sufficiently good uniformity over large areas, similar in many ways to LTPS TFTs. Moreover, because the amorphous oxide semiconductor has higher mobility compared to that of conventional a-Si TFT technology, this allows higher-frame-rate display operation. This would greatly benefit OLED displays in particular because of the need for lower-cost higher-mobility analog circuits at every subpixel.
Abstract:
We report an empirical study of n-gram posterior probability confidence measures for statistical machine translation (SMT). We first describe an efficient and practical algorithm for rapidly computing n-gram posterior probabilities from large translation word lattices. These probabilities are shown to be a good predictor of whether or not the n-gram is found in human reference translations, motivating their use as a confidence measure for SMT. Comprehensive n-gram precision and word coverage measurements are presented for a variety of different language pairs, domains and conditions. We analyze the effect on reference precision of using single or multiple references, and compare the precision of posteriors computed from k-best lists to those computed over the full evidence space of the lattice. We also demonstrate improved confidence by combining multiple lattices in a multi-source translation framework. © 2012 The Author(s).
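The k-best variant of the n-gram posterior computation compared in the abstract can be sketched compactly: normalise hypothesis scores with a softmax, then credit each n-gram with the total probability of the hypotheses that contain it. This is only the k-best approximation; the paper's efficient lattice algorithm operates on the full evidence space, and the function name and score convention (log scores) are assumptions.

```python
from collections import defaultdict
import math

def ngram_posteriors(kbest, n=2):
    """n-gram posterior probabilities from a k-best list.

    kbest: list of (hypothesis_tokens, log_score) pairs.  Each n-gram's
    posterior is the summed normalised probability of the hypotheses in
    which it occurs (counted once per hypothesis)."""
    m = max(s for _, s in kbest)                      # stabilise the softmax
    probs = [math.exp(s - m) for _, s in kbest]
    z = sum(probs)
    probs = [p / z for p in probs]
    post = defaultdict(float)
    for (toks, _), p in zip(kbest, probs):
        grams = {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}
        for g in grams:
            post[g] += p
    return dict(post)
```

An n-gram shared by every hypothesis gets posterior 1.0, while one appearing in only a few low-probability hypotheses gets a small value, which is what makes these quantities usable as confidence measures.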