1000 results for 510 Mathematics
Abstract:
In this paper, we present a consolidation method that is based on a new representation of 3D point sets. The key idea is to augment each surface point into a deep point by associating it with an inner point that resides on the meso-skeleton, which consists of a mixture of skeletal curves and sheets. The deep points representation is the result of a joint optimization applied to both ends of the deep points. The optimization objective is to fairly distribute the end points across the surface and the meso-skeleton, such that the deep point orientations agree with the surface normals. The optimization converges when the inner points form a coherent meso-skeleton and the surface points are consolidated, with the missing regions completed. The strength of this new representation stems from the fact that it comprises both local and non-local geometric information. We demonstrate the advantages of the deep points consolidation technique by employing it to consolidate and complete noisy point-sampled geometry with large missing parts.
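For intuition, a minimal Python sketch of the deep-point pairing follows: each surface sample p_i carries an inner point q_i whose orientation p_i - q_i is kept aligned with the surface normal n_i, while the inner points are contracted toward one another to form a coherent skeleton. This is a toy illustration under simplified assumptions (the initialization, the contraction step, and all parameters are placeholders), not the authors' solver.

```python
# Toy sketch of the "deep point" idea: pair each surface sample with an
# inner point and alternate a skeleton-coherence step with an
# orientation-consistency step. Illustrative only.
import numpy as np

def init_deep_points(points, normals, depth=0.1):
    """Initialize inner points by stepping inward along the normals."""
    return points - depth * normals

def consolidate(points, normals, iters=20, depth=0.1, k=8):
    inner = init_deep_points(points, normals, depth)
    for _ in range(iters):
        # (a) Skeleton coherence: contract each inner point toward the
        #     mean of its k nearest inner neighbours.
        d2 = ((inner[:, None, :] - inner[None, :, :]) ** 2).sum(-1)
        nbrs = np.argsort(d2, axis=1)[:, 1:k + 1]
        inner = 0.5 * inner + 0.5 * inner[nbrs].mean(axis=1)
        # (b) Orientation consistency: keep p_i - q_i aligned with n_i
        #     by projecting q_i onto the line through p_i along n_i.
        t = ((points - inner) * normals).sum(-1, keepdims=True)
        inner = points - np.maximum(t, depth) * normals
    return inner  # approximate meso-skeleton samples

# Toy usage: noisy samples of a unit sphere, outward unit normals.
pts = np.random.randn(200, 3)
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
skeleton = consolidate(pts, pts.copy())
```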
Abstract:
Monte Carlo integration is firmly established as the basis for most practical realistic image synthesis algorithms because of its flexibility and generality. However, the visual quality of rendered images often suffers from estimator variance, which appears as visually distracting noise. Adaptive sampling and reconstruction algorithms reduce variance by controlling the sampling density and aggregating samples in a reconstruction step, possibly over large image regions. In this paper we survey recent advances in this area. We distinguish between “a priori” methods that analyze the light transport equations and derive sampling rates and reconstruction filters from this analysis, and “a posteriori” methods that apply statistical techniques to sets of samples to drive the adaptive sampling and reconstruction process. They typically estimate the errors of several reconstruction filters, and select the best filter locally to minimize error. We discuss advantages and disadvantages of recent state-of-the-art techniques, and provide visual and quantitative comparisons. Some of these techniques are proving useful in real-world applications, and we aim to provide an overview for practitioners and researchers to assess these approaches. In addition, we discuss directions for potential further improvements.
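To make the "a posteriori" idea concrete, here is a toy Python sketch that runs several candidate Gaussian reconstruction filters and picks, per pixel, the one minimizing a SURE-style error estimate. The filter bank, the known noise level `sigma`, and the smoothing of the error maps are illustrative assumptions, not a specific published method.

```python
# Toy "a posteriori" filter selection: estimate the error of each
# candidate filter per pixel and keep the local winner.
import numpy as np
from scipy.ndimage import gaussian_filter

def sure_map(noisy, filtered, sigma, center_weight):
    # Per-pixel Stein unbiased risk estimate for a linear filter:
    # (F(y)-y)^2 - sigma^2 + 2*sigma^2 * dF_i/dy_i (the center weight).
    return (filtered - noisy) ** 2 - sigma ** 2 + 2 * sigma ** 2 * center_weight

def select_filter(noisy, sigma, radii=(0.5, 1.0, 2.0, 4.0)):
    candidates, errors = [], []
    for r in radii:
        f = gaussian_filter(noisy, r)
        # Center weight of the kernel = the filter's response to a delta.
        delta = np.zeros((33, 33)); delta[16, 16] = 1.0
        w0 = gaussian_filter(delta, r)[16, 16]
        candidates.append(f)
        # Smooth the noisy SURE map before comparing filters.
        errors.append(gaussian_filter(sure_map(noisy, f, sigma, w0), 2.0))
    best = np.argmin(np.stack(errors), axis=0)  # per-pixel winner
    return np.take_along_axis(np.stack(candidates), best[None], axis=0)[0]

# Toy usage: denoise a smooth ramp corrupted by Gaussian noise.
clean = np.linspace(0, 1, 128)[None].repeat(128, axis=0)
noisy = clean + 0.1 * np.random.randn(128, 128)
out = select_filter(noisy, sigma=0.1)
```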
Abstract:
We propose dual-domain filtering, an image-processing paradigm that couples spatial-domain with frequency-domain filtering. The resulting dual-domain filter removes artifacts such as the residual noise left by other image-denoising methods and compression artifacts. Moreover, iterating the filter achieves state-of-the-art image-denoising results with a much simpler algorithm than competing approaches. The simplicity and versatility of the dual-domain filter make it an attractive tool for image processing.
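The coupling of the two domains can be illustrated with a toy sketch: a spatial low-pass estimate supplies the base layer, and the residual is denoised by shrinking small Fourier coefficients; iterating refines the guide. The actual dual-domain filter uses a joint bilateral filter and per-pixel short-time Fourier shrinkage; this global-FFT variant is a simplified assumption.

```python
# Toy coupling of spatial- and frequency-domain filtering.
import numpy as np
from scipy.ndimage import gaussian_filter

def dual_domain_step(noisy, guide, sigma_s=2.0, sigma_n=0.1):
    # Spatial domain: smooth estimate of the low frequencies.
    base = gaussian_filter(guide, sigma_s)
    # Frequency domain: shrink small Fourier coefficients of the
    # residual, which carry mostly noise (coefficient std = sigma*sqrt(N)).
    spec = np.fft.fft2(noisy - base)
    thresh = 3.0 * sigma_n * np.sqrt(noisy.size)
    spec *= np.abs(spec) > thresh          # hard shrinkage
    return base + np.real(np.fft.ifft2(spec))

def dual_domain_filter(noisy, iters=3, **kw):
    guide = noisy
    for _ in range(iters):                 # iterate, refining the guide
        guide = dual_domain_step(noisy, guide, **kw)
    return guide

# Toy usage: a step edge with additive Gaussian noise.
img = np.zeros((64, 64)); img[:, 32:] = 1.0
out = dual_domain_filter(img + 0.1 * np.random.randn(64, 64), sigma_n=0.1)
```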
Abstract:
State-of-the-art methods for disparity estimation achieve good results for single stereo frames, but temporal coherence in stereo videos is often neglected. In this paper we present a method to compute temporally coherent disparity maps. We define an energy over whole stereo sequences and optimize their Conditional Random Field (CRF) distributions using mean-field approximation. We introduce novel terms for smoothness and consistency between the left and right views, and perform CRF optimization by fast, iterative spatio-temporal filtering with linear complexity in the total number of pixels. Our results rank among the state of the art while exhibiting significantly fewer flickering artifacts in stereo sequences.
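A minimal sketch of mean-field CRF inference accelerated by Gaussian filtering (in the style of fast dense-CRF inference) appears below. It filters the label marginals spatially only; the paper's novel left-right consistency and temporal terms are not reproduced, and all parameters are illustrative.

```python
# Toy mean-field update for a dense CRF over disparity labels, with the
# pairwise message computed by Gaussian-filtering the marginals.
import numpy as np
from scipy.ndimage import gaussian_filter

def mean_field(unary, iters=5, sigma=3.0, w=2.0):
    """unary: (L, H, W) negative log-likelihoods per disparity label."""
    q = np.exp(-unary); q /= q.sum(0, keepdims=True)     # init marginals
    labels = np.arange(unary.shape[0])
    compat = np.abs(labels[:, None] - labels[None, :])   # linear penalty
    for _ in range(iters):
        # Message passing as filtering of the marginals (linear in pixels).
        msg = np.stack([gaussian_filter(q[l], sigma) for l in labels])
        pairwise = np.tensordot(compat, msg, axes=(1, 0))  # (L, H, W)
        q = np.exp(-unary - w * pairwise)
        q /= q.sum(0, keepdims=True)
    return q.argmax(0)  # MAP-like disparity map

# Toy usage: 8 disparity labels on a 32x48 image with random unaries.
disp = mean_field(np.random.rand(8, 32, 48))
```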
Abstract:
In this paper we solve a problem raised by Gutiérrez and Montanari about comparison principles for H-convex functions on subdomains of Heisenberg groups. Our approach is based on the notion of the sub-Riemannian horizontal normal mapping and uses degree theory for set-valued maps. The comparison principle, combined with a Harnack inequality, is applied to prove an Aleksandrov-type maximum principle describing the correct boundary behavior of continuous H-convex functions vanishing at the boundary of horizontally bounded subdomains of Heisenberg groups. This result answers a question by Garofalo and Tournier. The sharpness of our results is illustrated by examples.
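For context, one standard definition of (weak) H-convexity, with $V_1$ the horizontal layer of the Lie algebra, is the following; the paper may work with an equivalent formulation:
\[
u \colon \Omega \to \mathbb{R} \ \text{is } H\text{-convex} \iff t \mapsto u\bigl(g \cdot \exp(tX)\bigr) \ \text{is convex for every } g \in \Omega \text{ and } X \in V_1 .
\]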
Abstract:
Steiner’s tube formula states that the volume of an ϵ-neighborhood of a smooth regular domain in $\mathbb{R}^n$ is a polynomial of degree $n$ in the variable ϵ whose coefficients are curvature integrals (also called quermassintegrals). We prove a similar result in the sub-Riemannian setting of the first Heisenberg group. In contrast to the Euclidean setting, we find that the volume of an ϵ-neighborhood with respect to the Heisenberg metric is an analytic function of ϵ that is generally not a polynomial. The coefficients of the series expansion can be written explicitly in terms of integrals of iteratively defined canonical polynomials in just five curvature terms.
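For comparison, the classical Euclidean statement referenced above reads
\[
\operatorname{Vol}(\Omega_\epsilon) \;=\; \sum_{k=0}^{n} \binom{n}{k}\, W_k(\Omega)\,\epsilon^k,
\]
where the $W_k(\Omega)$ are the quermassintegrals; it is precisely this polynomial behavior that fails for the Heisenberg metric.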
Abstract:
The modulus method introduced by H. Grötzsch yields bounds for a mean distortion functional of quasiconformal maps between two annuli mapping the respective boundary components onto each other. P. P. Belinskiĭ studied these inequalities in the plane and identified the family of all minimisers. Beyond the Euclidean framework, a Grötzsch-Belinskiĭ-type inequality has previously been considered for quasiconformal maps between annuli in the Heisenberg group whose boundaries are Korányi spheres. In this note we show that, in contrast to the planar situation, the minimiser in this setting is essentially unique.
Abstract:
Let $\mathbb{H}^n$ be the Heisenberg group of topological dimension $2n+1$. We prove that if $n$ is odd, the pair of metric spaces $(\mathbb{H}^n, \mathbb{H}^n)$ does not have the Lipschitz extension property.
Abstract:
We apply the theory of Peres and Schlag to obtain generic lower bounds for the Hausdorff dimension of images of sets under orthogonal projections on simply connected two-dimensional Riemannian manifolds of constant curvature. As a consequence, we obtain appropriate versions of Marstrand's theorem, Kaufman's theorem, and Falconer's theorem in these geometric settings.
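For reference, the classical planar case of Marstrand's projection theorem, which the above generalizes to constant-curvature surfaces, states that for every Borel set $A \subset \mathbb{R}^2$,
\[
\dim_H \bigl(\mathrm{proj}_\theta A\bigr) \;=\; \min\{\dim_H A,\ 1\} \quad \text{for almost every } \theta \in [0,\pi).
\]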
Abstract:
Among all torus links, we characterise those arising as links of simple plane curve singularities by the property that their fibre surfaces admit only a finite number of cutting arcs that preserve fibredness. The same property allows a characterisation of Coxeter-Dynkin trees (i.e., $A_n$, $D_n$, $E_6$, $E_7$ and $E_8$) among all positive tree-like Hopf plumbings.
Abstract:
We present a novel algorithm to reconstruct high-quality images from sampled pixels and gradients in gradient-domain rendering. Our approach extends screened Poisson reconstruction by adding additional regularization constraints. Our key idea is to exploit local patches in feature images, which contain per-pixel normals, textures, positions, etc., to formulate these constraints. We describe a GPU implementation of our approach that runs on the order of seconds for megapixel images. We demonstrate a significant improvement in image quality over screened Poisson reconstruction under the L1 norm. Because we adapt the regularization constraints to the noise level in the input, our algorithm is consistent and converges to the ground truth.
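As a point of reference, screened Poisson reconstruction recovers the image $I$ from sampled pixels $b$ and gradients $g$ by solving
\[
\min_{I}\; \|\nabla I - g\|_2^2 \;+\; \alpha\,\|I - b\|_2^2 ,
\]
with screening weight $\alpha$; the method above augments this objective with additional patch-based regularization constraints derived from the feature images (the notation here is ours, not the paper's).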
Abstract:
With the ongoing shift in the computer graphics industry toward Monte Carlo rendering, there is a need for effective, practical noise-reduction techniques that are applicable to a wide range of rendering effects and easily integrated into existing production pipelines. This course surveys recent advances in image-space adaptive sampling and reconstruction algorithms for noise reduction, which have proven very effective at reducing the computational cost of Monte Carlo techniques in practice. These approaches leverage advanced image-filtering techniques with statistical methods for error estimation. They are attractive because they can be integrated easily into conventional Monte Carlo rendering frameworks, they are applicable to most rendering effects, and their computational overhead is modest.
Abstract:
The aim of this note is to characterize all pairs of sufficiently smooth functions for which the mean value in the Cauchy mean value theorem is taken at a point which has a well-determined position in the interval. As an application of this result, a partial answer is given to a question posed by Sahoo and Riedel.
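Recall the statement in question: for $f, g$ continuous on $[a,b]$ and differentiable on $(a,b)$, the Cauchy mean value theorem provides a point $\xi \in (a,b)$ with
\[
\bigl(f(b)-f(a)\bigr)\,g'(\xi) \;=\; \bigl(g(b)-g(a)\bigr)\,f'(\xi);
\]
the note characterizes the pairs $(f,g)$ for which the position of $\xi$ in the interval is well determined.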
Abstract:
We explicitly describe a generic representation for Dynkin quivers of type $A_n$ or $D_n$ for any dimension vector.
Abstract:
Indoor positioning has attracted considerable attention for decades due to the increasing demand for location-based services. Although numerous methods have been proposed for indoor positioning over the years, it is still challenging to find a convincing solution that combines high positioning accuracy with ease of deployment. Radio-based indoor positioning has emerged as a dominant approach due to its ubiquity, especially for WiFi. RSSI (Received Signal Strength Indicator) has been investigated for indoor positioning for decades; however, it is prone to multipath propagation, and hence fingerprinting has become the most commonly used RSSI-based method. The drawback of fingerprinting is that it requires intensive labour to calibrate the radio map beforehand, which makes deploying the positioning system very time consuming. Using timing information instead is challenged by the need for time synchronization among anchor nodes and by limited timestamp accuracy. Besides radio-based methods, intensive research has been conducted on inertial sensors for indoor tracking, driven by the rapid development of smartphones; however, these methods normally suffer from accumulative errors and may not be applicable in some scenarios, such as passive positioning.

This thesis focuses on network-based indoor positioning and tracking systems, mainly for passive positioning, which does not require the participation of targets in the positioning process. To achieve high positioning accuracy, we exploit information about radio signals obtained from physical-layer processing, such as timestamps and channel information. The contributions of this thesis fall into two parts: time-based positioning and channel-information-based positioning.

First, for time-based indoor positioning (especially with narrow-band signals), we address the challenges of compensating synchronization offsets among anchor nodes, designing timestamps with high resolution, and developing accurate positioning methods. Second, we develop range-based positioning methods that use channel information to passively locate and track WiFi targets. Aiming at low deployment effort, we focus on range-based methods, which require far less calibration than fingerprinting. By designing novel enhanced methods for both ranging and positioning (including trilateration for stationary targets and a particle filter for mobile targets), we are able to locate WiFi targets with high accuracy relying solely on radio signals, and our proposed enhanced particle filter significantly outperforms other commonly used range-based positioning algorithms, e.g., a traditional particle filter, an extended Kalman filter, and trilateration. In addition to using radio signals for passive positioning, we propose a second enhanced particle filter for active positioning that fuses inertial-sensor and channel information to track indoor targets, achieving higher tracking accuracy than tracking methods relying solely on either radio signals or inertial sensors.
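As an illustration of the range-based building blocks discussed above, the following Python sketch solves the basic trilateration problem by linearized least squares. The anchor layout, the noise model, and all parameters are illustrative assumptions; the thesis's enhanced ranging and particle-filter methods are not reproduced here.

```python
# Minimal linearized least-squares trilateration from noisy ranges.
import numpy as np

def trilaterate(anchors, ranges):
    """Solve ||x - a_i|| = d_i by subtracting the first equation, which
    linearizes the system:
        2 (a_i - a_1) . x = ||a_i||^2 - ||a_1||^2 - d_i^2 + d_1^2,  i >= 2.
    """
    a1, d1 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - a1)
    b = (np.sum(anchors[1:] ** 2, axis=1) - np.sum(a1 ** 2)
         - ranges[1:] ** 2 + d1 ** 2)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Toy usage: four anchors, noisy ranges to a target at (3, 4).
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
target = np.array([3.0, 4.0])
ranges = np.linalg.norm(anchors - target, axis=1) + 0.05 * np.random.randn(4)
print(trilaterate(anchors, ranges))   # approximately [3, 4]
```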