17 results for blur
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
Radiography is part of evaluating horses with poor performance and pelvic limb lameness; however, the radiographic appearance of the sacroiliac region is poorly described. The goal of the present study was to describe a simple technique for obtaining radiographs of the sacroiliac region in the anesthetized horse and to describe the radiographic appearance of this region. Seventy-nine horses underwent radiography of the pelvis under general anesthesia in dorsal recumbency. During a 5 s exposure, the horse was actively ventilated to blur the abdominal viscera, which allowed assessment of individual bone structures in 77 horses. Large variation was observed in the shape of the sacral wings, their articulation with the transverse processes of L6, and the relation of the sacrum to the ilium. Females had significantly narrower sacral wings. Broad sacral wings and bony proliferations at their caudal aspect were commonly observed, and their size was highly correlated with sex: caudal osteophytes were significantly larger in males than in females. Five horses had transitional or hemitransitional vertebrae. Radiography with this ventilation-induced blurring technique is a simple approach that yields diagnostic-quality radiographs and delineation of the highly variable bone structures of the sacroiliac region.
Abstract:
Most languages fall into one of two camps: either they adopt a unique, static type system, or they abandon static type checks in favor of run-time checks. Pluggable types blur this division by (i) making static type systems optional, and (ii) supporting a choice of type systems for reasoning about different kinds of static properties. Dynamic languages can then benefit from static checking without sacrificing dynamic features or committing to a unique, static type system. But the overhead of adopting pluggable types can be very high, especially if all existing code must be decorated with type annotations before any type checking can be performed. We propose a practical and pragmatic approach to introducing pluggable type systems to dynamic languages. First, only annotated code is type-checked. Second, limited type inference is performed on unannotated code to reduce the number of reported errors. Finally, external annotations can be used to type third-party code. We present Typeplug, a Smalltalk implementation of our framework, and report on experience applying the framework to three different pluggable type systems.
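The first ingredient, checking only annotated code, can be illustrated with a short sketch. The Python snippet below is not Typeplug (a Smalltalk system performing static checks) and checks types at run time instead; it merely shows how a pluggable checker can verify exactly those parameters that carry annotations while unannotated code keeps its fully dynamic behaviour:

```python
import inspect
from typing import get_type_hints

def checked(fn):
    """Verify, at call time, exactly those parameters that carry a type
    annotation; unannotated parameters stay fully dynamic."""
    hints = get_type_hints(fn)
    sig = inspect.signature(fn)

    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            if name in hints and not isinstance(value, hints[name]):
                raise TypeError(f"{fn.__name__}: {name} should be "
                                f"{hints[name].__name__}, "
                                f"got {type(value).__name__}")
        result = fn(*args, **kwargs)
        if "return" in hints and not isinstance(result, hints["return"]):
            raise TypeError(f"{fn.__name__}: bad return type")
        return result
    return wrapper

@checked
def scale(x: int, factor) -> int:   # 'factor' is unannotated: never checked
    return x * factor

print(scale(3, 2))                  # ok -> 6
# scale("3", 2)                     # raises TypeError: x should be int
```

Because the checker is plugged in per function, removing the decorator restores the language's default dynamic behaviour, mirroring the optional nature of pluggable type systems.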
Abstract:
This Trans-Himalayan tale unites two narratives: a historical account of scholarly thinking about linguistic phylogeny in eastern Eurasia, and a reconstruction of the ethnolinguistic prehistory of eastern Eurasia based on linguistic and human population genetic phylogeography. The first story traces the transformation in thought regarding language relationships in eastern Eurasia from Tibeto-Burman to Trans-Himalayan. The path is strewn with defunct family trees such as Indo-Chinese, Sino-Tibetan, Sino-Himalayan and Sino-Kiranti. In the heyday of racism in scholarship, Social Darwinism coloured both language typology and the phylogenetic models of language relationship in eastern Eurasia; its influential role in the perpetuation of the Indo-Chinese model is generally left untold. The second narrative presents a conjectural reconstruction of the ethnolinguistic prehistory of eastern Eurasia based on possible correlations between genes and language communities. In so doing, biological ancestry and linguistic affinity are meticulously distinguished, a distinction which the language typologists of yore sought to blur, although the independence of language and race was stressed time and again by prominent historical linguists.
Abstract:
In this thesis, we develop an adaptive framework for Monte Carlo rendering, and more specifically for Monte Carlo Path Tracing (MCPT) and its derivatives. MCPT is attractive because it can handle a wide variety of light transport effects, such as depth of field, motion blur, indirect illumination, and participating media, in an elegant and unified framework. However, MCPT is a sampling-based approach and is only guaranteed to converge in the limit, as the sampling rate grows to infinity. At finite sampling rates, MCPT renderings are often plagued by noise artifacts that can be visually distracting. The adaptive framework developed in this thesis leverages two core strategies to address noise artifacts in renderings: adaptive sampling and adaptive reconstruction. Adaptive sampling consists of increasing the sampling rate on a per-pixel basis, to ensure that each pixel value is below a predefined error threshold. Adaptive reconstruction leverages the available samples on a per-pixel basis, aiming for an optimal trade-off between minimizing the residual noise artifacts and preserving the edges in the image. In our framework, we greedily minimize the relative Mean Squared Error (rMSE) of the rendering by iterating over sampling and reconstruction steps. Given an initial set of samples, the reconstruction step aims at producing the rendering with the lowest rMSE on a per-pixel basis, and the next sampling step then further reduces the rMSE by distributing additional samples according to the magnitude of the residual rMSE of the reconstruction. This iterative approach tightly couples the adaptive sampling and adaptive reconstruction strategies, by ensuring that we densely sample only those regions of the image where adaptive reconstruction cannot properly resolve the noise. In the first implementation of our framework, we demonstrate the usefulness of our greedy error minimization using a simple reconstruction scheme leveraging a filterbank of isotropic Gaussian filters. In the second implementation, we integrate a powerful edge-aware filter that can adapt to the anisotropy of the image. Finally, in the third implementation, we leverage auxiliary feature buffers that encode scene information (such as surface normals, position, or texture) to improve the robustness of the reconstruction in the presence of strong noise.
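As a concrete illustration of the iterate-sample-then-reconstruct loop, here is a small Python sketch on a toy 2D estimation problem standing in for MCPT. The filterbank of isotropic Gaussians mirrors the first implementation described above; the error heuristics, names, and toy scene are our own assumptions, not the thesis' estimators:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
H, W = 64, 64
yy, xx = np.mgrid[0:H, 0:W]
truth = np.sin(xx / 6.0) * np.cos(yy / 9.0)   # stand-in for the true image
NOISE = 0.5                                   # per-sample standard deviation
SIGMAS = [0.0, 1.0, 2.0, 4.0]                 # isotropic Gaussian filterbank

sums = np.zeros((H, W)); counts = np.zeros((H, W))

def draw(mask):
    """Draw one extra noisy sample for every pixel where mask is True."""
    sums[mask] += truth[mask] + NOISE * rng.standard_normal(int(mask.sum()))
    counts[mask] += 1

draw(np.ones((H, W), bool))                   # initial uniform sampling pass
for it in range(8):
    mean = sums / counts
    var = NOISE ** 2 / counts                 # variance of the pixel mean
    # Reconstruction: per pixel, pick the filterbank scale with the best
    # estimated bias^2 + variance trade-off (crude heuristics, for the sketch).
    best = np.full((H, W), np.inf); recon = mean.copy()
    for s in SIGMAS:
        f = gaussian_filter(mean, s) if s > 0 else mean
        mse = (f - mean) ** 2 + var / max(1.0, 4 * s * s)
        take = mse < best
        recon[take] = f[take]; best[take] = mse[take]
    # Sampling: send the next batch where the relative error is largest.
    rel = best / (recon ** 2 + 1e-2)
    draw(rel >= np.quantile(rel, 0.75))

print("mean samples/pixel:", counts.mean())
print("rMSE vs truth:", np.mean((recon - truth) ** 2 / (truth ** 2 + 1e-2)))
```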
Abstract:
We propose a method to acquire 3D light fields using a hand-held camera, and describe several computational photography applications facilitated by our approach. As input we take an image sequence from a camera translating along an approximately linear path with limited camera rotations. Users can acquire such data easily in a few seconds by moving a hand-held camera. We include a novel approach to resample the input into a regularly sampled 3D light field by aligning the images in the spatio-temporal domain, and a technique for high-quality disparity estimation from light fields. We show applications including digital refocusing and synthetic aperture blur, foreground removal, selective colorization, and others.
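For the refocusing application, the core operation on a regularly sampled 3D light field is standard shift-and-average synthetic aperture imaging. A minimal sketch, assuming horizontal-parallax-only views from a linear path (array names and the toy data are ours, not the paper's pipeline):

```python
import numpy as np

def refocus(light_field, disparity):
    """light_field: (N, H, W) views from a linear camera path. Shift each
    view by `disparity` pixels per unit camera offset and average: points
    at the matching depth align (come into focus), everything else gets
    synthetic aperture blur."""
    n, h, w = light_field.shape
    centre = (n - 1) / 2.0
    out = np.zeros((h, w))
    for i, view in enumerate(light_field):
        out += np.roll(view, int(round((i - centre) * disparity)), axis=1)
    return out / n

# Toy usage: 9 views of a random texture with 2 px/view horizontal parallax.
rng = np.random.default_rng(1)
base = rng.random((64, 64))
lf = np.stack([np.roll(base, -2 * (i - 4), axis=1) for i in range(9)])
in_focus = refocus(lf, 2.0)    # aligned with the texture's disparity: sharp
defocused = refocus(lf, 0.0)   # focused at another depth: texture blurred
```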
Abstract:
At first sight, experimenting and modeling form two distinct modes of scientific inquiry. This spurs philosophical debates about how the distinction should be drawn (e.g. Morgan 2005, Winsberg 2009, Parker 2009). But much scientific practice casts serious doubt on the idea that the distinction makes much sense. There are two worries. First, the practices of modeling and experimenting are often intertwined in intricate ways, because much modeling involves experimenting, and the interpretation of many experiments relies upon models. Second, there are borderline cases that seem to blur the distinction between experiment and model (if there is any). My talk tries to defend the philosophical project of distinguishing models from experiments and to advance the related philosophical debate. I begin by providing a minimalist framework for conceptualizing experimenting and modeling and their mutual relationships. The two methods are conceptualized as different types of activities, each characterized by its own primary goal. The minimalist framework, which should be uncontroversial, suffices to accommodate the first worry. I address the second worry by suggesting several ways to conceptualize the distinction more flexibly, and I make a concrete suggestion of how the distinction may be drawn. I use examples from the history of science to argue my case. The talk concentrates on models and experiments, but I will comment on simulations too.
Abstract:
Real cameras have a limited depth of field. The resulting defocus blur is a valuable cue for estimating the depth structure of a scene. Using coded apertures, depth can be estimated from a single frame. For optical flow estimation between frames, however, the depth-dependent degradation can introduce errors. These errors are most prominent when objects move relative to the focal plane of the camera. We incorporate coded aperture defocus blur into optical flow estimation and allow for piecewise-smooth 3D motion of objects. With coded aperture flow, we can establish dense correspondences between pixels in succeeding coded aperture frames. We compare several approaches to computing accurate correspondences for coded aperture images showing objects with arbitrary 3D motion.
Abstract:
In this paper we study the problem of blind deconvolution. Our analysis is based on the algorithm of Chan and Wong [2], which popularized the use of sparse gradient priors via total variation. We use this algorithm because many methods in the literature are essentially adaptations of this framework. The algorithm is an iterative alternating energy minimization in which each step reconstructs either the sharp image or the blur function. Recent work of Levin et al. [14] showed that any algorithm that tries to minimize that same energy would fail, as the desired solution has a higher energy than the no-blur solution, in which the sharp image is the blurry input and the blur is a Dirac delta. However, experimentally one can observe that Chan and Wong's algorithm converges to the desired solution even when initialized with the no-blur one. We provide both analysis and experiments to resolve this apparent paradox. We find that both claims are right. The key to understanding how this is possible lies in the details of Chan and Wong's implementation, and in how seemingly harmless choices have dramatic effects. Our analysis reveals that the delayed scaling (normalization) in the iterative step of the blur kernel is fundamental to the convergence of the algorithm, and results in a procedure that eludes the no-blur solution despite it being a global minimum of the original energy. We introduce an adaptation of this algorithm and show that, in spite of its extreme simplicity, it is very robust and achieves performance comparable to the state of the art.
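The role of the delayed scaling can be illustrated with a toy 1D version of the alternating scheme: take an unconstrained gradient step on the kernel first, and only afterwards clamp it to be nonnegative and rescale it to unit sum. The sketch below uses a smoothed TV prior and step sizes of our own choosing; it shows the structure of the iteration, not the paper's exact solver:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 128, 9
x_true = np.zeros(n); x_true[30:50] = 1.0; x_true[80:90] = 0.5
k_true = np.exp(-0.5 * ((np.arange(m) - 4) / 1.5) ** 2); k_true /= k_true.sum()
y = np.convolve(x_true, k_true, mode="same")        # blurry observation

conv = lambda a, b: np.convolve(a, b, mode="same")
x = y.copy()                        # "no-blur" initialization: x = y ...
k = np.zeros(m); k[4] = 1.0         # ... and k = Dirac delta
lam, step_x, step_k = 2e-3, 0.1, 1e-3

for it in range(500):
    # x-step: gradient descent on |k*x - y|^2 + lam * TV_smooth(x)
    r = conv(x, k) - y
    dx = np.diff(x, append=x[-1])
    tv = -np.diff(dx / np.sqrt(dx ** 2 + 1e-6), prepend=0.0)
    x -= step_x * (conv(r, k[::-1]) + lam * tv)
    # k-step: plain (unconstrained) gradient step first ...
    r = conv(x, k) - y
    gk = np.array([np.dot(r, np.roll(x, j - m // 2)) for j in range(m)])
    k -= step_k * gk
    # ... then the DELAYED scaling: project/normalize only afterwards.
    k = np.maximum(k, 0.0)
    k /= k.sum() + 1e-12

print("estimated kernel:", np.round(k, 3))
print("true kernel:     ", np.round(k_true, 3))
```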
Abstract:
Defocus blur is an indicator of the depth structure of a scene. However, given a single input image from a conventional camera, one cannot distinguish between blurred objects lying in front of or behind the focal plane, as they may be subject to exactly the same amount of blur. In this paper we address this limitation by exploiting coded apertures. Previous work in this area focuses on setups where the scene is placed either entirely in front of or entirely behind the focal plane. We demonstrate that asymmetric apertures result in unique blurs for all distances from the camera. To exploit asymmetric apertures we propose an algorithm that can unambiguously estimate scene depth and texture from a single input image. One of the main advantages of our method is that, within the same depth range, we can work with less blurred data than other methods. The technique is tested on both synthetic and real images.
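The front/behind ambiguity and its resolution can be sketched in a few lines: a mirror-symmetric PSF produces identical blurs at depths -d and +d, while an asymmetric PSF makes them differ in phase, which a deconvolve-and-score test can detect. The ramp PSF and the sparse-gradient score below are generic stand-ins, not the paper's algorithm, and the blur magnitude is assumed known so that only the sign of the depth is estimated:

```python
import numpy as np

n = 512
rng = np.random.default_rng(3)

def psf(depth, size=33):
    """Toy asymmetric PSF: a one-sided ramp whose side flips across focus."""
    h = np.zeros(size); c = size // 2; r = abs(depth)
    ramp = np.linspace(1.0, 0.2, r + 1)
    if depth >= 0:
        h[c:c + r + 1] = ramp
    else:
        h[c - r:c + 1] = ramp[::-1]
    return h / h.sum()

def transfer(depth):
    """Transfer function of psf(depth), circularly centred at index 0."""
    h = psf(depth); k = np.zeros(n); c = len(h) // 2
    for i, v in enumerate(h):
        k[(i - c) % n] = v
    return np.fft.rfft(k)

x = np.repeat(rng.standard_normal(16), n // 16)   # piecewise-constant texture
d_true = -3                                       # object in FRONT of focus
y = np.fft.irfft(np.fft.rfft(x) * transfer(d_true), n)
y += 0.005 * rng.standard_normal(n)

def score(d):
    """Deconvolve with the candidate PSF; the wrong candidate (here, the
    mirrored PSF) leaves phase errors, i.e. ringing, inflating gradients."""
    Hd = transfer(d)
    X = np.conj(Hd) * np.fft.rfft(y) / (np.abs(Hd) ** 2 + 1e-3)
    return np.abs(np.diff(np.fft.irfft(X, n))).sum()

print({d: round(score(d), 1) for d in (-3, 3)})
print("estimated sign of depth:", min((-3, 3), key=score))
```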
Abstract:
The finite depth of field of a real camera can be used to estimate the depth structure of a scene. The distance of an object from the plane in focus determines the size of the defocus blur, while the shape of the blur depends on the shape of the aperture, which can be designed by masking the main lens. In fact, aperture shapes different from the standard circular aperture give improved accuracy of depth estimation from defocus blur. We introduce an intuitive criterion for designing aperture patterns for depth from defocus that is independent of any specific depth estimation algorithm. We formulate the design criterion by imposing constraints directly in the data domain, optimizing the amount of depth information carried by blurred images. The criterion is a quadratic function of the aperture transmission values and can therefore be evaluated numerically to estimate optimized aperture patterns quickly. The proposed mask optimization procedure is applicable to different depth estimation scenarios: depth estimation from two images with different focus settings, from two images with different aperture shapes, and from a single coded aperture image. We show masks obtained with this new evaluation criterion and test their depth discrimination capability using a state-of-the-art depth estimation algorithm.
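To convey the flavor of such a data-domain design criterion, here is a brute-force sketch: each depth's PSF is built as a linear function of the (normalized) aperture transmissions, a mask is scored by its worst-case pairwise distance between depth blurs (a minimum over quadratic functions of the transmissions, hence cheap to evaluate), and candidate masks are searched exhaustively. The score and toy PSF model are our own stand-ins, not the paper's criterion:

```python
import numpy as np

def psf_at_depth(a, width, size=64):
    """Resample the normalized 1D aperture a to a blur of the given width;
    the resulting PSF is linear in the transmission values."""
    xs = np.linspace(0, len(a) - 1, width)
    h = np.interp(xs, np.arange(len(a)), a)
    out = np.zeros(size); out[:width] = h / width
    return out

def score(mask, widths=(6, 10, 14, 18)):
    """Worst-case pairwise spectral distance between the depth PSFs:
    larger means every depth produces a distinct blur."""
    a = mask / (mask.sum() + 1e-12)
    spectra = [np.fft.rfft(psf_at_depth(a, w)) for w in widths]
    dists = [np.sum(np.abs(spectra[i] - spectra[j]) ** 2)
             for i in range(len(widths)) for j in range(i + 1, len(widths))]
    return min(dists)

rng = np.random.default_rng(4)
masks = (rng.random((2000, 13)) > 0.5).astype(float)  # random binary apertures
best = max(masks, key=score)
print("best of 2000 random apertures:", best.astype(int))
```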
Abstract:
This thesis covers a broad part of the field of computational photography, including video stabilization and image warping techniques, an introduction to light field photography, and the conversion of monocular images and videos into stereoscopic 3D content. We present a user-assisted technique for stereoscopic 3D conversion from 2D images. Our approach exploits the geometric structure of perspective images, including vanishing points. We allow a user to indicate lines, planes, and vanishing points in the input image, and directly employ these as guides for an image warp that produces a stereo image pair. Our method is most suitable for scenes with large-scale structures such as buildings, and avoids the step of constructing a depth map. Further, we propose a method to acquire 3D light fields using a hand-held camera, and describe several computational photography applications facilitated by our approach. As input we take an image sequence from a camera translating along an approximately linear path with limited camera rotations. Users can acquire such data easily in a few seconds by moving a hand-held camera. We convert the input into a regularly sampled 3D light field by resampling and aligning the images in the spatio-temporal domain. We also present a novel technique for high-quality disparity estimation from light fields. Finally, we show applications including digital refocusing and synthetic aperture blur, foreground removal, selective colorization, and others.
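The end product of a 2D-to-stereo conversion can be illustrated with the basic disparity-shift operation: given a per-pixel disparity map (in the thesis induced by the user-guided warp; here simply an input array), synthesize the two views by opposite horizontal shifts. This sketch, with crude hole filling, shows the principle rather than the thesis' warping method:

```python
import numpy as np

def stereo_pair(image, disparity):
    """image, disparity: (H, W) arrays; disparity in pixels (larger = nearer).
    Forward-warp the image by +/- half the disparity to get the two views."""
    h, w = image.shape
    left = np.zeros_like(image); right = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            d = int(round(disparity[y, x] / 2.0))
            if 0 <= x + d < w:
                left[y, x + d] = image[y, x]
            if 0 <= x - d < w:
                right[y, x - d] = image[y, x]
    for view in (left, right):        # crude hole filling: drag the last
        for y in range(h):            # written value across disocclusions
            for x in range(1, w):
                if view[y, x] == 0:
                    view[y, x] = view[y, x - 1]
    return left, right

# Toy usage: a square "object" popping out of a shaded background.
img = np.tile(np.linspace(0.2, 1.0, 64), (64, 1))
disp = np.zeros((64, 64)); disp[20:44, 20:44] = 6.0
left, right = stereo_pair(img, disp)
```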
Abstract:
Introduction: Although it seems plausible that sports performance relies on high-acuity foveal vision, it has been shown empirically that myopic blur (up to +2 diopters) does not harm performance in sport tasks that require foveal information pick-up, such as golf putting (Bulson, Ciuffreda, & Hung, 2008). How myopic blur affects peripheral performance is still unknown. With reduced foveal vision, less attention may be needed for foveal processing, so peripheral cues might be processed better and performance might improve; this hypothesis was tested in the current experiment. Methods: Eighteen sport-science students with self-reported myopia, all of them regular contact-lens wearers, volunteered as participants. Exclusion criteria comprised visual correction other than myopic, correction of astigmatism, and use of contact lenses from outside the Swiss delivery area. For each participant, three pairs of additional contact lenses (besides their regular lenses, used in the "plano" condition) were manufactured with an individual overcorrection to a retinal defocus of +1 to +3 diopters (the "+1.00 D", "+2.00 D", and "+3.00 D" conditions, respectively). Gaze data were acquired while participants performed a multiple object tracking (MOT) task that required tracking 4 out of 10 moving stimuli. In addition, in 66.7% of all trials, one of the 4 targets suddenly stopped during the motion phase for a period of 0.5 s. Stimuli moved in front of a picture of a sports hall to allow for foveal processing. Because of the directional hypotheses, the significance level for one-tailed tests on differences was set at α = .05, and a posteriori effect sizes were computed as partial eta squared (ηp²). Results: Because of problems with the gaze-data collection, 3 participants had to be excluded from further analyses. The expectation of a centroid strategy was confirmed, as gaze was closer to the centroid than to the targets (all p < .01). In comparison to the plano baseline, participants more often recalled all 4 targets under defocus conditions, F(1,14) = 26.13, p < .01, ηp² = .65. The three defocus conditions differed significantly, F(2,28) = 2.56, p = .05, ηp² = .16, with higher accuracy as a function of increasing defocus and significant contrasts between conditions +1.00 D and +2.00 D (p = .03) and between +1.00 D and +3.00 D (p = .03). For stop trials, significant differences were found neither between the plano baseline and the defocus conditions, F(1,14) = .19, p = .67, ηp² = .01, nor between the three defocus conditions, F(2,28) = 1.09, p = .18, ηp² = .07. Participants reacted faster in "4 correct+button" trials under defocus than under plano-baseline conditions, F(1,14) = 10.77, p < .01, ηp² = .44. The defocus conditions differed significantly, F(2,28) = 6.16, p < .01, ηp² = .31, with shorter response times as a function of increasing defocus and significant contrasts between +1.00 D and +2.00 D (p = .01) and between +1.00 D and +3.00 D (p < .01). Discussion: The results show that gaze behaviour in MOT is not affected to a relevant degree by a visual overcorrection of up to +3 diopters; hence, it can be taken for granted that peripheral event detection was indeed investigated in the present study. The overcorrection does not harm the capability to track objects peripherally. Moreover, if an event has to be detected peripherally, neither response accuracy nor response time is negatively affected.
Findings could claim considerable relevance for all sport situations in which peripheral vision is required, which now calls for applied studies on this topic. References: Bulson, R. C., Ciuffreda, K. J., & Hung, G. K. (2008). The effect of retinal defocus on golf putting. Ophthalmic and Physiological Optics, 28, 334-344.
Abstract:
Blind deconvolution consists of estimating a sharp image and a blur kernel from an observed blurry image. Because the blur model admits several solutions, it is necessary to devise an image prior that favors the true blur kernel and sharp image. Many successful image priors enforce the sparsity of the sharp image gradients. Ideally the L0 "norm" is the best choice for promoting sparsity, but because it is computationally intractable, some methods have used a logarithmic approximation. In this work we also study a logarithmic image prior. We show empirically how well the prior suits the blind deconvolution problem, and our analysis confirms experimentally the hypothesis that a prior need not model natural image statistics to correctly estimate the blur kernel. Furthermore, we show that a simple Maximum a Posteriori formulation is enough to achieve state-of-the-art results. To minimize this formulation we devise two iterative minimization algorithms that cope with the non-convexity of the logarithmic prior: one obtained via the primal-dual approach and one via majorization-minimization.
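One way to see how majorization-minimization copes with the non-convex logarithmic prior: the logarithm is majorized by its tangent, so each outer iteration reduces to a reweighted least-squares problem. The sketch below applies this idea to the non-blind image step only, with a known kernel; the parameters, the toy signal, and the plain gradient inner loop are our own simplifications:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 128
x_true = np.zeros(n); x_true[40:70] = 1.0
k = np.ones(5) / 5
y = np.convolve(x_true, k, mode="same") + 0.01 * rng.standard_normal(n)

lam, eps = 5e-3, 1e-2
x = y.copy()
D = lambda v: np.diff(v, append=v[-1])     # forward difference
Dt = lambda v: -np.diff(v, prepend=0.0)    # its adjoint (up to boundaries)

for outer in range(10):
    # Majorize sum(log(eps + D(x)^2)) by its tangent at the current x:
    # this yields quadratic weights w, i.e. an IRLS subproblem.
    w = 1.0 / (eps + D(x) ** 2)
    for inner in range(100):               # gradient descent on the
        r = np.convolve(x, k, mode="same") - y   # majorizing quadratic
        g = np.convolve(r, k[::-1], mode="same") + lam * Dt(w * D(x))
        x -= 0.2 * g

print("final data misfit:",
      np.sum((np.convolve(x, k, mode="same") - y) ** 2))
```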
Abstract:
In this paper we propose a solution to blind deconvolution of a scene with two layers (foreground/background). We show that reconstructing the support of these two layers from a single image of a conventional camera is not possible. As a solution we propose to use a light field camera, and demonstrate that a single light field image captured with a Lytro camera can be successfully deblurred. More specifically, we consider the case of space-varying motion blur, where the blur magnitude depends on the depth changes in the scene. Our method employs a layered model that handles occlusions and partial transparencies due to both the motion blur and the out-of-focus blur of the plenoptic camera. We reconstruct each layer's support, the corresponding sharp textures, and the motion blurs via an optimization scheme. The performance of our algorithm is demonstrated on synthetic as well as real light field images.