52 results for Light-front field theory


Relevance: 40.00%

Abstract:

The transverse broadening of an energetic jet passing through a non-Abelian plasma is believed to be described by the thermal expectation value of a light-cone Wilson loop. In this exploratory study, we measure the light-cone Wilson loop with classical lattice gauge theory simulations. We observe, as suggested by previous studies, that there are strong interactions already at short transverse distances, which may lead to more efficient jet quenching than in leading-order perturbation theory. We also verify that the asymptotics of the Wilson loop do not change qualitatively when crossing the light cone, which supports arguments in the literature that infrared contributions to jet quenching can be studied with dimensionally reduced simulations in the space-like domain. Finally we speculate on possibilities for full four-dimensional lattice studies of the same observable, perhaps by employing shifted boundary conditions in order to simulate ensembles boosted by an imaginary velocity.
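To illustrate the observable itself, the following is a minimal toy sketch of a rectangular Wilson loop on a 2D periodic U(1) lattice with random link angles. This is only an illustration of how a Wilson loop is built from link variables; it is not the non-Abelian, light-cone measurement of the study, and all names and parameters here are illustrative assumptions.

```python
import numpy as np

# Toy sketch (assumption, not the paper's setup): a rectangular R x T Wilson
# loop on a 2D periodic U(1) lattice. The loop is the ordered product of link
# variables exp(i*theta) traversed around the rectangle; for U(1) the phases
# simply add along the path.

rng = np.random.default_rng(0)
L = 8                                                # sites per direction
theta = rng.uniform(-np.pi, np.pi, size=(2, L, L))   # theta[mu, x, t]

def wilson_loop(theta, R, T):
    """Real part of the product of U(1) links around an R x T rectangle
    anchored at the origin, with periodic boundary conditions."""
    L = theta.shape[1]
    phase = 0.0
    for x in range(R):            # bottom edge: +x direction at t = 0
        phase += theta[0, x % L, 0]
    for t in range(T):            # right edge: +t direction at x = R
        phase += theta[1, R % L, t % L]
    for x in range(R):            # top edge: -x direction at t = T
        phase -= theta[0, x % L, T % L]
    for t in range(T):            # left edge: -t direction at x = 0
        phase -= theta[1, 0, t % L]
    return np.cos(phase)          # Re exp(i * phase)

w = wilson_loop(theta, R=2, T=3)
```

Averaging such loops over an ensemble of gauge configurations would give the expectation value; on the trivial configuration (all phases zero) the loop is exactly 1.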

Relevance: 40.00%

Abstract:

Background: High dilutions of various starting materials, e.g. copper sulfate, Hypericum perforatum and sulfur, showed significant differences from controls and among different dilution levels in ultraviolet (UV) light transmission [1,2]. Exposure of high dilutions to external physical factors such as UV light or elevated temperature (37°C) also yielded significantly different UV transmission compared to unexposed dilutions [2,3]. In a study with highland frogs, animals incubated with thyroxine 30c metamorphosed more slowly than control animals, whereas animals incubated with thyroxine 30c that had been exposed to the electromagnetic field (EMF) of a microwave oven or a mobile phone did not [4].

Aims: The aim was to test whether the EMF of a mobile phone influences the UV absorbance of dilutions of quartz and Atropa belladonna (AB).

Methodology: Commercially available dilutions of quartz (SiO2) at 6x, 12x, 15x and 30x, and of AB at 4x, 6x, 12x, 15x and 30x, in H2O and 19% ethanol, were used in the experiments (Weleda AG, Arlesheim, Switzerland). Four samples of each dilution were exposed to the EMF of a mobile phone (Philips Savvy Dual Band) at 900 MHz with an output of 2 W for 3 h, while control samples (4 of each dilution) were kept in a separate room. Absorbance of the samples in the UV range (190 to 340 nm) was measured in randomized order with a Shimadzu UV-1800 spectrophotometer equipped with an autosampler. In total, 5 separate measurement days will be carried out for the quartz and for the AB dilutions. The average absorbance from 200 to 340 nm and from 200 to 240 nm was compared among dilution levels using a Kruskal-Wallis test and between exposed and unexposed samples using a Mann-Whitney U test.

Results: Preliminary results after 2 measurement days indicated that for quartz the absorbance of the various dilution levels differed from each other (except 12x and 15x), and that samples exposed to the EMF did not differ in UV absorbance from unexposed samples. Preliminary results after one measurement day indicated that for AB the absorbance of the various dilution levels differed from each other; samples exposed to the EMF did not differ in UV absorbance from unexposed samples (except 4x in the 200–240 nm range).

Conclusions: These results suggest that exposure of high dilutions of quartz and AB to a mobile phone EMF as used here does not alter the UV absorbance of these dilutions. The final results will show whether this holds true.
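The statistical comparison described in the methodology can be sketched as follows with SciPy, on simulated data — the real absorbance measurements are not reproduced here, and the dilution labels, means, and sample values are illustrative assumptions.

```python
import numpy as np
from scipy.stats import kruskal, mannwhitneyu

# Simulated stand-in data (assumption): four samples of mean UV absorbance
# per dilution level, plus exposed vs. unexposed groups.
rng = np.random.default_rng(1)
levels = ["6x", "12x", "15x", "30x"]
absorbance = {lv: rng.normal(loc=0.5 + 0.01 * i, scale=0.02, size=4)
              for i, lv in enumerate(levels)}

# Kruskal-Wallis test: do the dilution levels differ from each other?
h_stat, p_levels = kruskal(*absorbance.values())

# Mann-Whitney U test: do EMF-exposed samples differ from unexposed ones?
exposed = rng.normal(0.5, 0.02, size=4)
unexposed = rng.normal(0.5, 0.02, size=4)
u_stat, p_emf = mannwhitneyu(exposed, unexposed)
```

Both tests are non-parametric rank tests, a sensible choice given the small group sizes (n = 4 per dilution) reported in the methodology.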

Relevance: 40.00%

Abstract:

We propose a method to acquire 3D light fields using a hand-held camera, and describe several computational photography applications facilitated by our approach. As input we take an image sequence from a camera translating along an approximately linear path with limited camera rotation. Users can acquire such data easily in a few seconds by moving a hand-held camera. We present a novel approach that resamples the input into a regularly sampled 3D light field by aligning the images in the spatio-temporal domain, and a technique for high-quality disparity estimation from light fields. We show applications including digital refocusing, synthetic aperture blur, foreground removal, selective colorization, and others.
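The digital refocusing mentioned above can be sketched in a few lines: for a camera translating along a line, each view is shifted horizontally in proportion to its offset from the center view and the chosen disparity, then all views are averaged. Scene points at that disparity align and come into focus; everything else receives synthetic aperture blur. This is a minimal sketch under simplifying assumptions (integer pixel shifts, purely horizontal parallax), not the authors' implementation.

```python
import numpy as np

def refocus(light_field, d):
    """Shift-and-average refocusing of a 3D light field.
    light_field: array of shape (num_views, height, width);
    d: disparity (pixels of shift between adjacent views) to focus on."""
    n = light_field.shape[0]
    center = (n - 1) / 2.0
    out = np.zeros_like(light_field[0], dtype=float)
    for i, view in enumerate(light_field):
        shift = int(round((i - center) * d))
        out += np.roll(view, shift, axis=1)   # horizontal shift only
    return out / n

# Toy light field: a vertical stripe that moves 1 px per view (disparity 1).
lf = np.zeros((5, 8, 16))
for i in range(5):
    lf[i, :, 6 + i] = 1.0

focused = refocus(lf, d=-1)   # shifting views back aligns the stripe
```

With `d=-1` the stripe lands on the same column in every shifted view, so it averages to full intensity; refocusing with `d=0` instead would smear it across five columns at one-fifth intensity.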

Relevance: 40.00%

Abstract:

This thesis covers a broad part of the field of computational photography, including video stabilization and image-warping techniques, an introduction to light field photography, and the conversion of monocular images and videos into stereoscopic 3D content. We present a user-assisted technique for stereoscopic 3D conversion from 2D images. Our approach exploits the geometric structure of perspective images, including vanishing points. We allow the user to indicate lines, planes, and vanishing points in the input image, and directly employ these as guides for an image warp that produces a stereo image pair. Our method is most suitable for scenes with large-scale structures such as buildings, and avoids constructing an explicit depth map. Further, we propose a method to acquire 3D light fields using a hand-held camera, and describe several computational photography applications facilitated by our approach. As input we take an image sequence from a camera translating along an approximately linear path with limited camera rotation. Users can acquire such data easily in a few seconds by moving a hand-held camera. We convert the input into a regularly sampled 3D light field by resampling and aligning the images in the spatio-temporal domain. We also present a novel technique for high-quality disparity estimation from light fields. Finally, we show applications including digital refocusing, synthetic aperture blur, foreground removal, selective colorization, and others.
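The idea of a warp that directly produces a stereo pair can be illustrated with a much-simplified stand-in: shifting each pixel horizontally by half its disparity in opposite directions for the two views. This toy forward warp ignores the vanishing-point guides and hole filling of the actual method; the function name and data are assumptions for illustration.

```python
import numpy as np

def stereo_pair(image, disparity):
    """Toy stereo synthesis (assumption, not the thesis's guided warp).
    image: (H, W) array; disparity: (H, W) integer pixel shifts.
    Returns (left, right) views, each warped by +/- disparity // 2."""
    H, W = image.shape
    left = np.zeros_like(image)
    right = np.zeros_like(image)
    cols = np.arange(W)
    for r in range(H):
        lc = np.clip(cols + disparity[r] // 2, 0, W - 1)
        rc = np.clip(cols - disparity[r] // 2, 0, W - 1)
        left[r, lc] = image[r, cols]     # forward-splat each pixel
        right[r, rc] = image[r, cols]
    return left, right

img = np.arange(12.0).reshape(3, 4)
disp = np.full((3, 4), 2, dtype=int)     # uniform disparity of 2 px
L_view, R_view = stereo_pair(img, disp)
```

A constant disparity merely translates the image; depth variation, as from the thesis's planes and vanishing lines, is what produces actual parallax between the two views.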

Relevance: 40.00%

Abstract:

In this paper we propose a solution to blind deconvolution of a scene with two layers (foreground/background). We show that reconstructing the support of these two layers from a single image of a conventional camera is not possible. As a solution we propose to use a light field camera, and we demonstrate that a single light field image captured with a Lytro camera can be successfully deblurred. More specifically, we consider the case of space-varying motion blur, where the blur magnitude depends on the depth changes in the scene. Our method employs a layered model that handles occlusions and partial transparencies due to both the motion blur and the out-of-focus blur of the plenoptic camera. We reconstruct each layer's support, the corresponding sharp textures, and the motion blurs via an optimization scheme. The performance of our algorithm is demonstrated on synthetic as well as real light field images.
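The forward model behind such a layered, depth-dependent blur can be sketched as follows: each layer is blurred with a kernel whose width depends on that layer's depth, and the blurred foreground is alpha-composited over the blurred background, so the (blurred) alpha map also accounts for partial transparency at motion-blurred layer boundaries. This is a minimal sketch under assumed kernel widths and layer contents, not the paper's estimated model.

```python
import numpy as np

def box_blur_1d(img, width):
    """Horizontal box blur of an (H, W) image: a simple motion-blur model."""
    if width <= 1:
        return img.astype(float)
    kernel = np.ones(width) / width
    return np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"),
                               1, img.astype(float))

H, W = 6, 16
background = np.ones((H, W))          # bright, distant layer
foreground = np.zeros((H, W))
alpha = np.zeros((H, W))              # foreground support
foreground[:, 5:8] = 0.2
alpha[:, 5:8] = 1.0

# Depth-dependent blur (assumed widths): the near foreground layer moves
# farther across the sensor than the distant background layer.
fg_blur = box_blur_1d(foreground, 5)
a_blur = box_blur_1d(alpha, 5)        # blurring alpha models transparency
bg_blur = box_blur_1d(background, 2)

# Observed image: blurred foreground composited over blurred background.
observed = a_blur * fg_blur + (1.0 - a_blur) * bg_blur
```

The paper's blind deconvolution inverts a model of this kind: given only `observed` (and the light field views), it recovers the layer supports, sharp textures, and blur kernels via optimization.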