57 results for focal-plane-array image processors

in University of Queensland eSpace - Australia


Relevance:

30.00%

Publisher:

Abstract:

The design and development of two X-band amplifying reflectarrays is presented. The arrays use dual-polarized aperture-coupled patch antennas with FET transistors and phasing circuits to amplify a microwave signal and radiate it in a chosen direction. Two cases are considered: first, when the reflectarray converts the spherical wave from a feed horn into a plane wave radiated in the boresight direction, and second, when the reflectarray converts the spherical wave from a dual-polarized four-element feed array into a co-focal spherical wave. The amplified signal is received in an orthogonal port of the feed array, so the entire structure acts as a spatial power combiner. The two amplifying arrays are tested in the near-field zone for the phase distribution over their apertures needed to achieve the required beam formation. Alternatively, their radiation patterns and gains are investigated.
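
The abstract does not give the array geometry, so the sketch below only illustrates the general principle: to radiate a boresight plane wave, each reflectarray element's phasing circuit must cancel the spherical-wave path delay from the feed. The frequency, grid size, and feed height are assumed values, not the paper's.

```python
# Hypothetical sketch: per-element phase a reflectarray must add so that a
# spherical wave from a feed emerges as a boresight plane wave.
import numpy as np

c = 3.0e8
f = 10.0e9                    # assumed X-band design frequency
k = 2 * np.pi * f / c         # free-space wavenumber

# Assumed 8 x 8 element grid with half-wavelength spacing and a feed phase
# centre 150 mm above the array centre.
d = 0.5 * c / f
xs = (np.arange(8) - 3.5) * d
X, Y = np.meshgrid(xs, xs)
feed_height = 0.15

# Path length from the feed phase centre to each element.
R = np.sqrt(X**2 + Y**2 + feed_height**2)

# To radiate a plane wave at boresight, each phasing circuit must cancel the
# spherical-wave phase spread (a constant offset is irrelevant).
phase_required = np.mod(k * (R - R.min()), 2 * np.pi)
print(np.degrees(phase_required).round(1))
```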

Relevance:

30.00%

Publisher:

Abstract:

Subtractive imaging in confocal fluorescence light microscopy is based on the subtraction of a suitably weighted widefield image from a confocal image. An approximation to a widefield image can be obtained by detection with an opened confocal pinhole. The subtraction of images enhances the resolution in-plane as well as along the optic axis. Due to the linearity of the approach, the effect of subtractive imaging in Fourier-space corresponds to a reduction of low spatial frequency contributions leading to a relative enhancement of the high frequencies. Along the direction of the optic axis this also results in an improved sectioning. Image processing can achieve a similar effect. However, a 3D volume dataset must be acquired and processed, yielding a result essentially identical to subtractive imaging but superior in signal-to-noise ratio. The latter can be increased further with the technique of weighted averaging in Fourier-space. A comparison of 2D and 3D experimental data analysed with subtractive imaging, the equivalent Fourier-space processing of the confocal data only, and Fourier-space weighted averaging is presented. (C) 2003 Elsevier Ltd. All rights reserved.
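
As a concrete, hypothetical illustration of the subtraction step described above, the sketch below forms confocal minus gamma times the open-pinhole image with numpy; the weight gamma, the clipping at zero, and the stand-in data are assumptions, not values from the paper.

```python
import numpy as np

def subtractive_image(confocal, open_pinhole, gamma=0.5):
    """Subtract a weighted open-pinhole (widefield-like) image from the confocal
    image and clip negative values; gamma is the subtraction weight (assumed)."""
    return np.clip(confocal - gamma * open_pinhole, 0.0, None)

# Stand-in noisy data for illustration only.
confocal = np.random.poisson(50.0, (256, 256)).astype(float)
open_pinhole = np.random.poisson(80.0, (256, 256)).astype(float)
enhanced = subtractive_image(confocal, open_pinhole, gamma=0.4)

# In Fourier space the same linear operation scales each spatial frequency by
# OTF_confocal(k) - gamma * OTF_open(k): low frequencies are suppressed relative
# to high ones, which is the resolution enhancement described above.
```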

Relevance:

30.00%

Publisher:

Abstract:

We present a new method of modeling imaging of laser beams in the presence of diffraction. Our method is based on the concept of first orthogonally expanding the resultant diffraction field (that would have otherwise been obtained by the laborious application of the Huygens diffraction principle) and then representing it by an effective multimodal laser beam with different beam parameters. We show not only that the process of obtaining the new beam parameters is straightforward but also that it permits a different interpretation of the diffraction-caused focal shift in laser beams. All of the criteria that we have used to determine the minimum number of higher-order modes needed to accurately represent the diffraction field show that the mode-expansion method is numerically efficient. Finally, the characteristics of the mode-expansion method are such that it allows modeling of a vast array of diffraction problems, regardless of the characteristics of the incident laser beam, the diffracting element, or the observation plane. (C) 2005 Optical Society of America.
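
The sketch below is a hypothetical 1-D illustration of the mode-expansion idea, not the paper's implementation: a hard-aperture-clipped Gaussian field is projected onto Hermite-Gaussian modes by overlap integrals, and the cumulative captured power indicates how many higher-order modes are needed. The waist, aperture size, and mode count are assumed.

```python
import numpy as np
from scipy.special import eval_hermite, factorial

w0 = 1.0                                  # assumed waist (arbitrary units)
x = np.linspace(-6, 6, 4001)
dx = x[1] - x[0]

def hg_mode(n, x, w0):
    """Normalized 1-D Hermite-Gaussian mode at the waist."""
    norm = (2.0 / (np.pi * w0**2))**0.25 / np.sqrt(2.0**n * factorial(n))
    return norm * eval_hermite(n, np.sqrt(2.0) * x / w0) * np.exp(-x**2 / w0**2)

# Incident Gaussian clipped by an aperture of half-width 1.2*w0: the "diffraction field".
field = hg_mode(0, x, w0) * (np.abs(x) <= 1.2 * w0)

# Overlap integrals give the coefficients of the effective multimodal beam.
# Only even modes are kept because the odd overlaps vanish by symmetry here.
coeffs = np.array([np.trapz(field * hg_mode(n, x, w0), dx=dx) for n in range(0, 20, 2)])
captured = np.cumsum(coeffs**2) / np.trapz(field**2, dx=dx)
print(captured.round(4))   # fraction of the clipped power captured as modes are added
```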

Relevance:

30.00%

Publisher:

Abstract:

A new method for ameliorating high-field image distortion caused by radio frequency/tissue interaction is presented and modeled. The proposed method uses, but is not restricted to, a shielded four-element transceive phased-array coil and involves performing two separate scans of the same slice, each scan using a different excitation during transmission. By optimizing the amplitudes and phases for each scan, antipodal signal profiles can be obtained, and by combining both images, the image distortion can be reduced several-fold. A hybrid finite-difference time-domain/method-of-moments technique is used to theoretically demonstrate the method and to predict the radio-frequency behavior inside the human head. In addition, the proposed method is used in conjunction with the GRAPPA reconstruction technique to enable rapid imaging. Simulation results reported herein for 11 T (470 MHz) brain imaging applications demonstrate the feasibility of the concept, where multiple acquisitions using parallel imaging elements with GRAPPA reconstruction result in improved image quality. (c) 2006 Wiley Periodicals, Inc.
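
The paper's exact combination rule is not given here; the following hypothetical sketch only illustrates how two scans with complementary ("antipodal") intensity profiles can be merged so that the bright/dark distortion pattern largely cancels. The toy profiles and combination modes are assumptions.

```python
import numpy as np

def combine_antipodal(img_a, img_b, mode="sos"):
    """Combine two magnitude images acquired with complementary excitations.

    'sos'  - root-sum-of-squares, a common multi-acquisition combination
    'mean' - plain average
    """
    if mode == "sos":
        return np.sqrt(img_a**2 + img_b**2)
    return 0.5 * (img_a + img_b)

# Toy 1-D profiles: scan A bright in the centre, scan B bright at the periphery.
x = np.linspace(-1, 1, 201)
scan_a = 1.0 + 0.4 * np.cos(np.pi * x)      # assumed centre-brightened profile
scan_b = 1.0 - 0.4 * np.cos(np.pi * x)      # assumed antipodal profile
flat = combine_antipodal(scan_a, scan_b, mode="mean")
print(flat.min().round(3), flat.max().round(3))   # ~1.0 everywhere: distortion reduced
```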

Relevance:

30.00%

Publisher:

Abstract:

The effect of transmitter and receiver array configurations on stray-light and diffraction-caused crosstalk in free-space optical interconnects (FSOIs) was investigated. The optical system simulation software Code V was used to simulate both the stray-light and the diffraction-caused crosstalk. Experimentally measured, spectrally resolved near-field images of VCSEL higher-order modes were used as extended sources in the simulation model. In addition, electrical and optical noise were included in the analysis to give a more accurate measure of the overall performance of the FSOI system. The results show that changing the square lattice geometry to a hexagonal configuration yields an overall signal-to-noise ratio improvement of 3 dB. Furthermore, system density is increased by up to 4 channels/mm².
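
For intuition about the density figure, the sketch below compares the ideal channel density of a square and a hexagonal lattice at the same element pitch. The pitch is an assumed value, so the resulting numbers differ from the paper's 4 channels/mm² figure, which depends on the actual array pitch.

```python
# Back-of-envelope comparison of lattice channel densities (assumed pitch).
import math

p_mm = 0.25                                   # assumed element pitch in mm
square_density = 1.0 / p_mm**2                # one channel per p x p cell
hex_density = 2.0 / (math.sqrt(3) * p_mm**2)  # one channel per hexagonal cell

print(f"square:    {square_density:.1f} channels/mm^2")
print(f"hexagonal: {hex_density:.1f} channels/mm^2")
print(f"gain:      {hex_density - square_density:.1f} channels/mm^2")
```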

Relevance:

30.00%

Publisher:

Abstract:

We investigate the effect of transmitter and receiver array configurations on the stray-light and diffraction-caused crosstalk in free-space optical interconnects. The optical system simulation software (Code V) is used to simulate both the stray-light and diffraction-caused crosstalk. Experimentally measured, spectrally-resolved, near-field images of VCSEL higher order modes were used as extended sources in our simulation model. Our results show that by changing the square lattice geometry to a hexagonal configuration, we obtain the reduction in the stray-light crosstalk of up to 9 dB and an overall signal-to-noise ratio improvement of 3 dB.
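
As a quick sanity check on the reported figures, the snippet below converts the 9 dB crosstalk reduction and the 3 dB signal-to-noise improvement into linear power ratios using the standard dB definition; the conversion is generic, not paper-specific.

```python
# Power ratio in dB = 10*log10(P1/P0), so the inverse mapping is:
def db_to_power_ratio(db):
    return 10.0 ** (db / 10.0)

print(db_to_power_ratio(9.0))   # ~7.9x  -> crosstalk power reduced roughly eightfold
print(db_to_power_ratio(3.0))   # ~2.0x  -> signal-to-noise power roughly doubled
```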

Relevance:

20.00%

Publisher:

Abstract:

The demand for more pixels is beginning to be met as manufacturers increase the native resolution of projector chips. Tiling several projectors still offers a solution to augment the pixel capacity of a display. However, problems of color and illumination uniformity across projectors need to be addressed as well as the computer software required to drive such devices. We present the results obtained on a desktop-size tiled projector array of three D-ILA projectors sharing a common illumination source. A short throw lens (0.8:1) on each projector yields a 21-in. diagonal for each image tile; the composite image on a 3×1 array is 3840×1024 pixels with a resolution of about 80 dpi. The system preserves desktop resolution, is compact, and can fit in a normal room or laboratory. The projectors are mounted on precision six-axis positioners, which allow pixel level alignment. A fiber optic beamsplitting system and a single set of red, green, and blue dichroic filters are the key to color and illumination uniformity. The D-ILA chips inside each projector can be adjusted separately to set or change characteristics such as contrast, brightness, or gamma curves. The projectors were then matched carefully: photometric variations were corrected, leading to a seamless image. Photometric measurements were performed to characterize the display and are reported here. This system is driven by a small PC cluster fitted with graphics cards and running Linux. It can be scaled to accommodate an array of 2×3 or 3×3 projectors, thus increasing the number of pixels of the final image. Finally, we present current uses of the display in fields such as astrophysics and archaeology (remote sensing).
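
The projector-matching software itself is not shown in the abstract; the following hypothetical sketch illustrates one simple form of per-tile photometric matching, scaling each tile toward the dimmest projector's measured white level. The function, image sizes, and luminance values are assumptions, not the system's actual code.

```python
import numpy as np

def match_tiles(tiles, measured_white):
    """tiles: list of HxWx3 float images in [0, 1];
    measured_white: measured peak luminance of each projector (cd/m^2)."""
    target = min(measured_white)                       # cannot exceed the dimmest projector
    gains = [target / w for w in measured_white]
    return [np.clip(t * g, 0.0, 1.0) for t, g in zip(tiles, gains)]

# Example with downscaled stand-in tiles and assumed luminance measurements.
tiles = [np.random.rand(256, 320, 3) for _ in range(3)]
balanced = match_tiles(tiles, measured_white=[610.0, 585.0, 602.0])
```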

Relevance:

20.00%

Publisher:

Abstract:

One of the challenges in scientific visualization is to generate software libraries suitable for the large-scale data emerging from tera-scale simulations and instruments. We describe the efforts currently under way at SDSC and NPACI to address these challenges. The scope of the SDSC project spans data handling, graphics, visualization, and scientific application domains. Components of the research focus on the following areas: intelligent data storage, layout and handling, using an associated “Floor-Plan” (meta data); performance optimization on parallel architectures; extension of SDSC’s scalable, parallel, direct volume renderer to allow perspective viewing; and interactive rendering of fractional images (“imagelets”), which facilitates the examination of large datasets. These concepts are coordinated within a data-visualization pipeline, which operates on component data blocks sized to fit within the available computing resources. A key feature of the scheme is that the meta data which tag the data blocks can be propagated and applied consistently: at the disk level; in distributing the computations across parallel processors; in “imagelet” composition; and in feature tagging. The work reflects the emerging challenges and opportunities presented by the ongoing progress in high-performance computing (HPC) and the deployment of the data, computational, and visualization Grids.
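
The sketch below is a hypothetical illustration, not the SDSC code, of the central idea that metadata tags travel with each data block through every pipeline stage; the Block class and the downsample stage are invented for illustration only.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Block:
    data: np.ndarray                            # one component block, sized to fit in memory
    meta: dict = field(default_factory=dict)    # "floor-plan" style tags: origin, spacing, features

def downsample(block, factor=2):
    """A pipeline stage: operate on the block, then propagate and extend its metadata."""
    out = block.data[::factor, ::factor, ::factor]
    meta = dict(block.meta, downsample=factor)
    return Block(out, meta)

blk = Block(np.zeros((64, 64, 64), dtype=np.float32),
            meta={"origin": (0, 0, 0), "spacing": (1.0, 1.0, 1.0)})
small = downsample(blk)
print(small.data.shape, small.meta)
```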

Relevance:

20.00%

Publisher:

Abstract:

We present a review of perceptual image quality metrics and their application to still image compression. The review describes how image quality metrics can be used to guide an image compression scheme and outlines the advantages, disadvantages and limitations of a number of quality metrics. We examine a broad range of metrics ranging from simple mathematical measures to those which incorporate full perceptual models. We highlight some variation in the models for luminance adaptation and the contrast sensitivity function and discuss what appears to be a lack of a general consensus regarding the models which best describe contrast masking and error summation. We identify how the various perceptual components have been incorporated in quality metrics, and identify a number of psychophysical testing techniques that can be used to validate the metrics. We conclude by illustrating some of the issues discussed throughout the paper with a simple demonstration. (C) 1998 Elsevier Science B.V. All rights reserved.
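
To make the common structure of such metrics concrete, the sketch below combines a toy contrast-sensitivity weighting with Minkowski error summation. It is an illustrative composite, not any specific metric from the review; the CSF shape, viewing geometry, and exponent are assumed.

```python
import numpy as np

def csf(f, peak=4.0):
    """Toy band-pass contrast sensitivity function over spatial frequency f
    (cycles/degree); the shape and peak are assumptions."""
    return f * np.exp(-f / peak)

def perceptual_error(ref, test, beta=3.0, pixels_per_degree=32):
    """Weight the error spectrum by the CSF, then pool with a Minkowski sum."""
    err = np.fft.fft2(test.astype(float) - ref.astype(float))
    fy = np.fft.fftfreq(ref.shape[0]) * pixels_per_degree
    fx = np.fft.fftfreq(ref.shape[1]) * pixels_per_degree
    f = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    weighted = np.abs(err) * csf(f)
    return np.sum(weighted**beta) ** (1.0 / beta)   # Minkowski error summation

# Usage: perceptual_error(original_image, compressed_image) on two grey-scale arrays.
```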

Relevance:

20.00%

Publisher:

Abstract:

Matthiessen's ratio (distance from the centre of the lens to the retina : lens radius) was measured in developing black bream, Acanthopagrus butcheri (Sparidae, Teleostei). The value decreased over the first 10 days post-hatch from 3.6 to 2.3 along the nasal axis and from 4.0 to 2.6 along the temporal axis. Coincidentally, there was a decrease in the focal ratio of the lens (focal length : lens radius). Morphologically, the accommodatory retractor lentis muscle appeared to become functional between 10 and 12 days post-hatch. The results suggest that a higher focal ratio compensates for the relatively high Matthiessen's ratio brought about by the constraints of small eye size during early development. Combined with differences in axial length, this provides a means for larval fish to focus images from different distances prior to the ability to accommodate. (C) 1999 Elsevier Science Ltd. All rights reserved.
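
A small worked comparison makes the compensation argument concrete: a distant object focuses at roughly focal ratio × lens radius behind the lens centre, while the retina sits at Matthiessen's ratio × lens radius, so the two ratios must track each other. Only the Matthiessen's ratios below come from the abstract; the focal-ratio values are hypothetical.

```python
# Simplified geometry: defocus in units of lens radii.
def focus_error_in_lens_radii(matthiessen_ratio, focal_ratio):
    """Positive: image of a distant object falls behind the retina; negative: in front."""
    return focal_ratio - matthiessen_ratio

# Nasal axis at 10 days post-hatch (Matthiessen's ratio 2.3, from the abstract).
# A hypothetical focal ratio that stayed at 3.6 would leave a large error,
# whereas a focal ratio falling in step keeps distant objects in focus:
print(focus_error_in_lens_radii(2.3, 3.6))   # 1.3 lens radii of defocus
print(focus_error_in_lens_radii(2.3, 2.3))   # 0.0 when both ratios decrease together
```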

Relevance:

20.00%

Publisher:

Abstract:

This paper is devoted to the problems of finding the load flow feasibility, saddle-node, and Hopf bifurcation boundaries in the space of power system parameters. The first part contains a review of the existing relevant approaches, including not-so-well-known contributions from Russia. The second part presents a new robust method for finding the power system load flow feasibility boundary on the plane defined by any three vectors of dependent variables (nodal voltages), called the Delta plane. The method exploits some quadratic and linear properties of the load flow equations and state matrices written in rectangular coordinates. An advantage of the method is that it does not require an iterative solution of nonlinear equations (except for the eigenvalue problem). In addition to its benefits for visualization, the method is a useful tool for topological studies of power system multiple solution structures and stability domains. Although the power system application is developed here, the method can be equally efficient for any quadratic algebraic problem.
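
The sketch below is a generic illustration, not the paper's algorithm, of the quadratic property the method exploits: load flow equations written in rectangular voltage coordinates are quadratic in the state, so along any line x = x0 + t*d each mismatch component is an exact quadratic in t and can be recovered from a few evaluations instead of an iterative nonlinear solution. The coefficient matrices here are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = [rng.standard_normal((n, n)) for _ in range(n)]   # assumed quadratic coefficients
b = rng.standard_normal((n, n))
c = rng.standard_normal(n)

def F(x):
    """A quadratic vector function standing in for rectangular-coordinate load flow mismatches."""
    return np.array([x @ A[i] @ x + b[i] @ x + c[i] for i in range(n)])

x0 = rng.standard_normal(n)
d = rng.standard_normal(n)

# Recover the exact quadratic along the ray from evaluations at t = 0, 1, 2 ...
f0, f1, f2 = F(x0), F(x0 + d), F(x0 + 2 * d)
a2 = 0.5 * (f2 - 2 * f1 + f0)
a1 = f1 - f0 - a2
# ... and check it reproduces F anywhere on the line.
t = 3.7
assert np.allclose(a2 * t**2 + a1 * t + f0, F(x0 + t * d))
```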

Relevance:

20.00%

Publisher:

Abstract:

In this and a preceding paper, we provide an introduction to the Fujitsu VPP range of vector-parallel supercomputers and to some of the computational chemistry software available for the VPP. Here, we consider the implementation and performance of seven popular chemistry application packages. The codes discussed range from classical molecular dynamics to semiempirical and ab initio quantum chemistry. All have evolved from sequential codes, and have typically been parallelised using a replicated data approach. As such they are well suited to the large-memory/fast-processor architecture of the VPP. For one code, CASTEP, a distributed-memory data-driven parallelisation scheme is presented. (C) 2000 Published by Elsevier Science B.V. All rights reserved.
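
The replicated-data pattern mentioned above is sketched below in generic form; it is not code from any of the packages reviewed. Every MPI rank holds the full coordinate array, computes forces for its own share of atoms, and the partial results are summed with a global reduction. The toy force law and array sizes are assumptions; mpi4py supplies the MPI calls, and the script must be run under an MPI launcher.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_atoms = 1000
coords = np.ones((n_atoms, 3))            # replicated on every rank (toy data)
forces = np.zeros_like(coords)

# Each rank handles a strided slice of atoms but reads every atom's coordinates.
for i in range(rank, n_atoms, size):
    r = coords[i] - coords                # toy pairwise term standing in for a real force law
    forces[i] = r.sum(axis=0) / n_atoms

# Sum the partial force arrays so every rank ends up with the full result.
comm.Allreduce(MPI.IN_PLACE, forces, op=MPI.SUM)
```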