986 results for image generation
Abstract:
We present a signal processing approach using discrete wavelet transform (DWT) for the generation of complex synthetic aperture radar (SAR) images at an arbitrary number of dyadic scales of resolution. The method is computationally efficient and is free from significant system-imposed limitations present in traditional subaperture-based multiresolution image formation. Problems due to aliasing associated with biorthogonal decomposition of the complex signals are addressed. The lifting scheme of DWT is adapted to handle complex signal approximations and employed to further enhance the computational efficiency. Multiresolution SAR images formed by the proposed method are presented.
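The lifting idea mentioned above can be illustrated with a single Haar lifting step applied directly to complex-valued samples. This is a minimal sketch under invented data, not the paper's SAR processing chain; it only shows why lifting extends naturally to complex signals and halves the resolution at each dyadic scale.

```python
import numpy as np

def haar_lifting_forward(x):
    """One dyadic scale of the Haar wavelet via lifting.

    The predict/update steps are plain arithmetic, so they work
    unchanged on complex samples: the approximation band remains a
    valid complex signal at half the resolution.
    """
    even, odd = x[0::2], x[1::2]
    detail = odd - even            # predict step
    approx = even + detail / 2     # update step (preserves the local mean)
    return approx, detail

def haar_lifting_inverse(approx, detail):
    """Undo the lifting steps in reverse order for perfect reconstruction."""
    even = approx - detail / 2
    odd = detail + even
    x = np.empty(even.size + odd.size, dtype=approx.dtype)
    x[0::2], x[1::2] = even, odd
    return x

# A toy complex "range line"; repeating the forward step on `a`
# would produce approximations at further dyadic scales.
x = np.exp(1j * np.linspace(0, 4 * np.pi, 16))
a, d = haar_lifting_forward(x)
```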
Abstract:
The Production Workstation developed at the University of Greenwich is evaluated as a tool for assisting all those concerned with production. It enables the producer, director, and cinematographer to explore the quality of the images obtainable with a wide range of tools. Users are free to explore many possible choices, ranging from 35mm film to DV, and combine them with the many image manipulation tools of the cinematographer. The validation required for the system, concerning the accuracy of the resulting imagery, is explicitly examined. Copyright © 1999 by the Society of Motion Picture and Television Engineers, Inc.
Abstract:
Intravascular ultrasound (IVUS) phantoms are important for calibrating and evaluating many IVUS image processing tasks. However, phantom generation is rarely the primary focus of related works; hence, it is seldom covered in depth and is usually based on more than one platform, which may not be accessible to investigators. Therefore, we present a framework for creating representative IVUS phantoms, for different intraluminal pressures, based on the finite element method and Field II. First, a coronary cross-section model is selected. Second, the coronary regions are identified so that material properties can be applied. Third, the corresponding mesh is generated. Fourth, the intraluminal force is applied and the deformation computed. Finally, the speckle noise is incorporated. The framework was tested taking into account IVUS contrast, noise and strains. The outcomes are in line with related studies and expected values. Moreover, the framework toolbox is freely accessible and fully implemented in a single platform. (E-mail: fernando.okara@gmail.com) (c) 2012 World Federation for Ultrasound in Medicine & Biology.
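The final step of the pipeline above, speckle incorporation, can be sketched as multiplicative Rayleigh-distributed noise on an echogenicity map. This is a simplified stand-in for the fully simulated speckle that Field II produces; the phantom size, scale parameter, and function name are all illustrative.

```python
import numpy as np

def add_speckle(echo_map, sigma=0.5, seed=0):
    """Apply multiplicative Rayleigh-distributed speckle to an
    echogenicity map (a common first-order speckle model, not the
    paper's Field II simulation)."""
    rng = np.random.default_rng(seed)
    noise = rng.rayleigh(scale=sigma, size=echo_map.shape)
    return echo_map * noise

# Hypothetical uniform tissue region; real phantoms would assign
# different echogenicities per coronary region before this step.
phantom = np.ones((64, 64))
speckled = add_speckle(phantom)
```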
Abstract:
Between 8 and 40% of Parkinson disease (PD) patients will have visual hallucinations (VHs) during the course of their illness. Although cognitive impairment has been identified as a risk factor for hallucinations, more specific neuropsychological deficits underlying such phenomena have not been established. Research in psychopathology has converged to suggest that hallucinations are associated with confusion between internal representations of events and real events (i.e. impaired source monitoring). We evaluated three groups: 17 Parkinson's patients with visual hallucinations, 20 Parkinson's patients without hallucinations and 20 age-matched controls, using tests of visual imagery, visual perception and memory, including tests of source monitoring and recollective experience. The study revealed that Parkinson's patients with hallucinations appear to have intact visual imagery processes and spatial perception. However, there were impairments in object perception and recognition memory, and poor recollection of the encoding episode in comparison to both non-hallucinating Parkinson's patients and healthy controls. Errors were especially likely to occur when encoding and retrieval cues were in different modalities. The findings raise the possibility that visual hallucinations in Parkinson's patients could stem from a combination of faulty perceptual processing of environmental stimuli, and less detailed recollection of experience combined with intact image generation. (C) 2002 Elsevier Science Ltd. All rights reserved.
Abstract:
We describe ncWMS, an implementation of the Open Geospatial Consortium’s Web Map Service (WMS) specification for multidimensional gridded environmental data. ncWMS can read data in a large number of common scientific data formats – notably the NetCDF format with the Climate and Forecast conventions – then efficiently generate map imagery in thousands of different coordinate reference systems. It is designed to require minimal configuration from the system administrator and, when used in conjunction with a suitable client tool, provides end users with an interactive means for visualizing data without the need to download large files or interpret complex metadata. It is also used as a “bridging” tool providing interoperability between the environmental science community and users of geographic information systems. ncWMS implements a number of extensions to the WMS standard in order to fulfil some common scientific requirements, including the ability to generate plots representing timeseries and vertical sections. We discuss these extensions and their impact upon present and future interoperability. We discuss the conceptual mapping between the WMS data model and the data models used by gridded data formats, highlighting areas in which the mapping is incomplete or ambiguous. We discuss the architecture of the system and particular technical innovations of note, including the algorithms used for fast data reading and image generation. ncWMS has been widely adopted within the environmental data community and we discuss some of the ways in which the software is integrated within data infrastructures and portals.
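The map imagery described above is requested through standard WMS GetMap calls. As a client-side illustration, the snippet below builds a WMS 1.3.0 GetMap URL; the parameter names come from the OGC WMS specification, while the server address and layer name are hypothetical.

```python
from urllib.parse import urlencode

def getmap_url(base, layer, bbox, crs="CRS:84", size=(512, 256),
               fmt="image/png", time=None):
    """Build a WMS 1.3.0 GetMap request URL.

    TIME is the optional dimension parameter that WMS servers for
    gridded data (such as ncWMS) expose for time-varying layers.
    """
    params = {
        "SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetMap",
        "LAYERS": layer, "STYLES": "",
        "CRS": crs, "BBOX": ",".join(map(str, bbox)),
        "WIDTH": size[0], "HEIGHT": size[1], "FORMAT": fmt,
    }
    if time is not None:
        params["TIME"] = time
    return base + "?" + urlencode(params)

# Hypothetical server and layer names, for illustration only.
url = getmap_url("http://example.org/ncWMS/wms", "ocean/sst",
                 (-180, -90, 180, 90), time="2010-01-01T00:00:00Z")
```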
Abstract:
SAFT techniques are based on the sequential activation, in emission and reception, of the array elements and the post-processing of all the received signals to compose the image. Thus, image generation can be divided into two stages: (1) the excitation and acquisition stage, where the signals received by each element or group of elements are stored; and (2) the beamforming stage, where the signals are combined to obtain the image pixels. Graphics Processing Units (GPUs), which are programmable devices with a high level of parallelism, can accelerate the computations of the beamforming process, which usually includes functions such as dynamic focusing, band-pass filtering, spatial filtering, and envelope detection. This work shows that GPU technology can accelerate the beamforming and post-processing algorithms in SAFT imaging by more than an order of magnitude with respect to CPU implementations. ©2009 IEEE.
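Stage (2) above, the beamforming, can be sketched as CPU delay-and-sum over stored pulse-echo A-scans. This is a minimal illustration, not the paper's implementation; the array geometry, sampling rate, and sound speed are invented, and a GPU version would parallelize the per-pixel loop across threads.

```python
import numpy as np

def saft_beamform(signals, elem_x, pixels, c=1540.0, fs=40e6):
    """Delay-and-sum SAFT beamforming (each element fires and
    receives on its own, so the path to a pixel is two-way).

    signals : (n_elem, n_samples) array of stored A-scans
    elem_x  : (n_elem,) element x-positions [m]
    pixels  : (n_pix, 2) pixel (x, z) coordinates [m]
    """
    image = np.zeros(len(pixels))
    for p, (x, z) in enumerate(pixels):
        # Two-way distance: element -> pixel -> same element.
        dist = 2.0 * np.hypot(elem_x - x, z)
        # Convert the time of flight to a sample index per element.
        idx = np.round(dist / c * fs).astype(int)
        valid = idx < signals.shape[1]
        # Coherently sum the delayed samples across the aperture.
        image[p] = signals[valid, idx[valid]].sum()
    return image
```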
When that tune runs through your head: A PET investigation of auditory imagery for familiar melodies
Abstract:
The present study used positron emission tomography (PET) to examine the cerebral activity pattern associated with auditory imagery for familiar tunes. Subjects either imagined the continuation of nonverbal tunes cued by their first few notes, listened to a short sequence of notes as a control task, or listened and then reimagined that short sequence. Subtraction of the activation in the control task from that in the real-tune imagery task revealed primarily right-sided activation in frontal and superior temporal regions, plus supplementary motor area (SMA). Isolating retrieval of the real tunes by subtracting activation in the reimagine task from that in the real-tune imagery task revealed activation primarily in right frontal areas and right superior temporal gyrus. Subtraction of activation in the control condition from that in the reimagine condition, intended to capture imagery of unfamiliar sequences, revealed activation in SMA, plus some left frontal regions. We conclude that areas of right auditory association cortex, together with right and left frontal cortices, are implicated in imagery for familiar tunes, in accord with previous behavioral, lesion and PET data. Retrieval from musical semantic memory is mediated by structures in the right frontal lobe, in contrast to results from previous studies implicating left frontal areas for all semantic retrieval. The SMA seems to be involved specifically in image generation, implicating a motor code in this process.
Abstract:
For decades, Distance Transforms have proven useful for many image processing applications, and more recently they have started to be used in computer graphics environments. The goal of this paper is to propose a new technique based on Distance Transforms for detecting mesh elements which are close to the objects' external contour (from a given point of view), and using this information to weight the approximation error that will be tolerated during the mesh simplification process. The results are evaluated in two ways: visually, and using an objective metric that measures the geometric difference between two polygonal meshes.
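The contour-proximity idea above can be sketched in image space: compute, for every pixel of a rendered silhouette, its distance to the external contour, and derive an error-tolerance weight from it. The silhouette, its size, and the weighting function are all invented for illustration; the paper's actual view-dependent mesh pipeline is not reproduced.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Hypothetical silhouette of the object rendered from the current
# viewpoint: True inside the object, False outside.
silhouette = np.zeros((64, 64), dtype=bool)
silhouette[16:48, 16:48] = True

# Distance (in pixels) from every pixel to the external contour,
# combining the EDT of the inside and of the outside regions.
dist_to_contour = np.where(
    silhouette,
    distance_transform_edt(silhouette),
    distance_transform_edt(~silhouette),
)

# Illustrative weighting: tolerate less simplification error near
# the contour (weight high), more in the interior (weight decays).
weight = 1.0 / (1.0 + dist_to_contour)
```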
Abstract:
A method for fast colour and geometric correction of a tiled display system is presented in this paper. Such displays are a common choice for virtual reality applications and simulators, where a high-resolution image is required. They are the cheapest and most flexible alternative for large image generation, but they require precise geometric and colour correction. The purpose of the proposed method is to correct the projection system as quickly as possible, so that any recalibration does not interfere with the normal operation of the simulator or virtual reality application. The technique uses a single conventional webcam for both geometric and photometric correction. Some prior assumptions are made, such as a planar projection surface and negligible intra-projector colour variation and black-offset levels. If these assumptions hold, geometric and photometric seamlessness can be achieved for this kind of display system. The method described in this paper is scalable to an arbitrary number of projectors and completely automatic.
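With a planar projection surface, the geometric part of such a webcam-based correction reduces to estimating a homography per projector from observed marker correspondences. The sketch below uses the standard Direct Linear Transform; the marker coordinates are invented, and this is only the estimation step, not the paper's full calibration pipeline (an off-the-shelf alternative would be OpenCV's findHomography).

```python
import numpy as np

def fit_homography(src, dst):
    """Direct Linear Transform: find H such that dst ~ H @ src
    (in homogeneous coordinates) for point correspondences, e.g.
    projected markers vs. their positions seen by the webcam."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The solution is the right singular vector of the smallest
    # singular value of the stacked constraint matrix.
    _, _, vt = np.linalg.svd(np.asarray(A, float))
    return vt[-1].reshape(3, 3)

def apply_h(H, pts):
    """Map 2-D points through H and dehomogenize."""
    p = np.c_[pts, np.ones(len(pts))] @ H.T
    return p[:, :2] / p[:, 2:3]

# Hypothetical calibration: four projected markers seen by the webcam.
src = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
dst = np.array([[10., 20.], [110., 22.], [12., 120.], [115., 125.]])
H = fit_homography(src, dst)
```

In a full system, each projector's framebuffer would then be resampled through its estimated homography to align the tiles on the shared surface.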
Abstract:
In recent years, many experimental and theoretical research groups worldwide have actively worked on demonstrating the use of liquid crystals (LCs) as adaptive lenses for image generation, waveform shaping, and non-mechanical focusing applications. In particular, important achievements have concerned the development of alternative solutions for 3D vision. This work focuses on the design and evaluation of the electro-optic response of a LC-based 2D/3D autostereoscopic display prototype. A strategy for achieving 2D/3D vision has been implemented with a cylindrical LC lens array placed in front of a display; this array acts as a lenticular sheet with a tunable focal length by electrically controlling the birefringence. The performance of the 2D/3D device was evaluated in terms of the angular luminance, image deflection, crosstalk, and 3D contrast within a simulated environment. These measurements were performed with characterization equipment for autostereoscopic 3D displays (angular resolution of 0.03°).
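One of the figures of merit named above, crosstalk, is commonly computed from angular-luminance measurements as leakage from the opposite view relative to the intended view, with the black level subtracted from both. The sketch below uses that common definition with invented luminance readings; it is not taken from the paper's measurement protocol.

```python
def crosstalk(lum_unintended, lum_intended, lum_black=0.0):
    """Crosstalk of an autostereoscopic view, per the common 3D
    display definition: (leakage - black) / (signal - black),
    using luminance measured at one viewing position."""
    return (lum_unintended - lum_black) / (lum_intended - lum_black)

# Hypothetical luminance readings (cd/m^2) at a left-eye position:
ct = crosstalk(lum_unintended=6.0, lum_intended=120.0, lum_black=1.0)
# i.e. about 4.2 % of the right-eye image leaks into the left eye
```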