Abstract:
Spaceborne/airborne synthetic aperture radar (SAR) systems provide high resolution two-dimensional terrain imagery. The paper proposes a technique for combining multiple SAR images, acquired on flight paths slightly separated in the elevation direction, to generate high resolution three-dimensional imagery. The technique could be viewed as an extension to interferometric SAR (InSAR) in that it generates topographic imagery with an additional dimension of resolution. The 3-D multi-pass SAR imaging system is typically characterised by a relatively short ambiguity length in the elevation direction. To minimise the associated ambiguities we exploit the relative phase information within the set of images to track the terrain landscape. The SAR images are then coherently combined, via a nonuniform DFT, over a narrow (in elevation) volume centred on the 'dominant' terrain ground plane. The paper includes a detailed description of the technique, background theory, including achievable resolution, and the results of an experimental study.
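The coherent combination over elevation described above amounts to evaluating a nonuniform DFT of the multi-pass pixel values, with spatial frequencies set by the elevation baselines. The following is a minimal illustrative sketch, not the authors' implementation; the function name, the baseline/wavelength/range values, and the frequency approximation f_n = 2 b_n / (λ R) are assumptions drawn from standard SAR tomography.

```python
import numpy as np

def elevation_profile(pixels, baselines, wavelength, slant_range, heights):
    """Sketch of multi-pass coherent combination for one range-azimuth pixel.

    pixels: complex value of the same pixel in each of the N passes.
    baselines: elevation offset of each pass (metres, nonuniform).
    heights: elevation grid (metres) over which to evaluate the profile.
    """
    pixels = np.asarray(pixels, dtype=complex)
    b = np.asarray(baselines, dtype=float)
    # Elevation spatial frequency per baseline (standard tomographic
    # approximation, assumed here): f_n = 2 * b_n / (wavelength * range).
    f = 2.0 * b / (wavelength * slant_range)
    # Nonuniform DFT: sum_n s_n * exp(-j 2 pi f_n z) for each height z.
    return np.array([np.sum(pixels * np.exp(-2j * np.pi * f * z))
                     for z in heights])
```

A point scatterer at elevation z0 produces pass phases exp(+j 2π f_n z0), so the profile magnitude peaks at z0; the nonuniform baseline spacing controls the sidelobe/ambiguity structure the abstract refers to.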
Abstract:
Three-dimensional (3D) synthetic aperture radar (SAR) imaging via multiple-pass processing is an extension of interferometric SAR imaging. It exploits more than two flight passes to achieve a desired resolution in elevation. In this paper, a novel approach is developed to reconstruct a 3D space-borne SAR image with multiple-pass processing. It involves image registration, phase correction and elevational imaging. An image model matching is developed for multiple image registration, an eigenvector method is proposed for the phase correction and the elevational imaging is conducted using a Fourier transform or a super-resolution method for enhancement of elevational resolution. 3D SAR images are obtained by processing simulated data and real data from the first European Remote Sensing satellite (ERS-1) with the proposed approaches.
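One common way to realise the eigenvector-based phase correction mentioned above is to take the leading eigenvector of the sample covariance matrix formed across passes: for calibrated data the covariance is close to rank one, and the eigenvector's phases estimate the per-pass phase errors up to a common constant. This is a generic sketch under that assumption, not the paper's exact algorithm; the function name is hypothetical.

```python
import numpy as np

def estimate_pass_phases(stack):
    """Estimate per-pass phase offsets from a (num_passes, num_pixels)
    complex stack. Result is defined up to a common constant, so pass 0
    is anchored at zero phase."""
    # Sample covariance across passes, averaged over pixels (Hermitian).
    C = stack @ stack.conj().T / stack.shape[1]
    w, V = np.linalg.eigh(C)
    v = V[:, np.argmax(w)]          # leading eigenvector
    phases = np.angle(v)
    # Anchor pass 0 and wrap back to the principal interval.
    return np.angle(np.exp(1j * (phases - phases[0])))
```

After subtracting these phases from each pass, the elevational Fourier transform (or a super-resolution method) can be applied as the abstract describes.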
Abstract:
The aim of this experiment was to determine the effectiveness of two video-based perceptual training approaches designed to improve the anticipatory skills of junior tennis players. Players were assigned equally to an explicit learning group, an implicit learning group, a placebo group or a control group. A progressive temporal occlusion paradigm was used to examine, before and after training, the ability of the players to predict the direction of an opponent's service in an in-vivo on-court setting. The players responded either by hitting a return stroke or by making a verbal prediction of stroke direction. Results revealed that the implicit learning group, whose training required them to predict service direction while viewing temporally occluded video footage of the return-of-serve scenario, significantly improved their prediction accuracy after the training intervention. However, this training effect dissipated after a 32-day unfilled retention interval. The explicit learning group, who received instructions about the specific aspects of the pre-contact service kinematics that are informative with respect to service direction, did not demonstrate any significant performance improvements after the intervention. This, together with the absence of any significant improvements for the placebo and control groups, demonstrated that the improvement observed for the implicit learning group was not a consequence of either expectancy or familiarity effects.
Abstract:
Magnetic resonance imaging (MRI) is an easily automated, reliable technique to investigate axial mixing within rotating drums. Moist bran can be clearly differentiated from dry bran using MRI, allowing it to serve as a non-segregating tracer for axial mixing. For a 20-cm diameter drum, the axial dispersion coefficient in the particle bed was 0.51 cm² s⁻¹. Axial dispersion is scale-dependent.
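An axial dispersion coefficient of this kind is conventionally obtained from a one-dimensional diffusion model, in which the axial variance of the tracer distribution grows linearly in time, σ²(t) = σ₀² + 2Dt. The sketch below (an assumption about the standard analysis, not the paper's stated procedure) recovers D from a linear fit to variance-versus-time data.

```python
import numpy as np

def axial_dispersion_coefficient(times, variances):
    """Fit sigma^2(t) = sigma0^2 + 2*D*t by least squares and return D.

    times in seconds, variances in cm^2  ->  D in cm^2/s.
    """
    slope, intercept = np.polyfit(times, variances, 1)
    return slope / 2.0
```

With tracer variances measured from successive MRI images of the drum, this one-line fit yields D directly in cm² s⁻¹.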
Abstract:
Background: In vivo methods to evaluate the size and composition of atherosclerotic lesions in animal models of atherosclerosis would assist in the testing of antiatherosclerotic drugs. We have developed an MRI method of detecting atherosclerotic plaque in the major vessels at the base of the heart in low-density lipoprotein (LDL) receptor-knockout (LDLR-/-) mice on a high-fat diet. Methods and Results: Three-dimensional fast spin-echo magnetic resonance images were acquired at 7 T by use of cardiac and respiratory triggering, with approximately 140-μm isotropic resolution, over 30 minutes. Comparison of normal and fat-suppressed images from female LDLR-/- mice 1 week before and 8 and 12 weeks after the transfer to a high-fat diet allowed visualization and quantification of plaque development in the innominate artery in vivo. Plaque mean cross-sectional area was significantly greater at week 12 in the LDLR-/- mice (0.14±0.086 mm² [mean±SD]) than in wild-type control mice on a normal diet (0.017±0.031 mm², p
Abstract:
To investigate the ability of ultrasonography to estimate muscle activity, we measured architectural parameters (pennation angles, fascicle lengths, and muscle thickness) of several human muscles (tibialis anterior, biceps brachii, brachialis, transversus abdominis, obliquus internus abdominis, and obliquus externus abdominis) during isometric contractions ranging from 0 to 100% maximal voluntary contraction (MVC). Concurrently, electromyographic (EMG) activity was measured with surface (tibialis anterior only) or fine-wire electrodes. Most architectural parameters changed markedly with contractions up to 30% MVC but changed little at higher levels of contraction. Thus, ultrasound imaging can be used to detect low levels of muscle activity but cannot discriminate between moderate and strong contractions. Ultrasound measures could reliably detect changes in EMG of as little as 4% MVC (biceps muscle thickness), 5% MVC (brachialis muscle thickness), or 9% MVC (tibialis anterior pennation angle). They were generally less sensitive to changes in abdominal muscle activity, but it was possible to reliably detect contractions of 12% MVC in transversus abdominis (muscle length) and 22% MVC in obliquus internus (muscle thickness). Obliquus externus abdominis thickness did not change consistently with muscle contraction, so ultrasound measures of thickness cannot be used to detect activity of this muscle. Ultrasound imaging can thus provide a non-invasive method of detecting isometric muscle contractions of certain individual muscles.
Abstract:
Time motion analysis is extensively used to assess the demands of team sports. At present there is only limited information on the reliability of measurements using this analysis tool. The aim of this study was to establish the reliability of an individual observer's time motion analysis of rugby union. Ten elite level rugby players were individually tracked in Southern Hemisphere Super 12 matches using a digital video camera. The video footage was subsequently analysed by a single researcher on two occasions one month apart. The test-retest reliability was quantified as the typical error of measurement (TEM) and rated as either good (<5% TEM), moderate (5-10% TEM) or poor (>10% TEM). The total time spent in the individual movements of walking, jogging, striding, sprinting, static exertion and being stationary had moderate to poor reliability (5.8-11.1% TEM). The frequency of individual movements had good to poor reliability (4.3-13.6% TEM), while the mean duration of individual movements had moderate reliability (7.1-9.3% TEM). For the individual observer in the present investigation, time motion analysis was shown to be moderately reliable as an evaluation tool for examining the movement patterns of players in competitive rugby. These reliability values should be considered when assessing the movement patterns of rugby players within competition.
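The typical error of measurement used above is commonly computed as the standard deviation of the test-retest differences divided by √2, expressed as a percentage of the grand mean. The following sketch assumes that standard definition (the paper may use a variant); the function name and data are illustrative.

```python
import numpy as np

def typical_error_pct(trial1, trial2):
    """Typical error of measurement (TEM) from paired test-retest data,
    expressed as a percentage of the grand mean (a coefficient of
    variation). Assumes the conventional definition TEM = SD(diff)/sqrt(2)."""
    t1 = np.asarray(trial1, dtype=float)
    t2 = np.asarray(trial2, dtype=float)
    tem = np.std(t1 - t2, ddof=1) / np.sqrt(2.0)
    return 100.0 * tem / np.mean(np.concatenate([t1, t2]))
```

Applied to, say, the total time a player spends sprinting as coded on two occasions, the returned percentage can be compared against the reliability bands the abstract reports.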
Abstract:
The interoperability of IP video equipment is a critical problem for surveillance systems and other video application developers. ONVIF is one of the two specifications addressing the standardization of networked device interfaces, and it is based on SOAP. This paper addresses the development of an ONVIF library for building video camera clients. We discuss the choice of a web services toolkit and how to use the selected toolkit to develop a basic library. From that, we discuss the implementation of features that ...
Abstract:
The Wyner-Ziv video coding (WZVC) rate-distortion performance is highly dependent on the quality of the side information, an estimation of the original frame, created at the decoder. This paper characterizes WZVC efficiency when motion compensated frame interpolation (MCFI) techniques are used to generate the side information, a difficult problem in WZVC especially because the decoder only has available some reference decoded frames. The proposed WZVC compression efficiency rate model relates the power spectral density of the estimation error to the accuracy of the MCFI motion field. Some interesting conclusions may then be derived about the impact of motion field smoothness, and of its correlation with the true motion trajectories, on compression performance.
Abstract:
Fluorescent protein microscopy imaging is nowadays one of the most important tools in biomedical research. However, the resulting images present a low signal-to-noise ratio and a time intensity decay due to the photobleaching effect. This phenomenon is a consequence of the decrease in the radiation emission efficiency of the tagging protein, which occurs because the fluorophore permanently loses its ability to fluoresce, owing to photochemical reactions induced by the incident light. The Poisson multiplicative noise that corrupts these images, together with the quality degradation due to photobleaching, makes long-duration biological observation processes very difficult. In this paper a denoising algorithm for Poisson data, where the photobleaching effect is explicitly taken into account, is described. The algorithm is designed in a Bayesian framework where the data fidelity term models the Poisson noise generation process as well as the exponential intensity decay caused by the photobleaching. The prior term is conceived with Gibbs priors and log-Euclidean potential functions, suitable to cope with the positivity-constrained nature of the parameters to be estimated. Monte Carlo tests with synthetic data are presented to characterize the performance of the algorithm. One example with real data is included to illustrate its application.
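The data-fidelity model described above, Poisson counts whose mean decays exponentially over time, admits a simple closed-form maximum-likelihood estimate when the decay rate is known: maximising Σ [yₜ log(x e^{−λt}) − x e^{−λt}] over x gives x̂ = Σyₜ / Σe^{−λt}. The sketch below illustrates only this likelihood term; the paper's full Bayesian algorithm with Gibbs priors and log-Euclidean potentials is not reproduced here.

```python
import numpy as np

def ml_intensity(counts, times, decay_rate):
    """Maximum-likelihood estimate of the underlying intensity x for one
    pixel, under the model y_t ~ Poisson(x * exp(-decay_rate * t)) with a
    known photobleaching decay rate (an assumption for this sketch)."""
    weights = np.exp(-decay_rate * np.asarray(times, dtype=float))
    return np.sum(counts) / np.sum(weights)
```

Because the photobleaching decay appears explicitly in the likelihood, the estimate is not biased downward by later, dimmer frames, which is the point of modelling the decay rather than averaging raw counts.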
Abstract:
One of the most efficient approaches to generate the side information (SI) in distributed video codecs is through motion compensated frame interpolation, where the current frame is estimated based on past and future reference frames. However, this approach leads to significant spatial and temporal variations in the correlation noise between the source at the encoder and the SI at the decoder. In such a scenario, it would be useful to design an architecture where the SI can be more robustly generated at the block level, avoiding the creation of SI frame regions with lower correlation, largely responsible for some coding efficiency losses. In this paper, a flexible framework to generate SI at the block level in two modes is presented: while the first mode corresponds to a motion compensated interpolation (MCI) technique, the second mode corresponds to a motion compensated quality enhancement (MCQE) technique where a low quality Intra block sent by the encoder is used to generate the SI by performing motion estimation with the help of the reference frames. The novel MCQE mode can be overall advantageous from the rate-distortion point of view, even if some rate has to be invested in the low quality Intra coding blocks, for blocks where the MCI produces SI with lower correlation. The overall solution is evaluated in terms of RD performance, with improvements up to 2 dB, especially for high motion video sequences and long Group of Pictures (GOP) sizes.
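A common way to build MCI side information at the block level is bilateral (symmetric) block matching: for each block of the missing frame, search for the displacement d that minimises the SAD between the past frame shifted by −d and the future frame shifted by +d, then average the two matched blocks. The sketch below is a generic version of that idea, not the authors' implementation; block size, search range, and the assumption that frame dimensions are multiples of the block size are all illustrative.

```python
import numpy as np

def mcfi_side_information(past, future, block=8, search=4):
    """Bilateral motion compensated frame interpolation sketch.

    For each block of the missing (middle) frame, find the symmetric
    displacement d minimising SAD(past[p-d], future[p+d]) and use the
    average of the two matches as side information. Assumes frame
    dimensions are multiples of `block`."""
    H, W = past.shape
    si = np.zeros_like(past, dtype=float)
    for y in range(0, H, block):
        for x in range(0, W, block):
            best = None
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y0, x0 = y - dy, x - dx   # candidate block in past
                    y1, x1 = y + dy, x + dx   # candidate block in future
                    if not (0 <= y0 and y0 + block <= H
                            and 0 <= x0 and x0 + block <= W
                            and 0 <= y1 and y1 + block <= H
                            and 0 <= x1 and x1 + block <= W):
                        continue  # skip candidates falling outside the frame
                    p = past[y0:y0 + block, x0:x0 + block].astype(float)
                    f = future[y1:y1 + block, x1:x1 + block].astype(float)
                    sad = np.abs(p - f).sum()
                    if best is None or sad < best[0]:
                        best = (sad, 0.5 * (p + f))
            si[y:y + block, x:x + block] = best[1]
    return si
```

Blocks where even the best bilateral match has a high residual SAD are exactly the low-correlation regions for which a mode such as MCQE, with a low quality Intra block from the encoder, becomes attractive.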
Abstract:
Motion compensated frame interpolation (MCFI) is one of the most efficient solutions to generate side information (SI) in the context of distributed video coding. However, it creates SI with rather significant motion compensated errors for some frame regions while rather small errors for others, depending on the video content. In this paper, a low complexity Intra mode selection algorithm is proposed to select the most 'critical' blocks in the WZ frame and help the decoder with some reliable data for those blocks. For each block, the novel coding mode selection algorithm estimates the encoding rate for the Intra-based and WZ coding modes and determines the best coding mode while maintaining a low encoder complexity. The proposed solution is evaluated in terms of rate-distortion performance, with improvements up to 1.2 dB relative to a WZ-only coding mode solution.
Abstract:
This thesis aims to contribute to the study and analysis of the factors related to digital radiographic image acquisition techniques, diagnostic quality and the management of radiation dose in digital radiology systems. The methodology is organised in two components. The observational component is based on a retrospective, cross-sectional study design. Data collected from CR and DR systems allowed the evaluation of the technical exposure parameters used in digital radiology, of the absorbed dose and of the detector exposure index. Within this methodological framework (retrospective and cross-sectional), it was also possible to carry out studies of diagnostic quality in digital systems: observer studies based on images archived in the PACS system. The experimental component of the thesis was based on phantom experiments to assess the relationship between dose and image quality. These experiments characterised the physical properties of digital radiology systems by manipulating the variables related to the exposure parameters and assessing their influence on dose and image quality. Using a contrast-detail phantom, anthropomorphic phantoms and an animal bone phantom, it was possible to obtain objective measures of diagnostic quality and of object detectability. Several conclusions can be drawn from this research. Quantitative measures of detector performance are the basis of the optimisation process, allowing the measurement and determination of the physical parameters of digital radiology systems. The exposure parameters used in clinical practice show that current practice is not in line with the European reference framework.
There is a need to evaluate, improve and implement a reference standard for the optimisation process, through new good-practice guidelines adjusted to digital systems. The exposure parameters influence patient dose, but the perceived quality of the digital image does not appear to be affected by variations in exposure. The studies carried out, involving both phantom and patient images, show that overexposure is a potential risk in digital radiology. The assessment of the diagnostic quality of the images showed no substantial degradation of image quality when dose reduction was applied. The study and implementation of new diagnostic reference levels adjusted to digital radiology systems is proposed. As a contribution of the thesis, a model (STDI) is proposed for the optimisation of digital radiology systems.