998 results for Image statistics
Abstract:
We present an open-source ITK implementation of a direct Fourier method for tomographic reconstruction, applicable to parallel-beam x-ray images. Direct Fourier reconstruction makes use of the central-slice theorem to build a polar 2D Fourier space from the 1D transformed projections of the scanned object, which is then resampled onto a Cartesian grid. An inverse 2D Fourier transform eventually yields the reconstructed image. Additionally, we provide a complex wrapper to the BSplineInterpolateImageFunction to overcome ITK's current lack of image interpolators dealing with complex data types. A sample application is presented and extensively illustrated on the Shepp-Logan head phantom. We show that appropriate input zero-padding and 2D-DFT oversampling rates, together with radial cubic b-spline interpolation, improve 2D-DFT interpolation quality and are efficient remedies to reduce reconstruction artifacts.
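A minimal NumPy/SciPy sketch of the direct Fourier pipeline described above (1D FFT of each projection, central-slice placement in polar Fourier space, resampling onto a Cartesian frequency grid, inverse 2D FFT). This is not the ITK implementation; the `sinogram`/`thetas` names, the padding factor and the linear resampling are assumptions made for illustration (the paper itself advocates cubic b-spline interpolation of the complex spectrum).

```python
# Sketch of direct Fourier reconstruction for parallel-beam data.
# `sinogram` is assumed shaped (n_angles, n_detectors), angles in `thetas` (radians).
import numpy as np
from scipy.interpolate import griddata

def direct_fourier_reconstruct(sinogram, thetas, pad_factor=2):
    n_angles, n_det = sinogram.shape
    n = n_det * pad_factor                         # zero-padding oversamples the 1D DFTs

    # 1) 1D Fourier transform of every zero-padded projection
    proj_ft = np.fft.fftshift(np.fft.fft(sinogram, n=n, axis=1), axes=1)
    freqs = np.fft.fftshift(np.fft.fftfreq(n))

    # 2) Central-slice theorem: each transformed projection is a radial
    #    line in 2D Fourier space; build its polar sample coordinates.
    kx = np.outer(np.cos(thetas), freqs).ravel()
    ky = np.outer(np.sin(thetas), freqs).ravel()
    values = proj_ft.ravel()

    # 3) Resample the polar samples onto a Cartesian frequency grid
    #    (linear here; the paper advocates cubic b-splines on complex data).
    KX, KY = np.meshgrid(freqs, freqs)
    f2d = griddata((kx, ky), values, (KX, KY), method='linear', fill_value=0.0)

    # 4) Inverse 2D Fourier transform yields the reconstructed slice
    img = np.real(np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(f2d))))
    c, h = n // 2, n_det // 2                      # approximate centered crop
    return img[c - h:c + h, c - h:c + h]
```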
Abstract:
We investigated the relationship between being bullied and measured body weight and perceived body weight among adolescents of a middle-income sub-Saharan African country. Our data originated from the Global School-based Health Survey, which targets adolescents aged 13-15 years. Student weights and heights were measured before administering the questionnaire, which included questions about personal data, health behaviors and being bullied. Standard criteria were used to assess thinness, overweight and obesity. Among 1,006 participants who had complete data, 16.5% (95%CI 13.3-20.2) reported being bullied ≥ 3 days during the past 30 days; 13.4% were thin, 16.8% were overweight and 7.6% were obese. Categories of actual weight and of perceived weight correlated only moderately (Spearman correlation coefficient 0.37 for boys and 0.57 for girls; p < 0.001). In univariate analysis, both actual obesity (OR 1.76; p = 0.051) and perception of high weight (OR 1.63 for "slightly overweight"; OR 2.74 for "very overweight", both p < 0.05) were associated with being bullied. In multivariate analysis, ORs for categories of perceived overweight were virtually unchanged, while ORs for actual overweight and obesity were substantially attenuated, suggesting a substantial role of perceived weight in the association with being bullied. Actual underweight and perceived thinness also tended to be associated with being bullied, although not significantly. Our findings suggest that more research attention be given to disentangling the significant association between body image, overweight and bullying among adolescents. Further studies in diverse populations are warranted.
Abstract:
Three-dimensional imaging for the quantification of myocardial motion is a key step in the evaluation of cardiac disease. A tagged magnetic resonance imaging method that automatically tracks myocardial displacement in three dimensions is presented. Unlike other techniques, this method tracks both in-plane and through-plane motion from a single image plane without affecting the duration of image acquisition. A small z-encoding gradient is added to the refocusing lobe of the slice-selection gradient pulse in a slice-following CSPAMM acquisition. A z-encoding gradient of opposite polarity is added to the orthogonal tag direction. The additional z-gradients encode the instantaneous through-plane position of the slice. The vertical and horizontal tags are used to resolve in-plane motion, while the added z-gradients are used to resolve through-plane motion. Postprocessing automatically decodes the acquired data and tracks the three-dimensional displacement of every material point within the image plane for each cine frame. Experiments include both phantom and in vivo human validation. These studies demonstrate that the simultaneous extraction of both in-plane and through-plane displacements and pathlines from tagged images is achievable. This capability should open up new avenues for the automatic quantification of cardiac motion and strain for scientific and clinical purposes.
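As a rough illustration of how an opposite-polarity pair of z-encodes can be decoded, the sketch below recovers a through-plane position from the phase difference of two complex tagged images; the function name and the encoding sensitivity `kz_rad_per_mm` are hypothetical, and this is not the paper's actual postprocessing pipeline.

```python
# Schematic sketch only: with +kz and -kz encoding, the phase difference of
# the two complex images cancels background phase and doubles the z term.
import numpy as np

def through_plane_position(img_pos, img_neg, kz_rad_per_mm):
    """img_pos/img_neg: complex images acquired with +kz / -kz encoding."""
    delta_phi = np.angle(img_pos * np.conj(img_neg))   # delta_phi ~ 2 * kz * z
    return delta_phi / (2.0 * kz_rad_per_mm)           # through-plane position in mm
```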
Abstract:
In this paper we present a Bayesian image reconstruction algorithm with entropy prior (FMAPE) that uses a space-variant hyperparameter. The spatial variation of the hyperparameter allows different degrees of resolution in areas of different statistical characteristics, thus avoiding the large residuals resulting from algorithms that use a constant hyperparameter. In the first implementation of the algorithm, we begin by segmenting a Maximum Likelihood Estimator (MLE) reconstruction. The segmentation method is based on a wavelet decomposition and a self-organizing neural network. The result is a predetermined number of extended regions plus a small region for each star or bright object. To assign a different value of the hyperparameter to each extended region and star, we use either feasibility tests or cross-validation methods. Once the set of hyperparameters is obtained, we carry out the final Bayesian reconstruction, leading to a reconstruction with decreased bias and excellent visual characteristics. The method has been applied to data from the non-refurbished Hubble Space Telescope. The method can also be applied to ground-based images.
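The following sketch illustrates only the space-variant hyperparameter idea, not the FMAPE algorithm itself: a hyperparameter map is built from a segmentation label image, and one gradient step of a Poisson log-likelihood penalized by an entropy prior is weighted pixelwise by that map. All names, the prior's default level `m` and the step size are assumptions for illustration.

```python
# Illustrative sketch of a space-variant entropy penalty (not FMAPE itself).
import numpy as np

def hyperparameter_map(labels, alpha_per_region):
    """Assign one hyperparameter value per segmented region."""
    alpha = np.zeros(labels.shape)
    for region, a in alpha_per_region.items():
        alpha[labels == region] = a
    return alpha

def penalized_step(f, data, H, Ht, alpha, m=1.0, step=1e-3):
    """f: current image; data: counts; H/Ht: forward/adjoint projectors (callables)."""
    Hf = np.clip(H(f), 1e-12, None)
    grad_likelihood = Ht(data / Hf - 1.0)                     # Poisson log-likelihood gradient
    grad_entropy = -(np.log(np.clip(f, 1e-12, None) / m) + 1) # gradient of S = -sum f log(f/m)
    return np.clip(f + step * (grad_likelihood + alpha * grad_entropy), 0.0, None)
```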
Abstract:
We present a novel approach for analyzing single-trial electroencephalography (EEG) data using topographic information. The method allows event-related potentials to be visualized using all the recording electrodes, overcoming the limitation of previous approaches that required electrode selection and waveform filtering. We apply this method to EEG data from an auditory object recognition experiment that we have previously analyzed at an ERP level. Temporally structured periods during which a given topography predominated were identified statistically, without any prior information about the temporal behavior. In addition to providing novel methods for EEG analysis, the data indicate that ERPs are reliably observable at a single-trial level when examined topographically.
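A schematic sketch of one way to exploit topographic information at the single-trial level (not necessarily the authors' exact procedure): every trial/time-point map is compared with a set of template topographies via the spatial correlation of average-referenced maps, so all electrodes contribute and no channel selection or waveform filtering is needed. Array shapes and names are assumptions.

```python
# Sketch: spatial correlation of single-trial maps with template topographies.
# `trials`: (n_trials, n_electrodes, n_times); `templates`: (n_templates, n_electrodes).
import numpy as np

def topographic_fit(trials, templates):
    def norm_maps(x, axis):
        x = x - x.mean(axis=axis, keepdims=True)          # average reference per map
        return x / np.linalg.norm(x, axis=axis, keepdims=True)

    t = norm_maps(trials, axis=1)                          # normalize over electrodes
    g = norm_maps(templates, axis=1)
    # correlation of every template with every trial/time-point map
    return np.einsum('net,me->nmt', t, g)                  # (n_trials, n_templates, n_times)
```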
Abstract:
The information in this digest comes from the FY09 Iowa Annual Public Library Survey. It reflects the activities of 522 of the 543 public libraries in Iowa.
Abstract:
The standard data fusion methods may not be satisfactory to merge a high-resolution panchromatic image and a low-resolution multispectral image because they can distort the spectral characteristics of the multispectral data. The authors developed a technique, based on multiresolution wavelet decomposition, for the merging and data fusion of such images. The method presented consists of adding the wavelet coefficients of the high-resolution image to the multispectral (low-resolution) data. They have studied several possibilities, concluding that the method which produces the best results consists of adding the high-order coefficients of the wavelet transform of the panchromatic image to the intensity component (defined as L=(R+G+B)/3) of the multispectral image. The method is, thus, an improvement on standard intensity-hue-saturation (IHS or LHS) mergers. They used the 'à trous' algorithm, which allows the use of a dyadic wavelet to merge nondyadic data in a simple and efficient scheme. They used the method to merge SPOT and Landsat TM images. The technique presented is clearly better than the IHS and LHS mergers in preserving both spectral and spatial information.
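A minimal sketch of this fusion scheme under stated assumptions: the panchromatic band is decomposed with the à trous transform (B3-spline kernel dilated with holes), its detail planes are added to the intensity L=(R+G+B)/3, and the result is pushed back into the bands by a simple intensity ratio, which is just one possible back-substitution rather than the authors' specific choice.

```python
# Sketch of à trous decomposition and detail injection into the intensity component.
import numpy as np
from scipy.ndimage import convolve1d

B3 = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0     # B3-spline scaling kernel

def a_trous_details(img, n_scales):
    """Return the wavelet (detail) planes of the à trous decomposition."""
    details, current = [], img.astype(float)
    for j in range(n_scales):
        kernel = np.zeros(4 * 2**j + 1)
        kernel[::2**j] = B3                             # dilate kernel by 2^j ("holes")
        smooth = convolve1d(convolve1d(current, kernel, axis=0, mode='reflect'),
                            kernel, axis=1, mode='reflect')
        details.append(current - smooth)
        current = smooth
    return details

def fuse(pan, r, g, b, n_scales=2):
    """pan and the upsampled, co-registered R, G, B bands share one pixel grid."""
    L = (r + g + b) / 3.0
    L_fused = L + sum(a_trous_details(pan, n_scales))   # inject pan detail into intensity
    ratio = L_fused / np.clip(L, 1e-6, None)            # simple back-substitution into bands
    return r * ratio, g * ratio, b * ratio
```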
Abstract:
When preparing an article on image restoration in astronomy, it is obvious that some topics have to be dropped to keep the work at a reasonable length. We have decided to concentrate on image and noise models and on the algorithms used to compute the restoration. Topics like parameter estimation and stopping rules are also commented on. We start by describing the Bayesian paradigm and then proceed to study the noise and blur models used by the astronomical community. Then the prior models used to restore astronomical images are examined. We describe the algorithms used to find the restoration for the most common combinations of degradation and image models. Then we comment on important issues such as acceleration of algorithms, stopping rules, and parameter estimation. We also comment on the huge amount of information available to, and made available by, the astronomical community.
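As a concrete example of one of the degradation/image-model combinations such a survey covers, the sketch below implements the classic Richardson-Lucy (maximum-likelihood) iteration for Poisson noise and a known PSF; it is offered as background illustration, not as the specific algorithm of this article.

```python
# Richardson-Lucy deconvolution: ML restoration under Poisson noise with a known PSF.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, n_iter=30):
    """`psf` is assumed normalized to unit sum."""
    estimate = np.full(observed.shape, observed.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = np.clip(fftconvolve(estimate, psf, mode='same'), 1e-12, None)
        correction = fftconvolve(observed / blurred, psf_mirror, mode='same')
        estimate *= correction                      # multiplicative EM/ML update
    return estimate
```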
Abstract:
Usual image fusion methods inject features from a high spatial resolution panchromatic sensor into every low spatial resolution multispectral band, trying to preserve spectral signatures and improve spatial resolution to that of the panchromatic sensor. The objective is to obtain the image that would be observed by a sensor with the same spectral response (i.e., spectral sensitivity and quantum efficiency) as the multispectral sensors and the spatial resolution of the panchromatic sensor. But in these methods, features from electromagnetic spectrum regions not covered by the multispectral sensors are injected into them, and the physical spectral responses of the sensors are not considered during this process. This produces some undesirable effects, such as over-injection of spatial detail into some bands and slightly modified spectral signatures in some features. The authors present a technique which takes into account the physical electromagnetic spectrum responses of the sensors during the fusion process, producing images closer to the image obtained by the ideal sensor than those obtained by usual wavelet-based image fusion methods. This technique is used to define a new wavelet-based image fusion method.
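A hedged sketch of the general idea of spectrally aware injection, with an assumed weighting scheme rather than the authors' exact formulation: the panchromatic detail added to each multispectral band is scaled by the overlap between that band's spectral response and the panchromatic response.

```python
# Sketch: scale injected pan detail by spectral-response overlap (assumed scheme).
import numpy as np

def overlap_weights(pan_response, band_responses, wavelengths):
    """All responses are sampled on the common `wavelengths` grid."""
    total = np.trapz(pan_response, wavelengths)
    return np.array([np.trapz(np.minimum(pan_response, band), wavelengths) / total
                     for band in band_responses])

def weighted_inject(bands, pan_details, weights):
    """bands: (n_bands, H, W); pan_details: summed à trous detail planes, (H, W)."""
    return np.stack([b + w * pan_details for b, w in zip(bands, weights)])
```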
Abstract:
Introduction. The DRIVER I project drew up a detailed report of European repositories based on data gathered in a survey in which Spain's participation was very low. This created a highly distorted image of the implementation of repositories in Spain. This study aims to analyse the current state of Spanish open-access institutional repositories and to describe their characteristics. Method. The data were gathered through a Web survey. The questionnaire was based on that used by DRIVER I: coverage; technical infrastructure and technical issues; institutional policies; services created; and stimulators and inhibitors for establishing, filling and maintaining their digital institutional repositories. Analysis. Data were tabulated and analysed systematically according to the responses obtained from the questionnaire and grouped by coverage. Results. Responses were obtained from 38 of the 104 institutions contacted, which had 29 institutional repositories. This represents 78.3% of the Spanish repositories according to the BuscaRepositorios directory. Spanish repositories contained mainly full-text materials (journal articles and doctoral theses) together with metadata. The software most used was DSpace, followed by EPrints. The metadata standard most used was Dublin Core. Spanish repositories offered more usage statistics and fewer author-oriented services than the European average. The priorities for the future development of the repositories are the need for clear policies on access to scientific production based on public funding and the need for quality control indicators. Conclusions. This is the first detailed study of Spanish institutional repositories. The key stimulants for establishing, filling and maintaining repositories were, in order of importance, the increase of visibility and citation, the interest of decision-makers, simplicity of use and search services. On the other hand, the main inhibitors identified were the absence of policies, the lack of integration with other national and international systems and the lack of awareness efforts among academia.
Abstract:
PURPOSE: To evaluate the effect of a real-time adaptive trigger delay on image quality to correct for heart rate variability in 3D whole-heart coronary MR angiography (MRA). MATERIALS AND METHODS: Twelve healthy adults underwent 3D whole-heart coronary MRA with and without the use of an adaptive trigger delay. The moment of minimal coronary artery motion was visually determined on a high-temporal-resolution MRI scan. Throughout the scan performed without adaptive trigger delay, the trigger delay was kept constant, whereas during the scan performed with adaptive trigger delay, the trigger delay was continuously updated after each RR-interval using physiological modeling. Signal-to-noise ratio, contrast-to-noise ratio, vessel length, vessel sharpness, and subjective image quality were compared in a blinded manner. RESULTS: Vessel sharpness improved significantly for the middle segment of the right coronary artery (RCA) with the use of the adaptive trigger delay (52.3 +/- 7.1% versus 48.9 +/- 7.9%, P = 0.026). Subjective image quality was significantly better in the middle segments of the RCA and left anterior descending artery (LAD) when the scan was performed with adaptive trigger delay compared to constant trigger delay. CONCLUSION: Our results demonstrate that the use of an adaptive trigger delay to correct for heart rate variability improves image quality mainly in the middle segments of the RCA and LAD.
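Purely as an illustration of a per-beat update (the paper's actual physiological model is not described here), the sketch below recomputes the trigger delay after each measured RR-interval using an assumed linear relation between RR changes and the timing of the coronary rest period; all parameter names and the slope value are hypothetical.

```python
# Hypothetical per-beat trigger-delay update around a calibration heart rate.
def adaptive_trigger_delay(rr_ms, rr_calib_ms, delay_calib_ms, slope=0.5):
    """rr_ms: latest RR-interval; delay_calib_ms: delay found at rr_calib_ms.

    `slope` is an assumed fraction of the RR change passed on to the
    rest-period timing (a hypothetical parameter, for illustration only).
    """
    return delay_calib_ms + slope * (rr_ms - rr_calib_ms)

# Example: calibrated at RR = 1000 ms with a 600 ms delay; the next beat is faster.
print(adaptive_trigger_delay(rr_ms=900, rr_calib_ms=1000, delay_calib_ms=600))  # 550.0
```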