998 results for Image statistics


Relevance:

20.00%

Publisher:

Abstract:

One of the key aspects in 3D-image registration is the computation of the joint intensity histogram. We propose a new approach to compute this histogram using uniformly distributed random lines to sample stochastically the overlapping volume between two 3D-images. The intensity values are captured from the lines at evenly spaced positions, taking an initial random offset different for each line. This method provides an accurate, robust and fast mutual information-based registration. The interpolation effects are drastically reduced, due to the stochastic nature of the line generation, and the alignment process is also accelerated. The results obtained show that the introduced method outperforms the classic computation of the joint histogram.
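As a rough illustration of the idea, the sketch below (in Python, reduced to 2-D images for brevity; the function names and the nearest-neighbour intensity capture are our own assumptions, not the paper's implementation) builds a joint histogram by sampling both images along random lines with a per-line random offset and computes mutual information from it:

```python
import math
import random

def mutual_information(joint):
    """Mutual information (in bits) of a joint intensity histogram."""
    total = sum(sum(row) for row in joint)
    px = [sum(row) / total for row in joint]
    py = [sum(joint[i][j] for i in range(len(joint))) / total
          for j in range(len(joint[0]))]
    mi = 0.0
    for i, row in enumerate(joint):
        for j, c in enumerate(row):
            if c:
                pxy = c / total
                mi += pxy * math.log2(pxy / (px[i] * py[j]))
    return mi

def joint_histogram_random_lines(img_a, img_b, bins, n_lines=200, step=1.0,
                                 rng=None):
    """Joint histogram estimated by sampling two overlapping grey-level
    images (values 0-255) along uniformly distributed random lines, with
    an initial random offset different for each line (2-D sketch of the
    3-D method described above)."""
    rng = rng or random.Random(0)
    h, w = len(img_a), len(img_a[0])
    joint = [[0] * bins for _ in range(bins)]
    for _ in range(n_lines):
        # a random interior point and a random direction define the line
        x0, y0 = rng.uniform(0, w - 1), rng.uniform(0, h - 1)
        theta = rng.uniform(0, 2 * math.pi)
        dx, dy = math.cos(theta), math.sin(theta)
        t = rng.uniform(0, step)          # initial random offset per line
        while True:
            x, y = x0 + t * dx, y0 + t * dy
            if not (0 <= x < w and 0 <= y < h):
                break                     # left the overlap region
            xi, yi = int(x), int(y)       # nearest-neighbour capture
            joint[img_a[yi][xi] * bins // 256][img_b[yi][xi] * bins // 256] += 1
            t += step                     # evenly spaced positions
    return joint
```

With identical images the joint histogram concentrates on the diagonal, so the mutual information approaches the marginal entropy of the sampled intensities.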

Relevance:

20.00%

Publisher:

Abstract:

In this paper, an information-theoretic framework for image segmentation is presented. This approach is based on the information channel that goes from the image intensity histogram to the regions of the partitioned image. It allows us to define a new family of segmentation methods which maximize the mutual information of the channel. Firstly, a greedy top-down algorithm which partitions an image into homogeneous regions is introduced. Secondly, a histogram quantization algorithm which clusters color bins in a greedy bottom-up way is defined. Finally, the resulting regions in the partitioning algorithm can optionally be merged using the quantized histogram.
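The quantity that such methods maximise can be sketched directly. The helper below is our own hypothetical illustration, not the paper's code: it evaluates the mutual information of the channel between intensity-histogram bins and the regions of a candidate partition, i.e. the score a greedy top-down or bottom-up algorithm would compare across partitions:

```python
import math

def channel_mutual_information(intensities, regions, n_bins, n_regions):
    """MI (in bits) of the channel from intensity-histogram bins to image
    regions. `intensities` and `regions` are flat, parallel lists giving
    each pixel's bin index and region label."""
    n = len(intensities)
    joint = [[0] * n_regions for _ in range(n_bins)]
    for b, r in zip(intensities, regions):
        joint[b][r] += 1
    pb = [sum(row) / n for row in joint]                      # bin marginal
    pr = [sum(joint[b][r] for b in range(n_bins)) / n         # region marginal
          for r in range(n_regions)]
    mi = 0.0
    for b in range(n_bins):
        for r in range(n_regions):
            if joint[b][r]:
                p = joint[b][r] / n
                mi += p * math.log2(p / (pb[b] * pr[r]))
    return mi
```

A partition whose regions separate the intensity bins scores higher than one that mixes them, which is exactly what drives the greedy merging and splitting.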

Relevance:

20.00%

Publisher:

Abstract:

An objective analysis of image quality parameters was performed for a computed radiography (CR) system using both standard single-side and prototype dual-side read plates. The pre-sampled modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE) of the systems were determined at three different beam qualities representative of pediatric chest radiography, at an entrance detector air kerma of 5 microGy. The NPS and DQE measurements were performed under clinically relevant x-ray spectra for pediatric radiology, including scattered x-ray radiation. Compared to the standard single-side read system, the MTF of the dual-side read system is reduced, but this is offset by a significant decrease in image noise, resulting in a marked increase in DQE (+40%) in the low spatial frequency range. Thus, for the same image quality, the new technology permits the CR system to be used at a reduced dose level.
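For readers unfamiliar with how the three measured quantities combine, here is a minimal sketch assuming the common definition DQE(f) = MTF(f)² / (q · NNPS(f)), where NNPS is the NPS normalised by the squared mean signal and q is the incident photon fluence; the study's exact normalisation may differ:

```python
def dqe(mtf, nnps, q):
    """Frequency-by-frequency DQE from the pre-sampled MTF, the
    normalised NPS (NNPS = NPS / mean_signal**2, in mm^2) and the
    incident photon fluence q (photons / mm^2)."""
    return [m * m / (q * n) for m, n in zip(mtf, nnps)]
```

A lower NNPS at the same MTF raises the DQE, which is the mechanism behind the dual-side plate's +40% gain at low spatial frequencies despite its reduced MTF.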

Relevance:

20.00%

Publisher:

Abstract:

The shape of the energy spectrum produced by an x-ray tube is of great importance in mammography. Many anode-filtration combinations have been proposed to obtain the most effective spectrum shape for the image quality-dose relationship. On the other hand, third-generation synchrotrons such as the European Synchrotron Radiation Facility in Grenoble are able to produce a high flux of monoenergetic radiation. Such a facility is thus a powerful tool to study the effect of beam energy on image quality and dose in mammography. An objective method was used to evaluate image quality and dose in mammography with synchrotron radiation and to compare them to standard conventional units. It was applied systematically in the energy range of interest for mammography through the evaluation of a global image quality index and through the measurement of the mean glandular dose. Compared to conventional mammography units, synchrotron radiation shows a great improvement in the image quality-dose relationship, which is due to the beam monochromaticity and to the high intrinsic collimation of the beam, which allows the use of a slit instead of an anti-scatter grid for scatter rejection.

Relevance:

20.00%

Publisher:

Abstract:

Over the past decade, significant interest has been expressed in relating the spatial statistics of surface-based reflection ground-penetrating radar (GPR) data to those of the imaged subsurface volume. A primary motivation for this work is that changes in the radar wave velocity, which largely control the character of the observed data, are expected to be related to corresponding changes in subsurface water content. Although previous work has indeed indicated that the spatial statistics of GPR images are linked to those of the water content distribution of the probed region, a viable method for quantitatively analyzing the GPR data and solving the corresponding inverse problem has not yet been presented. Here we address this issue by first deriving a relationship between the 2-D autocorrelation of a water content distribution and that of the corresponding GPR reflection image. We then show how a Bayesian inversion strategy based on Markov chain Monte Carlo sampling can be used to estimate the posterior distribution of subsurface correlation model parameters that are consistent with the GPR data. Our results indicate that if the underlying assumptions are valid and we possess adequate prior knowledge regarding the water content distribution, in particular its vertical variability, this methodology allows not only for the reliable recovery of lateral correlation model parameters but also for estimates of parameter uncertainties. In the case where prior knowledge regarding the vertical variability of water content is not available, the results show that the methodology still reliably recovers the aspect ratio of the heterogeneity.
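The first ingredient of the derivation, a 2-D autocorrelation, can be sketched directly. The brute-force version below is our own illustration, not the authors' code: it evaluates the normalised autocorrelation of a zero-mean field on a small grid of lags, the statistic the paper relates between the water content distribution and the GPR reflection image:

```python
def autocorr2d(field, max_lag):
    """Normalised 2-D autocorrelation of a 2-D field (list of rows),
    evaluated at all integer lags (dy, dx) with |dy|, |dx| <= max_lag.
    Returns a dict mapping (dy, dx) -> correlation in [-1, 1]."""
    h, w = len(field), len(field[0])
    mean = sum(map(sum, field)) / (h * w)
    f = [[v - mean for v in row] for row in field]      # remove the mean
    var = sum(v * v for row in f for v in row)          # zero-lag energy
    acf = {}
    for dy in range(-max_lag, max_lag + 1):
        for dx in range(-max_lag, max_lag + 1):
            s = 0.0
            for y in range(h):
                for x in range(w):
                    y2, x2 = y + dy, x + dx
                    if 0 <= y2 < h and 0 <= x2 < w:     # overlap only
                        s += f[y][x] * f[y2][x2]
            acf[(dy, dx)] = s / var
    return acf
```

In the inverse problem, a parametric correlation model (aspect ratio, correlation lengths) would be fitted to this empirical surface within the Markov chain Monte Carlo sampler.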

Relevance:

20.00%

Publisher:

Abstract:

A methodology of exploratory data analysis investigating the phenomenon of orographic precipitation enhancement is proposed. The precipitation observations obtained from three Swiss Doppler weather radars are analysed for the major precipitation event of August 2005 in the Alps. Image processing techniques are used to detect significant precipitation cells/pixels from radar images while filtering out spurious effects due to ground clutter. The contribution of topography to precipitation patterns is described by an extensive set of topographical descriptors computed from the digital elevation model at multiple spatial scales. Additionally, the motion vector field is derived from subsequent radar images and integrated into a set of topographic features to highlight the slopes exposed to main flows. Following the exploratory data analysis with a recent algorithm of spectral clustering, it is shown that orographic precipitation cells are generated under specific flow and topographic conditions. Repeatability of precipitation patterns in particular spatial locations is found to be linked to specific local terrain shapes, e.g. at the top of hills and on the upwind side of the mountains. This methodology and our empirical findings for the Alpine region provide a basis for building computational data-driven models of orographic enhancement and triggering of precipitation. Copyright (C) 2011 Royal Meteorological Society.

Relevance:

20.00%

Publisher:

Abstract:

This project focuses mainly on the noncoherent detector of a GPS receiver. To characterise the detection process of a receiver, the statistics involved must be known. For conventional noncoherent detectors, second-order statistics play the central role. The performance provided by second-order statistics, expressed through the ROC, is good enough, although in certain situations it may not be the best. This project attempts to reproduce the detection process using first-order statistics as an alternative to the already known and implemented second-order statistics. To this end, expressions based on the Central Limit Theorem and on Edgeworth series are used as good approximations. Finally, both the conventional and the proposed statistics are compared, in terms of the ROC, to determine which noncoherent detector offers the best performance in each situation.
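A minimal sketch of the CLT-based ROC approximation for a noncoherent square-law detector, under the standard assumptions (our own, not the project's exact model) that noise-only magnitude-squared samples are Exp(1) with mean 1 and variance 1, and signal-plus-noise samples have mean 1+SNR and variance 1+2·SNR; the Edgeworth-series refinement is omitted:

```python
import math

def q_gauss(x):
    """Gaussian tail probability Q(x) = P(Z > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def roc_clt(snr, n, thresholds):
    """CLT (Gaussian) approximation of the ROC of a noncoherent
    square-law detector summing n magnitude-squared samples: for each
    threshold g, return (Pfa, Pd) from the approximate first- and
    second-order moments of the decision statistic."""
    roc = []
    for g in thresholds:
        pfa = q_gauss((g - n) / math.sqrt(n))                      # noise only
        pd = q_gauss((g - n * (1 + snr)) / math.sqrt(n * (1 + 2 * snr)))
        roc.append((pfa, pd))
    return roc
```

Sweeping the threshold traces the ROC; the project's comparison would overlay this curve against the one obtained from the refined (Edgeworth-corrected) statistics.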

Relevance:

20.00%

Publisher:

Abstract:

Given an observed test statistic and its degrees of freedom, one may compute the observed P value with most statistical packages. It is unknown to what extent test statistics and P values are congruent in published medical papers. Methods: We checked the congruence of statistical results reported in all the papers of volumes 409–412 of Nature (2001) and a random sample of 63 results from volumes 322–323 of BMJ (2001). We also tested whether the frequencies of the last digit of a sample of 610 test statistics deviated from a uniform distribution (i.e., equally probable digits). Results: 11.6% (21 of 181) and 11.1% (7 of 63) of the statistical results published in Nature and BMJ respectively during 2001 were incongruent, probably mostly due to rounding, transcription, or typesetting errors. At least one such error appeared in 38% and 25% of the papers of Nature and BMJ, respectively. In 12% of the cases, the significance level might change by one or more orders of magnitude. The frequencies of the last digit of statistics deviated from the uniform distribution and suggested digit preference in rounding and reporting. Conclusions: This incongruence of test statistics and P values is another example that statistical practice is generally poor, even in the most renowned scientific journals, and that the quality of papers should be more controlled and valued.
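The congruence check itself is straightforward to sketch. The illustration below is our own, restricted to a standard-normal statistic rather than the papers' assorted tests: it recomputes the two-sided P value from the statistic and compares it with the reported one, allowing for rounding of the reported value:

```python
import math

def p_from_z(z):
    """Two-sided P value for a standard-normal test statistic z."""
    return math.erfc(abs(z) / math.sqrt(2))

def congruent(z, reported_p, decimals=3):
    """True if the reported (z, P) pair is congruent, i.e. the
    recomputed P matches the reported one to within rounding at
    `decimals` decimal places."""
    return abs(p_from_z(z) - reported_p) <= 0.5 * 10 ** -decimals
```

For example, z = 1.96 reported alongside P = 0.05 is congruent, whereas the same statistic reported with P = 0.01 would be flagged as one of the incongruent results counted above.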

Relevance:

20.00%

Publisher:

Abstract:

This paper is a joint effort between five institutions that introduces several novel similarity measures and combines them to carry out a multimodal segmentation evaluation. The new similarity measures proposed are based on the location and the intensity values of the misclassified voxels, as well as on the connectivity and the boundaries of the segmented data. We show experimentally that the combination of these measures improves the quality of the evaluation. The study presented here was carried out using four different segmentation methods from four different labs applied to an MRI simulated dataset of the brain. We claim that our new measures improve the robustness of the evaluation and provide a better understanding of the differences between segmentation methods.
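The paper's exact measures are not spelled out in this abstract. As a hedged illustration only, the sketch below pairs the classical Dice overlap with a hypothetical location-sensitive error, in the spirit of the location-based measures described: each misclassified voxel is weighted by its city-block distance to the other segmentation, so errors far from the region cost more than boundary errors:

```python
def dice(seg, ref):
    """Classical Dice overlap between two binary voxel sets
    (sets of coordinate tuples)."""
    return 2 * len(seg & ref) / (len(seg) + len(ref))

def location_weighted_error(seg, ref):
    """Hypothetical location-sensitive measure: mean city-block distance
    of each misclassified voxel to the segmentation it is missing from
    (false positives to `ref`, false negatives to `seg`)."""
    def dist(v, s):
        return min(sum(abs(a - b) for a, b in zip(v, u)) for u in s)
    fp = seg - ref          # false positives
    fn = ref - seg          # false negatives
    errs = [dist(v, ref) for v in fp] + [dist(v, seg) for v in fn]
    return sum(errs) / len(errs) if errs else 0.0
```

Two segmentations with identical Dice scores can differ sharply on the location-weighted error, which is the kind of distinction the combined multimodal evaluation is designed to capture.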

Relevance:

20.00%

Publisher:

Abstract:

A study of how the machine learning technique known as GentleBoost could improve different digital watermarking methods such as LSB, DWT, DCT2 and histogram shifting.
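Of the four watermarking schemes listed, LSB is simple enough to sketch. The toy implementation below is our own (operating on a flat list of 0-255 grey levels, not the study's code): it embeds a bit string into the least-significant bit of each pixel and recovers it again:

```python
def lsb_embed(pixels, bits):
    """Embed watermark `bits` into the least-significant bit of the
    first len(bits) pixels; remaining pixels are untouched. Each pixel
    changes by at most 1 grey level."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)] + pixels[len(bits):]

def lsb_extract(pixels, n):
    """Recover the first n watermark bits from the pixel LSBs."""
    return [p & 1 for p in pixels[:n]]
```

A boosting classifier such as GentleBoost would then be trained to decide, from image features, whether a given image carries a watermark, which is the improvement the study investigates.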

Relevance:

20.00%

Publisher:

Abstract:

BACKGROUND Functional brain images such as Single-Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET) have been widely used to guide clinicians in the diagnosis of Alzheimer's Disease (AD). However, the subjectivity involved in their evaluation has favoured the development of Computer Aided Diagnosis (CAD) systems. METHODS A novel combination of feature extraction techniques is proposed to improve the diagnosis of AD. Firstly, Regions of Interest (ROIs) are selected by means of a t-test carried out on 3D Normalised Mean Square Error (NMSE) features restricted to lie within a predefined brain activation mask. To address the small-sample-size problem, the dimension of the feature space was further reduced by: Large Margin Nearest Neighbours using a rectangular matrix (LMNN-RECT), Principal Component Analysis (PCA) or Partial Least Squares (PLS) (the two latter also analysed with an LMNN transformation). Regarding the classifiers, kernel Support Vector Machines (SVMs) and LMNN using Euclidean, Mahalanobis and Energy-based metrics were compared. RESULTS Several experiments were conducted to evaluate the proposed LMNN-based feature extraction algorithms and their benefits as: i) a linear transformation of the PLS- or PCA-reduced data, ii) a feature reduction technique, and iii) a classifier (with Euclidean, Mahalanobis or Energy-based metrics). The system was evaluated by means of k-fold cross-validation, yielding accuracy, sensitivity and specificity values of 92.78%, 91.07% and 95.12% (for SPECT) and 90.67%, 88% and 93.33% (for PET), respectively, when the NMSE-PLS-LMNN feature extraction method was used in combination with an SVM classifier, thus outperforming recently reported baseline methods. CONCLUSIONS All the proposed methods turned out to be valid solutions for the presented problem. One advance is the robustness of the LMNN algorithm, which not only provides a higher separation rate between the classes but also makes (in combination with NMSE and PLS) this rate variation more stable. In addition, their generalisation ability is another advance, since several experiments were performed on two image modalities (SPECT and PET).
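The reported percentages follow directly from confusion-matrix counts. In the sketch below the counts are hypothetical, chosen only so that the standard formulas reproduce the SPECT figures quoted above; the study's actual cohort sizes are not given in this abstract:

```python
def diagnostic_metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity and specificity from confusion-matrix
    counts, as reported for the CAD system."""
    acc = (tp + tn) / (tp + tn + fp + fn)   # correct / total
    sens = tp / (tp + fn)                   # true positive rate
    spec = tn / (tn + fp)                   # true negative rate
    return acc, sens, spec
```

For instance, hypothetical counts tp=51, tn=39, fp=2, fn=5 yield 92.78%, 91.07% and 95.12%, matching the SPECT results quoted for the NMSE-PLS-LMNN + SVM pipeline.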

Relevance:

20.00%

Publisher:

Abstract:

Forest fires are defined as uncontrolled fires often occurring in wildland areas, but they can also affect houses or agricultural resources. Causes are both natural (e.g., lightning) and anthropogenic (human negligence or arson). The major environmental factors influencing fire ignition and propagation are climate and vegetation. Wildfires are most common and severe during drought periods and on windy days. Moreover, under water-stress conditions, which occur after a long hot and dry period, the vegetation is more vulnerable to fire. These conditions are common in the United States and Canada, where forest fires represent a major problem. We focused our analysis on the state of Florida, for which a large dataset of forest fire detections is readily available. The USDA Forest Service Remote Sensing Application Center, in collaboration with the NASA Goddard Space Flight Center and the University of Maryland, has compiled daily MODIS Thermal Anomalies (fire and biomass-burning images) produced by NASA using a contextual algorithm that exploits the strong emission of mid-infrared radiation from fires. Fire classes were converted to GIS format: daily MODIS fire detections are provided as the centroids of the 1-kilometre pixels and compiled into a daily Arc/INFO point coverage.