919 results for Ordered weighted average
Abstract:
A crucial method for investigating patients with coronary artery disease (CAD) is the calculation of the left ventricular ejection fraction (LVEF). It is therefore imperative to estimate the LVEF precisely, which can be done with myocardial perfusion scintigraphy. The present study aimed to establish and compare the estimation performance of the quantitative parameters of the reconstruction methods filtered backprojection (FBP) and ordered-subset expectation maximization (OSEM). METHODS: A beating-heart phantom with known values of end-diastolic volume, end-systolic volume, and LVEF was used. Quantitative gated SPECT/quantitative perfusion SPECT software was used to obtain these quantitative parameters in a semiautomatic mode. The Butterworth filter was used in FBP, with cutoff frequencies between 0.2 and 0.8 cycles per pixel combined with orders of 5, 10, 15, and 20. Sixty-three reconstructions were performed using 2, 4, 6, 8, 10, 12, and 16 OSEM subsets, combined with several iterations: 2, 4, 6, 8, 10, 12, 16, 32, and 64. RESULTS: With FBP, the end-diastolic, end-systolic, and stroke volumes rise as the cutoff frequency increases, whereas the LVEF diminishes. The same pattern is observed with OSEM reconstruction. However, OSEM gives a more precise estimation of the quantitative parameters, especially with the combinations 2 iterations × 10 subsets and 2 iterations × 12 subsets. CONCLUSION: OSEM reconstruction provides better estimates of the quantitative parameters than FBP. For quantification of left ventricular volumes and ejection fraction in myocardial perfusion scintigraphy, this study recommends 2 iterations with 10 or 12 subsets for OSEM and a cutoff frequency of 0.5 cycles per pixel with an order of 5, 10, or 15 for FBP.
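As a rough illustration of the FBP post-filter referenced above, the sketch below (ours, not the study's software; the function and variable names are assumed) evaluates the standard Butterworth window B(f) = 1/sqrt(1 + (f/fc)^(2n)) over the cutoff frequencies and orders swept in the study.

    import numpy as np

    def butterworth_window(freqs, cutoff, order):
        # B(f) = 1 / sqrt(1 + (f / cutoff)^(2 * order))
        return 1.0 / np.sqrt(1.0 + (freqs / cutoff) ** (2 * order))

    # Frequencies up to the Nyquist limit of 0.5 cycles per pixel; the study sweeps
    # cutoffs of 0.2-0.8 cycles per pixel with orders 5, 10, 15, and 20.
    f = np.linspace(0.0, 0.5, 129)
    profiles = {(c, n): butterworth_window(f, c, n)
                for c in (0.2, 0.5, 0.8) for n in (5, 10, 15, 20)}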
Abstract:
Combined audit report on the institutions under the control of the Iowa Department of Human Services including findings and recommendations and average cost per resident/patient information for the five years ended June 30, 2006
Abstract:
OBJECTIVE: To determine the risks of prosthesis dislocation, postoperative Trendelenburg gait, and sciatic nerve palsy after a posterior approach compared with a direct lateral approach for adult patients undergoing total hip arthroplasty (THA) for primary osteoarthritis (OA). METHODS: Medline, Embase, CINAHL, and Cochrane databases were searched up to August 2003. All published trials comparing posterior and direct lateral surgical approaches to THA in adults with a diagnosis of primary hip osteoarthritis were collected. Retrieved articles were assessed independently for their methodological quality. RESULTS: Four prospective cohort studies involving 241 participants met the inclusion criteria. Regarding dislocation rate, no significant difference between the posterior and direct lateral surgical approaches was found (relative risk 0.35). The presence of postoperative Trendelenburg gait was not significantly different between surgical approaches. The risk of nerve palsy or injury was significantly higher with the direct lateral approach (relative risk 0.16). However, there were no significant differences when comparing this risk nerve by nerve, in particular for the sciatic nerve. Of the other outcomes considered, only the average range of internal rotation in extension of the hip was significantly higher (weighted mean difference 16 degrees) in the posterior approach group (mean 35 degrees, SD 13 degrees) compared with the direct lateral approach group (mean 19 degrees, SD 13 degrees). CONCLUSION: The quality and quantity of information extracted from the trials performed to date are insufficient to make a firm conclusion on the optimum choice of surgical approach for adult patients undergoing primary THA for OA.
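For readers unfamiliar with the two effect measures quoted above, the sketch below shows how they are defined; the counts and weights are hypothetical, not the review's data.

    def relative_risk(events_a, n_a, events_b, n_b):
        # RR = (events_a / n_a) / (events_b / n_b)
        return (events_a / n_a) / (events_b / n_b)

    def weighted_mean_difference(means_a, means_b, weights):
        # Per-study mean differences combined with (e.g., inverse-variance) weights.
        diffs = [ma - mb for ma, mb in zip(means_a, means_b)]
        return sum(w * d for w, d in zip(weights, diffs)) / sum(weights)

    # Hypothetical example: dislocations, posterior vs. direct lateral approach.
    rr = relative_risk(3, 120, 9, 121)                        # about 0.34
    wmd = weighted_mean_difference([35.0], [19.0], [1.0])     # 16 degrees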
Abstract:
In a weighted spatial network, as specified by an exchange matrix, the variances of the spatial values are inversely proportional to the size of the regions. Spatial values are no longer exchangeable under independence, thus weakening the rationale for ordinary permutation and bootstrap tests of spatial autocorrelation. We propose an alternative permutation test for spatial autocorrelation, based upon exchangeable spatial modes, constructed as linear orthogonal combinations of spatial values. The coefficients are obtained as eigenvectors of the standardised exchange matrix appearing in spectral clustering, and generalise to the weighted case the concept of spatial filtering for connectivity matrices. Also, two proposals aimed at transforming an accessibility matrix into an exchange matrix with a priori fixed margins are presented. Two examples (inter-regional migratory flows and binary adjacency networks) illustrate the formalism, rooted in the theory of spectral decomposition for reversible Markov chains.
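A minimal sketch of the spectral construction described above, under assumptions: we use one common spectral-clustering standardization of a symmetric exchange matrix E with margins f; the paper's exact standardization and its handling of the trivial eigenvector may differ.

    import numpy as np

    def spatial_modes(E):
        f = E.sum(axis=1)                       # regional weights (margins)
        D = np.diag(1.0 / np.sqrt(f))
        E_std = D @ E @ D                       # standardized exchange matrix
        eigvals, eigvecs = np.linalg.eigh(E_std)
        return eigvals, eigvecs                 # columns serve as candidate spatial modes

    E = np.array([[0.20, 0.05, 0.02],
                  [0.05, 0.30, 0.08],
                  [0.02, 0.08, 0.20]])
    vals, modes = spatial_modes(E)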
Abstract:
In the fixed design regression model, additional weights are considered for the Nadaraya-Watson and Gasser-Müller kernel estimators. We study their asymptotic behavior and the relationships between new and classical estimators. For a simple family of weights, and considering the IMSE as global loss criterion, we show some possible theoretical advantages. An empirical study illustrates the performance of the weighted estimators in finite samples.
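As a concrete (assumed) instance of the kind of weighted estimator discussed, the sketch below adds per-observation weights w_i to the Nadaraya-Watson estimator on a fixed design; the particular weight family analyzed in the paper is not reproduced here.

    import numpy as np

    def weighted_nadaraya_watson(t_grid, x, y, w, h):
        # m_hat(t) = sum_i w_i K((t - x_i)/h) y_i / sum_i w_i K((t - x_i)/h)
        K = lambda u: np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)  # Gaussian kernel
        return np.array([np.sum(w * K((t - x) / h) * y) /
                         np.sum(w * K((t - x) / h)) for t in t_grid])

    x = np.linspace(0.0, 1.0, 50)                        # fixed design
    y = np.sin(2.0 * np.pi * x) + 0.1 * np.random.randn(50)
    m_hat = weighted_nadaraya_watson(np.linspace(0.0, 1.0, 200),
                                     x, y, np.ones(50), h=0.05)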
Abstract:
We represent interval-ordered homothetic preferences with a quantitative homothetic utility function and a multiplicative bias. When preferences are weakly ordered (i.e., when indifference is transitive), such a bias equals 1. When indifference is intransitive, the biasing factor is a positive function smaller than 1 and measures a threshold of indifference. We show that the bias is constant if and only if preferences are semiordered, and we identify conditions ensuring a linear utility function. We illustrate our approach with indifference sets on a two-dimensional commodity space.
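One schematic way such a representation can be written (our notation, not necessarily the paper's exact form): a homothetic utility u > 0 together with a bias function sigma taking values in (0, 1],

    % schematic representation with multiplicative bias (assumed form)
    x \succ y \iff \sigma(x)\,u(x) > u(y), \qquad
    x \sim y \iff \sigma(x)\,u(x) \le u(y) \ \text{and} \ \sigma(y)\,u(y) \le u(x).
    % With \sigma \equiv 1 indifference is transitive (a weak order);
    % with \sigma < 1 the bias acts as a multiplicative threshold of indifference.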
Abstract:
This paper establishes a general framework for metric scaling of any distance measure between individuals based on a rectangular individuals-by-variables data matrix. The method allows visualization of both individuals and variables while preserving all the good properties of principal axis methods such as principal components and correspondence analysis, based on the singular-value decomposition, including the decomposition of variance into components along principal axes, which provide the numerical diagnostics known as contributions. The idea is inspired by the chi-square distance in correspondence analysis, which weights each coordinate by an amount calculated from the margins of the data table. In weighted metric multidimensional scaling (WMDS) we allow these weights to be unknown parameters that are estimated from the data to maximize the fit to the original distances. Once this extra weight-estimation step is accomplished, the procedure follows the classical path in decomposing a matrix and displaying its rows and columns in biplots.
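A short sketch of the distance family involved (our notation): a weighted squared Euclidean distance between rows of the data matrix, with one nonnegative weight per variable; in the chi-square distance these weights come from the table margins, whereas in WMDS they are treated as parameters fitted to the observed distances.

    import numpy as np

    def weighted_sq_distances(X, w):
        # d2[i, j] = sum_k w[k] * (X[i, k] - X[j, k])**2
        diff = X[:, None, :] - X[None, :, :]
        return np.einsum('ijk,k->ij', diff ** 2, w)

    X = np.random.rand(6, 4)             # 6 individuals, 4 variables
    w = np.array([0.5, 1.0, 2.0, 0.1])   # per-variable weights (estimated in WMDS)
    D2 = weighted_sq_distances(X, w)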
Abstract:
We construct a weighted Euclidean distance that approximates any distance or dissimilarity measure between individuals that is based on a rectangular cases-by-variables data matrix. In contrast to regular multidimensional scaling methods for dissimilarity data, the method leads to biplots of individuals and variables while preserving all the good properties of dimension-reduction methods that are based on the singular-value decomposition. The main benefits are the decomposition of variance into components along principal axes, which provide the numerical diagnostics known as contributions, and the estimation of nonnegative weights for each variable. The idea is inspired by the distance functions used in correspondence analysis and in principal component analysis of standardized data, where the normalizations inherent in the distances can be considered as differential weighting of the variables. In weighted Euclidean biplots we allow these weights to be unknown parameters, which are estimated from the data to maximize the fit to the chosen distances or dissimilarities. These weights are estimated using a majorization algorithm. Once this extra weight-estimation step is accomplished, the procedure follows the classical path in decomposing the matrix and displaying its rows and columns in biplots.
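Because the squared weighted Euclidean distance is linear in the variable weights, a simple nonnegative least-squares fit can stand in for the weight-estimation step; the sketch below is only that stand-in under our assumptions, not the majorization algorithm used in the paper.

    import numpy as np
    from scipy.optimize import nnls

    def fit_variable_weights(X, target_sq_dissim):
        n = X.shape[0]
        A, b = [], []
        for i in range(n):
            for j in range(i + 1, n):
                A.append((X[i] - X[j]) ** 2)   # coefficients of w in d2_ij(w)
                b.append(target_sq_dissim[i, j])
        w, _ = nnls(np.array(A), np.array(b))  # nonnegative per-variable weights
        return w

    X = np.random.rand(8, 3)
    true_w = np.array([2.0, 0.5, 1.0])
    D2 = np.einsum('ijk,k->ij', (X[:, None, :] - X[None, :, :]) ** 2, true_w)
    w_hat = fit_variable_weights(X, D2)        # recovers weights close to true_w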
Abstract:
BACKGROUND: Three-dimensional (3D) navigator-gated and prospectively corrected free-breathing coronary magnetic resonance angiography (MRA) allows for submillimeter image resolution but suffers from poor contrast between coronary blood and myocardium. Data collected over >100 ms per heartbeat are also susceptible to bulk cardiac and respiratory motion. To address these problems, we examined the effect of a T2 preparation prepulse (T2prep) for myocardial suppression and a shortened acquisition window on coronary definition. METHODS AND RESULTS: Eight healthy adult subjects and 5 patients with confirmed coronary artery disease (CAD) underwent free-breathing 3D MRA with and without T2prep and with 120- and 60-ms data-acquisition windows. The T2prep resulted in a 123% (P<0.001) increase in contrast-to-noise ratio (CNR). Coronary edge definition was improved by 33% (P<0.001). Shortening the acquisition window from 120 to 60 ms resulted in better vessel definition (11%; P<0.001). Among patients with CAD, there was a good correspondence with disease. CONCLUSIONS: Free-breathing, T2prep, 3D coronary MRA with a shorter acquisition window resulted in improved CNR and better coronary artery definition, allowing the assessment of coronary disease. This approach offers the potential for free-breathing, noninvasive assessment of the major coronary arteries.
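For context on the reported percentages, the snippet below applies one common contrast-to-noise-ratio definition to purely hypothetical signal values; it illustrates the arithmetic only, not the study's measurements.

    def cnr(signal_blood, signal_myocardium, noise_sd):
        # CNR = (S_blood - S_myocardium) / sigma_noise
        return (signal_blood - signal_myocardium) / noise_sd

    cnr_without_t2prep = cnr(120.0, 95.0, 10.0)   # hypothetical values
    cnr_with_t2prep = cnr(118.0, 62.0, 10.0)
    percent_change = 100.0 * (cnr_with_t2prep - cnr_without_t2prep) / cnr_without_t2prep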
Abstract:
We review methods to estimate the average crystal (grain) size and the crystal (grain) size distribution in solid rocks. Average grain sizes often provide the basis for stress estimates or rheological calculations that require quantification of grain sizes in a rock's microstructure. The primary grain size data are either 1D (i.e., line intercept methods), 2D (area analysis), or 3D (e.g., computed tomography, serial sectioning). These data have been subjected to different treatments over the years, and several studies assume a certain probability function (e.g., logarithmic, square root) to calculate statistical parameters such as the mean, median, mode, or skewness of a crystal size distribution. The resulting average grain sizes have to be compatible between the different grain size estimation approaches in order to be properly applied, for example, in paleo-piezometers or grain size sensitive flow laws. Such compatibility is tested for different data treatments using one- and two-dimensional measurements. We propose an empirical conversion matrix for different datasets. These conversion factors make different datasets compatible with each other, even though the primary measurements were obtained in different ways. To report an average grain size, we propose using the area-weighted mean for 2D measurements and the volume-weighted mean for 3D measurements in the case of unimodal grain size distributions. The shape of the crystal size distribution is important for studies of nucleation and growth of minerals. The shape of the crystal size distribution of garnet populations is compared between different 2D and 3D measurements, namely serial sectioning and computed tomography. The comparison of directly measured 3D data, stereological data, and directly presented 2D data shows the problems of the quality of the smallest grain sizes and the overestimation of small grain sizes in stereological tools, depending on the type of CSD. (C) 2011 Published by Elsevier Ltd.
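A minimal sketch (our notation) of the two averages recommended above: the area-weighted mean for 2D section data and the volume-weighted mean for 3D data.

    import numpy as np

    def area_weighted_mean(diameters, areas):
        return np.sum(areas * diameters) / np.sum(areas)

    def volume_weighted_mean(diameters, volumes):
        return np.sum(volumes * diameters) / np.sum(volumes)

    d_2d = np.array([10.0, 25.0, 40.0])      # equivalent diameters (e.g., in microns)
    a_2d = np.pi * (d_2d / 2.0) ** 2         # section areas of the same grains
    mean_2d = area_weighted_mean(d_2d, a_2d)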
Abstract:
Quantification of short-echo-time proton magnetic resonance spectroscopy results in >18 metabolite concentrations (the neurochemical profile). Their quantification accuracy depends on the assessment of the contribution of macromolecule (MM) resonances, previously achieved experimentally by exploiting the severalfold difference in T1. To minimize the effects of heterogeneity in metabolite T1, the aim of the study was to assess MM signal contributions by combining inversion recovery (IR) and diffusion-weighted proton spectroscopy at high magnetic field (14.1 T) and short echo time (8 msec) in the rat brain. IR combined with diffusion-weighting experiments (with δ/Δ = 1.5/200 msec and b value = 11.8 msec/μm²) showed that the metabolite-nulled spectrum (inversion time = 740 msec) was affected by residuals attributed to creatine, inositol, taurine, choline, and N-acetylaspartate, as well as glutamine and glutamate. While the metabolite residuals were significantly attenuated, by 50%, the MM signals were almost unaffected (<8%). The combination of metabolite-nulled IR spectra with diffusion weighting allows a specific characterization of MM resonances with minimal metabolite signal contributions and is expected to lead to a more precise quantification of the neurochemical profile.
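The two signal manipulations combined in this study can be summarized with standard textbook expressions (a sketch with hypothetical diffusivities, not the study's fitting model): the inversion-recovery signal of a component with relaxation time T1, and monoexponential diffusion attenuation at the quoted b value.

    import numpy as np

    def ir_signal(TI, T1):
        # Longitudinal magnetization after inversion (long-TR approximation).
        return 1.0 - 2.0 * np.exp(-TI / T1)

    def diffusion_attenuation(b, D):
        # S/S0 = exp(-b * D), with b in msec/um^2 and D in um^2/msec.
        return np.exp(-b * D)

    T1_nulled = 740.0 / np.log(2.0)                         # T1 nulled at TI = 740 msec (about 1068 msec)
    att_metabolite = diffusion_attenuation(11.8, 0.06)      # hypothetical metabolite ADC
    att_macromolecule = diffusion_attenuation(11.8, 0.005)  # hypothetical MM ADC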
Abstract:
The purposes of this study were to characterize the performance of a 3-dimensional (3D) ordered-subset expectation maximization (OSEM) algorithm in the quantification of left ventricular (LV) function with (99m)Tc-labeled agent gated SPECT (G-SPECT), the QGS program, and a beating-heart phantom and to optimize the reconstruction parameters for clinical applications. METHODS: A G-SPECT image of a dynamic heart phantom simulating the beating left ventricle was acquired. The exact volumes of the phantom were known and were as follows: end-diastolic volume (EDV) of 112 mL, end-systolic volume (ESV) of 37 mL, and stroke volume (SV) of 75 mL; these volumes produced an LV ejection fraction (LVEF) of 67%. Tomographic reconstructions were obtained after 10-20 iterations (I) with 4, 8, and 16 subsets (S) at full width at half maximum (FWHM) Gaussian postprocessing filter cutoff values of 8-15 mm. The QGS program was used for quantitative measurements. RESULTS: Measured values ranged from 72 to 92 mL for EDV, from 18 to 32 mL for ESV, and from 54 to 63 mL for SV, and the calculated LVEF ranged from 65% to 76%. Overall, the combination of 10 I, 8 S, and a cutoff filter value of 10 mm produced the most accurate results. The plot of the measures with respect to the expectation maximization-equivalent iterations (I × S product) revealed a bell-shaped curve for the LV volumes and a reverse distribution for the LVEF, with the best results in the intermediate range. In particular, FWHM cutoff values exceeding 10 mm affected the estimation of the LV volumes. CONCLUSION: The QGS program is able to correctly calculate the LVEF when used in association with an optimized 3D OSEM algorithm (8 S, 10 I, and FWHM of 10 mm) but underestimates the LV volumes. However, various combinations of technical parameters, including a limited range of I and S (80-160 expectation maximization-equivalent iterations) and low cutoff values (≤10 mm) for the Gaussian postprocessing filter, produced results with similar accuracies and without clinically relevant differences in the LV volumes and the estimated LVEF.
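As a small arithmetic aid for the parameter combinations above (our helper names, not part of the QGS software): expectation maximization-equivalent iterations are the product I × S, and the Gaussian post-filter's FWHM maps to a standard deviation via sigma = FWHM / (2 * sqrt(2 * ln 2)).

    import math

    def em_equivalent_iterations(iterations, subsets):
        return iterations * subsets

    def fwhm_to_sigma(fwhm_mm):
        return fwhm_mm / (2.0 * math.sqrt(2.0 * math.log(2.0)))

    best = em_equivalent_iterations(10, 8)   # 80, inside the 80-160 range cited above
    sigma = fwhm_to_sigma(10.0)              # about 4.25 mm for the 10-mm filter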