952 results for Weighted Overlay Analysis


Relevance:

30.00%

Publisher:

Abstract:

Geographic health planning analyses, such as service area calculations, are hampered by a lack of patient-specific geographic data. Using the limited patient address information in patient management systems, planners analyze patient origin based on home address. Activity space research, applied sparingly in public health but extensively in non-health arenas, instead uses multiple addresses per person when analyzing accessibility. Health care access research has also shown that many non-geographic factors influence choice of provider. Most planning methods, however, overlook these non-geographic factors, and the limited data mean the analyses can only be related to home address. This research attempted to determine to what extent geography plays a part in patient choice of provider, and whether activity space data can be used to calculate service areas for primary care providers.

During Spring 2008, a convenience sample of 384 patients of a locally funded Community Health Center in Houston, Texas, completed a survey asking which factors are important when selecting a health care provider. A subset of this group (336) also completed an activity space log that captured location and time data on the places where the patient regularly goes.

Survey results indicate that for this patient population, geography plays a role in the choice of health care provider, but it is not the most important reason for choosing one. Other factors, such as the provider offering "free or low cost visits", meeting "all of the patient's health care needs", and seeing "the patient quickly", were all ranked higher than geographic reasons.

Analysis of the patient activity locations shows that activity spaces can be used to create service areas for a single primary care provider. Weighted activity-space-based service areas have the potential to include more patients in the service area, since more than one location per patient is used. Further analysis of the logs shows that a reduced set of locations, filtered by time and type, could be used for this methodology, facilitating ongoing data collection for activity-space-based planning efforts.
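As a rough illustration of the weighted activity-space idea described above, the sketch below counts a patient as inside a provider's service area when at least half of the patient's time-weighted activity falls within a distance threshold. All names, coordinates, the weighting by hours, and the 0.5 cutoff are illustrative assumptions, not the study's actual method.

```python
import math

def weighted_coverage(patients, provider, radius_km, cutoff=0.5):
    """Return the patients whose time-weighted activity share within
    `radius_km` of the provider reaches `cutoff` (0.5 is an assumption)."""
    def dist(a, b):
        # Euclidean distance on projected (km) coordinates, for simplicity
        return math.hypot(a[0] - b[0], a[1] - b[1])
    covered = []
    for pid, locations in patients.items():
        total = sum(w for _, w in locations)
        inside = sum(w for loc, w in locations if dist(loc, provider) <= radius_km)
        if inside / total >= cutoff:
            covered.append(pid)
    return covered

# Hypothetical patients: (x, y) activity locations weighted by hours spent there
patients = {
    "p1": [((0, 0), 5), ((10, 0), 2)],
    "p2": [((20, 20), 4), ((22, 20), 3)],
}
print(weighted_coverage(patients, provider=(1, 0), radius_km=5))  # → ['p1']
```

Because each patient contributes several weighted locations, a provider can capture a patient whose home address lies outside the radius but whose routine activity is nearby.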

Relevance:

30.00%

Publisher:

Abstract:

The purpose of this study was to understand the role of principal economic, sociodemographic and health status factors in determining the likelihood and volume of prescription drug use. Econometric demand regression models were developed for this purpose. Ten explanatory variables were examined: family income, coinsurance rate, age, sex, race, household head education level, family size, health status, number of medical visits, and type of provider seen during medical visits. The economic factors (family income and coinsurance) were given special emphasis in this study.

The National Medical Care Utilization and Expenditure Survey (NMCUES) was the data source. The sample represented the civilian, noninstitutionalized residents of the United States in 1980. The survey used a stratified four-stage, area probability sampling design. The sample comprised 6,600 households (17,123 individuals), and the weighted sample provided the population estimates used in the analysis. Five repeated interviews were conducted with each household. The household survey provided detailed information on the health status of the United States population, patterns of health care utilization, charges for services received, and methods of payment for 1980.

The study provided evidence that economic factors influenced the use of prescription drugs, but use was not highly responsive to family income or coinsurance at the levels examined. The elasticities ranged from -0.0002 to -0.013 for family income and from -0.174 to -0.108 for coinsurance. Income had a greater influence on the likelihood of prescription drug use, while coinsurance rates had an impact on the amount spent on prescription drugs. The coinsurance effect was not examined for the likelihood of drug use due to limitations in the measurement of coinsurance. Health status appeared to overwhelm any effects that may be attributed to family income or coinsurance. The likelihood of prescription drug use was highly dependent on visits to medical providers. The volume of prescription drug use was highly dependent on health status, age, and whether or not the individual saw a general practitioner.
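Elasticities like those reported above can be recovered from a linear demand regression as point elasticities at the sample means, e = b · (x̄/ȳ). A minimal pure-Python sketch with toy data (the figures are illustrative, not the NMCUES estimates):

```python
def ols_slope(x, y):
    """Simple-regression OLS slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    return slope, my - slope * mx

def elasticity_at_means(x, y):
    """Point elasticity of a linear demand model evaluated at the means:
    e = b * (x_bar / y_bar)."""
    slope, _ = ols_slope(x, y)
    return slope * (sum(x) / len(x)) / (sum(y) / len(y))

# Toy data (illustrative only): annual family income vs. drug spending
income = [10000, 20000, 30000, 40000]
spending = [120, 130, 140, 150]
print(round(elasticity_at_means(income, spending), 3))  # → 0.185
```

An elasticity far below 1 in magnitude, as in the study's estimates, means drug use is relatively unresponsive to that variable.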

Relevance:

30.00%

Publisher:

Abstract:

Background and Objective. Ever since the human development index was published in 1990 by the United Nations Development Programme (UNDP), researchers have been searching for, and comparatively studying, more effective methods to measure human development. Published in 1999, Lai's "Temporal analysis of human development indicators: principal component approach" provided a valuable statistical approach to human development analysis. The study presented in this thesis is an extension of Lai's 1999 research.

Methods. I used the weighted principal component method on the human development indicators to measure and analyze the progress of human development in about 180 countries around the world from 1999 to 2010. The association between the main principal component obtained from the study and the human development index reported by the UNDP was estimated with Spearman's rank correlation coefficient. The main principal component was then further applied to quantify the temporal changes of human development in selected countries by the proposed Z-test.

Results. The weighted means of all three human development indicators, health, knowledge, and standard of living, increased from 1999 to 2010. The weighted standard deviation for GDP per capita also increased across years, indicating rising inequality of standard of living among countries. The ranking of low-development countries by the main principal component (MPC) is very similar to that by the human development index (HDI). Considerable discrepancy between the MPC and HDI rankings was found among high-development countries, with high GDP per capita countries shifted to higher ranks. The Spearman's rank correlation coefficients between the main principal component and the human development index were all around 0.99. All the above results were very close to the outcomes in Lai's 1999 report. The Z-test on the temporal change of the main principal components from 1999 to 2010 was statistically significant for Qatar, but not for the other selected countries (Brazil, Russia, India, China, and the U.S.A.).

Conclusion. To synthesize the multi-dimensional measurement of human development into a single index, the weighted principal component method provides a good model, using a statistical tool for comprehensive ranking and measurement. The weighted main principal component index is more objective because it uses national populations as weights, more effective when the analysis spans time and space, and more flexible when the set of countries reporting to the system changes from year to year. The index generated using the weighted main principal component therefore has some advantages over the human development index in the UNDP reports.
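The MPC-versus-HDI ranking comparison relies on Spearman's rank correlation, which for untied rankings reduces to ρ = 1 − 6Σd²/(n(n² − 1)). A minimal sketch (assumes no tied ranks):

```python
def spearman(a, b):
    """Spearman's rank correlation for untied rankings:
    rho = 1 - 6 * sum(d_i ** 2) / (n * (n ** 2 - 1))."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Perfectly concordant rankings give rho = 1.0; reversed rankings give -1.0
print(spearman([1, 2, 3, 4], [10, 20, 30, 40]))  # → 1.0
print(spearman([1, 2, 3, 4], [40, 30, 20, 10]))  # → -1.0
```

A coefficient near 0.99, as reported above, indicates the MPC and HDI order countries almost identically.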

Relevance:

30.00%

Publisher:

Abstract:

Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is a noninvasive technique for quantitative assessment of the integrity of the blood-brain barrier and the blood-spinal cord barrier (BSCB) in the presence of central nervous system pathologies. However, DCE-MRI results show substantial variability, which can be caused by a number of factors, including inaccurate T1 estimation, insufficient temporal resolution and poor contrast-to-noise ratio. My thesis work was to develop improved methods to reduce the variability of DCE-MRI results. To obtain a fast and accurate T1 map, the Look-Locker acquisition technique was implemented with a novel, truly centric k-space segmentation scheme. In addition, an original multi-step curve fitting procedure was developed to increase the accuracy of T1 estimation. A view-sharing acquisition method was implemented to increase temporal resolution, and a novel normalization method was introduced to reduce image artifacts. Finally, a new clustering algorithm was developed to reduce apparent noise in the DCE-MRI data. The performance of these proposed methods was verified by simulations and phantom studies. As part of this work, the proposed techniques were applied to an in vivo DCE-MRI study of experimental spinal cord injury (SCI). The methods produced robust results and allow quantitative assessment of regions with very low vascular permeability. In conclusion, the improved DCE-MRI acquisition and analysis methods developed in this thesis can improve the accuracy of DCE-MRI results.
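For context, the standard Look-Locker correction (a textbook result, not the thesis's full multi-step fitting procedure) recovers T1 from the three-parameter model S(TI) = A − B·exp(−TI/T1*) as T1 = T1*·(B/A − 1):

```python
import math

def t1_look_locker(a, b, t1_star):
    """Standard Look-Locker correction: given the fitted parameters of
    S(TI) = A - B * exp(-TI / T1*), the true T1 is T1* * (B/A - 1)."""
    return t1_star * (b / a - 1.0)

# For an ideal inversion (B = 2A) the apparent and true T1 coincide
print(t1_look_locker(a=1.0, b=2.0, t1_star=800.0))  # → 800.0

# Synthetic signal samples from the model (parameters are illustrative)
A, B, T1_star = 1.0, 1.8, 600.0
signal = [A - B * math.exp(-ti / T1_star) for ti in (100, 400, 1600)]
```

In practice B/A < 2 because the readout pulses perturb the recovery, which is why the apparent T1* underestimates T1 and the correction is needed.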

Relevance:

30.00%

Publisher:

Abstract:

Global and local climatic forcing, e.g. the concentration of atmospheric CO2 or insolation, influence the distribution of C3 and C4 plants in southwest Africa. C4 plants dominate in more arid and warmer areas and are favoured by lower pCO2 levels. Several studies have assessed past and present continental vegetation by analysing terrestrial n-alkanes in near-coastal deep-sea sediments, using single samples or a small number of samples from a given climatic stage. The objectives of this study were to evaluate vegetation changes in southwest Africa with regard to climatic changes during the Late Pleistocene and the Holocene, and to elucidate the potential of single-sample simplifications. We analysed two sediment cores at high resolution, altogether ca. 240 samples, from the Southeast Atlantic Ocean (20°S and 12°S), covering time spans of 18 to 1 ka and 56 to 2 ka, respectively. Our results for 20°S showed a marginal decrease in C4 plant dominance (of ca. 5%) during deglaciation, based on average chain length (ACL27-33) values and the carbon isotopic composition of the C31 and C33 n-alkanes. Values for single samples from 18 ka and the Holocene overlap and thus are not significantly representative of the climatic stages they derive from. In contrast, at 12°S the n-alkane parameters show a clear difference in plant type between the Late Pleistocene (C4 plant dominance, 66% C4 on average) and the Holocene (C3 plant dominance, 40% C4 on average). During deglaciation, vegetation change correlates strongly with the increase in pCO2 (r² = 0.91). Short-term climatic events such as Heinrich Stadials or Antarctic warming periods are not reflected by vegetation changes in the catchment area. Instead, smaller vegetation fluctuations during the Late Pleistocene occur in accordance with local variations in insolation.
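The ACL27-33 parameter used above is the concentration-weighted mean of the odd-numbered chain lengths, ACL = Σ(n·Cn)/Σ(Cn) for n = 27, 29, 31, 33. A minimal sketch (the concentration values are illustrative):

```python
def acl(conc):
    """Average chain length over the odd C27-C33 n-alkanes:
    ACL = sum(n * C_n) / sum(C_n) for n in (27, 29, 31, 33).
    `conc` maps chain length to concentration (any consistent unit)."""
    chains = (27, 29, 31, 33)
    total = sum(conc[n] for n in chains)
    return sum(n * conc[n] for n in chains) / total

# Equal abundances give the mid-point chain length
print(acl({27: 1.0, 29: 1.0, 31: 1.0, 33: 1.0}))  # → 30.0

# Shifting abundance toward C31/C33 (typical of C4 grasses) raises the ACL
print(acl({27: 0.5, 29: 1.0, 31: 2.0, 33: 1.5}))
```

Higher ACL values thus track a greater relative contribution of the longer-chain waxes associated with C4 vegetation.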

Relevance:

30.00%

Publisher:

Abstract:

Oceanic sediments deposited at high rates close to continents are dominated by terrigenous material. Aside from dilution by biogenic components, their chemical compositions reflect those of nearby continental masses. This study focuses on oceanic sediments derived from the juvenile Canadian Cordillera and highlights systematic differences between detritus deriving from juvenile crust and detritus from old, mature crust. We report major and trace element concentrations for 68 sediments from the northernmost part of the Cascade forearc, drilled at ODP Sites 888 and 1027. The calculated weighted averages for each site can be used in the future to quantify the contribution of subducted sediments to Cascades volcanism. The two sites have similar compositions, but Site 888, located closer to the continent, has a higher sandy turbidite content and displays higher bulk SiO2/Al2O3 with lower bulk Nb/Zr, attributed to the presence of zircons in the coarse sands. Comparison with published data for other oceanic sedimentary piles demonstrates systematic differences between modern sediments derived from juvenile terranes (juvenile sediments) and modern sediments derived from mature continental areas (cratonic sediments). The most striking systematic difference is in the Th/Nb, Th/U, Nb/U and Th/Rb ratios: juvenile sediments have much lower ratios than cratonic sediments. The small enrichment of Th over Nb in cratonic sediments may be explained by intracrustal magmatic and metamorphic differentiation processes. In contrast, their elevated Th/U and Nb/U ratios (average values of 6.87 and 7.95, respectively) in comparison to juvenile sediments (Th/U ~ 3.09, Nb/U ~ 5.15) suggest extensive U and Rb losses on old cratons. Uranium and Rb losses are attributed to long-term leaching by rain and river water during exposure of the continental crust at the surface. Over geological time, these weathering effects create a slow but systematic increase of Th/U with exposure time.
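The site-wide weighted averages mentioned above amount to a weighted mean over per-sample concentrations, with element ratios then taken from the averaged concentrations rather than by averaging per-sample ratios. The weighting scheme sketched below (e.g. by the interval each sample represents) and all figures are illustrative assumptions, not the reported site values:

```python
def weighted_average(values, weights):
    """Weighted mean, e.g. weighting each sample by the length of the
    drilled interval it represents (the weights here are assumptions)."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# Hypothetical per-sample concentrations (ppm) and interval weights
th = weighted_average([8.0, 2.0], [1.0, 3.0])  # site-average Th
u = weighted_average([2.0, 1.0], [1.0, 3.0])   # site-average U
# Take the ratio of the averaged concentrations, not the average of ratios
print(th / u)  # → 2.8
```

Taking the ratio of averages keeps concentrated samples from being diluted by low-concentration ones, which matters when the averages feed mass-balance estimates of subducted input.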

Relevance:

30.00%

Publisher:

Abstract:

Background: Several meta-analysis methods can be used to quantitatively combine the results of a group of experiments, including the weighted mean difference (WMD), statistical vote counting (SVC), the parametric response ratio (RR) and the non-parametric response ratio (NPRR). The software engineering community has focused on the weighted mean difference method. However, other meta-analysis methods have distinct strengths, such as being usable when variances are not reported. There are as yet no guidelines indicating which method is best in each case. Aim: Compile a set of rules that SE researchers can use to ascertain which aggregation method is best for use in the synthesis phase of a systematic review. Method: Monte Carlo simulation varying the number of experiments in the meta-analyses, the number of subjects they include, their variance, and the effect size. We empirically calculated the reliability and statistical power in each case. Results: WMD is generally reliable if the variance is low, whereas its power depends on the effect size and the number of subjects per meta-analysis; the reliability of RR is generally unaffected by changes in variance, but it requires more subjects than WMD to be powerful; NPRR is the most reliable method, but it is not very powerful; SVC behaves well when the effect size is moderate, but is less reliable with other effect sizes. Detailed tables of results are annexed. Conclusions: Before undertaking statistical aggregation in software engineering, it is worthwhile checking whether there is any appreciable difference in the reliability and power of the methods. If there is, software engineers should select the method that optimizes both parameters.
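The reliability/power comparison rests on Monte Carlo simulation of aggregated experiments. The sketch below illustrates the idea for the weighted mean difference only, pooling raw mean differences by inverse-variance weighting; the parameter values, iteration count, and z-test are assumptions, not the paper's exact setup.

```python
import math
import random
import statistics

def wmd_rejection_rate(n_exp, n_subj, effect, sd, n_iter=200, z_crit=1.96, seed=1):
    """Crude Monte Carlo sketch: each iteration draws `n_exp` two-group
    experiments, pools their raw mean differences by inverse-variance
    weighting, and z-tests the pooled estimate against zero. Returns the
    rejection rate (power if effect > 0, the false-alarm rate if 0)."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(n_iter):
        num = den = 0.0
        for _ in range(n_exp):
            treated = [rng.gauss(effect, sd) for _ in range(n_subj)]
            control = [rng.gauss(0.0, sd) for _ in range(n_subj)]
            diff = statistics.mean(treated) - statistics.mean(control)
            # variance of the mean difference: s1^2/n + s2^2/n
            var = (statistics.variance(treated) + statistics.variance(control)) / n_subj
            num += diff / var
            den += 1.0 / var
        pooled, se = num / den, math.sqrt(1.0 / den)
        if abs(pooled / se) > z_crit:
            rejections += 1
    return rejections / n_iter

power = wmd_rejection_rate(n_exp=5, n_subj=20, effect=0.8, sd=1.0)
false_alarms = wmd_rejection_rate(n_exp=5, n_subj=20, effect=0.0, sd=1.0)
```

Sweeping `n_exp`, `n_subj`, `sd`, and `effect` over a grid and tabulating these rates is the shape of the experiment the abstract describes, here reduced to one aggregation method.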