961 results for Variance analysis


Relevance: 30.00%

Abstract:

PURPOSE: Understanding the learning styles of individuals may assist in tailoring an educational program to optimize learning. General surgery faculty and residents have previously been characterized as tending toward particular learning styles. We seek to better understand the learning styles of general surgery residents and the differences that may exist within the population. METHODS: The Kolb Learning Style Inventory was administered yearly to general surgery residents at the University of Cincinnati from 1994 to 2006. This tool characterizes learning styles into 4 groups: converging, accommodating, assimilating, and diverging. The converging learning style involves learning by actively solving problems. The accommodating learning style uses emotion and interpersonal relationships. The assimilating learning style learns by abstract logic. The diverging learning style learns best by observation. Chi-square analysis and analysis of variance were performed to determine significance. RESULTS: Surveys from 1994 to 2006 (91 residents, 325 responses) were analyzed. The prevalent learning style was converging (185, 57%), followed by assimilating (58, 18%), accommodating (44, 14%), and diverging (38, 12%). At the PGY 1 and 2 levels, male and female residents differed in learning style, with the accommodating learning style relatively more frequent in women and the assimilating learning style more frequent in men (Table 1, p ≤ 0.001, chi-square test). Interestingly, learning style did not seem to change with advancing PGY level within the program, which suggests that individual learning styles may be constant throughout residency training. When a resident's learning style did change, it tended to change toward converging. In addition, no relation was found between learning style and participation in dedicated basic science training or performance on the ABSIT/SBSE. CONCLUSIONS: Our data suggest that learning style differs between male and female general surgery residents but not with PGY level or ABSIT/SBSE performance. A greater understanding of individual learning styles may allow further refinement and tailoring of surgical training programs.
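
A minimal sketch of the kind of chi-square test reported above, assuming a 2 × 4 contingency table of gender by learning style; the counts below are hypothetical, since the abstract does not reproduce Table 1:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: male, female. Columns: converging, assimilating, accommodating, diverging.
counts = np.array([
    [70, 30,  8, 12],   # hypothetical male response counts
    [25,  4, 16,  6],   # hypothetical female response counts
])

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4g}")
```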

Relevance: 30.00%

Abstract:

PURPOSE: To determine the effect of two pairs of echo times (TEs) for in-phase (IP) and opposed-phase (OP) 3.0-T magnetic resonance (MR) imaging on (a) quantitative analysis, prospectively, in a phantom study and (b) diagnostic accuracy, retrospectively, in a clinical study of adrenal tumors, with use of various reference standards in the clinical study. MATERIALS AND METHODS: A fat-saline phantom was used to perform IP and OP 3.0-T MR imaging for various fat fractions. The institutional review board approved this HIPAA-compliant study, with waiver of informed consent. Single-breath-hold IP and OP 3.0-T MR images in 21 patients (14 women, seven men; mean age, 63 years) with 23 adrenal tumors (16 adenomas, six metastases, one adrenocortical carcinoma) were reviewed. The MR protocol involved two acquisition schemes: In scheme A, the first OP echo (approximately 1.5-msec TE) and the second IP echo (approximately 4.9-msec TE) were acquired. In scheme B, the first IP echo (approximately 2.4-msec TE) and the third OP echo (approximately 5.8-msec TE) were acquired. Quantitative analysis was performed, and analysis of variance was used to test for differences between adenomas and nonadenomas. RESULTS: In the phantom study, scheme B did not enable discrimination among voxels containing small amounts of fat. In the clinical study, with scheme A there was no overlap in signal intensity (SI) index values between adenomas and nonadenomas (P < .05). With scheme B, by contrast, it was the adrenal gland SI-to-liver SI ratio and the adrenal gland SI index-to-liver SI index ratio that showed no overlap between adenomas and nonadenomas (P < .05). CONCLUSION: This initial experience indicates that the SI index is the most reliable parameter for characterization of adrenal tumors with 3.0-T MR imaging when the OP echo is obtained before the IP echo. When the IP echo is acquired before the OP echo, however, nonadenomas can be mistaken for adenomas with use of the SI index value.
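
The abstract does not spell out how the SI index is computed; the function below uses the definition commonly applied in chemical-shift (IP/OP) imaging, stated here as an assumption:

```python
def si_index(si_in_phase: float, si_opposed_phase: float) -> float:
    """Percentage signal drop from the in-phase to the opposed-phase image;
    lipid-rich adenomas typically show a large drop."""
    return (si_in_phase - si_opposed_phase) / si_in_phase * 100.0

print(si_index(si_in_phase=400.0, si_opposed_phase=220.0))  # 45.0 (%)
```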

Relevance: 30.00%

Abstract:

PURPOSE: To quantify the interobserver variability of abdominal aortic aneurysm (AAA) neck length and angulation measurements. MATERIALS AND METHODS: A total of 25 consecutive patients scheduled for endovascular AAA repair underwent follow-up 64-row computed tomographic (CT) angiography with 0.625-mm collimation. AAA neck length and angulation were determined by four blinded, independent readers. AAA neck length was defined as the longitudinal distance between the first transverse CT slice directly distal to the lowermost renal artery and the first transverse CT slice showing an outer aortic wall diameter at least 15% larger than the diameter measured directly below the lowermost renal artery. Infrarenal AAA neck angulation was defined as the true angle between the longitudinal axis of the proximal AAA neck and the longitudinal axis of the AAA lumen, as analyzed on three-dimensional CT reconstructions. RESULTS: Mean deviation was 32.3% for aortic neck length determination and 32.1% for aortic neck angulation. Interobserver variability of aortic neck length and angulation measurements was considerable: in every reader combination, at least one measurement difference fell outside the predefined limits of agreement. CONCLUSIONS: Assessment of the longitudinal extension and angulation of the infrarenal aortic neck is associated with substantial observer variability, even when measurement is carried out according to a standardized protocol. Further studies are needed to assess dedicated technical approaches to minimize variance in these determinations.
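
The limits of agreement mentioned above are conventionally obtained with a Bland-Altman analysis; the sketch below shows that computation for one hypothetical reader pair (the abstract does not state the exact formulation used):

```python
import numpy as np

reader_a = np.array([22.0, 15.0, 30.0, 18.0, 25.0])  # hypothetical neck lengths, mm
reader_b = np.array([18.0, 17.0, 24.0, 21.0, 23.0])

diff = reader_a - reader_b
bias = diff.mean()
half_width = 1.96 * diff.std(ddof=1)
print(f"bias = {bias:.1f} mm, "
      f"95% limits of agreement = [{bias - half_width:.1f}, {bias + half_width:.1f}] mm")
```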

Relevance: 30.00%

Abstract:

The number of record-breaking events expected to occur in a strictly stationary time series depends only on the number of values in the series, regardless of the underlying distribution. This holds whether the events are record-breaking highs or lows and whether we count from past to present or from present to past. However, these symmetries are broken in distinct ways by trends in the mean and variance. We define indices that capture this information and use them to detect weak trends from multiple time series. Here, we use these methods to answer the following questions: (1) Is there a variability trend among globally distributed surface temperature time series? We find significantly decreasing variability over the past century for the Global Historical Climatology Network (GHCN). This corresponds to about a 10% change in the standard deviation of inter-annual monthly mean temperature distributions. (2) How are record-breaking high and low surface temperatures in the United States affected by the time period considered? We investigate the United States Historical Climatology Network (USHCN) and find that the ratio of record-breaking highs to lows in 2006 increases as the time series extend further into the past. When we consider the ratio as it evolves with respect to a fixed start year, we find it is strongly correlated with the ensemble mean. We also compare the ratios for USHCN and GHCN (minus USHCN stations). We find the ratios grow monotonically in the GHCN data set, but not in the USHCN data set. (3) Do we detect either mean or variance trends in annual precipitation within the United States? We find that total annual and monthly precipitation in the United States (USHCN) has increased over the past century. Evidence for a trend in variance is inconclusive.
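
The distribution-free property invoked in the first sentence is the classical record-statistics result: for n independent, identically distributed values, the expected number of record highs is the harmonic number H_n = 1 + 1/2 + ... + 1/n, whatever the distribution. A quick simulation check:

```python
import numpy as np

def count_records(x):
    """Count running-maximum (record-high) events in a series."""
    running_max, records = -np.inf, 0
    for value in x:
        if value > running_max:
            records, running_max = records + 1, value
    return records

rng = np.random.default_rng(0)
n, trials = 100, 20_000
simulated = np.mean([count_records(rng.normal(size=n)) for _ in range(trials)])
harmonic = sum(1.0 / k for k in range(1, n + 1))
print(f"simulated: {simulated:.2f}, H_n: {harmonic:.2f}")  # both ~5.19
```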

Relevance: 30.00%

Abstract:

In this thesis, we consider Bayesian inference for the detection of variance change-points in models with scale mixtures of normal (SMN) distributions. This class of distributions is symmetric and thick-tailed and includes as special cases the Gaussian, Student-t, contaminated normal, and slash distributions. The proposed models provide greater flexibility for analyzing practical data, which often exhibit heavy tails and may not satisfy the normality assumption. For the Bayesian analysis, we specify prior distributions for the unknown parameters of the variance change-point models with SMN distributions. Owing to the complexity of the joint posterior distribution, we propose an efficient Gibbs-type algorithm with Metropolis-Hastings steps for posterior inference. Thereafter, following the idea of [1], we consider the problems of single and multiple change-point detection. The performance of the proposed procedures is illustrated and analyzed through simulation studies. A real data set, closing prices from the U.S. stock market, is analyzed for illustrative purposes.
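
A minimal sketch of the kind of sampler described, specialized to the Gaussian member of the SMN family with a single variance change-point, zero mean, and inverse-gamma priors; the full method's SMN mixing variables and Metropolis-Hastings steps are omitted, and all settings are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(0, 1.0, 150), rng.normal(0, 3.0, 100)])
n = len(y)
a, b = 2.0, 1.0                                    # InvGamma(a, b) prior on each variance
cums = np.concatenate([[0.0], np.cumsum(y ** 2)])  # cums[t] = sum of y_i^2 for i <= t
tau = n // 2                                       # initial change-point

for _ in range(2000):
    # Conditional draws of the two variances given the change-point.
    var1 = 1.0 / rng.gamma(a + tau / 2, 1.0 / (b + cums[tau] / 2))
    var2 = 1.0 / rng.gamma(a + (n - tau) / 2, 1.0 / (b + (cums[n] - cums[tau]) / 2))
    # Conditional draw of the change-point given the variances (enumeration).
    t = np.arange(1, n)
    logw = (-t / 2 * np.log(var1) - cums[t] / (2 * var1)
            - (n - t) / 2 * np.log(var2) - (cums[n] - cums[t]) / (2 * var2))
    w = np.exp(logw - logw.max())
    tau = rng.choice(t, p=w / w.sum())

print("final posterior draw of the change-point:", tau)  # true value is 150
```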

Relevance: 30.00%

Abstract:

The early phase of psychotherapy has been regarded as a sensitive period in the unfolding of psychotherapy leading to positive outcomes. However, there is disagreement about the degree to which early (especially relationship-related) session experiences predict outcome over and above initial levels of distress and early response to treatment. The goal of the present study was to simultaneously examine outcome at posttreatment as a function of (a) intake symptom and interpersonal distress as well as early change in well-being and symptoms, (b) the patient's early session experiences, (c) the therapist's early session experiences/interventions, and (d) their interactions. The data of 430 psychotherapy completers treated by 151 therapists were analyzed using hierarchical linear models. Results indicate that early positive intra- and interpersonal session experiences, as reported by patients and therapists after the sessions, explained 58% of the variance in a composite outcome measure, taking intake distress and early response into account. All predictors (other than problem-activating therapist interventions) contributed to later treatment outcomes when entered as single predictors. However, the multi-predictor analyses indicated that interpersonal distress at intake, as well as the early interpersonal session experiences of patients and therapists, remained robust predictors of outcome. The findings underscore that, early in therapy, therapists (and their supervisors) need to understand and monitor multiple interconnected components simultaneously.
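
A hedged sketch of the model class named above, patients nested in therapists with a random therapist intercept; the file and variable names are hypothetical, since the study's composite outcome and predictors are not reproduced here:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("therapy.csv")  # hypothetical: one row per patient
model = smf.mixedlm(
    "outcome ~ intake_distress + early_change + patient_experience + therapist_experience",
    data=df,
    groups=df["therapist_id"],   # random intercept per therapist
)
print(model.fit().summary())
```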

Relevance: 30.00%

Abstract:

Previous research suggests that the personality of a relationship partner predicts not only the individual's own satisfaction with the relationship but also the partner's satisfaction. Based on the actor–partner interdependence model, the present research tested whether actor and partner effects of personality are biased when the same method (e.g., self-report) is used to assess personality and relationship satisfaction and, consequently, shared method variance is not controlled for. Data came from 186 couples in which both partners provided self- and partner reports on the Big Five personality traits. Depending on the research design, actor effects were larger than partner effects (when using only self-reports), smaller than partner effects (when using only partner reports), or of about the same size as partner effects (when using both self- and partner reports). The findings attest to the importance of controlling for shared method variance in dyadic data analysis.
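
In the spirit of the actor–partner interdependence model, a minimal illustration of separating actor from partner effects; the column names are hypothetical, and the simple cluster-robust regression below stands in for the full dyadic model:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("couples.csv")  # hypothetical: one row per individual
# actor_trait: own personality score; partner_trait: the partner's score.
fit = smf.ols("satisfaction ~ actor_trait + partner_trait", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["couple_id"]}  # partners are not independent
)
print(fit.params)
```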

Relevance: 30.00%

Abstract:

In this paper, we present statistical analyses of several types of traffic sources in a 3G network, namely voice, video, and data sources. For each traffic source type, measurements were collected in order to, on the one hand, gain a better understanding of the statistical characteristics of the sources and, on the other hand, enable forecasting of traffic behaviour in the network. The latter can be used to estimate service times and quality-of-service parameters. The probability density function, mean, variance, mean square deviation, skewness, and kurtosis of the interarrival times are estimated with the Wolfram Mathematica and Crystal Ball statistical tools. Based on the evaluation of packet interarrival times, we show how the gamma distribution can be used in network simulations and in the evaluation of available capacity in opportunistic systems. Our analyses yield shape and scale parameters for the gamma distribution. The data can also be applied to dynamic network configuration in order to avoid potential network congestion or overflows.
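
The paper's gamma fit used Wolfram Mathematica and Crystal Ball; an equivalent computation in SciPy, on synthetic interarrival times, looks like this (for a gamma distribution, the method-of-moments estimates are shape = mean²/variance and scale = variance/mean):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
interarrival = rng.gamma(shape=0.8, scale=12.5, size=5000)  # synthetic times, ms

# Method of moments.
m, v = interarrival.mean(), interarrival.var(ddof=1)
print("moment estimates:", m ** 2 / v, v / m)

# Maximum likelihood, location fixed at zero.
shape, loc, scale = stats.gamma.fit(interarrival, floc=0)
print("MLE estimates:", shape, scale)
```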

Relevance: 30.00%

Abstract:

Objectives: Etravirine (ETV) is metabolized by cytochrome P450 (CYP) 3A, 2C9, and 2C19. Metabolites are glucuronidated by uridine diphosphate glucuronosyltransferases (UGT). To identify the potential impact of genetic and non-genetic factors involved in ETV metabolism, we carried out a two-step pharmacogenetics-based population pharmacokinetic study in HIV-1 infected individuals. Materials and methods: The study population included 144 individuals contributing 289 ETV plasma concentrations and four individuals contributing 23 ETV plasma concentrations collected in a rich sampling design. Genetic variants [n=125 single-nucleotide polymorphisms (SNPs)] in 34 genes with a predicted role in ETV metabolism were selected. A first-step population pharmacokinetic model included non-genetic and known genetic factors (seven SNPs in CYP2C, one SNP in CYP3A5) as covariates. Post hoc individual ETV clearance (CL) was used in a second (discovery) step, in which the effect of the remaining 98 SNPs in CYP3A, P450 cytochrome oxidoreductase (POR), nuclear receptor genes, and UGTs was investigated. Results: A one-compartment model with zero-order absorption best characterized ETV pharmacokinetics. The average ETV CL was 41 l/h (CV 51.1%), the volume of distribution was 1325 l, and the mean absorption time was 1.2 h. The administration of darunavir/ritonavir or tenofovir was the only non-genetic covariate influencing ETV CL significantly, resulting in a 40% [95% confidence interval (CI): 13–69%] and a 42% (95% CI: 17–68%) increase in ETV CL, respectively. Carriers of rs4244285 (CYP2C19*2) had 23% (8–38%) lower ETV CL. Co-administered antiretroviral agents and genetic factors explained 16% of the variance in ETV concentrations. None of the SNPs in the discovery step influenced ETV CL. Conclusion: ETV concentrations are highly variable, and co-administered antiretroviral agents and genetic factors explained only a modest part of the interindividual variability in ETV elimination. Opposing effects of interacting drugs effectively abrogate genetic influences on ETV CL, and vice versa.
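
The structural model named in the results, a one-compartment model with zero-order absorption, has a closed-form concentration profile. The sketch below uses the reported population estimates (CL 41 l/h, V 1325 l, mean absorption time 1.2 h, i.e. a 2.4-h zero-order input); the dose is an assumption for illustration:

```python
import numpy as np

def conc(t, dose=200.0, dur=2.4, cl=41.0, v=1325.0):
    """Plasma concentration (mg/l) after a zero-order input of `dose` mg over
    `dur` h into a one-compartment model; a zero-order input of duration D
    has mean absorption time D/2 (here 2.4/2 = 1.2 h)."""
    ke = cl / v                           # elimination rate constant, 1/h
    r0 = dose / dur                       # input rate, mg/h
    t = np.asarray(t, dtype=float)
    during = (r0 / cl) * (1 - np.exp(-ke * t))
    c_end = (r0 / cl) * (1 - np.exp(-ke * dur))
    after = c_end * np.exp(-ke * np.maximum(t - dur, 0.0))
    return np.where(t <= dur, during, after)

print(conc([1.0, 2.4, 12.0]))
```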

Relevance: 30.00%

Abstract:

OBJECTIVE Texture analysis is an alternative method to quantitatively assess MR images. In this study, we introduce dynamic texture parameter analysis (DTPA), a novel technique to investigate the temporal evolution of texture parameters using dynamic susceptibility contrast enhanced (DSCE) imaging. Here, we aim to introduce the method and its application to enhancing lesions (EL), non-enhancing lesions (NEL), and normal appearing white matter (NAWM) in multiple sclerosis (MS). METHODS We investigated 18 patients with MS or clinically isolated syndrome (CIS), diagnosed according to the 2010 McDonald criteria, using DSCE imaging at different field strengths (1.5 and 3 Tesla). Tissues of interest (TOIs) were defined within 27 EL, 29 NEL, and 37 NAWM areas after normalization, and eight histogram-based texture parameter maps (TPMs) were computed. TPMs quantify the heterogeneity of the TOI. For every TOI, the average, variance, skewness, kurtosis, and variance-of-the-variance statistical parameters were calculated. These TOI parameters were further analyzed using one-way ANOVA followed by multiple Wilcoxon rank-sum testing corrected for multiple comparisons. RESULTS Tissue- and time-dependent differences were observed in the dynamics of the computed texture parameters. Sixteen parameters discriminated between EL, NEL, and NAWM (pAVG = 0.0005). Significant differences in the DTPA texture maps were found during inflow (52 parameters), outflow (40 parameters), and reperfusion (62 parameters). The strongest discriminators among the TPMs were the variance-related parameters, while the skewness and kurtosis TPMs were in general less sensitive to differences between the tissues. CONCLUSION DTPA of DSCE image time series revealed characteristic time responses for ELs, NELs, and NAWM. This may be further used for a refined quantitative grading of MS lesions during their evolution from the acute to the chronic state. DTPA discriminates lesions beyond features of enhancement or T2 hypersignal, on a numeric scale, allowing for a more subtle grading of MS lesions.
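
A sketch of the histogram statistics DTPA aggregates per tissue of interest at each time point of the dynamic series; the array shapes and synthetic data are assumptions:

```python
import numpy as np
from scipy.stats import skew, kurtosis

def toi_statistics(series, mask):
    """series: (T, X, Y) dynamic image stack; mask: boolean (X, Y) TOI mask.
    Returns one row (mean, variance, skewness, kurtosis) per time point."""
    rows = []
    for frame in series:
        voxels = frame[mask]
        rows.append((voxels.mean(), voxels.var(ddof=1), skew(voxels), kurtosis(voxels)))
    return np.array(rows)

rng = np.random.default_rng(3)
demo = toi_statistics(rng.normal(size=(40, 64, 64)), rng.random((64, 64)) > 0.9)
print(demo.shape)  # (40, 4): 40 dynamic time points, 4 statistics
```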

Relevance: 30.00%

Abstract:

Any functionally important mutation is embedded in an evolutionary matrix of other mutations. Cladistic analysis, based on this observation, is a method of investigating gene effects that uses a haplotype phylogeny to define a set of tests localizing causal mutations to branches of the phylogeny. Previous implementations of cladistic analysis have not addressed the issue of analyzing data from related individuals, though in human studies family data are usually needed to obtain unambiguous haplotypes. In this study, a method of cladistic analysis is described in which haplotype effects are parameterized in a linear model that accounts for familial correlations. The method was used to study the effect of apolipoprotein (Apo) B gene variation on total, LDL, and HDL cholesterol, triglyceride, and Apo B levels in 121 French families. Five polymorphisms defined the Apo B haplotypes: the signal peptide insertion/deletion, Bsp1286I, XbaI, MspI, and EcoRI. Eleven haplotypes were found, and a haplotype phylogeny was constructed and used to define a set of tests of haplotype effects on lipid and Apo B levels. This new method of cladistic analysis, the parametric method, found significant single-haplotype effects for all variables. For HDL-cholesterol, 3 clusters of evolutionarily related haplotypes affecting levels were found. Haplotype effects accounted for about 10% of the genetic variance of triglyceride and HDL-cholesterol levels. The results of the parametric method were compared with those of a method of cladistic analysis based on permutational testing. The permutational method detected fewer haplotype effects, even when modified to account for correlations within families. Simulation studies exploring these differences found evidence of systematic errors in the permutational method due to the process by which haplotype groups were selected for testing. The applicability of cladistic analysis to human data was demonstrated. The parametric method is suggested as an improvement over the permutational method. This study has identified candidate haplotypes for sequence comparisons in order to locate the functional mutations in the Apo B gene that may influence plasma lipid levels.
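
A hedged sketch of the parametric idea: haplotype effects entered as fixed effects in a linear model with a random family intercept to absorb familial correlation. The data file and column names are hypothetical:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("apob_families.csv")  # hypothetical: one row per individual
model = smf.mixedlm(
    "hdl_cholesterol ~ C(haplotype)",  # haplotypes as a categorical fixed effect
    data=df,
    groups=df["family_id"],            # random intercept per family
)
print(model.fit().summary())
```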

Relevance: 30.00%

Abstract:

Linkage disequilibrium methods can be used to find genes influencing quantitative trait variation in humans. To find loci with a specific effect size, linkage disequilibrium methods can require smaller sample sizes than linkage equilibrium methods, such as the variance component approach. The increase in power comes at the expense of requiring more markers to be typed to scan the entire genome. This thesis compares different linkage disequilibrium methods to determine which factors influence the power to detect disequilibrium. The costs of disequilibrium and equilibrium tests were compared to determine whether the savings in phenotyping costs when using disequilibrium methods outweigh the additional genotyping costs. Nine linkage disequilibrium tests were examined by simulation. Five tests involve selecting isolated unrelated individuals, while four involve the selection of parent-child trios (TDT). All nine tests were found to identify disequilibrium with the correct significance level in Hardy-Weinberg populations. Increasing linked genetic variance and trait allele frequency increased the power to detect disequilibrium, while increasing the number of generations and the distance between marker and trait loci decreased it. Discordant sampling was used for several of the tests; the more stringent the sampling, the greater the power to detect disequilibrium in a sample of given size. The power to detect disequilibrium was not affected by the presence of polygenic effects. When the trait locus had more than two trait alleles, the power of the tests maximized to less than one: for the simulation methods used here, there was then a probability equal to 1 minus the heterozygosity of the marker locus that both trait alleles were in disequilibrium with the same marker allele, rendering the marker uninformative for disequilibrium. The five tests using isolated unrelated individuals showed excess error rates when there was disequilibrium due to population admixture. Increased error rates also resulted from increased unlinked major gene effects, discordant trait allele frequency, and increased disequilibrium. Polygenic effects did not affect the error rates. Tests based on the TDT (transmission disequilibrium test) were not liable to any increase in error rates. For all sample ascertainment costs, for recent mutations (<100 generations) linkage disequilibrium tests were less expensive to carry out than the variance component test. Candidate gene scans saved even more money, and the use of recently admixed populations also decreased the cost of performing a linkage disequilibrium test.
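
For reference, the classical TDT statistic mentioned above compares transmissions (b) and non-transmissions (c) of a candidate allele from heterozygous parents with a McNemar-type chi-square; the counts below are hypothetical:

```python
from scipy.stats import chi2

b, c = 62, 38                         # transmitted vs. untransmitted counts
tdt_stat = (b - c) ** 2 / (b + c)     # chi-square with 1 df under the null
p_value = chi2.sf(tdt_stat, df=1)
print(f"TDT chi2 = {tdt_stat:.2f}, p = {p_value:.4f}")
```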

Relevance: 30.00%

Abstract:

In numerous intervention studies and education field trials, random assignment to treatment occurs in clusters rather than at the level of observation. This departure from individual-level random assignment may be due to logistics, political feasibility, or ecological validity. Data within the same cluster or grouping are often correlated. Application of traditional regression techniques, which assume independence between observations, to clustered data produces consistent parameter estimates. However, such estimators are often inefficient compared with methods that incorporate the clustered nature of the data into the estimation procedure (Neuhaus, 1993). Multilevel models, also known as random effects or random components models, can be used to account for the clustering of data by estimating higher-level (group) as well as lower-level (individual) variation. Designing a study in which the unit of observation is nested within higher-level groupings requires the determination of sample sizes at each level. This study investigates the design and analysis of various sampling strategies for a 3-level repeated measures design, and their effect on the parameter estimates, when the outcome variable of interest follows a Poisson distribution. The results suggest that second-order PQL estimation produces the least biased estimates in the 3-level multilevel Poisson model, followed by first-order PQL and then second- and first-order MQL. The MQL estimates of both fixed and random parameters are generally satisfactory when the level-2 and level-3 variation is less than 0.10. However, as the higher-level error variance increases, the MQL estimates become increasingly biased. If convergence of the estimation algorithm is not obtained by the PQL procedure and the higher-level error variance is large, the estimates may be significantly biased; in this case, bias-correction techniques such as bootstrapping should be considered as an alternative. For larger sample sizes, structures with 20 or more units sampled at the levels with normally distributed random errors produced more stable estimates with less sampling variance than structures with an increased number of level-1 units. For small sample sizes, sampling fewer units at the level with Poisson variation produces less sampling variation; this criterion is no longer important when sample sizes are large. Reference: Neuhaus J (1993). "Estimation efficiency and tests of covariate effects with clustered binary data." Biometrics, 49, 989–996.
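
A sketch of how data from such a 3-level repeated-measures Poisson design can be simulated, with normal random effects on the log rate; all sizes and variance values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(11)
n_clusters, n_individuals, n_obs = 20, 10, 5
sigma2_l3, sigma2_l2 = 0.10, 0.10     # level-3 and level-2 random-effect variances
beta0 = 1.0                           # fixed intercept on the log scale

u3 = rng.normal(0.0, np.sqrt(sigma2_l3), n_clusters)                   # cluster effects
u2 = rng.normal(0.0, np.sqrt(sigma2_l2), (n_clusters, n_individuals))  # individual effects
log_rate = beta0 + u3[:, None, None] + u2[:, :, None] + np.zeros(n_obs)
counts = rng.poisson(np.exp(log_rate))                                 # level-1 outcomes
print(counts.shape, counts.mean())    # (20, 10, 5); mean ~ exp(beta0 + total variance / 2)
```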

Relevance: 30.00%

Abstract:

The article offers a systematic analysis of the comparative trajectory of international democratic change. In particular, it focuses on the resulting convergence or divergence of political systems, borrowing from the literatures on institutional change and policy convergence. To this end, political-institutional data in line with Arend Lijphart's (1999, 2012) empirical theory of democracy are analyzed for 24 developed democracies between 1945 and 2010. Heteroscedastic multilevel models allow the development of the variance of types of democracy over time to be modeled directly, revealing information about convergence and adding substantive explanations. The findings indicate that there has been a trend away from extreme types of democracy in single cases, but no unconditional trend of convergence can be observed. There are, however, conditional processes of convergence. In particular, economic globalization and the domestic veto structure interactively influence democratic convergence.
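
The core idea of a heteroscedastic model, in miniature: let the residual log-variance depend on time, so a negative variance slope indicates convergence toward the mean type. The single-level maximum-likelihood sketch below, on synthetic data, stands in for the full multilevel specification:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
t = np.tile(np.linspace(0.0, 1.0, 66), 24)              # 24 countries, 66 time points
y = 0.2 * t + rng.normal(0.0, np.exp(-0.5 * t))         # spread shrinks over time

def negloglik(params):
    a, b, c, d = params
    mu, log_var = a + b * t, c + d * t                  # mean and log-variance trends
    return 0.5 * np.sum(log_var + (y - mu) ** 2 / np.exp(log_var))

fit = minimize(negloglik, x0=np.zeros(4), method="BFGS")
print("variance slope d:", fit.x[3])                    # negative => convergence (true d = -1)
```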

Relevance: 30.00%

Abstract:

We estimate the effects of climatic changes, as predicted by six climate models, on lake surface temperatures on a global scale, using the lake surface equilibrium temperature as a proxy. We evaluate interactions between different forcing variables, the sensitivity of lake surface temperatures to these variables, and differences between climate zones. Lake surface equilibrium temperatures are predicted to increase by 70 to 85% of the increase in air temperatures. On average, air temperature is the main driver of changes in lake surface temperatures, and its effect is reduced by ~10% by changes in other meteorological variables. However, the contribution of these other variables to the variance is ~40% of that of air temperature, and their effects can be important at specific locations. The warming increases the importance of longwave radiation and evaporation for the lake surface heat balance relative to shortwave radiation and convective heat fluxes. We discuss the consequences of our findings for the design and evaluation of different types of studies on climate change effects on lakes.
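
The equilibrium temperature used as a proxy above is the surface temperature at which the net surface heat flux vanishes; it can be found by root finding on a heat-balance function. The balance below is deliberately schematic and every coefficient is an illustrative assumption, not a value from the study:

```python
from scipy.optimize import brentq

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def net_heat_flux(t_surf_c, t_air_c=15.0, shortwave=180.0, lw_in=330.0,
                  transfer=8.0, bowen=0.61):
    """Highly simplified surface heat balance, W m^-2 (positive = warming)."""
    t_k = t_surf_c + 273.15
    lw_out = 0.97 * SIGMA * t_k ** 4                    # longwave emission
    sensible = transfer * bowen * (t_surf_c - t_air_c)  # convective flux
    latent = transfer * max(0.0, t_surf_c - t_air_c)    # crude evaporation term
    return shortwave + lw_in - lw_out - sensible - latent

t_eq = brentq(net_heat_flux, -5.0, 45.0)  # temperature where the net flux is zero
print(f"equilibrium temperature ~ {t_eq:.1f} deg C")
```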