7 results for Ward hierarchical scheme

in DigitalCommons@The Texas Medical Center


Relevance:

30.00%

Publisher:

Abstract:

Most statistical analysis, theory and practice, is concerned with static models: models with a proposed set of parameters whose values are fixed across observational units. Static models implicitly assume that the quantified relationships remain the same across the design space of the data. While this is reasonable under many circumstances, it can be a dangerous assumption when dealing with sequentially ordered data. The mere passage of time always brings fresh considerations, and the interrelationships among parameters, or subsets of parameters, may need to be continually revised.

When data are gathered sequentially, dynamic interim monitoring may be useful as new subject-specific parameters are introduced with each new observational unit. Sequential imputation via dynamic hierarchical models is an efficient strategy for handling missing data and analyzing longitudinal studies. Dynamic conditional independence models offer a flexible framework that exploits the Bayesian updating scheme to capture the evolution of both population and individual effects over time. While static models often describe aggregate information well, they frequently fail to reflect conflicts in the information at the individual level; dynamic models prove advantageous over static models in capturing both individual and aggregate trends. Computations for such models can be carried out via the Gibbs sampler. An application to small-sample, repeated-measures, normally distributed growth-curve data is presented.
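The Gibbs-sampler computation mentioned in the abstract can be illustrated on a much simpler (static) normal hierarchical model. This is only a minimal sketch under assumed known variances and a flat prior on the population mean, not the dissertation's dynamic model; all sizes and variable names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: J groups, n observations each, known variances
# (an assumption made to keep every full conditional conjugate).
J, n = 8, 20
sigma2, tau2 = 1.0, 0.5          # within-group and between-group variances
true_mu = 3.0
theta_true = rng.normal(true_mu, np.sqrt(tau2), J)
y = rng.normal(theta_true[:, None], np.sqrt(sigma2), (J, n))
ybar = y.mean(axis=1)

# Gibbs sampler alternating between group means theta_j and population mean mu.
draws = 2000
mu = 0.0
mu_samples = np.empty(draws)
for s in range(draws):
    # theta_j | mu, y  ~ Normal (conjugate precision-weighted update)
    prec = n / sigma2 + 1.0 / tau2
    mean = (n * ybar / sigma2 + mu / tau2) / prec
    theta = rng.normal(mean, np.sqrt(1.0 / prec))
    # mu | theta ~ Normal, flat prior on mu
    mu = rng.normal(theta.mean(), np.sqrt(tau2 / J))
    mu_samples[s] = mu

# After burn-in, the posterior mean of mu should sit near the grand mean.
print(round(mu_samples[500:].mean(), 2))
```

A dynamic model would replace the fixed theta_j with parameters that evolve between observational units, but the conditional-update structure of the sampler is the same.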

Relevance:

20.00%

Publisher:

Abstract:

This paper introduces an extended hierarchical task analysis (HTA) methodology devised to evaluate and compare user interfaces on volumetric infusion pumps. The pumps were studied along the dimensions of overall usability and propensity for generating human error. With HTA as our framework, we analyzed six pumps on a variety of common tasks using Norman’s Action theory. The introduced method of evaluation divides the problem space between the external world of the device interface and the user’s internal cognitive world, allowing for predictions of potential user errors at the human-device level. In this paper, one detailed analysis is provided as an example, comparing two different pumps on two separate tasks. The results demonstrate the inherent variation, often the cause of usage errors, found with infusion pumps being used in hospitals today. The reported methodology is a useful tool for evaluating human performance and predicting potential user errors with infusion pumps and other simple medical devices.

Relevance:

20.00%

Publisher:

Abstract:

In this study, we present a trilocus sequence typing (TLST) scheme based on intragenic regions of two antigenic genes, ace and salA (encoding a collagen/laminin adhesin and a cell wall-associated antigen, respectively), and a gene associated with antibiotic resistance, lsa (encoding a putative ABC transporter), for subspecies differentiation of Enterococcus faecalis. Each of the alleles was analyzed using 50 E. faecalis isolates representing 42 diverse multilocus sequence types (ST(M); based on seven housekeeping genes) and four groups of clonally linked (by pulsed-field gel electrophoresis [PFGE]) isolates. The allelic profiles and/or concatenated sequences of the three genes agreed with multilocus sequence typing (MLST) results for 49 of the 50 isolates; the one exception involved two isolates that shared an identical TLST type but were single-locus variants (differing by a single nucleotide) by MLST, and which were therefore also classified as clonally related by MLST. TLST was also comparable to PFGE for establishing short-term epidemiological relationships, assigning the same type to all isolates classified as clonally related by PFGE. TLST was then applied to representative isolates (of each PFGE subtype and isolation year) from a collection of 48 hospital isolates and demonstrated the same relationships between isolates of an outbreak strain as those found by MLST and PFGE. In conclusion, the TLST scheme described here was shown to be successful for investigating short-term epidemiology in a hospital setting and may provide an alternative to MLST for discriminating isolates.
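The core bookkeeping of an allelic-profile typing scheme like the TLST described above can be sketched as follows: each distinct intragenic sequence per gene receives an allele number, and the triplet of numbers is the isolate's type. The sequences below are short made-up stand-ins, not real ace/salA/lsa fragments.

```python
def assign_alleles(seqs):
    """Map each distinct sequence to a stable allele number (1, 2, ...)."""
    table = {}
    return [table.setdefault(s, len(table) + 1) for s in seqs]

# Hypothetical isolates, each with one sequence per typing locus.
isolates = {
    "iso1": ("ACGT", "TTGA", "GGCC"),
    "iso2": ("ACGT", "TTGA", "GGCC"),   # identical profile -> same type
    "iso3": ("ACGA", "TTGA", "GGCC"),   # single-locus variant of iso1
}

# Number alleles gene by gene (column-wise across isolates), then recombine
# the per-gene allele numbers into each isolate's three-number profile.
names = list(isolates)
columns = list(zip(*isolates.values()))
profiles = list(zip(*(assign_alleles(col) for col in columns)))
types = dict(zip(names, profiles))
print(types)  # iso1 and iso2 share a type; iso3 differs at one locus
```

Comparing profiles position by position is what distinguishes an identical type from a single-locus variant, mirroring the MLST comparison discussed in the abstract.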

Relevance:

20.00%

Publisher:

Abstract:

In numerous intervention studies and education field trials, random assignment to treatment occurs in clusters rather than at the level of observation. This departure from unit-level random assignment may be due to logistics, political feasibility, or ecological validity. Data within the same cluster or grouping are often correlated. Applying traditional regression techniques, which assume independence between observations, to clustered data produces consistent parameter estimates; however, such estimators are often inefficient compared with methods that incorporate the clustered nature of the data into the estimation procedure (Neuhaus 1993).1 Multilevel models, also known as random-effects or random-components models, can account for the clustering of data by estimating both higher-level (group) and lower-level (individual) variation. Designing a study in which the unit of observation is nested within higher-level groupings requires determining sample sizes at each level. This study investigates the effect of various sampling strategies for a 3-level repeated-measures design on the parameter estimates when the outcome variable of interest follows a Poisson distribution.

Results of this study suggest that second-order PQL estimation produces the least biased estimates in the 3-level multilevel Poisson model, followed by first-order PQL and then second- and first-order MQL. The MQL estimates of both fixed and random parameters are generally satisfactory when the level-2 and level-3 variation is less than 0.10. However, as the higher-level error variance increases, the MQL estimates become increasingly biased. If the PQL procedure fails to converge and the higher-level error variance is large, the estimates may be significantly biased; in this case, bias-correction techniques such as bootstrapping should be considered as an alternative.
For larger sample sizes, structures with 20 or more units sampled at the levels with normally distributed random errors produced more stable estimates, with less sampling variance, than structures with an increased number of level-1 units. For small sample sizes, sampling fewer units at the level with Poisson variation produces less sampling variation; this criterion is no longer important when sample sizes are large.

1. Neuhaus J (1993). "Estimation Efficiency and Tests of Covariate Effects with Clustered Binary Data." Biometrics, 49, 989–996.
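The data-generating structure studied above can be sketched directly: repeated Poisson counts (level 1) nested in individuals (level 2) nested in groups (level 3), with normal random effects on the log rate. The sizes, fixed intercept, and variance components below are illustrative assumptions, not the dissertation's design values.

```python
import numpy as np

rng = np.random.default_rng(1)

# 3-level nesting: n_obs repeated counts per individual, n_indiv individuals
# per group, n_groups groups.
n_groups, n_indiv, n_obs = 20, 10, 5
beta0 = 1.0                     # fixed log-rate intercept (assumed)
var3, var2 = 0.10, 0.10         # level-3 and level-2 random-effect variances

u3 = rng.normal(0, np.sqrt(var3), n_groups)              # group effects
u2 = rng.normal(0, np.sqrt(var2), (n_groups, n_indiv))   # individual effects
log_rate = beta0 + u3[:, None, None] + u2[:, :, None]
y = rng.poisson(np.exp(np.broadcast_to(log_rate, (n_groups, n_indiv, n_obs))))

# The abstract notes MQL is roughly adequate when var2/var3 are below 0.10;
# larger higher-level variances are where second-order PQL is preferred.
print(y.shape, round(y.mean(), 2))
```

Refitting such simulated data under different (n_groups, n_indiv, n_obs) allocations is how the sampling-strategy comparison described in the abstract proceeds.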

Relevance:

20.00%

Publisher:

Abstract:

Early Employee Assistance Programs (EAPs) had their origin in humanitarian motives, and there was little concern for their cost/benefit ratios; however, as some programs began accumulating and analyzing data over time, even on single variables such as absenteeism, it became apparent that the humanitarian reasons for a program could be reinforced by cost savings, particularly when the existence of the program was subject to justification.

Today there is general agreement that cost/benefit analyses of EAPs are desirable, but specific models for such analyses, particularly ones making use of sophisticated yet simple computer-based data management systems, are few.

The purpose of this research and development project was to develop a method, a design, and a prototype for gathering, managing, and presenting information about EAPs. This scheme provides information retrieval and analyses relevant to such aspects of EAP operations as: (1) EAP personnel activities, (2) supervisory training effectiveness, (3) client population demographics, (4) assessment and referral effectiveness, (5) treatment network efficacy, and (6) economic worth of the EAP.

This scheme has been implemented and operational at The University of Texas Employee Assistance Programs for more than three years. Application of the scheme in the various programs has identified certain variables that remain necessary in all programs. Depending on how aggressively program personnel pursue data acquisition, other program-specific variables are also defined.

Relevance:

20.00%

Publisher:

Abstract:

The central objective of this dissertation was to determine the feasibility of self-completed advance directives (AD) in older persons suffering from mild and moderate stages of dementia. This was accomplished by identifying differences in the ability to complete AD among elderly subjects with increasing degrees of dementia and cognitive incompetence. Secondary objectives were to describe and compare advance directives completed by elders and by identified proxy decision makers; these were accomplished by measuring the agreement between advance directives completed by proxy and elder, and comparing that agreement across groups defined by the elder's cognitive status. This cross-sectional study employed a structured interview to elicit AD, followed by a similar interview with a proxy decision maker identified by the elder. A stratified sampling scheme recruited elders with normal cognition, mild dementia, and moderate dementia using the Mini-Mental State Exam (MMSE). The Hopkins Competency Assessment Test (HCAT) was used to evaluate competency to make medical decisions. Analysis was conducted on "between group" (non-demented ↔ mild dementia ↔ moderate dementia, and competent ↔ incompetent) and "within group" (elder ↔ family member) variation.

The 118 elderly subjects interviewed were generally male, Caucasian, and of low socioeconomic status; mean age was 77. Overall, elders preferred a "trial of therapy" in their AD rather than to "always receive the therapy". No intervention was refused outright more often than it was accepted. A test–retest of elders' AD revealed stable responses. Eleven logic checks measured the appropriateness of AD responses independent of preference; no difference was found in logic error rates between elders grouped by MMSE or HCAT. Agreement between proxy and elder responses showed significant dissimilarity, indicating that proxies were not making the same medical decisions as the elders.

Conclusions based on these data are: (1) self-completed AD are feasible among elders showing signs of cognitive impairment, and they should be given every opportunity to complete advance directives; (2) variation in advance-directive preferences among cognitively impaired elders should not be assumed to be the effect of their impairment alone; (3) proxies do not appear to forego life-prolonging interventions in the face of increasing impairment in their ward; however, their advance-directive choices are frequently not those of the elder they represent.

Relevance:

20.00%

Publisher:

Abstract:

The hierarchical linear growth model (HLGM), as a flexible and powerful analytic method, has played an increasingly important role in psychology, public health, and the medical sciences in recent decades. Typically, researchers who conduct HLGM analyses are interested in the treatment effect on individual trajectories, which can be indicated by cross-level interaction effects. However, the statistical hypothesis test for a cross-level interaction in HLGM tells us only whether there is a significant group difference in the average rate of change, rate of acceleration, or higher polynomial effect; it fails to convey the magnitude of the difference between the group trajectories at a specific time point. Thus, reporting and interpreting effect sizes has received increasing emphasis in HLGM in recent years, owing to the limitations of, and growing criticism of, statistical hypothesis testing. Nevertheless, most researchers fail to report these model-implied effect sizes for group-trajectory comparisons, and their corresponding confidence intervals, in HLGM analyses, because appropriate standard functions for estimating effect sizes associated with the model-implied difference between group trajectories are lacking, as are computing packages in popular statistical software to calculate them automatically.

The present project is the first to establish appropriate computing functions for assessing the standardized difference between group trajectories in HLGM. We propose two functions to estimate effect sizes for the model-based difference between group trajectories at a specific time, and we also suggest robust effect sizes to reduce the bias of the estimated effect sizes. We then applied the proposed functions to estimate the population effect sizes (d) and robust effect sizes (du) for the cross-level interaction in HLGM using three simulated datasets, compared three methods of constructing confidence intervals around d and du, and recommended the best one for application. Finally, we constructed 95% confidence intervals, using the most suitable method, for the effect sizes obtained from the three simulated datasets.

The effect sizes between group trajectories for the three simulated longitudinal datasets indicated that even when the statistical hypothesis test shows no significant difference between group trajectories, the effect sizes between these trajectories can still be large at some time points. Therefore, effect sizes between group trajectories in an HLGM analysis provide additional, meaningful information for assessing the group effect on individual trajectories. In addition, we compared three methods of constructing 95% confidence intervals around the corresponding effect sizes, which address the uncertainty of the effect sizes as estimates of the population parameters. We suggest the noncentral t-distribution-based method when its assumptions hold, and the bootstrap bias-corrected and accelerated (BCa) method when they do not.
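The basic idea of a standardized group-trajectory difference at a fixed time point can be sketched as follows. This is a simple Cohen's-d-style computation with a percentile bootstrap interval on simulated data; the dissertation's model-implied d, robust du, and BCa interval are more involved, and every quantity below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two groups following linear trajectories with different growth rates.
t = np.arange(5)                      # measurement occasions
n = 40                                # subjects per group
slope_a, slope_b = 0.50, 0.65         # assumed group growth rates
y_a = 1.0 + slope_a * t + rng.normal(0, 1.0, (n, t.size))
y_b = 1.0 + slope_b * t + rng.normal(0, 1.0, (n, t.size))

def d_at(ya, yb, k):
    """Standardized mean difference between groups at occasion k (pooled SD)."""
    diff = yb[:, k].mean() - ya[:, k].mean()
    sp = np.sqrt((ya[:, k].var(ddof=1) + yb[:, k].var(ddof=1)) / 2)
    return diff / sp

# Percentile bootstrap CI at the last occasion, where the trajectories
# have diverged most.
k = 4
boot = [d_at(ya=y_a[rng.integers(0, n, n)],
             yb=y_b[rng.integers(0, n, n)], k=k) for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(round(d_at(y_a, y_b, k), 2), (round(lo, 2), round(hi, 2)))
```

Evaluating d_at across all occasions shows how an interaction that is nonsignificant overall can still correspond to a sizable standardized difference at particular time points, which is the abstract's central observation.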