8 results for Hierarchical sampling

in DigitalCommons@The Texas Medical Center


Relevance: 30.00%

Publisher:

Abstract:

In numerous intervention studies and education field trials, random assignment to treatment occurs in clusters rather than at the level of observation. This departure from random assignment of individual units may be due to logistics, political feasibility, or ecological validity. Data within the same cluster or grouping are often correlated. Applying traditional regression techniques, which assume independence between observations, to clustered data produces consistent parameter estimates; however, such estimators are often inefficient compared to methods that incorporate the clustered nature of the data into the estimation procedure (Neuhaus 1993). Multilevel models, also known as random-effects or random-components models, can account for the clustering of data by estimating higher-level (group) as well as lower-level (individual) variation. Designing a study in which the unit of observation is nested within higher-level groupings requires determining sample sizes at each level. This study investigates the effect of various sampling strategies on the parameter estimates in a 3-level repeated measures design when the outcome variable of interest follows a Poisson distribution.

The results suggest that second-order PQL (penalized quasi-likelihood) estimation produces the least biased estimates in the 3-level multilevel Poisson model, followed by first-order PQL and then second- and first-order MQL (marginal quasi-likelihood). The MQL estimates of both fixed and random parameters are generally satisfactory when the level-2 and level-3 variation is less than 0.10. However, as the higher-level error variance increases, the MQL estimates become increasingly biased. If the PQL estimation algorithm fails to converge and the higher-level error variance is large, the estimates may be significantly biased; in this case, bias-correction techniques such as bootstrapping should be considered as an alternative procedure.

For larger sample sizes, structures with 20 or more units sampled at the levels with normally distributed random errors produced more stable estimates, with less sampling variance, than structures with an increased number of level-1 units. For small sample sizes, sampling fewer units at the level with Poisson variation produces less sampling variation; however, this criterion is no longer important when sample sizes are large.

Neuhaus J (1993). "Estimation Efficiency and Tests of Covariate Effects with Clustered Binary Data." Biometrics, 49, 989–996.
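As a rough illustration of the clustered data structure this abstract describes, the following Python sketch simulates a 3-level Poisson outcome with normally distributed level-2 and level-3 random effects on the log scale. All sizes, the intercept, and the variances are hypothetical (the 0.10 variance values echo the threshold mentioned above); this is not the study's estimation procedure, only the data-generating setup it evaluates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical design: 10 level-3 groups, 5 level-2 units each, 4 repeated measures
n3, n2, n1 = 10, 5, 4
sigma2_l3, sigma2_l2 = 0.10, 0.10   # higher-level variances (abstract's 0.10 threshold)
beta0 = 1.0                          # fixed intercept on the log scale (hypothetical)

u3 = rng.normal(0, np.sqrt(sigma2_l3), size=n3)         # level-3 random effects
u2 = rng.normal(0, np.sqrt(sigma2_l2), size=(n3, n2))   # level-2 random effects

# Poisson outcome with log link: log(mu) = beta0 + u3 + u2
mu = np.exp(beta0 + u3[:, None, None] + u2[:, :, None] + np.zeros((n3, n2, n1)))
y = rng.poisson(mu)
print(y.shape)  # (10, 5, 4)
```

Counts within the same level-2 or level-3 unit share a random effect and are therefore correlated, which is exactly why independence-based regression is inefficient here.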

Relevance: 20.00%

Publisher:

Abstract:

This paper introduces an extended hierarchical task analysis (HTA) methodology devised to evaluate and compare user interfaces on volumetric infusion pumps. The pumps were studied along the dimensions of overall usability and propensity for generating human error. With HTA as our framework, we analyzed six pumps on a variety of common tasks using Norman’s Action theory. The introduced method of evaluation divides the problem space between the external world of the device interface and the user’s internal cognitive world, allowing for predictions of potential user errors at the human-device level. In this paper, one detailed analysis is provided as an example, comparing two different pumps on two separate tasks. The results demonstrate the inherent variation, often the cause of usage errors, found with infusion pumps being used in hospitals today. The reported methodology is a useful tool for evaluating human performance and predicting potential user errors with infusion pumps and other simple medical devices.

Relevance: 20.00%

Publisher:

Abstract:

Most statistical analysis, in both theory and practice, is concerned with static models: models with a proposed set of parameters whose values are fixed across observational units. Static models implicitly assume that the quantified relationships remain the same across the design space of the data. While this is reasonable under many circumstances, it can be a dangerous assumption when dealing with sequentially ordered data. The mere passage of time always brings fresh considerations, and the interrelationships among parameters, or subsets of parameters, may need to be continually revised.

When data are gathered sequentially, dynamic interim monitoring may be useful, as new subject-specific parameters are introduced with each new observational unit. Sequential imputation via dynamic hierarchical models is an efficient strategy for handling missing data and analyzing longitudinal studies. Dynamic conditional independence models offer a flexible framework that exploits the Bayesian updating scheme to capture the evolution of both population and individual effects over time. While static models often describe aggregate information well, they often do not reflect conflicts in the information at the individual level. Dynamic models prove advantageous over static models in capturing both individual and aggregate trends. Computations for such models can be carried out via the Gibbs sampler. An application to small-sample, normally distributed repeated-measures growth-curve data is presented.
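The Gibbs sampler mentioned above can be illustrated with a minimal two-level normal model, far simpler than the authors' dynamic conditional independence models: subject means drawn around a population mean, with the sampler alternating conjugate updates for each. The variances are assumed known and all data are simulated; this is a sketch of the computational idea only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical small-sample growth data: 8 subjects, 5 repeated measures each
n_subj, n_obs = 8, 5
sigma2, tau2 = 1.0, 0.5          # assumed-known observation / subject-level variances
y = rng.normal(2.0, 1.0, size=(n_subj, n_obs))

n_iter = 2000
theta = np.zeros(n_subj)         # subject-level means
mu = 0.0                         # population mean
mu_draws = np.empty(n_iter)

for it in range(n_iter):
    # Draw each subject mean given the population mean (conjugate normal update)
    prec = n_obs / sigma2 + 1.0 / tau2
    mean = (y.sum(axis=1) / sigma2 + mu / tau2) / prec
    theta = rng.normal(mean, np.sqrt(1.0 / prec))
    # Draw the population mean given the subject means (flat prior on mu)
    mu = rng.normal(theta.mean(), np.sqrt(tau2 / n_subj))
    mu_draws[it] = mu

# The second half of the chain approximates the posterior of the population mean
print(round(mu_draws[n_iter // 2:].mean(), 2))
```

The same alternating-conditional-draws pattern extends to the dynamic setting by adding time-indexed parameters and updating them sequentially as new observations arrive.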

Relevance: 20.00%

Publisher:

Abstract:

Indoor and ambient air organic pollutants have been gaining attention because they have been measured at levels with possible health effects. Studies have shown that most airborne polychlorinated biphenyls (PCBs), pesticides, and many polycyclic aromatic hydrocarbons (PAHs) are present in the free vapor state. The purpose of this research was to extend recent investigative work with polyurethane foam (PUF) as a collection medium for semivolatile compounds. Open-porous flexible PUFs with different chemical makeups and physical properties were evaluated for their collection affinities/efficiencies for various classes of compounds and the degree of sample recovery. Filtered air samples were pulled through plugs of PUF spiked with various semivolatiles under different simulated environmental conditions (temperature and humidity) and sampling parameters (flow rate and sample volume) in order to measure their effects on sample breakthrough volume (V_B). PUF was also evaluated in the passive mode using organophosphorus pesticides. Another major goal was to improve the overall analytical methodology; PUF is inexpensive, easy to handle in the field, and has excellent airflow characteristics (low pressure drop).

It was confirmed that the PUF collection apparatus behaves as if it were a gas-solid chromatographic system, in that V_B was related to temperature and sample volume. Breakthrough volumes were essentially the same using both polyether- and polyester-type PUF. Also, little change was observed in V_B after coating the PUF with common chromatographic liquid phases. Open-cell (reticulated) foams gave better recoveries than closed-cell foams. There was a slight increase in V_B with an increase in the number of cells/pores per inch. The high-density polyester PUF was found to be an excellent passive and active collection adsorbent, and good recoveries could be obtained using solvent elution alone. A gas chromatograph equipped with a photoionization detector gave excellent sensitivity and selectivity for the various classes of compounds investigated.

Relevance: 20.00%

Publisher:

Abstract:

Various airborne aldehydes and ketones (i.e., airborne carbonyls) present in outdoor, indoor, and personal air pose a risk to human health at present environmental concentrations. To date, there is no adequate, simple-to-use sampler for monitoring carbonyls at parts-per-billion concentrations in personal air. The Passive Aldehydes and Ketones Sampler (PAKS), originally developed for this purpose, has been found to be unreliable in a number of relatively recent field studies. The PAKS method uses dansylhydrazine (DNSH) as the derivatization agent to produce aldehyde derivatives that are analyzed by HPLC with fluorescence detection. The reasons for the poor performance of the PAKS are not known, but it is hypothesized that the chemical derivatization conditions and reaction kinetics, combined with a relatively low sampling rate, may play a role. This study evaluated the effect of excitation and emission wavelengths, the pH of the DNSH coating solution, the extraction solvent, and the time post-extraction on the yield and stability of the formaldehyde, acetaldehyde, and acrolein DNSH derivatives. The results suggest the following optimum conditions for the analysis of the DNS-hydrazones: the excitation and emission wavelengths for HPLC analysis should be 250 nm and 500 nm, respectively; the optimal pH of the coating solution appears to be pH 2, because it improves the formation of di-derivatized acrolein DNS-hydrazones without affecting the response of the formaldehyde and acetaldehyde derivatives; acetonitrile is the preferred extraction solvent; and the optimal time to analyze the aldehyde derivatives is 72 hours post-extraction.

Relevance: 20.00%

Publisher:

Abstract:

The purpose of this study was to assess the accuracy and precision of airborne volatile organic compound (VOC) concentrations measured using passive air samplers (3M 3500 organic vapor monitors) over extended sampling durations (9 and 15 days). A total of forty-five organic vapor monitor samples were collected at a State of Texas air monitoring site during two different sampling periods (July/August and November 2008). The results indicate that for most of the tested compounds, there was no significant difference between long-term (9- or 15-day) sample concentrations and the means of parallel consecutive short-term (3-day) sample concentrations. Biases of 9- or 15-day measurements relative to consecutive 3-day measurements showed considerable variability. Compounds with percent bias values of less than 10% are suggested as acceptable for long-term sampling (9 and 15 days). Of the twenty-one compounds examined, ten are classified as acceptable for long-term sampling: m,p-xylene, 1,2,4-trimethylbenzene, n-hexane, ethylbenzene, benzene, toluene, o-xylene, d-limonene, dimethylpentane, and methyl tert-butyl ether. The ratio of sampling-procedure variability to within-day variability was approximately 1.89 for both sampling periods in the 3-day vs. 9-day comparisons and approximately 2.19 for both sampling periods in the 3-day vs. 15-day comparisons. Considerably higher concentrations of most VOCs were measured during the November sampling period than during the July/August period. These differences may result from the varying meteorological conditions during the two periods, e.g., differences in wind direction and wind speed. Further studies are suggested to evaluate the accuracy and precision of 3M 3500 organic vapor monitors over extended sampling durations.
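The percent-bias criterion described above can be sketched as a small Python helper: the long-term reading is compared against the mean of the consecutive short-term readings covering the same interval. The function name and the example concentrations are hypothetical, not values from the study.

```python
# Percent bias of one long-term measurement vs. the mean of consecutive
# short-term measurements covering the same interval (values in ppb, hypothetical)
def percent_bias(long_term: float, short_term: list[float]) -> float:
    reference = sum(short_term) / len(short_term)
    return 100.0 * (long_term - reference) / reference

# e.g. one 9-day sample vs. three consecutive 3-day samples
bias = percent_bias(0.42, [0.40, 0.45, 0.44])
print(abs(bias) < 10.0)  # True: within the abstract's 10% acceptability cutoff
```

A compound would be flagged acceptable for long-term sampling only if this absolute bias stays under 10% across the tested periods.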

Relevance: 20.00%

Publisher:

Abstract:

The study was carried out at St. Luke's Episcopal Hospital to evaluate environmental contamination with Clostridium difficile in the rooms of infected patients. Samples were collected from high-risk areas and were immediately cultured for the presence of Clostridium difficile. The lack of microbial typing prevented molecular characterization of the Clostridium difficile isolates obtained and led to a change in the study hypothesis. The study found a positivity rate of 10% among the 50 hospital rooms sampled for the presence of Clostridium difficile. The study provided data leading to recommendations that routine environmental sampling be carried out in hospital rooms housing patients with CDAD and that effective environmental disinfection methods be used. The study also recommended molecular typing methods to allow characterization of the C. difficile strains isolated from patients and from environmental sampling, to determine their type, similarity, and origin.

Relevance: 20.00%

Publisher:

Abstract:

The hierarchical linear growth model (HLGM), as a flexible and powerful analytic method, has played an increasingly important role in psychology, public health, and the medical sciences in recent decades. Researchers who conduct HLGM analyses are mostly interested in the treatment effect on individual trajectories, which is indicated by the cross-level interaction effects. However, the statistical hypothesis test for the cross-level interaction effect in HLGM shows only whether there is a significant group difference in the average rate of change, rate of acceleration, or a higher polynomial effect; it fails to convey the magnitude of the difference between the group trajectories at specific time points. Thus, reporting and interpreting effect sizes have received increasing emphasis in HLGM in recent years, owing to the limitations of, and growing criticism of, statistical hypothesis testing. Nevertheless, most researchers fail to report these model-implied effect sizes for group-trajectory comparisons, along with their corresponding confidence intervals, in HLGM analyses, because appropriate standard functions for estimating effect sizes associated with the model-implied difference between group trajectories are lacking, as are computing packages in popular statistical software to calculate them automatically.

The present project is the first to establish appropriate computing functions for assessing the standardized difference between group trajectories in HLGM. We proposed two functions to estimate effect sizes for the model-based difference between group trajectories at a specific time, and we also suggested robust effect sizes to reduce the bias of the estimated effect sizes. We then applied the proposed functions to estimate the population effect sizes (d) and robust effect sizes (d_u) for the cross-level interaction in HLGM using three simulated datasets; we also compared three methods of constructing confidence intervals around d and d_u and recommended the best one for application. Finally, we constructed 95% confidence intervals, using the most suitable method, for the effect sizes obtained from the three simulated datasets.

The effect sizes between group trajectories for the three simulated longitudinal datasets indicated that even when the statistical hypothesis test shows no significant difference between group trajectories, the effect sizes can still be large at some time points. Therefore, effect sizes between group trajectories in HLGM analyses provide additional and meaningful information for assessing the group effect on individual trajectories. In addition, we compared three methods of constructing 95% confidence intervals around the corresponding effect sizes, which address the uncertainty of the effect sizes as estimates of the population parameter. We suggest the noncentral t-distribution-based method when its assumptions hold, and the bootstrap bias-corrected and accelerated (BCa) method when they are not met.
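As a loose sketch of the quantities this abstract discusses, the Python example below computes a pooled-SD standardized difference (Cohen's d) between two groups' outcomes at a single time point, together with a percentile bootstrap confidence interval. This is a simplification: it is not the authors' model-implied effect size, and the interval is a plain percentile bootstrap rather than the BCa variant they recommend. The function name and all data are hypothetical and simulated.

```python
import numpy as np

rng = np.random.default_rng(2)

def traj_effect_size(g1: np.ndarray, g2: np.ndarray) -> float:
    """Pooled-SD standardized mean difference (Cohen's d) between two groups'
    outcomes at one time point; a simplification of a model-implied effect size."""
    n1, n2 = len(g1), len(g2)
    pooled = np.sqrt(((n1 - 1) * g1.var(ddof=1) + (n2 - 1) * g2.var(ddof=1))
                     / (n1 + n2 - 2))
    return (g1.mean() - g2.mean()) / pooled

# Hypothetical outcomes for two groups at a single follow-up time
g1 = rng.normal(1.0, 1.0, 40)
g2 = rng.normal(0.4, 1.0, 40)
d = traj_effect_size(g1, g2)

# Percentile bootstrap CI (simpler than the BCa interval the study recommends)
boot = [traj_effect_size(rng.choice(g1, 40), rng.choice(g2, 40))
        for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(lo < d < hi)
```

The abstract's point survives the simplification: the interval around d, not just a p-value, shows how large the group difference could plausibly be at that time point.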