952 results for "refreshment samples"
Abstract:
Economists and other social scientists often face situations where they have access to two datasets, one of which suffers from censoring or truncation. If the censored sample is much bigger than the uncensored sample, it is common for researchers to use the censored sample alone and attempt to deal with the problem of partial observation in some manner. Alternatively, they simply use only the uncensored sample and ignore the censored one so as to avoid biases. It is rarely the case that researchers use both datasets together, mainly because they lack guidance about how to combine them. In this paper, we develop a tractable semiparametric framework for combining the censored and uncensored datasets so that the resulting estimators are consistent, asymptotically normal, and use all information optimally. When the censored sample, which we refer to as the master sample, is much bigger than the uncensored sample (which we call the refreshment sample), the latter can be thought of as providing identification where it is otherwise absent. In contrast, when the refreshment sample is large and could typically be used alone, our methodology can be interpreted as using information from the censored sample to increase efficiency. To illustrate our results in an empirical setting, we show how to estimate the effect of changes in compulsory schooling laws on age at first marriage, a variable that is censored for younger individuals. We also demonstrate how refreshment samples for this application can be created by matching cohort information across census datasets.
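A minimal simulation sketch (not the authors' semiparametric estimator) of the data structure this abstract describes: a large master sample in which age at first marriage is right-censored at the age of interview, alongside a small, fully observed refreshment sample. All distributions and numbers below are invented for illustration; the sketch only shows why neither sample alone is satisfactory.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical latent outcome: age at first marriage ~ Normal(24, 4).
mu, sigma = 24.0, 4.0

# Master sample: large, but right-censored at the (random) age of interview,
# since respondents interviewed before they marry have not yet revealed the outcome.
master_latent = rng.normal(mu, sigma, size=100_000)
age_at_interview = rng.uniform(18.0, 35.0, size=master_latent.size)
observed = np.minimum(master_latent, age_at_interview)
is_censored = master_latent > age_at_interview

# Refreshment sample: small, but observed without censoring.
refreshment = rng.normal(mu, sigma, size=2_000)

# Using only the uncensored master cases is biased downward; using only the
# refreshment sample is unbiased but noisy; hence the appeal of combining them.
print("uncensored-master mean:", observed[~is_censored].mean())
print("refreshment mean      :", refreshment.mean())
print("true mean             :", mu)
```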
Abstract:
Surveys collect important data that inform policy decisions and drive social science research. Large government surveys gather information from the U.S. population on a wide range of topics, including demographics, education, employment, and lifestyle. Analysis of survey data presents unique challenges: one must account for missing data, complex sampling designs, and measurement error. In principle, a survey organization could spend considerable resources obtaining high-quality responses from a simple random sample, yielding survey data that are easy to analyze; in practice, this scenario is rarely realistic. To address these practical issues, survey organizations can leverage information available from other sources of data. For example, in longitudinal studies that suffer from attrition, they can use the information from refreshment samples to correct for potential attrition bias. They can use information from known marginal distributions or from the survey design to improve inferences. They can use information from gold standard sources to correct for measurement error.
This thesis presents novel approaches to combining information from multiple sources that address the three problems described above.
The first method addresses nonignorable unit nonresponse and attrition in a panel survey with a refreshment sample. Panel surveys typically suffer from attrition, which can lead to biased inference when basing analysis only on cases that complete all waves of the panel. Unfortunately, the panel data alone cannot inform the extent of the bias due to attrition, so analysts must make strong and untestable assumptions about the missing data mechanism. Many panel studies also include refreshment samples, which are data collected from a random sample of new individuals during some later wave of the panel. Refreshment samples offer information that can be utilized to correct for biases induced by nonignorable attrition while reducing reliance on strong assumptions about the attrition process. To date, these bias correction methods have not dealt with two key practical issues in panel studies: unit nonresponse in the initial wave of the panel and in the refreshment sample itself. As we illustrate, nonignorable unit nonresponse can significantly compromise the analyst's ability to use the refreshment samples for attrition bias correction. Thus, it is crucial for analysts to assess how sensitive their inferences, corrected for panel attrition, are to different assumptions about the nature of the unit nonresponse. We present an approach that facilitates such sensitivity analyses, both for suspected nonignorable unit nonresponse in the initial wave and in the refreshment sample. We illustrate the approach using simulation studies and an analysis of data from the 2007-2008 Associated Press/Yahoo News election panel study.
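A toy illustration, under invented parameters, of how a refreshment sample exposes nonignorable attrition: wave-2 dropout depends on the wave-2 outcome itself, so completers are a selected group, while a fresh cross-section drawn at wave 2 recovers the true wave-2 margin. This is only the identification intuition, not the thesis's sensitivity-analysis machinery.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Two-wave panel with nonignorable attrition: the probability of staying
# in the panel increases with the wave-2 outcome y2 itself.
y1 = rng.normal(0.0, 1.0, n)
y2 = 0.6 * y1 + rng.normal(0.0, 0.8, n)
p_stay = 1.0 / (1.0 + np.exp(-(0.5 + 1.0 * y2)))
stayed = rng.random(n) < p_stay

# Refreshment sample: new individuals drawn at wave 2 from the same population.
m = 5_000
refresh_y2 = 0.6 * rng.normal(0.0, 1.0, m) + rng.normal(0.0, 0.8, m)

print("wave-2 mean among completers :", y2[stayed].mean())   # biased upward
print("wave-2 mean in refreshment   :", refresh_y2.mean())   # ~ true value 0
```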
The second method incorporates informative prior beliefs about marginal probabilities into Bayesian latent class models for categorical data. The basic idea is to append synthetic observations to the original data such that (i) the empirical distributions of the desired margins match those of the prior beliefs, and (ii) the values of the remaining variables are left missing. The degree of prior uncertainty is controlled by the number of augmented records. Posterior inferences can be obtained via typical MCMC algorithms for latent class models, tailored to deal efficiently with the missing values in the concatenated data.
We illustrate the approach using a variety of simulations based on data from the American Community Survey, including an example of how augmented records can be used to fit latent class models to data from stratified samples.
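A small sketch of the augmented-records idea under hypothetical variables and prior margins: the synthetic rows carry the desired margin for one variable and missing values everywhere else, and the prior's strength is set by how many rows are appended. The variable names and numbers here are ours, not the thesis's.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)

# Hypothetical categorical survey data.
data = pd.DataFrame({
    "education": rng.choice(["HS", "BA", "Grad"], size=500),
    "employed":  rng.choice(["yes", "no"], size=500),
})

# Prior belief about the margin of `education`; n_aug controls prior strength
# (more augmented records = less prior uncertainty).
prior_margin = {"HS": 0.40, "BA": 0.45, "Grad": 0.15}
n_aug = 200
counts = {k: int(round(p * n_aug)) for k, p in prior_margin.items()}

augmented = pd.DataFrame({
    "education": np.repeat(list(counts), list(counts.values())),
    "employed":  [pd.NA] * sum(counts.values()),   # remaining variables left missing
})

# The concatenated data would then be passed to the latent class MCMC, which
# treats the missing `employed` values like any other missing data.
combined = pd.concat([data, augmented], ignore_index=True)
print(combined["education"].value_counts(normalize=True))
```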
The third method leverages the information from a gold standard survey to model reporting error. Survey data are subject to reporting error when respondents misunderstand the question or accidentally select the wrong response. Sometimes survey respondents knowingly select the wrong response, for example, by reporting a higher level of education than they actually have attained. We present an approach that allows an analyst to model reporting error by incorporating information from a gold standard survey. The analyst can specify various reporting error models and assess how sensitive their conclusions are to different assumptions about the reporting error process. We illustrate the approach using simulations based on data from the 1993 National Survey of College Graduates. We use the method to impute error-corrected educational attainments in the 2010 American Community Survey using the 2010 National Survey of College Graduates as the gold standard survey.
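A method-of-moments toy version of the reporting-error idea, with made-up error rates standing in for what a gold standard survey would provide; the thesis specifies full Bayesian reporting-error models rather than this simple margin correction.

```python
import numpy as np

# Hypothetical 2-category example: true vs. reported "has a college degree".
# Rows: true category; columns: reported category. These error rates are
# invented; in the paper's setting they would be informed by the gold
# standard survey (e.g., the NSCG).
report_given_true = np.array([
    [0.98, 0.02],   # truly degree-holders: 2% under-report
    [0.10, 0.90],   # truly non-holders: 10% over-report a degree
])

observed_margin = np.array([0.35, 0.65])   # reported degree / no degree

# Correct the margin by inverting the misclassification relationship
# observed = report_given_true.T @ true  (a sketch only).
true_margin = np.linalg.solve(report_given_true.T, observed_margin)
print(true_margin)   # ~ [0.28, 0.72]: part of the reported 35% is over-reporting
```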
Abstract:
Knowledge of the particle emission characteristics associated with forest fires and, more generally, biomass burning is becoming increasingly important due to the impact of these emissions on human health. Of particular importance are a better understanding of the size distribution of particles generated by forest combustion under different environmental conditions and the provision of emission factors for different particle size ranges. This study quantified particle emission factors for four types of wood found in South East Queensland forests, Spotted Gum (Corymbia citriodora), Red Gum (Eucalyptus tereticornis), Blood Gum (Eucalyptus intermedia), and Ironbark (Eucalyptus decorticans), under controlled laboratory conditions. The experimental setup included a modified commercial stove connected to a dilution system designed for the conditions of the study. Particle number size distributions and concentrations from the burning of wood with relatively homogeneous moisture content (in the range of 15 to 26%) at different burning rates were measured using a TSI Scanning Mobility Particle Sizer (SMPS) over the size range 10 to 600 nm and a TSI DustTrak for PM2.5. The results show that particle number emission factors and PM2.5 mass emission factors depend on the type of wood and on the burning rate (fast or slow). The average particle number emission factors for fast burning conditions are in the range of 3.3 × 10^15 to 5.7 × 10^15 particles/kg, and the PM2.5 emission factors are in the range of 139 to 217 mg/kg.
Abstract:
Biological tissues are subjected to complex loading states in vivo, and in order to define constitutive equations that effectively simulate their mechanical behaviour under these loads, it is necessary to obtain data on the tissue's response to multiaxial loading. Uniaxial and shear testing of biological tissues is often carried out, but biaxial testing is less common. We sought to design and commission a biaxial compression testing device capable of obtaining repeatable data for biological samples. The apparatus comprised a sealed stainless steel pressure vessel specifically designed so that a state of hydrostatic compression could be created on the test specimen while simultaneously unloading the sample along one axis with an equilibrating tensile pressure. Thus a state of equibiaxial compression was created perpendicular to the long axis of a rectangular sample. For calibration and commissioning of the vessel, rectangular samples of closed-cell ethylene vinyl acetate (EVA) foam were tested. Each sample was subjected to repeated loading, and nine separate biaxial experiments were carried out to a maximum pressure of 204 kPa (30 psi), with a relaxation time of two hours between them. Calibration testing demonstrated that the force applied to the samples had a maximum error of 0.026 N (0.423% of the maximum applied force). Under repeated loading, the foam samples demonstrated lower stiffness during the first load cycle; with successive loading, a stiffer, repeatable response was observed. While the experimental protocol was developed for EVA foam, these preliminary results suggest that the device may be capable of providing test data for biological tissue samples. The load response of the foam was characteristic of closed-cell foams, with consolidation during the early loading cycles followed by a repeatable load-displacement response upon repeated loading. The repeatability of the results and the low experimental error in the applied force demonstrate that the device can provide reliable, reproducible test data.
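As a quick consistency check of the quoted calibration figures (our arithmetic, not a number stated in the abstract), a 0.026 N error amounting to 0.423% of the maximum applied force implies

```latex
F_{\max} \approx \frac{0.026\,\text{N}}{0.00423} \approx 6.1\,\text{N}.
```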
Abstract:
A method for the determination of lactose in food samples by Osteryoung square wave voltammetry (OSWV) was developed, based on the nucleophilic addition reaction between lactose and aqueous ammonia. The carbonyl group of lactose is converted into an imido group, which increases its electrochemical reduction activity and hence the sensitivity of the method. The optimal conditions for the nucleophilic addition reaction were investigated: in NH4Cl–NH3 buffer at pH 10.1, the peak current was linear with lactose concentration over the range 0.6–8.4 mg L⁻¹, and the detection limit was 0.44 mg L⁻¹. The proposed method was applied to the determination of lactose in food samples, and satisfactory results were obtained.
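A univariate calibration sketch in the spirit of the reported figures: fit peak current against standards across the stated linear range and estimate a detection limit with the common 3σ/slope rule. The currents are simulated; the slope, intercept, and noise level are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated calibration standards across the reported linear range (mg/L).
conc = np.linspace(0.6, 8.4, 8)
current = 1.9 * conc + 0.4 + rng.normal(0, 0.08, conc.size)   # simulated peak currents

# Ordinary least-squares line and a 3*sigma/slope detection-limit estimate.
slope, intercept = np.polyfit(conc, current, 1)
resid_sd = np.std(current - (slope * conc + intercept), ddof=2)
lod = 3 * resid_sd / slope
print(f"slope = {slope:.2f} per mg/L, LOD ~ {lod:.2f} mg/L")
```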
Abstract:
A fast and accurate procedure was developed for the simultaneous determination of maltol and ethyl maltol, based on their reaction with iron(III) in the presence of o-phenanthroline in sulfuric acid medium. This reaction was the basis for an indirect kinetic spectrophotometric method, which followed the development of the pink ferroin product (λmax = 524 nm). The kinetic data were collected in the 370–900 nm range over 0–30 s. Under the optimized conditions, both maltol and ethyl maltol followed Beer's law in the concentration range 4.0–76.0 mg L⁻¹. The LOD values of 1.6 mg L⁻¹ for maltol and 1.4 mg L⁻¹ for ethyl maltol agree well with those obtained by the alternative method, high performance liquid chromatography with ultraviolet detection (HPLC-UV). Three chemometrics methods, principal component regression (PCR), partial least squares (PLS), and principal component analysis–radial basis function–artificial neural networks (PC–RBF–ANN), were used to resolve the measured data, which showed only small kinetic differences between the two analytes in the development of the pink ferroin product. All three performed satisfactorily both on synthetic verification samples and in predicting the analytes in several food products. The figures of merit for the analytes based on the multivariate models agreed well with those from the alternative HPLC-UV method applied to the same samples.
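A minimal sketch of the multivariate-calibration step using PLS (one of the three chemometrics methods named above), on simulated data standing in for the 370–900 nm, 0–30 s kinetic measurements; the matrix sizes, pure profiles, and noise level are invented.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(4)
n_samples, n_channels = 30, 120

# Simulated concentrations (mg/L) of the two analytes over the reported range,
# and strongly overlapping pure response profiles across wavelength-time channels.
conc = rng.uniform(4.0, 76.0, size=(n_samples, 2))   # maltol, ethyl maltol
profiles = rng.random((2, n_channels))
X = conc @ profiles + rng.normal(0, 0.02, (n_samples, n_channels))

# PLS resolves the two analytes despite the overlapping responses.
pls = PLSRegression(n_components=2).fit(X, conc)
print("R^2 on training data:", pls.score(X, conc))
```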
Abstract:
A kinetic spectrophotometric method aided by chemometrics is proposed for the simultaneous determination of norfloxacin and rifampicin in mixtures. The method was applied to the simultaneous determination of these two compounds in pharmaceutical formulations and human urine samples, and the results are comparable to those obtained by high performance liquid chromatography.
Abstract:
A spectrophotometric method for the simultaneous determination of two important pharmaceuticals, pefloxacin and its structurally similar metabolite norfloxacin, is described for the first time. The analysis is based on monitoring the kinetics of the reaction of the two analytes with potassium permanganate as the oxidant, following the absorbance decrease of potassium permanganate at 526 nm and the accompanying increase of the product, potassium manganate, at 608 nm. Multivariate calibration was essential to overcome severe spectral overlaps and similarities in reaction kinetics. Calibration curves for the individual analytes were linear over the concentration ranges 1.0–11.5 mg L⁻¹ (at 526 and 608 nm) for pefloxacin and 0.15–1.8 mg L⁻¹ (at 526 and 608 nm) for norfloxacin. Various multivariate calibration models were applied at the two analytical wavelengths for the simultaneous prediction of the two analytes, including classical least squares (CLS), principal component regression (PCR), partial least squares (PLS), radial basis function-artificial neural network (RBF-ANN), and principal component-radial basis function-artificial neural network (PC-RBF-ANN). PLS and PC-RBF-ANN calibrations with the data collected at 526 nm were the preferred methods (%RPET ≈ 5), with LODs for pefloxacin and norfloxacin of 0.36 and 0.06 mg L⁻¹, respectively. The proposed method was then applied successfully to the simultaneous determination of pefloxacin and norfloxacin in pharmaceutical and human plasma samples. The results compared well with those from an alternative HPLC analysis.
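For intuition about the two-wavelength calibration problem, here is a classical least squares (CLS) sketch under Beer's-law additivity; the sensitivity matrix below is hypothetical, and the paper's preferred models were PLS and PC-RBF-ANN rather than CLS.

```python
import numpy as np

# CLS model A = K @ c: absorbances at two wavelengths as linear combinations
# of the two analyte concentrations. K holds hypothetical sensitivities of
# pefloxacin and norfloxacin; a real calibration would estimate K from standards.
K = np.array([[0.050, 0.42],    # sensitivities at 526 nm
              [0.035, 0.55]])   # sensitivities at 608 nm
c_true = np.array([6.0, 0.9])   # mg/L pefloxacin, norfloxacin
A = K @ c_true                  # simulated absorbances

# Recover the concentrations by least squares.
c_hat, *_ = np.linalg.lstsq(K, A, rcond=None)
print(c_hat)                    # recovers [6.0, 0.9]
```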
Abstract:
Several protocols for the isolation of mycobacteria from water exist, but there is no established standard method. This study compared methods of processing potable water samples for the isolation of Mycobacterium avium and Mycobacterium intracellulare, using spiked sterilized water and tap water decontaminated with 0.005% cetylpyridinium chloride (CPC). Samples were concentrated by centrifugation or filtration and inoculated onto Middlebrook 7H10 and 7H11 plates and Lowenstein-Jensen slants, and into mycobacterial growth indicator tubes with or without polymyxin, azlocillin, nalidixic acid, trimethoprim, and amphotericin B. The solid media were incubated at 32°C, at 35°C, and at 35°C with CO2, and read weekly. The results suggest that filtration is a more sensitive concentration method than centrifugation for the isolation of mycobacteria from water. The addition of sodium thiosulfate may not be necessary and may reduce the yield. Middlebrook 7H10 and 7H11 were equally sensitive culture media. CPC decontamination, while effective at reducing the growth of contaminants, also significantly reduces mycobacterial numbers. There was no difference at 3 weeks between the different incubation temperatures.
Abstract:
A combination of micro-Raman spectroscopy, micro-infrared spectroscopy and SEM–EDX was employed to characterize decorative pigments on Classic Maya ceramics from Copán, Honduras. Variation in red paint mixtures was correlated with changing ceramic types and improvements in process and firing techniques. We have confirmed the use of specular hematite on Coner ceramics by the difference in intensities of Raman bands. Different compositions of brown paint were correlated with imported and local wares. The carbon-iron composition of the ceramic type, Surlo Brown, was confirmed. By combining micro-Raman analysis with micro-ATR infrared and SEM–EDX, we have achieved a more comprehensive characterization of the paint mixtures. These spectroscopic techniques can be used non-destructively on raw samples as a rapid confirmation of ceramic type.
Abstract:
Polybrominated diphenyl ethers (PBDEs) are lipophilic, persistent pollutants found worldwide in environmental and human samples. Exposure pathways for PBDEs remain unclear but may include food, air, and dust. The aim of this study was to conduct an integrated assessment of PBDE exposure and human body burden using 10 matched samples of human milk, indoor air, and dust collected in 2007–2008 in Brisbane, Australia. In addition, temporal trends were investigated by comparing the results of the current study with PBDE concentrations in human milk collected in 2002–2003 from the same region. PBDEs were detected in all matrices; the median concentrations of BDE-47 and BDE-209 were 4.2 and 0.3 ng/g lipid in human milk, 25 and 7.8 pg/m³ in air, and 56 and 291 ng/g in dust, respectively. Significant correlations were observed between the concentrations of BDE-99 in air and human milk (r = 0.661, p = 0.038) and between BDE-153 in dust and BDE-183 in human milk (r = 0.697, p = 0.025). These correlations do not suggest causal relationships; no hypothesis can be offered to explain why BDE-153 in dust and BDE-183 in milk should be correlated. That so few correlations were found could be a function of the small sample size, or because additional factors, such as sources of exposure not considered or measured in the study, are important in explaining exposure to PBDEs. There was a slight decrease in PBDE concentrations from 2002–2003 to 2007–2008, but this may be due to sampling and analytical differences. Overall, average PBDE concentrations in these individual samples were similar to results from pooled human milk collected in Brisbane in 2002–2003, indicating that pooling may be an efficient, cost-effective strategy for assessing PBDE concentrations on a population basis. The results of this study were used to estimate an infant's daily PBDE intake via inhalation, dust ingestion, and human milk consumption. Differences in the intake of individual congeners from the different matrices were observed: as the level of bromination increased, the contribution to PBDE intake decreased via human milk and increased via dust. As the impacts of the ban on the lower brominated (penta- and octa-BDE) products become evident, increased use of the higher brominated deca-BDE product may result in dust making a greater contribution to infant exposure than it does currently. To better understand human body burden, further research is required into the sources and exposure pathways of PBDEs and into the metabolic differences influencing an individual's response to exposure. In addition, temporal trend analysis with continued monitoring of PBDEs in the human population, as well as in the suggested exposure matrices of food, dust, and air, is necessary.
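A back-of-envelope version of the milk-intake calculation using the reported median BDE-47 level; the milk-consumption and lipid-content figures below are our assumptions for illustration, not values from the study.

```python
# Infant BDE-47 intake via breast milk (rough sketch, assumed intake figures).
milk_per_day_g = 750          # assumed breast milk consumption, g/day
lipid_fraction = 0.035        # assumed milk lipid content (~3.5%)
bde47_ng_per_g_lipid = 4.2    # median reported in the abstract

lipid_per_day_g = milk_per_day_g * lipid_fraction           # ~26 g lipid/day
intake_ng_per_day = bde47_ng_per_g_lipid * lipid_per_day_g
print(f"~{intake_ng_per_day:.0f} ng BDE-47 per day via milk")  # ~110 ng/day
```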
Abstract:
Determining the sensitivity and specificity of a postoperative infection surveillance process is a difficult undertaking. Because postoperative infections are rare, vast numbers of negative results exist, and it is often not feasible to assess them all. This study presents a methodological framework for estimating sensitivity and specificity by comparing only a small sample of the patients who test negative against the reference or "gold standard", rather than comparing the findings of all patients to the gold standard. It provides a formula for deriving confidence intervals for these estimates and a guide to minimum sampling requirements.
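A sketch of the estimation idea under an assumed design: verify every surveillance-positive against the gold standard, verify only a random fraction f of the many surveillance-negatives, and scale the sampled negatives by 1/f when forming the 2×2 table. The counts and f are invented, and the paper's confidence-interval formula (which must account for this two-stage sampling) is not reproduced here.

```python
# Estimating sensitivity/specificity when only a fraction of the
# surveillance-negatives is verified against the gold standard.
f = 0.05                          # assumed sampling fraction for test-negatives

# Hypothetical verified counts:
tp, fp = 40, 10                   # among ALL surveillance-positives
fn_sampled, tn_sampled = 3, 940   # among the SAMPLED surveillance-negatives

# Scale the sampled negatives back up to the full negative pool.
fn_hat = fn_sampled / f
tn_hat = tn_sampled / f

sensitivity = tp / (tp + fn_hat)
specificity = tn_hat / (tn_hat + fp)
print(f"sensitivity ~ {sensitivity:.3f}, specificity ~ {specificity:.4f}")
```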
Abstract:
This study aims to stimulate thought, debate, and action for change on the question of more vigorous philanthropic funding of Australian health and medical research (HMR). It sharpens the argument with facts and ideas about HMR funding from overseas sources, and reports informed opinions from those working, giving, and innovating in this area. It pinpoints the range of attitudes to HMR giving, both positive and negative. The study includes some aspects of government funding as part of the equation, viewing government as a major HMR giver with a particular ability to partner, leverage, and create incentives. Stimulating new philanthropy takes active outreach. The opportunity to build more dialogue between the HMR industry and the wider community is timely, given the 'licence to practice' issues and the questioning of trust that currently applies, to some extent, to both science and the charitable sector. This interest in improving HMR philanthropy also coincides with the Federal Government's launch last year of Nonprofit Australia Limited (NAL), a group currently assessing infrastructure improvements to the charitable sector. History suggests no one will create this change if Research Australia does not; however, interest in change exists in various quarters. For Research Australia to successfully change the culture of Australian HMR giving, the process will drive the outcomes. Stakeholder buy-in and partners will be needed, and the ultimate blueprint for greater philanthropic HMR funding will not be this document; it will be the one that bears the handprint and 'mindprint' of the many architects and implementers interested in promoting HMR philanthropy, from philanthropists to nonprofit peak bodies to government policy arms. As the African proverb says, 'If you want to go fast, go alone; but if you want to go far, go with others.'
Abstract:
Recent studies have shown that human papillomavirus (HPV) DNA can be found in circulating blood, including peripheral blood mononuclear cells (PBMCs), sera, plasma, and arterial cord blood. In light of these findings, DNA extracted from the PBMCs of healthy blood donors was examined in order to determine how common HPV DNA is in the blood of healthy individuals. Blood samples were collected from 180 healthy male blood donors (18-76 years old) through the Australian Red Cross Blood Services. Genomic DNA was extracted, and specimens were tested for HPV DNA by PCR using a broad-range primer pair. The HPV types in positive samples were determined by cloning and sequencing. HPV DNA was found in 8.3% (15/180) of the blood donors. A wide variety of HPV types was isolated from the PBMCs, belonging to the cutaneous beta and gamma papillomavirus genera and the mucosal alpha papillomavirus genus. High-risk HPV types linked to cancer development were detected in 1.7% (3/180) of the PBMC samples. Blood was also collected from a healthy HPV-positive 44-year-old male on four different occasions in order to determine which blood cell fractions harbor HPV. PBMCs treated with trypsin were negative for HPV, while non-trypsinized PBMCs were HPV-positive, suggesting that the HPV in blood is attached to the outside of blood cells via a protein-containing moiety. HPV was also isolated from B cells, dendritic cells, NK cells, and neutrophils. In conclusion, HPV present in PBMCs could represent a reservoir of virus and a potential new route of transmission.
Abstract:
Equilibrium Partitioning of an Ionic Contrast agent with microcomputed tomography (EPIC-µCT) is a non-invasive technique to quantify and visualize the three-dimensional distribution of glycosaminoglycans (GAGs) in fresh cartilage tissue. However, it is unclear whether this technique is applicable to already fixed tissues. Therefore, this study aimed at investigating whether formalin fixation of bovine cartilage affects X-ray attenuation, and thus the interpretation of EPIC-µCT data.
Design: Osteochondral samples (n = 24) were incubated with ioxaglate, an ionic contrast agent, for 22 h prior to µCT scanning. The samples were scanned in both formalin-fixed and fresh conditions. GAG content was measured using a biochemical assay and normalized to wet weight, dry weight, and water content to determine potential reasons for differences in X-ray attenuation.
Results: The expected zonal distribution of contrast agent/GAGs was observed for both fixed and fresh cartilage specimens. However, despite no significant differences in GAG concentrations or physical properties between fixed and fresh samples, the average attenuation levels of formalin-fixed cartilage were 14.3% lower than in fresh samples.
Conclusions: EPIC-µCT is useful for three-dimensional visualization of GAGs in formalin-fixed cartilage. However, the significant reduction in X-ray attenuation for fixed (compared to fresh) cartilage must be taken into account and adjusted for accordingly when quantifying GAG concentrations using EPIC-µCT.
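One plausible form of the adjustment implied by the reported 14.3% difference (our reading; the abstract gives no explicit formula): rescale attenuation measured in fixed tissue to a fresh-equivalent value before converting it to GAG concentration,

```latex
\mu_{\text{fresh-equivalent}} \approx \frac{\mu_{\text{fixed}}}{1 - 0.143}.
```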