974 results for Count Basil
Abstract:
In numerous intervention studies and education field trials, random assignment to treatment occurs in clusters rather than at the level of observation. This departure from random assignment of individual units may be due to logistics, political feasibility, or ecological validity. Data within the same cluster or grouping are often correlated. Applying traditional regression techniques, which assume independence between observations, to clustered data produces consistent parameter estimates; however, such estimators are often inefficient compared with methods that incorporate the clustered nature of the data into the estimation procedure (Neuhaus 1993).¹ Multilevel models, also known as random-effects or random-components models, can account for the clustering of data by estimating higher-level (group) as well as lower-level (individual) variation. Designing a study in which the unit of observation is nested within higher-level groupings requires determining the sample sizes at each level. This study investigates the effect of various sampling strategies for a 3-level repeated measures design on the parameter estimates when the outcome variable of interest follows a Poisson distribution.
Results of the study suggest that second-order PQL estimation produces the least biased estimates in the 3-level multilevel Poisson model, followed by first-order PQL and then second- and first-order MQL. The MQL estimates of both fixed and random parameters are generally satisfactory when the level-2 and level-3 variation is less than 0.10. However, as the higher-level error variance increases, the MQL estimates become increasingly biased. If the PQL estimation algorithm fails to converge and the higher-level error variance is large, the estimates may be significantly biased. In this case, bias correction techniques such as bootstrapping should be considered as an alternative procedure.
For larger sample sizes, structures with 20 or more units sampled at the levels with normally distributed random errors produced more stable estimates with less sampling variance than structures with an increased number of level-1 units. For small sample sizes, sampling fewer units at the level with Poisson variation produces less sampling variation; however, this criterion is no longer important when sample sizes are large.
¹ Neuhaus J (1993). "Estimation Efficiency and Tests of Covariate Effects with Clustered Binary Data." Biometrics, 49, 989–996.
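The 3-level Poisson setting described above can be sketched by simulating counts with normally distributed random intercepts at levels 2 and 3. This is a hypothetical illustration with arbitrary parameter values, not the study's actual simulation code:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_three_level_poisson(n3=20, n2=10, n1=5,
                                 beta0=1.0, sigma2=0.3, sigma3=0.3):
    """Simulate counts from a 3-level random-intercept Poisson model:
    log(mu_ijk) = beta0 + u_j (level-3 effect) + v_ij (level-2 effect),
    with u_j ~ N(0, sigma3) and v_ij ~ N(0, sigma2)."""
    counts = []
    for j in range(n3):                           # level-3 groups
        u = rng.normal(0.0, np.sqrt(sigma3))
        for i in range(n2):                       # level-2 units within groups
            v = rng.normal(0.0, np.sqrt(sigma2))
            mu = np.exp(beta0 + u + v)
            counts.extend(rng.poisson(mu, size=n1))  # level-1 observations
    return np.array(counts)

y = simulate_three_level_poisson()
# With log-normal random effects, E[y] = exp(beta0 + (sigma2 + sigma3) / 2)
print(y.mean())
```

Feeding such simulated data to PQL and MQL estimators and varying (n3, n2, n1) is the kind of design comparison the abstract describes.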
Abstract:
A tacitly held assumption in synesthesia research is the unidirectionality of digit-color associations. This notion is based on synesthetes' reports that digits evoke a color percept, whereas colors do not elicit any numerical impression. In a random color generation task, we found evidence for an implicit co-activation of digits by colors, a finding that constrains neurological theories concerning cross-modal associations in general and synesthesia in particular.
Abstract:
Bovine mastitis is a frequent problem in Swiss dairy herds. One of the main pathogens causing significant economic loss is Staphylococcus aureus. Various Staph. aureus genotypes with different biological properties have been described. Genotype B (GTB) of Staph. aureus was identified as the most contagious and one of the most prevalent strains in Switzerland. The aim of this study was to identify risk factors associated with the herd-level presence of Staph. aureus GTB and Staph. aureus non-GTB in Swiss dairy herds with an elevated yield-corrected herd somatic cell count (YCHSCC). One hundred dairy herds with a mean YCHSCC between 200,000 and 300,000 cells/mL in 2010 were recruited and each farm was visited once during milking. A standardized protocol investigating demography, mastitis management, cow husbandry, milking system, and milking routine was completed during the visit. A bulk tank milk (BTM) sample was analyzed by real-time PCR for the presence of Staph. aureus GTB to classify the herds into 2 groups: Staph. aureus GTB-positive and Staph. aureus GTB-negative. Moreover, quarter milk samples were aseptically collected for bacteriological culture from cows with a somatic cell count ≥150,000 cells/mL on the last test-day before the visit. The culture results allowed us to allocate the Staph. aureus GTB-negative farms to Staph. aureus non-GTB and Staph. aureus-free groups. Multivariable multinomial logistic regression models were built to identify risk factors associated with the herd-level presence of Staph. aureus GTB and Staph. aureus non-GTB. The prevalence of Staph. aureus GTB herds was 16% (n=16), whereas that of Staph. aureus non-GTB herds was 38% (n=38). Herds that sent lactating cows to seasonal communal pastures had significantly higher odds of being infected with Staph. aureus GTB (odds ratio: 10.2, 95% CI: 1.9-56.6), compared with herds without communal pasturing.
Herds that purchased heifers had significantly higher odds of being infected with Staph. aureus GTB (rather than Staph. aureus non-GTB) compared with herds without purchase of heifers. Furthermore, herds that did not use udder ointment as supportive therapy for acute mastitis had significantly higher odds of being infected with Staph. aureus GTB (odds ratio: 8.5, 95% CI: 1.6-58.4) or Staph. aureus non-GTB (odds ratio: 6.1, 95% CI: 1.3-27.8) than herds that used udder ointment occasionally or regularly. Herds in which the milker performed unrelated activities during milking had significantly higher odds of being infected with Staph. aureus GTB (rather than Staph. aureus non-GTB) compared with herds in which the milker did not perform unrelated activities at milking. Awareness of the 4 potential risk factors identified in this study can guide the implementation of intervention strategies to improve udder health in both Staph. aureus GTB and Staph. aureus non-GTB herds.
Abstract:
OBJECTIVES Pre-antiretroviral therapy (ART) inflammation and coagulation activation predict clinical outcomes in HIV-positive individuals. We assessed whether pre-ART inflammatory marker levels predicted the CD4 count response to ART. METHODS Analyses were based on data from the Strategic Management of Antiretroviral Therapy (SMART) trial, an international trial evaluating continuous vs. interrupted ART, and the Flexible Initial Retrovirus Suppressive Therapies (FIRST) trial, evaluating three first-line ART regimens with at least two drug classes. For this analysis, participants had to be ART-naïve or off ART at randomization and (re)starting ART, and to have C-reactive protein (CRP), interleukin-6 (IL-6) and D-dimer measured pre-ART. Using random effects linear models, we assessed the association between each of the biomarker levels, categorized as quartiles, and change in CD4 count from ART initiation to 24 months post-ART. Analyses adjusted for CD4 count at ART initiation (baseline), study arm, follow-up time and other known confounders. RESULTS Overall, 1084 individuals [659 from SMART (26% ART naïve) and 425 from FIRST] met the eligibility criteria, providing 8264 CD4 count measurements. Seventy-five per cent of individuals were male, with a mean age of 42 years. The median (interquartile range) baseline CD4 counts were 416 (350-530) and 100 (22-300) cells/μL in SMART and FIRST, respectively. All of the biomarkers were inversely associated with baseline CD4 count in FIRST but not in SMART. In adjusted models, there was no clear relationship between increasing biomarker quartile and mean change in CD4 count post-ART (P for trend: CRP, P = 0.97; IL-6, P = 0.25; D-dimer, P = 0.29). CONCLUSIONS Pre-ART inflammation and coagulation activation do not predict the CD4 count response to ART and appear to influence the risk of clinical outcomes through mechanisms other than blunting long-term CD4 count gain.
Abstract:
This paper is concerned with the analysis of zero-inflated count data when time of exposure varies. It proposes a modified zero-inflated count data model where the probability of an extra zero is derived from an underlying duration model with Weibull hazard rate. The new model is compared to the standard Poisson model with logit zero inflation in an application to the effect of treatment with thiotepa on the number of new bladder tumors.
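The modified model described above can be sketched as a log-likelihood in which the extra-zero probability is the Weibull survival function of exposure time. This is a hypothetical parameterization for illustration; the paper's exact specification may differ:

```python
import math

def zip_weibull_loglik(y, t, mu, lam, k):
    """Log-likelihood of a zero-inflated Poisson model where the probability
    of an extra zero for exposure time t is the Weibull survival function
    S(t) = exp(-(lam * t)**k), and the Poisson mean scales with exposure.
    (Illustrative parameterization; not necessarily the paper's.)"""
    ll = 0.0
    for yi, ti in zip(y, t):
        p0 = math.exp(-((lam * ti) ** k))   # extra-zero probability from Weibull
        rate = mu * ti                      # Poisson mean proportional to exposure
        if yi == 0:
            # A zero arises either from the extra-zero process or the Poisson
            ll += math.log(p0 + (1 - p0) * math.exp(-rate))
        else:
            # Positive counts can only come from the Poisson component
            ll += math.log(1 - p0) - rate + yi * math.log(rate) - math.lgamma(yi + 1)
    return ll
```

Maximizing this function over (mu, lam, k), e.g. with a numerical optimizer, would fit the model; setting k = 1 gives a constant-hazard (exponential) special case.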
Abstract:
Charcoal particles in pollen slides are often abundant, so analysts face the problem of setting the minimum counting sum as small as possible in order to save time. We analysed the reliability of charcoal-concentration estimates based on different counting sums, using simulated low- to high-count samples. Bootstrap simulations indicate that the variability of inferred charcoal concentrations increases progressively with decreasing sums. Below 200 items (i.e., the sum of charcoal particles and exotic marker grains), reconstructed fire incidence is either too high or too low. Statistical comparisons show that the means of bootstrap simulations stabilize after 200 counts. Moreover, a count of 200-300 items is sufficient to produce a charcoal-concentration estimate with less than ±5% error compared with high-count samples of 1000 items, for charcoal/marker-grain ratios of 0.1-0.91. If, however, this ratio is extremely high or low (>0.91 or <0.1) and such samples are frequent, we suggest reducing or adding marker grains before processing new samples.
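The counting-variability idea can be sketched by treating each counted item as charcoal or marker grain with a fixed probability and watching how the spread of the estimated ratio shrinks as the count sum grows. This is a simplified illustration with made-up numbers, not the authors' simulation code:

```python
import random
import statistics

random.seed(1)

def simulate_ratio_estimates(p_charcoal=0.5, count_sum=200, n_rep=2000):
    """Draw `count_sum` items (charcoal with probability p_charcoal, exotic
    marker grain otherwise) and estimate the charcoal/marker ratio; repeat
    n_rep times to gauge the counting variability of the estimate."""
    estimates = []
    for _ in range(n_rep):
        charcoal = sum(random.random() < p_charcoal for _ in range(count_sum))
        markers = count_sum - charcoal
        if markers > 0:                      # guard against division by zero
            estimates.append(charcoal / markers)
    return estimates

spread_50 = statistics.stdev(simulate_ratio_estimates(count_sum=50))
spread_300 = statistics.stdev(simulate_ratio_estimates(count_sum=300))
print(spread_50, spread_300)  # variability shrinks as the count sum grows
```

Multiplying the estimated ratio by the known marker-grain concentration would give the charcoal concentration, so the spread of the ratio carries over directly to the concentration estimate.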
Abstract:
The concentrations of chironomid remains in lake sediments are very variable and, therefore, chironomid stratigraphies often include samples with a low number of counts. Thus, the effect of low count sums on reconstructed temperatures is an important issue when applying chironomid-temperature inference models. Using an existing data set, we simulated low count sums by randomly picking subsets of head capsules from surface-sediment samples with a high number of specimens. Subsequently, a chironomid-temperature inference model was used to assess how the inferred temperatures are affected by low counts. The simulations indicate that the variability of inferred temperatures increases progressively with decreasing count sums. At counts below 50 specimens, a further reduction in count sum can cause a disproportionate increase in the variation of inferred temperatures, whereas at higher count sums the inferences are more stable. Furthermore, low-count samples may consistently infer too low or too high temperatures and therefore produce a systematic error in a reconstruction. Smoothing reconstructed temperatures downcore is proposed as a possible way to compensate for the high variability due to low count sums. By combining adjacent samples in a stratigraphy to produce samples of a more reliable size, it is possible to assess whether low counts cause a systematic error in inferred temperatures.
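The subsampling scheme above can be sketched by randomly picking subsets without replacement from a hypothetical high-count sample and tracking how taxon proportions (the inputs to a transfer function, standing in here for the temperature inference itself) vary with the count sum. Taxon names and counts are invented for illustration:

```python
import random
import statistics

random.seed(7)

# A hypothetical high-count surface sample: 100 head capsules by taxon.
full_sample = ["taxonA"] * 60 + ["taxonB"] * 30 + ["taxonC"] * 10

def subsample_proportions(sample, count_sum, taxon, n_rep=1000):
    """Randomly pick `count_sum` specimens without replacement and record
    the proportion of `taxon`; repeat to see how low counts inflate the
    spread of the assemblage data fed to an inference model."""
    props = []
    for _ in range(n_rep):
        picked = random.sample(sample, count_sum)
        props.append(picked.count(taxon) / count_sum)
    return props

sd_20 = statistics.stdev(subsample_proportions(full_sample, 20, "taxonA"))
sd_80 = statistics.stdev(subsample_proportions(full_sample, 80, "taxonA"))
print(sd_20, sd_80)  # smaller subsamples give noisier proportions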
Resumo:
BACKGROUND Antiretroviral therapy (ART) initiation is now recommended irrespective of CD4 count. However data on the relationship between CD4 count at ART initiation and loss to follow-up (LTFU) are limited and conflicting. METHODS We conducted a cohort analysis including all adults initiating ART (2008-2012) at three public sector sites in South Africa. LTFU was defined as no visit in the 6 months before database closure. The Kaplan-Meier estimator and Cox's proportional hazards models examined the relationship between CD4 count at ART initiation and 24-month LTFU. Final models were adjusted for demographics, year of ART initiation, programme expansion and corrected for unascertained mortality. RESULTS Among 17 038 patients, the median CD4 at initiation increased from 119 (IQR 54-180) in 2008 to 257 (IQR 175-318) in 2012. In unadjusted models, observed LTFU was associated with both CD4 counts <100 cells/μL and CD4 counts ≥300 cells/μL. After adjustment, patients with CD4 counts ≥300 cells/μL were 1.35 (95% CI 1.12 to 1.63) times as likely to be LTFU after 24 months compared to those with a CD4 150-199 cells/μL. This increased risk for patients with CD4 counts ≥300 cells/μL was largest in the first 3 months on treatment. Correction for unascertained deaths attenuated the association between CD4 counts <100 cells/μL and LTFU while the association between CD4 counts ≥300 cells/μL and LTFU persisted. CONCLUSIONS Patients initiating ART at higher CD4 counts may be at increased risk for LTFU. With programmes initiating patients at higher CD4 counts, models of ART delivery need to be reoriented to support long-term retention.
Resumo:
OBJECTIVE To illustrate an approach to compare CD4 cell count and HIV-RNA monitoring strategies in HIV-positive individuals on antiretroviral therapy (ART). DESIGN Prospective studies of HIV-positive individuals in Europe and the USA in the HIV-CAUSAL Collaboration and The Center for AIDS Research Network of Integrated Clinical Systems. METHODS Antiretroviral-naive individuals who initiated ART and became virologically suppressed within 12 months were followed from the date of suppression. We compared 3 CD4 cell count and HIV-RNA monitoring strategies: once every (1) 3 ± 1 months, (2) 6 ± 1 months, and (3) 9-12 ± 1 months. We used inverse-probability weighted models to compare these strategies with respect to clinical, immunologic, and virologic outcomes. RESULTS In 39,029 eligible individuals, there were 265 deaths and 690 AIDS-defining illnesses or deaths. Compared with the 3-month strategy, the mortality hazard ratios (95% CIs) were 0.86 (0.42 to 1.78) for the 6 months and 0.82 (0.46 to 1.47) for the 9-12 month strategy. The respective 18-month risk ratios (95% CIs) of virologic failure (RNA >200) were 0.74 (0.46 to 1.19) and 2.35 (1.56 to 3.54) and 18-month mean CD4 differences (95% CIs) were -5.3 (-18.6 to 7.9) and -31.7 (-52.0 to -11.3). The estimates for the 2-year risk of AIDS-defining illness or death were similar across strategies. CONCLUSIONS Our findings suggest that monitoring frequency of virologically suppressed individuals can be decreased from every 3 months to every 6, 9, or 12 months with respect to clinical outcomes. Because effects of different monitoring strategies could take years to materialize, longer follow-up is needed to fully evaluate this question.
Resumo:
Signatur des Originals: S 36/F08772
Resumo:
Leukopenia, the leukocyte count, and prognosis of disease are interrelated; a systematic search of the literature was undertaken to ascertain the strength of the evidence. One hundred seventy-one studies were found from 1953 onward pertaining to the predictive capabilities of the leukocyte count. Of those studies, 42 met inclusion criteria. An estimated range of 2,200cells/μL to 7,000cells/μL was determined as that which indicates good prognosis in disease and indicates the least amount of risk to an individual overall. Tables of the evidence are included indicating the disparate populations examined and the possible degree of association. ^