953 results for fecal egg count
Abstract:
Charcoal particles in pollen slides are often abundant, so analysts face the problem of setting the minimum counting sum as small as possible in order to save time. We analysed the reliability of charcoal-concentration estimates based on different counting sums, using simulated low- to high-count samples. Bootstrap simulations indicate that the variability of inferred charcoal concentrations increases progressively with decreasing sums. Below 200 items (i.e., the sum of charcoal particles and exotic marker grains), reconstructed fire incidence is either too high or too low. Statistical comparisons show that the means of bootstrap simulations stabilize after 200 counts. Moreover, a count of 200-300 items is sufficient to produce a charcoal-concentration estimate with less than ±5% error when compared with high-count samples of 1000 items, for charcoal/marker-grain ratios of 0.1-0.91. If, however, this ratio is extremely high or low (> 0.91 or < 0.1) and such samples are frequent, we suggest reducing or adding marker grains prior to processing new samples.
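A rough illustration of this kind of bootstrap experiment in Python (the 1000-item reference, the 200-300 count sums, and the error criterion follow the abstract; the reference data and implementation details are invented for illustration):

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical high-count reference slide: 1000 items,
    # charcoal/marker-grain ratio ~0.5 (1 = charcoal, 0 = marker grain)
    reference = np.repeat([1, 0], [333, 667])
    true_ratio = reference.sum() / (len(reference) - reference.sum())

    for count_sum in (50, 100, 200, 300):
        # Bootstrap: draw `count_sum` items with replacement, 2000 times
        draws = rng.choice(reference, size=(2000, count_sum), replace=True)
        charcoal = draws.sum(axis=1)
        est = charcoal / (count_sum - charcoal)  # charcoal/marker ratio
        rel_err = (est - true_ratio) / true_ratio
        lo, hi = np.percentile(rel_err, [2.5, 97.5])
        print(f"sum={count_sum:4d}: 95% of estimates within {lo:+.1%} .. {hi:+.1%}")

As in the study, the spread of the bootstrap estimates around the high-count value shrinks as the counting sum grows.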
Abstract:
The parasitoid Chelonus inanitus (Braconidae, Hymenoptera) oviposits into eggs of Spodoptera littoralis (Noctuidae, Lepidoptera) and, along with the egg, also injects polydnaviruses and venom, which are prerequisites for successful parasitoid development. The parasitoid larva develops within the embryonic and larval stages of the host, which enters metamorphosis precociously and arrests development in the prepupal stage. Polydnaviruses are responsible for the developmental arrest and interfere with the host's endocrine system in the last larval instar. Polydnaviruses have a segmented genome and are transmitted as a provirus integrated in the wasp's genome. Virions are only formed in female wasps and no virus replication is seen in the parasitized host. Here it is shown that very small amounts of viral transcripts were found in parasitized eggs and early larval instars of S. littoralis. Later on, transcript quantities increased and were highest in the late last larval instar for two of the three viral segments tested and in the penultimate to early last larval instar for the third segment. These are the first data on the occurrence of viral transcripts in the host of an egg-larval parasitoid and they are different from data reported for hosts of larval parasitoids, where transcript levels are already high shortly after parasitization. The analysis of three open reading frames by RT-PCR revealed viral transcripts in parasitized S. littoralis and in female pupae of C. inanitus, indicating the absence of host specificity. For one open reading frame, transcripts were also seen in male pupae, suggesting transcription from integrated viral DNA.
Abstract:
The concentrations of chironomid remains in lake sediments are highly variable, and chironomid stratigraphies therefore often include samples with a low number of counts. Thus, the effect of low count sums on reconstructed temperatures is an important issue when applying chironomid‐temperature inference models. Using an existing data set, we simulated low count sums by randomly picking subsets of head capsules from surface‐sediment samples with a high number of specimens. Subsequently, a chironomid‐temperature inference model was used to assess how the inferred temperatures are affected by low counts. The simulations indicate that the variability of inferred temperatures increases progressively with decreasing count sums. At counts below 50 specimens, a further reduction in count sum can cause a disproportionate increase in the variation of inferred temperatures, whereas at higher count sums the inferences are more stable. Furthermore, low-count samples may consistently infer too low or too high temperatures and therefore produce a systematic error in a reconstruction. Smoothing reconstructed temperatures downcore is proposed as a possible way to compensate for the high variability due to low count sums. By combining adjacent samples in a stratigraphy to produce samples of a more reliable size, it is possible to assess whether low counts cause a systematic error in inferred temperatures.
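In outline, the subsampling simulation works like the sketch below (hypothetical head-capsule data; a simple mean of taxon temperature optima stands in for the actual calibration-based inference model):

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical surface-sediment sample: 300 head capsules, each tagged
    # with the temperature optimum of its taxon (degrees C)
    optima = rng.normal(loc=12.0, scale=3.0, size=300)
    full_count_temp = optima.mean()  # stand-in for the inference model

    for n in (25, 50, 100, 200):
        # Randomly pick subsets of n head capsules without replacement,
        # then "infer" a temperature from each subset
        inferred = [rng.choice(optima, size=n, replace=False).mean()
                    for _ in range(1000)]
        print(f"n={n:3d}: SD of inferred temperature = {np.std(inferred):.2f} C "
              f"(full count: {full_count_temp:.2f} C)")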
Abstract:
BACKGROUND The aim of newborn screening (NBS) for CF is to detect children with 'classic' CF, where early treatment is possible and improves prognosis. Children with an inconclusive CF diagnosis (CFSPID) should not be detected, as there is no evidence of improvement through early treatment. No algorithm in current NBS guidelines explains what to do when the sweat test (ST) fails. This study compares the performance of three different algorithms for further diagnostic evaluation when the first ST is unsuccessful, with regard to the numbers of children detected with CF and CFSPID and the time until a definite diagnosis. METHODS In Switzerland, CF-NBS was introduced in January 2011 using an IRT-DNA-IRT algorithm followed by a ST. In children in whom a ST was not possible (no or insufficient sweat), three different protocols were applied between 2011 and 2014: in 2011, the ST was repeated until it was successful (protocol A); in 2012, we proceeded directly to diagnostic DNA testing (protocol B); and in 2013-2014, fecal elastase (FE) was measured in stool to detect pancreatic insufficiency needing immediate treatment (protocol C). RESULTS The ratio CF:CFSPID was 7:1 (27/4) with protocol A, 2:1 (22/10) with protocol B, and 14:1 (54/4) with protocol C. The mean time to definite diagnosis was significantly shorter with protocol C (33 days) than with protocol A or B (42 and 40 days; p=0.014 compared to A, and p=0.036 compared to B). CONCLUSIONS The algorithm used for the diagnostic part of newborn screening in CF centers affects the performance of a CF-NBS program with regard to the ratio CF:CFSPID and the time until definite diagnosis. Our results suggest including FE after initial sweat test failure in the CF-NBS guidelines to keep the proportion of CFSPID low and the time until definite diagnosis short.
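Protocol C can be read as a small decision rule. A sketch follows; the sweat-chloride and fecal-elastase cut-offs here are generic clinical values, not stated in the abstract:

    from typing import Optional

    def protocol_c(sweat_possible: bool, sweat_chloride: Optional[float],
                   fecal_elastase: Optional[float]) -> str:
        """Illustrative decision flow after a positive IRT-DNA-IRT screen."""
        if sweat_possible:
            # Generic cut-off of 60 mmol/L chloride; not from the abstract
            return "CF" if sweat_chloride >= 60 else "further evaluation"
        # First sweat test failed (no or insufficient sweat):
        # measure fecal elastase to detect pancreatic insufficiency
        if fecal_elastase is not None and fecal_elastase < 200:  # ug/g stool
            return "pancreatic insufficiency: treat and work up as CF"
        return "repeat sweat test when feasible"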
Abstract:
Gastrointestinal (GI) protein loss, due to lymphangiectasia or chronic inflammation, can be challenging to diagnose. This study evaluated the diagnostic accuracy of serum and fecal canine α1-proteinase inhibitor (cα1PI) concentrations for detecting crypt abscesses and/or lacteal dilation in dogs. Serum and fecal cα1PI concentrations were measured in 120 dogs undergoing GI tissue biopsies and were compared between dogs with and without crypt abscesses/lacteal dilation. Sensitivity and specificity were calculated for dichotomous outcomes. Serial serum cα1PI concentrations were also evaluated in 12 healthy corticosteroid-treated dogs. Serum cα1PI and albumin concentrations were significantly lower in dogs with crypt abscesses and/or lacteal dilation than in those without (both P < 0.001), and more severe lesions were associated with lower serum cα1PI concentrations, higher 3-day mean fecal cα1PI concentrations, and lower serum/fecal cα1PI ratios. Serum and fecal cα1PI, and their ratios, distinguished dogs with moderate or severe GI crypt abscesses/lacteal dilation from dogs with mild or no such lesions with moderate sensitivity (56-92%) and specificity (67-81%). Serum cα1PI concentrations increased during corticosteroid administration. We conclude that serum and fecal cα1PI concentrations reflect the severity of intestinal crypt abscesses/lacteal dilation in dogs. Given its specificity for the GI tract, measurement of fecal cα1PI appears superior to serum cα1PI for diagnosing GI protein loss in dogs. In addition, the serum/fecal cα1PI ratio has improved accuracy in hypoalbuminemic dogs, but serum cα1PI concentrations should be interpreted with caution in corticosteroid-treated dogs.
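The sensitivity and specificity figures quoted come from the usual 2x2 dichotomisation; for reference (the counts below are invented, not the study's data):

    def sens_spec(tp: int, fp: int, fn: int, tn: int) -> tuple:
        """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
        return tp / (tp + fn), tn / (tn + fp)

    # Invented counts: dogs with moderate/severe lesions vs. mild/none,
    # classified by some fecal ca1PI cut-off
    sens, spec = sens_spec(tp=23, fp=8, fn=7, tn=32)
    print(f"sensitivity {sens:.0%}, specificity {spec:.0%}")  # 77%, 80%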
Abstract:
The experiment was designed to investigate the impact of selection for increased body mass on external and internal egg quality traits of Japanese quail. Three hundred and sixty Japanese quail, divergently selected over three generations for different body mass at 4 weeks of age, were used. Quail were divided evenly into three groups of 120 birds each: high body mass (HBM), low body mass (LBM) and Control. ANOVA was used to detect the effect of selection on egg quality, and correlations between external and internal egg quality traits were measured. Our results revealed that HBM quail laid heavier eggs (P = 0.03 compared with LBM, but not significantly different from Control quail) with higher external (shell thickness, shell weight, eggshell ratio and eggshell density, P = 0.0001) and internal egg quality scores (albumen weight, P = 0.003; albumen ratio, P = 0.01; albumen height, yolk height, yolk index and Haugh unit, P = 0.0001) when compared with both the Control and LBM lines. Egg surface area and yolk diameter were significantly higher in HBM than in LBM, but not compared with the Control line. Egg weight was positively correlated with albumen weight (r = 0.54, P = 0.0001), albumen ratio (r = 0.14, P = 0.05), yolk height (r = 0.27, P = 0.0001), yolk weight (r = 0.23, P = 0.002), yolk diameter (r = 0.14, P = 0.05) and yolk index (r = 0.21, P = 0.005), but negatively correlated with yolk ratio (r = -0.16, P = 0.03). Our results indicate that selection for higher body mass might result in heavier eggs and superior egg quality.
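The statistics reported (one-way ANOVA across lines, Pearson correlations between traits) are standard; a minimal SciPy sketch with invented measurements:

    import numpy as np
    from scipy.stats import f_oneway, pearsonr

    # Invented shell weights (g) for the three selection lines
    hbm, lbm, ctrl = [0.95, 0.99, 1.02], [0.82, 0.85, 0.80], [0.90, 0.88, 0.92]
    f_stat, p_line = f_oneway(hbm, lbm, ctrl)

    # Invented egg and albumen weights (g) for one of the correlations
    egg = np.array([11.8, 12.4, 13.1, 12.0, 13.5, 12.7])
    albumen = np.array([6.9, 7.2, 7.8, 7.0, 8.1, 7.5])
    r, p_corr = pearsonr(egg, albumen)
    print(f"ANOVA p={p_line:.3f}; Pearson r={r:.2f} (p={p_corr:.3f})")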
Abstract:
BACKGROUND Antiretroviral therapy (ART) initiation is now recommended irrespective of CD4 count. However, data on the relationship between CD4 count at ART initiation and loss to follow-up (LTFU) are limited and conflicting. METHODS We conducted a cohort analysis including all adults initiating ART (2008-2012) at three public sector sites in South Africa. LTFU was defined as no visit in the 6 months before database closure. The Kaplan-Meier estimator and Cox proportional hazards models were used to examine the relationship between CD4 count at ART initiation and 24-month LTFU. Final models were adjusted for demographics, year of ART initiation and programme expansion, and corrected for unascertained mortality. RESULTS Among 17 038 patients, the median CD4 count at initiation increased from 119 cells/μL (IQR 54-180) in 2008 to 257 cells/μL (IQR 175-318) in 2012. In unadjusted models, observed LTFU was associated with both CD4 counts <100 cells/μL and CD4 counts ≥300 cells/μL. After adjustment, patients with CD4 counts ≥300 cells/μL were 1.35 (95% CI 1.12 to 1.63) times as likely to be LTFU after 24 months as those with CD4 counts of 150-199 cells/μL. This increased risk for patients with CD4 counts ≥300 cells/μL was largest in the first 3 months on treatment. Correction for unascertained deaths attenuated the association between CD4 counts <100 cells/μL and LTFU, while the association between CD4 counts ≥300 cells/μL and LTFU persisted. CONCLUSIONS Patients initiating ART at higher CD4 counts may be at increased risk of LTFU. With programmes initiating patients at higher CD4 counts, models of ART delivery need to be reoriented to support long-term retention.
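A skeletal version of the survival analysis named in the methods, using the lifelines package (toy data and column names invented here; the paper's covariate adjustment and mortality correction are omitted):

    import pandas as pd
    from lifelines import CoxPHFitter, KaplanMeierFitter

    # Toy cohort: follow-up months, LTFU indicator, baseline CD4 >= 300 flag
    df = pd.DataFrame({
        "months":     [3, 24, 5, 24, 16, 24, 11, 24],
        "ltfu":       [1,  0, 1,  0,  1,  0,  1,  0],
        "cd4_ge_300": [1,  1, 0,  0,  1,  1,  0,  0],
    })

    # Kaplan-Meier estimate of retention over follow-up
    km = KaplanMeierFitter().fit(df["months"], df["ltfu"])

    # Cox proportional hazards model; remaining columns act as covariates
    cox = CoxPHFitter().fit(df, duration_col="months", event_col="ltfu")
    print(cox.summary[["coef", "exp(coef)"]])  # hazard ratio for the CD4 flag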
Abstract:
OBJECTIVE To illustrate an approach to comparing CD4 cell count and HIV-RNA monitoring strategies in HIV-positive individuals on antiretroviral therapy (ART). DESIGN Prospective studies of HIV-positive individuals in Europe and the USA in the HIV-CAUSAL Collaboration and the Center for AIDS Research Network of Integrated Clinical Systems. METHODS Antiretroviral-naive individuals who initiated ART and became virologically suppressed within 12 months were followed from the date of suppression. We compared three CD4 cell count and HIV-RNA monitoring strategies: once every (1) 3 ± 1 months, (2) 6 ± 1 months, and (3) 9-12 ± 1 months. We used inverse-probability weighted models to compare these strategies with respect to clinical, immunologic, and virologic outcomes. RESULTS Among 39,029 eligible individuals, there were 265 deaths and 690 AIDS-defining illnesses or deaths. Compared with the 3-month strategy, the mortality hazard ratios (95% CIs) were 0.86 (0.42 to 1.78) for the 6-month strategy and 0.82 (0.46 to 1.47) for the 9-12-month strategy. The respective 18-month risk ratios (95% CIs) of virologic failure (RNA >200) were 0.74 (0.46 to 1.19) and 2.35 (1.56 to 3.54), and the 18-month mean CD4 differences (95% CIs) were -5.3 (-18.6 to 7.9) and -31.7 (-52.0 to -11.3). The estimates for the 2-year risk of AIDS-defining illness or death were similar across strategies. CONCLUSIONS Our findings suggest that the monitoring frequency of virologically suppressed individuals can be decreased from every 3 months to every 6, 9, or 12 months with respect to clinical outcomes. Because the effects of different monitoring strategies could take years to materialize, longer follow-up is needed to fully evaluate this question.
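Inverse-probability weighting of the kind used here can be sketched compactly: model each person's probability of following a given strategy from baseline covariates, then weight by the inverse of that probability before comparing outcomes. A toy version under strong simplifying assumptions (invented data; the real analysis also handled censoring and time-varying adherence to strategy):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)
    n = 2000
    age = rng.normal(40, 10, n)
    cd4 = rng.normal(450, 120, n)
    X = np.column_stack([age, cd4])

    # Strategy assignment (1 = 6-monthly, 0 = 3-monthly) depends on covariates
    p_true = 1 / (1 + np.exp(-(-0.5 + 0.02 * (age - 40) + 0.002 * (cd4 - 450))))
    strategy = rng.binomial(1, p_true)
    # Outcome (virologic failure) depends on covariates, not on strategy
    failure = rng.binomial(1, 1 / (1 + np.exp(-(-2.0 - 0.003 * (cd4 - 450)))))

    ps = LogisticRegression().fit(X, strategy).predict_proba(X)[:, 1]
    w = np.where(strategy == 1, 1 / ps, 1 / (1 - ps))  # inverse-probability weights

    risk1 = np.average(failure[strategy == 1], weights=w[strategy == 1])
    risk0 = np.average(failure[strategy == 0], weights=w[strategy == 0])
    print(f"weighted risk ratio ~ {risk1 / risk0:.2f}")  # ~1 by construction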
Abstract:
BACKGROUND Approximately one-third of individuals have a high plasma response to dietary cholesterol (hyper-responders). Although increases in both LDL and HDL cholesterol have been observed, limited data exist regarding the effects of egg consumption on lipoprotein subclasses and circulating carotenoids. METHODS 29 postmenopausal women (50-68 y) and 13 men (60-80 y) were assigned to either 3 eggs (EGG, 640 mg cholesterol/d) or an equal volume of cholesterol-free egg substitute (SUB, 0 mg cholesterol/d) for 30 d. Following a 3-wk washout, subjects crossed over to the alternate diet. Individuals with a response to dietary cholesterol > 2.2 mg/dL for each additional 100 mg of dietary cholesterol were classified as hyper-responders, while hypo-responders were those with a response ≤ 2.2 mg/dL. RESULTS During EGG, hyper-responders had an increase in the larger (≥ 21.2 nm), less atherogenic LDL particles (P < 0.001) and larger HDL particles (> 8.8 nm) (P < 0.01), with no significant difference in the total number of LDL or HDL particles. Regardless of response classification, all individuals had an increase in plasma lutein (from 32.4 ± 15.2 to 46.4 ± 23.3 ng/L) and zeaxanthin (from 8.8 ± 4.8 to 10.7 ± 5.8 ng/L) during EGG, yet hyper-responders displayed higher concentrations of carotenoids than hypo-responders. CONCLUSION These findings suggest that the increases in LDL-C and HDL-C due to increased egg consumption in hyper-responders are not related to an increased number of LDL or HDL particles but to an increase in the less atherogenic lipoprotein subfractions. Also, increases in plasma carotenoids after EGG may provide a valuable dietary source for this population.
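The responder classification is simple arithmetic: the change in plasma cholesterol between the EGG and SUB periods, scaled per 100 mg of the 640 mg/d cholesterol difference. For illustration (values invented):

    def response_per_100mg(tc_egg: float, tc_sub: float,
                           chol_diff_mg: float = 640.0) -> float:
        """Plasma cholesterol response (mg/dL) per additional 100 mg
        dietary cholesterol per day."""
        return (tc_egg - tc_sub) / (chol_diff_mg / 100.0)

    # e.g. 205 mg/dL on EGG vs 188 mg/dL on SUB -> 2.7 mg/dL per 100 mg
    resp = response_per_100mg(205.0, 188.0)
    label = "hyper-responder" if resp > 2.2 else "hypo-responder"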
Abstract:
The ability to respond plastically to the environment has allowed amphibians to evolve responses to spatial and temporal variation in predation threat (Benard 2004). Embryos exposed to egg predation are expected to hatch earlier than their conspecifics. Larval predation can induce a suite of phenotypic changes, including growing a larger tail area. When presented with cues from both egg and larval predators, embryos are expected to respond to the egg predator by hatching earlier, because the egg predator presents an immediate threat. However, hatching early may be costly in the larval environment in terms of development, morphology, and/or behavior. We conducted a laboratory experiment in which we exposed clutches of spotted salamander (Ambystoma maculatum) eggs to both egg (caddisfly larvae) and larval (A. opacum) predators to test this hypothesis. We recorded hatching time and stage, and took developmental and morphological data on the animals a week after hatching. Larvae were entered into lethal predation trials with a predatory sunfish (Lepomis sp.) in order to study behavior. We found that animals exposed to egg predator cues hatched earlier and at earlier developmental stages than conspecifics, regardless of whether a larval predator was present. Animals exposed to larval predator cues grew relatively larger tails and survived longer in the lethal predation trials. However, the group exposed to both predators showed a cost of early hatching in terms of smaller tail area and shorter survival time in predation trials. The measured morphological and developmental effects of hatching plasticity were transient, as there were no developmental or morphological differences between the treatment groups at metamorphosis. Hatching plasticity may be transient, but it is important to the development and survival of many amphibians.
Abstract:
Leukopenia, the leukocyte count, and the prognosis of disease are interrelated; a systematic search of the literature was undertaken to ascertain the strength of the evidence. One hundred seventy-one studies from 1953 onward pertaining to the predictive capabilities of the leukocyte count were found. Of those studies, 42 met the inclusion criteria. An estimated range of 2,200 cells/μL to 7,000 cells/μL was determined to indicate good prognosis in disease and the least overall risk to an individual. Tables of the evidence are included, indicating the disparate populations examined and the possible degrees of association.
Abstract:
Patients who started HAART (highly active antiretroviral treatment) under the previous, aggressive DHHS guidelines (1997) underwent life-long continuous HAART that was associated with many short-term as well as long-term complications. Many interventions have attempted to reduce those complications, including intermittent treatment, also called pulse therapy. Many studies have examined the determinants of the rate of fall in CD4 count after interruption, as these data would help guide treatment interruptions. The data set used here was part of a cohort study under way at the Johns Hopkins AIDS service since January 1984, in which data were collected both prospectively and retrospectively. The patients in this data set consisted of 47 patients receiving pulse therapy with the aim of reducing long-term complications. The aim of this project was to study the impact of virologic and immunologic factors on the rate of CD4 loss after treatment interruption. The exposure variables of interest were age, race, gender, and CD4 cell count and HIV RNA level at HAART initiation. The rate of change of CD4 cell count after treatment interruption was estimated from observed data using advanced longitudinal data analysis methods (i.e., a linear mixed model). Random effects accounted for the repeated measures of CD4 per person after treatment interruption, and the regression coefficient estimates from the model were then used to produce subject-specific rates of CD4 change, accounting for group trends. The rate of fall of CD4 count did not depend on CD4 cell count or viral load at initiation of treatment; thus these factors may not be useful for determining who can have a successful treatment interruption. CD4 count and viral load were also examined with t-tests and ANOVA after grouping based on medians and quartiles to detect any difference in the mean rate of CD4 fall after interruption. There was no significant difference between the groups, suggesting no association between the rate of CD4 fall after treatment interruption and the above-mentioned exposure variables.
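The longitudinal model described (a linear mixed model with random effects per patient, yielding subject-specific CD4 slopes) can be sketched with statsmodels; the data and variable names below are invented:

    import pandas as pd
    import statsmodels.formula.api as smf

    # Invented long-format data: repeated CD4 counts after interruption
    df = pd.DataFrame({
        "patient":    [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
        "months_off": [0, 3, 6, 0, 3, 6, 0, 3, 6, 0, 3, 6],
        "cd4":        [520, 430, 360, 610, 540, 500,
                       480, 390, 300, 700, 650, 640],
    })

    # Fixed effect of time off treatment; random intercept and slope per patient
    fit = smf.mixedlm("cd4 ~ months_off", df, groups=df["patient"],
                      re_formula="~months_off").fit()

    # Subject-specific rate = fixed slope + that patient's random slope (BLUP)
    print(fit.params["months_off"])
    print(fit.random_effects)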
Abstract:
Outbreaks of diarrhea are common among children in day care centers (DCC). The enteropathogens associated with these outbreaks are spread by the fecal-oral route through contaminated hands or environmental objects. This prospective study was undertaken to determine the prevalence of fecal coliform (FC) contamination in the DCC environment. Ten rooms in 6 DCC housing 121 children <2 years of age were studied for 13 weeks. Inanimate objects (1275), toy balls (724), and hands (954) were cultured 1-3 times per week. FC contamination was common during each week of the study and was significantly (p < 0.05) greater on objects, toy balls, and the hands of children in toddler rooms than in infant rooms. In the 5 rooms in which clothes were worn over diapers, there was a significantly lower prevalence of FC on toy balls (p < 0.005), inanimate objects (p < 0.05), and the hands of children (p < 0.001) and caregivers (p < 0.05) compared with rooms in which overclothes were not worn. The occurrence of diarrhea was significantly associated with increased contamination of caregivers' and children's hands. Using plasmid analysis of trimethoprim (TMP)-resistant Escherichia coli, stool and environmental isolates from individual DCC rooms had the same plasmid patterns, which were unique to each center. In summary, FC contamination of environmental objects and the hands of children and caregivers in DCC is common; toy balls can serve as sentinels of contamination; FC contamination can be significantly decreased by the use of clothes worn over diapers; and plasmid analysis of E. coli strains showed the same patterns in stool and environmental isolates.
Abstract:
Purpose. This project was designed to describe the association between wasting and CD4 cell counts in HIV-infected men, in order to better understand the role of wasting in the progression of HIV infection. Methods. Baseline and prevalence data were collected from a cross-sectional survey of 278 HIV-infected men seen at the Houston Veterans Affairs Medical Center Special Medicine Clinic from June 1, 1991 to January 1, 1994. A follow-up study was conducted among those at risk to investigate the incidence of wasting and the association between wasting and low CD4 cell counts. Wasting was described by four methods: Z-scores for age-, sex-, and height-adjusted weight; sex- and age-adjusted mid-arm muscle circumference (MAMC); fat-free mass (FFM); and a ratio of extra-cellular mass (ECM) to body-cell mass (BCM) > 1.20. FFM, ECM, and BCM were estimated from bioelectrical impedance analysis; MAMC was calculated from triceps skinfold and mid-arm circumference. The relationship between wasting and covariates was examined with logistic regression in the cross-sectional study and with Poisson regression in the follow-up study. The association between death and wasting was examined with Cox regression. Results. The prevalence of wasting ranged from 5% (weight and ECM:BCM) to almost 14% (MAMC and FFM) among the 278 men examined. The odds of wasting associated with a baseline CD4 cell count <200 were significant for each method but weight, and ranged from 4.6 to 12.7. Use of antiviral therapy was significantly protective for MAMC, FFM and ECM:BCM (OR ≈ 0.2), whereas the need for antibacterial therapy was a risk factor (OR 3.1, 95% CI 1.1-8.7). The average incidence of wasting ranged from 4 to 16 per 100 person-years among the approximately 145 men followed for 160 person-years. Low CD4 cell count appeared to increase the risk of wasting, but statistical significance was not reached; the effect of the small sample size on the power to detect a significant association should be considered. Wasting by MAMC and FFM was significantly associated with death after adjusting for baseline serum albumin concentration and CD4 cell count. Conclusions. Wasting by MAMC and FFM was strongly associated with baseline CD4 cell counts in both the prevalence and incidence studies and was a strong predictor of death. Of the two methods, MAMC is convenient, has available reference population data, and may be the most appropriate for assessing the nutritional status of HIV-infected men.
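Two of the anthropometric quantities used above have simple closed forms: the mid-arm muscle circumference derived from mid-arm circumference and triceps skinfold, and the reference-standardized Z-score. A sketch (the reference mean and SD are invented):

    import math

    def mamc(mac_cm: float, tsf_mm: float) -> float:
        """Mid-arm muscle circumference (cm) from mid-arm circumference (cm)
        and triceps skinfold (mm): MAMC = MAC - pi * TSF."""
        return mac_cm - math.pi * (tsf_mm / 10.0)

    def z_score(value: float, ref_mean: float, ref_sd: float) -> float:
        """Standardize a measurement against a reference population."""
        return (value - ref_mean) / ref_sd

    # e.g. MAC 28 cm, TSF 10 mm -> MAMC ~24.9 cm; reference values invented
    z = z_score(mamc(28.0, 10.0), ref_mean=25.3, ref_sd=2.1)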
Abstract:
This study establishes the extent and relevance of the bias in population estimates of prevalence, incidence, and intensity of infection with Schistosoma mansoni caused by the relative sensitivity of stool examination techniques. The population studied was Parcelas de Boqueron in Las Piedras, Puerto Rico, where the Centers for Disease Control had undertaken a prospective community-based study of infection with S. mansoni in 1972. During each January of the succeeding years, stool specimens from this population were processed according to the modified Ritchie concentration (MRC) technique. During January 1979, additional stool specimens were collected from 30 individuals selected on the basis of their mean S. mansoni egg output during previous years. Each specimen was divided into ten 1-gm aliquots and three 42-mg aliquots. The relationship of egg counts obtained with the Kato-Katz (KK) thick smear technique as a function of the mean of ten counts obtained with the MRC technique was established by means of regression analysis. Additionally, the effect of fecal sample size and egg excretion level on technique sensitivity was evaluated during a blind assessment of single stool specimens, using both examination methods, from 125 residents with documented S. mansoni infections. The regression equation was ln KK = 2.3324 + 0.6319 ln MRC, with a coefficient of determination (r²) of 0.73. The regression equation was then utilized to correct the term m for sample size in the expression P(≥1 egg) = 1 − e^(−ms), which estimates the probability P of finding at least one egg as a function of the mean S. mansoni egg output m of the population and the effective stool sample size s utilized by the coprological technique. This algorithm closely approximated the observed sensitivity of the KK and MRC tests when these were used to blindly screen a population of known parasitologic status for infection with S. mansoni. In addition, the algorithm was utilized to adjust the apparent prevalence of infection for the degree of functional sensitivity exhibited by the diagnostic test. This permitted estimation of the true prevalence of infection and, hence, a means of correcting estimates of the incidence of infection.
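The sensitivity model in this abstract is compact enough to restate: with mean egg output m (eggs per gram) and effective stool sample size s (grams), P(≥1 egg) = 1 − e^(−ms), and an apparent prevalence can be corrected by dividing by that sensitivity. A worked sketch using the study's aliquot sizes (the egg outputs and prevalence are invented):

    import math

    def sensitivity(mean_epg: float, sample_g: float) -> float:
        """P(>= 1 egg) = 1 - exp(-m*s): functional sensitivity of the test."""
        return 1.0 - math.exp(-mean_epg * sample_g)

    # Effective sample sizes from the study design: KK 42 mg, MRC 1 g
    for method, grams in (("KK 42 mg", 0.042), ("MRC 1 g", 1.0)):
        for m in (1, 5, 20, 100):  # invented mean eggs per gram
            print(f"{method:9s} m={m:3d}: sensitivity {sensitivity(m, grams):.2f}")

    # Correcting an apparent prevalence for imperfect sensitivity (illustrative)
    apparent = 0.30
    true_prev = apparent / sensitivity(20, 0.042)  # ~0.53
    print(f"corrected prevalence ~ {true_prev:.0%}")

The small effective sample of the Kato-Katz smear (42 mg) is exactly why light infections are missed, and why the correction matters most at low egg outputs.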