943 results for Bayesian p-values


Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND Urinary creatinine excretion is used as a marker of the completeness of timed urine collections, which are a keystone of several metabolic evaluations in clinical investigations and epidemiological surveys. The current reference values for 24-hour urinary creatinine excretion rely on observations made in the 1960s and 1970s in relatively small and mostly selected groups, and may therefore be a poor fit for the present-day general European population. The aim of this study was to establish and validate anthropometry-based, age- and sex-specific reference values for 24-hour urinary creatinine excretion in adult populations with preserved renal function. METHODS We used data from two independent Swiss cross-sectional population-based studies with standardised 24-hour urine collection and measured anthropometric variables. Only data from adults of European descent, with an estimated glomerular filtration rate (eGFR) ≥60 ml/min/1.73 m² and reported completeness of the urine collection, were retained. A linear regression model was developed to predict centiles of 24-hour urinary creatinine excretion in 1,137 participants from the Swiss Survey on Salt and validated in 994 participants from the Swiss Kidney Project on Genes in Hypertension. RESULTS Mean urinary creatinine excretion was 193 ± 41 μmol/kg/24 hours in men and 151 ± 38 μmol/kg/24 hours in women in the Swiss Survey on Salt. The values were inversely correlated with age and body mass index (BMI). Based on current reference values (177 to 221 μmol/kg/24 hours in men and 133 to 177 μmol/kg/24 hours in women), 56% of the urine collections in the whole population, and 67% in people >60 years old, would have been considered inaccurate. A linear regression model with sex, BMI and age as predictor variables provided the best prediction of the observed values and showed a good fit when applied to the validation population.
CONCLUSIONS We propose a validated prediction equation for 24-hour urinary creatinine excretion in the general European population, based on readily available variables (age, sex and BMI), together with derived nomograms to ease its clinical application. This should help healthcare providers interpret the completeness of a 24-hour urine collection in daily clinical practice and in epidemiological population studies.
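The approach described above, a linear prediction of expected creatinine excretion used to flag implausible collections, can be sketched in a few lines. The coefficients and the ±30% tolerance below are hypothetical placeholders chosen only to illustrate the structure; the published equation and centiles should be used in practice.

```python
# Sketch of an anthropometry-based check of 24-h urine collection completeness.
# All coefficients are HYPOTHETICAL, not the published equation.

def predict_creatinine_umol_per_kg(sex: str, age: float, bmi: float) -> float:
    """Predicted 24-h urinary creatinine excretion (umol/kg/24 h)."""
    # Hypothetical coefficients: excretion is higher in men and falls
    # with age and BMI, as reported in the study.
    intercept = 230.0 if sex == "M" else 190.0
    return intercept - 0.6 * age - 1.2 * bmi

def collection_is_plausible(measured: float, sex: str, age: float, bmi: float,
                            tolerance: float = 0.30) -> bool:
    """Flag a 24-h collection as plausibly complete if the measured
    excretion lies within +/- tolerance of the predicted value."""
    expected = predict_creatinine_umol_per_kg(sex, age, bmi)
    return abs(measured - expected) <= tolerance * expected

# Example: a 50-year-old man with BMI 25
print(predict_creatinine_umol_per_kg("M", 50, 25))  # 170.0
print(collection_is_plausible(165, "M", 50, 25))
```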

Relevance:

30.00%

Publisher:

Abstract:

³¹P MRS magnetization transfer (³¹P-MT) experiments allow the estimation of exchange rates of biochemical reactions, such as the creatine kinase equilibrium and adenosine triphosphate (ATP) synthesis. Although various ³¹P-MT methods have been used successfully on isolated organs and animals, their application to humans in clinical scanners poses specific challenges. This study compared the two major ³¹P-MT methods on a clinical MR system using heteronuclear surface coils. Although saturation transfer (ST) is the most commonly used ³¹P-MT method, sequences such as inversion transfer (IT) with short pulses might be better suited to the specific hardware and software limitations of a clinical scanner. In addition, small NMR-undetectable metabolite pools can transfer magnetization to NMR-visible pools during long saturation pulses, which is prevented with short pulses. The ³¹P-MT sequences were adapted for limited pulse length, for heteronuclear transmit-receive surface coils with inhomogeneous B1, for the need for volume selection and for the inherently low signal-to-noise ratio (SNR) of a clinical 3-T MR system. The ST and IT sequences were applied to skeletal muscle and liver in 10 healthy volunteers. Monte Carlo simulations were used to evaluate the behavior of the IT measurements with increasing imperfections. In skeletal muscle of the thigh, ATP synthesis yielded forward reaction constants (k) of 0.074 ± 0.022 s⁻¹ (ST) and 0.137 ± 0.042 s⁻¹ (IT), whereas the creatine kinase reaction yielded 0.459 ± 0.089 s⁻¹ (IT). In the liver, ATP synthesis resulted in k = 0.267 ± 0.106 s⁻¹ (ST), whereas the IT experiment yielded no consistent results. The ST results were close to literature values, whereas the IT results were either much larger than the corresponding ST values or widely scattered, or both.
To summarize, ST and IT experiments can both be implemented on a clinical body scanner with heteronuclear transmit-receive surface coils; however, ST results are much more robust against experimental imperfections than the current implementation of IT.
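For readers unfamiliar with the ST method, the forward rate constant is conventionally obtained from the steady-state signal reduction and the apparent relaxation time measured during saturation. A minimal sketch, with illustrative rather than measured numbers:

```python
def st_rate_constant(m0: float, m_ss: float, t1_app: float) -> float:
    """Forward rate constant k (s^-1) from a saturation-transfer experiment:
    with the exchange partner fully saturated, the observed pool settles at
    M_ss with apparent relaxation time T1_app, and k = (1 - M_ss/M0) / T1_app."""
    return (1.0 - m_ss / m0) / t1_app

# Illustrative numbers (not measured values): the Pi signal falls to 70% of
# equilibrium under gamma-ATP saturation with an apparent T1 of 4.0 s,
# giving k = 0.075 s^-1, in the range reported above for ATP synthesis in muscle.
print(st_rate_constant(m0=1.0, m_ss=0.70, t1_app=4.0))
```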

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND & AIMS: Esophageal impedance measurements have been proposed to indicate the status of the esophageal mucosa, and might be used to study the roles of impaired mucosal integrity and increased acid sensitivity in patients with heartburn. We compared baseline impedance levels among patients with heartburn who did and did not respond to proton pump inhibitor (PPI) therapy, along with the pathophysiological characteristics of functional heartburn (FH). METHODS: In a case-control study, we collected data from January to December 203 on patients with heartburn and normal findings on endoscopy who were not receiving PPI therapy and underwent impedance-pH testing at hospitals in Italy. Patients with negative test results were placed on an 8-week course of PPI therapy (84 patients received esomeprazole and 36 patients received pantoprazole). Patients with more than 50% symptom improvement were classified as FH/PPI responders, and patients with less than 50% symptom improvement were classified as FH/PPI nonresponders. Patients with hypersensitive esophagus and healthy volunteers served as controls. In all patients and controls, we measured acid exposure time, number of reflux events, baseline impedance, and swallow-induced peristaltic wave indices. RESULTS: FH/PPI responders had higher acid exposure times, numbers of reflux events, and acid refluxes compared with FH/PPI nonresponders (P < .05). Patients with hypersensitive esophagus had mean acid exposure times and numbers of reflux events similar to those of FH/PPI responders. Baseline impedance levels were lower in FH/PPI responders and patients with hypersensitive esophagus compared with FH/PPI nonresponders and healthy volunteers (P < .001). Swallow-induced peristaltic wave indices were similar between FH/PPI responders and patients with hypersensitive esophagus. CONCLUSIONS: Patients with FH who respond to PPI therapy have impedance-pH features similar to those of patients with hypersensitive esophagus.
Baseline impedance measurements might allow for identification of patients who respond to PPIs but would be classified as having FH based on conventional impedance-pH measurements.

Relevance:

30.00%

Publisher:

Abstract:

The stable isotopic composition of fossil resting eggs (ephippia) of Daphnia spp. is being used to reconstruct past environmental conditions in lake ecosystems. However, the underlying assumption that the stable isotopic composition of the ephippia reflects the stable isotopic composition of the parent Daphnia, of their diet and of the environmental water has yet to be confirmed in a controlled experimental setting. We performed experiments with Daphnia pulicaria cultures, which included a control treatment conducted at 12 °C in filtered lake water with a diet of fresh algae, and three treatments in which we manipulated the stable carbon isotopic composition (δ13C value) of the algae, the stable oxygen isotopic composition (δ18O value) of the water and the water temperature, respectively. The stable nitrogen isotopic composition (δ15N value) of the algae was similar for all treatments. At 12 °C, differences in algal δ13C values and in δ18O values of the water were reflected in those of Daphnia. The differences between ephippia and Daphnia stable isotope ratios were similar in the different treatments (δ13C: +0.2 ± 0.4 ‰ (standard deviation); δ15N: −1.6 ± 0.4 ‰; δ18O: −0.9 ± 0.4 ‰), indicating that changes in dietary δ13C values and in δ18O values of water are passed on to these fossilizing structures. A higher water temperature (20 °C) resulted in lower δ13C values in Daphnia and ephippia than in the other treatments with the same food source, and in a minor change in the difference between δ13C values of ephippia and Daphnia (to −1.3 ± 0.3 ‰). This may have been due to microbial processes or increased algal respiration rates in the experimental containers, which may not affect Daphnia in natural environments. There was no significant difference in the ephippia − Daphnia offsets for δ15N and δ18O values between the 12 and 20 °C treatments, but the δ18O values of Daphnia and ephippia were on average 1.2 ‰ lower at 20 °C than at 12 °C.
We conclude that the stable isotopic composition of Daphnia ephippia provides information on that of the parent Daphnia and of the food and water they were exposed to, with small offsets between Daphnia and ephippia relative to variations in Daphnia stable isotopic composition reported from downcore studies. However, our experiments also indicate that temperature may have a minor influence on the δ13C, δ15N and δ18O values of Daphnia body tissue and ephippia. This aspect deserves attention in further controlled experiments.
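As a minimal illustration of how the mean ephippia-minus-Daphnia offsets reported above would be applied in a downcore reconstruction (the sample measurements below are hypothetical):

```python
# Mean offsets (ephippia - Daphnia) from the 12 degC treatments, in permil,
# as reported in the abstract.
OFFSETS = {"d13C": +0.2, "d15N": -1.6, "d18O": -0.9}

def daphnia_from_ephippia(ephippia_values: dict) -> dict:
    """Estimate the parent-Daphnia isotopic composition from measured
    fossil ephippia by subtracting the mean offsets."""
    return {k: round(v - OFFSETS[k], 2) for k, v in ephippia_values.items()}

# Hypothetical downcore ephippia measurements (permil)
sample = {"d13C": -30.5, "d15N": 4.0, "d18O": -8.0}
print(daphnia_from_ephippia(sample))  # {'d13C': -30.7, 'd15N': 5.6, 'd18O': -7.1}
```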

Relevance:

30.00%

Publisher:

Abstract:

Fluids are considered a fundamental agent for chemical exchanges between different rock types in the subduction system. Constraints on the sources and pathways of subduction fluids thus provide crucial information for reconstructing subduction processes. The Monviso ophiolitic sequence is composed of mafic and ultramafic rocks with minor sediments that have been subducted to ~80 km depth. In this sequence, both localized fluid flow and fluids channelized along major shear zones have been documented. We investigate the timing and source of the fluids that affected the dominant mafic rocks using microscale U-Pb dating of zircon and oxygen isotope analysis of mineral zones (garnet, zircon and antigorite) in high-pressure rocks with variable degrees of metasomatic modification. In mafic eclogites, Jurassic zircon cores are the only mineralogical relicts of the protolith gabbros and retain δ18O values of 4.5–6 ‰, typical of mantle melts. Garnet and metamorphic zircon that grew during prograde to peak metamorphism display low δ18O values between 0.2 and 3.8 ‰, which were likely inherited from high-temperature alteration of the protolith on the sea floor. This is corroborated by δ18O values of 3.0 and 3.6 ‰ in antigorite from the surrounding serpentinites. In metasomatised eclogites within the Lower Shear Zone, garnet rims formed at the metamorphic peak show a shift to higher δ18O values, up to 6 ‰. The ages of zircons in high-pressure veins and metasomatised eclogites constrain the timing of high-pressure fluid flow to around 45–46 Ma. Although the oxygen data do not contradict previous reports of interaction with serpentinite-derived fluids, the shift to isotopically heavier oxygen compositions requires a contribution from sediment-derived fluids. The scarcity of metasediments in the Monviso sequence suggests that such fluids were concentrated and fluxed along the Lower Shear Zone in amounts sufficient to modify the oxygen composition of the eclogitic minerals.

Relevance:

30.00%

Publisher:

Abstract:

The first objective of this study was to determine normative digital X-ray radiogrammetry (DXR) values, based on original digital images, in a pediatric population (aged 6-18 years). The second aim was to compare these reference data with patients suffering from distal radius fractures, with both cohorts originating from the same geographical region and evaluated using the same technical parameters and the same inclusion and exclusion criteria. DXR-derived bone mineral density (DXR-BMD) and metacarpal index (DXR-MCI) of the metacarpal bones II-IV were assessed on standardized digital hand radiographs, without printing or scanning procedures. DXR parameters were estimated separately by gender and among six age groups; values in the fracture group were compared to age- and gender-matched normative data using Student's t tests and Z scores. In the reference cohort (150 boys, 138 girls), gender differences were found in DXR-BMD, with higher values for girls from 11 to 14 years and for boys from 15 to 18 years (p < 0.05). Girls had higher normative DXR-MCI values than boys, with significant differences at 11-14 years (p < 0.05). In the case-control investigation, the fracture group (95 boys, 69 girls) presented lower DXR-BMD at 15-18 years in boys and 13-16 years in girls vs. the reference cohort (p < 0.05); DXR-MCI was lower at 11-18 years in boys and 11-16 years in girls (p < 0.05). Mean Z scores in the fracture group were -0.42 (boys) and -0.46 (girls) for DXR-BMD, and -0.51 (boys) and -0.53 (girls) for DXR-MCI. These findings indicate that the fully digital DXR technique can be accurately applied in pediatric populations ≥ 6 years of age. The lower DXR-BMD and DXR-MCI values in the fracture group suggest that individuals at increased fracture risk could be identified early, without additional radiation exposure, enabling prevention strategies that may reduce the incidence of osteoporosis later in life.
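The Z-score comparison underlying the case-control analysis is simple arithmetic against the age- and sex-matched reference distribution. The reference mean and SD below are hypothetical placeholders, chosen only to reproduce the scale of the reported Z scores:

```python
def z_score(value: float, ref_mean: float, ref_sd: float) -> float:
    """Standard score of a patient's value against matched reference data."""
    return (value - ref_mean) / ref_sd

# Hypothetical reference for one age/sex stratum: mean 0.52, SD 0.05 (DXR-BMD units)
z = z_score(0.499, ref_mean=0.52, ref_sd=0.05)
print(round(z, 2))  # -0.42, the scale of the mean Z score reported for boys
```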

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND Pancreatic stone protein (PSP) has been identified as a promising sepsis marker in adults, children and neonates. However, data on population-based reference values are lacking. This study aimed to establish age-specific reference values for PSP. METHODS PSP was measured using a specific ELISA. Serum PSP concentrations were determined in 372 healthy subjects, including 217 neonates, 94 infants and children up to 16 years, and 61 adults. The adjacent-categories method was used to determine which age categories had significantly different PSP concentrations. RESULTS Circulating PSP levels were not gender-dependent and ranged from 1.0 to 99.4 ng/ml, with a median of 9.2 ng/ml. PSP increased significantly across the age categories, from a median of 2.6 ng/ml in very preterm newborns, to 6.3 ng/ml in term newborns, to 16.1 ng/ml in older children (p < 0.001). PSP levels were higher on postnatal day three than immediately after delivery (p < 0.001). Paired umbilical artery and umbilical vein samples were strongly correlated (p < 0.001). Simultaneously obtained capillary heel-prick and venous samples showed a good level of agreement for PSP (rho 0.89, bias 19%). CONCLUSIONS This study provides age-specific normal values that may be used to define cut-offs for future trials on PSP. We demonstrate an age-dependent increase of PSP from birth to childhood.

Relevance:

30.00%

Publisher:

Abstract:

The hydroxylation of N- and O-methyl drugs and a polycyclic hydrocarbon has been demonstrated in microsomes prepared from two transplantable Morris hepatomas (7288C t.c. and 5123 t.c.(H)). The hydroxylation rates of the drug benzphetamine and the polycyclic hydrocarbon benzo[a]pyrene by tumor microsomes were inducible 2- to 3-fold and 2-fold, respectively, by pretreatment of rats with phenobarbital/hydrocortisone. Hepatoma 5123 t.c.(H) microsomal hydroxylation activities were more inducible after these pretreatments than those of hepatoma 7288C t.c. Two chemotherapeutic drugs (cyclophosphamide and isophosphamide) were shown to be mutagenic toward the TA100 strain of Salmonella typhimurium after activation by the tumor homogenate. NADPH-cytochrome P-450 reductase was purified 353-fold from microsomes of hepatoma 5123 t.c.(H) from phenobarbital/hydrocortisone-treated rats, with a specific activity of 63.6 nmol of cytochrome c reduced per min per mg of protein. The purified enzyme has an apparent molecular weight of 79,500 daltons and contains an equimolar ratio of FMN and FAD, with a total flavin content of 16.4 nmol per mg of protein. The purified enzyme also catalyzed electron transfer to artificial electron acceptors, with Km values of the hepatoma reductase similar to those of the purified liver reductase; the Km of the hepatoma reductase for NADPH (13 µM) was similar to that of the purified liver reductase (5.0 µM). In addition, the purified hepatoma reductase was immunochemically similar to the liver reductase. Hepatoma cytochrome P-450, the hemoprotein component of the monooxygenase system, was isolated from hepatoma microsomes of rats pretreated with phenobarbital/hydrocortisone; six forms were resolved by DE-53 ion-exchange chromatography and further purified on hydroxyapatite.
The six fractions that contained P-450 activity had specific contents of 0.47 to 1.75 nmol of cytochrome P-450 per mg of protein, representing a 2- to 9-fold purification relative to the original microsomes. In addition, difference spectra, molecular weights and immunological results suggest that there are at least six different forms of cytochrome P-450 in hepatoma 5123 t.c.(H).

Relevance:

30.00%

Publisher:

Abstract:

Bayesian phylogenetic analyses are now very popular in systematics and molecular evolution because they allow the use of much more realistic models than currently possible with maximum likelihood methods. There are, however, a growing number of examples in which large Bayesian posterior clade probabilities are associated with very short edge lengths and low values for non-Bayesian measures of support such as nonparametric bootstrapping. For the four-taxon case when the true tree is the star phylogeny, Bayesian analyses become increasingly unpredictable in their preference for one of the three possible resolved tree topologies as data set size increases. This leads to the prediction that hard (or near-hard) polytomies in nature will cause unpredictable behavior in Bayesian analyses, with arbitrary resolutions of the polytomy receiving very high posterior probabilities in some cases. We present a simple solution to this problem involving a reversible-jump Markov chain Monte Carlo (MCMC) algorithm that allows exploration of all of tree space, including unresolved tree topologies with one or more polytomies. The reversible-jump MCMC approach allows prior distributions to place some weight on less-resolved tree topologies, which eliminates misleadingly high posteriors associated with arbitrary resolutions of hard polytomies. Fortunately, assigning some prior probability to polytomous tree topologies does not appear to come with a significant cost in terms of the ability to assess the level of support for edges that do exist in the true tree. Methods are discussed for applying arbitrary prior distributions to tree topologies of varying resolution, and an empirical example showing evidence of polytomies is analyzed and discussed.
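The reversible-jump mechanism can be illustrated in a far simpler setting than tree space. The toy sketch below (not a phylogenetic implementation) jumps between a zero-parameter model M0, which plays the role of the unresolved polytomy, and a one-parameter model M1, the "resolved" alternative; placing prior weight on M0 keeps the sampler from arbitrarily concentrating on M1 when the data are consistent with M0:

```python
import math
import random

def loglik(p, heads, tails):
    """Binomial log-likelihood of a coin with bias p (up to a constant)."""
    return heads * math.log(p) + tails * math.log(1.0 - p)

def rjmcmc(heads, tails, iters=20000, prior_m0=0.5, seed=1):
    """Reversible-jump MCMC over two models of a coin:
    M0: bias fixed at 0.5 (zero free parameters, the 'polytomy' analogue);
    M1: bias p free with a Uniform(0,1) prior (the 'resolved' analogue).
    Returns the posterior frequency of M0."""
    rng = random.Random(seed)
    model, p = 0, 0.5
    count_m0 = 0
    for _ in range(iters):
        if rng.random() < 0.5:
            # Dimension-changing move: birth (M0 -> M1) or death (M1 -> M0).
            # Proposing p from its own prior makes the proposal density cancel
            # the prior density, so the acceptance ratio reduces to a
            # likelihood ratio times the prior odds of the two models.
            if model == 0:
                p_new = rng.random()
                log_a = (loglik(p_new, heads, tails) - loglik(0.5, heads, tails)
                         + math.log(1.0 - prior_m0) - math.log(prior_m0))
                if math.log(rng.random()) < log_a:
                    model, p = 1, p_new
            else:
                log_a = (loglik(0.5, heads, tails) - loglik(p, heads, tails)
                         + math.log(prior_m0) - math.log(1.0 - prior_m0))
                if math.log(rng.random()) < log_a:
                    model = 0
        elif model == 1:
            # Ordinary within-model random-walk update of p.
            p_new = p + rng.gauss(0.0, 0.1)
            if 0.0 < p_new < 1.0 and math.log(rng.random()) < (
                    loglik(p_new, heads, tails) - loglik(p, heads, tails)):
                p = p_new
        count_m0 += (model == 0)
    return count_m0 / iters

# 10 heads in 20 tosses: the data are consistent with M0, and the sampler
# gives M0 most of the posterior mass (analytically about 0.79 here) instead
# of arbitrarily concentrating on the over-parameterized M1.
print(round(rjmcmc(10, 10), 2))
```

The same cancellation trick, drawing the new parameter from its prior so that only the likelihood ratio and prior odds remain, is what lets the phylogenetic sampler move between topologies of different resolution without computing awkward Jacobians.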

Relevance:

30.00%

Publisher:

Abstract:

With the recognition of the importance of evidence-based medicine, there is an emerging need for methods to systematically synthesize available data. Specifically, methods to provide accurate estimates of test characteristics for diagnostic tests are needed to help physicians make better clinical decisions. To provide more flexible approaches for meta-analysis of diagnostic tests, we developed three Bayesian generalized linear models. Two of these models, a bivariate normal and a binomial model, analyzed pairs of sensitivity and specificity values while incorporating the correlation between these two outcome variables. Noninformative independent uniform priors were used for the variance of sensitivity, specificity and correlation. We also applied an inverse Wishart prior to check the sensitivity of the results. The third model was a multinomial model where the test results were modeled as multinomial random variables. All three models can include specific imaging techniques as covariates in order to compare performance. Vague normal priors were assigned to the coefficients of the covariates. The computations were carried out using the 'Bayesian inference using Gibbs sampling' implementation of Markov chain Monte Carlo techniques. We investigated the properties of the three proposed models through extensive simulation studies. We also applied these models to a previously published meta-analysis dataset on cervical cancer as well as to an unpublished melanoma dataset. In general, our findings show that the point estimates of sensitivity and specificity were consistent among Bayesian and frequentist bivariate normal and binomial models. However, in the simulation studies, the estimates of the correlation coefficient from Bayesian bivariate models are not as good as those obtained from frequentist estimation regardless of which prior distribution was used for the covariance matrix. 
The Bayesian multinomial model consistently underestimated the sensitivity and specificity regardless of the sample size and correlation coefficient. In conclusion, the Bayesian bivariate binomial model provides the most flexible framework for future applications because of the following strengths: (1) it facilitates direct comparison between different tests; (2) it captures the variability in both sensitivity and specificity simultaneously, as well as the intercorrelation between the two; and (3) it can be directly applied to sparse data without ad hoc correction.
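The bivariate binomial model favored in this conclusion treats each study's true-positive and true-negative counts as binomial draws whose underlying sensitivity and specificity are correlated on the logit scale. The sketch below simulates data from that model; it does not reproduce the paper's BUGS-based posterior inference, and all parameter values are illustrative:

```python
import math
import random

def inv_logit(x):
    return 1.0 / (1.0 + math.exp(-x))

def simulate_studies(n_studies=200, mu_sens=1.5, mu_spec=2.0, sd=0.4,
                     rho=-0.5, n_per_study=100, seed=7):
    """Simulate diagnostic-accuracy studies under the bivariate model:
    each study draws a correlated pair (logit sensitivity, logit specificity)
    from a bivariate normal, then binomial counts of true positives (among
    diseased) and true negatives (among non-diseased)."""
    rng = random.Random(seed)
    studies = []
    for _ in range(n_studies):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho ** 2) * rng.gauss(0.0, 1.0)
        sens = inv_logit(mu_sens + sd * z1)
        spec = inv_logit(mu_spec + sd * z2)
        tp = sum(rng.random() < sens for _ in range(n_per_study))
        tn = sum(rng.random() < spec for _ in range(n_per_study))
        studies.append((tp, n_per_study, tn, n_per_study))
    return studies

studies = simulate_studies()
mean_sens = sum(tp / n for tp, n, _, _ in studies) / len(studies)
mean_spec = sum(tn / n for _, _, tn, n in studies) / len(studies)
print(round(mean_sens, 2), round(mean_spec, 2))
```

The negative `rho` mimics the usual threshold effect, where studies trading sensitivity against specificity induce a negative correlation between the two.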

Relevance:

30.00%

Publisher:

Abstract:

Objective: In this secondary data analysis, three statistical methodologies were implemented to handle cases with missing data in a motivational interviewing and feedback study. The aim was to evaluate the impact these methodologies have on the data analysis. Methods: We first evaluated whether the assumption of missing completely at random held for this study. We then conducted a secondary data analysis using a mixed linear model to handle missing data with three methodologies: (a) complete case analysis; (b) multiple imputation with an explicit model containing the outcome variables, time, and the interaction of time and treatment; and (c) multiple imputation with an explicit model containing the outcome variables, time, the interaction of time and treatment, and additional covariates (e.g., age, gender, smoking, years in school, marital status, housing, race/ethnicity, and whether participants played on an athletic team). Several comparisons were conducted, including: 1) the motivational interviewing with feedback group (MIF) vs. the assessment-only group (AO), the motivational interviewing-only group (MIO) vs. AO, and the feedback-only group (FBO) vs. AO; 2) MIF vs. FBO; and 3) MIF vs. MIO. Results: We first evaluated the patterns of missingness, which indicated that about 13% of participants showed monotone missing patterns and about 3.5% showed non-monotone missing patterns. We then tested the assumption of missing completely at random with Little's MCAR test: the chi-square statistic was 167.8 with 125 degrees of freedom (p = 0.006), indicating that the data could not be assumed to be missing completely at random. We then compared whether the three strategies reached the same results.
For the comparison between MIF and AO, as well as the comparison between MIF and FBO, only multiple imputation with additional covariates under uncongenial and congenial models reached different results. For the comparison between MIF and MIO, all the methodologies for handling missing values gave different results. Discussion: The study indicated, first, that missingness was crucial in this study. Second, understanding the assumptions of the models was important, since we could not determine whether the data were missing at random or missing not at random. Future research should therefore focus on sensitivity analyses under the missing-not-at-random assumption.
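The practical difference between complete-case analysis and model-based imputation is easy to demonstrate on synthetic data. In the sketch below (a toy construction, not the study's data), the outcome is missing at random given an observed covariate, so the complete-case estimate is biased while a simple regression imputation, a single-imputation stand-in for the multiple-imputation models used in the study, recovers the true mean:

```python
import random

def make_data(n=5000, seed=3):
    """y = 2 + x + noise, with y much more likely to be missing when x > 0.
    Missingness depends only on the observed x, so the data are MAR
    (missing at random) but clearly not MCAR."""
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        x = rng.gauss(0.0, 1.0)
        y = 2.0 + x + rng.gauss(0.0, 0.5)
        drop = rng.random() < (0.8 if x > 0 else 0.1)
        rows.append((x, None if drop else y))
    return rows

rows = make_data()
obs = [(x, y) for x, y in rows if y is not None]

# Complete-case estimate of E[y]: biased downward, because high-x rows
# (which tend to have high y) are preferentially missing.
cc_mean = sum(y for _, y in obs) / len(obs)

# Regression imputation of the missing y values from the observed x.
mx = sum(x for x, _ in obs) / len(obs)
slope = (sum((x - mx) * (y - cc_mean) for x, y in obs)
         / sum((x - mx) ** 2 for x, _ in obs))
intercept = cc_mean - slope * mx
filled = [y if y is not None else intercept + slope * x for x, y in rows]
imp_mean = sum(filled) / len(filled)

# The imputation-based mean lands near the true value of 2.0,
# while the complete-case mean does not.
print(round(cc_mean, 2), round(imp_mean, 2))
```

Proper multiple imputation would additionally draw several imputed datasets and pool the estimates to propagate imputation uncertainty; the single imputation above only illustrates the bias correction.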

Relevance:

30.00%

Publisher:

Abstract:

My dissertation focuses mainly on Bayesian adaptive designs for phase I and phase II clinical trials. It includes three specific topics: (1) proposing a novel two-dimensional dose-finding algorithm for biological agents, (2) developing Bayesian adaptive screening designs to provide more efficient and ethical clinical trials, and (3) incorporating missing late-onset responses to make an early stopping decision. Treating patients with novel biological agents is becoming a leading trend in oncology. Unlike cytotoxic agents, for which toxicity and efficacy monotonically increase with dose, biological agents may exhibit non-monotonic patterns in their dose-response relationships. Using a trial with two biological agents as an example, we propose a phase I/II trial design to identify the biologically optimal dose combination (BODC), which is defined as the dose combination of the two agents with the highest efficacy and tolerable toxicity. A change-point model is used to reflect the fact that the dose-toxicity surface of the combined agents may plateau at higher dose levels, and a flexible logistic model is proposed to accommodate the possible non-monotonic pattern of the dose-efficacy relationship. During the trial, we continuously update the posterior estimates of toxicity and efficacy and assign patients to the most appropriate dose combination. We propose a novel dose-finding algorithm to encourage sufficient exploration of untried dose combinations in the two-dimensional space. Extensive simulation studies show that the proposed design has desirable operating characteristics in identifying the BODC under various patterns of dose-toxicity and dose-efficacy relationships. Trials of combination therapies for the treatment of cancer are playing an increasingly important role in the battle against this disease.
To more efficiently handle the large number of combination therapies that must be tested, we propose a novel Bayesian phase II adaptive screening design to simultaneously select among possible treatment combinations involving multiple agents. Our design formulates the selection procedure as a Bayesian hypothesis testing problem in which the superiority of each treatment combination is equated to a single hypothesis. During the trial, we use the current posterior probabilities of all hypotheses to adaptively allocate patients to treatment combinations. Simulation studies show that the proposed design substantially outperforms the conventional multi-arm balanced factorial trial design: it yields a significantly higher probability of selecting the best treatment, provides higher power to identify the best treatment at the end of the trial, and allocates substantially more patients to efficacious treatments. The design is most appropriate for trials that combine multiple agents and screen for the efficacious combination to be investigated further. Phase II trials are usually single-arm studies conducted to test the efficacy of experimental agents and to decide whether an agent is promising enough to be sent to a phase III trial. Interim monitoring is employed to stop a trial early for futility, to avoid assigning an unacceptable number of patients to inferior treatments. We propose a Bayesian single-arm phase II design with continuous monitoring for estimating the response rate of the experimental drug.
To address the issue of late-onset responses, we use a piecewise exponential model to estimate the hazard function of the time-to-response data and handle the missing responses using a multiple imputation approach. We evaluate the operating characteristics of the proposed method through extensive simulation studies and show that it reduces the total trial duration and yields desirable operating characteristics for different physician-specified lower bounds of the response rate, across different true response rates.
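The continuous-monitoring futility rule at the heart of such a single-arm design can be sketched with a beta-binomial posterior. The null response rate, prior, and cutoff below are illustrative choices, not the dissertation's calibrated values, and the sketch omits the piecewise-exponential handling of late-onset responses:

```python
import random

def posterior_prob_exceeds(responses, n, p0, a=1.0, b=1.0, draws=20000, seed=11):
    """Monte-Carlo estimate of P(response rate > p0 | data) under a
    Beta(a, b) prior, i.e. a Beta(a + responses, b + n - responses) posterior."""
    rng = random.Random(seed)
    post_a, post_b = a + responses, b + n - responses
    hits = sum(rng.betavariate(post_a, post_b) > p0 for _ in range(draws))
    return hits / draws

def stop_for_futility(responses, n, p0=0.3, cutoff=0.05):
    """Continuous-monitoring rule: stop early if the posterior probability
    that the drug beats the null rate p0 falls below the cutoff."""
    return posterior_prob_exceeds(responses, n, p0) < cutoff

# Hypothetical interim looks against a null rate of p0 = 0.3:
print(stop_for_futility(2, 20))   # True: P(rate > 0.3 | 2/20) is only ~0.03
print(stop_for_futility(10, 20))  # False: the posterior strongly favors rates above 0.3
```

Applying the rule after every patient (or small cohort) gives the continuous monitoring described above; the cutoff would normally be calibrated by simulation to control the design's operating characteristics.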