55 results for Synthetic Control Method
Abstract:
There is anecdotal evidence that athletes use the banned substance Synacthen because of its perceived benefit and the associated rise in cortisol. To test the performance-enhancing effects of Synacthen, eight trained cyclists completed two 2-day exercise sessions separated by 7-10 days. On the first day of each 2-day exercise session, subjects received either a Synacthen (0.25 mg, TX) or placebo (PLA) injection. Performance was assessed by a 20-km time trial (TT) after a 90-min fatiguing protocol on day 1 and without the fatiguing protocol on day 2. Plasma androgen and ACTH concentrations were measured during the exercise bouts, as was the rating of perceived exertion (RPE). Spot urines were analyzed for androgen and glucocorticoid quantification. Basal plasma hormones did not differ significantly between the PLA and TX groups before and 24 h after the IM injection (P > 0.05). After TX injection, ACTH peaked at 30 min and hormone profiles were significantly different from the PLA trial (P < 0.001). RPE increased significantly in both groups as the exercise sessions progressed (P < 0.001) but was not influenced by treatment. The time to completion of the TT was not affected by Synacthen treatment on either day. In the present study, a single IM injection of synthetic ACTH improved neither acute nor subsequent cycling performance and did not influence perceived exertion. The investigated urinary hormones did not vary after treatment, reinforcing the difficulty of detecting ACTH abuse.
Abstract:
Plasmodium vivax circumsporozoite (CS) protein is a leading malaria vaccine candidate. We describe the characterization of specific immune responses induced in 21 malaria-naive volunteers vaccinated with long synthetic peptides derived from the CS protein formulated in Montanide ISA 720. Both antibody- and cell-mediated immune responses were analyzed. Antibodies were predominantly of the IgG1 and IgG3 isotypes, recognized parasite proteins in the immunofluorescent antibody test, and partially blocked sporozoite invasion of hepatoma cell lines in vitro. Peripheral blood mononuclear cells from most volunteers (94%) showed IFN-γ production in vitro upon stimulation with both the long synthetic peptides and short peptides containing CD8+ T-cell epitopes. The relatively limited sample size did not allow conclusions about HLA associations with the immune responses observed. In summary, the inherent safety and tolerability, together with the strong antibody responses, invasion-blocking activity, and IFN-γ production induced by these vaccine candidates, warrant further testing in a phase II clinical trial.
Abstract:
Aims: To describe drinking patterns and their baseline predictive factors during the 12-month period after an initial evaluation for alcohol treatment. Methods: CONTROL is a single-center, prospective, observational study evaluating consecutive alcohol-dependent patients. Using a curve clustering methodology based on a polynomial regression mixture model, we identified three clusters of patients with dominant alcohol use patterns, described as mostly abstainers, mostly moderate drinkers and mostly heavy drinkers. Multinomial logistic regression analysis was used to identify baseline factors (socio-demographic, alcohol dependence consequences and related factors) predictive of belonging to each drinking cluster. Results: The sample included 143 alcohol-dependent adults (63.6% males), mean age 44.6 ± 11.8 years. The clustering method identified 47 (32.9%) mostly abstainers, 56 (39.2%) mostly moderate drinkers and 40 (28.0%) mostly heavy drinkers. Multivariate analyses indicated that mild or severe depression at baseline predicted belonging to the mostly moderate drinkers cluster during follow-up (relative risk ratio (RRR) 2.42, CI [1.02-5.73], P = 0.045), while living alone (RRR 2.78, CI [1.03-7.50], P = 0.044) and reporting more alcohol-related consequences (RRR 1.03, CI [1.01-1.05], P = 0.004) predicted belonging to the mostly heavy drinkers cluster during follow-up. Conclusion: In this sample, the drinking patterns of alcohol-dependent patients were predicted by baseline factors, i.e. depression, living alone and alcohol-related consequences; these findings may inform clinicians about the likely drinking patterns of their alcohol-dependent patients over the year following the initial evaluation for alcohol treatment.
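The two-stage idea behind curve clustering, summarizing each patient's consumption trajectory by a polynomial fit and then grouping the fits, can be sketched in simplified form. This is a hypothetical illustration, not the study's actual method: it substitutes a tiny k-means over per-patient quadratic coefficients for the paper's polynomial regression mixture model, and all data and names are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical monthly drinking scores for 12 patients over 12 months:
# three underlying trajectories (abstinent, moderate, heavy) plus noise.
months = np.arange(12)
data = np.vstack([level + rng.normal(0, 0.4, 12)
                  for level in (0.5, 3.0, 8.0) for _ in range(4)])

# Stage 1: summarize each patient's trajectory by quadratic-fit coefficients.
coefs = np.array([np.polyfit(months, y, deg=2) for y in data])

# Stage 2: a tiny k-means on the coefficient vectors (a simplified stand-in
# for the paper's polynomial regression mixture model).
def kmeans(X, k, init_idx, iters=20):
    centers = X[init_idx]
    for _ in range(iters):
        labels = ((X[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        centers = np.array([X[labels == j].mean(0) for j in range(k)])
    return labels

labels = kmeans(coefs, k=3, init_idx=[0, 4, 8])
print(labels)  # patients generated from the same trajectory share a label
```

The mixture-model version would additionally yield cluster membership probabilities per patient; the k-means shortcut only gives hard assignments.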
Abstract:
BACKGROUND: The activity of the renin-angiotensin system is usually evaluated as plasma renin activity (PRA, ngAI/ml per h), but the reproducibility of this enzymatic assay is notoriously poor. We compared the inter- and intralaboratory reproducibility of PRA with that of a new automated chemiluminescent assay, which allows the direct quantification of immunoreactive renin [chemiluminescent immunoreactive renin (CLIR), microU/ml]. METHODS: Aliquots from six pooled plasmas of patients with very low to very high PRA levels were measured in 12 centres with both the enzymatic and the direct assay. The same methods were applied to three control plasma preparations with known renin content. RESULTS: In pooled plasmas, mean PRA values ranged from 0.14 +/- 0.08 to 18.9 +/- 4.1 ngAI/ml per h, whereas those of CLIR ranged from 4.2 +/- 1.7 to 436 +/- 47 microU/ml. In control plasmas, mean values of PRA and CLIR were always within the expected range. Overall, there was a significant correlation between the two methods (r = 0.73, P < 0.01). Similar correlations were found in plasmas subdivided into those with low, intermediate and high PRA. However, the coefficients of variation among laboratories were always higher for PRA than for CLIR, ranging from 59.4 to 17.1% for PRA and from 41.0 to 10.7% for CLIR (P < 0.01). The mean intralaboratory variability was also higher for PRA than for CLIR (8.5 vs. 4.5%, P < 0.01). CONCLUSION: The measurement of renin with the chemiluminescent method is a reliable alternative to PRA, with the advantage of superior inter- and intralaboratory reproducibility.
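The interlaboratory comparison rests on the coefficient of variation, 100 × SD / mean across laboratories. A minimal sketch with invented measurements of one pooled plasma by 12 hypothetical laboratories:

```python
import numpy as np

# Hypothetical measurements of one pooled plasma by 12 laboratories:
# enzymatic PRA (ngAI/ml per h) and direct chemiluminescent renin (microU/ml).
pra = np.array([2.1, 3.4, 1.2, 4.0, 2.8, 0.9, 3.1, 1.6, 2.4, 3.8, 1.1, 2.9])
clir = np.array([52, 61, 48, 58, 55, 50, 57, 60, 49, 54, 56, 53])

def cv_percent(x):
    """Coefficient of variation across laboratories: 100 * SD / mean."""
    return 100 * np.std(x, ddof=1) / np.mean(x)

print(f"PRA CV: {cv_percent(pra):.1f}%  CLIR CV: {cv_percent(clir):.1f}%")
```

Because the CV is scale-free, it allows a fair comparison even though the two assays report in different units, which is exactly why the study can conclude that CLIR is the more reproducible method.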
Abstract:
ABSTRACT: BACKGROUND: Decision curve analysis has been introduced as a method to evaluate prediction models in terms of their clinical consequences if used for a binary classification of subjects into a group that should be treated and a group that should not. The key concept for this type of evaluation is the "net benefit", a concept borrowed from utility theory. METHODS: We recall the foundations of decision curve analysis and discuss some new aspects. First, we stress the formal distinction between the net benefit for the treated and for the untreated and define the concept of the "overall net benefit". Next, we revisit the important distinction between the concept of accuracy, as typically assessed using the Youden index and a receiver operating characteristic (ROC) analysis, and the concept of utility of a prediction model, as assessed using decision curve analysis. Finally, we provide an explicit implementation of decision curve analysis to be applied in the context of case-control studies. RESULTS: We show that the overall net benefit, which combines the net benefit for the treated and the untreated, is a natural alternative to the benefit achieved by a model, being invariant with respect to the coding of the outcome and conveying a more comprehensive picture of the situation. Further, within the framework of decision curve analysis, we illustrate the important difference between the accuracy and the utility of a model, demonstrating how poor an accurate model may be in terms of its net benefit. Finally, we show that the application of decision curve analysis to case-control studies, where an accurate estimate of the true prevalence of a disease cannot be obtained from the data, is achieved with a few modifications to the original calculation procedure. CONCLUSIONS: We present several interrelated extensions to decision curve analysis that will both facilitate its interpretation and broaden its potential area of application.
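The net benefit quantities can be written down compactly. The sketch below uses the standard formula for the net benefit of treating patients above a risk threshold pt and its mirror image for the untreated; combining the two by simple addition is our reading of the abstract's "overall net benefit", and the data are invented for illustration.

```python
import numpy as np

def net_benefit_treated(y, p, pt):
    """Classic net benefit of treating patients with predicted risk >= pt."""
    treat = p >= pt
    tp = np.sum(treat & (y == 1))  # correctly treated
    fp = np.sum(treat & (y == 0))  # unnecessarily treated
    n = len(y)
    return tp / n - fp / n * pt / (1 - pt)

def net_benefit_untreated(y, p, pt):
    """Mirror-image net benefit of withholding treatment below pt."""
    spare = p < pt
    tn = np.sum(spare & (y == 0))  # correctly spared
    fn = np.sum(spare & (y == 1))  # wrongly spared
    n = len(y)
    return tn / n - fn / n * (1 - pt) / pt

# Toy data: 8 patients, 3 with the outcome, hypothetical predicted risks.
y = np.array([1, 1, 1, 0, 0, 0, 0, 0])
p = np.array([0.9, 0.8, 0.4, 0.3, 0.6, 0.2, 0.1, 0.1])
pt = 0.5
overall = net_benefit_treated(y, p, pt) + net_benefit_untreated(y, p, pt)
print(overall)
```

Note that the treated component alone changes if the outcome coding is flipped, whereas a combined quantity of this kind does not, which is the invariance property the abstract highlights.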
Abstract:
Simulated-annealing-based conditional simulations provide a flexible means of quantitatively integrating diverse types of subsurface data. Although such techniques are being increasingly used in hydrocarbon reservoir characterization studies, their potential in environmental, engineering and hydrological investigations is still largely unexploited. Here, we introduce a novel simulated annealing (SA) algorithm geared towards the integration of high-resolution geophysical and hydrological data which, compared to more conventional approaches, provides significant advancements in the way that large-scale structural information in the geophysical data is accounted for. Model perturbations in the annealing procedure are made by drawing from a probability distribution for the target parameter conditioned to the geophysical data. This is the only place where geophysical information is utilized in our algorithm, in marked contrast to other approaches where model perturbations are made by swapping values in the simulation grid and agreement with soft data is enforced through a correlation coefficient constraint. Another major feature of our algorithm is the way in which available geostatistical information is utilized. Instead of constraining realizations to match a parametric target covariance model over a wide range of spatial lags, we constrain the realizations only at the smaller lags where the available geophysical data cannot provide enough information. Thus we allow the larger-scale subsurface features resolved by the geophysical data to exert much greater control on the output realizations. Further, since the only component of the SA objective function required in our approach is a covariance constraint at small lags, our method has improved convergence and computational efficiency over more traditional methods.
Here, we present the results of applying our algorithm to the integration of porosity log and tomographic crosshole georadar data to generate stochastic realizations of the local-scale porosity structure. Our procedure is first tested on a synthetic data set, and then applied to data collected at the Boise Hydrogeophysical Research Site.
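The core loop described above, a Metropolis-style annealing that penalizes covariance mismatch only at small lags, can be sketched minimally. This is a hypothetical 1-D toy, not the paper's algorithm: the target covariances are invented, and the perturbation draws plain Gaussian values where the paper would draw from a distribution conditioned to the geophysical data.

```python
import numpy as np

rng = np.random.default_rng(0)
n, lags = 200, [1, 2, 3]
target = np.array([0.8, 0.6, 0.4])  # hypothetical small-lag covariances

def cov_at_lags(x):
    xc = x - x.mean()
    return np.array([np.mean(xc[:-h] * xc[h:]) for h in lags])

def objective(x):
    # Mismatch is penalized only at small lags; larger-scale structure is
    # left to the (here omitted) geophysically conditioned proposals.
    return np.sum((cov_at_lags(x) - target) ** 2)

x = rng.normal(0, 1, n)
obj0 = obj = objective(x)
T = 1.0
for _ in range(20000):
    i = rng.integers(n)
    old = x[i]
    # In the paper's algorithm this draw would come from a distribution
    # conditioned to the geophysical data; a plain Gaussian is used here.
    x[i] = rng.normal(0, 1)
    new_obj = objective(x)
    if new_obj < obj or rng.random() < np.exp((obj - new_obj) / T):
        obj = new_obj          # accept the perturbation
    else:
        x[i] = old             # reject and restore
    T *= 0.9997                # geometric cooling schedule
print(f"objective: {obj0:.3f} -> {obj:.3f}")
```

Because the objective touches only a handful of lags, each iteration is cheap, which reflects the computational-efficiency argument made in the abstract.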
Abstract:
PURPOSE: Effective cancer treatment generally requires combination therapy. The combination of external beam therapy (XRT) with radiopharmaceutical therapy (RPT) requires accurate three-dimensional dose calculations to avoid toxicity and evaluate efficacy. We have developed and tested a treatment planning method, using the patient-specific three-dimensional dosimetry package 3D-RD, for sequentially combined RPT/XRT therapy designed to limit toxicity to organs at risk. METHODS AND MATERIALS: The biologic effective dose (BED) was used to translate voxelized RPT absorbed dose (D(RPT)) values into a normalized total dose (or equivalent 2-Gy-fraction XRT absorbed dose), NTD(RPT), map. The BED was calculated numerically using an algorithmic approach, which enabled a more accurate calculation of BED and NTD(RPT). A treatment plan combining Samarium-153 RPT and external beam XRT was designed that would deliver a tumoricidal dose while delivering no more than 50 Gy of NTD(sum) to the spinal cord of a patient with a paraspinal tumor. RESULTS: The average voxel NTD(RPT) to tumor from RPT was 22.6 Gy (range, 1-85 Gy); the maximum spinal cord voxel NTD(RPT) from RPT was 6.8 Gy. The combined therapy NTD(sum) to tumor was 71.5 Gy (range, 40-135 Gy) for a maximum voxel spinal cord NTD(sum) equal to the maximum tolerated dose of 50 Gy. CONCLUSIONS: A method that enables real-time treatment planning of combined RPT-XRT has been developed. By implementing a more generalized conversion between the dose values from the two modalities and an activity-based treatment of partial volume effects, the reliability of combination therapy treatment planning has been expanded.
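The conversion between modalities runs through the standard linear-quadratic relations: BED = D(1 + d/(α/β)) for a fractionated course, and NTD = BED / (1 + 2/(α/β)) to express any BED as an equivalent dose in 2-Gy fractions. The sketch below shows only these textbook formulas; the paper's voxel-wise BED for RPT additionally accounts for the time-varying dose rate and is computed numerically, which is not reproduced here.

```python
def bed_xrt(total_dose_gy, dose_per_fraction_gy, alpha_beta_gy):
    """Linear-quadratic biologic effective dose for fractionated XRT."""
    return total_dose_gy * (1 + dose_per_fraction_gy / alpha_beta_gy)

def ntd(bed_gy, alpha_beta_gy):
    """Equivalent total dose delivered in standard 2-Gy fractions."""
    return bed_gy / (1 + 2.0 / alpha_beta_gy)

# Sanity check: a 50-Gy course given in 2-Gy fractions maps back to 50 Gy
# of NTD by construction, for any alpha/beta ratio.
print(ntd(bed_xrt(50, 2, alpha_beta_gy=3), alpha_beta_gy=3))
```

Summing voxel-wise NTD values from the two modalities into an NTD(sum) map is what lets a single constraint (e.g., 50 Gy to the spinal cord) govern the combined plan.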
Abstract:
Background: Urine is still the matrix of choice in the fight against doping, because it can be collected non-invasively during anti-doping tests. Most of the World Anti-Doping Agency's accredited laboratories have more than 20 years' experience in analyzing this biological fluid, and the majority of the compounds listed in the 2010 Prohibited List - International Standard are eliminated through the urinary apparatus. Storing and transporting urine samples for doping analyses does not follow a specific protocol to prevent microbial and thermal degradation. A rapid and reliable screening method could make it possible to establish reference intervals for urine specimens in doping control samples and, notably, to evaluate the prevalence of the microbial contamination known to be responsible for the degradation of chemical substances in urine. Methods: The Sysmex(R) UF-500i is a recent urine flow cytometry analyzer capable of quantifying bacteria (BACT) and other urinary particles such as RBC, WBC, EC, DEBRIS, CAST, PATH.CAST, YLC and SRC, as well as measuring urine conductivity. To determine anti-doping reference intervals for urine, 501 samples received in our laboratory over a period of two months were submitted to an immediate examination. All samples were collected and then transported at room temperature. Analysis of variance was performed to test the effects of factors such as gender, test type (in-competition, out-of-competition) and delivery time. Results: The data obtained showed that most of the urine samples were highly contaminated with bacteria. The levels of the other urine particles also varied markedly with these factors. Conclusions: The Sysmex(R) UF-500i was capable of providing a snapshot of the urine particles present in the samples at the time of delivery to the laboratory. These particles, BACT in particular, gave a good indication of the microbial degradation which had occurred and/or could have occurred in the sample.
This information could be used as a first quality control in WADA (World Anti-Doping Agency) accredited laboratories to determine whether steroid profiles, endogenous substances and prohibited substances may have been altered.
Abstract:
BACKGROUND: Numbers of travellers visiting friends and relatives (VFRs) from Europe to malaria-endemic countries are increasing and include long-term and second-generation immigrants, who represent the major burden of malaria cases imported back into Europe. Most recommendations for malaria chemoprophylaxis lack a solid evidence base, and often fail to address the cultural, social and economic needs of VFRs. METHODS: European travel medicine experts, who are members of TropNetEurop, completed a sequential series of questionnaires according to the Delphi method. This technique aims to develop a consensus through repeated iterations of questionnaires. The questionnaires in this study included questions about professional experience with VFRs, controversial issues in malaria prophylaxis, and 16 scenarios exploring indications for prescribing and choice of chemoprophylaxis. RESULTS: The experience of participants was rather diverse, as was their selection of chemoprophylaxis regimens. A significant consensus was observed in only seven of 16 scenarios. The analysis revealed a wide variation in prescribing choices, with preferences grouped by region of practice and increased prescribing seen in Northern Europe compared to Central Europe. CONCLUSIONS: Improving the evidence base on efficacy, adherence to chemoprophylaxis and risk of malaria, and encouraging discussion among experts using techniques such as the Delphi method, may reduce the variability of prescription in European travel clinics.
Abstract:
The functional method is a new test theory using a new scoring method that assumes complexity in test structure and thus takes into account every correlation between factors and items. Its main distinctive feature is that it models test scores by multiple regression instead of estimating them with simplistic sums of points. To do so, the functional method requires the creation of a hyperspherical measurement space in which item responses are expressed by their correlation with orthogonal factors. This method has three main qualities. First, measures are expressed in the absolute metric of correlations; therefore, items, scales and persons are expressed in the same measurement space using the same single metric. Second, the factors are systematically orthogonal and error-free, which is optimal for predicting other outcomes. Such predictions can be used to estimate how a person would answer other tests, or even to model that person's response strategy if it were perfectly coherent. Third, the functional method provides measures of the validity of an individual's responses (i.e., control indices). Here, we propose a standard procedure for identifying whether test results are interpretable and for excluding, on the basis of the control indices, invalid results caused by various response biases.
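The contrast between sum scoring and regression-based scoring can be sketched with simulated data. This is a hypothetical illustration of the general idea only, not the functional method itself (no hyperspherical space or control indices): item responses are generated from two orthogonal factors, and each person's factor scores are then recovered by regressing their responses on the item-factor loading matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
n_persons = 100

# Hypothetical orthogonal factor scores (the "true" traits) and a loading
# matrix giving each item's correlation with each factor.
F = rng.normal(0, 1, (n_persons, 2))
L = np.array([[0.7, 0.0], [0.6, 0.0], [0.5, 0.0], [0.8, 0.0],
              [0.0, 0.6], [0.0, 0.7], [0.0, 0.8], [0.0, 0.5]])
X = F @ L.T + rng.normal(0, 0.3, (n_persons, 8))  # simulated item responses

# Score each person by multiple regression of their responses on the
# loadings, rather than by a simple sum of points per scale.
scores = np.linalg.lstsq(L, X.T, rcond=None)[0].T  # one row per person
```

Unlike a raw sum, the regression weights each item by its loading, so strongly factor-related items count more, which is the kind of gain the abstract attributes to modeling scores instead of summing points.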