849 results for Integration of methods
Abstract:
In a sequential clinical trial, accrual of data on patients often continues after the stopping criterion for the study has been met. This is termed “overrunning.” Overrunning occurs mainly when the primary response from each patient is measured after some extended observation period. The objective of this article is to compare two methods of allowing for overrunning. In particular, simulation studies are reported that assess the two procedures in terms of how well they maintain the intended type I error rate. The effect on power resulting from the incorporation of “overrunning data” using the two procedures is evaluated.
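A minimal simulation sketch, in the spirit of the comparison described above, of how overrunning can distort the type I error rate when overrun observations are naively pooled into a re-computed final test after early stopping. The two-look Pocock boundary, sample sizes and overrun size are invented for illustration; neither of the article's two adjustment procedures is implemented here.

```python
# Two-look sequential trial under H0, with "overrun" observations naively
# pooled into a re-computed final z-test after early stopping. Boundary,
# sample sizes and overrun size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_sims, n1, n2, n_over = 50_000, 50, 50, 20
c = 2.178  # two-look Pocock critical value for two-sided alpha = 0.05

rejections = 0
for _ in range(n_sims):
    x = rng.normal(0.0, 1.0, n1 + n2 + n_over)  # responses under H0 (mean 0)
    z1 = x[:n1].sum() / np.sqrt(n1)             # interim z-statistic
    if abs(z1) >= c:
        n_tot = n1 + n_over  # stopped early: overrun data arrive and are pooled
    else:
        n_tot = n1 + n2      # continue to the planned final analysis
    z_final = x[:n_tot].sum() / np.sqrt(n_tot)
    rejections += abs(z_final) >= c

print(f"empirical type I error: {rejections / n_sims:.4f} (nominal 0.05)")
```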
Abstract:
Background: Meta-analyses based on individual patient data (IPD) are regarded as the gold standard for systematic reviews. However, the methods used for analysing and presenting results from IPD meta-analyses have received little discussion. Methods: We review 44 IPD meta-analyses published during the years 1999–2001. We summarize whether they obtained all the data they sought, what types of approaches were used in the analysis, including assumptions of common or random effects, and how they examined the effects of covariates. Results: Twenty-four out of 44 analyses focused on time-to-event outcomes, and most analyses (28) estimated treatment effects within each trial and then combined the results assuming a common treatment effect across trials. Three analyses failed to stratify by trial, analysing the data as if they came from a single mega-trial. Only nine analyses used random effects methods. Covariate-treatment interactions were generally investigated by subgrouping patients. Seven of the meta-analyses included data from fewer than 80% of the randomized patients sought, but did not address the resulting potential biases. Conclusions: Although IPD meta-analyses have many advantages in assessing the effects of health care, there are several aspects that could be further developed to make fuller use of the potential of these time-consuming projects. In particular, IPD could be used to investigate more fully the influence of covariates on heterogeneity of treatment effects, both within and between trials. The impact of heterogeneity and the use of random effects are seldom discussed. There is thus considerable scope for enhancing the methods of analysis and presentation of IPD meta-analysis.
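A minimal sketch of the two-stage approach most of the reviewed IPD meta-analyses used: estimate a treatment effect within each trial, then combine with fixed-effect (common-effect) inverse-variance weighting. The per-trial estimates and standard errors below are invented for illustration.

```python
# Fixed-effect inverse-variance pooling of per-trial effect estimates.
import numpy as np

effects = np.array([-0.25, -0.10, -0.40])  # per-trial log hazard ratios (invented)
ses = np.array([0.12, 0.20, 0.15])         # per-trial standard errors (invented)

w = 1.0 / ses**2                           # inverse-variance weights
pooled = np.sum(w * effects) / np.sum(w)   # common-effect estimate
se_pooled = 1.0 / np.sqrt(np.sum(w))
ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

print(f"pooled effect {pooled:.3f}, 95% CI ({ci[0]:.3f}, {ci[1]:.3f})")
```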
Abstract:
This paper considers methods for testing for superiority or non-inferiority in active-control trials with binary data, when the relative treatment effect is expressed as an odds ratio. Three asymptotic tests for the log-odds ratio based on the unconditional binary likelihood are presented, namely the likelihood ratio, Wald and score tests. All three tests can be implemented straightforwardly in standard statistical software packages, as can the corresponding confidence intervals. Simulations indicate that the three alternatives are similar in terms of the Type I error, with values close to the nominal level. However, when the non-inferiority margin becomes large, the score test slightly exceeds the nominal level. In general, the highest power is obtained from the score test, although all three tests are similar and the observed differences in power are not of practical importance.
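A hedged sketch of the simplest of the three tests, the Wald test of the log-odds ratio against a non-inferiority margin; the counts and margin are invented, and the likelihood ratio and score tests are not reproduced here.

```python
# Wald test of the log-odds ratio against a non-inferiority margin.
import math

def wald_noninferiority(x_t, n_t, x_c, n_c, log_or_margin):
    """One-sided Wald test of H0: log-OR <= -log_or_margin (treatment inferior)."""
    # sample log-odds ratio and its standard error (no continuity correction)
    log_or = math.log(x_t * (n_c - x_c) / (x_c * (n_t - x_t)))
    se = math.sqrt(1/x_t + 1/(n_t - x_t) + 1/x_c + 1/(n_c - x_c))
    z = (log_or + log_or_margin) / se
    return log_or, z

# invented counts: 78/100 responders on treatment, 80/100 on active control
log_or, z = wald_noninferiority(x_t=78, n_t=100, x_c=80, n_c=100,
                                log_or_margin=math.log(2.0))
print(f"log-OR = {log_or:.3f}, z = {z:.2f}; non-inferior at one-sided 2.5% if z > 1.96")
```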
Abstract:
Sensitive methods that are currently used to monitor proteolysis by plasmin in milk are limited due to their high cost and lack of standardisation for quality assurance in the various dairy laboratories. In this study, four methods, trinitrobenzene sulphonic acid (TNBS), reverse phase high pressure liquid chromatography (RP-HPLC), gel electrophoresis and fluorescamine, were selected to assess their suitability for the detection of proteolysis in milk by plasmin. Commercial UHT milk was incubated with plasmin at 37 °C for one week. Clarification was achieved by isoelectric precipitation (pH 4·6 soluble extracts) or 6% (final concentration) trichloroacetic acid (TCA). The pH 4·6 and 6% TCA soluble extracts of milk showed high correlations (R² > 0·93) by the TNBS, fluorescamine and RP-HPLC methods, confirming increased proteolysis during storage. For gel electrophoresis, extensive proteolysis was confirmed by the disappearance of α- and β-casein bands on the seventh day, which was more evident at the highest plasmin concentration. This was accompanied by the appearance of α- and β-casein proteolysis products with higher intensities than on previous days, implying that more products had been formed as a result of casein breakdown. The fluorescamine method had a lower detection limit compared with the other methods, whereas gel electrophoresis was the best qualitative method for monitoring β-casein proteolysis products. Although HPLC was the most sensitive, the TNBS method is recommended for use in routine laboratory analysis on the basis of its accuracy, reliability and simplicity.
Abstract:
There are many published methods available for creating keyphrases for documents. Previous work in the field has shown that in a significant proportion of cases author-selected keyphrases are not appropriate for the document they accompany. Automated methods are therefore needed to improve the use of keyphrases. Keyphrases are often not updated when the focus of a paper changes, or are more classificatory than explanatory. The published methods are all evaluated using different corpora, typically one relevant to their field of study. This not only makes it difficult to incorporate the useful elements of algorithms in future work but also makes comparing the results of each method inefficient and ineffective. This paper describes the work undertaken to compare five methods across a common baseline of six corpora. The methods chosen were term frequency, inverse document frequency, the C-Value, the NC-Value, and a synonym-based approach. These methods were compared to evaluate performance and quality of results, and to provide a future benchmark. It is shown that, with the comparison metric used for this study, term frequency and inverse document frequency were the best-performing algorithms, followed by the synonym-based approach. Further work in the area is required to determine an appropriate (or more appropriate) comparison metric.
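An illustrative sketch of the two methods that performed best in the comparison, term frequency and inverse document frequency, combined as a TF-IDF score over candidate terms. The tiny corpus is invented; the C-Value, NC-Value and synonym-based methods are not reproduced here.

```python
# TF-IDF scoring of candidate keyphrase terms over a toy corpus.
import math
from collections import Counter

corpus = [
    "keyphrase extraction from scientific documents",
    "automatic keyphrase extraction methods",
    "document classification with term weighting",
]

def tfidf_scores(doc_idx):
    docs = [d.split() for d in corpus]
    tf = Counter(docs[doc_idx])
    scores = {}
    for term, count in tf.items():
        df = sum(term in d for d in docs)   # document frequency of the term
        idf = math.log(len(docs) / df)      # inverse document frequency
        scores[term] = (count / len(docs[doc_idx])) * idf
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(tfidf_scores(0)[:3])  # top-scoring candidate terms for document 0
```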
Abstract:
To accelerate drug development, trials that use adaptive seamless designs, such as phase II/III clinical trials, have recently been proposed. Phase II/III clinical trials combine traditional phases II and III into a single trial that is conducted in two stages. Using stage 1 data, an interim analysis is performed to answer phase II objectives and, after collection of stage 2 data, a final confirmatory analysis is performed to answer phase III objectives. In this paper we consider phase II/III clinical trials in which, at stage 1, several experimental treatments are compared to a control and the apparently most effective experimental treatment is selected to continue to stage 2. Although these trials are attractive because the confirmatory analysis includes phase II data from stage 1, the inference methods used for trials that compare a single experimental treatment to a control without an interim analysis are no longer appropriate. Several methods for analysing phase II/III clinical trials have been developed. These methods are recent, and there is as yet little literature comparing their characteristics extensively. In this paper we review and compare the various methods available for constructing confidence intervals after phase II/III clinical trials.
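A hypothetical simulation of the core difficulty the reviewed methods address: selecting the apparently best of several experimental treatments at stage 1 biases the naive combined-stage estimate, so a naive confidence interval no longer attains nominal coverage. All numbers are illustrative; none of the corrected interval constructions from the literature is coded here.

```python
# Coverage of a naive 95% CI after best-of-k selection at stage 1, under H0.
import numpy as np

rng = np.random.default_rng(1)
n_sims, k, n1, n2 = 20_000, 3, 100, 200
true_effect = 0.0  # all experimental treatments equal to control

miss = 0
for _ in range(n_sims):
    stage1 = rng.normal(true_effect, 1.0, (k, n1)).mean(axis=1)  # stage-1 estimates
    best = int(np.argmax(stage1))                # select the apparent winner
    stage2 = rng.normal(true_effect, 1.0, n2).mean()
    n = n1 + n2
    est = (n1 * stage1[best] + n2 * stage2) / n  # naive pooled estimate
    half = 1.96 / np.sqrt(n)                     # naive 95% half-width
    miss += not (est - half <= true_effect <= est + half)

print(f"naive 95% CI non-coverage: {miss / n_sims:.3f} (nominal 0.050)")
```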
Abstract:
In an adaptive seamless phase II/III clinical trial, interim analysis data are used for treatment selection, enabling resources to be focused on comparison of the more effective treatment(s) with a control. In this paper, we compare two methods recently proposed to enable use of short-term endpoint data for decision-making at the interim analysis. The comparison focuses on the power and the probability of correctly identifying the most promising treatment. We show that the choice of method depends on how well short-term data predict the best treatment, which may be measured by the correlation between treatment effects on short- and long-term endpoints.
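A sketch of the quantity the comparison turns on: the probability that the treatment selected on short-term interim data is also truly best on the long-term endpoint, as a function of the correlation between treatment effects on the two endpoints. Effect sizes, variances and the selection rule are invented.

```python
# Probability of correct selection vs. short/long-term effect correlation.
import numpy as np

rng = np.random.default_rng(2)
n_sims, k = 20_000, 3

for rho in (0.2, 0.5, 0.9):
    cov = 0.04 * np.array([[1.0, rho], [rho, 1.0]])  # (short, long) effect covariance
    # true (short-term, long-term) effect pairs for each treatment, per trial
    true = rng.multivariate_normal([0.2, 0.2], cov, size=(n_sims, k))
    obs_short = true[:, :, 0] + rng.normal(0.0, 0.1, (n_sims, k))  # interim estimates
    correct = np.mean(np.argmax(obs_short, axis=1) == np.argmax(true[:, :, 1], axis=1))
    print(f"rho = {rho:.1f}: P(select truly best long-term treatment) = {correct:.3f}")
```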
Abstract:
As part of an international intercomparison project, a set of single column models (SCMs) and cloud-resolving models (CRMs) are run under the weak temperature gradient (WTG) method and the damped gravity wave (DGW) method. For each model, the implementation of the WTG or DGW method involves a simulated column which is coupled to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. The simulated column has the same surface conditions as the reference state and is initialized with profiles from the reference state. We performed a systematic comparison of the behavior of different models under a consistent implementation of the WTG and DGW methods, and a systematic comparison of the WTG and DGW methods in models with different physics and numerics. CRMs and SCMs produce a variety of behaviors under both the WTG and DGW methods. Some of the models reproduce the reference state, while others sustain a large-scale circulation which results in precipitation substantially lower or higher than in the reference state. CRMs show a fairly linear relationship between precipitation and circulation strength. SCMs display a wider range of behaviors than CRMs, and some SCMs under the WTG method produce zero precipitation. Within an individual SCM, a DGW simulation and the corresponding WTG simulation can produce circulations of opposite sign. When initialized with a dry troposphere, DGW simulations always result in a precipitating equilibrium state. The greatest sensitivity to initial moisture conditions appears as multiple stable equilibria in some WTG simulations: a dry equilibrium state when initialized dry, or a precipitating equilibrium state when initialized moist. Multiple equilibria are seen in more WTG simulations at higher SST. In some models, the existence of multiple equilibria is sensitive to parameters in the WTG calculation.
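A minimal sketch, under common assumptions, of the WTG coupling described above: the large-scale vertical velocity is diagnosed by relaxing the simulated potential temperature profile toward the reference profile over a fixed timescale, divided by the static stability. Profiles, timescale and grid are invented placeholders and may differ from any given model's implementation.

```python
# Diagnosing the WTG large-scale vertical velocity from a temperature anomaly.
import numpy as np

tau = 3.0 * 3600.0                      # relaxation timescale, s (assumed)
z = np.linspace(500.0, 12000.0, 15)     # heights, m
theta_ref = 300.0 + 0.004 * z           # reference potential temperature, K
theta_sim = theta_ref + np.linspace(0.5, -0.5, 15)  # simulated column, K

dtheta_dz = np.gradient(theta_ref, z)   # static stability of the reference state, K/m
w_wtg = (theta_sim - theta_ref) / (tau * dtheta_dz)  # m/s, positive = ascent

# w_wtg is then applied to the column as large-scale vertical advection of
# temperature and moisture, closing the two-way convection-circulation coupling
print(np.round(w_wtg, 5))
```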
Abstract:
Intracellular reactive oxygen species (ROS) production is essential to normal cell function. However, excessive ROS production causes oxidative damage and cell death. Many pharmacological compounds exert their effects on cell cycle progression by changing the intracellular redox state, and in many cases cause oxidative damage leading to drug cytotoxicity. Appropriate measurement of intracellular ROS levels during cell cycle progression is therefore crucial in understanding the redox regulation of cell function and drug toxicity, and for the development of new drugs. However, due to the extremely short half-life of ROS, measuring changes in intracellular ROS levels during a particular phase of the cell cycle for drug intervention can be challenging. In this article, we provide updated information on the rationale, applications, advantages and limitations of common methods for screening drug effects on intracellular ROS production linked to cell cycle studies. Our aim is to facilitate biomedical scientists and researchers in the pharmaceutical industry in choosing or developing specific experimental regimens to suit their research needs.
Abstract:
As part of an international intercomparison project, the weak temperature gradient (WTG) and damped gravity wave (DGW) methods are used to parameterize large-scale dynamics in a set of cloud-resolving models (CRMs) and single column models (SCMs). The WTG or DGW method is implemented using a configuration that couples a model to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. We investigated the sensitivity of each model to changes in SST, given a fixed reference state. We performed a systematic comparison of the WTG and DGW methods across models, and of the behavior of the different models under each method. The sensitivity to SST depends on both the large-scale parameterization method and the choice of cloud model. In general, SCMs display a wider range of behaviors than CRMs. All CRMs using either the WTG or DGW method show an increase of precipitation with SST, while SCMs show sensitivities which are not always monotonic. CRMs using either method show a similar relationship between mean precipitation rate and column relative humidity, while SCMs exhibit a much wider range of behaviors. DGW simulations produce large-scale velocity profiles which are smoother and less top-heavy than those produced by the WTG simulations. These large-scale parameterization methods provide a useful tool to identify the impact of parameterization differences on model behavior in the presence of two-way feedback between convection and the large-scale circulation.
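For contrast with the WTG sketch above, a hedged sketch of the DGW coupling: the large-scale pressure velocity is obtained by solving a linearized, damped gravity-wave equation forced by the temperature anomaly relative to the reference state, with the velocity vanishing at the surface and model top. The wavenumber, damping rate, anomaly profile and discretization are invented placeholders, not any model's exact formulation.

```python
# Solving a damped gravity-wave equation for the large-scale pressure velocity:
# eps * d2(omega)/dp2 = (k^2 * Rd / p) * T_anom, omega = 0 at both boundaries.
import numpy as np

k = 1e-6                    # horizontal wavenumber, 1/m (assumed)
eps = 1.0 / (24 * 3600.0)   # damping rate, 1/s (assumed)
Rd = 287.0                  # gas constant for dry air, J/(kg K)
p = np.linspace(1000e2, 100e2, 40)  # pressure levels, Pa
T_anom = np.sin(np.pi * (p[0] - p) / (p[0] - p[-1]))  # 1 K warm anomaly (invented)

dp = p[1] - p[0]
n = len(p)
A = np.zeros((n, n))
b = (k**2 * Rd / p) * T_anom * dp**2 / eps
A[0, 0] = A[-1, -1] = 1.0
b[0] = b[-1] = 0.0          # omega = 0 at surface and model top
for i in range(1, n - 1):   # centered second difference on interior points
    A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0

omega = np.linalg.solve(A, b)  # Pa/s; negative = ascent, applied as advection
print(f"peak omega: {omega.min():.4f} Pa/s (ascent under the warm anomaly)")
```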
Abstract:
The study reported here compared coefficients of apparent digestibility (CAD) obtained by total collection (TC) and by the chromic oxide (Cr2O3) indicator method, with chromium determined by visible absorption spectrometry (VIS) and flame atomic absorption spectrophotometry (FAAS). These quantification methods were also investigated to assess their precision. Nineteen adult dogs housed in individual digestibility cages were fed three commercial diets, each test including five or seven animals. The experiment was carried out in a completely randomized design with three among-subjects factors (foods) and three within-subject factors (methods), the dogs being the experimental unit. Estimated CAD for all dietary components in the three diets by either chromium quantification method did not differ and showed fair agreement with TC results (P > 0.05). Coefficients of chromic oxide recovery in feces were 1.06 +/- 0.044 and 1.01 +/- 0.045 for VIS and FAAS, respectively, values significantly different (P < 0.05), indicating better recovery for FAAS. Significant differences (P < 0.05) were found in the mean concentration of Cr2O3 in diets and feces (17 +/- 0.6 and 19 +/- 0.8 g/kg for VIS and FAAS, respectively). The coefficient of variation was 1.69% for FAAS and 1.77% for VIS, values which did not differ significantly (P > 0.05). Pearson's correlation coefficient (r = 0.99; P < 0.01) indicated a positive and significant association between the results for VIS and FAAS, indicating reasonable precision and agreement of chromic oxide contents determined by these methods. Therefore, FAAS and VIS can be used successfully to determine Cr2O3 in dog foods and feces.
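A worked sketch of the indicator calculation underlying the chromic oxide method: the coefficient of apparent digestibility is inferred from the ratio of marker concentrations in diet and feces, avoiding total collection. The numbers are invented for illustration.

```python
# Apparent digestibility by the indicator (marker) method.
def cad_marker(marker_diet, marker_feces, nutrient_diet, nutrient_feces):
    """Coefficient of apparent digestibility from marker and nutrient concentrations."""
    return 1.0 - (marker_diet / marker_feces) * (nutrient_feces / nutrient_diet)

# e.g. Cr2O3 at 17 g/kg in diet concentrated to 68 g/kg in feces,
# crude protein at 250 g/kg in diet and 300 g/kg in feces (all invented)
cad = cad_marker(marker_diet=17.0, marker_feces=68.0,
                 nutrient_diet=250.0, nutrient_feces=300.0)
print(f"CAD = {cad:.3f}")  # 1 - 0.25 * 1.2 = 0.700
```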
Abstract:
This study analyzed degrees of demineralization in bovine enamel using synchrotron microcomputed tomography (SMCT) and hardness measurements (Knoop hardness number, KHN). For 5 days, 40 bovine enamel blocks were individually subjected to a pH cycling model and treatment with fluoride dentifrices (placebo, 275, 550 and 1,100 µg F/g) diluted in deionized water twice a day. Surface hardness and cross-sectional profiles of hardness and mineral concentration (by SMCT) were determined. Integrated hardness (KHN × µm) for sound and demineralized specimens was calculated and subtracted to give the integrated loss of hardness (ΔKHN) for the lesions. Increasing fluoride concentration in the dentifrices led to higher surface hardness after pH cycling, higher mineral concentration (g HAp cm⁻³), and lower ΔKHN (p < 0.05). From the present results, it may be concluded that hardness measurements revealed demineralization in all groups, which was lower in groups treated with the dentifrices with a higher F concentration. SMCT and hardness measurements gave similar results in areas with higher demineralization, but diverged in areas with lower demineralization.
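A short sketch of the integrated-hardness calculation described above: the cross-sectional hardness profile is integrated over depth (KHN × µm) for sound and demineralized specimens, and the difference gives ΔKHN. The profile values are invented.

```python
# Integrated loss of hardness (Delta KHN) from hardness-depth profiles.
import numpy as np

def integrate(y, x):
    """Trapezoidal rule: area under a hardness-depth profile (KHN x um)."""
    return float(np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(x)))

depth = np.arange(10.0, 110.0, 10.0)               # um below the surface
khn_sound = np.full_like(depth, 330.0)             # sound enamel profile (flat)
khn_demin = 330.0 - 200.0 * np.exp(-depth / 30.0)  # softened near the surface

delta_khn = integrate(khn_sound, depth) - integrate(khn_demin, depth)
print(f"Delta KHN = {delta_khn:.0f} KHN x um")
```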
Abstract:
Two high-performance liquid chromatographic methods for determination of residual monomer in dental acrylic resins are described. Monomers were detected by their UV absorbance at 230 nm, on a Nucleosil® C-18 (5 µm particle size, 100 Å pore size, 15 × 0.46 cm i.d.) column. The separation was performed using acetonitrile-water (55:45 v/v) containing 0.01% triethylamine (TEA) for methyl methacrylate and butyl methacrylate, and acetonitrile-water (60:40 v/v) containing 0.01% TEA for isobutyl methacrylate and 1,6-hexanediol dimethacrylate as mobile phases, at a flow rate of 0.8 mL/min. Good linear relationships were obtained in the concentration ranges 5.0-80.0 µg/mL for methyl methacrylate, 10.0-160.0 µg/mL for butyl methacrylate, 50.0-500.0 µg/mL for isobutyl methacrylate and 2.5-180.0 µg/mL for 1,6-hexanediol dimethacrylate. Adequate intra- and inter-day precision and accuracy were observed during the validation process. An extraction procedure to remove residual monomer from the acrylic resins was also established. Residual monomer was obtained from broken specimens of acrylic disks using methanol as extraction solvent for 2 h in an ice-bath. The developed methods and the extraction procedure were applied to dental acrylic resins, tested with or without post-polymerization treatments, and proved to be accurate and precise for the determination of the residual monomer content of the materials evaluated.
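A hedged sketch of the quantification step behind such HPLC methods: a linear calibration curve fitted to standards within the validated range is inverted to convert a peak area into a residual-monomer concentration. The standard concentrations and peak areas below are invented.

```python
# Linear calibration and back-calculation of a residual-monomer concentration.
import numpy as np

conc = np.array([5.0, 10.0, 20.0, 40.0, 80.0])    # ug/mL standards (MMA-like range)
area = np.array([12.1, 24.5, 48.7, 98.0, 196.3])  # detector peak areas (invented)

slope, intercept = np.polyfit(conc, area, 1)      # least-squares calibration line
r2 = np.corrcoef(conc, area)[0, 1] ** 2

sample_area = 66.0
sample_conc = (sample_area - intercept) / slope   # invert the calibration
print(f"R^2 = {r2:.4f}, residual monomer = {sample_conc:.1f} ug/mL")
```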
Abstract:
Oxacillin is the main drug of choice for the treatment of S. aureus infections. However, S. aureus resistance to oxacillin has become a major problem in recent decades. The study aimed to assess the rates of oxacillin resistance in S. aureus samples obtained at the Botucatu Medical School Hospital, UNESP, and to compare phenotypic techniques for the detection of MRSA against the gold standard method (mecA gene detection) in these samples. A total of 102 samples, previously isolated between 2002 and 2006 and kept at the Culture Collection of the Department of Microbiology and Immunology, Botucatu Biosciences Institute, UNESP, were included. Oxacillin resistance was assessed by oxacillin and cefoxitin disk diffusion and agar dilution tests, screening tests using Mueller-Hinton agar with 6 µg/mL of oxacillin and 4% NaCl, the E-test, and mecA gene detection. Of the samples analyzed, 46 (45.1%) were mecA-positive. Oxacillin disk sensitivity and specificity were 86.9% and 91.1%, respectively. Cefoxitin disk sensitivity and specificity were 91.3% and 91.1%, respectively. The screening test with the cefoxitin disk showed almost the same sensitivity (91.3%) and specificity (91.1%). With E-test strips, sensitivity was higher (97.8%) and specificity was comparable to that found with the other methods (91.1%). Ninety-three percent of the samples produced beta-lactamase, and five of these were mecA-negative. There was a gradual increase in the number of oxacillin-resistant S. aureus samples between 2002 and 2004. However, from 2004 to 2006, the proportion of resistant samples dropped from 55% MRSA in 2004 to 45% in 2005 and 34.6% in 2006. The data obtained reveal that, among the phenotypic methods, the E-test yielded the best results, with higher sensitivity than the other methods. The decreased resistance rate observed over the most recent years may be explained by the rational use of antimicrobial agents combined with good practices in the control of hospital infection, or may be related to the diminished use of oxacillin as a treatment option.
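A small sketch of how the reported sensitivities and specificities are computed against the mecA gold standard. The 2×2 counts below are reconstructed to match the E-test percentages quoted above (46 mecA-positive and 56 mecA-negative samples) and are illustrative only.

```python
# Sensitivity and specificity of a phenotypic test vs. mecA gene detection.
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# e.g. a test flagging 45 of 46 mecA-positive and 51 of 56 mecA-negative samples
sens, spec = sens_spec(tp=45, fn=1, tn=51, fp=5)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")  # 97.8%, 91.1%
```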