989 results for Trials (Products liability)
Abstract:
Liver metastases have long been known to indicate an unfavourable disease course in breast cancer (BC). However, a small subset of patients with liver metastases alone who were treated with pre-taxane chemotherapy regimens was reported to have longer survival than patients with metastases in the liver and at other sites. In the present study, we examined the clinical outcome of breast cancer patients with liver metastases alone in the context of two phase III European Organisation for Research and Treatment of Cancer (EORTC) trials, which compared the efficacy of doxorubicin (A) versus paclitaxel (T) (trial 10923) and of AC (doxorubicin plus cyclophosphamide) versus AT (doxorubicin plus paclitaxel) (trial 10961), given as first-line chemotherapy in metastatic BC patients. The median follow-up for the patients with liver metastases was 90.5 months in trial 10923 and 56.6 months in trial 10961. Patients with liver metastases alone comprised 18% of all patients with liver metastases in both the 10923 and 10961 trials. The median survival of patients with liver metastases alone versus liver plus other sites of metastases was 22.7 versus 14.2 months (log-rank test, P=0.002) in trial 10923 and 27.1 versus 16.8 months (log-rank test, P=0.19) in trial 10961. The median time to progression (TTP) for patients with liver metastases alone was also longer than for the liver plus other sites group in both trials: 10.2 versus 8.8 months (log-rank test, P=0.02) in trial 10923 and 8.3 versus 6.7 months (log-rank test, P=0.37) in trial 10961. Most patients with liver metastases alone progressed in the liver again (96% and 60% of patients in trials 10923 and 10961, respectively). Given the high prevalence of breast cancer, the improved detection of liver metastases, the encouraging survival achieved with currently available cytotoxic agents and the fact that a significant proportion of patients with liver metastases alone progress in the liver again, a more aggressive multimodality treatment approach, investigated through prospective clinical trials, seems worth exploring in this specific subset of women.
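For readers who want to reproduce this kind of comparison, the sketch below shows how median survival and a log-rank test can be computed with the Python `lifelines` package. The data are synthetic and chosen only to roughly mimic the reported medians; this is not the EORTC analysis.

```python
# Illustrative sketch (synthetic data): Kaplan-Meier median survival and a
# log-rank comparison between two metastasis groups, using `lifelines`.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)

# Hypothetical survival times (months); medians roughly mimic 22.7 vs 14.2.
liver_only = rng.exponential(scale=22.7 / np.log(2), size=60)
liver_plus = rng.exponential(scale=14.2 / np.log(2), size=270)
events_only = np.ones_like(liver_only)   # assume all deaths observed
events_plus = np.ones_like(liver_plus)

km = KaplanMeierFitter()
km.fit(liver_only, event_observed=events_only, label="liver only")
print("Median OS (liver only):", km.median_survival_time_)

result = logrank_test(liver_only, liver_plus,
                      event_observed_A=events_only,
                      event_observed_B=events_plus)
print("Log-rank p-value:", result.p_value)
```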
Abstract:
OBJECTIVES: Side-effects of standard pain medications can limit their use. Therefore, nonpharmacologic pain relief techniques such as auriculotherapy may play an important role in pain management. Our aim was to conduct a systematic review and meta-analysis of studies evaluating auriculotherapy for pain management. DESIGN: MEDLINE(®), ISI Web of Science, CINAHL, AMED, and the Cochrane Library were searched through December 2008. Randomized trials published in English that compared auriculotherapy to sham, placebo, or standard-of-care controls and measured pain or medication use as outcomes were included. Two reviewers independently assessed trial eligibility and quality and abstracted data to a standardized form. Standardized mean differences (SMD) were calculated for studies using a pain score or analgesic requirement as a primary outcome. RESULTS: Seventeen studies met the inclusion criteria (8 perioperative, 4 acute, and 5 chronic pain). Auriculotherapy was superior to controls in studies evaluating pain intensity (SMD, 1.56 [95% confidence interval (CI): 0.85, 2.26]; 8 studies). For perioperative pain, auriculotherapy reduced analgesic use (SMD, 0.54 [95% CI: 0.30, 0.77]; 5 studies). For acute and chronic pain, auriculotherapy reduced pain intensity (SMD for acute pain, 1.35 [95% CI: 0.08, 2.64], 2 studies; SMD for chronic pain, 1.84 [95% CI: 0.60, 3.07], 5 studies). Removal of poor-quality studies did not alter the conclusions. Significant heterogeneity existed among studies of acute and chronic pain, but not perioperative pain. CONCLUSIONS: Auriculotherapy may be effective for the treatment of a variety of types of pain, especially postoperative pain. However, a more accurate estimate of the effect will require further large, well-designed trials.
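As a rough illustration of the kind of pooling described in this abstract, the sketch below computes Hedges' g per study and combines the estimates with a DerSimonian-Laird random-effects model. All means, standard deviations, and sample sizes are invented; this is not the authors' analysis.

```python
# Illustrative sketch: standardized mean differences (Hedges' g) pooled with
# a DerSimonian-Laird random-effects model. Effect inputs are invented.
import numpy as np

def hedges_g(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardized mean difference with small-sample correction."""
    sd_pooled = np.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sd_pooled
    j = 1 - 3 / (4 * (n_t + n_c) - 9)          # Hedges' correction factor
    g = j * d
    var_g = (n_t + n_c) / (n_t * n_c) + g**2 / (2 * (n_t + n_c))
    return g, var_g

def pool_random_effects(g, var):
    """DerSimonian-Laird pooled estimate and 95% CI."""
    g, var = np.asarray(g), np.asarray(var)
    w = 1 / var
    q = np.sum(w * (g - np.sum(w * g) / np.sum(w))**2)
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(g) - 1)) / c)    # between-study variance
    w_star = 1 / (var + tau2)
    pooled = np.sum(w_star * g) / np.sum(w_star)
    se = np.sqrt(1 / np.sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical per-study pain scores (control minus treatment direction).
effects = [hedges_g(5.0, 3.1, 1.3, 1.2, 30, 30),
           hedges_g(4.1, 2.8, 1.4, 1.5, 26, 25),
           hedges_g(4.9, 3.5, 1.2, 1.1, 38, 40)]
g_vals, variances = zip(*effects)
pooled, ci = pool_random_effects(g_vals, variances)
print(f"Pooled SMD = {pooled:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```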
Abstract:
Long term, high quality estimates of burned area are needed for improving both prognostic and diagnostic fire emissions models and for assessing feedbacks between fire and the climate system. We developed global, monthly burned area estimates aggregated to 0.5° spatial resolution for the time period July 1996 through mid-2009 using four satellite data sets. From 2001 to 2009, our primary data source was 500-m burned area maps produced using Moderate Resolution Imaging Spectroradiometer (MODIS) surface reflectance imagery; more than 90% of the global area burned during this time period was mapped in this fashion. During times when the 500-m MODIS data were not available, we used a combination of local regression and regional regression trees developed over periods when burned area and Terra MODIS active fire data were available to indirectly estimate burned area. Cross-calibration with fire observations from the Tropical Rainfall Measuring Mission (TRMM) Visible and Infrared Scanner (VIRS) and the Along-Track Scanning Radiometer (ATSR) allowed the data set to be extended prior to the MODIS era. With our data set we estimated that the global annual area burned for the years 1997–2008 varied between 330 and 431 Mha, with the maximum occurring in 1998. We compared our data set to the recent GFED2, L3JRC, GLOBCARBON, and MODIS MCD45A1 global burned area products and found substantial differences in many regions. Lastly, we assessed the interannual variability and long-term trends in global burned area over the past 13 years. This burned area time series serves as the basis for the third version of the Global Fire Emissions Database (GFED3) estimates of trace gas and aerosol emissions.
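A minimal sketch of the gap-filling idea described above, under the assumption that a regression tree relating active fire counts (plus a simple cover predictor) to burned area can be trained where both observations exist and then applied where only fire counts are available. The data and predictors here are synthetic and hypothetical, not the GFED3 inputs.

```python
# Illustrative sketch: estimating burned area from active fire counts with a
# regression tree, trained on months where both quantities are observed.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)

# Synthetic training set: per-grid-cell monthly fire counts and fractional
# tree cover as predictors, burned area (ha) as the target.
n = 500
fire_counts = rng.poisson(lam=20, size=n).astype(float)
tree_cover = rng.uniform(0, 80, size=n)
burned_area = np.maximum(
    0.0, 12.0 * fire_counts * (1 - tree_cover / 120) + rng.normal(0, 20, n))

X = np.column_stack([fire_counts, tree_cover])
model = DecisionTreeRegressor(max_depth=6, min_samples_leaf=10).fit(X, burned_area)

# Apply to grid cells in a month lacking direct burned area mapping.
X_new = np.array([[35.0, 20.0], [5.0, 60.0]])
print("Estimated burned area (ha):", model.predict(X_new))
```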
Abstract:
The objective of spatial downscaling strategies is to increase the information content of coarse datasets at smaller scales. In the case of quantitative precipitation estimation (QPE) for hydrological applications, the goal is to close the scale gap between the spatial resolution of coarse datasets (e.g., gridded satellite precipitation products at resolution L × L) and the high resolution (l × l; L ≫ l) necessary to capture the spatial features that determine the spatial variability of water flows and water stores in the landscape. In essence, the downscaling process consists of weaving subgrid-scale heterogeneity over a desired range of wavelengths into the original field. The defining question is: which properties, statistical and otherwise, of the target field (the known observable at the desired spatial resolution) should be matched, with the caveat that downscaling methods be as general as possible and therefore ideally without case-specific constraints and/or calibration requirements? Here, attention is focused on two simple fractal downscaling methods, using iterated function systems (IFS) and fractal Brownian surfaces (FBS), that meet this requirement. The two methods were applied to spatially disaggregate 27 summertime convective storms in the central United States during 2007 at three consecutive times (1800, 2100, and 0000 UTC, thus 81 fields overall) from the Tropical Rainfall Measuring Mission (TRMM) version 6 (V6) 3B42 precipitation product (~25-km grid spacing) to the same resolution as the NCEP stage IV products (~4-km grid spacing). Results from bilinear interpolation are used as the control. A fundamental distinction between IFS and FBS is that the latter implies a distribution of downscaled fields and thus an ensemble solution, whereas the former provides a single solution. The downscaling effectiveness is assessed using fractal measures (the spectral exponent β, fractal dimension D, Hurst coefficient H, and roughness amplitude R) and traditional operational skill scores [false alarm rate (FR), probability of detection (PD), threat score (TS), and Heidke skill score (HSS)], as well as bias and the root-mean-square error (RMSE). The results show that both IFS and FBS fractal interpolation perform well with regard to operational skill scores, and they meet the additional requirement of generating structurally consistent fields. Furthermore, confidence intervals can be directly generated from the FBS ensemble. The results were used to diagnose errors relevant for hydrometeorological applications, in particular a spatial displacement with a characteristic length of at least 50 km (2500 km²) in the location of peak rainfall intensities for the cases studied. © 2010 American Meteorological Society.
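As an illustration of the FBS side of the method, the sketch below generates an ensemble of fractional (fractal) Brownian surfaces by Fourier spectral synthesis, the kind of stochastic field that can supply subgrid-scale variability in a fractal downscaling scheme. The grid size, Hurst coefficient, and ensemble size are arbitrary assumptions, and this is not the authors' implementation.

```python
# Illustrative sketch: an ensemble of fractional Brownian surfaces generated
# by Fourier filtering, with the radial spectrum shaped as k**-(2H + 2),
# the standard spectral-exponent/Hurst relation for a 2-D surface.
import numpy as np

def fbm_surface(n, hurst, rng):
    """Fractional Brownian surface on an n x n grid via spectral synthesis."""
    kx = np.fft.fftfreq(n)
    ky = np.fft.fftfreq(n)
    k = np.sqrt(kx[:, None]**2 + ky[None, :]**2)
    k[0, 0] = np.inf                          # suppress the k = 0 component
    amplitude = k ** (-(2 * hurst + 2) / 2)   # sqrt of the target spectrum
    phase = rng.uniform(0, 2 * np.pi, size=(n, n))
    field = np.real(np.fft.ifft2(amplitude * np.exp(1j * phase)))
    return (field - field.mean()) / field.std()

rng = np.random.default_rng(42)
ensemble = [fbm_surface(128, hurst=0.7, rng=rng) for _ in range(10)]

# The ensemble spread at a single pixel illustrates the distribution of
# downscaled values an FBS-based scheme can provide.
print("Value at pixel (10, 10) across members:",
      [round(float(m[10, 10]), 2) for m in ensemble])
```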
Abstract:
BACKGROUND: A Royal Statistical Society Working Party recently recommended that "Greater use should be made of numerical, as opposed to verbal, descriptions of risk" in first-in-man clinical trials. This echoed the view of many clinicians and psychologists about risk communication. As the clinical trial industry expands rapidly across the globe, it is important to understand risk communication in Asian countries. METHODS: We conducted a cognitive experiment about participation in a hypothetical clinical trial of a pain relief medication, followed by a survey, in cancer and arthritis patients in Singapore. In part 1 of the experiment, the patients received information about the risk of side effects in one of three formats (frequency, percentage and verbal descriptor) and in one of two sequences (from least to most severe or from most to least severe), and were asked about their willingness to participate. In part 2, the patients received information about the risk in all three formats, in the same sequence, and were again asked about their willingness to participate. A survey of preference for risk presentation methods and usage of verbal descriptors immediately followed. RESULTS: Willingness to participate and the likelihood of changing one's decision were not affected by the risk presentation methods. Most patients indicated a preference for the frequency format, but patients with primary school or no formal education were indifferent. While the patients used the verbal descriptors "very common", "common" and "very rare" in ways similar to the European Commission's guidelines, their usage of the descriptors "uncommon" and "rare" was substantially different from the EU's. CONCLUSION: In this sample of Asian cancer and arthritis patients, risk presentation format had no impact on willingness to participate in a clinical trial. However, there was a clear preference for the frequency format. The lay use of verbal descriptors was substantially different from the EU's.
Abstract:
BACKGROUND: Historically, only partial assessments of data quality have been performed in clinical trials, for which the most common method of measuring database error rates has been to compare the case report form (CRF) to database entries and count discrepancies. Importantly, errors arising from medical record abstraction and transcription are rarely evaluated as part of such quality assessments. Electronic Data Capture (EDC) technology has had a further impact, as the paper CRFs typically leveraged for quality measurement are not used in EDC processes. METHODS AND PRINCIPAL FINDINGS: The National Institute on Drug Abuse Treatment Clinical Trials Network has developed, implemented, and evaluated a methodology for holistically assessing data quality in EDC trials. We characterize the average source-to-database error rate (14.3 errors per 10,000 fields) for the first year of use of the new evaluation method. This error rate was significantly lower than the average of published error rates for source-to-database audits, and was similar to CRF-to-database error rates reported in the published literature. We attribute this largely to an absence of medical record abstraction on the trials we examined, and to an outpatient setting characterized by less acute patient conditions. CONCLUSIONS: Historically, medical record abstraction has been the most significant source of error, by an order of magnitude, and it should be measured and managed during the course of clinical trials. Source-to-database error rates are highly dependent on the amount of structured data collection in the clinical setting and on the complexity of the medical record, dependencies that should be considered when developing data quality benchmarks.
Abstract:
BACKGROUND: Dropouts and missing data are nearly ubiquitous in obesity randomized controlled trials, threatening the validity and generalizability of conclusions. Herein, we meta-analytically evaluate the extent of missing data, the frequency with which various analytic methods are employed to accommodate dropouts, and the performance of multiple statistical methods. METHODOLOGY/PRINCIPAL FINDINGS: We searched the PubMed and Cochrane databases (2000-2006) for articles published in English and manually searched bibliographic references. Articles on pharmaceutical randomized controlled trials with weight loss or weight gain prevention as major endpoints were included. Two authors independently reviewed each publication for inclusion. 121 articles met the inclusion criteria. Two authors independently extracted treatment, sample size, drop-out rates, study duration, and the statistical method used to handle missing data from all articles and resolved disagreements by consensus. In the meta-analysis, drop-out rates were substantial, with the survival (non-dropout) rates being approximated by an exponential decay curve, e^(-λt), where λ was estimated to be 0.0088 (95% bootstrap confidence interval: 0.0076 to 0.0100) and t represents time in weeks. The estimated drop-out rate at 1 year was 37%. Most studies used last observation carried forward as the primary analytic method to handle missing data. We also obtained 12 raw obesity randomized controlled trial datasets for empirical analyses. Analyses of the raw randomized controlled trial data suggested that both mixed models and multiple imputation performed well, but that multiple imputation may be more robust when missing data are extensive. CONCLUSION/SIGNIFICANCE: Our analysis offers an equation for predicting dropout rates that is useful for future study planning. Our raw data analyses suggest that multiple imputation is better than other methods for handling missing data in obesity randomized controlled trials, followed closely by mixed models. We suggest these methods supplant last observation carried forward as the primary method of analysis.
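As a quick check of the reported numbers, the snippet below plugs the abstract's λ estimate and bootstrap interval into the retention curve e^(-λt) at t = 52 weeks; the implied dropout is about 37%, matching the reported 1-year estimate.

```python
# Worked check of the abstract's dropout model: retention ~ exp(-lambda * t)
# with lambda = 0.0088 per week implies roughly 37% dropout at one year.
import math

lam = 0.0088                      # point estimate (per week)
lam_lo, lam_hi = 0.0076, 0.0100   # 95% bootstrap CI from the abstract
t = 52                            # weeks in one year

dropout = 1 - math.exp(-lam * t)
dropout_lo = 1 - math.exp(-lam_lo * t)
dropout_hi = 1 - math.exp(-lam_hi * t)
print(f"Estimated 1-year dropout: {dropout:.0%} "
      f"(range {dropout_lo:.0%} to {dropout_hi:.0%})")
# -> about 37% (roughly 33% to 41%), consistent with the reported estimate.
```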
Abstract:
BACKGROUND: With the globalization of clinical trials, large developing nations have substantially increased their participation in multi-site studies. This participation has raised ethical concerns, among them the fear that local customs, habits and culture are not respected when asking potential participants to take part in a study. This knowledge gap is particularly noticeable among Indian subjects, since despite the large number of participants, little is known regarding what factors affect their willingness to participate in clinical trials. METHODS: We conducted a meta-analysis of all studies evaluating the factors and barriers, from the perspective of potential Indian participants, that contribute to their participation in clinical trials. We searched both international and India-specific bibliographic databases, including Pubmed, Cochrane, Openjgate, MedInd, Scirus and Medknow, and also performed hand searches and communicated with authors to obtain additional references. We included studies dealing exclusively with the participation of Indians in clinical trials. Data extraction was conducted by three researchers, with disagreements resolved by consensus. RESULTS: Six qualitative studies and one survey were found evaluating the main themes affecting the participation of Indian subjects. Factors favouring participation included personal health benefits, altruism, trust in physicians, a source of extra income, detailed knowledge, and methods for motivating participants; barriers included mistrust of trial organizations, concerns about the efficacy and safety of trials, psychological reasons, trial burden, loss of confidentiality, dependency issues, and language. CONCLUSION: We identified factors that facilitate, and barriers that have negative implications for, trial participation decisions in Indian subjects. Due consideration and weight should be given to these factors when planning future trials in India.
Abstract:
BACKGROUND: With the globalization of clinical trials, a growing emphasis has been placed on the standardization of the workflow in order to ensure the reproducibility and reliability of the overall trial. Despite the importance of workflow evaluation, to our knowledge no previous studies have attempted to adapt existing modeling languages to standardize the representation of clinical trials. Unified Modeling Language (UML) is a computational language that can be used to model operational workflow, and a UML profile can be developed to standardize UML models within a given domain. This paper's objective is to develop a UML profile that extends the UML Activity Diagram schema into the clinical trials domain, defining a standard representation for clinical trial workflow diagrams in UML. METHODS: Two Brazilian clinical trial sites, in rheumatology and oncology, were examined to model their workflow and collect time-motion data. UML modeling was conducted in Eclipse, and a UML profile was developed to incorporate information used in discrete event simulation software. RESULTS: Ethnographic observation revealed bottlenecks in the workflow: these included tasks requiring the full commitment of clinical research coordinators (CRCs), transferring notes from paper to computers, deviations from standard operating procedures, and conflicts between different IT systems. Time-motion analysis revealed that nurses' activities took up the most time in the workflow and contained a high frequency of shorter-duration activities. Administrative assistants performed more activities near the beginning and end of the workflow. Overall, clinical trial tasks had a greater frequency than clinic routines or other general activities. CONCLUSIONS: This paper describes a method for modeling clinical trial workflow in UML and standardizing these workflow diagrams through a UML profile. In the increasingly global environment of clinical trials, the standardization of workflow modeling is a necessary precursor to conducting a comparative analysis of international clinical trial workflows.
Abstract:
BACKGROUND: With the global expansion of clinical trials and the expected rise of the emerging economies known as the BRICs (Brazil, Russia, India and China), understanding the factors that affect the willingness of patients from those countries to participate in clinical trials assumes a central role in the future of health research. METHODS: We conducted a systematic review and meta-analysis (SRMA) of willingness to participate in clinical trials among Brazilian patients and then compared it with Indian patients (using the results of another SRMA previously conducted by our group) through a system dynamics model. RESULTS: Five studies were included in the SRMA of Brazilian patients. Our main findings are that 1) the major motivation for Brazilian patients to participate in clinical trials is altruism, 2) monetary reimbursement is the least important factor motivating Brazilian patients, 3) the major barrier to participation in clinical trials for Brazilian patients is fear of side effects, and 4) Brazilian patients are more willing to participate in clinical trials than Indian patients. CONCLUSION: Our study provides important insights for investigators and sponsors planning trials in Brazil (and India) in the future. Ignoring these results may lead to unnecessary spending of funds and time. More studies are needed to validate our results and to better understand this poorly studied theme.
Abstract:
The end products of atmospheric degradation are not only CO2 and H2O but also sulfate and nitrate, depending on the chemical composition of the substances undergoing degradation. Atmospheric degradation thus has a direct influence on the radiative balance of the earth, not only through the formation of greenhouse gases but also of aerosols. Aerosols with a diameter of 0.1 to 2 micrometers reflect short-wave sunlight very efficiently, leading to a radiative forcing estimated by the IPCC to be about -0.8 watt per m2. Aerosols also influence the radiative balance by way of cloud formation. If more aerosols are present, clouds are formed with more and smaller droplets; these clouds have a higher albedo and are more stable compared to clouds with larger droplets. Not only sulfate, but also nitrate and polar organic compounds formed as intermediates in degradation processes, contribute to this direct and indirect aerosol effect. Estimates for the Netherlands indicate a direct effect of -4 watt per m2 and an indirect effect as large as -5 watt per m2. About one third is caused by sulfates, one third by nitrates and the last third by polar organic compounds. This large radiative forcing is obviously non-uniform and depends on local conditions.
Abstract:
Gemstone Team Antibiotic Resistance