Abstract:
PURPOSE: This study aims to describe emotional distress and quality of life (QoL) of patients at different phases of their lung cancer and the association with their family physician (FP) involvement. METHODS: A prospective study on patients with lung cancer was conducted in three regions of Quebec, Canada. Patients completed, at baseline, several validated questionnaires regarding their psychosocial characteristics and their perceived level of FP involvement. Emotional distress [Profile of Mood States (POMS)] and QoL [European Organization for Research and Treatment of Cancer Quality of Life Core 30 (EORTC QLQ-C30)] were reassessed every 3-6 months, whether patients had metastasis or not, up to 18 months. Results were regrouped according to cancer phase. Mixed models with repeated measurements were performed to identify variation in distress and QoL. RESULTS: In this cohort of 395 patients, distress was low at diagnosis (0.79 ± 0.7 on a 0-4 scale), rising to 1.36 ± 0.8 at the advanced phase (p < 0.0001). Patients' global QoL scores significantly decreased from diagnosis to the advanced phase (from 66 to 45 on a 0-100 scale; p < 0.0001). At all phases of cancer, FP involvement was significantly associated with patients' distress (p = 0.0004) and their global perception of QoL (p = 0.0080). These associations remained statistically significant even after controlling for age, gender, and presence of metastases. CONCLUSIONS: This study provides new knowledge on patients' emotional distress and QoL as cancer evolves and, particularly, on their association with FP involvement. Further studies should be conducted to explore the FP's role in cancer supportive care.
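As a rough illustration of the reported change in distress (not the mixed-model analysis the authors performed), the summary statistics above can be turned into a standardized effect size; this sketch ignores the repeated-measures correlation, so it only approximates the within-patient change:

```python
import math

def cohens_d(mean1, sd1, mean2, sd2):
    """Standardized mean difference using the pooled SD of two groups."""
    pooled_sd = math.sqrt((sd1 ** 2 + sd2 ** 2) / 2)
    return (mean2 - mean1) / pooled_sd

# Reported POMS distress (0-4 scale): 0.79 +/- 0.7 at diagnosis,
# 1.36 +/- 0.8 at the advanced phase.
d = cohens_d(0.79, 0.7, 1.36, 0.8)
print(round(d, 2))  # ~0.76, a moderate-to-large standardized increase
```
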
Abstract:
The recent roll-out of rapid diagnostic tests (RDTs) for malaria has highlighted the decreasing proportion of malaria-attributable illness in endemic areas. Unfortunately, once malaria is excluded, there are few accessible diagnostic tools to guide the management of severe febrile illnesses in low resource settings. This review summarizes the current state of RDT development for several key infections, including dengue fever, enteric fever, leptospirosis, brucellosis, visceral leishmaniasis and human African trypanosomiasis, and highlights many remaining gaps. Most RDTs for non-malarial tropical infections currently rely on the detection of host antibodies against a single infectious agent. The sensitivity and specificity of host-antibody detection tests are both inherently limited. Moreover, prolonged antibody responses to many infections preclude the use of most serological RDTs for monitoring response to treatment and/or for diagnosing relapse. Considering these limitations, there is a pressing need for sensitive pathogen-detection-based RDTs, as have been successfully developed for malaria and dengue. Ultimately, integration of RDTs into a validated syndromic approach to tropical fevers is urgently needed. Related research priorities are to define the evolving epidemiology of fever in the tropics, and to determine how combinations of RDTs could be best used to improve the management of severe and treatable infections requiring specific therapy.
Abstract:
BACKGROUND: Retinitis pigmentosa and other hereditary retinal degenerations (HRD) are rare genetic diseases leading to progressive blindness. Recessive HRD are caused by mutations in more than 100 different genes. Laws of population genetics predict that, on a purely theoretical ground, such a high number of genes should translate into an extremely elevated frequency of unaffected carriers of mutations. In this study we estimate the proportion of these individuals within the general population, via the analyses of data from whole-genome sequencing. METHODOLOGY/PRINCIPAL FINDINGS: We screened complete and high-quality genome sequences from 46 control individuals from various world populations for HRD mutations, using bioinformatic tools developed in-house. All mutations detected in silico were validated by Sanger sequencing. We identified clear-cut, null recessive HRD mutations in 10 out of the 46 unaffected individuals analyzed (∼22%). CONCLUSIONS/SIGNIFICANCE: Based on our data, approximately one in 4-5 individuals from the general population may be a carrier of null mutations that are responsible for HRD. This would be the highest mutation carrier frequency so far measured for a class of Mendelian disorders, especially considering that missenses and other forms of pathogenic changes were not included in our assessment. Among other things, our results indicate that the risk for a consanguineous couple of generating a child with a blinding disease is particularly high, compared to other genetic conditions.
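The headline carrier estimate can be reproduced from the reported counts; a minimal sketch that also attaches a Wilson score confidence interval, an uncertainty range the abstract itself does not state:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half

# 10 of 46 screened genomes carried a null recessive HRD mutation,
# i.e. ~22%, or roughly one in 4-5 individuals.
lo, hi = wilson_ci(10, 46)
print(f"carrier frequency ~{10/46:.1%}, 95% CI {lo:.1%}-{hi:.1%}")
```

The width of the interval (roughly 12% to 36%) reflects the small sample of 46 genomes.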
Abstract:
Background: Retrospective analyses suggest that personalized PK-based dosage might be useful for imatinib, as treatment response correlates with trough concentrations (Cmin) in cancer patients. Our objectives were to improve the interpretation of randomly measured concentrations and to confirm its efficiency before evaluating the clinical usefulness of systematic PK-based dosage in chronic myeloid leukemia patients. Methods and Results: A Bayesian method was validated for the prediction of individual Cmin on the basis of a single random observation, and was applied in a prospective multicenter randomized controlled clinical trial. 28 out of 56 patients were enrolled in the systematic dosage individualization arm and had 44 follow-up visits (their clinical follow-up is ongoing). PK-based dose adjustments were proposed for the 39% of visits at which the predicted Cmin deviated significantly from the target (1000 ng/ml). Recommendations were taken up by physicians in 57% of cases; patients were considered non-compliant in 27%. Median Cmin at study inclusion was 754 ng/ml and differed significantly from the target (p=0.02, Wilcoxon test). On follow-up, Cmin was 984 ng/ml (p=0.82) in the compliant group. The CV decreased from 46% to 27% (p=0.02, F-test). Conclusion: PK-based (Bayesian) dosage adjustment is able to bring individual drug exposure closer to a given therapeutic target. Its influence on therapeutic response remains to be evaluated.
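The trial's actual method was Bayesian prediction from a population PK model; as a much simpler illustration of extrapolating a randomly timed level to a trough, here is a deterministic one-compartment sketch, in which the 18 h half-life is an assumed typical value, not a study parameter:

```python
import math

IMATINIB_HALF_LIFE_H = 18.0  # assumed typical adult value; not from the study

def predict_cmin(c_obs, t_obs_h, tau_h=24.0, half_life_h=IMATINIB_HALF_LIFE_H):
    """Extrapolate a randomly timed concentration (ng/ml, t_obs_h hours
    post-dose) to the trough at the end of the dosing interval,
    assuming mono-exponential decline."""
    ke = math.log(2) / half_life_h
    return c_obs * math.exp(-ke * (tau_h - t_obs_h))

# e.g. 1500 ng/ml measured 4 h post-dose under once-daily dosing
print(round(predict_cmin(1500, 4)))  # ~694 ng/ml
```

Unlike this sketch, the Bayesian approach shrinks the individual estimate toward population priors and accounts for covariates and residual error.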
Abstract:
BACKGROUND: The mutation status of the BRAF and KRAS genes has been proposed as a prognostic biomarker in colorectal cancer. Of these, only the BRAF V600E mutation has been validated independently as prognostic for overall survival and survival after relapse, while the prognostic value of KRAS mutation is still unclear. We investigated the prognostic value of BRAF and KRAS mutations in various contexts defined by stratifications of the patient population. METHODS: We retrospectively analyzed a cohort of patients with stage II and III colorectal cancer from the PETACC-3 clinical trial (N = 1,423), assessing the prognostic value of the BRAF and KRAS mutations in subpopulations defined by all possible combinations of the following clinico-pathological variables: T stage, N stage, tumor site, tumor grade and microsatellite instability status. In each such subpopulation, the prognostic value was assessed by log-rank test for three endpoints: overall survival, relapse-free survival, and survival after relapse. The significance level was set to 0.01 for Bonferroni-adjusted p-values, and a second threshold for a trend towards statistical significance was set at 0.05 for unadjusted p-values. The significance of the interactions was tested by Wald test, with a significance level of 0.05. RESULTS: In stage II-III colorectal cancer, BRAF mutation was confirmed as a marker of poor survival only in subpopulations involving microsatellite-stable and left-sided tumors, with larger effects than in the whole population. There was no evidence of prognostic value in microsatellite-instable or right-sided tumor groups. We found that BRAF was also prognostic for relapse-free survival in some subpopulations. We found no evidence that KRAS mutations had prognostic value, although a trend was observed in some stratifications. We also show evidence of heterogeneity in the survival of patients with the BRAF V600E mutation.
CONCLUSIONS: The BRAF mutation represents an additional risk factor only in some subpopulations of colorectal cancers, in others having limited prognostic value. However, in the subpopulations where it is prognostic, it represents a marker of much higher risk than previously considered. KRAS mutation status does not seem to represent a strong prognostic variable.
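The two-threshold testing rule described in the methods can be sketched directly; the `classify` helper and its example p-values are illustrative, not taken from the trial data:

```python
def bonferroni_adjust(p, n_tests):
    """Bonferroni-adjusted p-value: multiply by the number of tests, cap at 1."""
    return min(1.0, p * n_tests)

def classify(p_raw, n_tests, alpha=0.01, trend_alpha=0.05):
    """Two-threshold rule: significance at 0.01 on the Bonferroni-adjusted
    p-value, 'trend' at 0.05 on the unadjusted p-value."""
    if bonferroni_adjust(p_raw, n_tests) < alpha:
        return "significant"
    if p_raw < trend_alpha:
        return "trend"
    return "not significant"

print(classify(0.0001, 50))  # significant (adjusted p = 0.005)
print(classify(0.002, 50))   # trend (adjusted p = 0.1, but raw p < 0.05)
print(classify(0.2, 50))     # not significant
```

With many subpopulation-by-endpoint combinations, the Bonferroni correction is what keeps the family-wise error rate controlled at the cost of power, hence the separate "trend" tier.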
Abstract:
Parkinson's disease (PD) is a chronic neurodegenerative disorder of unknown aetiology, characterized by progressive loss of dopaminergic (DA) neurons of the substantia nigra pars compacta. 6-Hydroxydopamine (6-OHDA) treatment of neuronal cells is an established in vitro model for mimicking the effect of oxidative stress found in PD brains. We examined the effects of 6-OHDA treatment on human neuroblastoma cells (SH-SY5Y) and primary mesencephalic cultures. Using a reverse arbitrarily primed polymerase chain reaction (RAP-PCR) approach, we generated reproducible genetic fingerprints of differential expression levels in cell cultures treated with 6-OHDA. Of the resulting sequences, 23 showed considerable homology to known human coding sequences. The results of the RAP-PCR were validated by reverse transcription PCR, real-time PCR and, for selected genes, by Western blot analysis and immunofluorescence. In four cases [tomoregulin-1 (TMEFF-1), collapsin response mediator protein 1 (CRMP-1), neurexin-1, and phosphoribosylaminoimidazole synthetase (GART)], a down-regulation of mRNA and protein levels was detected. Further studies will be necessary on the physiological role of the identified proteins and their impact on pathways leading to neurodegeneration in PD.
Abstract:
Introduction: The general strategy for anti-doping analysis starts with a screening step, followed by a confirmatory step when a sample is suspected to be positive. The screening step should be fast, generic and able to highlight any sample that may contain a prohibited substance, avoiding false negative and reducing false positive results. The confirmatory step is a dedicated procedure comprising a selective sample preparation and detection mode. Aim: The purpose of the study is to develop rapid screening and selective confirmatory strategies to detect and identify 103 doping agents in urine. Methods: For the screening, urine samples were simply diluted by a factor of 2 with ultra-pure water and directly injected ("dilute and shoot") into the ultra-high-pressure liquid chromatography (UHPLC) system. The UHPLC separation was performed in two gradients (ESI positive and negative) from 5/95 to 95/5% MeCN/water containing 0.1% formic acid. The gradient analysis time is 9 min, including 3 min re-equilibration. Analyte detection was performed in full-scan mode on a quadrupole time-of-flight (QTOF) mass spectrometer by acquiring the exact mass of the protonated (ESI positive) or deprotonated (ESI negative) molecular ion. For the confirmatory analysis, urine samples were extracted on an SPE 96-well plate with mixed-mode cation-exchange (MCX) sorbents for basic and neutral compounds, or mixed-mode anion-exchange (MAX) sorbents for acidic molecules. The analytes were eluted in 3 min (including 1.5 min re-equilibration) with a gradient from 5/95 to 95/5% MeCN/water containing 0.1% formic acid. Analyte confirmation was performed in MS and MS/MS mode on a QTOF mass spectrometer. Results: In the screening and confirmatory analysis, basic and neutral analytes were analysed in the positive ESI mode, whereas acidic compounds were analysed in the negative mode. Analyte identification was based on retention time (tR) and exact mass measurement.
"Dilute and shoot" was used as a generic sample treatment in the screening procedure, but matrix effect (e.g., ion suppression) cannot be avoided. However, the sensitivity was sufficient for all analytes to reach the minimal required performance limit (MRPL) required by the World Anti Doping Agency (WADA). To avoid time-consuming confirmatory analysis of false positive samples, a pre-confirmatory step was added. It consists of the sample re-injection, the acquisition of MS/MS spectra and the comparison to reference material. For the confirmatory analysis, urine samples were extracted by SPE allowing a pre-concentration of the analyte. A fast chromatographic separation was developed as a single analyte has to be confirmed. A dedicated QTOF-MS and MS/MS acquisition was performed to acquire within the same run a parallel scanning of two functions. Low collision energy was applied in the first channel to obtain the protonated molecular ion (QTOF-MS), while dedicated collision energy was set in the second channel to obtain fragmented ions (QTOF-MS/MS). Enough identification points were obtained to compare the spectra with reference material and negative urine sample. Finally, the entire process was validated and matrix effects quantified. Conclusion: Thanks to the coupling of UHPLC with the QTOF mass spectrometer, high tR repeatability, sensitivity, mass accuracy and mass resolution over a broad mass range were obtained. The method was sensitive, robust and reliable enough to detect and identify doping agents in urine. Keywords: screening, confirmatory analysis, UHPLC, QTOF, doping agents
Abstract:
Objectives: Several population pharmacokinetic (PPK) and pharmacokinetic-pharmacodynamic (PK-PD) analyses have been performed with the anticancer drug imatinib. Inspired by the approach of meta-analysis, we aimed to compare and combine results from published studies in a useful way, in particular for improving the clinical interpretation of imatinib concentration measurements in the scope of therapeutic drug monitoring (TDM). Methods: Original PPK analyses and PK-PD studies (PK surrogate: trough concentration Cmin; PD outcomes: optimal early response and specific adverse events) were searched systematically on MEDLINE. From each identified PPK model, a predicted concentration distribution under standard dosage was derived through 1000 simulations (NONMEM), after standardizing model parameters to common covariates. A "reference range" was calculated from pooled simulated concentrations in a semi-quantitative approach (without specific weighting) over the whole dosing interval. Meta-regression summarized relationships between Cmin and optimal/suboptimal early treatment response. Results: 9 PPK models and 6 relevant PK-PD reports in CML patients were identified. Model-based predicted median Cmin ranged from 555 to 1388 ng/ml (grand median: 870 ng/ml; inter-quartile range: 520-1390 ng/ml). Across PK-PD studies, the probability of achieving optimal early response was predicted to increase from 60% to 85% as Cmin rose from 520 to 1390 ng/ml (odds ratio for a doubling of Cmin: 2.7). Reporting of specific adverse events was too heterogeneous to perform a regression analysis. However, the frequencies of anemia, rash and fluid retention increased consistently with Cmin, though less steeply than response probability. Conclusions: Predicted drug exposure may differ substantially between various PPK analyses. In this review, heterogeneity was mainly attributed to 2 "outlying" models.
The established reference range seems to cover the range where both good efficacy and acceptable tolerance are expected for most patients. TDM-guided dose adjustment therefore appears justified for imatinib in CML patients. Its usefulness now remains to be prospectively validated in a randomized trial.
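The reported numbers are internally consistent, and this can be checked: applying the odds ratio of 2.7 per doubling of Cmin to a 60% response probability at 520 ng/ml should approximately recover the 85% figure at 1390 ng/ml. A short sketch of that logistic extrapolation:

```python
import math

def response_prob(cmin, ref_cmin=520.0, ref_prob=0.60, or_per_doubling=2.7):
    """Logistic extrapolation: multiply the baseline odds by the reported
    odds ratio for each doubling of Cmin relative to the reference."""
    doublings = math.log2(cmin / ref_cmin)
    odds = ref_prob / (1 - ref_prob) * or_per_doubling ** doublings
    return odds / (1 + odds)

print(round(response_prob(1390), 2))  # ~0.86, close to the reported 85%
```
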
Abstract:
We introduce a model of redistributive income taxation and public expenditure. This joint treatment permits analyzing the interdependencies between the two policies: one cannot be chosen independently of the other. Empirical evidence reveals that partisan confrontation essentially falls on expenditure policies rather than on income taxation. We examine the case in which the expenditure policy (or the size of government) is chosen by majority voting and income taxation is consistently adjusted. This adjustment consists of designing the income tax schedule that, given the expenditure policy, achieves consensus among the population. The model determines the consensus income tax schedule, the composition of public expenditure and the size of government. The main results are that inequality is negatively related to the size of government and to the pro-rich bias in public expenditure, and positively or negatively related to the marginal income tax, depending on substitutability between government supplied and market goods. These implications are validated using OECD data.
Abstract:
BACKGROUND: As the diversity of the European population evolves, measuring providers' skillfulness in cross-cultural care, and understanding what contextual factors may influence it, is increasingly necessary. Given limited information about differences in cultural competency by provider role, we compared cross-cultural skillfulness between physicians and nurses working at a Swiss university hospital. METHODS: A survey on cross-cultural care was mailed in November 2010 to front-line providers in Lausanne, Switzerland. This questionnaire included questions from the previously validated Cross-Cultural Care Survey. We compared physicians' and nurses' mean composite scores and the proportion of "3-good/4-very good" responses for nine perceived-skillfulness items (4-point Likert scale) using the validated tool. We used linear regression to examine how provider role (physician vs. nurse) was associated with composite skillfulness scores, adjusting for demographics (gender, non-French dominant language), workplace (time at institution, work unit "sensitized" to cultural care), reported cultural-competence training, and cross-cultural care problem awareness. RESULTS: Of the 885 providers who received the questionnaire, 368 (41.2%) returned it: 124 (33.6%) physicians and 244 (66.4%) nurses, reflecting the institutional distribution of providers. Physicians had better mean composite scores for perceived skillfulness than nurses (2.7 vs. 2.5, p < 0.005) and a significantly higher proportion of "good/very good" responses for 4 of 9 items. After adjusting for explanatory variables, physicians remained more likely to have higher skillfulness (β = 0.13, p = 0.05). Among all providers, higher skillfulness was associated with perception/awareness of problems in the following areas: inadequate cross-cultural training (β = 0.14, p = 0.01) and lack of practical experience caring for diverse populations (β = 0.11, p = 0.04).
In stratified analyses among physicians alone, having French as a dominant language (β = -0.34, p < 0.005) was negatively correlated with skillfulness. CONCLUSIONS: Overall, there is much room for cultural competency improvement among providers. These results support the need for cross-cultural skills training with an inter-professional focus on nurses, education that attunes provider awareness to the local issues in cross-cultural care, and increased diversity efforts in the work force, particularly among physicians.
Abstract:
OBJECTIVE: To reach a consensus on the clinical use of ambulatory blood pressure monitoring (ABPM). METHODS: A task force on the clinical use of ABPM wrote this overview in preparation for the Seventh International Consensus Conference (23-25 September 1999, Leuven, Belgium). This article was amended to account for opinions aired at the conference and to reflect the common ground reached in the discussions. POINTS OF CONSENSUS: The Riva Rocci/Korotkoff technique, although it is prone to error, is easy and cheap to perform and remains worldwide the standard procedure for measuring blood pressure. ABPM should be performed only with properly validated devices as an accessory to conventional measurement of blood pressure. Ambulatory recording of blood pressure requires considerable investment in equipment and training and its use for screening purposes cannot be recommended. ABPM is most useful for identifying patients with white-coat hypertension (WCH), also known as isolated clinic hypertension, which is arbitrarily defined as a clinic blood pressure of more than 140 mmHg systolic or 90 mmHg diastolic in a patient with daytime ambulatory blood pressure below 135 mmHg systolic and 85 mmHg diastolic. Some experts consider a daytime blood pressure below 130 mmHg systolic and 80 mmHg diastolic optimal. Whether WCH predisposes subjects to sustained hypertension remains debated. However, outcome is better correlated to the ambulatory blood pressure than it is to the conventional blood pressure. Antihypertensive drugs lower the clinic blood pressure in patients with WCH but not the ambulatory blood pressure, and also do not improve prognosis. Nevertheless, WCH should not be left unattended. If no previous cardiovascular complications are present, treatment could be limited to follow-up and hygienic measures, which should also account for risk factors other than hypertension. 
ABPM is superior to conventional measurement of blood pressure not only for selecting patients for antihypertensive drug treatment but also for assessing the effects both of non-pharmacological and of pharmacological therapy. The ambulatory blood pressure should be reduced by treatment to below the thresholds applied for diagnosing sustained hypertension. ABPM makes the diagnosis and treatment of nocturnal hypertension possible and is especially indicated for patients with borderline hypertension, the elderly, pregnant women, patients with treatment-resistant hypertension and patients with symptoms suggestive of hypotension. In centres with sufficient financial resources, ABPM could become part of the routine assessment of patients with clinic hypertension. For patients with WCH, it should be repeated at annual or 6-monthly intervals. Variation of blood pressure throughout the day can be monitored only by ABPM, but several advantages of the latter technique can also be obtained by self-measurement of blood pressure, a less expensive method that is probably better suited to primary practice and use in developing countries. CONCLUSIONS: ABPM or equivalent methods for tracing the white-coat effect should become part of the routine diagnostic and therapeutic procedures applied to treated and untreated patients with elevated clinic blood pressures. Results of long-term outcome trials should better establish the advantage of further integrating ABPM as an accessory to conventional sphygmomanometry into the routine care of hypertensive patients and should provide more definite information on the long-term cost-effectiveness. Because such trials are not likely to be funded by the pharmaceutical industry, governments and health insurance companies should take responsibility in this regard.
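The white-coat hypertension definition quoted above is a simple pair of cut-offs and can be expressed directly; this sketch uses only the consensus thresholds stated in the text (clinic BP above 140/90 mmHg with daytime ambulatory BP below 135/85 mmHg):

```python
def classify_bp(clinic_sys, clinic_dia, daytime_sys, daytime_dia):
    """Classify a patient using the consensus cut-offs quoted in the text:
    clinic BP > 140 mmHg systolic or > 90 mmHg diastolic, and
    daytime ambulatory BP < 135/85 mmHg."""
    clinic_high = clinic_sys > 140 or clinic_dia > 90
    ambulatory_normal = daytime_sys < 135 and daytime_dia < 85
    if clinic_high and ambulatory_normal:
        return "white-coat hypertension"
    if clinic_high:
        return "sustained hypertension"
    return "no clinic hypertension"

print(classify_bp(150, 95, 128, 80))  # white-coat hypertension
print(classify_bp(150, 95, 142, 88))  # sustained hypertension
```

A real implementation would also need the averaging rules for the daytime recording and the optional 130/80 mmHg "optimal" threshold mentioned in the text.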
Abstract:
Although gastro-esophageal reflux disease (GERD) is a common disorder, population-based data on GERD in Bangladesh are lacking. This epidemiological study was designed to determine the prevalence of GERD and its association with lifestyle factors. This population-based cross-sectional study was done by door-to-door interviews of randomly selected persons in both urban and rural areas of the north-eastern part of Bangladesh, using a validated questionnaire. A cutoff score of 3 was chosen as a valid and reliable threshold to confirm GERD. Statistical analysis was done with SPSS version 12, and the level of significance was set at P ≤ 0.05. A total of 2000 persons with an age range of 15 to 85 years were interviewed: 1000 subjects from the urban area and 1000 from the rural area. Among the study subjects, 1064 were male and 936 were female. A total of 110 persons (5.5%) were found to have GERD symptoms; among them, 47 were men and 67 were women. The monthly, weekly and daily prevalence of heartburn and/or acid regurgitation was 5.5%, 5.25% and 2.5%, respectively. Female sex, increased age and a lower level of education were significantly associated with GERD symptoms. Prevalence was higher among city dwellers (approximately 6.0% versus 4.8%), the married (6.23%, n = 86), the widowed (16.83%, n = 17) and day labourers (8.78%). Level of education inversely influenced the prevalence. No significant association of GERD was found with body mass index (BMI) or smoking. The prevalence of GERD in the north-eastern part of Bangladesh was lower than that in the Western world. Prevalence was higher in the urban population, women, the married, the widowed, and poor and illiterate persons. BMI and smoking had no significant association with GERD.
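The sex-specific rates implied by the reported counts can be recomputed directly from the figures above:

```python
def prevalence(cases, n):
    """Point prevalence as a simple proportion."""
    return cases / n

# Reported counts: 110/2000 overall, 67/936 women, 47/1064 men.
overall = prevalence(110, 2000)
women = prevalence(67, 936)
men = prevalence(47, 1064)
print(f"overall {overall:.1%}, women {women:.1%}, men {men:.1%}")
# overall 5.5%, women 7.2%, men 4.4%
```

The roughly 1.6-fold higher rate in women is consistent with the abstract's finding that female sex was significantly associated with GERD symptoms.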
Abstract:
Currently, the most widely used criteria for assessing response to therapy in high-grade gliomas are based on two-dimensional tumor measurements on computed tomography (CT) or magnetic resonance imaging (MRI), in conjunction with clinical assessment and corticosteroid dose (the Macdonald Criteria). It is increasingly apparent that there are significant limitations to these criteria, which only address the contrast-enhancing component of the tumor. For example, chemoradiotherapy for newly diagnosed glioblastomas results in transient increase in tumor enhancement (pseudoprogression) in 20% to 30% of patients, which is difficult to differentiate from true tumor progression. Antiangiogenic agents produce high radiographic response rates, as defined by a rapid decrease in contrast enhancement on CT/MRI that occurs within days of initiation of treatment and that is partly a result of reduced vascular permeability to contrast agents rather than a true antitumor effect. In addition, a subset of patients treated with antiangiogenic agents develop tumor recurrence characterized by an increase in the nonenhancing component depicted on T2-weighted/fluid-attenuated inversion recovery sequences. The recognition that contrast enhancement is nonspecific and may not always be a true surrogate of tumor response and the need to account for the nonenhancing component of the tumor mandate that new criteria be developed and validated to permit accurate assessment of the efficacy of novel therapies. The Response Assessment in Neuro-Oncology Working Group is an international effort to develop new standardized response criteria for clinical trials in brain tumors. In this proposal, we present the recommendations for updated response criteria for high-grade gliomas.
Abstract:
BACKGROUND: Adequate pain assessment is critical for evaluating the efficacy of analgesic treatment in clinical practice and during the development of new therapies. Yet the currently used scores of global pain intensity fail to reflect the diversity of pain manifestations and the complexity of underlying biological mechanisms. We have developed a tool for a standardized assessment of pain-related symptoms and signs that differentiates pain phenotypes independent of etiology. METHODS AND FINDINGS: Using a structured interview (16 questions) and a standardized bedside examination (23 tests), we prospectively assessed symptoms and signs in 130 patients with peripheral neuropathic pain caused by diabetic polyneuropathy, postherpetic neuralgia, or radicular low back pain (LBP), and in 57 patients with non-neuropathic (axial) LBP. A hierarchical cluster analysis revealed distinct association patterns of symptoms and signs (pain subtypes) that characterized six subgroups of patients with neuropathic pain and two subgroups of patients with non-neuropathic pain. Using a classification tree analysis, we identified the most discriminatory assessment items for the identification of pain subtypes. We combined these six interview questions and ten physical tests in a pain assessment tool that we named Standardized Evaluation of Pain (StEP). We validated StEP for the distinction between radicular and axial LBP in an independent group of 137 patients. StEP identified patients with radicular pain with high sensitivity (92%; 95% confidence interval [CI] 83%-97%) and specificity (97%; 95% CI 89%-100%). The diagnostic accuracy of StEP exceeded that of a dedicated screening tool for neuropathic pain and spinal magnetic resonance imaging. In addition, we were able to reproduce subtypes of radicular and axial LBP, underscoring the utility of StEP for discerning distinct constellations of symptoms and signs. 
CONCLUSIONS: We present a novel method of identifying pain subtypes that we believe reflect underlying pain mechanisms. We demonstrate that this new approach to pain assessment helps separate radicular from axial back pain. Beyond diagnostic utility, a standardized differentiation of pain subtypes that is independent of disease etiology may offer a unique opportunity to improve targeted analgesic treatment.
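Sensitivity and specificity are simple ratios from the 2×2 validation table; since the abstract reports only percentages, the counts below are hypothetical values chosen to reproduce the reported point estimates:

```python
def sensitivity(tp, fn):
    """True positive rate: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True negative rate: TN / (TN + FP)."""
    return tn / (tn + fp)

# Hypothetical 2x2 counts consistent with the reported 92% sensitivity
# and 97% specificity (the abstract does not give the raw table).
sens = sensitivity(tp=46, fn=4)
spec = specificity(tn=84, fp=3)
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}")
```
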
Abstract:
The gene SNRNP200 is composed of 45 exons and encodes a protein essential for pre-mRNA splicing, the 200 kDa helicase hBrr2. Two mutations in SNRNP200 have recently been associated with autosomal dominant retinitis pigmentosa (adRP), a retinal degenerative disease, in two families from China. In this work we analyzed the entire 35-Kb SNRNP200 genomic region in a cohort of 96 unrelated North American patients with adRP. To complete this large-scale sequencing project, we performed ultra high-throughput sequencing of pooled, untagged PCR products. We then validated the detected DNA changes by Sanger sequencing of individual samples from this cohort and from an additional one of 95 patients. One of the two previously known mutations (p.S1087L) was identified in 3 patients, while 4 new missense changes (p.R681C, p.R681H, p.V683L, p.Y689C) affecting highly conserved codons were identified in 6 unrelated individuals, indicating that the prevalence of SNRNP200-associated adRP is relatively high. We also took advantage of this research to evaluate the pool-and-sequence method, especially with respect to the generation of false positive and negative results. We conclude that, although this strategy can be adopted for rapid discovery of new disease-associated variants, it still requires extensive validation to be used in routine DNA screenings. © 2011 Wiley-Liss, Inc.
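A practical constraint of the untagged pool-and-sequence strategy, and one source of the false negatives mentioned above, is that a singleton heterozygous variant is diluted across all alleles in the pool. A quick sketch of the expected supporting-read fraction (the sequencing depth is a hypothetical value, not a figure from this study):

```python
def expected_variant_fraction(carriers, pool_size, ploidy=2):
    """Expected allele fraction of a heterozygous variant carried by
    `carriers` individuals in an untagged pool of diploid samples."""
    return carriers / (ploidy * pool_size)

frac = expected_variant_fraction(1, 96)  # one carrier in a 96-sample pool
depth = 5000  # hypothetical coverage at the variant position
print(f"expected fraction {frac:.4f}, ~{round(frac * depth)} supporting reads")
```

At ~0.5% allele fraction the variant signal sits close to the sequencing error rate, which is why the Sanger validation step described above remains necessary.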