191 results for quality characteristic
Abstract:
Purpose: To compare the performance of the Glaucoma Quality of Life-15 (GQL-15) questionnaire, intraocular pressure (IOP) measurement by Goldmann tonometry, and a measure of visual field loss, the Moorfields Motion Displacement Test (MDT), in detecting glaucomatous eyes in a self-referred population. Methods: The GQL-15 has been suggested to correlate with visual disability and psychophysical measures of visual function in glaucoma patients. The Moorfields MDT is a multi-location perimetry test with 32 white line stimuli presented on a grey background on a standard laptop computer. Each stimulus is displaced between computer frames to give the illusion of "apparent motion". Participants (N=312; 90% older than 45 years; 20.5% with a family history of glaucoma) self-referred to an advertised World Glaucoma Day event (March 2009) at the Jules Gonin Eye Hospital, Lausanne, Switzerland. Participants underwent a clinical exam (IOP, slit lamp, angle and disc examination by a general ophthalmologist), 90% completed the GQL-15 questionnaire and over 50% completed the MDT in both eyes. Those classified as abnormal on one or more of the following (IOP >21 mmHg, GQL-15 score >20, MDT score >2, clinical exam) underwent a follow-up clinical examination by a glaucoma specialist, including imaging and threshold perimetry. After the second examination, subjects were classified as "healthy" (H), "glaucoma suspect" (GS) (ocular hypertension and/or suspicious disc, angle closure with SD) or "glaucomatous" (G). Results: One hundred and ten subjects completed all 4 initial examinations; of these, 69 were referred to complete the 2nd examination and were classified as 8 G, 24 GS, and 37 H. The MDT detected 7/8 G and 7/24 GS, with a false referral rate of 3.8%. IOP detected 2/8 G and 8/24 GS, with a false referral rate of 8.9%. The GQL-15 detected 4/8 G and 16/24 GS, with a false referral rate of 42%. Conclusions: In this sample of participants attending a self-referral glaucoma detection event, the MDT performed significantly better than the GQL-15 and IOP in discriminating glaucomatous patients from healthy subjects. Further studies are required to assess the potential of the MDT as a glaucoma screening tool.
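To make the reported detection figures concrete, here is a minimal Python sketch of how a detection rate (sensitivity) and a false referral rate can be computed from screening counts. The abstract does not state the denominators behind its false referral rates, so the helper functions and the example numbers below are purely illustrative.

```python
# Illustrative only: generic screening-performance helpers, not the study's
# exact computation (the abstract does not give the denominators it used).

def detection_rate(detected: int, cases: int) -> float:
    """Proportion of true cases flagged by the test (sensitivity)."""
    return detected / cases

def false_referral_rate(false_referrals: int, non_cases_screened: int) -> float:
    """Proportion of non-cases incorrectly referred for follow-up."""
    return false_referrals / non_cases_screened

# MDT detected 7 of 8 glaucomatous (G) subjects, as reported in the abstract.
print(f"MDT detection rate for G: {detection_rate(7, 8):.1%}")   # 87.5%
# Hypothetical counts, only to show the calculation (not from the abstract).
print(f"Example false referral rate: {false_referral_rate(4, 105):.1%}")
```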
Abstract:
Due to various contexts and processes, forensic science communities may have different approaches, largely influenced by their criminal justice systems. However, forensic science practices share some common characteristics. One is the assurance of high (scientific) quality within processes and practices. For most crime laboratory directors and forensic science associations, this issue is governed by the triangle of quality, which represents the current paradigm of quality assurance in the field. It consists of the implementation of standardization, certification, accreditation, and an evaluation process. It constitutes a clear and sound way to exchange data between laboratories and enables databasing, since standardized methods ensure reliable and valid results; it is also a means of defining minimum requirements for practitioners' skills for specific forensic science activities. The control of each of these aspects offers non-forensic science partners the assurance that the entire process has been mastered and is trustworthy. Most of the standards focus on the analysis stage and do not consider the pre- and post-laboratory stages, namely the work achieved at the investigation scene and the evaluation and interpretation of the results, intended for intelligence beneficiaries or for court. Such localized consideration prevents forensic practitioners from identifying where the problems really lie with regard to criminal justice systems. According to a performance-management approach, scientific quality should not be restricted to standardized procedures and controls in forensic science practice. Ensuring high quality also strongly depends on the way a forensic science culture is assimilated (through specific education, training and workplaces) and on the way practitioners understand forensic science as a whole.
Abstract:
Preface. The starting point for this work, and eventually the subject of the whole thesis, was the question: how to estimate the parameters of affine stochastic volatility jump-diffusion models. These models are very important for contingent claim pricing. Their major advantage, the availability of analytical solutions for characteristic functions, made them the models of choice for many theoretical constructions and practical applications. At the same time, estimation of the parameters of stochastic volatility jump-diffusion models is not a straightforward task. The problem stems from the variance process, which is not observable. There are several estimation methodologies that deal with estimation problems of latent variables. One appeared to be particularly interesting. It proposes an estimator that, in contrast to the other methods, requires neither discretization nor simulation of the process: the Continuous Empirical Characteristic Function (ECF) estimator based on the unconditional characteristic function. However, the procedure was derived only for stochastic volatility models without jumps. Thus, it became the subject of my research. This thesis consists of three parts. Each one is written as an independent and self-contained article. At the same time, the questions answered by the second and third parts of this work arise naturally from the issues investigated and the results obtained in the first one. The first chapter is the theoretical foundation of the thesis. It proposes an estimation procedure for stochastic volatility models with jumps in both the asset price and variance processes. The estimation procedure is based on the joint unconditional characteristic function of the stochastic process. The major analytical result of this part, as well as of the whole thesis, is the closed-form expression for the joint unconditional characteristic function for stochastic volatility jump-diffusion models. The empirical part of the chapter suggests that, besides stochastic volatility, jumps in both the mean and the volatility equation are relevant for modelling returns of the S&P500 index, which has been chosen as a general representative of the stock asset class. Hence, the next question is: what jump process should be used to model returns of the S&P500? The decision about the jump process in the framework of affine jump-diffusion models boils down to defining the intensity of the compound Poisson process, a constant or some function of state variables, and to choosing the distribution of the jump size. While the jump in the variance process is usually assumed to be exponential, there are at least three distributions of the jump size currently used for the asset log-prices: normal, exponential and double exponential. The second part of this thesis shows that normal jumps in the asset log-returns should be used if we are to model the S&P500 index by a stochastic volatility jump-diffusion model. This is a surprising result: the exponential distribution has fatter tails, and for this reason either the exponential or the double exponential jump size was expected to provide the best fit of the stochastic volatility jump-diffusion models to the data. The idea of testing the efficiency of the Continuous ECF estimator on simulated data had already appeared when the first estimation results of the first chapter were obtained. In the absence of a benchmark or any ground for comparison, it is unreasonable to be sure that our parameter estimates and the true parameters of the models coincide.
The conclusion of the second chapter provides one more reason to perform that kind of test. Thus, the third part of this thesis concentrates on the estimation of the parameters of stochastic volatility jump-diffusion models on the basis of asset price time series simulated from various "true" parameter sets. The goal is to show that the Continuous ECF estimator based on the joint unconditional characteristic function is capable of finding the true parameters, and the third chapter shows that our estimator indeed has this ability. Once it is clear that the Continuous ECF estimator based on the unconditional characteristic function works, the next question follows immediately: can the computational effort be reduced without affecting the efficiency of the estimator, or can the efficiency of the estimator be improved without dramatically increasing the computational burden? The efficiency of the Continuous ECF estimator depends on the number of dimensions of the joint unconditional characteristic function used for its construction. Theoretically, the more dimensions there are, the more efficient the estimation procedure is. In practice, however, this relationship is not so straightforward because of increasing computational difficulties. The second chapter, for example, in addition to the choice of the jump process, discusses the possibility of using the marginal, i.e. one-dimensional, unconditional characteristic function in the estimation instead of the joint, bi-dimensional, unconditional characteristic function. As a result, the preference for one or the other depends on the model to be estimated; thus, the computational effort can be reduced in some cases without affecting the efficiency of the estimator. Improving the estimator's efficiency by increasing its dimensionality faces more difficulties. The third chapter of this thesis, in addition to what was discussed above, compares the performance of the estimators with bi- and three-dimensional unconditional characteristic functions on simulated data. It shows that the theoretical efficiency of the Continuous ECF estimator based on the three-dimensional unconditional characteristic function is not attainable in practice, at least for the moment, due to the limitations of the computing power and optimization toolboxes available to the general public. Thus, the Continuous ECF estimator based on the joint, bi-dimensional, unconditional characteristic function has every reason to exist and to be used for the estimation of the parameters of stochastic volatility jump-diffusion models.
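Because the preface only describes the Continuous ECF estimator in words, the following is a minimal, self-contained sketch of the idea for a toy model: i.i.d. jump-diffusion increments (a Gaussian diffusion part plus compound Poisson jumps with normally distributed sizes), whose unconditional characteristic function is known in closed form, fitted by minimizing a weighted L2 distance between the model and empirical characteristic functions. The toy model, grid, weights and parameter values are assumptions made for illustration; this is not the thesis's joint-unconditional-CF estimator for affine stochastic volatility models.

```python
# Sketch of an empirical characteristic function (ECF) estimator for a toy
# i.i.d. jump-diffusion increment. Illustrative assumptions throughout; not
# the thesis's estimator for affine stochastic volatility jump-diffusions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
dt = 1.0 / 252.0  # one trading day, in years

def simulate_increments(mu, sigma, lam, mu_j, sigma_j, n):
    """Simulate n i.i.d. log-return increments: drift + diffusion + jumps."""
    diffusion = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
    n_jumps = rng.poisson(lam * dt, size=n)                # jump counts per step
    jumps = rng.normal(n_jumps * mu_j, sigma_j * np.sqrt(n_jumps))
    return diffusion + jumps

def model_cf(u, theta):
    """Closed-form characteristic function of one increment under the toy model."""
    mu, sigma, lam, mu_j, sigma_j = theta
    jump_part = np.exp(1j * u * mu_j - 0.5 * (u * sigma_j) ** 2) - 1.0
    return np.exp(1j * u * mu * dt - 0.5 * (u * sigma) ** 2 * dt
                  + lam * dt * jump_part)

def empirical_cf(u, x):
    """Empirical characteristic function of the sample at arguments u."""
    return np.exp(1j * np.outer(u, x)).mean(axis=1)

def ecf_objective(theta, x, u, w):
    """Weighted L2 distance between empirical and model characteristic functions."""
    diff = empirical_cf(u, x) - model_cf(u, theta)
    return float(np.sum(w * np.abs(diff) ** 2))

true_theta = np.array([0.05, 0.20, 30.0, -0.01, 0.03])     # assumed "true" values
x = simulate_increments(*true_theta, n=20_000)
u = np.linspace(-200.0, 200.0, 201)                        # CF argument grid
w = np.exp(-0.5 * (u / 100.0) ** 2)                        # weight damps large |u|

start = np.array([0.0, 0.30, 10.0, 0.0, 0.05])
fit = minimize(ecf_objective, start, args=(x, u, w), method="Nelder-Mead",
               options={"maxiter": 20_000, "fatol": 1e-12, "xatol": 1e-8})
print("estimated parameters:", np.round(fit.x, 4))
```

In this i.i.d. toy setting only a one-dimensional (marginal) characteristic function is available; the thesis's point is precisely that using the joint, higher-dimensional unconditional characteristic function of the latent-volatility model can improve efficiency, at a computational cost.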
Abstract:
The molecular mechanisms controlling the progression of melanoma from a localized tumor to an invasive and metastatic disease are poorly understood. In an attempt to start defining a functional protein profile of melanoma progression, we have analyzed by LC-MS/MS the proteins associated with detergent-resistant membranes (DRMs), which are enriched in cholesterol/sphingolipid-containing membrane rafts, of melanoma cell lines derived from tumors at different stages of progression. Since membrane rafts are involved in several biological processes, including signal transduction and protein trafficking, we hypothesized that the association of proteins with rafts can be regulated during melanoma development and affect protein function and disease progression. We have identified a total of 177 proteins in the DRMs of the cell lines examined. Among these, we have found groups of proteins preferentially associated with DRMs of either less malignant radial growth phase/vertical growth phase (VGP) cells, or aggressive VGP and metastatic cells, suggesting that melanoma cells with different degrees of malignancy have different DRM profiles. Moreover, some proteins were found in DRMs of only some cell lines despite being expressed at similar levels in all the cell lines examined, suggesting the existence of mechanisms controlling their association with DRMs. We expect that understanding the mechanisms regulating DRM targeting and the activity of the proteins differentially associated with DRMs in relation to cell malignancy will help identify new molecular determinants of melanoma progression.
Abstract:
Background: Canakinumab, a fully human anti-IL-1β antibody, has been shown to control inflammation in gouty arthritis. This study evaluated changes in health-related quality of life (HRQoL) in patients treated with canakinumab or triamcinolone acetonide (TA). Methods: In an 8-wk, dose-ranging, active-controlled, single-blind study, patients (≥18 to ≤80 years) with an acute gouty arthritis flare, refractory to or with contraindications to NSAIDs and/or colchicine, were randomized to canakinumab 10, 25, 50, 90 or 150 mg sc or TA 40 mg im. HRQoL was assessed using patient-reported outcomes: the physical (PCS) and mental (MCS) component summary and subscale scores of the SF-36 (acute version 2) and functional disability (HAQ-DI). Results: In the canakinumab 150 mg group, the most severe impairment at baseline was reported for physical functioning and bodily pain, with levels of 41.5 and 36.0, respectively, which improved within 7 days to 80.0 and 72.2 (mean increases of 39.0 and 35.6) and at 8 wks to 86.1 and 86.6 (mean increases of 44.6 and 50.6); these were higher than the levels seen in the general US population. The TA group showed less improvement at 7 days (mean increases of 23.3 and 21.3 for physical functioning and bodily pain). Functional disability scores, measured by the HAQ-DI, decreased in both treatment groups (Table 1). Conclusions: Gouty arthritis patients treated with canakinumab showed a rapid improvement in physical and mental well-being based on SF-36 scores. In contrast to the TA group, patients treated with canakinumab showed improvement within 7 days in physical functioning and bodily pain, approaching the levels of the general population. Disclosure statement: U.A., A.F., V.M., D.R., P.S. and K.S. are employees and shareholders of Novartis Pharma AG. A.P. has received research support from Novartis Pharma AG. N.S. has received research support and consultancy fees from Novartis Pharmaceuticals Corporation, has served on advisory boards for Novartis, Takeda, Savient, URL Pharma and EnzymeRx, and is/has been a member of a speakers' bureau for Takeda. A.S. has received consultation fees from Novartis Pharma AG, Abbott, Bristol-Myers Squibb, Essex, Pfizer, MSD, Roche, UCB and Wyeth. All other authors have declared no conflicts of interest.
Abstract:
The safe and responsible development of engineered nanomaterials (ENMs), nanotechnology-based materials and products, together with the definition of regulatory measures and the implementation of "nano" legislation in Europe, requires a widely supported scientific basis and sufficient high-quality data upon which to base decisions. At the very core of such a scientific basis is a general agreement on key issues related to the risk assessment of ENMs, which encompass the key parameters to characterise ENMs, appropriate methods of analysis and the best approach to express the effect of ENMs in widely accepted dose-response toxicity tests. The following major conclusions were drawn: Due to the high batch-to-batch variability in the characteristics of commercially available and, to a lesser degree, laboratory-made ENMs, it is not possible to make general statements regarding the toxicity resulting from exposure to ENMs. 1) Concomitant with using the OECD priority list of ENMs, other criteria for the selection of ENMs, such as relevance for mechanistic (scientific) studies or risk-assessment-based studies, widespread availability (and thus high expected volumes of use) or consumer concern (route of consumer exposure depending on application), could be helpful. The OECD priority list focuses on the validity of OECD tests; therefore, source material will be first in scope for testing. However, for risk assessment it is much more relevant to have toxicity data for the material as present in the products/matrices to which humans and the environment are exposed. 2) For most, if not all, characteristics of ENMs, standardized analytical methods, though not necessarily validated, are available. Generally, these methods are only able to determine one single characteristic, and some of them can be rather expensive. Practically, it is currently not feasible to fully characterise ENMs. Many techniques available to measure the same nanomaterial characteristic produce contrasting results (e.g. reported sizes of ENMs). It was recommended that at least two complementary techniques be employed to determine a metric of ENMs. The first great challenge is to prioritise the metrics that are relevant in the assessment of biological dose-response relations and to develop analytical methods for characterising ENMs in biological matrices. It was generally agreed that one metric is not sufficient to describe ENMs fully. 3) Characterisation of ENMs in biological matrices starts with sample preparation. It was concluded that there currently is no standard approach/protocol for sample preparation to control agglomeration/aggregation and (re)dispersion. It was recommended that harmonization be initiated and that an exchange of protocols should take place. The precise methods used to disperse ENMs should be specifically, yet succinctly, described within the experimental section of a publication. 4) ENMs need to be characterised in the matrix as it is presented to the test system (in vitro/in vivo). 5) Alternative approaches (e.g. biological or in silico systems) for the characterisation of ENMs are simply not possible with the current knowledge. Contributors: Iseult Lynch, Hans Marvin, Kenneth Dawson, Markus Berges, Diane Braguer, Hugh J. Byrne, Alan Casey, Gordon Chambers, Martin Clift, Giuliano Elia, Teresa F. Fernandes, Lise Fjellsbø, Peter Hatto, Lucienne Juillerat, Christoph Klein, Wolfgang Kreyling, Carmen Nickel, and Vicki Stone.
Abstract:
PURPOSE: To compare clinical benefit response (CBR) and quality of life (QOL) in patients receiving gemcitabine (Gem) plus capecitabine (Cap) versus single-agent Gem for advanced/metastatic pancreatic cancer. PATIENTS AND METHODS: Patients were randomly assigned to receive GemCap (oral Cap 650 mg/m² twice daily on days 1 through 14 plus Gem 1,000 mg/m² in a 30-minute infusion on days 1 and 8 every 3 weeks) or Gem (1,000 mg/m² in a 30-minute infusion weekly for 7 weeks, followed by a 1-week break, and then weekly for 3 weeks every 4 weeks) for 24 weeks or until progression. CBR criteria and QOL indicators were assessed over this period. CBR was defined as improvement from baseline for ≥4 consecutive weeks in pain (pain intensity or analgesic consumption) and Karnofsky performance status, stability in one but improvement in the other, or stability in pain and performance status but improvement in weight. RESULTS: Of 319 patients, 19% treated with GemCap and 20% treated with Gem experienced a CBR, with a median duration of 9.5 and 6.5 weeks, respectively (P < .02); 54% of patients treated with GemCap and 60% treated with Gem had no CBR (remaining patients were not assessable). There was no treatment difference in QOL (n = 311). QOL indicators were improving under chemotherapy (P < .05). These changes differed by the time to failure, with a worsening 1 to 2 months before treatment failure (all P < .05). CONCLUSION: There is no indication of a difference in CBR or QOL between GemCap and Gem. Regardless of their initial condition, some patients experience an improvement in QOL on chemotherapy, followed by a worsening before treatment failure.
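As a reading aid, the composite CBR rule described above can be written as a small decision function. This is a hypothetical encoding for illustration: each flag stands for a change from baseline sustained for ≥4 consecutive weeks, and the trial's exact operational thresholds for "improvement" and "stability" are not reproduced here.

```python
# Hypothetical encoding of the clinical benefit response (CBR) rule described
# in the abstract; each flag refers to a change from baseline sustained for
# >= 4 consecutive weeks. Not the trial's protocol definition.

def clinical_benefit_response(pain_improved: bool, pain_stable: bool,
                              kps_improved: bool, kps_stable: bool,
                              weight_improved: bool) -> bool:
    """Composite CBR as summarized in the abstract."""
    if pain_improved and kps_improved:                      # both improved
        return True
    if (pain_improved and kps_stable) or (pain_stable and kps_improved):
        return True                                         # one stable, one improved
    if pain_stable and kps_stable and weight_improved:      # both stable, weight up
        return True
    return False

# Example: stable pain, improved Karnofsky performance status -> CBR
print(clinical_benefit_response(False, True, True, False, False))  # True
```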
Abstract:
AIMS: In patients with alcohol dependence, health-related quality of life (QOL) is reduced compared with that of a normal healthy population. The objective of the current analysis was to describe the evolution of health-related QOL in adults with alcohol dependence during a 24-month period after initial assessment for alcohol-related treatment in a routine practice setting, and its relation to drinking pattern, which was evaluated across clusters based on the predominant pattern of alcohol use, set against the influence of baseline variables. METHODS: The Medical Outcomes Study 36-Item Short-Form Survey (MOS-SF-36) was used to measure QOL at baseline and quarterly for 2 years among participants in CONTROL, a prospective observational study of patients initiating treatment for alcohol dependence. The sample consisted of 160 adults with alcohol dependence (65.6% males) with a mean (SD) age of 45.6 (12.0) years. Alcohol use data were collected using TimeLine Follow-Back. Based on the participant's reported alcohol use, three clusters were identified: 52 (32.5%) mostly abstainers, 64 (40.0%) mostly moderate drinkers and 44 (27.5%) mostly heavy drinkers. Mixed-effect linear regression analysis was used to identify factors potentially associated with the mental and physical summary MOS-SF-36 scores at each time point. RESULTS: The mean (SD) MOS-SF-36 mental component summary score (range 0-100, norm 50) was 35.7 (13.6) at baseline [mostly abstainers: 40.4 (14.6); mostly moderate drinkers 35.6 (12.4); mostly heavy drinkers 30.1 (12.1)]. The score improved to 43.1 (13.4) at 3 months [mostly abstainers: 47.4 (12.3); mostly moderate drinkers 44.2 (12.7); mostly heavy drinkers 35.1 (12.9)], to 47.3 (11.4) at 12 months [mostly abstainers: 51.7 (9.7); mostly moderate drinkers 44.8 (11.9); mostly heavy drinkers 44.1 (11.3)], and to 46.6 (11.1) at 24 months [mostly abstainers: 49.2 (11.6); mostly moderate drinkers 45.7 (11.9); mostly heavy drinkers 43.7 (8.8)]. Mixed-effect linear regression multivariate analyses indicated that there was a significant association between a lower 2-year follow-up MOS-SF-36 mental score and being a mostly heavy drinker (-6.97, P < 0.001) or mostly moderate drinker (-3.34 points, P = 0.018) [compared to mostly abstainers], being female (-3.73, P = 0.004), and having a Beck Inventory scale score ≥8 (-6.54, P < 0.001) at baseline. The mean (SD) MOS-SF-36 physical component summary score was 48.8 (10.6) at baseline, remained stable over the follow-up and did not differ across the three clusters. Mixed-effect linear regression univariate analyses found that the average 2-year follow-up MOS-SF-36 physical score was increased (compared with mostly abstainers) in mostly heavy drinkers (+4.44, P = 0.007); no other variables tested influenced the MOS-SF-36 physical score. CONCLUSION: Among individuals with alcohol dependence, a rapid improvement was seen in the mental dimension of QOL following treatment initiation, which was maintained during 24 months. Improvement was associated with the pattern of alcohol use, becoming close to the general population norm in patients classified as mostly abstainers, improving substantially in mostly moderate drinkers and improving only slightly in mostly heavy drinkers. The physical dimension of QOL was generally in the normal range but was not associated with drinking patterns.
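For orientation, here is a minimal sketch of a mixed-effects analysis of the kind described: repeated SF-36 mental component scores nested within participants, with drinking-pattern cluster, sex, baseline depression and visit as fixed effects and a random intercept per participant. The data file, column names and exact model formula are hypothetical, not the study's specification.

```python
# Hypothetical sketch of a mixed-effect linear regression like the one
# described: repeated MOS-SF-36 mental component scores (mcs) nested within
# participants. File and column names are assumptions, not the CONTROL data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("control_study_long.csv")  # hypothetical long-format data

model = smf.mixedlm(
    # fixed effects: drinking cluster, sex, baseline Beck >= 8, visit
    "mcs ~ C(cluster, Treatment('mostly_abstainer')) + female + beck_ge8 + C(visit)",
    data=df,
    groups=df["participant_id"],            # random intercept per participant
)
result = model.fit()
print(result.summary())
```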
Abstract:
OBJECTIVE: Little is known regarding health-related quality of life and its relation with physical activity level in the general population. Our primary objective was to systematically review data examining this relationship. METHODS: We systematically searched MEDLINE, EMBASE, CINAHL, and PsycINFO for health-related quality of life and physical activity-related keywords in titles, abstracts, or indexing fields. RESULTS: From 1426 retrieved references, 55 citations were judged to require further evaluation. Fourteen studies were retained for data extraction and analysis; seven were cross-sectional studies, two were cohort studies, four were randomized controlled trials and one used a combined cross-sectional and longitudinal design. Thirteen different methods of physical activity assessment were used. Most health-related quality of life instruments were related to the Medical Outcomes Study SF-36 questionnaire. Cross-sectional studies showed a consistently positive association between self-reported physical activity and health-related quality of life. The largest cross-sectional study reported an adjusted odds ratio for "having 14 or more unhealthy days" during the previous month of 0.40 (95% confidence interval 0.36-0.45) for those meeting recommended levels of physical activity compared with inactive subjects. Cohort studies and randomized controlled trials tended to show a positive effect of physical activity on health-related quality of life but, like the cross-sectional studies, had methodological limitations. CONCLUSION: Cross-sectional data showed a consistently positive association between physical activity level and health-related quality of life. Limited evidence from randomized controlled trials and cohort studies precludes a definitive statement about the nature of this association.
Abstract:
Interest groups advocate centre-specific outcome data as a useful tool for patients in choosing a hospital for their treatment and for decision-making by politicians and the insurance industry. Haematopoietic stem cell transplantation (HSCT) requires significant infrastructure and represents a cost-intensive procedure. It therefore qualifies as a prime target for such a policy. We made use of the comprehensive database of the Swiss Blood Stem Cells Transplant Group (SBST) to evaluate the potential use of mortality rates. Nine institutions reported a total of 4717 HSCT - 1427 allogeneic (30.3%), 3290 autologous (69.7%) - in 3808 patients between 1997 and 2008. Data were analysed for survival and for transplantation-related mortality (TRM) at day 100 and at 5 years. The data showed marked and significant differences between centres in unadjusted analyses. These differences were absent or marginal when the results were adjusted for disease, year of transplant and the EBMT risk score (a score incorporating patient age, disease stage, time interval between diagnosis and transplantation, and, for allogeneic transplants, donor type and donor-recipient gender combination) in a multivariable analysis. These data indicate comparable quality among centres in Switzerland. They show that comparison of crude centre-specific outcome data without adjustment for the patient mix may be misleading. Mandatory data collection and systematic review of all cases within a comprehensive quality management system might, in contrast, serve as a model to ascertain the quality of other cost-intensive therapies in Switzerland.
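To illustrate why crude centre comparisons can mislead, here is a minimal sketch of a case-mix-adjusted comparison in the spirit of the analysis described: day-100 TRM modelled with centre indicators, first alone and then with adjustment for disease, transplant year and the EBMT risk score. The data file and column names are hypothetical, not the SBST registry.

```python
# Hypothetical sketch of a case-mix-adjusted centre comparison; file and
# column names are assumptions, not the SBST registry format.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("hsct_registry.csv")  # hypothetical registry extract

# Unadjusted: centre indicators only; differences may simply reflect patient mix.
unadjusted = smf.logit("trm_day100 ~ C(centre)", data=df).fit()

# Adjusted: disease, transplant year and EBMT risk score added; centre effects
# driven by case mix should shrink toward zero.
adjusted = smf.logit(
    "trm_day100 ~ C(centre) + C(disease) + transplant_year + ebmt_score",
    data=df,
).fit()

print(unadjusted.params.filter(like="centre"))
print(adjusted.params.filter(like="centre"))
```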