774 results for "Prediction intervals"


Relevance: 60.00%

Abstract:

OBJECTIVES Mortality in patients starting antiretroviral therapy (ART) is higher in Malawi and Zambia than in South Africa. We examined whether different monitoring of ART (viral load [VL] in South Africa and CD4 count in Malawi and Zambia) could explain this mortality difference. DESIGN Mathematical modelling study based on data from ART programmes. METHODS We used a stochastic simulation model to study the effect of VL monitoring on mortality over 5 years. In baseline scenario A all parameters were identical between strategies except for more timely and complete detection of treatment failure with VL monitoring. Additional scenarios introduced delays in switching to second-line ART (scenario B) or higher virologic failure rates (due to worse adherence) when monitoring was based on CD4 counts only (scenario C). Results are presented as relative risks (RR) with 95% prediction intervals and percent of observed mortality difference explained. RESULTS RRs comparing VL with CD4 cell count monitoring were 0.94 (0.74-1.03) in scenario A, 0.94 (0.77-1.02) with delayed switching (scenario B) and 0.80 (0.44-1.07) when assuming a 3-times higher rate of failure (scenario C). The observed mortality at 3 years was 10.9% in Malawi and Zambia and 8.6% in South Africa (absolute difference 2.3%). The percentage of the mortality difference explained by VL monitoring ranged from 4% (scenario A) to 32% (scenarios B and C combined, assuming a 3-times higher failure rate). Eleven percent was explained by non-HIV related mortality. CONCLUSIONS VL monitoring reduces mortality moderately when assuming improved adherence and decreased failure rates.
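The relative risks with 95% prediction intervals reported above come from repeated runs of a stochastic simulation. A minimal sketch of that percentile approach, with made-up risk and effect parameters rather than the model's actual inputs:

```python
import random
import statistics

random.seed(1)

def simulate_mortality(n_patients, base_risk, rr_run):
    # One stochastic run: fraction of patients dying under a given strategy.
    deaths = sum(random.random() < base_risk * rr_run for _ in range(n_patients))
    return deaths / n_patients

# Per-run relative effect of VL vs CD4 monitoring, drawn from an assumed
# distribution that reflects parameter uncertainty (illustrative only).
runs = []
for _ in range(1000):
    rr_run = random.gauss(0.94, 0.07)
    m_vl = simulate_mortality(2000, 0.10, rr_run)    # VL-monitoring arm
    m_cd4 = simulate_mortality(2000, 0.10, 1.0)      # CD4-monitoring arm
    runs.append(m_vl / m_cd4)

# Percentile-based 95% prediction interval across simulation runs.
runs.sort()
rr_median = statistics.median(runs)
pi_lo, pi_hi = runs[int(0.025 * len(runs))], runs[int(0.975 * len(runs))]
```

The interval captures run-to-run variation of the simulated RR, which is what distinguishes a prediction interval from a confidence interval on a single estimate.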

Relevance: 60.00%

Abstract:

Objective: To evaluate a new triaxial accelerometer device for the prediction of energy expenditure, measured as VO2/kg, in obese adults and normal-weight controls during activities of daily life. Subjects and methods: Thirty-seven obese adults (Body Mass Index (BMI) 37±5.4) and seventeen controls (BMI 23±1.8) performed eight activities for 5 to 8 minutes while wearing a triaxial accelerometer on the right thigh. Simultaneously, VO2 and VCO2 were measured using a portable metabolic system. The relationship between accelerometer counts (AC) and VO2/kg was analysed using spline regression and linear mixed-effects models. Results: For all activities, VO2/kg was significantly lower in obese participants than in normal-weight controls. A linear relationship between AC and VO2/kg existed only within accelerometer values from 0 to 300 counts/min, with an increase of 3.7 (95% confidence interval (CI) 3.4 - 4.1) and 3.9 ml/min (95% CI 3.4 - 4.3) per increase of 100 counts/min in obese and normal-weight adults, respectively. Linear modelling of the whole range yielded wide prediction intervals for VO2/kg of ±6.3 and ±7.3 ml/min in the two groups, respectively. Conclusion: In obese and normal-weight adults, the use of AC for predicting energy expenditure, defined as VO2/kg, across a broad range of physical activities, characterized by varying intensities and types of muscle work, is limited.
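A prediction interval from a simple linear fit of VO2/kg on accelerometer counts can be sketched as follows; the synthetic data below only mimic the reported slope (about 3.7 ml/min per 100 counts/min in the linear range) and are not the study's measurements:

```python
import math
import random

random.seed(7)

# Synthetic, illustrative data: counts 0-300 counts/min, VO2/kg rising
# ~0.037 ml/min per count, plus noise.
x = [random.uniform(0, 300) for _ in range(60)]
y = [5.0 + 0.037 * xi + random.gauss(0, 1.0) for xi in x]

n = len(x)
mx, my = sum(x) / n, sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
intercept = my - slope * mx

# Residual standard error of the fit.
sse = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
s = math.sqrt(sse / (n - 2))

def prediction_interval(x0, z=1.96):
    """Approximate 95% interval for a NEW observation at x0 (normal
    quantile used in place of the exact t quantile for simplicity)."""
    se_pred = s * math.sqrt(1 + 1 / n + (x0 - mx) ** 2 / sxx)
    fit = intercept + slope * x0
    return fit - z * se_pred, fit + z * se_pred

lo, hi = prediction_interval(150)
```

The `1` under the square root is what makes this a prediction interval rather than a confidence interval for the mean response; it is why the intervals quoted in the abstract are so wide.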

Relevance: 60.00%

Abstract:

In this work, we propose Seasonal Dynamic Factor Analysis (SeaDFA), an extension of Nonstationary Dynamic Factor Analysis through which one can perform dimensionality reduction on vectors of time series in such a way that both common and specific components are extracted. Furthermore, the common factors capture not only regular dynamics (stationary or not) but also seasonal ones, by following a multiplicative seasonal VARIMA(p, d, q) × (P, D, Q)s model. Additionally, a bootstrap procedure that does not need a backward representation of the model is proposed, enabling inference on all the parameters of the model. A bootstrap scheme developed for forecasting incorporates the uncertainty due to parameter estimation, improving the coverage of forecast intervals. A challenging application is provided: the new model and bootstrap scheme are applied to an innovative subject in electricity markets, the computation of long-term point forecasts and prediction intervals of electricity prices. Several appendices with technical details, an illustrative example, and an additional table are available online as Supplementary Materials.
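The idea of a bootstrap scheme that carries parameter-estimation uncertainty into forecast intervals can be illustrated on a plain AR(1) series, a far simpler stand-in for the SeaDFA common factors (all numbers below are illustrative):

```python
import random

random.seed(3)

# Simulate an AR(1) series as a stand-in for a fitted common factor.
phi_true, series = 0.7, [0.0]
for _ in range(200):
    series.append(phi_true * series[-1] + random.gauss(0, 1))

# Fit AR(1) by least squares and collect residuals.
pairs = list(zip(series[:-1], series[1:]))
phi = sum(a * b for a, b in pairs) / sum(a * a for a, _ in pairs)
resid = [b - phi * a for a, b in pairs]

# Residual bootstrap of the 1-step-ahead forecast: re-estimate phi on each
# resampled path, so the interval also reflects parameter uncertainty.
last, forecasts = series[-1], []
for _ in range(500):
    boot = [series[0]]
    for _ in range(len(series) - 1):
        boot.append(phi * boot[-1] + random.choice(resid))
    bp = list(zip(boot[:-1], boot[1:]))
    phi_b = sum(a * b for a, b in bp) / sum(a * a for a, _ in bp)
    forecasts.append(phi_b * last + random.choice(resid))

forecasts.sort()
lo, hi = forecasts[12], forecasts[487]   # ~95% prediction interval
```

Because `phi_b` varies across bootstrap paths, the resulting interval is slightly wider than one built from residual resampling alone, which is the coverage improvement the abstract refers to.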

Relevance: 60.00%

Abstract:

Prediction at ungauged sites is essential for water resources planning and management. Ungauged sites have no observations of flood magnitudes, but some site and basin characteristics are known. Regression models relate physiographic and climatic basin characteristics to flood quantiles, which can be estimated from observed data at gauged sites. However, these models assume linear relationships between variables, and prediction intervals are estimated from the variance of the residuals of the fitted model. Furthermore, the effect of uncertainties in the explanatory variables on the dependent variable cannot be assessed. This paper presents a methodology to propagate the uncertainties that arise in the process of predicting flood quantiles at ungauged basins with a regression model. In addition, Bayesian networks were explored as a feasible tool for predicting flood quantiles at ungauged sites. Bayesian networks benefit from taking uncertainties into account thanks to their probabilistic nature; they are able to capture non-linear relationships between variables, and they give a probability distribution of discharges as output. The methodology was applied to a case study in the Tagus basin in Spain.
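Propagating uncertainty in the explanatory variables, which a plain residual-variance interval cannot capture, can be sketched with a Monte Carlo pass through a hypothetical regional model (coefficients, spreads, and the residual error are illustrative assumptions, not values from the paper):

```python
import math
import random
import statistics

random.seed(11)

# Hypothetical fitted regional model for a flood quantile (log space):
# log Q = b0 + b1*log(area) + b2*log(precip) + error.
b0, b1, b2, s = -2.0, 0.75, 0.4, 0.25

draws = []
for _ in range(5000):
    area = random.gauss(350, 25)      # basin area (km^2), assumed uncertainty
    precip = random.gauss(600, 40)    # mean annual precipitation (mm), uncertain too
    log_q = b0 + b1 * math.log(area) + b2 * math.log(precip) + random.gauss(0, s)
    draws.append(math.exp(log_q))     # discharge draw (m^3/s)

# The draws form a predictive distribution, as a Bayesian network would give.
draws.sort()
median_q = statistics.median(draws)
pi90 = (draws[int(0.05 * len(draws))], draws[int(0.95 * len(draws))])
```

Sampling the descriptors alongside the residual error widens the interval relative to the residual-only case, which is the paper's motivation for moving beyond the classical regression interval.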

Relevance: 60.00%

Abstract:

Groundwater protection is a priority of EU environmental policy. The EU has therefore established a framework for the prevention and control of pollution, which includes provisions for assessing the chemical status of groundwater and reducing the presence of contaminants in it. The measures include: • criteria for assessing the chemical status of groundwater bodies • criteria for identifying significant and sustained upward trends in contaminant concentrations, and for defining starting points for the reversal of such trends • prevention and limitation of indirect discharges of pollutants resulting from percolation through the soil or subsoil. The basic tools for the development of these policies are the Water Framework Directive and the Groundwater Daughter Directive. According to them, groundwater bodies are considered to be in good chemical status if: • the measured or predicted concentration of nitrates does not exceed 50 mg/l, and concentrations of active pesticide ingredients, their metabolites, and reaction products do not exceed 0.1 µg/l (0.5 µg/l for the total of all pesticides measured) • the concentrations of certain hazardous substances are below the threshold values set by the Member States; these cover, at a minimum, ammonium, arsenic, cadmium, chloride, lead, mercury, sulphates, trichloroethylene, and tetrachloroethylene • the concentration of any other contaminant fits the definition of good chemical status set out in Annex V of the Water Framework Directive • if the value corresponding to a quality standard or threshold value is exceeded, an investigation confirms, among other things, the absence of significant risk to the environment. 
Analysing the statistical behaviour of data from the monitoring network can be considerably complex, owing to the positive skew and asymmetric distribution such data usually exhibit, caused by outliers and by different soil types and mixtures of contaminants. Furthermore, the distribution of certain components in groundwater may include concentrations below the detection limit, or may be non-stationary owing to linear or seasonal trends. In the first case, the unknown values must be estimated using procedures that vary with the percentage of values below the detection limit and the number of applicable detection limits. In the second case, trends must be removed before hypothesis tests are performed on the residuals. This thesis aims to establish the statistical basis for the rigorous analysis of monitoring-network data in order to evaluate the chemical status of groundwater bodies, to identify upward trends in contaminant concentrations, and to detect significant deterioration, both where a quality standard has been set by the competent environmental agency and where it has not. 
To design a methodology covering the full range of existing cases, data from the Official Network for Monitoring and Control of the Chemical Status of Groundwater of the Spanish Ministry of Agriculture, Food and Environment (Magrama) were analysed. Then, since River Basin Management Plans are the basic tool of the Directives, the Júcar river basin was selected, given its designation as a pilot basin in the Common Implementation Strategy (CIS) of the European Commission. The main objective of the working groups created for this purpose was to implement the Groundwater Daughter Directive and the related elements of the Water Framework Directive, in particular data collection at monitoring stations and the preparation of the first River Basin Management Plan. Given the size of the area, and in order to analyse a single groundwater body (the management unit defined in the Directives), a pilot zone (Plana de Vinaroz-Peñíscola) was selected, and the procedures developed were applied there to determine the chemical status of that body. The data examined generally contain no contaminant concentrations associated with point sources, so concentration values of the most common constituents, namely nitrates and chlorides, were selected for the study. 
The strategy designed combines trend analysis with confidence intervals where a quality standard exists, and with prediction intervals where no standard exists or the standard has been exceeded. Values below the detection limit were handled analogously, taking the available values from the Plana de Sagunto pilot zone and simulating different degrees of censoring in order to compare the resulting intervals with those obtained from the real data, thereby verifying the effectiveness of the method. The final result is a general methodology that integrates the existing cases and makes it possible to define the chemical status of a groundwater body, to verify the existence of significant impacts on groundwater quality, and to evaluate the effectiveness of the programmes of measures adopted within the framework of the River Basin Management Plan.
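The confidence-interval versus prediction-interval distinction at the core of the strategy can be illustrated for a single nitrate record (synthetic values; the 50 mg/l figure is the quality standard cited above):

```python
import math
import random
import statistics

random.seed(5)

# Illustrative nitrate record (mg/l) at one monitoring station.
obs = [random.gauss(38.0, 6.0) for _ in range(24)]

n = len(obs)
mean = statistics.mean(obs)
s = statistics.stdev(obs)
z = 1.96  # normal approximation to the t quantile

# Confidence interval: where the MEAN concentration lies -> compared with
# the 50 mg/l standard when assessing chemical status.
ci = (mean - z * s / math.sqrt(n), mean + z * s / math.sqrt(n))

# Prediction interval: where the NEXT observation is expected to fall -> a
# future value outside it flags a potentially significant deterioration.
pi = (mean - z * s * math.sqrt(1 + 1 / n), mean + z * s * math.sqrt(1 + 1 / n))
```

The prediction interval is always the wider of the two; this sketch assumes roughly normal, trend-free data, whereas the thesis first handles censoring and removes trends before any such interval is built.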

Relevance: 60.00%

Abstract:

Crash reduction factors (CRFs) are used to estimate the potential number of traffic crashes expected to be prevented by investment in safety improvement projects. The method used to develop CRFs in Florida has been based on the commonly used before-and-after approach. This approach suffers from a widely recognized problem known as regression-to-the-mean (RTM). The Empirical Bayes (EB) method has been introduced as a means of addressing the RTM problem. This method requires information from both the treatment and reference sites in order to predict the expected number of crashes had the safety improvement projects at the treatment sites not been implemented. The information from the reference sites is estimated from a safety performance function (SPF), a mathematical relationship that links crashes to traffic exposure. The objective of this dissertation was to develop SPFs for the different functional classes of the Florida State Highway System. Crash data from years 2001 through 2003, along with traffic and geometric data, were used in the SPF model development. SPFs for both rural and urban roadway categories were developed. The modeling data were based on one-mile segments containing homogeneous traffic and geometric conditions within each segment; segments involving intersections were excluded. Scatter plots of the data show that the relationships between crashes and traffic exposure are nonlinear: crashes increase with traffic exposure at an increasing rate. Four regression models, namely Poisson (PRM), Negative Binomial (NBRM), zero-inflated Poisson (ZIP), and zero-inflated Negative Binomial (ZINB), were fitted to the one-mile segment records for individual roadway categories. The best model was selected for each category based on a combination of the Likelihood Ratio test, the Vuong statistical test, and Akaike's Information Criterion (AIC). The NBRM model was found to be appropriate for only one category, and the ZINB model was found to be more appropriate for six other categories. The overall results show that the Negative Binomial distribution model generally provides a better fit for the data than the Poisson distribution model. In addition, the ZINB model was found to give the best fit when the count data exhibit excess zeros and over-dispersion, which was the case for most of the roadway categories. While model validation shows that most data points fall within the 95% prediction intervals of the models developed, the Pearson goodness-of-fit measure does not show statistical significance. This is expected, as traffic volume is only one of the many factors contributing to the overall crash experience, and the SPFs are to be applied in conjunction with Accident Modification Factors (AMFs) to further account for the safety impacts of major geometric features before arriving at the final crash prediction. However, with improved traffic and crash data quality, the crash prediction power of SPF models may be further improved.
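The EB correction for RTM described above combines the SPF prediction with the site's observed count, weighted by the Negative Binomial dispersion. A hedged sketch with assumed coefficients, not the dissertation's fitted Florida models:

```python
import math

def spf_expected_crashes(aadt, years, b0=-5.0, b1=0.65):
    """SPF of the common power form: expected crashes = years * exp(b0) * AADT**b1.
    Coefficients here are illustrative placeholders."""
    return years * math.exp(b0) * aadt ** b1

mu = spf_expected_crashes(aadt=12000, years=3)   # SPF prediction for the site
observed = 21                                    # crashes recorded over 3 years
k = 0.8                                          # NB inverse dispersion (assumed)

# EB weight: how much to trust the SPF versus the site's own count. Higher
# dispersion (smaller k) shifts weight toward the observed count.
w = 1 / (1 + mu / k)
eb_estimate = w * mu + (1 - w) * observed
```

The EB estimate always lands between the SPF prediction and the raw count, which is exactly how it damps the regression-to-the-mean bias of a naive before-and-after comparison.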

Relevance: 60.00%

Abstract:

This thesis develops bootstrap methods for factor models, which have been widely used to generate forecasts since the pioneering article by Stock and Watson (2002) on diffusion indices. These models accommodate a large number of macroeconomic and financial variables as predictors, a useful feature for incorporating the diverse information available to economic agents. The thesis therefore proposes econometric tools that improve inference in factor models using latent factors extracted from a large panel of observed predictors. It is divided into three complementary chapters, the first two written in collaboration with Sílvia Gonçalves and Benoit Perron. In the first article, we study how bootstrap methods can be used for inference in models forecasting h periods into the future. To this end, it examines bootstrap inference in a factor-augmented regression setting where the errors may be autocorrelated. It generalizes the results of Gonçalves and Perron (2014) and proposes and justifies two residual-based approaches: the block wild bootstrap and the dependent wild bootstrap. Our simulations show improved coverage rates of confidence intervals for the estimated coefficients using these approaches, compared with asymptotic theory and the wild bootstrap, in the presence of serial correlation in the regression errors. The second chapter proposes bootstrap methods for constructing forecast intervals that relax the assumption of normality of the innovations. We propose bootstrap prediction intervals for an observation h periods into the future and for its conditional mean. We assume these forecasts are made using a set of factors extracted from a large panel of variables. 
Because we treat these factors as latent, our forecasts depend on both the estimated factors and the estimated regression coefficients. Under regularity conditions, Bai and Ng (2006) proposed the construction of asymptotic intervals under the assumption of Gaussian innovations. The bootstrap allows us to relax this assumption and to construct valid prediction intervals under more general assumptions. Moreover, even under Gaussianity, the bootstrap leads to more accurate intervals when the cross-sectional dimension is relatively small, because it accounts for the bias of the ordinary least squares estimator, as shown in a recent study by Gonçalves and Perron (2014). In the third chapter, we suggest consistent selection procedures for factor-augmented regressions in finite samples. We first show that the usual cross-validation method is inconsistent, but that its generalization, leave-d-out cross-validation, selects the smallest set of estimated factors spanning the space generated by the true factors. The second criterion, whose validity we also establish, generalizes the bootstrap approximation of Shao (1996) to factor-augmented regressions. Simulations show an improved probability of parsimoniously selecting the estimated factors compared with available selection methods. The empirical application revisits the relationship between macroeconomic and financial factors and excess returns on the US stock market. Among the factors estimated from a large panel of US macroeconomic and financial data, factors strongly correlated with interest-rate spreads and the Fama-French factors have good predictive power for excess returns.
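The residual-based wild bootstrap at the heart of the first chapter can be illustrated in its simplest i.i.d. form with Rademacher weights (the block and dependent variants differ in how the weights are drawn across time; everything below is an illustrative sketch, not the thesis's factor-augmented setting):

```python
import random

random.seed(2)

# Simple regression with heteroskedastic errors, where the wild bootstrap
# is preferable to naive residual resampling.
x = [random.gauss(0, 1) for _ in range(150)]
y = [0.5 * xi + random.gauss(0, 1 + abs(xi)) for xi in x]

def ols_slope(xs, ys):
    sxx = sum(xi * xi for xi in xs)
    return sum(xi * yi for xi, yi in zip(xs, ys)) / sxx

beta = ols_slope(x, y)
resid = [yi - beta * xi for xi, yi in zip(x, y)]

boot = []
for _ in range(999):
    # Multiply each residual by a random sign (Rademacher weight), which
    # preserves the per-observation error variance.
    ystar = [beta * xi + ri * random.choice((-1, 1)) for xi, ri in zip(x, resid)]
    boot.append(ols_slope(x, ystar))

boot.sort()
ci = (boot[24], boot[974])   # 95% bootstrap confidence interval for the slope
```

Because each bootstrap sample keeps residual `i` attached to observation `i`, heteroskedasticity is reproduced rather than destroyed, which is the property the block and dependent extensions carry over to serially correlated errors.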

Relevance: 60.00%

Abstract:

INTRODUCTION: Differentiation between normal solid (non-cystic) pineal glands and pineal pathologies on brain MRI is difficult. The aim of this study was to assess the size of the solid pineal gland in children (0-5 years) and compare the findings with published pineoblastoma cases. METHODS: We retrospectively analyzed the size (width, height, planimetric area) of solid pineal glands in 184 non-retinoblastoma patients (73 female, 111 male) aged 0-5 years on MRI. The effect of age and gender on gland size was evaluated. Linear regression analysis was performed to analyze the relation between size and age. Ninety-nine percent prediction intervals around the mean were used to construct a normal size range per age, with the upper bound of the prediction interval as the cutoff for normalcy. RESULTS: There was no significant interaction of gender and age for any of the three pineal gland parameters (width, height, and area). Linear regression analysis gave 99% upper prediction bounds of 7.9 mm, 4.8 mm, and 25.4 mm², respectively, for width, height, and area. The slopes (size increase per month) of each parameter were 0.046, 0.023, and 0.202, respectively. Ninety-three percent (95% CI 66-100%) of asymptomatic solid pineoblastomas were larger than the 99% upper bound. CONCLUSION: This study establishes norms for solid pineal gland size in non-retinoblastoma children aged 0-5 years. Knowledge of the size of the normal pineal gland is helpful for the detection of pineal gland abnormalities, particularly pineoblastoma.
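The cutoff-for-normalcy construction can be sketched as the upper bound of a 99% regression prediction interval. The synthetic data below only mimic the reported width slope of ~0.046 mm/month; the intercept and noise level are assumptions:

```python
import math
import random

random.seed(9)

# Illustrative normative data: pineal gland width (mm) versus age (months).
ages = [random.uniform(0, 60) for _ in range(184)]
widths = [4.0 + 0.046 * a + random.gauss(0, 0.8) for a in ages]

n = len(ages)
ma = sum(ages) / n
mw = sum(widths) / n
sxx = sum((a - ma) ** 2 for a in ages)
slope = sum((a - ma) * (w - mw) for a, w in zip(ages, widths)) / sxx
intercept = mw - slope * ma
sse = sum((w - (intercept + slope * a)) ** 2 for a, w in zip(ages, widths))
s = math.sqrt(sse / (n - 2))

def upper_bound_99(age):
    """Upper limit of the two-sided 99% prediction interval at a given age
    (normal quantile 2.576 used in place of the exact t quantile)."""
    se_pred = s * math.sqrt(1 + 1 / n + (age - ma) ** 2 / sxx)
    return intercept + slope * age + 2.576 * se_pred

cutoff_36mo = upper_bound_99(36)   # normalcy cutoff at age 3 years
```

A measured gland wider than `upper_bound_99(age)` would fall outside the normal range for that age, which mirrors how the study flags candidate pineoblastomas.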


Relevance: 40.00%

Abstract:

The ratio of cystatin C (cysC) to creatinine (crea) is regarded as a marker of glomerular filtration quality associated with cardiovascular morbidities. We sought to determine reference intervals for the serum cysC-crea ratio in seniors. Furthermore, we sought to determine whether other low-molecular-weight molecules exhibit a similar behavior in individuals with altered glomerular filtration quality. Finally, we investigated associations with adverse outcomes. A total of 1382 subjectively healthy Swiss volunteers aged 60 years or older were enrolled in the study. Reference intervals were calculated according to Clinical & Laboratory Standards Institute (CLSI) guideline EP28-A3c. After a baseline exam, a 4-year follow-up survey recorded information about overall morbidity and mortality. The cysC-crea ratio (mean 0.0124 ± 0.0026 mg/μmol) was significantly higher in women and increased progressively with age. Other associated factors were hemoglobin A1c, mean arterial pressure, and C-reactive protein (P < 0.05 for all). Participants exhibiting shrunken pore syndrome had significantly higher ratios of 3.5-66.5 kDa molecules (brain natriuretic peptide, parathyroid hormone, β2-microglobulin, cystatin C, retinol-binding protein, thyroid-stimulating hormone, α1-acid glycoprotein, lipase, amylase, prealbumin, and albumin) to creatinine. There was no such difference in the ratios of very low-molecular-weight molecules (urea, uric acid) to creatinine, or in the ratios of molecules larger than 66.5 kDa (transferrin, haptoglobin) to creatinine. The cysC-crea ratio was significantly predictive of mortality and subjective overall morbidity at follow-up in logistic regression models adjusting for several factors. The cysC-crea ratio exhibits age- and sex-specific reference intervals in seniors. In conclusion, the cysC-crea ratio may indicate the relative retention of biologically active low-molecular-weight compounds and can independently predict the risk of overall mortality and morbidity in the elderly.
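The nonparametric reference-interval computation recommended by CLSI EP28-A3c amounts to taking the central 95% of the reference distribution. A sketch on synthetic ratios drawn around the reported mean (not the study's data, and without the age/sex partitioning the study applies):

```python
import random

random.seed(4)

# Illustrative cysC-crea ratios (mg/umol) for a reference cohort of 1382.
values = sorted(random.gauss(0.0124, 0.0026) for _ in range(1382))

# Nonparametric reference interval: the 2.5th and 97.5th percentiles of the
# rank-ordered reference values (rank formula r = p * (n + 1)).
n = len(values)
lower = values[round(0.025 * (n + 1)) - 1]
upper = values[round(0.975 * (n + 1)) - 1]
```

With n = 1382 the sample comfortably exceeds the 120-subject minimum usually quoted for the nonparametric method; the study additionally reports intervals stratified by age and sex, which this sketch omits.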

Relevance: 30.00%

Abstract:

Stress echocardiography has been shown to improve the diagnosis of coronary artery disease in the presence of hypertension, but its value in prognostic evaluation is unclear. We sought to determine whether stress echocardiography could be used to predict mortality in 2363 patients with hypertension, who were followed for up to 10 years (mean 4.0+/-1.8) for death and revascularization. Stress echocardiograms were normal in 1483 patients (63%), 16% had resting left ventricular (LV) dysfunction alone, and 21% had ischemia. Abnormalities were confined to one territory in 489 patients (21%) and to multiple territories in 365 patients (15%). Cardiac death was less frequent among the patients able to exercise than among those undergoing dobutamine echocardiography (4% versus 7%, P<0.001). The risk of death in patients with a negative stress echocardiogram was <1% per year. Ischemia identified by stress echocardiography was an independent predictor of mortality in those able to exercise (hazard ratio 2.21, 95% confidence interval 1.10 to 4.43, P=0.0001) as well as in those undergoing dobutamine echocardiography (hazard ratio 2.39, 95% confidence interval 1.53 to 3.75, P=0.0001); other predictors were age, heart failure, resting LV dysfunction, and the Duke treadmill score. In stepwise models replicating the sequence of clinical evaluation, the results of stress echocardiography added prognostic power to models based on clinical and stress-testing variables. Thus, the results of stress echocardiography are an independent predictor of cardiac death in hypertensive patients with known or suspected coronary artery disease, incremental to clinical risk factors and exercise results.

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Risks of significant infant drug exposure through breast milk are poorly defined for many drugs, and large-scale population data are lacking. We used population pharmacokinetic (PK) modeling to predict fluoxetine exposure levels of infants via mother's milk in a simulated population of 1000 mother-infant pairs. METHODS: Using our original data on fluoxetine PK in 25 breastfeeding women, a population PK model was developed with NONMEM, and parameters, including milk concentrations, were estimated. An exponential distribution model was used to account for individual variation. Simulation with random and distribution-constrained assignment of doses, dosing times, feeding intervals and milk volumes was conducted to generate 1000 mother-infant pairs with characteristics such as the steady-state serum concentration (Css) and the infant dose relative to the maternal weight-adjusted dose (relative infant dose: RID). Full bioavailability and a conservative point estimate of 1-month-old infant CYP2D6 activity of 20% of the adult value (adjusted by weight), according to a recent study, were assumed for infant Css calculations. RESULTS: A linear 2-compartment model was selected as the best model. Derived parameters, including milk-to-plasma ratios (mean: 0.66; SD: 0.34; range: 0-1.1), were consistent with values reported in the literature. The estimated RID was below 10% in >95% of infants. The model-predicted median infant-mother Css ratio was 0.096 (range 0.035-0.25); the literature-reported mean was 0.07 (range 0-0.59). Moreover, the predicted incidence of an infant-mother Css ratio >0.2 was less than 1%. CONCLUSION: Our in silico model prediction is consistent with clinical observations, suggesting that substantial systemic fluoxetine exposure in infants through human milk is rare, but further analysis should include active metabolites. Our approach may be valid for other drugs. [Supported by CIHR and the Swiss National Science Foundation (SNSF)]
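The relative infant dose used above is conventionally computed as the infant dose via milk (milk concentration × milk intake per kg per day) divided by the maternal weight-adjusted dose. A hedged Monte Carlo sketch of that calculation: the milk-to-plasma distribution (mean 0.66, SD 0.34, clipped to the reported 0-1.1 range) is taken from the abstract, but the maternal plasma concentration, dose, weight, and the standard 0.15 L/kg/day milk-intake figure are illustrative assumptions, not the study's NONMEM estimates:

```python
import numpy as np

def relative_infant_dose(milk_conc_mg_l, maternal_dose_mg, maternal_weight_kg,
                         milk_intake_l_per_kg_day=0.15):
    """RID (%) = infant dose (mg/kg/day) / maternal weight-adjusted dose * 100."""
    infant_dose = milk_conc_mg_l * milk_intake_l_per_kg_day     # mg/kg/day
    maternal_dose_per_kg = maternal_dose_mg / maternal_weight_kg
    return 100.0 * infant_dose / maternal_dose_per_kg

rng = np.random.default_rng(0)
n = 1000
# Milk-to-plasma ratio: distribution reported in the abstract, clipped to its range.
mp_ratio = np.clip(rng.normal(0.66, 0.34, n), 0.0, 1.1)
# Maternal plasma fluoxetine (mg/L): assumed lognormal, purely illustrative.
plasma = rng.lognormal(mean=np.log(0.12), sigma=0.4, size=n)
milk_conc = mp_ratio * plasma

rid = relative_infant_dose(milk_conc, maternal_dose_mg=20.0, maternal_weight_kg=70.0)
frac_below_10 = float(np.mean(rid < 10.0))
```

The study's actual simulation additionally varied dosing times, feeding intervals and milk volumes per feed, and propagated concentrations through the fitted 2-compartment model rather than assuming a single plasma level.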

Relevance:

30.00%

Publisher:

Abstract:

Radioimmunotherapy with Zevalin® (RIT-Z) has shown encouraging results in patients with relapsed/refractory follicular lymphoma (FL), frequently leading to failure-free intervals longer than those achieved by the last previous therapy. We compared time-to-event variables obtained before and after RIT-Z in patients with relapsed FL previously exposed to rituximab. All patients with relapsed non-transformed, non-refractory, non-rituximab-naïve FL who had been treated with RIT-Z in two different centres in Europe were included. Staging and response were assessed by contrast-enhanced CT in all patients; PET/CT was performed according to local availability. Event-free survival (EFS) and time to next treatment (TTNT) following the last previous therapy and after RIT-Z were compared. Pre-therapy characteristics were tested in univariate analyses for prediction of outcomes. A description of the patterns of relapse was also provided. Among 70 patients treated, only 16 fulfilled the inclusion criteria. They had been treated with a median of 3 prior lines of chemo-immunotherapy, including a median of 2 rituximab-containing regimens; 6 patients had undergone myeloablative chemotherapy with autologous stem cell rescue (ASCT). Overall response rates were 10 (62%) CR/CRu, 3 (19%) PR and 3 (19%) PD; response rates were similar in patients with prior ASCT. After RIT-Z only a few patients obtained EFS and TTNT longer than after the last previous therapy. All four patients receiving rituximab maintenance were without progression 12 months after RIT-Z. Relapses occurred in both previously and newly involved sites; a significant association was found between the number of pathologic sites involved prior to RIT-Z and subsequent TTNT. Despite the excellent response rate, the duration of response was shorter than the previous one, confirming the known trend of relapses to occur earlier after subsequent treatments.
Rituximab maintenance after RIT-Z showed encouraging results in terms of prolonging EFS, warranting further studies. Copyright © 2010 John Wiley & Sons, Ltd.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

BACKGROUND: Chest pain can be caused by various conditions, with life-threatening cardiac disease being of greatest concern. Prediction scores to rule out coronary artery disease have been developed for use in emergency settings. We developed and validated a simple prediction rule for use in primary care. METHODS: We conducted a cross-sectional diagnostic study in 74 primary care practices in Germany. Primary care physicians recruited all consecutive patients who presented with chest pain (n = 1249) and recorded symptoms and findings for each patient (derivation cohort). An independent expert panel reviewed follow-up data obtained at six weeks and six months on symptoms, investigations, hospital admissions and medications to determine the presence or absence of coronary artery disease. Adjusted odds ratios of relevant variables were used to develop a prediction rule. We calculated measures of diagnostic accuracy for different cut-off values for the prediction scores using data derived from another prospective primary care study (validation cohort). RESULTS: The prediction rule contained five determinants (age/sex, known vascular disease, patient assumes pain is of cardiac origin, pain is worse during exercise, and pain is not reproducible by palpation), with the score ranging from 0 to 5 points. The area under the receiver operating characteristic curve was 0.87 (95% confidence interval [CI] 0.83-0.91) for the derivation cohort and 0.90 (95% CI 0.87-0.93) for the validation cohort. The best overall discrimination was with a cut-off value of 3 (positive result 3-5 points; negative result ≤2 points), which had a sensitivity of 87.1% (95% CI 79.9%-94.2%) and a specificity of 80.8% (95% CI 77.6%-83.9%). INTERPRETATION: The prediction rule for coronary artery disease in primary care proved to be robust in the validation cohort. It can help to rule out coronary artery disease in patients presenting with chest pain in primary care.
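The five-determinant rule above awards one point per positive item, and a total of 3 or more counts as a positive result. A minimal sketch of that scoring (the function and argument names are mine, not from the paper):

```python
def chest_pain_score(age_sex_positive, known_vascular_disease,
                     patient_assumes_cardiac, worse_on_exercise,
                     not_reproducible_by_palpation):
    """Sum of five binary determinants (0-5); a score >= 3 is a positive result."""
    items = (age_sex_positive, known_vascular_disease, patient_assumes_cardiac,
             worse_on_exercise, not_reproducible_by_palpation)
    score = sum(int(bool(x)) for x in items)
    return score, ("positive" if score >= 3 else "negative")

# Example: three positive determinants -> score 3, result "positive".
score, result = chest_pain_score(True, False, True, True, False)
```

The clinical meaning of the age/sex determinant (the age thresholds at which the point is awarded) is defined in the underlying study, not reproduced here.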

Relevância:

30.00% 30.00%

Publicador:

Resumo:

BACKGROUND: Prognostic models have been developed to predict survival of patients with newly diagnosed glioblastoma (GBM). To improve predictions, models should be updated with information available at recurrence. We performed a pooled analysis of European Organisation for Research and Treatment of Cancer (EORTC) trials on recurrent glioblastoma to validate existing clinical prognostic factors, identify new markers, and derive new predictions for overall survival (OS) and progression-free survival (PFS). METHODS: Data from 300 patients with recurrent GBM recruited in eight phase I or II trials conducted by the EORTC Brain Tumour Group were used to evaluate patients' age, sex, World Health Organization (WHO) performance status (PS), presence of neurological deficits, disease history, use of steroids or anti-epileptics, and disease characteristics as predictors of PFS and OS. Prognostic calculators were developed in patients initially treated by chemoradiation with temozolomide. RESULTS: Poor PS and more than one target lesion had a significant negative prognostic impact on both PFS and OS. Patients with large tumours, measured by the maximum diameter of the largest lesion (≥42 mm), and those treated with steroids at baseline had shorter OS. Tumours with a predominantly frontal location had better survival. Age and sex did not show independent prognostic value for PFS or OS. CONCLUSIONS: This analysis confirms performance status, but not age, as a major prognostic factor for PFS and OS in recurrent GBM. Patients with multiple and large lesions have an increased risk of death. From these data, prognostic calculators with confidence intervals for both median survival and fixed-time survival probabilities were derived.
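The "medians and fixed-time probabilities" such calculators report are related, in the simplest constant-hazard case, by median = ln 2 / λ and S(t) = exp(−λt). A toy sketch under that assumption; the hazard value is illustrative and not derived from the EORTC data, and real prognostic calculators use a fitted (non-constant) baseline survival curve:

```python
import math

def survival_summaries(hazard_per_month):
    """Constant-hazard survival: median = ln(2)/lambda, S(t) = exp(-lambda*t)."""
    median = math.log(2.0) / hazard_per_month

    def survival_at(t_months):
        return math.exp(-hazard_per_month * t_months)

    return median, survival_at

# Illustrative hazard of 0.1/month -> median ~6.9 months.
median_os, s = survival_summaries(0.1)
p6 = s(6.0)    # probability of surviving 6 months
p12 = s(12.0)  # probability of surviving 12 months
```

By construction, survival evaluated at the median is exactly 0.5, which is a handy consistency check when validating any such calculator.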