965 results for linear-threshold model
Abstract:
This paper introduces local distance-based generalized linear models. These models extend (weighted) distance-based linear models, first to the generalized linear model framework and then by localizing. Distances between individuals are the only predictor information needed to fit these models, so they are applicable to mixed (qualitative and quantitative) explanatory variables or when the regressor is of functional type. The models can be fitted and analysed with the R package dbstats, which implements several distance-based prediction methods.
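The core idea, distances as the only predictor information, can be sketched as follows. This is a minimal illustration of a distance-based linear model (classical multidimensional scaling followed by least squares), not the dbstats implementation; all names and the example data are ours.

```python
import numpy as np

def db_linear_fit(D, y, n_components=2):
    """Distance-based linear model sketch: embed the inter-individual
    distance matrix D (n x n) with classical MDS, then fit ordinary
    least squares on the latent Euclidean coordinates."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    G = -0.5 * J @ (D ** 2) @ J                  # Gram matrix of the embedding
    w, V = np.linalg.eigh(G)
    keep = np.argsort(w)[::-1][:n_components]    # leading eigenpairs
    X = V[:, keep] * np.sqrt(np.maximum(w[keep], 0.0))
    Xd = np.column_stack([np.ones(n), X])        # add intercept
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    return Xd @ beta

# With Euclidean distances this reproduces ordinary linear regression:
rng = np.random.default_rng(0)
x = rng.normal(size=(20, 2))
y = x @ np.array([1.0, -2.0]) + 0.5
D = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
fitted = db_linear_fit(D, y)
```

With a non-Euclidean dissimilarity for mixed or functional predictors (e.g. Gower's coefficient), the same routine applies unchanged, which is the point of the approach.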
Abstract:
The high level of protection elicited in rodents and primates by the radiation-attenuated schistosome vaccine gives hope that a human vaccine relying on equivalent mechanisms is feasible. In humans, a vaccine would undoubtedly be administered to previously or currently infected individuals. We have therefore used the olive baboon to investigate whether vaccine-induced immunity is compromised by a schistosome infection. We showed that neither a preceding infection, terminated by chemotherapy, nor an ongoing chronic infection affected the level of protection. Whilst IgM responses to vaccination or infection were short-lived, IgG responses rose with each successive exposure to the vaccine. Such a rise was obscured by responses to egg deposition in already-infected animals. In human trials it would be necessary to use indirect estimates of infection intensity to determine vaccine efficacy. Using worm burden as the definitive criterion, we demonstrated that the surrogate measures (fecal eggs and circulating antigens) consistently overestimated protection. Regression analysis of the surrogate parameters on worm burden revealed that the principal reason for overestimation was the threshold sensitivity of the assays. If we extrapolate our findings to human schistosomiasis mansoni, it is clear that more sensitive indirect measures of infection intensity are required for future vaccine trials.
Abstract:
OBJECTIVES: To determine whether nalmefene combined with psychosocial support is cost-effective compared with psychosocial support alone for reducing alcohol consumption in alcohol-dependent patients with high/very high drinking risk levels (DRLs) as defined by the WHO, and to evaluate the public health benefit of reducing harmful alcohol-attributable diseases, injuries and deaths. DESIGN: Decision modelling using Markov chains compared costs and effects over 5 years. SETTING: The analysis was from the perspective of the National Health Service (NHS) in England and Wales. PARTICIPANTS: The model considered the licensed population for nalmefene, specifically adults with both alcohol dependence and high/very high DRLs, who do not require immediate detoxification and who continue to have high/very high DRLs after initial assessment. DATA SOURCES: We modelled treatment effect using data from three clinical trials for nalmefene (ESENSE 1 (NCT00811720), ESENSE 2 (NCT00812461) and SENSE (NCT00811941)). Baseline characteristics of the model population, treatment resource utilisation and utilities were from these trials. We estimated the number of alcohol-attributable events occurring at different levels of alcohol consumption based on published epidemiological risk-relation studies. Health-related costs were from UK sources. MAIN OUTCOME MEASURES: We measured incremental cost per quality-adjusted life year (QALY) gained and number of alcohol-attributable harmful events avoided. RESULTS: Nalmefene in combination with psychosocial support had an incremental cost-effectiveness ratio (ICER) of £5204 per QALY gained, and was therefore cost-effective at the £20,000 per QALY gained decision threshold. Sensitivity analyses showed that the conclusion was robust. Nalmefene plus psychosocial support led to the avoidance of 7179 alcohol-attributable diseases/injuries and 309 deaths per 100,000 patients compared to psychosocial support alone over the course of 5 years. 
CONCLUSIONS: Nalmefene can be seen as a cost-effective treatment for alcohol dependence, with substantial public health benefits. TRIAL REGISTRATION NUMBERS: This cost-effectiveness analysis was developed based on data from three randomised clinical trials: ESENSE 1 (NCT00811720), ESENSE 2 (NCT00812461) and SENSE (NCT00811941).
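The Markov-chain cost-effectiveness logic can be sketched as follows. All state names, transition probabilities, costs and utilities below are invented for illustration; they are not the inputs or outputs of the nalmefene model.

```python
import numpy as np

# Illustrative three-state Markov cohort model (yearly cycles).
P = np.array([[0.70, 0.28, 0.02],    # from high-risk drinking
              [0.20, 0.79, 0.01],    # from reduced-risk drinking
              [0.00, 0.00, 1.00]])   # dead (absorbing)
cost = np.array([2000.0, 800.0, 0.0])     # yearly cost per state (GBP, invented)
utility = np.array([0.60, 0.80, 0.0])     # yearly QALY weight per state (invented)

def run_markov(start, cycles=5):
    """Accumulate undiscounted costs and QALYs over the model horizon."""
    occupancy = np.array(start, dtype=float)
    total_cost = total_qaly = 0.0
    for _ in range(cycles):
        total_cost += occupancy @ cost
        total_qaly += occupancy @ utility
        occupancy = occupancy @ P
    return total_cost, total_qaly

# Treatment is assumed to shift the starting distribution toward
# reduced-risk drinking, at an extra (invented) 5-year treatment cost.
c_trt, q_trt = run_markov([0.5, 0.5, 0.0])
c_trt += 1500.0
c_ctl, q_ctl = run_markov([0.8, 0.2, 0.0])
icer = (c_trt - c_ctl) / (q_trt - q_ctl)   # incremental cost per QALY gained
```

Comparing the resulting ICER with the decision threshold (here £20,000 per QALY) is what supports a cost-effectiveness conclusion.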
Abstract:
Humoral factors play an important role in the control of exercise hyperpnea. The role of neuromechanical ventilatory factors, however, is still being investigated. We tested the hypothesis that the afferents of the thoracopulmonary system, and consequently of the neuromechanical ventilatory loop, have an influence on the kinetics of oxygen consumption (VO2), carbon dioxide output (VCO2), and ventilation (VE) during moderate intensity exercise. We did this by comparing the ventilatory time constants (tau) of exercise with and without an inspiratory load. Fourteen healthy, trained men (age 22.6 +/- 3.2 yr) performed a continuous incremental cycle exercise test to determine maximal oxygen uptake (VO2max = 55.2 +/- 5.8 ml x min(-1) x kg(-1)). On another day, after unloaded warm-up they performed randomized constant-load tests at 40% of their VO2max for 8 min, one with and the other without an inspiratory threshold load of 15 cmH2O. Ventilatory variables were obtained breath by breath. Phase 2 ventilatory kinetics (VO2, VCO2, and VE) could be described in all cases by a monoexponential function. The bootstrap method revealed small coefficients of variation for the model parameters, indicating an accurate determination for all parameters. Paired Student's t-tests showed that the addition of the inspiratory resistance significantly increased the tau during phase 2 of VO2 (43.1 +/- 8.6 vs. 60.9 +/- 14.1 s; P < 0.001), VCO2 (60.3 +/- 17.6 vs. 84.5 +/- 18.1 s; P < 0.001) and VE (59.4 +/- 16.1 vs. 85.9 +/- 17.1 s; P < 0.001). The average rise in tau was 41.3% for VO2, 40.1% for VCO2, and 44.6% for VE. The tau changes indicated that neuromechanical ventilatory factors play a role in the ventilatory response to moderate exercise.
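The phase-2 kinetics fit described above can be sketched numerically. The monoexponential form and its linearisation are standard; the parameter values and noise level below are invented, chosen only to mirror the reported order of magnitude of tau.

```python
import numpy as np

# Monoexponential phase-2 model: V(t) = V0 + A * (1 - exp(-t / tau)).
rng = np.random.default_rng(1)
tau_true, V0, A = 60.0, 10.0, 25.0        # s; baseline and amplitude arbitrary
t = np.arange(0.0, 480.0, 2.0)            # 8 min of (pseudo) breath-by-breath data
V = V0 + A * (1.0 - np.exp(-t / tau_true)) + rng.normal(0.0, 0.02, t.size)

# With baseline V0 and amplitude A known, the model linearises:
# log(V0 + A - V) = log(A) - t / tau, so tau is -1/slope of a line fit.
mask = t <= 3.0 * tau_true                # restrict to the well-resolved rise
slope, intercept = np.polyfit(t[mask], np.log(V0 + A - V[mask]), 1)
tau_hat = -1.0 / slope
```

In practice all three parameters are estimated jointly by nonlinear least squares, and (as in the study) bootstrap resampling of the residuals gives coefficients of variation for each parameter.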
Abstract:
Glucose supply from blood to brain occurs through facilitative transporter proteins. A near linear relation between brain and plasma glucose has been experimentally determined and described by a reversible model of enzyme kinetics. A conformational four-state exchange model accounting for trans-acceleration and asymmetry of the carrier was included in a recently developed multi-compartmental model of glucose transport. Based on this model, we demonstrate that brain glucose (G(brain)) as a function of plasma glucose (G(plasma)) can be described by a single analytical equation comprising three kinetic compartments: blood, endothelial cells and brain. Transport was described by four parameters: the apparent half-saturation constant K(t), the apparent maximum rate constant T(max), the glucose consumption rate CMR(glc), and the iso-inhibition constant K(ii), which treats G(brain) as an inhibitor of the isomerisation of the unloaded carrier. Previously published data, in which G(brain) was quantified as a function of plasma glucose by either biochemical methods or NMR spectroscopy, were used to determine these kinetic parameters. Glucose transport was characterized by K(t) ranging from 1.5 to 3.5 mM, T(max)/CMR(glc) from 4.6 to 5.6, and K(ii) from 51 to 149 mM. It was noteworthy that K(t) was on the order of a few mM, as previously determined from the reversible model. The conformational four-state exchange model of glucose transport into the brain includes both efflux and transport inhibition by G(brain), predicting that G(brain) eventually approaches a maximum concentration. However, since K(ii) largely exceeds G(plasma), iso-inhibition is unlikely to be of substantial importance for plasma glucose below 25 mM. As a consequence, the reversible model can account for most experimental observations under euglycaemia and moderate cases of hypo- and hyperglycaemia.
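For comparison, the simpler reversible Michaelis-Menten model mentioned above admits a closed form at steady state. The expression below is the commonly used form of that relation (an assumption on our part, not quoted from this paper), and the parameter values are illustrative picks from the ranges reported above.

```python
def brain_glucose(g_plasma, kt=2.5, tmax_over_cmr=5.0):
    """Steady-state brain glucose under the reversible Michaelis-Menten
    transport model (the simpler model the abstract compares against):
    G_brain = (G_plasma * (Tmax/CMRglc - 1) - Kt) / (Tmax/CMRglc + 1).
    kt (mM) and tmax_over_cmr are illustrative values within the ranges
    reported above, not fitted estimates."""
    r = tmax_over_cmr
    return (g_plasma * (r - 1.0) - kt) / (r + 1.0)

# The relation is exactly linear in plasma glucose, which is why the
# reversible model matches the observed near-linear brain-plasma relation.
g5 = brain_glucose(5.0)    # euglycaemia, result in mM
```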
Abstract:
In this paper we propose a parsimonious regime-switching approach to model the correlations between assets, the threshold conditional correlation (TCC) model. This method allows the dynamics of the correlations to change from one state (or regime) to another as a function of observable transition variables. Our model is similar in spirit to Silvennoinen and Teräsvirta (2009) and Pelletier (2006) but with the appealing feature that it does not suffer from the curse of dimensionality. In particular, estimation of the parameters of the TCC involves a simple grid search procedure. In addition, it is easy to guarantee a positive definite correlation matrix because the TCC estimator is given by the sample correlation matrix, which is positive definite by construction. The methodology is illustrated by evaluating the behaviour of international equities, government bonds and major exchange rates, first separately and then jointly. We also test and allow for different parts of the correlation matrix to be governed by different transition variables. For this, we estimate a multi-threshold TCC specification. Further, we evaluate the economic performance of the TCC model against a constant conditional correlation (CCC) estimator using a Diebold-Mariano type test. We conclude that threshold correlation modelling gives rise to a significant reduction in the portfolio's variance.
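The grid-search estimation can be sketched as follows. This illustrates the regime-splitting idea with a Gaussian likelihood on simulated data of our own; it is not the authors' exact estimator.

```python
import numpy as np

def tcc_threshold(returns, z, grid):
    """For each candidate threshold c of the transition variable z, split
    the sample into two regimes, take each regime's sample correlation
    matrix (positive definite by construction), and keep the threshold
    that maximises the Gaussian log-likelihood."""
    def loglik(X, R):
        _, logdet = np.linalg.slogdet(R)
        Rinv = np.linalg.inv(R)
        return -0.5 * (X.shape[0] * logdet + np.einsum('ij,jk,ik->', X, Rinv, X))
    best_ll, best_c = -np.inf, None
    for c in grid:
        lo, hi = returns[z <= c], returns[z > c]
        if len(lo) < 10 or len(hi) < 10:       # require enough data per regime
            continue
        ll = loglik(lo, np.corrcoef(lo.T)) + loglik(hi, np.corrcoef(hi.T))
        if ll > best_ll:
            best_ll, best_c = ll, c
    return best_c

# Simulated example: correlation jumps from 0.0 to 0.8 when z crosses 200.
rng = np.random.default_rng(2)
e = rng.normal(size=(400, 2))
z = np.arange(400)
r = np.where(z < 200, 0.0, 0.8)
x = np.column_stack([e[:, 0], r * e[:, 0] + np.sqrt(1.0 - r**2) * e[:, 1]])
c_hat = tcc_threshold(x, z, grid=range(60, 340, 10))
```

Because each regime's correlation matrix is just a sample correlation, positive definiteness never has to be imposed, which is the feature the abstract emphasises.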
Abstract:
In CoDaWork’05, we presented an application of discriminant function analysis (DFA) to 4 different compositional datasets and modelled the first canonical variable using a segmented regression model solely based on an observation about the scatter plots. In this paper, multiple linear regressions are applied to different datasets to confirm the validity of our proposed model. In addition to dating the unknown tephras by calibration as discussed previously, another method is proposed for mapping the unknown tephras onto samples of the reference set, or onto missing samples in between consecutive reference samples. The application of these methodologies is demonstrated with both simulated and real datasets. This new proposed methodology provides an alternative, more acceptable approach for geologists, as their focus is on mapping the unknown tephra to relevant eruptive events rather than estimating the age of unknown tephra.
Key words: Tephrochronology; Segmented regression
Abstract:
PURPOSE: Currently, many pre-conditions are regarded as relative or absolute contraindications for lumbar total disc replacement (TDR). Radiculopathy is one among them. In Switzerland it is left to the surgeon's discretion when to operate, provided he adheres to a list of pre-defined indications. Contraindications, however, are less clearly specified. We hypothesized that the extent of pre-operative radiculopathy results in different benefits for patients treated with mono-segmental lumbar TDR. We used patient-perceived leg pain and its correlation with physician-recorded radiculopathy to create the patient groups to be compared. METHODS: The present study is based on the dataset of SWISSspine, a government-mandated health technology assessment registry. Between March 2005 and April 2009, 577 patients underwent either mono- or bi-segmental lumbar TDR, which was documented in a prospective observational multicenter mode. A total of 416 cases with a mono-segmental procedure were included in the study. The data collection consisted of pre-operative and follow-up data (physician based) and clinical outcomes (NASS form, EQ-5D). A receiver operating characteristic (ROC) analysis was conducted with patients' self-indicated leg pain and the surgeon-based diagnosis "radiculopathy", as marked on the case report forms. As a result, patients were divided into two groups according to the severity of leg pain. The two groups were compared with regard to pre-operative patient characteristics, pre- and post-operative pain on the Visual Analogue Scale (VAS) and quality of life, using general linear modeling. RESULTS: The optimal ROC model revealed a leg pain threshold at VAS 40 (group 1: VAS < 40, group 2: VAS ≥ 40) for the absence or presence of "radiculopathy". Demographics in the resulting two groups were well comparable. Applying this threshold, the mean pre-operative leg pain level was 16.5 points in group 1 and 68.1 points in group 2 (p < 0.001).
Back pain levels differed less, with 63.6 points in group 1 and 72.6 in group 2 (p < 0.001). Pre-operative quality of life showed considerable differences, with an EQ-5D score of 0.44 in group 1 and 0.29 in group 2 (p < 0.001, possible score range -0.6 to 1). At a mean follow-up time of 8 months, group 1 showed a mean leg pain improvement of 3.6 points and group 2 of 41.1 points (p < 0.001). Back pain relief was 35.6 and 39.1 points, respectively (p = 0.27). EQ-5D score improvement was 0.27 in group 1 and 0.41 in group 2 (p = 0.11). CONCLUSIONS: Patients labeled as having radiculopathy (group 2) mostly have pre-operative leg pain levels ≥ 40. Applying this threshold, the patients with pre-operative leg pain also have more severe back pain and a considerably lower quality of life. Their net benefit from lumbar TDR is higher, and they reach post-operative back and leg pain levels and a quality of life similar to those of patients without pre-operative leg pain. Although randomized controlled trials are required to confirm these findings, they put leg pain and radiculopathy into perspective as absolute contraindications for TDR.
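How a ROC analysis can pick such a cut-off can be sketched as follows: scan candidate VAS thresholds and keep the one maximising Youden's J (sensitivity + specificity - 1) against the surgeon-recorded diagnosis. The data below are simulated to mirror the reported group means, not the SWISSspine registry data.

```python
import numpy as np

def youden_threshold(score, label, grid):
    """Return the cut-off on `score` that maximises Youden's J for the
    binary `label` (1 = diagnosis present)."""
    best_c, best_j = None, -1.0
    for c in grid:
        pred = score >= c
        tp = np.sum(pred & (label == 1)); fn = np.sum(~pred & (label == 1))
        tn = np.sum(~pred & (label == 0)); fp = np.sum(pred & (label == 0))
        j = tp / (tp + fn) + tn / (tn + fp) - 1.0
        if j > best_j:
            best_c, best_j = c, j
    return best_c

# Simulated leg-pain VAS scores (0-100) for the two diagnosis groups.
rng = np.random.default_rng(3)
no_rad = np.clip(rng.normal(17.0, 12.0, 200), 0, 100)
rad = np.clip(rng.normal(68.0, 15.0, 200), 0, 100)
score = np.concatenate([no_rad, rad])
label = np.concatenate([np.zeros(200, int), np.ones(200, int)])
cut = youden_threshold(score, label, grid=range(5, 100, 5))
```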
Abstract:
In a number of programs for gene structure prediction in higher eukaryotic genomic sequences, exon prediction is decoupled from gene assembly: a large pool of candidate exons is predicted and scored from features located in the query DNA sequence, and candidate genes are assembled from such a pool as sequences of nonoverlapping frame-compatible exons. Genes are scored as a function of the scores of the assembled exons, and the highest scoring candidate gene is assumed to be the most likely gene encoded by the query DNA sequence. Considering additive gene scoring functions, currently available algorithms to determine such a highest scoring candidate gene run in time proportional to the square of the number of predicted exons. Here, we present an algorithm whose running time grows only linearly with the size of the set of predicted exons. Existing polynomial algorithms rely on the fact that, while scanning the set of predicted exons, the highest scoring gene ending in a given exon can be obtained by appending the exon to the highest scoring among the highest scoring genes ending at each compatible preceding exon. The algorithm here relies on the simple fact that this highest scoring gene can be stored and updated. This requires scanning the set of predicted exons simultaneously by increasing acceptor and donor position. On the other hand, the algorithm described here does not assume an underlying gene structure model. Indeed, the definition of valid gene structures is externally defined in the so-called Gene Model. The Gene Model simply specifies which gene features are allowed immediately upstream of which other gene features in valid gene structures. This allows for great flexibility in formulating the gene identification problem. In particular, it allows for multiple-gene two-strand predictions and for considering gene features other than coding exons (such as promoter elements) in valid gene structures.
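The linear-time scan described above can be sketched as follows. For clarity this toy version drops frame compatibility and the Gene Model: any non-overlapping exons are treated as compatible, and exons are simple (start, end, score) triples.

```python
def best_gene(exons):
    """Toy linear-time assembly: scan exons simultaneously by increasing
    acceptor (start) and donor (end) position, maintaining the best score
    of any candidate gene that has already ended.  After the two sorts,
    each exon is visited a constant number of times."""
    order_start = sorted(range(len(exons)), key=lambda k: exons[k][0])
    order_end = sorted(range(len(exons)), key=lambda k: exons[k][1])
    dp = [0.0] * len(exons)   # best gene score ending with exon k
    best_prefix = 0.0         # best dp[] among exons ending before current start
    e_ptr, overall = 0, 0.0
    for k in order_start:
        start, end, score = exons[k]
        # Fold in every exon whose donor site precedes this acceptor site.
        while e_ptr < len(order_end) and exons[order_end[e_ptr]][1] < start:
            best_prefix = max(best_prefix, dp[order_end[e_ptr]])
            e_ptr += 1
        dp[k] = score + best_prefix
        overall = max(overall, dp[k])
    return overall

# Toy pool: the best assembly chains three exons for a total score of 12.
best = best_gene([(0, 10, 5.0), (12, 20, 4.0), (5, 15, 7.0), (21, 30, 3.0)])
```

The quadratic algorithms instead search all compatible preceding exons for every exon; keeping the running `best_prefix` is exactly the "store and update" observation that removes the inner loop.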
Abstract:
1. We investigated experimentally predation by the flatworm Dugesia lugubris on the snail Physa acuta in relation to predator body length and to prey morphology [shell length (SL) and aperture width (AW)]. 2. SL and AW correlate strongly in the field, but display significant and independent variance among populations. In the laboratory, predation by Dugesia resulted in large and significant selection differentials on both SL and AW. Analysis of partial effects suggests that selection on AW was indirect, and mediated through its strong correlation with SL. 3. The probability P(ij) for a snail of size category i (SL) to be preyed upon by a flatworm of size category j was fitted with a Poisson probability distribution, the mean of which increased linearly with predator size (j). Despite the low number of parameters, the fit was excellent (r2 = 0.96). We offer brief biological interpretations of this relationship with reference to optimal foraging theory. 4. The largest size class of Dugesia (>2 cm) did not prey on snails larger than 7 mm shell length. This size threshold might offer Physa a refuge against flatworm predation and thereby allow coexistence in the field. 5. Our results are further discussed with respect to previous field and laboratory observations on P. acuta life-history patterns, in particular its phenotypic variance in adult body size.
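The fitted relationship in point 3 can be sketched as follows. The coefficients a and b and the size-class ranges are invented for illustration; the paper estimates them from the selection experiment.

```python
import math

def predation_prob(i, j, a=0.2, b=0.5):
    """Probability that a snail of shell-length class i is preyed upon by
    a flatworm of size class j, modelled as a Poisson probability whose
    mean grows linearly with predator size class (coefficients invented)."""
    mean = a + b * j       # mean prey-size class, linear in predator size
    return math.exp(-mean) * mean ** i / math.factorial(i)

# Larger flatworms concentrate their predation on larger snails: the
# expected prey-size class is simply the Poisson mean a + b * j.
big = sum(i * predation_prob(i, 4) for i in range(60))
small = sum(i * predation_prob(i, 1) for i in range(60))
```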
Abstract:
AIMS: Experimental models have reported conflicting results regarding the role of dispersion of repolarization in promoting atrial fibrillation (AF). Repolarization alternans, a beat-to-beat alternation in action potential duration, enhances dispersion of repolarization when propagation velocity is involved. METHODS AND RESULTS: In this work, original electrophysiological parameters were analysed to study AF susceptibility in a chronic sheep model of pacing-induced AF. Two pacemakers were implanted, each with a single right atrial lead. Right atrial depolarization and repolarization waves were documented at 2-week intervals. A significant and gradual decrease in the propagation velocity at all pacing rates and in the right atrial effective refractory period (ERP) was observed during the weeks of burst pacing before sustained AF developed, when compared with baseline conditions. Right atrial repolarization alternans was observed, but because of the development of 2:1 atrioventricular block with far-field ventricular interference, its threshold could not be precisely measured. Non-sustained AF was not observed at baseline, but appeared during the electrical remodelling in association with a decrease in both ERP and propagation velocity. CONCLUSION: We report here on the feasibility of measuring ERP, atrial repolarization alternans, and propagation velocity kinetics, and on their potential in predicting susceptibility to AF in a free-behaving model of pacing-induced AF using standard pacemaker technology.
Abstract:
When back-calculating fish length from scale measurements, the choice of the body-scale relationship is a fundamental step. Using data from the arctic charr Salvelinus alpinus (L.) of Lake Geneva (Switzerland), we show the need for a curvilinear model, on both statistical and biological grounds. Of several 2-parameter models, the log-linear relationship appears to provide the best fit. A 3-parameter von Bertalanffy model did not improve the fit. Moreover, we show that using the proportional model would lead to important misinterpretations of the data.
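Back-calculation under the log-linear model can be sketched as follows. The proportionality correction through the observed length at capture is one standard way to anchor the curve to the individual fish (our choice for the sketch, not necessarily the paper's); the slope value below is invented.

```python
def back_calculate(s_age, s_capture, length_capture, b):
    """Back-calculate length at a given age under the log-linear
    body-scale model log(L) = a + b * log(S).  Taking the ratio of the
    model at the annulus radius s_age and at the capture radius
    s_capture cancels the intercept a:
        L_age / L_capture = (s_age / s_capture) ** b
    b is the slope of the log-length on log-scale-radius regression
    (the value used below is invented for illustration)."""
    return length_capture * (s_age / s_capture) ** b

# A fish caught at 30 cm whose annulus sits at half the capture radius:
l_back = back_calculate(2.0, 4.0, 30.0, 0.8)
```

With b = 1 the formula reduces to the proportional model, whose misinterpretations the abstract warns about; b < 1 gives the curvilinearity the data require.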
Abstract:
An important statistical development of the last 30 years has been the advance in regression analysis provided by generalized linear models (GLMs) and generalized additive models (GAMs). Here we introduce a series of papers prepared within the framework of an international workshop entitled Advances in GLMs/GAMs modeling: from species distribution to environmental management, held in Riederalp, Switzerland, 6-11 August 2001. We first discuss some general uses of statistical models in ecology, and provide a short review of several key examples of the use of GLMs and GAMs in ecological modeling efforts. We next present an overview of GLMs and GAMs, and discuss some of the related statistics used for predictor selection, model diagnostics, and evaluation. Included is a discussion of several new approaches applicable to GLMs and GAMs, such as ridge regression (an alternative to stepwise selection of predictors) and methods for the identification of interactions by a combined use of regression trees and several other approaches. We close with an overview of the papers and how we feel they advance the application of these methods to ecological modeling.
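The fitting machinery behind a GLM can be sketched in a few lines: iteratively reweighted least squares (IRLS) for logistic regression, the binomial-family GLM typically used for species presence/absence data. The simulated data and coefficients are ours.

```python
import numpy as np

def glm_logistic_irls(X, y, n_iter=25):
    """Fit a logistic-regression GLM (binomial family, logit link) by
    iteratively reweighted least squares.  X must include an intercept
    column; y holds 0/1 responses."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = 1.0 / (1.0 + np.exp(-eta))        # inverse link
        w = mu * (1.0 - mu)                    # binomial variance weights
        z = eta + (y - mu) / np.clip(w, 1e-10, None)   # working response
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, X.T @ (w * z))
    return beta

# Simulated presence/absence along one environmental gradient:
rng = np.random.default_rng(4)
x1 = rng.normal(size=500)
X = np.column_stack([np.ones(500), x1])
p = 1.0 / (1.0 + np.exp(-(-0.5 + 1.2 * x1)))
y = rng.binomial(1, p).astype(float)
beta_hat = glm_logistic_irls(X, y)
```

A GAM replaces the linear predictor's columns with smooth functions of the predictors but keeps this same weighted-least-squares core.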
Abstract:
Doxorubicin is an antineoplasic agent active against sarcoma pulmonary metastasis, but its clinical use is hampered by its myelotoxicity and its cumulative cardiotoxicity, when administered systemically. This limitation may be circumvented using the isolated lung perfusion (ILP) approach, wherein a therapeutic agent is infused locoregionally after vascular isolation of the lung. The influence of the mode of infusion (anterograde (AG): through the pulmonary artery (PA); retrograde (RG): through the pulmonary vein (PV)) on doxorubicin pharmacokinetics and lung distribution was unknown. Therefore, a simple, rapid and sensitive high-performance liquid chromatography method has been developed to quantify doxorubicin in four different biological matrices (infusion effluent, serum, tissues with low or high levels of doxorubicin). The related compound daunorubicin was used as internal standard (I.S.). Following a single-step protein precipitation of 500 microl samples with 250 microl acetone and 50 microl zinc sulfate 70% aqueous solution, the obtained supernatant was evaporated to dryness at 60 degrees C for exactly 45 min under a stream of nitrogen and the solid residue was solubilized in 200 microl of purified water. A 100 microl-volume was subjected to HPLC analysis onto a Nucleosil 100-5 microm C18 AB column equipped with a guard column (Nucleosil 100-5 microm C(6)H(5) (phenyl) end-capped) using a gradient elution of acetonitrile and 1-heptanesulfonic acid 0.2% pH 4: 15/85 at 0 min-->50/50 at 20 min-->100/0 at 22 min-->15/85 at 24 min-->15/85 at 26 min, delivered at 1 ml/min. The analytes were detected by fluorescence detection with excitation and emission wavelength set at 480 and 550 nm, respectively. The calibration curves were linear over the range of 2-1000 ng/ml for effluent and plasma matrices, and 0.1 microg/g-750 microg/g for tissues matrices. 
The method is precise, with inter-day and intra-day relative standard deviations between 0.5 and 6.7%, and accurate, with inter-day and intra-day deviations between -5.4 and +7.7%. The in vitro stability in all matrices and in processed samples has been studied at -80 degrees C for 1 month and at 4 degrees C for 48 h, respectively. During initial studies, heparin used as anticoagulant was found to profoundly influence the measurements of doxorubicin in effluents collected from animals under ILP. Moreover, the strong matrix effect observed with tissue samples indicates that it is mandatory to prepare doxorubicin calibration standard samples in biological matrices that reflect as closely as possible the composition of the samples to be analyzed. This method was successfully applied in animal studies for the analysis of effluent, serum and tissue samples collected from pigs and rats undergoing ILP.
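The quantification step behind such a linear calibration range can be sketched as follows: regress the analyte/internal-standard peak-area ratio of matrix-matched standards on concentration, then invert the line for an unknown. The numbers below are illustrative, not the method's data.

```python
import numpy as np

def calibrate(conc_std, ratio_std, ratio_sample):
    """Internal-standard quantification: fit a straight line through the
    (concentration, peak-area ratio) calibration points and invert it
    to obtain the concentration of an unknown sample."""
    slope, intercept = np.polyfit(conc_std, ratio_std, 1)
    return (ratio_sample - intercept) / slope

# Illustrative calibration standards spanning the plasma range (ng/ml):
conc = np.array([2.0, 10.0, 50.0, 200.0, 1000.0])
ratio = 0.004 * conc + 0.01          # idealised doxorubicin/daunorubicin response
unknown = calibrate(conc, ratio, 0.41)
```

Preparing the standards in the same biological matrix as the samples, as the text stresses, keeps the fitted slope and intercept valid for the unknowns.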