963 results for count data models


Relevance: 30.00%
Publisher:
Abstract:

BACKGROUND: Patients with rheumatoid arthritis (RA) with an inadequate response to TNF antagonists (aTNFs) may switch to an alternative aTNF or start treatment from a different class of drugs, such as rituximab (RTX). It remains unclear in which clinical settings these therapeutic strategies offer the most benefit. OBJECTIVE: To analyse the effectiveness of RTX versus alternative aTNFs on RA disease activity in different subgroups of patients. METHODS: A prospective cohort study of patients with RA who discontinued at least one aTNF and subsequently received either RTX or an alternative aTNF, nested within the Swiss RA registry (SCQM-RA), was carried out. The primary outcome, longitudinal improvement in the 28-joint count Disease Activity Score (DAS28), was analysed using multivariate regression models for longitudinal data, adjusted for potential confounders. RESULTS: Of the 318 patients with RA included, 155 received RTX and 163 received an alternative aTNF. The relative benefit of RTX varied with the type of prior aTNF failure: when the motive for switching was ineffectiveness of previous aTNFs, the longitudinal improvement in DAS28 was significantly better with RTX than with an alternative aTNF (p = 0.03; at 6 months, -1.34 (95% CI -1.54 to -1.15) vs -0.93 (95% CI -1.28 to -0.59), respectively). When the motive for switching was other causes, the longitudinal improvement in DAS28 was similar for RTX and alternative aTNFs (p = 0.40). These results were not significantly modified by the number of previous aTNF failures, the type of aTNF switch, or the presence of co-treatment with a disease-modifying antirheumatic drug. CONCLUSION: This observational study suggests that in patients with RA who have stopped a previous aTNF treatment because of ineffectiveness, changing to RTX is more effective than switching to an alternative aTNF.

Relevance: 30.00%
Publisher:
Abstract:

This paper examines both the in-sample and out-of-sample performance of three monetary fundamental models of exchange rates and compares their out-of-sample performance with that of a simple Random Walk model. Using a dataset consisting of five currencies at monthly frequency over the period January 1980 to December 2009 and a battery of newly developed performance measures, the paper shows that the monetary models fit better in-sample and forecast better out-of-sample than a simple Random Walk model.
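As an illustration of the out-of-sample horse race against a Random Walk benchmark, the sketch below simulates a toy exchange rate that slowly reverts toward a hypothetical monetary fundamental and compares recursive one-step-ahead RMSEs. The data-generating process, sample sizes and error-correction forecaster are all invented for illustration; they are not the paper's models or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic illustration (invented data, not the paper's): a log exchange
# rate s_t that slowly reverts toward a hypothetical monetary fundamental f_t.
T = 360                                      # 30 years of monthly data
f = np.cumsum(rng.normal(0.0, 0.01, T))      # random-walk fundamental
s = np.zeros(T)
for t in range(1, T):
    s[t] = s[t - 1] + 0.2 * (f[t - 1] - s[t - 1]) + rng.normal(0.0, 0.02)

# Recursive one-step-ahead forecasts over the last 120 months.
split = T - 120
errs_model, errs_rw = [], []
for t in range(split, T):
    ds = np.diff(s[:t])                      # past monthly changes
    gap = f[:t - 1] - s[:t - 1]              # past fundamental-rate gap
    beta = np.linalg.lstsq(gap[:, None], ds, rcond=None)[0][0]
    errs_model.append(s[t] - (s[t - 1] + beta * (f[t - 1] - s[t - 1])))
    errs_rw.append(s[t] - s[t - 1])          # random walk: no-change forecast

rmse_model = float(np.sqrt(np.mean(np.square(errs_model))))
rmse_rw = float(np.sqrt(np.mean(np.square(errs_rw))))
print(f"model RMSE = {rmse_model:.4f}, random walk RMSE = {rmse_rw:.4f}")
```

With a genuinely mean-reverting rate the fundamental model tends to beat the no-change forecast; when the reversion coefficient is set to zero, the Random Walk becomes (asymptotically) unbeatable, which is the usual Meese-Rogoff difficulty.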

Relevance: 30.00%
Publisher:
Abstract:

Using survey expectations data and Markov-switching models, this paper evaluates the characteristics and evolution of investors' forecast errors about the yen/dollar exchange rate. Since our model is derived from the uncovered interest rate parity (UIRP) condition and our data cover a period of low interest rates, this study is also related to the forward premium puzzle and the currency carry trade strategy. We obtain the following results. First, for the same forecast horizon, exchange rate forecasts are homogeneous among different industry types, but within the same industry, exchange rate forecasts differ if the forecast time horizon differs. In particular, investors tend to undervalue the future exchange rate over long-term forecast horizons, whereas in the short run they tend to overvalue it. Second, while forecast errors are found to be partly driven by interest rate spreads, evidence against the UIRP is provided regardless of the forecasting time horizon; the forward premium puzzle is more significant in shorter-term forecast errors. Consistent with this finding, our coefficients on interest rate spreads provide indirect evidence of the yen carry trade over only short-term forecast horizons. Furthermore, the carry trade seems to be active when there is a clear indication that the interest rate will be low in the future.
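The UIRP condition underlying the analysis can be illustrated with a Fama-style regression of realized depreciation on the interest spread: under UIRP the slope is 1, while the forward premium puzzle shows up as a slope well below 1, often negative. The sketch below runs this regression on invented data (not the paper's survey data), with the true slope deliberately set to -0.5 to mimic the puzzle.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fama-style illustration of the forward premium puzzle with invented data.
# Under UIRP, expected depreciation equals the interest spread, so the OLS
# slope of realized depreciation on the spread should be 1.
T = 240
spread = 0.02 + 0.01 * rng.standard_normal(T)         # i_t - i*_t
depr = -0.5 * spread + 0.03 * rng.standard_normal(T)  # generated with slope -0.5

X = np.column_stack([np.ones(T), spread])
beta_hat = np.linalg.lstsq(X, depr, rcond=None)[0][1]
print(f"estimated UIRP slope: {beta_hat:.2f}")  # far below the UIRP value of 1
```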

Relevance: 30.00%
Publisher:
Abstract:

This paper proposes full-Bayes priors for time-varying parameter vector autoregressions (TVP-VARs) which are more robust and objective than existing choices proposed in the literature. We formulate the priors so that they allow for straightforward posterior computation, require minimal input from the user, and result in shrinkage posterior representations, thus making them appropriate for models of large dimensions. A comprehensive forecasting exercise involving TVP-VARs of different dimensions establishes the usefulness of the proposed approach.

Relevance: 30.00%
Publisher:
Abstract:

In this paper, we forecast EU-area inflation with many predictors using time-varying parameter models. Time-varying parameter models are parameter-rich, and the time span of our data is relatively short; both facts motivate the use of shrinkage. In constant-coefficient regression models, the Bayesian Lasso is gaining popularity as an effective tool for achieving such shrinkage. In this paper, we develop econometric methods for using the Bayesian Lasso with time-varying parameter models. Our approach allows the coefficient on each predictor to be: i) time-varying, ii) constant over time or iii) shrunk to zero. The econometric methodology decides automatically which category each coefficient belongs in. Our empirical results indicate the benefits of such an approach.
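The hallmark of the approach above is that a coefficient can be shrunk exactly to zero. That behaviour can be illustrated with the soft-thresholding operator, which is the posterior mode of a coefficient under a Laplace (Bayesian Lasso) prior in an orthonormal regression; this is a textbook simplification for intuition, not the paper's sampler.

```python
import numpy as np

def soft_threshold(b, lam):
    """Shrink an estimate b toward zero; set it exactly to zero if |b| <= lam."""
    return np.sign(b) * np.maximum(np.abs(b) - lam, 0.0)

# Large coefficients survive (shrunk), small ones are zeroed out entirely.
ols = np.array([2.5, 0.3, -0.05, -1.2])
print(soft_threshold(ols, lam=0.4))
```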

Relevance: 30.00%
Publisher:
Abstract:

Proallatotoxins, and particularly precocenes, are exceptionally promising models for studying Rhodnius prolixus physiology and for comparison with other natural compounds with anti-hormonal activities. The effects of precocenes on feeding, development and reproduction of R. prolixus are detailed. The precocenes show significant effects on feeding, the moulting cycle (inducing precocious metamorphosis and ecdysial stasis) and reproduction of this insect. The mechanism of action of proallatotoxins is discussed based on the cytotoxic effect on the corpus allatum and on ecdysteroid biosynthesis in the prothoracic glands and ovaries. Further studies of these compounds in R. prolixus are needed and will hopefully reveal other unexplored aspects of the action of proallatotoxins on insects.

Relevance: 30.00%
Publisher:
Abstract:

Excessive exposure to solar ultraviolet (UV) radiation is the main cause of skin cancer. Specific prevention should be further developed to target overexposed or highly vulnerable populations; a better characterisation of anatomical UV exposure patterns is, however, needed for such targeted prevention. The aim was to develop a regression model for predicting the UV exposure ratio (ER, the ratio between the anatomical dose and the corresponding ground-level dose) for each body site without requiring individual measurements. A 3D numerical model (SimUVEx) was used to compute ER for various body sites and postures. A multiple fractional polynomial regression analysis was performed to identify predictors of ER. The regression model was fitted to simulation data and its performance was tested on an independent data set. Two input variables were sufficient to explain ER: the cosine of the maximal daily solar zenith angle and the fraction of the sky visible from the body site. The regression model was in good agreement with the simulated ER (R² = 0.988). Relative errors of up to +20% and -10% were found in daily dose predictions, whereas an average relative error of only 2.4% (-0.03% to 5.4%) was found in yearly dose predictions. The regression model accurately predicts ER and UV doses on the basis of readily available data, such as global UV erythemal irradiance measured at ground surface stations or inferred from satellite information. It makes the development of exposure data on a wide temporal and geographical scale possible and opens broad perspectives for epidemiological studies and skin cancer prevention.
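A toy version of the two-predictor structure described above is sketched below. The functional form and coefficients are invented for illustration; they are NOT the fitted fractional polynomial model, only a demonstration that ER can be computed from the two readily available inputs.

```python
import math

# Hypothetical sketch: ER rises with sun elevation (via the cosine of the
# maximal daily solar zenith angle) and with the fraction of visible sky.
# Coefficients a, b, c are invented placeholders.
def exposure_ratio(max_daily_sza_deg, sky_view_fraction, a=0.1, b=0.6, c=0.3):
    cos_sza = math.cos(math.radians(max_daily_sza_deg))
    return a + b * cos_sza + c * sky_view_fraction

print(round(exposure_ratio(30.0, 1.0), 2))   # exposed site, high sun
print(round(exposure_ratio(75.0, 0.4), 2))   # shaded site, low sun
```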

Relevance: 30.00%
Publisher:
Abstract:

We present a real data set of claim amounts in which costs related to damage are recorded separately from those related to medical expenses. Only claims with positive costs are considered here. Two approaches to density estimation are presented: a classical parametric method and a semi-parametric method based on transformation kernel density estimation. We explore the data set with standard univariate methods, and we propose Bayesian ways to select the bandwidth and transformation parameters in the univariate case. We indicate how to compare the results of the alternative methods by looking both at the shape of the density over its whole domain and at the density estimates in the right tail.
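The transformation idea can be sketched in one dimension: estimate the density of log(cost) with a Gaussian kernel, where the heavy right tail is tamed, then map the estimate back with the Jacobian 1/x. The synthetic "claim costs" and the fixed bandwidth below are invented placeholders, not the paper's data or its Bayesian parameter selection.

```python
import numpy as np

rng = np.random.default_rng(2)
claims = rng.lognormal(mean=8.0, sigma=1.2, size=500)   # synthetic positive costs

def tkde(x_grid, data, bandwidth):
    """Gaussian KDE of log(data), transformed back to the original scale."""
    y = np.log(data)
    u = (np.log(x_grid)[:, None] - y[None, :]) / bandwidth
    dens_log = np.exp(-0.5 * u ** 2).sum(axis=1) / (len(y) * bandwidth * np.sqrt(2 * np.pi))
    return dens_log / x_grid              # Jacobian of the log transform

grid = np.geomspace(claims.min(), claims.max(), 400)
dens = tkde(grid, claims, bandwidth=0.3)
total = float(np.sum(0.5 * (dens[1:] + dens[:-1]) * np.diff(grid)))
print(f"mass over the sample range: {total:.3f}")   # close to 1
```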

Relevance: 30.00%
Publisher:
Abstract:

There is an increasing awareness that the articulation of forensic science and criminal investigation is critical to the resolution of crimes. However, models and methods to support an effective collaboration between these partners are still poorly expressed or even lacking. Three propositions are borrowed from crime intelligence methods in order to bridge this gap: (a) the general intelligence process; (b) the analysis of investigative problems along principal perspectives: entities and their relationships, time and space, and quantitative aspects; and (c) visualisation methods as a mode of expression of a problem in these dimensions. Indeed, in a collaborative framework, different kinds of visualisations integrating forensic case data can play a central role in supporting decisions. Among them, link charts are scrutinised for their ability to structure and ease the analysis of a case by describing how relevant entities are connected. However, designing an informative chart that does not bias the reasoning process is not straightforward. Using visualisation as a catalyst for a collaborative approach integrating forensic data thus calls for better specifications.

Relevance: 30.00%
Publisher:
Abstract:

Summary: Lipophilicity plays an important role in determining and understanding the pharmacokinetic behaviour of drugs. It is usually expressed by the partition coefficient (log P) in the n-octanol/water system. The use of an additional solvent system (1,2-dichloroethane/water) is necessary to obtain complementary information, as log Poct values alone are not sufficient to explain all biological properties. The aim of this thesis is to develop tools for predicting the lipophilicity of new drugs and for analysing the information yielded by those log P values. Part I presents the development of theoretical models used to predict lipophilicity. Chapter 2 shows the necessity of extending the existing solvatochromic analyses in order to correctly predict the lipophilicity of new and complex neutral compounds. In Chapter 3, solvatochromic analyses are used to develop a model for predicting the lipophilicity of ions; the resulting global model allows estimation of the lipophilicity of neutral, anionic and cationic solutes. Part II presents the detailed study of two physicochemical filters. Chapter 4 shows that the Discovery RP Amide C16 stationary phase allows estimation of the lipophilicity of the neutral form of basic and acidic solutes, except for lipophilic acidic solutes, which present additional interactions with this particular stationary phase. In Chapter 5, four different IAM stationary phases are investigated. For neutral solutes, linear retention data are obtained whatever IAM column is used. For ionized solutes, retention is due to a balance of electrostatic and hydrophobic interactions; thus no discrimination is observed, from one column to another, between different series of solutes bearing the same charge. Part III presents two examples illustrating the information obtained through structure-property relationships (SPR). Graphically comparing lipophilicity values obtained in two different solvent systems reveals the presence of intramolecular effects such as internal H-bonds (Chapter 6). SPR is also used to study the partitioning of ionizable groups encountered in medicinal chemistry (Chapter 7).
Lay summary: To exert its therapeutic effect, a drug must reach its site of action in sufficient quantity. The effective amount of drug reaching the site of action depends on the interactions between the drug and numerous constituents of the organism, such as metabolic enzymes or biological membranes. The passage of the drug across these membranes, called permeation, is an important parameter to optimize in order to develop more potent drugs. Lipophilicity plays a key role in understanding the passive permeation of drugs. It is generally expressed by the partition coefficient (log P) in the immiscible n-octanol/water solvent system. Log Poct values alone have proved insufficient to explain permeation across all the different biological membranes of the human body; the use of an additional solvent system (1,2-dichloroethane/water) provides the complementary information indispensable to a good understanding of the permeation process. A large number of experimental and theoretical tools are available for studying lipophilicity. This thesis focuses mainly on developing or improving some of these tools so that they can be applied to a broader range of compounds. Briefly, two of these tools are: 1) the factorization of lipophilicity as a function of structural properties of the compounds (such as volume), which yields theoretical models usable for predicting the lipophilicity of new compounds or drug candidates; this approach is applied to the analysis of the lipophilicity of neutral as well as charged compounds; and 2) reversed-phase high-pressure liquid chromatography (RP-HPLC), a method commonly used for the experimental determination of log Poct values.

Relevance: 30.00%
Publisher:
Abstract:

This paper proposes a contemporaneous-threshold multivariate smooth transition autoregressive (C-MSTAR) model in which the regime weights depend on the ex ante probabilities that latent regime-specific variables exceed certain threshold values. A key feature of the model is that the transition function depends on all the parameters of the model as well as on the data. Since the mixing weights are also a function of the regime-specific innovation covariance matrix, the model can account for contemporaneous regime-specific co-movements of the variables. The stability and distributional properties of the proposed model are discussed, as well as issues of estimation, testing and forecasting. The practical usefulness of the C-MSTAR model is illustrated by examining the relationship between US stock prices and interest rates.

Relevance: 30.00%
Publisher:
Abstract:

On December 4th, 2007, a 3 Mm³ landslide occurred along the northwestern shore of Chehalis Lake. The initiation zone is located at the intersection of the main valley slope and the northern sidewall of a prominent gully. The slope failure caused a displacement wave that ran up to 38 m on the opposite shore of the lake. The landslide is temporally associated with a rain-on-snow meteorological event, which is thought to have triggered it. This paper describes the Chehalis Lake landslide and presents a comparison of discontinuity orientation datasets obtained using three techniques (field measurements, terrestrial photogrammetric 3D models and an airborne LiDAR digital elevation model) to describe the orientation and characteristics of the five discontinuity sets present. The discontinuity orientation data are used to perform kinematic, surface wedge limit equilibrium and three-dimensional distinct element analyses. The kinematic and surface wedge analyses suggest that the location of the slope failure (at the intersection of the valley slope and a gully wall) facilitated the development of the unstable rock mass, which initiated as a planar sliding failure. Results from the three-dimensional distinct element analyses suggest that the presence, orientation and high persistence of a discontinuity set dipping obliquely to the slope were critical to the development of the landslide and led to a failure mechanism dominated by planar sliding. The three-dimensional distinct element modelling also suggests that a steeply dipping discontinuity set striking perpendicular to the slope and associated with a fault exerted a significant control on the volume and extent of the failed rock mass, but not on the overall stability of the slope.
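The kinematic step can be sketched with the standard textbook test for planar sliding: a discontinuity is a candidate when it daylights in the slope face (dips less steeply than the face), dips more steeply than its friction angle, and its dip direction lies within roughly 20 degrees of the slope's. The angles below are illustrative, not the Chehalis Lake data.

```python
# Minimal kinematic test for planar sliding (standard rock-slope criteria;
# illustrative angles, not the Chehalis Lake discontinuity sets).
def planar_sliding_possible(slope_dip, slope_dip_dir,
                            joint_dip, joint_dip_dir,
                            friction_angle, lateral_limit=20.0):
    daylights = joint_dip < slope_dip
    steep_enough = joint_dip > friction_angle
    # Smallest angular difference between the two dip directions (0-180 deg).
    aligned = abs((joint_dip_dir - slope_dip_dir + 180.0) % 360.0 - 180.0) <= lateral_limit
    return daylights and steep_enough and aligned

print(planar_sliding_possible(60, 135, 40, 140, 30))  # True: all three conditions hold
print(planar_sliding_possible(60, 135, 40, 200, 30))  # False: the set dips obliquely to the face
```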

Relevance: 30.00%
Publisher:
Abstract:

The sensitivity of altitudinal and latitudinal tree-line ecotones to climate change, particularly to temperature, has received much attention. To improve our understanding of the factors affecting tree-line position, we used the spatially explicit dynamic forest model TreeMig. Although well suited because of its landscape dynamics functions, TreeMig features a parabolic temperature growth response curve, which has recently been questioned, and its species parameters are not specifically calibrated for cold temperatures. Our main goals were to improve the theoretical basis of the temperature growth response curve in the model and to develop a method for deriving that curve's parameters from tree-ring data. We replaced the parabola with an asymptotic curve, calibrated for the main species at the subalpine (Swiss Alps: Pinus cembra, Larix decidua, Picea abies) and boreal (Fennoscandia: Pinus sylvestris, Betula pubescens, P. abies) tree-lines. After fitting new parameters, the growth curve matched observed tree-ring widths better. For the subalpine species, the minimum degree-day sum allowing growth (kDDMin) was lowered by around 100 degree-days; in the case of Larix, the maximum potential ring width was increased to 5.19 mm. At the boreal tree-line, the kDDMin for P. sylvestris was lowered by 210 degree-days and its maximum ring width increased to 2.943 mm; for Betula (new in the model), kDDMin was set to 325 degree-days and the maximum ring width to 2.51 mm; the values from the only boreal sample site for Picea were similar to the subalpine ones, so the same parameters were used. However, adjusting the growth response alone did not improve the model's output concerning species' distributions and their relative importance at the tree-line. Minimum winter temperature (MinWiT, mean of the coldest winter month), which controls seedling establishment in TreeMig, proved more important for determining distribution. Picea, P. sylvestris and Betula did not previously have minimum winter temperature limits, so these were set to the 95th percentile of each species' coldest MinWiT site (-7, -11 and -13 °C, respectively). In a case study for the Alps, the original and newly calibrated versions of TreeMig were compared with biomass data from the National Forest Inventory (NFI). Both models gave similar, reasonably realistic results. In conclusion, this method of deriving temperature responses from tree-rings works well. However, regeneration and its underlying factors seem more important for controlling species' distributions than previously thought. More research on regeneration ecology, especially at the upper limit of forests, is needed to further improve predictions of tree-line responses to climate change.
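The asymptotic replacement for the parabolic response can be sketched as a curve that is zero below the minimum degree-day sum kDDMin and saturates at the species' maximum ring width. The kDDMin and maximum ring width for Betula are taken from the text; the exponential form and the saturation scale `s` are hypothetical illustrations, not the calibrated TreeMig curve.

```python
import math

# Sketch of an asymptotic temperature-growth response: zero below kDDMin,
# approaching max_rw for large degree-day sums. `s` is a hypothetical
# shape parameter, not a calibrated TreeMig value.
def ring_width(dd_sum, kdd_min, max_rw, s=300.0):
    if dd_sum <= kdd_min:
        return 0.0
    return max_rw * (1.0 - math.exp(-(dd_sum - kdd_min) / s))

# Betula pubescens at the boreal tree-line (kDDMin = 325, max ring width
# 2.51 mm, both from the study; the curve shape between them is illustrative).
for dd in (200, 500, 1000, 2000):            # degree-day sums
    print(dd, round(ring_width(dd, kdd_min=325, max_rw=2.51), 2))
```

Unlike a parabola, this curve never penalises warm sites with declining growth, which is the behaviour the recent criticism of the parabolic response calls for.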

Relevance: 30.00%
Publisher:
Abstract:

This study presents classification criteria for two-class Cannabis seedlings. As the cultivation of drug-type cannabis is forbidden in Switzerland, law enforcement authorities regularly ask laboratories to determine a cannabis plant's chemotype from seized material in order to ascertain whether a plantation is legal. In this study, the classification analysis is based on data obtained from the relative proportions of three major leaf compounds measured by gas chromatography interfaced with mass spectrometry (GC-MS). The aim is to discriminate between drug-type (illegal) and fiber-type (legal) cannabis at an early stage of growth. A Bayesian procedure is proposed: a Bayes factor is computed and classification is performed on the basis of the decision maker's specifications (i.e. prior probability distributions on cannabis type and the consequences of classification, measured by losses). Classification rates are computed with two statistical models and the results are compared. A sensitivity analysis is then performed to assess the robustness of the classification criteria.
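The decision step can be sketched with standard Bayesian decision theory: declare "drug type" when the posterior odds (Bayes factor times prior odds) exceed the ratio of the loss from a false drug call to the loss from a missed one. The Bayes factor, prior and loss values below are invented placeholders, not the study's fitted GC-MS models.

```python
# Bayes-factor classification with decision-maker losses (illustrative
# numbers; the likelihoods, priors and losses are NOT the study's).
def classify(bayes_factor, prior_drug, loss_miss_drug=10.0, loss_false_drug=1.0):
    """Return 'drug' when posterior expected loss favours that call.

    bayes_factor: p(data | drug type) / p(data | fiber type)
    """
    prior_odds = prior_drug / (1.0 - prior_drug)
    posterior_odds = bayes_factor * prior_odds
    # Decide 'drug' when posterior odds exceed the loss ratio.
    return "drug" if posterior_odds > loss_false_drug / loss_miss_drug else "fiber"

print(classify(bayes_factor=0.5, prior_drug=0.5))   # drug: missing one is 10x costlier
print(classify(bayes_factor=0.05, prior_drug=0.5))  # fiber
```

Note how the asymmetric losses shift the threshold: even mildly fiber-favouring evidence (Bayes factor 0.5) still yields a "drug" call when a missed drug-type plantation is deemed ten times costlier than a false alarm.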

Relevance: 30.00%
Publisher:
Abstract:

Indirect calorimetry based on respiratory exchange measurement has been used successfully since the beginning of the century to estimate heat production (energy expenditure) in human subjects and animals. The errors inherent to this classical technique can stem from various sources: 1) the model of calculation and its assumptions, 2) the calorimetric factors used, 3) technical factors and 4) human factors. The physiological and biochemical factors influencing the interpretation of calorimetric data include changes in the size of the bicarbonate and urea pools and the accumulation or loss (via breath, urine or sweat) of intermediary metabolites (gluconeogenesis, ketogenesis). More recently, respiratory gas exchange data have been used to estimate substrate utilization rates in various physiological and metabolic situations (fasting, the post-prandial state, etc.). It should be recalled that indirect calorimetry provides an index of overall substrate disappearance rates, which is incorrectly assumed to be equivalent to substrate "oxidation" rates. Unfortunately, there is no adequate gold standard for validating whole-body substrate "oxidation" rates; this contrasts with the validation of heat production by indirect calorimetry through direct calorimetry under strict thermal equilibrium conditions. Tracer techniques using stable (or radioactive) isotopes represent an independent way of assessing substrate utilization rates. When carbohydrate metabolism is measured with both techniques, indirect calorimetry generally provides glucose "oxidation" rates consistent with isotopic tracers, but only when certain metabolic processes (such as gluconeogenesis and lipogenesis) are minimal and/or when the respiratory quotients are not at the extremes of the physiological range.
However, it is believed that the tracer techniques underestimate true glucose "oxidation" rates because they fail to account for glycogenolysis in the tissues storing glucose, since this glucose escapes the systemic circulation. A major advantage of isotopic techniques is that they can estimate (given certain assumptions) various metabolic processes (such as gluconeogenesis) in a noninvasive way. Furthermore, when a fourth substrate (such as ethanol) is administered in addition to the three macronutrients, isotopic quantification of substrate "oxidation" allows one to eliminate the inherent assumptions made by indirect calorimetry. In conclusion, isotopic tracer techniques and indirect calorimetry should be considered complementary techniques, in particular since the tracer techniques require the measurement of carbon dioxide production obtained by indirect calorimetry. However, it should be kept in mind that the assessment of substrate oxidation by indirect calorimetry may involve large errors, in particular over short periods of time. With indirect calorimetry, energy expenditure (heat production) is calculated with substantially less error than substrate oxidation rates.
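The classical calculations can be made concrete with the abbreviated Weir equation for energy expenditure and the widely used Frayn (1983) equations for net substrate disappearance ("oxidation") rates. These are the standard published formulas rather than anything specific to this text, and the gas-exchange and urinary nitrogen values below are illustrative resting-adult numbers, not measured data.

```python
# Gas volumes in L/min (STPD), urinary nitrogen in g/min.
def weir_ee_kcal_min(vo2, vco2):
    """Abbreviated Weir equation for energy expenditure (kcal/min)."""
    return 3.941 * vo2 + 1.106 * vco2

def frayn_oxidation(vo2, vco2, urinary_n):
    """Net carbohydrate and fat disappearance rates (g/min), Frayn (1983)."""
    cho = 4.55 * vco2 - 3.21 * vo2 - 2.87 * urinary_n
    fat = 1.67 * (vo2 - vco2) - 1.92 * urinary_n
    return cho, fat

vo2, vco2, n = 0.25, 0.20, 0.008   # illustrative resting values (RQ = 0.8)
print(round(weir_ee_kcal_min(vo2, vco2), 3))   # energy expenditure, kcal/min
cho, fat = frayn_oxidation(vo2, vco2, n)
print(round(cho, 3), round(fat, 3))            # g/min of carbohydrate and fat
```

As the text stresses, these net rates index substrate disappearance: when gluconeogenesis or lipogenesis is substantial, or the respiratory quotient sits at a physiological extreme, the "oxidation" labels become unreliable even though the arithmetic is exact.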