103 results for "general regression model"
Abstract:
The methylation status of the O(6)-methylguanine-DNA methyltransferase (MGMT) gene is an important predictive biomarker for benefit from alkylating agent therapy in glioblastoma. Recent studies in anaplastic glioma suggest a prognostic value for MGMT methylation. Investigation of the pathogenetic and epigenetic features of this intriguingly distinct behavior requires accurate MGMT classification to assess high-throughput molecular databases. Promoter methylation-mediated gene silencing is strongly dependent on the location of the methylated CpGs, complicating classification. Using the HumanMethylation450 (HM-450K) BeadChip, which interrogates 176 CpGs annotated for the MGMT gene, 14 of them located in the promoter, two distinct regions in the CpG island of the promoter were identified with high importance for gene silencing and outcome prediction. A logistic regression model (MGMT-STP27) comprising the probes cg12434587 and cg12981137 provided good classification properties and prognostic value (kappa = 0.85; log-rank p < 0.001) using a training set of 63 glioblastomas from homogeneously treated patients, for whom MGMT methylation was previously shown to be predictive for outcome based on classification by methylation-specific PCR. MGMT-STP27 was successfully validated in an independent cohort of chemo-radiotherapy-treated glioblastoma patients (n = 50; kappa = 0.88; outcome, log-rank p < 0.001). A lower prevalence of MGMT methylation among CpG island methylator phenotype (CIMP)-positive tumors was found in glioblastomas from The Cancer Genome Atlas than in low-grade and anaplastic glioma cohorts, while in CIMP-negative gliomas MGMT was classified as methylated in approximately 50% regardless of tumor grade. The proposed MGMT-STP27 prediction model allows mining of datasets derived on the HM-450K or HM-27K BeadChip to explore effects of the distinct epigenetic context of MGMT methylation suspected to modulate treatment resistance in different tumor types.
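As an illustration of how a two-probe logistic classifier of this kind operates, the sketch below turns two probe methylation values into a methylated/unmethylated call. The coefficients, cutoff and sign convention are illustrative placeholders, not the published MGMT-STP27 estimates.

```python
import math

def two_probe_logistic(m1, m2, b0=-1.0, b1=0.5, b2=0.9, cutoff=0.5):
    """Logistic classification from two methylation probe values.
    Coefficients and cutoff are placeholders for illustration only,
    not the fitted MGMT-STP27 parameters."""
    logit = b0 + b1 * m1 + b2 * m2          # linear predictor
    p = 1.0 / (1.0 + math.exp(-logit))      # probability of methylation
    return p, ("methylated" if p >= cutoff else "unmethylated")

# Hypothetical probe values: high values push the call towards "methylated"
p_hi, label_hi = two_probe_logistic(2.0, 3.0)
p_lo, label_lo = two_probe_logistic(-4.0, -4.0)
```

With these placeholder weights, higher probe values raise the predicted methylation probability; the real model's coefficients and cutoff were fitted on the 63-patient training set described above.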
Abstract:
Background: Modelling epidemiological knowledge in validated clinical scores is a practical means of integrating EBM into usual care. Existing scores for cardiovascular disease have largely been developed in emergency settings, but few in primary care. Such a tool is needed for general practitioners (GPs) to evaluate the probability of ischemic heart disease (IHD) in patients with non-traumatic chest pain. Objective: To develop a predictive model to use as a clinical score for detecting IHD in patients with non-traumatic chest pain in primary care. Methods: A post-hoc secondary analysis of data from an observational study including 672 patients with chest pain, of whom 85 had IHD diagnosed by their GP during the year following their inclusion. A best-subset method was used to select 8 predictive variables from univariate analysis, which were fitted in a multivariate logistic regression model to define the score. Reliability of the model was assessed using the split-group method. Results: Significant predictors were: age (0-3 points), gender (1 point), having at least one cardiovascular risk factor (hypertension, dyslipidemia, diabetes, smoking, family history of CVD; 3 points), personal history of cardiovascular disease (1 point), duration of chest pain from 1 to 60 minutes (2 points), substernal chest pain (1 point), pain increasing with exertion (1 point) and absence of tenderness at palpation (1 point). The area under the ROC curve for the score was 0.95 (95% CI 0.93 to 0.97). Patients were categorised into three groups: low risk of IHD (score under 6; n = 360), moderate risk of IHD (score from 6 to 8; n = 187) and high risk of IHD (score from 9 to 13; n = 125). The prevalence of IHD in each group was 0%, 6.7% and 58.5% respectively. Reliability of the model seems satisfactory, as the model developed from the derivation set accurately predicted the number of patients in each group of the validation set (p = 0.948).
Conclusion: This clinical score, based only on history and physical examination, can be an important tool for the general physician in predicting ischemic heart disease in patients complaining of chest pain. A score below 6 points (found in more than half of our population) identifies patients at very low risk of IHD, in whom complementary exams (ECG, laboratory tests) can be avoided. A score of 6 points or more warrants investigation to detect or rule out IHD. Further external validation is required in ambulatory settings.
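The scoring rule described above can be sketched directly from the listed point values. One assumption is flagged: the abstract gives the age item as "0-3 points" without specifying the age bands, so the caller is expected to supply the age points directly.

```python
def ihd_score(age_points, male, any_cv_risk_factor, history_cvd,
              pain_1_to_60_min, substernal, worse_on_exertion,
              no_tenderness):
    """Clinical IHD score (0-13 points) from the abstract's predictors.
    age_points (0-3) is assumed to come from age bands that the
    abstract does not spell out."""
    score = age_points
    score += 1 if male else 0               # gender: 1 point (assumed male)
    score += 3 if any_cv_risk_factor else 0 # >=1 cardiovascular risk factor
    score += 1 if history_cvd else 0        # personal history of CVD
    score += 2 if pain_1_to_60_min else 0   # pain lasting 1-60 minutes
    score += 1 if substernal else 0         # substernal location
    score += 1 if worse_on_exertion else 0  # pain increases with exertion
    score += 1 if no_tenderness else 0      # no tenderness at palpation
    return score

def risk_group(score):
    """Risk strata and observed IHD prevalence reported in the study."""
    if score < 6:
        return "low"       # 0% prevalence of IHD
    if score <= 8:
        return "moderate"  # 6.7%
    return "high"          # 58.5%
```

A patient with every item positive reaches the maximum of 13 points, matching the "9-13" upper band in the abstract.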
Abstract:
Résumé: This thesis is devoted to the analysis, modelling and visualisation of spatially referenced environmental data using machine learning algorithms. Machine learning can broadly be considered a subfield of artificial intelligence that is particularly concerned with developing techniques and algorithms allowing a machine to learn from data. In this thesis, machine learning algorithms are adapted for application to environmental data and spatial prediction. Why machine learning? Because most machine learning algorithms are universal, adaptive, non-linear, robust and efficient modelling tools. They can solve classification, regression and probability density modelling problems in high-dimensional spaces composed of spatially referenced informative variables ("geo-features") in addition to geographical coordinates. Moreover, they are ideal for implementation as decision-support tools for environmental questions ranging from pattern recognition to modelling and prediction, including automatic mapping. Their efficiency is comparable to that of geostatistical models in the space of geographical coordinates, but they are indispensable for high-dimensional data that include geo-features. The most important and popular machine learning algorithms are presented theoretically and implemented as software tools for the environmental sciences. The main algorithms described are the multilayer perceptron (MLP), the best-known algorithm in artificial intelligence; general regression neural networks (GRNN); probabilistic neural networks (PNN); self-organising maps (SOM); Gaussian mixture models (GMM); radial basis function networks (RBF); and mixture density networks (MDN). This range of algorithms covers varied tasks such as classification, regression and probability density estimation. Exploratory data analysis (EDA) is the first step of any data analysis. In this thesis, the concepts of exploratory spatial data analysis (ESDA) are treated both through the traditional geostatistical approach of experimental variography and through machine learning principles. Experimental variography, which studies the relationships between pairs of points, is a basic tool for the geostatistical analysis of anisotropic spatial correlations and detects spatial patterns describable by a two-point statistic. The machine learning approach to ESDA is presented through the k-nearest-neighbours method, which is very simple and has excellent interpretation and visualisation properties. An important part of the thesis deals with the topical subject of automatic mapping of spatial data; general regression neural networks are proposed to solve this task efficiently. The performance of the GRNN is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where the GRNN significantly beat all other methods, particularly in the emergency scenario. The thesis consists of four chapters: theory, applications, software tools and guided examples. An important part of the work is a software collection, Machine Learning Office, developed over the last 15 years and used both for teaching numerous courses, including international workshops in China, France, Italy, Ireland and Switzerland, and in fundamental and applied research projects. The case studies considered cover a wide spectrum of real low- and high-dimensional geo-environmental problems, such as air, soil and water pollution by radioactive products and heavy metals, classification of soil types and hydrogeological units, uncertainty mapping for decision support, and the assessment of natural hazards (landslides, avalanches). Complementary tools for exploratory data analysis and visualisation were also developed, with care taken to provide a user-friendly, easy-to-use interface. Machine Learning for geospatial data: algorithms, software tools and case studies. Abstract: The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence mainly concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning?
In a few words, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can find solutions for classification, regression, and probability density modeling problems in high-dimensional geo-feature spaces, composed of geographical space and additional relevant spatially referenced features. They are well suited to be implemented as predictive engines in decision support systems, for purposes of environmental data mining including pattern recognition, modeling and prediction as well as automatic data mapping. They have efficiency competitive with geostatistical models in low-dimensional geographical spaces but are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for geo- and environmental sciences are presented in detail: from theoretical description of the concepts to software implementation. The main algorithms and models considered are the following: multi-layer perceptron (a workhorse of machine learning), general regression neural networks, probabilistic neural networks, self-organising (Kohonen) maps, Gaussian mixture models, radial basis function networks, and mixture density networks. This set of models covers machine learning tasks such as classification, regression, and density estimation. Exploratory data analysis (EDA) is the initial and a very important part of data analysis. In this thesis the concepts of exploratory spatial data analysis (ESDA) are considered using both the traditional geostatistical approach, such as experimental variography, and machine learning. Experimental variography is a basic tool for the geostatistical analysis of anisotropic spatial correlations which helps to understand the presence of spatial patterns, at least those described by two-point statistics. A machine learning approach for ESDA is presented by applying the k-nearest neighbors (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a currently hot topic, namely automatic mapping of geospatial data. The general regression neural network (GRNN) is proposed as an efficient model to solve this task. Performance of the GRNN model is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where the GRNN model significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters and has the following structure: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office. The Machine Learning Office tools were developed during the last 15 years and have been used both for many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and for realizing fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals, soil type and hydro-geological unit classification, decision-oriented mapping with uncertainties, and natural hazard (landslides, avalanches) assessment and susceptibility mapping. Complementary tools useful for exploratory data analysis and visualisation were developed as well. The software is user friendly and easy to use.
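The GRNN proposed above for automatic mapping is, at its core, Gaussian-kernel-weighted averaging (Nadaraya-Watson regression) over the training points. A minimal sketch for 2-D spatial data, with made-up sample values:

```python
import math

def grnn_predict(train_xy, train_z, query_xy, sigma):
    """General Regression Neural Network prediction: a Gaussian-weighted
    average of the training values, with kernel width sigma."""
    num = den = 0.0
    for (x, y), z in zip(train_xy, train_z):
        d2 = (x - query_xy[0]) ** 2 + (y - query_xy[1]) ** 2
        w = math.exp(-d2 / (2.0 * sigma ** 2))  # Gaussian kernel weight
        num += w * z
        den += w
    return num / den

# Toy example: four measurements at the corners of a unit square
pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
vals = [1.0, 2.0, 3.0, 4.0]
center = grnn_predict(pts, vals, (0.5, 0.5), sigma=0.5)   # symmetric -> mean
near_corner = grnn_predict(pts, vals, (0.0, 0.0), sigma=0.05)
```

At the centre of the square all weights are equal, so the prediction is the plain mean (2.5); with a small sigma the prediction near a data point collapses to that point's value, illustrating how the kernel width trades smoothing against locality.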
Abstract:
BACKGROUND: Up to 5% of patients presenting to the emergency department (ED) four or more times within a 12-month period represent 21% of total ED visits. In this study we sought to characterize social and medical vulnerability factors of ED frequent users (FUs) and to explore whether these factors hold simultaneously. METHODS: We performed a case-control study at Lausanne University Hospital, Switzerland. Patients over 18 years presenting to the ED at least once within the study period (April 2008 to March 2009) were included. FUs were defined as patients with four or more ED visits within the previous 12 months. Outcome data were extracted from medical records of the first ED attendance within the study period. Outcomes included basic demographics and social variables, ED admission diagnosis, somatic and psychiatric days hospitalized over 12 months, and having a primary care physician. We calculated the percentage of FUs and non-FUs having at least one social and one medical vulnerability factor. The four chosen social factors were: unemployed and/or dependent on government welfare, institutionalized and/or without fixed residence, either separated, divorced or widowed, and under guardianship. The four medical vulnerability factors were: ≥6 somatic days hospitalized, ≥1 psychiatric days hospitalized, ≥5 clinical departments used (all three factors measured over 12 months), and ED admission diagnosis of alcohol and/or drug abuse. Univariate and multivariate logistic regression analyses allowed comparison of two random samples of 354 FUs and 354 non-FUs (statistical power 0.9, alpha 0.05 for all outcomes except gender, country of birth, and insurance type). RESULTS: FUs accounted for 7.7% of ED patients and 24.9% of ED visits. Univariate logistic regression showed that FUs were older (mean age 49.8 vs. 45.2 yrs, p=0.003), more often separated and/or divorced (17.5% vs. 13.9%, p=0.029) or widowed (13.8% vs. 8.8%, p=0.029), and either unemployed or dependent on government welfare (31.3% vs. 13.3%, p<0.001), compared to non-FUs. FUs cumulated more days hospitalized over 12 months (mean number of somatic days per patient 1.0 vs. 0.3, p<0.001; mean number of psychiatric days per patient 0.12 vs. 0.03, p<0.001). The two groups were similar regarding gender distribution (females 51.7% vs. 48.3%). The multivariate logistic regression model was based on the six most significant factors identified by univariate analysis. The model showed that FUs had more social problems, as they were more likely to be institutionalized or without fixed residence (OR 4.62; 95% CI, 1.65 to 12.93), and to be unemployed or dependent on government welfare (OR 2.03; 95% CI, 1.31 to 3.14) compared to non-FUs. FUs were more likely to need medical care, as indicated by involvement of ≥5 clinical departments over 12 months (OR 6.2; 95% CI, 3.74 to 10.15), having an ED admission diagnosis of substance abuse (OR 3.23; 95% CI, 1.23 to 8.46) and having a primary care physician (OR 1.70; 95% CI, 1.13 to 2.56); however, they were less likely to present with an admission diagnosis of injury (OR 0.64; 95% CI, 0.40 to 1.00) compared to non-FUs. FUs were more likely to combine at least one social with one medical vulnerability factor (38.4% vs. 12.1%, OR 7.74; 95% CI 5.03 to 11.93). CONCLUSIONS: FUs were more likely than non-FUs to have social and medical vulnerability factors and to have multiple factors in combination.
Abstract:
The paper deals with the development and application of a methodology for the automatic mapping of pollution/contamination data. The General Regression Neural Network (GRNN) is considered in detail and is proposed as an efficient tool to solve this problem. Automatic tuning of isotropic and anisotropic GRNN models using a cross-validation procedure is presented. Results are compared with the k-nearest-neighbours interpolation algorithm using an independent validation data set. The quality of the mapping is controlled by analysis of the raw data and the residuals using variography. Maps of the probability of exceeding a given decision level and "thick" isoline visualization of the uncertainties are presented as examples of decision-oriented mapping. A real case study is based on the mapping of radioactively contaminated territories.
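The automatic tuning described above can be sketched as a leave-one-out cross-validation search over candidate kernel widths for an isotropic GRNN. This is a generic illustration under simplified assumptions, not the authors' implementation (which also tunes anisotropic kernels):

```python
import math

def grnn(train, query, sigma):
    """Isotropic GRNN: Gaussian-kernel weighted average of training values.
    train is a list of (x, y, z) tuples; query is an (x, y) location."""
    num = den = 0.0
    for x, y, z in train:
        w = math.exp(-((x - query[0]) ** 2 + (y - query[1]) ** 2)
                     / (2.0 * sigma ** 2))
        num += w * z
        den += w
    return num / den if den > 0 else float("nan")

def tune_sigma(train, candidates):
    """Pick the kernel width minimising leave-one-out squared error,
    mirroring the cross-validation tuning the abstract describes."""
    best_sigma, best_err = None, float("inf")
    for s in candidates:
        err = 0.0
        for i, (x, y, z) in enumerate(train):
            rest = train[:i] + train[i + 1:]      # hold out point i
            err += (grnn(rest, (x, y), s) - z) ** 2
        if err < best_err:
            best_sigma, best_err = s, err
    return best_sigma
```

The leave-one-out error is the standard model-free criterion here: too small a sigma overfits to the nearest neighbour, too large a sigma smooths towards the global mean.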
Abstract:
BACKGROUND: Shared Decision Making (SDM) is increasingly advocated as a model for medical decision making. However, there is still low use of SDM in clinical practice. High-impact-factor journals might represent an efficient way for its dissemination. We aimed to identify and characterize publication trends of SDM in 15 high-impact medical journals. METHODS: We selected the 15 general and internal medicine journals with the highest impact factor publishing original articles, letters and editorials. We retrieved publications from 1996 to 2011 through the full-text search function on each journal website and abstracted bibliometric data. We included publications of any type containing the phrase "shared decision making" or five other variants in their abstract or full text. These were referred to as SDM publications. A polynomial Poisson regression model with a logarithmic link function was used to assess the evolution of the number of SDM publications across the period according to publication characteristics. RESULTS: We identified 1285 SDM publications out of 229,179 publications in the 15 journals from 1996 to 2011. The absolute number of SDM publications by journal ranged from 2 to 273 over 16 years. SDM publications increased in both absolute and relative numbers per year, from 46 (0.32% relative to all publications from the 15 journals) in 1996 to 165 (1.17%) in 2011. This growth was exponential (P < 0.01). We found fewer research publications (465, 36.2% of all SDM publications) than non-research publications, which included non-systematic reviews, letters, and editorials. The increase in research publications across time was linear. Full-text search retrieved ten times more SDM publications than a similar PubMed search (1285 vs. 119). CONCLUSION: This full-text review showed that SDM publications increased exponentially in major medical journals from 1996 to 2011.
This growth might reflect an increased dissemination of the SDM concept to the medical community.
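As a back-of-the-envelope check on the exponential trend that a log-link Poisson model formalizes, the two endpoint counts reported in the abstract imply a constant annual growth factor. This sketch ignores the year-by-year fit and covariates of the actual model:

```python
import math

def annual_growth_rate(count_start, count_end, years):
    """Implied constant annual growth factor under a log-linear trend:
    count_end = count_start * g**years, solved for g."""
    return math.exp(math.log(count_end / count_start) / years)

# Endpoint counts from the abstract: 46 SDM publications in 1996, 165 in 2011
g = annual_growth_rate(46, 165, 2011 - 1996)
```

The implied factor is roughly 1.09, i.e. about 9% more SDM publications per year, which is consistent with the reported exponential growth.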
Abstract:
Over the past few decades, age estimation of living persons has represented a challenging task for many forensic services worldwide. In general, the process for age estimation includes observing the degree of maturity reached by some physical attributes, such as the dentition or several ossification centers. The estimated chronological age, or the probability that an individual belongs to a meaningful class of ages, is then obtained from the observed degree of maturity by means of various statistical methods. Among these methods, those developed in a Bayesian framework offer users the possibility of coherently dealing with the uncertainty associated with age estimation and of assessing, in a transparent and logical way, the probability that an examined individual is younger or older than a given age threshold. Recently, a Bayesian network for age estimation has been presented in the scientific literature; this kind of probabilistic graphical tool may facilitate the use of the probabilistic approach. Probabilities of interest in the network are assigned by means of transition analysis, a parametric statistical model which links chronological age and degree of maturity through specific regression models, such as logit or probit models. Since different regression models can be employed in transition analysis, the aim of this paper is to study the influence of the model on the classification of individuals. The analysis was performed using a dataset on the ossification status of the medial clavicular epiphysis, and the results support that the classification of individuals does not depend on the choice of the regression model.
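Transition analysis links chronological age to the probability of having reached a maturity stage through a logit or probit regression. A minimal sketch of the two link functions compared in the paper; the parameter values (transition age, spread) are illustrative, not fitted to clavicular data:

```python
import math

def p_logit(age, mu, s):
    """Logit link: probability the stage is reached at a given age,
    where mu is the transition age and s the spread. Illustrative only."""
    return 1.0 / (1.0 + math.exp(-(age - mu) / s))

def p_probit(age, mu, sigma):
    """Probit link: same quantity via the normal CDF (through math.erf)."""
    return 0.5 * (1.0 + math.erf((age - mu) / (sigma * math.sqrt(2.0))))
```

Both curves pass through 0.5 at the transition age mu and differ only in tail behaviour, which gives some intuition for the paper's finding that classification is largely insensitive to the choice of link.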
Abstract:
Well-established examples of genetic epistasis between a pair of loci typically show characteristic patterns of phenotypic distributions in joint genotype tables. However, inferring epistasis from such data is difficult due to the lack of power of commonly used approaches, which decompose the epistatic patterns into main plus interaction effects and then test the interaction term. Testing additive-only or all terms may have more power, but these tests are sensitive to non-epistatic patterns. Alternatively, the epistatic patterns of interest can be enumerated and the best-matching one found by searching through the possibilities. Although this approach requires multiple-testing correction over the possible patterns, each pattern can be fitted with a regression model with just one degree of freedom, so the overall power can still be high if the number of possible patterns is limited. Here we compare the power of the linear decomposition and pattern-search methods by applying them to simulated data generated under several patterns of joint genotype effects with simple biological interpretations. Interaction-only tests are the least powerful, while the pattern-search approach is the most powerful if the range of possibilities is restricted but still includes the true pattern.
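The pattern-search idea, fitting each candidate joint-genotype pattern with a single-degree-of-freedom regression, can be sketched as scoring each 0/1 pattern by its R-squared against the nine genotype cell means. This is a simplified stand-in (equal cell sizes, no covariates) for the full regression model:

```python
def pattern_fit(cell_means, pattern):
    """One-degree-of-freedom fit: R-squared from regressing the nine
    joint-genotype cell means on a candidate 0/1 epistatic pattern."""
    n = len(cell_means)
    mx = sum(pattern) / n
    my = sum(cell_means) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(pattern, cell_means))
    sxx = sum((x - mx) ** 2 for x in pattern)
    syy = sum((y - my) ** 2 for y in cell_means)
    return (sxy * sxy) / (sxx * syy) if sxx and syy else 0.0

def best_pattern(cell_means, patterns):
    """Search over an enumerated, restricted set of candidate patterns."""
    return max(patterns, key=lambda p: pattern_fit(cell_means, p))

# Toy data: only the double-homozygote cell departs from baseline
means = [1, 1, 1, 1, 1, 1, 1, 1, 5]
p_corner = [0, 0, 0, 0, 0, 0, 0, 0, 1]   # matches the data exactly
p_other = [1, 0, 0, 0, 0, 0, 0, 0, 0]
```

Because each candidate uses one degree of freedom, the multiple-testing burden scales only with the size of the (restricted) pattern set, which is the source of the power advantage noted in the abstract.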
Abstract:
SETTING: Ambulatory paediatric clinic in Lausanne, Switzerland, a country with a significant proportion of tuberculosis (TB) among immigrants. AIM: To assess the factors associated with positive tuberculin skin tests (TST) among children examined during a health check-up or during TB contact tracing, notably the influence of BCG (Bacille Calmette-Guérin) vaccination and history of TB contact. METHOD: A descriptive study of children who had a TST (2 units RT23) between November 2002 and April 2004. Age, sex, history of TB contact, BCG vaccination status, country of origin and birth outside Switzerland were recorded. RESULTS: Of 234 children, 176 (75%) had a reaction equal to zero and 31 (13%) tested positive (>10 mm). In a linear regression model, the size of the TST varied significantly according to the history of TB contact, age, TB incidence in the country of origin and BCG vaccination status, but not according to sex or birth in or outside Switzerland. In a logistic regression model including all the recorded variables, age (odds ratio [OR] = 1.21, 95% CI 1.08; 1.35), a history of TB contact (OR = 7.31, 95% CI 2.23; 24) and the incidence of TB in the country of origin (OR = 1.01, 95% CI 1.00; 1.02) were significantly associated with a positive TST, but sex (OR = 1.18, 95% CI 0.50; 2.78) and BCG vaccination status (OR = 2.97, 95% CI 0.91; 9.72) were not. CONCLUSIONS: TB incidence in the country of origin, BCG vaccination and age influence the TST reaction (size, or proportion of TST ≥ 10 mm). However, the most obvious risk factor for a positive TST is a history of contact with TB.
Abstract:
This thesis focuses on theoretical asset pricing models and their empirical applications. I aim to investigate the following noteworthy problems: i) whether the relationship between asset prices and investors' propensities to gamble and to fear disaster is time varying, ii) whether the conflicting evidence for firm- and market-level skewness can be explained by downside risk, iii) whether costly learning drives liquidity risk. Moreover, empirical tests support the above hypotheses and provide novel findings in asset pricing, investment decisions, and firms' funding liquidity. The first chapter considers a partial equilibrium model where investors have heterogeneous propensities to gamble and fear disaster. Skewness preference represents the desire to gamble, while kurtosis aversion represents fear of extreme returns. Using US data from 1988 to 2012, my model demonstrates that in bad times, risk aversion is higher, more people fear disaster, and fewer people gamble, in contrast to good times. This leads to a new empirical finding: gambling preference has a greater impact on asset prices during market downturns than during booms. The second chapter consists of two essays. The first essay introduces a formula based on the conditional CAPM for decomposing market skewness. We find that major market upward and downward movements can be well predicted by the asymmetric comovement of betas, which is characterized by an indicator called "Systematic Downside Risk" (SDR). We find that SDR can effectively forecast future stock market movements, and we obtain out-of-sample R-squared values (compared with a strategy using the historical mean) of more than 2.27% with monthly data. The second essay reconciles a well-known empirical fact: aggregating positively skewed firm returns leads to a negatively skewed market return. We reconcile this fact through firms' greater response to negative market news than to positive market news. We also propose several market return predictors, such as downside idiosyncratic skewness. The third chapter studies funding liquidity risk based on a general equilibrium model which features two agents: one entrepreneur and one external investor. Only the investor needs to acquire information to estimate the unobservable fundamentals driving economic output. The novelty is that information acquisition is more costly in bad times than in good times, i.e. a counter-cyclical information cost, as supported by previous empirical evidence. We then show that liquidity risks are principally driven by costly learning. Résumé: This thesis presents theoretical asset pricing models and their empirical applications. My objective is to study the following problems: whether the relationship between asset prices and investors' propensities to gamble and to fear disaster varies over time; whether the conflicting evidence for firm- and market-level skewness can be explained by downside risk; and whether costly learning increases liquidity risk. In addition, empirical tests confirm the above hypotheses and provide new findings regarding asset pricing, investment decisions and firms' funding liquidity. The first chapter examines an equilibrium model in which investors have heterogeneous propensities to gamble and to fear disaster. Skewness preference represents the desire to gamble, while kurtosis aversion represents fear of disaster. Using US data from 1988 to 2012, my model shows that in bad times risk aversion is higher, more people fear disaster and fewer people gamble, in contrast to good times.
This leads to a new empirical finding: gambling preference has a greater impact on asset prices during market downturns than during booms. Exploiting this relationship alone would generate an annual excess return of 7.74% that is not explained by popular factor models. The second chapter comprises two essays. The first essay introduces a formula based on the conditional CAPM for decomposing market skewness. We found that major upward and downward market movements can be predicted by the comovements of betas. An indicator called Systematic Downside Risk (SDR) is created to characterize this asymmetry in the comovements of betas. We found that SDR can effectively forecast future stock market movements, and we obtain out-of-sample R-squared values (compared with a strategy using the historical mean) of more than 2.27% with monthly data. An investor timing the market using SDR would have obtained a large increase in the ratio, of 0.206. The second essay reconciles a well-known empirical fact about firm- and market-level skewness: aggregating positively skewed firm returns leads to a negatively skewed market return. We decompose market return skewness at the firm level and reconcile this fact through firms' greater response to negative market news than to positive market news. This decomposition reveals several effective market return predictors, such as volatility-weighted idiosyncratic skewness and downside idiosyncratic skewness. The third chapter provides a new theoretical foundation for time-varying liquidity problems in an incomplete-market environment. We propose a general equilibrium model with two agents: an entrepreneur and an external investor. Only the investor needs to know the true state of the firm; consequently, payoff-relevant information is costly. The novelty is that information acquisition is more expensive in bad times than in good times, as confirmed by previous empirical evidence. When a recession begins, costly learning raises liquidity premia, causing a liquidity-evaporation problem, as also confirmed by previous empirical evidence.
Abstract:
To evaluate the efficacy of anti-J5 serum in the treatment of severe infectious purpura, 73 children were randomized to receive either anti-J5 (n = 40) or control (n = 33) plasma. Age, blood pressure, and biologic risk factors were similar in both groups. At admission, however, tumor necrosis factor serum concentrations were 974 +/- 173 pg/ml compared with 473 +/- 85 pg/ml (P = .023), and interleukin-6 serum concentrations were 129 +/- 45 compared with 19 +/- 5 ng/ml (P = .005), in the control and treated groups, respectively. The duration of shock and the occurrence of complications were similar in both groups. The mortality rate was 36% in the control group and 25% in the treated group (P = .317; odds ratio, 0.76; 95% confidence interval, 0.46-1.26). This trend disappeared after correction for imbalances in risk factors at randomization using a logistic regression model. These results suggest that anti-J5 plasma did not affect the course or mortality of severe infectious purpura in children.
Abstract:
Altitudinal tree lines are mainly constrained by temperature, but can also be influenced by factors such as human activity, particularly in the European Alps, where centuries of agricultural use have affected the tree line. Over the last decades this trend has been reversed due to changing agricultural practices and land abandonment. We aimed to combine a statistical land-abandonment model with a forest dynamics model to take into account the combined effects of climate and human land use on the Alpine tree line in Switzerland. Land-abandonment probability was expressed by a logistic regression function of degree-day sum, distance from forest edge, soil stoniness, slope, proportion of employees in the secondary and tertiary sectors, proportion of commuters and proportion of full-time farms. This was implemented in the TreeMig spatio-temporal forest model. Distance from forest edge and degree-day sum vary through feedback from the dynamics part of TreeMig and climate change scenarios, while the other variables remain constant for each grid cell over time. The new model, TreeMig-LAb, was tested on theoretical landscapes, where the variables in the land-abandonment model were varied one by one. This confirmed the strong influence of distance from forest and slope on abandonment probability. Degree-day sum has a more complex role, with opposite influences on land abandonment and forest growth. TreeMig-LAb was also applied to a case study area in the Upper Engadine (Swiss Alps), along with a model where abandonment probability was constant. Two scenarios were used: natural succession only (100% probability) and a probability of abandonment based on past transition proportions in that area (2.1% per decade). The former showed new forest growing in all but the highest-altitude locations. The latter was more realistic as to the number of newly forested cells, but their locations were random and the resulting landscape heterogeneous.
Using the logistic regression model gave results consistent with observed patterns of land-abandonment: existing forests expanded and gaps closed, leading to an increasingly homogeneous landscape.
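The land-abandonment submodel described above is a standard logistic function of the per-cell predictors. A minimal sketch with entirely hypothetical coefficients (the fitted TreeMig-LAb values are not reproduced here):

```python
import math

# Hypothetical coefficients, for illustration only; the fitted
# TreeMig-LAb coefficients and variable scalings are not reproduced here.
COEFFS = {
    "degree_day_sum": -0.0004,   # per degree-day
    "dist_forest_edge": -0.002,  # per metre
    "slope": 0.05,               # per degree
    "stoniness": 0.3,            # ordinal class
}
INTERCEPT = -0.5

def abandonment_probability(cell):
    """Logistic land-abandonment probability for one grid cell."""
    z = INTERCEPT + sum(COEFFS[k] * cell[k] for k in COEFFS)
    return 1.0 / (1.0 + math.exp(-z))

cell = {"degree_day_sum": 900, "dist_forest_edge": 50, "slope": 25, "stoniness": 1}
p = abandonment_probability(cell)
print(f"P(abandonment) = {p:.3f}")
```

In the coupled model, `degree_day_sum` and `dist_forest_edge` would be updated each time step from the forest-dynamics side, while the socio-economic predictors stay fixed per cell, as the abstract describes.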
Abstract:
ABSTRACT: BACKGROUND: Chest pain raises concern for the possibility of coronary heart disease. Scoring methods have been developed to identify coronary heart disease in emergency settings, but not in primary care. METHODS: Data were collected from a multicenter Swiss clinical cohort study including 672 consecutive patients with chest pain who had visited one of 59 family practitioners' offices. Using delayed diagnosis as the reference standard, we derived a prediction rule to rule out coronary heart disease by means of a logistic regression model. Known cardiovascular risk factors, pain characteristics, and physical signs associated with coronary heart disease were explored to develop a clinical score. Patients diagnosed with angina or acute myocardial infarction within the year following their initial visit comprised the coronary heart disease group. RESULTS: The coronary heart disease score was derived from eight variables: age, gender, duration of chest pain from 1 to 60 minutes, substernal chest pain location, pain increasing with exertion, absence of a tenderness point at palpation, cardiovascular risk factors, and personal history of cardiovascular disease. The area under the receiver operating characteristics curve was 0.95 (95% confidence interval, 0.92; 0.97). From this score, 413 patients were considered low risk, using the 5th percentile of the score among coronary heart disease patients as the cut-off. Internal validity was confirmed by bootstrapping. External validation using data from a German cohort (Marburg, n = 774) yielded an area under the receiver operating characteristics curve of 0.75 (95% confidence interval, 0.72; 0.81), with a sensitivity of 85.6% and a specificity of 47.2%. CONCLUSIONS: This score, based only on history and physical examination, is a complementary tool for ruling out coronary heart disease in primary care patients complaining of chest pain.
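Validation figures like the sensitivity and specificity above come from applying a score at a fixed cut-off. A minimal sketch of that computation on toy data (not the Swiss or Marburg cohorts):

```python
def sens_spec(scores, labels, threshold):
    """Sensitivity and specificity of a score, calling values
    >= threshold positive (disease present); labels are 1/0."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 0)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    return tp / (tp + fn), tn / (tn + fp)

# Toy data: 4 diseased (label 1) and 6 non-diseased (label 0) patients
scores = [8, 7, 6, 3, 5, 4, 2, 2, 1, 1]
labels = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
sens, spec = sens_spec(scores, labels, threshold=4)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```

For a rule-out score, the threshold is chosen to keep sensitivity high (here, a cut-off at the 5th percentile of scores among diseased patients targets roughly 95% sensitivity), accepting lower specificity such as the 47.2% reported in the external validation.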
Abstract:
Among the PAH class of compounds, high molecular weight PAHs are now considered relevant cancer inducers, but not all of them have the same biological activity. Their analysis is difficult, mainly because of the presence of numerous isomers and their low volatility. Retention indices (Ri) for 13 dibenzopyrenes and homologues were determined by high-resolution capillary gas chromatography (GC) with four different stationary phases: a 5% phenyl-substituted methylpolysiloxane column (DB-5ms), a 35% phenyl-substituted methylpolysiloxane column (BPX-35), a 50% phenyl-substituted methylpolysiloxane column (BPX-50), and a 35% trifluoropropylmethyl polysiloxane stationary phase (Rtx-200). Correlations for retention on each phase were investigated using eight independent molecular descriptors. Ri was shown to be linearly correlated with PAH volume, polarisability alpha, and Hückel pi-energy on the four examined columns. Ionisation potential Ip is a fourth variable that improves the regression model for the DB-5ms, BPX-35, and BPX-50 columns, with correlation coefficients then ranging from r2 = 0.935 to r2 = 0.952. Application of these indices to the identification and quantification of PAHs with MW 302 in certified diesel particulate matter SRM 1650a is presented and discussed.
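A correlation like Ri versus a single molecular descriptor reduces to ordinary least squares. A pure-Python sketch with made-up descriptor values (the published fits use up to four descriptors per column, not reproduced here):

```python
def linear_fit(x, y):
    """Ordinary least-squares fit y = a + b*x, returning (a, b, r2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    r2 = 1.0 - ss_res / ss_tot
    return a, b, r2

# Made-up molecular volumes (x) and retention indices (y), illustrative only
volume = [240.0, 255.0, 261.0, 270.0, 285.0]
ri = [301.2, 302.0, 302.4, 302.9, 303.8]
a, b, r2 = linear_fit(volume, ri)
print(f"Ri = {a:.2f} + {b:.4f} * volume  (r2 = {r2:.3f})")
```

The multi-descriptor models in the abstract extend this to several regressors (volume, polarisability, Hückel pi-energy, and optionally Ip), with the reported r2 of 0.935-0.952 measuring how much of the retention-index variance the descriptors explain.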
Abstract:
We found, in a retrospective cohort study from a large clinical registry for the province of Manitoba, Canada, that lumbar spine texture analysis using the trabecular bone score (TBS) is a risk factor for major osteoporotic fracture (MOF) and for death. INTRODUCTION: FRAX® estimates the 10-year probability of MOF using clinical risk factors and femoral neck bone mineral density (BMD). TBS, derived from texture in the spine dual X-ray absorptiometry (DXA) image, is related to bone microarchitecture and fracture risk independently of BMD. Our objective was to determine whether TBS provides information on MOF probability beyond that provided by the FRAX variables. METHODS: We included 33,352 women aged 40-100 years (mean 63 years) with baseline DXA measurements of lumbar spine TBS and femoral neck BMD. The association between TBS, the FRAX variables, and the risk of MOF or death was examined using an extension of the Poisson regression model. RESULTS: During a mean follow-up of 4.7 years, 1,754 women died and 1,872 sustained one or more MOF. For each standard deviation reduction in TBS, there was a 36 % increase in MOF risk (HR 1.36, 95 % CI 1.30-1.42, p < 0.001) and a 32 % increase in the risk of death (HR 1.32, 95 % CI 1.26-1.39, p < 0.001). When adjusted for significant clinical risk factors and femoral neck BMD, lumbar spine TBS remained a significant predictor of MOF (HR 1.18, 95 % CI 1.12-1.23) and death (HR 1.20, 95 % CI 1.14-1.26). Models for estimating MOF probability, accounting for competing mortality, showed that low TBS (10th percentile) increased risk 1.5-1.6-fold compared with high TBS (90th percentile) across a broad range of ages and femoral neck T-scores. CONCLUSIONS: Lumbar spine TBS predicts incident MOF independently of the FRAX clinical risk factors and femoral neck BMD, even after accounting for the increased death hazard.
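The 1.5-1.6-fold contrast between the 10th and 90th TBS percentiles is roughly what a per-SD hazard ratio implies once the percentile gap is expressed in SD units (about 2.56 SD apart under a normal distribution). A back-of-envelope sketch of that consistency check using the adjusted HR of 1.18, assuming TBS is approximately normally distributed; this is not the paper's competing-mortality model:

```python
from statistics import NormalDist

def percentile_contrast(hr_per_sd, p_low=0.10, p_high=0.90):
    """Fold-increase in hazard between two percentiles of a normally
    distributed score, given the hazard ratio per 1 SD decrease."""
    sd_gap = NormalDist().inv_cdf(p_high) - NormalDist().inv_cdf(p_low)  # ~2.56 SD
    return hr_per_sd ** sd_gap

contrast = percentile_contrast(1.18)  # adjusted HR for MOF per SD decrease in TBS
print(f"10th vs 90th percentile hazard contrast: {contrast:.2f}-fold")
```

The result, about 1.5-fold, falls in the 1.5-1.6 range quoted in the abstract.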