86 results for Value-based pricing
at Université de Lausanne, Switzerland
Abstract:
BACKGROUND: Recent neuroimaging studies suggest that value-based decision-making may rely on mechanisms of evidence accumulation. However, no studies have explicitly investigated the time at which single decisions are taken based on such an accumulation process. NEW METHOD: Here, we outline a novel electroencephalography (EEG) decoding technique which is based on accumulating the probability of appearance of prototypical voltage topographies and can be used for predicting subjects' decisions. We use this approach to study the time-course of single decisions, during a task where subjects were asked to compare reward vs. loss points for accepting or rejecting offers. RESULTS: We show that, based on this new method, we can accurately decode decisions for the majority of the subjects. The typical time-period for accurate decoding was modulated by task difficulty on a trial-by-trial basis. Typical latencies of when decisions are made were detected at ∼500 ms for 'easy' vs. ∼700 ms for 'hard' decisions, well before subjects' responses (by ∼340 ms). Importantly, this decision time correlated with the drift rates of a diffusion model, evaluated independently at the behavioral level. COMPARISON WITH EXISTING METHOD(S): We compare the performance of our algorithm with logistic regression and support vector machines and show that we obtain significant results for a higher number of subjects than with these two approaches. We also carry out analyses at the average event-related potential level, for comparison with previous studies on decision-making. CONCLUSIONS: We present a novel approach for studying the timing of value-based decision-making, by accumulating patterns of topographic EEG activity at the single-trial level.
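The accumulation idea described in this abstract can be sketched in a few lines: a per-sample classifier output is turned into log-odds for one alternative and summed over time until a decision bound is crossed. Everything below — the simulated posterior trace, the threshold value, the function name — is an illustrative assumption, not the authors' actual decoder:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated single trial: at each time sample, a classifier emits the
# posterior probability that the current voltage topography matches the
# "accept" prototype.  The slow upward ramp mimics evidence building up.
p_accept = np.clip(0.5 + 0.15 * np.linspace(0, 1, 100)
                   + rng.normal(0, 0.02, 100), 0.01, 0.99)

def accumulate_decision(posteriors, threshold=5.0):
    """Accumulate the log-odds of 'accept' vs. 'reject' over time and
    return (decision, index of the sample where the bound is crossed)."""
    log_odds = np.log(posteriors) - np.log(1 - posteriors)
    evidence = np.cumsum(log_odds)
    crossed = np.nonzero(np.abs(evidence) >= threshold)[0]
    if crossed.size == 0:
        return None, None          # no decision reached within the trial
    t = int(crossed[0])
    return ("accept" if evidence[t] > 0 else "reject"), t

decision, decision_time = accumulate_decision(p_accept)
```

The sample index at which the bound is crossed plays the role of the single-trial decision time; harder trials (flatter posterior ramps) would cross later, consistent with the ∼500 ms vs. ∼700 ms latencies reported.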
Abstract:
S100B is a prognostic factor for melanoma, as elevated levels correlate with disease progression and poor outcome. We determined its prognostic value based on updated information using serial determinations in stage IIb/III melanoma patients. In total, 211 patients who participated in the EORTC 18952 trial, evaluating the efficacy of adjuvant intermediate doses of interferon α2b (IFN) versus observation, entered a corollary study. Over a period of 36 months, 918 serum samples were collected. The Cox time-dependent model was used to assess the prognostic value of the latest (most recent) S100B determination. At first measurement, 178 patients had S100B values <0.2 μg/l and 33 had values ≥0.2 μg/l. Within the first group, 61 patients later showed an increased value of S100B (≥0.2 μg/l). An increased value of S100B, at first measurement or during follow-up, was associated with worse distant metastasis-free survival (DMFS); the hazard ratio (HR) of S100B ≥0.2 versus S100B <0.2 was 5.57 (95% confidence interval (CI) 3.81-8.16), P < 0.0001, after adjustment for stage, number of lymph nodes and sex. In stage IIb patients, the HR adjusted for sex was 2.14 (95% CI 0.71-6.42), whereas in stage III, the HR adjusted for stage, number of lymph nodes and sex was 6.76 (95% CI 4.50-10.16). Similar results were observed for overall survival (OS). Serial determination of S100B in stage IIb-III melanoma is a strong independent prognostic marker, even stronger than stage and number of positive lymph nodes. The prognostic impact of S100B ≥0.2 μg/l is more pronounced in stage III disease than in stage IIb.
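A Cox hazard ratio and its Wald confidence interval live on the log scale, so the reported interval can be sanity-checked from the point estimate alone. The standard error below is back-derived from the published CI, not reported in the abstract:

```python
import math

hr, ci_low, ci_high = 5.57, 3.81, 8.16   # reported for S100B >= 0.2 ug/l

beta = math.log(hr)                      # the underlying Cox coefficient
# A 95% Wald CI is symmetric on the log scale: beta +/- 1.96 * se,
# so the SE can be recovered from the interval's width.
se = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)

# Reconstruct the CI from beta and se; it should match the reported one.
lo = math.exp(beta - 1.96 * se)
hi = math.exp(beta + 1.96 * se)
```

The reconstructed bounds agree with the published 3.81-8.16 to within rounding, confirming the figures are internally consistent.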
Abstract:
Perceived patient value is often not aligned with the emerging expenses for health care services. In other words, costs are often perceived as rising faster than the actual value for patients. This fact is causing major concerns to governments, health plans, and individuals. Attempts to solve the problem have habitually come from the operational-effectiveness side: increasing patient volume, minimizing costs, rationing, or closing hospitals, usually resulting in a zero-sum game. Only a few approaches come from the strategic-positioning side, and "competition" among hospitals is still perceived as a danger rather than as a chance to create a positive-sum game and stimulate patient value. In their 2006 book, "Redefining Health Care", the renowned Harvard strategy professor Michael E. Porter and hospital management expert Professor Elizabeth Olmsted Teisberg approach the challenge from the positive-sum perspective: they propose to form Integrated Practice Units (IPUs) and to manage hospitals in a modern, patient-value-oriented way. They argue that creating value-based competition on results should have the same effect on the health care sector that transparency and competition had on other industries with outdated management models (such as the once-inert telecommunication industry), turning them into highly competitive, customer-value-creating businesses. The objective of this paper is to elaborate Care Delivery Value Chains for Integrated Practice Units in ophthalmic clinics and to gather initial feedback from Swiss hospital managers, ophthalmologists, and patients on whether such an approach could be a realistic way to improve health care management. First, Porter's definition of competitiveness (the distinction between operational effectiveness and strategic positioning) is explained. Then, the Care Delivery Value Chain is introduced as a key element for understanding value-based management, followed by three practice examples for ophthalmic clinics.
Finally, recommendations are given on how the Care Delivery Value Chain can be managed efficiently and how the obstacles to becoming a patient-oriented organization can be overcome. The conclusion is that increased transparency and value-based competition on results have the potential to change the mindset of hospital managers, which will align patient value with the emerging health care expenses. Early adopters of this management approach will gain a competitive advantage. [Author, p. 6]
Abstract:
The investigation of perceptual and cognitive functions with non-invasive brain imaging methods critically depends on the careful selection of stimuli for use in experiments. For example, it must be verified that any observed effects follow from the parameter of interest (e.g. semantic category) rather than from other low-level physical features (e.g. luminance, or spectral properties). Otherwise, the interpretation of results is confounded. Often, researchers circumvent this issue by including additional control conditions or tasks, both of which are flawed and also prolong experiments. Here, we present some new approaches for controlling classes of stimuli intended for use in cognitive neuroscience; these methods can, however, be readily extrapolated to other applications and stimulus modalities. Our approach comprises two levels. The first level aims at equalizing individual stimuli in terms of their mean luminance: each data point in a stimulus is adjusted to a standardized value derived from the entire stimulus battery. The second level analyzes two populations of stimuli along their spectral properties (i.e. spatial frequency) using a dissimilarity metric that equals the root mean square of the distance between the two populations of objects as a function of spatial frequency along the x- and y-dimensions of the image. Randomized permutations are then used to minimize, in a completely data-driven manner, the spectral differences between image sets. While another paper in this issue applies these methods to acoustic stimuli (Aeschlimann et al., Brain Topogr 2008), we illustrate the approach here in detail for complex visual stimuli.
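The second-level metric described here — the root mean square of the difference between two populations' amplitude spectra across x- and y-spatial frequencies — can be sketched as follows. The surrogate 16×16 noise "images" and all names are illustrative assumptions, not the authors' stimuli or code:

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_amplitude_spectrum(images):
    """Mean 2-D amplitude spectrum of a population of images."""
    return np.mean([np.abs(np.fft.fft2(im)) for im in images], axis=0)

def spectral_dissimilarity(pop_a, pop_b):
    """Root mean square of the difference between the two populations'
    mean amplitude spectra, across x- and y-spatial frequencies."""
    diff = mean_amplitude_spectrum(pop_a) - mean_amplitude_spectrum(pop_b)
    return float(np.sqrt(np.mean(diff ** 2)))

# Surrogate stimuli: two sets of 16x16 "images" drawn from the same noise.
pop_a = [rng.standard_normal((16, 16)) for _ in range(20)]
pop_b = [rng.standard_normal((16, 16)) for _ in range(20)]

d_same = spectral_dissimilarity(pop_a, pop_b)
```

In the actual procedure, randomized permutations of set membership would be drawn repeatedly and the assignment minimizing this dissimilarity retained, equalizing the image sets' spectral content in a data-driven way.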
Abstract:
PURPOSE: Peptide receptor radionuclide therapy (PRRT) delivers high absorbed doses to kidneys and may lead to permanent nephropathy. Reliable dosimetry of kidneys is thus critical for safe and effective PRRT. The aim of this work was to assess the feasibility of planning PRRT based on 3D radiobiological dosimetry (3D-RD) in order to optimize both the amount of activity to administer and the fractionation scheme, while limiting the absorbed dose and the biological effective dose (BED) to the renal cortex. METHODS: Planar and SPECT data were available for a patient examined with (111)In-DTPA-octreotide at 0.5 (planar only), 4, 24, and 48 h post-injection. Absorbed dose and BED distributions were calculated for common therapeutic radionuclides, i.e., (111)In, (90)Y and (177)Lu, using the 3D-RD methodology. Dose-volume histograms were computed and mean absorbed doses to kidneys, renal cortices, and medullae were compared with results obtained using the MIRD schema (S-values) with the multiregion kidney dosimetry model. Two different treatment planning approaches, based on (1) a fixed absorbed dose to the cortex and (2) a fixed BED to the cortex, were then considered to optimize the activity to administer by varying the number of fractions. RESULTS: Mean absorbed doses calculated with 3D-RD were in good agreement with those obtained with S-value-based SPECT dosimetry for (90)Y and (177)Lu. Nevertheless, for (111)In, differences of 14% and 22% were found for the whole kidneys and the cortex, respectively. Moreover, the authors found that planar-based dosimetry systematically underestimates the absorbed dose in comparison with SPECT-based methods, by up to 32%. Regarding the 3D-RD-based treatment planning using a fixed BED constraint to the renal cortex, the optimal number of fractions was found to be 3 or 4, depending on the radionuclide administered and the value of the fixed BED.
Cumulative activities obtained using the proposed simulated treatment planning are compatible with actual activities administered to patients in PRRT. CONCLUSIONS: The 3D-RD treatment planning approach based on a fixed BED was found to be the method of choice for clinical implementation in PRRT, as it provides realistic activities to administer and numbers of cycles. While dividing the activity into several cycles is important to reduce renal toxicity, the clinical outcome of fractionated PRRT should be investigated in the future.
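For intuition on why fractionation lowers the biological effective dose at a fixed absorbed dose, the simplest linear-quadratic form of the BED can be computed directly. This sketch deliberately omits the dose-rate and repair terms that the 3D-RD methodology accounts for in radionuclide therapy; the 23 Gy cortex dose is a hypothetical figure, and α/β = 2.6 Gy is a commonly quoted value for kidney:

```python
def bed(total_dose_gy, n_fractions, alpha_beta_gy=2.6):
    """Biologically effective dose in the simplest linear-quadratic form,
    BED = D * (1 + d / (alpha/beta)), with d the dose per fraction.
    Dose-rate and repair effects specific to radionuclide therapy are
    omitted; alpha/beta = 2.6 Gy is a commonly quoted renal value."""
    d = total_dose_gy / n_fractions
    return total_dose_gy * (1 + d / alpha_beta_gy)

# Splitting the same (hypothetical) 23 Gy cortex dose into more fractions
# lowers the BED, which is why a fixed-BED constraint allows more activity
# per cycle as the number of cycles grows.
beds = {n: bed(23.0, n) for n in (1, 2, 3, 4)}
```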
Abstract:
This thesis focuses on theoretical asset pricing models and their empirical applications. I aim to investigate the following noteworthy problems: i) whether the relationship between asset prices and investors' propensities to gamble and to fear disaster is time-varying; ii) whether the conflicting evidence on firm- and market-level skewness can be explained by downside risk; iii) whether costly learning drives liquidity risk. Moreover, empirical tests support the above assumptions and provide novel findings in asset pricing, investment decisions, and firms' funding liquidity. The first chapter considers a partial equilibrium model where investors have heterogeneous propensities to gamble and to fear disaster. Skewness preference represents the desire to gamble, while kurtosis aversion represents fear of extreme returns. Using US data from 1988 to 2012, my model demonstrates that in bad times, risk aversion is higher, more people fear disaster, and fewer people gamble, in contrast to good times. This leads to a new empirical finding: gambling preference has a greater impact on asset prices during market downturns than during booms. The second chapter consists of two essays. The first essay introduces a formula based on the conditional CAPM for decomposing market skewness. We find that major market upward and downward movements can be well predicted by the asymmetric comovement of betas, which is characterized by an indicator called "Systematic Downside Risk" (SDR). We find that SDR can effectively forecast future stock market movements, and we obtain out-of-sample R-squares (compared with a strategy using the historical mean) of more than 2.27% with monthly data. The second essay reconciles a well-known empirical fact: aggregating positively skewed firm returns leads to a negatively skewed market return. We reconcile this fact through firms' greater response to negative market news than to positive market news.
We also propose several market return predictors, such as downside idiosyncratic skewness. The third chapter studies funding liquidity risk based on a general equilibrium model which features two agents: one entrepreneur and one external investor. Only the investor needs to acquire information to estimate the unobservable fundamentals driving the economic outputs. The novelty is that information acquisition is more costly in bad times than in good times, i.e., a counter-cyclical information cost, as supported by previous empirical evidence. We then show that liquidity risks are principally driven by costly learning. Résumé: This thesis presents theoretical models of asset valuation and their empirical applications. My objective is to study the following problems: whether the relationship between asset valuation and investors' propensities to gamble and to fear disaster varies over time; whether the conflicting evidence on firm- and market-level skewness can be explained by downside risk; and whether costly learning increases liquidity risk. Moreover, empirical tests confirm the above assumptions and provide new findings regarding asset valuation, investment decisions, and the funding liquidity of firms. The first chapter examines an equilibrium model in which investors have heterogeneous propensities to gamble and to fear disaster. Skewness preference represents the desire to gamble, while kurtosis aversion represents fear of disaster. Using US data from 1988 to 2012, my model demonstrates that in bad periods, risk aversion is higher, more people fear disaster and fewer people gamble, in contrast to good periods. This leads to a new empirical finding: gambling preference has a greater impact on asset valuations during market downturns than during economic booms. Exploiting this relationship alone would generate an annual excess return of 7.74% that is not explained by popular factor models. The second chapter comprises two essays. The first essay introduces a formula based on the conditional CAPM for decomposing market skewness. We found that major upward and downward market movements can be predicted by the comovements of betas. An indicator called Systematic Downside Risk (SDR) is created to characterize the asymmetry in these comovements of betas. We found that SDR can effectively forecast future stock market movements, and we obtain out-of-sample R-squares (compared with a strategy using historical means) of more than 2.272% with monthly data. An investor timing the market using SDR would have obtained a substantial increase in the ratio of 0.206. The second essay reconciles a well-known empirical fact in firm- and market-level skewness: aggregating positively skewed firm returns leads to a negatively skewed market return. We decompose market return skewness at the firm level and reconcile this fact through firms' greater reaction to negative market news than to positive market news. This decomposition reveals several effective market return predictors, such as volatility-weighted idiosyncratic skewness and downside idiosyncratic skewness. The third chapter provides a new theoretical foundation for time-varying liquidity problems within an incomplete-market environment. We propose a general equilibrium model with two agents: an entrepreneur and an external investor. Only the investor needs to know the true state of the firm; consequently, payoff information is costly. The novelty is that acquiring information costs more in bad periods than in good periods, as confirmed by previous empirical evidence. When a recession begins, costly learning drives up liquidity premia, causing a liquidity-evaporation problem, as also confirmed by previous empirical evidence.
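The out-of-sample R² quoted in the second chapter is conventionally computed against a forecast based on the expanding historical mean (in the spirit of Campbell and Thompson). A minimal sketch, with toy return and forecast series that are purely illustrative:

```python
import numpy as np

def r2_oos(actual, predicted):
    """Out-of-sample R^2 against the historical-mean benchmark:
    1 - SSE(model forecast) / SSE(expanding historical mean)."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    # Benchmark forecast at each date: mean of all PREVIOUS observations.
    hist_mean = np.array([actual[:t].mean() for t in range(1, len(actual))])
    a, p = actual[1:], predicted[1:]
    return 1.0 - np.sum((a - p) ** 2) / np.sum((a - hist_mean) ** 2)

# Toy monthly returns and a hypothetical forecast series.
returns = [0.012, -0.024, 0.031, 0.002, 0.018, -0.009]
forecast = [0.010, -0.020, 0.025, 0.000, 0.015, -0.005]
r2 = r2_oos(returns, forecast)
```

A positive value means the forecast beats the historical-mean benchmark; the thesis reports values above 2.27% for the SDR predictor.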
Abstract:
Species distribution model (SDM) studies suggest that, without control measures, the distribution of many alien invasive plant species (AIS) will increase under climate and land-use changes. Due to limited resources and the large areas colonised by invaders, management and monitoring resources must be prioritised. Choices depend on the conservation value of the invaded areas and can be guided by SDM predictions. Here, we use a hierarchical SDM framework, complemented by connectivity analysis of AIS distributions, to evaluate current and future conflicts between AIS and high-conservation-value areas. We illustrate the framework with three Australian wattle (Acacia) species and patterns of conservation value in Northern Portugal. Results show that protected areas will likely suffer higher pressure from all three Acacia species under future climatic conditions. Due to this higher predicted conflict in protected areas, management might be prioritised for Acacia dealbata and Acacia melanoxylon. Connectivity of AIS-suitable areas inside protected areas is currently lower than across the full study area, but this would change under future environmental conditions. Coupled SDM and connectivity analyses can support resource prioritisation for the anticipation and monitoring of AIS impacts. However, further tests of this framework over a wide range of regions and organisms are still required before wide application.
Abstract:
Introduction: MCTI is used to assess acute ischemic stroke (AIS) patients. We postulated that the use of MCTI improves patient outcome regarding independence and mortality. Methods: From the ASTRAL registry, all patients with an AIS and a non-contrast CT (NCCT), angio-CT (CTA) or perfusion-CT (CTP) within 24 h from onset were included. Demographic, clinical, biological, radiological, and follow-up characteristics were collected. Significant predictors of MCTI use were fitted in a multivariate analysis. Patients undergoing CTA or CTA&CTP were compared with NCCT patients with regard to favourable outcome (mRS ≤ 2) at 3 months, 12-month mortality, stroke mechanism, short-term renal function, use of ancillary diagnostic tests, duration of hospitalization and 12-month stroke recurrence.
Abstract:
Wastewater-based epidemiology consists in acquiring relevant information about the lifestyle and health status of a population through the analysis of wastewater samples collected at the influent of a wastewater treatment plant. Whilst being a very young discipline, it has experienced an astonishing development since its first application in 2005. The possibility to gather community-wide information about drug use has been among its major fields of application. The wide resonance of the first results sparked the interest of scientists from various disciplines, and research has since broadened in innumerable directions. Although praised as a revolutionary approach, there was a need to critically assess its added value with regard to the existing indicators used to monitor illicit drug use. The main, and explicit, objective of this research was to evaluate the added value of wastewater-based epidemiology with regard to two particular, although interconnected, dimensions of illicit drug use. The first relates to the added value of the discipline from an epidemiological, or societal, perspective: to evaluate if and how it completes our current vision of the extent of illicit drug use at the population level, and whether it can guide the planning of future prevention measures and drug policies. The second dimension is the criminal one, with a particular focus on the networks which develop around the large demand for illicit drugs. The goal here was to assess whether wastewater-based epidemiology, combined with indicators stemming from the epidemiological dimension, could provide additional clues about the structure of drug distribution networks and the size of their market. This research also had an implicit objective, which focused on initiating the path of wastewater-based epidemiology at the Ecole des Sciences Criminelles of the University of Lausanne.
This consisted in gathering the necessary knowledge about the collection, preparation, and analysis of wastewater samples and, most importantly, in understanding how to interpret the acquired data and produce useful information. In the first phase of this research, it was possible to determine that ammonium loads, measured directly in the wastewater stream, could be used to monitor the dynamics of the population served by the wastewater treatment plant. Furthermore, it was shown that, in the long term, population dynamics did not have a substantial impact on consumption patterns measured through wastewater analysis. Focussing on methadone, for which precise prescription data were available, it was possible to show that reliable consumption estimates could be obtained via wastewater analysis. This made it possible to validate the selected sampling strategy, which was then used to monitor the consumption of heroin through the measurement of morphine. The latter, in combination with prescription and sales data, provided estimates of heroin consumption in line with other indicators. These results, combined with epidemiological data, highlighted the good correspondence between measurements and expectations and, furthermore, suggested that the dark figure of heroin users evading harm-reduction programs, who would thus not be measured by conventional indicators, is likely limited. In the third part, a collaborative study aiming at extensively investigating geographical differences in drug use, wastewater analysis was shown to be a useful complement to existing indicators. In particular for stigmatised drugs, such as cocaine and heroin, it made it possible to decipher the complex picture derived from surveys and crime statistics. Globally, it provided relevant information to better understand the drug market, from both an epidemiological and a repressive perspective.
The fourth part focused on cannabis and on the potential of combining wastewater and survey data to overcome some of their respective limitations. Using a hierarchical inference model, it was possible to refine current estimates of cannabis prevalence in the metropolitan area of Lausanne. Wastewater results suggested that the actual prevalence is substantially higher than existing figures, thus supporting the common belief that surveys tend to underestimate cannabis use. Whilst affected by several biases, the information collected through surveys made it possible to overcome some of the limitations linked to the analysis of cannabis markers in wastewater (i.e., stability and limited excretion data). These findings highlighted the importance and utility of combining wastewater-based epidemiology with existing indicators of drug use. Similarly, the fifth part of the research was centred on assessing the potential uses of wastewater-based epidemiology from a law enforcement perspective. Through three concrete examples, it was shown that results from wastewater analysis can be used to produce highly relevant intelligence, allowing drug enforcement to assess the structure and operations of drug distribution networks and, ultimately, to guide their decisions at the tactical and/or operational level. Finally, the potential of implementing wastewater-based epidemiology to monitor the use of harmful, prohibited and counterfeit pharmaceuticals was illustrated through the analysis of sibutramine, and its urinary metabolite, in wastewater samples. The results of this research have highlighted that wastewater-based epidemiology is a useful and powerful approach with numerous scopes. Faced with the complexity of measuring a hidden phenomenon like illicit drug use, it is a major addition to the panoply of existing indicators.
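The back-calculation underlying consumption estimates such as those for heroin (via morphine) follows a standard formula: the metabolite load in the 24-h composite sample is divided by the fraction of the parent drug excreted as that metabolite, scaled by the parent/metabolite molar-mass ratio, and normalized per 1000 inhabitants. The sketch below uses benzoylecgonine (the main cocaine metabolite) with rough, illustrative parameter values, not figures from this research:

```python
def daily_consumption_mg_per_1000(
    concentration_ng_l, flow_l_day, population,
    excretion_fraction, mw_parent_over_metabolite=1.0,
):
    """Standard wastewater back-calculation: metabolite load in the 24-h
    composite sample, corrected for the excreted fraction and the molar
    mass ratio, then scaled per 1000 inhabitants."""
    load_mg_day = concentration_ng_l * flow_l_day / 1e6       # ng -> mg
    parent_mg_day = load_mg_day / excretion_fraction * mw_parent_over_metabolite
    return parent_mg_day / population * 1000

# Hypothetical numbers: benzoylecgonine at 500 ng/L in 1e8 L/day of
# influent serving 200,000 people; roughly 35% of a cocaine dose is
# excreted as benzoylecgonine, and the cocaine/BE molar mass ratio
# is about 1.05 (303.35 / 289.33).
cocaine_mg = daily_consumption_mg_per_1000(500, 1e8, 200_000, 0.35, 1.05)
```

The result is a population-normalized consumption rate (mg/day/1000 inhabitants), the unit in which inter-city comparisons are typically reported.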
-- Wastewater-based epidemiology consists in acquiring information on the lifestyle and health status of a population through the analysis of wastewater samples collected at the inlet of treatment plants. Although it is a recent discipline, it has undergone important developments since its first implementation in 2005, notably in the field of illicit drug residue analysis. Following the media impact of the first results of these metabolite analyses in wastewater, numerous scientists from different disciplines joined the ranks of this new field, developing several distinct lines of research. Although recognized for its objective and revolutionary character, its added value needed to be assessed against the indicators commonly used to measure drug consumption. Focusing on two specific dimensions of drug consumption, the main objective of this research was to evaluate the added value of wastewater-based epidemiology. The first dimension addressed was the epidemiological, or societal, one. In other words, the aim was to understand whether and how wastewater analysis completes the current vision of the problem, and to determine its usefulness in planning current and future preventive measures and drug policies. The second dimension addressed was the criminal one, in particular the study of the networks that develop around the trafficking of illicit drugs. The objective was to determine whether this new approach, combined with conventional indicators, provides new clues about the structure and organization of distribution networks as well as the size of the market.
This research also had an implicit objective: to develop and evaluate the implementation of wastewater-based epidemiology. In particular, this meant acquiring the necessary knowledge about how to collect, process and analyse wastewater samples and, above all, understanding how to interpret the data in order to extract the most relevant information. In the first phase of this research, it could be shown that ammonium loads, measured directly in the wastewater, made it possible to follow the dynamics of the population contributing to the wastewater of the treatment plant in the studied area. Moreover, it could be demonstrated that, in the long term, population movements had no substantial influence on the consumption patterns measured in wastewater. Focusing on methadone, a substance for which precise prescription data were available, it could be demonstrated that accurate consumption estimates could be derived from wastewater analysis. This made it possible to validate the adopted sampling strategy, which, via morphine, was then used to follow heroin consumption. Combined with sales and prescription data, morphine analysis yielded estimates of heroin consumption in agreement with conventional indicators. These results, combined with epidemiological data, showed good agreement between the projections of the two approaches and thus demonstrated that the dark figure of consumers escaping harm-reduction measures, who would therefore not be measured by these indicators, is most likely limited.
The third part of the work was carried out within a collaborative study whose aim was to investigate the added value of wastewater analysis for highlighting geographical differences in drug consumption. In particular for stigmatised substances, such as cocaine and heroin, the approach made it possible to objectify and refine the picture obtained with traditional indicators such as surveys or police statistics. Overall, wastewater analysis proved to be a very useful tool for better understanding the drug market, from both an epidemiological and a repressive perspective. The fourth part of the work focused on the issue of cannabis and on the potential of combining wastewater analysis with survey data in order to partially overcome their limitations. Using a hierarchical inference model, it was possible to refine the current estimates of the prevalence of cannabis use in the metropolitan area of Lausanne. The results showed that this prevalence is higher than expected, confirming the hypothesis that surveys tend to underestimate cannabis consumption. Although biased, the data collected through surveys made it possible to overcome some of the limitations linked to the analysis of cannabis markers in wastewater (i.e., stability and lack of excretion data). These results highlight the importance and usefulness of combining the results of wastewater analysis with existing indicators. Likewise, the fifth part of the work was centred on the contribution of wastewater analysis from a police perspective.
Through three examples, the use of this indicator to produce intelligence on the structure and activities of drug distribution networks, as well as to guide strategic and operational police decisions, was demonstrated. In the last part, the possibility of using this approach to follow the consumption of dangerous, prohibited or counterfeit pharmaceuticals was demonstrated through the wastewater analysis of sibutramine and its metabolites. The results of this research have shown that wastewater-based epidemiology is a relevant and powerful approach with numerous fields of application. Faced with the complexity of measuring a hidden phenomenon such as drug consumption, the added value of this approach has thus been demonstrated.
Abstract:
BACKGROUND AND OBJECTIVES: The determination of the carbon isotope ratio in androgen metabolites has been previously shown to be a reliable, direct method to detect testosterone misuse in the context of antidoping testing. Here, the variability in the 13C/12C ratios of urinary steroids in a widely heterogeneous cohort of professional soccer players residing in different countries (Argentina, Italy, Japan, South Africa, Switzerland and Uganda) is examined. METHODS: Carbon isotope ratios of selected androgens in urine specimens were determined using gas chromatography/combustion/isotope ratio mass spectrometry (GC-C-IRMS). RESULTS: Urinary steroids in the Italian and Swiss populations were found to be enriched in 13C relative to the other groups, reflecting the higher consumption of C3 plants in these two countries. Importantly, for each population, the differences in the carbon isotope ratios of androsterone and pregnanediol, on which the detection criteria are based, were found to be well below the established threshold value for positive cases. CONCLUSIONS: The results obtained with the tested diet groups highlight the importance of adapting the criteria if one wishes to increase the sensitivity of exogenous testosterone detection. In addition, confirmatory tests might be rendered more efficient by combining isotope ratio mass spectrometry with refined interpretation criteria for positivity and subject-based profiling of steroids.
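The carbon isotope ratios involved are conventionally expressed as δ13C in per mil relative to the VPDB standard, and the screening quantity is the difference between two urinary steroids. A small sketch; the two δ values below are hypothetical, and the 3 per-mil figure is only a commonly cited threshold, not necessarily the one used in this study:

```python
def delta13c_permil(r_sample, r_standard=0.0112372):
    """delta-13C in per mil relative to the VPDB standard:
    (R_sample / R_standard - 1) * 1000, with R = 13C/12C."""
    return (r_sample / r_standard - 1) * 1000

# The screening quantity is the DIFFERENCE between an endogenous
# reference compound and the target androgen (values hypothetical):
delta_andro = -24.5   # androsterone, per mil
delta_pd = -21.8      # pregnanediol, per mil
difference = delta_pd - delta_andro   # 2.7 per mil, under a ~3 per-mil threshold
```

Because diet (C3 vs. C4 plants) shifts both compounds' δ13C in the same direction, the difference is far less diet-dependent than either value alone, which is why population-specific baselines of this difference can sit well below the positivity threshold.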
Resumo:
BACKGROUND: School-based intervention studies promoting a healthy lifestyle have shown favorable immediate health effects. However, there is a striking paucity of long-term follow-ups. The aim of this study was therefore to assess the 3-year follow-up of a nine-month cluster-randomized controlled school-based physical activity program with beneficial immediate effects on body fat, aerobic fitness and physical activity. METHODS AND FINDINGS: Initially, 28 classes from 15 elementary schools in Switzerland were grouped into an intervention (16 classes from 9 schools, n = 297 children) and a control arm (12 classes from 6 schools, n = 205 children) after stratification for grade (1st and 5th graders). Three years after the end of the nine-month multi-component physical activity program, which included daily physical education (i.e. two additional lessons per week on top of three regular lessons), short physical activity breaks during academic lessons, and daily physical activity homework, 289 (58%) participated in the follow-up. Primary outcome measures included body fat (sum of four skinfolds), aerobic fitness (shuttle run test), physical activity (accelerometry), and quality of life (questionnaires). After adjustment for grade, gender, baseline value and clustering within classes, children in the intervention arm compared with controls had a significantly higher average level of aerobic fitness at follow-up (0.373 z-score units [95%-CI: 0.157 to 0.59, p = 0.001] corresponding to a shift from the 50th to the 65th percentile between baseline and follow-up), while the immediate beneficial effects on the other primary outcomes were not sustained. CONCLUSIONS: Apart from aerobic fitness, beneficial effects seen after one year were not maintained when the intervention was stopped. A continuous intervention seems necessary to maintain overall beneficial health effects as reached at the end of the intervention. TRIAL REGISTRATION: ControlledTrials.com ISRCTN15360785.
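The reported percentile shift follows directly from the z-score effect size via the standard normal CDF; a quick check using Python's standard library:

```python
from statistics import NormalDist

# A child at the 50th percentile corresponds to z = 0; adding the adjusted
# intervention effect of 0.373 z-score units gives the new percentile.
effect = 0.373
new_percentile = NormalDist().cdf(0.0 + effect) * 100
print(round(new_percentile))  # 65, matching the reported 50th -> 65th shift
```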
Resumo:
BACKGROUND: Little information is available on the validity of simple and indirect body-composition methods in non-Western populations. Equations for predicting body composition are population-specific, and body composition differs between blacks and whites. OBJECTIVE: We tested the hypothesis that the validity of equations for predicting total body water (TBW) from bioelectrical impedance analysis measurements is likely to depend on the racial background of the group from which the equations were derived. DESIGN: The hypothesis was tested by comparing, in 36 African women, TBW values measured by deuterium dilution with those predicted by 23 equations developed in white, African American, or African subjects. These cross-validations in our African sample were also compared, whenever possible, with results from other studies in black subjects. RESULTS: Errors in predicting TBW showed acceptable values (1.3-1.9 kg) in all cases, whereas a large range of bias (0.2-6.1 kg) was observed independently of the ethnic origin of the sample from which the equations were derived. Three equations (2 from whites and 1 from blacks) showed nonsignificant bias and could be used in Africans. In all other cases, we observed either an overestimation or underestimation of TBW with variable bias values, regardless of racial background, yielding no clear trend for validity as a function of ethnic origin. CONCLUSIONS: The findings of this cross-validation study emphasize the need for further fundamental research to explore the causes of the poor validity of TBW prediction equations across populations rather than the need to develop new prediction equations for use in Africa.
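The bias and prediction-error figures quoted above are standard cross-validation statistics; a minimal sketch, assuming bias is the mean signed difference (predicted minus measured) and error is the RMSE of the differences (the example values are illustrative, not the study's data):

```python
from statistics import mean

def validate_tbw(predicted, measured):
    """Cross-validate predicted total body water (kg) against a reference
    method (e.g. deuterium dilution). Returns (bias, rmse_of_differences)."""
    diffs = [p - m for p, m in zip(predicted, measured)]
    bias = mean(diffs)  # systematic over-/underestimation
    rmse = (sum(d * d for d in diffs) / len(diffs)) ** 0.5  # individual error
    return bias, rmse
```

An equation with a large bias but small RMSE systematically over- or underestimates TBW in the new population, which is the pattern the abstract describes for most of the 23 equations tested.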
Resumo:
BACKGROUND: The aim of this study was to assess, at the European level and using digital technology, the inter-pathologist reproducibility of the ISHLT 2004 system and to compare it with the 1990 system. We also assessed the reproducibility of the morphologic criteria for diagnosis of antibody-mediated rejection detailed in the 2004 grading system. METHODS: The hematoxylin-eosin-stained sections of 20 sets of endomyocardial biopsies were pre-selected and graded by two pathologists (A.A. and M.B.) and digitized using a telepathology digital pathology system (Aperio ImageScope System; for details refer to http://aperio.com/). Their diagnoses were considered the index diagnoses, which covered all grades of acute cellular rejection (ACR), early ischemic lesions, Quilty lesions, late ischemic lesions and (in the 2005 system) antibody-mediated rejection (AMR). Eighteen pathologists from 16 heart transplant centers in 7 European countries participated in the study. Inter-observer reproducibility was assessed using Fleiss's kappa and Krippendorff's alpha statistics. RESULTS: The combined kappa value of all grades diagnosed by all 18 pathologists was 0.31 for the 1990 grading system and 0.39 for the 2005 grading system, with alpha statistics at 0.57 and 0.55, respectively. Kappa values by grade for 1990/2005, respectively, were: 0 = 0.52/0.51; 1A/1R = 0.24/0.36; 1B = 0.15; 2 = 0.13; 3A/2R = 0.29/0.29; 3B/3R = 0.13/0.23; and 4 = 0.18. For the 2 cases of AMR, 6 of 18 pathologists correctly suspected AMR on the hematoxylin-eosin slides, whereas in 17 of the 18 AMR-negative cases a small percentage of pathologists (range 5% to 33%) overinterpreted the findings as suggestive of AMR. CONCLUSIONS: Reproducibility studies of cardiac biopsies by pathologists in different centers at the international level were feasible using digitized slides rather than conventional histology glass slides.
There was a small improvement in interobserver agreement between pathologists of different European centers when moving from the 1990 ISHLT classification to the "new" 2005 ISHLT classification. Morphologic suspicion of AMR in the 2004 system on hematoxylin-eosin-stained slides only was poor, highlighting the need for better standardization of morphologic criteria for AMR. Ongoing educational programs are needed to ensure standardization of diagnosis of both acute cellular and antibody-mediated rejection.
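Fleiss's kappa, used above to quantify multi-rater agreement, can be sketched as follows (a minimal implementation assuming an equal number of raters per case; not the study's code):

```python
def fleiss_kappa(ratings):
    """Fleiss's kappa for multi-rater agreement.
    ratings[i][j] = number of raters assigning case i to category j;
    every case must be rated by the same number of raters."""
    n_cases = len(ratings)
    n_raters = sum(ratings[0])
    # Observed per-case agreement: proportion of agreeing rater pairs.
    p_i = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
           for row in ratings]
    p_bar = sum(p_i) / n_cases
    # Chance agreement from the marginal category proportions.
    p_j = [sum(row[j] for row in ratings) / (n_cases * n_raters)
           for j in range(len(ratings[0]))]
    p_e = sum(p * p for p in p_j)
    return (p_bar - p_e) / (1.0 - p_e)
```

Values near 0.31-0.39, as reported above, indicate fair agreement: observed concordance only modestly exceeds what the marginal grade frequencies would produce by chance.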
Resumo:
Patient adherence is often poor for hypertension and dyslipidaemia. Monitoring drug adherence might improve control of these risk factors, but little is known about such monitoring in ambulatory care. We conducted a randomised controlled study in networks of community-based pharmacists and physicians in the canton of Fribourg to examine whether monitoring drug adherence with an electronic monitor (MEMS) would improve risk factor control among treated, but uncontrolled, hypertensive and dyslipidemic patients. The results indicate that MEMS monitoring achieved better blood pressure control and lipid profiles, although its implementation requires considerable resources. The study also shows the value of collaboration between physicians and pharmacists in the field of patient adherence to improve ambulatory care of patients with cardiovascular risk factors.
Resumo:
PURPOSE: To determine the diagnostic value of the intravascular contrast agent gadocoletic acid (B-22956) in three-dimensional, free breathing coronary magnetic resonance angiography (MRA) for stenosis detection in patients with suspected or known coronary artery disease. METHODS: Eighteen patients underwent three-dimensional, free breathing coronary MRA of the left and right coronary system before and after intravenous application of a single dose of gadocoletic acid (B-22956) using three different dose regimens (group A 0.050 mmol/kg; group B 0.075 mmol/kg; group C 0.100 mmol/kg). Precontrast scanning followed a coronary MRA standard non-contrast T2 preparation/turbo-gradient echo sequence (T2Prep); for postcontrast scanning an inversion-recovery gradient echo sequence was used (real-time navigator correction for both scans). In pre- and postcontrast scans quantitative analysis of coronary MRA data was performed to determine the number of visible side branches, vessel length and vessel sharpness of each of the three coronary arteries (LAD, LCX, RCA). The number of assessable coronary artery segments was determined to calculate sensitivity and specificity for detection of stenosis ≥50% on a segment-to-segment basis (16-segment model) in pre- and postcontrast scans with x-ray coronary angiography as the standard of reference. RESULTS: Dose group B (0.075 mmol/kg) was preferable with regard to improvement of MR angiographic parameters: in postcontrast scans all MR angiographic parameters increased significantly except for the number of visible side branches of the left circumflex artery. In addition, assessability of coronary artery segments significantly improved postcontrast in this dose group (67 versus 88%, p < 0.01). Diagnostic performance (sensitivity, specificity, accuracy) was 83, 77 and 78% for precontrast and 86, 95 and 94% for postcontrast scans.
CONCLUSIONS: The use of gadocoletic acid (B-22956) results in an improvement of MR angiographic parameters, assessability of coronary segments and detection of coronary stenoses ≥50%.
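The segment-level sensitivity, specificity and accuracy figures reported above follow directly from a confusion matrix against the x-ray angiography reference; a minimal sketch (the counts in the example are illustrative, not the study's data):

```python
def diagnostic_performance(tp: int, fp: int, tn: int, fn: int):
    """Sensitivity, specificity and accuracy from segment-level counts,
    with x-ray coronary angiography as the reference standard.
    tp/fn: stenotic segments detected/missed by MRA;
    tn/fp: non-stenotic segments correctly/incorrectly classified."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy
```

Because non-stenotic segments far outnumber stenotic ones in a 16-segment model, overall accuracy tracks specificity closely, which is consistent with the reported postcontrast values (95% specificity, 94% accuracy).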