916 results for Non-negative sources
Abstract:
This paper reconsiders the empirical evidence on the asymmetric output effects of monetary policy. Asymmetric effects are a common feature of many theoretical models, and there are many different versions of such asymmetries. We concentrate on the distinctions between positive and negative money-supply changes, big and small changes in money supply, and possible combinations of the two asymmetries. Earlier research has found empirical evidence in favor of the former of these in US data. Using M1 as the monetary variable, we find evidence in favor of neutrality of big shocks and non-neutrality of small shocks. The results may, however, be affected by structural instability of M1 demand. Thus, we substitute M1 with the federal funds rate. In these data we find that only small negative shocks affect real aggregate activity. The results are interpreted in terms of menu-cost models.
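As a rough illustration of the kind of asymmetry test described above (not the authors' actual specification or data), one can split an identified shock series into signed and sized components and regress output growth on them; the shock series, the big/small threshold and the coefficients below are purely synthetic.

```python
# Illustrative sketch only -- synthetic data, not the paper's specification.
# Split a money-supply (or funds-rate) shock series into positive/negative and
# big/small components and regress output growth on them.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
shock = rng.normal(0, 1, n)                                        # stand-in for identified shocks
output_growth = 0.4 * np.minimum(shock, 0) + rng.normal(0, 1, n)   # synthetic outcome

threshold = np.quantile(np.abs(shock), 0.5)                        # assumed "big" vs "small" cut-off
df = pd.DataFrame({
    "pos_big":   np.where((shock > 0) & (np.abs(shock) >= threshold), shock, 0.0),
    "pos_small": np.where((shock > 0) & (np.abs(shock) <  threshold), shock, 0.0),
    "neg_big":   np.where((shock < 0) & (np.abs(shock) >= threshold), shock, 0.0),
    "neg_small": np.where((shock < 0) & (np.abs(shock) <  threshold), shock, 0.0),
})
model = sm.OLS(output_growth, sm.add_constant(df)).fit()
print(model.summary())   # neutrality of a shock type ~ its coefficient close to zero
```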
Abstract:
The text «De la parole dialogale», written by Lev Jakubinskij, is considered by a number of researchers to be the main source of Valentin Volochinov's conception of dialogue. This article questions the legitimacy of that thesis. A detailed analysis of the notions of dialogue developed by Jakubinskij and Volochinov shows that their theoretical foundations do not coincide. Whereas Jakubinskij relies on so-called objective psychology (reflexology), Volochinov builds his conception of dialogue on a sociological basis. The works of Marxist and non-Marxist sociologists constitute the main source of his notion of dialogue.
Abstract:
The diverse T cell repertoire needs to be tolerant to self-antigens to avoid the induction of autoimmunity. This is why autoreactive developing T cells are deleted in the thymus. The deletion of self-reactive T cells occurs through the process of negative selection. Most studies have investigated high avidity T cells. These high avidity T cells are very sensitive and react strongly to a self-antigen. As a consequence, these cells induce the development of autoimmunity when they target organs that express the self-antigen.
High avidity autoreactive CD8+ T cells are deleted in the thymus. However, several studies have shown that T cells that respond weakly to tissue-restricted antigens, referred to as low avidity T cells, can bypass central and peripheral tolerance mechanisms. I used Rip-mOva mice that express Ovalbumin as a neo self-antigen in a tissue-restricted fashion. In these transgenic Rip-mOva mice, low avidity CD8+ T cells survive negative selection. Upon stimulation in the periphery, these low avidity CD8+ T cells have the ability to infiltrate organs that express the self-antigen in the Rip-mOva mice and can also induce the destruction of the tissue. The major aim of my PhD project was to understand the phenotypic and functional characteristics of these T cells in a steady-state condition and in the context of an infection. To study these cells in a well-defined mouse model, we generated OT-3 T cell receptor transgenic mice that express low avidity CD8+ T cells specific for the SIINFEKL epitope of the Ovalbumin antigen. We have been able to demonstrate that a large number of OT-3 CD8+ T cells survive negative selection in the thymus after encountering the self-antigen. Thus, low avidity OT-3 T cells are present in a window of selection between positive and negative selection. This boundary, defined as the affinity threshold, is involved in the escape of some autoreactive low avidity OT-3 T cells. Once they circulate in the periphery, they are able to induce autoimmunity after stimulation during an infection, allowing us to classify these cells as non-tolerant and not in an anergic state in the periphery. We have also looked at the threshold of activation of low avidity OT-3 CD8+ T cells in the periphery and found that peptide ligands weaker than the native SIINFEKL epitope are able to activate OT-3 T cells during an infection and to differentiate them into effector and memory T cells. The data illustrate the impairment of negative selection of low avidity autoreactive CD8+ T cells against a tissue-restricted antigen in the thymus and show that these cells are fully competent upon infection.
Abstract:
Summary Landscapes are continuously changing. Natural forces of change such as heavy rainfall and fires can exert lasting influences on their physical form. However, changes related to human activities have often shaped landscapes more distinctly. In Western Europe, modern agricultural practices and the expansion of overbuilt land in particular have left their marks on the landscapes since the middle of the 20th century. In recent years it has become clear that more and more of the changes formerly attributed to natural forces might indirectly be the result of human action. Perhaps the most striking landscape change indirectly driven by human activity that we can witness these days is the large-scale retreat of Alpine glaciers. Together with the landscapes, the habitats of animal and plant species have undergone vast and sometimes rapid changes that have been held responsible for the ongoing loss of biodiversity. Still, little is known about the probable effects of the rate of landscape change on species persistence and disappearance. Therefore, the development and speed of land use/land cover change in the Swiss communes between the 1950s and 1990s were reconstructed using 10 parameters from agriculture and housing censuses, and were then correlated with changes in butterfly species occurrences. Cluster analyses were used to detect spatial patterns of change at broad spatial scales. Clusters of communes showing similar changes or transformation rates were identified for single decades and put into a temporally dynamic sequence. The resulting picture of the changes showed a prevalent replacement of non-intensive agriculture by intensive practices, a strong spreading of urban communes around city centres, and transitions towards larger farm sizes in the mountainous areas. Increasing transformation rates towards more intensive agricultural management were found especially until the 1970s, whereas afterwards the trends were commonly negative. Transformation rates representing the development of residential buildings, however, were positive throughout. The analyses concerning the butterfly species showed that grassland species reacted sensitively to the density of livestock in the communes. This might indicate the increased use of dry grasslands as cattle pastures, which show altered plant species compositions. Furthermore, these species also decreased in communes where farms with an agricultural area >5 ha had disappeared. The species of the wetland habitats were favoured in communes with smaller fractions of agricultural area and lower densities of large farms (>10 ha) but did not show any correlation with transformation rates. It was concluded from these analyses that transformation rates might influence species disappearance to a certain extent but that the states of the environmental predictors might generally outweigh the importance of the corresponding rates. Information on the current distribution of species is essential for nature conservation. Planning authorities that define priority areas for species protection or examine and authorise construction projects need to know about the spatial distribution of species. Hence, models that simulate the potential spatial distribution of species have become important decision tools. The underlying statistical analyses, such as the widely used generalised linear models (GLM), often rely on binary species presence-absence data.
However, often only species presence data have been collected, especially for vagrant, rare or cryptic species such as butterflies or reptiles. Modellers have thus introduced randomly selected absence data to design distribution models. Yet selecting false absence data might bias the model results. Therefore, we investigated several strategies for selecting more reliable absence data to model the distribution of butterfly species based on historical distribution data. The results showed that better models were obtained when historical data from longer time periods were considered. Model performance was further increased when long-term data of species with habitat requirements similar to those of the modelled species were used. This methodological approach was then applied to assess the consequences of future landscape changes for the occurrence of butterfly species inhabiting dry grasslands or wetlands. These habitat types have been subject to strong deterioration in recent decades, which makes their protection an important future task. Four spatially explicit scenarios describing (i) ongoing land use changes as observed between 1985 and 1997, (ii) liberalised agricultural markets, and (iii) slightly and (iv) strongly lowered agricultural production provided probable directions of landscape change. Current species-environment relationships were derived from a statistical model and used to predict future occurrence probabilities in six major biogeographical regions of Switzerland, comprising the Jura Mountains, the Plateau, the Northern and Southern Alps, and the Western and Eastern Central Alps. The main results were that dry grassland species profited from lowered agricultural production, whereas the overgrowth of open areas in the liberalisation scenario might impair species occurrence. The wetland species mostly responded with decreases in their occurrence probabilities in the scenarios, owing to a loss of their preferred habitat. Further analyses of factors currently influencing species occurrences confirmed anthropogenic causes such as urbanisation, abandonment of open land, and agricultural intensification. Hence, landscape planning should pay more attention to these forces in areas currently inhabited by these butterfly species to enable sustainable species persistence. In this thesis, historical data were used intensively to reconstruct past developments and to make them useful for current investigations. Yet the availability of historical data and the analyses at broader spatial scales often limited the explanatory power of the analyses. Meaningful descriptors of former habitat characteristics and abundant species distribution data are generally sparse, especially for fine-scale analyses. This situation can be ameliorated by broadening the extent of the study area and the grain size used, as was done in this thesis by considering the whole of Switzerland with its communes. Nevertheless, current monitoring projects and data recording techniques are promising data sources that might allow more detailed analyses of the effects of landscape changes on long-term species reactions in the near future. This work also showed the value of historical species distribution data, for example their potential to locate still unknown species occurrences.
The results might therefore contribute to further research activities that investigate current and future species distributions, considering the immense richness of historical distribution data.
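The presence/pseudo-absence modelling strategy summarised above can be sketched roughly as follows; the predictors, sample sizes and the way "reliable" absences are drawn here are illustrative assumptions, not the thesis' actual data or variable set.

```python
# Illustrative sketch only (synthetic data, assumed column names): absences are
# drawn from sites with long-term survey records rather than purely at random,
# then a binomial GLM relates occurrence to environmental predictors.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
sites = pd.DataFrame({
    "livestock_density": rng.gamma(2.0, 1.0, 500),
    "farm_area_gt5ha":   rng.uniform(0, 1, 500),
})
presence_idx = rng.choice(500, 80, replace=False)           # observed presences
surveyed_idx = rng.choice(500, 300, replace=False)          # sites with long-term records
candidate_absences = np.setdiff1d(surveyed_idx, presence_idx)
absence_idx = rng.choice(candidate_absences, 80, replace=False)

data = sites.loc[np.concatenate([presence_idx, absence_idx])].copy()
data["occurrence"] = np.r_[np.ones(80), np.zeros(80)]

glm = sm.GLM(data["occurrence"],
             sm.add_constant(data[["livestock_density", "farm_area_gt5ha"]]),
             family=sm.families.Binomial()).fit()
print(glm.summary())   # fitted probabilities could then be mapped per commune or scenario
```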
Abstract:
Fibroblastic reticular cells (FRC) form the structural backbone of the T cell rich zones in secondary lymphoid organs (SLO), but also actively influence the adaptive immune response. They provide a guidance path for immigrating T lymphocytes and dendritic cells (DC) and are the main local source of the cytokines CCL19, CCL21, and IL-7, all of which are thought to positively regulate T cell homeostasis and T cell interactions with DC. Recently, FRC in lymph nodes (LN) were also described to negatively regulate T cell responses in two distinct ways. During homeostasis they express and present a range of peripheral tissue antigens, thereby participating in peripheral tolerance induction of self-reactive CD8(+) T cells. During acute inflammation T cells responding to foreign antigens presented on DC very quickly release pro-inflammatory cytokines such as interferon γ. These cytokines are sensed by FRC which transiently produce nitric oxide (NO) gas dampening the proliferation of neighboring T cells in a non-cognate fashion. In summary, we propose a model in which FRC engage in a bidirectional crosstalk with both DC and T cells to increase the efficiency of the T cell response. However, during an acute response, FRC limit excessive expansion and inflammatory activity of antigen-specific T cells. This negative feedback loop may help to maintain tissue integrity and function during rapid organ growth.
Abstract:
BACKGROUND: The nuclear receptors are a large family of eukaryotic transcription factors that constitute major pharmacological targets. They exert their combinatorial control through homotypic heterodimerisation. Elucidation of this dimerisation network is vital in order to understand the complex dynamics and potential cross-talk involved. RESULTS: Phylogeny, protein-protein interactions, protein-DNA interactions and gene expression data have been integrated to provide a comprehensive and up-to-date description of the topology and properties of the nuclear receptor interaction network in humans. We discriminate between DNA-binding and non-DNA-binding dimers, and provide a comprehensive interaction map that identifies potential cross-talk between the various pathways of nuclear receptors. CONCLUSION: We infer that the topology of this network is hub-based, and much more connected than previously thought. The hub-based topology of the network and the wide tissue expression pattern of NRs create a highly competitive environment for the common heterodimerising partners. Furthermore, a significant number of negative feedback loops are present, with the hub protein SHP [NR0B2] playing a major role. We also compare the evolution, topology and properties of the nuclear receptor network with the hub-based dimerisation network of the bHLH transcription factors in order to identify both unique themes and ubiquitous properties in gene regulation. In terms of methodology, we conclude that such a comprehensive picture can only be assembled by semi-automated text-mining, manual curation and integration of data from various sources.
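A toy sketch of the kind of network inspection described above; the edge lists below are an assumed, illustrative subset and not the curated interaction map from the study.

```python
# Minimal sketch (hypothetical edge lists) of how a dimerisation network can be
# inspected for hubs and, given directed regulatory edges, for feedback loops.
import networkx as nx

# Undirected dimerisation interactions (illustrative subset of receptor names).
dimers = [("RXRA", "PPARG"), ("RXRA", "RARA"), ("RXRA", "VDR"),
          ("RXRA", "THRB"), ("RXRA", "NR1H3"), ("NR0B2", "ESR1"),
          ("NR0B2", "NR5A2"), ("NR0B2", "HNF4A")]
g = nx.Graph(dimers)

degree = dict(g.degree())
hubs = sorted(degree, key=degree.get, reverse=True)[:3]
print("candidate hubs:", hubs)           # in this toy set, RXRA and NR0B2 (SHP) stand out

# Directed regulatory edges would be needed to look for feedback loops:
reg = nx.DiGraph([("NR5A2", "NR0B2"), ("NR0B2", "NR5A2")])  # assumed toy example
print("feedback loops:", list(nx.simple_cycles(reg)))
```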
Abstract:
BACKGROUND: Previous published studies have shown significant variations in colonoscopy performance, even when medical factors are taken into account. This study aimed to examine the role of nonmedical factors (ie, embodied in health care system design) as possible contributors to variations in colonoscopy performance. METHODS: Patient data from a multicenter observational study conducted between 2000 and 2002 in 21 centers in 11 western countries were used. Variability was captured through 2 performance outcomes (diagnostic yield and colonoscopy withdrawal time), jointly studied as dependent variables, using a multilevel 2-equation system. RESULTS: Results showed that open-access systems and high-volume colonoscopy centers were independently associated with a higher likelihood of detecting significant lesions and longer withdrawal durations. Fee for service (FFS) payment was associated with shorter withdrawal durations, and so had an indirect negative impact on the diagnostic yield. Teaching centers exhibited lower detection rates and longer withdrawal times. CONCLUSIONS: Our results suggest that gatekeeping colonoscopy is likely to miss patients with significant lesions and that developing specialized colonoscopy units is important to improve performance. Results also suggest that FFS may result in a lower quality of care in colonoscopy practice and highlight the fact that longer withdrawal times do not necessarily indicate higher quality in teaching centers.
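The joint multilevel two-equation system is specific to the paper; as a simplified, hypothetical sketch of the data structure, the two outcomes can be modelled separately with a random intercept per centre. All data and effect sizes below are synthetic.

```python
# Simplified sketch (synthetic data): the study fits a joint multilevel
# two-equation system; here the two outcomes are modelled separately, only to
# illustrate centre-level clustering and the two dependent variables.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n, centres = 1000, 21
df = pd.DataFrame({
    "centre": rng.integers(0, centres, n),
    "open_access": rng.integers(0, 2, n),
    "fee_for_service": rng.integers(0, 2, n),
})
centre_effect = rng.normal(0, 1, centres)[df["centre"]]
df["withdrawal_min"] = (7 + 1.5 * df["open_access"] - 1.0 * df["fee_for_service"]
                        + centre_effect + rng.normal(0, 2, n))
df["lesion"] = rng.binomial(1, 1 / (1 + np.exp(-(-1.5 + 0.15 * df["withdrawal_min"]))))

# Equation 1: withdrawal time (linear mixed model, random intercept per centre).
m1 = smf.mixedlm("withdrawal_min ~ open_access + fee_for_service",
                 df, groups=df["centre"]).fit()
# Equation 2: diagnostic yield (logistic regression; centre effects could be added similarly).
m2 = smf.glm("lesion ~ open_access + fee_for_service + withdrawal_min",
             df, family=sm.families.Binomial()).fit()
print(m1.summary(), m2.summary(), sep="\n")
```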
Abstract:
Apart from therapeutic advances related to new treatments, our practices in the management of early breast cancer have been modified by two key organizational changes: (1) mass screening, which has substantially altered the presentation and epidemiology of breast cancer, and (2) the development of guidelines to ensure that patient management is consistent with the benefit demonstrated for adjuvant treatments. In daily practice, the impact of screening and guideline recommendations has put us in a paradoxical situation: while the majority of non-metastatic breast cancers treated in France are node negative, most of the results of clinical studies on chemotherapy and targeted therapies today arise from populations that are predominantly node positive. Therefore, it seemed legitimate to convene a working group to reflect on the indications for adjuvant chemotherapy in a growing node-negative population, in order to better respond to the questions of practising oncologists and to address the discrepancies between different existing guidelines.
Abstract:
Introduction: Carbon monoxide (CO) poisoning is one of the most common causes of fatal poisoning. Symptoms of CO poisoning are nonspecific, and documentation of elevated carboxyhemoglobin (HbCO) levels in an arterial blood sample is the only standard way of confirming suspected exposure. The treatment of CO poisoning requires normobaric or hyperbaric oxygen therapy, according to the symptoms and HbCO levels. A new device, the Rad-57 pulse CO-oximeter, allows noninvasive transcutaneous measurement of the blood carboxyhemoglobin level (SpCO) by measuring light wavelength absorption. Methods: Prospective cohort study with a sample of patients admitted between October 2008 and March 2009 and between October 2009 and March 2010 to the emergency services (ES) of a Swiss regional hospital and a Swiss university hospital (Burn Center). In cases of suspected CO poisoning, three successive noninvasive measurements were performed simultaneously with one arterial blood HbCO test. A control group included patients admitted to the ES for other complaints (cardiac insufficiency, respiratory distress, acute renal failure) but requiring arterial blood testing. Informed consent was obtained from all patients. The primary endpoint was to assess the agreement between the measurements made by the Rad-57 (SpCO) and the blood levels (HbCO). Results: 50 patients were enrolled, among whom 32 were admitted for suspected CO poisoning. Baseline demographic and clinical characteristics of the patients are presented in table 1. The median age was 37.7 ± 11.8 years, and 56% were male. Median laboratory carboxyhemoglobin levels (HbCO) were 4.25% (95% CI 0.6-28.5) for intoxicated patients and 1.8% (95% CI 1.0-5.3) for control patients. Only five patients presented with HbCO levels >= 15%. The results show relatively fair agreement between the SpCO levels obtained by the Rad-57 and the standard HbCO, without any false negative results. However, the Rad-57 tended to underestimate SpCO in intoxicated patients with HbCO levels >10% (fig. 1). Conclusion: Noninvasive transcutaneous measurement of the blood carboxyhemoglobin level is easy to use. The agreement seems acceptable for low to moderate levels (<15%). For higher values, we observed a trend for the Rad-57 to underestimate the HbCO levels. Apart from this potential limitation and a few cases of false-negative results described in the literature, the Rad-57 may be useful for the initial triage and diagnosis of CO poisoning.
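One standard way to quantify this kind of agreement between a non-invasive reading and a laboratory reference is a Bland-Altman analysis; the paired values below are invented for illustration and are not the study's measurements.

```python
# Hypothetical illustration (made-up paired values) of a Bland-Altman style
# agreement check between non-invasive SpCO and laboratory HbCO.
import numpy as np

hbco = np.array([1.2, 2.5, 4.0, 6.3, 9.8, 12.5, 15.4, 18.9, 22.0, 28.5])  # lab %
spco = np.array([1.5, 2.2, 4.4, 5.9, 9.0, 11.1, 13.0, 15.5, 18.2, 23.0])  # device %

diff = spco - hbco
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)               # 95% limits of agreement
print(f"bias = {bias:.2f}%  limits of agreement = [{bias - loa:.2f}, {bias + loa:.2f}]%")
# A negative bias that grows at high HbCO values would match the reported
# tendency of the device to underestimate severe intoxications.
```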
Abstract:
We evaluated the effectiveness of supplementation with a high dose of oral vitamin D3 to correct vitamin D insufficiency. We have shown that one or two oral boluses of 300,000 IU of vitamin D3 can correct vitamin D insufficiency in 50% of patients and that the patients who benefited most from supplementation were those with the lowest baseline levels. INTRODUCTION: Adherence to daily oral supplements of vitamin D3 is suboptimal. We evaluated the effectiveness of a single high dose of oral vitamin D3 (300,000 IU) to correct vitamin D insufficiency in a rheumatologic population. METHODS: Over 1 month, 292 patients had levels of 25-OH vitamin D determined. Results were classified as: deficiency <10 ng/ml, insufficiency ≥10 to <30 ng/ml, and normal ≥30 ng/ml. We added a category using the IOM-recommended cut-off of 20 ng/ml. Patients with deficient or normal levels were excluded, as were patients already supplemented with vitamin D3. Selected patients (141) with vitamin D insufficiency (18.5 ng/ml (10.2-29.1)) received a prescription for 300,000 IU of oral vitamin D3 and were asked to return after 3 (M3) and 6 months (M6). Patients still insufficient at M3 received a second prescription for 300,000 IU of oral vitamin D3. The relation between the change in 25-OH vitamin D between M0 and M3 and the baseline value was assessed. RESULTS: 124 patients had a blood test at M3. Two (2%) had deficiency (8.1 ng/ml (7.5-8.7)) and 50 (40%) normal results (36.7 ng/ml (30.5-5.5)). Seventy-two (58%) were insufficient (23.6 ng/ml (13.8-29.8)) and received a second prescription for 300,000 IU of oral vitamin D3. Of the 50/124 patients who had normal results at M3 and did not receive a second prescription, 36 (72%) had a test at M6. Seventeen (47%) had normal results (34.8 ng/ml (30.3-42.8)) and 19 (53%) were insufficient (25.6 ng/ml (15.2-29.9)). Of the 72/124 patients who received a second prescription, 54 (75%) had a test at M6. Twenty-eight (52%) had insufficiency (23.2 ng/ml (12.8-28.7)) and 26 (48%) had normal results (33.8 ng/ml (30.0-43.7)). At M3, 84% of patients achieved a 25-OH vitamin D level >20 ng/ml. The lower the baseline value, the larger the change after 3 months (negative relation with a correlation coefficient r = -0.3, p = 0.0007). CONCLUSIONS: We have shown that one or two oral boluses of 300,000 IU of vitamin D3 can correct vitamin D insufficiency in 50% of patients.
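For reference, the classification thresholds used above can be written as a small helper; the function name and the exact placement of the boundary inequalities are illustrative assumptions.

```python
# Minimal sketch of the 25-OH vitamin D categories described above (values in ng/ml);
# "iom_sufficient" applies the 20 ng/ml cut-off mentioned in the study.
def classify_25oh_d(level_ng_ml: float) -> dict:
    return {
        "category": ("deficiency" if level_ng_ml < 10
                     else "insufficiency" if level_ng_ml < 30
                     else "normal"),
        "iom_sufficient": level_ng_ml > 20,
    }

print(classify_25oh_d(18.5))   # e.g. the median baseline of the selected patients
```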
Abstract:
Introduction: Patients requiring prolonged intensive care and presenting a complicated course develop an intense metabolic response, generally characterised by hypermetabolism and protein catabolism. The severity of their illness exposes these patients to malnutrition, due mainly to insufficient nutritional intake and leading to a negative energy balance. In a substantial number of intensive care units, patient nutrition is not treated as a priority objective of care. By conducting a prospective observational study of the relationship between energy balance and clinical outcome in patients with prolonged intensive care stays, we wished to change this attitude and demonstrate the deleterious effect of malnutrition in this type of patient. Methods: Over a 2-year period, all patients whose intensive care stay lasted 5 days or more were enrolled. Energy requirements for each patient were determined either by indirect calorimetry or by a formula based on the patient's weight (30 kcal/kg/day); the patients who underwent indirect calorimetry also served to check the accuracy of the formula. Age, sex, preoperative weight, height, and the clinically established body mass index were recorded. Energy was delivered either in nutritional form (enteral, parenteral or combined nutrition) or in non-nutritional form (infusions: glucose solutions, non-nutritional lipids). Nutrition data (theoretical target, prescribed target, nutritional energy, non-nutritional energy, total energy, nutritional energy balance, total energy balance) and clinical course data (days of mechanical ventilation, number of infections, antibiotic use, length of stay, neurological, respiratory, gastrointestinal, cardiovascular, renal and hepatic complications, intensive care severity scores, haematological, serum and microbiological values) were analysed for each of the 669 intensive care days experienced by a total of 48 patients. Results: 48 patients aged 57±16 years, with stays ranging from 5 to 49 days (reason for admission: multiple trauma 10; cardiac surgery 13; respiratory failure 7; gastrointestinal pathology 3; sepsis 3; transplantation 4; other 8), were included. Although we could not demonstrate a relationship between energy balance, and more particularly energy deficit, and mortality, there is a highly significant relationship between energy deficit and morbidity, namely complications and infections, which naturally prolong the length of stay. Moreover, although the study involved no intervention and we cannot claim a cause-and-effect relationship, multiple regression analysis shows that the most reliable prognostic factor is precisely the energy balance, ahead of the scores usually used in intensive care. Outcome was independent of age, sex and preoperative nutritional status.
The study did not collect economic data; we therefore cannot claim that the increased costs generated by a prolonged intensive care stay are driven by the energy deficit, even if common sense suggests that a shorter stay costs less. This study also draws attention to the origin of the energy deficit: it builds up during the first week in intensive care and could therefore be prevented by early nutritional intervention, whereas current recommendations advocate energy delivery in the form of artificial nutrition only from 48 hours of intensive care stay onwards. Conclusions: The study shows that, for the most severely ill intensive care patients, energy balance should be considered an important objective of care, requiring the application of an early nutrition protocol. Finally, since patients' course at admission is often unpredictable and the deficit develops during the first week, it is legitimate to ask whether this protocol should be applied to all intensive care patients from admission onwards. Summary Background and aims: Critically ill patients with a complicated course are frequently hypermetabolic, catabolic, and at risk of underfeeding. The study aimed at assessing the relationship between energy balance and outcome in critically ill patients. Methods: Prospective observational study conducted in consecutive patients staying 5 days or more in the surgical ICU of a university hospital. Demographic data, time to feeding, route, energy delivery, and outcome were recorded. Energy balance was calculated as energy delivery minus target. Data are means ± SD; linear regressions between energy balance and outcome variables were computed. Results: Forty-eight patients aged 57±16 years were investigated; complete data are available for 669 days. Mechanical ventilation lasted 11±8 days, ICU stay was 15±9 days, and 30-day mortality was 38%. Time to feeding was 3.1±2.2 days. Enteral nutrition was the most frequent route, with 433 days. Mean daily energy delivery was 1090±930 kcal. Combining enteral and parenteral nutrition achieved the highest energy delivery. The cumulated energy balance was -12,600 ± 10,520 kcal and correlated with complications (P<0.001), already after 1 week. Conclusion: Negative energy balances were correlated with an increasing number of complications, particularly infections. Energy debt appears to be a promising tool for nutritional follow-up and should be further tested. Delaying the initiation of nutritional support exposes patients to energy deficits that cannot be compensated for later on.
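The energy-balance bookkeeping described above (target from 30 kcal/kg/day or indirect calorimetry, balance = delivered energy minus target, cumulated over the stay) can be illustrated with a short worked example; the weight and daily intakes below are invented.

```python
# Worked sketch (invented daily numbers) of cumulative energy-balance tracking.
weight_kg = 75
daily_target = 30 * weight_kg                        # kcal/day, formula-based target

delivered = [                                        # (nutritional, non-nutritional) kcal
    (0, 250), (300, 250), (600, 200), (900, 150),    # hypothetical first ICU days
    (1400, 100), (1800, 100), (2000, 100),
]
cumulative_balance = 0
for day, (nutr, non_nutr) in enumerate(delivered, start=1):
    cumulative_balance += (nutr + non_nutr) - daily_target
    print(f"day {day}: cumulative balance = {cumulative_balance:+} kcal")
# The deficit accumulates mostly over the first days, which is why the authors
# argue for starting nutritional support early.
```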
Abstract:
Abstract Traditionally, the common reserving methods used by non-life actuaries are based on the assumption that future claims will behave in the same way as they did in the past. There are two main sources of variability in the claims development process: the variability of the speed with which the claims are settled and the variability in claims severity between accident years. Large changes in these processes will generate distortions in the estimation of the claims reserves. The main objective of this thesis is to provide an indicator that firstly identifies and quantifies these two influences and secondly determines which model is adequate for a specific situation. Two stochastic models were analysed and the predictive distributions of the future claims were obtained. The main advantage of stochastic models is that they provide measures of variability of the reserve estimates. The first model (PDM) combines a Dirichlet-Multinomial conjugate family with the Poisson distribution. The second model (NBDM) improves on the first by combining two conjugate families, Poisson-Gamma (for the distribution of the ultimate amounts) and Dirichlet-Multinomial (for the distribution of the incremental claims payments). It was found that the second model makes it possible to express the variability in the speed of the reporting process and in the development of claims severity as a function of two of the above-mentioned distributions' parameters: the shape parameter of the Gamma distribution and the Dirichlet parameter. Depending on the relation between them, we can decide on the adequacy of the claims reserve estimation method. The parameters were estimated by the method of moments and by maximum likelihood. The results were tested using simulated data and then real data from three lines of business: Property/Casualty, General Liability, and Accident Insurance. These data include different developments and specificities. The outcome of the thesis shows that when the Dirichlet parameter is greater than the shape parameter of the Gamma distribution, the model exhibits positive correlation between past and future claims payments, which suggests that the Chain-Ladder method is appropriate for the claims reserve estimation. In terms of claims reserves, if the cumulated payments are high, the positive correlation will imply high expectations for the future payments, resulting in high claims reserve estimates. Negative correlation appears when the Dirichlet parameter is lower than the shape parameter of the Gamma distribution, meaning low expected future payments for the same high observed cumulated payments. This corresponds to the situation where claims are reported rapidly and fewer claims remain expected subsequently. The extreme case arises when all claims are reported at the same time, leading to expected future payments of zero or equal to the aggregated amount of the ultimate paid claims. For this latter case, the Chain-Ladder method is not recommended.
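A schematic rendering of the second model's hierarchy, with notation assumed here rather than taken from the thesis, may help fix ideas:

```latex
% Schematic sketch only (notation assumed; not the thesis' exact parameterisation)
% of the NBDM hierarchy for one accident year i with development periods j = 1..J:
\begin{align*}
  \lambda_i &\sim \operatorname{Gamma}(\alpha,\beta), \qquad
  U_i \mid \lambda_i \sim \operatorname{Poisson}(\lambda_i)
  && \text{(ultimate claims)} \\
  p_i &\sim \operatorname{Dirichlet}(\gamma_1,\dots,\gamma_J), \qquad
  (X_{i1},\dots,X_{iJ}) \mid U_i, p_i \sim \operatorname{Multinomial}(U_i, p_i)
  && \text{(incremental payments)}
\end{align*}
% Per the abstract, the sign of the correlation between past and future payments,
% and hence whether the Chain-Ladder method is adequate, depends on comparing the
% Dirichlet parameter with the Gamma shape parameter \alpha.
```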
Abstract:
Carbapenemases should be accurately and rapidly detected, given their possible epidemiological spread and their impact on treatment options. Here, we developed a simple, easy and rapid matrix-assisted laser desorption ionization-time of flight (MALDI-TOF)-based assay to detect carbapenemases and compared this innovative test with four other diagnostic approaches on 47 clinical isolates. Tandem mass spectrometry (MS-MS) was also used to determine accurately the amount of antibiotic present in the supernatant after 1 h of incubation and both MALDI-TOF and MS-MS approaches exhibited a 100% sensitivity and a 100% specificity. By comparison, molecular genetic techniques (Check-MDR Carba PCR and Check-MDR CT103 microarray) showed a 90.5% sensitivity and a 100% specificity, as two strains of Aeromonas were not detected because their chromosomal carbapenemase is not targeted by probes used in both kits. Altogether, this innovative MALDI-TOF-based approach that uses a stable 10-μg disk of ertapenem was highly efficient in detecting carbapenemase, with a sensitivity higher than that of PCR and microarray.
Abstract:
BACKGROUND: Multislice CT (MSCT) combined with D-dimer measurement can safely exclude pulmonary embolism in patients with a low or intermediate clinical probability of this disease. We compared this combination with a strategy in which both a negative venous ultrasonography of the leg and MSCT were needed to exclude pulmonary embolism. METHODS: We included 1819 consecutive outpatients with clinically suspected pulmonary embolism in a multicentre non-inferiority randomised controlled trial comparing two strategies: clinical probability assessment and either D-dimer measurement and MSCT (DD-CT strategy [n=903]) or D-dimer measurement, venous compression ultrasonography of the leg, and MSCT (DD-US-CT strategy [n=916]). Randomisation was by computer-generated blocks with stratification according to centre. Patients with a high clinical probability according to the revised Geneva score and a negative work-up for pulmonary embolism were further investigated in both groups. The primary outcome was the 3-month thromboembolic risk in patients who were left untreated on the basis of the exclusion of pulmonary embolism by diagnostic strategy. Clinicians assessing outcome were blinded to group assignment. Analysis was per protocol. This study is registered with ClinicalTrials.gov, number NCT00117169. FINDINGS: The prevalence of pulmonary embolism was 20.6% in both groups (189 cases in DD-US-CT group and 186 in DD-CT group). We analysed 855 patients in the DD-US-CT group and 838 in the DD-CT group per protocol. The 3-month thromboembolic risk was 0.3% (95% CI 0.1-1.1) in the DD-US-CT group and 0.3% (0.1-1.2) in the DD-CT group (difference 0.0% [-0.9 to 0.8]). In the DD-US-CT group, ultrasonography showed a deep-venous thrombosis in 53 (9% [7-12]) of 574 patients, and thus MSCT was not undertaken. INTERPRETATION: The strategy combining D-dimer and MSCT is as safe as the strategy using D-dimer followed by venous compression ultrasonography of the leg and MSCT for exclusion of pulmonary embolism. An ultrasound could be of use in patients with a contraindication to CT.
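As a purely illustrative re-computation of a non-inferiority comparison of this kind, a risk difference with a Wald confidence interval can be obtained from event counts; the counts and denominators below are assumed, chosen only to be of the same order as the reported 0.3% risks.

```python
# Hypothetical sketch: 3-month thromboembolic risk difference with a Wald 95% CI.
# Event counts and denominators are assumed for illustration, not taken from the trial.
import math

def risk_diff_ci(e1, n1, e2, n2, z=1.96):
    p1, p2 = e1 / n1, e2 / n2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    d = p1 - p2
    return d, (d - z * se, d + z * se)

print(risk_diff_ci(e1=2, n1=680, e2=2, n2=670))   # ~0.3% risk per arm, difference near 0
```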
Abstract:
Background: Patient change talk (CT) during brief motivational interventions (BMI) has been linked with subsequent changes in drinking in clinical settings, but this link has not been clearly established among young people in non-clinical populations. Objective: To determine which of several CT dimensions assessed during an effective BMI delivered in a non-clinical setting to 20-year-old men are associated with drinking 6 months later. Methods: Of 125 individuals receiving a face-to-face BMI session (15.8 ± 5.4 minutes), we recorded and coded a subsample of 42 sessions using the Motivational Interviewing Skill Code 2.1. Each patient change talk utterance was categorized as 'Reason', 'Ability', 'Desire', 'Need', 'Commitment', 'Taking steps', or 'Other'. Each utterance was graded according to its strength (absolute value from 1 to 3) and direction (i.e. towards change (positive sign) or away from change/in favor of the status quo (negative sign)). 'Ability', 'Desire', and 'Need' to change ('ADN') were grouped together since these codes were too scarce to analyse separately. Mean strength scores over the entire session were computed for each dimension and later dichotomized into towards change (i.e. mean score > 0) and away from change/in favor of the status quo. Negative binomial regression models were used to assess the relationship between CT dimensions and drinking 6 months later, adjusting for drinking at baseline. Results: Compared to subjects with a 'Taking steps' score away from change/in favor of the status quo, subjects with a positive 'Taking steps' score reported significantly less drinking 6 months later (Incidence Rate Ratio [IRR] for drinks per week: 0.56, 95% Confidence Interval [CI] 0.31, 1.00). The IRR (95% CI) for subjects with a positive 'ADN' score was 0.58 (0.32, 1.03). For subjects with positive 'Reason', 'Commitment', and 'Other' scores, the IRRs (95% CI) were 1.28 (0.77, 2.12), 1.63 (0.85, 3.14), and 1.03 (0.61, 1.72), respectively. Conclusion: A change talk dimension reflecting steps taken towards change ('Taking steps') is associated with less drinking 6 months later among young men receiving a BMI in a non-clinical setting. Encouraging patients to take steps towards change may be a worthy objective for clinicians and may explain BMI efficacy.
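A minimal sketch of the analysis pattern described above, using simulated data and assumed variable names (not the study's dataset), might look like this:

```python
# Minimal sketch: negative binomial regression of drinks per week at follow-up on a
# dichotomized change-talk score, adjusting for baseline drinking (simulated data).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 42
df = pd.DataFrame({
    "taking_steps_pos": rng.integers(0, 2, n),        # 1 = mean 'Taking steps' score > 0
    "baseline_drinks":  rng.poisson(12, n),
})
mu = np.exp(1.0 + 0.08 * df["baseline_drinks"] - 0.6 * df["taking_steps_pos"])
df["followup_drinks"] = rng.poisson(mu)                # simulated outcome

model = smf.glm("followup_drinks ~ taking_steps_pos + baseline_drinks",
                data=df, family=sm.families.NegativeBinomial()).fit()
print(np.exp(model.params))                            # exponentiated coefficients ~ IRRs
```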