305 results for single event upset
at Université de Lausanne, Switzerland
Abstract:
LAY SUMMARY: The brain is composed of different cell types, including neurons and astrocytes. For lack of means to observe them, astrocytes long remained in the shadows, whereas neurons, benefiting from ad hoc tools for stimulation and study, received all the attention. The development of cellular imaging and of fluorescent tools has made it possible to observe these electrically non-excitable cells and to obtain information suggesting that they are far from passive and participate actively in brain function. This participation occurs in part through the release of neuroactive substances (called gliotransmitters) that astrocytes release close to synapses, thereby modulating neuronal function. This release of gliotransmitters is mainly triggered by the neuronal activity that astrocytes are able to sense. Nevertheless, little is known about the precise properties of gliotransmitter release. Understanding the spatiotemporal properties of this release is essential for understanding the mode of communication of these cells and their involvement in the transmission of information in the brain. Using recently developed fluorescent tools and combining different cellular imaging techniques, we obtained very precise information on the release of gliotransmitters by astrocytes. We confirmed that this release is a very fast process and that it is controlled by local, rapid calcium increases. We also described a complex organization of the machinery supporting gliotransmitter release. This complex organization appears to underlie the extremely rapid release of gliotransmitters. This speed of release and this structural complexity suggest that astrocytes are cells particularly suited to fast communication and that, like the neurons whose legitimate partners they would be, they can participate in the transmission and integration of information in the brain. SUMMARY: Small vesicles, the "SLMVs" or synaptic-like microvesicles, which express vesicular glutamate transporters (VGluTs) and release glutamate by regulated exocytosis, have recently been described in cultured astrocytes and in situ. Nevertheless, little is known about the precise properties of SLMV secretion. Unlike in neurons, stimulus-secretion coupling in astrocytes is not based on the opening of membrane calcium channels but requires second messengers and the release of calcium from the endoplasmic reticulum (ER). Understanding the spatiotemporal properties of astrocytic secretion is essential for understanding the mode of communication of these cells and their involvement in the transmission of information in the brain. We used fluorescent tools recently developed for studying the recycling of glutamatergic synaptic vesicles, such as styryl dyes and pHluorin, in order to follow SLMV secretion at the scale of the whole cell but also at the scale of single events.
The combined use of epifluorescence and evanescent-wave (total internal reflection) fluorescence gave us unprecedented temporal and spatial resolution. We thus confirmed that regulated secretion in astrocytes is a very fast process (on the order of a few hundred milliseconds). We discovered that this secretion is controlled by local, rapid calcium increases. We also described cytosolic compartments delimited by the ER close to the plasma membrane and containing the SLMVs. This organization appears to underlie the fast coupling between GPCR activation and secretion. The existence of independent subcellular compartments that confine intracellular messengers and limit their diffusion seems to compensate efficiently for the lack of electrical excitability of astrocytes. Moreover, the existence of distinct vesicle pools that are recruited sequentially and fuse in different modes, together with mechanisms allowing these pools to be replenished during stimulation, suggests that astrocytes can cope with sustained stimulation of their secretion. These data suggest that gliotransmitter release by regulated exocytosis is not merely a property of cultured astrocytes but rather the result of a strong specialization of these cells for secretion. The speed of this secretion gives astrocytes everything they need to intervene actively in the transmission and integration of information. ABSTRACT: Recently, astrocytic synaptic-like microvesicles (SLMVs), which express vesicular glutamate transporters (VGluTs) and are able to release glutamate by Ca2+-dependent regulated exocytosis, have been described both in tissue and in cultured astrocytes. Nevertheless, little is known about the specific properties of regulated secretion in astrocytes. Important differences may exist between astrocytic and neuronal exocytosis, starting from the fact that stimulus-secretion coupling in astrocytes is voltage independent, mediated by G-protein-coupled receptors and the release of Ca2+ from internal stores. Elucidating the spatiotemporal properties of astrocytic exo-endocytosis is therefore of primary importance for understanding the mode of communication of these cells and their role in brain signaling. We took advantage of fluorescent tools recently developed for studying the recycling of glutamatergic vesicles at synapses, such as styryl dyes and pHluorin, in order to follow exocytosis and endocytosis of SLMVs at the level of the entire cell or of single events. We combined epifluorescence and total internal reflection fluorescence imaging to investigate, with unprecedented temporal and spatial resolution, the events underlying stimulus-secretion coupling in astrocytes. We confirmed that the exo-endocytosis process in astrocytes proceeds on a millisecond time scale. We discovered that SLMV exocytosis is controlled by local and fast Ca2+ elevations; indeed, we identified submicrometer cytosolic compartments delimited by endoplasmic reticulum (ER) tubules reaching beneath the plasma membrane and containing SLMVs. Such a complex organization seems to support the fast stimulus-secretion coupling reported here.
Independent subcellular compartments formed by the ER, SLMVs and the plasma membrane, which contain intracellular messengers and limit their diffusion, seem to compensate efficiently for the lack of electrical excitability of astrocytes. Moreover, the existence of two pools of SLMVs that are sequentially recruited suggests a compensatory mechanism allowing SLMVs to be replenished and sustaining exocytosis over repeated stimulation. These data suggest that regulated secretion is not only a feature of cultured astrocytes but results from a strong specialization of these cells. The rapidity of secretion indicates that astrocytes are able to participate actively in the transmission and processing of information in the brain.
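Following single fusion events with reporters such as pHluorin typically comes down to thresholding relative fluorescence changes (ΔF/F) in small regions of interest around individual vesicles. The sketch below is a generic illustration of that step on a single trace, with an assumed frame rate and threshold; it is not the analysis pipeline used in this work.

import numpy as np

def detect_exocytosis_events(trace, fs_hz=100.0, k_sigma=4.0, baseline_frames=50):
    """Flag putative exocytosis events as fast upward deflections in a pHluorin
    fluorescence trace (one region of interest). Threshold, baseline window and
    frame rate are illustrative assumptions, not values from the study."""
    f0 = np.median(trace[:baseline_frames])            # resting fluorescence
    dff = (trace - f0) / f0                            # delta F / F
    noise = np.std(dff[:baseline_frames])              # baseline noise estimate
    above = dff > k_sigma * noise                      # candidate frames
    onsets = np.flatnonzero(np.diff(above.astype(int)) == 1) + 1
    return onsets / fs_hz                              # event onset times in seconds

# Example with a synthetic trace: baseline noise plus one step-like brightening.
rng = np.random.default_rng(0)
trace = 100 + rng.normal(0, 1, 500)
trace[200:260] += 25                                   # simulated fusion event
print(detect_exocytosis_events(trace))                 # ~2.0 s at 100 Hz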
Abstract:
The geodynamic forces acting in the Earth's interior manifest themselves in a variety of ways. Volcanoes are amongst the most impressive examples in this respect but, as with an iceberg, they only represent the tip of a more extensive system hidden underground. This system consists of a source region where melt forms and accumulates, feeder connections in which magma is transported towards the surface, and different reservoirs where it is stored before it eventually erupts to form a volcano. A magma represents a mixture of melt and crystals. The latter can be extracted from the source region or form anywhere along the path towards their final crystallization place; they retain information about the overall plumbing system. The host rocks of an intrusion, in contrast, provide information at the emplacement level: they record the effects of the thermal and mechanical forces imposed by the magma. For a better understanding of the system, both parts - magmatic and metamorphic petrology - have to be integrated. I will demonstrate in my thesis that information from both is complementary. It is an iterative process, using constraints from one field to better constrain the other. Reading the history of the host rocks is not always straightforward. This is shown in chapter two, where a model for the formation of clustered garnets observed in the contact aureole is proposed. Fragments of garnets older than the intrusive rocks are overgrown by garnet crystallizing due to the reheating during emplacement of the adjacent pluton. The formation of the clusters is therefore not a single event, as generally assumed, but the result of a two-stage process, namely the alteration of the old grains followed by the overgrowth and amalgamation of new garnet rims. This makes an important difference when applying petrological methods such as thermobarometry, geochronology or grain size distributions. The thermal conditions in the aureole are a strong function of the emplacement style of the pluton; it is therefore necessary to understand the pluton before drawing conclusions about its aureole. A study investigating the intrusive rocks by means of field, geochemical, geochronological and structural methods is presented in chapter three. It provided important information about the assembly of the intrusion, but also new insights into the nature of large, homogeneous plutons and the structure of the plumbing system in general. The incremental nature of the emplacement of the Western Adamello tonalite is documented, and the existence of an intermediate reservoir beneath homogeneous plutons is proposed. In chapter four it is demonstrated that information extracted from the host rock provides further constraints on the emplacement process of the intrusion. The temperatures obtained by combining field observations with phase petrology modeling are used together with thermal models to constrain the magmatic activity in the immediately adjacent intrusion. Instead of using the thermal models to check the petrological result, the inverse is done: the model parameters were changed until a match with the aureole temperatures was obtained. It is shown that only a few combinations give a positive match and that temperature estimates from the aureole can constrain the frequency of magmatic activity in ancient magmatic systems. In the fifth chapter, the anisotropy of magnetic susceptibility of intrusive rocks is compared to 3D tomography.
The obtained signal is a function of the shape and distribution of ferromagnetic grains and is often used to infer flow directions of magma. It turns out that the signal is dominated by the shape of the magnetic crystals and, where they form tight clusters, also by their distribution. This is in good agreement with the predictions made in the theoretical and experimental literature. In the sixth chapter, arguments for partial melting of host rock carbonates are presented. While at first very surprising, this is to be expected when considering the prior results from the intrusive study and experiments from the literature. Partial melting is documented by compelling microstructures as well as geochemical and structural data. The necessary conditions are far from extreme and this process might be more frequent than previously thought. The carbonate melt is highly mobile and can move along grain boundaries, infiltrating other rocks and ultimately altering the existing mineral assemblage. Finally, a mineralogical curiosity is presented in chapter seven. The mineral assemblage magnesite and calcite is in apparent equilibrium. It is well known that these two carbonates are not stable together in the system CaO-MgO-FeO-CO2; indeed, magnesite and calcite should react to dolomite during metamorphism. The explanation presented for this "forbidden" assemblage is that a calcite melt infiltrated the magnesite-bearing rock along grain boundaries and caused the peculiar microstructure. This is supported by isotopic disequilibrium between calcite and magnesite. A further implication of partially molten carbonates is that the host rock drastically loses its strength, so that its physical properties may be comparable to those of the intrusive rocks. This contrasting behavior of the host rock may ease the emplacement of the intrusion. We see that the circle closes and the iterative process of better constraining the emplacement could start again. - The Earth is in perpetual motion, and the tectonic forces associated with these movements manifest themselves in different forms. Volcanoes are one of the most impressive examples, but, like icebergs, the lavas erupted at the surface represent only the tip of a vast system hidden at depth. This system consists of a source region, where the source rock melts and produces magma; this magma can accumulate in the source region or be transported through various conduits into reservoirs where it is stored. The magma can crystallize in situ and produce plutonic rocks, or be erupted at the surface. A magma represents a mixture of a liquid and crystals. These crystals can be extracted from the source or form all along the path to the final site of crystallization. Studying these crystals can thus provide information on the magmatic system as a whole. The host rocks, in contrast, provide information on the emplacement level of the intrusion; indeed, these rocks record the thermal and mechanical effects imposed by the magma. For a better understanding of the system, the two parts, magmatic and metamorphic, must be integrated. This thesis aims to show that the information obtained from studying the magmatic rocks and the host rocks is complementary. It is an iterative process that uses the constraints from one domain to improve the understanding of the other.
Understanding the history of the host rocks is not always easy. This is demonstrated in chapter two, where a model for the formation of the garnets observed as aggregates in the contact aureole is proposed. Garnet fragments older than the intrusive rocks show an overgrowth zone generated by the heat input from the emplacement of the adjacent pluton. The formation of the garnet aggregates is therefore not the result of a single event, as usually described, but of a two-stage process: the alteration of the old grains, causing their fracturing, followed by the growth of overgrowth rims around the different fragments, which explains the observed aggregate texture. This two-stage interpretation matters, because it makes a notable difference when petrological methods such as thermobarometry or geochronology are applied, or when the relative grain size distribution is studied. The thermal conditions in the contact aureole depend strongly on the emplacement mode of the intrusion, which is why it is necessary to first understand the pluton before drawing conclusions about its contact aureole. A field, geochemical, geochronological and structural study of the intrusive rocks is presented in the third chapter. This study provides important information on the formation of the intrusion, but also new insights into the nature of large homogeneous plutons and the structure of magmatic systems in general. Incremental emplacement is demonstrated, and the existence of an intermediate reservoir below homogeneous plutons is proposed. The fourth chapter of this thesis illustrates how information extracted from the host rocks can be used to explain the emplacement of the intrusion. The temperatures obtained by combining field observations and the metamorphic assemblage are used together with thermal models to constrain the magmatic activity directly at the contact with the aureole. Instead of using the thermal model to verify the petrological result, an inverse approach was chosen: the model parameters were changed until a match with the temperatures observed in the contact aureole was obtained. This shows that only a few combinations can explain the temperatures and that the frequency of magmatic activity in an ancient magmatic system can be constrained in this way. In the fifth chapter, the processes controlling the anisotropy of magnetic susceptibility of the intrusive rocks are explained using images of the mineral distribution in the rocks obtained by 3D tomography. The signal associated with the anisotropy of magnetic susceptibility is a function of the shape and distribution of the ferromagnetic grains and is frequently used to determine the flow direction of a magma. In agreement with other studies in the literature, the results show that the signal is dominated by the shape of the magnetic crystals and by the distribution of clusters of these minerals in the rock. In the sixth chapter, a study of the partial melting of carbonates in the host rocks is presented.
Although the presence of carbonate liquids in contact aureoles had been proposed on the basis of laboratory experiments, our study clearly demonstrates their existence in nature. Partial melting is documented by microstructures characteristic of the former presence of liquids as well as by geochemical and structural data. The necessary conditions are far from extreme, and this process could be more frequent than expected. Carbonate liquids are highly mobile and can move along grain boundaries before infiltrating other rocks and modifying their mineral assemblages. Finally, a mineralogical curiosity is presented in chapter seven: an assemblage of magnesite and calcite in apparent equilibrium. It is well known that these two carbonates are not stable together in the system CaO-MgO-FeO-CO2; indeed, magnesite and calcite should react to form dolomite during metamorphism. The explanation presented for this a priori "forbidden" assemblage is that a carbonate liquid derived from the adjacent rocks infiltrated this rock and is responsible for the microstructure. Another implication of the presence of molten carbonates is that the host rock shows a drastic decrease in strength, so that its physical properties become comparable to those of the intrusive rock. This change in the rheological properties of the host rocks can facilitate the emplacement of the intrusive rocks. These different studies illustrate the iterative approach used and the value of studying both the intrusive rocks and the host rocks in order to understand the mechanisms of magma emplacement within the Earth's crust.
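The inverse use of thermal models described in chapter four can be illustrated with the classical conductive solution for an instantaneously emplaced sheet intrusion (Carslaw and Jaeger type). The sketch below is a generic illustration with assumed values for intrusion temperature, half-width and diffusivity, not the parameters actually used in the thesis.

import numpy as np
from scipy.special import erf

# Peak contact-aureole temperature for a sheet intruded instantaneously at T_magma
# into host rock at T_host (1-D conduction, no latent heat). All numbers below are
# illustrative assumptions, not values from the study.
T_host, T_magma = 300.0, 950.0       # deg C
half_width = 500.0                    # m, half-thickness of the intrusion
kappa = 1e-6                          # m^2/s, thermal diffusivity

def temperature(x, t):
    """Temperature at distance x from the intrusion centre, time t after emplacement."""
    s = 2.0 * np.sqrt(kappa * t)
    return T_host + 0.5 * (T_magma - T_host) * (
        erf((half_width - x) / s) + erf((half_width + x) / s))

# Sweep time to find the peak temperature 100 m into the aureole; in an inverse
# approach, parameters such as T_magma or the number of magma pulses would be varied
# until this peak matches the petrological temperature estimate from the aureole.
x_obs = half_width + 100.0
times = np.logspace(8, 13, 400)       # seconds (~3 yr to ~300 kyr)
print("peak T at 100 m from the contact:", temperature(x_obs, times).max(), "deg C")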
Abstract:
Like numerous torrents in mountainous regions, the Illgraben creek (canton of Wallis, SW Switzerland) produces several debris flows almost every year. The total area of the active catchment is only 4.7 km², but large events ranging from 50,000 to 400,000 m³ are common (Zimmermann 2000). Consequently, the pathway of the main channel often changes suddenly; one single event can, for instance, fill the whole river bed and dig new several-metre-deep channels somewhere else (Bardou et al. 2003). Quantifying both the rhythm and the magnitude of these changes is very important for assessing the variability of the bed's cross section and long profile. These parameters are indispensable for numerical modelling, as they should be considered as initial conditions. To monitor the channel evolution, an Optech ILRIS 3D terrestrial laser scanner (LIDAR) was used. LIDAR makes it possible to build a complete, high-precision 3D model of the channel and its surroundings by scanning it from different viewpoints. The 3D data are treated and interpreted with the software Polyworks from Innovmetric Software Inc. Sequential 3D models allow for the determination of the variation in the bed's cross section and long profile. These data will afterwards be used to quantify erosion and deposition in the torrent reaches. To complete the chronological evolution of the landforms, precise digital terrain models obtained by high-resolution photogrammetry based on old aerial photographs will be used. A 500 m long section of the Illgraben channel was scanned on 18 August 2005 and on 7 April 2006. These two data sets permit identifying the changes in the channel that occurred during the winter season. An upcoming scanning campaign in September 2006 will allow for the determination of the changes during this summer. Preliminary results show huge variations in the pathway of the Illgraben channel, as well as important vertical and lateral erosion of the river bed. Here we present the results for a river bank on the left (north-western) flank of the channel (Figure 1). For the August 2005 model the scans from 3 viewpoints were superposed, whereas the April 2006 3D image was obtained by combining 5 separate scans. The bank was eroded essentially on its left part (up to 6.3 m), where it is hit by the river and the debris flows (Figures 2 and 3). A debris cone has also formed (Figure 3), which suggests that part of the bank erosion is due to shallow landslides; they probably occur when the river erosion creates an undercut slope. These geometrical data allow for the monitoring of the alluvial dynamics (i.e. aggradation and degradation) on different time scales and of the influence of debris-flow occurrence on these changes. Finally, the resistance against erosion of the bed's cross section and long profile will be analysed to assess the variability of these two key parameters. This information may then be used in debris flow simulation.
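Quantifying erosion and deposition from sequential scans usually comes down to differencing co-registered, gridded elevation models from the two epochs. The following is a minimal sketch of that step on regular grids with an invented cell size; it is not the Polyworks workflow used in the study.

import numpy as np

def volume_change(dem_before, dem_after, cell_size_m):
    """Erosion and deposition volumes (m^3) between two co-registered, gridded
    digital elevation models with the same shape and cell size."""
    dz = dem_after - dem_before                  # elevation change per cell
    cell_area = cell_size_m ** 2
    erosion = -dz[dz < 0].sum() * cell_area      # material removed
    deposition = dz[dz > 0].sum() * cell_area    # material added
    return erosion, deposition

# Illustrative example: a 100 x 100 grid at 0.5 m resolution with a 2 m deep scour.
before = np.zeros((100, 100))
after = before.copy()
after[40:60, 40:60] -= 2.0
print(volume_change(before, after, cell_size_m=0.5))    # (200.0, 0.0) m^3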
Abstract:
Carbon isotope ratios in marine carbonate rocks have been shown to shift at some of the time boundaries associated with extinction events; for example, Cretaceous/Tertiary and Ordovician/Silurian. The Permian/Triassic boundary, the greatest extinction event of the Phanerozoic, is also marked by a large δ13C depletion. New carbon isotope results from sections in the southern Alps show that this depletion did not actually represent a single event, but was a complex change that spanned perhaps a million years during the late Permian and early Triassic. These results suggest that the Permian/Triassic (P/Tr) extinction may have been in part gradual and in part 'stepwise', but was not in any case a single catastrophic event.
Abstract:
Introduction: Non-invasive brain imaging techniques often contrast experimental conditions across a cohort of participants, obfuscating distinctions in individual performance and brain mechanisms that are better characterised by the inter-trial variability. To overcome such limitations, we developed topographic analysis methods for single-trial EEG data [1]. Until now, single-trial analysis has typically been based on time-frequency analysis of single-electrode data or single independent components. The method's efficacy is demonstrated for event-related responses to environmental sounds, hitherto studied at an average event-related potential (ERP) level. Methods: Nine healthy subjects participated in the experiment. Auditory meaningful sounds of common objects were used for a target detection task [2]. In each block, subjects were asked to discriminate target sounds, which were living or man-made auditory objects. Continuous 64-channel EEG was acquired during the task. Two datasets were considered for each subject, comprising the single trials of the two conditions, living and man-made. The analysis comprised two steps. In the first step, a mixture of Gaussians analysis [3] provided representative topographies for each subject. In the second step, the conditional probabilities for each Gaussian provided statistical inference on the structure of these topographies across trials, time, and experimental conditions. A similar analysis was conducted at the group level. Results: The results show that the occurrence of each map is structured in time and consistent across trials, both at the single-subject and at the group level. Conducting separate analyses of ERPs at single-subject and group levels, we could quantify the consistency of the identified topographies and their time course of activation within and across participants as well as experimental conditions. A general agreement was found with previous analyses at the average ERP level. Conclusions: This novel approach to single-trial analysis promises to have an impact on several domains. In clinical research, it gives the possibility of statistically evaluating single-subject data, an essential tool for analysing patients with specific deficits and impairments and their deviation from normative standards. In cognitive neuroscience, it provides a novel tool for understanding behaviour and brain activity interdependencies at both the single-subject and group levels. In basic neurophysiology, it provides a new representation of ERPs and promises to cast light on the mechanisms of their generation and inter-individual variability.
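The mixture-of-Gaussians step can be pictured as fitting a GMM to the pool of single-trial voltage topographies (one vector of electrode values per time frame) and reading out, for each frame, the posterior probability of each template map. The sketch below uses scikit-learn on simulated data; the number of components, trial count and electrode count are assumptions, not the study's settings.

import numpy as np
from sklearn.mixture import GaussianMixture

n_trials, n_times, n_electrodes, n_maps = 200, 150, 64, 4   # assumed sizes

# Simulated single-trial EEG: shape (trials, time frames, electrodes).
rng = np.random.default_rng(0)
eeg = rng.normal(size=(n_trials, n_times, n_electrodes))

# Pool all topographies (one per trial and time frame) and fit the mixture.
topographies = eeg.reshape(-1, n_electrodes)
gmm = GaussianMixture(n_components=n_maps, covariance_type="diag", random_state=0)
gmm.fit(topographies)

# Posterior probability of each template map, per trial and time frame.
posteriors = gmm.predict_proba(topographies).reshape(n_trials, n_times, n_maps)

# Averaging over trials gives the time course of each map's "degree of presence".
presence = posteriors.mean(axis=0)     # shape (n_times, n_maps)
print(presence.shape)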
Abstract:
Introduction: Responses to external stimuli are typically investigated by averaging peri-stimulus electroencephalography (EEG) epochs in order to derive event-related potentials (ERPs) across the electrode montage, under the assumption that signals related to the external stimulus are fixed in time across trials. We demonstrate the applicability of a single-trial model based on patterns of scalp topographies (De Lucia et al., 2007) that can be used for ERP analysis at the single-subject level. The model is able to classify new trials (or groups of trials) with minimal a priori hypotheses, using information derived from a training dataset. The features used for the classification (the topography of responses and their latency) can be interpreted neurophysiologically, because a difference in scalp topography indicates a different configuration of brain generators. Above-chance classification accuracy on test datasets implicitly demonstrates the suitability of this model for EEG data. Methods: The data analyzed in this study were acquired from two separate visual evoked potential (VEP) experiments. The first entailed passive presentation of checkerboard stimuli to each of the four visual quadrants (hereafter, "Checkerboard Experiment") (Plomp et al., submitted). The second entailed active discrimination of novel versus repeated line drawings of common objects (hereafter, "Priming Experiment") (Murray et al., 2004). Four subjects per experiment were analyzed, using approximately 200 trials per experimental condition. These trials were randomly separated into training (90%) and testing (10%) datasets in 10 independent shuffles. In order to perform the ERP analysis, we estimated the statistical distribution of voltage topographies with a Mixture of Gaussians (MofGs), which reduces our original dataset to a small number of representative voltage topographies. We then statistically evaluated the degree of presence of these template maps across trials and whether and when this differed across experimental conditions. Based on these differences, single trials or sets of a few single trials were classified as belonging to one or the other experimental condition. Classification performance was assessed using the Receiver Operating Characteristic (ROC) curve. Results: For the Checkerboard Experiment, contrasts entailed left versus right visual field presentations for upper and lower quadrants, separately. The average posterior probabilities, indicating the presence of the computed template maps in time and across trials, revealed significant differences starting at ~60-70 ms post-stimulus. The average ROC curve area across all four subjects was 0.80 and 0.85 for upper and lower quadrants, respectively, and was in all cases significantly higher than chance (unpaired t-test, p<0.0001). In the Priming Experiment, we contrasted initial versus repeated presentations of visual object stimuli. Their posterior probabilities revealed significant differences, which started at 250 ms post-stimulus onset. The classification accuracy rates with single-trial test data were at chance level; we therefore considered sub-averages based on five single trials. We found that for three out of four subjects, classification rates were significantly above chance level (unpaired t-test, p<0.0001). Conclusions: The main advantage of the present approach is that it is based on topographic features that are readily interpretable along neurophysiological lines.
As these maps were previously normalized by the overall strength of the field potential on the scalp, a change in their presence across trials and between conditions necessarily reflects a change in the underlying generator configurations. The temporal periods of statistical difference between conditions were estimated for each training dataset over the ten shuffles of the data. Across the ten shuffles and in both experiments, we observed a high level of consistency in the temporal periods over which the two conditions differed. With this method we are able to analyze ERPs at the single-subject level, providing a novel tool to compare normal electrophysiological responses with single cases that cannot be considered part of any cohort of subjects. This aspect promises to have a strong impact on both basic and clinical research.
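The classification step can be sketched as follows: template maps are learned on training trials, each trial is summarised by the presence of each map inside a discriminative time window, and held-out trials are scored by the most discriminative map and evaluated with an ROC curve. The code below is a simplified, simulated stand-in (window, component count and the map-selection rule are assumptions), not the published algorithm.

import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n_trials, n_times, n_el = 200, 100, 64                    # assumed sizes
y = np.tile([0, 1], n_trials // 2)                        # two conditions
eeg = rng.normal(size=(n_trials, n_times, n_el))
eeg[y == 1, 40:60, :8] += 0.5                             # weak simulated effect

train, test = np.arange(180), np.arange(180, 200)         # one 90/10 split

# Template maps estimated from training trials only.
gmm = GaussianMixture(n_components=4, covariance_type="diag", random_state=0)
gmm.fit(eeg[train].reshape(-1, n_el))

def presence(trials, window=slice(40, 60)):
    """Mean posterior of each template map inside an assumed discriminative window."""
    p = gmm.predict_proba(trials.reshape(-1, n_el)).reshape(len(trials), n_times, -1)
    return p[:, window, :].mean(axis=1)

# Pick the map whose presence differs most between conditions on the training set,
# then use its (sign-oriented) presence in test trials as the classification score.
tr = presence(eeg[train])
diff = tr[y[train] == 1].mean(axis=0) - tr[y[train] == 0].mean(axis=0)
best = np.argmax(np.abs(diff))
scores = np.sign(diff[best]) * presence(eeg[test])[:, best]
print("ROC AUC:", roc_auc_score(y[test], scores))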
Abstract:
We present a novel approach for analyzing single-trial electroencephalography (EEG) data using topographic information. The method allows event-related potentials to be visualized using all recording electrodes, overcoming the limitation of previous approaches that required electrode selection and waveform filtering. We apply this method to EEG data from an auditory object recognition experiment that we had previously analyzed at an ERP level. Temporally structured periods wherein a given topography predominated were statistically identified, without any prior information about the temporal behavior. In addition to providing novel methods for EEG analysis, the data indicate that ERPs are reliably observable at a single-trial level when examined topographically.
Abstract:
This tutorial review details some of the recent advances in signal analyses applied to event-related potential (ERP) data. These "electrical neuroimaging" analyses provide reference-independent measurements of response strength and response topography that circumvent statistical and interpretational caveats of canonical ERP analysis methods while also taking advantage of the greater information provided by high-density electrode montages. Electrical neuroimaging can be applied across scales ranging from group-averaged ERPs to single-subject and single-trial datasets. We illustrate these methods with a tutorial dataset and place particular emphasis on their suitability for studies of clinical and/or developmental populations.
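The reference-independent measures of response strength and response topography referred to here are conventionally computed as global field power (GFP) and global map dissimilarity. A minimal sketch of these two standard quantities, not tied to the tutorial dataset, is:

import numpy as np

def gfp(v):
    """Global field power: spatial standard deviation of the average-referenced
    potentials across electrodes (last axis), one value per time frame."""
    v = v - v.mean(axis=-1, keepdims=True)      # average reference
    return v.std(axis=-1)

def dissimilarity(v1, v2):
    """Global map dissimilarity between two maps: root-mean-square difference of
    the GFP-normalized, average-referenced topographies (0 = identical shape)."""
    a = (v1 - v1.mean()) / gfp(v1)
    b = (v2 - v2.mean()) / gfp(v2)
    return np.sqrt(np.mean((a - b) ** 2))

# Example with two random 64-electrode maps.
rng = np.random.default_rng(0)
m1, m2 = rng.normal(size=64), rng.normal(size=64)
print(gfp(m1), dissimilarity(m1, m2))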
Abstract:
PURPOSE: We report the long-term results of a randomized clinical trial comparing induction therapy with single-agent rituximab given once per week for 4 weeks versus the same induction followed by four cycles of maintenance therapy every 2 months in patients with follicular lymphoma. PATIENTS AND METHODS: Patients (prior chemotherapy, 138; chemotherapy-naive, 64) received single-agent rituximab and, if nonprogressive, were randomly assigned to no further treatment (observation) or four additional doses of rituximab given at 2-month intervals (prolonged exposure). RESULTS: At a median follow-up of 9.5 years, and with all living patients having been observed for at least 5 years, the median event-free survival (EFS) was 13 months in the observation arm and 24 months in the prolonged exposure arm (P < .001). In the observation arm, 5% of patients were event-free at 8 years, compared with 27% in the prolonged exposure arm. Of previously untreated patients receiving prolonged treatment after responding to rituximab induction, 45% were still event-free at 8 years. The only favorable prognostic factor for EFS in a multivariate Cox regression was the prolonged rituximab schedule (hazard ratio, 0.59; 95% CI, 0.39 to 0.88; P = .009), whereas being chemotherapy naive, presenting with stage lower than IV, and showing a VV phenotype at position 158 of the Fc-gamma RIIIA receptor were not of independent prognostic value. No long-term toxicity potentially due to rituximab was observed. CONCLUSION: An important proportion of patients experienced long-term remission after prolonged exposure to rituximab, particularly if they had had no prior treatment and responded to rituximab induction.
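Event-free survival comparisons of this kind are typically computed with Kaplan-Meier curves and a multivariate Cox model. A minimal sketch using the lifelines package on an invented toy dataset (column names and values are placeholders, not the trial data) might look like:

import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Invented toy dataset: EFS in months, event indicator, and covariates as in the trial.
df = pd.DataFrame({
    "efs_months":  [10, 13, 24, 30, 48, 7, 15, 26, 60, 90],
    "event":       [1, 1, 1, 1, 0, 1, 1, 1, 0, 0],
    "prolonged":   [0, 0, 0, 0, 0, 1, 1, 1, 1, 1],   # randomized arm
    "chemo_naive": [0, 1, 0, 1, 0, 0, 1, 0, 1, 1],
})

# Kaplan-Meier median EFS per arm.
for arm, grp in df.groupby("prolonged"):
    kmf = KaplanMeierFitter().fit(grp["efs_months"], grp["event"], label=f"arm={arm}")
    print(arm, "median EFS (months):", kmf.median_survival_time_)

# Multivariate Cox regression: hazard ratios are exp(coef).
cph = CoxPHFitter().fit(df, duration_col="efs_months", event_col="event")
cph.print_summary()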
Abstract:
Background: In FL, rituximab as a single agent delivered in the standard schedule (four weekly doses) may induce a response rate of 50-70% with an event-free survival (EFS) of 1-3 years according to patients' characteristics. Prolonged rituximab exposure seems to improve EFS, at least in responding patients, and to increase the rate of long-term responders. Here we report long-term results of a clinical trial comparing single-agent rituximab delivered in the standard schedule versus prolonged exposure, with focus on the proportion of long-term responders and their characteristics. Material and Methods: Between 1998 and 2002, chemotherapy-naïve (n = 64) or pre-treated (n = 138) FL patients received rituximab in the standard schedule. Those responding or with stable disease were randomized to no further treatment (observation, n = 78) or 4 additional doses of rituximab given at 2-month intervals (prolonged exposure, n = 73). EFS was calculated from the first dose of the standard schedule until progression, relapse, second tumor or death. Results: At a median follow-up of 9.4 years, and with all living patients having been followed for at least 5 years, the median EFS is 13 months for the observation and 24 months for the prolonged exposure arm (p = 0.0007). In the observation arm 13% had no event at 5 years and only 4% at 8 years, while in the prolonged exposure arm the proportion was 27% at 5 years and remained 21% at 8 years. The only significant prognostic factor for EFS in a multivariate Cox regression was the prolonged rituximab schedule (hazard ratio 0.58, CI 0.39-0.86, p = 0.007), whereas being chemotherapy-naïve, presenting with stage
Abstract:
Keywords: diabetes mellitus; coronary artery disease; myocardial ischemia; prognostic value; single-photon emission computed tomography myocardial perfusion imaging. Summary - Aim: To determine the long-term prognostic value of SPECT myocardial perfusion imaging (MPI) for the occurrence of cardiovascular events in diabetic patients. Methods: SPECT MPI of 210 consecutive Caucasian diabetic patients were analysed using Kaplan-Meier event-free survival curves, and independent predictors were determined by Cox multivariate analyses. Results: Follow-up was complete in 200 (95%) patients, with a median period of 3.0 years (0.8-5.0). The population was composed of 114 (57%) men, aged 65±10 years; 181 (90.5%) had type 2 diabetes mellitus, 50 (25%) had a history of coronary artery disease (CAD) and 98 (49%) presented chest pain prior to MPI. The prevalence of abnormal MPI was 58%. Patients with a normal MPI had neither cardiac death nor myocardial infarction, independently of a history of coronary artery disease or chest pain. Among the independent predictors of cardiac death and myocardial infarction, the strongest was abnormal MPI (p<0.0001), followed by history of CAD (hazard ratio (HR) = 15.9, p=0.0001), diabetic retinopathy (HR=10.0, p=0.001) and inability to exercise (HR=7.7, p=0.02). Patients with normal MPI had a low revascularisation rate of 2.4% during the follow-up period. Compared to normal MPI, cardiovascular events increased 5.2-fold for reversible defects, 8.5-fold for fixed defects and 20.1-fold for the association of both defects. Conclusion: Diabetic patients with normal MPI had an excellent prognosis, independently of a history of CAD. In contrast, an abnormal MPI led to a more than 5-fold increase in cardiovascular events. This emphasizes the value of SPECT MPI in predicting and risk-stratifying cardiovascular events in diabetic patients. Keywords: diabetes; coronary artery disease; myocardial ischemia; prognostic value; single-photon emission myocardial perfusion tomography. Summary - Objectives: To determine the long-term prognostic value of myocardial perfusion SPECT (MPS) in diabetic patients for predicting cardiovascular events (CVE). Methods: Study of 210 consecutive Caucasian diabetic patients referred for MPS. Survival curves were determined by the Kaplan-Meier method and independent predictive factors by Cox multivariate analyses. Results: Follow-up was complete in 200 (95%) patients, with a median duration of 3.0 years (0.8-5.0). The population comprised 114 (57%) men, mean age 65±10 years, with 181 (90.5%) type 2 diabetics, 50 (25%) with a history of coronary artery disease (CAD) and 98 (49%) patients with known angina before MPS. The prevalence of abnormal MPS was 58%. No cardiac death or myocardial infarction occurred in patients with a normal MPS, independently of their CAD history and chest pain. The independent predictive factors for CVE were an abnormal MPS (p<0.0001) and a history of CAD (hazard ratio (HR)=15.9, p=0.0001), followed by diabetic retinopathy (HR=10.0, p=0.001) and the inability to exercise (HR=7.7, p=0.02). Patients with a normal MPS had a revascularisation rate of 2.4%. The presence of mixed defects increased the risk of CVE 20.1-fold, fixed defects 8.5-fold and reversible defects 5.2-fold compared with subjects with a normal MPS.
Conclusion: Diabetic patients, whether or not they have coronary artery disease, with a normal myocardial perfusion SPECT have an excellent prognosis. In contrast, an abnormal MPS is associated with a more than 5-fold increase in the risk of cardiovascular events. This confirms the usefulness of MPS for risk stratification in diabetic patients.
Abstract:
There are various methods to collect adverse events (AEs) in clinical trials. How AEs are collected in vaccine trials is of special interest: solicited reporting can lead to over-reporting of events that have little or no biological relationship to the vaccine. We assessed the rate of AEs listed in the package insert for the virosomal hepatitis A vaccine Epaxal®, comparing data collected by solicited or unsolicited self-reporting. In an open, multi-centre post-marketing study, 2675 healthy travellers received single doses of vaccine administered intramuscularly. AEs were recorded based on solicited and unsolicited questioning during a four-day period after vaccination. A total of 2541 questionnaires could be evaluated (95.0% return rate). Solicited self-reporting resulted in significantly higher (p<0.0001) rates of subjects with AEs than unsolicited reporting, both at baseline (18.9% solicited versus 2.1% unsolicited systemic AEs) and following immunization (29.6% versus 19.3% local AEs; 33.8% versus 18.2% systemic AEs). This could indicate that actual reporting rates of AEs with Epaxal® may be substantially lower than described in the package insert. The distribution of AEs differed significantly between the applied methods of collecting AEs. The most common AEs listed in the package insert were reported almost exclusively with solicited questioning. The reporting of local AEs was more likely than that of systemic AEs to be influenced by subjects' sex, age and study centre. Women reported higher rates of AEs than men. The results highlight the need to detail how vaccine tolerability was reported and assessed.
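Comparisons such as 29.6% versus 19.3% of subjects reporting local AEs are typically tested with a two-proportion test. The sketch below uses statsmodels with hypothetical per-group denominators (the abstract gives only the overall number of evaluable questionnaires), so the counts are placeholders rather than the study's data.

import numpy as np
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical split of the 2541 evaluable questionnaires between the two
# reporting methods; the true per-group denominators are not given in the abstract.
n_solicited, n_unsolicited = 1270, 1271
count = np.array([round(0.296 * n_solicited), round(0.193 * n_unsolicited)])  # local AEs
nobs = np.array([n_solicited, n_unsolicited])

z, p = proportions_ztest(count, nobs)
print(f"z = {z:.2f}, p = {p:.2g}")   # tests solicited vs unsolicited local AE rates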
Abstract:
Neuroimaging studies typically compare experimental conditions using average brain responses, thereby overlooking the stimulus-related information conveyed by distributed spatio-temporal patterns of single-trial responses. Here, we take advantage of this rich information at the single-trial level to decode stimulus-related signals in two event-related potential (ERP) studies. Our method models the statistical distribution of the voltage topographies with a Gaussian Mixture Model (GMM), which reduces the dataset to a number of representative voltage topographies. The degree of presence of these topographies across trials at specific latencies is then used to classify experimental conditions. We tested the algorithm using a cross-validation procedure in two independent EEG datasets. In the first ERP study, we classified left- versus right-hemifield checkerboard stimuli for the upper and lower visual hemifields. In the second ERP study, where functional differences cannot be assumed, we classified initial versus repeated presentations of visual objects. With minimal a priori information, the GMM provides neurophysiologically interpretable features - namely, voltage topographies - as well as dynamic information about brain function. This method can in principle be applied to any ERP dataset to test the functional relevance of specific time periods for stimulus processing, the predictability of subjects' behavior and cognitive states, and the discrimination between healthy and clinical populations.
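The cross-validation procedure can be pictured as repeated 90/10 splits in which the GMM and the classifier are fitted on training trials only and evaluated on the held-out trials. The sketch below, on simulated data with assumed sizes and a generic logistic-regression read-out, illustrates the idea rather than the published implementation.

import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedShuffleSplit

rng = np.random.default_rng(2)
n_trials, n_times, n_el = 200, 80, 64                  # assumed sizes
y = np.tile([0, 1], n_trials // 2)                     # two experimental conditions
X = rng.normal(size=(n_trials, n_times, n_el))
X[y == 1, 30:50, :10] += 0.4                           # weak simulated effect

aucs = []
splitter = StratifiedShuffleSplit(n_splits=10, test_size=0.1, random_state=0)
for train, test in splitter.split(np.arange(n_trials), y):
    gmm = GaussianMixture(n_components=4, covariance_type="diag", random_state=0)
    gmm.fit(X[train].reshape(-1, n_el))                # templates from training data only
    def feats(trials):                                 # mean posterior per map and trial
        p = gmm.predict_proba(trials.reshape(-1, n_el)).reshape(len(trials), n_times, -1)
        return p.mean(axis=1)
    clf = LogisticRegression().fit(feats(X[train]), y[train])
    aucs.append(roc_auc_score(y[test], clf.predict_proba(feats(X[test]))[:, 1]))

print("mean ROC AUC over 10 shuffles:", np.mean(aucs))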
Abstract:
OBJECTIVE: To assess the prevalence of cardiovascular (CV) risk factors in Seychelles, a middle-income African country, and to compare the cost-effectiveness of single-risk-factor management (treating individuals with arterial blood pressure ≥ 140/90 mmHg and/or total serum cholesterol ≥ 6.2 mmol/l) with that of management based on total CV risk (treating individuals with a total CV risk ≥ 10% or ≥ 20%). METHODS: CV risk factor prevalence and a CV risk prediction chart for Africa were used to estimate the 10-year risk of suffering a fatal or non-fatal CV event among individuals aged 40-64 years. These figures were used to compare single-risk-factor management with total risk management in terms of the number of people requiring treatment to avert one CV event and the number of events potentially averted over 10 years. Treatment for patients at high total CV risk (≥ 20%) was assumed to consist of a fixed-dose combination of several drugs (polypill). Cost analyses were limited to medication. FINDINGS: A total CV risk of ≥ 10% and ≥ 20% was found among 10.8% and 5.1% of individuals, respectively. With single-risk-factor management, 60% of adults would need to be treated and 157 cardiovascular events per 100 000 population would be averted per year, as opposed to 5% of adults and 92 events with total CV risk management. Management based on high total CV risk optimizes the balance between the number requiring treatment and the number of CV events averted. CONCLUSION: Total CV risk management is much more cost-effective than single-risk-factor management. These findings are relevant for all countries, but especially for those economically and demographically similar to Seychelles.
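The trade-off reported in the findings can be made concrete with a small calculation of the number of people treated per event averted under each strategy, using only the figures quoted above (the 100 000-person population is simply the normalizing denominator used in the abstract).

# People treated per CV event averted per year, per 100 000 adults,
# using the figures quoted in the abstract.
population = 100_000

strategies = {
    "single-risk-factor": {"treated_fraction": 0.60, "events_averted": 157},
    "total CV risk":      {"treated_fraction": 0.05, "events_averted": 92},
}

for name, s in strategies.items():
    treated = s["treated_fraction"] * population
    per_event = treated / s["events_averted"]
    print(f"{name}: {treated:.0f} treated, {per_event:.0f} treated per event averted per year")

# single-risk-factor: 60 000 treated, ~382 treated per event averted
# total CV risk:       5 000 treated,  ~54 treated per event averted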