30 results for event based

at Université de Lausanne, Switzerland


Relevance:

60.00%

Publisher:

Abstract:

A gradual increase in Earth's surface temperatures marking the transition from the late Paleocene to the early Eocene (55.8±0.2 Ma) represents an extraordinary warming event known as the Paleocene-Eocene Thermal Maximum (PETM). Both marine and continental sedimentary records from this period reveal evidence for the massive injection of isotopically light carbon. Carbon dioxide injection from multiple potential sources may have triggered the global warming. PETM studies are important because the PETM bears some striking resemblances to the human-caused climate change unfolding today. Most notably, the culprit behind it was a massive injection of heat-trapping greenhouse gases into the atmosphere and oceans, comparable in volume to what our persistent burning of fossil fuels could deliver in coming centuries. Precise knowledge of what went on during the PETM could therefore help us anticipate future climate change. Oceanic and continental environments responded differently to the PETM, and many factors may have controlled these responses, such as paleogeography, paleotopography, paleoenvironment, and paleodepth. To better understand the mechanisms triggering PETM events, two different settings were studied: 1) shallow marine to inner shelf environments (Wadi Nukhul, Sinai, and the Dababiya GSSP, Luxor, Egypt), and 2) terrestrial environments, comprising a wetland setting (the lignite mines of northwestern India) and a fluvial setting (Esplugafreda, Spain), both highlighting the climatic changes recorded under continental conditions. In the marine realm, the PETM is characterized by negative δ13Ccar and δ13Corg excursions and a shift in δ15N to ~0‰ just above the P/E boundary, persisting throughout the interval, which suggests a bloom and high production of atmospheric N2-fixers. The decrease in carbonate content could be due to dissolution and/or dilution by increasing detrital input. High Ti, K and Zr contents and decreased Si contents at the P/E boundary indicate a high weathering index (CIA), which coincides with significant kaolinite input and suggests intense chemical weathering under humid conditions at the beginning of the PETM. Two anoxic intervals are observed within the PETM. The lower one may be linked to methane released from the continental shelf, with no change in the redox proxies, whereas the upper anoxic to euxinic interval is revealed by increasing U, Mo, V and Fe and the presence of small pyrite framboids (2-5 μm). Productivity-sensitive elements (Cu, Ni, and Cd) reach their maximum concentrations within the upper anoxic interval, suggesting high productivity in surface waters. The data highlight that intense weathering and the resulting nutrient inputs are crucial parameters in the chain of PETM events, triggering productivity during the recovery phase. In the terrestrial environments, the establishment of wetland conditions and the consequent continental climatic shift towards more humid conditions led to a northward migration of modern mammals following the extension of the tropical belts. Relative ages of this mammal event, based on bio-, chemo- and paleomagnetic stratigraphy, support a migration path originating from Asia into Europe and North America, followed by a later migration from Asia into India, and suggest a barrier to migration that is likely linked to the timing of the India-Asia collision. In contrast, at Esplugafreda, northeastern Spain, the terrestrial environment reacted differently.
Two significant δ13C shifts are recorded, the lower one linked to the PETM and the upper one corresponding to the Early Eocene Thermal Maximum (ETM2); 18O/16O paleothermometry performed on two different types of soil carbonate nodules reveals a temperature increase of around 8°C during the PETM. The prominent increase in kaolinite content within the PETM is linked to increased runoff and/or weathering of adjacent and coeval soils. These results demonstrate that the PETM coincides globally with extreme climatic fluctuations and that terrestrial environments are very likely to record such climatic changes. - The Paleocene-Eocene transition (55.8±0.2 Ma) is marked by an extraordinary warming commonly called the Paleocene-Eocene Thermal Maximum (PETM). Geochemical data characterizing marine and continental sediments of this period indicate that this warming was triggered by a massive increase in CO2 linked to the destabilization of methane hydrates stored along the ocean margins. The study of PETM events therefore constitutes a good analogue for the current warming. The volume of CO2 emitted during the PETM is comparable to the CO2 linked to present-day human activity. Understanding the causes of PETM warming may be crucial for predicting and assessing the consequences of anthropogenic warming, in particular the repercussions of such warming on continental and oceanic domains. Many factors come into play in the case of the PETM, such as paleogeography, paleotopography and paleoenvironments. To better understand the environmental responses to PETM events, two types of environments were chosen: (1) an open but relatively shallow marine domain (Wadi Nukhul, Sinai, and Dababiya, Luxor, Egypt), and (2) a continental setting comprising humid wetlands (lignite mines, India) and a semi-arid fluvial environment (Esplugafreda, Spanish Pyrenees). In the marine domain, the PETM is characterized by negative δ13Ccar and δ13Corg excursions and a persistent shift of δ15N values to ~0‰, indicating strong activity of nitrogen-fixing organisms (bacteria). The decrease in carbonates observed during the PETM may be due to dissolution or to an increase in terrigenous inputs. High Ti, K and Zr contents and a decrease in Si, reflected in the weathering index (CIA) values and coinciding with a significant increase in kaolinite input, imply enhanced chemical weathering owing to more humid conditions at the beginning of the PETM. Two global anoxic events have been identified during the PETM. The first, located in the lower part of the PETM, would be linked to the release of methane hydrates stored along the continental slopes and does not correspond to significant variations in redox-sensitive elements. The second is characterized by an increase in U, Mo, V and Fe and the presence of small pyrite framboids between 2 and 5 μm in size. This second anoxic episode is also characterized by a strong increase in productivity-sensitive elements (Cu, Ni and Co), indicating increased productivity in surface waters. The data obtained highlight the crucial role played by weathering and the resulting nutrient inputs.
These parameters are crucial for the succession of events that led to the PETM, and more particularly for the increase in productivity during the recovery phase. During the PETM, the continental environment is characterized by the establishment of humid conditions that facilitated, or even triggered, the migration of modern mammals following the displacement of these climatic belts. The age of this migration is based on chemostratigraphic (stable isotopes), biostratigraphic and paleomagnetic arguments. Published data, together with those we collected in India, show that modern mammals first migrated from Asia into Europe and then into the North American continent. They arrived in India only later, suggesting that the timing of their migration is linked to the India-Asia collision. In northeastern Spain (Esplugafreda), the response of the continental environment to PETM events is rather different. As in India, two significant δ13C excursions were observed. The first corresponds to the PETM and the second is correlated with the Early Eocene Thermal Maximum (ETM2). Stable oxygen isotopes measured on two different types of carbonate nodules from paleosols suggest a temperature increase of 10°C during the PETM. A simultaneous increase in kaolinite content indicates an intensification of chemical weathering and/or of the erosion of adjacent soils. These results demonstrate that the PETM coincides globally with extreme climatic variations that are readily recognizable in continental deposits.

Relevance:

60.00%

Publisher:

Abstract:

Steep mountain catchments typically experience large sediment pulses from hillslopes, which are stored in headwater channels and remobilized by debris-flows or bedload transport. Event-based sediment budget monitoring in the active Manival debris-flow torrent in the French Alps over a two-year period gave insights into catchment-scale sediment routing during moderate-intensity rainfall events that occur several times each year. The monitoring was based on intensive topographic resurveys of low- and high-order channels using different techniques (cross-section surveys with a total station and high-resolution channel surveys with terrestrial and airborne laser scanning). Data on sediment output volumes from the main channel were obtained with a sediment trap. Two debris-flows were observed, as well as several bedload transport flow events. Sediment budget analysis of the two debris-flows revealed that most of the debris-flow volumes were supplied by channel scouring (more than 92%). Bedload transport during autumn contributed to the sediment recharge of high-order channels by the deposition of large gravel wedges. This process is recognized as being fundamental for debris-flow occurrence during the subsequent spring and summer. A time shift of scour-and-fill sequences was observed between low- and high-order channels, revealing the discontinuous sediment transfer in the catchment during common flow events. A conceptual model of sediment routing for different event magnitudes is proposed.

Relevance:

40.00%

Publisher:

Abstract:

Introduction: Non-invasive brain imaging techniques often contrast experimental conditions across a cohort of participants, obfuscating distinctions in individual performance and brain mechanisms that are better characterised by the inter-trial variability. To overcome such limitations, we developed topographic analysis methods for single-trial EEG data [1]. So far, such analyses have typically been based on time-frequency analysis of single-electrode data or single independent components. The method's efficacy is demonstrated for event-related responses to environmental sounds, hitherto studied at an average event-related potential (ERP) level. Methods: Nine healthy subjects participated in the experiment. Auditory meaningful sounds of common objects were used for a target detection task [2]. In each block, subjects were asked to discriminate target sounds, which were living or man-made auditory objects. Continuous 64-channel EEG was acquired during the task. Two datasets were considered for each subject, comprising single trials of the two conditions, living and man-made. The analysis comprised two steps. In the first step, a mixture of Gaussians analysis [3] provided representative topographies for each subject. In the second step, conditional probabilities for each Gaussian provided statistical inference on the structure of these topographies across trials, time, and experimental conditions. A similar analysis was conducted at the group level. Results: The occurrence of each map is structured in time and consistent across trials, both at the single-subject and at the group level. Conducting separate analyses of ERPs at single-subject and group levels, we could quantify the consistency of the identified topographies and their time course of activation within and across participants as well as experimental conditions. A general agreement was found with previous analyses at the average ERP level. Conclusions: This novel approach to single-trial analysis promises to have impact on several domains. In clinical research, it offers the possibility of statistically evaluating single-subject data, an essential tool for analysing patients with specific deficits and impairments and their deviation from normative standards. In cognitive neuroscience, it provides a novel tool for understanding interdependencies between behaviour and brain activity at both the single-subject and group levels. In basic neurophysiology, it provides a new representation of ERPs and promises to cast light on the mechanisms of their generation and on inter-individual variability.
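
The core of the approach is a Gaussian mixture fit to single-trial voltage topographies, followed by per-trial posterior probabilities of each template map. The following is a minimal sketch of that idea in Python, assuming preprocessed single-trial epochs; the array shapes, number of mixture components, and use of scikit-learn's GaussianMixture are illustrative assumptions, not the authors' implementation.

# Minimal sketch of the topographic single-trial analysis described above:
# fit a mixture of Gaussians to single-trial voltage topographies, then use the
# per-sample posterior probabilities of each template map as features.
# Array shapes and variable names are illustrative assumptions, not the authors' code.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
n_trials, n_times, n_channels = 200, 150, 64                    # assumed epoch dimensions
epochs = rng.standard_normal((n_trials, n_times, n_channels))   # placeholder EEG data

# Normalize each topography by its overall strength so only the spatial pattern matters.
topos = epochs.reshape(-1, n_channels)
topos /= topos.std(axis=1, keepdims=True)

# Step 1: a small number of representative voltage topographies (template maps).
gmm = GaussianMixture(n_components=5, covariance_type="diag", random_state=0).fit(topos)

# Step 2: posterior probability of each template map, per trial and time point.
posteriors = gmm.predict_proba(topos).reshape(n_trials, n_times, -1)

# Averaging across trials gives the time course of "presence" of each map,
# which can then be compared between experimental conditions.
mean_presence = posteriors.mean(axis=0)   # shape: (n_times, n_maps)
print(mean_presence.shape)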

Relevance:

30.00%

Publisher:

Abstract:

AIMS: Estimates of the left ventricular ejection fraction (LVEF) in patients with life-threatening ventricular arrhythmias related to coronary artery disease (CAD) have rarely been reported, even though the LVEF has become the basis for determining a patient's eligibility for a prophylactic defibrillator. We aimed to determine the extent and distribution of reduced LVEF in patients with sustained ventricular tachycardia or ventricular fibrillation. METHODS AND RESULTS: 252 patients admitted for ventricular arrhythmia related to CAD were included: 149 had acute myocardial infarction (MI) (Group I, 59%), 54 had significant chronic obstructive CAD suggestive of an ischaemic arrhythmic trigger (Group II, 21%), and 49 patients had an old MI without residual ischaemia (Group III, 19%). 34% of the patients with scar-related arrhythmias had an LVEF ≥40%. Based on pre-event LVEF evaluation, it can be estimated that less than one quarter of the whole study population had a known chronic MI with severely reduced LVEF. In Group III, the proportion of inferior MI was significantly higher than anterior MI (81 vs. 19%; absolute difference, -62; 95% confidence interval, -45 to -79; P ≤ 0.0001), though median LVEF was higher in inferior MI (0.37 ± 10 vs. 0.29 ± 10; P = 0.0499). CONCLUSION: Patients included in defibrillator trials represent only a minority of the patients at risk of sudden cardiac death. By applying the current risk stratification strategy based on LVEF, more than one third of the patients with old MI would not have qualified for a prophylactic defibrillator. Our study also suggests that inferior scars may be more prone to ventricular arrhythmia than anterior scars.

Relevance:

30.00%

Publisher:

Abstract:

On December 4th, 2007, a 3 Mm³ landslide occurred along the northwestern shore of Chehalis Lake. The initiation zone is located at the intersection of the main valley slope and the northern sidewall of a prominent gully. The slope failure caused a displacement wave that ran up to 38 m on the opposite shore of the lake. The landslide is temporally associated with a rain-on-snow meteorological event, which is thought to have triggered it. This paper describes the Chehalis Lake landslide and presents a comparison of discontinuity orientation datasets obtained using three techniques (field measurements, terrestrial photogrammetric 3D models, and an airborne LiDAR digital elevation model) to describe the orientation and characteristics of the five discontinuity sets present. The discontinuity orientation data are used to perform kinematic, surface wedge limit equilibrium, and three-dimensional distinct element analyses. The kinematic and surface wedge analyses suggest that the location of the slope failure (at the intersection of the valley slope and a gully wall) facilitated the development of the unstable rock mass, which initiated as a planar sliding failure. Results from the three-dimensional distinct element analyses suggest that the presence, orientation and high persistence of a discontinuity set dipping obliquely to the slope were critical to the development of the landslide and led to a failure mechanism dominated by planar sliding. The three-dimensional distinct element modelling also suggests that the presence of a steeply dipping discontinuity set striking perpendicular to the slope and associated with a fault exerted a significant control on the volume and extent of the failed rock mass, but not on the overall stability of the slope.

Relevance:

30.00%

Publisher:

Abstract:

The widespread use of digital imaging devices for surveillance (CCTV) and entertainment (e.g., mobile phones, compact cameras) has increased the number of images recorded and the opportunities to consider images as traces or documentation of criminal activity. The forensic science literature focuses almost exclusively on technical issues and evidence assessment [1]. Earlier steps in the investigation phase have been neglected and must be considered. This article is the first comprehensive description of a methodology for event reconstruction using images. This formal methodology was conceptualised from practical experience and applied to different contexts and case studies to test and refine it. Based on this practical analysis, we propose a systematic approach that includes a preliminary analysis followed by four main steps. These steps form a sequence in which the results of each step rely on the previous one. However, the methodology is not linear; it is a cyclic, iterative progression for obtaining knowledge about an event. The preliminary analysis is a pre-evaluation phase, wherein the potential relevance of images is assessed. In the first step, images are detected and collected as pertinent trace material; the second step involves organising them and assessing their quality and informative potential. The third step includes reconstruction using clues about space, time and actions. Finally, in the fourth step, the images are evaluated and selected as evidence. These steps are described and illustrated using practical examples. The paper outlines how images elicit information about persons, objects, space, time and actions throughout the investigation process to reconstruct an event step by step. We emphasise the hypothetico-deductive reasoning framework, which demonstrates the contribution of images to generating, refining or eliminating propositions or hypotheses. This methodology provides a sound basis for extending the use of images as evidence and, more generally, as clues in investigation and crime reconstruction processes.

Relevance:

30.00%

Publisher:

Abstract:

Methods like event history analysis can show the existence of diffusion and part of its nature, but do not study the process itself. Nowadays, thanks to the increasing performance of computers, processes can be studied using computational modeling. This thesis presents an agent-based model of policy diffusion, mainly inspired by the model developed by Braun and Gilardi (2006). I start by developing a theoretical framework of policy diffusion that presents the main internal drivers of policy diffusion - such as the preference for the policy, the effectiveness of the policy, the institutional constraints, and the ideology - and its main mechanisms, namely learning, competition, emulation, and coercion. Diffusion, expressed through these interdependencies, is thus a complex process that needs to be studied with computational agent-based modeling. In a second step, computational agent-based modeling is defined along with its most significant concepts: complexity and emergence. Using computational agent-based modeling implies the development of an algorithm and its programming; once this has been done, the different agents are left to interact. Consequently, a phenomenon of diffusion, derived from learning, emerges, meaning that the choice made by an agent is conditional on that made by its neighbors. As a result, learning follows an inverted S-curve, which leads to partial convergence - global divergence and local convergence - that triggers the emergence of political clusters, i.e. the creation of regions with the same policy. Furthermore, the average effectiveness in this computational world tends to follow a J-shaped curve, meaning that not only does a policy need time to deploy its effects, but it also takes time for a country to find the best-suited policy. To conclude, diffusion is an emergent phenomenon arising from complex interactions, and the outcomes of my model are in line with theoretical expectations and empirical evidence. Event history analysis methods make it possible to demonstrate the existence of diffusion phenomena and to describe them, but not to study the process itself. Computer simulations, thanks to the increasing performance of computers, make it possible to study such processes as such. This thesis, based on the theoretical model developed by Braun and Gilardi (2006), presents an agent-based simulation of policy diffusion phenomena. The starting point of this work highlights, at the theoretical level, the main internal drivers of change within a country - the preference for a given policy, its effectiveness, institutional constraints, and ideology - and the main diffusion mechanisms, namely learning, competition, emulation and coercion. Diffusion, defined by the interdependence of the different actors, is a complex system whose study is made possible by agent-based simulations. At the methodological level, we also present the main concepts underlying such simulations, in particular complexity and emergence. Moreover, the use of computer simulations implies the development of an algorithm and its programming. Once this is done, the agents can interact, with the result that a diffusion phenomenon, derived from learning, emerges, in which the choice of an agent depends largely on those made by its neighbors.
Furthermore, this phenomenon follows a characteristic S-shaped curve, driving the creation of regions that are politically identical but divergent at the global level. Finally, the average effectiveness in this simulated world follows a J-shaped curve, which means that time is needed not only for a policy to show its effects, but also for a country to introduce the most effective policy. In conclusion, diffusion is an emergent phenomenon resulting from complex interactions, and the outcomes of the process as developed in this model correspond to both theoretical expectations and empirical results.
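
As a concrete illustration of the learning mechanism described above, the sketch below implements a toy agent-based diffusion model on a grid, where an agent's probability of adopting a policy grows with the share of neighbors that have already adopted it. This is an illustrative toy under invented assumptions, not the Braun and Gilardi (2006) specification or the thesis's model; the grid layout, neighborhood, and parameters are made up for the example.

# Minimal agent-based sketch of learning-driven policy diffusion on a grid:
# an agent adopts the policy with a probability that grows with the share of
# neighbors that have already adopted it. This is an illustrative toy model,
# not the Braun and Gilardi (2006) specification; all parameters are assumptions.
import numpy as np

rng = np.random.default_rng(42)
size, steps, beta = 30, 60, 0.4            # grid side, time steps, learning strength
adopted = rng.random((size, size)) < 0.02  # a few initial adopters

adoption_curve = []
for _ in range(steps):
    # Share of the four nearest neighbors (on a torus) that have adopted.
    neigh = sum(np.roll(adopted, shift, axis) for shift, axis in
                [(1, 0), (-1, 0), (1, 1), (-1, 1)]) / 4.0
    # Learning: non-adopters switch with probability proportional to the neighbor share.
    switch = (~adopted) & (rng.random((size, size)) < beta * neigh)
    adopted |= switch
    adoption_curve.append(adopted.mean())

# The cumulative adoption share typically traces the S-shaped curve discussed above,
# with spatial clusters of identical policy emerging on the grid.
print([round(x, 2) for x in adoption_curve[::10]])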

Relevance:

30.00%

Publisher:

Abstract:

PURPOSE: Studies of diffuse large B-cell lymphoma (DLBCL) are typically evaluated by using a time-to-event approach with relapse, re-treatment, and death commonly used as the events. We evaluated the timing and type of events in newly diagnosed DLBCL and compared patient outcome with reference population data. PATIENTS AND METHODS: Patients with newly diagnosed DLBCL treated with immunochemotherapy were prospectively enrolled onto the University of Iowa/Mayo Clinic Specialized Program of Research Excellence Molecular Epidemiology Resource (MER) and the North Central Cancer Treatment Group NCCTG-N0489 clinical trial from 2002 to 2009. Patient outcomes were evaluated at diagnosis and in the subsets of patients achieving event-free status at 12 months (EFS12) and 24 months (EFS24) from diagnosis. Overall survival was compared with age- and sex-matched population data. Results were replicated in an external validation cohort from the Groupe d'Etude des Lymphomes de l'Adulte (GELA) Lymphome Non Hodgkinien 2003 (LNH2003) program and a registry based in Lyon, France. RESULTS: In all, 767 patients with newly diagnosed DLBCL who had a median age of 63 years were enrolled onto the MER and NCCTG studies. At a median follow-up of 60 months (range, 8 to 116 months), 299 patients had an event and 210 patients had died. Patients achieving EFS24 had an overall survival equivalent to that of the age- and sex-matched general population (standardized mortality ratio [SMR], 1.18; P = .25). This result was confirmed in 820 patients from the GELA study and registry in Lyon (SMR, 1.09; P = .71). Simulation studies showed that EFS24 has comparable power to continuous EFS when evaluating clinical trials in DLBCL. CONCLUSION: Patients with DLBCL who achieve EFS24 have a subsequent overall survival equivalent to that of the age- and sex-matched general population. EFS24 will be useful in patient counseling and should be considered as an end point for future studies of newly diagnosed DLBCL.

Relevance:

30.00%

Publisher:

Abstract:

During the Early Toarcian, major paleoenvironmental and paleoceanographical changes occurred, leading to an oceanic anoxic event (OAE) and to a perturbation of the carbon isotope cycle. Although the standard biochronology of the Lower Jurassic is essentially based upon ammonites, in recent years biostratigraphy based on calcareous nannofossils and dinoflagellate cysts has been increasingly used to date Jurassic rocks. However, the precise dating and correlation of the Early Toarcian OAE, and of the associated δ13C anomaly in different settings of the western Tethys, are still partly problematic, and it is still unclear whether these events are synchronous or not. In order to allow more accurate correlations of the organic-rich levels recorded in the Lower Toarcian OAE, this account proposes a new biozonation based on a quantitative biochronology approach, the Unitary Associations (UA), applied to calcareous nannofossils. This study represents the first attempt to apply the UA method to Jurassic nannofossils. The study incorporates eighteen sections distributed across the western Tethys and ranging from the Pliensbachian to the Aalenian, comprising 1220 samples and 72 calcareous nannofossil taxa. The BioGraph [Savary, J., Guex, J., 1999. Discrete biochronological scales and unitary associations: description of the BioGraph computer program. Memoires de Geologie de Lausanne 34, 282 pp] and UA-Graph (Copyright Hammer O., Guex and Savary, 2002) software packages provide a discrete biochronological framework based upon multi-taxa concurrent range zones in the different sections. The optimized dataset generates nine UAs using the co-occurrences of 56 taxa. These UAs are grouped into six Unitary Association Zones (UA-Z), which constitute a robust biostratigraphic synthesis of all the observed or deduced biostratigraphic relationships between the analysed taxa. The UA zonation proposed here is compared to "classic" calcareous nannofossil biozonations, which are commonly used for the southern and northern sides of the Tethys. The biostratigraphic resolution of the UA-Zones varies from one nannofossil subzone, or part of it, to several subzones, and can be related to the pattern of calcareous nannoplankton originations and extinctions during the studied time interval. The Late Pliensbachian - Early Toarcian interval (corresponding to UA-Z II) represents a major step in the Jurassic nannoplankton radiation. The recognized UA-Zones are also compared to the negative carbon isotope excursion and TOC maximum in five sections of central Italy, Germany and England, with the aim of providing a more reliable correlation tool for the Early Toarcian OAE, and the associated isotopic anomaly, between the southern and northern parts of the western Tethys. The results of this work show that the TOC maximum and the δ13C negative excursion correspond to the upper part of UA-Z II (i.e., UA 3) in the sections analysed. This suggests that the Early Toarcian OAE was a synchronous event within the western Tethys. (c) 2006 Elsevier B.V. All rights reserved.
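
The Unitary Association method groups taxa into maximal sets of mutually co-occurring (or demonstrably compatible) species, which are then ordered stratigraphically. The snippet below is only a conceptual sketch of that first idea - extracting maximal cliques from a co-occurrence graph - and is not the BioGraph/UA-Graph algorithm; the taxon names and sample contents are invented for illustration.

# Conceptual sketch of the Unitary Associations idea: build a graph whose nodes are
# taxa and whose edges are observed co-occurrences, then extract maximal cliques
# (maximal sets of mutually compatible taxa). This only illustrates the principle;
# it is not the BioGraph/UA-Graph algorithm, and the toy data below are invented.
import networkx as nx

# Hypothetical samples listing which taxa co-occur in each bed.
samples = [
    {"Taxon_A", "Taxon_B"},
    {"Taxon_B", "Taxon_C", "Taxon_D"},
    {"Taxon_C", "Taxon_D", "Taxon_E"},
]

G = nx.Graph()
for sample in samples:
    for a in sample:
        for b in sample:
            if a < b:
                G.add_edge(a, b)  # co-occurrence edge

# Maximal cliques approximate candidate unitary associations.
for clique in nx.find_cliques(G):
    print(sorted(clique))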

Relevance:

30.00%

Publisher:

Abstract:

There are various methods to collect adverse events (AEs) in clinical trials. How AEs are collected in vaccine trials is of special interest: solicited reporting can lead to over-reporting of events that have little or no biological relationship to the vaccine. We assessed the rate of AEs listed in the package insert for the virosomal hepatitis A vaccine Epaxal(®), comparing data collected by solicited or unsolicited self-reporting. In an open, multi-centre post-marketing study, 2675 healthy travellers received single doses of vaccine administered intramuscularly. AEs were recorded based on solicited and unsolicited questioning during a four-day period after vaccination. A total of 2541 questionnaires could be evaluated (95.0% return rate). Solicited self-reporting resulted in significantly higher (p<0.0001) rates of subjects with AEs than unsolicited reporting, both at baseline (18.9% solicited versus 2.1% unsolicited systemic AEs) and following immunization (29.6% versus 19.3% local AEs; 33.8% versus 18.2% systemic AEs). This could indicate that actual reporting rates of AEs with Epaxal(®) may be substantially lower than described in the package insert. The distribution of AEs differed significantly between the applied methods of collecting AEs. The most common AEs listed in the package insert were reported almost exclusively with solicited questioning. The reporting of local AEs was more likely than that of systemic AEs to be influenced by subjects' sex, age and study centre. Women reported higher rates of AEs than men. The results highlight the need to detail how vaccine tolerability was reported and assessed.

Relevance:

30.00%

Publisher:

Abstract:

Abstract Sexual selection theory posits that ornaments can signal the genetic quality of an individual. Eumelanin-based coloration is such an ornament and can signal the ability to cope with a physiological stress response, because the melanocortin system regulates eumelanogenesis as well as physiological stress responses. In the present article, we experimentally investigated whether the stronger stress sensitivity of light compared with dark eumelanic individuals stems from differential regulation of stress hormones. Our study shows that darker eumelanic barn owl nestlings have a lower corticosterone release after a stressful event, an association that was also inherited from the mother (but not the father) by the offspring. Additionally, nestlings sired by darker eumelanic mothers reduced experimentally elevated corticosterone levels more quickly. This provides an explanation for how ornamented individuals can be more resistant to various sources of stress than drab conspecifics. Our study suggests that eumelanin-based coloration can be a sexually selected signal of resistance to stressful events.

Relevance:

30.00%

Publisher:

Abstract:

Introduction: Responses to external stimuli are typically investigated by averaging peri-stimulus electroencephalography (EEG) epochs in order to derive event-related potentials (ERPs) across the electrode montage, under the assumption that signals related to the external stimulus are fixed in time across trials. We demonstrate the applicability of a single-trial model based on patterns of scalp topographies (De Lucia et al, 2007) that can be used for ERP analysis at the single-subject level. The model is able to classify new trials (or groups of trials) with minimal a priori hypotheses, using information derived from a training dataset. The features used for the classification (the topography of responses and their latency) can be interpreted neurophysiologically, because a difference in scalp topography indicates a different configuration of brain generators. An above-chance classification accuracy on test datasets implicitly demonstrates the suitability of this model for EEG data. Methods: The data analyzed in this study were acquired from two separate visual evoked potential (VEP) experiments. The first entailed passive presentation of checkerboard stimuli to each of the four visual quadrants (hereafter, "Checkerboard Experiment") (Plomp et al, submitted). The second entailed active discrimination of novel versus repeated line drawings of common objects (hereafter, "Priming Experiment") (Murray et al, 2004). Four subjects per experiment were analyzed, using approximately 200 trials per experimental condition. These trials were randomly separated into training (90%) and testing (10%) datasets in 10 independent shuffles. In order to perform the ERP analysis, we estimated the statistical distribution of voltage topographies with a mixture of Gaussians (MofGs), which reduces the original dataset to a small number of representative voltage topographies. We then statistically evaluated the degree of presence of these template maps across trials, and whether and when this differed across experimental conditions. Based on these differences, single trials or sets of a few single trials were classified as belonging to one or the other experimental condition. Classification performance was assessed using the Receiver Operating Characteristic (ROC) curve. Results: For the Checkerboard Experiment, contrasts entailed left versus right visual field presentations for upper and lower quadrants, separately. The average posterior probabilities, indicating the presence of the computed template maps in time and across trials, revealed significant differences starting at ~60-70 ms post-stimulus. The average ROC curve area across all four subjects was 0.80 and 0.85 for upper and lower quadrants, respectively, and was in all cases significantly higher than chance (unpaired t-test, p<0.0001). In the Priming Experiment, we contrasted initial versus repeated presentations of visual object stimuli. Their posterior probabilities revealed significant differences, which started at 250 ms post-stimulus onset. The classification accuracy rates with single-trial test data were at chance level. We therefore considered sub-averages based on five single trials and found that for three out of four subjects, classification rates were significantly above chance level (unpaired t-test, p<0.0001). Conclusions: The main advantage of the present approach is that it is based on topographic features that are readily interpretable along neurophysiological lines.
As these maps were previously normalized by the overall strength of the field potential on the scalp, a change in their presence across trials and between conditions necessarily reflects a change in the underlying generator configurations. The temporal periods of statistical difference between conditions were estimated for each training dataset over ten shuffles of the data. Across the ten shuffles and in both experiments, we observed a high level of consistency in the temporal periods over which the two conditions differed. With this method we are able to analyze ERPs at the single-subject level, providing a novel tool for comparing normal electrophysiological responses with single cases that cannot be considered part of any cohort of subjects. This aspect promises to have a strong impact on both basic and clinical research.
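
To make the classification step concrete, the sketch below computes an ROC curve area from the per-trial degree of presence of one template map within a discriminative time window, in the spirit of the procedure described above. The posterior probabilities are simulated here, and the window, map index and use of scikit-learn's roc_auc_score are illustrative assumptions rather than the authors' pipeline.

# Minimal sketch of the single-trial classification step described above: the degree
# of presence (posterior probability) of a template map within a discriminative time
# window is used as a per-trial score, and performance is summarized by the ROC curve
# area. Shapes, the chosen window and the scoring rule are illustrative assumptions.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n_trials, n_times, n_maps = 100, 150, 5

# Per-trial posterior probabilities of each template map (e.g., from a fitted GMM),
# for two experimental conditions; simulated here with a small difference in map 2.
post_a = rng.dirichlet(np.ones(n_maps), size=(n_trials, n_times))
post_b = rng.dirichlet(np.ones(n_maps) + np.array([0, 0, 1.0, 0, 0]),
                       size=(n_trials, n_times))

window = slice(30, 60)   # assumed time window where conditions were found to differ
score_a = post_a[:, window, 2].mean(axis=1)   # presence of map 2 per trial
score_b = post_b[:, window, 2].mean(axis=1)

labels = np.r_[np.zeros(n_trials), np.ones(n_trials)]
scores = np.r_[score_a, score_b]
print("ROC curve area:", round(roc_auc_score(labels, scores), 2))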

Relevance:

30.00%

Publisher:

Abstract:

Abstract "Sitting between your past and your future doesn't mean you are in the present." (Dakota Skye) Complex systems science is an interdisciplinary field grouping under the same umbrella dynamical phenomena from the social, natural or mathematical sciences. The emergence of a higher-order organization or behavior, transcending that expected of the linear addition of the parts, is a key factor shared by all these systems. Most complex systems can be modeled as networks that represent the interactions amongst the system's components. In addition to the actual nature of the parts' interactions, the intrinsic topological structure of the underlying network is believed to play a crucial role in the remarkable emergent behaviors exhibited by these systems. Moreover, the topology is also a key factor in explaining the extraordinary flexibility and resilience to perturbations observed in transmission and diffusion phenomena. In this work, we study the effect of different network structures on the performance and fault tolerance of systems in two different contexts. In the first part, we study cellular automata, which are a simple paradigm for distributed computation. Cellular automata are made of basic Boolean computational units, the cells, which rely on simple rules and information from the surrounding cells to perform a global task. The limited visibility of the cells can be modeled as a network, where interactions amongst cells are governed by an underlying structure, usually a regular one. In order to increase the performance of cellular automata, we chose to change their topology. We applied computational principles inspired by Darwinian evolution, called evolutionary algorithms, to alter the system's topological structure starting from either a regular or a random one. The outcome is remarkable, as the resulting topologies share properties of both regular and random networks and display similarities to the Watts-Strogatz small-world networks found in social systems. Moreover, the performance and the tolerance to probabilistic faults of our small-world-like cellular automata surpass those of regular ones. In the second part, we use the context of biological genetic regulatory networks and, in particular, Kauffman's random Boolean network model. In some ways, this model is close to cellular automata, although it is not expected to perform any task. Instead, it simulates the time evolution of genetic regulation within living organisms under strict conditions. The original model, though very attractive in its simplicity, suffered from important shortcomings unveiled by recent advances in genetics and biology. We propose to use these new discoveries to improve the original model. Firstly, we have used artificial topologies believed to be closer to those of gene regulatory networks. We have also studied actual biological organisms and used parts of their genetic regulatory networks in our models. Secondly, we have addressed the improbable full synchronicity of the events taking place in Boolean networks and proposed a more biologically plausible cascading update scheme. Finally, we tackled the actual Boolean functions of the model, i.e. the specifics of how genes activate according to the activity of upstream genes, and presented a new update function that takes into account the actual promoting and repressing effects of one gene on another. Our improved models demonstrate the expected, biologically sound behavior of the previous GRN model, yet with superior resistance to perturbations. We believe they are one step closer to the biological reality.
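
For readers unfamiliar with the baseline model the thesis builds on, the sketch below iterates a classical Kauffman-style random Boolean network: N genes, each reading K randomly chosen inputs through a random Boolean truth table, updated synchronously. It is a minimal illustration of the standard model only; the parameters and wiring are arbitrary, and it does not include the topological, cascading or update-function refinements described above.

# Minimal sketch of a Kauffman-style random Boolean network, the baseline model the
# thesis extends: N genes, each with K random inputs and a random Boolean update rule,
# iterated synchronously. The parameters and wiring are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(7)
N, K, steps = 20, 2, 30

inputs = np.array([rng.choice(N, size=K, replace=False) for _ in range(N)])
# One random Boolean function per gene: a lookup table over the 2**K input patterns.
tables = rng.integers(0, 2, size=(N, 2 ** K), dtype=np.int8)

state = rng.integers(0, 2, size=N, dtype=np.int8)
for _ in range(steps):
    # Encode each gene's K input values as an index into its truth table.
    idx = np.zeros(N, dtype=int)
    for k in range(K):
        idx = idx * 2 + state[inputs[:, k]]
    state = tables[np.arange(N), idx]   # synchronous update of all genes

print(state)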

Relevance:

30.00%

Publisher:

Abstract:

Whether different brain networks are involved in generating unimanual responses to a simple visual stimulus presented in the ipsilateral versus contralateral hemifield remains a controversial issue. Visuo-motor routing was investigated with event-related functional magnetic resonance imaging (fMRI) using the Poffenberger reaction time task. A 2 hemifield × 2 response hand design generated the "crossed" and "uncrossed" conditions, describing the spatial relation between these factors. Both conditions, with responses executed by the left or right hand, showed a similar spatial pattern of activated areas, including striate and extrastriate areas bilaterally, SMA, and M1 contralateral to the responding hand. These results demonstrated that visual information is processed bilaterally in striate and extrastriate visual areas, even in the "uncrossed" condition. Additional analyses based on sorting data according to subjects' reaction times revealed differential crossed versus uncrossed activity only for the slowest trials, with response strength in infero-temporal cortices significantly correlating with crossed-uncrossed differences (CUD) in reaction times. Collectively, the data favor a parallel, distributed model of brain activation. The presence of interhemispheric interactions and the consequent bilateral activity are not determined by the crossed anatomic projections of the primary visual and motor pathways. Distinct visuo-motor networks need not be engaged to mediate behavioral responses for the crossed visual field/response hand condition. While anatomical connectivity heavily influences the spatial pattern of activated visuo-motor pathways, behavioral and functional parameters appear to also affect the strength and dynamics of responses within these pathways.

Relevance:

30.00%

Publisher:

Abstract:

Neuroimaging studies typically compare experimental conditions using average brain responses, thereby overlooking the stimulus-related information conveyed by distributed spatio-temporal patterns of single-trial responses. Here, we take advantage of this rich information at the single-trial level to decode stimulus-related signals in two event-related potential (ERP) studies. Our method models the statistical distribution of the voltage topographies with a Gaussian Mixture Model (GMM), which reduces the dataset to a number of representative voltage topographies. The degree of presence of these topographies across trials at specific latencies is then used to classify experimental conditions. We tested the algorithm using a cross-validation procedure in two independent EEG datasets. In the first ERP study, we classified left- versus right-hemifield checkerboard stimuli for upper and lower visual hemifields. In the second ERP study, where functional differences cannot be assumed, we classified initial versus repeated presentations of visual objects. With minimal a priori information, the GMM provides neurophysiologically interpretable features, namely voltage topographies, as well as dynamic information about brain function. This method can in principle be applied to any ERP dataset to test the functional relevance of specific time periods for stimulus processing, the predictability of subjects' behavior and cognitive states, and the discrimination between healthy and clinical populations.