Abstract:
Transportation and warehousing are large and growing sectors of society, and their efficiency is of high importance. Transportation also accounts for a large share of global carbon dioxide emissions, which are one of the leading causes of anthropogenic climate warming. Various countries have agreed to decrease their carbon emissions under the Kyoto Protocol, yet transportation is the only sector where emissions have steadily increased since the 1990s, which highlights the importance of transportation efficiency. The efficiency of transportation and warehousing can be improved with the help of simulations, but models alone are not sufficient. This research concentrates on the use of simulations in decision support systems. Three main simulation approaches are used in logistics: discrete-event simulation, system dynamics, and agent-based modeling. Each individual approach, however, has weaknesses of its own. Hybridization (combining two or more approaches) can improve the quality of the models, as it allows one method to compensate for the weaknesses of another. It is important to choose the correct approach (or combination of approaches) when modeling transportation and warehousing issues. If an inappropriate method is chosen (which can occur if the modeler is proficient in only one approach or the model specification is not conducted thoroughly), the simulation model will have an inaccurate structure, which in turn will lead to misleading results. This issue can escalate further, as the decision-maker may assume that the presented simulation model gives the most useful results available, even though the whole model may be based on a poorly chosen structure. This research argues that simulation-based decision support systems need to take various issues into account in order to function well.
The actual simulation model can be constructed using any one (or several) of these approaches, it can be combined with different optimization modules, and there needs to be a proper interface between the model and the user. These issues are presented in a framework that simulation modelers can use when creating decision support systems. For decision-makers to benefit fully from the simulations, the user interface needs to clearly separate the model from the user, while still letting the user launch the appropriate simulation runs to analyze the problems correctly. This study recommends that simulation modelers start to convert their tacit knowledge into explicit knowledge. This would greatly benefit the whole simulation community and improve the quality of simulation-based decision support systems as well. More studies should also be conducted using hybrid models and by integrating simulations with Geographic Information Systems (GIS).
Abstract:
Combating climate change is one of the key tasks of humanity in the 21st century. One of its leading causes is carbon dioxide emissions from the use of fossil fuels; renewable energy sources should be used instead of relying on oil, gas, and coal. In Finland, a significant amount of energy is produced from wood, and the use of wood chips is expected to increase significantly in the future, by over 60%. The aim of this research is to improve understanding of the costs of wood chip supply chains, using simulation as the main research method. The simulation model combines agent-based modelling and discrete-event simulation to imitate the wood chip supply chain. This thesis concentrates on the use of simulation-based decision support systems in strategic decision-making. The simulation model is part of a decision support system, which connects the model to databases and also provides a graphical user interface for the decision-maker. The main analysis conducted with the decision support system compares a traditional supply chain with a supply chain using specialized containers. According to the analysis, the container supply chain achieves lower costs than the traditional supply chain, and it can also be scaled up more easily thanks to faster emptying operations. Initially, container operations would supply only part of a power plant's fuel needs and would complement the current supply chain. The model can be expanded to include intermodal supply chains because, with increasing future demand, there are not enough wood chips located close to current and future power plants.
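As a minimal illustration of the kind of hybrid model described above, the sketch below combines an agent (a truck object carrying its own state) with a discrete-event scheduler driven by a priority queue. All durations are hypothetical placeholders, not figures from the thesis; the point is only that a faster emptying operation (a container swap versus slow blower emptying) raises delivery throughput over the same horizon.

```python
import heapq

class Truck:
    """Agent: holds its own state; the scheduler advances it through phases."""
    def __init__(self, unload_h):
        self.unload_h = unload_h   # time to empty at the plant (hours)
        self.deliveries = 0

def simulate(unload_h, horizon_h=100.0, load_h=1.0, travel_h=2.0):
    """Discrete-event loop: a priority queue of (time, seq, phase) events."""
    truck = Truck(unload_h)
    events = [(0.0, 0, "load")]    # start loading at the roadside storage
    seq = 1
    while events:
        t, _, phase = heapq.heappop(events)
        if t > horizon_h:
            break
        if phase == "load":
            # after loading, drive to the plant and unload
            heapq.heappush(events, (t + load_h + travel_h, seq, "unload"))
        else:
            # chips delivered; drive back empty and reload
            truck.deliveries += 1
            heapq.heappush(events, (t + truck.unload_h + travel_h, seq, "load"))
        seq += 1
    return truck.deliveries

# hypothetical comparison: slow blower emptying vs. fast container swap
traditional = simulate(unload_h=1.5)
container = simulate(unload_h=0.25)
```

With these illustrative parameters the container chain completes more delivery cycles in the same 100-hour horizon, mirroring the scalability argument in the abstract.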
Abstract:
Despite the fact that the literature on mergers and acquisitions is extensive, relatively little effort has been made to examine the relationship between acquiring firms’ financial slack and short-term post-takeover announcement abnormal stock returns. In this study, the case is made that the financial slack of a firm is not only an outcome of past business and financing activities but may also affect the quality of acquisition decisions. We hypothesize that the level of financial slack in a firm is negatively associated with the abnormal returns following acquisition announcements, because slack reduces managerial discipline over the use of corporate funds and may also give rise to managerial self-serving behavior. Financial slack is measured in terms of three financial statement ratios: the leverage ratio, the cash and equivalents to total assets ratio, and the free cash flow to total assets ratio. The data used in this paper are collected from two main sources: a list comprising 90 European acquisition announcements retrieved from the Thomson One Banker database, and stock price data and financial statement information for the respective firms collected using Datastream. Our empirical analysis is twofold. First, we conduct a two-sample t-test and find that the most slack-rich firms experience lower abnormal returns than the most slack-poor firms in the event window [-1, +1], significant at the 5% risk level. Second, we perform a cross-sectional regression for the sample firms using the three financial statement ratios to explain cumulative abnormal returns (CAR). We find that leverage shows a statistically significant positive relationship with cumulative abnormal returns in the event window [-1, +1] (significant at 5%). Moreover, the cash to total assets ratio shows a weak negative relationship with CAR (significant at 10%) in the same window.
We conclude that our hypothesis of an inverse relationship between slack and abnormal returns receives empirical support. Based on the results of the event study, we find empirical support for the hypothesis that capital markets expect acquisitions undertaken by slack-rich firms to be more likely driven by managerial self-serving behavior and hubris than those undertaken by slack-poor firms, signaling possible agency problems and behavioral biases.
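The event-study mechanics behind these numbers can be sketched in a few lines: estimate market-model coefficients on a pre-event estimation window, compute abnormal returns in the [-1, +1] window, and sum them into a CAR. The returns below are fabricated for illustration only; they are not the study's data.

```python
def ols(x, y):
    """Ordinary least squares for the market model R_i = alpha + beta * R_m."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    beta = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
           sum((a - mx) ** 2 for a in x)
    return my - beta * mx, beta   # (alpha, beta)

def car(stock_est, mkt_est, stock_evt, mkt_evt):
    """Cumulative abnormal return over the event window."""
    alpha, beta = ols(mkt_est, stock_est)
    return sum(s - (alpha + beta * m) for s, m in zip(stock_evt, mkt_evt))

# illustrative data: the estimation window follows R_i = 0.001 + 1.2 * R_m
# exactly, and the event window contains a +3% jump on the announcement day
mkt_est = [0.01, -0.02, 0.005, 0.015, -0.01, 0.02]
stock_est = [0.001 + 1.2 * m for m in mkt_est]
mkt_evt = [0.0, 0.01, -0.01]                  # days -1, 0, +1
stock_evt = [0.001 + 1.2 * m for m in mkt_evt]
stock_evt[1] += 0.03                          # abnormal announcement return
```

Here the CAR recovers the injected 3% abnormal return; the study's tests then compare such CARs across slack-rich and slack-poor subsamples.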
Abstract:
Vision affords us the ability to consciously see and to use this information in our behavior. While research has produced a detailed account of the function of the visual system, the neural processes that underlie conscious vision are still debated. One of the aims of the present thesis was to examine the time-course of the neuroelectrical processes that correlate with conscious vision. The second aim was to study the neural basis of unconscious vision, that is, situations where a stimulus that is not consciously perceived nevertheless influences behavior. According to current prevalent models of conscious vision, the activation of visual cortical areas is not, as such, sufficient for consciousness to emerge, although it might be sufficient for unconscious vision. Conscious vision is assumed to require reciprocal communication between cortical areas, but views differ substantially on the extent of this recurrent communication. Visual consciousness has been proposed to emerge from recurrent neural interactions within the visual system, while other models claim that more widespread cortical activation is needed for consciousness. Studies I-III compared models of conscious vision by studying event-related potentials (ERP). ERPs represent the brain’s average electrical response to stimulation. The results support the model that associates conscious vision with activity localized in the ventral visual cortex. The timing of this activity corresponds to an intermediate stage in visual processing. Earlier stages of visual processing may influence what becomes conscious, although these processes do not directly enable visual consciousness. Late processing stages, when more widespread cortical areas are activated, reflect the access to and manipulation of contents of consciousness. Studies IV and V concentrated on unconscious vision.
By using transcranial magnetic stimulation (TMS) we show that when early visual cortical processing is disturbed so that subjects fail to consciously perceive visual stimuli, they may nevertheless guess (above chance-level) the location where the visual stimuli were presented. However, the results also suggest that in a similar situation, early visual cortex is necessary for both conscious and unconscious perception of chromatic information (i.e. color). Chromatic information that remains unconscious may influence behavioral responses when activity in visual cortex is not disturbed by TMS. Our results support the view that early stimulus-driven (feedforward) activation may be sufficient for unconscious processing. In conclusion, the results of this thesis support the view that conscious vision is enabled by a series of processing stages. The processes that most closely correlate with conscious vision take place in the ventral visual cortex ~200 ms after stimulus presentation, although preceding time-periods and contributions from other cortical areas such as the parietal cortex are also indispensable. Unconscious vision relies on intact early visual activation, although the location of visual stimulus may be unconsciously resolved even when activity in the early visual cortex is interfered with.
Abstract:
Biomedical natural language processing (BioNLP) is a subfield of natural language processing, an area of computational linguistics concerned with developing programs that work with natural language: written texts and speech. Biomedical relation extraction concerns the detection of semantic relations such as protein-protein interactions (PPI) from scientific texts. The aim is to enhance information retrieval by detecting relations between concepts, not just individual concepts as with a keyword search. In recent years, events have been proposed as a more detailed alternative for simple pairwise PPI relations. Events provide a systematic, structural representation for annotating the content of natural language texts. Events are characterized by annotated trigger words, directed and typed arguments and the ability to nest other events. For example, the sentence “Protein A causes protein B to bind protein C” can be annotated with the nested event structure CAUSE(A, BIND(B, C)). Converted to such formal representations, the information of natural language texts can be used by computational applications. Biomedical event annotations were introduced by the BioInfer and GENIA corpora, and event extraction was popularized by the BioNLP'09 Shared Task on Event Extraction. In this thesis we present a method for automated event extraction, implemented as the Turku Event Extraction System (TEES). A unified graph format is defined for representing event annotations and the problem of extracting complex event structures is decomposed into a number of independent classification tasks. These classification tasks are solved using SVM and RLS classifiers, utilizing rich feature representations built from full dependency parsing. Building on earlier work on pairwise relation extraction and using a generalized graph representation, the resulting TEES system is capable of detecting binary relations as well as complex event structures. 
We show that this event extraction system has good performance, reaching the first place in the BioNLP'09 Shared Task on Event Extraction. Subsequently, TEES has achieved several first ranks in the BioNLP'11 and BioNLP'13 Shared Tasks, as well as shown competitive performance in the binary relation Drug-Drug Interaction Extraction 2011 and 2013 shared tasks. The Turku Event Extraction System is published as a freely available open-source project, documenting the research in detail as well as making the method available for practical applications. In particular, in this thesis we describe the application of the event extraction method to PubMed-scale text mining, showing how the developed approach not only shows good performance, but is generalizable and applicable to large-scale real-world text mining projects. Finally, we discuss related literature, summarize the contributions of the work and present some thoughts on future directions for biomedical event extraction. This thesis includes and builds on six original research publications. The first of these introduces the analysis of dependency parses that leads to development of TEES. The entries in the three BioNLP Shared Tasks, as well as in the DDIExtraction 2011 task are covered in four publications, and the sixth one demonstrates the application of the system to PubMed-scale text mining.
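The unified graph idea can be made concrete with a toy structure: each event becomes a trigger node, and each (event, role, argument) triple becomes a directed, typed edge, so nested events flatten into a graph that per-edge classification tasks can operate on. This is a simplified sketch of the representation, not TEES code; the class and role names are illustrative.

```python
from dataclasses import dataclass
from typing import List, Tuple, Union

@dataclass
class Event:
    etype: str                                   # e.g. "CAUSE", "BIND"
    trigger: str                                 # annotated trigger word
    args: List[Tuple[str, Union["Event", str]]]  # (role, nested event or entity)

def to_edges(ev, edges=None):
    """Flatten a nested event into (trigger, role, target) graph edges."""
    if edges is None:
        edges = []
    for role, arg in ev.args:
        target = arg.trigger if isinstance(arg, Event) else arg
        edges.append((ev.trigger, role, target))
        if isinstance(arg, Event):
            to_edges(arg, edges)
    return edges

# "Protein A causes protein B to bind protein C" -> CAUSE(A, BIND(B, C))
bind = Event("BIND", "bind", [("Theme", "B"), ("Theme2", "C")])
cause = Event("CAUSE", "causes", [("Cause", "A"), ("Theme", bind)])
```

Once flattened, trigger detection and edge detection can each be posed as an independent classification problem, in the spirit of the decomposition described above.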
Abstract:
This master’s thesis was done for a small company, Vipetec Oy, which offers specialized technological services mainly to companies in the forest industry. The study was initiated partly because the company wants to expand its customer base to a new industry. There were two interconnected goals. The first was to find out how much and what kind of value current customers have realized from the ATA Process Event Library, one of the products the company offers. The second was to determine the best way to present this value, and its implications for future value potential, to both current and potential customers. ATA helps customers to make grade and product changes, to start up after machine downtime, and to recover from production breaks faster. All three events occur in production lines from time to time, and faster operation results in savings in time and material. In addition to ATA, Vipetec also offers other services related to the development of automation and the optimization of controls. The theoretical part concentrates on the concept of value, how it can be delivered to customers, and the kinds of risk customers face in industrial purchasing. The function of reference marketing towards customers is also discussed. In the empirical part, the value realized by existing customers is evaluated based on both numerical data and interviews, including a brief case study of one customer. After that, value-based reference marketing for a target industry is examined through interviews with potential customers. Finally, answers to the research questions are stated and compared with the theoretical knowledge on the subject. Results show that customers’ machines that use the full ATA service concept are usually able to save more time and material than machines that use only some features of the product. The interviews indicated that sales arguments focusing on improved competitive status are not as effective as the current arguments focusing on numerical improvements.
In the case of potential customers in the new industry, the current sales arguments are likely to work best for those whose irregular production situations are caused mainly by faults. When the actions of Vipetec were compared with ten key elements of creating customer references, it was seen that the company has either already included many of them in its strategy or has good chances of doing so with the help of the results of this study.
Abstract:
In the present study, using noise-free simulated signals, we performed a comparative examination of several preprocessing techniques that are used to transform a cardiac event series into a regularly sampled time series appropriate for spectral analysis of heart rhythm variability (HRV). First, a group of noise-free simulated point event series, representing time series of heartbeats, was generated by an integral pulse frequency modulation (IPFM) model. In order to evaluate the performance of the preprocessing methods, the differences between the spectra of the preprocessed simulated signals and the true spectrum (the spectrum of the model’s input modulating signals) were surveyed by visual analysis and by contrasting merit indices. It is desired that estimated spectra match the true spectrum as closely as possible, showing a minimum of harmonic components and other artifacts. The merit indices proposed to quantify these mismatches were the leakage rate, defined as a measure of leakage components (located outside narrow windows centered at the frequencies of the model’s input modulating signals) relative to the whole spectral content, and the numbers of leakage components with amplitudes greater than 1%, 5% and 10% of the total spectral components. Our data, obtained from a noise-free simulation, indicate that using heart rate values instead of heart period values in the derivation of signals representative of heart rhythm results in more accurate spectra. Furthermore, our data support the efficiency of the widely used preprocessing technique based on the convolution of inverse interval function values with a rectangular window, and suggest the technique based on cubic polynomial interpolation of inverse interval function values and subsequent spectral analysis as another efficient and fast method for the analysis of HRV signals.
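A simplified sketch of this family of techniques is given below: it derives the inverse interval function (instantaneous heart rate, the representation the study found more accurate) from beat times and resamples it on a regular grid, the step that makes conventional spectral analysis applicable. Linear interpolation stands in for the cubic scheme of the text, and the spectral step itself is omitted.

```python
def inverse_interval_series(beat_times, fs=4.0):
    """Instantaneous heart rate 1/IBI attached to each beat, resampled at
    fs Hz by linear interpolation (a stand-in for cubic interpolation)."""
    t_pts = beat_times[1:]                        # rate value sits at each beat
    r_pts = [1.0 / (b - a) for a, b in zip(beat_times, beat_times[1:])]
    out_t, out_r = [], []
    t, i = t_pts[0], 0
    while t <= t_pts[-1]:
        while t_pts[i + 1] < t:                   # advance to bracketing segment
            i += 1
        w = (t - t_pts[i]) / (t_pts[i + 1] - t_pts[i])
        out_t.append(t)
        out_r.append((1 - w) * r_pts[i] + w * r_pts[i + 1])
        t += 1.0 / fs
    return out_t, out_r
```

For a perfectly regular heartbeat the resampled series is constant, as expected; a modulated beat series would instead yield an oscillating rate signal whose spectrum can be compared against the modulating input.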
Abstract:
An interesting fact about language cognition is that stimulation involving incongruence in the merge operation between verb and complement has often been related to a negative event-related potential (ERP) of augmented amplitude and a latency of ca. 400 ms - the N400. Using an automatic ERP latency and amplitude estimator to facilitate the recognition of waves with a low signal-to-noise ratio, the objective of the present study was to examine the N400 statistically in 24 volunteers. Stimulation consisted of 80 experimental sentences (40 congruous and 40 incongruous), generated in Brazilian Portuguese, involving two distinct local verb-argument combinations (nominal object and pronominal object series). For each volunteer, the EEG was simultaneously acquired at 20 derivations, topographically localized according to the 10-20 International System. A computerized routine for automatic N400-peak marking (based on the ascending zero-crossing of the first derivative of the waveform) was applied to the estimated individual ERP waveforms for congruous and incongruous sentences in both series and all topographic derivations. Peak-to-peak N400 amplitude was significantly augmented (P < 0.05; one-sided Wilcoxon signed-rank test) by incongruence in derivations F3, T3, C3, Cz, T5, P3, Pz, and P4 for the nominal object series and in P3, Pz and P4 for the pronominal object series. The results also indicated high inter-individual variability in ERP waveforms, suggesting that the usual procedure of grand averaging might not be a generally adequate approach. Hence, statistical signal processing techniques should be applied in neurolinguistic ERP studies, allowing waveform analysis at low signal-to-noise ratios.
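The peak-marking rule quoted above (the ascending zero-crossing of the waveform's first derivative) amounts to locating a local minimum, which for a negative-going component like the N400 is the peak. A minimal sketch, with a toy waveform standing in for real ERP data:

```python
def mark_negative_peak(erp, lo, hi):
    """Return the sample index of the negative peak in [lo, hi):
    the first discrete derivative crosses zero upward at a local minimum."""
    d = [erp[i + 1] - erp[i] for i in range(len(erp) - 1)]
    for i in range(lo, min(hi, len(d) - 1)):
        if d[i] < 0 <= d[i + 1]:   # falling, then rising -> trough at i + 1
            return i + 1
    return None

# toy negative-going deflection; sample 3 is the trough
erp = [0.0, -1.0, -2.0, -3.0, -2.0, -1.0, 0.0]
```

In practice the search window [lo, hi) would correspond to the expected N400 latency range, and the routine would run on each derivation's estimated waveform.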
Abstract:
In the present review, we describe a systematic study of the sulfated polysaccharides from marine invertebrates, which led to the discovery of a carbohydrate-based mechanism of sperm-egg recognition during sea urchin fertilization. We have described unique polymers present in these organisms, especially sulfated fucose-rich compounds found in the egg jelly coat of sea urchins. The polysaccharides have simple, linear structures consisting of repeating units of oligosaccharides. They differ among the various species of sea urchins in specific patterns of sulfation and/or position of the glycosidic linkage within their repeating units. These polysaccharides show species specificity in inducing the acrosome reaction in sea urchin sperm, providing a clear-cut example of a signal transduction event regulated by sulfated polysaccharides. This distinct carbohydrate-mediated mechanism of sperm-egg recognition coexists with the bindin-protein system. Possibly, the genes involved in the biosynthesis of these sulfated fucans did not evolve in concordance with evolutionary distance but underwent a dramatic change near the tip of the Strongylocentrotid tree. Overall, we established a direct causal link between the molecular structure of a sulfated polysaccharide and a cellular physiological event - the induction of the sperm acrosome reaction in sea urchins. Small structural changes modulate an entire system of sperm-egg recognition and species-specific fertilization in sea urchins. We demonstrated that sulfated polysaccharides - in addition to their known function in cell proliferation, development, coagulation, and viral infection - mediate fertilization, and respond to evolutionary mechanisms that lead to species diversity.
Abstract:
Recent advances in Information and Communication Technology (ICT), especially those related to the Internet of Things (IoT), are facilitating smart regions. Among the many services that a smart region can offer, remote health monitoring is a typical application of the IoT paradigm. It offers the ability to continuously monitor and collect health-related data from a person and to transmit the data to a remote entity (for example, a healthcare service provider) for further processing and knowledge extraction. An IoT-based remote health monitoring system can be beneficial in rural areas of the smart region, where people have limited access to regular healthcare services, as well as in urban areas, where hospitals can be overcrowded and accessing healthcare may take substantial time. However, such a system may generate a large amount of data. In order to realize an efficient IoT-based remote health monitoring system, it is imperative to study the network communication needs of such a system, in particular the bandwidth requirements and the volume of generated data. The thesis studies a commercial product for remote health monitoring in Skellefteå, Sweden. Based on the results obtained with the commercial product, the thesis identifies the key network-related requirements of a typical remote health monitoring system in terms of real-time event updates, bandwidth requirements and data generation. Furthermore, the thesis proposes IReHMo, an IoT-based remote health monitoring architecture that allows users to incorporate several types of IoT devices to extend the sensing capabilities of the system. Using IReHMo, several IoT communication protocols, namely HTTP, MQTT and CoAP, have been evaluated and compared against each other. The results show that CoAP is the most efficient protocol for transmitting small healthcare data payloads to remote servers.
The combination of IReHMo and CoAP significantly reduced the required bandwidth as well as the volume of generated data (by up to 56 percent) compared to the commercial product. Finally, the thesis conducted a scalability analysis to determine the feasibility of deploying the IReHMo-CoAP combination at large scale in regions of northern Sweden.
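A back-of-envelope view of why a compact protocol wins for small payloads: per-message header overhead dominates when each message carries only a few sensor readings. The byte counts below are rough assumptions for illustration (CoAP's fixed header is 4 bytes, but options and UDP/IP, TCP/IP and HTTP headers vary widely); they are not measurements from the thesis.

```python
# assumed per-message protocol + transport overheads in bytes (illustrative)
OVERHEAD = {"HTTP/TCP": 300, "MQTT/TCP": 80, "CoAP/UDP": 50}

def daily_bytes(payload, msgs_per_day, protocol):
    """Total daily upstream volume for one monitoring device."""
    return (payload + OVERHEAD[protocol]) * msgs_per_day

# one 20-byte vital-signs sample every 10 s -> 8640 messages per day
coap = daily_bytes(20, 8640, "CoAP/UDP")
http = daily_bytes(20, 8640, "HTTP/TCP")
saving = 1 - coap / http    # fraction of daily volume saved by switching
```

Under these assumed numbers the savings are large precisely because the payload is small relative to the headers, which is consistent with the direction of the thesis's findings.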
Abstract:
The purpose of this qualitative research is to study the impact of event marketing on brand awareness in the context of the electronic sports (eSports) industry. The theoretical framework is developed on the basis of the research questions. The research analyzes earlier theories and also draws on more recent literature to explain current phenomena in the eSports industry. In the empirical part, a total of five case companies were interviewed. The context of the research, eSports, has its own chapter. The theoretical part of the thesis focuses on event marketing and brand awareness. Event marketing is analyzed from the event organizer’s perspective and, on some occasions, from the event exhibitor’s perspective as well. Regarding brand awareness, the focus is on how to make a brand recognized, recalled, and eventually top of mind in consumers’ minds. The results of this research revealed that many companies struggle to make their brand recognizable. Some of the case companies lack a strategy and do not know exactly what their customers’ core values are, while others are the opposite. One reason for this is that some of them have experience in the field and resources that cover their needs. A strong existing brand also has a clearly positive effect on their business.
Abstract:
During the Upper Cambrian there were three mass extinctions, each of which eliminated at least half of the trilobite families living in North American shelf seas. The Nolichucky Formation preserves the record of one of these extinction events at the base of the Steptoean Stage. Sixty-six trilobite collections were made from five sections in Tennessee and Virginia. The lower Steptoean faunas are assigned to one low-diversity, Aphelaspis-dominated biofacies, which can be recognized in several other parts of North America. In Tennessee, the underlying upper Marjuman strata contain two higher-diversity biofacies, the Coosella-Glaphyraspis Biofacies and the Tricrepicephalus-Norwoodiid Biofacies. At least four different biofacies are present in other parts of North America: the Crepicephalus-Lonchocephalus Biofacies, the Kingstonia Biofacies, the Cedaria Biofacies, and the Uncaspis Biofacies. A new, species-based zonation for the Nolichucky Formation includes five zones, three of which are new. These zones are the Crepicephalus Zone, the Coosella perplexa Zone, the Aphelaspis buttsi Zone, the A. walcotti Zone and the A. tarda Zone. The Nolichucky Formation was deposited within a shallow shelf basin and consists largely of subtidal shales with storm-generated carbonate interbeds. A relative deepening is recorded in the Nolichucky Formation near the extinction, and is indicated in some sections by the appearance of shale-rich, distal storm deposits above a carbonate-rich, more proximal storm deposit sequence. A comparable deepening-upward sequence occurs near the extinction in the Great Basin of the southwestern United States and in central Texas, suggesting a possible eustatic control. In other parts of North America, the extinction is recorded in a variety of environmental settings that range from nearshore to slope. In shelf environments, there is a marked decrease in diversity and a sharp reduction in biofacies differentiation.
Although extinctions do take place in slope environments, there is no net reduction in diversity because of the immigration of several new taxa.
Abstract:
In this paper we propose exact likelihood-based mean-variance efficiency tests of the market portfolio in the context of the Capital Asset Pricing Model (CAPM), allowing for a wide class of error distributions which includes normality as a special case. These tests are developed in the framework of multivariate linear regressions (MLR). It is well known, however, that despite their simple statistical structure, standard asymptotically justified MLR-based tests are unreliable. In financial econometrics, exact tests have been proposed for a few specific hypotheses [Jobson and Korkie (Journal of Financial Economics, 1982), MacKinlay (Journal of Financial Economics, 1987), Gibbons, Ross and Shanken (Econometrica, 1989), Zhou (Journal of Finance, 1993)], most of which depend on normality. For the Gaussian model, our tests correspond to Gibbons, Ross and Shanken’s mean-variance efficiency tests. In non-Gaussian contexts, we reconsider mean-variance efficiency tests allowing for multivariate Student-t and Gaussian mixture errors. Our framework allows us to cast more light on whether the normality assumption is too restrictive when testing the CAPM. We also propose exact multivariate diagnostic checks (including tests for multivariate GARCH and a multivariate generalization of the well-known variance ratio tests) and goodness-of-fit tests, as well as a set estimate for the intervening nuisance parameters. Our results [over five-year subperiods] show the following: (i) multivariate normality is rejected in most subperiods; (ii) residual checks reveal no significant departures from the multivariate i.i.d. assumption; and (iii) mean-variance efficiency of the market portfolio is not rejected as frequently once the possibility of non-normal errors is allowed for.
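For reference, the Gibbons, Ross and Shanken statistic that the Gaussian version of these tests corresponds to can be written (for N test assets, K factors and T observations, with MLR intercepts, residual covariance, and factor sample mean and covariance estimated from the data) as:

```latex
W \;=\; \frac{T - N - K}{N}\,
  \frac{\hat{\alpha}'\,\hat{\Sigma}^{-1}\hat{\alpha}}
       {1 + \bar{\mu}_F'\,\hat{\Omega}_F^{-1}\,\bar{\mu}_F}
  \;\sim\; F_{N,\,T-N-K}
```

under i.i.d. normal errors. This exact F distribution no longer holds under the Student-t and Gaussian-mixture alternatives, which is what motivates the exact non-Gaussian procedures proposed in the paper.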
Abstract:
Atrial fibrillation (AF) is an arrhythmia affecting the atria. In AF, atrial contraction is rapid and irregular, ventricular filling becomes incomplete, and cardiac output is reduced. AF can cause palpitations, fainting, chest pain, or heart failure, and it also increases the risk of stroke. Coronary artery bypass grafting (CABG) is a surgical procedure performed to restore blood flow in cases of severe coronary artery disease. Among patients who have never experienced AF, 10% to 65% develop it after surgery, most often on the second or third postoperative day. AF is particularly frequent after mitral valve surgery, occurring in about 64% of patients. The onset of postoperative AF is associated with increased morbidity and with longer, more expensive hospital stays. The mechanisms responsible for postoperative AF are not well understood. Identifying patients at high risk of AF after CABG would be useful for its prevention. The present project is based on the analysis of cardiac electrograms recorded in patients after coronary artery bypass surgery. The first objective of the research is to investigate whether the recordings display characteristic changes before the onset of AF. The second objective is to identify predictive factors that single out the patients who will develop AF. The recordings were made by Dr Pierre Pagé's team on 137 patients treated by CABG. Three unipolar electrodes were sutured onto the epicardium of the atria to record continuously during the first 4 postoperative days.
The first task was to develop an algorithm to detect and distinguish atrial and ventricular activations on each channel, and to combine the activations of the three channels belonging to the same cardiac event. The algorithm was developed and optimized on a first set of markers, and its performance was evaluated on a second set. Validation software was developed to prepare these two sets and to correct the detections on all the recordings used later in the analyses. It was complemented with tools to form, label, and validate normal sinus beats, premature atrial and ventricular activations (PAA, PVA), and arrhythmia episodes. Preoperative clinical data were then analyzed to establish the preoperative risk of AF. Age, serum creatinine level, and a diagnosis of myocardial infarction proved to be the most important predictors. Although the preoperative risk level can to some extent predict who will develop AF, it was not correlated with the time of onset of postoperative AF. For all patients who had at least one AF episode lasting 10 minutes or more, the two hours preceding the first sustained AF were analyzed. This first sustained AF was always triggered by a PAA, most often originating in the left atrium. However, during the two pre-AF hours, the distribution of PAAs, and of the fraction of them originating in the left atrium, was broad and inhomogeneous across patients. The number of PAAs, the duration of transient arrhythmias, the sinus heart rate, and the low-frequency portion of heart rate variability (LF portion) showed significant changes in the last hour before AF onset.
The last step was to compare patients with and without sustained AF in order to find factors discriminating the two groups. Five types of logistic regression models were compared. They had similar sensitivity, specificity, and receiver operating characteristic curves, and all predicted patients without AF very poorly. A moving-average method was proposed to improve the discrimination, especially for patients without AF. Two models were retained, selected on criteria of robustness, accuracy, and applicability. About 70% of patients without AF and 75% of patients with AF were correctly identified in the last hour before AF. The PAA rate, the fraction of PAAs initiated in the left atrium, the pNN50, the atrioventricular conduction time, and the correlation between the latter and heart rate were the predictor variables common to these two models.
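The moving-average idea can be sketched in a few lines: per-segment logistic-regression probabilities are smoothed with a sliding mean before thresholding, so that an isolated high-probability segment no longer flips a no-AF patient into the AF class. The window length, threshold, and probability trace below are illustrative assumptions, not the study's values.

```python
def smoothed_labels(probs, window=3, threshold=0.5):
    """Sliding-mean smoothing of per-segment probabilities, then thresholding."""
    labels = []
    for i in range(len(probs)):
        lo = max(0, i - window + 1)
        mean = sum(probs[lo:i + 1]) / (i + 1 - lo)
        labels.append(mean >= threshold)
    return labels

# a no-AF patient whose trace contains one spurious high-probability segment
probs = [0.3, 0.2, 0.9, 0.25, 0.3, 0.2]
```

Raw thresholding would misclassify the spike at 0.9, while the smoothed sequence stays below the threshold throughout, which is the behavior sought for the no-AF group.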
Abstract:
Fragile X syndrome (FXS) is the leading hereditary cause of intellectual disability and also the leading monogenic cause of autism. FXS is caused by the expansion of a CGG nucleotide repeat in the FMR1 gene, which prevents expression of the FMRP protein. The absence of FMRP leads to altered structural and functional development of the synapse, preventing activity-induced synapse maturation and synaptic pruning, which are essential for cerebral and cognitive development. We investigated event-related potentials (ERPs) evoked by basic auditory and visual stimuli in twelve adolescents and young adults (aged 10 to 22) with FXS, as well as in control participants matched for chronological and developmental age. The results indicate an altered ERP profile, notably an increased amplitude of the auditory N1 relative to both control groups, as well as increased amplitudes of the auditory P2 and N2 and an increased latency of the auditory N2. In FXS patients, sensory processing appears disturbed rather than merely immature, and the auditory modality appears more affected than the visual one. Combined with findings on brain anatomy, biochemical mechanisms, and behavior, our results suggest hyperexcitability of the nervous system in FXS.