29 results for Vertex Transitive Graph
Abstract:
Specific properties emerge from the structure of large networks, such as that of worldwide air traffic, including a highly hierarchical node structure and multi-level small world sub-groups that strongly influence future dynamics. We have developed clustering methods to understand the form of these structures, to identify structural properties, and to evaluate the effects of these properties. Graph clustering methods are often constructed from different components: a metric, a clustering index, and a modularity measure to assess the quality of a clustering method. To understand the impact of each of these components on the clustering method, we explore and compare different combinations. These different combinations are used to compare multilevel clustering methods to delineate the effects of geographical distance, hubs, network densities, and bridges on worldwide air passenger traffic. The ultimate goal of this methodological research is to demonstrate evidence of combined effects in the development of an air traffic network. In fact, the network can be divided into different levels of "cohesion", which can be qualified and measured by comparative studies (Newman, 2002; Guimera et al., 2005; Sales-Pardo et al., 2007).
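The modularity measure mentioned above has a standard closed form (Newman's Q): the fraction of edges inside communities minus the fraction expected under a degree-preserving null model. A minimal pure-Python sketch on a toy graph (the two-triangle example and the function name are illustrative, not taken from the study):

```python
from collections import defaultdict

def modularity(edges, communities):
    """Newman modularity: fraction of edges inside communities minus the
    fraction expected under a degree-preserving null model."""
    m = len(edges)
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    internal = sum(1 for u, v in edges if communities[u] == communities[v]) / m
    deg_sum = defaultdict(int)
    for node, k in degree.items():
        deg_sum[communities[node]] += k
    expected = sum((d / (2 * m)) ** 2 for d in deg_sum.values())
    return internal - expected

# two triangles joined by a single bridge edge: a natural 2-community split
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
communities = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
print(round(modularity(edges, communities), 4))  # 6/7 - 0.5 = 0.3571
```

A clustering that cuts only the bridge scores well above zero, which is why modularity is a natural quality criterion for comparing clustering methods.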
Abstract:
Network analysis naturally relies on graph theory and, more particularly, on the use of node and edge metrics to identify the salient properties in graphs. When building visual maps of networks, these metrics are turned into useful visual cues or are used interactively to filter out parts of a graph while querying it, for instance. Over the years, analysts from different application domains have designed metrics to serve specific needs. Network science is an inherently cross-disciplinary field, which leads to the publication of metrics with similar goals; different names and descriptions of their analytics often mask the similarity between two metrics that originated in different fields. Here, we study a set of graph metrics and compare their relative values and behaviors in an effort to survey their potential contributions to the spatial analysis of networks.
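As a toy illustration of comparing several node metrics on one graph, here is a pure-Python sketch of three of the most common candidates: degree, closeness, and local clustering (the hub-plus-triangle graph is hypothetical; real comparisons would run many metrics over much larger networks):

```python
from collections import deque

def bfs_dists(adj, s):
    """Hop distances from s via breadth-first search."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def metrics(adj):
    """Degree, closeness, and local clustering for each node of a
    connected undirected graph given as {node: set_of_neighbors}."""
    out = {}
    n = len(adj)
    for u in adj:
        k = len(adj[u])
        d = bfs_dists(adj, u)
        closeness = (n - 1) / sum(d[v] for v in d if v != u)
        nbrs = adj[u]
        links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
        clustering = 2 * links / (k * (k - 1)) if k > 1 else 0.0
        out[u] = (k, closeness, clustering)
    return out

# toy "hub + triangle" graph: node 0 is the hub, nodes 1-2 form a triangle with it
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
for node, (deg, clo, clu) in sorted(metrics(adj).items()):
    print(node, deg, round(clo, 3), round(clu, 3))
```

Even on this tiny graph the metrics rank nodes differently (the hub wins on degree and closeness but not on clustering), which is the kind of divergence the comparative survey described above quantifies.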
Abstract:
Linking the structural connectivity of brain circuits to their cooperative dynamics and emergent functions is a central aim of neuroscience research. Graph theory has recently been applied to study the structure-function relationship of networks, where dynamical similarity between different nodes is turned into a "static" functional connection. However, the capability of the brain to adapt, learn and process external stimuli requires a constant dynamical functional rewiring between circuitries and cell assemblies. Hence, we must capture the changes of network functional connectivity over time. Multi-electrode array data present a unique challenge within this framework. We study the dynamics of gamma oscillations in acute slices of the somatosensory cortex from juvenile mice recorded by planar multi-electrode arrays. Bursts of gamma oscillatory activity lasting a few hundred milliseconds could be initiated only by brief trains of electrical stimulation applied at the deepest cortical layers and simultaneously delivered at multiple locations. Local field potentials, combined with current source density (CSD) analysis, were used to study the spatio-temporal properties and the instantaneous synchronization profile of the gamma oscillatory activity. Pair-wise differences in oscillation phase were used to determine the presence of instantaneous synchronization between the different sites of the circuitry during the oscillatory period. Despite variation in the duration of the oscillatory response over successive trials, the bursts showed a constant average power, suggesting that the rate of energy expenditure during the gamma bursts is consistent across repeated stimulations. Within each gamma burst, the functional connectivity map reflected the columnar organization of the neocortex. Over successive trials, an apparently random rearrangement of the functional connectivity was observed, with a more stable columnar than horizontal organization.
This work reveals new features of evoked gamma oscillations in the developing cortex.
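The pair-wise phase-difference criterion for instantaneous synchronization is commonly summarized by a phase-locking value (PLV). A minimal sketch, assuming the oscillation phases have already been extracted from the local field potentials (for example by a Hilbert transform, which is omitted here; the 40 Hz series and fixed lag are invented for illustration):

```python
import cmath
import math

def plv(phases_a, phases_b):
    """Phase-locking value: |<exp(i*(pa - pb))>| over time samples.
    1.0 means a perfectly consistent phase difference between the two
    sites; values near 0 mean no consistent phase relation."""
    z = sum(cmath.exp(1j * (a - b)) for a, b in zip(phases_a, phases_b))
    return abs(z) / len(phases_a)

# two gamma-band phase series with a constant lag are perfectly locked
t = [0.01 * k for k in range(500)]
phi_a = [2 * math.pi * 40 * x for x in t]   # 40 Hz phase ramp
phi_b = [p - math.pi / 4 for p in phi_a]    # same rhythm, fixed lag
print(round(plv(phi_a, phi_b), 3))          # 1.0
```

Computing this value per electrode pair within each gamma burst yields the kind of functional connectivity map described above.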
Abstract:
Objective: Aspergillus species are the main pathogens causing invasive fungal infections, but the prevalence of other mould species is rising. Resistance to antifungals among these newly emerging pathogens presents a challenge for the management of infections. Conventional susceptibility testing of non-Aspergillus species is laborious and often difficult to interpret. We evaluated a new method for real-time susceptibility testing of moulds based on their growth-related heat production. Methods: Laboratory and clinical strains of Mucor spp. (n = 4), Scedosporium spp. (n = 4) and Fusarium spp. (n = 5) were used. Conventional MIC was determined by microbroth dilution. Isothermal microcalorimetry was performed at 37 °C using Sabouraud dextrose broth (SDB) inoculated with 10^4 spores/ml (determined by microscopic enumeration). SDB without antifungals was used for evaluation of growth characteristics. Detection time was defined as the time at which heat flow exceeded 10 µW. For susceptibility testing, serial dilutions of amphotericin B, voriconazole, posaconazole and caspofungin were used. The minimal heat inhibitory concentration (MHIC) was defined as the lowest antifungal concentration inhibiting 50% of the heat produced by the growth control at 48 h, or at 24 h for Mucor spp. Susceptibility tests were performed in duplicate. Results: The tested mould genera had distinctive heat flow profiles, with a median detection time (range) of 3.4 h (1.9-4.1 h) for Mucor spp., 11.0 h (7.1-13.7 h) for Fusarium spp. and 29.3 h (27.4-33.0 h) for Scedosporium spp. The graph shows heat flow (in duplicate) of one representative strain from each genus (dashed line marks the detection limit). Species belonging to the same genus showed similar heat production profiles. The table shows MHIC and MIC ranges for the tested moulds and antifungals. Conclusions: Microcalorimetry allowed rapid detection of growth of slow-growing species, such as Fusarium spp. and Scedosporium spp.
Moreover, microcalorimetry offers a new approach for antifungal susceptibility testing of moulds that correlates with conventional MIC values. Interpretation of calorimetric susceptibility data is straightforward, and real-time data on the effect of different antifungals on the growth of the moulds are additionally obtained. This method may be used to investigate different mechanisms of action of antifungals, new substances, and drug-drug combinations.
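The MHIC read-out defined above is a simple threshold rule on cumulative heat. A sketch with hypothetical numbers (the concentrations and heat values below are invented for illustration; they are not the study's data):

```python
def mhic(heat_by_conc, growth_control_heat, inhibition=0.5):
    """Minimal heat inhibitory concentration: the lowest antifungal
    concentration whose cumulative heat at the read-out time is at most
    (1 - inhibition) of the drug-free growth control."""
    cutoff = (1 - inhibition) * growth_control_heat
    for conc in sorted(heat_by_conc):
        if heat_by_conc[conc] <= cutoff:
            return conc
    return None  # no tested concentration reached the inhibition level

# hypothetical 48 h cumulative heat (J) per antifungal concentration (mg/L)
heat = {0.03: 9.1, 0.06: 8.5, 0.125: 6.0, 0.25: 2.1, 0.5: 0.4}
print(mhic(heat, growth_control_heat=9.8))  # 0.25
```

Because the heat signal is recorded continuously, the same rule can be re-evaluated at any time point, which is what makes the read-out "real-time" compared with end-point microbroth dilution.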
Abstract:
OBJECT: Cerebrovascular pressure reactivity is the ability of cerebral vessels to respond to changes in transmural pressure. A cerebrovascular pressure reactivity index (PRx) can be determined as the moving correlation coefficient between mean intracranial pressure (ICP) and mean arterial blood pressure. METHODS: The authors analyzed a database consisting of 398 patients with head injuries who underwent continuous monitoring of cerebrovascular pressure reactivity. In 298 patients, the PRx was compared with a transcranial Doppler ultrasonography assessment of cerebrovascular autoregulation (the mean index [Mx]), in 17 patients with the PET-assessed static rate of autoregulation, and in 22 patients with the cerebral metabolic rate for O₂. Patient outcome was assessed 6 months after injury. RESULTS: There was a positive and significant association between the PRx and Mx (R² = 0.36, p < 0.001) and with the static rate of autoregulation (R² = 0.31, p = 0.02). A PRx > 0.35 was associated with a high mortality rate (> 50%). The PRx showed significant deterioration in refractory intracranial hypertension, was correlated with outcome, and was able to differentiate patients with good outcome, moderate disability, severe disability, and death. The graph of PRx compared with cerebral perfusion pressure (CPP) indicated a U-shaped curve, suggesting that both too low and too high a CPP were associated with a disturbance in pressure reactivity. The optimal CPP implied by this curve was confirmed in individual cases, and a greater difference between current and optimal CPP was associated with worse outcome (for patients who, on average, were treated below optimal CPP [R² = 0.53, p < 0.001] and for patients whose mean CPP was above optimal CPP [R² = -0.40, p < 0.05]). Following decompressive craniectomy, pressure reactivity initially worsened (median -0.03 [interquartile range -0.13 to 0.06] to 0.14 [interquartile range 0.12-0.22]; p < 0.01) and improved in the later postoperative course.
After therapeutic hypothermia, in 17 (70.8%) of 24 patients in whom rewarming exceeded the brain temperature threshold of 37°C, ICP remained stable, but the average PRx increased to 0.32 (p < 0.0001), indicating significant derangement in cerebrovascular reactivity. CONCLUSIONS: The PRx is a secondary index derived from changes in ICP and arterial blood pressure and can be used as a surrogate marker of cerebrovascular impairment. In view of an autoregulation-guided CPP therapy, continuous determination of the PRx is feasible, but its value has to be evaluated in a prospective controlled trial.
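Computationally, the PRx is just a moving Pearson correlation between windows of averaged ICP and ABP samples. A sketch (the 30-sample window and the synthetic pressure series are assumptions for illustration; note that a PRx near +1 indicates pressure-passive, i.e. impaired, reactivity):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def prx(icp, abp, window=30):
    """PRx as the moving Pearson correlation between windows of mean
    ICP and mean ABP samples (a 30-sample window of time-averaged
    values is a common choice; treat it as an assumption here)."""
    return [pearson(icp[i:i + window], abp[i:i + window])
            for i in range(len(icp) - window + 1)]

# pressure-passive vessels: ICP simply follows ABP, so PRx is near +1
abp = [80 + (i % 7) for i in range(40)]     # fluctuating mean ABP samples
icp = [15 + 0.2 * a for a in abp]           # ICP tracks ABP passively
print(round(prx(icp, abp)[0], 3))           # 1.0
```

With intact autoregulation, ICP would move against (or independently of) slow ABP waves and the correlation would be near zero or negative, which is the behavior the U-shaped PRx-CPP curve exploits.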
Abstract:
Gene-on-gene regulations are key components of every living organism. Dynamical abstract models of genetic regulatory networks help explain the genome's evolvability and robustness. These properties can be attributed to the structural topology of the graph formed by genes, as vertices, and regulatory interactions, as edges. Moreover, the actual interactions of each gene are believed to play a key role in the stability of the structure. With advances in biology, efforts have been made to develop update functions in Boolean models that incorporate recent knowledge. We combine real-life gene interaction networks with novel update functions in a Boolean model. We use two sub-networks of biological organisms, the yeast cell cycle and the mouse embryonic stem cell, as topological support for our system. On these structures, we substitute the original random update functions with a novel threshold-based dynamic function in which the promoting and repressing effect of each interaction is considered. We use a third real-life regulatory network, along with its inferred Boolean update functions, to validate the proposed update function. Results of this validation hint at the increased biological plausibility of the threshold-based function. To investigate the dynamical behavior of this new model, we visualized the phase transition between order and chaos into the critical regime using Derrida plots. We complement the qualitative nature of Derrida plots with an alternative measure, the criticality distance, which also allows regimes to be discriminated quantitatively. Simulations on both real-life genetic regulatory networks show that there exists a set of parameters that allows the systems to operate in the critical region. This new model includes experimentally derived biological information and recent discoveries, which makes it potentially useful to guide experimental research.
The update function confers additional realism to the model, while reducing the complexity and solution space, thus making it easier to investigate.
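A threshold-based Boolean update of the kind described above can be sketched as follows. The ±1 interaction signs encode promotion and repression; the tie-breaking convention used here (keep the previous state when influences cancel) is one common choice and not necessarily the one adopted in the study:

```python
def threshold_update(state, inputs):
    """Synchronous threshold update for a signed Boolean network.
    state: {gene: 0 or 1}; inputs: {gene: [(regulator, +1 or -1), ...]}.
    A gene turns on if the summed signed influence of its active
    regulators is positive, off if negative, and keeps its previous
    value on a tie (one common convention, assumed here)."""
    new = {}
    for gene, current in state.items():
        influence = sum(sign * state[src] for src, sign in inputs.get(gene, []))
        new[gene] = 1 if influence > 0 else 0 if influence < 0 else current
    return new

# toy 3-gene circuit: A activates B, B activates C, C represses A
inputs = {"B": [("A", +1)], "C": [("B", +1)], "A": [("C", -1)]}
state = {"A": 1, "B": 0, "C": 0}
for _ in range(3):
    state = threshold_update(state, inputs)
print(state)  # {'A': 0, 'B': 1, 'C': 1}
```

Running the update from many random initial states and comparing trajectory divergence is exactly what a Derrida plot summarizes: ordered regimes contract perturbations, chaotic regimes amplify them.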
Abstract:
During the Early Toarcian, major paleoenvironmental and paleoceanographical changes occurred, leading to an oceanic anoxic event (OAE) and to a perturbation of the carbon isotope cycle. Although the standard biochronology of the Lower Jurassic is essentially based upon ammonites, in recent years biostratigraphy based on calcareous nannofossils and dinoflagellate cysts has been increasingly used to date Jurassic rocks. However, the precise dating and correlation of the Early Toarcian OAE, and of the associated δ13C anomaly in different settings of the western Tethys, are still partly problematic, and it is still unclear whether these events are synchronous or not. In order to allow more accurate correlations of the organic-rich levels recorded in the Lower Toarcian OAE, this account proposes a new biozonation based on a quantitative biochronology approach, the Unitary Associations (UA), applied to calcareous nannofossils. This study represents the first attempt to apply the UA method to Jurassic nannofossils. The study incorporates eighteen sections distributed across the western Tethys and ranging from the Pliensbachian to the Aalenian, comprising 1220 samples and 72 calcareous nannofossil taxa. The BioGraph [Savary, J., Guex, J., 1999. Discrete biochronological scales and unitary associations: description of the BioGraph computer program. Memoires de Geologie de Lausanne 34, 282 pp.] and UA-Graph (Copyright Hammer O., Guex and Savary, 2002) software packages provide a discrete biochronological framework based upon multi-taxa concurrent range zones in the different sections. The optimized dataset generates nine UAs using the co-occurrences of 56 taxa. These UAs are grouped into six Unitary Association Zones (UA-Z), which constitute a robust biostratigraphic synthesis of all the observed or deduced biostratigraphic relationships between the analysed taxa.
The UA zonation proposed here is compared to "classic" calcareous nannofossil biozonations, which are commonly used for the southern and northern sides of Tethys. The biostratigraphic resolution of the UA-Zones varies from one nannofossil subzone, or part of it, to several subzones, and can be related to the pattern of calcareous nannoplankton originations and extinctions during the studied time interval. The Late Pliensbachian - Early Toarcian interval (corresponding to UA-Z II) represents a major step in the Jurassic nannoplankton radiation. The recognized UA-Zones are also compared to the negative carbon isotope excursion and the TOC maximum in five sections of central Italy, Germany and England, with the aim of providing a more reliable tool for correlating the Early Toarcian OAE, and the associated isotopic anomaly, between the southern and northern parts of the western Tethys. The results of this work show that the TOC maximum and the δ13C negative excursion correspond to the upper part of UA-Z II (i.e., UA 3) in the sections analysed. This suggests that the Early Toarcian OAE was a synchronous event within the western Tethys.
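At its core, the Unitary Association method builds on taxon co-occurrence: taxa found together define a graph whose maximal cliques are candidate associations. A heavily simplified sketch of that core idea only, with invented taxa (the real UA method of Guex additionally resolves contradictory co-occurrences and orders the associations stratigraphically):

```python
from itertools import chain, combinations

def cooccurrence_cliques(samples):
    """Taxa co-occurring in at least one sample define edges; maximal
    cliques of that graph are candidate associations. Brute force,
    fine for a toy taxon set only."""
    taxa, edges = set(), set()
    for found in samples:
        taxa |= found
        edges |= {frozenset(p) for p in combinations(found, 2)}

    def is_clique(c):
        return all(frozenset(p) in edges for p in combinations(c, 2))

    cands = [set(c) for c in chain.from_iterable(
                 combinations(sorted(taxa), r) for r in range(2, len(taxa) + 1))
             if is_clique(c)]
    # keep only cliques not strictly contained in a larger clique
    return sorted(sorted(c) for c in cands if not any(c < d for d in cands))

# three hypothetical samples: A, B, C co-occur; C and D co-occur separately
samples = [{"A", "B", "C"}, {"C", "D"}, {"A", "B"}]
print(cooccurrence_cliques(samples))  # [['A', 'B', 'C'], ['C', 'D']]
```

In the real workflow these associations are then merged and ordered into the discrete biochronological scale that BioGraph and UA-Graph compute.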
Abstract:
We consider electroencephalograms (EEGs) of healthy individuals and compare the properties of the brain functional networks obtained through two methods: unpartialized and partialized cross-correlations. The networks obtained by partial correlations are fundamentally different from those constructed through unpartialized correlations in terms of graph metrics. In particular, they have completely different connection efficiency, clustering coefficient, assortativity, degree variability, and synchronization properties. Unpartialized correlations are simple to compute and can easily be applied to large-scale systems, yet they cannot prevent the prediction of indirect edges. In contrast, partial correlations, which are often expensive to compute, reduce the prediction of such edges. We suggest combining these alternative methods in order to obtain complementary information on brain functional networks.
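The difference between the two approaches is easy to demonstrate with a first-order partial correlation: a common driver induces a strong raw correlation between two signals that largely vanishes once the driver is partialled out, removing the indirect edge. A sketch with synthetic data (the signal model is invented for illustration):

```python
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def partial_corr(x, y, z):
    """First-order partial correlation of x and y controlling for z:
    r_xy.z = (r_xy - r_xz*r_yz) / sqrt((1 - r_xz^2)(1 - r_yz^2))."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / ((1 - rxz ** 2) * (1 - ryz ** 2)) ** 0.5

random.seed(1)
z = [random.gauss(0, 1) for _ in range(2000)]   # common driver signal
x = [v + 0.3 * random.gauss(0, 1) for v in z]   # driven channel + noise
y = [v + 0.3 * random.gauss(0, 1) for v in z]   # driven channel + noise
print(round(pearson(x, y), 2), round(partial_corr(x, y, z), 2))
```

The raw correlation between x and y is high although they only share a driver; the partial correlation is near zero, which is exactly why the two construction methods yield structurally different functional networks.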
Abstract:
The scenario considered here is one where brain connectivity is represented as a network and an experimenter wishes to assess the evidence for an experimental effect at each of the typically thousands of connections comprising the network. To do this, a univariate model is independently fitted to each connection. It would be unwise to declare significance based on an uncorrected threshold of α=0.05, since the expected number of false positives for a network comprising N=90 nodes and N(N-1)/2=4005 connections would be 200. Control of Type I errors over all connections is therefore necessary. The network-based statistic (NBS) and spatial pairwise clustering (SPC) are two distinct methods that have been used to control family-wise errors when assessing the evidence for an experimental effect with mass univariate testing. The basic principle of the NBS and SPC is the same as supra-threshold voxel clustering. Unlike voxel clustering, where the definition of a voxel cluster is unambiguous, 'clusters' formed among supra-threshold connections can be defined in different ways. The NBS defines clusters using the graph theoretical concept of connected components. SPC on the other hand uses a more stringent pairwise clustering concept. The purpose of this article is to compare the pros and cons of the NBS and SPC, provide some guidelines on their practical use and demonstrate their utility using a case study involving neuroimaging data.
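The NBS clustering step (keep supra-threshold connections, take connected components of the resulting graph, score each component by its size) can be sketched as follows. The permutation loop that turns component sizes into family-wise-error-corrected p-values is omitted, and the edge-wise t-statistics are hypothetical:

```python
def components(nodes, edges):
    """Connected components of an undirected graph via union-find."""
    parent = {v: v for v in nodes}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    for u, v in edges:
        parent[find(u)] = find(v)
    comps = {}
    for v in nodes:
        comps.setdefault(find(v), set()).add(v)
    return list(comps.values())

def nbs_component_sizes(tstats, threshold):
    """NBS clustering step: keep supra-threshold connections, take the
    connected components they form, and score each component by its
    number of edges (the statistic whose null distribution is obtained
    by permutation in the full procedure)."""
    supra = [e for e, t in tstats.items() if t > threshold]
    nodes = {v for e in supra for v in e}
    return [sum(1 for u, v in supra if u in comp and v in comp)
            for comp in components(nodes, supra)]

# hypothetical edge-wise t-statistics on a 5-node network
tstats = {(0, 1): 3.2, (1, 2): 3.5, (0, 2): 2.9, (3, 4): 3.1, (2, 3): 1.0}
print(sorted(nbs_component_sizes(tstats, threshold=2.5)))  # [1, 3]
```

SPC would apply its stricter pairwise criterion at the same stage, so fewer and tighter 'clusters' survive than with plain connected components.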
Abstract:
We report on a boy referred at 25 months following a dramatic isolated language regression antedating autistic-like symptomatology. His sleep electroencephalogram (EEG) showed persistent focal epileptiform activity over the left parietal and vertex areas, never associated with clinical seizures. He was started on adrenocorticotropic hormone (ACTH), with a significant improvement in language, behavior, and in EEG discharges in rapid eye movement (REM) sleep. The later course was characterized by fluctuations/regressions in language and behavioral abilities, in phase with recrudescence of the EEG abnormalities, prompting additional ACTH courses that led to a remarkable decrease in EEG abnormalities, improvement in language and, to a lesser degree, in autistic behavior. The timely documented regression episodes suggesting an "atypical" autistic regression, the striking therapy-induced improvement, and the fluctuation of symptomatology over time could all be ascribed to the recurrent and persistent EEG abnormalities.
Abstract:
The object of game theory lies in the analysis of situations where different social actors have conflicting requirements and where their individual decisions will all influence the global outcome. In this framework, several games have been invented to capture the essence of various dilemmas encountered in many common important socio-economic situations. Even though these games often succeed in helping us understand human or animal behavior in interactive settings, some experiments have shown that people tend to cooperate with each other in situations for which classical game theory strongly recommends them to do the exact opposite. Several mechanisms have been invoked to try to explain the emergence of this unexpected cooperative attitude. Among them, repeated interaction, reputation, and belonging to a recognizable group have often been mentioned. However, the work of Nowak and May (1992) showed that the simple fact of arranging the players according to a spatial structure and only allowing them to interact with their immediate neighbors is sufficient to sustain a certain amount of cooperation even when the game is played anonymously and without repetition. Nowak and May's study and much of the following work was based on regular structures such as two-dimensional grids. Axelrod et al. (2002) showed that by randomizing the choice of neighbors, i.e. by actually giving up a strictly local geographical structure, cooperation can still emerge, provided that the interaction patterns remain stable in time. This is a first step towards a social network structure. However, following pioneering work by sociologists in the sixties such as that of Milgram (1967), in the last few years it has become apparent that many social and biological interaction networks, and even some technological networks, have particular, and partly unexpected, properties that set them apart from regular or random graphs.
Among other things, they usually display broad degree distributions, and show small-world topological structure. Roughly speaking, a small-world graph is a network where any individual is relatively close, in terms of social ties, to any other individual, a property also found in random graphs but not in regular lattices. However, in contrast with random graphs, small-world networks also have a certain amount of local structure, as measured, for instance, by a quantity called the clustering coefficient. In the same vein, many real conflicting situations in economy and sociology are well described neither by a fixed geographical position of the individuals in a regular lattice nor by a random graph. Furthermore, it is a known fact that network structure can strongly influence dynamical phenomena such as the way diseases spread across a population and ideas or information get transmitted. Therefore, in the last decade, research attention has naturally shifted from random and regular graphs towards better models of social interaction structures. The primary goal of this work is to discover whether or not the underlying graph structure of real social networks could give explanations as to why one finds higher levels of cooperation in populations of human beings or animals than what is prescribed by classical game theory. To meet this objective, I start by thoroughly studying a real scientific coauthorship network and showing how it differs from biological or technological networks using diverse statistical measurements. Furthermore, I extract and describe its community structure taking into account the intensity of a collaboration. Finally, I investigate the temporal evolution of the network, from its inception to its state at the time of the study in 2006, suggesting also an effective view of it as opposed to a historical one.
Thereafter, I combine evolutionary game theory with several network models, along with the studied coauthorship network, in order to highlight which specific network properties foster cooperation and to shed some light on the various mechanisms responsible for the maintenance of this cooperation. I point out the fact that, to resist defection, cooperators take advantage, whenever possible, of the degree heterogeneity of social networks and of their underlying community structure. Finally, I show that cooperation level and stability depend not only on the game played, but also on the evolutionary dynamic rules used and on the individual payoff calculations.
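Nowak and May's lattice result discussed in this abstract is easy to reproduce in miniature: with a weak prisoner's dilemma (R=1, T=b, S=P=0) and imitate-the-best-neighbor updating, a cooperator cluster can expand rather than be invaded. A sketch (the von Neumann neighborhood, b=1.3, and the 5x5 grid are choices for illustration; the original work used larger grids and other neighborhoods):

```python
from itertools import product

def step(grid, b):
    """One synchronous round of a Nowak-May-style spatial game:
    weak prisoner's dilemma payoffs (R=1, T=b, S=P=0), von Neumann
    neighborhood, each cell then imitates the highest-scoring player
    among itself and its neighbors."""
    n = len(grid)

    def nbrs(i, j):
        return [(i + di, j + dj) for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= i + di < n and 0 <= j + dj < n]

    pay = [[0.0] * n for _ in range(n)]
    for i, j in product(range(n), range(n)):
        for x, y in nbrs(i, j):
            if grid[x][y] == 'C':                      # only C neighbors pay out
                pay[i][j] += 1.0 if grid[i][j] == 'C' else b
    new = [row[:] for row in grid]
    for i, j in product(range(n), range(n)):
        bi, bj = max([(i, j)] + nbrs(i, j), key=lambda p: pay[p[0]][p[1]])
        new[i][j] = grid[bi][bj]
    return new

# a 3x3 cooperator cluster in a sea of defectors expands for b = 1.3
grid = [['D'] * 5 for _ in range(5)]
for i in range(1, 4):
    for j in range(1, 4):
        grid[i][j] = 'C'
grid = step(grid, b=1.3)
print(sum(row.count('C') for row in grid))
```

Interior cooperators earn more from their cooperating neighbors than boundary defectors earn by exploiting them, so defectors adjacent to the cluster imitate a cooperator; replacing the lattice with a heterogeneous or community-structured graph is precisely the manipulation whose effect on cooperation the thesis studies.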
Abstract:
Data mining can be defined as the extraction of previously unknown and potentially useful information from large datasets. The main principle is to devise computer programs that run through databases and automatically seek deterministic patterns. It is applied in different fields, e.g., remote sensing, biometry and speech recognition, but has seldom been applied to forensic case data. The intrinsic difficulty related to the use of such data lies in its heterogeneity, which comes from the many different sources of information. The aim of this study is to highlight potential uses of pattern recognition that would provide relevant results from a criminal intelligence point of view. The role of data mining within a global crime analysis methodology is to detect all types of structures in a dataset. Once filtered and interpreted, those structures can point to previously unseen criminal activities. The interpretation of patterns for intelligence purposes is the final stage of the process. It allows the researcher to validate the whole methodology and to refine each step if necessary. An application to cutting agents found in illicit drug seizures was performed. A combinatorial approach was taken, using the presence and absence of products. Methods from graph theory were used to extract patterns in data constituted by links between products and the place and date of seizure. A data mining process carried out using graphing techniques is called "graph mining". Patterns were detected that had to be interpreted and compared with preliminary knowledge to establish their relevance. The illicit drug profiling process is actually an intelligence process that uses preliminary illicit drug classes to classify new samples. The methods proposed in this study could be used a priori to compare structures from preliminary and post-detection patterns.
This new knowledge of a repeated structure may provide valuable complementary information to profiling and become a source of intelligence.
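One elementary graph-mining step in this spirit is to link seizures that share a cutting agent and then inspect the structures that emerge. A sketch with invented seizure data (the seizure identifiers and agents below are hypothetical, not case data):

```python
from collections import defaultdict
from itertools import combinations

def link_graph(seizures):
    """Link two seizures whenever they share at least one cutting agent;
    connected groups in the resulting graph are candidate patterns to be
    interpreted against prior intelligence."""
    by_agent = defaultdict(list)
    for sid, agents in seizures.items():
        for agent in agents:
            by_agent[agent].append(sid)
    edges = set()
    for sids in by_agent.values():
        edges |= set(combinations(sorted(sids), 2))
    return sorted(edges)

# hypothetical seizures with their detected cutting agents
seizures = {
    "S1": {"caffeine", "paracetamol"},
    "S2": {"caffeine"},
    "S3": {"lidocaine"},
    "S4": {"lidocaine", "paracetamol"},
}
print(link_graph(seizures))  # [('S1', 'S2'), ('S1', 'S4'), ('S3', 'S4')]
```

In a real application the links would also carry place and date attributes, and only recurring structures that survive comparison with preliminary knowledge would be retained as intelligence.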
Abstract:
This paper presents 3-D brain tissue classification schemes using three recent promising energy minimization methods for Markov random fields: graph cuts, loopy belief propagation and tree-reweighted message passing. The classification is performed using the well known finite Gaussian mixture Markov random field model. Results from the above methods are compared with the widely used iterated conditional modes algorithm. The evaluation is performed on a dataset containing simulated T1-weighted MR brain volumes with varying noise and intensity non-uniformities. The comparisons are performed in terms of energies as well as based on ground truth segmentations, using various quantitative metrics.
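Of the compared methods, iterated conditional modes is the simplest to sketch: each site greedily takes the label minimizing a Gaussian data term plus a Potts smoothness penalty, the same energy that graph cuts, loopy belief propagation and tree-reweighted message passing minimize globally or approximately. A 1-D toy version (all signal values and parameters are illustrative; the paper works on 3-D volumes):

```python
def icm(signal, means, sigma, beta, sweeps=5):
    """Iterated conditional modes for a 1-D Gaussian-mixture MRF: each
    site greedily takes the label minimizing a Gaussian data term plus
    beta times the number of disagreeing neighbors (Potts prior)."""
    labels = [min(range(len(means)), key=lambda k: abs(x - means[k]))
              for x in signal]                       # init: nearest mean
    for _ in range(sweeps):
        for i, x in enumerate(signal):
            def energy(k):
                data = (x - means[k]) ** 2 / (2 * sigma ** 2)
                nb = [labels[j] for j in (i - 1, i + 1) if 0 <= j < len(signal)]
                return data + beta * sum(1 for lab in nb if lab != k)
            labels[i] = min(range(len(means)), key=energy)
    return labels

# noisy two-class signal: the smoothness prior flips the isolated sample
signal = [0.1, -0.2, 0.9, 0.2, 1.1, 0.8, 1.2, 0.9]
print(icm(signal, means=[0.0, 1.0], sigma=0.4, beta=2.0))
```

ICM only reaches a local minimum of this energy, which is exactly why the paper benchmarks it against graph cuts, LBP and TRW, methods designed to find lower-energy (often near-global) labelings of the same model.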
Abstract:
It has been proved, for several classes of continuous and discrete dynamical systems, that the presence of a positive (resp. negative) circuit in the interaction graph of a system is a necessary condition for the presence of multiple stable states (resp. a cyclic attractor). A positive (resp. negative) circuit is said to be functional when it "generates" several stable states (resp. a cyclic attractor). However, there is no definite mathematical framework capturing the underlying meaning of "generates." Focusing on Boolean networks, we recall and propose some definitions concerning the notion of functionality, along with associated mathematical results.
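A circuit's sign is the product of its edge signs, so the positive and negative circuits referred to above can be enumerated directly from the interaction graph. A brute-force sketch for small signed digraphs (the toy circuits are illustrative; note that mutual repression yields a positive circuit, the classic toggle-switch source of bistability):

```python
def circuits(edges):
    """Enumerate elementary circuits of a signed interaction graph and
    classify each by the product of its edge signs (+1: positive
    circuit, a candidate source of multiple stable states; -1: negative
    circuit, a candidate source of a cyclic attractor). Brute-force
    DFS, fine for small regulatory graphs."""
    adj = {}
    for u, v, s in edges:
        adj.setdefault(u, []).append((v, s))
    found = []

    def dfs(start, node, path, sign):
        for nxt, s in adj.get(node, []):
            if nxt == start:
                found.append((path[:], sign * s))
            elif nxt not in path and nxt > start:  # canonical: start is min node
                path.append(nxt)
                dfs(start, nxt, path, sign * s)
                path.pop()

    for v in sorted(adj):
        dfs(v, v, [v], +1)
    return found

# negative feedback loop (A activates B, B represses A)
# plus a mutual-repression pair (C, D): a positive circuit
edges = [("A", "B", +1), ("B", "A", -1), ("C", "D", -1), ("D", "C", -1)]
for cycle, sign in circuits(edges):
    print(cycle, "+" if sign > 0 else "-")
```

Whether such a circuit is actually functional, i.e. whether it "generates" the corresponding attractors for a given Boolean update rule, is precisely the question the definitions in this paper address.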