983 results for Complex-order differintegrals
Abstract:
"Sitting between your past and your future doesn't mean you are in the present." (Dakota Skye)

Complex systems science is an interdisciplinary field grouping under the same umbrella dynamical phenomena from the social, natural, and mathematical sciences. The emergence of a higher-order organization or behavior, transcending that expected of the linear addition of the parts, is a key factor shared by all these systems. Most complex systems can be modeled as networks that represent the interactions amongst the system's components. In addition to the actual nature of the parts' interactions, the intrinsic topological structure of the underlying network is believed to play a crucial role in the remarkable emergent behaviors exhibited by these systems. Moreover, the topology is also a key factor in explaining the extraordinary flexibility and resilience to perturbations observed in transmission and diffusion phenomena. In this work, we study the effect of different network structures on the performance and on the fault tolerance of systems in two different contexts.

In the first part, we study cellular automata, which are a simple paradigm for distributed computation. Cellular automata are made of basic Boolean computational units, the cells, which rely on simple rules and information from the surrounding cells to perform a global task. The limited visibility of the cells can be modeled as a network, where interactions amongst cells are governed by an underlying structure, usually a regular one. In order to increase the performance of cellular automata, we chose to change their topology. We applied computational principles inspired by Darwinian evolution, called evolutionary algorithms, to alter the system's topological structure, starting from either a regular or a random one. The outcome is remarkable: the resulting topologies share properties of both regular and random networks, and display similarities to the Watts-Strogatz small-world networks found in social systems. Moreover, the performance and the tolerance to probabilistic faults of our small-world-like cellular automata surpass those of regular ones.

In the second part, we use the context of biological genetic regulatory networks and, in particular, Kauffman's random Boolean network model. In some ways, this model is close to cellular automata, although it is not expected to perform any task. Instead, it simulates the time evolution of genetic regulation within living organisms under strict conditions. The original model, though very attractive in its simplicity, suffered from important shortcomings unveiled by recent advances in genetics and biology. We propose to use these new discoveries to improve the original model. Firstly, we have used artificial topologies believed to be closer to those of gene regulatory networks. We have also studied actual biological organisms and used parts of their genetic regulatory networks in our models. Secondly, we have addressed the improbable full synchronicity of the events taking place in Boolean networks and proposed a more biologically plausible cascading scheme. Finally, we tackled the actual Boolean functions of the model, i.e. the specifics of how genes activate according to the activity of upstream genes, and presented a new update function that takes into account the actual promoting and repressing effects of one gene on another. Our improved models demonstrate the expected, biologically sound behavior of previous GRN models, yet with superior resistance to perturbations. We believe they are one step closer to the biological reality.
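To make the second part concrete, here is a minimal Python sketch of the classical synchronous Kauffman random Boolean network that the abstract takes as its starting point. The network size, connectivity K, and all names are illustrative assumptions, not values from the thesis; the thesis's cascading update scheme and promote/repress update functions are not reproduced here.

```python
import random

def make_rbn(n_genes=20, k=2, seed=0):
    """Build a random Boolean network: each gene reads k random inputs
    and maps their joint state through a random Boolean lookup table."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(n_genes), k) for _ in range(n_genes)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n_genes)]
    return inputs, tables

def step(state, inputs, tables):
    """Fully synchronous update: every gene fires at once. It is exactly
    this synchrony that the thesis argues is biologically implausible."""
    new_state = []
    for g in range(len(state)):
        idx = 0
        for src in inputs[g]:
            idx = (idx << 1) | state[src]  # encode input states as a table index
        new_state.append(tables[g][idx])
    return new_state

inputs, tables = make_rbn()
rng = random.Random(1)
state = [rng.randint(0, 1) for _ in range(20)]
for _ in range(10):
    state = step(state, inputs, tables)
print(state)  # the trajectory eventually falls onto an attractor cycle
```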
Abstract:
Prohibits the use of all tobacco products in the areas of buildings under the Governor's control in the State Capitol Complex and in all offices occupied by state government.
Abstract:
Many complex systems may be described not by one but by a number of complex networks mapped onto each other in a multi-layer structure. Because of the interactions and dependencies between these layers, the state of a single layer does not necessarily reflect well the state of the entire system. In this paper we study the robustness of five examples of two-layer complex systems: three real-life data sets in the fields of communication (the Internet), transportation (the European railway system), and biology (the human brain), and two models based on random graphs. In order to cover the whole range of features specific to these systems, we focus on two extreme policies of the system's response to failures: no rerouting and full rerouting. Our main finding is that multi-layer systems are much more vulnerable to errors and intentional attacks than they appear from a single-layer perspective.
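As a hedged illustration of the single-layer versus whole-system gap, the following Python sketch (using the networkx library) treats a node as functional only if it lies in the largest connected component of both layers. This "mutual giant component" criterion is one common formalization of interdependence, assumed here for illustration rather than being the paper's exact no-rerouting and full-rerouting policies; graph sizes are arbitrary.

```python
import random
import networkx as nx

def functional_fraction(layer_a, layer_b, failed):
    """Fraction of nodes that survive failure AND sit in the largest
    connected component of both layers simultaneously."""
    alive = set(layer_a) - failed
    ga, gb = layer_a.subgraph(alive), layer_b.subgraph(alive)
    if ga.number_of_nodes() == 0:
        return 0.0
    cc_a = max(nx.connected_components(ga), key=len)
    cc_b = max(nx.connected_components(gb), key=len)
    return len(cc_a & cc_b) / layer_a.number_of_nodes()

n = 1000
layer_a = nx.gnm_random_graph(n, 2000, seed=1)  # e.g. a physical layer
layer_b = nx.gnm_random_graph(n, 2000, seed=2)  # e.g. a logical layer
rng = random.Random(0)

for frac in (0.0, 0.1, 0.2, 0.3):
    failed = set(rng.sample(range(n), int(frac * n)))
    single = functional_fraction(layer_a, layer_a, failed)  # one-layer view
    both = functional_fraction(layer_a, layer_b, failed)    # two-layer view
    print(f"failed {frac:.0%}: single-layer {single:.2f}, two-layer {both:.2f}")
```

Even with no failures, the two-layer measure is smaller than the single-layer one, and it degrades faster as nodes fail, in line with the paper's main finding.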
Abstract:
The object of game theory lies in the analysis of situations where different social actors have conflicting requirements and where their individual decisions will all influence the global outcome. In this framework, several games have been invented to capture the essence of various dilemmas encountered in many common and important socio-economic situations. Even though these games often succeed in helping us understand human or animal behavior in interactive settings, some experiments have shown that people tend to cooperate with each other in situations for which classical game theory strongly recommends them to do the exact opposite. Several mechanisms have been invoked to try to explain the emergence of this unexpected cooperative attitude. Among them, repeated interaction, reputation, and belonging to a recognizable group have often been mentioned.

However, the work of Nowak and May (1992) showed that the simple fact of arranging the players according to a spatial structure and only allowing them to interact with their immediate neighbors is sufficient to sustain a certain amount of cooperation even when the game is played anonymously and without repetition. Nowak and May's study and much of the following work were based on regular structures such as two-dimensional grids. Axelrod et al. (2002) showed that by randomizing the choice of neighbors, i.e. by actually giving up a strictly local geographical structure, cooperation can still emerge, provided that the interaction patterns remain stable in time. This is a first step towards a social network structure.

However, following pioneering work by sociologists in the sixties, such as that of Milgram (1967), in the last few years it has become apparent that many social and biological interaction networks, and even some technological networks, have particular, and partly unexpected, properties that set them apart from regular or random graphs. Among other things, they usually display broad degree distributions and show a small-world topological structure. Roughly speaking, a small-world graph is a network where any individual is relatively close, in terms of social ties, to any other individual, a property also found in random graphs but not in regular lattices. However, in contrast with random graphs, small-world networks also have a certain amount of local structure, as measured, for instance, by a quantity called the clustering coefficient. In the same vein, many real conflicting situations in economics and sociology are well described neither by a fixed geographical position of the individuals in a regular lattice nor by a random graph. Furthermore, it is a known fact that network structure can strongly influence dynamical phenomena such as the way diseases spread across a population and ideas or information get transmitted. Therefore, in the last decade, research attention has naturally shifted from random and regular graphs towards better models of social interaction structures.

The primary goal of this work is to discover whether or not the underlying graph structure of real social networks could explain why one finds higher levels of cooperation in populations of human beings or animals than what is prescribed by classical game theory. To meet this objective, I start by thoroughly studying a real scientific coauthorship network and showing, using diverse statistical measures, how it differs from biological or technological networks.
Furthermore, I extract and describe its community structure, taking into account the intensity of a collaboration. Finally, I investigate the temporal evolution of the network, from its inception to its state at the time of the study in 2006, suggesting also an effective view of it as opposed to a historical one. Thereafter, I combine evolutionary game theory with several network models, along with the studied coauthorship network, in order to highlight which specific network properties foster cooperation and to shed some light on the various mechanisms responsible for the maintenance of this same cooperation. I point out the fact that, to resist defection, cooperators take advantage, whenever possible, of the degree heterogeneity of social networks and of their underlying community structure. Finally, I show that the level and stability of cooperation depend not only on the game played, but also on the evolutionary dynamic rules used and on the individual payoff calculations.
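For readers unfamiliar with the baseline these results build on, here is a minimal Python sketch of the Nowak and May (1992) spatial prisoner's dilemma cited above: players on a grid imitate their most successful neighbor. The payoff values and grid size are illustrative assumptions; the thesis studies richer dynamics on small-world models and the real coauthorship network.

```python
import random

SIZE = 20                        # 20x20 torus
T, R, P, S = 1.4, 1.0, 0.0, 0.0  # weak PD payoffs: temptation T > reward R

def neighbors(i, j):
    return [((i + di) % SIZE, (j + dj) % SIZE)
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))]

def payoff(grid, i, j):
    me, total = grid[i][j], 0.0  # 1 = cooperate, 0 = defect
    for ni, nj in neighbors(i, j):
        other = grid[ni][nj]
        if me and other:
            total += R
        elif me:
            total += S
        elif other:
            total += T
        else:
            total += P
    return total

rng = random.Random(0)
grid = [[rng.randint(0, 1) for _ in range(SIZE)] for _ in range(SIZE)]
for _ in range(50):
    scores = [[payoff(grid, i, j) for j in range(SIZE)] for i in range(SIZE)]
    new = [row[:] for row in grid]
    for i in range(SIZE):
        for j in range(SIZE):
            # unconditional imitation of the best scorer in the neighborhood
            best = max(neighbors(i, j) + [(i, j)],
                       key=lambda p: scores[p[0]][p[1]])
            new[i][j] = grid[best[0]][best[1]]
    grid = new
coop = sum(sum(row) for row in grid) / SIZE ** 2
print(f"fraction of cooperators after 50 steps: {coop:.2f}")
```

With these payoffs a well-mixed population would converge to full defection; the spatial structure lets cooperator clusters persist, which is the phenomenon the thesis then probes on heterogeneous networks.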
Abstract:
Isolates of the Trichophyton mentagrophytes complex vary phenotypically. Whether the closely related zoophilic and anthropophilic anamorphs currently associated with Arthroderma vanbreuseghemii have to be considered as members of the same biological species remains an open question. In order to better delineate species in the T. mentagrophytes complex, we performed a mating analysis of freshly collected isolates from humans and animals with A. benhamiae and A. vanbreuseghemii reference strains, in comparison to internal transcribed spacer (ITS) and 28S rDNA sequencing. Mating experiments as well as ITS and 28S sequencing unambiguously allowed the distinction of A. benhamiae and A. vanbreuseghemii. We have also shown that all the isolates from tinea pedis and tinea unguium identified as T. interdigitale based on ITS sequences mated with A. vanbreuseghemii tester strains, but had lost their ability to give fertile cleistothecia. Therefore, T. interdigitale has to be considered as a humanized species derived from the sexual relative A. vanbreuseghemii.
Abstract:
The lpr gene has recently been shown to encode a functional mutation in the Fas receptor, a molecule involved in transducing apoptotic signals. Mice homozygous for the lpr gene develop an autoimmune syndrome accompanied by massive accumulation of double-negative (DN) CD4-8-B220+ T cell receptor-alpha/beta+ cells. In order to investigate the origin of these DN T cells, we derived lpr/lpr mice lacking major histocompatibility complex (MHC) class I molecules by intercrossing them with beta 2-microglobulin (beta 2m)-deficient mice. Interestingly, these lpr beta 2m-/- mice develop 13-fold fewer DN T cells in lymph nodes as compared to lpr/lpr wild-type (lprWT) mice. Analysis of anti-DNA antibodies and rheumatoid factor in serum demonstrates that lpr beta 2m-/- mice produce levels of autoantibodies comparable to those of lprWT mice. Collectively, our data indicate that MHC class I molecules control the development of DN T cells but not autoantibody production in lpr/lpr mice, and support the hypothesis that the majority of DN T cells may be derived from cells of the CD8 lineage.
Abstract:
MHC-peptide multimers containing biotinylated MHC-peptide complexes bound to phycoerythrin (PE) streptavidin (SA) are widely used for analyzing and sorting antigen-specific T cells. Here we describe alternative T cell-staining reagents that are superior to conventional reagents. They are built on reversible chelate complexes of Ni(2+)-nitrilotriacetic acid (NTA) with oligohistidines. We synthesized biotinylated linear mono-, di-, and tetra-NTA compounds using conventional solid-phase peptide chemistry and studied their interaction with HLA-A*0201-peptide complexes containing a His(6), His(12), or 2×His(6) tag by surface plasmon resonance on SA-coated sensor chips and by equilibrium dialysis. The binding avidity increased in the orders His(6) < His(12) < 2×His(6) and NTA(1) < NTA(2) < NTA(4), respectively, depending on the configuration of the NTA moieties, and reached picomolar K(D) for the combination of a 2×His(6) tag and a 2×Ni(2+)-NTA(2). We demonstrate that HLA-A2-2×His(6)-peptide multimers built with either Ni(2+)-NTA(4)-biotin and PE-SA or with PE-NTA(4) stained influenza- and Melan-A-specific CD8+ T cells as well as or better than conventional multimers. Although these complexes were highly stable, they dissociated very rapidly in the presence of imidazole, which allowed sorting of bona fide antigen-specific CD8+ T cells without inducing T cell death, as well as assessment of HLA-A2-peptide monomer dissociation kinetics on CD8+ T cells.
Abstract:
We present an extensive study of the structural and optical emission properties of aluminum silicates and soda-lime silicates codoped with Si nanoclusters (Si-nc) and Er. Si excesses of 5 and 15 at.% and Er concentrations ranging from 2×10^19 up to 6×10^20 cm^-3 were introduced by ion implantation. Thermal treatments at different temperatures were carried out before and after Er implantation. Structural characterization of the resulting structures was performed to obtain the layer composition and the size distribution of Si clusters. A comprehensive study has been carried out of the light emission as a function of the matrix characteristics, Si and Er contents, excitation wavelength, and power. Er emission at 1540 nm has been detected in all coimplanted glasses, with similar intensities. We estimated lifetimes ranging from 2.5 to 12 ms (depending on the Er dose and Si excess) and an effective excitation cross section of about 1×10^-17 cm^2 at low fluxes that decreases at high pump power. By quantifying the amount of Er ions excited through Si-nc, we find a fraction of 10% of the total Er concentration. Upconversion coefficients of about 3×10^-18 cm^3 s^-1 have been found for soda-lime glasses, and one order of magnitude lower in aluminum silicates.
Abstract:
The synthesis of magnetic nanoparticles with monodisperse size distributions, their self-assembly into ordered arrays, and their magnetic behavior as a function of structural order (ferrofluids and 2D assemblies) are presented. Magnetic colloids of monodisperse, passivated cobalt nanocrystals were produced by the rapid pyrolysis of cobalt carbonyl in solution. The size, size distribution (std. dev. < 5%), and shape of the nanocrystals were controlled by varying the surfactant, its concentration, the reaction rate, and the reaction temperature. The Co particles are defect-free single crystals with a complex cubic structure related to the beta phase of manganese (epsilon-Co). In the 2D assembly, a collective behavior was observed in the low-field susceptibility measurements, where the magnetization of the zero-field-cooled process increases steadily and the magnetization of the field-cooled process is independent of temperature. This differs from the behavior observed in a sample composed of disordered interacting particles. A strong paramagnetic contribution appears at very low temperatures, where the magnetization increases drastically after field cooling the sample. This has been attributed to the Co surfactant-particle interface, since no magnetic atomic impurities are present in these samples.
Abstract:
The plutonic rocks of the Basal Complex of La Gomera, Canary Islands, Spain, were studied by means of major and trace element contents and H-O-Sr-Nd isotope compositions in order to distinguish primary magmatic characteristics from late-stage alteration products. Deciphering the effects of alteration allowed us to determine primary, plume-related compositions that indicated D- and (18)O-depletion relative to normal upper mantle, supporting the conclusions of earlier studies on the plutonic rocks of Fuerteventura and La Palma. Late-stage alteration took place during the formation of the intrusive series, induced by interaction with meteoric water. Inferred isotopic compositions of the meteoric water indicate that the water infiltrated the rock edifice at a height of about 1500 m above sea level, suggesting the existence of a subaerial volcano that was active during the intrusive activity and has since been either destroyed or buried by later volcanic and landslide events.
Abstract:
OBJECTIVES: Coarctation of the aorta is one of the most common congenital heart defects. Its diagnosis may be difficult in the presence of a patent ductus arteriosus, of other complex defects, or of a poor echocardiographic window. We sought to demonstrate that the carotid-subclavian artery index (CSA index) and the isthmus-descending aorta ratio (I/D ratio), two recently described echocardiographic indexes, are effective in the detection of isolated and complex aortic coarctations in children younger and older than 3 months of age. The CSA index is the ratio of the distal aortic arch diameter to the distance between the left carotid artery and the left subclavian artery; it is highly suggestive of a coarctation when it is <1.5. The I/D ratio, defined as the ratio of the diameter of the isthmus to that of the descending aorta, suggests an aortic coarctation when it is less than 0.64. METHODS: This is a retrospective cohort study in a tertiary care children's hospital. We reviewed all echocardiograms of children aged 0-18 years with a diagnosis of coarctation seen at the authors' institution between 1996 and 2006. An age- and sex-matched control group without coarctation was constituted. Offline echocardiographic measurements of the aortic arch were performed in order to calculate the CSA index and the I/D ratio. RESULTS: Sixty-eight patients were included in the coarctation group and 24 in the control group. Patients with coarctation had a significantly lower CSA index (0.84+/-0.39 vs 2.65+/-0.82, p<0.0001) and I/D ratio (0.58+/-0.18 vs 0.98+/-0.19, p<0.0001) than patients in the control group. Associated cardiac defects and the age of the child did not significantly alter the CSA index or the I/D ratio. CONCLUSIONS: A CSA index less than 1.5 is highly suggestive of coarctation, independent of age and of the presence of other cardiac defects. The I/D ratio alone is less specific than the CSA index alone at any age and for any associated cardiac lesion. The association of both indexes improves sensitivity and permits diagnosis of coarctation in all patients based solely on a bedside echocardiographic measurement.
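Since both indexes are simple ratios, a short worked example may help. The Python sketch below encodes the two definitions and cut-offs given in the abstract; the measurement values are hypothetical, chosen to fall on the coarctation side of both thresholds.

```python
def csa_index(distal_arch_diameter_mm, carotid_to_subclavian_mm):
    """Carotid-subclavian artery index: distal aortic arch diameter divided
    by the left carotid to left subclavian artery distance.
    Values < 1.5 are highly suggestive of coarctation."""
    return distal_arch_diameter_mm / carotid_to_subclavian_mm

def id_ratio(isthmus_diameter_mm, descending_aorta_diameter_mm):
    """Isthmus-descending aorta ratio; values < 0.64 suggest coarctation."""
    return isthmus_diameter_mm / descending_aorta_diameter_mm

# Hypothetical measurements in millimetres:
csa = csa_index(8.0, 10.0)  # 0.80 -> below the 1.5 cut-off
idr = id_ratio(5.0, 9.0)    # 0.56 -> below the 0.64 cut-off
print(f"CSA index: {csa:.2f} (<1.5 suggests coarctation)")
print(f"I/D ratio: {idr:.2f} (<0.64 suggests coarctation)")
```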
Abstract:
PURPOSE: The aim of this study was to develop models based on kernel regression and probability estimation in order to predict and map indoor radon concentrations (IRC) in Switzerland by taking into account all of the following: architectural factors, spatial relationships between the measurements, and geological information. METHODS: We looked at about 240,000 IRC measurements carried out in about 150,000 houses. As predictor variables we included building type, foundation type, year of construction, detector type, geographical coordinates, altitude, temperature, and lithology in the kernel estimation models. We developed predictive maps as well as a map of the local probability of exceeding 300 Bq/m(3). Additionally, we developed a map of a confidence index in order to estimate the reliability of the probability map. RESULTS: Our models were able to explain 28% of the variation in the IRC data. All variables added information to the model. The model estimation yielded a bandwidth for each variable, making it possible to characterize the influence of each variable on the IRC estimation. Furthermore, we assessed the mapping characteristics of kernel estimation overall as well as by municipality. Overall, our model reproduces spatial IRC patterns that were already obtained earlier. On the municipal level, we could show that our model accounts well for IRC trends within municipal boundaries. Finally, we found that different building characteristics result in different IRC maps: maps corresponding to detached houses with concrete foundations indicate systematically smaller IRC than maps corresponding to farms with earth foundations. CONCLUSIONS: IRC mapping based on kernel estimation is a powerful tool to predict and analyze IRC on a large scale as well as on a local level. This approach makes it possible to develop tailor-made maps for different architectural elements and measurement conditions while accounting at the same time for geological information and spatial relations between IRC measurements.
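The estimator described here is, in spirit, a kernel-weighted local average with one bandwidth per predictor. The following Python sketch shows that idea with Nadaraya-Watson weights; all data, variable choices, and bandwidth values are hypothetical, and the study's actual model and bandwidth selection procedure are not reproduced.

```python
import numpy as np

def kernel_predict(X_train, y_train, x_query, bandwidths):
    """Nadaraya-Watson prediction: a Gaussian-kernel weighted average of
    y_train, with one bandwidth per predictor column. A large bandwidth
    means that variable barely influences the weighting."""
    d2 = (((X_train - x_query) / bandwidths) ** 2).sum(axis=1)
    w = np.exp(-0.5 * d2)
    return (w @ y_train) / w.sum(), w

# Hypothetical training data: [easting (km), altitude (m), construction year]
rng = np.random.default_rng(0)
X = rng.uniform([0, 200, 1900], [300, 2000, 2020], size=(500, 3))
y = 60 + 0.05 * X[:, 1] + rng.normal(0, 40, 500)  # fake IRC values in Bq/m^3

bandwidths = np.array([30.0, 150.0, 25.0])  # one bandwidth per predictor
query = np.array([120.0, 800.0, 1975.0])
pred, w = kernel_predict(X, y, query, bandwidths)
p300 = (w @ (y > 300)) / w.sum()  # local probability of exceeding 300 Bq/m^3
print(f"predicted IRC: {pred:.0f} Bq/m^3, P(IRC > 300): {p300:.2f}")
```

The same weights that give the prediction also give a local exceedance probability, mirroring the paper's pairing of a predictive map with a 300 Bq/m(3) probability map.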
Abstract:
Open surgery is still the main treatment of complex abdominal aortic aneurysms. Nevertheless, this approach is associated with major complications and a high mortality rate. Therefore, fenestrated endografts have been used to treat juxtarenal aneurysms. Unfortunately, no randomised controlled study is available to assess the efficacy of such devices. Moreover, the costs are still prohibitive for generalising this approach. Alternative treatments such as the chimney or sandwich technique are being evaluated in order to avoid these disadvantages. The aim of this paper is to present the endovascular approach to treating juxtarenal aneurysms and to emphasise that this option should be used only by highly specialised vascular centres.
Abstract:
Although hydrocarbon-bearing fluids have been known from the alkaline igneous rocks of the Khibiny intrusion for many years, their origin remains enigmatic. A recently proposed model of post-magmatic hydrocarbon (HC) generation through Fischer-Tropsch (FT) type reactions suggests the hydration of Fe-bearing phases and the release of H2, which reacts with magmatically derived CO2 to form CH4 and higher HCs. Here, new petrographic, microthermometric, laser Raman, bulk gas, and isotope data are presented and discussed in the context of previously published work in order to reassess models of HC generation. The gas phase is dominated by CH4 with only minor proportions of higher hydrocarbons. No remnants of the proposed primary CO2-rich fluid are found in the complex. The majority of the fluid inclusions are of secondary nature and trapped in healed microfractures. This indicates a high fluid flux after magma crystallisation. Entrapment conditions for fluid inclusions are 450-550 °C at 2.8-4.5 kbar. These temperatures are too high for hydrocarbon gas generation through the FT reaction. Chemical analyses of rims of Fe-rich phases suggest that they are not the result of alteration but instead represent changes in magma composition during crystallisation. Furthermore, there is no clear relationship between the presence of Fe-rich minerals and the abundance of fluid inclusion planes (FIPs) as reported elsewhere. δ13C values for methane range from -22.4‰ to -5.4‰, confirming a largely abiogenic origin for the gas. The presence of primary CH4-dominated fluid inclusions and of melt inclusions that contain a methane-rich gas phase indicates a magmatic origin of the HCs. An increase in methane content, together with a decrease in δ13C values towards the intrusion margin, suggests that magmatically derived abiogenic hydrocarbons may have mixed with biogenic hydrocarbons derived from the surrounding country rocks.
Abstract:
The concept of energy gap(s) is useful for understanding the consequences of a small daily, weekly, or monthly positive energy balance and the inconspicuous shift in weight gain ultimately leading to overweight and obesity. The energy gap is a dynamic concept: an initial positive energy gap incurred via an increase in energy intake (or a decrease in physical activity) is not constant, may fade out with time if the initial conditions are maintained, and depends on the 'efficiency' with which the readjustment of the energy imbalance occurs over time. The metabolic response to an energy imbalance and the magnitude of the energy gap(s) can be estimated by at least two methods: i) assessment by longitudinal overfeeding studies, imposing (by design) an initial positive energy imbalance; ii) retrospective assessment based on epidemiological surveys, whereby the accumulated endogenous energy storage per unit of time is calculated from the change in body weight and body composition. In order to illustrate the difficulty of accurately assessing an energy gap, we have used, as an illustrative example, a recent epidemiological study which tracked changes in total energy intake (estimated by gross food availability) and body weight over three decades in the US, combined with total energy expenditure predicted from body weight using doubly labelled water data. At the population level, the study attempted to assess the cause of the energy gap, purported to be entirely due to increased food intake. Based on an estimate of the change in energy intake judged to be more reliable (i.e. in the same study population), together with calculations of simple energetic indices, our analysis suggests that conclusions about the fundamental causes of obesity development in a population (excess intake vs. low physical activity, or both) are clouded by a high level of uncertainty.
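The retrospective method ii) reduces to simple arithmetic, sketched below in Python. The 7,700 kcal-per-kilogram figure is a commonly used approximation for the energy content of body weight change, an assumption made here for illustration rather than a value taken from the study.

```python
# Back-of-the-envelope version of the retrospective energy gap estimate:
# accumulated body energy storage divided by elapsed time.

KCAL_PER_KG = 7700.0  # approximate energy density of body weight change

def average_energy_gap(weight_change_kg, years):
    """Average daily energy gap (kcal/day) implied by an observed weight
    change over a given number of years."""
    return weight_change_kg * KCAL_PER_KG / (years * 365.0)

# e.g. a 9 kg gain over 30 years implies a surprisingly small daily gap:
print(f"{average_energy_gap(9.0, 30.0):.1f} kcal/day")  # about 6.3 kcal/day
```

Note that this is the average gap in stored energy; the gap at the level of intake must be larger, since maintenance expenditure rises as weight is gained, which is one reason an initial energy gap "fades out" when the new conditions are held constant.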