68 results for Two dimensional fuzzy fault tree analysis
Abstract:
Metabolic labeling techniques have recently become popular tools for the quantitative profiling of proteomes. Classical stable isotope labeling with amino acids in cell culture (SILAC) uses pairs of heavy/light isotopic forms of amino acids to introduce predictable mass differences in the protein samples to be compared. After proteolysis, pairs of cognate precursor peptides can be correlated, and their intensities can be used for mass spectrometry-based relative protein quantification. We present an alternative SILAC approach in which two cell cultures are grown in media containing isobaric forms of amino acids, labeled either with 13C on the carbonyl (C-1) carbon or with 15N on the backbone nitrogen. Labeled peptides from both samples have the same nominal mass and nearly identical MS/MS spectra but, upon fragmentation, generate distinct immonium ions separated by 1 amu. When labeled protein samples are mixed, the intensities of these immonium ions can be used for the relative quantification of the parent proteins. We validated the labeling of cellular proteins with valine, isoleucine, and leucine, with coverage of 97% of all tryptic peptides. We improved the sensitivity for the detection of the quantification ions on a pulsing instrument by using a specific fast scan event. The analysis of a protein mixture with a known heavy/light ratio showed reliable quantification. Finally, the application of the technique to the analysis of two melanoma cell lines yielded quantitative data consistent with those obtained by a classical two-dimensional DIGE analysis of the same samples. Our method combines the features of the SILAC technique with the advantages of isobaric labeling schemes like iTRAQ. We discuss advantages and disadvantages of isobaric SILAC with immonium ion splitting as well as possible ways to improve it.
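The quantification step is simple arithmetic once the paired immonium ion intensities are extracted. The following is a minimal sketch, not the authors' code, of how per-peptide immonium ion intensity pairs could be combined into a protein-level heavy/light ratio (function and variable names are invented for illustration):

```python
from statistics import median

def protein_ratio(peptide_pairs):
    """Estimate a heavy/light protein ratio from immonium ion intensities.

    peptide_pairs: one (I_13C, I_15N) intensity pair per labeled peptide
    of the same parent protein, read from the pair of immonium ion peaks
    1 amu apart in each MS/MS spectrum. The median of the per-peptide
    ratios damps the effect of a single noisy spectrum.
    """
    ratios = [heavy / light for heavy, light in peptide_pairs if light > 0]
    return median(ratios)

# Toy numbers: three peptides of one protein.
pairs = [(1.2e5, 0.9e5), (4.4e4, 3.1e4), (8.0e5, 6.2e5)]
print(f"estimated 13C/15N ratio: {protein_ratio(pairs):.2f}")
```

Taking the median across peptides rather than the mean is a common robustness choice, not a claim about the paper's own procedure.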
Abstract:
Starting from theories of secularization and of religious individualization, we propose a two-dimensional typology of religiosity and test its impact on political attitudes. Unlike classic conceptions of religiosity used in political studies, our typology simultaneously accounts for an individual's sense of belonging to the church (institutional dimension) and his/her personal religious beliefs (spiritual dimension). Our analysis, based on data from the World Values Survey in Switzerland (1989-2007), shows two main results. First, next to evidence of religious decline, we also find evidence of religious change with an increase in the number of people who "believe without belonging." Second, non-religious individuals and individuals who believe without belonging are significantly more permissive on issues of cultural liberalism than followers of institutionalized forms of religiosity.
Abstract:
Phagocytosis, whether of food particles in protozoa or bacteria and cell remnants in the metazoan immune system, is a conserved process. The particles are taken up into phagosomes, which then undergo complex remodeling of their components, called maturation. By using two-dimensional gel electrophoresis and mass spectrometry combined with genomic data, we identified 179 phagosomal proteins in the amoeba Dictyostelium, including components of signal transduction, membrane traffic, and the cytoskeleton. By carrying out this proteomics analysis over the course of maturation, we obtained time profiles for 1,388 spots and thus generated a dynamic record of phagosomal protein composition. Clustering of the time profiles revealed five clusters and 24 functional groups that were mapped onto a flow chart of maturation. Two heterotrimeric G protein subunits, Galpha4 and Gbeta, appeared at the earliest times. We showed that mutations in the genes encoding these two proteins produce a phagocytic uptake defect in Dictyostelium. This analysis of phagosome protein dynamics provides a reference point for future genetic and functional investigations.
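The clustering of spot time profiles is the step here that translates most directly into code. Below is a minimal sketch of that kind of analysis on stand-in data; the abstract does not name the clustering method, so k-means with five clusters is an assumption made purely for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Stand-in data: one intensity time profile per protein spot over the
# course of phagosome maturation (the study followed 1,388 spots at
# successive time points; random numbers here are placeholders).
n_spots, n_timepoints = 1388, 6
profiles = rng.random((n_spots, n_timepoints))

# Standardize each profile so clusters group by shape, not amplitude.
profiles = (profiles - profiles.mean(axis=1, keepdims=True)) \
    / profiles.std(axis=1, keepdims=True)

# The paper reports five clusters; k-means is an illustrative choice.
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(profiles)
for k in range(5):
    print(f"cluster {k}: {(labels == k).sum()} spots")
```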
Abstract:
The ultrastructure of the membrane attack complex (MAC) of complement had been described as representing a hollow cylinder of defined dimensions that is composed of the proteins C5b, C6, C7, C8, and C9. After the characteristic cylindrical structure was identified as polymerized C9 [poly(C9)], the question arose as to the ultrastructural identity and topology of the C9-polymerizing complex C5b-8. An electron microscopic analysis of isolated MAC revealed an asymmetry of individual complexes with respect to their length. Whereas the length of one boundary (± SEM) was always 16 ± 1 nm, the length of the other varied between 16 and 32 nm. In contrast, poly(C9), formed spontaneously from isolated C9, had a uniform tubule length (± SEM) of 16 ± 1 nm. On examination of MAC-phospholipid vesicle complexes, an elongated structure was detected that was closely associated with the poly(C9) tubule and that extended 16-18 nm beyond the torus of the tubule and 28-30 nm above the membrane surface. The width of this structure varied depending on its two-dimensional projection in the electron microscope. By using biotinyl C5b-6 in the formation of the MAC and avidin-coated colloidal gold particles for the ultrastructural analysis, this heretofore unrecognized subunit of the MAC could be identified as the tetramolecular C5b-8 complex. Identification also was achieved by using anti-C5 Fab-coated colloidal gold particles. A similar elongated structure of 25 nm length (above the surface of the membrane) was observed on single C5b-8-vesicle complexes. It is concluded that the C5b-8 complex, which catalyzes poly(C9) formation, constitutes a structure of discrete morphology that remains as such identifiable in the fully assembled MAC, in which it is closely associated with the poly(C9) tubule.
Abstract:
With the availability of new-generation sequencing technologies, bacterial genome projects have undergone a major boost. Still, chromosome completion requires costly and time-consuming gap closure, especially when the genome contains highly repetitive elements. However, incomplete genome data may be sufficiently informative to derive the pursued information. For emerging pathogens, i.e. newly identified pathogens, withholding genome data during the gap closure stage is clearly medically counterproductive. We thus investigated the feasibility of a dirty genome approach, i.e. the release of unfinished genome sequences to develop serological diagnostic tools. We showed that almost the whole genome sequence of the emerging pathogen Parachlamydia acanthamoebae was retrieved even with relatively short reads from the Genome Sequencer 20 and Solexa platforms. The bacterial proteome was analyzed to select immunogenic proteins, which were then expressed and used to develop the first steps of an ELISA. This work constitutes a proof of principle for the dirty genome approach, i.e. the use of unfinished genome sequences of pathogenic bacteria, coupled with proteomics, to rapidly identify new immunogenic proteins useful for developing specific diagnostic tests such as ELISA, immunohistochemistry and direct antigen detection. Although applied here to an emerging pathogen, this combined dirty genome sequencing/proteomic approach may be used for any pathogen for which better diagnostics are needed. These genome sequences may also be very useful for developing DNA-based diagnostic tests. All these diagnostic tools will allow further evaluation of the pathogenic potential of this obligate intracellular bacterium.
Abstract:
Plasmodium falciparum is the parasite responsible for the most acute form of malaria in humans. Recently, the serine repeat antigen (SERA) of P. falciparum has attracted attention as a potential vaccine and drug target, and it has been shown to be a member of a large gene family. To clarify the relationships among the numerous P. falciparum SERAs and to identify orthologs of SERA5 and SERA6 in Plasmodium species affecting rodents, gene trees were inferred from nucleotide and amino acid sequence data for 33 putative SERA homologs in seven different species. (A distance method for nucleotide sequences that is specifically designed to accommodate differing GC content yielded results that were largely compatible with the amino acid tree. Standard-distance and maximum-likelihood methods for nucleotide sequences, on the other hand, yielded gene trees that differed in important respects.) To infer the pattern of duplication, speciation, and gene loss events in the SERA gene family history, the resulting gene trees were then "reconciled" with two competing Plasmodium species tree topologies that have been identified by previous phylogenetic studies. Parsimony of reconciliation was used as a criterion for selecting a gene tree/species tree pair and provided (1) support for one of the two species trees and for the core topology of the amino acid-derived gene tree, (2) a basis for critiquing fine detail in a poorly resolved region of the gene tree, (3) a set of predicted "missing genes" in some species, (4) clarification of the relationships among the P. falciparum SERAs, and (5) some information about SERA5 and SERA6 orthologs in the rodent malaria parasites. Parsimony of reconciliation and a second criterion, the implied mutational pattern at two key active sites in the SERA proteins, were also seen to be useful supplements to standard "bootstrap" analysis for inferred topologies.
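The "reconciliation" step has a compact algorithmic core: map every gene-tree node to the lowest common ancestor (LCA) of its leaves' species in the species tree and count the duplications the mapping implies; the gene tree/species tree pair implying the fewest events is the most parsimonious. A minimal sketch with toy trees and names, not the SERA data:

```python
# Toy species tree given by parent pointers; "A" is the common ancestor
# of the two falciparum-like species. All names are illustrative.
SPECIES_PARENT = {"falciparum": "A", "reichenowi": "A",
                  "A": "root", "rodent": "root"}

def ancestors(s):
    """Path from a species-tree node up to the root, inclusive."""
    path = [s]
    while s in SPECIES_PARENT:
        s = SPECIES_PARENT[s]
        path.append(s)
    return path

def species_lca(a, b):
    """Lowest common ancestor of two species-tree nodes."""
    anc_a = set(ancestors(a))
    for s in ancestors(b):          # first shared ancestor is the LCA
        if s in anc_a:
            return s

def reconcile(gene_tree, species_of):
    """LCA-mapping reconciliation of a gene tree given as nested pairs,
    e.g. (("g1", "g3"), "g2"). Returns the species-tree node the root
    maps to and the number of implied duplications."""
    dups = 0
    def walk(node):
        nonlocal dups
        if isinstance(node, str):               # leaf: its own species
            return species_of[node]
        left, right = walk(node[0]), walk(node[1])
        m = species_lca(left, right)
        if m in (left, right):                  # duplication node
            dups += 1
        return m
    return walk(gene_tree), dups

species_of = {"g1": "falciparum", "g2": "rodent", "g3": "falciparum"}
# Two falciparum paralogs imply one duplication on this species tree.
print(reconcile((("g1", "g3"), "g2"), species_of))  # ('root', 1)
```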
Abstract:
Nuclear architecture as well as microtubule ultrastructure have been extensively investigated by means of different methods of ultrastructural cytochemistry, using chemically fixed and resin-embedded samples or following cryofixation, cryosubstitution and embedding into various, especially partially hydrophilic, resins. Here, we extend these studies using cryoelectron microscopy of vitreous sections (CEMOVIS), which allows one to observe the specimen as close as possible to its native state. Furthermore, we applied cryoelectron tomography of vitreous sections (TOVIS) in order to obtain a three-dimensional view of: 1) the nuclear periphery and the perichromatin region, and 2) the microtubule lumen. Concerning the nuclear architecture: our observations show that nucleoli and condensed chromatin are well recognisable due to their specific texture. Conversely, the visualisation of other important nuclear domains, especially those containing ribonucleoproteins, is seriously hampered by the generally low contrast of the interchromatin region. This is mainly due to the plethora of information superposed in the volume of the section when observed on two-dimensional micrographs. Cryoelectron tomography allowed us to better visualise nuclear regions. Condensed chromatin clumps are decorated on their periphery, the perichromatin region, by numerous fibrils and granules. Tunnels of interchromatin space can occasionally be found crossing condensed chromatin regions, thus allowing access to nuclear pores. Finally, we were able to use TOVIS to directly distinguish most of the nuclear pore complex structures at the level of a single pore. Concerning the microtubule ultrastructure: we have demonstrated that the polarity of a cross-sectioned microtubule observed in situ by CEMOVIS was directly deducible from the visualisation of the chirality of its protofilaments. This chirality has previously been established as related to the shape of the tubulin subunits. Cryoelectron tomography allowed us to observe microtubules in their cellular context at a resolution sufficient to resolve molecular details such as their tubulin monomers. In this way, uncharacterized molecules were visualised in the microtubule lumen. These observations were made on samples prepared either by CEMOVIS or by plunge freezing of whole cells. Finally, we have shown that microtubules are also relevant objects for understanding the cutting artefacts that occur when performing CEMOVIS. The goals of our further studies will be to: 1) try to specifically target different nuclear domains by cytochemical approaches in situ, prior to cryofixation; 2) apply averaging methods in order to obtain a three-dimensional model of the nuclear pore complex at work, in its cellular context; and 3) use biochemical analysis, combined subsequently with immunocytochemical approaches, to determine the exact nature of the microtubule's luminal particles.
Parts, places, and perspectives: a theory of spatial relations based on mereotopology and convexity
Abstract:
This thesis proposes to carry on the philosophical work begun in Casati's and Varzi's seminal book Parts and Places, by extending their general reflections on the basic formal structure of spatial representation beyond mereotopology and absolute location to the question of perspectives and perspective-dependent spatial relations. We show how, on the basis of a conceptual analysis of such notions as perspective and direction, a mereotopological theory with convexity can express perspectival spatial relations in a strictly qualitative framework. We start by introducing a particular mereotopological theory, AKGEMT, and argue that it constitutes an adequate core for a theory of spatial relations. Two features of AKGEMT are of particular importance: AKGEMT is an extensional mereotopology, implying that sameness of proper parts is a sufficient and necessary condition for identity, and it allows for (lower-dimensional) boundary elements in its domain of quantification. We then discuss an extension of AKGEMT, AKGEMTS, which results from the addition of a binary segment operator whose interpretation is that of a straight line segment between mereotopological points. Based on existing axiom systems in standard point-set topology, we propose an axiomatic characterisation of the segment operator and show that it is strong enough to sustain complex properties of a convexity predicate and a convex hull operator. We compare our segment-based characterisation of the convex hull to Cohn et al.'s axioms for the convex hull operator, arguing that our notion of convexity is significantly stronger. The discussion of AKGEMTS defines the background theory of spatial representation on which the developments in the second part of this thesis are built. The second part deals with perspectival spatial relations in two-dimensional space, i.e., such relations as those expressed by 'in front of', 'behind', 'to the left/right of', etc., and develops a qualitative formalism for perspectival relations within the framework of AKGEMTS. Two main claims are defended in part 2: that perspectival relations in two-dimensional space are four-place relations of the kind R(x, y, z, w), to be read as x is R-related to y as z looks at w; and that these four-place structures can be satisfactorily expressed within the qualitative theory AKGEMTS. To defend these two claims, we start by arguing for a unified account of perspectival relations, thus rejecting the traditional distinction between 'relative' and 'intrinsic' perspectival relations. We present a formal theory of perspectival relations in the framework of AKGEMTS, deploying the idea that perspectival relations in two-dimensional space are four-place relations having a locational and a perspectival part, and show how this four-place structure leads to a unified framework of perspectival relations. Finally, we present a philosophical motivation for the idea that perspectival relations are four-place, cashing out the thesis that perspectives are vectorial properties and arguing that vectorial properties are relations between spatial entities. Using Fine's notion of "qua objects" for an analysis of points of view, we show at last how our four-place approach to perspectival relations compares to more traditional understandings.
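To make the formal apparatus concrete, the fragment below gives an illustrative reconstruction in standard first-order notation; the symbols P (parthood), Pt (point) and seg are chosen here for exposition and do not quote the thesis' own axiomatization. A region is convex when it contains the segment between any two of its points, the convex hull is the least convex region containing a given one, and perspectival relations take four arguments:

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}

% Convexity via the segment operator (P = parthood, Pt = point):
\[
  \mathrm{Cx}(x) \equiv \forall p\, \forall q\,
  \bigl( \mathrm{Pt}(p) \land \mathrm{Pt}(q) \land P(p,x) \land P(q,x)
         \rightarrow P(\mathrm{seg}(p,q), x) \bigr)
\]

% Convex hull as the least convex region containing x:
\[
  \mathrm{ch}(x) = \iota y\, \bigl( \mathrm{Cx}(y) \land P(x,y) \land
  \forall z\, ( \mathrm{Cx}(z) \land P(x,z) \rightarrow P(y,z) ) \bigr)
\]

% Four-place perspectival relations, ``x is R-related to y as z looks at w'':
\[
  \mathrm{LeftOf}(x, y, z, w), \qquad \mathrm{Behind}(x, y, z, w), \ \dots
\]

\end{document}
```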
Abstract:
The importance of the right ventricle as a determinant of clinical symptoms, exercise capacity, peri-operative survival and postoperative outcome has been underestimated for a long time. Right ventricular ejection fraction has been used as a measure of right ventricular function but has been found to be dependent on loading conditions, ventricular interaction as well as on myocardial structure. Altered left ventricular function in patients with valvular disease influences right ventricular performance mainly by changes in afterload but also by ventricular interaction. Right ventricular function and regional wall motion can be determined with right ventricular angiography, radionuclide ventriculography, two-dimensional echocardiography or magnetic resonance imaging. However, the complex structure of the right ventricle and its pronounced translational movements render quantification difficult. True regional wall motion analysis is, however, possible with myocardial tagging based on magnetic resonance techniques. With this technique a baso-apical shear motion of the right ventricle was observed which was enhanced in patients with aortic stenosis.
Abstract:
The exceptional genomic content and genome organization of the Acidianus filamentous virus 1 (AFV1) that infects the hyperthermophilic archaeon Acidianus hospitalis suggest that this virus might exploit an unusual mechanism of genome replication. An analysis of replicative intermediates of the viral genome by two-dimensional (2D) agarose gel electrophoresis revealed that viral genome replication starts by the formation of a D-loop and proceeds via strand displacement replication. Characterization of replicative intermediates using dark-field electron microscopy, in combination with the 2D agarose gel electrophoresis data, suggests that recombination plays a key role in the termination of AFV1 genome replication through the formation of terminal loops. A terminal protein was found to be attached to the ends of the viral genome. The results allow us to postulate a model of genome replication that relies on recombination events for initiation and termination.
Abstract:
INTRODUCTION: Optimal identification of subtle cognitive impairment in the primary care setting requires a very brief tool combining (a) patients' subjective impairments, (b) cognitive testing, and (c) information from informants. The present study developed a new, very quick and easily administered case-finding tool combining these assessments ('BrainCheck') and tested the feasibility and validity of this instrument in two independent studies. METHODS: We developed a case-finding tool comprising patient-directed (a) questions about memory and depression and (b) clock drawing, and (c) the informant-directed 7-item version of the Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE). Feasibility study: 52 general practitioners rated the feasibility and acceptance of the patient-directed tool. Validation study: an independent group of 288 Memory Clinic patients (mean ± SD age = 76.6 ± 7.9 years, education = 12.0 ± 2.6 years; 53.8% female) with diagnoses of mild cognitive impairment (n = 80), probable Alzheimer's disease (n = 185), or major depression (n = 23) and 126 demographically matched, cognitively healthy volunteers (age = 75.2 ± 8.8 years, education = 12.5 ± 2.7 years; 40% female) participated. All patients and healthy controls were administered the patient-directed tool, and informants of 113 patients and 70 healthy controls completed the very short IQCODE. RESULTS: Feasibility study: general practitioners rated the patient-directed tool as highly feasible and acceptable. Validation study: a Classification and Regression Tree analysis generated an algorithm to categorize the patient-directed data which resulted in a correct classification rate (CCR) of 81.2% (sensitivity = 83.0%, specificity = 79.4%). Critically, the CCR of the combined patient- and informant-directed instruments (BrainCheck) reached nearly 90% (89.4%; sensitivity = 97.4%, specificity = 81.6%). CONCLUSION: A new and very brief instrument for general practitioners, 'BrainCheck', combined three sources of information deemed critical for effective case-finding (patients' subjective impairments, cognitive testing, informant information) and resulted in a CCR of nearly 90%. It thus provides a very efficient and valid tool to aid general practitioners in deciding whether patients with suspected cognitive impairments should be further evaluated or not ('watchful waiting').
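The reported accuracy figures are standard confusion-matrix quantities. As a minimal sketch (the counts below are invented for illustration and are not the study's data), this is how sensitivity, specificity and CCR relate:

```python
def classification_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity and correct classification rate (CCR)
    from confusion-matrix counts (tp, fn over patients; tn, fp over
    healthy controls)."""
    sensitivity = tp / (tp + fn)            # impaired correctly flagged
    specificity = tn / (tn + fp)            # healthy correctly cleared
    ccr = (tp + tn) / (tp + fn + tn + fp)   # overall correct rate
    return sensitivity, specificity, ccr

# Invented counts: 97/100 patients and 82/100 controls classified correctly.
sens, spec, ccr = classification_metrics(tp=97, fn=3, tn=82, fp=18)
print(f"sensitivity={sens:.1%}, specificity={spec:.1%}, CCR={ccr:.1%}")
```

Note that the CCR weights the two groups by their sample sizes, so it need not equal the average of sensitivity and specificity.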
Abstract:
The expression of Ia-associated human invariant (In) chain glycoproteins was studied in Raji B cells as well as in their Ia-negative derived variant RJ 2.2.5 cells by using a specific rabbit anti-human In chain antiserum. Two-dimensional gel electrophoresis patterns of immunoprecipitates from either biosynthetically labeled or surface-labeled cells were analyzed. In addition, flow microfluorometric analysis of stained cells was performed. The results indicate that the In chain is constitutively produced in the Ia-negative B cell variant. Moreover, it appears that several forms of In chain-related molecules, with different charges and distinct molecular weights, are equally expressed in Ia-positive and Ia-negative B cells. Finally, no evidence could be obtained that the In molecular family was expressed on the cell surface of Ia-positive Raji and Ia-negative RJ 2.2.5 cells.
Abstract:
Three-dimensional information is much easier to understand than a set of two-dimensional images. A layman is therefore thrilled by the pseudo-3D image taken in a scanning electron microscope (SEM), whereas a transmission electron micrograph challenges his imagination. First approaches to gaining insight into the third dimension were to cut serial microtome sections of a region of interest (ROI) and then build a model of the object. Serial microtome sectioning is tedious and skill-demanding work and was therefore seldom done. In the last two decades, with the increase of computer power, sophisticated display options, and the development of new instruments (an SEM with a built-in microtome as well as the focused ion beam scanning electron microscope, FIB-SEM), serial sectioning and 3D analysis have become far easier and faster. Due to the relief-like topology of the microtome-trimmed block face of resin-embedded tissue, the ROI can be searched for in the secondary electron mode, and at the selected spot the ROI is prepared with the ion beam for 3D analysis. For FIB-SEM tomography, a thin slice is removed with the ion beam and the newly exposed face is imaged with the electron beam, usually by recording the backscattered electrons. The process, also called "slice and view," is repeated until the desired volume is imaged. As FIB-SEM allows 3D imaging of biological fine structure at high resolution of only small volumes, it is crucial to perform slice and view at carefully selected spots. Finding the region of interest is therefore a prerequisite for meaningful imaging. Thin-layer plastification of biofilms offers direct access to the original sample surface and allows the selection of an ROI for site-specific FIB-SEM tomography just by its pronounced topographic features.
Abstract:
Proteomics has come a long way from the initial qualitative analysis of proteins present in a given sample at a given time ("cataloguing") to the large-scale characterization of proteomes, their interactions and dynamic behavior. Originally enabled by breakthroughs in protein separation and visualization (by two-dimensional gels) and protein identification (by mass spectrometry), the discipline now encompasses a large body of protein and peptide separation, labeling, detection and sequencing tools supported by computational data processing. The decisive mass spectrometric developments and the most recent instrumentation news are briefly mentioned, accompanied by a short review of gel and chromatographic techniques for protein/peptide separation, depletion and enrichment. Special emphasis is placed on quantification techniques: gel-based and label-free techniques are briefly discussed, whereas stable-isotope coding and internal peptide standards are extensively reviewed. Another chapter is dedicated to software and computing tools for proteomic data processing and validation. A short assessment of the status quo and recommendations for future developments round off this journey through quantitative proteomics.
Abstract:
The object of game theory lies in the analysis of situations where different social actors have conflicting requirements and where their individual decisions will all influence the global outcome. In this framework, several games have been invented to capture the essence of various dilemmas encountered in many common and important socio-economic situations. Even though these games often succeed in helping us understand human or animal behavior in interactive settings, some experiments have shown that people tend to cooperate with each other in situations for which classical game theory strongly recommends them to do the exact opposite. Several mechanisms have been invoked to try to explain the emergence of this unexpected cooperative attitude. Among them, repeated interaction, reputation, and belonging to a recognizable group have often been mentioned. However, the work of Nowak and May (1992) showed that the simple fact of arranging the players according to a spatial structure and only allowing them to interact with their immediate neighbors is sufficient to sustain a certain amount of cooperation even when the game is played anonymously and without repetition. Nowak and May's study and much of the following work was based on regular structures such as two-dimensional grids. Axelrod et al. (2002) showed that by randomizing the choice of neighbors, i.e. by actually giving up a strictly local geographical structure, cooperation can still emerge, provided that the interaction patterns remain stable in time. This is a first step towards a social network structure. However, following pioneering work by sociologists in the sixties, such as that of Milgram (1967), it has become apparent in the last few years that many social and biological interaction networks, and even some technological networks, have particular, and partly unexpected, properties that set them apart from regular or random graphs. Among other things, they usually display broad degree distributions and show small-world topological structure. Roughly speaking, a small-world graph is a network where any individual is relatively close, in terms of social ties, to any other individual, a property also found in random graphs but not in regular lattices. However, in contrast with random graphs, small-world networks also have a certain amount of local structure, as measured, for instance, by a quantity called the clustering coefficient. In the same vein, many real conflicting situations in economics and sociology are well described neither by a fixed geographical position of the individuals in a regular lattice nor by a random graph. Furthermore, it is a known fact that network structure can strongly influence dynamical phenomena such as the way diseases spread across a population and ideas or information get transmitted. Therefore, in the last decade, research attention has naturally shifted from random and regular graphs towards better models of social interaction structures. The primary goal of this work is to discover whether or not the underlying graph structure of real social networks could explain why one finds higher levels of cooperation in populations of human beings or animals than what is prescribed by classical game theory. To meet this objective, I start by thoroughly studying a real scientific coauthorship network and showing how it differs from biological or technological networks using diverse statistical measurements.
Furthermore, I extract and describe its community structure, taking into account the intensity of a collaboration. Finally, I investigate the temporal evolution of the network, from its inception to its state at the time of the study in 2006, also suggesting an effective view of it as opposed to a historical one. Thereafter, I combine evolutionary game theory with several network models, along with the studied coauthorship network, in order to highlight which specific network properties foster cooperation and to shed some light on the various mechanisms responsible for the maintenance of this cooperation. I point out that, to resist defection, cooperators take advantage, whenever possible, of the degree heterogeneity of social networks and their underlying community structure. Finally, I show that the level and stability of cooperation depend not only on the game played, but also on the evolutionary dynamic rules used and on the individual payoff calculations.
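Since the thesis builds directly on Nowak and May's (1992) spatial game, a compact simulation makes the baseline mechanism concrete. This is a minimal sketch of that class of model, with illustrative parameter choices (grid size, b, the imitate-the-best update rule) rather than anything taken from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Players on an N x N torus play a weak prisoner's dilemma (R=1, P=S=0,
# T=b) with their 8 neighbors, then imitate the best-scoring player in
# their neighborhood (themselves included).
N, b, steps = 50, 1.8, 60
coop = rng.random((N, N)) < 0.5            # True = cooperator

def neighbor_views(a):
    """The 8 toroidally shifted copies of array a, one per neighbor."""
    return [np.roll(np.roll(a, dy, axis=0), dx, axis=1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)]

for _ in range(steps):
    n_coop = sum(neighbor_views(coop.astype(float)))
    # Cooperators earn R=1 per cooperating neighbor; defectors earn T=b.
    payoff = np.where(coop, n_coop, b * n_coop)
    # Imitation: adopt the strategy of the best-scoring cell among
    # yourself and your 8 neighbors (ties keep the current choice).
    best_pay, best_strat = payoff, coop
    for p, s in zip(neighbor_views(payoff), neighbor_views(coop)):
        better = p > best_pay
        best_pay = np.where(better, p, best_pay)
        best_strat = np.where(better, s, best_strat)
    coop = best_strat

print(f"cooperator fraction after {steps} steps: {coop.mean():.2f}")
```

With b just below 2, clusters of cooperators typically survive among defectors, which is the qualitative point the abstract attributes to spatial structure.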