21 results for two-dimensional coupled-wave theory


Relevance:

100.00%

Publisher:

Abstract:

Despite the long tradition of asking about the negative social and health consequences of alcohol consumption in surveys, little is known about the dimensionality of these consequences. Analysing cross-sectional and longitudinal data from the Nordic Taxation Study, collected for Sweden, Finland, and Denmark in two waves in 2003 and 2004, by means of exploratory principal component analysis for categorical data (CATPCA), the study tests whether consequences have a single underlying dimension across cultures. It further tests the reliability, replicability, and concurrent and predictive validity of the consequence scales. A one-dimensional solution was generally preferable: whereas the two-dimensional solution was unable to distinguish clearly between different concepts of consequences, the one-dimensional solution resulted in interpretable and generally very stable scales within countries, across different samples and over time.
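
As a rough illustration of the dimensionality check described above, the sketch below fits one- and two-component solutions to synthetic ordinal consequence items and compares their explained variance. It uses ordinary PCA from scikit-learn as a stand-in for CATPCA (which additionally applies optimal scaling to the categories); the number of items, the 0-4 response coding, and the simulated data are assumptions, not the study's data.

```python
# Illustrative only: ordinary PCA on ordinal-coded items as a rough stand-in
# for CATPCA (which additionally applies optimal scaling to the categories).
# The data below are synthetic; item count and coding are assumptions.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_respondents, n_items = 1000, 8           # e.g. 8 consequence items, 0-4 scale
latent = rng.normal(size=(n_respondents, 1))
loadings = rng.uniform(0.5, 0.9, size=(1, n_items))
raw = latent @ loadings + rng.normal(scale=0.6, size=(n_respondents, n_items))
items = np.clip(np.round(raw + 2), 0, 4)   # ordinal 0..4 responses

for k in (1, 2):
    pca = PCA(n_components=k).fit(items)
    print(f"{k}-dimensional solution: "
          f"explained variance = {pca.explained_variance_ratio_.sum():.2f}")
```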

Relevance:

100.00%

Publisher:

Abstract:

With the availability of new-generation sequencing technologies, bacterial genome projects have undergone a major boost. Still, chromosome completion requires costly and time-consuming gap closure, especially when the genome contains highly repetitive elements. However, incomplete genome data may be sufficiently informative to derive the information being sought. For emerging pathogens, i.e. newly identified pathogens, the lack of release of genome data during the gap-closure stage is clearly medically counterproductive. We thus investigated the feasibility of a dirty genome approach, i.e. the release of unfinished genome sequences to develop serological diagnostic tools. We showed that almost the whole genome sequence of the emerging pathogen Parachlamydia acanthamoebae could be retrieved even with relatively short reads from the Genome Sequencer 20 and Solexa platforms. The bacterial proteome was analyzed to select immunogenic proteins, which were then expressed and used to develop the first steps of an ELISA. This work constitutes a proof of principle for the dirty genome approach, i.e. the use of unfinished genome sequences of pathogenic bacteria, coupled with proteomics, to rapidly identify new immunogenic proteins useful for developing specific diagnostic tests in the future, such as ELISA, immunohistochemistry and direct antigen detection. Although applied here to an emerging pathogen, this combined dirty-genome-sequencing/proteomics approach may be used for any pathogen for which better diagnostics are needed. These genome sequences may also be very useful for developing DNA-based diagnostic tests. All these diagnostic tools will allow further evaluation of the pathogenic potential of this obligate intracellular bacterium.
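
As a hedged sketch of one early step in such a dirty-genome workflow, the Biopython snippet below pulls long open reading frames out of unfinished contigs so that their protein products could be screened downstream (e.g. for predicted immunogenicity before expression and ELISA development). This is not the authors' pipeline; the file name contigs.fasta and the 100-residue length cutoff are assumptions.

```python
# Illustrative sketch (not the authors' pipeline): extract long ORFs from
# unfinished contigs so their protein products can be screened downstream,
# e.g. for predicted immunogenicity. "contigs.fasta" and the 100-residue
# cutoff are assumptions.
from Bio import SeqIO

MIN_AA = 100  # minimum ORF length, in residues

def candidate_proteins(fasta_path):
    for record in SeqIO.parse(fasta_path, "fasta"):
        for strand_seq in (record.seq, record.seq.reverse_complement()):
            for frame in range(3):
                sub = strand_seq[frame:]
                sub = sub[: len(sub) - len(sub) % 3]        # trim to full codons
                for peptide in sub.translate().split("*"):  # split at stop codons
                    if len(peptide) >= MIN_AA:
                        yield record.id, str(peptide)

if __name__ == "__main__":
    for contig_id, protein in candidate_proteins("contigs.fasta"):
        print(contig_id, len(protein))
```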

Relevance:

100.00%

Publisher:

Abstract:

Three-dimensional sequence stratigraphy is a potent exploration and development tool for the discovery of subtle stratigraphic traps. Reservoir morphology, heterogeneity and subtle stratigraphic trapping mechanisms can be better understood through the systematic horizontal identification of the sedimentary facies of systems tracts on three-dimensional attribute maps, used as an important complement to sequence analysis of two-dimensional seismic lines and well-log data. On new prospects as well as on already-producing fields, the additional input of sequence analysis on three-dimensional data enables the identification, location and precise delimitation of new potentially productive zones. The first part of this paper presents four typical horizontal seismic facies assigned to the successive systems tracts of a third- or fourth-order sequence deposited under inner- to outer-neritic conditions on a clastic shelf. The construction of this synthetic representative sequence is based on the observed reproducibility of the horizontal seismic-facies response to cyclic eustatic events on more than 35 sequences recorded in the Gulf Coast Plio-Pleistocene and Late Miocene, offshore Louisiana, in the West Cameron region of the Gulf of Mexico. The second part shows how three-dimensional sequence stratigraphy can contribute to localizing and understanding the sedimentary facies associated with productive zones. A case study in the early Middle Miocene Cibicides opima sands shows multiple stacked gas accumulations in the top slope fan, prograding wedge and basal transgressive systems tracts of the third-order sequence between SB 15.5 and SB 13.8 Ma.
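
To make the notion of a horizontal three-dimensional attribute map concrete, the numpy sketch below extracts an RMS-amplitude map in a short window below a mapped horizon, the kind of horizon-parallel attribute on which horizontal seismic-facies identification is commonly based. The seismic cube, the horizon, and the 24 ms window length are synthetic assumptions, not data from the study.

```python
# Illustrative sketch: horizon-parallel RMS-amplitude map, one common kind of
# three-dimensional attribute used for horizontal facies mapping. The seismic
# cube, horizon and 24 ms window are synthetic assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_il, n_xl, n_t, dt_ms = 120, 160, 500, 4           # inlines, xlines, samples, sample rate
cube = rng.normal(size=(n_il, n_xl, n_t))           # stand-in amplitude cube
horizon_ms = 800 + 40 * rng.random((n_il, n_xl))    # picked horizon, in ms

win_ms = 24                                         # extraction window below horizon
win = win_ms // dt_ms

rms_map = np.zeros((n_il, n_xl))
top = (horizon_ms / dt_ms).astype(int)
for i in range(n_il):
    for j in range(n_xl):
        trace = cube[i, j, top[i, j]: top[i, j] + win]
        rms_map[i, j] = np.sqrt(np.mean(trace ** 2))

print(rms_map.shape, rms_map.mean())
```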

Relevance:

100.00%

Publisher:

Abstract:

The object of game theory lies in the analysis of situations where different social actors have conflicting requirements and where their individual decisions will all influence the global outcome. In this framework, several games have been invented to capture the essence of various dilemmas encountered in many common and important socio-economic situations. Even though these games often succeed in helping us understand human or animal behavior in interactive settings, some experiments have shown that people tend to cooperate with each other in situations for which classical game theory strongly recommends them to do the exact opposite. Several mechanisms have been invoked to try to explain the emergence of this unexpected cooperative attitude. Among them, repeated interaction, reputation, and belonging to a recognizable group have often been mentioned. However, the work of Nowak and May (1992) showed that the simple fact of arranging the players according to a spatial structure and only allowing them to interact with their immediate neighbors is sufficient to sustain a certain amount of cooperation even when the game is played anonymously and without repetition. Nowak and May's study and much of the following work were based on regular structures such as two-dimensional grids. Axelrod et al. (2002) showed that by randomizing the choice of neighbors, i.e. by actually giving up a strictly local geographical structure, cooperation can still emerge, provided that the interaction patterns remain stable over time. This is a first step towards a social network structure.

However, following pioneering work by sociologists in the sixties, such as that of Milgram (1967), in the last few years it has become apparent that many social and biological interaction networks, and even some technological networks, have particular, and partly unexpected, properties that set them apart from regular or random graphs. Among other things, they usually display broad degree distributions and a small-world topological structure. Roughly speaking, a small-world graph is a network where any individual is relatively close, in terms of social ties, to any other individual, a property also found in random graphs but not in regular lattices. However, in contrast with random graphs, small-world networks also have a certain amount of local structure, as measured, for instance, by a quantity called the clustering coefficient. In the same vein, many real conflicting situations in economics and sociology are well described neither by a fixed geographical arrangement of the individuals on a regular lattice nor by a random graph. Furthermore, it is well known that network structure can strongly influence dynamical phenomena such as the way diseases spread across a population and the way ideas or information are transmitted. Therefore, in the last decade, research attention has naturally shifted from random and regular graphs towards better models of social interaction structures.

The primary goal of this work is to discover whether or not the underlying graph structure of real social networks can help explain why one finds higher levels of cooperation in populations of human beings or animals than what is prescribed by classical game theory. To meet this objective, I start by thoroughly studying a real scientific coauthorship network and showing, using diverse statistical measurements, how it differs from biological or technological networks. Furthermore, I extract and describe its community structure, taking into account the intensity of each collaboration. Finally, I investigate the temporal evolution of the network, from its inception to its state at the time of the study in 2006, suggesting also an effective view of it as opposed to a historical one. Thereafter, I combine evolutionary game theory with several network models, along with the studied coauthorship network, in order to highlight which specific network properties foster cooperation and to shed some light on the various mechanisms responsible for maintaining that cooperation. I point out that, to resist defection, cooperators take advantage, whenever possible, of the degree heterogeneity of social networks and their underlying community structure. Finally, I show that the level and stability of cooperation depend not only on the game played, but also on the evolutionary dynamic rules used and on the way individual payoffs are calculated.
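
The sketch below is a compact implementation of the Nowak and May (1992) spatial prisoner's dilemma referred to above: cooperators and defectors sit on a toroidal two-dimensional grid, play their eight neighbours, and then imitate the highest-scoring strategy in their neighbourhood. The lattice size, temptation payoff b, and number of rounds are illustrative assumptions, not parameters from the thesis.

```python
# Minimal sketch of the Nowak & May (1992) spatial prisoner's dilemma:
# cooperators (1) and defectors (0) on a toroidal grid play their 8 neighbours,
# then each cell imitates the highest-scoring strategy in its neighbourhood.
# Grid size, temptation b, and number of rounds are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
L, b, rounds = 50, 1.8, 100                         # lattice side, temptation payoff, steps
strategy = (rng.random((L, L)) < 0.5).astype(int)   # 1 = cooperate, 0 = defect

def neighbour_shifts():
    return [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]

for _ in range(rounds):
    # Payoffs: C vs C -> 1, D vs C -> b, all other pairings -> 0 (weak dilemma).
    payoff = np.zeros((L, L))
    for di, dj in neighbour_shifts():
        nbr = np.roll(np.roll(strategy, di, axis=0), dj, axis=1)
        payoff += np.where(strategy == 1, nbr * 1.0, nbr * b)
    # Imitation: each site adopts the strategy of its best-scoring neighbour
    # (keeping its own strategy if no neighbour did better).
    best_payoff = payoff.copy()
    best_strategy = strategy.copy()
    for di, dj in neighbour_shifts():
        nbr_pay = np.roll(np.roll(payoff, di, axis=0), dj, axis=1)
        nbr_str = np.roll(np.roll(strategy, di, axis=0), dj, axis=1)
        better = nbr_pay > best_payoff
        best_payoff = np.where(better, nbr_pay, best_payoff)
        best_strategy = np.where(better, nbr_str, best_strategy)
    strategy = best_strategy

print("fraction of cooperators after", rounds, "rounds:", strategy.mean())
```

With these weak-dilemma payoffs (R = 1, T = b, S = P = 0), clusters of cooperators typically persist for moderate values of b, which is the spatial effect the abstract builds on before moving to more realistic network topologies.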

Relevance:

100.00%

Publisher:

Abstract:

Quantifying the spatial configuration of hydraulic conductivity (K) in heterogeneous geological environments is essential for accurate predictions of contaminant transport, but is difficult because of the inherent limitations in resolution and coverage associated with traditional hydrological measurements. To address this issue, we consider crosshole and surface-based electrical resistivity geophysical measurements, collected over time during a saline tracer experiment. We use a Bayesian Markov chain Monte Carlo (MCMC) methodology to jointly invert the dynamic resistivity data, together with borehole tracer concentration data, to generate multiple posterior realizations of K that are consistent with all available information. We do this within a coupled inversion framework, whereby the geophysical and hydrological forward models are linked through an uncertain relationship between electrical resistivity and concentration. To minimize computational expense, a facies-based subsurface parameterization is developed. The Bayesian MCMC methodology allows us to explore the potential benefits of including the geophysical data in the inverse problem by examining their effect on our ability to identify fast flow paths in the subsurface, and their impact on hydrological prediction uncertainty. Using a complex, geostatistically generated, two-dimensional numerical example representative of a fluvial environment, we demonstrate that flow model calibration is improved and prediction error is decreased when the electrical resistivity data are included. The worth of the geophysical data is found to be greatest for long spatial correlation lengths of subsurface heterogeneity with respect to wellbore separation, where flow and transport are largely controlled by highly connected flow paths.
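
As a toy illustration of the joint-inversion idea, the sketch below runs a random-walk Metropolis sampler in which a single hydraulic-conductivity-like parameter is constrained jointly by a tracer-like and a resistivity-like observable through a combined likelihood. The forward models, noise levels, and petrophysical link are invented for illustration and are far simpler than the facies-based, field-scale parameterization used in the study.

```python
# Toy illustration of joint MCMC inversion (far simpler than the study's
# facies-based setup): a single hydraulic-conductivity-like parameter k is
# constrained jointly by a "tracer" and a "resistivity" observable that are
# linked through an assumed petrophysical relation. All forward models and
# noise levels here are invented for illustration.
import numpy as np

rng = np.random.default_rng(3)

k_true = 2.0
def tracer_forward(k):      return 10.0 / k          # hypothetical arrival time
def resistivity_forward(k): return 50.0 + 5.0 * k    # hypothetical bulk resistivity

d_tracer = tracer_forward(k_true) + rng.normal(scale=0.2)
d_res    = resistivity_forward(k_true) + rng.normal(scale=1.0)

def log_likelihood(k):
    r1 = (d_tracer - tracer_forward(k)) / 0.2
    r2 = (d_res - resistivity_forward(k)) / 1.0
    return -0.5 * (r1 ** 2 + r2 ** 2)

# Random-walk Metropolis over log k (keeps k positive).
samples, log_k = [], 0.0
ll = log_likelihood(np.exp(log_k))
for _ in range(20000):
    prop = log_k + rng.normal(scale=0.1)
    ll_prop = log_likelihood(np.exp(prop))
    if np.log(rng.random()) < ll_prop - ll:
        log_k, ll = prop, ll_prop
    samples.append(np.exp(log_k))

post = np.array(samples[5000:])                      # drop burn-in
print(f"posterior mean k = {post.mean():.2f}, 95% interval = "
      f"{np.percentile(post, [2.5, 97.5])}")
```

Dropping either data term from the likelihood widens the posterior interval, which mirrors the abstract's point that adding the geophysical data reduces prediction uncertainty.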

Relevance:

100.00%

Publisher:

Abstract:

PURPOSE: All methods presented to date for mapping both conductivity and permittivity rely on multiple acquisitions to compute quantitatively the magnitude of the radiofrequency transmit field, B1+. In this work, we propose a method to compute both conductivity and permittivity based solely on relative receive coil sensitivities (B1-), which can be obtained in a single measurement without the need either to explicitly perform transmit/receive phase separation or to make assumptions regarding those phases. THEORY AND METHODS: To demonstrate the validity and the noise sensitivity of our method, we used electromagnetic finite-difference simulations of a 16-channel transceiver array. To validate our methodology experimentally at 7 Tesla, multi-compartment phantom data were acquired using a standard 32-channel receive coil system and two-dimensional (2D) and 3D gradient-echo acquisitions. The reconstructed electrical properties were correlated with those measured using dielectric probes. RESULTS: The method was demonstrated both in simulations and in phantom data, with correlations to both the modeled and the bench measurements being close to identity. The noise properties were modeled and understood. CONCLUSION: The proposed methodology allows the electrical properties of a sample to be determined quantitatively using any MR contrast, with the only constraints being the need for four or more receive coils and high SNR. Magn Reson Med, 2014. © 2014 Wiley Periodicals, Inc.
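
The snippet below is not the relative-receive-sensitivity method proposed in the abstract; it sketches the conventional transceive-phase Helmholtz approximation, sigma ≈ Laplacian(phase) / (mu0 * omega), purely to make the underlying physics concrete. The synthetic phase map, the 7 T Larmor frequency, and the voxel size are assumptions.

```python
# Conventional phase-based Helmholtz EPT sketch: sigma ~ laplacian(phase)/(mu0*omega).
# This is NOT the relative-receive-sensitivity method of the abstract; it is the
# standard approximation, shown only to make the underlying physics concrete.
# The phase map, field strength and voxel size below are synthetic assumptions.
import numpy as np

mu0 = 4e-7 * np.pi                      # vacuum permeability [H/m]
omega = 2 * np.pi * 297.2e6             # Larmor angular frequency near 7 T [rad/s]
dx = 1.5e-3                             # isotropic voxel size [m]

# Synthetic transceive phase over a 64^3 volume: a quadratic phase bowl whose
# curvature corresponds to a uniform conductivity of ~0.5 S/m.
sigma_true = 0.5
n = 64
x = (np.arange(n) - n / 2) * dx
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
phase = sigma_true * mu0 * omega * (X**2 + Y**2 + Z**2) / 6.0

def laplacian(f, h):
    return sum(np.gradient(np.gradient(f, h, axis=a), h, axis=a) for a in range(3))

sigma_map = laplacian(phase, dx) / (mu0 * omega)
print("reconstructed conductivity (interior mean):",
      sigma_map[8:-8, 8:-8, 8:-8].mean())  # trim edges where gradients are biased
```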