157 results for sensed space environments


Relevance:

20.00%

Publisher:

Abstract:

Although gene by environment interactions may play a key role in the maintenance of genetic polymorphisms, little is known about the ecological factors involved in these interactions. We investigated whether food supply and parasites can mediate covariation between the degree of adult pheomelanin-based coloration, a heritable trait, and offspring body mass in the tawny owl (Strix aluco). We swapped clutches between nests to allocate genotypes randomly among environments. Three weeks after hatching, we challenged the immune system of 80 unrelated nestlings with either a phytohemagglutinin (PHA) or a lipopolysaccharide, surrogates of alternative parasites, and then fed them ad lib. or food-restricted them during the following 6 days in the laboratory. Whatever the immune challenge, nestlings fed ad lib. converted food more efficiently into body mass when their biological mother was dark pheomelanic. In contrast, food-restricted nestlings challenged with PHA lost less body mass when their biological mother was pale pheomelanic. Nestling tawny owls born from differently melanic mothers thus show differing reaction norms relative to food availability and parasitism. This suggests that dark and pale pheomelanic owls reflect alternative adaptations to food availability and parasites, factors known to vary in space and time.

Relevance:

20.00%

Publisher:

Abstract:

The spatial resolution visualized with hydrological models and the conceptualized images of subsurface hydrological processes often exceed the resolution of the data collected with classical instrumentation at the field scale. In recent years it has become increasingly possible to close this gap with point-like field data through the application of hydrogeophysical methods at the field scale. Among the common geophysical exploration techniques, electric and electromagnetic methods arguably have the greatest sensitivity to hydrologically relevant parameters. Of particular interest in this context are induced polarization (IP) measurements, which essentially constrain the capacity of a probed subsurface region to store an electrical charge. In the absence of metallic conductors the IP response is largely driven by current conduction along the grain surfaces. This offers the perspective of linking such measurements to the characteristics of the solid-fluid interface and thus, at least in unconsolidated sediments, should allow for first-order estimates of the permeability structure.

While the IP effect is well explored through laboratory experiments and in part verified through field data for clay-rich environments, the applicability of IP-based characterizations to clay-poor aquifers is not clear. For example, polarization mechanisms like membrane polarization are not applicable in the rather wide pore systems of clay-free sands, and the direct transposition of Schwarz's theory, which relates the polarization of spheres to the relaxation mechanism of polarized cells, to complex natural sediments yields ambiguous results.

In order to improve our understanding of the structural origins of IP signals in such environments, as well as their correlation with pertinent hydrological parameters, various laboratory measurements were conducted.
We considered saturated quartz samples with a grain size spectrum ranging from fine sand to fine gravel, that is, grain diameters between 0.09 and 5.6 mm, as well as pertinent mixtures thereof, which can be regarded as proxies for widespread alluvial deposits. The pore space characteristics were altered by changing (i) the grain size spectra, (ii) the degree of compaction, and (iii) the level of sorting. We then examined how these changes affect the SIP response, the hydraulic conductivity, and the specific surface area of the considered samples, while keeping any electrochemical variability during the measurements as small as possible. The results do not follow simple relationships to single parameters such as grain size; the complexity of naturally occurring media is not yet sufficiently represented when modelling IP. At the same time, a simple correlation with permeability was found to be strong and consistent. Hence, adaptations aimed at better representing the geo-structure of natural porous media were applied to the simplified model space used in Schwarz's theory of the IP effect. The resulting semi-empirical relationship was found to predict more accurately the IP effect and its relation to grain size and permeability. If combined with recent findings about the effect of pore fluid electrochemistry, together with advanced complex resistivity tomography, these results will allow us to picture diverse aspects of the subsurface with relative certainty. Within the framework of single measurement campaigns, hydrologists can then collect data carrying information about both the geo-structure and the geo-chemistry of the subsurface.
However, additional research efforts will be necessary to further improve the understanding of the physical origins of the IP effect and to minimize the potential for false interpretations.
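The grain-size dependence of the IP relaxation discussed above can be sketched with Schwarz's relation for polarized spherical grains, τ = r²/(2D). This is a minimal illustration, not the adapted semi-empirical model of the thesis; the diffusion coefficient D ≈ 1e-9 m²/s for surface counter-ions is an assumed, typical value.

```python
# Toy sketch of Schwarz's relaxation-time relation, tau = r^2 / (2 * D),
# evaluated over the grain-diameter range quoted above (0.09-5.6 mm).
D = 1e-9  # m^2/s, assumed counter-ion surface diffusion coefficient

def schwarz_relaxation_time(diameter_m, diffusion=D):
    """Relaxation time (s) of a polarized spherical grain of given diameter."""
    r = diameter_m / 2.0
    return r ** 2 / (2.0 * diffusion)

for d_mm in (0.09, 0.5, 2.0, 5.6):
    tau = schwarz_relaxation_time(d_mm * 1e-3)
    print(f"d = {d_mm:4.2f} mm  ->  tau = {tau:10.3f} s")
```

The quadratic dependence on grain radius is what makes the relaxation time such a strong, if ambiguous, indicator of textural properties in the samples described above.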

Relevance:

20.00%

Publisher:

Abstract:

The study reports a set of forty proteinogenic histidine-containing dipeptides as potential carbonyl quenchers. The peptides were chosen to cover the accessible chemical space as exhaustively as possible, and their quenching activities toward 4-hydroxy-2-nonenal (HNE) and pyridoxal were evaluated by HPLC analyses. The peptides were capped at the C-terminus as methyl esters or amides to favor their resistance to proteolysis, and diastereoisomeric pairs were considered to reveal the influence of configuration on quenching. On average, the examined dipeptides are less active than the parent compound carnosine (βAla-His), thus emphasizing the unfavorable effect of shortening the βAla residue, as confirmed by the control dipeptide Gly-His. Nevertheless, some peptides show promising activities toward HNE combined with a remarkable selectivity. The results emphasize the beneficial role of aromatic and positively charged residues, while negatively charged and H-bonding side chains have a detrimental effect on quenching. As a trend, ester derivatives are slightly more active than amides, while heterochiral peptides are more active than their homochiral diastereoisomers. Overall, the results reveal that quenching activity strongly depends on conformational effects and vicinal residues (as evidenced by the reported QSAR analysis), offering insightful clues for the design of improved carbonyl quenchers and for rationalizing the specific reactivity of histidine residues within proteins.

Relevance:

20.00%

Publisher:

Abstract:

Understanding how so many different species can coexist in nature is a fundamental and long-standing question in ecology. Community diversity and composition are known to be influenced by heterogeneity in environmental conditions and by disturbance. Although in nature the spatial distribution of environmental conditions is frequently autocorrelated, this aspect is seldom considered in models investigating species coexistence. In this work, we therefore addressed several questions pertaining to species coexistence and composition in spatially autocorrelated environments, using numerical simulations. To take this spatial aspect into account, we developed a spatially explicit metacommunity model (a set of communities linked by dispersal of species). In this model, species are trophically equivalent and compete for space in a heterogeneous environment. Species are characterized by six life-history traits: niche optimum, niche breadth, dispersal, competitiveness, reproductive investment, and survival rate.
We were particularly interested in the influence of environmental spatial autocorrelation and disturbance on species diversity and on the traits of the species favoured in the metacommunity. We showed that spatial autocorrelation can have antagonistic effects on diversity depending on the disturbance rate. Similarly, spatial autocorrelation interacted with disturbance rate and survival rate to shape the mean dispersal ability observed in the metacommunity. Our results also revealed that many species with various degrees of specialization (i.e. different niche breadths) can coexist. However, specialist species were favoured in the absence of disturbance and when dispersal was unlimited. In contrast, a high disturbance rate selected for more generalist species, associated with low competitive ability. The spatial structure of the environment, together with disturbance and species traits, thus strongly impacts species diversity and, more importantly, species composition. Species composition is known to affect several important metacommunity properties, such as ecosystem functioning and resistance and reaction to invasion, habitat fragmentation, and climate change. This work allowed a better understanding of the mechanisms responsible for species composition, which is of crucial importance for predicting the fate of natural metacommunities in changing environments.
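A minimal way to see how spatial autocorrelation of environmental conditions can be imposed in a simulation like the one described above is to smooth white noise: the smoothing radius below is a stand-in for the autocorrelation range, and this is an illustrative sketch rather than the thesis's actual landscape generator.

```python
import random

def autocorrelated_environment(n, smooth_radius, seed=42):
    """1-D environmental gradient: white noise smoothed with a moving average.
    A larger smooth_radius yields stronger spatial autocorrelation."""
    rng = random.Random(seed)
    noise = [rng.random() for _ in range(n)]
    if smooth_radius == 0:
        return noise
    env = []
    for i in range(n):
        window = noise[max(0, i - smooth_radius): i + smooth_radius + 1]
        env.append(sum(window) / len(window))
    return env

def lag1_autocorrelation(values):
    """Pearson correlation between the series and itself shifted by one cell."""
    x, y = values[:-1], values[1:]
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

raw = autocorrelated_environment(2000, 0)       # uncorrelated landscape
smooth = autocorrelated_environment(2000, 10)   # autocorrelated landscape
print(lag1_autocorrelation(raw), lag1_autocorrelation(smooth))
```

Species niche optima can then be matched against such a landscape; varying the smoothing radius is the knob that the study's "spatial autocorrelation" treatment corresponds to.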

Relevance:

20.00%

Publisher:

Abstract:

The identification of the presence of active signaling between astrocytes and neurons in a process termed gliotransmission has caused a paradigm shift in our thinking about brain function. However, we are still in the early days of the conceptualization of how astrocytes influence synapses, neurons, networks, and ultimately behavior. In this Perspective, our goal is to identify emerging principles governing gliotransmission and consider the specific properties of this process that endow the astrocyte with unique functions in brain signal integration. We develop and present hypotheses aimed at reconciling confounding reports and define open questions to provide a conceptual framework for future studies. We propose that astrocytes mainly signal through high-affinity slowly desensitizing receptors to modulate neurons and perform integration in spatiotemporal domains complementary to those of neurons.

Relevance:

20.00%

Publisher:

Abstract:

To optimally manage a metapopulation, managers and conservation biologists can favor a particular spatial distribution of habitat (e.g. aggregated or random). However, which spatial distribution provides the highest habitat occupancy remains ambiguous, and numerous contradictory results exist. Habitat occupancy depends on the balance between local extinction and colonization, so the issue becomes even more puzzling when various forms of relationship - positive or negative co-variation - exist between local extinction and colonization rates within habitat types. Using an analytical model, we first demonstrate that the habitat occupancy of a metapopulation is significantly affected by the presence of habitat types that display different extinction-colonization dynamics, considering (i) variation in extinction or colonization rate and (ii) positive and negative co-variation between the two processes within habitat types. We then examine, with a spatially explicit stochastic simulation model, how different degrees of habitat aggregation affect occupancy predictions under similar scenarios. An aggregated distribution of habitat types provides the highest habitat occupancy when local extinction risk is spatially heterogeneous and high in some places, while a random distribution of habitat provides the highest habitat occupancy when colonization rates are high. Because spatial variability in local extinction rates always favors aggregation of habitats, we only need to know the spatial variability in colonization rates to determine whether or not aggregating habitat types increases metapopulation occupancy. From a comparison of the results obtained with the analytical model and with the spatially explicit stochastic simulation model, we determine the conditions under which a simple metapopulation model closely matches the results of a more complex spatial simulation model with explicit heterogeneity.
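The extinction-colonization balance that drives habitat occupancy can be illustrated with the classic Levins metapopulation model, dp/dt = c·p(1−p) − e·p, whose equilibrium occupancy is p* = 1 − e/c. This is a deliberately simpler, spatially implicit stand-in for the paper's analytical model, shown only to make the balance concrete.

```python
def levins_equilibrium(c, e):
    """Equilibrium fraction of occupied patches: p* = 1 - e/c (0 if e >= c)."""
    return max(0.0, 1.0 - e / c)

def simulate_levins(c, e, p0=0.5, dt=0.01, steps=200_000):
    """Euler integration of dp/dt = c*p*(1-p) - e*p."""
    p = p0
    for _ in range(steps):
        p += dt * (c * p * (1.0 - p) - e * p)
    return p

c, e = 0.4, 0.1  # colonization and extinction rates (illustrative values)
print(simulate_levins(c, e), levins_equilibrium(c, e))
```

Raising the extinction rate e, or lowering the colonization rate c, drives p* down; the paper's contribution is to ask what happens when e and c co-vary across habitat types and space, which this homogeneous model cannot capture.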

Relevance:

20.00%

Publisher:

Abstract:

BACKGROUND: Until now, the risks of public exposure to a sudden decompression have been related to civil aviation and, to a lesser extent, to diving activities. However, engineers are currently planning the use of low-pressure environments for underground transportation. This method has been proposed for the future Swissmetro, a high-speed underground train designed for inter-urban links in Switzerland. HYPOTHESIS: The use of a low-pressure environment in an underground public transportation system must be considered carefully with regard to decompression risks. Indeed, due to the enclosed environment, both the decompression kinetics and the safety measures may differ from aviation decompression cases. METHOD: A theoretical study of decompression risks was conducted at an early stage of the Swissmetro project. A three-compartment theoretical model, based on the physics of fluids, was implemented with flow processing software (Ithink 5.0). Simulations were conducted to analyze "decompression scenarios" for a wide range of parameters relevant in the context of the Swissmetro main study. RESULTS: Simulation results cover a wide range from slow to explosive decompression, depending on the simulation parameters. Not surprisingly, the leaking orifice area has a tremendous impact on barotraumatic effects, while the tunnel pressure may significantly affect both hypoxic and barotraumatic effects. Calculations also showed that reducing the free space around the vehicle may significantly mitigate an accidental decompression. CONCLUSION: Numerical simulations are relevant for assessing decompression risks in the future Swissmetro system. The decompression model has proven useful in assisting both design choices and safety management.
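The dominant role of the leaking orifice area can be seen even in a toy single-compartment leak model (the study itself used a three-compartment model in Ithink; the linear-relaxation form and the effective outflow coefficient k below are simplifying assumptions, not the authors' equations).

```python
import math

def cabin_pressure(t, area_m2, volume_m3=200.0, p0=84_000.0,
                   p_tunnel=10_000.0, k=200.0):
    """Cabin pressure (Pa) after t seconds of leakage through an orifice.

    Toy linear-relaxation model: dP/dt = -(k*A/V) * (P - P_tunnel), hence
    P(t) = P_tunnel + (P0 - P_tunnel) * exp(-k*A*t/V).
    k (m/s) is an assumed effective outflow coefficient; cabin volume,
    initial pressure, and tunnel pressure are illustrative values.
    """
    rate = k * area_m2 / volume_m3
    return p_tunnel + (p0 - p_tunnel) * math.exp(-rate * t)

# A small leak (1 cm^2) vs. a large breach (0.1 m^2) after 60 s:
print(cabin_pressure(60, 1e-4), cabin_pressure(60, 0.1))
```

Because the decay rate scales linearly with the orifice area A, a thousand-fold larger breach collapses the cabin pressure toward the tunnel pressure within the same minute in which the small leak is barely noticeable, which is the qualitative result the abstract reports.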

Relevance:

20.00%

Publisher:

Abstract:

PURPOSE: Drug delivery to treat diseases of the posterior segment of the eye, such as choroidal neovascularization and its complications, is hampered by poor intraocular penetration and rapid elimination of the drug from the eye. The purpose of this study was to investigate the feasibility and tolerance of suprachoroidal injections of poly(ortho ester) (POE), a bioerodible and biocompatible polymer, as a biomaterial potentially useful for development of sustained drug delivery systems. METHODS: After tunnelization of the sclera, different formulations based on POE were injected (100 microL) into the suprachoroidal space of pigmented rabbits and compared with 1% sodium hyaluronate. Follow-up consisted of fundus observations, echography, fluorescein angiography, and histologic analysis over 3 weeks. RESULTS: After injection, POE spread in the suprachoroidal space at the posterior pole. It was well tolerated and progressively disappeared from the site of injection without sequelae. No bleeding or retinal detachment occurred. Echographic pictures showed that the material was present in the suprachoroidal space for 3 weeks. Angiography revealed minor pigment irregularities at the site of injection, but no retinal edema or necrosis. Histology showed that POE was well tolerated in the choroid. CONCLUSIONS: POE suprachoroidal injections, an easy, controllable, and reproducible procedure, were well tolerated in the rabbit eye. POE appears to be a promising biomaterial to deliver drugs focally to the choroid and the retina.

Relevance:

20.00%

Publisher:

Abstract:

Most corporate codes of conduct and multi-stakeholder sustainability standards guarantee workers' rights to freedom of association and collective bargaining, but many authors are sceptical about the concrete impact of codes and standards of this kind. In this paper we use Hancher and Moran's (1998) concept of 'regulatory space' to assess the potential of private transnational regulation to support the growth of trade union membership and collective bargaining relationships, drawing on some preliminary case study results from a project on the impact of the International Finance Corporation's (IFC) social conditionality on worker organization and social dialogue. One of the major effects of neoliberal economic and industrial policy has been the routine exclusion of workers' organizations from regulatory processes on the grounds that they introduce inappropriate 'political' motives into what ought to be technical decision-making processes. This, rather than any direct attack on their capacity to take action, is what seems best to explain the global decline in union influence (Cradden 2004; Howell 2007; Howe 2012). The evidence we present in the paper suggests that private labour regulation may under certain conditions contribute to a reversal of this tendency, re-establishing the legitimacy of workers' organizations within regulatory processes and by extension the legitimacy of their use of economic and social power. We argue that guarantees of freedom of association and bargaining rights within private regulation schemes are effective to the extent that they can be used by workers' organizations in support of a claim for access to the regulatory space within which the terms and conditions of the employment relationship are determined. 
Our case study evidence shows that certain trade unions in East Africa have indeed been able to use IFC and other private regulation schemes as levers to win recognition from employers and to establish collective bargaining relationships. Although they did not attempt to use formal procedures to make a claim for the enforcement of freedom of association rights on behalf of their members, the unions did use enterprises' adherence to private regulation schemes as a normative point of reference in argument and political exchange about worker representation. For these unions, the regulation was a useful addition to the range of arguments that they could deploy as means to justify their demand for recognition by employers. By contrast, the private regulation that helps workers' organizations to win access to regulatory processes does little to ensure that they are able to participate meaningfully, whether in terms of technical capacity or of their ability to mobilize social power as a counterweight to the economic power of employers. To the extent that our East African unions were able to make an impact on terms and conditions of employment via their participation in regulatory space it was solely on the basis of their own capacities and resources and the application of national labour law.

Relevance:

20.00%

Publisher:

Abstract:

Breathing-induced bulk motion of the myocardium during data acquisition may cause severe image artifacts in coronary magnetic resonance angiography (MRA). Current motion compensation strategies include breath-holding or free-breathing MR navigator gating and tracking techniques. Navigator-based techniques have been further refined by the applications of sophisticated 2D k-space reordering techniques. A further improvement in image quality and a reduction of relative scanning duration may be expected from a 3D k-space reordering scheme. Therefore, a 3D k-space reordered acquisition scheme including a 3D navigator gated and corrected segmented k-space gradient echo imaging sequence for coronary MRA was implemented. This new zonal motion-adapted acquisition and reordering technique (ZMART) was developed on the basis of a numerical simulation of the Bloch equations. The technique was implemented on a commercial 1.5T MR system, and first phantom and in vivo experiments were performed. Consistent with the results of the theoretical findings, the results obtained in the phantom studies demonstrate a significant reduction of motion artifacts when compared to conventional (non-k-space reordered) gating techniques. Preliminary in vivo findings also compare favorably with the phantom experiments and theoretical considerations. Magn Reson Med 45:645-652, 2001.
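The intuition behind motion-adapted k-space reordering can be conveyed with a toy scheme: given a per-shot navigator motion estimate, assign the motion-sensitive central phase-encoding lines to the quietest shots. This is a drastic simplification for illustration only; the actual ZMART technique performs a 3D zonal reordering, and the function and variable names here are invented.

```python
def motion_adapted_order(ky_lines, shot_motion):
    """Assign each phase-encode line to one shot so that lines closest to the
    k-space centre (small |ky|) are acquired during the shots with the least
    navigator-detected motion. Returns a dict: shot index -> ky line."""
    lines_by_centrality = sorted(ky_lines, key=abs)            # centre first
    shots_by_quietness = sorted(range(len(shot_motion)),
                                key=lambda s: shot_motion[s])  # quietest first
    return dict(zip(shots_by_quietness, lines_by_centrality))

ky = list(range(-4, 5))                                  # 9 phase-encode lines
motion = [0.9, 0.1, 0.5, 0.3, 0.8, 0.2, 0.7, 0.4, 0.6]  # mm, per shot (toy data)
order = motion_adapted_order(ky, motion)
print(order[1])   # quietest shot (motion 0.1) acquires the centre line ky = 0
```

Since image contrast and gross structure are encoded near the centre of k-space, pairing those lines with the quietest shots is what reduces the visible motion artifacts; the periphery, acquired during noisier shots, mainly affects fine detail.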

Relevance:

20.00%

Publisher:

Abstract:

Abstract The main objective of this work is to show how the choice of the temporal dimension and of the spatial structure of the population influences an artificial evolutionary process. In the field of Artificial Evolution we can observe a common trend of synchronously evolving panmictic populations, i.e., populations in which any individual can be recombined with any other individual. Already in the '90s, the works of Spiessens and Manderick, Sarma and De Jong, and Gorges-Schleuter pointed out that, if a population is structured according to a mono- or bi-dimensional regular lattice, the evolutionary process shows a different dynamic with respect to the panmictic case. In particular, Sarma and De Jong studied the selection pressure (i.e., the diffusion of a best individual when the only active operator is selection) induced by a regular bi-dimensional structure of the population, proposing a logistic model of the selection pressure curves. This model supposes that the diffusion of a best individual in a population follows an exponential law. We show that such a model is inadequate to describe the process, since the growth speed must be quadratic or sub-quadratic in the case of a bi-dimensional regular lattice. New linear and sub-quadratic models are proposed for modeling the selection pressure curves in, respectively, mono- and bi-dimensional regular structures. These models are extended to describe the process when asynchronous evolutions are employed. Different population dynamics imply different search strategies of the resulting algorithm when the evolutionary process is used to solve optimisation problems. A benchmark of both discrete and continuous test problems is used to study the search characteristics of the different topologies and population update schemes.
In the last decade, the pioneering studies of Watts and Strogatz have shown that most real networks, in the biological and sociological worlds as well as in man-made structures, have mathematical properties that set them apart from regular and random structures. In particular, they introduced the concept of small-world graphs, and they showed that this new family of structures has interesting computing capabilities. Populations structured according to these new topologies are proposed, and their evolutionary dynamics are studied and modeled. We also propose asynchronous evolutions for these structures, and the resulting evolutionary behaviors are investigated. Many man-made networks have grown, and are still growing, incrementally, and explanations have been proposed for their actual shape, such as Albert and Barabasi's preferential-attachment growth rule. However, many actual networks seem to have undergone some kind of Darwinian variation and selection. Thus, how these networks might have come to be selected is an interesting yet unanswered question. In the last part of this work, we show how a simple evolutionary algorithm can enable the emergence of these kinds of structures for two prototypical problems of the automata networks world, the majority classification and synchronisation problems.
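The quadratic (rather than exponential) takeover growth claimed above is easy to reproduce on a 2D lattice. In the deterministic limit where a cell adopts the best individual as soon as a neighbour carries it, the set of copies grows as a diamond whose area is quadratic in time; real selection operators are stochastic, so this is a toy upper-bound sketch, not the thesis's full model.

```python
def takeover_counts(n, steps):
    """Deterministic best-individual diffusion on an n x n (non-toroidal) grid:
    a cell adopts the best individual as soon as a von Neumann neighbour
    carries it. Returns the number of 'best' cells after each step."""
    best = {(n // 2, n // 2)}           # a single best individual at the centre
    counts = [len(best)]
    for _ in range(steps):
        frontier = set()
        for (i, j) in best:
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < n and 0 <= nj < n:
                    frontier.add((ni, nj))
        best |= frontier
        counts.append(len(best))
    return counts

counts = takeover_counts(41, 15)
# Away from the borders the count is the diamond number 2*t*t + 2*t + 1,
# i.e. quadratic in t -- not the exponential growth a logistic fit assumes.
print(counts[:5])
```

Fitting a logistic (exponential early-phase) curve to such data systematically overestimates the early growth speed, which is the inadequacy the abstract points out.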

Relevance:

20.00%

Publisher:

Abstract:

Abstract The solvability of the problem of fair exchange in a synchronous system subject to Byzantine failures is investigated in this work. The fair exchange problem arises when a group of processes are required to exchange digital items in a fair manner, which means that either each process obtains the item it was expecting or no process obtains any information on the inputs of the others. After introducing a novel specification of fair exchange that clearly separates safety and liveness, we give an overview of the difficulty of solving such a problem in the context of a fully connected topology. On the one hand, we show that no solution to fair exchange exists in the absence of an identified process that every process can trust a priori; on the other, a well-known solution to fair exchange relying on a trusted third party is recalled. These two results lead us to complete our system model with a flexible representation of the notion of trust. We then show that fair exchange is solvable if and only if a connectivity condition, named the reachable majority condition, is satisfied. The necessity of the condition is proven by an impossibility result, and its sufficiency by presenting a general solution to fair exchange relying on a set of trusted processes. The focus then turns to a specific network topology in order to provide a fully decentralized, yet realistic, solution to fair exchange. The general solution mentioned above is optimized by reducing the computational load assumed by trusted processes as far as possible. Accordingly, our fair exchange protocol relies on trusted tamperproof modules that have limited communication abilities and are only required in key steps of the algorithm. This modular solution is then implemented in the context of a pedagogical application developed for illustrating and apprehending the complexity of fair exchange.
This application, which also includes the implementation of a wide range of Byzantine behaviors, allows executions of the algorithm to be set up and monitored through a graphical display. Surprisingly, some of our results on fair exchange seem contradictory with those found in the literature on secure multiparty computation, a problem from the field of modern cryptography, although the two problems have much in common. Both problems are closely related to the notion of a trusted third party, but their approaches and descriptions differ greatly. By introducing a common specification framework, a comparison is proposed in order to clarify their differences and the possible origins of the confusion between them. This leads us to introduce the problem of generalized fair computation, a generalization of fair exchange. Finally, a solution to this new problem is given by generalizing our modular solution to fair exchange.
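The well-known trusted-third-party solution recalled above can be sketched as follows: the mediator releases the items only once both parties have deposited theirs, which yields the all-or-nothing guarantee of the fair exchange specification. The class and message flow below are illustrative, not the thesis's protocol, and real implementations would add authentication and timeouts.

```python
class TrustedThirdParty:
    """Toy fair-exchange mediator: items are released only when both deposits
    are in; an aborted exchange yields nothing about the other party's input."""

    def __init__(self):
        self.deposits = {}

    def deposit(self, party, item):
        self.deposits[party] = item

    def settle(self, party_a, party_b):
        """Return (item_for_a, item_for_b), or (None, None) if either party
        failed to deposit -- no partial outcome is possible."""
        if party_a in self.deposits and party_b in self.deposits:
            return self.deposits[party_b], self.deposits[party_a]
        return None, None

ttp = TrustedThirdParty()
ttp.deposit("alice", "item-A")
ttp.deposit("bob", "item-B")
print(ttp.settle("alice", "bob"))   # both deposited -> fair exchange

lone = TrustedThirdParty()
lone.deposit("alice", "item-A")     # bob (Byzantine) never deposits
print(lone.settle("alice", "bob"))  # neither party obtains anything
```

Safety here is the all-or-nothing settle step; the thesis's contribution is to distribute this single point of trust over a set of trusted processes under the reachable majority condition.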