957 results for Analyze space
Abstract:
BACKGROUND: Until now, the risks of public exposure to sudden decompression have been associated with civil aviation and, to a lesser extent, with diving activities. However, engineers are currently planning the use of low-pressure environments for underground transportation. This method has been proposed for the future Swissmetro, a high-speed underground train designed for inter-urban links in Switzerland. HYPOTHESIS: The use of a low-pressure environment in an underground public transportation system must be considered carefully with regard to decompression risks. Indeed, due to the enclosed environment, both the decompression kinetics and the safety measures may differ from aviation decompression cases. METHOD: A theoretical study of decompression risks was conducted at an early stage of the Swissmetro project. A three-compartment theoretical model, based on the physics of fluids, was implemented with flow-processing software (Ithink 5.0). Simulations were conducted to analyze decompression scenarios across a wide range of parameters relevant to the Swissmetro main study. RESULTS: Simulation results cover a wide range from slow to explosive decompression, depending on the simulation parameters. Not surprisingly, the leaking orifice area has a tremendous impact on barotraumatic effects, while the tunnel pressure may significantly affect both hypoxic and barotraumatic effects. Calculations have also shown that reducing the free space around the vehicle may significantly mitigate an accidental decompression. CONCLUSION: Numerical simulations are relevant for assessing decompression risks in the future Swissmetro system. The decompression model has proven useful in assisting both design choices and safety management.
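Below is a minimal, illustrative sketch of the kind of lumped-compartment calculation the abstract describes. It is not the authors' three-compartment Ithink model: it uses only two rigid, isothermal compartments (vehicle cabin and tunnel section) connected by a leak orifice, an incompressible orifice-flow approximation that is only reasonable at moderate pressure ratios, and made-up example values for volumes, areas, and pressures.

```python
# Minimal sketch of a lumped-compartment decompression model, loosely inspired by
# the abstract above. NOT the authors' three-compartment Ithink model: two rigid,
# isothermal compartments (cabin and tunnel) and a simple incompressible
# orifice-flow approximation. All numerical values are illustrative assumptions.

import math

R = 8.314          # J/(mol K), universal gas constant
M_AIR = 0.029      # kg/mol, molar mass of air
T = 293.0          # K, isothermal assumption

def simulate(p_cabin=101_325.0,   # Pa, initial cabin pressure
             p_tunnel=10_000.0,   # Pa, assumed low-pressure tunnel
             v_cabin=300.0,       # m^3, cabin free volume (assumption)
             v_tunnel=5_000.0,    # m^3, tunnel section volume (assumption)
             orifice_area=0.01,   # m^2, leak area (key parameter in the study)
             cd=0.6,              # discharge coefficient (assumption)
             dt=0.01, t_end=60.0):
    """Euler integration of pressure equalization through a leak orifice."""
    t = 0.0
    history = [(t, p_cabin)]
    while t < t_end and p_cabin - p_tunnel > 1.0:
        rho_up = p_cabin * M_AIR / (R * T)                    # upstream gas density
        mdot = cd * orifice_area * math.sqrt(2.0 * rho_up * (p_cabin - p_tunnel))
        # isothermal ideal gas: dP = (R T / (M V)) * dm
        p_cabin -= (R * T / (M_AIR * v_cabin)) * mdot * dt
        p_tunnel += (R * T / (M_AIR * v_tunnel)) * mdot * dt
        t += dt
        history.append((t, p_cabin))
    return history

if __name__ == "__main__":
    for t, p in simulate()[::500]:
        print(f"t = {t:6.2f} s   cabin pressure = {p/1000:7.1f} kPa")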
Abstract:
Flexitime: between autonomy and constraints. A case study in Switzerland. By looking at how a new regulation is translated into everyday practices, this dissertation explores, through a specific case study, the degree of autonomy gained by wage-earners with the introduction of flexible working schedules. The guiding hypothesis is that by introducing procedural rules, flexitime opens the space for more daily negotiations, thereby reinforcing the effects of the power relations inherent in employment relationships. The goal is to understand, through a sociological approach, how employees experience a form of working time that transfers responsibility for time management to them, and how they integrate work-related constraints with their life outside the workplace. The first part of the dissertation sets up the context of the case study. It offers a definition of flexibility by situating it in the broader history of work time, as well as in relation to various organizational forms and cultural transformations. An international literature review and a focus on the Swiss case are offered. In the second part, the focus is narrowed to a specific Swiss firm specialized in mail order, where a system of individualized management of annual work time has been introduced. By combining quantitative and qualitative approaches, it is possible to analyze the determinants of practices internal to the firm and the determinants related to the employees themselves, as well as the way in which employees articulate these two orders of constraints. The results show that the implementation of flexible working time does not affect daily negotiation practices so much as it creates a set of informal rules. The autonomy of wage-earners is expressed first and foremost through their capacity to produce, negotiate, and legitimate these rules. The intra-individual level has proven to be central for the social regulation of flexible working time. It is not so much a question of legitimation as of a process of institutionalization nurtured by the energy wage-earners invest in their personal quest for a compromise between their various roles, identities, and aspirations. It is this individualized regulation that ensures the success of the system under study.
Abstract:
PURPOSE: Drug delivery to treat diseases of the posterior segment of the eye, such as choroidal neovascularization and its complications, is hampered by poor intraocular penetration and rapid elimination of the drug from the eye. The purpose of this study was to investigate the feasibility and tolerance of suprachoroidal injections of poly(ortho ester) (POE), a bioerodible and biocompatible polymer, as a biomaterial potentially useful for development of sustained drug delivery systems. METHODS: After tunnelization of the sclera, different formulations based on POE were injected (100 microL) into the suprachoroidal space of pigmented rabbits and compared with 1% sodium hyaluronate. Follow-up consisted of fundus observations, echography, fluorescein angiography, and histologic analysis over 3 weeks. RESULTS: After injection, POE spread in the suprachoroidal space at the posterior pole. It was well tolerated and progressively disappeared from the site of injection without sequelae. No bleeding or retinal detachment occurred. Echographic pictures showed that the material was present in the suprachoroidal space for 3 weeks. Angiography revealed minor pigment irregularities at the site of injection, but no retinal edema or necrosis. Histology showed that POE was well tolerated in the choroid. CONCLUSIONS: POE suprachoroidal injections, an easy, controllable, and reproducible procedure, were well tolerated in the rabbit eye. POE appears to be a promising biomaterial to deliver drugs focally to the choroid and the retina.
Abstract:
Most corporate codes of conduct and multi-stakeholder sustainability standards guarantee workers' rights to freedom of association and collective bargaining, but many authors are sceptical about the concrete impact of codes and standards of this kind. In this paper we use Hancher and Moran's (1998) concept of 'regulatory space' to assess the potential of private transnational regulation to support the growth of trade union membership and collective bargaining relationships, drawing on some preliminary case study results from a project on the impact of the International Finance Corporation's (IFC) social conditionality on worker organization and social dialogue. One of the major effects of neoliberal economic and industrial policy has been the routine exclusion of workers' organizations from regulatory processes on the grounds that they introduce inappropriate 'political' motives into what ought to be technical decision-making processes. This, rather than any direct attack on their capacity to take action, is what seems best to explain the global decline in union influence (Cradden 2004; Howell 2007; Howe 2012). The evidence we present in the paper suggests that private labour regulation may under certain conditions contribute to a reversal of this tendency, re-establishing the legitimacy of workers' organizations within regulatory processes and by extension the legitimacy of their use of economic and social power. We argue that guarantees of freedom of association and bargaining rights within private regulation schemes are effective to the extent that they can be used by workers' organizations in support of a claim for access to the regulatory space within which the terms and conditions of the employment relationship are determined. Our case study evidence shows that certain trade unions in East Africa have indeed been able to use IFC and other private regulation schemes as levers to win recognition from employers and to establish collective bargaining relationships. Although they did not attempt to use formal procedures to make a claim for the enforcement of freedom of association rights on behalf of their members, the unions did use enterprises' adherence to private regulation schemes as a normative point of reference in argument and political exchange about worker representation. For these unions, the regulation was a useful addition to the range of arguments that they could deploy as means to justify their demand for recognition by employers. By contrast, the private regulation that helps workers' organizations to win access to regulatory processes does little to ensure that they are able to participate meaningfully, whether in terms of technical capacity or of their ability to mobilize social power as a counterweight to the economic power of employers. To the extent that our East African unions were able to make an impact on terms and conditions of employment via their participation in regulatory space it was solely on the basis of their own capacities and resources and the application of national labour law.
Abstract:
Breathing-induced bulk motion of the myocardium during data acquisition may cause severe image artifacts in coronary magnetic resonance angiography (MRA). Current motion compensation strategies include breath-holding or free-breathing MR navigator gating and tracking techniques. Navigator-based techniques have been further refined by the application of sophisticated 2D k-space reordering techniques. A further improvement in image quality and a reduction of relative scanning duration may be expected from a 3D k-space reordering scheme. Therefore, a 3D k-space reordered acquisition scheme, including a 3D navigator-gated and corrected segmented k-space gradient echo imaging sequence for coronary MRA, was implemented. This new zonal motion-adapted acquisition and reordering technique (ZMART) was developed on the basis of a numerical simulation of the Bloch equations. The technique was implemented on a commercial 1.5T MR system, and first phantom and in vivo experiments were performed. Consistent with the theoretical findings, the results obtained in the phantom studies demonstrate a significant reduction of motion artifacts compared to conventional (non-k-space-reordered) gating techniques. Preliminary in vivo findings also compare favorably with the phantom experiments and theoretical considerations. Magn Reson Med 45:645-652, 2001.
Abstract:
The main objective of this work is to show how the choice of the temporal dimension and of the spatial structure of the population influences an artificial evolutionary process. In the field of Artificial Evolution we can observe a common trend of synchronously evolving panmictic populations, i.e., populations in which any individual can be recombined with any other individual. Already in the 1990s, the works of Spiessens and Manderick, Sarma and De Jong, and Gorges-Schleuter pointed out that, if a population is structured according to a mono- or bi-dimensional regular lattice, the evolutionary process shows a different dynamic with respect to the panmictic case. In particular, Sarma and De Jong studied the selection pressure (i.e., the diffusion of a best individual when only the selection operator is active) induced by a regular bi-dimensional structure of the population, proposing a logistic model of the selection pressure curves. This model assumes that the diffusion of a best individual in a population follows an exponential law. We show that such a model is inadequate to describe the process, since the growth speed must be quadratic or sub-quadratic in the case of a bi-dimensional regular lattice. New linear and sub-quadratic models are proposed for modeling the selection pressure curves in, respectively, mono- and bi-dimensional regular structures. These models are extended to describe the process when asynchronous evolutions are employed. Different population dynamics imply different search strategies of the resulting algorithm when the evolutionary process is used to solve optimisation problems. A benchmark of both discrete and continuous test problems is used to study the search characteristics of the different topologies and update policies of the populations. In the last decade, the pioneering studies of Watts and Strogatz have shown that most real networks, both in the biological and sociological worlds as well as in man-made structures, have mathematical properties that set them apart from regular and random structures. In particular, they introduced the concept of small-world graphs, and they showed that this new family of structures has interesting computing capabilities. Populations structured according to these new topologies are proposed, and their evolutionary dynamics are studied and modeled. We also propose asynchronous evolutions for these structures, and the resulting evolutionary behaviors are investigated. Many man-made networks have grown, and are still growing, incrementally, and explanations have been proposed for their actual shape, such as Albert and Barabasi's preferential attachment growth rule. However, many actual networks seem to have undergone some kind of Darwinian variation and selection. Thus, how these networks might have come to be selected is an interesting yet unanswered question. In the last part of this work, we show how a simple evolutionary algorithm can enable the emergence of these kinds of structures for two prototypical problems of the automata networks world: the majority classification and synchronisation problems.
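As a purely illustrative companion to the selection-pressure discussion above, the sketch below simulates the diffusion ("takeover") of a single best individual on a toroidal bi-dimensional lattice when only selection acts. It is a toy with a greedy synchronous neighbourhood update, not the exact selection operators or asynchronous policies studied in the thesis; the grid size and seed are arbitrary.

```python
# Minimal "takeover time" toy: how fast a single best individual spreads through a
# spatially structured population when only selection acts. Illustrative only;
# the thesis studies other selection operators and asynchronous update policies.

import random

def takeover_curve(side=64, generations=80, seed=0):
    """Return the fraction of cells holding the best fitness at each generation."""
    rng = random.Random(seed)
    n = side * side
    # random fitnesses in [0, 1); plant a single best individual with fitness 1.0
    grid = [[rng.random() for _ in range(side)] for _ in range(side)]
    grid[side // 2][side // 2] = 1.0
    curve = []
    for _ in range(generations):
        curve.append(sum(cell == 1.0 for row in grid for cell in row) / n)
        # synchronous update: every cell copies the best of its von Neumann neighbourhood
        new_grid = [[0.0] * side for _ in range(side)]
        for i in range(side):
            for j in range(side):
                neigh = [grid[i][j],
                         grid[(i - 1) % side][j], grid[(i + 1) % side][j],
                         grid[i][(j - 1) % side], grid[i][(j + 1) % side]]
                new_grid[i][j] = max(neigh)
        grid = new_grid
    return curve

if __name__ == "__main__":
    for gen, frac in enumerate(takeover_curve()):
        if gen % 10 == 0:
            print(f"generation {gen:3d}: best occupies {frac:6.1%} of the grid")
```

Printing the occupied fraction every few generations shows roughly quadratic-in-time growth before saturation (the front radius grows linearly), which is the qualitative behaviour the abstract contrasts with the logistic, exponential-growth model.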
Abstract:
Palinspastic reconstructions offer an ideal framework for geological, geographical, oceanographic, and climate studies. As historians of the Earth, "reconstructors" try to decipher its past. Since learning that continents move, geologists have been trying to retrieve the distribution of the continents through the ages. If Wegener's view of continental motion was revolutionary at the beginning of the 20th century, we have known since the early 1960s that continents do not drift aimlessly in the oceanic realm but belong to a larger ensemble combining continental and oceanic crust: the tectonic plates. Unfortunately, mainly for technical and historical reasons, this idea still does not receive sufficient attention within the reconstruction community. However, we are convinced that, by applying specific methods and principles, we can escape the traditional "Wegenerian" point of view and, at last, reach true plate tectonics. The main aim of this study is to defend this point of view by presenting our methods and tools in full detail. Starting with the paleomagnetic and paleogeographic data classically used in reconstruction studies, we developed a new methodology placing the plates and their kinematics at the centre of the issue.
Using assemblies of continents (referred to as "key assemblies") as anchors distributed along the whole time span of our study (from the Eocene back to the Cambrian), we develop geodynamic scenarios leading from one assembly to the next, from the past to the present. In between, lithospheric plates are progressively reconstructed by adding or removing oceanic material (symbolized by synthetic isochrons) to or from the major continents. Except during collisions, plates are moved as single rigid entities. The only evolving elements are the plate boundaries, which are preserved through time, follow a consistent geodynamic evolution, and always form an interconnected network through space. This "dynamic plate boundaries" approach integrates plate buoyancy factors, ocean spreading rates, subsidence patterns, stratigraphic and paleobiogeographic data, as well as major tectonic and magmatic events. It offers good control on plate kinematics and provides strong constraints for the model. Such a multi-source approach requires efficient data management. Prior to this study, the sheer mass of necessary data had become an obstacle that was difficult to surmount. GIS (Geographic Information Systems) and geodatabases are software tools specifically devoted to storing, managing, and analyzing spatially referenced data and their attributes. By developing the PaleoDyn database in ArcGIS we converted the mass of scattered data offered by the geological record into valuable geodynamic information, easily accessible for the creation of reconstructions. At the same time, by programming specific tools we both facilitated the reconstruction work (task automation) and enhanced the model (greatly increasing the kinematic control of plate motions thanks to plate velocity models). Based on the 340 newly defined terranes, we developed a set of 35 reconstructions, each associated with its own velocity model. Using this unique dataset we can now tackle major issues of modern geology, such as global sea-level variations and climate change. We started by studying one of the major unsolved issues of modern plate tectonics: the driving mechanism of plate motions. We observed that, throughout the Earth's history, plate rotation poles (which describe plate motions across the Earth's surface) tend to fall along a band running from the northern Pacific through northern South America, the central Atlantic, northern Africa, and central Asia up to Japan. Essentially, this means that plates tend to flee this median plane. Barring an unidentified methodological bias, we interpret this as a potential secular influence of the Moon on plate motions. The oceanic realm is the cornerstone of our model and we took particular care to reconstruct it in great detail. In this model, the oceanic crust is preserved from one reconstruction to the next. The crustal material is symbolized by synthetic isochrons of known age. We also reconstruct the margins (active or passive), the mid-ocean ridges, and the intra-oceanic subduction zones. Using this detailed oceanic dataset, we developed unique 3-D bathymetric models offering better precision than previously existing ones.
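For readers unfamiliar with how plate kinematics is encoded in such velocity models, the small sketch below shows the standard bookkeeping: a plate's motion is described by a rotation (Euler) pole and an angular rate, and the surface velocity of any point on the plate follows from v = ω × r. The pole, rate, and site coordinates below are made-up example values, not figures from the PaleoDyn reconstructions.

```python
# Illustrative sketch of rigid-plate kinematics: surface speed of a point on a
# plate rotating about an Euler pole. Example values are hypothetical.

import math

EARTH_RADIUS_KM = 6371.0

def to_cartesian(lat_deg, lon_deg):
    """Unit vector for a geographic position."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))

def surface_velocity_mm_per_yr(pole_lat, pole_lon, rate_deg_per_myr, site_lat, site_lon):
    """Speed of a point on a rigid plate rotating about an Euler pole."""
    w_hat = to_cartesian(pole_lat, pole_lon)
    r_hat = to_cartesian(site_lat, site_lon)
    omega = math.radians(rate_deg_per_myr)          # rad per Myr
    # |v| = |omega x r| = omega * R * sin(angle between pole and site)
    cross = (w_hat[1]*r_hat[2] - w_hat[2]*r_hat[1],
             w_hat[2]*r_hat[0] - w_hat[0]*r_hat[2],
             w_hat[0]*r_hat[1] - w_hat[1]*r_hat[0])
    speed_km_per_myr = omega * EARTH_RADIUS_KM * math.sqrt(sum(c*c for c in cross))
    return speed_km_per_myr  # km/Myr is numerically equal to mm/yr

if __name__ == "__main__":
    # hypothetical pole at (60N, 90W) rotating 0.5 deg/Myr, site at (10N, 20E)
    print(f"{surface_velocity_mm_per_yr(60, -90, 0.5, 10, 20):.1f} mm/yr")
```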
Abstract:
The resource utilization level in open laboratories of several universities has been shown to be very low. Our aim is to take advantage of those idle resources for parallel computation without disturbing the local load. In order to provide a system that lets us execute parallel applications in such a non-dedicated cluster, we use an integral scheduling system that considers both Space and Time sharing concerns. For dealing with the Time Sharing (TS) aspect, we use a technique based on the communication-driven coscheduling principle. This kind of TS system has implications for the Space Sharing (SS) system, forcing us to modify the way job scheduling is traditionally done. In this paper, we analyze the relation between the TS and the SS systems in a non-dedicated cluster. As a consequence of this analysis, we propose a new technique, termed 3DBackfilling. This proposal implements the well-known SS technique of backfilling, but applied to an environment where the MultiProgramming Level (MPL) of the parallel applications is greater than one. In addition, 3DBackfilling considers the requirements of the local workload running on each node. Our proposal was evaluated in a PVM/MPI Linux cluster, and it was compared with several more traditional SS policies applied to non-dedicated environments.
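As background for readers unfamiliar with the space-sharing technique being extended, here is a minimal sketch of classic EASY-style backfilling: jobs are served FCFS, the blocked head of the queue gets a reservation, and later jobs may start early only if they fit in the currently free nodes and finish before that reservation. This is not the paper's 3DBackfilling (there is no multiprogramming level greater than one and no per-node local workload model); the job fields and the scheduling entry point are illustrative.

```python
# Minimal sketch of classic (EASY-style) backfilling; NOT the paper's 3DBackfilling.

from dataclasses import dataclass

@dataclass
class Job:
    name: str
    nodes: int
    runtime: float      # user-provided runtime estimate

def backfill_schedule(queue, free_nodes, running):
    """Decide which queued jobs to start now.

    queue      : FCFS-ordered list of Job
    free_nodes : number of currently idle nodes
    running    : list of (time_until_finish, nodes) for jobs already executing
    """
    started, queue, running = [], list(queue), list(running)
    # 1. plain FCFS: start head jobs while they fit
    while queue and queue[0].nodes <= free_nodes:
        job = queue.pop(0)
        started.append(job)
        free_nodes -= job.nodes
        running.append((job.runtime, job.nodes))
    if not queue:
        return started
    # 2. reservation for the blocked head job: when will enough nodes be free?
    head = queue[0]
    available, shadow_time = free_nodes, None
    for end, nodes in sorted(running):
        available += nodes
        if available >= head.nodes:
            shadow_time = end
            break
    # 3. backfill: later jobs may start if they fit now and end before the reservation
    for job in queue[1:]:
        if job.nodes <= free_nodes and shadow_time is not None and job.runtime <= shadow_time:
            started.append(job)
            free_nodes -= job.nodes
    return started

if __name__ == "__main__":
    queue = [Job("A", nodes=8, runtime=10.0),   # blocked: needs more nodes than are free
             Job("B", nodes=2, runtime=3.0),    # short and small: can be backfilled
             Job("C", nodes=2, runtime=20.0)]   # too long: would delay A's reservation
    print([j.name for j in backfill_schedule(queue, free_nodes=4,
                                             running=[(5.0, 6)])])   # -> ['B']
```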
Abstract:
Sexual reproduction is nearly universal in eukaryotes, and genetic determination of sex prevails among animals. The astonishing diversity of sex-determining systems and sex chromosomes is nevertheless bewildering. Some taxonomic groups possess conserved and dimorphic sex chromosomes, involving a functional copy (e.g., mammals' X, birds' Z) and a degenerated copy (mammals' Y, birds' W), implying that sex chromosomes are expected to decay. In contrast, others, like amphibians, reptiles, and fishes, have maintained undifferentiated sex chromosomes. Why such different evolutionary trajectories? In this thesis, we empirically test and characterize the main hypotheses proposed to prevent the genetic decay of sex chromosomes, namely occasional X-Y recombination and frequent sex-chromosome transitions, using the Palearctic radiation of Hyla tree frogs as a model system. We take a phylogeographic and phylogenetic approach to relate sex-chromosome recombination, differentiation, and transitions in a spatial and temporal framework. By reconstructing the recent evolutionary history of the widespread European tree frog H. arborea, we showed that sex chromosomes can recombine in males, preventing their differentiation, a situation that can potentially evolve rapidly. At the scale of the entire radiation, X-Y recombination combines with frequent transitions to prevent sex-chromosome degeneration in Hyla: we traced several turnovers of the sex-determining system within the last 10 million years. These rapid changes seem less random than usually assumed: we gathered evidence that one chromosome pair is a "sex expert", carrying genes with key roles in animal sex determination, which probably specialized through frequent reuse as a sex chromosome in Hyla and other amphibians. Finally, we took advantage of secondary contact zones between closely related Hyla lineages to evaluate the consequences of sex-chromosome homomorphy for the genetics of speciation. In comparison with other systems, the evolution of sex chromosomes in Hyla highlights consistent evolutionary patterns within the seemingly chaotic diversity of cold-blooded vertebrates' sex-determining systems, and provides insights into the evolution of recombination. Beyond sex-chromosome evolution, this work also contributes significantly to research on speciation, phylogeography, and applied conservation.
Abstract:
BACKGROUND: Surgical site infection (SSI) is a common cause of major morbidity after liver resection. This study aimed to identify the risk factors for incisional and organ/space SSIs after liver resection. METHODS: Our liver surgery database was retrospectively analyzed for patients treated between January 2009 and November 2012 in a tertiary care Swiss hospital. Univariate and multivariate analyses were conducted on preoperative, intraoperative, and postoperative variables to identify risk factors for incisional and organ/space SSIs. RESULTS: In a total of 226 patients, SSI incidences were 12.8 % (incisional), 4.0 % (organ/space), and 1.8 % (both). Univariate analysis showed that incisional SSIs were associated with high American Society of Anesthesiologists (ASA) scores, preoperative anemia, hypoalbuminemia, low prothrombin time, viral or alcoholic chronic hepatitis, liver cirrhosis, and prolonged operation times. Organ/space SSIs were associated with high rates of red blood cell transfusions, concomitant bowel surgery, and prolonged operation times. Multivariate analysis revealed that risk factors for incisional SSIs were anemia [odds ratio (OR) 2.82], high ASA scores (OR 2.88), presence of hepatitis or cirrhosis (OR 5.07), and prolonged operation times (OR 9.61). The only risk factor for organ/space SSIs was concomitant bowel surgery (OR 5.53). Hospital stays were similar in organ/space and incisional SSI groups, but significantly longer for those with both organ/space and incisional SSIs. CONCLUSIONS: High ASA scores, anemia, chronic hepatitis or liver cirrhosis, and prolonged operations increased the risk of incisional SSIs; concomitant bowel surgery increased the risk of organ/space SSI. Specific precautions to prevent organ/space and incisional SSIs may shorten hospital stays.
Abstract:
This work proposes the detection of red peaches in orchard images based on the definition of different linear color models in the RGB vector color space. The classification and segmentation of the image pixels are then performed by comparing the color distance from each pixel to the previously defined linear color models. The proposed methodology has been tested with images obtained in a real orchard under natural light. The peach variety in the orchard was the paraguayo (Prunus persica var. platycarpa) peach with red skin. The segmentation results showed that the area of the red peaches in the images was detected with an average error of 11.6%: 19.7% in the case of bright illumination; 8.2% in the case of low illumination; 8.6% for occlusion up to 33%; 12.2% in the case of occlusion between 34 and 66%; and 23% for occlusion above 66%. Finally, a methodology was proposed to estimate the diameter of the fruits based on an ellipsoidal fitting. A first diameter was obtained by using all the contour pixels and a second diameter was obtained by rejecting some pixels of the contour. Comparing the two diameter estimates enables a rough estimate of the fruit occlusion percentage range.
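The sketch below illustrates the core classification idea in a hedged way: a "linear color model" is taken here as a line through RGB space defined by two reference colors, and each pixel is assigned to the model whose line lies closest, or to the background if no line is close enough. The reference colors, the distance threshold, and the helper names are illustrative assumptions, not the paper's calibrated models.

```python
# Illustrative pixel classification by distance to linear color models in RGB space.
# Reference colors and threshold are hypothetical, not the paper's calibration.

import numpy as np

def line_distance(pixels, p0, p1):
    """Euclidean distance from each RGB pixel to the line through p0 and p1."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    d /= np.linalg.norm(d)
    rel = pixels.astype(float) - p0            # vectors from p0 to each pixel
    proj = rel @ d                             # scalar projection onto the line
    closest = p0 + proj[:, None] * d           # nearest point on the line
    return np.linalg.norm(pixels - closest, axis=1)

def classify(pixels, models, max_dist=40.0):
    """Label each pixel with the index of the nearest color line, or -1 if no
    line is within `max_dist` (treated as background)."""
    dists = np.stack([line_distance(pixels, p0, p1) for p0, p1 in models], axis=1)
    labels = dists.argmin(axis=1)
    labels[dists.min(axis=1) > max_dist] = -1
    return labels

if __name__ == "__main__":
    # hypothetical models: a "red peach" line and a "green leaf" line
    models = [((120, 20, 25), (220, 60, 70)),   # dark red -> bright red
              ((30, 80, 30), (90, 180, 90))]    # dark green -> light green
    pixels = np.array([[200, 50, 60], [60, 140, 60], [250, 250, 250]])
    print(classify(pixels, models))             # expected: [ 0  1 -1]
```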
Abstract:
Lleida, a mid-sized nineteenth-century Spanish city whose particularity is being a provincial capital with a traditional economy halfway between agricultural activities and incipient urban ones, is our setting for analyzing how the social groups of its community responded to the political changes carried out in Spain during the nineteenth century to consolidate liberalism as a system of government. The sources consulted were the local archives: population censuses, income records, and the lists of politicians and militiamen involved in the political process were cross-checked to establish patterns and analogies among them according to their political affiliation. These patterns bring us closer to the sociology of the city's population and its concrete political expressions during this period. In this way, we uncover the mechanisms of resistance used by the dominant group of the old regime to retain power, and the tactics used by the liberals to displace them and gain greater political representation in the municipality. The conclusion of this study is that, by accommodating themselves to moderate liberalism, the privileged groups of the old regime secured their political space in the community and retained their share of power most of the time, while the liberals, identified with progressivism, had enormous difficulty gaining control and putting their political project into practice.