62 results for Compositional Rule of Inference


Relevance: 100.00%

Abstract:

The Jurassic (approximately 145 Ma) Nambija oxidized gold skarns are hosted by the Triassic volcanosedimentary Piuntza unit in the sub-Andean zone of southeastern Ecuador. The skarns consist dominantly of granditic garnet (Ad20-98) with subordinate pyroxene (Di46-92Hd17-42Jo0-19) and epidote, and are spatially associated with porphyritic quartz-diorite to granodiorite intrusions. Endoskarn is developed at the intrusion margins and grades inwards into a potassic alteration zone. Exoskarn has an outer K- and Na-enriched zone in the volcanosedimentary unit. Gold mineralization is associated with the weakly developed retrograde alteration of the exoskarn and occurs mainly in sulfide-poor vugs and in milky quartz veins and veinlets in association with hematite. Fluid inclusion data for the main part of the prograde stage indicate the coexistence of high-temperature (500 °C to >600 °C), high-salinity (up to 65 wt.% eq. NaCl) and moderate- to low-salinity aqueous-carbonic fluids, interpreted to have been trapped at pressures around 100-120 MPa, corresponding to about 4 km depth. Lower-temperature (510-300 °C), moderate- to low-salinity (23-2 wt.% eq. NaCl) aqueous fluids are recorded in garnet and epidote of the end of the prograde stage. The microthermometric data (Th from 513 °C to 318 °C and salinity from 1.0 to 23 wt.% eq. NaCl) and δ18O values between 6.2‰ and 11.5‰ for gold-bearing milky quartz from the retrograde stage suggest that the ore-forming fluid was dominantly magmatic. Pressures during the early retrograde stage were in the range of 50-100 MPa, in line with the evidence for CO2 effervescence and probable local boiling. The dominance of magmatic low- to moderate-salinity oxidizing fluids during the retrograde stage is consistent with the depth of the skarn system, which could have delayed the ingression of external fluids until relatively low temperatures were reached.
The resulting low water-to-rock ratios explain the weak retrograde alteration and the compositional variability of chlorite, essentially controlled by host-rock compositions. Gold was precipitated at this stage as a result of cooling and of the pH increase related to CO2 effervescence, both of which destabilize gold-bearing chloride complexes. Significant ingression of external fluids took place only after gold deposition, as recorded by δ18O values of 0.4‰ to 6.2‰ for fluids depositing quartz (below 350 °C) in sulfide-rich barren veins. Low-temperature (<300 °C) meteoric fluids (δ18Owater between -10.0‰ and -2.0‰) are responsible for the precipitation of late comb quartz and calcite in cavities and veins, and indicate mixing with cooler fluids of higher salinities (about 100 °C and 25 wt.% eq. NaCl). The latter are similar to low-temperature fluids (202-74.5 °C) with δ18O values of -0.5‰ to 3.1‰ and salinities in the range of 21.1 to 17.3 wt.% eq. CaCl2, trapped in calcite of late veins and interpreted as basinal brines. Nambija represents a deep equivalent of the oxidized gold skarn class, the presence of CO2 in the fluids being partly a consequence of the relatively deep setting at about 4 km depth. As in other Au-bearing skarn deposits, not only the prograde stage but also the gold-precipitating retrograde stage is dominated by fluids of magmatic origin.

Relevance: 100.00%

Abstract:

A criminal investigation requires searching for and interpreting the vestiges of a criminal act that happened in the past. The forensic investigator arises in this context as a critical reader of the investigation scene, in search of physical traces that should enable her to tell a story of the offence/crime which allegedly occurred. The challenge of any investigator is to detect and recognise relevant physical traces in order to provide forensic clues for investigation and intelligence purposes. Inspired by this observation, the current research focuses on the following questions: What is a relevant physical trace? And how does the forensic investigator know she is facing one? The interest of such questions lies in providing a definition of a dimension often used in forensic science but never studied in its implications and operations. This doctoral research investigates scientific paths seldom explored in forensic science, using semiotic and sociological tools combined with statistical data analysis. The results follow a semiotic track, strongly influenced by Peirce's studies, and a second, empirical track, in which investigation data were analysed and forensic investigators were interviewed about their work practices in the field. The semiotic track gives a macroscopic view of a signification process running from the physical trace discovered at the scene to what the investigator evaluates as relevant. The physical trace is perceived in the form of several signs, whose meaning is culturally codified. The reasoning consists of three main steps: 1. What kind of source does the discovered physical trace refer to? 2. What cause/activity is at the origin of this source in the specific context of the case? 3. What story can be told from these observations?
Step 3 requires reasoning by creating hypotheses that explain the presence of the discovered trace as the product of an activity, specifically an activity related to the investigated case. Validating these hypotheses depends on their ability to satisfy a rule of relevancy. The last step is the symbolisation of relevancy. The rule consists of two points: the recognition of factual/circumstantial relevancy (is the link between the trace and the case recognised within the formulated hypothesis?) and appropriate relevancy (what investment is required to collect and analyse the discovered trace, considering the expected outcome at the investigation/intelligence level?). This process of meaning is based on observations and on conjectural reasoning subject to many influences. In this study, relevancy in forensic science is presented as a conventional dimension that is symbolised and conditioned by the context, the forensic investigator's practice and her workplace environment (the culture of the place). In short, the current research states that relevancy results from interactions between parameters of situational, structural (or organisational) and individual orders. The detection, collection and analysis of relevant physical traces at scenes depend on the knowledge and culture mastered by the forensic investigator. In studying the relation between the relevant trace and the forensic investigator, this research introduces the KEE model as a conceptual map illustrating three major areas of forensic knowledge and culture acquisition involved in the search for and evaluation of the relevant physical trace. Through the analysis of the investigation data and interviews, the relationship between these three parameters and relevancy was highlighted. K, for knowing, embodies a relationship to immediate knowledge, giving an overview of reality at a specific moment; an important point, since relevancy is signified in a context.
E, for education, is considered through its relationship with relevancy via a culture that tends to become institutionalised; it represents theoretical knowledge. The second E, for experience, exists in its relation to relevancy through the adjustment of intervention strategies (i.e., practical knowledge) by each practitioner, who has modulated her work in the light of successes and setbacks, case after case. The two E parameters constitute the library resources for the semiotic recognition process, and the K parameter ensures the contextualisation required to set up the reasoning and to formulate explanatory hypotheses for the discovered physical traces, questioned in their relevancy. This research demonstrates that relevancy is not absolute. It is temporal and contextual; it is a conventional and relative dimension that must be discussed. This is where the whole issue of the meaning of what is relevant to each stakeholder of the investigation process rests. By proposing a step-by-step approach to the meaning process from the physical trace to the forensic clue, this study aims to provide a more advanced understanding of the reasoning and its operation, in order to strengthen forensic investigators' training. This doctoral research presents a set of tools critical to both pedagogical and practical aspects of crime scene management, while identifying key influences with individual, structural and situational dimensions.

Relevance: 100.00%

Abstract:

Métropolisation, morphologie urbaine et développement durable. Transformations urbaines et régulation de l'étalement : le cas de l'agglomération lausannoise. (Metropolisation, urban morphology and sustainable development. Urban transformations and the regulation of sprawl: the case of the Lausanne agglomeration.)
Abstract: This dissertation takes the perspective of a strategic analysis aiming at specifying the links between knowledge, expertise and political decision. The fundamental hypothesis directing this study assumes that the urban dynamics that have characterized the past thirty years signify a transformation of the morphogenetic principle of agglomerations' spatial development, resulting in a worsening of their ecological balance and of city dwellers' quality of life. The environmental implications linked to urban changes, and particularly to changes in urban form, constitute an ever greater share of research into sustainable urban planning solutions. In this context, urban planning becomes a mode of action and an essential component of public policies aiming at local and global sustainable development. These patterns of spatial development indisputably emerge at the heart of environmental issues. If the concept of sustainable development provides us with new understanding of territories and their transformations, by arguing in favor of densification, its concretization remains at issue, especially in terms of urban planning and of urban development strategies allowing the appropriate implementation of the solutions offered. Thus, this study tries to answer a certain number of questions: what validity should be granted to the model of the dense city? Is densification an adequate answer? If so, under what terms? What are the sustainable alternatives to urban sprawl in terms of planning strategies? Should densification really be pursued, or should we simply try to master urban sprawl? Our main objective is ultimately to determine the directions and urban content of public policies aiming at regulating urban sprawl, to validate the feasibility of these principles and to define the conditions of their implementation in the case of one agglomeration.
Once the Lausanne agglomeration had been chosen as the experimental field, three complementary approaches proved essential to this study: 1. a theoretical approach aiming at defining an interdisciplinary conceptual framework for the urban phenomenon in its relation to sustainable development, linking urban dynamics, urban form and sustainable development; 2. a methodological approach proposing simple and effective tools for analyzing and describing new urban morphologies for better management of the urban environment and of urban planning practices; 3. a pragmatic approach aiming at deepening reflection on urban sprawl by switching from a descriptive approach to the consequences of the new urban dynamics to an operational approach aiming at identifying possible avenues of action respecting the principles of sustainable development. This analysis provided us with three major results, allowing us to define a strategy to curtail urban sprawl. First, if densification is accepted as a strategic objective of urban planning, the model of the dense city cannot be applied without taking other urban planning objectives into consideration. Densification does not suffice to reduce the ecological impact of the city and improve the quality of life of its dwellers. The search for a more sustainable urban form depends on a multitude of factors and synergy effects. Reducing the negative effects of urban sprawl requires the implementation of integrated and concerted urban policies, such as encouraging qualified densification as the result of a finalized process, integrating collective forms of transportation (and even more so the pedestrian metric) with urban planning, and integrating diversity on a systematic basis through the physical and social dimensions of the territory. Second, the future of such sprawling territories is not fixed.
Our research on the ground revealed an evolution in modes of habitat related to ways of life, work organization and mobility that suggests the possibility of the return of part of the population to city centers (the end of the rule of the individual-home model). Thus, the diagnosis and the search for effective and sustainable solutions cannot be conceived independently of the needs of the inhabitants and of the behavior of the actors behind the production of the built territory. In this perspective, any urban program must necessarily be based on knowledge of the population's wishes. Third, the successful implementation of a global policy to control the negative effects of urban sprawl is highly influenced by the adaptation of the property supply to the demand for new habitat models satisfying both the necessity of controlling urbanization costs (economic, social, environmental) and people's emerging aspirations. These results allowed us to define a strategy to curtail urban sprawl; its feasibility and the conditions of its implementation were tested on the territory of the Lausanne agglomeration.

Relevance: 100.00%

Abstract:

GENDER EMPOWERMENT: EFFECTS OF GODS, GEOGRAPHY, AND GDP (Fenley, M., & Antonakis, J.)

We examined the determinants of women's empowerment in the economy and in political leadership in 178 countries. Given the androcentric nature of most religions, we hypothesized that high degrees of country-level theistic belief create social conditions that impede the progression of women to power. The dependent variable was the Gender Empowerment index of the United Nations Development Program, which captures the participation of women in political leadership and management, and their share of national income. Controlling for GDP per capita as well as the fixed effects of the dominant type of religion and legal origin, and instrumenting all endogenous variables with geographic or historical variables, our results show that atheism has a significant positive effect on gender empowerment. These results are driven by the rule of law, which, in addition to being a catalyst for economic development, appears to crowd out the informal regulation of behavior due to religious norms.

DEVELOPING WOMEN LEADERS: COMPARING A TRANSFORMATIONAL AND A CHARISMATIC LEADERSHIP INTERVENTION (Fenley, M., Jacquart, P., & Antonakis, J.)

Along with a gender imbalance in leadership role occupancy, most leadership interventions have been conducted with samples of men. We conducted an experiment in which we assigned female participants (n = 38, mean age = 35 years) to one of two conditions: transformational (i.e., "standard") leadership training or charismatic leadership training. The two interventions were essentially equivalent, except that we also focused on developing the "charismatic leadership tactics" (e.g., rhetorical skills) of participants in the charismatic condition. After the interventions, we randomly assigned participants to problem-solving teams that required extensive interaction. Each team had an equal number of participants who had received transformational training or charismatic training. At the end of the team exercises, participants rated each of their team members on a leadership prototypicality measure. Results indicated that those who received charismatic training scored higher (a) on prototypicality (standardized β = .42) and (b) on a test of declarative knowledge of charismatic rhetorical strategies (i.e., a manipulation check, standardized β = .76). Furthermore, the score on the test fully mediated the effect of the treatment on prototypicality (standardized indirect effect β = .32). We discuss the importance and practical implications of these results.

CHANGING ATTITUDES TOWARDS WOMEN IN A MALE SEX-TYPE WORK ENVIRONMENT: EVIDENCE FROM A FIELD EXPERIMENT IN EUROPEAN ATHLETICS (Fenley, M.)

Most sports organizations have a gender gap in leadership similar to that of the majority of non-sport organizations. Women's careers sputter somewhere at coaching-level positions, and few women obtain top leadership positions. Greater awareness of gender inequalities in general, and in leadership in particular, could decrease gender discrimination and increase women's presence at upper levels. The goal of this study was to evaluate the impact of an intervention using an online gender awareness exercise. Participants (n = 1,001, from 32 countries) were randomly assigned to one of eight conditions in a 2 (discriminating vs. non-discriminating perspective-taking story) x 2 (gender quiz or no gender quiz) x 2 (diversity quiz or no diversity quiz) factorial design. The results show that the online perspective-taking exercise changed initial sexist attitudes. Participants who had taken a diversity quiz had less sexist attitudes (as measured by the Modern and Old-Fashioned Sexism scales) than participants who did not take it (irrespective of perspective-taking story). The combination of a diversity quiz with a gender quiz had the biggest impact on attitudes for the non-discriminating story.
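The fully mediated effect in the leadership-training experiment can be checked with simple product-of-coefficients arithmetic: the abstract reports a treatment-to-mediator path of .76 and an indirect effect of .32, which is consistent with a mediator-to-outcome path of about .42. A minimal sketch (only .76 and .32 are reported; the .42 mediator-to-outcome path is a back-calculated assumption of this illustration):

```python
# Product-of-coefficients check for a simple mediation chain:
#   training -> declarative-knowledge test score (path a) -> prototypicality (path b)
a = 0.76  # reported standardized path: treatment -> test score
b = 0.42  # assumed mediator -> outcome path (back-calculated, not reported)

indirect = a * b  # standardized indirect effect under full mediation
print(round(indirect, 2))  # consistent with the reported .32
```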

Relevance: 100.00%

Abstract:

Aims: The HR-NBL1 study of the European SIOP Neuroblastoma Group (SIOPEN) randomised two high-dose regimens to assess their potential superiority and toxicity profiles.
Patients and Methods: At interim analysis, 1,483 high-risk neuroblastoma patients (893 males) had been included since 2002, with either INSS stage 4 disease above 1 year of age (1,383 pts), or as infants (59 pts) and stage 2 and 3 of any age (145 pts) with MYCN amplification. The median age at diagnosis was 2.9 years (1 month to 19.9 years), with a median follow-up of 3 years. Response eligibility criteria prior to randomisation after Rapid COJEC induction (J Clin Oncol, 2010) ± 2 courses of TVD (Cancer, 2003) included complete bone marrow remission, at least partial response at skeletal sites with no more than 3, but improved, mIBG-positive spots, and a PBSC harvest of at least 3x10^6 CD34+ cells/kg body weight. The randomised regimens were BuMel (busulfan, oral until 2006, 4x150 mg/m² in 4 single doses, or intravenous dosing according to body weight as licensed thereafter; melphalan 140 mg/m²/day) and CEM (carboplatin continuous infusion, 4x AUC 4.1 mg/ml·min/day; etoposide continuous infusion, 4x338 mg/m²/day or 4x200 mg/m²/day*; melphalan 3x70 mg/m²/day or 3x60 mg/m²/day*; *reduced dose if GFR < 100 ml/min/1.73 m²). Supportive care followed institutional guidelines. VOD prophylaxis included ursodiol, but randomised patients were not eligible for the prophylactic defibrotide trial. Local control included surgery and radiotherapy of 21 Gy.
Results: Of the 1,483 patients, 584 had been randomised for the high-dose question at data lock. A significant difference in event-free survival (3-year EFS 49% vs. 33%, p < 0.001) and overall survival (3-year OS 61% vs. 48%, p = 0.003) favouring the BuMel regimen over the CEM regimen was demonstrated. The relapse/progression rate was significantly higher after CEM (0.60 ± 0.03) than after BuMel (0.48 ± 0.03) (p < 0.001). Toxicity data had reached 80% completeness at the last analysis. The severe toxicity rate up to day 100 (ICU admissions and toxic deaths) was below 10%, but was significantly higher for CEM (p = 0.014). The acute toxic death rate was 3% for BuMel and 5% for CEM (NS). The acute high-dose-therapy toxicity profile favours the BuMel regimen despite a total VOD incidence of 18% (grade 3: 5%).
Conclusions: The Peto rule of p < 0.001 at interim analysis on the primary endpoint, EFS, was met. Randomisation was therefore stopped, with BuMel recommended as standard treatment in the HR-NBL1/SIOPEN trial, which is still accruing for the randomised immunotherapy question.
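The interim stopping decision described above follows the Peto rule: declare the comparison conclusive at an interim look only if the p-value crosses a very stringent threshold (p < 0.001), which guards the overall type-I error across repeated analyses. A minimal sketch (the threshold and the example p-values come from the abstract; the function name is ours):

```python
PETO_THRESHOLD = 0.001  # stringent interim significance level (Peto rule)

def stop_randomisation(interim_p: float, threshold: float = PETO_THRESHOLD) -> bool:
    """Return True if the interim result meets the Peto rule for early stopping."""
    return interim_p < threshold

# The trial's EFS comparison reached p < 0.001, so randomisation stopped;
# a p-value like the OS result (0.003) would not by itself trigger stopping.
print(stop_randomisation(0.0009))  # True
print(stop_randomisation(0.003))   # False
```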

Relevance: 100.00%

Abstract:

The objective of this study was to assess the results of labyrinthine fenestration for a fixed stapes in chronic ear disease. Using a prospective database, pre- and postoperative audiometric data were evaluated for patients who underwent labyrinthine fenestration for stapes fixation in chronic ear disease other than otosclerosis between 2002 and 2012. Twenty-three labyrinthine fenestrations in chronic ear disease were performed (17 malleo-stapedotomies, 4 incus-stapedotomies, 1 neo-malleus stapedotomy, 1 TORP stapedotomy). Overall, the mean short-term (2 months) and long-term (42 months) postoperative air-bone gaps (0.5-3 kHz) were 17.5 and 16.5 dB, respectively; a long-term air-bone gap of <20 dB was obtained in 73% of patients. There was no significant difference in air-bone gap closure between tympanosclerotic and post-inflammatory osteogenic fixation of the stapes (p = 0.267). Hearing benefit success according to the 'Belfast rule of thumb' was achieved in 48%. Normal bilateral hearing was achieved in 17% and bilateral symmetric hearing impairment in 26%. Bone conduction worsened by more than 5 dB in only 4%. Labyrinthine fenestration is an option in selected cases of stapes fixation in chronic ear disease and provides hearing gain without significant risk of sensorineural hearing loss. Even in these selected cases, hearing benefit success by the 'Belfast rule of thumb' is achieved in only half of the cases; this, and the possible alternatives, should therefore be discussed preoperatively.
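The 'Belfast rule of thumb' used as the benefit criterion above is commonly stated as: surgery benefits the patient if the operated ear reaches an air-conduction threshold of 30 dB HL or better, or ends up within 15 dB of the contralateral ear. A minimal sketch under that reading (the 30 dB and 15 dB cut-offs reflect the rule's usual formulation and are an assumption here, not taken from the abstract):

```python
def belfast_benefit(post_op_ac_db: float, contralateral_ac_db: float,
                    ac_cutoff: float = 30.0, interaural_gap: float = 15.0) -> bool:
    """Belfast rule of thumb (common formulation, assumed here): benefit if
    the operated ear's air conduction is <= 30 dB HL, or within 15 dB of
    the opposite ear. Thresholds in dB HL."""
    return (post_op_ac_db <= ac_cutoff
            or abs(post_op_ac_db - contralateral_ac_db) <= interaural_gap)

print(belfast_benefit(28.0, 50.0))  # True: operated ear better than 30 dB HL
print(belfast_benefit(45.0, 70.0))  # False: >30 dB HL and 25 dB from other ear
```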

Relevância:

50.00%

Publicador:

Resumo:

In the forensic examination of DNA mixtures, the question of how to set the total number of contributors (N) is a topic of ongoing interest. Part of the discussion gravitates around issues of bias, in particular when assessments of the number of contributors are not made prior to considering the genotypic configuration of potential donors. A further complication stems from the observation that, in some cases, certain numbers of contributors may be incompatible with the set of alleles seen in the profile of a mixed crime stain, given the genotype of a potential contributor. In such situations, procedures that yield a single, fixed number of contributors can lead to inferential impasses. Assessing the number of contributors within a probabilistic framework can help avoid such complications. Using elements of decision theory, this paper analyses two strategies for inference on the number of contributors. One procedure is deterministic and focuses on the minimum number of contributors required to 'explain' an observed set of alleles. The other is probabilistic, using Bayes' theorem to provide a probability distribution over a set of numbers of contributors, based on the set of observed alleles and their respective rates of occurrence. The discussion concentrates on mixed stains of varying quality (i.e., different numbers of loci for which genotyping information is available). A so-called qualitative interpretation is pursued, since quantitative information such as peak area and height data is not taken into account. The competing procedures are compared using a standard scoring rule that penalizes the divergence between an agreed value for N (the number of contributors) and the value actually taken by N.
Using only modest assumptions and a discussion with reference to a casework example, this paper reports on analyses using simulation techniques and graphical models (i.e., Bayesian networks) to show that setting the number of contributors to a mixed crime stain in probabilistic terms is, under the conditions assumed in this study, preferable to a decision policy based on categorical assumptions about N.
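The probabilistic strategy can be caricatured in a few lines. The sketch below is a toy version, not the paper's model: with hypothetical allele frequencies at a single locus and a flat prior over N, each candidate N is scored by the Monte Carlo probability that 2N drawn alleles show exactly the observed number of distinct types, then normalized into a posterior.

```python
# Toy Bayesian assessment of the number of contributors N (not the paper's
# model): score each candidate N by the Monte Carlo probability that 2N
# alleles drawn from hypothetical population frequencies show exactly the
# observed number of distinct alleles, then normalize under a flat prior.
import random

random.seed(0)
freqs = {"A": 0.4, "B": 0.3, "C": 0.2, "D": 0.1}  # hypothetical frequencies
alleles, weights = zip(*freqs.items())

def likelihood(n_contributors: int, k_observed: int, trials: int = 20000) -> float:
    """Monte Carlo estimate of P(k distinct alleles | N contributors)."""
    hits = 0
    for _ in range(trials):
        draw = random.choices(alleles, weights=weights, k=2 * n_contributors)
        if len(set(draw)) == k_observed:
            hits += 1
    return hits / trials

candidates = [1, 2, 3, 4]                       # flat prior over N
post = {n: likelihood(n, k_observed=3) for n in candidates}
total = sum(post.values())
post = {n: p / total for n, p in post.items()}
print(post)  # N = 1 gets probability 0: two alleles cannot show three types
```

This also illustrates the impasse the paper describes for fixed-N procedures: a deterministic rule returning N = 1 would be flatly incompatible with the observed alleles, whereas the posterior simply assigns that value zero probability.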

Relevância:

40.00%

Publicador:

Resumo:

Empirical modeling of exposure levels has been popular for identifying exposure determinants in occupational hygiene. Traditional data-driven methods used to choose a model on which to base inferences have typically not accounted for the uncertainty linked to the process of selecting the final model. Several new approaches propose making statistical inferences from a set of plausible models rather than from a single model regarded as 'best'. This paper introduces the multimodel averaging approach described in the monograph by Burnham and Anderson. In their approach, a set of plausible models is defined a priori by taking into account the sample size and previous knowledge of variables that influence exposure levels. The Akaike information criterion is then calculated to evaluate the relative support of the data for each model, expressed as an Akaike weight, interpreted as the probability that the model is the best approximating model given the model set. The model weights can then be used to rank models, quantify the evidence favoring one over another, perform multimodel prediction, estimate the relative influence of the potential predictors and estimate multimodel-averaged effects of determinants. The whole approach is illustrated with the analysis of a dataset of 1500 volatile organic compound exposure levels collected by the Institute for Work and Health (Lausanne, Switzerland) over 20 years, each concentration having been divided by the relevant Swiss occupational exposure limit and log-transformed before analysis. Multimodel inference represents a promising procedure for modeling exposure levels: it incorporates the notion that several models can be supported by the data, and it makes it possible to evaluate, to a certain extent, the model selection uncertainty that is seldom acknowledged in current practice.
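The Akaike-weight computation at the core of this approach is compact enough to sketch. The model names, log-likelihoods, and parameter counts below are hypothetical; the formulas follow Burnham and Anderson: AIC = -2 log L + 2k, differences Δi to the best model, and weights wi = exp(-Δi/2) / Σj exp(-Δj/2).

```python
# Akaike weights for a small, hypothetical candidate model set.
import math

models = {                      # name: (maximized log-likelihood, n parameters)
    "task + agent": (-812.4, 5),
    "task only":    (-816.9, 3),
    "full model":   (-811.8, 9),
}

aic = {m: -2 * ll + 2 * k for m, (ll, k) in models.items()}  # AIC per model
best = min(aic.values())
delta = {m: a - best for m, a in aic.items()}                # Delta_i
raw = {m: math.exp(-d / 2) for m, d in delta.items()}
total = sum(raw.values())
weights = {m: r / total for m, r in raw.items()}             # Akaike weights

for m in sorted(weights, key=weights.get, reverse=True):
    print(f"{m}: AIC={aic[m]:.1f}, weight={weights[m]:.3f}")
```

Note how the "full model" has the highest log-likelihood but not the highest weight: the 2k penalty does the work of guarding against overfitting, and the weights sum to one so they can be read directly as model probabilities within the set.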

Relevância:

40.00%

Publicador:

Resumo:

The capacity to learn to associate sensory perceptions with appropriate motor actions underlies the success of many animal species, from insects to humans. The evolutionary significance of learning has long been a subject of interest for evolutionary biologists, who emphasize the benefit yielded by learning under changing environmental conditions, where it is required to switch flexibly from one behavior to another. However, two unsolved questions are particularly important for improving our knowledge of the evolutionary advantages provided by learning, and both are addressed in the present work. First, because it is possible to learn the wrong behavior when a task is too complex, the learning rules and their underlying psychological characteristics that generate truly adaptive behavior must be identified with greater precision, and must be linked to the specific ecological problems faced by each species. A framework for predicting behavior from the definition of a learning rule is developed here. Learning rules capture cognitive features such as the tendency to explore, or the ability to infer rewards associated with unchosen actions. It is shown that these features interact in a non-intuitive way to generate adaptive behavior in social interactions where individuals affect each other's fitness. Such behavioral predictions are used in an evolutionary model to demonstrate that, surprisingly, simple trial-and-error learning is not always outcompeted by more computationally demanding inference-based learning when population members interact in pairwise social interactions. A second question in the evolution of learning is its link with, and relative advantage compared to, other simpler forms of phenotypic plasticity. After providing a conceptual clarification of the distinction between genetically determined and learned responses to environmental stimuli, a new factor in the evolution of learning is proposed: environmental complexity.
A simple mathematical model shows that a measure of environmental complexity, the number of possible stimuli in one's environment, is critical for the evolution of learning. In conclusion, this work opens avenues for modeling interactions between evolving species and their environment in order to predict how natural selection shapes animals' cognitive abilities.
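The two cognitive features named above, the tendency to explore and value updating from experienced rewards, can be illustrated with an assumed trial-and-error rule (not the thesis' exact formalism): action values are updated by a reward prediction error, and actions are chosen by a softmax whose temperature sets the tendency to explore. All payoffs and parameters are invented.

```python
# Assumed trial-and-error learning rule in a two-action task:
# prediction-error value updates plus softmax exploration.
import math
import random

random.seed(1)
reward_prob = {"left": 0.2, "right": 0.8}  # hypothetical reward probabilities
values = {"left": 0.0, "right": 0.0}       # learned action values
alpha, temperature = 0.1, 0.2              # learning rate, exploration level

def choose() -> str:
    """Softmax action selection: higher temperature -> more exploration."""
    expo = {a: math.exp(v / temperature) for a, v in values.items()}
    r = random.random() * sum(expo.values())
    for action, e in expo.items():
        r -= e
        if r <= 0:
            return action
    return action  # numerical fallback

for _ in range(2000):
    a = choose()
    payoff = 1.0 if random.random() < reward_prob[a] else 0.0
    values[a] += alpha * (payoff - values[a])  # reward prediction error update

print(values)  # the richer action ends up with the higher learned value
```

An inference-based learner would additionally update the value of the unchosen action from what the payoff implies about it; the thesis' point is that this extra machinery does not always pay off once opponents in a social interaction are learning too.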

Relevância:

40.00%

Publicador:

Resumo:

Errors in the inferred multiple sequence alignment may lead to false prediction of positive selection. Recently, methods for detecting unreliable alignment regions were developed and shown to accurately identify incorrectly aligned regions. While removing unreliable alignment regions is expected to increase the accuracy of positive selection inference, such filtering may also significantly decrease the power of the test: positively selected regions are fast evolving, and these are often the very regions that are difficult to align. Here, we used realistic simulations that mimic sequence evolution of HIV-1 genes to test the hypothesis that the performance of positive selection inference using codon models can be improved by removing unreliable alignment regions. Our study shows that the benefit of removing unreliable regions exceeds the loss of power due to the removal of some of the truly positively selected sites.
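The filtering step being tested reduces, in essence, to masking alignment columns whose per-column reliability falls below a cutoff before running the codon-model analysis. In the sketch below the toy codon columns, scores, and cutoff are all invented; in practice the scores would come from an alignment-reliability method (e.g., GUIDANCE-style confidence scores).

```python
# Sketch of reliability-based alignment filtering: drop codon columns whose
# reliability score is below a cutoff before positive-selection inference.
# Columns, scores, and cutoff are toy values for illustration.

def mask_unreliable(columns, scores, cutoff=0.9):
    """Keep only columns whose per-column reliability score >= cutoff."""
    return [col for col, s in zip(columns, scores) if s >= cutoff]

columns = ["ATG", "AC-", "GGT", "--T", "TTC"]   # toy codon columns
scores  = [0.99, 0.45, 0.95, 0.30, 0.92]        # toy reliability scores
print(mask_unreliable(columns, scores))  # ['ATG', 'GGT', 'TTC']
```

The trade-off studied in the paper is visible even here: the gappy, low-scoring columns most likely to be misaligned are dropped, but any truly positively selected sites inside them are lost to the test as well.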

Relevância:

40.00%

Publicador:

Resumo:

Natural selection is typically exerted at specific life stages. If natural selection takes place before a trait can be measured, using conventional models can lead to incorrect inferences about population parameters. When the missing-data process is related to the trait of interest, valid inference requires explicit modeling of the missingness process. We propose a joint modeling approach, a shared parameter model, to account for nonrandom missing data. It consists of an animal model for the phenotypic data and a logistic model for the missingness process, linked by the additive genetic effects. A Bayesian approach is taken and inference is made using integrated nested Laplace approximations. From a simulation study we find that wrongly assuming that missing data are missing at random can result in severely biased estimates of additive genetic variance. Using real data from a wild population of Swiss barn owls (Tyto alba), our model indicates that the missing individuals would display large black spots; we conclude that genes affecting this trait are already under selection before it is expressed. Our model is a tool to correctly estimate the magnitude of both natural selection and additive genetic variance.
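The bias from wrongly assuming data are missing at random can be demonstrated with a short simulation, independent of the animal-model machinery: when larger trait values are preferentially missing (a logistic missingness model, as in the shared parameter approach), the naive variance of the observed phenotypes underestimates the true phenotypic variance. All numbers below are illustrative.

```python
# Short simulation of variance bias under trait-dependent missingness.
import math
import random
import statistics

random.seed(2)
true_trait = [random.gauss(0.0, 1.0) for _ in range(20000)]  # true phenotypes

def is_observed(x: float) -> bool:
    """Logistic missingness: larger trait values are more often missing."""
    p_missing = 1.0 / (1.0 + math.exp(-2.0 * x))
    return random.random() > p_missing

seen = [x for x in true_trait if is_observed(x)]  # the naive 'observed' sample

print(round(statistics.variance(true_trait), 2))  # close to the true 1.0
print(round(statistics.variance(seen), 2))        # biased well below 1.0
```

A joint model recovers the lost variance by letting the missingness model inform the trait model through their shared (here, genetic) component, rather than analysing `seen` on its own.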

Relevância:

40.00%

Publicador:

Resumo:

Geochemical and petrographical studies of lavas and ignimbrites from the Quaternary Nisyros-Yali volcanic system in the easternmost part of the Hellenic arc (Greece) provide insight into magma-generating processes. A compositional gap between 61 and 68 wt.% SiO2 is recognized that coincides with the stratigraphic distinction between pre-caldera and post-caldera volcanic units. Trace element systematics support the subdivision of the Nisyros and Yali volcanic units into two distinct suites of rocks. The variation of present-day Nd and Hf isotope data, and the fact that they are distinct from the isotope compositions of MORB, rule out an origin by pure differentiation and require assimilation of a crustal component. Lead isotope ratios of the Nisyros and Yali volcanic rocks support mixing of mantle material with a lower-crust equivalent. However, Sr-87/Sr-86 ratios of 0.7036-0.7048 are incompatible with a simple binary mixing scenario and give low depleted-mantle extraction ages (< 0.1 Ga), in contrast with Pb model ages of 0.3 Ga and Hf and Nd model ages of ca. 0.8 Ga. The budget of the fluid-mobile elements Sr and Pb is likely dominated by abundant hydrous fluids characterised by mantle-like Sr isotope ratios. Late-stage fluids were probably enriched in CO2, which is needed to explain the high Th concentrations. The occurrence of hydrous minerals (e.g., amphibole) in the first post-caldera unit, which has the lowest Sr-87/Sr-86 ratio of 0.7036 +/- 2, can be interpreted as the result of increased water activity in the source. The presence of two different plagioclase phenocryst generations in the first lava erupted after the caldera-forming event is indicative of a longer storage time of this magma at a shallower level. A model capable of explaining these observations involves three evolutionary stages. In the first stage, a primitive magma of mantle origin assimilated lower crustal material (as modelled by the Nd-Hf isotope systematics).
This stage ended with an interruption in replenishment that led to increased crystallization and, hence, increased viscosity, suppressing eruption. During this time gap (the second stage), differentiation by fractional crystallization led to enrichment of incompatible species, especially aqueous fluids, to silica depolymerisation and to a decrease in viscosity, finally enabling eruption again in the third stage.
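The binary-mixing test applied to the Sr isotope ratios can be made concrete with the standard two-endmember mixing equation, in which the isotope ratio of a mixture is a concentration-weighted average of the endmember ratios. The endmember ratios and Sr concentrations below are hypothetical, chosen only to show the shape of such a curve; a measured suite falling off every plausible curve of this form is what rules out simple binary mixing.

```python
# Two-endmember isotope mixing: the ratio of a mix is a concentration-
# weighted average of the endmember ratios. All values are hypothetical.

def mix_ratio(f, ratio_a, conc_a, ratio_b, conc_b):
    """Isotope ratio of a mix with mass fraction f of endmember A."""
    num = f * conc_a * ratio_a + (1 - f) * conc_b * ratio_b
    den = f * conc_a + (1 - f) * conc_b
    return num / den

# Hypothetical mantle (A) and lower-crust (B) endmembers: 87Sr/86Sr ratio
# and Sr concentration (ppm)
curve = [round(mix_ratio(f, 0.7030, 20.0, 0.7100, 300.0), 4)
         for f in (1.0, 0.8, 0.5, 0.2, 0.0)]
print(curve)  # ratios sweep monotonically from 0.703 up to 0.71
```

Because the crustal endmember is far richer in Sr in this toy example, even small crustal fractions pull the mixture's ratio sharply toward the crustal value, which is why mantle-like Sr ratios in evolved rocks point instead to a fluid-dominated Sr budget.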

Relevância:

40.00%

Publicador:

Resumo:

Aim: Recently developed parametric methods in historical biogeography allow researchers to integrate temporal and palaeogeographical information into the reconstruction of biogeographical scenarios, thus overcoming a known bias of parsimony-based approaches. Here, we compare a parametric method, dispersal-extinction-cladogenesis (DEC), against a parsimony-based method, dispersal-vicariance analysis (DIVA), which does not incorporate branch lengths but accounts for phylogenetic uncertainty through a Bayesian empirical approach (Bayes-DIVA). We analyse the benefits and limitations of each method using the cosmopolitan plant family Sapindaceae as a case study. Location: World-wide. Methods: Phylogenetic relationships were estimated by Bayesian inference on a large dataset representing generic diversity within Sapindaceae. Lineage divergence times were estimated by penalized likelihood over a sample of trees from the posterior distribution of the phylogeny to account for dating uncertainty in biogeographical reconstructions. We compared biogeographical scenarios between Bayes-DIVA and two different DEC models: one with no geological constraints and another that employed a stratified palaeogeographical model in which dispersal rates were scaled according to area connectivity across four time slices, reflecting the changing continental configuration over the last 110 million years. Results: Despite differences in the underlying biogeographical model, Bayes-DIVA and DEC inferred similar biogeographical scenarios. The main differences were (1) the timing of dispersal events, which in Bayes-DIVA sometimes conflicts with palaeogeographical information, and (2) the lower frequency of terminal dispersal events inferred by DEC. Uncertainty in divergence time estimations influenced both the inference of ancestral ranges and the decisiveness with which an area can be assigned to a node. Main conclusions: By considering lineage divergence times, the DEC method gives more accurate reconstructions that are in agreement with palaeogeographical evidence. In contrast, Bayes-DIVA showed the highest decisiveness in unequivocally reconstructing ancestral ranges, probably reflecting its ability to integrate phylogenetic uncertainty. Care should be taken in defining the palaeogeographical model in DEC because of the possibility of overestimating the frequency of extinction events, or of inferring ancestral ranges that lie outside the extant species ranges, owing to dispersal constraints enforced by the model. The wide-spanning spatial and temporal model proposed here could prove useful for testing large-scale biogeographical patterns in plants.
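The stratified palaeogeographical model described in the methods can be sketched as a baseline dispersal rate multiplied by a time-slice connectivity scaler, in the spirit of DEC's scaled dispersal matrices. All rates, slice boundaries, and connectivity scalers below are invented for illustration; a real analysis would hold one scaler per ordered pair of areas per slice.

```python
# Toy version of a stratified dispersal model: a baseline rate between two
# areas scaled by per-time-slice connectivity. All numbers are invented.

time_slices = {          # (older bound, younger bound) in Ma -> connectivity
    (110, 80): 0.1,      # areas widely separated
    (80, 50): 0.5,
    (50, 20): 0.75,
    (20, 0): 1.0,        # fully connected
}
BASE_DISPERSAL = 0.02    # hypothetical baseline rate per lineage per Myr

def dispersal_rate(age_ma: float) -> float:
    """Dispersal rate at a given age, scaled by its slice's connectivity."""
    for (older, younger), scaler in time_slices.items():
        if younger <= age_ma <= older:
            return BASE_DISPERSAL * scaler
    raise ValueError(f"age {age_ma} Ma lies outside the modelled interval")

print(dispersal_rate(100))  # low rate in the oldest, least connected slice
print(dispersal_rate(10))   # full baseline rate in the youngest slice
```

This is where the two methods diverge in the abstract: because DIVA ignores branch lengths, it cannot consult such a time-dependent rate, which is why some Bayes-DIVA dispersal timings conflict with the palaeogeography.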