38 results for Simulation and Modeling
Abstract:
Mountains and mountain societies provide a wide range of goods and services to humanity, but they are particularly sensitive to the effects of global environmental change. Thus, the definition of appropriate management regimes that maintain the multiple functions of mountain regions in a time of greatly changing climatic, economic, and societal drivers constitutes a significant challenge. Management decisions must be based on a sound understanding of the future dynamics of these systems. The present article reviews the elements required for an integrated effort to project the impacts of global change on mountain regions, and recommends tools that can be used at 3 scientific levels (essential, improved, and optimum). The proposed strategy is evaluated with respect to UNESCO's network of Mountain Biosphere Reserves (MBRs), with the intention of implementing it in other mountain regions as well. First, methods for generating scenarios of key drivers of global change are reviewed, including land use/land cover and climate change. This is followed by a brief review of the models available for projecting the impacts of these scenarios on (1) cryospheric systems, (2) ecosystem structure and diversity, and (3) ecosystem functions such as carbon and water relations. Finally, the cross-cutting role of remote sensing techniques is evaluated with respect to both monitoring and modeling efforts. We conclude that a broad range of techniques is available for both scenario generation and impact assessments, many of which can be implemented without much capacity building across many or even most MBRs. However, to foster implementation of the proposed strategy, further efforts are required to establish partnerships between scientists and resource managers in mountain areas.
Abstract:
The n-octanol/water partition coefficient (log Po/w) is a key physicochemical parameter for drug discovery, design, and development. Here, we present a physics-based approach that shows a strong linear correlation between the computed solvation free energy in implicit solvents and the experimental log Po/w on a cleansed data set of more than 17,500 molecules. After internal validation by five-fold cross-validation and data randomization, the predictive power of the most interesting multiple linear model, based solely on two GB/SA parameters, was tested on two different external sets of molecules. On the Martel druglike test set, the predictive power of the best model (N = 706, r = 0.64, MAE = 1.18, and RMSE = 1.40) is similar to that of six well-established empirical methods. On the 17-drug test set, our model outperformed all compared empirical methodologies (N = 17, r = 0.94, MAE = 0.38, and RMSE = 0.52). The physical basis of our original GB/SA approach, together with its predictive capacity, computational efficiency (1 to 2 s per molecule), and three-dimensional molecular graphics capability, lays the foundations for a promising predictor, the implicit log P method (iLOGP), to complement the portfolio of drug design tools developed and provided by the SIB Swiss Institute of Bioinformatics.
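The core of the approach above is an ordinary multiple linear regression on two physics-based descriptors. A minimal sketch follows; the descriptor and log P values are invented placeholders standing in for the GB/SA solvation term and a surface-area term, not data from the study:

```python
import numpy as np

# Hypothetical descriptors standing in for the two GB/SA terms of the model:
# a solvation free energy and a surface-area term. All values are invented
# for illustration only.
dG_solv = np.array([-2.1, -4.5, -1.0, -6.3, -3.2])       # kcal/mol (made up)
sasa = np.array([310.0, 420.0, 250.0, 510.0, 380.0])     # A^2 (made up)
logp_exp = np.array([1.8, 3.1, 0.9, 4.4, 2.5])           # experimental log Po/w (made up)

# Two-descriptor multiple linear regression: logP ~ a*dG_solv + b*SASA + c
X = np.column_stack([dG_solv, sasa, np.ones_like(dG_solv)])
coeffs, *_ = np.linalg.lstsq(X, logp_exp, rcond=None)

# Report the two error metrics quoted in the abstract.
pred = X @ coeffs
mae = float(np.mean(np.abs(pred - logp_exp)))
rmse = float(np.sqrt(np.mean((pred - logp_exp) ** 2)))
print(f"MAE={mae:.3f}  RMSE={rmse:.3f}")
```

On a real data set the same code would be wrapped in five-fold cross-validation before the external test sets are touched.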
Abstract:
The proportion of the population living in or around cities is higher than ever. Urban sprawl and car dependence have taken over from the pedestrian-friendly compact city. Environmental problems such as air pollution, land waste, and noise, as well as health problems, are the result of this still-continuing process. Urban planners have to find solutions to these complex problems while at the same time ensuring the economic performance of the city and its surroundings. Meanwhile, an increasing quantity of socio-economic and environmental data is being acquired. In order to gain a better understanding of the processes and phenomena taking place in the complex urban environment, these data should be analysed. Numerous methods for modelling and simulating such a system exist, are still under development, and can be exploited by urban geographers to improve our understanding of the urban metabolism. Modern and innovative visualisation techniques help in communicating the results of such models and simulations. This thesis covers several methods for the analysis, modelling, simulation and visualisation of problems related to urban geography. The analysis of high-dimensional socio-economic data using artificial neural network techniques, especially self-organising maps, is shown using two examples at different scales. The problem of spatio-temporal modelling and data representation is treated and some possible solutions are shown. The simulation of urban dynamics, and more specifically of the traffic due to commuting to work, is illustrated using multi-agent micro-simulation techniques. A section on visualisation methods presents cartograms for transforming the geographic space into a feature space, and the distance circle map, a centre-based map representation particularly useful for urban agglomerations. Some issues concerning the importance of scale in urban analysis and the clustering of urban phenomena are discussed.
A new approach to defining urban areas at different scales is developed, and the link with percolation theory is established. Fractal statistics, especially the lacunarity measure, and scaling laws are used to characterise urban clusters. In a final section, population evolution is modelled using a model close to the well-established gravity model. The work covers a wide range of methods useful in urban geography. These methods should be developed further and, at the same time, find their way into the daily work and decision processes of urban planners.
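The gravity-style population model mentioned above rests on a simple interaction law: flows between two clusters grow with the product of their populations and decay with distance. A minimal sketch, with invented populations and distances:

```python
def gravity_flow(pop_i, pop_j, distance, k=1.0, beta=2.0):
    """Gravity-model interaction between two urban clusters: proportional
    to the product of the populations, decaying as a power of distance.
    k and beta are calibration parameters (values here are illustrative)."""
    return k * pop_i * pop_j / distance ** beta

# Invented example: two towns of 50,000 and 20,000 inhabitants, 10 km apart.
flow = gravity_flow(50_000, 20_000, 10.0)
print(flow)  # -> 10000000.0 with k=1, beta=2
```

In practice k and beta would be fitted to observed commuting or migration flows; the model is symmetric in the two populations by construction.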
Abstract:
BACKGROUND: Anal condylomata acuminata (ACA) are caused by human papillomavirus (HPV) infection, which is transmitted by close physical and sexual contact. Surgical treatment of ACA has an overall success rate of 71% to 93%, with a recurrence rate between 4% and 29%. The aim of this study was to assess a possible association between HPV type and ACA recurrence after surgical treatment. METHODS: We performed a retrospective analysis of 140 consecutive patients who underwent surgery for ACA from January 1990 to December 2005 at our tertiary University Hospital. We confirmed ACA by histopathological analysis and determined the HPV type using the polymerase chain reaction. Patients gave consent for HPV testing and completed a questionnaire. We looked at the association of ACA, HPV type, and HIV disease. We used chi-square, Monte Carlo simulation, and Wilcoxon tests for statistical analysis. RESULTS: Among the 140 patients (123 M/17 F), HPV 6 and 11 were the most frequently encountered viruses (51% and 28%, respectively). Recurrence occurred in 35 (25%) patients. HPV 11 was present in 19 (41%) of these recurrences, which is statistically significant compared with other HPV types. There was no significant difference between recurrence rates in the 33 (24%) HIV-positive and the HIV-negative patients. CONCLUSIONS: HPV 11 is associated with a higher recurrence rate of ACA. This makes routine clinical HPV typing questionable. Follow-up is required to identify recurrence and to treat it early, especially if HPV 11 has been identified.
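A Monte Carlo (permutation) version of the chi-square test, as used above, can be sketched as follows. The 2x2 counts are only loosely reconstructed from the abstract's percentages (roughly 19 of 39 HPV 11 patients recurring versus 16 of 101 others) and are illustrative, not the study's exact table:

```python
import random

def chi2_stat(table):
    """Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]]."""
    (a, b), (c, d) = table
    n = a + b + c + d
    rows, cols = (a + b, c + d), (a + c, b + d)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = rows[i] * cols[j] / n
            stat += (obs - exp) ** 2 / exp
    return stat

def monte_carlo_p(table, n_sim=2000, seed=1):
    """Simulated p-value: shuffle outcome labels across patients and
    recompute the statistic, conditioning on the observed margins."""
    rng = random.Random(seed)
    (a, b), (c, d) = table
    group = [0] * (a + b) + [1] * (c + d)            # e.g. HPV 11 vs other types
    outcome = [1] * a + [0] * b + [1] * c + [0] * d  # recurrence yes/no
    observed = chi2_stat(table)
    hits = 0
    for _ in range(n_sim):
        rng.shuffle(outcome)
        a2 = sum(o for g, o in zip(group, outcome) if g == 0)
        c2 = sum(o for g, o in zip(group, outcome) if g == 1)
        if chi2_stat([[a2, (a + b) - a2], [c2, (c + d) - c2]]) >= observed:
            hits += 1
    return (hits + 1) / (n_sim + 1)

# Illustrative counts shaped like the abstract's percentages.
p = monte_carlo_p([[19, 20], [16, 85]])
print(p)
```

The simulated p-value avoids the large-sample approximation of the classical chi-square test, which matters for small cell counts like these.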
Abstract:
Queuing is a fact of life that we witness daily. We all have had the experience of waiting in line for some reason, and we also know that it is an annoying situation. As the adage says, "time is money"; this is perhaps the best way of stating what queuing problems mean for customers. Human beings are not very tolerant, but they are even less so when having to wait in line for service. Banks, roads, post offices and restaurants are just some examples where people must wait for service. Studies of queuing phenomena have typically addressed the optimisation of performance measures (e.g. average waiting time, queue length and server utilisation rates) and the analysis of equilibrium solutions, while the individual behaviour of the agents involved in queueing systems and their decision-making processes have received little attention. Although this research has been useful for improving the efficiency of many queueing systems, and for designing new processes in social and physical systems, it has only provided us with a limited ability to explain the behaviour observed in many real queues. In this dissertation we depart from this traditional research by analysing how the agents involved in the system make decisions, instead of focusing on optimising performance measures or analysing an equilibrium solution. This dissertation builds on and extends the framework proposed by van Ackere and Larsen (2004) and van Ackere et al. (2010). We focus on studying behavioural aspects of queueing systems and incorporate this still underdeveloped framework into the operations management field. In the first chapter of this thesis we provide a general introduction to the area, as well as an overview of the results.
In Chapters 2 and 3, we use Cellular Automata (CA) to model service systems where captive interacting customers must decide each period which facility to join for service. They base this decision on their expectations of sojourn times. Each period, customers use new information (their most recent experience and that of their best-performing neighbour) to form expectations of the sojourn time at the different facilities. Customers update their expectations using an adaptive expectations process to combine their memory and their new information. We label "conservative" those customers who give more weight to their memory than to the new information. In contrast, when they give more weight to new information, we call them "reactive". In Chapter 2, we consider customers with different degrees of risk-aversion who take uncertainty into account. They choose which facility to join based on an estimated upper bound of the sojourn time, which they compute using their perceptions of the average sojourn time and the level of uncertainty. We assume the same exogenous service capacity for all facilities, which remains constant throughout. We first analyse the collective behaviour generated by the customers' decisions. We show that the system achieves low weighted average sojourn times when the collective behaviour results in neighbourhoods of customers loyal to a facility and the customers are approximately equally split among all facilities. The lowest weighted average sojourn time is achieved when exactly the same number of customers patronises each facility, implying that they do not wish to switch facility. In this case, the system has achieved the Nash equilibrium. We show that there is a non-monotonic relationship between the degree of risk-aversion and system performance. Customers with an intermediate degree of risk-aversion typically incur higher sojourn times; in particular, they rarely achieve the Nash equilibrium.
Risk-neutral customers have the highest probability of achieving the Nash equilibrium. Chapter 3 considers a service system similar to the previous one but with risk-neutral customers, and relaxes the assumption of exogenous service rates. In this sense, we model a queueing system with endogenous service rates by enabling managers to adjust the service capacity of the facilities. We assume that managers do so based on their perceptions of the arrival rates, and use the same principle of adaptive expectations to model these perceptions. We consider service systems in which the managers' decisions take time to be implemented. Managers are characterised by a profile which is determined by the speed at which they update their perceptions, the speed at which they take decisions, and how coherent they are in accounting for their previous decisions still to be implemented when taking their next decision. We find that the managers' decisions exhibit a strong path-dependence: owing to the initial conditions of the model, the facilities of managers with identical profiles can evolve completely differently. In some cases the system becomes "locked in" to a monopoly or duopoly situation. The competition between managers causes the weighted average sojourn time of the system to converge to the exogenous benchmark value which they use to estimate their desired capacity. Concerning the managers' profile, we found that the more conservative a manager is regarding new information, the larger the market share his facility achieves. Additionally, the faster he takes decisions, the higher the probability that he achieves a monopoly position. In Chapter 4 we consider a one-server queueing system with non-captive customers. We carry out an experiment aimed at analysing the way human subjects, taking on the role of the manager, make decisions in a laboratory setting regarding the capacity of a service facility. We adapt the model proposed by van Ackere et al. (2010).
This model relaxes the assumption of a captive market and allows current customers to decide whether or not to use the facility. Additionally, the facility also has potential customers who currently do not patronise it, but might consider doing so in the future. We identify three groups of subjects whose decisions cause similar behavioural patterns. These groups are labelled gradual investors, lumpy investors, and random investors. Using an autocorrelation analysis of the subjects' decisions, we illustrate that these decisions are positively correlated with the decisions taken one period earlier. Subsequently, we formulate a heuristic to model the decision rule used by subjects in the laboratory. We found that this decision rule fits very well for those subjects who gradually adjust capacity, but it does not capture the behaviour of the subjects in the other two groups. In Chapter 5 we summarise the results and provide suggestions for further work. Our main contribution is the use of simulation and experimental methodologies to explain the collective behaviour generated by customers' and managers' decisions in queueing systems, as well as the analysis of the individual behaviour of these agents. In this way, we differ from the typical queueing literature, which focuses on optimising performance measures and analysing equilibrium solutions. Our work can be seen as a first step towards understanding the interaction between customer behaviour and the capacity adjustment process in queueing systems. This framework is still in its early stages and accordingly there is large potential for further work spanning several research topics. Interesting extensions to this work include incorporating other characteristics of queueing systems which affect the customers' experience (e.g. balking, reneging and jockeying); providing customers and managers with additional information for their decisions (e.g.
service price, quality, customers' profile); analysing different decision rules and studying other characteristics which determine the profile of customers and managers.
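The two behavioural rules described above (the adaptive expectations update and the risk-averse facility choice of Chapter 2) can be sketched as follows. The weights, perceived sojourn times, and uncertainty levels are invented for illustration, not parameter values from the dissertation:

```python
def update_expectation(old, new_info, weight_new):
    """Adaptive expectations: blend memory with new information.
    weight_new < 0.5 corresponds to a 'conservative' customer,
    weight_new > 0.5 to a 'reactive' one."""
    return (1 - weight_new) * old + weight_new * new_info

def choose_facility(perceived_mean, perceived_std, risk_aversion):
    """Pick the facility with the lowest estimated upper bound on the
    sojourn time, here modelled as mean + risk_aversion * std (a sketch
    of the decision rule, not the thesis' exact formulation)."""
    bounds = [m + risk_aversion * s
              for m, s in zip(perceived_mean, perceived_std)]
    return bounds.index(min(bounds))

# A conservative customer (weight 0.3 on new information) repeatedly
# observing a sojourn time of 5.0 converges to that value.
exp_time = 10.0
for _ in range(50):
    exp_time = update_expectation(exp_time, 5.0, weight_new=0.3)
print(round(exp_time, 3))  # -> 5.0

# A risk-averse customer prefers the slower but more predictable facility.
print(choose_facility([5.0, 6.0], [3.0, 0.1], risk_aversion=2.0))  # -> 1
```

A risk-neutral customer (risk_aversion = 0) would instead choose facility 0, the one with the lower perceived mean, which is the non-monotonicity the dissertation explores.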
Abstract:
Mountain ecosystems will likely be affected by global warming during the 21st century, with substantial biodiversity loss predicted by species distribution models (SDMs). Depending on the geographic extent, elevation range and spatial resolution of data used in making these models, different rates of habitat loss have been predicted, with associated risk of species extinction. Few coordinated across-scale comparisons have been made using data of different resolution and geographic extent. Here, we assess whether climate-change induced habitat losses predicted at the European scale (10x10' grid cells) are also predicted from local scale data and modeling (25x25m grid cells) in two regions of the Swiss Alps. We show that local-scale models predict persistence of suitable habitats in up to 100% of species that were predicted by a European-scale model to lose all their suitable habitats in the area. Proportion of habitat loss depends on climate change scenario and study area. We find good agreement between the mismatch in predictions between scales and the fine-grain elevation range within 10x10' cells. The greatest prediction discrepancy for alpine species occurs in the area with the largest nival zone. Our results suggest elevation range as the main driver for the observed prediction discrepancies. Local scale projections may better reflect the possibility for species to track their climatic requirement toward higher elevations.
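A toy numerical illustration of why coarse- and fine-scale models disagree: summarising a coarse grid cell by a single (mean) elevation can hide high-elevation refugia that the fine grid resolves. The lapse rate is the standard atmospheric value; the sea-level temperature, thermal limit and elevations are invented for illustration:

```python
# Suitability of a cell for a cold-adapted species: mean annual temperature
# must not exceed the species' thermal limit. Temperature is derived from
# elevation with a fixed lapse rate. All species/climate numbers are made up.
LAPSE = 0.0065   # degC per m (standard environmental lapse rate)
T_SEA = 12.0     # hypothetical sea-level temperature after warming
T_MAX = 6.0      # hypothetical upper thermal limit of the species

def suitable(elevation_m):
    return T_SEA - LAPSE * elevation_m <= T_MAX

# Five fine-resolution cells inside one coarse cell: mostly valley floor,
# with one high summit cell.
fine_elevations = [300, 500, 600, 700, 2500]
coarse_elevation = sum(fine_elevations) / len(fine_elevations)  # 920 m

coarse_says_suitable = suitable(coarse_elevation)
fine_fraction = sum(suitable(e) for e in fine_elevations) / len(fine_elevations)
print(coarse_says_suitable, fine_fraction)  # coarse model loses the habitat
```

The coarse cell is declared entirely unsuitable, yet 20% of its fine cells remain suitable: exactly the kind of scale mismatch, driven by within-cell elevation range, that the abstract reports.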
Abstract:
The circadian timing system controls the cell cycle, apoptosis, drug bioactivation, and transport and detoxification mechanisms in healthy tissues. As a consequence, the tolerability of cancer chemotherapy varies up to severalfold as a function of the circadian timing of drug administration in experimental models. The best antitumor efficacy of single-agent or combination chemotherapy usually corresponds to the delivery of anticancer drugs near their respective times of best tolerability. Mathematical models reveal that such coincidence between chronotolerance and chronoefficacy is best explained by differences in the circadian and cell cycle dynamics of host and cancer cells, especially with regard to circadian entrainment and cell cycle variability. In the clinic, a large improvement in tolerability was shown in international randomized trials where cancer patients received the same sinusoidal chronotherapy schedule over 24 h, as compared to constant-rate infusion or wrongly timed chronotherapy. However, sex, genetic background, and lifestyle were found to influence optimal chronotherapy scheduling. These findings support systems biology approaches to cancer chronotherapeutics. They involve the systematic experimental mapping and modeling of chronopharmacology pathways in synchronized cell cultures and their adjustment to mouse models of both sexes and distinct genetic backgrounds, as recently shown for irinotecan. Model-based personalized circadian drug delivery aims at jointly improving the tolerability and efficacy of anticancer drugs based on the circadian timing system of individual patients, using dedicated circadian biomarker and drug delivery technologies.
Abstract:
In 1903, more than 30 million m3 of rock fell from the east slopes of Turtle Mountain in Alberta, Canada, causing a rock avalanche that killed about 70 people in the town of Frank. The Alberta Government, in response to continuing instabilities at the crest of the mountain, established a sophisticated field laboratory where state-of-the-art monitoring techniques have been installed and tested as part of an early-warning system. In this chapter, we provide an overview of the causes, trigger, and extreme mobility of the landslide. We then present new data relevant to the characterization and detection of the present-day instabilities on Turtle Mountain. Fourteen potential instabilities have been identified through field mapping and remote sensing. Lastly, we provide a detailed review of the different in-situ and remote monitoring systems that have been installed on the mountain. The implications of the new data for the future stability of Turtle Mountain and related landslide runout, and for monitoring strategies and risk management, are discussed.
Abstract:
Severe combined immunodeficiency (SCID) and other severe non-SCID primary immunodeficiencies (non-SCID PID) can be treated by allogeneic hematopoietic stem cell (HSC) transplantation, but when histocompatibility leukocyte antigen-matched donors are lacking, this can be a high-risk procedure. Correcting the patient's own HSCs with gene therapy offers an attractive alternative. Gene therapies currently being used in clinical settings insert a functional copy of the entire gene by means of a viral vector. With this treatment, severe complications may result due to integration within oncogenes. A promising alternative is the use of endonucleases such as ZFNs, TALENs, and CRISPR/Cas9 to introduce a double-stranded break in the DNA and thus induce homology-directed repair. With these genome-editing tools a correct copy can be inserted in a precisely targeted "safe harbor." They can also be used to correct pathogenic mutations in situ and to develop cellular or animal models needed to study the pathogenic effects of specific genetic defects found in immunodeficient patients. This review discusses the advantages and disadvantages of these endonucleases in gene correction and modeling with an emphasis on CRISPR/Cas9, which offers the most promise due to its efficacy and versatility.
Abstract:
Introduction: Coordination is a strategy chosen by the central nervous system to control movement and maintain stability during gait. Coordinated multi-joint movements require a complex interaction between nervous outputs, biomechanical constraints, and proprioception. Quantitatively understanding and modeling gait coordination remains a challenge. Surgeons lack a way to model and assess the coordination of patients before and after surgery of the lower limbs. Patients alter their gait patterns and their kinematic synergies when they walk faster or slower than normal speed in order to maintain their stability and minimize the energy cost of locomotion. The goal of this study was to provide a dynamical systems approach to quantitatively describe human gait coordination and to apply it to patients before and after total knee arthroplasty. Methods: A new method of quantitative analysis of interjoint coordination during gait was designed, providing a general model to capture the whole dynamics and show the kinematic synergies at various walking speeds. The proposed model imposed a relationship among lower limb joint angles (hips and knees) to parameterize the dynamics of locomotion of each individual. An integration of different analysis tools, such as harmonic analysis, principal component analysis, and artificial neural networks, helped overcome the high dimensionality, temporal dependence, and non-linear relationships of the gait patterns. Ten patients were studied using an ambulatory gait device (Physilog®). Each participant was asked to perform two 30 m walking trials at 3 different speeds and to complete an EQ-5D questionnaire, a WOMAC and a Knee Society Score. Lower limb rotations were measured by four miniature angular rate sensors mounted, respectively, on each shank and thigh.
The outcomes of the eight patients undergoing total knee arthroplasty, recorded pre-operatively and post-operatively at 6 weeks, 3 months, 6 months and 1 year, were compared to those of 2 age-matched healthy subjects. Results: The new method provided coordination scores at various walking speeds, ranging from 0 to 10. It determined the overall coordination of the lower limbs as well as the contribution of each joint to the total coordination. The differences between the pre-operative and post-operative coordination values were correlated with the improvements in the subjective outcome scores. Although the study group was small, the results showed a new way to objectively quantify the gait coordination of patients undergoing total knee arthroplasty, using only portable body-fixed sensors. Conclusion: A new method for objective gait coordination analysis has been developed, with very encouraging results regarding the objective outcome of lower limb surgery.
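One of the tools named above, principal component analysis, can reduce correlated joint-angle traces to a low-dimensional synergy. A minimal sketch of the idea on synthetic sinusoidal hip/knee angles follows; the 0-10 mapping and the signals are illustrative, not the thesis' actual scoring model:

```python
import numpy as np

# Synthetic joint-angle traces over one gait cycle. Well-coordinated gait
# yields strongly correlated signals, so the first principal component
# captures most of the variance; we map that fraction onto a 0-10 scale.
# This is a sketch of the PCA idea only, not the published score.
t = np.linspace(0, 2 * np.pi, 200)
angles = np.column_stack([
    30 * np.sin(t),           # right hip (deg, synthetic)
    30 * np.sin(t + 0.10),    # left hip, slightly phase-shifted
    60 * np.sin(t + 0.05),    # right knee
    60 * np.sin(t + 0.15),    # left knee
])

centered = angles - angles.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)          # variance per component
coordination_score = 10 * explained[0]       # crude 0-10 coordination index
print(round(float(coordination_score), 2))
```

Here the nearly in-phase signals give a score close to 10; larger phase lags or joint-specific disruptions lower it, which is the intuition behind per-joint contributions to a total coordination score.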
Abstract:
Risk management is often assessed through linear methods which stress positioning and causal logical frameworks: to such events correspond such consequences and, accordingly, such risks. Consideration of the interrelationships between risks is often overlooked, and risks are rarely analyzed in their dynamic and nonlinear components.
This work shows what systemic methods, including the study of complex systems, can contribute to the understanding, management, and anticipation of business risks, on both the conceptual and the practical sides. Based on the definitions of systems and risks in various areas, as well as the methods used to manage risk, this work confronts these concepts with approaches from complex systems analysis and modeling. It highlights the reductive effects of some business risk analysis methods, as well as the limitations of risk universes caused in particular by unsuitable definitions. As a result, this work also provides chief officers with a range of different tools and approaches which give them a better understanding of complexity and thus a gain in efficiency in their risk management practices. It results in a better fit between strategy and risk management; ultimately, the firm gains in the maturity of its risk management.
Abstract:
The MIGCLIM R package is a function library for the open source R software that enables the implementation of species-specific dispersal constraints into projections of species distribution models under environmental change and/or landscape fragmentation scenarios. The model is based on a cellular automaton and the basic modeling unit is a cell that is inhabited or not. Model parameters include dispersal distance and kernel, long distance dispersal, barriers to dispersal, propagule production potential and habitat invasibility. The MIGCLIM R package has been designed to be highly flexible in the parameter values it accepts, and to offer good compatibility with existing species distribution modeling software. Possible applications include the projection of future species distributions under environmental change conditions and modeling the spread of invasive species.
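The cellular-automaton core of such dispersal-constrained projections can be sketched in a few lines: each step, an unoccupied but suitable cell is colonised if an occupied cell lies within the dispersal distance. The function and parameter names below are illustrative, not the MigClim API, and the kernel is reduced to a simple Chebyshev neighbourhood:

```python
def dispersal_step(occupied, suitable, dispersal_dist=1):
    """One synchronous update of a dispersal-limited cellular automaton.
    occupied/suitable are 2D lists of 0/1; a suitable, unoccupied cell
    becomes occupied if any occupied cell lies within dispersal_dist
    (Chebyshev distance, a crude stand-in for a dispersal kernel)."""
    rows, cols = len(suitable), len(suitable[0])
    new = [row[:] for row in occupied]
    for r in range(rows):
        for c in range(cols):
            if occupied[r][c] or not suitable[r][c]:
                continue
            for dr in range(-dispersal_dist, dispersal_dist + 1):
                for dc in range(-dispersal_dist, dispersal_dist + 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols and occupied[rr][cc]:
                        new[r][c] = 1
    return new

# Toy landscape: a column of unsuitable habitat (a dispersal barrier)
# separates the source population from suitable cells on the right.
suitable = [[1, 1, 0, 1],
            [1, 1, 1, 1]]
occupied = [[1, 0, 0, 0],
            [0, 0, 0, 0]]
step1 = dispersal_step(occupied, suitable)
print(step1)  # -> [[1, 1, 0, 0], [1, 1, 0, 0]]
```

Iterating the step shows the species skirting the barrier through the lower row over subsequent generations, the kind of lagged spread that a dispersal-free species distribution model would miss.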
Abstract:
The application of DNA-based markers toward the task of discriminating among alternate salmon runs has evolved in accordance with ongoing genomic developments and increasingly has enabled resolution of which genetic markers associate with important life-history differences. Accurate and efficient identification of the most likely origin for salmon encountered during ocean fisheries, or at salvage from fresh water diversion and monitoring facilities, has far-reaching consequences for improving measures for management, restoration and conservation. Near-real-time provision of high-resolution identity information enables prompt response to changes in encounter rates. We thus continue to develop new tools to provide the greatest statistical power for run identification. As a proof of concept for genetic identification improvements, we conducted simulation and blind tests for 623 known-origin Chinook salmon (Oncorhynchus tshawytscha) to compare and contrast the accuracy of different population sampling baselines and microsatellite loci panels. This test included 35 microsatellite loci (1266 alleles), some known to be associated with specific coding regions of functional significance, such as the circadian rhythm cryptochrome genes, and others not known to be associated with any functional importance. The identification of fall run with unprecedented accuracy was demonstrated. Overall, the top performing panel and baseline (HMSC21) were predicted to have a success rate of 98%, but the blind-test success rate was 84%. Findings for bias or non-bias are discussed to target primary areas for further research and resolution.
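The baseline-assignment step behind such run identification can be sketched as a maximum-likelihood classifier: assign each fish to the population whose allele frequencies make its multilocus genotype most probable. The populations, loci and frequencies below are invented for illustration; real baselines hold 35 loci and over a thousand alleles:

```python
import math

# Hypothetical two-locus allele-frequency baselines for two runs.
baselines = {
    "fall_run":   [{"A": 0.8, "a": 0.2}, {"B": 0.6, "b": 0.4}],
    "spring_run": [{"A": 0.3, "a": 0.7}, {"B": 0.1, "b": 0.9}],
}

def assign(genotype):
    """Assign an individual to the baseline population maximising the
    log-likelihood of its genotype, assuming Hardy-Weinberg proportions
    and independent loci. genotype: one allele pair per locus."""
    best_pop, best_ll = None, -math.inf
    for pop, loci in baselines.items():
        ll = 0.0
        for (x, y), freqs in zip(genotype, loci):
            ll += math.log(freqs[x]) + math.log(freqs[y])
        if ll > best_ll:
            best_pop, best_ll = pop, ll
    return best_pop

print(assign([("A", "A"), ("B", "b")]))  # -> fall_run
```

Simulation and blind tests like those in the abstract amount to running such a classifier on individuals of known origin and tabulating the success rate per panel and baseline.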
Resumo:
The Zermatt-Saas Fee Zone (ZSZ) in the Western Alps consists of multiple slices of ultramafic, mafic and metasedimentary rocks. They represent the remnants of the Mesozoic Piemonte-Ligurian oceanic basin, which was subducted to eclogite-facies conditions with peak pressures and temperatures of up to 20-28 kbar and 550-630 °C, followed by a greenschist-facies overprint during exhumation. Previous studies, based on isotopic geochronology and modeling of REE behavior in garnets from mafic eclogites, suggest that the ZSZ is built up of tectonic slices that underwent protracted, diachronous subduction followed by rapid, synchronous exhumation. In this study, Rb/Sr geochronology is applied to phengites included in garnets from metasediments of two different slices of the ZSZ in order to date garnet growth. Inclusion ages for two metapelitic samples from the same locality in the first slice are 44.25 ± 0.48 Ma and 43.19 ± 0.32 Ma. These are about 4 Myr older than the corresponding matrix mica ages of 40.02 ± 0.13 Ma and 39.55 ± 0.25 Ma, respectively. The inclusion age for a third, calcschist sample, collected from a second slice, is 40.58 ± 0.24 Ma, and the matrix age is 39.8 ± 1.5 Ma. The results show that garnet effectively functioned as a shield, preventing a reset of the Rb/Sr isotopic clock in the included phengites even at temperatures well above the closure temperature of Sr in mica. The results are consistent with those of former studies on the ZSZ using both Lu/Hf and Sm/Nd geochronology on mafic eclogites. They confirm that at least parts of the ZSZ remained at close-to-peak metamorphic HP conditions until less than 43 Ma ago before being rapidly exhumed about 40 Ma ago. Fluid infiltration in rocks of the second slice likely occurred close to peak metamorphic conditions, resulting in rapid garnet growth.
Similar calcschists from the same slice contain two distinct types of garnet porphyroblasts, with indications of multiple growth pulses and resorption evidenced by truncated chemical zoning patterns. In-situ oxygen isotope Sensitive High Resolution Ion Microprobe (SHRIMP) analyses along profiles across central garnet sections reveal variations of up to 5‰ within individual garnets. The complex compositional zoning and graphite inclusion patterns, as well as the oxygen isotope variations, record growth under changing fluid compositions caused by externally infiltrated fluids. The ultramafic and mafic rocks, which were subducted along with the sediments and form the volumetrically most important part of the ZSZ, are the likely source of these mainly aqueous fluids. - The Zermatt-Saas Fee Zone (ZSZ) consists of multiple slices of ultramafic, mafic and metasedimentary rocks. Exposed in the Western Alps, it represents the remnants of the Mesozoic Piemonte-Ligurian oceanic basin. During the Eocene subduction of this basin, the rocks of the oceanic floor reached eclogite-facies conditions, with peak pressures and temperatures estimated at 20-28 kbar and 550-630 °C respectively, before undergoing greenschist-facies retrogression during exhumation. Earlier studies combining isotopic geochronology with modeling of the mechanisms governing rare-earth-element incorporation in garnets from mafic eclogites suggest that the ZSZ is not a single unit but consists of several tectonic slices that underwent protracted, diachronous subduction followed by rapid, synchronous exhumation.
To test this hypothesis, I dated in this study phengites included in garnets from the metasediments of two different tectonic slices of the ZSZ, in order to date the relative growth of these garnets, using the geochronological method based on the decay of 87Rb to 87Sr. Three samples from two different slices were dated. The first two samples come from Triftji, north of the Breithorn, from a first slice whose metasediments are characterized by garnet-bearing metapelitic bands and calcschists. The third sample was collected at Riffelberg, in a slice whose metasediments are essentially calcschists mixed with mafic rocks and serpentinites. This mélange lies above the large serpentinite mass that forms the Riffelhorn, Trockenersteg and Breithorn, and is known as the Riffelberg mélange zone (Bearth, 1953). Phengite inclusions in garnets from the two metapelitic samples of the first slice are dated at 44.25 ± 0.48 Ma and 43.19 ± 0.32 Ma. These ages are about 4 Myr older than the ages obtained on matrix phengites from the same samples, which yield 40.02 ± 0.13 Ma and 39.55 ± 0.25 Ma, respectively. Phengite inclusions in garnets from a calcschist of the second slice give an age of 40.58 ± 0.24 Ma, while the matrix phengites give 39.8 ± 1.5 Ma. To explain these age differences between included and matrix phengites, we suggest that garnet crystallization isolated the included phengites and preserved them from any re-equilibration during the subsequent prograde and then retrograde metamorphic path.
This is particularly important for explaining the absence of phengite re-equilibration at temperatures above the closure temperature of the Rb/Sr system in phengite. Since the included phengites could not be dated individually, we interpret the 44 Ma inclusion age as a mean age for the incorporation of these phengites into garnet. These results are consistent with those of earlier studies of the ZSZ using the Sm/Nd and Lu/Hf isotopic systems on mafic eclogites. They confirm that at least part of the ZSZ experienced peak pressure and temperature conditions less than 44-42 Ma ago, before being rapidly exhumed to upper-greenschist-facies conditions around 40 Ma. This detailed study of the garnets also highlighted the role of fluids during prograde metamorphism. Although all garnets show growth pulses and resorption, two distinct types of garnet porphyroblasts can be distinguished in different calcschists from the second slice, according to the presence or absence of graphite inclusions. We link these growth/resorption pulses, as well as the presence or absence of graphite inclusions in the garnets, to fluid infiltration into the system throughout the prograde path, but particularly close to, and possibly shortly after, peak metamorphism, as suggested by the 40 Ma age measured on phengite inclusions in the Riffelberg sample. In-situ oxygen isotope analyses performed with the SHRIMP (Sensitive High Resolution Ion Microprobe) across central garnet sections indicate variations of up to 5‰ within single garnets.
The complex chemical zoning and graphite inclusion patterns, together with the δ18O variations, record garnet growth under changing fluid compositions caused by infiltration of external fluids. We attribute the origin of these aqueous fluids to the ultramafic and mafic units that were subducted together with the metasediments, units which form the volumetrically most important part of the ZSZ.
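The Rb/Sr ages quoted above follow from the standard radiogenic-ingrowth equation, 87Sr/86Sr = (87Sr/86Sr)_0 + (87Rb/86Sr)(e^(λt) − 1), solved for t. A minimal sketch of the age calculation (the ratios below are illustrative, not measured values from this study):

```python
import math

LAMBDA_RB87 = 1.42e-11  # 87Rb decay constant, 1/yr (conventional value)

def rb_sr_age(sr87_sr86, rb87_sr86, sr87_sr86_initial):
    """Model age (Ma) of a mineral from its measured 87Sr/86Sr and
    87Rb/86Sr ratios, given an assumed initial 87Sr/86Sr:
        87Sr/86Sr = (87Sr/86Sr)_0 + (87Rb/86Sr) * (exp(lambda*t) - 1)
    solved for t."""
    ingrowth = (sr87_sr86 - sr87_sr86_initial) / rb87_sr86
    return math.log(1.0 + ingrowth) / LAMBDA_RB87 / 1e6

# A phengite with a high Rb/Sr ratio accumulates measurable radiogenic
# 87Sr over ~40 Myr (illustrative numbers).
age = rb_sr_age(sr87_sr86=0.7204, rb87_sr86=30.0, sr87_sr86_initial=0.7034)
```

Phengite is well suited to this method precisely because its high 87Rb/86Sr makes the ingrowth term large relative to analytical uncertainty; two-point "mineral-matrix" ages use the same relation with the initial ratio taken from a coexisting low-Rb/Sr phase.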
Resumo:
Given the very large amount of data obtained every day through population surveys, much new research could reuse this information instead of collecting new samples. Unfortunately, relevant data are often scattered across different files obtained through different sampling designs. Data fusion is a set of methods used to combine information from different sources into a single dataset. In this article, we are interested in a specific problem: the fusion of two data files, one of which is quite small. We propose a model-based procedure combining a logistic regression with an Expectation-Maximization algorithm. Results show that despite the lack of data, this procedure can perform better than standard matching procedures.
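The model-based idea can be sketched in a much-simplified form (the EM refinement of the article is omitted, and the data below are toy values): fit a logistic regression of the binary variable observed only in the small donor file on the variables common to both files, then impute that variable in the large recipient file.

```python
import math

def fit_logistic(X, y, lr=0.5, epochs=2000):
    """Fit P(y=1|x) = sigmoid(w.x + b) by batch gradient descent."""
    n, d = len(X), len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * d, 0.0
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            err = 1.0 / (1.0 + math.exp(-z)) - yi   # prediction error
            for j in range(d):
                gw[j] += err * xi[j]
            gb += err
        w = [wj - lr * gwj / n for wj, gwj in zip(w, gw)]
        b -= lr * gb / n
    return w, b

def predict(w, b, x):
    return 1.0 / (1.0 + math.exp(-(sum(wj * xj for wj, xj in zip(w, x)) + b)))

# Donor file (small): common variable x observed together with the
# binary variable z that the large recipient file lacks.
donor_X = [[0.0], [0.2], [0.4], [0.6], [0.8], [1.0]]
donor_z = [0, 0, 0, 1, 1, 1]
w, b = fit_logistic(donor_X, donor_z)
# Recipient file (large): impute z from the fitted model.
recipient_X = [[0.1], [0.9]]
imputed = [1 if predict(w, b, x) >= 0.5 else 0 for x in recipient_X]
```

In the article's setting, the EM algorithm additionally exploits the recipient file itself during estimation, which is what lets the procedure outperform standard hot-deck matching when the donor file is small.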