121 results for A priori commitments


Relevance:

10.00%

Publisher:

Abstract:

Documenting and preserving the genetic diversity of populations, which conditions their long-term survival, have become a major issue in conservation biology. The loss of diversity often documented in declining populations is usually assumed to result from human disturbances; however, historical biogeographic events, otherwise known to strongly impact diversity, are rarely considered in this context. We apply a multilocus phylogeographic study to investigate the late-Quaternary history of a tree frog (Hyla arborea) with declining populations in the northern and western part of its distribution range. Mitochondrial and nuclear polymorphisms reveal high genetic diversity in the Balkan Peninsula, with a spatial structure moulded by the last glaciations. While two of the main refugial lineages remained limited to the Balkans (Adriatic coast, southern Balkans), a third one expanded to recolonize Northern and Western Europe, losing much of its diversity in the process. Our findings show that mobile and a priori homogeneous taxa may also display substructure within glacial refugia ('refugia within refugia') and emphasize the importance of the Balkans as a major European biodiversity centre. Moreover, the distribution of diversity roughly coincides with regional conservation situations, consistent with the idea that historically impoverished genetic diversity may interact with anthropogenic disturbances, and increase the vulnerability of populations. Phylogeographic models seem important to fully appreciate the risks of local declines and inform conservation strategies.

Relevance:

10.00%

Publisher:

Abstract:

Retinoblastoma (Rb) is a tumour arising from the retinal progenitor cells of the photoreceptors. It is the most frequent malignant paediatric tumour, with a birth incidence estimated at between 1/15,000 and 1/20,000. The large majority of children with Rb are diagnosed before the age of 4 years, i.e. the time needed for the differentiation and maturation of the photoreceptors and thus for the disappearance of the Rb cell of origin. Patient survival, ocular salvage and visual prognosis remain excellent provided treatment is not delayed. In its non-hereditary form (60%), Rb is always unilateral and sporadic. Hereditary Rb, transmitted in an autosomal dominant fashion (40%), occurs in all forms, familial (10%) or sporadic (30%), whether the involvement is unilateral or bilateral. Most causal mutations are unique and randomly distributed over the entire RB1 gene, with no predisposing region. Detection of these mutations is costly and time-consuming, and the detection rate is relatively low, especially in sporadic unilateral Rb cases. In order to identify patients with a true risk of developing Rb, and to reduce the number of examinations under anaesthesia required to screen for the disease in at-risk individuals, we developed a sensitive, rapid, efficient and inexpensive strategy based on intragenic haplotype analysis. This algorithm takes into account (a) intratumoral loss of heterozygosity of the RB1 gene, (b) the preferential paternal origin of new germline mutations and (c) an a priori risk derived from Vogel's empirical data. Over the period from January 1994 to December 2006, we compared the occurrence of new Rb cases among the siblings and offspring of affected patients with the number of new cases expected according to our algorithm. 134 families were studied. Molecular analysis was performed in 570 individuals, including 99 patients under 4 years of age and therefore at risk of developing Rb. In this cohort we observed one new case of Rb, whereas the cumulative a posteriori risk calculated by our algorithm predicted 1.77 new cases. In this study we were thus able to validate our algorithm for predicting the recurrence of Rb in first-degree relatives of affected patients. This tool should greatly facilitate genetic counselling as well as the follow-up of patients at risk of developing Rb, especially when direct sequencing of the RB1 gene is unavailable or uninformative. - Purpose: Most RB1 mutations are unique and distributed throughout the RB1 gene. Their detection can be time-consuming and the yield especially low in cases of conservatively-treated sporadic unilateral retinoblastoma (Rb) patients. In order to identify patients with true risk of developing Rb, and to reduce the number of unnecessary examinations under anesthesia in all other cases, we developed a universal sensitive, efficient and cost-effective strategy based on intragenic haplotype analysis. Methods: This algorithm allows the calculation of the a posteriori risk of developing Rb and takes into account (a) RB1 loss of heterozygosity in tumors, (b) preferential paternal origin of new germline mutations, (c) a priori risk derived from empirical data by Vogel, and (d) disease penetrance of 90% in most cases.
We report the occurrence of Rb in first degree relatives of patients with sporadic Rb who visited the Jules Gonin Eye Hospital, Lausanne, Switzerland, from January 1994 to December 2006 compared to expected new cases of Rb using our algorithm. Results: A total of 134 families with sporadic Rb were enrolled; testing was performed in 570 individuals and 99 patients younger than 4 years old were identified. We observed one new case of Rb. Using our algorithm, the cumulated total a posteriori risk of recurrence was 1.77. Conclusions: This is the first time that linkage analysis has been validated to monitor the risk of recurrence in sporadic Rb. This should be a useful tool in genetic counseling, especially when direct RB1 screening for mutations leaves a negative result or is unavailable.
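
A rough sketch of how such an a posteriori risk can be assembled from an a priori probability, a haplotype observation and the 90% penetrance figure is given below, using Bayes' rule in odds form. The prior and the likelihood ratios are illustrative placeholders, not the values or the exact algorithm used in the study.

```python
# Illustrative sketch only: combines an a priori carrier probability with an
# intragenic-haplotype observation and penetrance. Numbers are placeholders.

def posterior_rb_risk(prior_carrier, shares_risk_haplotype,
                      penetrance=0.9, lr_shared=1.0, lr_not_shared=0.05):
    """Probability that a relative of a sporadic Rb proband will develop Rb.

    prior_carrier         -- a priori probability that the relative carries the
                             proband's mutant RB1 allele (e.g. derived from
                             Vogel's empirical data and Mendelian transmission).
    shares_risk_haplotype -- True if the relative inherited the intragenic RB1
                             haplotype linked to the proband's mutant allele.
    lr_shared/lr_not_shared -- illustrative likelihood ratios attached to the
                             haplotype observation (recombination and
                             genotyping error would set the real values).
    """
    lr = lr_shared if shares_risk_haplotype else lr_not_shared
    prior_odds = prior_carrier / (1.0 - prior_carrier)
    posterior_odds = prior_odds * lr            # Bayes' rule in odds form
    p_carrier = posterior_odds / (1.0 + posterior_odds)
    return p_carrier * penetrance               # 90% penetrance from the abstract


# Example: a sibling with a low a priori carrier probability who did not
# inherit the at-risk haplotype.
print(round(posterior_rb_risk(0.01, shares_risk_haplotype=False), 4))
```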

Relevance:

10.00%

Publisher:

Abstract:

Objectives: We are interested in the numerical simulation of the anastomotic region comprised between the outflow cannula of the LVAD and the aorta. Segmentation, geometry reconstruction and grid generation from patient-specific data remain an issue because of the variable quality of DICOM images, in particular CT scans (e.g. metallic noise of the device, non-aortic contrast phase). We propose a general framework to overcome this problem and create suitable grids for numerical simulations. Methods: Preliminary treatment of images is performed by reducing the level window and enhancing the contrast of the greyscale image using contrast-limited adaptive histogram equalization. A gradient anisotropic diffusion filter is applied to reduce the noise. Then, watershed segmentation algorithms and mathematical morphology filters allow reconstructing the patient geometry. This is done using the Insight Toolkit library (www.itk.org). Finally, the Vascular Modeling Toolkit (www.vmtk.org) and gmsh (www.geuz.org/gmsh) are used to create the meshes for the fluid (blood) and the structure (arterial wall, outflow cannula) and to identify a priori the boundary layers. The method is tested on five different patients with left ventricular assistance who underwent a CT-scan exam. Results: This method produced good results in four patients. The anastomosis area is recovered and the generated grids are suitable for numerical simulations. In one patient the method failed to produce a good segmentation because of the small dimension of the aortic arch with respect to the image resolution. Conclusions: The described framework allows the use of data that could not otherwise be segmented by standard automatic segmentation tools. In particular, the computational grids that have been generated are suitable for simulations that take fluid-structure interactions into account. Finally, the presented method features good reproducibility and fast application.
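
A pipeline of this kind can be prototyped with SimpleITK, the simplified Python interface to the Insight Toolkit; the sketch below chains adaptive histogram equalization, gradient anisotropic diffusion and a morphological watershed. The file name, filter parameters and the label chosen after the watershed are placeholders, and the VMTK/gmsh meshing steps are not shown.

```python
import SimpleITK as sitk

# Placeholder path: a patient-specific CT volume would be read here.
img = sitk.Cast(sitk.ReadImage("patient_ct.mha"), sitk.sitkFloat32)

# Contrast-limited adaptive histogram equalization of the greyscale volume.
clahe = sitk.AdaptiveHistogramEqualizationImageFilter()
clahe.SetAlpha(0.3)
clahe.SetBeta(0.3)
enhanced = clahe.Execute(img)

# Edge-preserving noise reduction (gradient anisotropic diffusion).
diffusion = sitk.GradientAnisotropicDiffusionImageFilter()
diffusion.SetTimeStep(0.0625)               # stable value for 3D volumes
diffusion.SetNumberOfIterations(5)
diffusion.SetConductanceParameter(2.0)
denoised = diffusion.Execute(enhanced)

# Watershed segmentation on the gradient magnitude, then a morphological
# opening to clean the selected region.
gradient = sitk.GradientMagnitudeRecursiveGaussian(denoised, sigma=1.0)
labels = sitk.MorphologicalWatershed(gradient, level=1.0, markWatershedLine=False)

# Which watershed label corresponds to the aorta/cannula would be chosen
# interactively; label 1 is a placeholder.
opening = sitk.BinaryMorphologicalOpeningImageFilter()
opening.SetKernelRadius(2)
mask = opening.Execute(labels == 1)

sitk.WriteImage(mask, "aorta_mask.mha")
```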

Relevance:

10.00%

Publisher:

Abstract:

The thesis is situated in the domain of contemporary metaphysics of science. The question is which ontology fits best with our knowledge of the world. The method chosen is the one of evaluating the consequences of different ontological frameworks against the background of our scientific knowledge of the world. The thesis analyses the two main frameworks in today's metaphysics of science, Humeanism and dispositionalism. It advocates that only an unorthodox version of Humeanism and only an unorthodox version of dispositionalism can be defended, the unorthodox character of these versions consisting in taking the fundamental properties to be relations rather than intrinsic properties. The thesis then sets out in detail what such an unorthodox version of Humeanism amounts to. Chapters 1 and 2 introduce the standard versions of Humeanism and dispositionalism, focussing on the accounts of laws of nature and causation. Chapter 3 compares both these positions and concludes that as far as the orthodox versions are concerned, dispositionalism fares better than Humeanism, since it can avoid Humeanism's commitments to quidditism and humility. However, as is argued in chapter 4, instead of replying to the objections from quidditism and humility by switching to dispositionalism, there is an unorthodox version of Humeanism available that does not run into these problematic consequences and that is supported by science: if one takes the fundamental physical properties to be relations instead of intrinsic properties, the objection from quidditism is avoided, since there is no hidden intrinsic essence of relations. As regards the objection from humility, one can maintain that science is in principle able to provide knowledge of the fundamental relations that there are in the world so that there is no principled ignorance. Consequently, the thesis concludes that Humeanism and dispositionalism are on a par as regards the remaining charge of humility. Unorthodox Humeanism provides a competitive and adequate ontology in the light of contemporary science.

Relevance:

10.00%

Publisher:

Abstract:

OBJECTIVE: Vitamin D deficiency is frequent in the general population and might be even more prevalent among populations with kidney failure. We compared serum vitamin D levels, vitamin D insufficiency/deficiency status, and vitamin D level determinants in populations without chronic kidney disease (CKD) and with CKD not requiring renal dialysis. DESIGN AND METHODS: This was a cross-sectional, multicenter, population-based study conducted from 2010 to 2011. Participants were from 10 centers that represent the geographical and cultural diversity of the Swiss adult population (≥15 years old). INTERVENTION: CKD was defined using estimated glomerular filtration rate and 24-hour albuminuria. Serum vitamin D was measured by liquid chromatography-tandem mass spectrometry. Statistical procedures adapted for survey data were used. MAIN OUTCOME MEASURE: We compared 25-hydroxy-vitamin D (25(OH)D) levels and the prevalence of vitamin D insufficiency/deficiency (serum 25(OH)D < 30 ng/mL) in participants with and without CKD. We tested the interaction of CKD status with 6 a priori defined attributes (age, sex, body mass index, walking activity, serum albumin-corrected calcium, and altitude) on serum vitamin D level or insufficiency/deficiency status, taking into account potential confounders. RESULTS: Overall, 11.8% (135 of 1,145) of participants had CKD. The 25(OH)D adjusted means (95% confidence interval [CI]) were 23.1 (22.6-23.7) and 23.5 (21.7-25.3) ng/mL in participants without and with CKD, respectively (P = .70). Vitamin D insufficiency or deficiency was frequent among participants without and with CKD (75.3% [95% CI 69.3-81.5] and 69.1% [95% CI 53.9-86.1], P = .054). CKD status did not interact with major determinants of vitamin D, including age, sex, BMI, walking minutes, serum albumin-corrected calcium, or altitude, for its effect on vitamin D status or levels. CONCLUSION: Vitamin D concentration and insufficiency/deficiency status are similar in people with or without CKD not requiring renal dialysis.
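
As a rough illustration of the interaction tests described above (ignoring the survey design and weights used in the study, and using synthetic data with simplified column names), one could fit a linear model in which CKD status is crossed with one of the a priori attributes:

```python
# Synthetic-data sketch: does CKD status modify the association between age
# and serum 25(OH)D? Column names and effect sizes are made up.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age": rng.uniform(15, 90, n),
    "sex": rng.integers(0, 2, n),
    "bmi": rng.normal(26, 4, n),
    "ckd": rng.integers(0, 2, n),
})
df["vitd_25oh"] = 30 - 0.05 * df["age"] - 0.2 * df["bmi"] + rng.normal(0, 5, n)

# The ckd:age interaction term tests whether the age effect differs by CKD
# status; the other attributes enter as covariates.
model = smf.ols("vitd_25oh ~ ckd * age + sex + bmi", data=df).fit()
print(model.summary().tables[1])
```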

Relevance:

10.00%

Publisher:

Abstract:

Classical treatments of problems of sequential mate choice assume that the distribution of the quality of potential mates is known a priori. This assumption, made for analytical purposes, may seem unrealistic, opposing empirical data as well as evolutionary arguments. Using stochastic dynamic programming, we develop a model that includes the possibility for searching individuals to learn about the distribution and in particular to update mean and variance during the search. In a constant environment, a priori knowledge of the parameter values brings strong benefits in both time needed to make a decision and average value of mate obtained. Knowing the variance yields more benefits than knowing the mean, and benefits increase with variance. However, the costs of learning become progressively lower as more time is available for choice. When parameter values differ between demes and/or searching periods, a strategy relying on fixed a priori information might lead to erroneous decisions, which confers advantages on the learning strategy. However, time for choice plays an important role as well: if a decision must be made rapidly, a fixed strategy may do better even when the fixed image does not coincide with the local parameter values. These results help in delineating the ecological-behavior context in which learning strategies may spread.
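
The "known distribution" benchmark of such a model can be sketched by backward induction over the remaining search time: at each step the searcher accepts a prospective mate whose quality exceeds the expected value of continuing to search. The Gaussian quality distribution, search cost and time horizon below are illustrative, not the parameterization of the paper.

```python
# Backward-induction sketch of optimal sequential choice with a known quality
# distribution (illustrative parameters; the learning variant would update the
# mean and variance of this distribution during the search).
import numpy as np

def reservation_values(T, mu=0.0, sigma=1.0, search_cost=0.01, n_grid=20001):
    """Acceptance threshold at each of T remaining decision steps."""
    q = np.linspace(mu - 6 * sigma, mu + 6 * sigma, n_grid)
    p = np.exp(-0.5 * ((q - mu) / sigma) ** 2)
    p /= p.sum()                           # discretised quality distribution
    v_continue = 0.0                       # value of searching after the last step
    thresholds = np.empty(T)
    for t in range(T - 1, -1, -1):         # backward induction
        thresholds[t] = v_continue         # accept any mate worth more than continuing
        v_continue = np.sum(np.maximum(q, v_continue) * p) - search_cost
    return thresholds

# Thresholds drop as time for choice runs out.
print(np.round(reservation_values(T=10), 3))
```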

Relevance:

10.00%

Publisher:

Abstract:

PURPOSE: Most RB1 mutations are unique and distributed throughout the RB1 gene. Their detection can be time-consuming and the yield especially low in cases of conservatively-treated sporadic unilateral retinoblastoma (Rb) patients. In order to identify patients with true risk of developing Rb, and to reduce the number of unnecessary examinations under anesthesia in all other cases, we developed a universal sensitive, efficient and cost-effective strategy based on intragenic haplotype analysis. METHODS: This algorithm allows the calculation of the a posteriori risk of developing Rb and takes into account (a) RB1 loss of heterozygosity in tumors, (b) preferential paternal origin of new germline mutations, (c) a priori risk derived from empirical data by Vogel, and (d) disease penetrance of 90% in most cases. We report the occurrence of Rb in first degree relatives of patients with sporadic Rb who visited the Jules Gonin Eye Hospital, Lausanne, Switzerland, from January 1994 to December 2006 compared to expected new cases of Rb using our algorithm. RESULTS: A total of 134 families with sporadic Rb were enrolled; testing was performed in 570 individuals and 99 patients younger than 4 years old were identified. We observed one new case of Rb. Using our algorithm, the cumulated total a posteriori risk of recurrence was 1.77. CONCLUSIONS: This is the first time that linkage analysis has been validated to monitor the risk of recurrence in sporadic Rb. This should be a useful tool in genetic counseling, especially when direct RB1 screening for mutations leaves a negative result or is unavailable.

Relevance:

10.00%

Publisher:

Abstract:

Abstract: This thesis proposes a set of adaptive broadcast solutions and an adaptive data replication solution to support the deployment of P2P applications. P2P applications are an emerging type of distributed application that runs on top of P2P networks; typical examples are video streaming and file sharing. While interesting because they are fully distributed, P2P applications suffer from several deployment problems due to the nature of the environment in which they operate. Indeed, defining an application on top of a P2P network often means defining an application in which peers contribute resources in exchange for their ability to use the application. For example, in a P2P file sharing application, while a user is downloading a file, the application is in parallel serving that file to other users. Such peers may have limited hardware resources, e.g., CPU, bandwidth and memory, or the end user may decide a priori to limit the resources dedicated to the P2P application. In addition, a P2P network is typically immersed in an unreliable environment, where communication links and processes are subject to message losses and crashes, respectively. To support P2P applications, this thesis proposes a set of services that address some underlying constraints related to the nature of P2P networks. The proposed services include a set of adaptive broadcast solutions and an adaptive data replication solution that can be used as the basis of several P2P applications. Our data replication solution increases availability and reduces communication overhead. The broadcast solutions aim at providing a communication substrate encapsulating one of the key communication paradigms used by P2P applications: broadcast. They typically offer reliability and scalability to some upper layer, be it an end-to-end P2P application or another system-level layer, such as a data replication layer. Our contributions are organized in a protocol stack made of three layers. In each layer, we propose a set of adaptive protocols that address specific constraints imposed by the environment, and each protocol is evaluated through a set of simulations. The adaptiveness of our solutions relies on the fact that they take the constraints of the underlying system into account in a proactive manner. To model these constraints, we define an environment approximation algorithm that provides an approximated view of the system or part of it; this view includes the topology and the reliability of components, expressed in probabilistic terms. To adapt to the underlying system constraints, the proposed broadcast solutions route messages through tree overlays that maximize broadcast reliability, where broadcast reliability is expressed as a function of the reliability of the selected paths and of the use of available resources. These resources are modeled as quotas of messages reflecting the receiving and sending capacities of each node. To allow deployment in a large-scale system, we take into account the memory available at each process by limiting the view it has to maintain of the system. Using this partial view, we propose three scalable broadcast algorithms, based on a propagation overlay that tends towards the global tree overlay and adapts to some constraints of the underlying system.
At a higher level, this thesis also proposes a data replication solution that is adaptive both in terms of replica placement and in terms of request routing. At the routing level, this solution takes the unreliability of the environment into account in order to maximize reliable delivery of requests. At the replica placement level, the dynamically changing origin and frequency of read/write requests are analyzed in order to define a set of replicas that minimizes communication cost.
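
One simple way to picture a tree overlay that favours reliable links, in the spirit of the broadcast solutions described above (though not the thesis's actual protocols), is to take the tree of most reliable root-to-node paths: maximizing a product of link reliabilities is equivalent to running Dijkstra on -log(reliability) weights. The link reliabilities below are illustrative.

```python
# Illustrative sketch: build a broadcast tree from the most reliable
# root-to-peer paths, given probabilistic link reliabilities.
import math
import networkx as nx

links = [("root", "a", 0.99), ("root", "b", 0.80), ("a", "b", 0.95),
         ("a", "c", 0.90), ("b", "c", 0.60)]

G = nx.Graph()
for u, v, r in links:
    # Maximising the product of reliabilities along a path is the same as
    # minimising the sum of -log(reliability), so Dijkstra applies.
    G.add_edge(u, v, cost=-math.log(r), reliability=r)

# Shortest-path tree rooted at the broadcast source = tree of most reliable
# root-to-node paths; messages would be propagated along its edges.
paths = nx.single_source_dijkstra_path(G, "root", weight="cost")
tree_edges = {(p[i], p[i + 1]) for p in paths.values() for i in range(len(p) - 1)}

for u, v in sorted(tree_edges):
    print(f"{u} -> {v}  (link reliability {G[u][v]['reliability']:.2f})")
```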

Relevance:

10.00%

Publisher:

Abstract:

Introduction: Responses to external stimuli are typically investigated by averaging peri-stimulus electroencephalography (EEG) epochs in order to derive event-related potentials (ERPs) across the electrode montage, under the assumption that signals related to the external stimulus are fixed in time across trials. We demonstrate the applicability of a single-trial model based on patterns of scalp topographies (De Lucia et al, 2007) that can be used for ERP analysis at the single-subject level. The model is able to classify new trials (or groups of trials) with minimal a priori hypotheses, using information derived from a training dataset. The features used for the classification (the topography of responses and their latency) can be neurophysiologically interpreted, because a difference in scalp topography indicates a different configuration of brain generators. An above-chance classification accuracy on test datasets implicitly demonstrates the suitability of this model for EEG data. Methods: The data analyzed in this study were acquired from two separate visual evoked potential (VEP) experiments. The first entailed passive presentation of checkerboard stimuli to each of the four visual quadrants (hereafter, "Checkerboard Experiment") (Plomp et al, submitted). The second entailed active discrimination of novel versus repeated line drawings of common objects (hereafter, "Priming Experiment") (Murray et al, 2004). Four subjects per experiment were analyzed, using approximately 200 trials per experimental condition. These trials were randomly separated into training (90%) and testing (10%) datasets in 10 independent shuffles. In order to perform the ERP analysis, we estimated the statistical distribution of voltage topographies with a Mixture of Gaussians (MofGs), which reduces our original dataset to a small number of representative voltage topographies. We then evaluated statistically the degree of presence of these template maps across trials and whether and when this differed across experimental conditions. Based on these differences, single trials or sets of a few single trials were classified as belonging to one or the other experimental condition. Classification performance was assessed using the Receiver Operating Characteristic (ROC) curve. Results: For the Checkerboard Experiment, contrasts entailed left vs. right visual field presentations for upper and lower quadrants, separately. The average posterior probabilities, indicating the presence of the computed template maps in time and across trials, revealed significant differences starting at ~60-70 ms post-stimulus. The average ROC curve area across all four subjects was 0.80 and 0.85 for upper and lower quadrants, respectively, and was in all cases significantly higher than chance (unpaired t-test, p<0.0001). In the Priming Experiment, we contrasted initial versus repeated presentations of visual object stimuli. Their posterior probabilities revealed significant differences, which started at 250 ms post-stimulus onset. The classification accuracy rates with single-trial test data were at chance level. We therefore considered sub-averages based on five single trials. We found that for three out of four subjects classification rates were significantly above chance level (unpaired t-test, p<0.0001). Conclusions: The main advantage of the present approach is that it is based on topographic features that are readily interpretable along neurophysiological lines.
As these maps were previously normalized by the overall strength of the field potential on the scalp, a change in their presence across trials and between conditions necessarily reflects a change in the underlying generator configurations. The temporal periods of statistical difference between conditions were estimated for each training dataset for ten shuffles of the data. Across the ten shuffles and in both experiments, we observed a high level of consistency in the temporal periods over which the two conditions differed. With this method we are able to analyze ERPs at the single-subject level, providing a novel tool for comparing normal electrophysiological responses with single cases that cannot be considered part of any cohort of subjects. This aspect promises to have a strong impact on both basic and clinical research.
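
The two main ingredients of this analysis, summarizing single-trial topographies with a Mixture of Gaussians and scoring discrimination with the area under the ROC curve, can be sketched with scikit-learn on synthetic data. This is only an illustration of the workflow, not the authors' actual single-trial model or classification rule.

```python
# Synthetic-data sketch: Mixture-of-Gaussians templates + ROC-based scoring.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trials, n_electrodes = 400, 64
X = rng.normal(size=(n_trials, n_electrodes))   # stand-in voltage topographies
y = rng.integers(0, 2, size=n_trials)           # two experimental conditions
X[y == 1, :8] += 0.6                            # weak condition effect

# 1) Reduce single-trial topographies to posterior probabilities of a small
#    number of template maps (the Mixture-of-Gaussians step).
gmm = GaussianMixture(n_components=4, covariance_type="diag", random_state=0).fit(X)
posteriors = gmm.predict_proba(X)               # trials x template maps

# 2) Classify trials from these posteriors and assess performance with the
#    ROC curve area on held-out trials (cf. the 90%/10% shuffles).
tr, te, ytr, yte = train_test_split(posteriors, y, test_size=0.1, random_state=0)
clf = LogisticRegression().fit(tr, ytr)
print("ROC AUC:", round(roc_auc_score(yte, clf.predict_proba(te)[:, 1]), 2))
```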

Relevance:

10.00%

Publisher:

Abstract:

Predicting which species will occur together in the future, and where, remains one of the greatest challenges in ecology, and requires a sound understanding of how the abiotic and biotic environments interact with dispersal processes and history across scales. Biotic interactions and their dynamics influence species' relationships to climate, and this also has important implications for predicting future distributions of species. It is already well accepted that biotic interactions shape species' spatial distributions at local spatial extents, but the role of these interactions beyond local extents (e.g. 10 km² to global extents) is usually dismissed as unimportant. In this review we consolidate evidence for how biotic interactions shape species distributions beyond local extents and review methods for integrating biotic interactions into species distribution modelling tools. Drawing upon evidence from contemporary and palaeoecological studies of individual species ranges, functional groups, and species richness patterns, we show that biotic interactions have clearly left their mark on species distributions and realised assemblages of species across all spatial extents. We demonstrate this with examples from within and across trophic groups. A range of species distribution modelling tools is available to quantify species environmental relationships and predict species occurrence, such as: (i) integrating pairwise dependencies, (ii) using integrative predictors, and (iii) hybridising species distribution models (SDMs) with dynamic models. These methods have typically only been applied to interacting pairs of species at a single time, require a priori ecological knowledge about which species interact, and due to data paucity must assume that biotic interactions are constant in space and time. To better inform the future development of these models across spatial scales, we call for accelerated collection of spatially and temporally explicit species data. Ideally, these data should be sampled to reflect variation in the underlying environment across large spatial extents, and at fine spatial resolution. Simplified ecosystems where there are relatively few interacting species and sometimes a wealth of existing ecosystem monitoring data (e.g. arctic, alpine or island habitats) offer settings where the development of modelling tools that account for biotic interactions may be less difficult than elsewhere.
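
Approach (i), integrating a pairwise dependency, can be illustrated with a minimal correlative SDM in which the occurrence of a known interacting species enters as an additional predictor next to climate; the data below are synthetic and the model deliberately simplistic.

```python
# Synthetic-data sketch: an SDM with one abiotic and one biotic predictor.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_sites = 1000
temperature = rng.normal(10, 4, n_sites)        # abiotic predictor
prey_present = rng.integers(0, 2, n_sites)      # a priori known interacting species
logit = -2.0 + 0.3 * temperature + 1.5 * prey_present
focal_present = rng.random(n_sites) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([temperature, prey_present])
sdm = LogisticRegression().fit(X, focal_present)
print("abiotic and biotic coefficients:", np.round(sdm.coef_[0], 2))
```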

Relevance:

10.00%

Publisher:

Abstract:

Few episodes of suspected infection observed in paediatric intensive care are classifiable without ambiguity by a priori defined criteria; most require additional expert judgement. Recently, we observed a high variability in antibiotic prescription rates that was not explained by the patients' clinical data or underlying diseases. We hypothesised that disagreement among experts in the adjudication of episodes of suspected infection could be one of the potential causes of this variability. During a 5-month period, we included all patients of a 19-bed multidisciplinary, tertiary, neonatal and paediatric intensive care unit in whom infection was clinically suspected and antibiotics were prescribed (n = 183). Three experts (two senior ICU physicians and a specialist in infectious diseases) were provided with all patient data and all laboratory and microbiological findings. All experts classified episodes according to a priori defined criteria into proven sepsis, probable sepsis (negative cultures), localised infection and no infection. Episodes of proven viral infection and incomplete data sets were excluded. Of the remaining 167 episodes, 48 were classifiable by a priori criteria (n = 28 proven sepsis, n = 20 no infection). The three experts achieved only limited agreement beyond chance in the remaining 119 episodes (kappa = 0.32, and kappa = 0.19 amongst the ICU physicians). The kappa is a measure of the degree of agreement beyond what would be expected by chance alone, with 0 indicating the chance result and 1 indicating perfect agreement. CONCLUSION: Agreement of specialists in hindsight adjudication of episodes of suspected infection is of questionable reliability.
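
The agreement statistic reported above can be reproduced for any pair of raters with a standard Cohen's kappa implementation; the adjudications below are made up for illustration.

```python
# Cohen's kappa between two (fictitious) experts classifying the same episodes
# into the four a priori categories.
from sklearn.metrics import cohen_kappa_score

expert_a = ["proven", "probable", "none", "probable", "localised", "none", "proven"]
expert_b = ["proven", "none", "none", "localised", "localised", "probable", "proven"]

# 0 = agreement expected by chance alone, 1 = perfect agreement.
print(round(cohen_kappa_score(expert_a, expert_b), 2))
```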

Relevance:

10.00%

Publisher:

Abstract:

Particle physics studies highly complex processes which cannot be directly observed. Scientific realism claims that we are nevertheless warranted in believing that these processes really occur and that the objects involved in them really exist. This dissertation defends a version of scientific realism, called causal realism, in the context of particle physics. I start by introducing the central theses and arguments in the recent philosophical debate on scientific realism (chapter 1), with a special focus on an important presupposition of the debate, namely common sense realism. Chapter 2 then discusses entity realism, which introduces a crucial element into the debate by emphasizing the importance of experiments in defending scientific realism. Most of the chapter is concerned with Ian Hacking's position, but I also argue that Nancy Cartwright's version of entity realism is ultimately preferable as a basis for further development. In chapter 3, I take a step back and consider the question whether the realism debate is worth pursuing at all. Arthur Fine has given a negative answer to that question, proposing his natural ontological attitude as an alternative to both realism and antirealism. I argue that the debate (in particular the realist side of it) is in fact less vicious than Fine presents it. The second part of my work (chapters 4-6) develops, illustrates and defends causal realism. The key idea is that inference to the best explanation is reliable in some cases, but not in others. Chapter 4 characterizes the difference between these two kinds of cases in terms of three criteria which distinguish causal from theoretical warrant. In order to flesh out this distinction, chapter 5 then applies it to a concrete case from the history of particle physics, the discovery of the neutrino. This case study shows that the distinction between causal and theoretical warrant is crucial for understanding what it means to "directly detect" a new particle. But the distinction is also an effective tool against what I take to be the presently most powerful objection to scientific realism: Kyle Stanford's argument from unconceived alternatives. I respond to this argument in chapter 6, and I illustrate my response with a discussion of Jean Perrin's experimental work concerning the atomic hypothesis. In the final part of the dissertation, I turn to the specific challenges posed to realism by quantum theories. One of these challenges comes from the experimental violations of Bell's inequalities, which indicate a failure of locality in the quantum domain. I show in chapter 7 how causal realism can further our understanding of quantum non-locality by taking account of some recent experimental results. Another challenge to realism in quantum mechanics comes from delayed-choice experiments, which seem to imply that certain aspects of what happens in an experiment can be influenced by later choices of the experimenter. Chapter 8 analyzes these experiments and argues that they do not warrant the antirealist conclusions which some commentators draw from them. It pays particular attention to the case of delayed-choice entanglement swapping and the corresponding question whether entanglement is a real physical relation. In chapter 9, I finally address relativistic quantum theories. It is often claimed that these theories are incompatible with a particle ontology, and this calls into question causal realism's commitment to localizable and countable entities.
I defend the commitments of causal realism against these objections, and I conclude with some remarks connecting the interpretation of quantum field theory to more general metaphysical issues confronting causal realism.

Relevance:

10.00%

Publisher:

Abstract:

Landslides are one of the main natural hazards in mountainous regions. In Switzerland, landslides cause damage every year that affects infrastructure and carries substantial financial costs. An in-depth understanding of sliding mechanisms may help limit their impact. In particular, this can be achieved through a better knowledge of the internal structure of the landslide, the determination of its volume and of its sliding surface or surfaces. In a landslide, the disorganization and the presence of fractures in the displaced material generate a change of the physical parameters, in particular a decrease of the seismic velocities and of the material density. Seismic methods are therefore well adapted to the study of landslides. Among seismic methods, surface-wave dispersion analysis is easy to implement. Through it, shear-wave velocity variations with depth can be estimated without having to resort to an S-wave source and horizontal geophones. Its three-step implementation involves measuring surface-wave dispersion on long arrays, determining the dispersion curves and finally inverting these curves. Velocity models obtained through this approach are only valid when the investigated medium does not include lateral variations. In practice, this assumption is seldom correct, in particular for landslides, in which reshaped layers are likely to include strong lateral heterogeneities. To assess the possibility of determining dispersion curves from short arrays, we carried out test measurements on a site (Arnex, VD) that includes a borehole. A 190 m long seismic profile was acquired in a valley carved into limestone and filled with 30 m of glacio-lacustrine sediments. The data acquired along this profile confirmed that the presence of lateral variations under the geophone array influences the dispersion-curve shape, sometimes to the point of preventing the determination of dispersion curves. Our approach to using surface-wave dispersion analysis on sites with lateral variations consists in obtaining dispersion curves for a series of short arrays, inverting each curve and interpolating the resulting velocity models. The choice of the location as well as of the geophone-array length is important: it takes into account the location of the heterogeneities revealed by the seismic-refraction interpretation of the data, but also the location of signal-amplitude anomalies observed on maps that represent, for a given frequency, the measured amplitude in the shot position - receiver position domain. The procedure proposed by Lin and Lin (2007) turned out to be an efficient way to determine dispersion curves using short arrays. It consists in building, from an array of geophones, a time-offset gather covering a wide range of source-to-receiver offsets by assembling seismograms acquired from different shot positions. When assembling the different data, a phase correction is applied in order to reduce the static phase errors induced by lateral variations. To evaluate this correction, we suggest calculating, for two successive shots, the cross power spectral density of common-offset traces. On the Arnex site, 22 dispersion curves were determined with 10 m long geophone arrays. We also took advantage of the borehole to acquire an S-wave vertical seismic profile. The S-wave velocity-depth model derived from the interpretation of the vertical seismic profile is used as prior information in the inversion of the dispersion curves. Finally, a 2D velocity model was established from the analysis of the different dispersion curves. It reveals a three-layer structure in good agreement with the lithologies observed in the borehole: a clay layer with a shear-wave velocity of 175 m/s overlies, at 9 m depth, a clayey-sandy till layer characterized by an S-wave velocity of 300 m/s down to 14 m and of 400 m/s or more between 14 and 20 m depth. The La Grande Combe landslide (Ballaigues, VD) occurs inside the Quaternary filling of a valley carved into Portlandian limestone. As at the Arnex site, the Quaternary deposits correspond to glacio-lacustrine sediments. In the upper part of the landslide, the sliding surface is located at a depth of about 20 m, coinciding with the discontinuity between Jurassian till and glacio-lacustrine deposits. At the toe of the landslide, we determined 14 dispersion curves along a 144 m long profile using 10 m long geophone arrays. The obtained curves are discontinuous and defined within a frequency range of 7 to 35 Hz. The use of a wide range of offsets (from 8 to 72 m) enabled us to identify 2 to 4 modes of propagation for each dispersion curve. Taking these higher modes into account in the dispersion-curve inversion allowed us to reach an investigation depth of about 20 m. A four-layer 2D model was derived (Vs1 < 175 m/s, 175 m/s < Vs2 < 225 m/s, 225 m/s < Vs3 < 400 m/s, Vs4 > 400 m/s) with variable layer thicknesses. S-wave seismic reflection profiles, acquired with a source built as part of this work, complete and corroborate the velocity model revealed by the surface-wave analysis. In particular, a reflector at a depth of 5 to 10 m, associated with a stacking velocity of 180 m/s, images the geometry of the discontinuity between the second and third layers of the model derived from the surface-wave dispersion analysis.
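
The suggested phase correction can be sketched as follows: for two successive shots, the cross power spectral density of two common-offset traces gives the inter-shot phase shift as a function of frequency, which can then be removed before assembling the time-offset gather. The synthetic traces and sampling parameters below are illustrative only.

```python
# Synthetic sketch: estimate the phase shift between common-offset traces of
# two successive shots from their cross power spectral density.
import numpy as np
from scipy.signal import csd

fs = 1000.0                                    # sampling frequency, Hz
t = np.arange(0, 1.0, 1.0 / fs)
trace_shot1 = np.sin(2 * np.pi * 20 * t)           # common-offset trace, shot 1
trace_shot2 = np.sin(2 * np.pi * 20 * t - 0.8)     # same offset, next shot (phase-shifted)

freqs, pxy = csd(trace_shot1, trace_shot2, fs=fs, nperseg=256)
phase = np.angle(pxy)                          # phase correction as a function of frequency

i20 = np.argmin(np.abs(freqs - 20.0))
print(f"phase shift magnitude at 20 Hz: {abs(phase[i20]):.2f} rad (expected about 0.8)")
```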

Relevance:

10.00%

Publisher:

Abstract:

Research aiming to elucidate the neural bases of adolescence emerged during the 1990s and has become firmly established over the last decade. It is now accepted in the fields of cognitive and developmental neuroscience that the brain continues to develop after the tenth year of life and reaches an adult-like stage of maturation only around the age of 25. The structural and functional configuration of the brain specific to the adolescent period is held to involve a lack of emotional control, favouring so-called risk-taking behaviours which both allow the acquisition of independence and create situations that endanger the young individual and those around them. These same behaviours, in their negative sense - alcohol and drug consumption, drunk driving, unprotected sex, weapon carrying, etc. - mobilise prevention and public health policies relating to adolescence and youth. This thesis, which retraces the history of the adolescent brain from the late 1950s to the present day, lies at the intersection of these two themes of scientific and public interest. From the perspective of a cultural and social history of science, it examines the experimental, institutional and contextual elements that have contributed to the construction of an adolescence defined by its cerebral immaturity and associated with so-called risk behaviours. More precisely, it highlights, through the privileged lens of gender, the ways and timescales in which the history of scientific research on human brain development during adolescence and the history of the shaping of an impulsive, risk-taking type of adolescent - that is, one who is potentially delinquent, addicted, disabled or chronically ill, and constituted as a problem of policy and public health - have been brought to converge. The argument developed is that gender and sex are active categories in the construction of an ideally unisex adolescent brain. In other words, although the adolescent brain qualifies individuals with regard to their age, without apparent distinction of sex or gender, the conditions of its production and the criteria of its definition are constitutively gendered, notably through risk behaviours that concern a majority of boys. The aim is to analyse how sex and gender can produce age, which is a priori unisex, and to question the scientific, social and political stakes that contribute to rendering the categories of sex and gender invisible. The goal is to consider how cerebral adolescence reconfigures the management of questions related to adolescence and youth, in terms of health problems and delinquency, but also in terms of the reproduction of social norms, of what it means to become a man or a woman.