106 results for Distance-based techniques
Abstract:
Among the types of remote sensing acquisitions, optical images are certainly one of the most widely relied upon data sources for Earth observation. They provide detailed measurements of the electromagnetic radiation reflected or emitted by each pixel in the scene. Through a process termed supervised land-cover classification, this makes it possible to automatically yet accurately distinguish objects at the surface of our planet. In this respect, when producing a land-cover map of the surveyed area, the availability of training examples representative of each thematic class is crucial for the success of the classification procedure. However, in real applications, labeled pixels are usually scarce due to several constraints on the sample collection process. When analyzing an image for which those key samples are unavailable, a viable solution consists in resorting to the ground truth data of other, previously acquired images. This option is attractive, but several factors such as atmospheric, ground and acquisition conditions can cause radiometric differences between the images, therefore hindering the transfer of knowledge from one image to another. The goal of this Thesis is to supply remote sensing image analysts with suitable processing techniques to ensure a robust portability of classification models across different images. The ultimate purpose is to map the land-cover classes over large spatial and temporal extents with minimal ground information. To overcome, or simply quantify, the observed shifts in the statistical distribution of the spectra of the materials, we study four approaches drawn from the field of machine learning. First, we propose a strategy to intelligently sample the image of interest so as to collect labels only for the most useful pixels. This iterative routine is based on a constant evaluation of the pertinence to the new image of the initial training data, which actually belong to a different image.
Second, we present an approach that reduces the radiometric differences among the images by projecting the respective pixels into a common new data space. We analyze a kernel-based feature extraction framework suited for such problems, showing that, after this relative normalization, the cross-image generalization abilities of a classifier are greatly increased. Third, we test a new data-driven measure of distance between probability distributions to assess the distortions caused by differences in acquisition geometry affecting series of multi-angle images. We also gauge the portability of classification models through the sequences. In both exercises, the efficacy of classic physically and statistically based normalization methods is discussed. Finally, we explore a new family of approaches based on sparse representations of the samples to convert the data spaces of two images into one another. The projection function bridging the images allows a synthesis of new pixels with more similar characteristics, ultimately facilitating land-cover mapping across images.
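The "data-driven measure of distance between probability distributions" mentioned above is not named in this abstract. As a purely illustrative stand-in, the sketch below computes the kernel maximum mean discrepancy (MMD), a common data-driven distance between two samples; the arrays (standing in for pixel spectra of two images) and the kernel bandwidth are assumptions, not the thesis's actual measure.

```python
import numpy as np

def mmd2(X, Y, gamma=1.0):
    """Biased estimate of squared maximum mean discrepancy with an RBF kernel."""
    def k(A, B):
        # Pairwise squared Euclidean distances, then Gaussian kernel.
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(1)
# Hypothetical "spectra": 100 pixels with 4 bands each.
same = mmd2(rng.normal(0, 1, (100, 4)), rng.normal(0, 1, (100, 4)))
# A radiometric shift between acquisitions increases the distance.
shifted = mmd2(rng.normal(0, 1, (100, 4)), rng.normal(1.0, 1, (100, 4)))
print(same < shifted)
```

A larger MMD between two images' spectra would signal that a classifier trained on one is less likely to transfer directly to the other.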
Abstract:
We conducted a preliminary, questionnaire-based, retrospective analysis of training and injury in British National Squad Olympic distance (OD) and Ironman distance (IR) triathletes. The main outcome measures were training duration and frequency and injury frequency and severity. The number of overuse injuries sustained over a 5-year period did not differ between OD and IR. However, the proportions of OD and IR athletes who were affected by injury to particular anatomical sites differed (p < 0.05). Also, fewer OD athletes (16.7 vs. 36.8%, p < 0.05) reported that their injury recurred. Although OD sustained fewer running injuries than IR (1.6 +/- 0.5 vs. 1.9 +/- 0.3, p < 0.05), more subsequently stopped running (41.7 vs. 15.8%) and for longer (33.5 +/- 43.0 vs. 16.7 +/- 16.6 days, p < 0.01). In OD, the number of overuse injuries sustained correlated inversely with the percentage of training time, and the number of sessions, devoted to bike hill repetitions (r = -0.44 and -0.39, respectively, both p < 0.05). The IR overuse injury number correlated with the amount of intensive sessions done (r = 0.67, p < 0.01 and r = 0.56, p < 0.05 for duration of "speed run" and "speed bike" sessions). Coaches should note that training differences between triathletes who specialize in OD or IR competition may lead to their exhibiting differential risk of injury to specific anatomical sites. It is also important to note that cycle and run training may have a "cumulative stress" influence on injury risk. Therefore, the tendency of some triathletes to modify rather than stop training when injured (usually by increasing load in another discipline from the one in which the injury first occurred) may increase both their risk of injury recurrence and their time to full rehabilitation.
Abstract:
Recently, graph theory and complex networks have been widely used as a means to model the functionality of the brain. Among the different neuroimaging techniques available for constructing brain functional networks, electroencephalography (EEG), with its high temporal resolution, is a useful instrument for the analysis of functional interdependencies between different brain regions. Alzheimer's disease (AD) is a neurodegenerative disease which leads to substantial cognitive decline and, eventually, dementia in elderly people. To achieve a deeper insight into the behavior of functional cerebral networks in AD, here we study their synchronizability in 17 newly diagnosed AD patients compared to 17 healthy control subjects in a no-task, eyes-closed condition. The cross-correlation of artifact-free EEGs was used to construct the brain functional networks. The extracted networks were then tested for their synchronization properties by calculating the eigenratio of the Laplacian matrix of the connection graph, i.e., the largest eigenvalue divided by the second smallest one. In AD patients, we found an increase in the eigenratio, i.e., a decrease in the synchronizability of brain networks across the delta, alpha, beta, and gamma EEG frequencies within a wide range of network costs. The finding indicates the disruption of functional brain networks in early AD.
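The eigenratio described above (largest Laplacian eigenvalue divided by the second smallest) is straightforward to compute. A minimal sketch, using a hypothetical 5-node adjacency matrix in place of a thresholded EEG cross-correlation network:

```python
import numpy as np

# Hypothetical undirected 5-node network (1 = functional link present).
A = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0],
    [1, 1, 0, 1, 1],
    [0, 1, 1, 0, 1],
    [0, 0, 1, 1, 0],
], dtype=float)

def eigenratio(adj):
    """Ratio lambda_N / lambda_2 of the graph Laplacian L = D - A.

    A larger ratio indicates a network that is harder to synchronize."""
    L = np.diag(adj.sum(axis=1)) - adj
    eigvals = np.sort(np.linalg.eigvalsh(L))
    # eigvals[0] is ~0 for a connected graph; eigvals[1] is lambda_2.
    return eigvals[-1] / eigvals[1]

ratio = eigenratio(A)
print(round(ratio, 3))
```

In the study's setting, this quantity would be evaluated on each subject's network at a fixed cost (edge density) and compared between AD patients and controls.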
Abstract:
In groundwater applications, Monte Carlo methods are employed to model the uncertainty in geological parameters. However, their brute-force application becomes computationally prohibitive for highly detailed geological descriptions, complex physical processes, and large numbers of realizations. The Distance Kernel Method (DKM) overcomes this issue by clustering the realizations in a multidimensional space based on the flow responses obtained by means of an approximate (computationally cheaper) model; the uncertainty is then estimated from the exact responses, which are computed only for one representative realization per cluster (the medoid). Usually, DKM is employed to decrease the size of the sample of realizations considered to estimate the uncertainty. We propose to use the information from the approximate responses for uncertainty quantification. The subset of exact solutions provided by DKM is then employed to construct an error model and correct the potential bias of the approximate model. Two error models are devised; both employ the difference between approximate and exact medoid solutions, but they differ in the way medoid errors are interpolated to correct the whole set of realizations. The Local Error Model rests upon the clustering defined by DKM and can be seen as a natural way to account for intra-cluster variability; the Global Error Model employs a linear interpolation of all medoid errors regardless of the cluster to which a given realization belongs. These error models are evaluated for an idealized pollution problem in which the uncertainty of the breakthrough curve needs to be estimated. For this numerical test case, we demonstrate that the error models improve the uncertainty quantification provided by the DKM algorithm and are effective in correcting the bias of the estimate computed solely from the approximate (MsFV) results.
The framework presented here is not specific to the methods considered and can be applied to other combinations of approximate models and techniques to select a subset of realizations.
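The Local Error Model idea above can be sketched in a few lines. This is a toy illustration only: hypothetical 1-D responses, a simple sort-based clustering standing in for the actual DKM kernel clustering, and an exact response generated for all realizations purely so the correction can be evaluated (in practice, exact responses exist only for the medoids).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 'approx' is a cheap proxy response for 200 realizations;
# the exact response carries a bias relative to the approximate model.
approx = rng.normal(0.0, 1.0, 200)
exact = approx + 0.3 + 0.1 * approx  # exact = biased version of approx

# Cluster realizations in the (1-D) response space; pick one medoid each.
n_clusters = 5
order = np.argsort(approx)
clusters = np.array_split(order, n_clusters)
medoids = [c[len(c) // 2] for c in clusters]  # middle member as medoid

# Local Error Model: shift every member of a cluster by its medoid's error,
# i.e. the difference between exact and approximate medoid solutions.
corrected = approx.copy()
for c, m in zip(clusters, medoids):
    corrected[c] += exact[m] - approx[m]

# The corrected ensemble mean should sit closer to the exact mean.
bias_before = abs(approx.mean() - exact.mean())
bias_after = abs(corrected.mean() - exact.mean())
print(bias_before > bias_after)
```

The Global Error Model would instead fit one interpolant (e.g. linear) through all medoid errors and apply it to every realization regardless of cluster membership.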
Abstract:
Blood doping involves the use of products that enhance the uptake, transport, or delivery of oxygen to the blood. One approach uses artificial oxygen carriers, known as hemoglobin-based oxygen carriers (HBOCs). This study describes an analytical strategy based on CE for detecting intact HBOCs in plasma samples collected for doping control. On-capillary detection was performed by UV/Vis at 415 nm, which offered detection selectivity for hemoproteins (such as hemoglobin and HBOCs). On-line ESI-MS detection with a TOF analyzer was further used to provide accurate masses for CE peaks and to confirm the presence of HBOCs. An immunodepletion sample preparation step was mandatory prior to analysis, in order to remove the most abundant proteins, which interfered with the CE separation and altered the ESI process. This analytical method was successfully applied to plasma samples enriched with Oxyglobin, a commercially available HBOC used for veterinary purposes. Detection limits of 0.20 and 0.45 g/dL were achieved in plasma for CE-UV/Vis at 415 nm and CE-ESI-TOF/MS, respectively.
Abstract:
Rock slope instabilities such as rock slides, rock avalanches or deep-seated gravitational slope deformations are widespread in Alpine valleys. These phenomena are at once a major factor controlling the erosion of mountain belts and a significant natural hazard causing important losses to mountain communities. However, the potential geometrical and dynamic connections linking outcrop- and slope-scale instabilities are often unknown. A more detailed definition of these potential links is essential to improve the comprehension of the destabilization processes and to obtain a more complete hazard characterization of rock instabilities at different spatial scales. In order to propose an integrated approach to the study of rock slope instabilities, three main themes were analysed in this PhD thesis: (1) the inventory and spatial distribution of rock slope deformations at the regional scale and their influence on landscape evolution, (2) the influence of brittle and ductile tectonic structures on the development of rock slope instabilities, and (3) the characterization of the hazard posed by potential rock slope instabilities through the development of conceptual instability models. To address these topics in an integrated way, several techniques were adopted. In particular, high-resolution digital elevation models proved to be fundamental tools employed during the different stages of the rock slope instability assessment. Particular attention was paid to the application of digital elevation models for detailed geometrical modelling of past and potential instabilities and for rock slope monitoring at different spatial scales. Detailed field analyses and numerical models were performed to complete and verify the remote sensing approach.
In the first part of this thesis, large slope instabilities in the Rhone valley (Switzerland) were mapped in order to obtain a first overview of the tectonic and climatic factors influencing their distribution and characteristics. Our analyses demonstrate the key influence of neotectonic activity and glacial conditioning on the spatial distribution of rock slope deformations. The volumes of the rock instabilities identified along the main Rhone valley were then used to propose a first estimate of the postglacial denudation and filling of the Rhone valley associated with large gravitational movements. In the second part of the thesis, detailed structural analyses of the Frank Slide and the Sierre rock avalanche were performed to characterize the influence of brittle and ductile tectonic structures on the geometry and failure mechanism of large instabilities. Our observations indicate that the geometric characteristics and the variations in rock mass quality associated with ductile tectonic structures, which are often ignored in landslide studies, represent important factors that can drastically influence the extent and the failure mechanism of rock slope instabilities. In the last part of the thesis, the failure mechanisms and the hazard associated with five potential instabilities were analysed in detail. These case studies clearly highlight the importance of combining different analysis and monitoring techniques to obtain reliable hazard scenarios. This information, together with the development of a conceptual instability model, represents the primary input for an integrated risk management of rock slope instabilities.
Abstract:
The proportion of the population living in or around cities is larger than ever. Urban sprawl and car dependence have taken over from the pedestrian-friendly compact city. Environmental problems like air pollution, land waste or noise, as well as health problems, are the result of this still-continuing process. Urban planners have to find solutions to these complex problems while at the same time ensuring the economic performance of the city and its surroundings. Meanwhile, an increasing quantity of socio-economic and environmental data is being acquired. In order to gain a better understanding of the processes and phenomena taking place in the complex urban environment, these data should be analysed. Numerous methods for modelling and simulating such a system exist, are still under development, and can be exploited by urban geographers to improve our understanding of the urban metabolism. Modern and innovative visualisation techniques help in communicating the results of such models and simulations. This thesis covers several methods for the analysis, modelling, simulation and visualisation of problems related to urban geography. The analysis of high-dimensional socio-economic data using artificial neural network techniques, especially self-organising maps, is shown using two examples at different scales. The problem of spatio-temporal modelling and data representation is treated and some possible solutions are shown. The simulation of urban dynamics, and more specifically of the traffic due to commuting to work, is illustrated using multi-agent micro-simulation techniques. A section on visualisation methods presents cartograms for transforming the geographic space into a feature space, and the distance circle map, a centre-based map representation particularly useful for urban agglomerations. Some issues concerning the importance of scale in urban analysis and the clustering of urban phenomena are discussed.
A new approach to defining urban areas at different scales is developed, and the link with percolation theory is established. Fractal statistics, especially the lacunarity measure, and scale laws are used to characterise urban clusters. In a final section, population evolution is modelled using a model close to the well-established gravity model. The work covers quite a wide range of methods useful in urban geography. These methods should be developed further and, at the same time, find their way into the daily work and decision processes of urban planners.
Abstract:
The development of a whole-cell based sensor for arsenite detection coupling biological engineering and electrochemical techniques is presented. This strategy takes advantage of the natural Escherichia coli resistance mechanism against toxic arsenic species, such as arsenite, which consists of the selective intracellular recognition of arsenite and its pumping out of the cell. A whole-cell based biosensor can be produced by coupling the intracellular recognition of arsenite to the generation of an electrochemical signal. To this end, E. coli was equipped with a genetic circuit in which the synthesis of beta-galactosidase is under the control of the arsenite-derepressable arsR promoter. The E. coli reporter strain was loaded into a microchip containing 16 independent electrochemical cells (i.e. two-electrode cells), which was then employed for the analysis of tap and groundwater samples. The developed arsenic-sensitive electrochemical biochip is easy to use and outperforms state-of-the-art bacterial bioreporter assays, specifically in its simplicity and response time, while keeping a very good limit of detection in tap water, i.e. 0.8 ppb. Additionally, a very good linear response in the concentration ranges tested (0.94 ppb to 3.75 ppb, R(2) = 0.9975, and 3.75 ppb to 30 ppb, R(2) = 0.9991) was obtained, complying with the acceptable arsenic concentration limit defined by the World Health Organization for drinking water (i.e. 10 ppb). Therefore, the proposed assay provides a very good alternative for the portable quantification of As(III) in water, as corroborated by the analysis of natural groundwater samples from the Swiss mountains, which showed very good agreement with the results obtained by atomic absorption spectroscopy.
Abstract:
The use of synthetic combinatorial peptide libraries in positional scanning format (PS-SCL) has recently emerged as an alternative approach for the identification of peptides recognized by T lymphocytes. The choice of both the PS-SCL used for screening experiments and the method used for data analysis is crucial for implementing this approach. With this aim, we tested the recognition of different PS-SCL by a tyrosinase 368-376-specific CTL clone and analyzed the data obtained with a recently developed biometric data analysis based on a model of independent and additive contributions of individual amino acids to peptide antigen recognition. Mixtures defined with the amino acids present at the corresponding positions in the native sequence were among the most active for all of the libraries. Somewhat surprisingly, a higher number of native amino acids was identifiable by using amidated COOH-terminal rather than free COOH-terminal PS-SCL. Also, our data clearly indicate that when using PS-SCL longer than optimal, frame shifts occur frequently and should be taken into account. Biometric analysis of the data obtained with the amidated COOH-terminal nonapeptide library allowed the identification of the native ligand as the sequence with the highest score in a public human protein database. However, the adequacy of the PS-SCL data for the identification of the peptide ligand varied depending on the PS-SCL used. Altogether, these results provide insight into the potential of PS-SCL for the identification of CTL-defined tumor-derived antigenic sequences and may significantly improve our ability to interpret the results of these analyses.
Abstract:
Background: This study aimed to use plantar pressure analysis during relatively long-distance walking for the objective outcome evaluation of ankle osteoarthritis treatments, i.e., ankle arthrodesis and total ankle replacement. Methods: Forty-seven subjects in four groups (three patient groups and controls) participated in the study. Each subject walked twice in 50-m trials. Plantar pressure under the pathological foot was measured using pressure insoles. Six parameters (initial contact time, terminal contact time, maximum force time, peak pressure time, maximum force and peak pressure) were calculated and averaged over trials in ten regions of the foot. The parameters in each region were compared between patient groups and controls and their effect size was estimated. In addition, the correlations between pressure parameters and clinical scales were calculated. Findings: Based on the temporal parameters, we observed that patients postpone the heel-off event, when high forefoot force and high ankle moment occur. Based on maximum force and peak pressure, the patients also apply smoothened maximum forces on the affected foot. Across the ten regions, some parameters showed improvements after total ankle replacement, some showed alteration of foot function after ankle arthrodesis, and others still showed abnormality after both surgical treatments. These parameters also showed significant correlations with clinical scales in at least two regions of the foot. Interpretation: Plantar pressure parameters in relatively long-distance trials proved to be strong tools for the outcome evaluation of ankle osteoarthritis treatments. (C) 2010 Elsevier Ltd. All rights reserved.
Abstract:
Principles: The surgeon's experience is crucial for the proper application of sentinel node biopsy (SNB) in patients with breast cancer. A learning curve of 20-30 cases of sentinel node (SN) biopsy with axillary lymph node dissection (ALND) was widely practiced. In order to speed up this learning curve, surgeons may be trained intraoperatively by an experienced surgeon. The purpose of this report is to evaluate the results of this procedure. Methods: Patients with one primary invasive breast cancer (cT1-T2[<3 cm]cN0) underwent SNB based on lymphoscintigraphy using technetium Tc 99m colloid and intraoperative gamma probe detection, with or without blue dye mapping. This was followed by completion ALND when the SN was positive or not found. SNB was performed by one experienced surgeon (teacher) or by 10 junior surgeons trained by the experienced surgeon (trainees). Four groups were defined: (i) SNB with immediate ALND for the teacher's learning curve, (ii) SNB by the teacher, (iii) SNB by the trainees under the teacher's supervision, and (iv) SNB by the trainees alone. Results: Between May 1999 and December 2007, a total of 808 evaluable patients underwent SNB. The SN identification rate was 98% in the teacher's group and 99% in the trainees' group (p = 0.196). SN were positive in 28% and 29% of patients, respectively (p = 0.196). The distribution of isolated tumor cells, micrometastases and metastases was not statistically different between the teacher's and the trainees' groups (p = 0.163). Conclusion: These comparable results confirm the success with which SNB was taught. This strategy avoided the 20-30 SNBs followed by immediate ALND previously required for each surgeon.
Abstract:
PURPOSE: Neurophysiological monitoring aims to improve the safety of pedicle screw placement, but few quantitative studies assess its specificity and sensitivity. In this study, screw placement within the pedicle was measured (on post-op CT scans, as the horizontal and vertical distance from the screw edge to the surface of the pedicle) and correlated with intraoperative neurophysiological stimulation thresholds. METHODS: A single surgeon placed 68 thoracic and 136 lumbar screws in 30 consecutive patients during instrumented fusion under EMG control. The female to male ratio was 1.6 and the average age was 61.3 years (SD 17.7). Radiological measurements, blinded to stimulation threshold, were done on reformatted CT reconstructions using OsiriX software. A standard deviation of the screw position of 2.8 mm was determined from pilot measurements, and a 1-mm screw-pedicle edge distance was considered a difference of interest (standardised difference of 0.35), giving the study a power of 75% (significance level 0.05). RESULTS: Correct placement and stimulation thresholds above 10 mA were found in 71% of screws. Twenty-two percent of screws caused a cortical breach; 80% of these had stimulation thresholds above 10 mA (sensitivity 20%, specificity 90%). True prediction of the correct position of the screw was more frequent for lumbar than for thoracic screws. CONCLUSION: A screw stimulation threshold of >10 mA does not indicate correct pedicle screw placement. A hypothesised gradual decrease of screw stimulation thresholds as screw placement approaches the nerve root was not observed. Aside from a robust threshold of 2 mA indicating direct contact with nervous tissue, a secondary threshold appears to depend on the patient's pathology and surgical conditions.
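The reported sensitivity and specificity follow directly from the quoted percentages (a low stimulation threshold, i.e. <=10 mA, is the "alarm"). A minimal sketch, with screw counts reconstructed from the abstract's figures; the assumption that roughly 90% of correctly placed screws stimulated above 10 mA is inferred from the reported specificity, not stated as a count:

```python
# Hypothetical counts reconstructed from the reported percentages:
# 204 screws total, 22% with cortical breach, 80% of breaches above 10 mA,
# ~90% of correctly placed screws above 10 mA (inferred assumption).
n_screws = 204
breached = round(0.22 * n_screws)            # screws with cortical breach
correct = n_screws - breached

# A stimulation threshold <= 10 mA is the positive (alarm) test result.
true_pos = round(0.20 * breached)            # breached screws that alarmed
true_neg = round(0.90 * correct)             # correct screws above 10 mA

sensitivity = true_pos / breached
specificity = true_neg / correct
print(round(sensitivity, 2), round(specificity, 2))  # → 0.2 0.9
```

The low sensitivity is exactly the study's point: most breached screws still stimulated above 10 mA, so a high threshold cannot be read as confirmation of correct placement.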
Abstract:
In three-dimensional (3D) coronary magnetic resonance angiography (MRA), the in-flow contrast between the coronary blood and the surrounding myocardium is attenuated as compared to thin-slab two-dimensional (2D) techniques. The application of a gadolinium (Gd)-based intravascular contrast agent may provide an additional source of signal and contrast by reducing T(1blood) and supporting the visualization of more distal or branching segments of the coronary arterial tree. In six healthy adults, the left coronary artery (LCA) system was imaged pre- and postcontrast with a 0.075-mmol/kg bodyweight dose of the intravascular contrast agent B-22956. For imaging, an optimized free-breathing, navigator-gated and -corrected 3D inversion recovery (IR) sequence was used. For comparison, state-of-the-art baseline 3D coronary MRA with T(2) preparation for non-exogenous contrast enhancement was acquired. The combination of IR 3D coronary MRA, sophisticated navigator technology, and B-22956 allowed for an extensive visualization of the LCA system. Postcontrast, a significant increase in both the signal-to-noise ratio (SNR; 46%, P < 0.05) and contrast-to-noise ratio (CNR; 160%, P < 0.01) was observed, while vessel sharpness of the left anterior descending (LAD) artery and the left coronary circumflex (LCX) were improved by 20% (P < 0.05) and 18% (P < 0.05), respectively.
Abstract:
Summary: Detection, analysis and monitoring of slope movements by high-resolution digital elevation models. Slope movements, such as rockfalls, rockslides, shallow landslides or debris flows, are frequent in many mountainous areas. These natural hazards endanger inhabitants and infrastructure, making it necessary to assess the hazard and risk caused by these phenomena. This PhD thesis explores various approaches using digital elevation models (DEMs) - and particularly high-resolution DEMs created by aerial or terrestrial laser scanning (TLS) - that contribute to the assessment of slope movement hazard at regional and local scales. The regional detection of areas prone to rockfalls and large rockslides uses different morphologic criteria or geometric instability factors derived from DEMs, i.e. the steepness of the slope, the presence of discontinuities that enable a sliding mechanism, and the denudation potential. The combination of these factors leads to a map of susceptibility to rockfall initiation that is in good agreement with field studies, as shown with the example of the Little Mill Campground area (Utah, USA). Another case study, in the Illgraben catchment in the Swiss Alps, highlighted the link between areas with a high denudation potential and actual rockfall areas. Techniques for a detailed analysis and characterization of slope movements based on high-resolution DEMs have been developed for specific, localized sites, i.e. ancient slide scars, presently active instabilities or potential slope instabilities. The analysis of a site's characteristics mainly focuses on rock slopes and includes structural analyses (orientation of discontinuities); estimation of the spacing, persistence and roughness of discontinuities; failure mechanisms based on the structural setting; and volume calculations.
For the volume estimation, a new 3D approach was tested to reconstruct the topography before a landslide or to construct the basal failure surface of an active or potential instability. The rockslides at Åknes, Tafjord and Rundefjellet in western Norway were principally used as study sites to develop and test the different techniques. The monitoring of slope instabilities investigated in this PhD thesis is essentially based on multitemporal (or sequential) high-resolution DEMs, in particular sequential point clouds acquired by TLS. The changes in topography due to slope movements can be detected and quantified from sequential TLS datasets, notably by shortest-distance comparisons revealing the 3D slope movements over the entire region of interest. A detailed analysis of rock slope movements is based on the affine transformation between an initial and a final state of the rock mass and its decomposition into translational and rotational movements. Monitoring using TLS was very successful on the fast-moving Eiger rockslide in the Swiss Alps, as well as on the active rockslides of Åknes and Nordnesfjellet (northern Norway). One of the main achievements at the Eiger and Åknes rockslides is the combination of the sites' morphology and structural setting with the measured slope movements to produce coherent instability models. Both case studies also highlighted a strong control of the structures in the rock mass on the sliding directions. TLS was also used to monitor slope movements in soils, such as landslides in sensitive clays in Québec (Canada), shallow landslides on river banks (Sorge River, Switzerland) and a debris-flow channel (Illgraben). The PhD thesis underlines the broad uses of high-resolution DEMs, and especially of TLS, in the detection, analysis and monitoring of slope movements.
Future studies should explore in more depth the different techniques and approaches developed and used in this PhD, improve them, and better integrate the findings into current hazard assessment practices and slope stability models.
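The shortest-distance comparison between sequential TLS point clouds amounts to finding, for each point of the later epoch, its nearest neighbour in the reference epoch; large distances flag displaced terrain. The brute-force search below is a minimal sketch (real TLS clouds would require a spatial index such as a k-d tree or octree), and the toy coordinates and threshold are hypothetical:

```python
import math

# Sketch: shortest-distance comparison between two sequential point clouds.
# For each point of the later epoch, report the Euclidean distance to its
# nearest neighbour in the reference epoch; distances above a threshold
# flag slope movement. Brute force is used for clarity only; real TLS
# datasets need a spatial index (k-d tree, octree) to stay tractable.

def nearest_distances(reference, later):
    """For each point in `later`, the distance to the closest reference point."""
    return [min(math.dist(p, q) for q in reference) for p in later]

# Hypothetical epochs (x, y, z in metres): one point moved 0.5 m downslope.
epoch_a = [(0.0, 0.0, 10.0), (1.0, 0.0, 9.0), (2.0, 0.0, 8.0)]
epoch_b = [(0.0, 0.0, 10.0), (1.0, 0.0, 9.0), (2.0, 0.5, 8.0)]

threshold = 0.1  # change-detection threshold in metres (assumed)
moved = [d for d in nearest_distances(epoch_a, epoch_b) if d > threshold]
print(moved)
```

Note that point-to-nearest-point distances underestimate true displacements when the surface moves along itself; this is why the thesis complements them with the affine-transformation analysis of coherent rock blocks.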
Resumo:
A novel approach for the identification of tumor antigen-derived sequences recognized by CD8(+) cytolytic T lymphocytes (CTL) consists of screening synthetic combinatorial peptide libraries. Here we have screened a library composed of 3.1 × 10^11 nonapeptides arranged in a positional scanning format, in a cytotoxicity assay, to search for the antigen recognized by melanoma-reactive CTL of unknown specificity. This analysis enabled the identification of several optimal peptide ligands, as most of the individual nonapeptides deduced from the primary screening were efficiently recognized by the CTL. The results of the library screening were also analyzed with a mathematical approach based on a model of independent and additive contributions of individual amino acids to antigen recognition. This biometrical data analysis enabled the retrieval, from public databases, of the native antigenic peptide SSX-2(41-49), whose sequence is highly homologous to those deduced from the library screening and which ranked among the peptides with the highest stimulatory scores. These results underline the high predictive value of positional scanning synthetic combinatorial peptide library analysis and encourage its use for the identification of CTL ligands.
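Under the independent-and-additive model, a candidate peptide is scored as the sum of per-position amino-acid contributions derived from the positional scanning screen, and database sequences are ranked by this score. A minimal sketch of that scoring step; the matrix values, the 3-mer length, and the candidate sequences are hypothetical, not the study's data:

```python
# Sketch: biometrical scoring of database peptides with an additive model.
# Each peptide is scored as the sum of independent per-position amino-acid
# contributions taken from a positional scanning library screen.
# Matrix values and candidate sequences below are hypothetical, and a
# 3-mer is used instead of a nonapeptide to keep the example short.

# Position-specific scores: one {amino acid: contribution} map per position.
pssm = [
    {"S": 2.0, "A": 0.5, "K": 0.1},
    {"S": 1.5, "L": 0.8, "A": 0.2},
    {"E": 1.2, "Q": 0.4, "D": 0.3},
]

def additive_score(peptide, matrix):
    """Sum of per-position contributions; unlisted residues contribute 0."""
    return sum(matrix[i].get(aa, 0.0) for i, aa in enumerate(peptide))

candidates = ["SSE", "ALQ", "KSE"]  # hypothetical database sequences
ranked = sorted(candidates, key=lambda p: additive_score(p, pssm), reverse=True)
print(ranked[0])  # candidate with the highest stimulatory score
```

Ranking every database nonapeptide this way is what allowed the native SSX-2(41-49) sequence to surface among the top-scoring candidates.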