86 results for nonlinear least-square fit


Relevance: 20.00%

Abstract:

The purpose of the present article is to take stock of a recent exchange in Organizational Research Methods between critics (Rönkkö & Evermann, 2013) and proponents (Henseler et al., 2014) of partial least squares path modeling (PLS-PM). The two target articles were centered around six principal issues, namely whether PLS-PM: (1) can be truly characterized as a technique for structural equation modeling (SEM); (2) is able to correct for measurement error; (3) can be used to validate measurement models; (4) accommodates small sample sizes; (5) is able to provide null hypothesis tests for path coefficients; and (6) can be employed in an exploratory, model-building fashion. We summarize and elaborate further on the key arguments underlying the exchange, drawing from the broader methodological and statistical literature in order to offer additional thoughts concerning the utility of PLS-PM and ways in which the technique might be improved. We conclude with recommendations as to whether and how PLS-PM serves as a viable contender to SEM approaches for estimating and evaluating theoretical models.

Relevance: 20.00%

Abstract:

U-Pb dating of zircons by laser ablation inductively coupled plasma mass spectrometry (LA-ICPMS) is a widely used analytical technique in the Earth Sciences. For U-Pb ages below 1 billion years (1 Ga), Pb-206/U-238 dates are usually used, as they show the least bias from external parameters such as the presence of initial lead and its isotopic composition in the analysed mineral. Precision and accuracy of the Pb/U ratio are thus of the highest importance in LA-ICPMS geochronology. We evaluate the statistical distribution of the sweep intensities with goodness-of-fit tests in order to find a model probability distribution that fits the data and thus to apply an appropriate formulation for the standard deviation. We then discuss three main methods to calculate the Pb/U intensity ratio and its uncertainty in LA-ICPMS: (1) the ratio-of-the-mean-intensities method, (2) the mean-of-the-intensity-ratios method and (3) the intercept method. These methods apply different functions to the same raw intensity vs. time data to calculate the mean Pb/U intensity ratio; the calculated ratio and its uncertainty therefore depend on the method applied. We demonstrate that the accuracy and, conditionally, the precision of the ratio-of-the-mean-intensities method are invariant to the intensity fluctuations and averaging related to dwell time selection and off-line data transformation (averaging of several sweeps), and we present a statistical approach to calculating the uncertainty of this method for transient signals. We also show that the accuracy of methods (2) and (3) is influenced by the intensity fluctuations and averaging, that the extent of this influence can amount to tens of percentage points, and that the uncertainty of these methods also depends on how the signal is averaged. Each of the above methods imposes requirements on the instrumentation. The ratio-of-the-mean-intensities method is sufficiently accurate provided the laser-induced fractionation between the beginning and the end of the signal is kept low and linear. Based on a comprehensive series of analyses with different ablation pit sizes, energy densities and repetition rates for a 193 nm ns-ablation system, we show that such fractionation behaviour requires a low ablation speed (low energy density and low repetition rate). Overall, we conclude that the ratio-of-the-mean-intensities method combined with low sampling rates is the most mathematically accurate among the existing data treatment methods for U-Pb zircon dating by sensitive sector-field ICPMS.
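
To make the distinction between the three estimators concrete, here is a minimal Python sketch applying each one to a synthetic sweep record; the Poisson intensities and the linear drift standing in for laser-induced fractionation are illustrative assumptions, not the authors' data reduction code.

```python
# Three Pb/U intensity-ratio estimators on a synthetic transient signal.
import numpy as np

rng = np.random.default_rng(5)
t = np.arange(400)                                  # sweep index
pb = rng.poisson(2_000, t.size).astype(float)       # 206Pb intensities, cps
u = rng.poisson(40_000, t.size) * (1 - 2e-4 * t)    # 238U with linear drift

# (1) ratio of the mean intensities
r_mean = pb.mean() / u.mean()

# (2) mean of the sweep-by-sweep intensity ratios
r_ratios = (pb / u).mean()

# (3) intercept method: extrapolate the per-sweep ratio back to t = 0
slope, intercept = np.polyfit(t, pb / u, 1)
r_intercept = intercept
```

With drift present, the three numbers differ, which is exactly the paper's point: the estimate depends on the function applied to the same raw data.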

Relevance: 20.00%

Abstract:

This study aimed to use a plantar pressure insole to estimate the three-dimensional ground reaction force (GRF) as well as the frictional torque (T(F)) during walking. Eleven subjects, six healthy and five patients with ankle disease, participated in the study, wearing pressure insoles during several walking trials on a force-plate. The plantar pressure distribution was analyzed, and 10 principal components of 24 regional pressure values, together with the stance time percentage (STP), were considered for GRF and T(F) estimation. Both linear and nonlinear approximators were used to estimate the GRF and T(F), based on two learning strategies using intra-subject and inter-subject data. The RMS error and the correlation coefficient between the approximators and the actual patterns obtained from the force-plate were calculated. Our results showed better performance for the nonlinear approximation, especially when the STP was included as an input. The lowest errors were observed for the vertical force (4%) and the anterior-posterior force (7.3%), while the medial-lateral force (11.3%) and the frictional torque (14.7%) had higher errors. The results obtained for the patients showed higher errors; nevertheless, when the data of the same patient were used for learning, the results improved and, in general, only slight differences from the healthy subjects were observed. In conclusion, this study showed that an ambulatory pressure insole with data normalization, an optimal choice of inputs and a well-trained nonlinear mapping function can efficiently estimate the three-dimensional ground reaction force and frictional torque over consecutive gait cycles without requiring a force-plate.
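
The pipeline described (24 regional pressures reduced to 10 principal components, plus STP, feeding a nonlinear approximator) could be sketched as follows; the synthetic data, network size and training settings are placeholders, not the study's actual configuration.

```python
# PCA of regional insole pressures + an MLP as the nonlinear approximator.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(6)
# Synthetic stand-ins: in the study these come from insoles and a force-plate.
pressures = rng.random((5_000, 24))                  # 24 regional pressures
stp = rng.random((5_000, 1))                         # stance time percentage
grf = pressures @ rng.normal(size=(24, 3)) + rng.normal(0, 0.1, (5_000, 3))

pcs = PCA(n_components=10).fit_transform(pressures)  # 10 principal components
X = np.hstack([pcs, stp])                            # add STP as an extra input

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000))
model.fit(X, grf)                                    # nonlinear mapping to 3D GRF
```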

Relevance: 20.00%

Abstract:

Aim: To assess the geographical transferability of niche-based species distribution models fitted with two modelling techniques. Location: Two distinct geographical study areas in Switzerland and Austria, in the subalpine and alpine belts. Methods: Generalized linear and generalized additive models (GLM and GAM) with a binomial probability distribution and a logit link were fitted for 54 plant species, based on topoclimatic predictor variables. These models were then evaluated quantitatively and used for spatially explicit predictions within (internal evaluation and prediction) and between (external evaluation and prediction) the two regions. Evaluations and spatial predictions were compared between regions and models in order to test whether species and methods meet the criteria of full transferability. By full transferability, we mean that: (1) the internal evaluation of models fitted in regions A and B must be similar; (2) a model fitted in region A must retain at least a comparable external evaluation when projected into region B, and vice versa; and (3) internal and external spatial predictions have to match within both regions. Results: The measures of model fit are, on average, 24% higher for GAMs than for GLMs in both regions. However, the differences between internal and external evaluations (AUC coefficient) are also higher for GAMs than for GLMs (a difference of 30% for models fitted in Switzerland and 54% for models fitted in Austria). Transferability, as measured with the AUC evaluation, fails for 68% of the species in Switzerland and 55% in Austria for GLMs (and for 67% and 53% of the species, respectively, for GAMs). For both GAMs and GLMs, the agreement between internal and external predictions is rather weak on average (Kulczynski's coefficient in the range 0.3-0.4), but varies widely among individual species. The dominant pattern is an asymmetrical transferability between the two study regions (a mean decrease of 20% in the AUC coefficient when models are transferred from Switzerland and of 13% when they are transferred from Austria). Main conclusions: The large inter-specific variability observed among the 54 study species underlines the need to consider more than a few species to properly test the transferability of species distribution models. The pronounced asymmetry in transferability between the two study regions may be due to peculiarities of these regions, such as differences in the ranges of environmental predictors or the varied impact of land-use history, or to species-specific reasons such as differential phenotypic plasticity, the existence of ecotypes or varied dependence on biotic interactions that are not properly incorporated into niche-based models. The lower variation between internal and external evaluation of GLMs compared with GAMs further suggests that overfitting may reduce transferability. Overall, limited geographical transferability calls for caution when projecting niche-based models to assess the fate of species in future environments.
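
For the GLM side, the internal-versus-external evaluation scheme might look like the sketch below; the predictors, coefficients and the "region shift" are invented for illustration, and the AUC values it produces bear no relation to the paper's results.

```python
# Binomial GLM with logit link: fit in region A, evaluate in A (internal)
# and in a climatically shifted region B (external transfer).
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)

def region(n, shift=0.0):
    X = rng.normal(shift, 1, size=(n, 3))            # topoclimatic predictors
    p = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 1.2 * X[:, 1])))
    return sm.add_constant(X), rng.binomial(1, p)    # presence/absence

XA, yA = region(500)                                 # "Switzerland"
XB, yB = region(500, shift=0.4)                      # "Austria", shifted climate

glm = sm.GLM(yA, XA, family=sm.families.Binomial()).fit()
auc_internal = roc_auc_score(yA, glm.predict(XA))    # evaluation within region A
auc_external = roc_auc_score(yB, glm.predict(XB))    # transfer to region B
```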

Relevance: 20.00%

Abstract:

Machine Learning for geospatial data: algorithms, software tools and case studies. This thesis is devoted to the analysis, modelling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In short, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modelling tools. They can solve classification, regression and probability density modelling problems in high-dimensional geo-feature spaces, composed of geographical coordinates and additional relevant spatially referenced variables ("geo-features"). They are well suited to implementation as predictive engines in decision support systems, for purposes of environmental data mining ranging from pattern recognition to modelling and prediction, including automatic data mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical spaces, and they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for geo- and environmental sciences are presented in detail, from a theoretical description of the concepts to software implementations: the multilayer perceptron (MLP, a workhorse of machine learning), general regression neural networks (GRNN), probabilistic neural networks (PNN), self-organising (Kohonen) maps (SOM), Gaussian mixture models (GMM), radial basis function networks (RBF) and mixture density networks (MDN). This set of models covers machine learning tasks such as classification, regression and density estimation. Exploratory data analysis (EDA) is the initial and a very important part of any data analysis. In this thesis, the concepts of exploratory spatial data analysis (ESDA) are treated both with the traditional geostatistical approach, experimental variography, and with machine learning. Experimental variography, which studies the relations between pairs of points, is a basic tool for the geostatistical analysis of anisotropic spatial correlations and helps to detect spatial patterns describable by two-point statistics. The machine learning approach to ESDA is presented through the k-nearest neighbours (k-NN) method, which is very simple and has excellent interpretation and visualisation properties. An important part of the thesis deals with the topical problem of automatic mapping of geospatial data, for which the GRNN is proposed as an efficient model. The performance of the GRNN is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four parts: theory, applications, software tools and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office, developed over the last 15 years and used both in many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and in fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals, classification of soil types and hydrogeological units, decision-oriented mapping with uncertainties, and natural hazard (landslides, avalanches) assessment and susceptibility mapping. Complementary tools for exploratory data analysis and visualisation were developed as well, with care taken to provide a user-friendly, easy-to-use interface.
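
As a rough illustration of the GRNN at the heart of the automatic-mapping work, here is a minimal Nadaraya-Watson-style kernel regression sketch; the Gaussian bandwidth sigma and the toy surface are assumptions (sigma would normally be tuned by cross-validation), and the thesis' Machine Learning Office tools are not reproduced here.

```python
# A minimal GRNN: prediction as a kernel-weighted mean of training targets.
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=1.0):
    """Gaussian-kernel weighted average of y_train at each query point."""
    # Squared Euclidean distances between every query and training point.
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))             # Gaussian kernel weights
    return (w @ y_train) / w.sum(axis=1)             # normalised weighted mean

# Toy spatial example: noisy samples of a smooth surface, mapped on a grid.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 2))                # x, y coordinates
z = np.sin(X[:, 0]) * np.cos(X[:, 1]) + rng.normal(0, 0.1, 200)
grid = np.stack(np.meshgrid(np.linspace(0, 10, 50),
                            np.linspace(0, 10, 50)), -1).reshape(-1, 2)
z_map = grnn_predict(X, z, grid, sigma=0.8)          # automatic map values
```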

Relevance: 20.00%

Abstract:

Kinetic parameters of the T cell receptor (TCR) interaction with its ligand have been proposed to control T cell activation. Analyses of the kinetic data obtained so far have produced conflicting insights; here, we offer a consideration of this problem. As a model system, the association and dissociation of a soluble TCR (sT1) and its specific ligand, an azidobenzoic acid derivative of the peptide SYIPSAEK(ABA)I (residues 252-260 from the Plasmodium berghei circumsporozoite protein) bound to the class I MHC H-2Kd-encoded molecule (MHCp), were studied by surface plasmon resonance. The association time courses exhibited biphasic patterns. The fast and dominant phase was assigned to ligand association with the major fraction of TCR molecules, whereas the slow component was attributed to the presence of traces of TCR dimers. The association rate constant derived for the fast phase, assuming a reversible, single-step reaction mechanism, was relatively slow and markedly temperature-dependent, decreasing from 7.0 x 10^3 M^-1 s^-1 at 25 °C to 1.8 x 10^2 M^-1 s^-1 at 4 °C. Hence, it is suggested that these observed slow rate constants are the result of unresolved elementary steps of the process. Indeed, our analysis of the kinetic data shows that the time courses of the TCR-MHCp interaction fit well to two different, yet closely related, mechanisms, in which either an induced fit or a pre-equilibrium of two unbound TCR conformers is operational. These mechanisms may provide a rationale for the reported conformational flexibility of the TCR and its unusual ligand recognition properties, which combine high specificity with considerable cross-reactivity.
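
Biphasic association traces of this kind are typically fitted by nonlinear least squares to a two-exponential pseudo-first-order model; the sketch below shows that generic procedure on synthetic data and is not the authors' exact analysis (model, amplitudes and starting values are illustrative).

```python
# Nonlinear least-squares fit of a biphasic SPR association trace.
import numpy as np
from scipy.optimize import curve_fit

def biphasic(t, A1, k1, A2, k2):
    """Two-component pseudo-first-order association model."""
    return A1 * (1 - np.exp(-k1 * t)) + A2 * (1 - np.exp(-k2 * t))

t = np.linspace(0, 120, 240)                         # time, s
rng = np.random.default_rng(1)
data = biphasic(t, 80, 0.05, 20, 0.005) + rng.normal(0, 0.5, t.size)

p0 = (60, 0.1, 10, 0.01)                             # rough initial guesses
popt, pcov = curve_fit(biphasic, t, data, p0=p0)
perr = np.sqrt(np.diag(pcov))                        # 1-sigma parameter errors
```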

Relevance: 20.00%

Abstract:

Spatial data analysis, mapping and visualization are of great importance in various fields: environment, pollution, natural hazards and risks, epidemiology, spatial econometrics, etc. A basic task of spatial mapping is to make predictions based on some empirical data (measurements). A number of state-of-the-art methods can be used for the task: deterministic interpolations; methods of geostatistics, i.e. the family of kriging estimators (Deutsch and Journel, 1997); machine learning algorithms such as artificial neural networks (ANN) of different architectures; hybrid ANN-geostatistics models (Kanevski and Maignan, 2004; Kanevski et al., 1996); etc. All of these methods can be used for solving the problem of spatial data mapping. Empirical environmental data are always contaminated by noise, often of unknown nature. That is one of the reasons why deterministic models can be inconsistent, since they treat the measurements as values of some unknown function that should be interpolated. Kriging estimators treat the measurements as the realization of some spatial random process. To obtain an estimate with kriging, one has to model the spatial structure of the data: the spatial correlation function or (semi-)variogram. This task can be complicated if the number of measurements is insufficient, and the variogram is sensitive to outliers and extremes. ANNs are powerful tools, but they also suffer from a number of problems; ANNs of a special type, multilayer perceptrons, are often used as a detrending tool in hybrid (ANN + geostatistics) models (Kanevski and Maignan, 2004). Therefore, the development and adaptation of a method that is nonlinear and robust to noise in the measurements, can deal with small empirical datasets and has a solid mathematical background is of great importance. The present paper deals with such a model, based on Statistical Learning Theory (SLT): Support Vector Regression. SLT is a general mathematical framework devoted to the problem of estimating dependencies from empirical data (Hastie et al., 2004; Vapnik, 1998). SLT models for classification, Support Vector Machines, have shown good results on different machine learning tasks, and the results of SVM classification of spatial data are also promising (Kanevski et al., 2002). The properties of the SLT model for regression, Support Vector Regression (SVR), are less studied. First results of the application of SVR to spatial mapping of physical quantities were obtained by the authors for the mapping of medium porosity (Kanevski et al., 1999) and of radioactively contaminated territories (Kanevski and Canu, 2000). The present paper is devoted to a further understanding of the properties of the SVR model for spatial data analysis and mapping. A detailed description of SVR theory can be found in (Cristianini and Shawe-Taylor, 2000; Smola, 1996), and the basic equations for nonlinear modelling are given in Section 2. Section 3 discusses the application of SVR to spatial data mapping on a real case study: soil pollution by the Cs137 radionuclide. Section 4 discusses the properties of the model applied to noisy data or data with outliers.
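
As a practical counterpart to the SVR equations the paper gives in Section 2, a minimal scikit-learn sketch of SVR-based spatial mapping might look as follows; the kernel parameters (C, epsilon, gamma) are placeholder values one would tune by cross-validation, not the paper's settings, and the contamination field is synthetic.

```python
# Support Vector Regression for mapping a noisy spatial field.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
XY = rng.uniform(0, 100, size=(300, 2))              # sampling coordinates, km
conc = (np.exp(-((XY[:, 0] - 40) ** 2 + (XY[:, 1] - 60) ** 2) / 500)
        + rng.normal(0, 0.05, 300))                  # noisy synthetic hot spot

model = make_pipeline(StandardScaler(),
                      SVR(kernel="rbf", C=10.0, epsilon=0.05, gamma="scale"))
model.fit(XY, conc)

grid = np.stack(np.meshgrid(np.linspace(0, 100, 60),
                            np.linspace(0, 100, 60)), -1).reshape(-1, 2)
pred_map = model.predict(grid)                       # interpolated pollution map
```

The epsilon-insensitive loss is what gives SVR its robustness to measurement noise, the property the paper probes with noisy data and outliers.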

Relevance: 20.00%

Abstract:

BACKGROUND AND PURPOSE: Knowledge of cerebral blood flow (CBF) alterations in cases of acute stroke could be valuable in the early management of these cases. Among the imaging techniques affording evaluation of cerebral perfusion, perfusion CT studies involve sequential acquisition of cerebral CT sections in an axial mode during the IV administration of iodinated contrast material; they are thus very easy to perform in emergency settings. Perfusion CT values of CBF have proved to be accurate in animals, and perfusion CT affords plausible values in humans. The purpose of this study was to validate perfusion CT studies of CBF against the results provided by stable xenon CT, which have been reported to be accurate, and to evaluate acquisition and processing modalities of the CT data, notably the possible deconvolution methods and the selection of the reference artery. METHODS: Twelve stable xenon CT and perfusion CT cerebral examinations were performed within an interval of a few minutes in patients with various cerebrovascular diseases. CBF maps were obtained from the perfusion CT data by deconvolution using singular value decomposition and least-mean-square methods. The CBF values were compared with the stable xenon CT results in multiple regions of interest through linear regression analysis and bilateral t tests for matched variables. RESULTS: Linear regression analysis showed good correlation between perfusion CT and stable xenon CT CBF values (singular value decomposition method: R^2 = 0.79, slope = 0.87; least-mean-square method: R^2 = 0.67, slope = 0.83). Bilateral t tests for matched variables did not identify a significant difference between the two imaging methods (P > .1), and the two deconvolution methods were equivalent (P > .1). The choice of the reference artery is a major concern and has a strong influence on the final perfusion CT CBF map. CONCLUSION: Perfusion CT studies of CBF, achieved with adequate acquisition parameters and processing, lead to accurate and reliable results.
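
A truncated-SVD deconvolution of the kind used to produce CBF maps from a tissue curve and an arterial input function (AIF) can be sketched as below; the time grid, synthetic curves and truncation threshold are illustrative assumptions, not the study's processing chain.

```python
# Truncated-SVD deconvolution: tissue(t) = dt * (A @ R), solve for R.
import numpy as np

def tsvd_deconvolve(aif, tissue, dt, thresh=0.2):
    """Recover the flow-scaled residue function R; CBF = R.max()."""
    n = aif.size
    # Lower-triangular Toeplitz matrix: discrete convolution with the AIF.
    A = dt * np.array([[aif[i - j] if i >= j else 0.0 for j in range(n)]
                       for i in range(n)])
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.zeros_like(s)
    keep = s > thresh * s.max()                      # truncate small singular values
    s_inv[keep] = 1.0 / s[keep]                      # regularised pseudo-inverse
    return Vt.T @ (s_inv * (U.T @ tissue))

dt = 0.5
t = np.arange(0, 40, dt)                             # time, s
aif = np.exp(-(t - 8.0) ** 2 / 6.0)                  # synthetic arterial input
true_r = 0.6 * np.exp(-t / 4.0)                      # flow-scaled residue function
tissue = dt * np.convolve(aif, true_r)[:t.size]      # noise-free tissue curve
cbf_scaled = tsvd_deconvolve(aif, tissue, dt).max()
```

The sketch also makes the study's point about the reference artery visible: everything downstream is scaled by the AIF, so its choice directly shapes the CBF map.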

Relevance: 20.00%

Abstract:

Fit, produced by Pseudomonas fluorescens CHA0, is a novel insect toxin in root-colonizing pseudomonads, of which a homologue has been described in Photorhabdus species. However, the occurrence and abundance of insect pathogenicity in plant-associated pseudomonads is still unclear. An extensive screening outside the P. fluorescens complex identified strains of Pseudomonas chlororaphis as further Fit toxin-producing candidates. Sequences of five different P. chlororaphis strains generated in this study were used to reconstruct the evolutionary history of the Fit toxin gene and to analyse its mode of evolution. We found that P. chlororaphis is closely associated with a small subgroup of 2,4-diacetylphloroglucinol- and pyoluteorin-producing pseudomonads, both when analysing four housekeeping genes and when analysing the nucleotide sequence of the Fit toxin gene. Additionally, we identified purifying selection as the predominant mode of Fit toxin evolution.

Relevance: 20.00%

Abstract:

BACKGROUND: While rifaximin improved symptoms in patients with irritable bowel syndrome (IBS) in phase III trials, these results are yet to be repeated in phase IV studies. AIM: To evaluate the treatment response to rifaximin in IBS patients in a phase IV trial. METHODS: IBS patients underwent lactulose hydrogen breath testing (LHBT). LHBT-positive patients were treated with rifaximin for 14 days. Prior to treatment, as well as at weeks 4 and 14 following the start of rifaximin treatment, patients completed a questionnaire assessing symptom severity on a Likert scale from 0 to 10. RESULTS: One hundred and six of 150 IBS patients (71%) were LHBT-positive and treated with rifaximin. As assessed at week 4 following commencement of the therapy, rifaximin provided significant improvement of the following IBS-associated symptoms: bloating (5.5±2.6 before the start of the treatment vs. 3.6±2.7 at week 4, P<0.001), flatulence (5.0±2.7 vs. 4.0±2.7, P=0.015), diarrhoea (2.9±2.4 vs. 2.0±2.4, P=0.005) and abdominal pain (4.8±2.7 vs. 3.3±2.5, P<0.001). Overall well-being also improved significantly (3.9±2.4 vs. 2.7±2.3, P<0.001). Similar improvements in IBS symptoms were observed at week 14. Eighty-six per cent of patients undergoing repeat LHBT (55/64) tested negative at week 4. CONCLUSIONS: We found a high percentage of LHBT-positive IBS patients. IBS-associated symptoms (bloating, flatulence, diarrhoea, pain) were improved for a period of 3 months following 2 weeks of treatment with rifaximin. We conclude that rifaximin treatment alleviates symptoms in LHBT-positive IBS patients.
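
The before/after comparisons reported (e.g. bloating 5.5±2.6 vs. 3.6±2.7, P<0.001) are paired tests on Likert scores; the sketch below shows how such a comparison could be run on synthetic data. Note that the abstract does not state which paired test the authors used, so both a parametric and a nonparametric variant are shown.

```python
# Paired before/after comparison of symptom scores (synthetic stand-in data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
before = np.clip(rng.normal(5.5, 2.6, 106).round(), 0, 10)   # Likert 0-10
after = np.clip(before - rng.normal(1.9, 2.0, 106), 0, 10).round()

t_stat, p_t = stats.ttest_rel(before, after)        # paired t-test
w_stat, p_w = stats.wilcoxon(before, after)         # nonparametric alternative
```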

Relevance: 20.00%

Abstract:

Biological scaling analyses employing the widely used bivariate allometric model are beset by at least four interacting problems: (1) choice of an appropriate best-fit line, with due attention to the influence of outliers; (2) objective recognition of divergent subsets in the data (allometric grades); (3) potential restrictions on statistical independence resulting from phylogenetic inertia; and (4) the need for extreme caution in inferring causation from correlation. A new non-parametric line-fitting technique has been developed that eliminates the requirement of normality of distribution, greatly reduces the influence of outliers and permits objective recognition of grade shifts in substantial datasets. This technique is applied in scaling analyses of mammalian gestation periods and of neonatal body mass in primates. These analyses feed into a re-examination, conducted with partial correlation analysis, of the maternal energy hypothesis relating to mammalian brain evolution, which suggests links between body size and brain size in neonates and adults, gestation period and basal metabolic rate. Much has been made of the potential problem of phylogenetic inertia as a confounding factor in scaling analyses. However, this problem may be less severe than suspected earlier, because nested analyses of variance conducted on residual variation (rather than on raw values) reveal considerable variance at low taxonomic levels. In fact, limited divergence in body size between closely related species is one of the prime examples of phylogenetic inertia. One common approach to eliminating perceived problems of phylogenetic inertia in allometric analyses has been the calculation of 'independent contrast values'. It is demonstrated that the reasoning behind this approach is flawed in several ways. Calculation of contrast values for closely related species of similar body size is, in fact, highly questionable, particularly when there are major deviations from the best-fit line for the scaling relationship under scrutiny.
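
The abstract does not name the new non-parametric line-fitting technique, so as a stand-in illustration of outlier-resistant allometric line fitting on log-log data, the sketch below uses the Theil-Sen estimator; this is explicitly not the paper's method, and the data are synthetic.

```python
# Robust non-parametric fit of a scaling exponent on log-log allometric data.
import numpy as np
from scipy.stats import theilslopes

rng = np.random.default_rng(4)
body_mass = 10 ** rng.uniform(1, 5, 80)             # g, synthetic species
gestation = 12 * body_mass ** 0.15 * rng.lognormal(0, 0.1, 80)  # days
gestation[:4] *= 3                                   # a few gross outliers

x, y = np.log10(body_mass), np.log10(gestation)
slope, intercept, lo, hi = theilslopes(y, x)         # robust exponent + 95% CI
```

Because the Theil-Sen slope is the median of pairwise slopes, the four injected outliers barely move the estimated exponent, which is the property the paper demands of its line-fitting procedure.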

Relevance: 20.00%

Abstract:

A wide and complex tectonic zone known as the Centovalli line crosses the sector of the Central Alps between Domodossola and Locarno. This area, formed by the Vigezzo and Centovalli valleys, constitutes the southern termination of the Lepontine dome and represents a portion of the root zone of the Alpine nappes. It belongs to a large and complex shear zone, partly associated with hydrothermal phenomena of Alpine age (<20 My), which includes the Insubric Line and the Simplon fault zone. The Vigezzo valley and the Centovalli constitute a real crossroads between the main Alpine tectonic lines as well as a zone of juxtaposition of the Southalpine basement with the Austroalpine and Penninic root zone. The deformation phases and geological structures that can be studied span the period between approximately 35 My ago and the present. Detailed field work revealed many brittle and ductile deformation structures and fault rocks such as mylonites, cataclasites, pseudotachylites, kakirites, mineralized faults, fault gouges and folds. In the field, at least four fold generations related to the various deformation phases can be distinguished. The number and complexity of these structures indicate a very complicated history comprising several distinct stages that are sometimes related and even superimposed. Part of these deformation structures also affect sedimentary deposits of Quaternary age, in particular lacustrine silts and sands. These sediments are the remains of a lake basin ascribed to the Riss/Würm interglacial (Eemian, 67,000-120,000 years) and crop out in the central part of the studied area, east of the Santa Maria Maggiore plain. They show a whole series of deformation structures such as reverse fault planes, conjugate shortening structures and true folds; these faults and folds would represent the surface evidence of deformation that was probably active in Quaternary times. Another rock formation attracted our attention: a body of monogenic peridotite breccia that crops out discontinuously along the southern slope and the bottom of the Vigezzo valley over approximately 20 km. This breccia lies indifferently on the basement (Finero and Orselina units) or on the lake sediments. It is crossed by fault planes that developed slickensides and fault gouges, with the same orientation as the gouge-bearing faults in the Alpine basement. The breccia results from the weathering and glacial reworking (rock glaciers) of an original tectonic breccia that borders the external part of the Finero peridotite body. Its deformation structures, like those of the lake sediments, are regarded as the surface expression of active Quaternary tectonics in the area; the last brittle deformation phase affecting this region can therefore be considered active in Quaternary times. An overall view of the studied area at the regional scale reveals a complex E-W-trending shear zone parallel to the axis of the Centovalli and Vigezzo valleys. The field data indicate that this shear zone began under ductile conditions and evolved in several stages to brittle faulting under surface conditions. The reconstruction of the geodynamic evolution of the area allows three distinct stages to be defined, marking the transition of this crystalline basement sector from deep P-T conditions to surface conditions; three principal regional deformation phases characterize these stages. The oldest phase consists of amphibolite-facies mylonites associated with dextral strike-slip movements, which are then replaced by greenschist-facies mylonites and backfolds related to the backthrusting of the Alpine nappes. A second stage is characterized by the development of a hydrothermal phase linked to a system of extensional and dextral strike-slip faults with principal E-W, NE-SW and NW-SE directions. The principal newly formed mineral phases related to this event are K-feldspar (microcline), chlorites (Fe+Mg), epidotes, prehnite, zeolites (laumontite), sphene and calcite. To better characterize this hydrothermal event, chlorite geothermometers, which are also sensitive to pressure and to a(H2O), were used and gave decreasing values between 450 and 200 °C. The last movements are marked by the development of a series of major gouge-bearing fault planes, which form a sigmoidal structure of kilometric thickness recognizable at the valley scale and characterized by transpressive movements, always with a significant dextral strike-slip component. This deformation phase forms a conjugate fault system with an average E-W direction that cuts through the root zone of the Alpine nappes, the Canavese zone and the Finero ultramafic body, running subparallel to the axis of the valley over several tens of kilometres. A complete and detailed XRD analysis of the fault gouges showed that their clay fraction (<2 µm) contains a very significant newly formed component of illite, chlorite and mixed-layer clays such as illite/smectite or chlorite/smectite. K-Ar dating of the <2 µm illite fraction gave values between 12 and 4 My, and the <0.2 µm fraction gave younger values down to 2.4-0 My; these values represent the age of this last brittle deformation. The illite crystallinity method (IC) allowed the thermal conditions of this tectonic phase to be evaluated: it occurred under temperature conditions characteristic of the anchizone and of diagenesis. The whole set of deformation structures described fits the context of oblique convergence between the Adriatic and European plates that produced the Alpine orogen. The Vigezzo valley and Centovalli tectonic structures can be regarded as the expression of a major "Simplo-Insubric" shear zone. The structural stacking and the tectonic structures that crop out in the studied area result from the interaction between transpressive and transtensive tectonic regimes. These two stress fields are antagonistic but are both connected with a single principal dextral strike-slip movement caused by the oblique convergence between the two plates; at the scale of the geodynamic evolution, various stages can be distinguished during which the structures of these two regimes interact in different ways. In agreement with geophysical data and palaeodynamic reconstructions from the literature, the Rhône-Simplon-Centovalli line is considered the surface expression of the major suture between the Adriatic and European plates at depth. The uplift rates calculated in this study for this Alpine area give an average value of 0.8 mm/a, in good agreement with published data for this zone. The Vigezzo valley and Centovalli zone can therefore be regarded as a geological crossroads where various tectonic phases are superimposed, representing the surface evidence of a major deep suture between two plates in a continental collision context.

Relevance: 20.00%

Abstract:

Western countries have spent substantial amounts of money to facilitate the integration of information and communication technologies (ICT) into education, hoping to find a solution to the difficult equation that can be summarized by the famous statement "do more and better with less". Despite these efforts, and notwithstanding the real improvements due to the undeniable betterment of the infrastructure and of the quality of service, this goal is far from reached. Although we think it illusory to expect technology, all by itself, to solve our economic and educational problems, we firmly take the view that it can greatly contribute not only to improving learning conditions but also to rethinking the pedagogical approach; every member of our community could take advantage of this opportunity to reflect upon his or her strategy. In this framework, and convinced that integrating ICT into education opens a number of very interesting avenues provided we think teaching "out of the box", we became interested in courseware development, positioned at the intersection of didactics and pedagogical sciences, cognitive sciences and computing. Hoping to bring a realistic and simple solution that could help develop, update, integrate and sustain courseware, we got involved in concrete projects. As we gained field experience we noticed that (i) the quality of courseware is still disappointing, among other reasons because the added value that technology can bring is not exploited as much as it could or should be, and (ii) a project requires, besides bringing a useful answer to a real problem, to be efficiently managed and to be "championed". With the aim of proposing a pragmatic and practical project management approach, we first looked into the characteristics of open and distance learning projects. We then analysed existing project methodologies in the hope of being able to use one or the other, or a suitable combination, to best fit our needs. In an empirical manner, proceeding by successive iterations and refinements, we defined a simple methodology and contributed to building descriptive "cards" attached to each of its phases to support decision making. We describe the different actors involved in the process, insisting specifically on the pedagogical engineer, viewed as an orchestra conductor, whom we consider critical to the success of our approach. Last but not least, we validated our methodology a posteriori by reviewing four of the projects we participated in, which we consider emblematic of the university reality. We believe that the implementation of our methodology, along with the availability of computerized decision-support cards, could be a great asset and contribute to measuring the real impact of technologies on (i) the evolution of teaching practices, (ii) the organization and (iii) the quality of pedagogical approaches. Our methodology could hence also serve as a springboard for putting in place a quality assessment scheme specific to open and distance learning. Further research on the real flexibilization of learning and on the contribution of technologies for learners could then be conducted on the basis of metrics that remain to be defined.