22 results for Computer Algebra Systems (CAS)
at Université de Lausanne, Switzerland
Abstract:
The aim of this retrospective study was to compare the clinical and radiographic results after TKA (PFC, DePuy), performed either with computer-assisted navigation (CAS, Brainlab, Johnson & Johnson) or by conventional means. Material and methods: Between May and December 2006 we reviewed 36 conventional TKAs performed between 2002 and 2003 (group A) and 37 navigated TKAs performed between 2005 and 2006 (group B) by the same experienced surgeon. The mean age in group A was 74 years (range 62-90) and 73 years (range 58-85) in group B, with a similar age distribution. The preoperative mechanical axes in group A ranged from -13° varus to +13° valgus (mean absolute deviation 6.83°, SD 3.86), and in group B from -13° to +16° (mean absolute deviation 5.35°, SD 4.29). Patients with a previous tibial osteotomy or revision arthroplasty were excluded from the study. Examination was done by an experienced orthopedic resident independent of the surgeon. All patients had pre- and postoperative long-leg standing radiographs. The IKSS and the WOMAC were used to determine the clinical outcome. Patients' degree of satisfaction was assessed on a visual analogue scale (VAS). Results: 32 of the 37 navigated TKAs (86.5%) showed a postoperative mechanical axis within 3 degrees of valgus or varus deviation, compared to only 24 (66.7%) of the 36 conventional TKAs. This difference was significant (p = 0.045). The mean absolute deviation from the neutral axis was 3.00° (range -5° to +9°, SD 1.75) in group A versus 1.54° (range -5° to +4°, SD 1.41) in group B, a highly significant difference (p < 0.001). Furthermore, both groups showed a significant postoperative improvement of their mean IKSS values (group A: 89 preoperatively to 169 postoperatively; group B: 88 to 176) without a significant difference between the two groups. Neither the WOMAC nor the patients' degree of satisfaction, as assessed by VAS, showed significant differences.
Operation time was significantly longer in group B (mean 119.9 min) than in group A (mean 99.6 min; p < 0.001). Conclusion: Our study showed a consistent, significant improvement of postoperative frontal alignment in TKA by computer-assisted navigation (CAS) compared to standard methods, even in the hands of a surgeon well experienced in standard TKA implantation. However, the follow-up time of this study was not long enough to judge differences in clinical outcome. Thus, the relevance of computer navigation to the clinical outcome and survival of TKA remains to be proved in long-term studies to justify the longer operation time. References: 1. Stulberg SD. Clin Orthop Relat Res. 2003;(416):177-84. 2. Chauhan SK. J Bone Joint Surg Br. 2004;86(3):372-7. 3. Bäthis H, et al. Orthopäde. 2006;35(10):1056-65.
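The headline alignment result (32 of 37 navigated vs. 24 of 36 conventional knees within ±3°, p = 0.045) can be reproduced with a standard two-proportion z-test. The abstract does not state which test the authors used, so this is an illustrative re-computation, not their method:

```python
from math import sqrt, erf

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided z-test for the difference between two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                    # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))   # pooled standard error
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF, Phi(z) = (1 + erf(z/sqrt(2))) / 2
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Group B (navigated): 32/37 within +/-3 degrees; group A (conventional): 24/36
p_value = two_proportion_z_test(32, 37, 24, 36)
```

Running this on the reported counts gives p ≈ 0.045, matching the abstract's significance claim.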
Abstract:
This dissertation is concerned with the development of algorithmic methods for the unsupervised learning of natural language morphology, using a symbolically transcribed wordlist. It focuses on the case of languages approaching the introflectional type, such as Arabic or Hebrew. The morphology of such languages is traditionally described in terms of discontinuous units: consonantal roots and vocalic patterns. Inferring this kind of structure is a challenging task for current unsupervised learning systems, which generally operate with continuous units. In this study, the problem of learning root-and-pattern morphology is divided into a phonological and a morphological subproblem. The phonological component of the analysis seeks to partition the symbols of a corpus (phonemes, letters) into two subsets that correspond well with the phonetic definition of consonants and vowels; building on this result, the morphological component attempts to establish the list of roots and patterns in the corpus, and to infer the rules that govern their combinations.
We assess the extent to which this can be done on the basis of two hypotheses: (i) the distinction between consonants and vowels can be learned by observing their tendency to alternate in speech; (ii) roots and patterns can be identified as sequences of the previously discovered consonants and vowels respectively. The proposed algorithm uses a purely distributional method for partitioning symbols. Then it applies analogical principles to identify a preliminary set of reliable roots and patterns, and to gradually enlarge it. This extension process is guided by an evaluation procedure based on the minimum description length principle, in line with the approach to morphological learning embodied in LINGUISTICA (Goldsmith, 2001). The algorithm is implemented as a computer program named ARABICA; it is evaluated with regard to its ability to account for the system of plural formation in a corpus of Arabic nouns. This thesis shows that complex linguistic structures can be discovered without recourse to a rich set of a priori hypotheses about the phenomena under consideration. It illustrates the possible synergy between learning mechanisms operating at distinct levels of linguistic description, and attempts to determine where and why such cooperation fails. It concludes that the tension between the universality of the consonant-vowel distinction and the specificity of root-and-pattern structure is crucial for understanding the advantages and weaknesses of this approach.
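The "purely distributional method for partitioning symbols" is not spelled out in the abstract. Sukhotin's classic vowel-discovery algorithm is one well-known method of exactly this kind, and the sketch below (run on a hypothetical toy corpus, not ARABICA's actual code) shows how far symbol-alternation statistics alone can go:

```python
def sukhotin_partition(words):
    """Partition symbols into vowel-like and consonant-like classes from
    their tendency to alternate with other symbols (Sukhotin's algorithm)."""
    symbols = sorted({ch for w in words for ch in w})
    # adjacency counts between *distinct* symbols (the diagonal stays 0)
    adj = {s: {t: 0 for t in symbols} for s in symbols}
    for w in words:
        for x, y in zip(w, w[1:]):
            if x != y:
                adj[x][y] += 1
                adj[y][x] += 1
    sums = {s: sum(adj[s].values()) for s in symbols}
    vowels = set()
    while True:
        v = max(sums, key=sums.get)
        if sums[v] <= 0:          # no symbol alternates more than expected
            break
        vowels.add(v)
        for t in sums:            # discount adjacency to the new vowel
            sums[t] -= 2 * adj[v][t]
        sums[v] = float("-inf")
    return vowels, set(symbols) - vowels

# toy corpus with a single alternating "vowel"
vowels, consonants = sukhotin_partition(["bala", "kata", "lab", "tak", "balata"])
```

On the toy words the algorithm isolates "a" as the only vowel-like symbol, purely from its tendency to alternate with the remaining symbols.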
Abstract:
Therapeutic drug monitoring (TDM) aims to optimize treatments by individualizing dosage regimens based on the measurement of blood concentrations. Dosage individualization to maintain concentrations within a target range requires pharmacokinetic and clinical capabilities. Bayesian calculation currently represents the gold-standard TDM approach but requires computation assistance. In recent decades, computer programs have been developed to assist clinicians in this task. The aim of this survey was to assess and compare computer tools designed to support TDM clinical activities. The literature and the Internet were searched to identify software. All programs were tested on personal computers. Each program was scored against a standardized grid covering pharmacokinetic relevance, user-friendliness, computing aspects, interfacing and storage. A weighting factor was applied to each criterion of the grid to account for its relative importance. To assess the robustness of the software, six representative clinical vignettes were processed through each of them. Altogether, 12 software tools were identified, tested and ranked, representing a comprehensive review of the available software. The number of drugs handled by the software varies widely (from two to 180), and eight programs offer users the possibility of adding new drug models based on population pharmacokinetic analyses. Bayesian computation to predict dosage adaptation from blood concentration (a posteriori adjustment) is performed by ten tools, while nine are also able to propose a priori dosage regimens based only on individual patient covariates such as age, sex and bodyweight. Among those applying Bayesian calculation, MM-USC*PACK© uses a non-parametric approach. The top two programs emerging from this benchmark were MwPharm© and TCIWorks. Most other programs evaluated had good potential while being less sophisticated or less user-friendly.
Programs vary in complexity and might not fit all healthcare settings. Each software tool must therefore be regarded with respect to the individual needs of hospitals or clinicians. Programs should be easy and fast to use in routine activities, including by non-experienced users. Computer-assisted TDM is gaining growing interest and should improve further, especially in terms of information system interfacing, user-friendliness, data storage capability and report generation.
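As an illustration of the a posteriori (Bayesian) adjustment these tools perform, the sketch below estimates an individual clearance by maximum a posteriori (MAP) from a single measured concentration, using a one-compartment IV bolus model. All parameter values (population clearance, variabilities, the dosing scenario) are hypothetical, and real TDM software uses richer models and optimizers:

```python
from math import exp, log

def map_clearance(dose_mg, vol_L, t_h, c_obs, cl_pop=5.0, omega=0.3, sigma=0.2):
    """MAP estimate of an individual's clearance (L/h) from one observed
    concentration, one-compartment IV bolus model C(t) = D/V * exp(-CL/V*t).
    Lognormal prior around the population clearance, lognormal residual error;
    a simple grid search stands in for a real optimizer."""
    best_cl, best_lp = None, float("-inf")
    for i in range(1, 400):
        cl = cl_pop * i / 100.0          # grid: 1% .. 399% of population CL
        c_pred = (dose_mg / vol_L) * exp(-cl / vol_L * t_h)
        # log prior + log likelihood (constants dropped)
        lp = (-(log(cl / cl_pop) ** 2) / (2 * omega ** 2)
              - (log(c_obs / c_pred) ** 2) / (2 * sigma ** 2))
        if lp > best_lp:
            best_cl, best_lp = cl, lp
    return best_cl

# hypothetical patient: 1000 mg dose, V = 50 L, level of 3 mg/L drawn at 12 h
cl_map = map_clearance(dose_mg=1000, vol_L=50, t_h=12, c_obs=3.0)
```

For a patient whose measured level is below the population prediction, the estimate moves clearance above the population value but shrinks it toward the prior, which is the essence of the Bayesian adjustment.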
Abstract:
An objective analysis of image quality parameters was performed for a computed radiography (CR) system using both standard single-side and prototype dual-side read plates. The pre-sampled modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE) for the systems were determined at three different beam qualities representative of pediatric chest radiography, at an entrance detector air kerma of 5 microGy. The NPS and DQE measurements were performed under clinically relevant x-ray spectra for pediatric radiology, including x-ray scatter radiation. Compared to the standard single-side read system, the MTF for the dual-side read system is reduced, but this is offset by a significant decrease in image noise, resulting in a marked increase in DQE (+40%) in the low spatial frequency range. Thus, for the same image quality, the new technology permits the CR system to be used at a reduced dose level.
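The three measured quantities combine into the DQE through the standard relation DQE(f) = MTF(f)² / (q · NNPS(f)), where NNPS is the noise power spectrum normalized by the squared mean signal and q is the incident photon fluence. A minimal sketch with made-up numbers:

```python
def dqe(mtf, nnps, q):
    """Detective quantum efficiency from the presampled MTF, the normalized
    noise power spectrum (NNPS) and the incident photon fluence q
    (photons per unit area) of the beam quality used."""
    return [m * m / (q * w) for m, w in zip(mtf, nnps)]

# illustrative values only: MTF falling with frequency, flat NNPS
dqe_vals = dqe(mtf=[1.0, 0.5], nnps=[1e-5, 1e-5], q=2e5)
```

In these terms, a 40% DQE gain at low frequency, as reported for the dual-side plate, corresponds to a lower NNPS at comparable MTF and fluence.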
Abstract:
Malposition of the acetabular component during hip arthroplasty increases the occurrence of impingement, reduces range of motion, and increases the risk of dislocation and long-term wear. To prevent malpositioned hip implants, an increasing number of computer-assisted orthopaedic systems have been described, but their accuracy is not well established. The purpose of this study was to determine the reproducibility and accuracy of conventional versus computer-assisted techniques for positioning the acetabular component in total hip arthroplasty. Using a lateral approach, 150 cups were placed by 10 surgeons in 10 identical plastic pelvis models (freehand, with a mechanical guide, or using computer assistance). Conditions for cup implantation were made to mimic the operating room situation. Preoperative planning was done from a computed tomography scan. The accuracy of cup abduction and anteversion was assessed with an electromagnetic system. Freehand placement revealed a mean accuracy of cup anteversion and abduction of 10 degrees and 3.5 degrees, respectively (maximum error, 35 degrees). With the cup positioner, these angles measured 8 degrees and 4 degrees (maximum error, 29.8 degrees), respectively, and with computer assistance, 1.5 degrees and 2.5 degrees (maximum error, 8 degrees), respectively. Computer-assisted cup placement was an accurate and reproducible technique for total hip arthroplasty. It was more accurate than traditional methods of cup positioning.
Abstract:
In the context of Systems Biology, computer simulations of gene regulatory networks provide a powerful tool to validate hypotheses and to explore possible system behaviors. Nevertheless, modeling a system poses challenges of its own: in particular, the step of model calibration is often difficult due to insufficient data. When considering developmental systems, for example, mostly qualitative data describing the developmental trajectory are available, while common calibration techniques rely on high-resolution quantitative data. Focusing on the calibration of differential equation models for developmental systems, this study investigates different approaches to utilizing the available data to overcome these difficulties. More specifically, the fact that developmental processes are hierarchically organized is exploited to increase convergence rates of the calibration process as well as to save computation time. Using a gene regulatory network model for stem cell homeostasis in Arabidopsis thaliana, the performance of the different investigated approaches is evaluated, documenting considerable gains provided by the proposed hierarchical approach.
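A minimal sketch of the hierarchical idea, on a hypothetical two-gene cascade rather than the actual Arabidopsis model: because the upstream gene does not depend on the downstream one, its parameter can be calibrated first and then frozen, shrinking each search to one dimension:

```python
def euler(f, y0, t_end, dt=0.01):
    """Forward-Euler integration of dy/dt = f(y); returns the final state."""
    y, t = list(y0), 0.0
    while t < t_end:
        dy = f(y)
        y = [yi + dt * di for yi, di in zip(y, dy)]
        t += dt
    return y

def calibrate_hierarchically(x_obs, y_obs, t_end=5.0):
    """Exploit the hierarchy of a cascade x -> y: fit the upstream parameter a
    on the x observation alone, then fix it and fit downstream b on y.
    Model: dx/dt = a - x,  dy/dt = b*x - y (toy example, grid search)."""
    grid = [i / 10 for i in range(1, 51)]
    # stage 1: only a influences x
    best_a = min((abs(euler(lambda s: [a - s[0]], [0.0], t_end)[0] - x_obs), a)
                 for a in grid)[1]
    # stage 2: a is frozen, search b against the y observation
    def err(b):
        final = euler(lambda s: [best_a - s[0], b * s[0] - s[1]],
                      [0.0, 0.0], t_end)
        return abs(final[1] - y_obs)
    best_b = min((err(b), b) for b in grid)[1]
    return best_a, best_b

# observations consistent with a = 2.0, b = 1.5 near steady state
a_hat, b_hat = calibrate_hierarchically(2.0, 3.0)
```

Fitting both parameters jointly would search a 2-D grid; the hierarchy reduces this to two 1-D searches, which is the kind of saving the study exploits (here with a crude grid standing in for a real optimizer).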
Abstract:
This PhD thesis reviews the modalities of the written representation of orality in French. Literary practice constitutes both the material and the horizon of the theorization. First, the thesis statement - how does writing represent speech? - is situated and reformulated within the framework of the linguistique de la parole (the linguistics of speech) (I). The connections between orality and writing are then studied from three angles. The biotechnological angle compares the materiality and affordance of graphic and acoustic signals (II 1). A semiotic examination identifies in written French a so-called phonographic system whose function is to represent the expression of the signs of spoken French. The relationships between the sign systems involved, the diversity of possible actualisations of the phonographic system (listening effects), and various analogical semiotics are then analysed (II 2). Next, the role of prosody in reading is studied. The position taken is the following: even though prosody is optional in the activity of reading, it is especially solicited by writings that can be characterised linguistically. Prosodic interpretation brings these writings a surplus of signification while producing a specific mode of representing speech called the prosodic effect (II 3). The semantic angle is finally sketched: it yields two additional modalities of representation. In the first, speech lies on the semantic-referential plane of the written expression (writing about speech); in the second, speech is a discursive exterior that modalises the written utterance: the writing is recognised as an utterance in the manner of speech (oral-style effect).
Abstract:
Extensible Markup Language (XML) is a generic computing language that provides an outstanding case study of the commodification of service standards. The development of this language in the late 1990s marked a shift in computer science, as its extensibility lets any kind of data be stored and shared. Many office software suites rely on it. The chapter highlights how the largest multinational firms pay special attention to gaining a recognised international standard for such a major technological innovation. It argues that standardisation processes affect market structures and can lead to market capture. By examining how a strategic use of standardisation arenas can generate profits, it shows that Microsoft succeeded in making its own technical solution a recognised ISO standard in 2008, while the same arena had already adopted, two years earlier, the open-source standard set by IBM and Sun Microsystems. Yet XML standardisation also helped to establish a distinct model of information technology services at the expense of Microsoft's monopoly on proprietary software.
Abstract:
BACKGROUND: Clinical practice does not always reflect best practice and evidence, partly because of unconscious acts of omission, information overload, or inaccessible information. Reminders may help clinicians overcome these problems by prompting the doctor to recall information that they already know or would be expected to know and by providing information or guidance in a more accessible and relevant format, at a particularly appropriate time. OBJECTIVES: To evaluate the effects of reminders automatically generated through a computerized system and delivered on paper to healthcare professionals on processes of care (related to healthcare professionals' practice) and outcomes of care (related to patients' health condition). SEARCH METHODS: For this update the EPOC Trials Search Co-ordinator searched the following databases between June 11-19, 2012: The Cochrane Central Register of Controlled Trials (CENTRAL) and Cochrane Library (Economics, Methods, and Health Technology Assessment sections), Issue 6, 2012; MEDLINE, OVID (1946- ), Daily Update, and In-process; EMBASE, Ovid (1947- ); CINAHL, EbscoHost (1980- ); EPOC Specialised Register, Reference Manager, and INSPEC, Engineering Village. The authors reviewed reference lists of related reviews and studies. SELECTION CRITERIA: We included individual or cluster-randomized controlled trials (RCTs) and non-randomized controlled trials (NRCTs) that evaluated the impact of computer-generated reminders delivered on paper to healthcare professionals on processes and/or outcomes of care. DATA COLLECTION AND ANALYSIS: Review authors working in pairs independently screened studies for eligibility and abstracted data. We contacted authors to obtain important missing information for studies that were published within the last 10 years. For each study, we extracted the primary outcome when it was defined or calculated the median effect size across all reported outcomes. 
We then calculated the median absolute improvement and interquartile range (IQR) in process adherence across included studies, using the primary outcome or median outcome as the representative outcome. MAIN RESULTS: In the 32 included studies, computer-generated reminders delivered on paper to healthcare professionals achieved moderate improvement in professional practices, with a median improvement in processes of care of 7.0% (IQR: 3.9% to 16.4%). Implementing reminders alone improved care by 11.2% (IQR 6.5% to 19.6%) compared with usual care, while implementing reminders in addition to another intervention improved care by only 4.0% (IQR 3.0% to 6.0%) compared with the other intervention. The quality of evidence for these comparisons was rated as moderate according to the GRADE approach. Two reminder features were associated with larger effect sizes: providing space on the reminder for the provider to enter a response (median 13.7% versus 4.3% for no response, P value = 0.01) and providing an explanation of the content or advice on the reminder (median 12.0% versus 4.2% for no explanation, P value = 0.02). Median improvement in processes of care also differed according to the behaviour the reminder targeted: for instance, reminders to vaccinate improved processes of care by 13.1% (IQR 12.2% to 20.7%) compared with other targeted behaviours. In the only study that had sufficient power to detect a clinically significant effect on outcomes of care, reminders were not associated with significant improvements. AUTHORS' CONCLUSIONS: There is moderate-quality evidence that computer-generated reminders delivered on paper to healthcare professionals achieve moderate improvement in processes of care. Two characteristics emerged as significant predictors of improvement: providing space on the reminder for a response from the clinician and providing an explanation of the reminder's content or advice.
The heterogeneity of the reminder interventions included in this review also suggests that reminders can improve care in various settings under various conditions.
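The review's summary statistic, a median improvement with its interquartile range across studies, can be sketched as follows; the quantile convention used here (linear interpolation) is an assumption, since the review does not state one:

```python
def median_iqr(values):
    """Median and interquartile range of a list of effect sizes,
    with linear interpolation between order statistics."""
    xs = sorted(values)
    def quantile(q):
        pos = q * (len(xs) - 1)
        lo = int(pos)
        hi = min(lo + 1, len(xs) - 1)
        return xs[lo] + (pos - lo) * (xs[hi] - xs[lo])
    return quantile(0.5), (quantile(0.25), quantile(0.75))

# hypothetical per-study improvements in process adherence (percentage points)
med, (q1, q3) = median_iqr([5.0, 1.0, 4.0, 2.0, 3.0])
```

Each included study contributes one representative effect size (its primary or median outcome), and the distribution of those values is then summarized by this median and IQR.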
Abstract:
In our recent paper by Monnin et al. [Med. Phys. 33, 411-420 (2006)], an objective analysis of the relative performance of a computed radiography (CR) system using both standard single-side (ST-VI) and prototype dual-side read (ST-BD) plates was reported. The presampled modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE) for the systems were determined at three different beam qualities representative of paediatric chest radiography, at an entrance detector air kerma of 5 microGy. Experiments demonstrated that, compared to the standard single-side read system, the MTF for the dual-side read system was slightly reduced, but a significant decrease in image noise resulted in a marked increase in DQE (+40%) in the low spatial frequency range. However, the DQE improvement for the ST-BD plate decreased with increasing spatial frequency, and, at spatial frequencies above 2.2 mm(-1), the DQE of the dual-side read system was lower than that of the single-side one.
Abstract:
This work compares the detector performance and image quality of the new Kodak Min-R EV mammography screen-film system with the Fuji CR Profect detector and with other current mammography screen-film systems from Agfa, Fuji and Kodak. Basic image quality parameters (MTF, NPS, NEQ and DQE) were evaluated for a 28 kV Mo/Mo (HVL = 0.646 mm Al) beam using different mAs exposure settings. Compared with other screen-film systems, the new Kodak Min-R EV detector has the highest contrast and a low intrinsic noise level, giving better NEQ and DQE results, especially at high optical density. Thus, the properties of the new mammography film approach those of a fine mammography detector, especially in the low-frequency range. Screen-film systems provide the best resolution. The presampling MTF of the digital detector has a value of 15% at the Nyquist frequency and, due to the spread size of the laser beam, the use of a smaller pixel size would not permit a significant improvement of the detector resolution. The dual-collection reading technology significantly increases the low-frequency DQE of the Fuji CR system, which can now compete with the most efficient mammography screen-film systems.
Abstract:
The theory of small-world networks as initiated by Watts and Strogatz (1998) has drawn new insights in spatial analysis as well as systems theory. The theory's concepts and methods are particularly relevant to geography, where spatial interaction is mainstream and where interactions can be described and studied using large numbers of exchanges or similarity matrices. Networks are organized through direct links or by indirect paths, inducing topological proximities that simultaneously involve spatial, social, cultural or organizational dimensions. Network synergies build over similarities and are fed by complementarities between or inside cities, with the two effects potentially amplifying each other according to the "preferential attachment" hypothesis that has been explored in a number of different scientific fields (Barabási, Albert 1999; Barabási 2002; Newman, Watts, Barabási). In fact, according to Barabási and Albert (1999), the high level of hierarchy observed in "scale-free networks" results from "preferential attachment", which characterizes the development of networks: new connections appear preferentially close to nodes that already have the largest number of connections because, in this way, the improvement in the network accessibility of the new connection will likely be greater. However, at the same time, network regions gathering dense and numerous weak links (Granovetter, 1985) or network entities acting as bridges between several components (Burt 2005) offer a higher capacity for urban communities to benefit from opportunities and create future synergies. Several methodologies have been suggested to identify such denser and more coherent regions (also called communities or clusters) in terms of links (Watts, Strogatz 1998; Watts 1999; Barabási, Albert 1999; Barabási 2002; Auber 2003; Newman 2006). 
These communities not only possess a high level of dependency among their member entities but also show a low level of "vulnerability", allowing for numerous redundancies (Burt 2000; Burt 2005). The SPANGEO project 2005-2008 (SPAtial Networks in GEOgraphy), gathering a team of geographers and computer scientists, has included empirical studies to survey concepts and measures developed in other related fields, such as physics, sociology and communication science. The relevancy and potential interpretation of weighted or non-weighted measures on edges and nodes were examined and analyzed at different scales (intra-urban, inter-urban or both). New classification and clustering schemes based on the relative local density of subgraphs were developed. The present article describes how these notions and methods contribute on a conceptual level, in terms of measures, delineations, explanatory analyses and visualization of geographical phenomena.
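The "preferential attachment" mechanism discussed above is easy to simulate: each new node connects to existing nodes with probability proportional to their current degree, which is what produces the hubs of scale-free networks. A minimal sketch (an illustration of the Barabási-Albert mechanism, not SPANGEO's code):

```python
import random

def preferential_attachment(n, m, seed=0):
    """Grow a graph of n nodes where each new node links to m existing nodes,
    chosen with probability proportional to their current degree."""
    rng = random.Random(seed)
    # start from a small clique of m+1 nodes
    edges = [(i, j) for i in range(m + 1) for j in range(i + 1, m + 1)]
    # "stub" list: each node appears once per incident edge, so choosing a
    # uniform stub is exactly degree-proportional choice
    stubs = [v for e in edges for v in e]
    for new in range(m + 1, n):
        targets = set()
        while len(targets) < m:          # m distinct neighbours, no self-loops
            targets.add(rng.choice(stubs))
        for t in targets:
            edges.append((new, t))
            stubs += [new, t]
    return edges

edges = preferential_attachment(100, 2, seed=0)
```

Sampling uniformly from the stub list is the standard trick: a node of degree k appears k times in the list, so early, well-connected nodes keep attracting links and become hubs.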
Abstract:
Summary Artificial radionuclides were released in the environment during the atmospheric nuclear weapon tests and after accidental events involving nuclear industries. As a primary receptor of the deposition, the soil is a very sensitive compartment and understanding the interaction and migration of radionuclides within soils allows the development of scenario for the contamination risk of the population and of the environment. Most available field studies on radionuclides in soils only concern one or two isotopes, mostly 137Cs, and few physico-chemical soil parameters. The purpose of this study was a broader understanding of the radioecology of an Alpine valley. In a first part, we aimed to describe the depth distribution of 137Cs, 90Sr, 239+240Pu, and 241Am within different alpine soils and to identify some stable elements as indicators for accumulating layers. In the central part of the study, the goal was to investigate the repartition of ^Sr and 239Pu between the truly dissolved fraction and the colloidal fraction of the soil solutions and to identify the nature of colloids involved in the adsorption of ^Sr and 239Pu. These results were integrated in an "advection- sorption" transport model seeking to explain the migration of 239Pu and 90Sr within the soils and to assess the importance of colloidal transport for these two isotopes. A further aspect studied was the role of the competition between the radioisotopes (137Cs and 90Sr) and their stable chemical analogues (K and Ca) with respect to plant uptake by different plant species. The results on the depth distribution within the soils showed that 137Cs was mostly retained in the topsoil, to the exception of an organic-rich soil (Histosol 2) receiving important surface runoff, where migration down to a depth of 30 cm was observed. 137Cs depth distribution within the soils was similar to unsupported 210Pb depth distribution. 
The plant uptake of 137Cs clearly depended on the concentration of exchangeable potassium in the soils. Moreover, we showed that the 137Cs uptake by certain species of the taxonomic orders Poales and Rosales was more sensitive to the increase in exchangeable Κ compared to other orders. Strontium-90 was much more mobile in the soils than 137Cs and depth migration and accumulation in specific AI- and Fe-rich layers were found down to 30 cm. Copper and Ni showed accumulations in these same layers, indicating their potential to be used as indicators for the migration of ^Sr within the soils. In addition, we observed a 90Sr activity peak in the topsoil that can be attributable to recycling of 90Sr by plant uptake. We demonstrated for the first time that a part of 90Sr (at least 40%) was associated with the colloids in organic-rich soil solutions. Therefore, we predict a significant effect of the colloidal migration of ^Sr in organic-rich soil solutions. The plant uptake results for 90Sr indicated a phylogenetic effect between Non-Eudicot and Eudicots: the order Poales concentrating much less 90Sr than Eudicots do. Moreover, we were able to demonstrate that the sensitivity of the 90Sr uptake by 5 different Alpine plant species to the amount of exchangeable Ca was species-independent. Plutonium and 241Am accumulated in the second layer of all soils and only a slight migration deeper than 20 cm was observed. Plutonium and 241Am showed a similar depth distribution in the soils. The model results suggested that the present day migration of 239Pu was very slow and that the uptake by plants was negligible. 239Pu activities between 0.01 to 0.08 mBq/L were measured in the bulk soil solutions. Migration of 239Pu with the soil solution is dominated by colloidal transport. We reported strong evidences that humic substances were responsible of the sorption of 239Pu to the colloidal fraction of the soil solutions. 
This was reflected by the strong correlation between 239Pu concentrations and the content of (colloidal) organic matter in the soil solution. Résumé Certains radioéléments artificiels ont été disséminés dans l'environnement suite aux essais atmosphériques de bombes nucléaires et suite à des accidents impliquant les industries nucléaires. En tant que récepteur primaire de la déposition, le sol est un compartiment sensible et des connaissances sur les interactions et la migration des radioéléments dans le sol permettent de développer des modèles pour estimer la contamination de la population et de l'environnement. Actuellement, la plupart des études de terrain sur ce sujet concernent uniquement un ou deux radioéléments, surtout le 137Cs et peu d'études intègrent les paramètres du sol pour expliquer la migration des radioéléments. Le but général de cette étude était une compréhension étendue de la radio-écologie d'une vallée alpine. Notre premier objectif était de décrire la distribution en profondeur de 137Cs, ^Sr, 239+240pu et 241Am dans différents sols alpins en relation avec des éléments stables du sol, dans le but d'identifier des éléments stables qui pourraient servir d'indicateurs pour des horizons accumulateurs. L'objectif de la deuxième partie, qui était la partie centrale de l'étude, était d'estimer le pourcentage d'activité sous forme colloïdale du 239Pu et du 90Sr dans les solutions des sols. De plus nous avons déterminé la nature des colloïdes impliqués dans la fixation du ^Sr et 239Pu. Nous avons ensuite intégré ces résultats dans un modèle de transport développé dans le but de décrire la migration du 239Pu et 90Sr dans le sol. Finalement, nous avons étudié l'absorption de 137Cs et 90Sr par les plantes en fonction de l'espèce et de la compétition avec leur élément analogue stable (K et Ca). 
The results on the depth migration of 137Cs showed that this radionuclide was generally retained at the surface, except in one organic-rich soil in which we observed clear migration to depth. In all soils, the depth distribution of 137Cs was correlated with that of 210Pb. Plant uptake of 137Cs depended on the concentration of exchangeable K in the soil, potassium acting as a competitor. Moreover, we observed that species did not respond in the same way to variations in the concentration of exchangeable K: species belonging to the orders Poales and Rosales were more sensitive to variations in exchangeable potassium in the soil. In all soils, 90Sr was much more mobile than 137Cs. Indeed, we observed accumulations of 90Sr in Fe- and Al-rich horizons down to 30 cm depth. In addition, Cu and Ni showed accumulations in the same horizons as 90Sr, indicating that these two elements could be used as analogues for 90Sr migration. According to the model developed, the 90Sr peak in the first centimetres of the soil can be attributed to recycling by plants. 90Sr in solution was mainly in dissolved form in soil solutions with low organic content (between 60 and 100% of the 90Sr dissolved). In contrast, in organic-rich solutions a substantial percentage of the 90Sr (more than 40%) was associated with colloids. Colloidal migration of 90Sr can therefore be significant in organic-rich solutions. As for 137Cs, plant uptake of 90Sr depended on the concentration of its chemical analogue in the exchangeable fraction of the soil. However, the plant species studied showed the same sensitivity to variations in the concentration of exchangeable calcium.
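The dissolved-versus-colloidal partitioning reported above is typically derived by comparing the activity of a bulk soil solution with the activity remaining after ultrafiltration. A minimal sketch of that calculation follows; the function name and all numerical values are hypothetical, not the study's data.

```python
def colloidal_fraction(total_activity, ultrafiltered_activity):
    """Fraction of activity associated with colloids.

    total_activity: activity in the bulk soil solution
    ultrafiltered_activity: activity passing the ultrafilter,
    i.e. the truly dissolved fraction. Units must match (e.g. mBq/L).
    """
    if total_activity <= 0:
        raise ValueError("total activity must be positive")
    return 1.0 - ultrafiltered_activity / total_activity

# Hypothetical example: 2.0 mBq/L of 90Sr in the bulk solution,
# 1.1 mBq/L after ultrafiltration -> ~45% colloid-associated,
# in the range (>40%) reported for organic-rich soil solutions.
frac = colloidal_fraction(2.0, 1.1)
print(round(frac, 2))  # 0.45
```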
Plutonium and americium accumulated in the second soil horizon, and we observed only slight migration deeper than 20 cm. According to the model, the present-day migration of plutonium is very slow and uptake by plants appears negligible. We measured between 0.01 and 0.08 mBq/L of 239Pu in the bulk soil solutions. The migration of plutonium with the soil solution is mainly due to colloids, probably of humic nature. Summary for the general public: During the 1950s and 1960s, the environment was contaminated by artificial radioactive elements (radionuclides) originating from atomic-weapons testing and from the nuclear industry. During those years, the first atomic-bomb tests were carried out in the atmosphere, releasing large quantities of radioactive elements. In addition, certain accidents involving the civil nuclear industry contributed to the dispersal of radioactive elements into the environment. This was the case, for example, with the accident at the Chernobyl nuclear power plant in 1986, which caused substantial 137Cs contamination over a large part of Europe. Once released into the atmosphere, radionuclides are dispersed and transported by atmospheric currents and can then be deposited in the environment, mainly by precipitation. Once deposited on the soil, radionuclides interact with soil components and migrate more or less quickly. Knowledge of the interactions of radioactive elements with the soil is therefore important for predicting the risks of contamination of the environment and of humans. The general aim of this work was to assess the migration of different radioactive elements (caesium-137, strontium-90, plutonium and americium-241) through the soil.
We chose a study site in an Alpine setting (Val Piora, Ticino, Switzerland), contaminated with radionuclides mainly by fallout from the Chernobyl accident and from atmospheric atomic-bomb tests. We first characterised the depth distribution of the radioactive elements in the soil and compared it with that of various stable elements. This comparison allowed us to observe that copper and nickel accumulated in the same soil horizons as strontium-90 and could therefore be used as analogues for the migration of strontium-90 in soils. In most of the soils studied, the migration of caesium-137, plutonium and americium-241 was slow, and these radionuclides therefore accumulated in the first centimetres of the soil. In contrast, strontium-90 migrated much faster than the other radionuclides, so that accumulations of strontium-90 were observed at more than 30 cm depth. Radionuclides migrate in the soil solution either in dissolved form or in colloidal form, that is, associated with particles of diameter < 1 μm. This association with colloids allows poorly soluble radionuclides, such as plutonium, to migrate faster than expected. We set out to determine what fraction of the strontium-90 and plutonium in the soil solution was associated with colloids. The results showed that plutonium in solution was mainly associated with colloids of organic type. Strontium-90, for its part, was partly associated with colloids in organic-rich soil solutions but was mainly in dissolved form in soil solutions with low organic content. The uptake of radionuclides by plants represents an important pathway for transfer into the food chain, and hence for human contamination.
We therefore studied the transfer of caesium-137 and strontium-90 from several soils to different plant species. The results showed that plant uptake of these radionuclides was linked to the concentration of their chemical analogues (calcium for strontium-90 and potassium for caesium-137) in the exchangeable fraction of the soil. In addition, certain plant species accumulated significantly less strontium-90.
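Soil-to-plant transfer of this kind is conventionally quantified with a transfer factor (TF), the ratio of the activity concentration in the plant to that in the soil. The sketch below illustrates that standard ratio; the activity values are hypothetical and serve only to mirror the qualitative Poales-versus-Eudicot contrast described above.

```python
def transfer_factor(plant_activity, soil_activity):
    """Soil-to-plant transfer factor:
    TF = activity concentration in plant (Bq/kg dry weight)
         / activity concentration in soil (Bq/kg dry weight)."""
    if soil_activity <= 0:
        raise ValueError("soil activity must be positive")
    return plant_activity / soil_activity

# Hypothetical 90Sr values: a grass (Poales) and a eudicot grown on the
# same soil; the lower TF for the grass mirrors the phylogenetic effect
# reported above (Poales concentrating much less 90Sr than Eudicots).
soil = 400.0  # Bq/kg dry soil, hypothetical
print(transfer_factor(20.0, soil))   # 0.05 (grass)
print(transfer_factor(120.0, soil))  # 0.3  (eudicot)
```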