145 results for microprocessor-based control
at Université de Lausanne, Switzerland
Abstract:
The motivation for this research originated from the abrupt rise and fall of minicomputers, which were initially used both for industrial automation and business applications because they cost significantly less than their predecessors, the mainframes. Later, industrial automation developed its own vertically integrated hardware and software to address the application needs of uninterrupted operation, real-time control and resilience to harsh environmental conditions. This led to the creation of an independent industry, namely industrial automation as used in PLC, DCS, SCADA and robot control systems. This industry today employs over 200'000 people in a profitable, slow-clockspeed context, in contrast to the two mainstream computing industries: information technology (IT), focused on business applications, and telecommunications, focused on communication networks and hand-held devices. Already in the 1990s it was foreseen that IT and communications would merge into a single information and communication technology (ICT) industry. The fundamental question of the thesis is: could industrial automation leverage a common technology platform with the newly formed ICT industry? Computer systems dominated by complex instruction set computers (CISC) were challenged during the 1990s by higher-performance reduced instruction set computers (RISC). RISC evolved in parallel with the constant advancement of Moore's law. These developments created the high-performance, low-energy-consumption System-on-Chip (SoC) architecture. Unlike in the CISC world, the RISC processor-architecture business is an industry separate from RISC chip manufacturing. It also has several hardware-independent software platforms, each consisting of an integrated operating system, development environment, user interface and application market, which give customers more choice through hardware-independent, real-time-capable software applications.
An architecture disruption emerged, and the smartphone and tablet markets were formed with new rules and new key players in the ICT industry. Today there are more RISC computer systems running Linux (or other Unix variants) than any other kind of computer system. The astonishing rise of SoC-based technologies and related software platforms in smartphones created, in unit terms, the largest installed base ever seen in the history of computers, and it is now being further extended by tablets. An additional underlying element of this transition is the increasing role of open-source technologies in both software and hardware. This has driven the microprocessor-based personal computer industry, with its few dominant closed operating-system platforms, into a steep decline. A significant factor in this process has been the separation of processor architecture from processor chip production, together with the merger of operating systems and application development platforms into integrated software platforms with proprietary application markets. Furthermore, pay-by-click marketing has changed the way application development is compensated: freeware, ad-based or licensed - all at a lower price and used by a wider customer base than ever before. Moreover, the concept of a software maintenance contract is very remote in the app world. However, as a slow-clockspeed industry, industrial automation has remained intact during the SoC- and software-platform-based disruptions in the ICT industries. Industrial automation incumbents continue to supply vertically integrated systems consisting of proprietary software and proprietary, mainly microprocessor-based, hardware.
They enjoy admirable profitability on a very narrow customer base thanks to strong technology-enabled customer lock-in and the customers' high risk exposure, since their production depends on fault-free operation of the industrial automation systems. When will this balance of power be disrupted? The thesis suggests how industrial automation could join the mainstream ICT industry and create an information, communication and automation (ICAT) industry. Lately the Internet of Things (IoT) and weightless networks, a new standard leveraging frequency channels formerly occupied by TV broadcasting, have gradually started to change the rigid world of machine-to-machine (M2M) interaction. It is foreseeable that enough momentum will be created that the industrial automation market will in due course face an architecture disruption empowered by these new trends. This thesis examines the current state of industrial automation and the competition between the incumbents, firstly through research on cost-competitiveness efforts in captive outsourcing of engineering, research and development, and secondly through research on process re-engineering in the case of global software support for complex systems. Thirdly, we investigate the views of the industry actors, namely customers, incumbents and newcomers, on the future direction of industrial automation, and we conclude with our assessment of the possible routes by which industrial automation could advance, taking into account the looming rise of the Internet of Things (IoT) and weightless networks. Industrial automation is an industry dominated by a handful of global players, each of them focused on maintaining its own proprietary solutions.
The rise of de facto standards such as the IBM PC, Unix, Linux and SoC, leveraged by IBM, Compaq, Dell, HP, ARM, Apple, Google, Samsung and others, has created the new markets of personal computers, smartphones and tablets, and will eventually also impact industrial automation through game-changing commoditization and the related control-point and business-model changes. This trend will inevitably continue, but the transition to a commoditized industrial automation will not happen in the near future.
Abstract:
This study analyzed high-density event-related potentials (ERPs) within an electrical neuroimaging framework to provide insights regarding the interaction between multisensory processes and stimulus probabilities. Specifically, we identified the spatiotemporal brain mechanisms by which the proportion of temporally congruent and task-irrelevant auditory information influences stimulus processing during a visual duration discrimination task. The spatial position (top/bottom) of the visual stimulus was indicative of how frequently the visual and auditory stimuli would be congruent in their duration (i.e., context of congruence). Stronger influences of irrelevant sound were observed when contexts associated with a high proportion of auditory-visual congruence repeated and also when contexts associated with a low proportion of congruence switched. Context of congruence and context transition resulted in weaker brain responses at 228 to 257 ms poststimulus to conditions giving rise to larger behavioral cross-modal interactions. Importantly, a control oddball task revealed that both congruent and incongruent audiovisual stimuli triggered equivalent non-linear multisensory interactions when congruence was not a relevant dimension. Collectively, these results are well explained by statistical learning, which links a particular context (here: a spatial location) with a certain level of top-down attentional control that further modulates cross-modal interactions based on whether a particular context repeated or changed. The current findings shed new light on the importance of context-based control over multisensory processing, whose influences multiplex across finer and broader time scales.
Abstract:
Purpose: Optimal induction and maintenance immunosuppressive therapies in renal transplantation are still a matter of debate. Chronic corticosteroid usage is a major cause of morbidity, but steroid-free immunosuppression (SF) can result in unacceptably high rates of acute rejection and even graft loss. Methods and materials: We conducted a prospective open-labelled clinical trial in the Geneva-Lausanne Transplant Network from March 2005 to May 2008. Twenty low-immunological-risk (<20% PRA, no DSA) adult recipients of a primary kidney allograft received a 4-day course of thymoglobulin (1.5 mg/kg/d) with methylprednisolone, and maintenance immunosuppression based on tacrolimus and enteric-coated mycophenolic acid (MPA). The control arm consisted of 16 matched recipients treated with basiliximab induction, tacrolimus, mycophenolate mofetil and corticosteroids. Primary endpoints were the percentage of recipients not taking steroids and the percentage of rejection-free recipients at 12 months. Secondary endpoints were allograft survival at 12 months and significant side effects of thymoglobulin and/or other drugs. Results: In the SF group, 85% of the kidney recipients remained steroid-free at 12 months. The 3 cases of steroid introduction were due to one acute tubulo-interstitial rejection occurring at day 11, one tacrolimus withdrawal due to thrombotic microangiopathy, and one MPA withdrawal because of multiple sinusitis episodes and CMV reactivations. Neither BK viremia nor CMV disease was detected. The 6 CMV-negative patients who received a CMV-positive allograft had a symptomatic primo-infection after their 6-month course of valganciclovir prophylaxis.
In the steroid-based group, 3 acute rejection episodes (acute humoral rejection, acute tubulo-interstitial Banff IA and vascular Banff IIA) occurred in 2 recipients, and 3 BK virus nephropathies were diagnosed between 45 and 135 days post transplant. No side effects were associated with thymoglobulin infusion. In the SF group, 4 recipients presented severe leukopenia or agranulocytosis and one recipient had febrile hepatitis leading to transient MPA withdrawal. Discontinuation of MPA was needed in 2 patients for recurrent sinusitis and CMV reactivations. Patient and graft survival was 100% in both groups at the 12-month follow-up. Conclusion: A steroid-free regimen with short-course thymoglobulin induction was a safe protocol in low-risk renal transplant recipients. Lower rates of acute rejection and BK virus infection episodes were seen compared with the steroid-based control group. Longer follow-up will be needed to determine whether this SF immunosuppressive regimen results in higher graft and patient survival.
Abstract:
The traditionally coercive and state-controlled governance of protected areas for nature conservation in developing countries has in many cases undergone change in the context of widespread decentralization and liberalization. This article examines an emerging "mixed" (coercive, community- and market-oriented) conservation approach in managed-resource protected areas and its effects on state power through a case study on forest protection in the central Indian state of Madhya Pradesh. The findings suggest that imperfect decentralization and partial liberalization resulted in changed forms, rather than uniform loss, of state power. A forest co-management program paradoxically strengthened local capacity and influence of the Forest Department, which generally maintained its territorial and knowledge-based control over forests and timber management. Furthermore, deregulation and reregulation enabled the state to withdraw from uneconomic activities but also implied reduced place-based control of non-timber forest products. Generally, the new policies and programs contributed to the separation of livelihoods and forests in Madhya Pradesh. The article concludes that regulatory, community- and market-based initiatives would need to be better coordinated to lead to more effective nature conservation and positive livelihood outcomes.
Abstract:
OBJECTIVES: It is still debated if pre-existing minority drug-resistant HIV-1 variants (MVs) affect the virological outcomes of first-line NNRTI-containing ART. METHODS: This Europe-wide case-control study included ART-naive subjects infected with drug-susceptible HIV-1 as revealed by population sequencing, who achieved virological suppression on first-line ART including one NNRTI. Cases experienced virological failure and controls were subjects from the same cohort whose viraemia remained suppressed at a matched time since initiation of ART. Blinded, centralized 454 pyrosequencing with parallel bioinformatic analysis in two laboratories was used to identify MVs in the 1%-25% frequency range. ORs of virological failure according to MV detection were estimated by logistic regression. RESULTS: Two hundred and sixty samples (76 cases and 184 controls), mostly subtype B (73.5%), were used for the analysis. Identical MVs were detected in the two laboratories. 31.6% of cases and 16.8% of controls harboured pre-existing MVs. Detection of at least one MV versus no MVs was associated with an increased risk of virological failure (OR = 2.75, 95% CI = 1.35-5.60, P = 0.005); similar associations were observed for at least one MV versus no NRTI MVs (OR = 2.27, 95% CI = 0.76-6.77, P = 0.140) and at least one MV versus no NNRTI MVs (OR = 2.41, 95% CI = 1.12-5.18, P = 0.024). A dose-effect relationship between virological failure and mutational load was found. CONCLUSIONS: Pre-existing MVs more than double the risk of virological failure to first-line NNRTI-based ART.
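The crude (unadjusted) association reported above can be reproduced from the abstract's own percentages. A minimal sketch, assuming the rounded counts 24/76 cases and 31/184 controls with minority variants (reconstructed from 31.6% and 16.8%); the published OR of 2.75 comes from matched logistic regression, so this crude estimate differs slightly:

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio with a Wald 95% CI from a 2x2 table:
    a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)

# Counts reconstructed from the reported percentages (31.6% of 76 cases,
# 16.8% of 184 controls harboured pre-existing minority variants).
or_, lo, hi = odds_ratio_ci(24, 52, 31, 153)
```

Matched logistic regression conditions on the case-control pairing, which is why the published estimate is somewhat larger than this crude one.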
Abstract:
The ability to identify the species origin of an unknown biological sample is relevant in the fields of human and wildlife forensics. However, the detection of several species mixed in the same sample still remains a challenge. We developed and tested a new approach for mammal DNA identification in mixtures of two or three species, based on the analysis of mitochondrial DNA control region interspecific length polymorphism followed by direct sequencing. Contrary to other published methods dealing with species mixtures, our protocol requires a single universal primer pair and is not based on a pre-defined panel of species. Amplicons can be separated either on agarose gels or using CE. The advantages and limitations of the assay are discussed under different conditions, such as variable template concentration, amplicon sizes and size difference among the amplicons present in the mixture. For the first time, this protocol provides a simple, reliable and flexible method for simultaneous identification of multiple mammalian species from mixtures, without any prior knowledge of the species involved.
Abstract:
The objective of this work is to present a multitechnique approach to define the geometry, the kinematics, and the failure mechanism of a retrogressive large landslide (upper part of the La Valette landslide, South French Alps) by combining airborne and terrestrial laser scanning data and ground-based seismic tomography data. The advantage of combining different methods is to constrain the geometrical and failure mechanism models by integrating different sources of information. Because of a high point density at the ground surface (4.1 points m⁻²), a small laser footprint (0.09 m) and an accurate three-dimensional positioning (0.07 m), airborne laser scanning data are well suited as a source of information to analyze morphological structures at the surface. Seismic tomography surveys (P-wave and S-wave velocities) may highlight the presence of low-seismic-velocity zones that characterize the presence of dense fracture networks at the subsurface. The surface displacements measured from the terrestrial laser scanning data over a period of 2 years (May 2008 to May 2010) allow one to quantify the landslide activity in the direct vicinity of the identified discontinuities. An important subsidence of the crown area with an average subsidence rate of 3.07 m year⁻¹ is determined. The displacement directions indicate that the retrogression is controlled structurally by the preexisting discontinuities. A conceptual structural model is proposed to explain the failure mechanism and the retrogressive evolution of the main scarp. Uphill, the crown area is affected by planar sliding included in a deeper wedge failure system constrained by two preexisting fractures. Downhill, the landslide body acts as a buttress for the upper part. Consequently, the progression of the landslide body downhill allows the development of dip-slope failures, and coherent blocks start sliding along planar discontinuities.
The volume of the failed mass in the crown area is estimated at 500,000 m3 with the sloping local base level method.
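The reported average subsidence rate is simply the cumulative displacement divided by the length of the monitoring window. A minimal sketch; the total displacement of ~6.14 m is inferred here from the published rate, not stated in the abstract:

```python
from datetime import date

def annual_rate(total_m, start, end):
    """Average annual rate from a cumulative displacement measured
    between two laser-scanning campaigns."""
    years = (end - start).days / 365.25
    return total_m / years

# ~6.14 m of cumulative crown subsidence over May 2008 - May 2010
# would give the reported 3.07 m/yr (total inferred, exact campaign
# dates within each month are assumed).
rate = annual_rate(6.14, date(2008, 5, 1), date(2010, 5, 1))
```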
Abstract:
BACKGROUND: In Western countries, leptospirosis is uncommon and mainly occurs in farmers and individuals indulging in water-related activities. In tropical countries, leptospirosis can be up to 1000 times more frequent and risk factors for this often severe disease may differ. METHODS: We conducted a one-year population-based matched case-control study to investigate the frequency and associated factors of leptospirosis in the entire population of Seychelles. RESULTS: A total of 75 patients had definite acute leptospirosis based on microagglutination test (MAT) and polymerase chain reaction (PCR) assay (incidence: 101 per 100,000 per year; 95% confidence interval [CI]: 79-126). Among the controls, MAT was positive in 37% (past infection) and PCR assay in 9% (subclinical infection) of men aged 25-64 with manual occupation. Comparing cases and controls with negative MAT and PCR, leptospirosis was associated positively with walking barefoot around the home, washing in streams, gardening, activities in forests, alcohol consumption, rainfall, wet soil around the home, refuse around the home, rats visible around the home during day time, cats in the home, skin wounds and inversely with indoor occupation. The considered factors accounted for as much as 57% of the variance in predicting the disease. CONCLUSION: These data indicate a high incidence of leptospirosis in Seychelles. This suggests that leptospires are likely to be ubiquitous and that effective leptospirosis control in tropical countries needs a multifactorial approach including major behaviour change by large segments of the general public.
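The reported incidence and confidence interval are consistent with an exact (Garwood) Poisson interval. A minimal sketch; the population denominator of ~74,000 person-years is inferred from the reported figures (75 cases at 101 per 100,000 per year), not stated directly in the abstract:

```python
from statistics import NormalDist

def chi2_ppf(p, df):
    """Wilson-Hilferty approximation to the chi-square quantile
    (stdlib-only stand-in for scipy.stats.chi2.ppf)."""
    z = NormalDist().inv_cdf(p)
    return df * (1 - 2 / (9 * df) + z * (2 / (9 * df)) ** 0.5) ** 3

def poisson_rate_ci(events, person_years, per=100_000):
    """Point estimate and exact (Garwood) 95% CI for a Poisson rate."""
    lo = 0.5 * chi2_ppf(0.025, 2 * events)
    hi = 0.5 * chi2_ppf(0.975, 2 * (events + 1))
    return (events / person_years * per,
            lo / person_years * per,
            hi / person_years * per)

# 75 definite cases over one year in a population of ~74,257
# (denominator inferred from the reported incidence of 101/100,000).
rate, lo, hi = poisson_rate_ci(75, 74_257)
```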
Abstract:
Objectives: The AMS 800™ is the current artificial urinary sphincter (AUS) for incontinence due to intrinsic sphincter deficiency. Despite good clinical results, technical failures inherent to the hydraulic mechanism or urethral ischemic injury contribute to revision rates of up to 60%. We are developing an electronic AUS, called ARTUS, to overcome the shortcomings of the AMS. The objective of this study was to evaluate the technical efficacy and tissue tolerance of the ARTUS system in an animal model. Methods: The ARTUS is composed of three parts: the contractile unit, a series of rings and an integrated microprocessor. The contractile unit is made of Nitinol fibers. The rings are placed around the urethra to control the flow of urine by squeezing the urethra. They work in a sequential alternating mode and are controlled by a microprocessor. In the first phase a three-ring device was used, while in the second phase a two-ring ARTUS was used. The device was implanted in 14 sheep divided into two groups of six and eight animals. The first group was used for bladder leak point pressure (BLPP) measurement and validation of the animal model; the second group was used to verify mid-term tissue tolerance by explants at twelve weeks. General animal tolerance was also evaluated. Results: The ARTUS system implantation was uneventful. When the system was activated, the BLPP was measured at 1.038±0.044 bar (mean±SD). Urethral tissue analysis did not show significant morphological changes. No infection and no sign of discomfort were noted in the animals at 12 weeks. Conclusions: The ARTUS proved effective in achieving continence in this study. Histological results support our idea that a sequential alternating mode can avoid urethral atrophy and ischemia. Further technical developments are needed to verify long-term outcomes and permit human use.
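The "sequential alternating mode" described above can be illustrated abstractly: at any instant all rings but one stay closed (maintaining continence) while the released ring rotates, so no urethral segment is compressed continuously. A minimal sketch of that principle only; this is not the ARTUS firmware, and all names here are illustrative:

```python
from itertools import cycle

def ring_schedule(n_rings, n_steps):
    """Illustrative sequential-alternating schedule: at each step exactly
    one ring is released while the others remain closed, and the released
    ring rotates from step to step. True = ring closed."""
    released = cycle(range(n_rings))
    schedule = []
    for _ in range(n_steps):
        r = next(released)
        schedule.append([i != r for i in range(n_rings)])
    return schedule

# Two-ring device (second study phase), four control steps.
sched = ring_schedule(2, 4)
```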
Abstract:
PURPOSE: To determine the local control and complication rates for children with papillary and/or macular retinoblastoma progressing after chemotherapy and undergoing stereotactic radiotherapy (SRT) with a micromultileaf collimator. METHODS AND MATERIALS: Between 2004 and 2008, 11 children (15 eyes) with macular and/or papillary retinoblastoma were treated with SRT. The mean age was 19 months (range, 2-111). Of the 15 eyes, 7, 6, and 2 were classified as International Classification of Intraocular Retinoblastoma Group B, C, and E, respectively. The delivered dose of SRT was 50.4 Gy in 28 fractions using a dedicated micromultileaf collimator linear accelerator. RESULTS: The median follow-up was 20 months (range, 13-39). Local control was achieved in 13 eyes (87%). The actuarial 1- and 2-year local control rates were both 82%. SRT was well tolerated. Late adverse events were reported in 4 patients. Of the 4 patients, 2 had developed focal microangiopathy 20 months after SRT; 1 had developed a transient recurrence of retinal detachment; and 1 had developed bilateral cataracts. No optic neuropathy was observed. CONCLUSIONS: Linear accelerator-based SRT for papillary and/or macular retinoblastoma in children resulted in excellent tumor control rates with acceptable toxicity. Additional research regarding SRT and its intrinsic organ-at-risk sparing capability is justified in the framework of prospective trials.
Abstract:
SUMMARY: Intercellular communication is achieved at specialized regions of the plasma membrane by gap junctions. The proteins constituting the gap junctions are called connexins and are encoded by a family of genes highly conserved during evolution. In the adult mouse, four connexins (Cxs) are known to be expressed in the vasculature: Cx37, Cx40, Cx43 and Cx45. Several recent studies have provided evidence that vascular connexin expression and blood pressure regulation are closely linked, suggesting a role for connexins in the control of blood pressure. However, the precise function that each vascular connexin plays under physiological and pathophysiological conditions has still not been elucidated. In this context, this work was dedicated to evaluating the contribution of each of the four vascular connexins to the control of vascular function and to blood pressure regulation. In the present work, we first demonstrated that vascular connexins are differently regulated by hypertension in the mouse aorta. We also observed that endothelial connexins play a regulatory role in eNOS expression levels and function in the aorta, and therefore in the control of vascular tone. We then demonstrated that Cx40 plays a pivotal role in the kidney by regulating the renal levels of COX-2 and nNOS, two key enzymes of the macula densa known to participate in the control of renin-secreting cells. We also found that Cx43 forms the functional gap junction involved in intercellular Ca2+ wave propagation between vascular smooth muscle cells. Finally, we have started to generate transgenic mice expressing Cx40 specifically in the endothelium, to investigate the involvement of Cx40 in vasomotor tone, or in the renin-secreting cells, to evaluate the role of Cx40 in the control of renin secretion. In conclusion, this work has allowed us to identify new roles for connexins in the vasculature.
Our results suggest that vascular connexins could be interesting targets for new therapies treating hypertension and vascular diseases.
Abstract:
Gene expression data from microarrays are being applied to predict preclinical and clinical endpoints, but the reliability of these predictions has not been established. In the MAQC-II project, 36 independent teams analyzed six microarray data sets to generate predictive models for classifying a sample with respect to one of 13 endpoints indicative of lung or liver toxicity in rodents, or of breast cancer, multiple myeloma or neuroblastoma in humans. In total, >30,000 models were built using many combinations of analytical methods. The teams generated predictive models without knowing the biological meaning of some of the endpoints and, to mimic clinical reality, tested the models on data that had not been used for training. We found that model performance depended largely on the endpoint and team proficiency and that different approaches generated models of similar performance. The conclusions and recommendations from MAQC-II should be useful for regulatory agencies, study committees and independent investigators that evaluate methods for global gene expression analysis.
Abstract:
Diabet. Med. 28, 539-542 (2011) ABSTRACT: Aims Achievement of good metabolic control in Type 1 diabetes is a difficult task in routine diabetes care. Education-based flexible intensified insulin therapy has the potential to meet the therapeutic targets while limiting the risk for severe hypoglycaemia. We evaluated the metabolic control and the rate of severe hypoglycaemia in real-life clinical practice in a centre using flexible intensified insulin therapy as standard of care since 1990. Methods Patients followed for Type 1 diabetes (n = 206) or those with other causes of absolute insulin deficiency (n = 17) in our outpatient clinic were analysed in a cross-sectional study. Mean age (± standard deviation) was 48.9 ± 15.7 years, with diabetes duration of 21.4 ± 14.4 years. Outcome measures were HbA(1c) and frequency of severe hypoglycaemia. Results Median HbA(1c) was 7.1% (54 mmol/mol) [interquartile range 6.6-7.8 (51-62 mmol/mol)]; a good or acceptable metabolic control with HbA(1c) < 7.0% (53 mmol/mol) or 7.5% (58 mmol/mol) was reached in 43.5 and 64.6% of the patients, respectively. The frequency of severe hypoglycaemic episodes was 15 per 100 patient years: 72.3% of the patients did not experience any such episodes during the past 5 years. Conclusions Good or acceptable metabolic control is achievable in the majority of patients with Type 1 diabetes or other causes of absolute insulin deficiency in routine diabetes care while limiting the risk for severe hypoglycaemia.
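The paired HbA1c values quoted above (% and mmol/mol) follow the standard IFCC-NGSP master equation, IFCC = 10.93 × NGSP − 23.50. A short sketch of the conversion:

```python
def ngsp_to_ifcc(hba1c_percent):
    """Convert an NGSP/DCCT HbA1c value (%) to IFCC units (mmol/mol)
    using the IFCC-NGSP master equation, rounded to the nearest integer."""
    return round(10.93 * hba1c_percent - 23.50)

# Reproduces the abstract's paired values:
# 7.1% -> 54 mmol/mol, 7.0% -> 53, 7.5% -> 58, 7.8% -> 62.
```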
Abstract:
The Helvetic nappe system in Western Switzerland is a stack of fold nappes and thrust sheets emplaced under low-grade metamorphism. Fold nappes and thrust sheets are also some of the most common features in orogens. Fold nappes are kilometer-scale recumbent folds which feature a weakly deformed normal limb and an intensely deformed overturned limb. Thrust sheets, on the other hand, are characterized by the absence of an overturned limb and can be defined as almost rigid blocks of crust that are displaced sub-horizontally over up to several tens of kilometers. The Morcles and Doldenhorn nappes are classic examples of fold nappes and constitute the so-called infra-Helvetic complex in Western and Central Switzerland, respectively. This complex is overridden by thrust sheets such as the Diablerets and Wildhörn nappes in Western Switzerland. One of the most famous examples of thrust sheets worldwide is the Glarus thrust sheet in Central Switzerland, which features over 35 kilometers of thrusting accommodated by a ~1 m thick shear zone. Since the work of the early Alpine geologists such as Heim and Lugeon, the knowledge of these nappes has been steadily refined, and today the geometry and kinematics of the Helvetic nappe system are generally agreed upon. However, despite the extensive knowledge we have today of the kinematics of fold nappes and thrust sheets, the mechanical process leading to the emplacement of these nappes is still poorly understood. For a long time geologists faced the so-called 'mechanical paradox', which arises from the fact that a block of rock several kilometers high and tens of kilometers long (i.e. a nappe) would break internally rather than start moving on a low-angle plane. Several solutions were proposed to solve this apparent paradox. Certainly the most successful is the theory of critical wedges (e.g. Chapple, 1978; Dahlen, 1984).
In this theory the orogen is considered as a whole, and this change of scale allows thrust-sheet-like structures to form while remaining consistent with mechanics. However, this theory is intricately linked to brittle rheology, and fold nappes, which are inherently ductile structures, cannot be created in these models. When considering the problem of nappe emplacement from the perspective of ductile rheology, the problem of strain localization arises. The aim of this thesis was to develop and apply models based on continuum mechanics and integrating heat transfer to understand the emplacement of nappes. The models were solved either analytically or numerically. In the first two papers of this thesis we derived a simple model describing channel flow in a homogeneous material with temperature-dependent viscosity. We applied this model to the Morcles fold nappe and to several kilometer-scale shear zones worldwide. In the last paper we zoomed out and studied the tectonics of (i) ductile and (ii) visco-elasto-plastic, temperature-dependent wedges, focusing on the relationship between basement and cover deformation. We demonstrated that during the compression of a ductile passive margin both fold nappes and thrust sheets can develop, and that these apparently different structures constitute two end-members of a single structure (i.e. the nappe). The transition from fold nappe to thrust sheet is to first order controlled by the deformation of the basement.
-- The Helvetic nappe system in Western Switzerland is a stack of fold nappes and thrust sheets emplaced under low-grade metamorphism. Fold nappes and thrust sheets are among the most common geological objects in orogens. Fold nappes are kilometer-scale recumbent folds characterized by a weakly deformed normal limb and an intensely deformed overturned limb. Thrust sheets, by contrast, are characterized by the absence of a well-defined overturned limb; they can be defined as blocks of crust that move almost rigidly and are displaced sub-horizontally over up to several tens of kilometers. The Morcles and Doldenhorn nappes are classic examples of fold nappes and constitute the infra-Helvetic complex in Western and Central Switzerland, respectively. This complex lies beneath thrust sheets such as the Diablerets and Wildhörn nappes in Western Switzerland. The Glarus nappe in Central Switzerland is certainly the most famous thrust sheet in the world; it stands out by a displacement of more than 35 kilometers accommodated along a basal shear zone only about 1 meter thick. Today the geometry and kinematics of the Alpine nappes are the subject of general consensus among geologists; by contrast, the physical processes by which these nappes were emplaced remain poorly understood. Throughout the first half of the twentieth century, geologists were confronted with the 'mechanical paradox', which arises because a block of rock several kilometers high and several tens of kilometers long (i.e. a nappe) would fracture internally rather than slide along a frictional surface. Several solutions were proposed to circumvent this apparent paradox; the most popular is the theory of critical accretionary wedges (e.g. Chapple, 1978; Dahlen, 1984). In this theory the orogen is considered as a whole, and this simple change of scale resolves the mechanical paradox (the internal fracturing of the orogen corresponds to the nappes). This theory is, however, closely tied to brittle rheology, and consequently fold nappes cannot be created within a critical wedge. The sediments forming the Alpine nappes were deposited during the Mesozoic and Tertiary eras on the basement of the European margin, which was stretched during the opening of the Tethys ocean; during the closure of the Tethys, which gave birth to the Alps, the basement and the sediments of the European margin were deformed to form the Alpine nappes. The aim of this thesis was to develop and apply models based on continuum mechanics and heat transfer to understand the emplacement of nappes. These models were solved analytically or numerically. In the first two papers presented in this thesis we addressed strain localization at the scale of a nappe, deriving a channel-flow model for a homogeneous material whose viscosity depends on temperature; we applied this model to the Morcles nappe and to several kilometer-scale shear zones from different orogens around the world. In the last paper we studied the relationship between the deformation of the basement and that of the sediments. We demonstrated that during the compression of a ductile passive margin both fold nappes and thrust sheets can develop, and that fold nappes and thrust sheets are the two end-members of a single continuum (i.e. the nappe); the transition between the development of a fold nappe and that of a thrust sheet is, to first order, controlled by the deformation of the basement on which the sediments rest.
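The channel-flow model with temperature-dependent viscosity mentioned above can be sketched in its simplest generic form; the exponential (Frank-Kamenetskii-type) flow law and the symbols η₀, γ, T₀ below are illustrative assumptions, not taken from the abstract:

```latex
% Temperature-weakening viscosity and the momentum balance for
% pressure-driven flow in a channel (x along the channel, y across it):
\eta(T) = \eta_0 \, e^{-\gamma \left(T - T_0\right)}, \qquad
\frac{\partial}{\partial y}\!\left(\eta(T)\,\frac{\partial v_x}{\partial y}\right)
  = \frac{\partial p}{\partial x}
```

With γ > 0, warmer (or shear-heated) portions of the channel are weaker, so flow localizes there, which is the strain-localization mechanism at stake in ductile nappe emplacement.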