978 results for Correction of resistivity


Relevance:

80.00%

Publisher:

Abstract:

Correction of sagittal and transverse maxillary discrepancies in patients with cleft lip or palate remains a challenge for craniofacial surgeons. Distraction osteogenesis has revolutionized the conceptualization of and approach to craniofacial malformations and has become a reliable and irreplaceable part of the surgical armamentarium. We report a case of sequential maxillary advancement and transpalatal expansion using internal distraction in a patient with unilateral cleft lip and palate presenting with severe maxillary sagittal and transverse deficiencies.

Relevance:

80.00%

Publisher:

Abstract:

An 11-month-old female infant from Portugal, with no relevant family history, presented with apathy, weight loss, tachycardia, tachypnea, petechiae, pallor without icterus, and hepatosplenomegaly. Seven months earlier, while in Portugal, she had developed a persistent bluish papule on her buttock. Laboratory results showed anemia (hemoglobin 35 g/l), leukopenia (3.3 G/l), thrombocytopenia (13 G/l), impaired coagulation (INR 1.4, PTT 41 sec), hyponatremia (124 mmol/l), elevated CRP (139 mg/l), high ferritin (34,775 μg/l) and high triglycerides (5.22 mmol/l). After correction of the vital parameters, a bone marrow aspiration and biopsy (BMB) revealed both the etiological diagnosis, namely visceral leishmaniasis (VL), and one of its potential complications, the hemophagocytic syndrome (HS). Transfusions of whole blood, platelets and fresh frozen plasma were started immediately. Dexamethasone (10 mg/m2) and amphotericin B (3 mg/kg/day) were also administered. Visceral leishmaniasis is caused by a protozoan (Leishmania donovani) transmitted by the female sandfly. It is endemic in the Mediterranean basin (including France, Italy, Spain and Portugal), South America, sub-Saharan Africa, India and Bangladesh. The parasite infects macrophages and, after several weeks of incubation, the disease manifests with cytopenias (anemia, leukopenia, thrombocytopenia), hepatosplenomegaly, cachexia and gastrointestinal damage. Its complications may lead to death. Liposomal amphotericin B is the currently recommended treatment. HS is caused by the proliferation and activation of macrophages in the bone marrow in response to a cytokine storm. It may be primary or, when secondary, related to infections such as leishmaniasis. Patients present with fever, and the laboratory diagnostic criteria include cytopenia, hypertriglyceridemia, high ferritin and hemophagocytosis on BMB. Treatment consists, among other measures, of high-dose corticosteroids and, in secondary cases, treatment of the underlying cause. In conclusion, the clinical and biological features of VL may mimic haematological disorders such as leukemia, but enlargement of the liver and especially of the spleen should bring to mind this parasitic infection and its potentially fatal complication, HS.

Relevance:

80.00%

Publisher:

Abstract:

This paper describes the impact of infrastructure on the economic evolution of the Central Pyrenees (i.e., Huesca and the Catalan "Alt Pirineu"). The text analyses whether investment in railways, roads and dams favoured economic development or, on the contrary, was just an instrument to extract domestic resources. The paper distinguishes three periods. First, during the second half of the nineteenth century and the first few years of the twentieth century, the lack of railway connections prevented the economic development of the area. Second, between the first decades of the twentieth century and 1975, a road network was set up that reinforced the economic decline of the most depressed valleys, and the construction of large dams was a powerful factor of depopulation throughout the region. Finally, from 1975 onwards, some trends towards the correction of the previous policies can be observed.

Relevance:

80.00%

Publisher:

Abstract:

Summary: Surgical correction of abomasal displacement in dairy cattle: a literature review and two case reports of relapses after omentopexy

Relevance:

80.00%

Publisher:

Abstract:

Alveolar haemorrhage (AH) is a rare and potentially life-threatening condition characterised by diffuse blood leakage from the pulmonary microcirculation into the alveolar spaces due to microvascular damage. It is not a single disease but a clinical syndrome that may have numerous causes. Autoimmune disorders account for fewer than half of cases, whereas the majority are due to nonimmune causes such as left heart disease, infections, drug toxicities, coagulopathies and malignancies. The clinical picture includes haemoptysis, diffuse alveolar opacities at imaging and anaemia. Bronchoalveolar lavage is the gold standard method for diagnosing AH. The lavage fluid appears macroscopically haemorrhagic and/or contains numerous haemosiderin-laden macrophages. The diagnostic work-up includes a search for autoimmune disorders, a review of drugs and exposures, assessment of coagulation and left heart function, and a search for infectious agents. Renal biopsy is often indicated if AH is associated with renal involvement, whereas lung biopsy is only rarely useful. Treatment aims at correcting reversible factors, with immunosuppressive therapy for autoimmune causes and plasmapheresis in selected situations.

Relevance:

80.00%

Publisher:

Abstract:

98% of patients who undergo a gastric bypass for severe obesity develop multiple micronutrient deficits. However, nutrient deficiencies are not rare even before surgery: the dietary intake of surgical candidates is often unbalanced and lacking in variety, particularly in foods rich in vitamins and minerals. We present preliminary results of a qualitative and quantitative dietary analysis in a group of patients awaiting a gastric bypass. Intakes of vitamin B9, vitamin D and iron fall below the recommended daily amounts in the majority of patients. Correcting nutritional intake, even before surgery, is advisable in order to reduce the risk of developing biological deficiencies.

Relevance:

80.00%

Publisher:

Abstract:

Surgical or conservative treatment of ACTH-producing tumors results in an acute drop of the previously excessively high cortisol levels. Similar pathophysiological changes occur during the organism's recovery from stress such as trauma, surgery or chemotherapy of tumors. In both situations the immune system regenerates, and this regeneration may even be exaggerated. The corresponding radiographic feature is the "rebound" enlargement of the thymus occurring about six months after remission of hypercortisolism. Histological examination reveals benign thymic hyperplasia. Especially when the primary tumor is still unknown, the appearance of this anterior mediastinal mass can lead to misdiagnosis. We present the cases of two patients with diffuse thymic hyperplasia following surgical and medical correction of hypercortisolism. One patient suffered from classic Cushing's disease responding to transsphenoidal resection of an ACTH-secreting pituitary microadenoma. Six months later, CT of the chest incidentally demonstrated an anterior mediastinal mass recognized as thymic hyperplasia. The second patient presented with an ectopic, still unidentified source of ACTH production. Six months after medical correction of hypercortisolism, CT of the thorax showed enlargement of the anterior mediastinum. Thymectomy was performed in order to exclude a thymic carcinoid. Histological examination revealed benign thymic hyperplasia with negative immunostaining. CONCLUSION: Radiologists and clinicians should be familiar with the pathophysiological changes that follow precipitously dropping cortisol levels in order to prevent diagnostic errors and unnecessary operations.

Relevance:

80.00%

Publisher:

Abstract:

Objectives: Levosimendan, a calcium-sensitizing agent, has been reported to be useful for the management of patients with a low cardiac output state. We report here our experience with the safety and efficacy of levosimendan used as rescue therapy after surgery for congenital heart disease. Methods: Retrospective cohort study of patients requiring levosimendan for postoperative low cardiac output or severe postoperative systolic and diastolic dysfunction. Twelve patients with a mean age of 2.1 years (range 7 days to 14 years) received levosimendan. Types of surgery: 3 arterial switch operations, 3 corrections of total anomalous pulmonary venous return, 3 closures of VSD with correction of aortic coarctation, 3 repairs of Tetralogy of Fallot, one correction of truncus arteriosus and one palliation for single ventricle. The mean ECC time was 203 +/- 81 min. Ten patients received levosimendan for low cardiac output not responding to conventional therapy (milrinone, dopamine and noradrenaline) within the first 6 hours after ICU admission, and 3 patients received levosimendan 3-4 days after surgery for severe systolic and diastolic dysfunction. Levosimendan was given as a continuous infusion for 24-48 hours at a dose of 0.1-0.2 mcg/kg/min, without a loading dose. Results: Comparing values before and at the end of levosimendan administration, significant changes were noted in mean plasma lactate (3.3 +/- 1.7 mmol/L vs 1.8 +/- 0.6 mmol/L, p < 0.01), mean central venous saturation (55 +/- 11% vs 68 +/- 10%, p < 0.01) and mean arterio-venous difference in CO2 (9.6 +/- 4.9 mmHg vs 6.7 +/- 2.1 mmHg, p < 0.05). There were no significant changes in heart rate, systolic pressure or central venous pressure. No adverse effects were observed. Conclusion: Levosimendan, used as rescue therapy after surgery for congenital heart disease, is safe and improves cardiac output, as demonstrated by the improvement of parameters commonly used clinically.

Relevance:

80.00%

Publisher:

Abstract:

The objective of this study was to evaluate the tissue oxygenation and hemodynamic effects of NOS inhibition in clinical severe septic shock. Eight patients with septic shock refractory to volume loading and a high level of adrenergic support were prospectively enrolled. Increasing doses of NOS inhibitors [N(G)-nitro-L-arginine methyl ester (L-NAME) or N(G)-monomethyl-L-arginine (L-NMMA)] were administered as i.v. boluses until a peak effect of 10 mmHg on mean blood pressure was obtained or until side effects occurred. If deemed clinically appropriate, a continuous infusion of L-NAME was instituted and weaning of adrenergic support was attempted. Bolus administration of NOS inhibitors transiently increased mean blood pressure by 10 mmHg in all patients. Seven of the eight patients received an L-NAME infusion, associated over 24 h with a progressive decline in cardiac index (P < 0.001) and an increase in systemic vascular resistance (P < 0.01). Partial or total weaning of adrenergic support was rapidly possible in 6/8 patients. Oxygen transport decreased (P < 0.001), but oxygen consumption remained unchanged in the patients in whom it could be measured by indirect calorimetry (5/8). Blood lactate and the difference between tonometric gastric and arterial PCO2 remained unchanged. There were 4/8 ICU survivors. We conclude that nitric oxide synthase inhibition in severe septic shock was followed by a progressive correction of the vasoplegic hemodynamic disturbances, with eventual normalization of cardiac output and systemic vascular resistance, and without any demonstrable deterioration in tissue oxygenation.

Relevance:

80.00%

Publisher:

Abstract:

Machines can often be divided into subsystems: control and regulation systems, force-producing actuators, and force-transmitting mechanisms. The individual subsystems have been simulated with computers for several decades, but coupling the subsystems together is a more recent development. In mechanism modelling, for example, the force produced by an actuator is often described as a constant or as a prescribed function of time. Correspondingly, in actuator analysis the load transmitted to the actuator by the mechanism is described as a constant force or as a time-dependent load representing the duty cycle. When the subsystems are separated in this way, examining the interactions between them is very inaccurate, and it is difficult to account for the effect of one subsystem on the behaviour of the whole system. Numerical modelling methods particularly suited to computers have been developed for the dynamics of mechanisms. Most of them are based on the Lagrangian approach, which allows the model to be formulated in freely chosen coordinate variables. To make a numerical solution possible, the resulting system of differential-algebraic equations has to be manipulated, for example by differentiating the constraint equations twice. In the original numerical solution of the method, all generalized coordinates describing the mechanism are integrated at every time step. In methods derived from this basic formulation, either the independent generalized coordinates are integrated and the dependent coordinates are solved from the constraint equations, or the size of the equation system is reduced, for example by using different rotational coordinates in the velocity and acceleration analyses than in the position analysis. Most integration methods were originally intended for ordinary differential equations (ODEs), so the algebraic constraint equations describing the joints may cause problems. Correcting the violations of the joint constraints, i.e. stabilization, is therefore essential for the success of mechanism dynamics simulation and for the correctness of the results. The principle of virtual work used in deriving the modelling methods assumes that the constraint forces do no work, in other words that no displacement violating the constraints occurs. Especially in longer analyses of complex systems the joint constraints are not satisfied exactly; the energy balance of the system is then violated and virtual energy accumulates in the system, which breaks the principle of virtual work and renders the results invalid. This report examines different modelling and solution methods and compares how well they perform in the numerical solution of simple mechanisms. The methods are assessed in terms of solution efficiency, satisfaction of the joint constraints, and conservation of the energy balance.
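To make the constraint-drift problem concrete, here is a minimal sketch (not taken from the report) of a planar pendulum written in Cartesian coordinates as a differential-algebraic system: the distance constraint is differentiated twice, and a Baumgarte-type feedback term is added at the acceleration level so that position- and velocity-level violations decay instead of accumulating. The mass, length and stabilization gains alpha and beta are arbitrary illustrative values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical planar pendulum: point mass m on a rigid massless rod of length L,
# modelled in Cartesian coordinates q = (x, y) with the algebraic constraint
# Phi(q) = x^2 + y^2 - L^2 = 0. Differentiating Phi twice gives the
# acceleration-level equation used below (index reduction, as in the abstract).
m, g, L = 1.0, 9.81, 1.0
alpha, beta = 5.0, 5.0          # Baumgarte feedback gains (illustrative values)

def rhs(t, state):
    x, y, vx, vy = state
    M = np.diag([m, m])                       # mass matrix
    Q = np.array([0.0, -m * g])               # applied force: gravity
    Phi = x**2 + y**2 - L**2                  # position-level constraint violation
    Phi_q = np.array([[2.0 * x, 2.0 * y]])    # constraint Jacobian
    Phi_dot = 2.0 * (x * vx + y * vy)         # velocity-level constraint violation
    gamma = -2.0 * (vx**2 + vy**2)            # right-hand side of Phi_q * qdd = gamma
    # Baumgarte stabilization: feed the violations back so that they decay
    # instead of drifting (alpha = beta = 0 recovers the unstabilized equations).
    rhs_c = gamma - 2.0 * alpha * Phi_dot - beta**2 * Phi
    # Augmented system  [M  -Phi_q^T; Phi_q  0] [qdd; lambda] = [Q; rhs_c]
    A = np.block([[M, -Phi_q.T],
                  [Phi_q, np.zeros((1, 1))]])
    b = np.concatenate([Q, [rhs_c]])
    qdd = np.linalg.solve(A, b)[:2]
    return [vx, vy, qdd[0], qdd[1]]

sol = solve_ivp(rhs, (0.0, 10.0), [L, 0.0, 0.0, 0.0], max_step=1e-2)
drift = np.abs(sol.y[0]**2 + sol.y[1]**2 - L**2)
print(f"max constraint violation over 10 s: {drift.max():.2e}")
```

With alpha = beta = 0 the printed constraint violation typically grows noticeably over the simulated interval, which is the accumulation of virtual energy discussed above; with the feedback term it stays bounded.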

Relevance:

80.00%

Publisher:

Abstract:

Considering the time and energy that teachers devote to correcting learners' essays and written productions, and the little effect these corrections seem to have on the learners' subsequent work, the research presented in this article was designed. The results of the study seem to suggest that corrections alone do not facilitate learning. How we present this feedback activity to the students, and the place we give it within a process-based approach to writing, are key to making corrections facilitate the learning of written production.

Relevance:

80.00%

Publisher:

Abstract:

Notre consommation en eau souterraine, en particulier comme eau potable ou pour l'irrigation, a considérablement augmenté au cours des années. De nombreux problèmes font alors leur apparition, allant de la prospection de nouvelles ressources à la remédiation des aquifères pollués. Indépendamment du problème hydrogéologique considéré, le principal défi reste la caractérisation des propriétés du sous-sol. Une approche stochastique est alors nécessaire afin de représenter cette incertitude en considérant de multiples scénarios géologiques et en générant un grand nombre de réalisations géostatistiques. Nous rencontrons alors la principale limitation de ces approches qui est le coût de calcul dû à la simulation des processus d'écoulements complexes pour chacune de ces réalisations. Dans la première partie de la thèse, ce problème est investigué dans le contexte de propagation de l'incertitude, où un ensemble de réalisations est identifié comme représentant les propriétés du sous-sol. Afin de propager cette incertitude à la quantité d'intérêt tout en limitant le coût de calcul, les méthodes actuelles font appel à des modèles d'écoulement approximés. Cela permet l'identification d'un sous-ensemble de réalisations représentant la variabilité de l'ensemble initial. Le modèle complexe d'écoulement est alors évalué uniquement pour ce sous-ensemble, et, sur la base de ces réponses complexes, l'inférence est faite. Notre objectif est d'améliorer la performance de cette approche en utilisant toute l'information à disposition. Pour cela, le sous-ensemble de réponses approximées et exactes est utilisé afin de construire un modèle d'erreur, qui sert ensuite à corriger le reste des réponses approximées et prédire la réponse du modèle complexe. Cette méthode permet de maximiser l'utilisation de l'information à disposition sans augmentation perceptible du temps de calcul. La propagation de l'incertitude est alors plus précise et plus robuste. La stratégie explorée dans le premier chapitre consiste à apprendre d'un sous-ensemble de réalisations la relation entre les modèles d'écoulement approximé et complexe. Dans la seconde partie de la thèse, cette méthodologie est formalisée mathématiquement en introduisant un modèle de régression entre les réponses fonctionnelles. Comme ce problème est mal posé, il est nécessaire d'en réduire la dimensionnalité. Dans cette optique, l'innovation du travail présenté provient de l'utilisation de l'analyse en composantes principales fonctionnelles (ACPF), qui non seulement effectue la réduction de dimensionnalité tout en maximisant l'information retenue, mais permet aussi de diagnostiquer la qualité du modèle d'erreur dans cet espace fonctionnel. La méthodologie proposée est appliquée à un problème de pollution par une phase liquide non aqueuse et les résultats obtenus montrent que le modèle d'erreur permet une forte réduction du temps de calcul tout en estimant correctement l'incertitude. De plus, pour chaque réponse approximée, une prédiction de la réponse complexe est fournie par le modèle d'erreur. Le concept de modèle d'erreur fonctionnel est donc pertinent pour la propagation de l'incertitude, mais aussi pour les problèmes d'inférence bayésienne. Les méthodes de Monte Carlo par chaîne de Markov (MCMC) sont les algorithmes les plus communément utilisés afin de générer des réalisations géostatistiques en accord avec les observations.
Cependant, ces méthodes souffrent d'un taux d'acceptation très bas pour les problèmes de grande dimensionnalité, résultant en un grand nombre de simulations d'écoulement gaspillées. Une approche en deux temps, le "MCMC en deux étapes", a été introduite afin d'éviter les simulations inutiles du modèle complexe par une évaluation préliminaire de la réalisation. Dans la troisième partie de la thèse, le modèle d'écoulement approximé couplé à un modèle d'erreur sert d'évaluation préliminaire pour le "MCMC en deux étapes". Nous démontrons une augmentation du taux d'acceptation par un facteur de 1.5 à 3 en comparaison avec une implémentation classique de MCMC. Une question reste sans réponse : comment choisir la taille de l'ensemble d'entraînement et comment identifier les réalisations permettant d'optimiser la construction du modèle d'erreur. Cela requiert une stratégie itérative afin que, à chaque nouvelle simulation d'écoulement, le modèle d'erreur soit amélioré en incorporant les nouvelles informations. Ceci est développé dans la quatrième partie de la thèse, où cette méthodologie est appliquée à un problème d'intrusion saline dans un aquifère côtier. -- Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years and groundwater is becoming an increasingly scarce and endangered resource. Nowadays, we are facing many problems ranging from water prospection to sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of realizations. The main limitation of this approach is the computational cost associated with performing complex flow simulations in each realization. In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization. Due to computational constraints, state-of-the-art methods make use of approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run for this subset, based on which inference is made. Our objective is to increase the performance of this approach by using all of the available information and not solely the subset of exact responses. Two error models are proposed to correct the approximate responses following a machine learning approach. For the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known. This information is used to construct an error model and correct the ensemble of approximate responses to predict the "expected" responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational costs and leads to an increase in accuracy and robustness of the uncertainty propagation. The strategy explored in the first chapter consists in learning from a subset of realizations the relationship between proxy and exact curves.
In the second part of this thesis, the strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem by a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty. The individual correction of the proxy response by the error model leads to an excellent prediction of the exact response, opening the door to many applications. The concept of functional error model is useful not only in the context of uncertainty propagation, but also, and maybe even more so, to perform Bayesian inference. Monte Carlo Markov Chain (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from low acceptance rates in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, where the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of the proposal. In the third part of the thesis, a proxy is coupled to an error model to provide an approximate response for the two-stage MCMC set-up. We demonstrate an increase in acceptance rate by a factor of three with respect to one-stage MCMC results. An open question remains: how do we choose the size of the learning set and identify the realizations that optimize the construction of the error model? This requires devising an iterative strategy to construct the error model, such that, as new flow simulations are performed, the error model is iteratively improved by incorporating the new information. This is discussed in the fourth part of the thesis, in which we apply this methodology to a problem of saline intrusion in a coastal aquifer.
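As a rough illustration of the error-model idea described in the first part of the abstract (run the exact flow model only on a small training subset, learn the mapping between proxy and exact response curves, then correct the remaining proxy responses), here is a minimal sketch. It is not the thesis implementation: it uses ordinary principal component scores of synthetic curves as a stand-in for functional principal components, a linear regression between the two score spaces, and fabricated curves in place of real flow-simulation output; the subset size and number of components are arbitrary.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins: 200 realizations, each response is a curve sampled at 50 times.
# "exact" plays the role of the expensive flow model, "proxy" that of a cheap
# approximation with a systematic, realization-dependent error.
t = np.linspace(0.0, 1.0, 50)
amp = rng.uniform(0.5, 1.5, size=200)
exact = amp[:, None] * (1.0 - np.exp(-5.0 * t))
proxy = 0.8 * exact + 0.05 * amp[:, None] * np.sin(6.0 * t)

train = rng.choice(200, size=30, replace=False)   # exact model run on this subset only
test = np.setdiff1d(np.arange(200), train)

# Dimension reduction of the curves (a stand-in for functional PCA).
pca_p, pca_e = PCA(n_components=3), PCA(n_components=3)
scores_p = pca_p.fit_transform(proxy[train])
scores_e = pca_e.fit_transform(exact[train])

# Error model: regression from proxy scores to exact scores, fitted on the subset.
reg = LinearRegression().fit(scores_p, scores_e)

# Correct the proxy responses of the realizations that were never run exactly.
pred = pca_e.inverse_transform(reg.predict(pca_p.transform(proxy[test])))

rmse_proxy = np.sqrt(np.mean((proxy[test] - exact[test]) ** 2))
rmse_corrected = np.sqrt(np.mean((pred - exact[test]) ** 2))
print(f"RMSE of raw proxy: {rmse_proxy:.4f}  RMSE after correction: {rmse_corrected:.4f}")
```

In this synthetic setting the corrected curves essentially recover the exact responses; with real flow responses the residual error of the regression would itself have to be assessed, for instance with the FPCA-space diagnostics mentioned in the abstract.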

Relevance:

80.00%

Publisher:

Abstract:

Characterizing the geological features and structures in three dimensions over inaccessible rock cliffs is needed to assess natural hazards such as rockfalls and rockslides, and also to perform investigations aimed at mapping geological contacts and building stratigraphy and fold models. Indeed, detailed 3D data such as LiDAR point clouds allow accurate study of the hazard processes and the structure of geologic features, in particular in vertical and overhanging rock slopes. Thus, 3D geological models have great potential to be applied to a wide range of geological investigations in both research and applied geology projects, such as mines, tunnels and reservoirs. Recent developments in ground-based remote sensing techniques (LiDAR, photogrammetry and multispectral/hyperspectral imagery) are revolutionizing the acquisition of morphological and geological information. As a consequence, there is great potential for improving the modeling of geological bodies as well as failure mechanisms and stability conditions by integrating detailed remote data. During the past ten years several large rockfall events occurred along important transportation corridors where millions of people travel every year (Switzerland: Gotthard motorway and railway; Canada: Sea to sky highway between Vancouver and Whistler). These events show that there is still a lack of knowledge concerning the detection of potential rockfalls, making mountain residential settlements and roads highly risky. It is necessary to understand the main factors that destabilize rocky outcrops even if inventories are lacking and no clear morphological evidence of rockfall activity is observed. In order to increase the possibilities of forecasting potential future landslides, it is crucial to understand the evolution of rock slope stability. Defining the areas theoretically most prone to rockfalls can be particularly useful to simulate trajectory profiles and to generate hazard maps, which are the basis for land use planning in mountainous regions. The most important questions to address in order to assess rockfall hazard are: Where are the most probable sources for future rockfalls located? What are the frequencies of occurrence of these rockfalls? I characterized the fracturing patterns in the field and with LiDAR point clouds. Afterwards, I developed a model to compute the failure mechanisms on terrestrial point clouds in order to assess the susceptibility to rockfalls at the cliff scale. Similar procedures were already available to evaluate the susceptibility to rockfalls based on aerial digital elevation models. This new model makes it possible to detect the most susceptible rockfall sources with unprecedented detail in vertical and overhanging areas. The computed most probable rockfall source areas in granitic cliffs of Yosemite Valley and the Mont-Blanc massif were then compared to the inventoried rockfall events to validate the calculation methods. Yosemite Valley was chosen as a test area because it has particularly strong rockfall activity (about one rockfall every week), which leads to a high rockfall hazard. The west face of the Dru was also chosen for its significant rockfall activity and especially because it was affected by some of the largest rockfalls that occurred in the Alps during the last 10 years. Moreover, both areas were suitable because of their huge vertical and overhanging cliffs that are difficult to study with classical methods.
Limit equilibrium models have been applied to several case studies to evaluate the effects of different parameters on the stability of rock slope areas. The impact of the degradation of rock bridges on the stability of large compartments in the west face of the Dru was assessed using finite element modeling. In particular, I conducted a back-analysis of the large rockfall event of 2005 (265'000 m3) by integrating field observations of joint conditions, the characteristics of the fracturing pattern and the results of geomechanical tests on the intact rock. These analyses improved our understanding of the factors that influence the stability of rock compartments and were used to define the most probable future rockfall volumes at the Dru. Terrestrial laser scanning point clouds were also successfully employed to perform geological mapping in 3D, using the intensity of the backscattered signal. Another technique to obtain vertical geological maps is combining a triangulated TLS mesh with 2D geological maps. At El Capitan (Yosemite Valley) we built a georeferenced vertical map of the main plutonic rocks that was used to investigate the reasons for the preferential rockwall retreat rate. Additional efforts to characterize the erosion rate were made at Monte Generoso (Ticino, southern Switzerland), where I attempted to improve the estimation of long-term erosion by also taking into account the volumes of the unstable rock compartments. Finally, the following points summarize the main outputs of my research: The new model to compute the failure mechanisms and the rockfall susceptibility with 3D point clouds makes it possible to accurately define the most probable rockfall source areas at the cliff scale. The analysis of the rock bridges at the Dru shows the potential of integrating detailed fracture measurements into geomechanical models of rock mass stability. The correction of the LiDAR intensity signal makes it possible to classify a point cloud according to the rock type and then use this information to model complex geologic structures. The integration of these results, on rock mass fracturing and composition, with existing methods can improve rockfall hazard assessments and enhance the interpretation of the evolution of steep rock slopes. -- La caractérisation de la géologie en 3D pour des parois rocheuses inaccessibles est une étape nécessaire pour évaluer les dangers naturels tels que chutes de blocs et glissements rocheux, mais aussi pour réaliser des modèles stratigraphiques ou de structures plissées. Les modèles géologiques 3D ont un grand potentiel pour être appliqués dans une vaste gamme de travaux géologiques dans le domaine de la recherche, mais aussi dans des projets appliqués comme les mines, les tunnels ou les réservoirs. Les développements récents des outils de télédétection terrestre (LiDAR, photogrammétrie et imagerie multispectrale / hyperspectrale) sont en train de révolutionner l'acquisition d'informations géomorphologiques et géologiques. Par conséquent, il y a un grand potentiel d'amélioration pour la modélisation d'objets géologiques, ainsi que des mécanismes de rupture et des conditions de stabilité, en intégrant des données détaillées acquises à distance. Pour augmenter les possibilités de prévoir les éboulements futurs, il est fondamental de comprendre l'évolution actuelle de la stabilité des parois rocheuses.
Définir les zones qui sont théoriquement plus propices aux chutes de blocs peut être très utile pour simuler les trajectoires de propagation des blocs et pour réaliser des cartes de danger, qui constituent la base de l'aménagement du territoire dans les régions de montagne. Les questions les plus importantes à résoudre pour estimer le danger de chutes de blocs sont : Où se situent les sources les plus probables pour les chutes de blocs et éboulements futurs ? Avec quelle fréquence vont se produire ces événements ? Donc, j'ai caractérisé les réseaux de fractures sur le terrain et avec des nuages de points LiDAR. Ensuite, j'ai développé un modèle pour calculer les mécanismes de rupture directement sur les nuages de points pour pouvoir évaluer la susceptibilité au déclenchement de chutes de blocs à l'échelle de la paroi. Les zones sources de chutes de blocs les plus probables dans les parois granitiques de la vallée de Yosemite et du massif du Mont-Blanc ont été calculées et ensuite comparées aux inventaires des événements pour vérifier les méthodes. Des modèles d'équilibre limite ont été appliqués à plusieurs cas d'études pour évaluer les effets de différents paramètres sur la stabilité des parois. L'impact de la dégradation des ponts rocheux sur la stabilité de grands compartiments de roche dans la paroi ouest du Petit Dru a été évalué en utilisant la modélisation par éléments finis. En particulier j'ai analysé le grand éboulement de 2005 (265'000 m3), qui a emporté l'entier du pilier sud-ouest. Dans le modèle j'ai intégré des observations des conditions des joints, les caractéristiques du réseau de fractures et les résultats de tests géomécaniques sur la roche intacte. Ces analyses ont amélioré l'estimation des paramètres qui influencent la stabilité des compartiments rocheux et ont servi pour définir des volumes probables pour des éboulements futurs. Les nuages de points obtenus avec le scanner laser terrestre ont été utilisés avec succès aussi pour produire des cartes géologiques en 3D, en utilisant l'intensité du signal réfléchi. Une autre technique pour obtenir des cartes géologiques des zones verticales consiste à combiner un maillage LiDAR avec une carte géologique en 2D. A El Capitan (Yosemite Valley) nous avons pu géoréférencer une carte verticale des principales roches plutoniques que j'ai utilisée ensuite pour étudier les raisons d'une érosion préférentielle de certaines zones de la paroi. D'autres efforts pour quantifier le taux d'érosion ont été effectués au Monte Generoso (Ticino, Suisse) où j'ai essayé d'améliorer l'estimation de l'érosion au long terme en prenant en compte les volumes des compartiments rocheux instables. L'intégration de ces résultats, sur la fracturation et la composition de l'amas rocheux, avec les méthodes existantes permet d'améliorer la prise en compte de l'aléa chute de pierres et éboulements et augmente les possibilités d'interprétation de l'évolution des parois rocheuses.
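To give a concrete flavour of how failure mechanisms can be evaluated directly on a terrestrial point cloud, the sketch below estimates a local plane orientation at each point by principal component analysis of its neighbourhood, converts the normal into dip and dip direction, and flags points that pass a simple planar-sliding kinematic test against an assumed cliff-face orientation and friction angle. It is a schematic reconstruction under stated assumptions (random placeholder cloud, arbitrary neighbourhood radius, face orientation and friction angle), not the model developed in the thesis.

```python
import numpy as np
from scipy.spatial import cKDTree

def unit_normals(points, radius=1.0):
    """Estimate a unit normal at each point by PCA of its local neighbourhood."""
    tree = cKDTree(points)
    normals = np.zeros_like(points)
    for i, p in enumerate(points):
        idx = tree.query_ball_point(p, r=radius)
        if len(idx) < 3:                      # too few neighbours to fit a plane
            normals[i] = np.array([0.0, 0.0, 1.0])
            continue
        nb = points[idx] - points[idx].mean(axis=0)
        _, _, vt = np.linalg.svd(nb, full_matrices=False)
        n = vt[-1]                            # direction of smallest spread = normal
        normals[i] = n if n[2] >= 0 else -n   # orient upwards for consistency
    return normals

def dip_and_dip_direction(n):
    """Convert an upward unit normal to dip (deg) and dip direction (deg from north)."""
    dip = np.degrees(np.arccos(np.clip(n[2], -1.0, 1.0)))
    dip_dir = (np.degrees(np.arctan2(n[0], n[1])) + 360.0) % 360.0
    return dip, dip_dir

def planar_sliding(dip, dip_dir, face_dip=70.0, face_dir=180.0, phi=30.0, tol=20.0):
    """Simple kinematic test: the plane dips roughly in the face's dip direction,
    steeper than the friction angle but less steep than the face."""
    daylights = abs(((dip_dir - face_dir + 180.0) % 360.0) - 180.0) < tol
    return daylights and (phi < dip < face_dip)

pts = np.random.default_rng(1).uniform(0.0, 5.0, size=(500, 3))   # placeholder cloud
flags = [planar_sliding(*dip_and_dip_direction(n)) for n in unit_normals(pts)]
print(f"{sum(flags)} of {len(flags)} points flagged as potential planar-sliding sources")
```

A real application would replace the random cloud with a georeferenced TLS point cloud and would also need to consider wedge and toppling mechanisms, which the thesis addresses but this sketch does not.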

Relevance:

80.00%

Publisher:

Abstract:

The starting point of this Master's thesis is to examine the prerequisites and possibilities of permanently installed condition monitoring and preventive maintenance in the pressurized groundwood plant of the Anjala paper mill. The aim of the work is to improve the cost-effectiveness of maintenance and to reduce downtime, and thereby to raise the productivity of the entire production process. The work begins by examining the current sources of disturbance in the process equipment and the means by which maintenance can influence fault repair and costs. On the basis of this examination, it was decided to increase the amount of preventive maintenance in the groundwood plant and to introduce a permanently installed condition monitoring system on the H4 line. According to the study and the underlying theory, the benefits of continuous condition monitoring are a substantial improvement in equipment reliability and a reduction in maintenance costs. In the experimental part of the work, maintenance costs fell by about 24 %, and thanks to improved availability, equipment failures fell by 50 %. These results were achieved by carrying out the planned preventive maintenance on the critical equipment. The permanently installed condition monitoring gave maintenance information on the correct service intervals for the equipment. The full benefits of the investment were realized already during the first year of operation of the system. Raising the competence of the personnel will further increase the benefits mentioned above.

Relevance:

80.00%

Publisher:

Abstract:

This paper analyzes an innovative experience of formative assessment aimed at improving the teaching of Statistics, which could easily be extrapolated to other subjects. We detail the implementation of double correction, which consists of correcting students' work twice. In the first correction, carried out by classmates according to a rubric developed by the academic, possible errors or deficiencies are discovered and students receive feedback that allows them to correct and improve their work before it is graded by the teacher; in the second correction, once the work has been revised, the professor evaluates and grades it. As a result, there is a significant improvement in the quality of the students' work, and active learning from their own mistakes. Both content and competencies are reinforced by the experience.