994 results for Intensity parameters


Relevance:

20.00%

Publisher:

Abstract:

OBJECTIVE: Electrolyte handling by the kidney is essential for volume and blood pressure (BP) homeostasis, but the distribution and heritability of electrolyte-handling traits are not well described. We estimated the heritability of kidney function as well as of serum and urine concentrations, renal clearances and fractional excretions for sodium, chloride, potassium, calcium, phosphate and magnesium in a Swiss population-based study. DESIGN AND METHOD: Nuclear families were randomly selected from the general population in Switzerland. We estimated glomerular filtration rate (eGFR) using the CKD-EPI and MDRD equations. Urine was collected over 24 hours, separately during day and night. We used the ASSOC program (S.A.G.E.) to estimate narrow-sense heritability, including age, sex, body mass index and study center as covariates in the model. RESULTS: The 1128 participants (537 men and 591 women from 273 families) had a mean (SD) age of 47.4 (17.5) years, body mass index of 25.0 (4.5) kg/m2 and CKD-EPI eGFR of 98.0 (18.5) mL/min/1.73 m2. Heritability estimates (SE) were 46.0% (0.06), 48.0% (0.06) and 18.0% (0.06) for CKD-EPI, MDRD and 24-hour creatinine clearance (P < 0.05), respectively. Heritability [SE] of serum concentration was highest for calcium (37% [0.06]) and lowest for sodium (13% [0.05]). Heritabilities [SE] of 24-h urine concentrations and excretions, and of fractional excretions, were highest for calcium (51% [0.06], 44% [0.06] and 51% [0.06], respectively) and lowest for potassium (11% [0.05], 10% [0.05] and 16% [0.06], respectively). All results were statistically different from zero. CONCLUSIONS: Serum and urine levels, urinary excretions and renal handling of electrolytes, particularly calcium, are heritable in the general adult population. 
Identifying genetic variants involved in electrolyte homeostasis may provide useful insight into the pathophysiological mechanisms involved in common chronic diseases such as kidney diseases, hypertension and diabetes.
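As a hedged illustration of what a narrow-sense heritability estimate means (the study itself used the ASSOC variance-components program, which is not reproduced here), the following pure-Python sketch simulates a purely additive midparent-offspring model and recovers h² as the regression slope of offspring on midparent. All numbers are synthetic except the 46% figure taken from the abstract:

```python
import random

random.seed(42)

def simulate_families(n, h2, sd=1.0):
    """Simulate (midparent, offspring) trait pairs for a trait with
    narrow-sense heritability h2 (purely additive, illustrative model)."""
    pairs = []
    for _ in range(n):
        midparent = random.gauss(0.0, sd)
        # Offspring regress toward the mean: expected value = h2 * midparent
        offspring = h2 * midparent + random.gauss(0.0, sd * (1 - h2 ** 2) ** 0.5)
        pairs.append((midparent, offspring))
    return pairs

def regression_slope(pairs):
    """OLS slope of offspring on midparent; under the additive model
    this slope estimates h2."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    var = sum((x - mx) ** 2 for x, _ in pairs) / n
    return cov / var

pairs = simulate_families(5000, h2=0.46)  # 46% as reported for CKD-EPI eGFR
print(round(regression_slope(pairs), 2))
```

A family-based variance-components estimator such as ASSOC additionally adjusts for the covariates listed above (age, sex, body mass index, study center), which this toy model omits.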


Electrical resistivity tomography (ERT) is a well-established method for geophysical characterization and has shown potential for monitoring geologic CO2 sequestration, due to its sensitivity to electrical resistivity contrasts generated by liquid/gas saturation variability. In contrast to deterministic inversion approaches, probabilistic inversion provides the full posterior probability density function of the saturation field and accounts for the uncertainties inherent in the petrophysical parameters relating the resistivity to saturation. In this study, the data are from benchtop ERT experiments conducted during gas injection into a quasi-2D brine-saturated sand chamber with a packing that mimics a simple anticlinal geological reservoir. The saturation fields are estimated by Markov chain Monte Carlo inversion of the measured data and compared to independent saturation measurements from light transmission through the chamber. Different model parameterizations are evaluated in terms of the recovered saturation and petrophysical parameter values. The saturation field is parameterized (1) in Cartesian coordinates, (2) by means of its discrete cosine transform coefficients, and (3) by fixed saturation values in structural elements whose shape and location are assumed known or are represented by an arbitrary Gaussian bell structure. Results show that the estimated saturation fields are in overall agreement with saturations measured by light transmission, but differ strongly in terms of parameter estimates, parameter uncertainties and computational intensity. Discretization in the frequency domain (as in the discrete cosine transform parameterization) provides more accurate models at a lower computational cost compared to spatially discretized (Cartesian) models. A priori knowledge about the expected geologic structures allows for non-discretized model descriptions with markedly reduced degrees of freedom. 
Constraining the solutions to the known injected gas volume improved estimates of saturation and parameter values of the petrophysical relationship.
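The discrete cosine transform parameterization mentioned above can be illustrated with a minimal, self-contained sketch (not the study's inversion code): a smooth 1-D saturation profile sampled on 64 cells is summarized by only 6 DCT-II coefficients, which is exactly the dimensionality reduction that makes the MCMC inversion cheaper than a cell-by-cell (Cartesian) parameterization:

```python
import math

def dct_coeffs(signal, k_max):
    """First k_max DCT-II coefficients of a sampled profile."""
    n = len(signal)
    return [sum(signal[i] * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i in range(n)) * (2 / n)
            for k in range(k_max)]

def idct(coeffs, n):
    """Reconstruct an n-sample profile from (possibly truncated) coefficients."""
    out = []
    for i in range(n):
        x = coeffs[0] / 2
        for k in range(1, len(coeffs)):
            x += coeffs[k] * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
        out.append(x)
    return out

# Smooth synthetic saturation profile (values in [0, 1]) on 64 cells
n = 64
profile = [0.5 + 0.4 * math.sin(math.pi * i / (n - 1)) for i in range(n)]

# 6 coefficients instead of 64 cell values: far fewer model parameters
approx = idct(dct_coeffs(profile, 6), n)
rmse = math.sqrt(sum((a - p) ** 2 for a, p in zip(approx, profile)) / n)
print(rmse)  # small reconstruction error for a smooth field
```

Smooth fields compress well in the DCT basis, so an MCMC sampler can explore 6 coefficients instead of 64 (or, in 2-D, a few dozen instead of thousands of cells).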


The aim of this master's thesis was to optimize the secondary pre-flotation at the Stora Enso Sachsen GmbH mill. The amount of froth was used as the optimization variable, with ISO brightness, yields and ash content as the optimization parameters. In addition, the effect of flotation consistency on the mill's other flotation processes was studied. The literature part examined the flotation event, the contact between the particles to be removed and the air bubbles, froth formation, and the most important deinking flotation cell designs in use. In the experimental part, the effects of reducing flotation consistency on the mill's flotation processes were studied in terms of ash content, ISO brightness, and the light scattering and light absorption coefficients. The optimization of the secondary pre-flotation was carried out by varying the amount of froth with three different injector sizes (8 mm, 10 mm and 13 mm), of which the middle one increases the volumetric flow of the stock by 30% in the form of air content. The aim of the optimization was to increase the ISO brightness of the accepted stock fraction and to increase the fibre and total yields in the secondary pre-flotation. Reducing the flotation consistency had beneficial effects on ISO brightness and the light scattering coefficient in each flotation stage. The ash content decreased more in the secondary flotations at lower consistency, whereas in the primary flotations the effect was the opposite. The light absorption coefficient improved in the post-flotations at lower consistency, whereas in the pre-flotations the effect was the opposite. The optimization of the secondary pre-flotation resulted in an almost 5% higher ISO brightness of the accepted stock fraction. Total yield improved by 5% and fibre yield by 2% as a result of the optimization. The increased yields generate annual savings as the production capacity of the deinking plant rises by 0.5%. In addition, the reduction of the reject stock flow from the secondary pre-flotation generates further savings at the mill's power plant.


Infrared spectroscopy (FTIR) is a technique of choice for analyzing spray paint specimens (i.e. traces) and reference samples (i.e. cans seized from suspects), due to its high discriminating power, sensitivity and sampling possibilities. The comparison of the spectra is currently carried out visually, but this procedure has limitations such as subjectivity in the decision, which depends on the experience and training of the expert. This implies that small differences in the relative intensity of two peaks can be perceived differently by experts, even between analysts working in the same laboratory. When it comes to justifying these differences, some will explain them by the analytical technique, while others will estimate that the observed differences are mostly due to variability intrinsic to the paint sample and/or its acquired characteristics (for example homogeneity, spraying, or degradation). This work proposes to statistically study the different sources of variability observed in infrared spectra, to identify them, understand them and try to minimize them. The second goal is to propose a procedure for spectra comparison that is more transparent and yields reproducible answers independent of the expert. The first part of the manuscript focuses on the optimization of the infrared measurement and on the main analytical parameters. The conditions necessary to obtain reproducible spectra with minimal variation within a sample (intra-variability) are presented. A procedure of spectral correction is then proposed, using pretreatments and variable-selection methods, in order to minimize the remaining systematic and random errors and to maximize the relevant chemical information. The second part presents a market study of 74 spray paints representative of the Swiss market. The discrimination capabilities of FTIR at the brand and model level are evaluated by means of visual and statistical procedures. 
The lower limits of discrimination are tested on paints of the same brand and model but from different production batches. The results showed that the pigment composition was particularly discriminatory, because of the corrections and adjustments made to the paint color during its manufacturing process. The features associated with spray paint traces (graffiti, droplets) were also tested. Three elements were identified and their influence on the resulting infrared spectra was tested: 1) the minimum shaking time necessary to obtain a sufficient homogeneity of the paint and subsequently of the painted surface, 2) the degradation initiated by ultraviolet radiation in an exterior environment, and 3) the contamination from the support when paint is recovered. Finally, a population study was performed on 35 graffiti from the city of Lausanne and surrounding areas, and the results were compared to the previous market study of spray cans. The last part concentrated on the decision process during the pairwise comparison of spectra. First, current practice among laboratories was surveyed by means of a questionnaire. Then, a statistical method of comparison was proposed to improve the objectivity and transparency of the decision process. A method of comparison based on the correlation between spectra is proposed, followed by its integration into a Bayesian framework at both the source and activity levels. Finally, some case examples are presented and the recommended methodology is discussed in order to define the role of the expert as well as the contribution of the tested statistical approach within a global analytical sequence for paint examinations.
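A correlation-based pairwise comparison of spectra, as proposed in the last part, can be sketched in a few lines of Python (toy absorbance vectors on a common wavenumber grid, not real FTIR data; the pretreatments and the Bayesian evaluation described in the thesis are omitted):

```python
def pearson(a, b):
    """Pearson correlation between two absorbance vectors sampled
    on the same wavenumber grid."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

# Two illustrative "spectra": identical shape vs. a shifted peak
ref   = [0.1, 0.2, 0.8, 1.0, 0.7, 0.3, 0.1, 0.1]
same  = [0.1, 0.2, 0.8, 1.0, 0.7, 0.3, 0.1, 0.1]
other = [0.1, 0.1, 0.3, 0.7, 1.0, 0.8, 0.2, 0.1]

print(pearson(ref, same))   # 1.0
print(pearson(ref, other))  # noticeably lower
```

In practice the correlation scores for same-source and different-source pairs would feed the score-based Bayesian evaluation of the evidence, rather than being thresholded directly.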


Parkinson disease (PD) is associated with a clinical course of variable duration, severity, and a combination of motor and non-motor features. Recent PD research has focused primarily on etiology rather than clinical progression and long-term outcomes. For the PD patient, caregivers, and clinicians, information on expected clinical progression and long-term outcomes is of great importance. Today, it remains largely unknown what factors influence long-term clinical progression and outcomes in PD; recent data indicate that the factors that increase the risk of developing PD differ, at least partly, from those that accelerate clinical progression and lead to worse outcomes. Prospective studies will be required to identify factors that influence progression and outcome. We suggest that data for such studies be collected during routine office visits in order to guarantee the high external validity of such research. We report here the results of a consensus meeting of international movement disorder experts from the Genetic Epidemiology of Parkinson's Disease (GEO-PD) consortium, who convened to define which long-term outcomes are of interest to patients, caregivers and clinicians, and what is presently known about environmental or genetic factors influencing clinical progression or long-term outcomes in PD. We propose a panel of rating scales that collects a significant amount of phenotypic information, can be performed in the routine office visit and allows international standardization. Research into the progression and long-term outcomes of PD aims at providing individual prognostic information early, adapting treatment choices, and taking specific measures to provide care optimized to the individual patient's needs.


This paper discusses the levels of degradation of some co- and byproducts of the food chain intended for feed uses. As the first part of a research project, 'Feeding Fats Safety', financed by the Sixth Framework Programme of the EC, a total of 123 samples were collected from 10 European countries, corresponding to fat co- and byproducts such as animal fats, fish oils, acid oils from refining, recycled cooking oils, and others. Several composition and degradation parameters (moisture, acid value, diacylglycerols and monoacylglycerols, peroxides, secondary oxidation products, polymers of triacylglycerols, fatty acid composition, tocopherols, and tocotrienols) were evaluated. These findings led to the conclusion that some fat by- and coproducts, such as fish oils, lecithins, and acid oils, show poor, nonstandardized quality and that production processes need to be greatly improved. Conclusions are also put forward about the applicability and utility of each analytical parameter for characterization and quality control.


OBJECTIVE: This study aimed to assess the long-term outcome of functional endoscopic sinus surgery for Samter's triad patients using an objective visual analogue scale and nasal endoscopy. METHOD: Using a retrospective database, 33 Samter's triad patients who underwent functional endoscopic sinus surgery were evaluated pre- and post-operatively between 1987 and 2007 at the Hospital of La Chaux-de-Fonds, Switzerland. RESULTS: A total of 33 patients participated in the study, and the mean follow-up period was 11.6 years (range 1.2-20 years). Patients were divided into two groups based on visual analogue scale scores of the five parameters with the greatest difference in intensity of symptoms between the beginning and end of follow-up. Group 1 included patients with a mean visual analogue scale score of 6 or below at the end of follow-up, and group 2 included patients with a mean visual analogue scale score of more than 6. The only statistically significant difference noted between the two groups was in the endonasal findings: stage III-IV polyposis was present in 1 out of 24 patients (4 per cent) in group 1 and in 5 out of 9 patients (56 per cent) in group 2. CONCLUSION: The results of our study indicate that functional endoscopic sinus surgery helps stabilise disease progression. Stage III-IV polyposis had a significant adverse effect on long-term outcome.


In this study, we randomly compared high doses of the tyrosine kinase inhibitor imatinib combined with reduced-intensity chemotherapy (arm A) to standard imatinib/hyperCVAD (cyclophosphamide/vincristine/doxorubicin/dexamethasone) therapy (arm B) in 268 adults (median age, 47 years) with Philadelphia chromosome-positive (Ph+) acute lymphoblastic leukemia (ALL). The primary objective was the major molecular response (MMolR) rate after cycle 2, patients being then eligible for allogeneic stem cell transplantation (SCT) if they had a donor, or autologous SCT if in MMolR and no donor. With fewer induction deaths, the complete remission (CR) rate was higher in arm A than in arm B (98% vs 91%; P = .006), whereas the MMolR rate was similar in both arms (66% vs 64%). With a median follow-up of 4.8 years, 5-year event-free survival and overall survival (OS) rates were estimated at 37.1% and 45.6%, respectively, without difference between the arms. Allogeneic transplantation was associated with a significant benefit in relapse-free survival (hazard ratio [HR], 0.69; P = .036) and OS (HR, 0.64; P = .02), with initial white blood cell count being the only factor significantly interacting with this SCT effect. In patients achieving MMolR, outcome was similar after autologous and allogeneic transplantation. This study validates an induction regimen combining reduced-intensity chemotherapy and imatinib in Ph+ ALL adult patients and suggests that SCT in first CR is still a good option for Ph+ ALL adult patients. This trial was registered at www.clinicaltrials.gov as #NCT00327678.


PURPOSE: Walking in patients with chronic low back pain (cLBP) is characterized by motor control adaptations as a protective strategy against further injury or pain. The purpose of this study was to compare the preferred walking speed, the biomechanical and the energetic parameters of walking at different speeds between patients with cLBP and healthy men individually matched for age, body mass and height. METHODS: Energy cost of walking was assessed with a breath-by-breath gas analyser; mechanical and spatiotemporal parameters of walking were computed using two inertial sensors equipped with a triaxial accelerometer and gyroscope and compared in 13 men with cLBP and 13 control men (CTR) during treadmill walking at standard (0.83, 1.11, 1.38, 1.67 m s(-1)) and preferred (PWS) speeds. Low back pain intensity (visual analogue scale, cLBP only) and perceived exertion (Borg scale) were assessed at each walking speed. RESULTS: PWS was slower in cLBP [1.17 (SD = 0.13) m s(-1)] than in CTR group [1.33 (SD = 0.11) m s(-1); P = 0.002]. No significant difference was observed between groups in mechanical work (P ≥ 0.44), spatiotemporal parameters (P ≥ 0.16) and energy cost of walking (P ≥ 0.36). At the end of the treadmill protocol, perceived exertion was significantly higher in cLBP [11.7 (SD = 2.4)] than in CTR group [9.9 (SD = 1.1); P = 0.01]. Pain intensity did not significantly increase over time (P = 0.21). CONCLUSIONS: These results do not support the hypothesis of a less efficient walking pattern in patients with cLBP and imply that high walking speeds are well tolerated by patients with moderately disabling cLBP.


One of the most important problems in optical pattern recognition by correlation is the appearance of sidelobes in the correlation plane, which cause false alarms. We present a method that eliminates sidelobes of up to a given height if certain conditions are satisfied. The method can be applied to any generalized synthetic discriminant function filter and is capable of rejecting lateral peaks that are even higher than the central correlation peak. Satisfactory results were obtained in both computer simulations and an optical implementation.
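The abstract does not detail the filter construction, but the problem it addresses is easy to illustrate. Below is a hedged sketch of a sidelobe-to-peak ratio for a 1-D correlation output, where values near 1 indicate lateral peaks likely to trigger false alarms (all numbers synthetic, not from the paper):

```python
def sidelobe_to_peak_ratio(corr, exclude=1):
    """Ratio of the highest sidelobe to the central peak in a 1-D
    correlation output; values near 1 signal likely false alarms."""
    peak_idx = max(range(len(corr)), key=lambda i: corr[i])
    sidelobes = [v for i, v in enumerate(corr)
                 if abs(i - peak_idx) > exclude]
    return max(sidelobes) / corr[peak_idx]

clean = [0.1, 0.2, 0.3, 1.0, 0.3, 0.2, 0.1]  # well-behaved correlation plane
noisy = [0.1, 0.9, 0.3, 1.0, 0.3, 0.2, 0.1]  # strong lateral peak near 0.9

print(sidelobe_to_peak_ratio(clean))  # 0.2
print(sidelobe_to_peak_ratio(noisy))  # 0.9
```

A sidelobe-elimination method such as the one described would drive this ratio down below the detection threshold for the admissible input patterns.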


AIMS: c-Met is an emerging biomarker in pancreatic ductal adenocarcinoma (PDAC); there is no consensus regarding the immunostaining scoring method for this marker. We aimed to assess the prognostic value of c-Met overexpression in resected PDAC, and to elaborate a robust and reproducible scoring method for c-Met immunostaining in this setting. METHODS AND RESULTS: c-Met immunostaining was graded according to the validated MetMab score, a classic visual scale combining surface and intensity (SI score), or a simplified score (high c-Met: ≥20% of tumour cells with strong membranous staining), in stage I-II PDAC. A computer-assisted classification method (Aperio software) was developed. Clinicopathological parameters were correlated with disease-free survival (DFS) and overall survival (OS). One hundred and forty-nine patients were analysed retrospectively in a two-step process. Thirty-seven samples (whole slides) were analysed as a pre-run test. Reproducibility values were optimal with the simplified score (kappa = 0.773); high c-Met expression (7/37) was associated with shorter DFS [hazard ratio (HR) 3.456, P = 0.0036] and OS (HR 4.257, P = 0.0004). c-Met expression was concordant on whole slides and tissue microarrays in 87.9% of samples, and quantifiable with a specific computer-assisted algorithm. In the whole cohort (n = 131), patients with c-Met(high) tumours (36/131) had significantly shorter DFS (9.3 versus 20.0 months, HR 2.165, P = 0.0005) and OS (18.2 versus 35.0 months, HR 1.832, P = 0.0098) in univariate and multivariate analysis. CONCLUSIONS: Simplified c-Met expression is an independent prognostic marker in stage I-II PDAC that may help to identify patients with a high risk of tumour relapse and poor survival.
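The simplified score reduces to a single threshold rule; the sketch below encodes it as stated in the abstract (c-Met-high when at least 20% of tumour cells show strong membranous staining). The function name and inputs are illustrative, not from the study's code:

```python
def simplified_cmet_score(pct_strong_membranous):
    """Simplified c-Met score from the abstract: a tumour is c-Met-high
    when >= 20% of tumour cells show strong membranous staining."""
    return "high" if pct_strong_membranous >= 20.0 else "low"

print(simplified_cmet_score(35.0))  # high
print(simplified_cmet_score(5.0))   # low
```

A single-threshold rule of this kind is easy to reproduce across observers, which is consistent with the optimal kappa reported for the simplified score.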


PURPOSE: To meta-analyze the literature on the clinical performance of Class V restorations to assess the factors that influence retention, marginal integrity, and marginal discoloration of cervical lesions restored with composite resins, glass-ionomer-cement-based materials [glass-ionomer cement (GIC) and resin-modified glass ionomers (RMGICs)], and polyacid-modified resin composites (PMRC). MATERIALS AND METHODS: The English-language literature was searched (MEDLINE and SCOPUS) for prospective clinical trials on cervical restorations with an observation period of at least 18 months. The studies had to report retention, marginal discoloration, marginal integrity, and marginal caries and include a description of the operative technique (beveling of enamel, roughening of dentin, type of isolation). Eighty-one studies involving 185 experiments for 47 adhesives matched the inclusion criteria. The statistical analysis was carried out using the following linear mixed model: log(−log(Y/100)) = β + α log(T) + error, with β = log(λ), where β is a summary measure of the non-linear deterioration occurring in each experiment, including a random study effect. RESULTS: On average, 12.3% of the cervical restorations were lost, 27.9% exhibited marginal discoloration, and 34.6% exhibited deterioration of marginal integrity after 5 years. The calculated clinical index was 17.4% of failures after 5 years and 32.3% after 8 years. A higher variability was found for retention loss and marginal discoloration. Hardly any secondary caries lesions were detected, even in the experiments with a follow-up time longer than 8 years. Restorations placed using rubber-dam in teeth whose dentin was roughened showed a statistically significantly higher retention rate than those placed in teeth with unprepared dentin or without rubber-dam (p < 0.05). However, enamel beveling had no influence on any of the examined variables. 
Significant differences were found between pairs of adhesive systems and also between pairs of classes of adhesive systems. One-step self-etching had a significantly worse clinical index than two-step self-etching and three-step etch-and-rinse (p = 0.026 and p = 0.002, respectively). CONCLUSION: The clinical performance is significantly influenced by the type of adhesive system and/or the adhesive class to which the system belongs. Whether the dentin/enamel is roughened or not and whether rubber-dam isolation is used or not also significantly influenced the clinical performance. Composite resin restorations placed with two-step self-etching and three-step etch-and-rinse adhesive systems should be preferred over one-step self-etching adhesive systems, GIC-based materials, and PMRCs.
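The failure model quoted in the methods, log(−log(Y/100)) = β + α log(T) with β = log(λ), is equivalent to a Weibull-type survival curve Y(T) = 100·exp(−λT^α). As a worked check, the sketch below backs α and λ out of the two clinical-index figures reported in the abstract (17.4% failures after 5 years, 32.3% after 8 years) and reproduces them; the 10-year figure is a naive extrapolation, not a result from the paper:

```python
import math

# Model from the meta-analysis: log(-log(Y/100)) = beta + alpha*log(T),
# i.e. surviving fraction Y(T) = 100 * exp(-lambda * T**alpha).
def surviving_fraction(lam, alpha, t):
    return 100.0 * math.exp(-lam * t ** alpha)

# Back out alpha and lambda from the two reported clinical-index points
# (17.4% failures after 5 years, 32.3% after 8 years).
y5, y8 = 100.0 - 17.4, 100.0 - 32.3
h5, h8 = -math.log(y5 / 100.0), -math.log(y8 / 100.0)
alpha = math.log(h8 / h5) / math.log(8.0 / 5.0)
lam = h5 / 5.0 ** alpha

print(round(100.0 - surviving_fraction(lam, alpha, 5.0), 1))   # 17.4
print(round(100.0 - surviving_fraction(lam, alpha, 8.0), 1))   # 32.3
print(round(100.0 - surviving_fraction(lam, alpha, 10.0), 1))  # naive extrapolation
```

With α > 1 the deterioration accelerates over time, which is why the failure percentage grows faster between years 5 and 8 than a constant-rate model would predict.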


Characterizing the geological features and structures in three dimensions over inaccessible rock cliffs is needed to assess natural hazards such as rockfalls and rockslides, and also to perform investigations aimed at mapping geological contacts and building stratigraphic and fold models. Indeed, detailed 3D data, such as LiDAR point clouds, allow accurate study of the hazard processes and the structure of geologic features, in particular in vertical and overhanging rock slopes. Thus, 3D geological models have great potential to be applied to a wide range of geological investigations, both in research and in applied geology projects such as mines, tunnels and reservoirs. Recent developments in ground-based remote sensing techniques (LiDAR, photogrammetry and multispectral / hyperspectral imaging) are revolutionizing the acquisition of morphological and geological information. As a consequence, there is great potential for improving the modeling of geological bodies as well as failure mechanisms and stability conditions by integrating detailed remote data. During the past ten years several large rockfall events occurred along important transportation corridors where millions of people travel every year (Switzerland: Gotthard motorway and railway; Canada: Sea to Sky highway between Vancouver and Whistler). These events show that there is still a lack of knowledge concerning the detection of potential rockfalls, leaving mountain residential settlements and roads at high risk. It is necessary to understand the main factors that destabilize rocky outcrops even when inventories are lacking and no clear morphological evidence of rockfall activity is observed. In order to increase the possibilities of forecasting potential future landslides, it is crucial to understand the evolution of rock slope stability. 
Defining the areas theoretically most prone to rockfalls can be particularly useful to simulate trajectory profiles and to generate hazard maps, which are the basis for land-use planning in mountainous regions. The most important questions to address in order to assess rockfall hazard are: Where are the most probable sources for future rockfalls located? What are the frequencies of occurrence of these rockfalls? I characterized the fracturing patterns in the field and with LiDAR point clouds. Afterwards, I developed a model to compute the failure mechanisms on terrestrial point clouds in order to assess the susceptibility to rockfalls at the cliff scale. Similar procedures were already available to evaluate the susceptibility to rockfalls based on aerial digital elevation models. This new model makes it possible to detect the most susceptible rockfall sources with unprecedented detail in vertical and overhanging areas. The results of the computation of the most probable rockfall source areas in the granitic cliffs of Yosemite Valley and the Mont-Blanc massif were then compared to the inventoried rockfall events to validate the calculation methods. Yosemite Valley was chosen as a test area because it has particularly strong rockfall activity (about one rockfall every week), which leads to a high rockfall hazard. The west face of the Dru was also chosen for its relevant rockfall activity, and especially because it was affected by some of the largest rockfalls that occurred in the Alps during the last 10 years. Moreover, both areas were suitable because of their huge vertical and overhanging cliffs that are difficult to study with classical methods. Limit-equilibrium models were applied to several case studies to evaluate the effects of different parameters on the stability of rock slope areas. The impact of the degradation of rock bridges on the stability of large compartments in the west face of the Dru was assessed using finite element modeling. 
In particular, I conducted a back-analysis of the large rockfall event of 2005 (265'000 m3) by integrating field observations of joint conditions, characteristics of the fracturing pattern and results of geomechanical tests on the intact rock. These analyses improved our understanding of the factors that influence the stability of rock compartments and were used to define the most probable future rockfall volumes at the Dru. Terrestrial laser scanning point clouds were also successfully employed to perform geological mapping in 3D, using the intensity of the backscattered signal. Another technique to obtain vertical geological maps is to combine a triangulated TLS mesh with 2D geological maps. At El Capitan (Yosemite Valley) we built a georeferenced vertical map of the main plutonic rocks that was used to investigate the reasons for the preferential rockwall retreat rate. Additional efforts to characterize the erosion rate were made at Monte Generoso (Ticino, southern Switzerland), where I attempted to improve the estimation of long-term erosion by also taking into account the volumes of the unstable rock compartments. Finally, the following points summarize the main outputs of my research: The new model to compute the failure mechanisms and the rockfall susceptibility with 3D point clouds allows the most probable rockfall source areas to be defined accurately at the cliff scale. The analysis of the rock bridges at the Dru shows the potential of integrating detailed measurements of the fractures into geomechanical models of rock-mass stability. The correction of the LiDAR intensity signal makes it possible to classify a point cloud according to rock type and then use this information to model complex geologic structures. The integration of these results, on rock-mass fracturing and composition, with existing methods can improve rockfall hazard assessments and enhance the interpretation of the evolution of steep rock slopes. 
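The thesis's point-cloud failure-mechanism model is not reproduced in the abstract; as a simplified stand-in, the sketch below implements the classic Markland-style kinematic test for planar sliding from discontinuity and slope orientations (the friction angle and lateral limit are illustrative textbook defaults, not values from the thesis):

```python
def planar_sliding_feasible(slope_dip, slope_dipdir,
                            joint_dip, joint_dipdir,
                            friction_angle=30.0, lateral_limit=20.0):
    """Textbook kinematic test for planar sliding (illustrative only):
    the discontinuity must daylight in the slope face (dip less than the
    slope dip), dip more steeply than the friction angle, and have a dip
    direction roughly parallel to that of the face."""
    daylights = joint_dip < slope_dip
    steep_enough = joint_dip > friction_angle
    aligned = abs((joint_dipdir - slope_dipdir + 180) % 360 - 180) <= lateral_limit
    return daylights and steep_enough and aligned

print(planar_sliding_feasible(75, 120, 45, 115))  # True: daylights, steep, aligned
print(planar_sliding_feasible(75, 120, 45, 300))  # False: dips into the face
```

Applied cell-by-cell to orientations extracted from a terrestrial point cloud, a test of this kind flags the portions of a cliff kinematically susceptible to a given failure mechanism, which is the spirit of the susceptibility model described above.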
-- Characterizing the geology of inaccessible rock walls in 3D is a necessary step for assessing natural hazards such as rockfalls and rockslides, but also for building stratigraphic models or models of folded structures. 3D geological models have great potential for application to a wide range of geological work, in research but also in applied projects such as mines, tunnels or reservoirs. Recent developments in terrestrial remote-sensing tools (LiDAR, photogrammetry and multispectral/hyperspectral imaging) are revolutionizing the acquisition of geomorphological and geological information. Consequently, there is great potential for improving the modeling of geological objects, as well as of failure mechanisms and stability conditions, by integrating detailed remotely acquired data. To improve our ability to predict future rockfalls, it is fundamental to understand the current evolution of rock-wall stability. Defining the areas theoretically most prone to rockfalls can be very useful for simulating block propagation trajectories and producing hazard maps, which form the basis of land-use planning in mountain regions. The most important questions to resolve in order to estimate rockfall hazard are: Where are the most probable sources of future rockfalls and rockslides located? How frequently will these events occur? I therefore characterized the fracture networks in the field and with LiDAR point clouds. I then developed a model to compute failure mechanisms directly on the point clouds in order to evaluate the susceptibility to rockfall initiation at the cliff scale.
The most probable rockfall source areas in the granitic walls of Yosemite Valley and the Mont-Blanc massif were computed and then compared with event inventories to verify the methods. Limit-equilibrium models were applied to several case studies to evaluate the effects of different parameters on wall stability. The impact of the degradation of rock bridges on the stability of large rock compartments in the west face of the Petit Dru was evaluated using finite-element modeling. In particular, I analyzed the large 2005 rockfall (265,000 m3), which removed the entire southwest pillar. In the model I integrated observations of joint conditions, the characteristics of the fracture network and the results of geomechanical tests on the intact rock. These analyses improved the estimation of the parameters that influence the stability of rock compartments and served to define probable volumes for future rockfalls. Point clouds obtained with terrestrial laser scanning were also successfully used to produce 3D geological maps, using the intensity of the reflected signal. Another technique for obtaining geological maps of vertical areas consists of combining a LiDAR mesh with a 2D geological map. At El Capitan (Yosemite Valley) we were able to georeference a vertical map of the main plutonic rocks, which I then used to study the reasons for preferential erosion of certain areas of the wall. Further efforts to quantify the erosion rate were made at Monte Generoso (Ticino, Switzerland), where I tried to improve the estimation of long-term erosion by taking into account the volumes of unstable rock compartments.
The integration of these results on rock-mass fracturing and composition with existing methods improves the assessment of rockfall and rockslide hazard and increases the possibilities for interpreting the evolution of rock walls.


Resumo:

The present work is part of a larger project whose purpose is to qualify a Flash memory for automotive applications using a standardized test and measurement flow. High memory reliability and data retention are the most critical parameters in this application. The current work covers the functional tests and the data-retention test. The purpose of the data-retention test is to obtain the data-retention parameters of the designed memory, i.e. the maximum time of information storage under specified conditions without critical charge leakage. For this purpose, the charge leakage from the cells, which results in a decrease of the cells' threshold voltage, was measured after long-term high-temperature treatment at several temperatures. The amount of charge lost at each temperature was used to calculate the Arrhenius constant and the activation energy of the discharge process. With these data, the discharge of the cells at different temperatures over long periods can be predicted, and the probability of data loss after years of operation can be calculated. The memory chips investigated in this work were 0.035 μm CMOS Flash memory test chips, designed for further use in systems-on-chip for automotive electronics.
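The extrapolation step described above follows the standard Arrhenius model: if the discharge rate varies as A*exp(-Ea/(kB*T)), then ln(rate) is linear in 1/T, so bake measurements at a few temperatures yield the prefactor A and activation energy Ea by linear regression, after which the rate at the operating temperature can be predicted. A minimal sketch of this fit; the bake temperatures and rates are illustrative placeholders, not the measured values from this work:

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def fit_arrhenius(temps_k, rates):
    """Least-squares fit of ln(rate) = ln(A) - Ea/(K_B*T).
    Returns the prefactor A and the activation energy Ea in eV."""
    xs = [1.0 / t for t in temps_k]
    ys = [math.log(r) for r in rates]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return math.exp(intercept), -slope * K_B

def rate_at(a, ea_ev, temp_k):
    """Arrhenius discharge rate at a given absolute temperature."""
    return a * math.exp(-ea_ev / (K_B * temp_k))

# Hypothetical bake data: threshold-voltage loss rate (mV/hour) measured
# at 125, 150 and 175 degrees C (illustrative numbers, ~1 eV activation)
temps = [398.0, 423.0, 448.0]
rates = [0.02, 0.11, 0.52]
a, ea = fit_arrhenius(temps, rates)
# Extrapolate the discharge rate down to an 85 degree C operating point
print(ea, rate_at(a, ea, 358.0))
```

Inverting the fitted rate against the maximum tolerable threshold-voltage shift is what turns these bake measurements into a retention-time prediction at operating temperature.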