912 results for Data distribution


Relevance: 30.00%

Publisher:

Abstract:

Background: HBV genotype F is primarily found in indigenous populations from South America and is classified into four subgenotypes (F1 to F4). Subgenotype F2a is the most common in Brazil among genotype F cases. The aim of this study was to characterize HBV genotype F2a circulating in 16 patients from São Paulo, Brazil. Samples were collected between 2006 and 2012 and sent to Hospital Israelita Albert Einstein. A fragment of 1306 bp partially comprising the HBsAg and DNA polymerase coding regions was amplified and sequenced. Viral sequences were genotyped by phylogenetic analysis using reference sequences from GenBank (n=198), including 80 classified as subgenotype F2a. Bayesian Markov chain Monte Carlo simulation implemented in BEAST v.1.5.4, with the GTR+G+I nucleotide substitution model, was applied to obtain the best possible estimates. Findings: Three groups of subgenotype F2a sequences were identified: 1) 10 sequences from São Paulo state; 2) 3 sequences from Rio de Janeiro state and one from São Paulo state; 3) 8 sequences from the West Amazon Basin. Conclusions: These results show for the first time the distribution of subgenotype F2a in Brazil. Studying the spread and dynamics of subgenotype F2a in Brazil requires a larger number of samples from different regions, as the subgenotype is found in almost all Brazilian populations studied so far. We cannot infer with certainty the origin of these different groups due to the lack of available sequences. Nevertheless, our data suggest that the common origin of these groups probably dates back a long time.

Relevance: 30.00%

Publisher:

Abstract:

A thorough search for large-scale anisotropies in the distribution of arrival directions of cosmic rays detected above 10^18 eV at the Pierre Auger Observatory is reported. For the first time, these large-scale anisotropy searches are performed as a function of both right ascension and declination and expressed in terms of dipole and quadrupole moments. Within the systematic uncertainties, no significant deviation from isotropy is revealed. Upper limits on dipole and quadrupole amplitudes are derived under the hypothesis that any cosmic-ray anisotropy is dominated by such moments in this energy range. These upper limits constrain the production of cosmic rays above 10^18 eV, since they allow us to challenge an origin from stationary galactic sources densely distributed in the galactic disk and emitting predominantly light particles in all directions.
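
As an illustration of the kind of analysis involved, a minimal first-harmonic (Rayleigh) analysis in right ascension, a standard building block of such dipole searches, might look as follows; the event sample below is synthetic and purely illustrative.

    import numpy as np

    def first_harmonic(alpha):
        """Rayleigh analysis of right ascensions alpha (radians).

        Returns the first-harmonic amplitude r, its phase, and the
        chance probability of exceeding r under isotropy.
        """
        n = len(alpha)
        a = 2.0 / n * np.cos(alpha).sum()
        b = 2.0 / n * np.sin(alpha).sum()
        r = np.hypot(a, b)                  # first-harmonic amplitude
        phase = np.arctan2(b, a)            # phase of the modulation
        p_iso = np.exp(-n * r ** 2 / 4.0)   # Rayleigh chance probability
        return r, phase, p_iso

    # Synthetic isotropic sample: r should be small and p_iso of order 1
    rng = np.random.default_rng(0)
    print(first_harmonic(rng.uniform(0.0, 2.0 * np.pi, 5000)))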

Relevance: 30.00%

Publisher:

Abstract:

Persistent organic pollutants (POPs) are a group of chemicals that are toxic, undergo long-range transport, and accumulate in biota. Due to their persistence, their distribution and recirculation in the environment often continue for a long period of time. They thereby appear virtually everywhere within the biosphere and pose a toxic stress to living organisms. In this thesis, attempts are made to contribute to the understanding of factors that influence the distribution of POPs, with a focus on processes in the marine environment. Bioavailability and spatial distribution are central topics for the environmental risk management of POPs. In order to study these topics, various field studies were undertaken. To determine the bioavailable fraction of polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs), polychlorinated naphthalenes (PCNs), and polychlorinated biphenyls (PCBs), the aqueous dissolved phase was sampled and analysed. In the same samples, we also measured how much of these POPs was associated with suspended particles. Different models predicting the phase distribution of these POPs were then evaluated. Important water characteristics influencing the solid-water phase distribution of POPs were found to be particulate organic matter (POM), particulate soot (PSC), and dissolved organic matter (DOM). The bioavailable dissolved POP phase in the water was lower when these sorbing phases were present. Furthermore, sediments were sampled and the spatial distribution of the POPs was examined. The results showed that the concentrations of PCDD/Fs and PCNs were better described using the PSC content than the POM content of the sediment. In parallel with these field studies, we synthesized knowledge of the processes affecting the distribution of POPs in a multimedia mass balance model. This model predicted concentrations of PCDD/Fs throughout our study area, the Grenlandsfjords in Norway, within factors of ten. This makes the model capable of evaluating the effect of suitable remedial actions to decrease the exposure of biota to these POPs in the Grenlandsfjords, which was the aim of the project. Finally, to evaluate the influence of eutrophication on their marine occurrence, PCB data from the US Musselwatch and Benthic Surveillance Programs are examined in this thesis. The dry-weight-based concentrations of PCBs in bivalves were found to correlate positively with the organic matter content of nearby sediments, whereas organic-matter-based concentrations of PCBs in sediments were negatively correlated with the organic matter content of the sediment.
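
The solid-water phase-distribution models evaluated above reduce, in their simplest form, to a competition between the freely dissolved phase and the sorbing phases. A minimal sketch of how the bioavailable dissolved fraction shrinks as POM, PSC and DOM are added; the sorption coefficients and sorbent concentrations are hypothetical, not values from the thesis.

    def fraction_dissolved(k_pom, c_pom, k_psc, c_psc, k_dom, c_dom):
        """Freely dissolved fraction of a POP in water containing
        particulate organic matter (POM), particulate soot (PSC) and
        dissolved organic matter (DOM).

        k_* are sorption coefficients (L/kg), c_* are sorbent
        concentrations (kg/L); all values used here are illustrative.
        """
        return 1.0 / (1.0 + k_pom * c_pom + k_psc * c_psc + k_dom * c_dom)

    # Example: a hypothetical PCB congener in particle- and DOM-rich water
    f_diss = fraction_dissolved(k_pom=10**6.0, c_pom=0.5e-6,
                                k_psc=10**6.8, c_psc=0.1e-6,
                                k_dom=10**5.5, c_dom=2.0e-6)
    print(f_diss)  # ~0.36: sorbing phases clearly lower the dissolved phase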

Relevance: 30.00%

Publisher:

Abstract:

The barred spiral galaxy M83 (NGC 5236) has been observed in the 12CO J=1–0 and J=2–1 millimetre lines with the Swedish-ESO Submillimetre Telescope (SEST). The CO maps are about 10′×10′ in size and cover the entire optical disk. The CO emission is strongly peaked toward the nucleus. The molecular spiral arms are clearly resolved and can be traced for about 360º. The total molecular gas mass is comparable to the total H I mass, but H2 dominates in the optical disk. Iso-velocity maps show the signature of an inclined, rotating disk, but also the effects of streaming motions along the spiral arms. The dynamical mass is determined and compared to the gas mass. The pattern speed is determined from the residual velocity pattern, and the locations of various resonances are discussed. The molecular gas velocity dispersion is determined, and a trend of decreasing dispersion with increasing galactocentric radius is found. A total gas (H2 + H I + He) mass surface density map is presented and compared to the critical density for star formation of an isothermal gaseous disk. The star formation rate (SFR) in the disk is estimated using data from various star formation tracers. The different SFR estimates agree well when corrections for extinction, based on the total gas mass map, are made. The radial SFR distribution shows features that can be associated with kinematic resonances. We also find an increased star formation efficiency in the spiral arms. Different Schmidt laws are fitted to the data. The star formation properties of the nuclear region, based on high angular resolution HST data, are also discussed.
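
The comparison with the critical density of an isothermal gaseous disk rests on the relation Sigma_crit = alpha * kappa * sigma / (pi * G). A minimal sketch under that assumption, with illustrative numbers rather than the measured M83 values:

    import numpy as np

    G = 4.301e-3  # gravitational constant in pc Msun^-1 (km/s)^2

    def sigma_crit(kappa, sigma_v, alpha=0.67):
        """Critical gas surface density (Msun/pc^2) of an isothermal disk,
        Sigma_crit = alpha * kappa * sigma_v / (pi * G); kappa is the
        epicyclic frequency in (km/s)/pc, sigma_v the gas velocity
        dispersion in km/s, and alpha an empirical constant near unity.
        """
        return alpha * kappa * sigma_v / (np.pi * G)

    # Illustrative numbers: flat rotation curve, V = 180 km/s at R = 4 kpc
    kappa = np.sqrt(2.0) * 180.0 / 4000.0    # epicyclic frequency, (km/s)/pc
    print(sigma_crit(kappa, sigma_v=6.0))    # ~20 Msun/pc^2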

Relevance: 30.00%

Publisher:

Abstract:

Here we present monthly, basin-wide maps of the partial pressure of carbon dioxide (pCO2) for the North Atlantic on a latitude by longitude grid for the years 2004 through 2006 inclusive. The maps have been computed using a neural network technique which reconstructs the non-linear relationships between three biogeochemical parameters and marine pCO2. A self-organizing map (SOM) neural network has been trained using 389 000 triplets of SeaWiFS/MODIS chlorophyll-a concentration, NCEP/NCAR reanalysis sea surface temperature, and FOAM mixed layer depth. The trained SOM was labelled with 137 000 underway pCO2 measurements collected in situ during 2004, 2005 and 2006 in the North Atlantic, spanning the range of 208 to 437 µatm. The root mean square error (RMSE) of the neural network fit to the data is 11.6 µatm, which equals just above 3 per cent of an average pCO2 value in the in situ dataset. The seasonal pCO2 cycle as well as estimates of the interannual variability in the major biogeochemical provinces are presented and discussed. High resolution combined with basin-wide coverage makes the maps a useful tool for several applications, such as the monitoring of basin-wide air-sea CO2 fluxes or the improvement of seasonal and interannual marine CO2 cycles in future model predictions. The method itself is a valuable alternative to traditional statistical modelling techniques used in the geosciences.
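
A minimal sketch of the SOM idea (competitive training with a shrinking neighbourhood, followed by labelling of the map nodes); the map size, learning schedule and synthetic data below are illustrative, not those used in the study.

    import numpy as np

    rng = np.random.default_rng(42)
    X = rng.normal(size=(5000, 3))   # normalized (chl-a, SST, MLD) triplets
    side = 15
    coords = np.array([(i, j) for i in range(side) for j in range(side)], float)
    W = rng.normal(size=(side * side, 3))          # map weight vectors

    n_iter = 20000
    for t in range(n_iter):
        x = X[rng.integers(len(X))]
        lr = 0.5 * np.exp(-t / n_iter)             # decaying learning rate
        radius = (side / 2.0) * np.exp(-t / n_iter)  # shrinking neighbourhood
        bmu = np.argmin(((W - x) ** 2).sum(axis=1))  # best-matching unit
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
        h = np.exp(-d2 / (2.0 * radius ** 2))      # Gaussian neighbourhood
        W += lr * h[:, None] * (x - W)

    # Labelling step (sketched): each node is assigned the mean of the
    # in-situ pCO2 values whose (chl-a, SST, MLD) triplets map to it;
    # unlabelled nodes can then be filled from neighbouring nodes.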

Relevance: 30.00%

Publisher:

Abstract:

The Cape Verde Frontal Zone separates North and South Atlantic Central Waters in the eastern North Atlantic Subtropical Gyre. CTD-O2 and shipboard ADCP data from three hydrographic sections carried out in September 2003 are used to study the structure of the front. The results show the relation between spatial variations of water masses and currents, demonstrating the importance of advection in the distribution of water masses. Diapycnal diffusivities due to double diffusion and vertical shear instabilities are also estimated, and competition between the two processes through the water column is shown. Depth-averaged diffusivities suggest that salt fingering dominates diapycnal mixing, except in areas of the purest South Atlantic Central Water, where double-diffusive processes are weak and, consequently, shear of the flow is the main mixing process. The results also show that strong mixing induced by vertical shear is associated with a large intrusion found near the front.
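
The competition between salt fingering and shear-driven mixing is conventionally diagnosed through the density ratio. A minimal sketch, with illustrative values for the expansion coefficients and gradients:

    def density_ratio(dTdz, dSdz, alpha=2.0e-4, beta=7.6e-4):
        """Density ratio R_rho = (alpha * dT/dz) / (beta * dS/dz).

        alpha (1/K) and beta (kg/g) are illustrative mid-depth values of
        the thermal expansion and haline contraction coefficients. Salt
        fingering is favoured for 1 < R_rho < ~2 when both temperature
        and salinity decrease with depth.
        """
        return (alpha * dTdz) / (beta * dSdz)

    # Example: warm, salty Central Water overlying colder, fresher water
    print(density_ratio(dTdz=0.02, dSdz=0.004))  # ~1.3 -> strong salt fingering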

Relevance: 30.00%

Publisher:

Abstract:

In the past decade, the advent of efficient genome sequencing tools and high-throughput experimental biotechnology has led to enormous progress in the life sciences. Among the most important innovations is microarray technology, which allows the expression of thousands of genes to be quantified simultaneously by measuring the hybridization of a tissue of interest to probes on a small glass or plastic slide. The characteristics of these data include a fair amount of random noise, a predictor dimension in the thousands, and a sample size in the dozens. One of the most exciting areas to which microarray technology has been applied is the challenge of deciphering complex diseases such as cancer. In these studies, samples are taken from two or more groups of individuals with heterogeneous phenotypes, pathologies, or clinical outcomes. These samples are hybridized to microarrays in an effort to find a small number of genes which are strongly correlated with the group of individuals. Even though the methods to analyse these data are today well developed and close to reaching a standard organization (through the effort of international projects such as the Microarray Gene Expression Data (MGED) Society [1]), it is not infrequent to encounter a clinician's question for which no compelling statistical method is available. The contribution of this dissertation to deciphering disease is the development of new approaches to open problems posed by clinicians in specific experimental designs. Chapter 1, starting from a necessary biological introduction, reviews microarray technologies and all the important steps of an experiment, from the production of the array to quality controls and the preprocessing steps used in the data analysis in the rest of the dissertation. Chapter 2 provides a critical review of standard analysis methods, stressing the problems that remain open. Chapter 3 introduces a method to address the issue of unbalanced design in microarray experiments. In microarray experiments, the experimental design is a crucial starting point for obtaining reasonable results. In a two-class problem, an equal or similar number of samples should be collected for the two classes. However, in some cases, e.g. rare pathologies, the approach to be taken is less evident. We propose to address this issue by applying a modified version of SAM [2]. MultiSAM consists of a reiterated application of SAM, comparing the less populated class (LPC) with 1,000 random samplings of the same size from the more populated class (MPC). A list of the differentially expressed genes is generated for each SAM application. After 1,000 reiterations, each probe is given a "score" ranging from 0 to 1,000 based on its recurrence as differentially expressed in the 1,000 lists. The performance of MultiSAM was compared to that of SAM and LIMMA [3] over two data sets simulated via beta and exponential distributions. The results of all three algorithms over low-noise data sets seem acceptable. However, on a real unbalanced two-channel data set regarding Chronic Lymphocytic Leukemia, LIMMA finds no significant probe and SAM finds 23 significantly changed probes but cannot separate the two classes, while MultiSAM finds 122 probes with score >300 and separates the data into two clusters by hierarchical clustering.
We also report extra-assay validation in terms of differentially expressed genes. Although standard algorithms perform well over low-noise simulated data sets, MultiSAM seems to be the only one able to reveal subtle differences in gene expression profiles on real unbalanced data. Chapter 4 describes a method to address the evaluation of similarities in a three-class problem by means of the Relevance Vector Machine [4]. Indeed, looking at microarray data in a prognostic and diagnostic clinical framework, differences are not all that matters: in some cases similarities can give useful, and sometimes even more important, information. Given three classes, the goal could be to establish, with a certain level of confidence, whether the third is similar to the first or to the second. In this work we show that the Relevance Vector Machine (RVM) [4] could be a possible solution to the limitations of standard supervised classification. RVM offers many advantages compared, for example, with its well-known precursor, the Support Vector Machine (SVM [3]). Among these advantages, the estimate of the posterior probability of class membership represents a key feature for addressing the similarity issue. This is a highly important, but often overlooked, capability of any practical pattern recognition system. We focused on a three-class tumor-grade problem, with 67 samples of grade 1 (G1), 54 samples of grade 3 (G3), and 100 samples of grade 2 (G2). The goal is to find a model able to separate G1 from G3, and then to evaluate the third class G2 as a test set, obtaining for each G2 sample the probability of membership in class G1 or class G3. The analysis showed that breast cancer samples of grade 2 have a molecular profile more similar to breast cancer samples of grade 1. This result had been conjectured in the literature, but no measure of significance was given before.
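
A minimal sketch of the MultiSAM resampling scheme described above, with a plain two-sample t-test standing in for SAM purely for illustration:

    import numpy as np
    from scipy import stats

    def multisam_scores(lpc, mpc, n_iter=1000, alpha=0.05, seed=0):
        """Recurrence score per probe, in the spirit of MultiSAM.

        lpc: (n_probes, n_lpc) expression matrix of the less populated
        class; mpc: (n_probes, n_mpc) matrix of the more populated one.
        At each iteration the LPC is compared with a random subsample of
        the MPC of the same size, and probes called differential add one
        to their score (0..n_iter).
        """
        rng = np.random.default_rng(seed)
        scores = np.zeros(lpc.shape[0], dtype=int)
        for _ in range(n_iter):
            idx = rng.choice(mpc.shape[1], size=lpc.shape[1], replace=False)
            _, p = stats.ttest_ind(lpc, mpc[:, idx], axis=1)
            scores += (p < alpha).astype(int)
        return scores

    # Toy use: 200 probes, 5 LPC vs 50 MPC samples
    rng = np.random.default_rng(1)
    scores = multisam_scores(rng.normal(size=(200, 5)),
                             rng.normal(size=(200, 50)), n_iter=100)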

Relevance: 30.00%

Publisher:

Abstract:

Owing to its particular position and complex geological history, the Northern Apennines have been considered a natural laboratory for several kinds of investigation. However, it is complicated to join all the knowledge about the Northern Apennines into a unique picture that explains the structural and geological setting that produced them. The main goal of this thesis is to put together all the information on the deformation of this region, in the crust and at depth, and to describe a geodynamical model that accounts for it. To do so, we have analyzed the pattern of deformation in the crust and in the mantle. In both cases the deformation has been studied using information recovered from earthquakes, although with different techniques. The shallower deformation has been studied using seismic moment tensors. For this purpose we used the method described in Arvidsson and Ekstrom (1998), which, by allowing the use of surface waves in the inversion (and not only body waves, as in the Centroid Moment Tensor method of Dziewonski et al., 1981), makes it possible to determine seismic source parameters for earthquakes with magnitudes as small as 4.0. We applied this tool in the Northern Apennines and, through this activity, built up the Italian CMT dataset (Pondrelli et al., 2006) and mapped the seismic deformation using the Kostrov (1974) method on a regular grid of 0.25-degree cells. We obtained a map of lateral variations of the pattern of seismic deformation in different depth layers, taking into account the fact that shallow earthquakes (within 15 km depth) occur everywhere in the region, while most events with deeper hypocenters (15-40 km) occur only in the outer part of the belt, on the Adriatic side. For the analysis of the deep deformation, i.e. that occurring in the mantle, we used the anisotropy information characterizing the structure below the Northern Apennines. Anisotropy is an Earth property that in the crust is due to the presence of aligned fluid-filled cracks or of alternating isotropic layers with different elastic properties, while in the mantle its most important cause is the lattice preferred orientation (LPO) of mantle minerals such as olivine. Olivine is highly anisotropic and tends to align its fast crystallographic axes (a-axes) parallel to the asthenospheric flow in response to the finite strain induced by geodynamic processes. The seismic anisotropy pattern of a region is measured using the shear wave splitting phenomenon (the seismological analogue of optical birefringence). Here, we apply the Sileny and Plomerova (1996) approach to teleseismic earthquakes recorded at stations located in the study region. The results are analyzed on the basis of their lateral and vertical variations to better define the Earth structure beneath the Northern Apennines. We find different anisotropic domains, a Tuscany and an Adria one, with a pattern of seismic anisotropy that varies laterally in a way similar to the seismic deformation. Moreover, beneath the Adriatic region the distribution of the splitting parameters is so complex as to require a dedicated analysis. We therefore applied to our data the code of Menke and Levin (2003), which allows searching for models of structures with multilayer anisotropy. We found that the structure beneath the Po Plain is probably even more complicated than expected.
On the basis of the results obtained for this thesis, together with those from previous works, we suggest that the slab roll-back which created the Apennines and opened the Tyrrhenian Sea evolved at the northern boundary of the Northern Apennines in a different way from its southern part. In particular, the trench retreat developed primarily south of our study region, with an eastward roll-back. In the northern portion of the orogen, after a first stage during which the retreat was perpendicular to the trench, it became oblique with respect to the structure.
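
The Kostrov (1974) summation used for the crustal deformation maps averages the moment tensors released in each cell. A minimal sketch, with illustrative values for cell size, rigidity and catalogue duration:

    import numpy as np

    def kostrov_strain_rate(moment_tensors, mu, volume, dt):
        """Average seismic strain rate in a cell (Kostrov, 1974):

            eps_dot_ij = (1 / (2 * mu * V * dt)) * sum_k M_ij(k)

        moment_tensors: array (n, 3, 3) of moment tensors (N m);
        mu: shear modulus (Pa); volume: cell volume (m^3);
        dt: catalogue duration (s). All numbers below are illustrative.
        """
        return moment_tensors.sum(axis=0) / (2.0 * mu * volume * dt)

    # Example: one Mw 5 strike-slip event in a ~0.25-degree cell, 30-year catalogue
    M0 = 10 ** (1.5 * 5.0 + 9.1)                 # scalar moment, N m
    M = M0 * np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], float)
    cell = (28e3) ** 2 * 15e3                    # ~28 km x 28 km x 15 km, m^3
    print(kostrov_strain_rate(M[None], mu=3.0e10, volume=cell, dt=30 * 3.156e7))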

Relevance: 30.00%

Publisher:

Abstract:

Subduction zones are the most favourable places for generating tsunamigenic earthquakes, since friction between the oceanic and continental plates causes strong seismicity there. The topics and methodologies discussed in this thesis are focused on understanding the rupture process of the seismic sources of great earthquakes that generate tsunamis. Tsunamigenesis is controlled by several kinematic characteristics of the parent earthquake, such as the focal mechanism, the depth of the rupture and the slip distribution over the fault area, as well as by the mechanical properties of the source zone. Each of these factors plays a fundamental role in tsunami generation. Therefore, inferring the source parameters of tsunamigenic earthquakes is crucial for understanding the generation of the consequent tsunami, and hence for mitigating the risk along the coasts. The typical way to proceed when gathering information on the source process is to invert the available geophysical data. Tsunami data, moreover, are useful for constraining the portion of the fault area that extends offshore, generally close to the trench, which other kinds of data are unable to constrain. In this thesis I discuss the rupture process of some recent tsunamigenic events, as inferred by means of an inverse method. I first present the 2003 Tokachi-Oki (Japan) earthquake (Mw 8.1). In this study the slip distribution on the fault was inferred by inverting tsunami waveform, GPS, and bottom-pressure data. The joint inversion of tsunami and geodetic data constrained the slip distribution on the fault much better than the separate inversions of the single datasets. We then studied the earthquake that occurred in 2007 in southern Sumatra (Mw 8.4). By inverting several tsunami waveforms, both in the near and in the far field, we determined the slip distribution and the mean rupture velocity along the causative fault. Since the largest patch of slip was concentrated on the deepest part of the fault, this is the likely reason for the small tsunami waves that followed the earthquake, pointing out the crucial role that the depth of the rupture plays in controlling tsunamigenesis. Finally, we present a new rupture model for the great 2004 Sumatra earthquake (Mw 9.2). We performed a joint inversion of tsunami waveform, GPS and satellite altimetry data to infer the slip distribution, the slip direction, and the rupture velocity on the fault. Furthermore, in this work we present a novel method to estimate, in a self-consistent way, the average rigidity of the source zone. Estimating the source-zone rigidity is important since it may play a significant role in tsunami generation; in particular, for slow earthquakes a low rigidity value is sometimes necessary to explain how an earthquake with a relatively low seismic moment may generate significant tsunamis. This latter point may be relevant for explaining the mechanics of tsunami earthquakes, one of the open issues in present-day seismology. The investigation of these tsunamigenic earthquakes has underlined the importance of jointly inverting different geophysical data to determine the rupture characteristics.
The results shown here have important implications for the implementation of new tsunami warning systems (particularly in the near field), for the improvement of the current ones, and for the planning of inundation maps for tsunami hazard assessment along coastal areas.
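
When the fault geometry and rupture timing are fixed, the slip inversion at the core of such joint studies becomes linear in the slip values. A minimal damped least-squares sketch under that simplification (the actual studies use more elaborate, partly non-linear schemes); all inputs are placeholders:

    import numpy as np

    def invert_slip(G, d, damping=0.1):
        """Damped linear slip inversion sketch.

        G: (n_data, n_patches) Green's function matrix linking unit slip
        on each fault patch to the observations (tsunami waveform
        samples, GPS offsets, ...); d: stacked data vector. A Tikhonov
        damping term regularizes the least-squares solution.
        """
        n = G.shape[1]
        A = np.vstack([G, damping * np.eye(n)])
        b = np.concatenate([d, np.zeros(n)])
        slip, *_ = np.linalg.lstsq(A, b, rcond=None)
        return slip

    # Placeholder sizes: 500 stacked data samples, 40 fault patches
    rng = np.random.default_rng(0)
    G = rng.normal(size=(500, 40))
    print(invert_slip(G, G @ np.ones(40)))  # recovers ~uniform unit slip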

Relevance: 30.00%

Publisher:

Abstract:

We present a non-linear technique to invert strong-motion records with the aim of obtaining the final slip and rupture velocity distributions on the fault plane. In this thesis, the ground motion simulation is obtained by evaluating the representation integral in the frequency domain. The Green's tractions are computed using the discrete wave-number integration technique, which provides the full wave-field in a 1D layered propagation medium. The representation integral is computed through a finite element technique based on a Delaunay triangulation of the fault plane. The rupture velocity is defined on a coarser regular grid, and rupture times are computed by integration of the eikonal equation. For the inversion, the slip distribution is parameterized by 2D overlapping Gaussian functions, which can easily relate the spectrum of the possible solutions to the minimum resolvable wavelength set by the source-station distribution and the data processing. The inverse problem is solved by a two-step procedure aimed at separating the computation of the rupture velocity from the evaluation of the slip distribution, the latter being a linear problem when the rupture velocity is fixed. The non-linear step is solved by optimization of an L2 misfit function between synthetic and real seismograms, and the solution is searched for using the Neighbourhood Algorithm; the conjugate gradient method is used to solve the linear step. The developed methodology has been applied to the M7.2 Iwate-Miyagi Nairiku, Japan, earthquake. The estimated seismic moment is 2.63×10^26 dyne·cm, corresponding to a moment magnitude Mw 6.9, while the mean rupture velocity is 2.0 km/s. A large slip patch extends from the hypocenter to the southern shallow part of the fault plane. A second, relatively large slip patch is found in the northern shallow part. Finally, we give a quantitative estimation of the errors associated with the parameters.
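
As a consistency check on the quoted source parameters, the standard Hanks & Kanamori (1979) relation recovers Mw 6.9 from the stated moment:

    import numpy as np

    def moment_magnitude(m0_dyne_cm):
        """Mw from the scalar seismic moment in dyne*cm
        (Hanks & Kanamori, 1979): Mw = (2/3) * log10(M0) - 10.7."""
        return (2.0 / 3.0) * np.log10(m0_dyne_cm) - 10.7

    print(moment_magnitude(2.63e26))  # ~6.9, consistent with the text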

Relevance: 30.00%

Publisher:

Abstract:

The term "Brain Imaging" identifies a set of techniques to analyze the structure and/or functional behavior of the brain in normal and/or pathological situations. These techniques are widely used in the study of brain activity. In addition to clinical usage, the analysis of brain activity is gaining popularity in other recent fields, e.g. Brain-Computer Interfaces (BCI) and the study of cognitive processes. In these contexts, classical solutions (e.g. fMRI, PET-CT) can be unfeasible due to their low temporal resolution, high cost and limited portability. For these reasons, alternative low-cost techniques are an object of research, typically based on simple recording hardware and on an intensive data elaboration process. Typical examples are ElectroEncephaloGraphy (EEG) and Electrical Impedance Tomography (EIT), where the electric potential at the patient's scalp is recorded by high-impedance electrodes. In EEG, potentials are directly generated by neuronal activity, while in EIT they arise from the injection of small currents at the scalp. To retrieve meaningful insights on brain activity from the measurements, EIT and EEG rely on detailed knowledge of the underlying electrical properties of the body, obtained from numerical models of the electric field distribution therein. The inhomogeneous and anisotropic electric properties of human tissues make accurate modeling and simulation very challenging, leading to a tradeoff between physical accuracy and technical feasibility which currently severely limits the capabilities of these techniques. Moreover, the elaboration of the recorded data requires computationally intensive regularization techniques, which hampers applications with strict temporal constraints (such as BCI). This work focuses on the parallel implementation of a workflow for EEG and EIT data processing. The resulting software is accelerated using multi-core GPUs, in order to provide solutions in reasonable times and address the requirements of real-time BCI systems, without over-simplifying the complexity and accuracy of the head models.
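
A minimal sketch of the kind of regularized inverse step such pipelines rely on, here a closed-form Tikhonov solution; the lead-field matrix and data are placeholders, and real implementations map this dense algebra onto GPU matrix routines:

    import numpy as np

    def tikhonov_inverse(L, v, lam=1e-2):
        """Estimate source amplitudes s from scalp potentials v, given
        the lead-field matrix L from the head model, by solving

            min ||L s - v||^2 + lam ||s||^2,

        whose closed form is (L^T L + lam I)^-1 L^T v.
        """
        n = L.shape[1]
        return np.linalg.solve(L.T @ L + lam * np.eye(n), L.T @ v)

    # Placeholder sizes: 64 electrodes, 2000 source amplitudes
    rng = np.random.default_rng(0)
    L = rng.normal(size=(64, 2000))
    v = rng.normal(size=64)
    s = tikhonov_inverse(L, v)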

Relevance: 30.00%

Publisher:

Abstract:

In recent years, sustainable horticulture has been increasing; however, to be successful this practice needs efficient soil fertility management to maintain high productivity and fruit quality standards. For this purpose, composted organic materials from the agri-food industry and municipal solid waste have been used as a source to replace chemical fertilizers and increase soil organic matter. To better understand the influence of compost application on soil fertility and plant growth, we carried out a study comparing organic and mineral nitrogen (N) fertilization in micropropagated plants, potted trees and a commercial peach orchard, with the following aims: 1. evaluation of tree development, CO2 fixation and carbon partitioning to the different organs of two-year-old potted peach trees; 2. determination of soil N concentration and of the effect of nitrate-N on plant growth and root oxidative stress of micropropagated plants after increasing rates of N application; 3. assessment of soil chemical and biological fertility, tree growth and yield, and fruit quality in a commercial orchard. The addition of compost at a high rate was effective in increasing CO2 fixation and in promoting root growth and shoot and fruit biomass. Furthermore, organic fertilizers influenced C partitioning, favoring C accumulation in roots, wood and fruits. The higher CO2 fixation was the result of a larger tree leaf area, rather than of an increase in leaf photosynthetic efficiency, showing a stimulation of plant growth by the application of compost. High rates of compost increased total soil N concentration, but were not effective in increasing soil nitrate-N concentration; in contrast, mineral-N applications increased soil nitrate-N linearly, even at the lowest rate tested. Soil nitrate-N concentration influenced plant growth positively at low rates (60-80 mg kg-1), whereas high concentrations showed negative effects. In this trial, the decrease of root growth in response to excessive soil nitrate-N concentration was not preceded by root oxidative stress. Continuous annual applications of compost for 10 years enhanced soil organic matter content and total soil N concentration. Additionally, a high rate of compost application (10 t ha-1 year-1) enhanced microbial biomass. On the other hand, the different fertilizer managements did not modify tree yield, but influenced fruit size and the precocity index. The present data support the idea that organic fertilizers can be used successfully as a substitute for mineral fertilizers in fruit tree nutrient management, since they promote an increase of soil chemical and biological fertility, prevent excessive soil nitrate-N concentration, and promote plant growth and, potentially, C sequestration into the soil.

Relevance: 30.00%

Publisher:

Abstract:

There are different ways to do cluster analysis of categorical data in the literature, and the choice among them is strongly related to the aim of the researcher, if we do not take time and economic constraints into account. The main approaches to clustering are usually distinguished into model-based and distance-based methods: the former assume that objects belonging to the same class are similar in the sense that their observed values come from the same probability distribution, whose parameters are unknown and need to be estimated; the latter evaluate distances among objects by a defined dissimilarity measure and, based on it, allocate units to the closest group. In clustering, one may be interested in the classification of similar objects into groups, and one may be interested in finding observations that come from the same true homogeneous distribution. But do both of these aims lead to the same clustering? And how good are clustering methods designed to fulfil one of these aims in terms of the other? In order to answer, two approaches, namely a latent class model (mixture of multinomial distributions) and a partitioning-around-medoids one, are evaluated and compared by the Adjusted Rand Index, Average Silhouette Width and Pearson-Gamma indexes in a fairly wide simulation study. Simulation outcomes are plotted in two-dimensional graphs via Multidimensional Scaling; the size of the points is proportional to the number of points that overlap, and different colours are used according to cluster membership.
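
A minimal sketch of how the agreement between a model-based and a distance-based partition can be quantified with two of the indexes named above, using scikit-learn and toy labels and data:

    import numpy as np
    from sklearn.metrics import adjusted_rand_score, silhouette_score

    # Two candidate partitions of the same 8 objects (toy labels)
    model_based = [0, 0, 0, 1, 1, 1, 2, 2]
    distance_based = [0, 0, 1, 1, 1, 2, 2, 2]

    # Chance-corrected agreement between the two partitions
    print(adjusted_rand_score(model_based, distance_based))

    # Average Silhouette Width also needs the data (or dissimilarities)
    X = np.random.default_rng(1).normal(size=(8, 4))
    print(silhouette_score(X, distance_based))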

Relevance: 30.00%

Publisher:

Abstract:

This PhD thesis concerns geochemical constraints on recycling and partial melting of Archean continental crust. A natural example of such processes was found in the Iisalmi area of Central Finland. The rocks from this area are Middle to Late Archean in age and experienced metamorphism and partial melting between 2.7 and 2.63 Ga. The work is based on extensive field work and is founded on bulk rock geochemical data as well as in-situ analyses of minerals. All geochemical data were obtained at the Institute of Geosciences, University of Mainz, using X-ray fluorescence, solution ICP-MS and laser ablation ICP-MS for bulk rock geochemical analyses. Mineral analyses were accomplished by electron microprobe and laser ablation ICP-MS. Fluid inclusions were studied by microscope on a heating-freezing stage at the Geoscience Center, University of Göttingen. Part I focuses on the development of a new analytical method for bulk rock trace element determination by laser ablation ICP-MS, using homogeneous glasses fused from rock powder on an iridium strip heater. This method is applicable to mafic rock samples, whose melts have low viscosities and homogenize quickly at temperatures of ~1200°C. The highly viscous melts of felsic samples prevent melting and homogenization at comparable temperatures; fusion of felsic samples can be enabled by the addition of MgO to the rock powder and by adjusting the melting temperature and duration to the rock composition. Advantages of the fusion method are low detection limits compared to XRF analysis, avoidance of the wet-chemical processing and strong acids used in solution ICP-MS, and smaller sample volumes compared to the other methods. Part II of the thesis uses bulk rock geochemical data and results from fluid inclusion studies to discriminate the melting processes observed in different rock types. The fluid inclusion studies demonstrate a major change in fluid composition, from CO2-dominated fluids in granulites to aqueous fluids in TTG gneisses and amphibolites. Partial melts were generated in the dry, CO2-rich environment by dehydration melting reactions of amphibole, which in addition to tonalitic melts produced the anhydrous mineral assemblages of granulites (grt + cpx + pl ± amph or opx + cpx + pl + amph). Trace element modeling showed that mafic granulites are residues of 10-30 % melt extraction from amphibolitic precursor rocks. The maximum degree of melting in intermediate granulites was ~10 %, as inferred from the modal abundances of amphibole, clinopyroxene and orthopyroxene. Carbonic inclusions are absent in upper-amphibolite-facies migmatites, whereas aqueous inclusions with up to 20 wt% NaCl are abundant. This suggests that melting within TTG gneisses and amphibolites took place in the presence of an aqueous fluid phase that enabled melting at the wet solidus at temperatures of 700-750°C. The strong disruption of pre-metamorphic structures in some outcrops suggests that the maximum amount of melt in TTG gneisses was ~25 vol%. The presence of leucosomes in all rock types is taken as the principal evidence of melt formation. However, the mineralogical appearance as well as the major and trace element composition of many leucosomes imply that leucosomes seldom represent frozen in-situ melts. They are better considered as remnants of the melt channel network, i.e. pathways along which melts escaped from the system.
Part III of the thesis describes how analyses of minerals from a specific rock type (granulite) can be used to determine partition coefficients between different minerals, and between minerals and melt, suitable for lower crustal conditions. The trace element analyses by laser ablation ICP-MS show a coherent distribution among the principal mineral phases, independent of rock composition. REE contents in amphibole are about 3 times higher than REE contents in clinopyroxene from the same sample. This consistency has to be taken into account in models of lower crustal melting in which amphibole is replaced by clinopyroxene in the course of melting. A lack of equilibrium is observed between matrix clinopyroxene/amphibole and garnet porphyroblasts, which suggests late-stage growth of garnet and slow diffusion and equilibration of the REE during metamorphism. The data provide a first set of distribution coefficients of the transition metals (Sc, V, Cr, Ni) in the lower crust. In addition, analyses of ilmenite and apatite demonstrate the strong influence of accessory phases on trace element distribution: apatite contains high amounts of REE and Sr, while ilmenite incorporates about 20-30 times more Nb and Ta than amphibole. Furthermore, the trace element mineral analyses provide evidence for magmatic processes such as melt depletion, melt segregation, accumulation and fractionation, as well as metasomatism, having operated in this high-grade anatectic area.
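
The trace element modeling behind the 10-30 % melt-extraction estimate is, in its simplest form, the equilibrium (batch) melting equation. A minimal sketch with an illustrative incompatible element; the numbers are not from the thesis:

    def batch_melt(c0, D, F):
        """Equilibrium (batch) partial melting of a source with bulk
        solid/melt partition coefficient D at melt fraction F:

            C_melt    = c0 / (D + F * (1 - D))
            C_residue = D * C_melt

        Returns (melt, residue) concentrations for a source
        concentration c0; the values used below are illustrative.
        """
        c_melt = c0 / (D + F * (1.0 - D))
        return c_melt, D * c_melt

    # Example: an incompatible element (D = 0.1) after 20 % melt extraction
    print(batch_melt(c0=10.0, D=0.1, F=0.2))  # melt enriched, residue depleted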

Relevance: 30.00%

Publisher:

Abstract:

During my PhD, starting from the original formulations proposed by Bertrand et al. (2000) and Emolo & Zollo (2005), I developed inversion methods and applied them to different earthquakes. In particular, large efforts were devoted to the study of the model resolution and to the estimation of the model parameter errors. To study the kinematic characteristics of the source of the Christchurch earthquake we performed a joint inversion of strong-motion, GPS and InSAR data using a non-linear inversion method. Considering the complexity highlighted by the surface deformation data, we adopted a fault model consisting of two partially overlapping segments, with dimensions of 15×11 and 7×7 km², having different faulting styles. This two-fault model allows a better reconstruction of the complex shape of the surface deformation data. The total seismic moment resulting from the joint inversion is 3.0×10^25 dyne·cm (Mw = 6.2), with an average rupture velocity of 2.0 km/s. Errors associated with the kinematic model have been estimated at around 20-30 %. The 2009 L'Aquila sequence was characterized by an intense aftershock sequence that lasted several months. In this study we applied an inversion method that takes apparent Source Time Functions (aSTFs) as data to an Mw 4.0 aftershock of the L'Aquila sequence. The aSTFs were estimated using the deconvolution method proposed by Vallée et al. (2004). The inversion results show a heterogeneous slip distribution, characterized by two main slip patches located NW of the hypocenter, and a variable rupture velocity distribution (mean value of 2.5 km/s), showing a rupture front acceleration between the two high-slip zones. Errors of about 20 % characterize the final estimated parameters.