964 results for Data Migration Processes Modeling
Abstract:
Current scientific applications produce large amounts of data, and processing, handling and analyzing such data require large-scale computing infrastructures such as clusters and grids. Studies in this area aim at improving the performance of data-intensive applications by optimizing data accesses. To achieve this goal, distributed storage systems have considered techniques such as data replication, migration, distribution, and access parallelism. The main drawback of those studies, however, is that they do not take application behavior into account when optimizing data access. This limitation motivated this paper, which applies strategies to support the online prediction of application behavior in order to optimize data access operations on distributed systems, without requiring any information on past executions. To accomplish this goal, the approach organizes application behaviors as time series and then analyzes and classifies those series according to their properties. Given these properties, the approach selects modeling techniques to represent the series and perform predictions, which are later used to optimize data access operations. The approach was implemented and evaluated using the OptorSim simulator, sponsored by the LHC-CERN project and widely employed by the scientific community. Experiments confirm that the approach reduces application execution time by about 50 percent, especially when handling large amounts of data.
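The pipeline sketched in this abstract (organize access behavior as a time series, classify the series by its properties, then pick a matching predictor) can be illustrated with a deliberately minimal sketch. The classification heuristic, the window size and the function name below are invented for illustration; they are not the paper's algorithm:

```python
# Minimal sketch (not the paper's implementation): classify an access-pattern
# series by a simple stationarity heuristic, then pick a predictor accordingly.
import numpy as np

def predict_next(series, window=10):
    """Classify the recent window of the series and predict its next value.

    Hypothetical heuristic: compare the means of the two halves of the
    window; a large shift suggests a trend, otherwise the series is
    treated as roughly stationary.
    """
    recent = np.asarray(series[-window:], dtype=float)
    half = len(recent) // 2
    shift = abs(recent[half:].mean() - recent[:half].mean())
    if shift > recent.std():          # trending: extrapolate linearly
        t = np.arange(len(recent))
        slope, intercept = np.polyfit(t, recent, 1)
        return slope * len(recent) + intercept
    return recent.mean()              # stationary: predict the mean

print(predict_next([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]))  # linear trend -> ~11
```

A real system would of course use richer series properties (seasonality, long memory) to choose among a larger family of models, but the classify-then-model structure is the same.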
Abstract:
A new method for the analysis of scattering data from lamellar bilayer systems is presented. The method employs a form-free description of the cross-section structure of the bilayer, and the fit is performed directly on the scattering data, also introducing a structure factor when required. The cross-section structure (the electron density profile, in the case of X-ray scattering) is described by a set of Gaussian functions, and the technique is termed Gaussian deconvolution. The coefficients of the Gaussians are optimized using a constrained least-squares routine that induces smoothness of the electron density profile, and the optimization is coupled with the point-of-inflection method for determining the optimal weight of the smoothness term. With the new approach, it is possible to optimize simultaneously the form factor, the structure factor and several other parameters of the model. The applicability of the method is demonstrated in a study of a multilamellar system composed of lecithin bilayers, where the form factor and structure factor are obtained simultaneously, and the results provide new insight into this very well known system.
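The core fitting step described above can be illustrated as a linear least-squares problem. Everything in the sketch below (fixed, equal Gaussian widths, a second-difference smoothness penalty on the coefficients, the function name) is an illustrative simplification, not the authors' code; in particular, the paper chooses the smoothness weight via a point-of-inflection criterion rather than fixing it:

```python
# Hedged sketch of the idea: model a cross-section profile as a sum of
# fixed-width Gaussians and solve a smoothness-regularized linear
# least-squares problem for the coefficients.
import numpy as np

def fit_gaussian_profile(z, rho, centers, width, weight):
    """Fit rho(z) ~ sum_k c_k * exp(-(z - centers[k])^2 / (2 width^2)).

    `weight` penalizes the second differences of the coefficient vector c
    (a hypothetical stand-in for the paper's smoothness constraint).
    """
    A = np.exp(-(z[:, None] - centers[None, :])**2 / (2 * width**2))
    n = len(centers)
    D = np.diff(np.eye(n), n=2, axis=0)      # second-difference operator
    lhs = A.T @ A + weight * (D.T @ D)
    rhs = A.T @ rho
    return np.linalg.solve(lhs, rhs)

# Toy usage: recover a smooth coefficient profile from noiseless data.
z = np.linspace(-5, 5, 201)
centers = np.linspace(-4, 4, 17)
true_c = np.exp(-centers**2)
width = 0.6
rho = np.exp(-(z[:, None] - centers[None, :])**2 / (2 * width**2)) @ true_c
c = fit_gaussian_profile(z, rho, centers, width, weight=1e-8)
print(np.max(np.abs(c - true_c)))            # small reconstruction error
```

In the real method the model also includes a structure factor and the fit is done against the scattering intensity, which makes the problem nonlinear; the smoothness-regularized normal equations above are only the linear core of that procedure.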
Abstract:
Background: To understand the molecular mechanisms underlying important biological processes, a detailed description of the networks of gene products involved is required. To define and understand such molecular networks, several statistical methods have been proposed in the literature to estimate gene regulatory networks from time-series microarray data. However, several problems still need to be overcome. First, information flow needs to be inferred, in addition to the correlation between genes. Second, we usually try to identify large networks involving a large number of genes (parameters) from a smaller number of microarray experiments (samples). In this situation, which is rather frequent in bioinformatics, it is difficult to perform statistical tests with methods that model large gene-gene networks. In addition, most models rely on dimension reduction through clustering techniques; the resulting network is therefore not a gene-gene network but a module-module network. Here, we present the Sparse Vector Autoregressive (SVAR) model as a solution to these problems. Results: We have applied the SVAR model to estimate gene regulatory networks from gene expression profiles obtained in time-series microarray experiments. Through extensive simulations applying the SVAR method to artificial regulatory networks, we show that SVAR can infer true positive edges even when the number of samples is smaller than the number of genes. Moreover, it is possible to control for false positives, a significant advantage over other methods described in the literature, which are based on ranks or score functions. By applying SVAR to actual HeLa cell-cycle gene expression data, we were able to identify well-known transcription factor targets.
Conclusion: The proposed SVAR method is able to model gene regulatory networks in the frequent situation in which the number of samples is lower than the number of genes, making it possible to naturally infer partial Granger causalities without any a priori information. In addition, we present a statistical test to control the false discovery rate, which was not previously possible with other gene regulatory network models.
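The central idea (a vector autoregression whose coefficient matrix is forced to be sparse, with nonzero entries read as directed regulatory edges) can be sketched with an L1-penalized VAR(1) fit. The coordinate-descent solver, penalty value and three-gene toy network below are illustrative assumptions, not the paper's estimator or its statistical test:

```python
# Illustrative sketch (not the paper's SVAR code): fit a sparse VAR(1) model
# x[t+1] = A @ x[t] + noise with an L1 penalty, via plain coordinate descent.
# Nonzero entries of A are read as directed edges of the regulatory network.
import numpy as np

def sparse_var1(X, lam, n_iter=200):
    """X has shape (T, p): T time points, p genes. Returns the (p, p) matrix A."""
    Y, Z = X[1:], X[:-1]                      # targets and lagged predictors
    p = Z.shape[1]
    A = np.zeros((p, p))
    col_sq = (Z**2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):                    # update one predictor column at a time
            R = Y - Z @ A.T                   # current residuals
            rho = Z[:, j] @ (R + np.outer(Z[:, j], A[:, j]))
            A[:, j] = np.sign(rho) * np.maximum(np.abs(rho) - lam, 0) / col_sq[j]
    return A

# Toy network: gene 0 drives gene 1; gene 2 is unconnected noise.
rng = np.random.default_rng(0)
noise = rng.normal(size=(200, 3))
X = np.zeros((200, 3))
X[0] = noise[0]
for t in range(199):
    X[t + 1, 0] = noise[t + 1, 0]                        # white-noise driver
    X[t + 1, 1] = 0.8 * X[t, 0] + 0.1 * noise[t + 1, 1]  # edge: gene 0 -> gene 1
    X[t + 1, 2] = 0.1 * noise[t + 1, 2]                  # unconnected gene
A = sparse_var1(X, lam=30.0)
print(np.round(A, 2))   # the (1, 0) entry dominates: the true edge
```

Here T = 200 samples comfortably exceed p = 3, so this toy does not exercise the p > T regime the paper addresses; in that regime the L1 penalty is what keeps the problem well posed.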
Abstract:
Molecular modeling is growing as a research tool in Chemical Engineering studies, as a simple search of the latest publications in the field shows. Molecular investigations retrieve information on properties often accessible only through expensive and time-consuming experimental techniques, such as those involved in the study of radical-based chain reactions. In this work, different quantum chemical techniques were used to study phenol oxidation by hydroxyl radicals in Advanced Oxidation Processes used for wastewater treatment. The results obtained with a DFT-based model showed good agreement with the available experimental values, as well as qualitative insights into the mechanism of the overall reaction chain. Solvation models were also tried, but were found to be of limited use for this reaction system at the considered theoretical level without further parameterization.
Abstract:
Many ecologically important chemical transformations in the ocean are controlled by biochemical enzyme reactions in plankton. Nitrogenase regulates the transformation of N2 to ammonium in some cyanobacteria and serves as the entryway for N2 into the ocean biosphere. Nitrate reductase controls the reduction of NO3 to NO2 and hence new production in phytoplankton. The respiratory electron transfer system in all organisms links the carbon oxidation reactions of intermediary metabolism with the reduction of oxygen in respiration. Rubisco controls the fixation of CO2 into organic matter in phytoplankton and is thus the major entry point of carbon into the oceanic biosphere. In addition to these, there are the enzymes that control CO2 production, NH4 excretion and the fluxes of phosphate. Some of these enzymes have been recognized and researched by marine scientists over the last thirty years. Until recently, however, the kinetic principles of enzyme control had not been exploited to formulate accurate mathematical equations for the controlling physiological expressions. Were such expressions available, they would increase our power to predict the rates of chemical transformations in the extracellular environment of microbial populations, whether that extracellular environment is culture medium or the ocean. Here we formulate, from the principles of bisubstrate enzyme kinetics, mathematical expressions for the processes of NO3 reduction, O2 consumption, N2 fixation and total nitrogen uptake.
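The bisubstrate formulations mentioned above can be illustrated with the standard ordered sequential (ternary-complex) rate law from enzyme kinetics. The symbols below are generic textbook placeholders, not the paper's fitted parameters:

```latex
% Generic ordered sequential bisubstrate rate law for substrates A and B:
v = \frac{V_{\max}\,[A]\,[B]}
         {K_{iA} K_{B} + K_{B}[A] + K_{A}[B] + [A][B]}
% For nitrate reduction, for example, [A] and [B] could stand for the
% concentrations of nitrate and of the reductant (illustrative choice).
```

At saturating [B] this collapses to the familiar single-substrate Michaelis-Menten form in [A], which is why bisubstrate expressions of this type extend, rather than replace, the usual kinetic treatment.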
Abstract:
This work makes a theoretical-experimental contribution to the study of ester and alkane solutions. Experimental data for isobaric vapor-liquid equilibria (VLE) are presented at 101.3 kPa for binary systems of methyl ethanoate with six alkanes (from C5 to C10), together with volumes and mixing enthalpies, vE and hE.
Abstract:
This work provides a forward step in the study and comprehension of the relationships between stochastic processes and a certain class of integro-partial differential equations, which can be used to model anomalous diffusion and transport in statistical physics. In the first part, we bring the reader through the fundamental notions of probability and stochastic processes, stochastic integration, and stochastic differential equations. In particular, within the study of H-sssi processes, we focus on fractional Brownian motion (fBm) and its discrete-time increment process, fractional Gaussian noise (fGn), which provide examples of non-Markovian Gaussian processes. The fGn, together with stationary FARIMA processes, is widely used in the modeling and estimation of long memory, or long-range dependence (LRD). Time series exhibiting long-range dependence are often observed in nature, especially in physics, meteorology and climatology, but also in hydrology, geophysics, economics and many other fields. We study LRD in depth, giving many real-data examples, providing statistical analyses and introducing parametric estimation methods. We then introduce the theory of fractional integrals and derivatives, which indeed turns out to be very appropriate for studying and modeling systems with long-memory properties. After introducing the basic concepts, we provide many examples and applications. For instance, we investigate the relaxation equation with distributed-order time-fractional derivatives, which describes models characterized by a strong memory component and can be used to model relaxation in complex systems deviating from the classical exponential Debye pattern. We then focus on generalizations of the standard diffusion equation, passing through the preliminary study of the fractional forward drift equation. Such generalizations are obtained by using fractional integrals and derivatives of distributed orders.
To find a connection between the anomalous diffusion described by these equations and long-range dependence, we introduce and study the generalized grey Brownian motion (ggBm), a parametric class of H-sssi processes whose marginal probability density function evolves in time according to a partial integro-differential equation of fractional type. The ggBm is, of course, non-Markovian. Throughout the work, we remark many times that, starting from a master equation for a probability density function f(x,t), it is always possible to define an equivalence class of stochastic processes with the same marginal density function f(x,t); all these processes provide suitable stochastic models for the starting equation. In studying the ggBm, we focus on the subclass of processes with stationary increments. The ggBm is defined canonically in the so-called grey noise space; however, we are able to provide a characterization that is independent of the underlying probability space. We also point out that the generalized grey Brownian motion is a direct generalization of a Gaussian process; in particular, it generalizes both Brownian motion and fractional Brownian motion. Finally, we introduce and analyze a more general class of diffusion-type equations related to certain non-Markovian stochastic processes. We start from the forward drift equation, which is made non-local in time by the introduction of a suitably chosen memory kernel K(t). The resulting non-Markovian equation is interpreted in a natural way as the evolution equation of the marginal density function of a random time process l(t). We then consider the subordinated process Y(t)=X(l(t)), where X(t) is a Markovian diffusion. The corresponding time evolution of the marginal density function of Y(t) is governed by a non-Markovian Fokker-Planck equation involving the same memory kernel K(t).
We develop several applications and derive exact solutions. Moreover, we consider different stochastic models for the given equations, providing path simulations.
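For reference, the two standard facts invoked in this abstract can be written out explicitly; these are textbook formulas, not results of the thesis. The covariance of fractional Brownian motion and the autocovariance of its increment process (fGn) are:

```latex
% Covariance of fractional Brownian motion with Hurst index H in (0,1):
\mathbb{E}\!\left[ B_H(t)\, B_H(s) \right]
  = \tfrac{1}{2}\left( |t|^{2H} + |s|^{2H} - |t-s|^{2H} \right)

% Autocovariance of fractional Gaussian noise (unit-step increments):
\gamma(k) = \tfrac{1}{2}\left( |k+1|^{2H} - 2|k|^{2H} + |k-1|^{2H} \right)
          \sim H(2H-1)\, k^{2H-2} \quad (k \to \infty)
```

For H in (1/2, 1) the autocovariance decays so slowly that it is non-summable, which is precisely the long-range dependence property discussed above; H = 1/2 recovers ordinary Brownian motion with uncorrelated increments.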
Abstract:
This research was triggered by an emergent trend in customer behavior: customers have rapidly expanded their channel experiences and preferences beyond traditional channels (such as stores), and they expect the companies with which they do business to have a presence on all of these channels. This evidence has produced increasing interest in multichannel customer behavior and has motivated several researchers to study customers' channel-choice dynamics in multichannel environments. We study how the consumer decision process for channel choice and response to marketing communications evolves for a cohort of new customers. We assume a newly acquired customer's decisions are described by a "trial" model, but the customer's choice process evolves to a "post-trial" model as the customer learns his or her preferences and becomes familiar with the firm's marketing efforts. The trial and post-trial decision processes are each described by a different multinomial logit choice model, and the evolution from the trial to the post-trial model is governed by a customer-level geometric distribution that captures the time it takes for the customer to make the transition. We utilize data from a major retailer who sells through three channels: retail store, the Internet, and catalog. The model is estimated using Bayesian methods that allow for cross-customer heterogeneity. This allows us to obtain distinct parameter estimates for the trial and post-trial stages and to estimate the speed of the transition at the individual level. The results show, for example, that the customer decision process does indeed evolve over time. Customers differ in the duration of the trial period, and marketing has a different impact on channel choice in the trial and post-trial stages. Furthermore, we show that some people switch channel decision processes while others do not, and we find that several factors affect the probability of switching decision processes.
Insights from this study can help managers tailor their marketing communication strategy as customers gain channel-choice experience. Managers may also gain insight into the timing of direct marketing communications: they can predict the duration of the trial phase at the individual level, detecting customers with a quick, long, or even absent trial phase. They can even predict whether a customer will change his or her decision process over time, and they can influence the switching process using specific marketing tools.
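The model structure described above can be sketched in generic notation (the covariate vector x and the parameters below are placeholders, not the estimated quantities): channel choice follows a multinomial logit within each stage, and the trial-to-post-trial transition time follows a customer-level geometric distribution.

```latex
% Stage-specific multinomial logit choice probabilities for customer i:
P_i(c \mid \text{stage } s)
  = \frac{\exp(x_{ic}'\beta_s)}{\sum_{k} \exp(x_{ik}'\beta_s)},
\qquad s \in \{\text{trial}, \text{post-trial}\}

% Geometric transition: probability that customer i leaves the trial
% stage after exactly t purchase occasions, with switching rate pi_i:
P(T_i = t) = \pi_i (1 - \pi_i)^{t-1}, \qquad t = 1, 2, \dots
```

Customer-level heterogeneity in the betas and in pi_i is what the Bayesian estimation mentioned above recovers; a customer with pi_i near 1 has an essentially absent trial phase, while pi_i near 0 corresponds to a long or never-ending one.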
Abstract:
This work is a detailed study of hydrodynamic processes in a defined area, the littoral in front of the Venice Lagoon and its inlets, which are complex morphological areas of interconnection. A finite element hydrodynamic model of the Venice Lagoon and the Adriatic Sea has been developed in order to study the coastal current patterns and the exchanges at the inlets of the Venice Lagoon. This is the first work in this area that tries to model the interaction dynamics by running a model for the lagoon and the Adriatic Sea together. First, the barotropic processes near the inlets of the Venice Lagoon were studied. Data from more than ten tide gauges deployed in the Adriatic Sea were used to calibrate the simulated water levels. To validate the model results, flux data measured by ADCP probes installed inside the Lido and Malamocco inlets were used, and the exchanges through the three inlets of the Venice Lagoon were analyzed. The comparison between modelled and measured fluxes at the inlets demonstrated the model's ability to reproduce both tide- and wind-induced water exchanges between the sea and the lagoon. As a second step, the small-scale processes around the inlets that connect the Venice Lagoon with the Northern Adriatic Sea were also investigated by means of 3D simulations. Maps of vorticity were produced, considering the influence of tidal flows and wind stress in the area. A sensitivity analysis was carried out to assess the importance of advection and of the baroclinic pressure gradients in the development of the vortical processes seen along the littoral close to the inlets. Finally, a comparison with real measurements, surface velocity data from HF radar near the Venice inlets, was performed, allowing a better understanding of the processes and their seasonal dynamics. The results outline the predominance of wind and tidal forcing in the coastal area.
Wind forcing acts mainly on the mean coastal current, inducing its detachment offshore during Sirocco events and an increase of littoral currents during Bora events. The Bora action is more homogeneous over the whole coastal area, whereas the Sirocco's impact is strongest in the south, near the Chioggia inlet. Tidal forcing at the inlets is mainly barotropic. The sensitivity analysis shows that advection is the main physical process responsible for the persistent vortical structures present along the littoral between the Venice Lagoon inlets. The comparison with HF radar measurements not only permitted a validation of the model results, but also a description of different patterns in specific periods of the year. The success of the 2D and 3D simulations in reproducing the SSE inside and outside the Venice Lagoon, the tidal flow through the lagoon inlets, and the small-scale phenomena occurring along the littoral indicates that the finite element approach is the most suitable tool for the investigation of coastal processes. For the first time, as shown by the flux modeling, the physical processes that drive the interaction between the two basins were reproduced.
Abstract:
This dissertation investigates the biogeochemical processes in the vegetation layer (canopy) and the feedbacks between physiological and physical environmental processes that influence the climate and chemistry of the lower atmosphere. A particular focus is the use of theoretical approaches to quantify the vertical exchange of energy and trace gases (vertical flux), with special attention to the interactions of the processes involved. A detailed multi-layer vegetation model is derived, implemented, parameterized for the Amazonian rain forest and applied to a site in Rondonia (southwest Amazonia); it combines the coupled leaf-scale equations for the surface energy balance and CO2 assimilation with a Lagrangian description of vertical transport at the canopy scale. The derived parameterizations include the vertical leaf-area density distribution, a normalized profile of the horizontal wind speed, the light acclimation of photosynthetic capacity, and the exchange of CO2 and heat at the soil surface. Furthermore, the calculations of photosynthesis, stomatal conductance and in-canopy radiation attenuation are evaluated against field measurements. The vertical-transport submodel is evaluated in detail using 222-radon measurements. The "forward solution" and the "inverse approach" of the Lagrangian dispersion model are assessed by comparing observed and predicted concentration profiles and soil fluxes, respectively. A new approach is derived to quantify the uncertainties of the inverse approach from those of the input concentration profile.
For nighttime conditions, a modified turbulence parameterization is proposed that accounts for free convection in the lower canopy at night and, compared with earlier estimates, leads to considerably shorter residence times within the canopy. The predicted daytime and nighttime stratification of the canopy is consistent with observations in dense vegetation. The diurnal cycles of the predicted fluxes and scalar profiles of temperature, H2O, CO2, isoprene and O3 during the late wet and dry seasons at the Rondonia site agree well with observations. The results point to seasonal physiological changes, manifested as higher stomatal conductances and lower photosynthesis rates during the wet and dry seasons, respectively. The observed ozone deposition velocities during the wet season exceed those of the dry season by 150-250%. This cannot be explained by realistic physiological changes, but it can be explained by an additional cuticular uptake mechanism, possibly on wet surfaces. The comparison of observed and predicted isoprene concentrations within the canopy points to a reduced isoprene emission capacity of shade-adapted leaves and, additionally, to isoprene uptake by the soil, which would reduce the global estimate for the tropical rain forest by 30%. In a detailed sensitivity study, the VOC emission of Amazonian tree species is related to physiological and abiotic factors using a neural-network approach. The performance of individual parameter combinations in predicting VOC emission is compared with the predictions of a model that serves as a quasi-standard emission algorithm for isoprene and uses light and temperature as input parameters.
The standard algorithm and the neural network using light and temperature as inputs perform very well on individual data sets, but fail to predict observed VOC emissions when data sets from different periods (wet/dry season), leaf developmental stages, or even different species are merged. When information on the temperature history is added to the network, the unexplained variance is partly reduced. An even better performance, however, is achieved with physiological parameter combinations. This illustrates the strong coupling between VOC emission and leaf physiology.
Abstract:
The term "Brain Imaging" identifies a set of techniques for analyzing the structure and/or functional behavior of the brain in normal and/or pathological situations. These techniques are widely used in the study of brain activity. In addition to clinical usage, the analysis of brain activity is gaining popularity in other recent fields, e.g. Brain Computer Interfaces (BCI) and the study of cognitive processes. In these contexts, classical solutions (e.g. fMRI, PET-CT) can be unfeasible, due to their low temporal resolution, high cost and limited portability. For these reasons, alternative low-cost techniques are an object of research, typically based on simple recording hardware and on an intensive data elaboration process. Typical examples are ElectroEncephaloGraphy (EEG) and Electrical Impedance Tomography (EIT), where the electric potential at the patient's scalp is recorded by high-impedance electrodes. In EEG the potentials are generated directly by neuronal activity, while in EIT they result from the injection of small currents at the scalp. To retrieve meaningful insights on brain activity from the measurements, EIT and EEG rely on detailed knowledge of the underlying electrical properties of the body, obtained from numerical models of the electric field distribution therein. The inhomogeneous and anisotropic electric properties of human tissues make accurate modeling and simulation very challenging, leading to a trade-off between physical accuracy and technical feasibility that currently severely limits the capabilities of these techniques. Moreover, the elaboration of the recorded data requires computationally intensive regularization techniques, which penalizes applications with hard temporal constraints (such as BCI). This work focuses on the parallel implementation of a workflow for EEG and EIT data processing.
The resulting software is accelerated using many-core GPUs, in order to provide solutions in reasonable times and to address the requirements of real-time BCI systems, without over-simplifying the complexity and accuracy of the head models.
Abstract:
A study of maar-diatreme volcanoes has been performed by inversion of gravity and magnetic data. The geophysical inverse problem has been solved by means of the damped nonlinear least-squares method. To ensure stability and convergence of the solution of the inverse problem, a mathematical tool consisting of data weighting and model scaling has been worked out. Theoretical gravity and magnetic modeling of maar-diatreme volcanoes has been conducted in order to obtain information that can be used for a simple, rough qualitative and/or quantitative interpretation. This information also serves as a priori information to design models for the inversion and/or to assist the interpretation of the inversion results. The results of the theoretical modeling have been used to roughly estimate the heights and the dip angles of the walls of eight Eifel maar-diatremes, each taken as a whole. Inverse modeling has been conducted for the Schönfeld Maar (magnetics) and the Hausten-Morswiesen Maar (gravity and magnetics). The geometrical parameters of these maars, as well as the density and magnetic properties of the rocks filling them, have been estimated. For a reliable interpretation of the inversion results, besides the knowledge from theoretical modeling, other tools such as field transformations and spectral analysis were used to obtain complementary information. Geologic models, based on the synthesis of the respective interpretation results, are presented for the two maars mentioned above. The results give more insight into the genesis, physics and posteruptive development of maar-diatreme volcanoes. A classification of maar-diatreme volcanoes into three main types has been elaborated. Relatively high magnetic anomalies are indicative of scoria cones embedded within maar-diatremes, provided they are not caused by a strong remanent component of the magnetization.
Smaller (weaker) secondary gravity and magnetic anomalies on the background of the main anomaly of a maar-diatreme, especially in the boundary areas, are indicative of subsidence processes, which probably occurred in the late sedimentation phase of the posteruptive development. Contrary to postulates referring to kimberlite pipes, there is no general systematic relation between diameter and height, nor between the geophysical anomaly and the dimensions of maar-diatreme volcanoes. Although both maar-diatreme volcanoes and kimberlite pipes are products of phreatomagmatism, they probably formed in different thermodynamic and hydrogeological environments. In the case of kimberlite pipes, large amounts of magma and groundwater, certainly supplied by deep and large reservoirs, interacted under high pressure and temperature conditions. This led to a long-lasting phreatomagmatic process and hence to the formation of large structures. In maar-diatreme and tuff-ring-diatreme volcanoes, the phreatomagmatic process takes place through an interaction between magma from small and shallow magma chambers (probably segregated magmas) and small amounts of near-surface groundwater under low pressure and temperature conditions. This leads to shorter eruptions and consequently to structures of smaller size in comparison with kimberlite pipes. Nevertheless, the results show that the diameter-to-height ratio for 50% of the studied maar-diatremes is around 1, while the dip angle of the diatreme walls is similar to that of kimberlite pipes and lies between 70 and 85°. Note that these numerical characteristics, especially the dip angle, hold for those maars whose diatremes, as estimated by modeling, have the shape of a truncated cone. This indicates that the diatreme cannot be completely resolved by inversion.
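The damped nonlinear least-squares scheme with data weighting and model scaling mentioned in this abstract has, in its generic Levenberg-Marquardt form, the iteration below. The symbols (weighting matrix W_d, scaling matrix W_m, damping factor lambda) are generic notation, not the thesis's specific choices:

```latex
% One damped, weighted Gauss-Newton step for the model update \Delta m,
% given data d, forward operator g(m), Jacobian J and damping \lambda:
\left( J^{T} W_d\, J + \lambda\, W_m \right) \Delta m
  = J^{T} W_d \left( d - g(m) \right),
\qquad m_{k+1} = m_k + \Delta m
```

The damping term is what stabilizes the iteration when J^T W_d J is ill-conditioned: large lambda shortens the step toward a scaled gradient descent, small lambda recovers the pure Gauss-Newton update.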
Abstract:
We use data from about 700 GPS stations in the Euro-Mediterranean region to investigate the present-day behavior of the Calabrian subduction zone within the Mediterranean-scale plate kinematics and to perform local-scale studies of the strain accumulation on active structures. We focus our attention on the Messina Straits and Crati Valley faults, where GPS data show extensional velocity gradients of ∼3 mm/yr and ∼2 mm/yr, respectively. We use dislocation models and a non-linear constrained optimization algorithm to invert for fault geometric parameters and slip-rates, and we evaluate the associated uncertainties adopting a bootstrap approach. Our analysis suggests the presence of two partially locked normal faults. To investigate the impact of elastic strain contributions from other nearby active faults on the observed velocity gradient, we use a block modeling approach. Our models show that the inferred slip-rates on the two analyzed structures are strongly affected by the assumed locking width of the Calabrian subduction thrust. In order to frame the observed local deformation features within the present-day central Mediterranean kinematics, we perform a statistical analysis testing the independent motion (with respect to the African and Eurasian plates) of the Adriatic, Calabrian and Sicilian blocks. Our preferred model confirms a microplate-like behaviour for all the investigated blocks. Within these kinematic boundary conditions, we further investigate the Calabrian slab interface geometry using a combined approach of block modeling and χ²ν statistics. Almost no information is obtained using only the horizontal GPS velocities, which prove to be an insufficient dataset for a multi-parametric inversion approach. To constrain the slab geometry more strongly, we estimate the predicted vertical velocities by performing suites of forward models of elastic dislocations, varying the fault locking depth.
Comparison with the observed field suggests a maximum resolved locking depth of 25 km.
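The bootstrap uncertainty estimation mentioned in this abstract can be illustrated generically. The residual-bootstrap recipe below uses a toy linear model in place of the fault-parameter inversion; the function name, model and numbers are hypothetical stand-ins:

```python
# Generic residual-bootstrap sketch (illustrative, not the study's inversion
# code): re-estimate a model parameter on resampled residuals to quantify
# its uncertainty, as is done here for fault slip-rates.
import numpy as np

def bootstrap_slope(x, y, n_boot=1000, seed=0):
    """Fit y ~ slope*x + intercept, then bootstrap the residuals to get
    the standard deviation of the slope estimate."""
    rng = np.random.default_rng(seed)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    estimates = []
    for _ in range(n_boot):
        # resample residuals with replacement and refit
        y_star = slope * x + intercept + rng.choice(resid, size=len(resid))
        estimates.append(np.polyfit(x, y_star, 1)[0])
    return slope, np.std(estimates)

# Toy usage: noisy line with true slope 3.
x = np.linspace(0, 10, 50)
y = 3.0 * x + 1.0 + np.random.default_rng(1).normal(0, 0.5, size=50)
slope, slope_sigma = bootstrap_slope(x, y)
print(slope, slope_sigma)   # slope close to 3, with a small bootstrap sigma
```

In the study the "fit" step is the nonlinear constrained inversion for fault geometry and slip-rate, so each bootstrap replicate is a full re-inversion; the resampling logic, however, is the same.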
Abstract:
Ocean Island Basalts (OIB) provide important information on the chemical and physical characteristics of their mantle sources. However, the geochemical composition of a generated magma is significantly affected by partial melting and/or subsequent fractional crystallization processes. In addition, the isotopic composition of an ascending magma may be modified during transport through the oceanic crust. The influence of these different processes on the chemical and isotopic composition of OIB from two localities, Hawaii and Tubuai in the Pacific Ocean, is investigated here. In the first chapter, the Os-isotope variations in suites of lavas from Kohala Volcano, Hawaii, are examined to constrain the role of melt/crust interactions in the evolution of these lavas. As the sensitivity of 187Os/188Os to any radiogenic contaminant strongly depends on the Os content of the melt, Os and other PGE variations are investigated first. This study reveals that the behavior of Os and the other PGE changes during Hawaiian magma differentiation. While PGE concentrations are relatively constant in lavas with relatively primitive compositions, all PGE contents decrease strongly in the melt as it evolves below ~8% MgO. This likely reflects sulfur saturation of the Hawaiian magma and the onset of sulfide fractionation at around 8% MgO. Kohala tholeiites with more than 8% MgO and rich in Os have homogeneous 187Os/188Os values, likely representing the mantle signature of Kohala lavas. However, the Os isotopic ratios become more radiogenic with decreasing MgO and Os contents in the lavas, which reflects assimilation of local crustal material during fractional crystallization. Less than 8% assimilation of upper oceanic crust could have produced the most radiogenic Os-isotope ratios recorded in the shield lavas.
However, these small amounts of upper crust assimilation have only negligible effects on the Sr and Nd isotopic ratios and are therefore not responsible for the Sr and Nd isotopic heterogeneities observed in Kohala lavas. In the second chapter, fractional crystallization and partial melting processes are constrained using major and trace element variations in the same suites of lavas from Kohala Volcano, Hawaii. This inverse modeling approach allows the estimation of most of the trace element composition of the Hawaiian mantle source. The calculated initial trace element pattern shows a slight depletion of the concentrations from the LREE to the most incompatible elements, which indicates that the incompatible element enrichments displayed by the Hawaiian melt patterns are entirely produced by partial melting processes. The "Kea trend" signature of lavas from Kohala Volcano is also confirmed, with Kohala lavas having lower Sr/Nd and La/Th ratios than lavas from Mauna Loa Volcano. Finally, the magmatic evolution of Tubuai Island is investigated in the last chapter using trace element and Sr, Nd, Hf isotopic variations in mafic lava suites. The Sr, Nd and Hf isotopic data are homogeneous and typical of HIMU-type OIB, and they confirm the cogenetic nature of the different mafic lavas from Tubuai Island. The trace element patterns show progressive enrichment of incompatible trace elements with increasing alkali content in the lavas, which reflects a progressive decrease in the degree of partial melting towards the later volcanic events. In addition, this enrichment of incompatible trace elements is associated with a relative depletion of Rb, Ba, K, Nb, Ta and Ti in the lavas, which requires the presence of small amounts of residual phlogopite and of a Ti-bearing phase (ilmenite or rutile) during the formation of the younger analcitic and nephelinitic magmas.