915 results for Dynamic Headspace Analysis


Relevance:

80.00%

Publisher:

Abstract:

The research presented in this thesis was developed as part of DIBANET, an EC-funded project aiming to develop an energetically self-sustainable process for the production of diesel-miscible biofuels (i.e. ethyl levulinate) via acid hydrolysis of selected biomass feedstocks. Three thermal conversion technologies, pyrolysis, gasification and combustion, were evaluated in the present work with the aim of recovering the energy stored in the acid hydrolysis solid residue (AHR). Consisting mainly of lignin and humins, the AHR can contain up to 80% of the energy in the original feedstock. Pyrolysis of AHR proved unsatisfactory, so attention focussed on gasification and combustion with the aim of producing heat and/or power to supply the energy demanded by the ethyl levulinate production process. A thermal processing rig consisting of a Laminar Entrained Flow Reactor (LEFR) equipped with solid and liquid collection and online gas analysis systems was designed and built to explore pyrolysis, gasification and air-blown combustion of AHR. The maximum liquid yield for pyrolysis of AHR was 30 wt% with a volatile conversion of 80%. The gas yield for AHR gasification was 78 wt%, with an 8 wt% tar yield and conversion of volatiles close to 100%. In combustion, 90 wt% of the AHR was transformed into gas, with volatile conversions above 90%. Gasification in 5 vol% O2/95 vol% N2 resulted in a nitrogen-diluted, low heating value gas (2 MJ/m3). Steam and oxygen-blown gasification of AHR were additionally investigated in a batch gasifier at KTH in Sweden. Steam promoted the formation of hydrogen (25 vol%) and methane (14 vol%), improving the gas heating value to 10 MJ/m3, below the typical value for steam gasification owing to equipment limitations. Arrhenius kinetic parameters were calculated using data collected with the LEFR to provide reaction rate information for process design and optimisation. The activation energy (EA) and pre-exponential factor (k0 in s-1) for pyrolysis (EA = 80 kJ/mol, ln k0 = 14), gasification (EA = 69 kJ/mol, ln k0 = 13) and combustion (EA = 42 kJ/mol, ln k0 = 8) were calculated after linearly fitting the data using the random pore model. Kinetic parameters for pyrolysis and combustion were also determined by dynamic thermogravimetric analysis (TGA), including studies of the original biomass feedstocks for comparison. Results obtained by differential and integral isoconversional methods for activation energy determination were compared. The activation energy calculated by the Vyazovkin method was 103-204 kJ/mol for pyrolysis of untreated feedstocks and 185-387 kJ/mol for AHRs. The combustion activation energy was 138-163 kJ/mol for biomass and 119-158 kJ/mol for AHRs. The non-linear least squares method was used to determine the reaction model and pre-exponential factor. Pyrolysis and combustion of biomass were best modelled by a combination of third-order reaction and three-dimensional diffusion models, while AHR decomposed following the third-order reaction model for pyrolysis and the three-dimensional diffusion model for combustion.
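The Arrhenius parameters above come from linear fits of rate-versus-temperature data. The following minimal sketch (with purely illustrative temperatures and rate constants, not the thesis data) shows the standard linearisation ln k = ln k0 - EA/(R*T) used to extract EA and ln k0:

```python
import numpy as np

R = 8.314  # J/(mol*K), universal gas constant

# Hypothetical example: rate constants k (s^-1) extracted from reactor
# runs at several temperatures T (K). Values are illustrative only.
T = np.array([1073.0, 1123.0, 1173.0, 1223.0])
k = np.array([0.12, 0.25, 0.48, 0.85])

# Arrhenius: k = k0 * exp(-EA/(R*T))  =>  ln k = ln k0 - (EA/R) * (1/T)
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)

EA = -slope * R      # activation energy, J/mol
ln_k0 = intercept    # ln of the pre-exponential factor (k0 in s^-1)

print(f"EA = {EA / 1000:.0f} kJ/mol, ln k0 = {ln_k0:.1f}")
```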

Relevance:

80.00%

Publisher:

Abstract:

The goal of modern radiotherapy is to precisely deliver a prescribed radiation dose to delineated target volumes that contain a significant number of tumor cells while sparing the surrounding healthy tissues and organs. Precise delineation of treatment and avoidance volumes is the key to precision radiation therapy. In recent years, considerable clinical and research effort has been devoted to integrating MRI into the radiotherapy workflow, motivated by its superior soft-tissue contrast and functional imaging capability. Dynamic contrast-enhanced MRI (DCE-MRI) is a noninvasive technique that measures properties of the tissue microvasculature. Its sensitivity to radiation-induced vascular pharmacokinetic (PK) changes has been preliminarily demonstrated. In spite of its great potential, two major challenges have limited DCE-MRI's clinical application in radiotherapy assessment: the technical limitations of accurate DCE-MRI implementation and the need for novel DCE-MRI data analysis methods that provide richer functional heterogeneity information.

This study aims to improve current DCE-MRI techniques and to develop new DCE-MRI analysis methods for radiotherapy assessment. The study is accordingly divided into two parts. The first part focuses on DCE-MRI temporal resolution, one of the key DCE-MRI technical factors, and proposes several improvements to it; the second part explores the potential value of image heterogeneity analysis and multiple-PK-model combination for therapeutic response assessment, and develops several novel DCE-MRI data analysis methods.

I. Improvement of DCE-MRI temporal resolution. First, the feasibility of improving DCE-MRI temporal resolution via image undersampling was studied. Specifically, a novel iterative MR image reconstruction algorithm, built on the recently developed compressed sensing (CS) theory, was studied for DCE-MRI reconstruction. By utilizing a limited k-space acquisition with shorter imaging time, images can be reconstructed iteratively under the regularization of a newly proposed total generalized variation (TGV) penalty term. In a retrospective, IRB-approved study of DCE-MRI scans of brain radiosurgery patients, the clinically obtained image data served as reference data, and accelerated k-space acquisition was simulated by undersampling the full k-space of the reference images with designed sampling grids. Two undersampling strategies were proposed: 1) a radial multi-ray grid with a special angular distribution, used to sample each slice of the full k-space; and 2) a series of Cartesian random sampling grids with spatiotemporal constraints from adjacent frames, used to sample the dynamic k-space series at a given slice location. Two sets of PK parameter maps were generated, one from the undersampled data and one from the fully sampled data. Multiple quantitative measurements and statistical tests were performed to evaluate the accuracy of the PK maps generated from the undersampled data with respect to those generated from the fully sampled data. Results showed that at a simulated acceleration factor of four, PK maps could be faithfully calculated from DCE images reconstructed using undersampled data, and no statistically significant differences were found between the regional PK mean values from the undersampled and fully sampled data sets. DCE-MRI acceleration using the investigated image reconstruction method therefore appears feasible and promising.
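As a rough illustration of the retrospective undersampling idea, the sketch below builds a Cartesian random sampling mask (fully sampled low-frequency band plus randomly kept peripheral phase-encode lines) and applies it to the k-space of a test image. All names and parameters are hypothetical, and a zero-filled reconstruction stands in for the TGV-regularised iterative solver used in the study:

```python
import numpy as np

rng = np.random.default_rng(0)

def undersample_kspace(image, acceleration=4, center_fraction=0.08):
    """Retrospectively undersample the k-space of a 2D image with a
    Cartesian random mask, as in the simulated-acquisition study
    described above. Illustrative sketch only."""
    kspace = np.fft.fftshift(np.fft.fft2(image))
    ny, nx = kspace.shape
    mask = np.zeros(nx, dtype=bool)
    # keep the central (low-frequency) phase-encode lines
    n_center = int(center_fraction * nx)
    c0 = nx // 2 - n_center // 2
    mask[c0:c0 + n_center] = True
    # randomly keep peripheral lines until 1/acceleration is sampled
    n_target = nx // acceleration
    periph = rng.permutation(np.flatnonzero(~mask))
    mask[periph[:max(0, n_target - n_center)]] = True
    undersampled = kspace * mask[None, :]
    # zero-filled reconstruction (an iterative TGV-regularised solver
    # would replace this step in the actual method)
    recon = np.abs(np.fft.ifft2(np.fft.ifftshift(undersampled)))
    return recon, mask

phantom = rng.random((128, 128))
recon, mask = undersample_kspace(phantom)
print(f"sampled fraction of k-space lines: {mask.mean():.2f}")
```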

Second, for high temporal resolution DCE-MRI, a new PK model fitting method was developed to solve for the PK parameters with better accuracy and efficiency. The method is based on a derivative-based reformulation of the commonly used Tofts PK model, which is conventionally presented as an integral expression; it also incorporates a Kolmogorov-Zurbenko (KZ) filter to suppress noise in the data, and solves for the PK parameters as a linear problem in matrix form. In a computer simulation study, PK parameters representing typical intracranial values were selected as references to simulate DCE-MRI data at different temporal resolutions and noise levels. Results showed that at both high temporal resolutions (<1 s) and clinically feasible temporal resolution (~5 s), the new method calculated PK parameters more accurately than current methods at clinically relevant noise levels; at high temporal resolutions, its computational efficiency exceeded that of current methods by about two orders of magnitude. In a retrospective study of clinical brain DCE-MRI scans, the PK maps derived from the proposed method were comparable with the results from current methods. Based on these results, it can be concluded that the new method enables accurate and efficient PK model fitting for high temporal resolution DCE-MRI.
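The derivative form of the Tofts model, dCt/dt = Ktrans*Cp(t) - kep*Ct(t), turns PK fitting into a linear least-squares problem. The sketch below illustrates that general idea on synthetic noiseless data; the thesis method additionally applies KZ filtering before differentiation, and all names and values here are illustrative:

```python
import numpy as np

def fit_tofts_derivative(t, Cp, Ct):
    """Fit Ktrans and kep from the derivative form of the Tofts model,
        dCt/dt = Ktrans * Cp(t) - kep * Ct(t),
    via linear least squares. A minimal sketch of the general idea, not
    the thesis implementation (which adds Kolmogorov-Zurbenko filtering
    to suppress noise before differentiation)."""
    dCt = np.gradient(Ct, t)
    A = np.column_stack([Cp, -Ct])
    (ktrans, kep), *_ = np.linalg.lstsq(A, dCt, rcond=None)
    return ktrans, kep

# Illustrative synthetic data: biexponential arterial input function and
# a tissue curve generated from the integral Tofts model.
t = np.linspace(0, 300, 600)                       # s, ~0.5 s resolution
Cp = 5.0 * (np.exp(-0.01 * t) - np.exp(-0.1 * t))  # arterial input (mM)
ktrans_true, kep_true = 0.0025, 0.008              # s^-1
dt = t[1] - t[0]
Ct = ktrans_true * dt * np.convolve(Cp, np.exp(-kep_true * t))[: t.size]

print(fit_tofts_derivative(t, Cp, Ct))  # approximately (0.0025, 0.008)
```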

II. Development of DCE-MRI analysis methods for therapeutic response assessment. This part develops methodology along two approaches. The first is to develop model-free analysis methods for evaluating DCE-MRI functional heterogeneity, inspired by the rationale that radiotherapy-induced functional change may be heterogeneous across the treatment area. The first effort was a translational investigation of classic fractal dimension theory for DCE-MRI therapeutic response assessment. In a small-animal anti-angiogenesis drug therapy experiment, randomly assigned treatment and control groups received multiple treatment fractions, with one pre-treatment and multiple post-treatment high-spatiotemporal-resolution DCE-MRI scans. In the post-treatment scan acquired two weeks after the start of treatment, the investigated Rényi dimensions of the classic PK rate constant map showed significant differences between the treatment and control groups; when the Rényi dimensions were used for treatment/control classification, the achieved accuracy was higher than that obtained using conventional PK parameter statistics. Following this pilot work, two novel texture analysis methods were proposed. First, a new technique called the Gray Level Local Power Matrix (GLLPM) was developed to address the lack of temporal information and the poor computational efficiency of the commonly used Gray Level Co-Occurrence Matrix (GLCOM) techniques. In the same small-animal experiment, the dynamic curves of Haralick texture features derived from the GLLPM performed better overall than the corresponding curves derived from current GLCOM techniques in treatment/control separation and classification. The second method developed is dynamic Fractal Signature Dissimilarity (FSD) analysis. Inspired by classic fractal dimension theory, this method quantitatively measures the dynamics of tumor heterogeneity during contrast agent uptake on DCE images. In the same small-animal experiment, selected parameters from the dynamic FSD analysis showed significant differences between treatment and control groups as early as after one treatment fraction, whereas metrics from conventional PK analysis showed significant differences only after three treatment fractions. Treatment/control classification after the first treatment fraction was better with dynamic FSD parameters than with conventional PK statistics. These results suggest that this novel method is promising for capturing early therapeutic response.
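To make the fractal-analysis idea concrete, the sketch below estimates a box-counting dimension, which is the q = 0 special case of the Rényi (generalised) dimensions used above, for a binary map. It is a minimal illustration, not the thesis implementation:

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Estimate the box-counting dimension of a 2D binary map: count the
    boxes of side s containing foreground, then fit log(count) vs log(s).
    A minimal sketch; the thesis works with Renyi (generalised)
    dimensions of PK parameter maps."""
    counts = []
    n = mask.shape[0]
    for s in sizes:
        boxed = mask[: n - n % s, : n - n % s].reshape(n // s, s, -1, s)
        counts.append(boxed.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope  # counts ~ s**(-D), so the fitted slope is -D

# Illustrative check: a filled disc should give a dimension close to 2
y, x = np.mgrid[:256, :256]
disc = (x - 128) ** 2 + (y - 128) ** 2 < 100 ** 2
print(f"{box_counting_dimension(disc):.2f}")
```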

The second approach to developing novel DCE-MRI methods is to combine PK information from multiple PK models. Currently, the classic Tofts model, or an alternative version of it, is widely adopted for DCE-MRI analysis as the gold-standard approach to therapeutic response assessment. Previously, a shutter-speed (SS) model was proposed to incorporate the transcytolemmal water exchange effect into contrast agent concentration quantification. In spite of its richer biological assumptions, its application in therapeutic response assessment has been limited. It is therefore intriguing to combine information from the SS model and from the classic Tofts model to explore potential new biological information for treatment assessment. The feasibility of this idea was investigated in the same small-animal experiment. The SS model was compared against the Tofts model for therapeutic response assessment by comparing regional mean PK parameter values. Based on the modeled transcytolemmal water exchange rate, a biological subvolume was proposed and automatically identified using histogram analysis. Within the biological subvolume, the PK rate constant derived from the SS model proved superior to the one from the Tofts model in treatment/control separation and classification. Furthermore, a novel biomarker was designed to integrate the PK rate constants from the two models. When evaluated within the biological subvolume, this biomarker reflected significant treatment/control differences in both post-treatment evaluations. These results confirm the potential value of the SS model, as well as of its combination with the Tofts model, for therapeutic response assessment.

In summary, this study addressed two problems in the application of DCE-MRI to radiotherapy assessment. In the first part, a method of accelerating DCE-MRI acquisition for better temporal resolution was investigated, and a novel PK model fitting algorithm was proposed for high temporal resolution DCE-MRI. In the second part, two model-free texture analysis methods and a multiple-model analysis method were developed for DCE-MRI therapeutic response assessment. The presented work could benefit future routine clinical application of DCE-MRI in radiotherapy assessment.

Relevance:

80.00%

Publisher:

Abstract:

Bayesian nonparametric models, such as the Gaussian process and the Dirichlet process, have been extensively applied to target kinematics modeling in applications including environmental monitoring, traffic planning, endangered species tracking, dynamic scene analysis, autonomous robot navigation, and human motion modeling. As these successful applications show, Bayesian nonparametric models are able to adjust their complexity adaptively from data as necessary and are resistant to overfitting and underfitting. However, most existing works assume that the sensor measurements used to learn the Bayesian nonparametric target kinematics models are obtained a priori, or that the target kinematics can be measured by the sensor at any given time throughout the task. Little work has been done on controlling a sensor with a bounded field of view to obtain the measurements of mobile targets that are most informative for reducing the uncertainty of the Bayesian nonparametric models. To present the systematic sensor planning approach to learning Bayesian nonparametric models, the Gaussian process target kinematics model is introduced first; it is capable of describing time-invariant spatial phenomena, such as ocean currents, temperature distributions and wind velocity fields. The Dirichlet process-Gaussian process target kinematics model is subsequently discussed for modeling mixtures of mobile targets, such as pedestrian motion patterns.

Novel information-theoretic functions are developed for these Bayesian nonparametric target kinematics models to represent the expected utility of measurements as a function of sensor control inputs and random environmental variables. A Gaussian process expected Kullback-Leibler (KL) divergence is developed as the expectation of the KL divergence between the current (prior) and posterior Gaussian process target kinematics models with respect to the future measurements. This approach is then extended to develop a new information value function that can be used to estimate target kinematics described by a Dirichlet process-Gaussian process mixture model. A theorem is presented showing that the novel information-theoretic functions are bounded. Based on this theorem, efficient estimators of the new information-theoretic functions are designed and proved to be unbiased, with the variance of the resulting approximation error decreasing linearly as the number of samples increases. The computational complexity of optimizing the novel information-theoretic functions under sensor dynamics constraints is studied and proved to be NP-hard. A cumulative lower bound is then proposed to reduce the computational complexity to polynomial time.
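Since a GP evaluated at finitely many test points is a multivariate Gaussian, the closed-form Gaussian KL divergence is the basic building block behind such an expected-KL information function. A minimal sketch, not the dissertation's exact estimator:

```python
import numpy as np

def gaussian_kl(mu0, S0, mu1, S1):
    """Closed-form KL divergence KL(N(mu0,S0) || N(mu1,S1)) between two
    multivariate Gaussians, e.g. a GP posterior vs. prior evaluated at a
    finite set of test points. Illustrative building block only."""
    k = mu0.size
    S1_inv = np.linalg.inv(S1)
    d = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0) + d @ S1_inv @ d - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

# Illustrative: prior vs. posterior of a GP at three test points
mu_prior, S_prior = np.zeros(3), np.eye(3)
mu_post = np.array([0.3, 0.1, -0.2])
S_post = 0.5 * np.eye(3)
print(f"{gaussian_kl(mu_post, S_post, mu_prior, S_prior):.3f}")
```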

Three sensor planning algorithms are developed according to the assumptions on the target kinematics and the sensor dynamics. For problems where the control space of the sensor is discrete, a greedy algorithm is proposed. The efficiency of the greedy algorithm is demonstrated by a numerical experiment with ocean-current data obtained from moored buoys. A sweep line algorithm is developed for applications where the sensor control space is continuous and unconstrained. Synthetic simulations as well as physical experiments with ground robots and a surveillance camera are conducted to evaluate the performance of the sweep line algorithm. Moreover, a lexicographic algorithm is designed, based on the cumulative lower bound of the novel information-theoretic functions, for the scenario where the sensor dynamics are constrained. Numerical experiments with real data collected from indoor pedestrians by a commercial pan-tilt camera are performed to examine the lexicographic algorithm. Results from both the numerical simulations and the physical experiments show that the three sensor planning algorithms proposed in this dissertation, based on the novel information-theoretic functions, are superior at learning the target kinematics with little or no prior knowledge.
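For the discrete-control case, the greedy strategy can be sketched as follows; the `info_value` callable is a hypothetical stand-in for an expected-KL estimator like those above:

```python
import numpy as np

def greedy_plan(controls, info_value, horizon):
    """Greedy sensor planning over a discrete control space: at each step
    pick the control input maximising the stage-wise information value.
    A minimal sketch of the greedy strategy described above;
    info_value(history, u) -> float is a hypothetical interface."""
    history = []
    for _ in range(horizon):
        best = max(controls, key=lambda u: info_value(history, u))
        history.append(best)
    return history

# Toy usage: prefer controls near 0.7, with diminishing returns on repeats
controls = list(np.linspace(0.0, 1.0, 11))
toy_value = lambda h, u: -abs(u - 0.7) - 0.1 * h.count(u)
print(greedy_plan(controls, toy_value, horizon=3))
```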

Relevance:

80.00%

Publisher:

Abstract:

This study concerns the production and characterization of composites of polylactic acid (PLA) and natural fibers (flax, wood flour). Foaming of PLA and its composites was also studied in order to evaluate the effects of injection-molding conditions and of the reinforcement on the final properties of these materials. In the first part, composites of PLA and flax fibers were produced by extrusion followed by injection molding. The effect of varying the filler content (15, 25 and 40 wt%) on the morphological, mechanical, thermal and rheological characteristics of the composites was evaluated. In the second part, wood flour (WF) was chosen to reinforce PLA. The PLA/WF composites were prepared as in the first part, and a complete series of morphological, mechanical and thermal characterizations, together with dynamic mechanical analysis, was carried out to fully assess the effect of filler content (15, 25 and 40 wt%) on the properties of PLA. Finally, the third part of this study concerns PLA and natural-fiber composites used to produce foamed composites. These foams were made using an exothermic foaming agent (azodicarbonamide) via injection molding, after blending the PLA with the natural fibers. In this case, the injection shot size (the amount of material injected into the mold: 31, 33, 36, 38 and 43% of the capacity of the injection-molding machine) and the wood-flour content (15, 25 and 40 wt%) were varied. Mechanical and thermal properties were characterized, and the results showed that the natural reinforcements studied (flax and wood flour) improved the mechanical properties of the composites, notably the flexural modulus and the impact strength of the polymer (PLA). In addition, foaming was also effective for both neat PLA and its composites, as the densities were significantly reduced.

Relevance:

80.00%

Publisher:

Abstract:

Integrated water resource management requires distinguishing the water pathways that are accessible to societies from those that are not. Water pathways are numerous and vary strongly from one place to another. The question can be simplified by focusing instead on water's two destinations. Blue water forms the stores and flows in the hydrosystem: rivers, aquifers and subsurface flows. Green water is the invisible flux of water vapour that returns to the atmosphere; it includes the water consumed by plants and the water held in soils. Yet many studies address only one type of blue water, generally looking only at the fate of river discharge or, more rarely, at groundwater recharge, so the overall picture is missing. At the same time, climate change is affecting these water pathways by altering the different components of the hydrological cycle in distinct ways. The study presented here uses the SWAT modelling tool to track all components of the hydrological cycle and to quantify the impact of climate change on the hydrosystem of the Garonne river basin. A first part of the work refined the model set-up to best address the research question. Particular care was taken in the use of gridded meteorological data (SAFRAN) and in accounting for snow over the high-relief areas. Calibration of the model parameters was tested in a differential split-sampling context, calibrating and then validating on climatically contrasting years in order to assess the robustness of the simulation under climate change. This step yielded a substantial improvement in performance over the calibration period (2000-2010) and demonstrated the stability of the model under climate change. Subsequently, simulations over a full century (1960-2050) were produced and analysed in two phases. i) The past period (1960-2000), based on climate observations, served as a long-term validation period for the simulated discharge, with very good performance. Analysis of the different hydrological components reveals a strong impact on green-water fluxes and stores, with a decrease in soil water content and a large increase in evapotranspiration. The blue-water components are mainly perturbed in the snow pack and in river discharge, both of which show a substantial decrease. ii) Hydrological projections were produced (2010-2050) using a range of scenarios and climate models derived from dynamical downscaling. The analysis of the simulations largely confirms the conclusions drawn from the past period: a strong impact on green water, again with a decrease in soil water content and an increase in potential evapotranspiration. The simulations show that soil water content during the summer becomes low enough to reduce actual evapotranspiration fluxes, highlighting a possible future deficit of green-water stores. Moreover, while the analysis of the blue-water components still shows a significant decrease in the snow pack, discharge now appears to increase in autumn and winter. These results are a sign of the "acceleration" of the surface blue-water components, probably related to the increase in extreme precipitation events. This work provides an analysis of the variations of most components of the hydrological cycle at the scale of a river basin, confirming the importance of accounting for all of these components when assessing the impact of climate change, and more broadly of environmental change, on water resources.

Relevance:

80.00%

Publisher:

Abstract:

The purpose of this article was to study the evolution of city sizes in the states of northeastern Brazil for the years 1990, 2000 and 2010 through the empirical regularity known as Zipf's law, which can be represented by the Pareto distribution. Analysing the dynamics of the population distribution over time, urban growth revealed a persistent hierarchy of the cities of Salvador, Fortaleza and Recife, while São Luís held fourth place in the ranking of the largest cities, a position that persisted over the last two decades. Zipf's law did not hold when the cities of the Northeast were considered jointly, which may be due to the lower degree of urban development of the cities in this region. When the states were analysed separately, Zipf's law was not observed either, although Gibrat's law, which postulates that city growth is independent of city size, was verified. Finally, the installation of the mining and metallurgical complex in Maranhão is believed to have contributed to development and to the reduction of intra-city urban inequality in this area.
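For reference, Zipf's law is usually tested with a rank-size regression, log(rank) = c - b*log(size), with b close to 1 indicating Zipf behaviour. A minimal sketch with invented populations, not the article's data:

```python
import numpy as np

def zipf_exponent(populations):
    """Estimate the Zipf (Pareto) exponent from a rank-size regression,
        log(rank) = c - b * log(size),
    where b close to 1 indicates Zipf's law. Illustrative sketch."""
    sizes = np.sort(np.asarray(populations, dtype=float))[::-1]
    ranks = np.arange(1, sizes.size + 1)
    b, c = np.polyfit(np.log(sizes), np.log(ranks), 1)
    return -b

# Hypothetical city populations (thousands), largest to smallest
pops = [2900, 2500, 1500, 1000, 860, 700, 550, 420, 300, 250]
print(f"Zipf exponent ~ {zipf_exponent(pops):.2f}")
```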

Relevance:

80.00%

Publisher:

Abstract:

The need to produce more efficient electronic devices and to miniaturize them has been one of the main goals of the electronics industry. This has created the need to improve the performance of printed circuit boards, making them more flexible, less noisy, more stable against sudden temperature variations, and able to operate over a wide range of frequencies and powers. To this end, one strategy under study is the possibility of incorporating the passive components, namely capacitors, in film form directly inside the board. In order to maintain a high dielectric constant and low losses while preserving the flexibility associated with the polymer, so-called polymer-matrix composites have been developed. This dissertation studied the dielectric and electrical behaviour of mixtures of the ceramic CaCu3Ti4O12 (CCTO) with a styrene-isoprene-styrene copolymer. Films with different CCTO concentrations were prepared by a drawing ("arrastamento") casting method, in collaboration with the Polymer Centre in Slovakia; films at the same concentrations were also prepared by spin-coating. Two different methods were used to prepare the CCTO powder: solid-state reaction and sol-gel. The films produced were characterized structurally (X-ray diffraction, Raman spectroscopy), morphologically (scanning electron microscopy) and dielectrically. In the dielectric characterization, the dielectric constant and losses were determined for all films at room temperature and over the temperature range from 200 K to 400 K, which made it possible to identify glass and sub-glass relaxations and thus to calculate the glass transition temperatures and activation energies, respectively. Adhesion tests were performed, and dynamic mechanical analysis was applied to determine the glass transition temperatures of the films prepared by the drawing method. Finally, the mixing law that best describes the dielectric behaviour of the composite was investigated; the generalized Looyenga law was found to give the best fit to the dielectric response of the composites produced.
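For reference, the Looyenga mixing law mentioned above relates the effective permittivity of a two-phase composite to the matrix and filler permittivities. A standard statement, with the generalised form obtained by letting the exponent become a fitting parameter, is:

```latex
% Looyenga mixing law for a two-phase composite:
%   eps_eff : effective permittivity of the composite
%   eps_m, eps_f : permittivities of matrix and filler
%   phi : filler volume fraction
\varepsilon_{\mathrm{eff}}^{1/3} = (1-\varphi)\,\varepsilon_{m}^{1/3} + \varphi\,\varepsilon_{f}^{1/3}
% Generalised Looyenga: the exponent 1/3 becomes a fitting parameter alpha
\varepsilon_{\mathrm{eff}}^{\alpha} = (1-\varphi)\,\varepsilon_{m}^{\alpha} + \varphi\,\varepsilon_{f}^{\alpha}
```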

Relevance:

80.00%

Publisher:

Abstract:

Natural materials have attracted considerable attention in many applications because they are degradable and derived directly from the earth. In addition, natural materials can be obtained from renewable resources such as plants (i.e., cellulosic fibers such as flax, hemp, and jute). Being cheap and lightweight, cellulosic natural fibers are good candidates for reinforcing bio-based polymer composites. However, the hydrophilic nature of these fibers, resulting from the hydroxyl groups in their structure, restricts their application in polymeric matrices. This is because of weak interfacial adhesion and difficulties in mixing due to the poor wettability of the fibers within the matrices. Many attempts have been made to modify the surface properties of natural fibers, including physical, chemical, and physico-chemical treatments, but on the one hand these treatments are unable to cure the intrinsic surface defects of the fibers and, on the other, they cannot improve their moisture and alkali resistance. The creation of a thin film on the fibers, however, could achieve both objectives. This study aims first to functionalize flax fibers by selective oxidation of the hydroxyl groups in the cellulose structure, paving the way for better adhesion of the subsequent amphiphilic TiO2 thin films created by the sol-gel technique, a method capable of creating a very thin layer of metallic oxide on a substrate. In the next step, the effect of oxidation on the interfacial adhesion between the TiO2 film and the fiber, and thus on the physical and mechanical properties of the fiber, was characterized. Finally, the TiO2-grafted fibers, with and without oxidation, were used to reinforce polylactic acid (PLA). Tensile, impact, and short-beam shear tests were performed to characterize the mechanical properties, while thermogravimetric analysis (TGA), differential scanning calorimetry (DSC), dynamic mechanical analysis (DMA), and moisture absorption measurements were used to characterize the physical properties of the composites. Results showed a significant increase in the physical and mechanical properties of the flax fibers when they were oxidized prior to TiO2 grafting. Moreover, the TiO2-grafted oxidized fibers caused significant changes when used as reinforcement in PLA: a higher interfacial strength and lower water absorption were obtained in comparison with the reference samples.

Relevance:

80.00%

Publisher:

Abstract:

The use of raw materials from renewable sources for the production of materials has been the subject of numerous studies because of their potential to replace petrochemical-based materials. The addition of natural fibers to polymers represents an alternative for the partial or total replacement of glass fibers in composites. In this work, carnauba leaf fibers were used in the production of biodegradable composites with a polyhydroxybutyrate (PHB) matrix. To improve the fiber/matrix interfacial properties, four chemical treatments of the fibers were studied. The effect of the different chemical treatments on the morphological, physical, chemical and mechanical properties of the fibers and composites was investigated by scanning electron microscopy (SEM), infrared spectroscopy, X-ray diffraction, tensile and flexural tests, dynamic mechanical analysis (DMA), thermogravimetry (TGA) and differential scanning calorimetry (DSC). The results of the tensile tests indicated an increase in the tensile strength of the composites after chemical treatment of the fibers, with the best results for the hydrogen-peroxide-treated fibers, even though the tensile strength of the fibers themselves was slightly reduced. This suggests better fiber/matrix interaction, which was also observed in SEM fractographs. The glass transition temperature (Tg) was reduced for all composites compared with the pure polymer, which can be attributed to the absorption of solvents, moisture and other low-molecular-weight molecules by the fibers.

Relevance:

80.00%

Publisher:

Abstract:

Time series of commercial landings from the Algarve (southern Portugal) from 1982 to 1999 were analyzed using min/max autocorrelation factor analysis (MAFA) and dynamic factor analysis (DFA). These techniques were used to identify trends and explore the relationships between the response variables (annual landings of 12 species) and explanatory variables [sea surface temperature, rainfall, an upwelling index, Guadiana river (south-east Portugal) flow, the North Atlantic Oscillation, the number of licensed fishing vessels and the number of commercial fishermen]. Landings were more highly correlated with non-lagged environmental variables, and in particular with Guadiana river flow. Both techniques gave coherent results, with the most important trend being a steady decline over time. A DFA model with two explanatory variables (Guadiana river flow and number of fishermen) and three common trends (smoothing functions over time) gave good fits for 10 of the 12 species. Results from other model formulations indicated that river flow is the more important of the two explanatory variables. Changes in the mean flow and discharge regime of the Guadiana river resulting from the construction of the Alqueva dam, completed in 2002, are therefore likely to have a significant and deleterious impact on Algarve fisheries landings.
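As a rough illustration of the dynamic-factor idea (common trends plus explanatory variables), the sketch below fits a state-space dynamic factor model with statsmodels on invented data. Note that statsmodels' DynamicFactor is a related state-space formulation, not necessarily the exact MAFA/DFA variant used in the paper:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.dynamic_factor import DynamicFactor

# Invented data: four landings series driven by one common trend plus
# noise, with a hypothetical "flow" covariate. Illustrative only.
rng = np.random.default_rng(1)
years = pd.period_range("1982", "1999", freq="Y")
trend = np.cumsum(rng.normal(size=len(years)))  # one common trend
landings = pd.DataFrame(
    trend[:, None] * rng.uniform(0.5, 1.5, size=4)
    + rng.normal(size=(len(years), 4)),
    index=years, columns=[f"species_{i}" for i in range(4)],
)
river_flow = pd.Series(rng.normal(size=len(years)), index=years, name="flow")

model = DynamicFactor(landings, k_factors=1, factor_order=1,
                      exog=river_flow.to_frame())
result = model.fit(disp=False)
print(result.summary())
```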

Relevance:

80.00%

Publisher:

Abstract:

Understanding the fluctuations in population abundance is a central question in fisheries. The sardine fishery is of great importance to Portugal; it is data-rich and of primary concern to fisheries managers. In Portugal, sub-stocks of Sardina pilchardus (sardine) are found in different regions: the Northwest (IXaCN), the Southwest (IXaCS) and the South coast (IXaS-Algarve). Each of these sardine sub-stocks is affected differently by a unique set of climate and ocean conditions, mainly during larval development and recruitment, which consequently affects the sardine fishery in the short term. Taking this hypothesis into consideration, we examined the effects of hydrographic (river discharge), sea surface temperature, wind-driven, upwelling, climatic (North Atlantic Oscillation) and fisheries (fishing effort) variables on S. pilchardus catch rates (landings per unit effort, LPUE, as a proxy for sardine biomass). A 20-year time series (1989-2009) was used for the different subdivisions of the Portuguese coast (sardine sub-stocks). For this analysis a multi-model approach was used, applying different time series models for data fitting (dynamic factor analysis, generalised least squares), forecasting (autoregressive integrated moving average), as well as surplus production stock assessment models. The different models were evaluated and compared, and the most important variables explaining changes in LPUE were identified. The type of relationship between sardine catch rates and environmental variables varied across regional scales due to region-specific recruitment responses. Seasonality plays an important role in sardine variability within the three study regions. In IXaCN, autumn (the season with minimum spawning activity and minimum larva and egg concentrations) SST, northerly wind and wind magnitude were negatively related to LPUE. In IXaCS, none of the explanatory variables tested was clearly related to LPUE. In IXaS-Algarve (southern Portugal), both spring (the period when large abundances of larvae are found) northerly wind and wind magnitude were negatively related to LPUE, revealing that environmental effects match the regional peak in spawning time. Overall, the results suggest that management of small, short-lived pelagic species such as sardine (quotas/sustainable yields) should be adapted to a regional scale because of regional environmental variability.
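The ARIMA forecasting step mentioned above can be sketched as follows, on invented LPUE-like data rather than the Portuguese series; the order (1, 1, 0) is purely illustrative:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Invented annual LPUE-like series with a declining trend plus noise.
rng = np.random.default_rng(2)
idx = pd.period_range("1989", "2009", freq="Y")
lpue = pd.Series(
    100 - 1.5 * np.arange(len(idx)) + rng.normal(0, 4, len(idx)),
    index=idx, name="LPUE",
)

model = ARIMA(lpue, order=(1, 1, 0))  # (p, d, q) chosen for illustration
fit = model.fit()
print(fit.forecast(steps=3))           # three-year-ahead forecast
```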

Relevance:

80.00%

Publisher:

Abstract:

Owing to growing environmental concern, research is moving towards the development of new bio-based products. This project focused on the possibility of producing composite materials using a resin-hardener system obtained entirely from renewable sources, with both fully green natural fibers and synthetic fibers. In particular, the possibility was studied of using adenine, a non-toxic molecule derived from renewable sources, for the production of composites of flax, jute, virgin carbon and recycled carbon fibers with a bio-based epoxy resin. The curing cycle was optimized, and the thermal and mechanical properties of the materials produced were characterized by DSC (Differential Scanning Calorimetry), DMA (Dynamic Mechanical Analysis) and TGA (Thermogravimetric Analysis). To better understand the crosslinking of epoxy resins, the reaction mechanism between adenine and an epoxy precursor was studied by 1H-NMR (Nuclear Magnetic Resonance) spectroscopy.

Relevance:

80.00%

Publisher:

Abstract:

Nanofibrous membranes are a promising material for tailoring the properties of laminated CFRP composites by embedding them into the structure. This project aimed to understand the effect of the number, position and thickness of nanofibrous modifications specifically on the damping behaviour of the resulting nano-modified CFRP composite with an epoxy matrix. Improved damping capacity is expected to extend a composite's lifetime and fatigue resistance by inhibiting the formation of microcracks and consequently hindering delamination; it also promises greater comfort in a range of final products by interrupting vibration propagation and thereby reducing noise. Electrospinning was the technique employed to produce nanofibrous membranes from a blend of polymeric solutions. SEM, WAXS and DSC were used to evaluate the quality of the obtained membranes before they were introduced, following a specific stacking sequence, into the production process of the laminate. A suitable autoclave curing cycle was applied to bond the modifications with the matrix material, ensuring full crosslinking of the matrix and thereby finalising the production process. DMA was employed to assess the effects of the different modifications on the properties of the composite. This investigation showed that a high number of modifications of laminated CFRP composites with an epoxy matrix, using thick rubbery nanofibrous membranes, has a positive effect on the damping capacity and on the temperature range over which the effect applies. A suggestion for subsequent studies as well as a recommendation for the production of nano-modified CFRP structures is included at the end of this document.

Relevance:

50.00%

Publisher:

Abstract:

Needle trap devices (NTDs) are a relatively new and promising tool for headspace (HS) analysis. In this study, a dynamic HS sampling procedure is evaluated for the determination of volatile organic compounds (VOCs) in whole blood samples. A full factorial design was used to evaluate the influence of the number of cycles and the incubation time, and it is demonstrated that the controlling factor in the process is the number of cycles. A mathematical model can be used to determine the number of cycles required to adsorb a prefixed amount of the VOCs present in the HS phase, provided quantitative adsorption is reached in each cycle. Matrix effects are of great importance when complex biological samples, such as blood, are analyzed. Evaluation of the salting-out effect showed a significant improvement in the volatilization of VOCs into the HS in this type of matrix. Moreover, a 1:4 (blood:water) dilution is required to obtain quantitative recoveries of the target analytes when external calibration is used. The method developed gives detection limits in the 0.020-0.080 μg L−1 range (0.1-0.4 μg L−1 for undiluted blood samples) with appropriate repeatability values (RSD < 15% at the high level and < 23% at the LOQ level). The figures of merit of the method can be improved by using a smaller phase ratio (i.e., an increase in the blood volume and a decrease in the HS volume), which leads to lower detection limits, better repeatability and greater sensitivity. Twenty-eight blood samples were evaluated with the proposed method, and the results agree with those reported in other studies. Benzene was the only target compound that showed significant differences between the blood levels detected in volunteer non-smokers and smokers.
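One way to formalise the cycle-number model described above: if each cycle quantitatively removes the headspace content and the system re-equilibrates before the next cycle, the unextracted fraction decays geometrically. A sketch under those assumptions (the paper's exact formulation may differ, and all values are illustrative):

```python
import numpy as np

def cycles_needed(target_fraction, K, Vs, Vg):
    """Number of dynamic-HS cycles needed to extract a prefixed fraction
    of analyte, assuming each cycle quantitatively removes the headspace
    content and the system re-equilibrates before the next cycle.
    K = Cs/Cg partition coefficient; Vs, Vg = sample and HS volumes."""
    f = Vg / (K * Vs + Vg)  # fraction of analyte in the HS per cycle
    # after n cycles the unextracted fraction is (1 - f)**n
    n = np.log(1.0 - target_fraction) / np.log(1.0 - f)
    return int(np.ceil(n))

# Illustrative values: extract 90% of an analyte with K = 10,
# 2 mL diluted blood sample, 8 mL headspace
print(cycles_needed(0.90, K=10.0, Vs=2.0, Vg=8.0))  # -> 7
```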

Relevance:

40.00%

Publisher:

Abstract:

A Plackett-Burman experimental design was applied for the robustness assessment of GC×GC-qMS (comprehensive two-dimensional gas chromatography with fast quadrupole mass spectrometric detection) in the quantitative and qualitative analysis of volatile compounds from chocolate samples isolated by headspace solid-phase microextraction (HS-SPME). The influence of small changes around the nominal levels was evaluated for six factors deemed important for peak areas (carrier gas flow rate, modulation period, ion source temperature, MS photomultiplier power, injector temperature and interface temperature) and for four factors considered potentially influential on spectral quality (minimum and maximum limits of the scanned mass range, ion source temperature and photomultiplier power). The analytes selected for the study were 2,3,5-trimethylpyrazine, 2-octanone, octanal, 2-pentyl-furan, 2,3,5,6-tetramethylpyrazine, 2-nonanone and nonanal. The factors identified as important for the robustness of the system were the photomultiplier power for quantitative analysis and the lower limit of the scanned mass range for qualitative analysis.
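For reference, a 12-run Plackett-Burman design (accommodating up to 11 two-level factors, enough for the six quantitative and four qualitative factors above) can be built from its standard cyclic generator; the column-to-factor assignment below is illustrative:

```python
import numpy as np

def plackett_burman_12():
    """Build the standard 12-run Plackett-Burman design (up to 11
    two-level factors) from its cyclic generator row: 11 cyclic shifts
    of the generator plus a closing row of minus ones. A sketch of the
    kind of screening design used above."""
    gen = np.array([+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1])
    rows = [np.roll(gen, i) for i in range(11)]
    rows.append(-np.ones(11, dtype=int))  # closing row of -1 levels
    return np.array(rows, dtype=int)

design = plackett_burman_12()
# e.g., first six columns assigned to the six quantitative-robustness
# factors (flow rate, modulation period, ion source T, photomultiplier
# power, injector T, interface T); -1/+1 are levels around the nominals
print(design[:, :6])
```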