28 results for source analysis


Relevance:

30.00%

Abstract:

Modern food systems are characterized by high energy intensity and by the production of large amounts of waste, residuals and food losses. This inefficiency has major consequences in terms of GHG emissions, waste disposal, and natural resource depletion. The research hypothesis is that residual biomass could contribute to the energy needs of food systems, if recovered as an integrated renewable energy source (RES), leading to a significant reduction of the impacts of food systems, primarily in terms of fossil fuel consumption and GHG emissions. To assess these effects, a comparative life cycle assessment (LCA) has been conducted on two different food systems: a fossil fuel-based system and an integrated system that uses residuals as RES for self-consumption. The food product under analysis was peach nectar, from cultivation to end-of-life. The aim of this LCA is twofold. On one hand, it allows an evaluation of the energy inefficiencies related to agro-food waste. On the other hand, it illustrates how the integration of bioenergy into food systems could effectively help reduce this inefficiency. Data about inputs and waste generated have been collected mainly through literature review and databases. The energy balance, GHG emissions (Global Warming Potential) and waste generation have been analyzed in order to identify the relative requirements and contributions of the different segments. An evaluation of the energy "loss" through the different categories of waste provides details about the consequences associated with its management and/or disposal. Results should provide insight into the impacts associated with inefficiencies within food systems. The comparison provides a measure of the potential reuse of wasted biomass and of the amount of energy recoverable, which could represent a first step toward the formulation of specific policies on the integration of bioenergy for self-consumption.
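As an illustration of the impact-accounting step behind such a comparison, the sketch below aggregates a life-cycle emission inventory into a Global Warming Potential score by weighting each gas with its 100-year characterization factor. The inventory values, functional unit and the rounded GWP100 factors are illustrative assumptions, not data from the study.

```python
# Minimal sketch of GWP aggregation in an LCA, assuming a toy inventory.
# Characterization factors are approximate 100-year GWP values (kg CO2-eq per kg).
GWP100 = {"CO2": 1.0, "CH4": 28.0, "N2O": 265.0}

def gwp(inventory_kg):
    """Sum emissions (kg of each gas) weighted by their GWP100 factors."""
    return sum(mass * GWP100[gas] for gas, mass in inventory_kg.items())

# Hypothetical inventories per functional unit (e.g., 1000 L of peach nectar).
fossil_based = {"CO2": 950.0, "CH4": 1.2, "N2O": 0.15}
integrated_res = {"CO2": 620.0, "CH4": 1.0, "N2O": 0.12}

print(f"Fossil-based system:   {gwp(fossil_based):.1f} kg CO2-eq")
print(f"Integrated RES system: {gwp(integrated_res):.1f} kg CO2-eq")
```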

Relevance:

30.00%

Abstract:

In this work, new tools in atmospheric pollutant sampling and analysis were applied in order to deepen source apportionment studies. The project was developed mainly through the study of atmospheric emission sources in a suburban area influenced by a municipal solid waste incinerator (MSWI), a medium-sized coastal tourist town and a motorway. Two main research lines were followed. Concerning the first line, the potential of PM samplers coupled with a wind select sensor was assessed. Results showed that they may be a valid support in source apportionment studies, although meteorological and territorial conditions can strongly affect the results. Moreover, new markers were investigated, with a particular focus on biomass burning processes. OC proved to be a good indicator of the biomass combustion process, as did all the organic compounds determined. Among metals, lead and aluminium are well related to biomass combustion. Surprisingly, PM was not enriched in potassium during the bonfire event. The second research line consisted in the application of Positive Matrix Factorization (PMF), a new statistical tool in data analysis. This technique was applied to datasets with different time resolutions. PMF applied to atmospheric deposition fluxes identified six main sources affecting the area; the incinerator's relative contribution appeared to be negligible. PMF analysis was then applied to PM2.5 collected with samplers coupled with a wind select sensor. The larger number of environmental indicators determined allowed more detailed results on the sources affecting the area to be obtained. Vehicular traffic proved to be the source of greatest concern for the study area; also in this case, the incinerator's relative contribution appeared to be negligible. Finally, the application of PMF analysis to hourly aerosol data demonstrated that the higher the temporal resolution of the data, the closer the source profiles were to the real ones.
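For readers unfamiliar with the method, the sketch below factorizes a sample-by-species concentration matrix into non-negative source contributions and source profiles, using scikit-learn's NMF as a stand-in for PMF (true PMF additionally weights each residual by its measurement uncertainty). The matrix is random toy data, and the choice of six factors only mirrors the abstract schematically.

```python
# Minimal sketch of a PMF-style source apportionment, assuming a toy dataset.
# scikit-learn's NMF enforces non-negativity but, unlike true PMF, does not
# weight residuals by measurement uncertainty.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_samples, n_species, n_sources = 120, 15, 6

# Hypothetical concentration matrix X (samples x chemical species), X >= 0.
X = rng.gamma(shape=2.0, scale=1.0, size=(n_samples, n_species))

model = NMF(n_components=n_sources, init="nndsvda", max_iter=1000, random_state=0)
G = model.fit_transform(X)   # source contributions per sample
F = model.components_        # source profiles (chemical fingerprints)

# Relative contribution of each factor to the reconstructed mass.
rel_contrib = G.sum(axis=0) * F.sum(axis=1)
rel_contrib /= rel_contrib.sum()
print("Relative source contributions:", np.round(rel_contrib, 3))
```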

Relevance:

30.00%

Abstract:

The objective of this thesis is the analysis of power transients in experimental devices placed within the reflector of the Jules Horowitz Reactor (JHR). Since the JHR material testing facility is designed to achieve 100 MW of core thermal power, a large reflector hosts fissile material samples that are irradiated up to a total power of 3 MW. MADISON devices are expected to attain 130 kW, whereas the ADELINE nominal power is about 60 kW. In addition, MOLFI test samples are envisaged to reach 360 kW in the LEU configuration and up to 650 kW in the HEU configuration. Safety issues concern shutdown transients and require particular verification of the decrease of the thermal power of these fissile samples with respect to core kinetics, as far as the determination of single-device reactivity is concerned. A calculation model is conceived and applied in order to properly account for the different nuclear heating processes and the time-dependent features of the device transients. An innovative methodology is developed in which the modification of the flux shape during control rod insertions is investigated with regard to its impact on device power, through core-reflector coupling coefficients; previous methods, which considered only nominal core-reflector parameters, are thereby improved. Moreover, the effect of delayed emissions is evaluated in terms of the spatial impact on the devices of a diffuse in-core delayed neutron source. Delayed gamma transport related to fission product concentrations is taken into account through evolution calculations of different fuel compositions in an equilibrium cycle. Given an accurate determination of device reactivity, power transients are then computed for every sample according to the envisaged shutdown procedures. The results of this study are intended to provide design feedback and support reactor management optimization by the JHR project team. Moreover, the Safety Report is intended to use the present analysis for improved device characterization.
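To convey the coupling-coefficient idea only, the toy sketch below splits device power during a shutdown transient into a prompt part, proportional to core power through a time-dependent core-reflector coupling coefficient (which changes as rod insertion deforms the flux shape), and a slower delayed neutron/gamma contribution. Every number and functional form here is an illustrative assumption, not the JHR calculation scheme.

```python
# Toy illustration of core-reflector coupling during a shutdown transient.
# Not the thesis's model: shapes, time constants and fractions are assumed.
import numpy as np

t = np.linspace(0.0, 60.0, 601)                        # s after shutdown
P_core = 100e6 * np.exp(-t / 8.0)                      # hypothetical core power decay (W)
c = 1.3e-3 * (1.0 - 0.2 * np.minimum(t / 5.0, 1.0))    # coupling coeff. varying with rod insertion
f_delayed, tau_delayed = 0.07, 25.0                    # assumed delayed fraction and decay time (s)

P_prompt = c * P_core                                  # prompt heating driven by the core
P_delayed = f_delayed * 130e3 * np.exp(-t / tau_delayed)  # delayed term for a 130 kW device
P_device = P_prompt + P_delayed

print("Device power at t = 0, 10, 60 s: %.1f, %.1f, %.1f kW"
      % (P_device[0] / 1e3, P_device[100] / 1e3, P_device[-1] / 1e3))
```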

Relevance:

30.00%

Abstract:

This study focuses on radio-frequency inductively coupled thermal plasma (ICP) synthesis of nanoparticles, combining experimental and modelling approaches towards process optimization and industrial scale-up, in the framework of the FP7-NMP SIMBA European project (Scaling-up of ICP technology for continuous production of Metallic nanopowders for Battery Applications). First, the state of the art of nanoparticle production through conventional and plasma routes is summarized; then results are presented for the characterization of the plasma source and for the investigation of the nanoparticle synthesis process, aiming at highlighting the fundamental process parameters while adopting a design-oriented modelling approach. In particular, an energy balance of the torch and of the reaction chamber, based on a calorimetric method, is presented, and results of three- and two-dimensional modelling of the ICP system are compared with calorimetric and enthalpy probe measurements in order to validate the temperature field predicted by the model, which is then used to characterize the ICP system under powder-free conditions. Moreover, results from the modelling of the critical phases of the ICP synthesis process, such as precursor evaporation, vapour conversion into nanoparticles and nanoparticle growth, are presented, with the aim of providing useful insights both for the design and optimization of the process and into the underlying physical phenomena. In particular, precursor evaporation, one of the phases with the highest impact on the industrial feasibility of the process, is discussed; by employing models describing particle trajectories and thermal histories, adapted from models originally developed for other plasma technologies or applications, such as DC non-transferred arc torches and powder spheroidization, the evaporation of a micro-sized Si solid precursor in a laboratory-scale ICP system is investigated. Finally, the role of thermo-fluid dynamic fields in nanoparticle formation is discussed, together with a study of the effect of reaction chamber geometry on the characteristics of the produced nanoparticles and on process yield.
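As a pointer to how a calorimetric energy balance of this kind is typically closed, the sketch below estimates the power removed by each cooling-water circuit from its flow rate and temperature rise, and infers what remains available to the process. The circuit names, flow rates, temperatures and plate power are hypothetical placeholders, not measurements from the SIMBA set-up.

```python
# Minimal sketch of a calorimetric torch/chamber energy balance (toy readings).
# Power removed by a cooling circuit: Q = m_dot * c_p * (T_out - T_in).
CP_WATER = 4186.0  # J/(kg K)

def circuit_power(m_dot_kg_s, t_in_c, t_out_c):
    return m_dot_kg_s * CP_WATER * (t_out_c - t_in_c)

circuits = {                       # (flow kg/s, T_in degC, T_out degC) - illustrative
    "torch":            (0.30, 20.0, 31.0),
    "reaction_chamber": (0.45, 20.0, 27.5),
    "filter_section":   (0.25, 20.0, 23.0),
}

P_plate = 60e3  # assumed RF plate power, W
losses = {name: circuit_power(*vals) for name, vals in circuits.items()}
P_to_process = P_plate - sum(losses.values())

for name, q in losses.items():
    print(f"{name:>16s}: {q/1e3:6.2f} kW")
print(f"Power left to the gas/process: {P_to_process/1e3:.2f} kW "
      f"({100 * P_to_process / P_plate:.1f}% of plate power)")
```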

Relevance:

30.00%

Abstract:

The discovery of the Cosmic Microwave Background (CMB) radiation in 1965 is one of the fundamental milestones supporting the Big Bang theory, and the CMB is one of the most important sources of information in cosmology. The excellent accuracy of the recent CMB data from the WMAP and Planck satellites confirmed the validity of the standard cosmological model and set a new challenge for the data analysis processes and their interpretation. In this thesis we deal with several aspects and useful tools of the data analysis, focusing on their optimization in order to fully exploit the Planck data and contribute to the final published results. The issues investigated are: the change of coordinates of CMB maps using the HEALPix package, the problem of the aliasing effect in the generation of low-resolution maps, and the comparison of the angular power spectrum (APS) extraction performance of the optimal QML method, implemented in the code BolPol, and of the pseudo-Cl method, implemented in Cromaster. The QML method has then been applied to the Planck data at large angular scales to extract the CMB APS. The same method has also been applied to analyze the TT parity and Low Variance anomalies in the Planck maps, showing a consistent deviation from the standard cosmological model; the possible origins of these results are discussed. The Cromaster code has instead been applied to the 408 MHz and 1.42 GHz surveys, focusing on the analysis of the APS of selected regions of the synchrotron emission. The new generation of CMB experiments will be dedicated to polarization measurements, which require high-accuracy devices for separating the polarizations. Here a new technology, called Photonic Crystals, is exploited to develop a new polarization splitter device, and its performance is compared to that of the devices used nowadays.
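As a minimal illustration of pseudo-Cl APS extraction (not the BolPol or Cromaster pipelines themselves), the sketch below synthesizes a full-sky map from an arbitrary fiducial spectrum with healpy and recovers its angular power spectrum with hp.anafast; the resolution and spectrum shape are assumptions chosen only for the example.

```python
# Minimal pseudo-Cl sketch with healpy, assuming an arbitrary fiducial spectrum.
import numpy as np
import healpy as hp

nside, lmax = 64, 128
ell = np.arange(lmax + 1)

# Toy fiducial spectrum (arbitrary normalization), leaving out monopole/dipole.
cl_fid = np.zeros(lmax + 1)
cl_fid[2:] = 1.0 / (ell[2:] * (ell[2:] + 1.0))

m = hp.synfast(cl_fid, nside, lmax=lmax)   # simulate a CMB-like temperature map
cl_hat = hp.anafast(m, lmax=lmax)          # pseudo-Cl estimate of its APS

print("Input vs recovered C_l at l=10: %.3e vs %.3e" % (cl_fid[10], cl_hat[10]))
```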

Relevance:

30.00%

Abstract:

Group B Streptococcus (GBS), in its transition from commensal to pathogen, encounters diverse host environments and must therefore coordinately control its transcriptional responses to these changes. This work aimed at better understanding the role of two-component signal transduction systems (TCS) in GBS pathophysiology through a systematic screening procedure. We first performed a complete inventory and sensory-mechanism classification of all putative GBS TCS by genomic analysis. Five TCS were further investigated by the generation of knock-out strains, and in vitro transcriptome analysis identified the genes regulated by these systems, ranging from 0.1 to 3% of the genome. Interestingly, two sugar phosphotransferase systems appeared to be differentially regulated in the knock-out mutant of TCS-16, suggesting an involvement in monitoring carbon source availability. High-throughput analysis of bacterial growth on different carbon sources showed that TCS-16 was necessary for growth of GBS on fructose-6-phosphate. Additional transcriptional analysis provided further evidence for a stimulus-response circuit in which extracellular fructose-6-phosphate leads to autoinduction of TCS-16, with concomitant dramatic up-regulation of the adjacent operon encoding a phosphotransferase system. The TCS-16-deficient strain exhibited decreased persistence in a model of vaginal colonization and impaired growth/survival in the presence of vaginal mucoid components. All mutant strains were also characterized in a murine model of systemic infection, and inactivation of TCS-17 (also known as RgfAC) resulted in hypervirulence. Our data suggest a role for the previously unknown TCS-16, here named FspSR, in bacterial fitness and carbon metabolism during host colonization, and also provide experimental evidence for the involvement of TCS-17/RgfAC in virulence.

Relevance:

30.00%

Abstract:

Folates (vitamin B9) are essential water-soluble vitamins, whose deficiency in humans may contribute to the onset of several diseases, such as anaemia, cancer, cardiovascular diseases and neurological problems, as well as defects in embryonic development. Humans and other mammals are unable to synthesize folate de novo and obtain it from exogenous sources via intestinal absorption. Recently the gut microbiota has been identified as an important source of folates, and the selection and use of folate-producing microorganisms represents an innovative strategy to increase human folate levels. The aim of this thesis was to gain a fundamental understanding of folate metabolism in Bifidobacterium adolescentis. The work was subdivided into three main phases, also aimed at solving different problems encountered when working with Bifidobacterium strains. First, a new identification method (based on PCR-RFLP of the hsp60 gene) was specifically developed to identify Bifidobacterium strains. Secondly, Bifidobacterium adolescentis biodiversity was explored in order to select strains representative of this species to be screened for their folate production ability. Results showed that this species is characterized by wide variability and support the idea that a new taxonomic re-organization may be required. Finally, B. adolescentis folate metabolism was studied using a double approach: a quantitative analysis of folate content was complemented by the examination of the expression levels of genes involved in folate-related pathways. For the normalization required to increase the robustness of the qRT-PCR analysis, an appropriate set of reference genes was tested using two different algorithms. Results demonstrate that B. adolescentis strains may represent an endogenous source of natural folate and could be used to fortify fermented dairy products. This bio-fortification strategy presents many advantages for the consumer, providing native folate forms that are more bioavailable and not implicated in the ongoing controversy concerning the safety of high intakes of synthetic folic acid.
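To illustrate the reference-gene normalization step of such a qRT-PCR analysis, the sketch below computes relative expression with the standard 2^-ddCt approach, normalizing against several reference genes at once; the gene set, Ct values and conditions are hypothetical, and the thesis's own algorithm choices may differ.

```python
# Minimal 2^-ddCt sketch with multi-reference-gene normalization (toy Ct values).
from statistics import mean

def relative_expression(ct_target, ct_refs, ct_target_ctrl, ct_refs_ctrl):
    """Fold change of a target gene vs a control condition. Averaging the
    reference-gene Ct values is equivalent to normalizing by the geometric
    mean of their expression levels (a geNorm-style normalization factor)."""
    d_ct = ct_target - mean(ct_refs)
    d_ct_ctrl = ct_target_ctrl - mean(ct_refs_ctrl)
    return 2.0 ** -(d_ct - d_ct_ctrl)

# Hypothetical Ct values: folate-pathway gene in test vs control growth medium.
fold = relative_expression(
    ct_target=22.1, ct_refs=[18.0, 19.2, 17.5],
    ct_target_ctrl=24.0, ct_refs_ctrl=[18.1, 19.0, 17.6],
)
print(f"Relative expression (fold change): {fold:.2f}")
```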

Relevance:

30.00%

Abstract:

This thesis presents ORC technology, its advantages and the related problems. In particular, it provides an analysis of ORC waste heat recovery systems in different and innovative scenarios, with cases ranging from the largest to the smallest scale. Both industrial and residential ORC applications are considered, and in both the installation of a subcritical, recuperated ORC system is examined. Moreover, heat recovery is considered in the absence of an intermediate heat transfer circuit; this solution improves the recovery efficiency but requires safety precautions. Possible integrations of ORC systems with renewable sources are also presented and investigated in order to improve the exploitation of non-programmable sources. In particular, the offshore oil and gas sector has been selected as a promising industrial large-scale ORC application. Starting from the design of ORC systems coupled with Gas Turbines (GTs) as topping systems, the dynamic behaviour of the innovative GT+ORC combined cycles has been analyzed by developing a dynamic model of all the components considered; the dynamic behaviour is caused by the integration with a wind farm. The electric and thermal aspects have been examined to identify the advantages related to the installation of the waste heat recovery system. Moreover, an experimental test rig has been built to test the performance of a micro-scale ORC prototype. The prototype recovers heat from a low-temperature water stream, available for instance from industrial or residential waste heat. In the test bench, various sensors have been installed and an acquisition system has been developed in the LabVIEW environment to completely analyze the ORC behaviour. Data collected in real time and corresponding to the system's dynamic behaviour have been used to evaluate the system performance on the basis of selected indexes. Moreover, various steady-state operating conditions have been identified and operating maps have been produced for a complete characterization of the system and to detect the optimal operating conditions.
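As an example of the kind of performance index such a test bench can provide, the sketch below computes the recovered heat, net power and first-law efficiency of a micro-ORC from hot-water-side and electrical measurements; the sensor readings are hypothetical, not data from the prototype.

```python
# Minimal sketch of ORC performance indexes from test-bench measurements (toy data).
CP_WATER = 4186.0  # J/(kg K)

def orc_indexes(m_dot_hot, t_hot_in, t_hot_out, w_pump, w_expander):
    """First-law indexes from the hot-water side and electrical measurements."""
    q_in = m_dot_hot * CP_WATER * (t_hot_in - t_hot_out)  # heat recovered, W
    w_net = w_expander - w_pump                            # net power, W
    return q_in, w_net, w_net / q_in

q_in, w_net, eta = orc_indexes(
    m_dot_hot=0.8,      # kg/s, hot water flow
    t_hot_in=85.0,      # degC at the evaporator inlet
    t_hot_out=70.0,     # degC at the evaporator outlet
    w_pump=250.0,       # W absorbed by the feed pump
    w_expander=3200.0,  # W produced by the expander
)
print(f"Q_in = {q_in/1e3:.1f} kW, W_net = {w_net/1e3:.2f} kW, eta = {eta*100:.1f}%")
```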

Relevance:

30.00%

Abstract:

The present thesis focuses on the on-fault slip distribution of large earthquakes in the framework of tsunami hazard assessment and tsunami warning improvement. It is widely known that ruptures on seismic faults are strongly heterogeneous, and in the case of tsunamigenic earthquakes the slip heterogeneity strongly influences the spatial distribution of the largest tsunami effects along the nearest coastlines. Unfortunately, after an earthquake occurs, the so-called finite-fault models (FFM) describing the coseismic on-fault slip pattern become available over time scales that are incompatible with early tsunami warning purposes, especially in the near field. Our work aims to characterize the slip heterogeneity in a fast, yet still adequate, way. Using finite-fault models to build a starting dataset of seismic events, the characteristics of the fault planes are studied with respect to magnitude. The patterns of the slip distribution on the rupture plane, analysed with a cluster identification algorithm, reveal a preferential single-asperity representation that can be approximated by a two-dimensional Gaussian slip distribution (2D GD). The goodness of the 2D GD model is compared to that of other distributions used in the literature, and its ability to represent the slip heterogeneity in the form of the main asperity is demonstrated. The magnitude dependence of the 2D GD parameters is investigated and turns out to be of primary importance from an early warning perspective. The Gaussian model is applied to the 16 September 2015 Illapel, Chile, earthquake and used to compute early tsunami predictions that compare satisfactorily with the available observations. The fast computation of the 2D GD and its suitability for representing the slip complexity of the seismic source make it a useful tool for tsunami early warning assessments, especially as concerns the near field.
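A minimal sketch of the kind of parameterization involved: the function below evaluates a two-dimensional Gaussian slip distribution on a rectangular fault plane and rescales it so that its integrated moment matches a target seismic moment. The fault dimensions, rigidity and asperity parameters are illustrative assumptions, not the thesis's magnitude-dependent scaling relations.

```python
# Minimal sketch of a single-asperity 2D Gaussian slip distribution (toy values).
import numpy as np

def gaussian_slip(L, W, x0, y0, sx, sy, m0, mu=30e9, n=(200, 100)):
    """Slip field (m) on an L x W fault plane (m) with one Gaussian asperity,
    rescaled so that mu * integral(slip dA) equals the seismic moment m0."""
    x = np.linspace(0.0, L, n[0])
    y = np.linspace(0.0, W, n[1])
    X, Y = np.meshgrid(x, y)
    shape = np.exp(-((X - x0) ** 2 / (2 * sx**2) + (Y - y0) ** 2 / (2 * sy**2)))
    dA = (x[1] - x[0]) * (y[1] - y[0])
    slip = shape * (m0 / (mu * shape.sum() * dA))   # rescale to the target moment
    return X, Y, slip

# Toy Mw ~ 8.3 event: 200 km x 100 km plane, asperity off-centre along strike.
m0 = 10 ** (1.5 * 8.3 + 9.1)                        # seismic moment, N m
X, Y, slip = gaussian_slip(200e3, 100e3, 70e3, 50e3, 40e3, 25e3, m0)
print(f"Peak slip (toy example): {slip.max():.1f} m")
```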

Relevance:

30.00%

Abstract:

In the last few decades, scientific evidence has pointed out the health-beneficial effects of phenolic compounds in foods, including a decrease in the risk of developing degenerative and chronic diseases known to be caused by oxidative stress. The research carried out during my PhD thesis fits within this framework and concerns the phytochemical investigation of the phenolic composition of sweet cherries (Prunus avium L.), apple fruits (Malus domestica L.) and quinoa seeds (Chenopodium quinoa Willd.). The first project focused on the investigation of the phytochemical profile and nutraceutical value of fruits of new sweet cherry cultivars. Their phenolic profile and antioxidant activity were investigated and compared with those of commonly commercialized cultivars, and their nutraceutical value was evaluated in terms of antioxidant/neuroprotective capacity in neuron-like SH-SY5Y cells, in order to investigate their ability to counteract oxidative stress and/or the neurodegeneration process. The second project focused on the phytochemical analysis of phenolic compounds in apples of ancient cultivars, with the aim of selecting the most diverse cultivars, which will then be assayed for their anti-carcinogenic and anti-proliferative activities against hepato-biliary and pancreatic tumours. The third project focused on the analysis of the polyphenolic pattern of seeds of two quinoa varieties grown at different latitudes. Analysis of the phenolic profile and of the in vitro antioxidant activity of the seed extracts, both in their free and soluble-conjugated forms, showed that the accumulation of some classes of flavonoids is strictly regulated by environmental factors, even though the overall antioxidant capacity does not differ between quinoa Regalona grown in Chile and in Italy. During the internship period carried out at the Department of Organic Chemistry of the Universidad Autónoma de Madrid (UAM), two pentacyclic triterpenoids were isolated from an endemic Peruvian plant, Jatropha macrantha Müll. Arg., using a bio-guided fractionation technique.

Relevance:

30.00%

Abstract:

This work is part of a project promoted by the Emilia-Romagna region that aims at encouraging research activities in order to support the innovation strategies of the regional economic system through the exploitation of new data sources. To this end, a database containing administrative data is provided by the Municipality of Bologna, obtained by linking data from the Register Office of the Municipality with fiscal data from the tax returns submitted to the Revenue Agency and released by the Ministry of Economy and Finance for the period 2002-2017. The main purpose of the project is the analysis of the medium-term financial and distributional trends of the income of the citizens residing in the Municipality of Bologna. Exploiting this innovative data source allows us to analyse the dynamics of income at the municipal level, overcoming the lack of information in official survey-based statistics. We investigate these trends by building inequality indicators and by examining the persistence of in-work poverty. Our results represent an important informative element for improving the effectiveness and equity of welfare policies at the local level, and for guiding the distribution of economic and social support and urban redevelopment interventions across the different areas of the Municipality.
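The abstract does not specify which inequality indicators were built; as one standard example, the sketch below computes the Gini index on simulated income distributions for two years. The lognormal toy incomes are assumptions used only to show the calculation.

```python
# Minimal sketch of one common inequality indicator, the Gini index (toy incomes).
import numpy as np

def gini(incomes):
    """Gini index from individual incomes (0 = perfect equality, 1 = maximum inequality)."""
    x = np.sort(np.asarray(incomes, dtype=float))
    n = x.size
    total = x.sum()
    # Rank-weighted formulation of the Gini index for sorted incomes.
    return (2.0 * np.sum(np.arange(1, n + 1) * x) - (n + 1) * total) / (n * total)

incomes_2002 = np.random.default_rng(1).lognormal(mean=9.8, sigma=0.55, size=10_000)
incomes_2017 = np.random.default_rng(2).lognormal(mean=9.9, sigma=0.65, size=10_000)
print(f"Gini 2002 (toy): {gini(incomes_2002):.3f}")
print(f"Gini 2017 (toy): {gini(incomes_2017):.3f}")
```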

Relevance:

30.00%

Abstract:

The high quality of protected designation of origin (PDO) dry-cured pork products depends largely on the chemical and physical parameters of the fresh meat and on their variation during the production process of the final product. The discovery of the mechanisms that regulate the variability of these parameters has been aided by the swine reference genome used in combination with genetic analysis methods. This thesis contributes to the discovery of genetic mechanisms that regulate the variability of some quality parameters of fresh meat for PDO dry-cured pork production. The first study, on gene expression, showed that between low and high glycolytic potential (GP) samples of Semimembranosus muscle of Italian Large White (ILW) pigs in the early postmortem, all but one of the differentially expressed genes were over-expressed in the low-GP samples. These genes were involved in ATP biosynthesis processes, calcium homeostasis, and lipid metabolism, and included the potential master regulator gene Peroxisome Proliferator-Activated Receptor Alpha (PPARA). The second study, in commercial hybrid pigs, evaluated correlations between carcass and fresh ham traits, including carcass and fresh ham lean meat percentages, the former being a potential predictor of the latter. In addition, a genome-wide association study allowed the identification of chromosome-wide associations with the phenotypic traits for 19 SNPs, and genome-wide associations for 14 SNPs for ferrochelatase activity; the latter could be a determinant of colour variation in nitrite-free dry-cured ham. The third study showed gene expression differences in the Longissimus thoracis muscle of ILW pigs fed diets with extruded linseed (a source of polyunsaturated fatty acids) supplemented with either vitamin E and selenium (diet three) or natural antioxidants (diet four). Diet three promoted a more rapid and massive immune system response, possibly determined by an improvement in muscle tissue function, while diet four promoted oxidative stability and increased the anti-inflammatory potential of the muscle tissue.
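For orientation, the sketch below shows the single-SNP association scan that underlies a genome-wide association study: each SNP's allele count is regressed against the phenotype and the resulting p-values are compared with a significance threshold. The genotype/phenotype data are simulated, and real pipelines add covariates, relatedness corrections and chromosome- vs genome-wide thresholds not shown here.

```python
# Schematic single-SNP association scan on toy genotype/phenotype data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_animals, n_snps = 500, 1000

geno = rng.integers(0, 3, size=(n_animals, n_snps)).astype(float)  # 0/1/2 allele counts
pheno = 0.4 * geno[:, 42] + rng.normal(size=n_animals)             # one truly associated SNP

pvals = np.empty(n_snps)
for j in range(n_snps):
    slope, intercept, r, p, se = stats.linregress(geno[:, j], pheno)
    pvals[j] = p

bonferroni = 0.05 / n_snps
print("SNPs passing the Bonferroni threshold:", np.where(pvals < bonferroni)[0])
```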

Relevance:

30.00%

Abstract:

Frame. Assessing the difficulty of source texts and parts thereof is important in CTIS, whether for research comparability, for didactic purposes or for setting price differences in the market. In order to measure it empirically, Campbell & Hale (1999) and Campbell (2000) developed the Choice Network Analysis (CNA) framework. Basically, the CNA's main hypothesis is that the more translation options (a group of) translators have to render a given source text stretch, the higher the difficulty of that text stretch will be. We will call this the CNA hypothesis. In a nutshell, this research project puts the CNA hypothesis to the test and studies whether it actually measures difficulty. Data collection. Two groups of participants (n=29) with different profiles and from two universities in different countries had three translation tasks keylogged with Inputlog, and filled in pre- and post-translation questionnaires. Participants translated from English (L2) into their L1s (Spanish or Italian), and worked—first in class and then at home—using their own computers, on texts ca. 800–1000 words long. Each text was translated in approximately equal halves in two 1-hour sessions, in three consecutive weeks. Only the parts translated at home were considered in the study. Results. A very different picture emerged from the data than the one the CNA hypothesis would predict: there was no prevalence of disfluent task segments when there were many translation options, nor a prevalence of fluent task segments associated with fewer translation options. Indeed, there was no correlation between the number of translation options (many vs few) and behavioral fluency. Additionally, there was no correlation between pauses and either behavioral fluency or typing speed. The theoretical flaws discussed and the empirical evidence lead to the conclusion that the CNA framework does not and cannot measure text and translation difficulty.
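To make the reported null result concrete, the sketch below shows one way such a correlation could be checked: a Spearman correlation between a many-vs-few options coding of task segments and a behavioral fluency measure. The segment data, the fluency operationalization and the coding are hypothetical stand-ins, not the study's variables.

```python
# Sketch of the kind of correlation check behind the reported null result (toy data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_segments = 200

many_options = rng.integers(0, 2, size=n_segments)          # 0 = few, 1 = many translation options
fluency = rng.normal(loc=3.0, scale=0.8, size=n_segments)   # e.g. characters typed per second

rho, p = stats.spearmanr(many_options, fluency)
print(f"Spearman rho = {rho:.3f}, p = {p:.3f}")
```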