7 results for Literature data
in AMS Tesi di Laurea - Alm@DL - Università di Bologna
Abstract:
The main objective of this thesis was to assess the technical-economic feasibility of an electricity generation system integrated with CCS. The policy framework of the work reflects the recent attention that has been directed, at the political level, towards the use of CCS technologies as a means of addressing current climate change. Several technological options have been proposed to stabilize and reduce the atmospheric concentration of carbon dioxide (CO2); among these, the most promising according to the IPCC (Intergovernmental Panel on Climate Change) are the CCS technologies (Carbon Capture and Storage / Carbon Capture and Sequestration). The remedy proposed for large stationary CO2 sources, such as thermoelectric power plants, is to capture the CO2 from the flue gas and store it in deep subsurface geological formations (at depths greater than 800 meters). New studies are being developed to support the identification of potential CO2 storage reservoirs in Italy and in Europe within GeoCapacity (a European database). The literature data analyzed show that most of the CO2 emitted from large stationary sources comes from electricity generation processes (78% of total emissions), and especially from those using coal (about 60%). CCS aims to return the carbon "to the sender", the ground, in its oxidized form (CO2) after mankind has burned it starting from its reduced forms (CH4, oil and coal). Injected into the subsurface, carbon dioxide is therefore not a "pollutant"; rather, CO2 is an acid reagent that interacts with the rock and the underground fluids, depending on the characteristics of the host rock. The results showed that CCS technologies are very urgent, because there are too many active industrial CO2 sources (power plants, refineries, cement plants, steel mills) in the world, which are driving the atmospheric CO2 concentration too quickly towards levels that are not acceptable for our planet.
Abstract:
The goal of this thesis is the application of opto-electronic numerical simulation to heterojunction silicon solar cells featuring an all-back-contact architecture (Interdigitated Back Contact Heterojunction, IBC-HJ). The studied structure exhibits both metal contacts, emitter and base, at the back surface of the cell, with the objective of reducing the optical losses due to the shadowing by the front contact of conventional photovoltaic devices. Overall, IBC-HJ cells are promising low-cost alternatives to monocrystalline wafer-based solar cells featuring front and back contact schemes; in fact, for IBC-HJ the high-concentration doping diffusions are replaced by low-temperature deposition processes of thin amorphous silicon layers. Furthermore, another advantage of IBC solar cells with respect to conventional architectures is the possibility of low-cost assembly of photovoltaic modules, since all contacts are on the same side. A preliminary, extensive literature survey was helpful to highlight the specific critical aspects of IBC-HJ solar cells, as well as the state of the art of their modeling and processing and the performance of practical devices. In order to perform the analysis of IBC-HJ devices, a two-dimensional (2-D) numerical simulation flow has been set up. A commercial device simulator based on the finite-difference method (Sentaurus Device by Synopsys) has been adopted to numerically solve the whole set of equations governing electrical transport in semiconductor materials. The first activity carried out during this work was the definition of a 2-D geometry corresponding to the simulation domain and the specification of the electrical and optical properties of the materials. In order to calculate the main figures of merit of the investigated solar cells, the spatially resolved photon absorption rate map has been calculated by means of an optical simulator. Optical simulations have been performed using two different methods, depending on the geometrical features of the front interface of the solar cell: the transfer matrix method (TMM) and ray tracing (RT). The first method models light propagation by plane waves within one-dimensional spatial domains, under the assumption that the device consists of a stack of parallel layers with planar interfaces. In addition, TMM is suitable for simulating thin multi-layer anti-reflection coatings that reduce the amount of light reflected at the front interface. Ray tracing is required for three-dimensional optical simulations of upright-pyramid textured surfaces, which are widely adopted to significantly reduce the reflection at the front surface. The optical generation profiles are interpolated onto the electrical grid adopted by the device simulator, which solves the carrier transport equations coupled with the Poisson and continuity equations in a self-consistent way. The main figures of merit are calculated by post-processing the output data of the device simulation. After validating the simulation methodology by comparing simulation results with literature data, the ultimate efficiency of the IBC-HJ architecture has been calculated. Accounting for all optical losses, IBC-HJ solar cells reach a theoretical maximum efficiency above 23.5% (without texturing at the front interface), higher than that of both standard homojunction crystalline silicon (Homogeneous Emitter, HE) and front-contact heterojunction (Heterojunction with Intrinsic Thin layer, HIT) solar cells.
However, it is clear that the critical aspects of this structure are mainly due to the defect density and to the poor carrier mobility in the amorphous silicon layers. Lastly, the influence of the most critical geometrical and physical parameters on the main figures of merit has been investigated by applying the numerical simulation tool set up during the first part of the thesis. Simulations have highlighted that the carrier mobility and the defect density in amorphous silicon may lead to a potentially significant reduction of the conversion efficiency.
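As an aside, the transfer matrix method mentioned above can be illustrated with a minimal Python sketch that computes the normal-incidence reflectance of a planar layer stack. The layer indices and thicknesses below are arbitrary example values, not the simulated IBC-HJ structure, and the thesis itself relies on Sentaurus Device rather than on a standalone script of this kind.

    import numpy as np

    def layer_matrix(n, d, wl):
        """Characteristic matrix of one homogeneous layer at normal incidence."""
        delta = 2 * np.pi * n * d / wl                    # phase thickness
        return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                         [1j * n * np.sin(delta), np.cos(delta)]])

    def reflectance(n_in, layers, n_sub, wl):
        """Reflectance of a planar stack; layers = [(n, d), ...] front to back."""
        M = np.eye(2, dtype=complex)
        for n, d in layers:
            M = M @ layer_matrix(n, d, wl)
        B, C = M @ np.array([1.0, n_sub])                 # optical admittance form
        r = (n_in * B - C) / (n_in * B + C)
        return abs(r) ** 2

    # Example: a single ~75 nm coating with n = 2.0 on silicon (n = 3.9) at 600 nm,
    # close to a quarter-wave anti-reflection layer, so R comes out nearly zero.
    print(reflectance(1.0, [(2.0, 75e-9)], 3.9, 600e-9))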
Abstract:
This work assesses the environmental impact of a municipal solid waste incinerator with energy recovery in the Forlì-Cesena province (Emilia-Romagna region, Italy). The methodology used is Life Cycle Assessment (LCA). As the plant already applies the best available technologies in waste treatment, this study focuses on the fate of the residues (bottom and fly ash) produced during combustion. Nine scenarios are defined, based on different ash disposal/recycling techniques. The functional unit is the amount of waste incinerated in 2011. The boundaries are set from the arrival of the waste at the plant to the disposal/recovery of the residues produced, with energy recovery. Only the operative period is considered. The software used is GaBi 4 and the LCIA method is CML2001. The impact categories analyzed are: abiotic depletion, acidification, eutrophication, freshwater aquatic ecotoxicity, global warming, human toxicity, ozone layer depletion, photochemical oxidant formation, terrestrial ecotoxicity and primary energy demand. Most of the data are taken from Herambiente; when primary data are not available, data from the Ecoinvent and GaBi databases or literature data are used. The whole incineration process is sustainable, owing to the significant avoided impact provided by the co-generator. As regards bottom ash treatment, the most influential process is the impact saving from iron recovery. Recycling bottom ash in road construction or as a building material are both valid alternatives, even if the first option faces legislative limits in Italy. Regarding fly ash inertization, the addition of cement and the Ferrox treatment prove to be the most feasible alternatives; however, the inertized fly ash can retain its hazardous nature. The only method to ensure the stability of inertized fly ash is to couple two different stabilization treatments. Ash stabilization technologies should improve at the same pace as the flexibility of the national legislation on the recycling of incineration residues.
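As a toy illustration of how the avoided burdens mentioned above (energy from the co-generator, recovered iron) are netted against the direct burdens of a scenario, the Python sketch below sums contributions per impact category. Every number is a made-up placeholder, not data from the study, which was carried out with GaBi 4 and the CML2001 method.

    def net_impact(processes):
        """Sum direct and (negative) avoided burdens for each impact category."""
        totals = {}
        for flows in processes.values():
            for category, value in flows.items():
                totals[category] = totals.get(category, 0.0) + value
        return totals

    # Hypothetical scenario: bottom ash recycled in road construction.
    # GWP in kg CO2-eq, AP in kg SO2-eq; values are purely illustrative.
    scenario = {
        "incineration (stack emissions)":     {"GWP": 1.0e8,  "AP": 2.0e5},
        "bottom ash to road construction":    {"GWP": 2.0e6,  "AP": 1.0e3},
        "avoided electricity (co-generator)": {"GWP": -6.0e7, "AP": -1.5e5},
        "avoided iron (scrap recovery)":      {"GWP": -8.0e6, "AP": -2.0e4},
    }
    print(net_impact(scenario))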
Abstract:
Ontology design and population, core aspects of semantic technologies, have recently become fields of great interest due to the increasing need for domain-specific knowledge bases that can boost the use of the Semantic Web. For building such knowledge resources, the state-of-the-art tools for ontology design require a lot of human work. Producing meaningful schemas and populating them with domain-specific data is in fact a very difficult and time-consuming task, even more so if the task consists in modelling knowledge at web scale. The primary aim of this work is to investigate a novel and flexible methodology for automatically learning ontologies from textual data, lightening the human workload required for conceptualizing domain-specific knowledge and populating the extracted schema with real data, and speeding up the whole ontology production process. Here computational linguistics plays a fundamental role, from automatically identifying facts in natural language and extracting frames of relations among recognized entities, to producing linked data with which existing knowledge bases can be extended or new ones created. In the state of the art, automatic ontology learning systems are mainly based on plain pipelined linguistic classifiers performing tasks such as Named Entity Recognition, Entity Resolution, Taxonomy and Relation Extraction [11]. These approaches present some weaknesses, especially in capturing the structures through which the meaning of complex concepts is expressed [24]. Humans, in fact, tend to organize knowledge in well-defined patterns, which include participant entities and meaningful relations linking entities with each other. In the literature, these structures have been called Semantic Frames by Fillmore [20] or, more recently, Knowledge Patterns [23]. Some NLP studies have recently shown the possibility of performing more accurate deep parsing with the ability to logically understand the structure of discourse [7]. In this work, some of these technologies have been investigated and employed to produce accurate ontology schemas. The long-term goal is to collect large amounts of semantically structured information from the web of crowds, through an automated process, in order to identify and investigate the cognitive patterns used by humans to organize their knowledge.
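To make the linked-data production step more concrete, here is a minimal, hypothetical Python sketch that turns extracted (subject, relation, object) frames into RDF triples with rdflib. The namespace, entities and relations are invented for illustration and do not reproduce the actual pipeline described in the thesis.

    from rdflib import Graph, Namespace, RDF

    EX = Namespace("http://example.org/ontology/")   # hypothetical namespace

    def frame_to_triples(graph, subject, relation, obj):
        """Map one extracted (subject, relation, object) frame to RDF triples."""
        s, o = EX[subject], EX[obj]
        graph.add((s, RDF.type, EX.Entity))
        graph.add((o, RDF.type, EX.Entity))
        graph.add((s, EX[relation], o))

    g = Graph()
    # Frames as they might be emitted by an upstream NER / relation-extraction step
    for frame in [("Bologna", "locatedIn", "Italy"), ("Dante", "bornIn", "Florence")]:
        frame_to_triples(g, *frame)
    print(g.serialize(format="turtle"))              # a str in rdflib >= 6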
Abstract:
Network Theory is a prolific and lively field, especially where it meets Biology. New concepts from this theory find application in areas where extensive datasets are already available for analysis, without the need to invest money to collect them. The only tools necessary to accomplish an analysis are easily accessible: a computing machine and a good algorithm. As these two tools progress, thanks to technological advancement and human effort, wider and wider datasets can be analysed. The aim of this work is twofold. Firstly, to provide an overview of one of these concepts, which originates at the meeting point between Network Theory and Statistical Mechanics: the entropy of a network ensemble. This quantity has been described from different angles in the literature, and our approach tries to be a synthesis of the different points of view. The second part of the work is devoted to presenting a parallel algorithm that can evaluate this quantity over an extensive dataset. Finally, the algorithm will also be used to analyse high-throughput data coming from biology.
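As a concrete example of the quantity discussed above, the sketch below evaluates the Shannon entropy of a network ensemble in the common special case of statistically independent edges, S = -sum over pairs (i, j) of [p_ij ln p_ij + (1 - p_ij) ln(1 - p_ij)]. This is one standard formulation rather than necessarily the one adopted in the thesis, and the vectorized NumPy evaluation is only a stand-in for the parallel algorithm developed there.

    import numpy as np

    def ensemble_entropy(p):
        """Entropy (natural-log units) of an independent-edge network ensemble.
        p is a symmetric matrix of edge probabilities with a zero diagonal."""
        iu = np.triu_indices_from(p, k=1)      # count each undirected pair once
        q = p[iu]
        q = q[(q > 0) & (q < 1)]               # deterministic edges add no entropy
        return -np.sum(q * np.log(q) + (1 - q) * np.log(1 - q))

    # Example: an Erdos-Renyi-like ensemble of 100 nodes with uniform p = 0.1
    n, prob = 100, 0.1
    P = np.full((n, n), prob)
    np.fill_diagonal(P, 0.0)
    print(ensemble_entropy(P))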
Abstract:
This thesis offers a practical and theoretical evaluation of gossip-epidemic algorithms, comparing those most common in the literature with newly proposed algorithms and analyzing their behavior. Tests have been executed on one hundred graphs randomly generated by the Large Unstructured NEtwork Simulator (LUNES), a simulation software provided by the Parallel and Distributed Simulation Research Group (PADS) of the Department of Computer Science, Università di Bologna, and simulated using the Advanced RTI System (ARTÌS), based on the High Level Architecture standard. The literature algorithms have been analyzed and taken as a basis for the new algorithms.
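For illustration, the following Python sketch simulates a basic push-style gossip dissemination on a randomly generated graph. It is a simplified stand-in for the LUNES/ARTÌS simulations described above: the graph model, fan-out of one message per round and stopping rule are arbitrary assumptions, not the protocols actually compared in the thesis.

    import random

    def random_graph(n, p, seed=0):
        """Erdos-Renyi G(n, p) graph as an adjacency list."""
        rng = random.Random(seed)
        adj = {v: set() for v in range(n)}
        for u in range(n):
            for v in range(u + 1, n):
                if rng.random() < p:
                    adj[u].add(v)
                    adj[v].add(u)
        return adj

    def push_gossip(adj, source=0, max_rounds=50, seed=0):
        """Each round, every informed node pushes the message to one random
        neighbour; returns the fraction of informed nodes after each round."""
        rng = random.Random(seed)
        informed = {source}
        coverage = []
        for _ in range(max_rounds):
            for node in list(informed):
                if adj[node]:
                    informed.add(rng.choice(sorted(adj[node])))
            coverage.append(len(informed) / len(adj))
            if coverage[-1] == 1.0:
                break
        return coverage

    print(push_gossip(random_graph(100, 0.05)))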
Abstract:
The astrophysical context of this thesis project concerns the comprehension of the mutual interaction between the accretion onto a Super Massive Black Hole (SMBH) and the Star Formation (SF) that takes place in the host galaxy. This is one of the key topics of modern extragalactic astrophysical research: indeed, it is widely accepted that, to understand the physics of a galaxy, the contribution of a possible central AGN must be taken into account. The aim of this thesis is the study of the physical processes of the nearby Seyfert galaxy NGC 34. This source was selected because of the wide collection of multiwavelength data available in the literature and because it has recently been observed with the Atacama Large Millimeter/submillimeter Array (ALMA) in Band 9. The project is divided into two main parts: first of all, we reduced and analyzed the ALMA data, obtaining the continuum and CO(6-5) maps; then, we looked for a coherent explanation of the physical characteristics of NGC 34. In particular, we focused on the physics of the ISM, in order to understand its properties in terms of density, chemical composition and dominant radiation field (SF or accretion). This has been done through the analysis of the CO Spectral Line Energy Distribution (CO SLED), i.e. the distribution of several CO transitions as a function of the transition number, obtained by joining the CO(6-5) line with other transitions available in the literature. More precisely, the observed CO SLED has been compared with ISM models, including Photo-Dissociation Regions (PDRs) and X-ray-Dominated Regions (XDRs); these models have been obtained with the state-of-the-art photoionization code CLOUDY. Along with the observed CO SLED, we have taken into account other physical properties of NGC 34, such as the Star Formation Rate (SFR), the gas mass and the X-ray luminosity.
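As an illustration of how an observed CO SLED can be compared with PDR- and XDR-like model grids, the sketch below performs a simple chi-square fit with a free multiplicative normalization. Every number is an arbitrary placeholder: these are neither the NGC 34 measurements nor actual CLOUDY outputs.

    import numpy as np

    # Observed CO line fluxes (placeholder values), sampled e.g. at J_up = 1, 2, 3, 6
    obs_flux = np.array([1.0, 3.2, 5.5, 9.0])
    obs_err  = np.array([0.1, 0.3, 0.5, 0.9])

    # Model CO SLED shapes sampled at the same transitions (placeholder values)
    models = {
        "PDR-like": np.array([1.0, 3.0, 4.8, 4.0]),
        "XDR-like": np.array([1.0, 3.5, 6.0, 9.5]),
    }

    def best_fit(obs, err, model):
        """Chi-square of a model SLED with a free multiplicative scale factor."""
        w = 1.0 / err**2
        scale = np.sum(w * obs * model) / np.sum(w * model**2)   # analytic minimum
        chi2 = np.sum(w * (obs - scale * model)**2)
        return scale, chi2

    for name, sled in models.items():
        scale, chi2 = best_fit(obs_flux, obs_err, sled)
        print(f"{name}: scale = {scale:.2f}, chi2 = {chi2:.1f}")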