19 results for Agricultural systems modelling
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
In chapters 1 and 2, calcium hydroxide as an impregnation agent before steam explosion of sugarcane bagasse and switchgrass, respectively, was compared with auto-hydrolysis, assessing the effects on enzymatic hydrolysis and on simultaneous saccharification and fermentation (SSF) of the pretreated solid fraction at high solids concentration. In addition, anaerobic digestion of the pretreated liquid fraction was carried out, in order to appraise the effectiveness of calcium hydroxide before steam explosion in a more comprehensive way. As water is an expensive input both in the cultivation of biomass crops and in the subsequent pretreatment, chapter 3 addressed the effects of variable soil moisture on the growth and composition of biomass sorghum. Moreover, the effect of water stress was related to the characteristics of the stem juice for 1st generation ethanol and of the structural carbohydrates for 2nd generation ethanol. In chapter 1, calcium hydroxide proved to be a suitable catalyst for sugarcane bagasse before steam explosion, enhancing fibre deconstruction. In chapter 2, the effect of calcium hydroxide on switchgrass showed great potential when ethanol was the target, whereas acid addition produced a higher methane yield. Regarding chapter 3, the amounts of cellulose, hemicellulose and acid-insoluble lignin (AIL) changed during the crop cycle, causing a decrease in the 2G ethanol yield. The physical and chemical properties of the biomass led to a lower glucose yield and concentration at the end of enzymatic hydrolysis and, consequently, a lower 2G ethanol concentration at the end of SSF, proving that there is a strong relationship between structure, chemical composition, and fermentable sugar yield.
The significantly higher ethanol concentration at the early crop stage could be an important incentive to consider biomass sorghum as a second crop in the season, to be introduced into some agricultural systems, potentially benefiting farmers and, above all, avoiding an exacerbation of the fuel vs. food debate.
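The dependence of 2G ethanol on structural carbohydrates described above follows standard cellulosic-ethanol stoichiometry. The sketch below is not taken from the thesis; the conversion factors are textbook values and the composition and efficiency figures are hypothetical placeholders.

```python
# Minimal sketch of theoretical 2G ethanol potential from biomass composition.
# Conversion factors are standard stoichiometry; all input values are hypothetical.

def theoretical_ethanol_g(biomass_g, cellulose_frac, hydrolysis_eff, fermentation_eff):
    glucan_g = biomass_g * cellulose_frac
    # Hydrolysis adds one water per glucose unit: 180.16 / 162.14 g glucose per g glucan
    glucose_g = glucan_g * (180.16 / 162.14) * hydrolysis_eff
    # Fermentation yields at most 0.511 g ethanol per g glucose (2 EtOH + 2 CO2)
    return glucose_g * 0.511 * fermentation_eff

# Hypothetical example: 1 kg sorghum biomass, 40% cellulose,
# 80% hydrolysis efficiency, 90% fermentation efficiency
print(round(theoretical_ethanol_g(1000, 0.40, 0.80, 0.90), 1))  # 163.5
```

A lower cellulose fraction or hydrolysis efficiency at late crop stages directly lowers the attainable ethanol, which is the relationship the abstract reports.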
Abstract:
The challenges of current global food systems are often framed around feeding the world's growing population while meeting sustainable development goals for future generations. Globalization has led to a fragmentation of food spaces, resulting in a flexible and mutable supply chain. This poses a major challenge to food and nutrition security, and also affects rural-urban dynamics in territories. Furthermore, recent crises have highlighted the vulnerability of food systems and ecosystems to shocks and disruptions, due to the intensive management of natural, human and economic capital. Hence, a sustainable and resilient transition of food systems is required, through a multi-faceted approach that tackles the causes of unsustainability and promotes sustainable practices at all levels of the food system. In this respect, a territorial approach becomes a relevant entry point for analysing the food system's multifunctionality, and can support the evaluation of sustainability by quantifying impacts with quantitative methods and understanding the territorial responsibility of different actors with qualitative ones. Against this background, the present research aims to i) investigate the environmental, costing and social indicators suitable for a scoring system able to measure the integrated sustainability performance of food initiatives within the City/Region territorial context; ii) develop a territorial assessment framework to measure the sustainability impacts of agricultural systems; and iii) define an integrated methodology to match production and consumption at the territorial level, fostering a long-term vision of short food supply chains. From a methodological perspective, the research adopts a mixed quantitative and qualitative research method.
The outcomes provide an in-depth view of the environmental and socio-economic impacts of food systems at the territorial level, investigating possible indicators, frameworks, and business strategies to foster their future sustainable development.
Abstract:
Despite its great potential as a low- to medium-temperature waste heat recovery (WHR) solution, ORC technology presents open challenges that still prevent its diffusion in the market, and these differ depending on the application and the power size at stake. In the micro power range with low-temperature heat sources, ORC technology is still not mature, owing to the lack of appropriate machines and working fluids. In the medium to large size range, instead, the technology is already available, but the investment is still risky. The intention of this thesis is to address some topical themes in the ORC field, paying special attention to the development of reliable models based on realistic data and accounting for the off-design performance of the ORC system and of each of its components. Concerning the "Micro-generation" application, this work: i) explores the modelling methodology, the performance and the optimal parameters of reciprocating piston expanders; ii) investigates the performance of such expanders and of the whole micro-ORC system when using hydrofluorocarbons as working fluids or their new low-GWP alternatives and mixtures; iii) analyzes the innovative reversible ORC architecture (conceived for energy storage), its optimal regulation strategy and its potential when inserted in typical small industrial frameworks. Regarding the "Industrial WHR" sector, this thesis examines the WHR opportunity of ORCs, with a focus on natural gas compressor stations. This work provides information about all the parameters that can influence the optimal sizing, the performance and thus the feasibility of installing an ORC system. New WHR configurations are explored: i) a first one, relying on the replacement of a compressor prime mover with an ORC; ii) a second one, which consists of the use of a supercritical CO2 cycle as the heat recovery system.
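For reference, the design-point performance of a basic (non-regenerative) ORC is usually summarized by its first-law thermal efficiency, written from four cycle-state enthalpies. The function below is the generic textbook relation, not the thesis's off-design model, and the numerical enthalpies are made up for illustration.

```python
# First-law thermal efficiency of a simple ORC.
# States: 1 pump inlet, 2 pump outlet, 3 expander inlet, 4 expander outlet.
def orc_thermal_efficiency(h1, h2, h3, h4):
    w_expander = h3 - h4   # specific expander work
    w_pump = h2 - h1       # specific pump work
    q_in = h3 - h2         # specific heat input from the waste-heat source
    return (w_expander - w_pump) / q_in

# Hypothetical enthalpies in kJ/kg for a low-temperature organic working fluid
print(orc_thermal_efficiency(248.0, 250.0, 480.0, 455.0))  # 0.1
```

Efficiencies around 10% are typical of low-temperature organic cycles, which is why accurate off-design modelling of every component matters for feasibility.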
Abstract:
Power-to-Gas storage systems have the potential to address grid-stability issues that arise when an increasing share of power is generated from sources with highly variable output. Although proof-of-concept demonstrations have been promising, the behaviour of these processes in off-design conditions is not easily predictable. The primary aim of this PhD project was to evaluate the performance of an original Power-to-Gas system made up of innovative components. To achieve this, a numerical model has been developed to simulate the characteristics and behaviour of the several components when the whole system is coupled with a renewable source. The developed model has been applied to a large variety of scenarios, evaluating the performance of the considered process while exploiting a limited amount of experimental data. The model has then been used to compare different Power-to-Gas concepts in a realistic operating scenario. Several goals have been achieved. In the concept phase, the possibility of thermally integrating the high-temperature components has been demonstrated. Then, the parameters that affect the energy performance of a Power-to-Gas system coupled with a renewable source have been identified, providing general recommendations on the design of hybrid systems; these parameters are: 1) the ratio between the storage system size and the renewable generator size; 2) the type of coupled renewable source; 3) the related production profile. Finally, the comparative analysis highlights that configurations with a renewable source that is highly oversized with respect to the storage system show the maximum achievable profit.
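The first parameter listed above, the storage-to-generator sizing ratio, can be illustrated with a toy capture calculation: power above the storage system's rated input is curtailed. The profile and sizes below are hypothetical; the thesis works with full production profiles and a detailed component model.

```python
# Sketch: how the storage-to-generator sizing ratio affects the share of
# renewable energy a Power-to-Gas system can absorb. All values hypothetical.

def absorbed_fraction(profile_kw, storage_rated_kw):
    # Power above the storage system's rated input is curtailed
    absorbed = sum(min(p, storage_rated_kw) for p in profile_kw)
    return absorbed / sum(profile_kw)

wind_profile = [0, 40, 120, 200, 160, 80, 0, 0]   # hypothetical hourly output, kW
for ratio in (0.25, 0.5, 1.0):                     # storage size / generator peak (200 kW)
    print(ratio, round(absorbed_fraction(wind_profile, 200 * ratio), 2))
```

A small ratio wastes much of the variable production, while a ratio of 1 captures everything but leaves the storage system under-utilized most of the time, which is the trade-off behind the oversizing result above.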
Abstract:
Nowadays, technological advancements have brought industry and research towards the automation of various processes. Automation brings a reduction in costs and an improvement in product quality; for this reason, companies are pushing research to investigate new technologies. The agriculture industry has always looked towards automating various processes, from product processing to storage. In recent years, the automation of the harvest and cultivation phases has also become attractive, driven by advances in autonomous driving. Nevertheless, ADAS alone is not enough: merging different technologies will be the solution for obtaining total automation of agricultural processes. For example, sensors that estimate products' physical and chemical properties can be used to evaluate the maturation level of fruit. The fusion of these technologies therefore plays a key role in industrial process automation. In this dissertation, both ADAS and sensors for precision agriculture are treated. Several measurement procedures for characterizing commercial 3D LiDARs are proposed and tested, to cope with the growing need for comparison tools. Axial and transversal errors have been investigated. Moreover, a measurement method and setup for evaluating the effect of fog on 3D LiDARs is proposed. Each presented measurement procedure has been tested, and the obtained results highlight the versatility and effectiveness of the proposed approaches. Regarding precision agriculture sensors, a measurement approach for estimating the moisture content and density of crops directly in the field is presented. The approach relies on a near-infrared (NIR) spectrometer used jointly with partial least squares (PLS) statistical analysis. The approach and the model are described together with a first laboratory prototype used to evaluate the NIR spectroscopy approach. Finally, a prototype for in-field analysis is realized and tested.
The test results are promising, showing that the proposed approach is suitable for moisture content and density estimation.
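The calibration idea behind the NIR approach can be sketched in its simplest form: fit a model from a spectral reading to a laboratory-measured moisture value, then use it for prediction. The thesis uses multivariate PLS on full spectra; the univariate least-squares stand-in and all data below are purely illustrative.

```python
# Minimal stand-in for the NIR calibration step: a linear model from one
# absorbance reading to moisture content (%). The thesis applies multivariate
# PLS to full spectra; this univariate fit and the data are illustrative only.

def fit_line(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
    return slope, my - slope * mx

# Hypothetical calibration set: absorbance at a water band vs. measured moisture (%)
absorbance = [0.10, 0.15, 0.20, 0.25, 0.30]
moisture = [12.0, 17.0, 22.0, 27.0, 32.0]
slope, intercept = fit_line(absorbance, moisture)
print(round(slope * 0.18 + intercept, 1))  # predicted moisture at absorbance 0.18: 20.0
```

PLS extends this by projecting many correlated wavelengths onto a few latent variables before regression, which is what makes it suitable for full NIR spectra.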
Abstract:
Protected crop production is a modern and innovative approach to cultivating plants in a controlled environment, optimizing growth, yield, and quality. This method involves using structures such as greenhouses or tunnels to create a sheltered environment. These productive solutions are characterized by a careful regulation of variables like temperature, humidity, light, and ventilation, which collectively contribute to creating an optimal microclimate for plant growth. Heating, cooling, and ventilation systems are used to maintain optimal conditions for plant growth regardless of external weather fluctuations. Protected crop production plays a crucial role in addressing the challenges posed by climate variability, population growth, and food security. Similarly, animal husbandry involves providing adequate nutrition, housing, medical care and environmental conditions to ensure animal welfare. Sustainability is a critical consideration in all forms of agriculture, including protected crop and animal production. Sustainability in animal production refers to producing animal products in a way that minimizes negative impacts on the environment, promotes animal welfare, and ensures the long-term viability of the industry. The research activities performed during this PhD fall squarely within the field of Precision Agriculture and Livestock Farming. The focus here is on the computational fluid dynamics (CFD) approach and on environmental assessment, applied to improve yield, resource efficiency, environmental sustainability, and cost savings. This represents a significant shift from traditional farming methods to a more technology-driven, data-driven, and environmentally conscious approach to crop and animal production.
On one side, CFD is a powerful and precise technique for computer modelling and simulation of airflows and thermo-hygrometric parameters, and it has been applied here to optimize the growth environment of crops and the efficiency of ventilation in pig barns. On the other side, the sustainability aspect has been investigated and researched in terms of Life Cycle Assessment analyses.
Abstract:
Traditional software engineering approaches and metaphors fall short when applied to areas of growing relevance such as electronic commerce, enterprise resource planning, and mobile computing: such areas, in fact, generally call for open architectures that may evolve dynamically over time so as to accommodate new components and meet new requirements. This is probably one of the main reasons why the agent metaphor and the agent-oriented paradigm are gaining momentum in these areas. This thesis deals with the engineering of complex software systems in terms of the agent paradigm. This paradigm is based on the notions of agent and of systems of interacting agents as fundamental abstractions for designing, developing and managing, at runtime, typically distributed software systems. However, today the engineer often works with technologies that do not support the abstractions used in the design of the systems. For this reason, research on methodologies becomes a central point of scientific activity. Currently, most agent-oriented methodologies are supported by small teams of academic researchers; as a result, most of them are at an early stage and still in the context of mostly "academic" approaches to agent-oriented systems development. Moreover, such methodologies are not well documented, and are very often defined and presented by focusing only on specific aspects of the methodology. The role played by meta-models thus becomes fundamental for comparing and evaluating methodologies. In fact, a meta-model specifies the concepts, rules and relationships used to define a methodology. Although it is possible to describe a methodology without an explicit meta-model, formalising the underpinning ideas of the methodology in question is valuable when checking its consistency or planning extensions or modifications. A good meta-model must address all the different aspects of a methodology, i.e.
the process to be followed, the work products to be generated and those responsible for making all this happen. In turn, specifying the work products that must be developed implies defining the basic modelling building blocks from which they are built. As a building block, the agent abstraction alone is not enough to fully model all the aspects related to multi-agent systems in a natural way. In particular, different perspectives exist on the role that the environment plays within agent systems; it is clear, however, that all non-agent elements of a multi-agent system are typically considered to be part of the multi-agent system environment. The key role of the environment as a first-class abstraction in the engineering of multi-agent systems is today generally acknowledged in the multi-agent systems community, so the environment should be explicitly accounted for in the engineering of multi-agent systems, working as a new design dimension for agent-oriented methodologies. At least two main ingredients shape the environment: environment abstractions - entities of the environment encapsulating some function - and topology abstractions - entities of the environment that represent its (either logical or physical) spatial structure. In addition, the engineering of non-trivial multi-agent systems requires principles and mechanisms for managing the complexity of the system representation. These principles lead to the adoption of a multi-layered description, which designers can use to provide different levels of abstraction over multi-agent systems. The research in these fields has led to the formulation of a new version of the SODA methodology, where environment abstractions and layering principles are exploited for engineering multi-agent systems.
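The two environment "ingredients" named above can be sketched as first-class abstractions alongside agents. The class names below are illustrative placeholders, not SODA's actual vocabulary, and the sketch only shows the structural idea: functions live in environment entities, and agent movement is constrained by a topology.

```python
# Illustrative sketch of environment and topology abstractions as first-class
# entities next to agents. Class names are placeholders, not SODA terminology.

class EnvironmentAbstraction:
    """An environment entity encapsulating some function."""
    def __init__(self, name, function):
        self.name = name
        self.function = function

    def use(self, *args):
        return self.function(*args)

class Place:
    """A topology abstraction: one node of the (logical or physical) spatial structure."""
    def __init__(self, name):
        self.name = name
        self.neighbours = set()

    def connect(self, other):
        self.neighbours.add(other)
        other.neighbours.add(self)

class Agent:
    def __init__(self, name, place):
        self.name = name
        self.place = place

    def move_to(self, destination):
        # Movement is only possible along the topology's adjacencies
        if destination in self.place.neighbours:
            self.place = destination
            return True
        return False
```

Making topology explicit like this is what lets a methodology treat "where an agent can act" as a design decision rather than an implementation detail.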
Abstract:
This thesis describes modelling tools and methods suited for complex systems (systems that are typically represented by a plurality of models). The basic idea is that all models representing the system should be linked by well-defined model operations, in order to build a structured repository of information: a hierarchy of models. The port-Hamiltonian framework is a good candidate for this kind of problem, as it natively supports the most important model operations. The thesis in particular addresses the problem of integrating distributed parameter systems into a model hierarchy, and shows two possible mechanisms to do so: a finite-element discretization in port-Hamiltonian form, and a structure-preserving model order reduction for discretized models obtainable from commercial finite-element packages.
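The structural property that makes the port-Hamiltonian framework attractive is the form dx/dt = (J − R)·∇H(x), with J skew-symmetric (the energy-conserving interconnection) and R positive semi-definite (dissipation), which guarantees the energy H never increases along solutions. The toy integrator below, a damped oscillator, illustrates only this idea; it is not the thesis's finite-element discretization.

```python
# Minimal port-Hamiltonian sketch: dx/dt = (J - R) grad H(x), with J
# skew-symmetric and R >= 0. Example: damped harmonic oscillator with
# H(q, p) = (q**2 + p**2) / 2, so grad H(x) = x. Toy Euler integrator only.

def simulate(x, J, R, dt, steps):
    n = len(x)
    for _ in range(steps):
        g = list(x)  # grad H(x) = x for the quadratic Hamiltonian above
        x = [x[i] + dt * sum((J[i][k] - R[i][k]) * g[k] for k in range(n))
             for i in range(n)]
    return x

J = [[0.0, 1.0], [-1.0, 0.0]]   # skew-symmetric structure matrix
R = [[0.0, 0.0], [0.0, 0.5]]    # dissipation acting on the momentum
H = lambda x: (x[0] ** 2 + x[1] ** 2) / 2
x0 = [1.0, 0.0]
xT = simulate(x0, J, R, dt=1e-3, steps=2000)
print(H(xT) < H(x0))  # True: the structure dissipates energy
```

"Structure-preserving" discretization and model reduction mean exactly that the reduced or discretized model still has this (J, R) form, so the energy balance survives every operation in the model hierarchy.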
Abstract:
The presented study carried out an analysis of rural landscape changes. In particular, the study focuses on understanding the driving forces acting on the rural built environment, using a statistical spatial model implemented through GIS techniques. It is well known that the study of landscape changes is essential for conscious decision making in land planning. A literature review reveals a general lack of studies dealing with the modelling of the rural built environment; hence a theoretical modelling approach for this purpose is needed. Advances in technology and modernity in building construction and agriculture have gradually changed the rural built environment. In addition, the phenomenon of urbanization has determined the construction of new volumes beside abandoned or derelict rural buildings. Consequently, two main types of transformation dynamics affecting the rural built environment can be observed: the conversion of rural buildings and the increase in building numbers. The specific aim of the presented study is to propose a methodology for the development of a spatial model that allows the identification of the driving forces that acted on building allocation. In fact, one of the most concerning dynamics nowadays is the irrational expansion of building sprawl across the landscape. The proposed methodology is composed of several conceptual steps covering the different aspects of developing a spatial model: the selection of a response variable that best describes the phenomenon under study, the identification of possible driving forces, the sampling methodology for data collection, the choice of the most suitable algorithm in relation to the statistical theory and method used, and the calibration and evaluation of the model.
A different combination of factors in various parts of the territory generated more or less favourable conditions for building allocation, and the existence of buildings represents the evidence of such an optimum. Conversely, the absence of buildings expresses a combination of agents that is not suitable for building allocation. Presence or absence of buildings can therefore be adopted as indicators of such driving conditions, since they express the action of the driving forces in the land suitability sorting process. The existence of a correlation between site selection and hypothetical driving forces, evaluated by means of modelling techniques, provides evidence of which driving forces are involved in the allocation dynamic and an insight into their level of influence on the process. GIS software, by means of spatial analysis tools, allows the concepts of presence and absence to be associated with point features, generating a point process. Presence or absence of buildings at given site locations represents the expression of the interaction of these driving factors. In the case of presences, points represent the locations of real existing buildings; conversely, absences represent locations where buildings do not exist, and so they are generated by a stochastic mechanism. Possible driving forces are selected, and the existence of a causal relationship with building allocation is assessed through a spatial model. The adoption of empirical statistical models provides a mechanism for the analysis of the explanatory variables and for the identification of the key driving variables behind the site selection process for new building allocation. The model developed by following the methodology is applied to a case study to test the validity of the methodology. In particular, the study area for testing the methodology is the New District of Imola, characterized by a prevailing agricultural production vocation and where transformation dynamics occurred intensively.
The development of the model involved the identification of predictive variables (related to the geomorphologic, socio-economic, structural and infrastructural systems of the landscape) capable of representing the driving forces responsible for landscape changes. The calibration of the model was carried out on spatial data for the periurban and rural parts of the study area over the 1975-2005 period, by means of a generalized linear model. The resulting output of the model fit is a continuous grid surface whose cells assume probability values ranging from 0 to 1 for building occurrence across the rural and periurban parts of the study area. Hence the response variable assesses the changes in the rural built environment that occurred in this time interval, and is correlated to the selected explanatory variables by means of a generalized linear model using logistic regression. Comparing the probability map obtained from the model to the actual rural building distribution in 2005, the interpretation capability of the model can be evaluated. The proposed model can also be applied to the interpretation of trends in other study areas, and over different time intervals, depending on the availability of data. The use of suitable data in terms of time, information, and spatial resolution, and the costs related to data acquisition, pre-processing, and survey, are among the most critical aspects of model implementation. Future in-depth studies can focus on using the proposed model to predict short/medium-range future scenarios for the rural built environment distribution in the study area. In order to predict future scenarios, it is necessary to assume that the driving forces do not change and that their levels of influence within the model are not far from those assessed for the time interval used for calibration.
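The core statistical step above, logistic regression on presence/absence points yielding a 0-1 probability of building occurrence per location, can be sketched as follows. The data and the single covariate (distance to the road network, in km) are synthetic; the thesis model uses many territorial explanatory variables.

```python
import math

# Minimal sketch of the logistic-regression step: presence/absence points are
# regressed on explanatory variables, yielding a 0-1 probability of building
# occurrence. Data and the single covariate are synthetic placeholders.

def fit_logistic(X, y, lr=0.1, epochs=2000):
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - yi                      # gradient of the log-loss
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def probability(w, b, xi):
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic training set: buildings present near roads, absent far from them
X = [[0.1], [0.2], [0.4], [1.5], [1.8], [2.2]]   # distance to roads, km
y = [1, 1, 1, 0, 0, 0]                           # presence / absence
w, b = fit_logistic(X, y)
print(probability(w, b, [0.2]) > 0.5, probability(w, b, [2.0]) < 0.5)  # True True
```

Evaluating `probability` on every grid cell of the territory is what produces the continuous 0-1 probability surface described in the abstract.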
Abstract:
The last decades have seen a large effort by the scientific community to study and understand the physics of sea ice. We currently have a wide - even though still not exhaustive - knowledge of sea ice dynamics and thermodynamics and of their temporal and spatial variability. Sea ice biogeochemistry is instead largely unknown. Sea ice algal production may account for up to 25% of overall primary production in ice-covered waters of the Southern Ocean. However, the influence of physical factors, such as the location of ice formation, the role of snow cover and light availability, on sea ice primary production is poorly understood. There are only sparse, localized observations and little knowledge of the functioning of sea ice biogeochemistry at larger scales. Modelling then becomes an auxiliary tool to help qualify and quantify the role of sea ice biogeochemistry in ocean dynamics. In this thesis, a novel approach is used for modelling and coupling sea ice biogeochemistry - and in particular its primary production - to sea ice physics. Previous attempts were based on coupling rather complex sea ice physical models to empirical or relatively simple biological or biogeochemical models. Here, the focus is moved to a more biologically oriented point of view. A simple, yet comprehensive, physical model of sea ice thermodynamics (ESIM) was developed and coupled to a novel sea ice implementation (BFM-SI) of the Biogeochemical Flux Model (BFM). The BFM is a comprehensive model, widely used and validated in the open ocean environment and in regional seas. The physical model has been developed with the biogeochemical properties of sea ice in mind, together with the physical inputs required to model sea ice biogeochemistry.
The central concept of the coupling is the modelling of the Biologically-Active-Layer (BAL): the time-varying fraction of sea ice that is continuously connected to the ocean via brine pockets and channels and acts as a rich habitat for many microorganisms. The physical model provides the key physical properties of the BAL (e.g., brine volume, temperature and salinity), and the BFM-SI simulates the physiological and ecological response of the biological community to the physical environment. The new biogeochemical model is also coupled to the pelagic BFM through the exchange of organic and inorganic matter at the boundaries between the two systems. This is done by computing the entrapment of matter and gases when sea ice grows, and their release to the ocean when sea ice melts, so as to ensure mass conservation. The model was tested in different ice-covered regions of the world ocean to verify the generality of the parameterizations. The focus was particularly on regions of landfast ice, where primary production is generally large. The implementation of the BFM in sea ice and the coupling structure will add a new component to General Circulation Models (and in general to Earth System Models), which will then be able to provide adequate estimates of the role and importance of sea ice biogeochemistry in the global carbon cycle.
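The mass-conserving exchange idea described above, entrapment on growth and release on melt, can be illustrated with a toy tracer budget. The entrapment fraction, the linear release rule and all values below are hypothetical placeholders, not the BFM-SI parameterization.

```python
# Sketch of the conservative ice-ocean exchange idea: a tracer is entrapped
# when ice grows (dh > 0) and released when it melts (dh < 0), so the combined
# ice + ocean inventory is conserved. All coefficients are hypothetical.

def exchange(ice_tracer, ocean_tracer, h, dh, entrap_frac=0.3, ocean_conc=1.0):
    if dh > 0:                                    # growth: entrap from the ocean
        flux = entrap_frac * ocean_conc * dh      # flux into the ice
    else:                                         # melt: release the melted share
        flux = -ice_tracer * min(1.0, -dh / h)    # negative flux = ice -> ocean
    return ice_tracer + flux, ocean_tracer - flux, h + dh

ice, ocean, h = 0.0, 100.0, 1.0
ice, ocean, h = exchange(ice, ocean, h, +0.5)     # growth season
ice, ocean, h = exchange(ice, ocean, h, -1.5)     # complete melt
print(round(ice + ocean, 6), round(ice, 6))       # 100.0 0.0 -> mass conserved
```

However the entrapment and release rules are refined, the flux must appear with opposite signs in the two reservoirs; that antisymmetry is what guarantees conservation in the coupled system.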
Abstract:
In my last years of research, I focused my studies on different physiological problems. Together with my supervisors, I developed and improved different mathematical models in order to create valid tools for a better understanding of important clinical issues. The aim of all this work is to develop tools for learning and understanding cardiac and cerebrovascular physiology and pathology, generating research questions and developing clinical decision support systems useful for intensive care unit patients. I. ICP Model Designed for Medical Education. We developed a comprehensive cerebral blood flow and intracranial pressure model to simulate and study the complex interactions in cerebrovascular dynamics caused by multiple simultaneous alterations, including normal and abnormal functional states of cerebral autoregulation. Individual published equations (derived from prior animal and human studies) were implemented in a comprehensive simulation program. The normal physiological modelling included intracranial pressure, cerebral blood flow, blood pressure, and carbon dioxide (CO2) partial pressure. We also added external and pathological perturbations, such as the head-up position and intracranial haemorrhage. The model behaved in a clinically realistic way given the inputs of published traumatized patients and of cases encountered by clinicians. The pulsatile nature of the output graphics was easy for clinicians to interpret. The simulated manoeuvres include changes of basic physiological inputs (e.g. blood pressure, central venous pressure, CO2 tension, head-up position, and respiratory effects on vascular pressures) as well as pathological inputs (e.g. acute intracranial bleeding, and obstruction of cerebrospinal fluid outflow).
Based on the results, we believe the model would be useful for teaching the complex relationships of brain haemodynamics and for studying clinical research questions such as the optimal head-up position, the effects of intracranial haemorrhage on cerebral haemodynamics, and the CO2 concentration achieving the optimal compromise between intracranial pressure and perfusion. We believe this model would be useful for both beginners and advanced learners. It could be used by practicing clinicians to model individual patients (entering the effects of the needed clinical manipulations, and then running the model to test for optimal combinations of therapeutic manoeuvres). II. A Heterogeneous Cerebrovascular Mathematical Model. Cerebrovascular pathologies are extremely complex, owing to the multitude of factors acting simultaneously on cerebral haemodynamics. In this work, the mathematical model of cerebral haemodynamics and intracranial pressure dynamics described in point I is extended to account for heterogeneity in cerebral blood flow. The model includes the Circle of Willis, six regional districts independently regulated by autoregulation and CO2 reactivity, distal cortical anastomoses, the venous circulation, the cerebrospinal fluid circulation, and the intracranial pressure-volume relationship. Results agree with data in the literature and highlight the existence of a monotonic relationship between the transient hyperaemic response and the autoregulation gain. During unilateral internal carotid artery stenosis, local blood flow regulation is progressively lost in the ipsilateral territory, with the presence of a steal phenomenon, while the anterior communicating artery plays the major role in redistributing the available blood flow. Conversely, the distal collateral circulation plays the major role during unilateral occlusion of the middle cerebral artery.
In conclusion, the model is able to reproduce several different pathological conditions characterized by heterogeneity in cerebrovascular haemodynamics; it can not only explain generalized results in terms of the physiological mechanisms involved, but may also, by individualizing parameters, represent a valuable tool to help with difficult clinical decisions. III. Effect of the Cushing Response on Systemic Arterial Pressure. During cerebral hypoxic conditions, the sympathetic system causes an increase in arterial pressure (Cushing response), creating a link between the cerebral and the systemic circulation. This work investigates the complex relationships among cerebrovascular dynamics, intracranial pressure, the Cushing response, and short-term systemic regulation during plateau waves, by means of an original mathematical model. The model incorporates the pulsating heart, the pulmonary circulation and the systemic circulation, with an accurate description of the cerebral circulation and of intracranial pressure dynamics (the same model as in the first paragraph). Various regulatory mechanisms are included: cerebral autoregulation, local blood flow control by oxygen (O2) and/or CO2 changes, and sympathetic and vagal regulation of cardiovascular parameters by several reflex mechanisms (chemoreceptors, lung-stretch receptors, baroreceptors). The Cushing response has been described by assuming a dramatic increase in sympathetic activity to vessels during a fall in brain O2 delivery. With this assumption, the model is able to simulate the cardiovascular effects experimentally observed when intracranial pressure is artificially elevated and maintained at a constant level (arterial pressure increase and bradycardia). According to the model, these effects arise from the interaction between the Cushing response and the baroreflex response (secondary to the arterial pressure increase).
Then, patients with severe head injury were simulated by reducing intracranial compliance and cerebrospinal fluid reabsorption. With these changes, oscillations with plateau waves developed. In these conditions, model results indicate that the Cushing response may have both positive effects, reducing the duration of the plateau phase via an increase in cerebral perfusion pressure, and negative effects, increasing the intracranial pressure plateau level, with a risk of greater compression of the cerebral vessels. This model may be of value in assisting clinicians to find the balance between the clinical benefits of the Cushing response and its shortcomings. IV. Comprehensive Cardiopulmonary Simulation Model for the Analysis of Hypercapnic Respiratory Failure. We developed a new comprehensive cardiopulmonary model that takes into account the mutual interactions between the cardiovascular and respiratory systems, along with their short-term regulatory mechanisms. The model includes the heart, the systemic and pulmonary circulations, lung mechanics, gas exchange and transport equations, and cardio-ventilatory control. Results show good agreement with published patient data for normoxic and hyperoxic hypercapnia simulations. In particular, simulations predict a moderate increase in mean systemic arterial pressure and heart rate, with almost no change in cardiac output, paralleled by a relevant increase in minute ventilation, tidal volume and respiratory rate. The model can represent a valid tool for clinical practice and medical research, providing an alternative to purely experience-based clinical decisions. In conclusion, models are capable not only of summarizing current knowledge, but also of identifying missing knowledge. In the former case they can serve as training aids for teaching the operation of complex systems, especially if the model can be used to demonstrate the outcome of experiments.
In the latter case they generate experiments to be performed to gather the missing data.
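The intracranial pressure dynamics at the core of such simulations can be illustrated with a classic Marmarou-type lumped compartment, a far simpler sketch than the full model described above (which also includes the pulsating heart, autoregulation and reflex control); all parameter values below are illustrative textbook-order numbers, not those of the thesis:

```python
def simulate_icp(minutes=120.0, dt=1.0, e_coeff=0.11, q_f=0.35,
                 r_out=10.0, p_vs=8.0, p0=10.0):
    """Euler integration of a Marmarou-type ICP balance:
        dP/dt = e_coeff * P * (q_f - (P - p_vs) / r_out)
    e_coeff: brain elastance coefficient [1/mL]
    q_f:     CSF formation rate [mL/min]
    r_out:   CSF outflow (reabsorption) resistance [mmHg*min/mL]
    p_vs:    venous sinus pressure [mmHg]
    All values are illustrative, not fitted to patient data.
    """
    p = p0
    for _ in range(int(minutes / dt)):
        p += e_coeff * p * (q_f - (p - p_vs) / r_out) * dt
    return p

# Steady state is p_vs + q_f * r_out; raising the outflow resistance
# (impaired CSF reabsorption, as in severe head injury) raises ICP:
baseline = simulate_icp()             # settles near 11.5 mmHg
impaired = simulate_icp(r_out=60.0)   # settles near 29 mmHg
```

This single-compartment balance reproduces only the slow ICP trend; plateau waves require the additional autoregulatory feedback loops the thesis models describe.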
Abstract:
Pharmaceuticals are useful tools to prevent and treat human and animal diseases. Following administration, a significant fraction of pharmaceuticals is excreted unaltered in faeces and urine and may enter the aquatic ecosystem and agricultural soil through irrigation with recycled water, constituting a significant source of emerging contaminants in the environment. Understanding the major factors influencing their environmental fate is consequently needed to assess the risk, reduce contamination, and set up bioremediation technologies. The antiviral drug Tamiflu (oseltamivir carboxylate, OC) has received recent attention due to its potential use as a first-line defence against H5N1 and H1N1 influenza viruses. Research has shown that OC is not removed during conventional wastewater treatments and thus has the potential to enter surface water bodies. A series of laboratory experiments investigated the fate and removal of OC in surface water systems in Italy and Japan and in a municipal wastewater treatment plant. A preliminary laboratory study investigated the persistence of the active antiviral drug in water samples from an irrigation canal in northern Italy (Canale Emiliano Romagnolo). After an initial rapid decrease, OC concentration declined slowly during the remaining incubation period; approximately 65% of the initial OC amount remained in the water at the end of the 36-day incubation. A negligible amount of OC was lost from both sterilized water and sterilized water/sediment samples, suggesting a significant role of microbial degradation. Stimulating microbial processes by the addition of sediments reduced OC persistence. The presence of OC (1.5 μg mL-1) did not significantly affect the metabolic potential of the water microbial population, estimated by glyphosate and metolachlor mineralization. In contrast, OC caused an initial transient decrease in the size of the indigenous microbial population of the water samples.
A second laboratory study focused on the basic processes governing the environmental fate of OC in surface water from two contrasting aquatic ecosystems of northern Italy, the River Po and the Venice Lagoon. The results of this study confirmed the potential of OC to persist in surface water. However, the addition of 5% sediments resulted in rapid OC degradation. The estimated half-life of OC in water/sediment of the River Po was 15 days. After three weeks of incubation at 20 °C, more than 8% of the 14C-OC evolved as 14CO2 from water/sediment samples of the River Po and Venice Lagoon. OC was moderately retained on coarse sediments from the two sites. In water/sediment samples of the River Po and Venice Lagoon treated with 14C-OC, more than 30% of the 14C-residues remained water-extractable after three weeks of incubation. The low affinity of OC for sediments suggests that their presence would not reduce its bioavailability to microbial degradation. Another series of laboratory experiments investigated the fate and removal of OC in two surface water ecosystems of Japan and in the municipal wastewater treatment plant of the city of Bologna, in northern Italy. The persistence of OC in surface water ranged from non-detectable degradation to a half-life of 53 days. After 40 days, less than 3% of the radiolabeled OC evolved as 14CO2. The presence of sediments (5%) led to a significant increase in OC degradation and mineralization rates. More intense mineralization was observed in samples from the wastewater treatment plant over a long incubation period (40 days): 76% and 37% of the initial radioactivity applied as 14C-OC was recovered as 14CO2 from samples of the biological tank and the effluent water, respectively. Two bacterial strains growing on OC as sole carbon source were isolated and used for its removal from synthetic medium and environmental samples, including surface water and wastewater.
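The reported half-lives translate directly into remaining fractions under the usual first-order decay assumption; a minimal sketch (the 15-day and 53-day half-lives come from the text above, while the first-order kinetic form is the standard assumption, not a result of the thesis):

```python
import math

def remaining_fraction(t_days, half_life_days):
    """First-order decay: C(t)/C0 = exp(-k*t), with k = ln(2)/t_half."""
    k = math.log(2) / half_life_days
    return math.exp(-k * t_days)

# Water/sediment of the River Po: reported half-life ~15 days
after_3_weeks = remaining_fraction(21, 15)   # ~38% of the initial OC left
# Surface water alone: half-life up to ~53 days
after_40_days = remaining_fraction(40, 53)   # ~59% left
```

Consistent with the measurements, sediment contact (15-day half-life) leaves far less OC after three weeks than slow degradation in water alone leaves after 40 days.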
Inoculation of water and wastewater samples with the two OC-degrading strains showed that mineralization of OC was significantly higher in both inoculated water and wastewater than in uninoculated controls. Denaturing gradient gel electrophoresis and quantitative PCR analyses showed that OC did not affect the microbial population of surface water and wastewater. The capacity of the ligninolytic fungus Phanerochaete chrysosporium to degrade a wide variety of environmentally persistent xenobiotics has been widely reported in the literature. In a series of laboratory experiments, the efficiency of a formulation using P. chrysosporium was evaluated for the removal of selected pharmaceuticals from wastewater samples. Addition of the fungus to samples from the wastewater treatment plant of Bologna significantly increased (P < 0.05) the removal of OC and of three antibiotics: erythromycin, sulfamethoxazole, and ciprofloxacin. Similar effects were also observed in effluent water. OC was the most persistent of the four pharmaceuticals; after 30 days of incubation, approximately twice as much OC was removed in bioremediated samples as in controls. The highest removal efficiency of the formulation was observed with the antibiotic ciprofloxacin. The studies also included environmental aspects of soil contamination with two emerging veterinary contaminants, doramectin and oxibendazole, which are commonly used as antiparasitic treatments on cattle farms.
Abstract:
Reliable electronic systems, namely sets of reliable electronic devices connected to each other and working correctly together for the same functionality, represent an essential ingredient for the large-scale commercial implementation of any technological advancement. Microelectronics technologies and new powerful integrated circuits provide noticeable improvements in performance and cost-effectiveness, and allow electronic systems to be introduced in increasingly diversified contexts. On the other hand, the opening of new fields of application leads to new, unexplored reliability issues. The development of semiconductor device and electrical models (such as the well-known SPICE models) able to describe the electrical behavior of devices and circuits is a useful means to simulate and analyze the functionality of new electronic architectures and new technologies. Moreover, it represents an effective way to point out the reliability issues arising from the employment of advanced electronic systems in new application contexts. In this thesis, the modeling and design of both advanced reliable circuits for general-purpose applications and devices for energy efficiency are considered. In more detail, the following activities were carried out. First, reliability issues in terms of the security of standard communication protocols in wireless sensor networks are discussed, and a new communication protocol that increases network security is introduced. Second, a novel scheme for the on-die measurement of either clock jitter or process parameter variations is proposed; the developed scheme can be used for a low-cost evaluation of both jitter and process parameter variations. Then, reliability issues in the field of "energy scavenging systems" are analyzed, with an accurate analysis and modeling of the effects of faults affecting circuits for energy harvesting from mechanical vibrations.
Finally, the problem of modeling the electrical and thermal behavior of photovoltaic (PV) cells under hot-spot conditions is addressed through the development of a coupled electrical-thermal model.
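As a hedged illustration of what the electrical side of a hot-spot model must capture, the sketch below uses a single-diode cell equation extended with a Bishop-style reverse-breakdown term and finds, by bisection, the reverse voltage a shaded cell reaches when forced to carry the full string current; all parameter values are illustrative, not those of the thesis model:

```python
import math

def cell_current(v, irr, i_ph_stc=8.0, i_0=1e-9, n=1.3, v_t=0.0257,
                 r_sh=30.0, v_br=-15.0, k_a=0.002, m_a=3.0):
    """Single-diode PV cell (series resistance neglected) with a
    Bishop-style multiplier on the shunt branch to model avalanche
    breakdown in reverse bias. Illustrative parameters only."""
    i_diode = i_0 * (math.exp(v / (n * v_t)) - 1.0)
    i_shunt = (v / r_sh) * (1.0 + k_a * (1.0 - v / v_br) ** (-m_a))
    return irr * i_ph_stc - i_diode - i_shunt

def voltage_at_current(i_target, irr, lo=-14.9, hi=0.6):
    """Bisection for the cell voltage carrying i_target amperes;
    current decreases monotonically with voltage over this bracket."""
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if cell_current(mid, irr) > i_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# A 10%-irradiance shaded cell forced to carry a 6 A string current is
# driven deep into reverse bias and dissipates the power locally as heat:
v_shaded = voltage_at_current(6.0, irr=0.1)  # around -14 V
p_hot_w = -v_shaded * 6.0                    # tens of watts in one cell
```

Feeding this dissipated power into a lumped thermal resistance then yields the hot-spot temperature rise, which is the coupling the thesis' electrical-thermal model addresses.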
Abstract:
Several countries have acquired, over the past decades, large amounts of area-covering Airborne Electromagnetic (AEM) data. The contribution of airborne geophysics to both groundwater resource mapping and management has increased dramatically, proving that these systems are appropriate for large-scale, efficient groundwater surveying. We start with the processing and inversion of two AEM datasets from two different systems, both collected over the Spiritwood Valley Aquifer area, Manitoba, Canada: the AeroTEM III dataset (commissioned by the Geological Survey of Canada in 2010) and the "Full waveform VTEM" dataset, collected and tested over the same survey area during the fall of 2011. We demonstrate that, in the presence of multiple datasets comprising both AEM and ground data, careful processing, inversion, post-processing, data integration and data calibration is the proper approach for providing reliable and consistent resistivity models. Our approach can be of interest to many end users, from geological surveys and universities to private companies, which often own large geophysical databases to be interpreted for geological and/or hydrogeological purposes. In this study we investigate in depth the role of integrating several complementary types of geophysical data collected over the same survey area. We show that data integration can improve inversions, reduce ambiguity and deliver high-resolution results. We further use the final, most reliable output resistivity models as a solid basis for building a knowledge-driven 3D geological voxel-based model. A voxel approach allows a quantitative understanding of the hydrogeological setting of the area, and it can be further used to estimate aquifer volumes (i.e. the potential amount of groundwater resources) and to support hydrogeological flow model predictions.
In addition, we investigated the impact of an AEM dataset on hydrogeological mapping and 3D hydrogeological modeling, comparing it with having only a ground-based TEM dataset and/or only borehole data.
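As a minimal illustration of how a voxel resistivity model supports aquifer volume estimation, the sketch below counts voxels whose resistivity falls inside an assumed window for saturated sand/gravel and multiplies by the cell volume; the 20-150 ohm-m cutoffs and the toy grid are hypothetical, not values from the study:

```python
def aquifer_volume_m3(resistivity_grid, dx, dy, dz,
                      rho_min=20.0, rho_max=150.0):
    """Sum the volume of voxels whose resistivity [ohm-m] falls inside
    an assumed aquifer lithology window. Grid is a nested list
    [layer][row][column]; dx, dy, dz are cell dimensions in metres."""
    cell_volume = dx * dy * dz
    n = sum(1 for layer in resistivity_grid
            for row in layer
            for rho in row
            if rho_min <= rho <= rho_max)
    return n * cell_volume

# Toy 2x2x2 voxel model (ohm-m) with 50 m x 50 m x 5 m cells;
# four voxels fall in the aquifer window:
grid = [[[300.0, 80.0], [60.0, 10.0]],
        [[120.0, 500.0], [90.0, 15.0]]]
volume = aquifer_volume_m3(grid, 50.0, 50.0, 5.0)
```

In practice the cutoffs would be calibrated against boreholes, which is exactly the data-integration step the study emphasizes.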
Abstract:
Waste management represents an important issue in our society, and Waste-to-Energy incineration plants have played a significant role in recent decades, showing increasing importance in Europe. One of the main issues posed by waste combustion is the generation of air contaminants. Acid gases, mainly hydrogen chloride and sulfur oxides, are of particular concern due to their potential impact on the environment and on human health. Therefore, in the present study the main available technological options for flue gas treatment were analyzed, focusing on dry treatment systems, which are increasingly applied in Municipal Solid Waste (MSW) incinerators. An operational model was proposed to describe and optimize the acid gas removal process. It was applied to an existing MSW incineration plant, where acid gases are neutralized in a two-stage dry treatment system. This process is based on the injection of powdered calcium hydroxide and sodium bicarbonate in reactors followed by fabric filters. HCl and SO2 conversions were expressed as functions of the reactant flow rates, with model parameters calculated from literature and plant data. Implementation in process simulation software allowed the identification of optimal operating conditions, taking into account the reactant feed rates, the amount of solid products and the recycling of the sorbent. Alternative configurations of the reference plant were also assessed. The applicability of the operational model was extended by also developing a fundamental approach to the issue: a predictive model was developed, describing the mass transfer and kinetic phenomena governing acid gas neutralization with solid sorbents. The rate-controlling steps were identified through the reproduction of literature data, allowing the description of acid gas removal in the case study analyzed. A laboratory device was also designed and started up to assess the required model parameters.
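As a hedged sketch of what such an operational model might look like, the function below expresses HCl conversion as a saturating function of the stoichiometric ratio of sorbent feed to acid load; the exponential form and the rate constant are assumptions for illustration, not the fitted model of the thesis:

```python
import math

def acid_gas_conversion(sorbent_mol_per_h, acid_mol_per_h,
                        stoich=0.5, k=1.2):
    """Illustrative operational model: removal efficiency as a
    saturating function of the stoichiometric ratio SR (mol sorbent
    fed per mol required).
    stoich: mol Ca(OH)2 needed per mol HCl, i.e. 0.5 from
            Ca(OH)2 + 2 HCl -> CaCl2 + 2 H2O.
    k:      assumed effectiveness constant (hypothetical value)."""
    required = stoich * acid_mol_per_h
    sr = sorbent_mol_per_h / required
    return 1.0 - math.exp(-k * sr)

# Doubling the Ca(OH)2 feed raises HCl conversion with diminishing
# returns, which is why feed-rate optimization pays off:
x1 = acid_gas_conversion(50.0, 100.0)   # SR = 1.0, ~70% removal
x2 = acid_gas_conversion(100.0, 100.0)  # SR = 2.0, ~91% removal
```

A model of this shape makes the optimization trade-off explicit: beyond a certain SR, extra sorbent mostly ends up as solid residue to be disposed of, which motivates the sorbent recycling assessed in the study.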