12 results for Potential distribution modelling

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance:

40.00%

Abstract:

This thesis is focused on Smart Grid applications in medium-voltage distribution networks. The development of new applications benefits from simulation tools able to model the dynamic behavior of both the power system and the communication network. Such a co-simulation environment allows assessing the feasibility of using a given network technology to support communication-based Smart Grid control schemes on an existing segment of the electrical grid, and determining the range of control schemes that different communication technologies can support. For this reason, a co-simulation platform is presented that links the Electromagnetic Transients Program Simulator (EMTP v3.0) with a Telecommunication Network Simulator (OPNET-Riverbed v18.0). The simulator is used to design and analyze the coordinated use of Distributed Energy Resources (DERs) for voltage/var control (VVC) in distribution networks. The thesis focuses on a control structure based on phasor measurement units (PMUs). In order to limit the required reinforcements of the communication infrastructures currently adopted by Distribution Network Operators (DNOs), the study focuses on leader-less multi-agent system (MAS) schemes that do not assign special coordinating roles to specific agents. Leader-less MAS are expected to produce more uniform communication traffic than centralized approaches that include a moderator agent, and to be less affected by the limitations and constraints of individual communication links. The developed co-simulator has allowed the definition of specific countermeasures against the limitations of the communication network, with particular reference to latency and loss of information, for both wired and wireless communication networks. Moreover, the co-simulation platform has also been coupled with a mobility simulator in order to study specific countermeasures against the negative effects on the medium-voltage distribution network caused by the concurrent connection of electric vehicles.
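The leader-less coordination idea can be sketched with a textbook average-consensus update, in which no agent acts as moderator; the update rule, gain, topology and all values below are illustrative assumptions, not the thesis' VVC scheme:

```python
import numpy as np

# Toy leader-less multi-agent consensus: each DER agent repeatedly nudges its
# local set-point estimate toward its neighbours' values. No coordinator is
# involved, so communication traffic is uniform across links (ring topology
# and gain eps are illustrative assumptions).

def average_consensus(x0, neighbours, eps=0.3, iters=200):
    """x0: initial local estimates; neighbours: adjacency list."""
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        x_new = x.copy()
        for i, nbrs in enumerate(neighbours):
            # each agent moves toward the mean of its neighbours
            x_new[i] += eps * sum(x[j] - x[i] for j in nbrs)
        x = x_new
    return x

# 4 agents on a ring network
ring = [[1, 3], [0, 2], [1, 3], [0, 2]]
final = average_consensus([1.0, 2.0, 3.0, 4.0], ring)
# all agents converge to the network-wide average, 2.5
```

The same update tolerates per-link latency and loss gracefully, which is one reason leader-less schemes are attractive over imperfect communication networks.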

Relevance:

40.00%

Abstract:

In this work, we explore and demonstrate the potential for modeling and classification using quantile-based distributions, which are random variables defined by their quantile function. In the first part, we formalize a least squares estimation framework for the class of linear quantile functions, leading to unbiased and asymptotically normal estimators. Among the distributions with a linear quantile function, we focus on the flattened generalized logistic distribution (fgld), which offers a wide range of distributional shapes. A novel naïve-Bayes classifier is proposed that utilizes the fgld estimated via least squares, and through simulations and applications we demonstrate its competitiveness against state-of-the-art alternatives. In the second part, we consider the Bayesian estimation of quantile-based distributions. We introduce a factor model with independent latent variables, which are distributed according to the fgld. Similar to the independent factor analysis model, this approach accommodates flexible factor distributions while using fewer parameters. The model is presented within a Bayesian framework, an MCMC algorithm for its estimation is developed, and its effectiveness is illustrated with data from the European Social Survey. The third part focuses on depth functions, which extend the concept of quantiles to multivariate data by imposing a center-outward ordering in the multivariate space. We investigate the recently introduced integrated rank-weighted (IRW) depth function, which is based on the distribution of random spherical projections of the multivariate data. This depth function proves to be computationally efficient and, to increase its flexibility, we propose different methods to explicitly model the projected univariate distributions. Its usefulness is shown in classification tasks: the maximum depth classifier based on the IRW depth is proven to be asymptotically optimal under certain conditions, and classifiers based on the IRW depth are shown to perform well in simulated and real data experiments.
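The IRW depth lends itself to a short Monte Carlo sketch: project the data onto random unit directions and average the univariate tail rank of the query point. This follows the general definition of the IRW depth; the thesis' refinements (explicit models for the projected distributions) are not reproduced here, and all sample sizes are illustrative:

```python
import numpy as np

# Monte Carlo IRW depth: average, over random spherical projections u, of
# min(F_u(<u,x>), 1 - F_u(<u,x>)), where F_u is the empirical CDF of the
# projected data. Deep points score near 0.5, outliers near 0.

def irw_depth(x, data, n_dirs=500, seed=1):
    rng = np.random.default_rng(seed)
    u = rng.normal(size=(n_dirs, data.shape[1]))
    u /= np.linalg.norm(u, axis=1, keepdims=True)   # random unit directions
    proj = data @ u.T                                # (n, n_dirs) projections
    t = x @ u.T                                      # projected query point
    F = (proj <= t).mean(axis=0)                     # empirical CDF at t
    return np.minimum(F, 1.0 - F).mean()             # average tail rank

cloud = np.random.default_rng(0).normal(size=(1000, 3))
center_depth = irw_depth(np.zeros(3), cloud)
outlier_depth = irw_depth(np.full(3, 5.0), cloud)
# the centre of the cloud is much deeper than a far-away point
```

The cost is one matrix product per batch of directions, which is what makes the IRW depth computationally attractive compared with combinatorial depths such as halfspace depth.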

Relevance:

30.00%

Abstract:

Natural hazards related to volcanic activity represent a potential risk factor, particularly in the vicinity of human settlements. Besides the risk related to explosive and effusive activity, the instability of volcanic edifices may develop into large landslides, often catastrophically destructive, as shown by the collapse of the northern flank of Mount St. Helens in 1980. A combined approach was applied to analyse slope failures that occurred at Stromboli volcano. The stability of the Sciara del Fuoco (SdF) slope was evaluated by using high-resolution multi-temporal digital terrain models (DTMs) and performing limit equilibrium stability analyses. High-resolution topographic data collected with remote sensing techniques and three-dimensional slope stability analysis play a key role in understanding the instability mechanism and the related risks. Analyses carried out on the 2002–2003 and 2007 Stromboli eruptions, starting from high-resolution data acquired through airborne remote sensing surveys, permitted the estimation of the lava volumes emplaced on the SdF slope and contributed to the investigation of the link between magma emission and slope instabilities. Limit equilibrium analyses were performed on the 2001 and 2007 3D models in order to simulate the slope behavior before the 2002–2003 landslide event and after the 2007 eruption. Stability analyses were conducted to understand the mechanisms that controlled the slope deformations which occurred shortly after the onset of the 2007 eruption, involving the upper part of the slope. Limit equilibrium analyses applied to both cases yielded results congruent with observations and monitoring data. The results presented in this work undoubtedly indicate that hazard assessment for the island of Stromboli should take into account that a new magma intrusion could lead to further destabilisation of the slope, which may be more significant than the one recently observed, because it would affect an already disarranged deposit and a fractured and loosened crater area. The two-pronged approach, based on the analysis of 3D multi-temporal mapping datasets and on the application of limit equilibrium (LE) methods, contributed to a better understanding of volcano flank behaviour and to preparedness for actions aimed at risk mitigation.
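The limit equilibrium idea reduces, in its simplest textbook form, to a factor of safety: the ratio of resisting to driving forces along a candidate failure surface. The infinite-slope, dry Mohr-Coulomb formula below is a minimal sketch, not the 3D analysis used in the thesis, and all parameter values are illustrative:

```python
import math

# Infinite-slope limit equilibrium (Mohr-Coulomb, no pore pressure):
# FS = [c + gamma*h*cos^2(beta)*tan(phi)] / [gamma*h*sin(beta)*cos(beta)]
# FS < 1 means the driving stress exceeds the available shear strength.

def factor_of_safety(c, phi_deg, gamma, h, beta_deg):
    """c: cohesion [kPa], phi: friction angle, gamma: unit weight [kN/m3],
    h: depth of the failure surface [m], beta: slope angle [deg]."""
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    resisting = c + gamma * h * math.cos(beta) ** 2 * math.tan(phi)
    driving = gamma * h * math.sin(beta) * math.cos(beta)
    return resisting / driving

# a steep, weakly cohesive volcaniclastic slope near its angle of repose
fs = factor_of_safety(c=5.0, phi_deg=38.0, gamma=17.0, h=10.0, beta_deg=35.0)
# FS only slightly above 1: marginal stability
```

Adding pore pressure, seismic loading or a magma-intrusion overpressure term to the driving side shows immediately how an eruption onset can push FS below 1.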

Relevance:

30.00%

Abstract:

Atmospheric CO2 concentration ([CO2]) has increased over the last 250 years, mainly due to human activities. Of total anthropogenic emissions, almost 31% has been sequestered by the terrestrial biosphere. A considerable contribution to this sink comes from temperate and boreal forest ecosystems of the northern hemisphere, which contain a large amount of carbon (C) stored as biomass and soil organic matter. Several potential drivers of this forest C sequestration have been proposed, including increasing atmospheric [CO2], temperature, nitrogen (N) deposition and changes in management practices. However, it is not known which of these drivers are most important. The overall aim of this thesis project was to develop a simple ecosystem model which explicitly incorporates our best understanding of the mechanisms by which these drivers affect forest C storage, and to use this model to investigate the sensitivity of the forest ecosystem to these drivers. I first developed a version of the Generic Decomposition and Yield (G’DAY) model to explicitly investigate the mechanisms leading to forest C sequestration following N deposition. Specifically, I modified the G’DAY model to include advances in the understanding of C allocation, canopy N uptake, and leaf trait relationships. I also incorporated a simple forest management practice subroutine. Secondly, I investigated the effect of CO2 fertilization on forest productivity in relation to the soil N availability feedback. I modified the model to allow it to simulate short-term responses of deciduous forests to environmental drivers, and applied it to data from a large-scale forest Free-Air CO2 Enrichment (FACE) experiment. Finally, I used the model to investigate the combined effects of recently observed changes in atmospheric [CO2], N deposition, and climate on a European forest stand. The model developed in my thesis project proved an effective tool for analysing the effects of environmental drivers on forest ecosystem C storage. Key results from the model simulations include: (i) N availability has a major role in forest ecosystem C sequestration; (ii) atmospheric N deposition is an important driver of N availability on short and long time-scales; (iii) rising temperature increases C storage by enhancing soil N availability; and (iv) increasing [CO2] significantly affects forest growth and C storage only when N availability is not limiting.
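Result (iv) can be illustrated with a toy Liebig-style growth model; the logarithmic CO2 response, the nitrogen-use-efficiency constant and all parameter values are illustrative assumptions, not the G’DAY formulation:

```python
import math

# Toy Liebig-style illustration of result (iv): growth is the minimum of a
# CO2-limited rate (logarithmic fertilization response, a common first-order
# assumption) and an N-limited rate. All parameter values are illustrative.

def npp(co2_ppm, n_avail, npp0=10.0, co2_ref=350.0, beta=0.6, nue=2.0):
    """Net primary production [tC/ha/yr]; nue = nitrogen-use efficiency."""
    c_limited = npp0 * (1.0 + beta * math.log(co2_ppm / co2_ref))
    n_limited = nue * n_avail            # growth allowed by soil N supply
    return min(c_limited, n_limited)

# doubling CO2 under scarce vs ample nitrogen availability
low_n_gain = npp(700.0, n_avail=4.0) / npp(350.0, n_avail=4.0)
high_n_gain = npp(700.0, n_avail=12.0) / npp(350.0, n_avail=12.0)
# with scarce N the CO2 doubling yields no gain; with ample N it yields ~42%
```

The point of the sketch is structural: whenever the N-limited rate is the binding constraint, the CO2 term cancels out of the response entirely.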

Relevance:

30.00%

Abstract:

The term "Brain Imaging" identifies a set of techniques to analyze the structure and/or functional behavior of the brain in normal and pathological conditions. These techniques are largely used in the study of brain activity. In addition to clinical usage, the analysis of brain activity is gaining popularity in other recent fields, e.g. Brain-Computer Interfaces (BCI) and the study of cognitive processes. In this context, the use of classical solutions (e.g. fMRI, PET-CT) can be unfeasible, due to their low temporal resolution, high cost and limited portability. For these reasons, alternative low-cost techniques are the object of research, typically based on simple recording hardware and on an intensive data elaboration process. Typical examples are ElectroEncephaloGraphy (EEG) and Electrical Impedance Tomography (EIT), where the electric potential at the patient's scalp is recorded by high-impedance electrodes. In EEG, potentials are directly generated by neuronal activity, while in EIT they result from the injection of small currents at the scalp. To retrieve meaningful insights on brain activity from measurements, EIT and EEG rely on detailed knowledge of the underlying electrical properties of the body, obtained from numerical models of the electric field distribution therein. The inhomogeneous and anisotropic electric properties of human tissues make accurate modeling and simulation very challenging, leading to a tradeoff between physical accuracy and technical feasibility which currently severely limits the capabilities of these techniques. Moreover, the elaboration of the recorded data requires computationally intensive regularization techniques, which penalizes applications with strict temporal constraints (such as BCI). This work focuses on the parallel implementation of a workflow for EEG and EIT data processing. The resulting software is accelerated using many-core GPUs, in order to provide solutions in reasonable times and address the requirements of real-time BCI systems, without over-simplifying the complexity and accuracy of the head models.
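The "computationally intensive regularization" at the heart of such pipelines is typically Tikhonov-type: the ill-posed linear inverse problem A x = b (sensitivity or lead-field matrix A, scalp measurements b) is stabilised by a penalty term. A minimal dense sketch with synthetic data (real pipelines run far larger problems on the GPU):

```python
import numpy as np

# Tikhonov regularization: minimise ||A x - b||^2 + lam * ||x||^2.
# The closed-form solution uses the regularised normal equations, whose
# conditioning is controlled by lam even when A is underdetermined.

def tikhonov_solve(A, b, lam):
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 50))           # 20 "sensors", 50 "sources": ill-posed
x_true = np.zeros(50)
x_true[7] = 1.0                          # one active source
b = A @ x_true + 0.01 * rng.normal(size=20)
x_hat = tikhonov_solve(A, b, lam=1.0)
residual = np.linalg.norm(A @ x_hat - b)
# the regularised solution reproduces the measurements to a small residual
```

The `A.T @ A + lam * I` solve is dense linear algebra, which is exactly the kind of kernel that maps well onto GPU acceleration for real-time BCI constraints.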

Relevance:

30.00%

Abstract:

This thesis starts by presenting the main characteristics and application fields of AlGaN/GaN HEMT technology, focusing on reliability aspects essentially due to the presence of low-frequency dispersive phenomena which limit in several ways the microwave performance of these devices. Based on an equivalent-voltage approach, a new low-frequency device model is presented in which the dynamic nonlinearity of the trapping effects is taken into account for the first time, allowing considerable improvements in the prediction of quantities that are very important for power amplifier design, such as power-added efficiency, dissipated power and internal device temperature. An innovative and low-cost measurement setup for the characterization of the device under low-frequency, large-amplitude sinusoidal excitation is also presented. This setup allows the identification of the new low-frequency model through suitable procedures, explained in detail. The thesis also describes a new non-invasive empirical method for compact electrothermal modeling and thermal resistance extraction. The new contribution of the proposed approach concerns the nonlinear dependence of the channel temperature on the dissipated power. This is very important for GaN devices, since they are capable of operating at relatively high temperatures with high power densities, and the dependence of the thermal resistance on temperature is quite relevant. Finally, a novel method for device thermal simulation is investigated: based on the analytical solution of the three-dimensional heat equation, a Visual Basic program has been developed to estimate, in real time, the temperature distribution on the hottest surface of planar multilayer structures. The developed solver is particularly useful for peak temperature estimation at the design stage, when critical decisions about circuit design and packaging have to be made. It facilitates layout optimization and reliability improvement, allowing the correct choice of device geometry and configuration to achieve the best possible thermal performance.
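The consequence of a temperature-dependent thermal resistance is that channel temperature must be solved self-consistently rather than as T = T0 + Rth·P. The power-law Rth(T) below and all parameter values are illustrative assumptions, not the extraction method of the thesis:

```python
# Fixed-point solve of T = T_amb + Rth(T) * P_diss with a thermal resistance
# that grows with temperature, Rth(T) = R0 * (T / T_amb)^alpha (illustrative).
# The constant-Rth estimate systematically underpredicts channel temperature.

def channel_temperature(p_diss, t_amb=300.0, r0=30.0, alpha=1.3, iters=50):
    """p_diss in W, temperatures in K, r0 in K/W."""
    t = t_amb
    for _ in range(iters):
        rth = r0 * (t / t_amb) ** alpha   # temperature-dependent resistance
        t = t_amb + rth * p_diss          # contraction for these parameters
    return t

t_linear = 300.0 + 30.0 * 2.0             # constant-Rth estimate for 2 W
t_nonlin = channel_temperature(2.0)
# the self-consistent solution is noticeably hotter than the 360 K estimate
```

For GaN power densities the gap between the two estimates widens with dissipated power, which is why the nonlinearity matters for reliability predictions.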

Relevance:

30.00%

Abstract:

Over the past decades, several countries have acquired large amounts of area-covering Airborne Electromagnetic (AEM) data. The contribution of airborne geophysics to both groundwater resource mapping and management has dramatically increased, proving that these systems are appropriate for large-scale and efficient groundwater surveying. We start with the processing and inversion of two AEM datasets from two different systems collected over the Spiritwood Valley Aquifer area, Manitoba, Canada: the AeroTEM III dataset (commissioned by the Geological Survey of Canada in 2010) and the “Full waveform VTEM” dataset, collected and tested over the same survey area during the fall of 2011. We demonstrate that, in the presence of multiple datasets, both AEM and ground data, careful processing, inversion, post-processing, data integration and data calibration constitute the proper approach, capable of providing reliable and consistent resistivity models. Our approach can be of interest to many end users, ranging from geological surveys and universities to private companies, which often own large geophysical databases to be interpreted for geological and/or hydrogeological purposes. In this study we investigate in depth the role of the integration of several complementary types of geophysical data collected over the same survey area. We show that data integration can improve inversions, reduce ambiguity and deliver high-resolution results. We further use the final, most reliable output resistivity models as a solid basis for building a knowledge-driven 3D geological voxel-based model. A voxel approach allows a quantitative understanding of the hydrogeological setting of the area, and it can further be used to estimate aquifer volumes (i.e. the potential amount of groundwater resources) as well as for hydrogeological flow model prediction. In addition, we investigated the impact of an AEM dataset on hydrogeological mapping and 3D hydrogeological modeling, comparing it to having only a ground-based TEM dataset and/or only borehole data.
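The voxel-based volume estimate reduces to counting cells whose inverted resistivity falls in the range mapped to the aquifer unit and multiplying by the cell volume. The synthetic resistivity field, the cell size and the classification threshold below are all illustrative assumptions:

```python
import numpy as np

# Sketch of a voxel-model aquifer-volume estimate: classify each voxel as
# coarse-grained aquifer (resistive sand/gravel) vs conductive clay till by a
# resistivity cutoff, then sum voxel volumes. Grid and cutoff are illustrative.

rng = np.random.default_rng(42)
resistivity = rng.lognormal(mean=3.0, sigma=1.0, size=(40, 40, 20))  # ohm-m
voxel_volume = 50.0 * 50.0 * 2.0    # m^3 per cell (50 m x 50 m x 2 m)

aquifer_mask = resistivity > 60.0   # illustrative sand/gravel cutoff
aquifer_volume_m3 = aquifer_mask.sum() * voxel_volume
fraction = aquifer_mask.mean()      # aquifer fraction of the modelled block
```

In practice the cutoff comes from calibrating inverted resistivities against borehole lithology logs, which is exactly the data-integration step the abstract emphasises.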

Relevance:

30.00%

Abstract:

Waste management represents an important issue in our society, and Waste-to-Energy incineration plants have played a significant role in recent decades, with increasing importance in Europe. One of the main issues posed by waste combustion is the generation of air contaminants. Particular concern surrounds acid gases, mainly hydrogen chloride and sulfur oxides, due to their potential impact on the environment and on human health. Therefore, in the present study the main available technological options for flue gas treatment were analyzed, focusing on dry treatment systems, which are increasingly applied in Municipal Solid Waste (MSW) incinerators. An operational model was proposed to describe and optimize the acid gas removal process. It was applied to an existing MSW incineration plant, where acid gases are neutralized in a two-stage dry treatment system. This process is based on the injection of powdered calcium hydroxide and sodium bicarbonate into reactors followed by fabric filters. HCl and SO2 conversions were expressed as functions of the reactant flow rates, with model parameters calculated from literature and plant data. Implementation in process simulation software allowed the identification of optimal operating conditions, taking into account the reactant feed rates, the amount of solid products and the recycling of the sorbent. Alternative configurations of the reference plant were also assessed. The applicability of the operational model was extended by also developing a fundamental approach to the issue: a predictive model was developed, describing the mass transfer and kinetic phenomena governing acid gas neutralization with solid sorbents. The rate-controlling steps were identified through the reproduction of literature data, allowing the description of acid gas removal in the case study analyzed. A laboratory device was also designed and commissioned to assess the required model parameters.
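The shape of an operational "conversion vs reactant feed" relation can be sketched with a saturating curve in the sorbent stoichiometric ratio; the exponential form and the constant k are illustrative assumptions, not the thesis model:

```python
import math

# Illustrative operational relation: acid gas conversion as a saturating
# function of the stoichiometric ratio SR (moles of sorbent fed divided by
# the stoichiometric requirement). Form and k are assumptions for the sketch.

def conversion(sr, k=1.4):
    """Fraction of HCl (or SO2) neutralised at stoichiometric ratio sr."""
    return 1.0 - math.exp(-k * sr)

def feed_for_target(x_target, stoich_mol_per_h, k=1.4):
    """Invert the relation: sorbent feed needed for conversion x_target."""
    sr = -math.log(1.0 - x_target) / k
    return sr * stoich_mol_per_h

x_sr2 = conversion(2.0)                        # SR = 2 gives about 94% removal
feed = feed_for_target(0.99, stoich_mol_per_h=100.0)
# tighter emission targets require disproportionately more sorbent
```

Inverting such a curve is the essence of the optimisation described above: every extra point of conversion costs progressively more reactant and produces more solid residue.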

Relevance:

30.00%

Abstract:

In this thesis, the potential risks associated with the application of biochar to soil, as well as the stability of biochar itself, were investigated. The study focused on the potential risks arising from the occurrence of polycyclic aromatic hydrocarbons (PAHs) in biochar. An analytical method was developed for the determination of the 16 USEPA PAHs in both the original biochar and soil containing biochar. The method was successfully validated with a certified reference material for the soil matrix and compared with methods in use in other laboratories during a laboratory exercise within EU-COST Action TD1107. The concentration of the 16 USEPA PAHs, along with the 15 EU PAHs (priority hazardous substances in food), was determined in a suite of currently available biochars for agricultural field applications, derived from a variety of parent materials and pyrolysis conditions. The biochars analyzed contained the USEPA and some of the EU PAHs at detectable levels, ranging from 1.2 to 19 µg g-1. This method allowed investigating the changes in PAH content and distribution in a four-year study following biochar addition to a vineyard soil (CNR-IBIMET). The results showed that biochar addition led to an increase in the amount of PAHs; however, the levels of PAHs in the soil remained within the maximum acceptable concentrations for European countries. The CNR-IBIMET vineyard trial was also exploited to study the environmental stability of biochar and its impact on soil organic carbon. The stability of biochar was investigated by analytical pyrolysis (Py-GC-MS) and pyrolysis in the presence of hydrogen (HyPy). The findings showed that biochar amendment significantly influenced the concentration of the stable soil carbon fraction during the incubation period. Moreover, HyPy and Py-GC-MS were applied to biochars derived from three different feedstocks at two different pyrolysis temperatures. The results evidenced the influence of feedstock type and pyrolysis conditions on the degree of carbonisation.

Relevance:

30.00%

Abstract:

The research field of my PhD concerns mathematical modeling and numerical simulation applied to the analysis of cardiac electrophysiology at the single-cell level. This is possible thanks to the development of mathematical descriptions of single cellular components: ionic channels, pumps, exchangers and subcellular compartments. Due to the difficulties of in vivo experiments on human cells, most measurements are acquired in vitro using animal models (e.g. guinea pig, dog, rabbit). Moreover, to study the cardiac action potential and all its features, it is necessary to acquire more specific knowledge about the single ionic currents that contribute to cardiac activity. Electrophysiological models of the heart have become very accurate in recent years, giving rise to extremely complicated systems of differential equations. Although they describe the behavior of cardiac cells quite well, these models are computationally demanding for numerical simulations and very difficult to analyze from a mathematical (dynamical-systems) viewpoint. Simplified mathematical models that capture the underlying dynamics to a certain extent are therefore frequently used. The results presented in this thesis have confirmed that a close integration of computational modeling and experimental recordings in real myocytes, as performed by the dynamic clamp technique, is a useful tool for enhancing our understanding of various components of normal cardiac electrophysiology, as well as of arrhythmogenic mechanisms in pathological conditions, especially when fully integrated with experimental data.
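The "simplified mathematical models" mentioned above can be as small as the classical two-variable FitzHugh-Nagumo system, a standard reduction of excitable-cell dynamics (shown here as a generic illustration, not one of the detailed cardiac models used in the thesis; stimulus timing and parameters are the textbook values):

```python
# FitzHugh-Nagumo excitable cell: fast membrane variable v, slow recovery w.
# A brief supra-threshold current pulse triggers a full spike-like excursion;
# forward-Euler integration with a small step is adequate for this sketch.

def fitzhugh_nagumo(i_stim, t_end=100.0, dt=0.01, a=0.7, b=0.8, eps=0.08):
    """Returns the trace of v(t) under a 2-time-unit stimulus pulse."""
    v, w = -1.2, -0.6                    # near the resting state
    trace = []
    for step in range(int(t_end / dt)):
        t = step * dt
        stim = i_stim if 10.0 <= t <= 12.0 else 0.0   # brief current pulse
        dv = v - v ** 3 / 3.0 - w + stim
        dw = eps * (v + a - b * w)
        v += dt * dv
        w += dt * dw
        trace.append(v)
    return trace

trace = fitzhugh_nagumo(i_stim=1.0)
peak = max(trace)
# the pulse elicits an action-potential-like upstroke well above rest
```

Such reductions trade biophysical detail for dynamical-systems tractability, which is exactly the tradeoff the abstract describes.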

Relevance:

30.00%

Abstract:

Heart diseases are the leading cause of death worldwide, for both men and women. However, the ionic mechanisms underlying many cardiac arrhythmias and genetic disorders are not completely understood, leading to limited efficacy of the currently available therapies and leaving many open questions for cardiac electrophysiologists. On the other hand, experimental data availability is still a great issue in this field: most experiments are performed in vitro and/or using animal models (e.g. rabbit, dog and mouse), even when the final aim is to better understand the electrical behaviour of the in vivo human heart in either physiological or pathological conditions. Computational modelling constitutes a primary tool in cardiac electrophysiology: in silico simulations, based on the available experimental data, may help to understand the electrical properties of the heart and the ionic mechanisms underlying a specific phenomenon. Once validated, mathematical models can be used for making predictions and testing hypotheses, thus suggesting potential therapeutic targets. This PhD thesis aims to apply computational modelling of the human single-cell action potential (AP) to three clinical scenarios, in order to gain new insights into the ionic mechanisms involved in the electrophysiological changes observed in vitro and/or in vivo. The first context is blood electrolyte variations, which may occur in patients due to different pathologies and/or therapies. In particular, we focused on extracellular Ca2+ and its effect on the AP duration (APD). The second context is haemodialysis (HD) therapy: in addition to blood electrolyte variations, patients undergo many other changes during HD, e.g. in heart rate, cell volume, pH, and sympatho-vagal balance. The third context is human hypertrophic cardiomyopathy (HCM), a genetic disorder characterised by an increased arrhythmic risk and still lacking a specific pharmacological treatment.
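One concrete link between blood electrolytes and the AP is the Nernst equilibrium potential, which shifts when extracellular Ca2+ changes. The formula is standard; the intracellular concentration and temperature below are illustrative textbook values, not data from the thesis:

```python
import math

# Nernst potential E = (R*T / z*F) * ln([ion]_out / [ion]_in), in mV.
# Lowering extracellular Ca2+ (hypocalcaemia) reduces the driving force for
# Ca2+ entry, one route by which electrolyte changes reshape the AP.

def nernst_potential_mv(conc_out_mm, conc_in_mm, z, temp_k=310.0):
    R, F = 8.314, 96485.0   # J/(mol*K), C/mol
    return 1000.0 * (R * temp_k / (z * F)) * math.log(conc_out_mm / conc_in_mm)

# normal vs halved extracellular Ca2+, assuming [Ca2+]_i = 100 nM (0.0001 mM)
e_ca_normal = nernst_potential_mv(1.8, 1e-4, z=2)
e_ca_low = nernst_potential_mv(0.9, 1e-4, z=2)
# halving [Ca2+]_o lowers E_Ca by about 9 mV
```

In full AP models this shift feeds through the L-type Ca2+ current and Ca2+-dependent inactivation, which is why [Ca2+]o changes have a non-trivial effect on APD.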

Relevance:

30.00%

Abstract:

The way mass is distributed in galaxies plays a major role in shaping their evolution across cosmic time. A galaxy's total mass is usually determined by tracing the motion of stars in its potential, which can be probed observationally by measuring stellar spectra at different distances from the galactic centre; the derived kinematics are used to constrain dynamical models. A class of such models, commonly used to accurately determine the distribution of luminous and dark matter in galaxies, is that of equilibrium models. In this Thesis, a novel approach to the design of equilibrium dynamical models, in which the distribution function is an analytic function of the action integrals, is presented. Axisymmetric and rotating models are used to explain observations of a sample of nearby early-type galaxies in the Calar Alto Legacy Integral Field Area survey. Photometric and spectroscopic data for round and flattened galaxies are well fitted by the models, which are then used to infer the galaxies' total mass distribution and orbital anisotropy. The time evolution of massive early-type galaxies is also investigated with numerical models. Their structural properties (mass, size, velocity dispersion) are observed to evolve, on average, with redshift. In particular, they appear to be significantly more compact at higher redshift, at fixed stellar mass, so it is interesting to investigate what drives such evolution. This Thesis focuses on the role played by dark-matter haloes: their mass-size and mass-velocity dispersion correlations evolve similarly to the analogous correlations of ellipticals; at fixed halo mass, the haloes are more compact at higher redshift, similarly to massive galaxies; and a simple model, in which all of a galaxy's size and velocity-dispersion evolution is due to the cosmological evolution of the underlying halo population, reproduces the observed sizes and velocity dispersions of massive compact early-type galaxies up to a redshift of about 2.
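The statement that haloes of fixed mass are more compact at higher redshift follows from defining the virial radius at a fixed overdensity with respect to the mean density, which scales as (1+z)^3. The sketch below uses this simplified scaling and an illustrative normalisation, not the full cosmological halo evolution of the thesis:

```python
# Toy virial scalings: R_vir ~ M^(1/3) * (1+z)^(-1) at fixed overdensity with
# respect to the mean density, and sigma^2 ~ G*M/R, so at fixed mass
# sigma ~ (1+z)^(1/2). Normalisation (200 kpc at 1e12 Msun) is illustrative.

G_KPC_KMS2_MSUN = 4.301e-6   # G in kpc (km/s)^2 / Msun

def virial_radius_kpc(mass_msun, z, r0_kpc=200.0, m0_msun=1e12):
    return r0_kpc * (mass_msun / m0_msun) ** (1.0 / 3.0) / (1.0 + z)

def virial_dispersion_kms(mass_msun, z):
    r = virial_radius_kpc(mass_msun, z)
    return (G_KPC_KMS2_MSUN * mass_msun / r) ** 0.5

r_z0 = virial_radius_kpc(1e12, 0.0)
r_z2 = virial_radius_kpc(1e12, 2.0)
s_z0 = virial_dispersion_kms(1e12, 0.0)
s_z2 = virial_dispersion_kms(1e12, 2.0)
# at fixed mass, the z = 2 halo is 3x smaller and sqrt(3) times "hotter"
```

If galaxy size and velocity dispersion track those of the host halo, this scaling alone pushes fixed-mass galaxies toward compactness at high redshift, which is the qualitative content of the simple model described above.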