11 results for Two Approaches
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
There are different ways to carry out cluster analysis of categorical data in the literature, and the choice among them is strongly related to the researcher's aim, leaving aside time and economic constraints. The main approaches to clustering are usually distinguished into model-based and distance-based methods: the former assume that objects belonging to the same class are similar in the sense that their observed values come from the same probability distribution, whose parameters are unknown and need to be estimated; the latter evaluate distances among objects by a defined dissimilarity measure and, based on it, allocate units to the closest group. In clustering, one may be interested in classifying similar objects into groups, or in finding observations that come from the same true homogeneous distribution. But do both of these aims lead to the same clustering? And how good are clustering methods designed to fulfil one of these aims in terms of the other? To answer these questions, two approaches, namely a latent class model (mixture of multinomial distributions) and a Partitioning Around Medoids (PAM) approach, are evaluated and compared by the Adjusted Rand Index, Average Silhouette Width and Pearson-Gamma indices in a fairly wide simulation study. Simulation outcomes are plotted in two-dimensional graphs via Multidimensional Scaling; point size is proportional to the number of overlapping points and different colours are used according to cluster membership.
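As a minimal sketch of the kind of comparison described above (not the thesis's simulation design: the data matrix X and the two partitions below are synthetic placeholders, and scikit-learn is assumed to be available), the Adjusted Rand Index between a model-based and a distance-based partition and the Average Silhouette Width of each partition under a simple-matching (Hamming) dissimilarity can be computed as follows:

```python
# Hedged illustration: synthetic categorical data and two hypothetical partitions.
import numpy as np
from sklearn.metrics import adjusted_rand_score, silhouette_score

rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(200, 6))        # 200 objects, 6 categorical variables (integer-coded)
labels_lcm = rng.integers(0, 2, size=200)    # placeholder for the latent class model partition
labels_pam = rng.integers(0, 2, size=200)    # placeholder for the PAM partition

# Agreement between the two partitions (chance-corrected)
print("ARI:", adjusted_rand_score(labels_lcm, labels_pam))

# Internal quality of each partition under a simple-matching (Hamming) dissimilarity
print("ASW (LCM):", silhouette_score(X, labels_lcm, metric="hamming"))
print("ASW (PAM):", silhouette_score(X, labels_pam, metric="hamming"))
```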
Abstract:
Urbanization is a continuing phenomenon throughout the world. Grasslands, forests, etc. are being continually converted to residential, commercial and industrial complexes, roads and streets, and so on. One of the side effects of urbanization with which engineers and planners must deal is the increase in peak flows and runoff volumes from rainfall events. As a result, urban drainage and flood control systems must be designed to accommodate the peak flows from a variety of storms that may occur. Usually the peak flow after development is required not to exceed what would have occurred for the same storm under the conditions existing prior to development. To achieve this it is necessary to design detention storage that holds back runoff and releases it downstream at controlled rates. In the first part of the work, various simplified formulations that can be adopted for the design of stormwater detention facilities were developed. In order to obtain a simplified hydrograph, two approaches were adopted: the kinematic routing technique and the linear reservoir schematization. For each of the two approaches, two further formulations were obtained, depending on whether the IDF (intensity-duration-frequency) curve is described with two or three parameters. Other formulations were developed according to whether the outlet has a constant discharge or one that depends on the water level in the pond. All these formulations can be easily applied once the characteristics of the drainage system, the maximum allowable discharge at the outlet and the return period characterizing the IDF curve are known. In this way the volume of the detention pond can be calculated. In the second part of the work, the design of detention ponds was analyzed using continuous simulation models. The drainage systems adopted for the simulations, performed with SWMM5, are fictitious systems with catchments of different sizes and shapes, driven by a 16-year historical rainfall time series recorded in Bologna. This approach suffers from the fact that a continuous rainfall record is often not available and, when it is, such modelling can be very expensive; moreover, the majority of design practitioners are not prepared to use long-term continuous modelling in the design of stormwater detention facilities. In the third part of the work, statistical and stochastic methodologies for defining the volume of the detention pond were analyzed. In particular, the results of the long-term simulations performed with SWMM were used to obtain the data needed to apply the statistical and stochastic formulations. All these methodologies were compared, and correction coefficients were proposed on the basis of the statistical and stochastic formulations. In this way, engineers who have to design a detention pond can apply a simplified procedure, appropriately corrected with the proposed coefficients.
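To illustrate how a simplified sizing formulation of this kind is typically applied (this is a generic sketch, not one of the thesis's own formulations; the IDF parameters, runoff coefficient, catchment area and allowed outflow below are hypothetical), the design volume can be estimated as the maximum, over the storm duration, of the difference between the inflow volume given by a two-parameter IDF curve and the volume released at a constant outlet discharge:

```python
# Hedged sketch: detention volume with IDF curve i(D) = a * D**(-n) [mm/h],
# constant outlet discharge Q_out, runoff coefficient C, catchment area A.
import numpy as np

a, n = 35.0, 0.45          # hypothetical IDF parameters (mm/h, D in hours)
C, A = 0.7, 50_000.0       # hypothetical runoff coefficient and catchment area [m^2]
Q_out = 0.15               # hypothetical allowed constant outflow [m^3/s]

D = np.linspace(0.1, 24.0, 2000)                 # candidate storm durations [h]
V_in = C * (a * D**(1.0 - n) / 1000.0) * A       # inflow volume [m^3]
V_out = Q_out * D * 3600.0                        # volume released during the storm [m^3]
V = V_in - V_out                                  # storage required for each duration

print(f"critical duration ~ {D[V.argmax()]:.2f} h, design volume ~ {V.max():.0f} m^3")
```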
Abstract:
This study aims at providing a theoretical framework encompassing the two approaches towards entrepreneurial opportunity (opportunity discovery and opportunity creation) by outlining a trajectory from firm creation to capability development, and on to firm performance in the short term (firm survival) and the medium/long term (growth rate). A set of empirically testable hypotheses is proposed and tested by performing qualitative analyses of interviews with a small sample of entrepreneurs and event history analysis of a large sample of firms founded in the United States in 2004.
Abstract:
The ever-increasing demand from users who want high-quality broadband services while on the move is straining the efficiency of current spectrum allocation paradigms, leading to an overall feeling of spectrum scarcity. To circumvent this problem, two possible solutions are being investigated: (i) implementing new technologies capable of accessing temporarily or locally unused bands without interfering with licensed services, such as Cognitive Radios; (ii) releasing some spectrum bands thanks to new services providing higher spectral efficiency, e.g., DVB-T, and allocating them to new wireless systems. These two approaches are promising, but also pose novel coexistence and interference-management challenges. In particular, the deployment of devices such as Cognitive Radios, characterized by the inherently unplanned, irregular and random locations of the network nodes, requires advanced mathematical techniques to explicitly model their spatial distribution; in this context, system performance and optimization are strongly dependent on the spatial configuration. On the other hand, allocating released spectrum bands to other wireless services poses severe coexistence issues with all the pre-existing services on the same or adjacent spectrum bands. In this thesis, these methodologies for better spectrum usage are investigated. In particular, using Stochastic Geometry theory, a novel mathematical framework is introduced for cognitive networks, providing a closed-form expression for the coverage probability and a single-integral form for the average downlink rate and the Average Symbol Error Probability. Then, focusing on more regulatory aspects, interference challenges between DVB-T and LTE systems are analysed, proposing a versatile methodology for their proper coexistence. Moreover, the studies performed within the CEPT SE43 working group on the amount of spectrum potentially available to Cognitive Radios and an analysis of the Hidden Node problem are provided. Finally, a study on the extension of cognitive technologies to Hybrid Satellite-Terrestrial Systems is proposed.
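For orientation, a well-known closed form of this type from the stochastic-geometry literature (Andrews, Baccelli and Ganti, 2011) is recalled below for an interference-limited Poisson-distributed downlink with Rayleigh fading; it is quoted only as an example of the kind of expression meant, not as the thesis's own result.

```latex
% Coverage probability for SIR threshold T and path-loss exponent \alpha > 2
% (interference-limited PPP downlink, Rayleigh fading); illustrative only.
P_c(T,\alpha) \;=\; \frac{1}{1+\rho(T,\alpha)},
\qquad
\rho(T,\alpha) \;=\; T^{2/\alpha}\int_{T^{-2/\alpha}}^{\infty}\frac{\mathrm{d}u}{1+u^{\alpha/2}} .
```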
Abstract:
This thesis analyses problems related to the applicability of Process Mining tools and techniques in business environments. The first contribution is a presentation of the state of the art of Process Mining and a characterization of companies in terms of their "process awareness". The work continues by identifying the circumstances where problems can emerge: data preparation, the actual mining, and results interpretation. Further problems are the configuration of parameters by non-expert users and computational complexity. We concentrate on two possible scenarios: "batch" and "on-line" Process Mining. Concerning batch Process Mining, we first investigate the data preparation problem and propose a solution for the identification of the "case-ids" whenever this field is not explicitly indicated. After that, we concentrate on problems at mining time and propose the generalization of a well-known control-flow discovery algorithm in order to exploit non-instantaneous events. The use of interval-based recording leads to an important improvement in performance. Later on, we report our work on parameter configuration for non-expert users. We present two approaches to select the "best" parameter configuration: one is completely autonomous; the other requires human interaction to navigate a hierarchy of candidate models. Concerning data interpretation and results evaluation, we propose two metrics: a model-to-model metric and a model-to-log metric. Finally, we present an automatic approach for the extension of a control-flow model with social information, in order to simplify the analysis of these perspectives. The second part of this thesis deals with control-flow discovery algorithms in on-line settings. We propose a formal definition of the problem and two baseline approaches. Two actual mining algorithms are proposed: the first is the adaptation of a frequency counting algorithm to the control-flow discovery problem; the second constitutes a framework of models which can be used for different kinds of streams (stationary versus evolving).
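To give an idea of what adapting a frequency counting algorithm to on-line control-flow discovery can look like, the sketch below tracks directly-follows relations over an event stream with a Lossy-Counting-style budget. It is a simplified illustration under stated assumptions, not the thesis's implementation; the function and parameter names are hypothetical.

```python
def stream_directly_follows(events, epsilon=0.01):
    """events: iterable of (case_id, activity) pairs from an event stream.
    Returns approximate directly-follows frequencies (error at most epsilon * N)."""
    bucket_width = max(1, int(1 / epsilon))
    counts = {}            # (a, b) -> [frequency, delta]  (Lossy Counting entries)
    last_activity = {}     # case_id -> last activity observed for that case
    n = 0
    for case_id, activity in events:
        prev = last_activity.get(case_id)
        last_activity[case_id] = activity
        if prev is None:
            continue
        n += 1                                   # one more directly-follows observation
        bucket = (n - 1) // bucket_width + 1     # current bucket id
        pair = (prev, activity)
        if pair in counts:
            counts[pair][0] += 1
        else:
            counts[pair] = [1, bucket - 1]
        if n % bucket_width == 0:                # periodic pruning keeps memory bounded
            for p in [p for p, (f, d) in counts.items() if f + d <= bucket]:
                del counts[p]
    return {p: f for p, (f, d) in counts.items()}

# Toy usage on a stream of events interleaved across two cases
stream = [("c1", "A"), ("c2", "A"), ("c1", "B"), ("c2", "C"), ("c1", "C")]
print(stream_directly_follows(stream, epsilon=0.1))
# -> {('A', 'B'): 1, ('A', 'C'): 1, ('B', 'C'): 1}
```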
Abstract:
This thesis deals with the transformation of ethanol into acetonitrile. Two approaches are investigated: (a) the ammoxidation of ethanol to acetonitrile and (b) the amination of ethanol to acetonitrile. The reaction of ethanol ammoxidation to acetonitrile has been studied using several catalytic systems, such as vanadyl pyrophosphate, supported vanadium oxide, and multimetal molybdates and antimonates. The main conclusions are: (I) The surface acidity must be very low, because acidity catalyzes several undesired reactions, such as the formation of ethylene and of heavy compounds. (II) Supported vanadium oxide is the catalyst showing the best catalytic behaviour, but the role of the support is of crucial importance. (III) Both metal molybdates and antimonates show interesting catalytic behaviour, but are poorly active and probably require harsher conditions than those used with the V oxide-based catalysts. (IV) One key point in the reaction network is the rate of reaction between acetaldehyde (the first intermediate) and ammonia, compared to the parallel rates of acetaldehyde transformation into by-products (CO, CO2, HCN, heavy compounds). Concerning the non-oxidative process, two possible strategies are investigated: (a) ethanol ammonolysis to ethylamine coupled with ethylamine dehydrogenation, and (b) the direct non-reductive amination of ethanol to acetonitrile. Despite the good results obtained in each single step, the former route does not lead to good results in terms of yield to acetonitrile. The direct amination can be catalyzed with good acetonitrile yield over catalysts based on supported metal oxides. Strategies aimed at limiting catalyst deactivation have also been investigated.
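For orientation, the overall stoichiometries of the two routes can be written as follows (textbook balances given as a reading aid, not taken from the thesis):

```latex
\begin{align}
\mathrm{CH_3CH_2OH + NH_3 + O_2} &\longrightarrow \mathrm{CH_3CN + 3\,H_2O} && \text{(ammoxidation)}\\
\mathrm{CH_3CH_2OH + NH_3} &\longrightarrow \mathrm{CH_3CN + H_2O + 2\,H_2} && \text{(non-reductive amination)}
\end{align}
```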
Abstract:
Climate change is one of the main problems recognized in the Sustainable Development Goals and in sustainable agriculture objectives. Farming contributes significantly to the overall greenhouse gas (GHG) concentration in the atmosphere, accounting for approximately 10-12 percent of total GHG emissions; when land-use change is also taken into consideration, including deforestation driven by agricultural expansion for food, fiber and fuel, the figure rises to approximately 30 percent (Smith et al., 2007). There are two distinct methodological approaches to environmental impact assessment: Life Cycle Assessment (a bottom-up approach) and Input-Output Analysis (a top-down approach). The two methodologies differ significantly, and there is no immediate choice between them if the scope of the study is at the sectoral level. Instead, as an alternative, hybrid approaches which combine the two have emerged. The aim of this study is to analyze in greater detail the agricultural sector's contribution to climate change caused by the consumption of food products: to identify the food products that have the greatest impact through their life cycle, to identify their hotspots, and to evaluate the corresponding mitigation possibilities. At the same time, the study evaluates the methodological possibilities and models to be applied for this purpose both at the EU level and at the country level (Italy).
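To make the top-down side of the comparison concrete, the standard environmentally-extended input-output relation is recalled below (a generic textbook formulation, not a result of this study):

```latex
% Emissions e attributed to a final-demand vector y:
% A = technical-coefficient matrix, (I-A)^{-1} = Leontief inverse,
% f = row vector of GHG emission intensities per unit of sectoral output.
e \;=\; f\,(I - A)^{-1}\, y
```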
Abstract:
This PhD thesis is focused on the development of fibrous polymeric scaffolds for tissue engineering applications and on the improvement of scaffold biomimetic properties. Scaffolds were fabricated by electrospinning, which allows scaffolds made of polymeric micro- or nanofibers to be obtained. Biomimetism was enhanced by following two approaches: (1) the use of natural biopolymers, and (2) the modification of the fiber surface chemistry. Gelatin was chosen for its bioactive properties and cellular affinity; however, it has poor mechanical properties. This problem was overcome by adding poly(lactic acid) to the scaffold through co-electrospinning, and the mechanical properties of the composite constructs were assessed. Gelatin effectively improves cell growth and viability and, worth noting, composite scaffolds of gelatin and poly(lactic acid) were more effective than a plain gelatin scaffold. Scaffolds made of pure collagen fibers were also fabricated, and the modification of the collagen triple-helix structure in electrospun collagen fibers was studied. Mechanical properties were evaluated before and after crosslinking. The crosslinking procedure was developed and optimized by using - for the first time on electrospun collagen fibers - the crosslinking reactant 1,4-butanediol diglycidyl ether, with good results in terms of fiber stabilization. Cell culture experiments showed good results in terms of cell adhesion and morphology. The fiber surface chemistry of an electrospun poly(lactic acid) scaffold was modified by plasma treatment. Plasma did not affect the thermal and mechanical properties of the scaffold, while it greatly increased its hydrophilicity through the introduction of carboxyl groups at the fiber surface. This fiber functionalization enhanced fibroblast viability and spreading. Surface modifications by chemical reactions were conducted on electrospun scaffolds made of a polysophorolipid. The aim was to introduce a biomolecule at the fiber surface. By developing a series of chemical reactions, one oligopeptide every three repeating units of the polysophorolipid was grafted onto the surface of the electrospun fibers.
Abstract:
Epoxy resins are mainly produced by reacting bisphenol A with epichlorohydrin. Growing concerns about the negative health effects of bisphenol A are urging researchers to find alternatives. In this work diphenolic acid is suggested, as it derives from levulinic acid, obtained from renewable resources. Nevertheless, it is also synthesized from phenol, derived from fossil resources, which in the present work has been substituted with plant-based phenols. Two interesting derivatives were identified: diphenolic acid from catechol and diphenolic acid from resorcinol. Epichlorohydrin, on the other hand, is highly carcinogenic and volatile, leading to a considerable risk of exposure. Thus, two alternative approaches have been investigated and compared with the use of epichlorohydrin. The resulting resins have been characterized in order to find an appropriate application, as epoxies are commonly used for a wide range of products, ranging from composite materials for boats to films for food cans. Self-curing capacity was observed for the resin deriving from the diphenolic acid from catechol. The glycidyl ether of the diphenolic acid from resorcinol, a fully renewable compound, was cured in isothermal and non-isothermal tests tracked by DSC. Two aliphatic amines were used, namely 1,4-butanediamine and 1,6-hexamethylenediamine, in order to determine the effect of chain length on the curing of an epoxy-amine system and to determine the kinetic parameters. The latter are crucial for planning any industrial application. Both diamines demonstrated superior properties compared to traditional bisphenol A-amine systems.
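For reference, one model commonly fitted to isothermal and non-isothermal DSC curing data of epoxy-amine systems is the autocatalytic (Kamal-type) rate equation below; it is quoted as an example of the kind of kinetic parameters mentioned above, not necessarily the model adopted in the thesis.

```latex
% \alpha = degree of cure, m and n = reaction orders,
% k_i(T) = A_i \exp(-E_{a,i}/RT) are Arrhenius rate constants.
\frac{d\alpha}{dt} \;=\; \left(k_1 + k_2\,\alpha^{m}\right)\left(1-\alpha\right)^{n}
```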
Abstract:
In this thesis two approaches were applied to achieve a twofold general objective. The first chapter was dedicated to the study of the distribution of the expression of several bitter and fat taste receptor genes in different gastrointestinal tracts. A set of 7 genes for bitter taste and 3 genes for fat taste was amplified with real-time PCR from mRNA extracted from 5 gastrointestinal segments of weaned pigs. The presence of gene expression of several chemosensing receptors for bitter and fat taste in different compartments of the stomach confirms that this organ should be considered a player in the early detection of bolus composition. In the second chapter we investigated, in young pigs, the distribution of the butyrate-sensing olfactory receptor (OR51E1) along the GIT, its relation with some endocrine markers, its variation with age, and its changes after interventions affecting the gut environment and intestinal microbiota in piglets and in different tissues. Our results indicate that OR51E1 is strictly related to normal GIT enteroendocrine activity. In the third chapter we investigated the differential gene expression between the oxyntic and pyloric mucosa in seven starter pigs. The obtained data indicate that there is significant differential gene expression between the oxyntic and pyloric mucosa of the young pig, and further functional studies are needed to confirm its physiological importance. In the last chapter, thymol, which has been proposed as an oral alternative to antibiotics in the feed of pigs and broilers, was introduced directly into the stomach of 8 weaned pigs, and the gastric oxyntic and pyloric mucosa were sampled. The analysis of whole-transcript expression shows that the stimulation of gastric proliferative activity and the control of digestive activity by thymol can positively influence gastric maturation and function in weaned pigs.
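As background on how relative expression is typically quantified from real-time PCR data (the widely used Livak 2^(-ΔΔCt) method; the thesis may well rely on a different quantification scheme), the fold change between a sample and a calibrator is computed as:

```latex
% C_t = threshold cycle; the reference gene normalizes for input amount.
\Delta C_t = C_t^{\text{target}} - C_t^{\text{reference}},\qquad
\Delta\Delta C_t = \Delta C_t^{\text{sample}} - \Delta C_t^{\text{calibrator}},\qquad
\text{fold change} = 2^{-\Delta\Delta C_t}
```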
Abstract:
During my PhD, starting from the original formulations proposed by Bertrand et al., 2000 and Emolo & Zollo, 2005, I developed inversion methods and applied them to different earthquakes. In particular, large efforts have been devoted to the study of model resolution and to the estimation of the model parameter errors. To study the kinematic source characteristics of the Christchurch earthquake we performed a joint inversion of strong-motion, GPS and InSAR data using a non-linear inversion method. Considering the complexity highlighted by the surface deformation data, we adopted a fault model consisting of two partially overlapping segments, with dimensions of 15×11 and 7×7 km², having different faulting styles. This two-fault model allows a better reconstruction of the complex shape of the surface deformation data. The total seismic moment resulting from the joint inversion is 3.0×10^25 dyne·cm (Mw = 6.2), with an average rupture velocity of 2.0 km/s. Errors associated with the kinematic model have been estimated at around 20-30%. The 2009 Aquila earthquake was characterized by an intense aftershock sequence that lasted several months. In this study we applied an inversion method that takes the apparent Source Time Functions (aSTFs) as data, to a Mw 4.0 aftershock of the Aquila sequence. The estimation of the aSTFs was obtained using the deconvolution method proposed by Vallée et al., 2004. The inversion results show a heterogeneous slip distribution, characterized by two main slip patches located NW of the hypocenter, and a variable rupture velocity distribution (mean value of 2.5 km/s), showing a rupture front acceleration between the two high-slip zones. Errors of about 20% characterize the final estimated parameters.
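As a consistency check on the figures quoted above, the moment magnitude follows from the seismic moment through the standard moment-magnitude relation (with M0 expressed in N·m; 3.0×10^25 dyne·cm = 3.0×10^18 N·m):

```latex
M_w \;=\; \tfrac{2}{3}\left(\log_{10} M_0 - 9.1\right)
      \;=\; \tfrac{2}{3}\left(\log_{10}\!\left(3.0\times 10^{18}\right) - 9.1\right)
      \;\approx\; 6.25
```

which, given rounding of the seismic moment, is in line with the Mw = 6.2 reported in the abstract.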