15 results for non-process elements

in AMS Tesi di Laurea - Alm@DL - Università di Bologna


Relevance: 40.00%

Abstract:

This thesis aims to analyse the static stress components of a tank, made of API 5L X52 steel, subjected to bending loads and internal pressure, using the finite element program PLCd4, developed at the International Center for Numerical Methods in Engineering (CIMNE - Barcelona). This type of analysis is part of the European project ULCF, whose goal is the study of ultra-low-cycle fatigue in steel structures. Before examining the complete tank structure, the behaviour of the material was studied in order to implement in the program a new type of curve that best represents the evolution of the internal stresses. During the preparatory work for the thesis, an algorithm for the distribution of surface pressures on 3D bodies was added to the program and subsequently used to analyse the internal pressure in the tank. FEM analyses of the tank were carried out for several load configurations, modelling the supporting structure as closely as possible to the real "full scale test" case. From the analytical point of view the results are satisfactory, as they reflect the correct behaviour of the tank under very high pressures and confirm the soundness of the program for computational analysis.

Relevance: 40.00%

Abstract:

Microbial oils are receiving increasing attention as a possible alternative to vegetable oils in the process of replacing fossil fuels. However, several aspects need to be optimised in order to obtain oils that are economically competitive and have the desired physico-chemical characteristics. In this research, two different approaches were used to achieve this objective. The first was based on the genetic engineering of the yeast C. oleaginous, in order to increase lipid productivity and modify the composition of the synthesised triglycerides (TAGs). A protocol based on Agrobacterium-mediated genetic transformation was used to overexpress diacylglycerol transferase (DGA1), the enzyme responsible for the last step of TAG synthesis, and Δ9-desaturase, the enzyme that catalyses the conversion of stearic acid (C18:0) into oleic acid (C18:1). The selection of positive colonies and the analysis of the mutants obtained confirmed the success of the transformation. The second approach aimed to study how different volatile fatty acids (VFAs), a feedstock obtainable from the treatment of industrial waste, influence the growth of C. oleaginous and the profile of the lipids it accumulates. To this end, 1-L fed-batch fermentations were carried out using glucose and synthetic mixtures of acetic acid and VFAs as carbon sources. The simultaneous use of acetic acid and secondary acids showed that it is possible to stimulate the microbial metabolism so as to increase oil accumulation and obtain a desired lipid chemical composition.

Relevance: 30.00%

Abstract:

Human reasoning is a fascinating and complex cognitive process that can be applied in different research areas, such as philosophy, psychology, law and finance. Unfortunately, developing supporting software (for those different areas) able to cope with such complex reasoning is difficult and requires a suitable abstract logical formalism. In this thesis we aim to develop a program whose job is to evaluate a theory (a set of rules) with respect to a goal, and provide results such as "the goal is derivable from the KB (of the theory)". In order to achieve this we need to analyse different logics and choose the one that best meets our needs. In logic, usually, we try to determine whether a given conclusion is logically implied by a set of assumptions T (the theory). However, when we deal with logic programming we need an efficient algorithm to find such implications. In this work we use a logic rather similar to human logic. Indeed, human reasoning requires an extension of first-order logic able to reach a conclusion from premises that are not definitely true and belong to an incomplete set of knowledge. Thus, we implemented a defeasible logic framework able to manipulate defeasible rules. Defeasible logic is a non-monotonic logic designed by Nute for efficient defeasible reasoning (see Chapter 2). This kind of application is useful in the legal area, especially if it offers an implementation of an argumentation framework that provides a formal modelling of the game. Roughly speaking, let the theory be the set of laws and a keyclaim the conclusion that one of the parties wants to prove (and the other wants to defeat); adding dynamic assertion of rules, namely facts put forward by the parties, we can then play an argumentative challenge between two players and decide whether the conclusion is provable or not, depending on the different strategies performed by the players.
Implementing a game model requires one more meta-interpreter able to evaluate the defeasible logic framework; indeed, according to Gödel's theorem (see page 127), we cannot evaluate the meaning of a language using the tools provided by the language itself, but need a meta-language able to manipulate the object language. Thus, rather than a simple meta-interpreter, we propose a meta-level containing different meta-evaluators. The first has been explained above, the second is needed to run the game model, and the last will be used to change the game execution and tree derivation strategies.
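The core of defeasible derivation — a conclusion holds if some applicable supporting rule defeats every applicable attacking rule — can be sketched in a few lines. This is an illustrative toy, not the thesis' implementation (which is a Prolog meta-interpreter of Nute's logic); the rule encoding, the `beats` superiority relation and the bird/penguin example are assumptions made for the sketch.

```python
def neg(lit):
    # "~p" is the complement of "p" and vice versa
    return lit[1:] if lit.startswith("~") else "~" + lit

def derivable(goal, facts, rules, beats):
    """Toy defeasible derivation: a goal holds if it is a fact, or if some
    applicable supporting rule defeats every applicable attacking rule.
    rules: (name, antecedents, consequent) triples; beats: {(winner, loser)}."""
    if goal in facts:
        return True  # facts are indisputable
    def applicable(rule):
        return all(derivable(a, facts, rules, beats) for a in rule[1])
    supporters = [r for r in rules if r[2] == goal and applicable(r)]
    attackers = [r for r in rules if r[2] == neg(goal) and applicable(r)]
    return any(all((s[0], a[0]) in beats for a in attackers) for s in supporters)

# Classic example: birds normally fly, but the penguin rule is superior.
rules = [("r1", ["bird"], "flies"), ("r2", ["penguin"], "~flies")]
facts = {"bird", "penguin"}
beats = {("r2", "r1")}  # r2 defeats r1
```

With this knowledge base, `derivable("~flies", …)` succeeds because r2 defeats the only attacker r1, while `derivable("flies", …)` fails.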

Relevance: 30.00%

Abstract:

Synthetic biology has recently seen great development: many papers have been published and many applications presented, spanning from the production of biopharmaceuticals to the synthesis of bioenergetic substrates or industrial catalysts. Despite these advances, most applications are quite simple and do not fully exploit the potential of the discipline. This limitation in complexity has many causes, such as the incomplete characterization of some components or the intrinsic variability of biological systems, but one of the most important is the inability of the cell to sustain the additional metabolic burden introduced by a complex circuit. The objective of the project of which this work is part is to address this problem through the engineering of a multicellular behaviour in prokaryotic cells. This system will introduce a cooperative behaviour that allows complex functionalities, unobtainable with a single cell, to be implemented. In particular, the goal is to implement Leader Election, a procedure first devised in the field of distributed computing to identify a single process as organizer and coordinator of a series of tasks assigned to the whole population. The election of the leader greatly simplifies the computation by providing centralized control. Furthermore, this system may even be useful for evolutionary studies that aim to explain how complex organisms evolved from unicellular systems. The work presented here describes, in particular, the design and the experimental characterization of a component of the circuit that solves the Leader Election problem. This module, composed of a hybrid promoter and a gene, is activated in the non-leader cells after receiving the signal that a leader is present in the colony.
The most important element in this case is the hybrid promoter; it has been realized in different versions, applying the heuristic rules stated in [22], and their activity has been experimentally tested. The objective of the experimental characterization was to test the response of the genetic circuit to the introduction, in the cellular environment, of particular molecules, the inducers, which can be considered the inputs of the system. The desired behaviour is similar to that of a logic AND gate, in which the output, represented by the luminous signal produced by a fluorescent protein, is high only in the presence of both inducers. The robustness and stability of this behaviour were tested by changing the concentration of the input signals and building dose-response curves. From these data it is possible to conclude that the analysed constructs have an AND-like behaviour over a wide range of inducer concentrations, even if many differences can be identified in the expression profiles of the different constructs. This variability reflects the fact that the input and output signals are continuous, so their binary representation cannot capture the full complexity of the behaviour. The module of the circuit considered in this analysis has a fundamental role in the realization of the intercellular communication system that is necessary for the cooperative behaviour to take place. For this reason, the second phase of the characterization focused on the analysis of signal transmission. In particular, the interaction between this element and the one responsible for emitting the chemical signal was tested. The desired behaviour is still similar to a logic AND, since even in this case the output signal is determined by the hybrid promoter activity. The experimental results demonstrated that the systems behave correctly, even if there is still substantial variability between them.
The dose-response curves highlighted that stricter constraints on the inducer concentrations need to be imposed in order to obtain a clear separation between the two levels of expression. In the concluding chapter the DNA sequences of the hybrid promoters are analysed, trying to identify the regulatory elements that matter most for determining gene expression. Given the available data it was not possible to draw definitive conclusions. Finally, a few considerations on promoter engineering and the realization of complex circuits are presented, briefly recalling some of the problems outlined in the introduction and proposing a few possible solutions.
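An AND-like dose-response of the kind described above is often summarised phenomenologically as the product of two Hill activation terms, one per inducer. The sketch below is a generic model of that idea; the half-activation constants `k1`, `k2` and the Hill coefficient `n` are illustrative placeholders, not parameters fitted to the constructs characterized in the thesis.

```python
def and_gate_output(i1, i2, k1=1.0, k2=1.0, n=2.0):
    """Normalised output of a two-input AND-like promoter: the product of two
    Hill activation terms is close to 1 only when both inducers are present."""
    h1 = i1 ** n / (k1 ** n + i1 ** n)
    h2 = i2 ** n / (k2 ** n + i2 ** n)
    return h1 * h2

# A coarse dose-response surface: the output is high only in the both-high corner,
# and varies continuously in between (the point made about binary representations).
surface = {(i1, i2): and_gate_output(i1, i2)
           for i1 in (0.0, 0.5, 10.0) for i2 in (0.0, 0.5, 10.0)}
```

Sweeping `i1` and `i2` over finer grids reproduces the continuous dose-response curves that the binary AND abstraction cannot capture.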

Relevance: 30.00%

Abstract:

The subject of this work is the diffusion of turbulence into a non-turbulent flow. This phenomenon can be found in almost every practical case of turbulent flow: all types of shear flows (wakes, jets, boundary layers) present some boundary between the turbulence and the non-turbulent surroundings; all transients from laminar flow to turbulence must account for turbulent diffusion; mixing of flows often involves the injection of a turbulent solution into a non-turbulent fluid. The mechanism of what Phillips defined as "the erosion by turbulence of the underlying non-turbulent flow" is called entrainment. It is usually considered to operate on two scales with different mechanics: small-scale nibbling, the entrainment of fluid by viscous diffusion of turbulence, and large-scale engulfment, which entraps large volumes of flow to be "digested" subsequently by viscous diffusion. The exact role of each in the overall entrainment rate is still not well understood, as is the interplay between these two mechanisms of diffusion. It is nevertheless accepted that the entrainment rate scales with the large-scale properties of the flow, while it is not understood how the large-scale inertial behaviour can affect an intrinsically viscous phenomenon such as the diffusion of vorticity. In the present work we address the problem of turbulent diffusion through pseudo-spectral DNS simulations of the interface between a volume of decaying turbulence and a quiescent flow. These simulations give us first-hand measures of the velocity, vorticity and strain fields at the interface; moreover, the framework of unforced decaying turbulence permits the study of both the spatial and the temporal evolution of these fields. The analysis shows that for this kind of flow the overall production of enstrophy, i.e. the square of the vorticity ω², is dominated near the interface by the local inertial transport of "fresh vorticity" coming from the turbulent flow.
Viscous diffusion instead plays a major role in enstrophy production in the outer region of the interface, where the nibbling process is dominant. The data from our simulation seem to confirm the theory of an inertially stirred viscous phenomenon proposed by other authors and provide new data about the inertial diffusion of turbulence across the interface.

Relevance: 30.00%

Abstract:

The assessment of safety in existing bridges and viaducts led the Ministry of Public Works of the Netherlands to finance a specific campaign aimed at studying the response of the elements of these infrastructures. This activity therefore focuses on the investigation of the behaviour of reinforced concrete slabs under concentrated loads, adopting finite element modelling and comparison with experimental results. These elements are characterized by shear behaviour and shear failure, whose modelling is a hard challenge from a computational point of view, due to the brittle behaviour combined with three-dimensional effects. The numerical modelling of the failure is studied through Sequentially Linear Analysis (SLA), a finite element method that is an alternative to traditional incremental and iterative approaches. The comparison between the two numerical techniques represents one of the first such works in a three-dimensional setting, and is carried out using one of the experimental tests executed on reinforced concrete slabs. The advantage of SLA is that it avoids the well-known convergence problems of typical non-linear analyses by directly specifying a damage increment, in terms of a reduction of stiffness and strength in a particular finite element, instead of increasing the load or displacement on the whole structure. For the first time, particular attention has been paid to specific aspects of the slabs, such as accurate modelling of the constraints and the sensitivity of the solution to mesh density. This detailed analysis of the main parameters proved a strong influence of the tensile fracture energy, the mesh density and the chosen model on the solution in terms of force-displacement diagram, distribution of the crack patterns and shear failure mode.
SLA showed great potential, but it requires further development in two aspects of the modelling: load conditions (constant and proportional loads) and the softening behaviour of brittle materials (such as concrete) in the three-dimensional field, in order to widen its horizons in these new contexts of study.

Relevance: 30.00%

Abstract:

The seismic behaviour of one-storey asymmetric structures has been studied since the 1970s by a number of research studies, which identified the coupled translational-torsional nature of the response of this class of systems, leading to severe displacement magnifications at the perimeter frames and therefore to a significant increase of the local peak seismic demand on the structural elements with respect to equivalent non-eccentric systems (Kan and Chopra 1987). These studies identified the fundamental parameters (such as the fundamental period TL, the normalized eccentricity e and the torsional-to-lateral frequency ratio Ωθ) governing the torsional behaviour of in-plan asymmetric structures, and the trends of that behaviour. It has been clearly recognized that asymmetric structures characterized by Ωθ > 1, referred to as torsionally stiff systems, behave quite differently from structures with Ωθ < 1, referred to as torsionally flexible systems. Previous research works by some of the authors proposed a simple closed-form estimation of the maximum torsional response of one-storey elastic systems (Trombetti et al. 2005 and Palermo et al. 2010), leading to the so-called "Alpha method" for the evaluation of the displacement magnification factors at the corner sides. The present paper provides an upgrade of the "Alpha method" that removes the assumption of linear elastic response of the system. The main objective is to evaluate how the excursion of the structural elements into the inelastic field (due to reaching the yield strength) affects the displacement demand of one-storey in-plan asymmetric structures. The system proposed by Chopra and Goel in 2007, which is claimed to capture the main features of the non-linear response of in-plan asymmetric systems, is used to perform a large parametric analysis varying all the fundamental parameters of the system, including the inelastic demand, by varying the force reduction factor from 2 to 5.
Magnification factors for different force reduction factors are proposed, and comparisons with the results obtained from linear analysis are provided.

Relevance: 30.00%

Abstract:

Nowadays the medical field is struggling to decrease bacterial biofilm formation, which leads to infection. The sterilization of biomedical devices has not changed over a long period of time, which results in high costs for hospital healthcare management. The objective of this project is to investigate electric field effects and surface energy manipulation as solutions for preventing bacterial biofilms on future devices. Based on electrokinetic environments, two different methods were tested: the feasibility of an electric field gradient through the medium (dielectrophoresis, DEP), supported by numerical simulations; and electrowetting on dielectric (EWOD), through the fabrication of gold interdigitated electrodes on silicon/glass substrates, with a standard ~480 nm Teflon (PTFE) layer and a polymeric gasket to contain the bacterial medium. In the first experiment a quantitative analysis was carried out to determine the forces required to reject bacteria, without considering dielectric environment limitations such as the frequency dependence of the bacteria and the medium. In the second experiment the applied voltages were characterized by droplet contact angle measurements and then tested with live bacteria. The project yielded promising results for the DEP application, thanks to the wide range of frequencies that can be used to achieve "general" bacteria rejection; in terms of practicality, however, EWOD probably has a higher potential for success, although more experiments are needed to verify whether it can prevent biofilm adhesion beyond the non-adhesive properties of Teflon itself (including limitations such as Teflon breakthrough and layer sensitivity) at incubation times longer than 24 hours.
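The frequency dependence of the bacteria and the medium enters the time-averaged DEP force through the Clausius-Mossotti factor, F_DEP = 2π ε_m r³ Re[K(ω)] ∇|E_rms|²: a negative Re[K] pushes the particle away from high-field regions (negative DEP), the regime relevant for bacteria rejection. The sketch below computes K(ω); the material properties are generic illustrative values, not measurements from this project.

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity [F/m]

def clausius_mossotti(eps_p, sig_p, eps_m, sig_m, omega):
    """Complex Clausius-Mossotti factor K(w) = (ep* - em*) / (ep* + 2 em*),
    with complex permittivity e* = e - j*sigma/omega for particle (p) and
    medium (m).  Re[K] is bounded in [-0.5, 1.0]."""
    ep = complex(eps_p, -sig_p / omega)
    em = complex(eps_m, -sig_m / omega)
    return (ep - em) / (ep + 2 * em)

# Illustrative check: a weakly conducting particle in a conductive aqueous
# medium at 1 MHz experiences negative DEP (repulsion from electrode edges).
K = clausius_mossotti(eps_p=2.5 * EPS0, sig_p=1e-4,
                      eps_m=78 * EPS0, sig_m=1e-3,
                      omega=2 * math.pi * 1e6)
```

Sweeping `omega` shows the crossover frequencies at which Re[K] changes sign, which is what bounds the "general" rejection band discussed above.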

Relevance: 30.00%

Abstract:

In the last decade the near-surface mounted (NSM) strengthening technique using carbon fibre reinforced polymers (CFRP) has been increasingly used to improve the load-carrying capacity of concrete members. Compared to externally bonded reinforcement (EBR), the NSM system presents considerable advantages. The technique consists in the insertion of CFRP laminate strips into pre-cut slits opened in the concrete cover of the elements to be strengthened. The CFRP reinforcement is bonded to the concrete with an appropriate groove filler, typically epoxy adhesive or cement grout. Up to now, research efforts have mainly focused on several structural aspects, such as bond behaviour, flexural and/or shear strengthening effectiveness, and the energy dissipation capacity of beam-column joints. In such research works, as well as in field applications, the most widespread adhesives used to bond the reinforcement to the concrete are epoxy resins. It is largely accepted that the performance of the whole NSM application strongly depends on the mechanical properties of the epoxy resins, for which proper curing conditions must be assured. Therefore, non-destructive methods that allow monitoring of the curing process of epoxy resins in NSM CFRP systems are desirable, in order to obtain continuous information about the effectiveness of curing and the expected bond behaviour of CFRP/adhesive/concrete systems. The experimental research was developed at the Laboratory of the Structural Division of the Civil Engineering Department of the University of Minho in Guimarães, Portugal (LEST). The main objective was to develop and propose a new method for continuous quality control of the curing of epoxy resins applied in NSM CFRP strengthening systems.
This objective is pursued through the adaptation of an existing technique, termed EMM-ARM (Elasticity Modulus Monitoring through Ambient Response Method) that has been developed for monitoring the early stiffness evolution of cement-based materials. The experimental program was composed of two parts: (i) direct pull-out tests on concrete specimens strengthened with NSM CFRP laminate strips were conducted to assess the evolution of bond behaviour between CFRP and concrete since early ages; and, (ii) EMM-ARM tests were carried out for monitoring the progressive stiffness development of the structural adhesive used in CFRP applications. In order to verify the capability of the proposed method for evaluating the elastic modulus of the epoxy, static E-Modulus was determined through tension tests. The results of the two series of tests were then combined and compared to evaluate the possibility of implementation of a new method for the continuous monitoring and quality control of NSM CFRP applications.
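EMM-ARM infers stiffness from the resonance of a beam made of the curing material. A minimal sketch of the underlying idea, for a plain cantilever specimen, inverts the first-mode natural-frequency formula of a clamped-free beam; the geometry and target modulus below are placeholders, and the real EMM-ARM analysis also accounts for the stiffness of the mould tube.

```python
import math

LAMBDA1 = 1.8751  # first-mode eigenvalue of a clamped-free (cantilever) beam

def first_frequency(E, I, rho_lin, L):
    """f1 = (lambda1^2 / (2 pi)) * sqrt(E I / (rho_lin L^4)),
    with rho_lin the mass per unit length [kg/m] and I the second moment
    of area [m^4]."""
    return LAMBDA1 ** 2 / (2 * math.pi) * math.sqrt(E * I / (rho_lin * L ** 4))

def e_modulus_from_frequency(f1, I, rho_lin, L):
    """Invert the formula above: the modulus the material must have for the
    beam to resonate at the measured first frequency f1."""
    return (2 * math.pi * f1 / LAMBDA1 ** 2) ** 2 * rho_lin * L ** 4 / I

# Round trip with placeholder values: a 3 GPa cured epoxy, 0.45 m specimen.
f1 = first_frequency(E=3e9, I=1.0e-10, rho_lin=0.2, L=0.45)
E_back = e_modulus_from_frequency(f1, I=1.0e-10, rho_lin=0.2, L=0.45)
```

Tracking `f1` over time and mapping it back through `e_modulus_from_frequency` is, in essence, how the ambient-response method turns resonance data into a stiffness-evolution curve.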

Relevance: 30.00%

Abstract:

After a first theoretical introduction to Business Process Re-engineering (BPR), the possible options found in the literature regarding the following three macro-elements are considered: the methodologies, the modelling notations and the tools employed for process mapping. The theoretical section is the basis for the analysis of the same elements in the specific case of Rosetti Marino S.p.A., an EPC contractor operating in the Oil&Gas industry. Rosetti Marino implemented a tool developed internally in order to satisfy its needs in the most suitable way possible and built a map of all business processes, navigable on the company intranet. Moreover, it adopted a methodology based on participation, interfunctional communication and sharing. The introduction of GIGA is analysed from a structural, human resources, political and symbolic point of view.

Relevance: 30.00%

Abstract:

Large-scale structures can be considered an interesting and useful "laboratory" for investigating the Universe; in particular, the filaments connecting clusters and superclusters of galaxies can be a powerful tool for this purpose, since they are not yet virialised systems. The large structures in the Universe have been studied in different bands; the present work considers the emission in the radio band. In recent years both compact and diffuse radio emission have been detected, associated with single objects and with clusters of galaxies respectively. The detection of these sources is important because the radiation process is synchrotron emission, which in turn is linked to the presence of a magnetic field: studying these radio sources can therefore help in investigating the magnetic field that permeates different portions of space. Furthermore, radio emission in optical filaments has been detected recently, opening new chances to further improve our understanding of structure formation. Filaments can be seen as the net which links clusters and superclusters. This work was carried out with the aim of investigating non-thermal properties in low-density regions, looking for possible filaments associated with the diffuse emission. The analysed sources are 0917+75, located at a redshift z = 0.125, and the double cluster system A399-A401, at z = 0.071806 and z = 0.073664 respectively. Data were taken from VLA/JVLA observations, and reduced and calibrated with the AIPS package, following the standard procedure. Isocontour and polarisation maps were produced, allowing the main physical properties to be derived. Unfortunately, because of the low quality of the data for A399-A401, it was not possible to detect any radio halo or bridge.

Relevance: 30.00%

Abstract:

The first chapter of this work aims to provide a brief overview of the history of our Universe, in the context of string theory and considering inflation as its possible application to cosmological problems. We then discuss type IIB string compactifications, introducing the study of the inflaton, a scalar field that is a candidate to drive inflation. The Large Volume Scenario (LVS) is studied in the second chapter, paying particular attention to the stabilisation of the Kähler moduli, which are four-dimensional gravitationally coupled scalar fields that parameterise the size of the extra dimensions. Moduli stabilisation is the process through which these particles acquire a mass and can become promising inflaton candidates. The third chapter is devoted to the study of Fibre Inflation, an interesting inflationary model derived within the context of LVS compactifications. The fourth chapter tries to extend the slow-roll region of the scalar potential by taking larger values of the field φ. This is done with the purpose of studying in detail deviations of the cosmological observables that can better reproduce current experimental data. Finally, we present a slight modification of Fibre Inflation based on a different compactification manifold. This new model produces larger tensor modes, with a spectral index in good agreement with the data released in February 2015 by the Planck satellite.

Relevance: 30.00%

Abstract:

The collapse of several columns showing similar damage patterns, namely wide, strongly inclined cracks at both ends of the element, crushing of the concrete and buckling of the longitudinal bars, raised questions about the effects of the interaction between axial force, shear and bending moment. The study began with a literature review, which revealed a substantial lack of treatment of the subject. The problem was approached by searching for structural mechanics formulas capable of relating axial force, shear and moment; the search focused mainly on Mohr's theory. At first, the interaction between only two stress resultants was considered: axial force and shear. The analysis led to the construction of an elastic shear-axial force domain which, compared with the domain of the Modified Compression Field Theory found in the literature, allowed the conclusion that the results are entirely comparable. The analysis then turned to the interaction between axial force, shear and bending moment. By imposing two failure criteria, namely reaching the tensile and the compressive strength of the concrete, and introducing the stress resultants through the Navier and Jourawsky formulas, two formulas relating the three actions were derived; implemented in Matlab, they allowed the construction of a three-dimensional domain. In this case it was not possible to compare the results, since the literature review revealed nothing comparable.
The study then focused on developing a procedure that attempts to analyse the behaviour of a section subjected to axial force, shear and moment: a fibre model of the section was developed in an attempt to carry out a non-linear calculation, corresponding to a sequence of linear analyses. The procedure was applied to real collapse cases, confirming the occurrence of the collapses.
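An elastic interaction check of this kind — normal stress from the Navier formula, shear stress from the Jourawsky formula, principal stresses from Mohr's circle, compared against the concrete tensile and compressive strengths — can be sketched as follows. This is a simplified reading of the procedure, not the thesis' Matlab implementation; the section properties and strengths are illustrative placeholders, and the check is done at a single fibre.

```python
import math

def inside_elastic_domain(N, V, M, A, I, y, S, b, fct, fcc):
    """True if (N, V, M) lies inside the elastic domain at the checked fibre:
    the maximum principal stress must not exceed the tensile strength fct and
    the minimum one must not exceed the compressive strength fcc
    (tension positive)."""
    sigma = N / A + M * y / I         # Navier: normal stress at the fibre
    tau = V * S / (I * b)             # Jourawsky: shear stress at the fibre
    centre = sigma / 2.0
    radius = math.hypot(centre, tau)  # Mohr's circle radius
    s_max, s_min = centre + radius, centre - radius
    return s_max <= fct and s_min >= -fcc

# Illustrative 0.3 x 0.5 m rectangular section (SI units, Pa)
A = 0.3 * 0.5
I = 0.3 * 0.5 ** 3 / 12
y, S, b = 0.25, 0.3 * 0.5 ** 2 / 8, 0.3
fct, fcc = 3.0e6, 30.0e6  # placeholder concrete strengths
```

Sweeping (N, V, M) over a grid and recording where the function flips from True to False traces out a three-dimensional interaction domain analogous to the one built in the thesis.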

Relevance: 30.00%

Abstract:

Seismic assessment and seismic strengthening are the key issues that need to be addressed during the protection and reuse of historical buildings. In this thesis the seismic behaviour of the hinged steel structure, a typical structural system of historical buildings (i.e. the hinged steel frames of Shanghai, China), was studied on the basis of experimental investigations and theoretical analysis. How the non-structural members work together with the steel frames was analysed thoroughly. First, two 1/4-scale hinged steel frames were constructed based on the structural system of Bund 18, a historical building in Shanghai: model M1 without infill walls and model M2 with infill walls; they were tested under horizontal cyclic loads to investigate their seismic behaviour. The shaking table test and its results indicated that the seismic behaviour of the hinged steel frames could be improved significantly with the help of the non-structural members, i.e. the elements surrounding the hinged steel frames and the infill walls. Specifically, the columns are covered with bricks and consist of I-shaped formed steel sections and steel plates clenched together. The steel beams are connected to the steel columns by steel angles, so the structure should be considered a hinged frame. The infill wall acts as a compression diagonal strut that withstands the horizontal load; therefore, the seismic capacity and stiffness of the hinged steel frames with infill walls can be estimated by using the equivalent compression diagonal strut model. A SAP model was constructed with the objective of performing a dynamic nonlinear analysis, and the obtained results were compared with those of the shaking table test. The test results validated that the influence of the infill walls on the seismic behaviour can be estimated using the equivalent diagonal strut model.
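An equivalent diagonal strut model needs an effective strut width; a common empirical choice, which may differ from the one adopted in the thesis, is Mainstone's relation. The frame and infill properties below are placeholders chosen only to show the calculation.

```python
import math

def strut_width_mainstone(Em, t, h_inf, l_inf, Ec, Ic, h_col):
    """Effective width of the equivalent diagonal strut per Mainstone:
        w = 0.175 * (lambda_h * h_col)^(-0.4) * d
        lambda_h = (Em * t * sin(2 theta) / (4 * Ec * Ic * h_inf))^(1/4)
    Em, t: infill elastic modulus and thickness; Ec, Ic: column modulus and
    second moment of area; h_inf, l_inf: infill panel height and length;
    h_col: column height; theta: strut inclination; d: diagonal length."""
    theta = math.atan(h_inf / l_inf)
    d = math.hypot(h_inf, l_inf)
    lam = (Em * t * math.sin(2 * theta) / (4 * Ec * Ic * h_inf)) ** 0.25
    return 0.175 * (lam * h_col) ** (-0.4) * d

# Placeholder values in N and mm: a masonry infill panel in a steel frame
w = strut_width_mainstone(Em=5000.0, t=100.0, h_inf=2500.0, l_inf=4000.0,
                          Ec=200000.0, Ic=2.0e8, h_col=2800.0)
d = math.hypot(2500.0, 4000.0)  # diagonal length for the same panel
```

The strut, with this width and the infill's thickness and modulus, replaces the wall as a pin-ended compression member along the panel diagonal in the frame model.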

Relevance: 30.00%

Abstract:

A comparison between the main design methods for unpaved roads is presented in this paper. An unpaved road is made up of an unbound aggregate base course lying on a usually weak subgrade. A geosynthetic may be placed between the two to provide reinforcement and separation. The goal of a design method is to find the appropriate thickness of the base course, knowing at least the traffic volume, wheel load, tire pressure, undrained cohesion of the subgrade, allowable rut depth and influence of the reinforcement. Geosynthetics can reduce the thickness or the quality of the aggregate required and improve the durability of an unpaved road. Geotextiles contribute to saving aggregate through interface friction and separation, while geogrids do so through interlocking between their apertures and the aggregate particles. In the last chapter a case study is discussed and design thicknesses are calculated with two design methods for the three possible cases (i.e. unreinforced, geotextile-reinforced, geogrid-reinforced).
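The basic mechanics shared by such design methods can be sketched with a simple load-spread bearing check: the wheel load spreads through the base course at an assumed angle until the stress reaching the subgrade drops to its bearing capacity, and the reinforcement raises the bearing-capacity factor. This is a generic illustration in the spirit of the Giroud & Noiray approach, not one of the specific methods compared in the paper; the spread angle and input values are assumptions.

```python
import math

def base_thickness(P, p, cu, Nc, tan_alpha=0.6):
    """Base-course thickness from a load-spread bearing check.  The wheel load
    P [N], applied at tire pressure p [Pa], spreads at an assumed angle
    (tan_alpha) until the stress on the subgrade equals Nc * cu, with cu the
    subgrade undrained cohesion [Pa].  In the spirit of Giroud & Noiray,
    Nc ~ 3.14 unreinforced and ~ 5.14 with a geotextile."""
    a = math.sqrt(P / (math.pi * p))            # tire contact radius
    r_req = math.sqrt(P / (math.pi * Nc * cu))  # spread radius needed at subgrade
    return max(0.0, (r_req - a) / tan_alpha)

# 40 kN wheel load, 550 kPa tire pressure, soft subgrade (cu = 30 kPa)
h_unreinforced = base_thickness(40e3, 550e3, 30e3, Nc=3.14)
h_geotextile = base_thickness(40e3, 550e3, 30e3, Nc=5.14)
```

The higher bearing-capacity factor allowed by the geotextile directly translates into a thinner required base course, which is the aggregate saving discussed above.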