942 results for Climatic data simulation


Relevance: 30.00%

Abstract:

This paper presents the results of a simulation using physical objects. This concept integrates the physical dimensions of an entity, such as length, width, and weight, with the usual process-flow paradigm recurrent in discrete event simulation models. Based on a naval logistics system, we applied this technique to the access channel of the largest port in Latin America. The system is composed of vessel movements constrained by the access channel dimensions: vessel length and width dictate whether it is safe to have one or two ships in the channel simultaneously. The proposed methodology delivered an accurate validation of the model, with a deviation of approximately 0.45% from real data. Additionally, the model supported the design of new terminal operations for Santos, delivering KPIs such as channel utilization, queue time, berth utilization, and throughput capability.
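A minimal sketch of the beam-constrained channel idea is given below, using the SimPy discrete event library; the channel width, vessel dimensions, transit time and traffic rate are illustrative assumptions, not values from the study.

```python
# Toy sketch (not the authors' model): beam-constrained access channel in SimPy.
# Two vessels may transit simultaneously only if their combined beams fit.
import random
import simpy

CHANNEL_WIDTH = 90.0    # assumed usable channel width [m]
SIM_TIME = 24 * 60.0    # minutes simulated

def vessel(env, name, channel, beam, transit_time, log):
    arrival = env.now
    yield channel.get(beam)                        # occupy a share of the width
    log.append((name, env.now - arrival))          # queue time
    yield env.timeout(transit_time)                # transit of the channel
    yield channel.put(beam)                        # release the width

def traffic(env, channel, log):
    i = 0
    while True:
        yield env.timeout(random.expovariate(1 / 45.0))  # mean headway 45 min
        beam = random.uniform(30.0, 50.0)                # vessel beam [m]
        env.process(vessel(env, f"ship-{i}", channel, beam, 60.0, log))
        i += 1

env = simpy.Environment()
channel = simpy.Container(env, capacity=CHANNEL_WIDTH, init=CHANNEL_WIDTH)
log = []
env.process(traffic(env, channel, log))
env.run(until=SIM_TIME)
print(f"vessels served: {len(log)}, "
      f"mean queue time: {sum(q for _, q in log) / max(len(log), 1):.1f} min")
```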

Relevance: 30.00%

Abstract:

This thesis is based on five papers addressing variance reduction in different ways. The papers have in common that they all present new numerical methods. Paper I investigates quantitative structure-retention relationships from an image processing perspective, using an artificial neural network to preprocess three-dimensional structural descriptions of the studied steroid molecules. Paper II presents a new method for computing free energies. Free energy is the quantity that determines chemical equilibria and partition coefficients. The proposed method may be used for estimating, e.g., chromatographic retention without performing experiments. Two papers (III and IV) deal with correcting deviations from bilinearity by so-called peak alignment. Bilinearity is a theoretical assumption about the distribution of instrumental data that is often violated by measured data. Deviations from bilinearity lead to increased variance, both in the data and in inferences from the data, unless invariance to the deviations is built into the model, e.g., by the use of the method proposed in paper III and extended in paper IV. Paper V addresses a generic problem in classification, namely how to measure the goodness of different data representations so that the best classifier may be constructed. Variance reduction is one of the pillars on which analytical chemistry rests. This thesis considers two aspects of variance reduction: before and after experiments are performed. Before experimenting, theoretical predictions of experimental outcomes may be used to direct which experiments to perform and how to perform them (papers I and II). After experiments are performed, the variance of inferences from the measured data is affected by the method of data analysis (papers III-V).
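As a generic illustration of the peak-alignment idea mentioned for papers III and IV (not the specific algorithm proposed there), the sketch below realigns a retention-time-shifted peak to a reference profile by cross-correlation before any bilinear modelling.

```python
# Illustrative peak alignment by cross-correlation; signals are synthetic.
import numpy as np

t = np.linspace(0, 10, 500)
reference = np.exp(-0.5 * ((t - 4.0) / 0.3) ** 2)   # reference peak
measured  = np.exp(-0.5 * ((t - 4.6) / 0.3) ** 2)   # same peak, retention-time shift

# Estimate the integer shift that maximizes the cross-correlation.
corr = np.correlate(measured - measured.mean(),
                    reference - reference.mean(), mode="full")
shift = corr.argmax() - (len(t) - 1)

aligned = np.roll(measured, -shift)                  # undo the estimated shift
print(f"estimated shift: {shift} samples "
      f"({shift * (t[1] - t[0]):.2f} time units)")
```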

Relevance: 30.00%

Abstract:

[EN] The last 5 Myr are characterized by climatic variations at the global scale, which are reflected in ancient fossiliferous marine deposits visible in the Canary Islands. The fossils they contain are identified as paleoecological and paleoclimatic indicators. The Mio-Pliocene transit is represented by the coral Siderastrea miocenica Osasco, 1897; the gastropods Rothpletzia rudista Simonelli, 1890, Ancilla glandiformis (Lamarck, 1822), Strombus coronatus Defrance, 1827 and Nerita emiliana Mayer, 1872; and the bivalve Gryphaea virleti Deshayes, 1832, as the most characteristic fossils, typical of a very warm climate and of the littoral zone. Associated lava flows have been dated radiometrically and provide a range between 8.9 and about 4.2 Myr. In the mid-Pleistocene, about 400,000 years ago, during the so-called Marine Isotope Stage 11, a strong global warming caused a sea-level rise. Remains of MIS 11 are preserved on the coast of Arucas (Gran Canaria) and, associated with a tsunami, at Piedra Alta (Lanzarote). These fossiliferous deposits contain the bivalve Saccostrea cucullata (Born, 1780), the gastropod Purpurellus gambiensis (Reeve, 1845) and the corals Madracis pharensis (Heller, 1868) and Dendrophyllia cornigera (Lamarck, 1816). The two sites have been dated by K-Ar on pillow lavas (approximately 420,000 years) and by uranium series on corals (about 481,000 years), respectively. The upper Pleistocene starts with another strong global warming, known as the last interglacial or Marine Isotope Stage (MIS) 5.5, about 125,000 years ago, which also left marine fossil deposits, parallel to the present shoreline, exposed at Igueste de San Andrés (Tenerife), El Altillo, the city of Las Palmas de Gran Canaria and Maspalomas (Gran Canaria), Matas Blancas, Las Playitas and Morro Jable (Fuerteventura), and Playa Blanca and Punta Penedo (Lanzarote). The fossil coral Siderastrea radians (Pallas, 1766), currently living in the Cape Verde Islands, the Gulf of Guinea and the Caribbean, has allowed uranium-series dating. The gastropods Strombus bubonius Lamarck, 1822 and Harpa doris (Röding, 1798) currently live in the Gulf of Guinea. Current biogeography, based on synoptic satellite data provided by the ISS Canary Seas, supplies Sea Surface Temperature (SST) and chlorophyll a (Chlor a) data. This has allowed these sea conditions during the interglacials to be estimated and compared with those of today.

Relevance: 30.00%

Abstract:

The main aim of this Ph.D. dissertation is the study of clustering dependent data by means of copula functions, with particular emphasis on microarray data. Copula functions are a popular multivariate modeling tool in every field where multivariate dependence is of great interest, but their use in clustering has not yet been investigated. The first part of this work contains a review of the literature on clustering methods, copula functions and microarray experiments. Attention focuses on the K–means (Hartigan, 1975; Hartigan and Wong, 1979), hierarchical (Everitt, 1974) and model–based (Fraley and Raftery, 1998, 1999, 2000, 2007) clustering techniques, whose performance is compared. Then, the probabilistic interpretation of Sklar's theorem (Sklar, 1959), estimation methods for copulas such as Inference for Margins (Joe and Xu, 1996), and the Archimedean and Elliptical copula families are presented. Finally, applications of clustering methods and copulas to genetic and microarray experiments are highlighted. The second part contains the original contribution. A simulation study is performed in order to evaluate the performance of the K–means and hierarchical bottom–up clustering methods in identifying clusters according to the dependence structure of the data generating process. Different simulations are performed by varying several conditions (e.g., the kind of margins (distinct, overlapping and nested) and the value of the dependence parameter), and the results are evaluated by means of different measures of performance. In light of the simulation results and of the limits of the two investigated clustering methods, a new clustering algorithm based on copula functions ('CoClust' in brief) is proposed. The basic idea, the iterative procedure of the CoClust and a description of the R functions written, with their output, are given. The CoClust algorithm is tested on simulated data (by varying the number of clusters, the copula models, the dependence parameter value and the degree of overlap of the margins) and is compared with model–based clustering by using different measures of performance, such as the percentage of well–identified numbers of clusters and the percentage of non-rejection of H0 on the dependence parameter. It is shown that the CoClust algorithm makes it possible to overcome all the observed limits of the other investigated clustering techniques and is able to identify clusters according to the dependence structure of the data, independently of the degree of overlap of the margins and of the strength of the dependence. The CoClust uses a criterion based on the maximized log–likelihood function of the copula and can virtually account for any possible dependence relationship between observations. Several distinctive characteristics of the CoClust are shown, e.g. its capability of identifying the true number of clusters and the fact that it does not require a starting classification. Finally, the CoClust algorithm is applied to the real microarray data of Hedenfalk et al. (2001), both to the gene expressions observed in three different cancer samples and to the columns (tumor samples) of the whole data matrix.
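The building block the CoClust criterion rests on is the maximized copula log-likelihood. The sketch below illustrates it under simple assumptions: bivariate data are generated from a Gaussian copula with arbitrary margins and the dependence parameter is recovered by maximizing the copula log-likelihood of rank-based pseudo-observations. It is a generic illustration, not the CoClust algorithm itself.

```python
# Simulate dependent data via a Gaussian copula (Sklar's theorem) and
# estimate the dependence parameter by maximum copula likelihood.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)
rho_true, n = 0.7, 2000

# 1) Sample from a Gaussian copula and map to arbitrary margins.
z = rng.multivariate_normal([0, 0], [[1, rho_true], [rho_true, 1]], size=n)
u = stats.norm.cdf(z)                          # copula sample in [0,1]^2
x1 = stats.gamma(2.0).ppf(u[:, 0])             # first margin: gamma
x2 = stats.norm(5.0, 2.0).ppf(u[:, 1])         # second margin: normal

# 2) Rank-based pseudo-observations, as in Inference-for-Margins-type fits.
v1 = stats.rankdata(x1) / (n + 1)
v2 = stats.rankdata(x2) / (n + 1)
q1, q2 = stats.norm.ppf(v1), stats.norm.ppf(v2)

def neg_loglik(rho):
    # Negative log-likelihood of the bivariate Gaussian copula density.
    r2 = rho * rho
    return -np.sum(-0.5 * np.log(1 - r2)
                   - (r2 * (q1**2 + q2**2) - 2 * rho * q1 * q2) / (2 * (1 - r2)))

res = optimize.minimize_scalar(neg_loglik, bounds=(-0.99, 0.99), method="bounded")
print(f"true rho = {rho_true}, estimated rho = {res.x:.3f}")
```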

Relevance: 30.00%

Abstract:

The Assimilation in the Unstable Subspace (AUS) was introduced by Trevisan and Uboldi in 2004, and developed by Trevisan, Uboldi and Carrassi, to minimize the analysis and forecast errors by exploiting the flow-dependent instabilities of the forecast-analysis cycle system, which may be thought of as a system forced by observations. In the AUS scheme the assimilation is obtained by confining the analysis increment to the unstable subspace of the forecast-analysis cycle system, so that it has the same structure as the dominant instabilities of the system. The unstable subspace is estimated by Breeding on the Data Assimilation System (BDAS). AUS-BDAS has already been tested in realistic models and observational configurations, including a Quasi-Geostrophic model and a high-dimensional, primitive equation ocean model; the experiments include both fixed and "adaptive" observations. In these contexts, the AUS-BDAS approach greatly reduces the analysis error, with reasonable computational costs for data assimilation with respect, for example, to a prohibitive full Extended Kalman Filter. This is a follow-up study in which we revisit the AUS-BDAS approach in the more basic, highly nonlinear Lorenz 1963 convective model. We run observing system simulation experiments in a perfect model setting, and also with two types of model error: random and systematic. In the different configurations examined, and in a perfect model setting, AUS once again shows better efficiency than other advanced data assimilation schemes. In the present study, we develop an iterative scheme that leads to a significant improvement of the overall assimilation performance with respect to standard AUS as well. In particular, it boosts the efficiency with which regime changes are tracked, at a low computational cost. Other data assimilation schemes need estimates of ad hoc parameters, which have to be tuned for the specific model at hand. In Numerical Weather Prediction models, tuning of parameters, and in particular the estimation of the model error covariance matrix, may turn out to be quite difficult. Our proposed approach, instead, may be easier to implement in operational models.
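The sketch below is a toy reconstruction of the AUS idea on the Lorenz (1963) model: a single unstable direction is estimated by breeding and the analysis increment is confined to it. The time step, observation network (x component only), error levels, rescaling amplitude and the regularized projection are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def lorenz63(x, s=10.0, r=28.0, b=8.0 / 3.0):
    return np.array([s * (x[1] - x[0]),
                     x[0] * (r - x[2]) - x[1],
                     x[0] * x[1] - b * x[2]])

def rk4(x, dt):
    k1 = lorenz63(x)
    k2 = lorenz63(x + 0.5 * dt * k1)
    k3 = lorenz63(x + 0.5 * dt * k2)
    k4 = lorenz63(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

rng = np.random.default_rng(1)
dt, steps_per_cycle, n_cycles, sigma_obs = 0.01, 25, 400, 0.5

truth = np.array([1.0, 1.0, 20.0])
state = truth + rng.normal(0, 1.0, 3)             # wrong initial condition
bred = rng.normal(0, 1.0, 3)
bred *= 1e-3 / np.linalg.norm(bred)               # small bred perturbation

errors = []
for _ in range(n_cycles):
    pert = state + bred
    for _ in range(steps_per_cycle):              # forecast step
        truth, state, pert = rk4(truth, dt), rk4(state, dt), rk4(pert, dt)
    bred = pert - state                           # grown difference = unstable direction
    e = bred / np.linalg.norm(bred)
    y = truth[0] + rng.normal(0, sigma_obs)       # observe only the x component
    innovation = y - state[0]
    # Regularized least-squares increment confined to the bred direction
    # (unit background variance assumed along e).
    a = e[0] * innovation / (e[0] ** 2 + sigma_obs ** 2)
    state = state + a * e
    errors.append(np.linalg.norm(state - truth))
    bred = e * 1e-3                               # rescale for the next cycle

print(f"mean analysis error over the last 100 cycles: {np.mean(errors[-100:]):.3f}")
```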

Relevance: 30.00%

Abstract:

Technology scaling increasingly emphasizes the complexity and non-ideality of the electrical behavior of semiconductor devices and boosts interest in alternatives to the conventional planar MOSFET architecture. TCAD simulation tools are fundamental to the analysis and development of new technology generations. However, the increasing device complexity is reflected in an augmented dimensionality of the problems to be solved. The trade-off between accuracy and computational cost of the simulation is especially influenced by domain discretization: mesh generation is therefore one of the most critical steps, and automatic approaches are sought. Moreover, the problem size is further increased by process variations, calling for a statistical representation of the single device through an ensemble of microscopically different instances. The aim of this thesis is to present multi-disciplinary approaches to handle this increasing problem dimensionality from a numerical simulation perspective. The topic of mesh generation is tackled by presenting a new Wavelet-based Adaptive Method (WAM) for the automatic refinement of 2D and 3D domain discretizations. Multiresolution techniques and efficient signal processing algorithms are exploited to increase grid resolution in the domain regions where relevant physical phenomena take place. Moreover, the grid is dynamically adapted to follow solution changes produced by bias variations, and quality criteria are imposed on the produced meshes. The further dimensionality increase due to variability in extremely scaled devices is considered with reference to two increasingly critical phenomena, namely line-edge roughness (LER) and random dopant fluctuations (RD). The impact of such phenomena on FinFET devices, which represent a promising alternative to planar CMOS technology, is estimated through 2D and 3D TCAD simulations and statistical tools, taking into account the matching performance of single devices as well as of basic circuit blocks such as SRAMs. Several process options are compared, including resist- and spacer-defined fin patterning as well as different doping profile definitions. Combining statistical simulations with experimental data, the potentialities and shortcomings of the FinFET architecture are analyzed and useful design guidelines are provided, which boost the feasibility of this technology for mainstream applications in sub-45 nm generation integrated circuits.
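As a rough illustration of a wavelet-based refinement criterion (not the WAM itself), the sketch below computes one level of 2D Haar detail coefficients for a synthetic field and flags the blocks whose detail magnitude exceeds a threshold; the field, threshold and block size are assumptions for the example.

```python
# Flag 2x2 blocks for refinement where one-level Haar detail coefficients
# are large, i.e. where the solution varies rapidly.
import numpy as np

def haar_details(field):
    """One-level 2D Haar decomposition; returns the three detail bands."""
    a = field[0::2, 0::2]
    b = field[0::2, 1::2]
    c = field[1::2, 0::2]
    d = field[1::2, 1::2]
    horiz = (a - b + c - d) / 4.0   # variation along x
    vert  = (a + b - c - d) / 4.0   # variation along y
    diag  = (a - b - c + d) / 4.0   # diagonal variation
    return horiz, vert, diag

# Synthetic "potential-like" field with a sharp, junction-like transition.
x = np.linspace(-1, 1, 128)
X, Y = np.meshgrid(x, x)
field = np.tanh(20.0 * X)

h, v, d = haar_details(field)
detail = np.sqrt(h**2 + v**2 + d**2)
refine = detail > 0.05 * detail.max()       # refinement flags per 2x2 block
print(f"blocks flagged for refinement: {refine.sum()} / {refine.size}")
```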

Relevance: 30.00%

Abstract:

[EN] In this talk we introduce a new methodology for wind field simulation or forecasting over complex terrain. The idea is to use wind measurements or predictions of the HARMONIE mesoscale model as the input data for an adaptive finite element mass consistent wind model [1,2]. The method has recently been implemented in the freely-available Wind3D code [3]. A description of the HARMONIE Non-Hydrostatic Dynamics can be found in [4]. The HARMONIE results (obtained with a maximum resolution of about 1 km) are refined by the finite element model at the local scale (a few meters). An interface between both models is implemented such that the initial wind field approximation is obtained by a suitable interpolation of the HARMONIE results…

Relevance: 30.00%

Abstract:

[EN] A new methodology for wind field simulation or forecasting over complex terrain is introduced. The idea is to use wind measurements or predictions of the HARMONIE mesoscale model as the input data for an adaptive finite element mass consistent wind model. The method has recently been implemented in the freely-available Wind3D code. A description of the HARMONIE Non-Hydrostatic Dynamics is available in the literature. HARMONIE provides wind predictions with a maximum resolution of about 1 km, which are refined by the finite element model at the local scale (a few meters). An interface between both models is implemented such that the initial wind field approximation is obtained by a suitable interpolation of the HARMONIE results…
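A minimal sketch of such an interface is given below, assuming a synthetic ~1 km field standing in for HARMONIE output and bilinear interpolation onto fine local nodes; this is illustrative only and not the Wind3D implementation.

```python
# Interpolate a coarse mesoscale wind field onto fine local nodes to build
# the initial wind field approximation for a local model.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Coarse grid: ~1 km spacing over a 20 km x 20 km domain (synthetic data).
xc = np.arange(0, 21000, 1000.0)
yc = np.arange(0, 21000, 1000.0)
XC, YC = np.meshgrid(xc, yc, indexing="ij")
u_coarse = 8.0 + 2.0 * np.sin(XC / 5000.0)     # synthetic u component [m/s]
v_coarse = 1.0 + 0.5 * np.cos(YC / 7000.0)     # synthetic v component [m/s]

u_interp = RegularGridInterpolator((xc, yc), u_coarse)   # bilinear by default
v_interp = RegularGridInterpolator((xc, yc), v_coarse)

# Fine local nodes (a few metres apart), e.g. vertices of an adaptive FE mesh.
nodes = np.random.default_rng(0).uniform(2000.0, 18000.0, size=(5000, 2))
u0 = u_interp(nodes)
v0 = v_interp(nodes)
print(f"initial wind at first node: u={u0[0]:.2f} m/s, v={v0[0]:.2f} m/s")
```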

Relevance: 30.00%

Abstract:

The aim of this PhD thesis is to investigate the orientational and dynamical properties of liquid crystalline systems at the molecular level, using atomistic computer simulations, in order to reach a better understanding of material behavior from a microscopic point of view. In perspective, this should make it possible to clarify the relation between micro- and macroscopic properties, with the objective of predicting or confirming experimental results on these systems. In this context, we developed four different lines of work in the thesis. The first concerns the orientational order and alignment mechanism of small rigid solutes dissolved in a nematic phase formed by the 4-pentyl-4'-cyanobiphenyl (5CB) nematic liquid crystal. The orientational distributions of the solutes have been obtained with Molecular Dynamics (MD) simulations and compared with experimental data reported in the literature. We have also verified the agreement between order parameters and dipolar coupling values measured in NMR experiments. The MD-determined effective orientational potentials have been compared with the predictions of the Maier-Saupe and surface tensor models. The second line concerns the development of a correct parametrization able to reproduce the phase transition properties of a prototype of the oligothiophene semiconductor family: sexithiophene (T6). T6 forms two widely studied crystalline polymorphs and possesses liquid crystalline phases that are still not well characterized. From the simulations we detected a phase transition from crystal to liquid crystal at about 580 K, in agreement with available experiments, and in particular we found two LC phases, smectic and nematic. The crystal-smectic transition is associated with a relevant density variation and with strong conformational changes of T6: the molecules in the liquid crystal phase easily assume a bent shape, deviating from the planar structure typical of the crystal. The third line explores a new approach for calculating the viscosity of a nematic through a virtual experiment resembling the classical falling-sphere experiment. The falling sphere is replaced by a hydrogenated silicon nanoparticle of spherical shape suspended in 5CB, and gravity is replaced by a constant force applied to the nanoparticle in a selected direction. Once the nanoparticle reaches a constant velocity, the viscosity of the medium can be evaluated using Stokes' law. With this method we successfully reproduced the experimental viscosities and viscosity anisotropy of the solvent 5CB. The last line deals with the study of the order induced on nematic molecules by a hydrogenated silicon surface. Gaining predictive power for the anchoring behavior of liquid crystals at surfaces would be a very desirable capability, as many device-related properties depend on the molecular organization close to surfaces. Here we studied, by means of atomistic MD simulations, the flat interface between a hydrogenated (001) silicon surface and a sample of 5CB molecules. We found a planar anchoring of the first layers of 5CB, where surface interactions dominate over the mesogen intermolecular interactions. We also analyzed the 5CB-vacuum interface, finding a homeotropic orientation of the nematic at this interface.
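As an illustration of the orientational analysis underlying the first line of work, the sketch below computes the nematic order parameter S and the director from a set of molecular long-axis unit vectors via the Q-tensor; the axes here are synthetic stand-ins, whereas in the thesis they would come from the 5CB MD trajectories.

```python
# Nematic order parameter S from molecular long axes via the Q-tensor.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic unit vectors roughly aligned with z (stand-ins for 5CB long axes).
axes = rng.normal(0, 1, (1000, 3)) * np.array([0.35, 0.35, 1.0])
axes /= np.linalg.norm(axes, axis=1, keepdims=True)

# Q_ab = <3/2 u_a u_b - 1/2 delta_ab>; S is the largest eigenvalue and the
# director is the corresponding eigenvector.
Q = 1.5 * np.einsum("ia,ib->ab", axes, axes) / len(axes) - 0.5 * np.eye(3)
eigvals, eigvecs = np.linalg.eigh(Q)
S = eigvals[-1]
director = eigvecs[:, -1]
print(f"order parameter S = {S:.3f}, director = {np.round(director, 3)}")
```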

Relevance: 30.00%

Abstract:

The research activity described in this thesis focuses mainly on the study of finite-element techniques applied to thermo-fluid dynamic problems of plant components and on the study of dynamic simulation techniques applied to integrated building design, in order to enhance the energy performance of the building. The first part of this doctoral thesis is a broad dissertation on second law analysis of thermodynamic processes, with the purpose of placing the issue of the energy efficiency of buildings within a wider cultural context, which is usually not considered by professionals in the energy sector. In particular, the first chapter includes a rigorous scheme for the deduction of the expressions for the molar exergy and molar flow exergy of pure chemical fuels. The study shows that molar exergy and molar flow exergy coincide when the temperature and pressure of the fuel are equal to those of the environment in which the combustion reaction takes place. A simple method to determine the Gibbs free energy for non-standard values of the temperature and pressure of the environment is then clarified. For hydrogen, carbon dioxide, and several hydrocarbons, the dependence of the molar exergy on the temperature and relative humidity of the environment is reported, together with an evaluation of the molar exergy and molar flow exergy when the temperature and pressure of the fuel differ from those of the environment. As an application of second law analysis, a comparison of the thermodynamic efficiency of a condensing boiler and of a heat pump is also reported. The second chapter presents a study of borehole heat exchangers, that is, polyethylene piping networks buried in the soil which allow a ground-coupled heat pump to exchange heat with the ground. After a brief overview of low-enthalpy geothermal plants, an apparatus designed and assembled by the author to carry out thermal response tests is presented. Data obtained by means of in situ thermal response tests are reported and evaluated by means of a finite-element simulation method, implemented through the software package COMSOL Multiphysics. The simulation method allows the determination of precise values of the effective thermal properties of the ground and of the grout, which are essential for the design of borehole heat exchangers. In addition to the study of a single plant component, namely the borehole heat exchanger, the third chapter presents a thorough plant design process for a zero carbon building complex. The plant is composed of: 1) a ground-coupled heat pump system for space heating and cooling, with electricity supplied by photovoltaic solar collectors; 2) air dehumidifiers; 3) thermal solar collectors to cover 70% of the domestic hot water energy use, and a wood pellet boiler for the remaining domestic hot water energy use and for exceptional winter peaks. This chapter includes the design methodology adopted: 1) dynamic simulation of the building complex with the software package TRNSYS to evaluate the energy requirements of the building complex; 2) ground-coupled heat pumps modelled by means of TRNSYS; and 3) evaluation of the total length of the borehole heat exchanger by an iterative method developed by the author. An economic feasibility study and an exergy analysis of the proposed plant, compared with two other plants, are reported. The exergy analysis was performed by considering the embodied energy of the components of each plant and the exergy losses during the operation of the plants.
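The in situ thermal response test evaluation can be illustrated with the classical infinite line source approximation, in which the late-time slope of fluid temperature versus ln(t) yields the effective ground conductivity. The sketch below uses synthetic data and assumed parameters; the thesis itself relies on a finite-element (COMSOL) evaluation rather than this simplified model.

```python
# Thermal response test analysis with the infinite line source model:
# T(t) ≈ q / (4 pi k) * ln(t) + const, so k = q / (4 pi * slope).
import numpy as np

q = 50.0                                    # heat injection rate per metre [W/m]
t = np.linspace(10 * 3600, 72 * 3600, 200)  # 10 h to 72 h, in seconds

# Synthetic mean fluid temperature with k_true = 2.3 W/(m K) plus noise.
k_true = 2.3
T = q / (4 * np.pi * k_true) * np.log(t) + 12.0
T += np.random.default_rng(0).normal(0, 0.05, t.size)

# Late-time slope of T vs ln(t) gives the effective conductivity.
slope, intercept = np.polyfit(np.log(t), T, 1)
k_est = q / (4 * np.pi * slope)
print(f"estimated ground conductivity: {k_est:.2f} W/(m K)")
```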

Relevance: 30.00%

Abstract:

Ventricular cells are immersed in a bath of electrolytes, and these ions are essential for a healthy heart and a regular rhythm. Maintaining their physiological concentrations is fundamental for reducing arrhythmias and the risk of sudden cardiac death, especially in haemodialysis patients and in the treatment of heart disease. Models of the electrical activity of the heart based on mathematical formulations are part of the effort to improve the understanding and prediction of heart behaviour. Modern models incorporate the extensive and ever-increasing amount of experimental data on biophysically detailed mechanisms, allowing the detailed study of the molecular and subcellular mechanisms of heart disease. The goal of this project was to simulate the effects of changes in extracellular potassium and calcium concentrations, comparing experimental data with the descriptions incorporated into two modern biophysically detailed models (the Grandi et al. model and the O'Hara-Rudy model), and to analyse the resulting changes in ventricular electrical activity, in particular by studying the modifications of the simulated electrocardiographic signal. We used the cellular information provided by the heart models to build a 1D tissue description. The fibre is composed of 165 cells and is divided into four groups to represent the cell types that compose human ventricular tissue. The main results are the following. The Grandi et al. (GBP) model is not able to reproduce the correct action potential profile in hyperkalemia: data from hospitalized patients indicate that the action potential duration (APD) should be shorter than in the physiological state, but in this model we observe the opposite. From the potassium point of view, the results obtained with the O'Hara-Rudy (ORD) model are in agreement with experimental data for the single-cell action potential in hypokalemia and hyperkalemia, and most of the currents follow the data from the literature. In the 1D simulations we were able to reproduce the ECG signal for most of the potassium concentrations selected for this study, and we collected data that can help physicians understand what happens in ventricular cells during electrolyte disorders. However, the model fails to conduct the stimulus under hyperkalemic conditions, and it emphasizes the ECG modifications when K+ is only slightly above the physiological value. In the calcium setting, using the ORD model we found an APD shortening in hypocalcaemia and an APD lengthening in hypercalcaemia, i.e. the opposite of the experimental observations. This wrong behaviour is retained in the one-dimensional simulations, giving a longer QT interval in the ECG under higher [Ca2+]o conditions and vice versa. In conclusion, this work has highlighted that the ventricular models currently present in the literature, although useful in their original form, need an improvement in their sensitivity to these two important electrolytes. We suggest using the GBP model with the modifications introduced by Carro et al., who recognized that the failure of this model is related to the Shannon et al. model (a rabbit model) from which the GBP model was built. The ORD model should be modified in the Ca2+-dependent ICaL and in the influence of IKs on the action potential, so that it produces a correct action potential under different calcium concentrations. In the 1D tissue, a heterogeneous setting of the intra- and extracellular conductances for the different cell types might improve the reproduction of the ECG signal.
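One concrete link between extracellular potassium and the action potential is the Nernst equilibrium potential for K+, which both models inherit; the short sketch below evaluates it for assumed intra- and extracellular concentrations, purely as an illustration (the values are not taken from the GBP or ORD models).

```python
# Nernst potential for potassium at different extracellular concentrations.
import math

R, T, F = 8.314, 310.0, 96485.0        # gas constant, body temperature [K], Faraday
K_i = 140.0                            # intracellular K+ [mM], assumed

for K_o in (3.0, 5.4, 8.0):            # hypo-, normo- and hyperkalemia [mM]
    E_K = (R * T / F) * math.log(K_o / K_i) * 1000.0   # in mV
    print(f"[K+]o = {K_o:4.1f} mM  ->  E_K = {E_K:6.1f} mV")
```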

Relevance: 30.00%

Abstract:

We present a non-linear technique to invert strong-motion records with the aim of obtaining the final slip and rupture velocity distributions on the fault plane. In this thesis, the ground motion simulation is obtained by evaluating the representation integral in the frequency domain. The Green's tractions are computed using the discrete wavenumber integration technique, which provides the full wave-field in a 1D layered propagation medium. The representation integral is computed through a finite-element technique based on a Delaunay triangulation of the fault plane. The rupture velocity is defined on a coarser regular grid, and rupture times are computed by integration of the eikonal equation. For the inversion, the slip distribution is parameterized by 2D overlapping Gaussian functions, which easily relate the spectrum of the possible solutions to the minimum resolvable wavelength, determined by the source-station distribution and the data processing. The inverse problem is solved by a two-step procedure aimed at separating the computation of the rupture velocity from the evaluation of the slip distribution, the latter being a linear problem when the rupture velocity is fixed. The non-linear step is solved by optimization of an L2 misfit function between synthetic and real seismograms, and the solution is searched for by means of the Neighbourhood Algorithm. The conjugate gradient method is used to solve the linear step instead. The developed methodology has been applied to the M7.2 Iwate-Miyagi Nairiku, Japan, earthquake. The estimated seismic moment is 2.63 × 10^26 dyne·cm, which corresponds to a moment magnitude MW 6.9, while the mean rupture velocity is 2.0 km/s. A large slip patch extends from the hypocenter to the southern shallow part of the fault plane. A second relatively large slip patch is found in the northern shallow part. Finally, we give a quantitative estimation of the errors associated with the parameters.
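As a quick numerical check, the standard Hanks and Kanamori relation links the quoted seismic moment and moment magnitude; the sketch below evaluates it for a moment of 2.63 × 10^26 dyne·cm.

```python
# Moment magnitude from seismic moment (Hanks & Kanamori, M0 in dyne*cm).
import math

M0 = 2.63e26                            # seismic moment [dyne*cm]
Mw = (2.0 / 3.0) * math.log10(M0) - 10.7
print(f"Mw = {Mw:.2f}")                 # ~6.9, consistent with the quoted value
```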

Relevance: 30.00%

Abstract:

The present work is devoted to the assessment of the physics of the energy fluxes in the space of scales and in the physical space of wall-turbulent flows. The generalized Kolmogorov equation will be applied to DNS data of a turbulent channel flow in order to describe the energy flux paths from production to dissipation in the augmented space of wall-turbulent flows. This multidimensional description will be shown to be crucial to understand the formation and sustainment of the turbulent fluctuations fed by the energy fluxes coming from the near-wall production region. An unexpected behavior of the energy fluxes emerges from this analysis, consisting of spiral-like paths in the combined physical/scale space in which the controversial reverse energy cascade plays a central role. The observed behavior conflicts with the classical notion of the Richardson/Kolmogorov energy cascade and may have strong repercussions on both theoretical and modeling approaches to wall-turbulence. To this aim, a new relation stating the leading physical processes governing the energy transfer in wall-turbulence is suggested and shown to be able to capture most of the rich dynamics of the shear-dominated region of the flow. Two dynamical processes are identified as driving mechanisms for the fluxes, one in the near-wall region and a second one further away from the wall. The former, stronger one is related to the dynamics involved in the near-wall turbulence regeneration cycle. The second suggests an outer self-sustaining mechanism which is asymptotically expected to take place in the log-layer and could explain the debated mixed inner/outer scaling of the near-wall statistics. The same approach is applied for the first time to a filtered velocity field. A generalized Kolmogorov equation specialized for filtered velocity fields is derived and discussed. The results will show what effects the subgrid scales have on the resolved motion in both physical and scale space, singling out the prominent role of the filter length compared with the cross-over scale between production-dominated scales and the inertial range, lc, and with the reverse energy cascade region, lb. The systematic characterization of the resolved and subgrid physics as a function of the filter scale and of the wall-distance will be shown to be instrumental for a correct use of LES models in the simulation of wall-turbulent flows. Taking inspiration from the new relation for the energy transfer in wall turbulence, a new class of LES models will also be proposed. Finally, the generalized Kolmogorov equation specialized for filtered velocity fields will be shown to be a helpful statistical tool for the assessment of LES models and for the development of new ones. As an example, some classical purely dissipative eddy viscosity models are analyzed via an a priori procedure.
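The scale-space quantity at the heart of the generalized Kolmogorov equation is the second-order structure function; the sketch below computes it for a synthetic 1D signal, purely to illustrate the kind of scale-by-scale statistic involved (the actual analysis uses DNS channel-flow data and the full balance equation).

```python
# Second-order structure function <(delta u)^2> as a function of separation r.
import numpy as np

rng = np.random.default_rng(0)
n, dx = 4096, 1.0e-3
x = np.arange(n) * dx
# Synthetic "velocity" with a few energetic scales plus small-scale noise.
u = (np.sin(2 * np.pi * x / 0.5)
     + 0.3 * np.sin(2 * np.pi * x / 0.05)
     + 0.05 * rng.normal(size=n))

seps = np.arange(1, 200)
S2 = np.array([np.mean((u[s:] - u[:-s]) ** 2) for s in seps])

for s in (1, 10, 50, 100):
    print(f"r = {s * dx:.3f}  <(delta u)^2> = {S2[s - 1]:.4f}")
```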

Relevance: 30.00%

Abstract:

There are different ways to perform cluster analysis of categorical data in the literature, and the choice among them is strongly related to the aim of the researcher, if time and economic constraints are not taken into account. The main approaches to clustering are usually divided into model-based and distance-based methods: the former assume that objects belonging to the same class are similar in the sense that their observed values come from the same probability distribution, whose parameters are unknown and need to be estimated; the latter evaluate distances among objects by a defined dissimilarity measure and, based on it, allocate units to the closest group. In clustering, one may be interested in the classification of similar objects into groups, and one may be interested in finding observations that come from the same true homogeneous distribution. But do both of these aims lead to the same clustering? And how good are clustering methods designed to fulfil one of these aims in terms of the other? In order to answer these questions, two approaches, namely a latent class model (a mixture of multinomial distributions) and a partition-around-medoids approach, are evaluated and compared by the Adjusted Rand Index, the Average Silhouette Width and the Pearson-Gamma index in a fairly wide simulation study. The simulation outcomes are plotted in two-dimensional graphs via Multidimensional Scaling; the size of each point is proportional to the number of overlapping points, and different colours are used according to cluster membership.
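The sketch below is a small, self-contained version of this kind of comparison under assumed settings: categorical data simulated from two multinomial components, a distance-based clustering (average linkage on Hamming dissimilarities rather than PAM), and agreement with the true membership measured by the Adjusted Rand Index. It illustrates the evaluation workflow, not the thesis' actual simulation design.

```python
# Simulate two-component categorical data and score a distance-based
# clustering against the true membership with the Adjusted Rand Index.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
n_per_class, n_vars, n_cats = 100, 8, 3

# Two components with different category probabilities on every variable.
p1 = rng.dirichlet(np.ones(n_cats), size=n_vars)
p2 = rng.dirichlet(np.ones(n_cats), size=n_vars)
X = np.vstack([
    np.column_stack([rng.choice(n_cats, n_per_class, p=p1[j]) for j in range(n_vars)]),
    np.column_stack([rng.choice(n_cats, n_per_class, p=p2[j]) for j in range(n_vars)]),
])
truth = np.repeat([0, 1], n_per_class)

# Distance-based clustering: Hamming dissimilarity + average linkage.
D = pdist(X, metric="hamming")
labels = fcluster(linkage(D, method="average"), t=2, criterion="maxclust")

print(f"Adjusted Rand Index vs true membership: "
      f"{adjusted_rand_score(truth, labels):.3f}")
```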

Relevance: 30.00%

Abstract:

Synthetic Biology is a relatively new discipline, born at the beginning of the new millennium, that brings the typical engineering approach (abstraction, modularity and standardization) to biotechnology. These principles aim to tame the extreme complexity of the various components and to aid the construction of artificial biological systems with specific functions, usually by means of synthetic genetic circuits implemented in bacteria or in simple eukaryotes like yeast. The cell becomes a programmable machine whose low-level programming language is made of strings of DNA. This work was performed in collaboration with researchers of the Department of Electrical Engineering of the University of Washington in Seattle and with a student of the Corso di Laurea Magistrale in Ingegneria Biomedica at the University of Bologna, Marilisa Cortesi. During the collaboration I contributed to a Synthetic Biology project already started in the Klavins Laboratory. In particular, I modeled and subsequently simulated a synthetic genetic circuit designed to implement a multicelled behavior in a growing bacterial microcolony. The first chapter introduces the foundations of molecular biology: the structure of nucleic acids, transcription, translation and the methods used to regulate gene expression. An introduction to Synthetic Biology completes the section. The second chapter describes the synthetic genetic circuit conceived to make two different groups of cells, termed leaders and followers, emerge spontaneously from an isogenic microcolony of bacteria. The circuit exploits the intrinsic stochasticity of gene expression and intercellular communication via small molecules to break the symmetry of the microcolony's phenotype. The four modules of the circuit (coin flipper, sender, receiver and follower) and their interactions are then illustrated. The third chapter derives the mathematical representation of the various components of the circuit and makes the several simplifying assumptions explicit. Transcription and translation are modeled as a single step, and gene expression is a function of the intracellular concentrations of the transcription factors that act on the different promoters of the circuit. A list of the parameters and a justification of their values close the chapter. The fourth chapter describes the main characteristics of the gro simulation environment, developed by the Self Organizing Systems Laboratory of the University of Washington. Then, a sensitivity analysis performed to pinpoint the desirable characteristics of the various genetic components is detailed. The sensitivity analysis uses a cost function based on the fraction of cells in each of the possible states at the end of the simulation and on the desired outcome. Thanks to a particular kind of scatter plot, the parameters are ranked: starting from an initial condition in which all the parameters assume their nominal values, the ranking suggests which parameter to tune in order to reach the goal. Obtaining a microcolony in which almost all the cells are in the follower state and only a few in the leader state appears to be the most difficult task: a small number of leader cells struggle to produce enough signal to switch the rest of the microcolony to the follower state. It is possible to obtain a microcolony in which the majority of cells are followers by increasing the production of signal as much as possible.
Reaching the goal of a microcolony that is split in half between leaders and followers is comparatively easy; the best strategy seems to be a slight increase in the production of the enzyme. To end up with a majority of leaders, instead, it is advisable to increase the basal expression of the coin flipper module. At the end of the chapter, a possible future application of the leader election circuit, the spontaneous formation of spatial patterns in a microcolony, is modeled with the finite state machine formalism. The gro simulations provide insights into the genetic components needed to implement this behavior. In particular, since both examples of pattern formation rely on a local version of leader election, a short-range communication system is essential. Moreover, new synthetic components that make it possible to reliably downregulate the growth rate in specific cells without side effects need to be developed. The appendix lists the gro code used to simulate the model of the circuit, a script in the Python programming language that was used to split the simulations over a Linux cluster, and the Matlab code developed to analyze the data.
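A toy, well-mixed caricature of the leader/follower dynamics and of a state-fraction cost function of the kind used in the sensitivity analysis is sketched below; the cell count, rates and threshold are illustrative assumptions and the code does not reproduce the gro model.

```python
# Toy leader-election dynamics: stochastic "coin flips" to the leader state,
# leaders produce signal, and above a threshold the remaining undecided cells
# commit to the follower state. A quadratic cost compares the final state
# fractions with a desired outcome.
import numpy as np

rng = np.random.default_rng(0)

def simulate(p_flip=0.02, signal_per_leader=5.0, threshold=10.0,
             n_cells=200, n_steps=50):
    """Return the final fractions of (leader, follower, undecided) cells."""
    state = np.zeros(n_cells, dtype=int)      # 0 undecided, 1 leader, 2 follower
    for _ in range(n_steps):
        flips = (state == 0) & (rng.random(n_cells) < p_flip)
        state[flips] = 1                      # coin flipper: commit to leader
        signal = signal_per_leader * np.sum(state == 1)
        if signal > threshold:                # enough signal: others follow
            state[state == 0] = 2
    return np.array([np.mean(state == s) for s in (1, 2, 0)])

def cost(frac, target):
    """Distance between the final state fractions and the desired outcome."""
    return float(np.sum((frac - np.asarray(target)) ** 2))

frac = simulate()
print(f"leaders={frac[0]:.2f} followers={frac[1]:.2f} undecided={frac[2]:.2f}, "
      f"cost vs 'few leaders, many followers' = {cost(frac, (0.05, 0.95, 0.0)):.3f}")
```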