896 results for Driver-Vehicle System Modeling.
Abstract:
Synthetic Biology is a relatively new discipline, born at the beginning of the new millennium, that brings the typical engineering approach (abstraction, modularity and standardization) to biotechnology. These principles aim to tame the extreme complexity of the various components and to aid the construction of artificial biological systems with specific functions, usually by means of synthetic genetic circuits implemented in bacteria or in simple eukaryotes like yeast. The cell becomes a programmable machine whose low-level programming language is made of strings of DNA. This work was performed in collaboration with researchers at the Department of Electrical Engineering of the University of Washington in Seattle and with a student of the Corso di Laurea Magistrale in Ingegneria Biomedica at the University of Bologna, Marilisa Cortesi. During the collaboration I contributed to a Synthetic Biology project already under way in the Klavins Laboratory. In particular, I modeled and subsequently simulated a synthetic genetic circuit designed to implement a multicellular behavior in a growing bacterial microcolony. The first chapter introduces the foundations of molecular biology: the structure of nucleic acids, transcription, translation and the mechanisms that regulate gene expression. An introduction to Synthetic Biology completes the chapter. The second chapter describes the synthetic genetic circuit, conceived to make two distinct groups of cells, termed leaders and followers, emerge spontaneously from an isogenic microcolony of bacteria. The circuit exploits the intrinsic stochasticity of gene expression and intercellular communication via small molecules to break the symmetry in the phenotype of the microcolony. The four modules of the circuit (coin flipper, sender, receiver and follower) and their interactions are then illustrated. The third chapter derives the mathematical representation of the various components of the circuit and makes the several simplifying assumptions explicit. Transcription and translation are modeled as a single step, and gene expression is a function of the intracellular concentration of the transcription factors that act on the different promoters of the circuit. A list of the parameters and a justification of their values closes the chapter. The fourth chapter describes the main characteristics of the gro simulation environment, developed by the Self Organizing Systems Laboratory of the University of Washington, and then details a sensitivity analysis performed to pinpoint the desirable characteristics of the various genetic components. The sensitivity analysis uses a cost function based on the fraction of cells in each of the possible states at the end of the simulation and on the desired outcome. By means of a particular kind of scatter plot, the parameters are ranked; starting from an initial condition in which all the parameters assume their nominal values, the ranking suggests which parameter to tune in order to reach the goal. Obtaining a microcolony in which almost all the cells are in the follower state and only a few are leaders turns out to be the most difficult task: a small number of leader cells struggles to produce enough signal to switch the rest of the microcolony to the follower state. A microcolony in which the majority of cells are followers can still be obtained by maximizing the production of signal.
Reaching the goal of a microcolony split in half between leaders and followers is comparatively easy; the best strategy appears to be a slight increase in the production of the enzyme. To end up with a majority of leaders, instead, it is advisable to increase the basal expression of the coin flipper module. At the end of the chapter, a possible future application of the leader election circuit, the spontaneous formation of spatial patterns in a microcolony, is modeled with the finite state machine formalism. The gro simulations provide insights into the genetic components needed to implement this behavior. In particular, since both examples of pattern formation rely on a local version of leader election, a short-range communication system is essential. Moreover, new synthetic components that reliably downregulate the growth rate in specific cells, without side effects, still need to be developed. The appendix lists the gro code used to simulate the model of the circuit, a Python script used to distribute the simulations over a Linux cluster, and the Matlab code developed to analyze the data.
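As an illustration of the kind of cost function such a sensitivity analysis relies on, the following minimal Python sketch penalizes squared deviations of the final state fractions from the desired ones; the function name and the quadratic form are assumptions for illustration, not the thesis code.

```python
def cost(final_states, target_fractions):
    """Compare simulated and desired outcomes of one gro run.

    final_states: list of state labels (e.g. 'leader', 'follower')
    at the end of the simulation; target_fractions: desired fraction
    of cells per state. Lower cost = closer to the desired outcome.
    """
    n = len(final_states)
    cost_value = 0.0
    for state, target in target_fractions.items():
        observed = final_states.count(state) / n
        cost_value += (observed - target) ** 2  # quadratic penalty (assumed)
    return cost_value

# Example: aim for a colony split in half between leaders and followers.
c = cost(['leader', 'follower', 'follower', 'leader'],
         {'leader': 0.5, 'follower': 0.5})
```

Ranking the parameters then amounts to comparing how strongly this cost reacts when each parameter is perturbed away from its nominal value.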
Abstract:
Computer simulations play an ever-growing role in the development of automotive products. Assembly simulation, like many other processes, is used systematically even before the first physical prototype of a vehicle is built, in order to check whether particular components can be assembled easily or whether another part is in the way. Usually, this kind of simulation is limited to rigid bodies. However, a vehicle contains a multitude of flexible parts of various types: cables, hoses, carpets, seat surfaces, insulations, weatherstrips... Since most of the problems encountered in these simulations concern one-dimensional components, and since an intuitive tool for cable routing is still needed, we have chosen to concentrate on this category, which includes cables, hoses and wiring harnesses. In this thesis, we present a system for simulating one-dimensional flexible parts such as cables or hoses. The modeling of bending and torsion follows the Cosserat model. For this purpose we use a generalized spring-mass system and describe its configuration by a carefully chosen set of coordinates. Gravity and contact forces, as well as the forces responsible for length conservation, are expressed in Cartesian coordinates, but bending and torsion effects can be dealt with more effectively by using quaternions to represent the orientation of the segments joining two neighboring mass points. This augmented system allows an easy formulation of all interactions in the most appropriate coordinate type and yields a strongly banded Hessian matrix. An energy-minimizing process yields a solution free of the oscillations that are typical of spring-mass systems. The use of integral forces, similar to an integral controller, allows the constraints to be enforced exactly. The whole system is numerically stable, can be solved at interactive frame rates, and is integrated in the DaimlerChrysler in-house Virtual Reality software veo for use in applications such as cable routing and assembly simulation, where it has been well received by users. Parts of this work have been published at the ACM Solid and Physical Modeling Conference 2006 and have been selected for the conference's special issue of the Computer-Aided Design journal.
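To make the mixed-coordinate idea concrete, here is a minimal Python sketch of a discrete bending-and-torsion energy over segment orientations stored as quaternions; the names, the small-angle quadratic penalty and the stiffness parameters are illustrative assumptions, not the veo implementation.

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of two quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quat_conj(q):
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def bend_twist_energy(quats, k_bend, k_twist):
    """Discrete Cosserat-style elastic energy of a chain of segments.

    Each row of `quats` is a unit quaternion giving the orientation of one
    segment; the relative rotation between neighbours is penalized, with the
    vector part split into bending (x, y) and torsion (z) contributions.
    """
    E = 0.0
    for q_a, q_b in zip(quats[:-1], quats[1:]):
        d = quat_mul(quat_conj(q_a), q_b)  # relative rotation a -> b
        # small-angle approximation: vector part ~ (angle / 2) * axis
        E += k_bend * (d[1]**2 + d[2]**2) + k_twist * d[3]**2
    return E
```

Because each term couples only neighbouring segments, differentiating such an energy is what produces the strongly banded Hessian the abstract mentions.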
Abstract:
The cardiomyocyte is a complex biological system in which many mechanisms interact non-linearly to regulate the coupling between electrical excitation and mechanical contraction. For this reason, the development of mathematical models is fundamental in cardiac electrophysiology, where computational tools have become complementary to classical experimentation. My doctoral research has focused on the development of such models for investigating the regulation of ventricular excitation-contraction coupling at the single-cell level. In particular, the following studies are presented in this thesis: 1) Study of the unexpected deleterious effect of a Na channel blocker on a long QT syndrome type 3 patient. Experimental results were used to tune a Na current model that recapitulates the effects of the mutation and of the treatment, in order to investigate how these influence the human action potential. Our research suggested that the analysis of the clinical phenotype alone is not sufficient for recommending drugs to patients carrying mutations with undefined electrophysiological properties. 2) Development of a model of L-type Ca channel inactivation in rabbit myocytes that faithfully reproduces the relative roles of voltage- and Ca-dependent inactivation. The model was applied to the analysis of Ca current inactivation kinetics during normal and abnormal repolarization, and it predicts arrhythmogenic activity when Ca-dependent inactivation, the predominant mechanism in physiological conditions, is inhibited. 3) Analysis of the arrhythmogenic consequences of the crosstalk between the β-adrenergic and Ca-calmodulin dependent protein kinase signaling pathways. The descriptions of the two regulatory mechanisms, both enhanced in heart failure, were integrated into a novel murine action potential model to investigate how they jointly contribute to the development of cardiac arrhythmias. These studies show how mathematical modeling can provide new insights into the mechanisms underlying cardiac excitation-contraction coupling and arrhythmogenesis.
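As a minimal sketch of the two-pathway inactivation scheme mentioned in point 2), the Python fragment below uses first-order gates relaxing toward voltage- and Ca-dependent steady states; every functional form and parameter value here is a hypothetical placeholder, not the rabbit model of the thesis.

```python
import numpy as np

def dgates_dt(f_v, f_ca, V, Ca_i):
    """Time derivatives of the voltage-dependent (f_v) and Ca-dependent
    (f_ca) inactivation gates, each relaxing to its own steady state.
    V in mV, Ca_i in mM; all constants are illustrative placeholders."""
    f_v_inf = 1.0 / (1.0 + np.exp((V + 25.0) / 6.0))   # voltage dependence
    tau_f_v = 50.0                                      # ms
    f_ca_inf = 1.0 / (1.0 + (Ca_i / 0.6e-3) ** 2)       # Ca dependence
    tau_f_ca = 10.0                                     # ms
    return (f_v_inf - f_v) / tau_f_v, (f_ca_inf - f_ca) / tau_f_ca

def i_CaL(V, d, f_v, f_ca, g_CaL=0.3, E_Ca=60.0):
    """Current = conductance * activation gate * both inactivation gates."""
    return g_CaL * d * f_v * f_ca * (V - E_Ca)
```

In such a scheme, suppressing f_ca (setting it to 1) leaves only the slower voltage-dependent pathway, which is the kind of intervention whose arrhythmogenic consequences the model explores.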
Abstract:
The formation of a market price for an asset can be understood as the superposition of the individual actions of the market participants, which cumulatively generate supply and demand. This is comparable to the emergence of macroscopic properties in statistical physics, which are produced by microscopic interactions between the system components involved. The distribution of price changes in financial markets differs markedly from a Gaussian distribution. This leads to empirical peculiarities of the price process, which include, besides the scaling behavior, non-trivial correlation functions and temporally clustered volatility. The present work focuses on the analysis of financial market time series and the correlations they contain. A new method for quantifying pattern-based complex correlations of a time series is developed. With this methodology, significant evidence is found that typical behavioral patterns of financial market participants manifest themselves on short time scales; that is, the reaction to a given price history is not purely random, but rather similar price histories elicit similar reactions. Starting from the study of complex correlations in financial market time series, the question is addressed of which properties change at the transition from a positive trend to a negative trend. An empirical quantification by means of rescaling yields the result that, independently of the time scale considered, new price extrema are accompanied by an increase in transaction volume and a reduction of the time intervals between transactions. These dependencies exhibit characteristics that are also found in other complex systems in nature, and in physical systems in particular. Over nine orders of magnitude in time, these properties are also independent of the market analyzed: trends that last only seconds show the same characteristics as trends on time scales of months. This opens up the possibility of learning more about financial market bubbles and their collapse, since trends on short time scales occur far more frequently. In addition, a Monte Carlo based simulation of the financial market is analyzed and extended in order to reproduce the empirical properties and to gain insight into their origins, which are to be sought partly in the financial market microstructure and partly in the risk aversion of the trading participants. For the computationally intensive methods, a substantial reduction of computing time is achieved by parallelization on a graphics card architecture. To demonstrate the wide range of application areas of graphics cards, a standard model of statistical physics, the Ising model, is also ported to the graphics card with significant runtime gains. Partial results of this work are published in [PGPS07, PPS08, Pre11, PVPS09b, PVPS09a, PS09, PS10a, SBF+10, BVP10, Pre10, PS10b, PSS10, SBF+11, PB10].
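A minimal sketch of what quantifying pattern-based correlations can look like: the conditional mean of the next price increment, given the sign pattern of the previous k increments, would vanish for a purely random reaction. The implementation below is an illustrative assumption, not the method developed in the thesis.

```python
import numpy as np
from collections import defaultdict

def pattern_conditioned_moves(prices, k=3, min_count=10):
    """For each sign pattern of the last k price increments, collect the
    following increment; pattern-dependent conditional means signal a
    non-random reaction to the recent price history."""
    inc = np.sign(np.diff(np.asarray(prices, dtype=float)))
    buckets = defaultdict(list)
    for t in range(k, len(inc)):
        pattern = tuple(inc[t - k:t])   # e.g. (1, -1, 1) = up, down, up
        buckets[pattern].append(inc[t])
    return {p: np.mean(v) for p, v in buckets.items() if len(v) >= min_count}
```

Comparing these conditional means against a shuffled series gives a simple significance check for the presence of behavioral patterns on short time scales.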
Abstract:
Traditionally, the study of internal combustion engine operation has focused on steady-state performance. However, the daily driving schedule of automotive engines is inherently related to unsteady conditions, and (diesel) engines experience various operating conditions that can be classified as transient. Besides variations of the engine operating point, in terms of engine speed and torque, the warm-up phase can also be considered a transient condition. Chapter 2 deals with this thermal transient condition; more precisely, its main subject is the performance of a Selective Catalytic Reduction (SCR) system during the cold start and warm-up phases of the engine. The aim of this part of the work is to investigate and identify optimal exhaust line heating strategies that provide a fast activation of the catalytic reactions in the SCR system. Chapters 3 and 4 focus on the dynamic behavior of the engine under typical driving conditions. The common approach to dynamic optimization involves the solution of a single optimal-control problem. However, this approach requires models that are valid throughout the whole engine operating range and actuator ranges, and the result of the optimization is meaningful only if the model is very accurate. Chapter 3 proposes a methodology to circumvent these demanding requirements: an iteration between transient measurements, which refine a purpose-built model, and a dynamic optimization, which is constrained to the model's validity region. All the numerical methods required to implement this procedure are also presented. Chapter 4 proposes an approach to derive a transient feedforward control system in an automated way. It relies on optimal control theory to solve a dynamic optimization problem for fast transients; from the optimal solutions, the relevant information is extracted and stored in maps spanned by the engine speed and the torque gradient.
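To illustrate how such feedforward maps can be used online, the sketch below interpolates a hypothetical command map over engine speed and torque gradient; the grids, map entries and variable names are placeholders, not the calibration derived in the thesis.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical feedforward map: one actuator command stored as a function
# of engine speed and torque gradient, filled offline from the optimal
# solutions (placeholder values below).
speed_grid = np.linspace(1000.0, 4000.0, 7)        # engine speed, rpm
dtorque_grid = np.linspace(10.0, 200.0, 5)         # torque gradient, Nm/s
command_map = np.outer(speed_grid / 4000.0,
                       dtorque_grid / 200.0)       # placeholder entries

feedforward = RegularGridInterpolator(
    (speed_grid, dtorque_grid), command_map,
    bounds_error=False, fill_value=None)           # extrapolate at edges

u_ff = feedforward([[2500.0, 80.0]])[0]            # command for one transient
```

At run time, such a lookup replaces solving the optimal-control problem online, which is what makes the approach suitable for fast transients.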
Abstract:
The aim of this work is to investigate, using extensive Monte Carlo computer simulations, composite materials consisting of liquid crystals doped with nanoparticles. These systems are currently of great interest as they offer the possibility of tuning the properties of liquid crystals used in displays and other devices, as well as providing a way of obtaining regularly organized systems of nanoparticles by exploiting the molecular organization of the liquid crystal medium. Surprisingly, however, there is a lack of fundamental knowledge on the properties and phase behavior of these hybrid materials, making the route to their application an essentially empirical one. Here we wish to contribute to the much-needed rationalization of these systems by studying some basic effects induced by different nanoparticles on a liquid crystal host. We investigate in particular the effects of nanoparticle shape, size and polarity, as well as of their affinity to the liquid crystal solvent, on the stability of the system, monitoring phase transitions, order and molecular organizations. To do this we have proposed a coarse-grained approach in which nanoparticles are modeled as suitably shaped (spherical, rod-like and disk-like) collections of spherical Lennard-Jones beads, while the mesogens are represented by Gay-Berne particles. We find that the addition of apolar nanoparticles of different shapes typically lowers the nematic-isotropic transition of a non-polar nematic, with the destabilization being greater for spherical nanoparticles. For polar mesogens we have studied the effect of the solvent affinity of the nanoparticles, showing that aggregation takes place for low solvation values. Interestingly, if the nanoparticles are polar, the aggregates contribute to stabilizing the system, compensating the shape effect. We thus find the overall effect on stability to be a delicate balance of often contrasting contributions, pointing to the relevance of simulation studies for understanding these complex systems.
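As a concrete hint of the coarse-grained ingredients, here is a minimal sketch of the bead-bead interaction: each nanoparticle is a rigid cluster of Lennard-Jones sites, so the nanoparticle-nanoparticle energy is a double sum of 12-6 terms. The reduced units and function names are illustrative assumptions, and the Gay-Berne mesogen-mesogen and cross terms are omitted.

```python
import numpy as np

def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """12-6 Lennard-Jones pair potential between two beads (reduced units).
    In the coarse-grained model, every bead of a rigid nanoparticle
    cluster interacts with every bead of another cluster this way."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

def cluster_energy(coords_a, coords_b, epsilon=1.0, sigma=1.0):
    """Total bead-bead energy between two rigid nanoparticles, given the
    (n, 3) Cartesian coordinates of their beads."""
    d = np.linalg.norm(coords_a[:, None, :] - coords_b[None, :, :], axis=-1)
    return lennard_jones(d, epsilon, sigma).sum()
```

Changing the bead layout (a single sphere, a rod, a disk) changes the effective shape of the nanoparticle without touching the interaction law, which is what makes the shape comparison in the study systematic.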
Abstract:
Aberrant expression of ETS transcription factors, including FLI1 and ERG, due to chromosomal translocations has been described as a driver event in the initiation and progression of different tumors. In this study, the impact of the prostate cancer (PCa) fusion gene TMPRSS2-ERG was evaluated on components of the insulin-like growth factor (IGF) system and on the CD99 molecule, two well-documented targets of EWS-FLI1, the hallmark of Ewing sarcoma (ES). The aim of this study was to identify common or distinctive ETS-related mechanisms that could be exploited at the biological and clinical level. The results demonstrate that IGF-1R represents a common target of ETS rearrangements, as ERG and FLI1 bind the IGF-1R gene promoter and their modulation alters IGF-1R protein levels. At the clinical level, this mechanism provides a basis for a more rational use of anti-IGF-1R inhibitors, as PCa cells expressing the fusion gene respond better to anti-IGF-1R agents. The EWS-FLI1/IGF-1R axis provides a rationale for combining anti-IGF-1R agents with trabectedin, an alkylating agent that causes enhanced EWS-FLI1 occupancy on the IGF-1R promoter. TMPRSS2-ERG also influences the prognostic relevance of the IGF system: high IGF-1R correlates with a better biochemical progression-free survival (BPFS) in PCa patients negative for the fusion gene, while marginal or no association was found in the total cases or in TMPRSS2-ERG-positive cases, respectively. This study indicates that CD99 is differentially regulated between ETS-related tumors, as CD99 is not a target of ERG. In PCa, CD99 did not show differential expression between TMPRSS2-ERG-positive and -negative cells. A direct correlation was nonetheless found between the ERG and CD99 proteins, both in vitro and in patients, suggesting that the ERG target genes may include regulators of CD99. Despite a slight trend suggesting a correlation between CD99 expression and a better BPFS, no clinical relevance was found for CD99 as a prognostic biomarker.
Abstract:
In the present work, a detailed analysis of a Mediterranean tropical-like cyclone (TLC) that occurred in January 2014 has been conducted. The author is not aware of other studies on this particular event as of the publication of this thesis. In order to outline the cyclone evolution, observational data, including weather-station data, satellite data, radar data and photographic evidence, were collected first. After identifying the cyclone path and its general features, the GLOBO, BOLAM and MOLOCH NWP models, developed at ISAC-CNR (Bologna), were used to simulate the phenomenon. Particular attention was paid to the Mediterranean phase as well as to the Atlantic phase, since the cyclone showed a well-defined precursor up to 3 days before the minimum formed in the Alboran Sea. The Mediterranean phase was studied using different combinations of the GLOBO, BOLAM and MOLOCH models, so as to evaluate the best model chain for simulating this kind of phenomenon. The BOLAM and MOLOCH models showed the best performance, correcting the path that was erroneously deviated in the National Centre for Environmental Prediction (NCEP) and ECMWF operational models. The analysis of the cyclone's thermal phase showed the presence of a deep warm-core structure in many instances, thus confirming the tropical-like nature of the system. Furthermore, the results showed high sensitivity to the initial conditions throughout the lifetime of the cyclone, while modifying the Sea Surface Temperature (SST) led only to small changes in the Adriatic phase. The Atlantic phase was studied using the GLOBO and BOLAM models, with the aid of the same methodology. After tracing the precursor, in the form of a low-pressure system, from the American East Coast to Spain, the thermal phase analysis was conducted. The parameters obtained showed evidence of a deep cold-core asymmetric structure during the whole Atlantic phase, while the first contact with the Mediterranean Sea caused a sudden transition to a shallow warm-core structure. The examination of the 3-dimensional Potential Vorticity (PV) structure revealed the presence of a PV streamer that initially formed over Greenland and eventually interacted with the low-pressure system over the Spanish coast, favouring the first phase of the cyclone's baroclinic intensification. Finally, the development of an automated system that tracks and studies the thermal phase of Mediterranean cyclones is encouraged. This could lead to forecasts of potential tropical transitions at a minimal computational cost.
Abstract:
Ozone (O3) is an important oxidizing and greenhouse gas in the Earth's atmosphere. It influences the climate, air quality, human health and vegetation. Ecosystems such as forests are sinks for tropospheric ozone and will become more heterogeneous in the future as a result of storms, plant pests and changes in land use. These heterogeneities can be expected to reduce the uptake of greenhouse gases and to cause significant feedbacks on the climate system. The atmosphere-biosphere exchange of ozone is controlled by stomatal uptake, deposition on plant surfaces and soils, and chemical transformations. Understanding these processes and quantifying the ozone exchange for different ecosystems are prerequisites for extrapolating from local measurements to regional ozone fluxes.

The eddy covariance method is used to measure vertical turbulent ozone fluxes. The use of closed-path eddy covariance systems based on fast chemiluminescence ozone sensors can lead to errors in the flux measurement. A direct comparison of ozone sensors mounted side by side provided insight into the factors that influence the accuracy of the measurements. Systematic differences between individual sensors and the influence of different inlet tube lengths were investigated by analyzing frequency spectra and determining correction factors for the ozone fluxes. The experimentally determined correction factors showed no significant difference from correction factors derived from theoretical transfer functions, confirming the applicability of the theoretically derived factors for the correction of ozone fluxes.

In the summer of 2011, measurements were carried out within the EGER (ExchanGE processes in mountainous Regions) project to contribute to a better understanding of atmosphere-biosphere ozone exchange in disturbed ecosystems. Ozone fluxes were measured on both sides of a forest edge separating a spruce forest from a windthrow. On the road-like clearing created by the storm "Kyrill" (2007), a secondary vegetation developed that differed from the originally dominant spruce forest in its phenology and leaf physiology. The mean nocturnal flux above the spruce forest was -6 to -7 nmol m⁻² s⁻¹ and decreased to -13 nmol m⁻² s⁻¹ around noon. The ozone fluxes showed a clear relationship to plant transpiration and CO2 uptake, indicating that during the day most of the ozone was taken up by the plant stomata. The relatively high nocturnal deposition was caused by non-stomatal processes. Throughout the day, deposition above the forest was roughly twice as high as above the clearing. This ratio was consistent with the ratio of the plant area index (PAI). The disturbance of the ecosystem thus reduced the capacity of the vegetation to act as a sink for tropospheric ozone. The marked difference between the ozone fluxes of the two vegetation types illustrates the challenge of regionalizing ozone fluxes in heterogeneously forested areas.

The measured fluxes were furthermore compared with simulations performed with the chemistry model MLC-CHEM. To evaluate the model with respect to the calculation of ozone fluxes, measured and modeled fluxes from two positions in the EGER area were used. Although the magnitude of the fluxes agreed, the results showed a significant difference between measured and modeled fluxes. Moreover, the difference depended clearly on relative humidity, decreasing as humidity increased, which shows that the model requires further improvement before being used for extensive ozone flux studies.
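For reference, the eddy covariance method at the heart of these measurements reduces, per averaging interval, to the covariance of the vertical wind and ozone fluctuations. The minimal Python sketch below shows only this core computation and omits the coordinate rotation, detrending and the spectral (inlet tube) corrections discussed above.

```python
import numpy as np

def eddy_covariance_flux(w, c):
    """Turbulent vertical flux as the covariance of vertical wind speed w
    (m/s) and ozone concentration c, both sampled at high frequency over
    one averaging interval (illustrative sketch only)."""
    w_prime = w - w.mean()   # fluctuations around the interval mean
    c_prime = c - c.mean()
    return np.mean(w_prime * c_prime)
```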
Abstract:
Our generation of computational scientists is living in an exciting time: not only do we get to pioneer important algorithms and computations, we also get to set standards for how computational research should be conducted and published. From Euclid's reasoning and Galileo's experiments, it took hundreds of years for the theoretical and experimental branches of science to develop standards for publication and peer review. Computational science, rightly regarded as the third branch, can walk the same road much faster. The success and credibility of science are anchored in the willingness of scientists to expose their ideas and results to independent testing and replication by other scientists. This requires the complete and open exchange of data, procedures and materials. The idea of "replication by other scientists" in reference to computations is more commonly known as "reproducible research". In this context, the journal "EAI Endorsed Transactions on Performance & Modeling, Simulation, Experimentation and Complex Systems" had the exciting and original idea of allowing scientists to submit, together with the article, the computational materials (software, data, etc.) that were used to produce its contents. The goal of this procedure is to allow the scientific community to verify the content of the paper by reproducing it on the platform, independently of the chosen OS, to confirm or invalidate it and, above all, to allow its reuse to produce new results. This procedure is of little help, however, without minimal methodological support: the raw data sets and the software are difficult to exploit without the logic that guided their use or production. This led us to think that, in addition to the data sets and the software, an additional element must be provided: the workflow that ties all of them together.
Abstract:
With the outlook of improving seismic vulnerability assessment for the city of Bishkek (Kyrgyzstan), the global dynamic behaviour of four nine-storey reinforced-concrete large-panel buildings is studied in the elastic regime. The four buildings were built during the Soviet era within a serial production system; since they all belong to the same series, they have very similar geometries both in plan and in height. Firstly, ambient vibration measurements are performed in the four buildings. The data analysis, composed of the discrete Fourier transform, modal analysis (frequency domain decomposition) and deconvolution interferometry, yields the modal characteristics and an estimate of the linear impulse response function for the structures of the four buildings. Then, finite element models are set up for all four buildings and the results of the numerical modal analysis are compared with the experimental ones. The numerical models are finally calibrated on the first three global modes, after which their results match the experimental ones with an error of less than 20%.
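A minimal sketch of the frequency domain decomposition step: estimate the cross-spectral density matrix of the ambient vibration channels and track its first singular value across frequency, whose peaks indicate candidate modes. The channel layout and parameters below are illustrative assumptions, not the processing chain actually used.

```python
import numpy as np
from scipy.signal import csd

def fdd_first_singular_values(acc, fs, nperseg=1024):
    """Frequency domain decomposition sketch.

    acc: (channels x samples) array of ambient accelerations; fs: sampling
    rate in Hz. Builds the cross-spectral density matrix G(f) and returns
    its first singular value per frequency; peaks of s1(f) indicate
    candidate modal frequencies, and the corresponding singular vectors
    (not computed here) approximate the mode shapes.
    """
    n = acc.shape[0]
    freqs, _ = csd(acc[0], acc[0], fs=fs, nperseg=nperseg)
    G = np.zeros((len(freqs), n, n), dtype=complex)
    for i in range(n):
        for j in range(n):
            _, G[:, i, j] = csd(acc[i], acc[j], fs=fs, nperseg=nperseg)
    s1 = np.array([np.linalg.svd(Gf, compute_uv=False)[0] for Gf in G])
    return freqs, s1
```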