932 results for Simulation and Modeling


Relevance: 100.00%

Publisher:

Abstract:

The two-metal-ion architecture is a structural feature found in a variety of RNA-processing metalloenzymes and ribozymes (RNA-based enzymes), which control the biogenesis and metabolism of vital RNAs, including non-coding RNAs (ncRNAs). Notably, ncRNAs are emerging as key players in the regulation of cellular homeostasis, and their altered expression has often been linked to the development of severe human pathologies, from cancer to mental disorders. Accordingly, understanding the biological processing of ncRNAs is foundational for the development of novel therapeutic strategies and tools. Here, we use state-of-the-art molecular simulations, complemented with X-ray crystallography and biochemical experiments, to characterize the RNA-processing cycle as catalyzed by two two-metal-ion enzymes: group II intron ribozymes and RNase H1. We show that multiple, diverse cations are strategically recruited to, and released in a timely manner from, the enzymes’ active site during catalysis. This controlled cation trafficking leads to the recursive formation and disruption of an extended two-metal-ion architecture that is functional for RNA hydrolysis, from substrate recruitment to product release. Importantly, we found that these cation-binding sites are conserved among other RNA-processing machineries, including the human spliceosome and CRISPR-Cas systems, suggesting that an evolutionarily converged catalytic strategy is adopted by these enzymes to process RNA molecules. Our findings thus corroborate and substantially extend the current knowledge of two-metal-ion enzymes, and support both the design of novel drugs targeting RNA-processing metalloenzymes or ribozymes and the rational engineering of novel programmable gene-therapy tools.

Relevance: 100.00%

Publisher:

Abstract:

The time-dependent CP asymmetries of the $B^0\to\pi^+\pi^-$ and $B^0_s\to K^+K^-$ decays and the time-integrated CP asymmetries of the $B^0\to K^+\pi^-$ and $B^0_s\to\pi^+K^-$ decays are measured using the $pp$ collision data collected with the LHCb detector during the full Run 2. The results are compatible with previous LHCb determinations of these quantities, except for the CP-violation parameters of the $B^0_s\to K^+K^-$ decay, which show a discrepancy exceeding 3 standard deviations between different data-taking periods. The investigations being conducted to understand the discrepancy are documented. The measurement of the CKM matrix element $|V_{cb}|$ using $B^0_{s}\to D^{(*)-}_s\mu^+\nu_\mu$ decays is also reported, using the $pp$ collision data collected with the LHCb detector during the full Run 1. The measurement yields $|V_{cb}| = (41.4\pm0.6\pm0.9\pm1.2)\times 10^{-3}$, where the first uncertainty is statistical, the second systematic, and the third due to external inputs. This result is compatible with the world averages and constitutes the first measurement of $|V_{cb}|$ at a hadron collider and the first ever with decays of the $B^0_s$ meson. The analysis also provides the first measurements of the branching ratios and form-factor parameters of the signal decay modes. The final part of this thesis reports the study of the characteristics governing the response of an electromagnetic calorimeter (ECAL) intended to operate in the high-luminosity regime foreseen for the LHCb Upgrade II. A fast and flexible simulation framework is developed for this purpose. The physics performance of different ECAL configurations is evaluated using samples of fully simulated $B^0\to \pi^+\pi^-\pi^0$ and $B^0\to K^{*0}e^+e^-$ decays. The results are used to guide the development of the future ECAL and are reported in the Framework Technical Design Report of the LHCb Upgrade II detector.
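For reference, time-dependent CP asymmetries of this kind are commonly described by the standard parameterization in terms of the CP-violation parameters $C_f$, $S_f$ and $A_f^{\Delta\Gamma}$ (the formula below follows the usual convention; it is a general expression, not a result specific to this thesis):

```latex
A_{CP}(t) \equiv
\frac{\Gamma\left(\bar{B}^0_{(s)}(t)\to f\right)-\Gamma\left(B^0_{(s)}(t)\to f\right)}
     {\Gamma\left(\bar{B}^0_{(s)}(t)\to f\right)+\Gamma\left(B^0_{(s)}(t)\to f\right)}
=
\frac{-C_f\cos(\Delta m\, t)+S_f\sin(\Delta m\, t)}
     {\cosh\!\left(\frac{\Delta\Gamma\, t}{2}\right)-A_f^{\Delta\Gamma}\sinh\!\left(\frac{\Delta\Gamma\, t}{2}\right)}
```

Here $\Delta m$ and $\Delta\Gamma$ are the mass and width differences of the $B^0_{(s)}$ mass eigenstates; for $B^0$ decays $\Delta\Gamma \approx 0$ and the denominator reduces to unity.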

Relevance: 100.00%

Publisher:

Abstract:

The topic of this thesis is the design and implementation of mathematical models and control-system algorithms for rotary-wing unmanned aerial vehicles to be used in cooperative scenarios. Rotorcraft have many attractive advantages: they can take off and land vertically, hover, and move backward and laterally. Rotary-wing aircraft missions require precise control due to the vehicles' instability and strong inter-axis coupling. Flight testing is the most accurate way to evaluate flying qualities and to test control systems; however, it may be very expensive and/or not feasible at the early design and prototyping stages. A good compromise is a preliminary assessment performed by means of simulations, followed by a reduced flight-testing campaign. Consequently, an analytical framework is an important prerequisite for simulation and control-algorithm design. In this work, mathematical models for various helicopter configurations are implemented. Different helicopter flight-control techniques are presented with their theoretical background and tested via simulations and experimental flight tests on a small-scale unmanned helicopter. The same platform is also used in a cooperative scenario with a rover; control strategies, algorithms, and their implementation are presented for two main mission scenarios. One of the main contributions of this thesis is a control system consisting of a classical PID baseline controller augmented with an L1 adaptive contribution. In addition, a complete analytical framework and a study of the dynamics and stability of a synch-rotor are provided, together with the implementation of cooperative control strategies for two main scenarios involving a small-scale unmanned helicopter and a rover.
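The idea of a PID baseline augmented with an adaptive term can be illustrated with a minimal sketch. The plant, gains, and gradient-type adaptation law below are illustrative assumptions, far simpler than the thesis's actual L1 adaptive design: a double-integrator plant with an unknown constant disturbance, where the adaptive estimate cancels the disturbance the PID alone would have to absorb.

```python
# Hedged sketch: PID baseline controller augmented with a simple
# adaptive term (illustrative stand-in for an L1-style augmentation).
# All gains and the plant model are assumptions, not the thesis's design.

def simulate(kp=4.0, ki=1.0, kd=2.0, gamma=2.0, dt=0.01, steps=2000):
    x, v = 0.0, 0.0              # double-integrator plant: position, velocity
    integral, prev_err = 0.0, 0.0
    theta = 0.0                  # adaptive estimate of the disturbance
    ref, d = 1.0, 0.8            # step reference, unknown constant disturbance
    for _ in range(steps):
        err = ref - x
        integral += err * dt
        deriv = (err - prev_err) / dt
        u_pid = kp * err + ki * integral + kd * deriv   # PID baseline
        u = u_pid - theta        # adaptive term cancels estimated disturbance
        theta += -gamma * err * dt                      # gradient-type adaptation
        v += (u + d) * dt        # plant acceleration = input + disturbance
        x += v * dt
        prev_err = err
    return x

final_pos = simulate()           # should settle near the reference of 1.0
```

The adaptive channel acts as an extra integrator driven by the tracking error, so the disturbance is rejected without retuning the PID gains.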

Relevance: 100.00%

Publisher:

Abstract:

Brain functioning relies on the interaction of several neural populations connected through complex connectivity networks, enabling the transmission and integration of information. Recent advances in neuroimaging techniques, such as electroencephalography (EEG), have deepened our understanding of the reciprocal roles played by brain regions during cognitive processes. The underlying idea of this PhD research is that EEG-related functional connectivity (FC) changes in the brain may incorporate important neuromarkers of behavior and cognition, as well as brain disorders, even at subclinical levels. However, a complete understanding of the reliability of the wide range of existing connectivity estimation techniques is still lacking. The first part of this work addresses this limitation by employing Neural Mass Models (NMMs), which simulate EEG activity and offer a unique tool to study interconnected networks of brain regions in controlled conditions. NMMs were employed to test FC estimators like Transfer Entropy and Granger Causality in linear and nonlinear conditions. Results revealed that connectivity estimates reflect information transmission between brain regions, a quantity that can be significantly different from the connectivity strength, and that Granger causality outperforms the other estimators. A second objective of this thesis was to assess brain connectivity and network changes on EEG data reconstructed at the cortical level. Functional brain connectivity has been estimated through Granger Causality, in both temporal and spectral domains, with the following goals: a) detect task-dependent functional connectivity network changes, focusing on internal-external attention competition and fear conditioning and reversal; b) identify resting-state network alterations in a subclinical population with high autistic traits. 
Connectivity-based neuromarkers, compared to the canonical EEG analysis, can provide deeper insights into brain mechanisms and may drive future diagnostic methods and therapeutic interventions. However, further methodological studies are required to fully understand the accuracy and information captured by FC estimates, especially concerning nonlinear phenomena.
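The Granger-causality estimators tested above can be illustrated with a minimal bivariate sketch: fit an autoregressive model of the target signal with and without the source's past, and compare residual variances. The synthetic signals, model order, and coupling strength below are illustrative assumptions, not the thesis's actual NMM-based pipeline.

```python
# Hedged sketch: pairwise Granger causality on synthetic signals with a
# known x -> y coupling. A positive value means the source's past helps
# predict the target beyond the target's own past.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.normal()
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.normal()  # x drives y

def granger(source, target):
    """ln(var_restricted / var_full) for an order-1 model."""
    past_t, past_s, fut = target[:-1], source[:-1], target[1:]
    # restricted model: target's own past only
    A = np.column_stack([past_t, np.ones_like(past_t)])
    res_r = fut - A @ np.linalg.lstsq(A, fut, rcond=None)[0]
    # full model: add the source's past
    B = np.column_stack([past_t, past_s, np.ones_like(past_t)])
    res_f = fut - B @ np.linalg.lstsq(B, fut, rcond=None)[0]
    return float(np.log(res_r.var() / res_f.var()))

gc_xy = granger(x, y)   # clearly positive: x Granger-causes y
gc_yx = granger(y, x)   # near zero: no causation in this direction
```

As the abstract notes, such estimates quantify information transmission, which need not coincide with the underlying connectivity strength.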

Relevance: 100.00%

Publisher:

Abstract:

In this thesis, the study and simulation of two advanced sensorless speed-control techniques for a surface PMSM are presented. The aim is to implement a sensorless control algorithm for a submarine auxiliary propulsion system. This experimental activity is the result of a project collaboration with L3Harris Calzoni, a leading company in A&D naval handling systems for the military field. A Simulink model of the whole electric drive has been developed. Given the satisfactory simulation results, the sensorless control system has then been implemented in C code for the STM32 environment. Finally, several tests on a real brushless machine have been carried out with the motor connected to a mechanical load, to reproduce the real scenario of the final application. All experimental results have been recorded through a graphical interface software developed at Calzoni.
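One classic idea behind sensorless PMSM control is that, in the stationary alpha-beta frame, the back-EMF components encode the rotor's electrical angle, which can be recovered with an arctangent. The sketch below is an idealized, noiseless illustration of that principle; the machine parameters are assumptions and the thesis's two actual techniques are not specified here.

```python
# Hedged sketch: recovering the rotor electrical angle from the
# back-EMF in the alpha-beta frame (idealized, noise-free model).
import math

KE, OMEGA = 0.1, 120.0        # back-EMF constant [V*s/rad], electrical speed [rad/s]
dt, steps = 1e-4, 5000

max_err = 0.0
for k in range(steps):
    theta = (OMEGA * k * dt) % (2 * math.pi)     # true electrical angle
    e_alpha = -KE * OMEGA * math.sin(theta)      # back-EMF, alpha axis
    e_beta = KE * OMEGA * math.cos(theta)        # back-EMF, beta axis
    # the back-EMF vector leads the rotor flux axis by 90 degrees
    theta_est = (math.atan2(e_beta, e_alpha) - math.pi / 2) % (2 * math.pi)
    err = abs((theta_est - theta + math.pi) % (2 * math.pi) - math.pi)
    max_err = max(max_err, err)                  # should stay near zero
```

In a real drive the back-EMF is not measured directly but reconstructed by an observer from currents and voltages, and the estimate degrades at low speed where the back-EMF vanishes.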

Relevance: 100.00%

Publisher:

Abstract:

Final project submitted for the degree of Master in Civil Engineering.

Relevance: 100.00%

Publisher:

Abstract:

The purpose of this work was to create a comprehensive production simulation model of a plywood mill and to investigate the possibilities of cost accounting in connection with the model. The basic assumption is that if the production model behaves like its real-world counterpart, the cost information computed with it can also be trusted. As an introduction, the theoretical sources underlying the work are reviewed. First, the plywood manufacturing process and the line and machine types used in it are presented. Second, the principles, rules, and possibilities of simulation research are presented. In addition, cost accounting, its different principles and forms, and production and inventory control are examined. As materials and methods, the collection and application of the base data needed to build the simulation model are presented. The components representing the different production lines that make up the mill model, and the model's user interface, are then described. Finally, the adaptation of the theoretical mill model to match a real plywood mill is described. Three different daily production runs were simulated with the adapted model, and the resulting cost information was compared with a table describing the optimal situation.

Relevance: 100.00%

Publisher:

Abstract:

Evolutionary computation, and genetic algorithms in particular, are increasingly used in organizations to solve management and decision-making problems (Apoteker & Barthelemy, 2000). The literature on the subject is growing, and several state-of-the-art reviews have been published. Despite this, no work has systematically evaluated the use of genetic algorithms in problems specific to international business (examples include international logistics, international trade, international marketing, international finance, and international strategy). The purpose of this thesis is therefore to survey the current state of applications of genetic algorithms in international business.
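A genetic algorithm of the kind surveyed above can be sketched in a few lines. The toy problem below (maximizing the number of ones in a bit string, "OneMax"), the truncation selection, and the operator rates are all illustrative choices; real business applications would encode routes, portfolios, or schedules instead.

```python
# Hedged sketch: a minimal genetic algorithm on the OneMax toy problem.
# Encoding, selection scheme, and rates are illustrative assumptions.
import random

random.seed(1)
LENGTH, POP, GENS, MUT = 30, 40, 60, 0.02

def fitness(bits):
    return sum(bits)                      # OneMax: count the ones

def crossover(a, b):
    cut = random.randrange(1, LENGTH)     # single-point crossover
    return a[:cut] + b[cut:]

def mutate(bits):
    return [1 - b if random.random() < MUT else b for b in bits]

pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]              # truncation selection (elitist)
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children

best = max(pop, key=fitness)              # converges to the all-ones string
```

Keeping the parents unchanged makes the best fitness monotonically non-decreasing, a simple form of elitism.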

Relevance: 100.00%

Publisher:

Abstract:

The paper is dedicated to the modeling and justification of super-resolution measuring-calculating systems within the conception "device + PC = new possibilities". The authors developed a new mathematical method for solving multi-criteria optimization problems, based on the physico-mathematical formalism of the reduction of fuzzy distorted measurements. It is shown that the decisive role is played by the mathematical properties of the physical models of the measured object, the environment, and the measuring components of the measuring-calculating system, as well as their interaction, together with the developed mathematical method for processing and interpreting the measurements.

Relevance: 100.00%

Publisher:

Abstract:

The paper considers the language problems that can arise in a bilingual society. A game-theoretic model is constructed in which the conditions for preserving the stability of language groups are determined on the basis of Nash equilibrium.
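The Nash-equilibrium reasoning in such a model can be illustrated with a toy 2x2 "language choice" game: each group either keeps its own language or switches to the other's, and an outcome is an equilibrium when neither group can gain by deviating unilaterally. The payoff numbers below are illustrative assumptions, not those of the paper.

```python
# Hedged sketch: enumerating pure-strategy Nash equilibria in a toy
# 2x2 language-choice game. Strategies: 0 = keep own language,
# 1 = switch to the other group's language. Payoffs are illustrative.
payoff_a = [[3, 1],
            [2, 2]]   # group A's payoff, indexed [a_strategy][b_strategy]
payoff_b = [[3, 2],
            [1, 2]]   # group B's payoff

def is_nash(a, b):
    # each player's strategy must be a best response to the other's
    best_a = all(payoff_a[a][b] >= payoff_a[other][b] for other in (0, 1))
    best_b = all(payoff_b[a][b] >= payoff_b[a][other] for other in (0, 1))
    return best_a and best_b

equilibria = [(a, b) for a in (0, 1) for b in (0, 1) if is_nash(a, b)]
```

With these payoffs two equilibria coexist: both groups keeping their languages (stable bilingualism) and both converging, mirroring the stability question the paper studies.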

Relevance: 100.00%

Publisher:

Abstract:

The subject of this research is forecasting the capacity needs of the Fenix information system developed by TietoEnator Oy. The goals of the work are to become familiar with the different subsystems of Fenix, to find a way to isolate and model each subsystem's contribution to the system load, and to determine, on a preliminary level, which parameters drive the load each subsystem generates. Part of the work is to examine different simulation alternatives and assess their suitability for modeling complex systems. Based on the collected data, a simulation model describing the load on the system's data warehouse is created. Using the information obtained from the model together with measurements from the production system, the model is refined to correspond ever more closely to the behavior of the real system. The model is examined, for example, in terms of simulated system load and queue behavior. In the production system, changes in the behavior of different load sources are measured, for example as functions of the number of users and the time of day. The results of this work are intended to serve as the basis for later follow-up research in which the parameterization of the subsystems is refined further, the model's ability to describe the real system is improved, and the scope of the model is expanded.
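Queue behavior of the kind examined in such capacity studies can be sketched with a minimal single-server queue simulation, checked against the closed-form M/M/1 result. The arrival and service rates below are illustrative assumptions; the Fenix system itself is far more complex than one queue.

```python
# Hedged sketch: single-server queue simulation with Poisson arrivals
# and exponential service, compared against M/M/1 theory.
import random

random.seed(7)
LAM, MU, N = 0.7, 1.0, 200_000   # arrival rate, service rate, jobs

t = 0.0
server_free_at = 0.0
total_wait = 0.0
for _ in range(N):
    t += random.expovariate(LAM)          # next arrival time
    start = max(t, server_free_at)        # wait if the server is busy
    total_wait += start - t               # time spent queueing
    server_free_at = start + random.expovariate(MU)

mean_wait = total_wait / N
# M/M/1 theory: Wq = rho / (mu - lam), with utilization rho = lam / mu
theory = (LAM / MU) / (MU - LAM)
```

Agreement between the simulated and theoretical mean wait is the kind of validation step used before trusting a model on configurations with no closed-form answer.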

Relevance: 100.00%

Publisher:

Abstract:

Modeling and simulation permeate all areas of business, science and engineering. With the increase in the scale and complexity of simulations, large amounts of computational resources are required, and collaborative model development is needed, as multiple parties could be involved in the development process. The Grid provides a platform for coordinated resource sharing and application development and execution. In this paper, we survey existing technologies in modeling and simulation, and we focus on interoperability and composability of simulation components for both simulation development and execution. We also present our recent work on an HLA-based simulation framework on the Grid, and discuss the issues to achieve composability.

Relevance: 100.00%

Publisher:

Abstract:

Synthetic Biology is a relatively new discipline, born at the beginning of the new millennium, that brings the typical engineering approach (abstraction, modularity, and standardization) to biotechnology. These principles aim to tame the extreme complexity of the various components and aid the construction of artificial biological systems with specific functions, usually by means of synthetic genetic circuits implemented in bacteria or simple eukaryotes like yeast. The cell becomes a programmable machine whose low-level programming language is made of strings of DNA. This work was performed in collaboration with researchers of the Department of Electrical Engineering of the University of Washington in Seattle, and with Marilisa Cortesi, a student of the Corso di Laurea Magistrale in Ingegneria Biomedica at the University of Bologna. During the collaboration I contributed to a Synthetic Biology project already under way in the Klavins Laboratory: I modeled, and subsequently simulated, a synthetic genetic circuit designed to implement a multicellular behavior in a growing bacterial microcolony. The first chapter introduces the foundations of molecular biology: the structure of nucleic acids, transcription, translation, and the mechanisms that regulate gene expression. An introduction to Synthetic Biology completes the section. The second chapter describes the synthetic genetic circuit conceived to make two distinct groups of cells, termed leaders and followers, emerge spontaneously from an isogenic microcolony of bacteria. The circuit exploits the intrinsic stochasticity of gene expression and intercellular communication via small molecules to break the symmetry of the microcolony's phenotype. The four modules of the circuit (coin flipper, sender, receiver, and follower) and their interactions are then illustrated.
The third chapter derives the mathematical representation of the various components of the circuit and makes the several simplifying assumptions explicit. Transcription and translation are modeled as a single step, and gene expression is a function of the intracellular concentrations of the various transcription factors acting on the different promoters of the circuit. A list of the parameters, with a justification for their values, closes the chapter. The fourth chapter describes the main characteristics of the gro simulation environment, developed by the Self Organizing Systems Laboratory of the University of Washington, and then details a sensitivity analysis performed to pinpoint the desirable characteristics of the various genetic components. The sensitivity analysis uses a cost function based on the fraction of cells in each possible state at the end of the simulation and on the desired outcome. Thanks to a particular kind of scatter plot, the parameters are ranked: starting from an initial condition in which all parameters assume their nominal values, the ranking suggests which parameter to tune in order to reach the goal. Obtaining a microcolony in which almost all cells are followers and only a few are leaders appears to be the most difficult task, because a small number of leader cells struggles to produce enough signal to switch the rest of the microcolony into the follower state; a follower majority can nevertheless be obtained by increasing the production of signal as much as possible. Reaching a microcolony split in half between leaders and followers is comparatively easy, the best strategy being a slight increase in the production of the enzyme. To end up with a majority of leaders, instead, it is advisable to increase the basal expression of the coin flipper module.
At the end of the chapter, a possible future application of the leader-election circuit, the spontaneous formation of spatial patterns in a microcolony, is modeled with the finite-state-machine formalism. The gro simulations provide insights into the genetic components needed to implement this behavior. In particular, since both examples of pattern formation rely on a local version of leader election, a short-range communication system is essential; moreover, new synthetic components that allow the growth rate to be reliably downregulated in specific cells, without side effects, need to be developed. The appendix lists the gro code used to simulate the model of the circuit, a Python script used to distribute the simulations on a Linux cluster, and the Matlab code developed to analyze the data.
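The symmetry-breaking idea behind the coin flipper module can be illustrated with a toy stochastic simulation: each cell independently becomes a leader with a small probability, and the leaders then recruit the rest as followers. The probability, colony size, and trial count below are illustrative assumptions, not the thesis's calibrated parameters.

```python
# Hedged sketch: stochastic "coin flip" symmetry breaking in a
# microcolony. Each cell independently becomes a leader with
# probability P_LEADER; the numbers are illustrative.
import random

random.seed(3)
P_LEADER, CELLS, TRIALS = 0.05, 200, 500

leader_counts = []
for _ in range(TRIALS):
    leaders = sum(1 for _ in range(CELLS) if random.random() < P_LEADER)
    leader_counts.append(leaders)

mean_leaders = sum(leader_counts) / TRIALS        # expect ~ CELLS * P_LEADER = 10
leader_fraction = sum(1 for c in leader_counts if c > 0) / TRIALS
```

Even a tiny per-cell probability makes a leaderless colony exponentially unlikely (here $0.95^{200} \approx 3.5\times10^{-5}$), which is what lets an isogenic population reliably split into two phenotypic groups.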

Relevance: 100.00%

Publisher:

Abstract:

We model nongraphitized carbon black surfaces and investigate the adsorption of argon on them using grand canonical Monte Carlo simulation. In this model, the nongraphitized surface is represented as a stack of graphene layers with some carbon atoms of the top layer randomly removed. The percentage of surface carbon atoms removed and the effective size of the resulting defects are the key parameters characterizing the nongraphitized surface. The patterns of the adsorption isotherm and the isosteric heat are studied as functions of these surface parameters as well as of pressure and temperature. The adsorption isotherm shows a step-like behavior on a perfect graphite surface and becomes smoother on nongraphitized surfaces. Regarding the isosteric heat versus loading, for graphitized thermal carbon black we observe an increase of the heat in the submonolayer coverage, then a sharp decline when the second layer starts to form, beyond which the heat increases slightly. In contrast, the isosteric heat versus loading for a highly nongraphitized surface shows a general decline with loading, due to the energetic heterogeneity of the surface; only when the fluid-fluid interaction outweighs the surface energetic factor do we see a minimum-maximum in the isosteric heat versus loading. These simulation results for the isosteric heat agree well with the experimental results on the graphitization of Spheron 6 (Polley, M. H.; Schaeffer, W. D.; Smith, W. R. J. Phys. Chem. 1953, 57, 469; Beebe, R. A.; Young, D. M. J. Phys. Chem. 1954, 58, 93). Adsorption isotherms and isosteric heats in pores whose walls have defects are also studied, and the patterns of the isotherm and isosteric heat could be used to identify the fingerprint of the surface.
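The insertion/deletion moves at the heart of grand canonical Monte Carlo can be illustrated on a toy lattice gas of independent adsorption sites: particles are inserted or deleted with Metropolis acceptance probabilities set by the chemical potential and the site energy, and the equilibrium coverage rises with the chemical potential. The lattice model and energies below are illustrative assumptions, far simpler than a real argon/carbon potential.

```python
# Hedged sketch: grand canonical Monte Carlo on a toy lattice-gas
# surface (independent sites, adsorption energy EPS < 0). Parameters
# are illustrative, in reduced units with k_B = 1.
import math
import random

random.seed(5)
SITES, EPS, T, STEPS = 400, -1.0, 1.0, 200_000

def run(mu):
    occ = [0] * SITES
    n = 0
    for _ in range(STEPS):
        i = random.randrange(SITES)
        if occ[i] == 0:   # attempt insertion: dE = EPS, dN = +1
            if random.random() < min(1.0, math.exp(-(EPS - mu) / T)):
                occ[i] = 1
                n += 1
        else:             # attempt deletion: dE = -EPS, dN = -1
            if random.random() < min(1.0, math.exp((EPS - mu) / T)):
                occ[i] = 0
                n -= 1
    return n / SITES      # final coverage (a crude single-snapshot estimate)

low_coverage = run(mu=-3.0)   # dilute regime
high_coverage = run(mu=0.0)   # near-monolayer regime
```

For independent sites this reproduces the Langmuir coverage $\theta = 1/(1 + e^{(\epsilon-\mu)/k_BT})$; the full simulations in the paper add fluid-fluid interactions and the layered carbon structure on top of this move set.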

Relevance: 100.00%

Publisher:

Abstract:

Storm and tsunami deposits are generated by similar depositional mechanisms, making them hard to discriminate with classic sedimentologic methods. Here we propose an original approach to identify tsunami-induced deposits by combining numerical simulation and rock magnetism. To test our method, we investigate the tsunami deposit of the Boca do Rio estuary generated by the 1755 Lisbon earthquake, which is well described in the literature. We first test the 1755 tsunami scenario using a numerical inundation model to provide physical parameters of the tsunami wave. We then use concentration-sensitive (MS, SIRM) and grain-size-sensitive (chi(ARM), ARM, B1/2, ARM/SIRM) magnetic proxies, coupled with SEM microscopy, to unravel the magnetic mineralogy of the tsunami-induced deposit and its associated depositional mechanisms. To study the connection between the tsunami deposit and the different sedimentologic units present in the estuary, the magnetic data were processed by multivariate statistical analyses. Our numerical simulation shows a large inundation of the estuary, with flow depths varying from 0.5 to 6 m and a run-up of ~7 m. The magnetic data show a dominance of paramagnetic minerals (quartz) mixed with a lesser amount of ferromagnetic minerals, namely titanomagnetite and titanohematite, both of detrital origin and reworked from the underlying units. Multivariate statistical analyses indicate a closer connection between the tsunami-induced deposit and a mixture of Units C and D. All these results point to a scenario in which the energy released by the tsunami wave was strong enough to overtop and erode a significant amount of sand from the littoral dune and mix it with material reworked from underlying layers at least 1 m deep. The method tested here represents an original and promising tool to identify tsunami-induced deposits in similar embayed beach environments.