917 results for C. computational simulation


Relevance:

30.00%

Publisher:

Abstract:

The aim of this thesis is the elucidation of structure-property relationships of molecular semiconductors for electronic devices. This involves the use of a comprehensive set of simulation techniques, ranging from quantum-mechanical to numerical stochastic methods, and also the development of ad hoc computational tools. In more detail, the research activity concerned two main topics: the study of the electronic properties and structural behaviour of liquid crystalline (LC) materials based on functionalised oligo(p-phenyleneethynylene) (OPE), and the investigation of the effect of the electric field associated with OFET operation on pentacene thin-film stability. In this dissertation, a novel family of substituted OPE liquid crystals with applications in stimuli-responsive materials is presented. Simulations not only provide evidence for the characterization of the liquid crystalline phases of different OPEs, but also elucidate the role of charge transfer states in donor-acceptor LCs containing an endohedral metallofullerene moiety. Such systems can be regarded as promising candidates for organic photovoltaics. Furthermore, exciton dynamics simulations are performed as a way to obtain additional information about the degree of order in OPE columnar phases. Finally, ab initio and molecular mechanics simulations are used to investigate the influence of an applied electric field on pentacene reactivity and stability. The reaction path of pentacene thermal dimerization in the presence of an external electric field is investigated; the results can be related to the fatigue effect observed in OFETs, which show significant performance degradation even in the absence of external agents. In addition, the effect of the gate voltage on a pentacene monolayer is simulated, and the results are compared to X-ray diffraction measurements performed for the first time on operating OFETs.
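The thesis's simulation code is not reproduced here. As a rough illustration of the kind of rate expression underlying charge-transfer and exciton-hopping simulations in molecular semiconductors, this is a minimal sketch of the Marcus hopping rate; all parameter values below are hypothetical, not taken from the thesis.

```python
import math

def marcus_rate(j_eff, lam, dg, kt=0.025852):
    """Marcus hopping rate between two sites (all energies in eV).

    j_eff: electronic coupling, lam: reorganization energy,
    dg: free-energy difference, kt: thermal energy (default ~300 K).
    """
    hbar = 6.582119569e-16  # reduced Planck constant, eV*s
    prefactor = (2.0 * math.pi / hbar) * j_eff ** 2
    gaussian = 1.0 / math.sqrt(4.0 * math.pi * lam * kt)
    return prefactor * gaussian * math.exp(-(dg + lam) ** 2 / (4.0 * lam * kt))

# The rate peaks when the driving force cancels the reorganization
# energy (dg = -lam), the barrierless case of Marcus theory.
```

Such rates would typically feed a kinetic Monte Carlo propagation of the charge or exciton over the columnar stack.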

Relevance:

30.00%

Publisher:

Abstract:

Self-organising pervasive ecosystems of devices are set to become a major vehicle for delivering infrastructure and end-user services. The inherent complexity of such systems poses new challenges to those who want to master it by applying the principles of engineering. The recent growth in the number and distribution of devices with substantial computational and communication capabilities, which suddenly accelerated with the massive diffusion of smartphones and tablets, is delivering a world with a much higher density of devices in space. Also, communication technologies seem to be focusing on short-range device-to-device (P2P) interactions, with technologies such as Bluetooth and Near-Field Communication gaining greater adoption. Locality and situatedness become key to providing the best possible experience to users, and the classic model of a centralised, enormously powerful server gathering and processing data becomes less and less efficient as device density grows. Accomplishing complex global tasks without a centralised controller responsible for aggregating data, however, is challenging. In particular, there is a local-to-global issue that makes the application of engineering principles difficult: designing device-local programs that, through interaction, guarantee a certain global service level. In this thesis, we first analyse the state of the art in coordination systems, then motivate the work by describing the main issues of pre-existing tools and practices and identifying the improvements that would benefit the design of such complex software ecosystems. The contribution can be divided into three main branches. First, we introduce a novel simulation toolchain for pervasive ecosystems, designed to allow good expressiveness while retaining high performance. Second, we leverage existing coordination models and patterns in order to create new spatial structures. Third, we introduce a novel language, based on the existing "Field Calculus" and integrated with the aforementioned toolchain, designed to be usable for practical aggregate programming.
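As a toy illustration of the local-to-global issue, a classic aggregate-computing building block is the self-stabilising gradient: each device repeatedly takes the minimum over its neighbours' distance estimates plus the edge length, and source devices hold zero. This sketch (synchronous rounds, a hypothetical graph encoding) is not the thesis's toolchain or language, just the flavour of a device-local program that yields a global structure.

```python
def gradient(neigh, sources, dist, rounds=50):
    """Self-stabilising gradient over a graph.

    neigh: dict device -> list of neighbour devices
    sources: set of devices holding value 0
    dist: dict (device, neighbour) -> edge length
    """
    g = {d: (0.0 if d in sources else float('inf')) for d in neigh}
    for _ in range(rounds):
        new = {}
        for d in neigh:
            if d in sources:
                new[d] = 0.0
            else:
                # local rule: min over neighbour estimates plus edge length
                new[d] = min([g[n] + dist[(d, n)] for n in neigh[d]]
                             + [float('inf')])
        g = new
    return g
```

After enough rounds the local rule converges to the global shortest-path distance field, which is the kind of global guarantee the thesis seeks to engineer.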

Relevance:

30.00%

Publisher:

Abstract:

In this work, simulations of liquids were performed at the molecular level using different multiscale techniques. These allow an effective description of the liquid that requires less computer time and can thus capture phenomena on longer time and length scales.

A key element is a simplified ("coarse-grained") model, obtained from simulations of the detailed model by a systematic procedure in which selected properties of the detailed model (e.g. pair correlation function, pressure) are reproduced.

Algorithms were investigated that allow a simultaneous coupling of the detailed and the simplified model ("Adaptive Resolution Scheme", AdResS). Here, the detailed model is used in a predefined subvolume of the liquid (e.g. near a surface), while the rest is described by the simplified model.

For this purpose, a method ("thermodynamic force") was developed to enable the coupling even when the models are in different thermodynamic states. In addition, a novel coupling algorithm (H-AdResS) was described, which expresses the coupling through a Hamiltonian. In this algorithm, a correction analogous to the thermodynamic force is possible at lower computational cost.

As an application of these fundamental techniques, path-integral molecular dynamics (MD) simulations of water were investigated. This method makes it possible to include quantum effects of the nuclei (delocalization, zero-point energy) in the simulation. First, a multiscale technique ("force matching") was used to extract an effective interaction from a detailed simulation based on density functional theory. The path-integral MD simulation improves the description of the intramolecular structure in comparison with experimental data. The model is also suitable for simultaneous coupling within one simulation, in which a water molecule (described by 48 point particles in the path-integral MD model) is coupled to a simplified model (a single point particle). In this way, a water-vacuum interface could be simulated in which only the surface is described by the path-integral model and the rest by the simplified model.
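The AdResS coupling named above interpolates atomistic and coarse-grained forces through a position-dependent resolution weight. A minimal sketch of the standard force-interpolation form (1D slab geometry, scalar forces; zone widths and the cos^2 ramp shape are the commonly used choice, not necessarily this thesis's exact setup):

```python
import math

def adress_weight(x, x0, d_at, d_hy):
    """Resolution weight: 1 inside the atomistic zone of half-width d_at
    around x0, 0 in the coarse-grained zone, cos^2 ramp across the
    hybrid region of width d_hy."""
    r = abs(x - x0)
    if r <= d_at:
        return 1.0
    if r >= d_at + d_hy:
        return 0.0
    return math.cos(math.pi * (r - d_at) / (2.0 * d_hy)) ** 2

def adress_force(f_at, f_cg, w_i, w_j):
    """Pairwise AdResS force interpolation:
    F = w_i*w_j * F_atomistic + (1 - w_i*w_j) * F_coarse-grained."""
    lam = w_i * w_j
    return lam * f_at + (1.0 - lam) * f_cg
```

Fully atomistic pairs feel only the detailed force, fully coarse-grained pairs only the effective one, and pairs in the hybrid region feel a smooth blend.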

Relevance:

30.00%

Publisher:

Abstract:

This thesis deals with the development of a novel simulation technique for macromolecules in electrolyte solutions, with the aim of improving performance over current molecular-dynamics-based simulation methods. In solutions containing charged macromolecules and salt ions, it is the complex interplay of electrostatic interactions and hydrodynamics that determines the equilibrium and non-equilibrium behavior. However, the treatment of the solvent and dissolved ions makes up the major part of the computational effort. Thus, an efficient modeling of both components is essential for the performance of a method. With the novel method we approach the solvent in a coarse-grained fashion and replace the explicit-ion description by a dynamic mean-field treatment. Hence we combine particle- and field-based descriptions in a hybrid method and thereby effectively solve the electrokinetic equations. The developed algorithm is tested extensively in terms of accuracy and performance, and suitable parameter sets are determined. As a first application, we study charged polymer solutions (polyelectrolytes) in shear flow, with a focus on their viscoelastic properties. Here we also include semidilute solutions, which are computationally demanding. Second, we study the electro-osmotic flow on superhydrophobic surfaces, where we perform a detailed comparison to theoretical predictions.
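The "dynamic mean-field treatment" of the ions amounts to evolving ion concentration fields under diffusion and electric drift rather than tracking explicit particles. A minimal 1D sketch of one explicit Euler update of the Nernst-Planck equation (dimensionless units, periodic boundaries; a toy discretization, not the thesis's actual lattice scheme):

```python
def nernst_planck_step(c, efield, dx, dt, diff=1.0, z=1.0):
    """One explicit step of dc/dt = d/dx [ D (dc/dx - z c E) ] on a
    periodic 1D grid; c and efield are lists of node values."""
    n = len(c)
    flux = []
    for i in range(n):
        j = (i + 1) % n
        grad = (c[j] - c[i]) / dx          # diffusive part
        cmid = 0.5 * (c[i] + c[j])          # concentration at cell face
        emid = 0.5 * (efield[i] + efield[j])
        flux.append(-diff * (grad - z * cmid * emid))
    # conservative update: divergence of face fluxes
    return [c[i] - dt / dx * (flux[i] - flux[i - 1]) for i in range(n)]
```

Because the update is written in conservative (flux) form, the total amount of each ion species is preserved exactly, a property the real electrokinetic solver also needs.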

Relevance:

30.00%

Publisher:

Abstract:

Biorelevant media have been developed to mimic the conditions in the gastrointestinal tract before and after a meal. With FaSSIF and FeSSIF, media were introduced that not only reflect the pH and buffer capacity of the small intestine but also contain lipids and physiological surfactant species. These media (FaSSIF-V2 and FaSSIFmod6.5) have been continually refined over the years for bioavailability studies in drug development. Nevertheless, the commercially available media are still unable to simulate the real physiological conditions: in their current composition they do not contain all components that occur naturally in the duodenum. Moreover, only a 1:5 dilution of FeSSIF to FaSSIF is assumed, which only roughly approximates the individual water intake at the time of drug administration, although this can vary from patient to patient.

The aim of this dissertation was to improve the prediction of the dissolution and absorption of lipophilic drugs by simulating the conditions in the second part of the duodenum with new biorelevant media, as well as under the influence of additional surfactants as drug carriers.

To examine the effect of dilution rate and residence time in the small intestine, the evolution of the nanoparticles in the gastrointestinal fluid FaSSIFmod6.5 was studied at various time points and water contents. For this purpose, kinetic studies on model media of different concentrations were carried out after a dilution jump. The model corresponds to the mixing of bile with the intestinal contents at variable volume. The results show that the type and size of the nanoparticles depend strongly on dilution and exposure time.

Human intestinal fluid contains cholesterol, which is missing in all earlier model media. Therefore, biocompatible and physiological model fluids, FaSSIF-C, were developed. The cholesterol content of FaSSIF-7C corresponds to the bile of a healthy woman, FaSSIF-10C to that of a healthy man, and FaSSIF-13C to that found in some disease states. Investigation of the intestinal particle structure with dynamic light scattering (DLS) and small-angle neutron scattering (SANS) showed that the size of the vesicles decreased with increasing cholesterol concentration. Very high cholesterol concentrations additionally produced very large particles, presumably consisting of cholesterol-rich "disks". The solubilities of several BCS class II drugs (fenofibrate, griseofulvin, carbamazepine, danazol) in these new media showed that solubility correlated with the cholesterol content in different ways and that this effect was drug-specific.

Furthermore, the effect of several surfactants on the colloidal structure and the solubility of fenofibrate in FaSSIFmod6.5 and FaSSIF-7C was examined. Structure and solubility depended on surfactant type and concentration. In the case of FaSSIFmod6.5, the results showed a three-way branching of the dissolution pathways: at intermediate surfactant concentrations, a solubility gap of the drug was observed between the destruction of the bile liposomes and the formation of surfactant-rich micelles. In FaSSIF-7C, surfactants at higher concentrations destroyed the liposome structure despite the general stabilization of the membranes by cholesterol.

The results presented in this work show that the presence of cholesterol, as a hitherto missing component of human intestinal fluid, is important in biorelevant media and can help to better predict the in vivo behavior of poorly soluble drugs in the body. The degree of dilution influences the nanoparticle structure, and surfactants influence the solubility of drugs in biorelevant media; this effect depends on both the concentration and the type of the surfactant.

Relevance:

30.00%

Publisher:

Abstract:

Globalization has increased the pressure on organizations and companies to operate in the most efficient and economic way. This tendency leads companies to concentrate more and more on their core businesses and to outsource less profitable departments and services in order to reduce costs. In contrast to earlier times, companies are highly specialized and have a low real net output ratio. To provide consumers with the right products, these companies have to collaborate with other suppliers and form large supply chains. A drawback of large supply chains is high stocks and stockholding costs. This fact has led to the rapid spread of just-in-time logistics concepts aimed at minimizing stock while maintaining high availability of products. These competing goals call for high availability of the production systems, so that an incoming order can be processed immediately. Besides design aspects and the quality of the production system, maintenance has a strong impact on production system availability. In the last decades, there have been many attempts to create maintenance models for availability optimization. Most of them concentrated on the availability aspect only, without incorporating further aspects such as logistics and the profitability of the overall system. However, a production system operator's main intention is to optimize the profitability of the production system, not its availability. Thus, classic models, limited to representing and optimizing maintenance strategies in the light of availability, fail. A novel approach, incorporating all financially relevant processes of and around a production system, is needed. The proposed model is subdivided into three parts: a maintenance module, a production module and a connection module. This subdivision provides easy maintainability and simple extensibility. Within those modules, all aspects of the production process are modeled. The main part of the work lies in the extended maintenance and failure module, which offers a representation of different maintenance strategies but also incorporates the effects of over-maintaining and failed maintenance (maintenance-induced failures). Order release and seizing of the production system are modeled in the production part. Due to limitations in computational power, it was not possible to run the simulation and the optimization with the fully developed production model. Thus, the production model was reduced to a black box with a lower degree of detail.
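To make the profit-versus-availability trade-off concrete, here is a deliberately simple time-stepped sketch of a production system with preventive maintenance, random failures and a profit metric. Every cost, probability and wear-out rule below is an invented placeholder, not the thesis's model, which is far more detailed.

```python
import random

def simulate_profit(steps, pm_interval, fail_p=0.02,
                    revenue=100.0, pm_cost=50.0, repair_cost=500.0,
                    repair_time=5, seed=0):
    """Profit of a single machine over `steps` time units.

    Preventive maintenance (PM) every pm_interval steps resets the
    machine's age (and thus its wear-dependent failure risk); a
    failure costs repair_cost and stops production for repair_time
    steps; every productive step earns `revenue`."""
    rng = random.Random(seed)
    profit, down, age = 0.0, 0, 0
    for t in range(steps):
        if down:                      # machine under repair
            down -= 1
            continue
        if pm_interval and t > 0 and t % pm_interval == 0:
            profit -= pm_cost         # scheduled PM
            age = 0
        if rng.random() < fail_p * (1 + 0.1 * age):  # wear-out failure
            profit -= repair_cost
            down = repair_time
            age = 0
        else:
            profit += revenue
            age += 1
    return profit
```

Sweeping `pm_interval` and comparing the resulting profits is the kind of optimization the thesis performs, except over a full maintenance/production/connection model rather than this toy.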

Relevance:

30.00%

Publisher:

Abstract:

Sub-grid scale (SGS) models are required in large-eddy simulations (LES) in order to model the influence of the unresolved small scales (the flow at the smallest scales of turbulence) on the resolved scales. In the following work, two SGS models are presented and analyzed in depth in terms of accuracy through several LESs with different spatial resolutions, i.e. grid spacings. The first part of this thesis focuses on the basic theory of turbulence, the governing equations of fluid dynamics and their adaptation to LES. Furthermore, two important SGS models are presented: the dynamic eddy-viscosity model (DEVM), developed by Germano et al. (1991), and the explicit algebraic SGS model (EASSM) by Marstorp et al. (2009). In addition, some details about the implementation of the EASSM in a pseudo-spectral Navier-Stokes code (Chevalier et al., 2007) are presented. The performance of the two aforementioned models is investigated in the following chapters by means of LES of a channel flow, at friction Reynolds numbers from $Re_\tau=590$ up to $Re_\tau=5200$, with relatively coarse resolutions. Data from each simulation are compared to baseline DNS data. Results show that, in contrast to the DEVM, the EASSM has promising potential for flow predictions at high friction Reynolds numbers: the higher the friction Reynolds number, the better the EASSM behaves and the worse the DEVM performs. The better performance of the EASSM is attributed to its ability to capture flow anisotropy at the small scales through a correct formulation of the SGS stresses. Moreover, a considerable reduction in the required computational resources can be achieved using the EASSM compared to the DEVM. Therefore, the EASSM combines accuracy and computational efficiency, implying that it has clear potential for industrial CFD usage.
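Neither model's implementation is reproduced here; for orientation, the simplest member of the eddy-viscosity family (the static Smagorinsky model, the ancestor of the DEVM, which replaces the fixed coefficient by a dynamically computed one) closes the SGS stress as tau_ij = -2 nu_t S_ij with nu_t = (Cs Delta)^2 |S|:

```python
import math

def eddy_viscosity(s, delta, cs=0.17):
    """Static Smagorinsky eddy viscosity nu_t = (Cs*Delta)^2 * |S|,
    with |S| = sqrt(2 S_ij S_ij); s is the 3x3 resolved strain-rate
    tensor and delta the filter width."""
    s_norm = math.sqrt(2.0 * sum(s[i][j] ** 2
                                 for i in range(3) for j in range(3)))
    return (cs * delta) ** 2 * s_norm

def sgs_stress(s, delta, cs=0.17):
    """Deviatoric SGS stress tau_ij = -2 nu_t S_ij (eddy-viscosity closure)."""
    nu_t = eddy_viscosity(s, delta, cs)
    return [[-2.0 * nu_t * s[i][j] for j in range(3)] for i in range(3)]
```

The EASSM goes beyond this isotropic closure by expressing the SGS stress anisotropy algebraically, which is what the thesis credits for its better behaviour at high friction Reynolds numbers.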

Relevance:

30.00%

Publisher:

Abstract:

Our generation of computational scientists is living in an exciting time: not only do we get to pioneer important algorithms and computations, we also get to set standards on how computational research should be conducted and published. From Euclid's reasoning and Galileo's experiments, it took hundreds of years for the theoretical and experimental branches of science to develop standards for publication and peer review. Computational science, rightly regarded as the third branch, can walk the same road much faster. The success and credibility of science are anchored in the willingness of scientists to expose their ideas and results to independent testing and replication by other scientists. This requires the complete and open exchange of data, procedures and materials. The idea of "replication by other scientists" in reference to computations is more commonly known as "reproducible research". In this context, the journal "EAI Endorsed Transactions on Performance & Modeling, Simulation, Experimentation and Complex Systems" had the exciting and original idea of letting scientists submit, together with the article, the computational materials (software, data, etc.) that were used to produce its contents. The goal of this procedure is to allow the scientific community to verify the content of the paper by reproducing it on the platform independently of the chosen OS, to confirm or invalidate it, and especially to allow its reuse to produce new results. This procedure is of little help, however, without a minimum of methodological support: raw data sets and software are difficult to exploit without the logic that guided their use or production. This led us to think that, in addition to the data sets and the software, an additional element must be provided: the workflow that ties them all together.

Relevance:

30.00%

Publisher:

Abstract:

In recent years, systems engineering has become one of the major research domains. The complexity of systems has increased constantly, and nowadays Cyber-Physical Systems (CPS) are a category of particular interest: these are systems composed of a cyber part (computer-based algorithms) that monitors and controls physical processes. Their development and simulation are both complex due to the importance of the interaction between the cyber and the physical entities: there are many models, written in different languages, that need to exchange information with each other. Normally, an orchestrator takes care of simulating the models and exchanging information; such an orchestrator is developed manually, which is tedious and time-consuming work. Our proposal is to generate the orchestrator automatically through Co-Modeling, i.e. by modeling the coordination itself. Before achieving this ultimate goal, it is important to understand the mechanisms and de facto standards that could be used in a co-modeling framework. I therefore studied a technology employed for co-simulation in industry: FMI. In order to better understand the FMI standard, I implemented an automatic export, in the FMI format, of models created in an existing tool for discrete modeling: TimeSquare. I also developed a simple physical model in the existing open-source OpenModelica tool. Finally, I studied how an orchestrator works by developing a simple one; this will be useful in the future for generating orchestrators automatically.
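The orchestrator's job can be pictured as a fixed-step master loop: at each communication point, exchange outputs between the coupled models, then ask each one to advance by one step. This sketch uses a hypothetical stand-in class (`ToyFMU`) rather than the real FMI C API (where the step call is `fmi2DoStep`); it illustrates the loop structure only, not an FMI-compliant master.

```python
class ToyFMU:
    """Stand-in for an FMI co-simulation slave (hypothetical API)."""
    def __init__(self, state=0.0):
        self.state = state
        self.inputs = 0.0
    def set_input(self, u):
        self.inputs = u
    def get_output(self):
        return self.state
    def do_step(self, t, h):          # mimics fmi2DoStep
        # toy first-order dynamics: state relaxes toward the input
        self.state += h * (self.inputs - self.state)

def orchestrate(fmus, couplings, steps, h):
    """Fixed-step master: exchange outputs, then advance every slave.

    fmus: dict name -> ToyFMU; couplings: list of (dst, src) pairs."""
    for k in range(steps):
        for dst, src in couplings:    # data exchange phase
            fmus[dst].set_input(fmus[src].get_output())
        for f in fmus.values():       # stepping phase
            f.do_step(k * h, h)
    return {name: f.get_output() for name, f in fmus.items()}
```

Generating such a loop (plus the connection table `couplings`) automatically from a coordination model is precisely what the proposed Co-Modeling approach aims at.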

Relevance:

30.00%

Publisher:

Abstract:

Molecular dynamics simulations have been used to explore the conformational flexibility of a PNA·DNA·PNA triple helix in aqueous solution. Three 1.05 ns trajectories starting from different but reasonable conformations have been generated and analyzed in detail. All three trajectories converge within about 300 ps to produce stable and very similar conformational ensembles, which resemble the crystal structure conformation in many details. However, in contrast to the crystal structure, there is a tendency for the direct hydrogen bonds observed between the amide hydrogens of the Hoogsteen-binding PNA strand and the phosphate oxygens of the DNA strand to be replaced by water-mediated hydrogen bonds, which also involve pyrimidine O2 atoms. This structural transition does not appear to weaken the triplex structure but alters groove widths and so may relate to the potential for recognition of such structures by other ligands (small molecules or proteins). Energetic analysis leads us to conclude that the reason the hybrid PNA/DNA triplex has quite different helical characteristics from the all-DNA triplex is not that the additional flexibility imparted by the replacement of sugar-phosphate by PNA backbones allows motions to improve base-stacking, but rather that base-stacking interactions are very similar in both types of triplex and the driving force comes from weak but definite conformational preferences of the PNA strands.
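Conclusions about direct versus water-mediated hydrogen bonds come from analyzing the trajectory frame by frame. A minimal sketch of the kind of distance-criterion occupancy calculation involved (atom labels, coordinates and the 3.5 Å cutoff are illustrative; real analyses normally also apply a donor-hydrogen-acceptor angle criterion):

```python
def hbond_occupancy(traj, donor, acceptor, cutoff=3.5):
    """Fraction of frames in which the donor-acceptor distance is
    below `cutoff` (in the same length unit as the coordinates).

    traj: list of frames; each frame maps an atom label to (x, y, z)."""
    def dist(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
    hits = sum(1 for f in traj if dist(f[donor], f[acceptor]) < cutoff)
    return hits / len(traj)
```

Comparing such occupancies for amide-H...phosphate-O pairs against amide-H...water...O2 bridges is how a direct-to-water-mediated shift like the one reported above would be quantified.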

Relevance:

30.00%

Publisher:

Abstract:

Breast cancer is the most common cancer among women, and tamoxifen is the preferred drug for estrogen receptor-positive breast cancer treatment. Many of these cancers are intrinsically resistant to tamoxifen or acquire resistance during treatment. Consequently, there is an ongoing need for breast cancer drugs that have different molecular targets. Previous work has shown that 8-mer and cyclic 9-mer peptides inhibit breast cancer in mouse and rat models, interacting with an unsolved receptor, while peptides smaller than eight amino acids did not. We show that the use of replica exchange molecular dynamics predicts the structure and dynamics of active peptides, leading to the discovery of smaller peptides with full biological activity. Simulations identified smaller peptide analogues with the same conserved reverse turn demonstrated in the larger peptides. These analogues were synthesized and shown to inhibit estrogen-dependent cell growth in a mouse uterine growth assay, a test showing reliable correlation with human breast cancer inhibition.
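Replica exchange molecular dynamics runs copies of the system at several temperatures and periodically attempts to swap configurations between neighbouring temperatures, accepting each swap with a Metropolis criterion derived from detailed balance. A minimal sketch of that acceptance test (reduced units, kB = 1; this is the generic REMD rule, not the study's specific simulation protocol):

```python
import math
import random

def swap_accepted(e_i, e_j, t_i, t_j, rng=random.random, kb=1.0):
    """Metropolis test for swapping replicas i and j:
    accept with probability min(1, exp[(1/kT_i - 1/kT_j)(E_i - E_j)])."""
    delta = (1.0 / (kb * t_i) - 1.0 / (kb * t_j)) * (e_i - e_j)
    return delta >= 0.0 or rng() < math.exp(delta)
```

The rule always accepts when the colder replica currently holds the higher energy, which is exactly how high-temperature replicas help a peptide escape local minima and sample the reverse-turn conformations discussed above.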

Relevance:

30.00%

Publisher:

Abstract:

Background: Breast cancer is the most common cancer among women. Tamoxifen is the preferred drug for estrogen receptor-positive breast cancer treatment, yet many of these cancers are intrinsically resistant to tamoxifen or acquire resistance during treatment. Therefore, scientists are searching for breast cancer drugs that have different molecular targets. Methodology: Recently, a computational approach was used to successfully design peptides that are new lead compounds against breast cancer. We used replica exchange molecular dynamics to predict the structure and dynamics of active peptides, leading to the discovery of smaller bioactive peptides. Conclusions: These analogs inhibit estrogen-dependent cell growth in a mouse uterine growth assay, a test showing reliable correlation with human breast cancer inhibition. We outline the computational methods that were tried and used along with the experimental information that led to the successful completion of this research.