942 results for State Extension Problem


Relevance:

30.00%

Publisher:

Abstract:

Setup operations are significant in some production environments. It is mandatory that their production plans account for features such as setup state conservation across periods through setup carryover and crossover. Modelling setup crossover allows more flexible decisions and is essential for problems with long setup times. This paper proposes two models for the capacitated lot-sizing problem with backlogging and setup carryover and crossover. The first is in line with other models from the literature, whereas the second uses a disaggregated setup variable that tracks the starting and completion times of each setup operation. This innovative approach permits a more compact formulation. Computational results show that the proposed models outperform other state-of-the-art formulations.
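The trade-off that lot-sizing models capture, between paying a setup cost and carrying inventory, can be illustrated with the classic uncapacitated single-item case (Wagner-Whitin). This is only a simplified illustration of the underlying trade-off, far simpler than the paper's capacitated models with carryover and crossover:

```python
def wagner_whitin(demand, setup_cost, unit_hold_cost):
    """Minimal-cost lot sizing for a single item: no capacity, no backlog.

    demand[k] is the demand of period k+1; each period with production pays
    setup_cost; carrying one unit for one period costs unit_hold_cost.
    """
    T = len(demand)
    best = [0.0] + [float("inf")] * T   # best[t] = optimal cost for periods 1..t
    for t in range(1, T + 1):
        for j in range(1, t + 1):       # last setup occurs in period j
            # holding cost when the demand of periods j..t is produced in period j
            hold = sum(unit_hold_cost * demand[k - 1] * (k - j)
                       for k in range(j, t + 1))
            best[t] = min(best[t], best[j - 1] + setup_cost + hold)
    return best[T]
```

For instance, with demands (20, 30, 40), setup cost 50 and unit holding cost 1, the optimum batches the first two periods together and sets up again for the third.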

Relevance:

30.00%

Publisher:

Abstract:

The heating of the solar corona has been investigated for four decades, and several mechanisms able to produce heating have been proposed. It has until now not been possible to produce quantitative estimates that would establish any of these heating mechanisms as the most important in the solar corona. In order to investigate which heating mechanism is the most important, a more detailed approach is needed. In this thesis, the heating problem is approached "ab initio", using well-observed facts and including realistic physics in a 3D magneto-hydrodynamic simulation of a small part of the solar atmosphere. The "engine" of the heating mechanism is the solar photospheric velocity field, which braids the magnetic field into a configuration where energy has to be dissipated. The initial magnetic field is taken from an observation of a typical magnetic active region, scaled down to fit inside the computational domain. The driving velocity field is generated by an algorithm that reproduces the statistical and geometrical fingerprints of solar granulation. Using a standard model atmosphere as the thermal initial condition, the simulation goes through a short startup phase, in which the initial thermal stratification is quickly forgotten, after which it settles into statistical equilibrium. In this state, the magnetic field is able to dissipate the same amount of energy as is estimated to be lost through radiation, the main energy-loss mechanism in the solar corona. The simulation produces heating that is intermittent on the smallest resolved scales, together with hot loops similar to those observed through narrow-band filters in the ultraviolet. Other observed characteristics of the heating are reproduced, as well as a coronal temperature of roughly one million K. Because of the ab initio approach, the amount of heating produced in these simulations represents a lower limit to coronal heating, and the conclusion is that such heating of the corona is unavoidable.

Relevance:

30.00%

Publisher:

Abstract:

The Peer-to-Peer (P2P) network paradigm is drawing the attention of both end users and researchers for its features. P2P networks shift from the classic client-server approach to a high level of decentralization where there is no central control and all nodes should be able not only to request services, but to provide them to other peers as well. While on one hand such a high level of decentralization may lead to interesting properties like scalability and fault tolerance, on the other hand it implies many new problems to deal with. A key feature of many P2P systems is openness, meaning that everybody is potentially able to join a network with no need for subscription or payment. The combination of openness and lack of central control makes it feasible for a user to free-ride, that is, to increase their own benefit by using services without allocating resources to satisfy other peers' requests. One of the main goals when designing a P2P system is therefore to achieve cooperation between users. Given that P2P systems are based on simple local interactions of many peers having partial knowledge of the whole system, an interesting way to achieve desired properties at system scale is to obtain them as emergent properties of the many interactions occurring at the local node level. Two methods are typically used to address the problem of cooperation in P2P networks: 1) engineering emergent properties when designing the protocol; 2) studying the system as a game and applying Game Theory techniques, especially to find Nash equilibria and to reach them, making the system stable against possible deviant behaviours. In this work we present an evolutionary framework to enforce cooperative behaviour in P2P networks that is an alternative to both methods mentioned above.
Our approach is based on an evolutionary algorithm inspired by computational sociology and evolutionary game theory, in which each peer periodically tries to copy another peer that is performing better. The proposed algorithms, called SLAC and SLACER, draw inspiration from tag systems originating in computational sociology; the main idea behind them is to have low-performance nodes copy high-performance ones. The algorithm is run locally by every node and leads to an evolution of the network both in its topology and in the nodes' strategies. Initial tests with a simple Prisoner's Dilemma application show how SLAC is able to bring the network to a state of high cooperation independently of the initial network conditions. Interesting results are obtained when studying the effect of cheating nodes on the SLAC algorithm: in some cases, selfish nodes rationally exploiting the system for their own benefit can actually improve system performance from the point of view of cooperation formation. The final step is to apply our results to more realistic scenarios. We put our efforts into studying and improving the BitTorrent protocol. BitTorrent was chosen not only for its popularity but because it has many points in common with the SLAC and SLACER algorithms, ranging from the game-theoretical inspiration (a tit-for-tat-like mechanism) to the topology of its swarms. We found fairness, understood as the ratio between uploaded and downloaded data, to be a weakness of the original BitTorrent protocol, and we drew on the cooperation formation and maintenance mechanisms derived from the development and analysis of SLAC and SLACER to improve fairness and tackle free-riding and cheating in BitTorrent. We produced an extension of BitTorrent called BitFair, which has been evaluated through simulation and has shown its ability to enforce fairness and to tackle free-riding and cheating nodes.
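The copy-the-better-peer idea at the heart of SLAC can be sketched as follows. This is a simplified toy, with utility taken as the average Prisoner's Dilemma payoff against neighbours and a single pairwise comparison per step, not the exact SLAC protocol:

```python
import random

# row player's / column player's payoffs for Cooperate / Defect
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def utility(node, strategy, neighbors):
    """Average Prisoner's Dilemma payoff of `node` against its neighbors."""
    if not neighbors[node]:
        return 0.0
    return sum(PAYOFF[(strategy[node], strategy[m])][0]
               for m in neighbors[node]) / len(neighbors[node])

def slac_step(node, strategy, neighbors, mutation=0.01, rng=random):
    """One SLAC-style update: compare with a random peer, copy it if it does better."""
    peer = rng.choice([n for n in strategy if n != node])
    if utility(peer, strategy, neighbors) > utility(node, strategy, neighbors):
        strategy[node] = strategy[peer]                     # copy the strategy
        neighbors[node] = set(neighbors[peer]) | {peer}     # move next to the peer
        neighbors[node].discard(node)
    if rng.random() < mutation:                             # occasional mutation
        strategy[node] = rng.choice("CD")
```

Run repeatedly over all nodes, this local rule makes successful (and, on appropriate topologies, cooperative) strategies spread through the population.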

Relevance:

30.00%

Publisher:

Abstract:

Maintaining the postharvest quality of whole and fresh-cut fruit during storage and distribution is the major challenge facing the fruit industry, which adopts a wide range of technologies to extend shelf-life. Many factors can lead to loss of quality in fresh produce, hence the common description of these products as 'perishable'. Normal physiological processes such as transpiration and respiration ultimately lead to water loss and senescence of the product. Fruits and vegetables are living commodities, and their respiration rate is of key importance to the maintenance of quality: it is commonly observed that the greater the respiration rate of a product, the shorter its shelf-life. The principal problem for the fresh-cut fruit industry is the relatively shorter shelf-life of minimally processed fruit (MPF) compared to intact product. This is closely connected with the higher ethylene production of fruit tissue stimulated during fresh-cut processing (peeling, cutting, dipping). 1-Methylcyclopropene (1-MCP) is an inhibitor of ethylene action, and several studies have shown its effectiveness in inhibiting ripening and senescence in intact fruit and, consequently, in extending shelf-life. More recently, 1-MCP treatment has also been tested for shelf-life extension of MPF, but discordant results have been obtained. Considering that in some countries 1-MCP is already a commercial product registered for use on a number of horticultural products, the main aim of the present study was to enhance our understanding of the effects of 1-MCP treatment on the quality maintenance of whole and fresh-cut climacteric and non-climacteric fruit (apple, kiwifruit and pineapple).
Concerning whole fruit, we investigated the effects of a semi-commercial postharvest 1-MCP treatment on the quality of Pink Lady apples as a function of fruit ripening stage, 1-MCP dose and storage time, and also in combination with controlled-atmosphere (CA) storage, in order to better understand the relationship among these parameters and whether the 1-MCP treatment can be optimized to meet market and consumer needs and thus deliver excellent fruit to the market. To this end, an incomplete three-level, three-factor design was adopted. During storage, several quality parameters were monitored: firmness, ripening index, and ethylene and carbon dioxide production; a sensory evaluation was also performed after 6 months of storage. In this study, the highest retention of firmness at the end of storage was achieved by applying the greatest 1-MCP concentration to fruit at the lowest maturity stage, meaning that under these semi-commercial conditions fruit softening can be considered completely blocked. 1-MCP was also able to delay ethylene and CO2 production and the maturity parameters (soluble solids content and total acidity). Only in some cases did 1-MCP generate a synergistic effect with CA storage. The sensory analyses indicated that the 1-MCP treatment did not affect sweetness and whole-fruit flavour, while slightly decreasing cut-fruit flavour; on the other hand, treated apples were more sour, crisp, firm and juicy. The effects of some treatments (dipping and MAP) on nutrient stability were also investigated, showing that in this case study the adopted treatments did not have drastic effects on the antioxidant compounds; on the contrary, dipping may enhance the total antioxidant activity through the accumulation of ascorbic acid on the apple cut surface.
Results concerning the effects of 1-MCP in combination with MAP on the behaviour of the quality parameters of kiwifruit were not always consistent and clear: in terms of colour maintenance, 1-MCP seemed to have a synergistic effect with N2O MAP; as far as the ripening index is concerned, it had a preservative effect, but only for samples packed in air.

Relevance:

30.00%

Publisher:

Abstract:

The application of Concurrency Theory to Systems Biology is in its earliest stage of progress. The metaphor of cells as computing systems by Regev and Shapiro opened the way to the employment of concurrent languages for the modelling of biological systems. The peculiar characteristics of such systems led to the design of many bio-inspired formalisms which achieve higher faithfulness and specificity. In this thesis we present pi@, an extremely simple and conservative extension of the pi-calculus representing a keystone in this respect, thanks to its expressive capabilities. The pi@ calculus is obtained by adding polyadic synchronisation and priority to the pi-calculus, in order to achieve compartment semantics and atomicity of complex operations, respectively. In its direct application to biological modelling, the stochastic variant of the calculus, Spi@, is shown to be able to model consistently several phenomena, such as the formation of molecular complexes, hierarchical subdivision of the system into compartments, inter-compartment reactions, and dynamic reorganisation of the compartment structure consistent with volume variation. The pivotal role of pi@ is evidenced by its capability of encoding several bio-inspired formalisms in a compositional way, so that it represents the optimal core of a framework for the analysis and implementation of bio-inspired languages. In this respect, the encodings of BioAmbients, Brane Calculi and a variant of P Systems into pi@ are formalised. The conciseness of their translations into pi@ allows their indirect comparison by means of their encodings, and furthermore provides a ready-to-run implementation of minimal effort whose correctness is guaranteed by the correctness of the respective encoding functions. Further important results of general validity are stated on the expressive power of priority.
Several impossibility results are described, which clearly establish the superior expressiveness of prioritised languages and the problems arising when attempting to provide their parallel implementation. To this aim, a new setting in distributed computing (the last man standing problem) is singled out and exploited to prove the impossibility of providing a purely parallel implementation of priority by means of point-to-point or broadcast communication.

Relevance:

30.00%

Publisher:

Abstract:

In the past decade, the demand for structural health monitoring expertise has increased dramatically in the United States. The aging issues that most transportation structures are experiencing can seriously jeopardize the economic system of a region as well as of a country. At the same time, the monitoring of structures is a central topic of discussion in Europe, where the preservation of historical buildings has been addressed over the last four centuries. More recently, various concerns have arisen about the security performance of civil structures after tragic events such as the 9/11 attacks or the 2011 Japan earthquake: engineers look for designs able to resist exceptional loads due to earthquakes, hurricanes and terrorist attacks. After events of such a kind, the assessment of the remaining life of the structure is at least as important as the initial performance design. Consequently, it is very clear that the introduction of reliable and accessible damage assessment techniques is crucial for the localization of issues and for a correct and immediate rehabilitation. System Identification is a branch of the more general Control Theory. In Civil Engineering, this field covers the techniques needed to recover mechanical characteristics, such as stiffness or mass, from the signals captured by sensors. The objective of Dynamic Structural Identification (DSI) is to determine, from experimental measurements, the fundamental modal parameters of a generic structure in order to characterize its dynamic behavior via a mathematical model. The knowledge of these parameters feeds the Model Updating procedure, which corrects theoretical models through experimental validation. The main aim of this technique is to minimize the differences between the theoretical model results and in situ measurements of dynamic data.
The updated model therefore becomes a very effective control tool when it comes to rehabilitation of structures or damage assessment. Instrumenting a whole structure is sometimes unfeasible, because of the high cost involved or because it is not physically possible to reach every point of the structure. Numerous scholars have therefore been trying to address this problem, and in general two main methods are involved. In the first case, given the limited number of sensors, it is possible to gather time histories for only some locations, then move the instruments to another location and repeat the procedure. Otherwise, if the number of sensors is sufficient and the structure does not present a complicated geometry, it is usually sufficient to detect only the principal first modes. These two problems are well presented in the works of Balsamo [1], for the application to a simple system, and Jun [2], for the analysis of a system with a limited number of sensors. Once the system identification has been carried out, it is possible to access the actual system characteristics. A frequent practice is to create an updated FEM model and assess whether or not the structure fulfils the required functions. The objective of this work is to present a general methodology to analyze large structures using a limited amount of instrumentation while, at the same time, obtaining the most information about the identified structure without recalling methodologies of difficult interpretation. A general framework for the state-space identification procedure via the OKID/ERA algorithm is developed and implemented in Matlab. Then, some simple examples are proposed to highlight the principal characteristics and advantages of this methodology. A new algebraic manipulation for a prolific use of substructuring results is developed and implemented.
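The ERA step of such a pipeline can be sketched in a few lines: a Hankel matrix of Markov (impulse-response) parameters is factorized by SVD to recover a minimal state-space realization (A, B, C). The sketch below assumes scalar inputs and outputs for brevity; the thesis's Matlab framework handles the general multi-output case:

```python
import numpy as np

def era(markov, n_states):
    """Eigensystem Realization Algorithm for scalar Markov parameters.

    markov[k] is the impulse response h(k+1) = C A^k B.
    Returns a realization (A, B, C) of order n_states.
    """
    m = (len(markov) - 1) // 2
    H0 = np.array([[markov[i + j] for j in range(m)] for i in range(m)])      # Hankel
    H1 = np.array([[markov[i + j + 1] for j in range(m)] for i in range(m)])  # shifted
    U, s, Vt = np.linalg.svd(H0)
    U, s, Vt = U[:, :n_states], s[:n_states], Vt[:n_states, :]  # truncate to model order
    sqrt_s = np.diag(np.sqrt(s))
    inv_sqrt_s = np.diag(1.0 / np.sqrt(s))
    A = inv_sqrt_s @ U.T @ H1 @ Vt.T @ inv_sqrt_s
    B = (sqrt_s @ Vt)[:, :1]
    C = (U @ sqrt_s)[:1, :]
    return A, B, C
```

For the impulse response of a first-order system with pole 0.5 (1, 0.5, 0.25, ...), a one-state realization recovers A = 0.5 and CB = h(1) = 1.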

Relevance:

30.00%

Publisher:

Abstract:

This thesis is focused on the development of heteronuclear correlation methods in solid-state NMR spectroscopy, where the spatial dependence of the dipolar coupling is exploited to obtain structural and dynamical information in solids. Quantitative results on dipolar coupling constants are extracted by means of spinning sideband analysis in the indirect dimension of the two-dimensional experiments. The principles of sideband analysis were established and are currently widely used in the group of Prof. Spiess for the special case of homonuclear 1H double-quantum spectroscopy. The generalization of these principles to the heteronuclear case is presented, with special emphasis on naturally abundant 13C-1H systems. For proton spectroscopy in the solid state, line-narrowing is of particular importance, and is here achieved by very-fast sample rotation at the magic angle (MAS), with frequencies up to 35 kHz. Therefore, the heteronuclear dipolar couplings are suppressed and have to be recoupled in order to achieve an efficient excitation of the observed multiple-quantum modes. Heteronuclear recoupling is most straightforwardly accomplished by performing the known REDOR experiment, where pi-pulses are applied every half rotor period. This experiment was modified by the insertion of an additional spectroscopic dimension, such that heteronuclear multiple-quantum experiments can be carried out, which, as shown experimentally and theoretically, closely resemble homonuclear double-quantum experiments. Variants are presented which are well-suited for the recording of high-resolution 13C-1H shift correlation and spinning-sideband spectra, by means of which spatial proximities and quantitative dipolar coupling constants, respectively, of heteronuclear spin pairs can be determined. Spectral editing of 13C spectra is shown to be feasible with these techniques. Moreover, order phenomena and dynamics in columnar mesophases with 13C in natural abundance were investigated. 
Two further modifications of the REDOR concept allow the correlation of 13C with quadrupolar nuclei, such as 2H. The spectroscopic handling of these nuclei is challenging in that they cover large frequency ranges, and with the two new experiments it is shown how the excitation problem can be tackled or circumvented altogether, respectively. As an example, one of the techniques is used for the identification of a hitherto unknown motional process of the H-bonded protons in the crystalline parts of poly(vinyl alcohol).

Relevance:

30.00%

Publisher:

Abstract:

The present thesis is concerned with the study of a quantum physical system composed of a small particle system (such as a spin chain) and several quantized massless boson fields (such as photon gases or phonon fields) at positive temperature. The setup serves as a simplified model for matter in interaction with thermal "radiation" from different sources. Questions concerning the dynamical and thermodynamic properties of particle-boson configurations far from thermal equilibrium are at the center of interest. We study a specific situation where the particle system is brought into contact with the boson systems (occasionally referred to as heat reservoirs), each reservoir prepared close to a thermal equilibrium state at a different temperature. We analyze the interacting time evolution of such an initial configuration and show thermal relaxation of the system into a stationary state, i.e., we prove the existence of a time-invariant state which is the unique limit state of the considered initial configurations evolving in time. As long as the reservoirs have been prepared at different temperatures, this stationary state features thermodynamic characteristics such as stationary energy fluxes and a positive entropy production rate, which distinguish it from a thermal equilibrium state at any temperature. We therefore refer to it as a non-equilibrium stationary state, or simply NESS. The physical setup is phrased mathematically in the language of C*-algebras. The thesis gives an extended review of the application of operator-algebraic theories to quantum statistical mechanics and introduces in detail the mathematical objects needed to describe matter in interaction with radiation. The C*-theory is adapted to the concrete setup, and the algebraic description of the system is lifted into a Hilbert space framework. The appropriate Hilbert space representation is given by a bosonic Fock space over a suitable L2-space.
The first part of the present work concludes with the derivation of a spectral theory connecting the dynamical and thermodynamic features with spectral properties of a suitable generator, say K, of the time evolution in this Hilbert space setting. In that way, the question of thermal relaxation becomes a spectral problem. The operator K is of Pauli-Fierz type, and its spectral analysis follows. This task is the core part of the work, and it employs various kinds of functional-analytic techniques. The operator K results from a perturbation of an operator L0 which describes the non-interacting particle-boson system. All spectral considerations are done in a perturbative regime, i.e., we assume that the coupling strength is sufficiently small. Extracting dynamical features of the system from properties of K requires, in particular, knowledge of the spectrum of K in the nearest vicinity of eigenvalues of the unperturbed operator L0. Since convergent Neumann series expansions are only suitable for studying the perturbed spectrum in a neighborhood of the unperturbed one on a scale of the order of the coupling strength, we need to apply a more refined tool, the Feshbach map. This technique allows the analysis of the spectrum on a smaller scale by transferring the analysis to a spectral subspace. The need for spectral information on arbitrary scales requires an iteration of the Feshbach map, a procedure which leads to an operator-theoretic renormalization group. The reader is introduced to the Feshbach technique, and the renormalization procedure based on it is discussed in full detail. Further, it is explained how the spectral information is extracted from the renormalization group flow. The present dissertation extends a recent research contribution by Jakšić and Pillet on a similar physical setup in two respects.
Firstly, we consider the more delicate situation of bosonic heat reservoirs instead of fermionic ones, and secondly, the system can be studied uniformly for small reservoir temperatures. The adaptation of the Feshbach-map-based renormalization procedure of Bach, Chen, Fröhlich, and Sigal to concrete spectral problems in quantum statistical mechanics is a further novelty of this work.
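The key isospectrality property of the Feshbach map can be checked numerically on a finite-dimensional toy: for a Hermitian matrix H split into a P-block and a Q-block, the Schur-complement-type effective operator F(z) = H_PP - H_PQ (H_QQ - z)^{-1} H_QP is singular at z exactly when z is an eigenvalue of H, provided H_QQ - z is invertible. A minimal numpy sketch (the matrix entries are, of course, illustrative; the thesis works with unbounded operators):

```python
import numpy as np

def feshbach(H, z, p):
    """Effective operator of the Feshbach map on the first p coordinates:
    F(z) = H_PP - H_PQ (H_QQ - z)^{-1} H_QP."""
    Hpp, Hpq = H[:p, :p], H[:p, p:]
    Hqp, Hqq = H[p:, :p], H[p:, p:]
    q = H.shape[0] - p
    return Hpp - Hpq @ np.linalg.solve(Hqq - z * np.eye(q), Hqp)

H = np.array([[1.0, 0.1, 0.2, 0.0],
              [0.1, 2.0, 0.0, 0.3],
              [0.2, 0.0, 10.0, 0.1],
              [0.0, 0.3, 0.1, 12.0]])
z = np.linalg.eigvalsh(H)[0]   # an eigenvalue of the full operator
F = feshbach(H, z, 2)          # 2x2 effective operator on the P-subspace
# isospectrality: F(z) - z is singular on the P-subspace
```

This is precisely what makes iterating the map useful: the spectral problem for the large operator is transferred, without loss, to a much smaller subspace.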

Relevance:

30.00%

Publisher:

Abstract:

BTES (Borehole Thermal Energy Storage) systems exchange thermal energy by conduction with the surrounding ground through the borehole materials. The spatial variability of the geological properties and the space-time variability of the hydrogeological conditions affect the real power rate of the heat exchangers and, consequently, the amount of energy extracted from / injected into the ground. For this reason, it is not an easy task to identify the underground thermal properties to use at the design stage. At the current state of technology, the Thermal Response Test (TRT) is the in situ test that characterizes ground thermal properties with the highest degree of accuracy, but it does not fully solve the problem of characterizing the thermal properties of a shallow geothermal reservoir, simply because it characterizes only the neighborhood of the heat exchanger at hand and only for the test duration. Different analytical and numerical models exist for the characterization of shallow geothermal reservoirs, but they are still inadequate and not exhaustive: more sophisticated models must be taken into account, and a geostatistical approach is needed to tackle natural variability and estimate uncertainty. The approach adopted for reservoir characterization is the "inverse problem", typical of oil & gas field analysis. Similarly, we create different realizations of the thermal properties by direct sequential simulation and find the one best fitting the real production data (fluid temperature over time). The software used to simulate heat production is FEFLOW 5.4 (Finite Element subsurface FLOW system). A geostatistical reservoir model has been set up based on thermal property data from the literature and on spatial variability hypotheses, and a real TRT has been tested. We then analyzed and used two other codes (SA-Geotherm and FV-Geotherm), which are two implementations of the same numerical model as FEFLOW (the Al-Khoury model).
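The conventional way to interpret a TRT, much simpler than the geostatistical inverse approach described above, is the infinite line-source model: at late times the mean fluid temperature grows linearly in ln(t), with a slope inversely proportional to the ground thermal conductivity. A minimal sketch with synthetic data (the heat rate, borehole length and conductivity are illustrative assumptions):

```python
import numpy as np

def conductivity_from_trt(t, T_fluid, Q, L):
    """Infinite line-source estimate: for large t,
    T_f(t) ~ (Q / (4*pi*lam*L)) * ln(t) + const, hence lam = Q / (4*pi*L*slope)."""
    slope, _ = np.polyfit(np.log(t), T_fluid, 1)
    return Q / (4.0 * np.pi * L * slope)

# synthetic test data: lam = 2.5 W/(m K), Q = 5000 W, L = 100 m
lam_true, Q, L = 2.5, 5000.0, 100.0
t = np.linspace(10.0, 72.0, 50) * 3600.0                   # 10 h to 72 h, in seconds
T = Q / (4 * np.pi * lam_true * L) * np.log(t) + 12.0      # ideal line-source response
```

Fitting the slope of temperature against ln(t) on this ideal response recovers the assumed conductivity exactly; on real data, early times and ambient disturbances make the estimate local in space and time, which is exactly the limitation the abstract points out.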

Relevance:

30.00%

Publisher:

Abstract:

The main objective of this work was to investigate the impact of different hybridization concepts and levels of hybridization on the fuel economy of a standard road vehicle, where both conventional and non-conventional hybrid architectures are treated in exactly the same way from the point of view of overall energy flow optimization. Hybrid component models were developed and presented in detail, as well as the simulation results, mainly for the NEDC cycle. The analysis was performed on four different parallel hybrid powertrain concepts: Hybrid Electric Vehicle (HEV), High Speed Flywheel Hybrid Vehicle (HSF-HV), Hydraulic Hybrid Vehicle (HHV) and Pneumatic Hybrid Vehicle (PHV). In order to perform an equitable analysis of the different hybrid systems, the comparison was also based on the same usable system energy storage capacity (i.e. 625 kJ for the HEV, HSF and HHV); in the case of the pneumatic hybrid system, however, the maximal storage capacity was limited by the size of the system in order to comply with the packaging requirements of the vehicle. The simulations were performed within the IAV GmbH VeLoDyn software simulator, based on the Matlab/Simulink software package. An advanced cycle-independent control strategy, the Equivalent Consumption Minimization Strategy (ECMS), was implemented in the hybrid supervisory control unit in order to solve the power management problem for all hybrid powertrain solutions. In order to keep the State of Charge within the desired boundaries during different cycles and to facilitate easy implementation and recalibration of the control strategy for very different hybrid systems, a Charge Sustaining Algorithm was added to the ECMS framework. Also, a Variable Shift Pattern VSP-ECMS algorithm was proposed as an extension of the ECMS capabilities, so as to include gear selection in the determination of the minimal (energy) cost function of the hybrid system. Further, a cycle-based energetic analysis was performed in all the simulated cases, and the results are reported in the corresponding chapters.
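The core of ECMS is an instantaneous minimization: at each time step, the power split is chosen to minimize the engine's fuel power plus an equivalence factor s times the battery's electrochemical power. A toy sketch over a discretized battery-power grid (the efficiencies, limits and grid are illustrative assumptions, not values from this work):

```python
def ecms_split(p_demand, s, p_batt_max=20.0, eta_eng=0.35, eta_batt=0.9, steps=41):
    """Return the battery power (kW) minimizing the instantaneous equivalent cost
    fuel_power + s * battery_electrochemical_power, for a given power demand."""
    best_cost, best_p_batt = float("inf"), 0.0
    for i in range(steps):
        p_batt = -p_batt_max + 2.0 * p_batt_max * i / (steps - 1)
        p_eng = p_demand - p_batt          # engine covers the rest of the demand
        if p_eng < 0:
            continue                       # no engine braking in this toy model
        fuel = p_eng / eta_eng             # fuel power drawn by the engine
        # discharging costs more than the terminal power; charging credits less
        batt = p_batt / eta_batt if p_batt >= 0 else p_batt * eta_batt
        cost = fuel + s * batt
        if cost < best_cost:
            best_cost, best_p_batt = cost, p_batt
    return best_p_batt
```

With a low equivalence factor the strategy favours electric drive, and with a high one it favours the engine (and even recharging); a charge-sustaining layer, as in the abstract, adapts s so the State of Charge stays within bounds over the cycle.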

Relevance:

30.00%

Publisher:

Abstract:

This work is devoted to the physical analysis and modelling of the atmospheric boundary layer under stable conditions. The main objective is to improve the turbulence parameterization models currently used by large-scale meteorological models. These turbulence parameterizations consist in expressing the Reynolds stresses as functions of the mean fields (horizontal velocity components and potential temperature) by means of closures. Most closures have been developed for quasi-neutral cases, and the difficulty lies in treating the effect of stability rigorously. We study in detail two different turbulence closure models for the stable boundary layer, based on different assumptions: a TKE-l scheme (Mellor-Yamada, 1982), which is used in the BOLAM (Bologna Limited Area Model) forecast model, and a scheme recently developed by Mauritsen et al. (2007). The closure assumptions of the two schemes are analysed against experimental data from the Cabauw tower in the Netherlands and from the CIBA site in Spain. These turbulence parameterization schemes are then inserted into a single-column model of the atmospheric boundary layer, in order to test their predictions free of external influences. The comparison between the different schemes is carried out on a case well documented in the literature, "GABLS1". To confirm the validity of the predictions, a three-dimensional dataset is created by simulating the same GABLS1 case with a Large Eddy Simulation. ARPS (Advanced Regional Prediction System) was used for this purpose. The stable stratification constrains the grid spacing, since the LES must have a resolution high enough for the typical vertical scales of motion to be correctly resolved.
The comparison of this three-dimensional dataset with the predictions of the turbulence schemes makes it possible to propose a set of new closures intended to improve the BOLAM turbulence model. The work was carried out at ISAC-CNR in Bologna and at LEGI in Grenoble.
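The basic ingredient of a TKE-l closure is an eddy viscosity built from the turbulent kinetic energy e and a mixing length l, damped by stability. A toy sketch using a Blackadar-type length scale and a crude linear Richardson-number cut-off (the constants and the stability function are illustrative, not those of BOLAM, Mellor-Yamada or Mauritsen et al.):

```python
import math

def eddy_viscosity(z, tke, ri, kappa=0.4, l_inf=70.0, c_m=0.55, ri_c=0.25):
    """Toy TKE-l closure: K_m = c_m * l * sqrt(e) * f(Ri).

    z      : height above ground (m)
    tke    : turbulent kinetic energy e (m^2/s^2)
    ri     : gradient Richardson number
    """
    l = kappa * z / (1.0 + kappa * z / l_inf)   # Blackadar-type mixing length
    f_ri = max(0.0, 1.0 - ri / ri_c)            # crude stable-stratification damping
    return c_m * l * math.sqrt(tke) * f_ri
```

The Reynolds stress is then modelled as -u'w' = K_m * dU/dz; the closures compared in the work differ precisely in how l and the stability function are specified, and the linear cut-off above (zero mixing beyond a critical Ri) is exactly the kind of assumption that LES data can confirm or reject.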

Relevance:

30.00%

Publisher:

Abstract:

This thesis analyses problems related to the applicability, in business environments, of Process Mining tools and techniques. The first contribution is a presentation of the state of the art of Process Mining and a characterization of companies in terms of their "process awareness". The work continues by identifying the circumstances where problems can emerge: data preparation, the actual mining, and the interpretation of results. Other problems are the configuration of parameters by non-expert users and computational complexity. We concentrate on two possible scenarios: "batch" and "on-line" Process Mining. Concerning batch Process Mining, we first investigated the data preparation problem and proposed a solution for the identification of the "case-ids" whenever this field is not explicitly indicated. After that, we concentrated on problems at mining time and proposed a generalization of a well-known control-flow discovery algorithm that exploits non-instantaneous events; the use of interval-based recording leads to a significant improvement in performance. Later on, we report our work on parameter configuration for non-expert users, presenting two approaches to select the "best" parameter configuration: one is completely autonomous, while the other requires human interaction to navigate a hierarchy of candidate models. Concerning data interpretation and results evaluation, we propose two metrics: a model-to-model metric and a model-to-log metric. Finally, we present an automatic approach for the extension of a control-flow model with social information, in order to simplify the analysis of these perspectives. The second part of this thesis deals with control-flow discovery algorithms in on-line settings. We propose a formal definition of the problem and two baseline approaches.
Two actual mining algorithms are then proposed: the first is an adaptation of a frequency-counting algorithm to the control-flow discovery problem; the second is a framework of models which can be used for different kinds of streams (stationary versus evolving).
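The frequency-counting flavour of on-line discovery can be illustrated with the simplest control-flow artifact, the directly-follows relation, built in a single pass over an event stream. This is a toy sketch, not the algorithm proposed in the thesis, which additionally bounds memory for unbounded streams:

```python
from collections import defaultdict

def directly_follows(stream):
    """One-pass construction of the directly-follows relation from an event
    stream of (case_id, activity) pairs, possibly with interleaved cases."""
    last = {}              # last activity seen for each case
    df = defaultdict(int)  # (a, b) -> how often b directly follows a in some case
    for case, activity in stream:
        if case in last:
            df[(last[case], activity)] += 1
        last[case] = activity
    return dict(df)
```

Control-flow discovery algorithms in the alpha family derive the process model from exactly this relation, which is why keeping it (approximately) up to date with bounded memory is the crux of the streaming setting.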

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Massive parallel robots (MPRs) driven by discrete actuators are force-regulated robots that undergo continuous motions despite being commanded through only a finite number of states. Designing a real-time control for such systems requires fast and efficient methods for solving their inverse static analysis (ISA), which is a challenging problem and the subject of this thesis. In particular, five artificial-intelligence methods are proposed to investigate the on-line computation and the generalization error of the ISA problem for a class of MPRs featuring three-state force actuators and one degree of revolute motion.
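To make the combinatorial nature of the ISA problem concrete, here is a minimal, hypothetical brute-force baseline: with three-state actuators, the discrete state space grows as 3^n, which is why on-line use calls for trained approximators instead of exhaustive search. The linear forward model and all weights below are illustrative assumptions, not the thesis's robot model:

```python
import itertools
import math

def forward_static(state, weights):
    """Hypothetical forward static model: a scalar pose coordinate as a
    weighted sum of the discrete actuator states."""
    return sum(w * s for w, s in zip(weights, state))

def brute_force_isa(target, n_actuators, weights):
    """Enumerate all 3^n actuator state vectors (states -1, 0, +1) and
    return the one whose forward static solution is closest to the
    target pose. Exponential in n_actuators, hence unusable on-line."""
    best, best_err = None, math.inf
    for state in itertools.product((-1, 0, 1), repeat=n_actuators):
        err = abs(forward_static(state, weights) - target)
        if err < best_err:
            best, best_err = state, err
    return best, best_err

state, err = brute_force_isa(target=2.5, n_actuators=3, weights=[1.0, 2.0, 4.0])
```

A learning-based ISA method would replace the exhaustive loop with a model trained on (pose, state) pairs sampled from the forward analysis, trading exactness for constant-time on-line evaluation.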

Relevância:

30.00% 30.00%

Publicador:

Resumo:

The topics of this thesis comprise both methodological developments within the framework of the ab initio second-order methods CC2 and ADC(2) and applications of these developments to current research questions. The methodological extensions mainly concern transition moments between excited states; their implementation now makes the calculation of transient absorption spectra possible. The applications predominantly address the field of organic semiconductors and their photo-electronic properties, in which the hitherto little-studied triplet excimers play a central role.

The transition moments between excited states were implemented in the TURBOMOLE program package. This makes it possible to compute transition moments between states of the same multiplicity (i.e. both singlet-singlet and triplet-triplet transitions) and of different multiplicity (i.e. singlet-triplet transitions). As an extension, the calculation of spin-orbit matrix elements (SOMEs) was implemented via an interface to the ORCA program. Furthermore, this implementation also allows transitions in open-shell systems to be computed. To keep memory requirements and computation time as low as possible, the resolution-of-the-identity (RI) approximation was employed. It reduces the memory requirement from O(N^4) to O(N^3), since the quantities scaling as O(N^4) (e.g. the T2 amplitudes) can be computed very efficiently from RI intermediates and therefore need not be stored. This makes calculations feasible for medium-sized molecules (about 20-50 atoms) with a reasonable basis set.

The accuracy of the transition moments between excited states was tested for a benchmark set of small molecules as well as for selected larger organic molecules; the error introduced by the RI approximation turned out to be very small.

Predicting transient spectra with CC2 or ADC(2) nevertheless poses a problem, since these methods give only a very inadequate description of states that are dominated by double excitations with respect to the reference determinant. This is relevant for excited-state spectra, because transitions to such states can be energetically accessible and allowed; an example is discussed in this work for a singlet-singlet spectrum. For transitions between triplet states this is less problematic, since the energetically lowest double excitations are closed-shell and therefore do not occur for triplets.

Of particular interest for this work is the formation of excimers in the excited triplet state. These can arise from strong interactions between the π-electron systems of large organic molecules, such as those used as organic semiconductors in organic light-emitting diodes, and they can significantly influence the photo-electronic properties of these materials. Within this dissertation, two such systems were therefore investigated: [3.3](4,4')biphenylophane and the naphthalene dimer. Their transient excitation spectra from the first excited triplet state were calculated, and the results were used to interpret the experimental spectra. Owing to the good agreement between calculated and experimental spectra, it could be shown that a coplanar arrangement of the two monomers leads to a strong coupling between locally excited and charge-transfer states. This coupling results in a significant energetic lowering of the first excited state and in a very small distance between the monomer units, with the excited state delocalized over both monomers.

The strong coupling occurs at intermolecular distances of ≤ 4 Å, a typical distance in organic semiconductors. In this regime the Förster-Dexter theory cannot be used to describe such systems, since it is valid only in the weak-coupling limit.
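The memory saving attributed to the RI approximation in the abstract above can be made explicit: the four-index two-electron integrals are factorized through an auxiliary basis {P}, so that only three-index quantities need to be stored. In standard notation:

```latex
% RI factorization of the four-index two-electron integrals
(pq\,|\,rs) \;\approx\; \sum_{P,Q} (pq\,|\,P)\,\bigl[\mathbf{V}^{-1}\bigr]_{PQ}\,(Q\,|\,rs),
\qquad V_{PQ} = (P\,|\,Q)
```

Only the three-index intermediates (pq|P), of size O(N^3), enter the working equations, which is the source of the O(N^4) to O(N^3) storage reduction described above.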

Relevância:

30.00% 30.00%

Publicador:

Resumo:

This thesis investigates the connection between scales in soft-matter systems, which plays an important role in multiscale simulations. For this purpose, a method was developed that assesses the approximation of variable separability for molecular dynamics and similar applications. The second and larger part of this work deals with the conceptual and technical extension of the Adaptive Resolution Scheme (AdResS), a method for the simultaneous simulation of systems at several levels of resolution. This method was extended to systems in which both classical and quantum-mechanical effects play a role.

The first method mentioned above requires only the analytical form of the potentials, as provided by most molecular-dynamics packages. When applied to a specific problem, a successful outcome gives a numerical indication of the validity of the variable separation; an unsuccessful outcome guarantees that no separation of variables is possible. The method is demonstrated for a diatomic molecule on a surface and for the two-dimensional version of the Rotational Isomer State (RIS) model of a polymer chain.

The second part of this work covers the development of an algorithm for the adaptive simulation of systems in which quantum effects are taken into account. The quantum nature of the atoms is represented in the path-integral method by a classical ring polymer. The adaptive path-integral method is first tested for monatomic liquids and tetrahedral molecules under standard thermodynamic conditions. Finally, the stability of the method is verified by applying it to liquid para-hydrogen at low temperatures.
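The ring-polymer representation mentioned above can be written down explicitly: in the path-integral picture, each quantum atom of mass m maps onto a classical cyclic chain of P beads coupled by harmonic springs. A standard form of the effective Hamiltonian is:

```latex
H_P \;=\; \sum_{j=1}^{P} \left[ \frac{p_j^{\,2}}{2m'} \;+\; \frac{1}{2}\, m\,\omega_P^{2}\,\bigl(q_j - q_{j+1}\bigr)^{2} \;+\; \frac{V(q_j)}{P} \right],
\qquad \omega_P = \frac{\sqrt{P}}{\beta\hbar}, \quad q_{P+1} \equiv q_1
```

For P = 1 the spring term vanishes and the classical Hamiltonian is recovered, which is what makes an adaptive scheme coupling classical and path-integral regions natural.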