926 results for Scaling Of Chf
Abstract:
Providing support for multimedia applications on low-power mobile devices remains a significant research challenge. This is primarily due to two reasons: • Portable mobile devices have modest sizes and weights, and therefore inadequate resources: low CPU processing power, reduced display capabilities, and limited memory and battery lifetimes as compared to desktop and laptop systems. • On the other hand, multimedia applications tend to have distinctive QoS and processing requirements which make them extremely resource-demanding. This innate conflict introduces key research challenges in the design of multimedia applications and device-level power optimization. Energy efficiency on this kind of platform can be achieved only via a synergistic hardware and software approach. In fact, while Systems-on-Chip are more and more programmable, thus providing functional flexibility, hardware-only power reduction techniques cannot keep consumption within acceptable bounds. It is well understood both in research and industry that system configuration and management cannot be controlled efficiently by relying only on low-level firmware and hardware drivers. At this level there is a lack of information about user application activity and, consequently, about the impact of power management decisions on QoS. Even though operating system support and integration is a requirement for effective performance and energy management, more effective and QoS-sensitive power management is possible if power awareness and hardware configuration control strategies are tightly integrated with domain-specific middleware services. The main objective of this PhD research has been the exploration and integration of middleware-centric energy management with applications and the operating system. We chose to focus on the CPU-memory and video subsystems, since they are the most power-hungry components of an embedded system.
A second main objective has been the definition and implementation of software facilities (such as toolkits, APIs, and run-time engines) to improve the programmability and performance efficiency of such platforms. Enhancing energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs): Consumer applications are characterized by tight time-to-market constraints and extreme cost sensitivity. The software that runs on modern embedded systems must be high performance, real time, and, even more importantly, low power. Although much progress has been made on these problems, much remains to be done. Multi-Processor Systems-on-Chip (MPSoCs) are increasingly popular platforms for high-performance embedded applications. This leads to interesting challenges in software development, since efficient software development is a major issue for MPSoC designers. An important step in deploying applications on multiprocessors is to allocate and schedule concurrent tasks to the processing and communication resources of the platform. The problem of allocating and scheduling precedence-constrained tasks on processors in a distributed real-time system is NP-hard. There is a clear need for deployment technology that addresses these multiprocessing issues. This problem can be tackled by means of specific middleware which takes care of allocating and scheduling tasks on the different processing elements and which also tries to optimize the power consumption of the entire multiprocessor platform. This dissertation is an attempt to develop insight into efficient, flexible and optimal methods for allocating and scheduling concurrent applications to multiprocessor architectures. It is a well-known problem in the literature: optimization problems of this kind are very complex even in much simplified variants, so most authors propose simplified models and heuristic approaches to solve them in reasonable time.
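Because the allocation and scheduling problem is NP-hard, practical tools often start from a fast heuristic that yields a feasible but generally suboptimal mapping. As a hedged illustration only, and not the method developed in this dissertation, a classic longest-processing-time list-scheduling sketch for independent tasks might look like this:

```python
def greedy_schedule(task_times, n_procs):
    """Longest-processing-time-first list scheduling: each task, taken in
    decreasing order of duration, goes to the processor that currently
    finishes earliest. A textbook heuristic, shown here only to make the
    allocation/scheduling problem concrete."""
    finish = [0.0] * n_procs          # current finish time per processor
    assignment = {}
    for task, t in sorted(task_times.items(), key=lambda kv: -kv[1]):
        p = min(range(n_procs), key=lambda i: finish[i])
        finish[p] += t
        assignment[task] = p
    return assignment, max(finish)

# hypothetical pipeline stages with toy durations (illustrative names)
tasks = {"fft": 4, "filter": 3, "decode": 2, "render": 2}
assign, makespan = greedy_schedule(tasks, 2)
```

Heuristics like this run in polynomial time but, as noted above, give no bound on the distance from the optimum, which is exactly the optimality gap the dissertation targets with complete search.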
Model simplification is often achieved by abstracting away platform implementation "details". As a result, optimization problems become more tractable, even reaching polynomial time complexity. Unfortunately, this approach creates an abstraction gap between the optimization model and the real HW-SW platform. The main issue with heuristic or, more generally, incomplete search is that it introduces an optimality gap of unknown size: it provides very limited or no information on the distance between the best computed solution and the optimal one. The goal of this work is to address both the abstraction and optimality gaps by formulating accurate models which account for a number of "non-idealities" in real-life hardware platforms, developing novel mapping algorithms that deterministically find optimal solutions, and implementing the software infrastructures required by developers to deploy applications on the target MPSoC platforms. Energy-Efficient LCD Backlight Autoregulation on a Real-Life Multimedia Application Processor: Despite the ever-increasing advances in Liquid Crystal Display (LCD) technology, LCD power consumption is still one of the major limitations to the battery life of mobile appliances such as smart phones, portable media players, gaming and navigation devices. There is a clear trend towards increasing LCD size to exploit the multimedia capabilities of portable devices that can receive and render high-definition video and pictures. Multimedia applications running on these devices require LCD screen sizes of 2.2 to 3.5 inches and more to display video sequences and pictures with the required quality. LCD power consumption depends on the backlight and pixel-matrix driving circuits and is typically proportional to the panel area. As a result, this contribution is also likely to be considerable in future mobile appliances.
To address this issue, companies are proposing low-power technologies suitable for mobile applications, supporting low-power states and image control techniques. On the research side, several power-saving schemes and algorithms can be found in the literature. Some of them exploit software-only techniques that change the image content to reduce the power associated with crystal polarization; others aim to decrease the backlight level while compensating for the luminance reduction, limiting the user-perceived quality degradation through pixel-by-pixel image processing algorithms. The major limitation of these techniques is that they rely on the CPU to perform pixel-based manipulations, and their impact on CPU utilization and power consumption has not been assessed. This PhD dissertation shows an alternative approach that exploits, in a smart and efficient way, the hardware image processing unit integrated in almost every current multimedia application processor to implement hardware-assisted image compensation that allows dynamic scaling of the backlight with a negligible impact on QoS. The proposed approach overcomes CPU-intensive techniques by saving system power without requiring either a dedicated display technology or hardware modification. Thesis Overview: The remainder of the thesis is organized as follows. The first part is focused on enhancing energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs). Chapter 2 gives an overview of architectural trends in embedded systems, illustrating the principal features of new technologies and the key challenges still open. Chapter 3 presents a QoS-driven methodology for optimal allocation and frequency selection for MPSoCs. The methodology is based on functional simulation and full-system power estimation. Chapter 4 targets allocation and scheduling of pipelined stream-oriented applications on top of distributed-memory architectures with messaging support.
We tackle the complexity of the problem by means of decomposition and no-good generation, and prove the increased computational efficiency of this approach with respect to traditional ones. Chapter 5 presents a cooperative framework to solve the allocation, scheduling and voltage/frequency selection problem to optimality for energy-efficient MPSoCs, while in Chapter 6 applications with conditional task graphs are taken into account. Finally, Chapter 7 proposes a complete framework, called Cellflow, to help programmers implement software efficiently on a real architecture, the Cell Broadband Engine processor. The second part is focused on energy-efficient software techniques for LCD displays. Chapter 8 gives an overview of portable-device display technologies, illustrating the principal features of LCD video systems and the key challenges still open. Chapter 9 reviews several energy-efficient software techniques from the literature, while Chapter 10 illustrates in detail our method for saving significant power in an LCD panel. Finally, conclusions are drawn, reporting the main research contributions discussed throughout this dissertation.
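The backlight-compensation idea discussed above can be sketched in a few lines. This is a hedged toy model, not the dissertation's hardware-assisted algorithm: dimming the backlight to a fraction alpha of its nominal level while boosting pixel values by 1/alpha keeps the perceived luminance (roughly backlight level times transmittance) approximately constant, except for bright pixels that clip at the maximum code value:

```python
def compensate(pixels, alpha):
    """Boost 8-bit pixel values by 1/alpha to offset a backlight dimmed
    to fraction alpha, clipping at the maximum code value 255. Pixels
    brighter than 255*alpha cannot be fully compensated (the QoS loss
    the thesis's approach keeps negligible)."""
    return [min(255, round(p / alpha)) for p in pixels]

# toy example: 20% backlight dimming
out = compensate([100, 200, 250], 0.8)
```

In the dissertation this scaling is offloaded to the image processing unit; doing it on the CPU, as here, is exactly the overhead the work avoids.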
Abstract:
The present work is devoted to the assessment of the physics of energy fluxes in the space of scales and in physical space for wall-turbulent flows. The generalized Kolmogorov equation will be applied to DNS data of a turbulent channel flow in order to describe the energy flux paths from production to dissipation in the augmented space of wall-turbulent flows. This multidimensional description will be shown to be crucial to understanding the formation and sustainment of the turbulent fluctuations fed by the energy fluxes coming from the near-wall production region. An unexpected behavior of the energy fluxes emerges from this analysis, consisting of spiral-like paths in the combined physical/scale space in which the controversial reverse energy cascade plays a central role. The observed behavior conflicts with the classical notion of the Richardson/Kolmogorov energy cascade and may have strong repercussions on both theoretical and modeling approaches to wall turbulence. To this aim, a new relation stating the leading physical processes governing the energy transfer in wall turbulence is suggested and shown to be able to capture most of the rich dynamics of the shear-dominated region of the flow. Two dynamical processes are identified as driving mechanisms for the fluxes: one in the near-wall region and a second one further away from the wall. The former, stronger one is related to the dynamics involved in the near-wall turbulence regeneration cycle. The second suggests an outer self-sustaining mechanism which is asymptotically expected to take place in the log layer and could explain the debated mixed inner/outer scaling of the near-wall statistics. The same approach is applied for the first time to a filtered velocity field. A generalized Kolmogorov equation specialized to filtered velocity fields is derived and discussed.
The results will show what effects the subgrid scales have on the resolved motion in both physical and scale space, singling out the prominent role of the filter length compared to the cross-over scale between production-dominated scales and the inertial range, lc, and to the reverse-energy-cascade region lb. The systematic characterization of the resolved and subgrid physics as a function of the filter scale and of the wall distance will be shown to be instrumental for a correct use of LES models in the simulation of wall-turbulent flows. Taking inspiration from the new relation for the energy transfer in wall turbulence, a new class of LES models will also be proposed. Finally, the generalized Kolmogorov equation specialized to filtered velocity fields will be shown to be a helpful statistical tool for the assessment of LES models and for the development of new ones. As an example, some classical purely dissipative eddy-viscosity models are analyzed via an a priori procedure.
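For orientation, the central objects of this analysis are the two-point velocity increment and the scale energy. Written schematically (a hedged sketch consistent with the standard literature form, not necessarily the thesis's exact term-by-term equation), the generalized Kolmogorov equation balances transport in scale space and physical space against production and dissipation:

```latex
% Schematic sketch (hedged): definitions and a grouped-term form of the
% scale-energy balance; the thesis's equation may split terms differently.
\[
\delta u_i = u_i\!\left(\mathbf{X}+\tfrac{\mathbf{r}}{2}\right)
           - u_i\!\left(\mathbf{X}-\tfrac{\mathbf{r}}{2}\right),
\qquad
\langle \delta u^2 \rangle = \langle \delta u_i \,\delta u_i \rangle
\]
\[
\nabla_{\mathbf{r}}\cdot\langle \delta u^2\, \delta\mathbf{u}\rangle
+ \nabla_{\mathbf{X}}\cdot\langle \delta u^2\, \mathbf{u}^{*}\rangle
= \Pi(\mathbf{r},\mathbf{X}) - 4\langle \epsilon^{*}\rangle
+ D_{\mathbf{r}} + D_{\mathbf{X}}
\]
```

Here u* denotes the two-point average velocity, Π collects production by the mean shear, ⟨ε*⟩ is the two-point averaged dissipation, and D_r, D_X are viscous diffusion terms in scale and physical space. The spiral-like flux paths described above live in the (r, X) space in which this balance is evaluated.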
Abstract:
In this thesis I present theoretical and experimental results concerning the operation and properties of a new kind of Penning trap, the planar trap. It consists of circular electrodes printed on an insulating surface, with a homogeneous magnetic field pointing perpendicular to that surface. The motivation for this geometry lies in the construction of an array of planar traps for quantum information purposes. The open access to radiation of this geometry, and the long coherence times expected for Penning traps, make the planar trap a good candidate for quantum computation. Several proposals for quantum 2-qubit interactions are studied and estimates for their rates are given. An expression for the electrostatic potential is presented, and its features exposed. A detailed study of the anharmonicity of the potential is given theoretically and is later demonstrated by experiment and numerical simulations, showing good agreement. Size scalability of this trap has been studied by replacing the original planar trap with a trap half the size in the experimental setup. This substitution shows no scale effect apart from those expected from the scaling of the trap parameters. A shorter lifetime for trapped electrons is seen in the smaller trap, but this is clearly attributable to a larger misalignment between the trap's surface and the magnetic field, due to the greater difficulty of handling it. I also give a hint that this trap may be of help in studying non-linear dynamics in a sextupolarly perturbed Penning trap.
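For context, the standard relations for an ideal Penning trap are a useful (hedged) reference point; the planar trap's anharmonic potential modifies the motion, which is precisely what the anharmonicity study quantifies:

```latex
% Ideal-trap relations (reference point only, not the planar-trap potential):
\[
\omega_c = \frac{qB}{m}, \qquad
\omega_{\pm} = \frac{\omega_c \pm \sqrt{\omega_c^{2} - 2\omega_z^{2}}}{2}, \qquad
\omega_+^{2} + \omega_-^{2} + \omega_z^{2} = \omega_c^{2}
\]
```

Here ω_c is the free cyclotron frequency, ω_z the axial frequency set by the electrostatic potential, and ω_± the modified cyclotron and magnetron frequencies. The last identity, the Brown-Gabrielse invariance theorem, remains valid under small trap tilts, which is relevant to the misalignment effects discussed above.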
Abstract:
Coupled-cluster theory in its single-reference formulation represents one of the most successful approaches in quantum chemistry for the description of atoms and molecules. To extend the applicability of single-reference coupled-cluster theory to systems with degenerate or near-degenerate electronic configurations, multireference coupled-cluster methods have been suggested. One of the most promising formulations of multireference coupled-cluster theory is the state-specific variant suggested by Mukherjee and co-workers (Mk-MRCC). Unlike other multireference coupled-cluster approaches, Mk-MRCC is a size-extensive theory, and results obtained so far indicate that it has the potential to develop into a standard tool for high-accuracy quantum-chemical treatments. This work deals with developments to overcome the limitations in the applicability of the Mk-MRCC method. To this end, an efficient Mk-MRCC algorithm has been implemented in the CFOUR program package to perform energy calculations within the singles and doubles (Mk-MRCCSD) and singles, doubles, and triples (Mk-MRCCSDT) approximations. This implementation exploits the special structure of the Mk-MRCC working equations, which allows existing efficient single-reference coupled-cluster codes to be adapted. The algorithm has the correct computational scaling of d*N^6 for Mk-MRCCSD and d*N^8 for Mk-MRCCSDT, where N denotes the system size and d the number of reference determinants. For the determination of molecular properties such as the equilibrium geometry, the theory of analytic first derivatives of the energy for the Mk-MRCC method has been developed using a Lagrange formalism. The Mk-MRCC gradients within the CCSD and CCSDT approximations have been implemented, and their applicability has been demonstrated for various compounds such as 2,6-pyridyne, the 2,6-pyridyne cation, m-benzyne, ozone and cyclobutadiene.
The development of analytic gradients for Mk-MRCC offers the possibility of routinely locating minima and transition states on the potential energy surface. It can be considered a key step towards routine investigation of multireference systems and calculation of their properties. As the full inclusion of triple excitations in Mk-MRCC energy calculations is computationally demanding, a parallel implementation is presented in order to circumvent limitations due to the required execution time. The proposed scheme is based on the adaptation of a highly efficient serial Mk-MRCCSDT code by parallelizing the time-determining steps. A first application to 2,6-pyridyne is presented to demonstrate the efficiency of the current implementation.
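The quoted d*N^6 and d*N^8 scalings can be made concrete with a toy cost model (proportionality constants and lower-order terms omitted; purely illustrative, not CFOUR's actual timing behavior):

```python
def mk_mrcc_cost(N, d, order="SD"):
    """Leading-order operation-count scaling quoted in the text:
    d*N^6 for Mk-MRCCSD, d*N^8 for Mk-MRCCSDT, where N is the system
    size and d the number of reference determinants (prefactors omitted)."""
    exponent = 6 if order == "SD" else 8
    return d * N ** exponent

# doubling the system size multiplies the SDT cost by 2^8 = 256
ratio = mk_mrcc_cost(20, 4, "SDT") / mk_mrcc_cost(10, 4, "SDT")
```

The linear factor d is the key point: each added reference determinant adds roughly one single-reference-like pass, which is why existing single-reference codes can be adapted.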
Abstract:
This work contains several applications of the mode-coupling theory (MCT) and is separated into three parts. In the first part we investigate the liquid-glass transition of hard spheres for dimensions d→∞ analytically and numerically up to d=800 in the framework of MCT. We find that the critical packing fraction ϕc(d) scales as d²2^(-d), which is larger than the Kauzmann packing fraction ϕK(d) found by a small-cage expansion by Parisi and Zamponi [J. Stat. Mech.: Theory Exp. 2006, P03017 (2006)]. The scaling of the critical packing fraction is different from the relation ϕc(d)∼d2^(-d) found earlier by Kirkpatrick and Wolynes [Phys. Rev. A 35, 3072 (1987)]. This is due to the fact that the k dependence of the critical collective and self-nonergodicity parameters fc(k;d) and fcs(k;d) was assumed to be Gaussian in the previous theories. We show that in MCT this is not the case. Instead, fc(k;d) and fcs(k;d), which become identical in the limit d→∞, converge to a non-Gaussian master function on the scale k∼d^(3/2). We find that the numerically determined value for the exponent parameter λ, and therefore also the critical exponents a and b, depend on the dimension d, even at the largest evaluated dimension d=800. In the second part we compare the results of a molecular-dynamics simulation of liquid Lennard-Jones argon far away from the glass transition [D. Levesque, L. Verlet, and J. Kurkijärvi, Phys. Rev. A 7, 1690 (1973)] with MCT. We show that the agreement between theory and computer simulation can be improved by taking binary collisions into account [L. Sjögren, Phys. Rev. A 22, 2866 (1980)]. We find that an empirical prefactor in the memory function of the original MCT equations leads to similar results. In the third part we derive the equations of a mode-coupling theory for the spherical components of the stress tensor. Unfortunately, it turns out that they are too complex to be solved numerically.
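The difference between the two scaling predictions can be illustrated numerically (prefactors omitted; a toy comparison of the quoted asymptotic forms, not the thesis's MCT computation):

```python
def phi_c_mct(d):
    # MCT result quoted in the text: phi_c(d) ~ d^2 * 2^(-d), prefactor omitted
    return d ** 2 * 2.0 ** (-d)

def phi_c_kw(d):
    # earlier Kirkpatrick-Wolynes form: phi_c(d) ~ d * 2^(-d), prefactor omitted
    return d * 2.0 ** (-d)

# the two predictions differ by a factor of d, so at d=800 (the largest
# dimension evaluated in the work) they are 800x apart
ratio = phi_c_mct(800) / phi_c_kw(800)
```

Both forms vanish exponentially with d; the discrepancy is only in the polynomial prefactor, which is exactly what the non-Gaussian master function on the scale k∼d^(3/2) accounts for.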
Abstract:
In the present work, miniemulsions were used as spatial confinements for the synthesis of various functional materials with novel properties. The first topic covers the preparation of polymer/calcium phosphate hybrid particles and hybrid capsules via template-directed mineralization of calcium phosphate. The functionalized surface of polymer nanoparticles, prepared by miniemulsion polymerization, served as a template for the crystallization of calcium phosphate on the particles. The influence of the functional carboxylate and phosphonate surface groups on the complexation of calcium ions and on the mineralization of calcium phosphate on the nanoparticle surface was analyzed in detail using several methods (ion-selective electrodes, SEM, TEM and XRD). It was found that mineralization at different pH values leads to completely different crystal morphologies (needle- and platelet-shaped crystals) on the particle surface. Studies of the mineralization kinetics showed that the morphology of the hydroxyapatite crystals on the particle surface can be controlled by changing the crystallization rate through a careful choice of the pH value. Both the properties of the polymer nanoparticles used as templates (e.g. size, shape and functionalization) and the surface topography of the resulting polymer/calcium phosphate hybrid particles were deliberately varied in order to tune the properties of the resulting composite materials. A similar bio-inspired method was developed for the in-situ preparation of organic/inorganic nanocapsules. Here, the flexible interface of liquid miniemulsion droplets was used for the mineralization of calcium phosphate at the interface, in order to produce gelatin/calcium phosphate hybrid capsules with a liquid core.
The liquid core of the nanocapsules allows the encapsulation of various hydrophilic substances, which was demonstrated in this work by the successful encapsulation of very small hydroxyapatite crystals as well as a fluorescent dye (rhodamine 6G). Owing to the intrinsic properties of the gelatin/calcium phosphate capsules, different amounts of the encapsulated fluorescent dye could be released from the capsules depending on the pH of the environment. A possible application of the polymer/calcium phosphate particles and capsules is implant coating, where they would serve as a link between an artificial implant and natural bone tissue. In the second topic of this work, the interface of nanometer-sized miniemulsion droplets was used to separate individual polymer chains dissolved in the dispersed phase. After evaporation of the solvent contained in the droplets, stable dispersions of very small polymer nanoparticles (<10 nm in diameter) consisting of only a few or a single polymer chain were obtained. The colloidal stability of the particles after synthesis, ensured by the presence of SDS in the aqueous phase of the dispersions, is advantageous for the subsequent characterization of the polymer nanoparticles. The particle size of the nanoparticles was determined by DLS and TEM, and the number of polymer chains per particle was determined using the density and molecular weight of the polymers employed. As expected for particles consisting of a single polymer chain, the particle size determined by DLS increased markedly with increasing molecular weight of the polymer used in the particle synthesis. Quantification of the number of chains per particle by fluorescence anisotropy measurements showed that single-chain polymer particles of high uniformity were produced.
The use of a high-pressure homogenizer for the preparation of the single-chain dispersions made it possible to produce larger quantities of the single-chain particles, whose material properties are currently being investigated in more detail.
Abstract:
Topological constraints influence the properties of polymers. In this work, computer simulations are used to investigate in detail to what extent the static properties of collapsed polymer rings, polymer rings in concentrated solutions, and brushes built from polymer rings differ between systems with and without topological constraints. Furthermore, the influence of geometric confinement on the topological properties of individual polymer chains is analyzed. The first part of the work concerns the influence of topology on the properties of individual polymer chains in various situations. Since the efficient execution of Monte Carlo simulations of collapsed polymer chains in particular poses a major challenge, three bridging Monte Carlo moves are first transferred from lattice to continuum models. A measurement of the efficiency of these moves yields a speed-up factor of up to 100 compared to the conventional slithering-snake algorithm. This is followed by the analysis of a single coarse-grained polystyrene chain in spherical confinement with respect to entanglements and knots. It is shown that significant knotting of the polystyrene chain only occurs once the radius of the surrounding capsid is smaller than the radius of gyration of the chain. Furthermore, both Monte Carlo and molecular dynamics simulations of very large rings with up to one million monomers in the collapsed state are performed. While the configurations from the Monte Carlo simulations are very strongly knotted owing to the use of the bridging moves, the configurations from the molecular dynamics simulations remain unknotted. Significant differences appear both in the local and in the global structure of the ring polymers.
In the second part of the work, the scaling behavior of the radius of gyration of individual polymer rings in a concentrated solution of fully flexible polymer rings in the continuum is investigated. The onset of the asymptotic scaling behavior, which is consistent with the "fractal globule" model, is reached. In the concluding third part of this work, the behavior of brushes made of linear polymers is compared with that of ring-polymer brushes. It is found that the structure and scaling behavior of the two systems deviate clearly from each other even at identical density profiles parallel to the substrate, although the properties of both systems agree in the direction perpendicular to the substrate. A comparison of the relaxation behavior of individual chains in conventional polymer brushes and ring brushes reveals no major differences. It also turns out, however, that the explanations used so far for the relaxation behavior of conventional brushes are insufficient, since they only take the initial decay of the correlation function into account. The study of the dynamics of individual monomers in a conventional brush of open chains, from the substrate to the open end, shows that the monomers in the middle of the chain have the slowest relaxation, although their mean displacement is significantly smaller than that of the free end monomers.
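The radius of gyration, whose scaling is the central observable of the second part, has a simple standard definition. A minimal sketch (toy coordinates only, not simulation data):

```python
def radius_of_gyration_sq(coords):
    """Squared radius of gyration R_g^2 = (1/N) * sum_i |r_i - r_cm|^2
    for a chain of monomer coordinates; the standard observable whose
    scaling with chain length N is studied above."""
    n = len(coords)
    dim = len(coords[0])
    cm = [sum(c[k] for c in coords) / n for k in range(dim)]  # center of mass
    return sum(sum((c[k] - cm[k]) ** 2 for k in range(dim))
               for c in coords) / n

# four monomers on the corners of a unit square: each is at distance
# sqrt(0.5) from the center, so R_g^2 = 0.5
rg2 = radius_of_gyration_sq([(0, 0), (1, 0), (1, 1), (0, 1)])
```

Fitting R_g^2 against chain length N over many configurations gives the scaling exponent that distinguishes, for example, a fractal globule from a Gaussian coil.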
Abstract:
Dysfunction of the Autonomic Nervous System (ANS) is a typical feature of chronic heart failure and other cardiovascular diseases. As a simple non-invasive technology, heart rate variability (HRV) analysis provides reliable information on autonomic modulation of heart rate. The aim of this thesis was to research and develop automatic methods based on ANS assessment for the evaluation of risk in cardiac patients. Several feature-selection and machine-learning algorithms have been combined to achieve these goals. Automatic assessment of disease severity in Congestive Heart Failure (CHF) patients: a completely automatic method based on long-term HRV was proposed in order to automatically assess the severity of CHF, achieving a sensitivity of 93% and a specificity of 64% in discriminating severe versus mild patients. Automatic identification of hypertensive patients at high risk of vascular events: a completely automatic system was proposed in order to identify hypertensive patients at higher risk of developing vascular events in the 12 months following the electrocardiographic recordings, achieving a sensitivity of 71% and a specificity of 86% in identifying high-risk subjects among hypertensive patients. Automatic identification of hypertensive patients with a history of falls: it was explored whether an automatic identification of fallers among hypertensive patients based on HRV was feasible. The results obtained in this thesis could have implications both in clinical practice and in clinical research. The system has been designed and developed to be clinically feasible. Moreover, since a 5-minute ECG recording is inexpensive, easy to acquire, and non-invasive, future research will focus on the clinical applicability of the system as a screening tool in non-specialized outpatient clinics, in order to identify high-risk patients to be shortlisted for more complex investigations.
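The reported performance figures are standard confusion-matrix quantities. As a hedged sketch with toy labels (not the thesis's data or classifiers):

```python
def sens_spec(y_true, y_pred):
    """Sensitivity and specificity for binary labels
    (1 = high-risk/severe, 0 = low-risk/mild):
    sensitivity = TP / (TP + FN), specificity = TN / (TN + FP)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# toy example: 7 subjects, 3 truly high-risk
sens, spec = sens_spec([1, 1, 1, 0, 0, 0, 0],
                       [1, 1, 0, 0, 0, 0, 1])
```

For a screening tool, high sensitivity matters most (few high-risk patients missed), which is why the CHF result of 93% sensitivity at 64% specificity is a clinically sensible trade-off.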
Abstract:
The topic of this work is the development and combination of different numerical methods, and their application to problems of strongly correlated electron systems. Such materials exhibit many interesting physical properties, e.g. superconductivity and magnetic order, and play an important role in technical applications. Two different models are treated: the Hubbard model and the Kondo lattice model (KLM). In recent decades, many insights have already been gained through the numerical solution of these models. Nevertheless, the physical origin of many effects remains hidden, because current methods are restricted to certain parameter regimes. One of the strongest restrictions is the lack of efficient algorithms for low temperatures.

Based on the Blankenbecler-Scalapino-Sugar quantum Monte Carlo (BSS-QMC) algorithm, we present a numerically exact method that solves the Hubbard model and the KLM efficiently at very low temperatures. This method is applied to the Mott transition in the two-dimensional Hubbard model. In contrast to earlier studies, we can clearly rule out a Mott transition at finite temperatures and finite interactions.

On the basis of this exact BSS-QMC algorithm, we have developed an impurity solver for dynamical mean-field theory (DMFT) and its cluster extensions (CDMFT). DMFT is the prevailing theory of strongly correlated systems in which conventional band-structure calculations fail. A main limitation is the availability of efficient impurity solvers for the intrinsic quantum problem. The algorithm developed in this work has the same superior scaling with inverse temperature as BSS-QMC.
We study the Mott transition within DMFT and analyze the influence of systematic errors on this transition.

Another prominent topic is the neglect of non-local interactions in DMFT. To address this, we combine direct BSS-QMC lattice calculations with CDMFT for the half-filled two-dimensional anisotropic Hubbard model, the doped Hubbard model and the KLM. The results for the different models differ strongly: while non-local correlations play an important role in the two-dimensional (anisotropic) model, in the paramagnetic phase the momentum dependence of the self-energy is much weaker for strongly doped systems and for the KLM. A remarkable finding is that the self-energy can be parametrized by the non-interacting dispersion. The special structure of the self-energy in momentum space can be very useful for classifying electronic correlation effects, and it opens the way for the development of new schemes beyond the limits of DMFT.
Abstract:
An experimental setup was designed to visualize water percolation inside the porous transport layer (PTL) of proton exchange membrane (PEM) fuel cells and to identify the relevant characterization parameters. In parallel with the observation of the water movement, the injection pressure (the pressure required to transport water through the PTL) was measured. A new scaling for drainage in porous media has been proposed based on the ratio between the input and dissipated energies during percolation. A proportional dependency was obtained between the energy ratio and a non-dimensional time, and this relationship does not depend on the flow regime, whether stable displacement or capillary fingering. Experimental results show that for different PTL samples (from different manufacturers) the proportionality is different. The identification of this proportionality allows a unique characterization of PTLs with respect to water transport. This scaling has relevance in porous media flows ranging far beyond fuel cells. In parallel with the experimental analysis, a two-dimensional numerical model was developed in order to simulate the phenomena observed in the experiments. The stochastic nature of the pore size distribution and the roles of the PTL wettability and morphology in water transport were analyzed. The effect of a second porous layer placed between the porous transport layer and the catalyst layer, called the microporous layer (MPL), was also studied. It was found that the presence of the MPL significantly reduced the water content in the PTL by enhancing fingering formation. Moreover, the presence of small defects (cracks) within the MPL was shown to enhance water management. Finally, a corroboration of the numerical simulation was carried out: a three-dimensional version of the network model was developed mimicking the experimental conditions.
The morphology and wettability of the PTL are tuned to the experiment data by using the new energy scaling of drainage in porous media. Once the fit between numerical and experimental data is obtained, the computational PTL structure can be used in different types of simulations where the conditions are representative of the fuel cell operating conditions.
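The proposed scaling, as described above, reduces to a proportionality between the input-to-dissipated energy ratio and a non-dimensional time, with a sample-specific constant. A minimal sketch of how such a constant could be fitted is shown below; the function name and the measurement values are hypothetical, not taken from the thesis.

```python
# Illustrative sketch (hypothetical data, not from the thesis): fitting the
# proportionality constant k in  E_ratio = k * t_star,  where E_ratio is the
# input-to-dissipated energy ratio and t_star a non-dimensional time.

def fit_proportionality(t_star, e_ratio):
    """Least-squares slope through the origin: k = sum(x*y) / sum(x*x)."""
    num = sum(x * y for x, y in zip(t_star, e_ratio))
    den = sum(x * x for x in t_star)
    return num / den

# Hypothetical percolation measurements for one PTL sample
t_star = [0.5, 1.0, 1.5, 2.0, 2.5]
e_ratio = [1.1, 1.9, 3.1, 4.0, 5.2]

k = fit_proportionality(t_star, e_ratio)
print(f"proportionality constant k = {k:.3f}")
```

Comparing the fitted k across samples from different manufacturers would then give the unique, flow-regime-independent characterization the abstract describes.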
Abstract:
This contribution takes up the size effects presented at the WGTL-Kolloquium 2006, which are now the subject of more in-depth research. These effects involve a shift from secondary to primary effects, triggered by the downscaling of component dimensions and the resulting change in the mass-to-surface ratio. First results from estimating the forces involved are presented, and the consequences of these forces for the conveying process are discussed.
Abstract:
Traditionally, desertification research has focused on degradation assessments, while prevention and mitigation strategies have received insufficient emphasis, although the concept of sustainable land management (SLM) is increasingly acknowledged. SLM strategies are interventions at the local to regional scale that aim to increase productivity, protect the natural resource base, and improve livelihoods. The global WOCAT initiative and its partners have developed harmonized frameworks to compile, evaluate, and analyse the impact of SLM practices around the globe. Recent studies within the EU research project DESIRE developed a methodological framework that combines a collective learning and decision-making approach with the use of best practices from the WOCAT database. In-depth assessment of 30 technologies and 8 approaches from 17 desertification sites enabled an evaluation of how SLM addresses prevalent dryland threats such as water scarcity, soil and vegetation degradation, low production, climate change, resource-use conflicts, and migration. Among the impacts attributed to the documented technologies, the most frequently mentioned were diversified and enhanced production and better management of water and soil degradation, whether through water harvesting, improved soil moisture, or reduced runoff. Water harvesting offers under-exploited opportunities for the drylands and the predominantly rainfed farming systems of the developing world. Recently compiled guidelines introduce the concepts behind water harvesting and propose a harmonized classification system, followed by an assessment of the suitability, adoption, and up-scaling of practices. Case studies range from large-scale floodwater spreading that makes alluvial plains cultivable, to systems that boost cereal production on small farms, to practices that collect and store water from household compounds.
Once contextualized and set in appropriate institutional frameworks, these practices can form part of an overall adaptation strategy for land users. More field research is needed to reinforce expert assessments of SLM impacts and to provide the evidence-based rationale for investing in SLM. This includes developing methods to quantify and value ecosystem services, both on-site and off-site, and to assess the resilience of SLM practices, a goal currently pursued within the new EU CASCADE project.
Abstract:
• Premise of the study: Isometric and allometric scaling of a conserved floral plan could provide a parsimonious mechanism for rapid and reversible transitions between breeding systems. This scaling may occur during transitions between predominant autogamy and xenogamy, contributing to the maintenance of a stable mixed mating system.
• Methods: We compared nine disjunct populations of the polytypic, mixed mating species Oenothera flava (Onagraceae) to two parapatric relatives, the obligately xenogamous species O. acutissima and the mixed mating species O. triloba. We compared floral morphology of all taxa using principal component analysis (PCA) and developmental trajectories of floral organs using ANCOVA homogeneity of slopes.
• Key results: The PCA revealed both isometric and allometric scaling of a conserved floral plan. Three principal components (PCs) explained 92.5% of the variation in the three species. PC1 predominantly loaded on measures of floral size and accounted for 36% of the variation. PC2 accounted for 35% of the variation, predominantly in traits that influence pollinator handling. PC3 accounted for 22% of the variation, primarily in anther–stigma distance (herkogamy). During O. flava subsp. taraxacoides development, style elongation was accelerated relative to anthers, resulting in positive herkogamy. During O. flava subsp. flava development, style elongation was decelerated, resulting in zero or negative herkogamy. Of the two populations with intermediate morphology, style elongation was accelerated in one population and decelerated in the other.
• Conclusions: Isometric and allometric scaling of floral organs in North American Oenothera section Lavauxia drives variation in breeding system. Multiple developmental paths to intermediate phenotypes support the likelihood of multiple mating system transitions.
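The variance-explained percentages reported for PC1–PC3 above come from a standard PCA decomposition. A minimal sketch of that computation is given below; the trait matrix is randomly generated for illustration and has no relation to the study's measurements.

```python
# Illustrative sketch (hypothetical data, not the study's analysis):
# computing the proportion of variance explained by each principal
# component of a trait matrix via singular value decomposition.
import numpy as np

def variance_explained(X):
    """Return the fraction of total variance captured by each PC."""
    Xc = X - X.mean(axis=0)                       # center each trait column
    s = np.linalg.svd(Xc, compute_uv=False)       # singular values, descending
    var = s ** 2                                  # variance along each PC
    return var / var.sum()

# Hypothetical matrix: rows = individual flowers, columns = floral traits
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))
ratios = variance_explained(X)
print(ratios.round(3))  # fractions sum to 1, in decreasing order
```

In a study like the one above, one would then inspect the component loadings to interpret each PC (e.g., overall floral size versus herkogamy).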
Abstract:
Well-established methods exist for measuring party positions, but reliable means for estimating intra-party preferences remain underdeveloped. While most efforts focus on estimating the ideal points of individual legislators based on inductive scaling of roll call votes, this data suffers from two problems: selection bias due to unrecorded votes and strong party discipline, which tends to make voting a strategic rather than a sincere indication of preferences. By contrast, legislative speeches are relatively unconstrained, as party leaders are less likely to punish MPs for speaking freely as long as they vote with the party line. Yet, the differences between roll call estimations and text scalings remain essentially unexplored, despite the growing application of statistical analysis of textual data to measure policy preferences. Our paper addresses this lacuna by exploiting a rich feature of the Swiss legislature: on most bills, legislators both vote and speak many times. Using this data, we compare text-based scaling of ideal points to vote-based scaling from a crucial piece of energy legislation. Our findings confirm that text scalings reveal larger intra-party differences than roll calls. Using regression models, we further explain the differences between roll call and text scalings by attributing differences to constituency-level preferences for energy policy.
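The comparison described above amounts to correlating two ideal-point estimates for the same legislators, one derived from speeches and one from roll-call votes. A minimal sketch follows; the function and the ideal-point values are hypothetical placeholders, not the paper's estimates.

```python
# Illustrative sketch (hypothetical data, not the paper's estimates):
# comparing text-based and vote-based ideal points for the same
# legislators with a Pearson correlation.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical ideal points: the text scalings spread legislators more
# widely within each party bloc than the roll-call scalings do.
vote_based = [-1.0, -0.9, -0.8, 0.8, 0.9, 1.0]
text_based = [-1.4, -0.3, -1.0, 0.2, 1.3, 0.9]

r = pearson(vote_based, text_based)
print(f"r = {r:.3f}")
```

A high but imperfect correlation, with larger within-party dispersion in the text-based estimates, is the pattern the paper's findings describe.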
Abstract:
The first complete cyclic sedimentary successions for the early Paleogene recovered by drilling multiple holes were retrieved during two ODP expeditions: Leg 198 (Shatsky Rise, NW Pacific Ocean) and Leg 208 (Walvis Ridge, SE Atlantic Ocean). These new records allow us to construct a comprehensive, astronomically calibrated stratigraphic framework of unprecedented accuracy for both the Atlantic and Pacific Oceans, covering the entire Paleocene epoch, based on the identification of the stable long-eccentricity cycle (405 kyr). High-resolution X-ray fluorescence (XRF) core-scanner and non-destructive core-logging data from Sites 1209 through 1211 (Leg 198) and Sites 1262 and 1267 (Leg 208) form the basis for this robust chronostratigraphy. Previously investigated marine (ODP Sites 1001 and 1051) and land-based (e.g., Zumaia) sections have been integrated as well. The high-fidelity chronology is a prerequisite for deciphering the mechanisms behind prominent transient climatic events and provides completely new insights into greenhouse-climate variability in the early Paleogene. We demonstrate that the Paleocene epoch spans 24 long-eccentricity cycles. We also show that no definitive absolute age datums for the K/Pg boundary or the Paleocene-Eocene Thermal Maximum (PETM) can be provided at present, because of remaining uncertainties in orbital solutions and radiometric dating. However, we provide two options for tuning the Paleocene, offset from each other by only 405 kyr. Our orbitally calibrated, integrated Leg 208 magnetostratigraphy is used to revise the Geomagnetic Polarity Time Scale (GPTS) for Chrons C29 through C25. We established a high-resolution calcareous nannofossil biostratigraphy for the South Atlantic that allows a much more detailed relative scaling of stages against biozones. A re-evaluation of the South Atlantic spreading-rate model reveals higher-frequency oscillations in spreading rates for magnetochrons C28r, C27n, and C26n.
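The cycle count above implies a straightforward duration estimate, sketched here as a back-of-envelope check (an illustration, not a result from the thesis):

```python
# Consistency check: 24 stable long-eccentricity cycles of 405 kyr each
# imply a Paleocene duration of roughly 9.7 Myr, and the two proposed
# tuning options differ by exactly one such cycle (405 kyr).
CYCLE_KYR = 405   # stable long-eccentricity period
n_cycles = 24     # long-eccentricity cycles spanned by the Paleocene

duration_myr = n_cycles * CYCLE_KYR / 1000
print(f"Paleocene duration = {duration_myr:.2f} Myr")  # 9.72 Myr
```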