962 results for Two variable oregonator model
Abstract:
PURPOSE: To ascertain whether the volume and circumference of the lacrimal sac and nasolacrimal duct, as measured by contrast-enhanced computed tomographic dacryocystography (CT-DCG) before and after balloon dacryoplasty, can be used to predict clinical success in children with congenital nasolacrimal duct obstruction. METHODS: Nasolacrimal ducts of children aged 2 to 6 years with clinical signs of congenital nasolacrimal duct obstruction undergoing balloon dilation were imaged with contrast-enhanced CT-DCG before and 5 minutes after the procedure. The circumference of the most dilated portion of the lacrimal sac was measured on the axial plane. The volume of contrast within the nasolacrimal duct and sac was also measured before and after the procedure. Clinical success was defined as the disappearance of signs of epiphora. RESULTS: A total of 18 nasolacrimal ducts of 13 children were included. The average circumference of the most dilated portion of the lacrimal sac was 1.30 +/- 0.45 cm (range, 0.64-2.50 cm) before the procedure. The average contrast volume was 0.12 +/- 0.08 cm³ (range, 0.01-0.38 cm³) before and 0.07 +/- 0.06 cm³ (range, 0.01-0.20 cm³) after (P = 0.01). Data were analyzed using multivariate logistic regression with a backward variable input model; a decrease in contrast volume from before to after dilation (P = 0.04) was associated with clinical success, whereas a larger size of the most dilated portion of the lacrimal sac (P = 0.01) was associated with clinical failure. CONCLUSIONS: Contrast-enhanced CT-DCG provides useful information about nasolacrimal anatomy in children with congenital nasolacrimal duct obstruction. A decrease in contrast volume from before to after balloon dilation was predictive of success; a larger size of the most dilated portion of the lacrimal sac was associated with clinical failure. (J AAPOS 2012;16:464-467)
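The predictor analysis above uses multivariate logistic regression with backward variable elimination. As a minimal, self-contained sketch of that procedure (the data, the variable names `dvol` and `noise`, and the Wald-test elimination rule are illustrative assumptions, not the study's data or exact protocol):

```python
import math

import numpy as np

def fit_logistic(X, y, iters=50):
    """Fit logistic regression by Newton-Raphson; return coefficients and SEs."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = 1.0 / (1.0 + np.exp(-(X @ beta)))
        W = mu * (1.0 - mu)
        H = X.T @ (X * W[:, None])            # observed information matrix
        step = np.linalg.solve(H, X.T @ (y - mu))
        beta = beta + step
        if np.max(np.abs(step)) < 1e-10:
            break
    se = np.sqrt(np.diag(np.linalg.inv(H)))
    return beta, se

def backward_eliminate(X, y, names, alpha=0.05):
    """Refit after dropping the least significant predictor (two-sided Wald
    p-value) until every remaining predictor has p <= alpha.
    Column 0 is the intercept and is never dropped."""
    keep = list(range(X.shape[1]))
    while True:
        beta, se = fit_logistic(X[:, keep], y)
        pvals = [math.erfc(abs(b / s) / math.sqrt(2.0)) for b, s in zip(beta, se)]
        if len(keep) == 1:
            break
        worst = max(range(1, len(keep)), key=lambda i: pvals[i])
        if pvals[worst] <= alpha:
            break
        keep.pop(worst)
    return [names[j] for j in keep], beta

# Synthetic example: 32 ducts; "dvol" (contrast volume decreased: 0/1) drives
# success, "noise" is a balanced, uninformative covariate.
dvol = np.array([0.0] * 16 + [1.0] * 16)
noise = np.array(([1.0] * 8 + [-1.0] * 8) * 2)
success = np.array([1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0,
                    1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0], float)
X = np.column_stack([np.ones_like(dvol), dvol, noise])
kept, coefs = backward_eliminate(X, success, ["intercept", "dvol", "noise"])
```

On this synthetic data the balanced `noise` covariate is eliminated first (its Wald p-value is near 1), while `dvol` survives with a coefficient equal to the log odds ratio between the two groups.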
Abstract:
We present a new method to quantify substructures in clusters of galaxies, based on the analysis of the intensity of structures. This analysis is done in a residual image, the result of subtracting from the X-ray image a surface brightness model obtained by fitting a two-dimensional analytical model (beta-model or Sersic profile) with elliptical symmetry. Our method is applied to 34 clusters observed by the Chandra X-ray Observatory that are in the redshift range z ∈ [0.02, 0.2] and have a signal-to-noise ratio (S/N) greater than 100. We present the calibration of the method and the relations between the substructure level and physical quantities such as the mass, X-ray luminosity, temperature, and cluster redshift. We use our method to separate the clusters into two sub-samples of high and low substructure levels. We conclude, using Monte Carlo simulations, that the method recovers the true amount of substructure very well for clusters with small angular core radii (with respect to the whole image size) and good-S/N observations. We find no evidence of correlation between the substructure level and physical properties of the clusters such as gas temperature, X-ray luminosity, and redshift; however, our analysis suggests a trend between the substructure level and cluster mass. The scaling relations for the two sub-samples (high- and low-substructure-level clusters) are different (they present an offset, i.e., at fixed mass or temperature, low-substructure clusters tend to be more X-ray luminous). This is an important result for cosmological tests using the mass-luminosity relation to obtain the cluster mass function, since such tests rely on the assumption that clusters do not present different scaling relations according to their dynamical state.
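The model-subtraction step can be sketched in a few lines: build a circular beta-model image (the standard surface-brightness form S(r) = S0 [1 + (r/rc)²]^(0.5 - 3β)), subtract it from an "observed" image, and quantify the positive residuals. The substructure statistic below is a toy normalization, not the calibrated estimator of the paper, and the elliptical-symmetry fit itself is omitted:

```python
import numpy as np

def beta_model(shape, x0, y0, s0, rc, beta):
    """Circular beta-model surface brightness:
    S(r) = s0 * (1 + (r/rc)**2) ** (0.5 - 3*beta)."""
    yy, xx = np.indices(shape)
    r2 = (xx - x0) ** 2 + (yy - y0) ** 2
    return s0 * (1.0 + r2 / rc ** 2) ** (0.5 - 3.0 * beta)

def substructure_fraction(image, model):
    """Toy statistic: flux in positive residuals relative to the model flux."""
    resid = image - model
    return resid[resid > 0.0].sum() / model.sum()

# Smooth cluster plus an off-centre clump standing in for substructure.
model = beta_model((128, 128), 64.0, 64.0, s0=10.0, rc=12.0, beta=0.67)
image = model.copy()
image[90:94, 40:44] += 5.0          # 16 pixels of excess emission
level = substructure_fraction(image, model)
```

A perfectly relaxed cluster gives a level of zero, and the statistic grows with the flux of the clump.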
Abstract:
Objectives: To evaluate the effect of insertion torque on micromotion under a lateral force in three different implant designs. Material and methods: Thirty-six implants with identical thread design but different cutting groove designs were divided into three groups: (1) non-fluted (no cutting groove, solid screw-form); (2) fluted (90° cut at the apex, tap design); and (3) Blossom™ (patent pending; non-fluted with an engineered trimmed thread design). The implants were screwed into polyurethane foam blocks and the insertion torque was recorded after each 90° turn by a digital torque gauge. Controlled lateral loads of 10 N, followed by increments of 5 N up to 100 N, were sequentially applied by a digital force gauge on a titanium abutment. Statistical comparison was performed with a two-way mixed-model ANOVA that evaluated implant design group, linear effects of turns and displacement loads, and their interaction. Results: While insertion torque increased as a function of the number of turns for each design, the slope and final values increased (P < 0.001) progressively from the Blossom™ to the fluted to the non-fluted design (mean +/- standard deviation [SD] = 64.1 +/- 26.8, 139.4 +/- 17.2, and 205.23 +/- 24.3 Ncm, respectively). While a linear relationship between horizontal displacement and lateral force was observed for each design, the slope and maximal displacement increased (P < 0.001) progressively from the Blossom™ to the fluted to the non-fluted design (mean +/- SD = 530 +/- 57.7, 585.9 +/- 82.4, and 782.33 +/- 269.4 μm, respectively). There were negligible to moderate levels of association between insertion torque and lateral displacement in the Blossom™, fluted, and non-fluted design groups, respectively. Conclusion: Insertion torque was reduced in implant macrodesigns that incorporated cutting edges, and lower insertion torque was generally associated with decreased micromovement. However, insertion torque and micromotion were unrelated within implant designs, particularly for those designs showing the least insertion torque.
Abstract:
Background and Study Aim: Grip strength endurance is important for Brazilian Jiu-Jitsu (BJJ). Thus, the aims of this study were: (a) to test the reliability of two kimono grip strength tests, the maximum static lift (MSL) and the maximum number of repetitions (MNR); and (b) to examine differences between elite and non-elite BJJ players in these tests. Material/Methods: Thirty BJJ players participated in two phases: "A" to test reliability and "B" to compare elite and non-elite. In phase A, twenty participants performed the MSL and, 15 min later, the MNR, on two occasions separated by a 24-h interval. In phase B, ten other BJJ practitioners (non-elite) and ten athletes (elite) performed the same tests. The intraclass correlation coefficient (ICC), two-way mixed model (3,1), Bland-Altman plots, and the limits of agreement were used to test reliability; correlations between the tests were evaluated by Pearson correlations; and an independent t-test (P < 0.05) was used to compare elite vs. non-elite. Results: The ICC was high for repeated measurements on different days of phase A (MSL: r = 0.99; MNR: r = 0.97). Limits of agreement for suspension time were -6.9 to 2.4 s, with a mean difference of -2.3 s (CI: -3.3 to -1.2 s), while for the number of repetitions the limits of agreement were -2.9 to 2.3 repetitions, with a mean difference of -0.3 repetitions (CI: -0.9 to 0.3 repetitions). In phase B, the elite group presented better performance in both tests (P < 0.05) compared to the non-elite group (56 +/- 10 s vs. 37 +/- 11 s in the MSL and 15 +/- 4 vs. 8 +/- 3 repetitions in the MNR). Moderate correlations were found between MSL and MNR for absolute values in the test (r = 0.475; p = 0.034) and retest phases (r = 0.489; p = 0.029), while moderate and high correlations were found for relative values in the test (r = 0.615; p = 0.004) and retest phases (r = 0.716; p = 0.001), respectively. Conclusions: The proposed tests are reliable, and both static and dynamic grip strength endurance tests seem to differentiate BJJ athletes of different levels.
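The reliability statistics named here, ICC(3,1) from a two-way ANOVA decomposition and Bland-Altman limits of agreement, can be sketched as follows (the formulas are the standard Shrout-Fleiss consistency form; the data are invented, not the study's measurements):

```python
import numpy as np

def icc_3_1(Y):
    """ICC(3,1): two-way mixed model, consistency, single measures
    (Shrout & Fleiss). Y is an (n subjects x k occasions) array."""
    n, k = Y.shape
    grand = Y.mean()
    ss_rows = k * ((Y.mean(axis=1) - grand) ** 2).sum()   # between subjects
    ss_cols = n * ((Y.mean(axis=0) - grand) ** 2).sum()   # between occasions
    ss_err = ((Y - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

def bland_altman(a, b):
    """Mean difference (bias) and 95% limits of agreement, bias +/- 1.96 SD."""
    d = np.asarray(a, float) - np.asarray(b, float)
    bias, sd = d.mean(), d.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Made-up test-retest suspension times (s) for four athletes: the retest is a
# constant 2 s above the test, so consistency is perfect.
Y = np.array([[35.0, 37.0],
              [52.0, 54.0],
              [60.0, 62.0],
              [41.0, 43.0]])
icc = icc_3_1(Y)
bias, lo, hi = bland_altman(Y[:, 0], Y[:, 1])
```

A constant shift between occasions does not lower a consistency-form ICC, which is exactly why the Bland-Altman bias is reported alongside it.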
Abstract:
Semi-qualitative probabilistic networks (SQPNs) merge two important graphical model formalisms: Bayesian networks and qualitative probabilistic networks. They provide a very general modeling framework by allowing the combination of numeric and qualitative assessments over a discrete domain, and can be compactly encoded by exploiting the same factorization of joint probability distributions that underlies Bayesian networks. This paper explores the computational complexity of inferences in semi-qualitative probabilistic networks, taking polytree-shaped networks as its main target. We show that the inference problem is coNP-complete for binary polytrees with multiple observed nodes. We also show that inferences can be performed in time linear in the number of nodes if there is a single observed node. Because our proof is constructive, we obtain an efficient linear-time algorithm for SQPNs under such assumptions. To the best of our knowledge, this is the first exact polynomial-time algorithm for SQPNs. Together these results provide a clear picture of the inferential complexity in polytree-shaped SQPNs.
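The single-observed-node case can be illustrated on the purely numeric (Bayesian network) part of the formalism: on a chain-shaped polytree, one lambda message per edge yields the exact posterior in time linear in the number of nodes. The qualitative/interval side of SQPN inference is omitted here, and the conditional probability tables are made up:

```python
import numpy as np

# A minimal chain-shaped polytree A -> B -> C over binary variables, with
# illustrative (made-up) conditional probability tables.
pA = np.array([0.6, 0.4])                 # P(A)
pB_A = np.array([[0.7, 0.3],              # P(B | A=0)
                 [0.2, 0.8]])             # P(B | A=1)
pC_B = np.array([[0.9, 0.1],              # P(C | B=0)
                 [0.4, 0.6]])             # P(C | B=1)

def posterior_A_given_C(c):
    """Exact posterior P(A | C=c) by one lambda message per edge: with a
    single observed node, inference is linear in the number of nodes."""
    lam_B = pC_B[:, c]        # message C -> B: lambda(b) = P(C=c | b)
    lam_A = pB_A @ lam_B      # message B -> A: sum_b P(b | a) * lambda(b)
    post = pA * lam_A
    return post / post.sum()
```

The same answer can be checked by brute-force enumeration over all joint configurations, which is exponential in general but trivial for this three-node chain.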
Abstract:
The Assimilation in the Unstable Subspace (AUS) was introduced by Trevisan and Uboldi in 2004, and developed by Trevisan, Uboldi and Carrassi, to minimize the analysis and forecast errors by exploiting the flow-dependent instabilities of the forecast-analysis cycle system, which may be thought of as a system forced by observations. In the AUS scheme the assimilation is obtained by confining the analysis increment to the unstable subspace of the forecast-analysis cycle system, so that it has the same structure as the dominant instabilities of the system. The unstable subspace is estimated by Breeding on the Data Assimilation System (BDAS). AUS-BDAS has already been tested in realistic models and observational configurations, including a Quasi-Geostrophic model and a high-dimensional, primitive equation ocean model; the experiments include both fixed and "adaptive" observations. In these contexts, the AUS-BDAS approach greatly reduces the analysis error, with reasonable computational costs for data assimilation compared, for example, with a prohibitively expensive full Extended Kalman Filter. This is a follow-up study in which we revisit the AUS-BDAS approach in the more basic, highly nonlinear Lorenz 1963 convective model. We run observation system simulation experiments in a perfect model setting, and also with two types of model error: random and systematic. In the different configurations examined, and in a perfect model setting, AUS once again shows better efficiency than other advanced data assimilation schemes. In the present study, we develop an iterative scheme that leads to a significant improvement of the overall assimilation performance with respect to standard AUS as well. In particular, it boosts the efficiency of tracking regime changes, at a low computational cost. Other data assimilation schemes need estimates of ad hoc parameters, which have to be tuned for the specific model at hand.
In Numerical Weather Prediction models, the tuning of parameters, and in particular the estimation of the model error covariance matrix, may turn out to be quite difficult. Our proposed approach, instead, may be easier to implement in operational models.
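An observation system simulation (twin) experiment of the kind described starts from a reference "truth" integration of the Lorenz 1963 model, from which noisy observations are later drawn. A minimal sketch with the classical parameters (sigma = 10, rho = 28, beta = 8/3), leaving out the assimilation scheme itself:

```python
import numpy as np

def lorenz63(v, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Lorenz (1963) convective model with the classical parameters:
    dx/dt = sigma*(y - x), dy/dt = x*(rho - z) - y, dz/dt = x*y - beta*z."""
    x, y, z = v
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(f, v, dt):
    """One fourth-order Runge-Kutta step."""
    k1 = f(v)
    k2 = f(v + 0.5 * dt * k1)
    k3 = f(v + 0.5 * dt * k2)
    k4 = f(v + dt * k3)
    return v + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# "Truth" trajectory for a twin experiment: an assimilation scheme would be
# fed noisy observations of this run and judged by its analysis error.
v = np.array([1.0, 1.0, 1.0])
truth = [v]
for _ in range(2000):               # 20 time units at dt = 0.01
    v = rk4_step(lorenz63, v, 0.01)
    truth.append(v)
```

The two nontrivial fixed points of the system, (±sqrt(beta*(rho-1)), ±sqrt(beta*(rho-1)), rho-1), are the centres of the two regimes whose transitions the iterative scheme is said to track.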
Abstract:
This thesis is dedicated to the analysis of non-linear pricing in oligopoly. Non-linear pricing is a fairly predominant practice in most real markets, which are mostly characterized by some amount of competition. The sophistication of pricing practices has increased in recent decades thanks to technological advances that have allowed companies to gather more and more data on consumers' preferences. The first essay of the thesis highlights the main characteristics of oligopolistic non-linear pricing. Non-linear pricing is a special case of price discrimination. The theory of price discrimination has to be modified in the presence of oligopoly: in particular, a crucial role is played by the competitive externality, which implies that product differentiation is closely related to the possibility of discriminating. The essay reviews the theory of competitive non-linear pricing starting from its foundations, mechanism design under common agency. The different approaches to modeling non-linear pricing are then reviewed. In particular, the difference between price and quantity competition is highlighted. Finally, the close link between non-linear pricing and recent developments in the theory of vertical differentiation is explored. The second essay shows how the effects of non-linear pricing are determined by the relationship between the demand and the technological structure of the market. The chapter focuses on a model in which firms supply a homogeneous product in two different sizes. Information about consumers' reservation prices is incomplete and the production technology is characterized by size economies. The model provides insights into the sizes of the products found in the market. Four equilibrium regions are identified, depending on the intensity of size economies relative to consumers' valuation of the good: regions in which the product is supplied in a single size, in several different sizes, or only in a very large one.
Both the private and the social desirability of non-linear pricing vary across the different equilibrium regions. The third essay considers the broadband internet market. Non-discrimination issues are at the core of the recent debate on whether or not to regulate the internet. One of the main questions posed is whether the telecom companies that own the networks constituting the internet should be allowed to offer quality-contingent contracts to content providers. The aim of this essay is to analyze the issue through a stylized two-sided market model of the web that highlights the effects of such discrimination on quality, prices, and the participation in the internet of content providers and final users. An overall welfare comparison is proposed, concluding that the final effects of regulation crucially depend on both the technology and the preferences of agents.
Abstract:
In a large number of problems, the high dimensionality of the search space, the vast number of variables, and economic constraints limit the ability of classical techniques to reach the optimum of a function, known or unknown. In this thesis we investigate the possibility of combining approaches from advanced statistics with optimization algorithms so as to better explore the combinatorial search space and to increase the performance of the approaches. To this purpose we propose two methods: (i) Model Based Ant Colony Design and (ii) Naïve Bayes Ant Colony Optimization. We test the performance of the two proposed solutions in a simulation study and we apply the novel techniques to an application in the field of Enzyme Engineering and Design.
Abstract:
In this work, the phase transitions of a single polymer chain were studied by means of the Monte Carlo method. The bond fluctuation model was used for the simulations, with an attractive square-well potential acting between all monomers of the chain. Three kinds of moves were introduced to relax the chain properly: the local hop move, the reptation move, and the pivot move. A hierarchical search algorithm was introduced to check the excluded-volume interaction and to determine the number of neighbours of each monomer. The density of states of the model was determined with the Wang-Landau algorithm, and from it thermodynamic quantities were computed in order to study the phase transitions of the single chain. We first studied a free polymer chain. The coil-globule transition turns out to be a continuous transition in which the coil collapses into a globule. The globule-globule transition at lower temperatures is a first-order phase transition, with coexistence of the liquid and the solid globule, the latter having a crystalline structure. In the thermodynamic limit the two transition temperatures are identical, which corresponds to a vanishing of the liquid phase. In two dimensions, the model shows a continuous coil-globule transition with a locally ordered structure. We further studied a polymer mushroom, i.e. a grafted polymer chain, between two repulsive walls at separation D. The phase behaviour of the chain exhibits a dimensional crossover. Both grafting and confinement promote the coil-globule transition, accompanied by a symmetry breaking, since the extension of the chain parallel to the walls shrinks faster than that perpendicular to them. Confinement hinders the globule-globule transition, whereas grafting appears to have no influence on it.
The transition temperatures in the thermodynamic limit are again identical within the error bars. The specific heat of the same model, but with a repulsive square-well potential, shows a Schottky anomaly, typical of a two-level system.
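The Wang-Landau procedure used above to obtain the density of states can be illustrated on a much simpler system than the bond fluctuation model: an 8-spin Ising ring, whose exact density of states is known. The sketch below is a generic textbook Wang-Landau loop (flat-histogram random walk in energy), not the thesis code:

```python
import math
import random

def ising_energy(spins):
    """Energy of a 1D Ising ring with J = 1: E = -sum_i s_i * s_(i+1)."""
    n = len(spins)
    return -sum(spins[i] * spins[(i + 1) % n] for i in range(n))

def wang_landau(n=8, flatness=0.8, ln_f_final=1e-3, sweep=10000, seed=1):
    """Wang-Landau estimate of ln g(E) for an n-spin Ising ring: a random
    walk in energy that penalizes already-visited levels, halving the
    modification factor whenever the energy histogram is flat enough."""
    rng = random.Random(seed)
    spins = [rng.choice((-1, 1)) for _ in range(n)]
    E = ising_energy(spins)
    ln_g, hist = {E: 0.0}, {E: 0}
    ln_f = 1.0
    while ln_f > ln_f_final:
        for _ in range(sweep):
            i = rng.randrange(n)
            dE = 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % n])
            E_new = E + dE
            # accept with probability min(1, g(E) / g(E_new))
            if math.log(rng.random() + 1e-300) < ln_g.get(E, 0.0) - ln_g.get(E_new, 0.0):
                spins[i] = -spins[i]
                E = E_new
            ln_g[E] = ln_g.get(E, 0.0) + ln_f
            hist[E] = hist.get(E, 0) + 1
        if min(hist.values()) > flatness * (sum(hist.values()) / len(hist)):
            hist = {e: 0 for e in hist}
            ln_f *= 0.5
    return ln_g

ln_g = wang_landau()
```

From ln g(E) one can then compute canonical averages at any temperature, e.g. the specific heat whose Schottky anomaly is mentioned above, without further simulation.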
Abstract:
In this thesis we consider three different models for strongly correlated electrons, namely a multi-band Hubbard model as well as the spinless Falicov-Kimball model, both with a semi-elliptical density of states in the limit of infinite dimensions d, and the attractive Hubbard model on a square lattice in d=2.
In the first part, we study a two-band Hubbard model with unequal bandwidths and anisotropic Hund's rule coupling (J_z-model) in the limit of infinite dimensions within the dynamical mean-field theory (DMFT). Here, the DMFT impurity problem is solved with the use of quantum Monte Carlo (QMC) simulations. Our main result is that the J_z-model describes the occurrence of an orbital-selective Mott transition (OSMT), in contrast to earlier findings. We investigate the model with a high-precision DMFT algorithm, which was developed as part of this thesis and which supplements QMC with a high-frequency expansion of the self-energy.
The main advantage of this scheme is the extraordinary accuracy of the numerical solutions, which can be obtained already with moderate computational effort, so that studies of multi-orbital systems within the DMFT+QMC are strongly improved. We also found that a suitably defined Falicov-Kimball (FK) model exhibits an OSMT, revealing the close connection of the Falicov-Kimball physics to the J_z-model in the OSM phase.
In the second part of this thesis we study the attractive Hubbard model in two spatial dimensions within second-order self-consistent perturbation theory.
This model is considered on a square lattice at finite doping and at low temperatures. Our main result is that the predictions of first-order perturbation theory (Hartree-Fock approximation) are renormalized by a factor of the order of unity even at arbitrarily weak interaction (U->0). The renormalization factor q can be evaluated as a function of the filling n for 0
Abstract:
This thesis analyses problems related to the applicability, in business environments, of Process Mining tools and techniques. The first contribution is a presentation of the state of the art of Process Mining and a characterization of companies in terms of their "process awareness". The work continues by identifying the circumstances in which problems can emerge: data preparation, the actual mining, and the interpretation of results. Other problems are the configuration of parameters by non-expert users and computational complexity. We concentrate on two possible scenarios: "batch" and "on-line" Process Mining. Concerning batch Process Mining, we first investigated the data preparation problem and proposed a solution for the identification of the "case-ids" whenever this field is not explicitly indicated. After that, we concentrated on problems at mining time and proposed a generalization of a well-known control-flow discovery algorithm that exploits non-instantaneous events. The usage of interval-based recording leads to an important improvement in performance. Later on, we report our work on parameter configuration for non-expert users. We present two approaches to select the "best" parameter configuration: one is completely autonomous; the other requires human interaction to navigate a hierarchy of candidate models. Concerning data interpretation and results evaluation, we propose two metrics: a model-to-model metric and a model-to-log metric. Finally, we present an automatic approach for the extension of a control-flow model with social information, in order to simplify the analysis of these perspectives. The second part of this thesis deals with control-flow discovery algorithms in on-line settings. We propose a formal definition of the problem and two baseline approaches.
Two actual mining algorithms are then proposed: the first is an adaptation, to the control-flow discovery problem, of a frequency counting algorithm; the second is a framework of models that can be used for different kinds of streams (stationary versus evolving).
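The abstract does not name the frequency counting algorithm; Lossy Counting (Manku & Motwani) is a standard choice for counting activities or direct-succession relations over an event stream with bounded memory, and a minimal sketch looks like this (the stream and activity names are invented):

```python
import math

class LossyCounter:
    """Lossy Counting (Manku & Motwani): approximate item frequencies over a
    stream using bounded memory. Estimated counts undercount the true count
    by at most eps * n, where n is the number of items seen so far."""

    def __init__(self, eps):
        self.eps = eps
        self.width = math.ceil(1.0 / eps)   # bucket width
        self.n = 0
        self.entries = {}                   # item -> (count, max undercount)

    def add(self, item):
        self.n += 1
        bucket = math.ceil(self.n / self.width)
        count, delta = self.entries.get(item, (0, bucket - 1))
        self.entries[item] = (count + 1, delta)
        if self.n % self.width == 0:        # bucket boundary: prune
            self.entries = {k: (c, d) for k, (c, d) in self.entries.items()
                            if c + d > bucket}

    def frequent(self, s):
        """Items whose true frequency may exceed s * n (no false negatives
        among items with true frequency >= s * n)."""
        return {k for k, (c, d) in self.entries.items()
                if c >= (s - self.eps) * self.n}

# Stream: one frequent activity interleaved with many one-off activities.
lc = LossyCounter(eps=0.01)
for i in range(1000):
    lc.add("a")
    lc.add("x%d" % i)
```

After the stream, the counter retains only the heavy hitter: the thousand one-off activities are pruned at bucket boundaries, which is exactly what makes the memory footprint bounded.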
Abstract:
The major light-harvesting complex (LHCII) of the photosynthetic apparatus of higher plants is among the most abundant membrane proteins on Earth. Its crystal structure is known. The apoprotein can be overexpressed recombinantly in Escherichia coli and thus modified in many ways by molecular biology. In detergent solution, the denatured protein has the remarkable ability to organize spontaneously into functional protein-pigment complexes that are structurally nearly identical to native LHCII. The folding process takes place in vitro on a timescale of seconds to minutes and depends on the binding of the cofactors chlorophyll a and b as well as various carotenoids.
These properties make LHCII particularly well suited for structural studies by electron paramagnetic resonance (EPR) spectroscopy. EPR requires site-specific spin labelling of LHCII, which was first optimized in this work. Including contributions by others, a broad selection of more than 40 spin-labelled LHCII mutants was available, including an N-terminal "Cys walk". Neither the exchange of individual amino acids required for this nor the attachment of the spin label impaired the function of LHCII. In addition, a protocol was developed for preparing heterogeneously spin-labelled LHCII trimers, i.e. trimers each containing only one spin-labelled monomer.
Spin-labelled samples of detergent-solubilized LHCII were structurally analysed using various EPR techniques. Measuring the water accessibility of individual amino-acid positions by Electron Spin Echo Envelope Modulation (ESEEM) proved particularly informative.
In combination with the established Double Electron-Electron Resonance (DEER) technique for measuring distances between two spin labels, the membrane-embedded core region of LHCII in solution was examined in detail and found to be structurally very similar to the crystal structure. Measurements of regions near the N-terminus that are not resolved crystallographically revealed the previously observed structural dynamics of this domain as a function of the oligomerization state. The new, still to be completed data set of distance distributions and ESEEM water accessibilities of monomeric and trimeric samples should in the near future allow very accurate modelling of the N-terminal domain of LHCII.
In a further part of this work, the folding of the LHCII apoprotein during LHCII assembly was studied in vitro. Previous fluorescence-spectroscopic work had shown that the binding of chlorophyll a and chlorophyll b occurs in successive steps on timescales of less than one minute and of several minutes, respectively. Both the water accessibility of individual amino-acid positions and spin-spin distances changed on similar timescales. The data indicate that the formation of the middle transmembrane helix accompanies the faster chlorophyll a binding, whereas the superhelix formed by the two other transmembrane helices forms only in the slower step, together with chlorophyll b binding.
Abstract:
Stylolites are rough paired surfaces, indicative of localized stress-induced dissolution under a non-hydrostatic state of stress, separated by a clay parting which is believed to be the residuum of the dissolved rock. These structures are the most frequent deformation pattern in monomineralic rocks and thus provide important information about low-temperature deformation and mass transfer. The intriguing roughness of stylolites can be used to assess the amount of volume loss and paleo-stress directions, and to infer the destabilizing processes during pressure solution. But there is little agreement on how stylolites form and why these localized pressure solution patterns develop their characteristic roughness.
Natural bedding-parallel and vertical stylolites were studied in this work to obtain a quantitative description of the stylolite roughness and to understand the governing processes during their formation. Adapting scaling approaches based on fractal principles, it is demonstrated that stylolites show two self-affine scaling regimes, with roughness exponents of 1.1 and 0.5 for small and large length scales, separated by a crossover length at the millimeter scale. Analysis of stylolites from various depths proved that this crossover length is a function of the stress field during formation, as analytically predicted. For bedding-parallel stylolites the crossover length is a function of the normal stress on the interface, but vertical stylolites show a clear in-plane anisotropy of the crossover length, owing to the fact that the in-plane stresses (σ2 and σ3) are dissimilar. Therefore stylolite roughness contains a signature of the stress field during formation.
To address the origin of stylolite roughness, a combined microstructural (SEM/EBSD) and numerical approach is employed.
Microstructural investigations of natural stylolites in limestones reveal that heterogeneities initially present in the host rock (clay particles, quartz grains) are responsible for the formation of the distinctive stylolite roughness. A two-dimensional numerical model, i.e. a discrete linear elastic lattice spring model, is used to investigate the roughness evolving from an initially flat fluid-filled interface induced by heterogeneities in the matrix. This model generates rough interfaces with the same scaling properties as natural stylolites. Furthermore, two coinciding crossover phenomena, in space and in time, exist that separate length and time scales for which the roughening is balanced by either surface or elastic energies. The roughness and growth exponents are independent of the size, amount, and dissolution rate of the heterogeneities. This allows us to conclude that the location of asperities is determined by a polymict multi-scale quenched noise, while the roughening process is governed by inherent processes, i.e. the transition from a surface- to an elastic-energy-dominated regime.
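The self-affine scaling analysis described above can be sketched numerically: estimate the roughness exponent H as the log-log slope of the height-difference correlation function of a 1D profile. The profile below is a synthetic Brownian trace (known H = 0.5), not stylolite data; on a real trace the fit would be done separately below and above the millimetre-scale crossover:

```python
import numpy as np

def roughness_exponent(h, lags):
    """Estimate the self-affine roughness (Hurst) exponent H from the
    height-difference correlation C(d) = <(h(x+d) - h(x))^2>^(1/2) ~ d^H,
    as the slope of log C(d) versus log d."""
    c = [np.sqrt(np.mean((h[l:] - h[:-l]) ** 2)) for l in lags]
    return np.polyfit(np.log(lags), np.log(c), 1)[0]

# Synthetic Brownian profile: H = 0.5. A stylolite trace would instead show
# H ~ 1.1 at small scales and H ~ 0.5 at large scales.
rng = np.random.default_rng(0)
profile = np.cumsum(rng.choice([-1.0, 1.0], size=4096))
H = roughness_exponent(profile, lags=[1, 2, 4, 8, 16, 32, 64])
```

A perfectly straight ramp gives H = 1 exactly, which is a quick sanity check on the estimator.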
Abstract:
An increased incidence of Clostridium difficile infection (CDI) is associated with the emergence of epidemic strains characterised by high genetic diversity. Among the factors that may play a role in CDI is a family of 29 paralogs, the cell wall proteins (CWPs), which compose the outer layer of the bacterial cell and are likely to be involved in colonisation. Previous studies have shown that 12 of the 29 cwp genes are clustered in the same region, named the slpA locus after slpA (cwp1), whereas the remaining 17 paralogs are distributed throughout the genome. The variability of 14 of these 17 cwp paralogs was determined in 40 C. difficile clinical isolates belonging to six of the currently prevailing PCR ribotypes. Based on sequence conservation, these cwp genes were divided into two groups: one comprising cwp loci with highly conserved sequences in all isolates, and the other comprising 5 loci showing low genetic conservation between isolates of the same PCR ribotype as well as between different PCR ribotypes. Three conserved CWPs, Cwp16, Cwp18 and Cwp25, and two variable ones, Cwp26 and Cwp27, were characterised further by Western blot analysis of total cell extracts or S-layer preparations of the C. difficile clinical isolates. Expression of the genetically invariable CWPs is well conserved in all isolates, while the genetically variable CWPs are not always expressed at comparable levels, even in strains containing identical sequences but belonging to different PCR ribotypes. In addition, we analysed the immune response obtained in a protection experiment carried out in hamsters, using a protein microarray approach to study the in vivo expression and the immunoreactivity of several surface proteins, including 18 Cwps.
Abstract:
In this work, four different strongly correlated fermionic multi-band systems are investigated: a multi-impurity Anderson model, two Hubbard models, and a multi-band system as obtained from an ab initio description of a correlated semimetal.
The study of the multi-impurity Anderson model focuses on the influence of the exchange interaction and of non-local correlations between two impurities in a simple cubic lattice. The central result is the distance dependence of the correlations of the impurity electrons, which depends strongly on the lattice dimension and on the relative position of the impurities. Remarkable here is the long range of the correlations along the diagonal direction of the lattice. Furthermore, an antiferromagnetic exchange interaction favours a singlet between the impurity electrons over the Kondo singlets of the individual impurities and thus suppresses the individual Kondo effects.
A two-band Hubbard model, the Jz model, is investigated with regard to its Mott phases as a function of doping and crystal-field splitting on the Bethe lattice. The degeneracy of the bands is lifted by different bandwidths. The most important results are the phase diagrams with respect to interaction, total filling, and crystal-field parameter. Compared to one-band models, the Jz model exhibits additional, so-called orbital-selective Mott phases which, depending on interaction, total filling, and crystal-field parameter, are either metallic or insulating in character. A new aspect arises from the crystal-field parameter, which shifts the ionic single-particle levels relative to each other and for certain values enables an orbital-selective Mott phase of the wide band.
Comparison with approximate analytical solutions and with one-band models makes it possible to distinguish generic many-body and correlation effects from typical multi-band and single-particle effects.
The second Hubbard model studied describes a magneto-optical trap with a finite number of lattice sites on which fermionic atoms are placed. A z-antiferromagnetic phase is obtained when non-local many-body correlations are taken into account, improving on known results of an effective single-particle description.
The correlated semimetal is investigated with regard to correlation effects within a multi-band calculation. The starting point is an ab initio description by density functional theory (DFT), which is then supplemented by the inclusion of local correlations. The many-body effects are illustrated by means of a simple approximation to the interaction and then refined for an interaction model in spherical symmetry. Only a weak quasiparticle renormalization is found. Good agreement is achieved in particular with X-ray spectroscopy experiments.
The numerical results for the Jz model are based on quantum Monte Carlo simulations within the dynamical mean-field theory (DMFT). For all other systems, a multi-band algorithm is developed and implemented that explicitly takes non-diagonal multi-band processes into account.