926 results for Elementary Methods In Number Theory
Abstract:
Background: Myelodysplastic syndromes (MDS) are a group of clonal hematological disorders characterized by ineffective hematopoiesis with morphological evidence of marrow cell dysplasia resulting in peripheral blood cytopenia. Microarray technology has permitted a refined high-throughput mapping of transcriptional activity in the human genome. Non-coding RNAs (ncRNAs) transcribed from intronic regions of genes are involved in a number of processes related to post-transcriptional control of gene expression and in the regulation of exon skipping and intron retention. Characterization of ncRNAs in progenitor cells and stromal cells of MDS patients could be strategic for understanding gene expression regulation in this disease. Methods: In this study, gene expression profiles of CD34+ cells of 4 patients with MDS of the refractory anemia with ringed sideroblasts (RARS) subgroup and stromal cells of 3 patients with MDS-RARS were compared with those of healthy individuals using 44k combined intron-exon oligoarrays, which included probes for exons of protein-coding genes and for non-coding RNAs transcribed from intronic regions in either the sense or antisense strand. Real-time RT-PCR was performed to confirm the expression levels of selected transcripts. Results: In CD34+ cells of MDS-RARS patients, 216 genes were significantly differentially expressed (q-value ≤ 0.01) in comparison to healthy individuals, of which 65 (30%) were non-coding transcripts. In stromal cells of MDS-RARS patients, 12 genes were significantly differentially expressed (q-value ≤ 0.05) in comparison to healthy individuals, of which 3 (25%) were non-coding transcripts. Conclusions: These results demonstrate, for the first time, the differential ncRNA expression profiles of CD34+ cells and stromal cells in MDS-RARS versus healthy individuals, suggesting that ncRNAs may play an important role in the development of myelodysplastic syndromes.
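To make the thresholding step concrete, here is a minimal sketch of a per-probe comparison with a multiple-testing correction. The abstract does not specify how the q-values were computed, so the Benjamini-Hochberg FDR is used as a stand-in, and the data and probe counts below are hypothetical.

```python
# Hypothetical sketch: per-probe differential expression between MDS-RARS
# and healthy CD34+ samples. Benjamini-Hochberg FDR stands in for the
# q-value procedure, whose details are not given in the abstract.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
expr_mds = rng.normal(size=(4, 1000))    # 4 MDS-RARS patients x 1000 probes (toy data)
expr_ctrl = rng.normal(size=(4, 1000))   # 4 healthy controls x 1000 probes

# Welch's t-test for each probe across the two groups
_, pvals = stats.ttest_ind(expr_mds, expr_ctrl, axis=0, equal_var=False)

# FDR correction; flag probes passing the q <= 0.01 threshold
reject, qvals, _, _ = multipletests(pvals, alpha=0.01, method="fdr_bh")
print(f"{reject.sum()} probes called differentially expressed")
```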
Abstract:
INTRODUCTION: The aim of this study was to assess the epidemiological and operational characteristics of the Leprosy Program before and after its integration into the primary healthcare services of the municipality of Aracaju-Sergipe, Brazil. METHODS: Data were drawn from the national database. The study was divided into preintegration (1996-2000) and postintegration (2001-2007) periods. Annual detection rates were calculated. Frequency data on clinico-epidemiological variables of cases detected and treated in the two periods were compared using the Chi-squared (χ2) test, adopting a 5% level of significance. RESULTS: Detection rates overall, and in subjects younger than 15 years, were greater in the postintegration period and were higher than the rates recorded for Brazil as a whole during the same periods. A total of 780 and 1,469 cases were registered during the preintegration and postintegration periods, respectively. Observations for the postintegration period were as follows: I) a higher proportion of cases with disability grade assessed at diagnosis, increasing from 60.9% to 78.8% (p < 0.001), and at the end of treatment, from 41.4% to 44.4% (p < 0.023); II) an increase in the proportion of cases detected by contact examination, from 2.1% to 4.1% (p < 0.001); and III) a lower level of treatment default, with a decrease from 5.64 to 3.35 (p < 0.008). Only 34% of cases registered from 2001 to 2007 were examined. CONCLUSIONS: The shift observed in detection rates overall, and in subjects younger than 15 years, during the postintegration period indicates an increased level of healthcare access. The fall in the number of patients abandoning treatment indicates greater adherence to treatment. However, previous shortcomings in key actions, pivotal to attaining the outcomes and impact envisaged for the program, persisted in the postintegration period.
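As an aside, the chi-squared comparison described above can be reproduced in a few lines. The 2x2 table below is back-calculated from the reported percentages of cases with disability grade assessed at diagnosis (60.9% of 780 vs. 78.8% of 1,469), so the counts are approximate reconstructions, not the study's raw data.

```python
# Sketch of the abstract's chi-squared comparison of frequencies between
# periods; counts are back-calculated from reported percentages (approximate).
from scipy.stats import chi2_contingency

#            assessed  not assessed
table = [[475, 305],     # preintegration:  ~60.9% of 780 cases
         [1158, 311]]    # postintegration: ~78.8% of 1,469 cases

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2e}")  # reject H0 at the 5% level if p < 0.05
```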
Abstract:
Hermite interpolation is increasingly proving to be a powerful numerical tool when applied to different kinds of second-order boundary value problems. In this work we present two Hermite finite element methods for solving viscous incompressible flow problems in both two and three space dimensions. In the two-dimensional case we use the Zienkiewicz triangle to represent the velocity field, and in the three-dimensional case an extension of this element to tetrahedra, still called a Zienkiewicz element. Taking the Stokes system as a model, the pressure is approximated with continuous functions, either piecewise linear or piecewise quadratic, according to the version of the Zienkiewicz element in use, that is, with either incomplete or complete cubics. The methods employ either the standard Galerkin formulation or the Petrov–Galerkin formulation first proposed in Hughes et al. (1986) [18], based on the addition of a balance-of-force term. A priori error analyses point to optimal convergence rates for the Petrov–Galerkin approach, and for the Galerkin formulation too, at least in some particular cases. From the point of view of both accuracy and the global number of degrees of freedom, the new methods are shown to have a favorable cost-benefit ratio compared to velocity Lagrange finite elements of the same order, especially if the Galerkin approach is employed.
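For orientation, the model problem and the flavor of the stabilized formulation look as follows. This is a generic sketch in notation of my own choosing; the element-dependent parameter τ_K and the precise spaces are as in Hughes et al. (1986) and are not spelled out here.

```latex
% Stokes model problem:
-\nu\,\Delta \mathbf{u} + \nabla p = \mathbf{f}, \qquad
\nabla\!\cdot\mathbf{u} = 0 \quad \text{in } \Omega .
% Petrov--Galerkin stabilization in the spirit of Hughes et al. (1986):
% the Galerkin form B(\mathbf{u},p;\mathbf{v},q) is augmented, element by
% element, with a residual-based balance-of-force term
\sum_{K \in \mathcal{T}_h} \tau_K
\left( -\nu\,\Delta\mathbf{u} + \nabla p - \mathbf{f},\; \nabla q \right)_{K} .
```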
Abstract:
In molecular and atomic devices the interaction between electrons and ionic vibrations plays an important role in electronic transport. The electron-phonon coupling can cause the loss of the electron's phase coherence, the opening of new conductance channels, and the suppression of purely elastic ones. From the technological viewpoint, phonons might restrict the efficiency of electronic devices through energy dissipation, causing heating, power loss and instability. The state of the art in electron transport calculations consists in combining ab initio calculations via Density Functional Theory (DFT) with the Non-Equilibrium Green's Function (NEGF) formalism. In order to include electron-phonon interactions, one needs in principle to include a self-energy scattering term in the open-system Hamiltonian which takes into account the effect of the phonons on the electrons and vice versa. This term can, however, be obtained approximately by perturbative methods. In the First Born Approximation one considers only the first-order terms of the electronic Green's function expansion. In the Self-Consistent Born Approximation, the interaction self-energy is calculated with the perturbed electronic Green's function in a self-consistent way. In this work we describe how to incorporate the electron-phonon interaction into the SMEAGOL program (Spin and Molecular Electronics in Atomically Generated Orbital Landscapes), an ab initio code for electronic transport based on the combination of DFT and NEGF. This provides a tool for calculating the transport properties of material-specific systems, particularly in molecular electronics. Preliminary results are presented, showing the effects produced by considering the electron-phonon interaction in nanoscale devices.
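Schematically, the quantities involved take the following standard NEGF form (generic notation, not SMEAGOL's internals): in the First Born Approximation the Green's functions on the right-hand side are the non-interacting ones, while the Self-Consistent Born Approximation iterates until G and Σ are mutually consistent.

```latex
% Coherent transmission in the NEGF formalism:
T(E) = \mathrm{Tr}\!\left[\,\Gamma_L(E)\, G^{r}(E)\, \Gamma_R(E)\, G^{a}(E)\,\right].
% Lowest-order (Born) electron-phonon self-energy for a phonon mode q of
% energy \hbar\omega_q, coupling matrix M_q and Bose occupation N_q:
\Sigma^{\lessgtr}_{\mathrm{ph}}(E) = \sum_{q} M_q \left[
 N_q\, G^{\lessgtr}(E \mp \hbar\omega_q)
 + (N_q + 1)\, G^{\lessgtr}(E \pm \hbar\omega_q) \right] M_q .
```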
Abstract:
Two types of mesoscale wind-speed jet and their effects on boundary-layer structure were studied. The first is a coastal jet off the northern California coast, and the second is a katabatic jet over Vatnajökull, Iceland. Coastal regions are highly populated, and studies of coastal meteorology are of general interest for environmental protection, the fishing industry, and air and sea transportation. Far fewer people live in direct contact with glaciers, but the properties of katabatic flows are important for understanding glacier response to climatic change. Hence, the two jets can potentially influence a vast number of people. Flow response to terrain forcing, transient behavior in time and space, and adherence to simplified theoretical models were examined. The turbulence structure in these stably stratified boundary layers was also investigated. Numerical modeling is the main tool in this thesis; observations are used primarily to ensure realistic model behavior. Simple shallow-water theory provides a useful framework for analyzing high-velocity flows along mountainous coastlines, but for an unexpected reason: waves are trapped in the inversion by the curvature of the wind-speed profile, rather than by an infinite stability in the inversion separating two neutral layers, as assumed in the theory. In the absence of blocking terrain, observations of steady-state supercritical flows are unlikely, owing to the diurnal variation of flow criticality. In many simplified models, non-local processes are neglected. For the flows studied here, we show that this is not always a valid approximation. Discrepancies between the simulated katabatic flow and that predicted by an analytical model are hypothesized to be due to non-local effects, such as surface inhomogeneity and slope geometry, neglected in the theory. On a different scale, differences in non-local contributions to the velocity variance budgets are suggested as a reason for the variation in the shape of local similarity scaling functions between studies.
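For reference, "flow criticality" in the shallow-water framework is governed by the Froude number of the marine layer; a generic statement (notation mine, not the thesis's):

```latex
% Froude number of a well-mixed layer of depth h capped by an inversion
% of reduced gravity g' = g\,\Delta\theta/\theta_0 :
Fr = \frac{U}{\sqrt{g'\,h}}, \qquad
Fr > 1 \ \text{(supercritical)}, \quad Fr < 1 \ \text{(subcritical)} .
```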
Abstract:
In recent years the seismic safety of historical masonry buildings has gained particular prominence because, starting above all with Ordinance 3274 of 2003, issued after the earthquake that struck Molise in 2002, legislation has required the monitoring and classification of protected historical buildings with regard to seismic vulnerability (the deadline for completing this classification falls in 2008, the current year). The problem of studying the behaviour of historical buildings (not only monuments, but also and above all minor buildings) and of their safety has therefore become more urgent. The Guidelines ("Linee Guida") for the application of Ordinance 3274 were conceived to provide simple and effective tools and methodologies for tackling this study within the prescribed time limits. The problem is particularly acute for churches, which are present in great numbers on Italian territory and make up a large part of its cultural heritage; these buildings, usually composed of large masonry elements, do not exhibit box-like behaviour, lacking floor diaphragms, effective connecting elements and internal spine walls, and are particularly vulnerable to seismic actions; moreover, their structural response to horizontal loads cannot be captured by a global approach based, for example, on linear modal analysis: there are no vibration modes involving a sufficient portion of the structure's mass, and the participation factors of the various vibration modes are below 10% (in general much lower). For this reason, experience and the observation of real cases suggest studying historical religious masonry buildings through the seismic safety analysis of the so-called "macroelements" into which a masonry building can be subdivided, i.e. elements exhibiting autonomous structural behaviour. This work is part of a wider study begun with a degree thesis entitled "Analisi Limite di Strutture in Muratura. Teoria e Applicazione all'Arco Trionfale" (M. Temprati), which studied the behaviour of the triumphal arch of the collegiate church of Santa Maria del Borgo in San Nicandro Garganico (FG). Subdividing a masonry building into several elements is the method proposed in the Guidelines, discussed in the first chapter of this work: the vulnerability of the structures can be studied through the collapse multiplier, the parameter expressing the level of seismic safety. The second chapter illustrates the computation of the vulnerability indices and damage accelerations for the church of Santa Maria del Borgo, by compiling the so-called "level II" forms as indicated in the Guidelines. The third chapter reports the computation of the collapse multiplier for the overturning of the church facade, the element on which the present work focuses. Because of the complexity of the structural scheme of the facade, which is connected to other elements of the building, the finite element code ABAQUS was used; the material modelling and the setting of the software parameters are discussed in the fourth chapter. The fifth chapter illustrates the analysis carried out with ABAQUS on the same facade scheme used for the hand calculation in chapter three: the combined use of kinematic analysis and the finite element method makes it possible, for simple examples, to validate the results obtainable with a nonlinear finite element analysis and to extend their validity to more complete and more complex schemes. Indeed, the sixth chapter reports the results of the ABAQUS analyses on structural schemes that also include the elements connected to the facade. In this way the collapse mechanism most easily activated for the facade can be clearly identified, and important information on the structural behaviour of the various parts can be drawn, also with a view to retrofitting and seismic improvement works.
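To fix ideas on the collapse multiplier used in chapter three, here is a minimal kinematic limit-analysis sketch for simple facade overturning, assuming an idealized rigid monolithic panel with no ties or connections; the thesis's full calculation is more elaborate.

```latex
% Virtual-work balance for overturning of a rigid facade of weight W and
% thickness t about a hinge at its base, with the horizontal action
% \alpha W applied at the centroid height h_G:
\alpha_0\, W\, h_G = W\, \frac{t}{2}
\quad\Longrightarrow\quad
\alpha_0 = \frac{t}{2\,h_G} .
```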
Abstract:
We study some perturbative and non-perturbative effects in the framework of the Standard Model of particle physics. In particular, we consider the time dependence of the Higgs vacuum expectation value given by the dynamics of the Standard Model and study the non-adiabatic production of both bosons and fermions, which is intrinsically non-perturbative. In the Hartree approximation, we analyze the general expressions that describe the dissipative dynamics due to the backreaction of the produced particles. We then solve numerically some cases relevant to Standard Model phenomenology in the regime of relatively small oscillations of the Higgs vacuum expectation value (vev). As perturbative effects, we consider the leading logarithmic resummation in small-Bjorken-x QCD, concentrating on the Nc dependence of the Green functions associated with reggeized gluons. Here the eigenvalues of the BKP kernel for states of more than three reggeized gluons are unknown in general, contrary to the large-Nc (planar) limit, where the problem becomes integrable. In this context we consider a 4-gluon kernel for a finite number of colors and define some simple toy models for the configuration-space dynamics, which are directly solvable with group-theoretical methods. In particular, we study the dependence of the spectrum of these models on the number of colors and make comparisons with the planar limit case. In the final part we move on to the study of theories beyond the Standard Model, considering models built on AdS5 × S5/Γ orbifold compactifications of the type IIB superstring, where Γ is the abelian group Zn. We present an appealing three-family N = 0 SUSY model with n = 7 as the order of the orbifolding group. This results in a modified Pati–Salam model, which reduces to the Standard Model after symmetry breaking and has interesting phenomenological consequences for the LHC.
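Schematically, the non-adiabatic production mechanism for a bosonic field coupled to the oscillating vev can be phrased as follows; this is illustrative notation of the standard mode-equation kind, not the thesis's own expressions.

```latex
% Mode equation for a boson of momentum k coupled (with strength g) to the
% oscillating Higgs vev v(t):
\ddot{\chi}_k + \omega_k^2(t)\, \chi_k = 0, \qquad
\omega_k^2(t) = k^2 + g^2 v^2(t),
% with non-perturbative particle production when adiabaticity is violated:
\left| \dot{\omega}_k \right| \gtrsim \omega_k^2 .
```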
Abstract:
Introduction. Postnatal neurogenesis in the hippocampal dentate gyrus can be modulated by numerous determinants, such as hormones, transmitters and stress. Among the factors positively interfering with neurogenesis, the complexity of the environment appears to play a particularly striking role. Adult mice reared in an enriched environment produce more neurons and exhibit better performance in hippocampus-specific learning tasks. While the effects of complex environments on hippocampal neurogenesis are well documented, there is a lack of information on the effects of living under socio-sensory deprivation conditions. Due to the immaturity of rats and mice at birth, studies dealing with the effects of environmental enrichment on hippocampal neurogenesis have been carried out in adult animals, i.e. during a period of relatively low neurogenesis. The impact of environment is likely to be more dramatic during the first postnatal weeks, because at this time granule cell production is remarkably higher than at later phases of development. The aim of the present research was to clarify whether and to what extent isolated or enriched rearing conditions affect hippocampal neurogenesis during the early postnatal period, a time window characterized by a high rate of precursor proliferation, and to elucidate the mechanisms underlying these effects. The experimental model chosen for this research was the guinea pig, a precocious rodent which at 4-5 days of age can be independent of maternal care. Experimental design. Animals were assigned to a standard (control), an isolated, or an enriched environment a few days after birth (P5-P6). On P14-P17 animals received one daily bromodeoxyuridine (BrdU) injection, to label dividing cells, and were sacrificed either on P18, to evaluate cell proliferation, or on P45, to evaluate cell survival and differentiation. Methods. Brain sections were processed for BrdU immunohistochemistry to quantify the newborn and surviving cells. The phenotype of the surviving cells was examined by means of confocal microscopy and immunofluorescent double-labeling for BrdU and either a marker of neurons (NeuN) or a marker of astrocytes (GFAP). Apoptotic cell death was examined with the TUNEL method. Serial sections were processed for immunohistochemistry for i) vimentin, a marker of radial glial cells; ii) BDNF (brain-derived neurotrophic factor), a neurotrophin involved in neuron proliferation/survival; and iii) PSA-NCAM (the polysialylated form of the neural cell adhesion molecule), a molecule associated with neuronal migration. Total granule cell number in the dentate gyrus was evaluated by stereological methods in Nissl-stained sections. Results. Effects of isolation. In P18 isolated animals we found reduced cell proliferation (-35%) compared to controls and a lower expression of BDNF. Though in absolute terms P45 isolated animals had fewer surviving cells than controls, they showed no differences in survival rate and phenotype percent distribution compared to controls. Evaluation of the absolute number of surviving cells of each phenotype showed that isolated animals had fewer cells with a neuronal phenotype than controls. Looking at the location of the new neurons, we found that while in control animals 76% of them had migrated to the granule cell layer, in isolated animals only 55% of the new neurons had reached this layer. Examination of radial glia cells of P18 and P45 animals by vimentin immunohistochemistry showed that in isolated animals radial glia cells were reduced in density and had fewer and shorter processes. Granule cell counts revealed that isolated animals had fewer granule cells than controls (-32% at P18 and -42% at P45). Effects of enrichment. In P18 enriched animals there was an increase in cell proliferation (+26%) compared to controls and a higher expression of BDNF. Though in both groups the number of BrdU-positive cells had declined by P45, enriched animals had more surviving cells (+63%) and a higher survival rate than controls. No differences were found between control and enriched animals in phenotype percent distribution. Evaluation of the absolute number of cells of each phenotype showed that enriched animals had a larger number of cells of each phenotype than controls. Looking at the location of cells of each phenotype, we found that enriched animals had more new neurons in the granule cell layer and more astrocytes and cells of undetermined phenotype in the hilus. Enriched animals had a higher expression of PSA-NCAM in the granule cell layer and hilus. Vimentin immunohistochemistry showed that in enriched animals radial glia cells were more numerous and had more processes. Granule cell counts revealed that enriched animals had more granule cells than controls (+37% at P18 and +31% at P45). Discussion. The results show that isolation rearing reduces hippocampal cell proliferation but does not affect cell survival, while enriched rearing increases both cell proliferation and cell survival. Changes in the expression of BDNF are likely to contribute to the effects of environment on precursor cell proliferation. The reduction and increase in the final number of granule neurons in isolated and enriched animals, respectively, are attributable to the effects of environment on cell proliferation and survival and not to changes in the differentiation program. As radial glia cells play a pivotal role in guiding neurons to the granule cell layer, the reduced number of radial glia cells in isolated animals and the increased number in enriched animals suggest that the size of the radial glia population may change dynamically to match changes in neuron production. The high PSA-NCAM expression in enriched animals may help favor the survival of the new neurons by facilitating their migration to the granule cell layer. Conclusions. By using a precocious rodent we could demonstrate that isolated/enriched rearing conditions, during a time window of intense granule cell proliferation, lead to a notable decrease/increase in total granule cell number. The time course and magnitude of postnatal granule cell production in guinea pigs are more similar to the human and non-human primate condition than those of rats and mice. Translation of the present data to humans would imply that exposure of children to environments poor or rich in stimuli may have a notably large impact on dentate neurogenesis and, very likely, on hippocampus-dependent memory functions.
Abstract:
Machine learning comprises a series of techniques for the automatic extraction of meaningful information from large collections of noisy data. In many real-world applications, data is naturally represented in structured form. Since traditional methods in machine learning deal with vectorial information, they require an a priori form of preprocessing. Among the learning techniques for dealing with structured data, kernel methods are recognized as having a strong theoretical background and being effective approaches. They do not require an explicit vectorial representation of the data in terms of features, but rely on a measure of similarity between any pair of objects of a domain: the kernel function. Designing fast and good kernel functions is a challenging problem. In the case of tree-structured data two issues become relevant: kernels for trees should not be sparse and should be fast to compute. The sparsity problem arises when, given a dataset and a kernel function, most structures in the dataset are completely dissimilar to one another. In those cases the classifier has too little information for making correct predictions on unseen data; in fact, it tends to produce a discriminant function behaving like the nearest-neighbour rule. Sparsity is likely to arise for some standard tree kernel functions, such as the subtree and subset tree kernels, when they are applied to datasets with node labels belonging to a large domain. A second drawback of using tree kernels is the time complexity required in both the learning and classification phases; such complexity can sometimes prevent the kernel's application in scenarios involving large amounts of data. This thesis proposes three contributions for resolving these issues of kernels for trees. The first contribution aims at creating kernel functions which adapt to the statistical properties of the dataset, thus reducing sparsity with respect to traditional tree kernel functions. Specifically, we propose to encode the input trees by an algorithm able to project the data onto a lower-dimensional space with the property that similar structures are mapped similarly. By building kernel functions on the lower-dimensional representation, we are able to perform inexact matchings between different inputs in the original space. The second contribution is a novel kernel function based on the convolution kernel framework. A convolution kernel measures the similarity of two objects in terms of the similarities of their subparts. Most convolution kernels are based on counting the number of shared substructures, partially discarding information about their position in the original structure. The kernel function we propose is, instead, especially focused on this aspect. The third contribution is devoted to reducing the computational burden of calculating a kernel function between a tree and a forest of trees, which is a typical operation in the classification phase and, for some algorithms, also in the learning phase. We propose a general methodology applicable to convolution kernels. Moreover, we show an instantiation of our technique for kernels such as the subtree and subset tree kernels. In those cases, Directed Acyclic Graphs can be used to compactly represent shared substructures in different trees, thus reducing the computational burden and storage requirements.
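To make the "counting shared substructures" idea concrete, here is a minimal convolution-style kernel that counts pairs of identical whole subtrees shared by two trees. It is the simplest member of the family discussed above, written purely for illustration; it is not one of the thesis's proposed kernels, and all names are hypothetical.

```python
# Minimal sketch of a convolution-style tree kernel: count pairs of
# identical (whole) subtrees shared by two labeled ordered trees.
from collections import Counter

class Node:
    def __init__(self, label, children=()):
        self.label, self.children = label, list(children)

def subtrees(t, bag):
    """Record a hashable signature for every subtree rooted in t."""
    sig = (t.label, tuple(subtrees(c, bag) for c in t.children))
    bag[sig] += 1
    return sig

def subtree_kernel(t1, t2):
    b1, b2 = Counter(), Counter()
    subtrees(t1, b1)
    subtrees(t2, b2)
    # sum over shared signatures of the product of their multiplicities
    return sum(n * b2[s] for s, n in b1.items())

t1 = Node("S", [Node("NP", [Node("D"), Node("N")]), Node("VP")])
t2 = Node("S", [Node("NP", [Node("D"), Node("N")]), Node("VP", [Node("V")])])
print(subtree_kernel(t1, t2))  # shared subtrees: D, N, NP(D,N) -> 3
```

Note how two trees with node labels drawn from a large domain would share almost no signatures, which is exactly the sparsity problem described above.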
Abstract:
Almost fifty years have passed since Harry Eckstein's classic monograph, A Theory of Stable Democracy (Princeton, 1961), where he sketched out the basic tenets of "congruence theory", which was to become one of the most important and innovative contributions to understanding democratic rule. His next work, Division and Cohesion in Democracy (Princeton University Press, 1966), was designed to serve as a plausibility probe for this theory and is a case study of a Northern democratic system, Norway. What is more, this line of his work best exemplifies the contribution Eckstein made to the methodology of comparative politics through his seminal article, "Case Study and Theory in Political Science" (in Greenstein and Polsby, eds., Handbook of Political Science, 1975), on the importance of the case study as an approach to empirical theory. That article demonstrates the special utility of "crucial case studies" in testing theory, thereby undermining the accepted wisdom in comparative research that the larger the number of cases, the better. Although not along the same lines, but shifting the case-study unit of research, I intend to take up the challenge and build upon an equally unique political system, the Swedish one. Bearing in mind the peculiarities of the Swedish political system, my unit of analysis is further restricted to the Swedish Social Democratic Party, the Svenska Arbetare Partiet. My research nevertheless stays within the methodological framework of case-study theory inasmuch as it focuses on a single political system and party. The Swedish SAP's endurance in government office and its electoral success throughout half a century (as of the 1991 election, about 56 years - more than half a century - of interrupted social democratic "reign" in Sweden) are undeniably a performance no other Social Democratic party has yet achieved under democratic conditions. It is therefore legitimate to inquire into the exceptionality of this unique political power combination. What were the components of this dominant power position that made SAP's endurance in governmental office possible? I will argue here that it was the end product of a combination of multifarious factors: a key position in the party system, strong party leadership and organization, and a carefully designed strategy regarding class politics and welfare policy. My research is divided into three main parts: the historical incursion, the 'welfare' part and the 'environment' part. The first part is a historical account of the main political events and issues relevant to my case study. Chapter 2 is devoted to the historical events of the 1920-1960 period: the Saltsjoebaden Agreement, the series of workers' strikes in the 1920s, and SAP's inception. It describes SAP's ascent to power in the mid-1930s and the party's ensuing strategies for winning and keeping political office, that is, its economic program and key economic goals. The following chapter - chapter 3 - explores the next period, from the 1960s to the 1990s, and covers the party's troubled political times, its peak, and the beginnings of its decline. The 1960s are relevant for SAP's planning of a long-term economic strategy - the Rehn-Meidner model, a new way of macroeconomic steering, based on the Keynesian model but adapted to the new economic realities of welfare capitalist societies. The second and third parts of this study develop several hypotheses related to SAP's 'dominant position' (endurance in politics and in office) and then test them. Mainly, the twin issues of economics and environment are raised and their political relevance for the party analyzed. On one hand, globalization and its spillover effects on the Swedish welfare system are important causal factors in explaining the transformative socio-economic challenges the party had to cope with. On the other hand, Europeanization and environmental change influenced SAP's foreign policy choices and its domestic electoral strategies to a great extent. The implications of globalization for the Swedish welfare system are the subject of two chapters - chapters four and five - while the consequences of Europeanization are treated at length in the third part of this work - chapters six and seven. At first sight, the link between foreign policy and electoral strategy appears difficult to prove and, at the very least, uncanny. In SAP's case, however, there is a substantial body of literature and public-opinion statistical data showing that governmental domestic policy and party politics depend tightly on foreign policy decisions and sovereignty issues. Again, these country characteristics and peculiar causal relationships are outlined in the first chapters and explained in the second and third parts. The sixth chapter explores the presumed relationship between Europeanization and environmental policy, on one hand, and SAP's environmental policy formulation and simultaneous agenda-setting at the international level, on the other. This chapter describes Swedish leadership in environmental policy formulation on two simultaneous fronts and across two different time spans. The last chapter, chapter eight, while developing a conclusion, explores alternative theories plausible in explaining the outlined hypotheses and points out the reasons why these theories do not serve as valid alternative explanations to my systemic corporatism thesis as the main causal factor behind SAP's 'dominant position'. Among the alternative theories, I consider L. Traedgaardh and Bo Rothstein's historical exceptionalism thesis and the public opinion thesis, which alone cannot explain the half-century of social democratic endurance in government in the Swedish case.
Abstract:
Colloidal suspensions, in which the colloidal particles are regarded as "macro-atoms" in a continuum of solvent molecules, provide a suitable model system for studying solidification processes. Owing to the typical length and time scales involved, phase transitions can conveniently be studied by optical methods. In the present work, the kinetics of crystallization in three colloidal systems with different particle-particle interactions was investigated by light scattering and microscopic methods. To study suspensions of sterically stabilized PMMA particles, which to a good approximation interact like hard spheres, a novel laser light scattering experiment was set up that allows the simultaneous detection of Bragg and small-angle scattering from a single sample. This made it possible to follow the time course of crystallization and to determine, among other things, nucleation rates and, for the first time, growth velocities; these were compared with classical nucleation theory and Wilson-Frenkel growth. In both cases very good agreement with theory was found. In systems of charged particles, the growth velocities of heterogeneous crystals growing at the wall of the sample cell were investigated by Bragg microscopy. A Wilson-Frenkel growth law can be fitted here as well, provided the rescaled energy density introduced for this purpose is referred to the melting point. Suitable rescaling of the data allows comparison with the hard-sphere systems. For the first time, the crystallization kinetics in two different colloidal binary mixtures was determined and analyzed: in mixtures of a non-crystallizing particle species with a crystallizing suspension, the data could be described by a modified Wilson-Frenkel law, whereas in mixtures of two crystallizing particle systems an unexpectedly large decrease in the growth velocities was observed. Colloidal suspensions of hard-sphere-like microgel particles could also be investigated for the first time with the light scattering setup. A crystallization kinetics similar to that in the PMMA systems was found, but also some important differences, concerning in particular the scattering mechanism in the small-angle regime; several proposed interpretations of this are discussed.
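For reference, the Wilson-Frenkel growth law invoked throughout relates the interface growth velocity to the chemical-potential difference Δμ between melt and crystal; in a generic form (notation mine):

```latex
% Wilson-Frenkel law for the crystal growth velocity, with limiting
% velocity v_\infty and driving force \Delta\mu between melt and crystal:
v = v_\infty \left[ 1 - \exp\!\left(-\frac{\Delta\mu}{k_B T}\right) \right].
```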
Abstract:
The thesis begins with a comparison of specific regularization methods in quantum field theory with the Epstein-Glaser procedure for the perturbative construction of the S-matrix. Since the Epstein-Glaser procedure can itself be used as a regularization scheme and is, moreover, based exclusively on physically motivated postulates, this comparison yields a criterion for the admissibility of other regularization methods. In addition to establishing this admissibility, the comparison yields as a further essential result a new, practically applicable and consistent regularization scheme: the modified BPHZ procedure. This is demonstrated on one-loop diagrams from QED (electron self-energy, vacuum polarization and vertex correction). In contrast to the widely used dimensional regularization, this scheme is applicable without restriction to chiral theories as well. As an example, the U(1) anomaly arising in an axial extension of the QED Lagrangian is computed. At the level of multi-loop diagrams, the comparison of the Epstein-Glaser construction with the well-known BPHZ procedure, using several examples from Phi^4 theory including the so-called sunrise diagram, shows that the subdiagrams contributing to the regularization according to the forest formula of the BPHZ procedure can be restricted to a smaller class. This result is likewise significant for the practice of regularization, since it leads to a simplification already at the level of the subdiagrams to be taken into account.
Abstract:
This dissertation comprises applications of quantum chemistry and methodological developments in coupled-cluster theory on the following topics: 1.) The determination of geometry parameters in hydrogen-bonded complexes with picometer accuracy, by coupling NMR experiments with quantum chemical calculations, is presented for two examples. 2.) The discrepancies between theory and experiment arising here are discussed. To this end, vibrational averaging of the dipolar coupling tensor was implemented in order to account for zero-point effects. 3.) A further aspect of the work concerns structure elucidation of discotic liquid crystals. Quantum chemical model building and its interplay with experimental methods, above all solid-state NMR, is presented. 4.) Within this work, the parallelization of the quantum chemistry package ACESII was begun. The basic strategy and first results are presented. 5.) To reduce the scaling of the CCSD(T) method by factorization, various decompositions of the energy denominator were tested. A resulting scheme for computing the CCSD(T) energy was implemented. 6.) The elucidation of the reaction mechanism for the formation of HSOH from di-tert-butyl sulfoxide is presented; the thermodynamics of the reaction steps was computed with quantum chemical methods.
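Regarding item 5, the standard route to factorizing the orbital-energy denominator is a Laplace-transform quadrature, sketched below; the decompositions actually tested in the dissertation are not specified in the abstract and need not coincide with this one.

```latex
% Laplace-transform factorization of the orbital energy denominator
% (a, b virtual; i, j occupied orbitals), approximated by an n-point
% quadrature with weights w_k and points t_k:
\frac{1}{\varepsilon_a + \varepsilon_b - \varepsilon_i - \varepsilon_j}
= \int_0^\infty e^{-(\varepsilon_a + \varepsilon_b - \varepsilon_i - \varepsilon_j)\,t}\, dt
\;\approx\; \sum_{k=1}^{n} w_k\,
e^{-\varepsilon_a t_k}\, e^{-\varepsilon_b t_k}\, e^{\varepsilon_i t_k}\, e^{\varepsilon_j t_k} ,
% which decouples the four orbital indices and enables reduced-scaling
% algorithms.
```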
Abstract:
Noonan syndrome (NS) is an autosomal dominant disorder characterized by short stature, congenital heart defects and facial dysmorphism. Few case reports concerning the orofacial features of NS patients have been published. Objective. To identify orthopedic-orthodontic conditions characteristic of the syndrome in a sample of patients diagnosed with NS. Methods. A group of 10 NS patients underwent extraoral and intraoral clinical examination, panoramic radiography, lateral cephalometric radiography, and dental arch impressions. Cephalometric measurements were performed according to the MBT analysis; palatal values were obtained from study models of the upper arch. Student's t-test was used to compare the study group and the control group with respect to the cephalometric measurements and palatal values. Results. In the study group, anomalies of tooth number were found (one supernumerary deciduous tooth and one agenesis of a permanent tooth). Student's t-test revealed statistically significant differences for 7 of 13 cephalometric variables and for 2 palatal variables. Conclusions. Based on this study, it can be concluded that NS patients show a skeletal Class II of mandibular origin, hyperdivergent growth, a tendency toward skeletal open bite, palatally inclined upper incisors, and a narrow palate. These findings can provide useful information both for the diagnosis of NS and for planning appropriate orthodontic treatment.
Abstract:
In this work we investigate the existence of resonances for two-center Coulomb systems with arbitrary charges in two and three dimensions, defining them in terms of generalized complex eigenvalues of a non-selfadjoint deformation of the two-center Schrödinger operator. After describing the bifurcations of the classical system for positive energies, we construct the resolvent kernels of the operators and prove that they can be extended analytically to the second Riemann sheet. The resonances are then defined and studied by numerical methods and perturbation theory.
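Schematically, the "non-selfadjoint deformation" can be thought of along the lines of complex dilation; this is a generic sketch, and the deformation actually used in the work may differ.

```latex
% Complex dilation x \mapsto e^{\theta}x of a two-center Schr\"odinger
% operator with centers a_1, a_2 and charges Z_1, Z_2 (of either sign):
H_\theta = -e^{-2\theta}\,\Delta
 + e^{-\theta}\left( \frac{Z_1}{|x-a_1|} + \frac{Z_2}{|x-a_2|} \right),
% resonances: \theta-independent discrete eigenvalues of H_\theta
% uncovered in the lower half-plane for \mathrm{Im}\,\theta > 0.
```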