13 results for principal sparse non-negative matrix factorization
in ArchiMeD - Elektronische Publikationen der Universität Mainz - Germany
Abstract:
A set B of non-negative integers is called a basis of order h if every non-negative integer is a sum of at most h elements of B. One of the great questions of additive number theory asks for the most efficient basis of order h for a given h >= 2. Of particular interest is the still open case h = 2. If B(x) denotes the number of elements b of B with 0 <= b <= x, then B(x) >= a*f(x) always holds, where f denotes the square-root function. On the other hand, there exist bases B of order two with B(x) <= c*f(x). One may therefore regard the limit superior S(B), the limit inferior s(B), and, if it exists, the limit d(B) of the quotient B(x)/f(x) as density functions of bases of order two. In 1957, J. W. S. Cassels constructed a basis C of order two with d(C) = 5.196…. In 2001, G. Hofmeister gave a basis H of order two with asymptotic square-root density d(H) = 4.638…. In the present thesis, a basis S of order two with asymptotic square-root density d(S) = 3.464… is constructed. Moreover, it is shown for the classes of bases of order two used by J. W. S. Cassels, by G. Hofmeister, and in this thesis that the asymptotic square-root density cannot be improved further within the respective class. Until now, the question of possible improvements within the respective construction principles had remained open.
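As a toy illustration of these notions (my own sketch, not the construction from the thesis), the following Python snippet checks that a finite two-scale set represents every integer up to x as a sum of at most two elements, and evaluates the counting quotient B(x)/sqrt(x) that the density functions are built from:

```python
import math

def is_basis_order2_upto(B, N):
    """Does every integer 0..N equal a sum of at most two elements of B?
    (Finite check only; a true basis of order 2 must cover all of N_0.)"""
    sums = {0} | set(B) | {a + b for a in B for b in B}
    return all(n in sums for n in range(N + 1))

# two-scale toy set: the residues 0..31 plus the multiples of 32 cover 0..1023,
# since n = 32*q + r with both 32*q and r in B
B = set(range(32)) | {32 * k for k in range(32)}
x = 1023
Bx = sum(1 for b in B if 0 <= b <= x)          # B(x): elements of B up to x
print(is_basis_order2_upto(B, x), round(Bx / math.sqrt(x), 2))
```

For this truncated construction the quotient is close to 2; the point of the constructions of Cassels, Hofmeister, and the thesis is to achieve a bounded quotient asymptotically, for a single infinite basis and all x.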
Abstract:
We consider the heat flux through a domain with subregions in which the thermal capacity approaches zero. In these subregions the parabolic heat equation degenerates to an elliptic one. We show the well-posedness of such parabolic-elliptic differential equations for general non-negative L-infinity-capacities and study the continuity of the solutions with respect to the capacity, thus giving a rigorous justification for modeling a small thermal capacity by setting it to zero. We also characterize weak directional derivatives of the temperature with respect to capacity as solutions of related parabolic-elliptic problems.
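A minimal 1-D finite-difference sketch of this degeneration (my own illustration, not the paper's formulation): where the capacity c(x) vanishes, the time-derivative term drops out of the implicit step and the scheme solves an elliptic equation there, yet the linear system remains uniquely solvable because the diffusion part is positive definite.

```python
import numpy as np

# Implicit Euler for c(x) u_t = u_xx on (0,1) with u = 0 at the boundary.
n, dt = 99, 1e-3
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
c = np.where((x > 0.4) & (x < 0.6), 0.0, 1.0)    # zero capacity in a subregion
L = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2        # stencil for -u_xx
u = np.sin(np.pi * x)                             # initial temperature
for _ in range(200):
    # diag(c)/dt + L is SPD even where c = 0, so each step is well posed
    u = np.linalg.solve(np.diag(c / dt) + L, c * u / dt)
print(u.max() <= 1.0 and u.min() >= -1e-12)
```

The discrete maximum principle carries over: the temperature stays between its initial bounds even though the equation is elliptic on part of the domain.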
Abstract:
Conjugated polymers and conjugated polymer blends have attracted great interest due to their potential applications in biosensors and organic electronics. The sub-100 nm morphology of these materials is known to heavily influence their electromechanical properties and the performance of devices they are part of. Electromechanical properties include charge injection, transport, recombination, and trapping, as well as the phase behavior and the mechanical robustness of polymers and blends. Electrical scanning probe microscopy techniques are ideal tools to measure simultaneously the electric (conductivity and surface potential) and dielectric (dielectric constant) properties, surface morphology, and mechanical properties of thin films of conjugated polymers and their blends.
In this thesis, I first present a combined topography, Kelvin probe force microscopy (KPFM), and scanning conductive torsion mode microscopy (SCTMM) study on a gold/polystyrene model system. This system mimics conjugated polymer blends in which conductive domains (gold nanoparticles) are embedded in a non-conductive matrix (polystyrene film), as in polypyrrole:polystyrene sulfonate (PPy:PSS) and poly(3,4-ethylenedioxythiophene):poly(styrenesulfonate) (PEDOT:PSS). I controlled the nanoscale morphology of the model by varying the distribution of gold nanoparticles in the polystyrene films, and studied the influence of the different morphologies on the surface potential measured by KPFM and on the conductivity measured by SCTMM. Using the knowledge gained from analyzing the data of the model system, I was able to predict the nanostructure of a homemade PPy:PSS blend.
The morphologic, electric, and dielectric properties of water-based conjugated polymer blends, e.g. PPy:PSS or PEDOT:PSS, are known to be influenced by their water content. These properties in turn influence the macroscopic performance when the polymer blends are employed in a device.
In the second part I therefore present an in situ humidity-dependence study, by KPFM, of PPy:PSS films spin-coated and drop-coated on hydrophobic highly ordered pyrolytic graphite substrates. I additionally used a particular KPFM mode that detects the second harmonic of the electrostatic force, with which I obtained images of the dielectric constants of the samples. Upon increasing relative humidity, the surface morphology and composition of the films changed. I also observed that relative humidity affected thermally unannealed and annealed PPy:PSS films differently.
The conductivity of a conjugated polymer may change once it is embedded in a non-conductive matrix, as for PPy embedded in PSS. To measure the conductivity of single conjugated polymer particles, in the third part I present a direct method based on microscopic four-point probes. I started with metal core-shell and metal bulk particles as models and measured their conductivities. The study could be extended to measure the conductivity of single PPy particles (core-shell and bulk) with diameters of a few micrometers.
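For orientation, the textbook collinear four-point-probe relations can be sketched as follows (standard formulas for ideal geometries; measurements on micrometre-sized particles, as in the thesis, require geometry-specific corrections beyond these):

```python
import math

def bulk_resistivity(V, I, s):
    """Collinear four-point probe on a semi-infinite bulk sample:
    rho = 2*pi*s * V/I, with probe spacing s in metres."""
    return 2 * math.pi * s * V / I

def sheet_resistance(V, I):
    """Same probe on a thin film much wider than the probe spacing:
    R_s = (pi / ln 2) * (V/I), in ohms per square."""
    return math.pi / math.log(2) * V / I

# e.g. 1 mA forced through the outer pins, 4.53 mV measured on the inner pair:
print(round(sheet_resistance(4.53e-3, 1e-3), 2))
```

The virtue of the four-point arrangement is that the voltage is sensed on current-free contacts, so contact resistance drops out of the measurement.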
Abstract:
In this thesis, a systematic analysis of the bar B to X_s gamma photon spectrum in the endpoint region is presented. The endpoint region refers to a kinematic configuration of the final state in which the photon has a large energy, m_b - 2E_gamma = O(Lambda_QCD), while the jet has a large energy but a small invariant mass. Using methods of soft-collinear effective theory and heavy-quark effective theory, it is shown that the spectrum can be factorized into hard, jet, and soft functions, each encoding the dynamics at a certain scale. The relevant scales in the endpoint region are the heavy-quark mass m_b, the hadronic energy scale Lambda_QCD, and an intermediate scale sqrt{Lambda_QCD m_b} associated with the invariant mass of the jet. It is found that the factorization formula contains two different types of contributions, distinguishable by the space-time structure of the underlying diagrams. On the one hand, there are the direct photon contributions, which correspond to diagrams with the photon emitted directly from the weak vertex. The resolved photon contributions, on the other hand, arise at O(1/m_b) whenever the photon couples to light partons. In this work, these contributions are explicitly defined in terms of convolutions of jet functions with subleading shape functions. While the direct photon contributions can be expressed in terms of a local operator product expansion when the photon spectrum is integrated over a range larger than the endpoint region, the resolved photon contributions always remain non-local. Thus, they are responsible for a non-perturbative uncertainty in the partonic predictions. In this thesis, the effect of these uncertainties is estimated in two different phenomenological contexts. First, the hadronic uncertainties in the bar B to X_s gamma branching fraction, defined with a cut E_gamma > 1.6 GeV, are discussed. It is found that the resolved photon contributions give rise to an irreducible theory uncertainty of approximately 5%.
As a second application of the formalism, the influence of the long-distance effects on the direct CP asymmetry is considered. It is shown that these effects are dominant in the Standard Model and that a range of -0.6% < A_CP^SM < 2.8% is possible for the asymmetry, if resolved photon contributions are taken into account.
Abstract:
This thesis addresses the development and improvement of linear-scaling algorithms for electronic-structure-based molecular dynamics. Molecular dynamics is a method for the computer simulation of the complex interplay of atoms and molecules at finite temperature. A decisive advantage of the method is its high accuracy and predictive power. However, its computational cost, which in principle scales cubically with the number of atoms, prevents its application to large systems and long time scales. Starting from a new formalism based on the grand-canonical potential and a factorization of the density matrix, the diagonalization of the corresponding Hamiltonian matrix is avoided. The formalism exploits the fact that, owing to localization, the Hamiltonian and the density matrix are sparse, which reduces the computational cost so that it scales linearly with system size. To demonstrate its efficiency, the resulting algorithm is applied to a system of liquid methane subject to extreme pressure (about 100 GPa) and temperature (2000-8000 K). In the simulation, methane dissociates at temperatures above 4000 K, and the formation of sp²-bonded polymeric carbon is observed. The simulations give no indication of diamond formation and therefore have implications for current planetary models of Neptune and Uranus. Since avoiding the diagonalization of the Hamiltonian matrix entails the inversion of matrices, the problem of computing an (inverse) p-th root of a given matrix is also treated. This results in a new formula for symmetric positive definite matrices, which generalizes the Newton-Schulz iteration, Altman's formula for bounded non-singular operators, and Newton's method for computing roots of functions.
It is proved that the order of convergence is always at least quadratic, and that adaptively adjusting a parameter q leads to better results in all cases.
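The classical coupled Newton-Schulz iteration that the thesis's formula generalizes can be sketched as follows for the inverse square root (p = 2) of a symmetric positive definite matrix. This is the textbook iteration, not the generalized formula or the adaptive-q variant from the thesis; note that, like the thesis's approach, it uses only matrix multiplications and hence preserves sparsity-friendly linear scaling in practice.

```python
import numpy as np

def newton_schulz_invsqrt(A, iters=50):
    """Coupled Newton-Schulz iteration for A^{-1/2}, A symmetric positive
    definite. Converges quadratically when ||I - A|| < 1, so A is pre-scaled
    by its spectral norm to push its eigenvalues into (0, 1]."""
    n = A.shape[0]
    scale = np.linalg.norm(A, 2)
    Y = A / scale
    Z = np.eye(n)
    I = np.eye(n)
    for _ in range(iters):
        T = 0.5 * (3.0 * I - Z @ Y)
        Y = Y @ T            # Y -> (A/scale)^{1/2}
        Z = T @ Z            # Z -> (A/scale)^{-1/2}
    return Z / np.sqrt(scale)

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
A = M @ M.T + 6 * np.eye(6)              # well-conditioned SPD test matrix
X = newton_schulz_invsqrt(A)
print(np.allclose(X @ A @ X, np.eye(6), atol=1e-6))   # X ≈ A^{-1/2}
```

Only matrix products appear in the loop, which is what makes such iterations attractive for sparse Hamiltonian and density matrices.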
Abstract:
The present state of the theoretical predictions for hadronic heavy hadron production is not quite satisfactory. The full next-to-leading order (NLO) ${\cal O}(\alpha_s^3)$ corrections to the hadroproduction of heavy quarks have raised the leading order (LO) ${\cal O}(\alpha_s^2)$ estimates, but the NLO predictions are still slightly below the experimental numbers. Moreover, the theoretical NLO predictions suffer from the usual large uncertainty resulting from the freedom in the choice of the renormalization and factorization scales of perturbative QCD. In this light there are hopes that a next-to-next-to-leading order (NNLO) ${\cal O}(\alpha_s^4)$ calculation will bring the theoretical predictions even closer to the experimental data. Also, the dependence on the factorization and renormalization scales of the physical process is expected to be greatly reduced at NNLO. This would reduce the theoretical uncertainty and therefore make the comparison between theory and experiment much more significant. In this thesis I have concentrated on the part of the NNLO corrections for hadronic heavy quark production in which one-loop integrals contribute in the form of a loop-by-loop product. In the first part of the thesis I use dimensional regularization to calculate the ${\cal O}(\epsilon^2)$ expansion of scalar one-loop one-, two-, three- and four-point integrals. The Laurent series of the scalar integrals is needed as an input for the calculation of the one-loop matrix elements for the loop-by-loop contributions. Since each factor of the loop-by-loop product has negative powers of the dimensional regularization parameter $\epsilon$ up to ${\cal O}(\epsilon^{-2})$, the Laurent series of the scalar integrals has to be calculated up to ${\cal O}(\epsilon^2)$. The negative powers of $\epsilon$ are a consequence of ultraviolet and infrared/collinear (or mass) divergences. Among the scalar integrals, the four-point integrals are the most complicated.
The ${\cal O}(\epsilon^2)$ expansion of the three- and four-point integrals contains in general classical polylogarithms up to ${\rm Li}_4$ and $L$-functions related to multiple polylogarithms of maximal weight and depth four. All results for the scalar integrals are also available in electronic form. In the second part of the thesis I discuss the properties of the classical polylogarithms. I present algorithms which allow one to reduce the number of polylogarithms in an expression. I derive identities for the $L$-functions, which have been used intensively to reduce the length of the final results for the scalar integrals. I also discuss the properties of multiple polylogarithms and derive identities expressing the $L$-functions in terms of multiple polylogarithms. In the third part I investigate the numerical efficiency of the results for the scalar integrals and discuss the dependence of the evaluation time on the relative error. In the fourth part of the thesis I present the larger part of the ${\cal O}(\epsilon^2)$ results on one-loop matrix elements in heavy flavor hadroproduction containing the full spin information. The ${\cal O}(\epsilon^2)$ terms arise as a combination of the ${\cal O}(\epsilon^2)$ results for the scalar integrals, the spin algebra and the Passarino-Veltman decomposition. The one-loop matrix elements will be needed as input in the determination of the loop-by-loop part of NNLO for hadronic heavy flavor production.
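As a flavour of the identities used to shorten such expressions, Euler's reflection formula for the dilogarithm, Li_2(x) + Li_2(1-x) = pi^2/6 - ln(x) ln(1-x) for 0 < x < 1, can be checked numerically (a standard classical identity, not one of the new L-function relations derived in the thesis):

```python
import mpmath as mp

# Euler's reflection identity trades two dilogarithm evaluations for one
# plus elementary logarithms -- the prototype of the reductions mentioned above.
x = mp.mpf("0.3")
lhs = mp.polylog(2, x) + mp.polylog(2, 1 - x)
rhs = mp.pi**2 / 6 - mp.log(x) * mp.log(1 - x)
print(mp.almosteq(lhs, rhs))
```

Identities of this kind, applied systematically at weight up to four, are what keep the final expressions for the scalar integrals manageable.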
Abstract:
Matrix metalloproteinases are components of the tumour microenvironment that play a crucial role in tumour progression. Matrix metalloproteinase-7 (MMP-7) is expressed in a variety of tumours, and its expression is associated with an aggressive malignant phenotype and poor prognosis. A role for MMP-7 in the immune escape of tumours has been postulated, but the mechanisms are not clearly understood. The present study focused on identifying physiological inactivators of MMP-7 and on unravelling the mechanisms involved in MMP-7-mediated immune escape. This study shows that human leukocyte elastase (HLE), secreted by polymorphonuclear leukocytes, cleaves MMP-7 in the catalytic domain, as revealed by N-terminal sequencing. Further analysis demonstrated that the activity of MMP-7 was drastically decreased after HLE treatment in a time- and dose-dependent manner. MMP-7 induces apoptosis resistance in tumour cells by cleaving CD95 and CD95L, so the effect of HLE on MMP-7-mediated apoptosis resistance was analysed. In vitro stimulation of apoptosis by anti-Apo-1 (an anti-CD95 antibody) and by the chemotherapeutic drug doxorubicin is reduced by MMP-7, and tumour-specific cytotoxic T cells do not effectively kill tumour cells in the presence of MMP-7. This study revealed that HLE abrogates the negative effect of MMP-7 on apoptosis induced by CD95 stimulation, doxorubicin or cytotoxic T cells, and restores the apoptosis sensitivity of tumour cells. To gain insight into possible immune-modulatory functions of MMP-7, experiments were performed to identify new immune-relevant substrates, using the human T cell line Jurkat. Hsc70, which is involved in the uncoating of clathrin vesicles, was found in the supernatants of MMP-7-treated cells, indicating a modulatory role of MMP-7 in endocytosis. Further studies demonstrated that MMP-7 leads to decreased clathrin staining in HEK293, HepG2 and Jurkat cells, CD4+ T cells and dendritic cells.
Results also show that MMP-7 treatment increased the surface expression of cytotoxic T-lymphocyte-associated protein-4 (CTLA-4), which accumulated owing to inhibition of clathrin-mediated internalization in CD4+CD25+ cells.
Abstract:
The main goal of this work was to identify the regulatory levels at which TPA-induced matrix metalloproteinase-9 (MMP-9) is altered by the gaseous mediator nitric oxide (NO) in MCF-7 cells. Using both zymography and an MMP-9 activity ELISA, it was shown that extracellular MMP-9 levels are reduced by treating the cells with NO. At the same time, NO also caused a decrease in intracellular MMP-9 levels, as demonstrated by Western blot analysis. Experiments with the proteasome inhibitor lactacystin and the protein-synthesis inhibitor cycloheximide furthermore ruled out an NO-induced change in MMP-9 protein stability. In contrast, metabolic labelling with radioactively labelled methionine and cysteine showed that de novo synthesis of the MMP-9 protein is strongly impaired by treating the cells with NO. Consistent with these data, reduced MMP-9 mRNA levels were also found in the polysomal cell fraction of MCF-7 cells. As shown with the transcription inhibitor actinomycin D and by reporter-gene studies with hybrid MMP-9 promoter constructs, the NO-induced reduction of MMP-9 mRNA levels is not due to decreased MMP-9 mRNA stability. Reporter-gene studies with a 670 bp promoter fragment of the 5' flanking region of the human MMP-9 gene showed, however, that the inhibitory effect of NO can partly be attributed to an NO-mediated decrease in TPA-induced MMP-9 promoter activity. Accordingly, subsequent experiments searched for the transcription factors that are required for MMP-9 expression and modulated by NO in MCF-7 cells.
Western blot and gel-shift analyses showed that the activity of the transcription factor AP-1 in MCF-7 cells is inhibited by NO, whereas neither the expression levels nor the binding affinity of the transcription factors NFκB and Sp1 are altered by NO treatment. Furthermore, using pharmacological inhibitors of the MAPK signalling pathways together with Western blot analysis, it was demonstrated that MAPK-mediated signalling pathways are essential for the induction of MMP-9 expression but are not affected by NO. In contrast, a PKC activity assay showed that the total activity of PKCs is significantly inhibited after treatment of MCF-7 cells with NO. Taken together, these investigations show that the NO-mediated inhibition of TPA-induced MMP-9 expression in MCF-7 cells can essentially be attributed to an NO-dependent reduction of protein kinase C activity and a resulting inhibition of the activity of the transcription factor AP-1.
Abstract:
In various imaging problems the task is to use the Cauchy data of the solutions to an elliptic boundary value problem to reconstruct the coefficients of the corresponding partial differential equation. Often the examined object has known background properties but is contaminated by inhomogeneities that cause perturbations of the coefficient functions. The factorization method of Kirsch provides a tool for locating such inclusions. In this paper, the factorization technique is studied in the framework of coercive elliptic partial differential equations of the divergence type: Earlier it has been demonstrated that the factorization algorithm can reconstruct the support of a strictly positive (or negative) definite perturbation of the leading order coefficient, or if that remains unperturbed, the support of a strictly positive (or negative) perturbation of the zeroth order coefficient. In this work we show that these two types of inhomogeneities can, in fact, be located simultaneously. Unlike in the earlier articles on the factorization method, our inclusions may have disconnected complements and we also weaken some other a priori assumptions of the method. Our theoretical findings are complemented by two-dimensional numerical experiments that are presented in the framework of the diffusion approximation of optical tomography.
Abstract:
In electrical impedance tomography, one tries to recover the conductivity inside a physical body from boundary measurements of current and voltage. In many practically important situations, the investigated object has known background conductivity but it is contaminated by inhomogeneities. The factorization method of Andreas Kirsch provides a tool for locating such inclusions. Earlier, it has been shown that under suitable regularity conditions positive (or negative) inhomogeneities can be characterized by the factorization technique if the conductivity or one of its higher normal derivatives jumps on the boundaries of the inclusions. In this work, we use a monotonicity argument to generalize these results: We show that the factorization method provides a characterization of an open inclusion (modulo its boundary) if each point inside the inhomogeneity has an open neighbourhood where the perturbation of the conductivity is strictly positive (or negative) definite. In particular, we do not assume any regularity of the inclusion boundary or set any conditions on the behaviour of the perturbed conductivity at the inclusion boundary. Our theoretical findings are verified by two-dimensional numerical experiments.
Abstract:
This thesis investigates versions of Mikhlin's theorem for pseudodifferential operators with non-regular Banach-space-valued symbols, and their applications to the generation of analytic semigroups by such operators on vector-valued Sobolev spaces W_p(R^n
Abstract:
The lattice formulation of Quantum Chromodynamics (QCD) has become a reliable tool providing ab initio calculations of low-energy quantities. Despite numerous successes, systematic uncertainties, such as discretisation effects, finite-size effects, and contaminations from excited states, are inherent in any lattice calculation. Simulations with controlled systematic uncertainties and close to the physical pion mass have become state-of-the-art. We present such a calculation for various hadronic matrix elements using non-perturbatively O(a)-improved Wilson fermions with two dynamical light quark flavours. The main topics covered in this thesis are the axial charge of the nucleon, the electro-magnetic form factors of the nucleon, and the leading hadronic contributions to the anomalous magnetic moment of the muon. Lattice simulations typically tend to underestimate the axial charge of the nucleon by 5-10%. We show that including excited-state contaminations using the summed operator insertion method leads to agreement with the experimentally determined value. Further studies of systematic uncertainties reveal only small discretisation effects. For the electro-magnetic form factors of the nucleon, we see a similar contamination from excited states as for the axial charge. The electro-magnetic radii, extracted from a dipole fit to the momentum dependence of the form factors, show no indication of finite-size or cutoff effects. If we include excited states using the summed operator insertion method, we achieve better agreement with the radii from phenomenology. The anomalous magnetic moment of the muon can be measured and predicted to very high precision. The theoretical prediction of the anomalous magnetic moment receives contributions from the strong, weak, and electro-magnetic interactions, where the hadronic contributions dominate the uncertainties.
A persistent 3σ tension between the experimental determination and the theoretical calculation exists, which is considered an indication of physics beyond the Standard Model. We present a calculation of the connected part of the hadronic vacuum polarisation using lattice QCD. Partially twisted boundary conditions lead to a significant improvement of the vacuum polarisation in the region of small momentum transfer, which is crucial for the extraction of the hadronic vacuum polarisation.
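The dipole extraction of a radius mentioned above can be sketched on synthetic data (illustrative numbers, not the lattice results): fitting G(Q²) = G(0)/(1 + Q²/M²)² and using ⟨r²⟩ = -6 G'(0)/G(0) = 12/M², with the radius here left in units of GeV⁻².

```python
import numpy as np
from scipy.optimize import curve_fit

def dipole(Q2, G0, M2):
    """Dipole ansatz G(Q^2) = G0 / (1 + Q^2/M2)^2, Q2 and M2 in GeV^2."""
    return G0 / (1.0 + Q2 / M2) ** 2

# noiseless synthetic form-factor data at a few momentum transfers
Q2 = np.linspace(0.05, 1.0, 12)          # GeV^2
true_G0, true_M2 = 1.0, 0.71
G = dipole(Q2, true_G0, true_M2)

popt, _ = curve_fit(dipole, Q2, G, p0=[1.0, 0.5])
G0_fit, M2_fit = popt
r2 = 12.0 / M2_fit                        # <r^2> = -6 G'(0)/G(0) = 12/M2
print(round(M2_fit, 3))
```

On real lattice data the fit would of course carry statistical errors and the excited-state systematics discussed above.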
Abstract:
This thesis assesses whether accounting for non-tradable goods sectors in a calibrated Auerbach-Kotlikoff multi-regional overlapping-generations model significantly affects the model's results when simulating the economic impact of demographic change. Non-tradable goods constitute a major part, up to 80 percent, of the GDP of modern economies. At the same time, the multi-regional overlapping-generations models presented in the literature on demographic change have so far ignored their existence and counterfactually assumed perfect tradability between model regions. Moreover, this thesis introduces the assumption of an increasing preference share for non-tradable goods among old generations. This fact-based assumption is likewise not part of the models in the relevant literature.
These obvious simplifications of common models vis-à-vis reality notwithstanding, this thesis concludes that the differences in results between a model featuring non-tradable goods and a common model with perfect tradability are very small. In other words, the common simplification of ignoring non-tradable goods is unlikely to lead to significant distortions in model results.
In order to ensure that differences in results between the 'new' model, featuring both non-tradable and tradable goods, and the common model solely reflect the more realistic structure of the 'new' model, both models are calibrated to match exactly the same benchmark data and thus show no deviations in their respective baseline steady states.
A variation analysis performed in this thesis suggests that differences between the common model and a model with non-tradable goods can in theory be large, but only if the benchmark tradable goods sector is assumed to be unrealistically small.
Finally, this thesis analyzes potential real exchange rate effects of demographic change, which could occur due to regional price differences of non-tradable goods. However, results show that shifts in the real exchange rate based on these price differences are negligible.