936 results for 010501 Algebraic Structures in Mathematical Physics
Abstract:
We analyze the process of informational exchange through complex networks by measuring network efficiencies. Aiming to study nonclustered systems, we propose a modification of this measure on the local level. We apply this method to an extension of the class of small worlds that includes declustered networks and show that they are locally quite efficient, although their clustering coefficient is practically zero. Unweighted systems with small-world and scale-free topologies are shown to be both globally and locally efficient. Our method is also applied to characterize weighted networks. In particular we examine the properties of underground transportation systems of Madrid and Barcelona and reinterpret the results obtained for the Boston subway network.
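For orientation, the baseline global and local efficiency measures this abstract builds on (Latora-Marchiori) can be sketched in a few lines; the authors' modified local measure for nonclustered systems is not reproduced here, and the networkx library and small-world test graph are illustrative choices only:

```python
import networkx as nx

def global_efficiency(G):
    """Average of 1/d(i, j) over ordered node pairs; unreachable pairs contribute zero."""
    n = G.number_of_nodes()
    if n < 2:
        return 0.0
    lengths = dict(nx.all_pairs_shortest_path_length(G))
    inv = sum(1.0 / d for src, dist in lengths.items()
              for dst, d in dist.items() if src != dst)
    return inv / (n * (n - 1))

def local_efficiency(G):
    """Mean global efficiency of the subgraph induced by each node's neighbours."""
    return sum(global_efficiency(G.subgraph(list(G.neighbors(v))))
               for v in G) / G.number_of_nodes()

G = nx.watts_strogatz_graph(200, 6, 0.1)   # a small-world test graph
print(global_efficiency(G), local_efficiency(G))
```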
Abstract:
The main objective of this work is to show how the choice of the temporal dimension and of the spatial structure of the population influences an artificial evolutionary process. In the field of Artificial Evolution we can observe a common trend of synchronously evolving panmictic populations, i.e., populations in which any individual can be recombined with any other individual. Already in the '90s, the works of Spiessens and Manderick, Sarma and De Jong, and Gorges-Schleuter pointed out that, if a population is structured according to a mono- or bi-dimensional regular lattice, the evolutionary process shows a different dynamic with respect to the panmictic case. In particular, Sarma and De Jong studied the selection pressure (i.e., the diffusion of a best individual when only the selection operator is active) induced by a regular bi-dimensional structure of the population, proposing a logistic modeling of the selection pressure curves. This model supposes that the diffusion of a best individual in a population follows an exponential law. We show that such a model is inadequate to describe the process, since the growth speed must be quadratic or sub-quadratic in the case of a bi-dimensional regular lattice. New linear and sub-quadratic models are proposed for modeling the selection pressure curves in, respectively, mono- and bi-dimensional regular structures. These models are extended to describe the process when asynchronous evolutions are employed. Different dynamics of the populations imply different search strategies of the resulting algorithm when the evolutionary process is used to solve optimisation problems. A benchmark of both discrete and continuous test problems is used to study the search characteristics of the different topologies and updates of the populations. In the last decade, the pioneering studies of Watts and Strogatz have shown that most real networks, both in the biological and sociological worlds as well as in man-made structures, have mathematical properties that set them apart from regular and random structures. In particular, they introduced the concept of small-world graphs, and they showed that this new family of structures has interesting computing capabilities. Populations structured according to these new topologies are proposed, and their evolutionary dynamics are studied and modeled. We also propose asynchronous evolutions for these structures, and the resulting evolutionary behaviors are investigated. Many man-made networks have grown, and are still growing, incrementally, and explanations have been proposed for their actual shape, such as Albert and Barabási's preferential attachment growth rule. However, many actual networks seem to have undergone some kind of Darwinian variation and selection. Thus, how these networks might have come to be selected is an interesting yet unanswered question. In the last part of this work, we show how a simple evolutionary algorithm can enable the emergence of these kinds of structures for two prototypical problems of the automata networks world, the majority classification and the synchronisation problems.
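The growth-curve claim can be checked with a minimal takeover-time simulation: seed one best individual on a toroidal grid, let each cell adopt the better of itself and one random neighbour per synchronous step, and record the fraction of best copies. All parameters below (grid side, step count, von Neumann neighbourhood) are illustrative, not taken from the thesis:

```python
import random

def takeover_curve(side=32, steps=60):
    """Fraction of copies of the single best individual per synchronous step
    on a side x side toroidal lattice with von Neumann neighbourhoods."""
    n = side * side
    fitness = [[0] * side for _ in range(side)]
    fitness[side // 2][side // 2] = 1          # one best individual
    curve = [1 / n]
    for _ in range(steps):
        nxt = [[0] * side for _ in range(side)]
        for i in range(side):
            for j in range(side):
                # each cell keeps the better of itself and one random neighbour
                di, dj = random.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
                neigh = fitness[(i + di) % side][(j + dj) % side]
                nxt[i][j] = max(fitness[i][j], neigh)
        fitness = nxt
        curve.append(sum(map(sum, fitness)) / n)
    return curve

# The best individual spreads as a growing disc, so the curve rises roughly
# quadratically in time rather than exponentially as a logistic model predicts.
print(takeover_curve()[:10])
```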
Abstract:
Background: Optimization methods allow designing changes in a system so that specific goals are attained. These techniques are fundamental for metabolic engineering. However, they are not directly applicable for investigating the evolution of metabolic adaptation to environmental changes. Although biological systems have evolved by natural selection into well-adapted systems, we can hardly expect actual metabolic processes to sit at the theoretical optimum that would result from an optimization analysis. More likely, natural systems are to be found in a feasible region compatible with global physiological requirements. Results: We first present a new method for globally optimizing nonlinear models of metabolic pathways based on the Generalized Mass Action (GMA) representation. The optimization task is posed as a nonconvex nonlinear programming (NLP) problem that is solved by an outer-approximation algorithm. This method relies on iteratively solving reduced NLP slave subproblems and mixed-integer linear programming (MILP) master problems that provide valid upper and lower bounds, respectively, on the global solution to the original NLP. The capabilities of this method are illustrated through its application to the anaerobic fermentation pathway in Saccharomyces cerevisiae. We next introduce a method to identify the feasible parameter regions that allow a system to meet a set of physiological constraints that can be represented mathematically through algebraic equations. This technique applies the outer-approximation algorithm iteratively over a reduced search space in order to identify regions that contain feasible solutions to the problem and discard others in which no feasible solution exists. As an example, we characterize the feasible enzyme activity changes that are compatible with an appropriate adaptive response of the yeast Saccharomyces cerevisiae to heat shock. Conclusion: Our results show the utility of the suggested approach for investigating the evolution of adaptive responses to environmental changes. The proposed method can also be used in other important applications, such as the evaluation of parameter changes that are compatible with health and disease states.
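For reference, a GMA model writes every reaction rate as a product of power laws, so the design task above takes roughly the following nonconvex NLP form; the symbols γ, f and the stoichiometric signs μ are generic notation chosen here for illustration, not necessarily the paper's:

```latex
% GMA kinetics: each rate is a power law; stoichiometric signs mu in {-1,0,+1}
\dot{X}_i = \sum_{r} \mu_{ir}\,\gamma_r \prod_{j} X_j^{\,f_{rj}}
% Steady-state optimization of a target flux over metabolite levels X and
% enzyme-related rate constants gamma, within physiological bounds:
\max_{X,\,\gamma}\; v_{\mathrm{obj}}
\quad \text{s.t.} \quad
\sum_{r} \mu_{ir}\,\gamma_r \prod_{j} X_j^{\,f_{rj}} = 0 \;\; \forall i,
\qquad X^{L} \le X \le X^{U},\;\; \gamma^{L} \le \gamma \le \gamma^{U}.
```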
Abstract:
This thesis gives an overview of the use of level set methods in the field of image science. The related fast marching method is discussed for comparison, and the narrow band and particle level set methods are also introduced. The level set method is a numerical scheme for representing, deforming and recovering structures in arbitrary dimensions. It approximates and tracks moving interfaces, dynamic curves and surfaces. The level set method does not define how and why a boundary is advancing the way it is, but simply represents and tracks it. The principal idea of the level set method is to represent an N-dimensional boundary in N+1 dimensions. This gives the generality to represent even complex boundaries. Level set methods can be powerful tools for representing dynamic boundaries, but they can require a lot of computing power; in particular, the basic level set method carries a considerable computational burden. This burden can be alleviated with more sophisticated versions of the level set algorithm, such as the narrow band level set method, or with programmable hardware implementations. A parallel approach can also be used in suitable applications. It is concluded that these methods can be used in a quite broad range of image applications, such as computer vision and graphics, scientific visualization, and the solution of problems in computational physics. Level set methods, and methods derived from or inspired by them, will remain at the front line of image processing in the future.
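Concretely, the "N-dimensional boundary in N+1 dimensions" idea is usually written as follows (a standard formulation; the notation here is illustrative):

```latex
% The interface is the zero level set of a function phi, often initialized
% as a signed distance to the initial boundary:
\Gamma(t) = \{\, x : \phi(x,t) = 0 \,\}, \qquad \phi(x,0) = \pm\, d\big(x, \Gamma(0)\big)
% Moving the interface with speed F along its normal amounts to the PDE
\frac{\partial \phi}{\partial t} + F\,|\nabla \phi| = 0 .
```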
Abstract:
The Feller process is a one-dimensional diffusion process with linear drift and a state-dependent diffusion coefficient that vanishes at the origin. The process stays positive, and it is this property, along with its linear character, that has made the Feller process a convenient candidate for the modeling of a number of phenomena ranging from single-neuron firing to the volatility of financial assets. While the general properties of the process have long been well known, less known are properties related to level crossings, such as the first-passage and escape problems. In this work we thoroughly address these questions.
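In one common parametrization (the notation is chosen here for illustration; the work may use a different one), the Feller process solves

```latex
% Linear drift; the diffusion coefficient k^2 X vanishes at X = 0
dX_t = \left( \alpha - \beta X_t \right) dt + k \sqrt{X_t}\; dW_t,
\qquad \alpha,\ \beta,\ k > 0 ,
```

with the classical Feller condition 2α ≥ k² guaranteeing that the origin is never reached.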
Abstract:
A study of D+π−, D0π+ and D∗+π− final states is performed using pp collision data, corresponding to an integrated luminosity of 1.0 fb−1, collected at a centre-of-mass energy of 7 TeV with the LHCb detector. The D1(2420)0 resonance is observed in the D∗+π− final state and the D∗2(2460) resonance is observed in the D+π−, D0π+ and D∗+π− final states. For both resonances, their properties and spin-parity assignments are obtained. In addition, two natural parity and two unnatural parity resonances are observed in the mass region between 2500 and 2800 MeV. Further structures in the region around 3000 MeV are observed in all the D∗+π−, D+π− and D0π+ final states.
Abstract:
Social, technological, and economic time series are punctuated by events which are usually assumed to be random, albeit with some hierarchical structure. It is well known that the interevent statistics observed in these contexts differ from the Poissonian profile: they are long-tailed, with resting and active periods interwoven. Understanding the mechanisms that generate such consistent statistics has therefore become a central issue. The approach we present is taken from the continuous-time random-walk formalism and represents an analytical alternative to the models of nontrivial priority that have recently been proposed. Our analysis also goes one step further by looking at the multifractal structure of the interevent times of human decisions. Here we analyze the intertransaction time intervals of several financial markets. We observe that the empirical data exhibit a subtle multifractal behavior. Our model explains this structure by taking the pausing-time density in the form of a superstatistics, where the integral kernel quantifies the heterogeneous nature of the executed tasks. A stretched exponential kernel provides a multifractal profile valid over a certain limited range. A suggested heuristic analytical profile is capable of covering a broader region.
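A superstatistical pausing-time density of the kind referred to above has the generic mixture form sketched below; the kernel and variable names are illustrative, and the paper's exact construction may differ:

```latex
% Waiting-time density as a mixture of exponentials over a rate kernel g(r):
\psi(t) = \int_0^{\infty} g(r)\, r\, e^{-r t}\, dr
% e.g. a stretched-exponential kernel g(r) \propto e^{-(r/r_0)^{\beta}}, 0<\beta<1,
% producing long-tailed interevent statistics over a finite range.
```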
Abstract:
The aim of this study is to analyse the content of the interdisciplinary conversations in Göttingen between 1949 and 1961. The task is to compare models for describing reality presented by quantum physicists and theologians. Descriptions of reality in different disciplines are conditioned by the development of the concept of reality in philosophy, physics and theology. Our basic problem is stated in the question: How is it possible for the intramental image to match the external object? Cartesian knowledge presupposes clear and distinct ideas in the mind prior to observation, resulting in a true correspondence between the observed object and the cogitative observing subject. The Kantian synthesis between rationalism and empiricism emphasises the extended character of representation. The human mind is not a passive receiver of external information, but actively constructs intramental representations of external reality in the epistemological process. Heidegger's aim was to reach a more primordial mode of understanding reality than is possible in the Cartesian subject-object distinction. In Heidegger's philosophy, ontology as being-in-the-world is prior to knowledge concerning being. Ontology can be grasped only in the totality of being (Dasein), not only as an object of reflection and perception. According to Bohr, quantum mechanics introduces an irreducible loss in representation, which, classically understood, is a deficiency in knowledge. The conflicting aspects (particle and wave pictures) in our comprehension of physical reality cannot be completely accommodated into an entire and coherent model of reality. What Bohr rejects is not realism, but the classical Einsteinian version of it. By the use of complementary descriptions, Bohr tries to save a fundamentally realistic position. The fundamental question in Barthian theology is the problem of God as an object of theological discourse. Dialectics is Barth's way to express knowledge of God while avoiding a speculative theology and a human-centred religious self-consciousness. In Barthian theology, the human capacity for knowledge, independently of revelation, is insufficient to comprehend the being of God. Our knowledge of God is real knowledge in revelation, and our words are made to correspond with the divine reality in an analogy of faith. The point of the Bultmannian demythologising programme was to claim the real existence of God beyond our faculties. We cannot simply define God as a human ideal of existence or a focus of values. The theological programme of Bultmann emphasised the notion that we can talk meaningfully of God only insofar as we have existential experience of his intervention. Common to all these twentieth-century philosophical, physical and theological positions is a form of anti-Cartesianism. Consequently, in regard to their epistemology, they can be labelled antirealist. This common insight also made it possible to find a common meeting point between the different disciplines. In this study, the different standpoints from all three areas and the conversations in Göttingen are analysed in the framework of realism/antirealism. One of the first tasks in the Göttingen conversations was to analyse the nature of the likeness between the complementary structures in quantum physics introduced by Niels Bohr and the dialectical forms in the Barthian doctrine of God.
The reaction against epistemological Cartesianism, metaphysics of substance and deterministic description of reality was the common point of departure for theologians and physicists in the Göttingen discussions. In his complementarity, Bohr anticipated the crossing of traditional epistemic boundaries and the generalisation of epistemological strategies by introducing interpretative procedures across various disciplines.
Abstract:
In this study, a mathematical model was fitted to measure the effect of electric motor efficiency on pumping system costs for irrigation under the conventional and the green horo-seasonal (time-of-use) electricity tariff structures, and to calculate the payback period of the capital invested in higher-efficiency equipment. The model was then applied to a center pivot irrigation system with two options of electric motor efficiency, 92.6% (standard line) and 94.3% (high-efficiency line), the acquisition cost of the first corresponding to 70% of that of the second. The power of the electric motor was 100 hp. The results showed that the model makes it possible to evaluate whether a high-efficiency motor is economically viable compared with the standard motor under each tariff structure. The high-efficiency motor was not viable under either tariff structure. Under the green horo-seasonal tariff, it would only be viable if its efficiency were 4.46% higher than that of the standard motor; under the conventional tariff, only if the efficiency gain exceeded 2.71%.
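The underlying payback arithmetic can be sketched as follows; all prices, operating hours and the flat tariff below are hypothetical placeholders (the abstract does not give them), with only the 100 hp rating, the two efficiencies and the 70% cost ratio taken from the text:

```python
# Hypothetical payback comparison between a standard and a
# high-efficiency motor driving the same 100 hp pumping load.
HP_TO_KW = 0.7457

def annual_energy_cost(shaft_hp, efficiency, hours_per_year, tariff_per_kwh):
    """Electrical energy cost of a motor delivering shaft_hp at the given efficiency."""
    input_kw = shaft_hp * HP_TO_KW / efficiency
    return input_kw * hours_per_year * tariff_per_kwh

# Assumed figures (illustrative only): 1500 h/year, 0.10 currency units/kWh.
cost_std = annual_energy_cost(100, 0.926, 1500, 0.10)
cost_hi  = annual_energy_cost(100, 0.943, 1500, 0.10)
savings = cost_std - cost_hi

price_hi, price_std = 10000.0, 7000.0   # hypothetical purchase prices (70% ratio)
payback_years = (price_hi - price_std) / savings
print(f"annual savings: {savings:.2f}, payback: {payback_years:.1f} years")
```

With these illustrative numbers the payback runs to over a decade, consistent with the abstract's conclusion that the small efficiency gain does not pay for the price difference.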
Abstract:
Cajal bodies (CB) are ubiquitous nuclear structures involved in the biogenesis of small nuclear ribonucleoproteins and show a close association with the nucleolus. To identify possible relationships between CB and the nucleolus, the localization of coilin, a marker of CB, and of a set of nucleolar proteins was investigated in cultured PtK2 cells undergoing micronucleation. Nocodazole-induced micronucleated cells were examined by double indirect immunofluorescence with antibodies against coilin, fibrillarin, NOR-90/hUBF, RNA polymerase I, PM/Scl, and To/Th. Cells were imaged on a BioRad 1024-UV confocal system attached to a Zeiss Axiovert 100 microscope. Since PtK2 cells possess only one nucleolar organizer region, micronucleated cells presented only one or two micronuclei containing a nucleolus. By confocal microscopy we showed that in most micronuclei lacking a typical nucleolus, a variable number of round structures were stained by antibodies against fibrillarin, the NOR-90/hUBF protein, and coilin. These bodies were regarded as CB-like structures and were not stained by anti-PM/Scl and anti-To/Th antibodies. Anti-RNA polymerase I antibodies also reacted with CB-like structures in some micronuclei lacking a nucleolus. The demonstration that a set of proteins involved in RNA/RNP biogenesis, namely coilin, fibrillarin, NOR-90/hUBF, and RNA polymerase I, gathers in CB-like structures present in nucleoli-devoid micronuclei may help shed light on CB function.
Abstract:
Optimization of quantum measurement processes plays a pivotal role in carrying out better, that is, more accurate or less disruptive, measurements and experiments on a quantum system. In particular, convex optimization, i.e., identifying the extreme points of the convex sets and subsets of quantum measuring devices, plays an important part in quantum optimization, since the typical figures of merit for measuring processes are affine functionals. In this thesis, we discuss results determining the extreme quantum devices and their relevance, e.g., in quantum-compatibility-related questions. In particular, we see that a compatible device pair where one device is extreme can be joined into a single apparatus in an essentially unique way. Moreover, we show that the question of whether a pair of quantum observables can be measured jointly can often be formulated in a weaker form when some of the observables involved are extreme. Another major line of research treated in this thesis deals with convex analysis of special restricted quantum device sets, covariance structures or, in particular, generalized imprimitivity systems. Some results on the structure of covariant observables and instruments are presented, as well as results identifying the extreme points of covariance structures in quantum theory. As a special case study, not published anywhere before, we study the structure of Euclidean-covariant localization observables for spin-0 particles. We also discuss the general form of Weyl-covariant phase-space instruments. Finally, certain optimality measures originating from convex geometry are introduced for quantum devices, namely boundariness, which measures how 'close' a quantum apparatus is to the algebraic boundary of the device set, and the robustness of incompatibility, which quantifies the level of incompatibility for a quantum device pair by measuring the highest amount of noise the pair tolerates without becoming compatible. Boundariness is further associated with minimum-error discrimination of quantum devices, and the robustness of incompatibility is shown to behave monotonically under certain compatibility-non-decreasing operations. Moreover, the value of the robustness of incompatibility is given for a few special device pairs.
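The robustness of incompatibility described above is commonly formalized as a noise-mixing infimum of roughly the following shape; this is a sketch in illustrative notation, and the thesis's precise definition may differ in details:

```latex
% Smallest mixing weight t of noise devices (N, M) that renders the pair compatible:
R(A,B) \;=\; \inf\Big\{\, t \ge 0 \;:\; \exists\, N, M \text{ such that }
\tfrac{1}{1+t}\big(A + t N\big) \text{ and } \tfrac{1}{1+t}\big(B + t M\big)
\text{ are compatible} \Big\}.
```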
Abstract:
The Hox gene family codes for transcription factors known for their essential contribution to establishing the architecture of the body throughout the animal kingdom. During vertebrate evolution, Hox genes have been redeployed to generate a whole variety of new tissues/organs. Often, this diversification has occurred via changes in the transcriptional control of Hox genes. In mammals, the function of Hoxa13 is not restricted to the embryo proper but is also essential for the development of the fetal vasculature within the placental labyrinth, suggesting that its function in this structure accompanied the emergence of placental species. In Chapter 2, we highlight the recruitment of two other Hoxa genes, Hoxa10 and Hoxa11, to the extra-embryonic compartment. We demonstrate that the expression of Hoxa10, Hoxa11 and Hoxa13 is required within the allantois, the precursor of the umbilical cord and of the fetal vascular system within the placental labyrinth. Interestingly, we found that the expression of the Hoxa10-13 genes in the allantois is not restricted to placental mammals but is also present in a non-placental vertebrate, indicating that the recruitment of these genes to the allantois most likely preceded the emergence of placental species. We generated genetic rearrangements and used transgenic assays to study the mechanisms regulating Hoxa gene expression in the allantois. We identified a 50 kb intergenic fragment capable of driving reporter gene expression in the allantois. However, we found that the regulatory mechanism controlling Hoxa gene expression in the extra-embryonic compartment is highly complex and relies on more than a single cis-regulatory element. In Chapter 3, we used genetic fate mapping to assess the overall contribution of Hoxa13-expressing cells to the various embryonic structures. In particular, we examined in greater detail the fate-mapping analysis of Hoxa13 in the developing forelimbs. We determined that, in the limb skeleton, all skeletal elements of the autopod (hand), except for a few cells in the most proximal carpal elements, derive from Hoxa13-expressing cells. In contrast, we found that, within the muscular compartment, Hoxa13-expressing cells and their descendants (Hoxa13lin+) extend to more proximal domains of the limb, where they contribute to most of the muscle masses of the forearm and, in part, of the triceps. Interestingly, we found that Hoxa13-expressing cells and their descendants are not uniformly distributed among the different muscles. Within a single muscle mass, fibers with different Hoxa13lin+ contributions can be identified, and fibers with similar contributions are often grouped together. This result raises the possibility that Hoxa13 is involved in establishing specific characteristics of muscle groups, or in establishing nerve-muscle connections. Taken together, the data presented here provide a better understanding of the role of Hoxa13 in the embryonic and extra-embryonic compartments.
Moreover, our results will be of prime importance in supporting future studies aimed at explaining the transcriptional mechanisms underlying Hoxa gene regulation in extra-embryonic tissues.
Abstract:
This thesis deals with some aspects of the physics of the early universe, such as phase transitions, bubble nucleation and primordial density perturbations, which lead to the formation of structures in the universe. Quantum aspects of the gravitational interaction play an essential role in theoretical high-energy physics. The questions of quantum gravity are naturally connected with the early universe and Grand Unification Theories. In spite of numerous efforts, various problems of quantum gravity remain unsolved. Under these circumstances, the consideration of different quantum gravity models is an inevitable stage in studying the quantum aspects of the gravitational interaction. The important role of the gravitationally coupled scalar field in the physics of the early universe is discussed in this thesis. The study shows that the scalar-gravitational coupling and the scalar curvature played a crucial role in determining the nature of the phase transitions that took place in the early universe. The key idea in studying the formation of structure in the universe is that of gravitational instability.
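The scalar-gravitational coupling referred to above is conventionally introduced as a nonminimal term in the scalar field action; the following is a standard form shown for orientation (sign conventions vary, and the thesis may use a different one):

```latex
% Nonminimally coupled scalar: the xi R phi^2 term ties the field to curvature
S = \int d^4x \, \sqrt{-g}\,\Big[ \tfrac{1}{2}\, g^{\mu\nu} \partial_\mu\phi\, \partial_\nu\phi
    \;-\; \tfrac{1}{2}\, \xi R\, \phi^2 \;-\; V(\phi) \Big]
% The effective mass then acquires a curvature contribution,
% m_eff^2 = V''(phi) + xi R,
% which is how the scalar curvature can shift the character of
% early-universe phase transitions.
```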
Abstract:
The focus of this dissertation is, on the one hand, the development of a theoretical model describing the structure formation process in organic/inorganic double-layer systems and, on the other hand, the investigation of the transferability of these theoretical results to real systems. Systematic experimental investigations of this phenomenon on a test system serve this purpose. The field of self-organizing systems is of high scientific interest, since it allows the realization of structures that are not subject to the limitations of present-day techniques, such as diffraction in lithographic processes. Moreover, a deeper understanding of the structure formation process also provides a means, in corresponding technical applications, of preventing instabilities within the layer systems and thus counteracting degradation of the devices. In the theoretical part of the work, a model was developed within classical elasticity theory with which the emergence of the structures in double-layer systems can be understood. The functional relationship found here between the period of the structures and the ratio of the thicknesses of the organic and inorganic layers is very well confirmed by the experimental results. The results show that it is technologically possible to prescribe the periodicity of the emerging structures in a material system by choosing the layer thickness. Furthermore, the presented model provides a stability condition for the layer systems that makes it possible to identify the dominant mode at any point in time. One focus of the experimental investigations of this work is structure formation within the layer systems. The test system was realized by depositing an organic layer (a so-called molecular glass) onto a glass substrate, with a silicon nitride layer serving as the capping layer. Samples with varying layer thicknesses were heated in a controlled manner. As soon as the temperature of the layer system was of the order of the glass transition temperature of the respective organic material, structure formation occurred spontaneously due to stress relaxation. Different structures could be realized by choosing an appropriate heat source. When a pulsed laser, i.e., a circular heat source, was used, the structures arranged themselves concentrically, whereas their orientation was randomly distributed when a planar hot plate was used. A striking feature of all structures was a strong modulation of the surface. Furthermore, the work showed that the orientation of the structures could be deliberately manipulated by a targeted modification of the stress distribution within the layer systems. Independently of this, varying the layer thicknesses allowed the realization of structures with periodicities ranging from a few µm down to about 200 nm. Control over orientation and periodicity is a basic prerequisite for future technological use of this effect for the controlled fabrication of micro- and nanostructures.
In addition, a concept for an active sensor for scanning near-field optical microscopy, initially independent of the structure formation, was presented; it uses the system described above, consisting of a fluorescent molecular glass and a silicon nitride capping layer. First theoretical and experimental results show the technological potential of this type of sensor.