955 results for Group theoretical based techniques


Relevance:

40.00%

Publisher:

Abstract:

Human movement analysis (HMA) aims to measure the ability of a subject to stand or to walk. In the field of HMA, tests are performed daily in research laboratories, hospitals and clinics, with the aim of diagnosing a disease, distinguishing between disease entities, monitoring the progress of a treatment and predicting the outcome of an intervention [Brand and Crowninshield, 1981; Brand, 1987; Baker, 2006]. To achieve these purposes, clinicians and researchers use measurement devices such as force platforms, stereophotogrammetric systems, accelerometers, baropodometric insoles, etc. This thesis focuses on the force platform (FP) and in particular on the quality assessment of FP data. The principal objective of our work was the design and experimental validation of a portable system for the in situ calibration of FPs. The thesis is structured as follows. Chapter 1: description of the physical principles underlying the functioning of an FP and of how these principles are used to create force transducers, such as strain gauges and piezoelectric transducers; description of the two categories of FPs, three- and six-component, of the signal acquisition (hardware structure) and of the signal calibration; finally, a brief description of the use of FPs in HMA for balance or gait analysis. Chapter 2: description of inverse dynamics, the most common method used in the field of HMA. This method uses the signals measured by an FP to estimate kinetic quantities, such as joint forces and moments. These variables cannot be measured directly without very invasive techniques; consequently they can only be estimated using indirect techniques, such as inverse dynamics. The chapter ends with a brief description of the sources of error present in gait analysis. Chapter 3: state of the art in FP calibration. The selected literature is divided into sections describing, respectively: systems for the periodic control of FP accuracy; systems for the reduction of errors in FP signals; and systems and procedures for the construction of an FP. In particular, a calibration system designed by our group, based on the theoretical method proposed by ?, is described in detail. This system was the starting point for the new system presented in this thesis. Chapter 4: description of the new system in its three parts: 1) the algorithm; 2) the device; and 3) the calibration procedure required for correctly performing the calibration process. The characteristics of the algorithm were optimized by a simulation approach, whose results are presented; in addition, the different versions of the device are described. Chapter 5: experimental validation of the new system, achieved by testing it on 4 commercial FPs. The effectiveness of the calibration was verified by measuring, before and after calibration, the accuracy of the FPs in measuring the center of pressure of an applied force. The new system can estimate local and global calibration matrices; using these matrices, the non-linearity of the FPs was quantified and locally compensated. Furthermore, a non-linear calibration is proposed, which compensates the non-linear effect in the FP functioning due to the bending of its upper plate. The experimental results are presented. Chapter 6: influence of the FP calibration on the estimation of kinetic quantities with the inverse dynamics approach. Chapter 7: conclusions of the thesis, i.e. the need for a calibration of FPs and the consequent enhancement of the quality of the kinetic data.
Appendix: calibration of the LC used in the presented system. Different calibration set-ups of a 3D force transducer are presented and the optimal set-up is proposed, with particular attention to the compensation of non-linearities. The optimal set-up is verified by experimental results.
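As a purely illustrative companion to the above, the following minimal Python sketch shows how a 6x6 calibration matrix is typically applied to raw six-component FP outputs and how the center of pressure is then computed. The matrix values, the sensor-origin offset z0 and the sign conventions are assumptions for illustration, not the algorithm developed in the thesis.

import numpy as np

def apply_calibration(raw, C):
    """raw: (N, 6) array of uncalibrated [Fx, Fy, Fz, Mx, My, Mz] samples;
    C: (6, 6) calibration matrix. Returns the calibrated wrenches."""
    return raw @ C.T

def center_of_pressure(wrench, z0=0.0):
    """Center of pressure on the plate surface from a calibrated wrench.
    z0 is the height of the plate surface above the transducer origin
    (convention depends on the FP); the free vertical moment is neglected."""
    Fx, Fy, Fz, Mx, My, _ = wrench.T
    cop_x = (z0 * Fx - My) / Fz
    cop_y = (z0 * Fy + Mx) / Fz
    return np.stack([cop_x, cop_y], axis=-1)

# Example: CoP of a single sample before and after applying a made-up matrix.
C = np.eye(6) + 0.01 * np.random.default_rng(0).standard_normal((6, 6))
raw = np.array([[5.0, -3.0, 700.0, 21.0, -35.0, 0.4]])
print(center_of_pressure(raw, z0=0.04))
print(center_of_pressure(apply_calibration(raw, C), z0=0.04))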

Relevance:

40.00%

Publisher:

Abstract:

Biohybrid derivatives of π-conjugated materials are emerging as powerful tools to study biological events through the (opto)electronic variations of the π-conjugated moieties, as well as to direct and govern the self-assembly properties of the organic materials through the organization principles of the bio component. So far, very few examples of thiophene-based biohybrids have been reported. The aim of this Ph.D. thesis has been the development of oligothiophene-oligonucleotide hybrid derivatives as tools, on the one hand, to detect DNA hybridization events and, on the other, as model compounds to investigate thiophene-nucleobase interactions in the solid state. To obtain oligothiophene bioconjugates with the required high level of purity, we first developed new, eco-friendly synthetic protocols for the synthesis of thiophene oligomers. Our heterogeneous Suzuki coupling methodology, carried out in EtOH/water or isopropanol under microwave irradiation, allowed us to obtain alkyl-substituted oligothiophenes and thiophene-based co-oligomers in high yields and very short reaction times, free from residual metals and with improved film-forming properties. These methodologies were subsequently applied to the synthesis of oligothiophene-oligonucleotide conjugates. Oligothiophene-5-labeled deoxyuridines were synthesized and incorporated into 19-meric oligonucleotide sequences. We showed that the resulting oligothiophene-labeled oligonucleotide sequences can be used as probes to detect a single nucleotide polymorphism (SNP) in complementary DNA target sequences: all the probes showed marked variations in emission intensity upon hybridization with a complementary target sequence. The observed variations in emitted light were comparable or even superior to those reported in similar studies, showing that these biohybrids can potentially be used to develop biosensors for the detection of DNA mismatches. Finally, water-soluble, photoluminescent and electroactive dinucleotide hybrid derivatives of quaterthiophene and quinquethiophene were synthesized. By means of a combination of spectroscopy and microscopy techniques, electrical characterizations, microfluidic measurements and theoretical calculations, we were able to demonstrate that the self-assembly of the biohybrids in thin films is driven by the interplay of intra- and intermolecular interactions, in which the π-stacking between the oligothiophenes and the nucleotide bases plays a major role.

Relevance:

40.00%

Publisher:

Abstract:

During the last few years, several methods have been proposed to study and evaluate characteristic properties of the human skin using non-invasive approaches. Mostly, these methods cover aspects related either to dermatology, to analyze skin physiology and to evaluate the effectiveness of medical treatments for skin diseases, or to dermocosmetics and cosmetic science, for example to evaluate the effectiveness of anti-aging treatments. For these purposes, a routine-based approach must be followed. Although very accurate and high-resolution measurements can be achieved with conventional methods, such as optical or mechanical profilometry, their use is quite limited, primarily because of the high cost of the required instrumentation, which in turn is usually cumbersome; these are some of the limitations for a routine-based analysis. This thesis investigates the feasibility of a non-invasive skin characterization system based on the analysis of capacitive images of the skin surface. The system relies on a portable CMOS capacitive device which provides a 50 micron/pixel resolution capacitance map of the skin micro-relief. To extract characteristic features of the skin topography, image analysis techniques such as watershed segmentation and wavelet analysis have been used to detect the main structures of interest: the wrinkles and plateaus of the typical micro-relief pattern. To validate the method, the features extracted from a dataset of skin capacitive images, acquired during dermatological examinations of a group of healthy volunteers, were compared with the age of the subjects involved, showing good correlation with the skin ageing effect. A detailed analysis of the output of the capacitive sensor, compared with optical profilometry of silicone replicas of the same skin area, has revealed the potential and some limitations of this technology. Applications to follow-up studies, as needed to objectively evaluate the effectiveness of treatments in a routine manner, are also discussed.
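As an illustration of the kind of image analysis mentioned above, the following Python sketch segments the micro-relief of a capacitive image with a gradient-based watershed and derives simple wrinkle/plateau descriptors. The percentile thresholds, smoothing parameters and feature definitions are assumptions for illustration, not the pipeline used in the thesis.

import numpy as np
from scipy import ndimage as ndi
from skimage.filters import sobel, gaussian
from skimage.segmentation import watershed

def microrelief_features(img, pixel_size_um=50.0):
    """img: 2D float array (capacitance map). Returns basic descriptors."""
    smooth = gaussian(img, sigma=1)
    gradient = sobel(smooth)                          # ridges of the relief pattern
    # Markers: dark furrows vs bright plateaus (percentile cut-offs are arbitrary).
    markers = np.zeros_like(img, dtype=int)
    markers[smooth < np.percentile(smooth, 20)] = 1   # wrinkle candidates
    markers[smooth > np.percentile(smooth, 80)] = 2   # plateau candidates
    labels = watershed(gradient, markers)
    _, n_plateaus = ndi.label(labels == 2)
    wrinkle_fraction = float(np.mean(labels == 1))
    mean_plateau_area = (labels == 2).sum() / max(n_plateaus, 1) * pixel_size_um**2
    return {"wrinkle_fraction": wrinkle_fraction,
            "n_plateaus": n_plateaus,
            "mean_plateau_area_um2": mean_plateau_area}

# Example on synthetic data; in a study these features would be compared
# against subject age across the acquired dataset.
rng = np.random.default_rng(1)
print(microrelief_features(ndi.gaussian_filter(rng.random((256, 256)), 4)))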

Relevance:

40.00%

Publisher:

Abstract:

Mixed integer programming is today one of the most widely used techniques for dealing with hard optimization problems. On the one hand, many practical optimization problems arising from real-world applications (such as scheduling, project planning, transportation, telecommunications, economics and finance, timetabling, etc.) can be easily and effectively formulated as Mixed Integer linear Programs (MIPs). On the other hand, more than 50 years of intensive research have dramatically improved the capability of the current generation of MIP solvers to tackle hard problems in practice. However, many questions are still open and not fully understood, and the mixed integer programming community is still more than active in trying to answer some of them. As a consequence, a huge number of papers are continuously published and new intriguing questions arise every year. When dealing with MIPs, we have to distinguish between two different scenarios. The first occurs when we are asked to handle a general MIP and cannot assume any special structure for the given problem. In this case, a Linear Programming (LP) relaxation and some integrality requirements are all we have for tackling the problem, and we are "forced" to use general-purpose techniques. The second occurs when mixed integer programming is used to address a somehow structured problem. In this context, polyhedral analysis and other theoretical and practical considerations are typically exploited to devise special-purpose techniques. This thesis tries to give some insight into both situations. The first part of the work is focused on general-purpose cutting planes, which are probably the key ingredient behind the success of the current generation of MIP solvers. Chapter 1 presents a quick overview of the main ingredients of a branch-and-cut algorithm, while Chapter 2 recalls some results from the literature on disjunctive cuts and their connections with Gomory mixed integer cuts. Chapter 3 presents a theoretical and computational investigation of disjunctive cuts. In particular, we analyze the connections between different normalization conditions (i.e., conditions used to truncate the cone associated with disjunctive cutting planes) and other crucial aspects such as cut rank, cut density and cut strength. We give a theoretical characterization of the weak rays of the disjunctive cone that lead to dominated cuts, and propose a practical method to strengthen the cuts arising from such weak extremal solutions. Further, we point out how redundant constraints can affect the quality of the generated disjunctive cuts, and discuss possible ways to cope with them. Finally, Chapter 4 presents some preliminary ideas in the context of multiple-row cuts. Very recently, a series of papers has brought attention to the possibility of generating cuts using more than one row of the simplex tableau at a time. Several interesting theoretical results have been presented in this direction, often revisiting and recalling important results discovered more than 40 years ago. However, it is not at all clear how these results can be exploited in practice. As stated, the chapter is still a work in progress; it presents a possible way of generating two-row cuts, arising from lattice-free triangles, from the simplex tableau, together with some preliminary computational results.
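For context on the normalization conditions discussed in Chapter 3, a standard form of the cut-generating LP (CGLP) for a split disjunction can be sketched as follows; the notation is generic and the thesis may use a different but equivalent formulation. Given the LP relaxation $P=\{x \ge 0 : Ax \ge b\}$, a fractional point $\bar{x}$ and the split disjunction $(\pi^\top x \le \pi_0) \vee (\pi^\top x \ge \pi_0 + 1)$, a most-violated disjunctive cut $\alpha^\top x \ge \beta$ is obtained from

\begin{align*}
\min_{\alpha,\beta,u,v,u_0,v_0}\quad & \alpha^\top \bar{x} - \beta\\
\text{s.t.}\quad & \alpha \ge A^\top u - u_0\,\pi, \qquad \alpha \ge A^\top v + v_0\,\pi,\\
& \beta \le b^\top u - u_0\,\pi_0, \qquad \beta \le b^\top v + v_0\,(\pi_0 + 1),\\
& u,\ v,\ u_0,\ v_0 \ge 0,\\
& \textstyle\sum_i u_i + \sum_i v_i + u_0 + v_0 = 1,
\end{align*}

where the last constraint is one possible normalization truncating the otherwise unbounded disjunctive cone; the choice of this condition is what influences cut rank, density and strength.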
The second part of the thesis is instead focused on the heuristic and exact exploitation of integer programming techniques for hard combinatorial optimization problems in the context of routing applications. Chapters 5 and 6 present an integer linear programming local search algorithm for Vehicle Routing Problems (VRPs). The overall procedure follows a general destroy-and-repair paradigm (i.e., the current solution is first randomly destroyed and then repaired in the attempt to find a new improved solution) in which a class of exponential neighborhoods is iteratively explored by heuristically solving an integer programming formulation through a general-purpose MIP solver. Chapters 7 and 8 deal with exact branch-and-cut methods. Chapter 7 presents an extended formulation for the Traveling Salesman Problem with Time Windows (TSPTW), a generalization of the well-known TSP in which each node must be visited within a given time window. The polyhedral approaches proposed for this problem in the literature typically follow the one that has proven extremely effective in the classical TSP context. Here we present an overall, quite general idea based on a relaxed discretization of the time windows. This idea leads to a stronger formulation and to stronger valid inequalities, which are then separated within the classical branch-and-cut framework. Finally, Chapter 8 addresses branch-and-cut in the context of Generalized Minimum Spanning Tree Problems (GMSTPs), a class of NP-hard generalizations of the classical minimum spanning tree problem. In this chapter, we show how some basic ideas (and, in particular, the usage of general-purpose cutting planes) can be useful to improve on the branch-and-cut methods proposed in the literature.
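As a purely illustrative companion to the destroy-and-repair scheme of Chapters 5 and 6, the following Python sketch shows the structure of one iteration in which the repair step is an ILP solved by a general-purpose solver (here CBC through the open-source PuLP modeler). The reinsertion model is a deliberately simplified assignment-with-capacity ILP with made-up costs and demands; it is not the exponential-neighborhood formulation developed in the thesis.

import random
import pulp

def destroy(routes, n_remove, rng):
    """Randomly remove n_remove customers from the current routes."""
    pool = [c for r in routes for c in r]
    removed = set(rng.sample(pool, n_remove))
    return [[c for c in r if c not in removed] for r in routes], sorted(removed)

def repair_ilp(routes, removed, demand, capacity, cost):
    """Reinsert each removed customer into exactly one route, respecting the
    residual route capacity and minimizing the total insertion cost."""
    prob = pulp.LpProblem("repair", pulp.LpMinimize)
    x = {(c, k): pulp.LpVariable(f"x_{c}_{k}", cat="Binary")
         for c in removed for k in range(len(routes))}
    prob += pulp.lpSum(cost[c][k] * x[c, k] for (c, k) in x)        # objective
    for c in removed:                                               # assign once
        prob += pulp.lpSum(x[c, k] for k in range(len(routes))) == 1
    for k, r in enumerate(routes):                                  # capacity
        residual = capacity - sum(demand[c] for c in r)
        prob += pulp.lpSum(demand[c] * x[c, k] for c in removed) <= residual
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    repaired = [list(r) for r in routes]
    for (c, k), var in x.items():
        if var.value() is not None and var.value() > 0.5:
            repaired[k].append(c)
    return repaired

# One destroy-and-repair iteration on a toy instance with two routes.
rng = random.Random(0)
routes = [[1, 2, 3], [4, 5]]
demand = {i: 1 for i in range(1, 6)}
cost = {c: [rng.randint(1, 9) for _ in routes] for c in range(1, 6)}
partial, removed = destroy(routes, n_remove=2, rng=rng)
print(repair_ilp(partial, removed, demand, capacity=4, cost=cost))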

Relevance:

40.00%

Publisher:

Abstract:

Remote sensing (RS) techniques have evolved into an important instrument for investigating forest function. New methods based on the remote detection of leaf biochemistry and photosynthesis are being developed and applied in pilot studies from airborne and satellite platforms (PRI, solar-induced fluorescence; N and chlorophyll content). Non-destructive monitoring methods, a direct application of RS studies, are also proving increasingly attractive for the determination of stress conditions or nutrient deficiencies, not only in research but also in agronomy, horticulture and urban forestry (proximal RS). In this work I will focus on some novel techniques recently developed for the estimation of photochemistry and photosynthetic rates, based (i) on the proximal measurement of steady-state chlorophyll fluorescence yield, or (ii) on the remote sensing of changes in hyperspectral leaf reflectance associated with xanthophyll de-epoxidation and energy partitioning, which is closely coupled to leaf photochemistry and photosynthesis. I will also present and describe a mathematical model of leaf steady-state fluorescence and photosynthesis recently developed in our group. Two different species were used in the experiments: Arbutus unedo, a sclerophyllous Mediterranean species, and Populus euroamericana, a broadleaf deciduous tree widely used in plantation forestry. Results show that ambient fluorescence could provide a useful tool for testing photosynthetic processes from a distance. These results also confirm the photochemical reflectance index (PRI) as an efficient remote sensing reflectance index for estimating short-term changes in photochemical efficiency as well as long-term changes in leaf biochemistry. The study also demonstrated that RS techniques could provide a fast and reliable method to estimate photosynthetic pigment content and total nitrogen, besides assessing the state of photochemical processes in the leaves of our plants in the field. This could have important practical applications for the management of plant cultivation systems and for the estimation of the nutrient requirements of plants for optimal growth.
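For reference, the PRI mentioned above is conventionally defined as a normalized difference of two narrow-band reflectances, with the 531 nm band tracking xanthophyll-cycle-related changes and the 570 nm band serving as a reference:

\mathrm{PRI} \;=\; \frac{\rho_{531} - \rho_{570}}{\rho_{531} + \rho_{570}} .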

Relevance:

40.00%

Publisher:

Abstract:

Conjugated polymers are macromolecules that possess alternating single and double bonds along the main chain. These polymers combine the optoelectronic properties of semiconductors with the mechanical properties and processing advantages of plastics. In this thesis we discuss the synthesis, characterization and application of polyphenylene-based materials in various electronic devices. Poly(2,7-carbazole)s have the potential to be useful as blue emitters, but also as donor materials in solar cells due to their better hole-accepting properties. However, they are associated with two major drawbacks: (1) the emission maximum occurs at 421 nm, where the human eye is not very sensitive, and (2) the 3- and 6-positions of carbazole are susceptible to chemical or electrochemical degradation. To overcome these problems, ladder-type nitrogen-bridged polymers were synthesized. The resulting series of polymers, nitrogen-bridged poly(ladder-type tetraphenylene), nitrogen-bridged poly(ladder-type pentaphenylene), nitrogen-bridged poly(ladder-type hexaphenylene) and their derivatives, is discussed in the light of photophysical and electrochemical properties and tested in PLEDs, solar cells and OFETs. A promising trend that has emerged in recent years is the use of well-defined oligomers as model compounds for their corresponding polymers. However, the use of these molecules is often limited by their solubility, and one then has to resort to vapor deposition techniques, which require high vacuum and temperature and cannot be used for large-area applications. One solution to this problem is the synthesis of small molecules carrying enough alkyl chains on the backbone that they can be solution- or melt-processed and are able to form thin films like polymers while retaining the highly ordered structures characteristic of small molecules. Therefore, in the present work, soluble ladderized oligomers based on thiophene and carbazole with different end groups were prepared and tested in OFET devices. Carbazole is an attractive raw material for the synthesis of dyes since it is cheap and readily available. Carbazoledioxazine, commercially known as Violet 23, is a representative compound of the dioxazine pigments. As part of our efforts toward developing cheap alternatives to Violet 23, the synthesis and characterization of a new series of dyes obtained by Buchwald-type coupling of 3-aminocarbazole with various isomers of chloroanthraquinone are presented.

Relevance:

40.00%

Publisher:

Abstract:

The concept of competitiveness, for a long time considered strictly connected to economic and financial performance, has evolved, above all in recent years, toward new, wider interpretations that disclose its multidimensional nature. The shift to a multidimensional view of the phenomenon has sparked an intense debate involving theoretical reflections on its characterizing features, as well as methodological considerations on its assessment and measurement. The present research has a twofold objective: to study in depth the tangible and intangible aspects characterizing multidimensional competitive phenomena from a micro-level point of view, and to measure competitiveness through a model-based approach. Specifically, we propose a non-parametric approach to Structural Equation Model techniques for the computation of multidimensional composite measures. Structural Equation Model tools will be used for the development of the empirical application to the Italian case: a model-based micro-level competitiveness indicator for the measurement of the phenomenon on a large sample of Italian small and medium enterprises will be constructed.
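Purely as a schematic illustration of what a model-based composite measure looks like in the structural-equation setting (the actual specification is the subject of the thesis), a formative block of observed indicators $x_{i1},\dots,x_{ip}$ for enterprise $i$ can be aggregated into a latent competitiveness score through outer weights, which then enters a structural relation with another latent variable:

\xi_i \;=\; \sum_{j=1}^{p} w_j\, x_{ij} + \delta_i ,
\qquad
\eta_i \;=\; \beta_0 + \beta_1\, \xi_i + \zeta_i .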

Relevance:

40.00%

Publisher:

Abstract:

Optical frequency comb technology has been used in this work for the first time to investigate the nuclear structure of light radioactive isotopes. To this end, three laser systems were stabilized with different techniques to accurately known optical frequencies and used in two specialized experiments. Absolute transition frequency measurements of lithium and beryllium isotopes were performed with an accuracy on the order of 10^(−10). Such a high accuracy is required for the light elements, since the nuclear volume effect contributes only at the 10^(−9) level to the total transition frequency. For beryllium, the isotope shift was determined with an accuracy that is sufficient to extract information about the proton distribution inside the nucleus. Doppler-free two-photon spectroscopy of the stable lithium isotopes ^(6,7)Li was performed in order to determine the absolute frequency of the 2S → 3S transition. The achieved relative accuracy of 2×10^(−10) is an improvement of one order of magnitude over previous measurements. The results provide an opportunity to determine the nuclear charge radii of the stable and short-lived isotopes in a purely optical way, but this requires an improvement of the theoretical calculations by two orders of magnitude. The second experiment presented here was performed at ISOLDE/CERN, where the absolute transition frequencies of the D1 and D2 lines in beryllium ions were measured for the isotopes ^(7,9,10,11)Be with an accuracy of about 1 MHz. For this purpose, an advanced collinear laser spectroscopy technique involving two counter-propagating frequency-stabilized laser beams with known absolute frequencies was developed. The extracted isotope shifts were combined with recent accurate mass shift calculations, and the root-mean-square nuclear charge radii of ^(7,10)Be and of the one-neutron halo nucleus ^(11)Be were determined. The obtained charge radii decrease from ^(7)Be to ^(10)Be and increase again for ^(11)Be. While the monotonic decrease can be explained by nucleon clustering inside the nucleus, the pronounced increase between ^(10)Be and ^(11)Be can be interpreted as a combination of two contributions: the center-of-mass motion of the ^(10)Be core and a change of the intrinsic structure of the core. To disentangle these two contributions, results from nuclear reaction measurements were used, indicating that the center-of-mass motion is the dominant effect. Additionally, the splitting isotope shift, i.e. the difference between the isotope shifts of the D1 and D2 fine structure transitions, was determined. It shows good consistency with the theoretical calculations and provides a valuable check of the beryllium experiment.
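The charge-radius extraction described above rests on the standard decomposition of the measured isotope shift into a mass shift (taken from high-accuracy atomic theory) and a field shift proportional to the change of the mean-square nuclear charge radius; in generic notation,

\delta\nu_{\mathrm{IS}}^{A,A'} \;=\; \delta\nu_{\mathrm{MS}}^{A,A'} \;+\; F\,\delta\langle r_c^2\rangle^{A,A'}
\qquad\Longrightarrow\qquad
\delta\langle r_c^2\rangle^{A,A'} \;=\; \frac{\delta\nu_{\mathrm{IS}}^{A,A'} - \delta\nu_{\mathrm{MS}}^{A,A'}}{F},

where F is the atomic field-shift constant obtained from theory.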

Relevance:

40.00%

Publisher:

Abstract:

Nowadays microfluidics is becoming an important technology in many chemical and biological processing and analysis applications. The potential to replace large-scale conventional laboratory instrumentation with miniaturized and self-contained systems, called lab-on-a-chip (LOC) or point-of-care testing (POCT) systems, offers a variety of advantages such as low reagent consumption, faster analysis and the capability of operating on a massively parallel scale to achieve high throughput. Micro-electro-mechanical-systems (MEMS) technologies enable both the fabrication of miniaturized systems and the development of compact and portable instruments. The work described in this dissertation is directed towards the development of micromachined separation devices for both high-speed gas chromatography (HSGC) and gravitational field-flow fractionation (GrFFF) using MEMS technologies. Concerning HSGC, a complete platform of three MEMS-based GC core components (injector, separation column and detector) is designed, fabricated and characterized. The microinjector consists of a set of pneumatically driven microvalves based on a polymeric actuating membrane. Experimental results demonstrate that the microinjector guarantees low dead volumes, fast actuation times, a wide operating temperature range and high chemical inertness. The microcolumn is an all-silicon column with a nearly circular channel cross-section. Its extensive characterization has produced separation performance very close to the ideal theoretical expectation. A thermal conductivity detector (TCD) was chosen as the most appropriate detector to be miniaturized, since the volume reduction of the detector chamber results in increased mass sensitivity and reduced dead volumes. The microTCD shows good sensitivity and a very wide dynamic range. Finally, a feasibility study for miniaturizing a channel suited for GrFFF is performed. The proposed GrFFF microchannel is at an early stage of development, but represents a first step toward the realization of a highly portable and potentially low-cost POCT device for biomedical applications.
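For context, the separation performance of a GC column such as the microcolumn described above is commonly characterized by the number of theoretical plates N and the height equivalent to a theoretical plate H, estimated from a chromatographic peak; this is the standard textbook metric rather than a result specific to this work:

N \;=\; 5.545\,\left(\frac{t_R}{w_{1/2}}\right)^{2},
\qquad
H \;=\; \frac{L}{N},

where t_R is the retention time of the peak, w_{1/2} its width at half height and L the column length.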

Relevance:

40.00%

Publisher:

Abstract:

In the present thesis, a new diagnosis methodology based on the advanced use of time-frequency analysis techniques is presented. More precisely, a new fault index is defined that allows individual fault components to be tracked in a single frequency band. In detail, a frequency sliding is applied to the signals being analyzed (currents, voltages, vibration signals), so that each fault frequency component is shifted into a prefixed single frequency band. Then, the discrete wavelet transform is applied to the resulting signal to extract the fault signature in the chosen frequency band. Once the state of the machine has been qualitatively diagnosed, a quantitative evaluation of the fault degree is necessary. For this purpose, a fault index based on the energy of the approximation and/or detail signals resulting from the wavelet decomposition has been introduced to quantify the fault extent. The main advantages of the new method over existing diagnosis techniques are the following: capability of monitoring the fault evolution continuously over time under any transient operating condition; no need for speed/slip measurement or estimation; higher accuracy in filtering frequency components around the fundamental in the case of rotor faults; reduced likelihood of false indications, since confusion with other fault harmonics is avoided (the contributions of the most relevant fault frequency components under speed-varying conditions are confined to a single frequency band); low memory requirements thanks to the low sampling frequency; and reduced processing latency (no repeated sampling operations are required).
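A minimal sketch of the signal-processing chain described above, assuming a current signal sampled at fs and a fault component expected near f_fault: the signal is frequency-shifted so the component lands in a chosen low-frequency band, decomposed with the discrete wavelet transform (PyWavelets), and an energy-based index is computed from the approximation band. The band choice, wavelet and decomposition level are illustrative assumptions, not the thesis parameters.

import numpy as np
import pywt

def frequency_sliding(x, fs, f_shift):
    """Complex demodulation: shift the spectrum of x down by f_shift Hz."""
    t = np.arange(len(x)) / fs
    return np.real(x * np.exp(-2j * np.pi * f_shift * t))

def fault_index(x, fs, f_fault, target_hz=4.0, wavelet="db8", level=6):
    """Shift the component near f_fault down to target_hz, decompose with the
    DWT and return the energy of the approximation band [0, fs / 2**(level+1)]."""
    y = frequency_sliding(x, fs, f_fault - target_hz)
    approx = pywt.wavedec(y, wavelet, level=level)[0]
    return float(np.sum(approx**2) / len(approx))

# Toy signal: 50 Hz fundamental plus a weak fault-related component at 44 Hz.
fs = 1024.0
t = np.arange(0, 2.0, 1 / fs)
current = np.sin(2 * np.pi * 50 * t) + 0.05 * np.sin(2 * np.pi * 44 * t)
print(fault_index(current, fs, f_fault=44.0))   # grows with the 44 Hz amplitude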

Relevance:

40.00%

Publisher:

Abstract:

In this thesis we further develop the functional renormalization group (RG) approach to quantum field theory (QFT) based on the effective average action (EAA) and on the exact flow equation that it satisfies. The EAA is a generalization of the standard effective action that interpolates smoothly between the bare action for k → ∞ and the standard effective action for k → 0. In this way, the problem of performing the functional integral is converted into the problem of integrating the exact flow of the EAA from the UV to the IR. The EAA formalism deals naturally with several different aspects of a QFT. One aspect is related to the discovery of non-Gaussian fixed points of the RG flow that can be used to construct continuum limits. In particular, the EAA framework is a useful setting to search for Asymptotically Safe theories, i.e. theories valid up to arbitrarily high energies. A second aspect in which the EAA reveals its usefulness is non-perturbative calculations. In fact, the exact flow that it satisfies is a valuable starting point for devising new approximation schemes. In the first part of this thesis we review and extend the formalism; in particular, we derive the exact RG flow equation for the EAA and the related hierarchy of coupled flow equations for the proper vertices. We show how standard perturbation theory emerges as a particular way to iteratively solve the flow equation if the starting point is the bare action. Next, we explore both technical and conceptual issues by means of three different applications of the formalism: to QED, to general non-linear sigma models (NLσM) and to matter fields on curved spacetimes. In the main part of this thesis we construct the EAA for non-abelian gauge theories and for quantum Einstein gravity (QEG), using the background field method to implement the coarse-graining procedure in a gauge-invariant way. We propose a new truncation scheme where the EAA is expanded in powers of the curvature or field strength. Crucial to the practical use of this expansion is the development of new techniques to manage functional traces, such as the algorithm proposed in this thesis. This allows one to project the flow of all terms in the EAA which are analytic in the fields. As an application, we show how the low-energy effective action for quantum gravity emerges as the result of integrating the RG flow. In any treatment of theories with local symmetries that introduces a reference scale, the question of preserving gauge invariance along the flow becomes predominant. In the EAA framework this problem is dealt with through the use of the background field formalism. This comes at the cost of enlarging the theory space where the EAA lives to the space of functionals of both fluctuation and background fields. In this thesis, we study how the identities dictated by the symmetries are modified by the introduction of the cutoff, and we study so-called bimetric truncations of the EAA that contain both fluctuation and background couplings. In particular, we confirm the existence of a non-Gaussian fixed point for QEG, which is at the heart of the Asymptotic Safety scenario in quantum gravity, in the enlarged bimetric theory space where the running of the cosmological constant and of Newton's constant is influenced by fluctuation couplings.
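For reference, the exact flow equation satisfied by the EAA has the standard one-loop structure (the Wetterich equation), written here in generic form with RG time t = ln k and infrared cutoff kernel R_k:

\partial_t \Gamma_k[\varphi] \;=\; \frac{1}{2}\,\mathrm{Tr}\!\left[\left(\Gamma_k^{(2)}[\varphi] + R_k\right)^{-1}\partial_t R_k\right],

where \Gamma_k^{(2)} denotes the second functional derivative of the EAA with respect to the fields. Iterating this equation with the bare action as starting point reproduces standard perturbation theory, as discussed in the first part of the thesis.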

Relevance:

40.00%

Publisher:

Abstract:

Organic charge-transfer systems exhibit a variety of competing interactions between charge, spin and lattice degrees of freedom. This leads to interesting physical properties such as metallic conductivity, superconductivity and magnetism. This dissertation addresses the electronic structure of organic charge-transfer salts from three material families, using different photoemission and X-ray spectroscopy techniques. Some of the investigated molecules were synthesized at the MPI für Polymerforschung. They belong to the coronene family (donor hexamethoxycoronene HMC and acceptor coronene hexaone COHON) and the pyrene family (donors tetra- and hexamethoxypyrene, TMP and HMP) in complexes with the classical strong acceptor tetracyanoquinodimethane (TCNQ). As a third family, charge-transfer salts of the κ-(BEDT-TTF)2X family (X being a monovalent anion) were studied; these materials lie close to a bandwidth-controlled Mott transition in the phase diagram. For ultraviolet photoelectron spectroscopy (UPS) studies, UHV-deposited thin films were prepared using a new double evaporator developed specifically for milligram quantities of material. This method revealed energetic shifts of the valence states in the charge-transfer complexes of a few 100 meV compared with the pure donor and acceptor species. An important aspect of the UPS measurements was the direct comparison with ab-initio calculations. The problem of unavoidable surface contamination of solution-grown 3D crystals was overcome by hard X-ray photoelectron spectroscopy (HAXPES) at photon energies around 6 keV (at the electron storage ring PETRA III in Hamburg). The large mean free path of the photoelectrons, in the range of 15 nm, results in true bulk sensitivity. The first HAXPES experiments on charge-transfer complexes worldwide showed large chemical shifts (several eV). In the compound HMPx-TCNQy, the N1s line is a fingerprint of the cyano group in TCNQ and shows a splitting and a shift to higher binding energies of up to 6 eV with increasing HMP content. Conversely, the O1s line is a fingerprint of the methoxy group in HMP and shows a marked splitting and a shift to lower binding energies (up to about 2.5 eV chemical shift), i.e. one order of magnitude larger than the shifts in the valence region. As a further synchrotron-radiation-based technique, near-edge X-ray absorption fine structure (NEXAFS) spectroscopy was used extensively at the storage ring ANKA in Karlsruhe, where the mean free path of the low-energy secondary electrons is around 5 nm. Strong intensity variations of specific pre-edge resonances (as a signature of the unoccupied density of states) directly reflect the change in the occupation numbers of the involved orbitals in the immediate vicinity of the excited atom. This made it possible to identify precisely the participation of specific orbitals in the charge-transfer mechanism. In the above complex, charge is transferred from the methoxy orbitals 2e(pi*) and 6a1(sigma*) to the cyano orbitals b3g and au(pi*) and, to a lesser extent, to the b1g and b2u(sigma*) orbitals of the cyano group. In addition, small energetic shifts with different signs occur for the donor and acceptor resonances, comparable to the shifts observed in UPS.
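The core-level binding energies whose chemical shifts are discussed above are obtained from the measured photoelectron kinetic energies through the standard photoemission relation (with hν the photon energy and Φ the spectrometer work function); this is the textbook relation, not a formula specific to this work:

E_B \;=\; h\nu \;-\; E_{\mathrm{kin}} \;-\; \Phi .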

Relevance:

40.00%

Publisher:

Abstract:

Since the development of a wide variety of spintronics applications based on Heusler compounds over the last decade, the research progress on this material class can be followed in a large number of publications. A typical Heusler compound X2YZ consists of two transition metals (X, Y) and one main-group element (Z). This work reports on Heusler compounds with particular attention to their potential half-metallic properties and, among these, especially on compounds that could show perpendicular magnetic anisotropy (PMA). PMA is of great interest in particular for spin-transfer torque (STT) devices and occurs in tetragonally distorted Heusler compounds; in STT devices, spin-polarized currents are used to manipulate the magnetic orientation of magnetic layers. The most significant results of this work are: the synthesis of new cubic Heusler phases Fe2YZ that had been theoretically predicted to be tetragonal (Chapter 1); the synthesis of Mn2FeGa, which crystallizes in the tetragonally distorted structure and shows potential for STT applications (Chapter 2); the synthesis of Fe2MnGa, which shows a magnetic phase transition with an exchange-bias (EB) effect based on the coexistence of ferromagnetic (FM) and antiferromagnetic (AFM) phases (Chapter 3); finally, Chapter 4 discusses the synthesis of Mn3−xRhxSn, in which particularly tetragonal Mn2RhSn is presented as a potential material for spintronics applications. In this work, mainly Heusler compounds containing the Mössbauer-active elements 57Fe and 119Sn were synthesized and investigated. For the Heusler compounds studied here, characterization by Mössbauer spectroscopy plays a decisive role, since Heusler compounds usually exhibit a certain degree of disorder, which can influence their magnetic and structural properties. The type of disorder, however, is difficult to determine by standard powder X-ray diffraction, which is why we exploit the advantages of Mössbauer spectroscopy as a local probe to clarify the type and degree of disorder. This work is structured as follows. In Chapter 1, the new cubic, soft-ferromagnetic Heusler phases Fe2NiGe, Fe2CuGa and Fe2CuAl were synthesized and characterized. Previous theoretical studies had predicted their existence in the tetragonal Heusler structure; nevertheless, our experimental investigations showed that these compounds crystallize mainly in the cubic inverse Heusler (X) structure with different degrees of atomic disorder. All compounds are soft ferromagnets with high Curie temperatures of up to 900 K and are therefore potential materials for magnetic applications. In Chapter 2, Mn2FeGa was synthesized. Mn2FeGa was found to adopt the inverse tetragonal structure (I4m2) after annealing at 400 °C, whereas theory had predicted its existence in the inverse cubic Heusler structure. Depending on the synthesis conditions, the magnetic and structural properties of Mn2FeGa change drastically: upon annealing at 800 °C, the crystal structure of Mn2FeGa changes to a pseudocubic Cu3Au-like structure in which the Fe and Mn atoms are statistically distributed.
This transition between the crystal structures was verified by Mössbauer spectroscopy via the presence or absence of the quadrupole splitting for the inverse tetragonal and pseudocubic modifications, respectively. In Chapter 3, Fe2MnGa was likewise successfully synthesized and characterized by various methods. The relation between crystal structure and magnetic properties was studied under different annealing conditions and mechanical treatments. The focus was on an as-melted sample without further annealing, which showed an FM-AFM phase transition. This magnetic phase transformation leads to strong EB behavior, which originates mainly from the coexistence of FM and AFM phases below the FM-AFM transition temperature. Chapter 4 is devoted to the new Mn-based Heusler compounds Mn3−xRhxSn, in which we attempted to achieve a transformation from the hexagonal Mn3Sn structure to a tetragonal structure by substituting Mn with the larger Rh. Mn2RhSn and Mn2.1Rh0.9Sn turned out to be interesting, since they appear to consist of only a single phase, whereas the other compounds consist of mixed phases with simultaneously strong disorder. In the concluding appendix, the disorder and occasional mixed phases of a large selection of Mn3−xFexGa materials with 1≤x≤3 are documented.
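For reference, the quadrupole splitting used above as a structural fingerprint is, for the I = 3/2 excited state of 57Fe, related to the electric field gradient at the nucleus by the standard expression (Q nuclear quadrupole moment, V_zz principal EFG component, η asymmetry parameter); it vanishes for a cubic local environment, which is what distinguishes the pseudocubic from the tetragonal modification:

\Delta E_Q \;=\; \frac{e\,Q\,V_{zz}}{2}\,\sqrt{1+\frac{\eta^{2}}{3}} .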

Relevance:

40.00%

Publisher:

Abstract:

Although the Standard Model of particle physics (SM) provides an extremely successful description of ordinary matter, it is known from astronomical observations that ordinary matter accounts for only around 5% of the total energy density of the Universe, whereas around 30% is contributed by dark matter. Motivated by anomalies in cosmic-ray observations and by attempts to solve open questions of the SM such as the (g-2)_mu discrepancy, proposed U(1) extensions of the SM gauge group have attracted attention in recent years. In the considered U(1) extensions, a new light messenger particle, the hidden photon, couples to the hidden sector as well as to the electromagnetic current of the SM by kinetic mixing. This allows a search for this particle in laboratory experiments exploring the electromagnetic interaction. Various experimental programs have been started to search for hidden photons, for example in electron-scattering experiments, which are a versatile tool to explore various physics phenomena. One approach is the dedicated search in fixed-target experiments at modest energies, as performed at MAMI or at JLAB. In these experiments the scattering of an electron beam off a hadronic target, e + (A,Z) -> e + (A,Z) + l^+ l^-, is investigated, and a search for a very narrow resonance in the invariant mass distribution of the lepton pair is performed. This requires an accurate understanding of the theoretical basis of the underlying processes. For this purpose, the first part of this work demonstrates how the hidden photon can be motivated from existing puzzles encountered at the precision frontier of the SM. The main part of this thesis deals with the analysis of the theoretical framework for electron-scattering fixed-target experiments searching for hidden photons. As a first step, the cross section for the bremsstrahlung emission of hidden photons in such experiments is studied. Based on these results, the applicability of the Weizsäcker-Williams approximation, which is widely used to design such experimental setups, to the calculation of the signal cross section is investigated. In a next step, the reaction e + (A,Z) -> e + (A,Z) + l^+ l^- is analyzed as signal and background process in order to describe existing data obtained by the A1 experiment at MAMI, with the aim of giving accurate predictions of exclusion limits for the hidden-photon parameter space. Finally, the derived methods are used to obtain predictions for future experiments, e.g. at MESA or at JLAB, allowing for a comprehensive study of the discovery potential of these complementary experiments. In the last part, a feasibility study for probing the hidden photon model with rare kaon decays is performed. For this purpose, invisible as well as visible decays of the hidden photon are considered within different classes of models. This allows one to derive bounds on the parameter space from existing data and to estimate the reach of future experiments.
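The kinetic-mixing coupling referred to above is conventionally parametrized by a small mixing parameter ε in the low-energy Lagrangian; the following is the generic textbook form and not necessarily the exact conventions used in this thesis:

\mathcal{L} \;\supset\; -\frac{1}{4}F_{\mu\nu}F^{\mu\nu}
\;-\;\frac{1}{4}F'_{\mu\nu}F'^{\mu\nu}
\;-\;\frac{\epsilon}{2}\,F_{\mu\nu}F'^{\mu\nu}
\;+\;\frac{m_{\gamma'}^{2}}{2}\,A'_{\mu}A'^{\mu},

where F'_{\mu\nu} is the field strength of the hidden photon field A'_\mu, m_{\gamma'} its mass, and the ε term induces the coupling to the electromagnetic current of the SM.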