1000 results for MAGNETIC-FLUX
Abstract:
In this thesis, three different types of quantum rings are studied: quantum rings with diamagnetic, paramagnetic, or spontaneous persistent currents. It turns out that the main observable used to characterize quantum rings is the Drude weight. Playing a key role in this thesis, it will be used to distinguish between diamagnetic (positive Drude weight) and paramagnetic (negative Drude weight) ring currents. In most models, the Drude weight is positive; in the thermodynamic limit, in particular, it is positive semi-definite. In certain models, however, a negative Drude weight is found, which is intuitively surprising. This rare effect occurs, e.g., in one-dimensional models with a degenerate ground state in conjunction with the possibility of Umklapp scattering. One aim of this thesis is to examine one-dimensional quantum rings for the occurrence of a negative Drude weight. It is found that the sign of the Drude weight can also be negative if the band structure lacks particle-hole symmetry. The second aim of this thesis is the modeling of quantum rings that intrinsically show a spontaneous persistent current. The construction of the model starts from the extended Hubbard model on a ring threaded by an Aharonov-Bohm flux. A feedback term through which the current in the ring can generate magnetic flux is added. Another extension of the Hamiltonian describes the energy stored in the internally generated field. This model is evaluated using exact diagonalization and an iterative scheme to find the minima of the free energy. The quantum rings must satisfy two conditions to exhibit a spontaneous orbital magnetic moment: a negative Drude weight and an inductance above the critical level. The magnetic properties of cyclic conjugated hydrocarbons such as benzene that arise from electron delocalization [magnetic anisotropy, magnetic susceptibility exaltation, nucleus-independent chemical shift (NICS)], properties that have become important criteria for aromaticity, can be examined using this model.
Corrections to the presented calculations are discussed. The most substantial simplification made in this thesis is the neglect of the Zeeman interaction of the electron spins with the magnetic field. If a single flux tube threads a quantum ring, the Zeeman interaction is zero, but in most experiments this situation is difficult to realize. In the more realistic situation of a homogeneous field, the Zeeman interaction has to be included if the electrons have a total spin component in the direction of the magnetic field, or if the magnetic field is strong.
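For reference, the Drude weight that organizes this discussion is conventionally defined through the flux sensitivity of the ground-state energy (Kohn's formula); a sketch, assuming the standard convention for a ring of circumference L threaded by an Aharonov-Bohm flux Φ:

```latex
D \;=\; \frac{L}{2}\,
\left.\frac{\partial^{2} E_{0}(\Phi)}{\partial \Phi^{2}}\right|_{\Phi=\Phi_{\min}},
\qquad
I(\Phi) \;=\; -\,\frac{\partial E_{0}(\Phi)}{\partial \Phi},
```

so that D > 0 corresponds to a diamagnetic and D < 0 to a paramagnetic response of the persistent current I.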
Abstract:
We regularize compact and non-compact Abelian Chern–Simons–Maxwell theories on a spatial lattice using the Hamiltonian formulation. We consider a doubled theory with gauge fields living on a lattice and its dual lattice. The Hilbert space of the theory is a product of local Hilbert spaces, each associated with a link and the corresponding dual link. The two electric field operators associated with the link-pair do not commute. In the non-compact case with gauge group R, each local Hilbert space is analogous to that of a charged “particle” moving in the link-pair group space R2 in a constant “magnetic” background field. In the compact case, the link-pair group space is a torus U(1)2 threaded by k units of quantized “magnetic” flux, with k being the level of the Chern–Simons theory. The holonomies of the torus U(1)2 give rise to two self-adjoint extension parameters, which form two non-dynamical background lattice gauge fields that explicitly break the manifest gauge symmetry from U(1) to Z(k). The local Hilbert space of a link-pair then decomposes into representations of a magnetic translation group. In the pure Chern–Simons limit of a large “photon” mass, this results in a Z(k)-symmetric variant of Kitaev’s toric code, self-adjointly extended by the two non-dynamical background lattice gauge fields. Electric charges on the original lattice and on the dual lattice obey mutually anyonic statistics, with a statistics angle determined by the level k. Non-Abelian U(k) Berry gauge fields that arise from the self-adjoint extension parameters may be interesting in the context of quantum information processing.
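The decomposition into representations of the magnetic translation group mentioned above rests on a standard algebra: on a torus threaded by k flux quanta, the two magnetic translations no longer commute. A sketch, assuming the usual convention:

```latex
T_{1}\,T_{2} \;=\; e^{2\pi i/k}\; T_{2}\,T_{1},
```

whose irreducible representations are k-dimensional, which is what allows each local link-pair Hilbert space to decompose as described.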
Abstract:
Among all the different types of electric wind generators, those based on doubly fed induction generators (DFIG technology) are the most vulnerable to grid faults such as voltage sags. This paper proposes a new control strategy for this type of wind generator that allows these devices to withstand the effects of a voltage sag while complying with the new requirements imposed by grid operators. This new control strategy makes the use of complementary devices such as crowbars unnecessary, as it greatly reduces the value of the currents originated by the fault. This permits less costly designs for the rotor systems as well as more economical sizing of the necessary power electronics. The strategy described here uses an electric generator model based on space-phasor theory that provides direct control over the position of the rotor magnetic flux. Controlling the rotor magnetic flux has a direct influence on the remaining electrical variables, enabling the machine to evolve to a desired operating point during the transient imposed by the grid disturbance. Simulation studies and test bench trials have been carried out to prove the viability and functionality of the proposed control strategy.
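In space-phasor form, the rotor-side dynamics that such a flux-oriented control acts on can be sketched as follows (a textbook form, with rotor quantities referred to the stator and written in a synchronously rotating frame; not the paper's exact model):

```latex
\vec{v}_{r} \;=\; R_{r}\,\vec{i}_{r}
\;+\; \frac{d\vec{\psi}_{r}}{dt}
\;+\; j\,(\omega_{s}-\omega_{m})\,\vec{\psi}_{r},
```

so the voltage \vec{v}_{r} applied by the rotor-side converter directly drives the trajectory of the rotor flux \vec{\psi}_{r}.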
Abstract:
This thesis presents the results from an investigation into the merits of analysing Magnetoencephalographic (MEG) data in the context of dynamical systems theory. MEG is the study of both the methods for the measurement of minute magnetic flux variations at the scalp, resulting from neuro-electric activity in the neocortex, as well as the techniques required to process and extract useful information from these measurements. As a result of its unique mode of action - by directly measuring neuronal activity via the resulting magnetic field fluctuations - MEG possesses a number of useful qualities which could potentially make it a powerful addition to any brain researcher's arsenal. Unfortunately, MEG research has so far failed to fulfil its early promise, being hindered in its progress by a variety of factors. Conventionally, the analysis of MEG has been dominated by the search for activity in certain spectral bands - the so-called alpha, delta, beta, etc., that are commonly referred to in both academic and lay publications. Other efforts have centred upon generating optimal fits of "equivalent current dipoles" that best explain the observed field distribution. Many of these approaches carry the implicit assumption that the dynamics which result in the observed time series are linear. This is despite a variety of reasons which suggest that nonlinearity might be present in MEG recordings. By using methods that allow for nonlinear dynamics, the research described in this thesis avoids these restrictive linearity assumptions. A crucial concept underpinning this project is the belief that MEG recordings are mere observations of the evolution of the true underlying state, which is unobservable and is assumed to reflect some abstract brain cognitive state. Further, we maintain that it is unreasonable to expect these processes to be adequately described in the traditional way: as a linear sum of a large number of frequency generators.
One of the main objectives of this thesis is to show that much more effective and powerful analysis of MEG can be achieved if one assumes the presence of both linear and nonlinear characteristics from the outset. Our position is that the combined action of a relatively small number of these generators, coupled with external and dynamic noise sources, is more than sufficient to account for the complexity observed in the MEG recordings. Another problem that has plagued MEG researchers is the extremely low signal-to-noise ratios that are obtained. As the magnetic flux variations resulting from actual cortical processes can be extremely minute, the measuring devices used in MEG are, necessarily, extremely sensitive. The unfortunate side-effect of this is that even commonplace phenomena such as the Earth's geomagnetic field can easily swamp signals of interest. This problem is commonly addressed by averaging over a large number of recordings. However, this has a number of notable drawbacks. In particular, it is difficult to synchronise high frequency activity which might be of interest, and often these signals will be cancelled out by the averaging process. Other problems that have been encountered are high costs and low portability of state-of-the-art multichannel machines. The result of this is that the use of MEG has, hitherto, been restricted to large institutions which are able to afford the high costs associated with the procurement and maintenance of these machines. In this project, we seek to address these issues by working almost exclusively with single channel, unaveraged MEG data. We demonstrate the applicability of a variety of methods originating from the fields of signal processing, dynamical systems, information theory and neural networks, to the analysis of MEG data.
It is noteworthy that while modern signal processing tools such as independent component analysis, topographic maps and latent variable modelling have enjoyed extensive success in a variety of research areas from financial time series modelling to the analysis of sun spot activity, their use in MEG analysis has thus far been extremely limited. It is hoped that this work will help to remedy this oversight.
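As an illustration of the dynamical-systems toolbox referred to above, here is a minimal sketch of time-delay (Takens) embedding of a single-channel signal; the synthetic signal, sampling rate, and embedding parameters are all illustrative assumptions, not MEG data.

```python
import math

def delay_embed(x, dim, tau):
    """Build delay vectors [x[i], x[i+tau], ..., x[i+(dim-1)*tau]] (Takens embedding)."""
    n = len(x) - (dim - 1) * tau
    return [[x[i + j * tau] for j in range(dim)] for i in range(n)]

# Illustrative single-channel signal: one second of a 10 Hz "alpha-like"
# rhythm sampled at 250 Hz.
fs = 250
x = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]

# Embed with dimension 3 and a lag of roughly a quarter period of the 10 Hz rhythm.
vectors = delay_embed(x, dim=3, tau=6)
print(len(vectors), len(vectors[0]))  # prints: 238 3
```

Each delay vector is one point in a reconstructed state space; nonlinear statistics (correlation dimension, Lyapunov exponents, surrogate tests) are then computed on this point cloud rather than on the raw time series.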
Abstract:
This work deals with the analytical, computational, and experimental study of phenomena related to eddy current induction in low-permeability media for embedded electromagnetic braking system applications. It addresses the generation of forces opposing the variation of the stationary magnetic flux produced by DC power supplies when the system is set in motion by an external propulsive force. The study is motivated by the problem of speed control of PIGs used to inspect and maintain pipelines, and is based on the synthesis of analytical models, validated by computer simulations in a finite element environment provided by engineering support software, and by experimental tests conducted under controlled laboratory conditions. Finally, a damping system design methodology based on the results of the analyses conducted throughout the study is presented.
Abstract:
In this work, mathematical formulations were developed, taking maximum permissible intensity values as parameters, for the analysis of interference from electric and magnetic fields, and two virtual computer systems supporting the CDMA and WCDMA technology families were produced. For the first family, computational resources were developed to solve electric and magnetic field calculations and power densities at radio base stations (ERBs) using CDMA technology in the 800 MHz band, taking into account the permissible values referenced by the International Commission on Non-Ionizing Radiation Protection (ICNIRP). The first family is divided into two calculation segments carried out in virtual operation. The first segment computes the interference field radiated by the base station from input information such as radio channel power, antenna gain, number of radio channels, operating frequency, cable losses, directional attenuation, minimum distance, and reflections. This computing system makes it possible to obtain, quickly and without deploying measurement instruments, the following calculated values: effective radiated power, sector power density, electric field in the sector, magnetic field in the sector, magnetic flux density, and the point of maximum permissible exposure for electric field and power density. The results are shown in charts for clear visualization of the power density in the sector, as well as for the definition of the coverage area. The computer module also includes specification folders for the antennas, cables, and towers used in cellular telephony from the following manufacturers: RFS World, Andrew, Kathrein, and BRASILSAT. Several Internet links are provided to supplement the specifications of cables, antennas, etc. The second segment of the first family works with more variables, seeking to perform calculations quickly and safely to assist in obtaining the radio signal losses produced by the ERB.
This module displays screens representing two propagation systems, designated "A" and "B". With propagation model "A", radio signal attenuation calculations are obtained for urban, dense urban, suburban, and open rural area models. The reflection calculations include the reflection coefficients, the standing wave ratio, the return loss, the reflected power ratio, and the signal loss due to impedance mismatch. With propagation model "B", radio signal losses are obtained for line-of-sight and non-line-of-sight surveys, along with the effective area, the power density, the received power, the coverage radius, the conversion levels, and the conversion gain of radiating systems. The second family of the virtual computing system consists of 7 modules, of which 5 are geared towards the design of WCDMA technology and 2 towards the calculation of telephone traffic serving CDMA and WCDMA. It includes a portfolio of the radiating systems used on site. In virtual operation, module 1 computes: frequency reuse distance, channel capacity with and without noise, Doppler frequency, modulation rate, and channel efficiency. Module 2 computes the cell area, thermal noise, noise power (dB), noise figure, signal-to-noise ratio, and bit power (dBm). Module 3 calculates: breakpoint, processing gain (dB), path loss from the BTS, noise power (W), chip period, and frequency reuse factor. Module 4 computes effective radiated power, sectorization gain, voice activity, and load effect. Module 5 calculates processing gain (Hz/bps), bit time, and bit energy (Ws). Module 6 deals with telephone traffic and computes: traffic volume, occupancy intensity, average occupancy time, traffic intensity, completed calls, and congestion. Module 7 also deals with telephone traffic and allows calculating completed and uncompleted calls in the busy hour (HMM).
Field tests were performed on the mobile network to measure data relating to: CINP, CPI, RSRP, RSRQ, EARFCN, Drop Call, Block Call, Pilot, Data BLER, RSCP, Short Call, Long Call, and Data Call; and ECIO for Short Call, Long Call, and Data Call throughput. Surveys of the electric and magnetic fields at an ERB were also conducted, seeking to observe the degree of exposure to non-ionizing radiation experienced by the general public and by occupational personnel. The results were compared to the permissible values for health endorsed by the ICNIRP and CENELEC.
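The sector-level quantities listed above (effective radiated power, power density, electric field) follow from standard far-field formulas; here is a minimal sketch in Python with illustrative, assumed input values (not taken from the thesis).

```python
import math

# Illustrative base-station sector (all values are assumptions):
p_tx_w = 20.0        # radio-channel power at the antenna port, W
gain_dbi = 17.0      # antenna gain, dBi
cable_loss_db = 2.0  # feeder/cable loss, dB
n_channels = 4       # radio channels per sector
d_m = 50.0           # distance to the point of interest, m

# Effective isotropic radiated power, all channels combined (linear form of dB math).
eirp_w = n_channels * p_tx_w * 10 ** ((gain_dbi - cable_loss_db) / 10)

# Far-field estimates: power density S = EIRP / (4*pi*d^2), field E = sqrt(30*EIRP)/d.
s_wm2 = eirp_w / (4 * math.pi * d_m ** 2)  # W/m^2
e_vm = math.sqrt(30 * eirp_w) / d_m        # V/m

# ICNIRP (1998) general-public reference level at 800 MHz: f/200 W/m^2 (f in MHz).
s_limit = 800 / 200
print(f"S = {s_wm2:.3f} W/m^2, E = {e_vm:.2f} V/m, limit = {s_limit} W/m^2")
```

Note the built-in consistency check: in the far field, S = E^2 / (120*pi), so the two estimates agree by construction.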
Abstract:
Using data obtained by the high-resolution CRisp Imaging SpectroPolarimeter instrument on the Swedish 1 m Solar Telescope, we investigate the dynamics and stability of quiet-Sun chromospheric jets observed at the disk center. Small-scale features, such as rapid redshifted and blueshifted excursions, appearing as high-speed jets in the wings of the Hα line, are characterized by short lifetimes and rapid fading without any descending behavior. To study the theoretical aspects of their stability without considering their formation mechanism, we model chromospheric jets as twisted magnetic flux tubes moving along their axis, and use the ideal linear incompressible magnetohydrodynamic approximation to derive the governing dispersion equation. Analytical solutions of the dispersion equation indicate that this type of jet is unstable to Kelvin–Helmholtz instability (KHI), with a very short (few seconds) instability growth time at high upflow speeds. The generated vortices and unresolved turbulent flows associated with the KHI could be observed as a broadening of chromospheric spectral lines. Analysis of the Hα line profiles shows that the detected structures have enhanced line widths with respect to the background. We also investigate the stability of a larger-scale Hα jet that was ejected along the line of sight. Vortex-like features, rapidly developing around the jet’s boundary, are considered as evidence of the KHI. The analysis of the energy equation in the partially ionized plasma shows that ion–neutral collisions may lead to fast heating of the KH vortices over timescales comparable to the lifetime of chromospheric jets.
Abstract:
Despite record-setting performance demonstrated by superconducting Transition Edge Sensors (TESs) and growing utilization of the technology, a theoretical model of the physics governing the superconducting phase transition of TES devices has proven elusive. Earlier attempts to describe TESs assumed them to be uniform superconductors. Sadleir et al. (2010) showed that TESs are weak links and that the superconducting order parameter strength has significant spatial variation. Measurements are presented of the temperature T and magnetic field B dependence of the critical current Ic, measured over 7 orders of magnitude, on square Mo/Au bilayers ranging in length from 8 to 290 microns. We find that our measurements have a natural explanation in terms of a spatially varying order parameter that is enhanced in proximity to the higher transition temperature superconducting leads (the longitudinal proximity effect) and suppressed in proximity to the added normal metal structures (the lateral inverse proximity effect). These in-plane proximity effects and scaling relations are observed over unprecedentedly long lengths (in excess of 1000 times the mean free path) and explained in terms of a Ginzburg-Landau model. Our low temperature Ic(B) measurements are found to agree with a general derivation for a superconducting strip with an edge or geometric barrier to vortex entry, and we also derive two conditions that lead to Ic rectification. At high temperatures, Ic(B) exhibits distinct Josephson-effect behavior over long length scales, following functional dependences not previously reported. We also investigate how film stress changes the transition, explain some transition features in terms of a nonequilibrium superconductivity effect, and show that our measurements of the resistive transition are not consistent with a percolating resistor network model.
Abstract:
This article presents a case report on failures found in the cores of power transformers, together with technical and methodological experience from the partial and total repair of several units carried out at Industrias Explorer Ingeniería S. A. S., a company dedicated to the maintenance and repair of transformers. The methodology for selecting the lamination type, cutting system, assembly, adjustment, and pressing of the core is also presented, since these activities are decisive for obtaining equipment with lower no-load losses and currents, as well as lower noise levels. The stages are described for calculating the operating flux of the core, the test circuit for core saturation, considerations for performing thermographic inspection and measuring the no-load losses, selection of the lamination type, and the assembly techniques employed. Some experiences are presented, such as: replacement of half the core; re-insulation of affected zones using Nomex fibers between laminations; total core replacement due to poorly executed cutting at the factory; and total core replacement due to double grounding, which caused core heating and damaged the insulation of its laminations, leaving them short-circuited. In all cases, a reduction of the no-load losses is evident. Finally, the behavior of the transformers after being returned to service is presented.
Abstract:
Polymer matrix composites offer advantages for many applications due to their combination of properties, which includes low density, high specific strength and modulus of elasticity, and corrosion resistance. However, the application of non-destructive techniques using magnetic sensors for the evaluation of these materials is not possible, since the materials are non-magnetizable. Ferrites are materials with excellent magnetic properties, chemical stability, and corrosion resistance. Due to these properties, they are promising for the development of polymer composites with magnetic properties. In this work, glass fiber/epoxy circular plates were produced with 10 wt% of cobalt or barium ferrite particles. The cobalt ferrite was synthesized by the Pechini method. The commercial barium ferrite was subjected to a milling process to study the effect of particle size on the magnetic properties of the material. The characterization of the ferrites was carried out by X-ray diffraction (XRD), field emission gun scanning electron microscopy (FEG-SEM), and vibrating sample magnetometry (VSM). Circular notches of 1, 5, and 10 mm diameter were introduced into the composite plates using a drill bit for non-destructive evaluation by the magnetic flux leakage (MFL) technique. The results indicated that the magnetic signals measured in plates with unmilled barium ferrite and with cobalt ferrite showed good correlation with the presence of notches. The milling process for 12 h and 20 h did not contribute to improving the identification of the smaller notches (1 mm). However, the smaller particle size produced smoother magnetic curves, with fewer discontinuities and an improved signal-to-noise ratio. In summary, the results suggest that the proposed approach has great potential for the detection of damage in polymer composite structures.
Abstract:
In modern power electronics equipment, it is desirable to design a low-profile, high-power-density converter with a fast dynamic response. Increasing the switching frequency reduces the size of passive components such as transformers, inductors, and capacitors, which results in a compact size and lower energy storage requirements. In addition, a fast dynamic response can be achieved by operating at high frequency. However, achieving high-frequency operation while keeping the efficiency high requires new advanced devices, higher-performance magnetic components, and new circuit topologies. These are required to absorb and utilize the parasitic components and also to mitigate the frequency-dependent losses, including switching loss, gating loss, and magnetic loss. The required performance improvements can be achieved through the use of Radio Frequency (RF) design techniques. To reduce switching losses, resonant converter topologies such as resonant RF amplifiers (inverters) combined with a rectifier are an effective solution for maintaining high efficiency at high switching frequencies, using techniques such as device parasitic absorption, Zero Voltage Switching (ZVS), Zero Current Switching (ZCS), and resonant gating. Gallium Nitride (GaN) device technologies are being broadly used in RF amplifiers due to their lower on-resistance and device capacitances compared with silicon (Si) devices. Therefore, this kind of semiconductor is well suited for high-frequency power converters. The major problems involved with high-frequency magnetics are skin and proximity effects, increased core and copper losses, unbalanced magnetic flux distribution generating localized hot spots, and a reduced coupling coefficient. In order to eliminate the magnetic core losses, which play a crucial role at higher operating frequencies, a coreless PCB transformer can be used.
Compared to a conventional wire-wound transformer, a planar PCB transformer, in which the windings are laid out on the Printed Circuit Board (PCB), has a low-profile structure, excellent thermal characteristics, and ease of manufacturing. The work in this thesis therefore demonstrates the design and analysis of an isolated low-profile class DE resonant converter operating at a 10 MHz switching frequency with a nominal output of 150 W. The power stage consists of a class DE inverter using GaN devices along with a sinusoidal gate drive circuit on the primary side, and a class DE rectifier on the secondary side. To meet the converter's stringent height requirement, isolation is provided by a 10-layer coreless PCB transformer with a 1:20 turns ratio. It is designed and optimized using 3D Finite Element Method (FEM) tools and radio frequency (RF) circuit design software. Simulation and experimental results are presented for a 10-layer coreless PCB transformer operating at 10 MHz.
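As a back-of-the-envelope companion to the resonant design, the tank components must satisfy f0 = 1 / (2*pi*sqrt(L*C)); a small sketch follows, where the 10 MHz switching frequency is from the text but the resonant capacitance is an assumed, illustrative value (in practice it would include the absorbed device output capacitance).

```python
import math

f_sw = 10e6      # switching frequency, Hz (from the text)
c_res = 300e-12  # assumed resonant capacitance, F (illustrative)

# Series-resonant inductance that places resonance at the switching frequency:
# from f0 = 1 / (2*pi*sqrt(L*C))  =>  L = 1 / ((2*pi*f0)^2 * C)
l_res = 1 / ((2 * math.pi * f_sw) ** 2 * c_res)

# Sanity check: recompute the resonant frequency from L and C.
f0 = 1 / (2 * math.pi * math.sqrt(l_res * c_res))
print(f"L = {l_res * 1e9:.1f} nH, f0 = {f0 / 1e6:.2f} MHz")
```

The resulting inductance is in the hundreds of nanohenries, which is the scale at which a coreless PCB winding becomes practical.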
Abstract:
BACKGROUND: The heart relies on continuous energy production, and imbalances herein impair cardiac function directly. The tricarboxylic acid (TCA) cycle is the primary means of energy generation in the healthy myocardium, but direct noninvasive quantification of metabolic fluxes is challenging due to the low concentration of most metabolites. Hyperpolarized (13)C magnetic resonance spectroscopy (MRS) provides the opportunity to measure cellular metabolism in real time in vivo. The aim of this work was to noninvasively measure myocardial TCA cycle flux (VTCA) in vivo within a single minute. METHODS AND RESULTS: Hyperpolarized [1-(13)C]acetate was administered at different concentrations in healthy rats. (13)C incorporation into [1-(13)C]acetylcarnitine and the TCA cycle intermediate [5-(13)C]citrate was dynamically detected in vivo with a time resolution of 3 s. Different kinetic models were established and evaluated to determine the metabolic fluxes by simultaneously fitting the evolution of the (13)C labeling in acetate, acetylcarnitine, and citrate. VTCA was estimated to be 6.7 ± 1.7 μmol·g(-1)·min(-1) (dry weight), and was best estimated with a model using only the labeling in citrate and acetylcarnitine, independent of the precursor. The TCA cycle rate was not linear with the citrate-to-acetate metabolite ratio, and could thus not be quantified using a ratiometric approach. The (13)C signal evolution of citrate, i.e., citrate formation, was independent of the amount of injected acetate, while the (13)C signal evolution of acetylcarnitine revealed a dose dependency on the injected acetate. The (13)C labeling of citrate did not correlate with that of acetylcarnitine, leading to the hypothesis that acetylcarnitine formation is not an indication of mitochondrial TCA cycle activity in the heart. CONCLUSIONS: Hyperpolarized [1-(13)C]acetate is a metabolic probe independent of pyruvate dehydrogenase (PDH) activity.
It allows the direct estimation of VTCA in vivo, which was shown to be neither dependent on the administered acetate dose nor on the (13)C labeling of acetylcarnitine. Dynamic (13)C MRS coupled to the injection of hyperpolarized [1-(13)C]acetate can enable the measurement of metabolic changes during impaired heart function.
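The kinetic modeling described above fits the (13)C label evolution of precursor and product pools. A highly simplified precursor-product sketch follows, with assumed rate constants (this is not the study's fitted model or its values): the hyperpolarized acetate label decays while feeding a product pool (e.g. citrate) at a first-order transfer rate.

```python
# Minimal one-precursor, one-product label model integrated with Euler steps.
# All rate constants are illustrative assumptions.
dt = 0.1          # integration step, s
t_end = 60.0      # simulate one minute of label evolution
k_decay = 1 / 20  # effective 13C signal decay (T1 relaxation etc.), 1/s
k_tca = 0.05      # first-order label transfer rate, precursor -> product, 1/s

ace, cit = 1.0, 0.0   # normalized label in acetate (precursor) and citrate (product)
cit_peak = 0.0
t = 0.0
while t < t_end:
    d_ace = -(k_decay + k_tca) * ace           # precursor loses label to decay and transfer
    d_cit = k_tca * ace - k_decay * cit        # product gains from precursor, loses to decay
    ace += d_ace * dt
    cit += d_cit * dt
    cit_peak = max(cit_peak, cit)
    t += dt
print(f"peak product label fraction ~ {cit_peak:.2f}")
```

Fitting such coupled label curves to the measured (13)C time courses is, in spirit, how a transfer rate like VTCA is extracted from the dynamic spectra.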
Abstract:
Measurements of the ionospheric E-region during total solar eclipses have been used to provide information about the evolution of the solar magnetic field and EUV and X-ray emissions from the solar corona and chromosphere. By measuring levels of ionisation during an eclipse and comparing these measurements with an estimate of the unperturbed ionisation levels (such as those made during a control day, where available) it is possible to estimate the percentage of ionising radiation being emitted by the solar corona and chromosphere. Previously unpublished data from the two eclipses presented here are particularly valuable as they provide information that supplements the data published to date. The eclipse of 23 October 1976 over Australia provides information in a data gap that would otherwise have spanned the years 1966 to 1991. The eclipse of 4 December 2002 over Southern Africa is important as it extends the published sequence of measurements. Comparing measurements from eclipses between 1932 and 2002 with the solar magnetic source flux reveals that changes in the solar EUV and X-ray flux lag the open source flux measurements by approximately 1.5 years. We suggest that this unexpected result comes about from changes to the relative size of the limb corona between eclipses, with the lag representing the time taken to populate the coronal field with plasma hot enough to emit the EUV and X-rays ionising our atmosphere.
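Under photochemical equilibrium in the E-region, the ion production rate q balances recombination, q = α·N², so N ∝ √q and the residual ionising-flux fraction during totality can be estimated from the ratio of electron densities at totality and on a control day. A sketch with purely illustrative densities (not the values from these eclipses):

```python
# E-region photochemical equilibrium: q = alpha * N^2  =>  q_ratio = (N_ratio)^2.
n_totality = 0.3e5  # electron density at totality, cm^-3 (illustrative)
n_control = 1.5e5   # unperturbed density at the same solar zenith angle (illustrative)

coronal_fraction = (n_totality / n_control) ** 2
print(f"residual ionising flux ~ {coronal_fraction * 100:.0f}% of the total")
```

The square makes the method sensitive: a modest drop in measured ionisation corresponds to a much larger drop in the inferred ionising emission from the corona and chromosphere.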
Abstract:
Svalgaard and Cliver (2010) recently reported a consensus between the various reconstructions of the heliospheric field over recent centuries. This is a significant development because, individually, each has uncertainties introduced by instrument calibration drifts, limited numbers of observatories, and the strength of the correlations employed. However, taken collectively, a consistent picture is emerging. We here show that this consensus extends to more data sets and methods than reported by Svalgaard and Cliver, including that used by Lockwood et al. (1999), when their algorithm is used to predict the heliospheric field rather than the open solar flux. One area where there is still some debate relates to the existence and meaning of a floor value to the heliospheric field. From cosmogenic isotope abundances, Steinhilber et al. (2010) have recently deduced that the near-Earth IMF at the end of the Maunder minimum was 1.80 ± 0.59 nT, which is considerably lower than the revised floor of 4 nT proposed by Svalgaard and Cliver. We here combine cosmogenic and geomagnetic reconstructions and modern observations (with allowance for the effect of solar wind speed and structure on the near-Earth data) to derive an estimate for the open solar flux of (0.48 ± 0.29) × 10^14 Wb at the end of the Maunder minimum. By way of comparison, the largest and smallest annual means recorded by instruments in space between 1965 and 2010 are 5.75 × 10^14 Wb and 1.37 × 10^14 Wb, respectively, set in 1982 and 2009, and the maximum of the 11-year running means was 4.38 × 10^14 Wb in 1986. Hence the average open solar flux during the Maunder minimum is found to have been 11% of its peak value during the recent grand solar maximum.
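The quoted 11% figure follows directly from the numbers given in the abstract:

```python
# Maunder-minimum open solar flux relative to the space-age peak
# (both values are quoted in the abstract above).
maunder_wb = 0.48e14    # open solar flux at the end of the Maunder minimum, Wb
peak_mean_wb = 4.38e14  # maximum 11-year running mean, 1986, Wb

ratio = maunder_wb / peak_mean_wb
print(f"{ratio * 100:.0f}%")  # prints: 11%
```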