585 results for Quadrature Coils
Abstract:
It has recently been reported in this journal that local fat depots produce a sizable frequency-dependent signal attenuation in magnetic resonance spectroscopy (MRS) of the brain. If of a general nature, this effect would call into question the use of internal reference signals for MRS quantification and the quantitative use of MRS as a whole. Here, an attempt was made to verify this effect and pinpoint its potential causes by acquiring data with various acquisition settings, including two field strengths, two MR scanners from different vendors, different water suppression sequences, RF coils, localization sequences, echo times, and lipid/metabolite phantoms. With all settings tested, the reported effect could not be reproduced, and it is concluded that water referencing and quantitative MRS per se remain valid tools under common acquisition conditions.
Abstract:
PURPOSE To reliably determine the amplitude of the transmit radiofrequency (B1+) field in moving organs like the liver and heart, where most current techniques are usually not feasible. METHODS A B1+ field measurement based on the Bloch-Siegert shift induced by a pair of Fermi pulses in a double-triggered modified Point RESolved Spectroscopy (PRESS) sequence with motion-compensated crusher gradients has been developed. Performance of the sequence was tested in moving phantoms and in muscle, liver, and heart of six healthy volunteers each, using different arrangements of transmit/receive coils. RESULTS B1+ determination in a moving phantom was almost independent of the type and amplitude of the motion and agreed well with theory. In vivo, repeated measurements led to very small coefficients of variance (CV) if the amplitude of the Fermi pulse was chosen above an appropriate level (CV in muscle 0.6%, liver 1.6%, and heart 2.3% with moderate Fermi-pulse amplitude, and 1.2% with stronger Fermi pulses). CONCLUSION The proposed sequence provides a very robust determination of B1+ in a single voxel even under challenging conditions (transmission with a surface coil or measurements in the heart without breath-hold).
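For orientation, a minimal numerical sketch of the quantitative step behind such a measurement is given below. It assumes the standard far-off-resonance Bloch-Siegert relation, phi_BS ≈ (γ·B1(t))²/(2·ω_off) integrated over the pulse, so that the phase difference of a ±off-resonance pair yields B1+. The Fermi-pulse shape, offset frequency, and measured phase are illustrative placeholders, not values from the paper.

```python
import numpy as np

GAMMA = 2 * np.pi * 42.577e6            # proton gyromagnetic ratio [rad/s/T]

def fermi_envelope(t, duration, edge=0.2e-3):
    """Normalized Fermi-like pulse envelope (peak ~ 1); assumed shape."""
    return 1.0 / (1.0 + np.exp((np.abs(t - duration / 2) - 0.45 * duration) / edge))

def k_bs(omega_off, duration=4e-3, n=4000):
    """Sensitivity K such that phi_BS = K * B1_peak^2 [rad/T^2]."""
    t = np.linspace(0.0, duration, n)
    s = fermi_envelope(t, duration)
    return np.sum((GAMMA * s) ** 2 / (2.0 * omega_off)) * (t[1] - t[0])

omega_off = 2 * np.pi * 4000.0          # assumed +/- 4 kHz off-resonance pulses
delta_phi = 1.0                         # assumed measured phase difference [rad]
K = k_bs(omega_off)
b1_peak = np.sqrt(delta_phi / (2.0 * K))   # factor 2: the +/- pair doubles the shift
print(f"estimated peak B1+ ~ {b1_peak * 1e6:.1f} uT")
```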
Abstract:
Magnetic resonance imaging (MRI) is a non-invasive technique that offers excellent soft tissue contrast for characterizing soft tissue pathologies. Diffusion tensor imaging (DTI) is an MRI technique that has been shown to have the sensitivity to detect subtle pathology that is not evident on conventional MRI. Rats are commonly used as animal models for characterizing spinal cord pathologies, including spinal cord injury (SCI), cancer, and multiple sclerosis. These pathologies can affect both the thoracic and cervical regions, and their complete characterization using MRI requires DTI in both regions. Prior to the application of DTI for investigating pathologic changes in the spinal cord, it is essential to establish DTI metrics in normal animals. To date, in-vivo DTI studies of the rat spinal cord have used implantable coils for high signal-to-noise ratio (SNR) and spin-echo pulse sequences for reduced geometric distortions. Implantable coils have several disadvantages, including: (1) the invasive nature of implantation, (2) loss of SNR due to frequency shift with time in longitudinal studies, and (3) difficulty in imaging the cervical region. While echo planar imaging (EPI) offers much shorter acquisition times than spin-echo imaging, EPI is very sensitive to static magnetic field inhomogeneities, and the existing shimming techniques implemented on the MRI scanner do not perform well on the spinal cord because of its geometry. In this work, an integrated approach has been implemented for in-vivo DTI characterization of the rat spinal cord in the thoracic and cervical regions. A three-element phased array coil was developed for improved SNR and extended spatial coverage. A field-map shimming technique was developed for minimizing geometric distortions in EPI images. Using these techniques, EPI-based DWI images were acquired with an optimized diffusion encoding scheme from 6 normal rats, and the DTI-derived metrics were quantified. The phantom studies indicated higher SNR and smaller bias in the estimated DTI metrics than previous studies in the cervical region. In-vivo results indicated no statistical difference in the DTI characteristics of either gray matter or white matter between the thoracic and cervical regions.
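The DTI-derived metrics quantified in such studies are typically the mean diffusivity and fractional anisotropy obtained from the eigenvalues of the diffusion tensor; the short sketch below shows the standard formulas on an invented example tensor (the helper name `dti_metrics` is hypothetical, not code from this work).

```python
import numpy as np

def dti_metrics(D):
    """Return (mean diffusivity, fractional anisotropy) of a 3x3 diffusion tensor D."""
    lam = np.linalg.eigvalsh(D)                       # tensor eigenvalues
    md = lam.mean()                                   # mean diffusivity
    fa = np.sqrt(1.5 * np.sum((lam - md) ** 2) / np.sum(lam ** 2))
    return md, fa

# invented white-matter-like tensor, units mm^2/s
D = np.diag([1.6e-3, 0.4e-3, 0.4e-3])
md, fa = dti_metrics(D)
print(f"MD = {md:.2e} mm^2/s, FA = {fa:.2f}")
```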
Abstract:
Distribution, accumulation and diagenesis of surficial sediments in coastal and continental shelf systems follow complex chains of localized processes and form deposits of great spatial variability. Given the environmental and economic relevance of ocean margins, there is a growing need for innovative geophysical exploration methods to characterize seafloor sediments by more than acoustic properties. A newly conceptualized benthic profiling and data processing approach based on controlled source electromagnetic (CSEM) imaging permits the magnetic susceptibility and the electric conductivity of shallow marine deposits to be quantified coevally. The two physical properties differ fundamentally insofar as magnetic susceptibility mostly assesses solid particle characteristics such as terrigenous or iron mineral content, redox state and contamination level, while electric conductivity primarily relates to the fluid-filled pore space and detects salinity, porosity and grain-size variations. We develop and validate a layered half-space inversion algorithm for submarine multifrequency CSEM with concentric sensor configuration. Guided by modeling results, we modified a commercial land CSEM sensor for submarine application and mounted it in a nonconductive and nonmagnetic bottom-towed sled. This benthic EM profiler, Neridis II, achieves 25 soundings/second at 3-4 knots over continuous profiles of up to a hundred kilometers. Magnetic susceptibility is determined from the 75 Hz in-phase response (90% of the signal originates from the top 50 cm), while electric conductivity is derived from the 5 kHz out-of-phase (quadrature) component (90% from the top 92 cm). Exemplary survey data from the north-west Iberian margin underline the excellent sensitivity, functionality and robustness of the system in littoral (~0-50 m) and neritic (~50-300 m) environments. Susceptibility vs. porosity cross-plots successfully identify known lithofacies units and their transitions. All presently available data indicate a great potential of CSEM profiling for assessing the complex distribution of shallow marine surficial sediments and for revealing the climatic, hydrodynamic, diagenetic and anthropogenic factors governing their formation.
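As a rough illustration of how in-phase and out-of-phase (quadrature) responses can be separated at the two working frequencies, the sketch below applies a simple lock-in style demodulation to a synthetic receiver signal; the sampling rate, amplitudes and noise level are assumptions, not instrument parameters.

```python
import numpy as np

def demodulate(signal, fs, f_ref):
    """Return (in-phase, quadrature) amplitudes of `signal` at f_ref [Hz]."""
    t = np.arange(signal.size) / fs
    i = 2.0 * np.mean(signal * np.cos(2 * np.pi * f_ref * t))   # in-phase
    q = 2.0 * np.mean(signal * np.sin(2 * np.pi * f_ref * t))   # quadrature
    return i, q

fs = 50_000.0                          # assumed sampling rate [Hz]
t = np.arange(int(fs)) / fs            # 1 s of synthetic data
sig = (1.0 * np.cos(2 * np.pi * 75 * t)       # susceptibility-like in-phase part
       + 0.3 * np.sin(2 * np.pi * 5000 * t)   # conductivity-like quadrature part
       + 0.05 * np.random.randn(t.size))      # noise
print("75 Hz (I, Q):", demodulate(sig, fs, 75))
print("5 kHz (I, Q):", demodulate(sig, fs, 5000))
```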
Abstract:
The identification in various proxy records of periods of rapid (decadal scale) climate change over recent millennia, together with the possibility that feedback mechanisms may amplify climate system responses to increasing atmospheric CO2, highlights the importance of a detailed understanding, at high spatial and temporal resolutions, of forcings and feedbacks within the system. Such an understanding has hitherto been limited because the temperate marine environment has lacked an absolute timescale of the kind provided by tree-rings for the terrestrial environment and by corals for the tropical marine environment. Here we present the first annually resolved, multi-centennial (489-year), absolutely dated, shell-based marine master chronology. The chronology has been constructed by detrending and averaging annual growth increment widths in the shells of multiple specimens of the very long-lived bivalve mollusc Arctica islandica, collected from sites to the south and west of the Isle of Man in the Irish Sea. The strength of the common environmental signal expressed in the chronology is fully comparable with equivalent statistics for tree-ring chronologies. Analysis of the ¹⁴C signal in the shells shows no trend in the marine radiocarbon reservoir correction (ΔR), although it may be more variable before ~1750. The δ¹³C signal shows a very significant (R² = 0.456, p < 0.0001) trend due to the ¹³C Suess effect.
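A minimal sketch of the generic detrend-and-average step used to build a growth-increment master chronology is given below; the exponential detrending curve and the synthetic shell series are assumptions for illustration and do not reproduce the study's actual procedure or data.

```python
import numpy as np

def detrend(widths):
    """Divide an increment-width series by a fitted exponential growth trend."""
    age = np.arange(widths.size, dtype=float)
    slope, intercept = np.polyfit(age, np.log(widths), 1)   # log-linear fit
    return widths / np.exp(intercept + slope * age)

# synthetic shells: {first calendar year: increment widths}
rng = np.random.default_rng(0)
shells = {1900: np.exp(-0.03 * np.arange(80)) * (1 + 0.1 * rng.standard_normal(80)),
          1920: np.exp(-0.03 * np.arange(70)) * (1 + 0.1 * rng.standard_normal(70))}

indices = {}                                   # calendar year -> detrended indices
for start, widths in shells.items():
    for year, idx in zip(range(start, start + widths.size), detrend(widths)):
        indices.setdefault(year, []).append(idx)

chronology = {year: float(np.mean(v)) for year, v in sorted(indices.items())}
print(list(chronology.items())[:3])
```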
Abstract:
A particle accelerator is any device that uses electromagnetic fields to impart energy to charged particles (typically electrons or ionized atoms), accelerating and/or energizing them up to the level required for its purpose. The applications of particle accelerators are countless, ranging from the common TV CRT, through medical X-ray devices, to the large ion colliders used to probe the smallest details of matter. Other engineering applications include ion implantation devices used to obtain better semiconductors and materials with remarkable properties. Materials that must withstand irradiation in future nuclear fusion plants also benefit from particle accelerators. Many devices are required for the correct operation of a particle accelerator. The most important are the particle sources; the guiding, focusing and correcting magnets; the radiofrequency accelerating cavities; the fast deflection devices; the beam diagnostic mechanisms; and the particle detectors. Historically, most fast particle deflection devices have been built using copper coils and ferrite cores, which could produce a relatively fast magnetic deflection but needed large voltages and currents to counteract the high coil inductance, with response times in the microsecond range. Beam stability considerations and the new range of energies and sizes of present-day accelerators and their rings require new devices featuring improved wakefield behaviour and faster response (in the nanosecond range). This can only be achieved by an electromagnetic deflection device based on a transmission line. The electromagnetic deflection device (strip-line kicker) produces a transverse displacement of a particle beam travelling close to the speed of light, in order to extract the particles to another experiment or to inject them into a different accelerator. The deflection is carried out by means of two short, opposite-phase pulses, and the diversion of the particles results from the integrated Lorentz force of the electromagnetic field travelling along the kicker. This Thesis presents a detailed calculation, manufacturing and test methodology for strip-line kicker devices. The methodology is then applied to two real cases, which are fully designed, built, tested and finally installed in the CTF3 accelerator facility at CERN (Geneva). Analytical and numerical calculations, both in 2D and 3D, are detailed starting from the basic specifications in order to obtain a conceptual design. Time-domain and frequency-domain calculations are carried out using different FDM and FEM codes, analysing, among other concepts, the scattering parameters, the resonating higher-order modes and the wakefields. Several contributions are presented in the calculation process dealing specifically with strip-line kicker devices fed by electromagnetic pulses. Materials and components typically used for the fabrication of these devices are analysed in the manufacturing section. Mechanical supports and connections of the electrodes are also detailed, presenting some interesting contributions on these concepts. The electromagnetic and vacuum tests, required to ensure that the manufactured devices fulfil the specifications, are then analysed. Finally, and only from the analytical point of view, the strip-line kickers are studied together with a pulsed power supply based on solid-state power switches (MOSFETs).
The solid-state technology applied to pulsed power supplies is introduced, and several circuit topologies are modelled and simulated to obtain fast pulses with good flat-tops.
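As a hedged back-of-the-envelope illustration of the integrated Lorentz-force deflection mentioned above, the sketch below estimates the kick angle of an idealized strip-line kicker, assuming TEM fields between the electrodes and a pulse counter-propagating with an ultrarelativistic beam so that the electric and magnetic forces add; the numbers are illustrative, not the CTF3 design values.

```python
E_BEAM_EV = 200e6      # assumed beam energy [eV]; ultrarelativistic, so p*c ~ E
V_PULSE   = 12.5e3     # assumed differential pulse voltage on the electrodes [V]
GAP       = 40e-3      # assumed electrode separation d [m]
LENGTH    = 1.7        # assumed electrode length L [m]

# Ideal TEM field between the strips: E = V/d and B = E/c. For a particle
# counter-propagating with the pulse the electric and magnetic forces add,
# giving a transverse kick ~ 2*q*E over the length L (the charge e cancels
# when the beam energy is expressed in eV):
theta = 2 * V_PULSE * LENGTH / (GAP * E_BEAM_EV)   # kick angle [rad]
print(f"kick angle ~ {theta * 1e3:.2f} mrad")
```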
Abstract:
Prestressed concrete members are susceptible to relaxation losses, which must be taken into account in structural design. After production, prestressing wires are wound onto coils to simplify storage and transport. Possible adverse effects of this practice on the relaxation behaviour of prestressing steel wires are usually neglected, although manufacturers and contractors have found, through relaxation tests after long storage periods, that the relaxation loss was in some cases higher than shortly after production. The influence of coiling on the relaxation loss is therefore examined in an experimental investigation and confirmed by applying a simple analytical model. The results show that several factors, such as the initial stress, long-term storage, or storage at high temperatures, can trigger or amplify relaxation losses. It is also shown, however, that these effects can be neglected provided the requirements of the relevant guidelines (minimum coil diameter) are observed.
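As a hedged illustration of why a minimum coil diameter is prescribed, the sketch below evaluates the elastic bending stress superimposed when a wire of diameter d is wound onto a coil of diameter D, which scales as E·d/D; the material and geometry values are assumptions, not those of the tested wires.

```python
E_STEEL = 195e9        # elastic modulus of prestressing steel [Pa]
D_WIRE  = 5e-3         # assumed wire diameter [m]
SIGMA_0 = 1100e6       # assumed initial prestress [Pa]

for d_coil in (1.5, 2.0, 2.5):                      # candidate coil diameters [m]
    sigma_bend = E_STEEL * D_WIRE / d_coil          # max elastic bending stress
    print(f"coil diameter {d_coil:.1f} m: bending stress ~ {sigma_bend / 1e6:.0f} MPa "
          f"(~{100 * sigma_bend / SIGMA_0:.0f}% of the assumed prestress)")
```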
Abstract:
This paper describes two methods to cancel the effect of two kinds of leakage signals which may be present when an antenna is measured in a planar near-field range. One method reduces leakage bias errors from the receiver's quadrature detector and is based on estimating the bias constant added to every near-field data sample. That constant is then subtracted from the data, removing its undesired effect on the far-field pattern. The estimation is performed by back-propagating the field from the scan plane to the antenna under test (AUT) plane and averaging all the data located outside the AUT aperture. The second method is able to cancel the effect of the leakage from faulty transmission lines, connectors or rotary joints. The basis of this method is also a reconstruction process to determine the field distribution on the AUT plane. Once this distribution is known, a spatial filtering is applied to cancel the contribution of those faulty elements. After that, a near-field-to-far-field transformation is applied, obtaining a new radiation pattern where the leakage effects have disappeared. To verify the effectiveness of both methods, several examples are presented.
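A minimal sketch of the bias-estimation idea described for the first method is given below, assuming a plane-wave-spectrum back-propagation and a simple boolean aperture mask; the function names and the phase bookkeeping are illustrative choices, not the paper's implementation.

```python
import numpy as np

def back_propagate(nf, dx, dy, z, k0):
    """Plane-wave-spectrum propagation of a sampled planar field by a distance z
    (sign convention: spectrum multiplied by exp(+1j*kz*z))."""
    ny, nx = nf.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, dy)
    KX, KY = np.meshgrid(kx, ky)
    kz = np.sqrt((k0**2 - KX**2 - KY**2).astype(complex))   # evanescent part -> imaginary
    return np.fft.ifft2(np.fft.fft2(nf) * np.exp(1j * kz * z))

def estimate_bias(nf, aperture_mask, dx, dy, z, k0):
    """Estimate the complex bias constant added to every scan-plane sample."""
    aut_plane = back_propagate(nf, dx, dy, z, k0)
    # A constant bias on the scan plane is the kx = ky = 0 spectral component,
    # so it maps to a constant times exp(1j*k0*z) on the AUT plane, while the
    # true back-propagated field should average out outside the aperture.
    return aut_plane[~aperture_mask].mean() * np.exp(-1j * k0 * z)

# usage sketch: corrected_nf = nf - estimate_bias(nf, mask, dx, dy, z, k0)
```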
Abstract:
A system to evaluate the efficiency of nanoparticles in hyperthermia applications is presented. The method allows a direct measurement of the power dissipated by the nanoparticles through the determination of the first-harmonic component of the in-quadrature magnetic moment induced by the applied field. The magnetic moment is measured using an induction method. To avoid errors and reduce the noise, a double in-phase demodulation technique is used. To test the viability of the system, we have measured nanowires, nanoparticles and copper samples of different volumes and validated the approach by comparing experimental and modeled results.
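A hedged sketch of the underlying power estimate follows: for a sinusoidal applied field H(t) = H0·cos(2πft), the cycle-averaged dissipated power follows from the first-harmonic quadrature component m_q of the induced magnetic moment as P = π·μ0·f·H0·m_q; the numerical values are illustrative only.

```python
import numpy as np

MU0 = 4 * np.pi * 1e-7          # vacuum permeability [T*m/A]

def dissipated_power(f, h0, m_quadrature):
    """Cycle-averaged dissipated power [W]: P = pi * mu0 * f * H0 * m_q."""
    return np.pi * MU0 * f * h0 * m_quadrature

# illustrative numbers: 100 kHz field, 10 kA/m amplitude, 1e-5 A*m^2 quadrature moment
print(f"P ~ {dissipated_power(100e3, 10e3, 1e-5) * 1e3:.1f} mW")
```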
Abstract:
One important task in the design of an antenna is to carry out an analysis to find out the characteristics of the antenna that best fulfil the specifications fixed by the application. After that, a prototype is manufactured, and the next stage in the design process is to check whether the radiation pattern differs from the designed one. Besides the radiation pattern, other radiation parameters such as directivity, gain, impedance, beamwidth, efficiency and polarization must also be evaluated. For this purpose, accurate antenna measurement techniques are needed in order to know exactly the actual electromagnetic behavior of the antenna under test. For this reason, most measurements are performed in anechoic chambers, which are closed, normally shielded areas covered by electromagnetic absorbing material that simulates free-space propagation conditions. Moreover, these facilities can be used independently of the weather conditions and allow measurements free from interference. Despite all the advantages of anechoic chambers, the results obtained from both far-field and near-field measurements are inevitably affected by errors. Thus, the main objective of this Thesis is to propose algorithms to improve the quality of the results obtained in antenna measurements by using post-processing techniques, without requiring additional measurements. First, a thorough review of the state of the art has been carried out in order to give a general view of the possibilities for characterizing or reducing the effects of errors in antenna measurements. Then, new methods to reduce the unwanted effects of four of the most common errors in antenna measurements are described and validated both theoretically and numerically. The basis of all of them is the same: to perform a transformation from the measurement surface to another domain where there is enough information to easily remove the contribution of the errors. The four errors analyzed are noise, reflections, truncation errors and leakage, and the tools used to suppress them are mainly source reconstruction techniques, spatial and modal filtering, and iterative algorithms to extrapolate functions. Therefore, the main idea of all the methods is to modify the classical near-field-to-far-field transformations by including additional steps with which errors can be greatly suppressed. Moreover, the proposed methods are not computationally complex and, because they are applied in post-processing, additional measurements are not required. Noise is the most widely studied error in this Thesis, with a total of three alternatives proposed to filter out an important noise contribution before obtaining the far-field pattern. The first one is based on modal filtering. The second alternative uses a source reconstruction technique to obtain the extreme near-field, where it is possible to apply a spatial filtering. The last one is to back-propagate the measured field to a surface with the same geometry as the measurement surface but closer to the AUT and then also apply a spatial filtering. All the alternatives are analyzed in the three most common near-field systems, including comprehensive noise statistical analyses in order to deduce the signal-to-noise ratio improvement achieved in each case.
The method to suppress reflections in antenna measurements is also based on a source reconstruction technique, and the main idea is to reconstruct the field over a surface larger than the antenna aperture in order to be able to identify and later suppress the virtual sources related to the reflected waves. The truncation error present in the results obtained from planar, cylindrical and partial spherical near-field measurements is the third error analyzed in this Thesis. The method to reduce this error is based on an iterative algorithm to extrapolate the reliable region of the far-field pattern from the knowledge of the field distribution on the AUT plane. The proper termination point of this iterative algorithm as well as other critical aspects of the method are also studied. The last part of this work is dedicated to the detection and suppression of the two most common leakage sources in antenna measurements. A first method tries to estimate the leakage bias constant added by the receiver's quadrature detector to every near-field data sample and then suppress its effect on the far-field pattern. The second method can be divided into two parts: the first one finds the position of the faulty component that radiates or receives unwanted radiation, making its identification within the measurement environment and its later replacement easier; the second part is able to computationally remove the leakage effect without requiring the replacement of the faulty component.
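As an illustration of the kind of iterative extrapolation used against the truncation error, the sketch below alternates between enforcing the finite AUT support and restoring the reliable part of the spectrum (a Gerchberg-Papoulis style scheme) on a 1-D scalar toy problem; the grid, masks and termination by a fixed iteration count are assumptions, not the thesis' criterion.

```python
import numpy as np

def extrapolate(spectrum_meas, reliable, support, n_iter=50):
    """Alternate support/spectrum constraints (Gerchberg-Papoulis style)."""
    spectrum = spectrum_meas.copy()
    for _ in range(n_iter):
        field = np.fft.ifft(spectrum)
        field[~support] = 0.0                         # enforce the finite AUT support
        spectrum = np.fft.fft(field)
        spectrum[reliable] = spectrum_meas[reliable]  # keep the reliable spectral data
    return spectrum

# 1-D toy problem: aperture in the centre, only the low-frequency half reliable
n = 256
support = np.zeros(n, bool);  support[96:160] = True
reliable = np.zeros(n, bool); reliable[:64] = True; reliable[-64:] = True
true_field = np.zeros(n);     true_field[96:160] = np.hanning(64)
spectrum_meas = np.where(reliable, np.fft.fft(true_field), 0.0)
estimate = extrapolate(spectrum_meas, reliable, support)
print("residual spectral error:", np.max(np.abs(estimate - np.fft.fft(true_field))))
```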
Abstract:
There are many industries that use highly technological solutions to improve quality in all of their products. The steel industry is one example. Several automatic surface-inspection systems are used in the steel industry to identify various types of defects and to help operators decide whether to accept, reroute, or downgrade the material, subject to the assessment process. This paper focuses on promoting a strategy that considers all defects in an integrated fashion. It does this by managing the uncertainty about the exact position of a defect due to different process conditions by means of Gaussian additive influence functions. The relevance of the approach lies in making consistency and reliability between surface inspection systems possible. The results obtained are an increase in confidence in the automatic inspection system and the ability to introduce improved prediction and advanced routing models. The prediction is provided to technical operators to help them in their decision-making process; it shows the improvement gained by reducing the 40 % of coils that are downgraded at the hot strip mill because of specific defects. In addition, this technology facilitates an increase of 50 % in the accuracy of the estimate of defect survival after the cleaning facility in comparison to the former approach. The proposed technology is implemented by means of software-based, multi-agent solutions. It makes possible the independent treatment of information, presentation, quality analysis, and other relevant functions.
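A hedged toy example of additive Gaussian influence functions is given below: each detected defect contributes a Gaussian centred at its reported position, and the contributions are summed into a severity profile along the coil; widths, weights and positions are invented, and this is not the paper's exact model.

```python
import numpy as np

def influence_profile(coil_length_m, defects, step_m=0.1):
    """Sum Gaussian influence functions; defects = [(position_m, weight, sigma_m)]."""
    x = np.arange(0.0, coil_length_m, step_m)
    profile = np.zeros_like(x)
    for pos, weight, sigma in defects:
        profile += weight * np.exp(-0.5 * ((x - pos) / sigma) ** 2)
    return x, profile

# three hypothetical defect detections along a 600 m coil
x, p = influence_profile(600.0, [(120.0, 1.0, 5.0), (123.0, 0.6, 8.0), (410.0, 0.8, 3.0)])
print(f"peak influence {p.max():.2f} at {x[p.argmax()]:.1f} m")
```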
Abstract:
It is well known that the evaluation of the influence matrices in the boundary-element method requires the computation of singular integrals. Quadrature formulae exist which are especially tailored to the specific nature of the singularity, i.e. log(x − x0), 1/(x − x0), etc. Clearly the nodes and weights of these formulae vary with the location x0 of the singular point. A drawback of this approach is that a given problem usually includes different types of singularities, and therefore a general-purpose code would have to include many alternative formulae to cater for all possible cases. Recently, several authors [1-3] have suggested a type-independent alternative technique based on the combination of standard Gaussian rules with non-linear co-ordinate transformations. The transformation approach is particularly appealing in connection with the p-adaptive version, where the location of the collocation points varies at each step of the refinement process. The purpose of this paper is to analyse the technique in reference 3. We show that this technique is asymptotically correct as the number of Gauss points increases. However, the method possesses a 'hidden' source of error that is analysed and can easily be removed.
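The sketch below illustrates the transformation idea under discussion with the textbook endpoint-singularity cubic (a Telles-type map whose Jacobian vanishes at the singular point), applied to a log-singular integrand with a standard Gauss-Legendre rule; it is not the exact formula of reference 3.

```python
import numpy as np

def gauss_log_endpoint(f, n):
    """Integrate f on [-1, 1] with a log singularity at x = -1 via a cubic map."""
    eta, w = np.polynomial.legendre.leggauss(n)
    x = -1.0 + (eta + 1.0) ** 3 / 4.0          # x(-1) = -1, x(1) = 1, x'(-1) = 0
    jac = 3.0 * (eta + 1.0) ** 2 / 4.0         # vanishing Jacobian damps the singularity
    return np.sum(w * jac * f(x))

f = lambda x: np.log(1.0 + x)
exact = 2.0 * np.log(2.0) - 2.0                # integral of log(1+x) over [-1, 1]
for n in (4, 8, 16):
    eta, w = np.polynomial.legendre.leggauss(n)
    plain = np.sum(w * f(eta))                 # untransformed Gauss-Legendre
    print(f"n={n:2d}  plain error {plain - exact:+.2e}  "
          f"transformed error {gauss_log_endpoint(f, n) - exact:+.2e}")
```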
Abstract:
The numerical strategies employed in the evaluation of singular integrals existing in the Cauchy principal value (CPV) sense are, undoubtedly, one of the key aspects which remarkably affect the performance and accuracy of the boundary element method (BEM). Thus, a new procedure, based upon a bi-cubic co-ordinate transformation and oriented towards the numerical evaluation of both the CPV integrals and some others which contain different types of singularity, is developed. Both the ideas and some details involved in the proposed formulae are presented, obtaining rather simple and attractive expressions for the numerical quadrature which are also easily embodied into existing BEM codes. Some illustrative examples which assess the stability and accuracy of the new formulae are included.
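For comparison, a short sketch of CPV evaluation by the classical singularity-subtraction device (regularize with f(x0), integrate the smooth remainder with ordinary Gauss-Legendre, and add the analytic log term) is given below; it illustrates CPV integrals in general rather than the bi-cubic transformation proposed here.

```python
import numpy as np

def cpv(f, a, b, x0, n=32):
    """CPV of the integral of f(x)/(x - x0) over [a, b], with a < x0 < b."""
    eta, w = np.polynomial.legendre.leggauss(n)
    x = 0.5 * (b - a) * eta + 0.5 * (a + b)
    # (f(x) - f(x0))/(x - x0) has a removable singularity; the Gauss nodes do
    # not coincide with x0 in general, so it can be evaluated directly.
    regular = (f(x) - f(x0)) / (x - x0)
    return 0.5 * (b - a) * np.sum(w * regular) + f(x0) * np.log((b - x0) / (x0 - a))

print(cpv(np.exp, 0.0, 2.0, 1.0))   # CPV of exp(x)/(x - 1) over [0, 2]
```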
Abstract:
This Thesis presents a theoretical analysis of the operation of magnetic nozzles for plasma space propulsion. The study is based on a two-dimensional, two-fluid model of the supersonic expansion of a hot plasma in a divergent magnetic field. The basic model is extended progressively to include the dominant electron convective terms, the plasma-induced magnetic field, multi-temperature electron populations, and the capability to integrate the plasma flow in the far expansion region. The hyperbolic plasma response is integrated accurately and efficiently with the method of the characteristic lines. The 2D plasma expansion is characterized parametrically in terms of the ion magnetization strength, the magnetic field geometry, and the initial plasma profile. Acceleration mechanisms are investigated, showing that the ambipolar electric field converts the internal electron energy into directed ion energy. The diamagnetic electron Hall current, which can be distributed in the plasma volume or localized in a thin current sheet at the jet edge, is shown to be central for the operation of the magnetic nozzle. The repelling magnetic force on this current is responsible for the radial confinement and axial acceleration of the plasma, and magnetic thrust is the reaction to this force on the magnetic coils of the thruster. The plasma response exhibits a gradual inward separation of the ion streamtubes from the magnetic streamtubes, which focuses the jet about the nozzle axis, gives rise to the formation of longitudinal currents and sets the plasma into rotation. The obtained thrust gain in the magnetic nozzle and radial plasma losses are evaluated as a function of the design parameters. The downstream plasma detachment from the closed magnetic field lines, required for the propulsive application of the magnetic nozzle, is investigated in detail. Three prevailing detachment theories for magnetic nozzles, relying on plasma resistivity, electron inertia, and the plasma-induced magnetic field, are shown to be inadequate for the propulsive magnetic nozzle, as these mechanisms detach the plume outward, increasing its divergence, rather than focusing it as desired. Instead, plasma detachment is shown to occur essentially due to ion inertia and the gradual demagnetization that takes place downstream, which enable the unbounded inward ion separation from the magnetic lines beyond the turning point of the outermost plasma streamline under rather general conditions.
The plasma fraction that remains attached to the field and turns around along the magnetic field back to the thruster is evaluated and shown to be marginal. The plasma-induced magnetic field is shown to increase the divergence of the nozzle and the resulting plasma plume in the propulsive case, and to enhance the demagnetization of the central part of the plasma jet, contrary to existing predictions. The increased demagnetization favors the earlier inward separation of the ions from the magnetic field. The local current ambipolarity assumption, common to many existing magnetic nozzle models, is critically discussed, showing that it is unsuitable for the study of plasma detachment. A grave mathematical inconsistency in a well-accepted model, related to the acceptance of this assumption, is identified and commented on. The formation and 2D shape of electric double layers in the plasma expansion is studied with the inclusion of an additional suprathermal electron population in the model. When a double layer forms, its curvature is shown to increase the more peripherally the suprathermal electrons are injected, the lower the magnetic field strength, and the more divergent the magnetic nozzle is. The two-electron-temperature plasma is seen to have a greater magnetic-to-total thrust ratio. Notwithstanding, no propulsive advantage of the double layer is found, supporting and reinforcing previous critiques of its proposal as a thrust mechanism. Finally, a general framework of self-similar models of a 2D unmagnetized plasma plume expansion into vacuum is presented and discussed. The error associated with the self-similarity assumption is calculated and shown to be small for hypersonic plasma plumes. Three models from the literature are recovered as particularizations of the general framework and compared.
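A hedged back-of-the-envelope sketch of the ambipolar acceleration mechanism is given below: with isothermal Boltzmann electrons and cold ions, the ion momentum equation integrates to (1/2)·m_i·(u² − u0²) = T_e·ln(n0/n), so the ambipolar field converts electron thermal energy into directed ion energy as the density drops; the propellant and numbers are illustrative, not the thesis' cases.

```python
import numpy as np

E_CHARGE = 1.602e-19            # elementary charge [C]
M_ION    = 131.29 * 1.661e-27   # xenon ion mass [kg] (assumed propellant)

def ion_velocity(te_ev, expansion_ratio, u0=None):
    """Ion velocity after the plasma density has dropped by n0/n = expansion_ratio."""
    te_j = te_ev * E_CHARGE
    u0 = np.sqrt(te_j / M_ION) if u0 is None else u0   # start at the sonic point
    return np.sqrt(u0**2 + 2.0 * te_j * np.log(expansion_ratio) / M_ION)

for ratio in (10, 100, 1000):
    print(f"n0/n = {ratio:5d}: u ~ {ion_velocity(20.0, ratio) / 1e3:.1f} km/s")
```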