19 results for Current generation
at Universidad Politécnica de Madrid
Abstract:
Satellite operators are starting to use the Ka-band (30/20 GHz) for communications systems requiring higher traffic capacity. The use of this band is expected to grow significantly in the next few years, as several operators have announced plans to launch new satellites with Ka-band capacity. Notable examples are the Ka-Sat satellite in Europe, launched in 2010, and ViaSat-1, launched in 2011, with coverage of the USA; other examples can be found in other parts of the world. Recent satellite communications standards, such as DVB-S2 or DVB-RCS, which provide means to mitigate propagation impairments, have been developed with the objective of improving the use of the Ka-band in comparison with previous technical standards. In the coming years, the ALPHASAT satellite will bring new opportunities for carrying out propagation and telecommunication experiments in the Ka- and Q/V-bands. Commercial uses are focused on the provision of high-speed data communications, for Internet access and other applications. In the near future, ever higher data rates are also expected to be needed to broadcast richer multimedia content, including HD-TV, interactive content or 3D-TV. All of these services may be provided in the future by satellites of the current generation, whose life span can extend up to 2025 in some cases. Depending on local regulations, the available bandwidth for the satellite fixed and broadcasting services in the Ka-band is in excess of several hundred MHz, bidirectional, comprising more than 1 GHz for each sub-band in some cases. In this paper, the results of a propagation experiment being carried out at Universidad Politécnica de Madrid (UPM), Spain, are presented. The objective of the experiment is twofold: to gather experimental time series of attenuation and to analyze them in order to characterize the propagation channel at these frequencies. The experiment and statistical results correspond to five complete years of measurements. The experiment is described in more detail in Section II. Yearly characteristics of rain attenuation are presented in Section III, whereas Section IV is dedicated to the monthly, seasonal, and hourly characteristics. Section V covers the dynamic characteristics of this propagation effect, before the conclusions are presented in Section VI.
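The statistical characterization mentioned above reduces, in essence, to estimating the percentage of time each attenuation level is exceeded in the measured time series. The following is a minimal sketch of that computation; the synthetic data, sampling interval and threshold grid are placeholders, not values from the UPM experiment.

    import numpy as np

    def exceedance_curve(attenuation_db, thresholds_db):
        """Percentage of time each attenuation threshold is exceeded (the CCDF)."""
        att = np.asarray(attenuation_db)
        return np.array([100.0 * np.mean(att > t) for t in thresholds_db])

    # Synthetic placeholder: one year of 1-minute excess-attenuation samples [dB].
    rng = np.random.default_rng(0)
    attenuation = rng.lognormal(mean=-1.0, sigma=1.0, size=365 * 24 * 60)

    thresholds = np.arange(0.0, 20.0, 0.5)          # dB
    ccdf = exceedance_curve(attenuation, thresholds)

    # Attenuation exceeded for 0.01% of the year (a typical design percentile).
    a_001 = np.interp(0.01, ccdf[::-1], thresholds[::-1])
    print(f"A(0.01%) ≈ {a_001:.1f} dB")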
Abstract:
This thesis analyzes the criteria with which concrete structures were designed and built up to 1973, the year of the Spanish EH-73 code, which, in content, format and approach, consolidated the criteria that have been in use in the modern sense ever since; it is also heir to the 1970 CEB recommendations. Those years mark the change of approach from the Classic Theory to the Limit States method. The objectives pursued are, in short: 1) To fill an evident gap in the study of the evolution of knowledge. There are treatises on the history of concrete that cover very thoroughly the account of personalities and achievements, but not, at least not sufficiently, the evolution of knowledge. 2) To help today's engineers understand the structural configurations, geometries, reinforcement layouts, safety formats, etc. used in the past, which will support better-founded preliminary assessments of existing structures. 3) To serve as a reference for studies assessing the load-bearing capacity of existing constructions, forming the basis of a pre-normative document oriented in that direction. Indeed, this thesis aims to help today's engineers who face the need to preserve and repair reinforced concrete structures that form part of the inherited building stock. The great majority of those structures were built more than 40 years ago, so it is necessary to know the criteria that governed their design, calculation and construction. The thesis seeks to determine the ultimate limits, and therefore the safety margins, of structures dimensioned with the criteria of the past when they are analyzed with current calculation methodology. In this way, the "real" margin of structures dimensioned and calculated with criteria "different" from the current ones can be determined. Knowing how structures built under the Classic Theory behave according to current criteria will allow today's engineer to deal in the most appropriate way with the range of needs that may arise in an existing structure. This work focuses on the evolution of knowledge, so construction processes are not included. Regarding design criteria, until the middle of the 20th century these were strongly influenced by tests and the resulting publications of individual authors, on which the regulations of some countries were based: such was the case of the Prussian code of 1904, the French Circular Order of 1906 and the Liège Congress of 1930. From the second half of the 20th century, the contributions of Spanish engineers such as Alfredo Páez Balaca, Eduardo Torroja and Pedro Jiménez Montoya, among others, stand out; they allowed the calculation and safety criteria for concrete structures to advance to those known today. The guiding criterion for the design of concrete structures was founded, as is well known, on the postulates of the Classic Theory, in particular on the "critical moment", the one at which concrete and steel reach their admissible stresses and which therefore ensures the maximum exploitation of the materials and, without consciously seeking it, the maximum ductility. If the applied moment is greater than the critical one, compression reinforcement is provided.
After studying many existing structures of the period, including the Official Bridge Collections of Juan Manuel de Zafra, Eugenio Ribera and Carlos Fernández Casado, the author of this thesis concludes that their geometric definition does not correspond exactly to the one resulting from the critical moment, since, then as now, it was necessary to reconcile the reinforcement criteria at section level with the arrangement of the reinforcement along the different structural elements. The calculation parameters, material strengths and safety formats evolved over the years. The performance of the materials became better known, experience of the construction processes themselves grew and, to a lesser extent, so did knowledge of the applied actions; the associated uncertainties were consequently narrowed, which allowed the safety coefficients used in calculation to be progressively adjusted. For example, a safety coefficient of 4 was used for concrete at the end of the 19th century, which evolved to 3.57 after the publication of the French Circular Order of 1906 and to 3 after the Spanish Instruction of 1939. In the case of steel, a far better known material because it had been used extensively before, the safety coefficient remained almost constant over the years, with a value of 2. Another cause of the evolution of the calculation parameters was the better understanding of structural behavior gained through the vast task of planning and carrying out tests, with the corresponding theoretical studies, performed by numerous authors, mainly Austrian and German but also American and French. As regards calculation criteria, today's engineer may be surprised by how well the behavior of concrete was understood from the earliest years of its use. Engineers knew that concrete behaves non-linearly, but they limited its working range to a linear stress-strain regime because this guaranteed a prediction of structural behavior consistent with the hypotheses of Linear Elasticity and Strength of Materials, very well known at the beginning of the 20th century (this was not the case with the theory of Plasticity, still unformulated, although it was implicit in the approaches of some engineers specialized in masonry (stone or brick) and steel structures). In addition, this made the design somewhat independent of the actual strengths of the materials, freeing designers from tests that, in practice, could hardly be carried out owing to the scarcity of laboratories. Nor did they have computer programs or any of today's facilities that would have allowed them to make the concrete work in a non-linear range. So, wisely and prudently, they limited the stresses and strains of the material to a known range. The modus operandi followed in preparing this thesis has been the following: -Documentary study: author documents, recommendations and regulations produced in this field, both in Spain and internationally, have been studied systematically according to the index of the document. In this process, gaps in knowledge (and their effect on structural safety, where applicable) have been detected, and the differences with respect to today's procedures have been identified.
It has also been necessary to adapt the notation and terminology of the period to current criteria, which has been an added difficulty. -Development of the document: from the preliminary study, the following documents, which make up the content of the thesis, have been developed: o People and institutions relevant for their contributions to the knowledge of concrete structures (research, regulation, teaching). o Characterization of the mechanical properties of the materials (concrete and reinforcement), in relation to their strengths, stress-strain diagrams, moduli of deformation, moment-curvature diagrams, etc. This includes the classic characterization of concretes, the geometry and nature of the reinforcement, etc. o Safety formats: a complex chapter from which enough information is extracted to allow today's engineers to understand the criteria used then and to compare them with the current ones. o Study of sections and members subjected to normal and tangential stresses: the aim is to present the evolution in the treatment of simple and combined bending, shear, longitudinal shear, torsion, etc. This part of the study also deals with aspects that, although not a direct concern of engineers in the past (cracking and deflections), are more important today in view of changes of use and durability conditions. o Reinforcement detailing: this includes the treatment of bond, anchorage, bar lapping, bar curtailment, reinforcement arrangements as a function of member geometry and internal forces, etc. It is a chapter of obvious importance for today's engineers. An annex is included with the most significant references to the experimental studies on which the milestone proposals in the evolution of knowledge were based. Finally, together with the most important conclusions, proposals for future studies are set out. This thesis analyzes the criteria with which reinforced concrete structures were designed and constructed prior to 1973. Initially, the year 1970 was chosen as the starting point, coinciding with the CEB recommendations, but as the thesis developed it was decided that 1973 was the better option, coinciding with the Spanish regulations of 1973, whose content, format and approach introduced the criteria in use today. The studied period covers the Classic Theory. The goals of this thesis are: 1) To cover a clear gap in the study of the evolution of knowledge about reinforced concrete. The achievements of reinforced concrete itself have been treated very thoroughly by the main researchers in this area, but not the evolution of knowledge in this subject. 2) To help engineers understand structural configurations, geometries, reinforcement layouts, safety formats, etc., which will support experts' preliminary judgments on existing structures. 3) To be a reference for studies assessing the load-bearing capacity of existing constructions, constituting the basis of a pre-normative document. This thesis intends to be a help for the current generation of engineers who need to preserve and repair reinforced concrete structures that have existed for a significant number of years. Most of the structures in question were constructed more than 40 years ago, and it is necessary to know the criteria that influenced their design, calculation and construction.
This thesis intends to determine the safety limits of these old structures and to analyze them in the context of current regulations and their methodology. It will then be possible to determine the safety of these structures when assessed and calculated with current criteria, which will allow engineers to choose the most appropriate treatment for such a structure. This work deals with the evolution of knowledge, so construction methods are not included. Regarding design criteria, until the middle of the 20th century there existed a large number of diverse European tests and regulations, such as the Prussian code of 1904, the French Circular Order of 1906 and the Congress of Liège of 1930, as well as individual engineers' own notes and criteria, which incorporated the results of their own tests. From the second half of the 20th century, the contributions of Spanish engineers such as Alfredo Páez Balaca, Eduardo Torroja and Pedro Jiménez Montoya, among others, were significant and allowed the advancement of the calculation and safety criteria for concrete structures, many of which are still in use today. The design and calculation of reinforced concrete structures under the Classic Theory was based on the "critical bending moment", at which concrete and steel simultaneously reach their admissible stresses, which allows the best use of the materials and the best ductility. If the applied bending moment is greater than the critical bending moment, compression steel must be introduced. After studying the designs of many existing structures of that time, including the Historical Collections of Juan Manuel de Zafra, Eugenio Ribera and Carlos Fernández Casado, the author of this thesis concludes that the geometric definition of the structures does not correspond exactly with the critical bending moment inherent in them. The parameters of these calculations changed over the years. The principal reason is that the materials improved gradually and the uncertainties in the calculations decreased, allowing a reduction of the safety coefficients used in the calculation. For example, a coefficient of 4 was used for concrete towards the end of the 19th century, which evolved to 3.57 after the publication of the French Circular Order of 1906, and then to 3 after the Spanish Instruction of 1939. In the case of steel, a much better known material, the safety coefficient remained almost constant throughout the years, with a value of 2. A further reason for the evolution of the calculation parameters was that the tests and research undertaken by an ever-increasing number of engineers allowed a more complete knowledge of the behavior of reinforced concrete. What is surprising is the extent of knowledge that existed about the behavior of concrete from the outset. Engineers of the early years knew that the behavior of concrete was non-linear, but they limited its working range to a linear stress-strain regime. This was due to the difficulty of working in a non-linear range: they had neither laboratories to test concrete nor facilities such as computers with appropriate software, something unthinkable today. These were the main reasons why engineers of previous generations limited the stresses and strains of the material to a known range.
The modus operandi followed for the development of this thesis is as follows: -Document study: engineers' documents, recommendations and regulations generated in this area, both in Spain and abroad, have been studied systematically in accordance with the index of the document. In this process, gaps in knowledge concerning structural safety have been detected, and differences with respect to current procedures have been identified and noted. It has also been necessary to adapt the notation and terminology of the Classic Theory to current criteria, which has imposed an additional difficulty. -Development of the thesis: starting from the preliminary study, the following chapters of this thesis have been developed: o People and institutions relevant for their contribution to the knowledge of reinforced concrete structures (research, regulation, teaching). o Determination of the mechanical properties of the materials (concrete and steel), in relation to their strengths, stress-strain diagrams, moduli of deformation, moment-curvature diagrams, etc. Included are the classic characterizations of concrete, the geometry and nature of the steel, etc. o Safety formats: a difficult chapter from which it is intended to provide enough information to allow the present-day engineer to understand the criteria used in the Classic Theory and to compare them with current ones. o Study of sections and members subjected to normal and tangential stresses: it presents the evolution in the treatment of simple and combined bending, shear, etc. Aspects that were of little concern in the Classic Theory but are important today, such as deflections and cracking, are also examined. o Details of reinforcement: it includes the treatment of bond, anchorage, lapping of bars, curtailment of bars, and reinforcement arrangements depending on the geometry of the members and the internal forces, etc. It is a chapter of obvious importance for current engineers. The document includes an annex with references to the most significant experimental studies on which the milestone proposals in the evolution of knowledge in this area were based. Finally, conclusions and suggestions for future studies are included. A deep study of the documentation and researchers of that time has been carried out, juxtaposing their criteria and results with those considered relevant today, and comparing the resulting safety levels according to the Classic Theory criteria and the currently used criteria. This thesis fundamentally intends to be a guide for engineers who have to treat or repair a structure built according to the Classic Theory criteria.
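The admissible-stress logic of the Classic Theory summarized above can be illustrated with a short sketch of an elastic, cracked-section check of a singly reinforced rectangular section using the historical safety coefficients quoted in the abstract (3 for concrete, 2 for steel). The section dimensions, material strengths and bending moment below are invented for illustration and are not taken from the thesis.

    import math

    def cracked_section_stresses(M, b, d, As, n=15.0):
        """Elastic (Classic Theory) stresses in a singly reinforced rectangular section.

        M  : service bending moment [N·mm]
        b  : section width [mm],  d : effective depth [mm]
        As : tension steel area [mm²], n : modular ratio Es/Ec (n = 15 was customary)
        """
        # Neutral-axis depth x from cracked-section equilibrium: b·x²/2 = n·As·(d - x)
        x = (-n * As + math.sqrt((n * As) ** 2 + 2 * b * n * As * d)) / b
        z = d - x / 3.0                      # lever arm of the internal couple
        sigma_c = 2.0 * M / (b * x * z)      # peak concrete compressive stress
        sigma_s = M / (As * z)               # steel tensile stress
        return sigma_c, sigma_s

    # Illustrative check with the historical safety coefficients quoted above.
    fc, fs = 20.0, 240.0                      # assumed strengths [MPa]
    adm_c, adm_s = fc / 3.0, fs / 2.0         # admissible stresses (coefficients 3 and 2)
    sc, ss = cracked_section_stresses(M=60e6, b=300, d=500, As=1500)
    print(f"concrete {sc:.1f} <= {adm_c:.1f} MPa: {sc <= adm_c}")
    print(f"steel    {ss:.1f} <= {adm_s:.1f} MPa: {ss <= adm_s}")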
Abstract:
The energy sector, in Spain in particular and similarly in the main European countries, has significant generation overcapacity, due to the rapid and substantial growth of renewable energy over the last ten years and to the reduction in energy demand caused by the economic crisis. As a result, thermal power plants, and in particular combined cycle gas plants, are operating with an extremely low capacity factor, on the order of 10%. Besides the loss of revenue, this means the plants work continuously away from their design point, causing a significant loss of efficiency and higher operating costs. In this scenario, any contribution that helps to improve the efficiency and the condition of the equipment is positively regarded. Asset management is gaining relevance as a multidisciplinary, integrated process, as reflected in the recent publication of the ISO 55000:2014 standards. As a global, integrated process, asset management requires handling diverse processes and large volumes of information, even in real time, which calls for information technologies and software applications. This thesis develops an integrated asset management concept (Integrated Plant Management, IPM) applied to combined cycle power plants and a methodology to estimate the benefit it provides. Given the uncertainties associated with estimating that benefit, a probabilistic cost-benefit analysis has been chosen. The quantitative analysis has also been complemented with a qualitative validation of the benefit provided by the technologies incorporated in the integrated asset management concept, based on interviews with experts from the power generation sector. The results of the cost-benefit analysis are positive even in the unfavourable scenario of a capacity factor of only 10%, and very promising for capacity factors above 30%. ABSTRACT The energy sector, particularly in Spain and similarly across Europe, has significant overcapacity due to the strong growth of renewable energy in the last ten years, and it is seriously affected by the decrease in demand caused by the economic crisis. That situation has forced thermal plants, and in particular combined cycles, to operate with extremely low annual average capacity factors, very close to 10%. Apart from the reduction in income, working in off-design conditions means worse performance and higher costs than expected. In this scenario, anything that can be done to improve the efficiency and the equipment condition is positively received. Asset Management, as a multidisciplinary and integrated process, is gaining prominence, as reflected in the recent publication of the ISO 55000 series in 2014. Managing assets as a global, integrated process involves handling several processes and significant volumes of information, sometimes in real time, which requires information technologies and software applications to support it. This thesis proposes an integrated asset management concept (Integrated Plant Management, IPM) applied to combined cycle power plants and develops a methodology to assess the benefit that it can provide. Given the difficulty of obtaining a deterministic benefit estimation, a statistical approach has been adopted for the cost-benefit analysis.
In addition, the quantitative analysis has been complemented with a qualitative validation, by power generation sector experts, of the technologies included in the IPM and of their contribution to key power plant challenges. The cost-benefit analysis yields positive results even in the unfavourable scenario of an annual average capacity factor close to 10%, and is promising for capacity factors above 30%.
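A probabilistic cost-benefit analysis of the kind described can be sketched as a simple Monte Carlo simulation. All distributions, cost figures and the benefit model below are invented placeholders chosen only to show the mechanics, not results or data from the thesis.

    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000                                  # Monte Carlo samples

    # Hypothetical annual benefit model of an IPM-like concept for one plant (EUR/year):
    # benefit grows with the capacity factor because more running hours expose more savings.
    capacity_factor = rng.triangular(0.05, 0.10, 0.35, n)   # uncertain utilization
    heat_rate_gain  = rng.normal(0.5e6, 0.2e6, n)           # efficiency-related savings at full use
    o_and_m_gain    = rng.normal(0.3e6, 0.1e6, n)           # maintenance/availability savings
    annual_benefit  = capacity_factor / 0.5 * (heat_rate_gain + o_and_m_gain)

    annual_cost = rng.normal(0.25e6, 0.05e6, n)             # licences, integration, support

    net = annual_benefit - annual_cost
    print(f"P(positive net benefit) = {np.mean(net > 0):.2%}")
    print(f"median net benefit      = {np.median(net) / 1e6:.2f} MEUR/year")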
Abstract:
Predictions about electric energy needs, based on current electric energy models, forecast that global energy consumption on Earth in 2050 will be double present rates. With distributed control and integration procedures, the expected needs can be halved, which makes the implementation of Smart Grids necessary. Interaction between final consumers and utilities is a key factor of future Smart Grids, aimed at achieving efficient and responsible energy consumption. Energy Residential Gateways (ERG) are new in-building devices that will govern the communication between user and utility and will control electric loads. Utilities will offer new services empowering residential customers to lower their electric bill, such as Smart Metering, Demand Response and Dynamic Pricing. This paper presents a practical development of an ERG for residential buildings.
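As a toy illustration of the dynamic-pricing and demand-response services an ERG could support, the sketch below postpones deferrable loads when the tariff is high. The load names, price feed and threshold are hypothetical, not part of the paper's design.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Load:
        name: str
        power_kw: float
        deferrable: bool      # can the gateway postpone it?

    def demand_response(loads: List[Load], price_eur_kwh: float, price_threshold: float = 0.25):
        """Very simplified dynamic-pricing rule: when the tariff is above the
        threshold, deferrable loads are postponed; essential loads stay on."""
        if price_eur_kwh <= price_threshold:
            return loads
        return [ld for ld in loads if not ld.deferrable]

    home = [Load("fridge", 0.15, False),
            Load("water_heater", 2.0, True),
            Load("ev_charger", 3.7, True)]

    kept = demand_response(home, price_eur_kwh=0.32)
    print("kept on:", [ld.name for ld in kept],
          "| demand:", sum(ld.power_kw for ld in kept), "kW")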
Abstract:
This paper describes a practical activity, part of a renewable energy course, in which the students have to build their own complete wind generation system, including blades, PM generator, power electronics and control. After connecting it to the electric grid, the system has been tested under real wind conditions. The paper describes the electrical part of the work: the design criteria for the surface-mounted permanent magnet machine, as well as the power electronics for power control and grid connection. A Kalman filter is used for grid voltage phase estimation, and current commands are obtained in order to control active and reactive power. The grid connection has been carried out, and active and reactive power have been measured in the system.
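The control chain described (Kalman-filter phase estimation followed by current commands for active and reactive power) can be illustrated, in a simplified way, by the standard mapping from P/Q references to dq current set points in a grid-voltage-oriented frame. The sketch below assumes the phase has already been estimated and uses illustrative numbers; it is not the authors' implementation.

    import math

    def dq_current_references(p_ref_w, q_ref_var, v_grid_peak):
        """Current set points in a grid-voltage-oriented dq frame.

        With the d-axis aligned to the grid voltage (vq = 0):
            P = 3/2 * vd * id      Q = -3/2 * vd * iq
        so the commands follow directly from the P/Q references.
        """
        id_ref = 2.0 * p_ref_w / (3.0 * v_grid_peak)
        iq_ref = -2.0 * q_ref_var / (3.0 * v_grid_peak)
        return id_ref, iq_ref

    # Illustrative numbers: 1 kW injection at unity power factor, 230 V rms grid.
    vd = 230.0 * math.sqrt(2.0)          # peak phase voltage taken as the d-axis value
    print(dq_current_references(1000.0, 0.0, vd))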
Abstract:
Current trends in the European Higher Education Area (EHEA) are moving towards continuous evaluation of students, replacing the traditional evaluation based on a single test or exam. This, together with the increase in the number of students in Engineering Schools in recent years, requires modifying evaluation procedures to make them compatible with educational and research activities. This work presents a methodology for the automatic generation of questions, which can be used as self-assessment questions by the student and/or as test questions by the teacher. The proposed approach is based on parametric questions, formulated as multiple-choice questions and generated and supported by common spreadsheet and word-processing programs. In this way, every teacher can apply the proposed methodology without using programs or tools other than those normally used in his or her daily activity.
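The same parametric-question idea, implemented in the paper with ordinary spreadsheet and word-processing programs, can be sketched in a few lines of code. The question template, parameter sets and distractor rules below are hypothetical examples.

    import random

    TEMPLATE = ("A resistor of R = {R} ohm carries a current I = {I} A. "
                "What is the dissipated power?")

    def make_question(rng):
        """Build one parametric multiple-choice question with plausible distractors."""
        R = rng.choice([10, 22, 47, 100])
        I = rng.choice([0.5, 2.0])
        correct = I ** 2 * R                                   # P = I^2 * R
        options = [correct, 2 * correct, correct / 2, correct + R]   # distinct by construction
        rng.shuffle(options)
        return {"text": TEMPLATE.format(R=R, I=I),
                "options": [f"{o:g} W" for o in options],
                "answer": options.index(correct)}

    rng = random.Random(1)
    for q in (make_question(rng) for _ in range(3)):
        print(q["text"], q["options"], "-> correct option:", q["answer"])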
Abstract:
Massive integration of renewable energy sources in the electrical power systems of remote islands is a subject of current interest. The increasing cost of fossil fuels, transport costs to isolated sites and environmental concerns are serious drawbacks to the use of conventional fossil fuel plants. In a weak electrical grid, as is typical of an island, if a large amount of conventional generation is replaced by renewable energy sources, power system safety and stability can be compromised in the case of large grid disturbances. In this work, a model for transient stability analysis of an isolated electrical grid fed exclusively by a combination of renewable energy sources has been studied. This new generation model will be installed on El Hierro Island, Spain. Additionally, an operation strategy to coordinate the generation units (wind, hydro) is established. Attention is given to the assessment of the inertial energy and reactive current needed to guarantee power system stability against large disturbances. The effectiveness of the proposed strategy is shown by means of simulation results.
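The role of inertial energy stressed above can be illustrated with a toy single-machine swing-equation model: after losing part of the generation, a low-inertia island grid reaches a low frequency much faster than a high-inertia one. The inertia constants, damping and disturbance size below are assumed values, not data from the El Hierro study.

    def frequency_after(t, h, dp_pu, d_pu=1.0, f0=50.0, dt=0.001):
        """Frequency t seconds after losing dp_pu of generation (single-machine swing model).

        2H * d(df/f0)/dt = -dP - D * (df/f0); primary reserves are ignored, so only
        the first instants are meaningful, enough to compare high- vs low-inertia grids.
        """
        df = 0.0                                   # per-unit frequency deviation
        for _ in range(int(t / dt)):
            df += dt * (-dp_pu - d_pu * df) / (2.0 * h)
        return f0 * (1.0 + df)

    # Illustrative comparison: losing 10% of generation with high vs. low system inertia.
    print("after 1 s, H = 5 s:", round(frequency_after(1.0, 5.0, 0.10), 2), "Hz")
    print("after 1 s, H = 1 s:", round(frequency_after(1.0, 1.0, 0.10), 2), "Hz")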
Abstract:
We present an experimental study on the generation of high-peak-power short optical pulses from a fully integrated master-oscillator power-amplifier emitting at 1.5 μm. High-peak-power (2.7 W) optical pulses of short duration (100 ps) have been generated by gain switching the master oscillator under optimized driving conditions. The static and dynamic characteristics of the device have been studied as a function of the driving conditions. The ripples appearing in the power-current characteristics under CW conditions have been attributed to mode hopping between the master-oscillator resonant mode and the Fabry-Perot modes of the entire device cavity. Although compound-cavity effects have been shown to affect the static and dynamic performance of the device, we have demonstrated that trains of single-mode short optical pulses at gigahertz frequencies can be conveniently generated in these devices.
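Gain switching of the master oscillator can be illustrated with generic single-mode laser rate equations: a current step drives the carrier density above threshold before the photon field builds up, producing a short, intense optical spike. The parameters below are textbook-like placeholders, not the parameters of the device studied in the paper.

    import numpy as np

    # Generic single-mode rate equations, used only to illustrate gain switching:
    #   dN/dt = I/(qV) - N/tn - vg*g0*(N - Ntr)*S
    #   dS/dt = Gamma*vg*g0*(N - Ntr)*S - S/tp + beta*N/tn
    Q, V = 1.602e-19, 1e-16               # electron charge [C], active volume [m^3]
    TN, TP = 1e-9, 2e-12                  # carrier and photon lifetimes [s]
    VG, G0, NTR = 8.5e7, 2.5e-20, 1e24    # group velocity, gain coefficient, transparency density
    GAMMA, BETA = 0.3, 1e-4
    DT = 1e-14                            # integration step [s]

    def gain_switch(i_bias, i_pulse, t_pulse=0.5e-9, t_end=2e-9):
        N, S, photons = 1e24, 1e16, []
        for k in range(int(t_end / DT)):
            i = i_bias + (i_pulse if k * DT < t_pulse else 0.0)
            g = VG * G0 * (N - NTR)                       # material gain rate
            N += DT * (i / (Q * V) - N / TN - g * S)      # carrier density update
            S += DT * (GAMMA * g * S - S / TP + BETA * N / TN)   # photon density update
            photons.append(S)
        return np.array(photons)

    S = gain_switch(i_bias=5e-3, i_pulse=60e-3)
    print(f"photon density peaks about {S.argmax() * DT * 1e12:.0f} ps after the current step")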
Abstract:
The advantages of fast-spectrum reactors lie not only in the efficient use of fuel through the breeding of fissile material and the use of natural or depleted uranium, but also in the potential reduction of the amount of actinides, such as americium and neptunium, contained in the irradiated fuel. The first aspect means a guaranteed future nuclear fuel supply. The second is key for high-level radioactive waste management, because these elements are mainly responsible for the long-term radioactivity of the irradiated fuel. The present study analyzes the hypothetical deployment of a Gen-IV Sodium Fast Reactor (SFR) fleet in Spain. A fleet of fast reactors would enable a fuel cycle strategy different from the open cycle currently adopted by most countries with nuclear power. A transition from the current Gen-II fleet to a Gen-IV fleet is envisaged through an intermediate deployment of Gen-III reactors. Fuel reprocessing from the Gen-II and Gen-III Light Water Reactors (LWR) has been considered. In the so-called advanced fuel cycle, the reprocessed fuel used to produce energy will breed new fissile fuel and transmute minor actinides at the same time. A reference scenario has been postulated, and sensitivity studies have been performed to analyze the impact of the different parameters on the required reactor fleet. The potential capability of Spain to supply the required fleet for the reference scenario using national resources has been verified. Finally, some consequences for the final irradiated fuel inventory are assessed. Calculations are performed with the Monte Carlo transport-coupled depletion code SERPENT together with post-processing tools.
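The kind of scenario bookkeeping described (how many fast reactors a reprocessed-plutonium stock can start, and when) can be sketched with a toy mass balance. Every figure below is an illustrative placeholder, not a result of the study.

    # Toy fuel-cycle bookkeeping for an LWR -> SFR transition scenario.
    # All figures are illustrative placeholders, not the study's data.
    LWR_PU_PER_GWE_YEAR = 0.25   # t of plutonium discharged per GWe-year (rough order of magnitude)
    SFR_PU_FIRST_CORE   = 8.0    # t of plutonium needed to start one GWe-class SFR
    years = range(2025, 2051)

    lwr_capacity_gwe = 7.0       # assumed constant Gen-II/III LWR fleet
    pu_stock, sfrs = 30.0, 0     # initial reprocessed-Pu stock [t] and SFRs deployed

    for _ in years:
        pu_stock += lwr_capacity_gwe * LWR_PU_PER_GWE_YEAR   # reprocessed LWR discharges
        while pu_stock >= SFR_PU_FIRST_CORE:                 # commission SFRs when fuel allows
            pu_stock -= SFR_PU_FIRST_CORE
            sfrs += 1

    print(f"SFRs that could be started by 2050 with these assumptions: {sfrs}")
    print(f"residual separated plutonium: {pu_stock:.1f} t")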
Abstract:
Deorbit, power generation, and thrusting performances of a bare thin-tape tether and an insulated tether with a spherical electron collector are compared for typical conditions in low Earth orbit and common values of length L = 4-20 km and tether cross-sectional area A = 1-5 mm². The relative performance of moderately large spheres, as compared with bare tapes, improves but still lags as one moves from deorbiting to power generation and to thrusting: maximum drag in deorbiting requires maximum current and thus fully reflects on anodic collection capability, whereas extracting power at a load or using a supply to push current against the motional field requires reduced currents. The relative performance also improves as one moves to smaller A, which makes the sphere approach the limiting short-circuit current, and at greater L, with the higher bias only moderately affecting the already large bare-tape current. For a 4-m-diameter sphere, relative performances range from a 0.09 sphere-to-bare-tether drag ratio for L = 4 km and A = 5 mm² to a 0.82 thrust-efficiency ratio for L = 20 km and A = 1 mm². Extremely large spheres collecting the short-circuit current at zero bias at daytime (diameters of about 14 m for A = 1 mm² and 31 m for A = 5 mm²) barely outperform the bare tape for L = 4 km and are still outperformed by the bare tape for L = 20 km in both deorbiting and power generation; these large spheres perform like the bare tape in thrusting. In no case was the sphere or sphere-related hardware taken into account in evaluating system mass, which would have reduced the sphere performances even further.
Abstract:
It was recently suggested that the magnetic field created by the current of a bare tether strongly reduces its own electron-collection capability when a magnetic separatrix disconnecting the ambient magnetized plasma from the tether extends beyond its electric sheath. It is here shown that current reduction by the self-field depends on the ratio L*/Lt parameterizing the bias and current profiles along the tether (Lt tether length, L* characteristic length gauging ohmic effects) and on a new dimensionless number Ks involving ambient and tether parameters. Current reduction is weaker the lower Ks and L*/Lt, which depend critically on the type of cross section: Ks varies as R^(5/3) for wires, as h^(2/3)R for round tethers conductive only in a thin layer, and as h^(2/3)w^(1/4) for thin tapes (R radius, h thickness, w tape width); L* varies as R^(2/3) for wires and as h^(2/3) for tapes and round tethers conductive in a layer. Self-field effects are fully negligible for the last two types of cross section, whatever the mode of operation. In practical, efficient tether systems having low L*/Lt, the maximum current reduction in the case of wires is again negligible for power generation; for deorbiting, the reduction is below 1% for a 10 km tether and 15% for a 20 km tether. In the reboost mode there are no effects for Ks below some threshold; moderate effects may occur in practical but heavy reboost-wire systems that need no dedicated solar power.
Abstract:
The outstanding problem for useful applications of electrodynamic tethers is obtaining sufficient electron current from the ionospheric plasma. Bare tether collectors, in which the conducting tether itself, left uninsulated over kilometers of its length, acts as the collecting anode, promise to attain currents of 10 A or more from reasonably sized systems. Current collection by a bare tether is also relatively insensitive to drops in electron density, which are regularly encountered on each revolution of an orbit. This makes nighttime operation feasible. We show how the bare tether's high efficiency of current collection and ability to adjust to density variations follow from the orbital-motion-limited collection law of thin cylinders. We consider both upwardly deployed (power generation mode) and downwardly deployed (reboost mode) tethers, and present results that indicate how bare tether systems would perform as their magnetic and plasma environment varies in low Earth orbit.
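The orbital-motion-limited (OML) law invoked above fixes the electron current collected per unit length of a thin cylinder or tape. A minimal sketch is given below, assuming a uniform bias along the anodic segment and illustrative ionospheric conditions; real bare-tether bias varies along the length, so this only shows orders of magnitude.

    import math

    E = 1.602e-19          # electron charge [C]
    ME = 9.109e-31         # electron mass [kg]

    def oml_current_per_length(n_e, perimeter_m, bias_v):
        """OML electron current per unit length of a thin cylinder or tape [A/m].

        dI/dL = (e*Ne*p/pi)*sqrt(2*e*V/me), valid for bias well above the electron
        temperature and a cross section small compared with the Debye length.
        """
        return E * n_e * perimeter_m / math.pi * math.sqrt(2.0 * E * bias_v / ME)

    # Illustrative numbers (assumed, not from the paper): daytime ionospheric density,
    # a 2 cm wide tape, uniformly biased at 50 V over a 2.5 km anodic segment.
    n_e, width, bias, length = 1e12, 0.02, 50.0, 2500.0
    current = oml_current_per_length(n_e, 2 * width, bias) * length
    print(f"collected current ≈ {current:.0f} A")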
Abstract:
It has recently been suggested that the magnetic field created by the current in a bare tether could appreciably reduce its electron collection capability in the magnetised ionosphere by creating a region of closed magnetic surfaces that disconnects the cylinder from infinity. In this paper, the ohmic voltage drop along the tether is taken into account when considering self-field effects. Separate analyses are carried out for the thrust mode and for the power generation and drag modes of operation, which are affected in different ways. In the power generation and drag modes, bias decreases as current increases along the tether, starting at the anodic, positively biased end (the upper end in the usual, eastward-flying spacecraft); in the thrust mode of operation, bias increases as current increases along the tether, starting at the lower end. When the ohmic voltage drop is considered, self-field effects are shown to be weak, in all cases, for tape tethers and for circular cross-section tethers conductive only in a thin outer layer. Self-field effects might become important, in the drag case only, for tethers with fully conductive cross sections that are unrealistically heavy.
Abstract:
Performances of electrodynamic tethers using either spherical collectors or bare tethers for drag, thrust, or power generation are compared. The standard Parker-Murphy model of current to a full sphere, with neither space-charge nor plasma-motion effects considered, but modified to best fit TSS-1R results, is used (the Lam, Al'pert/Gurevich space-charge-limited model will be used elsewhere). In the analysis, the spherical collector is assumed to collect current well beyond its random-current value (thick sheath). Both the average current in the bare tether and the current to the sphere are normalized with the short-circuit current in the absence of applied power, allowing a comparison of performances for all three applications in terms of characteristic dimensionless numbers. The sphere is always substantially outperformed by the bare tether if ohmic effects are weak, though its performance improves as such effects increase.
Abstract:
Ambient Assisted Living (AAL) services are emerging as context-awareness solutions to support elderly people's autonomy. The context-aware paradigm makes applications more user-adaptive. In this way, context and user models expressed in ontologies are employed by applications to describe user and environment characteristics. The rapid advance of technology allows creating context servers that relieve applications of context reasoning tasks. Specifically, Next Generation Networks (NGN) provide, by means of the presence service, a framework to manage the user's current state as well as profile information extracted from Internet and mobile contexts. This paper proposes a user modeling ontology for AAL services that can be deployed in an NGN environment, with the aim of adapting their functionality to the elderly user's context information and state.
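A minimal sketch of the kind of user and context information such an ontology would capture is given below, written as plain data classes for illustration only; the class and property names are hypothetical and are not the ontology proposed in the paper.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class PresenceState:
        """Current state as it could be published through the NGN presence service."""
        status: str            # e.g. "at_home", "sleeping", "away"
        location: str          # room or geographic hint
        reachable_via: List[str] = field(default_factory=list)

    @dataclass
    class ElderlyUserProfile:
        """Illustrative subset of a user-model ontology for AAL services."""
        name: str
        age: int
        impairments: List[str]          # drives UI and service adaptation
        preferred_language: str
        presence: PresenceState

    user = ElderlyUserProfile(
        name="Maria", age=81, impairments=["low_vision"],
        preferred_language="es",
        presence=PresenceState(status="at_home", location="living_room",
                               reachable_via=["tv", "tablet"]))

    # An AAL service adapts its output modality to the profile and current state.
    modality = "voice" if "low_vision" in user.impairments else "text"
    print(f"Deliver reminder to {user.name} in {user.presence.location} via {modality}")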