53 results for Low threshold current densities
at Universidad Politécnica de Madrid
Abstract:
It was recently suggested that the magnetic field created by the current of a bare tether strongly reduces its own electron-collection capability when a magnetic separatrix disconnecting the ambient magnetized plasma from the tether extends beyond its electric sheath. It is here shown that current reduction by the self-field depends on the ratio L*/Lt parameterizing bias and current profiles along the tether (Lt tether length, L* characteristic length gauging ohmic effects) and on a new dimensionless number Ks involving ambient and tether parameters. Current reduction is weaker the lower Ks and L*/Lt, which depend critically on the type of cross section: Ks varies as R^(5/3), h^(2/3)R, and h^(2/3) × (width)^(1/4) for wires, round tethers conductive only in a thin layer, and thin tapes, respectively; L* varies as R^(2/3) for wires and as h^(2/3) for tapes and round tethers conductive in a layer (R radius, h thickness). Self-field effects are fully negligible for the last two types of cross sections whatever the mode of operation. In practical efficient tether systems having L*/Lt low, the maximum current reduction in the case of wires is again negligible for power generation; for deorbiting, the reduction is <1% for a 10 km tether and 15% for a 20 km tether. In the reboost mode there are no effects for Ks below some threshold; moderate effects may occur in practical but heavy reboost-wire systems that need no dedicated solar power.
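As a rough illustration of the cross-section scalings quoted above, the sketch below compares how the self-field number Ks grows with size for a solid wire (Ks ∝ R^(5/3)) and for a round tether conductive only in a thin layer (Ks ∝ h^(2/3)R). Only the scaling exponents come from the abstract; the reference dimensions are arbitrary assumed values and the common prefactor cancels in the ratios.

```python
# Relative growth of the self-field number Ks for two tether cross sections,
# using only the scaling laws quoted in the abstract:
#   solid wire:                          Ks ∝ R**(5/3)
#   round tether, thin conductive layer: Ks ∝ h**(2/3) * R
# The reference dimensions below are assumed illustrative values, not data
# from the paper; the unknown common prefactor cancels in the ratio.

R_ref, h_ref = 1.0e-3, 50e-6   # assumed: 1 mm radius, 50 µm layer thickness

def ks_wire(R):
    return R ** (5.0 / 3.0)

def ks_layer(R, h):
    return h ** (2.0 / 3.0) * R

for scale in (1, 2, 4):
    r = scale * R_ref
    print(f"R = {r*1e3:.1f} mm:  "
          f"wire Ks x{ks_wire(r)/ks_wire(R_ref):.2f},  "
          f"layered Ks x{ks_layer(r, h_ref)/ks_layer(R_ref, h_ref):.2f}")
```

Doubling the radius multiplies Ks by about 3.2 for a wire but only by 2 for the layered cross section, which is consistent with the weaker self-field sensitivity claimed for non-wire geometries.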
Abstract:
The metallization stack Ti/Pd/Ag on n-type Si has been widely used in solar cells due to its low metal/semiconductor specific contact resistance, very high sheet conductance, bondability, long-term durability, and cost-effectiveness. In this study, the use of Ti/Pd/Ag metallization on n-type GaAs is examined, targeting electronic devices that must handle high current densities through grid-like contacts with limited surface coverage (e.g., solar cells, lasers, or light-emitting diodes). Ti/Pd/Ag (50 nm/50 nm/1000 nm) metal layers were deposited on n-type GaAs by electron-beam evaporation, and the contact quality was assessed for different doping levels (from 1.3 × 10¹⁸ cm⁻³ to 1.6 × 10¹⁹ cm⁻³) and annealing temperatures (from 300 °C to 750 °C). The metal/semiconductor specific contact resistance, the metal resistivity, and the morphology of the contacts were studied. The results show that samples doped in the range of 10¹⁸ cm⁻³ had Schottky-like I–V characteristics, and only samples doped to 1.6 × 10¹⁹ cm⁻³ exhibited ohmic behavior even before annealing. For the ohmic contacts, increasing the annealing temperature decreases the specific contact resistance (ρ_c,Ti/Pd/Ag ~ 5 × 10⁻⁴ Ω·cm²). Regarding the metal resistivity, the Ti/Pd/Ag metallization presents very good metal conductivity for samples treated below 500 °C (ρ_M,Ti/Pd/Ag ~ 2.3 × 10⁻⁶ Ω·cm); however, for samples treated at 750 °C, the metal resistivity is strongly degraded due to morphological degradation and contamination of the silver overlayer. Compared with the classic AuGe/Ni/Au metal system, the Ti/Pd/Ag system shows a higher metal/semiconductor specific contact resistance and one order of magnitude lower metal resistivity.
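A quick order-of-magnitude check of what the reported figures imply for a grid-like contact (a minimal sketch: only ρ_c, ρ_M and the 1000 nm Ag thickness come from the abstract; the finger geometry is a hypothetical example):

```python
# Back-of-the-envelope figures implied by the reported contact parameters.
# rho_c and rho_m are taken from the abstract; the finger contact area below
# (100 µm x 5 µm) is a hypothetical example, not from the paper.

rho_c = 5e-4        # specific contact resistance, ohm*cm^2
rho_m = 2.3e-6      # metal resistivity (annealed below 500 C), ohm*cm
t_ag  = 1000e-7     # Ag layer thickness in cm (1000 nm)

# Sheet resistance of the 1 µm Ag overlayer
r_sheet = rho_m / t_ag                     # ohm per square
print(f"metal sheet resistance ~ {r_sheet*1e3:.0f} mohm/sq")        # ~23 mohm/sq

# Contact resistance of one assumed grid finger, 100 µm long x 5 µm wide
area = (100e-4) * (5e-4)                   # cm^2
r_contact = rho_c / area                   # ohm
print(f"contact resistance of one 100x5 µm finger ~ {r_contact:.0f} ohm")  # ~100 ohm
```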
Abstract:
We present experimental and numerical results on the transport of fast electron beams produced by intense laser pulses through aluminum samples, either solid or compressed and heated by laser-induced planar shock propagation. Thanks to absolute Kα yield measurements and their very good agreement with results from numerical simulations, we quantify the collisional and resistive fast-electron stopping powers: for electron current densities of ≈ 8 × 10¹⁰ A/cm² they reach 1.5 keV/µm and 0.8 keV/µm, respectively. For higher current densities, up to 10¹² A/cm², numerical simulations show resistive and collisional energy losses at comparable levels. Analytical estimates predict that the resistive stopping power will remain at the level of 1 keV/µm for electron current densities of 10¹⁴ A/cm², representative of the full-scale conditions in the fast ignition of inertially confined fusion targets.
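The resistive contribution can be recovered from a simple Ohm's-law estimate: the return current that neutralizes the beam drives a field E ≈ η·j_b, so a fast electron loses about e·E per unit path. A minimal sketch, where the background resistivity η is an assumed round number (of the order expected for warm solid-density aluminum), not a value quoted in the paper:

```python
# Resistive stopping power from a simple Ohm's-law estimate:
#   dE/dx ~ e * E = e * eta * j_b
# eta is an assumed ~1e-6 ohm*m background resistivity; j_b is the beam
# current density quoted in the abstract.

eta = 1.0e-6                  # ohm*m, assumed
j_b = 8e10 * 1e4              # 8e10 A/cm^2 from the abstract, converted to A/m^2

E_field = eta * j_b           # V/m
dEdx = E_field * 1e-6 / 1e3   # energy loss per micron, in keV/um (1 eV per V per electron)
print(f"resistive dE/dx ~ {dEdx:.1f} keV/um")   # ~0.8 keV/um, matching the measurement

# At much higher current densities (up to 1e14 A/cm^2) the background heats up
# and its resistivity drops, which is why the paper's analytic estimate stays
# near ~1 keV/um instead of scaling linearly with j_b.
```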
Abstract:
Natural regeneration is a key ecological process that makes plant persistence possible and, consequently, it constitutes an essential element of sustainable forest management. In this respect, natural regeneration in even-aged stands of Pinus pinea L. located in the Spanish Northern Plateau has not always been successfully achieved despite over a century of pine-nut-based management. As a result, natural regeneration has recently become a major concern for forest managers at a time when investment in silviculture is being rationalized. The present dissertation aims to provide answers to forest managers on this topic through the development of an integral multistage regeneration model for P. pinea stands in the region. From this model, recommendations for natural-regeneration-based silviculture can be derived under present and future climate scenarios. Also, the model structure makes it possible to detect the likely bottlenecks affecting the process. The integral model consists of five submodels corresponding to each of the subprocesses linking the stages involved in natural regeneration (seed production, seed dispersal, seed germination, seed predation and seedling survival). The outputs of the submodels represent the transitional probabilities between these stages as a function of climatic and stand variables, which in turn are representative of the ecological factors driving regeneration. At the subprocess level, the findings of this dissertation should be interpreted as follows. The scheduling of the shelterwood system currently conducted over low-density stands leads to situations of dispersal limitation from the initial stages of the regeneration period onwards. Concerning predation, predator activity appears to be limited only by the occurrence of severe summer droughts and masting events, the summer thus being a favourable period for seed survival. Outside this time interval, predators were found to almost totally deplete seed crops. Given that P. pinea dissemination occurs in summer (i.e. the safe period against predation), the likelihood of a seed not being destroyed is conditional on germination occurring before predator activity intensifies. However, the optimal conditions for germination seldom take place, restricting emergence to a few days during the fall. Thus, the window to reach the seedling stage is narrow. In addition, the seedling survival submodel predicts extremely high seedling mortality rates, and therefore only some individuals from large cohorts will be able to persist. These facts, along with the strong climate-mediated masting habit exhibited by P. pinea, reveal that the overall probability of establishment is low. Given this background, current management – low final stand densities resulting from intense thinning and strict felling schedules – constrains the occurrence of enough favourable events to achieve natural regeneration within the current rotation time. Stochastic simulation and optimisation computed through the integral model confirm this circumstance, suggesting that more flexible and progressive regeneration fellings should be conducted. From an ecological standpoint, these results point to a reproductive strategy leading to uneven-aged stand structures, in full accordance with the medium shade-tolerant behaviour of the species. As a final remark, stochastic simulations performed under a climate-change scenario show that regeneration of the species will not be strongly hampered in the future.
This resilient behaviour highlights the fundamental ecological role played by P. pinea in demanding areas where other tree species fail to persist.
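Since the integral model chains the five subprocess submodels through stage-to-stage transition probabilities, the overall chance of establishment can be illustrated as a simple product of those probabilities. The sketch below uses invented placeholder numbers purely to show the multistage structure; in the dissertation each transition is a fitted submodel driven by climatic and stand variables.

```python
# Multistage regeneration model: the overall probability of a seed reaching the
# established-seedling stage is the product of stage-to-stage transition
# probabilities. All numbers below are invented placeholders for illustration;
# in the dissertation each one is a submodel of climate and stand variables.

stages = {
    "produced -> dispersed into the gap":     0.60,
    "dispersed -> escapes predation":         0.20,
    "escapes predation -> germinates":        0.15,
    "germinates -> survives first summers":   0.05,
}

p = 1.0
for name, prob in stages.items():
    p *= prob
    print(f"{name:40s} p = {prob:.2f}  (cumulative {p:.4f})")

print(f"\noverall establishment probability per seed ~ {p:.1e}")
```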
Abstract:
A theoretical model for the steady-state response of anodic contactors that emit a plasma current Ii and collect electrons from a collisionless, unmagnetized plasma is presented. The use of a (kinetic) monoenergetic population for the attracted species, well known in passive-probe theory, gives both accuracy and tractability to the theory. The monoenergetic population is proved to behave like an isentropic fluid with radial plus centripetal motion, allowing direct comparisons with ad hoc fluid models. In addition, a modification of the original monoenergetic equations permits the analysis of contactors operating in orbit-limited conditions. The theory also predicts that only for plasma emissions above a certain threshold current is a presheath/double-layer/core structure of the potential formed (the core mode), while for emissions below that threshold a plasma contactor behaves exactly as a positive-ion emitter with a presheath/sheath structure (the no-core mode). Ion emitters are studied as a particular case. Emphasis is placed on obtaining dimensionless charts and approximate asymptotic laws for the current-voltage characteristic.
Abstract:
We present a practical implementation of a solar thermophotovoltaic (TPV) system. The system presented in this paper comprises a sunlight concentrator system, a cylindrical cup-shaped absorber/emitter (made of tungsten coated with HfO2), and a hexagonal water-cooled TPV generator comprising 24 germanium TPV cells, which surrounds the cylindrical absorber/emitter. This paper focuses on the development of shingled TPV cell arrays, the characterization of the sunlight concentrator system, the estimation of the temperature achieved by the cylindrical emitters operated under concentrated sunlight, and the evaluation of the full system performance under real outdoor irradiance conditions. From the system characterization, we have measured short-circuit current densities of up to 0.95 A/cm², electric power densities of 67 mW/cm², and a global conversion efficiency of about 0.8%. To our knowledge, this is the first overall solar-to-electricity efficiency reported for a complete solar thermophotovoltaic system. The very low efficiency is mainly due to the overheating of the cells (up to 120 °C) and to the high optical losses in the concentrator, which prevent the optimum emitter temperature from being reached. The loss analysis shows that, by improving both aspects, efficiencies above 5% could be achievable in the very short term and efficiencies above 10% could be reached with further improvements.
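The reported short-circuit current density and electrical power density can be cross-checked against each other (a minimal sketch using only the numbers quoted above; the voltage/fill-factor split is not given in the abstract and is inferred, not measured):

```python
# Consistency check between the quoted short-circuit current density and the
# electric power density of the TPV generator. Only J_sc and P_out come from
# the abstract; the implied Voc*FF product is a derived, approximate figure.

j_sc  = 0.95          # A/cm^2, measured short-circuit current density
p_out = 67e-3         # W/cm^2, measured electric power density

voc_ff = p_out / j_sc     # since P = J_sc * V_oc * FF at the operating point
print(f"implied V_oc * FF ~ {voc_ff*1e3:.0f} mV")   # ~70 mV

# Such a low Voc*FF product is plausible for low-bandgap germanium cells
# running hot (the abstract reports cell overheating up to 120 C).
```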
Abstract:
La astronomía de rayos γ estudia las partículas más energéticas que llegan a la Tierra desde el espacio. Estos rayos γ no se generan mediante procesos térmicos en simples estrellas, sino mediante mecanismos de aceleración de partículas en objetos celestes como núcleos de galaxias activos, púlsares, supernovas, o posibles procesos de aniquilación de materia oscura. Los rayos γ procedentes de estos objetos y sus características proporcionan una valiosa información con la que los científicos tratan de comprender los procesos físicos que ocurren en ellos y desarrollar modelos teóricos que describan su funcionamiento con fidelidad. El problema de observar rayos γ es que son absorbidos por las capas altas de la atmósfera y no llegan a la superficie (de lo contrario, la Tierra será inhabitable). De este modo, sólo hay dos formas de observar rayos γ embarcar detectores en satélites, u observar los efectos secundarios que los rayos γ producen en la atmósfera. Cuando un rayo γ llega a la atmósfera, interacciona con las partículas del aire y genera un par electrón - positrón, con mucha energía. Estas partículas secundarias generan a su vez más partículas secundarias cada vez menos energéticas. Estas partículas, mientras aún tienen energía suficiente para viajar más rápido que la velocidad de la luz en el aire, producen una radiación luminosa azulada conocida como radiación Cherenkov durante unos pocos nanosegundos. Desde la superficie de la Tierra, algunos telescopios especiales, conocidos como telescopios Cherenkov o IACTs (Imaging Atmospheric Cherenkov Telescopes), son capaces de detectar la radiación Cherenkov e incluso de tomar imágenes de la forma de la cascada Cherenkov. A partir de estas imágenes es posible conocer las principales características del rayo γ original, y con suficientes rayos se pueden deducir características importantes del objeto que los emitió, a cientos de años luz de distancia. Sin embargo, detectar cascadas Cherenkov procedentes de rayos γ no es nada fácil. Las cascadas generadas por fotones γ de bajas energías emiten pocos fotones, y durante pocos nanosegundos, y las correspondientes a rayos γ de alta energía, si bien producen más electrones y duran más, son más improbables conforme mayor es su energía. Esto produce dos líneas de desarrollo de telescopios Cherenkov: Para observar cascadas de bajas energías son necesarios grandes reflectores que recuperen muchos fotones de los pocos que tienen estas cascadas. Por el contrario, las cascadas de altas energías se pueden detectar con telescopios pequeños, pero conviene cubrir con ellos una superficie grande en el suelo para aumentar el número de eventos detectados. Con el objetivo de mejorar la sensibilidad de los telescopios Cherenkov actuales, en el rango de energía alto (> 10 TeV), medio (100 GeV - 10 TeV) y bajo (10 GeV - 100 GeV), nació el proyecto CTA (Cherenkov Telescope Array). Este proyecto en el que participan más de 27 países, pretende construir un observatorio en cada hemisferio, cada uno de los cuales contará con 4 telescopios grandes (LSTs), unos 30 medianos (MSTs) y hasta 70 pequeños (SSTs). Con un array así, se conseguirán dos objetivos. En primer lugar, al aumentar drásticamente el área de colección respecto a los IACTs actuales, se detectarán más rayos γ en todos los rangos de energía. En segundo lugar, cuando una misma cascada Cherenkov es observada por varios telescopios a la vez, es posible analizarla con mucha más precisión gracias a las técnicas estereoscópicas. 
La presente tesis recoge varios desarrollos técnicos realizados como aportación a los telescopios medianos y grandes de CTA, concretamente al sistema de trigger. Al ser las cascadas Cherenkov tan breves, los sistemas que digitalizan y leen los datos de cada píxel tienen que funcionar a frecuencias muy altas (≈1 GHz), lo que hace inviable que funcionen de forma continua, ya que la cantidad de datos guardada será inmanejable. En su lugar, las señales analógicas se muestrean, guardando las muestras analógicas en un buffer circular de unos pocos µs. Mientras las señales se mantienen en el buffer, el sistema de trigger hace un análisis rápido de las señales recibidas, y decide si la imagen que hay en el buér corresponde a una cascada Cherenkov y merece ser guardada, o por el contrario puede ignorarse permitiendo que el buffer se sobreescriba. La decisión de si la imagen merece ser guardada o no, se basa en que las cascadas Cherenkov producen detecciones de fotones en píxeles cercanos y en tiempos muy próximos, a diferencia de los fotones de NSB (night sky background), que llegan aleatoriamente. Para detectar cascadas grandes es suficiente con comprobar que más de un cierto número de píxeles en una región hayan detectado más de un cierto número de fotones en una ventana de tiempo de algunos nanosegundos. Sin embargo, para detectar cascadas pequeñas es más conveniente tener en cuenta cuántos fotones han sido detectados en cada píxel (técnica conocida como sumtrigger). El sistema de trigger desarrollado en esta tesis pretende optimizar la sensibilidad a bajas energías, por lo que suma analógicamente las señales recibidas en cada píxel en una región de trigger y compara el resultado con un umbral directamente expresable en fotones detectados (fotoelectrones). El sistema diseñado permite utilizar regiones de trigger de tamaño seleccionable entre 14, 21 o 28 píxeles (2, 3, o 4 clusters de 7 píxeles cada uno), y con un alto grado de solapamiento entre ellas. De este modo, cualquier exceso de luz en una región compacta de 14, 21 o 28 píxeles es detectado y genera un pulso de trigger. En la versión más básica del sistema de trigger, este pulso se distribuye por toda la cámara de forma que todos los clusters sean leídos al mismo tiempo, independientemente de su posición en la cámara, a través de un delicado sistema de distribución. De este modo, el sistema de trigger guarda una imagen completa de la cámara cada vez que se supera el número de fotones establecido como umbral en una región de trigger. Sin embargo, esta forma de operar tiene dos inconvenientes principales. En primer lugar, la cascada casi siempre ocupa sólo una pequeña zona de la cámara, por lo que se guardan muchos píxeles sin información alguna. Cuando se tienen muchos telescopios como será el caso de CTA, la cantidad de información inútil almacenada por este motivo puede ser muy considerable. Por otro lado, cada trigger supone guardar unos pocos nanosegundos alrededor del instante de disparo. Sin embargo, en el caso de cascadas grandes la duración de las mismas puede ser bastante mayor, perdiéndose parte de la información debido al truncamiento temporal. Para resolver ambos problemas se ha propuesto un esquema de trigger y lectura basado en dos umbrales. El umbral alto decide si hay un evento en la cámara y, en caso positivo, sólo las regiones de trigger que superan el nivel bajo son leídas, durante un tiempo más largo. 
De este modo se evita guardar información de píxeles vacíos y las imágenes fijas de las cascadas se pueden convertir en pequeños \vídeos" que representen el desarrollo temporal de la cascada. Este nuevo esquema recibe el nombre de COLIBRI (Concept for an Optimized Local Image Building and Readout Infrastructure), y se ha descrito detalladamente en el capítulo 5. Un problema importante que afecta a los esquemas de sumtrigger como el que se presenta en esta tesis es que para sumar adecuadamente las señales provenientes de cada píxel, estas deben tardar lo mismo en llegar al sumador. Los fotomultiplicadores utilizados en cada píxel introducen diferentes retardos que deben compensarse para realizar las sumas adecuadamente. El efecto de estos retardos ha sido estudiado, y se ha desarrollado un sistema para compensarlos. Por último, el siguiente nivel de los sistemas de trigger para distinguir efectivamente las cascadas Cherenkov del NSB consiste en buscar triggers simultáneos (o en tiempos muy próximos) en telescopios vecinos. Con esta función, junto con otras de interfaz entre sistemas, se ha desarrollado un sistema denominado Trigger Interface Board (TIB). Este sistema consta de un módulo que irá montado en la cámara de cada LST o MST, y que estará conectado mediante fibras ópticas a los telescopios vecinos. Cuando un telescopio tiene un trigger local, este se envía a todos los vecinos conectados y viceversa, de modo que cada telescopio sabe si sus vecinos han dado trigger. Una vez compensadas las diferencias de retardo debidas a la propagación en las fibras ópticas y de los propios fotones Cherenkov en el aire dependiendo de la dirección de apuntamiento, se buscan coincidencias, y en el caso de que la condición de trigger se cumpla, se lee la cámara en cuestión, de forma sincronizada con el trigger local. Aunque todo el sistema de trigger es fruto de la colaboración entre varios grupos, fundamentalmente IFAE, CIEMAT, ICC-UB y UCM en España, con la ayuda de grupos franceses y japoneses, el núcleo de esta tesis son el Level 1 y la Trigger Interface Board, que son los dos sistemas en los que que el autor ha sido el ingeniero principal. Por este motivo, en la presente tesis se ha incluido abundante información técnica relativa a estos sistemas. Existen actualmente importantes líneas de desarrollo futuras relativas tanto al trigger de la cámara (implementación en ASICs), como al trigger entre telescopios (trigger topológico), que darán lugar a interesantes mejoras sobre los diseños actuales durante los próximos años, y que con suerte serán de provecho para toda la comunidad científica participante en CTA. ABSTRACT -ray astronomy studies the most energetic particles arriving to the Earth from outer space. This -rays are not generated by thermal processes in mere stars, but by means of particle acceleration mechanisms in astronomical objects such as active galactic nuclei, pulsars, supernovas or as a result of dark matter annihilation processes. The γ rays coming from these objects and their characteristics provide with valuable information to the scientist which try to understand the underlying physical fundamentals of these objects, as well as to develop theoretical models able to describe them accurately. The problem when observing rays is that they are absorbed in the highest layers of the atmosphere, so they don't reach the Earth surface (otherwise the planet would be uninhabitable). 
Therefore, there are only two possible ways to observe γ rays: by using detectors on board satellites, or by observing their secondary effects in the atmosphere. When a γ ray reaches the atmosphere, it interacts with the particles in the air, generating a highly energetic electron-positron pair. These secondary particles generate in turn more particles, with less energy each time. While these particles are still energetic enough to travel faster than the speed of light in the air, they produce a bluish radiation known as Cherenkov light for a few nanoseconds. From the Earth's surface, some special telescopes known as Cherenkov telescopes or IACTs (Imaging Atmospheric Cherenkov Telescopes) are able to detect the Cherenkov light and even to take images of the Cherenkov showers. From these images it is possible to know the main parameters of the original γ ray, and with enough γ rays it is possible to deduce important characteristics of the emitting object, hundreds of light-years away. However, detecting Cherenkov showers generated by γ rays is not a simple task. The showers generated by low-energy γ rays contain few photons and last only a few nanoseconds, while the ones corresponding to high-energy γ rays, although having more photons and lasting longer, become increasingly unlikely as their energy grows. This results in two clearly differentiated development lines for IACTs: in order to detect low-energy showers, big reflectors are required to collect as many photons as possible from the few that these showers produce. Conversely, small telescopes are able to detect high-energy showers, but a large area on the ground should be covered with them to increase the number of detected events. With the aim of improving the sensitivity of current Cherenkov telescopes in the high (> 10 TeV), medium (100 GeV - 10 TeV) and low (10 GeV - 100 GeV) energy ranges, the CTA (Cherenkov Telescope Array) project was created. This project, with more than 27 participating countries, intends to build an observatory in each hemisphere, each one equipped with 4 large-sized telescopes (LSTs), around 30 medium-sized telescopes (MSTs) and up to 70 small-sized telescopes (SSTs). With such an array, two goals would be achieved. First, the drastic increase in collection area with respect to current IACTs will lead to the detection of more γ rays in all energy ranges. Secondly, when a Cherenkov shower is observed by several telescopes at the same time, it is possible to analyze it much more accurately thanks to stereoscopic techniques. The present thesis gathers several technical developments for the trigger system of the medium and large size telescopes of CTA. As the Cherenkov showers are so short, the digitization and readout systems corresponding to each pixel must work at very high frequencies (≈ 1 GHz). This makes it unfeasible to read out data continuously, because the amount of data would be unmanageable. Instead, the analog signals are sampled, storing the analog samples in a temporal ring buffer able to store up to a few µs. While the signals remain in the buffer, the trigger system performs a fast analysis of the signals and decides if the image in the buffer corresponds to a Cherenkov shower and deserves to be stored, or on the contrary it can be ignored, allowing the buffer to be overwritten. The decision of whether to save the image or not is based on the fact that Cherenkov showers produce photon detections in nearby pixels at very close times, in contrast to the random arrival of the NSB photons.
Checking if more than a certain number of pixels in a trigger region have detected more than a certain number of photons during a certain time window is enough to detect large showers. However, also taking into account how many photons have been detected in each pixel (the sumtrigger technique) is more convenient to optimize the sensitivity to low-energy showers. The trigger system developed in this thesis intends to optimize the sensitivity to low-energy showers, so it performs the analog addition of the signals received in each pixel of the trigger region and compares the sum with a threshold which can be directly expressed as a number of detected photons (photoelectrons). The trigger system allows trigger regions of 14, 21, or 28 pixels to be selected (2, 3 or 4 clusters of 7 pixels each), with extensive overlap between them. In this way, any excess of light inside a compact region of 14, 21 or 28 pixels is detected and a trigger pulse is generated. In the most basic version of the trigger system, this pulse is just distributed throughout the camera in such a way that all the clusters are read at the same time, independently of their position in the camera, by means of a complex distribution system. Thus, the readout saves a complete camera image whenever the number of photoelectrons set as threshold is exceeded in a trigger region. However, this way of operating has two important drawbacks. First, the shower usually covers only a small part of the camera, so many pixels without relevant information are stored. When there are many telescopes, as will be the case for CTA, the amount of useless stored information can be considerable. On the other hand, with every trigger only a few nanoseconds of information around the trigger time are stored. In the case of large showers, the shower duration can be considerably longer, so part of the information is lost due to the temporal truncation. With the aim of solving both limitations, a trigger and readout scheme based on two thresholds has been proposed. The high threshold decides if there is a relevant event in the camera and, if so, only the trigger regions exceeding the low threshold are read out, during a longer time. In this way, the information from empty pixels is not stored and the fixed images of the showers become small "videos" containing the temporal development of the shower. This new scheme is named COLIBRI (Concept for an Optimized Local Image Building and Readout Infrastructure), and it has been described in depth in chapter 5. An important problem affecting sumtrigger schemes like the one presented in this thesis is that, in order to add the signals from each pixel properly, they must arrive at the same time. The photomultipliers used in each pixel introduce different delays which must be compensated to perform the additions properly. The effect of these delays has been analyzed, and a delay compensation system has been developed. The next trigger level consists of looking for simultaneous (or very close in time) triggers in neighbouring telescopes. This function, together with others related to interfacing different systems, has been implemented in a system named the Trigger Interface Board (TIB). This system consists of one module, which will be placed inside the LST and MST cameras and connected to the neighbouring telescopes through optical fibers. When a telescope has a local trigger, it is sent to all the connected neighbours and vice versa, so every telescope knows whether its neighbours have triggered.
Once the delay differences due to propagation in the optical fibers, and of the Cherenkov photons in the air depending on the pointing direction, have been compensated, the TIB looks for coincidences and, if the trigger condition is fulfilled, the camera is read out a fixed time after the local trigger arrived. Although the whole trigger system is the result of the cooperation of several groups, especially IFAE, CIEMAT, ICC-UB and UCM in Spain, with some help from French and Japanese groups, the Level 1 trigger and the Trigger Interface Board constitute the core of this thesis, as they are the two systems designed by the author. For this reason, a large amount of technical information about these systems has been included. There are important future development lines regarding both the camera trigger (implementation in ASICs) and the stereo trigger (topological trigger), which will produce interesting improvements on the current designs during the following years and will hopefully be useful for the whole scientific community participating in CTA.
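As a toy illustration of the sum-trigger decision described above (analog sum over an overlapping trigger region compared against a photoelectron threshold), the sketch below applies the same logic to simulated digitized samples. The region sizes (2-4 clusters of 7 pixels) follow the text; the camera layout, NSB level and threshold value are assumed simplifications, not CTA parameters.

```python
import numpy as np

# Toy sum-trigger: sum the signals of all pixels in each overlapping trigger
# region and fire when the sum exceeds a threshold expressed in photoelectrons.
# Region sizes of 2-4 clusters of 7 pixels follow the thesis text; the camera
# layout, NSB rate and threshold below are invented for illustration.

rng = np.random.default_rng(0)
n_clusters, pix_per_cluster = 20, 7
nsb_rate_pe = 0.3          # mean NSB photoelectrons per pixel per window (assumed)
threshold_pe = 25.0        # trigger threshold in photoelectrons (assumed)

# Simulated camera: NSB everywhere plus a faint shower spread over clusters 5-7.
camera = rng.poisson(nsb_rate_pe, size=(n_clusters, pix_per_cluster)).astype(float)
camera[5:8] += rng.poisson(2.0, size=(3, pix_per_cluster))

# Overlapping trigger regions of 3 consecutive clusters (21 pixels).
region_size = 3
for start in range(n_clusters - region_size + 1):
    analog_sum = camera[start:start + region_size].sum()
    if analog_sum > threshold_pe:
        print(f"trigger: clusters {start}-{start + region_size - 1}, "
              f"sum = {analog_sum:.1f} pe")
```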
Abstract:
La Gestión Forestal Sostenible se define como “la administración y uso de los bosques y tierras forestales de forma e intensidad tales que mantengan su biodiversidad, productividad, capacidad de regeneración, vitalidad y su potencial para atender, ahora y en el futuro, las funciones ecológicas, económicas y sociales relevantes a escala local, nacional y global, y que no causan daño a otros ecosistemas” (MCPFE Conference, 1993). Dentro del proceso los procesos de planificación, en cualquier escala, es necesario establecer cuál será la situación a la que se quiere llegar mediante la gestión. Igualmente, será necesario conocer la situación actual, pues marcará la situación de partida y condicionará el tipo de actuaciones a realizar para alcanzar los objetivos fijados. Dado que, los Proyectos de Ordenación de Montes y sus respectivas revisiones son herramientas de planificación, durante la redacción de los mismos, será necesario establecer una serie de objetivos cuya consecución pueda verificarse de forma objetiva y disponer de una caracterización de la masa forestal que permita conocer la situación de partida. Esta tesis se centra en problemas prácticos, propios de una escala de planificación local o de Proyecto de Ordenación de Montes. El primer objetivo de la tesis es determinar distribuciones diamétricas y de alturas de referencia para masas regulares por bosquetes, empleando para ello el modelo conceptual propuesto por García-Abril et al., (1999) y datos procedentes de las Tablas de producción de Rojo y Montero (1996). Las distribuciones de referencia obtenidas permitirán guiar la gestión de masas irregulares y regulares por bosquetes. Ambos tipos de masas aparecen como una alternativa deseable en aquellos casos en los que se quiere potenciar la biodiversidad, la estabilidad, la multifuncionalidad del bosque y/o como alternativa productiva, especialmente indicada para la producción de madera de calidad. El segundo objetivo de la Tesis está relacionado con la necesidad de disponer de una caracterización adecuada de la masa forestal durante la redacción de los Proyectos de Ordenación de Montes y de sus respectivas revisiones. Con el fin de obtener estimaciones de variables forestales en distintas unidades territoriales de potencial interés para la Ordenación de Montes, así como medidas de la incertidumbre en asociada dichas estimaciones, se extienden ciertos resultados de la literatura de Estimación en Áreas Pequeñas. Mediante un caso de estudio, se demuestra el potencial de aplicación de estas técnicas en inventario forestales asistidos con información auxiliar procedente de sensores láser aerotransportados (ALS). Los casos de estudio se realizan empleando datos ALS similares a los recopilados en el marco del Plan Nacional de Ortofotografía Aérea (PNOA). Los resultados obtenidos muestran que es posible aumentar la eficiencia de los inventarios forestales tradicionales a escala de proyecto de Ordenación de Montes, mediante la aplicación de estimadores EBLUP (Empirical Best Linear Unbiased Predictor) con modelos a nivel de elemento poblacional e información auxiliar ALS similar a la recopilada por el PNOA. 
ABSTRACT According to MCPFE (1993), Sustainable Forest Management is “the stewardship and use of forests and forest lands in a way, and at a rate, that maintains their biodiversity, productivity, regeneration capacity, vitality and their potential to fulfill, now and in the future, relevant ecological, economic and social functions, at local, national, and global levels, and that does not cause damage to other ecosystems”. For forest management planning, at any scale, we must determine what situation is to be achieved through management. It is also necessary to know the current situation, as this will mark the starting point and condition the type of actions to be performed in order to meet the desired objectives. Forest management at a local scale is no exception. This Thesis focuses on typical problems of forest management planning at a local scale. The first objective of this Thesis is to determine management objectives for group shelterwood management systems in terms of tree height and tree diameter reference distributions. For this purpose, the conceptual model proposed by García-Abril et al. (1999) is applied to the yield tables for Pinus sylvestris in Sierra de Guadarrama (Rojo and Montero, 1996). The resulting reference distributions will act as a guide in the management of forests treated under group shelterwood management systems, or as an approximate reference for the management of uneven-aged forests. Both types of management systems are desirable in those cases where forest biodiversity, stability and multifunctionality are pursued goals. These management systems are also recommended as alternatives for the production of high-quality wood. The second objective focuses on the need to adequately characterize the forest during the decision process that leads to local management. In order to obtain estimates of forest variables for different management units of potential interest for forest planning, as well as the associated measures of uncertainty in these estimates, certain results from the Small Area Estimation literature are extended to accommodate the need for estimates and reliability measures in very small subpopulations containing a reduced number of pixels. A case study shows the potential of Small Area Estimation (SAE) techniques in forest inventories assisted with remotely sensed auxiliary information. The influence of the laser pulse density on the quality of the estimates at different aggregation levels is analyzed. This study considers low laser pulse densities (0.5 returns/m²), similar to those provided by large-scale Airborne Laser Scanner (ALS) surveys, such as the one conducted by the Spanish National Geographic Institute for about 80% of the Spanish territory. The results obtained show that it is possible to improve the efficiency of traditional forest inventories at a local scale using EBLUP (Empirical Best Linear Unbiased Predictor) estimators based on unit-level models and low-density ALS auxiliary information.
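The unit-level small-area approach mentioned above can be prototyped with a standard linear mixed model: fit plot-level forest attributes against ALS metrics with a random intercept per management unit, then combine the fixed effects with the predicted random effect of each unit into an EBLUP-type unit mean. A minimal sketch with hypothetical file and column names; this is not the estimator implementation of the thesis.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Unit-level EBLUP sketch: a plot-level mixed model linking field volume to ALS
# metrics, with a random intercept per management unit, then EBLUP-type means
# per unit using wall-to-wall ALS pixel means. Column names ("volume", "p95",
# "cover", "unit") and both input files are hypothetical placeholders.

plots  = pd.read_csv("field_plots.csv")   # plot-level: volume, p95, cover, unit
pixels = pd.read_csv("als_pixels.csv")    # wall-to-wall ALS pixels: p95, cover, unit

model = smf.mixedlm("volume ~ p95 + cover", data=plots, groups=plots["unit"])
fit = model.fit()

beta = fit.fe_params                              # fixed effects (Intercept, p95, cover)
xbar = pixels.groupby("unit")[["p95", "cover"]].mean()

eblup = {}
for unit, row in xbar.iterrows():
    fixed = beta["Intercept"] + beta["p95"] * row["p95"] + beta["cover"] * row["cover"]
    u_d = fit.random_effects.get(unit, pd.Series([0.0])).iloc[0]  # 0 for unsampled units
    eblup[unit] = fixed + u_d                     # EBLUP of the unit mean volume

print(pd.Series(eblup).round(1))
```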
Abstract:
The present paper deals with the calculation of the grounding resistance of an electrode composed of thin wires, which we consider here as perfect electric conductors (PEC), i.e., with zero internal resistance, buried in a soil of uniform resistivity. The potential profile at the ground surface is also calculated when the electrode is energized with low-frequency current. The classic treatment using leakage currents, called the Charge Simulation Method (CSM), is compared with one using a set of steady currents along the axes of the wires, here called the Longitudinal Currents Method (LCM), to solve the Maxwell equations. The method of moments is applied to obtain a numerical approximation of the solution by using rectangular basis functions. Both methods are applied to two types of electrodes, and the results are also compared with those obtained using a third approach, the Average Potential Method (APM), described later in the text. From the analysis performed, we can estimate the error in the determination of the grounding resistance as a function of the number of segments into which the electrodes are divided.
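A bare-bones version of the leakage-current (CSM-type) calculation described above: segment a single horizontal buried wire, build the mutual-resistance matrix from point-source potentials (including the image above the soil surface), impose a common electrode potential and solve for the segment leakage currents with piecewise-constant (rectangular) basis functions and point matching. The geometry and soil resistivity below are assumed example values, and this single-wire sketch is far simpler than the multi-electrode comparison of the paper.

```python
import numpy as np

# Leakage-current (CSM-style) estimate of the grounding resistance of one
# horizontal wire buried in uniform soil, with rectangular basis functions and
# point matching at segment midpoints. All parameters are assumed examples.

rho   = 100.0      # soil resistivity, ohm*m (assumed)
L     = 10.0       # wire length, m (assumed)
a     = 0.005      # wire radius, m (assumed)
depth = 0.6        # burial depth, m (assumed)
n     = 40         # number of segments

dl = L / n
x  = (np.arange(n) + 0.5) * dl            # segment midpoints along the wire

M = np.empty((n, n))
for i in range(n):
    for j in range(n):
        if i == j:
            # potential of a segment on its own surface, plus its image above ground
            M[i, j] = (rho / (4 * np.pi * dl) * 2 * np.arcsinh(dl / (2 * a))
                       + rho / (4 * np.pi) / (2 * depth))
        else:
            d_direct = abs(x[i] - x[j])
            d_image  = np.hypot(x[i] - x[j], 2 * depth)
            M[i, j] = rho / (4 * np.pi) * (1 / d_direct + 1 / d_image)

# Perfect conductor: every segment sits at the same (1 V) potential.
I_seg = np.linalg.solve(M, np.ones(n))
R_g = 1.0 / I_seg.sum()
print(f"grounding resistance ~ {R_g:.2f} ohm")
```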
Abstract:
Una amarra electrodinámica (electrodynamic tether) opera sobre principios electromagnéticos intercambiando momento con la magnetosfera planetaria e interactuando con su ionosfera. Es un subsistema pasivo fiable para desorbitar etapas de cohetes agotadas y satélites al final de su misión, mitigando el crecimiento de la basura espacial. Una amarra sin aislamiento captura electrones del plasma ambiente a lo largo de su segmento polarizado positivamente, el cual puede alcanzar varios kilómetros de longitud, mientras que emite electrones de vuelta al plasma mediante un contactor de plasma activo de baja impedancia en su extremo catódico, tal como un cátodo hueco (hollow cathode). En ausencia de un contactor catódico activo, la corriente que circula por una amarra desnuda en órbita es nula en ambos extremos de la amarra y se dice que ésta está flotando eléctricamente. Para emisión termoiónica despreciable y captura de corriente en condiciones limitadas por movimiento orbital (orbital-motion-limited, OML), el cociente entre las longitudes de los segmentos anódico y catódico es muy pequeño debido a la disparidad de masas entre iones y electrones. Tal modo de operación resulta en una corriente media y fuerza de Lorentz bajas en la amarra, la cual es poco eficiente como dispositivo para desorbitar. El electride C12A7 : e−, que podría presentar una función de trabajo (work function) tan baja como W = 0.6 eV y un comportamiento estable a temperaturas relativamente altas, ha sido propuesto como recubrimiento para amarras desnudas. La emisión termoiónica a lo largo de un segmento así recubierto y bajo el calentamiento de la operación espacial, puede ser más eficiente que la captura iónica. En el modo más simple de fuerza de frenado, podría eliminar la necesidad de un contactor catódico activo y su correspondientes requisitos de alimentación de gas y subsistema de potencia, lo que resultaría en un sistema real de amarra “sin combustible”. Con este recubrimiento de bajo W, cada segmento elemental del segmento catódico de una amarra desnuda de kilómetros de longitud emitiría corriente como si fuese parte de una sonda cilíndrica, caliente y uniformemente polarizada al potencial local de la amarra. La operación es similar a la de una sonda de Langmuir 2D tanto en los segmentos catódico como anódico. Sin embargo, en presencia de emisión, los electrones emitidos resultan en carga espacial (space charge) negativa, la cual reduce el campo eléctrico que los acelera hacia fuera, o incluso puede desacelerarlos y hacerlos volver a la sonda. Se forma una doble vainas (double sheath) estable con electrones emitidos desde la sonda e iones provenientes del plasma ambiente. La densidad de corriente termoiónica, variando a lo largo del segmento catódico, podría seguir dos leyes distintas bajo diferentes condiciones: (i) la ley de corriente limitada por la carga espacial (space-charge-limited, SCL) o (ii) la ley de Richardson-Dushman (RDS). Se presenta un estudio preliminar sobre la corriente SCL frente a una sonda emisora usando la teoría de vainas (sheath) formada por la captura iónica en condiciones OML, y la corriente electrónica SCL entre los electrodos cilíndricos según Langmuir. El modelo, que incluye efectos óhmicos y el efecto de transición de emisión SCL a emisión RDS, proporciona los perfiles de corriente y potencial a lo largo de la longitud completa de la amarra. 
El análisis muestra que en el modo más simple de fuerza de frenado, bajo condiciones orbitales y de amarras típicas, la emisión termoiónica proporciona un contacto catódico eficiente y resulta en una sección catódica pequeña. En el análisis anterior, tanto la transición de emisión SCL a RD como la propia ley de emisión SCL consiste en un modelo muy simplificado. Por ello, a continuación se ha estudiado con detalle la solución de vaina estacionaria de una sonda con emisión termoiónica polarizada negativamente respecto a un plasma isotrópico, no colisional y sin campo magnético. La existencia de posibles partículas atrapadas ha sido ignorada y el estudio incluye tanto un estudio semi-analítico mediante técnica asintóticas como soluciones numéricas completas del problema. Bajo las tres condiciones (i) alto potencial, (ii) R = Rmax para la validez de la captura iónica OML, y (iii) potencial monotónico, se desarrolla un análisis asintótico auto-consistente para la estructura de plasma compleja que contiene las tres especies de cargas (electrones e iones del plasma, electrones emitidos), y cuatro regiones espaciales distintas, utilizando teorías de movimiento orbital y modelos cinéticos de las especies. Aunque los electrones emitidos presentan carga espacial despreciable muy lejos de la sonda, su efecto no se puede despreciar en el análisis global de la estructura de la vaina y de dos capas finas entre la vaina y la región cuasi-neutra. El análisis proporciona las condiciones paramétricas para que la corriente sea SCL. También muestra que la emisión termoiónica aumenta el radio máximo de la sonda para operar dentro del régimen OML y que la emisión de electrones es mucho más eficiente que la captura iónica para el segmento catódico de la amarra. En el código numérico, los movimientos orbitales de las tres especies son modelados para potenciales tanto monotónico como no-monotónico, y sonda de radio R arbitrario (dentro o más allá del régimen de OML para la captura iónica). Aprovechando la existencia de dos invariante, el sistema de ecuaciones Poisson-Vlasov se escribe como una ecuación integro-diferencial, la cual se discretiza mediante un método de diferencias finitas. El sistema de ecuaciones algebraicas no lineal resultante se ha resuelto de con un método Newton-Raphson paralelizado. Los resultados, comparados satisfactoriamente con el análisis analítico, proporcionan la emisión de corriente y la estructura del plasma y del potencial electrostático. ABSTRACT An electrodynamic tether operates on electromagnetic principles and exchanges momentum through the planetary magnetosphere, by continuously interacting with the ionosphere. It is a reliable passive subsystem to deorbit spent rocket stages and satellites at its end of mission, mitigating the growth of orbital debris. A tether left bare of insulation collects electrons by its own uninsulated and positively biased segment with kilometer range, while electrons are emitted by a low-impedance active device at the cathodic end, such as a hollow cathode, to emit the full electron current. In the absence of an active cathodic device, the current flowing along an orbiting bare tether vanishes at both ends and the tether is said to be electrically floating. 
For negligible thermionic emission and orbital-motion-limited (OML) collection throughout the entire tether (electron/ion collection at the anodic/cathodic segment, respectively), the anodic-to-cathodic length ratio is very small because ions are much heavier, which results in low average current and Lorentz drag. The electride C12A7 : e−, which might present a work function as low as W = 0.6 eV and moderately high temperature stability, has been proposed as a coating for floating bare tethers. Thermionic emission along a thus-coated cathodic segment, under heating in space operation, can be more efficient than ion collection and, in the simplest drag mode, may eliminate the need for an active cathodic device and its corresponding gas-feed requirements and power subsystem, which would result in a truly “propellant-less” tether system. With this low-W coating, each elemental segment of the cathodic segment of a kilometers-long floating bare tether would emit current as if it were part of a hot cylindrical probe uniformly polarized at the local tether bias, under the 2D probe conditions that are also applied to the anodic-segment analysis. In the presence of emission, the emitted electrons result in negative space charge, which decreases the electric field that accelerates them outwards, or even reverses it, decelerating electrons near the emitting probe. A double sheath would be established with electrons being emitted from the probe and ions coming from the ambient plasma. The thermionic current density, varying along the cathodic segment, might follow two distinct laws under different conditions: i) space-charge-limited (SCL) emission or ii) full Richardson-Dushman (RDS) emission. A preliminary study of the SCL current in front of an emissive probe is presented using the orbital-motion-limited (OML) ion-collection sheath and Langmuir’s SCL electron current between cylindrical electrodes. A detailed calculation of current and bias profiles along the entire tether length is carried out with ohmic effects considered, and the transition from SCL to full RDS emission is included. Analysis shows that in the simplest drag mode, under typical orbital and tether conditions, thermionic emission provides efficient cathodic contact and leads to a short cathodic section. In the previous analysis, both the transition between SCL and RDS emission and the current law under SCL conditions used a very simple model. To continue, considering an isotropic, unmagnetized, collisionless plasma and a stationary sheath, the probe-plasma contact is studied in detail for a negatively biased probe with thermionic emission. Possible trapped particles are ignored, and this study includes both semianalytical solutions using asymptotic analysis and complete numerical solutions. Under conditions of i) high bias, ii) R = Rmax for ion OML collection validity, and iii) monotonic potential, a self-consistent asymptotic analysis is carried out for the complex plasma structure involving all three charge species (plasma electrons and ions, and emitted electrons) and four distinct spatial regions, using orbital-motion theories and kinetic modeling of the species. Although the emitted electrons present negligible space charge far away from the probe, their effect cannot be neglected in the global analysis of the sheath structure and of two thin layers between the sheath and the quasineutral region. The parametric conditions for the current to be space-charge-limited are obtained.
It is found that thermionic emission increases the range of probe radius for OML validity and is much more effective than ion collection for the cathodic contact of tethers. In the numerical code, the orbital motions of all three species are modeled for both monotonic and non-monotonic potentials, and for any probe radius R (within or beyond the OML regime for ion collection). Taking advantage of the two constants of motion (energy and angular momentum), the Poisson-Vlasov system is reduced to an integro-differential equation, which is discretized using a finite-difference method. The resulting non-linear algebraic equations are solved using a parallel implementation of the Newton-Raphson method. The results, which show good agreement with the analytical ones, provide the thermionic current, the sheath structure, and the electrostatic potential.
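The full-emission branch mentioned above is the Richardson-Dushman law; with the W ≈ 0.6 eV work function quoted for the C12A7 : e− electride it already gives sizeable current densities at modest temperatures. A minimal sketch, where the sample temperatures and the use of the ideal Richardson constant are illustrative assumptions:

```python
import math

# Richardson-Dushman thermionic emission: j = A0 * T^2 * exp(-W / (kB * T)).
# W = 0.6 eV is the electride work function quoted in the abstract; the ideal
# Richardson constant and the temperatures below are illustrative assumptions.

A0 = 1.20173e6        # ideal Richardson constant, A m^-2 K^-2
kB = 8.617333e-5      # Boltzmann constant, eV/K
W  = 0.6              # work function, eV

for T in (300.0, 400.0, 500.0):
    j = A0 * T**2 * math.exp(-W / (kB * T))
    print(f"T = {T:.0f} K  ->  j_RD ~ {j:.2e} A/m^2")
```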
Abstract:
Quantum dot infrared photodetectors (QDIPs) are very attractive for infrared imaging applications due to their promising features, such as high-temperature operation, normal-incidence response and low dark current [1]. However, the key issue is to obtain a high-quality active region, which requires a structural optimization of the nanostructures. By using a GaAsSb capping layer, the optical properties of InAs QDs, such as the PL intensity and its full width at half maximum (FWHM), have been improved in the range between 1.15 and 1.5 µm, because of the reduction of the compressive strain in the QDs and the increase of the QD height [2]. In this work, we have demonstrated strong and narrow intraband photoresponse spectra from GaAsSb-capped InAs-based QDIPs.
Abstract:
Quantum dot infrared photodetectors (QDIPs) are very attractive for many applications, such as infrared imaging, remote sensing and gas sensing, thanks to their promising features, such as high-temperature operation, normal-incidence response and low dark current [1]. However, the key issue is to obtain a high-quality active region, which requires an optimization of the nanostructure. By using a GaAsSb capping layer, InAs QDs have improved their optical emission in the range between 1.15 and 1.3 µm (at an Sb composition of 14%), due to a reduction of the compressive strain in the QDs and an increase of the QD height [2]. In this work, we have demonstrated strong and narrow intraband photoresponses at ~5 µm from GaAsSb-capped InAs/GaAs QDIPs under normal light incidence.
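For reference, the photon energies involved follow from E [eV] ≈ 1.24 / λ [µm]; a minimal sketch converting the wavelengths quoted above (the labels are only descriptive):

```python
# Photon energy for the wavelengths quoted in the abstract: E [eV] ~ 1.23984 / lambda [um].
HC = 1.23984  # eV*um

for lam_um, label in [(1.15, "QD interband emission, short end"),
                      (1.30, "QD interband emission, long end"),
                      (5.00, "intraband photoresponse")]:
    print(f"{label:35s} lambda = {lam_um:4.2f} um  ->  E ~ {HC / lam_um:.2f} eV")
```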
Abstract:
The optical and radio-frequency spectra of a monolithic master-oscillator power-amplifier emitting at 1.5 µm have been analyzed in a wide range of steady-state injection conditions. The analysis of the spectral maps reveals that, under low injection current of the master oscillator, the device operates in two essentially different operation modes depending on the current injected into the amplifier section. The regular operation mode, with predominance of the master oscillator, alternates with lasing of the compound-cavity modes allowed by the residual reflectance of the amplifier front facet. The quasi-periodic occurrence of these two regimes as a function of the amplifier current has been consistently interpreted in terms of a thermally tuned competition between the modes of the master oscillator and the compound-cavity modes.
Abstract:
La corrosión del acero es una de las patologías más importantes que afectan a las estructuras de hormigón armado que están expuestas a ambientes marinos o al ataque de sales fundentes. Cuando se produce corrosión, se genera una capa de óxido alrededor de la superficie de las armaduras, que ocupa un volumen mayor que el acero inicial; como consecuencia, el óxido ejerce presiones internas en el hormigón circundante, que lleva a la fisuración y, ocasionalmente, al desprendimiento del recubrimiento de hormigón. Durante los últimos años, numerosos estudios han contribuido a ampliar el conocimiento sobre el proceso de fisuración; sin embargo, aún existen muchas incertidumbres respecto al comportamiento mecánico de la capa de óxido, que es fundamental para predecir la fisuración. Por ello, en esta tesis se ha desarrollado y aplicado una metodología, para mejorar el conocimiento respecto al comportamiento del sistema acero-óxido-hormigón, combinando experimentos y simulaciones numéricas. Se han realizado ensayos de corrosión acelerada en condiciones de laboratorio, utilizando la técnica de corriente impresa. Con el objetivo de obtener información cercana a la capa de acero, como muestras se seleccionaron prismas de hormigón con un tubo de acero liso como armadura, que se diseñaron para conseguir la formación de una única fisura principal en el recubrimiento. Durante los ensayos, las muestras se equiparon con instrumentos especialmente diseñados para medir la variación de diámetro y volumen interior de los tubos, y se midió la apertura de la fisura principal utilizando un extensómetro comercial, adaptado a la geometría de las muestras. Las condiciones de contorno se diseñaron cuidadosamente para que los campos de corriente y deformación fuesen planos durante los ensayos, resultando en corrosión uniforme a lo largo del tubo, para poder reproducir los ensayos en simulaciones numéricas. Se ensayaron series con varias densidades de corriente y varias profundidades de corrosión. De manera complementaria, el comportamiento en fractura del hormigón se caracterizó en ensayos independientes, y se midió la pérdida gravimétrica de los tubos siguiendo procedimientos estándar. En todos los ensayos, la fisura principal creció muy despacio durante las primeras micras de profundidad de corrosión, pero después de una cierta profundidad crítica, la fisura se desarrolló completamente, con un aumento rápido de su apertura; la densidad de corriente influye en la profundidad de corrosión crítica. Las variaciones de diámetro interior y de volumen interior de los tubos mostraron tendencias diferentes entre sí, lo que indica que la deformación del tubo no fue uniforme. Después de la corrosión acelerada, las muestras se cortaron en rebanadas, que se utilizaron en ensayos post-corrosión. El patrón de fisuración se estudió a lo largo del tubo, en rebanadas que se impregnaron en vacío con resina y fluoresceína para mejorar la visibilidad de las fisuras bajo luz ultravioleta, y se estudió la presencia de óxido dentro de las grietas. En todas las muestras, se formó una fisura principal en el recubrimiento, infiltrada con óxido, y varias fisuras secundarias finas alrededor del tubo; el número de fisuras varió con la profundidad de corrosión de las muestras. Para muestras con la misma corrosión, el número de fisuras y su posición fue diferente entre muestras y entre secciones de una misma muestra, debido a la heterogeneidad del hormigón. 
Finalmente, se investigó la adherencia entre el acero y el hormigón, utilizando un dispositivo diseñado para empujar el tubo en el hormigón. Las curvas de tensión frente a desplazamiento del tubo presentaron un pico marcado, seguido de un descenso constante; la profundidad de corrosión y la apertura de fisura de las muestras influyeron notablemente en la tensión residual del ensayo. Para simular la fisuración del hormigón causada por la corrosión de las armaduras, se programó un modelo numérico. Éste combina elementos finitos con fisura embebida adaptable que reproducen la fractura del hormigón conforme al modelo de fisura cohesiva estándar, y elementos de interfaz llamados elementos junta expansiva, que se programaron específicamente para reproducir la expansión volumétrica del óxido y que incorporan su comportamiento mecánico. En el elemento junta expansiva se implementó un fenómeno de despegue, concretamente de deslizamiento y separación, que resultó fundamental para obtener localización de fisuras adecuada, y que se consiguió con una fuerte reducción de la rigidez tangencial y la rigidez en tracción del óxido. Con este modelo, se realizaron simulaciones de los ensayos, utilizando modelos bidimensionales de las muestras con elementos finitos. Como datos para el comportamiento en fractura del hormigón, se utilizaron las propiedades determinadas en experimentos. Para el óxido, inicialmente se supuso un comportamiento fluido, con deslizamiento y separación casi perfectos. Después, se realizó un ajuste de los parámetros del elemento junta expansiva para reproducir los resultados experimentales. Se observó que variaciones en la rigidez normal del óxido apenas afectaban a los resultados, y que los demás parámetros apenas afectaban a la apertura de fisura; sin embargo, la deformación del tubo resultó ser muy sensible a variaciones en los parámetros del óxido, debido a la flexibilidad de la pared de los tubos, lo que resultó fundamental para determinar indirectamente los valores de los parámetros constitutivos del óxido. Finalmente, se realizaron simulaciones definitivas de los ensayos. El modelo reprodujo la profundidad de corrosión crítica y el comportamiento final de las curvas experimentales; se comprobó que la variación de diámetro interior de los tubos está fuertemente influenciada por su posición relativa respecto a la fisura principal, en concordancia con los resultados experimentales. De la comparación de los resultados experimentales y numéricos, se pudo extraer información sobre las propiedades del óxido que de otra manera no habría podido obtenerse. Corrosion of steel is one of the main pathologies affecting reinforced concrete structures exposed to marine environments or to molten salt. When corrosion occurs, an oxide layer develops around the reinforcement surface, which occupies a greater volume than the initial steel; thus, it induces internal pressure on the surrounding concrete that leads to cracking and, eventually, to full-spalling of the concrete cover. During the last years much effort has been devoted to understand the process of cracking; however, there is still a lack of knowledge regarding the mechanical behavior of the oxide layer, which is essential in the prediction of cracking. Thus, a methodology has been developed and applied in this thesis to gain further understanding of the behavior of the steel-oxide-concrete system, combining experiments and numerical simulations. 
Accelerated corrosion tests were carried out in laboratory conditions, using the impressed current technique. To get experimental information close to the oxide layer, concrete prisms with a smooth steel tube as reinforcement were selected as specimens, which were designed to get a single main crack across the cover. During the tests, the specimens were equipped with instruments that were specially designed to measure the variation of inner diameter and volume of the tubes, and the width of the main crack was recorded using a commercial extensometer that was adapted to the geometry of the specimens. The boundary conditions were carefully designed so that plane current and strain fields were expected during the tests, resulting in nearly uniform corrosion along the length of the tube, so that the tests could be reproduced in numerical simulations. Series of tests were carried out with various current densities and corrosion depths. Complementarily, the fracture behavior of concrete was characterized in independent tests, and the gravimetric loss of the steel tubes was determined by standard means. In all the tests, the main crack grew very slowly during the first microns of corrosion depth, but after a critical corrosion depth it fully developed and opened faster; the current density influenced the critical corrosion depth. The variation of inner diameter and inner volume of the tubes had different trends, which indicates that the deformation of the tube was not uniform. After accelerated corrosion, the specimens were cut into slices, which were used in post-corrosion tests. The pattern of cracking along the reinforcement was investigated in slices that were impregnated under vacuum with resin containing fluorescein to enhance the visibility of cracks under ultraviolet lighting, and a study was carried out to assess the presence of oxide inside the cracks. In all the specimens, a main crack developed through the concrete cover, which was infiltrated with oxide, and several thin secondary cracks around the reinforcement; the number of cracks diminished with the corrosion depth of the specimen. For specimens with the same corrosion, the number of cracks and their position varied from one specimen to another and between cross-sections of a given specimen, due to the heterogeneity of concrete. Finally, the bond between the steel and the concrete was investigated, using a device designed to push the steel tubes into the concrete. The curves of stress versus displacement of the tube presented a marked peak, followed by a steady descent, with a notable influence of the corrosion depth and the crack width on the residual stress. To simulate cracking of concrete due to corrosion of the reinforcement, a numerical model was implemented. It combines finite elements with an embedded adaptable crack, which reproduce cracking of concrete according to the standard cohesive crack model, and interface elements, so-called expansive joint elements, which were specially designed to reproduce the volumetric expansion of oxide and incorporate its mechanical behavior. In the expansive joint element, a debonding effect was implemented, consisting of sliding and separation, which was proved to be essential to achieve proper localization of cracks, and was achieved by strongly reducing the shear and the tensile stiffnesses of the oxide. With that model, simulations of the accelerated corrosion tests were carried out on 2-dimensional finite element models of the specimens.
For the fracture behavior of concrete, the properties experimentally determined were used as input. For the oxide, initially a fluidlike behavior was assumed with nearly perfect sliding and separation; then the parameters of the expansive joint element were modified to fit the experimental results. Changes in the bulk modulus of the oxide barely affected the results and changes in the remaining parameters had a moderate effect on the predicted crack width; however, the deformation of the tube was very sensitive to variations in the parameters of oxide, due to the flexibility of the tube wall, which was crucial for indirect determination of the constitutive parameters of oxide. Finally, definitive simulations of the tests were carried out. The model reproduced the critical corrosion depth and the final behavior of the experimental curves; it was assessed that the variation of inner diameter of the tubes is highly influenced by its relative position with respect to the main crack, in accordance with the experimental observations. From the comparison of the experimental and numerical results, some properties of the mechanical behavior of the oxide were disclosed that otherwise could not have been measured.
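In impressed-current tests of this kind, the nominal corrosion depth is commonly related to the applied charge through Faraday's law; the sketch below shows that relation with assumed test parameters (the current density and duration are example values, not figures from the thesis).

```python
# Faraday's-law estimate of nominal corrosion depth in an impressed-current test:
#   x = M_Fe * i * t / (z * F * rho_Fe)
# The impressed current density and test duration below are assumed example
# values; the material constants are standard.

M_FE   = 55.85      # molar mass of iron, g/mol
Z      = 2          # electrons per Fe ion (Fe -> Fe2+)
F      = 96485.0    # Faraday constant, C/mol
RHO_FE = 7.87       # steel density, g/cm^3

i_corr = 100e-6     # A/cm^2, assumed impressed current density
days   = 10         # assumed test duration

charge = i_corr * days * 86400.0                 # C/cm^2
depth_cm = M_FE * charge / (Z * F * RHO_FE)      # cm of steel converted to oxide
print(f"nominal corrosion depth ~ {depth_cm * 1e4:.1f} um after {days} days")
```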
Abstract:
Integrated master-oscillator power-amplifiers driven under steady-state injection conditions are known to show complex dynamics resulting in a variety of emission regimes. We present experimental results on the emission characteristics of a 1.5 µm distributed-feedback tapered master-oscillator power-amplifier in a wide range of steady-state injection conditions, showing different dynamic behaviors. The study combines the optical and radio-frequency spectra recorded under different levels of current injected into the master-oscillator and power-amplifier sections. Under low injection current of the master oscillator, the correlation between the optical and radio-frequency spectral maps makes it possible to identify operation regimes in which the device emission arises either from the master-oscillator mode or from the compound-cavity modes allowed by the residual reflectance of the amplifier front facet. The quasi-periodic occurrence of these emission regimes as a function of the amplifier current is interpreted in terms of a thermally tuned competition between the modes of the master oscillator and the compound-cavity modes. Under high injection current of the master oscillator, two different regimes alternate quasi-periodically as a function of the current injected into the power amplifier: a stable regime with single-mode emission at the master-oscillator frequency, and an unstable, complex self-pulsating regime showing strong peaks in the radio-frequency spectra as well as multiple frequencies in the optical spectra.