970 results for Pluto (Planet)


Relevance:

10.00%

Publisher:

Abstract:

A sustainable manufacturing process must rely on an equally sustainable supply of raw materials and energy. This paper presents the results of studies on sustainable business models for the minerals industry as a fundamental preliminary stage of a sustainable manufacturing process. As has happened in other economic activities, the mining and minerals industry has come under tremendous pressure to improve its social, developmental, and environmental performance. Mining, refining, and the use and disposal of minerals have in some instances led to significant local environmental and social damage. Nowadays, as in other parts of the corporate world, companies are routinely expected to perform to ever higher standards of behavior, going well beyond achieving the best rate of return for shareholders. They are also increasingly asked to be more transparent and subject to third-party audit or review, especially on environmental matters. In environmental terms, there are three inter-related areas where innovation and new business models can make the biggest difference: carbon, water and biodiversity. The focus is on these three areas for two reasons. First, the industrial and energy minerals industry has a significant footprint in each of them. Second, these are the areas where the potential environmental impacts go beyond local stakeholders and communities and can even be global, as in the case of carbon. Prioritizing efforts in these areas will therefore ultimately be a strategic differentiator as the industry's businesses continue to grow. Over the next forty years, the world's population is predicted to rise from 6.3 billion to 9.5 billion people. This will mean a huge demand for natural resources. Indeed, consumption rates are such that current demand for raw materials will probably soon exceed the planet's capacity. As awareness of this situation grows, the public is demanding goods and services that are ever more environmentally sustainable. This means that massive efforts are required to reduce the amount of materials we use, including freshwater, minerals and oil, biodiversity, and marine resources. It is clear that business as usual is no longer possible. Today, companies face not only the economic fallout of the financial crisis; they also face the substantial challenge of transitioning to a low-carbon economy constrained by dwindling, easily accessible natural resources. Innovative business models offer pioneering companies an early start toward the future. They can signal to consumers how to make sustainable choices and provide rewards for both the consumer and the shareholder. Climate change and carbon remain major risk discontinuities that we need to understand and deal with better. In the absence of a global carbon solution, the principal objective of any individual country should be to reduce its global carbon emissions by encouraging conservation. The mineral industry's internal response is to keep focusing on reducing the energy intensity of existing operations through energy efficiency and the progressive introduction of new technology. Planning of new projects must ensure that their energy footprint is minimal from the start. These actions will increase the long-term resilience of the business in the face of uncertain energy and carbon markets.
This focus, combined with strong demand for skills in this strategically important area, requires an appropriate change in the initial and continuing training of engineers and technicians and in their awareness of eco-design. It will also require the development of measurement tools for consistent comparisons between companies, and the integration of assessments of the carbon footprint of mining equipment and services into comprehensive studies of their impact on the sustainable development of the economy.

Relevance:

10.00%

Publisher:

Abstract:

Technology has changed the world, but the consequences of these changes for society have not always been well predicted. Information technology transformed the industrial method of production. The new industry produces ideas and concepts, not objects. This change has resulted in a dualized society: much of the middle class has disappeared, and the differences between the upper and lower classes have increased. The educational requirements of the new innovative jobs are higher than those of traditional industry, but lower for production jobs. Moreover, the number of jobs of this type is smaller than in traditional industry: less manpower is needed, processes can be automated, mechanical tasks are learned in a short time, and the jobs are temporary, their number depending on global demand. For the innovation process to work, companies gather in the financial districts of large cities, such as New York or London, which were the first with access to telecommunications networks. This produces synergies that improve the overall innovation process. These world-changing ideas and concepts need this production environment, which cannot be replicated, and they are so important that access to them is restricted for most of the world by various control mechanisms. The deployment of wireless telecommunications networks has been enormous in recent years. Customers want to call from anywhere and to carry Internet access on their mobile phones. To achieve this, mobile operators need to place cell towers in cities, but installing them near buildings is proving difficult. Few people want an antenna nearby because of the health problems suffered by those who already live or work near one. The effects of electromagnetic fields on humans are unclear and cause distrust of the antennas. The digitization of content, which was necessary to transmit content over the Internet, allows anyone with a computer and an Internet connection to publish an album, a film or a book. But that person can also copy the originals and send them anywhere in the world without the author's permission. In order to control unauthorized copying, copyright is being used to change laws and to introduce censorship systems on the Internet. These systems allow authors to remove illegal content, but they can also be used to censor any kind of information. Control of information is power, and using it one way or another affects the whole planet. The problem is not the technology, which is just a tool, but the way governments and large corporations use it.

Relevance:

10.00%

Publisher:

Abstract:

In 1998 the EXPORT team monitored microlensing event light curves using a charge-coupled device (CCD) camera on the IAC 0.8-m telescope on Tenerife to evaluate the prospect of using northern telescopes to find microlens anomalies that reveal planets orbiting the lens stars. The high airmass and more limited time available for observations of Galactic bulge sources make a northern site less favourable for microlensing planet searches. However, there are potentially a large number of northern 1-m class telescopes that could devote a few hours per night to monitor ongoing microlensing events. Our IAC observations indicate that accuracies sufficient to detect planets can be achieved despite the higher airmass.

Relevance:

10.00%

Publisher:

Abstract:

Basic effects and dynamical and electrical-contact issues in the physics of bare electrodynamic space tethers are discussed. Scientific experiments and power and propulsion applications, including a paradoxical use of bare tethers in outer-planet exploration, are considered.

Relevance:

10.00%

Publisher:

Abstract:

Three separate scenarios for an electrodynamic tether mission at Jupiter, following capture of a spacecraft (SC) into an equatorial, highly elliptical orbit around the planet with perijove at about 1.5 times the Jovian radius, are discussed. Repeated application of Lorentz drag on the spinning tether near perijove can progressively lower the apojove. One mission involves the tethered SC rapidly and frequently visiting the Galilean moons; elliptical orbits with apojove down at the Ganymede, Europa, and Io orbits are in 2:5, 4:9, and 1:2 resonances with the respective moons. About 20 slow flybys of Io would take place before the accumulated radiation dose exceeds 3 Mrad (Si) at 10 mm Al shield thickness, with a total duration of 5 months after capture (4 months for lowering the apojove to Io and one month for the flybys). The corresponding number of flybys for Ganymede would be 10, with a total duration of about 9 months. An alternative mission would have the SC acquire a low circular orbit around Jupiter, below the radiation belts, and manoeuvre to an optimal altitude, with no major radiation effects, in less than 5 months after capture. In a third mission, repeated thrusting near apojove, once down at the Io torus, would raise the perijove itself to the torus to acquire a low circular orbit around Io in about 4 months, for a total of 8 months after capture; this, however, corresponds to over 100 apojove passes with an accumulated dose of about 8.5 Mrad (Si), which poses a critical issue.
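
The quoted resonance ratios can be checked from Kepler's third law alone: fixing the perijove at 1.5 R_J, the stated period ratios put the apojove almost exactly at each moon's orbit. A quick sketch (the moon distances are standard values, not taken from the paper):

```python
# Sanity check (not from the paper): with perijove fixed at 1.5 R_J, Kepler's
# third law gives the tether orbit's semi-major axis for each stated resonance,
# and the implied apojove should land near the target moon's orbital radius.
# Moon distances in Jovian radii (R_J = 71,492 km) are standard values.

RESONANCES = {          # moon: (tether periods : moon periods, moon orbit radius in R_J)
    "Io":       (1, 2, 5.90),
    "Europa":   (4, 9, 9.39),
    "Ganymede": (2, 5, 14.97),
}
PERIJOVE = 1.5  # R_J, as stated in the abstract

for moon, (p, q, r_moon) in RESONANCES.items():
    # T_tether / T_moon = p/q  =>  a_tether = a_moon * (p/q)**(2/3)
    a = r_moon * (p / q) ** (2.0 / 3.0)
    apojove = 2.0 * a - PERIJOVE  # r_apo = 2a - r_peri for an ellipse
    print(f"{moon:8s}: a = {a:5.2f} R_J, apojove = {apojove:5.2f} R_J "
          f"(moon at {r_moon:.2f} R_J)")
```

Running this gives apojoves of about 5.9, 9.4 and 14.8 R_J, matching the Io, Europa and Ganymede orbital radii, which is why the stated flyby cadences follow directly from the resonances.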

Relevance:

10.00%

Publisher:

Abstract:

One of humanity's major challenges of the 21st century will be meeting future food demands on an increasingly resource-constrained planet. Global food production will have to rise by 70 percent between 2000 and 2050 to meet effective demand, which poses major challenges to food production systems. Doing so without compromising environmental integrity is an even greater challenge. This study looks at the interdependencies between land and water resources, agricultural production and environmental outcomes in Latin America and the Caribbean (LAC), an area of growing importance in international agricultural markets. Special emphasis is given to the role of LAC's agriculture in (a) global food security and (b) environmental sustainability. We use the International Model for Policy Analysis of Agricultural Commodities and Trade (IMPACT), a global dynamic partial equilibrium model of the agricultural sector, to run different future production scenarios and agricultural trade regimes out to 2050, and to assess changes in related environmental indicators. Results indicate that further trade liberalization is crucial for improving food security globally, but that it would also lead to more environmental pressure in some regions across Latin America. Contrasting land expansion with more intensified agriculture shows that productivity improvements are generally superior to agricultural land expansion from both an economic and an environmental point of view. Finally, our analysis shows that there are trade-offs between environmental and food security goals for all agricultural development paths.

Relevance:

10.00%

Publisher:

Abstract:

The spatial distribution of metal grades analysed on drill cores from the exploration campaigns of the Pallancata Vein is investigated. Factor analysis is applied to this distribution and to ratios of the metal values, discriminating those that are correlated with the silver mineralization and that serve as exploration guides for locating zones of potential reserves through their variation gradients. Abstract: The metal distribution in a vein may show the paths of hydrothermal fluid flow at the time of mineralization. Such information may assist in-fill drilling. The Pallancata Vein has been intersected by 52 drill holes, whose cores were sampled and analysed, and the results plotted to examine the mineralisation trends. The spatial distribution of the ore is observed from the logAg/logPb ratio distribution. Au is in this case closely related to Ag (electrum and uytenbogaardtite, Ag3AuS2). The Au grade shows the same spatial distribution as the Ag grade. The logAg/logPb ratio distribution also suggests possible ore at deeper locations. Shallow supergene Ag enrichment was also observed.
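
As a rough illustration of the workflow described above (the assay values below are synthetic and hypothetical, and scikit-learn's FactorAnalysis is just one possible implementation of the factor analysis step):

```python
# Illustrative sketch only: factor analysis of log-transformed drill-core
# assays, plus the logAg/logPb ratio used in the paper as an exploration guide.
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 200
# Hypothetical assays (ppm): an "ore" factor ties Ag and Au together,
# a second factor ties the base metals Pb and Zn.
ore = rng.normal(size=n)
base = rng.normal(size=n)
assays = pd.DataFrame({
    "Ag": 10 ** (2.0 + 0.5 * ore + 0.1 * rng.normal(size=n)),
    "Au": 10 ** (0.0 + 0.5 * ore + 0.1 * rng.normal(size=n)),
    "Pb": 10 ** (3.0 + 0.5 * base + 0.1 * rng.normal(size=n)),
    "Zn": 10 ** (3.1 + 0.5 * base + 0.1 * rng.normal(size=n)),
})

logs = np.log10(assays)                      # grades are roughly lognormal
fa = FactorAnalysis(n_components=2, random_state=0).fit(logs)
loadings = pd.DataFrame(fa.components_.T, index=assays.columns,
                        columns=["F1", "F2"])
print(loadings)        # Ag and Au should load together (the "ore" factor)

# The exploration guide used in the paper: the logAg/logPb ratio per sample.
ratio = logs["Ag"] / logs["Pb"]
print(ratio.describe())
```

In practice each sample would also carry its down-hole coordinates, so the ratio and factor scores can be contoured along the vein to reveal the zoning gradients mentioned above.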

Relevance:

10.00%

Publisher:

Abstract:

An electrodynamic tether system for power generation at Jupiter is presented that allows energy to be extracted from Jupiter's corotating plasmasphere while leaving the system's orbital energy unaltered to first order. The spacecraft is placed in a polar orbit with the tether spinning in the orbital plane, so that the resulting Lorentz force, neglecting Jupiter's magnetic dipole tilt, is orthogonal to the instantaneous velocity vector and orbital radius, and hence affects orbital inclination rather than orbital energy. In addition, the electrodynamic tether subsystem, which consists of two radial tether arms deployed from the main central spacecraft, is designed to extract maximum power while keeping the resulting Lorentz torque constantly null. The power-generation performance of the system and the effect on the orbital inclination are evaluated analytically for different orbital conditions and verified numerically. Finally, a thruster-based inclination-compensation manoeuvre at apoapsis is added, resulting in an efficient scheme to extract energy from the planet's plasmasphere with minimum propellant consumption and no net inclination change. A trade-off analysis shows that, depending on tether size and orbit characteristics, the system performance can be considerably higher than that of conventional power-generation methods.
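
The core argument can be stated in two lines of standard orbital mechanics (a generic derivation for orientation, not transcribed from the paper): a force orthogonal to the velocity does no work on the orbit, while its out-of-plane component still drives inclination through the Gauss variational equations.

```latex
% A force orthogonal to the velocity does no work on the orbit:
\frac{dE}{dt} = \mathbf{F}_L \cdot \mathbf{v} = 0
\qquad \text{for } \mathbf{F}_L \perp \mathbf{v},
% yet its out-of-plane component a_h still changes the inclination
% (standard Gauss variational equation; u = argument of latitude,
%  h = specific angular momentum):
\qquad
\frac{di}{dt} = \frac{r \cos u}{h}\, a_h .
```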

Relevance:

10.00%

Publisher:

Abstract:

Two complementary benchmarks have been proposed so far for the evaluation and continuous improvement of RDF stream processors: SRBench and LSBench. They focus on different features of the evaluated systems, including coverage of the streaming extensions of SPARQL supported by each processor, query processing throughput, and an early analysis of query evaluation correctness based on comparing the results obtained by different processors for a set of queries. However, neither of them has analysed the operational semantics of these processors in order to assess the correctness of query evaluation results. In this paper, we propose a characterization of the operational semantics of RDF stream processors, adapting well-known models from the stream processing engine community: CQL and SECRET. Through this formalization, we address correctness in RDF stream processor benchmarks, making it possible to determine the multiple answers that systems should provide. Finally, we present CSRBench, an extension of SRBench that addresses query result correctness verification using an automatic method.
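
For orientation, here is a minimal sketch of the CQL-style stream-to-relation operator on which such semantics hinge; this is a toy model, not the API of any of the benchmarked processors. SECRET's tick and scope parameters describe exactly when such a window reports and which elements it covers, which is why several different answer sequences can all be correct.

```python
# Minimal sketch (hypothetical, not any benchmarked engine's API): a CQL-style
# time-based sliding window over a stream of timestamped RDF triples. Engines
# differ in *when* the window reports (tick) and *which* elements it covers
# (scope), which is what SECRET formalizes.
from collections import deque

class TimeWindow:
    """Stream-to-relation operator: RANGE `width`, sliding on every arrival."""
    def __init__(self, width):
        self.width = width
        self.items = deque()          # (timestamp, (s, p, o)) pairs

    def push(self, t, triple):
        self.items.append((t, triple))
        while self.items and self.items[0][0] <= t - self.width:
            self.items.popleft()      # expire triples outside [t - width, t]
        return [x for _, x in self.items]   # current window contents

w = TimeWindow(width=10)
for t, triple in [(1, ("s1", "p", "o")), (5, ("s2", "p", "o")),
                  (14, ("s3", "p", "o"))]:
    print(t, w.push(t, triple))       # at t = 14, s1 has already expired
```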

Relevance:

10.00%

Publisher:

Abstract:

Evidence of the environmental impact associated with the operation of cities makes it urgent to propose measures that reduce resource consumption and improve the efficiency of urban metabolism. The limited availability of material and energy resources currently poses a significant challenge to humanity: are other forms of organization, better suited to this condition of limitation, possible? To answer this, it is essential to know how our urban systems will interact with the populations they host under an economic model of continuous development, and whether they will be able to adapt to the limited carrying capacity of the planet, avoiding the future collapse scenarios announced from various scientific fields. In this context, the thesis formulates a method of housing analysis that, through long-term assessment, makes it possible to propose intervention measures to reduce the resource consumption associated with its operation. The thesis takes an ecological approach, defining housing as a fundamental part of the urban system, composed of three elements: inhabitants, dwellings and natural resources. Therefore, in order to reduce the impacts associated with housing, it is necessary to take into account the different nature of these elements and the dynamic character of the relationships among them. Ecology, through the study of ecosystems, has identified strategies for survival and adaptation to changes and disturbances. The similarity between the elements of an ecosystem (population and support) and those of housing (inhabitants, dwellings and natural resources) makes it possible to verify that applying these strategies to housing can lead to more desirable situations in the relationship between urban systems and their environment. From this hypothesis, a diachronic, or long-term, analysis method is proposed to follow the evolution of housing over time through the quantification of a series of indicators describing inhabitants, dwellings and natural resources. Applying this method to an urban system makes it possible to know the characteristics of the system's elements and, looking to the future, to forecast their evolution, identify undesirable scenarios and propose intervention measures that correct these situations in advance and avoid unnecessary environmental impacts. The analysis method is validated by applying it retrospectively to the case of Madrid (1940 to 2010). From the study of the evolution of the selected indicators in Madrid, housing intervention measures based on ecological strategies are proposed that would have led to an alternative evolution with a lower environmental impact than the current situation. The method is then applied prospectively, defining three future scenarios for Madrid up to 2100. For each of them, the foreseeable evolution of the indicators is analysed and different intervention measures are proposed that, from an ecological point of view, would lead to a more adequate situation.
The thesis concludes on the value of this diachronic analysis method, which makes it possible to estimate possible future evolutions of housing, identify the resources needed to meet the demands of the inhabitants and, from an ecological approach, define, quantify and evaluate possible intervention measures that reduce environmental impacts. The method is therefore a useful tool for decision-making in housing intervention and in long-term urban planning.

Relevance:

10.00%

Publisher:

Abstract:

γ-ray astronomy studies the most energetic particles arriving at the Earth from outer space. These γ rays are not generated by thermal processes in ordinary stars, but by particle acceleration mechanisms in celestial objects such as active galactic nuclei, pulsars and supernovae, or possibly by dark matter annihilation processes. The γ rays coming from these objects and their characteristics provide valuable information with which scientists try to understand the physical processes occurring in them and to develop theoretical models that describe them faithfully. The problem with observing γ rays is that they are absorbed in the upper layers of the atmosphere and do not reach the surface (otherwise the planet would be uninhabitable). There are thus only two ways to observe γ rays: with detectors on board satellites, or by observing the secondary effects that γ rays produce in the atmosphere. When a γ ray reaches the atmosphere, it interacts with the particles in the air and generates a highly energetic electron-positron pair. These secondary particles in turn generate more secondary particles, each time less energetic. While these particles are still energetic enough to travel faster than the speed of light in air, they produce a bluish radiation known as Cherenkov light during a few nanoseconds. From the Earth's surface, special telescopes known as Cherenkov telescopes or IACTs (Imaging Atmospheric Cherenkov Telescopes) are able to detect the Cherenkov light and even to take images of the shape of the Cherenkov shower. From these images it is possible to determine the main characteristics of the original γ ray, and with enough γ rays important characteristics of the emitting object, hundreds of light-years away, can be deduced. However, detecting Cherenkov showers produced by γ rays is not easy. Showers generated by low-energy γ photons emit few photons and last only a few nanoseconds, while those corresponding to high-energy γ rays, although they produce more electrons and last longer, become less probable the higher their energy. This leads to two lines of development for Cherenkov telescopes: to observe low-energy showers, large reflectors are needed to collect as many as possible of the few photons these showers produce; high-energy showers, on the contrary, can be detected with small telescopes, but a large area on the ground should be covered with them to increase the number of detected events. The CTA (Cherenkov Telescope Array) project was born with the aim of improving the sensitivity of current Cherenkov telescopes in the high (> 10 TeV), medium (100 GeV - 10 TeV) and low (10 GeV - 100 GeV) energy ranges. This project, with more than 27 participating countries, intends to build an observatory in each hemisphere, each equipped with 4 large-size telescopes (LSTs), around 30 medium-size telescopes (MSTs) and up to 70 small-size telescopes (SSTs). With such an array, two goals will be achieved. First, by drastically increasing the collection area with respect to current IACTs, more γ rays will be detected in all energy ranges. Second, when the same Cherenkov shower is observed by several telescopes at once, it can be analysed much more precisely thanks to stereoscopic techniques.
This thesis gathers several technical developments contributed to the medium and large telescopes of CTA, specifically to the trigger system. Since Cherenkov showers are so brief, the systems that digitize and read out the data of each pixel have to work at very high frequencies (≈1 GHz), which makes continuous operation unfeasible, as the amount of stored data would be unmanageable. Instead, the analogue signals are sampled, and the samples are kept in a circular buffer of a few µs. While the signals remain in the buffer, the trigger system performs a fast analysis of the received signals and decides whether the image in the buffer corresponds to a Cherenkov shower and deserves to be stored, or whether it can be ignored, allowing the buffer to be overwritten. The decision of whether the image deserves to be stored is based on the fact that Cherenkov showers produce photon detections in nearby pixels at very close times, unlike NSB (night sky background) photons, which arrive randomly. To detect large showers it is enough to check that more than a certain number of pixels in a region have detected more than a certain number of photons within a time window of a few nanoseconds. To detect small showers, however, it is more convenient to take into account how many photons have been detected in each pixel (a technique known as sumtrigger). The trigger system developed in this thesis aims to optimize the sensitivity at low energies, so it analogically sums the signals received in each pixel of a trigger region and compares the result with a threshold directly expressible in detected photons (photoelectrons). The system designed allows trigger regions of selectable size, 14, 21 or 28 pixels (2, 3 or 4 clusters of 7 pixels each), with a high degree of overlap between them. In this way, any excess of light in a compact region of 14, 21 or 28 pixels is detected and generates a trigger pulse. In the most basic version of the trigger system, this pulse is distributed throughout the camera through a delicate distribution system, so that all clusters are read at the same time, regardless of their position in the camera. The trigger system thus saves a complete image of the camera every time the number of photons set as threshold is exceeded in a trigger region. However, this way of operating has two main drawbacks. First, the shower almost always occupies only a small area of the camera, so many pixels are stored without any information. When there are many telescopes, as will be the case for CTA, the amount of useless information stored for this reason can be considerable. Second, each trigger stores only a few nanoseconds around the trigger instant, whereas large showers can last considerably longer, so part of the information is lost to temporal truncation. To solve both problems, a trigger and readout scheme based on two thresholds has been proposed: the high threshold decides whether there is an event in the camera and, if so, only the trigger regions exceeding the low threshold are read, for a longer time.
In this way, storing information from empty pixels is avoided, and the fixed images of the showers become short "videos" that represent the temporal development of the shower. This new scheme is named COLIBRI (Concept for an Optimized Local Image Building and Readout Infrastructure) and is described in detail in chapter 5. An important problem affecting sumtrigger schemes such as the one presented in this thesis is that, in order to sum the signals coming from each pixel properly, they must take the same time to reach the adder. The photomultipliers used in each pixel introduce different delays that must be compensated to perform the sums correctly. The effect of these delays has been studied, and a system to compensate them has been developed. Finally, the next level of the trigger system for effectively distinguishing Cherenkov showers from the NSB consists of looking for simultaneous (or very close in time) triggers in neighbouring telescopes. This function, together with other interfacing functions between systems, has been implemented in a system called the Trigger Interface Board (TIB). It consists of a module that will be mounted in the camera of each LST or MST and connected by optical fibres to the neighbouring telescopes. When a telescope produces a local trigger, it is sent to all connected neighbours and vice versa, so that each telescope knows whether its neighbours have triggered. Once the delay differences due to propagation in the optical fibres, and of the Cherenkov photons themselves in the air depending on the pointing direction, have been compensated, coincidences are sought, and if the trigger condition is fulfilled, the camera in question is read out in synchronization with the local trigger. Although the whole trigger system is the result of a collaboration among several groups, mainly IFAE, CIEMAT, ICC-UB and UCM in Spain, with the help of French and Japanese groups, the core of this thesis is the Level 1 trigger and the Trigger Interface Board, the two systems for which the author has been the principal engineer. For this reason, abundant technical information about these systems is included in this thesis. There are currently important future lines of development concerning both the camera trigger (implementation in ASICs) and the inter-telescope trigger (topological trigger), which will lead to interesting improvements over the current designs in the coming years and will hopefully benefit the whole scientific community participating in CTA.
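
As a rough digital caricature of the sumtrigger decision described above (the camera geometry, signal levels and threshold below are invented for illustration; the real Level 1 sums analogue signals in hardware):

```python
# Assumption-laden sketch, not the Level 1 firmware: sum the per-pixel signals
# over each overlapping trigger region and fire if any sum exceeds a threshold
# expressed in photoelectrons.
import numpy as np

N_CLUSTERS = 12                 # hypothetical small camera: clusters of 7 pixels
CLUSTERS_PER_REGION = 3         # selectable 2, 3 or 4 -> 14, 21 or 28 pixels
THRESHOLD_PE = 25.0             # trigger threshold in detected photoelectrons

rng = np.random.default_rng(0)
signals = rng.poisson(0.4, size=(N_CLUSTERS, 7)).astype(float)  # NSB-like noise
signals[4:7] += 2.0             # a faint "shower" spread over three clusters

cluster_sums = signals.sum(axis=1)
# Overlapping regions of consecutive clusters; the real overlap follows the
# camera geometry, consecutive indices stand in for adjacency here.
for start in range(N_CLUSTERS - CLUSTERS_PER_REGION + 1):
    region_sum = cluster_sums[start:start + CLUSTERS_PER_REGION].sum()
    if region_sum > THRESHOLD_PE:
        print(f"trigger: clusters {start}..{start + CLUSTERS_PER_REGION - 1}, "
              f"sum = {region_sum:.1f} pe")
```

Because the regions overlap, a faint light excess straddling cluster boundaries still lands fully inside some region, which is the point of the high-overlap design described above.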

Relevance:

10.00%

Publisher:

Abstract:

The objective of this Final Degree Project (PFC) is to carry out a reliability test of electronic components, specifically silicon mini-modules, in order to study their behaviour over their lifetime. Because of the long service life of silicon mini-modules, a test of this kind could take years, so it is necessary to perform an accelerated test that significantly shortens the duration of the experiment; to this end, the modules must be subjected to higher stresses than under normal operating conditions. Today, silicon mini-modules, which we know as photovoltaic solar panels, are used in countless devices because of the many advantages they offer. The main advantage is the ability to bring electricity to any part of the planet without large investments. This electricity comes from an inexhaustible, non-polluting energy source, helping to maintain the balance of the planet. Most of the time these photovoltaic panels are used outdoors, enduring large changes in temperature and humidity; hence the importance of reliability tests, which reveal the possible causes of failure, the effects these failures produce, and the design, manufacturing and maintenance aspects that may affect them. The silicon mini-modules used in this project are the MC-SP0.8-NF-GCS model from the manufacturer Multicomp. The project would normally have required a climatic chamber to simulate given environmental conditions, but owing to the difficulty of illuminating the module inside a climatic chamber, we developed a new temperature-accelerated test system. The new accelerated test system consists of:
•Placing the photovoltaic modules in the laboratory under a 500 W lamp that irradiates the equivalent of sunlight.
•Operating the three modules at three different temperatures, namely 60 °C, 72 °C and 84 °C, to simulate different environmental conditions.
•Using an automatic measurement system designed in LabVIEW to take simultaneous voltage measurements on the three panels and study the degree of degradation of each one.
The results obtained from each measurement are analysed, and a study is made of the reliability and of the degradation process undergone by the silicon mini-modules. This PFC can be divided into the following work phases, the test itself being the longest:
•Search for bibliography, documentation and applicable standards.
•Familiarization with the equipment and software, studying the use of the software supplied with the Keithley 2601 multimeter and the LabVIEW program.
•Development of the hardware and systems needed to carry out the test.
•Assembly of the test setup.
•Execution of the test.
•Analysis of results.
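
For orientation, accelerated temperature tests of this kind are commonly sized with the Arrhenius acceleration factor; the sketch below uses a placeholder activation energy, not a value taken from the project:

```python
# Standard Arrhenius acceleration-factor calculation often used to size a
# temperature-accelerated life test (the activation energy is a hypothetical
# placeholder, not a value from this project).
import math

K_BOLTZMANN = 8.617e-5   # Boltzmann constant, eV/K
EA = 0.7                 # eV, hypothetical activation energy of the failure mode
T_USE = 25.0 + 273.15    # assumed field temperature, K

for t_stress_c in (60.0, 72.0, 84.0):       # the three test temperatures
    t_stress = t_stress_c + 273.15
    af = math.exp((EA / K_BOLTZMANN) * (1.0 / T_USE - 1.0 / t_stress))
    print(f"{t_stress_c:4.0f} °C: acceleration factor ≈ {af:6.1f}")
```

With these assumptions the three stress levels compress test time by factors of roughly 18, 41 and 90 relative to 25 °C operation, which is why months of stress testing can stand in for years in the field.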

Relevance:

10.00%

Publisher:

Abstract:

Photosynthesis is the biological process that supports primary production and, therefore, life on our planet. Photosynthetic rates are determined by the biochemical "machinery" and by the diffusive resistances to the transfer of CO2 from the atmosphere to the sites of fixation within the chloroplasts. Historically, the largest diffusive resistance was attributed to stomatal closure, but we now know, thanks to improvements in experimental techniques, that there is also a large resistance opposing the diffusion of CO2 from the intercellular spaces to the sites of carboxylation. This resistance, usually quantified through its inverse, the mesophyll conductance (gm), can be as large as or even larger than the stomatal resistance. In this PhD thesis I have characterized the limitation exerted by mesophyll resistance on CO2 fixation in several forest species at different stages of their life cycle. In seedlings, we studied three environmental situations relevant to survival: water deficit, its interaction with irradiance, and the transfer from growth in the shade to higher irradiance, as can occur when a gap opens in the forest canopy. In mature trees, we characterized water status and gas exchange in leaves developed at different irradiances within the canopy over the course of three years of contrasting rainfall. For each study, the most appropriate ecophysiological techniques were used to assess water status and gas exchange. Owing to its complexity and the lack of a method for direct quantification, gm was estimated by the most widely used methods: carbon isotope discrimination, the variable J method, the constant J method and the curvature method. The most significant results show that the relative limitation of photosynthesis by stomatal conductance, mesophyll conductance and biochemistry depends on the position of the leaf within the canopy. For the first time it was documented that, under water stress, shade-developed leaves were more limited by a reduction in gm, while sun-developed leaves were more limited by a greater reduction in stomatal conductance (gsw). We found a good connection between the leaf photosynthetic apparatus and the hydraulic system, shown by the high correlations between the apparent leaf hydraulic conductance and the CO2 concentration in the chloroplasts in several forest species. In addition, we revealed different patterns of gas-exchange regulation according to the functional ecology of the species studied. In naturally grown saplings and mature trees, as in greenhouse-grown seedlings, ontogeny affected the limitations of photosynthesis produced by water stress: stomatal limitations dominated in younger leaves and non-stomatal limitations in more mature leaves. Transfer to high light caused a large decrease in gm during the days following the transfer, and this effect was greater the deeper the shade in which the leaves had developed. Acclimation of the leaves to high irradiance was linked to leaf anatomical modifications and to the developmental stage of the leaf.
The gm/gsw ratio determined a higher water-use efficiency and a lower oxidative stress during water stress and subsequent rehydration, which suggests using this ratio in breeding programmes for resistance to water stress. Because most models for estimating the gross primary production (GPP) of an ecosystem do not include gm, they overestimate GPP, particularly under water stress, since more than half of the reduction in photosynthesis in shade-developed leaves is due to the reduction in gm. Finally, an analysis is presented of how estimates of gm under water stress are affected by the refixation of CO2 emitted from the mitochondria as a consequence of photorespiration and mitochondrial respiration in the light.
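
For reference, the variable J method mentioned above is conventionally written as follows (the standard Harley et al. 1992 formulation, shown for orientation rather than transcribed from the thesis):

```latex
% Variable J method (Harley et al. 1992): the chloroplast CO2 concentration
% C_c is inferred from gas exchange (net assimilation A_N, intercellular CO2
% C_i, day respiration R_d, CO2 compensation point \Gamma^*) and the
% fluorescence-based electron transport rate J; g_m then follows from
% Fick's law:
C_c = \frac{\Gamma^{*}\,\bigl[J + 8\,(A_N + R_d)\bigr]}{J - 4\,(A_N + R_d)},
\qquad
g_m = \frac{A_N}{C_i - C_c}.
```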

Relevance:

10.00%

Publisher:

Abstract:

The study of granular systems is of great interest to many fields of science and technology. The packing of the particles affects the physical properties of the granular system. In particular, the crucial influence of the particle size distribution (PSD) on the random packing structure increases the interest in relating the two, either theoretically or by computational methods. A packing computational method is developed in order to estimate the void fraction corresponding to a fractal-like particle size distribution.
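
A minimal sketch of the usual starting point for such a method, assuming the fractal-like PSD is modelled as a truncated power-law number-size distribution N(>d) ∝ d^(-Df); the parameter values are placeholders, and the packing step itself is beyond this snippet:

```python
# Sketch (assumptions flagged inline): draw particle diameters from a
# fractal-like number-size distribution N(>d) ~ d^(-Df), truncated to
# [D_MIN, D_MAX], via inverse-CDF sampling of the truncated power law.
import numpy as np

DF = 2.6                  # hypothetical fractal dimension of the PSD
D_MIN, D_MAX = 0.05, 2.0  # hypothetical size limits, mm
N = 100_000

rng = np.random.default_rng(1)
u = rng.random(N)
# Inverse CDF of the truncated power law F(d) on [D_MIN, D_MAX]:
d = D_MIN * (1.0 - u * (1.0 - (D_MIN / D_MAX) ** DF)) ** (-1.0 / DF)

volume = (np.pi / 6.0) * d ** 3   # sphere volume per particle
print("number-weighted mean diameter:", d.mean())
print("volume fraction carried by d > 1 mm:",
      volume[d > 1.0].sum() / volume.sum())
```

A packing code would then place spheres with these diameters (e.g. by random sequential deposition) and report the void fraction of the resulting structure as a function of Df.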

Relevance:

10.00%

Publisher:

Abstract:

Important physical and biological processes in soil-plant-microbial systems are dominated by the geometry of soil pore space, and a correct model of this geometry is critical for understanding them. We analyze the geometry of soil pore space using X-ray computed tomography (CT) of intact soil columns. We present here some preliminary results of our investigation of the Minkowski functionals of parallel sets as a characterization of soil structure. We also show how the evolution of Minkowski morphological measurements of parallel sets may help to characterize the influence of conventional tillage and of a permanent cover crop of resident vegetation on soil structure in a Spanish Mediterranean vineyard.
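
A minimal sketch of the parallel-set construction, assuming a binary pore image and unit voxels (not the authors' actual pipeline): the r-parallel body of the pore phase falls out of a single distance transform, and its voxel count gives the first Minkowski functional (volume) as a function of r.

```python
# Sketch of the parallel-set idea on a stand-in binary CT volume: the
# r-parallel body of the pore phase is every voxel within distance r of a
# pore voxel, obtained from one Euclidean distance transform. Surface area,
# mean breadth and Euler number (the remaining Minkowski functionals) would
# be estimated analogously with morphological tools.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
pores = rng.random((64, 64, 64)) < 0.15          # stand-in binary CT volume

# Distance from every solid voxel to the nearest pore voxel
# (pore voxels themselves get distance 0):
dist = ndimage.distance_transform_edt(~pores)

voxel_volume = 1.0                               # assume unit voxels
for r in (0, 1, 2, 4):
    parallel_body = dist <= r                    # r-parallel set of the pores
    v = parallel_body.sum() * voxel_volume
    print(f"r = {r}: V(parallel set) = {v:.0f} voxels")
```

How fast V(r) saturates as r grows encodes how finely the pore network permeates the soil volume, which is the kind of tillage-versus-cover-crop contrast described above.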