422 results for scientist


Relevance: 10.00%

Abstract:

The analysis of research data plays a key role in data-driven areas of science. A variety of mixed research data sets exist, and scientists aim to derive or validate hypotheses to find undiscovered knowledge. Many analysis techniques identify relations across an entire dataset only, which may level out the characteristic behavior of different subgroups in the data. As with automatic subspace clustering, we aim to identify interesting subgroups and attribute sets. We present a visual-interactive system that supports scientists in exploring interesting relations between aggregated bins of multivariate attributes in mixed data sets. Abstracting the data into bins enables the application of statistical dependency tests as the measure of interestingness. An overview matrix view shows all attributes, ranked with respect to the interestingness of their bins. As a complement, a node-link view reveals multivariate bin relations by positioning dependent bins close to each other. The system supports information drill-down based on both expert knowledge and algorithmic support. Finally, visual-interactive subset clustering assigns multivariate bin relations to groups. A list-based cluster result representation enables the scientist to communicate multivariate findings at a glance. We demonstrate the applicability of the system with two case studies from the earth observation domain and the prostate cancer research domain. In both cases, the system enabled us to identify the most interesting multivariate bin relations, to validate already published results, and, moreover, to discover unexpected relations.
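The measure of interestingness described above is a statistical dependency test applied to aggregated bins. As a minimal sketch of the idea (the binning choice, the synthetic data and the use of a chi-square test are illustrative assumptions, not the paper's exact implementation), testing two binned attributes for dependence could look like this:

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)
x = rng.normal(size=500)                       # attribute A
y = 0.6 * x + rng.normal(scale=0.8, size=500)  # attribute B, dependent on A

# Aggregate both attributes into 5 bins each; the 2-D histogram is a
# contingency table of bin co-occurrences.
table, _, _ = np.histogram2d(x, y, bins=5)

# Chi-square test of independence: the smaller the p-value, the more
# "interesting" (statistically dependent) the bin relation.
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.3g}")
```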

Relevance: 10.00%

Abstract:

During leg ANT-XXIII/9, on 31 March 2007, the German research vessel Polarstern mapped a significant bathymetric feature with its swath sonar system at the north-west margin of the Kerguelen Plateau. Because the feature was discovered just a month after the third IPY 2007/2008 had started, it was named after Graf Wilczek, who, together with Carl Weyprecht, had promoted the first IPY. The undersea feature name proposal was officially accepted by the GEBCO Sub-Committee on Undersea Feature Names (SCUFN) at its 20th meeting in late July and was added to the GEBCO Gazetteer of UFN (http://www.iho.shom.fr/COMMITTEES/GEBCO/SCUFN/scufn_intro.htm). ______________ Graf Hans Wilczek (notation of the name from the book of Wilczek's daughter Elisabeth Kinsky-Wilczek). In 1871 the Austrian naval hero Tegetthoff planned an expedition to the southern hemisphere. The geophysicist G. Neumayer (1826-1909) had already been selected as its chief scientist, and the naval officer Carl Weyprecht (1838-1881) and the mountaineer Julius Payer (1841-1915) were also to participate. Because of Tegetthoff's sudden death the project came to a halt and eventually was cancelled. With the support of the well-known geographer August Petermann (1822-1878), Weyprecht and Payer made a voyage into the Barents Sea which led them to believe they had seen the "open polar sea". A further undertaking to confirm and extend the find was the obvious next step. At this stage Count Hans Wilczek (1837-1922) became involved. He not only fostered a new expedition with a considerable sum of money, but also took part by commanding a support vessel to Novaya Zemlya. Wilczek managed to get home, but the expedition vessel under Weyprecht's command was beset in the pack ice for two years and at last had to be abandoned. After an adventurous trip back to civilisation, Weyprecht changed his mind about what he considered the best way of conducting polar research. Together with Wilczek, in 1875 he began promoting international station-based polar exploration - the IPY was born. Wilczek guaranteed the establishment of an Austrian station on Novaya Zemlya and was ready to winter over there personally. Because of several political and other obstructions, the beginning of the IPY was delayed until 1882; Wilczek's friend Weyprecht had already passed away. The command of the Austrian station, eventually erected on Jan Mayen, was given to Emil v. Wohlgemuth (1843-1896). Wilczek financed the main part of the Austrian IPY participation. Wilczek is described as honest and popular: although acquainted with the most prominent figures of his day, he respected everybody and maintained many relationships with scientists and artists. There is a kind of autobiography under the title Hans Wilczek erzählt seinen Enkeln Erinnerungen aus seinem Leben (Hans Wilczek tells his grandchildren reminiscences from his life), edited by his daughter Elisabeth Kinsky-Wilczek, Graz 1933, 502 p. The book is available in an English version: Happy Retrospect - the Reminiscences of Count Wilczek 1837-1922, Bell and Sons, London 1934, 295 p.

Relevance: 10.00%

Abstract:

The ROV operations had three objectives: (1) to check whether the "Cherokee" system is suited for advanced benthological work on the high-latitude Antarctic shelf; (2) to support the disturbance experiment by providing immediate visual information; (3) to continue ecological work started in 1989 at the hilltop situated at the northern margin of the Norsel Bank off the 4-Seasons Inlet (Weddell Sea). The "Cherokee" was equipped with 3 video cameras, 2 of which support the operation. A high-resolution Tritech Typhoon camera is used for scientific observations to be recorded. In addition, the ROV has a manipulator, a still camera, lights and strobe, a compass, 2 lasers, a Posidonia transponder and an obstacle-avoidance sonar. The size of the vehicle is 160 × 90 × 90 cm. In the present configuration without a TMS (tether management system), the deployment has to start with paying out the full cable length, laying it in loops on deck and connecting the glass fibres at the tether's spool winch. After a final technical check the vehicle is deployed into the water, actively driven perpendicular to the ship's axis, and floats are fixed to the tether. At a cable length of approx. 50 m, the tether is fastened to the depressor by several cable ties and both components are lowered towards the sea floor, the vehicle by the thrusters' propulsion and the depressor by the ship's winch. At 5 m intervals the tether has to be tied to the single-conductor cable. In good weather conditions the instruments supporting the navigation of the ROV, especially the Posidonia system, allow an operation mode that follows the ship's course if the ship's speed is slow. Together with the lasers, which act as a scale in the images, they also allow a reproducible scientific analysis, since the transect can be plotted in a GIS; consequently, the area observed can be easily calculated. Operation as a predominantly drifting system, especially in areas with near-bottom currents, is also possible; however, the connection of the tether at the rear of the vehicle is unsuitable for such conditions. The recovery of the system mirrors the deployment. Most important is to reach the sea surface at a safe distance perpendicular to the ship's axis in order not to interfere with the ship's propellers. During this phase the Posidonia transponder system is of high relevance, although it has to be switched off at a water depth of approx. 40 m. The minimum personnel needed is 4 persons to handle the tether on deck, one person to operate the ship's winch, one pilot and one additional technician for the ROV operation itself, one scientist, and one person on the ship's bridge in addition to one on deck for whale watching when the Posidonia system is in use. The time for the deployment of the ROV until it reaches the sea floor depends on the water depth and consequently on the length of cable to be paid out beforehand and tied to the single-conductor cable. Deployment and recovery at intermediate water depths can each last up to 2 hours. A reasonable time for benthological observations close to the sea floor is 1 to 3 hours but can be extended if scientifically justified. Preliminary results: after a first test station, the ROV was deployed 3 times for observations related to the disturbance experiment. A first attempt to cross the hilltop at the northern margin of the Norsel Bank close to the 4-Seasons Inlet was successful only for the first few hundred metres of transect length. The benthic community was dominated in biomass by the demosponge Cinachyra barbata. Because of the strong current of approx. 1 nm/h, the design of the system, and an expectedly more difficult current regime between grounded icebergs and the top of the hilltop, the operation was stopped before the hilltop was reached. In a second attempt the hilltop was successfully crossed, because the current and wind situation was much more favourable. In contrast to earlier expeditions with the "sprint" ROV, it was the first time that both slopes, the smoother one in the northeast and the steeper one in the southwest, were continuously observed during one cast. A coarse classification of the hilltop fauna shows patches dominated by single taxa: cnidarians, hydrozoans, holothurians, sea urchins and stalked sponges. Approximately 20% of the north-eastern slope was devastated by grounding icebergs; here the sediments consisted of large boulders, gravel or blocks of finer sediment resembling an irregularly ploughed field. On the Norsel Bank the Cinachyra concentrations were locally associated with high abundances of sea anemones. Total observation time amounted to 11.5 hours, corresponding to roughly 6-9 km of transect length.
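The laser-scale area estimate mentioned above can be illustrated with a short calculation; all numbers below (laser separation, image geometry, transect length) are invented for illustration and are not taken from the cruise report:

```python
# Laser dots with known physical separation give the image scale;
# footprint width times transect length yields the area surveyed.
laser_separation_m = 0.10   # physical distance between the two lasers
laser_pixels = 50           # separation of the laser dots in the image
image_width_pixels = 720    # width of the video frame

footprint_width_m = image_width_pixels * laser_separation_m / laser_pixels
transect_length_m = 7500    # mid-range of the 6-9 km reported above
area_m2 = footprint_width_m * transect_length_m
print(f"footprint {footprint_width_m:.2f} m wide, "
      f"area surveyed {area_m2:,.0f} m^2")
```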

Relevance: 10.00%

Abstract:

The Baseline Surface Radiation Network (BSRN) and its central archive, the World Radiation Monitoring Center (WRMC), were created in 1992. It is a project of the Data Assimilation Panel of the Global Energy and Water Cycle Experiment (GEWEX) under the umbrella of the World Climate Research Programme (WCRP), and as such is aimed at detecting important changes in the Earth's radiation field at the Earth's surface which may be related to climate change. The data are of primary importance in supporting the validation and confirmation of satellite and computer-model estimates of these quantities. At a small number of stations in contrasting climatic zones, covering a latitude range from 80°N to 90°S, solar and atmospheric radiation is measured with instruments of the highest available accuracy and with high time resolution (1 to 3 minutes). Since 2008 the WRMC has been hosted by the Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research (AWI), Bremerhaven, Germany (http://www.bsrn.awi.de/).

Relevance: 10.00%

Abstract:

Within both the aesthetic and historical fields, civil engineering occupies a privileged place among the arts whose manifestations are based on drawing. In this work, Leonardo's creativity concerning civil bridge projects has been studied. Leonardo designed ten bridges: eight of them intended for military purposes and only two planned purely for civil functionality - the "Ponte sul corno d'oro", in folio 66 of Manuscript L, and the "Ponte a due piani", represented in folio 23 of Manuscript B at the Institute of France. There can be no doubt about Leonardo's intentions when he set about designing these two bridges: his creative genius focused on endowing the structures with both singularity and functionality; they should be admired and used at the same time, a monument for civil society to be used. The work presented here attempts to take a scientific-historical journey through these bridges of Leonardo, highlighting their technical, geometrical and aesthetic characteristics, as well as emphasizing Leonardo's human, scientific and artistic nature.

Relevance: 10.00%

Abstract:

Diffusion controls the gaseous transport process in soils when advective transport is almost null. Knowledge of soil structure and pore connectivity is critical to understanding and modelling soil aeration, the sequestration or emission of greenhouse gases, and the volatilization of volatile organic chemicals, among other phenomena. In recent decades these issues have attracted increasing attention, as scientists have realized that soil is one of the most complex materials on Earth, within which many biological, physical and chemical processes that support life and affect climate change take place. A quantitative and explicit characterization of soil structure is difficult because of the complexity of the pore space; this is the main reason why most theoretical approaches to soil porosity are idealizations that simplify the system. In this work we propose a more realistic attempt to capture the complexity of the system, developing a model that considers the size and location of pores in order to relate them in a network. In the model we interpret porous soils as heterogeneous networks, where pores are represented by nodes, characterized by their size and spatial location, and links represent flows between them. We perform an analysis of the community structure of soil porous media represented as networks. For different real soil samples, modelled as heterogeneous complex networks, spatial communities of pores have been detected depending on the values of the parameters of the porous soil model used. This type of model is known as Heterogeneous Preferential Attachment (HPA). Through an exhaustive analysis of the model, analytical solutions are obtained for the degree densities and degree distribution of the pore networks generated in the thermodynamic limit, and it is shown that the networks exhibit properties similar to those observed in other complex networks. To study the topological properties of these networks in more detail, the presence of soil pore community structures is examined. The detection of communities of pores, as groups densely connected internally with only sparser connections between groups, could contribute to understanding the mechanisms of diffusion phenomena in soils.
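As a rough illustration of the network representation and community detection described above (the node attributes mimic the HPA ingredients, but the linking rule and all parameters below are invented stand-ins for the actual model, and greedy modularity optimization is just one of several detection algorithms):

```python
import itertools
import random

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

random.seed(1)

# Toy heterogeneous pore network: each node carries a size and a 3-D
# position, the two ingredients of the HPA model.
G = nx.Graph()
for i in range(60):
    G.add_node(i, size=random.uniform(0.1, 1.0),
               pos=tuple(random.uniform(0, 10) for _ in range(3)))

# Illustrative linking rule: connect pores whose distance is small
# relative to their combined size (a stand-in for the model's
# size- and distance-dependent attachment probability).
for u, v in itertools.combinations(G.nodes, 2):
    d = sum((a - b) ** 2 for a, b in
            zip(G.nodes[u]["pos"], G.nodes[v]["pos"])) ** 0.5
    if d < 2.5 * (G.nodes[u]["size"] + G.nodes[v]["size"]):
        G.add_edge(u, v)

# Communities: groups of pores densely connected internally and only
# sparsely connected between groups, i.e. candidate diffusion pathways.
communities = greedy_modularity_communities(G)
print([sorted(c) for c in communities])
```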

Relevance: 10.00%

Abstract:

This paper presents some brief considerations on the role of Computational Logic in the construction of Artificial Intelligence systems and in programming in general. It does not address how the many problems in AI can be solved but, rather more modestly, tries to point out some advantages of Computational Logic as a tool for the AI scientist in his quest. It addresses the interaction between declarative and procedural views of programs (deduction and action), the impact of the intrinsic limitations of logic, the relationship with other apparently competing computational paradigms, and finally discusses implementation-related issues, such as the efficiency of current implementations and their capability for efficiently exploiting existing and future sequential and parallel hardware. The purpose of the discussion is in no way to present Computational Logic as the unique overall vehicle for the development of intelligent systems (in the firm belief that such a panacea is yet to be found) but rather to stress its strengths in providing reasonable solutions to several aspects of the task.

Relevance: 10.00%

Abstract:

The educational platform Virtual Science Hub (ViSH) has been developed as part of the GLOBAL excursion European project. ViSH (http://vishub.org/) is a portal where teachers and scientists interact to create virtual excursions to science infrastructures. The main motivation behind the project was to connect teachers - and in consequence their students - to scientific institutions and the wide range of infrastructures and resources they work with. Thus the idea of a hub was born that would allow the two worlds of scientists and teachers to connect and to innovate science teaching. The core of ViSH's concept design is based on virtual excursions, which allow a number of pedagogical models to be applied. According to our internal definition, a virtual excursion is a tour through some digital context by teachers and pupils on a given topic that is attractive and has an educational purpose. Inquiry-based, project-based and problem-based learning are the most prominent approaches that a virtual excursion may serve. The domain-specific resources and scientific infrastructures currently available on ViSH focus on life sciences, nanotechnology, biotechnology, and grid and volunteer computing. The virtual excursion approach allows an easy combination of these resources into interdisciplinary teaching scenarios. In addition, social networking features support the users in collaborating and communicating around these excursions and thus create a community of interest for innovative science teaching. The design and development phases followed a participatory design approach. An important aspect of this process was to create design partnerships among all the actors involved - researchers, developers, infrastructure providers, teachers, social scientists and pedagogical experts - early in the project. A joint sense of ownership was created, and important changes during the conceptual phase were implemented in ViSH thanks to early user feedback. Technology-wise, ViSH is based on the latest web technologies in order to make it cross-platform compatible, so that it works on several operating systems such as Windows, Mac or Linux, and multi-device accessible, on desktop, tablet and mobile devices. The platform has been developed in HTML5, the latest standard for web development, ensuring that it can run on any modern browser. In addition to the social networking features, a core element of ViSH is the virtual excursions editor. It is a web tool that allows teachers and scientists to create rich mash-ups of learning resources provided by the e-Infrastructures (i.e. remote laboratories and live webcams). These rich mash-ups can be presented in either slides or flashcards format. Taking advantage of the supported web architecture, additional powerful components have been integrated, such as a recommendation engine providing personalized suggestions about educational content or interesting users, and a videoconference tool to enhance real-time collaboration, MashMeTV (http://www.mashme.tv/).

Relevance: 10.00%

Abstract:

γ-ray astronomy studies the most energetic particles arriving at the Earth from outer space. These γ-rays are not generated by thermal processes in ordinary stars, but by particle-acceleration mechanisms in astronomical objects such as active galactic nuclei, pulsars and supernovae, or as a result of possible dark-matter annihilation processes. The γ-rays coming from these objects and their characteristics provide valuable information with which scientists try to understand the underlying physical processes and develop theoretical models able to describe them accurately. The problem with observing γ-rays is that they are absorbed in the upper layers of the atmosphere and do not reach the surface (otherwise the planet would be uninhabitable). Therefore, there are only two ways to observe γ-rays: with detectors on board satellites, or by observing their secondary effects in the atmosphere. When a γ-ray reaches the atmosphere, it interacts with the particles in the air and generates a highly energetic electron-positron pair. These secondary particles generate in turn more secondary particles, each time with less energy. While these particles are still energetic enough to travel faster than the speed of light in air, they produce a bluish radiation known as Cherenkov light for a few nanoseconds. From the Earth's surface, special telescopes known as Cherenkov telescopes or IACTs (Imaging Atmospheric Cherenkov Telescopes) are able to detect the Cherenkov light and even to take images of the shape of the Cherenkov shower. From these images it is possible to determine the main characteristics of the original γ-ray, and with enough γ-rays it is possible to deduce important characteristics of the emitting object, hundreds of light-years away. However, detecting Cherenkov showers generated by γ-rays is not an easy task. Showers generated by low-energy γ-rays emit few photons and last only a few nanoseconds, while those corresponding to high-energy γ-rays, although producing more light and lasting longer, become rarer as the energy increases. This leads to two distinct development lines for Cherenkov telescopes: to observe low-energy showers, large reflectors are required that collect as many as possible of the few photons these showers produce; high-energy showers, on the contrary, can be detected with small telescopes, but a large area on the ground should be covered with them to increase the number of detected events. The CTA (Cherenkov Telescope Array) project was created with the aim of improving the sensitivity of current Cherenkov telescopes in the high (> 10 TeV), medium (100 GeV - 10 TeV) and low (10 GeV - 100 GeV) energy ranges. This project, in which more than 27 countries participate, intends to build an observatory in each hemisphere, each equipped with 4 large-size telescopes (LSTs), around 30 medium-size telescopes (MSTs) and up to 70 small-size telescopes (SSTs). With such an array, two objectives will be achieved: first, by drastically increasing the collection area with respect to current IACTs, more γ-rays will be detected in all energy ranges; second, when the same Cherenkov shower is observed by several telescopes at once, it can be analysed much more precisely thanks to stereoscopic techniques.

The present thesis gathers several technical developments contributed to the trigger system of the medium and large CTA telescopes. Because Cherenkov showers are so brief, the systems that digitize and read out the data of each pixel have to work at very high frequencies (≈1 GHz), which makes continuous operation unfeasible, as the amount of stored data would be unmanageable. Instead, the analogue signals are sampled and the samples kept in a circular buffer of a few µs. While the signals remain in the buffer, the trigger system performs a fast analysis of the received signals and decides whether the image in the buffer corresponds to a Cherenkov shower and deserves to be saved, or can instead be ignored, allowing the buffer to be overwritten. The decision is based on the fact that Cherenkov showers produce photon detections in nearby pixels at very close times, unlike the NSB (night sky background) photons, which arrive randomly. To detect large showers it is enough to check that more than a certain number of pixels in a region have detected more than a certain number of photons within a time window of a few nanoseconds. To detect small showers, however, it is more convenient to take into account how many photons have been detected in each pixel (a technique known as sum trigger). The trigger system developed in this thesis aims to optimize the sensitivity at low energies, so it sums analogically the signals received by each pixel in a trigger region and compares the result with a threshold directly expressible in detected photons (photoelectrons). The system designed allows trigger regions of selectable size of 14, 21 or 28 pixels (2, 3 or 4 clusters of 7 pixels each), with a high degree of overlap between them. In this way, any light excess in a compact region of 14, 21 or 28 pixels is detected and generates a trigger pulse. In the most basic version of the trigger system, this pulse is distributed throughout the camera, through a delicate distribution system, so that all clusters are read at the same time regardless of their position in the camera. The trigger system thus saves a complete image of the camera every time the number of photons set as the threshold is exceeded in a trigger region. However, this way of operating has two main drawbacks. First, the shower almost always occupies only a small area of the camera, so many pixels without any information are saved; with many telescopes, as will be the case for CTA, the amount of useless information stored for this reason can be considerable. Second, each trigger saves only a few nanoseconds around the trigger instant, whereas large showers can last considerably longer, so part of the information is lost to temporal truncation. To solve both problems, a trigger and readout scheme based on two thresholds has been proposed: the high threshold decides whether there is an event in the camera and, if so, only the trigger regions exceeding the low threshold are read, for a longer time. This avoids saving information from empty pixels, and the fixed images of the showers become short "videos" representing the temporal development of the shower. This new scheme is called COLIBRI (Concept for an Optimized Local Image Building and Readout Infrastructure) and is described in detail in chapter 5. An important problem affecting sum-trigger schemes such as the one presented in this thesis is that, for the signals coming from each pixel to be summed properly, they must all take the same time to reach the adder. The photomultipliers used in each pixel introduce different delays, which must be compensated to perform the sums correctly; the effect of these delays has been studied, and a system has been developed to compensate them. Finally, the next level of the trigger system for effectively distinguishing Cherenkov showers from the NSB consists of looking for simultaneous (or very close in time) triggers in neighbouring telescopes. With this function, together with other system-interfacing tasks, a system called the Trigger Interface Board (TIB) has been developed. It consists of a module mounted in the camera of each LST or MST and connected by optical fibres to the neighbouring telescopes. When a telescope produces a local trigger, it is sent to all connected neighbours and vice versa, so that each telescope knows whether its neighbours have triggered. Once the delay differences due to propagation in the optical fibres, and those of the Cherenkov photons themselves in the air depending on the pointing direction, have been compensated, the TIB looks for coincidences and, if the trigger condition is fulfilled, the camera in question is read out, synchronized with the local trigger. Although the whole trigger system is the fruit of a collaboration among several groups, mainly IFAE, CIEMAT, ICC-UB and UCM in Spain, with the help of French and Japanese groups, the core of this thesis is the Level 1 trigger and the Trigger Interface Board, the two systems for which the author has been the principal engineer; for this reason, abundant technical information on these systems is included. There are currently important future development lines concerning both the camera trigger (implementation in ASICs) and the inter-telescope trigger (topological trigger), which will lead to interesting improvements over the current designs in the coming years and will hopefully benefit the whole scientific community participating in CTA.
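As a toy illustration of the sum-trigger decision described above (camera size, region layout, signal model and threshold are all invented, and the real system performs the sum in analogue electronics, not in software):

```python
import numpy as np

rng = np.random.default_rng(42)

N_PIXELS = 1855        # illustrative camera size
REGION = 21            # pixels per trigger region (3 clusters of 7)
THRESHOLD_PE = 40.0    # trigger threshold in photoelectrons

# One time window of per-pixel signal: Poisson night-sky background
# plus a compact simulated Cherenkov shower around pixel 800.
signal = rng.poisson(1.0, N_PIXELS).astype(float)
signal[800:815] += rng.normal(4.0, 1.0, 15)

# Overlapping trigger regions (one-cluster step): the summed signal of
# each region is compared with the photoelectron threshold.
triggers = [
    (start, region_sum)
    for start in range(0, N_PIXELS - REGION + 1, 7)
    if (region_sum := signal[start:start + REGION].sum()) > THRESHOLD_PE
]
print(triggers)  # only regions overlapping the simulated shower fire
```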

Relevance: 10.00%

Abstract:

Background: One of the main challenges for biomedical research lies in the computer-assisted integrative study of large and increasingly complex combinations of data in order to understand molecular mechanisms. The preservation of the materials and methods of such computational experiments with clear annotations is essential for understanding an experiment, and this is increasingly recognized in the bioinformatics community. Our assumption is that offering means of digital, structured aggregation and annotation of the objects of an experiment will provide the necessary metadata for a scientist to understand and recreate the results of an experiment. To support this, we explored a model for the semantic description of a workflow-centric Research Object (RO), where an RO is defined as a resource that aggregates other resources, e.g., datasets, software, spreadsheets, text, etc. We applied this model to a case study in which we analysed human metabolite variation by workflows. Results: We present the application of the workflow-centric RO model to our bioinformatics case study. Three workflows were produced following recently defined Best Practices for workflow design. By modelling the experiment as an RO, we were able to automatically query the experiment and answer questions such as "which particular data was input to a particular workflow to test a particular hypothesis?" and "which particular conclusions were drawn from a particular workflow?". Conclusions: Applying a workflow-centric RO model to aggregate and annotate the resources used in a bioinformatics experiment allowed us to retrieve the conclusions of the experiment in the context of the driving hypothesis, the executed workflows and their input data. The RO model is an extendable reference model that can be used by other systems as well.
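As a minimal sketch of the aggregation-and-query idea (the namespace URIs, resource names and properties such as hasInput below are placeholders, not the project's actual RO ontology):

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

# Placeholder vocabularies: the RO model builds on OAI-ORE aggregation;
# EX stands in for the experiment-specific annotation terms.
ORE = Namespace("http://www.openarchives.org/ore/terms/")
EX = Namespace("http://example.org/ro/")

g = Graph()
ro = EX.experiment1
workflow = EX.workflow_metabolites
dataset = EX.metabolite_data_csv

# The RO aggregates its resources (workflow, dataset, ...).
g.add((ro, RDF.type, ORE.Aggregation))
for res in (workflow, dataset):
    g.add((ro, ORE.aggregates, res))

# Annotations relate the resources to each other and to the hypothesis.
g.add((workflow, EX.hasInput, dataset))
g.add((workflow, EX.testsHypothesis, Literal("human metabolite variation")))

# "Which data was input to which workflow to test which hypothesis?"
q = """
SELECT ?wf ?data ?hyp WHERE {
  ?wf <http://example.org/ro/hasInput> ?data ;
      <http://example.org/ro/testsHypothesis> ?hyp .
}
"""
for row in g.query(q):
    print(row.wf, row.data, row.hyp)
```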

Relevance: 10.00%

Abstract:

This doctoral thesis establishes, on scientific and technical criteria and as a first approximation, a methodology to evaluate the protection against natural risks that the hydrological-forest restoration of mountain watersheds provides to their inhabitants and to those passing through them. The research is divided into three sections, which analyse: 1) the protection provided by the forest cover itself, whether of natural regeneration or from reforestation; 2) the protection achieved by the works executed in the watersheds and their drainage channels, which in the context of hydrological-forest restoration are bound to the reforestations, so both effects are evaluated together; and 3) the protection obtained from the synergies that emerge as the reforestations and the works executed in the watershed consolidate in fulfilment of the hydrological-forest restoration project, estimated according to the degree of accomplishment of the project's specific objectives.

The influence of the forest cover on the control of natural risks in the mountains has been evaluated: a) drawing on the research on the subject carried out in the alpine area over the last decade, and b) analysing the dasocratic characteristics of the forest covers under study and, from them, identifying the most representative parameters involved in the control of the main natural mountain risks (torrential floods, avalanches, landslides and rock falls). The protection supplied by the correction works has been evaluated by treating the watersheds in which they are located as specific correction units, analysing their behaviour in the face of the largest possible number of torrential events (defined from all the precipitation recorded at the meteorological stations with the longest historical series located within, or closest to, the watershed in question) and then verifying the incidents that have occurred in the watershed and the state in which the works were left. The evaluation of the synergies that emerge as the restoration project consolidates seeks to determine the degree of accomplishment of its main objectives, bearing in mind that the project's results, by their own dynamics, appear in the medium and long term, an interval in which various imponderables may arise. In any case, the restoration of mountain watersheds does not imply the disappearance of all risks, but rather their control and the consequent reduction of their effects; maintenance of the reforestations and of the works executed in them is therefore necessary so that they preserve the protection conditions originally designed. The methodology has been applied to five sites in the Aragonese Pyrenees: three where hydrological-forest restoration works were carried out in the past (the watersheds draining to the Arratiecho and Arás torrents, and the Los Arañones site) and two that were not intervened and serve as a contrast (the right-bank hillside draining to the Canal Roya channel and the south-facing slope at the head of the Fondo de Pineta watershed).
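A toy illustration of how torrential events might be flagged in a precipitation record follows; the synthetic series and the wet-day percentile criterion are assumptions for the sketch, not the thesis's actual event definition:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic 30-year daily precipitation record (mm/day).
daily_precip_mm = rng.gamma(shape=0.3, scale=8.0, size=365 * 30)

# Flag candidate torrential events as exceedances of the 95th
# percentile of wet days (> 1 mm).
wet = daily_precip_mm[daily_precip_mm > 1.0]
threshold = np.percentile(wet, 95)
event_days = np.flatnonzero(daily_precip_mm > threshold)
print(f"threshold {threshold:.1f} mm/day -> {event_days.size} candidate events")
```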

Relevance: 10.00%

Abstract:

Alterations in the climate system due to increased atmospheric concentrations of greenhouse gases (GHG) are expected to have important implications for agriculture, the environment and society. Agriculture is an important source of GHG emissions (about 12% of global anthropogenic GHG), but it is also part of the solution to mitigate emissions and to adapt to climate change. Responses to the challenge of climate change should place agricultural adaptation and mitigation strategies at the heart of the climate change agenda. Agriculture is crucial for the conservation and sustainable use of natural resources, which already stand under pressure from climate change impacts, an increasing population, pollution and fragmented, uncoordinated climate policy strategies, while it must at the same time supply food for a growing population. The concept of climate-smart agriculture has emerged to encompass all these issues as a whole. When assessing choices aimed at reducing threats to agriculture and the environment under climate change, two research questions arise: What information defines smart farming choices? What drives the implementation of smart farming choices? This thesis aims to provide information on these broad questions in order to support climate policy development, focusing on Mediterranean agricultural systems. It integrates methods and tools to evaluate potential farming and policy choices that respond to climate change mitigation and adaptation needs. The assessment involves both quantitative and qualitative approaches and integrates agronomic, climate and socioeconomic variables at local and regional scale, and it includes the collection of data on existing experimental evidence as well as an integrated study of farmer behaviour and possible policy choices (e.g., technology, agricultural management and climate policy). The case study areas, the Doñana coastal wetland (S Spain) and the Aragón region (NE Spain), illustrate two representative Mediterranean regions where the intensive use of agriculture and the semi-arid conditions are already a concern; there, the adoption of mitigation and adaptation measures can play a significant role in reaching a balance among equity, economic security and the environment under climate change scenarios.

The multidisciplinary methodology of this thesis includes a wide range of approaches for collecting and analysing data. The data collection process includes a review of existing experimental evidence, national and international public databases, and primary data gathered through semi-structured interviews with relevant stakeholders (public administrations, policy makers, agricultural advisors, scientists and farmers, among others) and surveys of farmers. The analytical methods include meta-analysis, water availability modelling (the WAAPA model), multi-criteria decision analysis (MCA), statistical approaches (logistic and Poisson regression models) and science-based policy tools (MACC, marginal abatement cost curves, and SOC abatement wedges). The meta-analysis identifies the critical temperature thresholds that affect the growth and development of three major crops for food security (rice, maize and wheat). The WAAPA model assesses the effect of climate change on agricultural water management under different policy choices and climate scenarios. The multi-criteria analysis evaluates the feasibility of mitigation farming practices under two climate scenarios according to expert views. The statistical approaches analyse the drivers of, and barriers to, the adoption of mitigation farming practices. The science-based policy tools illustrate the mitigation potential and cost-effectiveness of the farming practices.

Overall, the results of this thesis provide information on climate change adaptation and mitigation at farm level to support the development of a comprehensive climate policy and to assist farmers in decision making. The findings show the key temperature thresholds and the responses of rice, maize and wheat to extreme temperatures, so that such responses can be included in crop impact and adaptation models. A portfolio of flexible adaptation and mitigation choices at local scale is identified, together with a better understanding of the barriers to, and incentives for, their adoption. The capacity of these choices to improve water supply reliability, as well as their abatement potential and cost-effectiveness, has been estimated for the case studies. These results could support the development of local adaptation plans and regional mitigation policies, especially in Mediterranean regions.
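As a schematic illustration of a marginal abatement cost curve, one of the science-based policy tools mentioned above (the practice names, costs and abatement potentials are invented for illustration only):

```python
# Hedged sketch of a MACC: order practices by marginal cost and
# accumulate their abatement potential along the curve.
practices = [
    ("no-till",            -5.0, 0.9),  # (name, EUR per tCO2e, MtCO2e/yr)
    ("cover crops",         12.0, 0.6),
    ("optimised N rates",   -2.0, 0.4),
    ("improved irrigation", 30.0, 0.3),
]

# Negative-cost ("win-win") measures come first on the curve.
practices.sort(key=lambda p: p[1])

cumulative = 0.0
for name, cost, potential in practices:
    cumulative += potential
    print(f"{name:<20} {cost:>6.1f} EUR/tCO2e  "
          f"cumulative abatement: {cumulative:.1f} MtCO2e/yr")
```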