150 results for DTM
Abstract:
The topography of the eastern margin of the Porcupine Seabight was surveyed in June 2000 using swath bathymetry. The survey was carried out during RV Polarstern cruise ANT XVII/4 as part of the GEOMOUND project. The main objective was to map and investigate the seafloor topography of this region, which contains a variety of morphological features such as deep-sea channels and giant mounds. The survey was planned and executed on the basis of existing data so as to guarantee complete coverage of the margin. Data processing was adjusted so that the resolution of the final digital terrain model (DTM) meets the project demands: the grid spacing of the DTM was set to 50 m, and an accuracy better than 1% of the water depth was achieved for 96% of the soundings.
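To make the gridding step and the 1%-of-depth quality criterion concrete, here is a minimal Python sketch; it is a toy illustration under simplified assumptions (median binning of projected coordinates), not the processing chain used on the cruise:

```python
import numpy as np

def grid_soundings(x, y, depth, cell=50.0):
    """Toy DTM gridding: per-cell median depth on a 50 m grid.

    x, y are projected coordinates in metres; depth is positive down.
    Real swath processing (motion, sound-velocity and outlier
    corrections) is omitted here.
    """
    ix = np.floor((x - x.min()) / cell).astype(int)
    iy = np.floor((y - y.min()) / cell).astype(int)
    dtm = np.full((iy.max() + 1, ix.max() + 1), np.nan)
    for j, i in set(zip(iy.tolist(), ix.tolist())):
        dtm[j, i] = np.median(depth[(iy == j) & (ix == i)])
    # fraction of soundings deviating by less than 1% of the water
    # depth from their cell value (the survey reports 96% for this)
    resid = np.abs(depth - dtm[iy, ix])
    return dtm, np.mean(resid <= 0.01 * np.abs(depth))
```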
Abstract:
Based on data from R.V. Pelagia, R.V. Sonne and R.V. Meteor multibeam sonar surveys, a high-resolution bathymetry was generated for the Mozambique Ridge. The mapping area is divided into five sheets, one overview and four sub-sheets, with boundaries (west/east/south/north): Sheet 1: 28°30' E/37°00' E/36°20' S/24°50' S; Sheet 2: 32°45' E/36°45' E/28°20' S/25°20' S; Sheet 3: 31°30' E/36°45' E/30°20' S/28°10' S; Sheet 4: 30°30' E/36°30' E/33°15' S/30°15' S; Sheet 5: 28°30' E/36°10' E/36°20' S/33°10' S. Each sheet was generated twice: once from swath sonar bathymetry only, and once completed with depths from the ETOPO2 predicted bathymetry. The basic outcome of the investigation is a set of Digital Terrain Models (DTMs), one for each sheet with 0.05 arcmin (~91 m) grid spacing and one for the entire area (sheet 1) with 0.1 arcmin grid spacing. The DTMs were used for contouring and map generation. The grid formats are NetCDF (Network Common Data Form) and ASCII (ESRI ArcGIS exchange format); the maps are provided as JPG images and as small PNG (Portable Network Graphics) preview images, with a paper size of DIN A0 (1189 x 841 mm).
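As a quick plausibility check of the quoted ~91 m cell size, a spherical-Earth conversion from arc minutes to metres (the Earth radius and the sample latitude below are illustrative assumptions, not values from the dataset):

```python
import math

def arcmin_cell_m(arcmin, lat_deg, radius_m=6371000.0):
    """North-south and east-west extent of a cell given in arc minutes."""
    ns = math.radians(arcmin / 60.0) * radius_m    # latitude spacing
    ew = ns * math.cos(math.radians(lat_deg))      # shrinks with latitude
    return ns, ew

# 0.05 arcmin at ~30 S (mid-latitude of the sheets):
# ns ~ 92.6 m, ew ~ 80.2 m, consistent with the quoted "~91 m"
print(arcmin_cell_m(0.05, -30.0))
```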
Abstract:
Based on data from R/V Sonne multibeam sonar surveys in 2005, a high-resolution bathymetry was generated for the Mozambique Basin. The area covers approx. 466,475 km². The mapping area is divided into four sheets with boundaries (west/east/south/north): Sheet I (north-west), 37:00/39:45/-24:00/-20:20; Sheet II (north-east), 39:45/42:30/-24:00/-20:20; Sheet III (south-west), 37:00/39:45/-27:40/-24:00; Sheet IV (south-east), 39:45/42:30/-27:40/-24:00. The basic outcome of the investigation is a set of Digital Terrain Models (DTMs), one for each sheet with 0.05 arcmin (~91 m) grid spacing and one for the entire area with 0.1 arcmin grid spacing. The DTMs were used for contouring and map generation. Moreover, the measured bathymetry was combined and compared with GEBCO bathymetry and with predicted bathymetry derived from altimeter satellites. The provided maps have a paper size of DIN A0 (1189 x 841 mm).
Abstract:
A new topographic database for King George Island, one of the most visited areas in Antarctica, is presented. Data from differential GPS surveys, acquired during the summers of 1997/98 and 1999/2000, were combined with up-to-date coastlines from a SPOT satellite image mosaic and topographic information from maps as well as from the Antarctic Digital Database. A digital terrain model (DTM) was generated using the ARC/INFO GIS. A satellite image map was assembled from the satellite image mosaic and contour lines derived from the DTM. Extensive information on data accuracy and the database, as well as on the criteria applied to select place names, is given in the multilingual map. A lack of accurate topographic information in the eastern part of the island was identified; it was concluded that additional topographic surveying or radar interferometry should be conducted to improve the data quality in this area. Three case studies demonstrate potential applications of the improved topographic database. The first two comprise the verification of glacier velocities and the study of glacier retreat from the various input datasets, as well as the use of the DTM for climatological modelling. The last case study focuses on the use of the new digital database as a basic GIS (Geographic Information System) layer for environmental monitoring and management on King George Island.
Abstract:
Bathymetry based on data recorded during M51-4 between 13.12.2001 and 28.12.2001 in the Black Sea. The purpose of the study was to sample sediments and the water column of the NW/SW Black Sea and the eastern Marmara Sea in order to study (a) high-resolution sediment records of Holocene climate, (b) biogeochemical processes associated with deep anaerobic methane oxidation, and (c) element cycling in the stratified water column. The bathymetric data (Hydrosweep + Parasound) were primarily used to choose appropriate sites for coring of undisturbed sediments. Samples were taken for future analyses of bacterial abundance and activity, geochemistry and dating.
Abstract:
Bathymetry based on data recorded during M72-3 between 17.03.2007 and 23.04.2007 in the Black Sea. This cruise concentrated on interdisciplinary work on gas hydrates, with a main focus on the gas hydrate transition zone at and below 750 m water depth. Gas hydrate environments were studied in various geological settings, mainly in the eastern Black Sea. The focus was on the origins, distributions and dynamics of methane and gas hydrates in sediments, as well as on methane fluxes from the sediment to the water column. The main working areas were the Sorokin Trough, an area south of the Kerch Strait and the Andrusov Ridge in Ukrainian waters, and the Gudauta Ridge and Gurian Trough in Georgian waters.
Abstract:
Bathymetry based on data recorded during POS317-3 between 19.09.2004 and 13.10.2004 in the Black Sea. This cruise concentrated on bathymetric mapping, on the mapping of gas seeps by hydro-acoustic detection of gas flares in the water column, and on the quantification of microbial turnover in gassy sediments and microbial mats. The major objective during POS317-3 was the characterization and identification of microorganisms involved in anaerobic methane oxidation in the sediment and in microbial mats. As part of these investigations, characteristic organic molecules were to be identified that can be used as biomarkers for anaerobic methane-oxidizing microorganisms.
Abstract:
Bathymetry based on data recorded during M84-2 between 26.02.2011 and 02.04.2011 in the Black Sea. The aim of the cruise was to investigate the gas hydrate distribution in sediments of the Black Sea using several coring techniques. In addition to the coring activities, the installed EM122 and PARASOUND systems were used to detect gas emissions in the water column and to map large areas of possible seep sites.
Abstract:
Bathymetry based on data recorded during M72-2 between 23.02.2007 and 13.03.2007 in the Black Sea. The main focus of the cruise was to study the fluxes and turnover of methane and sulphur in Black Sea hydrocarbon seep systems and to investigate the microbial diversity in two contrasting, permanently anoxic settings associated with fluid flow and gas seepage: the methane seeps at the shelf break of the Palaeo-Dnepr area and the hydrocarbon seeps of the mud volcanoes in the 2000 m deep Sorokin Trough east of Crimea.
Abstract:
Bathymetry based on data recorded during MSM15-1 between 12.04.2010 and 08.05.2010 in the Black Sea. The aim of this cruise was to quantify the concentration and uptake of oxygen at the anoxic boundaries in the water column and at the sediment-water interface of the Black Sea, in parallel with the measurement of nitrogen, carbon, sulfur and iron fluxes (HYPOX project).
Abstract:
Bathymetry based on data recorded during M72-1 between 07.02.2007 and 20.02.2007 in the Black Sea. The main focus of the cruise was gas vents and seeps in the north-western Black Sea below 700 m water depth, i.e. within the zone of gas hydrate stability. The main target area was the deep Dnepr Canyon west of the Crimea Peninsula, where previous investigations had indicated the occurrence of gas seepage.
Abstract:
Temperature is a first-class design concern in modern integrated circuits. The sharp increase in power densities brought by recent technology generations has led to the appearance of thermal gradients and hot spots during run-time operation. Temperature negatively impacts several circuit parameters, such as speed, cooling budgets, reliability and power consumption. To fight these harmful effects, dynamic thermal management (DTM) techniques adapt the behavior of the chip based on the information of a monitoring system that provides run-time thermal information of the die surface. The field of on-chip temperature monitoring has drawn the attention of the scientific community in recent years and is the object of study of this thesis, which approaches the problem from different perspectives and levels, providing solutions to some of the most important issues.

The physical and circuit levels are covered with the design and characterization of two novel temperature sensors specifically tailored for DTM purposes. The first sensor is based on a mechanism that produces a pulse whose width depends on the temperature dependence of leakage currents: a circuit node is charged and subsequently left floating so that it discharges through the subthreshold currents of a transistor; the time the node takes to discharge is the width of the pulse. Since the pulse width depends exponentially on temperature, the conversion into a digital word is performed by a logarithmic counter that carries out both the time-to-digital conversion and the linearization of the output. The resulting structure, implemented in a 0.35 µm technology, occupies a very small area (10,250 nm²) and consumes very little power (1.05-65.5 nW at 5 samples/s); these figures outperformed all previous work at the time of first publication and, at the time of publication of this thesis, still outperform all previous implementations in the same technology node. Concerning accuracy, the sensor exhibits good linearity even without calibration, displaying a 3σ error of 1.97 °C, adequate for DTM applications. The sensor is fully compatible with standard CMOS processes; this fact, together with its tiny area and power overhead, makes it especially suitable for integration in a DTM monitoring system with a collection of on-chip monitors distributed across the chip.
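The role of the logarithmic counter can be made explicit with a minimal model. Assuming, as stated above, that the pulse width shrinks roughly exponentially with temperature (the constants t_0 and T_c and the clock frequency f_clk below are illustrative, not taken from the thesis), a counter that reports the logarithm of the raw count produces a code that is linear in temperature:

$$t_{pw}(T) \approx t_0\, e^{-T/T_c}, \qquad N = f_{clk}\, t_{pw}(T), \qquad D = \log_2 N \approx \log_2\!\big(f_{clk}\, t_0\big) - \frac{T}{T_c \ln 2}.$$

A single logarithmic counter thus performs the time-to-digital conversion and the linearization in one step.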
The exacerbated process variations that come with recent technology nodes jeopardize the linearity of the first sensor. To overcome this problem, a new temperature-inferring technique is proposed. It also relies on the thermal dependence of the leakage currents used to discharge a floating node, but now the result is the ratio of two different measurements, in one of which a characteristic of the discharging transistor (the gate voltage) is altered. This ratio proves to be very robust against process variations and displays a linearity that amply meets the requirements of DTM policies: a 3σ error of 1.17 °C considering process variations and performing two-point calibration. The implementation of the sensing part raises several design issues, such as the generation of a voltage reference independent of process variations, which are analyzed in depth in the thesis. The time-to-digital conversion employs the same digitization structure as the first sensor; for its physical implementation, a completely new standard-cell library targeting low area and power overhead was built from scratch. The complete sensor is characterized by an ultra-low energy per conversion (48-640 pJ) and a tiny area of 0.0016 mm², a figure that outperforms all previous works. To support this claim, a thorough comparison with over 40 sensor proposals from the scientific literature is performed.
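A first-order sketch of why the ratio cancels process variation, using the standard subthreshold MOS model (the thesis circuit necessarily handles second-order effects that this simple model ignores): with the subthreshold current I ≈ I_0 e^{(V_{GS}-V_{th})/(n\,kT/q)} and a fixed charge on the floating node, the two discharge times measured with gate voltages V_{G1} and V_{G2} on the same transistor satisfy

$$\frac{t_1}{t_2} = \frac{I(V_{G2})}{I(V_{G1})} = \exp\!\left(\frac{q\,(V_{G2}-V_{G1})}{n\,kT}\right) \;\Rightarrow\; T = \frac{q\,(V_{G2}-V_{G1})}{n\,k\,\ln(t_1/t_2)},$$

so the process-dependent prefactor I_0 and the threshold voltage cancel out, and only the milder variability of the slope factor n remains.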
Moving up to the system level, the third contribution centers on the modeling of a monitoring system consisting of a set of thermal sensors distributed across the chip. All previous works in the literature aim to maximize the accuracy of the system with the minimum number of monitors. As a novelty, our proposal introduces quality metrics beyond the number of sensors: power consumption, sampling frequency, interconnection costs and the possibility of choosing among different monitor types are also considered. The model is embedded in a simulated annealing algorithm that receives the thermal information of a system, its physical properties, area, power and interconnection constraints, and a collection of monitor types; the algorithm yields the selected monitor type, the number of monitors, their positions and the optimum sampling rate. The validity of the algorithm is tested with several case studies for the Alpha 21364 processor under different constraint configurations. Compared with previous works in the literature, the model presented here is the most complete.
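Below is a minimal sketch of the kind of simulated-annealing loop described, with a made-up cost that mixes a crude sensing-error proxy, power and wiring length; all names, weights and moves are illustrative re-creations from the abstract, not the thesis model (which also handles monitor types, sampling rates and real thermal maps):

```python
import math
import random

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def cost(placement, hotspots, per_monitor_power,
         w_err=1.0, w_pow=0.5, w_wire=0.05):
    # sensing-error proxy: distance from each hotspot to its nearest monitor
    err = sum(min(dist(h, m) for m in placement) for h in hotspots)
    power = per_monitor_power * len(placement)
    wire = sum(dist((0.0, 0.0), m) for m in placement)  # controller at origin
    return w_err * err + w_pow * power + w_wire * wire

def anneal(hotspots, candidates, per_monitor_power,
           steps=20000, t0=5.0, alpha=0.9995, seed=1):
    rng = random.Random(seed)
    state = rng.sample(candidates, 4)          # initial guess: 4 monitors
    c = cost(state, hotspots, per_monitor_power)
    best, cb = state[:], c
    t = t0
    for _ in range(steps):
        new = state[:]
        r = rng.random()
        if r < 0.4 and len(new) > 1:           # drop a monitor
            new.pop(rng.randrange(len(new)))
        elif r < 0.8:                          # relocate a monitor
            new[rng.randrange(len(new))] = rng.choice(candidates)
        else:                                  # add a monitor
            new.append(rng.choice(candidates))
        cn = cost(new, hotspots, per_monitor_power)
        if cn < c or rng.random() < math.exp((c - cn) / t):
            state, c = new, cn
            if c < cb:
                best, cb = state[:], c
        t *= alpha                             # geometric cooling
    return best, cb

# toy example: three hotspots on a 10 x 10 mm die, grid of candidate sites
hot = [(2.0, 3.0), (7.5, 1.0), (5.0, 8.0)]
cand = [(float(i), float(j)) for i in range(10) for j in range(10)]
print(anneal(hot, cand, per_monitor_power=1.0))
```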
Finally, the last contribution targets the network level: given an allocated set of temperature monitors at known positions, we focus on connecting them in an area- and power-efficient way. Our first proposal in this field is the introduction of a new level in the interconnection hierarchy, the threshing level, between the monitors and the traditional peripheral buses. This level applies data selectivity to reduce the amount of information sent to the central controller; the idea behind it is that in this kind of network most data are useless, because from the controller's viewpoint only a small amount of data (normally just the extreme values) is of interest. To cover the new level, we propose a single-wire monitoring network based on a time-domain signaling scheme that significantly reduces both the switching activity on the wire and the power consumption of the network, and that delivers the monitor readings to the controller already ordered from maximum to minimum. If this scheme is applied to monitors that perform time-to-digital conversion (TDC), the digitization resources can be shared in both time and space, yielding important area and power savings. Two prototypes of complete monitoring systems are presented that significantly outperform previous works in terms of area and, especially, power consumption.
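A behavioral sketch of the time-domain signaling idea: each monitor fires a single pulse on the shared wire after a delay that decreases with its reading, so the controller receives the readings already ordered from maximum to minimum, with one transition per monitor. The monitor codes, full scale and time step below are hypothetical:

```python
def time_domain_readout(readings, full_scale=1023, lsb_ns=10.0):
    """Behavioral model of single-wire, time-domain monitor readout."""
    # delay decreases with the reading: the hottest monitor fires first
    events = sorted(((full_scale - code) * lsb_ns, mid, code)
                    for mid, code in readings.items())
    return [(mid, code) for _, mid, code in events]

# hypothetical TDC codes from three monitors
print(time_domain_readout({"m0": 731, "m1": 655, "m2": 702}))
# -> [('m0', 731), ('m2', 702), ('m1', 655)]  (ordered max -> min)
```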
Abstract:
The main objective of this research is to interpret the multifractal parameters of precipitation series from an empirical approach. To this end, the first task was to make objective the selection of the linear part of the log-log curves, a fundamental step of fractal and multifractal analysis methods. The second task was to generate artificial precipitation series with realistic features, allowing controlled modifications of the data and the evaluation of their influence on the estimated multifractal parameters.

Two methods were developed for selecting the linear part of the log-log curves: (a) tendency change, which analyzes the change in slope of lines fitted to two consecutive subsets of the data; and (b) point elimination, which analyzes the improvement in the p-value associated with the correlation coefficient as the final regression points are sequentially eliminated (a sketch of this method is given below). The results regarding the linear regression lead to the following conclusions: the statistical methodology of regression shows the difficulty of finding the slope of straight sections of curves in the base procedure of fractal analysis, indicating that the choice of points to include yields significant differences in the slopes obtained; and the joint use of the two proposed methods helps to make objective the decision about the linear part of families of curves, although its usefulness still depends on the number of available data points and on the high significance levels obtained.
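A sketch of the point-elimination method (b), re-created from the description above; the function and variable names are ours, not the authors':

```python
from scipy import stats

def select_linear_part(logx, logy, min_points=5):
    """Drop trailing points of a log-log regression one at a time and
    keep the truncation whose correlation p-value is the best."""
    best = None                        # (n_points, p_value, slope)
    for n in range(len(logx), min_points - 1, -1):
        fit = stats.linregress(logx[:n], logy[:n])
        if best is None or fit.pvalue < best[1]:
            best = (n, fit.pvalue, fit.slope)
    return best
```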
Regarding the empirical meaning of the multifractal parameters of precipitation, nineteen precipitation series were generated with a daily-data cascade simulator driven by annual and monthly estimates, based on real statistics from four Spanish meteorological stations located along a NW-SE gradient. For all generated series, the multifractal parameters were estimated with the Double Trace Moments (DTM) technique developed by Lavallée et al. (1993), and the resulting variations were examined. The results lead to the following conclusions: (1) the intermittency, C1, increases when precipitation is concentrated in fewer days, when it is made more variable, or when its concentration on the days of maximum precipitation is increased, while it is not affected by changes in the variability of the number of rainy days; (2) the multifractality, α, increases with the number of rainy days and with the variability of precipitation, both annual and monthly, as well as with the concentration of precipitation on the day of maximum monthly precipitation; (3) the maximum probable singularity, γs, increases with the concentration of rain on the day of maximum monthly precipitation and with the annual and monthly variability; (4) the degree of non-conservation, H, depends on the number of rainy days in the series and, secondarily, on the overall variability of the rainfall; (5) the generalized Hurst index is closely linked to the maximum probable singularity.
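For reference, the standard universal-multifractal relations that underlie the DTM estimator (textbook results in the Schertzer-Lovejoy framework, not restated in the abstract): raising the field to a power η rescales the moment-scaling function as

$$K(q,\eta) = \eta^{\alpha}\, K(q), \qquad K(q) = \frac{C_1}{\alpha - 1}\,\big(q^{\alpha} - q\big) \quad (\alpha \neq 1),$$

so a log-log plot of K(q, η) against η is a straight line of slope α, and C1 follows from K(q) at a fixed q.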
Abstract:
Since the arrival of LiDAR measurement, map production has become easier, with digital models obtained quickly and accurately. However, in order to handle the large amount of recorded information, a set of algorithms is required to extract the important and necessary details of the surveyed area. This work therefore presents a methodology for producing 1/1000-scale cartography of a rural area, based on contour maps and orthophotographs generated from the DTM and DSM of the area (a generic sketch of the contouring step is given below). All tests were performed with the MDTopX software.
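A generic sketch of contour generation from a gridded DTM with matplotlib (not MDTopX); the file name, the 0.5 m ground sampling and the 1 m contour interval, a common choice for 1/1000-scale mapping, are assumptions:

```python
import numpy as np
import matplotlib.pyplot as plt

dtm = np.load("dtm.npy")                 # hypothetical gridded DTM, metres
ny, nx = dtm.shape
x = np.arange(nx) * 0.5                  # assumed 0.5 m ground sampling
y = np.arange(ny) * 0.5

# 1 m contour interval, typical for 1/1000-scale topographic mapping
levels = np.arange(np.floor(dtm.min()), np.ceil(dtm.max()) + 1.0, 1.0)
plt.contour(x, y, dtm, levels=levels, colors="k", linewidths=0.4)
plt.gca().set_aspect("equal")
plt.savefig("contours.pdf")
```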