49 results for DYNAMIC FOREST DATA STRUCTURES
at Universidad Politécnica de Madrid
Abstract:
Concurrent data types are concurrent implementations of classical data abstractions, specifically designed to exploit the high degree of parallelism available in modern multiprocessor and multi-core architectures. The correct manipulation of concurrent data types is essential for the overall correctness of the software systems built using them. A major difficulty in designing and verifying concurrent data types arises from the need to reason about any number of threads invoking the data type simultaneously, which requires considering parametrized systems. In this work we study the formal verification of temporal properties of parametrized concurrent systems, with a special focus on programs that manipulate concurrent data structures. The main difficulty in reasoning about concurrent parametrized systems comes from the combination of their inherently high concurrency and their manipulation of dynamic memory. This parametrized verification problem is very challenging, because it requires reasoning about complex concurrent data structures that are accessed and modified by threads which simultaneously manipulate the heap using unstructured synchronization methods. In this work, we present a formal framework based on deductive methods that is capable of verifying safety and liveness properties of concurrent parametrized systems that manipulate complex data structures. Our framework includes proof rules and techniques specially adapted for parametrized systems, which work in collaboration with specialized decision procedures for complex data structures. A novel aspect of our framework is that it cleanly separates the analysis of the program control flow from the analysis of the data being manipulated. The program control flow is analyzed using deductive proof rules and verification techniques specifically designed to cope with parametrized systems. Starting from a concurrent program and a temporal specification, our techniques generate a finite collection of verification conditions whose validity entails the satisfaction of the temporal specification by any client system, regardless of the number of threads. The verification conditions correspond to the data manipulation. We study the design of specialized decision procedures that deal with these verification conditions fully automatically. We investigate decidable theories capable of describing rich properties of complex pointer-based data types such as stacks, queues, lists and skiplists. For each of these theories we present a decision procedure and a practical implementation on top of existing SMT solvers. These decision procedures are ultimately used to automatically verify the verification conditions generated by our parametrized verification techniques. Finally, we show how, using our framework, it is possible to prove not only safety but also liveness properties of concurrent versions of some mutual exclusion protocols and of programs that manipulate concurrent data structures.
The approach we present in this work is very general, and can be applied to verify a wide range of similar concurrent data types.
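For illustration, the following minimal sketch (ours, not the authors' tool) shows the flavor of discharging a verification condition with an SMT solver through Z3's Python API; the formula is a toy ordering invariant, not one generated by the framework described above.

# A toy verification condition checked with Z3: the VC is valid iff its
# negation is unsatisfiable. Real VCs from the framework would speak about
# heaps, threads and reachability, handled by the specialized theories.
from z3 import Ints, Solver, Implies, And, Not, unsat

x, y, z = Ints("x y z")
vc = Implies(And(x < y, y < z), x < z)   # e.g. keys stay ordered after an insert

s = Solver()
s.add(Not(vc))                           # valid VC <=> negation is unsat
assert s.check() == unsat
print("verification condition discharged")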
Abstract:
We present the data structures and algorithms used in our approach for building domain ontologies from folksonomies and linked data. In this approach we extract domain terms from folksonomies and enrich them with semantic information from the Linked Open Data cloud. As a result, we obtain a domain ontology that combines the emergent knowledge of social tagging systems with formal knowledge from ontologies.
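As a rough illustration of the enrichment step (our sketch; the endpoint, query and properties are illustrative assumptions, not the paper's exact pipeline), a folksonomy tag can be looked up in the Linked Open Data cloud via SPARQL:

# Enrich a folksonomy tag with semantic information from DBpedia.
# Assumes the public DBpedia endpoint with its predefined prefixes.
from SPARQLWrapper import SPARQLWrapper, JSON

def enrich_term(term: str):
    sparql = SPARQLWrapper("https://dbpedia.org/sparql")
    sparql.setQuery(f"""
        SELECT ?resource ?abstract WHERE {{
            ?resource rdfs:label "{term}"@en ;
                      dbo:abstract ?abstract .
            FILTER (lang(?abstract) = "en")
        }} LIMIT 1
    """)
    sparql.setReturnFormat(JSON)
    return sparql.query().convert()["results"]["bindings"]

print(enrich_term("Photography"))   # candidate LOD resource for the tag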
Abstract:
This paper presents the innovations in the practical work of the Data Structures subject carried out over the last five years, including a transition period and the first year of implementation of the European Higher Education Area. The practical coursework is inspired by a project-based methodology, and since 2008/2009 additional laboratory sessions have been included in the subject schedule. We present the academic results and ratios of the mentioned period, which show a significant improvement in students' performance.
Abstract:
Improving the knowledge of demand evolution over time is a key aspect in the evaluation of transport policies and in forecasting future investment needs. It becomes even more critical in the case of toll roads, which in recent decades have become an increasingly common mechanism to fund road projects. However, the literature regarding demand elasticity estimates for toll roads is sparse and leaves some important aspects to be analyzed in greater detail. In particular, previous research on traffic analysis does not often disaggregate heavy-vehicle demand from the total volume, so the specific behavioral patterns of this traffic segment are not taken into account. Furthermore, GDP is the socioeconomic variable most commonly chosen to explain road freight traffic growth over time. This paper seeks to determine the variables that better explain the evolution of heavy-vehicle demand on toll roads over time. To that end, we present a dynamic panel data methodology aimed at identifying the key socioeconomic variables that explain the behavior of road freight traffic throughout the years. The results show that, despite the usual practice, GDP may not constitute a suitable explanatory variable for heavy-vehicle demand. Rather, considering only the GDP of those sectors with a high impact on transport demand, such as construction or industry, leads to more consistent results. The methodology is applied to Spanish toll roads for the 1990–2011 period. This is an interesting case in the international context, as road freight demand has experienced an even greater reduction in Spain than elsewhere since the beginning of the economic crisis in 2008.
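For readers unfamiliar with the technique, a dynamic panel data model regresses current demand on its own lag plus explanatory variables, so that demand inertia is modeled explicitly. A generic specification (variable names are illustrative, not the paper's exact model) is

Q_{it} = \alpha\, Q_{i,t-1} + \beta' X_{it} + \eta_i + \varepsilon_{it},

where Q_{it} is heavy-vehicle traffic on toll road i in year t, X_{it} collects socioeconomic variables such as sectoral GDP, \eta_i is a road-specific fixed effect and \varepsilon_{it} an error term. Because the lagged term Q_{i,t-1} is correlated with \eta_i, such models are typically estimated with instrumental-variable/GMM methods rather than ordinary least squares.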
Abstract:
Tolls have increasingly become a common mechanism to fund road projects in recent decades. Therefore, improving the knowledge of demand behavior constitutes a key aspect for stakeholders dealing with the management of toll roads. However, the literature concerning demand elasticity estimates for interurban toll roads is still limited due to their relatively scarce number in the international context. Furthermore, existing research has left some aspects to be investigated, among others the choice of GDP as the most common socioeconomic variable to explain traffic growth over time. This paper intends to determine the variables that better explain the evolution of light-vehicle demand on toll roads throughout the years. To that end, we establish a dynamic panel data methodology aimed at identifying the key socioeconomic variables explaining changes in light-vehicle demand over time. The results show that, despite some usefulness, GDP is not the most appropriate explanatory variable, while other parameters such as employment or GDP per capita lead to more stable and consistent results. The methodology is applied to Spanish toll roads for the 1990–2011 period, which constitutes a very interesting case of variations in toll road use, as road demand has experienced a significant decrease since the beginning of the economic crisis in 2008.
Abstract:
Since the early days of logic programming, researchers in the field realized the potential for exploitation of parallelism present in the execution of logic programs. Their high-level nature, the presence of nondeterminism, and their referential transparency, among other characteristics, make logic programs interesting candidates for obtaining speedups through parallel execution. At the same time, the fact that the typical applications of logic programming frequently involve irregular computations, make heavy use of dynamic data structures with logical variables, and involve search and speculation, makes the techniques used in the corresponding parallelizing compilers and run-time systems potentially interesting even outside the field. The objective of this article is to provide a comprehensive survey of the issues arising in parallel execution of logic programming languages along with the most relevant approaches explored to date in the field. Focus is mostly given to the challenges emerging from the parallel execution of Prolog programs. The article describes the major techniques used for shared memory implementation of Or-parallelism, And-parallelism, and combinations of the two. We also explore some related issues, such as memory management, compile-time analysis, and execution visualization.
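As a rough illustration of Or-parallelism (a Python analogue of ours, not a real Prolog engine, which would also have to share or copy binding environments between branches), the alternative clauses matching a goal can be explored by parallel workers:

# Or-parallelism, roughly: each alternative of a nondeterministic choice
# point is resolved by a separate worker; all successes are collected.
from concurrent.futures import ProcessPoolExecutor

def solve_branch(choice):
    # Stand-in for resolving a goal against one matching clause.
    return choice if choice * choice % 7 == 1 else None   # toy "success" test

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        solutions = [r for r in pool.map(solve_branch, range(20)) if r is not None]
    print(solutions)   # all successful branches, found in parallel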
Abstract:
Nowadays, the Internet is a place where social networks have had a major impact on collaboration among people around the world. This article proposes a new paradigm for building CSCW business tools following the novel ideas provided by the social web to collaborate and generate awareness. An implementation of these concepts is described, including the components we provide to collaborate in workspaces (such as videoconferencing, chat, desktop sharing, forums or temporal events) and the way we generate awareness from these complex social data structures. Figures and validation results are also presented to show that this architecture has been designed to support awareness generation by joining current and future social data from the business and social network worlds, based on the idea of using social data stored in the cloud.
Abstract:
Managing large medical image collections is an increasingly demanding issue in many hospitals and other medical settings. A huge amount of this information is generated daily, which requires robust and agile systems. In this paper we present a distributed multi-agent system capable of managing very large medical image datasets. In this approach, agents extract low-level information from images and store it in a data structure implemented in a relational database. The data structure can also store semantic information related to images and particular regions. A distinctive aspect of our work is that a single image can be divided so that the resulting sub-images can be stored and managed separately by different agents, improving performance in data access and processing. The system also offers the possibility of applying region-based operations and filters on images, facilitating image classification. These operations can be performed directly on the data structures in the database.
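A minimal sketch of the kind of relational structure the paper describes, with sub-images stored as separate rows so that different agents can manage them independently (table and column names are our illustrative assumptions, not the system's actual schema):

# Sub-images as independent rows, each owned by an agent, carrying both
# low-level features and optional semantic annotations.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE image (
        id INTEGER PRIMARY KEY,
        patient_id TEXT,
        modality TEXT
    );
    CREATE TABLE sub_image (
        id INTEGER PRIMARY KEY,
        image_id INTEGER REFERENCES image(id),
        agent_id TEXT,                -- agent responsible for this region
        x0 INTEGER, y0 INTEGER, x1 INTEGER, y1 INTEGER,
        mean_intensity REAL,          -- example low-level feature
        semantic_label TEXT           -- optional semantic annotation
    );
""")
conn.execute("INSERT INTO image (patient_id, modality) VALUES ('P001', 'MRI')")
conn.execute(
    "INSERT INTO sub_image (image_id, agent_id, x0, y0, x1, y1, mean_intensity) "
    "VALUES (1, 'agent-3', 0, 0, 128, 128, 87.4)")
for row in conn.execute("SELECT agent_id, x0, y0, x1, y1 FROM sub_image"):
    print(row)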
Abstract:
Irregular computations pose some of the most interesting and challenging problems in automatic parallelization. Irregularity appears in certain kinds of numerical problems and is pervasive in symbolic applications. Such computations often use dynamic data structures, which make heavy use of pointers. This complicates all the steps of a parallelizing compiler, from independence detection to task partitioning and placement. Starting in the mid 80s there has been significant progress in the development of parallelizing compilers for logic programming (and more recently, constraint programming), resulting in quite capable parallelizers. The typical applications of these paradigms frequently involve irregular computations and make heavy use of dynamic data structures with pointers, since logical variables represent in practice a well-behaved form of pointers. This arguably makes the techniques used in these compilers potentially interesting. In this paper, we introduce in a tutorial way some of the problems faced by parallelizing compilers for logic and constraint programs and provide pointers to some of the significant progress made in the area. In particular, this work has resulted in a series of achievements in the areas of inter-procedural pointer aliasing analysis for independence detection, cost models and cost analysis, cactus-stack memory management, and techniques for managing speculative and irregular computations through task granularity control and dynamic task allocation (such as work-stealing schedulers), etc.
Abstract:
Precise modeling of the program heap is fundamental for understanding the behavior of a program, and is thus of significant interest for many optimization applications. One of the fundamental properties of the heap that can be used in a range of optimization techniques is the sharing relationships between the elements in an array or collection. If an analysis can determine that the memory locations pointed to by different entries of an array (or collection) are disjoint, then in many cases loops that traverse the array can be vectorized or transformed into a thread-parallel version. This paper introduces several novel sharing properties over the concrete heap and corresponding abstractions to represent them. In conjunction with an existing shape analysis technique, these abstractions allow us to precisely resolve the sharing relations in a wide range of heap structures (arrays, collections, recursive data structures, composite heap structures) in a computationally efficient manner. The effectiveness of the approach is evaluated on a set of challenge problems from the JOlden and SPECjvm98 suites. Sharing information obtained from the analysis is used to achieve substantial thread-level parallel speedups.
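To make the optimization concrete, here is a minimal sketch (ours) of the transformation that disjointness information licenses: once the analysis proves that the sub-structures reachable from different array entries share no memory, the per-entry update can safely run thread-parallel.

# Each update mutates only the sub-structure rooted at its entry; the
# parallel map is sound precisely when the sharing analysis shows the
# entries point to disjoint memory.
from concurrent.futures import ThreadPoolExecutor

def update(node):
    node["value"] *= 2
    return node["value"]

array = [{"value": i} for i in range(8)]   # provably disjoint records

with ThreadPoolExecutor() as pool:
    results = list(pool.map(update, array))
print(results)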
Abstract:
Irregular computations pose some of the most interesting and challenging problems in automatic parallelization. Irregularity appears in certain kinds of numerical problems and is pervasive in symbolic applications. Such computations often use dynamic data structures which make heavy use of pointers. This complicates all the steps of a parallelizing compiler, from independence detection to task partitioning and placement. In the past decade there has been significant progress in the development of parallelizing compilers for logic programming and, more recently, constraint programming. The typical applications of these paradigms frequently involve irregular computations, which arguably makes the techniques used in these compilers potentially interesting. In this paper we introduce in a tutorial way some of the problems faced by parallelizing compilers for logic and constraint programs. These include the need for inter-procedural pointer aliasing analysis for independence detection and having to manage speculative and irregular computations through task granularity control and dynamic task allocation. We also provide pointers to some of the progress made in these areas. In the associated talk we demonstrate representatives of several generations of these parallelizing compilers.
Abstract:
It is known that variations in the dynamic behavior of structures can be used within a system that monitors structural integrity. This study aims to understand the dynamic behavior of slender buildings under different environmental agents such as temperature and/or wind direction and velocity. As part of this investigation, two buildings are studied: the tower of the ETSI (Escuela Técnica Superior de Ingenieros) de Caminos, Canales y Puertos of the UPM (Universidad Politécnica de Madrid) and a residential building located on Calle de Arturo Soria in Madrid. The environmental data were recorded with weather stations located on the roofs of both buildings. An operational modal analysis was carried out on both structures. This analysis is performed from measurements of the accelerations under ambient excitation and is based only on the response of the structure; it is therefore not necessary to interrupt the operation of the building, and its in-service behavior is obtained. From this analysis, the natural frequencies, modal damping ratios and mode shapes are obtained. We have thus studied the relationship between the variation in the estimated natural frequencies and the variation of the environmental agents (mainly temperature). The dynamic tests in the two buildings mentioned above were performed using wirelessly synchronized high-sensitivity accelerometers, which simplified the experimental work compared with traditional systems. As a result of the work performed, the following points can be highlighted: (i) dynamic analyses of buildings can be carried out with the available equipment, (ii) the knowledge of the dynamics of these structures has been improved, and (iii) the potential importance of the environmental agents, depending on the structural type of the building, has been shown. Building on this work, mathematical models could be updated for the prediction of damage to structures, and the effects of the environmental agents could be removed, which is a vital point if the modal parameters are to be used to compute damage indices. This type of research will help provide more information about the behavior of structures so that, in the future, various processes can be carried out, such as the formulation of mathematical models that reflect the real behavior more faithfully. In this way, monitoring the environmental agents will make it possible to assess the influence of their variations on the structure and to remove these effects, reducing the uncertainty in the frequency variations used as an alarm-activation system for the detection of structural damage.
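A minimal sketch of the frequency-estimation step of operational modal analysis (our illustration on simulated data; the study used real ambient-vibration records from synchronized wireless accelerometers): natural frequencies appear as peaks in the power spectral density of the measured accelerations.

# Peak-picking on the PSD of an "ambient" acceleration record.
import numpy as np
from scipy.signal import welch, find_peaks

fs = 100.0                                    # sampling rate [Hz]
t = np.arange(0, 600, 1 / fs)                 # 10 minutes of response
rng = np.random.default_rng(0)
accel = (np.sin(2 * np.pi * 1.2 * t)          # first mode near 1.2 Hz
         + 0.5 * np.sin(2 * np.pi * 3.8 * t)  # second mode near 3.8 Hz
         + rng.normal(0, 1.0, t.size))        # broadband excitation/noise

f, pxx = welch(accel, fs=fs, nperseg=8192)
peaks, _ = find_peaks(pxx, height=10 * np.median(pxx))
print("estimated natural frequencies [Hz]:", f[peaks])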
Abstract:
The influence of the surrounding fluid on the dynamic characteristics of structures has been studied for many years. However, most of these works were concerned with underwater applications, such as the sonar of a submarine, and therefore the surrounding fluid was considered a liquid (negligible compressibility effects). More recently, for acoustical and space applications such as antennas or very light panels, the influence of a surrounding low-density fluid on the dynamic characteristics of a structure has been studied. For example, it has been shown that the air effect for the Intelsat VI C-B transmit reflector, with a diameter of 3.2 meters and weighing only 34.7 kg, decreases the first modal frequency by about 20% with respect to its value in vacuum. It is therefore important, in the development of these light and large structures, to have a method for estimating the effect of the surrounding fluid on their natural frequencies. In this way, testing the structure in a vacuum chamber, which for a large antenna or panel can be difficult and expensive, can be avoided. A boundary element method (BEM) has been developed for determining the effect of the surrounding fluid on the dynamic characteristics of a circular plate. After the modes of the plate in vacuum are calculated analytically, the added mass matrix due to the fluid loading is determined by the boundary element method. This method uses circular rings, so the number of elements needed to obtain accurate results is very low. An iteration procedure for computing the natural frequencies of the coupled fluid-structure system is presented for the case of a compressible fluid. Comparisons of the present method with experimental data and other theories show its efficiency and accuracy for any support condition of the plate. On the other hand, sometimes the geometry of the plate is not circular but almost circular, so a perturbation method has been developed to determine the influence of an incompressible fluid on the dynamic characteristics of almost-circular plates. The method is applied to plates of elliptical shape with low eccentricity. First, the natural frequencies and mode shapes of the plate vibrating in vacuum are obtained. Next, the nondimensional added virtual mass coefficients (NAVMI factors) are calculated. Results for these factors and the effect of the fluid on the natural frequencies are presented.
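A minimal sketch of the iteration procedure mentioned above (ours; the added-mass function is a made-up stand-in for the BEM result): because the added mass of a compressible fluid depends on the vibration frequency, the coupled natural frequency solves omega^2 = k / (m + m_a(omega)) and can be found by fixed-point iteration starting from the in-vacuum frequency.

# Fixed-point iteration for a fluid-loaded natural frequency.

def added_mass(omega, m_a0=0.2, c=340.0, a=1.6):
    # Illustrative frequency dependence of the fluid added mass (not BEM).
    return m_a0 / (1.0 + (omega * a / c) ** 2)

def coupled_frequency(k, m, tol=1e-9, max_iter=100):
    omega = (k / m) ** 0.5                  # start from the in-vacuum value
    for _ in range(max_iter):
        new = (k / (m + added_mass(omega))) ** 0.5
        if abs(new - omega) < tol:
            return new
        omega = new
    raise RuntimeError("iteration did not converge")

k, m = 2.5e4, 1.0                           # modal stiffness and mass (vacuum)
print("in vacuum: %.2f rad/s" % ((k / m) ** 0.5))
print("in fluid : %.2f rad/s" % coupled_frequency(k, m))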
Abstract:
The availability of accurate and updated forest inventory data is essential for improving sustainable forest management, promoting forest conservation policies and reducing carbon emissions from deforestation and forest degradation (REDD). In this sense, LiDAR technology has proven to be a powerful tool for characterizing forest structure continuously over large areas and assessing the main forest inventory variables. Variables such as biomass, stem number, stem volume, dominant height, mean diameter and mean height can be predicted with quality comparable to that of costly traditional field inventories. This thesis analyses the application of LiDAR-based plot-level (area-based) methods under a range of stand conditions (pure and mixed temperate forests and tropical forests) and with different LiDAR databases (nationwide flight data vs. data captured specifically for forest inventory), and thus examines the generation of continuous forest inventories over large areas. These methods rely on statistical relationships between predictor variables derived from the airborne LiDAR point cloud and field-measured inventory variables to generate wall-to-wall forest inventory cartography. The fast development of this technology in recent years has led many countries to implement national airborne LiDAR acquisition programmes. These national flights are neither intended nor designed for forestry purposes, so the validity of this LiDAR information for describing forest structure and measuring forest variables needs to be evaluated; such data could mean a drastic cost reduction in the generation of continuous, high-resolution forest inventory information. In chapter 2, the estimation of forest variables from the LiDAR data captured within the Spanish national flight (Plan Nacional de Ortofotografía Aérea, PNOA-LiDAR) is evaluated by comparison with a flight designed specifically for forest inventory over the same area. This case study shows that scan angle, terrain slope and aspect have statistically significant, although small, effects on the estimation of biomass and of the LiDAR-derived forest structure variables; canopy cover was more affected by these factors than the height percentiles. Considering the entire study area, biomass estimations from the two databases showed no statistically significant differences. Simulations show that mean differences in biomass estimation between a specific flight and the national flight can exceed 4% in steep areas, with high scan angles and when the slope is not oriented towards the scan line. In chapter 3, a multi-source approach is developed for pure and mixed stands of Scots pine and beech, integrating all the available information (low-density nationwide LiDAR flights, Landsat imagery and permanent plots of the Spanish national forest inventory, SNFI). This multi-source approach proves suitable for producing continuous, high-resolution forest inventories over large areas, with good results in the generation of wall-to-wall estimates. The errors obtained in the fitting and validation of the basal area and volume models are similar to those reported by other authors using specific LiDAR flights and specific field plots. Errors in the estimation of stem number are larger than the values in the literature, which can be explained by the great influence that the variable-radius plots used in the SNFI have on this variable. In chapters 4 and 5, wall-to-wall plot-level methods for estimating aboveground biomass and carbon density in tropical forests are evaluated, using data from the Poas Volcano National Park (Costa Rica) in two different situations: i) a complete LiDAR coverage of the study area is available (chapter 4), and ii) a complete LiDAR coverage is not technically or economically feasible, so an incomplete LiDAR coverage is combined with Landsat imagery and ancillary information (chapter 5). In chapter 4, a general plot-level LiDAR model for aboveground biomass in tropical forests (Asner & Mascaro, 2014) is validated and compared with a model fitted specifically for the study area. Both models are based on the top-of-canopy height (TCH) variable derived from the LiDAR digital canopy model. Results for the study area show that the general model is a reliable alternative to fitting specific models, and that aboveground biomass in a new study area can be estimated by field-measuring only basal area (BA). To improve the application of this methodology, future studies should define precise procedures for measuring BA in the field (location, size and shape of the field plots). The relation between TCH and BA (the stocking coefficient) obtained in the study area varied locally; more field information is therefore needed to characterize the variability of the stocking coefficient between life zones and to determine whether strategies such as stratification can reduce the errors in biomass and carbon estimation in tropical forests. In chapter 5, the combination of a systematic sample of LiDAR information with a full coverage of moderate-resolution satellite imagery (and ancillary data) proves to be an effective alternative for continuous inventories of tropical forests. This methodology allows vegetation height, biomass and carbon density to be estimated in large areas where acquiring a full LiDAR coverage and carrying out a large volume of field work is economically and/or technically unfeasible. The alternatives examined for predicting biomass from Landsat imagery show a slight decrease in the coefficient of determination and a small increase in RMSE when the LiDAR coverage is sharply reduced. Results indicate that vegetation height, biomass and carbon density can be adequately estimated in tropical forests using low LiDAR coverages (between 5% and 20% of the study area).
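A minimal sketch of the area-based (plot-level) method on simulated data (ours; real pipelines compute the metrics from LAS point clouds clipped to georeferenced field plots): per-plot LiDAR height and cover metrics are regressed against a field-measured variable.

# Plot-level regression of "field" biomass on LiDAR-derived metrics.
import numpy as np

rng = np.random.default_rng(1)
n_plots = 50

# Per-plot LiDAR predictor metrics (simulated).
p95 = rng.uniform(5, 30, n_plots)           # 95th height percentile [m]
cover = rng.uniform(0.3, 0.95, n_plots)     # fraction of returns above 2 m

# "Field-measured" biomass with noise, for illustration only.
biomass = 4.0 * p95 + 60.0 * cover + rng.normal(0, 8, n_plots)

# Fit a linear plot-level model by least squares and report fit quality.
X = np.column_stack([np.ones(n_plots), p95, cover])
coef, *_ = np.linalg.lstsq(X, biomass, rcond=None)
pred = X @ coef
rmse = float(np.sqrt(np.mean((biomass - pred) ** 2)))
print("coefficients:", coef.round(2), "RMSE:", round(rmse, 2))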
Abstract:
For years, the Human Computer Interaction (HCI) community has crafted usability guidelines that clearly define what characteristics a software system should have in order to be easy to use. However, the Software Engineering (SE) community keeps falling short of successfully incorporating these recommendations into software projects. From an SE perspective, the process of incorporating usability features into software is not always straightforward, as a large number of these features have heavy implications for the underlying software architecture. For example, successfully including an “undo” feature in an application requires the design and implementation of many complex interrelated data structures and functionalities. Our work is focused on providing developers with a set of software design patterns to assist them in the process of designing more usable software. This would contribute to the proper inclusion of specific usability features with a high impact on the software design. Preliminary validation data show that usage of the guidelines also has positive effects on development time and overall software design quality.
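To illustrate why a feature like “undo” carries architectural weight, here is a minimal sketch (ours, not the authors' pattern catalogue): every mutating operation must be reified as a command object that knows how to reverse itself, and an undo history must thread through the whole design.

# Command-pattern undo: each edit records enough state to invert itself.
class InsertText:
    def __init__(self, doc, pos, text):
        self.doc, self.pos, self.text = doc, pos, text
    def execute(self):
        self.doc.buffer = self.doc.buffer[:self.pos] + self.text + self.doc.buffer[self.pos:]
    def undo(self):
        self.doc.buffer = self.doc.buffer[:self.pos] + self.doc.buffer[self.pos + len(self.text):]

class Document:
    def __init__(self):
        self.buffer = ""
        self.history = []          # the undo stack every operation must feed
    def run(self, command):
        command.execute()
        self.history.append(command)
    def undo(self):
        if self.history:
            self.history.pop().undo()

doc = Document()
doc.run(InsertText(doc, 0, "Hello"))
doc.run(InsertText(doc, 5, ", world"))
doc.undo()
print(doc.buffer)                  # -> "Hello"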