861 results for Multi Domain Information Model
Abstract:
Information retrieval has been much discussed within Information Science lately. The search for quality information compatible with users' needs has become the object of constant research. Using the Internet as a source of dissemination of knowledge has suggested new models of information storage, such as digital repositories, which have been used in academic research as the main form of self-archiving and disseminating information, but with an information structure that calls for better descriptions of resources and hence better retrieval. Thus, the objective is to improve the process of information retrieval by presenting a proposal for a structural model in the context of the semantic web, addressing the use of web 2.0 and web 3.0 in digital repositories and enabling semantic retrieval of information through the construction of a data layer called Iterative Representation. The present study is descriptive and analytical, based on document analysis, and divided into two parts: the first, characterized by non-participant direct observation of tools that implement digital repositories, as well as of digital repositories already instantiated; the second, prospective in character, proposing an innovative model for repositories that uses knowledge representation structures and user participation in building a domain vocabulary. The proposed model, Iterative Representation, makes it possible to tailor digital repositories using folksonomy together with the controlled vocabulary of the field in order to generate an iterative data layer, which allows information feedback and semantic retrieval of information through the structural model designed for repositories. The proposed model led to the formulation of the thesis that, through Iterative Representation, it is possible to establish a process of semantic retrieval of information in digital repositories.
Abstract:
Graduate Program in Animal Science and Technology - FEIS
Abstract:
Magnetotactic bacteria biomineralize magnetic minerals with precisely controlled size, morphology, and stoichiometry. These cosmopolitan bacteria are widely observed in aquatic environments. If preserved after burial, the inorganic remains of magnetotactic bacteria act as magnetofossils that record ancient geomagnetic field variations. They also have potential to provide paleoenvironmental information. In contrast to conventional magnetofossils, giant magnetofossils (most likely produced by eukaryotic organisms) have only been reported once before from Paleocene-Eocene Thermal Maximum (PETM; 55.8 Ma) sediments on the New Jersey coastal plain. Here, using transmission electron microscopic observations, we present evidence for abundant giant magnetofossils, including previously reported elongated prisms and spindles, and new giant bullet-shaped magnetite crystals, in the Southern Ocean near Antarctica, not only during the PETM, but also shortly before and after the PETM. Moreover, we have discovered giant bullet-shaped magnetite crystals from the equatorial Indian Ocean during the Mid-Eocene Climatic Optimum (~40 Ma). Our results indicate a more widespread geographic, environmental, and temporal distribution of giant magnetofossils in the geological record with a link to "hyperthermal" events. Enhanced global weathering during hyperthermals, and expanded suboxic diagenetic environments, probably provided more bioavailable iron that enabled biomineralization of giant magnetofossils. Our micromagnetic modelling indicates the presence of magnetic multi-domain (i.e., not ideal for navigation) and single-domain (i.e., ideal for navigation) structures in the giant magnetite particles depending on their size, morphology, and spatial arrangement. Different giant magnetite crystal morphologies appear to have had different biological functions, including magnetotaxis and other non-navigational purposes. Our observations suggest that hyperthermals provided ideal conditions for giant magnetofossils, and that these organisms were globally distributed. Much more work is needed to understand the interplay between magnetofossil morphology, climate, nutrient availability, and environmental variability.
Abstract:
This dissertation investigates the biogeochemical processes within the vegetation layer (canopy) and the feedbacks between physiological and physical environmental processes that influence the climate and the chemistry of the lower atmosphere. A particular focus is the use of theoretical approaches to quantify the vertical exchange of energy and trace gases (vertical flux), with special attention to the interactions among the processes involved. A detailed multi-layer vegetation model is derived, implemented, parameterized for the Amazonian rain forest, and applied to a site in Rondonia (southwest Amazonia); it combines the coupled leaf-scale equations for the surface energy balance and CO2 assimilation with a Lagrangian description of vertical transport at the canopy scale. The derived parameterizations include the vertical leaf area density distribution, a normalized profile of the horizontal wind speed, the light acclimation of photosynthetic capacity, and the exchange of CO2 and heat at the soil surface. Furthermore, the calculations of photosynthesis, stomatal conductance, and radiation attenuation within the canopy are evaluated against field measurements. The vertical transport submodel is evaluated in detail using 222-radon measurements. The "forward solution" and the "inverse approach" of the Lagrangian dispersion model are assessed by comparing observed and predicted concentration profiles and soil fluxes, respectively. A new approach is derived to quantify the uncertainties of the inverse approach arising from those of the input concentration profile. For nighttime conditions, a modified turbulence parameterization is proposed that accounts for free convection in the lower canopy at night and, compared with earlier estimates, leads to considerably shorter residence times within the canopy. The predicted daytime and nighttime stratification of the canopy is consistent with observations in dense vegetation. The diurnal cycles of the predicted fluxes and scalar profiles of temperature, H2O, CO2, isoprene, and O3 during the late wet and dry seasons at the Rondonia site agree well with observations. The results point to seasonal physiological changes, manifested as higher stomatal conductances and lower photosynthesis rates during the wet and dry seasons, respectively. The observed ozone deposition velocities during the wet season exceed those of the dry season by 150-250%. This cannot be explained by realistic physiological changes, but it can be explained by an additional cuticular uptake mechanism, possibly on wet surfaces. The comparison of observed and predicted isoprene concentrations within the canopy points to a reduced isoprene emission capacity of shade-adapted leaves and, in addition, to isoprene uptake by the soil, which would reduce the global estimate for the tropical rain forest by 30%. In a detailed sensitivity study, the VOC emissions of Amazonian tree species are related to physiological and abiotic factors using a neural network approach.
The performance of individual parameter combinations in predicting VOC emission is compared with the predictions of a model that serves as the de facto standard emission algorithm for isoprene and uses light and temperature as input parameters. The standard algorithm and the neural network using light and temperature as inputs perform very well on individual data sets, but fail to predict the observed VOC emissions when data sets from different periods (wet/dry season), leaf developmental stages, or even different species are merged. Adding information about the temperature history to the network partly reduces the unexplained variance. Even better performance, however, is achieved with physiological parameter combinations, which highlights the strong coupling between VOC emission and leaf physiology.
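The inverse Lagrangian approach evaluated above can be made concrete with a worked equation. The following is a minimal sketch of the standard dispersion-matrix (localized near-field) formulation; the notation is chosen here for illustration and is not taken from the dissertation.

```latex
% Sketch of the inverse Lagrangian (dispersion-matrix) formulation; symbols are
% illustrative assumptions, not the dissertation's notation.
% Forward problem: concentrations follow from known layer source strengths S_j
\[
  C_i - C_{\mathrm{ref}} \;=\; \sum_{j=1}^{m} D_{ij}\, S_j\, \Delta z_j ,
  \qquad i = 1,\dots,n
\]
% D_ij: dispersion matrix (from turbulence statistics), S_j: source/sink strength
% in layer j, \Delta z_j: layer thickness, C_ref: reference concentration.
% Inverse approach: given measured C_i, estimate the sources, e.g. by least squares
\[
  \hat{S} \;=\; \arg\min_{S}\, \bigl\| (C - C_{\mathrm{ref}}) - D\,\mathrm{diag}(\Delta z)\, S \bigr\|^2 .
\]
```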
Abstract:
Many industries and academic institutions share the vision that an appropriate use of information originating from the environment may add value to services in multiple domains and may help humans deal with the growing information overload that often seems to jeopardize our lives. It is also clear that information sharing and mutual understanding between software agents may impact complex processes where many actors (humans and machines) are involved, leading to relevant socioeconomic benefits. Starting from these two inputs, architectural and technological solutions to enable “environment-related cooperative digital services” are explored here. The proposed analysis starts from the consideration that our environment is a physical space in which diversity is a major value. On the other hand, diversity is detrimental to common technological solutions and is an obstacle to mutual understanding. An appropriate environment abstraction and a shared information model are needed to provide the required levels of interoperability in our heterogeneous habitat. This thesis reviews several approaches to supporting environment-related applications and intends to demonstrate that smart-space-based, ontology-driven, information-sharing platforms may become a flexible and powerful solution to support interoperable services in virtually any domain and even in cross-domain scenarios. It also shows that semantic technologies can be fruitfully applied beyond the representation of application domain knowledge. For example, semantic modeling of Human-Computer Interaction may support interaction interoperability and the transformation of interaction primitives into actions, and the thesis shows how smart-space-based platforms driven by an interaction ontology may enable natural and flexible ways of accessing resources and services, e.g., with gestures. An ontology for computational flow execution has also been built to represent abstract computation, with the goal of exploring new ways of scheduling computation flows with smart-space-based semantic platforms.
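As a rough illustration of the kind of shared, ontology-driven information model described above, the sketch below publishes a device observation into an RDF graph and queries it back with SPARQL. The namespace, classes, and properties are invented for the example; this is not the thesis's actual ontology or smart-space platform.

```python
# Minimal sketch of a shared, ontology-driven information model (assumed
# vocabulary; not the thesis's actual ontology or smart-space platform).
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF

EX = Namespace("http://example.org/smartspace#")  # hypothetical namespace

g = Graph()
g.bind("ex", EX)

# One agent publishes an observation about the environment as triples.
g.add((EX.kitchenSensor, RDF.type, EX.TemperatureSensor))
g.add((EX.kitchenSensor, EX.locatedIn, EX.Kitchen))
g.add((EX.kitchenSensor, EX.hasReading, Literal(22.5)))

# Another agent, sharing the same model, queries without knowing the producer.
results = g.query("""
    PREFIX ex: <http://example.org/smartspace#>
    SELECT ?sensor ?value WHERE {
        ?sensor a ex:TemperatureSensor ;
                ex:locatedIn ex:Kitchen ;
                ex:hasReading ?value .
    }
""")
for sensor, value in results:
    print(sensor, value)
```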
Abstract:
Information is nowadays a key resource: machine learning and data mining techniques have been developed to extract high-level information from great amounts of data. As most data comes in the form of unstructured text in natural languages, research on text mining is currently very active and dealing with practical problems. Among these, text categorization deals with the automatic organization of large quantities of documents into predefined taxonomies of topic categories, possibly arranged in large hierarchies. In commonly proposed machine learning approaches, classifiers are automatically trained from pre-labeled documents: they can perform very accurate classification, but often require a sizable training set and notable computational effort. Methods for cross-domain text categorization have been proposed, which make it possible to leverage a set of labeled documents from one domain to classify those of another. Most methods use advanced statistical techniques, usually involving tuning of parameters. A first contribution presented here is a method based on nearest centroid classification, where profiles of categories are generated from the known domain and then iteratively adapted to the unknown one. Despite being conceptually simple and having easily tuned parameters, this method achieves state-of-the-art accuracy on most benchmark datasets with fast running times. A second, deeper contribution involves the design of a domain-independent model to distinguish the degree and type of relatedness between arbitrary documents and topics, inferred from the different types of semantic relationships between their representative words, identified by specific search algorithms. The application of this model is tested on both flat and hierarchical text categorization, where it potentially allows the efficient addition of new categories during classification. Results show that classification accuracy still requires improvement, but models generated in one domain are shown to be effectively reusable in a different one.
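The nearest-centroid idea behind the first contribution can be sketched roughly as follows. The cosine-similarity assignment rule, the fixed number of iterations, and the data handling are assumptions made for illustration, not details taken from the thesis.

```python
# Rough sketch of cross-domain nearest-centroid categorization: build category
# profiles (centroids) from labeled source-domain documents, then iteratively
# re-estimate them on the unlabeled target domain. Illustrative only.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def cross_domain_centroids(src_docs, src_labels, tgt_docs, n_iter=5):
    vec = TfidfVectorizer(stop_words="english")
    X_src = vec.fit_transform(src_docs)
    X_tgt = vec.transform(tgt_docs)
    cats = sorted(set(src_labels))

    # Initial profiles: mean tf-idf vector of each category in the source domain.
    centroids = np.asarray(np.vstack([
        X_src[[i for i, y in enumerate(src_labels) if y == c]].mean(axis=0)
        for c in cats
    ]))

    for _ in range(n_iter):
        # Assign each target document to the most similar profile ...
        sim = cosine_similarity(X_tgt, centroids)
        pred = sim.argmax(axis=1)
        # ... then adapt the profiles to the target domain.
        centroids = np.vstack([
            np.asarray(X_tgt[pred == k].mean(axis=0)) if (pred == k).sum() else centroids[k]
            for k in range(len(cats))
        ])
    return [cats[k] for k in pred]
```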
Abstract:
Throughout this research, the whole life cycle of a building will be analyzed, with a special focus on the most common issues that affect the construction sector nowadays, such as safety. In fact, the goal is to enhance the management of the entire construction process in order to reduce the risk of accidents. The contemporary trend is that of researching new tools capable of reducing, or even eliminating, the most common mistakes that usually lead to safety risks. That is one of the main reasons why new technologies and tools have been introduced in the field. The one we will focus on is the so-called BIM: Building Information Modeling. The term BIM refers to a broader and more complex analysis tool than simple 3D modeling software. Through BIM technologies we are able to generate a multi-dimensional 3D model which contains all the information about the project. This innovative approach aims at a better understanding and control of the project by taking the entire life cycle into consideration, resulting in faster and more sustainable management. Furthermore, BIM software allows all the information to be shared among the different aspects of the project and among the different participants involved, thus improving cooperation and communication. In addition, BIM software provides smart tools that simulate and visualize the process in advance, thus preventing issues that might not have been taken into consideration during the design process. This leads to higher chances of avoiding risks, delays and cost increases. Using a hospital case study, we will apply this approach to the completion of a safety plan, with a special focus on the construction phase.
Abstract:
In linear mixed models, model selection frequently includes the selection of random effects. Two versions of the Akaike information criterion (AIC) have been used, based either on the marginal or on the conditional distribution. We show that the marginal AIC is no longer an asymptotically unbiased estimator of the Akaike information, and in fact favours smaller models without random effects. For the conditional AIC, we show that ignoring estimation uncertainty in the random effects covariance matrix, as is common practice, induces a bias that leads to the selection of any random effect not predicted to be exactly zero. We derive an analytic representation of a corrected version of the conditional AIC, which avoids the high computational cost and imprecision of available numerical approximations. An implementation in an R package is provided. All theoretical results are illustrated in simulation studies, and their impact in practice is investigated in an analysis of childhood malnutrition in Zambia.
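For orientation, the two criteria can be stated in their standard textbook form; the notation below is ours, and the corrected penalty derived in the paper is only described, not reproduced.

```latex
% Standard definitions for orientation (assumed notation, not copied from the
% paper). For a linear mixed model y = X\beta + Zb + \epsilon:
\[
  \text{mAIC} = -2\log g\bigl(y \mid \hat\beta, \hat\theta\bigr) + 2(p + q),
\]
% with g the marginal likelihood (random effects integrated out), p fixed-effect
% coefficients and q variance parameters, and
\[
  \text{cAIC} = -2\log f\bigl(y \mid \hat\beta, \hat b\bigr) + 2\,\Phi,
\]
% with f the conditional likelihood given the predicted random effects \hat b and
% \Phi an effective-degrees-of-freedom penalty, e.g. the trace of the hat matrix
% mapping y to \hat y. The corrected conditional AIC of the paper additionally
% accounts for the uncertainty in the estimated random-effects covariance matrix.
```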
Abstract:
Systems must co-evolve with their context. Reverse engineering tools are a great help in this process of required adaptation. In order for these tools to be flexible, they work with models, abstract representations of the source code. The extraction of such information from source code can be done using a parser. However, building new parsers is fairly tedious, and this is made worse by the fact that it has to be done over and over again for every language we want to analyze. In this paper we propose a novel approach which minimizes the language-specific knowledge required to extract models from code implemented in a given language, by reflecting on the implementation of preparsed ASTs provided by an IDE. In a second phase we use a technique referred to as Model Mapping by Example to map platform-dependent models onto domain-specific models.
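To make the model-extraction step concrete, here is a rough sketch that walks an already-parsed AST (Python's built-in ast module standing in for the preparsed ASTs an IDE would provide) and maps it onto a simple language-independent model of classes, methods, and calls. The model schema is invented for illustration; this is not the paper's Model Mapping by Example technique.

```python
# Sketch: derive a simple, language-independent model (classes, methods, calls)
# from a preparsed AST. Python's ast module stands in for an IDE-provided AST;
# the model schema below is invented for illustration.
import ast

SOURCE = """
class Account:
    def deposit(self, amount):
        self.log(amount)
    def log(self, amount):
        print(amount)
"""

def extract_model(source):
    tree = ast.parse(source)              # the "preparsed AST"
    model = {"classes": []}
    for node in ast.walk(tree):
        if isinstance(node, ast.ClassDef):
            entry = {"name": node.name, "methods": []}
            for item in node.body:
                if isinstance(item, ast.FunctionDef):
                    calls = [n.func.attr for n in ast.walk(item)
                             if isinstance(n, ast.Call)
                             and isinstance(n.func, ast.Attribute)]
                    entry["methods"].append({"name": item.name, "calls": calls})
            model["classes"].append(entry)
    return model

print(extract_model(SOURCE))
# {'classes': [{'name': 'Account',
#               'methods': [{'name': 'deposit', 'calls': ['log']},
#                           {'name': 'log', 'calls': []}]}]}
```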
Abstract:
This dissertation explores phase I dose-finding designs in cancer trials from three perspectives: alternative Bayesian dose-escalation rules, a design based on a time-to-dose-limiting-toxicity (DLT) model, and a design based on a discrete-time multi-state (DTMS) model. We list alternative Bayesian dose-escalation rules and perform a simulation study for intra-rule and inter-rule comparisons based on two statistical models to identify the most appropriate rule under certain scenarios. We provide evidence that all the Bayesian rules outperform the traditional “3+3” design in the allocation of patients and selection of the maximum tolerated dose. The design based on a time-to-DLT model uses patients' DLT information over multiple treatment cycles in estimating the probability of DLT at the end of treatment cycle 1. Dose-escalation decisions are made whenever a cycle-1 DLT occurs, or two months after the previous checkpoint. Compared to the design based on a logistic regression model, the new design shows more safety benefits for trials in which more late-onset toxicities are expected. As a trade-off, the new design requires more patients on average. The design based on a discrete-time multi-state (DTMS) model has three important attributes: (1) toxicities are categorized over a distribution of severity levels, (2) early toxicity may inform dose escalation, and (3) no suspension is required between accrual cohorts. The proposed model accounts for the difference in the importance of the toxicity severity levels and for transitions between toxicity levels. We compare the operating characteristics of the proposed design with those from a similar design based on a fully-evaluated model that directly models the maximum observed toxicity level within the patients' entire assessment window. We describe settings in which, under comparable power, the proposed design shortens the trial. The proposed design offers more benefit compared to the alternative design as patient accrual becomes slower.
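One generic Bayesian escalation rule of the kind compared in the dissertation can be sketched as follows. The Beta(1,1) prior, the 30% DLT target, and the 90% overdose threshold are illustrative assumptions, not the dissertation's actual rules or models.

```python
# Sketch of a generic Bayesian dose-escalation decision with a Beta-Binomial
# model: stay/escalate/de-escalate based on the posterior for the DLT rate at
# the current dose. All numbers are illustrative.
from scipy.stats import beta

def escalation_decision(n_patients, n_dlt, target=0.30, max_overdose_prob=0.90,
                        prior_a=1.0, prior_b=1.0):
    """Posterior for the DLT probability is Beta(prior_a + DLTs, prior_b + non-DLTs)."""
    a = prior_a + n_dlt
    b = prior_b + (n_patients - n_dlt)
    p_exceed = beta.sf(target, a, b)      # P(p_DLT > target | data)
    if p_exceed > max_overdose_prob:      # current dose highly likely too toxic
        return "de-escalate", p_exceed
    posterior_mean = a / (a + b)
    return ("escalate" if posterior_mean < target else "stay"), p_exceed

# Example: 1 DLT among 6 patients at the current dose.
print(escalation_decision(6, 1))          # -> ('escalate', ~0.33)
```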
Abstract:
Software dependencies play a vital role in programme comprehension, change impact analysis and other software maintenance activities. Traditionally, these activities are supported by source code analysis; however, the source code is sometimes inaccessible or difficult to analyse, as in hybrid systems composed of source code in multiple languages using various paradigms (e.g. object-oriented programming and relational databases). Moreover, not all stakeholders have adequate knowledge to perform such analyses. For example, non-technical domain experts and consultants raise most maintenance requests; however, they cannot predict the cost and impact of the requested changes without the support of the developers. We propose a novel approach to predicting software dependencies by exploiting the coupling present in domain-level information. Our approach is independent of the software implementation; hence, it can be used to approximate architectural dependencies without access to the source code or the database. As such, it can be applied to hybrid systems with heterogeneous source code or legacy systems with missing source code. In addition, this approach is based solely on information visible and understandable to domain users; therefore, it can be efficiently used by domain experts without the support of software developers. We evaluate our approach with a case study on a large-scale enterprise system, in which we demonstrate how up to 65% of the source code dependencies and 77% of the database dependencies are predicted solely based on domain information.
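The general idea of predicting dependencies from domain-level coupling can be sketched in a few lines: count how often domain entities co-occur in domain artifacts and predict a dependency wherever the coupling is strong. The entities, artifacts, and threshold below are invented for the example, not taken from the case study.

```python
# Sketch: approximate dependencies between software modules from co-occurrence
# of the domain-level entities they expose, without reading source code.
from itertools import combinations
from collections import Counter

# Domain-level artifacts (e.g. maintenance requests or business documents),
# each mentioning domain entities that map to modules/tables of the system.
artifacts = [
    {"Invoice", "Customer"},
    {"Invoice", "Payment"},
    {"Customer", "Payment", "Invoice"},
    {"Shipment", "Customer"},
]

cooccur = Counter()
for entities in artifacts:
    for a, b in combinations(sorted(entities), 2):
        cooccur[(a, b)] += 1

# Predict a dependency wherever the domain coupling exceeds a threshold.
threshold = 2
predicted = {pair for pair, n in cooccur.items() if n >= threshold}
print(predicted)   # {('Customer', 'Invoice'), ('Invoice', 'Payment')}
```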
Abstract:
The runtime management of the infrastructure providing service-based systems is a complex task, up to the point where manual operation struggles to be cost-effective. As the functionality is provided by a set of dynamically composed distributed services, in order to achieve a management objective multiple operations have to be applied over the distributed elements of the managed infrastructure. Moreover, the manager must cope with the highly heterogeneous characteristics and management interfaces of the runtime resources. With this in mind, this paper proposes to support the configuration and deployment of services with an automated closed control loop. The automation is enabled by the definition of a generic information model, which captures all the information relevant to the management of the services with the same abstractions, describing the runtime elements, service dependencies, and business objectives. On top of that, a technique based on satisfiability is described which automatically diagnoses the state of the managed environment and obtains the changes required to correct it (e.g., installation, service binding, update, or configuration). The results from a set of case studies extracted from the banking domain are provided to validate the feasibility of this proposal.
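A toy version of the satisfiability-based diagnosis step may help: encode the managed environment as boolean variables, state the constraints and the management objective, and search for the satisfying assignment closest to the current state; the difference yields the corrective actions. The tiny model and brute-force search below are illustrative, not the paper's actual encoding or solver.

```python
# Rough sketch of satisfiability-based diagnosis/correction over a tiny,
# invented deployment model.
from itertools import product

VARS = ["db_deployed", "app_deployed", "app_bound_to_db", "app_online"]

def constraints_hold(s):
    return all([
        # binding requires both endpoints to be deployed
        (not s["app_bound_to_db"]) or (s["app_deployed"] and s["db_deployed"]),
        # the app is online exactly when it is deployed and bound
        s["app_online"] == (s["app_deployed"] and s["app_bound_to_db"]),
        # management (business) objective: the app must be online
        s["app_online"],
    ])

current = {"db_deployed": True, "app_deployed": False,
           "app_bound_to_db": False, "app_online": False}

# Diagnose: find the satisfying state closest to the current one; the difference
# is the plan of corrective actions (install, bind, configure, ...).
best = min((dict(zip(VARS, bits))
            for bits in product([False, True], repeat=len(VARS))
            if constraints_hold(dict(zip(VARS, bits)))),
           key=lambda s: sum(s[v] != current[v] for v in VARS))
actions = [v for v in VARS if best[v] != current[v]]
print(actions)   # ['app_deployed', 'app_bound_to_db', 'app_online']
```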
Abstract:
Resource Analysis (a.k.a. Cost Analysis) tries to approximate the cost of executing programs as functions of their input data sizes, without actually having to execute the programs. While a powerful resource analysis framework for object-oriented programs existed before this thesis, advanced aspects to improve the efficiency, the accuracy and the reliability of the results of the analysis still need to be further investigated. This thesis tackles this need from the following four different perspectives. (1) Shared mutable data structures are the bane of formal reasoning and static analysis. Analyses which keep track of heap-allocated data are referred to as heap-sensitive. Recent work proposes locality conditions for soundly tracking field accesses by means of ghost non-heap-allocated variables. In this thesis we present two extensions to this approach: the first extension is to consider array accesses (in addition to object fields), while the second extension focuses on handling cases for which the locality conditions cannot be proven unconditionally, by finding aliasing preconditions under which tracking such heap locations is feasible. (2) The aim of incremental analysis is, given a program, its analysis results and a series of changes to the program, to obtain the new analysis results as efficiently as possible and, ideally, without having to (re-)analyze fragments of code that are not affected by the changes. During software development, programs are constantly modified, but most analyzers still read and analyze the entire program at once in a non-incremental way. This thesis presents an incremental resource usage analysis which, after a change in the program is made, is able to reconstruct the upper bounds of all affected methods in an incremental way. To this purpose, we propose (i) a multi-domain incremental fixed-point algorithm which can be used by all global analyses required to infer the cost, and (ii) a novel form of cost summaries that allows us to incrementally reconstruct only those components of the cost functions affected by the change. (3) Resource guarantees that are automatically inferred by static analysis tools are generally not considered completely trustworthy unless the tool implementation or the results are formally verified. Performing full-blown verification of such tools is a daunting task, since they are large and complex. In this thesis we focus on the development of a formal framework for the verification of the resource guarantees obtained by the analyzers, instead of verifying the tools. We have implemented this idea using COSTA, a state-of-the-art cost analyzer for Java programs, and KeY, a state-of-the-art verification tool for Java source code. COSTA is able to derive upper bounds for Java programs while KeY proves the validity of these bounds and provides a certificate. The main contribution of our work is to show that the proposed cooperation between the tools can be used to automatically produce verified resource guarantees.
(4) Distribution and concurrency are today mainstream. Concurrent objects form a well established model for distributed concurrent systems. In this model, objects are the concurrency units that communicate via asynchronous method calls. Distribution suggests that analysis must infer the cost of the diverse distributed components separately. In this thesis we propose a novel object-sensitive cost analysis which, by using the results gathered by a points-to analysis, can keep the cost of the diverse distributed components separate.
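The multi-domain incremental fixed-point idea from contribution (2) can be illustrated with a small worklist sketch over a call graph: after a change, only the summaries of transitively affected methods are recomputed. The toy call graph and the constant-plus-callees cost model are stand-ins chosen for the example, not the thesis's cost domains.

```python
# Sketch of an incremental, fixed-point recomputation of per-method cost
# summaries: after a change, only methods transitively affected (callers of
# changed methods) are re-analyzed. The "cost" model is a toy stand-in.
CALLS = {"main": ["parse", "solve"], "solve": ["parse"], "parse": []}
LOCAL_COST = {"main": 1, "solve": 5, "parse": 3}
summaries = {}                         # method -> inferred cost summary

def cost_of(method):
    return LOCAL_COST[method] + sum(summaries[c] for c in CALLS[method])

def analyze_all():
    for m in ("parse", "solve", "main"):   # callees before callers
        summaries[m] = cost_of(m)

def reanalyze_incrementally(changed):
    # Worklist: start from the changed method and propagate to callers only
    # while a summary actually changes (fixed point).
    worklist = [changed]
    while worklist:
        m = worklist.pop()
        new = cost_of(m)
        if summaries.get(m) != new:
            summaries[m] = new
            worklist.extend(caller for caller, callees in CALLS.items()
                            if m in callees)

analyze_all()
print(summaries)                 # {'parse': 3, 'solve': 8, 'main': 12}
LOCAL_COST["parse"] = 4          # a source change affects only 'parse' locally
reanalyze_incrementally("parse")
print(summaries)                 # {'parse': 4, 'solve': 9, 'main': 14}
```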
Abstract:
Applications of remote sensing for monitoring what is happening on the land surface have multiplied and been refined with the launch of new sensors by the different space agencies. The need for up-to-date and spatially homogeneous data has led to the development of new programs such as the Earth Observing System (EOS) of the National Aeronautics and Space Administration (NASA). One of the sensors on board the flagship of that program, the TERRA satellite, is the Multi-angle Imaging SpectroRadiometer (MISR), designed to capture multi-angle information about the Earth's surface. Since the 1970s it has been known that the reflectance of various land covers and land uses varies depending on the viewing and illumination angles, that is, that they are anisotropic. Such variation is also related to the three-dimensional structure of those covers, so this relationship can be exploited to obtain information about that structure beyond what spectral information alone can provide. The MISR sensor incorporates 9 cameras at different angles to capture 9 almost simultaneous images of the same point, allowing relatively reliable estimates of the anisotropic response of the Earth's surface.
Several studies have shown that variables related to vegetation structure can be estimated with the information provided by this sensor, so this thesis makes an initial application to the Iberian Peninsula to check its usefulness in estimating forest variables of interest. In a first step we analyzed the temporal variability that occurs in the data due to changes in the acquisition geometry, i.e. the relative position of the sensors and the light source, which in this case is the Sun. It was found that the anisotropy is greater from late fall through early spring because the Sun's position is closer to the plane of the sensors. It was also found that the maximum and minimum values shift over time between the central and the most oblique angular positions. In the multi-angle characterization of CORINE Land Cover classes carried out here, the predominant shape in the images with the highest sun is convex, with a maximum in the camera closest to the light source. However, when the sun is much lower, that maximum is very external. Moreover, the data obtained for each land cover are much more variable in summer than in November, possibly due to the proportional increase in shadowed areas. To check whether multi-angle information has any effect on the classification of land cover and land use, a series of classifications was produced varying the data used, from multispectral only to combined multi-angle and multispectral. The results show that, while multi-angle information gives the worst results for the most generic classifications, as the number of classes increases it improves on what is obtained with multispectral information alone. In addition, quantitative variables such as canopy cover and vegetation height were estimated using information provided by MISR at different resolutions. In the valley of Alcudia (Ciudad Real), we estimated tree canopy cover for a 275 m pixel using neural networks. The results showed that using multispectral and multi-angle information can improve the estimates made with multispectral data alone by almost 20%. Furthermore, the relationships obtained reach an R coefficient of 0.7 with errors below 10% in canopy cover, a much better result than the one obtained using data from the Moderate Resolution Imaging Spectroradiometer (MODIS), also on board Terra, for the same variable. Finally, we estimated the canopy cover and the effective height of the vegetation for 700,000 hectares in the province of Murcia, with a spatial resolution of 1,100 m. The results show the relationship between the spectral and multi-angle data, with Spearman coefficients of about 0.8 for vegetation canopy cover and 0.4 for effective height. Estimates of both variables using neural networks and various combinations of data yield results with an R coefficient greater than 0.85 for canopy cover and 0.6 for effective height. The multi-angle parameters provided in the MISR products at 1,100 m pixel size do not produce good results by themselves but provide some improvement when added to the spectral information. The mean square errors obtained are less than 0.016 for canopy cover (expressed as a fraction) and 0.7 m for effective height. Geographically weighted regressions also show that even better results can be obtained locally, especially where there is greater spatial variability in the estimated variables.
In summary, the use of the data provided by MISR offers a promising way of improving remote sensing results at low-to-medium spatial resolution, both for image classification and for the estimation of quantitative variables of the vegetation structure.
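A rough sketch of the estimation step described above: a small neural-network regressor that predicts canopy cover fraction from combined multispectral and multi-angle reflectance features. The synthetic data, feature layout, and network size are invented for illustration and are not the thesis's data or configuration.

```python
# Sketch: predict canopy cover fraction (Fcc) from combined multispectral +
# multi-angle reflectance features with a small neural network. Synthetic,
# illustrative data only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 500
spectral = rng.uniform(0.0, 0.5, size=(n, 4))   # e.g. 4 spectral bands
angular = rng.uniform(0.0, 0.5, size=(n, 9))    # e.g. 9 MISR view angles
X = np.hstack([spectral, angular])

# Synthetic target: canopy cover fraction in [0, 1]
fcc = np.clip(0.5 + spectral[:, 3] - spectral[:, 0]
              + 0.3 * (angular[:, 0] - angular[:, 8])
              + rng.normal(0, 0.05, n), 0, 1)

X_tr, X_te, y_tr, y_te = train_test_split(X, fcc, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                                   random_state=0))
model.fit(X_tr, y_tr)
print("R^2 on held-out pixels:", round(model.score(X_te, y_te), 2))
```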
Abstract:
Knowledge resource reuse has become a popular approach within the ontology engineering field, mainly because it can speed up the ontology development process, saving time and money and promoting the application of good practices. The NeOn Methodology provides guidelines for reuse. These guidelines include the selection of the most appropriate knowledge resources for reuse in ontology development. This is a complex decision-making problem in which different conflicting objectives, such as reuse cost, understandability, integration workload and reliability, have to be taken into account simultaneously. GMAA is a PC-based decision support system based on an additive multi-attribute utility model that is intended to alleviate the operational difficulties involved in the Decision Analysis methodology. The paper illustrates how it can be applied to select multimedia ontologies for reuse in developing a new ontology in the multimedia domain. It also demonstrates that the sensitivity analyses provided by GMAA are useful tools for making a final recommendation.
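For reference, the additive multi-attribute utility model underlying this kind of evaluation can be written in generic notation (stated here for orientation, not copied from GMAA's documentation):

```latex
% Additive multi-attribute utility: each candidate ontology a is scored as a
% weighted sum of single-attribute utilities (generic notation, assumed here).
\[
  u(a) \;=\; \sum_{i=1}^{n} w_i \, u_i\bigl(x_i(a)\bigr),
  \qquad w_i \ge 0,\quad \sum_{i=1}^{n} w_i = 1,
\]
% where x_i(a) is the performance of alternative a on attribute i (e.g. reuse
% cost, understandability, integration workload, reliability), u_i is the
% component utility function scaled to [0,1], and w_i is the attribute weight.
% Sensitivity analysis then varies the weights w_i to test how robust the
% recommended alternative is.
```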