963 results for integrating data
Abstract:
Environmental Education (EE) is a key component of any marine protected area's management. However, its visibility and action plans are still poorly developed and structured as a clear element of management procedures. The objective of this study is to contribute a methodological route that integrates EE into the existing model of management planning and strategies, taking the Colombian National Natural Parks System as a case study. The route was created through participatory research with different stakeholders, in order to respond to the specific conservation needs and goals of the National Parks System. The national EE diagnosis showed that integrating EE within the parks' management structure is a first-priority need, a result on which the two case studies of National Parks on the Pacific Coast of Colombia converge. The diagnosis also demonstrates that communication, participation, training, and evaluation have to be reinforced, linking the community and the stakeholders involved in park management to the whole EE process. The proposed methodological route has been agreed upon by the National Parks staff and incorporates advice and recommendations from different stakeholders, in order to better include park users. This step advances sustainable management in marine and coastal protected areas elsewhere, taking into account not only the biological but also the socio-cultural perspective. The main challenges in the management and conservation of coastal and marine ecosystems today are discussed.
Abstract:
Sea surface temperature (SST) variability within the Atlantic cold tongue (ACT) region is of climatic relevance for the surrounding continents. A multi-cruise data set of microstructure observations is used to infer regional as well as seasonal variability of upper-ocean mixing and diapycnal heat flux within the ACT region. The variability in mixing intensity is related to the variability in large-scale background conditions, which were additionally observed during the cruises. The observations indicate fundamental differences in background conditions, in terms of shear and stratification below the mixed layer (ML), between the western and eastern equatorial ACT regions, causing critical Froude numbers (Fr) to be observed more frequently in the western equatorial ACT. The distribution of critical Fr occurrence below the ML reflects the regional and seasonal variability of mixing intensity. Turbulent dissipation rates (ε) at the equator (2°N-2°S) are strongly increased in the upper thermocline compared to off-equatorial locations. In addition, ε is elevated in the western equatorial ACT compared to the east from May to November, whereas boreal summer appears as the season of highest mixing intensities throughout the equatorial ACT region, coinciding with ACT development. Diapycnal heat fluxes at the base of the ML in the western equatorial ACT region, inferred from ε and stratification, range from a maximum of 90 W m-2 in boreal summer to 55 W m-2 in September and 40 W m-2 in November. In the eastern equatorial ACT region, maximum values of about 25 W m-2 were estimated during boreal summer, reducing to about 5 W m-2 towards the end of the year. Outside the equatorial region, inferred diapycnal heat fluxes are comparably low, rarely exceeding 10 W m-2. Integrating the obtained heat flux estimates into the ML heat budget at 10°W on the equator identifies the diapycnal heat flux as the largest ML cooling term during boreal summer and early autumn. In the western equatorial ACT, elevated meridional velocity shear in the upper thermocline contributes to the enhanced diapycnal heat flux within this region during boreal summer and autumn. This elevated meridional velocity shear appears to be associated with intra-seasonal wave activity.
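Diapycnal heat fluxes of this kind are commonly inferred from the dissipation rate ε and the stratification N² via the Osborn (1980) relation; the abstract does not state the exact procedure, so the following is a minimal sketch under that assumption, with purely illustrative values:

```python
import numpy as np

GAMMA = 0.2    # mixing efficiency (Osborn 1980); a commonly used constant, assumed here
RHO = 1025.0   # seawater density [kg m^-3], assumed
CP = 3994.0    # specific heat of seawater [J kg^-1 K^-1], assumed

def diapycnal_heat_flux(eps, n2, dTdz):
    """Heat flux [W m^-2] from dissipation rate eps [W kg^-1], buoyancy
    frequency squared n2 [s^-2], and vertical temperature gradient dTdz
    [K m^-1], with z positive upward."""
    k_rho = GAMMA * eps / n2          # diapycnal diffusivity [m^2 s^-1]
    return -RHO * CP * k_rho * dTdz   # negative = downward, i.e. ML cooling

# Hypothetical upper-thermocline magnitudes, not values from the cruises
eps, n2, dTdz = 1e-7, 2e-4, 0.1
print(diapycnal_heat_flux(eps, n2, dTdz))  # about -41 W m^-2
```

With these magnitudes the sketch yields a downward flux on the order of 40 W m-2, the same order as the western equatorial ACT values reported above.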
Abstract:
This paper describes seagrass species and percentage-cover point-based field data sets derived from georeferenced photo transects. The data sets were collected annually or biannually over a ten-year period (2004-2015) using 30-50 transects, 500-800 m in length, distributed across a 142 km² shallow, clear-water seagrass habitat, the Eastern Banks, Moreton Bay, Australia. Each of the eight data sets includes seagrass property information derived from approximately 3000 georeferenced, downward-looking photographs captured at 2-4 m intervals along the transects. Photographs were manually interpreted to estimate seagrass species composition and percentage cover using Coral Point Count with Excel extensions (CPCe). Understanding seagrass biology, ecology, and dynamics for scientific and management purposes requires point-based data on species composition and cover. This data set, and the methods used to derive it, are a globally unique example for seagrass ecological applications. It provides the basis for multiple further studies at this site, for regional to global comparative studies, and for the design of similar monitoring programs elsewhere.
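The CPCe workflow reduces each photograph to a set of labelled points, from which percentage cover per species follows by simple counting; a minimal sketch (the species labels and the number of points per frame are illustrative assumptions, not values from the data set):

```python
from collections import Counter

def percent_cover(annotations):
    """Percentage cover per class from point annotations.

    annotations: one label per point overlaid on a photo,
    e.g. ["Zostera muelleri", "sand", ...] (hypothetical labels).
    """
    counts = Counter(annotations)
    total = sum(counts.values())
    return {label: 100.0 * n / total for label, n in counts.items()}

# One photo with 24 annotated points (point count per frame is an assumption)
photo = ["Zostera muelleri"] * 6 + ["Halophila ovalis"] * 3 + ["sand"] * 15
print(percent_cover(photo))  # {'Zostera muelleri': 25.0, ...}
```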
Abstract:
Sedimentary sequences in ancient or long-lived lakes can reach several thousands of meters in thickness and often provide an unrivalled perspective of the lake's regional climatic, environmental, and biological history. Over the last few years, deep-drilling projects in ancient lakes have become increasingly multi- and interdisciplinary, as seismological, sedimentological, biogeochemical, climatic, environmental, paleontological, and evolutionary information, among others, can be obtained from sediment cores. However, these multi- and interdisciplinary projects pose several challenges. The scientists involved typically approach problems from different scientific perspectives and backgrounds, and setting up the program requires clear communication and the alignment of interests. One of the most challenging tasks, besides the actual drilling operation, is to link diverse datasets with varying resolution, data quality, and age uncertainties to answer interdisciplinary questions synthetically and coherently. These problems are especially relevant when secondary data, i.e., datasets obtained independently of the drilling operation, are incorporated in analyses. Nonetheless, the inclusion of secondary information, such as isotopic data from fossils found in outcrops or genetic data from extant species, may help to achieve synthetic answers. Recent technological and methodological advances in paleolimnology are likely to increase the possibilities of integrating secondary information. Some of the new approaches have started to revolutionize scientific drilling in ancient lakes, but at the same time, they also add a new layer of complexity to the generation and analysis of sediment-core data. The enhanced opportunities presented by new scientific approaches to study the paleolimnological history of these lakes therefore come at the expense of higher logistic, communication, and analytical efforts. Here we review the types of data that can be obtained in ancient lake drilling projects and the analytical approaches that can be applied to empirically and statistically link diverse datasets to create an integrative perspective on geological and biological data. In doing so, we highlight strengths and potential weaknesses of new methods and analyses, and provide recommendations for future interdisciplinary deep-drilling projects.
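A recurring technical step when linking such diverse datasets is resampling records with different native resolutions onto a common age axis before any joint analysis; a minimal sketch with hypothetical proxy records (a real workflow would also propagate age-model uncertainty, e.g. by Monte Carlo resampling of the age models, which this sketch omits):

```python
import numpy as np

# Two hypothetical proxy records, each with its own age model and resolution
age_a = np.array([0.0, 1.2, 2.9, 4.1, 6.0])   # age [kyr BP]
proxy_a = np.array([1.0, 1.4, 0.9, 1.1, 1.6])

age_b = np.array([0.5, 2.0, 3.5, 5.0])        # age [kyr BP]
proxy_b = np.array([12.0, 9.5, 10.1, 11.3])

# Resample both onto a common, regular age axis covering their overlap
common_age = np.arange(0.5, 5.01, 0.5)
a_on_common = np.interp(common_age, age_a, proxy_a)
b_on_common = np.interp(common_age, age_b, proxy_b)

# A simple joint statistic once the records share an axis
print(np.corrcoef(a_on_common, b_on_common)[0, 1])
```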
Abstract:
Spatial Data Infrastructures (SDIs) have become a methodological and technological benchmark enabling distributed access to historical-cartographic archives. However, it is essential to offer enhanced virtual tools that imitate the processes and methodologies currently carried out by librarians, historians, and academics in the existing map libraries around the world. These virtual processes must be supported by a generic framework for managing, querying, and accessing distributed georeferenced resources and other content types such as scientific data or information. The authors have designed and developed support tools that provide enriched browsing, measurement and geometrical analysis capabilities, and dynamic querying methods, based on SDI foundations. The DIGMAP engine and the IBERCARTO collection enable access to georeferenced historical-cartographic archives. Based on lessons learned from the CartoVIRTUAL and DynCoopNet projects, a generic service architecture scheme is proposed. In this way, it is possible to integrate virtual map rooms and SDI technologies, bringing support to researchers in the historical and social domains.
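Programmatic access to distributed georeferenced catalogues of the kind such a framework federates is typically done through OGC Catalogue Services (CSW); a minimal sketch using OWSLib against a hypothetical endpoint (the DIGMAP-specific interfaces are not described in the abstract):

```python
from owslib.csw import CatalogueServiceWeb
from owslib.fes import PropertyIsLike

# Hypothetical CSW endpoint; DIGMAP's own services may differ.
csw = CatalogueServiceWeb("http://catalogue.example.org/csw")

# Full-text search for historical map records
query = PropertyIsLike("csw:AnyText", "%historical map%")
csw.getrecords2(constraints=[query], maxrecords=5, esn="full")

for rec_id, rec in csw.records.items():
    print(rec_id, "|", rec.title)
```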
Abstract:
We have designed and implemented a framework that unifies unit testing and run-time verification (as well as static verification and static debugging). A key contribution of our approach is that a unified assertion language is used for all of these tasks. We first propose methods for compiling run-time checks for (parts of) assertions which cannot be verified at compile time, via program transformation. This transformation allows checking preconditions and postconditions, including conditional postconditions, properties at arbitrary program points, and certain computational properties. The implemented transformation includes several optimizations to reduce run-time overhead. We also propose a minimal addition to the assertion language which allows defining unit tests to be run in order to detect possible violations of the (partial) specifications expressed by the assertions. This language can express, for example, the input data for performing the unit tests or the number of times the unit tests should be repeated. We have implemented the framework within the Ciao/CiaoPP system and effectively applied it to the verification of ISO-Prolog compliance and to the detection of different types of bugs in the Ciao system source code. Several experimental results are presented that illustrate different trade-offs among program size, running time, and the verbosity of the messages shown to the user.
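The framework itself targets Ciao Prolog and its assertion language; as a language-neutral illustration of the core idea (turning assertions that cannot be discharged at compile time into run-time checks via program transformation), here is a minimal Python sketch. The decorator and predicates are hypothetical, not Ciao syntax:

```python
import functools

def assertion(pre=None, post=None):
    """Wrap a function with run-time pre/postcondition checks -- a toy
    analogue of the program transformation described above."""
    def transform(fn):
        @functools.wraps(fn)
        def checked(*args, **kwargs):
            if pre is not None:
                assert pre(*args, **kwargs), f"precondition of {fn.__name__} violated"
            result = fn(*args, **kwargs)
            if post is not None:
                # conditional postcondition: may inspect both inputs and output
                assert post(result, *args, **kwargs), f"postcondition of {fn.__name__} violated"
            return result
        return checked
    return transform

@assertion(pre=lambda n: n >= 0, post=lambda r, n: r >= 1)
def factorial(n):
    return 1 if n == 0 else n * factorial(n - 1)

print(factorial(5))   # 120; factorial(-1) would raise AssertionError
```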
Abstract:
Nowadays, a significant quantity of linguistic data is available on the Web. However, linguistic resources are often published in proprietary formats and, as such, can be difficult to interface with one another, ending up confined in "data silos". The creation of web standards for publishing data on the Web, and of projects to create Linked Data, has led to interest in resources that can be published using Web principles. One of the most important aspects of "Lexical Linked Data" is the sharing of lexica and machine-readable dictionaries. It is for this reason that the lemon format has been proposed, which we briefly describe. We then consider two resources that seem ideal candidates for the Linked Data cloud, namely WordNet 3.0 and Wiktionary, a large document-based dictionary. We discuss the challenges of converting both resources to lemon, and in particular, for Wiktionary, the challenge of processing the mark-up and handling inconsistencies and underspecification in the source material. Finally, we turn to the task of creating links between the two resources and present a novel algorithm for linking lexica as Lexical Linked Data.
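To make the conversion concrete, a lemon lexical entry is essentially a small RDF graph linking a canonical form and senses; a minimal sketch with rdflib (the entry identifiers, the base URI, and the WordNet sense URI are illustrative assumptions):

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

LEMON = Namespace("http://lemon-model.net/lemon#")  # lemon core vocabulary
EX = Namespace("http://example.org/lexicon/")       # hypothetical base URI

g = Graph()
entry = EX["cat-n"]
g.add((entry, RDF.type, LEMON.LexicalEntry))
g.add((entry, LEMON.canonicalForm, EX["cat-n-form"]))
g.add((EX["cat-n-form"], LEMON.writtenRep, Literal("cat", lang="en")))
# Sense linked to an external resource, e.g. a WordNet synset URI (assumed)
g.add((entry, LEMON.sense, EX["cat-n-sense1"]))
g.add((EX["cat-n-sense1"], LEMON.reference,
       URIRef("http://wordnet-rdf.princeton.edu/wn30/cat-n")))

print(g.serialize(format="turtle"))
```

Linking the two resources then amounts to asserting correspondences between sense URIs on the Wiktionary side and synset URIs on the WordNet side.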
Abstract:
Introduction: Diffusion-weighted imaging (DWI) techniques are able to measure, in vivo and non-invasively, the diffusivity of water molecules inside the human brain. DWI has been applied to cerebral ischemia, brain maturation, epilepsy, multiple sclerosis, etc. [1], and these images are now widely available. DWI allows the identification of brain tissues, so their accurate segmentation is a common initial step for the referred applications. Materials and Methods: We present a validation study on automated segmentation of DWI based on Gaussian mixture and hidden Markov random field models. This model is commonly solved with the iterated conditional modes algorithm, but some studies suggest [2] that graph-cuts (GC) algorithms improve the results when the initialization is not close to the final solution. We implemented a segmentation tool integrating ITK with a GC algorithm [3], and validation software using fuzzy overlap measures [4]. Results: The segmentation accuracy of each tool is tested against a gold-standard segmentation obtained from a T1 MPRAGE magnetic resonance image of the same subject, registered to the DWI space. The proposed software shows meaningful improvements from the GC energy minimization approach on DTI and DSI (Diffusion Spectrum Imaging) data. Conclusions: Brain tissue segmentation on DWI is a fundamental step in many applications. The proposed software achieves improvements in accuracy and robustness, with high impact on the applications' final results.
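As one plausible reading of the fuzzy overlap measures cited as [4], a soft Dice coefficient can compare a probabilistic segmentation against the registered gold standard; a minimal sketch (the exact measure used by the authors may differ):

```python
import numpy as np

def fuzzy_dice(p, q):
    """Fuzzy Dice overlap between two soft segmentations p, q in [0, 1].

    Uses min as the fuzzy intersection; reduces to the ordinary Dice
    coefficient when p and q are binary masks."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return 2.0 * np.minimum(p, q).sum() / (p.sum() + q.sum())

# Toy example: tissue probabilities per voxel vs. a binary gold standard
seg = np.array([0.9, 0.8, 0.1, 0.0])
gold = np.array([1.0, 1.0, 0.0, 0.0])
print(fuzzy_dice(seg, gold))  # ~0.895
```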
Abstract:
Much progress has been made since the Digital Earth notion was envisioned thirteen years ago. However, the mechanisms for integrating geographic information into the Digital Earth are still quite limited. In this context, we have developed a process to generate, integrate, and publish geospatial Linked Data from several Spanish national data sets. These data sets are related to four Infrastructure for Spatial Information in the European Community (INSPIRE) themes, specifically Administrative units, Hydrography, Statistical units, and Meteorology. Our main goal is to combine different sources (heterogeneous, multidisciplinary, multitemporal, multiresolution, and multilingual) using Linked Data principles. This allows current problems of information integration to be overcome, driving geographical information toward the next decade's scenario, that is, the "Linked Digital Earth".
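Once published, such geospatial Linked Data is typically consumed through a SPARQL endpoint; a minimal sketch with SPARQLWrapper (the endpoint URL and the ontology URIs are hypothetical, since the abstract does not name them):

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Hypothetical endpoint and class URI; the actual Spanish data-set
# endpoints are not given in the abstract.
endpoint = SPARQLWrapper("http://geo.example.es/sparql")
endpoint.setQuery("""
    PREFIX geo: <http://www.w3.org/2003/01/geo/wgs84_pos#>
    SELECT ?unit ?lat ?long WHERE {
        ?unit a <http://example.es/ontology/AdministrativeUnit> ;
              geo:lat ?lat ;
              geo:long ?long .
    } LIMIT 10
""")
endpoint.setReturnFormat(JSON)

for row in endpoint.query().convert()["results"]["bindings"]:
    print(row["unit"]["value"], row["lat"]["value"], row["long"]["value"])
```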
Abstract:
Sentiment analysis has recently gained popularity in the financial domain thanks to its capability to predict the stock market based on the wisdom of the crowds. Nevertheless, current sentiment indicators are still silos that cannot be combined to get better insight into the mood of different communities. In this article we propose a Linked Data approach for modelling sentiment and emotions about financial entities. We aim at integrating sentiment information from different communities or providers, complementing existing initiatives such as FIBO. The approach has been validated in the semantic annotation of tweets about several stocks in the Spanish stock market, including their sentiment information.
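A Linked Data model for sentiment about a financial entity can be expressed compactly in JSON-LD, for instance with the Marl vocabulary; a minimal sketch (the property names and URIs are assumptions, not necessarily the article's model):

```python
import json

# JSON-LD style annotation of one tweet's sentiment about a stock.
# Vocabulary URI and stock/tweet URIs are hypothetical placeholders.
annotation = {
    "@context": {"marl": "http://www.gsi.upm.es/ontologies/marl/ns#"},
    "@id": "http://example.org/annotations/42",
    "marl:describesObject": "http://example.org/stocks/SAN",
    "marl:hasPolarity": "marl:Positive",
    "marl:polarityValue": 0.8,
    "marl:extractedFrom": "http://twitter.com/example/status/42",
}
print(json.dumps(annotation, indent=2))
```

Keeping each provider's scores in a shared vocabulary like this is what lets indicators from different communities be aggregated rather than remaining silos.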
Abstract:
Currently there is an overwhelming number of scientific publications in the Life Sciences, especially in Genetics and Biotechnology. This huge amount of information is structured in corporate Data Warehouses (DW) or in Biological Databases (e.g. UniProt, RCSB Protein Data Bank, CEREALAB, or GenBank), whose main drawback is the cost of keeping them up to date, which quickly renders them obsolete. These databases are nevertheless the main tool for enterprises wanting to update their internal information: for example, a plant-breeding enterprise needs to enrich its genetic information (an internal structured database) with recently discovered genes related to specific phenotypic traits (external unstructured data) in order to choose the desired parentals for breeding programs. In this paper, we propose to complement internal information with external data from the Web using Question Answering (QA) techniques. We go a step further by providing a complete framework for integrating unstructured and structured information, combining traditional Database and DW architectures with QA systems. The great advantage of our framework is that decision makers can instantly compare internal data with external data from competitors, allowing them to take quick strategic decisions based on richer data.
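The essence of such a framework is answering one information need from two sides: a query over the internal structured database and a question posed to a QA system over the Web; a minimal sketch (the schema, gene names, and QA callable are all hypothetical):

```python
import sqlite3

def enrich_with_external(trait, qa_ask):
    """Combine internal (structured) gene data with external QA answers.

    qa_ask: a callable wrapping a Question Answering system over the Web;
    its implementation is outside this sketch and entirely hypothetical.
    """
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE genes (name TEXT, trait TEXT)")
    db.execute("INSERT INTO genes VALUES ('Ppd-H1', 'flowering time')")
    internal = [r[0] for r in
                db.execute("SELECT name FROM genes WHERE trait = ?", (trait,))]
    external = qa_ask(f"Which genes are associated with {trait}?")
    # Decision makers see both views side by side
    return {"internal": internal, "external": external}

fake_qa = lambda q: ["VRN1"]   # stand-in for a real QA system
print(enrich_with_external("flowering time", fake_qa))
```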
Abstract:
From the Introduction. The main focus of this study is to examine whether the euro has been an economic, monetary, fiscal, and social stabilizer for the Eurozone. To this end, the underpinnings of the euro are analysed, and the requirements and benchmarks that have to be achieved, maintained, and respected are tested against the data found in three major statistical data sources: the European Central Bank's Statistics Data Warehouse (http://sdw.ecb.europa.eu/), Economagic (www.economagic.com), and E-signal. The purpose of this work is to analyse whether the euro was a stabilizing factor in the European Union from its inception to the outbreak of the financial crisis in the summer of 2008. To answer this question, this study analyses a number of indexes to understand the impact of the euro in three markets: (1) the foreign exchange market, (2) the stock market and the crude oil and commodities markets, and (3) the money market.