946 results for "Towards Seamless Integration of Geoscience Models and Data"
Abstract:
Sediment samples and hydrographic conditions were studied at 28 stations around Iceland. At these sites, Conductivity-Temperature-Depth (CTD) casts were conducted to collect hydrographic data, and multicorer casts were conducted to collect data on sediment characteristics including grain size distribution, carbon and nitrogen concentration, and chloroplastic pigment concentration. A total of 14 environmental predictors were used to model sediment characteristics around Iceland across regional geographic space. For these, two approaches were used: Multivariate Adaptive Regression Splines (MARS) and randomForest regression models. RandomForest outperformed MARS in predicting grain size distribution. MARS models had a greater tendency to over- and underpredict sediment values in areas outside the environmental envelope defined by the training dataset. We provide the first GIS layers of sediment characteristics around Iceland, which can be used as predictors in future models. Although the models performed well, more samples, especially from the shelf areas, will be needed to improve the models in the future.
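The extrapolation behaviour contrasted in this abstract can be illustrated with a toy sketch (not the study's actual MARS or randomForest models): a linear fit stands in for a MARS spline basis, and a bounded nearest-neighbour average stands in for an averaging forest, showing why only the former keeps extrapolating outside the training envelope.

```python
# Illustrative only: linear fit vs. bounded averaging estimator.

def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b (stands in for a MARS basis)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return lambda x: a * x + b

def fit_tree_like(xs, ys, k=3):
    """k-nearest-neighbour mean: like a random forest, predictions are bounded
    by the training targets, so they cannot run away outside the envelope."""
    pairs = list(zip(xs, ys))
    def predict(x):
        nearest = sorted(pairs, key=lambda p: abs(p[0] - x))[:k]
        return sum(y for _, y in nearest) / len(nearest)
    return predict

# Training data inside an "envelope" of x in [0, 10]
xs = list(range(11))
ys = [2.0 * x + 1.0 for x in xs]

linear = fit_linear(xs, ys)
forest = fit_tree_like(xs, ys)

# Far outside the envelope (x = 100) the linear fit keeps extrapolating,
# while the averaging estimator stays within the observed range of y.
print(linear(100))  # 201.0: unbounded extrapolation
print(forest(100))  # 19.0: mean of the three largest training targets
```

The same qualitative behaviour is what makes spline-based models risky outside the sampled environmental space.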
Abstract:
The Benguela Current, located off the west coast of southern Africa, is tied to a highly productive upwelling system [1]. Over the past 12 million years, the current has cooled, and upwelling has intensified [2,3,4]. These changes have been variously linked to atmospheric and oceanic changes associated with the glaciation of Antarctica and global cooling [5], the closure of the Central American Seaway [1,6] or the further restriction of the Indonesian Seaway [3]. The upwelling intensification also occurred during a period of substantial uplift of the African continent [7,8]. Here we use a coupled ocean-atmosphere general circulation model to test the effect of African uplift on Benguela upwelling. In our simulations, uplift in the East African Rift system and in southern and southwestern Africa induces an intensification of coastal low-level winds, which leads to increased oceanic upwelling of cool subsurface waters. We compare the effect of African uplift with the simulated impact of the Central American Seaway closure [9], Indonesian Throughflow restriction [10] and Antarctic glaciation [11], and find that African uplift has at least as strong an influence as each of the three other factors. We therefore conclude that African uplift was an important factor in driving the cooling and strengthening of the Benguela Current and coastal upwelling during the late Miocene and Pliocene epochs.
Abstract:
The metabolic rate of organisms may either be viewed as a basic property from which other vital rates and many ecological patterns emerge, and which follows a universal allometric mass scaling law; or it may be considered a property of the organism that emerges as a result of the organism's adaptation to the environment, with consequently less universal mass scaling properties. Data on body mass, maximum ingestion and clearance rates, respiration rates and maximum growth rates of animals living in the ocean epipelagic were compiled from the literature, mainly from original papers but also from previous compilations by other authors. Data were read from tables or digitized from graphs. Only measurements made on individuals of known size, or groups of individuals of similar and known size, were included. We show that clearance and respiration rates have life-form-dependent allometries with similar scaling but different elevations, such that the mass-specific rates converge on a rather narrow size-independent range. In contrast, ingestion and growth rates follow a near-universal taxa-independent ~3/4 mass scaling power law. We argue that the decline in mass-specific clearance rates with size within taxa is related to the inherent decrease in feeding efficiency of any particular feeding mode. The transitions between feeding modes and the simultaneous transitions in clearance and respiration rates may then represent adaptations to the food environment and be the result of the optimization of tradeoffs that allow sufficient feeding and growth rates to balance mortality.
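The ~3/4 mass-scaling law mentioned above has a simple numerical consequence worth making explicit: if a vital rate R scales as R = a·M^b with b ≈ 0.75, then the mass-specific rate R/M scales as M^(b−1) = M^(−0.25), declining slowly with body size. A minimal sketch (coefficients invented for illustration):

```python
# Allometric rate R = a * M**b and its mass-specific counterpart R/M.

def rate(mass, a=1.0, b=0.75):
    return a * mass ** b

def mass_specific_rate(mass, a=1.0, b=0.75):
    return rate(mass, a, b) / mass

# A 10,000-fold increase in body mass changes the mass-specific rate by
# 10000 ** -0.25 = 0.1, i.e. only a ten-fold decline.
small = mass_specific_rate(1.0)
large = mass_specific_rate(10_000.0)
print(large / small)  # 0.1
```

This slow decline is why mass-specific rates across four orders of magnitude of body mass can still converge on a fairly narrow range.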
Abstract:
Paired radiocarbon measurements on haptophyte biomarkers (alkenones) and on co-occurring tests of planktic foraminifera (Neogloboquadrina dutertrei and Globigerinoides sacculifer) from late glacial to Holocene sediments at core locations ME0005-24JC, Y69-71P, and MC16 from the south-western and central Panama Basin indicate no significant addition of pre-aged alkenones by lateral advection. The strong temporal correspondence between alkenones, foraminifera and total organic carbon (TOC) also implies negligible contributions of aged terrigenous material. Considering controversial evidence for sediment redistribution in previous studies of these sites, our data imply that the laterally supplied material cannot stem from remobilization of substantially aged sediments. Transport, if any, requires syn-depositional nepheloid layer transport and redistribution of low-density or fine-grained components within decades of particle formation. Such rapid and local transport minimizes the potential for temporal decoupling of proxies residing in different grain-size fractions and thus facilitates comparison of various proxies for paleoceanographic reconstructions in this study area. Anomalously old foraminiferal tests from a glacial depth interval of core Y69-71P may result from episodic spillover of fast bottom currents across the Carnegie Ridge transporting foraminiferal sands towards the north.
Abstract:
Ever since the handover of the territory in 1997, Hong Kong has had its own unique law, its own economic system and its own international legal personality, and has not been integrated with Mainland China. The Basic Law guarantees the uniqueness of the Hong Kong SAR until 2047. But close economic ties between Hong Kong and the Mainland will promote closer economic integration. The Basic Law precludes only a customs union and the introduction of a single currency, not the formation of a Free Trade Agreement (hereafter FTA) or a monetary union. An FTA has already been realized in the form of the Closer Economic Partnership Arrangement (hereafter CEPA). The Hong Kong SAR government, including the bureaucracy as well as Chief Executive Tung Chee Hwa, was opposed to, and hesitant about, the formation of a regional trade agreement with the Mainland, but the business community pushed it to adopt a positive attitude towards the CEPA. It is unclear how far integration can be deepened, but it can be argued that the current policy of the Hong Kong SAR is too supportive of business, and an excessive degree of economic integration may threaten the uniqueness of Hong Kong. But if Hong Kong achieves democracy and enjoys complete autonomy, it will be easy for economic integration to co-exist with the 'One Country, Two Systems' approach, in the interests of both the business community and the citizens of the SAR.
Abstract:
Measures have been developed to understand tendencies in the distribution of economic activity. The merit of these measures lies in the convenience of data collection and processing. In this interim report, investigating the properties of such measures for determining the geographical spread of economic activities, we summarize their merits and limitations, and make clear that we must apply caution in their usage. As a first trial in accessing areal data, this project focuses on administrative areas, not on point data or input-output data. Firm-level data are not within the scope of this article. The rest of this article is organized as follows. In Section 2, we touch on the limitations and problems associated with the measures and areal data. Specific measures are introduced in Section 3 and applied in Section 4. The conclusion summarizes the findings and discusses future work.
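One widely used measure in this literature, given here as a hypothetical example (the abstract does not name its specific measures), is the Herfindahl index over regional shares of an activity: it equals 1/n for a perfectly even spread across n regions and 1.0 when all activity sits in a single region.

```python
# Herfindahl concentration index over regional shares.

def herfindahl(values):
    """values: activity levels per administrative area (e.g. employment)."""
    total = sum(values)
    shares = [v / total for v in values]
    return sum(s * s for s in shares)

even = herfindahl([25, 25, 25, 25])       # 0.25 = 1/(4 regions)
concentrated = herfindahl([97, 1, 1, 1])  # close to 1
print(even, concentrated)
```

The convenience noted in the abstract is visible here: the index needs only one aggregate value per administrative area, which is exactly the kind of areal data the report works with.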
Abstract:
OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web
1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS
Computational Linguistics is already a consolidated research area. It builds upon the results of two other major areas, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Possibly its best-known applications are the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs.
These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that perhaps are not so well-known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.
Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate.
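A deliberately tiny illustration of what such a POS-tagging module does (lexicon and tag names are invented here, not taken from the text): assign a part-of-speech category to each token before any deeper processing.

```python
# Toy lexicon-lookup tagger; real taggers use context and statistics,
# but the input/output interface is the same.

LEXICON = {"the": "DET", "cat": "NOUN", "sat": "VERB", "mat": "NOUN", "on": "ADP"}

def pos_tag(tokens):
    # Unknown words fall back to a placeholder tag.
    return [(t, LEXICON.get(t.lower(), "UNK")) for t in tokens]

print(pos_tag(["The", "cat", "sat", "on", "the", "mat"]))
```

Any downstream module that needs to 'understand' the text, in the sense described above, would consume exactly this kind of token-tag output.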
However, linguistic annotation tools still have some limitations, which can be summarised as follows:
1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.
A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitations stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved.
In addition, most high-level annotation tools rely on other lower-level annotation tools and their outputs to generate their own ones. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies would be transferred to (and even magnified in) the annotations of the high-level annotation tool.
Therefore, it would be quite useful to find a way to
(i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools;
(ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. There again, ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools.
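Once several taggers emit annotations over a shared vocabulary, point (i) above can be attacked by combining their outputs, e.g. by majority vote, to cancel part of each tool's individual error rate. A hedged sketch (tagger outputs and tags invented for illustration):

```python
from collections import Counter

def combine_annotations(per_tool_tags):
    """per_tool_tags: list of tag sequences, one per tool, all the same
    length. Returns the majority tag at each token position."""
    combined = []
    for position_tags in zip(*per_tool_tags):
        tag, _count = Counter(position_tags).most_common(1)[0]
        combined.append(tag)
    return combined

tool_a = ["DET", "NOUN", "VERB"]
tool_b = ["DET", "VERB", "VERB"]   # one error at position 1
tool_c = ["DET", "NOUN", "VERB"]

print(combine_annotations([tool_a, tool_b, tool_c]))  # ['DET', 'NOUN', 'VERB']
```

Note the precondition this sketch quietly assumes, which is exactly limitation (3) above: all tools must annotate at a common level with a common tagset before their votes can be compared at all.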
Thus, to summarise, the main aim of the present work was to somehow combine these separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) in a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.
2. GOALS OF THE PRESENT WORK
As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based
Abstract:
Aiming to address requirements concerning integration of services in the context of 'big data', this paper presents an innovative approach that (i) ensures a flexible, adaptable and scalable information and computation infrastructure, and (ii) exploits the competences of stakeholders and information workers to meaningfully confront information management issues such as information characterization, classification and interpretation, thus incorporating the underlying collective intelligence. Our approach pays much attention to the issues of usability and ease-of-use, not requiring any particular programming expertise from the end users. We report on a series of technical issues concerning the desired flexibility of the proposed integration framework and we provide related recommendations to developers of such solutions. Evaluation results are also discussed.
Abstract:
The integration of correlation processes into design systems aims at taking 3D measurements directly, according to the user's criteria, in order to generate the database required for the development of the project. In the photogrammetric phase, internal and external orientation parameters are calculated and stereo models are created from standard images. These are integrated into the system, where the selected items are measured by applying the correlation algorithms developed. The processing stage provides tools to carry out the calculations easily and automatically, as well as image-measurement techniques to acquire the most accurate information. The proposed software is developed on Visual Studio platforms for PC, applying the most suitable codes and symbols according to the terms of reference required for the design. Generating the database interactively, together with the geometric study of the structures, facilitates and improves the quality of the project work.
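The kind of correlation computation behind such image matching can be sketched as follows (an assumed, generic formulation, not the authors' algorithms): normalized cross-correlation of two equally sized intensity patches, which equals 1.0 for identical patterns regardless of a constant brightness offset.

```python
import math

def ncc(patch_a, patch_b):
    """Normalized cross-correlation of two flattened intensity patches."""
    n = len(patch_a)
    ma = sum(patch_a) / n
    mb = sum(patch_b) / n
    num = sum((a - ma) * (b - mb) for a, b in zip(patch_a, patch_b))
    da = math.sqrt(sum((a - ma) ** 2 for a in patch_a))
    db = math.sqrt(sum((b - mb) ** 2 for b in patch_b))
    return num / (da * db)

left  = [10, 20, 30, 40]
right = [110, 120, 130, 140]  # same pattern, shifted brightness
print(ncc(left, right))  # 1.0
```

In stereo matching, a template from one image is slid over the other and the position maximizing this score is taken as the corresponding point.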
Abstract:
A hybrid Eulerian-Lagrangian approach is employed to simulate heavy-particle dispersion in turbulent pipe flow. The mean flow is provided by Eulerian simulations developed by means of JetCode, whereas the fluid fluctuations seen by particles are prescribed by a stochastic differential equation based on a normalized Langevin equation. The statistics of particle velocity are compared to LES data containing detailed velocity statistics for particles with a diameter of 20.4 µm. The model is in good agreement with the LES data for the axial mean velocity, whereas the rms of the axial and radial velocities needs adjustment.
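A minimal sketch of the general form such a normalized Langevin model can take (our assumption for illustration, not JetCode's implementation): the fluctuating velocity u seen by a particle follows du = −(u/T_L) dt + sqrt(2/T_L) dW, an Ornstein-Uhlenbeck process with Lagrangian time scale T_L and Wiener increment dW, discretized with an Euler-Maruyama step.

```python
import math
import random

def langevin_step(u, dt, t_l, rng):
    """One Euler-Maruyama step of the normalized Langevin equation."""
    drift = -u / t_l * dt
    diffusion = math.sqrt(2.0 / t_l) * rng.gauss(0.0, math.sqrt(dt))
    return u + drift + diffusion

rng = random.Random(42)
u, dt, t_l = 0.0, 0.01, 1.0
samples = []
for _ in range(50_000):
    u = langevin_step(u, dt, t_l, rng)
    samples.append(u)

# For the normalized process the stationary variance is 1; the sample
# variance of a long run should land near that value.
var = sum(s * s for s in samples) / len(samples)
print(var)
```

Tuning the time scale and diffusion coefficient of such a model is precisely the kind of adjustment the abstract says the rms velocities require.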
Abstract:
This work focuses on the analysis of a structural element of MetOP-A satellite. Given the special interest in the influence of equipment installed on structural elements, the paper studies one of the lateral faces on which the Advanced SCATterometer (ASCAT) is installed. The work is oriented towards the modal characterization of the specimen, describing the experimental set-up and the application of results to the development of a Finite Element Method (FEM) model to study the vibro-acoustic response. For the high frequency range, characterized by a high modal density, a Statistical Energy Analysis (SEA) model is considered, and the FEM model is used when modal density is low. The methodology for developing the SEA model and a compound FEM and Boundary Element Method (BEM) model to provide continuity in the medium frequency range is presented, as well as the necessary updating, characterization and coupling between models required to achieve numerical models that match experimental results.
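The SEA model used in the high-frequency range rests on a power balance between coupled subsystems. A hedged two-subsystem sketch (loss factors and powers invented for illustration, not the MetOP-A values):

```python
def sea_two_subsystems(p1, p2, omega, eta1, eta2, eta12, eta21):
    """Solve the 2x2 SEA power balance
         P1 = omega * ((eta1 + eta12) * E1 - eta21 * E2)
         P2 = omega * (-eta12 * E1 + (eta2 + eta21) * E2)
    for subsystem energies E1, E2 via Cramer's rule, where eta1, eta2 are
    damping loss factors and eta12, eta21 are coupling loss factors."""
    a11 = omega * (eta1 + eta12)
    a12 = -omega * eta21
    a21 = -omega * eta12
    a22 = omega * (eta2 + eta21)
    det = a11 * a22 - a12 * a21
    e1 = (p1 * a22 - a12 * p2) / det
    e2 = (a11 * p2 - p1 * a21) / det
    return e1, e2

# Drive only subsystem 1; energy flows into the undriven subsystem 2.
e1, e2 = sea_two_subsystems(p1=1.0, p2=0.0, omega=1000.0,
                            eta1=0.01, eta2=0.01, eta12=0.005, eta21=0.005)
print(e1, e2)
```

This statistical balance is meaningful only at high modal density, which is exactly why the work switches to FEM (and FEM/BEM) models where modal density is low.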
Abstract:
Replication Data Management (RDM) aims at enabling the use of data collections from several iterations of an experiment. However, there are several major challenges to RDM from integrating data models and data from empirical study infrastructures that were not designed to cooperate, e.g., data model variation of local data sources. [Objective] In this paper we analyze RDM needs and evaluate conceptual RDM approaches to support replication researchers. [Method] We adapted the ATAM evaluation process to (a) analyze RDM use cases and needs of empirical replication study research groups and (b) compare three conceptual approaches to address these RDM needs: central data repositories with a fixed data model, heterogeneous local repositories, and an empirical ecosystem. [Results] While the central and local approaches have major issues that are hard to resolve in practice, the empirical ecosystem allows bridging current gaps in RDM from heterogeneous data sources. [Conclusions] The empirical ecosystem approach should be explored in diverse empirical environments.
Abstract:
Applications based on Wireless Sensor Networks for Internet of Things scenarios are on the rise. The multiple possibilities they offer have spread towards previously hard-to-imagine fields, like e-health or human physiological monitoring. An application has been developed for scenarios where data collection is applied to smart spaces, aimed at fire fighting and sports. This application has been tested in a gymnasium with real, non-simulated nodes and devices. A Graphical User Interface has been implemented to suggest a series of exercises to improve a sportsman's or sportswoman's condition, depending on the context and their profile. This system can be adapted to a wide variety of e-health applications with minimal changes, and the user will interact using different devices, like smart phones, smart watches and/or tablets.
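The context- and profile-dependent suggestion logic described above can be sketched as a simple rule-based filter (profile fields, contexts and exercise names below are invented, not taken from the paper):

```python
# Toy exercise catalogue; a real system would draw on sensor context.
EXERCISES = [
    {"name": "treadmill_walk", "level": "beginner", "context": "gym"},
    {"name": "interval_run",   "level": "advanced", "context": "gym"},
    {"name": "stair_climb",    "level": "advanced", "context": "fire_drill"},
]

def suggest(profile, context):
    """Return exercise names matching the user's level and current context."""
    return [e["name"] for e in EXERCISES
            if e["level"] == profile["level"] and e["context"] == context]

print(suggest({"level": "advanced"}, "gym"))  # ['interval_run']
```

Because the rules are data-driven, swapping the catalogue is all that is needed to retarget the same logic at other e-health scenarios, which matches the adaptability claim in the abstract.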