Abstract:
Sinking organic particles were collected from the Porcupine Abyssal Plain in 2013 using a marine snow catcher (MSC), which is essentially a large (95 L) settling column. The MSC is deployed to a chosen depth, where the surrounding water is trapped inside; it is then brought to the surface and left to stand on deck for two hours, during which time the particles settle down (or rise up) the column depending on their settling rate. The particles are then collected and, based on the position within the snow catcher from which they are recovered, classified as fast- or slow-sinking. Some fluxes are negative because the corresponding particles were positively buoyant rather than sinking.
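To make the flux bookkeeping concrete, here is a minimal Python sketch of how a flux could be derived from an MSC sample. The column geometry, the masses and the helper names are assumptions for illustration only, not values from the study:

# Illustrative only: geometry and masses below are invented, not the study's.
SETTLE_TIME_H = 2.0      # on-deck settling time reported in the abstract
BASE_AREA_M2 = 0.06      # assumed cross-sectional area of the 95 L column

def flux_mg_m2_h(settled_mass_mg: float) -> float:
    """Flux implied by the particle mass reaching a given MSC section."""
    return settled_mass_mg / (BASE_AREA_M2 * SETTLE_TIME_H)

# Sections map to sinking classes: base -> fast-sinking, upper -> slow-sinking;
# material that rose gives an apparent negative (positively buoyant) flux.
for section, mass_mg in {"base": 2.4, "upper": 0.3, "risen": -0.1}.items():
    print(section, round(flux_mg_m2_h(mass_mg), 2), "mg m-2 h-1")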
Abstract:
Acoustic estimates of herring and blue whiting abundance were obtained during the surveys using the Simrad ER60 scientific echosounder. The allocation of NASC values to herring, blue whiting and other acoustic targets was based on the composition of the trawl catches and the appearance of the echo recordings. To estimate abundance, the allocated NASC values were averaged over ICES squares (0.5° latitude by 1° longitude). For each statistical square, the unit-area density of fish (rA), in number per square nautical mile (N·nmi⁻²), was calculated using standard equations (Foote et al., 1987; Toresen et al., 1998). To estimate total abundance, the unit-area abundance for each statistical square was multiplied by the number of square nautical miles in that square and then summed over all statistical squares within defined subareas and over the total area. Biomass was estimated by multiplying the abundance in numbers by the average weight of the fish in each statistical square and then summing over all squares within defined subareas and over the total area. The Norwegian BEAM software (Totland and Godø, 2001) was used to produce the estimates of total biomass.
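The chain of calculations (NASC, then area density, then abundance, then biomass) can be sketched in a few lines of Python. This is not the BEAM implementation; the TS-length relation TS = 20·log10(L) − b20 and the b20 value below are common conventions from the fisheries-acoustics literature, used here as assumptions:

import math

def area_density(nasc, mean_length_cm, b20=71.9):
    """Fish per square nautical mile (rA) from an allocated NASC value."""
    ts = 20.0 * math.log10(mean_length_cm) - b20      # target strength (dB)
    sigma = 4.0 * math.pi * 10.0 ** (ts / 10.0)       # scattering cross-section (m^2)
    return nasc / sigma

def totals(squares):
    """squares: iterable of (mean NASC, mean length cm, mean weight g, area nmi^2)."""
    abundance, biomass_t = 0.0, 0.0
    for nasc, length_cm, weight_g, area_nmi2 in squares:
        n = area_density(nasc, length_cm) * area_nmi2  # abundance in the square
        abundance += n
        biomass_t += n * weight_g / 1e6                # grams -> tonnes
    return abundance, biomass_t

# One 0.5° x 1° ICES square with invented survey values:
print(totals([(500.0, 30.0, 250.0, 900.0)]))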
Abstract:
Think piece by Pierre Sauvé for the E15 Initiative on Strengthening the Global Trade System. In his latest essay for the ICTSD-World Economic Forum E15 Initiative on Strengthening the Global Trade and Investment System for Sustainable Development, WTI Director of External Programmes and Academic Partnerships and faculty member Pierre Sauvé explores the case for fusing the law of goods with that of services in a world of global value chains. The paper does so by directing attention to the following questions: whether the current architectures of multilateral and preferential trade governance are compatible with a world of trade in tasks; whether the existing rules offer globally active firms a coherent structure for doing business in a predictable environment; whether it is feasible to redesign the structure and content of existing trade rules to align them with the reality of production fragmentation; and what steps can be envisaged to better align policy with marketplace realities if the prospects for restructuring appear unfavourable. The paper argues that fusing trade disciplines for goods and services is neither needed nor feasible, and may actually deflect attention from a number of worthwhile policy initiatives where more realistic (if never easily secured) prospects of generic rule-making may well exist.
Abstract:
This paper reviews the relationship between public sector investment and private sector investment through government expenditures financed by government bonds in the Japanese economy. The study hypothesizes that deficit financing by bond issues does not crowd out private sector investment and may even crowd it in. The government can therefore increase bond issues and sell them in the domestic and international financial markets. This financing method does not affect interest rates, because rates are insensitive to government expenditure and, owing to globalization and the integration of financial markets, depend more on interest rate levels in the international financial market than in the domestic one.
Abstract:
Forecasts based on current electric energy models predict that global energy consumption in 2050 will be double the present rate. Using distributed procedures for control and integration, the expected demand can be halved, which makes the implementation of Smart Grids necessary. Interaction between final consumers and utilities is a key factor of future Smart Grids, and it is aimed at achieving efficient and responsible energy consumption. Energy Residential Gateways (ERGs) are new in-building devices that will govern the communication between user and utility and will control electric loads. Utilities will offer new services that empower residential customers to lower their electricity bills, such as Smart Metering, Demand Response and Dynamic Pricing. This paper presents a practical development of an ERG for residential buildings.
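As a toy illustration of the kind of Demand Response logic an ERG could run under Dynamic Pricing, the following Python sketch shifts deferrable loads to the cheapest hour of an invented price signal. The load model, prices and thresholds are assumptions, not the paper's design:

from dataclasses import dataclass

@dataclass
class Load:
    name: str
    power_kw: float
    deferrable: bool   # can the ERG shift this load to a cheaper hour?

def schedule(loads, hourly_price):
    """Run each deferrable load in the cheapest hour; the rest start at hour 0."""
    cheapest = min(range(len(hourly_price)), key=hourly_price.__getitem__)
    return {load.name: (cheapest if load.deferrable else 0) for load in loads}

prices = [0.30, 0.28, 0.12, 0.10, 0.25]        # invented price signal (EUR/kWh)
loads = [Load("fridge", 0.15, False), Load("dishwasher", 1.2, True)]
print(schedule(loads, prices))                  # {'fridge': 0, 'dishwasher': 3}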
Abstract:
In the early 1990s, ontology development resembled an art: ontology developers had no clear guidelines on how to build ontologies, only some design criteria to follow. Work on principles, methods and methodologies, together with supporting technologies and languages, turned ontology development into an engineering discipline, the so-called Ontology Engineering. Ontology Engineering refers to the set of activities that concern the ontology development process and the ontology life cycle, the methods and methodologies for building ontologies, and the tool suites and languages that support them. Thanks to the work done in the Ontology Engineering field, the development of ontologies within and between teams has increased and improved, as has the possibility of reusing ontologies in other developments and in final applications. Currently, ontologies are widely used in (a) Knowledge Engineering, Artificial Intelligence and Computer Science; (b) applications related to knowledge management, natural language processing, e-commerce, intelligent information integration, information retrieval, database design and integration, bioinformatics and education; and (c) the Semantic Web, the Semantic Grid, and the Linked Data initiative. In this paper, we provide an overview of Ontology Engineering, covering the most prominent and widely used methodologies, languages and tools for building ontologies. In addition, we briefly discuss how all these elements can be used in the Linked Data initiative.
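As a small illustration of these elements in the Linked Data spirit, the following Python sketch uses the rdflib library to define a tiny ontology fragment and link it to an external vocabulary. The example namespace and classes are invented for illustration:

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/onto#")   # hypothetical namespace
g = Graph()
g.bind("ex", EX)

# Define a tiny ontology: a class hierarchy with a human-readable label...
g.add((EX.Researcher, RDF.type, OWL.Class))
g.add((EX.Researcher, RDFS.subClassOf, EX.Person))
g.add((EX.Researcher, RDFS.label, Literal("Researcher", lang="en")))

# ...and link it to an external vocabulary, as Linked Data practice encourages.
FOAF = Namespace("http://xmlns.com/foaf/0.1/")
g.add((EX.Person, OWL.equivalentClass, FOAF.Person))

print(g.serialize(format="turtle"))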
Abstract:
We propose a new methodology to evaluate the balance between segregation and integration in functional brain networks by using singular value decomposition techniques. By means of magnetoencephalography, we obtain the brain activity of a control group of 19 individuals during a memory task. Next, we project the node-to-node correlations into a complex network that is analyzed from the perspective of its modular structure, encoded in the contribution matrix. In this way, we are able to study the roles that nodes play inside and outside their communities and to identify connector and local hubs. At the mesoscale level, the analysis of the contribution matrix allows us to measure the degree of overlap between communities and to quantify how far the functional networks are from the configuration that best balances integrated and segregated activity.
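A minimal numerical sketch of the contribution-matrix idea follows: C[i, m] holds the connection weight node i sends into module m, and a participation-style score separates connector from local hubs. The random network, the 0.5 threshold and the hub criterion are illustrative assumptions, not the paper's exact procedure:

import numpy as np

def contribution_matrix(adj, modules):
    """C[i, m] = total weight of links from node i into module m."""
    C = np.zeros((adj.shape[0], modules.max() + 1))
    for m in range(C.shape[1]):
        C[:, m] = adj[:, modules == m].sum(axis=1)
    return C

def classify_hubs(C):
    k = C.sum(axis=1)                                   # node strength
    k_safe = np.where(k == 0, 1.0, k)                   # guard isolated nodes
    p = 1.0 - ((C / k_safe[:, None]) ** 2).sum(axis=1)  # participation coefficient
    hubs = np.where(k > k.mean() + k.std())[0]          # assumed hub criterion
    return {int(i): ("connector" if p[i] > 0.5 else "local") for i in hubs}

rng = np.random.default_rng(0)
adj = (rng.random((20, 20)) > 0.7).astype(float)
adj = np.triu(adj, 1); adj = adj + adj.T                # undirected, no self-loops
modules = rng.integers(0, 3, size=20)                   # invented partition
print(classify_hubs(contribution_matrix(adj, modules)))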
Resumo:
OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web
1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS
Computational Linguistics is already a consolidated research area. It builds upon the results of two other major ones, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Its best-known applications are possibly the various tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs.
These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are other types of linguistic tools that are perhaps less well-known, but on which most of the other applications of Computational Linguistics are built. These comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.
Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate.
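By way of example, here is a POS-tagging module reduced to its essentials, using the NLTK library; any tagger would serve, and note that tag sets differ between tools, which anticipates the interoperability issue discussed below:

import nltk

# Resource names can vary slightly across NLTK versions.
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

tokens = nltk.word_tokenize("Linguistic annotation tools are important assets.")
print(nltk.pos_tag(tokens))
# e.g. [('Linguistic', 'JJ'), ('annotation', 'NN'), ('tools', 'NNS'), ...]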
However, linguistic annotation tools have still some limitations, which can be summarised as follows:
1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging; for unrestricted, general texts, this error rate ranges from 10 percent up to 50 percent of the annotated units.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.
A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitations stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved.
In addition, most high-level annotation tools rely on other, lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower, i.e. morphosyntactic, level) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies would be transferred to (and even magnified in) the annotations of the high-level annotation tool.
Therefore, it would be quite useful to find a way to
(i) correct or, at least, reduce the errors and inaccuracies of lower-level linguistic tools (a toy sketch of this follows the list below);
(ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
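As a toy sketch of point (i), the tags produced by several tools for the same level can be combined by majority vote, assuming their tag sets have already been mapped to a common schema (which is exactly what point (ii) demands). The tag sequences below are invented:

from collections import Counter

def combine(annotations):
    """annotations: list of tag sequences, one per tool, aligned token-by-token."""
    return [Counter(tags).most_common(1)[0][0] for tags in zip(*annotations)]

tool_a = ["DET", "NOUN", "VERB"]
tool_b = ["DET", "NOUN", "NOUN"]   # the second tool errs on the last token
tool_c = ["DET", "NOUN", "VERB"]
print(combine([tool_a, tool_b, tool_c]))   # ['DET', 'NOUN', 'VERB']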
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. Then again, ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools.
Thus, to summarise, the main aim of the present work was to somehow combine these separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.
2. GOALS OF THE PRESENT WORK
As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based