869 results for Emotional Processing Model


Relevance: 30.00%

Abstract:

Uncertainty information for global leaf area index (LAI) products is important for global modeling studies but usually difficult to obtain systematically at a global scale. Here, we present a new method that cross-validates existing global LAI products and produces consistent uncertainty information. The method is based on a triple collocation error model (TCEM) that assumes the errors of the LAI products are uncorrelated. Global monthly absolute and relative uncertainties, at 0.05° spatial resolution, were generated for the MODIS, CYCLOPES, and GLOBCARBON LAI products, with reasonable agreement in terms of spatial patterns and biome types. CYCLOPES shows the lowest absolute and relative uncertainties, followed by GLOBCARBON and MODIS. Grasses, crops, shrubs, and savannas usually have lower uncertainties than forests, in association with the relatively larger forest LAI. With their densely vegetated canopies, tropical regions exhibit the highest absolute uncertainties but the lowest relative uncertainties; the latter tend to increase toward higher latitudes. The estimated uncertainties of CYCLOPES generally meet the quality requirement (±0.5) proposed by the Global Climate Observing System (GCOS), whereas for MODIS and GLOBCARBON only non-forest biome types meet the requirement. Nevertheless, none of the products seems to be within the relative uncertainty requirement of 20%. Further independent validation and comparative studies are expected to provide a fair assessment of the uncertainties derived from TCEM. Overall, the proposed TCEM is straightforward and could be automated for the systematic processing of real-time remote sensing observations to provide theoretical uncertainty information for a wider range of land products.
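The abstract does not spell the estimator out, but the classical triple collocation formulas that a TCEM builds on can be sketched briefly. Assuming three collocated products measuring the same truth with independent, zero-mean errors, each product's error variance follows from the pairwise sample covariances; the function and variable names below are ours, not the paper's:

```python
import numpy as np

def triple_collocation_errors(x, y, z):
    """Estimate per-product error standard deviations from three
    collocated series, assuming independent, zero-mean errors."""
    C = np.cov(np.vstack([x, y, z]))  # 3x3 sample covariance matrix
    var_x = C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2]
    var_y = C[1, 1] - C[0, 1] * C[1, 2] / C[0, 2]
    var_z = C[2, 2] - C[0, 2] * C[1, 2] / C[0, 1]
    # Finite samples can yield slightly negative estimates; clip at zero.
    return tuple(np.sqrt(max(v, 0.0)) for v in (var_x, var_y, var_z))
```

Applied per grid cell and per month to three LAI time series, estimates of this kind yield absolute uncertainty maps of the sort the paper describes.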

Relevance: 30.00%

Abstract:

The Håkon Mosby Mud Volcano is a natural laboratory for studying geological, geochemical, and ecological processes related to deep-water mud volcanism. High resolution bathymetry of the Håkon Mosby Mud Volcano was recorded during RV Polarstern expedition ARK-XIX/3 using the multibeam system Hydrosweep DS-2. Dense spacing of the survey lines and slow ship speed (5 knots) provided the point density necessary to generate a regular 10 m grid. Generalization was applied to preserve and represent morphological structures appropriately. Contour lines were derived showing detailed topography at the centre of the Håkon Mosby Mud Volcano and generalized contours in the vicinity. We provide a brief introduction to the Håkon Mosby Mud Volcano area and describe in detail the data recording and processing methods, as well as the morphology of the area. An accuracy assessment was made to evaluate the reliability of a 10 m resolution terrain model. Multibeam sidescan data were recorded along with the depth measurements and show reflectivity variations from light grey values at the centre of the Håkon Mosby Mud Volcano to dark grey (less reflective) values at the surrounding moat.
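The gridding step described above can be illustrated with a minimal sketch. This is not the authors' Hydrosweep processing chain; it only shows the common first step of median-binning scattered soundings onto a regular cell raster (names and the assumption of non-negative local coordinates are ours):

```python
from collections import defaultdict
import numpy as np

def grid_soundings(x, y, depth, cell=10.0):
    """Median-bin scattered soundings onto a regular grid.
    x, y are local projected coordinates in metres (assumed >= 0)."""
    cells = defaultdict(list)
    for xi, yi, di in zip(x, y, depth):
        cells[(int(yi // cell), int(xi // cell))].append(di)
    ny = max(j for j, _ in cells) + 1
    nx = max(i for _, i in cells) + 1
    grid = np.full((ny, nx), np.nan)    # NaN marks empty cells
    for (j, i), values in cells.items():
        grid[j, i] = np.median(values)  # median resists sounding outliers
    return grid
```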

Relevance: 30.00%

Abstract:

This talk illustrates how results from various Stata commands can be processed efficiently for inclusion in customized reports. A two-step procedure is proposed in which results are gathered and archived in the first step and then tabulated in the second step. Such an approach disentangles the tasks of computing results (which may take a long time) and preparing results for inclusion in presentations, papers, and reports (which you may have to do over and over). Examples using results from model estimation commands and various other Stata commands such as tabulate, summarize, or correlate are presented. Users will also be shown how to dynamically link results into word processors or into LaTeX documents.

Relevance: 30.00%

Abstract:

This tutorial will show how results from various Stata commands can be processed efficiently for inclusion in customized reports. A two-step procedure is proposed in which results are gathered and archived in the first step and then tabulated in the second step. Such an approach disentangles the tasks of computing results (which may take a long time) and preparing results for inclusion in presentations, papers, and reports (which you may have to do over and over). Examples using results from model estimation commands and also various other Stata commands such as tabulate, summarize, or correlate are presented. Furthermore, this tutorial shows how to dynamically link results into word processors or into LaTeX documents.
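The gather-then-tabulate pattern described in these two entries is language-agnostic. As a minimal sketch of the same idea in Python (file name and result values are placeholders, not material from the talk or tutorial):

```python
import json

# Step 1: compute results once (possibly slow) and archive them.
results = {"model_a": {"beta": 0.42, "se": 0.05},
           "model_b": {"beta": 0.38, "se": 0.07}}
with open("results.json", "w") as f:
    json.dump(results, f)

# Step 2: tabulate as often as needed, without recomputing anything.
with open("results.json") as f:
    stored = json.load(f)
for name, est in stored.items():
    print(f"{name}: beta={est['beta']:.2f} (se={est['se']:.2f})")
```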

Relevance: 30.00%

Abstract:

This chapter attempts to identify whether product differentiation or geographical differentiation is the main source of profit for firms in developing economies, employing a simple idea from the recently developed methods of empirical industrial organization. Theoretically, location choice and product choice have been considered analogues in differentiation, but in the real world, which of these strategies is chosen makes an immense difference to firm behavior and to the development process of the industry. Advances in the techniques of empirical industrial organization enable us to identify market outcomes with endogeneity; a typical case is a market outcome with differentiation, where price or product choice is endogenously determined. Our original survey contains data on market location, differences in product types, and price. The results show that product differentiation, rather than geographical differentiation, mitigates the pressure of price competition, although some 70 per cent of firms secure a geographical monopoly.

Relevance: 30.00%

Abstract:

This study examines the manoeuvrability of a riverine support patrol vessel, deriving a mathematical model and simulating maneuvers with the ship. The vessel is characterized by both its wide beam and its unconventional propulsion system, a pump-jet type azimuthal propulsion. By processing experimental data and the ship's characteristics with diverse formulae to find the proper hydrodynamic coefficients and propulsion forces, a system of three differential equations is completed and tuned to simulate the turning test. The simulation accepts variable speed, jet angle, and water depth as input parameters; its output consists of time series of the state variables and a plot of the simulated path and heading of the ship during the maneuver. Data from full-scale trials previously performed with the studied vessel allowed a validation process, which shows a good fit between simulated and full-scale experimental results, especially for the turning diameter.
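The paper's hydrodynamic coefficients are not given here; the sketch below only illustrates the structure of such a three-equation (surge-sway-yaw) turning simulation, with made-up coefficients and a constant jet angle standing in for the real, tuned model:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative constants only; a real model tunes these from trials.
m, Iz = 1.0e5, 5.0e6                 # mass, yaw inertia
T, Xu = 4.0e4, 3.0e2                 # thrust, surge drag coefficient
Yv, Yd = -8.0e4, 2.0e4               # sway coefficients
Nv, Nr, Nd = -2.0e5, -6.0e6, 1.5e5   # yaw coefficients

def rhs(t, s, delta):
    """State s = [u, v, r, psi, x, y]; delta = jet angle (rad)."""
    u, v, r, psi, x, y = s
    du = (T * np.cos(delta) - Xu * u * abs(u)) / m
    dv = (Yv * v + Yd * np.sin(delta) * u**2) / m
    dr = (Nv * v + Nr * r + Nd * np.sin(delta) * u**2) / Iz
    return [du, dv, dr, r,
            u * np.cos(psi) - v * np.sin(psi),   # earth-fixed path
            u * np.sin(psi) + v * np.cos(psi)]

# Turning test: 3 m/s approach speed, 20-degree jet angle, 10 minutes.
sol = solve_ivp(rhs, (0, 600), [3.0, 0, 0, 0, 0, 0],
                args=(np.radians(20),), max_step=1.0)
```

Plotting `sol.y[4]` against `sol.y[5]` gives the turning circle from which the turning diameter can be read off.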

Relevance: 30.00%

Abstract:

Membrane systems are parallel, bio-inspired systems that simulate the behavior of membranes when processing information. As a part of unconventional computing, P-systems have proven effective in solving complex problems. A software technique is presented here that obtains good results when dealing with such problems. The rules application phase is studied and updated accordingly to obtain the desired results: certain rules are candidates for elimination, which can improve the model's running time.
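The paper's specific elimination criteria are not reproduced here; the toy sketch below only illustrates a maximally parallel rule-application step in which rules that cannot fire against the current membrane contents are pruned before the application loop (all names are illustrative):

```python
from collections import Counter
import random

def applicable(lhs, contents):
    return all(contents[obj] >= n for obj, n in lhs.items())

def step(contents, rules):
    """One maximally parallel step of a toy P-system membrane.
    Rules are (lhs, rhs) multisets; products only become visible
    at the end of the step, as in membrane computing semantics."""
    rules = [r for r in rules if applicable(r[0], contents)]  # prune early
    produced = Counter()
    fired = True
    while fired:
        fired = False
        random.shuffle(rules)           # non-deterministic rule choice
        for lhs, rhs in rules:
            if applicable(lhs, contents):
                contents.subtract(lhs)
                produced.update(rhs)
                fired = True
    contents.update(produced)
    return contents

# Rule a^2 -> b applied maximally to a^5 leaves one a and two b's.
print(step(Counter("aaaaa"), [(Counter("aa"), Counter("b"))]))
```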

Relevance: 30.00%

Abstract:

To properly understand and model animal embryogenesis, it is crucial to obtain detailed measurements, in both time and space, of gene expression domains and cell dynamics. This challenge has been addressed in recent years by a surge of atlases that integrate a statistically relevant number of individuals to obtain robust, complete information about the spatiotemporal location of gene patterns. This paper discusses the fundamental image analysis strategies required to build such models and the most common problems found along the way. We also discuss the main challenges and future goals in the field.

Relevance: 30.00%

Abstract:

Adaptive systems use feedback as a key strategy to cope with uncertainty and change in their environments. The information fed back from the sensorimotor loop into the control architecture can be used to change different elements of the controller at four different levels: parameters of the control model, the control model itself, the functional organization of the agent, and the functional components of the agent. The complexity of such a space of potential configurations is daunting. The only viable alternative for the agent (in practical, economical, evolutionary terms) is the reduction of the dimensionality of the configuration space. This reduction is achieved both by functionalisation (or, to be more precise, by interface minimization) and by patterning, i.e. the selection among a predefined set of organisational configurations. This analysis lets us state the central problem of how autonomy emerges from the integration of the cognitive, emotional and autonomic systems in strict functional terms: autonomy is achieved by the closure of functional dependency. In this paper we present a general model of how emotional biological systems operate following this theoretical analysis, and show that the model is also applicable to a wide spectrum of artificial systems.

Relevance: 30.00%

Abstract:

OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web

1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS

Computational Linguistics is already a consolidated research area. It builds upon the results of two other major ones, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Possibly, its most well-known applications are the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs. These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that are perhaps not so well known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools. Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate. However, linguistic annotation tools still have some limitations, which can be summarised as follows: (1) normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.); (2) they usually introduce a certain rate of errors and ambiguities when tagging, a rate that ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts; (3) their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc. A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitation stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved. In addition, most high-level annotation tools rely on other, lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance; otherwise, these errors and inaccuracies would be transferred to (and even magnified in) the annotations of the high-level annotation tool.
Therefore, it would be quite useful to find a way to (i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools; and (ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate. Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. There again, ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools. Thus, to summarise, the main aim of the present work was to combine these hitherto separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.

2. GOALS OF THE PRESENT WORK

As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based triples, as in the usual Semantic Web languages (namely RDF(S) and OWL), in order for the model to be considered suitable for the Semantic Web. Besides, to be useful for the Semantic Web, this model should provide a way to automate the annotation of web pages. As for the present work, this requirement involved reusing the linguistic annotation tools purchased by the OEG research group (http://www.oeg-upm.net), but solving beforehand (or, at least, minimising) some of their limitations. Therefore, this model had to minimise these limitations by means of the integration of several linguistic annotation tools into a common architecture. Since this integration required the interoperation of tools and their annotations, ontologies were proposed as the main technological component to make them effectively interoperate. From the very beginning, it seemed that the formalisation of the elements and the knowledge underlying linguistic annotations within an appropriate set of ontologies would be a great step forward towards the formulation of such a model (henceforth referred to as OntoTag). Obviously, first, to combine the results of the linguistic annotation tools that operated at the same level, their annotation schemas had to be unified (or, preferably, standardised) in advance. This entailed the unification (i.e. standardisation) of their tags (both their representation and their meaning), and their format or syntax.
Second, to merge the results of the linguistic annotation tools operating at different levels, their respective annotation schemas had to be (a) made interoperable and (b) integrated. And third, in order for the resulting annotations to suit the Semantic Web, they had to be specified by means of an ontology-based vocabulary, and structured by means of ontology-based triples, as hinted above. Therefore, a new annotation scheme had to be devised, based both on ontologies and on this type of triples, which allowed for the combination and the integration of the annotations of any set of linguistic annotation tools. This annotation scheme was considered a fundamental part of the model proposed here, and its development was, accordingly, another major objective of the present work. All these goals, aims and objectives can be re-stated more clearly as follows:

Goal 1: Development of a set of ontologies for the formalisation of the linguistic knowledge relating to linguistic annotation.
Sub-goal 1.1: Ontological formalisation of the EAGLES (1996a; 1996b) de facto standards for morphosyntactic and syntactic annotation, in a way that respects the triple structure recommended for annotations in these works (which is isomorphic to the triple structures used in the context of the Semantic Web).
Sub-goal 1.2: Incorporation into this preliminary ontological formalisation of other existing standards and standard proposals relating to the levels mentioned above, such as those currently under development within ISO/TC 37 (the ISO Technical Committee dealing with Terminology, which also deals with linguistic resources and annotations).
Sub-goal 1.3: Generalisation and extension of the recommendations in EAGLES (1996a; 1996b) and ISO/TC 37 to the semantic level, for which no ISO/TC 37 standards have been developed yet.
Sub-goal 1.4: Ontological formalisation of the generalisations and/or extensions obtained in the previous sub-goal as generalisations and/or extensions of the corresponding ontology (or ontologies).
Sub-goal 1.5: Ontological formalisation of the knowledge required to link, combine and unite the knowledge represented in the previously developed ontology (or ontologies).

Goal 2: Development of OntoTag’s annotation scheme, a standard-based abstract scheme for the hybrid (linguistically-motivated and ontology-based) annotation of texts.
Sub-goal 2.1: Development of the standard-based morphosyntactic annotation level of OntoTag’s scheme. This level should include, and possibly extend, the recommendations of EAGLES (1996a) and also the recommendations included in the ISO/MAF (2008) standard draft.
Sub-goal 2.2: Development of the standard-based syntactic annotation level of the hybrid abstract scheme. This level should include, and possibly extend, the recommendations of EAGLES (1996b) and the ISO/SynAF (2010) standard draft.
Sub-goal 2.3: Development of the standard-based semantic annotation level of OntoTag’s (abstract) scheme.
Sub-goal 2.4: Development of the mechanisms for a convenient integration of the three annotation levels already mentioned. These mechanisms should take into account the recommendations included in the ISO/LAF (2009) standard draft.
Goal 3: Design of OntoTag’s (abstract) annotation architecture, an abstract architecture for the hybrid (semantic) annotation of texts (i) that facilitates the integration and interoperation of different linguistic annotation tools, and (ii) whose results comply with OntoTag’s annotation scheme.
Sub-goal 3.1: Specification of the decanting processes that allow for the classification and separation, according to their corresponding levels, of the results of the linguistic tools annotating at several different levels.
Sub-goal 3.2: Specification of the standardisation processes that allow (a) complying with the standardisation requirements of OntoTag’s annotation scheme, as well as (b) combining the results of those linguistic tools that share some level of annotation.
Sub-goal 3.3: Specification of the merging processes that allow for the combination of the output annotations and the interoperation of those linguistic tools that share some level of annotation.
Sub-goal 3.4: Specification of the merging processes that allow for the integration of the results and the interoperation of those tools performing their annotations at different levels.

Goal 4: Generation of OntoTagger’s schema, a concrete instance of OntoTag’s abstract scheme for a concrete set of linguistic annotations. These linguistic annotations result from the tools and the resources available in the research group, namely:
• Bitext’s DataLexica (http://www.bitext.com/EN/datalexica.asp),
• LACELL’s (POS) tagger (http://www.um.es/grupos/grupo-lacell/quees.php),
• Connexor’s FDG (http://www.connexor.eu/technology/machinese/glossary/fdg/), and
• EuroWordNet (Vossen et al., 1998).
This schema should help evaluate OntoTag’s underlying hypotheses, stated below. Consequently, it should implement, at least, those levels of the abstract scheme dealing with the annotations of the set of tools considered in this implementation. This includes the morphosyntactic, the syntactic and the semantic levels.

Goal 5: Implementation of OntoTagger’s configuration, a concrete instance of OntoTag’s abstract architecture for this set of linguistic tools and annotations. This configuration (1) had to use the schema generated in the previous goal; and (2) should help support or refute the hypotheses of this work as well (see the next section).
Sub-goal 5.1: Implementation of the decanting processes that facilitate the classification and separation of the results of those linguistic resources that provide annotations at several different levels (on the one hand, LACELL’s tagger operates at the morphosyntactic level and, minimally, also at the semantic level; on the other hand, FDG operates at the morphosyntactic and the syntactic levels and, minimally, at the semantic level as well).
Sub-goal 5.2: Implementation of the standardisation processes that allow (i) specifying the results of those linguistic tools that share some level of annotation according to the requirements of OntoTagger’s schema, as well as (ii) combining these shared-level results. In particular, all the tools selected perform morphosyntactic annotations, and they had to be conveniently combined by means of these processes.
Sub-goal 5.3: Implementation of the merging processes that allow for the combination (and possibly the improvement) of the annotations and the interoperation of the tools that share some level of annotation (in particular, those relating to the morphosyntactic level, as in the previous sub-goal).
Sub-goal 5.4: Implementation of the merging processes that allow for the integration of the different standardised and combined annotations aforementioned, relating to all the levels considered.
Sub-goal 5.5: Improvement of the semantic level of this configuration by adding a named entity recognition, (sub-)classification and annotation subsystem, which also uses the annotated named entities to populate a domain ontology, in order to provide a concrete application of the present work in the two areas involved (the Semantic Web and Corpus Linguistics).

3. MAIN RESULTS: ASSESSMENT OF ONTOTAG’S UNDERLYING HYPOTHESES

The model developed in the present thesis tries to shed some light on (i) whether linguistic annotation tools can effectively interoperate; (ii) whether their results can be combined and integrated; and, if they can, (iii) how they can, respectively, interoperate and be combined and integrated. Accordingly, several hypotheses had to be supported (or rejected) by the development of the OntoTag model and OntoTagger (its implementation). The hypotheses underlying OntoTag are surveyed below. Only one of the hypotheses (H.6) was rejected; the other five could be confirmed.
H.1: The annotations of different levels (or layers) can be integrated into a sort of overall, comprehensive, multilayer and multilevel annotation, so that their elements can complement and refer to each other. CONFIRMED by the development of OntoTag’s annotation scheme, OntoTag’s annotation architecture, OntoTagger’s (XML, RDF, OWL) annotation schemas, and OntoTagger’s configuration.
H.2: Tool-dependent annotations can be mapped onto a sort of tool-independent annotations and, thus, can be standardised. CONFIRMED by means of the standardisation phase incorporated into OntoTag and OntoTagger for the annotations yielded by the tools.
H.3: Standardisation should ease (H.3.1) the interoperation of linguistic tools, and (H.3.2) the comparison, combination (at the same level and layer) and integration (at different levels or layers) of annotations. CONFIRMED by means of the development of OntoTagger’s ontology-based configuration: interoperation, comparison, combination and integration of the annotations of three different linguistic tools (Connexor’s FDG, Bitext’s DataLexica and LACELL’s tagger); integration of EuroWordNet-based, domain-ontology-based and named entity annotations at the semantic level; and integration of morphosyntactic, syntactic and semantic annotations.
H.4: Ontologies and Semantic Web technologies (can) play a crucial role in the standardisation of linguistic annotations, by providing consensual vocabularies and standardised formats for annotation (e.g., RDF triples). CONFIRMED by means of the development of OntoTagger’s RDF-triple-based annotation schemas.
H.5: The rate of errors introduced by a linguistic tool at a given level, when annotating, can be reduced automatically by contrasting and combining its results with the ones coming from other tools operating at the same level, even when these other tools are built following a different technological (stochastic vs. rule-based, for example) or theoretical (dependency- vs. HPSG-based, for instance) approach. CONFIRMED by the results yielded by the evaluation of OntoTagger.
H.6: Each linguistic level can be managed and annotated independently. REJECTED by OntoTagger’s experiments and by the dependencies observed among the morphosyntactic annotations, and between them and the syntactic annotations.
In fact, Hypothesis H.6 was already rejected when OntoTag’s ontologies were developed. We observed then that several linguistic units lie at an interface between levels, thereby belonging to both of them (such as morphosyntactic units, which belong to both the morphological level and the syntactic level). Therefore, the annotations of these levels overlap and cannot be handled independently when merged into a unique multileveled annotation.

4. OTHER MAIN RESULTS AND CONTRIBUTIONS

First, interoperability is a hot topic for both the linguistic annotation community and the whole Computer Science field. The specification (and implementation) of OntoTag’s architecture for the combination and integration of linguistic (annotation) tools and annotations by means of ontologies shows a way to make these different linguistic annotation tools and annotations interoperate in practice.
Second, as mentioned above, the elements involved in linguistic annotation were formalised in a set (or network) of ontologies (OntoTag’s linguistic ontologies). On the one hand, OntoTag’s network of ontologies consists of:
− the Linguistic Unit Ontology (LUO), which includes a mostly hierarchical formalisation of the different types of linguistic elements (i.e., units) identifiable in a written text;
− the Linguistic Attribute Ontology (LAO), which also includes a mostly hierarchical formalisation of the different types of features that characterise the linguistic units included in the LUO;
− the Linguistic Value Ontology (LVO), which includes the corresponding formalisation of the different values that the attributes in the LAO can take;
− the OIO (OntoTag’s Integration Ontology), which includes the knowledge required to link, combine and unite the knowledge represented in the LUO, the LAO and the LVO, and which can be viewed as a knowledge representation ontology that describes the most elementary vocabulary used in the area of annotation.
On the other hand, OntoTag’s ontologies incorporate the knowledge included in the different standards and recommendations for linguistic annotation released so far, such as those developed within the EAGLES and the SIMPLE European projects or by the ISO/TC 37 committee:
− as far as morphosyntactic annotations are concerned, OntoTag’s ontologies formalise the terms in the EAGLES (1996a) recommendations and their corresponding terms within the ISO Morphosyntactic Annotation Framework (ISO/MAF, 2008) standard;
− as for syntactic annotations, OntoTag’s ontologies incorporate the terms in the EAGLES (1996b) recommendations and their corresponding terms within the ISO Syntactic Annotation Framework (ISO/SynAF, 2010) standard draft;
− regarding semantic annotations, OntoTag’s ontologies generalise and extend the recommendations in EAGLES (1996a; 1996b) and, since no stable standards or standard drafts have been released for semantic annotation by ISO/TC 37 yet, they incorporate the terms in SIMPLE (2000) instead;
− the terms coming from all these recommendations and standards were supplemented by those within the ISO Data Category Registry (ISO/DCR, 2008) and also those of the ISO Linguistic Annotation Framework (ISO/LAF, 2009) standard draft when developing OntoTag’s ontologies.
Third, we showed that the combination of the results of tools annotating at the same level can yield better results (both in precision and in recall) than each tool separately. In particular:
(1) OntoTagger clearly outperformed two of the tools integrated into its configuration, namely DataLexica and FDG, in all the combination sub-phases in which they overlapped (i.e. POS tagging, lemma annotation and morphological feature annotation). As for the remaining tool, LACELL’s tagger, it was also outperformed by OntoTagger in POS tagging and lemma annotation, and it did not behave better than OntoTagger in the morphological feature annotation layer. (2) As an immediate result, this implies that (a) this type of combination architecture configuration can be applied in order to significantly improve the accuracy of linguistic annotations; and (b) concerning the morphosyntactic level, this could be regarded as a way of constructing more robust and more accurate POS tagging systems.
Fourth, Semantic Web annotations are usually performed either by humans or by machine learning systems. Both leave much to be desired: the former with respect to their annotation rate, the latter with respect to their (average) precision and recall. In this work, we showed how linguistic tools can be wrapped in order to annotate Semantic Web pages automatically using ontologies, which enables their fast, robust and accurate semantic annotation. By way of example, as mentioned in Sub-goal 5.5, we developed a particular OntoTagger module for the recognition, classification and labelling of named entities, according to the MUC and ACE tagsets (Chinchor, 1997; Doddington et al., 2004). These tagsets were further specified by means of a domain ontology, namely the Cinema Named Entities Ontology (CNEO). This module was applied to the automatic annotation of ten different web pages containing cinema reviews (that is, around 5000 words). In addition, the named entities annotated with this module were also labelled as instances (or individuals) of the classes included in the CNEO and were then used to populate this domain ontology. The statistical results obtained from the evaluation of this particular module of OntoTagger can be summarised as follows. As far as recall (R) is concerned, (R.1) the lowest value was 76.40% (for file 7); (R.2) the highest value was 97.50% (for file 3); and (R.3) the average value was 88.73%. As far as the precision rate (P) is concerned, (P.1) its minimum was 93.75% (for file 4); (P.2) its maximum was 100% (for files 1, 5, 7, 8, 9, and 10); and (P.3) its average value was 98.99%. These results, which apply to the tasks of named entity annotation and ontology population, are extraordinarily good for both of them. They can be explained on the basis of the high accuracy of the annotations provided by OntoTagger at the lower levels (mainly at the morphosyntactic level). However, they should be conveniently qualified, since they might be too domain- and/or language-dependent; it remains to be tested how our approach works in a different domain or a different language, such as French, English, or German. In any case, the results of this application of Human Language Technologies to Ontology Population (and, accordingly, to Ontological Engineering) seem very promising and encouraging for these two areas to collaborate and complement each other in the area of semantic annotation.
Fifth, as shown in the State of the Art of this work, there are different approaches and models for the semantic annotation of texts, but all of them focus on a particular view of the semantic level.
Clearly, all these approaches and models should be integrated in order to yield a coherent, joint semantic annotation level. OntoTag shows how (i) these semantic annotation layers can be integrated together, and (ii) how they can be integrated with the annotations associated with other annotation levels. Sixth, we identified some recommendations, best practices and lessons learned for annotation standardisation, interoperation and merging. They show how standardisation (via ontologies, in this case) enables the combination, integration and interoperation of different linguistic tools and their annotations into a multilayered (or multileveled) linguistic annotation, which is one of the hot topics in the area of Linguistic Annotation. And last but not least, OntoTag’s annotation scheme and OntoTagger’s annotation schemas show a way to formalise and annotate coherently and uniformly the different units and features associated with the different levels and layers of linguistic annotation. This is a significant scientific step towards the global standardisation of this area, which is the aim of ISO/TC 37 (in particular, Subcommittee 4, dealing with the standardisation of linguistic annotations and resources).
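The ontology-based triples this abstract requires of Semantic Web annotations can be made concrete with a small example. The sketch below uses Python's rdflib; the namespace, class and property names are hypothetical stand-ins for OntoTag's LUO/LAO/LVO vocabulary, whose actual URIs are not given here:

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

# Hypothetical namespace and terms standing in for OntoTag's
# LUO/LAO/LVO vocabulary; actual URIs are not given in this abstract.
ONT = Namespace("http://example.org/ontotag#")

g = Graph()
token = URIRef("http://example.org/doc1#token1")
g.add((token, RDF.type, ONT.Noun))             # linguistic unit (LUO)
g.add((token, ONT.hasLemma, Literal("film")))  # attribute (LAO) ...
g.add((token, ONT.hasNumber, ONT.Singular))    # ... and its value (LVO)
print(g.serialize(format="turtle"))
```

Serialising the graph yields RDF triples of the kind the model requires for compatibility with RDF(S) and OWL.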

Relevance: 30.00%

Abstract:

Adaptive agents use feedback as a key strategy to cope with uncertainty and change in their environments. The information fed back from the sensorimotor loop into the control subsystem can be used to change four different elements of the controller: parameters associated with the control model, the control model itself, the functional organization of the agent, and the functional realization of the agent. There are many change alternatives, and hence the complexity of the agent's space of potential configurations is daunting. The only viable alternative for space- and time-constrained agents (in practical, economical, evolutionary terms) is to achieve a reduction of the dimensionality of this configuration space. Emotions play a critical role in this reduction. The reduction is achieved by functionalization, interface minimization, and patterning, i.e. by selection among a predefined set of organizational configurations. This analysis lets us state how autonomy emerges from the integration of cognitive, emotional and autonomic systems in strict functional terms: autonomy is achieved by the closure of functional dependency. Emotion-based morphofunctional systems are able to exhibit complex adaptation patterns at a reduced cognitive cost. In this article we show a general model of how emotion supports functional adaptation and how emotional biological systems operate following this theoretical model. We also show how this model is applicable to the construction of a wide spectrum of artificial systems.

Relevance: 30.00%

Abstract:

The proliferation of wireless sensor networks and the variety of envisioned applications associated with them have motivated the development of distributed algorithms for collaborative processing over networked systems. One of the applications that has attracted the attention of researchers is target localization, where the nodes of the network try to estimate the position of an unknown target that lies within their coverage area. Particularly challenging is the problem of estimating the target's position from the received signal strength indicator (RSSI), due to the nonlinear relationship between the measured signal and the true position of the target. Many of the existing approaches suffer either from high computational complexity (e.g., particle filters) or from lack of accuracy. Further, many of the proposed solutions are centralized, which makes their application to a sensor network questionable. Depending on the application at hand, and from a practical perspective, it can be convenient to find a balance between localization accuracy and complexity. In this direction, we approach the maximum likelihood location estimation problem by solving a suboptimal (and more tractable) problem. One of the main advantages of the proposed scheme is that it allows for a decentralized implementation using distributed processing tools (e.g., consensus and convex optimization) and is therefore very suitable for implementation in real sensor networks. If further accuracy is needed, an additional refinement step can be performed around the found solution; under the assumption of independent noise among the nodes, such a local search can be done in a fully distributed way using a distributed version of the Gauss-Newton method based on consensus. Regardless of the underlying application or function of the sensor network, it is always necessary to have a mechanism for data reporting. While some approaches use a special kind of node (called a sink node) for data harvesting and forwarding to the outside world, there are some scenarios where such an approach is impractical or even impossible to deploy. Further, such sink nodes become a bottleneck in terms of traffic flow and power consumption. To overcome these issues, instead of using sink nodes for data reporting one can use collaborative beamforming techniques to forward the generated data directly to a base station or gateway to the outside world. In a distributed environment like a sensor network, nodes cooperate in order to form a virtual antenna array that can exploit the benefits of multi-antenna communications; in collaborative beamforming, nodes synchronize their phases in order to add constructively at the receiver. One of the inconveniences associated with collaborative beamforming techniques is that there is no control over the radiation pattern, since it is treated as a random quantity. This may cause interference to other coexisting systems and fast battery depletion at the nodes. Since energy efficiency is a major design issue, we consider the development of a distributed collaborative beamforming scheme that maximizes the network lifetime while meeting some quality of service (QoS) requirement at the receiver side. Using local information about battery status and channel conditions, we find distributed algorithms that converge to the optimal centralized beamformer.
While in the first part we consider only battery depletion due to communications beamforming, we then extend the model to account for more realistic scenarios by introducing an additional random energy consumption. It is shown how the new problem generalizes the original one and under which conditions it is easily solvable. By formulating the problem from the energy-efficiency perspective, the network's lifetime is significantly improved.
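Average consensus is the basic distributed primitive this abstract leans on. The toy sketch below (illustrative function and parameter names, fixed step size) shows how nodes can agree on a network-wide average using only neighbour-to-neighbour exchanges:

```python
import numpy as np

def average_consensus(x0, neighbors, eps=0.2, iters=200):
    """x0: initial scalar per node; neighbors[i]: list of node i's
    neighbours. eps must stay below 1/(max node degree) for stability."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        x = x + eps * np.array([sum(x[j] - x[i] for j in neighbors[i])
                                for i in range(len(x))])
    return x  # every entry converges to mean(x0) on a connected graph

# 4-node ring: all nodes end up near the average of [1, 2, 3, 4].
print(average_consensus([1, 2, 3, 4],
                        {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}))
```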

Relevance: 30.00%

Abstract:

The World Health Organization (WHO) predicts that by the year 2020, Acquired Brain Injury (ABI) will be among the ten most common causes of disability. These injuries dramatically change the life of the patients and their families due to their physical, sensory, cognitive, emotional and socio-economic consequences. New techniques of early intervention and the development of intensive ABI care have noticeably improved the survival rate. However, in spite of these advances, brain injuries still have no surgical or pharmacological treatment to re-establish the lost functions. Neurorehabilitation therapies address this problem by restoring, minimizing or compensating the functional alterations in a person disabled because of a nervous system injury.
One of the main objectives of Neurorehabilitation is to provide patients with the capacity to perform the specific Activities of Daily Living (ADL) required for an independent life, especially those in which the Upper Limb (UL) is directly involved, due to its great importance in manipulating objects within the patients' environment. The incorporation of new technological aids into the neurorehabilitation process aims to reach a new paradigm focused on offering a personalized, monitored and ubiquitous practice with continuous assessment of both the efficacy and the efficiency of the procedures, and with the capacity to generate new knowledge. New targets are to minimize the impact of the illnesses affecting the functional capabilities of the subjects, to decrease the duration of the physical handicap and to allow more efficient resource handling. These targets, of great socio-economic impact, can only be achieved by means of new technologies and algorithms able to provoke the technological breakthrough needed to overcome the barriers that have so far prevented the universal penetration of technology in the field of rehabilitation. In this way, this PhD Thesis has achieved the following results: 1. ADL modelling: as a previous step to the incorporation of technological aids into the neurorehabilitation process, a first phase of modelling and formalizing the knowledge associated with the execution of the activities performed as part of the therapy is necessary. In particular, the most complex and therapeutically relevant tasks are the ADLs, whose formalization produces healthy motion models to be used as a reference for future technological developments. Following a methodology based on UML state-chart diagrams, the ADLs 'serving water from a jar' and 'picking up a bottle' have been modelled. 2. Ubiquitous monitoring of UL movement: a motion acquisition system based on inertial technology has been designed, developed and validated that overcomes the limitations of current devices (high monetary cost and inability to work in uncontrolled environments); the high correlation coefficients and the low error levels obtained throughout several co-registration sessions with the commercial system BTS SMART-D show the high precision of the system. In addition, an exploratory study of a very low cost stereoscopic vision-based motion capture system has been carried out, and the key points where further technological work is needed for its incorporation into a real environment have been identified. 3. Inverse Kinematics (IK) problem solving: a solution to the IK problem has been proposed for a manipulator that corresponds to a human UL. Two alternatives have been studied, one based on a Multilayer Perceptron (MLP) and another based on Adaptive Neuro-Fuzzy Inference Systems (ANFIS). The validation of these solutions, carried out using the information from the previously generated motion models, indicates that an MLP-based solution, with an architecture consisting of 3 neurons in the input layer, one hidden layer of 3 neurons, and an output layer with as many neurons as the number of Degrees of Freedom (DoFs) of the UL model, provides the best results both in terms of precision and in terms of processing time, making it suitable for integration within a system with real-time restrictions.
4. Assisted-as-needed intelligent control: an assisted-as-needed control algorithm with anticipatory actuation capabilities has been designed, developed and validated for a robotic orthosis of which an implemented prototype already exists. The results obtained demonstrate that the control system is able to adapt to the dysfunctional profile of the patient by triggering the assistance right before an incorrect movement is about to take place. This strategy implies an increase in the participation of the patient and in his or her muscle activity, encouraging the neural plasticity processes in charge of motor learning. 5. Planning with a robotic simulator: a robotic simulator is proposed as a planning tool for personalized rehabilitation sessions under a given clinical criterion. The results obtained indicate that, after the execution of simple parameter selection algorithms, it is possible to automatically choose a specific configuration that makes the assisted-as-needed control algorithm adapt both to the clinical criteria and to the patient. These results invite researchers to work on the development of more complex parameter selection algorithms based on batteries of simulations. The results obtained corroborate the hypotheses set out at the beginning of this PhD Thesis and have opened new research lines in all the studied application fields.
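The 3-3-N MLP reported above for the IK step can be sketched quickly. Everything below (random stand-in data, an assumed 7-DoF arm model, scikit-learn as the training library) is an illustrative assumption, not the thesis implementation:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-0.5, 0.5, size=(1000, 3))  # wrist positions (x, y, z); stand-in data
q = rng.uniform(-1.0, 1.0, size=(1000, 7))  # joint angles for an assumed 7-DoF UL model

# 3 inputs -> one hidden layer of 3 neurons -> one output per DoF,
# matching the architecture the abstract reports as the best performer.
ik = MLPRegressor(hidden_layer_sizes=(3,), max_iter=5000, random_state=0)
ik.fit(X, q)
q_hat = ik.predict(X[:1])  # millisecond-scale inference suits real-time control
```

In the thesis the training pairs come from the recorded healthy ADL motion models rather than random samples.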

Relevance: 30.00%

Abstract:

The term "Logic Programming" refers to a variety of computer languages and execution models based on the traditional concept of Symbolic Logic. The expressive power of these languages promises to be of great assistance in facing the programming challenges of present and future symbolic processing applications in Artificial Intelligence, Knowledge-based systems, and many other areas of computing. The sequential execution speed of logic programs has been greatly improved since the advent of the first interpreters. However, higher inference speeds are still required in order to meet the demands of applications such as those contemplated for next generation computer systems. The execution of logic programs in parallel is currently considered a promising strategy for attaining such inference speeds. Logic Programming in turn appears to be a suitable programming paradigm for parallel architectures because of the many opportunities for parallel execution present in the implementation of logic programs. This dissertation presents an efficient parallel execution model for logic programs. The model is described from the source language level down to an "Abstract Machine" level suitable for direct implementation on existing parallel systems or for the design of special purpose parallel architectures. Few assumptions are made at the source language level, and therefore the techniques developed and the general Abstract Machine design are applicable to a variety of logic (and also functional) languages. These techniques offer efficient solutions to several areas of parallel Logic Programming implementation previously considered problematic or a source of considerable overhead, such as the detection and handling of variable binding conflicts in AND-Parallelism, the specification of control and management of the execution tree, the treatment of distributed backtracking, and goal scheduling and memory management issues. A parallel Abstract Machine design is offered, specifying data areas, operation, and a suitable instruction set. This design is based on extending to a parallel environment the techniques introduced by the Warren Abstract Machine, which have already made very fast and space-efficient sequential systems a reality. Therefore, the model presented herein is capable of retaining sequential execution speed similar to that of high-performance sequential systems, while extracting additional gains in speed by efficiently implementing parallel execution. These claims are supported by simulations of the Abstract Machine on sample programs.
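One of the problems named above, detecting variable binding conflicts in AND-Parallelism, reduces to checking goal independence: goals that share no unbound variables can run in parallel without conflicting bindings. The sketch below is a toy illustration of that check (the term representation and function names are ours, not the dissertation's abstract machine):

```python
def term_vars(term, binding):
    """Unbound variables of a term. Variables are capitalised strings;
    compound terms are tuples (functor, arg1, ...); bindings dereference."""
    if isinstance(term, str) and term[:1].isupper():
        return term_vars(binding[term], binding) if term in binding else {term}
    if isinstance(term, tuple):
        out = set()
        for arg in term[1:]:
            out |= term_vars(arg, binding)
        return out
    return set()  # constants carry no variables

def strictly_independent(goal_a, goal_b, binding):
    """Strict independence: no shared unbound variables, so the goals
    can be executed in AND-parallel without binding conflicts."""
    return not (term_vars(goal_a, binding) & term_vars(goal_b, binding))

# p(X, a) and q(Y) are independent; p(X, a) and q(X) are not.
print(strictly_independent(("p", "X", "a"), ("q", "Y"), {}))  # True
print(strictly_independent(("p", "X", "a"), ("q", "X"), {}))  # False
```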