634 results for Entails


Relevance:

10.00%

Publisher:

Abstract:

This article examines which elements of analysis are present in the strategies of farmers in the Argentine Pampas when they make production decisions that take the climate factor into account. The emphasis is on how they perceive climate variability and what information they have about its medium-term outlook. During 2005, 60 producers were interviewed, selected from two Pampean zones with different physical characteristics: 30 from the humid central area and 30 from a semi-arid marginal area. The results of the study characterize the decision-making schemes present in the perceptions of these individuals, bearing in mind that their activity entails exposure to risk. The underlying objective of the research is to propose communication actions that promote better use of climate information, an available tool with great potential to give a more scientific grounding to the procedures of productive agents and to improve their economic profitability.

Abstract:

Environmental quality refers to the environment's contribution to human well-being. Hazardous land uses in an urban environment can affect that quality. This paper analyzes and evaluates urban environmental quality with respect to hazardous land uses located within the urban area: agrochemical warehouses, silos, garages for ground-spraying equipment, storage sites for liquefied-gas cylinders and tanks, and fuel stations. The proposed methodology applies a system of environmental indicators under the OECD "Pressure-State-Response" model in order to formulate and measure an Environmental Quality Index. The final objective is to identify factors that act as intensifiers or mitigators of risk, measured through pressure, state, and response indicators.
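The Pressure-State-Response logic above can be sketched as a small composite-index computation. The indicator names, the 0-1 normalized scores, and the equal weights below are assumptions made for the sketch, not values taken from the study.

```python
# Illustrative Pressure-State-Response (PSR) composite index; indicator names,
# normalized scores, and equal weights are invented for this example.
def psr_index(indicators, weights):
    """Weighted mean of normalized indicator scores (0 = worst, 1 = best)."""
    total_weight = sum(weights.values())
    return sum(indicators[k] * w for k, w in weights.items()) / total_weight

scores = {"pressure": 0.4, "state": 0.6, "response": 0.7}
weights = {"pressure": 1.0, "state": 1.0, "response": 1.0}
print(round(psr_index(scores, weights), 3))  # → 0.567
```

With equal weights this reduces to a simple mean; unequal weights would let one dimension (for instance, pressure from hazardous land uses) dominate the index.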

Abstract:

This article develops a series of points for understanding the Isthmus of Oaxaca as a zone of diverse, multiple, and therefore differentiated cultural convergence, both in the history of the peoples who live there and in the economic structure that allows their reproduction not only as ethnic groups but as communities. The author argues that the cultural expressions of each group blend, intertwine, and influence one another and, in a logic tied to the durability of these processes, end up prevailing. To understand interethnic relations it is not enough to account for the general characteristics of these groups; it is necessary to update the information gathered about them, especially now that new national and international processes are irreversibly affecting the composition and structure of these cultures. Issues such as the lack of marketing channels for agricultural and fishing products, perennial marginalization, the lack of employment at the regional scale, and migration are, among others, facets of a reality that advances inexorably on communities that deploy strategies of reproduction and survival in order to endure as communities.
The manifestations of interethnic relations are defined here as horizontally asymmetric, by virtue of a differentiation between ethnic groups and the relations among them. Just as the relationship between ethnic groups and national society has been analyzed, revealing flagrant inequalities in access to the wealth generated in the country, quality of life, education, communication, and other important variables, it can likewise be observed that among the ethnic groups some hold a privileged position in access to communication routes, commercial networks, and political influence, among other aspects, while others are excluded from them, not only because of their condition as Indigenous people but also as a consequence of the domination among ethnic groups that exists in the region. Finally, the author emphasizes that this part of the state of Oaxaca holds a considerable range of natural resources whose appropriation and exploitation for economic gain is at the center of the current debate. Indeed, the lagoon system of the Gulf of Tehuantepec, biodiversity reserves such as the Chimalapas forest, and cattle and crop farming in the Mixe zone are, among others, not only important zones of economic exploitation that have strengthened local power groups; they are also spaces of strategic control for the country's future development, since the Isthmus of Oaxaca holds an important source of wealth in water, forests, and endemic species that may play a significant role in the regional and national economy. The question is what social or legal mechanisms have been put in place to define who benefits from this natural wealth.

Abstract:

The starting point is the qualitative research design model developed by Joseph Maxwell, who conceives of research design as an underlying structure based on the interconnection of the components of a study and the implications each has for the others. This framework is used to analyze, where possible, the neorealist current, considered the predominant school in the study of international relations. The purpose of this work, grounded in the interpretive paradigm, whose founding assumption is the need to understand the meaning of social action in the context of the lifeworld and from the perspective of the participants, is to gather the testimony of those who have led or actively participated in shaping the country's foreign policy since the return of democracy, namely its Ministers of Foreign Affairs.

Abstract:

By the mid-eighteenth century, the great merchants of various Spanish American regions had accumulated sufficient wealth to buy titles of nobility and distinctions, or to establish entailed estates (mayorazgos) that would burnish their names and perpetuate their acquired property. This process is most evident in Mexico and Peru, but no concrete cases are known for the Río de la Plata region. As José Torre Revello argued, this does not mean that Río de la Plata merchants did not attempt to ennoble themselves. The present case study details how Don Vicente de Azcuénaga tried to found a mayorazgo in the city of Buenos Aires in favor of his first-born son, Miguel. Through this study, based on the "probanzas," we can observe how the Azcuénaga family sought to elevate its name above those of its contemporaries, while the relations between father and son lead us, at the same time, to reconsider questions about traditions of accumulating and preserving patrimonies.

Abstract:

A cartographic plan is a contradictory process involving technical and political decisions that can change or even redefine the plan's initial objectives. Nevertheless, the classical historiography of cartography holds that such decisions are the result of purely scientific and technical considerations, free of political interests and contradictions. In this paper we attempt to trace, through the stages of Argentine cartographic production, the moments in which science and politics intertwine so closely that they become constitutive of cartographic science. To do so, we examine the cartographic projects of the IGM: the Plan de la Carta and the Carta Militar Provisional, as well as the geodetic determination of the altimetric datum carried out under the Comisión para la Medición del Arco de Meridiano.

Abstract:

Water management institutions have serious difficulties grasping the reality they are meant to act on, because they start from abstract, homogeneous assumptions that bear little resemblance to the complex, conflict-ridden, and heterogeneous local problems they are intended to improve. This creates a problematic gap between public policies and concrete problems. This paper discusses the dominant institutional theoretical frameworks, pointing out their limitations and implications. Based on a critical review of the literature on the subject, the article establishes baselines as well as a set of analytical dimensions that, from a different perspective, can contribute to shaping an institutional regime conducive to the sustainable reproduction of water resources.

Abstract:

ZooScan with ZooProcess and Plankton Identifier (PkID) software is an integrated analysis system for acquisition and classification of digital zooplankton images from preserved zooplankton samples. Zooplankton samples are digitized by the ZooScan and processed by ZooProcess and PkID in order to detect, enumerate, measure and classify the digitized objects. Here we present a semi-automatic approach that entails automated classification of images followed by manual validation, which allows rapid and accurate classification of zooplankton and abiotic objects. We demonstrate this approach with a biweekly zooplankton time series from the Bay of Villefranche-sur-mer, France. The classification approach proposed here provides a practical compromise between a fully automatic method with varying degrees of bias and a manual but accurate classification of zooplankton. We also evaluate the appropriate number of images to include in digital learning sets and compare the accuracy of six classification algorithms. We evaluate the accuracy of the ZooScan for automated measurements of body size and present relationships between machine measures of size and C and N content of selected zooplankton taxa. We demonstrate that the ZooScan system can produce useful measures of zooplankton abundance, biomass and size spectra, for a variety of ecological studies.
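The semi-automatic strategy described here, automatic classification followed by manual validation, can be sketched as a simple confidence triage. This is an illustration of the idea, not the actual ZooProcess/PkID code; the taxa, probabilities, and the 0.8 threshold are invented for the example.

```python
# Illustrative sketch of semi-automatic classification: accept confident
# automatic predictions and route low-confidence objects to manual validation.
# Taxa, probabilities, and the threshold are hypothetical.
CONFIDENCE_THRESHOLD = 0.8

def triage(predictions):
    """Split (label, probability) pairs into auto-accepted and to-review lists."""
    accepted, to_review = [], []
    for label, prob in predictions:
        (accepted if prob >= CONFIDENCE_THRESHOLD else to_review).append(label)
    return accepted, to_review

preds = [("copepod", 0.95), ("detritus", 0.55), ("chaetognath", 0.88)]
auto, manual = triage(preds)
print(auto)    # → ['copepod', 'chaetognath']
print(manual)  # → ['detritus']
```

Lowering the threshold trades validation effort for a higher risk of accepting misclassified objects, which is the compromise between fully automatic and fully manual classification that the abstract describes.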

Abstract:

Future ocean acidification (OA) will affect physiological traits of marine species, with calcifying species being particularly vulnerable. As OA entails high energy demands, particularly during the rapid juvenile growth phase, food supply may play a key role in the response of marine organisms to OA. We experimentally evaluated the role of food supply in modulating physiological responses and biomineralization processes in juveniles of the Chilean scallop, Argopecten purpuratus, that were exposed to control (pH 8.0) and low pH (pH 7.6) conditions under three food supply treatments (high, intermediate, and low). We found that pH and food levels had additive effects on the physiological response of the juvenile scallops. Metabolic rates, shell growth, net calcification, and ingestion rates increased significantly under low pH conditions, independent of food supply. These physiological responses increased significantly in organisms exposed to intermediate and high levels of food supply. Hence, food supply seems to play a major role in modulating the organismal response by providing the energetic means to bolster the physiological response to OA stress. In contrast, the relative expression of chitin synthase, a functional molecule for biomineralization, increased significantly in scallops exposed to low food supply and low pH, which resulted in a thicker periostracum enriched with chitin polysaccharides. Under reduced food and low pH conditions, the adaptive organismal response was to trade off growth for the expression of biomineralization molecules and an altered organic composition of the shell periostracum, suggesting that the future performance of these calcifiers will depend on the trajectories of both OA and food supply. Thus, incorporating a suite of traits and multiple stressors in future studies of the adaptive organismal response may provide key insights into OA impacts on marine calcifiers.
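"Additive effects" of pH and food, as reported above, means the combined response is the sum of each factor's main effect, with no interaction term. The numbers below are invented purely to illustrate the arithmetic, not data from the study.

```python
# Toy illustration (invented numbers) of additive two-factor effects:
# combined response = baseline + main effect of pH + main effect of food,
# with no pH x food interaction term.
baseline = 10.0          # e.g. metabolic rate at control pH and low food
effect_low_pH = 2.0      # main effect of low pH alone
effect_high_food = 3.0   # main effect of high food supply alone

predicted_additive = baseline + effect_low_pH + effect_high_food
print(predicted_additive)  # → 15.0
```

A significant deviation of the observed combined response from this sum would instead indicate an interaction between the two stressors.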

Abstract:

The structural continuity of fully integral bridges entails many advantages and some drawbacks. Among the latter, the cyclic expansions and contractions of the deck caused by seasonal thermal variations impose alternating displacements on the piers and abutments, with effects that may be difficult to establish reliably. The advantages include easier construction and cheaper maintenance but, especially, horizontal loads can be transmitted to the ground much more effectively than in conventional bridges. This paper first presents a methodology for dealing with the problems that the imposed cyclic displacements raise at the abutments and at the bridge piers. At the former, large pressures may develop, possibly accompanied by undesirable surface settlements. At the latter, the degree of cracking and the ability to carry the specified loads may be in question. Having quantified the drawbacks, simplified but realistic analyses are conducted of the response of an integral bridge to braking and seismic loads. It is shown that integral bridges constitute an excellent alternative in the context of the requirements posed by new high-speed railway lines.
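The seasonal deck movement driving these cyclic displacements follows the standard thermal expansion relation delta = alpha * L * dT. The span, temperature swing, and expansion coefficient below are illustrative order-of-magnitude values, not figures from the paper.

```python
# Order-of-magnitude sketch of seasonal deck movement in an integral bridge,
# using delta = alpha * L * dT; all input values are illustrative assumptions.
ALPHA_CONCRETE = 1.0e-5   # 1/degC, typical thermal expansion coefficient
span_m = 100.0            # hypothetical integral-bridge deck length
delta_T = 40.0            # seasonal temperature variation, degC

expansion_mm = ALPHA_CONCRETE * span_m * delta_T * 1000.0
print(f"seasonal length change of the deck: {expansion_mm:.0f} mm")
```

Because an integral bridge has no expansion joints, a movement of this order must be accommodated by the abutments and piers themselves, which is why the cyclic displacement effects discussed above matter.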

Abstract:

In recent times, the Japanese studio SANAA has produced a series of works that have placed it at the center of international architecture, a recognition reflected in the award of the 2010 Pritzker Prize to Kazuyo Sejima and Ryue Nishizawa for an uninterrupted career of work. SANAA has built a series of buildings that have greatly influenced the context of world architecture; its work informs that of a great many young architects, both in Japan and abroad, and is studied and analyzed in architecture schools around the world. This doctoral thesis studies the SANAA park in the various forms in which it appears, parks by continuity and parks by accumulation, as a vehicle for studying the different architectural mechanisms on which the studio's way of designing is based. These mechanisms already belong to a contemporary way of designing, and their study allows us to understand some of the most interesting architectural phenomena occurring in the architecture of our time. Revealing these mechanisms means bringing to light much of the project's processes and development, which, owing to SANAA's laconic manner of drawing, usually remain hidden and can only be perceived in a slow, unhurried visit to the building. The document presented here thus becomes an investigation in which the doctoral candidate, like a detective, traces SANAA's projects in search of the mechanisms that reveal the true meaning of concepts such as labyrinth, hierarchy, order, atmosphere, and experience across Sejima and Nishizawa's main horizontal-space projects. The thesis therefore attempts to give shape to the principal references that make up the imaginary universe of the Japanese studio and that already form part of contemporary architectural culture.

Abstract:

The competence evaluation promoted by the European Higher Education Area entails a very important methodological change that requires guiding support to help teachers carry out this new and complex task. In this regard, the Technical University of Madrid (UPM, by its Spanish acronym) has financed a series of coordinated projects with a two-fold objective: a) to develop a model for teaching and evaluating core competences that is useful and easily applicable to its different degrees, and b) to provide support to teachers by creating an area within the Website for Educational Innovation where they can search for information on the model corresponding to each core competence approved by UPM. The information available on each competence includes its definition, the formulation of indicators providing evidence on the level of acquisition, the recommended teaching and evaluation methodology, examples of evaluation rules for the different levels of competence acquisition, and descriptions of best practices. These best practices correspond to pilot tests applied to several of the academic subjects taught at UPM in order to validate the model. This work describes the general procedure that was used and presents the model developed specifically for the problem-solving competence. Some of the pilot experiences are also summarised and their results analysed.
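The mapping from indicator evidence to a level of competence acquisition can be sketched as a small rubric lookup. The rubric labels, the 0-3 indicator scale, and the averaging rule are hypothetical illustrations, not UPM's actual model.

```python
# Hypothetical sketch (rubric labels and indicator scale invented) of mapping
# indicator evidence to a competence-acquisition level.
RUBRIC = {0: "not acquired", 1: "basic", 2: "proficient", 3: "advanced"}

def competence_level(indicator_scores):
    """Map the average indicator score (0-3 scale) to a rubric level."""
    average = sum(indicator_scores) / len(indicator_scores)
    return RUBRIC[round(average)]

print(competence_level([2, 3, 2]))  # → proficient
```

In practice a model like the one described would weight indicators and define per-level evaluation rules rather than a plain average; this sketch only shows the indicator-to-level shape of the idea.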


Abstract:

OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web

1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS

Computational Linguistics is already a consolidated research area. It builds upon the results of two other major ones, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Possibly, its most well-known applications are the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs. These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that perhaps are not so well-known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.

Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate. However, linguistic annotation tools still have some limitations, which can be summarised as follows:

1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.

A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitation stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved.

In addition, most high-level annotation tools rely on other, lower-level annotation tools and their outputs to generate their own ones. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic one) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies would be transferred to (and even magnified in) the annotations of the high-level annotation tool. Therefore, it would be quite useful to find a way to (i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools; and (ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. There again, ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools. Thus, to summarise, the main aim of the present work was to combine these hitherto separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.

2. GOALS OF THE PRESENT WORK

As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents).
This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based triples, as in the usual Semantic Web languages (namely RDF(S) and OWL), in order for the model to be considered suitable for the Semantic Web. Besides, to be useful for the Semantic Web, this model should provide a way to automate the annotation of web pages. As for the present work, this requirement involved reusing the linguistic annotation tools purchased by the OEG research group (http://www.oeg-upm.net), but solving beforehand (or, at least, minimising) some of their limitations. Therefore, this model had to minimise these limitations by means of the integration of several linguistic annotation tools into a common architecture. Since this integration required the interoperation of tools and their annotations, ontologies were proposed as the main technological component to make them effectively interoperate. From the very beginning, it seemed that the formalisation of the elements and the knowledge underlying linguistic annotations within an appropriate set of ontologies would be a great step forward towards the formulation of such a model (henceforth referred to as OntoTag). Obviously, first, to combine the results of the linguistic annotation tools that operated at the same level, their annotation schemas had to be unified (or, preferably, standardised) in advance. This entailed the unification (i.e., standardisation) of their tags (both their representation and their meaning), and their format or syntax.
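The triple-structured, ontology-based annotation the model calls for can be illustrated with a stdlib-only sketch. The namespace, property names, and token identifier below are hypothetical and are not OntoTag's actual ontological vocabulary.

```python
# Stdlib-only sketch of subject-predicate-object, ontology-based triples;
# the namespace and property names are made up for this illustration.
LING = "http://example.org/ling#"   # hypothetical ontology namespace

def triple(subject, predicate, obj):
    """Build one ontology-based triple, expanding names against LING."""
    return (LING + subject, LING + predicate, obj)

annotation = [
    triple("token_1", "hasForm", "runs"),        # the annotated word form
    triple("token_1", "hasPOS", LING + "Verb"),  # POS tag as an ontological term
    triple("token_1", "hasLemma", "run"),
]
for s, p, o in annotation:
    print(s, p, o)
```

In an actual Semantic Web setting these triples would be serialised in RDF (e.g. Turtle), with the predicates and the `Verb` class defined in the annotation ontologies rather than in an example namespace.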
Second, to merge the results of the linguistic annotation tools operating at different levels, their respective annotation schemas had to be (a) made interoperable and (b) integrated. And third, in order for the resulting annotations to suit the Semantic Web, they had to be specified by means of an ontology-based vocabulary, and structured by means of ontology-based triples, as hinted above. Therefore, a new annotation scheme had to be devised, based both on ontologies and on this type of triples, which allowed for the combination and the integration of the annotations of any set of linguistic annotation tools. This annotation scheme was considered a fundamental part of the model proposed here, and its development was, accordingly, another major objective of the present work. All these goals, aims and objectives can be re-stated more clearly as follows:

Goal 1: Development of a set of ontologies for the formalisation of the linguistic knowledge relating to linguistic annotation.
Sub-goal 1.1: Ontological formalisation of the EAGLES (1996a; 1996b) de facto standards for morphosyntactic and syntactic annotation, in a way that helps respect the triple structure recommended for annotations in these works (which is isomorphic to the triple structures used in the context of the Semantic Web).
Sub-goal 1.2: Incorporation into this preliminary ontological formalisation of other existing standards and standard proposals relating to the levels mentioned above, such as those currently under development within ISO/TC 37 (the ISO Technical Committee dealing with Terminology, which also deals with linguistic resources and annotations).
Sub-goal 1.3: Generalisation and extension of the recommendations in EAGLES (1996a; 1996b) and ISO/TC 37 to the semantic level, for which no ISO/TC 37 standards have been developed yet.
Sub-goal 1.4: Ontological formalisation of the generalisations and/or extensions obtained in the previous sub-goal as generalisations and/or extensions of the corresponding ontology (or ontologies).
Sub-goal 1.5: Ontological formalisation of the knowledge required to link, combine and unite the knowledge represented in the previously developed ontology (or ontologies).

Goal 2: Development of OntoTag’s annotation scheme, a standard-based abstract scheme for the hybrid (linguistically-motivated and ontology-based) annotation of texts.
Sub-goal 2.1: Development of the standard-based morphosyntactic annotation level of OntoTag’s scheme. This level should include, and possibly extend, the recommendations of EAGLES (1996a) and also the recommendations included in the ISO/MAF (2008) standard draft.
Sub-goal 2.2: Development of the standard-based syntactic annotation level of the hybrid abstract scheme. This level should include, and possibly extend, the recommendations of EAGLES (1996b) and the ISO/SynAF (2010) standard draft.
Sub-goal 2.3: Development of the standard-based semantic annotation level of OntoTag’s (abstract) scheme.
Sub-goal 2.4: Development of the mechanisms for a convenient integration of the three annotation levels already mentioned. These mechanisms should take into account the recommendations included in the ISO/LAF (2009) standard draft.

Goal 3: Design of OntoTag’s (abstract) annotation architecture, an abstract architecture for the hybrid (semantic) annotation of texts (i) that facilitates the integration and interoperation of different linguistic annotation tools, and (ii) whose results comply with OntoTag’s annotation scheme.
Sub-goal 3.1: Specification of the decanting processes that allow for the classification and separation, according to their corresponding levels, of the results of the linguistic tools annotating at several different levels.
Sub-goal 3.2: Specification of the standardisation processes that allow (a) complying with the standardisation requirements of OntoTag’s annotation scheme, as well as (b) combining the results of those linguistic tools that share some level of annotation.
Sub-goal 3.3: Specification of the merging processes that allow for the combination of the output annotations and the interoperation of those linguistic tools that share some level of annotation.
Sub-goal 3.4: Specification of the merge processes that allow for the integration of the results and the interoperation of those tools performing their annotations at different levels.

Goal 4: Generation of OntoTagger’s schema, a concrete instance of OntoTag’s abstract scheme for a concrete set of linguistic annotations. These linguistic annotations result from the tools and the resources available in the research group, namely:
• Bitext’s DataLexica (http://www.bitext.com/EN/datalexica.asp),
• LACELL’s (POS) tagger (http://www.um.es/grupos/grupo-lacell/quees.php),
• Connexor’s FDG (http://www.connexor.eu/technology/machinese/glossary/fdg/), and
• EuroWordNet (Vossen et al., 1998).
This schema should help evaluate OntoTag’s underlying hypotheses, stated below. Consequently, it should implement, at least, those levels of the abstract scheme dealing with the annotations of the set of tools considered in this implementation. This includes the morphosyntactic, the syntactic and the semantic levels.

Goal 5: Implementation of OntoTagger’s configuration, a concrete instance of OntoTag’s abstract architecture for this set of linguistic tools and annotations. This configuration (1) had to use the schema generated in the previous goal; and (2) should help support or refute the hypotheses of this work as well (see the next section).
Sub-goal 5.1: Implementation of the decanting processes that facilitate the classification and separation of the results of those linguistic resources that provide annotations at several different levels (on the one hand, LACELL’s tagger operates at the morphosyntactic level and, minimally, also at the semantic level; on the other hand, FDG operates at the morphosyntactic and the syntactic levels and, minimally, at the semantic level as well).

Sub-goal 5.2: Implementation of the standardisation processes that allow (i) specifying the results of those linguistic tools that share some level of annotation according to the requirements of OntoTagger’s schema, as well as (ii) combining these shared-level results. In particular, all the tools selected perform morphosyntactic annotations, which had to be conveniently combined by means of these processes.

Sub-goal 5.3: Implementation of the merging processes that allow for the combination (and possibly the improvement) of the annotations and the interoperation of the tools that share some level of annotation (in particular, those relating to the morphosyntactic level, as in the previous sub-goal).

Sub-goal 5.4: Implementation of the merging processes that allow for the integration of the different standardised and combined annotations aforementioned, relating all the levels considered.

Sub-goal 5.5: Improvement of the semantic level of this configuration by adding a named entity recognition, (sub-)classification and annotation subsystem, which also uses the annotated named entities to populate a domain ontology, in order to provide a concrete application of the present work in the two areas involved (the Semantic Web and Corpus Linguistics).

3. MAIN RESULTS: ASSESSMENT OF ONTOTAG’S UNDERLYING HYPOTHESES

The model developed in the present thesis tries to shed some light on (i) whether linguistic annotation tools can effectively interoperate; (ii) whether their results can be combined and integrated; and, if they can, (iii) how they can, respectively, interoperate and be combined and integrated. Accordingly, several hypotheses had to be supported (or rejected) by the development of the OntoTag model and OntoTagger (its implementation). The hypotheses underlying OntoTag are surveyed below. Only one of the hypotheses (H.6) was rejected; the other five could be confirmed.

H.1 The annotations of different levels (or layers) can be integrated into a sort of overall, comprehensive, multilayer and multilevel annotation, so that their elements can complement and refer to each other.
• CONFIRMED by the development of:
o OntoTag’s annotation scheme,
o OntoTag’s annotation architecture,
o OntoTagger’s (XML, RDF, OWL) annotation schemas,
o OntoTagger’s configuration.

H.2 Tool-dependent annotations can be mapped onto a sort of tool-independent annotations and, thus, can be standardised.
• CONFIRMED by means of the standardisation phase incorporated into OntoTag and OntoTagger for the annotations yielded by the tools.

H.3 Standardisation should ease:
H.3.1: The interoperation of linguistic tools.
H.3.2: The comparison, combination (at the same level and layer) and integration (at different levels or layers) of annotations.
• H.3 was CONFIRMED by means of the development of OntoTagger’s ontology-based configuration:
o Interoperation, comparison, combination and integration of the annotations of three different linguistic tools (Connexor’s FDG, Bitext’s DataLexica and LACELL’s tagger);
o Integration of EuroWordNet-based, domain-ontology-based and named entity annotations at the semantic level;
o Integration of morphosyntactic, syntactic and semantic annotations.
H.4 Ontologies and Semantic Web technologies (can) play a crucial role in the standardisation of linguistic annotations, by providing consensual vocabularies and standardised formats for annotation (e.g., RDF triples).
• CONFIRMED by means of the development of OntoTagger’s RDF-triple-based annotation schemas.

H.5 The rate of errors introduced by a linguistic tool at a given level, when annotating, can be reduced automatically by contrasting and combining its results with those coming from other tools operating at the same level, even when these other tools are built following a different technological (stochastic vs. rule-based, for example) or theoretical (dependency-based vs. HPSG-based, for instance) approach.
• CONFIRMED by the results yielded by the evaluation of OntoTagger.

H.6 Each linguistic level can be managed and annotated independently.
• REJECTED by OntoTagger’s experiments and by the dependencies observed among the morphosyntactic annotations, and between them and the syntactic annotations.

In fact, Hypothesis H.6 was already rejected when OntoTag’s ontologies were developed. We observed then that several linguistic units stand on an interface between levels, thereby belonging to both of them (such as morphosyntactic units, which belong to both the morphological level and the syntactic level). Therefore, the annotations of these levels overlap and cannot be handled independently when merged into a unique multileveled annotation.

4. OTHER MAIN RESULTS AND CONTRIBUTIONS

First, interoperability is a hot topic for both the linguistic annotation community and the whole Computer Science field. The specification (and implementation) of OntoTag’s architecture for the combination and integration of linguistic (annotation) tools and annotations by means of ontologies shows a way to make these different linguistic annotation tools and annotations interoperate in practice.
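The RDF-triple format mentioned in H.4 can be sketched without any RDF library: a triple is just a (subject, predicate, object) statement about an annotated unit. The namespace and property names below are invented for illustration; OntoTagger's actual RDF/OWL vocabularies are not reproduced here.

```python
# Hedged sketch: a morphosyntactic annotation expressed as RDF-style
# triples. The example.org namespace and property names are invented.
EX = "http://example.org/annotation#"

def annotate_token(token_id, lemma, pos):
    """Return the annotation of one token as a list of triples."""
    subject = EX + token_id
    return [
        (subject, EX + "lemma", lemma),
        (subject, EX + "pos", pos),
    ]

triples = annotate_token("t1", "run", "verb")
assert (EX + "t1", EX + "pos", "verb") in triples
```

Because every tool's output can be serialised as such triples over a shared vocabulary, annotations from different tools and levels can sit in one graph and refer to each other, which is the standardisation role H.4 attributes to Semantic Web technologies.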
Second, as mentioned above, the elements involved in linguistic annotation were formalised in a set (or network) of ontologies (OntoTag’s linguistic ontologies).
• On the one hand, OntoTag’s network of ontologies consists of:
− The Linguistic Unit Ontology (LUO), which includes a mostly hierarchical formalisation of the different types of linguistic elements (i.e., units) identifiable in a written text;
− The Linguistic Attribute Ontology (LAO), which also includes a mostly hierarchical formalisation of the different types of features that characterise the linguistic units included in the LUO;
− The Linguistic Value Ontology (LVO), which includes the corresponding formalisation of the different values that the attributes in the LAO can take;
− The OIO (OntoTag’s Integration Ontology), which (a) includes the knowledge required to link, combine and unite the knowledge represented in the LUO, the LAO and the LVO; and (b) can be viewed as a knowledge representation ontology that describes the most elementary vocabulary used in the area of annotation.
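The division of labour among the four ontologies can be pictured with a toy sketch: unit types (the LUO's role), attributes (LAO), admissible values (LVO), and the linking knowledge that says which attributes apply to which units (the OIO's role). All class, attribute and value names here are simplified inventions, not OntoTag's actual vocabulary.

```python
# Toy sketch of the LUO/LAO/LVO/OIO split; names are invented.

class LinguisticUnit:        # LUO: hierarchy of unit types
    pass

class Word(LinguisticUnit):  # a word is one kind of linguistic unit
    pass

# LAO: attributes characterising units; LVO: the values they can take
ATTRIBUTES = {"number": {"singular", "plural"}}

# OIO-style linking knowledge: which attributes apply to which unit type
APPLIES_TO = {Word: {"number"}}

def is_valid(unit_cls, attribute, value):
    """Check an (attribute, value) annotation against the linked ontologies."""
    return (attribute in APPLIES_TO.get(unit_cls, set())
            and value in ATTRIBUTES.get(attribute, set()))

assert is_valid(Word, "number", "plural")
assert not is_valid(Word, "number", "dual")  # value outside the LVO
```

The point of the sketch is only the separation of concerns: units, attributes and values are modelled apart, and a fourth component states how they combine.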
• On the other hand, OntoTag’s ontologies incorporate the knowledge included in the different standards and recommendations for linguistic annotation released so far, such as those developed within the EAGLES and the SIMPLE European projects or by the ISO/TC 37 committee:
− As far as morphosyntactic annotations are concerned, OntoTag’s ontologies formalise the terms in the EAGLES (1996a) recommendations and their corresponding terms within the ISO Morphosyntactic Annotation Framework (ISO/MAF, 2008) standard;
− As for syntactic annotations, OntoTag’s ontologies incorporate the terms in the EAGLES (1996b) recommendations and their corresponding terms within the ISO Syntactic Annotation Framework (ISO/SynAF, 2010) standard draft;
− Regarding semantic annotations, OntoTag’s ontologies generalise and extend the recommendations in EAGLES (1996a; 1996b) and, since no stable standards or standard drafts have been released for semantic annotation by ISO/TC 37 yet, they incorporate the terms in SIMPLE (2000) instead;
− The terms coming from all these recommendations and standards were supplemented by those within the ISO Data Category Registry (ISO/DCR, 2008) and the ISO Linguistic Annotation Framework (ISO/LAF, 2009) standard draft when developing OntoTag’s ontologies.

Third, we showed that the combination of the results of tools annotating at the same level can yield better results (both in precision and in recall) than each tool separately. In particular:

1. OntoTagger clearly outperformed two of the tools integrated into its configuration, namely DataLexica and FDG, in all the combination sub-phases in which they overlapped (i.e., POS tagging, lemma annotation and morphological feature annotation). As for the remaining tool, LACELL’s tagger, it was also outperformed by OntoTagger in POS tagging and lemma annotation, and it did not behave better than OntoTagger in the morphological feature annotation layer.

2. As an immediate result, this implies that (a) this type of combination architecture configuration can be applied in order to improve significantly the accuracy of linguistic annotations; and (b) concerning the morphosyntactic level, this can be regarded as a way of constructing more robust and more accurate POS tagging systems.

Fourth, Semantic Web annotations are usually performed either by humans or by machine learning systems. Both leave much to be desired: the former with respect to their annotation rate; the latter with respect to their (average) precision and recall. In this work, we showed how linguistic tools can be wrapped in order to annotate Semantic Web pages automatically using ontologies. This entails their fast, robust and accurate semantic annotation. By way of example, as mentioned in Sub-goal 5.5, we developed a particular OntoTagger module for the recognition, classification and labelling of named entities, according to the MUC and ACE tagsets (Chinchor, 1997; Doddington et al., 2004). These tagsets were further specified by means of a domain ontology, namely the Cinema Named Entities Ontology (CNEO). This module was applied to the automatic annotation of ten different web pages containing cinema reviews (that is, around 5,000 words). In addition, the named entities annotated with this module were also labelled as instances (or individuals) of the classes included in the CNEO and were then used to populate this domain ontology.

• The statistical results obtained from the evaluation of this particular module of OntoTagger can be summarised as follows. On the one hand, as far as recall (R) is concerned, (R.1) the lowest value was 76.40% (for file 7); (R.2) the highest value was 97.50% (for file 3); and (R.3) the average value was 88.73%.
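The combination idea behind these results (and behind H.5) can be sketched as a simple voting scheme over the standardised outputs of several taggers: for each token, keep the tag most tools agree on. This is an illustrative majority vote, not OntoTagger's actual combination algorithm, and the tags below are invented examples.

```python
# Illustrative majority-vote combination of per-token tags produced by
# several tools at the same annotation level (not OntoTagger's algorithm).
from collections import Counter

def combine(tag_sequences):
    """tag_sequences: one tag list per tool, all over the same tokens."""
    combined = []
    for tags in zip(*tag_sequences):           # align the tools token by token
        combined.append(Counter(tags).most_common(1)[0][0])
    return combined

tool1 = ["noun", "verb", "noun"]
tool2 = ["noun", "verb", "adj"]
tool3 = ["noun", "adj", "noun"]
# Each tool makes one isolated error; the vote cancels them out.
assert combine([tool1, tool2, tool3]) == ["noun", "verb", "noun"]
```

The sketch shows why combining tools built on different approaches helps: their errors tend to fall on different tokens, so the majority recovers the correct tag more often than any single tool.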
On the other hand, as far as the precision rate (P) is concerned, (P.1) its minimum was 93.75% (for file 4); (P.2) its maximum was 100% (for files 1, 5, 7, 8, 9, and 10); and (P.3) its average value was 98.99%.

• These results, which apply to the tasks of named entity annotation and ontology population, are extraordinarily good for both of them. They can be explained by the high accuracy of the annotations provided by OntoTagger at the lower levels (mainly at the morphosyntactic level). However, they should be conveniently qualified, since they might be too domain- and/or language-dependent. Further experiments should determine how our approach behaves in a different domain or a different language, such as French, English, or German.

• In any case, the results of this application of Human Language Technologies to Ontology Population (and, accordingly, to Ontological Engineering) seem very promising and encouraging for these two areas to collaborate and complement each other in the area of semantic annotation.

Fifth, as shown in the State of the Art of this work, there are different approaches and models for the semantic annotation of texts, but all of them focus on a particular view of the semantic level. Clearly, all these approaches and models should be integrated in order to yield a coherent and joint semantic annotation level. OntoTag shows how (i) these semantic annotation layers can be integrated together; and (ii) they can be integrated with the annotations associated with other annotation levels.

Sixth, we identified some recommendations, best practices and lessons learned for annotation standardisation, interoperation and merging. They show how standardisation (via ontologies, in this case) enables the combination, integration and interoperation of different linguistic tools and their annotations into a multilayered (or multileveled) linguistic annotation, which is one of the hot topics in the area of Linguistic Annotation.
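For reference, the precision and recall figures quoted above are standardly computed from entity counts as follows; the counts in the example are invented for illustration and are not the thesis's evaluation data.

```python
# Standard precision/recall computation for named entity annotation.
# The example counts (correct / spurious / missed entities) are invented.

def precision_recall(true_pos, false_pos, false_neg):
    precision = true_pos / (true_pos + false_pos)  # correct / all predicted
    recall = true_pos / (true_pos + false_neg)     # correct / all in gold
    return precision, recall

# e.g. 95 correctly annotated entities, 2 spurious ones, 5 missed:
p, r = precision_recall(95, 2, 5)
assert round(r, 2) == 0.95      # recall = 95/100
assert round(p, 4) == 0.9794    # precision = 95/97
```

A file with no spurious entities thus gets 100% precision even if some entities are missed, which is how several files above reach the 100% precision maximum while recall stays lower.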
And last but not least, OntoTag’s annotation scheme and OntoTagger’s annotation schemas show a way to formalise and annotate coherently and uniformly the different units and features associated with the different levels and layers of linguistic annotation. This is a significant scientific step towards the global standardisation of this area, which is the aim of ISO/TC 37 (in particular, of Subcommittee 4, which deals with the standardisation of linguistic annotations and resources).


Abstraction-Carrying Code (ACC) has recently been proposed as a framework for mobile code safety in which the code supplier provides a program together with an abstraction whose validity entails compliance with a predefined safety policy. The abstraction thus plays the role of safety certificate, and its generation is carried out automatically by a fixed-point analyzer. The advantage of providing a (fixed-point) abstraction to the code consumer is that its validity is checked in a single pass of an abstract interpretation-based checker. A main challenge is to reduce the size of certificates as much as possible while at the same time not increasing checking time. We introduce the notion of reduced certificate, which characterizes the subset of the abstraction that a checker needs in order to validate (and reconstruct) the full certificate in a single pass. Based on this notion, we instrument a generic analysis algorithm with the necessary extensions in order to identify the information relevant to the checker. We also provide a correct checking algorithm together with sufficient conditions for ensuring its completeness. The experimental results within the CiaoPP system show that our proposal is able to greatly reduce the size of certificates in practice.
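The core single-pass checking idea can be pictured with a much simplified sketch: the supplier ships a claimed fixed point of an abstract transfer function, and the consumer validates it by checking that one more application of the function does not change it (a true fixed point is stable). The abstract domain and transfer function below are invented toy examples; ACC's actual domains, the reduced-certificate machinery and the reconstruction step are not modelled here.

```python
# Greatly simplified sketch of fixed-point certificate checking
# (toy abstract domain; not the ACC/CiaoPP implementation).

def is_valid_certificate(transfer, certificate):
    """Single-pass check: a genuine fixed point satisfies f(x) == x."""
    return transfer(certificate) == certificate

# Toy abstract domain: the set of program variables known to be "safe".
# The transfer function adds variables derivable from already-safe ones.
RULES = {"y": {"x"}, "z": {"x", "y"}}  # variable -> its prerequisites

def transfer(safe):
    derived = {v for v, prereqs in RULES.items() if prereqs <= safe}
    return safe | derived

assert is_valid_certificate(transfer, {"x", "y", "z"})   # stable: accepted
assert not is_valid_certificate(transfer, {"x"})         # still growing: rejected
```

The asymmetry the abstract exploits is visible even here: producing the certificate requires iterating `transfer` to a fixed point, while checking it requires only one application, which is why shrinking the shipped certificate without adding checker passes is the interesting trade-off.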