818 results for Hiking -- Tools and equipment


Relevance:

100.00%

Publisher:

Abstract:

Environmental quality monitoring of water resources faces the challenge of providing the basis for safeguarding the environment against adverse biological effects of anthropogenic chemical contamination from diffuse and point sources. While current regulatory efforts focus on monitoring and assessing a few legacy chemicals, many more anthropogenic chemicals can be detected simultaneously in our aquatic resources. However, exposure to chemical mixtures does not necessarily translate into adverse biological effects, nor does it clearly indicate whether mitigation measures are needed. Thus, the question of which mixtures are present, and which have associated combined effects, becomes central for defining adequate monitoring and assessment strategies. Here we describe the vision of the international, EU-funded project SOLUTIONS, where three routes are explored to link the occurrence of chemical mixtures at specific sites to the assessment of adverse biological combination effects. First of all, multi-residue target and non-target screening techniques covering a broader range of anticipated chemicals co-occurring in the environment are being developed. By improving sensitivity and detection limits for known bioactive compounds of concern, new analytical chemistry data for multiple components can be obtained and used to characterise priority mixtures. This information on chemical occurrence will be used to predict mixture toxicity and to derive combined effect estimates suitable for advancing environmental quality standards. Secondly, bioanalytical tools will be explored to provide aggregate bioactivity measures integrating all components that produce common (adverse) outcomes, even for mixtures of varying compositions. The ambition is to provide comprehensive arrays of effect-based tools and trait-based field observations that link multiple chemical exposures to various environmental protection goals more directly, and to provide improved in situ observations for impact assessment of mixtures. Thirdly, effect-directed analysis (EDA) will be applied to identify major drivers of mixture toxicity. Refinements of EDA include the use of statistical approaches with monitoring information to guide experimental EDA studies. These three approaches will be explored using case studies in the Danube and Rhine river basins as well as rivers of the Iberian Peninsula. The synthesis of findings will be organised to provide guidance for future solution-oriented environmental monitoring and to explore more systematic ways to assess mixture exposures and combination effects in future water quality monitoring.
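As a concrete illustration of the kind of combined-effect estimate such occurrence data can feed, here is a minimal sketch of concentration addition (the "toxic units" approach), a common baseline model for mixture toxicity. The abstract does not specify the models SOLUTIONS actually uses, and the substance concentrations and EC50 values below are invented for the example.

```python
# Hedged sketch: concentration addition ("toxic units") as a baseline
# model for predicting mixture toxicity from chemical occurrence data.
# All numeric values are illustrative, not project data.

def sum_toxic_units(concentrations, ec50s):
    """Sum of toxic units, TU_i = c_i / EC50_i (same units for both)."""
    return sum(c / e for c, e in zip(concentrations, ec50s))

measured_ug_l = [4.0, 20.0, 0.5]   # hypothetical co-occurring contaminants
ec50_ug_l = [10.0, 50.0, 1.0]      # hypothetical single-substance EC50s

tu = sum_toxic_units(measured_ug_l, ec50_ug_l)
print(f"sum of toxic units: {tu:.2f}")  # 1.30 here
# Each component sits below its own EC50, yet the sum exceeds 1, so a
# combined effect at the EC50 level is predicted under this model.
```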

Relevance:

100.00%

Publisher:

Abstract:

Beryllium is a widely distributed, highly toxic metal. When beryllium particulates enter the body, the body's defense mechanisms are engaged. When the body's defenses cannot easily remove the particulates, a damage-and-repair cycle is initiated. This cycle produces chronic beryllium disease (CBD), a progressive, fibrotic respiratory disease which eventually suffocates exposed individuals.

Beryllium disease is an occupational disease, and as such it can be prevented by limiting exposures. In the 1940s, journalists reported beryllium deaths at facilities of the Atomic Energy Commission (AEC), the Department of Energy's (DOE) predecessor organization. These reports energized public pressure for exposure limits, and in 1949 the AEC implemented a 2 μg/m³ permissible exposure limit (PEL).

The limits appeared to stop acute disease. In contrast, CBD has a long latency period between exposure and diagnosable disease, between one and thirty years. The lack of immediate adverse health consequences masked the seriousness of chronic disease and pragmatically removed CBD from AEC/DOE's political concern.

Presently the PEL for beryllium at DOE sites remains at 2 μg/m³. This limit does not prevent CBD. This conclusion has long been known, although denied until recently. In 1999, DOE acknowledged the limit's ineffectiveness in its federal regulation governing beryllium exposure, 10 CFR 850.

Despite this admission, the PEL has not been reduced. The beryllium manufacturer and AEC/DOE have a history of exerting efforts to maintain and protect the status quo. Primary amongst these efforts has been the creation and promotion of disinformation within the peer-reviewed health literature that discusses beryllium, exposures, health effects and treatment, and the targeting of graduate students so that their perspective is shaped early.

Once indoctrinated with incorrect information, professionals tend to overlook aerosol and respiratory mechanics as well as immunologic and carcinogenic factors. They then apply tools and perspectives derived from the beryllium manufacturer's and DOE's propaganda, and the conclusions they draw are incorrect. The result is that health research and associated policy are conducted from incorrect premises, and effective disease-management practices are not implemented.

Public health protection requires recognition of the disinformation and its implications. Once the disinformation is identified, effective health policies and practices can be developed and implemented.

Relevance:

100.00%

Publisher:

Abstract:

In the complex landscape of public education, participants at all levels are searching for policy and practice levers that can raise overall performance and close achievement gaps. The collection of articles in this edition of the Journal of Applied Research on Children takes a big step toward providing the tools and tactics needed for an evidence-based approach to educational policy and practice.

Relevance:

100.00%

Publisher:

Abstract:

The influence of respiratory motion on patient anatomy poses a challenge to accurate radiation therapy, especially in lung cancer treatment. Modern radiation therapy planning uses models of tumor respiratory motion to account for target motion in targeting. The tumor motion model can be verified on a per-treatment-session basis with four-dimensional cone-beam computed tomography (4D-CBCT), which acquires an image set of the dynamic target throughout the respiratory cycle during the therapy session. 4D-CBCT is undersampled if the scan time is too short; however, short scan times are desirable in clinical practice to reduce patient setup time. This dissertation presents the design and optimization of 4D-CBCT to reduce the impact of undersampling artifacts at short scan times. This work measures the impact of undersampling artifacts on the accuracy of target motion measurement under different sampling conditions and for various object sizes and motions. The results provide a minimum scan time such that the target tracking error is less than a specified tolerance. This work also presents new image reconstruction algorithms for reducing undersampling artifacts in undersampled datasets by exploiting the assumption that the relevant motion of interest is contained within a volume of interest (VOI). It is shown that the VOI-based reconstruction provides more accurate image intensity than standard reconstruction. The VOI-based reconstruction produced 43% lower least-squares error inside the VOI and 84% lower error throughout the image in a study designed to simulate target motion. The VOI-based reconstruction approach can reduce acquisition time and improve image quality in 4D-CBCT.
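The dissertation's exact error metric is not spelled out in this abstract; the following minimal sketch shows one plausible way to compare reconstructions by relative least-squares error inside a VOI versus over the whole image, with random arrays standing in for real 4D-CBCT volumes.

```python
import numpy as np

# Hedged sketch of the VOI vs. whole-image error comparison described
# above. The reconstruction step itself is out of scope; the arrays
# below are synthetic placeholders, not study data.

def relative_lsq_error(recon, truth, mask=None):
    """Relative least-squares error, optionally restricted to a VOI mask."""
    if mask is not None:
        recon, truth = recon[mask], truth[mask]
    return np.sum((recon - truth) ** 2) / np.sum(truth ** 2)

rng = np.random.default_rng(0)
truth = rng.random((64, 64, 64))                            # stand-in ground truth
standard = truth + 0.10 * rng.standard_normal(truth.shape)  # noisier "standard" recon
voi_based = truth + 0.05 * rng.standard_normal(truth.shape) # "VOI-based" recon

voi = np.zeros(truth.shape, dtype=bool)
voi[24:40, 24:40, 24:40] = True                             # VOI around the moving target

for name, recon in [("standard", standard), ("VOI-based", voi_based)]:
    print(f"{name:10s} inside VOI: {relative_lsq_error(recon, truth, voi):.4f}  "
          f"whole image: {relative_lsq_error(recon, truth):.4f}")
```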

Relevance:

100.00%

Publisher:

Abstract:

Clinical text understanding (CTU) is of interest to health informatics because critical clinical information, frequently represented as unconstrained text in electronic health records, is extensively used by human experts to guide clinical practice and decision making and to document the delivery of care, but is largely unusable by information systems for queries and computations. Recent initiatives advocating for translational research call for the generation of technologies that can integrate structured clinical data with unstructured data, provide a unified interface to all data, and contextualize clinical information for reuse in the multidisciplinary and collaborative environments envisioned by the CTSA program. This implies that technologies for the processing and interpretation of clinical text should be evaluated not only in terms of their validity and reliability in their intended environment, but also in light of their interoperability and their ability to support information integration and contextualization in a distributed and dynamic environment. This vision adds a new layer of information representation requirements that needs to be accounted for when conceptualizing the implementation or acquisition of clinical text processing tools and technologies for multidisciplinary research. On the other hand, electronic health records frequently contain unconstrained clinical text with high variability in the use of terms and documentation practices, and without commitment to the grammatical or syntactic structure of the language (e.g., triage notes, physician and nurse notes, chief complaints, etc.). This hinders the performance of natural language processing technologies, which typically rely heavily on the syntax of language and the grammatical structure of the text. This document introduces our method for transforming unconstrained clinical text found in electronic health information systems into a formal (computationally understandable) representation that is suitable for querying, integration, contextualization and reuse, and is resilient to the grammatical and syntactic irregularities of clinical text. We present our design rationale, method, and results of an evaluation in processing chief complaints and triage notes from 8 different emergency departments in Houston, Texas. Finally, we discuss the significance of our contribution in enabling the use of clinical text in a practical bio-surveillance setting.
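The paper's actual formalisation method is not detailed in this abstract. As a rough illustration of the text-to-formal-representation step, the sketch below maps free-text chief complaints onto concept identifiers via a synonym table; the phrases and UMLS-style codes are invented for the example, not the system's actual vocabulary.

```python
import re

# Hypothetical sketch: normalising unconstrained chief-complaint text
# to formal concept identifiers so it becomes queryable. The table and
# identifiers below are invented placeholders.

CONCEPTS = {
    "shortness of breath": "C0013404",  # example UMLS-style identifier
    "sob": "C0013404",
    "chest pain": "C0008031",
    "cp": "C0008031",
    "abdominal pain": "C0000737",
    "abd pain": "C0000737",
}

def normalize(chief_complaint: str):
    """Return (concept_id, surface_form) pairs found in free text."""
    text = re.sub(r"[^a-z ]", " ", chief_complaint.lower())
    found = []
    for phrase, cid in CONCEPTS.items():
        if re.search(rf"\b{re.escape(phrase)}\b", text):
            found.append((cid, phrase))
    return found

print(normalize("Pt c/o SOB and chest pain x2 days"))
# -> [('C0013404', 'sob'), ('C0008031', 'chest pain')]
```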

Relevance:

100.00%

Publisher:

Abstract:

The purpose of this study was to design, synthesize and develop novel transporter-targeting agents for image-guided therapy and drug delivery. Two novel agents, N4-guanine (N4amG) and glycopeptide (GP), were synthesized for tumor cell proliferation assessment and as a cancer theranostic platform, respectively. N4amG and GP were synthesized and radiolabeled with 99mTc and 68Ga. The chemical and radiochemical purities as well as the radiochemical stabilities of radiolabeled N4amG and GP were tested. In vitro stability assessment showed that both 99mTc-N4amG and 99mTc-GP were stable for up to 6 hours, whereas 68Ga-GP was stable for up to 2 hours. Cell culture studies confirmed that radiolabeled N4amG and GP could penetrate the cell membrane through nucleoside transporters and amino acid transporters, respectively. Up to 40% of intracellular 99mTc-N4amG and 99mTc-GP was found within the cell nucleus following 2 hours of incubation. Flow cytometry analysis revealed that 99mTc-N4amG is a cell cycle S-phase-specific agent. There was a significant difference in the uptake of 99mTc-GP between pre- and post-paclitaxel-treated cells, which suggests that 99mTc-GP may be useful in chemotherapy treatment monitoring. Moreover, radiolabeled N4amG and GP were tested in vivo using tumor-bearing animal models. 99mTc-N4amG showed an increase in tumor-to-muscle count density ratios of up to 5 at 4-hour imaging. Both 99mTc-labeled agents showed decreased tumor uptake after paclitaxel treatment. Immunohistochemistry analysis demonstrated that the uptake of 99mTc-N4amG was correlated with Ki-67 expression. Both 99mTc-N4amG and 99mTc-GP could differentiate between tumor and inflammation in animal studies. Furthermore, 68Ga-GP was compared to 18F-FDG in rabbit PET imaging studies. 68Ga-GP had lower tumor standardized uptake values (SUV), but similar uptake dynamics and a different biodistribution, compared with 18F-FDG. Finally, to demonstrate that GP can be a potential drug carrier for cancer theranostics, several drugs, including doxorubicin, were conjugated to GP. Imaging studies demonstrated that tumor uptake of GP-drug conjugates increased as a function of time. GP-doxorubicin (GP-DOX) showed a slow-release pattern in an in vitro cytotoxicity assay and exhibited anti-cancer efficacy with reduced toxicity in an in vivo tumor-growth-delay study. In conclusion, both N4amG and GP are transporter-based targeting agents. Radiolabeled N4amG can be used for tumor cell proliferation assessment. GP is a potential agent for image-guided therapy and drug delivery.
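For readers unfamiliar with the SUV metric used in the PET comparison, here is a minimal sketch of the standard body-weight-normalized SUV computation; the input numbers are illustrative, not values from this study.

```python
# Hedged sketch of the body-weight-normalized standardized uptake value
# (SUV), the metric used above to compare 68Ga-GP with 18F-FDG.
# All inputs below are invented for illustration.

def suv_bodyweight(conc_kbq_per_ml, injected_dose_mbq, body_weight_kg):
    """SUV = tissue activity concentration / (injected dose / body weight).

    Assumes ~1 g/mL tissue density so that g and mL interchange.
    """
    dose_kbq = injected_dose_mbq * 1000.0
    weight_g = body_weight_kg * 1000.0
    return conc_kbq_per_ml / (dose_kbq / weight_g)

# Illustrative values for a small-animal imaging session.
print(f"SUV = {suv_bodyweight(5.0, 185.0, 3.5):.2f}")
```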

Relevance:

100.00%

Publisher:

Abstract:

Documented risks of physical activity include reduced bone mineral density at high activity volume, and sudden cardiac death among adults and adolescents. Further illumination of these risks is needed to inform future public health guidelines. The present research seeks to 1) quantify the association between physical activity and bone mineral density (BMD) across a broad range of activity volume, 2) assess the utility of an existing pre-screening questionnaire among US adults, and 3) determine whether pre-screening risk stratification by questionnaire predicts referral to a physician among Texas adolescents.

Among 9,468 adults 20 years of age or older in the National Health and Nutrition Examination Survey (NHANES) 2007-2010, linear regression analyses revealed generally higher BMD at the lumbar spine and proximal femur with greater reported activity volume. Only lumbar BMD in women was unassociated with activity volume. Among men, BMD was similar at activity beyond four times the minimum volume recommended in the Physical Activity Guidelines. These results suggest that the range of activity reported by US adults is not associated with low BMD at either site.

The American Heart Association / American College of Sports Medicine Preparticipation Questionnaire (AAPQ) was applied to 6,661 adults 40 years of age or older from NHANES 2001-2004 by using NHANES responses to complete AAPQ items. Following AAPQ referral criteria, 95.5% of women and 93.5% of men would be referred to a physician before exercise initiation, suggesting little utility for the AAPQ among adults aged 40 years or older. Unnecessary referral before exercise initiation may present a barrier to exercise adoption and may strain an already stressed healthcare infrastructure.

Among 3,181 athletes in the Texas Adolescent Athlete Heart Screening Registry, 55.2% of boys and 62.2% of girls were classified as high-risk based on questionnaire answers. In sex-stratified contingency table analyses, risk categories were not significantly associated with referral to a physician based on electrocardiogram or echocardiogram, nor were they associated with confirmed diagnoses on follow-up. Additional research is needed to identify which symptoms are most closely related to sudden cardiac death, and to determine the best methods for rapid and reliable assessment.

In conclusion, this research suggests that the volume of activity reported by US adults is not associated with low BMD at two clinically relevant sites, casts doubt on the utility of two existing cardiac screening tools, and raises concern about barriers to activity erected through ineffective screening. These findings augment existing research in this area that may inform revisions to the Physical Activity Guidelines regarding risk mitigation.
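The registry analysis described above is a standard contingency-table test of independence. A hedged sketch with invented counts (not the registry's data) might look like this:

```python
from scipy.stats import chi2_contingency

# Hypothetical illustration of the sex-stratified contingency-table
# analysis: questionnaire risk category vs. physician referral.
# Counts are invented placeholders.

#                      referred  not referred
table_boys = [[40, 1060],   # high-risk by questionnaire
              [30,  870]]   # low-risk by questionnaire

chi2, p, dof, expected = chi2_contingency(table_boys)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
# A non-significant p, as reported in the study, would mean the
# questionnaire's risk category did not predict ECG/echo-based referral.
```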

Relevance:

100.00%

Publisher:

Abstract:

This cross-sectional analysis of data from the Third National Health and Nutrition Examination Survey was conducted to determine the prevalence and determinants of asthma and wheezing among US adults, and to identify the occupations and industries at high risk of developing work-related asthma and work-related wheezing. Separate logistic models were developed for physician-diagnosed asthma (MD asthma), wheezing in the previous 12 months (wheezing), work-related asthma and work-related wheezing. Major risk factors, including demographic, socioeconomic, indoor air quality, allergy, and other characteristics, were analyzed. The prevalence of lifetime MD asthma was 7.7% and the prevalence of wheezing was 17.2%. Mexican-Americans exhibited the lowest prevalence of MD asthma (4.8%; 95% confidence interval (CI): 4.2, 5.4) when compared to other race-ethnic groups. The prevalence of MD asthma or wheezing did not vary by gender. Multiple logistic regression analysis showed that Mexican-Americans were less likely to develop MD asthma (adjusted odds ratio (ORa) = 0.64, 95% CI: 0.45, 0.90) and wheezing (ORa = 0.55, 95% CI: 0.44, 0.69) when compared to non-Hispanic whites. Low education level, current and past smoking status, pet ownership, lifetime diagnosis of physician-diagnosed hay fever, and obesity were all significantly associated with MD asthma and wheezing. No significant effect of indoor air pollutants on asthma and wheezing was observed in this study. The prevalence of work-related asthma was 3.70% (95% CI: 2.88, 4.52) and the prevalence of work-related wheezing was 11.46% (95% CI: 9.87, 13.05). The major occupations identified as at risk of developing work-related asthma and wheezing were cleaners; farm and agriculture-related occupations; entertainment-related occupations; protective service occupations; construction; mechanics and repairers; textile; fabricators and assemblers; other transportation and material moving occupations; freight, stock and material movers; motor vehicle operators; and equipment cleaners. The population attributable risks for work-related asthma and wheezing among occupations were 26% and 27%, respectively. The major industries identified as at risk of work-related asthma and wheezing include the entertainment-related industry; agriculture, forestry and fishing; construction; electrical machinery; repair services; and lodging places. Among industries, the population attributable risk was 36.5% for work-related asthma and 28.5% for work-related wheezing. Asthma remains an important public health issue in the US and in other regions of the world.
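The abstract does not state which attributable-risk formula was used; Levin's formula is the classic choice for population-level estimates of this kind, sketched below with illustrative inputs rather than the NHANES III estimates themselves.

```python
# Hedged sketch of Levin's population attributable risk (PAR) formula,
# one common way to obtain figures like the 26-36% reported above.
# The prevalence and relative risk below are invented for illustration.

def levin_par(prevalence_exposed, relative_risk):
    """PAR = Pe*(RR - 1) / (1 + Pe*(RR - 1))."""
    excess = prevalence_exposed * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# e.g., 30% of workers in at-risk occupations with RR ~ 2.2 for asthma
print(f"PAR = {levin_par(0.30, 2.2):.1%}")  # -> PAR = 26.5%
```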

Relevance:

100.00%

Publisher:

Abstract:

This presentation explains a dozen tools and paradigm shifts that teachers should apply in transformative ways when working with their students, how Web 2.0, tagging, and RSS are crucial to this process, and how teachers can develop their own personal learning networks to practice continuous lifelong learning and 'teacher autonomy' before applying these concepts to students.

Relevance:

100.00%

Publisher:

Abstract:

When developing new IT products, reuse of existing components is a key factor that can considerably improve the success rate. This fact has become even more important with the rise of the open source paradigm. However, integrating different products and technologies is not always an easy task. Different communities employ different standards and tools, and often it is not clear which dependencies a particular piece of software has. This is exacerbated by the transitive nature of these dependencies, making component integration a complicated affair. To help reduce this complexity, we propose a model-based repository capable of automatically resolving the required dependencies. This repository needs to be extensible, so that new constraints can be analyzed, and must support federation for integration with other sources of artifacts. The solution we propose achieves this by working with OSGi components and by using OSGi itself.
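As a minimal sketch of the dependency-resolution service such a repository would offer, the toy example below computes the transitive closure of a component's dependencies by breadth-first traversal. The component names and graph are invented, and a real OSGi resolver must additionally handle versions, optional dependencies and cycles.

```python
from collections import deque

# Toy dependency graph standing in for component metadata
# (e.g., OSGi bundle Import-Package/Require-Bundle declarations).
DEPENDS_ON = {
    "app":         ["http-client", "json"],
    "http-client": ["io-core", "tls"],
    "json":        ["io-core"],
    "io-core":     [],
    "tls":         ["io-core"],
}

def resolve(component):
    """Breadth-first closure over the dependency graph, without duplicates."""
    resolved, queue, seen = [], deque([component]), {component}
    while queue:
        current = queue.popleft()
        resolved.append(current)
        for dep in DEPENDS_ON.get(current, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return resolved[1:]  # transitive dependencies, excluding the root

print(resolve("app"))  # -> ['http-client', 'json', 'io-core', 'tls']
```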

Relevance:

100.00%

Publisher:

Abstract:

Profiting from the increasing availability of laser sources delivering intensities above 10⁹ W/cm² with pulse energies in the range of several joules and pulse widths in the range of nanoseconds, laser shock processing (LSP) is being consolidated as an effective technology for improving the surface mechanical and corrosion resistance properties of metals, and it is being developed as a practical process amenable to production engineering. The main acknowledged advantage of the laser shock processing technique consists in its capability of inducing a relatively deep compressive residual stress field in metallic alloy pieces, enabling improved mechanical behaviour and, specifically, improved life of the treated specimens against wear, crack growth and stress corrosion cracking. Following a short description of the theoretical/computational and experimental methods developed by the authors for the predictive assessment and experimental implementation of LSP treatments, experimental results are presented on the residual stress profiles and associated surface property modifications successfully obtained in typical materials (specifically steels and Al and Ti alloys) under different LSP irradiation conditions.
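For physical context, a widely cited first-order estimate of the peak shock pressure in confined-regime LSP is Fabbro's model (Fabbro et al., 1990). The sketch below implements it under illustrative assumptions; the impedance and alpha values are textbook-style placeholders, not the authors' own computational method.

```python
import math

# Hedged sketch of Fabbro's confined-regime estimate:
#   P [GPa] ~= 0.01 * sqrt(alpha / (2*alpha + 3)) * sqrt(Z) * sqrt(I0)
# with Z the reduced acoustic impedance [g cm^-2 s^-1] of the
# target/confinement pair and I0 the laser intensity [GW/cm^2].
# alpha is the fraction of energy devoted to increasing the plasma's
# thermal energy; ~0.1-0.2 is a commonly quoted range.

def peak_pressure_gpa(intensity_gw_cm2, z_g_cm2_s, alpha=0.1):
    return (0.01 * math.sqrt(alpha / (2 * alpha + 3))
            * math.sqrt(z_g_cm2_s) * math.sqrt(intensity_gw_cm2))

# Illustrative: water-confined aluminium (Z ~ 0.3e6 g cm^-2 s^-1)
# at 4 GW/cm^2, i.e., well above the 10^9 W/cm^2 threshold cited above.
print(f"peak pressure ~ {peak_pressure_gpa(4.0, 0.3e6):.1f} GPa")  # ~1.9 GPa
```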

Relevance:

100.00%

Publisher:

Abstract:

The dramatic impact of degenerative neurological pathologies on quality of life is a growing concern. It is well known that many neurological diseases leave a fingerprint in voice and speech production. Many techniques have been designed for the detection, diagnosis and monitoring of neurological disease. Most of them are costly or difficult to extend to primary care medical services. The present paper shows how some neurological diseases can be traced at the level of phonation. The detection procedure would be based on a simple voice test. The availability of advanced tools and methodologies to monitor the organic pathology of voice would facilitate the implementation of these tests. The paper hypothesizes that some of the underlying mechanisms affecting the production of voice produce measurable correlates in vocal fold biomechanics. A general description is given of the methodological foundations of a voice analysis system that can estimate correlates of neurological disease. Some case studies are presented to illustrate the potential of the methodology for monitoring neurological diseases through voice.
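As a toy example of the phonation correlates such a voice test could quantify, the sketch below computes local jitter (cycle-to-cycle variability of the glottal period) from synthetic period estimates. The actual system described estimates richer vocal fold biomechanical parameters; this is only the simplest of the family.

```python
import numpy as np

# Hedged sketch: local jitter as a simple phonation correlate.
# The period sequences below are synthetic, not patient data.

def local_jitter(periods_ms):
    """Mean absolute difference of consecutive periods over the mean period."""
    periods = np.asarray(periods_ms, dtype=float)
    return np.mean(np.abs(np.diff(periods))) / np.mean(periods)

steady = [7.50, 7.52, 7.49, 7.51, 7.50]  # ~133 Hz, stable phonation
tremor = [7.50, 7.90, 7.20, 8.00, 7.10]  # exaggerated instability

print(f"jitter (steady): {local_jitter(steady):.4f}")
print(f"jitter (tremor): {local_jitter(tremor):.4f}")
```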

Relevance:

100.00%

Publisher:

Abstract:

OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web

1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS

Computational Linguistics is already a consolidated research area. It builds upon the results of two other major ones, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Possibly, its most well-known applications are the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs. These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that perhaps are not so well-known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.

Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate. However, linguistic annotation tools still have some limitations, which can be summarised as follows:

1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.

A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitation stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved. In addition, most high-level annotation tools rely on other lower-level annotation tools and their outputs to generate their own ones. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic one) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies would be transferred to (and even magnified in) the annotations of the high-level annotation tool.
Therefore, it would be quite useful to find a way to (i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools; and (ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate. Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. There again, ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools.

Thus, to summarise, the main aim of the present work was to combine, somehow, these separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.

2. GOALS OF THE PRESENT WORK

As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based triples, as in the usual Semantic Web languages (namely RDF(S) and OWL), in order for the model to be considered suitable for the Semantic Web.

Besides, to be useful for the Semantic Web, this model should provide a way to automate the annotation of web pages. As for the present work, this requirement involved reusing the linguistic annotation tools purchased by the OEG research group (http://www.oeg-upm.net), but solving beforehand (or, at least, minimising) some of their limitations. Therefore, this model had to minimise these limitations by means of the integration of several linguistic annotation tools into a common architecture. Since this integration required the interoperation of tools and their annotations, ontologies were proposed as the main technological component to make them effectively interoperate. From the very beginning, it seemed that the formalisation of the elements and the knowledge underlying linguistic annotations within an appropriate set of ontologies would be a great step forward towards the formulation of such a model (henceforth referred to as OntoTag).
Obviously, first, to combine the results of the linguistic annotation tools that operated at the same level, their annotation schemas had to be unified (or, preferably, standardised) in advance. This entailed the unification (i.e., standardisation) of their tags (both their representation and their meaning) and of their format or syntax. Second, to merge the results of the linguistic annotation tools operating at different levels, their respective annotation schemas had to be (a) made interoperable and (b) integrated. And third, in order for the resulting annotations to suit the Semantic Web, they had to be specified by means of an ontology-based vocabulary, and structured by means of ontology-based triples, as hinted above. Therefore, a new annotation scheme had to be devised, based both on ontologies and on this type of triples, which allowed for the combination and the integration of the annotations of any set of linguistic annotation tools. This annotation scheme was considered a fundamental part of the model proposed here, and its development was, accordingly, another major objective of the present work.

All these goals, aims and objectives could be re-stated more clearly as follows:

Goal 1: Development of a set of ontologies for the formalisation of the linguistic knowledge relating to linguistic annotation.
Sub-goal 1.1: Ontological formalisation of the EAGLES (1996a; 1996b) de facto standards for morphosyntactic and syntactic annotation, in a way that respects the triple structure recommended for annotations in these works (which is isomorphic to the triple structures used in the context of the Semantic Web).
Sub-goal 1.2: Incorporation into this preliminary ontological formalisation of other existing standards and standard proposals relating to the levels mentioned above, such as those currently under development within ISO/TC 37 (the ISO Technical Committee dealing with Terminology, which also deals with linguistic resources and annotations).
Sub-goal 1.3: Generalisation and extension of the recommendations in EAGLES (1996a; 1996b) and ISO/TC 37 to the semantic level, for which no ISO/TC 37 standards have been developed yet.
Sub-goal 1.4: Ontological formalisation of the generalisations and/or extensions obtained in the previous sub-goal as generalisations and/or extensions of the corresponding ontology (or ontologies).
Sub-goal 1.5: Ontological formalisation of the knowledge required to link, combine and unite the knowledge represented in the previously developed ontology (or ontologies).

Goal 2: Development of OntoTag’s annotation scheme, a standard-based abstract scheme for the hybrid (linguistically-motivated and ontology-based) annotation of texts.
Sub-goal 2.1: Development of the standard-based morphosyntactic annotation level of OntoTag’s scheme. This level should include, and possibly extend, the recommendations of EAGLES (1996a) and also the recommendations included in the ISO/MAF (2008) standard draft.
Sub-goal 2.2: Development of the standard-based syntactic annotation level of the hybrid abstract scheme. This level should include, and possibly extend, the recommendations of EAGLES (1996b) and the ISO/SynAF (2010) standard draft.
Sub-goal 2.3: Development of the standard-based semantic annotation level of OntoTag’s (abstract) scheme.
Sub-goal 2.4: Development of the mechanisms for a convenient integration of the three annotation levels already mentioned. These mechanisms should take into account the recommendations included in the ISO/LAF (2009) standard draft.
Goal 3: Design of OntoTag’s (abstract) annotation architecture, an abstract architecture for the hybrid (semantic) annotation of texts (i) that facilitates the integration and interoperation of different linguistic annotation tools, and (ii) whose results comply with OntoTag’s annotation scheme.
Sub-goal 3.1: Specification of the decanting processes that allow for the classification and separation, according to their corresponding levels, of the results of the linguistic tools annotating at several different levels.
Sub-goal 3.2: Specification of the standardisation processes that allow (a) complying with the standardisation requirements of OntoTag’s annotation scheme, as well as (b) combining the results of those linguistic tools that share some level of annotation.
Sub-goal 3.3: Specification of the merging processes that allow for the combination of the output annotations and the interoperation of those linguistic tools that share some level of annotation.
Sub-goal 3.4: Specification of the merge processes that allow for the integration of the results and the interoperation of those tools performing their annotations at different levels.

Goal 4: Generation of OntoTagger’s schema, a concrete instance of OntoTag’s abstract scheme for a concrete set of linguistic annotations. These linguistic annotations result from the tools and the resources available in the research group, namely:
• Bitext’s DataLexica (http://www.bitext.com/EN/datalexica.asp),
• LACELL’s (POS) tagger (http://www.um.es/grupos/grupo-lacell/quees.php),
• Connexor’s FDG (http://www.connexor.eu/technology/machinese/glossary/fdg/), and
• EuroWordNet (Vossen et al., 1998).
This schema should help evaluate OntoTag’s underlying hypotheses, stated below. Consequently, it should implement, at least, those levels of the abstract scheme dealing with the annotations of the set of tools considered in this implementation. This includes the morphosyntactic, the syntactic and the semantic levels.

Goal 5: Implementation of OntoTagger’s configuration, a concrete instance of OntoTag’s abstract architecture for this set of linguistic tools and annotations. This configuration (1) had to use the schema generated in the previous goal; and (2) should help support or refute the hypotheses of this work as well (see the next section).
Sub-goal 5.1: Implementation of the decanting processes that facilitate the classification and separation of the results of those linguistic resources that provide annotations at several different levels (on the one hand, LACELL’s tagger operates at the morphosyntactic level and, minimally, also at the semantic level; on the other hand, FDG operates at the morphosyntactic and the syntactic levels and, minimally, at the semantic level as well).
Sub-goal 5.2: Implementation of the standardisation processes that allow (i) specifying the results of those linguistic tools that share some level of annotation according to the requirements of OntoTagger’s schema, as well as (ii) combining these shared level results. In particular, all the tools selected perform morphosyntactic annotations and they had to be conveniently combined by means of these processes.
Sub-goal 5.3: Implementation of the merging processes that allow for the combination (and possibly the improvement) of the annotations and the interoperation of the tools that share some level of annotation (in particular, those relating the morphosyntactic level, as in the previous sub-goal).
Sub-goal 5.4: Implementation of the merging processes that allow for the integration of the different standardised and combined annotations aforementioned, relating to all the levels considered.
Sub-goal 5.5: Improvement of the semantic level of this configuration by adding a named entity recognition, (sub-)classification and annotation subsystem, which also uses the named entities annotated to populate a domain ontology, in order to provide a concrete application of the present work in the two areas involved (the Semantic Web and Corpus Linguistics).

3. MAIN RESULTS: ASSESSMENT OF ONTOTAG’S UNDERLYING HYPOTHESES

The model developed in the present thesis tries to shed some light on (i) whether linguistic annotation tools can effectively interoperate; (ii) whether their results can be combined and integrated; and, if they can, (iii) how they can, respectively, interoperate and be combined and integrated. Accordingly, several hypotheses had to be supported (or rejected) by the development of the OntoTag model and OntoTagger (its implementation). The hypotheses underlying OntoTag are surveyed below. Only one of the hypotheses (H.6) was rejected; the other five could be confirmed.

H.1 The annotations of different levels (or layers) can be integrated into a sort of overall, comprehensive, multilayer and multilevel annotation, so that their elements can complement and refer to each other.
• CONFIRMED by the development of:
o OntoTag’s annotation scheme,
o OntoTag’s annotation architecture,
o OntoTagger’s (XML, RDF, OWL) annotation schemas,
o OntoTagger’s configuration.

H.2 Tool-dependent annotations can be mapped onto a sort of tool-independent annotations and, thus, can be standardised.
• CONFIRMED by means of the standardisation phase incorporated into OntoTag and OntoTagger for the annotations yielded by the tools.

H.3 Standardisation should ease:
H.3.1: The interoperation of linguistic tools.
H.3.2: The comparison, combination (at the same level and layer) and integration (at different levels or layers) of annotations.
• H.3 was CONFIRMED by means of the development of OntoTagger’s ontology-based configuration:
o Interoperation, comparison, combination and integration of the annotations of three different linguistic tools (Connexor’s FDG, Bitext’s DataLexica and LACELL’s tagger);
o Integration of EuroWordNet-based, domain-ontology-based and named entity annotations at the semantic level;
o Integration of morphosyntactic, syntactic and semantic annotations.

H.4 Ontologies and Semantic Web technologies (can) play a crucial role in the standardisation of linguistic annotations, by providing consensual vocabularies and standardised formats for annotation (e.g., RDF triples).
• CONFIRMED by means of the development of OntoTagger’s RDF-triple-based annotation schemas.

H.5 The rate of errors introduced by a linguistic tool at a given level, when annotating, can be reduced automatically by contrasting and combining its results with the ones coming from other tools operating at the same level, even though these other tools might be built following a different technological (stochastic vs. rule-based, for example) or theoretical (dependency-based vs. HPSG-based, for instance) approach.
• CONFIRMED by the results yielded by the evaluation of OntoTagger.

H.6 Each linguistic level can be managed and annotated independently.
• REJECTED by OntoTagger’s experiments and the dependencies observed among the morphosyntactic annotations, and between them and the syntactic annotations.
In fact, Hypothesis H.6 was already rejected when OntoTag’s ontologies were developed. We observed then that several linguistic units stand on an interface between levels, thereby belonging to both of them (such as morphosyntactic units, which belong to both the morphological level and the syntactic level). Therefore, the annotations of these levels overlap and cannot be handled independently when merged into a unique multileveled annotation.

4. OTHER MAIN RESULTS AND CONTRIBUTIONS

First, interoperability is a hot topic for both the linguistic annotation community and the whole Computer Science field. The specification (and implementation) of OntoTag’s architecture for the combination and integration of linguistic (annotation) tools and annotations by means of ontologies shows a way to make these different linguistic annotation tools and annotations interoperate in practice.

Second, as mentioned above, the elements involved in linguistic annotation were formalised in a set (or network) of ontologies (OntoTag’s linguistic ontologies).

• On the one hand, OntoTag’s network of ontologies consists of:
− The Linguistic Unit Ontology (LUO), which includes a mostly hierarchical formalisation of the different types of linguistic elements (i.e., units) identifiable in a written text;
− The Linguistic Attribute Ontology (LAO), which also includes a mostly hierarchical formalisation of the different types of features that characterise the linguistic units included in the LUO;
− The Linguistic Value Ontology (LVO), which includes the corresponding formalisation of the different values that the attributes in the LAO can take;
− The OIO (OntoTag’s Integration Ontology), which (a) includes the knowledge required to link, combine and unite the knowledge represented in the LUO, the LAO and the LVO; and (b) can be viewed as a knowledge representation ontology that describes the most elementary vocabulary used in the area of annotation.

• On the other hand, OntoTag’s ontologies incorporate the knowledge included in the different standards and recommendations for linguistic annotation released so far, such as those developed within the EAGLES and the SIMPLE European projects or by the ISO/TC 37 committee:
− As far as morphosyntactic annotations are concerned, OntoTag’s ontologies formalise the terms in the EAGLES (1996a) recommendations and their corresponding terms within the ISO Morphosyntactic Annotation Framework (ISO/MAF, 2008) standard;
− As for syntactic annotations, OntoTag’s ontologies incorporate the terms in the EAGLES (1996b) recommendations and their corresponding terms within the ISO Syntactic Annotation Framework (ISO/SynAF, 2010) standard draft;
− Regarding semantic annotations, OntoTag’s ontologies generalise and extend the recommendations in EAGLES (1996a; 1996b) and, since no stable standards or standard drafts have been released for semantic annotation by ISO/TC 37 yet, they incorporate the terms in SIMPLE (2000) instead;
− The terms coming from all these recommendations and standards were supplemented by those within the ISO Data Category Registry (ISO/DCR, 2008) and the ISO Linguistic Annotation Framework (ISO/LAF, 2009) standard draft when developing OntoTag’s ontologies.

Third, we showed that the combination of the results of tools annotating at the same level can yield better results (both in precision and in recall) than each tool separately. In particular:
1. OntoTagger clearly outperformed two of the tools integrated into its configuration, namely DataLexica and FDG, in all the combination sub-phases in which they overlapped (i.e. POS tagging, lemma annotation and morphological feature annotation). As far as the remaining tool is concerned, i.e. LACELL’s tagger, it was also outperformed by OntoTagger in POS tagging and lemma annotation, and it did not behave better than OntoTagger in the morphological feature annotation layer.
2. As an immediate result, this implies that (a) this type of combination architecture configuration can be applied in order to significantly improve the accuracy of linguistic annotations; and (b) concerning the morphosyntactic level, this could be regarded as a way of constructing more robust and more accurate POS tagging systems.

Fourth, Semantic Web annotations are usually performed by humans or else by machine learning systems. Both of them leave much to be desired: the former, with respect to their annotation rate; the latter, with respect to their (average) precision and recall. In this work, we showed how linguistic tools can be wrapped in order to annotate Semantic Web pages automatically using ontologies. This entails their fast, robust and accurate semantic annotation. By way of example, as mentioned in Sub-goal 5.5, we developed a particular OntoTagger module for the recognition, classification and labelling of named entities, according to the MUC and ACE tagsets (Chinchor, 1997; Doddington et al., 2004). These tagsets were further specified by means of a domain ontology, namely the Cinema Named Entities Ontology (CNEO). This module was applied to the automatic annotation of ten different web pages containing cinema reviews (that is, around 5,000 words). In addition, the named entities annotated with this module were also labelled as instances (or individuals) of the classes included in the CNEO and were then used to populate this domain ontology.

• The statistical results obtained from the evaluation of this particular module of OntoTagger can be summarised as follows. On the one hand, as far as recall (R) is concerned, (R.1) the lowest value was 76.40% (for file 7); (R.2) the highest value was 97.50% (for file 3); and (R.3) the average value was 88.73%. On the other hand, as far as the precision rate (P) is concerned, (P.1) its minimum was 93.75% (for file 4); (P.2) its maximum was 100% (for files 1, 5, 7, 8, 9, and 10); and (P.3) its average value was 98.99%.
• These results, which apply to the tasks of named entity annotation and ontology population, are extraordinarily good for both of them. They can be explained on the basis of the high accuracy of the annotations provided by OntoTagger at the lower levels (mainly at the morphosyntactic level). However, they should be conveniently qualified, since they might be too domain- and/or language-dependent. Further experiments should assess how our approach works in a different domain or a different language, such as French, English, or German.
• In any case, the results of this application of Human Language Technologies to Ontology Population (and, accordingly, to Ontological Engineering) seem very promising and encouraging for these two areas to collaborate and complement each other in the area of semantic annotation.

Fifth, as shown in the State of the Art of this work, there are different approaches and models for the semantic annotation of texts, but all of them focus on a particular view of the semantic level.
Clearly, all these approaches and models should be integrated in order to yield a coherent and joint semantic annotation level. OntoTag shows how (i) these semantic annotation layers could be integrated together; and (ii) they could be integrated with the annotations associated with other annotation levels.

Sixth, we identified some recommendations, best practices and lessons learned for annotation standardisation, interoperation and merging. They show how standardisation (via ontologies, in this case) enables the combination, integration and interoperation of different linguistic tools and their annotations into a multilayered (or multileveled) linguistic annotation, which is one of the hot topics in the area of Linguistic Annotation.

And last but not least, OntoTag’s annotation scheme and OntoTagger’s annotation schemas show a way to formalise and annotate coherently and uniformly the different units and features associated with the different levels and layers of linguistic annotation. This is a great scientific step ahead towards the global standardisation of this area, which is the aim of ISO/TC 37 (in particular, Subcommittee 4, dealing with the standardisation of linguistic annotations and resources).
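As a minimal, hypothetical sketch of what an ontology-based, triple-structured annotation in OntoTag's spirit could look like, the example below encodes one token's morphosyntactic annotation as RDF triples with rdflib. The namespace, class and property names are invented for illustration; they are not OntoTag's actual LUO/LAO/LVO vocabulary.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

# Invented, OntoTag-like vocabulary: a unit class (LUO-style), an
# attribute property (LAO-style) and a value individual (LVO-style).
ONTO = Namespace("http://example.org/ontotag-like#")
DOC = Namespace("http://example.org/doc1#")

g = Graph()
g.bind("onto", ONTO)

token = DOC["token_1"]
g.add((token, RDF.type, ONTO.Noun))               # linguistic unit
g.add((token, ONTO.hasLemma, Literal("cinema")))  # linguistic attribute
g.add((token, ONTO.hasNumber, ONTO.Singular))     # linguistic value

print(g.serialize(format="turtle"))
```

Serialising to Turtle makes the triple structure explicit, which is exactly the property that makes such annotations directly consumable by RDF(S)/OWL-based Semantic Web tooling.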