851 results for Systems and practices


Relevance: 100.00%

Abstract:

Ore-forming and geoenvironmental systems commonly involve coupled fluid flow and chemical reaction processes. Advanced numerical methods and computational modeling have become indispensable tools for simulating such processes in recent years. This enables many hitherto unsolvable geoscience problems to be addressed using numerical methods and computational modeling approaches. For example, computational modeling has been successfully used to solve ore-forming and mine-site contamination/remediation problems, in which fluid flow and geochemical processes play important roles in the controlling dynamics. The main purpose of this paper is to present a generalized overview of: (1) the various classes and models associated with fluid flow/chemically reacting systems, in order to highlight possible opportunities and developments for the future; (2) some more general issues that need attention in the development of computational models and codes for simulating ore-forming and geoenvironmental systems; (3) the progress achieved in geochemical modeling over the past 50 years or so; (4) the general methodology for modeling ore-forming and geoenvironmental systems; and (5) the future development directions associated with modeling of ore-forming and geoenvironmental systems.
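To make the coupled fluid flow/chemical reaction idea concrete, the following is a minimal, generic sketch (not taken from the paper or its codes; all parameter values are arbitrary) of an explicit finite-difference step for one-dimensional advective transport of a solute undergoing a first-order reaction:

```python
# Minimal illustrative sketch (not the paper's codes): explicit finite-difference
# update for 1-D advective transport of a reactive solute, a toy version of the
# coupled fluid-flow / chemical-reaction problems discussed above.
# All parameter values are arbitrary placeholders.
import numpy as np

nx, dx, dt = 200, 1.0, 0.4   # grid cells, cell size [m], time step [s]
v, k = 1.0, 0.01             # pore-fluid velocity [m/s], first-order reaction rate [1/s]
c = np.zeros(nx)             # solute concentration
c[0] = 1.0                   # fixed inlet concentration (boundary condition)

for step in range(500):
    # upwind advection term plus first-order decay (e.g. precipitation of an ore mineral)
    c[1:] = c[1:] - v * dt / dx * (c[1:] - c[:-1]) - k * dt * c[1:]
    c[0] = 1.0               # re-impose inlet boundary

print("concentration at outlet after 500 steps:", round(c[-1], 4))
```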

Relevance: 100.00%

Abstract:

In the framework of the European Research Area for Agricultural Research for Development (ERA-ARD) project, a survey of innovative approaches in capacity development (CD) was undertaken. All Consortium members were asked to describe innovative approaches and best practices of CD mechanisms within their ARD programmes. A tabular overview of all the programmes or mechanisms can be found on page 4. Abstracts of the programmes or mechanisms are compiled in alphabetic order of the consortium members in this document. The intention of this catalogue of mechanisms is to give an overview of different approaches and practices and not to provide a comprehensive mapping of all the ongoing CD activities of the Consortium members. Thus, the programmes described represent only a fraction of all the ongoing CD programmes on the national level.

Relevance: 100.00%

Abstract:

Agriculture is the backbone of the economy of Tanzania, and its main objective is to ensure food security and the eradication of rural poverty through the promotion of production systems, technologies and practices that are environmentally sound (Tanzania National Environmental Policy, 1999). However, this has not been achieved due to rapid land degradation, which has consequently led to massive soil loss, declining crop yields, disruption of water resources and the destruction of natural resources in general. This report highlights the extent to which agriculture-related activities, such as agronomic and cultural practices like the use of fire for farm preparation and the cutting of trees to meet villagers' needs, have a devastating effect on the quality of the environment. In view of these difficulties, this paper argues that, for the survival, well-being and future of the Uluguru and Usambara people, it is essential to provide continuous training to farmers so that they know how best to continue farming and harvesting forest products on a sustainable basis without causing much harm to the environment. Above all, this paper recommends the introduction of Ngolo cultivation technology on the steep slopes of the Usambara and Uluguru mountains in order to enhance the conservation of the environment.

Relevance: 100.00%

Abstract:

Regime shifts, defined as radical and persistent reconfigurations of an ecosystem following a disturbance, have been acknowledged by scientists as a very important aspect of ecosystem dynamics. However, their consideration in land management planning remains marginal and limited to specific processes and systems. Current research focuses on mathematical modeling and statistical analysis of spatio-temporal data for specific environmental variables. These methods do not fulfill the needs of land managers, who are confronted with a multitude of processes and pressure types and require clear and simple strategies to prevent regime shifts or to increase the resilience of their environment. The EU-FP7 CASCADE project is looking at regime shifts of dryland ecosystems in southern Europe and specifically focuses on rangeland and forest systems which are prone to various land degradation threats. One of the aims of the project is to evaluate the impact of different management practices on the dynamics of the environment in a participatory manner, including a multi-stakeholder evaluation of the state of the environment and of the management potential. To achieve this objective we have organized several stakeholder meetings and we have compiled a review of management practices using the WOCAT methodology, which enables merging scientific and land users' knowledge. We highlight here the main challenges we have encountered in applying the notion of regime shift to real-world socio-ecological systems and in translating related concepts such as tipping points, stable states, hysteresis and resilience to land managers, using concrete examples from CASCADE study sites. We then explore the advantages of including land users' knowledge in the scientific understanding of regime shifts. Moreover, we discuss useful alternative concepts and lessons learnt that will allow us to build a participatory method for the assessment of resilient management practices in specific socio-ecological systems and to foster adaptive dryland management.
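As a purely illustrative aside (not a CASCADE model), the toy grazing model below shows how the tipping-point and hysteresis concepts mentioned above can be demonstrated numerically: sweeping a pressure parameter up and then back down leaves the simulated vegetation on different stable branches at the same intermediate pressure. All parameter values are assumptions chosen only to exhibit the effect.

```python
# Toy sketch of a regime shift with hysteresis (not a CASCADE model): a grazed
# dryland vegetation model dV/dt = r*V*(1 - V/K) - b*V^2/(a^2 + V^2), where the
# grazing pressure b is swept slowly up and then back down.
import numpy as np

r, K, a = 1.0, 10.0, 1.0

def steady_state(b, v0, dt=0.01, steps=20000):
    """Integrate to (approximate) equilibrium from initial biomass v0."""
    v = v0
    for _ in range(steps):
        v += dt * (r * v * (1 - v / K) - b * v * v / (a * a + v * v))
        v = max(v, 1e-6)
    return v

b_values = np.linspace(1.0, 3.0, 41)
v = steady_state(b_values[0], v0=8.0)
up = {}
for b in b_values:                 # slowly increasing pressure
    v = steady_state(b, v)
    up[round(float(b), 2)] = v
down = {}
for b in b_values[::-1]:           # slowly decreasing pressure
    v = steady_state(b, v)
    down[round(float(b), 2)] = v

# At the same intermediate pressure the system sits on different branches,
# i.e. recovery does not retrace the degradation path (hysteresis).
print("biomass at b=2.0, increasing pressure:", round(up[2.0], 2))
print("biomass at b=2.0, decreasing pressure:", round(down[2.0], 2))
```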

Relevance: 100.00%

Abstract:

This paper considers ocean fisheries as complex adaptive systems and addresses the question of how human institutions might be best matched to their structure and function. Ocean ecosystems operate at multiple scales, but the management of fisheries tends to be aimed at a single species considered at a single broad scale. The paper argues that this mismatch of ecological and management scale makes it difficult to address the fine-scale aspects of ocean ecosystems, and leads to fishing rights and strategies that tend to erode the underlying structure of populations and the system itself. A successful transition to ecosystem-based management will require institutions better able to economize on the acquisition of feedback about the impact of human activities. This is likely to be achieved by multiscale institutions whose organization mirrors the spatial organization of the ecosystem and whose communications occur through a polycentric network. Better feedback will allow the exploration of fine-scale science and the employment of fine-scale fishing restraints, better adapted to the behavior of fish and habitat. The scale and scope of individual fishing rights also needs to be congruent with the spatial structure of the ecosystem. Place-based rights can be expected to create a longer private planning horizon as well as stronger incentives for the private and public acquisition of system relevant knowledge.

Relevance: 100.00%

Abstract:

While many tend to think of memory systems in the brain as a single process, in reality several experiments have supported multiple dissociations of different forms of learning, such as spatial learning and response learning. In both humans and rats, the hippocampus has long been shown to be specialized in the storage of spatial and contextual memory, whereas the striatum is associated with motor responses and habitual behaviors. Previous studies have examined how damage to the hippocampus or striatum affects the acquisition of either a spatial or a response navigation task. However, even in a very familiar environment, organisms must continuously switch between place and response strategies depending upon circumstances. The current research investigates how these two brain systems interact under normal conditions to produce navigational behavior. Rats were tested using a task developed by Jacobson and colleagues (2006) in which the two types of navigation could be controlled and studied simultaneously. Rats were trained to solve a plus maze using both a spatial and a response strategy. A cue (flashing light) was employed to indicate the correct strategy on a given trial. When no light was present, the animals were rewarded for making a 90° right turn (motor response). When the light was on, the animals were rewarded for going to a specific goal location (place strategy). After learning the task, animals received either a sham surgery or damage to the dorsal striatum or hippocampus. In order to investigate the individual role of each brain system and evaluate whether these brain regions compete or cooperate for control over strategy, we utilized within-animal comparisons. The configuration of the maze allowed for the comparison of behavior in individual animals before and after specific brain areas were damaged. Animals with hippocampal lesions showed selective deficits on place trials after surgery and learned the reversal of the motor response more rapidly than striatal-lesioned or sham rats. Unlike previous findings regarding maze learning, animals with striatal lesions showed deficits in both place and response trials and had difficulty learning the reversal of the motor response. Therefore, the effects of lesions on the ability to switch back and forth between strategies were more complex than previously suggested. This work may reveal important new insight into the integration of hippocampal and striatal learning systems, and facilitate a better understanding of the brain dynamics underlying similar navigational processes in humans.
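For illustration only, the cue-conditional reward rule described above can be written down in a few lines of code; this is a simplified rendering of the task contingency, not the authors' protocol or analysis code, and all names used are hypothetical:

```python
# Illustrative sketch of the cue-conditional reward rule described above
# (a simplified rendering of a Jacobson et al. (2006)-style task, not the
# authors' actual protocol or code).
def rewarded(light_on: bool, action: str, goal_arm: str, entered_arm: str) -> bool:
    """Return True if the trial is rewarded.

    light_on    -- flashing-light cue present (signals the place strategy)
    action      -- the rat's egocentric response, e.g. 'right_turn_90'
    goal_arm    -- fixed spatial goal location for place trials
    entered_arm -- arm the rat actually entered
    """
    if light_on:
        return entered_arm == goal_arm       # place strategy: go to the goal location
    return action == "right_turn_90"         # response strategy: 90-degree right turn

# Example trials
print(rewarded(light_on=True,  action="left_turn_90",  goal_arm="north", entered_arm="north"))  # True
print(rewarded(light_on=False, action="right_turn_90", goal_arm="north", entered_arm="east"))   # True
```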

Relevance: 100.00%

Abstract:

Evaporative cooling systems continue to be associated with outbreaks of Legionnaires' disease despite widely available maintenance guidelines intended to reduce these outbreaks. Yet the guidelines vary widely in the recommendations they make for maintaining evaporative cooling systems, and it is unclear whether guidelines were in place or, if they were, whether they were being followed when the outbreaks of Legionnaires' disease occurred. Thus, this study was designed to conduct two systematic reviews of (1) evaporative cooling system maintenance guidelines and (2) published Legionnaires' disease outbreaks. For each maintenance guideline identified in the systematic review, recommended maintenance practices were abstracted and similarities and/or differences in the reported recommendations were assessed. Following the systematic review of outbreak investigations that met the inclusion criteria established for the study, information about the state of the evaporative cooling system during the outbreak investigation was abstracted to summarize, when reported, which maintenance practices were implemented. As expected, the recommended maintenance procedures varied greatly across the guidelines and were not always specific. Overall, the outbreak investigations tended to report similar maintenance issues that were unclear in the maintenance guidelines. Generally, these maintenance issues concerned biocide use, microbiological testing, the frequency of general inspections, and the protocols and frequency of total system cleanings. The role that non-standardized and generalized maintenance guidelines play in the continued association between Legionnaires' disease and evaporative cooling systems is still not fully understood. However, this study suggests that more specific and standardized maintenance guidelines, scientifically established to be effective in controlling Legionella bacteria, are needed, and that these guidelines must then be properly implemented in order to help reduce further Legionnaires' disease outbreaks associated with evaporative cooling systems.

Relevance: 100.00%

Abstract:

Influenza (the flu) is a serious respiratory illness that can cause severe complications, often leading to hospitalization and even death. Influenza epidemics occur in most countries every year, usually during the winter months. Despite recommendations from the Centers for Disease Control and Prevention (CDC) and efforts by health care institutions across the United States, influenza vaccination rates among health care workers in the United States remain low. How to increase the number of vaccinated health care workers is an important public health question and is examined in two journal articles included here.

The first journal article evaluates the effectiveness of an Intranet intervention in increasing the proportion of health care workers (HCWs) who received influenza vaccination. Hospital employees were required to go to the hospital's Intranet and select "vaccine received," "contraindicated," or "declined" from the online questionnaire. Declining employees automatically received an online pop-up window with education about vaccination; managers were provided feedback on employees' participation rates via e-mail messages. Employees were reminded of the Intranet requirement in articles in the employee newsletter and on the hospital's Intranet. Reminders about the Intranet questionnaire were provided through managers and newsletters to the HCWs. Fewer than half the employees (43.7%) completed the online questionnaire. Yet the hospital witnessed a statistically significant increase in the percentage of employees who received the flu vaccine at the hospital – 48.5% in the 2008-09 season as compared to 36.5%, 38.5% and 29.8% in the previous three years (P < 0.05).

The second article assesses current interventions employed by hospitals, health systems and nursing homes to determine which policies have been the most effective in boosting vaccination rates among American health care workers. A systematic review of research published between January 1994 and March 2010 suggests that education is necessary but not usually sufficient to increase vaccine uptake. Education about the flu and flu vaccines is most effective when complemented with easy access and making the vaccine free, although this combination may not be sufficient to achieve the desired vaccination levels among HCWs. The findings point toward adding incentives for HCWs to get vaccinated and requiring them to record their vaccination status on a declination/consent form – either written or electronic.

Based on these findings, American health care organizations, such as hospitals, nursing homes, and long-term care facilities, should consider using online declination forms as a method for increasing influenza vaccination rates among their employees. These online forms should be used in conjunction with other policies, including free vaccine, mobile distribution and incentives.

To further spur health care organizations to adopt policies and practices that will raise influenza vaccination rates among employees, The Joint Commission – an independent, not-for-profit organization that accredits and certifies more than 17,000 health care organizations and programs in the United States – should consider altering its standards. Currently, The Joint Commission does not require signed declination forms from employees who eschew vaccination; it only echoes the CDC's recommendations: "Health care facilities should require personnel who refuse vaccination to complete a declination form." [1] Because participation in Joint Commission accreditation is required for Medicare reimbursement, action taken by the Joint Commission to require interventions such as mandatory declination/consent forms might result in immediate action by health care organizations to follow these new standards and lead to higher vaccination rates among HCWs.

[1] "Frequently Asked Questions for H1N1 and Seasonal Influenza." The Joint Commission - Infection Control: http://www.jointcommission.org/PatientSafety/InfectionControl/h1n1_faq.htm

Relevance: 100.00%

Abstract:

This Doctoral Thesis, entitled Contribution to the analysis, design and assessment of compact antenna test ranges at millimeter wavelengths, aims to deepen the knowledge of a particular antenna measurement system: the compact range, operating in the millimeter wavelength frequency bands. The thesis has been developed at the Radiation Group (GR), an antenna laboratory belonging to the Signals, Systems and Radiocommunications department (SSR) of the Technical University of Madrid (UPM). The Radiation Group has extensive experience in antenna measurements and currently runs four facilities operating in different configurations: a Gregorian compact antenna test range, a spherical near-field range, a planar near-field range and a semianechoic arch system. The research work performed for this thesis extends the knowledge of the first of these measurement configurations to higher frequencies, beyond the microwave region where the Radiation Group already offers customer-level performance. To reach this high-level purpose, a set of scientific tasks was carried out sequentially; they are succinctly described in the following paragraphs. The first step dealt with the State of the Art review. The study of the scientific literature covered the analysis of measurement practices in compact antenna test ranges together with the particularities of millimeter wavelength technologies. The joint study of both fields of knowledge converged, where these measurement facilities are concerned, on a series of technological challenges that become serious bottlenecks at different stages: analysis, design and assessment. Second, after the overview study, the focus was set on electromagnetic analysis algorithms. These formulations make it possible to approach certain electromagnetic features of interest, such as the field distribution phase or the stray signal analysis of particular structures when they interact with electromagnetic wave sources. Properly operated, a CATR facility features electromagnetic wave collimation optics that are large in terms of wavelengths. Accordingly, the electromagnetic analysis tasks introduce a large number of mathematical unknowns, which grows with frequency following polynomial laws of different order depending on the algorithm used. In particular, the optics configuration of interest here was the reflection-type serrated-edge collimator. The analysis of these devices requires flexible handling of almost arbitrary scattering geometries, and this flexibility becomes the core of an algorithm's ability to support the subsequent design tasks. The contribution of this thesis to this field consists of a formulation that is powerful both in dealing with various analysis geometries and in computational terms. Two algorithms were developed. While based on the same hybridization principle, they reach different orders of physical accuracy at different computational cost. An inter-comparison of their CATR design capabilities was performed, reaching both qualitative and quantitative conclusions about their scope. Third, interest shifted from analysis and design tasks towards range assessment. Millimeter wavelengths imply strict mechanical tolerances and fine setup adjustment. In addition, the large-number-of-unknowns issue already faced in the analysis stage also appears in the in-chamber field probing stage. The natural decrease of the dynamic range available from semiconductor millimeter-wave sources additionally requires longer integration times at each probing point. These peculiarities increase exponentially the difficulty of performing assessment processes in CATR facilities beyond microwaves. The bottleneck becomes so tight that it compromises range characterization beyond a certain limit frequency, which typically lies in the lowest segment of the millimeter wavelength range, whereas the value of range assessment lies, on the contrary, towards the highest segment. This thesis contributes to this technological scenario by developing quiet-zone probing techniques that achieve substantial data reduction ratios. Collaterally, they increase the robustness of the results to noise, which amounts to a virtual increase of the setup's available dynamic range. Fourth, the environmental sensitivity of millimeter wavelengths was addressed. The drift of electromagnetic experiments due to the dependence of the results on the surrounding environment is well known. At millimeter wavelengths, this relegates many practices that are industrial at microwave frequencies to the experimental stage. In particular, the evolution of the atmosphere within acceptable conditioning bounds results in drift phenomena that completely mask the experimental results. The contribution of this thesis in this respect consists of electrically modeling the indoor atmosphere of a CATR as a function of the environmental variables that affect the range's performance. A simple model was developed, able to relate high-level phenomena, such as feed-probe phase drift, to low-level magnitudes that are easy to sample: relative humidity and temperature. With this model, environmental compensation can be performed and chamber conditioning is effectively extended towards higher frequencies. In summary, the purpose of this thesis is to deepen the knowledge of millimeter-wavelength compact antenna test ranges. This knowledge is organized along the sequential stages of a CATR's conception, from early low-level electromagnetic analysis to the assessment of an operative facility, stages at each of which bottleneck phenomena currently exist and seriously compromise antenna measurement practice at millimeter wavelengths.
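As a rough illustration of the kind of environmental model described above (not the thesis' actual formulation), the sketch below estimates feed-probe phase drift from a change in indoor temperature and relative humidity, using the standard radio refractivity expression and the Magnus approximation for saturation vapour pressure; all numerical values are illustrative.

```python
# Sketch in the spirit of the environmental model described above (not the
# thesis' actual formulation): estimate the feed-probe phase drift over a given
# path length caused by a change in indoor temperature and relative humidity,
# using an ITU-R P.453-style radio refractivity expression and the Magnus
# approximation for saturation vapour pressure.
import math

C0 = 299792458.0  # speed of light [m/s]

def refractivity(temp_c: float, rh_percent: float, pressure_hpa: float = 1013.25) -> float:
    """Radio refractivity N (n = 1 + N*1e-6) from temperature, humidity, pressure."""
    t_k = temp_c + 273.15
    e_sat = 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))  # saturation vapour pressure [hPa]
    e = rh_percent / 100.0 * e_sat                                # water vapour partial pressure [hPa]
    return 77.6 / t_k * (pressure_hpa + 4810.0 * e / t_k)

def phase_drift_deg(freq_hz: float, path_m: float, cond_a: tuple, cond_b: tuple) -> float:
    """Phase difference [deg] over path_m between two (temperature, humidity) conditions."""
    dn = (refractivity(*cond_b) - refractivity(*cond_a)) * 1e-6
    return 360.0 * freq_hz * path_m * dn / C0

# Example: 10 m feed-probe path at 100 GHz, chamber drifting from 22 C / 40 %RH
# to 23 C / 45 %RH during a long acquisition.
print(round(phase_drift_deg(100e9, 10.0, (22.0, 40.0), (23.0, 45.0)), 2), "deg")
```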

Relevance: 100.00%

Abstract:

Runtime management of distributed information systems is a complex and costly activity. One of the main challenges that must be addressed is obtaining a complete and updated view of all the managed runtime resources. This article presents a monitoring architecture for heterogeneous and distributed information systems. It is composed of two elements: an information model and an agent infrastructure. The model copes with the complexity and variability of these systems and enables abstraction from non-relevant details. The infrastructure uses this information model to monitor and manage the modeled environment, performing and detecting changes at run time. The agent infrastructure is further detailed, and its components and the relationships between them are explained. Moreover, the proposal is validated through a set of agents that instrument the JEE GlassFish application server, paying special attention to supporting distributed configuration scenarios.
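A minimal sketch of the information-model/agent split described above is given below; it is written in Python for illustration, whereas the authors' implementation targets the JEE GlassFish server, and all class and attribute names here are assumptions.

```python
# Minimal sketch (not the authors' implementation, which targets JEE/GlassFish):
# a tiny information model for managed runtime resources plus an agent that
# polls a resource and reports changes, illustrating the model/agent split
# described above.
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class ManagedResource:
    """Abstracted view of a runtime resource (server, pool, deployed app, ...)."""
    resource_id: str
    kind: str                                   # e.g. "app-server", "jdbc-pool"
    attributes: Dict[str, str] = field(default_factory=dict)

class MonitoringAgent:
    """Agent that keeps the model in sync with the observed environment."""
    def __init__(self, resource: ManagedResource, probe: Callable[[], Dict[str, str]]):
        self.resource = resource
        self.probe = probe                      # function that reads the live state

    def poll(self) -> Dict[str, str]:
        """Return the attributes that changed since the last poll and update the model."""
        observed = self.probe()
        changed = {k: v for k, v in observed.items() if self.resource.attributes.get(k) != v}
        self.resource.attributes.update(observed)
        return changed

# Usage with a fake probe standing in for a real management API.
server = ManagedResource("srv-1", "app-server")
agent = MonitoringAgent(server, probe=lambda: {"status": "running", "heap_used_mb": "512"})
print(agent.poll())   # first poll reports everything as changed
print(agent.poll())   # second poll reports no changes
```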

Relevance: 100.00%

Abstract:

This paper describes new approaches to improve the local and global approximation (matching) and modeling capability of the Takagi–Sugeno (T-S) fuzzy model. The main aim is to obtain high function approximation accuracy and fast convergence. The main problem encountered is that the T-S identification method cannot be applied when the membership functions overlap by pairs. This restricts the application of the T-S method, because this type of membership function has been widely used during the last two decades in the stability analysis and controller design of fuzzy systems and is popular in industrial control applications. The approach developed here can be considered a generalized version of the T-S identification method with optimized performance in approximating nonlinear functions. We propose a noniterative method based on a weighting-of-parameters approach, and an iterative algorithm that applies the extended Kalman filter, based on the same parameter-weighting idea. We show that the Kalman filter is an effective tool in the identification of the T-S fuzzy model. A fuzzy controller based on a linear quadratic regulator is proposed in order to show the effectiveness of the estimation method developed here in control applications. An illustrative example of an inverted pendulum is chosen to evaluate the robustness and remarkable performance of the proposed method, locally and globally, in comparison with the original T-S model. Simulation results indicate the potential, simplicity, and generality of the algorithm. In this paper, we prove that these algorithms converge very fast, thereby making them very practical to use.
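For readers unfamiliar with the model structure being identified, the sketch below shows plain Takagi-Sugeno inference (a membership-weighted blend of local affine models); it does not reproduce the paper's weighting or Kalman-filter identification methods, and all parameter values are illustrative.

```python
# Minimal sketch of Takagi-Sugeno fuzzy inference (the model structure being
# identified above, not the paper's weighting/Kalman identification itself):
# the output is a membership-weighted average of local affine consequents.
import numpy as np

centres = np.array([-1.0, 0.0, 1.0])      # rule antecedent centres (illustrative)
width = 0.6                                # Gaussian membership width
# local affine models y_i = a_i * x + b_i (consequent parameters)
a = np.array([-0.5, 1.5, -0.5])
b = np.array([-1.0, 0.0, 1.0])

def ts_output(x: float) -> float:
    mu = np.exp(-((x - centres) ** 2) / (2 * width ** 2))   # rule firing strengths
    w = mu / mu.sum()                                        # normalised weights
    return float(np.dot(w, a * x + b))                       # weighted local models

for x in (-1.5, 0.0, 1.5):
    print(f"x = {x:+.1f} -> y = {ts_output(x):+.3f}")
```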

Relevance: 100.00%

Abstract:

This thesis proposes how to apply Semantic Web technologies to Idea Management Systems to deliver a solution to knowledge management and information overflow problems. Firstly, the aim is to present a model that introduces rich metadata annotations and their usage in the domain of Idea Management Systems. Furthermore, the thesis investigates how to link innovation data with information from other systems and use it to categorize and filter out the most valuable elements. In addition, the thesis presents a Generic Idea and Innovation Management Ontology (Gi2MO) and aims to back its creation with a set of case studies followed by evaluations that show how the Semantic Web can work as a tool to create new opportunities and take contemporary Idea Management legacy systems to the next level.
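As a hedged illustration of the kind of rich metadata annotation discussed above, the rdflib snippet below attaches a few RDF statements to an idea; the Gi2MO namespace URI and class name used here are assumptions for illustration only, so the actual Gi2MO specification should be consulted for the real terms.

```python
# Hedged sketch of rich metadata annotation of an idea, using rdflib.
# The Gi2MO namespace URI and the gi2mo:Idea class name are assumptions.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF

GI2MO = Namespace("http://purl.org/gi2mo/ns#")   # assumed namespace
EX = Namespace("http://example.org/ideas/")

g = Graph()
g.bind("gi2mo", GI2MO)
g.bind("dcterms", DCTERMS)

idea = EX["idea-42"]
g.add((idea, RDF.type, GI2MO.Idea))                                   # assumed class name
g.add((idea, DCTERMS.title, Literal("Reuse waste heat from servers")))
g.add((idea, DCTERMS.creator, URIRef("http://example.org/users/jane")))
g.add((idea, DCTERMS.subject, Literal("energy efficiency")))          # category used for filtering

print(g.serialize(format="turtle"))
```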

Relevance: 100.00%

Abstract:

OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web 1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS Computational Linguistics is already a consolidated research area. It builds upon the results of other two major ones, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Possibly, its most well-known applications are the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs. These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that perhaps are not so well-known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools. Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate. However, linguistic annotation tools have still some limitations, which can be summarised as follows: 1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.). 2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts. 3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc. A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitations stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved. In addition, most high-level annotation tools rely on other lower-level annotation tools and their outputs to generate their own ones. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by other higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies would be transferred to (and even magnified in) the annotations of the high-level annotation tool. 
Therefore, it would be quite useful to find a way to (i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools; (ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate. Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. There again, ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should help solve also the problems and limitations of linguistic annotation tools aforementioned. Thus, to summarise, the main aim of the present work was to combine somehow these separated approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) in a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section. 2. GOALS OF THE PRESENT WORK As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based triples, as in the usual Semantic Web languages (namely RDF(S) and OWL), in order for the model to be considered suitable for the Semantic Web. Besides, to be useful for the Semantic Web, this model should provide a way to automate the annotation of web pages. As for the present work, this requirement involved reusing the linguistic annotation tools purchased by the OEG research group (http://www.oeg-upm.net), but solving beforehand (or, at least, minimising) some of their limitations. Therefore, this model had to minimise these limitations by means of the integration of several linguistic annotation tools into a common architecture. Since this integration required the interoperation of tools and their annotations, ontologies were proposed as the main technological component to make them effectively interoperate. From the very beginning, it seemed that the formalisation of the elements and the knowledge underlying linguistic annotations within an appropriate set of ontologies would be a great step forward towards the formulation of such a model (henceforth referred to as OntoTag). Obviously, first, to combine the results of the linguistic annotation tools that operated at the same level, their annotation schemas had to be unified (or, preferably, standardised) in advance. This entailed the unification (id. 
standardisation) of their tags (both their representation and their meaning), and their format or syntax. Second, to merge the results of the linguistic annotation tools operating at different levels, their respective annotation schemas had to be (a) made interoperable and (b) integrated. And third, in order for the resulting annotations to suit the Semantic Web, they had to be specified by means of an ontology-based vocabulary, and structured by means of ontology-based triples, as hinted above. Therefore, a new annotation scheme had to be devised, based both on ontologies and on this type of triples, which allowed for the combination and the integration of the annotations of any set of linguistic annotation tools. This annotation scheme was considered a fundamental part of the model proposed here, and its development was, accordingly, another major objective of the present work. All these goals, aims and objectives could be re-stated more clearly as follows: Goal 1: Development of a set of ontologies for the formalisation of the linguistic knowledge relating linguistic annotation. Sub-goal 1.1: Ontological formalisation of the EAGLES (1996a; 1996b) de facto standards for morphosyntactic and syntactic annotation, in a way that helps respect the triple structure recommended for annotations in these works (which is isomorphic to the triple structures used in the context of the Semantic Web). Sub-goal 1.2: Incorporation into this preliminary ontological formalisation of other existing standards and standard proposals relating the levels mentioned above, such as those currently under development within ISO/TC 37 (the ISO Technical Committee dealing with Terminology, which deals also with linguistic resources and annotations). Sub-goal 1.3: Generalisation and extension of the recommendations in EAGLES (1996a; 1996b) and ISO/TC 37 to the semantic level, for which no ISO/TC 37 standards have been developed yet. Sub-goal 1.4: Ontological formalisation of the generalisations and/or extensions obtained in the previous sub-goal as generalisations and/or extensions of the corresponding ontology (or ontologies). Sub-goal 1.5: Ontological formalisation of the knowledge required to link, combine and unite the knowledge represented in the previously developed ontology (or ontologies). Goal 2: Development of OntoTag’s annotation scheme, a standard-based abstract scheme for the hybrid (linguistically-motivated and ontological-based) annotation of texts. Sub-goal 2.1: Development of the standard-based morphosyntactic annotation level of OntoTag’s scheme. This level should include, and possibly extend, the recommendations of EAGLES (1996a) and also the recommendations included in the ISO/MAF (2008) standard draft. Sub-goal 2.2: Development of the standard-based syntactic annotation level of the hybrid abstract scheme. This level should include, and possibly extend, the recommendations of EAGLES (1996b) and the ISO/SynAF (2010) standard draft. Sub-goal 2.3: Development of the standard-based semantic annotation level of OntoTag’s (abstract) scheme. Sub-goal 2.4: Development of the mechanisms for a convenient integration of the three annotation levels already mentioned. These mechanisms should take into account the recommendations included in the ISO/LAF (2009) standard draft. 
Goal 3: Design of OntoTag’s (abstract) annotation architecture, an abstract architecture for the hybrid (semantic) annotation of texts (i) that facilitates the integration and interoperation of different linguistic annotation tools, and (ii) whose results comply with OntoTag’s annotation scheme. Sub-goal 3.1: Specification of the decanting processes that allow for the classification and separation, according to their corresponding levels, of the results of the linguistic tools annotating at several different levels. Sub-goal 3.2: Specification of the standardisation processes that allow (a) complying with the standardisation requirements of OntoTag’s annotation scheme, as well as (b) combining the results of those linguistic tools that share some level of annotation. Sub-goal 3.3: Specification of the merging processes that allow for the combination of the output annotations and the interoperation of those linguistic tools that share some level of annotation. Sub-goal 3.4: Specification of the merge processes that allow for the integration of the results and the interoperation of those tools performing their annotations at different levels. Goal 4: Generation of OntoTagger’s schema, a concrete instance of OntoTag’s abstract scheme for a concrete set of linguistic annotations. These linguistic annotations result from the tools and the resources available in the research group, namely • Bitext’s DataLexica (http://www.bitext.com/EN/datalexica.asp), • LACELL’s (POS) tagger (http://www.um.es/grupos/grupo-lacell/quees.php), • Connexor’s FDG (http://www.connexor.eu/technology/machinese/glossary/fdg/), and • EuroWordNet (Vossen et al., 1998). This schema should help evaluate OntoTag’s underlying hypotheses, stated below. Consequently, it should implement, at least, those levels of the abstract scheme dealing with the annotations of the set of tools considered in this implementation. This includes the morphosyntactic, the syntactic and the semantic levels. Goal 5: Implementation of OntoTagger’s configuration, a concrete instance of OntoTag’s abstract architecture for this set of linguistic tools and annotations. This configuration (1) had to use the schema generated in the previous goal; and (2) should help support or refute the hypotheses of this work as well (see the next section). Sub-goal 5.1: Implementation of the decanting processes that facilitate the classification and separation of the results of those linguistic resources that provide annotations at several different levels (on the one hand, LACELL’s tagger operates at the morphosyntactic level and, minimally, also at the semantic level; on the other hand, FDG operates at the morphosyntactic and the syntactic levels and, minimally, at the semantic level as well). Sub-goal 5.2: Implementation of the standardisation processes that allow (i) specifying the results of those linguistic tools that share some level of annotation according to the requirements of OntoTagger’s schema, as well as (ii) combining these shared level results. In particular, all the tools selected perform morphosyntactic annotations and they had to be conveniently combined by means of these processes. Sub-goal 5.3: Implementation of the merging processes that allow for the combination (and possibly the improvement) of the annotations and the interoperation of the tools that share some level of annotation (in particular, those relating the morphosyntactic level, as in the previous sub-goal). 
Sub-goal 5.4: Implementation of the merging processes that allow for the integration of the different standardised and combined annotations aforementioned, relating all the levels considered. Sub-goal 5.5: Improvement of the semantic level of this configuration by adding a named entity recognition, (sub-)classification and annotation subsystem, which also uses the named entities annotated to populate a domain ontology, in order to provide a concrete application of the present work in the two areas involved (the Semantic Web and Corpus Linguistics). 3. MAIN RESULTS: ASSESSMENT OF ONTOTAG’S UNDERLYING HYPOTHESES The model developed in the present thesis tries to shed some light on (i) whether linguistic annotation tools can effectively interoperate; (ii) whether their results can be combined and integrated; and, if they can, (iii) how they can, respectively, interoperate and be combined and integrated. Accordingly, several hypotheses had to be supported (or rejected) by the development of the OntoTag model and OntoTagger (its implementation). The hypotheses underlying OntoTag are surveyed below. Only one of the hypotheses (H.6) was rejected; the other five could be confirmed. H.1 The annotations of different levels (or layers) can be integrated into a sort of overall, comprehensive, multilayer and multilevel annotation, so that their elements can complement and refer to each other. • CONFIRMED by the development of: o OntoTag’s annotation scheme, o OntoTag’s annotation architecture, o OntoTagger’s (XML, RDF, OWL) annotation schemas, o OntoTagger’s configuration. H.2 Tool-dependent annotations can be mapped onto a sort of tool-independent annotations and, thus, can be standardised. • CONFIRMED by means of the standardisation phase incorporated into OntoTag and OntoTagger for the annotations yielded by the tools. H.3 Standardisation should ease: H.3.1: The interoperation of linguistic tools. H.3.2: The comparison, combination (at the same level and layer) and integration (at different levels or layers) of annotations. • H.3 was CONFIRMED by means of the development of OntoTagger’s ontology-based configuration: o Interoperation, comparison, combination and integration of the annotations of three different linguistic tools (Connexor’s FDG, Bitext’s DataLexica and LACELL’s tagger); o Integration of EuroWordNet-based, domain-ontology-based and named entity annotations at the semantic level. o Integration of morphosyntactic, syntactic and semantic annotations. H.4 Ontologies and Semantic Web technologies (can) play a crucial role in the standardisation of linguistic annotations, by providing consensual vocabularies and standardised formats for annotation (e.g., RDF triples). • CONFIRMED by means of the development of OntoTagger’s RDF-triple-based annotation schemas. H.5 The rate of errors introduced by a linguistic tool at a given level, when annotating, can be reduced automatically by contrasting and combining its results with the ones coming from other tools, operating at the same level. However, these other tools might be built following a different technological (stochastic vs. rule-based, for example) or theoretical (dependency vs. HPS-grammar-based, for instance) approach. • CONFIRMED by the results yielded by the evaluation of OntoTagger. H.6 Each linguistic level can be managed and annotated independently. • REJECTED: OntoTagger’s experiments and the dependencies observed among the morphosyntactic annotations, and between them and the syntactic annotations. 
In fact, Hypothesis H.6 was already rejected when OntoTag’s ontologies were developed. We observed then that several linguistic units stand on an interface between levels, belonging thereby to both of them (such as morphosyntactic units, which belong to both the morphological level and the syntactic level). Therefore, the annotations of these levels overlap and cannot be handled independently when merged into a unique multileveled annotation. 4. OTHER MAIN RESULTS AND CONTRIBUTIONS First, interoperability is a hot topic for both the linguistic annotation community and the whole Computer Science field. The specification (and implementation) of OntoTag’s architecture for the combination and integration of linguistic (annotation) tools and annotations by means of ontologies shows a way to make these different linguistic annotation tools and annotations interoperate in practice. Second, as mentioned above, the elements involved in linguistic annotation were formalised in a set (or network) of ontologies (OntoTag’s linguistic ontologies). • On the one hand, OntoTag’s network of ontologies consists of − The Linguistic Unit Ontology (LUO), which includes a mostly hierarchical formalisation of the different types of linguistic elements (i.e., units) identifiable in a written text; − The Linguistic Attribute Ontology (LAO), which includes also a mostly hierarchical formalisation of the different types of features that characterise the linguistic units included in the LUO; − The Linguistic Value Ontology (LVO), which includes the corresponding formalisation of the different values that the attributes in the LAO can take; − The OIO (OntoTag’s Integration Ontology), which  Includes the knowledge required to link, combine and unite the knowledge represented in the LUO, the LAO and the LVO;  Can be viewed as a knowledge representation ontology that describes the most elementary vocabulary used in the area of annotation. • On the other hand, OntoTag’s ontologies incorporate the knowledge included in the different standards and recommendations for linguistic annotation released so far, such as those developed within the EAGLES and the SIMPLE European projects or by the ISO/TC 37 committee: − As far as morphosyntactic annotations are concerned, OntoTag’s ontologies formalise the terms in the EAGLES (1996a) recommendations and their corresponding terms within the ISO Morphosyntactic Annotation Framework (ISO/MAF, 2008) standard; − As for syntactic annotations, OntoTag’s ontologies incorporate the terms in the EAGLES (1996b) recommendations and their corresponding terms within the ISO Syntactic Annotation Framework (ISO/SynAF, 2010) standard draft; − Regarding semantic annotations, OntoTag’s ontologies generalise and extend the recommendations in EAGLES (1996a; 1996b) and, since no stable standards or standard drafts have been released for semantic annotation by ISO/TC 37 yet, they incorporate the terms in SIMPLE (2000) instead; − The terms coming from all these recommendations and standards were supplemented by those within the ISO Data Category Registry (ISO/DCR, 2008) and also of the ISO Linguistic Annotation Framework (ISO/LAF, 2009) standard draft when developing OntoTag’s ontologies. Third, we showed that the combination of the results of tools annotating at the same level can yield better results (both in precision and in recall) than each tool separately. In particular, 1. 
OntoTagger clearly outperformed two of the tools integrated into its configuration, namely DataLexica and FDG, in all the combination sub-phases in which they overlapped (i.e. POS tagging, lemma annotation and morphological feature annotation). As far as the remaining tool is concerned, i.e. LACELL's tagger, it was also outperformed by OntoTagger in POS tagging and lemma annotation, and it did not behave better than OntoTagger in the morphological feature annotation layer. 2. As an immediate result, this implies that a) this type of combination architecture configuration can be applied in order to significantly improve the accuracy of linguistic annotations; and b) concerning the morphosyntactic level, this could be regarded as a way of constructing more robust and more accurate POS tagging systems. Fourth, Semantic Web annotations are usually performed by humans or else by machine learning systems. Both of them leave much to be desired: the former, with respect to their annotation rate; the latter, with respect to their (average) precision and recall. In this work, we showed how linguistic tools can be wrapped in order to automatically annotate Semantic Web pages using ontologies. This entails their fast, robust and accurate semantic annotation. By way of example, as mentioned in Sub-goal 5.5, we developed a particular OntoTagger module for the recognition, classification and labelling of named entities, according to the MUC and ACE tagsets (Chinchor, 1997; Doddington et al., 2004). These tagsets were further specified by means of a domain ontology, namely the Cinema Named Entities Ontology (CNEO). This module was applied to the automatic annotation of ten different web pages containing cinema reviews (that is, around 5000 words). In addition, the named entities annotated with this module were also labelled as instances (or individuals) of the classes included in the CNEO and, then, were used to populate this domain ontology. • The statistical results obtained from the evaluation of this particular module of OntoTagger can be summarised as follows. On the one hand, as far as recall (R) is concerned, (R.1) the lowest value was 76.40% (for file 7); (R.2) the highest value was 97.50% (for file 3); and (R.3) the average value was 88.73%. On the other hand, as far as the precision rate (P) is concerned, (P.1) its minimum was 93.75% (for file 4); (P.2) its maximum was 100% (for files 1, 5, 7, 8, 9, and 10); and (P.3) its average value was 98.99%. • These results, which apply to the tasks of named entity annotation and ontology population, are extraordinarily good for both of them. They can be explained on the basis of the high accuracy of the annotations provided by OntoTagger at the lower levels (mainly at the morphosyntactic level). However, they should be conveniently qualified, since they might be too domain- and/or language-dependent. Further experiments should assess how our approach works in a different domain or a different language, such as French, English, or German. • In any case, the results of this application of Human Language Technologies to Ontology Population (and, accordingly, to Ontological Engineering) seem very promising and encouraging in order for these two areas to collaborate and complement each other in the area of semantic annotation. Fifth, as shown in the State of the Art of this work, there are different approaches and models for the semantic annotation of texts, but all of them focus on a particular view of the semantic level. 
Clearly, all these approaches and models should be integrated in order to yield a coherent and joint semantic annotation level. OntoTag shows how (i) these semantic annotation layers could be integrated together; and (ii) they could be integrated with the annotations associated with other annotation levels. Sixth, we identified some recommendations, best practices and lessons learned for annotation standardisation, interoperation and merging. They show how standardisation (via ontologies, in this case) enables the combination, integration and interoperation of different linguistic tools and their annotations into a multilayered (or multileveled) linguistic annotation, which is one of the hot topics in the area of Linguistic Annotation. And last but not least, OntoTag's annotation scheme and OntoTagger's annotation schemas show a way to formalise and annotate coherently and uniformly the different units and features associated with the different levels and layers of linguistic annotation. This is a great scientific step forward towards the global standardisation of this area, which is the aim of ISO/TC 37 (in particular, Subcommittee 4, which deals with the standardisation of linguistic annotations and resources).
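To give a flavour of what an ontology-based, RDF-triple annotation (hypothesis H.4) can look like in practice, here is a small rdflib sketch; the namespaces and term names are placeholders, not the actual URIs or vocabularies of OntoTag's LUO/LAO/LVO ontologies.

```python
# Hedged sketch of an ontology-based, RDF-triple annotation in the spirit of
# OntoTag/OntoTagger (hypothesis H.4). The namespaces and term names below are
# placeholders, not the actual LUO/LAO/LVO URIs.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

LUO = Namespace("http://example.org/ontotag/luo#")   # placeholder for the Linguistic Unit Ontology
LAO = Namespace("http://example.org/ontotag/lao#")   # placeholder for the Linguistic Attribute Ontology
LVO = Namespace("http://example.org/ontotag/lvo#")   # placeholder for the Linguistic Value Ontology
DOC = Namespace("http://example.org/doc/review1#")

g = Graph()
for prefix, ns in (("luo", LUO), ("lao", LAO), ("lvo", LVO), ("doc", DOC)):
    g.bind(prefix, ns)

token = DOC["token_12"]
g.add((token, RDF.type, LUO.Word))                 # linguistic unit type
g.add((token, LAO.hasLemma, Literal("direct")))    # attribute carried by the unit
g.add((token, LAO.hasPartOfSpeech, LVO.Adjective)) # value drawn from the value ontology

print(g.serialize(format="turtle"))
```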

Relevance: 100.00%

Abstract:

Inverter features are reviewed from a PV systems perspective, with a view to contributing to possible codes, procurement specifications and testing procedures, in order to assure the technical quality of these systems. A laboratory testing campaign has been carried out on a representative set of sixteen currently available inverters and a set of the most common AC appliances. The results of the tests are discussed with the aim of disseminating the particular features of operating AC appliances in PV systems and the provisions to be taken into account in PV system design. The development of the testing procedures has followed the motto "keep it as simple as possible", in order to make their application easier in conventional laboratories in developing countries.
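As a generic illustration of inverter testing figures of merit (not the specific procedures developed in the paper), the snippet below computes the weighted "European efficiency" from efficiencies measured at standard load fractions; the measured values are made up.

```python
# Illustrative sketch of one common figure of merit in inverter testing, the
# "European efficiency": a weighted average of conversion efficiencies measured
# at standard fractions of rated power. Generic example, not the paper's
# specific testing procedures; measured values are made up.
def european_efficiency(eta_by_load: dict) -> float:
    """eta_by_load maps load fraction (of rated AC power) to measured efficiency."""
    weights = {0.05: 0.03, 0.10: 0.06, 0.20: 0.13, 0.30: 0.10, 0.50: 0.48, 1.00: 0.20}
    return sum(w * eta_by_load[load] for load, w in weights.items())

measured = {0.05: 0.82, 0.10: 0.89, 0.20: 0.93, 0.30: 0.945, 0.50: 0.95, 1.00: 0.94}
print(f"European efficiency: {european_efficiency(measured):.3f}")
```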

Relevance: 100.00%

Abstract:

Nondeterminism and partially instantiated data structures give logic programming expressive power beyond that of functional programming. However, functional programming often provides convenient syntactic features, such as having a designated implicit output argument, which allow function call nesting and sometimes result in more compact code. Functional programming also sometimes allows a more direct encoding of lazy evaluation, with its ability to deal with infinite data structures. We present a syntactic functional extension, used in the Ciao system, which can be implemented in ISO-standard Prolog systems and covers function application, predefined evaluable functors, functional definitions, quoting, and lazy evaluation. The extension is also composable with higher-order features and can be combined with other extensions to ISO-Prolog such as constraints. We also highlight the features of the Ciao system which help implementation and present some data on the overhead of using lazy evaluation with respect to eager evaluation.
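Purely as an illustration of the lazy-evaluation idea mentioned above (in Python rather than Ciao's Prolog-based functional notation), a conceptually infinite stream is only evaluated as far as the consumer demands:

```python
# Generic illustration (in Python, not Ciao's functional syntax) of lazy
# evaluation over an "infinite list": elements are produced on demand only.
from itertools import count, islice

def naturals(start: int = 0):
    """Lazy, conceptually infinite stream of natural numbers."""
    return count(start)

first_ten_squares = [n * n for n in islice(naturals(), 10)]
print(first_ten_squares)   # only 10 elements are ever produced
```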