959 results for maximum ontological completeness


Relevância: 30.00%

Resumo:

OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web

1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS

Computational Linguistics is already a consolidated research area. It builds upon the results of two other major areas, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Possibly, its best-known applications are the different tools developed so far for processing human language, such as machine translation systems and speech recognisers or dictation programs. These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that are perhaps not so well known, but on which most of the other applications of Computational Linguistics are built. These comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.

Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, natural language parsers) have become critical resources for computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate. However, linguistic annotation tools still have some limitations, which can be summarised as follows:

1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10% up to 50% of the units annotated for unrestricted, general texts.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.

A priori, it seems that the interoperation and integration of several linguistic tools into an appropriate software architecture could most likely solve the limitation stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved.

In addition, most high-level annotation tools rely on other, lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic one) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one, the errors and inaccuracies of the former should be minimised in advance; otherwise, they would be transferred to (and even magnified in) the annotations of the high-level annotation tool. Therefore, it would be quite useful to find a way to (i) correct or, at least, reduce the errors and inaccuracies of lower-level linguistic tools; and (ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem, and ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools.

Thus, to summarise, the main aim of the present work was to combine these hitherto separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.

2. GOALS OF THE PRESENT WORK

As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e., it had to produce a semantic annotation of web page contents).
This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for the model to be linguistically motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based triples, as in the usual Semantic Web languages (namely RDF(S) and OWL), in order for the model to be considered suitable for the Semantic Web.

Besides, to be useful for the Semantic Web, this model should provide a way to automate the annotation of web pages. For the present work, this requirement involved reusing the linguistic annotation tools purchased by the OEG research group (http://www.oeg-upm.net), but solving beforehand (or, at least, minimising) some of their limitations. Therefore, this model had to minimise these limitations by means of the integration of several linguistic annotation tools into a common architecture. Since this integration required the interoperation of the tools and their annotations, ontologies were proposed as the main technological component to make them effectively interoperate. From the very beginning, it seemed that the formalisation of the elements and the knowledge underlying linguistic annotations within an appropriate set of ontologies would be a great step forward towards the formulation of such a model (henceforth referred to as OntoTag).

Obviously, first, to combine the results of the linguistic annotation tools that operated at the same level, their annotation schemas had to be unified (or, preferably, standardised) in advance. This entailed the unification (i.e., standardisation) of their tags (both their representation and their meaning) and their format or syntax.
Second, to merge the results of the linguistic annotation tools operating at different levels, their respective annotation schemas had to be (a) made interoperable and (b) integrated. And third, in order for the resulting annotations to suit the Semantic Web, they had to be specified by means of an ontology-based vocabulary and structured by means of ontology-based triples, as hinted above. Therefore, a new annotation scheme had to be devised, based both on ontologies and on this type of triples, which allowed for the combination and integration of the annotations of any set of linguistic annotation tools. This annotation scheme was considered a fundamental part of the model proposed here, and its development was, accordingly, another major objective of the present work.

All these goals, aims and objectives can be re-stated more clearly as follows:

Goal 1: Development of a set of ontologies for the formalisation of the linguistic knowledge relating to linguistic annotation.
Sub-goal 1.1: Ontological formalisation of the EAGLES (1996a; 1996b) de facto standards for morphosyntactic and syntactic annotation, in a way that respects the triple structure recommended for annotations in these works (which is isomorphic to the triple structures used in the context of the Semantic Web).
Sub-goal 1.2: Incorporation into this preliminary ontological formalisation of other existing standards and standard proposals relating to the levels mentioned above, such as those currently under development within ISO/TC 37 (the ISO Technical Committee dealing with Terminology, which also deals with linguistic resources and annotations).
Sub-goal 1.3: Generalisation and extension of the recommendations in EAGLES (1996a; 1996b) and ISO/TC 37 to the semantic level, for which no ISO/TC 37 standards have been developed yet.
Sub-goal 1.4: Ontological formalisation of the generalisations and/or extensions obtained in the previous sub-goal as generalisations and/or extensions of the corresponding ontology (or ontologies).
Sub-goal 1.5: Ontological formalisation of the knowledge required to link, combine and unite the knowledge represented in the previously developed ontology (or ontologies).

Goal 2: Development of OntoTag’s annotation scheme, a standard-based abstract scheme for the hybrid (linguistically motivated and ontology-based) annotation of texts.
Sub-goal 2.1: Development of the standard-based morphosyntactic annotation level of OntoTag’s scheme. This level should include, and possibly extend, the recommendations of EAGLES (1996a) and also the recommendations included in the ISO/MAF (2008) standard draft.
Sub-goal 2.2: Development of the standard-based syntactic annotation level of the hybrid abstract scheme. This level should include, and possibly extend, the recommendations of EAGLES (1996b) and the ISO/SynAF (2010) standard draft.
Sub-goal 2.3: Development of the standard-based semantic annotation level of OntoTag’s (abstract) scheme.
Sub-goal 2.4: Development of the mechanisms for a convenient integration of the three annotation levels already mentioned. These mechanisms should take into account the recommendations included in the ISO/LAF (2009) standard draft.

Goal 3: Design of OntoTag’s (abstract) annotation architecture, an abstract architecture for the hybrid (semantic) annotation of texts (i) that facilitates the integration and interoperation of different linguistic annotation tools, and (ii) whose results comply with OntoTag’s annotation scheme.
Sub-goal 3.1: Specification of the decanting processes that allow for the classification and separation, according to their corresponding levels, of the results of linguistic tools annotating at several different levels.
Sub-goal 3.2: Specification of the standardisation processes that allow (a) complying with the standardisation requirements of OntoTag’s annotation scheme, as well as (b) combining the results of those linguistic tools that share some level of annotation.
Sub-goal 3.3: Specification of the merging processes that allow for the combination of the output annotations and the interoperation of those linguistic tools that share some level of annotation.
Sub-goal 3.4: Specification of the merging processes that allow for the integration of the results and the interoperation of those tools performing their annotations at different levels.

Goal 4: Generation of OntoTagger’s schema, a concrete instance of OntoTag’s abstract scheme for a concrete set of linguistic annotations. These linguistic annotations result from the tools and the resources available in the research group, namely:
• Bitext’s DataLexica (http://www.bitext.com/EN/datalexica.asp),
• LACELL’s (POS) tagger (http://www.um.es/grupos/grupo-lacell/quees.php),
• Connexor’s FDG (http://www.connexor.eu/technology/machinese/glossary/fdg/), and
• EuroWordNet (Vossen et al., 1998).
This schema should help evaluate OntoTag’s underlying hypotheses, stated below. Consequently, it should implement, at least, those levels of the abstract scheme dealing with the annotations of the set of tools considered in this implementation, namely the morphosyntactic, syntactic and semantic levels.

Goal 5: Implementation of OntoTagger’s configuration, a concrete instance of OntoTag’s abstract architecture for this set of linguistic tools and annotations. This configuration (1) had to use the schema generated in the previous goal; and (2) should help support or refute the hypotheses of this work as well (see the next section).
Sub-goal 5.1: Implementation of the decanting processes that facilitate the classification and separation of the results of those linguistic resources that provide annotations at several different levels (on the one hand, LACELL’s tagger operates at the morphosyntactic level and, minimally, also at the semantic level; on the other hand, FDG operates at the morphosyntactic and syntactic levels and, minimally, at the semantic level as well).
Sub-goal 5.2: Implementation of the standardisation processes that allow (i) specifying the results of those linguistic tools that share some level of annotation according to the requirements of OntoTagger’s schema, as well as (ii) combining these shared-level results. In particular, all the tools selected perform morphosyntactic annotations, and these had to be conveniently combined by means of these processes.
Sub-goal 5.3: Implementation of the merging processes that allow for the combination (and possibly the improvement) of the annotations and the interoperation of the tools that share some level of annotation (in particular, those relating to the morphosyntactic level, as in the previous sub-goal).
Sub-goal 5.4: Implementation of the merging processes that allow for the integration of the different standardised and combined annotations aforementioned, relating to all the levels considered.
Sub-goal 5.5: Improvement of the semantic level of this configuration by adding a named entity recognition, (sub-)classification and annotation subsystem, which also uses the annotated named entities to populate a domain ontology, in order to provide a concrete application of the present work in the two areas involved (the Semantic Web and Corpus Linguistics).

3. MAIN RESULTS: ASSESSMENT OF ONTOTAG’S UNDERLYING HYPOTHESES

The model developed in the present thesis tries to shed some light on (i) whether linguistic annotation tools can effectively interoperate; (ii) whether their results can be combined and integrated; and, if they can, (iii) how they can, respectively, interoperate and be combined and integrated. Accordingly, several hypotheses had to be supported (or rejected) by the development of the OntoTag model and OntoTagger (its implementation). The hypotheses underlying OntoTag are surveyed below. Only one of the hypotheses (H.6) was rejected; the other five could be confirmed.

H.1: The annotations of different levels (or layers) can be integrated into a sort of overall, comprehensive, multilayer and multilevel annotation, so that their elements can complement and refer to each other.
• CONFIRMED by the development of (a) OntoTag’s annotation scheme, (b) OntoTag’s annotation architecture, (c) OntoTagger’s (XML, RDF, OWL) annotation schemas, and (d) OntoTagger’s configuration.

H.2: Tool-dependent annotations can be mapped onto a sort of tool-independent annotations and, thus, can be standardised.
• CONFIRMED by means of the standardisation phase incorporated into OntoTag and OntoTagger for the annotations yielded by the tools.

H.3: Standardisation should ease (H.3.1) the interoperation of linguistic tools, and (H.3.2) the comparison, combination (at the same level and layer) and integration (at different levels or layers) of annotations.
• CONFIRMED by means of the development of OntoTagger’s ontology-based configuration: the interoperation, comparison, combination and integration of the annotations of three different linguistic tools (Connexor’s FDG, Bitext’s DataLexica and LACELL’s tagger); the integration of EuroWordNet-based, domain-ontology-based and named entity annotations at the semantic level; and the integration of morphosyntactic, syntactic and semantic annotations.

H.4: Ontologies and Semantic Web technologies (can) play a crucial role in the standardisation of linguistic annotations, by providing consensual vocabularies and standardised formats for annotation (e.g., RDF triples).
• CONFIRMED by means of the development of OntoTagger’s RDF-triple-based annotation schemas.

H.5: The rate of errors introduced by a linguistic tool at a given level, when annotating, can be reduced automatically by contrasting and combining its results with those coming from other tools operating at the same level, even when these other tools follow a different technological (stochastic vs. rule-based, for example) or theoretical (dependency-based vs. HPSG-based, for instance) approach.
• CONFIRMED by the results yielded by the evaluation of OntoTagger.

H.6: Each linguistic level can be managed and annotated independently.
• REJECTED on the basis of OntoTagger’s experiments and the dependencies observed among the morphosyntactic annotations, and between them and the syntactic annotations. In fact, Hypothesis H.6 was already rejected when OntoTag’s ontologies were developed. We observed then that several linguistic units stand on an interface between levels, thereby belonging to both of them (such as morphosyntactic units, which belong to both the morphological and the syntactic level). Therefore, the annotations of these levels overlap and cannot be handled independently when merged into a unique multilevel annotation.

4. OTHER MAIN RESULTS AND CONTRIBUTIONS

First, interoperability is a hot topic both for the linguistic annotation community and for the whole of Computer Science. The specification (and implementation) of OntoTag’s architecture for the combination and integration of linguistic (annotation) tools and annotations by means of ontologies shows a way to make these different linguistic annotation tools and annotations interoperate in practice.
Second, as mentioned above, the elements involved in linguistic annotation were formalised in a set (or network) of ontologies (OntoTag’s linguistic ontologies). On the one hand, OntoTag’s network of ontologies consists of:
• The Linguistic Unit Ontology (LUO), which includes a mostly hierarchical formalisation of the different types of linguistic elements (i.e., units) identifiable in a written text;
• The Linguistic Attribute Ontology (LAO), which also includes a mostly hierarchical formalisation of the different types of features that characterise the linguistic units included in the LUO;
• The Linguistic Value Ontology (LVO), which includes the corresponding formalisation of the different values that the attributes in the LAO can take;
• The OIO (OntoTag’s Integration Ontology), which (a) includes the knowledge required to link, combine and unite the knowledge represented in the LUO, the LAO and the LVO; and (b) can be viewed as a knowledge representation ontology that describes the most elementary vocabulary used in the area of annotation.
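The LUO/LAO/LVO split lends itself naturally to a triple-based representation, with LUO classes typing the units and LAO/LVO terms supplying their attribute-value pairs. The following Python sketch is illustrative only: the namespace URIs, the class name and the attribute names are hypothetical stand-ins, not OntoTag’s actual vocabulary, and plain tuples stand in for a real RDF library.

```python
# Illustrative sketch: a morphosyntactic annotation expressed as RDF-style
# (subject, predicate, object) triples. The URIs below are hypothetical
# placeholders for OntoTag's real LUO/LAO/LVO vocabularies.

LUO = "http://example.org/luo#"   # Linguistic Unit Ontology (placeholder URI)
LAO = "http://example.org/lao#"   # Linguistic Attribute Ontology (placeholder URI)
LVO = "http://example.org/lvo#"   # Linguistic Value Ontology (placeholder URI)

def annotate_unit(unit_id: str, unit_class: str, attributes: dict) -> list:
    """Type a text unit with a LUO class and attach its LAO attributes,
    each pointing at a LVO value, as a list of triples."""
    triples = [(unit_id, "rdf:type", LUO + unit_class)]
    for attr, value in attributes.items():
        triples.append((unit_id, LAO + attr, LVO + value))
    return triples

# Annotate one token as a plural common noun (hypothetical tag inventory).
triples = annotate_unit("doc1#token4", "CommonNoun",
                        {"number": "Plural", "gender": "Neuter"})
for s, p, o in triples:
    print(s, p, o)
```

In a full implementation the same triple shape lets annotations from different levels refer to one another simply by sharing unit identifiers, which is what makes the ontology-based integration possible.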
On the other hand, OntoTag’s ontologies incorporate the knowledge included in the different standards and recommendations for linguistic annotation released so far, such as those developed within the EAGLES and SIMPLE European projects or by the ISO/TC 37 committee:
• As far as morphosyntactic annotations are concerned, OntoTag’s ontologies formalise the terms in the EAGLES (1996a) recommendations and their corresponding terms within the ISO Morphosyntactic Annotation Framework (ISO/MAF, 2008) standard;
• As for syntactic annotations, OntoTag’s ontologies incorporate the terms in the EAGLES (1996b) recommendations and their corresponding terms within the ISO Syntactic Annotation Framework (ISO/SynAF, 2010) standard draft;
• Regarding semantic annotations, OntoTag’s ontologies generalise and extend the recommendations in EAGLES (1996a; 1996b) and, since no stable standards or standard drafts have been released for semantic annotation by ISO/TC 37 yet, they incorporate the terms in SIMPLE (2000) instead;
• The terms coming from all these recommendations and standards were supplemented by those within the ISO Data Category Registry (ISO/DCR, 2008) and the ISO Linguistic Annotation Framework (ISO/LAF, 2009) standard draft when developing OntoTag’s ontologies.

Third, we showed that the combination of the results of tools annotating at the same level can yield better results (both in precision and in recall) than each tool separately. In particular:
1. OntoTagger clearly outperformed two of the tools integrated into its configuration, namely DataLexica and FDG, in all the combination sub-phases in which they overlapped (i.e., POS tagging, lemma annotation and morphological feature annotation). As for the remaining tool, LACELL’s tagger, it was also outperformed by OntoTagger in POS tagging and lemma annotation, and it did not behave better than OntoTagger in the morphological feature annotation layer.
2. As an immediate result, this implies that (a) this type of combination architecture configuration can be applied in order to improve significantly the accuracy of linguistic annotations; and (b) concerning the morphosyntactic level, this can be regarded as a way of constructing more robust and more accurate POS tagging systems.

Fourth, Semantic Web annotations are usually performed either by humans or by machine learning systems. Both leave much to be desired: the former with respect to their annotation rate; the latter with respect to their (average) precision and recall. In this work, we showed how linguistic tools can be wrapped in order to annotate Semantic Web pages automatically using ontologies, which enables their fast, robust and accurate semantic annotation. By way of example, as mentioned in Sub-goal 5.5, we developed a particular OntoTagger module for the recognition, classification and labelling of named entities, according to the MUC and ACE tagsets (Chinchor, 1997; Doddington et al., 2004). These tagsets were further specified by means of a domain ontology, namely the Cinema Named Entities Ontology (CNEO). This module was applied to the automatic annotation of ten different web pages containing cinema reviews (that is, around 5,000 words). In addition, the named entities annotated with this module were also labelled as instances (or individuals) of the classes included in the CNEO and were then used to populate this domain ontology.
• The statistical results obtained from the evaluation of this particular module of OntoTagger can be summarised as follows. On the one hand, as far as recall (R) is concerned, (R.1) the lowest value was 76.40% (for file 7); (R.2) the highest value was 97.50% (for file 3); and (R.3) the average value was 88.73%. On the other hand, as far as the precision rate (P) is concerned, (P.1) its minimum was 93.75% (for file 4); (P.2) its maximum was 100% (for files 1, 5, 7, 8, 9 and 10); and (P.3) its average value was 98.99%.
• These results, which apply to the tasks of named entity annotation and ontology population, are extraordinarily good for both of them. They can be explained on the basis of the high accuracy of the annotations provided by OntoTagger at the lower levels (mainly at the morphosyntactic level). However, they should be conveniently qualified, since they might be too domain- and/or language-dependent. Further experiments should assess how our approach works in a different domain or a different language, such as French, English or German.
• In any case, the results of this application of Human Language Technologies to Ontology Population (and, accordingly, to Ontological Engineering) seem very promising and encouraging for these two areas to collaborate and complement each other in the area of semantic annotation.

Fifth, as shown in the State of the Art of this work, there are different approaches and models for the semantic annotation of texts, but each of them focuses on a particular view of the semantic level. Clearly, all these approaches and models should be integrated in order to yield a coherent, joint semantic annotation level. OntoTag shows how (i) these semantic annotation layers could be integrated together; and (ii) how they could be integrated with the annotations associated with other annotation levels.

Sixth, we identified some recommendations, best practices and lessons learned for annotation standardisation, interoperation and merging. They show how standardisation (via ontologies, in this case) enables the combination, integration and interoperation of different linguistic tools and their annotations into a multilayered (or multilevel) linguistic annotation, which is one of the hot topics in the area of Linguistic Annotation.
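The recall and precision rates reported for the named-entity module follow the standard definitions: precision is the fraction of annotated entities that are correct, and recall is the fraction of true entities that were annotated. A minimal sketch with invented counts (not the evaluation data of this module):

```python
def precision(tp: int, fp: int) -> float:
    """Fraction of the annotations made that are correct."""
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    """Fraction of the true entities that were annotated."""
    return tp / (tp + fn)

# Hypothetical counts for a single review page: 45 correct annotations,
# 1 spurious annotation, 5 missed entities.
tp, fp, fn = 45, 1, 5
print(f"P = {precision(tp, fp):.2%}, R = {recall(tp, fn):.2%}")
# → P = 97.83%, R = 90.00%
```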
And last but not least, OntoTag’s annotation scheme and OntoTagger’s annotation schemas show a way to formalise and annotate, coherently and uniformly, the different units and features associated with the different levels and layers of linguistic annotation. This is a significant scientific step towards the global standardisation of this area, which is the aim of ISO/TC 37 (in particular, of Subcommittee 4, which deals with the standardisation of linguistic annotations and resources).

Relevância: 20.00%

Resumo:

Didanosine-loaded chitosan microspheres were developed applying a surface-response methodology and using a modified Maximum Likelihood Classification. The operational conditions were optimised with the aim of maintaining the active form of didanosine (ddI), which is sensitive to acid pH, and of developing a modified-release, mucoadhesive formulation. The loading of the drug within the chitosan microspheres was carried out by the ionotropic gelation technique, with sodium tripolyphosphate (TPP) as cross-linking agent and magnesium hydroxide (Mg(OH)2) to ensure the stability of ddI. The optimisation conditions were set using a surface-response methodology and applying the Maximum Likelihood Classification, with the initial chitosan concentration, the TPP concentration and the ddI concentration as the independent variables. The maximum ddI loading in the microspheres (i.e., 1433 mg of ddI/g chitosan) was obtained with 2% (w/v) chitosan and 10% TPP. The microspheres exhibited an average diameter of 11.42 μm, and ddI was gradually released over 2 h in simulated enteric fluid.
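Surface-response methodology fits a low-order (typically quadratic) model of the response to the factor levels and then searches that model for an optimum. The sketch below is purely illustrative: the quadratic surface and factor grids are hypothetical, chosen only so that the optimum falls at the levels reported above (2% chitosan, 10% TPP); they are not the model fitted in this study.

```python
import itertools

def response(chitosan: float, tpp: float) -> float:
    """Hypothetical quadratic response surface for ddI loading (mg/g):
    concave in both factors, so it has a single interior maximum."""
    return -(chitosan - 2.0) ** 2 - 0.05 * (tpp - 10.0) ** 2 + 1433.0

# Evaluate the surface over the (hypothetical) experimental levels
# and keep the factor combination with the highest predicted loading.
chitosan_levels = [1.0, 1.5, 2.0, 2.5, 3.0]   # % (w/v)
tpp_levels = [2.5, 5.0, 7.5, 10.0]            # %
best = max(itertools.product(chitosan_levels, tpp_levels),
           key=lambda xy: response(*xy))
print(best, response(*best))   # → (2.0, 10.0) 1433.0
```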

Relevância: 20.00%

Resumo:

OBJECTIVE: The aim of the present study was to determine the in vitro maximum inhibitory dilution (MID) of two chlorhexidine-based oral mouthwashes (CHX), Noplak® and Periogard®, and of one polyhexamethylene biguanide-based mouthwash (PHMB), Sanifill Premium®, against 28 field Staphylococcus aureus strains using the agar dilution method. MATERIALS AND METHODS: For each product, decimal dilutions ranging from 1/10 to 1/655,360 were prepared in distilled water and added to Mueller Hinton Agar culture medium. After homogenization, the culture medium was poured onto Petri dishes. Strains were inoculated using a Steers multipoint inoculator, and the dishes were incubated at 37 °C for 24 hours. For the readings, the MID was considered as the maximum dilution of the mouthwash still capable of inhibiting microbial growth. RESULTS: Sanifill Premium® inhibited the growth of all strains at 1/40 dilution and of 1 strain at 1/80 dilution. Noplak® inhibited the growth of 23 strains at 1/640 dilution and of all 28 strains at 1/320 dilution. Periogard® inhibited the growth of 7 strains at 1/640 dilution and of all 28 strains at 1/320 dilution. Data were submitted to the Kruskal-Wallis test, which showed significant differences between the mouthwashes evaluated (p<0.05). No significant difference was found between Noplak® and Periogard® (p>0.05). Sanifill Premium® was the least effective (p<0.05). CONCLUSION: It was concluded that the CHX-based mouthwashes present better antimicrobial activity against S. aureus than the PHMB-based mouthwash.

Relevância: 20.00%

Resumo:

The aim of this in vitro study was to determine the maximum inhibitory dilution (MID) of four cetylpyridinium chloride (CPC)-based mouthwashes: CPC+Propolis, CPC+Malva, CPC+Eucaliptol+Juá+Romã+Propolis (Natural Honey®) and CPC (Cepacol®), against 28 Staphylococcus aureus field strains, using the agar dilution method. Decimal dilutions ranging from 1/10 to 1/655,360 were prepared and added to Mueller Hinton Agar. Strains were inoculated using a Steers multipoint inoculator. The inocula were seeded onto the surface of the culture medium in Petri dishes containing different dilutions of the mouthwashes. The dishes were incubated at 37 °C for 24 h. For the readings, the MID was considered as the maximum dilution of mouthwash still capable of inhibiting microbial growth. The obtained data showed that CPC+Propolis had antimicrobial activity against 27 strains at 1/320 dilution and against all 28 strains at 1/160 dilution, CPC+Malva inhibited the growth of all 28 strains at 1/320 dilution, CPC+Eucaliptol+Juá+Romã+Propolis inhibited the growth of 2 strains at 1/640 dilution and of all 28 strains at 1/320 dilution, and Cepacol® showed antimicrobial activity against 3 strains at 1/320 dilution and against all 28 strains at 1/160 dilution. Data were submitted to the Kruskal-Wallis test, which showed that the MID of Cepacol® was lower than that determined for the other products (p<0.05). In conclusion, the CPC-based mouthwashes showed antimicrobial activity against S. aureus, and the addition of other substances to CPC improved its antimicrobial effect.

Relevância: 20.00%

Resumo:

The objective of this experiment was to evaluate the effect of increasing P2O5 rates on sward height, tiller number and leaf and stem dry matter production of Mombaça grass at different ages. The experiment was carried out on a Rhodic Eutrudox (Nitossolo Vermelho Eutrófico). A randomised complete block design was used, with four replicates, five P2O5 rates (30, 60, 90, 120 and 150 kg ha-1) and a control. For the first and second harvests, a linear effect of phosphorus on tillering was observed; for the third and fourth harvests, the data fitted a quadratic model. The proportion of leaf blades in shoot dry matter decreased with increasing P2O5 rates, whereas the proportion of stems increased. Shoot dry matter (DM) production for the first, second and third harvests responded linearly to P2O5 application, with estimated increases of 7, 15 and 19 kg ha-1 of DM per kg ha-1 of P2O5, respectively. For the fourth harvest, the data fitted a quadratic regression model, with the maximum production of 8.3 Mg ha-1 of DM obtained with the application of 103 kg ha-1 of P2O5.
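For a fitted quadratic y = ax² + bx + c with a < 0, the maximum lies at x* = -b/(2a). The coefficients in the sketch below are hypothetical, chosen only so that the vertex reproduces the reported optimum (103 kg ha-1 of P2O5 and 8.3 Mg ha-1 of DM); the study's actual regression coefficients are not given in the abstract.

```python
def quadratic_max(a: float, b: float, c: float) -> tuple[float, float]:
    """Vertex of y = a*x**2 + b*x + c; it is a maximum only when a < 0."""
    if a >= 0:
        raise ValueError("a must be negative for the vertex to be a maximum")
    x_star = -b / (2 * a)
    return x_star, a * x_star ** 2 + b * x_star + c

# Hypothetical coefficients for DM yield (Mg/ha) vs P2O5 rate (kg/ha).
x_star, y_star = quadratic_max(a=-0.0004, b=0.0824, c=4.056)
print(round(x_star), round(y_star, 1))   # → 103 8.3
```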

Relevância: 20.00%

Resumo:

The objective of the present study was to evaluate herbage accumulation, morphological composition, growth rate and structural characteristics in Mombasa grass swards subjected to different cutting intervals (3, 5 and 7 wk) during the rainy and dry seasons of the year. Treatments were assigned to experimental units (17.5 m²) according to a complete randomised block design, with four replicates. Herbage accumulation was greater in the rainy than in the dry season (83 and 17%, respectively). Herbage accumulation (24,300 kg DM ha⁻¹), average growth rate (140 kg DM ha⁻¹ d⁻¹) and sward height (111 cm) were highest for the 7-wk cutting interval, but leaf proportion (56%), leaf:stem ratio (1.6) and leaf:non-leaf ratio (1.3) decreased. Herbage accumulation, morphological composition and sward structure of Mombasa grass swards may thus be manipulated through defoliation frequency. The highest leaf proportion was recorded for the 3-wk cutting interval. Longer cutting intervals negatively affected sward structure, with potential negative effects on utilisation efficiency, animal intake and performance.

Relevância:

20.00% 20.00%

Publicador:

Resumo:

Objective: To measure condylar displacement between centric relation (CR) and maximum intercuspation (MIC) in symptomatic and asymptomatic subjects. Materials and Methods: The sample comprised 70 non-deprogrammed individuals, divided equally into two groups, one symptomatic and the other asymptomatic, grouped according to the research diagnostic criteria for temporomandibular disorders (RDC/TMD). Condylar displacement was measured in three dimensions with the condylar position indicator (CPI) device. Dahlberg's index, intraclass correlation coefficient, repeated measures analysis of variance, analysis of variance, and generalized estimating equations were used for statistical analysis. Results: A greater magnitude of difference was observed on the vertical plane on the left side in both symptomatic and asymptomatic individuals (P = .033). The symptomatic group presented higher measurements on the transverse plane (P = .015). The percentage of displacement in the mesial direction was significantly higher in the asymptomatic group than in the symptomatic one (P = .049). Both groups presented a significantly higher percentage of mesial displacement on the right side than on the left (P = .036). The presence of bilateral condylar displacement (left and right sides) in an inferior and distal direction was significantly greater in symptomatic individuals (P = .012). However, no statistical difference was noted between genders. Conclusion: Statistically significant differences between CR and MIC were quantifiable at the condylar level in asymptomatic and symptomatic individuals. (Angle Orthod. 2010;80:835-842.)

Relevância:

20.00% 20.00%

Publicador:

Resumo:

The VISTA near infrared survey of the Magellanic System (VMC) will provide deep YJK(s) photometry reaching stars in the oldest turn-off point throughout the Magellanic Clouds (MCs). As part of the preparation for the survey, we aim to assess the accuracy in the star formation history (SFH) that can be expected from VMC data, in particular for the Large Magellanic Cloud (LMC). To this aim, we first simulate VMC images containing not only the LMC stellar populations but also the foreground Milky Way (MW) stars and background galaxies. The simulations cover the whole range of density of LMC field stars. We then perform aperture photometry over these simulated images, assess the expected levels of photometric errors and incompleteness, and apply the classical technique of SFH-recovery based on the reconstruction of colour-magnitude diagrams (CMD) via the minimisation of a chi-squared-like statistic. We verify that the foreground MW stars are accurately recovered by the minimisation algorithms, whereas the background galaxies can be largely eliminated from the CMD analysis due to their particular colours and morphologies. We then evaluate the expected errors in the recovered star formation rate as a function of stellar age, SFR(t), starting from models with a known age-metallicity relation (AMR). It turns out that, for a given sky area, the random errors for ages older than similar to 0.4 Gyr seem to be independent of the crowding. This can be explained by a counterbalancing effect between the loss of stars from a decrease in the completeness and the gain of stars from an increase in the stellar density. For a spatial resolution of similar to 0.1 deg(2), the random errors in SFR(t) will be below 20% for this wide range of ages. On the other hand, due to the lower stellar statistics for stars younger than similar to 0.4 Gyr, the outer LMC regions will require larger areas to achieve the same level of accuracy in the SFR(t).
If we consider the AMR as unknown, the SFH-recovery algorithm is able to accurately recover the input AMR, at the price of an increase of random errors in the SFR(t) by a factor of about 2.5. Experiments of SFH-recovery performed for varying distance modulus and reddening indicate that these parameters can be determined with (relative) accuracies of Delta(m-M)(0) similar to 0.02 mag and Delta E(B-V) similar to 0.01 mag, for each individual field over the LMC. The propagation of these errors in the SFR(t) implies systematic errors below 30%. This level of accuracy in the SFR(t) can reveal significant imprints in the dynamical evolution of this unique and nearby stellar system, as well as possible signatures of the past interaction between the MCs and the MW.
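The CMD-fitting step can be sketched with a toy linear model: the observed Hess diagram is modelled as a combination of partial model diagrams, one per age bin, and the SFR(t) coefficients are those minimising the residuals. A minimal sketch, in which unweighted least squares stands in for the chi-squared-like statistic and the partial diagrams are random stand-ins, not real isochrone models:

```python
import numpy as np

# Toy SFH recovery: the observed CMD (Hess diagram, flattened to a vector)
# is a linear combination of partial Hess diagrams, one per age bin.
rng = np.random.default_rng(0)
partials = rng.random((3, 50))        # 3 age bins x 50 CMD cells (stand-ins)
true_sfr = np.array([2.0, 0.5, 1.0])  # "true" star formation rates
observed = true_sfr @ partials        # noiseless synthetic observation

# Minimise the squared residuals over the SFR coefficients
# (unweighted least squares in place of the chi-squared-like statistic).
sfr_fit, *_ = np.linalg.lstsq(partials.T, observed, rcond=None)

print(np.round(sfr_fit, 3))           # recovers [2.0, 0.5, 1.0]
```

With noiseless input the coefficients are recovered exactly; with realistic photometric errors and incompleteness folded in, the scatter of the recovered coefficients plays the role of the random errors in SFR(t) discussed above.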

Relevância:

20.00% 20.00%

Publicador:

Resumo:

We describe the measurement of the depth of maximum, X(max), of the longitudinal development of air showers induced by cosmic rays. Almost 4000 events above 10^18 eV observed by the fluorescence detector of the Pierre Auger Observatory in coincidence with at least one surface detector station are selected for the analysis. The average shower maximum was found to evolve with energy at a rate of 106 (+35/-21) g/cm^2/decade below 10^(18.24 +/- 0.05) eV, and 24 +/- 3 g/cm^2/decade above this energy. The measured shower-to-shower fluctuations decrease from about 55 to 26 g/cm^2. The interpretation of these results in terms of the cosmic ray mass composition is briefly discussed.
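The quoted rates are the slopes of mean X(max) versus log10(energy), i.e. the "elongation rate", with a break at 10^18.24 eV. A minimal sketch of the broken-line behaviour; the absolute depth at the break energy (X_BREAK) is a hypothetical anchor value, not from the abstract:

```python
# Piecewise-linear model of <Xmax> vs log10(E/eV), using the two elongation
# rates from the abstract: 106 g/cm^2/decade below 10^18.24 eV, 24 above.
LG_BREAK = 18.24
X_BREAK = 750.0        # g/cm^2 at the break energy; assumed for illustration

def mean_xmax(lg_e):
    rate = 106.0 if lg_e <= LG_BREAK else 24.0
    return X_BREAK + rate * (lg_e - LG_BREAK)

# One decade below the break, <Xmax> is 106 g/cm^2 shallower;
# one decade above, it is only 24 g/cm^2 deeper.
shallower = X_BREAK - mean_xmax(LG_BREAK - 1.0)
deeper = mean_xmax(LG_BREAK + 1.0) - X_BREAK
```

The change in slope, together with the shrinking shower-to-shower fluctuations, is what carries the composition information discussed in the abstract.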

Relevância:

20.00% 20.00%

Publicador:

Resumo:

Changes in the oxygen isotopic composition of the planktonic foraminifer Globigerinoides ruber and in the foraminifera faunal composition in a core retrieved from the southeastern Brazilian continental margin were used to infer past changes in the hydrological balance and monsoon precipitation in the western South Atlantic since the Last Glacial Maximum (LGM). The results suggest a first-order orbital (precessional) control on the South American Monsoon precipitation. This agrees with previous studies based on continental proxies except for LGM estimates provided by pollen records. The causes for this disagreement are discussed.

Relevância:

20.00% 20.00%

Publicador:

Resumo:

This study was carried out in a continental Atlantic Forest located in the southern region of Sao Paulo State, southeastern Brazil. The aim of the study was to evaluate the vegetation dynamics along a similar to 70 km forest-ecosystem transect during the late Pleistocene and Holocene in this region, using stable carbon isotope (delta C-13) analysis of soil organic matter (SOM) and C-14 dating of buried charcoal fragments and of the humin fraction of SOM. The isotope data (delta C-13) of SOM in the deeper horizons indicated the presence of vegetation more open than at present, with a probable mixture of C-3 and C-4 plants, suggesting a drier climate in the period from similar to 20 ka to similar to 16-14 ka BP. From similar to 16-14 ka BP to the present, a significant predominance of C-3 plants was observed, indicating an expansion of the forest, probably associated with a climate more humid than in the previous period. The results indicated the presence of open vegetation during the late glacial, probably associated with a drier period, as also observed in other regions of Brazil. The Atlantic Forest ecosystem seems to have developed at least since the early Holocene in southeastern Brazil. (c) 2007 Elsevier Ltd and INQUA. All rights reserved.

Relevância:

20.00% 20.00%

Publicador:

Resumo:

Refractory castables are composed of fractions of fine to fairly coarse particles. The fine fraction is constituted primarily of raw materials and calcium aluminate cement, which becomes hydrated, forming chemical bonds that stiffen the concrete during the curing process. The present study focused on an evaluation of several characteristics of two refractory castables with similar chemical compositions but containing aggregates of different sizes. The features evaluated were the maximum load, the fracture energy, and the "relative crack-propagation work" of the two castables heat-treated at 110, 650, 1100 and 1550 degrees C. The results enabled us to draw the following conclusions: the heat treatment temperature exerts a significant influence on the matrix/aggregate interaction, different microstructures form in the castables with temperature, and a relationship was noted between the maximum load and the fracture energy. (C) 2009 Elsevier Ltd and Techna Group S.r.l. All rights reserved.

Relevância:

20.00% 20.00%

Publicador:

Resumo:

This paper presents an Adaptive Maximum Entropy (AME) approach for modeling biological species. The Maximum Entropy algorithm (MaxEnt) is one of the methods most used in modeling the geographical distribution of biological species. The approach presented here is an alternative to the classical algorithm. Instead of using the same set of features throughout training, the AME approach tries to insert or remove a single feature at each iteration. The aim is to reach convergence faster without affecting the performance of the generated models. Preliminary experiments performed well, showing gains in both accuracy and execution time. Comparisons with other algorithms are beyond the scope of this paper. Some important lines of research are proposed as future work.
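The one-feature-at-a-time idea can be sketched as a greedy loop that tries inserting or removing a single feature per step and keeps the move only if the fit does not suffer. This is a sketch of the selection scheme only: linear least squares stands in for the MaxEnt model, and the data are synthetic, not a species dataset:

```python
import numpy as np

# Synthetic data: 200 samples, 5 candidate features, of which only
# features 0 and 2 actually determine the response.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = 2 * X[:, 0] - 3 * X[:, 2]

def rss(feats):
    """Residual sum of squares of a least-squares fit on the chosen features."""
    if not feats:
        return float(np.sum(y ** 2))
    A = X[:, sorted(feats)]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.sum((y - A @ coef) ** 2))

selected, best = set(), rss(set())
improved = True
while improved:
    improved = False
    for j in range(X.shape[1]):
        if j in selected:
            # Remove a feature if the fit does not get worse.
            trial, ok = selected - {j}, rss(selected - {j}) <= best + 1e-9
        else:
            # Insert a feature only if the fit clearly improves.
            trial, ok = selected | {j}, rss(selected | {j}) < best - 1e-9
        if ok:
            selected, best, improved = trial, rss(trial), True

print(sorted(selected))   # the informative features: [0, 2]
```

The asymmetric acceptance rule (strict improvement to insert, mere non-degradation to remove) is what prunes redundant features and keeps the working set small, mirroring the stated aim of faster convergence without loss of model performance.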

Relevância:

20.00% 20.00%

Publicador:

Resumo:

Twelve samples with different grain sizes were prepared by normal grain growth and by primary recrystallization, and the hysteresis dissipated energy was measured by a quasi-static method. Results showed a linear relation between hysteresis energy loss and the inverse of grain size, here called Mager's law, for maximum inductions from 0.6 to 1.5 T, and a Steinmetz power-law relation between hysteresis loss and maximum induction for all samples. The combined effect is better described by a Mager's law whose coefficients follow the Steinmetz law.
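The combined description amounts to a Mager-type 1/d law whose two coefficients are themselves Steinmetz power laws in the maximum induction. A minimal sketch; all four coefficients below are hypothetical, for illustration only, not fitted to the paper's samples:

```python
# Hysteresis loss as Mager's law (linear in 1/d) with coefficients that
# follow a Steinmetz power law in the maximum induction B:
#   W_h(d, B) = a * B**n + (b * B**m) / d        (d = grain size, in mm)
a, n = 10.0, 1.6     # grain-size-independent Steinmetz term (hypothetical)
b, m = 0.8, 1.8      # coefficient of the 1/d term (hypothetical)

def hysteresis_loss(d_mm, b_max):
    return a * b_max ** n + (b * b_max ** m) / d_mm

# At fixed induction the loss is linear in 1/d, so refining the grain
# size raises only the 1/d contribution.
w_coarse = hysteresis_loss(0.10, 1.2)
w_fine = hysteresis_loss(0.05, 1.2)
```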

Relevância:

20.00% 20.00%

Publicador:

Resumo:

Although theoretical models have already been proposed, experimental data are still lacking to quantify the influence of grain size upon coercivity of electrical steels. Some authors consider a linear inverse proportionality, while others suggest a square-root inverse proportionality. Results also differ with regard to the slope of the coercive field versus reciprocal grain size relation for a given material. This paper discusses two aspects of the problem: the maximum induction used for determining coercive force and the possible effect of lurking variables such as the grain size distribution breadth and crystallographic texture. Electrical steel sheets containing 0.7% Si, 0.3% Al and 24 ppm C were cold-rolled and annealed in order to produce different grain sizes (ranging from 20 to 150 mu m). Coercive field was measured along the rolling direction and found to depend linearly on the reciprocal of grain size, with a slope of approximately 0.9 (A/m)mm at 1.0 T induction. A general relation for coercive field as a function of grain size and maximum induction was established, yielding an average absolute error below 4%. Through measurement of B(50) and image analysis of micrographs, the effects of crystallographic texture and grain size distribution breadth were qualitatively discussed. (C) 2011 Elsevier B.V. All rights reserved.
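The linear dependence on reciprocal grain size can be applied directly once the slope is known. A minimal sketch using the abstract's slope of ~0.9 (A/m)mm at 1.0 T; the intercept H0 (the grain-size-independent contribution) is a hypothetical value for illustration:

```python
# Coercive field vs reciprocal grain size at 1.0 T maximum induction:
#   Hc(d) = H0 + SLOPE / d        (d = grain size, in mm)
H0 = 20.0      # A/m, grain-size-independent part (assumed, not from the paper)
SLOPE = 0.9    # (A/m)mm, slope reported in the abstract at 1.0 T

def coercive_field(d_mm):
    return H0 + SLOPE / d_mm

# Endpoints of the abstract's grain-size range, 20 and 150 micrometres:
hc_small = coercive_field(0.020)   # 20 + 0.9/0.020 = 65 A/m
hc_large = coercive_field(0.150)   # 20 + 0.9/0.150 = 26 A/m
```

The spread between the two endpoints shows why the choice of maximum induction matters: a different slope at another induction level rescales the whole 1/d term.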