885 results for Semi-Automatic Indexing System (SISA)
Abstract:
Goat breeding in the state of Rio Grande do Norte, Brazil, has promising economic possibilities, provided the natural resources are handled properly. The introduction of specialized animals has been one of the ways used to improve herd genetics and increase productivity. However, climate is one of the regional factors that most interferes with the adaptation of the new genetic stock resulting from the introduction of exotic breeds, because in their countries of origin the air temperature during most of the year is lower than the animals' body temperature. With this in mind, the aim of this study was to characterize the behavioral, physiological and morphological profiles and the milk production of female Saanen goats belonging to different genetic groups raised in the semi-arid region of Rio Grande do Norte in Northeast Brazil. The study was conducted in the city of Lages (5°42′00″S, 36°14′41″W). We used 25 lactating female Saanen goats, distributed into 3 genetic groups: 5 purebred animals, 11 three-quarter bred and 9 half-bred. Behavioral observations were made over three consecutive days in the months of August and September, between 09:00 and 11:30 h, when the animals were grazing. Physiological and meteorological data were recorded on the last three days of June, July, August and September, at 05:00 h and at 16:00 h. In the semi-intensive breeding system, the animals from the different genetic groups were similar in both field behavior and physiological response patterns. Although the purebred goats had longer hair, they did not show symptoms of thermal discomfort. Their white hair helped to reflect the short-wavelength rays and thus eliminate those at the longer wavelengths. We conclude that the animals raised in the semi-intensive milk production system in this study appear to have adapted to the climatic conditions of the semi-arid region of Rio Grande do Norte, Brazil.
Abstract:
The occurrence of problems related to the scattering and tangling phenomena, such as the difficulty of maintaining a system, is increasingly frequent. One way to address this problem is the identification of crosscutting concerns. To maximize its benefits, this identification must be performed from the early stages of the development process, but several works have reported that this is not done in most cases, making system development susceptible to errors and prone to later refactoring. This situation directly affects the quality and cost of the system. PL-AOVgraph is a goal-oriented requirements modeling language that supports the representation of relationships among requirements and provides separation of crosscutting concerns through the representation of crosscutting relationships. This work therefore presents a semi-automatic method for crosscutting concern identification in requirements specifications written in PL-AOVgraph. An adjacency matrix is used to identify the contribution relationships among the elements. Crosscutting concern identification is based on a fan-out analysis of the contribution relationships over the information in the adjacency matrix. Once identified, the crosscutting relationships are created. The method is also implemented as a new module of the ReqSys-MDD tool.
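The abstract does not detail the analysis itself, but the fan-out idea can be illustrated with a minimal sketch: build an adjacency matrix of contribution relationships and flag elements whose out-degree reaches a threshold. The element names, input format and threshold below are illustrative assumptions, not part of PL-AOVgraph or ReqSys-MDD.

```python
# Minimal sketch of fan-out analysis over an adjacency matrix of
# contribution relationships. Element names, the matrix values and the
# fan-out threshold are illustrative assumptions, not PL-AOVgraph's API.
import numpy as np

elements = ["login", "logging", "encryption", "report", "audit"]
# contributions[i][j] = 1 means element i contributes to element j.
contributions = np.array([
    [0, 0, 0, 0, 0],
    [1, 0, 1, 1, 1],   # "logging" contributes to many other elements
    [1, 0, 0, 0, 1],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
])

FAN_OUT_THRESHOLD = 3  # illustrative cut-off

def crosscutting_candidates(matrix, names, threshold):
    """Flag elements whose fan-out (row sum) reaches the threshold."""
    fan_out = matrix.sum(axis=1)
    return [(names[i], int(fan_out[i]))
            for i in range(len(names)) if fan_out[i] >= threshold]

for name, degree in crosscutting_candidates(contributions, elements,
                                            FAN_OUT_THRESHOLD):
    print(f"{name}: fan-out {degree} -> crosscutting candidate")
```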
Abstract:
The purpose of this paper is to introduce a methodology for semi-automatic road extraction from aerial digital image pairs using dynamic programming and epipolar geometry. The method uses both images, from which each road feature pair is extracted. The operator identifies the corresponding road features and selects sparse seed points along them. After all road pairs have been extracted, epipolar geometry is applied to determine the automatic point-to-point correspondence between the corresponding features. Finally, each corresponding road pair is georeferenced by photogrammetric intersection. Experiments were carried out with rural aerial images. The results led to the conclusion that the methodology is robust and efficient, even in the presence of shadows of trees and buildings or other irregularities.
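The point-to-point matching step can be sketched as follows: for a point on the road extracted in the left image, its match in the right image must lie on the epipolar line l′ = Fx, so one simple strategy is to take the vertex of the right-image road polyline closest to that line. The fundamental matrix and polyline below are illustrative placeholders, not data from the paper.

```python
# Sketch of epipolar point-to-point matching: the match of a left-image
# road point lies on the epipolar line l' = F x in the right image, so we
# pick the closest vertex of the extracted right-image road polyline.
import numpy as np

def point_line_distance(pt, line):
    """Distance from 2D point pt to line (a, b, c): ax + by + c = 0."""
    a, b, c = line
    x, y = pt
    return abs(a * x + b * y + c) / np.hypot(a, b)

def epipolar_match(x_left, F, road_right):
    """Return the right-image road point nearest the epipolar line of x_left."""
    l_prime = F @ np.array([x_left[0], x_left[1], 1.0])  # epipolar line
    dists = [point_line_distance(p, l_prime) for p in road_right]
    return road_right[int(np.argmin(dists))]

# Illustrative fundamental matrix and extracted road polyline (right image).
F = np.array([[0.0,  -1e-4,  0.02],
              [1e-4,  0.0,  -0.03],
              [-0.02, 0.03,  1.0]])
road_right = np.array([[100.0, 200.0], [120.0, 210.0], [140.0, 222.0]])

print(epipolar_match((95.0, 198.0), F, road_right))
```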
Abstract:
Methods based on visual estimation are still the most widely used for analysing the distances covered by soccer players during matches, and most of the descriptions available in the literature were obtained using such an approach. Recently, systems based on computer vision techniques have appeared, and the first results are available for comparison. The aim of the present study was to analyse the distances covered by Brazilian soccer players and compare the results with those of European players, both measured by automatic tracking systems. Four regular Brazilian First Division Championship matches between different teams were filmed. Applying a previously developed automatic tracking system (DVideo, Campinas, Brazil), the results of the 55 outfield players who participated in the whole game (n = 55) are presented. The mean distance covered, standard deviation (s) and coefficient of variation (cv) after 90 minutes were 10,012 m, s = 1,024 m and cv = 10.2%, respectively. A three-way ANOVA according to playing position showed that the distances covered by external defenders (10,642 ± 663 m), central midfielders (10,476 ± 702 m) and external midfielders (10,598 ± 890 m) were greater than those of forwards (9,612 ± 772 m), and that forwards covered greater distances than central defenders (9,029 ± 860 m). The greatest distances were covered standing, walking or jogging, 5,537 ± 263 m, followed by moderate-speed running, 1,731 ± 399 m; low-speed running, 1,615 ± 351 m; high-speed running, 691 ± 190 m; and sprinting, 437 ± 171 m. The mean distance covered in the first half was 5,173 m (s = 394 m, cv = 7.6%), highly significantly greater (p < 0.001) than the mean value of 4,808 m (s = 375 m, cv = 7.8%) in the second half. A minute-by-minute analysis revealed that after eight minutes of the second half player performance had already decreased, and this reduction was maintained throughout the second half. © Journal of Sports Science and Medicine (2007).
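As a quick arithmetic check, the coefficient of variation is cv = s / mean × 100; plugging in the whole-match figures reported above reproduces the 10.2% value:

```python
# Coefficient of variation (cv = s / mean * 100) for the reported
# whole-match distances; the values are taken from the abstract above.
mean_distance = 10_012.0  # metres, 90-minute mean
s = 1_024.0               # standard deviation, metres

cv = s / mean_distance * 100
print(f"cv = {cv:.1f}%")  # ~10.2%, matching the reported value
```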
Abstract:
Most of the tasks in genome annotation can be at least partially automated. Since this annotation is time-consuming, facilitating some parts of the process - thus freeing the specialist to carry out more valuable tasks - has been the motivation of many tools and annotation environments. In particular, annotation of protein function can benefit from knowledge about enzymatic processes. The use of sequence homology alone is not a good approach to derive this knowledge when there are only a few homologues of the sequence to be annotated. The alternative is to use motifs. This paper uses a symbolic machine learning approach to derive rules for the classification of enzymes according to the Enzyme Commission (EC). Our results show that, for the top class, the average global classification error is 3.13%. Our technique also produces a set of rules relating structural to functional information, which is important for understanding a protein's three-dimensional structure and determining its biological function. © 2009 Springer Berlin Heidelberg.
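The abstract does not name the exact rule learner; as a stand-in illustration, a decision tree (whose root-to-leaf paths read as symbolic classification rules) can be trained on binary motif-presence features to predict the EC top class. The motifs, feature vectors and labels below are invented toy data, not the paper's dataset.

```python
# Illustrative stand-in for the symbolic learning step: a shallow decision
# tree over binary motif-presence features, printed as readable rules.
# All data here is invented for demonstration purposes.
from sklearn.tree import DecisionTreeClassifier, export_text

motifs = ["motif_A", "motif_B", "motif_C"]
# Each row: presence (1) / absence (0) of each motif in one protein.
X = [[1, 0, 0],
     [1, 1, 0],
     [0, 0, 1],
     [0, 1, 1],
     [1, 0, 1]]
# EC top class (1 = oxidoreductases, ..., 6 = ligases) for each protein.
y = [1, 1, 3, 3, 1]

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=motifs))  # human-readable rule set
```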
Abstract:
The indexing process aims to synthetically represent the informational content of documents by a set of terms whose meanings indicate the themes or subjects they treat. With the emergence of the Web, research in automatic indexing received a major boost from the need to retrieve documents from this huge collection. Traditional indexing languages, used to translate the thematic content of documents into standardized terms, have always proved efficient in manual indexing. Ontologies open new perspectives for research in automatic indexing by offering a computer-processable language restricted to a particular domain. The use of ontologies in the automatic indexing process makes it possible to use a specific domain language together with a logical and conceptual framework for making inferences, whose relations allow an expansion of the terms extracted directly from the text of the document. This paper presents techniques for the construction and use of ontologies in the automatic indexing process. We conclude that the use of ontologies not only adds new features to the indexing process, but also allows us to envision new and advanced features in an information retrieval system.
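The term-expansion idea can be illustrated with a minimal sketch: terms extracted from the document are looked up in a toy domain ontology and expanded with their broader concepts. The ontology below is invented for illustration; a real system would use an OWL or SKOS ontology and an inference engine.

```python
# Minimal sketch of ontology-based term expansion during indexing: each
# extracted term is expanded with its chain of broader concepts. The toy
# "ontology" is a plain broader-term mapping, invented for illustration.
broader = {
    "neural network": "machine learning",
    "machine learning": "artificial intelligence",
    "ontology": "knowledge representation",
}

def index_terms(extracted_terms):
    """Expand each extracted term with its chain of broader concepts."""
    terms = set()
    for term in extracted_terms:
        while term is not None:
            terms.add(term)
            term = broader.get(term)   # climb the hierarchy, if present
    return sorted(terms)

print(index_terms(["neural network", "ontology"]))
# ['artificial intelligence', 'knowledge representation',
#  'machine learning', 'neural network', 'ontology']
```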
Abstract:
Two in vitro experiments were conducted to evaluate four Egyptian forage legume browses, i.e., leaves of prosopis (Prosopis juliflora), acacia (Acacia saligna), atriplex (Atriplex halimus), and leucaena (Leucaena leucocephala), in comparison with Tifton (Cynodon sp.) grass hay, for their gas production, methanogenic potential, and ruminal fermentation using a semi-automatic gas production system (first experiment), and for ruminal and post-ruminal protein degradability (second experiment). Acacia and leucaena showed pronounced methane inhibition compared with Tifton, while prosopis and leucaena decreased the acetate:propionate ratio (P<0.01). Acacia and leucaena presented a lower (P<0.01) ruminal NH3-N concentration, associated with decreased (P<0.01) ruminal protein degradability. Leucaena, however, showed higher (P<0.01) intestinal protein digestibility than acacia. This study suggests that the methanogenic potential of leguminous browses may be related not only to tannin content, but also to other factors.
Abstract:
Mr. Kubon's project was inspired by the growing need for an automatic syntactic analyser (parser) of Czech that could be used in the syntactic processing of large amounts of text. Mr. Kubon notes that such a tool would be very useful, especially in the field of corpus linguistics, where creating a large-scale "tree bank" (a collection of syntactic representations of natural language sentences) is a very important step towards the investigation of the properties of a given language. The work involved in syntactically parsing a whole corpus in order to obtain a representative set of syntactic structures would be almost inconceivable without the help of some kind of robust (semi-)automatic parser. The need for the automatic natural language parser to be robust increases with the size of the linguistic data in the corpus, or in any other kind of text that is going to be parsed. Practical experience shows that, apart from syntactically correct sentences, there are many sentences which contain a "real" grammatical error. These sentences may be corrected in small-scale texts, but not, in general, in a whole corpus.

In order to complete the overall project, it was necessary to address a number of smaller problems. These were: 1. the adaptation of a suitable formalism able to describe the formal grammar of the system; 2. the definition of the structure of the system's dictionary containing all relevant lexico-syntactic information; 3. the development of a formal grammar able to robustly parse Czech sentences from the test suite; 4. filling the syntactic dictionary with sample data allowing the system to be tested and debugged during its development (about 1000 words); 5. the development of a set of sample sentences containing a reasonable number of grammatical and ungrammatical phenomena covering some of the most typical syntactic constructions used in Czech.

Number 3, building a formal grammar, was the main task of the project. The grammar is of course far from complete (Mr. Kubon notes that it is debatable whether any formal grammar describing a natural language can ever be complete), but it covers the most frequent syntactic phenomena, allowing for the representation of the syntactic structure of simple clauses and also the structure of certain types of complex sentences. The stress was not so much on building a wide-coverage grammar as on the description and demonstration of a method. This method uses an approach similar to that of grammar-based grammar checking. The problem of reconstructing the "correct" form of the syntactic representation of a sentence is closely related to the problem of the localisation and identification of syntactic errors: without precise knowledge of the nature and location of syntactic errors, it is not possible to build a reliable estimation of a "correct" syntactic tree. The incremental way of building the grammar used in this project is also an important methodological issue. Experience from previous projects showed that building a grammar by creating a huge block of metarules is more complicated than the incremental method, which begins with the metarules covering the most common syntactic phenomena and adds less important ones later, an advantage especially from the point of view of testing and debugging the grammar. The sample syntactic dictionary containing lexico-syntactic information (task 4) now has slightly more than 1000 lexical items representing all classes of words.

During the creation of the dictionary it turned out that the task of assigning complete and correct lexico-syntactic information to verbs is a very complicated and time-consuming process which would itself be worth a separate project. The final task undertaken in this project was the development of a method allowing effective testing and debugging of the grammar during its development. The consistency of new and modified rules of the formal grammar with the rules already existing is one of the crucial problems of every project aiming at the development of a large-scale formal grammar of a natural language. This method allows for the detection of any discrepancy or inconsistency of the grammar with respect to a test-bed of sentences containing all syntactic phenomena covered by the grammar. This is not only the first robust parser of Czech, but also one of the first robust parsers of a Slavic language. Since Slavic languages display a wide range of common features, it is reasonable to claim that this system may serve as a pattern for similar systems in other languages. To transfer the system to another language, it is only necessary to revise the grammar and to change the data contained in the dictionary (but not necessarily the structure of the primary lexico-syntactic information). The formalism and methods used in this project can be applied to other Slavic languages without substantial changes.
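The test-bed idea can be sketched as a small regression harness: after each grammar change, every sentence in the suite is re-parsed and compared against its stored expected analysis. The `parse` function below is a placeholder for the project's actual robust parser, and the sentences and expected outputs are invented.

```python
# Sketch of the grammar test-bed: re-parse every sentence in the suite
# after a grammar change and report discrepancies against the stored
# expected analyses. `parse` stands in for the project's real parser.
def parse(sentence):
    """Placeholder parser: returns a syntactic representation string."""
    return f"tree({sentence})"  # stand-in for a real analysis

test_bed = {
    "Correct sentence.": "tree(Correct sentence.)",
    "Sentence with a grammatical error.": "tree(Sentence with a grammatical error.)",
}

def run_test_bed(suite):
    """Return the sentences whose parse no longer matches expectations."""
    return [s for s, expected in suite.items() if parse(s) != expected]

failures = run_test_bed(test_bed)
print("all consistent" if not failures else f"discrepancies: {failures}")
```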