903 results for "traducción automática" (machine translation)
Abstract:
We present here a methodology for the rapid interpretation of aeromagnetic data in three dimensions. An estimate of the x, y and z coordinates of prismatic elements is obtained by applying Euler's homogeneous equation to the data. In this application, only the total magnetic field and its derivatives are needed; these components can be measured or calculated from the total-field data. In the use of Euler's homogeneous equation, the structural index, the coordinates of the corners of the prism, and the depth to the top of the prism are the unknowns. Inverting the data by classical least-squares methods renders the problem ill-conditioned. However, the inverse problem can be stabilized by introducing a priori information into the parameter vector together with a weighting matrix. The algorithm was tested with synthetic and real data in a low-magnetic-latitude region, and the results were satisfactory. The applicability of the theorem and its ambiguity, caused by the lack of information about the direction of total magnetization and inherent in all automatic methods, are also discussed. As an application, an area within the Solimões basin was chosen to test the method. Since 1977, the Solimões basin has been a center of exploration activity, motivated by the first discovery of gas-bearing sandstones within the Monte Alegre formation. Since then, seismic investigations and drilling have been carried out in the region. Knowledge of basement structures is of great importance for locating oil traps and understanding the tectonic history of this region. Through the application of this method, a preliminary estimate of the areal distribution and depth of interbasement and sedimentary magnetic sources was obtained.
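The abstract does not reproduce the equation itself; for reference, a standard form of Euler's homogeneity equation used in this kind of deconvolution is given below. The notation (T for the total field, B for the regional field, N for the structural index, and (x₀, y₀, z₀) for the source position) follows the conventional literature, not necessarily this paper's symbols.

```latex
% Euler's homogeneity equation for a magnetic source at (x_0, y_0, z_0):
% T is the total field, B the regional (background) field,
% and N the structural index.
\[
(x - x_0)\frac{\partial T}{\partial x}
+ (y - y_0)\frac{\partial T}{\partial y}
+ (z - z_0)\frac{\partial T}{\partial z}
= N\,(B - T)
\]
```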
Abstract:
This article aims to contribute to the discussion on the process of school organization based on the concepts of cycles, continued progression, and automatic promotion. To this end, it draws on a historical review, an analysis of official documents, and the literature relevant to the topic in order to contextualize the origin of these proposals and the conceptions of assessment underlying them. The text seeks to clarify the question of cycles, continued progression, and automatic promotion/approval, analyzing how these ideas potentially carry both the possibility for education professionals to rediscuss school organization and, implicitly, the marks of the educational policies that propose them.
Abstract:
Graduate Program in Cartographic Sciences - FCT
Abstract:
Graduate Program in Electrical Engineering - FEIS
Abstract:
Graduate Program in Mechanical Engineering - FEIS
Abstract:
The indexing process aims to represent synthetically the informational content of documents through a set of terms whose meanings indicate the themes or subjects they treat. With the emergence of the Web, research in automatic indexing received a major boost from the need to retrieve documents from this huge collection. Traditional indexing languages, used to translate the thematic content of documents into standardized terms, have always proved efficient in manual indexing. Ontologies open new perspectives for research in automatic indexing by offering a computer-processable language restricted to a particular domain. Using ontologies in the automatic indexing process makes it possible to employ a domain-specific language and a logical and conceptual framework for making inferences, whose relations allow an expansion of the terms extracted directly from the text of the document. This paper presents techniques for the construction and use of ontologies in the automatic indexing process. We conclude that the use of ontologies not only adds new features to the indexing process but also allows us to think of new and advanced features in an information retrieval system.
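As a rough illustration of the expansion step described above, the following minimal Python sketch extracts candidate terms from a text and expands them through a toy ontology encoded as a dictionary. The vocabulary and ontology content are invented for the example; the paper's actual ontology and inference machinery are far richer.

```python
# Minimal sketch: term extraction plus ontology-based expansion.
# The toy ontology below is invented for illustration only.

# A toy domain ontology: each term maps to broader/related concepts.
ONTOLOGY = {
    "neural network": ["machine learning", "artificial intelligence"],
    "machine learning": ["artificial intelligence"],
    "ontology": ["knowledge representation"],
}

def extract_terms(text, vocabulary):
    """Keep only vocabulary terms that literally occur in the text."""
    lowered = text.lower()
    return [term for term in vocabulary if term in lowered]

def expand_terms(terms, ontology):
    """Add the concepts the ontology relates to each extracted term."""
    expanded = set(terms)
    for term in terms:
        expanded.update(ontology.get(term, []))
    return sorted(expanded)

if __name__ == "__main__":
    document = "This paper applies a neural network and an ontology to indexing."
    found = extract_terms(document, ONTOLOGY.keys())
    print(expand_terms(found, ONTOLOGY))
    # ['artificial intelligence', 'knowledge representation',
    #  'machine learning', 'neural network', 'ontology']
```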
Abstract:
Given the exponential growth in the spread of viruses on the World Wide Web (Internet) and their increasing complexity, more sophisticated systems are needed for the extraction of malware fingerprints (the unique identifying information extracted from malicious software, analogous to a human fingerprint). The architecture and protocol proposed here aim to produce more efficient fingerprints, using techniques that make a single fingerprint sufficient to cover an entire group of viruses. This efficiency comes from a hybrid fingerprint-extraction approach that takes into account both the code and the behavior of the sample, i.e., the virus. The main targets of the proposed system are polymorphic and metamorphic malware, given the difficulty of creating fingerprints that identify an entire family of these viruses. This difficulty arises from their use of techniques whose main objective is to defeat analysis by experts. The parameters chosen for the behavioral analysis are: file system, Windows Registry, RAM dump, and API calls. As for the code analysis, the objective is to divide the virus binary into blocks from which hashes can be extracted. This technique considers each instruction and its neighborhood, and is characterized as accurate. In short, this information is intended to predict and draw a profile of the virus's actions and then create a fingerprint based on the degree of kinship between samples (threshold), with the goal of increasing the ability to detect viruses that do not belong to the same family.
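A minimal sketch of the code-analysis side described above: splitting a binary into blocks, hashing each block, and comparing two samples by the fraction of shared block hashes against a kinship threshold. The fixed block size, the hash choice, and the threshold value are illustrative assumptions; the paper's actual division is instruction-aware rather than fixed-size.

```python
import hashlib

BLOCK_SIZE = 256   # bytes per block; illustrative choice
THRESHOLD = 0.5    # kinship threshold; illustrative choice

def block_hashes(binary: bytes, block_size: int = BLOCK_SIZE) -> set:
    """Split the binary into fixed-size blocks and hash each one."""
    return {
        hashlib.sha256(binary[i:i + block_size]).hexdigest()
        for i in range(0, len(binary), block_size)
    }

def kinship(sample_a: bytes, sample_b: bytes) -> float:
    """Jaccard similarity between the block-hash sets of two samples."""
    a, b = block_hashes(sample_a), block_hashes(sample_b)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def same_family(sample_a: bytes, sample_b: bytes) -> bool:
    """Two samples are related if their kinship reaches the threshold."""
    return kinship(sample_a, sample_b) >= THRESHOLD

if __name__ == "__main__":
    original = bytes((i * 31) % 251 for i in range(1024))  # stand-in binary
    variant = original[:512] + b"\x90" * 512  # second half rewritten
    print(kinship(original, variant))         # 0.4 with these toy inputs
    print(same_family(original, variant))     # False
```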
Abstract:
Graduate Program in Information Science - FFC
Automatic location of control points in aerial images based on vertical terrestrial scenes
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Machine translation systems have been increasingly used for the translation of large volumes of specialized texts. The efficiency of these systems depends directly on the implementation of strategies for controlling the lexical choices of source texts, as a way to guarantee machine performance and, ultimately, to facilitate human revision and post-editing work. This paper presents a brief history of the application of machine translation, introduces the concepts of lexicon and ambiguity, and focuses on some of the lexical control strategies presently in use, discussing their possible implications for the production and reading of specialized texts.
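One family of lexical control strategies mentioned above, restricting ambiguous source terms to a single controlled equivalent before the text reaches the MT system, can be sketched as a simple pre-editing substitution pass. The glossary entries below are invented for the example and are not taken from the paper.

```python
import re

# Toy controlled glossary: ambiguous source terms mapped to the single
# controlled term the MT system should receive. Entries are invented
# for illustration.
CONTROLLED_GLOSSARY = {
    "fix": "repair",      # avoid "fix" = attach / determine / repair
    "check": "inspect",   # avoid "check" = verify / chess move / bank draft
    "right": "correct",   # avoid "right" = direction / entitlement
}

def pre_edit(text: str, glossary: dict) -> str:
    """Replace each ambiguous term with its controlled equivalent
    (whole words only, case-insensitive)."""
    for ambiguous, controlled in glossary.items():
        pattern = r"\b" + re.escape(ambiguous) + r"\b"
        text = re.sub(pattern, controlled, text, flags=re.IGNORECASE)
    return text

if __name__ == "__main__":
    source = "Check the valve and fix the right gasket."
    print(pre_edit(source, CONTROLLED_GLOSSARY))
    # -> "inspect the valve and repair the correct gasket."
```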
Abstract:
This paper analyzes how machine translation has changed the way translation is conceived and practiced in the information age. Starting from a brief review of the early designs of machine translation programs, I discuss the changes implemented in these systems over the past decades to combine mechanical processing with the translator's complementary work.
Abstract:
The term poetic expressiveness refers to the multiple articulations of the plane of expression, derived from the expressive value of the linguistic sign (ROSSET, 1970, p. 135) and its particular role in the field of poetry. Features of meaning such as projection, elevation, and salience make it possible to regard as expressive all poetic statements that constitute particularly dense instances of the formal consolidation of a convergence between the two planes (expression/content); such statements stand out from the others by their high density of structural parallelisms and isomorphisms, procedures responsible for the impression that a particular form of content can only be expressed by carving out that same specific form of expression. These considerations have an immediate impact on the reading, interpretation, and practice of translating poems, which is demonstrated here through an example of the translation of a fable by Phaedrus, written in iambic meter.
Abstract:
Graduate Program in Cartographic Sciences - FCT
Abstract:
Identifying opportunities for parallelism in software is a task that takes a great deal of human time, but once code patterns amenable to parallelism are identified, software can accomplish this task quickly. Thus, automating this process brings many benefits, such as saving time and reducing errors caused by the programmer [1]. This work aims at developing a software environment that identifies opportunities for parallelism in source code written in the C language and generates a program with the same behavior but a higher degree of parallelism, targeting graphics processors compatible with the CUDA architecture.
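As a rough illustration of the kind of pattern detection such an environment might begin with, the sketch below scans C source for simple element-wise for-loops, a classic candidate for mapping each iteration to a CUDA thread. The regular expression and the notion of "parallelizable" here are drastic simplifications invented for the example; real tools perform full dependence analysis.

```python
import re

# Very naive pattern: `for (int i = 0; i < N; i++) { ... }` whose body's
# array accesses are all indexed by the loop variable alone.
# Invented for illustration; not the paper's actual detector.
LOOP_PATTERN = re.compile(
    r"for\s*\(\s*int\s+(\w+)\s*=\s*0\s*;"
    r"\s*\1\s*<\s*(\w+)\s*;\s*\1\+\+\s*\)\s*\{([^}]*)\}"
)

def find_parallel_loops(c_source: str):
    """Report loops whose body only indexes arrays with the loop variable."""
    candidates = []
    for match in LOOP_PATTERN.finditer(c_source):
        var, bound, body = match.groups()
        indices = re.findall(r"\[\s*([^\]]+?)\s*\]", body)
        if indices and all(idx == var for idx in indices):
            candidates.append((var, bound, body.strip()))
    return candidates

if __name__ == "__main__":
    source = """
    for (int i = 0; i < n; i++) { c[i] = a[i] + b[i]; }
    for (int j = 0; j < n; j++) { s[j] = s[j - 1] + x[j]; }
    """
    for var, bound, body in find_parallel_loops(source):
        print(f"parallelizable over {var} in [0, {bound}): {body}")
    # Only the first loop is reported: the second reads s[j - 1],
    # a loop-carried dependence, so its indices are not all the loop variable.
```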