97 results for "Divisão automática" (automatic division)


Relevance: 20.00%

Abstract:

This article aims to contribute to the discussion of school organization based on the concepts of cycles, continued progression, and automatic promotion. To that end, it draws on a historical review, on the analysis of official documents, and on the literature on the subject in order to contextualize the origin of these proposals and the conceptions of assessment underlying them. The text seeks to clarify the question of cycles, continued progression, and automatic promotion/approval, analyzing how these ideas potentially carry the possibility for education professionals to rediscuss school organization and, implicitly, the marks of the educational policies that propose them.

Relevance: 20.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 20.00%

Abstract:

Graduate Program in Cartographic Sciences - FCT

Relevance: 20.00%

Abstract:

Graduate Program in Electrical Engineering - FEIS

Relevance: 20.00%

Abstract:

The indexing process aims to synthetically represent the informational content of documents through a set of terms whose meanings indicate the themes or subjects they treat. With the emergence of the Web, research on automatic indexing received a major boost from the need to retrieve documents from this huge collection. Traditional indexing languages, used to translate the thematic content of documents into standardized terms, have always proved efficient in manual indexing. Ontologies open new perspectives for research on automatic indexing by offering a computer-processable language restricted to a particular domain. Using ontologies in the automatic indexing process makes it possible to work with a domain-specific language and with a logical and conceptual framework for making inferences, whose relations allow an expansion of the terms extracted directly from the text of the document. This paper presents techniques for the construction and use of ontologies in the automatic indexing process. We conclude that using ontologies not only adds new capabilities to the indexing process, but also lets us envision new and more advanced features in an information retrieval system.
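
The expansion step described above can be pictured with a short sketch. The Python illustration below assumes a toy domain ontology held in a plain dictionary; a real system would load an OWL/RDF ontology and use a reasoner for the inferences, and all term names here are hypothetical.

```python
# Minimal sketch of ontology-based term expansion for automatic indexing.
# The toy "ontology" and its term names are hypothetical placeholders.

ONTOLOGY = {
    # term: (broader terms, related terms)
    "cadastral map": (["map"], ["land parcel"]),
    "map": (["cartographic document"], []),
}

def expand_terms(terms):
    """Expand terms extracted from a document with broader/related concepts."""
    expanded = set(terms)
    frontier = list(terms)
    while frontier:
        term = frontier.pop()
        broader, related = ONTOLOGY.get(term, ([], []))
        for t in broader + related:
            if t not in expanded:  # follow the hierarchy transitively
                expanded.add(t)
                frontier.append(t)
    return expanded

# Expanded index terms: cadastral map, map, land parcel, cartographic document
print(expand_terms(["cadastral map"]))
```

The transitive walk over broader terms is what lets a document indexed under "cadastral map" also be retrieved by a query for the more general "cartographic document".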

Relevance: 20.00%

Abstract:

In radiotherapy with electron beams, the electrons are produced in linear accelerators, and the most commonly used energies lie between 4 MeV and 20 MeV. Treatments are generally for superficial lesions, because of the low penetration of these particles. In this work, a system for calculating monitor units (MU) for electron-beam treatments was developed. Microsoft Excel was used, a program readily available on personal computers. The relevant data of the Varian 2100C linear accelerator, used in the radiotherapy service of the Hospital das Clínicas of the Botucatu Medical School, UNESP, were entered into Excel. For some values of physical parameters, such as field factors and calibration factors not supplied in the machine's acceptance tests, interpolation and extrapolation calculations were also performed. The formulas for the automatic lookup of these and other factors used in the MU calculations were built from routines available in Excel: the function IF (which imposes a search condition) and PROCH (the Portuguese-locale HLOOKUP, which searches for a value in the first row of a table and returns the entry in the same column from a specified row), in addition to the basic operations of addition, multiplication, and division. The aim is to optimize the routine of radiotherapy services that perform electron-therapy procedures, speeding up the calculations and minimizing the errors and uncertainties that arise from mishandling the parameters obtained from electron-beam data tables.
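
As a rough illustration of what the spreadsheet automates, the sketch below assumes the common form MU = prescribed dose / (calibration output × field factor) and uses linear interpolation for a field size missing from the acceptance-test tables; the numbers are invented, not Varian 2100C data.

```python
# Minimal sketch of a monitor-unit (MU) calculation with automatic
# interpolation of a field factor, mirroring what the Excel IF/PROCH
# (HLOOKUP) formulas do. All numeric values are illustrative.

import numpy as np

# Field factors measured at acceptance for a few square applicator sizes (cm)
FIELD_SIZES   = np.array([6.0, 10.0, 15.0, 20.0, 25.0])
FIELD_FACTORS = np.array([0.96, 1.00, 1.01, 1.02, 1.02])

def monitor_units(dose_cgy, field_size_cm, calibration_cgy_per_mu=1.0):
    """Return MU for a prescribed dose, interpolating the field factor
    for sizes not present in the acceptance-test tables."""
    field_factor = np.interp(field_size_cm, FIELD_SIZES, FIELD_FACTORS)
    return dose_cgy / (calibration_cgy_per_mu * field_factor)

# 200 cGy prescribed with a 12 cm applicator -> field factor interpolated
# between the 10 cm and 15 cm table entries
print(round(monitor_units(dose_cgy=200.0, field_size_cm=12.0), 1))
```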

Relevance: 20.00%

Abstract:

Given the exponential growth in the spread of viruses on the World Wide Web (Internet) and their increasing complexity, more sophisticated systems are needed for the extraction of malware fingerprints (the unique information that identifies a malicious program, analogous to a human fingerprint). The architecture and protocol proposed here aim to obtain more efficient fingerprints, using techniques that make a single fingerprint sufficient to cover an entire group of viruses. This efficiency comes from a hybrid fingerprint-extraction approach that takes into account both the code and the behavior of the sample, the suspected virus. The main targets of the proposed system are polymorphic and metamorphic malware, given the difficulty of creating fingerprints that identify an entire family of these viruses; this difficulty is created by the use of techniques whose main objective is to defeat analysis by experts. The parameters chosen for the behavioral analysis are: file system activity, Windows Registry activity, RAM dumps, and API calls. As for the code analysis, the objective is to divide the virus binary into blocks from which hashes can be extracted; this technique considers each instruction and its neighborhood, and is characterized as precise. In short, this information is used to predict and draw a profile of the virus's behavior and then create a fingerprint based on the degree of kinship between samples (a threshold), with the goal of increasing the ability to detect viruses that are not part of the same family.
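
The code-analysis step can be sketched as follows, under simplifying assumptions: fixed-size blocks instead of instruction neighborhoods, SHA-256 block hashes, and Jaccard similarity as the kinship measure. Block size and threshold are illustrative, not the paper's values.

```python
# Minimal sketch of block hashing for code-based malware fingerprints:
# split a binary into blocks, hash each block, and measure the "kinship"
# of two samples as the fraction of block hashes they share.

import hashlib

def block_hashes(data: bytes, block_size: int = 64) -> set:
    """Hash each fixed-size block of the binary."""
    return {
        hashlib.sha256(data[i:i + block_size]).hexdigest()
        for i in range(0, len(data), block_size)
    }

def kinship(a: bytes, b: bytes) -> float:
    """Jaccard similarity of the two samples' block-hash sets."""
    ha, hb = block_hashes(a), block_hashes(b)
    return len(ha & hb) / len(ha | hb) if ha | hb else 0.0

# Two samples sharing a NOP sled but differing in the final block:
# shared blocks raise the kinship score despite the changed payload.
print(kinship(b"\x90" * 256 + b"payload", b"\x90" * 256 + b"paxload"))
```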

Relevance: 20.00%

Abstract:

Graduate Program in Information Science - FFC

Relevance: 20.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 20.00%

Abstract:

Machine translation systems have been increasingly used for the translation of large volumes of specialized texts. The efficiency of these systems depends directly on the implementation of strategies for controlling the lexical use of source texts, as a way to guarantee machine performance and, ultimately, to ease human revision and post-editing work. This paper presents a brief history of the application of machine translation, introduces the concepts of lexicon and ambiguity, and focuses on some of the lexical control strategies currently in use, discussing their possible implications for the production and reading of specialized texts.
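
One kind of lexical control strategy the abstract refers to, checking source texts against a restricted vocabulary before translation, can be sketched as below; the word lists and sense labels are hypothetical placeholders, not from the paper.

```python
# Minimal sketch of a pre-translation lexical control check: flag source
# words outside a controlled lexicon or known to be ambiguous, so the
# author can rewrite before the text reaches the MT system.

CONTROLLED_LEXICON = {"the", "valve", "pressure", "check", "close", "open"}
AMBIGUOUS_TERMS = {"check": ["verify", "restrain"]}  # senses to disambiguate

def audit_source(text: str) -> list:
    """Return warnings for terms likely to hurt MT output quality."""
    warnings = []
    for word in text.lower().split():
        if word not in CONTROLLED_LEXICON:
            warnings.append(f"'{word}' is not in the controlled lexicon")
        elif word in AMBIGUOUS_TERMS:
            senses = ", ".join(AMBIGUOUS_TERMS[word])
            warnings.append(f"'{word}' is ambiguous ({senses}); pick one sense")
    return warnings

for warning in audit_source("check the valve pressure gauge"):
    print(warning)
```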