Abstract:
A program based on the differential evolution technique was developed to determine the optimal genetic contributions in the selection of breeding candidates. The objective function to be optimized combined the expected genetic merit of the future progeny with the mean coancestry of the breeding animals. Real and simulated data sets from populations with overlapping generations were used to validate and test the performance of the program. The program proved computationally efficient and feasible for practical application, and the expected consequences of its use, compared with empirical procedures for inbreeding control and/or with selection based solely on expected breeding value, are an improved future genetic response and a more effective limitation of the rate of inbreeding.
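A minimal sketch of this kind of penalized objective, optimized with SciPy's differential evolution; the breeding values, the relationship matrix A, the penalty weight lam, and the normalization of contributions are all assumptions for illustration, not the program described above.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
n = 20                                         # selection candidates (hypothetical)
ebv = rng.normal(0.0, 1.0, n)                  # hypothetical estimated breeding values
A = 0.1 * np.ones((n, n)) + 0.9 * np.eye(n)    # hypothetical additive relationship matrix

def objective(c, lam=2.0):
    c = c / c.sum()                     # contributions normalized to sum to one
    merit = c @ ebv                     # expected genetic merit of future progeny
    coancestry = 0.5 * c @ A @ c        # mean coancestry of the selected parents
    return -(merit - lam * coancestry)  # DE minimizes, so negate the merit term

result = differential_evolution(objective, bounds=[(1e-6, 1.0)] * n, seed=1)
print((result.x / result.x.sum()).round(3))    # optimal contributions
```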
Abstract:
In this work we use Interval Mathematics to establish interval counterparts for the main tools used in digital signal processing. More specifically, the approach developed here covers signals, systems, sampling, quantization, coding and Fourier transforms. A detailed study of some interval arithmetics which handle complex numbers is provided; they are: complex interval (or rectangular) arithmetic, circular complex arithmetic, and an interval arithmetic for polar sectors. This leads us to investigate some properties that are relevant for the development of a theory of interval digital signal processing. It is shown that the sets IR and R(C) endowed with any correct arithmetic are not algebraic fields, meaning that those sets do not behave like the real and complex numbers. An alternative to the notion of interval complex width is also provided, and the Kulisch-Miranker order is used to write complex numbers in interval form, enabling operations on endpoints. The use of interval signals and systems is made possible by the representation of real and complex values in floating point systems. That is, if a number x ∈ R is not representable in a floating point system F, then it is mapped to an interval [x̲, x̄] such that x̲ is the largest number in F smaller than x and x̄ is the smallest number in F greater than x. This interval representation is the starting point for definitions of interval signals and systems taking real or complex values. It provides the extension of notions such as causality, stability, time invariance, homogeneity, additivity and linearity to interval systems. The process of quantization is extended to its interval counterpart, and interval versions of quantization levels, quantization error and encoded signal are then provided. It is shown that the interval quantization levels represent complex quantization levels and that the classical quantization error ranges over the interval quantization error. An estimate for the interval quantization error and an interval version of the Z-transform (and hence of the Fourier transform) are provided. Finally, the results of a Matlab implementation are given.
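A minimal sketch of the floating-point-to-interval mapping described above, taking Python doubles as the system F; Fraction supplies the exact value of a decimal input, and math.nextafter (Python 3.9+) walks one unit in the last place.

```python
from fractions import Fraction
import math

def interval_hull(decimal_string):
    x = Fraction(decimal_string)     # exact rational value of the input
    f = float(x)                     # nearest double under round-to-nearest
    if Fraction(f) == x:             # exactly representable: degenerate interval
        return (f, f)
    if Fraction(f) < x:              # f is the largest double below x
        return (f, math.nextafter(f, math.inf))
    return (math.nextafter(f, -math.inf), f)   # f is the smallest double above x

print(interval_hull("0.1"))   # 0.1 is not representable: one-ulp-wide interval
print(interval_hull("0.5"))   # 0.5 is a power of two: [0.5, 0.5]
```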
Abstract:
The objective of this work was to estimate genetic and phenotypic correlations between visual scores and carcass traits measured by ultrasound, in order to verify the effectiveness of these scores in determining muscling and in carcass evaluation. The carcass traits measured by ultrasound were ribeye area (AOL) and subcutaneous fat thickness (EG), measured between the 12th and 13th ribs, as well as subcutaneous fat thickness on the rump (EGP8). The traits structure (E), precocity (P) and muscling (M) were evaluated by means of visual scores. The covariance components used to estimate the genetic and phenotypic correlations were obtained by the restricted maximum likelihood method in a multi-trait analysis. The estimates of the genetic correlations between AOL and E, P and M were 0.54, 0.58 and 0.61, respectively, indicating that, in the long term, using AOL as a selection criterion may produce animals with higher visual scores for these traits. The genetic correlations estimated between the fat thicknesses (EG and EGP8) and the scores P and M behaved similarly. However, the genetic correlations between the fat thicknesses (EG and EGP8) and E were close to zero. The phenotypic correlations followed the same trends as the respective genetic correlations. These estimates indicate that visual scores are determined, in part, by the same sets of genes that influence AOL.
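For reference, the genetic correlation reported above is the standard function of the REML covariance components (the textbook definition, not a formula quoted from the article):

```latex
% Genetic correlation between traits 1 and 2: the additive genetic covariance
% scaled by the additive genetic standard deviations of the two traits.
r_g = \frac{\sigma_{a_{12}}}{\sqrt{\sigma^2_{a_1}\,\sigma^2_{a_2}}}
```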
Abstract:
We propose a multi-resolution approach for surface reconstruction from clouds of unorganized points representing an object surface in 3D space. The proposed method uses a set of mesh operators and simple rules for selective mesh refinement, with a strategy based on Kohonen's self-organizing map. Essentially, a self-adaptive scheme iteratively moves the vertices of an initial simple mesh toward the set of points, ideally the object boundary. Successive refinement and motion of vertices are applied, leading to a more detailed surface in a multi-resolution, iterative scheme. Reconstruction was tested on several point sets, including different shapes and sizes. Results show that the generated meshes are very close to the final object shapes. We include performance measures and discuss robustness.
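A minimal sketch of the Kohonen-style vertex motion step described above: each sample point pulls its nearest mesh vertex toward itself under a decaying learning rate. The neighborhood smoothing and the selective refinement rules of the proposed method are not reproduced here.

```python
import numpy as np

def adapt_vertices(vertices, points, epochs=10, lr0=0.5, seed=0):
    """vertices, points: float arrays of shape (n, 3)."""
    rng = np.random.default_rng(seed)
    vertices = vertices.copy()
    for t in range(epochs):
        lr = lr0 * (1.0 - t / epochs)                    # decaying learning rate
        for p in points[rng.permutation(len(points))]:   # shuffled presentation
            winner = np.argmin(np.linalg.norm(vertices - p, axis=1))
            vertices[winner] += lr * (p - vertices[winner])
    return vertices
```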
Abstract:
The idea of considering imprecision in probabilities is old, beginning with the work of George Boole, who in 1854 sought to reconcile classical logic, which allows the modeling of complete ignorance, with probabilities. In 1921, John Maynard Keynes in his book made explicit use of intervals to represent imprecision in probabilities. But only with the work of Walley in 1991 were principles established that should be respected by a probability theory that deals with imprecision. With the emergence of the theory of fuzzy sets by Lotfi Zadeh in 1965, another way of dealing with uncertainty and imprecision of concepts appeared. Soon, several ways were proposed to bring Zadeh's ideas into probabilities in order to deal with imprecision, either in the events associated with the probabilities or in the values of the probabilities. In particular, from 2003 onwards, James Buckley developed a probability theory in which the values of the probabilities are fuzzy numbers. This fuzzy probability follows principles analogous to those of Walley's imprecise probabilities. On the other hand, the use of real numbers between 0 and 1 as truth degrees, as originally proposed by Zadeh, has the drawback of employing very precise values to deal with uncertainty (how can one distinguish an element that satisfies a property to degree 0.423 from one that satisfies it to degree 0.424?). This motivated the development of several extensions of fuzzy set theory that incorporate some kind of imprecision. This work considers the extension proposed by Krassimir Atanassov in 1983, which adds an extra degree of uncertainty to model the hesitation in assigning the membership degree: one value indicates the degree to which the object belongs to the set, while the other indicates the degree to which it does not belong to the set. In Zadeh's fuzzy set theory, this non-membership degree is, by default, the complement of the membership degree. In Atanassov's approach, the non-membership degree is somewhat independent of the membership degree, and the difference between the non-membership degree and the complement of the membership degree reveals the hesitation in assigning the membership degree. This extension is today called Atanassov's intuitionistic fuzzy set theory. It is worth noting that the term intuitionistic here has no relation to the term as used in the context of intuitionistic logic. In this work, two proposals for interval probability are developed: the restricted interval probability and the unrestricted interval probability. Two notions of fuzzy probability are also introduced: the constrained fuzzy probability and the unconstrained fuzzy probability. Finally, two notions of intuitionistic fuzzy probability are introduced: the restricted intuitionistic fuzzy probability and the unrestricted intuitionistic fuzzy probability.
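A minimal sketch of interval-valued probabilities, assuming endpoint-wise operations only; this illustrates the general idea, not the restricted/unrestricted constructions developed in the thesis.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IntervalProb:
    lo: float   # lower probability
    hi: float   # upper probability

    def __post_init__(self):
        assert 0.0 <= self.lo <= self.hi <= 1.0

    def complement(self):
        # P(not A) = 1 - P(A), applied endpoint-wise: the conjugate interval
        return IntervalProb(1.0 - self.hi, 1.0 - self.lo)

p = IntervalProb(0.3, 0.5)
print(p.complement())   # IntervalProb(lo=0.5, hi=0.7)
```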
Abstract:
The objectives of this work were to study environmental effects on subcutaneous fat thickness (EGS), ribeye area (AOL) and weight at 19 months of age, and to estimate genetic parameters for these traits. Data from 987 animals of the Canchim breed (5/8 Charolais + 3/8 Zebu) and of the MA genetic group (offspring of Charolais bulls and 1/2 Canchim + 1/2 Zebu cows), born in 2003, 2004 and 2005, were used. Covariance components were estimated by the restricted maximum likelihood method, using an animal model with fixed effects (year of birth, genetic group, herd and sex) and the direct additive genetic and residual random effects. The means of ribeye area and weight were higher in males than in females. In the MA genetic group, the means for all traits were higher than in the Canchim breed, and there were also herd and year-of-birth effects. The heritability estimates for AOL (0.33 ± 0.09), EGS (0.24 ± 0.09) and weight (0.23 ± 0.09) were moderate, while the genetic correlation estimate between EGS and AOL (0.21 ± 0.24) was low, suggesting that these traits are controlled by different sets of genes with additive action. The genetic correlations of weight with EGS (0.57 ± 0.23) and with AOL (0.62 ± 0.16) were moderate. It is concluded that the yearling traits should respond to selection in the herds studied and that selection for increased weight also increases EGS and AOL, and vice versa.
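For reference, the heritabilities reported above follow from the estimated variance components of the animal model (the standard definition, not a formula quoted from the article):

```latex
% Heritability: the fraction of phenotypic variance explained by the
% direct additive genetic variance, given additive and residual components.
h^2 = \frac{\sigma^2_a}{\sigma^2_a + \sigma^2_e}
```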
Abstract:
In academia, it is common to create didactic processors for practical courses in the area of computer hardware; these processors can also serve as subjects in courses on software platforms, operating systems and compilers. Often, they are described without a standard ISA, which requires the creation of compilers and other basic software to provide the hardware/software interface and hinders their integration with other processors and devices. Using reconfigurable devices described in an HDL allows the creation or modification of any microarchitecture component, permitting alteration of the functional units of the processor's datapath, as well as of the state machine that implements the control unit, as new needs arise. In particular, RISP processors enable the modification of machine instructions, allowing instructions to be inserted or modified, and may even adapt to a new architecture. Taking educational soft-core processors described in VHDL as its object of study, this work proposes a methodology and applies it to two processors of different complexity levels, showing that it is possible to tailor processors to a standard ISA without increasing hardware complexity, i.e., without a significant increase in chip area, while the performance level in application execution remains unchanged or is improved. The implementations also show that, besides it being possible to replace the architecture of a processor without changing its organization, a RISP processor can switch between different instruction sets, which can be extended to toggling between different ISAs, allowing a single processor to become an adaptive hybrid architecture usable in embedded systems and heterogeneous multiprocessor environments.
Abstract:
This work presents a model of a bearingless induction machine with divided winding. The main goal is to obtain a machine model that allows a control system as simple as that used in conventional induction machines, and to characterize the machine's behavior. The same strategies used for conventional machines were employed to derive the bearingless induction machine model, which made the treatment of the involved parameters easier. The studied machine is adapted from a conventional induction machine: the stator windings were divided and all terminals made available. This method does not need an auxiliary stator winding for radial position control, which results in a more compact machine. Another issue is the variation of the inductance matrix caused by rotor displacement: the variable air gap produces variation in the magnetic flux and, consequently, in the inductances. The conventional machine model can be used for the bearingless machine when the rotor is centered, but under rotor displacement this model is not applicable. The bearingless machine has two motor-bearing sets, both with four poles. It was constructed in the horizontal position, which increases the difficulty of implementation. The rotor has peculiar characteristics; it is designed to match the stator so as to yield the greatest possible torque and force. It is important to observe that the current unbalance generated by the position control does not modify the machine characteristics; this occurs only due to the radial rotor displacement. The obtained results validate the work: the data acquired by a supervisory system correspond to the simulation results, which verifies the validity of the model.
Abstract:
In this work we present a new clustering method that groups the points of a data set into classes. The method is based on an algorithm that links auxiliary clusters obtained using traditional vector quantization techniques. Several approaches developed during the work are described, based on measures of distance or dissimilarity (divergence) between the auxiliary clusters. The new method uses only two pieces of a priori information: the number of auxiliary clusters Na and a threshold distance dt used to decide whether or not to link auxiliary clusters. The number of classes can be found automatically by the method, based on the chosen threshold distance dt, or it can be given as additional information to help in the choice of the correct threshold. Several analyses are carried out and the results are compared with those of traditional clustering methods. Different dissimilarity metrics are analyzed and a new one is proposed, based on the concept of negentropy. Besides grouping the points of a set into classes, a method is proposed for statistically modeling the classes, aiming to obtain an expression for the probability that a point belongs to one of the classes. Experiments with several values of Na and dt are carried out on test sets, and the results are analyzed in order to study the robustness of the method and to derive heuristics for the choice of the correct threshold. Throughout this work, aspects of information theory applied to the calculation of the divergences are explored, specifically the different measures of information and divergence based on the Rényi entropy. The results obtained with the different metrics are compared and discussed. The work also includes an appendix presenting real applications of the proposed method.
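A minimal sketch of the auxiliary-cluster linkage idea, assuming k-means as the vector quantizer and plain Euclidean centroid distance in place of the divergence measures studied in the work; Na and dt play the roles described above.

```python
import numpy as np
from sklearn.cluster import KMeans

def link_auxiliary_clusters(X, Na, dt):
    km = KMeans(n_clusters=Na, n_init=10).fit(X)
    centers = km.cluster_centers_
    parent = list(range(Na))                 # union-find over auxiliary clusters

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]    # path compression
            i = parent[i]
        return i

    for i in range(Na):                      # link clusters closer than dt
        for j in range(i + 1, Na):
            if np.linalg.norm(centers[i] - centers[j]) < dt:
                parent[find(i)] = find(j)

    roots = sorted({find(i) for i in range(Na)})
    relabel = {r: k for k, r in enumerate(roots)}
    return np.array([relabel[find(l)] for l in km.labels_])   # final class labels
```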
Abstract:
Nowadays, classifying proteins into structural classes, which concerns the inference of patterns in their 3D conformation, is one of the most important open problems in Molecular Biology. The main reason is that the function of a protein is intrinsically related to its spatial conformation. However, such conformations are very difficult to obtain experimentally in the laboratory, so this problem has drawn the attention of many researchers in Bioinformatics. Considering the great difference between the number of protein sequences already known and the number of three-dimensional structures determined experimentally, the demand for automated techniques for the structural classification of proteins is very high. In this context, computational tools, especially Machine Learning (ML) techniques, have become essential to deal with this problem. In this work, ML techniques are used in the recognition of protein structural classes: Decision Trees, k-Nearest Neighbors, Naive Bayes, Support Vector Machines and Neural Networks. These methods were chosen because they represent different learning paradigms and have been widely used in the Bioinformatics literature. Aiming to improve the performance of these techniques (individual classifiers), homogeneous (Bagging and Boosting) and heterogeneous (Voting, Stacking and StackingC) multi-classification systems are used. Moreover, since the protein database used in this work presents the problem of imbalanced classes, techniques for artificial class balancing (Random Undersampling, Tomek Links, CNN, NCL and OSS) are used to minimize this problem. In order to evaluate the ML methods, a cross-validation procedure is applied, in which the accuracy of the classifiers is measured as the mean classification error rate on independent test sets. These means are compared, two by two, by a hypothesis test to evaluate whether there is a statistically significant difference between them. Among the individual classifiers, the Support Vector Machine presented the best accuracy. The multi-classification systems (homogeneous and heterogeneous) showed, in general, performance superior or similar to that achieved by the individual classifiers, especially Boosting with Decision Trees and StackingC with Linear Regression as meta-classifier. The Voting method, despite its simplicity, proved adequate for the problem addressed in this work. The class balancing techniques, on the other hand, did not produce a significant improvement in the global classification error; nevertheless, they did improve the classification error for the minority class, and in this context the NCL technique proved the most appropriate.
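A minimal sketch of the cross-validated comparison of individual classifiers, using a synthetic imbalanced data set as a stand-in for the protein database and default hyperparameters; the ensemble and class-balancing stages described above are not shown.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in: 3 imbalanced classes, 20 features
X, y = make_classification(n_samples=400, n_features=20, n_classes=3,
                           n_informative=8, weights=[0.6, 0.3, 0.1],
                           random_state=0)
classifiers = {
    "SVM": SVC(),
    "DecisionTree": DecisionTreeClassifier(random_state=0),
    "NaiveBayes": GaussianNB(),
    "kNN": KNeighborsClassifier(),
}
for name, clf in classifiers.items():
    error = 1.0 - cross_val_score(clf, X, y, cv=10).mean()  # mean error rate
    print(f"{name}: {error:.3f}")
```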
Abstract:
Reinforcement learning is a machine learning technique that, although it has found a large number of applications, may not yet have reached its full potential. One of the inadequately explored possibilities is the use of reinforcement learning in combination with other methods for solving pattern classification problems. The problems that support vector machine ensembles face in terms of generalization capacity are well documented in the literature: algorithms such as AdaBoost do not deal appropriately with the imbalances that arise in those situations, and several alternatives have been proposed, with varying degrees of success. This dissertation presents a new approach to building committees of support vector machines. The presented algorithm combines the AdaBoost algorithm with a reinforcement learning layer that adjusts committee parameters in order to prevent imbalances among the committee components from harming the generalization performance of the final hypothesis. Comparisons were made between ensembles with and without the reinforcement learning layer, on benchmark data sets widely known in the area of pattern classification.
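A minimal sketch of the boosting core over SVM base learners (SAMME-style, binary labels in {-1, +1}); the reinforcement learning layer that the dissertation adds on top of this to rebalance the committee is not reproduced here.

```python
import numpy as np
from sklearn.svm import SVC

def adaboost_svm(X, y, rounds=10):
    n = len(X)
    w = np.full(n, 1.0 / n)                      # sample weights
    learners, alphas = [], []
    for _ in range(rounds):
        clf = SVC(kernel="rbf").fit(X, y, sample_weight=w)
        pred = clf.predict(X)
        err = w[pred != y].sum()                 # weighted training error
        if err == 0.0 or err >= 0.5:             # degenerate round: stop
            break
        alpha = 0.5 * np.log((1.0 - err) / err)  # committee vote weight
        w *= np.exp(-alpha * y * pred)           # reweight toward mistakes
        w /= w.sum()
        learners.append(clf)
        alphas.append(alpha)
    return learners, alphas
```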
Abstract:
Self-organizing maps (SOM) are artificial neural networks widely used in the data mining field, mainly because they constitute a dimensionality reduction technique given the fixed grid of neurons associated with the network. In order to properly partition and visualize the SOM network, the various methods available in the literature must be applied in a post-processing stage, which consists of inferring, through its neurons, relevant characteristics of the data set. In general, applying such processing to the network neurons, instead of to the entire database, reduces the computational cost due to vector quantization. This work proposes a post-processing of the SOM neurons in the input and output spaces, combining visualization techniques with algorithms based on gravitational forces and on the search for the shortest path with the greatest reward. These methods take into account the connection strength between neighboring neurons and characteristics of pattern density and of distances among neurons, both associated with the position the neurons occupy in the data space after training the network. The goal is thus to define more clearly the arrangement of the clusters present in the data. Experiments were carried out to evaluate the proposed methods using several artificially generated data sets, as well as real-world data sets. The results obtained were compared with those of a number of well-known methods from the literature.
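A minimal sketch of one standard SOM post-processing step: a U-matrix mapping the distance from each neuron to its grid neighbors, where high values mark cluster borders. The gravitational-force and shortest-path methods proposed in the work are not reproduced here.

```python
import numpy as np

def u_matrix(weights):
    """weights: array of shape (rows, cols, dim) from a trained SOM grid."""
    rows, cols, _ = weights.shape
    U = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            # mean distance to 4-connected grid neighbors
            dists = [np.linalg.norm(weights[i, j] - weights[i + di, j + dj])
                     for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                     if 0 <= i + di < rows and 0 <= j + dj < cols]
            U[i, j] = np.mean(dists)
    return U
```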
Abstract:
Bayesian networks are powerful tools, as they represent probability distributions as graphs and can work with the uncertainties of real systems. Since the last decade there has been special interest in learning network structures from data. However, learning the best network structure is an NP-hard problem, so many heuristic algorithms for generating network structures from data have been created. Many of these algorithms use score metrics to generate the network model. This thesis compares three of the most used score metrics. The K2 algorithm and two standard benchmarks, ASIA and ALARM, were used to carry out the comparison. Results show that score metrics whose hyperparameters strengthen the tendency to select simpler network structures perform better than those with a weaker such tendency, for both the Heckerman-Geiger and the modified MDL metrics. The Heckerman-Geiger Bayesian score metric works better than MDL on large datasets, and MDL works better than Heckerman-Geiger on small datasets. The modified MDL, with its stronger tendency to select simpler network structures, gives results similar to Heckerman-Geiger for large datasets and close to MDL for small datasets.
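A minimal sketch of the MDL score for one node of a Bayesian network given a candidate parent set: the log-likelihood of the data minus a complexity penalty on the free parameters. This is the standard decomposable MDL; the thesis's modified MDL and the Heckerman-Geiger metric are not reproduced here.

```python
import numpy as np
from collections import Counter

def mdl_node_score(rows, child, parents, arity):
    """rows: list of dicts mapping variable name -> discrete value."""
    N = len(rows)
    joint = Counter((tuple(r[p] for p in parents), r[child]) for r in rows)
    marginal = Counter(tuple(r[p] for p in parents) for r in rows)
    # log-likelihood of the child given its parents, from the count tables
    loglik = sum(n * np.log(n / marginal[pa]) for (pa, _), n in joint.items())
    # free parameters: (arity(child) - 1) per parent configuration
    free = (arity[child] - 1) * int(np.prod([arity[p] for p in parents]))
    return loglik - 0.5 * np.log(N) * free      # higher is better

rows = [{"A": 0, "B": 0}, {"A": 0, "B": 0}, {"A": 1, "B": 1}, {"A": 1, "B": 0}]
print(mdl_node_score(rows, "B", ["A"], {"A": 2, "B": 2}))
```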
Abstract:
An experiment was carried out at NuPAM/FCA/UNESP, Botucatu-SP, to evaluate the dynamics of water retention and the movement of a tracer (simulating a herbicide) through different straw mulches. The treatments consisted of monitoring the FD&C-1 tracer sprayed onto straw mulches of barley, wheat, harvested black oat, rolled black oat, ryegrass, pearl millet and signal grass, at amounts of 3,000, 6,000 and 9,000 kg ha-1, before and after rainfall simulation. The replicates consisted of eight PVC + funnel + beaker sets with straw, in which water retention and throughfall were estimated from the rain leached through the straw and the weight of the PVC supports, and the extracted tracer was quantified by spectrophotometric procedures. The different types of crop residue were similar in rainwater retention, with uniformity occurring between the first 7.5 and 15 mm of precipitation. The formation of dry spots associated with preferential flow channels led to a lower capacity of the straw to absorb and retain rainwater. The maximum mean rainfall retention capacities of the mulches were 1.22, 1.99 and 2.59 mm for 3,000, 6,000 and 9,000 kg of dry matter ha-1, respectively. Initial rainfall of between 10 and 20 mm was fundamental for uniform wetting of the straw and for carrying the tracer to the soil, regardless of the type and amount of straw. This behavior indicates that similar weed control programs can feasibly be used for different types and amounts of straw in no-till systems.
Abstract:
The germination test is performed in the laboratory under controlled, favorable environmental conditions, aiming to obtain the most complete and rapid germination of seed lots. The substrate used must maintain sufficient moisture for the germination process, and the moistened paper rolls often need to be packed in plastic bags. Excess moisture can also be harmful to germination, causing delay or arrest of seedling development. These alterations can make the test unrepresentative of the true quality of the lot. The objective of this work was to evaluate the effect of plastic packaging for the paper roll plus seed sets during the germination test conducted in B.O.D.-type vertical chamber germinators, aiming to maximize the results. Two thicknesses (0.033 mm and 0.050 mm) and the presence or absence of perforations (128 holes of 5 mm diameter per 60 cm x 40 cm face) of the transparent plastic bags used during the germination test were evaluated for the following species: sweet corn (cv. 'Doce Cristal' and cv. 'Super Doce'), common bean (cv. 'Pérola' and cv. 'IAC-Carioca Tybatã') and soybean (cv. 'Empresa Brasileira de Pesquisa Agropecuária (EMBRAPA)-48', two lots). For sweet corn and bean seeds, the perforated thick or thin plastic and the intact thin plastic treatments produced the best germination test results. It was concluded that plastic thickness and the presence or absence of perforations are factors that affect the results of the germination test conducted in B.O.D.-type vertical chamber germinators.