987 results for Set-valued map
Abstract:
Coupled map lattices (CML) can describe many relaxation and optimization algorithms currently used in image processing. We recently introduced the "plastic-CML" as a paradigm to extract (segment) objects in an image. Here, the image is applied as a set of forces to a metal sheet which is allowed to undergo plastic deformation parallel to the applied forces. In this paper we present an analysis of our "plastic-CML" in one and two dimensions, deriving the nature and stability of its stationary solutions. We also detail how to use the CML in image processing and how to set the system parameters, and present examples of it at work. We conclude that the plastic-CML is able to segment images with large amounts of noise and a large dynamic range of pixel values, and is suitable for a very large scale integration (VLSI) implementation.
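The abstract does not spell out the plastic-CML update rule, so the sketch below shows only a generic one-dimensional coupled map lattice with diffusive nearest-neighbour coupling; the logistic local_map, the coupling strength eps, and the lattice size are illustrative assumptions, not the authors' plastic dynamics.

```python
import numpy as np

def local_map(x):
    """Local site dynamics; the logistic map is a common textbook choice."""
    return 3.9 * x * (1.0 - x)

def cml_step(x, eps=0.1):
    """One synchronous update of a 1D coupled map lattice with
    diffusive nearest-neighbour coupling and periodic boundaries."""
    fx = local_map(x)
    return (1.0 - eps) * fx + 0.5 * eps * (np.roll(fx, 1) + np.roll(fx, -1))

# Iterate a random initial lattice for a few steps.
state = np.random.rand(64)
for _ in range(100):
    state = cml_step(state)
print(state[:5])
```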
Abstract:
The main goal of this project is to improve the efficiency of body repair and painting services at Caetano Auto Colisão through the application of tools associated with the Lean philosophy. Although lean tools and techniques are well established in production and manufacturing companies, the same is not true of companies in the service sector. Value Stream Mapping is a lean tool that maps the flow of materials and information needed to carry out the activities (both value-adding and non-value-adding) performed by employees, suppliers and distributors, from the receipt of the customer's order to the final delivery of the service. With this tool it is possible to identify the activities that add no value to the process and to propose improvement measures that eliminate or reduce them. Based on this concept, the body repair and painting service process was mapped and its sources of inefficiency identified. From this analysis, improvements were suggested with the aim of reaching the proposed future state and making the process more efficient. Two of these improvements were the implementation of 5S in the paint room and the preparation of an A3 report for the washing center. The project allowed the study of a real problem in a service company, as well as the proposal of a set of improvements that, in the medium term, are expected to contribute to greater efficiency in the provision of body repair and painting services.
Abstract:
Image (video) retrieval is the problem of retrieving images (videos) similar to a query. Images (videos) are represented in an input (feature) space, and similar images (videos) are obtained by finding nearest neighbors in that representation space. Numerous input representations, in both real-valued and binary spaces, have been proposed for conducting faster retrieval. In this thesis, we present techniques that obtain improved input representations for retrieval in both supervised and unsupervised settings for images and videos. Supervised retrieval is the well-known problem of retrieving images of the same class as the query. In the first part we address the practical aspects of achieving faster retrieval with binary codes as input representations for the supervised setting, where binary codes are used as addresses into hash tables. In practice, using binary codes as addresses does not guarantee fast retrieval, because similar images are not mapped to the same binary code (address). We address this problem by presenting an efficient supervised hashing (binary encoding) method that aims to explicitly map all images of the same class to a single, ideally unique, binary code. We refer to the binary codes of the images as 'Semantic Binary Codes' and the unique code for all same-class images as the 'Class Binary Code'. We also propose a new class-based Hamming metric that dramatically reduces retrieval times for larger databases, since Hamming distances are computed only to the class binary codes. We further propose a deep semantic binary code model, obtained by replacing the output layer of a popular convolutional neural network (AlexNet) with the class binary codes, and show that the hashing functions learned in this way outperform the state of the art while providing fast retrieval times. In the second part, we address the problem of supervised retrieval by taking into account the relationships between classes. For a given query image, we want to retrieve images that preserve the relative order, i.e., all same-class images first, then related-class images, before different-class images. We learn such relationship-aware binary codes by minimizing the discrepancy between the inner products of the binary codes and the similarities between the classes. We calculate the similarity between classes using output embedding vectors, which are vector representations of classes. Our method deviates from other supervised binary encoding schemes in that it is the first to use output embeddings for learning hashing functions. We also introduce new performance metrics that take related-class retrieval results into account and show significant gains over the state of the art. High-dimensional descriptors such as Fisher Vectors or Vectors of Locally Aggregated Descriptors have been shown to improve the performance of many computer vision applications, including retrieval. In the third part, we discuss an unsupervised technique for compressing high-dimensional vectors into high-dimensional binary codes to reduce storage complexity. In this approach, we deviate from traditional hyperplane hashing functions and instead learn hyperspherical hashing functions. The proposed method overcomes the computational challenges of directly applying the spherical hashing algorithm, which is intractable for compressing high-dimensional vectors. A practical hierarchical model is presented that compresses such high-dimensional vectors using divide-and-conquer techniques based on the Random Select and Adjust (RSA) procedure. We show that our proposed high-dimensional binary codes outperform binary codes obtained using traditional hyperplane methods for higher compression ratios. In the last part of the thesis, we propose a retrieval-based solution to the zero-shot event classification problem, a setting where no training videos are available for the event. To do this, we learn a generic set of concept detectors and represent both videos and query events in the concept space. We then compute the similarity between the query event and each video in the concept space, and videos similar to the query event are classified as belonging to the event. We show that concept features from other modalities significantly boost performance.
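As a rough illustration of the class-based Hamming lookup described above, the sketch below ranks class binary codes by Hamming distance to a query code and returns database items already assigned to the nearest class code; the helper names, the 4-bit codes, and the exact-match criterion are simplifying assumptions, not the thesis's implementation.

```python
import numpy as np

def hamming(a, b):
    """Hamming distance between two equal-length 0/1 code vectors."""
    return int(np.count_nonzero(a != b))

def class_based_retrieval(query_code, class_codes, db_codes, db_ids):
    """Illustrative class-based lookup: rank classes by Hamming distance
    to the query code, then return items whose codes match the nearest
    class binary code exactly."""
    distances = [hamming(query_code, c) for c in class_codes]
    nearest_class = int(np.argmin(distances))
    target = class_codes[nearest_class]
    hits = [i for code, i in zip(db_codes, db_ids) if hamming(code, target) == 0]
    return hits, nearest_class

# Toy example: 4-bit codes, two classes, database items assumed to
# already carry their (ideal) class codes.
class_codes = np.array([[0, 0, 0, 0], [1, 1, 1, 1]])
db_codes = np.array([[0, 0, 0, 0], [0, 0, 0, 1], [1, 1, 1, 1]])
db_ids = ["img_a", "img_b", "img_c"]
query = np.array([0, 0, 0, 1])
print(class_based_retrieval(query, class_codes, db_codes, db_ids))
```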
Abstract:
Recent efforts to develop large-scale neural architectures have paid relatively little attention to the use of self-organizing maps (SOMs). Part of the reason is that most conventional SOMs use a static encoding representation: Each input is typically represented by the fixed activation of a single node in the map layer. This not only carries information in an inefficient and unreliable way that impedes building robust multi-SOM neural architectures, but it is also inconsistent with rhythmic oscillations in biological neural networks. Here I develop and study an alternative encoding scheme that instead uses limit cycle attractors of multi-focal activity patterns to represent input patterns/sequences. Such a fundamental change in representation raises several questions: Can this be done effectively and reliably? If so, will map formation still occur? What properties would limit cycle SOMs exhibit? Could multiple such SOMs interact effectively? Could robust architectures based on such SOMs be built for practical applications? The principal results of examining these questions are as follows. First, conditions are established for limit cycle attractors to emerge in a SOM through self-organization when encoding both static and temporal sequence inputs. It is found that under appropriate conditions a set of learned limit cycles are stable, unique, and preserve input relationships. In spite of the continually changing activity in a limit cycle SOM, map formation continues to occur reliably. Next, associations between limit cycles in different SOMs are learned. It is shown that limit cycles in one SOM can be successfully retrieved by another SOM's limit cycle activity. Control timings can be set quite arbitrarily during both training and activation. Importantly, the learned associations generalize to new inputs that have never been seen during training. Finally, a complete neural architecture based on multiple limit cycle SOMs is presented for robotic arm control. This architecture combines open-loop and closed-loop methods to achieve high accuracy and fast movements through smooth trajectories. The architecture is robust in that disrupting or damaging the system in a variety of ways does not completely destroy it. I conclude that limit cycle SOMs have great potential for use in constructing robust neural architectures.
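For context only, here is a minimal sketch of the classical single-winner SOM update that the abstract contrasts with its limit-cycle encoding; the learning rate, Gaussian neighbourhood, and 1D map topology are generic textbook choices, not the limit-cycle scheme itself.

```python
import numpy as np

def som_step(weights, x, lr=0.1, sigma=1.0):
    """One update of a classical 1D SOM: find the single best-matching
    unit (the 'static' winner the abstract contrasts against) and pull
    neighbouring weights toward the input with a Gaussian neighbourhood."""
    dists = np.linalg.norm(weights - x, axis=1)
    winner = int(np.argmin(dists))
    idx = np.arange(weights.shape[0])
    h = np.exp(-((idx - winner) ** 2) / (2.0 * sigma ** 2))
    return weights + lr * h[:, None] * (x - weights), winner

# Toy map: 10 nodes, 3-dimensional inputs.
rng = np.random.default_rng(0)
weights = rng.random((10, 3))
for _ in range(200):
    weights, _ = som_step(weights, rng.random(3))
```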
Abstract:
Accompanied by text of Guide to the map of fairyland. Designed & written by Bernard Sleigh. London, Sidgwick & Jackson, [1920?] 16 pages : text, illustrations ; 19 cm.
Abstract:
Chemical cross-linking has emerged as a powerful approach for the structural characterization of proteins and protein complexes. However, the correct identification of covalently linked (cross-linked, or XL) peptides analyzed by tandem mass spectrometry is still an open challenge. Here we present SIM-XL, a software tool that can analyze data generated through commonly used cross-linkers (e.g., BS3/DSS). Our software introduces a new paradigm for search-space reduction, which ultimately accounts for its increase in speed and sensitivity. Moreover, our search engine is the first to capitalize on reporter ions for selecting tandem mass spectra derived from cross-linked peptides. It also provides a 2D interaction map and a spectrum-annotation tool unmatched by other tools of its kind. We show SIM-XL to be more sensitive and faster than a competing tool when analyzing a data set obtained from the human HSP90. The software is freely available for academic use at http://patternlabforproteomics.org/sim-xl. A video demonstrating the tool is available at http://patternlabforproteomics.org/sim-xl/video. SIM-XL is the first tool to support XL data in the mzIdentML format; all data are thus available from the ProteomeXchange consortium (identifier PXD001677).
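The abstract does not give SIM-XL's reporter-ion criteria, so the following is only a hedged sketch of the general idea of selecting tandem mass spectra that contain a peak near a reporter-ion m/z; the m/z value, tolerance, and function name are hypothetical.

```python
def has_reporter_ion(peaks, reporter_mz, tol_ppm=20.0):
    """Return True if any peak in an MS/MS spectrum lies within a ppm
    tolerance of a cross-link reporter-ion m/z (values are illustrative)."""
    for mz, intensity in peaks:
        if abs(mz - reporter_mz) / reporter_mz * 1e6 <= tol_ppm:
            return True
    return False

# Toy spectrum as (m/z, intensity) pairs; 222.15 stands in for a
# hypothetical reporter-ion m/z, not a value taken from SIM-XL.
spectrum = [(175.12, 1200.0), (222.149, 800.0), (310.20, 400.0)]
print(has_reporter_ion(spectrum, reporter_mz=222.15))
```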
Abstract:
Evolving interfaces were initially applied to scientific problems in fluid dynamics. With the advent of the more robust modeling provided by the Level Set method, their range of applicability widened. In the geometric modeling area specifically, work published so far relating the Level Set method to three-dimensional surface reconstruction has centered on reconstruction from point clouds scattered in space; the approach based on parallel planar slices transverse to the object to be reconstructed is still incipient. Motivated by this, the present work analyses the feasibility of the Level Set method for three-dimensional reconstruction, offering a methodology that integrates ideas from the literature that have already proven effective with proposals for handling limitations of the method not yet satisfactorily addressed, in particular the excessive smoothing of fine contour features during Level Set evolution. The Particle Level Set variant is suggested as a solution to this problem, given its proven intrinsic ability to preserve the mass of dynamic fronts. Finally, synthetic and real data sets are used to qualitatively evaluate the proposed three-dimensional surface reconstruction methodology.
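For reference, the interface evolution referred to above is governed by the standard Level Set formulation; the curvature-dependent speed shown here is one common choice rather than the thesis's exact model, and it is the usual source of the over-smoothing of fine features.

```latex
% The surface is the zero level set of \phi, advected with normal speed F;
% the curvature term \epsilon\kappa is one common regularizing choice.
\begin{align}
  \Gamma(t) &= \{\, \mathbf{x} \;\mid\; \phi(\mathbf{x}, t) = 0 \,\}, \\
  \frac{\partial \phi}{\partial t} + F\,\lvert \nabla \phi \rvert &= 0,
  \qquad F = F_{0} - \epsilon\,\kappa .
\end{align}
```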
Abstract:
Universidade Estadual de Campinas. Faculdade de Educação Física
Abstract:
To assess the frequency of dairy goats seropositive for Neospora caninum in the state of São Paulo, and to investigate possible associations with age, sex and reproductive problems in the herds, as well as with the presence of dogs on the farms, sera were obtained from 923 goats of both sexes aged over 3 months. The animals came from 17 farms in different municipalities. For diagnosis, the Neospora agglutination test (NAT ≥ 25) was used, and a questionnaire was applied on all the farms to obtain epidemiological and reproductive information. All statistical results were discussed at the 5% significance level. The frequency of seropositivity for N. caninum was 19.77%, and only one farm had no record of a seropositive animal, which reveals the spread of the agent in the state. No significant differences in seropositivity were found with respect to sex, age or reproductive problems. However, the presence of dogs on the farms was associated with a higher frequency of goats seropositive for N. caninum. The geographic representation of the distribution of goats seropositive for the protozoan, in a hatched choropleth map, can bring considerable gains to studies of geographic epidemiology and to the planning of disease control.
Abstract:
The advent of highly active antiretroviral therapy (HAART) changed the natural history of AIDS, reducing mortality and the incidence of opportunistic diseases and increasing the life expectancy of people living with AIDS. With AIDS becoming a chronic disease, other issues become relevant, among them treatment adherence, adverse effects, and the quality of life of people in this condition. The ICF (International Classification of Functioning, Disability and Health) is a suitable instrument for identifying the characteristics of functioning, the environment and personal factors that affect quality of life. Instruments for its application, core sets, have been developed for several health conditions. With the aim of proposing a core set for AIDS, two preliminary stages of the model proposed for constructing these instruments were carried out. The first stage, a systematic review, searched MEDLINE for articles with the descriptors HAART and quality of life, published in English from 2000 to 2004. Thirty-one studies were selected, yielding 87 concepts, of which 66 could be identified as ICF categories. These categories formed the questions of an interview applied to 42 volunteers, patients at a reference center for STD and AIDS in São Paulo. Among the conditions most frequently associated with treatment are changes in body image resulting from lipodystrophy, reported in 84 percent of the studies and 93 percent of the interviews. Changes in digestive functions, intimate relationships and sexual functions were other important conditions identified in the study. The two stages defined 40 ICF categories as a preliminary proposal for a core set for patients with AIDS.
Abstract:
Premise of study: Microsatellite primers were developed for castor bean (Ricinus communis L.) to investigate genetic diversity and population structure, and to provide support to germplasm management. Methods and Results: Eleven microsatellite loci were isolated using an enrichment cloning protocol and used to characterize castor bean germplasm from the collection at the Instituto Agronomico de Campinas (IAC). In a survey of 76 castor bean accessions, the investigated loci displayed polymorphism ranging from two to five alleles. Conclusions: The information derived from microsatellite markers led to significant gains in conserved allelic richness and provides support to the implementation of several molecular breeding strategies for castor bean.
Abstract:
Background: High-throughput SNP genotyping has become an essential requirement for molecular breeding and population genomics studies in plant species. Large-scale SNP development efforts have been reported for several mainstream crops. There is now growing interest in extending the speed and resolution of genetic analysis to outbred species with highly heterozygous genomes. When nucleotide diversity is high, a refined diagnosis of the target SNP sequence context is needed to convert queried SNPs into high-quality genotypes using the Golden Gate Genotyping Technology (GGGT). This issue is exacerbated when attempting to transfer SNPs across species, a scarcely explored topic in plants that is likely to become significant for population genomics and interspecific breeding applications in less domesticated and less funded plant genera. Results: We have successfully developed the first set of 768 SNPs assayed by the GGGT for the highly heterozygous genome of Eucalyptus, from a mixed Sanger/454 database with 1,164,695 ESTs and the preliminary 4.5X draft genome sequence for E. grandis. A systematic assessment of in silico SNP filtering requirements showed that stringent constraints on the SNP surrounding sequences have a significant impact on SNP genotyping performance and polymorphism. SNP assay success was high for the 288 SNPs selected with more rigorous in silico constraints; 93% of them provided high-quality genotype calls and 71% of them were polymorphic in a diverse panel of 96 individuals from five different species. SNP reliability was high across nine Eucalyptus species belonging to three sections within subgenus Symphomyrtus, and still satisfactory across species of two additional subgenera, although polymorphism declined as phylogenetic distance increased. Conclusions: This study indicates that the GGGT performs well both within and across species of Eucalyptus, notwithstanding its nucleotide diversity ≥ 2%. The development of a much larger array of informative SNPs across multiple Eucalyptus species is feasible, although strongly dependent on having a representative and sufficiently deep collection of sequences from many individuals of each target species. A higher-density SNP platform will be instrumental for undertaking genome-wide phylogenetic and population genomics studies and for implementing molecular breeding by Genomic Selection in Eucalyptus.
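As a hedged illustration of the kind of in silico constraint on SNP-surrounding sequences mentioned above, the sketch below keeps a candidate SNP only if no other known variant lies within a flanking window; the 60 bp window and the data layout are assumed for illustration, not taken from the study.

```python
def passes_flank_filter(snp_pos, all_variant_positions, flank=60):
    """Illustrative in silico filter: keep a candidate SNP only if no
    other known variant falls within `flank` bases on either side,
    so the assay probe sequence is free of secondary polymorphisms.
    The 60 bp window is an assumed value, not the study's threshold."""
    return all(
        p == snp_pos or abs(p - snp_pos) > flank
        for p in all_variant_positions
    )

# Toy example: a candidate at position 1,000 with a second variant
# 25 bp away fails; an isolated candidate at 2,000 passes.
variants = [1000, 1025, 2000]
print(passes_flank_filter(1000, variants))   # False
print(passes_flank_filter(2000, variants))   # True
```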