618 results for SURF Descriptor
Abstract:
[EN] Sea turtles bury their eggs in the sand of the beach, where they incubate. After a period of approximately two months, hatchlings break the eggshell and remain inside the chamber for three to seven days (Hays & Speakman, 1993). They then leave the nest and emerge to the surface of the beach, moving quickly towards the surf to begin their pelagic developmental stage (e.g., López-Jurado & Andreu, 1998). Hatchlings usually do not emerge from the nest as a single group; they emerge in groups at different moments, resulting in more than one emergence per nest over several days (Witherington et al., 1990; Hays et al., 1992; Peters et al., 1994).
Abstract:
AIRES, Kelson R. T.; ARAÚJO, Hélder J.; MEDEIROS, Adelardo A. D. Plane Detection from Monocular Image Sequences. In: VISUALIZATION, IMAGING AND IMAGE PROCESSING, 2008, Palma de Mallorca, Spain. Proceedings... Palma de Mallorca: VIIP, 2008.
Abstract:
AIRES, Kelson R. T.; ARAÚJO, Hélder J.; MEDEIROS, Adelardo A. D. Plane Detection Using Affine Homography. In: CONGRESSO BRASILEIRO DE AUTOMÁTICA, 2008, Juiz de Fora, MG. Anais... do CBA 2008.
Abstract:
Over the last few decades, work on infrared sensor applications has advanced considerably worldwide. One difficulty remains, however: objects are not always clear enough, or cannot always be easily distinguished, in the image obtained of the observed scene. Infrared image enhancement has played an important role in the development of infrared computer vision, image processing, non-destructive testing, and related technologies. This thesis addresses infrared image enhancement techniques in two respects: the processing of a single infrared image in the hybrid spatial-frequency domain, and the fusion of infrared and visible images using the nonsubsampled contourlet transform (NSCT). Image fusion can be seen as a continuation of the single infrared image enhancement model: it combines infrared and visible images into a single image that represents and enhances all the useful information and characteristics of the source images, since no single image can contain all the relevant or available information, owing to the restrictions inherent in any single imaging sensor. We first review the development of infrared image enhancement techniques, then focus on single infrared image enhancement, proposing a hybrid-domain enhancement scheme with an improved fuzzy threshold evaluation method that achieves higher image quality and improves human visual perception. Infrared-visible fusion techniques rest on accurate registration of the source images acquired by different sensors. The SURF-RANSAC algorithm is applied for registration throughout this research, producing very accurately registered images and greater benefits for the fusion processing. For the infrared-visible fusion problem, a series of advanced and effective approaches is proposed. A standard multi-channel NSCT-based fusion method is presented as a reference for the fusion approaches that follow. A joint fusion approach involving the Adaptive-Gaussian NSCT and the wavelet transform (WT) is proposed, yielding fusion results better than those obtained with general non-adaptive methods. An NSCT-based fusion approach employing compressed sensing (CS) and total variation (TV) on sparsely sampled coefficients, with accurate reconstruction of the fused coefficients, is proposed; it obtains much better fusion results through pre-enhancement of the infrared image and by reducing the redundant information in the fusion coefficients. Finally, an NSCT-based fusion procedure using a fast iterative-shrinking compressed sensing (FISCS) technique is proposed to compress the decomposed coefficients and reconstruct the fused coefficients during the fusion process, leading to better results, faster and more efficiently.
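As a rough illustration of the SURF-RANSAC registration step, here is a sketch of the standard OpenCV pipeline, not the thesis implementation; SURF requires an OpenCV build with the contrib modules, and `register_surf_ransac` and its parameters are our illustrative choices:

```python
# Minimal sketch of SURF + RANSAC registration, assuming opencv-contrib is
# installed (SURF is patented and lives in cv2.xfeatures2d).
import cv2
import numpy as np

def register_surf_ransac(infrared, visible, hessian_threshold=400):
    """Warp the infrared image onto the visible one via a SURF/RANSAC homography."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian_threshold)
    kp_ir, des_ir = surf.detectAndCompute(infrared, None)
    kp_vis, des_vis = surf.detectAndCompute(visible, None)

    # Match descriptors and keep the best correspondences (Lowe's ratio test).
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des_ir, des_vis, k=2)
    good = [m for m, n in matches if m.distance < 0.7 * n.distance]

    src = np.float32([kp_ir[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_vis[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # RANSAC rejects the outlier matches that are common across modalities.
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return cv2.warpPerspective(infrared, H, visible.shape[1::-1])
```

The warped infrared image can then be decomposed together with the visible image by the NSCT fusion stage; the 0.7 ratio and the 5-pixel RANSAC threshold are conventional defaults, not values taken from the thesis.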
Abstract:
In computer vision, training a model that performs classification effectively is highly dependent on the extracted features and the number of training instances. Conventionally, feature detection and extraction are performed by a domain expert who, in many cases, is expensive to employ and hard to find. Therefore, image descriptors have emerged to automate these tasks. However, designing an image descriptor still requires domain-expert intervention. Moreover, the majority of machine learning algorithms require a large number of training examples to perform well, yet labelled data is not always available or easy to acquire, and dealing with a large dataset can dramatically slow down the training process. In this paper, we propose a novel Genetic Programming-based method that automatically synthesises a descriptor using only two training instances per class. The proposed method combines arithmetic operators to evolve a model that takes an image and generates a feature vector. The performance of the proposed method is assessed using six datasets for texture classification with different degrees of rotation, and is compared with seven domain-expert designed descriptors. The results show that the proposed method is robust to rotation, and has significantly outperformed, or achieved comparable performance to, the baseline methods.
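To make the idea concrete, here is a deliberately tiny sketch of GP-synthesised descriptors under our own assumptions (a three-statistic terminal set, three arithmetic operators, mutation-only evolution, and leave-one-out 1-NN fitness on the two instances per class); the paper's method is substantially richer, and all names are illustrative:

```python
# Toy Genetic Programming sketch: evolve arithmetic expression trees that map
# cheap image statistics to features, scored by 1-NN accuracy on tiny data.
import random
import numpy as np

STATS = {  # terminal set: per-image statistics fed to the evolved expressions
    "mean": lambda img: float(img.mean()),
    "std": lambda img: float(img.std()),
    "grad": lambda img: float(np.abs(np.diff(img, axis=1)).mean()),
}
OPS = {"add": np.add, "sub": np.subtract, "mul": np.multiply}

def random_tree(depth=2):
    if depth == 0 or random.random() < 0.3:
        return random.choice(list(STATS))
    return (random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, img):
    if isinstance(tree, str):
        return STATS[tree](img)
    op, left, right = tree
    return float(OPS[op](evaluate(left, img), evaluate(right, img)))

def fitness(trees, train):
    # One feature vector per training image; leave-one-out 1-NN accuracy.
    feats = [(np.array([evaluate(t, img) for t in trees]), label)
             for img, label in train]
    correct = 0
    for i, (f, label) in enumerate(feats):
        _, nearest = min((np.linalg.norm(f - g), lab)
                         for j, (g, lab) in enumerate(feats) if j != i)
        correct += nearest == label
    return correct / len(feats)

def evolve_descriptor(train, pop_size=30, generations=40, n_features=4):
    pop = [[random_tree() for _ in range(n_features)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(ind, train), reverse=True)
        parents = pop[: pop_size // 2]
        children = [[random_tree() if random.random() < 0.2 else t for t in ind]
                    for ind in parents]  # crude mutation: resample a tree
        pop = parents + children
    return max(pop, key=lambda ind: fitness(ind, train))
```

Calling `evolve_descriptor(train)` on a list of `(image_array, label)` pairs, two per class, returns the best-scoring list of expression trees; applying `evaluate` per tree then turns any image into a feature vector.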
Abstract:
Final Master's project submitted to obtain the degree of Master in Mechanical Engineering.
Abstract:
Detailed bathymetric data obtained on the shoreface adjacent to the West Jetty (Molhe Oeste), in the area known as the Três Marias bank, revealed a scour trough next to the tip of the jetty and a dome-shaped bank SW of the depression. The bank has its base and crest at depths of 8 m and 6 m, respectively. The feature is roughly oval, with the major axis semi-parallel to the coast (NNE-SSW) measuring approximately 1600 m. The minor axis, nearly transverse to the coast (E-W), is about 1200 m long. The bank is a remnant of the terminal lobe of the lagoon's ebb-tidal delta (bar), formed during the stabilisation of its inlet. The trough next to the jetty head has a steep slope, starting at the 8 m isobath and reaching the 17 m isobath, and was probably formed by the action of the longshore current running from SW to NE. Sediment samples were also collected. These data revealed at least three sub-environments reflecting different energy levels, from the most energetic to the least: the bank, where the sediment is coarser and better sorted; the shoreface, with slightly finer sediment; and the trough, where poorly sorted mud was sampled.
Abstract:
An exploratory and descriptive single case study analysing indexing in one of the university libraries of SIB/FURG. The specific objectives, defined within that context, were: a) to identify and analyse, through cognitive mapping, the methodological procedures employed in indexing during the activities of analysis, synthesis and representation of information; b) to identify the concepts/notions of greatest importance in the indexer's perception of the indexing process, and the relations between those concepts, so as to build a cognitive map of the process from the indexer's perspective; and c) to describe and analyse the indexing of books in the unit under study in terms of their analysis, synthesis and representation, through the application of the Verbal Protocol. The techniques used for collecting information in the single case study were Self-Q and the Verbal Protocol, both within a qualitative approach. From the construction of the indexer's cognitive map, it is concluded that the notions/concepts underpinning her practice are mostly procedural in character. It was also observed that indexing practice occurs disconnected from the principles of specificity and exhaustivity. Regarding the indexing of books, it is concluded that, in the unit under study, the analysis operations are carried out empirically, through reading and interpreting parts of the indexed document. The focus of the practice was found to fall not only on the document but also on the user. Analysis and synthesis occur in an integrated way, and at some moments synthesis is developed from knowledge of the thesaurus descriptors. The delimitation of concepts, in turn, was at times influenced by: the use of terms already employed in the unit/system; the presence of the descriptor in the table of contents; knowledge of user demands; the subject area being indexed; and the indexer's professional perception. No defined levels of exhaustivity and specificity in indexing were found. In the representation of concepts, difficulties were identified arising from the absence of relationships between terms and/or the absence of terms for the indexed area in the thesaurus used. It is concluded that a formalised indexing policy needs to be developed to underpin the practice at SIB/FURG.
Abstract:
The studies have aimed to overcome the confusing variety of existing persistent identifier systems by: analysing the current national URN:NBN and other identifier initiatives; providing guidelines for an internationally harmonised persistent identifier framework that serves the long-term preservation needs of the research and cultural heritage communities; and advising these communities on a roadmap for realising the potential benefits. This roadmap also includes a blueprint for an organisation for the distribution and maintenance of the Persistent Identifier infrastructure. These studies are connected to the broader PersID project with DEFF, SURF, DANS, the national libraries of Germany, Finland and Sweden, and CNR and FDR from Italy. A number of organisations have been involved in the process: Europeana, the British Library, the Dutch Royal Library, the National Library of Norway and the Ministry of Education, Flanders, Belgium. PersID - III: Current State and State of the Art (IIIa) & User Requirements (IIIb) (Persistent Identifier: urn:nbn:nl:ui:13-9g4-i1s); PersID - IV: Prototype for a Meta Resolver System / Work on Standards (Persistent Identifier: urn:nbn:nl:ui:13-wt1-6n9); PersID - V: Sustainability (Persistent Identifier: urn:nbn:nl:ui:13-o4p-8py). Please note that there are also two broader reports on the project as a whole: PersID - I: Project Report, and II: Communication. For further information please visit the website of the Persistent Identifier project: www.persid.org
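The URN:NBN identifiers quoted above follow the national bibliography number scheme of RFC 3188. As a rough, hypothetical sketch (the function name and the simplified grammar are ours, not PersID's), one of the report identifiers decomposes like this:

```python
# Simplified urn:nbn parser; the real NBN syntax (RFC 3188) has more rules.
def parse_nbn(urn: str) -> dict:
    """Split e.g. 'urn:nbn:nl:ui:13-9g4-i1s' into its NBN components."""
    scheme, nid, rest = urn.split(":", 2)
    if scheme.lower() != "urn" or nid.lower() != "nbn":
        raise ValueError("not a urn:nbn identifier: " + urn)
    prefix, sep, local = rest.partition("-")  # first '-' ends the namespace prefix
    if not sep:
        raise ValueError("missing local identifier: " + urn)
    country, *subnamespaces = prefix.split(":")
    return {"country": country, "subnamespaces": subnamespaces, "local": local}

print(parse_nbn("urn:nbn:nl:ui:13-9g4-i1s"))
# -> {'country': 'nl', 'subnamespaces': ['ui', '13'], 'local': '9g4-i1s'}
```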
Abstract:
For some years now the Internet and World Wide Web communities have envisaged moving to a next generation of Web technologies by promoting a globally unique, and persistent, identifier for identifying and locating many forms of published object. These identifiers are called Uniform Resource Names (URNs) and they hold out the prospect of being able to refer to an object by what it is (signified by its URN), rather than by where it is (the current URL technology). One early implementation of URN ideas is the Unicode-based Handle technology, developed at CNRI in Reston, Virginia. The Digital Object Identifier (DOI) is a specific URN naming convention proposed just over 5 years ago and is now administered by the International DOI Foundation, founded by a consortium of publishers and based in Washington DC. The DOI is being promoted for managing electronic content and for intellectual rights management of it, either using the published work itself or, increasingly, via metadata descriptors for the work in question. This paper describes the use of the CNRI handle parser to navigate a corpus of papers for the Electronic Publishing journal. These papers are in PDF format and hosted on our server in Nottingham. For each paper in the corpus a metadata descriptor is prepared for every citation appearing in the References section. The important factor is that the underlying handle is resolved locally in the first instance. In some cases (e.g. cross-citations within the corpus itself and links to known resources elsewhere) the handle can be handed over to CNRI for further resolution. This work shows the encouraging prospect of being able to use persistent URNs not only for intellectual property negotiations but also for search and discovery. In the test domain of this experiment every single resource referred to within a given paper can be resolved, at least to the level of metadata about the referred object. If the Web were to become more fully URN-aware, then a vast directed graph of linked resources could be accessed via persistent names. Moreover, if these names delivered embedded metadata when resolved, the way would be open for a new generation of vastly more accurate and intelligent Web search engines.
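Handle resolution of the kind the paper performs locally can be illustrated against today's public Handle proxy, which exposes a JSON REST API at hdl.handle.net (this interface postdates the paper and is shown only to make the resolution step concrete; `resolve_handle` is our name):

```python
# Hedged sketch: resolve a handle via the public hdl.handle.net REST proxy and
# print its typed values (URL, email, etc.). DOIs are handles, so DOI names work.
import json
import urllib.request

def resolve_handle(handle: str) -> list:
    """Return the list of typed values stored in the handle record."""
    url = "https://hdl.handle.net/api/handles/" + handle
    with urllib.request.urlopen(url) as resp:
        record = json.load(resp)
    return record.get("values", [])

for value in resolve_handle("10.1000/1"):  # 10.1000/1 is the DOI Handbook's DOI
    data = value["data"]
    print(value["type"], data["value"] if data.get("format") == "string" else data)
```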
Abstract:
The workshop took place on 16-17 January in Utrecht, with seventy experts from eight European countries in attendance. The workshop was structured in six sessions: usage statistics; research paper metadata; exchanging information; author identification; Open Archives Initiative; and eTheses. Following the workshop, the discussion groups were asked to continue their collaboration and to produce a report for circulation to all participants. The results can be downloaded below. The recommendations contained in the reports above have been reviewed by the Knowledge Exchange partner organisations and formed the basis for new proposals and the next steps in Knowledge Exchange work with institutional repositories. Institutional Repository Workshop - next steps: during April and May 2007 Knowledge Exchange had expert reviewers from the partner organisations go through the workshop strand reports and make their recommendations about the best way to move forward, to set priorities, and to find possibilities for furthering the institutional repository cause. The KE partner representatives reviewed the reviews and consulted with their partner organisation management to get an indication of support and funding for the latest ideas and proposals, as follows. Pragmatic interoperability: during a review meeting at JISC offices in London on 31 May, the expert reviewers and the KE partner representatives agreed that 'pragmatic interoperability' is the primary area of interest. It was also agreed that the most relevant and beneficial choice for a Knowledge Exchange approach would be to aim for CRIS-OAR interoperability as a step towards integrated services. Within this context, interlinked joint projects could be undertaken by the partner organisations in the areas that most interested them. Interlinked projects: the proposed Knowledge Exchange activities involve interlinked joint projects on metadata, persistent author identifiers, and eTheses, which are intended to connect to and build on projects such as ISPI, Jisc NAMES and the Digital Author Identifier (DAI) developed by SURF. It is important to stress that the projects are not intended to overlap, but rather to supplement the DRIVER 2 (EU project) approaches. Focus on CRIS and OAR: it is believed that the focus on practical interoperability between Current Research Information Systems and Open Access Repository systems will be of genuine benefit to the research scientist, research administrator and librarian communities in the Knowledge Exchange countries, accommodating the specific needs of each group. Timing: June 2007, draft proposal written by KE Working Group members; July 2007, final proposal sent to partner organisations by the KE Group; August 2007, decision by the Knowledge Exchange partner organisations.
Abstract:
Following the workshop on new developments in daily licensing practice in November 2011, fourteen representatives from national consortia (from Denmark, Germany, the Netherlands and the UK) and publishers (Elsevier, SAGE and Springer) met in Copenhagen on 9 March 2012 to discuss provisions in licences to accommodate new developments. The one-day workshop aimed to: present the background and ideas behind the provisions the KE Licensing Expert Group developed; introduce and explain the provisions the invited publishers currently use; ascertain agreement on the wording for long-term preservation, continuous access and course packs; give insight and more clarity about the use of open access provisions in licences; discuss a roadmap for inclusion of the provisions in the publishers' licences; and produce a report to disseminate the outcome of the meeting. Participants of the workshop were: United Kingdom: Lorraine Estelle (Jisc Collections); Denmark: Lotte Eivor Jørgensen (DEFF), Lone Madsen (Southern University of Denmark), Anne Sandfær (DEFF/Knowledge Exchange); Germany: Hildegard Schaeffler (Bavarian State Library), Markus Brammer (TIB); The Netherlands: Wilma Mossink (SURF), Nol Verhagen (University of Amsterdam), Marc Dupuis (SURF/Knowledge Exchange); Publishers: Alicia Wise (Elsevier), Yvonne Campfens (Springer), Bettina Goerner (Springer), Leo Walford (Sage); Knowledge Exchange: Keith Russell. The main outcome of the workshop was that it would be valuable to have a standard set of clauses which could be used in negotiations; this would make concluding licences much easier and more efficient. The comments on the model provisions the Licensing Expert Group had drafted will be taken into account and the provisions will be reformulated. Data and text mining is a new development, and demand for access to allow for it is growing. It would be easier if there were a simpler way to access materials so they could be more easily mined. However, there are still outstanding questions about how the authors of articles that have been mined can be properly attributed.
Abstract:
In June 2009 a study was completed that had been commissioned by Knowledge Exchange and written by Professor John Houghton, Victoria University, Australia. The report on the study was titled "Open Access - What are the economic benefits? A comparison of the United Kingdom, Netherlands and Denmark." It was based on the findings of studies in which John Houghton had modelled the costs and benefits of Open Access in three countries; these studies had been undertaken in the UK by JISC, in the Netherlands by SURF and in Denmark by DEFF. In the three national studies the costs and benefits of scholarly communication were compared on the basis of three different publication models. The modelling revealed that the greatest advantage would be offered by the Open Access model, in which the research institution or the party financing the research pays for publication and the article is then freely accessible. Adopting this model could lead to annual savings of around EUR 70 million in Denmark, EUR 133 million in the Netherlands and EUR 480 million in the UK. The report concludes that the advantages would not be confined to the long term; in the transitional phase too, more open access to research results would have positive effects, and here also the benefits would outweigh the costs.