956 results for Zoological taxonomy
Abstract:
Introduction: Electronic messages are currently considered an important means of communication. Electronic messages – commonly known as emails – are used easily and frequently to send and receive the most varied types of information. Their use serves many purposes, generating a large number of messages every day and, consequently, an enormous volume of information. This large volume of information requires constant manipulation of the messages in order to keep the collection organized. Typically, this manipulation consists of organizing the messages into a taxonomy. The adopted taxonomy reflects the particular interests and preferences of the user.
Motivation: The manual organization of emails is a tedious and time-consuming activity. Optimizing this process through the implementation of an automatic method tends to improve user satisfaction. There is a growing need for new solutions for handling digital content that save the user effort and cost; this need, specifically in the context of email handling, motivated this work.
Hypothesis: The main objective of this project is to enable the ad-hoc organization of emails with reduced effort from the user. The proposed methodology organizes emails into a set of disjoint categories that reflect the user's preferences. The main goal of this process is to produce an organization in which messages are classified into appropriate classes while requiring the minimum possible effort from the user. To achieve these objectives, this project resorts to text mining techniques, in particular automatic text categorization, and to active learning. To reduce the need to query the user – to label examples according to the desired categories – the d-confidence algorithm was used.
Automatic email organization process: The process of automatically organizing emails comprises three distinct phases: indexing, classification and evaluation. In the first phase, the indexing phase, the emails undergo a transformative cleaning process whose main purpose is to generate a representation of the emails suitable for automatic processing. The second phase is the classification phase. This phase uses the dataset resulting from the previous phase to produce a classification model, which is subsequently applied to new emails. Starting from a matrix representing emails, terms and their respective weights, together with a set of manually classified examples, a classifier is generated through a learning process. The resulting classifier is then applied to the email collection, and a classification for all emails is obtained. The classification process is based on a support vector machine classifier combined with the d-confidence active learning algorithm. The d-confidence algorithm aims to propose to the user the most significant examples for labelling. By identifying the emails carrying the most relevant information for the learning process, the number of iterations – and consequently the effort required from the user – is reduced. The third and final phase is the evaluation phase. In this phase the performance of the classification process and the efficiency of the d-confidence algorithm are assessed. The adopted evaluation method is 10-fold cross validation.
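No code is reproduced in the abstract, but the three-phase pipeline it describes can be sketched roughly as follows. This is a minimal illustration under stated assumptions: scikit-learn's TfidfVectorizer stands in for the indexing phase, a linear SVC for the classification phase, and simple least-confidence sampling stands in for the d-confidence selection criterion, whose exact definition is not given here; ask_user_for_label is a hypothetical callback that queries the user for a category.

```python
# Minimal sketch of the indexing/classification phases described above.
# Least-confidence sampling is a stand-in for d-confidence, and
# ask_user_for_label is a hypothetical user-labelling callback.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC

def organise_emails(emails, ask_user_for_label, budget=20, seed_size=5):
    # Indexing phase: emails -> matrix of terms and weights.
    vectorizer = TfidfVectorizer(stop_words="english")
    X = vectorizer.fit_transform(emails)

    # Seed of manually labelled examples (must cover at least 2 classes).
    labelled = list(range(seed_size))
    labels = [ask_user_for_label(emails[i]) for i in labelled]

    clf = SVC(kernel="linear", probability=True)
    for _ in range(budget):
        # Classification phase: learn a model from the labelled pool.
        clf.fit(X[labelled], labels)
        unlabelled = [i for i in range(len(emails)) if i not in labelled]
        if not unlabelled:
            break
        # Active learning: query the example the model is least sure about.
        proba = clf.predict_proba(X[unlabelled])
        query = unlabelled[int(np.argmin(proba.max(axis=1)))]
        labelled.append(query)
        labels.append(ask_user_for_label(emails[query]))

    # Classify the whole collection with the final model.
    return clf.predict(X)
```

The evaluation phase would correspond, in this setting, to running `cross_val_score(clf, X, y, cv=10)` from sklearn.model_selection on a fully labelled collection.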
Conclusions: The automatic email organization process was successfully developed; the performance of the generated classifier and of the d-confidence algorithm was reasonably good. On average, the categories show relatively low error rates, with the exception of the most generic classes. The effort required from the user was reduced: using the d-confidence algorithm, an error rate close to the final value was obtained even with a number of labelled cases below that required by a fully supervised method. It is worth noting that, beyond the automatic email organization process itself, this project was an excellent opportunity to acquire solid knowledge of text mining, automatic classification and information retrieval. The study of such interesting areas sparked new interests that represent real challenges for future work.
Abstract:
Radio link quality estimation in Wireless Sensor Networks (WSNs) has a fundamental impact on network performance and also affects the design of higher-layer protocols. Therefore, for about a decade, it has been attracting a vast body of research. Reported works on link quality estimation are typically based on different assumptions, consider different scenarios, and provide radically different (and sometimes contradictory) results. This article provides a comprehensive survey of the related literature, covering the characteristics of low-power links, the fundamental concepts of link quality estimation in WSNs, a taxonomy of existing link quality estimators, and their performance analysis. To the best of our knowledge, this is the first survey to tackle link quality estimation in WSNs in detail. We believe our efforts will serve as a reference to orient researchers and system designers in this area.
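As a concrete illustration of the simplest family of estimators this survey covers, the sketch below computes a windowed Packet Reception Ratio (PRR) smoothed with an EWMA filter; the window size and smoothing factor are arbitrary example values, not figures taken from the article.

```python
# Illustrative software-based link quality estimator: windowed Packet
# Reception Ratio (PRR) smoothed with an EWMA filter. Window size and
# alpha are example values, not parameters taken from the survey.
class PrrEstimator:
    def __init__(self, window=10, alpha=0.6):
        self.window = window        # packets per estimation window
        self.alpha = alpha          # EWMA smoothing factor
        self.sent = 0
        self.received = 0
        self.quality = None         # smoothed PRR in [0, 1]

    def record(self, delivered: bool):
        """Record one transmission attempt; update PRR at window end."""
        self.sent += 1
        self.received += int(delivered)
        if self.sent == self.window:
            prr = self.received / self.window
            self.quality = prr if self.quality is None else (
                self.alpha * self.quality + (1 - self.alpha) * prr)
            self.sent = self.received = 0
        return self.quality
```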
Abstract:
Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies
Abstract:
Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies
Abstract:
Final report presented to the Escola Superior de Educação de Lisboa to obtain the degree of Master in Teaching of the 1st and 2nd Cycles of Basic Education
Abstract:
Mathematical models and statistical analysis are key instruments in soil science research, as they can describe and/or predict the current state of a soil system. These tools allow us to explore the behavior of soil-related processes and properties, as well as to generate new hypotheses for future experimentation. A good model and analysis of soil property variation, allowing us to draw sound conclusions and to estimate spatially correlated variables at unsampled locations, clearly depends on the amount and quality of the data and on the robustness of the techniques and estimators. The quality of the data, in turn, depends on a competent data collection procedure and on capable laboratory analytical work. Following the standard soil sampling protocols available, soil samples should be collected according to key factors such as a convenient spatial scale, landscape homogeneity (or non-homogeneity), land color, soil texture, land slope and solar exposure. Obtaining good quality data from forest soils is predictably expensive, as it is labor intensive and demands considerable manpower and equipment, both in field work and in laboratory analysis. Moreover, the sampling scheme to be used in a forest field data collection campaign is not simple to design, since the chosen sampling strategies depend strongly on the soil taxonomy. In fact, a sampling grid cannot be followed if rocks are found at the planned collection depth, if no soil is found at all, or if large trees bar the soil collection. Consequently, a proficient design of a soil data sampling campaign in a forest field is not always a simple process and sometimes represents a truly huge challenge. In this work, we present some difficulties that occurred during two experiments on forest soil, conducted in order to study the spatial variation of selected soil physical-chemical properties. Two different sampling protocols were considered for monitoring two types of forest soil located in NW Portugal: umbric regosol and lithosol. Two different pieces of sampling equipment were also used: a manual auger and a shovel. Both scenarios were analyzed, and the results achieved allow us to conclude that monitoring forest soil for mathematical and statistical investigation requires a data collection procedure compatible with established protocols, but a pre-defined grid assumption often fails when the variability of the soil property is not uniform in space. In this case, the sampling grid should be conveniently adapted from one part of the landscape to another, and this fact should be taken into account in the mathematical procedure.
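The abstract mentions estimating spatially correlated variables at unsampled locations but names no method; the sketch below uses inverse distance weighting, one of the simplest spatial interpolators, purely as an illustration (kriging would be the usual geostatistical choice).

```python
# Illustrative estimation of a soil property at an unsampled location
# using inverse distance weighting (IDW). IDW is a stand-in here for
# the geostatistical estimators (e.g. kriging) such studies rely on.
import numpy as np

def idw_estimate(sample_xy, sample_values, target_xy, power=2.0):
    """Estimate a value at target_xy from sampled points."""
    sample_xy = np.asarray(sample_xy, dtype=float)
    sample_values = np.asarray(sample_values, dtype=float)
    d = np.linalg.norm(sample_xy - np.asarray(target_xy, dtype=float), axis=1)
    if np.any(d == 0):                  # target coincides with a sample
        return float(sample_values[np.argmin(d)])
    w = 1.0 / d ** power                # closer samples weigh more
    return float(np.sum(w * sample_values) / np.sum(w))

# Example: estimate pH at (2.0, 2.0) from three sampled locations.
print(idw_estimate([(0, 0), (1, 3), (4, 1)], [5.1, 6.0, 5.5], (2.0, 2.0)))
```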
Abstract:
This article presents a taxonomy and structure for occupational risk analysis procedures (called the Hazard-Risk-Damage Matrix). A standardized classification of hazards/risks is presented, with identification of the associated potential consequences, in the Matrix for the Identification of (dominant) Hazards-Risks-Damages. For each hazard/risk, the potential individual damages are identified, resulting from work accidents (injuries), legally recognized occupational diseases (occupational pathologies), work-related diseases and occupational discomfort. For the characterization of individual damages, the nomenclatures available in the EEAT methodology (European Statistics on Accidents at Work) and in Decreto Regulamentar 76/2007 are used. Each damage is associated with the anatomical region potentially affected. Risk valuation is organized in terms of risks of accidents, occupational diseases and occupational discomfort. For each hazard/risk, control measures are identified according to the hierarchy set out in NP 4397:2008 (modified). The implementation of the control measures was associated with a short-medium-long term time criterion that took into account the timeliness of the implementation and the groups of measures to be implemented jointly. This methodical procedure, called the Hazard-Risk-Damage Matrix, was applied to a company manufacturing concrete products for construction, aiming to add value to the current procedures of an
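One row of the Hazard-Risk-Damage Matrix can be pictured as a structured record; the sketch below is a minimal illustration, with field names and the example entry being assumptions of ours rather than the article's actual nomenclature.

```python
# Minimal sketch of one entry of the Hazard-Risk-Damage Matrix. Field
# names and the example values are illustrative assumptions; the article
# itself draws its nomenclatures from EEAT, Decreto Regulamentar 76/2007
# and NP 4397:2008.
from dataclasses import dataclass

@dataclass
class HazardRiskEntry:
    hazard: str                    # standardized hazard/risk class
    potential_damages: list[str]   # injuries, occupational diseases, discomfort
    body_region: str               # anatomical region potentially affected
    risk_level: str                # risk valuation, e.g. low / medium / high
    control_measures: list[str]    # ordered by the NP 4397:2008 hierarchy
    implementation_term: str       # short / medium / long term

entry = HazardRiskEntry(
    hazard="manual handling of concrete moulds",
    potential_damages=["back injury", "musculoskeletal disorder"],
    body_region="lower back",
    risk_level="high",
    control_measures=["mechanise handling", "rotate workers", "train staff"],
    implementation_term="short",
)
```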
Abstract:
Dinoflagellates are planktonic unicellular microorganisms that, under certain conditions, may produce cysts prone to fossilization. These cysts are abundant in the sedimentary record since the Palaeozoic, supplying important biostratigraphical and palaeoecological information. In Portugal, the study of dinoflagellates is still in its beginnings. Considering the recent developments in this domain, an updated nomenclature in the Portuguese language is presented, pertaining to their biology, taxonomy, ecology, palaeoecology and biostratigraphy.
Abstract:
After some remarks on the protection of sites recognized as most interesting, two lesser-known items about dinosaurs and Portugal are dealt with. The first of these concerns the first published account of dinosaur tracks. Jacinto Pedro Gomes, then (1884) preparing a report on the Cabo Mondego coal mines, was told of the occurrence of large footprint casts that were subsequently sent to the Museum of the Escola Politécnica in Lisbon. Gomes showed drawings of them to B. Geinitz (Dresden), who ascribed the casts to dinosaurs. Karl Zittel (München) corroborated this viewpoint, and Louis Dollo (Brussels) referred them to ornithopods. A posthumous note by GOMES (1915-1916) is the first scientific paper on dinosaur tracks in Portugal. However, it is not the first published report. João Bonança, a reporter, presented in his large book "HISTORIA / DA / LUZITANIA E DA IBERIA ..." (1891) a new (both unrealistic and useless) stratigraphic classification. He also replaced zoological and botanical nomenclature with another of his own devising. Having seen the footprint casts at the Museum of the Escola Politécnica, he reported bird or dinosaur footprints from Cabo Mondego's Upper Jurassic, this being the first published report of such fossils as far as Portugal is concerned. The second theme concerns the Late Cretaceous dinosaurs from Viso, Aveiro and Taveiro. The faunas are marked by generalized nanism and seem impoverished by previous extinctions of larger forms; their probable insular character has been acknowledged. The extinctions may well be explained by non-catastrophic causes; the general fall in temperatures may have been far more important.
Abstract:
This paper proposes and reports the development of an open source solution for the integrated management of Infrastructure as a Service (IaaS) cloud computing resources, through the use of a common API taxonomy, to incorporate open source and proprietary platforms. This research included two surveys on open source IaaS platforms (OpenNebula, OpenStack and CloudStack) and a proprietary platform (Parallels Automation for Cloud Infrastructure - PACI) as well as on IaaS abstraction solutions (jClouds, Libcloud and Deltacloud), followed by a thorough comparison to determine the best approach. The adopted implementation reuses the Apache Deltacloud open source abstraction framework, which relies on the development of software driver modules to interface with different IaaS platforms, and involved the development of a new Deltacloud driver for PACI. The resulting interoperable solution successfully incorporates OpenNebula, OpenStack (reuses pre-existing drivers) and PACI (includes the developed Deltacloud PACI driver) nodes and provides a Web dashboard and a Representational State Transfer (REST) interface library. The results of the exchanged data payload and time response tests performed are presented and discussed. The conclusions show that open source abstraction tools like Deltacloud allow the modular and integrated management of IaaS platforms (open source and proprietary), introduce relevant time and negligible data overheads and, as a result, can be adopted by Small and Medium-sized Enterprise (SME) cloud providers to circumvent the vendor lock-in problem whenever service response time is not critical.
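The paper's REST interface library is not shown in the abstract; as a rough illustration of how a client talks to a Deltacloud-based service, the sketch below lists instances from a Deltacloud server assumed to be running locally on its default port with a configured driver (such as the PACI driver developed in this work). The endpoint path, port and JSON layout follow the usual Deltacloud conventions, and the credentials are placeholders.

```python
# Sketch of querying a Deltacloud server over REST. Assumes a server is
# running locally on Deltacloud's default port with a configured driver;
# credentials are placeholders, and the JSON field names follow the
# usual Deltacloud conventions.
import requests

BASE = "http://localhost:3001/api"    # default Deltacloud endpoint
auth = ("api_user", "api_secret")     # placeholder provider credentials

# List the compute instances visible through the configured driver.
resp = requests.get(f"{BASE}/instances",
                    auth=auth,
                    headers={"Accept": "application/json"})
resp.raise_for_status()
for inst in resp.json().get("instances", []):
    print(inst.get("id"), inst.get("state"))
```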
Abstract:
Work presented within the scope of the Master's in Informatics Engineering, as a partial requirement for obtaining the degree of Master in Informatics Engineering
Abstract:
Previous work by our group introduced a novel concept and sensor design for “off-the-person” ECG, for which evidence on how it compares against standard clinical-grade equipment has been largely missing. Our objectives with this work are to characterise the off-the-person approach in light of the current ECG systems landscape, and assess how the signals acquired using this simplified setup compare with clinical-grade recordings. Empirical tests have been performed with real-world data collected from a population of 38 control subjects, to analyze the correlation between both approaches. Results show off-the-person data to be correlated with clinical-grade data, demonstrating the viability of this approach to potentially extend preventive medicine practices by enabling the integration of ECG monitoring into multiple dimensions of people’s everyday lives. © 2015, IUPESM and Springer-Verlag Berlin Heidelberg.
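The correlation analysis the authors refer to can be pictured as a Pearson correlation between two simultaneously acquired traces; the sketch below uses synthetic signals, since the paper's actual data and preprocessing are not reproduced here.

```python
# Illustrative correlation between two simultaneously acquired ECG
# traces (off-the-person vs clinical-grade). The signals are synthetic
# stand-ins; the paper's data and preprocessing are not reproduced.
import numpy as np
from scipy.stats import pearsonr

fs = 1000                                    # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
clinical = np.sin(2 * np.pi * 1.2 * t)       # toy stand-in for a clinical trace
off_person = clinical + 0.2 * np.random.randn(t.size)  # noisier acquisition

r, p = pearsonr(clinical, off_person)
print(f"Pearson r = {r:.3f} (p = {p:.2g})")
```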
Abstract:
Systematics is the study of the diversity of organisms and their relationships, comprising classification, nomenclature and identification. The term classification, or taxonomy, means the arrangement of organisms into groups (taxa); nomenclature is the attribution of correct international scientific names to organisms; and identification is the assignment of unknown strains to the groups derived from classification. Therefore, a classification yielding a stable nomenclature and an accurate identification are prerequisites. The beginning of the new bacterial systematics era is marked by the introduction and application of new taxonomic concepts and techniques from the 1950s and 1960s onwards. Important progress was achieved using numerical taxonomy and molecular taxonomy. Molecular taxonomy, put into effect after the emergence of Molecular Biology resources, provided knowledge for the systematics of bacteria of great evolutionary interest, or where the elimination of environmental interference is required. Studying the composition and arrangement of nucleotides in certain portions of the genetic material means probing the genome itself, which is much less susceptible to environmental alterations than the proteins encoded from it. In molecular taxonomy, both DNA and RNA can be investigated, and the main techniques that have been used in systematics comprise the construction of restriction maps, DNA-DNA hybridization, DNA-RNA hybridization, DNA sequencing, sequencing of the 16S and 23S sub-units of rRNA, RAPD, RFLP, PFGE, etc. Techniques such as base sequencing, though extremely sensitive and highly precise, are relatively onerous and impracticable for the great majority of bacterial taxonomy laboratories. Several specialized techniques have been applied to taxonomic studies of microorganisms. In recent years these have included preliminary electrophoretic analysis of soluble proteins and isoenzymes, and subsequently the determination of deoxyribonucleic acid base composition and the assessment of base sequence homology by means of DNA-RNA hybridization experiments, among others. These various techniques, as expected, have generally pointed to a lack of taxonomic information in microbial systematics. There are countless techniques and methodologies, part of them described here, that make the identification and classification of bacteria possible, allowing different degrees of subspecific and interspecific similarity to be established through phenetic-genetic polymorphism analysis. However, the necessity of using more than one technique to better establish degrees of similarity among microorganisms was pointed out: data resulting from the application of a single technique in isolation may not provide significant information from the viewpoint of Bacterial Systematics.
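As a small illustration of the sequence-comparison side of molecular taxonomy, the sketch below computes the percent identity between two pre-aligned 16S rRNA gene fragments; the fragments are invented examples, and real studies would align full-length sequences with dedicated tools before comparing them.

```python
# Illustrative sequence comparison for molecular taxonomy: percent
# identity between two pre-aligned 16S rRNA gene fragments. The
# fragments are invented examples.
def percent_identity(seq_a: str, seq_b: str) -> float:
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

strain_1 = "AGAGTTTGATCCTGGCTCAG"   # example 16S fragment
strain_2 = "AGAGTTTGATCATGGCTCAG"   # example 16S fragment
print(f"identity: {percent_identity(strain_1, strain_2):.1f}%")
```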
Abstract:
The evolution of the electrical grid into a smart grid, allowing user production, storage and exchange of energy, remote control of appliances and, in general, optimization of how energy is managed and consumed, is also an evolution into a complex Information and Communication Technology (ICT) system. With the goal of promoting an integrated and interoperable smart grid, a number of organizations all over the world started uncoordinated standardization activities, which caused the emergence of a large number of incompatible architectures and standards. New standardization activities now aim to organize existing standards and produce best practices for choosing the right approach(es) to be employed in specific smart grid designs. This paper follows the lead of the NIST and ETSI/CEN/CENELEC approaches in trying to provide a taxonomy of existing solutions; our contribution reviews and relates the current ICT state of the art, with the objective of forecasting future trends based on the orientation of current efforts and on the relationships between them. The resulting taxonomy provides guidelines for further studies of the architectures, and highlights how the standards in the last mile of the smart grid are converging towards common solutions that improve ICT infrastructure interoperability.