956 results for Lichen taxonomy
Abstract:
Video coding technologies have played a major role in the explosion of large-market digital video applications and services. In this context, the very popular MPEG-x and H.26x video coding standards adopted a predictive coding paradigm, where complex encoders exploit data redundancy and irrelevancy to 'control' much simpler decoders. This codec paradigm suits applications and services such as digital television and video storage, where decoder complexity is critical, but does not match the requirements of emerging applications such as visual sensor networks, where encoder complexity is more critical. The Slepian-Wolf and Wyner-Ziv theorems opened the possibility of developing the so-called Wyner-Ziv video codecs, which follow a different coding paradigm where it is the task of the decoder, and no longer of the encoder, to (fully or partly) exploit the video redundancy. Theoretically, Wyner-Ziv video coding does not incur any compression performance penalty with respect to the more traditional predictive coding paradigm (at least under certain conditions). In Wyner-Ziv video codecs, the so-called side information, which is a decoder estimate of the original frame to be coded, plays a critical role in the overall compression performance. For this reason, much research effort has been invested over the past decade in developing increasingly efficient side information creation methods. The main objective of this paper is to review and evaluate the available side information methods after proposing a classification taxonomy to guide this review, allowing more solid conclusions to be reached and the next relevant research challenges to be better identified. After classifying the side information creation methods into four classes, notably guess, try, hint and learn, the review of the most important techniques in each class and the evaluation of some of them lead to the important conclusion that which side information creation method provides the best rate-distortion (RD) performance depends on the amount of temporal correlation in each video sequence. It also became clear that the best available Wyner-Ziv video coding solutions are almost systematically based on the learn approach. The best solutions are already able to systematically outperform H.264/AVC Intra, and also the H.264/AVC zero-motion standard solution for specific types of content. (C) 2013 Elsevier B.V. All rights reserved.
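To make the 'guess' class above concrete, the following is a minimal sketch of decoder-side side information creation by block-based motion-compensated interpolation between two decoded key frames. It illustrates the idea only and is not a method from the paper; the function name, block size and search range are assumptions.

```python
import numpy as np

def create_side_information(prev_key, next_key, block=8, search=8):
    """Toy 'guess'-class side information: estimate the missing Wyner-Ziv
    frame by block-based motion-compensated interpolation between the two
    neighbouring key frames (2-D grayscale arrays). Border blocks beyond a
    multiple of the block size are left untouched in this sketch."""
    h, w = prev_key.shape
    si = np.zeros((h, w), dtype=np.float64)
    prev_f = prev_key.astype(np.float64)
    next_f = next_key.astype(np.float64)
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            ref = next_f[y:y + block, x:x + block]
            best_sad, mv = np.inf, (0, 0)
            # Estimate the motion vector between the two key frames (SAD criterion).
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= h - block and 0 <= xx <= w - block:
                        sad = np.abs(ref - prev_f[yy:yy + block, xx:xx + block]).sum()
                        if sad < best_sad:
                            best_sad, mv = sad, (dy, dx)
            # Project half of the vector onto each key frame and blend:
            # the interpolated (Wyner-Ziv) frame lies temporally midway.
            py = min(max(y + mv[0] // 2, 0), h - block)
            px = min(max(x + mv[1] // 2, 0), w - block)
            ny = min(max(y - mv[0] // 2, 0), h - block)
            nx = min(max(x - mv[1] // 2, 0), w - block)
            si[y:y + block, x:x + block] = 0.5 * (
                prev_f[py:py + block, px:px + block]
                + next_f[ny:ny + block, nx:nx + block])
    return si.astype(prev_key.dtype)
```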
Abstract:
Here, we describe the development of sporangial and gametangial conceptacles for Amphiroa beauvoisii and A. vanbosseae; of sporangial conceptacles only for A. misakiensis; and of gametangial conceptacles only for A. cryptarthrodia and A. rigida. The descriptions are based on the observation of histological preparations obtained from 112 specimens collected from the Gulf of California, Mexico, and the Azores archipelago, Portugal. Information on the development of the sporangial conceptacle pore and on conceptacle senescence is described and illustrated here for the first time. Four development patterns were observed: two for sporangial conceptacles, one for spermatangial conceptacles, and one for carposporangial conceptacles. The phases of development of the sporangial conceptacle were found to be useful in delimiting species within the genus. Based on the location of the sporangia on the cavity floor and the anatomy of the pore canal, the species A. beauvoisii, A. misakiensis and A. vanbosseae can be distinguished from each other.
Abstract:
Presentation at the conference "Ciência nos Açores – que futuro? Tema Ciências Naturais e Ambiente", Ponta Delgada, 7-8 June 2013.
Abstract:
Copyright © 2014 Magnolia Press.
Abstract:
Throughout the world, epidemiological studies have been established to examine the relationship between air pollution and mortality rates and adverse respiratory health effects. However, despite years of discussion, the correlation between adverse health effects and atmospheric pollution remains controversial, partly because these studies are frequently restricted to small and well-monitored areas. Monitoring air pollution is complex due to the large spatial and temporal variations of pollution phenomena, the high costs of recording instruments, and the low sampling density of a purely instrumental approach. Therefore, together with traditional instrumental monitoring, bioindication techniques allow the mapping of pollution effects over wide areas with a high sampling density. In this study, instrumental and biomonitoring techniques were integrated to support an epidemiological study to be developed in an industrial area located in Gijon, on the coast of central Asturias, Spain. Three main objectives were proposed: (i) to analyze temporal patterns of PM10 concentrations in order to apportion emission sources, (ii) to investigate spatial patterns of lichen conductivity to identify the impact of the studied industrial area on air quality, and (iii) to establish relationships between lichen conductivity and site-specific characteristics. Samples of the epiphytic lichen Parmelia sulcata were transplanted in a grid of 18 by 20 km with the industrial area at its center. Lichens were exposed for a 5-month period starting in April 2010. After exposure, lichen samples were soaked in 18-MΩ water to determine the electrical conductivity of the water and, consequently, lichen vitality and cell damage. A marked decreasing gradient of lichen conductivity with distance from the emitting sources was observed. Transplants from a sampling site close to the industrial area reached values 10-fold higher than those far from it. This finding showed that lichens reacted physiologically in the polluted industrial area, as evidenced by increased conductivity correlated with contamination level. The integration of temporal PM10 measurements and analysis of wind direction corroborated the importance of this industrialized region for air quality and identified the relevance of traffic for the urban area.
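As an illustration of the kind of gradient analysis the abstract reports (conductivity decreasing with distance from the emitting sources), the sketch below fits a simple exponential decay and a correlation coefficient. All numbers are invented placeholders, not data from the study.

```python
import numpy as np

# Hypothetical transplant data: distance from the industrial source (km) and
# post-exposure leachate electrical conductivity; values are illustrative only.
distance_km = np.array([0.5, 1.0, 2.0, 4.0, 6.0, 9.0, 12.0, 18.0])
conductivity = np.array([95.0, 80.0, 55.0, 38.0, 27.0, 18.0, 12.0, 9.0])

# Fit a log-linear decay model, conductivity ~ a * exp(b * distance),
# to quantify the decreasing gradient away from the emitting sources.
b, log_a = np.polyfit(distance_km, np.log(conductivity), 1)
print(f"decay rate per km: {b:.3f}, source-level estimate: {np.exp(log_a):.1f}")

# Pearson correlation between distance and conductivity (expected negative).
r = np.corrcoef(distance_km, conductivity)[0, 1]
print(f"correlation (distance vs conductivity): {r:.2f}")
```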
Abstract:
Introduction: Electronic messages are nowadays considered an important means of communication. Electronic messages – commonly known as emails – are easily and frequently used to send and receive the most varied kinds of information. They are used for many purposes, generating a large number of messages every day and, consequently, an enormous volume of information. This large volume of information requires constant manipulation of the messages in order to keep the collection organized. Typically, this manipulation consists of organizing the messages into a taxonomy. The adopted taxonomy reflects the particular interests and preferences of the user.
Motivation: Manually organizing emails is a laborious and time-consuming activity. Optimizing this process through the implementation of an automatic method tends to improve user satisfaction. There is a growing need to find new solutions for handling digital content that spare the user effort and cost; this need, specifically in the context of email handling, motivated this work.
Hypothesis: The main objective of this project is to allow the ad-hoc organization of emails with reduced effort from the user. The proposed methodology aims to organize the emails into a set of disjoint categories that reflect the user's preferences. The main purpose of this process is to produce an organization in which messages are classified into appropriate classes while requiring the least possible effort from the user. To achieve these objectives, the project relies on text mining techniques, in particular automatic text categorization and active learning. To reduce the need to query the user – to label examples according to the desired categories – the d-confidence algorithm was used.
Automatic email organization process: The process of automatically organizing emails is carried out in three distinct phases: indexing, classification and evaluation. In the first phase, the indexing phase, the emails undergo a transformative cleaning process whose main purpose is to generate a representation of the emails suited to automatic processing. The second phase is the classification phase. It uses the data set resulting from the previous phase to produce a classification model, which is later applied to new emails. Starting from a matrix representing emails, terms and their respective weights, together with a set of manually classified examples, a classifier is generated through a learning process. The resulting classifier is then applied to the email collection and a classification of all emails is obtained. The classification process is based on a support vector machine classifier together with the d-confidence active learning algorithm. The goal of the d-confidence algorithm is to propose to the user the most significant examples for labelling. By identifying the emails that carry the most relevant information for the learning process, the number of iterations is reduced and, consequently, so is the effort required from the users. The third and final phase is the evaluation phase, in which the performance of the classification process and the efficiency of the d-confidence algorithm are assessed. The adopted evaluation method is 10-fold cross-validation.
Conclusions: The automatic email organization process was developed successfully, and the performance of the generated classifier and of the d-confidence algorithm was reasonably good. On average, the categories show relatively low error rates, except for the most generic classes. The effort required from the user was reduced, since with the d-confidence algorithm an error rate close to the final value was obtained even with a number of labelled cases below that required by a supervised method. It is worth noting that, beyond the automatic email organization process itself, this project was an excellent opportunity to acquire solid knowledge of text mining and of automatic classification and information retrieval. The study of such interesting areas awakened new interests that constitute real challenges for future work.
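A minimal sketch of the pipeline described above (TF-IDF style indexing, an SVM classifier, and pool-based active learning that asks the user to label only the most informative emails). The d-confidence criterion itself is not specified in the abstract, so the selection step below uses plain least-confidence sampling as a stand-in; the dataset, seed sizes and query budget are assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

def active_learning_loop(texts, labels, seed_per_class=2, budget=30):
    """Pool-based active learning over an email collection.
    NOTE: the selection heuristic is plain least-confidence sampling, used
    as a stand-in for d-confidence (which also weighs distance to the
    already-labelled cases)."""
    X = TfidfVectorizer(stop_words="english").fit_transform(texts)  # indexing phase
    labels = np.asarray(labels)
    rng = np.random.default_rng(0)

    # Seed the labelled pool with a few examples per category.
    labelled = []
    for c in np.unique(labels):
        labelled.extend(rng.choice(np.where(labels == c)[0], seed_per_class, replace=False))
    labelled = [int(i) for i in labelled]

    for _ in range(budget):
        clf = LinearSVC().fit(X[labelled], labels[labelled])       # classification phase
        unlabelled = np.setdiff1d(np.arange(len(labels)), labelled)
        scores = clf.decision_function(X[unlabelled])
        # Confidence proxy: distance to the hyperplane (binary) or the
        # top one-vs-rest score (multi-class).
        conf = np.abs(scores) if scores.ndim == 1 else scores.max(axis=1)
        query = unlabelled[conf.argmin()]          # least confident example
        labelled.append(int(query))                # simulate asking the user for its label
    return clf, labelled
```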
Abstract:
Radio link quality estimation in Wireless Sensor Networks (WSNs) has a fundamental impact on network performance and also affects the design of higher-layer protocols. Therefore, for about a decade, it has attracted a vast body of research. Reported works on link quality estimation are typically based on different assumptions, consider different scenarios, and provide radically different (and sometimes contradictory) results. This article provides a comprehensive survey of the related literature, covering the characteristics of low-power links, the fundamental concepts of link quality estimation in WSNs, a taxonomy of existing link quality estimators, and their performance analysis. To the best of our knowledge, this is the first survey tackling link quality estimation in WSNs in detail. We believe our effort will serve as a reference to orient researchers and system designers in this area.
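As background to the taxonomy of link quality estimators mentioned above, the snippet below sketches three classic building blocks (PRR, ETX and EWMA smoothing); it is a generic illustration, not code from the surveyed works.

```python
def prr(received, sent):
    """Packet Reception Ratio: fraction of probe packets received."""
    return received / sent if sent else 0.0

def etx(prr_forward, prr_reverse):
    """Expected Transmission Count: expected number of transmissions
    (including retransmissions) for a packet plus its acknowledgement."""
    if prr_forward == 0 or prr_reverse == 0:
        return float("inf")
    return 1.0 / (prr_forward * prr_reverse)

def ewma(previous, sample, alpha=0.9):
    """Exponentially weighted moving average used to smooth noisy link estimates."""
    return alpha * previous + (1 - alpha) * sample

# Example: a link delivering 80/100 probes forward and 90/100 in reverse.
print(etx(prr(80, 100), prr(90, 100)))   # ~1.39 expected transmissions
```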
Abstract:
Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies
Abstract:
Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies
Abstract:
Final report presented to the Escola Superior de Educação de Lisboa to obtain the degree of Master in Teaching of the 1st and 2nd Cycles of Basic Education.
Abstract:
Mathematical models and statistical analysis are key instruments in soil science research, as they can describe and/or predict the current state of a soil system. These tools allow us to explore the behavior of soil-related processes and properties as well as to generate new hypotheses for future experimentation. A good model and analysis of the variation of soil properties, allowing suitable conclusions to be drawn and spatially correlated variables to be estimated at unsampled locations, clearly depends on the amount and quality of the data and on the robustness of the techniques and estimators. The quality of the data, in turn, obviously depends on a competent data collection procedure and on capable laboratory analytical work. Following the available standard soil sampling protocols, soil samples should be collected according to key points such as a convenient spatial scale, landscape homogeneity (or non-homogeneity), land color, soil texture, land slope, and solar exposure. Obtaining good quality data from forest soils is predictably expensive, as it is labor intensive and demands considerable manpower and equipment both in field work and in laboratory analysis. Moreover, the sampling scheme to be used for data collection in forest terrain is not simple to design, as the chosen sampling strategies depend strongly on soil taxonomy. In fact, a sampling grid cannot be followed if rocks are found at the intended collection depth, if no soil is found at all, or if large trees prevent collection. Considering this, the proficient design of a soil sampling campaign in forest terrain is not always a simple process and sometimes represents a truly huge challenge. In this work, we present some difficulties that occurred during two experiments on forest soil conducted to study the spatial variation of some soil physical-chemical properties. Two different sampling protocols were considered for monitoring two types of forest soils located in NW Portugal: umbric regosol and lithosol. Two different pieces of equipment were also used for sample collection: a manual auger and a shovel. Both scenarios were analyzed, and the results allow us to conclude that monitoring forest soil for mathematical and statistical investigation requires a data collection procedure compatible with established protocols, but that a pre-defined grid assumption often fails when the variability of the soil property is not uniform in space. In such cases, the sampling grid should be conveniently adapted from one part of the landscape to another, and this fact should be taken into account in the mathematical procedure.
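To illustrate "estimating spatially correlated variables at unsampled locations", the sketch below uses simple inverse-distance weighting; the coordinates and pH values are placeholders, not data from the two field campaigns, and more elaborate geostatistical estimators (e.g. kriging) would normally be used.

```python
import numpy as np

def idw(sample_xy, sample_values, target_xy, power=2.0):
    """Inverse-distance-weighted estimate of a soil property at an
    unsampled location from spatially correlated observations."""
    d = np.linalg.norm(sample_xy - target_xy, axis=1)
    if np.any(d == 0):                    # target coincides with a sample point
        return float(sample_values[d.argmin()])
    w = 1.0 / d ** power
    return float(np.sum(w * sample_values) / np.sum(w))

# Hypothetical plot coordinates (m) and measured pH values (placeholders).
xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
ph = np.array([4.8, 5.1, 4.9, 5.4])
print(idw(xy, ph, np.array([4.0, 6.0])))  # estimate at an unsampled location
```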
Abstract:
This article presents a taxonomy and structure for occupational risk analysis procedures (referred to as the Hazard-Risk-Damage Matrix). A standardized classification of hazards/risks is presented, with identification of the associated potential consequences, in the Matrix for the Identification of (dominant) Hazards-Risks-Damage. For each hazard/risk, the potential individual damage is identified, resulting from work accidents (injuries), legally recognized occupational diseases (occupational pathologies), work-related diseases and occupational discomfort. Individual damage is characterized using the nomenclatures of the EEAT methodology (European Statistics on Accidents at Work) and of Decreto Regulamentar 76/2007. Each type of damage is associated with the anatomical region potentially affected. Risk valuation is organized in terms of risks of accidents, occupational diseases and occupational discomfort. For each hazard/risk, control measures are identified according to the hierarchy given in NP 4397:2008 (modified). The implementation of the control measures was associated with a short-, medium- or long-term criterion that took into account the timing of implementation and the groups of measures to be implemented together. This methodical procedure, referred to as the Hazard-Risk-Damage Matrix, was applied to a company manufacturing concrete products for construction, aiming to add value to its current risk analysis procedures.
Abstract:
Dinoflagellates are planktonic unicellular microorganisms that, under certain conditions, may produce cysts prone to fossilization. These cysts have been abundant in the sedimentary record since the Palaeozoic, supplying important biostratigraphical and palaeoecological information. In Portugal, the study of dinoflagellates is still in its early stages. Considering recent developments in this domain, an updated nomenclature in the Portuguese language is presented, pertaining to their biology, taxonomy, ecology, palaeoecology and biostratigraphy.
Abstract:
This paper proposes and reports the development of an open source solution for the integrated management of Infrastructure as a Service (IaaS) cloud computing resources, through the use of a common API taxonomy, to incorporate open source and proprietary platforms. This research included two surveys on open source IaaS platforms (OpenNebula, OpenStack and CloudStack) and a proprietary platform (Parallels Automation for Cloud Infrastructure - PACI) as well as on IaaS abstraction solutions (jClouds, Libcloud and Deltacloud), followed by a thorough comparison to determine the best approach. The adopted implementation reuses the Apache Deltacloud open source abstraction framework, which relies on the development of software driver modules to interface with different IaaS platforms, and involved the development of a new Deltacloud driver for PACI. The resulting interoperable solution successfully incorporates OpenNebula, OpenStack (reuses pre-existing drivers) and PACI (includes the developed Deltacloud PACI driver) nodes and provides a Web dashboard and a Representational State Transfer (REST) interface library. The results of the exchanged data payload and time response tests performed are presented and discussed. The conclusions show that open source abstraction tools like Deltacloud allow the modular and integrated management of IaaS platforms (open source and proprietary), introduce relevant time and negligible data overheads and, as a result, can be adopted by Small and Medium-sized Enterprise (SME) cloud providers to circumvent the vendor lock-in problem whenever service response time is not critical.
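The paper's solution is built on the Apache Deltacloud framework (Ruby drivers); purely to illustrate the provider-agnostic pattern that such abstraction layers offer, the sketch below uses Apache Libcloud, one of the surveyed alternatives. The credentials and endpoint are placeholders, not values from the paper.

```python
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

# Instantiate a driver for one of the surveyed open source platforms.
OpenStack = get_driver(Provider.OPENSTACK)
conn = OpenStack(
    "demo_user", "demo_password",                            # placeholder credentials
    ex_force_auth_url="http://controller.example.org:5000",  # placeholder endpoint
    ex_force_auth_version="3.x_password",
    ex_tenant_name="demo_project",
)

# The same provider-agnostic calls work for any supported backend,
# which is the kind of vendor lock-in avoidance the paper targets.
for node in conn.list_nodes():
    print(node.name, node.state, node.public_ips)
```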
Abstract:
Work presented within the scope of the Master's programme in Informatics Engineering, as a partial requirement for obtaining the degree of Master in Informatics Engineering.