27 results for Landscape-scale Variations
at Instituto Politécnico do Porto, Portugal
Abstract:
Mathematical models and statistical analysis are key instruments in soil science research, as they can describe and/or predict the current state of a soil system. These tools allow us to explore the behavior of soil-related processes and properties, as well as to generate new hypotheses for future experimentation. A good model and analysis of soil property variations, one that allows us to draw sound conclusions and to estimate spatially correlated variables at unsampled locations, clearly depends on the amount and quality of the data and on the robustness of the techniques and estimators. The quality of the data, in turn, depends on a competent data collection procedure and on capable laboratory analytical work. Following the standard soil sampling protocols available, soil samples should be collected according to key factors such as a convenient spatial scale, landscape homogeneity (or non-homogeneity), land color, soil texture, land slope, and solar exposure. Obtaining good quality data from forest soils is predictably expensive, as it is labor intensive and demands considerable manpower and equipment, both in field work and in laboratory analysis. Moreover, the sampling scheme to be used for data collection in a forest field is not simple to design, as the sampling strategies chosen depend strongly on soil taxonomy. In fact, a sampling grid cannot be followed if rocks are found at the intended collection depth, if no soil is found at all, or if large trees block the collection point. Considering this, the proficient design of a soil sampling campaign in a forest field is not always a simple process and sometimes represents a truly huge challenge. In this work, we present some difficulties that occurred during two experiments on forest soil conducted to study the spatial variation of some soil physical-chemical properties. Two different sampling protocols were considered for monitoring two types of forest soil located in NW Portugal: umbric regosol and lithosol. Two different sampling tools were also used: a manual auger and a shovel. Both scenarios were analyzed, and the results allowed us to conclude that monitoring forest soil for mathematical and statistical investigation requires a data collection procedure compatible with established protocols, but that a pre-defined grid often fails when the variability of the soil property is not uniform in space. In such cases, the sampling grid should be conveniently adapted from one part of the landscape to another, and this should be taken into account in the mathematical procedure.
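The abstract refers to estimating spatially correlated variables at unsampled locations without naming a specific estimator. As a purely illustrative example, the sketch below uses inverse distance weighting, one common choice for such estimation; the coordinates, pH values, and function name are hypothetical and are not taken from the study.

```python
# Minimal sketch (not the authors' method): estimating a soil property at an
# unsampled location from scattered samples via inverse distance weighting.
import numpy as np

def idw_estimate(sample_xy, sample_values, target_xy, power=2.0):
    """Inverse-distance-weighted estimate at target_xy from sampled points."""
    sample_xy = np.asarray(sample_xy, dtype=float)
    sample_values = np.asarray(sample_values, dtype=float)
    dist = np.linalg.norm(sample_xy - np.asarray(target_xy, dtype=float), axis=1)
    if np.any(dist == 0.0):                  # target coincides with a sample
        return float(sample_values[dist == 0.0][0])
    weights = 1.0 / dist**power              # closer samples weigh more
    return float(np.sum(weights * sample_values) / np.sum(weights))

# Hypothetical data: pH measured at four grid points (coordinates in metres).
samples = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
ph_values = [4.8, 5.1, 5.0, 5.4]
print(idw_estimate(samples, ph_values, target_xy=(3.0, 4.0)))
```

Kriging-based estimators would additionally model the spatial correlation structure; the weighting idea above is only the simplest starting point.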
Abstract:
Polyphenylene oxide, sold under the trade name PPO®, is one of the main resins produced at SABIC IP and the main ingredient of the trademarked engineering plastic Noryl®. The PPO® process technology team develops a series of new products in small-scale reactors, both in Selkirk and in Bergen op Zoom. To achieve a quick transition from laboratory scale to the plant, thorough knowledge of the reactor is required. The objective of this project is to outline general guidelines for the scale-up of new PPO products from the laboratory to industrial scale, based on the study of one PPO grade, PPO 803. The study can be divided into two phases. In the first phase, the recipes and the reaction profiles are compared, and initial conclusions are drawn. Subsequently, based on these conclusions, a design of experiments is carried out. The initial study suggested that the recipe, the initial reactor temperature, and the stirrer speed could influence the reaction time as well as the polymer's intrinsic viscosity drop (IV drop). The experimental reactions showed that the recipe is the main factor influencing both the reaction time and the intrinsic viscosity drop. The reaction time increases as stirring decreases, owing to poor dispersion of oxygen in the mixture. The use of high initial temperatures leads to a larger intrinsic viscosity drop due to catalyst deactivation. The experimental method used in the Bergen op Zoom laboratory is a good simulator of the procedure used in the plant.
Abstract:
This paper describes the communication stack of the REMPLI system: a structure using power lines and IP-based networks for communication, data acquisition, and control of energy distribution and consumption. It is furthermore prepared to use alternative communication media such as GSM or analog modem connections. The REMPLI system provides communication services for existing applications, namely automated meter reading, energy billing, and domotic applications. The communication stack, consisting of physical, network, transport, and application layers, is described, as well as the communication services provided by the system. We show how the peculiarities of power-line communication influence the design of the communication stack by introducing requirements to use the limited bandwidth efficiently, optimize traffic, and implement fair use of the communication medium among the large number of communication partners.
Abstract:
Collective behaviours can be observed in both natural and man-made systems composed of a large number of elemental subsystems. Typically, each elemental subsystem has its own dynamics but, whenever interaction between individuals occurs, the individual behaviours tend to be relaxed and collective behaviours emerge. In this paper, the collective behaviour of a large-scale system composed of several coupled elemental particles is analysed. The dynamics of the particles are governed by the same type of equations but have different parameter values and initial conditions. Coupling between particles is based on statistical feedback, which means that each particle is affected by the average behaviour of its neighbours. It is shown that the global system may exhibit several types of collective behaviour, ranging from partial synchronisation, characterised by the existence of several clusters of synchronised subsystems, to global synchronisation, where all the elemental particles synchronise completely.
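The abstract does not specify the particles' equations. As a purely illustrative sketch of statistical-feedback coupling, the example below iterates logistic maps with heterogeneous parameters and initial conditions and nudges each one toward the average state of its neighbours; the map, the neighbourhood size, and all parameter values are assumptions chosen for illustration.

```python
# Minimal sketch (dynamics not specified in the abstract): N maps with
# different parameters, each nudged toward the mean state of its neighbours.
import numpy as np

rng = np.random.default_rng(0)
N, steps, eps = 50, 200, 0.3                 # particles, iterations, coupling strength
r = rng.uniform(3.7, 3.9, size=N)            # heterogeneous parameter values
x = rng.uniform(0.0, 1.0, size=N)            # heterogeneous initial conditions

for _ in range(steps):
    # statistical feedback: average state of a small neighbourhood around each particle
    local = np.array([x[max(0, i - 2): i + 3].mean() for i in range(N)])
    x = (1.0 - eps) * r * x * (1.0 - x) + eps * local   # own dynamics + feedback term

# A small spread of final states hints at clusters or global synchronisation.
print(np.round(np.sort(x), 3))
```

Varying the coupling strength `eps` in such a toy model typically moves the population between unsynchronised, clustered, and fully synchronised regimes, mirroring the behaviours described above.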
Abstract:
In spite of the significant amount of scientific work on Wireless Sensor Networks (WSNs), there is a clear lack of effective, feasible and usable WSN system architectures that address both functional and non-functional requirements in an integrated fashion. This poster abstract outlines the EMMON system architecture for large-scale, dense, real-time embedded monitoring. EMMON relies on a hierarchical network architecture together with integrated middleware and command and control mechanisms. It has been designed to use standard, commercially available technologies, while maintaining as much flexibility as possible to meet specific applications' requirements. The EMMON WSN architecture has been validated through extensive simulation and experimental evaluation, including through a 300+ node test-bed, the largest WSN test-bed in Europe to date.
Abstract:
Wireless sensor networks (WSNs) have attracted growing interest in the last decade as an infrastructure to support a diversity of ubiquitous computing and cyber-physical systems. However, most research work has focused on protocols or on specific applications. As a result, there remains a clear lack of effective and usable WSN system architectures that address both functional and non-functional requirements in an integrated fashion. This poster outlines the EMMON system architecture for large-scale, dense, real-time embedded monitoring. It provides a hierarchical communication architecture together with integrated middleware and command and control software. It has been designed to maintain as much flexibility as possible while meeting specific applications' requirements. EMMON has been validated through extensive analytical, simulation and experimental evaluations, including a 300+ node test-bed, the largest single-site WSN test-bed in Europe.
Abstract:
Most research work on WSNs has focused on protocols or on specific applications. There is a clear lack of easy/ready-to-use WSN technologies and tools for planning, implementing, testing and commissioning WSN systems in an integrated fashion. While there exists a plethora of papers about network planning and deployment methodologies, to the best of our knowledge none of them helps the designer to match coverage requirements with network performance evaluation. In this paper we aim at filling this gap by presenting a unified toolset, i.e., a framework able to provide a global picture of the system, from network deployment planning to system test and validation. This toolset has been designed to back up the EMMON WSN system architecture for large-scale, dense, real-time embedded monitoring. It includes network deployment planning, worst-case analysis and dimensioning, protocol simulation, and automatic remote programming and hardware testing tools. This toolset has been instrumental in validating the system architecture through DEMMON1, the first EMMON demonstrator, i.e., a 300+ node test-bed, which is, to the best of our knowledge, the largest single-site WSN test-bed in Europe to date.
Abstract:
Wireless sensor networks (WSNs) have attracted growing interest in the last decade as an infrastructure to support a diversity of ubiquitous computing and cyber-physical systems. However, most research work has focused on protocols or on specific applications. As a result, there remains a clear lack of effective, feasible and usable system architectures that address both functional and non-functional requirements in an integrated fashion. In this paper, we outline the EMMON system architecture for large-scale, dense, real-time embedded monitoring. EMMON provides a hierarchical communication architecture together with integrated middleware and command and control software. It has been designed to use standard, commercially available technologies, while maintaining as much flexibility as possible to meet specific applications' requirements. The EMMON architecture has been validated through extensive simulation and experimental evaluation, including a 300+ node test-bed, which is, to the best of our knowledge, the largest single-site WSN test-bed in Europe to date.
Abstract:
We focus on large-scale and dense deeply embedded systems where, due to the large amount of information generated by all nodes, even simple aggregate computations such as the minimum value (MIN) of the sensor readings become notoriously expensive to obtain. Recent research has exploited a dominance-based medium access control (MAC) protocol, the CAN bus, for computing aggregated quantities in wired systems. For example, MIN can be computed efficiently, and an interpolation function which approximates sensor data in an area can be obtained efficiently as well. Dominance-based MAC protocols have recently been proposed for wireless channels, and these protocols can be expected to enable highly scalable aggregate computations in wireless systems. However, no experimental demonstration is currently available in the research literature. In this paper, we demonstrate that highly scalable aggregate computations in wireless networks are possible. We do so by (i) building a new wireless hardware platform with appropriate characteristics for making dominance-based MAC protocols efficient, (ii) implementing dominance-based MAC protocols on this platform, (iii) implementing distributed algorithms for aggregate computations (MIN, MAX, interpolation) using the new implementation of the dominance-based MAC protocol, and (iv) performing experiments that prove such computations are feasible in practice.
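As an illustration of why dominance-based arbitration yields the minimum, the sketch below simulates the bitwise contention principle used on the CAN bus (most significant bit first, with 0 as the dominant level): nodes whose recessive bit loses withdraw, and the value that survives arbitration is the smallest reading. This is a toy simulation, not the authors' wireless MAC implementation, and the sensor readings are hypothetical.

```python
# Minimal sketch of the arbitration idea (a simulation, not the authors' MAC
# implementation): nodes contend bit by bit, MSB first; 0 is dominant, so the
# channel resolves to the bitwise arbitration winner, i.e. the minimum reading.
NUM_BITS = 8

def dominance_min(readings):
    """Simulate priority arbitration over a shared medium; returns min(readings)."""
    active = list(readings)                      # nodes still contending
    result = 0
    for bit in range(NUM_BITS - 1, -1, -1):      # most significant bit first
        bits = [(value >> bit) & 1 for value in active]
        bus = min(bits)                          # dominant (0) wins on the channel
        active = [v for v, b in zip(active, bits) if b == bus]  # recessive senders back off
        result = (result << 1) | bus
    return result

sensor_readings = [203, 87, 144, 91]             # hypothetical 8-bit sensor values
assert dominance_min(sensor_readings) == min(sensor_readings)
print(dominance_min(sensor_readings))            # 87
```

Because every contending node observes the same bus value at each bit, the whole network learns MIN in a single arbitration round, independently of the number of nodes; MAX follows by inverting the readings.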
Abstract:
We use the term Cyber-Physical Systems to refer to large-scale distributed sensor systems. Locating the geographic coordinates of objects of interest is an important problem in such systems. We present a new distributed approach to localize objects and events of interest with a time complexity that is independent of the number of nodes.
Abstract:
For a better assessment and definition of an individual's intervention plan, it is increasingly important to have assessment instruments that are valid and reliable for the Portuguese population. Objective: To translate and adapt the Trunk Impairment Scale (TIS) for the Portuguese population of post-stroke patients, and to evaluate its psychometric properties. Methodology: The TIS was translated into Portuguese and culturally adapted for the Portuguese population. Its psychometric properties, including validity, reliability, inter-rater agreement, internal consistency, sensitivity, specificity, and responsiveness, were assessed in a population diagnosed with stroke and in a control group of healthy participants. Eighty individuals took part in this study, divided into two groups: post-stroke individuals (40) and a group without pathology (40). The participants were assessed with the Berg Balance Scale, the Functional Independence Measure, the Fugl-Meyer Physical Performance Scale, and the TIS, in order to evaluate the psychometric properties of the latter. The assessments were carried out by two experienced physiotherapists, and the re-test was performed after 48 hours. The data were recorded and processed with the SPSS 21.0 software. Results: The internal consistency of the TIS was moderate to high (Cronbach's alpha = 0.909). Regarding inter-rater reliability, the items with the lowest values were items 1 and 4 (0.759 and 0.527, respectively), and the items with the highest Kappa values were items 5 and 6 (0.830 and 0.893, respectively). Regarding criterion validity, no correlation was found with the Fugl-Meyer Physical Performance Scale, the Berg Balance Scale, or the Functional Independence Measure (r = 0.166, r = 0.017, and r = -0.002, respectively). Regarding construct validity, the median value was higher for items 1 to 5, suggesting differences between the post-stroke group and the healthy group (p < 0.001). For the other two items (6 and 7), no differences were found between the responses of the two groups (p > 0.001). Conclusion: The results obtained in this study suggest that the Portuguese version of the TIS shows good levels of reliability and internal consistency, as well as good inter-rater agreement.
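For readers unfamiliar with the internal-consistency statistic reported above, the sketch below shows how Cronbach's alpha is computed from a respondents-by-items score matrix; the scores and the function name are hypothetical and unrelated to the study's data.

```python
# Minimal sketch (hypothetical scores, not the study's data): Cronbach's alpha,
# the internal-consistency statistic reported in the abstract above.
import numpy as np

def cronbach_alpha(item_scores):
    """item_scores: 2-D array, rows = respondents, columns = scale items."""
    scores = np.asarray(item_scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_var = scores.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the total score
    return (k / (k - 1.0)) * (1.0 - item_var / total_var)

# Four hypothetical respondents scored on three items.
scores = [[2, 3, 3],
          [1, 2, 2],
          [3, 3, 3],
          [0, 1, 1]]
print(round(cronbach_alpha(scores), 3))
```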
Abstract:
In this study, the concentration probability distributions of 82 pharmaceutical compounds detected in the effluents of 179 European wastewater treatment plants were computed and inserted into a multimedia fate model. The comparative ecotoxicological impact of the direct emission of these compounds from wastewater treatment plants on freshwater ecosystems, based on a potentially affected fraction (PAF) of species approach, was assessed to rank compounds based on priority. As many pharmaceuticals are acids or bases, the multimedia fate model accounts for regressions to estimate pH-dependent fate parameters. An uncertainty analysis was performed by means of Monte Carlo analysis, which included the uncertainty of fate and ecotoxicity model input variables, as well as the spatial variability of landscape characteristics on the European continental scale. Several pharmaceutical compounds were identified as being of greatest concern, including 7 analgesics/anti-inflammatories, 3 β-blockers, 3 psychiatric drugs, and 1 each of 6 other therapeutic classes. The fate and impact modelling relied extensively on estimated data, given that most of these compounds have little or no experimental fate or ecotoxicity data available, as well as a limited reported occurrence in effluents. The contribution of estimated model input variables to the variance of freshwater ecotoxicity impact, as well as the lack of experimental abiotic degradation data for most compounds, helped in establishing priorities for further testing. Generally, the effluent concentration and the ecotoxicity effect factor were the model input variables with the most significant effect on the uncertainty of output results.
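As a rough illustration of the Monte Carlo uncertainty propagation described above, the sketch below samples hypothetical lognormal distributions for an effluent concentration and an ecotoxicity effect factor and pushes them through a toy impact expression; the distributions, units, and expression are assumptions chosen for illustration and do not reproduce the study's multimedia fate model.

```python
# Highly simplified sketch (hypothetical distributions and a toy impact
# expression, not the study's multimedia fate model): Monte Carlo propagation
# of input uncertainty to an impact score.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Hypothetical lognormal inputs: effluent concentration (µg/L) and
# ecotoxicity effect factor (PAF·m³/µg), both assumed uncertain.
concentration = rng.lognormal(mean=np.log(0.5), sigma=0.8, size=n)
effect_factor = rng.lognormal(mean=np.log(20.0), sigma=1.0, size=n)
dilution = rng.uniform(50.0, 500.0, size=n)      # spatially variable dilution factor

impact = concentration / dilution * effect_factor  # toy impact score per draw

low, median, high = np.percentile(impact, [2.5, 50.0, 97.5])
print(f"impact score: median {median:.3g}, 95% interval [{low:.3g}, {high:.3g}]")
```

Repeating such a propagation per compound, and then ranking compounds by the resulting impact distributions, is the general pattern behind the prioritisation described in the abstract.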
Abstract:
Phonological development was assessed in six alphabetic orthographies (English, French, Greek, Icelandic, Portuguese and Spanish) at the beginning and end of the first year of reading instruction. The aim was to explore contrasting theoretical views regarding: the question of the availability of phonology at the outset of learning to read (Study 1); the influence of orthographic depth on the pace of phonological development during the transition to literacy (Study 2); and the impact of literacy instruction (Study 3). Results from 242 children did not reveal a consistent sequence of development as performance varied according to task demands and language. Phonics instruction appeared more influential than orthographic depth in the emergence of an early meta-phonological capacity to manipulate phonemes, and preliminary indications were that cross-linguistic variation was associated with speech rhythm more than factors such as syllable complexity. The implications of the outcome for current models of phonological development are discussed.