64 results for effective area
Abstract:
The performance of the Weather Research and Forecasting (WRF) model in wind simulation was evaluated under different numerical and physical options for an area of Portugal located in complex terrain and characterized by its significant wind energy resource. Grid nudging and the integration time of the simulations were the numerical options tested. Since the goal is to simulate the near-surface wind, the physical parameterization schemes for the boundary layer were the ones under evaluation. The influence of local terrain complexity and of the simulation domain resolution on the model results was also studied. Data from three wind-measuring stations located within the chosen area were compared with the model results in terms of Root Mean Square Error, Standard Deviation Error and Bias. Wind speed histograms and occurrence and energy wind roses were also used for model evaluation. Globally, the model accurately reproduced the local wind regime, despite a significant underestimation of the wind speed. The wind direction is reasonably well simulated, especially in wind regimes with a clear dominant sector, but at low wind speeds the characterization of the wind direction (observed and simulated) is very subjective and led to larger deviations between simulations and observations. Among the tested options, results show that grid nudging in simulations whose integration time does not exceed 2 days is the best numerical configuration, and that the parameterization set composed of the MM5–Yonsei University–Noah physical schemes is the most suitable for this site. Results were poorer at sites with higher terrain complexity, mainly due to limitations of the terrain data supplied to the model. Increasing the simulation domain resolution alone is not enough to significantly improve the model performance. The results suggest that error minimization in wind simulation can be achieved by testing and choosing a suitable numerical and physical configuration for the region of interest, together with the use of high-resolution terrain data if available.
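The evaluation metrics named in this abstract (Bias, RMSE and Standard Deviation Error) can be computed directly from paired observed and simulated wind-speed series. A minimal sketch, with hypothetical arrays standing in for station observations and WRF output:

```python
import numpy as np

# Hypothetical paired series: station observations vs. WRF-simulated wind speed (m/s)
obs = np.array([4.2, 5.1, 6.3, 7.8, 5.5, 3.9])
sim = np.array([3.8, 4.6, 5.9, 7.1, 5.0, 3.5])

error = sim - obs
bias = error.mean()                  # mean error (negative => underestimation)
rmse = np.sqrt((error ** 2).mean())  # root mean square error
sde = error.std(ddof=0)              # standard deviation of the error
# Note: rmse**2 == bias**2 + sde**2 when ddof=0 is used

print(f"Bias = {bias:.2f} m/s, RMSE = {rmse:.2f} m/s, SDE = {sde:.2f} m/s")
```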
Abstract:
Master's in Civil Engineering – Construction Technologies branch
Abstract:
The widespread adoption of mobile devices has promoted the proliferation of applications for these devices in the most diverse fields, and the clinical area is no exception. Both at the professional level and in teaching, mobile technologies have long been adopted in this field for the most diverse purposes. The work presented here essentially aims to demonstrate the real importance of mobile learning in the context of clinical learning. More than implementing a simple educational resource, the intention was to design an integrated system that responds to all of the student's needs, both during study, in its various phases and locations, and in the hospital department where the student may be working as a specialty resident. After an exhaustive analysis of the relevant mobile applications in the medical field, it was found that there was no tool integrating several learning modules at a cost affordable to most students. An application capable of filling this gap was therefore conceived, and it is detailed throughout this thesis. The development of this work relied on the valuable collaboration of the prospective end users of the tool, since the choice of the modules to integrate was essentially based on their opinions. This thesis also includes the evaluation of the prototype by the students. This evaluation is intended to validate the actual importance of a tool of this nature for a medical student, as well as the impact the prototype had on their opinion of the concept of mobile learning in clinical education. With a view to the future implementation of an educational resource of this kind, the most relevant positive and negative points for the students were also collected. In short, this work validates the important role that mobile learning applications can play for a medical student, both in their places of study and in the department where they may be working.
Abstract:
The field of computer simulation has grown rapidly since its emergence and is currently one of the most widely used management and operational research sciences. Its principle is based on replicating the operation of processes or systems over periods of time, making it an indispensable methodology for solving a wide variety of real-world problems, regardless of their complexity. Of its countless areas of application, in the most diverse fields, the most prominent is its use in production systems, where the range of available applications is very broad. Simulation has been used to solve problems in production systems because it allows companies to adjust and plan their operations and systems quickly, effectively and deliberately, enabling rapid adaptation to the constantly changing needs of the global economy. Simulation applications and packages have followed technological trends, and the use of object-oriented technologies in their development is evident. This study was based, in a first phase, on gathering information supporting the concepts of modelling and simulation, as well as their application to real-time production systems. It then focused on the development of a prototype application for simulating manufacturing environments in real time. This tool was developed with possible pedagogical purposes and academic use in mind; it is capable of simulating a model of a production system and also provides animation. Without excluding the possibility of integrating other modules, or even other platforms, particular care was taken to ensure that its implementation relied on object-oriented development methodologies.
Abstract:
Portugal, like the rest of the Mediterranean basin, is prone to the occurrence of forest fires. In this work a statistical analysis was carried out based on official information, considering the forest fire occurrences and the corresponding burned area for each of the districts of mainland Portugal between 1996 and 2010. Concerning forest fire occurrences, it was possible to identify three main regions in mainland Portugal, while the burned area can be characterized in two main regions. The associations between districts and years differ in the two approaches. The results obtained provide a synthetic analysis of the phenomenon of forest fires in mainland Portugal, based on all the official information available to date.
Abstract:
This document presents the work developed within the scope of the course unit “Dissertação/Projeto/Estágio” of the 2nd year of the Master's in Sustainable Energies. The growing energy consumption of developed and emerging societies, together with the consequent increase in energy costs and the resulting environmental damage, promotes the development of new forms of energy production, whose priority is to obtain energy at the lowest possible cost and with reduced environmental impact. In order to preserve natural resources and reduce greenhouse gas emissions, the consumption of energy produced from fossil fuels must be reduced. Alternatives for a sustainable future must therefore be created, in which renewable energy sources play a fundamental role. In this sense, the production of electricity through photovoltaic solar systems emerges as one of the solutions. The main objective of this dissertation is the sizing of a grid-connected photovoltaic mini-production plant for a farm serving the dairy industry, together with the respective economic feasibility study. The farm under study is located on Graciosa Island, in the Azores, and the maximum power that the mini-production plant may inject into the public electricity grid is 10 kW. The sizing was carried out with software that is appropriate and well recognized in the field of photovoltaic electricity production – PVsyst – and comprised the following steps: a) definition of the site and project characteristics; b) selection of the photovoltaic modules; c) selection of the inverter; d) definition of the grid-connection power of the mini-production unit. Subsequently, different photovoltaic system options were studied, distinguished by the mounting structure used: two fixed systems and two with built-in tracking axes. In the economic feasibility study, two separate analyses were performed for each of the photovoltaic systems considered in the sizing: one under the subsidised remuneration regime and one under the general remuneration regime. The economic indicators obtained in the feasibility study supported the choice of the photovoltaic system most favourable to the investment. It is concluded that the photovoltaic system with additional tilt is the most advantageous option under both remuneration regimes analysed. This shows that the photovoltaic system with the highest annual electricity production, the two-axis tracking system, is not the most economically profitable option, because the revenue from its surplus production is not sufficient to offset its higher investment cost and thus achieve better economic indicators than those of the system with additional tilt. According to the economic feasibility study, regardless of the photovoltaic system adopted, the investment is recovered and the effective return is higher than the required return. Thus, even taking the associated risk into account, all the photovoltaic systems, under either remuneration regime, correspond to profitable investments.
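The kind of economic indicators referred to above (net present value, internal rate of return, payback) can be illustrated with a minimal cash-flow sketch. All figures below are hypothetical placeholders, not values from the dissertation:

```python
# Minimal sketch of common feasibility indicators for a small PV investment.

def npv(rate, cash_flows):
    """Net present value; cash_flows[0] is the initial outlay (negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=1.0, tol=1e-6):
    """Internal rate of return by bisection (assumes a single sign change)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

investment = -18_000.0                              # initial cost (EUR), hypothetical
annual_revenue = 2_400.0                            # yearly remuneration (EUR), hypothetical
cash_flows = [investment] + [annual_revenue] * 15   # 15-year horizon

print(f"NPV @ 6% = {npv(0.06, cash_flows):.0f} EUR")
print(f"IRR      = {irr(cash_flows):.1%}")
print(f"Payback  = {-investment / annual_revenue:.1f} years (simple)")
```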
Abstract:
This dissertation addresses the problem of building a data warehouse for AdClick, a company operating in the field of digital marketing. Digital marketing is a type of marketing that uses digital communication channels with the same purpose as the traditional method: promoting goods, businesses and services and attracting new customers. There are several digital marketing strategies aimed at achieving these objectives, most notably organic traffic and paid traffic. Organic traffic is characterized by marketing actions that involve no costs for promotion and/or the acquisition of potential customers, whereas paid traffic requires investment in campaigns capable of driving and attracting new customers. First, a state-of-the-art review of business intelligence and data warehousing is presented, along with their main advantages for companies. Business intelligence systems are necessary because companies today hold large volumes of data rich in information, which will only be properly exploited by making use of the capabilities of these systems. Accordingly, the first step in developing a business intelligence system is to concentrate all the data in a single, integrated system capable of supporting decision-making; it is here that the construction of the data warehouse appears as the single system ideally suited to this kind of requirement. In this dissertation, the data sources that will feed the data warehouse were surveyed and the contextualization of the company's existing business processes was initiated. After that, the construction of the data warehouse began, with the creation of the dimensions and fact tables, the definition of the processes for extracting and loading data into the data warehouse, and the creation of the various views. Regarding the impact of this dissertation, the partner company gains several business-level advantages from the implementation of the data warehouse and of the ETL processes that load all the information sources, including the centralization of information, greater flexibility for managers in how they access information, and the treatment of the data so that information can be extracted from it.
Abstract:
Consolidation consists of scheduling multiple virtual machines onto fewer servers in order to improve resource utilization and to reduce the operational costs due to power consumption. However, virtualization technologies do not offer performance isolation, causing application slowdown. In this work, we propose a performance-enforcing mechanism composed of a slowdown estimator and an interference- and power-aware scheduling algorithm. The slowdown estimator determines, based on noisy slowdown data samples obtained from state-of-the-art slowdown meters, whether tasks will complete within their deadlines, invoking the scheduling algorithm if needed. When invoked, the scheduling algorithm builds performance- and power-aware virtual clusters to successfully execute the tasks. We conduct simulations injecting synthetic jobs whose characteristics follow the latest version of the Google Cloud tracelogs. The results indicate that our strategy can be efficiently integrated with state-of-the-art slowdown meters to fulfil contracted SLAs in real-world environments, while reducing operational costs by about 12%.
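The decision made by a slowdown estimator, whether a task running under interference will still meet its deadline, can be sketched as a simple check on the estimated slowdown. The names, smoothing and threshold below are hypothetical illustrations, not the paper's implementation:

```python
# Hypothetical sketch of the deadline check that triggers rescheduling.
# A slowdown of s means the task runs s times slower than in isolation.
from statistics import mean

def needs_rescheduling(slowdown_samples, remaining_isolated_time,
                       time_to_deadline, safety_margin=1.1):
    """Return True if noisy slowdown samples suggest the deadline will be missed."""
    estimated_slowdown = mean(slowdown_samples)      # simple smoothing of noisy samples
    projected_remaining = estimated_slowdown * remaining_isolated_time
    return projected_remaining * safety_margin > time_to_deadline

# Example: meter samples, 400 s of isolated work left, 520 s until the deadline
print(needs_rescheduling([1.15, 1.3, 1.25], 400.0, 520.0))  # True -> invoke scheduler
```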
Abstract:
Among the most important measures to prevent wildfires is the use of prescribed and controlled burning in order to reduce the availability of fuel mass. However, the impact of these activities on soil physical and chemical properties varies according to the type of both soil and vegetation and is not fully understood. Therefore, soil monitoring campaigns are often used to measure these impacts. In this paper we have successfully used three statistical data treatments - the Kolmogorov-Smirnov test followed by the ANOVA and Kruskal-Wallis tests - to investigate the variability of the soil pH, soil moisture, soil organic matter and soil iron variables for different monitoring times and sampling procedures.
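The statistical workflow named in this abstract, a normality check followed by a parametric (ANOVA) or non-parametric (Kruskal-Wallis) comparison, can be sketched with SciPy. The soil-moisture samples below are hypothetical:

```python
from scipy import stats

# Hypothetical soil-moisture samples (%) from three monitoring times
before = [12.1, 13.4, 11.8, 12.9, 13.0]
after = [11.5, 12.2, 11.9, 12.4, 11.7]
one_year = [12.8, 13.1, 12.5, 13.6, 12.9]
groups = [before, after, one_year]

# Kolmogorov-Smirnov test of each group against a fitted normal distribution
normal = all(
    stats.kstest(g, "norm", args=(stats.tmean(g), stats.tstd(g))).pvalue > 0.05
    for g in groups
)

# Parametric ANOVA if normality is plausible, Kruskal-Wallis otherwise
if normal:
    stat, p = stats.f_oneway(*groups)
else:
    stat, p = stats.kruskal(*groups)

print(f"normal={normal}, statistic={stat:.2f}, p-value={p:.3f}")
```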
Abstract:
Prescribed fire is a frequently used technique with several advantages. Pedological and hydropedological techniques were tested to assess the changes that prescribed fire may cause in soils. This work was performed in the Tresminas area (Vila Pouca de Aguiar, Northern Portugal) during February and March 2011. Several techniques were applied in the present study. Field sampling followed ISO 10381-1 [1], ISO 10381-2 [2] and FAO rules [3], and a grid of 17 points was used for measuring the soil parameters. During the fire, with the assistance of the Portuguese Forestry Authority, we tried to check some important parameters such as the propagation speed, the size of the flame front and the intensity of energy emitted per unit area. Before the fire, disturbed and undisturbed soil samples were carefully collected for laboratory analysis and the soil water content was measured; four sets of thermocouples were also placed to measure soil temperature. After the fire, the thermocouples and new soil samples were collected; the soil water content was measured and ashes were collected. In the laboratory, after preparing and sieving the samples, the soil particle size was determined, as were the soil pH and the electrical conductivity in water. Total carbon (TC) and inorganic carbon (IC) [4] were measured with a Shimadzu TOC-Vcsn. The soil water content did not vary significantly before and after the fire, nor did the soil pH and electrical conductivity. TC and IC did not change, which was expected, since the fire did not exceed 200 °C. From the various parameters, we determined that the prescribed fire did not affect the soil. The low temperature of the fire and its rapid implementation meant that the adverse effects that can be caused by wildfire did not occur.
Abstract:
Certain materials used and produced in a wide range of non-nuclear industries contain enhanced activity concentrations of natural radionuclides. In particular, electricity production from coal is one of the major sources of increased human exposure to naturally occurring radioactive materials. Over the past decades there has been some discussion about the elevated natural background radiation in the areas near coal-fired power plants due to the high uranium and thorium content of coal. This work describes the methodology developed to assess the radiological impact of the increase in natural background radiation levels potentially originated by the operation of a coal-fired power plant. Gamma radiation measurements were made with two different instruments: a scintillometer (SPP2 NF, Saphymo) and a gamma-ray spectrometer with energy discrimination (Falcon 5000, Canberra). A total of 40 relevant sampling points were established at locations within 20 km of the power plant: 15 urban and 25 suburban measurement stations. The highest values were measured at the sampling points near the power plant and at those located between 6 and 20 km from the stacks. This may be explained by the presence of a huge coal pile (1.3 million tons) located near the stacks, which contributes to the dispersion of unburned coal, and by the height of the stacks (225 m), which may influence the dispersion of ash up to a distance of 20 km. In situ gamma radiation measurements with energy discrimination identified natural emitting nuclides as well as their decay products (212Pb, 214Pb, 226Ra, 232Th, 228Ac, 234Th, 234Pa, 235U, etc.). This work was primarily done to assess the impact of the operation of a coal-fired power plant on the background radiation level in the surrounding area. According to the results, an increase, or at least an influence, was identified both qualitatively and quantitatively.
Abstract:
To meet the increasing demands of complex inter-organizational processes and the demand for continuous innovation and internationalization, it is evident that new forms of organisation are being adopted, fostering more intensive collaboration processes and sharing of resources, in what can be called collaborative networks (Camarinha-Matos, 2006:03). Information and knowledge are crucial resources in collaborative networks, and their management is a fundamental process to optimize. Knowledge organisation and collaboration systems are thus important instruments for the success of collaborative networks of organisations, and have been researched in the last decade in the areas of computer science, information science, management sciences, terminology and linguistics. Nevertheless, research in this area has not paid much attention to multilingual contexts of collaboration, which pose specific and challenging problems. It is then clear that access to and representation of knowledge will happen more and more in multilingual settings, which implies overcoming the difficulties inherent to the presence of multiple languages through the use of processes such as the localization of ontologies. Although localization, like other processes that involve multilingualism, is a rather well-developed practice, and its methodologies and tools are fruitfully employed by the language industry in the development and adaptation of multilingual content, it has not yet been sufficiently explored as an element of support to the development of knowledge representations - in particular ontologies - expressed in more than one language. Multilingual knowledge representation is therefore an open research area calling for cross-contributions from knowledge engineering, terminology, ontology engineering, cognitive sciences, computational linguistics, natural language processing, and management sciences. This workshop brought together researchers interested in multilingual knowledge representation, in a multidisciplinary environment, to debate the possibilities of cross-fertilization between knowledge engineering, terminology, ontology engineering, cognitive sciences, computational linguistics, natural language processing, and management sciences applied to contexts where multilingualism continuously creates new and demanding challenges to current knowledge representation methods and techniques. In this workshop six papers dealing with different approaches to multilingual knowledge representation are presented, most of them describing tools, approaches and results obtained in the development of ongoing projects. In the first paper, Andrés Domínguez Burgos, Koen Kerremans and Rita Temmerman present a software module that is part of a workbench for terminological and ontological mining, Termontospider, a wiki crawler that aims to optimally traverse Wikipedia in search of domain-specific texts for extracting terminological and ontological information. The crawler is part of a tool suite for automatically developing multilingual termontological databases, i.e. ontologically-underpinned multilingual terminological databases. In this paper the authors describe the basic principles behind the crawler and summarize the research setting in which the tool is currently tested. In the second paper, Fumiko Kano presents work comparing four feature-based similarity measures derived from the cognitive sciences.
The purpose of the comparative analysis presented by the author is to verify the potentially most effective model that can be applied for mapping independent ontologies in a culturally influenced domain. For that, datasets based on standardized, pre-defined feature dimensions and values, obtainable from the UNESCO Institute for Statistics (UIS), have been used for the comparative analysis of the similarity measures. The purpose of the comparison is to verify the similarity measures on objectively developed datasets. According to the author, the results demonstrate that the Bayesian Model of Generalization provides the most effective cognitive model for identifying the most similar corresponding concepts existing for a targeted socio-cultural community. In another presentation, Thierry Declerck, Hans-Ulrich Krieger and Dagmar Gromann present ongoing work and propose an approach to the automatic extraction of information from multilingual financial Web resources, to provide candidate terms for building ontology elements or instances of ontology concepts. The authors present a complementary approach to the direct localization/translation of ontology labels, acquiring terminologies through the access and harvesting of multilingual Web presences of structured information providers in the field of finance, which leads to the detection of candidate terms in various multilingual sources in the financial domain that can be used not only as labels of ontology classes and properties but also for the possible generation of (multilingual) domain ontologies themselves. In the next paper, Manuel Silva, António Lucas Soares and Rute Costa claim that despite the availability of tools, resources and techniques aimed at the construction of ontological artifacts, developing a shared conceptualization of a given reality still raises questions about the principles and methods that support the initial phases of conceptualization. These questions become, according to the authors, more complex when the conceptualization occurs in a multilingual setting. To tackle these issues the authors present a collaborative platform - conceptME - where terminological and knowledge representation processes support domain experts throughout a conceptualization framework, allowing the inclusion of multilingual data as a way to promote knowledge sharing, enhance conceptualization and support a multilingual ontology specification. In another presentation, Frieda Steurs and Hendrik J. Kockaert present TermWise, a large project dealing with legal terminology and phraseology for the Belgian public services, i.e. the translation office of the ministry of justice. The project aims at developing an advanced tool that includes expert knowledge in the algorithms that extract specialized language from textual data (legal documents), and whose outcome is a knowledge database including Dutch/French equivalents for legal concepts, enriched with the phraseology related to the terms under discussion. Finally, Deborah Grbac, Luca Losito, Andrea Sada and Paolo Sirito report on the preliminary results of a pilot project currently ongoing at the UCSC Central Library, where they propose to adapt to subject librarians, employed in large and multilingual academic institutions, the model used by translators working within European Union institutions.
The authors are using User Experience (UX) Analysis in order to provide subject librarians with visual support, by means of “ontology tables” depicting the conceptual linking and connections of words with concepts, presented according to their semantic and linguistic meaning. The organizers hope that the selection of papers presented here will be of interest to a broad audience, and will be a starting point for further discussion and cooperation.
Abstract:
Due to their detrimental effects on human health, scientific interest in ultrafine particles (UFP) has been increasing, but the available information is far from comprehensive. Compared with the rest of the population, the elderly are potentially highly susceptible to the effects of outdoor air pollution. Thus, this study aimed to (1) determine the levels of outdoor pollutants in an urban area with emphasis on UFP concentrations and (2) estimate the respective dose rates of exposure for elderly populations. UFP were continuously measured over 3 weeks at 3 sites in northern Portugal: 2 urban (U1 and U2) and 1 rural used as reference (R1). Meteorological parameters and outdoor pollutants including particulate matter (PM10), ozone (O3), nitric oxide (NO), and nitrogen dioxide (NO2) were also measured. The dose rates of inhalation exposure to UFP were estimated for three elderly age categories: 64–70, 71–80, and >81 years. Over the sampling period, levels of PM10, O3 and NO2 were in compliance with European legislation. Mean UFP concentrations were 1.7 × 10⁴ and 1.2 × 10⁴ particles/cm³ at U1 and U2, respectively, whereas at the rural site levels were 20–70% lower (mean of 1 × 10⁴ particles/cm³). Vehicular traffic and local emissions were the predominant identified sources of UFP at the urban sites. In addition, the results of correlation analysis showed that UFP were meteorologically dependent. Exposure dose rates were 1.2- to 1.4-fold higher at the urban sites than at the reference site, with the highest levels noted for adults aged 71–80 yr, attributed mainly to higher inhalation rates.
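A common way to estimate an inhaled dose rate in exposure studies is the product of the airborne concentration and the inhalation rate. The sketch below uses that generic relation with hypothetical inhalation rates for the age groups; it is not the study's exact calculation:

```python
# Generic inhaled dose-rate estimate: dose rate = concentration x inhalation rate.
# Inhalation rates are hypothetical placeholders for the elderly age groups.

ufp_concentration = 1.7e4          # particles/cm3 (urban site U1, from the abstract)
cm3_per_m3 = 1e6                   # unit conversion

inhalation_rates = {               # m3/h, assumed values for illustration only
    "64-70 yr": 0.50,
    "71-80 yr": 0.55,
    ">81 yr": 0.48,
}

for group, rate_m3_h in inhalation_rates.items():
    dose_rate = ufp_concentration * cm3_per_m3 * rate_m3_h   # particles/h
    print(f"{group}: {dose_rate:.2e} particles/h")
```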
Abstract:
Wireless Body Area Network (WBAN) is the most convenient, cost-effective, accurate, and non-invasive technology for e-health monitoring. The performance of a WBAN may be disturbed when it coexists with other wireless networks. Accordingly, this paper provides a comprehensive study and in-depth analysis of coexistence issues and interference mitigation solutions in WBAN technologies. A thorough survey of state-of-the-art research on WBAN coexistence issues is conducted. The survey classifies, discusses, and compares the studies according to the parameters used to analyze the coexistence problem. The solutions suggested by the studies are then classified according to the techniques followed, and the concomitant shortcomings are identified. Moreover, the coexistence problem in WBAN technologies is analyzed mathematically, and formulas are derived for the probability of successful channel access for different wireless technologies in the presence of an interfering network. Finally, extensive simulations are conducted using OPNET with several real-life scenarios to evaluate the impact of coexistence interference on different WBAN technologies. In particular, three main WBAN wireless technologies are considered: IEEE 802.15.6, IEEE 802.15.4, and low-power WiFi. The mathematical analysis and the simulation results are discussed, and the impact of an interfering network on the different wireless technologies is compared and analyzed. The results show that an interfering network (e.g., standard WiFi) has an impact on the performance of WBAN and may disrupt its operation. In addition, the use of low-power WiFi for WBANs is investigated and shown to be a feasible option compared with other wireless technologies.
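As a generic illustration of how coexistence degrades channel access (not the formulas derived in the paper), one can assume n independent interfering nodes that each occupy a slot with probability p; the WBAN node then accesses the channel successfully only if no interferer transmits:

```python
# Generic slotted-access illustration, assuming independent interferers.
# This is an assumed toy model, NOT the paper's derived formulas.

def p_success(n_interferers: int, p_tx: float) -> float:
    """Probability that none of the n interferers transmits in a given slot."""
    return (1.0 - p_tx) ** n_interferers

for n in (1, 5, 10, 20):
    print(f"{n:2d} interferers, p_tx=0.1 -> P(success) = {p_success(n, 0.1):.3f}")
```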
Abstract:
Wireless Body Area Networks (WBANs) have emerged as a promising technology for medical and non-medical applications. WBANs consist of a number of miniaturized, portable, and autonomous sensor nodes that are used for long-term health monitoring of patients. These sensor nodes continuously collect patient information, which is used for ubiquitous health monitoring. In addition, WBANs may be used for managing catastrophic events and increasing the effectiveness and performance of rescue forces. The huge amount of data collected by WBAN nodes demands a scalable, on-demand, powerful, and secure storage and processing infrastructure. Cloud computing is expected to play a significant role in achieving these objectives. The cloud computing environment links different devices, ranging from miniaturized sensor nodes to high-performance supercomputers, to deliver people-centric and context-centric services to individuals and industries. The possible integration of WBANs with cloud computing (WBAN-cloud) will introduce a viable hybrid platform that must be able to process the huge amount of data collected from multiple WBANs. This WBAN-cloud will enable users (including physicians and nurses) to globally access the processing and storage infrastructure at competitive costs. Because WBANs forward useful and life-critical information to the cloud, which may operate in distributed and hostile environments, novel security mechanisms are required to prevent malicious interactions with the storage infrastructure. Both cloud providers and users must take strong security measures to protect the storage infrastructure.