999 results for Land-side
Abstract:
The main objective of this work was to identify and characterize the daily evolution of the Atmospheric Boundary Layer (ABL) in the Greater Vitória Region (RGV), Espírito Santo State, Brazil, and in the Dunkirk Region (RD), Nord-Pas-de-Calais, France, evaluating the accuracy of parameterizations used in the Weather Research and Forecasting (WRF) meteorological model in detecting the formation and attributes of the Internal Boundary Layer (IBL) formed by sea breezes. The RGV has complex relief: a coastal region of rugged topography with a mountain chain parallel to the coast. The RD has simple relief: a coastal region with small undulations that do not exceed 150 meters across the study domain. To evaluate the model forecasts, the results of two measurement campaigns were used: one carried out in the city of Dunkirk, in northern France, in July 2009, using a light detection and ranging (LIDAR) system, a sonic detection and ranging (SODAR) system and data from a surface meteorological station (EMS); the other carried out in the city of Vitória, Espírito Santo, in July 2012, also using a LIDAR, a SODAR and data from an EMS. Simulations were run with three ABL parameterization schemes, two with non-local closure, Yonsei University (YSU) and Asymmetric Convective Model 2 (ACM2), and one with local closure, Mellor-Yamada-Janjic (MYJ), combined with two land surface schemes (CLS), Rapid Update Cycle (RUC) and Noah.
For both the RGV and the RD, simulations were run with the six possible combinations of the three ABL parameterizations and the two CLS parameterizations, for the campaign periods, using four nested domains. The three largest are squares with side lengths of 1863 km, 891 km and 297 km and grid spacings of 27 km, 9 km and 3 km, respectively; the study domain measures 81 km North-South by 63 km East-West, with a 1 km grid and 55 vertical levels reaching a maximum of approximately 13,400 m, concentrated near the ground. The results of this work showed that: a) depending on the configuration adopted, the computational effort can increase considerably without a correspondingly large gain in accuracy; b) for the RD, the simulation combining the MYJ ABL parameterization with the Noah scheme produced the best estimate, capturing the IBL phenomena, while the simulations using the ACM2 and YSU parameterizations predicted the sea breeze onset with a delay of up to three hours; c) for the RGV, the simulation combining the YSU ABL parameterization with the Noah CLS scheme made the best inferences about the IBL. These results suggest the need to assess in advance both the computational effort required by a given configuration and the accuracy of specific parameterization sets for each region studied. The differences are associated with the ability of the different parameterizations to extract surface information from the global data, which is essential for determining the intensity of vertical turbulent mixing and the soil surface temperature; this suggests that a better representation of land use is fundamental to improving estimates of the IBL and of the other parameters used by atmospheric pollutant dispersion models.
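The four-domain setup described above maps directly onto WRF's namelist configuration; a minimal, illustrative fragment is sketched below. The grid ratios, vertical level count and physics option codes follow the text, and the grid point counts are derived from the stated domain sizes and spacings; everything else about the thesis configuration (domain centers, dates, other physics options) is unknown and omitted.

```fortran
&domains
 max_dom            = 4,
 dx                 = 27000,            ! outermost grid: 27 km
 parent_grid_ratio  = 1, 3, 3, 3,       ! 27 km -> 9 km -> 3 km -> 1 km
 e_we               = 70, 100, 100, 64, ! 1863/27, 891/9, 297/3, 63/1 (+1)
 e_sn               = 70, 100, 100, 82, ! innermost domain: 81 km N-S
 e_vert             = 55, 55, 55, 55,   ! 55 levels, denser near the surface
/

&physics
 bl_pbl_physics     = 1, 1, 1, 1,       ! PBL: 1 = YSU, 2 = MYJ, 7 = ACM2
 sf_surface_physics = 2, 2, 2, 2,       ! land surface: 2 = Noah, 3 = RUC
/
```

Running the six scheme combinations amounts to sweeping `bl_pbl_physics` over {1, 2, 7} and `sf_surface_physics` over {2, 3} across otherwise identical runs.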
Abstract:
This study systematizes and addresses the theme of educator training developed in the inter-relations between Brazilian universities and the Landless Rural Workers' Movement (MST), specifically in the undergraduate courses in History at the Universidade Federal da Paraíba and in Agronomic Engineering at the Universidade Federal de Sergipe, from 2004 to 2008. This process unfolds in a difficult and complex context of the struggle for agrarian reform in Brazil, mainly due to the transformations of recent years arising from the expansion of agribusiness production logics. This condition has led the MST to discuss and propose a new conception of agrarian reform, which it designates popular agrarian reform, in place of the classic agrarian reform proposal. The study also presents a view of Brazilian universities and of the projects built and implemented in recent years, noting coincidences with the general economic policies for society. It presents a conception of training that has been constructed within the practices of the MST, which also approaches universities to sign agreements and develop schooling/training processes for its militant educators. Among the dimensions of this educational/formative process, the following stand out: a permanent link with organic processes; training as an ethical, aesthetic and mystical process that addresses attitudes/behaviors, and as a dialogical, critical and articulated process that encompasses knowledges and experiences in an interaction seeking to overcome monocultures. Through field research, the study captures estrangements, obstacles, meanings of the pedagogical and collective occupation, alternatives, and legacies that remain both in the MST and in the university, and it presents the current involvement and activity of the graduates of both courses.
It also points to challenges and other possibilities for facing the difficult but necessary task of training educators, militants collectively capable of carrying forward the struggle for a more just, solidary and democratic world, in which land and knowledge, together with other economic and cultural goods, are deeply democratized. It aims to contribute to the debate on the relevance of these inter-relations between the university and social movements, in which such experiences mark new possibilities for democratic openness and advances, a less elitist university, and new levels of schooling/training for members of the Landless Workers' Movement.
Abstract:
The Wyner-Ziv video coding (WZVC) rate-distortion performance is highly dependent on the quality of the side information, an estimate of the original frame created at the decoder. This paper characterizes WZVC efficiency when motion compensated frame interpolation (MCFI) techniques are used to generate the side information, a difficult problem in WZVC especially because the decoder only has some decoded reference frames available. The proposed WZVC compression efficiency rate model relates the power spectral density of the estimation error to the accuracy of the MCFI motion field. Some interesting conclusions can then be derived about the impact of the motion field smoothness, and of its correlation with the true motion trajectories, on the compression performance.
Abstract:
One of the most efficient approaches to generate the side information (SI) in distributed video codecs is motion compensated frame interpolation, where the current frame is estimated from past and future reference frames. However, this approach leads to significant spatial and temporal variations in the correlation noise between the source at the encoder and the SI at the decoder. In such a scenario, it is useful to design an architecture where the SI can be generated more robustly at the block level, avoiding the creation of SI frame regions with lower correlation, which are largely responsible for coding efficiency losses. This paper presents a flexible framework to generate SI at the block level in two modes: the first mode corresponds to a motion compensated interpolation (MCI) technique, while the second corresponds to a motion compensated quality enhancement (MCQE) technique, in which a low quality Intra block sent by the encoder is used to generate the SI by performing motion estimation with the help of the reference frames. For blocks where MCI produces SI with lower correlation, the novel MCQE mode can be advantageous overall from the rate-distortion point of view, even if some rate has to be invested in the low quality Intra coded blocks. The overall solution is evaluated in terms of RD performance, with improvements up to 2 dB, especially for high motion video sequences and long Group of Pictures (GOP) sizes.
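The MCQE mode described above can be illustrated with a toy Python/NumPy sketch, in which the coarsely coded Intra block itself drives a motion search in a decoded reference frame and the best match becomes that block's side information. The function name, SAD criterion and full-search window are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def mcqe_side_information(coarse_block, ref, top, left, search=4):
    """Toy MCQE: use a coarsely coded Intra block to drive a motion
    search in a decoded reference frame; the best SAD match becomes the
    (higher quality) side information for that block."""
    b = coarse_block.shape[0]
    h, w = ref.shape
    best_sad, best = np.inf, coarse_block.astype(np.float64)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + b > h or x + b > w:
                continue  # candidate block falls outside the frame
            cand = ref[y:y + b, x:x + b].astype(np.float64)
            sad = np.abs(cand - coarse_block).sum()
            if sad < best_sad:
                best_sad, best = sad, cand
    return best
```

The key point this sketch captures is that the rate spent on the coarse Intra block buys a reliable anchor for motion estimation, which MCI alone lacks in low-correlation regions.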
Abstract:
Recently, several distributed video coding (DVC) solutions based on the distributed source coding (DSC) paradigm have appeared in the literature. Wyner-Ziv (WZ) video coding, a particular case of DVC where side information is made available at the decoder, enables a flexible distribution of the computational complexity between the encoder and decoder, promising to fulfill novel requirements from applications such as video surveillance, sensor networks and mobile camera phones. The quality of the side information at the decoder plays a critical role in determining the WZ video coding rate-distortion (RD) performance, notably in raising it as close as possible to the RD performance of standard predictive video coding schemes. Towards this target, efficient motion search algorithms for powerful frame interpolation are much needed at the decoder. In this paper, the RD performance of a Wyner-Ziv video codec is improved by using novel, advanced motion compensated frame interpolation techniques to generate the side information. The development of this type of side information estimator is a difficult problem in WZ video coding, especially because the decoder only has some decoded reference frames available. Based on the regularization of the motion field, novel side information creation techniques are proposed in this paper, along with a new frame interpolation framework able to generate higher quality side information at the decoder. To illustrate the RD performance improvements, this novel side information creation framework has been integrated into a transform domain turbo coding based Wyner-Ziv video codec. Experimental results show that the novel side information creation solution leads to better RD performance than available state-of-the-art side information estimators, with improvements up to 2 dB; moreover, it outperforms H.264/AVC Intra by up to 3 dB with a lower encoding complexity.
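The regularized motion field techniques themselves are not detailed in this abstract, but the baseline idea, creating side information by motion compensated interpolation between two decoded reference frames, can be sketched as follows in Python/NumPy. The symmetric-motion assumption, block size, search range and function name are illustrative choices, not the authors' method.

```python
import numpy as np

def mcfi_side_information(past, future, block=8, search=4):
    """Toy motion compensated frame interpolation.

    For each block of the frame to estimate, find the symmetric motion
    vector (dy, dx) that best matches past[p + v] against future[p - v]
    (SAD criterion), then average the two motion compensated blocks."""
    h, w = past.shape
    si = np.zeros((h, w), dtype=np.float64)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            best_sad, best = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y0, x0 = by + dy, bx + dx       # block in past frame
                    y2, x2 = by - dy, bx - dx       # mirrored block in future frame
                    if (min(y0, x0, y2, x2) < 0 or y0 + block > h or
                            x0 + block > w or y2 + block > h or x2 + block > w):
                        continue  # candidate motion leaves the frame
                    p = past[y0:y0 + block, x0:x0 + block].astype(np.float64)
                    f = future[y2:y2 + block, x2:x2 + block].astype(np.float64)
                    sad = np.abs(p - f).sum()
                    if sad < best_sad:
                        best_sad, best = sad, (dy, dx)
            dy, dx = best
            p = past[by + dy:by + dy + block, bx + dx:bx + dx + block].astype(np.float64)
            f = future[by - dy:by - dy + block, bx - dx:bx - dx + block].astype(np.float64)
            si[by:by + block, bx:bx + block] = (p + f) / 2.0
    return si
```

A real WZ decoder would add the motion field smoothness constraints the paper proposes; this sketch simply averages the two best-matching motion compensated blocks.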
Abstract:
The advances made in channel-capacity codes, such as turbo codes and low-density parity-check (LDPC) codes, have played a major role in the emerging distributed source coding paradigm. LDPC codes can be easily adapted to new source coding strategies due to their natural representation as bipartite graphs and the use of quasi-optimal decoding algorithms, such as belief propagation. This paper tackles a relevant scenario in distributed video coding: lossy source coding when multiple side information (SI) hypotheses are available at the decoder, each one correlated with the source according to a different correlation noise channel. It is thus proposed to exploit the multiple SI hypotheses through an efficient joint decoding technique with multiple LDPC syndrome decoders that exchange information to obtain coding efficiency improvements. At the decoder side, the multiple SI hypotheses are created with motion compensated frame interpolation and fused together in a novel iterative LDPC based Slepian-Wolf decoding algorithm. With the creation of multiple SI hypotheses and the proposed decoding algorithm, bitrate savings of up to 8.0% are obtained for similar decoded quality.
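The LDPC syndrome decoders above rely on belief propagation; as a much simpler illustration of the same syndrome-based Slepian-Wolf principle, the toy Python sketch below corrects a side information vector y toward the source x using only the syndrome s = Hx, with a hard-decision bit-flipping rule and a small (7,4) Hamming parity-check matrix. All of these choices are illustrative, not the paper's codes or fusion algorithm.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code, standing in for an LDPC code.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def syndrome_decode(y, s, H, max_iters=10):
    """Toy bit-flipping Slepian-Wolf decoder: correct the side
    information y toward the source x given only the syndrome s = Hx."""
    y = y.copy()
    for _ in range(max_iters):
        unsat = (H @ y + s) % 2      # parity checks that disagree with s
        if not unsat.any():
            return y                 # y now satisfies the source syndrome
        votes = unsat @ H            # failed checks each bit participates in
        y[np.argmax(votes)] ^= 1     # flip the most-suspect bit
    return y
```

The encoder thus sends only the short syndrome s, and the decoder leans on the correlated SI hypothesis to recover x, which is exactly what makes multiple, better-correlated SI hypotheses pay off in bitrate.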
Abstract:
Video coding technologies have played a major role in the explosion of large market digital video applications and services. In this context, the very popular MPEG-x and H.26x video coding standards adopted a predictive coding paradigm, where complex encoders exploit the data redundancy and irrelevancy to 'control' much simpler decoders. This codec paradigm fits applications and services such as digital television and video storage, where the decoder complexity is critical, but does not match the requirements of emerging applications such as visual sensor networks, where the encoder complexity is more critical. The Slepian-Wolf and Wyner-Ziv theorems opened the possibility to develop the so-called Wyner-Ziv video codecs, following a different coding paradigm where it is the task of the decoder, and no longer of the encoder, to (fully or partly) exploit the video redundancy. Theoretically, Wyner-Ziv video coding does not incur any compression performance penalty relative to the more traditional predictive coding paradigm (at least under certain conditions). In Wyner-Ziv video codecs, the so-called side information, a decoder estimate of the original frame to code, plays a critical role in the overall compression performance. For this reason, much research effort has been invested in the past decade to develop increasingly efficient side information creation methods. The main objective of this paper is to review and evaluate the available side information methods, after proposing a classification taxonomy to guide this review, allowing more solid conclusions to be reached and the next relevant research challenges to be better identified.
After classifying the side information creation methods into four classes, notably guess, try, hint and learn, the review of the most important techniques in each class and the evaluation of some of them lead to the important conclusion that which side information creation method provides the better rate-distortion (RD) performance depends on the amount of temporal correlation in each video sequence. It also became clear that the best available Wyner-Ziv video coding solutions are almost systematically based on the learn approach. The best solutions are already able to systematically outperform H.264/AVC Intra, and also the H.264/AVC zero-motion standard solution for specific types of content. (C) 2013 Elsevier B.V. All rights reserved.
Abstract:
In distributed video coding, motion estimation is typically performed at the decoder to generate the side information, increasing the decoder complexity while providing low complexity encoding in comparison with predictive video coding. Motion estimation can be performed once to create the side information, or several times to refine the side information quality along the decoding process. In this paper, motion estimation is performed at the decoder to generate multiple side information hypotheses, which are adaptively and dynamically combined whenever additional decoded information becomes available. The proposed iterative side information creation algorithm is inspired by video denoising filters and requires some statistics of the virtual channel between each side information hypothesis and the original data. With the proposed denoising algorithm for side information creation, an RD performance gain of up to 1.2 dB is obtained for the same bitrate.
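The adaptive, dynamic combination mentioned above is not specified in this abstract; a minimal sketch of the underlying denoising idea is an inverse-variance weighted average of the hypotheses, using the virtual channel noise statistics as weights. The function name and the pixel-wise linear fusion are assumptions, not the paper's algorithm.

```python
import numpy as np

def fuse_hypotheses(hypotheses, noise_vars):
    """Fuse several side information hypotheses pixel-wise, weighting
    each one by the inverse of its estimated virtual channel noise
    variance (linear, inverse-variance weighted averaging)."""
    w = 1.0 / np.asarray(noise_vars, dtype=np.float64)
    w /= w.sum()                          # normalize weights to sum to one
    stack = np.stack([np.asarray(h, dtype=np.float64) for h in hypotheses])
    return np.tensordot(w, stack, axes=1)  # weighted sum over hypotheses
```

In an iterative decoder, the noise variances (and hence the weights) would be re-estimated each time newly decoded data becomes available, which is what makes the combination dynamic.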
Abstract:
Workplace aggression is a factor that shapes the interaction between individuals and their work environment and produces many undesirable outcomes, sometimes imposing heavy costs on organizations. Only through a comprehensive understanding of the genesis of workplace aggression is it possible to develop strategies and interventions to minimize its harmful effects. The existing body of knowledge has already identified several individual, situational and contextual antecedents of workplace aggression, although this is a research area with significant gaps where many issues remain unaddressed (Dupré and Barling, 2006). According to Baron and Neuman (1998), one of these predictors is organizational change, since certain changes in the work environment (e.g., changes in management) can lead to increased aggression. This paper intends to contribute to workplace aggression research by studying its relationship with organizational change, considering a moderating role for political behaviors and organizational cynicism (Ammeter et al., 2002; Ferris et al., 2002). The literature review suggests that the mediators and moderators intervening in the relationships between workplace aggression and its antecedents are understudied. James (2005) argues that organizational politics is related to cynicism, and the empirical research of Miranda (2008) identified leadership political behavior as an antecedent of cynicism, but these two variables had not yet been investigated with regard to their relationship with workplace aggression. This investigation was operationalized using several scales, including the Organizational Change Questionnaire on climate of change, processes and readiness (Bouckenooghe, Devos and Broeck, 2009), a Workplace Aggression Scale (Vicente and D'Oliveira, 2008, 2009, 2010), an Organizational Cynicism Scale (Wanous, Reichers and Austin, 1994) and a Political Behavior Questionnaire (Yukl and Falbe, 1990).
Participants representing a wide variety of jobs across many organizations were surveyed. The results of the study and their implications will be presented and discussed, along with the study's contribution to organizational change practices in organizations.
Abstract:
Most research work on WSNs has focused on protocols or on specific applications. There is a clear lack of easy/ready-to-use WSN technologies and tools for planning, implementing, testing and commissioning WSN systems in an integrated fashion. While there is a plethora of papers about network planning and deployment methodologies, to the best of our knowledge none of them helps the designer match coverage requirements with network performance evaluation. In this paper we aim to fill this gap by presenting a unified toolset, i.e., a framework able to provide a global picture of the system, from network deployment planning to system test and validation. This toolset has been designed to support the EMMON WSN system architecture for large-scale, dense, real-time embedded monitoring. It includes tools for network deployment planning, worst-case analysis and dimensioning, protocol simulation, and automatic remote programming and hardware testing. This toolset was paramount in validating the system architecture through DEMMON1, the first EMMON demonstrator, a 300+ node test-bed which is, to the best of our knowledge, the largest single-site WSN test-bed in Europe to date.
Abstract:
Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies