906 results for Geoffroy's side-necked turtle
Abstract:
The Wyner-Ziv video coding (WZVC) rate-distortion performance is highly dependent on the quality of the side information, an estimation of the original frame created at the decoder. This paper characterizes the WZVC efficiency when motion-compensated frame interpolation (MCFI) techniques are used to generate the side information, a difficult problem in WZVC especially because the decoder only has some decoded reference frames available. The proposed WZVC compression efficiency rate model relates the power spectral density of the estimation error to the accuracy of the MCFI motion field. Several interesting conclusions can then be derived regarding the impact of the motion field smoothness, and of its correlation to the true motion trajectories, on the compression performance.
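The abstract does not reproduce the model itself. For orientation, analyses in this line of work typically build on Girod's classical treatment of motion-compensated prediction, which takes roughly the following form; this is a sketch of that standard background, not the paper's exact model:

```latex
% Standard background sketch (Girod-style); the paper's model may differ.
% PSD of the estimation error as a function of motion-field accuracy:
S_{ee}(\omega) = 2\,S_{ss}(\omega)\left[1 - \operatorname{Re}\{P(\omega)\}\right],
\qquad P(\omega) = \mathbb{E}\left[e^{-j\,\omega^{\top}\Delta}\right]
% where S_{ss} is the source PSD and \Delta the displacement (motion) error;
% a more accurate motion field concentrates p(\Delta) near zero, driving
% P(\omega) toward 1 and S_{ee} toward 0. The rate impact then follows as
\Delta R = \frac{1}{8\pi^{2}} \iint_{[-\pi,\pi]^{2}}
  \log_{2}\frac{S_{ee}(\omega)}{S_{ss}(\omega)}\,\mathrm{d}\omega .
```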
Abstract:
One of the most efficient approaches to generate the side information (SI) in distributed video codecs is motion-compensated frame interpolation, where the current frame is estimated from past and future reference frames. However, this approach leads to significant spatial and temporal variations in the correlation noise between the source at the encoder and the SI at the decoder. In such a scenario, it is useful to design an architecture where the SI can be generated more robustly at the block level, avoiding the creation of SI frame regions with low correlation, which are largely responsible for coding efficiency losses. In this paper, a flexible framework to generate SI at the block level in two modes is presented: the first mode corresponds to a motion-compensated interpolation (MCI) technique, while the second corresponds to a motion-compensated quality enhancement (MCQE) technique, in which a low-quality Intra block sent by the encoder is used to generate the SI by motion estimation with the help of the reference frames. For blocks where MCI produces SI with low correlation, the novel MCQE mode can be advantageous overall from the rate-distortion point of view, even if some rate has to be invested in the low-quality Intra coding of those blocks. The overall solution is evaluated in terms of RD performance, with improvements of up to 2 dB, especially for high-motion video sequences and long Group of Pictures (GOP) sizes.
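The abstract does not spell out how blocks are assigned to the MCI or MCQE mode. Below is a minimal sketch of one plausible classification rule, with hypothetical names and an illustrative threshold: blocks where the co-located past and future references disagree strongly are the ones where interpolation-based SI tends to be poorly correlated, so spending rate on a coarse Intra hint (MCQE) is more likely to pay off there.

```python
import numpy as np

B = 16  # block size (illustrative; not fixed by the abstract)

def mcqe_block_mask(past, future, thresh=8.0):
    """Flag blocks as MCQE candidates where co-located past/future
    references disagree strongly, i.e. where plain interpolation SI is
    likely poor. Threshold is illustrative, not from the paper."""
    diff = np.abs(past.astype(np.float64) - future.astype(np.float64))
    h, w = diff.shape
    mask = np.zeros((h // B, w // B), dtype=bool)
    for by in range(h // B):
        for bx in range(w // B):
            blk = diff[by * B:(by + 1) * B, bx * B:(bx + 1) * B]
            mask[by, bx] = blk.mean() > thresh  # high disagreement -> MCQE
    return mask
```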
Abstract:
Recently, several distributed video coding (DVC) solutions based on the distributed source coding (DSC) paradigm have appeared in the literature. Wyner-Ziv (WZ) video coding, a particular case of DVC where side information is made available at the decoder, enables a flexible distribution of the computational complexity between the encoder and decoder, promising to fulfill novel requirements from applications such as video surveillance, sensor networks and mobile camera phones. The quality of the side information at the decoder plays a critical role in determining the WZ video coding rate-distortion (RD) performance, notably in raising it to a level as close as possible to the RD performance of standard predictive video coding schemes. Towards this target, efficient motion search algorithms for powerful frame interpolation are much needed at the decoder. In this paper, the RD performance of a Wyner-Ziv video codec is improved by using novel, advanced motion-compensated frame interpolation techniques to generate the side information. The development of this type of side information estimator is a difficult problem in WZ video coding, especially because the decoder only has some decoded reference frames available. Based on the regularization of the motion field, novel side information creation techniques are proposed in this paper, along with a new frame interpolation framework able to generate higher-quality side information at the decoder. To illustrate the RD performance improvements, this novel side information creation framework has been integrated in a transform-domain turbo-coding-based Wyner-Ziv video codec. Experimental results show that the novel side information creation solution leads to better RD performance than available state-of-the-art side information estimators, with improvements up to 2 dB; moreover, it outperforms H.264/AVC Intra by up to 3 dB with a lower encoding complexity.
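As a rough illustration of the pipeline the abstract describes, the sketch below (hypothetical names, toy full-search matcher) estimates a block motion field between the two references, regularizes it with a component-wise median over each vector's neighbourhood (a common smoothness device standing in for the paper's more elaborate regularization), and compensates bidirectionally at the temporal midpoint:

```python
import numpy as np

B = 8  # block size (assumption)

def forward_motion(past, future, radius=4):
    """One motion vector per BxB block of `future`, full search in `past`,
    sum-of-absolute-differences criterion."""
    h, w = past.shape
    mv = np.zeros((h // B, w // B, 2), dtype=int)
    for by in range(h // B):
        for bx in range(w // B):
            y, x = by * B, bx * B
            tgt = future[y:y + B, x:x + B].astype(np.int64)
            best = None
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= h - B and 0 <= xx <= w - B:
                        cost = np.abs(tgt - past[yy:yy + B, xx:xx + B]).sum()
                        if best is None or cost < best:
                            best = cost
                            mv[by, bx] = (dy, dx)
    return mv

def smooth_field(mv):
    """Regularize: component-wise median over each 3x3 block neighbourhood."""
    out = mv.copy()
    H, W, _ = mv.shape
    for by in range(H):
        for bx in range(W):
            ys = slice(max(by - 1, 0), by + 2)
            xs = slice(max(bx - 1, 0), bx + 2)
            out[by, bx] = np.median(mv[ys, xs].reshape(-1, 2), axis=0)
    return out

def interpolate(past, future, mv):
    """Bidirectional compensation at the temporal midpoint."""
    h, w = past.shape
    si = np.zeros_like(past, dtype=np.float64)
    for by in range(mv.shape[0]):
        for bx in range(mv.shape[1]):
            y, x = by * B, bx * B
            dy, dx = mv[by, bx] // 2           # half vector towards past
            yp = np.clip(y + dy, 0, h - B); xp = np.clip(x + dx, 0, w - B)
            yf = np.clip(y - dy, 0, h - B); xf = np.clip(x - dx, 0, w - B)
            si[y:y + B, x:x + B] = (past[yp:yp + B, xp:xp + B].astype(np.float64)
                                    + future[yf:yf + B, xf:xf + B]) / 2
    return si
```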
Abstract:
The advances made in channel-capacity codes, such as turbo codes and low-density parity-check (LDPC) codes, have played a major role in the emerging distributed source coding paradigm. LDPC codes can be easily adapted to new source coding strategies due to their natural representation as bipartite graphs and the use of quasi-optimal decoding algorithms such as belief propagation. This paper tackles a relevant scenario in distributed video coding: lossy source coding when multiple side information (SI) hypotheses are available at the decoder, each one correlated with the source according to a different correlation noise channel. It is thus proposed to exploit the multiple SI hypotheses through an efficient joint decoding technique with multiple LDPC syndrome decoders that exchange information to obtain coding efficiency improvements. At the decoder side, the multiple SI hypotheses are created with motion-compensated frame interpolation and fused together in a novel iterative LDPC-based Slepian-Wolf decoding algorithm. With the creation of multiple SI hypotheses and the proposed decoding algorithm, bitrate savings of up to 8.0% are obtained for similar decoded quality.
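The joint decoding algorithm itself is not given in the abstract. As a simplified sketch of the underlying idea: if each SI hypothesis is modelled as an independent binary symmetric channel (BSC) observation of a source bitplane, the per-bit log-likelihood ratios simply add across hypotheses. The paper goes further, exchanging soft information between full LDPC syndrome decoders; the helper below is only this simplified fusion step.

```python
import numpy as np

def fused_llrs(hypotheses, crossover_probs):
    """Per-bit LLRs fused across SI hypotheses, assuming each hypothesis is
    an independent BSC observation of the source bitplane (simplification).
    hypotheses: list of 0/1 numpy arrays, one per SI hypothesis.
    crossover_probs: estimated BSC crossover probability of each hypothesis."""
    total = np.zeros_like(hypotheses[0], dtype=np.float64)
    for bits, p in zip(hypotheses, crossover_probs):
        # LLR = log P(x=0|y) / P(x=1|y) for a BSC with crossover p:
        # y=0 contributes +log((1-p)/p), y=1 contributes -log((1-p)/p).
        total += (1 - 2 * bits) * np.log((1 - p) / p)
    return total
```

The fused LLRs would then initialize belief propagation in a Slepian-Wolf syndrome decoder.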
Abstract:
Video coding technologies have played a major role in the explosion of large-market digital video applications and services. In this context, the very popular MPEG-x and H.26x video coding standards adopted a predictive coding paradigm, where complex encoders exploit data redundancy and irrelevancy to 'control' much simpler decoders. This codec paradigm fits well applications and services such as digital television and video storage, where decoder complexity is critical, but does not match the requirements of emerging applications such as visual sensor networks, where encoder complexity matters more. The Slepian-Wolf and Wyner-Ziv theorems brought the possibility to develop the so-called Wyner-Ziv video codecs, following a different coding paradigm in which it is the task of the decoder, and no longer of the encoder, to (fully or partly) exploit the video redundancy. Theoretically, Wyner-Ziv video coding incurs no compression performance penalty with respect to the more traditional predictive coding paradigm (at least under certain conditions). In Wyner-Ziv video codecs, the so-called side information, a decoder estimate of the original frame to code, plays a critical role in the overall compression performance. For this reason, much research effort has been invested in the past decade to develop increasingly efficient side information creation methods. The main objective of this paper is to review and evaluate the available side information methods, after proposing a classification taxonomy to guide this review, allowing more solid conclusions to be reached and the next relevant research challenges to be better identified. After classifying the side information creation methods into four classes, notably guess, try, hint and learn, the review of the most important techniques in each class, and the evaluation of some of them, leads to the important conclusion that which side information creation method provides better rate-distortion (RD) performance depends on the amount of temporal correlation in each video sequence. It also became clear that the best available Wyner-Ziv video coding solutions are almost systematically based on the learn approach; the best solutions are already able to systematically outperform H.264/AVC Intra, and also the H.264/AVC zero-motion standard solutions for specific types of content.
Abstract:
In distributed video coding, motion estimation is typically performed at the decoder to generate the side information, increasing decoder complexity while providing low-complexity encoding in comparison with predictive video coding. Motion estimation can be performed once to create the side information, or several times to refine its quality along the decoding process. In this paper, motion estimation is performed at the decoder to generate multiple side information hypotheses, which are adaptively and dynamically combined whenever additional decoded information becomes available. The proposed iterative side information creation algorithm is inspired by video denoising filters and requires some statistics of the virtual channel between each side information hypothesis and the original data. With the proposed denoising algorithm for side information creation, an RD performance gain of up to 1.2 dB is obtained for the same bitrate.
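A minimal stand-in for the denoising-inspired combination, assuming each hypothesis equals the source plus independent zero-mean noise whose variance (the "virtual channel" statistic) has been estimated from already decoded data: the weights then follow the classical inverse-variance rule. The paper's adaptive, iterative scheme is richer; this only shows the combining step.

```python
import numpy as np

def fuse_hypotheses(hyps, noise_vars):
    """Inverse-variance weighting of SI hypotheses: the minimum-variance
    linear combiner when each hypothesis is source + independent zero-mean
    noise of known variance.
    hyps: list of HxW arrays; noise_vars: per-hypothesis variances
    (scalars or HxW maps, e.g. re-estimated per region as decoding advances)."""
    num = np.zeros_like(hyps[0], dtype=np.float64)
    den = np.zeros_like(hyps[0], dtype=np.float64)
    for y, var in zip(hyps, noise_vars):
        w = 1.0 / np.asarray(var, dtype=np.float64)  # reliable -> heavy weight
        num += w * y
        den += w
    return num / den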
Abstract:
A subadult Caretta caretta was found on 23 August 2014, ca. 16 nautical miles south of S. Miguel Island, Azores (Northeast Atlantic), with a large pelagic trawl hook inside its mouth. The individual was kept in a basin of sea water and sent by boat to Terceira Island, following instructions from the Azores Regional Government via the Environmental Authority, in order to be examined by the author and, if possible, undergo the necessary hook removal procedures. In this note, we describe the surgical procedures and how the turtle was evaluated both pre- and post-surgery.
Abstract:
Workplace aggression is a factor that shapes the interaction between individuals and their work environment and produces many undesirable outcomes, sometimes introducing heavy costs for organizations. Only through a comprehensive understanding of the genesis of workplace aggression is it possible to develop strategies and interventions to minimize its nefarious effects. The existing body of knowledge has already identified several individual, situational and contextual antecedents of workplace aggression, although this is a research area with significant gaps where many issues have still not been addressed (Dupré and Barling, 2006). According to Baron and Neuman (1998), one of these predictors is organizational change, since certain changes in the work environment (e.g., changes in management) can lead to increased aggression. This paper intends to contribute to workplace aggression research by studying its relationship with organizational change, considering the moderating role of political behaviors and organizational cynicism (Ammeter et al., 2002; Ferris et al., 2002). The literature review suggests that the mediators and moderators intervening in the relationships between workplace aggression and its antecedents are understudied topics. James (2005) sustains that organizational politics is related to cynicism, and the empirical research of Miranda (2008) identified leadership political behavior as an antecedent of cynicism, but these two variables had not yet been investigated with regard to their relationship with workplace aggression. This investigation was operationalized using several scales, including the Organizational Change Questionnaire on climate of change, processes and readiness (Bouckenooghe, Devos and Broeck, 2009), a Workplace Aggression Scale (Vicente and D'Oliveira, 2008, 2009, 2010), an Organizational Cynicism Scale (Wanous, Reichers and Austin, 1994) and a Political Behavior Questionnaire (Yukl and Falbe, 1990). Participants representing a wide variety of jobs across many organizations were surveyed. The results of the study and their implications are presented and discussed, as is the study's contribution regarding organizational change practices in organizations.
Abstract:
Most research work on WSNs has focused on protocols or on specific applications. There is a clear lack of easy, ready-to-use WSN technologies and tools for planning, implementing, testing and commissioning WSN systems in an integrated fashion. While there exists a plethora of papers about network planning and deployment methodologies, to the best of our knowledge none of them helps the designer to match coverage requirements with network performance evaluation. In this paper we aim at filling this gap by presenting a unified toolset, i.e., a framework able to provide a global picture of the system, from network deployment planning to system test and validation. This toolset has been designed to back up the EMMON WSN system architecture for large-scale, dense, real-time embedded monitoring. It includes network deployment planning, worst-case analysis and dimensioning, protocol simulation, and automatic remote programming and hardware testing tools. This toolset was paramount in validating the system architecture through DEMMON1, the first EMMON demonstrator, i.e., a 300+ node test-bed which is, to the best of our knowledge, the largest single-site WSN test-bed in Europe to date.
Employment of the side product of biodiesel production in the formation of surfactant-like molecules
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa in fulfilment of the requirements for the degree of Master in Chemical and Biochemical Engineering.
Abstract:
Beam-like structures are among the most common components in engineering practice, and single-side damage is often encountered. In this study, single-side damage in a free-free beam is analysed numerically with three different finite element models, namely solid, shell and beam models, to assess their performance in simulating real structures. As in the experiment, damage is introduced into one side of the beam, and natural frequencies are extracted from the simulations and compared with experimental and analytical results. Mode shapes are also analysed with the modal assurance criterion. The simulation results reveal a good performance of all three models in extracting natural frequencies; in the intact state, the solid model performs better than the shell model, which in turn performs better than the beam model. For damaged states, the natural frequencies obtained from the solid model are more sensitive to damage severity than those from the shell model, while the shell and beam models perform similarly in distinguishing damage. The main contribution of this paper is the comparison of the three finite element models against experimental data as well as analytical solutions; the finite element results show relatively good performance.
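For reference, the modal assurance criterion (MAC) used to compare mode shapes is standard; a small Python sketch for real-valued mode shapes (helper names are ours):

```python
import numpy as np

def mac(phi_a, phi_b):
    """Modal Assurance Criterion between two mode-shape vectors:
    MAC = |phi_a^T phi_b|^2 / ((phi_a^T phi_a)(phi_b^T phi_b)).
    1.0 means identical shape up to scaling, 0.0 means orthogonal."""
    phi_a = np.asarray(phi_a, dtype=np.float64).ravel()
    phi_b = np.asarray(phi_b, dtype=np.float64).ravel()
    return np.dot(phi_a, phi_b) ** 2 / (np.dot(phi_a, phi_a) * np.dot(phi_b, phi_b))

def mac_matrix(modes_a, modes_b):
    """Cross-MAC matrix between two mode sets (columns are mode shapes),
    e.g. solid-model modes vs experimental modes."""
    return np.array([[mac(a, b) for b in modes_b.T] for a in modes_a.T])
```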
Abstract:
In this study, energy production for autonomous underwater vehicles is investigated. This project is part of a larger project called TURTLE. The autonomous vehicles perform oceanic research at the seabed, for which they are intended to be kept operational underwater for several months. In order to fulfil this long-term underwater requirement, powerful batteries are combined with "micro-scale" energy production on the spot. This work aims to develop a system that generates power up to a maximum of 30 W. The energy harvesting structure consists basically of a turbine combined with a generator and low-power electronics that adjust the achieved voltage to the required battery charger voltage. Every component is examined separately, so that an optimum can be defined for each and subsequently an overall optimum. Different design parameters, e.g. number of blades, solidity ratio and cross-section area, are compared for different turbines in order to determine the most feasible type. Further, a generator is chosen by studying how flux distributions might be adjusted to low velocities, and how cogging torque can be excluded by adapted designs. Low-power electronics are configured to convert and stabilize heavily varying three-phase voltages into a constant, rectified voltage usable for battery storage. Different component parameters, such as maximum power and torque, are matched to increase the overall power generation, and an overall maximum power point is set up to achieve maximum power flow at the load side. Due to, among other factors, the typical low current velocities of about 0.1 to 0.5 m/s and the construction limits of the prototype, the vast range of components is restricted to only a few that could be used. Hence, a helical turbine is combined in direct drive with a coreless-stator axial-flux permanent-magnet generator, whose output voltage is subsequently adjusted by a rectifier, impedance matching unit, upconverter circuit and an overall control unit that regulates the different component parameters. All these electronics are combined in a closed-loop design involving positive feedback signals. Furthermore, a theoretical configuration for the TURTLE vehicle is described in this work and a solution that might be implemented is proposed, for which several design tests can be performed in a future study.
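A back-of-envelope check on the stated operating point, using the standard hydrokinetic power relation P = 0.5 * rho * A * v^3 * Cp; the seawater density and power coefficient below are textbook assumptions, not values from the thesis:

```python
# Hydrokinetic power check for the stated 30 W at 0.1-0.5 m/s currents.
RHO = 1025.0   # kg/m^3, typical seawater density (assumption)
CP = 0.35      # assumed turbine power coefficient (Betz limit ~0.593)

def power(area_m2, v_ms):
    """Mechanical power (W) extracted by a turbine of swept area `area_m2`
    from a current of speed `v_ms`."""
    return 0.5 * RHO * area_m2 * v_ms ** 3 * CP

# Swept area needed for 30 W at the top of the velocity range (0.5 m/s):
area = 30.0 / (0.5 * RHO * 0.5 ** 3 * CP)            # ~1.34 m^2
print(f"area for 30 W at 0.5 m/s: {area:.2f} m^2")
print(f"same turbine at 0.1 m/s: {power(area, 0.1):.2f} W")  # ~0.24 W
```

The cubic dependence on current speed is what makes the 0.1 m/s end of the stated range the binding constraint on harvested power.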
Abstract:
This paper describes the TURTLE project, which aims to develop sub-systems with the capability of long-term deep-sea presence. Our motivation is to produce new energy-efficient robotic ascent and descent technologies to be incorporated in robotic vehicles used by civil and military stakeholders for underwater operations. TURTLE contributes to sustainable presence and operations on the sea bottom. Long-term presence on the sea bottom, and increased awareness and operation capabilities underwater, in particular in the benthic deeps, can only be achieved through the use of advanced technologies, leading to automation of operations, reduced operational costs and increased efficiency of human activity.