16 results for side coupling

in Repositório Científico do Instituto Politécnico de Lisboa - Portugal


Relevance:

20.00%

Publisher:

Abstract:

The Wyner-Ziv video coding (WZVC) rate-distortion performance is highly dependent on the quality of the side information, an estimate of the original frame created at the decoder. This paper characterizes the WZVC efficiency when motion compensated frame interpolation (MCFI) techniques are used to generate the side information, a difficult problem in WZVC especially because the decoder only has some decoded reference frames available. The proposed WZVC compression efficiency rate model relates the power spectral density of the estimation error to the accuracy of the MCFI motion field. Some interesting conclusions may then be derived regarding the impact of the motion field smoothness, and of its correlation with the true motion trajectories, on the compression performance.
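As a rough illustration of the quantity the rate model is built on, the sketch below estimates the empirical power spectral density of the side-information estimation error from a pair of hypothetical frames; the frame data, noise level and function name are placeholders, not the paper's analytical model.

```python
import numpy as np

def estimation_error_psd(original, side_info):
    """Empirical 2-D power spectral density of the SI estimation error.

    `original` and `side_info` are hypothetical luma frames (2-D arrays);
    this only measures the error spectrum, it is not the analytical WZVC
    rate model proposed in the paper.
    """
    error = original.astype(np.float64) - side_info.astype(np.float64)
    spectrum = np.fft.fft2(error)
    return np.fft.fftshift(np.abs(spectrum) ** 2 / error.size)

# Toy usage: a random frame and a noisy stand-in for its MCFI estimate.
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
side_info = frame + rng.normal(0.0, 4.0, size=frame.shape)
print(estimation_error_psd(frame, side_info).mean())
```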

Relevance:

20.00%

Publisher:

Abstract:

Novel alternating copolymers comprising biscalix[4]arene-p-phenylene ethynylene and m-phenylene ethynylene units (CALIX-m-PPE) were synthesized using the Sonogashira-Hagihara cross-coupling polymerization. Good isolated yields (60-80%) were achieved for the polymers, which show Mₙ ranging from 1.4 × 10⁴ to 5.1 × 10⁴ g mol⁻¹ (gel permeation chromatography analysis), depending on the specific polymerization conditions. The structural analysis of CALIX-m-PPE was performed by ¹H, ¹³C, ¹³C-¹H heteronuclear single quantum correlation (HSQC), ¹³C-¹H heteronuclear multiple bond correlation (HMBC), correlation spectroscopy (COSY), and nuclear Overhauser effect spectroscopy (NOESY), in addition to Fourier transform infrared spectroscopy and microanalysis, allowing its full characterization. Depending on the reaction setup, variable amounts (16-45%) of diyne units were found in the polymers, although their photophysical properties are essentially the same. It is demonstrated that CALIX-m-PPE does not form ground- or excited-state interchain interactions owing to the highly crowded environment of the main chain imparted by both calix[4]arene side units, which behave as insulators inhibiting main-chain π-π stacking. It was also found that the luminescent properties of CALIX-m-PPE are markedly different from those of an all-p-linked phenylene ethynylene copolymer (CALIX-p-PPE) previously reported. The unexpected appearance of a low-energy emission band at 426 nm, in addition to the locally excited-state emission (365 nm), together with a quite low fluorescence quantum yield (Φ = 0.02) and double-exponential decay dynamics, led to the formulation of an intramolecular exciplex as the new emissive species.

Relevance:

20.00%

Publisher:

Abstract:

One of the most efficient approaches to generate the side information (SI) in distributed video codecs is motion compensated frame interpolation, where the current frame is estimated based on past and future reference frames. However, this approach leads to significant spatial and temporal variations in the correlation noise between the source at the encoder and the SI at the decoder. In such a scenario, it would be useful to design an architecture where the SI can be generated more robustly at the block level, avoiding the creation of SI frame regions with lower correlation, which are largely responsible for some coding efficiency losses. In this paper, a flexible framework to generate SI at the block level in two modes is presented: the first mode corresponds to a motion compensated interpolation (MCI) technique, while the second mode corresponds to a motion compensated quality enhancement (MCQE) technique, where a low-quality Intra block sent by the encoder is used to generate the SI by performing motion estimation with the help of the reference frames. For blocks where MCI produces SI with lower correlation, the novel MCQE mode can be advantageous from the rate-distortion point of view overall, even if some rate has to be invested in the low-quality Intra coded blocks. The overall solution is evaluated in terms of RD performance, with improvements up to 2 dB, especially for high motion video sequences and long Group of Pictures (GOP) sizes.
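A minimal sketch of the block-level idea, assuming a hypothetical per-block correlation score and threshold; the paper's actual MCI/MCQE decision criteria and the Intra refinement step are not reproduced here.

```python
def choose_si_mode(blocks, correlation_estimate, threshold=0.8):
    """Toy per-block mode selection between MCI and MCQE.

    `correlation_estimate` maps a block index to a [0, 1] score predicting how
    well interpolation-based SI will match the source; both the score and the
    threshold are hypothetical stand-ins for the paper's criteria.
    """
    modes = {}
    for idx in blocks:
        if correlation_estimate(idx) >= threshold:
            modes[idx] = "MCI"    # interpolate SI from reference frames only
        else:
            modes[idx] = "MCQE"   # spend rate on a low-quality Intra block,
                                  # then refine it by motion estimation
    return modes

# Toy usage: four blocks with a decreasing correlation score.
print(choose_si_mode(range(4), lambda i: 1.0 - 0.3 * i))
```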

Relevance:

20.00%

Publisher:

Abstract:

Recently, several distributed video coding (DVC) solutions based on the distributed source coding (DSC) paradigm have appeared in the literature. Wyner-Ziv (WZ) video coding, a particular case of DVC where side information is made available at the decoder, enables a flexible distribution of the computational complexity between the encoder and the decoder, promising to fulfill novel requirements from applications such as video surveillance, sensor networks and mobile camera phones. The quality of the side information at the decoder has a critical role in determining the WZ video coding rate-distortion (RD) performance, notably in raising it to a level as close as possible to the RD performance of standard predictive video coding schemes. Towards this target, efficient motion search algorithms for powerful frame interpolation are much needed at the decoder. In this paper, the RD performance of a Wyner-Ziv video codec is improved by using novel, advanced motion compensated frame interpolation techniques to generate the side information. The development of this type of side information estimator is a difficult problem in WZ video coding, especially because the decoder only has some decoded reference frames available. Based on the regularization of the motion field, novel side information creation techniques are proposed in this paper, along with a new frame interpolation framework able to generate higher quality side information at the decoder. To illustrate the RD performance improvements, this novel side information creation framework has been integrated into a transform-domain turbo-coding-based Wyner-Ziv video codec. Experimental results show that the novel side information creation solution leads to better RD performance than available state-of-the-art side information estimators, with improvements up to 2 dB; moreover, it outperforms H.264/AVC Intra by up to 3 dB with a lower encoding complexity.
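The sketch below shows plain bidirectional motion compensated frame interpolation, i.e. averaging the two references shifted half-way along a block motion field under a symmetric-motion assumption; the block size, the integer motion field and the boundary clamping are illustrative choices, and the paper's motion-field regularization is not implemented.

```python
import numpy as np

def mcfi_interpolate(prev_frame, next_frame, motion_field, block=8):
    """Create SI for the middle frame by averaging the two reference frames
    shifted half-way along each block's motion vector (symmetric motion).

    `motion_field[by, bx]` holds an integer (dy, dx) vector pointing from the
    previous to the next reference for that block; all values are toy inputs.
    """
    h, w = prev_frame.shape
    si = np.zeros((h, w), dtype=np.float64)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            dy, dx = motion_field[by // block, bx // block]
            # Clamp the half-shifted block origins so they stay in the frame.
            py, px = np.clip(by - dy // 2, 0, h - block), np.clip(bx - dx // 2, 0, w - block)
            ny, nx = np.clip(by + dy // 2, 0, h - block), np.clip(bx + dx // 2, 0, w - block)
            si[by:by + block, bx:bx + block] = 0.5 * (
                prev_frame[py:py + block, px:px + block]
                + next_frame[ny:ny + block, nx:nx + block])
    return si

# Toy usage: the "next" frame is the "previous" one shifted two pixels right,
# so the ideal middle frame is a one-pixel shift.
rng = np.random.default_rng(1)
f_prev = rng.random((32, 32))
f_next = np.roll(f_prev, 2, axis=1)
motion = np.tile(np.array([0, 2]), (4, 4, 1))   # one (dy, dx) per 8x8 block
print(np.abs(mcfi_interpolate(f_prev, f_next, motion) - np.roll(f_prev, 1, axis=1)).mean())
```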

Relevance:

20.00%

Publisher:

Abstract:

The advances made in channel-capacity codes, such as turbo codes and low-density parity-check (LDPC) codes, have played a major role in the emerging distributed source coding paradigm. LDPC codes can be easily adapted to new source coding strategies due to their natural representation as bipartite graphs and the use of quasi-optimal decoding algorithms, such as belief propagation. This paper tackles a relevant scenario in distributed video coding: lossy source coding when multiple side information (SI) hypotheses are available at the decoder, each one correlated with the source according to a different correlation noise channel. Thus, it is proposed to exploit multiple SI hypotheses through an efficient joint decoding technique with multiple LDPC syndrome decoders that exchange information to obtain coding efficiency improvements. At the decoder side, the multiple SI hypotheses are created with motion compensated frame interpolation and fused together in a novel iterative LDPC-based Slepian-Wolf decoding algorithm. With the creation of multiple SI hypotheses and the proposed decoding algorithm, bitrate savings of up to 8.0% are obtained for similar decoded quality.
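As a simplified illustration of why several SI hypotheses help a Slepian-Wolf decoder, the sketch below combines per-bit soft evidence from independent Gaussian correlation-noise channels into a single LLR; the Gaussian and independence assumptions and the BPSK mapping are simplifications, not the iterative multi-decoder algorithm proposed in the paper.

```python
import numpy as np

def fused_llr(si_hypotheses, noise_stddevs):
    """Combine soft evidence from several SI hypotheses into one LLR per bit.

    Each hypothesis is a real-valued observation of a BPSK-mapped source bit
    (+1/-1) through its own Gaussian correlation-noise channel; assuming the
    channels are independent, the per-bit LLRs simply add.
    """
    llr = np.zeros_like(si_hypotheses[0], dtype=np.float64)
    for hyp, sigma in zip(si_hypotheses, noise_stddevs):
        llr += 2.0 * hyp / sigma ** 2          # Gaussian-channel LLR term
    return llr

# Toy usage: two SI hypotheses of different quality for the same 16 bits.
rng = np.random.default_rng(2)
bits = rng.choice([-1.0, 1.0], size=16)
hypotheses = [bits + rng.normal(0.0, s, size=16) for s in (0.4, 0.9)]
print(np.mean(np.sign(fused_llr(hypotheses, (0.4, 0.9))) == bits))
```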

Relevance:

20.00%

Publisher:

Abstract:

The salient feature of liquid crystal elastomers and networks is the strong coupling between orientational order and mechanical strain. Orientational order can be changed by a wide variety of stimuli, including the presence of moisture. Changes in the orientation of constituents give rise to stresses and strains, which result in changes in sample shape. We have utilized this effect to build a soft cellulose-based motor driven by humidity. The motor consists of a circular loop of cellulose film, which passes over two wheels. When humid air is present near one of the wheels on one side of the film, with drier air elsewhere, rotation of the wheels results. As the wheels rotate, the humid film dries. The motor runs as long as the difference in humidity is maintained. Our cellulose liquid crystal motor thus extracts mechanical work from a difference in humidity.

Relevance:

20.00%

Publisher:

Abstract:

In this work a new probabilistic and dynamical approach to an extension of the Gompertz law is proposed. A generalized family of probability density functions, designated Beta*(p, q), which is proportional to the right-hand side of the Tsoularis-Wallace model, is studied. In particular, for p = 2, the investigation is extended to extreme value models of Weibull and Fréchet type. These models, described by differential equations, are proportional to the hyper-Gompertz growth model. It is proved that the Beta*(2, q) densities are a power of betas mixture, and that their dynamics are determined by a non-linear coupling of probabilities. The dynamical analysis is performed using techniques of symbolic dynamics, and the system complexity is measured using topological entropy. Generally, the natural history of a malignant tumour is reflected through bifurcation diagrams, in which regions of regression, stability, bifurcation, chaos and terminus are identified.
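For reference, the Tsoularis-Wallace generalized growth law in its commonly cited form is sketched below; the parameter names r, K, α, β, γ follow the standard statement of that model and are not necessarily the paper's notation.

```latex
% Generalized (Tsoularis-Wallace) growth law; the Beta*(p,q) densities
% discussed above are stated to be proportional to its right-hand side.
\[
  \frac{dN}{dt} \;=\; r\,N^{\alpha}\left[\,1-\left(\frac{N}{K}\right)^{\beta}\right]^{\gamma},
  \qquad 0 \le N \le K .
\]
```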

Relevance:

20.00%

Publisher:

Abstract:

Video coding technologies have played a major role in the explosion of large-market digital video applications and services. In this context, the very popular MPEG-x and H.26x video coding standards adopted a predictive coding paradigm, where complex encoders exploit the data redundancy and irrelevancy to 'control' much simpler decoders. This codec paradigm fits well applications and services such as digital television and video storage, where the decoder complexity is critical, but does not match well the requirements of emerging applications such as visual sensor networks, where the encoder complexity is more critical. The Slepian-Wolf and Wyner-Ziv theorems brought the possibility to develop the so-called Wyner-Ziv video codecs, following a different coding paradigm where it is the task of the decoder, and no longer of the encoder, to (fully or partly) exploit the video redundancy. Theoretically, Wyner-Ziv video coding does not incur any compression performance penalty relative to the more traditional predictive coding paradigm (at least under certain conditions). In the context of Wyner-Ziv video codecs, the so-called side information, which is a decoder estimate of the original frame to code, plays a critical role in the overall compression performance. For this reason, much research effort has been invested in the past decade to develop increasingly more efficient side information creation methods. The main objective of this paper is to review and evaluate the available side information methods after proposing a classification taxonomy to guide this review, allowing more solid conclusions to be reached and the next relevant research challenges to be better identified. After classifying the side information creation methods into four classes, notably guess, try, hint and learn, the review of the most important techniques in each class, and the evaluation of some of them, leads to the important conclusion that which side information creation method provides the better rate-distortion (RD) performance depends on the amount of temporal correlation in each video sequence. It also became clear that the best available Wyner-Ziv video coding solutions are almost systematically based on the learn approach. The best solutions are already able to systematically outperform H.264/AVC Intra, and also the H.264/AVC zero-motion standard solution for specific types of content. (C) 2013 Elsevier B.V. All rights reserved.

Relevance:

20.00%

Publisher:

Abstract:

In distributed video coding, motion estimation is typically performed at the decoder to generate the side information, increasing the decoder complexity while providing low complexity encoding in comparison with predictive video coding. Motion estimation can be performed once to create the side information, or several times to refine the side information quality along the decoding process. In this paper, motion estimation is performed at the decoder side to generate multiple side information hypotheses, which are adaptively and dynamically combined whenever additional decoded information is available. The proposed iterative side information creation algorithm is inspired by video denoising filters and requires some statistics of the virtual channel between each side information hypothesis and the original data. With the proposed denoising algorithm for side information creation, an RD performance gain of up to 1.2 dB is obtained for the same bitrate.
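A minimal sketch of the fusion idea, assuming the virtual channel of each hypothesis is modelled as additive noise with a known variance; plain inverse-variance weighting stands in for the paper's denoising-inspired, iteratively refined combination.

```python
import numpy as np

def combine_si_hypotheses(hypotheses, noise_variances):
    """Pixel-wise fusion of SI hypotheses weighted by the inverse of each
    hypothesis' estimated virtual-channel noise variance.

    The variances would have to be estimated from already decoded data; the
    weighting scheme here is a simplification of the iterative algorithm.
    """
    weights = np.array([1.0 / v for v in noise_variances])
    weights /= weights.sum()
    stack = np.stack(hypotheses).astype(np.float64)
    return np.tensordot(weights, stack, axes=1)

# Toy usage: two noisy hypotheses of a 16x16 "frame", one clearly better.
rng = np.random.default_rng(3)
frame = rng.random((16, 16))
hyps = [frame + rng.normal(0.0, s, size=frame.shape) for s in (0.05, 0.15)]
fused = combine_si_hypotheses(hyps, [0.05 ** 2, 0.15 ** 2])
print(float(np.mean((fused - frame) ** 2)))
```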

Relevance:

20.00%

Publisher:

Abstract:

This paper is devoted to the synchronization of a dynamical system defined by two different coupling versions of two identical piecewise linear bimodal maps. We consider both local and global studies, using different tools such as the natural transversal Lyapunov exponent, Lyapunov functions, eigenvalues and eigenvectors, and numerical simulations. We obtain theoretical results for the existence of synchronization over a range of the coupling parameter. We characterize the synchronization manifold as an attractor and measure the synchronization speed. In one coupling version, we give a necessary and sufficient condition for synchronization. We study the basins of synchronization and show that, depending upon the type of coupling, they can have very different shapes and are not necessarily constituted by the whole phase space; in some cases, they can be riddled.
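The sketch below estimates a transversal Lyapunov exponent numerically for two identical maps under symmetric linear coupling; the particular bimodal map, the coupling form and the parameter values are hypothetical stand-ins for the families studied in the paper (a negative exponent indicates a locally attracting synchronization manifold).

```python
import numpy as np

def bimodal_map(x):
    """Hypothetical piecewise linear bimodal map on [0, 1): two tents of
    slope +/-4 (a stand-in for the family studied in the paper)."""
    y = (2.0 * x) % 1.0
    return 1.0 - abs(1.0 - 2.0 * y)

def map_slope(x):
    """|f'(x)| for the map above: constant 4 away from the breakpoints."""
    return 4.0

def transversal_lyapunov(c, n_iter=5000, x0=0.1234):
    """Estimate the transversal Lyapunov exponent for the coupled system
    x' = f(x) + c*(f(y) - f(x)), y' = f(y) + c*(f(x) - f(y)): near the
    diagonal a transversal perturbation is multiplied by (1 - 2c)*f'(x)
    at each step, so the exponent is averaged along an orbit of f."""
    x, acc = x0, 0.0
    for _ in range(n_iter):
        acc += np.log(abs(1.0 - 2.0 * c) * map_slope(x))
        x = bimodal_map(x)
    return acc / n_iter

for c in (0.1, 0.45):
    print(c, transversal_lyapunov(c))   # negative value => local synchronization
```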

Relevance:

20.00%

Publisher:

Abstract:

Workplace aggression is a factor that shapes the interaction between individuals and their work environment and produces many undesirable outcomes, sometimes introducing heavy costs for organizations. Only through a comprehensive understanding of the genesis of workplace aggression is it possible to develop strategies and interventions to minimize its nefarious effects. The existing body of knowledge has already identified several individual, situational and contextual antecedents of workplace aggression, although this is a research area where significant gaps occur and many issues have still not been addressed (Dupré and Barling, 2006). According to Baron and Neuman (1998), one of these predictors is organizational change, since certain changes in the work environment (e.g., changes in management) can lead to increased aggression. This paper intends to contribute to workplace aggression research by studying its relationship with organizational change, considering a moderating role of political behaviors and organizational cynicism (Ammeter et al., 2002; Ferris et al., 2002). The literature review suggests that the mediators and moderators that intervene in the relationships between workplace aggression and its antecedents are understudied topics. James (2005) sustains that organizational politics is related to cynicism, and the empirical research of Miranda (2008) has identified leadership political behavior as an antecedent of cynicism, but these two variables have not yet been investigated regarding their relationship with workplace aggression. This investigation was operationalized using several scales, including the Organizational Change Questionnaire - climate of change, processes, and readiness (Bouckenooghe, Devos and Broeck, 2009), a Workplace Aggression Scale (Vicente and D’Oliveira, 2008, 2009, 2010), an Organizational Cynicism Scale (Wanous, Reichers and Austin, 1994) and a Political Behavior Questionnaire (Yukl and Falbe, 1990). Participants representing a wide variety of jobs across many organizations were surveyed. The results of the study and their implications will be presented and discussed. The study's contribution regarding organizational change practices in organizations is also discussed.

Relevance:

20.00%

Publisher:

Abstract:

This paper presents a distributed predictive control methodology for indoor thermal comfort that optimizes the consumption of a limited shared energy resource, using an integrated demand-side management approach that involves a power price auction and an appliance load allocation scheme. The control objective of each subsystem (house or building) is to minimize the energy cost while maintaining the indoor temperature inside comfort limits. In a distributed coordinated multi-agent ecosystem, each house or building control agent achieves its objectives while sharing the available energy among the others through the introduction of particular coupling constraints in its underlying optimization problem. Coordination is maintained by a daily green energy auction, bringing in a demand-side management approach. The implemented distributed MPC algorithm is also described and validated with simulation studies.
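A minimal sketch of the coupled optimization behind such a scheme, with hypothetical first-order thermal models, comfort bands, prices and a shared power limit; the coupled problem is solved here in one centralized step with cvxpy, so the auction and the distributed negotiation described in the paper are not reproduced.

```python
import cvxpy as cp
import numpy as np

# Hypothetical data: H houses, horizon of N steps, first-order thermal model
# T[k+1] = a*T[k] + b*u[k] + (1 - a)*T_out, and a shared power budget P_max.
N, H = 3, 2
a, b = 0.9, 0.5
T_out, T_min, T_max = 5.0, 20.0, 23.0
P_max = 12.0
price = np.array([0.2, 0.5, 0.3])            # energy price per step

u = cp.Variable((H, N), nonneg=True)         # heating power of each house
T = cp.Variable((H, N + 1))                  # indoor temperatures
constraints = [T[:, 0] == 19.0]              # common initial temperature
for k in range(N):
    constraints += [
        T[:, k + 1] == a * T[:, k] + b * u[:, k] + (1 - a) * T_out,
        cp.sum(u[:, k]) <= P_max,            # shared-resource coupling constraint
    ]
constraints += [T[:, 1:] >= T_min, T[:, 1:] <= T_max]

problem = cp.Problem(cp.Minimize(cp.sum(u @ price)), constraints)
problem.solve()
print(problem.value, u.value.round(2))
```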

Relevance:

20.00%

Publisher:

Abstract:

Supramolecular chirality was achieved in solutions and thin films of a calixarene-containing chiral arylene ethynylene copolymer. The observed chiroptical activity, which is primarily allied with the formation of aggregates of high molecular weight polymer chains, is the result of a combination of intrachain and interchain effects. The former arises from the adoption of an induced helix sense by the polymer main chain, while the latter comes from the exciton coupling of aromatic backbone transitions. The co-existence of bulky bis-calix[4]arene units and chiral side chains on the polymer skeleton prevents efficient π-stacking of neighbouring chains, keeping the chiral assembly highly emissive. In contrast, for a model polymer lacking calixarene moieties, the chiroptical activity is dominated by strong interchain exciton couplings as a result of more favourable packing of the polymer chains, leading to a marked decrease of photoluminescence in the aggregate state. The enantiomeric recognition abilities of both polymers towards (R)- and (S)-α-methylbenzylamine were examined. It was found that a significant enantiodiscrimination is exhibited by the calixarene-based polymer in the aggregate state.

Relevance:

20.00%

Publisher:

Abstract:

We look for minimal chiral sets of fermions beyond the standard model that are anomaly-free and, simultaneously, vectorlike with respect to color SU(3) and electromagnetic U(1). We then study whether the addition of such particles to the standard model particle content allows for the unification of gauge couplings at a high energy scale, above 5.0 × 10¹⁵ GeV, so as to be safely consistent with proton decay bounds. The possibility of having unification at the string scale is also considered. Inspired by grand unified theories, we also search for minimal chiral fermion sets that belong to SU(5) multiplets, restricted to representations up to dimension 50. It is shown that, in various cases, it is possible to achieve gauge unification provided that some of the extra fermions decouple at relatively high intermediate scales.
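For orientation, the sketch below runs the inverse gauge couplings at one loop with the Standard Model beta coefficients in SU(5) (GUT) normalization for hypercharge; extra chiral fermions of the kind discussed above would simply add their contributions to the coefficients above their (hypothetical) decoupling scale. The input values are approximate textbook numbers, not the paper's fits.

```python
import numpy as np

# One-loop running: alpha_i^{-1}(mu) = alpha_i^{-1}(M_Z) - b_i/(2*pi) * ln(mu/M_Z).
MZ = 91.19                                   # GeV
alpha_inv_MZ = np.array([59.0, 29.6, 8.5])   # U(1)_Y (GUT-normalized), SU(2)_L, SU(3)_c
b_SM = np.array([41.0 / 10.0, -19.0 / 6.0, -7.0])

def alpha_inv(mu, b=b_SM):
    """Inverse couplings at scale mu (GeV) for one-loop coefficients b."""
    return alpha_inv_MZ - b / (2.0 * np.pi) * np.log(mu / MZ)

# With the pure SM content the three values do not meet, which is the
# mismatch that the extra fermion sets are meant to cure.
for mu in (1e13, 1e15, 1e16):
    print(f"mu = {mu:.0e} GeV  ->  1/alpha_i = {alpha_inv(mu).round(1)}")
```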

Relevance:

20.00%

Publisher:

Abstract:

It has been pointed out recently that current experiments still allow for a two Higgs doublet model where the hbb̄ coupling (k_D m_b/v) is negative, a sign opposite to that of the Standard Model. Due to the importance of delayed decoupling in the hH⁺H⁻ coupling, improved h→γγ measurements will have a strong impact on this issue. For the same reason, measurements of, or even bounds on, h→Zγ are potentially interesting. In this article, we revisit this problem, highlighting the crucial importance of h→VV, which can be understood with simple arguments. We show that the impacts on k_D < 0 models of both h→bb̄ and h→τ⁺τ⁻ are very sensitive to the input values for the gluon fusion production mechanism; in contrast, h→γγ and h→Zγ are not. We also inquire whether the search for h→Zγ and its interplay with h→γγ will impact the sign of the hbb̄ coupling. Finally, we study these issues in the context of the flipped two Higgs doublet model.
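For orientation, the tree-level scaling factors of the lighter CP-even Higgs couplings in a type II two Higgs doublet model are sketched below in one common sign convention for the mixing angle α (the flipped model is usually quoted with the same quark-sector factors); the wrong-sign regime discussed above corresponds to k_D close to −1 with k_V close to +1.

```latex
% Tree-level coupling scaling factors of h relative to the SM, writing
% k_V, k_U, k_D as in the abstract; conventions for the sign of alpha
% differ between references, so the overall sign of k_D is convention
% dependent.
\[
  k_V = \sin(\beta-\alpha), \qquad
  k_U = \frac{\cos\alpha}{\sin\beta}, \qquad
  k_D = -\frac{\sin\alpha}{\cos\beta}.
\]
```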