964 results for norm-based coding


Relevance:

80.00%

Publisher:

Abstract:

Obesity, among both children and adults, is a growing public health epidemic. One area of interest relates to how and why obesity is developing at such a rapid pace among children. Despite a broad consensus about how controlling feeding practices relate to child food consumption and obesity prevalence, much less is known about how non-controlling feeding practices, including modeling, relate to child food consumption. This study investigates how different forms of parent modeling (no modeling, simple modeling, and enthusiastic modeling) and parent adiposity relate to child food consumption, food preferences, and behaviors towards foods. Participants in this experimental study were 65 children (25 boys and 40 girls) aged 3-9 and their parents. Each parent was trained on how to perform their assigned modeling behavior towards a food identified as neutral (neither liked nor disliked) by their child during a pre-session food-rating task. Parents performed their assigned modeling behavior when cued during a ten-minute observation period with their child. Child food consumption (pieces eaten, grams eaten, and calories consumed) was measured, and food behaviors (positive comments toward food and food requests) were recorded by event-based coding. After the session, parents self-reported their height and weight, and children completed a post-session food-rating task. Results indicate that parent modeling (both simple and enthusiastic forms) did not significantly relate to child food consumption, food preferences, or food requests. However, enthusiastic modeling significantly increased the number of positive food comments made by children. Children's food consumption in response to parent modeling did not differ based on parent obesity status. The practical implications of this study are discussed, along with its strengths and limitations, and directions for future research.

Relevance:

80.00%

Publisher:

Abstract:

In this paper, a fully automatic goal-oriented hp-adaptive finite element strategy for open-region electromagnetic problems (radiation and scattering) is presented. The methodology leads to exponential rates of convergence in terms of an upper bound of a user-prescribed quantity of interest. Thus, the adaptivity may be guided to provide an optimal error, not globally for the field in the whole finite element domain, but for specific parameters of engineering interest. For instance, the error in the numerical computation of the S-parameters of an antenna array, the field radiated by an antenna, or the Radar Cross Section in given directions can be minimized. The efficiency of the approach is illustrated with several numerical simulations on two-dimensional problem domains. Results include a comparison with the previously developed energy-norm-based hp-adaptivity.

Relevance:

80.00%

Publisher:

Abstract:

Parliamentary debates about the resolution of the EU debt crisis seem to provide a good example for the frequently assumed "politicization" of European governance. Against this background, the paper argues that in order to make sense of this assumption, a clearer differentiation of three thematic focal points of controversies is needed: the assessment of government leadership, the debate between competing party ideologies within the left/right dimension, and the assessment of supranational integration. Applying this threefold distinction, the paper uses a theory of differential Europeanization to explain differences in the thematic structure of debates in the Austrian Nationalrat, the British House of Commons, and the German Bundestag. Empirically, the paper is based on data gained from the computer-based coding of plenary debates about the resolution of the European debt crisis between 2010 and 2011.

Relevance:

80.00%

Publisher:

Abstract:

Translation norms are understood as internalized behavioural constraints which embody the values shared by a community. In the two main contributions it is argued that all decisions in the translation process are primarily governed by norms. The explanatory power of a norm-based theory of translation is then critically reflected upon in the debates and the responses.

Relevance:

80.00%

Publisher:

Abstract:

The North Atlantic Treaty Organization (NATO) is a product of the Cold War through which its members organized their military forces for the purpose of collective defense against the common threat of Soviet-backed aggression. Employing the terminology of regime theory, the creation of NATO can be viewed as the introduction of an international security regime. Throughout the Cold War, NATO member states preserved their commitment to mutual defense while increasingly engaging in activities aimed at overcoming the division of Europe and promoting regional stability. The end of the Cold War has served as the catalyst for a new period of regime change as the Alliance introduced elements of a collective security regime by expanding its mandate to address new security challenges and reorganizing both its political and military organizational structures.
This research involves an interpretive analysis of NATO's evolution applying ideal theoretical constructs associated with distinct approaches to regime analysis. The process of regime change is investigated over several periods throughout the history of the Alliance in an effort to understand the Alliance's changing commitment to collective security. This research involves a review of regime theory literature, consisting of an examination of primary source documentation, including official documents and treaties, as well as a review of numerous secondary sources. This review is organized around a typology of power-based, organization-based, and norm-based approaches to regime analysis. This dissertation argues that the process of regime change within NATO is best understood by examining factors associated with multiple theoretical constructs. Relevant factors provide insights into the practice of collective security among NATO member states within Europe, while accounting for the inability of the NATO allies to build on the experience gained within Europe to play a more central role in operations outside of this region.
This research contributes to a greater understanding of the nature of international regimes and the process of regime change, while offering recommendations aimed at increasing NATO's viability as a source of greater security and more meaningful international cooperation.

Relevance:

80.00%

Publisher:

Abstract:

The performance of multiuser dual-hop relaying over mixed radio frequency/free-space optical (RF/FSO) links is investigated. RF links are used for the simultaneous data transmission from m single-antenna sources to the relay, which is equipped with n ≥ m receive antennas and a photo-aperture transmitter. The relay operates under the decode-and-forward protocol and utilizes the popular ordered V-BLAST technique to successively decode each user's transmitted stream. A common norm-based ordering approach is adopted, where the streams are decoded in an ascending order. After the V-BLAST decoding, the relay retransmits the initial information to the destination, which is equipped with a photo-detector, via a point-to-point FSO link in m consecutive timeslots. Analytical expressions for the end-to-end outage probability and average symbol error probability of each user are derived. Some engineering insights are manifested, such as the diversity order, the impact of the pointing error displacement on the FSO link, and the severity of the turbulence-induced channel fading.
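
The ascending norm-based ordering mentioned above can be sketched in a few lines. The following is a minimal NumPy illustration, not the paper's implementation: streams are ranked by the Euclidean norm of their channel columns and decoded weakest-first with zero-forcing successive interference cancellation (the ZF variant and the hard-decision grid are assumptions for the sketch).

```python
import numpy as np

def norm_based_order(H):
    """Stream indices sorted by ascending channel-column norm."""
    norms = np.linalg.norm(H, axis=0)
    return np.argsort(norms)  # ascending: weakest stream decoded first

def vblast_zf_sic(H, y, order):
    """Zero-forcing successive interference cancellation sketch.
    Decodes streams in `order`, subtracting each decoded contribution."""
    m = H.shape[1]
    x_hat = np.zeros(m)
    residual = y.astype(float).copy()
    remaining = list(order)
    for k in order:
        # ZF estimate using the pseudo-inverse restricted to undecoded columns
        Hr = H[:, remaining]
        est = np.linalg.pinv(Hr) @ residual
        x_hat[k] = np.round(est[remaining.index(k)])  # hard decision on a PAM grid
        residual -= H[:, k] * x_hat[k]                # cancel the decoded stream
        remaining.remove(k)
    return x_hat

# noiseless demo: 3 single-antenna sources, relay with 4 receive antennas
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 3))
x = np.array([1.0, -1.0, 1.0])  # BPSK symbols
y = H @ x
order = norm_based_order(H)
print(vblast_zf_sic(H, y, order))  # recovers x in the noiseless case
```

With noise present, the decoding order determines how errors propagate through the cancellation stages, which is where the ordering policy affects the per-user error probabilities analyzed in the paper.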

Relevance:

80.00%

Publisher:

Abstract:

Compressed covariance sensing using quadratic samplers is gaining increasing interest in recent literature. The covariance matrix often plays the role of a sufficient statistic in many signal and information processing tasks. However, owing to the large dimension of the data, it may become necessary to obtain a compressed sketch of the high-dimensional covariance matrix to reduce the associated storage and communication costs. Nested sampling has been proposed in the past as an efficient sub-Nyquist sampling strategy that enables perfect reconstruction of the autocorrelation sequence of Wide-Sense Stationary (WSS) signals, as though it was sampled at the Nyquist rate. The key idea behind nested sampling is to exploit properties of the difference set that naturally arises in the quadratic measurement model associated with covariance compression. In this thesis, we focus on developing novel versions of nested sampling for low-rank Toeplitz covariance estimation and phase retrieval, where the latter problem finds many applications in high-resolution optical imaging, X-ray crystallography, and molecular imaging. The problem of low-rank compressive Toeplitz covariance estimation is first shown to be fundamentally related to that of line spectrum recovery. In the absence of noise, this connection can be exploited to develop a particular kind of sampler called the Generalized Nested Sampler (GNS), which can achieve optimal compression rates. In the presence of bounded noise, we develop a regularization-free algorithm that provably leads to stable recovery of the high-dimensional Toeplitz matrix from its order-wise minimal sketch acquired using a GNS. In contrast to existing TV-norm- and nuclear-norm-based reconstruction algorithms, our technique does not use any tuning parameters, which can be of great practical value.
The idea of nested sampling also finds a surprising use in the problem of phase retrieval, which has been of great interest in recent times for its convex formulation via PhaseLift. By using another modified version of nested sampling, namely the Partial Nested Fourier Sampler (PNFS), we show that with probability one, it is possible to achieve a certain conjectured lower bound on the necessary measurement size. Moreover, for sparse data, an l1-minimization-based algorithm is proposed that can lead to stable phase retrieval using an order-wise minimal number of measurements.
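
The hole-free difference set that nested sampling exploits is easy to verify numerically. The sketch below uses the standard two-level nested geometry (an assumption for illustration; the thesis's GNS and PNFS variants differ in detail) and checks that every lag up to N2(N1+1)-1 appears among the pairwise differences.

```python
import numpy as np

def nested_positions(N1, N2):
    """Two-level nested array: dense inner level plus sparse outer level."""
    inner = np.arange(1, N1 + 1)
    outer = (N1 + 1) * np.arange(1, N2 + 1)
    return np.concatenate([inner, outer])

def difference_set(pos):
    """All pairwise differences n_i - n_j, i.e. the lags recoverable from
    the quadratic (covariance) measurement model."""
    return np.unique(pos[:, None] - pos[None, :])

N1, N2 = 3, 4
pos = nested_positions(N1, N2)  # only N1 + N2 = 7 physical samples
lags = difference_set(pos)
# every lag in [-(N2*(N1+1)-1), N2*(N1+1)-1] is present: no holes
hole_free = np.arange(-(N2 * (N1 + 1) - 1), N2 * (N1 + 1))
print(np.all(np.isin(hole_free, lags)))
```

This is the sense in which 7 physical samples behave like a 16-lag autocorrelation measurement, which is the source of the compression gains discussed above.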

Relevance:

40.00%

Publisher:

Abstract:

A data-driven background dataset refinement technique was recently proposed for SVM-based speaker verification. This method selects a refined SVM background dataset from a set of candidate impostor examples after individually ranking examples by their relevance. This paper extends this technique to the refinement of the T-norm dataset for SVM-based speaker verification. The independent refinement of the background and T-norm datasets provides a means of investigating the sensitivity of SVM-based speaker verification performance to the selection of each of these datasets. Using refined datasets provided improvements of 13% in min. DCF and 9% in EER over the full set of impostor examples on the 2006 SRE corpus, with the majority of these gains due to refinement of the T-norm dataset. Similar trends were observed for the unseen data of the NIST 2008 SRE.
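
For readers unfamiliar with T-norm, the normalization itself is a one-liner: a trial score is standardized against the scores the same test segment produces on a cohort of impostor models, which is why the choice of T-norm dataset matters so much. The cohort values below are made up for illustration.

```python
import numpy as np

def t_norm(raw_score, cohort_scores):
    """Test-normalization: standardize a trial score against the scores
    the same test utterance obtains on a cohort of T-norm impostor models."""
    mu = np.mean(cohort_scores)
    sigma = np.std(cohort_scores)
    return (raw_score - mu) / sigma

# hypothetical cohort scores from a refined T-norm model set
cohort = np.array([-1.2, -0.8, -1.0, -0.6, -1.4])
print(t_norm(0.5, cohort))  # positive and large: well separated from impostors
```

Refining the T-norm dataset changes the cohort, and hence the mean and variance every trial score is normalized by, which is how it shifts the min. DCF and EER operating points.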

Relevance:

40.00%

Publisher:

Abstract:

Recently, Li and Xia proposed a transmission scheme for wireless relay networks based on the Alamouti space-time code and orthogonal frequency division multiplexing to combat the effect of timing errors at the relay nodes. This transmission scheme is remarkably simple and achieves a diversity order of two for any number of relays. Motivated by its simplicity, this scheme is extended to a more general transmission scheme that can achieve full cooperative diversity for any number of relays. The conditions on the distributed space-time block code (DSTBC) structure that admit its application in the proposed transmission scheme are identified, and it is pointed out that the recently proposed full-diversity four-group decodable DSTBCs from precoded coordinate-interleaved orthogonal designs and extended Clifford algebras satisfy these conditions. It is then shown how differential encoding at the source can be combined with the proposed transmission scheme to arrive at a new transmission scheme that can achieve full cooperative diversity in asynchronous wireless relay networks with no channel information and no timing-error knowledge at the destination node. Finally, four-group decodable distributed differential space-time block codes applicable in this new transmission scheme are also provided for any number of relays that is a power of two.
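
The Alamouti building block referred to above is small enough to show directly. The sketch verifies the orthogonality property X^H X = (|s1|^2 + |s2|^2) I, which is what gives the scheme its diversity order of two with simple symbol-wise decoding.

```python
import numpy as np

def alamouti(s1, s2):
    """Alamouti 2x2 space-time block: rows are timeslots, columns are
    the two transmitting antennas/relays."""
    return np.array([[s1, s2],
                     [-np.conj(s2), np.conj(s1)]])

X = alamouti(1 + 1j, 1 - 1j)
# orthogonal columns: X^H X is a scaled identity, enabling decoupled decoding
print(X.conj().T @ X)
```

The distributed and differential extensions discussed in the abstract generalize this orthogonal-block structure to more relays while preserving low-complexity (four-group) decodability.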

Relevance:

40.00%

Publisher:

Abstract:

Non-uniform sampling of a signal is formulated as an optimization problem which minimizes the reconstruction signal error. Dynamic programming (DP) has been used to solve this problem efficiently for a finite-duration signal. Further, the optimum samples are quantized to realize a speech coder. The quantizer and the DP-based optimum search for non-uniform samples (DP-NUS) can be combined in a closed-loop manner, which provides a distinct advantage over the open-loop formulation. The DP-NUS formulation provides useful control over the trade-off between bitrate and performance (reconstruction error). It is shown that a 5-10 dB SNR improvement is possible using DP-NUS compared to the extrema sampling approach. In addition, the closed-loop DP-NUS gives a 4-5 dB improvement in reconstruction error.
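
A toy version of the DP search can illustrate the idea. The sketch below assumes linear interpolation between retained samples as the reconstruction rule and a squared-error cost (assumptions for illustration only; the paper's exact cost and reconstruction may differ), and selects K sample locations by dynamic programming.

```python
import numpy as np

def segment_cost(x, i, j):
    """Squared error of linearly interpolating x between samples at i and j."""
    if j - i < 2:
        return 0.0
    t = np.arange(i, j + 1)
    interp = x[i] + (x[j] - x[i]) * (t - i) / (j - i)
    return float(np.sum((x[i:j + 1] - interp) ** 2))

def dp_nus(x, K):
    """Pick K sample locations (first and last fixed) minimizing total
    linear-interpolation reconstruction error via dynamic programming."""
    N = len(x)
    INF = float('inf')
    cost = [[INF] * N for _ in range(K)]
    back = [[-1] * N for _ in range(K)]
    cost[0][0] = 0.0
    for k in range(1, K):
        for j in range(k, N):
            for i in range(k - 1, j):
                c = cost[k - 1][i] + segment_cost(x, i, j)
                if c < cost[k][j]:
                    cost[k][j], back[k][j] = c, i
    # backtrack from the last sample
    idx, k, j = [], K - 1, N - 1
    while j >= 0 and k >= 0:
        idx.append(j)
        j = back[k][j]
        k -= 1
    return idx[::-1], cost[K - 1][N - 1]

# toy signal: two linear pieces joined at n=5, so 3 samples reconstruct it exactly
x = np.concatenate([np.linspace(0, 5, 6), np.linspace(5, 0, 6)[1:]])
samples, err = dp_nus(x, 3)
print(samples, err)  # [0, 5, 10] 0.0
```

In a closed-loop coder, `segment_cost` would be evaluated on quantized sample values, which is the coupling between quantizer and DP search that the abstract credits with the extra 4-5 dB.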

Relevance:

40.00%

Publisher:

Abstract:

The introduction of processor-based instruments in power systems is resulting in rapid growth of the measured data volume. The present practice in most utilities is to store only some of the important data in a retrievable fashion for a limited period. Subsequently, even this data is either deleted or stored in back-up devices. The investigations presented here explore the application of lossless data compression techniques for the purpose of archiving all the operational data, so that they can be put to more effective use. Four arithmetic coding methods, suitably modified for handling power system steady-state operational data, are proposed here. The performance of the proposed methods is evaluated using actual data pertaining to the Southern Regional Grid of India. (C) 2012 Elsevier Ltd. All rights reserved.
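
The premise that steady-state operational data compresses well losslessly can be illustrated with a toy entropy calculation. The sketch below does not reproduce any of the four proposed arithmetic coding methods; it only shows, on synthetic slowly drifting data, the bits-per-symbol limit that an ideal arithmetic coder approaches, and how a simple differencing step (an assumption for illustration, not the paper's preprocessing) lowers it.

```python
from collections import Counter
import numpy as np

def empirical_entropy_bits(symbols):
    """Shannon entropy in bits/symbol: the rate an ideal arithmetic coder
    with a matched model approaches."""
    counts = Counter(symbols)
    n = len(symbols)
    p = np.array([c / n for c in counts.values()])
    return float(-np.sum(p * np.log2(p)))

# toy stand-in for steady-state operational data: slowly drifting measurements
rng = np.random.default_rng(1)
raw = np.cumsum(rng.integers(-1, 2, size=10_000)) + 1000
diffs = np.diff(raw)  # differencing exposes the low-entropy structure
print(empirical_entropy_bits(raw.tolist()), empirical_entropy_bits(diffs.tolist()))
```

Because consecutive steady-state measurements change little, the symbol distribution after differencing is heavily concentrated, which is the property any arithmetic-coding-based archiver exploits.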

Relevance:

40.00%

Publisher:

Abstract:

Representing images and videos in the form of compact codes has emerged as an important research interest in the vision community, in the context of web-scale image/video search. The recently proposed Vector of Locally Aggregated Descriptors (VLAD) has been shown to outperform existing retrieval techniques while giving the desired compact representation. VLAD aggregates the local features of an image in the feature space. In this paper, we propose to represent the local features extracted from an image as sparse codes over an over-complete dictionary, which is obtained by the K-SVD dictionary training algorithm. The proposed VLAD aggregates the residuals in the space of these sparse codes to obtain a compact representation for the image. Experiments are performed on the 'Holidays' database using SIFT features. The performance of the proposed method is compared with that of the original VLAD. A 4% improvement in mean average precision (mAP) indicates the better retrieval performance of the proposed sparse-coding-based VLAD.
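
As a baseline for the sparse-coding variant described above, the original VLAD aggregation is easy to sketch: assign each local descriptor to its nearest codebook center, sum the residuals per center, then power- and L2-normalize. The data below are random stand-ins for SIFT descriptors and a trained codebook.

```python
import numpy as np

def vlad(descriptors, centers):
    """Classic VLAD: nearest-center assignment, per-center residual sums,
    then power normalization and L2 normalization."""
    K, d = centers.shape
    v = np.zeros((K, d))
    dists = np.linalg.norm(descriptors[:, None, :] - centers[None, :, :], axis=2)
    assign = np.argmin(dists, axis=1)      # nearest-center assignment
    for k in range(K):
        sel = descriptors[assign == k]
        if len(sel):
            v[k] = np.sum(sel - centers[k], axis=0)  # residual aggregation
    v = v.ravel()
    v = np.sign(v) * np.sqrt(np.abs(v))    # power normalization
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

rng = np.random.default_rng(0)
desc = rng.normal(size=(50, 8))    # stand-in for an image's SIFT descriptors
centers = rng.normal(size=(4, 8))  # stand-in for a trained codebook/dictionary
code = vlad(desc, centers)
print(code.shape)  # (32,): one compact K*d code per image
```

The paper's variant replaces the hard nearest-center assignment with sparse codes over a K-SVD dictionary and aggregates residuals in that sparse-code space instead.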