933 results for Minkowski Sum of Sets
Abstract:
In this paper, we consider the problem of finding a spectrum hole of a specified bandwidth in a given wide band of interest. We propose a new, simple and easily implementable sub-Nyquist sampling scheme for signal acquisition and a spectrum hole search algorithm that exploits sparsity in the primary spectral occupancy in the frequency domain by testing a group of adjacent subbands in a single test. The sampling scheme deliberately introduces aliasing during signal acquisition, resulting in a signal that is the sum of signals from adjacent sub-bands. Energy-based hypothesis tests are used to provide an occupancy decision over the group of subbands, and this forms the basis of the proposed algorithm to find contiguous spectrum holes. We extend this framework to a multi-stage sensing algorithm that can be employed in a variety of spectrum sensing scenarios, including non-contiguous spectrum hole search. Further, we provide the analytical means to optimize the hypothesis tests with respect to the detection thresholds, number of samples and group size to minimize the detection delay under a given error rate constraint. Depending on the sparsity and SNR, the proposed algorithms can lead to significantly lower detection delays compared to a conventional bin-by-bin energy detection scheme; the latter is in fact a special case of the group test when the group size is set to 1. We validate our analytical results via Monte Carlo simulations.
Abstract:
Our work is motivated by impromptu (or ``as-you-go'') deployment of wireless relay nodes along a path, a need that arises in many situations. In this paper, the path is modeled as starting at the origin (where the data sink, e.g., the control center, is located), and evolving randomly over a lattice in the positive quadrant. A person walks along the path deploying relay nodes as he goes. At each step, the path can, randomly, either continue in the same direction, take a turn, or come to an end, at which point a data source (e.g., a sensor) has to be placed that will send packets to the data sink. A decision has to be made at each step whether or not to place a wireless relay node. Assuming that the packet generation rate at the source is very low, and simple link-by-link scheduling, we consider the problem of sequential relay placement so as to minimize the expectation of an end-to-end cost metric (a linear combination of the sum of convex hop costs and the number of relays placed). This impromptu relay placement problem is formulated as a total cost Markov decision process. First, we derive the optimal policy in terms of an optimal placement set and show that this set is characterized by a boundary (with respect to the position of the last placed relay) beyond which it is optimal to place the next relay. Next, based on a simpler one-step-look-ahead characterization of the optimal policy, we propose an algorithm that provably converges to the optimal placement set in a finite number of steps and that is faster than value iteration. We show by simulations that the distance-threshold-based heuristic usually assumed in the literature is close to optimal, provided that the threshold distance is carefully chosen. (C) 2014 Elsevier B.V. All rights reserved.
Abstract:
In a double slit interference experiment, the wave function at the screen with both slits open is not exactly equal to the sum of the wave functions with the slits individually open one at a time. The three scenarios represent three different boundary conditions and as such, the superposition principle should not be applicable. However, most well-known text books in quantum mechanics implicitly and/or explicitly use this assumption that is only approximately true. In our present study, we have used the Feynman path integral formalism to quantify contributions from nonclassical paths in quantum interference experiments that provide a measurable deviation from a naive application of the superposition principle. A direct experimental demonstration for the existence of these nonclassical paths is difficult to present. We find that contributions from such paths can be significant and we propose simple three-slit interference experiments to directly confirm their existence.
Abstract:
This paper investigates the use of adaptive group testing to find a spectrum hole of a specified bandwidth in a given wideband of interest. We propose a group testing-based spectrum hole search algorithm that exploits sparsity in the primary spectral occupancy by testing a group of adjacent subbands in a single test. This is enabled by a simple and easily implementable sub-Nyquist sampling scheme for signal acquisition by the cognitive radios (CRs). The sampling scheme deliberately introduces aliasing during signal acquisition, resulting in a signal that is the sum of signals from adjacent subbands. Energy-based hypothesis tests are used to provide an occupancy decision over the group of subbands, and this forms the basis of the proposed algorithm to find contiguous spectrum holes of a specified bandwidth. We extend this framework to a multistage sensing algorithm that can be employed in a variety of spectrum sensing scenarios, including noncontiguous spectrum hole search. Furthermore, we provide the analytical means to optimize the group tests with respect to the detection thresholds, number of samples, group size, and number of stages to minimize the detection delay under a given error probability constraint. Our analysis allows one to identify the sparsity and SNR regimes where group testing can lead to significantly lower detection delays compared with a conventional bin-by-bin energy detection scheme; the latter is, in fact, a special case of the group test when the group size is set to 1 bin. We validate our analytical results via Monte Carlo simulations.
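A minimal numerical sketch of the core idea described above (all names and numbers here are illustrative, not from the paper): deliberate aliasing sums a group of adjacent subbands, and a single energy test on that sum declares the whole group vacant or occupied.

```python
import numpy as np

rng = np.random.default_rng(0)

def group_energy_test(subband_signals, threshold):
    """Energy detector on the aliased sum of a group of subbands.

    Deliberate aliasing during sub-Nyquist acquisition yields the sum of
    the subband signals, so one test covers the whole group at once.
    """
    aliased = subband_signals.sum(axis=0)        # aliasing adds adjacent subbands
    energy = np.mean(aliased ** 2)               # average energy of the sum
    return energy > threshold                    # True -> group flagged occupied

n, g = 1000, 4                                   # samples per test, group size
noise = rng.normal(0.0, 1.0, (g, n))             # unit-variance noise per subband

vacant = noise                                   # all subbands empty: energy ~ g
occupied = noise.copy()
occupied[0] += 2.0 * np.sin(0.3 * np.arange(n))  # one primary user, power ~ 2

threshold = 4.8                                  # between g and g + signal power
print(group_energy_test(vacant, threshold))      # expect False: a spectrum hole
print(group_energy_test(occupied, threshold))    # expect True: group occupied
```

Setting the group size g to 1 recovers the conventional bin-by-bin energy detector, matching the abstract's remark that the latter is a special case of the group test.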
Abstract:
The fluctuations of a Markovian jump process with one or more unidirectional transitions, where the transition rate R_ij > 0 but R_ji = 0, are studied. We find that such systems satisfy an integral fluctuation theorem. The fluctuating quantity satisfying the theorem is a sum of the entropy produced in the bidirectional transitions and a dynamical contribution, which depends on the residence times in the states connected by the unidirectional transitions. The convergence of the integral fluctuation theorem is studied numerically and found to show the same qualitative features as systems exhibiting microreversibility.
Abstract:
We consider the basic bidirectional relaying problem, in which two users in a wireless network wish to exchange messages through an intermediate relay node. In the compute-and-forward strategy, the relay computes a function of the two messages using the naturally occurring sum of symbols simultaneously transmitted by user nodes in a Gaussian multiple-access channel (MAC), and the computed function value is forwarded to the user nodes in an ensuing broadcast phase. In this paper, we study the problem under an additional security constraint, which requires that each user's message be kept secure from the relay. We consider two types of security constraints: 1) perfect secrecy, in which the MAC channel output seen by the relay is independent of each user's message and 2) strong secrecy, which is a form of asymptotic independence. We propose a coding scheme based on nested lattices, the main feature of which is that given a pair of nested lattices that satisfy certain goodness properties, we can explicitly specify probability distributions for randomization at the encoders to achieve the desired security criteria. In particular, our coding scheme guarantees perfect or strong secrecy even in the absence of channel noise. The noise in the channel only affects reliability of computation at the relay, and for Gaussian noise, we derive achievable rates for reliable and secure computation. We also present an application of our methods to the multihop line network in which a source needs to transmit messages to a destination through a series of intermediate relays.
Abstract:
We present an analysis of the rate of sign changes in the discrete Fourier spectrum of a sequence. The sign changes of either the real or imaginary parts of the spectrum are considered, and the rate of sign changes is termed as the spectral zero-crossing rate (SZCR). We show that SZCR carries information pertaining to the locations of transients within the temporal observation window. We show duality with temporal zero-crossing rate analysis by expressing the spectrum of a signal as a sum of sinusoids with random phases. This extension leads to spectral-domain iterative filtering approaches to stabilize the spectral zero-crossing rate and to improve upon the location estimates. The localization properties are compared with group-delay-based localization metrics in a stylized signal setting well-known in speech processing literature. We show applications to epoch estimation in voiced speech signals using the SZCR on the integrated linear prediction residue. The performance of the SZCR-based epoch localization technique is competitive with the state-of-the-art epoch estimation techniques that are based on average pitch period.
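A toy sketch of the SZCR itself (illustrative, not the paper's full pipeline): for an impulse at delay d in an N-point window, the real part of the DFT is cos(2πdk/N), so its rate of sign changes grows with d and thus localizes the transient, as the abstract describes.

```python
import numpy as np

def szcr(x):
    """Spectral zero-crossing rate: fraction of sign changes in the real
    part of the DFT (the imaginary part could be used instead)."""
    re = np.real(np.fft.fft(x))
    s = np.sign(re)
    s[s == 0] = 1.0                        # break exact-zero ties consistently
    return np.mean(s[1:] != s[:-1])

N = 255                                    # odd length avoids exact spectral zeros
early = np.zeros(N); early[5] = 1.0        # transient near the window start
late = np.zeros(N); late[60] = 1.0         # transient deeper into the window

print(szcr(early), szcr(late))             # roughly 2*5/N versus 2*60/N
```

The later transient produces a faster-oscillating real spectrum and hence a higher SZCR, which is the duality with temporal zero-crossing analysis the abstract points to.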
Abstract:
The von Neumann entropy of a generic quantum state is not unique unless the state can be uniquely decomposed as a sum of extremal or pure states. Therefore one reaches the remarkable possibility that there may be many entropies for a given state. We show that this happens if the GNS representation (of the algebra of observables in some quantum state) is reducible, and some representations in the decomposition occur with non-trivial degeneracy. This ambiguity in entropy, which can occur at zero temperature, can often be traced to a gauge symmetry emergent from the non-trivial topological character of the configuration space of the underlying system. We also establish the analogue of an H-theorem for this entropy by showing that its evolution is Markovian, determined by a stochastic matrix. After demonstrating this entropy ambiguity for the simple example of the algebra of 2 x 2 matrices, we argue that the degeneracies in the GNS representation can be interpreted as an emergent broken gauge symmetry, and play an important role in the analysis of emergent entropy due to non-Abelian anomalies. We work out the simplest situation with such non-Abelian symmetry, that of an ethylene molecule.
Abstract:
The 3-Hitting Set problem involves a family F of subsets, each of size at most three, of a universe U. The goal is to find a subset of U of the smallest possible size that intersects every set in F. The version of the problem with parity constraints asks for a subset S of size at most k that, in addition to being a hitting set, also satisfies certain parity constraints on the sizes of the intersections of S with each set in the family F. In particular, an odd (even) set is a hitting set that hits every set at either one or three (two) elements, and a perfect code is a hitting set that intersects every set at exactly one element. These questions are of fundamental interest in many contexts for general set systems. Just as for Hitting Set, we find these questions to be interesting for the case of families consisting of sets of size at most three. In this work, we initiate an algorithmic study of these problems in this special case, focusing on a parameterized analysis. We show, for each problem, efficient fixed-parameter tractable algorithms using search trees that are tailor-made to the constraints in question, and also polynomial kernels using sunflower-like arguments in a manner that accounts for equivalence under the additional parity constraints.
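A brute-force reference implementation of one of these variants (the instance below is made up for illustration): a perfect code must hit every set in the family exactly once.

```python
from itertools import combinations

def perfect_code(universe, family, k):
    """Smallest set S with |S| <= k hitting every member of `family`
    exactly once, by exhaustive search. Fine for tiny instances; the
    paper's contribution is parameterized algorithms that avoid this
    exponential enumeration."""
    for size in range(k + 1):
        for S in combinations(sorted(universe), size):
            s = set(S)
            if all(len(s & f) == 1 for f in family):
                return s
    return None                            # no perfect code of size <= k

U = {1, 2, 3, 4, 5}
F = [{1, 2, 3}, {3, 4}, {2, 5}]
print(perfect_code(U, F, 3))               # {2, 4}: hits each set exactly once
```

Replacing the `== 1` test with `in (1, 3)` or `== 2` gives the odd-set and even-set variants from the abstract.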
Abstract:
We find the sum of series of the form ∑_{i=1}^{∞} f(i)/i^r for some special functions f. The above series is a generalization of the Riemann zeta function. In particular, we take f as some values of Hurwitz zeta functions, harmonic numbers, and combinations of both. These generalize some of the results given in Mezo's paper (2013). We use multiple zeta theory to prove all results. The series sums we have obtained are in terms of Bernoulli numbers and powers of π.
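A concrete classical instance of such a series (Euler's result, given here for illustration and not necessarily among the paper's new evaluations): taking f(i) = H_i, the i-th harmonic number, and r = 2 gives ∑_{i=1}^{∞} H_i/i^2 = 2ζ(3), which a partial sum confirms numerically.

```python
# Check Euler's sum_{n>=1} H_n / n^2 = 2*zeta(3) by partial summation.
M = 100_000
H = 0.0                      # running harmonic number H_n
total = 0.0
for n in range(1, M + 1):
    H += 1.0 / n
    total += H / n**2        # term H_n / n^2

zeta3 = sum(1.0 / k**3 for k in range(1, 10_000))   # zeta(3) partial sum
print(total, 2 * zeta3)      # both close to 2.404; tail error ~ log(M)/M
```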
Abstract:
Climate change in response to a change in external forcing can be understood in terms of fast response to the imposed forcing and slow feedback associated with surface temperature change. Previous studies have investigated the characteristics of fast response and slow feedback for different forcing agents. Here we examine to what extent that fast response and slow feedback derived from time-mean results of climate model simulations can be used to infer total climate change. To achieve this goal, we develop a multivariate regression model of climate change, in which the change in a climate variable is represented by a linear combination of its sensitivity to CO2 forcing, solar forcing, and change in global mean surface temperature. We derive the parameters of the regression model using time-mean results from a set of HadCM3L climate model step-forcing simulations, and then use the regression model to emulate HadCM3L-simulated transient climate change. Our results show that the regression model emulates well HadCM3L-simulated temporal evolution and spatial distribution of climate change, including surface temperature, precipitation, runoff, soil moisture, cloudiness, and radiative fluxes under transient CO2 and/or solar forcing scenarios. Our findings suggest that temporal and spatial patterns of total change for the climate variables considered here can be represented well by the sum of fast response and slow feedback. Furthermore, by using a simple 1-D heat-diffusion climate model, we show that the temporal and spatial characteristics of climate change under transient forcing scenarios can be emulated well using information from step-forcing simulations alone.
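The multivariate regression model described above can be sketched as follows (all names and numbers are assumptions for illustration, not HadCM3L output): each variable's change is fit as a linear combination of its sensitivities to CO2 forcing, solar forcing, and global-mean surface warming.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
X = np.column_stack([
    rng.uniform(0, 4, n),      # CO2 forcing term (W m^-2), assumed range
    rng.uniform(0, 2, n),      # solar forcing term (W m^-2), assumed range
    rng.uniform(0, 3, n),      # global-mean surface warming (K), assumed range
])
true_sens = np.array([0.5, 0.3, 1.2])          # made-up fast/fast/slow sensitivities
y = X @ true_sens + rng.normal(0, 0.05, n)     # synthetic "simulated" change

coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # recover the sensitivities
print(coef)                                    # close to [0.5, 0.3, 1.2]
```

In the paper's setting, the three regressors come from step-forcing simulations, and the fitted sensitivities are then reused to emulate transient-forcing runs.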
Abstract:
It is known that all the vector bundles of the title can be obtained by holomorphic induction from representations of a certain parabolic group on finite-dimensional inner product spaces. The representations, and the induced bundles, have composition series with irreducible factors. We write down an equivariant constant coefficient differential operator that intertwines the bundle with the direct sum of its irreducible factors. As an application, we show that in the case of the closed unit ball in C^n, all homogeneous n-tuples of Cowen-Douglas operators are similar to direct sums of certain basic n-tuples. (c) 2015 Academie des sciences. Published by Elsevier Masson SAS. All rights reserved.
Abstract:
We perceive objects as containing a variety of attributes: local features, relations between features, internal details, and global properties. But we know little about how they combine. Here, we report a remarkably simple additive rule that governs how these diverse object attributes combine in vision. The perceived dissimilarity between two objects was accurately explained as a sum of (a) spatially tuned local contour-matching processes modulated by part decomposition; (b) differences in internal details, such as texture; (c) differences in emergent attributes, such as symmetry; and (d) differences in global properties, such as orientation or overall configuration of parts. Our results elucidate an enduring question in object vision by showing that the whole object is not a sum of its parts but a sum of its many attributes.
Abstract:
In this paper, a new phenomenological theory with strain gradient effects is proposed to account for the size dependence of plastic deformation at micro- and submicro-length scales. The theory fits within the framework of general couple stress theory, and three rotational degrees of freedom ω_i are introduced in addition to the conventional three translational degrees of freedom μ_i. ω_i is called micro-rotation and is the sum of the material rotation and the particles' relative rotation. When the new theory is applied to crack tip fields or indentation problems, the stretch gradient is accounted for through a new hardening law. The key features of the theory are that the rotation gradient influences the material character through the interaction between the Cauchy stresses and the couple stresses, and that the stretch gradient term is represented as an internal variable that increases the tangent modulus. In fact, the present strain gradient theory is the combination of the strain gradient theory proposed by Chen and Wang (Int. J. Plast., in press) and the hardening law given by Chen and Wang (Acta Mater. 48 (2000a) 3997). In this paper we focus on the finite element method to investigate material fracture in an elastic-power-law hardening solid. With remotely imposed classical K fields, the full field solutions are obtained numerically. It is found that the size of the strain gradient dominance zone is characterized by the intrinsic material length l_1. Outside the strain gradient dominance zone, the computed stress field tends to the classical plasticity field and then to the K field. The singularity of the stresses ahead of the crack tip is higher than that of the classical field and tends to the square root singularity, which has important consequences for crack growth in materials by decohesion at the atomic scale. (C) 2002 Elsevier Science Ltd. All rights reserved.
Abstract:
The emotional climate is not the simple sum of individual emotions but a collective affect generated by how individuals interact with one another as collective responses to their economic, political, and social conditions (de Rivera, 2014). In the Argentine context, insecurity has been the main perceived social problem in recent years. Against this background, a study was conducted with the aim of analyzing the emotional climate, the perception of insecurity, and fear of crime together with other associated psychosocial factors, and of exploring differential perceptual profiles according to ideological self-placement. The purposive sample consisted of 516 university students. The results reveal a negative emotional climate (anger and hopelessness), low institutional trust, anomic frustration, and a high perception of insecurity. Differences emerge when participants are compared by ideological self-placement: the further to the left participants place themselves, the more positive their perception of the emotional climate, reporting greater security and less hopelessness and anger. They also exhibit less fear of crime, less concern about insecurity, and a lower perceived probability of victimization. In contrast, those who place themselves ideologically on the right show higher levels of anomic frustration, and institutional trust (or distrust) varies with ideological placement depending on the institution. Finally, the heteroperception of insecurity is greater than the self-perception, giving rise to defensive mechanisms such as the illusion of invulnerability, which entail greater risk.