960 results for Averaging Theorem


Relevance:

10.00%

Publisher:

Abstract:

Automated image segmentation techniques are useful tools in biological image analysis and are an essential step in tracking applications. Typically, snakes or active contours are used for segmentation, and they evolve under the influence of certain internal and external forces. Recently, a new class of shape-specific active contours has been introduced, known as Snakuscules and Ovuscules. These contours use a pair of concentric circles or ellipses as the shape templates, and the optimization is carried out by maximizing a contrast function between the outer and inner templates. In this paper, we present a unified approach to the formulation and optimization of Snakuscules and Ovuscules by considering a specific form of affine transformation acting on a pair of concentric circles. We show how the parameters of the affine transformation may be optimized to generate either Snakuscules or Ovuscules. Our approach allows for a unified formulation and relies only on generic regularization terms rather than shape-specific regularization functions. We show how the calculation of the partial derivatives can be made efficient thanks to Green's theorem. Results on synthetic as well as real data are presented.
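As a rough illustration of the contrast-maximization idea behind such templates (a hypothetical sketch, not the formulation proposed in the paper), the Python fragment below evaluates a Snakuscule-style energy, the mean intensity over an outer annulus minus the mean intensity over the inner disk, and locates a blob by a coarse search over the circle parameters; the function name, the annulus ratio rho and the toy image are all made up.

```python
# Hypothetical sketch, not the paper's formulation: a Snakuscule-style
# contrast energy for a pair of concentric circles on a grayscale image.
# The energy is the mean intensity over the outer annulus minus the mean
# intensity over the inner disk; maximizing it over (cx, cy, r) pulls the
# template onto a dark blob surrounded by a brighter background.
import numpy as np

def snakuscule_energy(image, cx, cy, r, rho=2.0):
    """Mean intensity over the outer annulus (r..rho*r) minus the inner disk (0..r)."""
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    d2 = (xx - cx) ** 2 + (yy - cy) ** 2
    inner = d2 <= r ** 2
    outer = (d2 > r ** 2) & (d2 <= (rho * r) ** 2)
    if not inner.any() or not outer.any():
        return -np.inf
    return image[outer].mean() - image[inner].mean()

# Toy usage: locate a dark disk on a bright background by a coarse grid search.
img = np.ones((64, 64))
yy, xx = np.mgrid[0:64, 0:64]
img[(xx - 30) ** 2 + (yy - 34) ** 2 <= 8 ** 2] = 0.0

best = max(((snakuscule_energy(img, cx, cy, r), cx, cy, r)
            for cx in range(8, 56, 2)
            for cy in range(8, 56, 2)
            for r in (6, 8, 10)), key=lambda t: t[0])
print("best (energy, cx, cy, r):", best)
```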

Relevance:

10.00%

Publisher:

Abstract:

The classical approach to A/D conversion has been uniform sampling, where we get perfect reconstruction for bandlimited signals by satisfying the Nyquist sampling theorem. We propose a non-uniform sampling scheme based on level crossing (LC) time information. We show stable reconstruction of bandpass signals with the correct scale factor, and hence a unique reconstruction, from only the non-uniform time information. For reconstruction from the level crossings we use sparse-reconstruction-based optimization, constraining the bandpass signal to be sparse in its frequency content. Whereas the literature resorts to an overdetermined system of equations, we use an underdetermined system along with a sparse reconstruction formulation. We obtain a reconstruction SNR > 20 dB and perfect support recovery with probability close to 1 in the noiseless case, and with lower probability in the noisy case. Randomly picking level crossings from different levels over the same limited signal duration, for the same amount of information, is seen to be advantageous for reconstruction.
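A minimal sketch of the kind of underdetermined, frequency-sparse reconstruction described above, assuming random sample instants in place of genuine level-crossing times and a generic greedy solver (orthogonal matching pursuit) rather than the paper's formulation; all sizes and names are illustrative.

```python
# Illustrative sketch (assumptions, not the paper's algorithm): recover a
# frequency-sparse signal from non-uniform sample times by solving an
# underdetermined system with orthogonal matching pursuit. The dictionary
# columns are complex exponentials evaluated at the irregular times.
import numpy as np

rng = np.random.default_rng(0)
N = 256                      # frequency-grid size
K = 3                        # sparsity (number of active tones)
M = 60                       # number of non-uniform samples (M < N)

# Ground-truth sparse spectrum and irregular sample times in [0, 1).
support = rng.choice(N, K, replace=False)
coeffs = rng.standard_normal(K) + 1j * rng.standard_normal(K)
times = np.sort(rng.uniform(0.0, 1.0, M))

# Dictionary A[m, k] = exp(j*2*pi*k*t_m); measurements y = A x with x K-sparse.
A = np.exp(2j * np.pi * np.outer(times, np.arange(N)))
x_true = np.zeros(N, dtype=complex)
x_true[support] = coeffs
y = A @ x_true

# Orthogonal matching pursuit.
residual, chosen = y.copy(), []
for _ in range(K):
    corr = np.abs(A.conj().T @ residual)
    corr[chosen] = 0.0
    chosen.append(int(np.argmax(corr)))
    sol, *_ = np.linalg.lstsq(A[:, chosen], y, rcond=None)
    residual = y - A[:, chosen] @ sol

x_hat = np.zeros(N, dtype=complex)
x_hat[chosen] = sol
print("true support:", sorted(support), "recovered:", sorted(chosen))
print("reconstruction SNR (dB):",
      10 * np.log10(np.linalg.norm(x_true) ** 2 /
                    np.linalg.norm(x_true - x_hat) ** 2))
```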

Relevance:

10.00%

Publisher:

Abstract:

Clock synchronization is an extremely important requirement of wireless sensor networks (WSNs). There are many application scenarios, such as weather monitoring and forecasting, where external clock synchronization may be required because the WSN itself may consist of components that are not connected to each other. A usual approach for external clock synchronization in WSNs is to synchronize the clock of a reference node with an external source such as UTC, while the remaining nodes synchronize with the reference node using an internal clock synchronization protocol. In order to provide highly accurate time, both the offset and the drift rate of each clock with respect to the reference node are estimated from time to time, and these are used to obtain the correct time from the local clock reading. A problem with this approach is that it is difficult to estimate the offset of a clock with respect to the reference node when the drift rates of clocks vary over time. In this paper, we first propose a novel internal clock synchronization protocol based on a weighted averaging technique, which synchronizes all the clocks of a WSN to a reference node periodically. We call this protocol the weighted average based internal clock synchronization (WICS) protocol. Based on this protocol, we then propose our weighted average based external clock synchronization (WECS) protocol. We have analyzed the proposed protocols for maximum synchronization error and shown that it is always upper bounded. Extensive simulation studies of the proposed protocols have been carried out using the Castalia simulator. Simulation results validate our theoretical claim that the maximum synchronization error is always upper bounded and also show that the proposed protocols perform better than other protocols in terms of synchronization accuracy. A prototype implementation of the proposed internal clock synchronization protocol using a few TelosB motes also validates our claim.
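The following toy simulation sketches the weighted-averaging idea only, not the WICS/WECS protocols themselves: the ring topology, the weights, the drift values and the pinning of the reference node are all assumptions made for illustration.

```python
# Minimal sketch (assumptions, not the WICS/WECS protocols): nodes
# periodically exchange logical-clock readings and each node moves its
# logical clock toward a weighted average of its own and its neighbours'
# readings; the reference node (node 0) is pinned to the external source.
import numpy as np

rng = np.random.default_rng(1)
n_nodes, n_rounds, period = 6, 50, 1.0                # period (s) between rounds

drift = 1.0 + rng.uniform(-40e-6, 40e-6, n_nodes)     # ppm-scale clock skews
drift[0] = 1.0                                         # reference node is ideal
offset = rng.uniform(-1e-3, 1e-3, n_nodes)             # initial offsets (s)
offset[0] = 0.0

for _ in range(n_rounds):
    # Local clocks accumulate error with their own skew during one period.
    offset += (drift - 1.0) * period
    readings = offset.copy()
    # Weighted averaging over a ring: self weight 0.5, each neighbour 0.25
    # (hypothetical weights).
    new_offset = np.empty_like(offset)
    for i in range(n_nodes):
        left, right = (i - 1) % n_nodes, (i + 1) % n_nodes
        new_offset[i] = 0.5 * readings[i] + 0.25 * readings[left] + 0.25 * readings[right]
    new_offset[0] = 0.0          # reference stays synchronized externally
    offset = new_offset

print("max synchronization error after %d rounds: %.2e s"
      % (n_rounds, np.max(np.abs(offset))))
```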

Relevance:

10.00%

Publisher:

Abstract:

The solution of the forward equation that models the transport of light through a highly scattering tissue material in diffuse optical tomography (DOT) using the finite element method gives the flux density (Φ) at the nodal points of the mesh. The experimentally measured flux (U_measured) on the boundary over a finite surface area in a DOT system has to be corrected to account for the system transfer functions (R) of the various building blocks of the measurement system. We present two methods to compensate for the perturbations caused by R and estimate the true flux density (Φ) from U_measured^cal. In the first approach, the measurement data from a homogeneous phantom (U_measured^homo) are used to calibrate the measurement system. The second scheme estimates the homogeneous phantom measurement using only the measurement from a heterogeneous phantom, thereby eliminating the need for a homogeneous phantom. This is done by statistically averaging the data (U_measured^hetero) and redistributing it to the corresponding detector positions. Experiments carried out on tissue-mimicking phantoms with single and multiple inhomogeneities, a human hand, and a pork tissue phantom demonstrate the robustness of the approach. (C) 2013 Society of Photo-Optical Instrumentation Engineers (SPIE) [DOI: 10.1117/1.JBO.18.2.026023]
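A hedged sketch of the first (homogeneous-phantom) calibration idea, with made-up channel counts and gains: the per-channel system transfer factor is estimated from the homogeneous measurement and then divided out of the heterogeneous data.

```python
# Hedged sketch of homogeneous-phantom calibration (invented numbers, not the
# paper's system): every source-detector channel has an unknown gain R;
# measuring a homogeneous phantom whose model prediction is known lets us
# estimate R per channel and divide it out of the heterogeneous data.
import numpy as np

rng = np.random.default_rng(2)
n_channels = 16

phi_homo_model = rng.uniform(0.5, 1.5, n_channels)                    # forward-model flux, homogeneous phantom
phi_hetero_true = phi_homo_model * rng.uniform(0.7, 1.0, n_channels)  # "unknown" target flux

R = rng.uniform(0.1, 10.0, n_channels)        # per-channel system transfer factors
u_meas_homo = R * phi_homo_model              # measured homogeneous data
u_meas_hetero = R * phi_hetero_true           # measured heterogeneous data

R_est = u_meas_homo / phi_homo_model          # calibrate with the homogeneous phantom
phi_hetero_est = u_meas_hetero / R_est        # corrected flux density

print("max relative error after calibration:",
      np.max(np.abs(phi_hetero_est - phi_hetero_true) / phi_hetero_true))
```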

Relevance:

10.00%

Publisher:

Abstract:

This paper analyzes the error exponents in Bayesian decentralized spectrum sensing, i.e., the detection of occupancy of the primary spectrum by a cognitive radio, with probability of error as the performance metric. At the individual sensors, the error exponents of a Central Limit Theorem (CLT) based detection scheme are analyzed. At the fusion center, a K-out-of-N rule is employed to arrive at the overall decision. It is shown that, in the presence of fading, for a fixed number of sensors, the error exponents with respect to the number of observations at both the individual sensors as well as at the fusion center are zero. This motivates the development of the error exponent with a certain probability as a novel metric that can be used to compare different detection schemes in the presence of fading. The metric is useful, for example, in answering the question of whether to sense for a pilot tone in a narrow band (and suffer Rayleigh fading) or to sense the entire wide-band signal (and suffer log-normal shadowing), in terms of the error exponent performance. The error exponents with a certain probability at both the individual sensors and at the fusion center are derived, with both Rayleigh as well as log-normal shadow fading. Numerical results are used to illustrate and provide a visual feel for the theoretical expressions obtained.
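For readers unfamiliar with the setup, the toy Monte Carlo below illustrates a CLT-thresholded energy detector at each sensor combined with a K-out-of-N fusion rule under Rayleigh fading; it illustrates the general scheme only, not the paper's error-exponent analysis, and all parameter values are invented.

```python
# Toy Monte Carlo (invented parameters, not the paper's analysis): each
# sensor uses an energy detector whose threshold comes from a Gaussian (CLT)
# approximation of the noise-only statistic, and the fusion center declares
# the primary user present when at least K of the N binary decisions are 1.
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(3)
N_sensors, K, n_obs, snr_lin = 5, 3, 200, 0.5      # K-out-of-N rule, 200 samples per sensor
pfa_sensor = 0.05

# CLT threshold: under noise only, the sum of n_obs squared unit-variance
# samples is approximately N(n_obs, 2*n_obs).
threshold = n_obs + np.sqrt(2 * n_obs) * NormalDist().inv_cdf(1 - pfa_sensor)

def trial(signal_present: bool) -> bool:
    votes = 0
    for _ in range(N_sensors):
        gain = rng.rayleigh(scale=np.sqrt(0.5)) if signal_present else 0.0  # Rayleigh fading amplitude
        x = gain * np.sqrt(snr_lin) + rng.standard_normal(n_obs)            # constant pilot in noise
        votes += int(np.sum(x ** 2) > threshold)
    return votes >= K

n_trials = 2000
pd = np.mean([trial(True) for _ in range(n_trials)])
pfa = np.mean([trial(False) for _ in range(n_trials)])
print(f"fusion-center P_detect ~ {pd:.3f}, P_false_alarm ~ {pfa:.3f}")
```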

Relevance:

10.00%

Publisher:

Abstract:

This paper considers the problem of weak signal detection in the presence of navigation data bits for Global Navigation Satellite System (GNSS) receivers. Typically, a set of partial coherent integration outputs is non-coherently accumulated to combat the effects of model uncertainties such as the presence of navigation data bits and/or frequency uncertainty, resulting in a sub-optimal test statistic. In this work, the test statistic for weak signal detection in the presence of navigation data bits is derived from the likelihood ratio. It is highlighted that averaging the likelihood-ratio-based test statistic over the prior distributions of the unknown data bits and the carrier phase uncertainty leads to the conventional Post Detection Integration (PDI) technique for detection. To improve the performance in the presence of model uncertainties, a novel cyclostationarity-based sub-optimal PDI technique is proposed. The test statistic is analytically characterized and shown to be robust to the presence of navigation data bits and to frequency, phase and noise uncertainties. Monte Carlo simulation results illustrate the validity of the theoretical results and the superior performance offered by the proposed detector in the presence of model uncertainties.
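As context, the sketch below implements the conventional non-coherent PDI baseline mentioned above (coherent sums over short blocks followed by accumulation of squared magnitudes); it is not the proposed cyclostationarity-based detector, and the block length, signal amplitude and data-bit pattern are arbitrary illustration values.

```python
# Minimal sketch of conventional non-coherent post-detection integration
# (PDI), as a baseline only. The correlator output is split into short
# coherent blocks (short enough that an unknown data-bit flip or a small
# residual frequency error hurts less), each block is summed coherently,
# and the squared magnitudes of the block sums are accumulated.
import numpy as np

def pdi_statistic(correlator_out: np.ndarray, block_len: int) -> float:
    """Sum over blocks of |coherent sum within block|^2."""
    n_blocks = len(correlator_out) // block_len
    x = correlator_out[: n_blocks * block_len].reshape(n_blocks, block_len)
    return float(np.sum(np.abs(x.sum(axis=1)) ** 2))

rng = np.random.default_rng(4)
n_samples, block_len, amp = 1000, 20, 0.3

bits = rng.choice([-1.0, 1.0], n_samples // 100).repeat(100)    # unknown nav data bits
noise = (rng.standard_normal(n_samples) + 1j * rng.standard_normal(n_samples)) / np.sqrt(2)

t_h1 = pdi_statistic(amp * bits + noise, block_len)    # weak signal with data bits present
t_h0 = pdi_statistic(noise, block_len)                 # noise only
print(f"PDI statistic: H1 = {t_h1:.1f}, H0 = {t_h0:.1f}")
```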

Relevance:

10.00%

Publisher:

Abstract:

We show that Riesz transforms associated to the Grushin operator G = -Δ - |x|²∂²_t are bounded on L^p(R^(n+1)). We also establish an analogue of the Hörmander-Mihlin Multiplier Theorem and study Bochner-Riesz means associated to the Grushin operator. The main tools used are Littlewood-Paley theory and an operator-valued Fourier multiplier theorem due to L. Weis.
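For readers less familiar with the notation, the LaTeX fragment below restates the operator and, in one common convention, its first-order Riesz transforms; the exact normalization used in the paper may differ.

```latex
% Grushin operator on R^{n+1} (variables x in R^n, t in R) and, in one
% common convention, its first-order Riesz transforms; the paper's exact
% definition may differ.
\[
  G \;=\; -\Delta_x \;-\; |x|^{2}\,\partial_t^{2},
  \qquad
  R_j \;=\; \partial_{x_j}\, G^{-1/2},
  \qquad
  \widetilde{R}_j \;=\; x_j\,\partial_t\, G^{-1/2},
  \quad 1 \le j \le n .
\]
```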

Relevance:

10.00%

Publisher:

Abstract:

The scattering of carriers by charged dislocations in semiconductors is studied within the framework of the linearized Boltzmann transport theory with an emphasis on examining consequences of the extreme anisotropy of the cylindrically symmetric scattering potential. A new closed-form approximate expression for the carrier mobility valid for all temperatures is proposed. The ratios of quantum and transport scattering times are evaluated after averaging over the anisotropy in the relaxation time. The value of the Hall scattering factor computed for charged dislocation scattering indicates that there may be a factor of two error in the experimental mobility estimates using the Hall data. An expression for the resistivity tensor when the dislocations are tilted with respect to the plane of transport is derived. Finally, an expression for the isotropic relaxation time is derived when the dislocations are located within the sample with a uniform angular distribution.

Relevance:

10.00%

Publisher:

Abstract:

Let M be the completion of the polynomial ring C[z_1, ..., z_m] with respect to some inner product, and for any ideal I ⊂ C[z_1, ..., z_m], let [I] be the closure of I in M. For a homogeneous ideal I, the joint kernel of the submodule [I] ⊂ M is shown, after imposing some mild conditions on M, to be the linear span of the set of vectors {p_i(∂/∂w̄_1, ..., ∂/∂w̄_m) K_[I](·, w)|_{w=0} : 1 ≤ i ≤ t}, where K_[I] is the reproducing kernel for the submodule [I] and p_1, ..., p_t is some minimal "canonical set of generators" for the ideal I. The proof includes an algorithm for constructing this canonical set of generators, which is determined uniquely modulo linear relations, for homogeneous ideals. A short proof of the "Rigidity Theorem", using the sheaf model for Hilbert modules over polynomial rings, is given. We describe, via the monoidal transformation, the construction of a Hermitian holomorphic line bundle for a large class of Hilbert modules of the form [I]. We show that the curvature, or even its restriction to the exceptional set, of this line bundle is an invariant for the unitary equivalence class of [I]. Several examples are given to illustrate the explicit computation of these invariants.

Relevance:

10.00%

Publisher:

Abstract:

Recent data from high-statistics experiments that have measured the modulus of the pion electromagnetic form factor from threshold to relatively high energies are used as input in a suitable mathematical framework of analytic continuation to find stringent constraints on the shape parameters of the form factor at t = 0. The method also uses as input a precise description of the phase of the form factor in the elastic region, based on the Fermi-Watson theorem and the analysis of the pi pi scattering amplitude with dispersive Roy equations, together with some information on the spacelike region coming from recent high-precision experiments. Our analysis confirms the inconsistencies, noted in our earlier work, of several data on the modulus, especially from low energies, with analyticity and the input phase. Using the data on the modulus from energies above 0.65 GeV, we obtain, with no specific parametrisation, the prediction ⟨r_π²⟩ ∈ (0.42, 0.44) fm² for the charge radius. The same formalism also leads to very narrow allowed ranges for the higher-order shape parameters at t = 0, with a strong correlation among them.
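The shape parameters referred to above are the Taylor coefficients of the form factor at t = 0; in a standard convention (the letters c and d for the higher-order coefficients are one common choice, not necessarily the paper's notation) the expansion reads as follows.

```latex
% Low-energy expansion of the pion electromagnetic form factor at t = 0;
% <r_pi^2> is the charge radius and c, d, ... are the higher-order
% "shape parameters".
\[
  F_\pi(t) \;=\; 1 \;+\; \tfrac{1}{6}\,\langle r_\pi^2\rangle\, t
  \;+\; c\, t^{2} \;+\; d\, t^{3} \;+\; \cdots ,
  \qquad t \to 0 .
\]
```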

Relevance:

10.00%

Publisher:

Abstract:

Purpose - In the present work, a numerical method, based on the well-established enthalpy technique, is developed to simulate the growth of binary alloy equiaxed dendrites in the presence of melt convection. The paper aims to discuss these issues.

Design/methodology/approach - The principle of volume averaging is used to formulate the governing equations (mass, momentum, energy and species conservation), which are solved using a coupled explicit-implicit method. The velocity and pressure fields are obtained using a fully implicit finite volume approach, whereas the energy and species conservation equations are solved explicitly to obtain the enthalpy and solute concentration fields. As a model problem, the growth of a single crystal in a two-dimensional cavity filled with an undercooled melt is simulated.

Findings - Comparison of the simulation results with available solutions obtained using the level set method and the phase field method shows good agreement. The effects of melt flow on the dendrite growth rate and the solute distribution along the solid-liquid interface are studied. A faster growth rate of the upstream dendrite arm is observed for binary alloys, which can be attributed to the enhanced heat transfer due to convection as well as lower solute pile-up at the solid-liquid interface. Subsequently, the influence of the thermal and solutal Peclet numbers and of the undercooling on the dendrite tip velocity is investigated.

Originality/value - As the present enthalpy-based microscopic solidification model with melt convection is built on a framework similar to the enthalpy models popularly used at the macroscopic scale, it lays the foundation for developing effective multiscale solidification models.
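To make the enthalpy technique concrete, here is a deliberately minimal 1-D, pure-substance sketch (a chilled-wall toy problem rather than the paper's undercooled-melt setup, with no convection, no solute transport and invented material parameters): the enthalpy field is updated explicitly from heat diffusion, and the temperature and liquid fraction are recovered from it. The paper's volume-averaged binary-alloy model with melt flow is far more elaborate.

```python
# Minimal 1-D enthalpy-method illustration (pure substance, chilled wall,
# invented parameters). The volumetric enthalpy H = rho*(cp*T + L*fl) is
# updated explicitly from heat diffusion; T and the liquid fraction fl are
# then recovered from H assuming isothermal phase change at Tm.
import numpy as np

nx, dx, dt, nsteps = 100, 1e-4, 5e-4, 1000                 # grid, spacing (m), step (s)
k, rho, cp, L, Tm = 30.0, 7000.0, 800.0, 2.7e5, 0.0        # conductivity, density, cp, latent heat, melting T

T = np.full(nx, 5.0)            # liquid initially a little above Tm
T[0] = -50.0                    # chilled wall (Dirichlet boundary)
fl = np.ones(nx); fl[0] = 0.0   # liquid fraction
H = rho * (cp * T + L * fl)     # volumetric enthalpy

for _ in range(nsteps):
    lap = np.zeros(nx)
    lap[1:-1] = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
    H[1:-1] += dt * k * lap[1:-1]                 # explicit enthalpy update
    h = H / rho
    fl = np.clip((h - cp * Tm) / L, 0.0, 1.0)     # isothermal phase change at Tm
    T = np.where(fl <= 0.0, h / cp,
        np.where(fl >= 1.0, (h - L) / cp, Tm))
    T[0], fl[0], H[0] = -50.0, 0.0, rho * cp * (-50.0)     # re-impose the chilled wall

print("solid cells:", int(np.sum(fl <= 0.0)),
      "| mushy cells:", int(np.sum((fl > 0.0) & (fl < 1.0))))
```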

Relevance:

10.00%

Publisher:

Abstract:

Entropy is a fundamental thermodynamic property that has attracted wide attention across domains, including chemistry. Inference of the entropy of chemical compounds using various approaches has been a widely studied topic. However, many aspects of entropy in chemical compounds remain unexplained. In the present work, we propose two new information-theoretical molecular descriptors for the prediction of the gas phase thermal entropy of organic compounds. The descriptors reflect the bulk and size of the compounds as well as the gross topological symmetry in their structures, all of which are believed to determine entropy. A high correlation between the entropy values and our information-theoretical indices has been found, and the predicted entropy values, obtained from the corresponding statistically significant regression model, are within acceptable approximation. We provide an additional mathematical result, in the form of a theorem and proof, that might further help in assessing changes in gas phase thermal entropy values with changes in molecular structure. The proposed information-theoretical molecular descriptors, regression model and mathematical result are expected to augment predictions of gas phase thermal entropy for a large number of chemical compounds.
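As a generic illustration of what an information-theoretic molecular index looks like (the paper's two descriptors are not specified here, so this is not them), the snippet below computes the Shannon entropy of the partition of a molecule's atoms into element classes.

```python
# Generic illustration, not the authors' descriptors: the Shannon entropy
# (in bits) of the partition of a molecule's atoms into element classes,
# a classic information-theoretic index that grows with compositional
# "mixedness" of the molecule.
import math
from collections import Counter

def information_index(atom_list):
    """Shannon entropy (bits) of the element partition of a molecule."""
    counts = Counter(atom_list)
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

ethanol = ["C", "C", "O", "H", "H", "H", "H", "H", "H"]   # C2H5OH
methane = ["C", "H", "H", "H", "H"]                        # CH4
print("ethanol index:", round(information_index(ethanol), 3))
print("methane index:", round(information_index(methane), 3))
```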

Relevance:

10.00%

Publisher:

Abstract:

The von Neumann entropy of a generic quantum state is not unique unless the state can be uniquely decomposed as a sum of extremal or pure states. As pointed out to us by Sorkin, this happens if the GNS representation (of the algebra of observables in some quantum state) is reducible, and some representations in the decomposition occur with non-trivial degeneracy. This non-unique entropy can occur at zero temperature. We will argue elsewhere in detail that the degeneracies in the GNS representation can be interpreted as an emergent broken gauge symmetry, and play an important role in the analysis of emergent entropy due to non-Abelian anomalies. Finally, we establish the analogue of an H-theorem for this entropy by showing that its evolution is Markovian, determined by a stochastic matrix.

Relevance:

10.00%

Publisher:

Abstract:

Given a smooth, projective variety Y over an algebraically closed field of characteristic zero, and a smooth, ample hyperplane section X ⊂ Y, we study the question of when a bundle E on X extends to a bundle ℰ on a Zariski open set U ⊂ Y containing X. The main ingredients used are explicit descriptions of various obstruction classes in the deformation theory of bundles, together with Grothendieck-Lefschetz theory. As a consequence, we prove a Noether-Lefschetz theorem for higher-rank bundles, which recovers and unifies the Noether-Lefschetz theorems of Joshi and Ravindra-Srinivas.

Relevance:

10.00%

Publisher:

Abstract:

In this paper, we present a machine learning approach for subject-independent human action recognition using a depth camera, emphasizing the importance of depth in the recognition of actions. The proposed approach uses the flow information in all three dimensions to classify an action. In our approach, we obtain the 2-D optical flow and use it along with the depth image to obtain the depth flow (Z motion vectors). The obtained flow captures the dynamics of the actions in space-time. Feature vectors are obtained by averaging the 3-D motion over a grid laid over the silhouette in a hierarchical fashion. These hierarchical fine-to-coarse windows capture the motion dynamics of the object at various scales. The extracted features are used to train a Meta-cognitive Radial Basis Function Network (McRBFN) that uses a Projection Based Learning (PBL) algorithm, referred to henceforth as PBL-McRBFN. PBL-McRBFN begins with zero hidden neurons and builds the network based on the best human learning strategy, namely self-regulated learning in a meta-cognitive environment. When a sample is used for learning, PBL-McRBFN uses the sample overlapping conditions and a projection based learning algorithm to estimate the parameters of the network. The performance of PBL-McRBFN is compared to that of Support Vector Machine (SVM) and Extreme Learning Machine (ELM) classifiers, with every person and action represented in the training and testing datasets. The performance study shows that PBL-McRBFN outperforms these classifiers in recognizing actions in 3-D. Further, a subject-independent study is conducted using a leave-one-subject-out strategy and its generalization performance is tested. It is observed from the subject-independent study that McRBFN is capable of generalizing actions accurately. The performance of the proposed approach is benchmarked with the Video Analytics Lab (VAL) dataset and the Berkeley Multimodal Human Action Database (MHAD). (C) 2013 Elsevier Ltd. All rights reserved.
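The sketch below illustrates the feature-extraction pipeline described above, dense 2-D optical flow, a Z-motion estimate from the depth images, and hierarchical 1x1/2x2/4x4 grid averages, under stated assumptions (OpenCV's Farneback flow, flow-warped depth differencing, random placeholder frames); it is not the authors' implementation.

```python
# Hedged sketch of the feature-extraction idea, not the authors' code:
# dense 2-D optical flow between consecutive grayscale frames, a Z-motion
# estimate from flow-warped depth differencing, and hierarchical grid
# averages of the resulting 3-D motion as the feature vector.
import cv2
import numpy as np

def depth_flow_features(gray_prev, gray_next, depth_prev, depth_next, mask):
    h, w = gray_prev.shape
    # Dense 2-D optical flow (u, v) via Farneback's method.
    flow = cv2.calcOpticalFlowFarneback(gray_prev, gray_next, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    # Z motion: depth at the flowed-to position minus the current depth.
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    warped_depth = cv2.remap(depth_next.astype(np.float32),
                             xs + flow[..., 0], ys + flow[..., 1],
                             cv2.INTER_LINEAR)
    z_motion = warped_depth - depth_prev.astype(np.float32)
    motion = np.dstack([flow, z_motion]) * mask[..., None]   # zero outside the silhouette

    # Hierarchical fine-to-coarse grid averages over the frame.
    feats = []
    for g in (1, 2, 4):
        for i in range(g):
            for j in range(g):
                cell = motion[i * h // g:(i + 1) * h // g,
                              j * w // g:(j + 1) * w // g]
                feats.extend(cell.reshape(-1, 3).mean(axis=0))
    return np.array(feats)   # length 3 * (1 + 4 + 16) = 63

# Placeholder frames just to show the call; real input would come from a depth camera.
rng = np.random.default_rng(5)
g0, g1 = (rng.integers(0, 255, (120, 160), dtype=np.uint8) for _ in range(2))
d0, d1 = (rng.uniform(0.5, 4.0, (120, 160)) for _ in range(2))
sil = np.ones((120, 160), dtype=np.float32)
print(depth_flow_features(g0, g1, d0, d1, sil).shape)
```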