20 results for WOLF SPIDERS ARANEAE

at Indian Institute of Science - Bangalore - India


Relevance: 20.00%

Abstract:

The perception of ultraviolet (UV) light by spiders has so far been demonstrated only in salticids. Crab spiders (Thomisidae) hunt mostly on flowers and need to find appropriate hunting sites. Previous studies have shown that some crab spiders that reflect UV light use UV contrast to enhance prey capture. The high UV contrast can be obtained either by modulation of body colouration or by active selection of appropriate backgrounds for foraging. We show that crab spiders (Thomisus sp.) hunting on Spathiphyllum plants use chromatic contrast, especially UV contrast, to make themselves attractive to hymenopteran prey. In addition, they are able to achieve high UV contrast by actively selecting non-UV-reflecting surfaces when given a choice of UV-reflecting and non-UV-reflecting surfaces in the absence of odour cues. Honeybees (Apis cerana) approached Spathiphyllum plants on which the crab spiders were high UV-contrast targets with greater frequency than plants on which the UV contrast of the spiders was low. Thus, crab spiders can perceive UV and may use it to choose appropriate backgrounds that enhance prey capture, exploiting the attraction of prey such as honeybees to UV.

Relevance: 10.00%

Abstract:

The recently discovered twist phase is studied in the context of the full ten-parameter family of partially coherent general anisotropic Gaussian Schell-model beams. It is shown that the nonnegativity requirement on the cross-spectral density of the beam demands that the strength of the twist phase be bounded from above by the inverse of the transverse coherence area of the beam. The twist phase as a two-point function is shown to have the structure of the generalized Huygens kernel or Green's function of a first-order system. The ray-transfer matrix of this system is exhibited. Wolf-type coherent-mode decomposition of the twist phase is carried out. Imposition of the twist phase on an otherwise untwisted beam is shown to result in a linear transformation in the ray phase space of the Wigner distribution. Though this transformation preserves the four-dimensional phase-space volume, it is not symplectic and hence it can, when impressed on a Wigner distribution, push it out of the convex set of all bona fide Wigner distributions unless the original Wigner distribution was sufficiently deep into the interior of the set.
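
For orientation, the bound described above can be sketched in one common parameterization of the twisted Gaussian Schell-model beam. This is a sketch under assumed conventions, not the paper's own notation: σ (beam width), δ (transverse coherence width), u (twist strength) and k (wavenumber) are assumptions of this sketch, and normalizations vary across the literature.

```latex
% Hedged sketch: cross-spectral density of a twisted Gaussian Schell-model
% beam, with the nonnegativity bound on the twist strength u.
W(\boldsymbol{\rho}_1,\boldsymbol{\rho}_2) \propto
  \exp\!\left(-\frac{\rho_1^{2}+\rho_2^{2}}{4\sigma^{2}}\right)
  \exp\!\left(-\frac{|\boldsymbol{\rho}_1-\boldsymbol{\rho}_2|^{2}}{2\delta^{2}}\right)
  \exp\!\left(-\mathrm{i}\,k\,u\,(x_1 y_2 - x_2 y_1)\right),
\qquad |u| \le \frac{1}{k\,\delta^{2}} .
```

The last inequality expresses the abstract's statement that the twist strength is bounded above by the inverse of the transverse coherence area (up to the wavenumber factor fixed by units).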

Relevance: 10.00%

Abstract:

We present a complete solution to the problem of coherent-mode decomposition of the most general anisotropic Gaussian Schell-model (AGSM) beams, which constitute a ten-parameter family. Our approach is based on symmetry considerations. Concepts and techniques familiar from the context of quantum mechanics in the two-dimensional plane are used to exploit the Sp(4, R) dynamical symmetry underlying the AGSM problem. We take advantage of the fact that the symplectic group of first-order optical systems acts unitarily, through the metaplectic operators, on the Hilbert space of wave amplitudes over the transverse plane. Using the Iwasawa decomposition for the metaplectic operator and the classic theorem of Williamson on the normal forms of positive definite symmetric matrices under linear canonical transformations, we demonstrate the unitary equivalence of the AGSM problem to a separable problem earlier studied by Li and Wolf [Opt. Lett. 7, 256 (1982)] and Gori and Guattari [Opt. Commun. 48, 7 (1983)]. This connection enables one to write down, almost by inspection, the coherent-mode decomposition of the general AGSM beam. A universal feature of the eigenvalue spectrum of the AGSM family is noted.
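
Williamson's theorem, invoked above, can be stated as follows (here 2n = 4 for the transverse two-dimensional plane):

```latex
% Williamson's theorem: every real, symmetric, positive definite 2n x 2n
% matrix M is symplectically congruent to a diagonal matrix of doubly
% repeated positive numbers kappa_j (the symplectic eigenvalues).
\exists\, S \in Sp(2n,\mathbb{R}) :\quad
S^{T} M S = \mathrm{diag}(\kappa_1,\dots,\kappa_n,\kappa_1,\dots,\kappa_n),
\qquad \kappa_j > 0 .
```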

Relevance: 10.00%

Abstract:

We consider the problem of compression via homomorphic encoding of a source having a group alphabet. This is motivated by the problem of distributed function computation, where it is known that if one is only interested in computing a function of several sources, then one can at times improve upon the compression rate required by the Slepian-Wolf bound. The functions of interest are those that can be represented by the binary operation of the group. We first consider the case when the source alphabet is the cyclic group Z_{p^r}. In this scenario, we show that the set of achievable rates provided by Krithivasan and Pradhan [1] is indeed the best possible, and we give a simpler proof of their achievability result. For a general Abelian group, we present an achievable rate region that improves upon that of Krithivasan and Pradhan. We then consider the case when the source alphabet is a non-Abelian group. We show that if all the source symbols have non-zero probability and the center of the group is trivial, then it is impossible to compress such a source with a homomorphic encoder. Finally, we present certain non-homomorphic encoders that are also suitable in the context of function computation over non-Abelian group sources, and provide the rate regions achieved by these encoders.
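
The key structural property a homomorphic encoder provides can be illustrated with a minimal Python sketch over Z_{p^r} (here Z_4 = Z_{2^2}). The encoding matrix `A` is hypothetical, chosen only for illustration; the point is that any linear map over Z_4 is a group homomorphism, so the encoding of a sum of source vectors equals the sum of their encodings.

```python
# Sketch of a homomorphic encoder over the cyclic group Z_{p^r} (here Z_4).
# Matrix-vector multiplication mod 4 is a group homomorphism:
# encode(x + y) = encode(x) + encode(y), componentwise mod 4.

def encode(A, x, m=4):
    """Apply the linear (hence homomorphic) map A to x over Z_m."""
    return [sum(a * xi for a, xi in zip(row, x)) % m for row in A]

A = [[1, 2, 3],
     [0, 1, 1]]          # hypothetical 2x3 encoding matrix over Z_4
x = [3, 1, 2]
y = [1, 0, 3]

sx, sy = encode(A, x), encode(A, y)
s_sum = encode(A, [(a + b) % 4 for a, b in zip(x, y)])

# Homomorphism property: encoding the sum equals the sum of the encodings.
assert s_sum == [(a + b) % 4 for a, b in zip(sx, sy)]
```

This is exactly the property that lets a receiver compute the group operation of two sources directly from their encodings, without first recovering the sources themselves.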

Relevance: 10.00%

Abstract:

Pyruvate conversion to acetyl-CoA by the pyruvate dehydrogenase (PDH) multienzyme complex is known as a key node affecting the metabolic fluxes of animal cell culture. However, its possible role in causing nonlinear dynamic behaviour of animal cells, such as oscillations and multiplicity, has received little attention. In this work, the kinetic and dynamic behaviour of PDH of eukaryotic cells has been analyzed using both in vitro and simplified in vivo models. With the in vitro model, the overall reaction rate v(1) of PDH is shown to be a nonlinear function of pyruvate concentration, leading to oscillations under certain conditions. All enzyme components significantly affect v(1) and the nonlinearity of PDH, with protein X and the core enzyme dihydrolipoamide acyltransferase (E2) being the most influential. By considering the synthesis rates of pyruvate and the PDH components, the in vitro model is expanded to emulate in vivo conditions. Analysis using the in vivo model reveals another interesting kinetic feature of the PDH system, namely multiple steady states. Depending on the pyruvate and enzyme levels or the operation mode, either a steady state with a high pyruvate decarboxylation rate or one with a significantly lower decarboxylation rate can be reached under otherwise identical conditions. In general, the more efficient steady state is associated with a lower pyruvate concentration. A possible time delay in substrate supply and enzyme synthesis can also affect which steady state is reached and can lead to oscillations under certain conditions. Overall, the predictions of multiplicity for the PDH system agree qualitatively well with recent experimental observations in animal cell cultures. The model analysis gives some hints for improving pyruvate metabolism in animal cell culture.

Relevance: 10.00%

Abstract:

Results of a study of dc magnetization M(T,H), performed on a Nd0.6Pb0.4MnO3 single crystal in the temperature range around TC (the Curie temperature) that embraces the supposed critical region |ε| = |T − TC|/TC ≤ 0.05, are reported. The magnetic data analyzed in the critical region using the Kouvel-Fisher method give TC = 156.47 ± 0.06 K and the critical exponents β = 0.374 ± 0.006 (from the temperature dependence of the magnetization) and γ = 1.329 ± 0.003 (from the temperature dependence of the initial susceptibility). The critical isotherm M(TC,H) gives δ = 4.54 ± 0.10. Thus the scaling law γ + β = δβ is fulfilled. The critical exponents obey the single scaling equation of state M(H,ε) = ε^β f±(H/ε^(β+γ)), where f+ applies for T > TC and f− for T < TC.
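
The consistency of the quoted exponents with the Widom scaling relation γ + β = δβ can be checked in a couple of lines of Python, using only the values stated in the abstract:

```python
# Numerical check of the scaling relation gamma + beta = delta * beta
# using the exponents quoted above for the Nd0.6Pb0.4MnO3 crystal.
beta, gamma, delta = 0.374, 1.329, 4.54

delta_from_scaling = (gamma + beta) / beta
# ~4.55, which lies within the quoted uncertainty delta = 4.54 +/- 0.10.
assert abs(delta_from_scaling - delta) < 0.10
```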

Relevance: 10.00%

Abstract:

We consider the problem of compression of a non-Abelian source. This is motivated by the problem of distributed function computation, where it is known that if one is only interested in computing a function of several sources, then one can often improve upon the compression rate required by the Slepian-Wolf bound. Let G be a non-Abelian group with center Z(G). We show here that it is impossible to compress a source with symbols drawn from G when Z(G) is trivial if one employs a homomorphic encoder and a typical-set decoder. We provide achievable upper bounds on the minimum rate required to compress a source over a non-Abelian group with non-trivial center. Also, in a two-source setting, we provide achievable upper bounds for the compression of any non-Abelian group source, using a non-homomorphic encoder.

Relevance: 10.00%

Abstract:

In this paper, we explore the use of LDPC codes for nonuniform sources under the distributed source coding paradigm. Our analysis reveals that several capacity-approaching LDPC codes do indeed approach the Slepian-Wolf bound for nonuniform sources as well. Monte Carlo simulation results show that highly biased sources can be compressed to within 0.049 bits/sample of the Slepian-Wolf bound for moderate block lengths.
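
For a memoryless binary source, the Slepian-Wolf (single-source) compression limit is simply the binary entropy H(p), so a highly biased source has a bound well below 1 bit/sample. A minimal Python sketch (the bias p = 0.1 is a hypothetical value for illustration, not taken from the paper):

```python
import math

# Binary entropy H(p) in bits/sample: the lossless compression limit for a
# memoryless binary source with bias p. A scheme operating 0.049 bits/sample
# away from the bound needs a rate of roughly H(p) + 0.049.
def binary_entropy(p):
    """H(p) in bits; H(0) = H(1) = 0 by convention."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

h = binary_entropy(0.1)        # hypothetical bias: ~0.469 bits/sample
rate_needed = h + 0.049        # rate at the gap reported above
assert abs(binary_entropy(0.5) - 1.0) < 1e-12  # uniform source: 1 bit/sample
```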

Relevance: 10.00%

Abstract:

The setting considered in this paper is one of distributed function computation. More specifically, there is a collection of N sources possessing correlated information and a destination that would like to acquire a specific linear combination of the N sources. We address both the case when the common alphabet of the sources is a finite field and the case when it is a finite, commutative principal ideal ring with identity. The goal is to minimize the total amount of information needed to be transmitted by the N sources while enabling reliable recovery at the destination of the linear combination sought. One means of achieving this goal is for each of the sources to compress all the information it possesses and transmit this to the receiver. The Slepian-Wolf theorem of information theory governs the minimum rate at which each source must transmit while enabling all data to be reliably recovered at the receiver. However, recovering all the data at the destination is often wasteful of resources, since the destination is only interested in computing a specific linear combination. An alternative explored here is one in which each source is compressed using a common linear mapping and then transmitted to the destination, which then proceeds to use linearity to directly recover the needed linear combination. The article is part review and in part presents new results. The portion of the paper that deals with finite fields is previously known material, while that dealing with rings is mostly new. Attempting to find the best linear map that will enable function computation forces us to consider the linear compression of a source. While in the finite field case it is known that a source can be linearly compressed down to its entropy, it turns out that the same does not hold in the case of rings. An explanation for this curious interplay between algebra and information theory is also provided in this paper.
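
The common-linear-mapping idea above can be made concrete with a small Python sketch over the finite field GF(2) (the matrix `A` and the source vectors are hypothetical, for illustration): every source applies the same matrix, and by linearity the receiver can add the received encodings to obtain the encoding of the desired combination, here the mod-2 sum as in the Korner-Marton scheme, without recovering the individual sources.

```python
# Common linear encoder over GF(2): each source sends A x_i; the receiver
# XORs the encodings to get A (x1 + x2), the encoding of the mod-2 sum.

def gf2_matvec(A, x):
    """Matrix-vector product over GF(2)."""
    return [sum(a & xi for a, xi in zip(row, x)) % 2 for row in A]

A = [[1, 0, 1, 1],
     [0, 1, 1, 0]]      # hypothetical 2x4 compression matrix over GF(2)
x1 = [1, 0, 1, 1]
x2 = [0, 1, 1, 0]

s1, s2 = gf2_matvec(A, x1), gf2_matvec(A, x2)
# XOR of the encodings equals the encoding of the XOR of the sources:
lhs = [(a + b) % 2 for a, b in zip(s1, s2)]
rhs = gf2_matvec(A, [(a + b) % 2 for a, b in zip(x1, x2)])
assert lhs == rhs
```

The receiver then only needs to decode A(x1 + x2) back to x1 + x2, which is what drives the rate savings relative to recovering both sources.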

Relevance: 10.00%

Abstract:

The critical behaviour has been investigated in single crystalline Nd0.6Pb0.4MnO3 near the paramagnetic to ferromagnetic transition temperature (TC) by static magnetic measurements. The values of TC and the critical exponents β, γ and δ are estimated by analysing the data in the critical region. The exponent values are very close to those expected for 3D Heisenberg ferromagnets with short-range interactions. Specific heat measurements show a broad cusp at TC (i.e., exponent α < 0), consistent with Heisenberg-like behaviour.

Relevance: 10.00%

Abstract:

In this paper, we consider a distributed function computation setting, where there are m distributed but correlated sources X1,...,Xm and a receiver interested in computing an s-dimensional subspace generated by [X1,...,Xm]Γ for some (m × s) matrix Γ of rank s. We construct a scheme based on nested linear codes and characterize the achievable rates obtained using the scheme. The proposed nested-linear-code approach performs at least as well as the Slepian-Wolf scheme in terms of sum-rate performance for all subspaces and source distributions. In addition, for a large class of distributions and subspaces, the scheme improves upon the Slepian-Wolf approach. The nested-linear-code scheme may be viewed as uniting, under a common framework, the Korner-Marton approach of using a common linear encoder and the Slepian-Wolf approach of employing different encoders at each source. Along the way, we prove an interesting and fundamental structural result on the nature of subspaces of an m-dimensional vector space V with respect to a normalized measure of entropy. Here, each element in V corresponds to a distinct linear combination of a set {X1, ..., Xm} of m random variables whose joint probability distribution is given.
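
The "nested" structure of the codes referred to above can be illustrated with a minimal Python sketch over GF(2): the coarse code's generator rows are a subset of the fine code's rows, so every coarse codeword is automatically a fine codeword. The generator matrices here are hypothetical, not taken from the paper.

```python
from itertools import product

# Enumerate all codewords of a binary linear code with generator matrix G
# (rows of G are over GF(2), codeword length n).
def codewords(G, n):
    words = set()
    for msg in product([0, 1], repeat=len(G)):
        w = [0] * n
        for bit, row in zip(msg, G):
            if bit:
                w = [(a + b) % 2 for a, b in zip(w, row)]
        words.add(tuple(w))
    return words

G_fine   = [[1, 0, 0, 1, 1],
            [0, 1, 0, 1, 0],
            [0, 0, 1, 0, 1]]   # hypothetical fine code
G_coarse = G_fine[:1]          # coarse code: a subset of the fine code's rows

# Nesting: every coarse codeword is also a fine codeword.
assert codewords(G_coarse, 5) <= codewords(G_fine, 5)
```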

Relevance: 10.00%

Abstract:

Let X1, ..., Xm be a set of m statistically dependent sources over the common alphabet F_q that are linearly independent when considered as functions over the sample space. We consider a distributed function computation setting in which the receiver is interested in the lossless computation of the elements of an s-dimensional subspace W spanned by the elements of the row vector [X1, ..., Xm]Γ, in which the (m × s) matrix Γ has rank s. A sequence of three increasingly refined approaches is presented, all based on linear encoders. The first approach uses a common matrix to encode all the sources and a Korner-Marton-like receiver to directly compute W. The second improves upon the first by showing that it is often more efficient to compute a carefully chosen superspace U of W. The superspace is identified by showing that the joint distribution of the {Xi} induces a unique decomposition of the set of all linear combinations of the {Xi} into a chain of subspaces identified by a normalized measure of entropy. This subspace chain also suggests a third approach, one that employs nested codes. For any joint distribution of the {Xi} and any W, the sum rate of the nested-code approach is no larger than that under the Slepian-Wolf (SW) approach, in which W is computed by first recovering each of the {Xi}. For a large class of joint distributions and subspaces W, the nested-code approach is shown to improve upon SW. Additionally, a class of source distributions and subspaces is identified for which the nested-code approach is sum-rate optimal.
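
The entropy ordering of linear combinations that underlies the subspace chain can be demonstrated in Python for two correlated binary sources. The joint distribution below is hypothetical; it is chosen so that the mod-2 sum X1 + X2 has lower entropy than either source alone, which is exactly the situation the decomposition exploits.

```python
import math

# Hypothetical joint pmf of (X1, X2) over GF(2)^2: strongly correlated,
# uniform marginals, so X1 + X2 (mod 2) is highly biased.
pmf = {(0, 0): 0.45, (0, 1): 0.05, (1, 0): 0.05, (1, 1): 0.45}

def entropy_of_combination(a, b):
    """Entropy in bits of the linear combination a*X1 + b*X2 (mod 2)."""
    p1 = sum(p for (x1, x2), p in pmf.items() if (a * x1 + b * x2) % 2 == 1)
    if p1 in (0.0, 1.0):
        return 0.0
    return -p1 * math.log2(p1) - (1 - p1) * math.log2(1 - p1)

h_x1  = entropy_of_combination(1, 0)   # H(X1): 1 bit (uniform marginal)
h_sum = entropy_of_combination(1, 1)   # H(X1 + X2): ~0.469 bits
# The sum lies "lower" in the entropy-ordered chain than each source:
assert h_sum < h_x1
```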

Relevance: 10.00%

Abstract:

1. The relationship between species richness and ecosystem function, as measured by productivity or biomass, is of long-standing theoretical and practical interest in ecology. This is especially true for forests, which represent a majority of global biomass, productivity and biodiversity.

2. Here, we conduct an analysis of relationships between tree species richness, biomass and productivity in 25 forest plots of area 8-50 ha from across the world. The data were collected using standardized protocols, obviating the need to correct for methodological differences that plague many studies on this topic.

3. We found that at very small spatial grains (0.04 ha) species richness was generally positively related to productivity and biomass within plots, with a doubling of species richness corresponding to an average 48% increase in productivity and 53% increase in biomass. At larger spatial grains (0.25 ha, 1 ha), results were mixed, with negative relationships becoming more common. The results were qualitatively similar but much weaker when we controlled for stem density: at the 0.04 ha spatial grain, a doubling of species richness corresponded to a 5% increase in productivity and a 7% increase in biomass. Productivity and biomass were themselves almost always positively related at all spatial grains.

4. Synthesis. This is the first cross-site study of the effect of tree species richness on forest biomass and productivity that systematically varies spatial grain within a controlled methodology. The scale-dependent results are consistent with theoretical models in which sampling effects and niche complementarity dominate at small scales, while environmental gradients drive patterns at large scales. Our study shows that the relationship of tree species richness with biomass and productivity changes qualitatively when moving from scales typical of forest surveys (0.04 ha) to slightly larger scales (0.25 and 1 ha). This needs to be recognized in forest conservation policy and management.
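
As a back-of-envelope aside (not part of the study's own analysis): if productivity scales with richness as a power law, productivity ∝ S^b, the quoted "doubling gives a 48% increase" translates into an exponent b = log2(1.48). A two-line Python check:

```python
import math

# Convert the quoted doubling effects into power-law exponents, assuming
# productivity ~ S^b and biomass ~ S^b' (an illustrative assumption only).
b_productivity = math.log2(1.48)   # doubling -> +48%, so 2^b = 1.48
b_biomass      = math.log2(1.53)   # doubling -> +53%, so 2^b = 1.53
assert 0.5 < b_productivity < b_biomass < 0.7
```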

Relevance: 10.00%

Abstract:

We develop several novel signal detection algorithms for two-dimensional intersymbol-interference (ISI) channels. The contribution of the paper is two-fold: (1) we extend the one-dimensional maximum a posteriori (MAP) detection algorithm to operate over multiple rows and columns in an iterative manner, study the performance vs. complexity trade-offs for various algorithmic options ranging from single row/column non-iterative detection to a multi-row/column iterative scheme, and analyze the performance of the algorithm; (2) we develop a self-iterating 2-D linear minimum mean-squared-error based equalizer by extending the 1-D linear equalizer framework, and present an analysis of the algorithm. The iterative multi-row/column detector and the self-iterating equalizer are further connected within a turbo framework. We evaluate the combined 2-D iterative equalization and detection engine through analysis and simulations. The performance of the overall equalizer and detector is near the MAP estimate with tractable complexity, and beats the Marrow-Wolf detector by at least 0.8 dB over certain 2-D ISI channels. The coded performance indicates an SNR gain of about 8 dB over the uncoded 2-D equalizer-detector system.
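
The 2-D ISI channel model that such detectors operate on can be sketched in a few lines of Python: each received sample is a 2-D convolution of the binary input with a small interference mask (noise omitted for clarity). The mask values here are hypothetical, not taken from the paper.

```python
# Minimal 2-D intersymbol-interference channel model:
# y[i][j] = sum over (k, l) of h[k][l] * x[i-k][j-l]  (noiseless).

def isi_2d(x, h):
    """2-D convolution of binary input x with ISI mask h."""
    rows, cols = len(x), len(x[0])
    y = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            for k in range(len(h)):
                for l in range(len(h[0])):
                    if i - k >= 0 and j - l >= 0:
                        y[i][j] += h[k][l] * x[i - k][j - l]
    return y

h = [[1.0, 0.5],
     [0.5, 0.25]]        # hypothetical 2x2 ISI mask
x = [[1, 0, 0],
     [0, 0, 0],
     [0, 0, 1]]

y = isi_2d(x, h)
# A lone 1 at (0,0) smears into its right, lower, and diagonal neighbours:
assert y[0][0] == 1.0 and y[0][1] == 0.5 and y[1][1] == 0.25
```

A detector's job is then to invert this smearing: the row/column iterative MAP scheme described above alternates 1-D detection passes over the rows and columns of y.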