980 results for Wolf, Hieronymus, 1516-1580.


Relevance:

10.00%

Publisher:

Abstract:

In this paper, we explore the use of LDPC codes for nonuniform sources under the distributed source coding paradigm. Our analysis reveals that several capacity-approaching LDPC codes do indeed approach the Slepian-Wolf bound for nonuniform sources as well. Monte Carlo simulation results show that highly biased sources can be compressed to within 0.049 bits/sample of the Slepian-Wolf bound at moderate block lengths.
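
For reference, the bound in question is the standard Slepian-Wolf result (textbook material, not specific to this paper): a source X compressed with side information Y available only at the decoder can be recovered losslessly only at rates of at least the conditional entropy.

```latex
% Slepian-Wolf bound for source X with decoder-only side information Y
R \ge H(X \mid Y) = -\sum_{x,y} p(x,y)\,\log_2 p(x \mid y)
% For a highly biased source this is well below 1 bit/sample; the paper
% reports operating within 0.049 bits/sample of this limit.
```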

Relevance:

10.00%

Publisher:

Abstract:

The setting considered in this paper is one of distributed function computation. More specifically, there is a collection of N sources possessing correlated information and a destination that would like to acquire a specific linear combination of the N sources. We address both the case when the common alphabet of the sources is a finite field and the case when it is a finite, commutative principal ideal ring with identity. The goal is to minimize the total amount of information that the N sources need to transmit while enabling reliable recovery at the destination of the linear combination sought. One means of achieving this goal is for each source to compress all the information it possesses and transmit this to the receiver. The Slepian-Wolf theorem of information theory governs the minimum rate at which each source must transmit while enabling all data to be reliably recovered at the receiver. However, recovering all the data at the destination is often wasteful of resources, since the destination is interested only in computing a specific linear combination. An alternative explored here is one in which each source is compressed using a common linear mapping and then transmitted to the destination, which then uses linearity to directly recover the needed linear combination. The article is part review and in part presents new results: the portion dealing with finite fields is previously known material, while that dealing with rings is mostly new. Attempting to find the best linear map that will enable function computation forces us to consider the linear compression of a source. While in the finite-field case it is known that a source can be linearly compressed down to its entropy, it turns out that the same does not hold in the case of rings. An explanation for this curious interplay between algebra and information theory is also provided in this paper.
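
As a toy illustration of the common-linear-mapping idea (a Körner-Marton-style sketch in our own notation; the matrix, block sizes, and brute-force decoder below are illustrative stand-ins, not the paper's construction): both sources apply the same binary matrix, and the receiver recovers their mod-2 sum directly from the two syndromes.

```python
from itertools import product

import numpy as np

rng = np.random.default_rng(0)
n, k = 12, 8                           # block length and syndrome length (toy sizes)
A = rng.integers(0, 2, size=(k, n))    # common linear encoder shared by both sources

x = rng.integers(0, 2, size=n)         # source 1
e = (rng.random(n) < 0.1).astype(int)  # sparse difference: sources highly correlated
y = (x + e) % 2                        # source 2

sx = A @ x % 2                         # each source transmits only k < n bits
sy = A @ y % 2
s = (sx + sy) % 2                      # by linearity, equals A @ (x XOR y) mod 2

# Recover the mod-2 sum as the lowest-weight pattern consistent with s.
# (A practical scheme uses a good linear code's decoder; with a random toy
# matrix the minimum-weight solution can occasionally differ from x XOR y.)
z = min((c for c in product([0, 1], repeat=n)
         if np.array_equal(A @ np.array(c) % 2, s)), key=sum)
print("recovered:", np.array(z))
print("true xor: ", (x + y) % 2)
```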

Relevance:

10.00%

Publisher:

Abstract:

The critical behaviour near the paramagnetic-to-ferromagnetic transition temperature (TC) of single-crystalline Nd0.6Pb0.4MnO3 has been investigated by static magnetic measurements. The values of TC and of the critical exponents β, γ and δ are estimated by analysing the data in the critical region. The exponent values are very close to those expected for 3D Heisenberg ferromagnets with short-range interactions. Specific heat measurements show a broad cusp at TC (i.e., exponent α < 0), consistent with Heisenberg-like behaviour.
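
For reference, the exponents mentioned above are defined by the standard power laws of the critical region (textbook definitions, not restated in the abstract):

```latex
M_s(T) \propto (T_C - T)^{\beta}, \; T < T_C         % spontaneous magnetization
\chi^{-1}(T) \propto (T - T_C)^{\gamma}, \; T > T_C  % inverse initial susceptibility
M(H) \propto H^{1/\delta}, \; T = T_C                % critical isotherm
% Widom scaling ties them together: \delta = 1 + \gamma/\beta.
% 3D Heisenberg reference values: \beta \approx 0.365, \gamma \approx 1.386.
```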

Relevance:

10.00%

Publisher:

Abstract:

The problem of human detection is challenging, more so when faced with adverse conditions such as occlusion and background clutter. This paper addresses human detection by representing an extracted image feature as a sparse linear combination of chosen dictionary atoms. Detection, along with scale estimation, is performed using the coefficients obtained from the sparse representation. This is of particular interest because we address the problem of scale using a scale-embedded dictionary, whereas conventional methods detect the object by running the detection window at all scales.
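
A generic sparse-representation formulation of the kind the abstract describes (our notation; the paper's exact objective may differ):

```latex
% Represent feature vector y as a sparse combination of dictionary atoms D
\hat{x} = \arg\min_{x}\ \|x\|_1 \quad \text{s.t.} \quad \|y - Dx\|_2 \le \epsilon
% With a scale-embedded dictionary, each atom is tied to a scale, so the
% support of \hat{x} indicates both the presence and the scale of the target.
```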

Relevance:

10.00%

Publisher:

Abstract:

In this paper, we consider a distributed function computation setting, where there are m distributed but correlated sources X1,...,Xm and a receiver interested in computing an s-dimensional subspace generated by [X1,...,Xm]Γ for some (m × s) matrix Γ of rank s. We construct a scheme based on nested linear codes and characterize the achievable rates obtained using the scheme. The proposed nested-linear-code approach performs at least as well as the Slepian-Wolf scheme in terms of sum-rate performance for all subspaces and source distributions. In addition, for a large class of distributions and subspaces, the scheme improves upon the Slepian-Wolf approach. The nested-linear-code scheme may be viewed as uniting under a common framework both the Korner-Marton approach of using a common linear encoder and the Slepian-Wolf approach of employing different encoders at each source. Along the way, we prove an interesting and fundamental structural result on the nature of subspaces of an m-dimensional vector space V with respect to a normalized measure of entropy. Here, each element in V corresponds to a distinct linear combination of the m random variables X1,...,Xm, whose joint probability distribution function is given.
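
As context for the sum-rate claims, the two benchmark schemes that the nested-linear-code approach unifies have simple rate expressions in the best-known special case (two binary sources, receiver wants the mod-2 sum); these are standard results, quoted for orientation:

```latex
R_{\mathrm{SW}} = H(X_1, X_2)           % Slepian-Wolf: recover both, then compute
R_{\mathrm{KM}} = 2\,H(X_1 \oplus X_2)  % Korner-Marton: same linear encoder at both
% When the sources are symmetric and highly correlated,
% H(X_1 \oplus X_2) \ll H(X_1, X_2), so computing the function directly is far
% cheaper; nested linear codes are built to match or beat both benchmarks.
```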

Relevance:

10.00%

Publisher:

Abstract:

The fracture of eutectic Si particles dictates the fracture characteristics of Al-Si based cast alloys, and the morphology of these particles plays an important role in fracture initiation. In the current study, the effects of strain rate, temperature, strain, and heat treatment on Si particle fracture under compression were investigated. Strain rates ranging from 3 × 10⁻⁴/s to 10²/s and three temperatures, RT, 373 K, and 473 K (100 °C and 200 °C), are considered. It is found that Si particle fracture shows a small increase with increasing strain rate and decreases with increasing temperature at 10 pct strain. The flow stress at 10 pct strain exhibits a trend with strain rate and temperature similar to that of particle fracture. Particle fracture also increases with increasing strain. Large and elongated particles show a greater tendency for cracking. Most fracture occurs on particles oriented nearly perpendicular to the loading axis, and the cracks are found to lie almost parallel to the loading axis. At any strain rate, temperature, and strain, Si particle fracture is greater for the heat-treated condition than for the non-heat-treated condition because of the higher flow stress in the heat-treated condition. In addition to Si particles, elongated Fe-rich intermetallic particles are also seen to fracture. These particles have specific crystallographic orientations and fracture along their major axis, with (100) as the cleavage plane. Fracture of these particles might also play a role in the overall fracture behavior of this alloy, since they cleave along their major axis, leading to cracks longer than 200 μm.

Relevance:

10.00%

Publisher:

Abstract:

Let X_1, ..., X_m be a set of m statistically dependent sources over the common alphabet F_q that are linearly independent when considered as functions over the sample space. We consider a distributed function computation setting in which the receiver is interested in the lossless computation of the elements of an s-dimensional subspace W spanned by the elements of the row vector [X_1, ..., X_m]Γ, in which the (m × s) matrix Γ has rank s. A sequence of three increasingly refined approaches is presented, all based on linear encoders. The first approach uses a common matrix to encode all the sources and a Korner-Marton-like receiver to directly compute W. The second improves upon the first by showing that it is often more efficient to compute a carefully chosen superspace U of W. The superspace is identified by showing that the joint distribution of the {X_i} induces a unique decomposition of the set of all linear combinations of the {X_i} into a chain of subspaces identified by a normalized measure of entropy. This subspace chain also suggests a third approach, one that employs nested codes. For any joint distribution of the {X_i} and any W, the sum-rate of the nested-code approach is no larger than that under the Slepian-Wolf (SW) approach, in which W is computed by first recovering each of the {X_i}. For a large class of joint distributions and subspaces W, the nested-code approach is shown to improve upon SW. Additionally, a class of source distributions and subspaces is identified for which the nested-code approach is sum-rate optimal.

Relevance:

10.00%

Publisher:

Abstract:

A new approach that can easily incorporate any generic penalty function into diffuse optical tomographic image reconstruction is introduced to show the utility of nonquadratic penalty functions. The penalty functions used include quadratic (ℓ2), absolute (ℓ1), Cauchy, and Geman-McClure. The regularization parameter in each case was obtained automatically using the generalized cross-validation method. The reconstruction results were systematically compared with each other via quantitative metrics such as relative error and Pearson correlation. The results indicate that, while the quadratic penalty may provide better separation between two closely spaced targets, its contrast recovery capability is limited, and the sparseness-promoting penalties, such as ℓ1, Cauchy, and Geman-McClure, have better utility in reconstructing high-contrast and complex-shaped targets, with the Geman-McClure penalty performing best.
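
The four penalties compared above, in standard robust-statistics forms (the parameterizations below are common textbook choices, not necessarily the paper's):

```python
import numpy as np

def quadratic(t):             # l2 penalty: smooth, penalizes large residuals heavily
    return t**2

def absolute(t):              # l1 penalty: promotes sparsity
    return np.abs(t)

def cauchy(t, c=1.0):         # grows only logarithmically for large |t|
    return (c**2 / 2) * np.log1p((t / c)**2)

def geman_mcclure(t, s=1.0):  # bounded: saturates, strongly suppresses outliers
    return t**2 / (s + t**2)

t = np.linspace(-5, 5, 11)
for f in (quadratic, absolute, cauchy, geman_mcclure):
    print(f"{f.__name__:>14}:", np.round(f(t), 2))
```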

Relevance:

10.00%

Publisher:

Abstract:

1. The relationship between species richness and ecosystem function, as measured by productivity or biomass, is of long-standing theoretical and practical interest in ecology. This is especially true for forests, which represent a majority of global biomass, productivity and biodiversity.

2. Here, we conduct an analysis of relationships between tree species richness, biomass and productivity in 25 forest plots of area 8-50 ha from across the world. The data were collected using standardized protocols, obviating the need to correct for the methodological differences that plague many studies on this topic.

3. We found that at very small spatial grains (0.04 ha) species richness was generally positively related to productivity and biomass within plots, with a doubling of species richness corresponding to an average 48% increase in productivity and 53% increase in biomass. At larger spatial grains (0.25 ha, 1 ha), results were mixed, with negative relationships becoming more common. The results were qualitatively similar but much weaker when we controlled for stem density: at the 0.04 ha spatial grain, a doubling of species richness corresponded to a 5% increase in productivity and a 7% increase in biomass. Productivity and biomass were themselves almost always positively related at all spatial grains.

4. Synthesis. This is the first cross-site study of the effect of tree species richness on forest biomass and productivity that systematically varies spatial grain within a controlled methodology. The scale-dependent results are consistent with theoretical models in which sampling effects and niche complementarity dominate at small scales, while environmental gradients drive patterns at large scales. Our study shows that the relationship of tree species richness with biomass and productivity changes qualitatively when moving from scales typical of forest surveys (0.04 ha) to slightly larger scales (0.25 and 1 ha). This needs to be recognized in forest conservation policy and management.
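
As a point of arithmetic (our gloss, not a result from the paper): when such relationships are fit on log-log axes, a fixed percentage gain per doubling of richness corresponds to a power-law exponent.

```latex
% If \log_2 P = a + b \log_2 S, doubling richness S multiplies productivity P by 2^b.
% The reported 48\% gain per doubling corresponds to b = \log_2(1.48) \approx 0.57;
% the 5\% gain after controlling for stem density gives b = \log_2(1.05) \approx 0.07.
```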

Relevance:

10.00%

Publisher:

Abstract:

We develop several novel signal detection algorithms for two-dimensional (2-D) intersymbol-interference (ISI) channels. The contribution of the paper is two-fold: (1) we extend the one-dimensional maximum a posteriori (MAP) detection algorithm to operate over multiple rows and columns in an iterative manner, study the performance-versus-complexity trade-offs for algorithmic options ranging from single row/column non-iterative detection to a multi-row/column iterative scheme, and analyze the performance of the algorithm; (2) we develop a self-iterating 2-D linear minimum mean-squared error (LMMSE) based equalizer by extending the 1-D linear equalizer framework, and present an analysis of the algorithm. The iterative multi-row/column detector and the self-iterating equalizer are then connected within a turbo framework. We analyze the combined 2-D iterative equalization and detection engine through analysis and simulations. The overall equalizer and detector perform near the MAP estimate with tractable complexity, and beat the Marrow-Wolf detector by at least about 0.8 dB over certain 2-D ISI channels. The coded performance indicates a significant SNR gain of about 8 dB over the uncoded 2-D equalizer-detector system.
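
For orientation, the 1-D linear MMSE equalizer that the 2-D self-iterating equalizer extends has the standard textbook form (our notation, not the paper's):

```latex
% Received block y = Hx + n, with H the ISI convolution matrix and n white
% Gaussian noise of variance \sigma^2; the linear MMSE estimate of x is
\hat{x} = \left(H^{\mathsf{H}}H + \sigma^2 I\right)^{-1} H^{\mathsf{H}} y
% Per the abstract, the 2-D scheme applies such filtering across rows and
% columns, self-iterating and exchanging soft information with the detector
% inside the turbo loop.
```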

Relevance:

10.00%

Publisher:

Abstract:

In this work, we consider two-dimensional (2-D) binary channels in which the 2-D error patterns are constrained so that errors cannot occur in adjacent horizontal or vertical positions. We consider probabilistic and combinatorial models for such channels. A probabilistic model is obtained from a 2-D random field defined by Roth, Siegel and Wolf (2001). Based on the conjectured ergodicity of this random field, we obtain an expression for the capacity of the 2-D non-adjacent-errors channel. We also derive an upper bound for the asymptotic coding rate in the combinatorial model.
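
The error-pattern constraint here (no two errors in horizontally or vertically adjacent positions) is the classical hard-square constraint, so counting admissible patterns is a transfer-matrix computation. A small sketch (our illustration, not the paper's derivation) estimates the per-symbol growth rate of the number of such patterns:

```python
import numpy as np

w = 12  # strip width; increasing w tightens the estimate

# Valid rows: binary masks of width w with no two horizontally adjacent 1s
rows = [r for r in range(1 << w) if r & (r << 1) == 0]

# Transfer matrix: row r may sit directly above row s iff no vertical adjacency
T = np.array([[1.0 if r & s == 0 else 0.0 for s in rows] for r in rows])

lam = max(abs(np.linalg.eigvals(T)))  # growth factor per appended row
print(np.log2(lam) / w)               # per-symbol rate; approaches ~0.5879
                                      # (the hard-square entropy constant) as w grows
```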

Relevance:

10.00%

Publisher:

Abstract:

Iodination of tris(trimethylsilyl)methanethiol (trisylthiol, TsiSH) in tetrahydrofuran provides the new, thermally stable alkanesulfenyl iodide iodo(trisyl)sulfane, [TsiSI], as a violet solid. Iodo(trisyl)sulfane exhibits iodine-iodine contacts between pairs of TsiSI molecules in the solid state. The properties of TsiSI were studied by vibrational spectroscopy and with the help of density functional calculations. In the presence of triethylamine, TsiSI reacts with the antithyroid drugs 6-n-propyl- and 6-methylthiouracil (PTU, MTU) and with N-methylmethimazole (MMI) to form unsymmetric disulfides that were investigated by means of X-ray crystallography. In the solid state, the PTU and MTU derivatives exist as hydrogen-bonded centrosymmetric dimers, whereas the MMI-derived disulfide is an unsymmetric monomer.

Relevance:

10.00%

Publisher:

Abstract:

Advances in forest carbon mapping have the potential to greatly reduce uncertainties in the global carbon budget and to facilitate effective emissions mitigation strategies such as REDD+ (Reducing Emissions from Deforestation and Forest Degradation). Though broad-scale mapping is based primarily on remote sensing data, the accuracy of the resulting forest carbon stock estimates depends critically on the quality of field measurements and calibration procedures. The mismatch in spatial scales between field inventory plots and the larger pixels of current and planned remote sensing products for forest biomass mapping is of particular concern, as it has the potential to introduce errors, especially if forest biomass shows strong local spatial variation. Here, we used 30 large (8-50 ha) globally distributed permanent forest plots to quantify the spatial variability in aboveground biomass density (AGBD, in Mg ha⁻¹) at spatial scales ranging from 5 to 250 m (0.025-6.25 ha), and to evaluate the implications of this variability for calibrating remote sensing products using simulated remote sensing footprints. We found that local spatial variability in AGBD is large for standard plot sizes, averaging 46.3% for replicate 0.1 ha subplots within a single large plot, and 16.6% for 1 ha subplots. AGBD showed weak spatial autocorrelation at distances of 20-400 m, with autocorrelation higher in sites with higher topographic variability and statistically significant in half of the sites. We further show that when field calibration plots are smaller than the remote sensing pixels, the high local spatial variability in AGBD leads to a substantial "dilution" bias in calibration parameters, a bias that cannot be removed with standard statistical methods. Our results suggest that topography should be explicitly accounted for in future sampling strategies and that much care must be taken in designing calibration schemes if remote sensing of forest carbon is to achieve its promise.
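
The "dilution" bias mentioned above is the classical attenuation effect of errors-in-variables regression; as a standard gloss (our framing, not the paper's derivation): when small plots calibrate large pixels, plot-to-pixel sampling error acts as noise on the predictor and shrinks the fitted slope.

```latex
% With true predictor variance \sigma_x^2 and plot-to-pixel sampling error
% variance \sigma_e^2, the fitted calibration slope attenuates as
\hat{\beta} \;\longrightarrow\; \beta\,\frac{\sigma_x^2}{\sigma_x^2 + \sigma_e^2}
% High local AGBD variability inflates \sigma_e^2, so the bias grows as
% calibration plots shrink relative to the remote sensing footprint.
```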

Relevance:

10.00%

Publisher:

Abstract:

Global change is impacting forests worldwide, threatening biodiversity and ecosystem services including climate regulation. Understanding how forests respond is critical to forest conservation and climate protection. This review describes an international network of 59 long-term forest dynamics research sites (CTFS-ForestGEO) useful for characterizing forest responses to global change. Within very large plots (median size 25 ha), all stems ≥1 cm in diameter are identified to species, mapped, and regularly recensused according to standardized protocols. CTFS-ForestGEO spans 25°S to 61°N latitude, is generally representative of the range of bioclimatic, edaphic, and topographic conditions experienced by forests worldwide, and is the only forest monitoring network that applies a standardized protocol to each of the world's major forest biomes. Supplementary standardized measurements at subsets of the sites provide additional information on plants, animals, and ecosystem and environmental variables. CTFS-ForestGEO sites are experiencing multifaceted anthropogenic global change pressures, including warming (average 0.61 °C), changes in precipitation (up to ±30% change), atmospheric deposition of nitrogen and sulfur compounds (up to 3.8 g N m⁻² yr⁻¹ and 3.1 g S m⁻² yr⁻¹), and forest fragmentation in the surrounding landscape (up to 88% reduced tree cover within 5 km). The broad suite of measurements made at CTFS-ForestGEO sites makes it possible to investigate the complex ways in which global change is impacting forest dynamics. Ongoing research across the CTFS-ForestGEO network is yielding insights into how and why the forests are changing, and continued monitoring will provide vital contributions to understanding worldwide forest diversity and dynamics in an era of global change.

Relevance:

10.00%

Publisher:

Abstract:

Dynamic analysis techniques have been proposed to detect potential deadlocks. Analyzing and comprehending each potential deadlock to determine whether it is feasible in a real execution requires significant programmer effort. Moreover, empirical evidence shows that existing analyses are quite imprecise. This imprecision further voids the manual effort invested in reasoning about non-existent defects. In this paper, we address the imprecision of existing analyses and the subsequent manual effort necessary to reason about deadlocks. We propose a novel approach to deadlock detection by designing a dynamic analysis that intelligently leverages execution traces. To reduce the manual effort, we replay the program by making the execution follow a schedule derived from the observed trace. For a real deadlock, its feasibility is automatically verified if the replay causes the execution to deadlock. We have implemented our approach as part of WOLF and have analyzed many large (up to 160 KLoC) Java programs. Our experimental results show that we are able to automatically classify 74% of the reported defects as true (or false) positives, leaving very few defects for manual analysis. The overhead of our approach is negligible, making it a compelling tool for practical adoption.
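
To make the target of such analyses concrete, here is a minimal sketch of the classic lock-order inversion that dynamic deadlock detectors flag (our illustration in Python; WOLF itself analyzes Java programs):

```python
import threading

a, b = threading.Lock(), threading.Lock()

def task1():
    with a:        # task1 acquires a ...
        with b:    # ... then b
            pass

def task2():
    with b:        # task2 acquires b ...
        with a:    # ... then a: acquisition orders form a cycle (a->b, b->a)
            pass

# A benign interleaving runs fine, so observing one execution is not enough:
task1(); task2()
# A dynamic analysis records the cyclic acquisition order as a *potential*
# deadlock; replay (as in the paper) then steers the thread schedule so that
# task1 holds a while task2 holds b, making a feasible deadlock manifest.
print("no deadlock under this sequential schedule")
```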