913 results for "Averaging principle"
Abstract:
The principle of microscopic reversibility is one of the few generalising principles used in organic chemistry that have their roots in the fundamental laws of thermodynamics. It has, therefore, been highly popular. However, although the principle has some important uses, its general application is not without pitfalls. The principle is easy to misunderstand and to misapply: indeed, some of its formulations are semantically dubious. The principle is most dangerous when used as a charm, for it is more subtle than some of its formulations suggest. Above all, the principle may not be used to deduce or disprove the mechanism of a reaction, except when the mechanism in the reverse direction is known independently; such use is perhaps the deadliest misapplication.
Abstract:
Ab initio calculations are used to determine the parameters governing the magnonic band structure of PdₙFeₘ multilayers (n = 2, m ≤ 8). We obtain the layer-resolved magnetization, the exchange coupling, and the magnetic anisotropy of the Pd-Fe structures. The Fe moment is 3.0 μB close to the Pd layers and 2.2 μB in the middle of the Fe layers. An intriguing but rarely considered aspect is that elemental Pd is nonmagnetic, similar to Cu spacer layers in other multilayer systems. This leads to a pre-asymptotic ferromagnetic coupling through the Pd (about 40 mJ/m²). Furthermore, the Pd acquires a small moment due to spin polarization by neighboring Fe atoms, which translates into magnetic anisotropy. The anisotropies are large, in the range typical for L1₀ structures, which is beneficial for high-frequency applications. © 2011 American Institute of Physics. [doi:10.1063/1.3556763]
Abstract:
In this paper, we report on the concept and design principle of ultrafast Raman loss spectroscopy (URLS) as a structure-elucidation tool. URLS is an analogue of stimulated Raman scattering (SRS) but is more sensitive than SRS, with a better signal-to-noise ratio. It involves the interaction of two laser sources, namely a picosecond (ps) Raman pump pulse and a white-light (WL) continuum, with a sample, leading to the generation of loss signals on the higher-energy (blue) side of the Raman pump wavelength, unlike the gain signal observed on the lower-energy (red) side in SRS. These loss signals are at least 1.5 times more intense than the SRS signals. An experimental study providing insight into the origin of this extra intensity in URLS compared with SRS is reported. Furthermore, because the protocol by design requires signal detection on the higher-energy side, it eliminates interference from fluorescence, which appears on the red side. Unlike CARS, URLS signals are not precluded by the non-resonant background and, being a self-phase-matched process, URLS is experimentally easier. Copyright © 2011 John Wiley & Sons, Ltd.
Abstract:
We develop a continuum theory to model the low-energy excitations of a generic four-band time-reversal-invariant electronic system with boundaries. We propose a variational energy functional for the wavefunctions which allows us to derive natural boundary conditions valid for such systems. Our formulation is particularly suited to developing a continuum theory of the protected edge/surface excitations of topological insulators in both two and three dimensions. By a detailed comparison of our analytical formulation with tight-binding calculations on ribbons of topological insulators modelled by the Bernevig-Hughes-Zhang (BHZ) Hamiltonian, we show that the continuum theory with a natural boundary condition provides an appropriate description of the low-energy physics.
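For orientation, the BHZ model referred to above is commonly written in the following two-block form (basis and sign conventions vary between papers; A, B, C, D, M are material parameters):

```latex
% BHZ Hamiltonian: two spin blocks related by time reversal
H(\mathbf{k}) =
\begin{pmatrix}
  h(\mathbf{k}) & 0 \\
  0 & h^{*}(-\mathbf{k})
\end{pmatrix},
\qquad
h(\mathbf{k}) = \epsilon(\mathbf{k})\,\mathrm{I}_{2}
              + \mathbf{d}(\mathbf{k})\cdot\boldsymbol{\sigma},
```

with ε(k) = C − D k² and d(k) = (A k_x, A k_y, M − B k²); in a common convention the model is in the topological phase, with protected edge states, when M/B > 0.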
Abstract:
Bhutani N, Ray S, Murthy A. Is saccade averaging determined by visual processing or movement planning? J Neurophysiol 108: 3161-3171, 2012. First published September 26, 2012; doi:10.1152/jn.00344.2012. Saccade averaging, which causes subjects' gaze to land between the locations of two targets when faced with simultaneously or sequentially presented stimuli, has often been used as a probe to investigate the nature of the computations that transform sensory representations into an oculomotor plan. Since saccadic movements involve at least two processing stages (a visual stage that selects a target and a movement stage that prepares the response), saccade averaging can occur due to interference in either visual processing or movement planning. By having human subjects perform two versions of a saccadic double-step task, in which the stimuli remained the same but different instructions were provided (REDIRECT gaze to the later-appearing target vs. FOLLOW the sequence of targets in their order of appearance), we tested two alternative hypotheses. If saccade averaging were due to visual processing alone, the pattern of saccade averaging would be expected to remain the same across task conditions. However, whereas subjects produced averaged saccades between the two targets in the FOLLOW condition, they produced hypometric saccades in the direction of the initial target in the REDIRECT condition, suggesting that the interaction between competing movement plans produces saccade averaging.
Abstract:
Low-density parity-check (LDPC) codes are a class of linear block codes that are decoded by running the belief propagation (BP) algorithm, or its log-likelihood ratio form (LLR-BP), over the factor graph of the code. One disadvantage of LDPC codes is the onset of an error floor at high values of signal-to-noise ratio, caused by trapping sets. In this paper, we propose a two-stage decoder to deal with different types of trapping sets: oscillating trapping sets are handled by the first stage of the decoder and elementary trapping sets by the second stage. Simulation results on the regular PEG (504,252,3,6) code and the irregular PEG (1024,518,15,8) code show that the proposed two-stage decoder performs significantly better than the standard decoder.
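The inner loop that such two-stage decoders wrap is standard iterative message passing. A minimal sketch of the min-sum approximation to LLR-BP (a toy decoder, not the paper's two-stage scheme; the small parity-check matrix used below is an assumption for illustration):

```python
# Toy min-sum decoder (an approximation of LLR-BP; not the paper's
# two-stage scheme). H is a parity-check matrix given as lists of 0/1
# rows; llr holds channel LLRs, with positive values favouring bit 0.
def min_sum_decode(H, llr, max_iter=20):
    m, n = len(H), len(H[0])
    c2v = [[0.0] * n for _ in range(m)]   # check-to-variable messages
    hard = [0 if l >= 0 else 1 for l in llr]
    for _ in range(max_iter):
        # Variable-to-check: total belief minus the incoming message.
        total = [llr[i] + sum(c2v[j][i] for j in range(m) if H[j][i])
                 for i in range(n)]
        v2c = [[total[i] - c2v[j][i] if H[j][i] else 0.0
                for i in range(n)] for j in range(m)]
        # Check-to-variable: product of signs times minimum magnitude.
        for j in range(m):
            idx = [i for i in range(n) if H[j][i]]
            for i in idx:
                others = [v2c[j][k] for k in idx if k != i]
                sign = 1.0
                for x in others:
                    sign = sign if x >= 0 else -sign
                c2v[j][i] = sign * min(abs(x) for x in others)
        # Tentative hard decision; stop when all checks are satisfied.
        total = [llr[i] + sum(c2v[j][i] for j in range(m) if H[j][i])
                 for i in range(n)]
        hard = [0 if t >= 0 else 1 for t in total]
        if all(sum(H[j][i] * hard[i] for i in range(n)) % 2 == 0
               for j in range(m)):
            break
    return hard
```

On a (7,4) Hamming parity-check matrix, for example, a single flipped bit of the all-zero codeword is corrected within a couple of iterations.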
Abstract:
Purpose - In the present work, a numerical method, based on the well-established enthalpy technique, is developed to simulate the growth of binary alloy equiaxed dendrites in the presence of melt convection.
Design/methodology/approach - The principle of volume averaging is used to formulate the governing equations (mass, momentum, energy and species conservation), which are solved using a coupled explicit-implicit method. The velocity and pressure fields are obtained using a fully implicit finite volume approach, whereas the energy and species conservation equations are solved explicitly to obtain the enthalpy and solute concentration fields. As a model problem, the growth of a single crystal in a two-dimensional cavity filled with an undercooled melt is simulated.
Findings - Comparison of the simulation results with available solutions obtained using the level set method and the phase field method shows good agreement. The effects of melt flow on dendrite growth rate and solute distribution along the solid-liquid interface are studied. A faster growth rate of the upstream dendrite arm is observed for binary alloys, which can be attributed to enhanced heat transfer due to convection as well as lower solute pile-up at the solid-liquid interface. Subsequently, the influence of the thermal and solutal Peclet numbers and of undercooling on the dendrite tip velocity is investigated.
Originality/value - As the present enthalpy-based microscopic solidification model with melt convection shares its framework with the enthalpy models popularly used at the macroscopic scale, it lays the foundation for effective multiscale solidification models.
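At its core, the enthalpy technique evolves a single enthalpy field and recovers temperature from an enthalpy-temperature relation, so the phase interface never needs to be tracked explicitly. A minimal 1D sketch for a pure substance (a simplification of the paper's coupled binary-alloy scheme; all parameter values are assumptions for illustration):

```python
# Explicit 1D enthalpy-method step for dH/dt = k d2T/dx2 (pure substance;
# a simplification of the paper's binary-alloy model; parameter values
# are illustrative assumptions).
def enthalpy_step(H, dx, dt, k=1.0, rho=1.0, c=1.0, L=1.0, Tm=0.0):
    def temperature(h):
        # Enthalpy-temperature relation: sensible heat on either side of
        # the latent-heat plateau at the melting point Tm.
        if h < 0.0:                # fully solid
            return Tm + h / (rho * c)
        if h > rho * L:            # fully liquid
            return Tm + (h - rho * L) / (rho * c)
        return Tm                  # mushy cell: absorbing latent heat
    T = [temperature(h) for h in H]
    Hn = H[:]                      # boundary cells kept fixed (Dirichlet)
    for i in range(1, len(H) - 1):
        Hn[i] = H[i] + dt * k * (T[i + 1] - 2 * T[i] + T[i - 1]) / dx ** 2
    return Hn
```

Repeatedly applying this step advances the melting/solidification front; the liquid fraction in a mushy cell is simply h / (rho * L).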
Abstract:
In this paper, we present an extension of the iterative closest point (ICP) algorithm that simultaneously registers multiple 3D scans. While ICP fails to utilize the multiview constraints available, our method exploits the information redundancy in a set of 3D scans by using the averaging of relative motions. This averaging method utilizes the Lie group structure of motions, resulting in a 3D registration method that is both efficient and accurate. In addition, we present two variants of our approach, i.e., a method that solves for multiview 3D registration while obeying causality and a transitive correspondence variant that efficiently solves the correspondence problem across multiple scans. We present experimental results to characterize our method and explain its behavior, as well as that of some other multiview registration methods in the literature. We establish the superior accuracy of our method in comparison to these multiview methods with registration results on a set of well-known real datasets of 3D scans.
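The Lie-group averaging such methods rely on can be illustrated in its simplest setting, SO(2), where the log map is just the wrapped rotation angle. A toy intrinsic (Karcher) mean, not the paper's full multiview solver:

```python
# Toy intrinsic (Karcher) mean on SO(2): average rotations by repeatedly
# averaging log-mapped residuals in the Lie algebra and mapping back
# with exp. For SO(2) the log map reduces to angle wrapping.
import math

def wrap(a):
    """Wrap an angle into (-pi, pi] (the log map on SO(2))."""
    return math.atan2(math.sin(a), math.cos(a))

def karcher_mean(angles, iters=20):
    mu = angles[0]
    for _ in range(iters):
        # Mean residual in the Lie algebra (tangent space at mu).
        step = sum(wrap(a - mu) for a in angles) / len(angles)
        mu = wrap(mu + step)       # exp map back onto the group
        if abs(step) < 1e-12:
            break
    return mu
```

Unlike a naive arithmetic mean of angles, this construction respects the group structure, which is what makes the averaging of full 3D motions (rotations plus translations) well behaved.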
Abstract:
The hydrogen peroxide (H2O2) level in biological samples is used as an important index in various studies. Quantification of the H2O2 level in tissue fractions in the presence of H2O2-metabolizing enzymes may yield incorrect results. A modification is proposed for the spectrofluorimetric determination of H2O2 by the homovanillic acid (HVA) oxidation method. The modification is to precipitate biological samples with cold trichloroacetic acid (TCA, 5% w/v), followed by neutralization with K2HPO4, before the fluorimetric estimation of H2O2 is performed. TCA was used to precipitate the protein portion of the tissue fractions. After employing the above modification, it was observed that the H2O2 content in tissue samples was >= 2-fold higher than that observed with the unmodified method. A minimum 2 h incubation of samples in the reaction mixture was required for completion of the reaction. The stability of the HVA dimer reaction product was found to be > 12 h. The method was validated using known concentrations of H2O2 and the enzyme catalase, which quenches H2O2 as a substrate. This method can be used to determine tissue H2O2 levels more accurately without an internal standard, and multiple samples can be processed at a time with inexpensive additional reagents such as TCA and K2HPO4.
Abstract:
We consider two variants of the classical gossip algorithm. The first variant is a version of asynchronous stochastic approximation. We highlight a fundamental difficulty associated with the classical asynchronous gossip scheme, viz., that it may not converge to a desired average, and suggest an alternative scheme based on reinforcement learning that has guaranteed convergence to the desired average. We then discuss a potential application to a wireless network setting with simultaneous link activation constraints. The second variant is a gossip algorithm for distributed computation of the Perron-Frobenius eigenvector of a nonnegative matrix. While the first variant draws upon a reinforcement learning algorithm for an average cost controlled Markov decision problem, the second variant draws upon a reinforcement learning algorithm for risk-sensitive control. We then discuss potential applications of the second variant to ranking schemes, reputation networks, and principal component analysis.
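The classical pairwise gossip iteration both variants build on is simple to state: at each step a randomly chosen edge replaces its two endpoint values by their mean, conserving the sum. A toy synchronous driver (the 4-node cycle graph below is an assumption for illustration, not from the paper):

```python
# Classical pairwise gossip averaging on an undirected graph.
# Each step picks a random edge and averages its endpoint values;
# the sum (hence the network average) is conserved at every step.
import random

def gossip(values, edges, steps=2000, seed=0):
    rng = random.Random(seed)
    x = list(values)
    for _ in range(steps):
        i, j = rng.choice(edges)
        m = 0.5 * (x[i] + x[j])
        x[i] = x[j] = m            # both endpoints adopt the mean
    return x
```

On a connected graph all values contract geometrically toward the global average, e.g. `gossip([0, 4, 8, 12], [(0, 1), (1, 2), (2, 3), (3, 0)])` drives every node toward 6. The abstract's point is that the asynchronous version of this scheme can fail to converge to the desired average, motivating the reinforcement-learning alternative.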
Abstract:
The standard approach to signal reconstruction in frequency-domain optical-coherence tomography (FDOCT) is to apply the inverse Fourier transform to the measurements. This technique offers limited resolution (due to Heisenberg's uncertainty principle). We propose a new super-resolution reconstruction method based on a parametric representation. We consider multilayer specimens, wherein each layer has a constant refractive index, and show that the backscattered signal from such a specimen fits accurately into the framework of the finite-rate-of-innovation (FRI) signal model and is represented by a finite number of free parameters. We deploy the high-resolution Prony method and show that high-quality, super-resolved reconstruction is possible with fewer measurements (about one-fourth of the number required for the standard Fourier technique). To further improve robustness to noise in practical scenarios, we take advantage of an iterated singular-value decomposition algorithm (Cadzow denoiser). We present results of Monte Carlo analyses and assess the statistical efficiency of the reconstruction techniques by comparing their performance against the Cramér-Rao bound. Reconstruction results on experimental data obtained from technical as well as biological specimens show a distinct improvement in resolution and signal-to-reconstruction-noise ratio offered by the proposed method in comparison with the standard approach.
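The Prony step at the heart of FRI methods can be sketched for the smallest nontrivial case, a sum of K = 2 complex exponentials (a toy annihilating-filter solve; the paper's method handles general K and adds SVD-based Cadzow denoising for robustness):

```python
# Toy Prony / annihilating-filter solve for K = 2 exponentials:
# given x[n] = c1*z1**n + c2*z2**n, the filter (1, a1, a2) annihilates
# the signal, x[n] + a1*x[n-1] + a2*x[n-2] = 0, and the poles z1, z2
# are the roots of z**2 + a1*z + a2.
import cmath

def prony_poles(x):
    """Recover the two poles from the first four samples of x."""
    # Two annihilation equations in (a1, a2), solved by Cramer's rule:
    #   [x[1] x[0]] [a1]   [-x[2]]
    #   [x[2] x[1]] [a2] = [-x[3]]
    det = x[1] * x[1] - x[0] * x[2]
    a1 = (-x[2] * x[1] + x[0] * x[3]) / det
    a2 = (-x[3] * x[1] + x[2] * x[2]) / det
    disc = cmath.sqrt(a1 * a1 - 4 * a2)
    return (-a1 + disc) / 2, (-a1 - disc) / 2
```

For instance, the samples of 0.9**n + 0.5**n are [2, 1.4, 1.06, 0.854], from which the poles 0.9 and 0.5 are recovered exactly; resolution is then limited by noise and model order rather than by the Fourier uncertainty principle.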