298 results for Exponential Sum
Abstract:
We report the observation of persistent photoconductivity (PPC) in flower-shaped PbS dendrites grown by the hydrothermal method. Potential fluctuations, arising from the various confinement regimes in the dendrite branches and from surface traps, are likely responsible for the PPC observed here. We also observed photocurrent quenching and a decreased dark current in the PPC below 40 K, due to the presence of a metastable state, whereas positive PPC was observed in the temperature region 40-220 K. Dark conductivity measurements, as well as the time-constant parameters obtained from stretched-exponential fits of the PPC, also showed the metastable-state-related transition around 50 K.
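As an illustration of this type of analysis, the sketch below fits a stretched-exponential (Kohlrausch-Williams-Watts) form to a photocurrent transient to extract a time constant and stretching exponent; the data and the tau and beta values are placeholders, not results from the study.

import numpy as np
from scipy.optimize import curve_fit

def kww_decay(t, i_inf, i_0, tau, beta):
    """Stretched-exponential (KWW) relaxation of the photocurrent:
    I(t) = I_inf + (I_0 - I_inf) * exp[-(t/tau)^beta]."""
    return i_inf + (i_0 - i_inf) * np.exp(-(t / tau) ** beta)

# Placeholder transient: a decaying photocurrent sampled after the illumination is switched off.
t = np.linspace(1.0, 1000.0, 500)            # seconds
i_meas = kww_decay(t, 1.0, 5.0, 120.0, 0.6)  # synthetic "measurement"
i_meas += np.random.normal(0.0, 0.02, t.size)

# Fit; p0 is a rough initial guess for (I_inf, I_0, tau, beta).
popt, _ = curve_fit(kww_decay, t, i_meas, p0=[0.5, 4.0, 100.0, 0.5])
print("tau = %.1f s, beta = %.2f" % (popt[2], popt[3]))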
Abstract:
Suspension cultures of Catharanthus roseus were used to evaluate ultraviolet-B (UV-B) treatment as an abiotic elicitor of secondary metabolites. Dispersed cell suspension cultures derived from C. roseus leaves were irradiated with UV-B for 5 min in the late exponential phase and in the stationary phase. The stationary-phase cultures were more responsive to UV-B irradiation than the late-exponential-phase cultures. Catharanthine and vindoline increased 3-fold and 12-fold, respectively, on treatment with a 5-min UV-B irradiation.
Abstract:
A new form of a multi-step transversal linearization (MTL) method is developed and numerically explored in this study for a numeric-analytical integration of non-linear dynamical systems under deterministic excitations. As with other transversal linearization methods, the present version also requires that the linearized solution manifold transversally intersects the non-linear solution manifold at a chosen set of points or cross-section in the state space. However, a major point of departure of the present method is that it has the flexibility of treating non-linear damping and stiffness terms of the original system as damping and stiffness terms in the transversally linearized system, even though these linearized terms become explicit functions of time. From this perspective, the present development is closely related to the popular practice of tangent-space linearization adopted in finite element (FE) based solutions of non-linear problems in structural dynamics. The only difference is that the MTL method would require construction of transversal system matrices in lieu of the tangent system matrices needed within an FE framework. The resulting time-varying linearized system matrix is then treated as a Lie element using Magnus’ characterization [W. Magnus, On the exponential solution of differential equations for a linear operator, Commun. Pure Appl. Math., VII (1954) 649–673] and the associated fundamental solution matrix (FSM) is obtained through repeated Lie-bracket operations (or nested commutators). An advantage of this approach is that the underlying exponential transformation could preserve certain intrinsic structural properties of the solution of the non-linear problem. Yet another advantage of the transversal linearization lies in the non-unique representation of the linearized vector field – an aspect that has been specifically exploited in this study to enhance the spectral stability of the proposed family of methods and thus contain the temporal propagation of local errors. A simple analysis of the formal orders of accuracy is provided within a finite dimensional framework. Only a limited numerical exploration of the method is presently provided for a couple of popularly known non-linear oscillators, viz. a hardening Duffing oscillator, which has a non-linear stiffness term, and the van der Pol oscillator, which is self-excited and has a non-linear damping term.
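For reference, the exponential representation referred to above (the Magnus expansion, stated here in its standard form rather than quoted from the paper) writes the fundamental solution matrix of the linearized system \(\dot{y} = A(t)\,y\) as a single matrix exponential whose exponent is a series built from nested commutators of A:

\Phi(t) = \exp\big(\Omega(t)\big), \qquad
\Omega(t) = \int_0^t A(t_1)\,dt_1 + \tfrac{1}{2}\int_0^t\!\int_0^{t_1} \big[A(t_1),\,A(t_2)\big]\,dt_2\,dt_1 + \cdots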
Abstract:
Barrierless chemical reactions have often been modeled as a Brownian motion on a one-dimensional harmonic potential energy surface with a position-dependent reaction sink or window located near the minimum of the surface. This simple (but highly successful) description leads to a nonexponential survival probability only at small to intermediate times but to exponential decay in the long-time limit. However, in several reactive events involving proteins and glasses, the reactions are found to exhibit strongly nonexponential (power-law) decay kinetics even at long times. In order to address such reactions, here we introduce a model of barrierless chemical reaction in which the motion along the reaction coordinate undergoes dispersive diffusion. A complete analytical solution of the model can be obtained only in the frequency domain, but an asymptotic solution is obtained in the long-time limit. In this case, the asymptotic long-time decay of the survival probability is a power law of the Mittag-Leffler functional form. When the barrier height is increased, the decay of the survival probability still remains nonexponential, in contrast to the ordinary Brownian motion case where the rate is given by the Smoluchowski limit of the well-known Kramers' expression. Interestingly, the reaction under dispersive diffusion is shown to exhibit strong dependence on the initial state of the system, thus predicting a strong dependence on the excitation wavelength for photoisomerization reactions in a dispersive medium. The theory also predicts a fractional viscosity dependence of the rate, which is often observed in reactions occurring in complex environments.
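For reference, the Mittag-Leffler function and the power-law tail it implies are (standard definitions, with 0 < α < 1 the dispersion exponent, not reproduced from the paper):

S(t) \simeq E_\alpha\!\big(-(t/\tau)^\alpha\big), \qquad
E_\alpha(z) = \sum_{k=0}^{\infty} \frac{z^k}{\Gamma(\alpha k + 1)}, \qquad
E_\alpha\!\big(-(t/\tau)^\alpha\big) \sim \frac{(t/\tau)^{-\alpha}}{\Gamma(1-\alpha)} \quad (t \gg \tau).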
Abstract:
The properties of the generalized survival probability, that is, the probability of not crossing an arbitrary location R during relaxation, have been investigated experimentally (via scanning tunneling microscope observations) and numerically. The results confirm that the generalized survival probability decays exponentially with a time constant τ_s(R). The distance dependence of the time constant is shown to be τ_s(R) = τ_s0 exp[-R/w(T)], where w²(T) is the material-dependent mean-squared width of the step fluctuations. The result reveals the dependence on the physical parameters of the system inherent in the prior prediction of the time constant scaling with R/L^α, with L the system size and α the roughness exponent. The survival behavior is also analyzed using a contrasting concept, the generalized inside survival S_in(t, R), which involves fluctuations to an arbitrary location R further from the average. Numerical simulations of the inside survival probability also show an exponential time dependence, and the extracted time constant empirically shows (R/w)^λ behavior, with λ varying over 0.6 to 0.8 as the sampling conditions are changed. The experimental data show similar behavior, and can be well fit with λ = 1.0 for T = 300 K, and 0.5
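A minimal numerical sketch of the quantity being analyzed: estimate the generalized survival probability S(t, R) from an ensemble of fluctuation trajectories (plain random walks here, purely illustrative) and extract a time constant from its exponential decay. The trajectory model, the threshold R, and the fitting window are assumptions of the sketch, not parameters from the study.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative trajectories x(t): discrete random walks standing in for step-edge fluctuations.
n_traj, n_steps, dt = 2000, 500, 1.0
x = np.cumsum(rng.normal(0.0, 0.1, size=(n_traj, n_steps)), axis=1)

R = 1.5  # arbitrary crossing level, in the same units as x

# Generalized survival: fraction of trajectories that have not yet crossed R by time t.
crossed = np.maximum.accumulate(np.abs(x) >= R, axis=1)
S = 1.0 - crossed.mean(axis=0)
t = dt * np.arange(1, n_steps + 1)

# Fit ln S(t) = -t / tau_s over the range where S is well sampled.
mask = S > 0.05
tau_s = -1.0 / np.polyfit(t[mask], np.log(S[mask]), 1)[0]
print("estimated tau_s =", tau_s)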
Abstract:
It is shown that pure exponential discs in spiral galaxies are capable of supporting slowly varying discrete global lopsided modes, which can explain the observed features of lopsidedness in the stellar discs. Using linearized fluid dynamical equations with the softened self-gravity and pressure of the perturbation as the collective effect, we derive self-consistently a quadratic eigenvalue equation for the lopsided perturbation in the galactic disc. On solving this, we find that the ground-state mode shows the observed characteristics of the lopsidedness in a galactic disc, namely that the fractional Fourier amplitude A_1 increases smoothly with the radius. These lopsided patterns precess in the disc with a very slow pattern speed and with no preferred sense of precession. We show that the lopsided modes in the stellar disc are long-lived because of a substantial reduction (approximately a factor of 10 compared to the local free precession rate) in the differential precession. The numerical solution of the equations shows that the ground-state lopsided modes are either very slowly precessing stationary normal-mode oscillations of the disc or growing modes with a slow growth rate, depending on the relative importance of the collective effect of the self-gravity. N-body simulations are performed to test the spontaneous growth of lopsidedness in a pure stellar disc. Both approaches are then compared and interpreted in terms of long-lived global m = 1 instabilities with almost zero pattern speed.
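For clarity, the lopsidedness measure A_1 is, in its standard definition (stated here for reference, not quoted from the paper), the m = 1 Fourier amplitude of the disc surface density normalized by the axisymmetric (m = 0) term, evaluated on an underlying pure exponential disc:

A_1(R) = \frac{\left|\int_0^{2\pi} \Sigma(R,\phi)\, e^{-i\phi}\, d\phi \right|}{\int_0^{2\pi} \Sigma(R,\phi)\, d\phi},
\qquad \Sigma_0(R) = \Sigma_c\, e^{-R/R_d}.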
Abstract:
Hydrographic observations were taken along two coastal sections and one open ocean section in the Bay of Bengal during the 1999 southwest monsoon, as a part of the Bay of Bengal Monsoon Experiment (BOBMEX). The coastal section in the northwestern Bay of Bengal, which was occupied twice, captured a freshwater plume in its two stages: first when the plume was restricted to the coastal region although separated from the coast, and then when the plume spread offshore. Below the freshwater layer there were indications of an undercurrent. The coastal section in the southern Bay of Bengal was marked by intense coastal upwelling in a 50 km wide band. In regions under the influence of the freshwater plume, the mixed layer was considerably thinner, and a temperature inversion occasionally formed. The mixed layer and isothermal layer were of similar depth for most of the profiles within and outside the freshwater plume, and the temperature below the mixed layer decreased rapidly down to the top of the seasonal thermocline. There was no barrier layer even in regions well under the influence of the freshwater plume. The freshwater plume in the open Bay of Bengal does not advect to the south of 16 degrees N during the southwest monsoon. A model of the Indian Ocean, forced by heat, momentum and freshwater fluxes for the year 1999, reproduces the freshwater plume in the Bay of Bengal reasonably well. Model currents as well as the surface circulation calculated as the sum of geostrophic and Ekman drift show a southeastward North Bay Monsoon Current (NBMC) across the Bay, which forms the southern arm of a cyclonic gyre. The NBMC separates the very low salinity waters of the northern Bay from the higher salinities in the south and thus plays an important role in the regulation of near surface stratification. (c) 2007 Elsevier Ltd. All rights reserved.
Abstract:
The coherent quantum evolution of a one-dimensional many-particle system after slowly sweeping the Hamiltonian through a critical point is studied using a generalized quantum Ising model containing both integrable and nonintegrable regimes. It is known from previous work that universal power laws of the sweep rate appear in such quantities as the mean number of excitations created by the sweep. Several other phenomena are found that are not reflected by such averages: there are two different scaling behaviors of the entanglement entropy and a relaxation that is power law in time rather than exponential. The final state of evolution after the quench is not characterized by any effective temperature, and the Loschmidt echo converges algebraically for long times, with cusplike singularities in the integrable case that are dynamically broadened by nonintegrable perturbations.
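One common realization of such a generalized quantum Ising chain, given here only as an illustrative assumption rather than the specific model of the paper, augments the integrable transverse-field Ising chain with a longitudinal field h_z that breaks integrability:

H = -\sum_i \left( J\, \sigma^z_i \sigma^z_{i+1} + h_x\, \sigma^x_i + h_z\, \sigma^z_i \right),
\qquad h_z = 0 \ \text{(integrable)}, \quad h_z \neq 0 \ \text{(nonintegrable)}.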
Abstract:
Structural relaxation behavior of a rapidly quenched (RQ) and a slowly cooled Pd40Cu30Ni10P20 metallic glass was investigated and compared. Differential scanning calorimetry was employed to monitor the relaxation enthalpies at the glass transition temperature, T_g, and the Kohlrausch-Williams-Watts (KWW) stretched exponential function was used to describe their variation with annealing time. It was found that the rate of enthalpy recovery is higher in the ribbon, implying that the bulk is more resistant to relaxation at low annealing temperatures. This was attributed to the possibility of the cooling rate affecting the locations where the glasses get trapped within the potential energy landscape. The RQ process traps a larger amount of free volume, resulting in higher fragility, and in turn relaxes at the slightest thermal excitation (annealing). The slowly cooled bulk metallic glass (BMG), on the other hand, entraps less free volume and has more short-range ordering, hence requiring a large amount of perturbation to access lower-energy basins.
Abstract:
In this paper we consider the problems of computing a minimum co-cycle basis and a minimum weakly fundamental co-cycle basis of a directed graph G. A co-cycle in G corresponds to a vertex partition (S, V ∖ S), and a {−1, 0, 1} edge incidence vector is associated with each co-cycle. The vector space over ℚ generated by these vectors is the co-cycle space of G; alternately, the co-cycle space is the orthogonal complement of the cycle space of G. The minimum co-cycle basis problem asks for a set of co-cycles that span the co-cycle space of G and whose sum of weights is minimum. Weakly fundamental co-cycle bases are a special class of co-cycle bases; they form a natural superclass of strictly fundamental co-cycle bases, and it is known that computing a minimum-weight strictly fundamental co-cycle basis is NP-hard. We show that the co-cycle basis corresponding to the cuts of a Gomory-Hu tree of the underlying undirected graph of G is a minimum co-cycle basis of G, and that it is also weakly fundamental.
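A minimal sketch, using networkx and a placeholder example graph, of the construction described above: build a Gomory-Hu tree of the underlying undirected graph, and read off one co-cycle per tree edge from the vertex partition obtained by deleting that edge.

import networkx as nx

# Placeholder weighted undirected graph standing in for the underlying graph of G.
G = nx.Graph()
G.add_weighted_edges_from([
    ("a", "b", 3), ("b", "c", 1), ("c", "d", 4),
    ("d", "a", 2), ("a", "c", 5),
])

T = nx.gomory_hu_tree(G, capacity="weight")

cocycle_basis = []
for u, v in T.edges():
    # Removing (u, v) from the tree splits the vertices into S and V \ S.
    T_cut = T.copy()
    T_cut.remove_edge(u, v)
    S = nx.node_connected_component(T_cut, u)
    # The co-cycle is the set of G-edges with exactly one endpoint in S.
    cut_edges = [(x, y) for x, y in G.edges() if (x in S) != (y in S)]
    cocycle_basis.append(cut_edges)

for cut in cocycle_basis:
    print(cut)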
Abstract:
We consider the problem of computing an approximate minimum cycle basis of an undirected edge-weighted graph G with m edges and n vertices; the extension to directed graphs is also discussed. In this problem, a {0,1} incidence vector is associated with each cycle and the vector space over F_2 generated by these vectors is the cycle space of G. A set of cycles is called a cycle basis of G if it forms a basis for its cycle space. A cycle basis where the sum of the weights of the cycles is minimum is called a minimum cycle basis of G. Cycle bases of low weight are useful in a number of contexts, e.g. the analysis of electrical networks, structural engineering, chemistry, and surface reconstruction. We present two new algorithms to compute an approximate minimum cycle basis. For any integer k >= 1, we give (2k - 1)-approximation algorithms with expected running time O(k m n^{1+2/k} + m n^{(1+1/k)(ω-1)}) and deterministic running time O(n^{3+2/k}), respectively. Here ω is the best exponent of matrix multiplication; it is presently known that ω < 2.376. Both algorithms are o(m^ω) for dense graphs. This is the first time that any algorithm which computes sparse cycle bases with a guarantee drops below the Θ(m^ω) bound. We also present a 2-approximation algorithm with O(m^ω √(n log n)) expected running time, a linear-time 2-approximation algorithm for planar graphs, and an O(n^3)-time 2.42-approximation algorithm for the complete Euclidean graph in the plane.
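For orientation only, the sketch below constructs a strictly fundamental cycle basis from a spanning tree of a placeholder graph and reports its total weight; this is the simple baseline construction, not the (2k - 1)-approximation algorithms of the paper, and its weight can be far from minimum.

import networkx as nx

# Placeholder weighted undirected graph.
G = nx.Graph()
G.add_weighted_edges_from([
    ("a", "b", 1), ("b", "c", 2), ("c", "d", 1),
    ("d", "a", 3), ("a", "c", 2), ("b", "d", 4),
])

T = nx.minimum_spanning_tree(G, weight="weight")

basis, total = [], 0
for u, v, w in G.edges(data="weight"):
    if T.has_edge(u, v):
        continue
    # The non-tree edge (u, v) plus the tree path v -> u closes one fundamental cycle.
    path = nx.shortest_path(T, source=v, target=u)
    cycle = list(zip(path, path[1:])) + [(u, v)]
    weight = sum(G[x][y]["weight"] for x, y in cycle)
    basis.append(cycle)
    total += weight

print("cycles:", len(basis), "total weight:", total)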
Abstract:
In this paper, we consider robust joint linear precoder/receive filter designs for the multiuser multiple-input multiple-output (MIMO) downlink that minimize the sum mean square error (SMSE) in the presence of imperfect channel state information at the transmitter (CSIT). The base station (BS) is equipped with multiple transmit antennas, and each user terminal is equipped with one or more receive antennas. We consider a stochastic error (SE) model and a norm-bounded error (NBE) model for the CSIT error. When the CSIT error follows the SE model, we compute the desired downlink precoder/receive filter matrices by solving the simpler uplink problem, exploiting the uplink-downlink duality of the MSE region. When the CSIT error follows the NBE model, we take the worst-case SMSE as the objective function and propose an iterative algorithm for the robust transceiver design. The robustness of the proposed algorithms to imperfections in the CSIT is illustrated through simulations.
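As a hedged illustration of one standard ingredient of such alternating precoder/receive-filter designs (not the paper's robust algorithm itself), the sketch below computes each user's linear MMSE (Wiener) receive filter for a fixed set of precoders, assuming perfect CSIT; the dimensions, channels and precoders are arbitrary placeholders.

import numpy as np

rng = np.random.default_rng(1)

n_tx, n_rx, n_users, n_streams, sigma2 = 4, 2, 2, 1, 0.1

# Placeholder downlink channels H_k (n_rx x n_tx) and precoders F_k (n_tx x n_streams).
H = [rng.normal(size=(n_rx, n_tx)) + 1j * rng.normal(size=(n_rx, n_tx)) for _ in range(n_users)]
F = [rng.normal(size=(n_tx, n_streams)) + 1j * rng.normal(size=(n_tx, n_streams)) for _ in range(n_users)]

def mmse_receiver(Hk, F_all, Fk, sigma2):
    """Wiener (MMSE) receive filter for user k given all precoders:
    G_k = F_k^H H_k^H (H_k (sum_j F_j F_j^H) H_k^H + sigma^2 I)^{-1}."""
    cov = Hk @ sum(Fj @ Fj.conj().T for Fj in F_all) @ Hk.conj().T
    cov += sigma2 * np.eye(Hk.shape[0])
    return Fk.conj().T @ Hk.conj().T @ np.linalg.inv(cov)

G = [mmse_receiver(H[k], F, F[k], sigma2) for k in range(n_users)]
print(G[0].shape)  # (n_streams, n_rx)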
Abstract:
In this paper, new results and insights are derived for the performance of multiple-input, single-output systems with beamforming at the transmitter, when the channel state information is quantized and sent to the transmitter over a noisy feedback channel. It is assumed that there exists a per-antenna power constraint at the transmitter; hence, the equal gain transmission (EGT) beamforming vector is quantized and sent from the receiver to the transmitter. The loss in received signal-to-noise ratio (SNR) relative to perfect beamforming is analytically characterized, and it is shown that at high rates the overall distortion can be expressed as the sum of the quantization-induced distortion and the channel error-induced distortion, and that the asymptotic performance depends on the error-rate behavior of the noisy feedback channel as the number of codepoints gets large. The optimum density of codepoints (also known as the point density) that minimizes the overall distortion subject to a boundedness constraint is shown to be the same as the point density for a noiseless feedback channel, i.e., the uniform density. The binary symmetric channel with random index assignment is a special case of the analysis, and it is shown that as the number of quantized bits gets large the distortion approaches that obtained with random beamforming. The accuracy of the theoretical expressions obtained is verified through Monte Carlo simulations.
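A small Monte Carlo sketch of the quantization-induced part of the SNR loss for equal gain transmission: each antenna phase is quantized to B bits and the received SNR is compared with unquantized EGT. Error-free feedback is assumed here, so the channel-error-induced distortion analyzed in the paper is deliberately not modeled, and the antenna count and bit budget are placeholders.

import numpy as np

rng = np.random.default_rng(2)
n_tx, n_trials, bits = 4, 10000, 3
levels = 2 ** bits

loss = []
for _ in range(n_trials):
    h = (rng.normal(size=n_tx) + 1j * rng.normal(size=n_tx)) / np.sqrt(2)
    # EGT: unit-magnitude weights co-phased with the channel (per-antenna power constraint).
    w_ideal = np.exp(1j * np.angle(h)) / np.sqrt(n_tx)
    # Quantize each phase to the nearest of 2^bits uniformly spaced levels.
    step = 2 * np.pi / levels
    w_quant = np.exp(1j * np.round(np.angle(h) / step) * step) / np.sqrt(n_tx)
    snr_ideal = np.abs(h.conj() @ w_ideal) ** 2
    snr_quant = np.abs(h.conj() @ w_quant) ** 2
    loss.append(snr_ideal - snr_quant)

print("mean SNR loss:", np.mean(loss))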
Abstract:
Sequence design problems are considered in this paper. The problem of sum power minimization in a spread spectrum system can be reduced to the problem of sum capacity maximization, and vice versa. A solution to one of the problems yields a solution to the other. Subsequently, conceptually simple sequence design algorithms known to hold for the white-noise case are extended to the colored-noise case. The algorithms yield an upper bound of 2N - L on the number of sequences, where N is the processing gain and L is the number of non-interfering subsets of users. If some users (at most N - 1) are allowed to signal along a limited number of multiple dimensions, then N orthogonal sequences suffice.
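For the white-noise case, the sum capacity objective referred to above takes the standard form (stated here for reference, with s_k the unit-norm signature sequence of user k, p_k the user powers, N the processing gain, K the number of users, and σ² the noise variance):

C_{\mathrm{sum}} = \frac{1}{2} \log_2 \det\!\Big( I_N + \sigma^{-2} \sum_{k=1}^{K} p_k\, s_k s_k^{\mathsf T} \Big).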