917 results for improvement of Lagrangian bounds
Abstract:
Structural and electrical properties of Eu2O3 films grown on Si(100) in the 500–600 °C temperature range by low pressure metalorganic chemical vapor deposition are reported. As-grown films also contain an impurity Eu1−xO phase, which is removed upon annealing in an O2 ambient. The film morphology consists of uniform spherical mounds (40–60 nm). Electrical properties of the films, examined by capacitance-voltage measurements, exhibit fixed oxide charges in the range of −1.5×10¹¹ to −6.0×10¹⁰ cm⁻² and a dielectric constant in the range of 8–23. Annealing results in a drastic improvement of the electrical properties. The effect of oxygen nonstoichiometry on the film properties is briefly discussed.
Abstract:
Homogenization of partial differential equations is a relatively new area with tremendous applications in various branches of the engineering sciences, such as material science, porous media, the study of vibrations of thin structures, and composite materials, to name a few. Though material scientists and others had a reasonable idea of the homogenization process, it lacked a good mathematical theory until the early seventies. The first proper mathematical procedure was developed in the seventies, and over the last 30 years or so the subject has flourished both in applications and mathematically. This is not a full survey article; on the other hand, we will not be concentrating on a specialized problem. We do indicate certain specialized problems of our interest without much detail, but that is not the main theme of the article. The aim is to give an introductory presentation catering to a wider audience. We go through a few examples to understand the homogenization procedure in a general perspective, together with applications. We also present the various mathematical techniques available and, where possible, some details about some of them. A possible definition of homogenization is that it is a process of understanding a heterogeneous (inhomogeneous) medium, in which the heterogeneities are at the microscopic level, as in composite materials, by means of a homogeneous medium. In other words, one would like to obtain a homogeneous description of a highly oscillating inhomogeneous medium. We also present generalizations to nonlinear problems, porous media and so on. Finally, we would like to look at the closely related issue of optimal bounds, which is itself an independent area of research.
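A standard one-dimensional example (textbook material, included only as an illustration and not drawn from the article itself): for the problem

\[ -\frac{d}{dx}\Big(a\big(\tfrac{x}{\varepsilon}\big)\,\frac{du_\varepsilon}{dx}\Big) = f \ \text{ in } (0,1), \qquad u_\varepsilon(0)=u_\varepsilon(1)=0, \]

with $a$ periodic and bounded away from zero, the solutions $u_\varepsilon$ converge as $\varepsilon \to 0$ to the solution of the homogenized problem

\[ -\frac{d}{dx}\Big(\bar a\,\frac{du}{dx}\Big) = f, \qquad \bar a = \Big(\int_0^1 \frac{dy}{a(y)}\Big)^{-1}, \]

that is, the effective coefficient is the harmonic mean of $a$, not its arithmetic mean, a first hint that homogenized behaviour cannot be obtained by naive averaging.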
Abstract:
In this paper, we analyze the throughput and energy efficiency performance of the user datagram protocol (UDP) using linear, binary exponential, and geometric backoff algorithms at the link layer (LL) on point-to-point wireless fading links. Using a first-order Markov chain representation of the packet success/failure process on fading channels, we derive analytical expressions for the throughput and energy efficiency of UDP/LL with and without LL backoff. The analytical results are verified through simulations. We also evaluate the mean delay and delay variation of voice packets and the energy efficiency performance over a wireless link that uses UDP for transport of voice packets and the proposed backoff algorithms at the LL. We show that the proposed LL backoff algorithms achieve an energy efficiency improvement of the order of 2-3 dB compared to LL with no backoff, without compromising much on the throughput and delay performance at the UDP layer. Such energy savings through protocol means will improve battery life in wireless mobile terminals.
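To fix intuition for the three backoff flavors named above, here is a minimal sketch of the retransmission-wait computation (illustrative only; the slot granularity, cap, growth ratio, and the reading of "geometric" as fixed-ratio growth are assumptions, not the paper's implementation):

def backoff_slots(scheme, attempt, base=1, max_slots=64):
    """Number of slots to wait before retransmission attempt number `attempt` (1, 2, ...).
    Illustrative only: slot granularity, cap and growth ratio are assumptions."""
    if scheme == "linear":            # wait grows linearly with the failure count
        slots = base * attempt
    elif scheme == "binary_exp":      # wait doubles after every failure
        slots = base * (2 ** (attempt - 1))
    elif scheme == "geometric":       # wait grows by a fixed ratio r > 1
        r = 1.5
        slots = base * (r ** (attempt - 1))
    else:
        raise ValueError("unknown backoff scheme")
    return min(int(round(slots)), max_slots)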
Abstract:
We analyze the performance of an SIR-based admission control strategy in cellular CDMA systems with both voice and data traffic. Most studies in the current literature that estimate CDMA system capacity with both voice and data traffic do not take signal-to-interference ratio (SIR) based admission control into account. In this paper, we present an analytical approach to evaluate the outage probability for voice traffic, and the average system throughput and mean delay for data traffic, in a voice/data CDMA system which employs SIR-based admission control. We show that for a data-only system, an improvement of about 25% in both the Erlang capacity and the mean delay performance is achieved with SIR-based admission control as compared to code-availability based admission control. For a mixed voice/data system with 10 Erlangs of voice traffic, the improvement in the mean delay performance for data is about 40%. Also, for a mean delay of 50 ms with 10 Erlangs of voice traffic, the data Erlang capacity improves by about 9%.
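As a rough illustration of what an SIR-based admission test amounts to (a hedged single-cell sketch; the received powers, threshold, and function name are assumptions, not the model used in the paper):

def admit_new_user(active_rx_powers, noise_power, new_rx_power, sir_target):
    """Admit a new CDMA user only if every user's post-admission SIR stays above
    sir_target. Single-cell sketch: powers are those received at the base station,
    and sir_target is assumed to fold in any processing gain."""
    powers = active_rx_powers + [new_rx_power]
    total = sum(powers) + noise_power
    for p in powers:
        if p / (total - p) < sir_target:   # everyone else plus noise is interference
            return False                   # admitting would push someone below target
    return True

# Example: admit_new_user([1.0, 1.0, 1.0], 0.1, 1.0, sir_target=0.2) returns True,
# since each of the four users would see an SIR of about 0.32.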
Abstract:
A numerically stable sequential Primal–Dual LP algorithm for reactive power optimisation (RPO) is presented in this article. The algorithm minimises the voltage stability index C2 [1] of all the load buses to improve the system's static voltage stability. Real-time requirements, such as numerical stability and identification of the most effective subset of controllers (so as to curtail the number of controllers and their movement), are handled effectively by the proposed algorithm. The algorithm has the natural characteristic of selecting the most effective subset of controllers, and hence curtailing insignificant controllers, for improving the objective. Comparison with the transmission loss minimisation objective indicates that the most effective subset of controllers, and their solution, identified by the static voltage stability improvement objective is not the same as that of the transmission loss minimisation objective. The proposed algorithm is suitable for real-time application for the improvement of system static voltage stability.
Abstract:
This study examines the thermal efficiency of the operation of an arc furnace and the effects of harmonics and voltage dips at a factory located near Bangkok. It also attempts to find ways to improve the performance of the arc furnace operation and minimize the effects of both harmonics and voltage dips. A dynamic model of the arc furnace has been developed incorporating both electrical and thermal characteristics. The model can be used to identify potential areas for improvement of the furnace and its operation. Snapshots of waveforms and measurements of RMS values of voltage, current and power at the furnace, at other feeders and at the point of common coupling were recorded. A harmonic simulation program and an electromagnetic transient simulation program were used in the study to model the effects of harmonics and voltage dips and to identify appropriate static and dynamic filters to minimize their effects within the factory. The effects of harmonics and voltage dips were identified in records taken at the point of common coupling of another factory supplied by another feeder of the same substation. Simulation studies were made to examine the results on the second feeder when dynamic filters were used in the factory which operated the arc furnace. The methodology used and the mitigation strategy identified in the study are applicable to general situations in a power distribution system where an arc furnace is part of a customer's load.
Abstract:
A robust numerical solution of the input voltage equations (IVEs) for the independent-double-gate metal-oxide-semiconductor field-effect transistor requires root bracketing methods (RBMs) instead of the commonly used Newton-Raphson (NR) technique, owing to the presence of a nonremovable discontinuity and singularity. In this brief, we make an exhaustive study of the different RBMs available in the literature and propose a single derivative-free RBM that can be applied to both trigonometric and hyperbolic IVEs and offers faster convergence than the previously proposed hybrid NR-Ridders algorithm. We also propose some adjustments to the solution space for the trigonometric IVE that lead to a further reduction of the computation time. The improvement in computational efficiency is demonstrated to be about 60% for the trigonometric IVE and about 15% for the hyperbolic IVE, by implementing the proposed algorithm in a commercial circuit simulator through the Verilog-A interface and simulating a variety of circuit blocks such as a ring oscillator, ripple adder, and twisted ring counter.
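For context, a classical derivative-free root-bracketing iteration of the kind the brief builds on is Ridders' method; the sketch below is the textbook algorithm, not the paper's proposed RBM, and the tolerances and names are illustrative:

import math

def ridders(f, a, b, tol=1e-12, max_iter=100):
    """Find a root of f in [a, b], assuming f(a) and f(b) have opposite signs.
    Derivative-free root bracketing (textbook Ridders' method); the root stays bracketed."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("root not bracketed")
    for _ in range(max_iter):
        m = 0.5 * (a + b)
        fm = f(m)
        s = math.sqrt(fm * fm - fa * fb)
        if s == 0.0:
            return m
        x = m + (m - a) * (math.copysign(1.0, fa - fb) * fm / s)
        fx = f(x)
        if abs(fx) < tol or abs(b - a) < tol:
            return x
        # Re-bracket: keep a sub-interval on which the sign change survives
        if fm * fx < 0:
            a, fa, b, fb = m, fm, x, fx
        elif fa * fx < 0:
            b, fb = x, fx
        else:
            a, fa = x, fx
    return 0.5 * (a + b)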
Abstract:
The factorization theorem for exclusive processes in perturbative QCD predicts the behavior of the pion electromagnetic form factor F(t) at asymptotic spacelike momenta t (= −Q²) < 0. We address the question of the onset energy using a suitable mathematical framework of analytic continuation, which uses as input the phase of the form factor below the first inelastic threshold, known with great precision through the Fermi-Watson theorem from ππ elastic scattering, and the modulus measured from threshold up to 3 GeV by the BABAR Collaboration. The method leads to almost model-independent upper and lower bounds on the spacelike form factor. Further inclusion of the value of the charge radius and the experimental value at −2.45 GeV² measured at JLab considerably increases the strength of the bounds in the region Q² ≲ 10 GeV², excluding the onset of the asymptotic perturbative QCD regime for Q² < 7 GeV². We also compare the bounds with available experimental data and with several theoretical models proposed for the low and intermediate spacelike region.
Abstract:
DNA Ligase IV is responsible for sealing double-strand breaks (DSBs) during nonhomologous end-joining (NHEJ). Inhibiting Ligase IV could result in the accumulation of DSBs, thereby serving as a strategy toward the treatment of cancer. Here, we identify a molecule, SCR7, that inhibits joining of DSBs in a cell-free repair system. SCR7 blocks Ligase IV-mediated joining by interfering with its DNA binding, but not that of T4 DNA Ligase or Ligase I. SCR7 inhibits NHEJ in a Ligase IV-dependent manner within cells and activates the intrinsic apoptotic pathway. More importantly, SCR7 impedes tumor progression in mouse models and, when coadministered with DSB-inducing therapeutic modalities, enhances their sensitivity significantly. This inhibitor targeting NHEJ offers a strategy toward the treatment of cancer and the improvement of existing regimens.
Abstract:
In this paper, we derive hybrid, Bayesian, and marginalized Cramér-Rao lower bounds (HCRB, BCRB, and MCRB) for the single and multiple measurement vector Sparse Bayesian Learning (SBL) problem of estimating compressible vectors and their prior distribution parameters. We assume the unknown vector to be drawn from a compressible Student's t prior distribution. We derive CRBs that encompass the deterministic or random nature of the unknown parameters of the prior distribution and the regression noise variance. We extend the MCRB to the case where the compressible vector is distributed according to a general compressible prior distribution, of which the generalized Pareto distribution is a special case. We use the derived bounds to uncover the relationship between compressibility and Mean Square Error (MSE) in the estimates. Further, we illustrate the tightness and utility of the bounds through simulations, by comparing them with the MSE performance of two popular SBL-based estimators. We find that the MCRB is generally the tightest among the bounds derived and that the MSE performance of the Expectation-Maximization (EM) algorithm coincides with the MCRB for the compressible vector. We also illustrate the dependence of the MSE performance of SBL-based estimators on the compressibility of the vector for several values of the number of observations and at different signal powers.
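For reference, the classical (non-Bayesian) statement that the hybrid, Bayesian and marginalized bounds above generalize is the standard Cramér-Rao inequality (textbook material, not a result of this paper): for any unbiased estimator $\hat{\theta}$ of a deterministic parameter $\theta$,

\[ \mathrm{Cov}(\hat{\theta}) \succeq J(\theta)^{-1}, \qquad J(\theta) = \mathbb{E}\!\left[\left(\frac{\partial \ln p(\mathbf{y};\theta)}{\partial \theta}\right)\left(\frac{\partial \ln p(\mathbf{y};\theta)}{\partial \theta}\right)^{\mathsf{T}}\right], \]

where $J(\theta)$ is the Fisher information matrix; the Bayesian and hybrid variants replace $J$ with information matrices that also average over the prior on the random parameters.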
Abstract:
CrSi and Cr1−xFexSi particles embedded in a CrSi2 matrix have been prepared by hot pressing from CrSi1.9, CrSi2, and CrSi2.1 powders produced by ball milling using either WC or stainless steel (SS) milling media. The samples were characterized by powder X-ray diffraction, scanning and transmission electron microscopy, and electron microprobe analysis. The final crystallite size of CrSi2 obtained from the XRD patterns is about 40 and 80 nm for the SS- and WC-milled powders, respectively, whereas the size of the second-phase inclusions in the hot pressed samples is about 1–5 μm. The temperature dependence of the electrical resistivity, Seebeck coefficient, thermal conductivity, and figure of merit (ZT) was analyzed in the temperature range from 300 to 800 K. While the ball-milling process results in a lower electrical resistivity and thermal conductivity, due to the presence of the inclusions and the refinement of the matrix microstructure, respectively, the Seebeck coefficient is negatively affected by the formation of the inclusions, which leads to a modest improvement of ZT.
Abstract:
CsI can be used as a photocathode material in UV photon detectors. The detection efficiency of the detector depends strongly on the photoemission properties of the photocathode. CsI is very hygroscopic in nature, which limits the photoelectron yield from the photocathode when it is exposed to humid air, even for a short duration during photocathode mounting or transfer. We report here on the improvement of the photoemission properties of both thick (300 nm) and thin (30 nm) UV-sensitive CsI films exposed to humid air by means of a vacuum treatment. (C) 2013 Optical Society of America
Abstract:
Accurate and timely prediction of weather phenomena, such as hurricanes and flash floods, requires high-fidelity, compute-intensive simulations of multiple finer regions of interest within a coarse simulation domain. Current weather applications execute these nested simulations sequentially using all the available processors, which is sub-optimal due to their sub-linear scalability. In this work, we present a strategy for parallel execution of multiple nested domain simulations based on partitioning the 2-D processor grid into disjoint rectangular regions associated with each domain. We propose a novel combination of performance prediction, processor allocation methods, and topology-aware mapping of the regions on torus interconnects. Experiments on IBM Blue Gene systems using WRF show that the proposed strategies result in performance improvements of up to 33% with topology-oblivious mapping and up to an additional 7% with topology-aware mapping over the default sequential strategy.
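A toy sketch of the allocation idea, splitting the processors among sibling nested domains in proportion to predicted work before carving each share into a disjoint rectangular sub-grid (illustrative only; the function name, rounding rule, and the assumption that there are at least as many processors as domains are mine, not the paper's):

def allocate_processors(predicted_times, total_procs):
    """Split total_procs among sibling nested domains in proportion to their
    predicted execution times, so the slowest domain does not stall the rest.
    Assumes total_procs >= number of domains (illustrative sketch only)."""
    total_time = sum(predicted_times)
    shares = [max(1, round(total_procs * t / total_time)) for t in predicted_times]
    # Repair rounding so the shares sum exactly to total_procs
    while sum(shares) > total_procs:
        shares[shares.index(max(shares))] -= 1
    while sum(shares) < total_procs:
        shares[shares.index(min(shares))] += 1
    return shares

# Example: three nested domains with predicted times 4 s, 2 s and 2 s on 512 processors
# give shares [256, 128, 128]; each share is then mapped to a disjoint rectangle
# of the 2-D processor grid, topology-aware or not.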
Abstract:
The miniaturization of electronic and ionic devices with thermionic cathodes and the improvement of their vacuum properties are questions of very great interest to the electronic engineer. However, there have been no proposals so far to analyse the problem of miniaturization of such devices in a fundamental way. The present work suggests a choice of the geometrical shape of the cathode, the anode and the envelope of the device that may help towards such a fundamental approach. It is shown that a design in which the cathode and the envelope of the tube are made of thin prismatic shape and the anode coincides with the envelope offers a striking advantage over the conventional cylindrical design in respect of overall size. The use of the prismatic shape will lead to considerable economy in materials and may facilitate simpler production techniques. In respect of the main criteria of vacuum, namely the grade of vacuum, the internal volume occupied by residual gases, the evolution of gases in the internal space and the diffusion of gases from outside into the device, it is shown that the prismatic form is at least as good as, if not somewhat superior to, the cylindrical form. In the actual construction of thin prismatic tubes, many practical problems will arise, the most important being the mechanical strength and stability of the structure. But the changeover from the conventional cylindrical to the new prismatic form, with its basic advantages, is a development that merits close attention.
Abstract:
Recently, it has been shown that fusion of the estimates of a set of sparse recovery algorithms results in an estimate better than the best estimate in the set, especially when the number of measurements is very limited. Though these schemes provide better sparse signal recovery performance, their higher computational requirement makes them less attractive for low latency applications. To alleviate this drawback, in this paper we develop a progressive fusion based scheme for low latency applications in compressed sensing. In progressive fusion, the estimates of the participating algorithms are fused progressively according to the availability of the estimates. The availability of the estimates depends on the computational complexity of the participating algorithms and, in turn, on their latency requirements. Unlike other fusion algorithms, the proposed progressive fusion algorithm provides quick interim results and successive refinements during the fusion process, which is highly desirable in low latency applications. We analyse the developed scheme by providing sufficient conditions for improvement of the CS reconstruction quality and show its practical efficacy through numerical experiments using synthetic and real-world data. (C) 2013 Elsevier B.V. All rights reserved.
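One way to picture progressive fusion is the sketch below, which assumes the common fusion rule of least squares restricted to the union of estimated supports and fuses each algorithm's support as soon as it becomes available; this is an illustrative reading under those assumptions, not the paper's exact scheme:

import numpy as np

def fuse_supports(y, A, supports, k):
    """Least-squares fit of y on the union of candidate supports, then keep the
    k largest coefficients (an illustrative fusion rule)."""
    union = sorted(set().union(*supports))
    x = np.zeros(A.shape[1])
    coef, *_ = np.linalg.lstsq(A[:, union], y, rcond=None)
    x[union] = coef
    keep = np.argsort(np.abs(x))[::-1][:k]
    mask = np.zeros_like(x, dtype=bool)
    mask[keep] = True
    x[~mask] = 0.0
    return x

def progressive_fusion(y, A, k, estimates_in_arrival_order):
    """Fuse candidate supports progressively as each algorithm finishes, so an
    interim estimate is available after every arrival (low-latency behaviour)."""
    collected, interim = [], []
    for support in estimates_in_arrival_order:   # e.g. a fast greedy solver first, slower solvers later
        collected.append(set(support))
        interim.append(fuse_supports(y, A, collected, k))
    return interim  # interim[-1] is the final fused estimate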