70 results for post-Newtonian approximation to general relativity
Abstract:
The concept of a fully-developed flow based on the hypothesis of selective memory is here applied to general wall-jet type flows. In the presence of a (constant) external stream, the free-stream velocity and the jet momentum flux are taken to be the chief quantities governing the development of the wall jet: two additional nondimensional parameters, representing a momentum flux Reynolds number and the relative momentum defect in the initial boundary layer, are shown to have only a secondary effect on the fully-developed flow. The standard correlations so determined are also found to predict quite well the flow development in Gartshore and Newman's experiments on wall jets in adverse pressure gradients; possible reasons for this somewhat surprising result are discussed. Finally it is shown, by application to the still-air case, that the parameters discovered in incompressible flow are, with appropriate but straightforward modification, successful in describing compressible wall jets also.
Abstract:
We provide a survey of some of our recent results ([9], [13], [4], [6], [7]) on the analytical performance modeling of IEEE 802.11 wireless local area networks (WLANs). We first present extensions of the decoupling approach of Bianchi ([1]) to the saturation analysis of IEEE 802.11e networks with multiple traffic classes. We have found that, even when analysing WLANs with unsaturated nodes, the following state-dependent service model works well: when a certain set of nodes is non-empty, their channel attempt behaviour is obtained from the corresponding fixed-point analysis of the saturated system. We present our experience in using this approximation to model multimedia traffic over an IEEE 802.11e network using the enhanced DCF channel access (EDCA) mechanism. We have found that we can model TCP-controlled file transfers, VoIP packet telephony, and streaming video in the IEEE 802.11e setting with this simple approximation.
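For context, in the single-class saturated case the decoupling approach referred to above reduces to Bianchi's two coupled equations for the attempt probability and the conditional collision probability. The sketch below solves that classical single-class fixed point by bisection; it is only a reference illustration, not the multi-class 802.11e analysis of the surveyed papers, and the backoff parameters W and m are illustrative defaults rather than values from those papers.

```python
# Reference sketch: single-class Bianchi saturation fixed point.
# W = CW_min + 1 and m (number of backoff doublings) are illustrative values.

def attempt_prob(p, W=16, m=6):
    """Per-slot attempt probability of a saturated node, given the
    conditional collision probability p (Bianchi's formula)."""
    num = 2.0 * (1.0 - 2.0 * p)
    den = (1.0 - 2.0 * p) * (W + 1) + p * W * (1.0 - (2.0 * p) ** m)
    return num / den

def saturation_fixed_point(n, tol=1e-10):
    """Solve p = 1 - (1 - tau(p))^(n-1) for n saturated nodes by bisection."""
    def residual(p):
        return 1.0 - (1.0 - attempt_prob(p)) ** (n - 1) - p
    lo, hi = 0.0, 0.99       # residual is positive at 0 and negative at 0.99
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    p = 0.5 * (lo + hi)
    return attempt_prob(p), p

for n in (2, 5, 10, 20):
    tau, p = saturation_fixed_point(n)
    print(f"n={n:2d}  attempt prob tau={tau:.4f}  collision prob p={p:.4f}")
```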
Abstract:
Software and microprocessor-based hardware for waveform synthesis using Walsh functions are described. The software is based on Walsh function generation using Hadamard matrices and on the truncated Walsh series expansion of the waveform to be synthesized. The hardware employs six microprocessor-controlled programmable Walsh function generators (PWFGs) to generate the first six non-vanishing terms of the truncated Walsh series. An improved approximation to a given waveform may be achieved by employing additional PWFGs.
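As an illustration of the underlying idea (not the authors' software or hardware), the sketch below builds Walsh functions as rows of a Sylvester Hadamard matrix, computes the Walsh-series coefficients of a target waveform by projection, and reconstructs the waveform from six terms, mirroring the six PWFGs. Here the six largest-magnitude terms are kept and the rows are in natural Hadamard order rather than sequency order.

```python
# Minimal sketch of truncated Walsh-series synthesis using a Hadamard matrix.
import numpy as np
from scipy.linalg import hadamard

N = 64                                   # samples per period (power of two)
t = np.arange(N) / N
target = np.sin(2 * np.pi * t)           # example waveform to synthesize

H = hadamard(N)                          # rows: +/-1 Walsh/Hadamard functions
coeffs = H @ target / N                  # Walsh-series coefficients (H H^T = N I)

# Keep six terms (the role of the six PWFGs); here the largest in magnitude.
keep = np.argsort(np.abs(coeffs))[-6:]
approx = H[keep].T @ coeffs[keep]

print("kept rows:", sorted(keep.tolist()))
print("max abs error:", np.max(np.abs(target - approx)))
```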
Abstract:
The coherent flame model uses the strain rate to predict the reaction rate per unit flame surface area, together with a procedure that solves for the dynamics of flame surfaces, to predict species distributions. The strain-rate formula for the reaction rate is obtained from the analytical solution for a flame in a laminar, plane stagnation-point flow. Here, the formula's effectiveness is examined by comparison with data from a direct numerical simulation (DNS) of a round, jet-like flow that undergoes transition to turbulence. Significant differences due to general flow features can be understood qualitatively: model predictions are good in the braids between vortex rings, which are present in the near field of round jets, as the strain rate there is extensional and reaction surfaces are isolated. In several other regions, the strain rate is compressive or flame surfaces are folded close together; there, the predictions are poor, as the local flow no longer resembles the model flow. Quantitative comparisons showed some discrepancies. A modified, consistent application of the strain-rate solution did not show significant changes in the prediction of mean reaction rate distributions.
Abstract:
We report enhanced emission and gain narrowing in Rhodamine 590 perchlorate dye in an aqueous suspension of polystyrene microspheres. A systematic experimental study of the threshold condition for, and the gain narrowing of, the stimulated emission over a wide range of dye concentrations and scatterer number densities showed several interesting features, even though the transport mean free path far exceeded the system size. The conventional diffusive-reactive approximation to radiative transfer in an inhomogeneously illuminated random amplifying medium, which is valid for a transport mean free path much smaller than the system size, is clearly inapplicable here. We propose a new probabilistic approach for the present case of dense, random, weak scatterers involving the otherwise rare and ignorable sub-mean-free-path scatterings, now made effective by the high gain in the medium, which is consistent with the experimentally observed features. (C) 1997 Optical Society of America.
Abstract:
This paper presents a novel hypothesis on the function of massive feedback pathways in mammalian visual systems. We propose that cortical feature detectors compete not for the right to represent the output at a point, but for exclusive rights to abstract and represent part of the underlying input. Feedback can accomplish this very naturally. A computational model that implements this idea for the problem of line detection is presented, and based on it we suggest a functional role for the thalamo-cortical loop during the perception of lines. We show that the model successfully tackles the so-called Cross problem. Based on some recent experimental results, we discuss the biological plausibility of our model. We also comment on the relevance of our hypothesis (on the role of feedback) to general sensory information processing and recognition. (C) 1998 Published by Elsevier Science Ltd. All rights reserved.
Abstract:
We consider the breaking of a polymer molecule which is fixed at one end and acted upon by a force at the other. The polymer is assumed to be a linear chain joined together by bonds which satisfy the Morse potential. The applied force is found to modify the Morse potential so that the minimum becomes metastable. Breaking is just the decay of this metastable bond, by causing it to go over the barrier. Increasing the force causes the potential to become more and more distorted and eventually leads to the disappearance of the barrier. The limiting force at which the barrier disappears is D_e a/2, with D_e and a the parameters characterizing the Morse potential. The rate of breaking is first calculated using multidimensional quantum transition state theory. We use the harmonic approximation to account for vibrations of all the units. It includes tunneling contributions to the rate, but is valid only above a certain critical temperature. It is possible to obtain an analytical expression for the rate of breaking. We have calculated the rate of breaking for a model which mimics polyethylene. First we calculate the rate of breaking of a single bond, without worrying about the other bonds. Inclusion of the other bonds under the harmonic approximation is found to lower this rate by at most one order of magnitude. Quantum effects are found to increase the rate of breaking and are significant only at temperatures below 150 K. At 300 K, the calculations predict a bond in polyethylene to have a lifetime of only seconds at a force which is only half the limiting force. Calculations were also done using the Lennard-Jones potential. The results for the Lennard-Jones and Morse potentials were rather different, owing to the different long-range behaviors of the two potentials. A calculation including friction was carried out, at the classical level, by assuming that each atom of the chain is coupled to its own collection of harmonic oscillators. Comparison of the results with the simulations of Oliveira and Taylor [J. Chem. Phys. 101, 10118 (1994)] showed our rate to be two to three orders of magnitude higher. As a possible explanation of the discrepancy, we consider the translational motion of the ends of the broken chains. Using a continuum approximation for the chain, we find that in the absence of friction, the rate of the process can be limited by the rate at which the two broken ends separate from one another, and the lowering of the rate is at most a factor of 2 for the parameters used in the simulation (for polyethylene). In the presence of friction, we find that the rate can be lowered by one to two orders of magnitude, bringing our results into reasonable agreement with the simulations.
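A short worked check of the quoted limiting force, assuming the standard Morse form of the bond potential tilted by a constant pulling force F:

```latex
% Sketch: why the barrier vanishes at F = D_e a / 2 for a Morse bond
% V(x) = D_e (1 - e^{-a x})^2 pulled by a constant force F.
\[
  V_F(x) = D_e\bigl(1 - e^{-a x}\bigr)^2 - F x ,
  \qquad
  V_F'(x) = 2 D_e a\, e^{-a x}\bigl(1 - e^{-a x}\bigr) - F .
\]
% With u = e^{-a x} \in (0,1), the term 2 D_e a\, u(1-u) attains its maximum
% D_e a / 2 at u = 1/2.  For F < D_e a / 2 the equation V_F'(x) = 0 has two
% roots (metastable minimum plus barrier top); at F = D_e a / 2 they merge and
% the barrier disappears, which is the limiting force quoted in the abstract.
```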
Abstract:
The 4×4 discrete cosine transform is one of the most important building blocks for the emerging video coding standard, viz. H.264. The conventional implementation approximates the transform matrix elements to facilitate integer arithmetic, for which the hardware is suitably designed. Though the transform coding does not involve any multiplications, the quantization process requires sixteen 16-bit multiplications. The algorithm used here eliminates the approximation in transform coding and the multiplication in the quantization process by using algebraic integer coding. We propose an area-efficient implementation of the transform and quantization blocks based on algebraic integer coding. The designs were synthesized with 90 nm TSMC CMOS technology and were also implemented on a Xilinx FPGA. The gate count and throughput achieved are 7000 and 125 Msamples/s, respectively.
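For reference, the conventional multiplier-free 4x4 core transform of H.264 (the integer approximation to the DCT that the algebraic-integer approach replaces) can be sketched as follows; the scaling that H.264 folds into quantization is omitted here.

```python
# Sketch of the conventional H.264 4x4 forward integer "core" transform.
import numpy as np

Cf = np.array([[1,  1,  1,  1],
               [2,  1, -1, -2],
               [1, -1, -1,  1],
               [1, -2,  2, -1]], dtype=np.int32)

def forward_core_transform(block):
    """Y = Cf . X . Cf^T on a 4x4 integer residual block (realizable with
    adds and shifts in hardware; plain integer matrix products here)."""
    X = np.asarray(block, dtype=np.int32)
    return Cf @ X @ Cf.T

if __name__ == "__main__":
    residual = np.arange(16, dtype=np.int32).reshape(4, 4) - 8
    print(forward_core_transform(residual))
```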
Abstract:
The Gibbs energy of mixing for the system Fe3O4-FeAl2O4 was determined at 1573 K using a gas-metal-oxide equilibration technique. Oxide solid-solution samples were equilibrated with Pt foils under controlled CO+CO2 gas streams. The equilibrium iron concentration in the foil was determined by chemical analysis. The cation distribution between tetrahedral and octahedral sites in the spinel crystal can be calculated from site-preference energies and used as an alternative method of determining some thermodynamic properties, including the Gibbs energy of mixing. The solvus occurring at low temperatures in the system Fe3O4-FeAl2O4 was used to derive the effect of lattice distortion due to cation size difference on the enthalpy of mixing and to obtain a better approximation to the measured thermodynamic quantities.
Abstract:
Electric power systems are exposed to various contingencies. Network contingencies often lead to overloading of network branches and unsatisfactory voltages, and can also cause problems of stability or voltage collapse. To maintain the security of such systems, it is desirable to estimate the effect of contingencies and plan suitable measures to improve system security and stability. This paper presents an approach for selecting suitable locations for unified power flow controllers (UPFCs), considering both normal operation and network contingencies, after evaluating the degree of severity of the contingencies. The contingency ranking is evaluated using composite-criteria-based fuzzy logic to eliminate the masking effect. In addition to real power loadings and bus voltage violations, the fuzzy approach also uses voltage stability indices at the load buses as post-contingent quantities to evaluate the network contingency ranking. Suitable UPFC locations are then selected on the basis of improved system security and stability. The proposed approach has been tested under simulated conditions on several power systems, and results for a 24-node real-life equivalent EHV power network and the 39-node (modified) New England test system are presented for illustration.
Abstract:
This study examines the thermal efficiency of arc furnace operation and the effects of harmonics and voltage dips at a factory located near Bangkok. It also attempts to find ways to improve the performance of the arc furnace operation and minimize the effects of both harmonics and voltage dips. A dynamic model of the arc furnace has been developed incorporating both electrical and thermal characteristics. The model can be used to identify potential areas for improvement of the furnace and its operation. Snapshots of waveforms and measurements of RMS values of voltage, current and power at the furnace, at other feeders and at the point of common coupling were recorded. A harmonic simulation program and an electromagnetic transient simulation program were used to model the effects of harmonics and voltage dips and to identify appropriate static and dynamic filters to minimize their effects within the factory. The effects of harmonics and voltage dips were identified in records taken at the point of common coupling of another factory supplied by another feeder of the same substation. Simulation studies were carried out to examine the effects on the second feeder when dynamic filters were used in the factory operating the arc furnace. The methodology used and the mitigation strategy identified in the study are applicable to the general situation in a power distribution system where an arc furnace forms part of a customer's load.
Abstract:
We study a State Dependent Attempt Rate (SDAR) approximation to model M queues (one queue per node) served by the Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) protocol as standardized in the IEEE 802.11 Distributed Coordination Function (DCF). The approximation is that, when n of the M queues are non-empty, the (transmission) attempt probability of each of the n non-empty nodes is given by the long-term (transmission) attempt probability of n saturated nodes. With the arrival of packets into the M queues according to independent Poisson processes, the SDAR approximation reduces a single cell with non-saturated nodes to a Markovian coupled queueing system. We provide a sufficient condition under which the joint queue length Markov chain is positive recurrent. For the symmetric case of equal arrival rates and finite and equal buffers, we develop an iterative method which leads to accurate predictions for important performance measures such as collision probability, throughput and mean packet delay. We replace the MAC layer with the SDAR model of contention by modifying the NS-2 source code pertaining to the MAC layer, keeping all other layers unchanged. By this model-based simulation technique at the MAC layer, we achieve speed-ups (w.r.t. MAC layer operations) up to 5.4. Through extensive model-based simulations and numerical results, we show that the SDAR model is an accurate model for the DCF MAC protocol in single cells. (C) 2012 Elsevier B.V. All rights reserved.
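A toy slotted-time illustration of the SDAR idea (not the authors' coupled-queue analysis or their NS-2 integration) is sketched below: whenever n of the M queues are non-empty, each non-empty node attempts with the n-node saturation attempt probability. The tau_sat values used here are placeholders; in the paper they come from the saturation fixed-point analysis.

```python
# Toy SDAR simulation: attempt probability depends on the number of
# non-empty queues.  All numerical values are illustrative placeholders.
import random

M = 5                                    # number of nodes (one queue each)
LAMBDA = 0.02                            # per-node packet arrival prob / slot
tau_sat = {n: 0.12 / max(n, 1) for n in range(M + 1)}   # placeholder values

queues = [0] * M
served = collisions = 0
for _ in range(200_000):
    # Bernoulli (Poisson-like) arrivals into each queue.
    for i in range(M):
        if random.random() < LAMBDA:
            queues[i] += 1
    nonempty = [i for i in range(M) if queues[i] > 0]
    n = len(nonempty)
    if n == 0:
        continue
    # SDAR: every non-empty node attempts with the n-node saturation probability.
    attempts = [i for i in nonempty if random.random() < tau_sat[n]]
    if len(attempts) == 1:
        queues[attempts[0]] -= 1
        served += 1
    elif len(attempts) > 1:
        collisions += 1

print(f"packets served: {served}, collision slots: {collisions}")
```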
Abstract:
Edge-preserving smoothing is widely used in image processing, and bilateral filtering is one way to achieve it. The bilateral filter is a nonlinear combination of domain and range filters. Implementing the classical bilateral filter is computationally intensive, owing to the nonlinearity of the range filter. In the standard form, the domain and range filters are Gaussian functions, and the performance depends on the choice of the filter parameters. Recently, a constant-time implementation of the bilateral filter has been proposed, based on a raised-cosine approximation to the Gaussian, to facilitate fast implementation. We address the problem of determining the optimal parameters for the raised-cosine-based constant-time implementation of the bilateral filter. To determine the optimal parameters, we propose the use of Stein's unbiased risk estimator (SURE). The fast bilateral filter accelerates the search for optimal parameters by enabling faster optimization of the SURE cost. Experimental results show that the SURE-optimal raised-cosine-based bilateral filter has nearly the same performance as the SURE-optimal standard Gaussian bilateral filter and the oracle mean squared error (MSE)-based optimal bilateral filter.
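To make the range-kernel substitution concrete, the brute-force sketch below implements a bilateral filter whose Gaussian range kernel is replaced by the raised-cosine approximation cos(t/(sigma_r*sqrt(N)))**N ≈ exp(-t^2/(2*sigma_r^2)). The shiftability trick that turns this into a constant-time algorithm, and the SURE-based parameter selection, are omitted; all parameter values are illustrative.

```python
# Direct (non constant-time) bilateral filter with a raised-cosine range kernel.
import numpy as np

def raised_cosine_bilateral(img, sigma_s=3.0, sigma_r=30.0, N=10, radius=6):
    img = img.astype(np.float64)
    H, W = img.shape
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))    # domain kernel
    pad = np.pad(img, radius, mode='reflect')
    for y in range(H):
        for x in range(W):
            patch = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            t = patch - img[y, x]
            # Raised-cosine approximation of the Gaussian range kernel.
            rng = np.cos(np.clip(t / (sigma_r * np.sqrt(N)),
                                 -np.pi / 2, np.pi / 2)) ** N
            w = spatial * rng
            out[y, x] = np.sum(w * patch) / np.sum(w)
    return out

if __name__ == "__main__":
    noisy = np.random.default_rng(0).normal(128, 20, (64, 64))
    smooth = raised_cosine_bilateral(noisy)
    print(smooth.shape, float(smooth.mean()))
```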
Abstract:
Fast and efficient channel estimation is key to achieving high data rate performance in mobile and vehicular communication systems, where the channel is fast time-varying. To this end, this work proposes and optimizes channel-dependent training schemes for reciprocal Multiple-Input Multiple-Output (MIMO) channels with beamforming (BF) at the transmitter and receiver. First, assuming that Channel State Information (CSI) is available at the receiver, a channel-dependent Reverse Channel Training (RCT) signal is proposed that enables efficient estimation of the BF vector at the transmitter with a minimum training duration of only one symbol. In contrast, conventional orthogonal training requires a minimum training duration equal to the number of receive antennas. A tight approximation to the capacity lower bound on the system is derived, which is used as a performance metric to optimize the parameters of the RCT. Next, assuming that CSI is available at the transmitter, a channel-dependent forward-link training signal is proposed and its power and duration are optimized with respect to an approximate capacity lower bound. Monte Carlo simulations illustrate the significant performance improvement offered by the proposed channel-dependent training schemes over the existing channel-agnostic orthogonal training schemes.
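The sketch below is only an illustration, under assumed TDD reciprocity, of why a single channel-dependent reverse training symbol can suffice: if the receiver transmits the conjugate of its combining vector w_r, the transmitter observes conj(H^H w_r) plus noise, from which the transmit beamforming direction H^H w_r (normalized) follows by conjugation. The paper's actual RCT design and its capacity-bound optimization are not reproduced here; all parameters are illustrative.

```python
# One-symbol reverse channel training (RCT) illustration for a reciprocal
# MIMO link with beamforming at both ends.
import numpy as np

rng = np.random.default_rng(1)
Nt, Nr, snr_db = 4, 4, 20

# Forward channel: y_rx = H x_tx; reverse channel under reciprocity is H^T.
H = (rng.normal(size=(Nr, Nt)) + 1j * rng.normal(size=(Nr, Nt))) / np.sqrt(2)

# Receiver chooses its combining vector, e.g. the dominant left singular vector.
U, s, Vh = np.linalg.svd(H)
w_r = U[:, 0]

# One RCT symbol: the receiver transmits conj(w_r); the transmitter observes
# H^T conj(w_r) + noise = conj(H^H w_r) + noise.
noise_std = np.sqrt(0.5 * 10 ** (-snr_db / 10))
y_t = H.T @ np.conj(w_r) + noise_std * (rng.normal(size=Nt) + 1j * rng.normal(size=Nt))

# Transmit beamforming vector estimated from that single observation.
w_t = np.conj(y_t) / np.linalg.norm(y_t)

gain_est = abs(np.conj(w_r) @ H @ w_t)      # effective |w_r^H H w_t|
print("beamforming gain with 1-symbol RCT:", round(gain_est, 3))
print("ideal dominant-mode gain (sigma_1) :", round(s[0], 3))
```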
Abstract:
Sparse recovery methods utilize l_p-norm-based regularization in the estimation problem, with 0 <= p <= 1. These methods are particularly useful when the number of independent measurements is limited, which is typically the case for the diffuse optical tomographic image reconstruction problem. These sparse recovery methods, along with an approximation to the l_0-norm, have been deployed for the reconstruction of diffuse optical images. Their performance was compared systematically using both numerical and gelatin phantom cases to show that these methods hold promise in improving the reconstructed image quality.
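As a generic illustration of l_p-regularized recovery (a standard technique, not the specific reconstruction code used in this work), the sketch below applies iteratively reweighted least squares (IRLS) to an under-determined synthetic problem with 0 < p <= 1; the matrix A, data y and all parameters are synthetic.

```python
# IRLS sketch for min_x ||y - A x||_2^2 + lam * ||x||_p^p, 0 < p <= 1.
import numpy as np

def irls_lp(A, y, p=0.5, lam=1e-2, eps=1e-6, iters=50):
    """Repeatedly solve (A^T A + lam * diag(w)) x = A^T y with smoothed
    weights w_i ~ (|x_i|^2 + eps)^(p/2 - 1)."""
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(iters):
        w = (np.abs(x) ** 2 + eps) ** (p / 2.0 - 1.0)
        x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ y)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m, k = 100, 40, 5                      # under-determined: m < n
    A = rng.normal(size=(m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
    y = A @ x_true + 0.01 * rng.normal(size=m)
    x_hat = irls_lp(A, y, p=0.5)
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```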