155 results for Armington Assumption
Abstract:
We present planforms of line plumes formed on horizontal surfaces in turbulent convection, along with the length of line plumes measured from these planforms, over a six-decade range of Rayleigh numbers (10^5 < Ra < 10^11) and at three Prandtl numbers (Pr = 0.7, 5.2, 602). Using geometric constraints on the relations for the mean plume spacings, we obtain expressions for the total length of near-wall plumes on horizontal surfaces in turbulent convection. The plume length per unit area (Lp/A), made dimensionless by the near-wall length scale in turbulent convection (Zw), remains constant for a given fluid. The Nusselt number is shown to be directly proportional to Lp H/A for a given fluid layer of height H. An increase in Pr has a weak decreasing influence on Lp/A. These expressions match the measurements, thereby showing that the assumption of laminar natural convection boundary layers in turbulent convection is consistent with the observed total length of line plumes. We then show that similar relationships are obtained based on the assumption that the line plumes are the outcome of the instability of laminar natural convection boundary layers on the horizontal surfaces.
Abstract:
In terabit-density magnetic recording, several bits of data can be replaced by the values of their neighbors in the storage medium. As a result, errors in the medium are dependent on each other and also on the data written. We consider a simple 1-D combinatorial model of this medium. In our model, we assume a setting where binary data are sequentially written on the medium and a bit can erroneously change to the immediately preceding value. We derive several properties of codes that correct this type of error, focusing on bounds on their cardinality. We also define a probabilistic finite-state channel model of the storage medium, and derive lower and upper estimates of its capacity. A lower bound is derived by evaluating the symmetric capacity of the channel, i.e., the maximum transmission rate under the assumption of a uniform input distribution. An upper bound is found by showing that the original channel is a stochastic degradation of another, related channel model whose capacity we can compute explicitly.
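The lower bound above evaluates the symmetric capacity, i.e., mutual information under a uniform input distribution. As a minimal illustrative sketch (not the paper's finite-state model with memory), here is the symmetric capacity of a plain memoryless binary symmetric channel, where it reduces to the closed form 1 - h2(p):

```python
from math import log2

def h2(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def bsc_symmetric_capacity(p):
    """I(X;Y) for a uniform input over a binary symmetric channel
    with crossover probability p: C = 1 - h2(p)."""
    return 1.0 - h2(p)

print(bsc_symmetric_capacity(0.11))  # close to 0.5 bit per channel use
```

For the paper's channel with memory, the same quantity would have to be estimated over the finite-state model rather than evaluated in closed form.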
Abstract:
The paper presents a graphical-numerical method for determining the transient stability limits of a two-machine system under the usual assumptions of constant input, no damping and constant voltage behind transient reactance. The method presented is based on the phase-plane criterion [1, 2], in contrast to the usual step-by-step and equal-area methods. For the transient stability limit of a two-machine system, under the assumptions stated, the sum of the kinetic energy and the potential energy at the instant of fault clearing should just equal the maximum value of the potential energy which the machines can accommodate with the fault cleared. The assumption of constant voltage behind transient reactance is then discarded in favour of the more accurate assumption of constant field flux linkages. Finally, the method is extended to include the effect of field decrement and damping. A number of examples corresponding to each case are worked out, and the results obtained by the proposed method are compared with those obtained by the usual methods.
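The energy criterion described above (kinetic plus potential energy at clearing not exceeding the maximum potential energy the post-fault system can absorb) can be sketched for the classical one-machine-infinite-bus case; the parameter names and values below are hypothetical, not taken from the paper's examples:

```python
import math

def stable_at_clearing(pm, pmax, delta_c, omega_c, M):
    """Energy-function check for the classical swing model:
    stable if KE + PE at the clearing angle delta_c does not exceed
    the PE at the post-fault unstable equilibrium point.
    pm: mechanical input, pmax: peak electrical power (both p.u.),
    omega_c: rotor speed deviation at clearing, M: inertia constant."""
    delta_s = math.asin(pm / pmax)   # post-fault stable equilibrium
    delta_u = math.pi - delta_s      # post-fault unstable equilibrium

    def pe(d):
        # potential energy relative to the stable equilibrium
        return -pm * (d - delta_s) - pmax * (math.cos(d) - math.cos(delta_s))

    ke = 0.5 * M * omega_c ** 2
    return ke + pe(delta_c) <= pe(delta_u)

print(stable_at_clearing(pm=0.8, pmax=2.0, delta_c=0.7, omega_c=0.0, M=0.1))  # True
```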
Abstract:
The widely used Bayesian classifier is based on the assumption of equal prior probabilities for all the classes. However, equal prior probabilities may not guarantee high classification accuracy for the individual classes. Here, we propose a novel technique, the Hybrid Bayesian Classifier (HBC), where the class prior probabilities are determined by unmixing supplemental low spatial-high spectral resolution multispectral (MS) data and are assigned to every pixel of high spatial-low spectral resolution MS data in Bayesian classification. This is demonstrated with two separate experiments. First, class abundances are estimated per pixel by unmixing Moderate Resolution Imaging Spectroradiometer data to be used as prior probabilities, while posterior probabilities are determined from training data obtained from the ground. These have been used for classifying the Indian Remote Sensing Satellite LISS-III MS data through the Bayesian classifier. In the second experiment, abundances obtained by unmixing Landsat Enhanced Thematic Mapper Plus data are used as priors, and posterior probabilities are determined from the ground data to classify IKONOS MS images through the Bayesian classifier. The results indicate that HBC systematically exploits the information from the two image sources, improving the overall accuracy of LISS-III MS classification by 6% and of IKONOS MS classification by 9%. Inclusion of prior probabilities increased the average producer's and user's accuracies by 5.5% and 6.5% in the case of LISS-III MS with six classes, and by 12.5% and 5.4% in IKONOS MS for the five classes considered.
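The core of the HBC idea, posteriors formed from per-pixel priors obtained by unmixing a second image source, can be sketched as follows; the arrays and values are invented for illustration, not the paper's data:

```python
import numpy as np

def hybrid_bayes(likelihoods, priors):
    """Per-pixel Bayesian classification with spatially varying class
    priors (e.g. abundances from unmixing a coarser-resolution image).
    likelihoods: (n_pixels, n_classes) values of p(x | class)
    priors:      (n_pixels, n_classes) per-pixel prior probabilities
    Returns (argmax class per pixel, normalised posteriors)."""
    post = likelihoods * priors              # unnormalised posterior
    post /= post.sum(axis=1, keepdims=True)  # normalise per pixel
    return post.argmax(axis=1), post

lik = np.array([[0.6, 0.4], [0.6, 0.4]])
pri = np.array([[0.5, 0.5], [0.2, 0.8]])  # unmixing shifts the second prior
labels, post = hybrid_bayes(lik, pri)
print(labels)  # the prior flips the decision for the second pixel
```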
Abstract:
In this paper, we address the design of codes which achieve modulation diversity in block fading single-input single-output (SISO) channels with signal quantization at the receiver. With an unquantized receiver, coding based on algebraic rotations is known to achieve maximum modulation coding diversity. On the other hand, with a quantized receiver, algebraic rotations may not guarantee gains in diversity. Through analysis, we propose specific rotations which result in the codewords having equidistant component-wise projections. We show that the proposed coding scheme achieves maximum modulation diversity with a low-complexity minimum distance decoder and perfect channel knowledge. Relaxing the perfect channel knowledge assumption, we propose a novel channel training/estimation technique to estimate the channel. We show that our coding/training/estimation scheme with minimum distance decoding achieves an error probability performance similar to that achieved with perfect channel knowledge.
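A generic sketch of the property that constellation rotation is meant to achieve, every codeword distinguishable from each single component, is shown below; the paper's specific rotations and the equidistant-projection construction are not reproduced here, and the angle used is an arbitrary example:

```python
import cmath

def rotated_constellation(theta, base=(1+1j, 1-1j, -1+1j, -1-1j)):
    """Rotate a 4-point constellation by angle theta (radians)."""
    r = cmath.exp(1j * theta)
    return [r * s for s in base]

def has_full_modulation_diversity(points, tol=1e-9):
    """True if no two points share a real or an imaginary coordinate,
    so either component alone still distinguishes all codewords
    (the modulation-diversity property for a 2-component codeword)."""
    re = sorted(p.real for p in points)
    im = sorted(p.imag for p in points)
    return (all(b - a > tol for a, b in zip(re, re[1:])) and
            all(b - a > tol for a, b in zip(im, im[1:])))

print(has_full_modulation_diversity(rotated_constellation(0.0)))  # False
print(has_full_modulation_diversity(rotated_constellation(0.5)))  # True
```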
Abstract:
The questions that one should answer in engineering computations - deterministic, probabilistic/randomized, as well as heuristic - are (i) how good the computed results/outputs are and (ii) what the cost is, in terms of the amount of computation and the amount of storage used in obtaining the outputs. The absolutely error-free quantities, as well as the completely errorless computations occurring in a natural process, can never be captured by any means that we have at our disposal. While the computations, including the input real quantities, in nature/natural processes are exact, all the computations that we do using a digital computer, or that are carried out in an embedded form, are never exact. The input data for such computations are also never exact, because any measuring instrument has an inherent error of a fixed order associated with it, and this error, as a matter of hypothesis and not as a matter of assumption, is not less than 0.005 per cent. Here by error we imply relative error bounds. The fact that the exact error is never known under any circumstances and in any context implies that the term error is nothing but error-bounds. Further, in engineering computations, it is the relative error or, equivalently, the relative error-bounds (and not the absolute error) which is supremely important in providing us the information regarding the quality of the results/outputs. Another important fact is that inconsistency and/or near-inconsistency in nature, i.e., in problems created from nature, is completely nonexistent, while in our modelling of the natural problems we may introduce inconsistency or near-inconsistency due to human error, or due to the inherent non-removable error associated with any measuring device, or due to assumptions introduced to make the problem solvable or more easily solvable in practice. Thus if we discover any inconsistency or possibly any near-inconsistency in a mathematical model, it is certainly due to any or all of the three foregoing factors.
We do, however, go ahead and solve such inconsistent/near-inconsistent problems, and we do get results that can be useful in real-world situations. The talk considers several deterministic, probabilistic, and heuristic algorithms in numerical optimisation, other numerical and statistical computations, and in PAC (probably approximately correct) learning models. It highlights the quality of the results/outputs by specifying relative error-bounds along with the associated confidence level, and the cost, viz., the amount of computation and of storage, through complexity. It points out the limitations of error-free computation (wherever possible, i.e., where the number of arithmetic operations is finite and known a priori) as well as of the usage of interval arithmetic. Further, the interdependence among the error, the confidence, and the cost is discussed.
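The emphasis above on relative error-bounds (with the 0.005 per cent instrument floor) can be illustrated with first-order bound propagation through arithmetic; the helper names are ours, a sketch rather than the talk's formal treatment:

```python
def rel_err_product(r1, r2):
    """First-order relative error-bound of a product or quotient:
    the operand bounds add."""
    return r1 + r2

def rel_err_sum(a, b, ra, rb):
    """Relative error-bound of a sum of same-sign quantities a, b
    carrying relative bounds ra, rb."""
    return (abs(a) * ra + abs(b) * rb) / abs(a + b)

r = 5e-5  # hypothesised instrument floor of 0.005 per cent per input
print(rel_err_product(r, r))         # 1e-4, i.e. 0.01 per cent
print(rel_err_sum(10.0, 1.0, r, r))  # stays at 5e-5 for same-sign operands
```

Note that for a difference of nearly equal quantities the denominator in `rel_err_sum` shrinks, which is exactly the cancellation that makes relative bounds, not absolute ones, the informative quantity.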
Abstract:
An exact classical theory of the motion of a point dipole in a meson field is given which takes into account the effects of the reaction of the emitted meson field. The meson field is characterized by a constant χ = μ/ℏ of the dimensions of a reciprocal length, μ being the meson mass, and as χ → 0 the theory of this paper goes over continuously into the theory of the preceding paper for the motion of a spinning particle in a Maxwell field. The mass of the particle and the spin angular momentum are arbitrary mechanical constants. The field contributes a small finite addition to the mass, and a negative moment of inertia about an axis perpendicular to the spin axis. A cross-section (formula (88a)) is given for the scattering of transversely polarized neutral mesons by the rotation of the spin of the neutron or proton which should be valid up to energies of 10^9 eV. For low energies E it agrees completely with the old quantum cross-section, having a dependence on energy proportional to p^4/E^2 (p being the meson momentum). At higher energies it deviates completely from the quantum cross-section, which it supersedes by taking into account the effects of radiation reaction on the rotation of the spin. The cross-section is a maximum at E ~ 3.5μ, its value at this point being 3 × 10^-26 cm^2, after which it decreases rapidly, becoming proportional to E^-2 at high energies. Thus the quantum theory of the interaction of neutrons with mesons goes wrong for E ≳ 3μ. The scattering of longitudinally polarized mesons is due to the translational but not the rotational motion of the dipole and is at least twenty thousand times smaller.
With the assumption previously made by the present author that heavy particles may exist in states of any integral charge, and in particular that protons of charge 2e and -e may occur in nature, the above results can be applied to charged mesons. Thus transversely polarized mesons should undergo a very big scattering and consequent absorption at energies near 3.5μ. Hence the energy spectrum of transversely polarized mesons should fall off rapidly for energies below about 3μ. Scattering plays a relatively unimportant part in the absorption of longitudinally polarized mesons, and they are therefore much more penetrating. The theory does not lead to Heisenberg explosions and multiple processes.
Abstract:
A low strain shear modulus plays a fundamental role in earthquake geotechnical engineering in estimating the ground response parameters for seismic microzonation. A large number of site response studies are carried out using standard penetration test (SPT) data, relying on existing correlations between SPT N values and shear modulus. The purpose of this paper is to review the available empirical correlations between shear modulus and SPT N values and to generate a new correlation by combining new data obtained by the author with the old available data. The review shows that only a few authors have used measured density and shear wave velocity to estimate the shear modulus that was related to the SPT N values; others have assumed a constant density for all shear wave velocities. Many authors used SPT N values of less than 1 or more than 100 to generate correlations by extrapolation or assumption, but in practice these N values have limited application: N values of less than 1 cannot be measured, and values of more than 100 are not recorded. Most of the existing correlations were developed from studies carried out in Japan, where N values are measured with a hammer energy of 78%; these may not be directly applicable to other regions because of the variation in SPT hammer energy. A new correlation has been generated using the measured values from Japan and India, eliminating the assumed and extrapolated data. This correlation has a higher regression coefficient and a lower standard error. Finally, modification factors are suggested for other regions where the hammer energy differs from 78%. Crown Copyright (C) 2012 Published by Elsevier Ltd. All rights reserved.
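Two of the ingredients discussed above, computing the low-strain shear modulus from measured density and shear-wave velocity, and rescaling N values measured at a hammer energy other than the 78% Japanese reference, can be sketched as follows. The linear energy-proportionality correction is standard practice but is our assumption here, not the paper's exact modification factors:

```python
def shear_modulus(rho, vs):
    """Low-strain shear modulus G = rho * vs**2 (Pa), from measured
    density rho (kg/m^3) and shear-wave velocity vs (m/s)."""
    return rho * vs ** 2

def correct_spt_n(n_measured, energy_ratio, reference=78.0):
    """Scale a measured SPT N value to the reference hammer energy
    assuming N is inversely proportional to delivered energy, so an
    N measured at lower energy maps to a smaller reference-energy N."""
    return n_measured * energy_ratio / reference

print(shear_modulus(1800.0, 200.0) / 1e6)  # 72 MPa for this example soil
print(correct_spt_n(20, 60.0))             # roughly 15.4 at 78% energy
```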
Abstract:
The acoustical behavior of an elliptical chamber muffler having an end-inlet and side-outlet port is analyzed semi-analytically. A uniform piston source is assumed to model the 3-D acoustic field in the elliptical chamber cavity. To this end, we consider the modal expansion of the acoustic pressure field in the elliptical cavity in terms of angular and radial Mathieu functions, subject to the rigid-wall condition, whereupon, under the assumption of a point source, the Green's function is obtained. On integrating this function over the piston area of the side or end port and dividing it by the piston area, one obtains the acoustic field, whence one can find the impedance matrix parameters characterizing the 2-port system. The acoustic performance of these configurations is evaluated in terms of transmission loss (TL). The analytical results thus obtained are compared with 3-D HA carried out on commercial software for certain muffler configurations. These show excellent agreement, thereby validating the 3-D semi-analytical piston-driven model. The influence of the chamber length as well as of the angular and axial location of the end and side ports on TL performance is also discussed, thus providing useful guidelines to the muffler designer. (c) 2011 Elsevier B.V. All rights reserved.
Abstract:
Surface-potential-based compact charge models for symmetric double-gate metal-oxide-semiconductor field-effect transistors (SDG-MOSFETs) are based on the fundamental assumption of equal oxide thicknesses for both gates. However, for practical devices, there will always be some amount of asymmetry between the gate oxide thicknesses due to process variations and uncertainties, which can affect device performance significantly. In this paper, we propose a simple surface-potential-based charge model which is applicable to tied double-gate MOSFETs having the same gate work function but any difference in gate oxide thickness. The proposed model utilizes the unique, so-far-unexplored quasi-linear relationship between the surface potentials along the channel. In this model, the terminal charges can be computed by basic arithmetic operations from the surface potentials and applied biases; thus, it can be implemented in any circuit simulator very easily and is extendable to short-channel devices. We also propose a simple physics-based perturbation technique by which the surface potentials of an asymmetric device can be obtained, for small asymmetry, just by solving the input voltage equation of SDG devices. The proposed model, which shows excellent agreement with numerical and TCAD simulations, is implemented in a professional circuit simulator through the Verilog-A interface and demonstrated for a 101-stage ring oscillator simulation. It is also shown that the proposed model preserves source/drain symmetry, which is essential for RF circuit design.
Abstract:
In the tree cricket Oecanthus henryi, females are attracted by male calls and can choose between males. To make a case for female choice based on male calls, it is necessary to examine male call variation in the field and identify repeatable call features that are reliable indicators of male size or symmetry. Female preference for these reliable call features and the underlying assumption behind this choice, female preference for larger males, also need to be examined. We found that females did prefer larger males during mating, as revealed by the longer mating durations and longer spermatophore retention times. We then examined the correlation between acoustic and morphological features and the repeatability of male calls in the field across two temporal scales, within and across nights. We found that carrier frequency was a reliable indicator of male size, with larger males calling at lower frequencies at a given temperature. Simultaneous playback of male calls differing in frequency, spanning the entire range of natural variation at a given temperature, revealed a lack of female preference for low carrier frequencies. The contrasting results between the phonotaxis and mating experiments may be because females are incapable of discriminating small differences in frequency or because the change in call carrier frequency with temperature renders this cue unreliable in tree crickets. (C) 2012 The Association for the Study of Animal Behaviour. Published by Elsevier Ltd. All rights reserved.
Abstract:
The effect of the Tb/Dy ratio on the structural and magnetic properties of the (Tb,Dy)Fe2 class of alloys has been investigated using nine TbxDy1-xFe1.95 alloys (x = 0-1) covering the entire range. Our results indicate that three phases, viz. (Tb,Dy)Fe2 (major phase), (Tb,Dy)Fe3 and a (Tb,Dy) solid solution (minor phases), coexist in all the alloys. The volume fraction of the pro-peritectic (Tb,Dy)Fe3 phase, however, has a minimum at x = 0.4 and a maximum at x = 0.6. The volume fraction of this phase decreases upon heat treatment at 850 °C and 1000 °C. A Widmanstätten-type precipitate of (Tb,Dy)Fe3 was observed for Dy-rich compositions (0 <= x <= 0.5). The microstructural investigations indicate that the ternary phase equilibria of Tb-Dy-Fe are sensitive to the Tb/Dy ratio, including the expansion of the (Tb,Dy)Fe2 phase field, which is in contrast to the pseudo-binary assumption followed in the available literature to date. The lattice parameter, Curie temperature and coercivity are found to increase with Tb addition. Splitting of the (440) peak of (Tb,Dy)Fe2 observed in the x >= 0.3 alloys indicates that a spin reorientation transition from [100] to [111] occurs with Tb addition. (C) 2012 Elsevier B.V. All rights reserved.
Abstract:
We consider a dense ad hoc wireless network confined to a small region. The wireless network is operated as a single cell, i.e., only one successful transmission is supported at a time. Data packets are sent between source-destination pairs by multihop relaying. We assume that nodes self-organize into a multihop network such that all hops are of length d meters, where d is a design parameter. There is a contention-based multiaccess scheme, and it is assumed that every node always has data to send, either originated from it or a transit packet (saturation assumption). In this scenario, we seek to maximize a measure of the transport capacity of the network (measured in bit-meters per second) over power controls (in a fading environment) and over the hop distance d, subject to an average power constraint. We first motivate that, for a dense collection of nodes confined to a small region, single cell operation is efficient for single user decoding transceivers. Then, operating the dense ad hoc wireless network (described above) as a single cell, we study the hop length and power control that maximize the transport capacity for a given network power constraint. More specifically, for a fading channel and for a fixed transmission time strategy (akin to the IEEE 802.11 TXOP), we find that there exists an intrinsic aggregate bit rate (Θ_opt bits per second, depending on the contention mechanism and the channel fading characteristics) carried by the network when operating at the optimal hop length and power control. The optimal transport capacity is of the form d_opt(P̄_t) × Θ_opt, with d_opt scaling as P̄_t^(1/η), where P̄_t is the available time-average transmit power and η is the path loss exponent. Under certain conditions on the fading distribution, we then provide a simple characterization of the optimal operating point.
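The scaling law stated above can be written out directly; the constant c below is hypothetical, standing in for the contention- and fading-dependent details the abstract leaves inside Θ_opt and d_opt:

```python
def optimal_hop_length(p_avg, eta, c=1.0):
    """Hop length scaling d_opt = c * p_avg**(1/eta), with p_avg the
    time-average transmit power and eta the path loss exponent;
    c is a hypothetical proportionality constant."""
    return c * p_avg ** (1.0 / eta)

def transport_capacity(p_avg, eta, theta_opt, c=1.0):
    """Transport capacity (bit-meters per second) of the form
    d_opt(p_avg) * theta_opt, as in the abstract."""
    return optimal_hop_length(p_avg, eta, c) * theta_opt

# doubling power with eta = 4 stretches the optimal hop by 2**(1/4)
print(optimal_hop_length(2.0, 4) / optimal_hop_length(1.0, 4))  # about 1.189
```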
Simulation results are provided comparing the performance of the optimal strategy derived here with some simple strategies for operating the network.
Abstract:
Information diffusion and influence maximization are important and extensively studied problems in social networks. Various models and algorithms have been proposed in the literature in the context of the influence maximization problem. A crucial assumption in all these studies is that the influence probabilities are known to the social planner. This assumption is unrealistic since the influence probabilities are usually private information of the individual agents and strategic agents may not reveal them truthfully. Moreover, the influence probabilities could vary significantly with the type of the information flowing in the network and the time at which the information is propagating in the network. In this paper, we use a mechanism design approach to elicit influence probabilities truthfully from the agents. Our main contribution is to design a scoring rule based mechanism in the context of the influencer-influencee model. In particular, we show the incentive compatibility of the mechanisms and propose a reverse weighted scoring rule based mechanism as an appropriate mechanism to use.
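As background for the scoring-rule-based mechanism above, here is a minimal sketch of why a strictly proper scoring rule elicits probabilities truthfully; the quadratic (Brier) rule is used for illustration and is not the paper's reverse weighted rule:

```python
def quadratic_score(p, outcome):
    """Quadratic (Brier) proper scoring rule for a reported
    probability p of a binary event: reward = 1 - (outcome - p)**2."""
    return 1.0 - (outcome - p) ** 2

def expected_score(true_p, reported_p):
    """Expected reward to an agent whose event actually occurs with
    probability true_p when it reports reported_p."""
    return (true_p * quadratic_score(reported_p, 1)
            + (1 - true_p) * quadratic_score(reported_p, 0))

# truthful reporting maximises expected reward (incentive compatibility)
truth = expected_score(0.3, 0.3)
lie = expected_score(0.3, 0.6)
print(truth > lie)  # True
```

Strict propriety is exactly the incentive-compatibility property the abstract establishes for its mechanisms: a strategic agent maximizes its expected payment by revealing its private influence probability.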
Abstract:
During the motion of one-dimensional flexible objects such as ropes, chains, etc., the assumption of constant length is realistic. Moreover, their motion appears to naturally minimize some abstract distance measure, wherein a disturbance at one end gradually dies down along the curve defining the object. This paper presents purely kinematic strategies for deriving length-preserving transformations of flexible objects that minimize an appropriate 'motion'. The strategies involve sequential and overall optimization of the motion, derived using variational calculus. Numerical simulations are performed for the motion of a planar curve, and the results show stable converging behavior for single-step infinitesimal and finite perturbations as well as multi-step perturbations. Additionally, our generalized approach provides different intuitive motions for various problem-specific measures of motion, one of which is shown to converge to the conventional tractrix-based solution. Simulation results for arbitrary shapes and excitations are also included.
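A minimal kinematic sketch of a length-preserving update, in the spirit of (but far simpler than) the variational strategies above: the head of a discretized chain is moved, and each following node slides toward its predecessor so every link keeps its length, with the disturbance dying down along the curve. The function names are ours:

```python
import math

def drag_chain(points, new_head, seg_len):
    """Length-preserving 'follow the leader' update of a discrete chain:
    the head jumps to new_head and each subsequent node moves along the
    line toward its (already-updated) predecessor so that every segment
    keeps length seg_len. A kinematic tractrix-like sketch, not the
    paper's optimized motion."""
    out = [new_head]
    for p in points[1:]:
        prev = out[-1]
        dx, dy = p[0] - prev[0], p[1] - prev[1]
        d = math.hypot(dx, dy)  # current (stretched) distance
        out.append((prev[0] + dx / d * seg_len, prev[1] + dy / d * seg_len))
    return out

chain = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
moved = drag_chain(chain, (0.0, 0.5), 1.0)
# every link still has unit length after the head is perturbed
print([math.hypot(b[0] - a[0], b[1] - a[1]) for a, b in zip(moved, moved[1:])])
```

Note how the second node moves much less than the head and the third less still, the qualitative die-down behaviour the abstract describes.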