109 results for error-location number

at Indian Institute of Science - Bangalore - India


Relevance:

40.00%

Publisher:

Abstract:

An experimental study is presented to show the effect of cowl location and shape on the shock interaction phenomena in the inlet region of a 2D, planar scramjet inlet model. Investigations include schlieren visualization around the cowl region and heat transfer rate measurements inside the inlet chamber. Both regular and Mach reflections are observed when the forebody ramp shock reflects from the cowl plate. Mach stem heights of 3.3 mm and 4.1 mm are measured in the 18.5 mm and 22.7 mm high inlet chambers, respectively. An increased heat transfer rate is measured at the same location in the chamber for cowls of longer length, indicating additional mass flow recovery by the inlet.

Relevance:

30.00%

Publisher:

Abstract:

An on-line algorithm is developed for the location of single cross point faults in a PLA (FPLA). The main feature of the algorithm is the determination of a fault set corresponding to the response obtained for a failed test. For the relatively small number of faults in this set, all other tests are generated and a fault table is formed. Subsequently, an adaptive procedure is used to diagnose the fault. A functional equivalence test is carried out to determine the actual fault class if the adaptive testing results in a set of faults with identical tests. The large amount of computation time and storage required to determine, a priori, all the fault equivalence classes or to construct a fault dictionary is not needed here. A brief study of functional equivalence among the cross point faults is also made.
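
To make the adaptive step above concrete, here is a minimal, hypothetical Python sketch (not the paper's algorithm): a fault table maps each candidate fault to the tests it fails, and each applied test prunes the candidate set until a single fault, or an equivalence class of faults with identical tests, remains. The fault names, test names, and the apply_test oracle are illustrative assumptions.

def adaptive_diagnosis(fault_table, apply_test):
    """Narrow a candidate fault set by adaptively choosing tests.

    fault_table maps each candidate fault to the set of tests it fails;
    apply_test is an oracle that runs a test on the actual (faulty) device
    and returns True if the test fails.
    """
    candidates = set(fault_table)
    tests = set().union(*fault_table.values()) if fault_table else set()
    while len(candidates) > 1 and tests:
        # Choose the test that splits the remaining candidates most evenly.
        def split_quality(t):
            failing = sum(1 for f in candidates if t in fault_table[f])
            return abs(2 * failing - len(candidates))
        test = min(tests, key=split_quality)
        tests.remove(test)
        failed = apply_test(test)            # observe the device's response
        candidates = {f for f in candidates
                      if (test in fault_table[f]) == failed}
    # A non-singleton result is a class of faults with identical tests.
    return candidates

# Example: three candidate cross point faults and two distinguishing tests.
table = {"f1": {"t1"}, "f2": {"t2"}, "f3": {"t1", "t2"}}
print(adaptive_diagnosis(table, lambda t: t == "t1"))   # -> {'f1'}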

Relevance:

30.00%

Publisher:

Abstract:

The transmission loss of a rectangular expansion chamber, the inlet and outlet of which are situated at arbitrary locations of the chamber, i.e., on the side wall or the face of the chamber, is analyzed here based on the Green's function of a rectangular cavity with homogeneous boundary conditions. The rectangular chamber Green's function is expressed in terms of a finite number of rigid rectangular cavity mode shapes. The inlet and outlet ports are modeled as uniform velocity pistons. If the size of the piston is small compared to the wavelength, then plane wave excitation is a valid assumption. The velocity potential inside the chamber is expressed by superimposing the velocity potentials of two different configurations. The first configuration is a piston source at the inlet port with a rigid termination at the outlet, and the second is a piston at the outlet with a rigid termination at the inlet. Pressure inside the chamber is derived from the velocity potentials using the linear momentum equation. The average pressure acting on the pistons at the inlet and outlet locations is estimated by integrating the acoustic pressure over the piston area in the two constituent configurations. The transfer matrix is derived from the average pressure values, and thence the transmission loss is calculated. The results are verified against those in the literature, where use has been made of modal expansions and also of numerical (FEM fluid) models. The transfer matrix formulation for rectangular chambers with yielding walls has been derived incorporating the structural–acoustic coupling. Parametric studies are conducted for different inlet and outlet configurations, and the various phenomena occurring in the TL curves that cannot be explained by classical plane wave theory are discussed.
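
As a point of reference for the last step, a small Python sketch of how transmission loss is commonly obtained from a four-pole transfer matrix is given below. The convention (pressure and volume velocity as state variables, equal inlet and outlet port areas) and the uniform-duct matrix used for the check are assumptions for illustration, not the paper's formulation.

import numpy as np

# Minimal sketch: transmission loss from a four-pole transfer matrix
# [[A, B], [C, D]] relating (pressure, volume velocity) at the inlet to
# the same quantities at the outlet.  Assumes equal inlet/outlet port
# areas; Y0 = rho*c/S is the characteristic impedance of the ports.
# The numbers below are placeholders, not the paper's data.

def transmission_loss(T, Y0):
    A, B, C, D = T[0, 0], T[0, 1], T[1, 0], T[1, 1]
    return 20.0 * np.log10(abs(A + B / Y0 + C * Y0 + D) / 2.0)

rho, c, S = 1.21, 343.0, 1e-3           # air density, sound speed, port area
Y0 = rho * c / S
k, L = 2 * np.pi * 200 / c, 0.3         # wavenumber at 200 Hz, duct length
# Transfer matrix of a plain uniform duct, for illustration only.
T_duct = np.array([[np.cos(k * L),           1j * Y0 * np.sin(k * L)],
                   [1j * np.sin(k * L) / Y0, np.cos(k * L)]])
print(transmission_loss(T_duct, Y0))    # ~0 dB, as expected for a uniform duct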

Relevance:

30.00%

Publisher:

Abstract:

The following problem is considered. Given the locations of the Central Processing Unit (CPU) and the terminals which have to communicate with it, determine the number and locations of the concentrators and assign the terminals to the concentrators in such a way that the total cost is minimized. There is also a fixed cost associated with each concentrator, and there is an upper limit to the number of terminals which can be connected to a concentrator. The terminals can also be connected directly to the CPU. In this paper it is assumed that the concentrators can be located anywhere in the area A containing the CPU and the terminals. The problem then becomes a multimodal optimization problem. In the proposed algorithm a stochastic automaton is used as a search device to locate the minimum of the multimodal cost function. The proposed algorithm involves the following. The area A containing the CPU and the terminals is divided into an arbitrary number of regions (say K). An approximate value for the number of concentrators is assumed (say m); the optimum number is determined by iteration later. The m concentrators can be assigned to the K regions in m^K ways (m > K) or K^m ways (K > m). (All possible assignments are feasible, i.e. a region can contain 0, 1, ..., m concentrators.) Each possible assignment is taken to represent a state of the stochastic variable-structure automaton. To start with, all the states are assigned equal probabilities. At each stage of the search the automaton visits a state according to the current probability distribution, and at each visit it selects a 'point' inside that state with uniform probability. The cost associated with that point is calculated and the average cost of that state is updated; the probabilities of all the states are then updated, being taken to be inversely proportional to the average costs of the states. After a certain number of searches the search probabilities become stationary and the automaton visits a particular state again and again; the automaton is then said to have converged to that state. A local gradient search within that state then determines the exact locations of the concentrators. This algorithm was applied to a set of test problems and the results were compared with those given by Cooper's (1964, 1967) EAC algorithm; on average, the proposed algorithm was found to perform better.
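
A much-simplified Python sketch of the probability-update mechanism is given below, under assumed simplifications: a single facility on a one-dimensional interval, a toy multimodal cost, and K equal regions standing in for the automaton's states. The paper's scheme works over all assignments of m concentrators to K regions and finishes with a local gradient search inside the winning state; neither is reproduced here.

import random

# Much-simplified sketch of the stochastic-automaton search described above:
# one facility on the interval [0, 10] split into K regions (states).  The
# automaton keeps a running average cost per state and visits states with
# probability inversely proportional to that average cost.

def cost(x):                       # toy multimodal objective, not the paper's
    return (x - 7.3) ** 2 + 3 * abs((x % 2) - 1)

K, searches = 10, 2000
edges = [(10.0 * i / K, 10.0 * (i + 1) / K) for i in range(K)]
avg, visits = [1.0] * K, [0] * K   # low initial averages favour exploration

for _ in range(searches):
    weights = [1.0 / a for a in avg]
    state = random.choices(range(K), weights=weights)[0]
    lo, hi = edges[state]
    x = random.uniform(lo, hi)     # pick a point inside the chosen state
    visits[state] += 1
    avg[state] += (cost(x) - avg[state]) / visits[state]   # running mean

best = min(range(K), key=lambda s: avg[s])
print("best region:", edges[best], "average cost:", round(avg[best], 2))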

Relevance:

30.00%

Publisher:

Abstract:

An efficient location service is a prerequisite to any robust, effective and precise location-information-aided Mobile Ad Hoc Network (MANET) routing protocol. Locant, presented in this paper, is a nature-inspired location service that derives its inspiration from the insect colony framework and is designed to work with a host of location-information-aided MANET routing protocols. Using an extensive set of simulation experiments, we have compared the performance of Locant with RLS, SLS and DLS, and found that it performs comparably to or better than these three location services on most metrics while having the least overhead in terms of the number of bytes transmitted per location query answered.

Relevance:

30.00%

Publisher:

Abstract:

In this work, we introduce convolutional codes for network-error correction in the context of coherent network coding. We give a construction of convolutional codes that correct a given set of error patterns, as long as consecutive errors are separated by a certain interval. We also give some bounds on the field size and the number of errors that can get corrected in a certain interval. Compared to previous network error correction schemes, using convolutional codes is seen to have advantages in field size and decoding technique. Some examples are discussed which illustrate the several possible situations that arise in this context.

Relevance:

30.00%

Publisher:

Abstract:

The design of speaker identification schemes for a small number of speakers (around 10) with a high degree of accuracy in a controlled environment is a practical proposition today. When the number of speakers is large (say 50–100), many of these schemes cannot be directly extended, as both the recognition error and the computation time increase monotonically with population size. The feature selection problem is also complex for such schemes. Though there were earlier attempts to rank order features based on statistical distance measures, it has been observed only recently that the two individually best measurements are not necessarily the best combination of two for pattern classification. We propose here a systematic approach to the problem using a decision tree or hierarchical classifier with the following objectives: (1) design of the optimal policy at each node of the tree, given the tree structure, i.e., the tree skeleton and the features to be used at each node; (2) determination of the optimal feature measurement and decision policy given only the tree skeleton. The applicability of optimization procedures such as dynamic programming to the design of such trees is studied. The experimental results deal with the design of a 50-speaker identification scheme based on this approach.
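
To illustrate the per-node design idea behind objective (1), here is a small hypothetical Python/NumPy sketch: for the set of speakers reaching a node, the feature with the largest between-speaker to within-speaker variance ratio is chosen and the speakers are split at the median of their means on that feature. The synthetic data, the Fisher-style criterion, and the greedy single-node split are illustrative assumptions, not the paper's optimization over the whole tree.

import numpy as np

# Greedy design of one node of a hierarchical speaker classifier:
# pick the most discriminative feature for the speakers at this node
# and partition them at the median of their per-speaker means.

rng = np.random.default_rng(0)
n_speakers, n_feats, n_frames = 8, 5, 40
means = rng.normal(0, 3, size=(n_speakers, n_feats))        # toy speaker models
data = means[:, None, :] + rng.normal(0, 1, size=(n_speakers, n_frames, n_feats))

def best_feature(speakers):
    sub = data[speakers]                      # (S, frames, feats)
    within = sub.var(axis=1).mean(axis=0)     # mean within-speaker variance
    between = sub.mean(axis=1).var(axis=0)    # variance of the speaker means
    return int(np.argmax(between / within))

def split_node(speakers):
    f = best_feature(speakers)
    centers = data[speakers, :, f].mean(axis=1)
    thr = np.median(centers)
    left = [s for s, c in zip(speakers, centers) if c <= thr]
    right = [s for s, c in zip(speakers, centers) if c > thr]
    return f, left, right

feat, left, right = split_node(list(range(n_speakers)))
print("root feature:", feat, "| left group:", left, "| right group:", right)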

Relevance:

30.00%

Publisher:

Abstract:

A series of isomeric cationic surfactants (S1-S5) bearing a long alkyl chain that carries a 1,4-phenylene unit and a trimethyl ammonium headgroup was synthesized; the location of the phenyl ring within the alkyl tail was varied in an effort to understand its influence on the amphiphilic properties of the surfactants. The cmc's of the surfactants were estimated using ionic conductivity measurements and isothermal calorimetric titrations (ITC); the values obtained by the two methods were found to be in excellent agreement. The ITC measurements provided additional insight into the various thermodynamic parameters associated with the micellization process. Although all five surfactants have exactly the same molecular formula, their micellar properties were seen to vary dramatically depending on the location of the phenyl ring; the cmc was seen to decrease by almost an order of magnitude when the phenyl ring was moved from the tail end (cmc of S1 is 23 mM) to the headgroup region (cmc of S5 is 3 mM). In all cases, the enthalpy of micellization was negative but the entropy of micellization was positive, suggesting that in all of these systems the formation of micelles is both enthalpically and entropically favored. As expected, the decrease in cmc values upon moving the phenyl ring from the tail end to the headgroup region is accompanied by an increase in the thermodynamic driving force (ΔG) for micellization. To understand further the differences in the micellar structure of these surfactants, small-angle neutron scattering (SANS) measurements were carried out; these measurements reveal that the aggregation number of the micelles increases as the cmc decreases. This increase in the aggregation number is also accompanied by an increase in the asphericity of the micellar aggregate and a decrease in the fractional charge. Geometric packing arguments are presented to account for these changes in aggregation behavior as a function of phenyl ring location.

Relevance:

30.00%

Publisher:

Abstract:

We present the results of our detailed pseudospectral direct numerical simulation (DNS) studies, with up to 1024^3 collocation points, of incompressible, magnetohydrodynamic (MHD) turbulence in three dimensions, without a mean magnetic field. Our study concentrates on the dependence of various statistical properties of both decaying and statistically steady MHD turbulence on the magnetic Prandtl number Pr_M over a large range, namely 0.01 <= Pr_M <= 10. We obtain data for a wide variety of statistical measures, such as probability distribution functions (PDFs) of the moduli of the vorticity and current density, the energy dissipation rates, and velocity and magnetic-field increments, energy and other spectra, velocity and magnetic-field structure functions, which we use to characterize intermittency, isosurfaces of quantities, such as the moduli of the vorticity and current density, and joint PDFs, such as those of fluid and magnetic dissipation rates. Our systematic study uncovers interesting results that have not been noted hitherto. In particular, we find a crossover from a larger intermittency in the magnetic field than in the velocity field, at large Pr_M, to a smaller intermittency in the magnetic field than in the velocity field, at low Pr_M. Furthermore, a comparison of our results for decaying MHD turbulence and its forced, statistically steady analogue suggests that we have strong universality in the sense that, for a fixed value of Pr_M, multiscaling exponent ratios agree, at least within our error bars, for both decaying and statistically steady homogeneous, isotropic MHD turbulence.

Relevance:

30.00%

Publisher:

Abstract:

The influence of temperature-dependent viscosity and Prandtl number on the unsteady laminar nonsimilar forced convection flow over two-dimensional and axisymmetric bodies has been examined where the unsteadiness and (or) nonsimilarity are (is) due to the free stream velocity, mass transfer, and transverse curvature. The partial differential equations governing the flow which involve three independent variables have been solved numerically using an implicit finite-difference scheme along with a quasilinearization technique. It is found that both the skin friction and heat transfer strongly respond to the unsteady free stream velocity distributions. The unsteadiness and injection cause the location of zero skin friction to move upstream. However, the effect of variable viscosity and Prandtl number is to move it downstream. The heat transfer is found to depend strongly on viscous dissipation, but the skin friction is little affected by it. In general, the results pertaining to variable fluid properties differ significantly, from those of constant fluid properties.

Relevance:

30.00%

Publisher:

Abstract:

‘Best’ solutions for the shock-structure problem are obtained by solving the Boltzmann equation for a rigid sphere gas by applying minimum error criteria on the Mott-Smith ansatz. The use of two such criteria minimizing respectively the local and total errors, as well as independent computations of the remaining error, establish the high accuracy of the solutions, although it is shown that the Mott-Smith distribution is not an exact solution of the Boltzmann equation even at infinite Mach number. The minimum local error method is found to be particularly simple and efficient. Adopting the present solutions as the standard of comparison, it is found that the widely used v_x^2-moment solutions can be as much as a third in error, but that results based on Rosen's method provide good approximations. Finally, it is shown that if the Maxwell mean free path on the hot side of the shock is chosen as the scaling length, the value of the density-slope shock thickness is relatively insensitive to the intermolecular potential. A comparison is made on this basis of present results with experiment, and very satisfactory quantitative agreement is obtained.

Relevance:

30.00%

Publisher:

Abstract:

The questions that one should answer in engineering computations - deterministic, probabilistic/randomized, as well as heuristic - are (i) how good the computed results/outputs are and (ii) what the cost is, in terms of the amount of computation and the amount of storage used in getting the outputs. The absolutely error-free quantities as well as the completely errorless computations done in a natural process can never be captured by any means that we have at our disposal. While the computations, including the input real quantities, in nature/natural processes are exact, all the computations that we do using a digital computer or that are carried out in an embedded form are never exact. The input data for such computations are also never exact because any measuring instrument has an inherent error of a fixed order associated with it and this error, as a matter of hypothesis and not as a matter of assumption, is not less than 0.005 per cent. Here by error we imply relative error bounds. The fact that the exact error is never known under any circumstances and in any context implies that the term error is nothing but error-bounds. Further, in engineering computations, it is the relative error or, equivalently, the relative error-bounds (and not the absolute error) which is supremely important in providing us the information regarding the quality of the results/outputs. Another important fact is that inconsistency and/or near-inconsistency in nature, i.e., in problems created from nature, is completely nonexistent, while in our modelling of natural problems we may introduce inconsistency or near-inconsistency due to human error, or due to the inherent non-removable error associated with any measuring device, or due to assumptions introduced to make the problem solvable or more easily solvable in practice. Thus if we discover any inconsistency or possibly any near-inconsistency in a mathematical model, it is certainly due to any or all of these three factors. We do, however, go ahead to solve such inconsistent/near-inconsistent problems and do get results that could be useful in real-world situations. The talk considers several deterministic, probabilistic, and heuristic algorithms in numerical optimisation, other numerical and statistical computations, and in PAC (probably approximately correct) learning models. It highlights the quality of the results/outputs by specifying relative error-bounds along with the associated confidence level, and the cost, viz., the amount of computation and of storage, through complexity. It points out the limitations of error-free computations (wherever possible, i.e., where the number of arithmetic operations is finite and is known a priori) as well as of the usage of interval arithmetic. Further, the interdependence among the error, the confidence, and the cost is discussed.
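
As a small worked illustration of relative error-bounds (an example added here, not part of the talk): if a quantity is measured as x with relative error bound r, the true value lies in [x(1 - r), x(1 + r)]; under the 0.005 per cent hypothesis quoted above, r = 5e-5. The measured value 230.0 below is an arbitrary example.

# Tiny illustration of a relative error-bound: a quantity measured as x with
# relative error bound r lies in [x*(1 - r), x*(1 + r)].  The value r = 5e-5
# corresponds to the 0.005 per cent hypothesis mentioned in the abstract.

def bounds(x, r=5e-5):
    return x * (1 - r), x * (1 + r)

lo, hi = bounds(230.0)
print(f"true value lies in [{lo:.4f}, {hi:.4f}]")   # [229.9885, 230.0115]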

Relevance:

30.00%

Publisher:

Abstract:

The location area planning problem is to partition the cellular/mobile network into location areas with the objective of minimizing the total cost. This partitioning problem is a difficult combinatorial optimization problem. In this paper, we use simulated annealing with a new solution representation. In our method, we can automatically generate different numbers of location areas using a Compact Index (CI) to obtain the optimal/best partitions. We compare the results obtained by our method with the earlier results available in the literature and show that our methodology is able to perform better than earlier methods.
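
Below is a generic simulated-annealing sketch for the cell-to-location-area assignment, written in Python for illustration. The move set, the toy cost (a quadratic paging term plus a handover penalty on a ring of cells), and the cooling schedule are assumptions; the paper's method uses its own solution representation and the Compact Index, which are not reproduced here.

import math, random

# Generic simulated annealing over assignments of cells to location areas.

def sa_partition(n_cells, n_areas, cost, iters=5000, t0=10.0, alpha=0.999):
    assign = [random.randrange(n_areas) for _ in range(n_cells)]
    cur_cost = cost(assign)
    best, best_cost, t = list(assign), cur_cost, t0
    for _ in range(iters):
        cell, new_area = random.randrange(n_cells), random.randrange(n_areas)
        old_area = assign[cell]
        assign[cell] = new_area                           # propose a move
        c = cost(assign)
        if c <= cur_cost or random.random() < math.exp((cur_cost - c) / t):
            cur_cost = c                                  # accept the move
            if c < best_cost:
                best, best_cost = list(assign), c
        else:
            assign[cell] = old_area                       # reject, roll back
        t *= alpha                                        # cool down
    return best, best_cost

# Toy cost: paging load grows with location-area size, plus a penalty for
# splitting adjacent cells (a ring of 12 cells) across different areas.
def toy_cost(assign):
    sizes = [assign.count(a) for a in set(assign)]
    paging = sum(s * s for s in sizes)
    handover = sum(assign[i] != assign[(i + 1) % len(assign)]
                   for i in range(len(assign)))
    return paging + 5 * handover

best, c = sa_partition(12, 3, toy_cost)
print(best, c)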

Relevance:

30.00%

Publisher:

Abstract:

We propose a distribution-free approach to the study of random geometric graphs. The distribution of vertices follows a Poisson point process with intensity function n f(·), where n ∈ N, and f is a probability density function on R^d. A vertex located at x connects via directed edges to other vertices that are within a cut-off distance r_n(x). We prove strong law results for (i) the critical cut-off function so that almost surely, the graph does not contain any node with out-degree zero for sufficiently large n and (ii) the maximum and minimum vertex degrees. We also provide a characterization of the cut-off function for which the number of nodes with out-degree zero converges in distribution to a Poisson random variable. We illustrate this result for a class of densities with compact support that have at most polynomial rates of decay to zero. Finally, we state a sufficient condition for an enhanced version of the above graph to be almost surely connected eventually.
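
A quick Python simulation sketch of the out-degree-zero count is given below for the simplest case only: a uniform density on the unit square and a constant cut-off r = sqrt(c log n / n). The scaling and the constants c are illustrative assumptions meant to show the behaviour around a critical cut-off, not the paper's general r_n(x).

import numpy as np

# Count vertices with out-degree zero, i.e. with no other vertex within
# distance r, for n uniform points on the unit square and a constant cut-off.

rng = np.random.default_rng(1)

def isolated_count(n, c):
    pts = rng.uniform(0.0, 1.0, size=(n, 2))
    r = np.sqrt(c * np.log(n) / n)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # ignore self-distances
    return int(np.sum(d.min(axis=1) > r))       # out-degree-zero vertices

for c in (0.1, 0.5, 2.0):
    print("c =", c, "isolated nodes:", isolated_count(2000, c))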