22 results for Computer Engineering


Relevance: 30.00%

Publisher:

Abstract:

Most modern distance relays are designed to avoid overreaching due to the transient d.c. component of the fault current, whereas a more likely source of transients in e.h.v. systems is the oscillatory discharge of the system charging current into the fault. Until now, no attempts have been made to reproduce these transients in the laboratory. This paper describes an analogue and an accurate digital simulation of these harmonic transients. The dynamic behaviour of a typical polarised mho-type relay is analysed, and results are presented. The paper also advocates the use of active filters to remove the harmonics associated with e.h.v. systems and hence improve the speed of response and accuracy of the protective relays.
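As a rough illustration of the kind of composite waveform involved, the sketch below digitally synthesises a fault-current signal containing a power-frequency component, a decaying d.c. offset, and a damped high-frequency oscillation standing in for the charging-current discharge, then low-pass filters it. All parameter values (frequencies, time constants, amplitudes) are hypothetical placeholders, not taken from the paper, and the digital Butterworth filter is only a stand-in for the analogue active filters the paper advocates.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 20_000.0                        # sampling rate, Hz (illustrative)
t = np.arange(0.0, 0.2, 1.0 / fs)    # 200 ms observation window

# Hypothetical per-unit amplitudes and time constants:
f_power, f_osc = 50.0, 600.0         # power frequency / oscillatory discharge frequency
tau_dc, tau_osc = 0.05, 0.01         # decay constants of the d.c. and oscillatory parts
I_m, I_dc, I_osc = 1.0, 0.8, 0.4

steady = I_m * np.sin(2 * np.pi * f_power * t)                       # steady-state fault current
dc_offset = I_dc * np.exp(-t / tau_dc)                               # transient d.c. component
charging = I_osc * np.exp(-t / tau_osc) * np.sin(2 * np.pi * f_osc * t)  # charging-current discharge
i_fault = steady + dc_offset + charging                              # composite relay input

# 4th-order low-pass at 150 Hz: passes the 50 Hz component while
# suppressing the 600 Hz oscillatory transient (the d.c. offset is unaffected).
b, a = butter(4, 150.0, btype="low", fs=fs)
i_filtered = filtfilt(b, a, i_fault)
```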

Relevance: 30.00%

Publisher:

Abstract:

In this article we review the current status of the modelling of both thermotropic and lyotropic liquid crystals. We discuss various coarse-graining schemes as well as simulation techniques such as Monte Carlo (MC) and molecular dynamics (MD) simulations. In the area of MC simulations we discuss in detail the algorithm for simulating hard objects such as spherocylinders of various aspect ratios, where the excluded-volume interaction enters the simulation through an overlap test. We use this technique to study the phase diagram of a special class of thermotropic liquid crystals, namely banana liquid crystals. Next we discuss a coarse-grained model of surfactant molecules and study the self-assembly of surfactant oligomers using MD simulations. Finally we discuss an atomistically informed coarse-grained description of lipid molecules used to study the gel to liquid-crystalline phase transition in lipid bilayer systems.
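The overlap test mentioned above is commonly implemented as a shortest-distance test between the axis segments of the two spherocylinders: the particles overlap exactly when that distance is less than the sum of their radii. Below is a minimal sketch under that convention; the function names and the segment-distance routine (a standard clamping construction from computational geometry) are ours, not the article's code.

```python
import numpy as np

def segment_distance(p1, q1, p2, q2, eps=1e-12):
    """Shortest distance between segments p1-q1 and p2-q2 (standard clamping method)."""
    d1, d2, r = q1 - p1, q2 - p2, p1 - p2
    a, e, f = np.dot(d1, d1), np.dot(d2, d2), np.dot(d2, r)
    if a <= eps and e <= eps:                      # both segments degenerate to points
        return np.linalg.norm(r)
    if a <= eps:                                   # first segment is a point
        s, t = 0.0, np.clip(f / e, 0.0, 1.0)
    else:
        c = np.dot(d1, r)
        if e <= eps:                               # second segment is a point
            s, t = np.clip(-c / a, 0.0, 1.0), 0.0
        else:
            b = np.dot(d1, d2)
            denom = a * e - b * b                  # zero when the axes are parallel
            s = np.clip((b * f - c * e) / denom, 0.0, 1.0) if denom > eps else 0.0
            t = (b * s + f) / e
            if t < 0.0:                            # clamp t, then recompute s
                s, t = np.clip(-c / a, 0.0, 1.0), 0.0
            elif t > 1.0:
                s, t = np.clip((b - c) / a, 0.0, 1.0), 1.0
    return np.linalg.norm((p1 + s * d1) - (p2 + t * d2))

def spherocylinders_overlap(r1, u1, r2, u2, L, D):
    """Hard spherocylinders with axis length L and diameter D overlap iff the
    shortest distance between their axis segments is less than D."""
    p1, q1 = r1 - 0.5 * L * u1, r1 + 0.5 * L * u1
    p2, q2 = r2 - 0.5 * L * u2, r2 + 0.5 * L * u2
    return segment_distance(p1, q1, p2, q2) < D

# Two perpendicular rods (aspect ratio L/D = 5) whose axes pass 0.5*D apart:
r1, u1 = np.zeros(3), np.array([1.0, 0.0, 0.0])
r2, u2 = np.array([0.0, 0.0, 0.5]), np.array([0.0, 1.0, 0.0])
print(spherocylinders_overlap(r1, u1, r2, u2, L=5.0, D=1.0))  # True: 0.5 < 1.0
```

In a hard-particle MC sweep, a trial translation or rotation is accepted only if this test reports no overlap against any neighbour, since the hard-core Boltzmann factor is zero for any overlapping configuration.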

Relevance: 30.00%

Publisher:

Abstract:

The questions that one should answer in engineering computations, whether deterministic, probabilistic/randomized, or heuristic, are (i) how good the computed results/outputs are and (ii) what the cost is, in terms of the amount of computation and the amount of storage used to obtain them. The absolutely error-free quantities, as well as the completely errorless computations occurring in a natural process, can never be captured by any means at our disposal. While the computations in nature/natural processes, including their real-valued inputs, are exact, the computations that we perform on a digital computer, or that are carried out in an embedded form, are never exact. The input data for such computations are also never exact, because any measuring instrument has an inherent error of a fixed order associated with it, and this error, as a matter of hypothesis and not as a matter of assumption, is not less than 0.005 per cent. Here by error we mean relative error-bounds: since the exact error is never known under any circumstances or in any context, the term error is nothing but error-bounds. Further, in engineering computations it is the relative error, or equivalently the relative error-bounds (and not the absolute error), that is supremely important in conveying the quality of the results/outputs.

Another important fact is that inconsistency and/or near-inconsistency in nature, i.e., in problems created from nature, is completely nonexistent, while in our modelling of natural problems we may introduce inconsistency or near-inconsistency due to human error, due to the inherent non-removable error associated with any measuring device, or due to assumptions introduced to make the problem solvable, or more easily solvable, in practice. Thus if we discover any inconsistency, or possibly any near-inconsistency, in a mathematical model, it is certainly due to one or more of these three factors. We do, however, go ahead and solve such inconsistent/near-inconsistent problems, and do obtain results that can be useful in real-world situations.

The talk considers several deterministic, probabilistic, and heuristic algorithms in numerical optimisation, in other numerical and statistical computations, and in PAC (probably approximately correct) learning models. It characterises the quality of the results/outputs by specifying relative error-bounds along with the associated confidence level, and the cost, viz. the amount of computation and of storage, through complexity. It points out the limitations of error-free computation (wherever it is possible, i.e., where the number of arithmetic operations is finite and known a priori) as well as of interval arithmetic. Further, the interdependence among the error, the confidence, and the cost is discussed.
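To make the distinction between absolute and relative error-bounds concrete, here is a minimal interval-arithmetic sketch. The worked subtraction is our own toy example; only the 0.005 per cent minimum instrument error figure comes from the abstract.

```python
# Propagating relative error-bounds with interval arithmetic (a sketch).
DELTA = 5e-5   # 0.005%: the hypothesised minimum relative error of any instrument

def interval(x, delta=DELTA):
    """Enclose a measured value x in an interval of relative half-width delta."""
    lo, hi = x * (1 - delta), x * (1 + delta)
    return (min(lo, hi), max(lo, hi))

def sub(a, b):
    # Worst-case enclosure of a - b.
    return (a[0] - b[1], a[1] - b[0])

def relative_error_bound(iv):
    mid = 0.5 * (iv[0] + iv[1])
    return 0.5 * (iv[1] - iv[0]) / abs(mid)

x = interval(12.5)   # two nearly equal measured quantities
y = interval(12.4)

# Subtracting nearly equal quantities: the absolute error-bounds merely add,
# but the *relative* error-bound of the difference blows up -- which is why
# relative error, not absolute error, tells us the quality of the output.
d = sub(x, y)
print(relative_error_bound(x))   # ~5e-5
print(relative_error_bound(d))   # ~1.2e-2, several orders of magnitude worse
```

The same widening effect, compounded over long chains of operations, is one facet of the limitation of interval arithmetic alluded to above: the enclosures remain rigorous but can become too pessimistic to be informative.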