964 results for Computer input-output equipment.
Abstract:
In this article we review the current status of the modelling of both thermotropic and lyotropic liquid crystals. We discuss various coarse-graining schemes as well as simulation techniques such as Monte Carlo (MC) and Molecular Dynamics (MD) simulations. In the area of MC simulations we discuss in detail the algorithm for simulating hard objects such as spherocylinders of various aspect ratios, where the excluded-volume interaction enters the simulation through an overlap test. We use this technique to study the phase diagram of a special class of thermotropic liquid crystals, namely banana liquid crystals. Next we discuss a coarse-grained model of surfactant molecules and study the self-assembly of surfactant oligomers using MD simulations. Finally we discuss an atomistically informed coarse-grained description of lipid molecules used to study the gel to liquid-crystalline phase transition in the lipid bilayer system.
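The overlap test for hard spherocylinders reduces to computing the shortest distance between the two axis line segments and comparing it with the spherocylinder diameter. A minimal sketch of that test (function names are illustrative; the segment-distance routine is the standard closest-points-of-two-segments algorithm, not code from the paper):

```python
def _dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def _sub(a, b):
    return (a[0]-b[0], a[1]-b[1], a[2]-b[2])

def _clamp(x, lo=0.0, hi=1.0):
    return max(lo, min(hi, x))

def segment_distance(p1, q1, p2, q2, eps=1e-12):
    """Shortest distance between segments p1-q1 and p2-q2 in 3D."""
    d1, d2, r = _sub(q1, p1), _sub(q2, p2), _sub(p1, p2)
    a, e, f = _dot(d1, d1), _dot(d2, d2), _dot(d2, r)
    if a <= eps and e <= eps:            # both segments degenerate to points
        s = t = 0.0
    elif a <= eps:                       # first segment is a point
        s, t = 0.0, _clamp(f / e)
    else:
        c = _dot(d1, r)
        if e <= eps:                     # second segment is a point
            s, t = _clamp(-c / a), 0.0
        else:
            b = _dot(d1, d2)
            denom = a*e - b*b            # zero for parallel segments
            s = _clamp((b*f - c*e) / denom) if denom > eps else 0.0
            t = (b*s + f) / e
            if t < 0.0:                  # re-clamp s if t left [0, 1]
                s, t = _clamp(-c / a), 0.0
            elif t > 1.0:
                s, t = _clamp((b - c) / a), 1.0
    c1 = tuple(p1[i] + s*d1[i] for i in range(3))
    c2 = tuple(p2[i] + t*d2[i] for i in range(3))
    return _dot(_sub(c1, c2), _sub(c1, c2)) ** 0.5

def spherocylinders_overlap(axis_a, axis_b, diameter):
    """Hard spherocylinders overlap iff their axis segments come
    closer than one diameter (axes given as endpoint pairs)."""
    return segment_distance(axis_a[0], axis_a[1],
                            axis_b[0], axis_b[1]) < diameter
```

In a hard-particle MC move, a trial translation or rotation is simply rejected whenever this test reports an overlap with any neighbour.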
Abstract:
A new and efficient approach to construct a 3D wire-frame of an object from its orthographic projections is described. The input projections can be two or more and can include regular and complete auxiliary views. Each view may contain linear, circular and other conic sections. The output is a 3D wire-frame that is consistent with the input views. The approach can handle auxiliary views containing curved edges. This generality derives from a new technique to construct 3D vertices from the input 2D vertices (as opposed to the coordinate matching that is prevalent in current art). 3D vertices are constructed by projecting the 2D vertices in a pair of views on the common line of the two views. The construction of 3D edges also does not require the addition of silhouette and tangential vertices and the subsequent splitting of edges in the views. The concepts of complete edges and n-tuples are introduced to obviate this need. Entities corresponding to the 3D edge in each view are first identified and the 3D edges are then constructed from the information available with the matching 2D edges. This allows the algorithm to handle conic sections that are not parallel to any of the viewing directions. The localization of effort in constructing 3D edges is the source of efficiency of the construction algorithm, as it does not process all potential 3D edges. The working of the algorithm on typical drawings is illustrated. (C) 2011 Elsevier Ltd. All rights reserved.
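For contrast, the coordinate matching that the abstract describes as prevalent in current art can be sketched as follows: a front-view vertex (x, y) and a top-view vertex (x, z) sharing the same x coordinate yield a candidate 3D vertex (x, y, z). This toy sketch (all names hypothetical) shows only that conventional baseline, not the paper's common-line construction:

```python
def match_vertices(front, top, tol=1e-6):
    """Candidate 3D vertices from front-view (x, y) and top-view (x, z)
    2D vertices by matching shared x coordinates (conventional baseline)."""
    candidates = []
    for (xf, y) in front:
        for (xt, z) in top:
            if abs(xf - xt) <= tol:          # x coordinates agree
                candidates.append((xf, y, z))
    return candidates
```

Coordinate matching of this kind breaks down for auxiliary views, whose image planes share no coordinate axis with the principal views, which is what motivates the common-line construction of the paper.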
Abstract:
Ethylene gas is burnt and the carbon soot particles are thermophoretically collected using home-built equipment in which the fuel-air injection and the intervention into the 7.5-cm long flame are controlled using three small pneumatic cylinders and computer-driven controllers. The physical and mechanical properties and tribological performance of the collected soot are compared with those of carbon black and diesel soot. The crystalline structures of the nanometric particles generated in the flame, as revealed by high-resolution transmission electron microscopy, are shown to vary from the flame root to the exhaust. As a particle journeys up the flame, it passes from a purely amorphous coagulated phase at the burner nozzle to a well-defined crystalline shell in the mid-flame zone, and finally to a disordered phase consisting of randomly distributed short-range crystalline order at the exhaust. In the mid-flame region, a large shell of radial-columnar order surrounds a dense amorphous core. The hardness and wear resistance as well as the friction coefficient of the soot extracted from this zone are low. The mechanical characteristics of this zone may be attributed to microcrystalline slip. Moving towards the exhaust, the slip is inhibited and there is an increase in hardness and friction compared to those in the mid-flame zone. The comparison of flame soot with carbon black and diesel soot is further extended to suggest a rationale based on an additional physico-chemical study using micro-Raman spectroscopy.
Abstract:
We consider the problem of computing a minimum cycle basis in a directed graph G. The input to this problem is a directed graph whose arcs have positive weights. In this problem a {-1, 0, 1} incidence vector is associated with each cycle and the vector space over Q generated by these vectors is the cycle space of G. A set of cycles is called a cycle basis of G if it forms a basis for its cycle space. A cycle basis where the sum of weights of the cycles is minimum is called a minimum cycle basis of G. The current fastest algorithm for computing a minimum cycle basis in a directed graph with m arcs and n vertices runs in O(m^(ω+1) n) time (where ω < 2.376 is the exponent of matrix multiplication). If one allows randomization, then an Õ(m³n) algorithm is known for this problem. In this paper we present a simple Õ(m²n) randomized algorithm for this problem. The problem of computing a minimum cycle basis in an undirected graph has been well studied. In this problem a {0, 1} incidence vector is associated with each cycle and the vector space over F₂ generated by these vectors is the cycle space of the graph. The fastest known algorithm for computing a minimum cycle basis in an undirected graph runs in O(m²n + mn² log n) time, and our randomized algorithm for directed graphs almost matches this running time.
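The {-1, 0, 1} incidence vector of a directed cycle can be made concrete as follows: index the vector by arcs, writing +1 for an arc traversed in its own direction, -1 for an arc traversed against it, and 0 for arcs not on the cycle. A small sketch (the data layout is illustrative, not from the paper):

```python
def cycle_incidence(arcs, cycle):
    """Build the {-1, 0, 1} incidence vector of a directed cycle.

    arcs:  list of (u, v) directed arcs of G
    cycle: list of (arc_index, direction), direction = +1 if the arc is
           traversed u -> v along the cycle, -1 if traversed v -> u
    """
    vec = [0] * len(arcs)
    walk_end = None
    for idx, direction in cycle:
        u, v = arcs[idx]
        head, tail = (u, v) if direction == 1 else (v, u)
        if walk_end is not None and head != walk_end:
            raise ValueError("arcs do not form a connected walk")
        walk_end = tail
        vec[idx] = direction
    start = arcs[cycle[0][0]][0 if cycle[0][1] == 1 else 1]
    if walk_end != start:
        raise ValueError("walk is not closed")
    return vec
```

The cycle space of G is then the span over Q of all such vectors, and a minimum cycle basis is a basis of that span minimizing the total cycle weight.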
Abstract:
The questions that one should answer in engineering computations - deterministic, probabilistic/randomized, as well as heuristic - are (i) how good the computed results/outputs are and (ii) how much the cost is, in terms of the amount of computation and the amount of storage used in obtaining the outputs. The absolutely error-free quantities, as well as the completely errorless computations done in a natural process, can never be captured by any means at our disposal. While the computations, including the input real quantities, in nature/natural processes are exact, the computations that we do using a digital computer, or that are carried out in an embedded form, are never exact. The input data for such computations are also never exact, because any measuring instrument has an inherent error of a fixed order associated with it, and this error, as a matter of hypothesis and not as a matter of assumption, is not less than 0.005 per cent. Here by error we imply relative error bounds. The fact that the exact error is never known under any circumstances and in any context implies that the term error means nothing but error bounds. Further, in engineering computations, it is the relative error or, equivalently, the relative error bounds (and not the absolute error) that is supremely important in providing us the information regarding the quality of the results/outputs. Another important fact is that inconsistency and/or near-inconsistency in nature, i.e., in problems created from nature, is completely nonexistent, while in our modelling of natural problems we may introduce inconsistency or near-inconsistency due to human error, due to the inherent non-removable error associated with any measuring device, or due to assumptions introduced to make the problem solvable or more easily solvable in practice. Thus, if we discover any inconsistency or possibly any near-inconsistency in a mathematical model, it is certainly due to any or all of the three foregoing factors.
We do, however, go ahead and solve such inconsistent/near-inconsistent problems and obtain results that could be useful in real-world situations. The talk considers several deterministic, probabilistic, and heuristic algorithms in numerical optimisation, in other numerical and statistical computations, and in PAC (probably approximately correct) learning models. It highlights the quality of the results/outputs by specifying relative error bounds along with the associated confidence level, and the cost, viz., the amount of computation and of storage, through complexity. It points out the limitations of error-free computation (wherever possible, i.e., where the number of arithmetic operations is finite and known a priori) as well as of the usage of interval arithmetic. Further, the interdependence among the error, the confidence, and the cost is discussed.
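The distinction between absolute and relative error, and the stated 0.005 per cent instrument bound, can be illustrated with a small sketch (the numeric values are hypothetical examples, not data from the talk):

```python
def relative_error(exact, approx):
    """Relative error of an approximation with respect to the exact value."""
    return abs(approx - exact) / abs(exact)

def relative_error_bound(measured, rel_bound=5e-5):
    """Interval implied by a relative error bound of 0.005% (= 5e-5):
    the true value lies within measured * (1 +/- rel_bound)."""
    delta = abs(measured) * rel_bound
    return measured - delta, measured + delta

# The same absolute error means very different quality at different scales:
# an absolute error of 0.5 is 0.005% of 10000 but 50% of 1, which is why
# relative error bounds, not absolute ones, convey the quality of a result.
```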
Abstract:
Metal oxide varistors (MOVs) are popularly used to protect offline electronic equipment against power line transients. Offline switched mode power supplies (SMPS) use power line filters and MOVs in the front end. The power line filter reduces the conducted noise emitted into the power line, and the MOVs connected before this line filter clamp line transients to safer levels, thereby protecting the SMPS. Because of the presence of 'X' capacitors at the input of the line filter, the MOV clamping voltage is increased. This paper presents one such case and gives theoretical and experimental results. An approximate method to predetermine the magnitude of such clamping voltages is also presented.
Abstract:
In the present paper, a constitutive model is proposed for cemented soils, in which the cementation component and the frictional component are treated separately and then added together to obtain the overall response. The modified Cam clay model is used to predict the frictional resistance, and an elasto-plastic strain-softening model is proposed for the cementation component. The rectangular isotropic yield curve proposed by Vatsala (1995) for the bond component has been modified in order to account for the anisotropy generally observed in natural soft cemented soils. In this paper, the proposed model is used to predict the experimental results of extension tests on soft cemented soils, whereas compression test results are presented elsewhere. The model predictions compare quite satisfactorily with the observed response. Only a few input parameters are required, which are well defined and easily determined, and the model uses an associated flow rule.
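For reference, the frictional component here is predicted by the modified Cam clay model, whose yield surface in p'-q space takes the standard critical-state form (symbols follow the usual convention, not notation from the paper):

```latex
f = \frac{q^2}{M^2} + p'\left(p' - p'_c\right) = 0
```

where p' is the mean effective stress, q the deviatoric stress, M the slope of the critical state line, and p'_c the preconsolidation pressure controlling the size of the yield ellipse.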
Abstract:
The radius of direct attraction of a discrete neural network is a measure of the stability of the network. It is known that Hopfield networks designed using Hebb's rule have a radius of direct attraction of Ω(n/p), where n is the size of the input patterns and p is the number of them. This lower bound is tight if p is no larger than 4. We construct a family of such networks with radius of direct attraction Ω(n/√(p log p)), for any p ≥ 5. The techniques used to prove the result led us to the first polynomial-time algorithm for designing a neural network with maximum radius of direct attraction around arbitrary input patterns. The optimal synaptic matrix is computed using the ellipsoid method of linear programming in conjunction with an efficient separation oracle. Restrictions of symmetry and non-negative diagonal entries in the synaptic matrix can be accommodated within this scheme.
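Hebb's rule itself is simple to state: the synaptic weight between units i and j is the sum over stored patterns of the product of their i-th and j-th entries, with a zero diagonal. A minimal sketch in which a stored +/-1 pattern is a fixed point of the threshold dynamics, and a one-bit corruption falls back into its basin of attraction (function names are illustrative):

```python
def hebb_matrix(patterns):
    """Hebbian synaptic matrix W[i][j] = sum_k x_k[i]*x_k[j], zero diagonal.
    patterns: list of lists with +/-1 entries, all of length n."""
    n = len(patterns[0])
    W = [[0] * n for _ in range(n)]
    for x in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += x[i] * x[j]
    return W

def update(W, state):
    """One synchronous threshold update: sign(W @ state), with sign(0) -> +1."""
    n = len(state)
    return [1 if sum(W[i][j] * state[j] for j in range(n)) >= 0 else -1
            for i in range(n)]
```

The radius of direct attraction then measures how many bits of a stored pattern may be corrupted while still guaranteeing recovery in one such update step.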
Abstract:
A system for temporal data mining includes a computer readable medium having an application configured to receive at an input module a temporal data series and a threshold frequency. The system is further configured to identify, using a candidate identification and tracking module, one or more occurrences in the temporal data series of a candidate episode and increment a count for each identified occurrence. The system is also configured to produce at an output module an output for those episodes whose count of occurrences results in a frequency exceeding the threshold frequency.
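The counting step described in this and the following abstract can be sketched as a single scan of the event sequence, tracking the next expected event of a candidate serial episode and counting non-overlapped occurrences; episodes whose count, divided by the sequence length, exceeds the threshold frequency are reported. All names below are illustrative, not the patented implementation:

```python
def count_occurrences(events, episode):
    """Count non-overlapped occurrences of a serial episode (a tuple of
    event types, in order) in a sequence of event types."""
    count, pos = 0, 0
    for e in events:
        if e == episode[pos]:
            pos += 1                      # advance to the next expected event
            if pos == len(episode):
                count += 1                # full occurrence found
                pos = 0                   # restart, so occurrences don't overlap
    return count

def frequent_episodes(events, candidates, threshold):
    """Report candidate episodes whose occurrence frequency exceeds threshold."""
    return [ep for ep in candidates
            if count_occurrences(events, ep) / len(events) > threshold]
```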
Abstract:
A system for temporal data mining includes a computer readable medium having an application configured to receive at an input module a temporal data series having events with start times and end times, a set of allowed dwelling times and a threshold frequency. The system is further configured to identify, using a candidate identification and tracking module, one or more occurrences in the temporal data series of a candidate episode and increment a count for each identified occurrence. The system is also configured to produce at an output module an output for those episodes whose count of occurrences results in a frequency exceeding the threshold frequency.
Abstract:
In the two-user Gaussian Strong Interference Channel (GSIC) with finite constellation inputs, it is known that relative rotation between the constellations of the two users enlarges the Constellation Constrained (CC) capacity region. In this paper, a metric for finding the approximate angle of rotation to maximally enlarge the CC capacity is presented. It is shown that for some portion of the Strong Interference (SI) regime, with Gaussian input alphabets, the FDMA rate curve touches the capacity curve of the GSIC. Even as the Gaussian alphabet FDMA rate curve touches the capacity curve of the GSIC, at high powers, with both the users using the same finite constellation, we show that the CC FDMA rate curve lies strictly inside the CC capacity curve for the constellations BPSK, QPSK, 8-PSK, 16-QAM and 64-QAM. It is known that, with Gaussian input alphabets, the FDMA inner-bound at the optimum sum-rate point is always better than the simultaneous-decoding inner-bound throughout the Weak Interference (WI) regime. For a portion of the WI regime, it is shown that, with identical finite constellation inputs for both the users, the simultaneous-decoding inner-bound enlarged by relative rotation between the constellations can be strictly better than the FDMA inner-bound.
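The benefit of relative rotation can be seen at the level of the sum constellation: with identical alphabets, points of x1 + x2 coincide, and rotating user 2's alphabet removes these coincidences, which is what enlarges the CC capacity. A toy check for QPSK (the rotation angle pi/8 is chosen purely for illustration, not by the metric proposed in the paper):

```python
import cmath
import math

def qpsk():
    """Unit-energy QPSK alphabet."""
    return [cmath.exp(1j * (math.pi / 4 + k * math.pi / 2)) for k in range(4)]

def distinct_sum_points(theta, tol=1e-9):
    """Number of distinct points in the sum constellation x1 + x2*e^{j*theta}."""
    a = qpsk()
    sums = [x1 + x2 * cmath.exp(1j * theta) for x1 in a for x2 in a]
    distinct = []
    for s in sums:
        if all(abs(s - d) > tol for d in distinct):
            distinct.append(s)
    return len(distinct)
```

Without rotation the 16 input pairs collapse onto 9 sum points, so several pairs are indistinguishable to a joint decoder; a pi/8 rotation keeps all 16 distinct.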
Abstract:
A spring-mass-lever (SML) model is introduced in this paper for a single-input-single-output compliant mechanism to capture its static and dynamic behavior. The SML model is a reduced-order model, and its five parameters provide physical insight and quantify the stiffness and inertia at the input and output ports as well as the transformation of force and displacement between the input and output. The model parameters can be determined with reasonable accuracy without performing dynamic or modal analysis. The paper describes two uses of the SML model: computationally efficient analysis of a system of which the compliant mechanism is a part, and design of compliant mechanisms for given user specifications. During design, the SML model enables determining the feasible parameter space of user-specified requirements, assessing the suitability of a compliant mechanism to meet the user specifications, and selecting and/or re-designing compliant mechanisms from an existing database. Manufacturing constraints, material choice, and other practical considerations are incorporated into this methodology. A micromachined accelerometer and a valve mechanism are used as examples to show the effectiveness of the SML model in analysis and design. (C) 2012 Published by Elsevier Ltd.
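The flavour of such a reduced-order model can be conveyed with an even simpler lumping: a rigid lever of displacement ratio n connecting an input spring k_in and an output spring k_out. This two-spring reduction and its parameter names are illustrative assumptions, not the paper's five-parameter SML model:

```python
def static_response(F, k_in, k_out, n):
    """Static response of a rigid-lever reduction.

    F      input force
    k_in   stiffness seen at the input port
    k_out  stiffness seen at the output port
    n      lever (displacement amplification) ratio, u_out = n * u_in

    Returns (u_in, u_out)."""
    k_eff = k_in + n**2 * k_out   # output spring reflected through the lever
    u_in = F / k_eff
    return u_in, n * u_in
```

The n² factor is the usual reflection of a stiffness through an ideal transformer, and it is this kind of port-level parameter that lets a compliant mechanism be analyzed as part of a larger system without a full finite-element model.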