964 results for Computer input-output equipment.


Relevance:

20.00%

Publisher:

Abstract:

Template matching is concerned with measuring the similarity between the patterns of two objects. This paper proposes a memory-based reasoning approach for pattern recognition of binary images with a large template set. Memory-based reasoning intrinsically requires a large database, and some binary image recognition problems inherently need large template sets, such as the recognition of Chinese characters, which needs thousands of templates. The proposed algorithm is based on the Connection Machine, the most massively parallel machine to date, and uses a multiresolution method to search for the matching template. The approach uses the pyramid data structure for the multiresolution representation of the templates and the input image pattern. For a given binary image, it scans the template pyramid searching for a match. A binary image of N × N pixels can be matched by our algorithm in O(log N) time, independent of the number of templates. Implementation of the proposed scheme is described in detail.
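The coarse-to-fine pyramid search can be sketched serially as follows. This is an illustrative reconstruction, not the paper's Connection Machine implementation; the OR-pooling downsampling and the pixel-agreement score are my assumptions:

```python
import numpy as np

def build_pyramid(img, levels):
    """Binary pyramid: each level halves resolution via 2x2 OR-pooling."""
    pyr = [img.astype(bool)]
    for _ in range(levels - 1):
        a = pyr[-1]
        pyr.append(a[0::2, 0::2] | a[1::2, 0::2] | a[0::2, 1::2] | a[1::2, 1::2])
    return pyr  # pyr[0] is full resolution, pyr[-1] is coarsest

def match_template(image, templates, levels=3):
    """Coarse-to-fine search: prune the template set at coarse levels,
    return the index of the best-matching template at full resolution."""
    img_pyr = build_pyramid(image, levels)
    tmpl_pyrs = [build_pyramid(t, levels) for t in templates]
    candidates = list(range(len(templates)))
    for lvl in range(levels - 1, -1, -1):          # coarsest -> finest
        scores = [(np.sum(img_pyr[lvl] == tmpl_pyrs[i][lvl]), i)
                  for i in candidates]
        best = max(s for s, _ in scores)
        # keep only the templates that are optimal at this resolution
        candidates = [i for s, i in scores if s == best]
    return candidates[0]
```

Because most templates are eliminated at the cheap coarse levels, the work per level stays small, which is the intuition behind the O(log N) bound on a sufficiently parallel machine.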


The present study of the stability of systems governed by a linear multidimensional time-varying equation, which are encountered in spacecraft dynamics, economics, demographics, and biological systems, gives attention to a lemma dealing with L(inf) stability of an integral equation that results from the differential equation of the system under consideration. Using the proof of this lemma, the main result on L(inf) stability is derived; a corollary of the theorem deals with constant-coefficient systems perturbed by small periodic terms.


The modes of binding of alpha- and beta-anomers of D-galactose, D-fucose and D-glucose to L-arabinose-binding protein (ABP) have been studied by energy minimization using the low resolution (2.4 A) X-ray data of the protein. These studies suggest that these sugars preferentially bind in the alpha-form to ABP, unlike L-arabinose, where both alpha- and beta-anomers bind almost equally. The best modes of binding of alpha- and beta-anomers of D-galactose and D-fucose differ slightly in the nature of the possible hydrogen bonds with the protein. The residues Arg 151 and Asn 232 of ABP form bidentate hydrogen bonds with both L-arabinose and D-galactose, but not with D-fucose or D-glucose. However, in the case of L-arabinose, Arg 151 forms hydrogen bonds with the hydroxyl group at the C-4 atom and the ring oxygen, whereas in the case of D-galactose it forms bonds with the hydroxyl groups at the C-4 and C-6 atoms of the pyranose ring. The calculated conformational energies also predict that D-galactose is a better inhibitor than D-fucose and D-glucose, in agreement with kinetic studies. The weak inhibitor D-glucose binds preferentially to one domain of ABP, leading to the formation of a weaker complex. Thus these studies provide information about the most probable binding modes of these sugars and also provide a theoretical explanation for the observed differences in their binding affinities.


The CCEM method (Contact Criteria and Energy Minimisation) has been developed and applied to study protein-carbohydrate interactions. The method uses available X-ray data even on the native protein at low resolution (above 2.4 Å) to generate realistic models of a variety of proteins with various ligands. The two examples discussed in this paper are arabinose-binding protein (ABP) and pea lectin. The X-ray crystal structure data reported on the ABP-β-l-arabinose complex at 2.8, 2.4 and 1.7 Å resolution differ drastically in predicting the nature of the interactions between the protein and ligand. It is shown that, using the data at 2.4 Å resolution, the CCEM method generates complexes which are as good as the higher (1.7 Å) resolution data. The CCEM method predicts some of the important hydrogen bonds between the ligand and the protein which are missing in the interpretation of the X-ray data at 2.4 Å resolution. The theoretically predicted hydrogen bonds are in good agreement with those reported at 1.7 Å resolution. Pea lectin has been solved only in the native form at 3 Å resolution. Application of the CCEM method also enables us to generate complexes of pea lectin with methyl-α-d-glucopyranoside and methyl-2,3-dimethyl-α-d-glucopyranoside which explain well the available experimental data in solution.
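The contact-criteria half of such a scheme can be illustrated with a simple distance screen between donor and acceptor atoms. The coordinates, atom names, and 3.5 Å cutoff below are illustrative placeholders, not values taken from the ABP or pea lectin structures:

```python
import math

def hydrogen_bond_contacts(donors, acceptors, cutoff=3.5):
    """Flag donor/acceptor atom pairs closer than the cutoff (Angstroms):
    a purely geometric contact criterion."""
    contacts = []
    for dname, d in donors.items():
        for aname, a in acceptors.items():
            dist = math.dist(d, a)
            if dist <= cutoff:
                contacts.append((dname, aname, round(dist, 2)))
    return contacts

# Hypothetical coordinates for illustration only.
donors = {"Arg151:NH1": (0.0, 0.0, 0.0)}
acceptors = {"sugar:O4": (0.0, 0.0, 3.0), "sugar:O6": (0.0, 5.0, 0.0)}
```

In a CCEM-style workflow, pairs passing such a geometric screen would seed candidate complexes that energy minimisation then refines.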


A scheme for the detection and isolation of actuator faults in linear systems is proposed. A bank of unknown input observers is constructed to generate residual signals which deviate in characteristic ways in the presence of actuator faults. The residual signals are unaffected by the unknown inputs acting on the system, which decreases the false alarm and miss probabilities. The results are illustrated through a simulation study of actuator fault detection and isolation in a pilot plant double-effect evaporator.
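A plain Luenberger observer already illustrates the residual idea (the paper's unknown-input observers additionally decouple the residual from unknown disturbances, which this sketch does not attempt). The system matrices and fault size below are illustrative:

```python
import numpy as np

def observer_residuals(A, B, C, L, u, y):
    """Run the observer xhat <- A xhat + B u + L (y - C xhat) and
    return |y_k - C xhat_k| at every step: the residual signal."""
    xhat = np.zeros(A.shape[0])
    res = []
    for uk, yk in zip(u, y):
        r = yk - C @ xhat
        res.append(float(np.linalg.norm(r)))
        xhat = A @ xhat + B @ uk + L @ r
    return res

# Illustrative first-order plant with an actuator bias fault from step 50.
A = np.array([[0.9]]); B = np.array([[1.0]])
C = np.array([[1.0]]); L = np.array([[0.5]])
x = np.zeros(1); u, y = [], []
for k in range(100):
    uk = np.array([1.0])
    fault = np.array([0.5]) if k >= 50 else np.array([0.0])
    y.append(C @ x)
    x = A @ x + B @ (uk + fault)   # the fault enters with the actuator input
    u.append(uk)                   # the observer sees only the commanded input
res = observer_residuals(A, B, C, L, u, y)
```

Before the fault the residual stays at zero; once the fault acts it jumps and settles at a nonzero level, which is the deviation a detection logic thresholds.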


There are a number of large networks which occur in many problems dealing with the flow of power, communication signals, water, gas, transportable goods, etc. Both the design and the planning of these networks involve optimization problems. The first part of this paper introduces the common characteristics of a nonlinear network (the network may be linear and the objective function nonlinear, or both may be nonlinear). The second part develops a mathematical model that brings together some important constraints based on the abstraction of a general network. The third part deals with solution procedures; it converts the network to a matrix-based system of equations, gives the characteristics of the matrix, and suggests two solution procedures, one of them new. The fourth part handles spatially distributed networks and evolves a number of decomposition techniques so that the problem can be solved with the help of a distributed computer system. Algorithms for parallel processors and spatially distributed systems are described.

There are a number of common features that pertain to networks. A network consists of a set of nodes and arcs. In addition, at every node there is the possibility of an input (power, water, messages, goods, etc.), an output, or neither. Normally, the network equations describe the flows among nodes through the arcs; these equations couple the variables associated with nodes. Invariably, the variables pertaining to arcs are constants, and the required result is the flows through the arcs.

To solve the normal base problem, we are given input flows at nodes, output flows at nodes, and certain physical constraints on other variables at nodes, and we must find the flows through the network (variables at nodes will be referred to as across variables). The optimization problem involves selecting inputs at nodes so as to optimise an objective function; the objective may be a cost function based on the inputs to be minimised, a loss function, or an efficiency function. The above mathematical model can be solved using the Lagrange multiplier technique, since the equalities are strong compared to the inequalities. The Lagrange multiplier technique divides the solution procedure into two stages per iteration: stage one calculates the problem variables x, and stage two the multipliers lambda. It is shown that the Jacobian matrix used in stage one (for solving a nonlinear system of necessary conditions) occurs in stage two also. A second solution procedure, called the total residue approach, has been embedded into the first one; it changes the equality constraints so as to obtain faster convergence of the iterations. Both solution procedures are found to converge in 3 to 7 iterations for a sample network.

The availability of distributed computer systems, both LAN and WAN, suggests the need for algorithms to solve these optimization problems. Two types of algorithms have been proposed: one based on the physics of the network and the other on the properties of the Jacobian matrix. Three algorithms have been devised, one of them for the local-area case. These are the regional distributed algorithm and the hierarchical regional distributed algorithm (both using the physical properties of the network), and the locally distributed algorithm (a multiprocessor-based approach with a local area network configuration). The approach used was to define an algorithm that is fast and uses minimal communication. These algorithms are found to converge at the same rate as the non-distributed (unitary) case.
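The two-stage iteration (primal variables, then multipliers) can be sketched on a toy single-balance network with quadratic costs. The economic-dispatch framing, cost coefficients, and step size are illustrative assumptions, not the paper's full network model:

```python
def two_stage_dispatch(costs, demand, step=1.0, iters=100):
    """Minimize sum(c_i * p_i**2) subject to sum(p_i) == demand.
    Stage one solves the stationarity condition 2*c_i*p_i = lam for the
    primal variables p; stage two moves the multiplier lam to shrink the
    balance mismatch."""
    lam = 0.0
    for _ in range(iters):
        p = [lam / (2.0 * c) for c in costs]   # stage one: problem variables
        mismatch = demand - sum(p)
        lam += step * mismatch                 # stage two: multiplier update
    return p, lam
```

For costs [1, 2, 4] and demand 7 the fixed point is lam = 8 with p = [4, 2, 1]; the mismatch contracts by a constant factor per iteration, mirroring the fast convergence reported above.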


Metallic glasses are of interest because of their mechanical properties. They are ductile as well as brittle. This is true of Pd77.5Cu6Si16.5, a ternary glassy alloy. Actually, the most stable metallic glasses are those which are alloys of noble or transition metals. A general formula is postulated as T70-80G30-20, where T stands for one or several 3d transition elements and G for the metalloid glass formers. Another general formula is A3B to A5B, where B is a metalloid. A computer method utilising the MIGAP computer program of Kaufman is used to calculate the miscibility gap over a range of temperatures. The precipitation of a secondary crystalline phase is postulated around 1500 K. This could produce a dispersed-phase composite with interesting high-temperature strength properties.
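MIGAP itself is not reproduced here, but the kind of calculation it performs can be illustrated with the simplest thermodynamic model that produces a miscibility gap, a symmetric regular solution. The interaction energy used in the test is an arbitrary illustrative value, not data for this alloy system:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def binodal(omega, T, tol=1e-10):
    """Symmetric regular solution G(x) = omega*x*(1-x)
    + R*T*(x*ln x + (1-x)*ln(1-x)).  Below Tc = omega/(2R) a miscibility
    gap opens; by symmetry its boundary solves dG/dx = 0 on (0, 0.5),
    found here by bisection.  Returns (x_low, x_high) or None."""
    if T >= omega / (2.0 * R):
        return None                      # fully miscible: no gap
    dG = lambda x: omega * (1.0 - 2.0 * x) + R * T * math.log(x / (1.0 - x))
    lo, hi = 1e-12, 0.5 - 1e-9
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if dG(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return lo, 1.0 - lo
```

Sweeping the temperature shows the gap widening on cooling, which is the temperature dependence such a program maps out.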


The quaternary system Sb-Te-Bi-Se with small amounts of suitable dopants is of interest for the manufacture of thermoelectric modules which exhibit the Peltier and Seebeck effects. This property could be useful in the production of energy from the thermoelectric effect. Other substances of interest are bismuth telluride (Bi2Te3) and Sb-Te-Bi, and compounds such as ZnIn2Se4. In the present paper, computer programs such as MIGAP of Kaufman are applied to indicate the stability of the ternary limits of Sb-Te-Bi within the temperature range of interest, namely 273 K to 300 K.


The compound CdHgTe and its constituent binaries CdTe, HgTe, and CdHg are semiconductors which are used in thermal, infrared, nuclear, thermoelectric, and other photosensitive devices. CdHgTe has a sphalerite structure of possible type A(II)1B(II)1C(VI)6. The TERCP program of Kaufman is used to estimate the stable regions of the ternary phase diagram using available thermodynamic data. It was found that there was little variation in stoichiometry with temperature. The compositions were calculated for temperatures ranging from 325 K to 100 K, and the compositional limits were Cd13-20Hg12-01Te75-79, with Hg varying most. By comparison with a similar compound, CdIn2Te4, of forbidden band width 0.88 to 0.90 eV, similar properties are postulated for Cd1Hg1Te6, with applications in the infrared region of the spectrum at 300 K, where this composition is given by TERCP at the limit of stability.


We present an interactive map-based technique for designing single-input-single-output compliant mechanisms that meet the requirements of practical applications. Our map juxtaposes user specifications with the attributes of real compliant mechanisms stored in a database so that not only can the practical feasibility of the specifications be discerned quickly, but modifications can also be made interactively to the existing compliant mechanisms. The practical utility of the method presented here exceeds that of shape and size optimizations because it accounts for manufacturing considerations, stress limits, and material selection. The premise for the method is the spring-leverage (SL) model, which characterizes the kinematic and elastostatic behavior of compliant mechanisms with only three SL constants. The user specifications are met interactively using the beam-based 2D models of compliant mechanisms by changing their attributes such as: (i) overall size in two planar orthogonal directions, separately and together, (ii) uniform resizing of the in-plane widths of all the beam elements, (iii) uniform resizing of the out-of-plane thicknesses of the beam elements, and (iv) the material. We present a design software program with a graphical user interface for interactive design. A case study that describes the design procedure in detail is also presented, while additional case studies are posted on a website. DOI: 10.1115/1.4001877.
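A lumped sketch of the spring-lever idea: reflect the output-side stiffness through an ideal lever of ratio n to get the stiffness seen at the input. The three constants here (k_in, k_out, n) are generic stand-ins for the paper's SL constants, whose exact definitions are not reproduced in this abstract:

```python
def sl_response(k_in, k_out, n, F_in):
    """Ideal lever of ratio n between an input spring k_in and an output
    spring k_out.  The output spring reflects to the input side as
    n**2 * k_out, so u_in = F_in / (k_in + n**2 * k_out) and
    u_out = n * u_in."""
    u_in = F_in / (k_in + n ** 2 * k_out)
    return u_in, n * u_in
```

In this sketch, uniformly resizing beam widths scales both stiffness constants together, changing the displacements but not their ratio; that is the flavor of attribute edit the interactive map supports.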


Artificial neural networks (ANNs) have shown great promise in modeling circuit parameters for computer-aided design applications. Leakage currents, which depend on process parameters, supply voltage, and temperature, can be modeled accurately with ANNs. However, the complex nature of the ANN model, with the standard sigmoidal activation functions, does not allow analytical expressions for its mean and variance. We propose the use of a new activation function that allows us to derive an analytical expression for the mean and a semi-analytical expression for the variance of the ANN-based leakage model. To the best of our knowledge this is the first result in this direction. Our neural network model also includes the voltage and temperature as input parameters, thereby enabling voltage- and temperature-aware statistical leakage analysis (SLA). All existing SLA frameworks are closely tied to the exponential polynomial leakage model and hence fail to work with sophisticated ANN models. In this paper, we also set up an SLA framework that can efficiently work with these ANN models. Results show that the cumulative distribution function of the leakage current of ISCAS'85 circuits can be predicted accurately, with the error in mean and standard deviation, compared to Monte Carlo-based simulations, being less than 1% and 2% respectively across a range of voltage and temperature values.
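Why closed forms matter here can be seen on a toy model: if log-leakage is affine in a Gaussian process parameter, leakage is lognormal and its mean is analytic, E[exp(a + b*X)] = exp(a + b^2/2) for X ~ N(0, 1); a sigmoidal network in place of the affine map has no such closed form, which is what motivates the special activation function. The constants below are illustrative, not fitted leakage coefficients:

```python
import math
import random

def analytic_mean(a, b):
    """E[exp(a + b*X)] for X ~ N(0,1): the lognormal mean exp(a + b**2/2)."""
    return math.exp(a + b * b / 2.0)

def monte_carlo_mean(a, b, samples=200_000, seed=0):
    """Brute-force check of the same expectation by sampling."""
    rng = random.Random(seed)
    total = sum(math.exp(a + b * rng.gauss(0.0, 1.0)) for _ in range(samples))
    return total / samples
```

The two agree to well under a percent, which is the kind of accuracy the analytical route delivers without Monte Carlo cost.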


Reorganizing a dataset so that its hidden structure can be observed is useful in any data analysis task. For example, detecting a regularity in a dataset helps us to interpret the data, compress the data, and explain the processes behind the data. We study datasets that come in the form of binary matrices (tables with 0s and 1s). Our goal is to develop automatic methods that bring out certain patterns by permuting the rows and columns. We concentrate on the following patterns in binary matrices: consecutive-ones (C1P), simultaneous consecutive-ones (SC1P), nestedness, k-nestedness, and bandedness. These patterns reflect specific types of interplay and variation between the rows and columns, such as continuity and hierarchies. Furthermore, their combinatorial properties are interlinked, which helps us to develop the theory of binary matrices and efficient algorithms. Indeed, we can detect all these patterns in a binary matrix efficiently, that is, in polynomial time in the size of the matrix. Since real-world datasets often contain noise and errors, we rarely witness perfect patterns. Therefore we also need to assess how far an input matrix is from a pattern: we count the number of flips (from 0s to 1s or vice versa) needed to bring out the perfect pattern in the matrix. Unfortunately, for most patterns it is an NP-complete problem to find the minimum distance to a matrix that has the perfect pattern, which means that the existence of a polynomial-time algorithm is unlikely. To find patterns in datasets with noise, we need methods that are noise-tolerant and work in practical time with large datasets. The theory of binary matrices gives rise to robust heuristics that have good performance with synthetic data and discover easily interpretable structures in real-world datasets: dialectal variation in spoken Finnish, division of European locations by the hierarchies found in mammal occurrences, and co-occurring groups in network data.
In addition to determining the distance from a dataset to a pattern, we need to determine whether the pattern is significant or merely an occurrence of random chance. To this end, we use significance testing: we deem a dataset significant if it appears exceptional when compared to datasets generated from a certain null hypothesis. After detecting a significant pattern in a dataset, it is up to domain experts to interpret the results in terms of the application.
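Of the listed patterns, nestedness has the simplest perfect-form test: sort the rows by their number of 1s and check that each row's support contains the next one's, so the supports form a chain under inclusion. A minimal sketch (the function name is mine):

```python
def is_nested(matrix):
    """True iff the rows of a 0/1 matrix can be ordered so that each
    row's set of 1-columns contains the next row's (a chain of supports)."""
    supports = sorted((frozenset(j for j, v in enumerate(row) if v)
                       for row in matrix),
                      key=len, reverse=True)
    return all(b <= a for a, b in zip(supports, supports[1:]))
```

An imperfectly nested matrix can then be scored by the minimum number of 0/1 flips needed to make this test pass, which is the distance notion described above.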