921 results for MACHINES
Abstract:
As the virtual world grows more complex, finding a standard way to store data becomes increasingly important. Ideally, each data item would be brought into the computer system only once. References to data items need to be cryptographically verifiable, so that the data can maintain its identity while being passed around. This way there will be only one copy of the user's family photo album, while the user can use multiple tools to show or manipulate it. Copies of the user's data could be stored on some of his family members' computers, on some of his own computers, and also at some online services he uses. When all actors operate on one replicated copy of the data, the system automatically avoids a single point of failure: the data will not disappear when one computer breaks or one service provider goes out of business. One shared copy also makes it possible to delete a piece of data from all systems at once, at the user's request. In our research we tried to find a model that would make data manageable for users and make it possible to store the same data at various locations. We studied three systems, Persona, Freenet, and GNUnet, that suggest different models for protecting user data. The main application areas of the systems studied include securing online social networks, providing anonymous web access, and preventing censorship in file sharing. Each of the systems studied stores user data on machines belonging to third parties. The systems differ in the measures they take to protect their users from data loss, forged information, censorship, and monitoring. All of the systems use cryptography to secure the names used for the content and to protect the data from outsiders. Based on the knowledge gained, we built a prototype platform called Peerscape, which stores user data in a synchronized, protected database. Data items themselves are protected with cryptography against forgery, but not encrypted, as the focus has been on disseminating the data directly among family and friends rather than letting third parties store the information. We turned the synchronizing database into a peer-to-peer web by exposing its contents through an integrated HTTP server. The REST-like HTTP API supports development of applications in JavaScript. To evaluate the platform's suitability for application development we wrote some simple applications, including a public chat room, a BitTorrent site, and a flower-growing game. During our early tests we concluded that the platform works well for simple applications. As web standards develop further, writing applications for the platform should become easier. Any system this complex will have its problems, and we do not expect our platform to replace the existing web, but we are fairly impressed with the results and consider our work important from the perspective of managing user data.
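As a rough illustration of the REST-like HTTP API described above, the sketch below reads and writes JSON items over HTTP; the address and resource paths are assumptions for illustration, not Peerscape's actual endpoints.

```python
# A minimal sketch of talking to a REST-like HTTP API such as the abstract
# describes. The host, port, and resource paths are assumed, not Peerscape's.
import json
import urllib.request

BASE = "http://localhost:8080"  # assumed address of the integrated HTTP server

def get_item(path):
    """Fetch a stored data item as JSON from the local peer."""
    with urllib.request.urlopen(f"{BASE}/{path}") as resp:
        return json.load(resp)

def put_item(path, item):
    """Store a data item on the local peer, to be replicated to other peers."""
    req = urllib.request.Request(
        f"{BASE}/{path}",
        data=json.dumps(item).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    put_item("albums/family/photo1", {"caption": "Summer 2009"})
    print(get_item("albums/family/photo1"))
```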
Abstract:
This article discusses the scope of research on the application of information technology in construction (ITC). A model of the information and material activities which together constitute the construction process is presented, using the IDEF0 activity modelling methodology. Information technology is defined to include all kinds of technology used for the storage, transfer and manipulation of information, thus also including devices such as copying machines, faxes and mobile phones. Using the model, the domain of ITC research is defined as the use of information technology to facilitate and re-engineer the information process component of construction. Developments in IT use in construction during the last decades are discussed against the background of a simplified model of generic information processing tasks. The scope of ITC is compared with the scopes of research in related areas such as design methodology, construction management and facilities management. Health care is proposed as an interesting alternative to the often-used car manufacturing industry as an IT application domain for comparison. Some of the key areas of ITC research in recent years (expert systems, company IT strategies, and product modelling) are briefly discussed. The article finishes with a short discussion of the problems of applying standard scientific methodology in ITC research, in particular in product model research.
Abstract:
This paper makes explicit the relation between relative part position and the kinematic freedom of parts, which is implicitly available in the literature. An extensive set of representative papers in the areas of assembly and kinematic modelling is reviewed to identify specifically how the ideas in the two areas are related and influence each other's development. The papers are categorised by the approaches followed in the specification, representation, and solution of the part relations. It is observed that the extent of the part geometry is not respected in modelling schemes; as a result, the causal flow of events (proximity–contact–mobility) during the assembling process is not realised in the existing modelling paradigms, which focus on either the relative positioning problem or the relative motion problem. Though an assembly is a static description of part configuration, achieving this configuration requires relative motion for bringing parts together during the assembly process. On the other hand, the kinematic freedom of a part depends on the nature of its contacting regions with other parts in its static configuration. These two problems are thus related through the contact geometry. The chronology of the approaches that significantly contributed to the development of the subject is also included in the paper.
Abstract:
This paper evaluates methods of multiclass support vector machines (SVMs) for effective use in distance relay coordination. It also describes a strategy of supportive systems that aid the conventional protection philosophy in situations where protection systems have maloperated and/or information is missing, providing selective and secure coordination. SVMs have considerable potential as zone classifiers for distance relay coordination. This typically requires a multiclass SVM classifier to effectively learn the underlying relation between the reach of different zones and the apparent impedance trajectory during a fault. Several methods have been proposed for multiclass classification in which several binary SVM classifiers are typically combined; some authors have instead extended binary SVM classification to a one-step single optimization that considers all classes at once. In this paper, the one-step multiclass, one-against-all, and one-against-one methods are compared with respect to accuracy, number of iterations, number of support vectors, and training and testing time. The performance analysis of these three methods is presented on three data sets belonging to the training and testing patterns of three supportive systems for a region and part of a network, an equivalent 526-bus system of the practical Indian Western grid.
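The three multiclass strategies compared above all have off-the-shelf counterparts; the sketch below contrasts them in scikit-learn on synthetic two-feature data standing in for the apparent-impedance patterns (the paper's relay data sets are not reproduced here).

```python
# A minimal sketch of the three multiclass SVM strategies on synthetic data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier, OneVsOneClassifier
from sklearn.svm import SVC, LinearSVC

# Two features mimic (R, X) points; three classes mimic three relay zones.
X, y = make_classification(n_samples=600, n_features=2, n_informative=2,
                           n_redundant=0, n_classes=3, n_clusters_per_class=1,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

methods = {
    "one-against-all": OneVsRestClassifier(SVC(kernel="rbf")),
    "one-against-one": OneVsOneClassifier(SVC(kernel="rbf")),
    # Crammer-Singer solves a single optimization over all classes at once.
    "one-step (Crammer-Singer)": LinearSVC(multi_class="crammer_singer",
                                           max_iter=10000),
}
for name, clf in methods.items():
    clf.fit(X_train, y_train)
    print(f"{name}: test accuracy = {clf.score(X_test, y_test):.3f}")
```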
Abstract:
A fast algorithm for the computation of maximum compatible classes (MCCs) among the internal states of an incompletely specified sequential machine is presented in this paper. All the maximum compatible classes are determined by processing compatibility matrices of progressively diminishing order, whose total number does not exceed (p + m), where p is the largest cardinality among these classes and m is the number of such classes. Consequently, the algorithm is especially suitable for the state minimization of very large sequential machines as encountered in VLSI circuits and systems.
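One standard way to enumerate maximum compatible classes, sketched below under the assumption that the pairwise state-compatibility relation has already been computed, is to treat them as the maximal cliques of the compatibility graph; this is a generic formulation for illustration, not the paper's matrix-reduction algorithm.

```python
# A minimal sketch: maximum compatible classes as maximal cliques of the
# compatibility graph, enumerated with the classic Bron-Kerbosch recursion.
def maximal_compatibles(states, compatible):
    """Yield all maximal sets of mutually compatible states.

    `compatible` is a symmetric predicate: compatible(a, b) -> bool.
    """
    def bron_kerbosch(clique, candidates, excluded):
        if not candidates and not excluded:
            yield set(clique)
            return
        for v in list(candidates):
            neighbours = {u for u in states if u != v and compatible(u, v)}
            yield from bron_kerbosch(clique | {v},
                                     candidates & neighbours,
                                     excluded & neighbours)
            candidates = candidates - {v}
            excluded = excluded | {v}

    yield from bron_kerbosch(set(), set(states), set())

if __name__ == "__main__":
    # Toy machine with states a..d, where (a, d) is the only incompatible pair.
    incompatible = {frozenset("ad")}
    comp = lambda x, y: frozenset((x, y)) not in incompatible
    for mcc in maximal_compatibles("abcd", comp):
        print(sorted(mcc))   # prints ['a', 'b', 'c'] and ['b', 'c', 'd']
```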
Abstract:
This paper presents a new approach to fault location in high voltage power transmission systems using Support Vector Machines (SVMs). A knowledge base is developed using transient stability studies of the apparent impedance swing trajectory in the R-X plane. The SVM technique is applied to identify the fault location in the system. Results presented on a sample 3-power-station, 9-bus system illustrate the implementation of the proposed method.
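The R-X swing trajectory that forms such a knowledge base is built from the apparent impedance Z = V/I seen by the relay; a minimal sketch with made-up phasor samples:

```python
# A minimal sketch of the feature the abstract works with: the apparent
# impedance Z = V/I at a relay, tracked as an R-X trajectory. The phasor
# values below are made up for illustration.
import numpy as np

def apparent_impedance(v_phasors, i_phasors):
    """Return (R, X) points of the swing trajectory from complex phasors."""
    z = np.asarray(v_phasors) / np.asarray(i_phasors)
    return z.real, z.imag

# Made-up post-fault phasor samples (per-unit) at a relay location.
v = np.array([1.00 + 0.02j, 0.97 + 0.05j, 0.93 + 0.08j])
i = np.array([0.50 - 0.20j, 0.55 - 0.18j, 0.61 - 0.15j])
r, x = apparent_impedance(v, i)
for k, (rk, xk) in enumerate(zip(r, x)):
    print(f"sample {k}: R = {rk:.3f} pu, X = {xk:.3f} pu")
```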
Abstract:
Clustered architecture processors are preferred for embedded systems because centralized register file architectures scale poorly in terms of clock rate, chip area, and power consumption. Although clustering helps by improving clock speed, reducing the energy consumption of the logic, and simplifying the design, it introduces extra overhead by way of inter-cluster communication. This communication happens over long global wires, which leads to delays in execution and significantly higher energy consumption. In this paper, we propose a new instruction scheduling algorithm that exploits the scheduling slacks of instructions and the communication slacks of data values together to achieve better energy-performance trade-offs for clustered architectures with heterogeneous interconnect. Our instruction scheduling algorithm achieves 35% and 40% reductions in communication energy, while the overall energy-delay product improves by 4.5% and 6.5%, respectively, for 2-cluster and 4-cluster machines, with a marginal increase (1.6% and 1.1%) in execution time. Our test bed uses the Trimaran compiler infrastructure.
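The core idea, steering inter-cluster transfers with enough slack onto a slower but lower-energy wire, can be sketched as follows; the latency and energy figures are illustrative assumptions, and this is a toy heuristic, not the paper's algorithm.

```python
# A minimal sketch: transfers whose slack covers the slow wire's extra latency
# go to the low-energy interconnect; tight transfers keep the fast wire.
FAST_WIRE = {"latency": 1, "energy": 4.0}   # cycles, arbitrary energy units
SLOW_WIRE = {"latency": 3, "energy": 1.0}

def assign_wires(transfers):
    """transfers: list of (name, slack_in_cycles). Returns wire choices."""
    extra = SLOW_WIRE["latency"] - FAST_WIRE["latency"]
    choices = {}
    for name, slack in transfers:
        # Using the slow wire only when its latency penalty fits in the slack
        # keeps execution time (almost) unchanged while communication energy drops.
        choices[name] = "slow" if slack >= extra else "fast"
    return choices

transfers = [("t0", 0), ("t1", 5), ("t2", 2), ("t3", 1)]
for name, wire in assign_wires(transfers).items():
    print(name, "->", wire, "wire")
```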
Abstract:
Conventional three-dimensional isoparametric elements are susceptible to locking when used to model plate/shell geometries or when the meshes are distorted. Hybrid elements that are based on a two-field variational formulation are immune to most of these problems and hence can be used to efficiently model both "chunky" three-dimensional and plate/shell-type structures. Thus, a single element type can be used to model "all" types of structures, which also allows us to use a standard dual algorithm for carrying out the topology optimization of the structure. We also address the issue of manufacturability of the designs.
Abstract:
The notion of optimization is inherent in protein design. A long linear chain composed of twenty types of amino acid residues is known to fold to a 3-D conformation that minimizes the combined inter-residue energy interactions. There are two distinct protein design problems, viz. predicting the folded structure from a given sequence of amino acid monomers (the folding problem) and determining a sequence for a given folded structure (the inverse folding problem). These two problems have much similarity to engineering structural analysis and structural optimization problems, respectively. In the folding problem, a protein chain with a given sequence folds to a conformation, called a native state, which has a unique global minimum energy value when compared to all other unfolded conformations. This involves a search in the conformation space and is somewhat akin to the principle of minimum potential energy that determines the deformed static equilibrium configuration of an elastic structure of given topology, shape, and size that is subjected to certain boundary conditions. In the inverse folding problem, one has to design a sequence with some objectives (having a specific feature of the folded structure, docking with another protein, etc.) and constraints (the sequence being fixed in some portion, a particular composition of amino acid types, etc.) while obtaining a sequence that would fold to the desired conformation satisfying the criteria of folding. This requires a search in the sequence space and is similar to structural optimization in the design-variable space, wherein a certain feature of the structural response is optimized subject to some constraints while satisfying the governing static or dynamic equilibrium equations. Based on this similarity, in this work we apply topology optimization methods to protein design, discuss modeling issues, and present some initial results.
Abstract:
In the recent past it has been found that HVDC transmission systems and turbine-generator shaft torsional dynamics can interact in an unfavourable manner. This paper presents a detailed linearised state-space model of an AC/DC system to study this torsional interaction. The model developed is used to study the effect of various system parameters, such as dc line loading, converter firing angle, and the firing scheme employed. The results obtained are compared with those given in [3].
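The standard small-signal workflow behind such a study is to linearise the system into dx/dt = Ax and inspect the eigenvalues of A for poorly damped torsional modes; a minimal sketch with a made-up two-state mode, not the paper's AC/DC model:

```python
# A minimal sketch of eigenvalue-based small-signal screening. The 2x2 A
# below is an illustrative lightly damped oscillator, not a real system model.
import numpy as np

A = np.array([[0.0, 1.0],
              [-22500.0, -0.3]])   # one lightly damped ~24 Hz mode (assumed)

for lam in np.linalg.eigvals(A):
    freq_hz = abs(lam.imag) / (2 * np.pi)
    damping = -lam.real / abs(lam)   # damping ratio of the mode
    print(f"mode {lam:.3f}: f = {freq_hz:.2f} Hz, zeta = {damping:.4f}")
```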
Abstract:
A novel method to account for transmission line resistances in structure-preserving energy functions (SPEFs) is presented in this paper. The method exploits the equivalence of a lossy network having the same conductance-to-susceptance ratio for all its elements to a lossless network with a new set of power injections. The system equations and the energy function are developed using centre of inertia (COI) variables, and the loads are modelled as arbitrary functions of the respective bus voltages. The application of the SPEF to direct transient stability evaluation is presented using a realistic power system example.
Abstract:
While plants of a single species emit a diversity of volatile organic compounds (VOCs) to attract or repel interacting organisms, these specific messages may be lost in the midst of the hundreds of VOCs produced by sympatric plants of different species, many of which may have no signal content. Receivers must be able to reduce the babel, or noise, in these VOCs in order to correctly identify the message. For chemical ecologists faced with vast amounts of data on the volatile signatures of plants in different ecological contexts, it is imperative to employ accurate methods of classifying messages, so that suitable bioassays may then be designed to understand message content. We demonstrate the utility of 'Random Forests' (RF), a machine-learning algorithm, for the task of classifying volatile signatures and choosing the minimum set of volatiles for accurate discrimination, using data from sympatric Ficus species as a case study. We demonstrate the advantages of RF over conventional classification methods such as principal component analysis (PCA), as well as data-mining algorithms such as support vector machines (SVM), diagonal linear discriminant analysis (DLDA) and k-nearest neighbour (KNN) analysis. We show why a tree-building method such as RF, which is increasingly being used by the bioinformatics, food technology and medical communities, is particularly advantageous for the study of plant communication using volatiles, dealing, as it must, with abundant noise.
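The RF workflow described above, classifying samples and then pruning to a minimal discriminating set of volatiles via feature importances, can be sketched in scikit-learn on synthetic data (the Ficus measurements are not reproduced here):

```python
# A minimal sketch: fit a Random Forest to "species" labels from synthetic
# VOC profiles, rank volatiles by importance, and re-test a small subset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# 40 "volatiles" per sample, only a handful informative, 3 "species".
X, y = make_classification(n_samples=300, n_features=40, n_informative=5,
                           n_classes=3, random_state=1)
rf = RandomForestClassifier(n_estimators=500, random_state=1).fit(X, y)

ranked = np.argsort(rf.feature_importances_)[::-1]
top = ranked[:5]   # candidate minimal volatile set
print("candidate minimal volatile set (feature indices):", top)

score = cross_val_score(
    RandomForestClassifier(n_estimators=500, random_state=1),
    X[:, top], y, cv=5).mean()
print(f"cross-validated accuracy on the reduced set: {score:.3f}")
```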
Abstract:
Utilizing a circuit model [1, 2] of an induction motor, this paper describes a simplified analysis of the steady-state performance of a voltage-controlled induction motor (VCIM) drive. By solving a set of nonlinear algebraic equations which describe the VCIM drive under steady operation, the operating variables, such as the constant components of torque and rotor flux linkages and the fundamental components of stator voltage, current and phase angle, are obtained for any given value of slip, triggering angle and supply voltage.
Abstract:
Stability analysis is carried out considering the free lateral vibrations of simply supported composite skew plates subjected to both direct and shear in-plane forces. An oblique stress component representation is used, consistent with the skew geometry of the plate. A double series expressed in Chebyshev polynomials is used as the assumed deflection surface, and the Ritz method of solution is employed. Numerical results are obtained for different combinations of side ratios, skew angle, and in-plane loadings acting individually or in combination. In this method, the in-plane load parameter is varied until the fundamental frequency goes to zero; the value of the in-plane load then corresponds to a critical buckling load. Plots of frequency parameter versus in-plane loading are given for a few typical cases. Details of crossings and quasi-degeneracies of these curves are presented.
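The load-sweep criterion above, increasing the in-plane load until the fundamental frequency vanishes, amounts to a root-finding problem; a minimal sketch using the classic linear ω² versus load relation with made-up constants, standing in for the Ritz/Chebyshev eigensolution:

```python
# A minimal sketch: the critical buckling load is the root of omega^2(p) = 0.
# The frequency model and its constants are illustrative assumptions.
from scipy.optimize import brentq

OMEGA0_SQ = 400.0   # fundamental omega^2 at zero load (assumed)
P_CR_TRUE = 12.5    # buckling load hidden in the toy model (assumed)

def fundamental_omega_sq(p):
    """omega^2 of the first mode under in-plane load p (linear toy model)."""
    return OMEGA0_SQ * (1.0 - p / P_CR_TRUE)

# Sweep/bracket the load and locate where the fundamental frequency vanishes.
p_cr = brentq(fundamental_omega_sq, 0.0, 100.0)
print(f"estimated critical buckling load: {p_cr:.2f}")
```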