Abstract:
High-end network security applications demand high-speed operation and large rule-set support. Packet classification is the core functionality that demands high throughput in such applications. This paper proposes a packet classification architecture to meet such high throughput. We have implemented a Firewall with this architecture in reconfigurable hardware. We propose an extension to the Distributed Crossproducting of Field Labels (DCFL) technique to achieve a scalable and high-performance architecture. The implemented Firewall takes advantage of the inherent structure and redundancy of the rule set by using our DCFL Extended (DCFLE) algorithm. The use of the DCFLE algorithm results in both speed and area improvement when it is implemented in hardware. Although we restrict ourselves to standard 5-tuple matching, the architecture supports additional fields. High-throughput classification invariably uses Ternary Content Addressable Memory (TCAM) for prefix matching, though TCAM fares poorly in terms of area and power efficiency. Use of TCAM for port-range matching is expensive, as the range-to-prefix conversion results in a large number of prefixes, leading to storage inefficiency. Extended TCAM (ETCAM) is fast and is the most storage-efficient solution for range matching. We present, for the first time, a reconfigurable hardware implementation of ETCAM. We have implemented our Firewall as an embedded system on a Virtex-II Pro FPGA based platform, running Linux with the packet classification in hardware. The Firewall was tested in real time with a 1 Gbps Ethernet link and 128 sample rules. The packet classification hardware uses a quarter of the logic resources and slightly over one third of the memory resources of the XC2VP30 FPGA. It achieves a maximum classification throughput of 50 million packets/s, corresponding to a 16 Gbps link rate for the worst-case packet size. A Firewall rule update involves only memory re-initialization in software, without any hardware change.
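The storage cost of TCAM-based range matching mentioned above comes from the standard range-to-prefix expansion. The following is a minimal illustrative sketch (in Python, not the paper's DCFLE or ETCAM hardware) of that expansion for a 16-bit port field; the function name and the example range are purely illustrative.

```python
def range_to_prefixes(lo: int, hi: int, bits: int = 16):
    """Expand an inclusive port range [lo, hi] into the minimal set of
    binary prefixes (value, prefix-length), as needed when a range rule
    is stored in a conventional TCAM."""
    prefixes = []
    while lo <= hi:
        # Largest power-of-two block that starts at `lo`, stays aligned,
        # and does not overshoot `hi`.
        size = lo & -lo if lo > 0 else 1 << bits
        while size > hi - lo + 1:
            size //= 2
        prefixes.append((lo, bits - size.bit_length() + 1))
        lo += size
    return prefixes

# The classic case: matching "port >= 1024" needs 6 prefixes, while an
# arbitrary 16-bit range can need up to 30, hence the storage blow-up.
print(range_to_prefixes(1024, 65535))
```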
Abstract:
This paper presents three methodologies for determining optimum locations and magnitudes of reactive power compensation in power distribution systems. Method I and Method II are suitable for complex distribution systems with a combination of both radial and ring-main feeders and having different voltage levels. Method III is suitable for low-tension, single-voltage-level radial feeders. Method I is based on an iterative scheme with successive power-flow analyses, with formulation and solution of the optimization problem using linear programming. Method II and Method III are essentially based on the steady-state performance of distribution systems. These methods are simple to implement and yield satisfactory results comparable with the results of Method I. The proposed methods have been applied to a few distribution systems, and results obtained for two typical systems are presented for illustration purposes.
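Method I couples successive power-flow analyses with a linear program; the abstract does not give the exact formulation, so the sketch below only illustrates the general idea under assumed, linearized data: minimize the total compensation added subject to bringing bus voltages back above a lower limit. The sensitivity matrix S, the voltages, and the limit are hypothetical numbers, not the paper's.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data: linearized sensitivities dV_i/dQ_j (p.u. voltage rise per
# kvar injected at candidate bus j), as a power-flow run might report around
# the present operating point.
S = np.array([[0.8e-4, 0.3e-4],
              [0.4e-4, 0.9e-4],
              [0.2e-4, 0.5e-4]])
v_now = np.array([0.93, 0.94, 0.96])   # present bus voltages (p.u.)
v_min = 0.95                           # assumed statutory lower limit

# Minimize total compensation sum(q_j) subject to v_now + S @ q >= v_min, q >= 0.
res = linprog(c=np.ones(S.shape[1]),
              A_ub=-S, b_ub=v_now - v_min,
              bounds=[(0, None)] * S.shape[1])
print("kvar to add at each candidate bus:", res.x)
```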
Abstract:
The wedge shape is a fairly common cross-section found in many non-axisymmetric components used in machines, aircraft, ships and automobiles. If such components are forged between two mutually inclined dies, the metal displaced by the dies flows into the converging as well as into the diverging channels created by the inclined dies. The extent of each type of flow (convergent/divergent) depends on the die-material interface friction and the included die angle. Given the initial cross-section, the length as well as the exact geometry of the forged cross-section are therefore uniquely determined by these parameters. In this paper a simple stress analysis is used to predict changes in the geometry of a wedge undergoing compression between inclined platens. The flow in directions normal to the cross-section is assumed to be negligible. Experiments carried out using wedge-shaped lead billets show that, knowing the interface friction, and as long as the deformation is not too large, the dimensional changes in the wedge can be predicted with reasonable accuracy. The predicted flow behaviour of the metal for a wide range of die angles and interface friction values is presented: these characteristics can be used by the die designer to choose only the die lubricant if the die angle is specified, or to choose both of these parameters if there is no restriction on the exact die angle. The present work shows that the length of a wedge undergoing compression is highly sensitive to the die-material interface friction. Thus, in a situation where the top and bottom dies are inclined to each other, a wedge made of the material to be forged could be placed between the dies and compressed, whereupon the length of the compressed wedge, given the degree of compression, affords an estimate of the die-material interface friction.
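The paper's stress analysis itself is not reproduced in the abstract. As a heavily simplified illustration of the underlying kinematics only, the sketch below uses plane-strain volume constancy: given an assumed neutral-plane position (the quantity the stress analysis would supply from the friction and die angle) and an idealized flat-die geometry, it estimates how far the wedge lengthens for a given squeeze. The geometry, the parameter values and the neutral-plane input are all assumptions, not the paper's model.

```python
import math

def wedge_length_after_compression(a, b, alpha_deg, delta, x_neutral):
    """Estimate the new extent [a', b'] of a wedge cross-section compressed
    between a flat lower die and an upper die inclined at alpha, assuming
    plane strain (no flow normal to the cross-section) and a given neutral
    plane x_neutral separating converging from diverging flow.
    Geometry: gap h(x) = x*tan(alpha); pressing the upper die down by delta
    shifts the virtual apex to x0 = delta / tan(alpha)."""
    t = math.tan(math.radians(alpha_deg))
    x0 = delta / t
    # Cross-section area on each side of the neutral plane is conserved;
    # the common factor tan(alpha)/2 cancels from both sides.
    left = (x_neutral - x0) ** 2 - (x_neutral ** 2 - a ** 2)
    right = (b ** 2 - x_neutral ** 2) + (x_neutral - x0) ** 2
    if left < 0:
        raise ValueError("compression too large for this neutral-plane position")
    return x0 + math.sqrt(left), x0 + math.sqrt(right)

# Illustrative numbers (mm): an 80 mm long wedge, 10 deg die angle, 0.5 mm squeeze.
a_new, b_new = wedge_length_after_compression(a=20, b=100, alpha_deg=10,
                                               delta=0.5, x_neutral=60)
print(f"new length = {b_new - a_new:.1f} mm (was 80.0 mm)")
```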
Abstract:
In my master's thesis I analyse Byzantine warfare in the late period of the empire, using the military operations between the Byzantines and the crusader Principality of Achaia (1259–83) as a case study. Byzantine strategy was based, in the “oriental manner”, on the use of ambushes, diplomacy, surprise attacks, deception and the like. Open field battles that were risky in comparison with their benefits were usually avoided, but the Byzantines were sometimes forced to seek an open encounter because of their limited ability to keep strong armies in the field for long periods of time. Foreign mercenaries had an important place in Byzantine armies, and they could simply change sides if their paymasters ran out of resources. The use of mercenaries on short contracts made the composition of an army flexible but also heterogeneous; as a result, Byzantine armies were sometimes ineffective and prone to confusion. In open field battles the Byzantines used a formation made up of several lines placed one after another, which was especially suitable for cavalry battles; they may also have used other kinds of formations. The Byzantines were not considered equal to the Latins in close combat. Western Europeans mainly saw horse archers and Latin mercenaries in Byzantine service as threats to themselves in battle. The legitimacy of rulers around the Aegean Sea was weak, and in many cases political intrigues and personal relationships may have decided battles. Especially in sieges, the loyalty of the population was decisive. In sieges the Byzantines used plenty of siege machines and archers, which made fast conquests possible but was expensive. The Byzantines protected their frontiers by building castles. Military operations against the Principality of Achaia were mostly small-scale raids following an intensive beginning. Byzantine raids were mostly carried out by privateers and mountaineers, which does not fit the traditional picture in which warfare belonged to the imperial professional army. It is unlikely that the military operations in the war against the Principality of Achaia caused a great demographic or economic catastrophe, and some regions in the war zone may even have flourished. On the other hand, people started to concentrate into villages, which, together with growing risks for trade, probably disturbed economic development, and as a result birth rates may have decreased. Both sides of the war sought to exchange their prisoners of war, who were treated according to conventions accepted by both sides. It was possible to sell prisoners, especially women and children, into slavery, but the scale of this trade does not seem to have been great in the military operations treated in this thesis.
Abstract:
The analysis of transient electrical stresses in the insulation of high-voltage rotating machines is rendered difficult by the existence of capacitive and inductive couplings between phases. Published theories ignore many of the couplings between phases in order to obtain a solution. A new procedure is proposed here to determine the transient voltage distribution on rotating machine windings. All the significant capacitive and inductive couplings between different sections in a phase, and between sections in different phases, have been considered in this analysis. The experimental results show good agreement with the computed values.
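The abstract does not reproduce the proposed procedure, so the following is only a generic lumped-parameter sketch of the kind of coupled model involved: a winding represented as a ladder of sections with a full inductance matrix (retaining mutual couplings) and a Maxwell capacitance matrix, excited by an impulse at the line terminal and integrated numerically. The element values, the number of sections and the open-circuited far end are assumptions made for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative lumped-parameter data (not from the paper).
n = 8                                     # winding sections
Ls, Mmut = 1e-3, 0.3e-3                   # self / mutual inductance per section (H)
Cg, Cs = 0.5e-9, 2.0e-9                   # section-to-ground / inter-section capacitance (F)

# Full inductance matrix keeps the mutual couplings the abstract insists on;
# here the mutuals simply decay with section separation.
L = np.fromfunction(lambda i, j: np.where(i == j, Ls, Mmut * 0.5 ** abs(i - j)), (n, n))
# Maxwell capacitance matrix built from ground and inter-section elements.
C = np.diag(np.full(n, Cg))
for k in range(n - 1):
    C[k, k] += Cs; C[k + 1, k + 1] += Cs
    C[k, k + 1] -= Cs; C[k + 1, k] -= Cs

B = -np.eye(n) + np.eye(n, k=-1)          # branch voltage = v[k-1] - v[k]
N = np.eye(n) - np.eye(n, k=1)            # node current balance = i[k] - i[k+1]
e1 = np.eye(n)[:, 0]                      # impulse enters at the line-end section
Linv, Cinv = np.linalg.inv(L), np.linalg.inv(C)

def v_src(t):                             # double-exponential impulse at the line end
    return np.exp(-t / 68e-6) - np.exp(-t / 0.4e-6)

def rhs(t, x):                            # states: section currents i, node voltages v
    i, v = x[:n], x[n:]
    di = Linv @ (B @ v + e1 * v_src(t))
    dv = Cinv @ (N @ i)                   # far end left open in this sketch
    return np.concatenate([di, dv])

sol = solve_ivp(rhs, (0, 50e-6), np.zeros(2 * n), max_step=2e-8)
print("peak voltage to ground along the winding:", np.abs(sol.y[n:]).max(axis=1).round(3))
```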
Abstract:
The paper describes a Simultaneous Implicit (SI) approach for transient stability simulations based on an iterative technique using a triangularised admittance matrix [1]. The reduced saliency of the generator in the subtransient state is exploited to speed up the algorithm. Accordingly, the generator differential equations, except for the rotor swing, contain voltages proportional to the fluxes in the main field, the dampers and a hypothetical winding representing deep-flowing eddy currents as state variables. The simulation results are validated by comparison with two independent methods: Runge-Kutta simulation for a simplified system, and a method based on modelling the damper windings using conventional induction motor theory.
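As a minimal illustration of the simultaneous-implicit idea (trapezoidal integration with the equations of each step solved together iteratively), the sketch below integrates only the classical swing equation for a single machine connected to an infinite bus; the paper's method additionally carries the field, damper and eddy-current flux states and the full network. All system data and the fault scenario are assumed.

```python
import numpy as np

# Assumed per-unit data for a single machine connected to an infinite bus.
H, f0, Pm, E, V = 3.5, 50.0, 0.9, 1.05, 1.0
M = 2 * H / (2 * np.pi * f0)                      # swing-equation inertia coefficient

def x_line(t):
    # Assumed disturbance: transfer reactance jumps during a fault applied at
    # 0.05 s and cleared at 0.10 s, leaving a weakened post-fault network.
    return 0.65 if t < 0.05 else (1.8 if t < 0.10 else 0.8)

def f(t, delta, omega):                           # d/dt of [delta, omega]
    Pe = E * V / x_line(t) * np.sin(delta)
    return np.array([omega, (Pm - Pe) / M])

dt, t = 0.01, 0.0
delta = np.arcsin(Pm * x_line(0.0) / (E * V))     # pre-fault equilibrium angle
omega = 0.0
for _ in range(200):                              # simulate 2 s
    # Trapezoidal rule x_{k+1} = x_k + (dt/2)(f(x_k) + f(x_{k+1})), solved for
    # x_{k+1} by simple fixed-point iteration (production codes use Newton).
    k0 = f(t, delta, omega)
    d_new, w_new = delta, omega
    for _ in range(20):
        k1 = f(t + dt, d_new, w_new)
        d_new = delta + dt / 2 * (k0[0] + k1[0])
        w_new = omega + dt / 2 * (k0[1] + k1[1])
    delta, omega, t = d_new, w_new, t + dt
print(f"rotor angle after 2 s: {np.degrees(delta):.1f} deg")
```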
Abstract:
The aim of this study is to survey the meaning of craftsmanship in the goldsmith's occupation. The image of craftsmanship is built both theoretically and on the researcher's own practical experience. The study describes a dialogue between the self-employed goldsmith's everyday work and the trade union's views. Suomen Kultaseppien Liitto (the Goldsmith Association of Finland) was chosen as the trade union because it is the biggest, the oldest and the most influential in the occupational field. The research data are the 1995-1998 volumes of the occupational membership journal of Suomen Kultaseppien Liitto. The data were analyzed with Adapted Content Analysis and Grounded Theory. The professional occupation of goldsmiths, the role of craftsmanship and the future of the occupation are discussed. Additionally, the relationship between Suomen Kultaseppien Liitto and the occupational culture and profession of goldsmiths was studied. Craft and craftsmanship are most often discussed in articles related to tradition and education. Craftsmanship is understood very idealistically, with little meaning in practical life. St. Eligius and the skill and art of the goldsmiths of St Petersburg are raised as symbols of craftsmanship. The occupational image is broken, and a clear conflict between education and occupation is visible. Education produces artist-craftsmen, while handicraft workers are required in industry, and retailers or specially trained store assistants in business. Computer-aided design and manufacture render handicraft workmanship unnecessary. In a pessimistic view, the future possibilities of the goldsmith profession are dim, because the artist-craftsmen are bound to lose to fast-paced machines. On the other hand, people involved in goldsmith education see the future as bright, with designer-goldsmiths developing the occupation in new directions. Suomen Kultaseppien Liitto represents goldsmiths in public. The union, however, is governed by non-artisan goldsmiths. The union stresses business attitudes and entrepreneurship, and has succeeded in protecting the privileges of retailers and industry. The goldsmith profession is seen in the research data as a combination of the precious-metal industry and jewellery and watch stores, and a goldsmith shop is considered a specialized gift store. The goldsmith's occupation is not a profession, and Suomen Kultaseppien Liitto is not a trade union for artists and craftsmen. Accordingly, part of the representative authority of the union could be transferred from the Association to Taidekäsityöläiset Taiko ry, a member organization of Ornamo. The results of this study show the importance of defining the images of the goldsmith profession and the trade union. The results could be applied in goldsmith education to examine what would be the optimal education and training for present employment opportunities. The important background theories have been those of Habermas and Lévi-Strauss.
Abstract:
The aim of the thesis was to compare the appearance of the outcome produced by a computer-assisted embroidery design program with the original image. The study focused on embroidery with household machines and was made from the usability point of view, using two automatic techniques of Brother's PE-design 6.0 embroidery design program: multicoloured fragment design and multicoloured stitch surface design. The subject is very topical because of the fast development of machine embroidery. The theory is based on the history of household sewing machines, embroidery sewing machines, stitch types in household sewing machines, embroidery design programs, and the six automatic techniques of the PE-design 6.0 embroidery design program. Additionally, the design of embroidery patterns was covered: the original image, digitizing, punching, suitable sewing threads, and the connection between embroidery designs and the materials used for embroidery. The correspondence of the sewn appearances was examined with experimental sewing methods. Eighteen research samples of five original images were sewn with both techniques. The experiments were divided into four testing stages in the design program, and every testing stage was followed by experimental sewing with a Brother Super Galaxie 3100D embroidery machine. The experiments were reported in process files and in forms made for the techniques. The research samples were analysed on the basis of image syntax with sensory perception assessment. The original images and the correspondence of the embroidered appearances were analysed with a form made for the purpose, divided into colour and shape assessment on a five-stage similarity scale. Based on this correspondence analysis, it can be said that with both automatic techniques the best correspondence of colour and shape was achieved by changing the standard settings and using the maker's own thread chart and an edited original image. On the basis of the testing, it is impossible to say whether the image-editing capabilities of the program are sufficient or whether optimum correspondence requires a separate program. When aiming at correspondence between the appearances of two images, the computer is unable to trace the appearance of the original image by itself; when processing a computer-assisted embroidery image, human perception and personal decision-making are unavoidable.
Abstract:
For systems which can be decomposed into slow and fast subsystems, a near-optimum linear state regulator consisting of two subsystem regulators can be developed. Depending upon the desired criteria, either a short-term (fast) controller or a long-term (slow) controller can be easily designed with minimum computational cost. Using this approach, an example of a power system supplying a cyclic load is studied and the performance of the different controllers is compared.
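A common way to build such a two-subsystem regulator is the singular-perturbation composite control: design a slow LQR on the reduced model obtained by neglecting the fast dynamics, a fast LQR on the boundary-layer subsystem, and add a fast correction about the quasi-steady-state trajectory. The sketch below illustrates this on an arbitrary small example; the matrices are hypothetical, not the paper's power-system model.

```python
import numpy as np
from scipy.linalg import solve_continuous_are, inv

# Two-time-scale plant: x' = A11 x + A12 z + B1 u,  eps*z' = A21 x + A22 z + B2 u
eps = 0.05
A11, A12 = np.array([[0.0, 1.0], [-1.0, -0.5]]), np.array([[0.0], [1.0]])
A21, A22 = np.array([[1.0, 0.0]]), np.array([[-2.0]])
B1, B2 = np.array([[0.0], [0.2]]), np.array([[1.0]])

def lqr(A, B, Q, R):
    P = solve_continuous_are(A, B, Q, R)
    return inv(R) @ B.T @ P

# Slow (long-term) regulator on the reduced model obtained with eps -> 0.
A0 = A11 - A12 @ inv(A22) @ A21
B0 = B1 - A12 @ inv(A22) @ B2
Ks = lqr(A0, B0, np.eye(2), np.eye(1))
# Fast (short-term) regulator on the boundary-layer subsystem.
Kf = lqr(A22, B2, np.eye(1), np.eye(1))

def u_composite(x, z):
    # Slow feedback plus a fast correction about the quasi-steady-state of z.
    us = -Ks @ x
    z_qss = -inv(A22) @ (A21 @ x + B2 @ us)
    return us - Kf @ (z - z_qss)

# Quick Euler simulation of the full (stiff) plant under the composite law.
x, z, dt = np.array([1.0, 0.0]), np.array([0.5]), 1e-3
for _ in range(5000):
    u = u_composite(x, z)
    x = x + dt * (A11 @ x + A12 @ z + B1 @ u)
    z = z + dt / eps * (A21 @ x + A22 @ z + B2 @ u)
print("states after 5 s:", np.round(x, 4), np.round(z, 4))
```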
Abstract:
Near the boundaries of shells, thin shell theories cannot always provide a satisfactory description of the kinematic situation. This imposes severe limitations on simulating the boundary conditions in theoretical shell models. Here an attempt is made to overcome this limitation. The three-dimensional theory of elasticity is used near the boundaries, while thin shell theory covers the major part of the shell away from the boundaries. The two regions are connected by means of an “interphase element.” This method is used to study typical static stress and natural vibration problems.
Abstract:
Many novel computer architectures, such as array processors and multiprocessors, which achieve high performance through the use of concurrency, exploit variations of the von Neumann model of computation. The effective utilization of such machines makes special demands on programmers and their programming languages, such as the structuring of data into vectors or the partitioning of programs into concurrent processes. In comparison, the data flow model of computation demands only that the principle of structured programming be followed. A data flow program, often represented as a data flow graph, is a program that expresses a computation by indicating the data dependencies among operators. A data flow computer is a machine designed to take advantage of concurrency in data flow graphs by executing data-independent operations in parallel. In this paper, we discuss the design of a high-level language (DFL: Data Flow Language) suitable for data flow computers, and some sample procedures in DFL are presented. The implementation aspects are not discussed in detail, since no new problems are encountered. The language DFL embodies the concepts of functional programming, but in appearance closely resembles Pascal. The language is a better vehicle than the data flow graph for expressing a parallel algorithm. The compiler has been implemented on a DEC 1090 system in Pascal.
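To make the execution model concrete, here is a minimal sketch of a data flow graph interpreter: each operator node fires as soon as all of its operand tokens have arrived, so data-independent nodes (the add and subtract below) could run in parallel on a data flow machine. This illustrates the model only, not DFL or its compiler.

```python
import operator
from dataclasses import dataclass, field

@dataclass
class Node:
    op: callable
    arity: int
    consumers: list = field(default_factory=list)   # (target node id, input port)
    operands: dict = field(default_factory=dict)     # port -> value received so far

def run(nodes, inputs, result_node):
    ready = []
    for nid, port, value in inputs:                  # deliver the initial tokens
        nodes[nid].operands[port] = value
        if len(nodes[nid].operands) == nodes[nid].arity:
            ready.append(nid)
    results = {}
    while ready:                                     # fire any node whose operands are complete
        nid = ready.pop()
        n = nodes[nid]
        value = n.op(*[n.operands[p] for p in range(n.arity)])
        results[nid] = value
        for tgt, port in n.consumers:                # send the result token onward
            nodes[tgt].operands[port] = value
            if len(nodes[tgt].operands) == nodes[tgt].arity:
                ready.append(tgt)
    return results[result_node]

# (a + b) * (a - b): the add and sub nodes have no data dependency on each other.
nodes = {
    "add": Node(operator.add, 2, [("mul", 0)]),
    "sub": Node(operator.sub, 2, [("mul", 1)]),
    "mul": Node(operator.mul, 2, []),
}
print(run(nodes, [("add", 0, 7), ("add", 1, 3), ("sub", 0, 7), ("sub", 1, 3)], "mul"))
```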
Abstract:
This manual is a guide to establishing a set of operations to achieve high-grade results in product quality and recovery, flexibility, innovation, cost, and competitiveness. The manual outlines: - economic and feasible technologies for increasing recovery and reducing avoidable loss during processing, from the log to the finished board, and - mechanisms that allow production value to be optimised in mills of different sizes. Part 2 includes sections 8 to 17: Air drying, pre-drying, reconditioning, controlled final drying, dry milling, storage, information assessment, drying quality assessment, moisture content monitoring, glossary. Part 1 (http://era.deedi.qld.gov.au/3138) covers sections 1 to 7: Drying overview and strategy, coupe, log yard, green mill, green pack, bioprotection, rack timber.
Abstract:
This manual is a guide to establishing a set of operations to achieve high-grade results in product quality and recovery, flexibility, innovation, cost, and competitiveness. The manual outlines: - economic and feasible technologies for increasing recovery and reducing avoidable loss during processing, from the log to the finished board, and - mechanisms that allow production value to be optimised in mills of different sizes. Part 1 covers sections 1 to 7: Drying overview and strategy, coupe, log yard, green mill, green pack, bioprotection, rack timber. Part 2 (http://era.deedi.qld.gov.au/3137) includes sections 8 to 17: Air drying, pre-drying, reconditioning, controlled final drying, dry milling, storage, information assessment, drying quality assessment, moisture content monitoring, glossary.
Abstract:
In this paper, we propose an extension to the I/O device architecture recommended in the PCI-SIG IOV specification for virtualizing network I/O devices. The aim is to give a virtual machine fine-grained control on the I/O path of a shared device. The architecture allows virtual machines native access to I/O devices and provides device-level QoS hooks for controlling VM-specific device usage. For evaluating the architecture we use layered queuing network (LQN) models. We implement the architecture and evaluate it, using simulation techniques on the LQN model, to demonstrate the benefits. With the proposed architecture, the benefit for network I/O is 60% more than what can be expected with the existing architecture. The proposed architecture also improves scalability in terms of the number of virtual machines sharing the I/O device.
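The abstract does not spell out the QoS mechanism, so the sketch below only illustrates one common policy such a device-level hook could enforce: a per-VM token bucket on the transmit path of the shared device. The class, rates and frame sizes are hypothetical.

```python
import time

class TokenBucket:
    """Per-VM rate limiter of the kind a device-level QoS hook might apply
    to the transmit queue of a shared network device."""
    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8.0          # refill rate in bytes/s
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def admit(self, pkt_bytes: int) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if pkt_bytes <= self.tokens:
            self.tokens -= pkt_bytes
            return True                      # transmit now
        return False                         # defer: this VM exceeded its share

# Two VMs sharing one device: VM1 capped at 600 Mbit/s, VM2 at 400 Mbit/s.
shapers = {"vm1": TokenBucket(600e6, 64 * 1500), "vm2": TokenBucket(400e6, 64 * 1500)}
sent = {vm: 0 for vm in shapers}
deadline = time.monotonic() + 0.1
while time.monotonic() < deadline:           # offer back-to-back 1500 B frames for 100 ms
    for vm, bucket in shapers.items():
        if bucket.admit(1500):
            sent[vm] += 1500
print({vm: f"{b * 8 / 0.1 / 1e6:.0f} Mbit/s" for vm, b in sent.items()})
```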
Abstract:
Data flow computers are high-speed machines in which an instruction is executed as soon as all its operands are available. This paper describes the EXtended MANchester (EXMAN) data flow computer, which incorporates three major extensions to the basic Manchester machine: a multiple matching units scheme, an efficient implementation of the array data structure, and a facility to execute re-entrant routines concurrently. A simulator for the EXMAN computer has been coded in the discrete-event simulation language SIMULA 67 on the DEC 1090 system. Performance analysis studies have been conducted on the simulated EXMAN computer to study the effectiveness of the proposed extensions. The performance experiments have been carried out using three sample problems: matrix multiplication, Bresenham's line-drawing algorithm, and the polygon scan-conversion algorithm.
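The Manchester-machine mechanism behind the first extension is tag-based token matching: operand tokens carrying the same destination tag are paired in a matching store, and the instruction fires once all its operands are present. The sketch below illustrates a matching store split over several units by hashing the tag, so independent matches could proceed concurrently; it is an illustration of the idea, not the EXMAN design.

```python
class MatchingUnit:
    def __init__(self):
        self.waiting = {}                    # tag -> (port, value) of the first operand

    def accept(self, tag, port, value, arity):
        """Return (tag, operands) once all operands for this tag are present."""
        if arity == 1:
            return tag, (value,)
        partner = self.waiting.pop(tag, None)
        if partner is None:                  # first operand: park it and wait
            self.waiting[tag] = (port, value)
            return None
        ops = [None, None]
        ops[port], ops[partner[0]] = value, partner[1]
        return tag, tuple(ops)

class MatchingSection:
    """Tokens are distributed over several matching units by hashing the tag,
    so independent matches can be handled in parallel on real hardware."""
    def __init__(self, n_units=4):
        self.units = [MatchingUnit() for _ in range(n_units)]

    def accept(self, tag, port, value, arity=2):
        unit = self.units[hash(tag) % len(self.units)]
        return unit.accept(tag, port, value, arity)

section = MatchingSection()
print(section.accept(("add", 0), port=0, value=7))   # None: waiting for the partner token
print(section.accept(("add", 0), port=1, value=3))   # (('add', 0), (7, 3)) -> instruction fires
```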