912 results for Copying machines


Relevance:

10.00%

Publisher:

Abstract:

Increasingly large-scale applications are generating an unprecedented amount of data. However, the growing gap between computation and I/O capacity on High End Computing (HEC) machines creates a severe bottleneck for data analysis. Instead of moving data from its source to the output storage, in-situ analytics processes output data while simulations are running. However, in-situ data analysis incurs significant contention for computing resources with the simulations themselves, and such contention severely degrades simulation performance. Since different data processing strategies have different impacts on performance and cost, there is a consequent need for flexibility in the location of data analytics. In this paper, we explore and analyze several potential data-analytics placement strategies along the I/O path. To find the best strategy for reducing data movement in a given situation, we propose a flexible data analytics (FlexAnalytics) framework. Based on this framework, a FlexAnalytics prototype system is developed for analytics placement. The FlexAnalytics system enhances the scalability and flexibility of the current I/O stack on HEC platforms and is useful for data pre-processing, runtime data analysis and visualization, as well as for large-scale data transfer. Two use cases – scientific data compression and remote visualization – have been applied in the study to verify the performance of FlexAnalytics. Experimental results demonstrate that the FlexAnalytics framework increases data transfer bandwidth and improves application end-to-end transfer performance.
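The placement trade-off described above can be sketched with a toy cost model: where along the I/O path should an analytics step such as compression run? All bandwidths, the compression ratio, and the analysis cost below are hypothetical numbers for illustration, not measurements from the paper.

```python
def transfer_time(gb, bw_gb_s):
    """Seconds to move `gb` gigabytes over a link sustaining `bw_gb_s` GB/s."""
    return gb / bw_gb_s

def end_to_end(data_gb, placement, compress_ratio=4.0,
               node_bw=1.0, wan_bw=0.1, analysis_s=20.0):
    """Total seconds for three candidate placements of a compression step."""
    if placement == "source":    # compress on the compute nodes, then ship
        return analysis_s + transfer_time(data_gb / compress_ratio, wan_bw)
    if placement == "staging":   # move raw data to staging nodes, compress there
        return (transfer_time(data_gb, node_bw) + analysis_s
                + transfer_time(data_gb / compress_ratio, wan_bw))
    return transfer_time(data_gb, wan_bw)  # "none": ship raw output

costs = {p: end_to_end(100.0, p) for p in ("source", "staging", "none")}
best = min(costs, key=costs.get)
print(best, {p: round(t, 1) for p, t in costs.items()})
```

With these invented numbers the model already shows why no single placement wins in general: change the compression ratio or the wide-area bandwidth and the ranking of the three strategies shifts.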

Relevance:

10.00%

Publisher:

Abstract:

A combined data matrix consisting of high performance liquid chromatography–diode array detector (HPLC–DAD) and inductively coupled plasma–mass spectrometry (ICP-MS) measurements of samples from the roots of the plant Cortex Moutan (CM) produced much better classification and prediction results than either of the individual data sets. The HPLC peaks (organic components) of the CM samples and the ICP-MS measurements (trace metal elements) were investigated with the use of principal component analysis (PCA) and linear discriminant analysis (LDA); essentially, the qualitative results suggested that discrimination of the CM samples from three different provinces was possible, with the combined matrix producing the best results. Three further methods, K-nearest neighbor (KNN), back-propagation artificial neural network (BP-ANN) and least squares support vector machines (LS-SVM), were applied for the classification and prediction of the samples. Again, the combined data matrix analyzed by the KNN method produced the best results (100% correct; prediction set data). Additionally, multiple linear regression (MLR) was utilized to explore any relationship between the organic constituents and the metal elements of the CM samples; the extracted linear regression equations showed that the essential metals, as well as some metallic pollutants, were related to the organic compounds on the basis of their concentrations.
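The data-fusion idea can be illustrated in a few lines: concatenate the chromatographic feature matrix with the elemental one, autoscale, and classify province with K-nearest neighbours. The data below are synthetic stand-ins, not CM measurements, and the class separations are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n_per, provinces = 20, 3
# Synthetic "HPLC peak areas" (8 columns) and "ICP-MS elements" (5 columns),
# with province-dependent means so the classes are separable.
hplc = np.vstack([rng.normal(p, 1.0, (n_per, 8)) for p in range(provinces)])
icpms = np.vstack([rng.normal(2 * p, 1.0, (n_per, 5)) for p in range(provinces)])
y = np.repeat(np.arange(provinces), n_per)

X = np.hstack([hplc, icpms])            # the combined data matrix
X = (X - X.mean(0)) / X.std(0)          # autoscale each column

def knn_predict(Xtr, ytr, x, k=3):
    """Majority vote among the k nearest training samples (Euclidean)."""
    idx = np.argsort(np.linalg.norm(Xtr - x, axis=1))[:k]
    return np.bincount(ytr[idx]).argmax()

# Leave-one-out accuracy on the combined matrix.
hits = sum(knn_predict(np.delete(X, i, 0), np.delete(y, i), X[i]) == y[i]
           for i in range(len(y)))
print(f"LOO accuracy: {hits / len(y):.2f}")
```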

Relevance:

10.00%

Publisher:

Abstract:

Network topology and routing are two important factors in determining the communication costs of big data applications at large scale. For a given Cluster, Cloud, or Grid (CCG) system, the network topology is fixed, and static or dynamic routing protocols are preinstalled to direct the network traffic; users cannot change them once the system is deployed. Hence, it is hard for application developers to identify the optimal network topology and routing algorithm for applications with distinct communication patterns. In this study, we design a CCG virtual system (CCGVS), which first uses container-based virtualization to allow users to create a farm of lightweight virtual machines on a single host. It then uses software-defined networking (SDN) to control the network traffic among these virtual machines. Users can change the network topology and control the network traffic programmatically, thereby enabling application developers to evaluate their applications on the same system with different network topologies and routing algorithms. Preliminary experimental results with both synthetic big data programs and the NPB benchmarks show that CCGVS can reproduce application performance variations caused by network topology and routing algorithm.
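Why such experiments matter can be seen with a toy calculation: the same all-to-all communication pattern costs a different average hop count on different topologies under shortest-path routing. The ring and star below are invented adjacency lists, not CCGVS configurations.

```python
from collections import deque
from itertools import permutations

def hops(adj, src, dst):
    """Shortest-path hop count via BFS (models shortest-path routing)."""
    seen, q = {src}, deque([(src, 0)])
    while q:
        node, d = q.popleft()
        if node == dst:
            return d
        for nb in adj[node]:
            if nb not in seen:
                seen.add(nb)
                q.append((nb, d + 1))

def avg_hops(adj):
    """Mean hop count over all ordered pairs (an all-to-all pattern)."""
    pairs = list(permutations(adj, 2))
    return sum(hops(adj, s, d) for s, d in pairs) / len(pairs)

n = 8
ring = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
star = {0: list(range(1, n)), **{i: [0] for i in range(1, n)}}
print(f"ring: {avg_hops(ring):.2f} hops, star: {avg_hops(star):.2f} hops")
```

The star wins on hop count but concentrates all traffic on the hub, which is exactly the kind of trade-off a topology/routing testbed lets a developer measure rather than guess.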

Relevance:

10.00%

Publisher:

Abstract:

A novel near-infrared spectroscopy (NIRS) method has been researched and developed for the simultaneous analysis of the chemical components and associated properties of mint (Mentha haplocalyx Briq.) tea samples. The common analytes were total polysaccharide content, total flavonoid content, total phenolic content, and total antioxidant activity. To resolve the NIRS data matrix for such analyses, least squares support vector machines (LS-SVM) was found to be the best chemometrics method for prediction, although it was closely followed by the radial basis function/partial least squares model. Interestingly, the commonly used partial least squares method was unsatisfactory in this case. Additionally, principal component analysis and hierarchical cluster analysis were able to distinguish the mint samples according to their four geographical provinces of origin, and this was further facilitated with the use of the chemometrics classification methods: K-nearest neighbors, linear discriminant analysis, and partial least squares discriminant analysis. In general, given the potential savings in sampling and analysis time, as well as in the costs of the special analytical reagents required by the standard individual methods, NIRS offers a very attractive alternative for the simultaneous analysis of mint samples.

Relevance:

10.00%

Publisher:

Abstract:

Insights: Live screening and the playfulness of the interactive space can be effective strategies for attracting the attention of passers-by and turning them into active participants. While urban screen interfaces increase participation by encouraging group interaction, privately oriented tangible user interfaces give people a longer time to reflect upon their answers.

Relevance:

10.00%

Publisher:

Abstract:

Over the past few decades, frog species have been experiencing a dramatic decline around the world. The reasons for this decline include habitat loss, invasive species, and climate change. To better understand the status of frog species, classifying frogs has become increasingly important. In this study, acoustic features are investigated for multi-level classification of Australian frogs at the family, genus and species levels, covering three families, eleven genera and eighty-five species collected in Queensland, Australia. For each frog species, six instances are selected, from which ten acoustic features are calculated. The multicollinearity among the ten features is then examined in order to select non-correlated features for subsequent analysis. A decision tree (DT) classifier is used to determine, visually and explicitly, which acoustic features are relatively important for classifying family, which for genus, and which for species. Finally, a weighted support vector machines (SVM) classifier is used for the multi-level classification with the three most important acoustic features at each level. Our experimental results indicate that using different acoustic feature sets can successfully classify frogs at different levels, with average classification accuracies of up to 85.6%, 86.1% and 56.2% for family, genus and species respectively.
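The multi-level scheme can be illustrated with a toy two-level version: classify family first with one feature subset, then species within the predicted family with another. A nearest-centroid rule stands in for the paper's weighted SVM, and all features and labels below are synthetic, not the Queensland dataset.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two families, two species each; features 0-1 separate families,
# features 2-3 separate species within a family (invented structure).
def make(family, species, n=15):
    f = rng.normal([3 * family, 3 * family], 0.5, (n, 2))
    s = rng.normal([3 * species, 3 * species], 0.5, (n, 2))
    return np.hstack([f, s])

X = np.vstack([make(f, s) for f in (0, 1) for s in (0, 1)])
fam = np.repeat([0, 0, 1, 1], 15)
spe = np.repeat([0, 1, 0, 1], 15)

def centroid_classify(Xtr, ytr, x):
    """Assign x to the class with the nearest mean vector."""
    cents = {c: Xtr[ytr == c].mean(0) for c in np.unique(ytr)}
    return min(cents, key=lambda c: np.linalg.norm(x - cents[c]))

correct = 0
for k, x in enumerate(X):
    f_hat = centroid_classify(X[:, :2], fam, x[:2])              # level 1: family
    mask = fam == f_hat
    s_hat = centroid_classify(X[mask][:, 2:], spe[mask], x[2:])  # level 2: species
    correct += (f_hat == fam[k]) and (s_hat == spe[k])
print(f"two-level accuracy: {correct / len(X):.2f}")
```

Restricting the second-level classifier to samples of the predicted family mirrors the paper's family → genus → species cascade.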

Relevance:

10.00%

Publisher:

Abstract:

Recently, efficient scheduling algorithms based on Lagrangian relaxation have been proposed for scheduling parallel machine systems and job shops. In this article, we develop real-world extensions to these scheduling methods. In the first part of the paper, we consider the problem of scheduling single-operation jobs on parallel identical machines and extend the methodology to handle multiple classes of jobs, taking into account setup times and setup costs. The proposed methodology uses Lagrangian relaxation and simulated annealing in a hybrid framework. In the second part of the paper, we consider a Lagrangian relaxation based method for scheduling job shops and extend it to obtain a scheduling methodology for a real-world flexible manufacturing system with centralized material handling.
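A hedged sketch of the simulated-annealing half of such a hybrid: assign jobs of several classes to identical parallel machines, paying a setup time whenever a machine switches class, and anneal on random single-job moves. The job data, setup time, and cooling schedule are all invented for illustration, and the Lagrangian-relaxation component is omitted.

```python
import math
import random

random.seed(0)
jobs = [(random.choice("ABC"), random.randint(2, 9)) for _ in range(20)]
MACHINES, SETUP = 3, 4

def makespan(assign):
    """Max machine load; a setup is paid whenever a machine switches job class."""
    loads, last = [0] * MACHINES, [None] * MACHINES
    for (cls, dur), m in zip(jobs, assign):
        loads[m] += dur + (SETUP if last[m] not in (None, cls) else 0)
        last[m] = cls
    return max(loads)

cur = [random.randrange(MACHINES) for _ in jobs]
best_cost, T = makespan(cur), 10.0
for _ in range(5000):
    cand = cur[:]
    cand[random.randrange(len(jobs))] = random.randrange(MACHINES)  # move one job
    delta = makespan(cand) - makespan(cur)
    if delta <= 0 or random.random() < math.exp(-delta / max(T, 1e-9)):
        cur = cand
        best_cost = min(best_cost, makespan(cur))
    T *= 0.999   # geometric cooling
print("best makespan found:", best_cost)
```

Grouping same-class jobs on a machine is rewarded automatically, since the objective charges a setup only on class changes.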

Relevance:

10.00%

Publisher:

Abstract:

Proteins are polymerized by cyclic machines called ribosomes, which use their messenger RNA (mRNA) track also as the corresponding template; the process is called translation. We explore, in depth and detail, the stochastic nature of translation. We compute various distributions associated with the translation process; one of them, namely the dwell time distribution, has been measured in recent single-ribosome experiments. The form of the distribution that fits best with our simulation data is consistent with that extracted from the experimental data. For our computations, we use a model that captures both the mechanochemistry of each individual ribosome and their steric interactions. We also demonstrate the effects of the sequence inhomogeneities of real genes on the fluctuations and noise in translation. Finally, inspired by recent advances in the experimental techniques for manipulating single ribosomes, we make theoretical predictions on the force-velocity relation for individual ribosomes. In principle, all our predictions can be tested by carrying out in vitro experiments.
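The dwell-time idea can be made concrete with the simplest possible case: if each elongation cycle consisted of two sequential exponential steps with rates k1 and k2, the dwell time would be hypoexponential with mean 1/k1 + 1/k2. The rates below are illustrative only, and this two-step caricature ignores the branched mechanochemistry and steric interactions the paper actually models.

```python
import random
import statistics

random.seed(42)
k1, k2 = 5.0, 2.0   # s^-1, hypothetical rates of two sequential steps

def dwell():
    """One dwell time: sum of two independent exponential waiting times."""
    return random.expovariate(k1) + random.expovariate(k2)

samples = [dwell() for _ in range(20000)]
mean = statistics.fmean(samples)
print(f"simulated mean dwell {mean:.3f} s vs theory {1 / k1 + 1 / k2:.3f} s")
```

Because the sum of unequal-rate exponentials is not itself exponential, even this toy model produces the peaked (non-monotonic) dwell-time histograms seen in single-ribosome experiments.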

Relevance:

10.00%

Publisher:

Abstract:

Among the multitude of test specimen geometries used for dynamic fracture toughness evaluation, the most widely used specimen is the Charpy specimen, due to its simple geometry and the availability of testing machines. The standard Charpy specimen dimensions may not always give plane strain conditions and hence it may be necessary to conduct tests using specimens of different thicknesses to establish the plane strain K_Ia. An axisymmetric specimen, on the other hand, would always give full constraint for a nominal specimen thickness, irrespective of the test material. In the notched disk specimen proposed by Bernard et al. [1] for static and dynamic initiation toughness measurement, although plane-strain conditions are obtained, the crack propagates at an angle to the direction of applied load. This makes interpretation of the test results difficult, as it requires radial slices to be cut from the fractured specimen to ascertain the angle of crack growth, and a finite element model for that particular crack orientation.
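The thickness issue raised above is usually checked against the standard plane-strain size requirement B >= 2.5 (K / sigma_y)^2 (the ASTM E399 form). The material numbers below are illustrative, not taken from the paper.

```python
def min_thickness_mm(K_MPa_sqrt_m, sigma_y_MPa):
    """Minimum specimen thickness (mm) for a valid plane-strain toughness test,
    using the ASTM E399 size criterion B >= 2.5 * (K / sigma_y)^2."""
    return 2.5 * (K_MPa_sqrt_m / sigma_y_MPa) ** 2 * 1000.0

# Hypothetical material: K = 50 MPa*sqrt(m), yield strength 700 MPa.
B = min_thickness_mm(50.0, 700.0)
print(f"minimum thickness: {B:.1f} mm")
```

For these illustrative values the requirement comes to about 12.8 mm, more than the 10 mm of a standard Charpy bar, which is exactly why tests at several thicknesses can be needed.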

Relevance:

10.00%

Publisher:

Abstract:

High end network security applications demand high speed operation and large rule set support. Packet classification is the core functionality that demands high throughput in such applications. This paper proposes a packet classification architecture to meet such high throughput; we have implemented a Firewall with this architecture in reconfigurable hardware. We propose an extension to the Distributed Crossproducting of Field Labels (DCFL) technique to achieve a scalable and high performance architecture. The implemented Firewall takes advantage of the inherent structure and redundancy of the rule set by using our DCFL Extended (DCFLE) algorithm. The use of the DCFLE algorithm results in both speed and area improvement when implemented in hardware. Although we restrict ourselves to standard 5-tuple matching, the architecture supports additional fields. High throughput classification invariably uses Ternary Content Addressable Memory (TCAM) for prefix matching, though TCAM fares poorly in terms of area and power efficiency. Use of TCAM for port range matching is expensive, as the range-to-prefix conversion results in a large number of prefixes, leading to storage inefficiency. Extended TCAM (ETCAM) is fast and the most storage efficient solution for range matching. We present for the first time a reconfigurable hardware implementation of ETCAM. We have implemented our Firewall as an embedded system on a Virtex-II Pro FPGA based platform, running Linux with the packet classification in hardware. The Firewall was tested in real time with a 1 Gbps Ethernet link and 128 sample rules. The packet classification hardware uses a quarter of the logic resources and slightly over one third of the memory resources of the XC2VP30 FPGA. It achieves a maximum classification throughput of 50 million packets/s, corresponding to a 16 Gbps link rate for the worst case packet size. A Firewall rule update involves only memory re-initialization in software, without any hardware change.
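A stripped-down, software-only illustration of the crossproducting idea behind DCFL: match each header field against its own small table to obtain a label, then look up the tuple of labels to find the matching rule. The two-field rule set below is invented, and real DCFL performs the per-field lookups in parallel hardware over all five tuple fields.

```python
rules = {  # (source-prefix label, destination-port label) -> action (invented)
    ("10.0/16", "http"): "allow",
    ("10.0/16", "any"): "deny",
    ("any", "http"): "allow",
}

def field_labels(src_ip, dport):
    """Independent per-field lookups producing one label per field."""
    src = "10.0/16" if src_ip.startswith("10.0.") else "any"
    port = "http" if dport == 80 else "any"
    return src, port

def classify(src_ip, dport):
    """Combine the field labels, trying the most specific combination first."""
    src, port = field_labels(src_ip, dport)
    for key in ((src, port), (src, "any"), ("any", port)):
        if key in rules:
            return rules[key]
    return "default-deny"

print(classify("10.0.3.7", 80), classify("10.0.9.9", 22),
      classify("192.168.1.1", 22))
```

The payoff of the decomposition is that each per-field table stays small even when the full crossproduct of rules would be huge.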

Relevance:

10.00%

Publisher:

Abstract:

This paper presents three methodologies for determining optimum locations and magnitudes of reactive power compensation in power distribution systems. Method I and Method II are suitable for complex distribution systems with a combination of radial and ring-main feeders and different voltage levels; Method III is suitable for low-tension, single voltage level radial feeders. Method I is based on an iterative scheme with successive power-flow analyses, with formulation and solution of the optimization problem using linear programming. Method II and Method III are essentially based on the steady state performance of distribution systems; these methods are simple to implement and yield satisfactory results comparable with those of Method I. The proposed methods have been applied to a few distribution systems, and results obtained for two typical systems are presented for illustration.
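A toy version of the placement problem helps fix the ideas: on a four-bus radial feeder, choose a capacitor size at each bus from a discrete set so as to minimize the I²R losses caused by reactive flow. The loads, resistances, and loss model below are illustrative inventions; the paper's Method I solves a refined version of this with successive power-flow analyses and linear programming rather than the brute-force search used here.

```python
from itertools import product

R = [0.05, 0.04, 0.06, 0.03]   # ohm per feeder section (hypothetical)
Q_load = [20, 40, 80, 50]      # kvar load at each bus (hypothetical)
sizes = [0, 30, 60]            # candidate capacitor sizes per bus, kvar
V = 11.0                       # kV; a flat voltage profile is assumed

def reactive_losses(caps):
    """I^2*R losses due to reactive power flow only, section by section."""
    loss = 0.0
    for k in range(4):
        q = sum(Q_load[k:]) - sum(caps[k:])   # net kvar through section k
        loss += R[k] * (q / V) ** 2
    return loss

best = min(product(sizes, repeat=4), key=reactive_losses)
print("best placement (kvar per bus):", best)
```

Even this caricature reproduces the qualitative rule of thumb: compensation is pushed toward the buses whose downstream reactive demand it can cancel, not simply maximized everywhere.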

Relevance:

10.00%

Publisher:

Abstract:

The wedge shape is a fairly common cross-section found in many non-axisymmetric components used in machines, aircraft, ships and automobiles. If such a component is forged between two mutually inclined dies, the metal displaced by the dies flows into the converging as well as the diverging channels created by the inclined dies. The extent of each type of flow (convergent/divergent) depends on the die-material interface friction and the included die angle. Given the initial cross-section, the length as well as the exact geometry of the forged cross-section are therefore uniquely determined by these parameters. In this paper a simple stress analysis is used to predict changes in the geometry of a wedge undergoing compression between inclined platens; the flow in directions normal to the cross-section is assumed to be negligible. Experiments carried out using wedge-shaped lead billets show that, knowing the interface friction, and as long as the deformation is not too large, the dimensional changes in the wedge can be predicted with reasonable accuracy. The predicted flow behaviour of the metal for a wide range of die angles and interface frictions is presented: these characteristics can be used by the die designer to choose only the die lubricant if the die angle is specified, or to choose both of these parameters if there is no restriction on the exact die angle. The present work shows that the length of a wedge undergoing compression is highly sensitive to the die-material interface friction. Thus, in a situation where the top and bottom dies are inclined to each other, a wedge made of the material to be forged can be placed between the dies and compressed, whereupon the length of the compressed wedge, given the degree of compression, affords an estimate of the die-material interface friction.

Relevance:

10.00%

Publisher:

Abstract:

In my master's thesis I analyse Byzantine warfare in the late period of the empire, using the military operations between the Byzantines and the crusader Principality of Achaia (1259–83) as a case study. Byzantine strategy was based, in the "oriental manner", on ambushes, diplomacy, surprise attacks, deception and the like. Open field battles, risky in comparison with their benefits, were usually avoided, but the Byzantines were sometimes forced to seek an open encounter because of their limited ability to keep strong armies in the field for long periods of time. Foreign mercenaries had an important place in Byzantine armies, and they could simply change sides if their paymasters ran out of resources. The use of mercenaries on short contracts made the composition of an army flexible but also heterogeneous; as a result, Byzantine armies were sometimes ineffective and prone to confusion. In open field battles the Byzantines used a formation made up of several lines placed one after another, which was especially suitable for cavalry battles; they may also have used other kinds of formations. The Byzantines were not considered equal to the Latins in close combat, and West Europeans saw mainly the horse archers and the Latin mercenaries in Byzantine service as threats to themselves in battle. The legitimacy of the rulers around the Aegean Sea was weak, and in many cases political intrigues and personal relationships may have decided battles. Especially in sieges the loyalty of the population was decisive. In sieges the Byzantines used plenty of siege machines and archers, which made fast conquests possible but was expensive. The Byzantines protected their frontiers by building castles. Military operations against the Principality of Achaia were mostly small-scale raids after an intensive beginning, carried out largely by privateers and mountaineers.
This does not fit the traditional picture in which warfare belonged to the imperial professional army. It is unlikely that the military operations in the war against the Principality of Achaia caused a great demographic or economic catastrophe, and some regions in the war zone may even have flourished. On the other hand, people began to concentrate into villages, which, together with growing risks for trade, probably disturbed economic development; as a result, birth rates may have decreased. Both sides sought to exchange their prisoners of war, who were treated according to conventions accepted by both sides. It was possible to sell prisoners, especially women and children, into slavery, but the scale of this trade does not seem to have been great in the military operations treated in this thesis.

Relevância:

10.00% 10.00%

Publicador:

Resumo:

The analysis of transient electrical stresses in the insulation of high voltage rotating machines is rendered difficult by the existence of capacitive and inductive couplings between phases. Published theories ignore many of the couplings between phases in order to obtain a solution. A new procedure is proposed here to determine the transient voltage distribution on rotating machine windings. All the significant capacitive and inductive couplings between different sections within a phase, and between sections in different phases, have been considered in this analysis. The experimental results show good correlation with the computed values.
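A minimal lumped-parameter sketch of the underlying problem: model each winding section as a series R-L element with a shunt C to ground, apply a step at the line terminal, and integrate explicitly. All parameter values below are hypothetical, and the model deliberately keeps only the simplest couplings; the paper's point is precisely that the inter-section and inter-phase capacitive and inductive couplings must also be included.

```python
N = 8                            # winding sections
L, C, Rs = 1e-3, 1e-9, 50.0      # H, F, ohm per section (hypothetical values)
dt, steps = 5e-9, 160000         # explicit time step and simulation horizon
v = [0.0] * N                    # voltage at each section node
i = [0.0] * N                    # series current flowing into each node
V_in = 1.0                       # 1 p.u. step applied at the line terminal

for _ in range(steps):
    # Semi-implicit update: advance all currents first, then node voltages.
    for k in range(N):
        left = V_in if k == 0 else v[k - 1]
        i[k] += dt / L * (left - v[k] - Rs * i[k])
    for k in range(N):
        i_out = i[k + 1] if k + 1 < N else 0.0
        v[k] += dt / C * (i[k] - i_out)

print("final node voltages:", [round(x, 3) for x in v])
```

The damped ladder settles with every node at the applied 1 p.u.; the interesting physics, and the paper's subject, lies in the non-uniform voltage distribution during the first microseconds, which the neglected couplings strongly affect.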