983 results for Performance (engineering)
Abstract:
The author looks at trends in software and systems, and the current and likely implications of these trends on the discipline of performance engineering. In particular, he examines software complexity growth and its consequences for performance engineering for enhanced understanding, more efficient analysis and effective performance improvement. The pressures for adaptive and autonomous systems introduce further opportunities for performance innovation. The promise of aspect oriented software development technologies for assisting with some of these challenges is introduced.
Abstract:
Composite Applications on top of SAP's implementation of SOA (Enterprise SOA) enable the extension of already existing business logic. In this paper we show, based on a case study, how Model-Driven Engineering concepts are applied in the development of such Composite Applications. Our case study extends a back-end business process for the specific needs of a demo company selling wine. We use this to describe how the business-centric models specifying the modified business behaviour of our case study can be utilized for business performance analysis where most of the actions are performed by humans. In particular, we apply a refined version of Model-Driven Performance Engineering that we proposed in our previous work and motivate which business domain specifics have to be taken into account for business performance analysis. We additionally motivate the need for performance-related decision support for domain experts, who generally lack performance-related skills. Such support should offer visual guidance about what should be changed in the design and resource mapping to obtain improved results with respect to modification constraints and performance objectives, such as timing objectives.
Abstract:
We propose a trace-driven approach to predict the performance degradation of disk request response times due to storage device contention in consolidated virtualized environments. Our performance model evaluates a queueing network with fair share scheduling using trace-driven simulation. The model parameters can be deduced from measurements obtained inside Virtual Machines (VMs) from a system where a single VM accesses a remote storage server. The parameterized model can then be used to predict the effect of storage contention when multiple VMs are consolidated on the same virtualized server. The model parameter estimation relies on a search technique that tries to estimate the splitting and merging of blocks at the Virtual Machine Monitor (VMM) level in the case of multiple competing VMs. Simulation experiments based on traces of the Postmark and FFSB disk benchmarks show that our model is able to accurately predict the impact of workload consolidation on VM disk I/O response times.
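The fair-share, trace-driven idea in this abstract can be illustrated with a minimal sketch. This is a deliberate simplification, not the paper's model: a single device queue served round-robin in fixed quanta, with no block splitting or merging at the VMM level.

```python
from collections import deque

def simulate_fair_share(requests, quantum=0.1):
    """Trace-driven sketch of fair-share disk scheduling.

    requests: list of (arrival_time, service_demand), sorted by arrival.
    Pending requests are served round-robin in `quantum`-sized slices;
    returns each request's response time (completion - arrival).
    """
    pending = deque()                    # indices of requests in service
    remaining = [demand for _, demand in requests]
    finish = [0.0] * len(requests)
    t, nxt = 0.0, 0                      # simulated clock, next trace entry
    while nxt < len(requests) or pending:
        # admit all arrivals that have occurred by time t
        while nxt < len(requests) and requests[nxt][0] <= t:
            pending.append(nxt)
            nxt += 1
        if not pending:                  # device idle: jump to next arrival
            t = requests[nxt][0]
            continue
        i = pending.popleft()
        slice_ = min(quantum, remaining[i])
        t += slice_
        remaining[i] -= slice_
        if remaining[i] > 1e-12:         # not done: rejoin the round-robin
            pending.append(i)
        else:
            finish[i] = t
    return [finish[i] - requests[i][0] for i in range(len(requests))]
```

Feeding the simulator one VM's trace and then several interleaved traces shows how contention stretches response times, which is the effect the paper's model predicts.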
Abstract:
The role of technology management in achieving improved manufacturing performance has been receiving increased attention as enterprises are becoming more exposed to competition from around the world. In the modern market for manufactured goods the demand is now for more product variety, better quality, shorter delivery and greater flexibility, while the financial and environmental cost of resources has become an urgent concern to manufacturing managers. This issue of the International Journal of Technology Management addresses the question of how the diffusion, implementation and management of technology can improve the performance of manufacturing industries. The authors come from a large number of different countries and their contributions cover a wide range of topics within this general theme. Some papers are conceptual, others report on research carried out in a range of different industries including steel production, iron founding, electronics, robotics, machinery, precision engineering, metal working and motor manufacture. In some cases they describe situations in specific countries. Several are based on presentations made at the UK Operations Management Association's Sixth International Conference held at Aston University at which the conference theme was 'Achieving Competitive Edge: Getting Ahead Through Technology and People'. The first two papers deal with questions of advanced manufacturing technology implementation and management. Firstly, Beatty describes a three-year longitudinal field study carried out in ten Canadian manufacturing companies using CAD/CAM and CIM systems. Her findings relate to speed of implementation, choice of system type, the role of individuals in implementation, organization and job design. This is followed by a paper by Bessant in which he argues that a more strategic approach should be taken towards the management of technology in the 1990s and beyond.
Also considered in this paper are the capabilities necessary in order to deploy advanced manufacturing technology as a strategic resource and the way such capabilities might be developed within the firm. These two papers, which deal largely with the implementation of hardware, are supplemented by Samson and Sohal's contribution in which they argue that a much wider perspective should be adopted based on a new approach to manufacturing strategy formulation. Technology transfer is the topic of the following two papers. Pohlen again takes the case of advanced manufacturing technology and reports on his research which considers the factors contributing to successful realisation of AMT transfer. The paper by Lee then provides a more detailed account of technology transfer in the foundry industry. Using a case study based on a firm which has implemented a number of transferred innovations, a model is illustrated in which the 'performance gap' can be identified and closed. The diffusion of technology is addressed in the next two papers. In the first of these, by Lowe and Sim, the managerial technologies of 'Just in Time' and 'Manufacturing Resource Planning' (or MRP II) are examined. A study is described from which a number of factors are found to influence the adoption process, including rate of diffusion and size. Dahlin then considers the case of a specific item of hardware technology, the industrial robot. Her paper reviews the history of robot diffusion since the early 1960s and then tries to predict how the industry will develop in the future. The following two papers deal with the future of manufacturing in a more general sense. The future implementation of advanced manufacturing technology is the subject explored by de Haan and Peters who describe the results of their Dutch Delphi forecasting study conducted among a panel of experts including scientists, consultants, users and suppliers of AMT.
Busby and Fan then consider a type of organisational model, 'the extended manufacturing enterprise', which would represent a distinct alternative to pure market-led and command structures by exploiting the shared knowledge of suppliers and customers. The three country-based papers consider some strategic issues relating to manufacturing technology. In a paper based on investigations conducted in China, He, Liff and Steward report their findings from strategy analyses carried out in the steel and watch industries with a view to assessing technology needs and organizational change requirements. This is followed by Tang and Nam's paper which examines the case of the machinery industry in Korea and its emerging importance as a key sector in the Korean economy. In his paper which focuses on Venezuela, Ernst then considers the particular problem of how this country can address the problem of falling oil revenues. He sees manufacturing as being an important contributor to Venezuela's future economy and proposes a means whereby government and private enterprise can co-operate in the development of the manufacturing sector. The last six papers all deal with specific topics relating to the management of manufacturing. Firstly, Youssef looks at the question of manufacturing flexibility, introducing and testing a conceptual model that relates computer-based technologies to flexibility. Dangerfield's paper which follows is based on research conducted in the steel industry. He considers the question of scale and proposes a modelling approach for determining the plant configuration necessary to meet market demand. Engstrom presents the results of a detailed investigation into the need for reorganising material flow where group assembly of products has been adopted. Sherwood, Guerrier and Dale then report the findings of a study into the effectiveness of Quality Circle implementation.
Stillwagon and Burns consider how manufacturing competitiveness can be improved in individual firms by describing how the application of 'human performance engineering' can be used to motivate individual performance as well as to integrate organizational goals. Finally Sohal, Lewis and Samson describe, using a case study example, how just-in-time control can be applied within the context of computer numerically controlled flexible machining lines. The papers in this issue of the International Journal of Technology Management cover a wide range of topics relating to the general question of improving manufacturing performance through the dissemination, implementation and management of technology. Although they differ markedly in content and approach, they have the collective aim of addressing the concepts, principles and practices which provide a better understanding of the technology of manufacturing and assist in achieving and maintaining a competitive edge.
Abstract:
Markovian models are widely used to analyse quality-of-service properties of both system designs and deployed systems. Thanks to the emergence of probabilistic model checkers, this analysis can be performed with high accuracy. However, its usefulness is heavily dependent on how well the model captures the actual behaviour of the analysed system. Our work addresses this problem for a class of Markovian models termed discrete-time Markov chains (DTMCs). We propose a new Bayesian technique for learning the state transition probabilities of DTMCs based on observations of the modelled system. Unlike existing approaches, our technique weighs observations based on their age, to account for the fact that older observations are less relevant than more recent ones. A case study from the area of bioinformatics workflows demonstrates the effectiveness of the technique in scenarios where the model parameters change over time.
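The age-weighting idea described in this abstract can be sketched as follows. This is a hypothetical illustration using exponentially decayed transition counts inside a Dirichlet-smoothed estimate; the paper's actual Bayesian technique and its weighting scheme may differ.

```python
from collections import defaultdict

def estimate_transitions(observations, decay=0.95, prior=1.0):
    """Estimate DTMC transition probabilities from observed transitions,
    weighting each observation by its age: the i-th most recent
    observation contributes decay**i to its count, so older evidence
    fades. `prior` is a Dirichlet pseudo-count applied per observed
    successor (smoothing over observed successors only, for brevity).

    observations: list of (state, next_state) pairs, oldest first.
    Returns {state: {next_state: probability}}.
    """
    counts = defaultdict(lambda: defaultdict(float))
    n = len(observations)
    for i, (s, t) in enumerate(observations):
        age = n - 1 - i                  # most recent observation: age 0
        counts[s][t] += decay ** age
    probs = {}
    for s, row in counts.items():
        total = sum(row.values()) + prior * len(row)
        probs[s] = {t: (c + prior) / total for t, c in row.items()}
    return probs
```

With a small `decay`, a state whose successor distribution drifts over time (as in the bioinformatics-workflow scenario) yields estimates dominated by the recent observations, which is the behaviour the technique targets.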
Abstract:
Most of the existing WCET estimation methods directly estimate execution time, ET, in cycles. We propose to study ET as a product of two factors, ET = IC * CPI, where IC is instruction count and CPI is cycles per instruction. Considering the estimation of ET directly may lead to a highly pessimistic estimate, since implicitly these methods may be using worst case IC and worst case CPI. We hypothesize that there exists a functional relationship between CPI and IC such that CPI = f(IC). This is ascertained by computing the covariance matrix and studying the scatter plots of CPI versus IC. IC and CPI values are obtained by running benchmarks with a large number of inputs using the cycle-accurate architectural simulator SimpleScalar on two different architectures. It is shown that the benchmarks can be grouped into different classes based on the CPI versus IC relationship. For some benchmarks, such as FFT and FIR, both IC and CPI are almost constant irrespective of the input. Other benchmarks exhibit a direct or an inverse relationship between CPI and IC; in such a case, one can predict CPI for a given IC as CPI = f(IC). We derive the theoretical worst case IC for a program, denoted as SWIC, using integer linear programming (ILP) and estimate WCET as SWIC * f(SWIC). However, if CPI decreases sharply with IC, then the measured maximum cycle count is observed to be a better estimate. For certain other benchmarks, it is observed that the CPI versus IC relationship is either random or CPI remains constant with varying IC. In such cases, WCET is estimated as the product of SWIC and the measured maximum CPI. It is observed that use of the proposed method results in tighter WCET estimates than Chronos, a static WCET analyzer, for most benchmarks on the two architectures considered in this paper.
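The WCET = SWIC * f(SWIC) computation can be illustrated with a linear choice of f fitted to measured (IC, CPI) pairs. This is a sketch only: the paper derives SWIC via ILP and classifies benchmarks by their CPI-versus-IC scatter before choosing an estimator, neither of which is reproduced here.

```python
def fit_cpi_model(ics, cpis):
    """Least-squares linear fit CPI = a + b*IC from measured runs --
    the simplest instance of the hypothesised relationship f(IC)."""
    n = len(ics)
    mean_ic = sum(ics) / n
    mean_cpi = sum(cpis) / n
    cov = sum((x - mean_ic) * (y - mean_cpi) for x, y in zip(ics, cpis))
    var = sum((x - mean_ic) ** 2 for x in ics)
    b = cov / var if var else 0.0
    a = mean_cpi - b * mean_ic
    return a, b

def estimate_wcet(swic, a, b):
    """WCET estimate = SWIC * f(SWIC), with f the fitted CPI model.
    `swic` is assumed to come from an external ILP-based bound."""
    return swic * (a + b * swic)
```

For a benchmark with an inverse CPI-versus-IC trend, the fitted f gives a smaller CPI at the worst-case instruction count than the measured maximum CPI would, which is where the tighter estimate comes from.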
Abstract:
Poly(aryl ether ketone ketone) (PEKK) is a high-performance engineering plastic. By means of Wide-Angle X-ray Diffraction (WAXD) and Differential Scanning Calorimetry (DSC), PEKK samples crystallized by solvent induction, from the glass state and from the melt were studied. Crystal forms I and II of PEKK were found. The formation of crystal form II depended on thermal history and solvent induction, and form II had a melting point roughly 10 degrees C lower than that of form I crystallized from the glass state. All PEKK samples showed low melting peaks, which were related to the polarization of the PEKK molecular chain and independent of thermal history. The heat of fusion of these low melting peaks accounted for roughly 2 to 10 percent of the total heat of fusion. PEKK has an equilibrium melting point of 409 degrees C.
Abstract:
In this paper, we consider ATM networks in which the virtual path concept is implemented. The question of how to multiplex two or more diverse traffic classes while providing different quality-of-service (QOS) requirements is a complicated open problem. Two distinct options are available: integration and segregation. In an integration approach, all the traffic from different connections is multiplexed onto one VP. This implies that the most restrictive QOS requirements must be applied to all services. Therefore, link utilization will be decreased because unnecessarily stringent QOS is provided to all connections. With the segregation approach the problem can be much simplified if different types of traffic are separated by assigning each a VP with dedicated resources (buffers and links). Therefore, resources may not be efficiently utilized because no sharing of bandwidth can take place across the VPs. The probability that the bandwidth required by the accepted connections exceeds the capacity of the link is evaluated as the probability of congestion (PC). Since the PC can be expressed as the cell loss probability (CLP), we simply carry out bandwidth allocation using the PC. We first focus on the influence of some parameters (CLP, bit rate and burstiness) on the capacity required by a VP supporting a single traffic class using the new convolution approach. Numerical results are presented both to compare the required capacity and to observe under which conditions each approach is preferred.
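The convolution approach mentioned in this abstract can be illustrated for the simplest case of on/off sources. This is a hypothetical sketch under a two-state traffic model (each connection either transmits at its peak rate or is silent); the paper's traffic model and convolution algorithm may differ.

```python
from collections import defaultdict

def congestion_probability(connections, capacity):
    """Probability of congestion (PC) for a VP, obtained by convolving
    the instantaneous-bandwidth distributions of on/off sources.

    connections: list of (peak_rate, activity) pairs -- each source
    needs peak_rate with probability `activity`, and 0 otherwise.
    Returns P(aggregate bandwidth demand > capacity).
    """
    dist = {0: 1.0}                       # aggregate bandwidth pmf
    for rate, act in connections:
        nxt = defaultdict(float)
        for bw, p in dist.items():
            nxt[bw] += p * (1 - act)      # this source is silent
            nxt[bw + rate] += p * act     # this source is at peak rate
        dist = nxt
    return sum(p for bw, p in dist.items() if bw > capacity)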
Abstract:
This paper proposes a multicast implementation based on adaptive routing with anticipated calculation. Three different cost measures for a point-to-multipoint connection can be considered: bandwidth cost, connection establishment cost and switching cost. The application of the method, based on pre-evaluated routing tables, makes it possible to reduce bandwidth cost and connection establishment cost individually.
Abstract:
The authors focus on one of the methods for connection acceptance control (CAC) in an ATM network: the convolution approach. With the aim of reducing the cost in terms of calculation and storage requirements, they propose the use of the multinomial distribution function. This permits direct computation of the associated probabilities of the instantaneous bandwidth requirements, which in turn makes possible a simple deconvolution process. Moreover, under certain conditions, additional improvements may be achieved.
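For homogeneous on/off sources, the direct computation alluded to here reduces to a binomial formula, the one-class special case of a multinomial approach. A sketch under that assumption (not the paper's exact formulation): the bandwidth pmf follows from counting active sources, and removing a departing connection ("deconvolution") amounts to decrementing n rather than re-convolving.

```python
from math import comb

def bandwidth_pmf(n_sources, peak_rate, activity):
    """Instantaneous-bandwidth pmf for n homogeneous on/off sources,
    computed directly from the binomial distribution of the number of
    active sources, instead of an n-step convolution."""
    return {k * peak_rate: comb(n_sources, k)
            * activity ** k * (1 - activity) ** (n_sources - k)
            for k in range(n_sources + 1)}
```

With several traffic classes the same idea generalises: the joint count of active sources per class is multinomial-like (a product of per-class binomials for independent classes), so the per-class counts can be adjusted independently as connections arrive and depart.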
Abstract:
We take a broad view that ultimately Grid- or Web-services must be located via personalised, semantic-rich discovery processes. We argue that such processes must rely on the storage of arbitrary metadata about services that originates from both service providers and service users. Examples of such metadata are reliability metrics, quality of service data, or semantic service description markup. This paper presents UDDI-MT, an extension to the standard UDDI service directory approach that supports the storage of such metadata via a tunnelling technique that ties the metadata store to the original UDDI directory. We also discuss the use of a rich, graph-based RDF query language for syntactic queries on this data. Finally, we analyse the performance of each of these contributions in our implementation.