880 results for analysis of performance
Abstract:
The central argument of this thesis is that the nature and purpose of corporate reporting has changed over time to become a more outward-looking and forward-looking document designed to promote the company and its performance to a wide range of shareholders, rather than merely to report to its owners upon past performance. It is argued that the discourse of environmental accounting and reporting is one driver for this change, but that this discourse has been set up as conflicting with the discourse of traditional accounting and performance measurement. The effect of this opposition between the discourses is that the two have been interpreted as different and incompatible dimensions of performance, with good performance along one dimension only being achievable through a sacrifice of performance along the other. Thus a perceived dialectic in performance is believed to exist. One of the principal purposes of this thesis is to explore this perceived dialectic and, through analysis, to show that it does not exist and that there is no such incompatibility. This exploration and analysis is based upon an investigation of the inherent inconsistencies in such corporate reports, and makes use of both a statistical analysis and a semiotic analysis of corporate reports and the reported performance of companies along these dimensions. Thus the development of a semiology of corporate reporting is one of the significant outcomes of this thesis. A further outcome is a consideration of the implications of the analysis for corporate performance and its measurement. The thesis concludes with a consideration of the way in which the advent of electronic reporting may affect the ability of organisations to maintain the dialectic, and the implications for corporate reporting.
Abstract:
This thesis looks at two issues. Firstly, statistical work was undertaken examining profit margins, labour productivity and total factor productivity in telecommunications in ten member states of the EU over a 21-year period (not all member states of the EU could be included due to data inadequacy). Three non-members, namely Switzerland, Japan and the US, were also included for comparison. This research was intended to provide an understanding of how telecoms in the European Union (EU) have developed. There are two propositions in this part of the thesis: (i) privatisation and market liberalisation improve performance; (ii) countries that liberalised their telecoms sectors first show better productivity growth than countries that liberalised later. In sum, a mixed picture is revealed. Some countries performed better than others over time, but there is no apparent relationship between productivity performance and the two propositions. Some of the results from this part of the thesis were published in Dabler et al. (2002). Secondly, the remainder of the thesis tests the proposition that the telecoms directives of the European Commission created harmonised regulatory systems in the member states of the EU. By undertaking explanatory research, this thesis not only seeks to establish whether harmonisation has been achieved, but also tries to find an explanation as to why this is so. To accomplish this, as a first stage a questionnaire survey was administered to the fifteen telecoms regulators in the EU. The purpose of the survey was to provide knowledge of the methods, rationales and approaches adopted by the regulatory offices across the EU. This allowed a decision to be made as to whether harmonisation in telecoms regulation has been achieved. Stemming from the results of the questionnaire analysis, follow-up case studies with four telecoms regulators were undertaken in a second stage of this research. The objective of these case studies was to take into account the country-specific circumstances of telecoms regulation in the EU. To undertake the case studies, several sources of evidence were combined. More specifically, the annual Implementation Reports of the European Commission were reviewed alongside the findings from the questionnaire. Then, interviews with senior members of staff in the four regulatory authorities were conducted. Finally, the evidence from the questionnaire survey and from the case studies was corroborated to provide an explanation as to why telecoms regulation in the EU has or has not reached a state of harmonisation. In addition to testing whether harmonisation has been achieved and why, this research has found evidence of different approaches to control over telecoms regulators and to market intervention administered by telecoms regulators within the EU. Regarding regulatory control, it was found that some member states have adopted mainly a proceduralist model, some have implemented more of a substantive model, and others have adopted a mix of both. Some findings from the second stage of the research were published in Dabler and Parker (2004). Similarly, regarding market intervention by regulatory authorities, different member states treat market intervention differently, namely according to market-driven or non-market-driven models, or a mix of both approaches.
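The abstract does not specify how the productivity measures were constructed; as a rough, hypothetical illustration of the standard approach (labour productivity as output per worker, TFP growth as a Solow residual with an assumed labour share), a minimal Python sketch follows. All figures and the factor share are invented, not taken from the thesis.

```python
# Hypothetical illustration of labour productivity and Solow-residual TFP growth
# for a telecoms operator; all numbers are invented.
import math

def labour_productivity(output, labour):
    """Output per unit of labour input (e.g. revenue or access lines per employee)."""
    return output / labour

def tfp_growth(dy, dl, dk, labour_share=0.6):
    """Solow residual: output growth minus share-weighted input growth.

    dy, dl, dk are log growth rates of output, labour and capital;
    labour_share is an assumed factor share under constant returns to scale.
    """
    return dy - labour_share * dl - (1.0 - labour_share) * dk

# Example: 5% output growth, -1% labour growth, 3% capital growth.
print(labour_productivity(1_000_000, 2_500))   # output per employee
print(tfp_growth(math.log(1.05), math.log(0.99), math.log(1.03)))
```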
Abstract:
The thesis is concerned with the electron optical properties of single-polepiece magnetic electron lenses, especially under conditions of extreme polepiece saturation. The electron optical properties are first analysed under conditions of high polepiece permeability. From this analysis, a general idea can be obtained of the important parameters that affect ultimate lens performance. In addition, useful information is obtained concerning the design of improved lenses operating under conditions of extreme polepiece saturation, for example at flux densities of the order of 10 Tesla. It is shown that in a single-polepiece lens, the position and shape of the lens exciting coil play an important role. In particular, the maximum permissible current density in the windings, rather than the properties of the iron, can set a limit to lens performance. This factor was therefore investigated in some detail. The axial field distribution of a single-polepiece lens, unlike that of a conventional lens, is highly asymmetrical. There are therefore two possible physical arrangements of the lens with respect to the incoming electron beam. In general these two orientations will result in different aberration coefficients. This feature has also been investigated in some detail. Single-polepiece lenses are thus considerably more complicated electron-optically than conventional double-polepiece lenses. In particular, the absence of the usual second polepiece causes most of the axial magnetic flux density distribution to lie outside the body of the lens. This can have many advantages in electron microscopy but it creates problems in calculating the magnetic field distribution. In particular, presently available computer programs are liable to be considerably in error when applied to such structures. It was therefore necessary to find independent ways of checking the field calculations. Furthermore, if the polepiece is allowed to saturate, much more calculation is involved since the field distribution becomes a non-linear function of the lens excitation. In searching for optimum lens designs, care was therefore taken to ensure that the coil was placed in the optimum position. If this condition is satisfied there seems to be no theoretical limit to the maximum flux density that can be attained at the polepiece tip. However, under iron saturation conditions, some broadening of the axial field distribution will take place, thereby changing the lens aberrations. Extensive calculations were therefore made to find the minimum spherical and chromatic aberration coefficients. The focal properties of such lens designs are presented and compared with the best conventional double-polepiece lenses presently available.
Abstract:
The finite element method is now well established among engineers as an extremely useful tool in the analysis of problems with complicated boundary conditions. One aim of this thesis has been to produce a set of computer algorithms capable of efficiently analysing complex three-dimensional structures. This set of algorithms has been designed to permit much versatility. Provisions such as the use of only those parts of the system which are relevant to a given analysis, and the facility to extend the system by the addition of new elements, are incorporated. Five element types have been programmed; these are prismatic members, rectangular plates, triangular plates and curved plates. The 'in and out of plane' stiffness matrices for a curved plate element are derived using the finite element technique. The performance of this type of element is compared with two other theoretical solutions as well as with a set of independent experimental observations. Additional experimental work was then carried out by the author to further evaluate the acceptability of this element. Finally, the analyses of two large civil engineering structures, the shell of an electrical precipitator and a concrete bridge, are presented to investigate the performance of the algorithms. Comparisons are made between the computer time, core store requirements and the accuracy of the analysis for the proposed system and those of another program.
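As a minimal sketch of the core operation such a finite element system performs (not the thesis's own code), the following assembles a global stiffness matrix from element stiffness matrices via a local-to-global degree-of-freedom map and solves a toy two-element bar problem. The element data are placeholders, not the plate and prismatic elements described above.

```python
# Minimal sketch of global stiffness assembly in a displacement-based FE code.
# Element stiffness matrices and DOF maps are placeholders for illustration only.
import numpy as np

def assemble_global_stiffness(n_dof, elements):
    """elements: list of (k_e, dof_map) pairs, where k_e is an (m x m) element
    stiffness matrix and dof_map gives the global DOF index of each local DOF."""
    K = np.zeros((n_dof, n_dof))
    for k_e, dof_map in elements:
        for i_local, i_global in enumerate(dof_map):
            for j_local, j_global in enumerate(dof_map):
                K[i_global, j_global] += k_e[i_local, j_local]
    return K

# Two 1-D bar elements in series (stiffness EA/L = 1.0) sharing DOF 1.
k_bar = np.array([[1.0, -1.0], [-1.0, 1.0]])
K = assemble_global_stiffness(3, [(k_bar, [0, 1]), (k_bar, [1, 2])])

# Fix DOF 0, apply a unit load at DOF 2 and solve K u = f on the free DOFs.
free = [1, 2]
f = np.array([0.0, 1.0])
u = np.linalg.solve(K[np.ix_(free, free)], f)
print(u)   # displacements at DOFs 1 and 2
```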
Abstract:
Progressive addition spectacle lenses (PALs) have now become the method of choice for many presbyopic individuals to alleviate the visual problems of middle age. Such lenses are difficult to assess and characterise because they lack discrete geographical locators for their key features. A review of the literature (mostly patents) describing the different designs of these lenses indicates the range of approaches to solving the visual problem of presbyopia. However, very little is published about the comparative optical performance of these lenses. A method based on interferometry is described here for the assessment of PALs, with a comparison of measurements made on an automatic focimeter. The relative merits of these techniques are discussed. Although the measurements are comparable, it is considered that the interferometry method is more readily automated and would ultimately be capable of producing a more rapid result.
Abstract:
The extent to which the surface parameters of Progressive Addition Lenses (PALs) affect successful patient tolerance was investigated. Several optico-physical evaluation techniques were employed, including a newly constructed surface reflection device which was shown to be of value for assessing semi-finished PAL blanks. Detailed physical analysis was undertaken using a computer-controlled focimeter and, from these data, iso-cylindrical and mean spherical plots were produced for each PAL studied. Base curve power was shown to have little impact upon the distribution of PAL astigmatism. A power increase in reading addition primarily caused a lengthening and narrowing of the lens progression channel. Empirical measurements also indicated a marginal steepening of the progression power gradient with an increase in reading addition power. A sample of the PAL-wearing population was studied using patient records and questionnaire analysis (90% of questionnaires were returned). This subjective analysis revealed the reading portion to be the most troublesome lens zone and showed that patients with high astigmatism (> 2.00D) adapt more readily to PALs than those with spherical or low cylindrical (< 2.00D) corrections. The psychophysical features of PALs were then investigated. Both grating visual acuity (VA) and contrast sensitivity (CS) were shown to be reduced with an increase in eccentricity from the central umbilical line. Two sample populations (N = 20) of successful and unsuccessful PAL wearers were assessed for differences in their visual performance and their adaptation to optically induced distortion. The possibility of dispensing errors being the cause of poor patient tolerance amongst the unsuccessful wearer group was investigated and discounted. The contrast sensitivity of the successful group was significantly greater than that of the unsuccessful group. No differences in adaptation to or detection of curvature distortion were evinced between these presbyopic groups.
Abstract:
Firstly, we numerically model a practical 20 Gb/s undersea configuration employing the Return-to-Zero Differential Phase Shift Keying data format. The modelling is carried out using the Split-Step Fourier Method to solve the Generalised Nonlinear Schrödinger Equation. We optimise the dispersion map and per-channel launch power of these channels and investigate how the choice of pre-/post-compensation can influence the performance. After obtaining these optimal configurations, we investigate the Bit Error Rate estimation of these systems and we see that estimation based on Gaussian statistics of the electrical current is appropriate for systems of this type, indicating quasi-linear behaviour. The introduction of narrower pulses due to the deployment of quasi-linear transmission decreases the tolerance to chromatic dispersion and intra-channel nonlinearity. We use tools from mathematical statistics to study the behaviour of these channels in order to develop new methods to estimate the Bit Error Rate. In the final section, we consider the estimation of the Eye Closure Penalty, a popular measure of signal distortion. Using a numerical example and assuming the symmetry of eye closure, we see that the Eye Closure Penalty can be estimated simply using Gaussian statistics. We also see that the statistics of the logical ones dominate the statistics of signal distortion in the case of Return-to-Zero On-Off Keying configurations.
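A minimal sketch of the symmetric split-step Fourier method for a simplified scalar Nonlinear Schrödinger Equation (dispersion applied in the frequency domain, Kerr nonlinearity in the time domain) is given below. The fibre parameters, step sizes and the omission of loss, amplification and WDM effects are assumptions for illustration; the full undersea model described above is considerably more detailed.

```python
# Minimal symmetric split-step Fourier solver for a simplified scalar NLSE
#   dA/dz = -i*(beta2/2)*d^2A/dt^2 + i*gamma*|A|^2*A
# Loss, amplification, higher-order dispersion and WDM effects are omitted.
import numpy as np

def ssfm(A0, dt, dz, n_steps, beta2=-21.7e-27, gamma=1.3e-3):
    """Propagate the field envelope A0 (sampled every dt seconds) over n_steps steps of dz metres."""
    n = A0.size
    omega = 2.0 * np.pi * np.fft.fftfreq(n, d=dt)
    half_disp = np.exp(1j * beta2 * omega**2 * dz / 4.0)    # half-step dispersion operator
    A = A0.astype(complex)
    for _ in range(n_steps):
        A = np.fft.ifft(half_disp * np.fft.fft(A))          # half dispersion step
        A = A * np.exp(1j * gamma * np.abs(A)**2 * dz)      # full nonlinear (Kerr) step
        A = np.fft.ifft(half_disp * np.fft.fft(A))          # half dispersion step
    return A

# Example: propagate a Gaussian pulse over 50 km in 100 steps of 500 m.
t = np.linspace(-200e-12, 200e-12, 2048)
A0 = np.sqrt(1e-3) * np.exp(-t**2 / (2 * (20e-12)**2))
A_out = ssfm(A0, dt=t[1] - t[0], dz=500.0, n_steps=100)
print(np.max(np.abs(A_out)**2))   # peak power after dispersive broadening
```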
Abstract:
In this paper we investigate the rate adaptation algorithm SampleRate, which spends a fixed amount of time on bit-rates other than the currently measured best bit-rate. A simple but effective analytic model is proposed to study the steady-state behavior of the algorithm. The impacts of link condition, channel congestion and multi-rate retry on the algorithm's performance are modeled. Simulations validate the model. It is also observed that there is still a large performance gap between SampleRate and the optimal scheme in the case of high frame collision probability.
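SampleRate's published design selects the bit-rate with the lowest average per-packet transmission time while periodically sampling other rates; the sketch below illustrates that selection rule only, with invented airtime and loss figures rather than the parameters of the analytic model proposed in this paper.

```python
# Simplified sketch of SampleRate's rate-selection rule: pick the bit-rate
# with the lowest expected per-packet transmission time, counting retries.
# Airtime and loss figures below are invented for illustration.

def expected_tx_time(airtime, loss_prob, max_retries=4):
    """Expected airtime spent per frame, counting failed attempts up to max_retries."""
    expected = 0.0
    p_not_yet = 1.0
    for _ in range(max_retries + 1):
        expected += p_not_yet * airtime   # this attempt's airtime is spent if reached
        p_not_yet *= loss_prob            # probability the frame is still undelivered
    return expected

# rate (Mb/s) -> (per-frame airtime in ms, measured frame loss probability)
stats = {54: (0.3, 0.60), 36: (0.45, 0.20), 11: (1.2, 0.02)}

best_rate = min(stats, key=lambda r: expected_tx_time(*stats[r]))
print(best_rate)   # the rate this rule would currently favour
```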
Abstract:
Data Envelopment Analysis (DEA) is recognized as a modern approach to the assessment of the performance of a set of homogeneous Decision Making Units (DMUs) that use similar inputs to produce similar outputs. While DEA is commonly used with precise data, several approaches have recently been introduced for evaluating DMUs with uncertain data. In the existing approaches much of the information on uncertainty is lost. For example, in the defuzzification approach the α-level and fuzzy ranking are not considered. In the tolerance approach the inequality or equality signs are fuzzified, but the fuzzy coefficients (inputs and outputs) are not treated directly. The purpose of this paper is to develop a new model to evaluate DMUs under uncertainty using Fuzzy DEA and to incorporate the α-level into the model under a fuzzy environment. An example is given to illustrate this method in detail.
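For reference, the crisp input-oriented CCR model that fuzzy DEA formulations extend can be written as a small linear programme per DMU; in an α-level approach, the crisp inputs and outputs below would be replaced by interval bounds at each α-cut. The data and the use of scipy here are illustrative assumptions, not the paper's model.

```python
# Crisp input-oriented CCR DEA (multiplier form) for one DMU via linear programming.
# In a fuzzy / alpha-level extension, X and Y become interval data at each alpha-cut.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0, 4.0],      # inputs: rows = inputs, columns = DMUs
              [5.0, 4.0, 6.0]])
Y = np.array([[10.0, 12.0, 11.0]])  # outputs: rows = outputs, columns = DMUs

def ccr_efficiency(j):
    m, n = X.shape
    s = Y.shape[0]
    # Decision variables: output weights u (length s), then input weights v (length m).
    c = np.concatenate([-Y[:, j], np.zeros(m)])                    # maximise u'y_j
    A_ub = np.hstack([Y.T, -X.T])                                  # u'y_k - v'x_k <= 0 for all k
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[:, j]]).reshape(1, -1)   # normalisation v'x_j = 1
    b_eq = np.array([1.0])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return -res.fun

for j in range(X.shape[1]):
    print(f"DMU {j}: efficiency = {ccr_efficiency(j):.3f}")
```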
Abstract:
Based on the assumption that a steady state exists in the full-memory multidestination automatic repeat request (ARQ) scheme, we propose a novel analytical method, called the steady-state function method (SSFM), to evaluate the performance of the scheme with any size of receiver buffer. For a wide range of system parameters, SSFM achieves higher accuracy in throughput estimation than conventional analytical methods.
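The SSFM itself is not reproduced here; as a generic illustration of the steady-state objects such an analysis works with, the sketch below computes the stationary distribution of a small, invented receiver-buffer Markov chain.

```python
# Generic steady-state computation for a finite-state buffer-occupancy model:
# solve pi = pi * P with sum(pi) = 1. The transition matrix is invented and is
# not the SSFM proposed in the paper.
import numpy as np

def steady_state(P):
    """Stationary distribution of a row-stochastic transition matrix P."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])   # (P^T - I) pi = 0 and sum(pi) = 1
    b = np.concatenate([np.zeros(n), [1.0]])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Toy 3-state receiver-buffer chain (0, 1 or 2 frames buffered).
P = np.array([[0.7, 0.3, 0.0],
              [0.2, 0.6, 0.2],
              [0.0, 0.4, 0.6]])
pi = steady_state(P)
print(pi, pi.sum())
```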
Abstract:
X-ray photoelectron spectroscopy (XPS) can play an important role in guiding the design of new materials, tailored to meet increasingly stringent constraints on device performance, by providing insight into their surface compositions and the fundamental interactions between the surfaces and the environment. This chapter outlines the principles and application of XPS as a versatile, chemically specific analytical tool for determining the electronic structures and (usually surface) compositions of constituent elements within diverse functional materials. Advances in detector electronics have opened the way for the development of photoelectron microscopes and instruments with XPS imaging capabilities. Advances in surface science instrumentation enabling time-resolved spectroscopic measurements offer exciting opportunities to quantitatively investigate the composition, structure and dynamics of working catalyst surfaces. Attempts to study the effects of material processing in realistic environments currently involve the use of high- or ambient-pressure XPS, in which samples can be exposed to reactive environments.
Abstract:
This article draws upon developments in UK research on political rhetoric and political performance in order to examine the incident in 2013 when French President François Hollande committed French forces to a US-led punitive strike against Syria, after the use of chemical weapons in a Damascus suburb on 21 August. The US-led retaliation did not take place. This article analyses Hollande's declaration on 27 July and his TV appearance on 15 September. His rhetoric and style are best understood as generic to the nature of the presidential office of the Fifth Republic. The article concludes by appraising how analysis of the French case contributes to the developing literature on rhetoric, celebrity and performance.
Abstract:
Location estimation is important for wireless sensor network (WSN) applications. In this paper we propose a Cramér-Rao Bound (CRB) based analytical approach for two centralized multi-hop localization algorithms, to gain insight into the error performance and its sensitivity to distance measurement error, anchor node density and placement. The location estimation performance is compared with that of four distributed multi-hop localization algorithms by simulation to evaluate the efficiency of the proposed analytical approach. The numerical results demonstrate the complex trade-off between centralized and distributed localization algorithms in terms of accuracy, complexity and communication overhead. Based on this analysis, an efficient and scalable performance evaluation tool can be designed for localization algorithms in large-scale WSNs, where simulation-based evaluation approaches are impractical. © 2013 IEEE.
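For context on the bound involved, the single-hop Cramér-Rao bound for range-based 2-D position estimation under independent Gaussian distance noise follows directly from the Fisher information matrix, as sketched below; the anchor layout and noise level are invented, and the paper's multi-hop, centralized analysis is more involved.

```python
# Single-hop CRB for 2-D range-based localization with i.i.d. Gaussian range noise.
# Anchor positions, the true node position and sigma are invented for illustration.
import numpy as np

def localization_crb(node, anchors, sigma):
    """Lower bound on E[||p_hat - p||^2] via the Fisher information matrix."""
    J = np.zeros((2, 2))
    for a in anchors:
        diff = node - a
        u = diff / np.linalg.norm(diff)       # unit vector from anchor to node
        J += np.outer(u, u) / sigma**2        # each range measurement's contribution
    return np.trace(np.linalg.inv(J))

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
node = np.array([3.0, 4.0])
print(localization_crb(node, anchors, sigma=0.5))   # bound on mean squared position error
```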
Abstract:
Macroeconomic developments, such as the business cycle, have a remarkable influence on firms and their performance. In business-to-business (B-to-B) markets characterized by a strong emphasis on long-term customer relationships, market orientation (MO) provides a particularly important safeguard for firms against fluctuating market forces. Using panel data from an economic upturn and downturn, we examine the effectiveness of different forms of MO (i.e., customer orientation, competitor orientation, interfunctional coordination, and their combinations) on firm performance in B-to-B firms. Our findings suggest that the impact of MO increases especially during a downturn, with interfunctional coordination clearly boosting firm performance and competitor orientation, conversely, even becoming detrimental. The findings further indicate that both the role of MO and its most effective forms vary across industry sectors, MO having a particularly strong impact on performance among B-to-B service firms. The findings of our study provide guidelines for executives to better manage performance across the business cycle and to tailor their investments in MO more effectively according to the firm's specific industry sector.
A simulation analysis of spoke-terminals operating in LTL Hub-and-Spoke freight distribution systems
Abstract:
The research presented in this thesis is concerned with Discrete-Event Simulation (DES) modelling as a method to facilitate logistical policy development within the UK Less-than-Truckload (LTL) freight distribution sector, which has been typified by “Pallet Networks” operating on a hub-and-spoke philosophy. Current literature relating to LTL hub-and-spoke and cross-dock freight distribution systems traditionally examines a variety of network and hub design configurations, each consistent with classical notions of creating process efficiency, improving productivity, reducing costs and generally creating economies of scale through bulk optimisation. Whilst there is a growing abundance of papers discussing both the network design and hub operational components mentioned above, there is a shortcoming in the overall analysis when it comes to the “spoke-terminal” of hub-and-spoke freight distribution systems and its capability for handling the diverse and discrete customer profiles of freight that multi-user LTL hub-and-spoke networks typically handle over the “last mile” of the delivery, in particular a mix of retail and non-retail customers. A simulation study is undertaken to investigate the impact on operational performance when the current combined spoke-terminal delivery tours are separated by ‘profile type’ (i.e. retail or non-retail). The results indicate that a potential improvement in delivery performance can be made by separating retail and non-retail delivery runs at the spoke-terminal, and that dedicated retail and non-retail delivery tours could be adopted in order to better meet customer delivery requirements and adapt hub-deployed policies. The study also leverages key operator experiences to highlight the main practical implementation challenges of integrating the observed simulation results into the real world. The study concludes that DES can be harnessed as an enabling device to develop a ‘guide policy’. This policy needs to be flexible and should be applied in stages, taking into account the sector's growing retail exposure.
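A toy sketch of the kind of discrete-event model described, written with the SimPy library, is shown below: delivery tours at a spoke terminal are run either combined or split by customer profile, and the completion time of the last tour is compared. Fleet size, drop counts and service times are invented and are not taken from the thesis.

```python
# Toy SimPy sketch: spoke-terminal delivery tours, combined vs. split by customer profile.
# Drop times, fleet size and the tour mix are invented for illustration only.
import random
import simpy

random.seed(1)

def tour(env, vehicles, drops, drop_time, done):
    with vehicles.request() as req:
        yield req                                    # wait for a free delivery vehicle
        for _ in range(drops):
            yield env.timeout(random.expovariate(1.0 / drop_time))
        done.append(env.now)                         # record tour completion time

def run_scenario(tours):
    """tours: list of (number_of_drops, mean_minutes_per_drop) per delivery tour."""
    env = simpy.Environment()
    vehicles = simpy.Resource(env, capacity=3)       # assumed spoke-terminal fleet size
    done = []
    for drops, drop_time in tours:
        env.process(tour(env, vehicles, drops, drop_time, done))
    env.run()
    return max(done)                                 # time when the last tour finishes

combined = [(12, 14.0)] * 6                          # mixed retail / non-retail tours
split = [(8, 18.0)] * 3 + [(6, 9.0)] * 3             # dedicated retail vs. non-retail tours
print("combined:", run_scenario(combined))
print("split:   ", run_scenario(split))
```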