846 results for Post-Operating Performance
Abstract:
This paper develops an index for comparing the productivity of groups of operating units in cost terms when input prices are available. In that sense it represents an extension of a similar index available in the literature for comparing groups of units in terms of technical productivity in the absence of input prices. The index is decomposed to reveal the origins of differences in the performance of the groups of units in terms of both technical and cost productivity. The index and its decomposition are of value in contexts where units that perform the same function need to be compared but can be grouped according to the different contexts in which they operate, as might arise, for example, in comparisons of water or gas transmission companies operating in different countries.
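The abstract does not reproduce the index itself, but cost-side productivity indices of this type are typically built on Farrell's standard decomposition of cost efficiency into technical and allocative components. A minimal sketch of that textbook building block (not the paper's exact index) follows:

```latex
\[
\underbrace{\frac{w^{\top}x^{*}}{w^{\top}x}}_{\text{cost efficiency}}
\;=\;
\underbrace{\theta}_{\text{technical efficiency}}
\times
\underbrace{\frac{w^{\top}x^{*}}{w^{\top}(\theta x)}}_{\text{allocative efficiency}}
\]
```

Here \(x\) is the observed input vector, \(w\) the input price vector, \(\theta x\) its radial contraction onto the technical frontier, and \(x^{*}\) the cost-minimising input bundle at prices \(w\). Comparing group-level aggregates of these components is the kind of step that yields a group cost-productivity decomposition.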
Abstract:
This paper investigates the role of entrepreneurs' general and specific human capital in the performance of UK new technology based firms (NTBFs), using a resource-based approach to entrepreneurship theory. The effect of entrepreneurial human capital on the performance of NTBFs is investigated using data derived from a survey of 412 firms operating in both the high-tech manufacturing and services sectors. Consistent with resource-based theory, specific human capital is found to be more important than general human capital for the performance of NTBFs. More specifically, individual entrepreneurs or entrepreneurial teams with high levels of formal business education, or with commercial, managerial or same-sector experience, are found to have created better performing NTBFs. Finally, it is found that the performance of an NTBF can improve through the combination of heterogeneous but complementary skills, including, for example, technical education with commercial experience, or managerial with technical and managerial with commercial experience. © 2010 Springer Science+Business Media, LLC.
Abstract:
Direct computation of the bit-error rate (BER) and laboratory experiments are used to assess the performance of a non-slope-matched transoceanic submarine transmission link operating at a 20 Gb/s channel rate and employing return-to-zero differential phase-shift keying (RZ-DPSK) modulation. Using this system as an example, we compare the accuracies of the existing theoretical approaches to BER estimation for the RZ-DPSK format. © 2007 Optical Society of America.
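The paper's specific estimators are not reproduced in the abstract; as a hedged illustration of the kind of comparison involved, the sketch below contrasts two textbook BER estimators for DPSK: the Gaussian (Q-factor) approximation and the ideal matched-filter DPSK result. The Q ≈ √SNR mapping used in the loop is a simplifying assumption, not the paper's method.

```python
import math

def ber_gaussian(q_factor: float) -> float:
    """Gaussian (Q-factor) approximation, often carried over to DPSK:
    BER ~ 0.5 * erfc(Q / sqrt(2))."""
    return 0.5 * math.erfc(q_factor / math.sqrt(2.0))

def ber_ideal_dpsk(snr_per_bit: float) -> float:
    """Ideal DPSK with matched-filter (balanced) detection:
    BER = 0.5 * exp(-rho), rho = SNR per bit in linear units."""
    return 0.5 * math.exp(-snr_per_bit)

# Compare the two estimators over a range of received SNRs.
for snr_db in (10, 12, 14, 16):
    rho = 10 ** (snr_db / 10)
    q = math.sqrt(rho)   # crude Q ~ sqrt(SNR) mapping (assumption)
    print(f"{snr_db} dB: Gaussian {ber_gaussian(q):.2e}, "
          f"ideal DPSK {ber_ideal_dpsk(rho):.2e}")
```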
Abstract:
Liquid-level sensing technologies have attracted great prominence because such measurements are essential to industrial applications such as fuel storage, flood warning and the biochemical industry. Traditional liquid level sensors are based on electromechanical techniques; however, they suffer from intrinsic safety concerns in explosive environments. In recent years, given that optical fiber sensors have many well-established advantages such as high accuracy, cost-effectiveness, compact size and ease of multiplexing, several optical fiber liquid level sensors have been investigated, based on different operating principles such as side-polishing the cladding and a portion of the core, using a spiral side-emitting optical fiber, or using silica fiber gratings. The present work proposes a novel and highly sensitive liquid level sensor making use of polymer optical fiber Bragg gratings (POFBGs). The key elements of the system are a set of POFBGs embedded in silicone rubber diaphragms. This is a new development building on the idea of determining liquid level by measuring the pressure at the bottom of a liquid container; however, it has a number of critical advantages. The system features several such FBG-based pressure sensors placed at different depths. Any sensor above the surface of the liquid will read the same ambient pressure, while sensors below the surface will read pressures that increase linearly with depth. The position of the liquid surface can therefore be approximately identified as lying between the first sensor to read an above-ambient pressure and the next higher sensor. This level of precision would not in general be sufficient for most liquid level monitoring applications; however, a much more precise determination can be made by applying linear regression to the pressure readings from the sub-surface sensors. There are numerous advantages to this multi-sensor approach. First, linear regression over multiple sensors is inherently more accurate than using a single pressure reading to estimate depth. Second, common-mode temperature-induced wavelength shifts in the individual sensors are automatically compensated. Third, temperature-induced changes in the sensor pressure sensitivity are also compensated. Fourth, the approach provides the possibility to detect and compensate for malfunctioning sensors. Finally, the system is immune to changes in the density of the monitored fluid and even to changes in the effective force of gravity, as might be encountered in an aerospace application. The performance of an individual sensor was characterized and displays a sensitivity of 54 pm/cm, more than twice that of a previously published sensor head configuration based on a silica FBG, a result of the much lower elastic modulus of POF. Furthermore, the temperature/humidity behavior and measurement resolution were also studied in detail. The proposed configuration displays a highly linear response, high resolution and good repeatability. The results suggest the new configuration can be a useful tool in many different applications, such as aircraft fuel monitoring and biochemical and environmental sensing, where accuracy and stability are fundamental. © (2015) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
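The estimation procedure described above lends itself to a short sketch. The following Python code (sensor heights, pressures and the threshold margin are all hypothetical) fits a line to the submerged sensors' readings and finds where it crosses ambient pressure; note that the density-gravity product cancels out, which is exactly the immunity claimed in the abstract.

```python
import numpy as np

def estimate_level(heights_m: np.ndarray, pressures_pa: np.ndarray,
                   margin_pa: float = 50.0) -> float:
    """Estimate liquid level from pressure sensors at known heights
    above the tank bottom (hypothetical units and threshold).

    Sensors above the surface all read ambient pressure; submerged
    sensors read pressures increasing linearly with depth, so a linear
    fit to the submerged readings locates the surface where the fitted
    line returns to ambient pressure.
    """
    # Ambient pressure taken as the lowest reading; this assumes at
    # least one sensor sits above the liquid surface.
    ambient = pressures_pa.min()
    submerged = pressures_pa > ambient + margin_pa
    if submerged.sum() < 2:
        raise ValueError("need at least two submerged sensors for a fit")
    # Fit P(z) = a + b*z over submerged sensors (slope b = -rho*g).
    b, a = np.polyfit(heights_m[submerged], pressures_pa[submerged], 1)
    # Surface height: where the fit crosses ambient. rho*g cancels here,
    # hence the density/gravity immunity.
    return (ambient - a) / b

# Hypothetical example: five sensors, surface at ~1.3 m, rho*g ~ 9810 Pa/m.
z = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
p = np.array([101325 + 9810 * max(1.3 - zi, 0.0) for zi in z])
print(f"estimated level: {estimate_level(z, p):.3f} m")
```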
A simulation analysis of spoke-terminals operating in LTL Hub-and-Spoke freight distribution systems
Abstract:
The research presented in this thesis is concerned with Discrete-Event Simulation (DES) modelling as a method to facilitate logistical policy development within the UK Less-than-Truckload (LTL) freight distribution sector, which is typified by "Pallet Networks" operating on a hub-and-spoke philosophy. Current literature on LTL hub-and-spoke and cross-dock freight distribution systems traditionally examines a variety of network and hub design configurations, each consistent with classical notions of creating process efficiency, improving productivity, reducing costs and generally creating economies of scale through bulk optimisation. Whilst there is a growing abundance of papers discussing both the network design and hub operational components mentioned above, the overall analysis falls short when it comes to the "spoke-terminal" of hub-and-spoke freight distribution systems and its capability for handling the diverse and discrete customer profiles of freight that multi-user LTL hub-and-spoke networks typically handle over the "last mile" of the delivery, in particular a mix of retail and non-retail customers. A simulation study is undertaken to investigate the impact on operational performance when the current combined spoke-terminal delivery tours are separated by profile type (i.e. retail or non-retail). The results indicate that a potential improvement in delivery performance can be made by separating retail and non-retail delivery runs at the spoke-terminal, and that dedicated retail and non-retail delivery tours could be adopted in order to better meet customer delivery requirements and adapt hub-deployed policies. The study also draws on key operator experiences to highlight the main practical challenges in carrying the simulation results over into the real world. The study concludes that DES can be harnessed as an enabling device for developing a 'guide policy'; such a policy needs to be flexible, should be applied in stages, and should take into account the growing retail exposure.
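The thesis's DES models are not available here, but a minimal SimPy-flavoured sketch can illustrate the shape of the experiment: the same set of drops served either by one combined tour or by dedicated retail and non-retail tours. All timings, drop counts and service-time values are hypothetical.

```python
import random
import simpy

SERVICE_MIN = {"retail": 25, "non-retail": 10}   # hypothetical unload times

def tour(env, name, drops, log):
    """One delivery tour: drive to each drop, then unload."""
    for profile in drops:
        yield env.timeout(random.uniform(10, 20))   # drive leg (minutes)
        yield env.timeout(SERVICE_MIN[profile])     # unload at customer
    log[name] = env.now                             # tour completion time

def run(separated: bool, seed: int = 1) -> dict:
    random.seed(seed)
    env = simpy.Environment()
    log = {}
    if separated:
        env.process(tour(env, "retail tour", ["retail"] * 6, log))
        env.process(tour(env, "non-retail tour", ["non-retail"] * 6, log))
    else:
        drops = ["retail"] * 6 + ["non-retail"] * 6
        random.shuffle(drops)
        env.process(tour(env, "combined tour", drops, log))
    env.run()
    return log

print("combined: ", run(separated=False))
print("separated:", run(separated=True))
```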
Abstract:
Premium Intraocular Lenses (IOLs) such as toric IOLs, multifocal IOLs (MIOLs) and accommodating IOLs (AIOLs) can provide better refractive and visual outcomes than standard monofocal designs, leading to greater levels of post-operative spectacle independence. The principal theme of this thesis is the development of new assessment techniques that can help to improve future premium IOL design. IOLs designed to correct astigmatism form the focus of the first part of the thesis. A novel toric IOL design was devised to decrease the effect of toric rotation on patient visual acuity, but was found to have neither a beneficial nor a detrimental impact on visual acuity retention. IOL tilt, like rotation, may curtail visual performance; however, current IOL tilt measurement techniques require specialist equipment not readily available in most ophthalmological clinics. Thus a new idea was proposed that applied Pythagoras's theorem to digital images of IOL optic symmetricality in order to calculate tilt, and this was shown to be both accurate and highly repeatable. A literature review revealed little information on the relationship between IOL tilt, decentration and rotation, so this was examined. A poor correlation between these factors was found, indicating they occur independently of each other. Next, presbyopia-correcting IOLs were investigated. The light distribution of different MIOLs and an AIOL was assessed using perimetry, to establish whether this could be used to inform optimal IOL design. The anticipated differences in threshold sensitivity between IOLs were not found, however, so perimetry was concluded to be ineffective in mapping the retinal projection of blur. The observed difference between subjective and objective measures of accommodation, arising from the influence of pseudoaccommodative factors, was explored next, to establish how much additional objective power would be required to restore the eye's focus with AIOLs. Blur tolerance was found to be the key contributor to the ocular depth of focus, with an approximate dioptric influence of 0.60D. Our understanding of MIOLs may be limited by the need for subjective defocus curves, which are lengthy and do not permit important additional measures to be undertaken. The use of aberrometry to provide faster objective defocus curves was therefore examined. Although subjective and objective measures related well, the peaks of the MIOL defocus curve profile were not evident with objective prediction of acuity, indicating a need for further refinement of visual quality metrics based on ocular aberrations. The experiments detailed in the thesis evaluate methods to improve visual performance with toric IOLs. They also investigate new techniques to allow more rapid post-operative assessment of premium IOLs, which could allow greater insights to be obtained into several aspects of visual quality, in order to optimise future IOL design and ultimately enhance patient satisfaction.
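The abstract does not spell out the tilt calculation, but one plausible reading of "applying Pythagoras's theorem to optic symmetricality" is that a tilted circular optic images as an ellipse whose axes encode the tilt. The sketch below is only that illustrative reading, not the thesis's actual image-processing procedure; the axis values are hypothetical.

```python
import math

def tilt_from_axes(major_px: float, minor_px: float) -> float:
    """Estimate IOL tilt (degrees) from the apparent major/minor axes of
    the imaged optic. A circular optic of radius a tilted by theta
    projects to an ellipse with minor semi-axis b = a*cos(theta); by
    Pythagoras the out-of-plane component is sqrt(a**2 - b**2), giving
    tan(theta) = sqrt(a**2 - b**2) / b. Illustrative reading only.
    """
    a, b = major_px, minor_px
    return math.degrees(math.atan(math.sqrt(a * a - b * b) / b))

# Hypothetical measurement: ~1.5% foreshortening implies ~10 deg of tilt.
print(f"{tilt_from_axes(120.0, 118.2):.1f} degrees")
```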
Abstract:
This empirical study investigates the performance of cross-border M&A. The first stage identifies the determinants of completing a cross-border M&A deal; one focus here is to extend the existing empirical evidence in the field of cross-border M&A and examine the likelihood of M&A completion from a different perspective. Given the determinants of cross-border M&A completions, the second stage investigates the effects of cross-border M&A on post-acquisition firm performance for both targets and acquirers. The thesis exploits a hitherto unused database consisting of firms that were rumoured to be undertaking M&A, following each deal to completion or abandonment. This approach highlights a number of limitations of the previous literature, which relies on statistical methodology to identify potential but non-existent mergers. The thesis challenges some conventional understanding of M&A activity. Cross-border M&A activity is underpinned by various motives, such as synergy, management discipline, and the acquisition of complementary resources. Traditionally, it is believed that these motives boost international M&A activity and improve firm performance after takeovers. However, this thesis shows that factors associated with these motives, such as the acquirer's profitability and liquidity and the target's intangible resources, actually deterred the completion of cross-border M&A over the period 2002-2011. The overall finding suggests that cross-border M&A is an efficiency-seeking activity rather than a resource-seeking one. Furthermore, compared with firms in takeover rumours, the completion of M&A lowers firm performance. More specifically, difficulties in the transfer of competitive advantages and the integration of strategic assets lead to low post-acquisition performance in terms of productivity. Moreover, firms fail to realise the anticipated synergistic and managerial disciplinary effects once a cross-border M&A is completed, which is reflected in a low post-acquisition profitability level.
Abstract:
In this study, we develop a DEA-based performance measurement methodology that is consistent with performance assessment frameworks such as the Balanced Scorecard. The methodology takes into account the direct or inverse relationships that may exist among the dimensions of performance in order to construct appropriate production frontiers. The production frontiers we obtain are deemed appropriate because they consist solely of firms with desirable levels of all dimensions of performance, levels at least equal to the critical values set by decision makers. The properties and advantages of our methodology over competing methodologies are presented through an application to a real-world case study of retail firms operating in the US. A comparative analysis between the new methodology and existing methodologies explains the failure of the existing approaches, when directly or inversely related dimensions of performance are present, to define appropriate production frontiers and to express the interrelationships between the dimensions of performance.
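The paper's modified frontier construction is not reproduced in the abstract; for orientation, the sketch below solves the standard input-oriented CCR DEA model, the baseline that such methodologies extend, as a linear program with hypothetical retail-firm data.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X: np.ndarray, Y: np.ndarray, o: int) -> float:
    """Input-oriented CCR efficiency of unit o.

    X: (m inputs x n units), Y: (s outputs x n units).
    Solve: min theta  s.t.  X @ lam <= theta * x_o,  Y @ lam >= y_o,  lam >= 0.
    Decision variables: [theta, lam_1 .. lam_n].
    """
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(1 + n)
    c[0] = 1.0                                  # minimise theta
    # Input constraints: X @ lam - theta * x_o <= 0
    A_in = np.hstack([-X[:, [o]], X])
    b_in = np.zeros(m)
    # Output constraints: -(Y @ lam) <= -y_o, i.e. Y @ lam >= y_o
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    b_out = -Y[:, o]
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

# Hypothetical data: 2 inputs, 1 output, 4 retail firms.
X = np.array([[2.0, 3.0, 4.0, 5.0],
              [4.0, 2.0, 3.0, 6.0]])
Y = np.array([[1.0, 1.0, 1.0, 1.0]])
for j in range(4):
    print(f"firm {j}: efficiency = {ccr_efficiency(X, Y, j):.3f}")
```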
Abstract:
This paper examines changes in the drivers of productivity in Germany over the period 1997-2012. We start by comparing the performance of German firms and inward investors across a range of sectors before and during the recovery from the global financial crisis of 2008, and subsequently examine the channels through which different firms are able to generate productivity. Our results show that foreign investors are more productive than German MNEs and purely domestic firms, with the gap narrowing in the manufacturing sector but growing in the service sector during the recovery period. We also contrast firms whose productivity growth is related to greater use of intangible assets with those whose productivity is linked to cash flow. The productivity of inward investors is driven by cash flow rather than intangible assets, the latter mattering only for high-technology investors from the EU and the USA.
Abstract:
The stress sensitivity of polymer optical fibre (POF) based Fabry-Perot sensors formed by two uniform Bragg gratings of finite dimensions is investigated. POF has attracted considerable interest in recent years owing to its material properties, which differ from those of its silica counterpart: biocompatibility, a higher failure strain and the highly elastic nature of POF are some of the main advantages. The much lower Young's modulus of polymer materials compared to silica gives POF-based sensors enhanced stress sensitivity, which makes them strong candidates for acoustic wave receivers and any kind of force detection. The main drawback of POF technology is perhaps the high fibre loss. In a lossless fibre the sensitivity of an interferometer is proportional to its cavity length; however, attenuation along the optical path can significantly reduce the finesse of the Fabry-Perot interferometer and, beyond a certain point, degrade its sensitivity. The reflectivity of the two gratings used to form the interferometer is also reduced as the fibre loss increases. In this work, a numerical model is developed to study the performance of POF-based Fabry-Perot sensors formed by two uniform Bragg gratings of finite dimensions. Various optical and physical properties are considered, such as the grating physical length, the grating effective length (which indicates the point where the light is effectively reflected), the refractive index modulation of the grating, the cavity length of the interferometer, the attenuation and the operating wavelength. Using this model, we are able to identify the regimes in which the PMMA-based sensor offers enhanced stress sensitivity compared to its silica-based counterpart.
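As a rough companion to the numerical model described, the sketch below evaluates an Airy-type reflectance for a two-mirror cavity with round-trip attenuation (one common sign convention; the thesis's finite-grating model is more detailed) and shows fringe visibility falling as fibre loss grows. Reflectivities, loss values and cavity length are hypothetical.

```python
import numpy as np

def fp_reflectance(r1: float, r2: float, alpha_db_per_m: float,
                   cavity_m: float, phase: np.ndarray) -> np.ndarray:
    """Reflected intensity of a Fabry-Perot cavity formed by two mirrors
    (here, gratings) with field reflectivities r1, r2, including fibre
    loss inside the cavity.

    Round-trip field attenuation: a2 = 10**(-alpha_dB * L / 10).
    Airy-type sum of multiple reflections (one sign convention):
      r(phi) = (r1 + r2*a2*exp(i*phi)) / (1 + r1*r2*a2*exp(i*phi))
    """
    a2 = 10 ** (-alpha_db_per_m * cavity_m / 10)
    e = a2 * np.exp(1j * phase)
    r = (r1 + r2 * e) / (1 + r1 * r2 * e)
    return np.abs(r) ** 2

phi = np.linspace(0, 2 * np.pi, 1000)
for loss in (0.0, 1.0, 10.0):   # dB/m: silica-like vs POF-like (hypothetical)
    fringe = fp_reflectance(0.5, 0.5, loss, 0.1, phi)
    visibility = (fringe.max() - fringe.min()) / (fringe.max() + fringe.min())
    print(f"loss {loss:4.1f} dB/m -> fringe visibility {visibility:.3f}")
```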
Abstract:
Lifelong surveillance is not cost-effective after endovascular aneurysm repair (EVAR), but is required to detect aortic complications which are fatal if untreated (type 1/3 endoleak, sac expansion, device migration). Aneurysm morphology determines the probability of aortic complications and therefore the need for surveillance, but existing analyses have proven incapable of identifying patients at sufficiently low risk to justify abandoning surveillance. This study aimed to improve the prediction of aortic complications through the application of machine-learning techniques. Patients undergoing EVAR at 2 centres were studied from 2004 to 2010. Aneurysm morphology had previously been studied to derive the SGVI Score for predicting aortic complications. Bayesian Neural Networks were designed using the same data to dichotomise patients into groups at low or high risk of aortic complications. Network training was performed only on patients treated at centre 1. External validation was performed by assessing network performance, independently of network training, on patients treated at centre 2. Discrimination was assessed by Kaplan-Meier analysis comparing aortic complications in predicted low-risk versus predicted high-risk patients. 761 patients aged 75 ± 7 years underwent EVAR at the 2 centres. Mean follow-up was 36 ± 20 months. Neural networks were created incorporating neck angulation/length/diameter/volume; AAA diameter/area/volume/length/tortuosity; and common iliac tortuosity/diameter. A 19-feature network predicted aortic complications with excellent discrimination on external validation (5-year freedom from aortic complications in predicted low-risk vs predicted high-risk patients: 97.9% vs. 63%; p < 0.0001). A Bayesian Neural-Network algorithm can identify patients in whom it may be safe to abandon surveillance after EVAR. This proposal requires prospective study.
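The authors' Bayesian Neural Network is not reproduced here; the sketch below is a minimal stand-in using a scikit-learn MLP trained on centre-1 data only and externally validated on centre-2 data, with Kaplan-Meier estimates per predicted risk group via lifelines. All data are synthetic and the 19 morphology features are placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(0)

def fake_centre(n):
    """Synthetic stand-in: 19 morphology features, follow-up time
    (months) and an aortic-complication indicator per patient."""
    X = rng.normal(size=(n, 19))
    risk = X[:, 0] + 0.5 * X[:, 1]               # synthetic risk signal
    event = ((risk + rng.normal(size=n)) > 1.0).astype(int)
    time = rng.uniform(6, 60, size=n)
    return X, event, time

X1, e1, t1 = fake_centre(400)    # centre 1: training only
X2, e2, t2 = fake_centre(361)    # centre 2: external validation only

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000,
                                  random_state=0))
clf.fit(X1, e1)
high = clf.predict(X2).astype(bool)              # dichotomised risk groups

# Kaplan-Meier freedom-from-complication estimate per predicted group.
for label, mask in (("predicted low-risk", ~high),
                    ("predicted high-risk", high)):
    km = KaplanMeierFitter()
    km.fit(t2[mask], event_observed=e2[mask], label=label)
    print(label, "freedom from complications at 60 months:",
          float(km.predict(60.0)))
```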
Abstract:
The present study investigated the impact of pre-existing expectancies regarding the effects of the caffeine load of a drink, and of the perceived caffeine content, on subjective mood and vigilance performance. Caffeine-deprived participants (N = 25) were tested in four conditions (within-subjects design), using a 2 × 2 design crossing caffeine load with the information given about the caffeine content of the drink. In two sessions participants were given caffeinated coffee and in two they were given decaffeinated coffee; within each pair of sessions, on one occasion they were given accurate information about the drink and on the other they were given inaccurate information. Mood and vigilance performance were assessed post-ingestion. Caffeine was found to enhance performance, but only when participants were accurately told they were receiving it. When decaffeinated coffee was given, performance was poorer irrespective of expectancy. However, when caffeine was given but participants were told it was decaffeinated coffee, performance was as poor as when no caffeine had been administered. There were no easily interpretable effects on mood. The pharmacological effects of caffeine appear to act synergistically with expectancy. © 2010.
Abstract:
The aim of this work is to empirically derive a shortened version of the Geriatric Depression Scale (GDS), with the intention of maximising diagnostic performance in the detection of depression compared with previously validated GDS versions, while minimising the length of the instrument. A total of 233 individuals (128 from a Day Hospital, 105 randomly selected from the community) aged 60 or over completed the GDS and other measures. The 30 GDS items were entered, in the Day Hospital sample, as independent variables in a stepwise logistic regression analysis predicting a diagnosis of Major Depression. A final solution of 10 items was retained, which correctly classified 97.4% of cases. The diagnostic performance of these 10 GDS items was then analysed in the random community sample with a receiver operating characteristic (ROC) curve. Sensitivity (100%), specificity (97.2%), positive (81.8%) and negative (100%) predictive values, and the area under the curve (0.994) were comparable with the values for the GDS-30 and higher than those for the GDS-15, GDS-10 and GDS-5. In addition, the proposed new scale showed excellent fit when its unidimensionality was tested with CFA for categorical outcomes (e.g., CFI = 0.99). The 10-item version of the GDS proposed here, the GDS-R, appears to retain the diagnostic performance of the 30-item GDS for detecting depression in older adults, while improving sensitivity and predictive values relative to other shortened versions.
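The analysis pipeline (stepwise item selection, then ROC-based diagnostics) translates naturally into code. The sketch below approximates it with scikit-learn's forward SequentialFeatureSelector and in-sample diagnostics on synthetic stand-in data; the paper itself selected items in the Day Hospital sample and validated them in the separate community sample.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(42)

# Synthetic stand-in for 30 binary GDS items and a depression diagnosis.
X = rng.integers(0, 2, size=(233, 30)).astype(float)
y = (X[:, :5].sum(axis=1) + rng.normal(0, 1, 233) > 3).astype(int)

# Forward selection of 10 items (an approximation of the paper's
# stepwise logistic regression).
selector = SequentialFeatureSelector(LogisticRegression(max_iter=1000),
                                     n_features_to_select=10,
                                     direction="forward")
selector.fit(X, y)
items = selector.get_support(indices=True)
print("retained items:", items + 1)   # 1-based GDS item numbers

# Refit on the retained items and compute ROC-style diagnostics
# (in-sample here; the paper used a separate validation sample).
model = LogisticRegression(max_iter=1000).fit(X[:, items], y)
prob = model.predict_proba(X[:, items])[:, 1]
pred = (prob >= 0.5).astype(int)
tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
print(f"AUC {roc_auc_score(y, prob):.3f}  "
      f"sensitivity {tp / (tp + fn):.3f}  specificity {tn / (tn + fp):.3f}  "
      f"PPV {tp / (tp + fp):.3f}  NPV {tn / (tn + fn):.3f}")
```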
Abstract:
Purpose – The purpose of this paper is to present a conceptual framework for analysing and understanding the twin developments of successful microeconomic reform on the one hand and failed macroeconomic stabilisation attempts on the other in Hungary. The case study also explores why Hungarian policymakers were willing to initiate reforms in the micro sphere but reluctant to initiate major changes in public finances, both before and after the regime change of 1989/1990.
Design/methodology/approach – The paper applies a path-dependent approach by carefully analysing Hungary's Communist and post-Communist economic development. The study restricts itself to a positive analysis, although normative statements can be drawn accordingly.
Findings – The study demonstrates that the deteriorating economic performance of Hungary is not a new phenomenon. By providing a path-dependent explanation, it argues that both Communist and post-Communist governments used the general budget as a buffer to compensate the losers of economic reforms, especially microeconomic restructuring. The country's gradualist success in liberalisation, marketisation and privatisation (dating back to at least 1968) was accompanied by constant overspending in the general government.
Practical implications – Hungary was one of the worst-hit countries in the 2008/2009 financial crisis, not just in Central and Eastern Europe but in the whole world. Its capacity and opportunity to strengthen international investors' confidence is, however, in doubt. The current deterioration is deeply rooted in failed past macroeconomic management. Dissolving fiscal laxity and state paternalism in a broader context therefore requires an all-encompassing reform of the general government, which may pose serious challenges to the political regime as well.
Originality/value – The study shows that a relatively high ratio of redistribution, a high and persistent public deficit and accelerating indebtedness are not recent phenomena in Hungary. In fact, these trends characterised the country well before the transformation of 1989/1990 and have continued in the post-socialist years. To explain this, the study argues that over the last few decades the hardening of the budget constraint of firms has come at the cost of maintaining the soft budget constraint of the state.
Abstract:
The purpose of this study was to determine the efficacy of a writing process approach for language arts instruction with learning disabled elementary students. A nonequivalent control group design was used. The sample comprised 24 students with learning disabilities in second and third grade. All students were instructed in resource room settings for ninety minutes per day in language arts. The students in the treatment group received instruction using the writing process steps to create complete, meaningful compositions on self-chosen topics; a literature-based reading program accompanied the writing instruction to provide examples of good writing and a basis for topic selection. The students in the control group received instruction through the county-adopted textbooks and accompanying worksheets, with the teacher following basic textbook and curriculum guide suggestions, which consisted mainly of fill-in-the-blank and matching exercises. The treatment group consisted of 12 students: five second-graders and seven third-graders. The control group consisted of 12 students: four second-graders and eight third-graders. All students were pretested and posttested using the Woodcock-Johnson Tests of Achievement-Revised (WJ-R ACH) for writing samples and the Woodcock Reading Mastery Test (WRMT) for reading achievement. t-tests were conducted to investigate the gain from pretest to posttest for each reading and writing variable in each group separately. The results showed a highly significant improvement from pretest to posttest on all writing and reading variables for both groups. Analysis of covariance showed that the population mean posttest achievement scores for all variables, adjusted for the pretest, were higher for the treatment group than for the control group.
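The analysis pairs within-group pre/post t-tests with an analysis of covariance on posttest scores adjusted for the pretest. A minimal sketch with hypothetical scores (the study's actual data are not reproduced):

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Hypothetical stand-in scores for the 24 students (pre/post, two groups).
df = pd.DataFrame({
    "group": ["treatment"] * 12 + ["control"] * 12,
    "pre":  [78, 74, 81, 69, 72, 77, 70, 75, 79, 73, 71, 76,
             77, 73, 80, 70, 74, 76, 72, 75, 78, 71, 69, 74],
    "post": [92, 88, 95, 84, 87, 91, 85, 90, 93, 86, 84, 89,
             84, 80, 86, 77, 81, 83, 79, 82, 85, 78, 76, 81],
})

# Pre-to-post gain within each group (paired t-tests).
for g, sub in df.groupby("group"):
    t, p = stats.ttest_rel(sub["post"], sub["pre"])
    print(f"{g}: t = {t:.2f}, p = {p:.4f}")

# ANCOVA: posttest scores adjusted for pretest, compared across groups.
model = smf.ols("post ~ pre + C(group)", data=df).fit()
print(model.summary().tables[1])
```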