29 results for Large detector-systems performance

in Aston University Research Archive


Relevance: 100.00%

Abstract:

There is growing peer and donor pressure on African countries to utilise available resources more efficiently in a bid to support the ongoing efforts to expand coverage of health interventions with a view to achieving the health-related Millennium Development Goals. The purpose of this study was to estimate the technical and scale efficiency of national health systems in the African continent. Methods: The study applied the Data Envelopment Analysis (DEA) approach to estimate technical efficiency and scale efficiency among the 53 countries of the African continent. Results: Of the 38 low-income African countries, 12 countries' national health systems achieved a constant returns to scale technical efficiency (CRSTE) score of 100%; 15 countries had a variable returns to scale technical efficiency (VRSTE) score of 100%; and 12 countries had a scale efficiency (SE) score of one. The average VRSTE score was 95% and the mean SE score was 59%, meaning that while on average the degree of pure inefficiency was only 5%, the magnitude of scale inefficiency was 41%. Of the 15 middle-income countries, 5, 9 and 5 countries respectively had CRSTE, VRSTE and SE scores of 100%; 10, 6 and 10 countries respectively had CRSTE, VRSTE and SE scores of less than 100% and were thus deemed inefficient. The average VRSTE (i.e. pure efficiency) score was 97.6%; the average SE score was 49.9%. Conclusion: There is a large unmet need for health and health-related services among countries of the African continent. Thus, it would not be advisable for health policy-makers to address NHS inefficiencies through reductions in excess human resources for health. Instead, it would be more prudent for them to leverage health promotion approaches and universal-access prepaid (tax-based, insurance-based or mixed) health financing systems to create demand for underutilised health services/interventions, with a view to increasing ultimate health outputs to efficient target levels.
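The DEA approach used here boils down to solving one small linear programme per country. The sketch below is a minimal illustration assuming toy input/output data rather than the study's actual health-system dataset: it computes input-oriented CRS and VRS technical efficiency with scipy and derives scale efficiency as their ratio.

```python
# Minimal input-oriented DEA sketch (CRS and VRS) using scipy;
# the data are hypothetical, not the study's inputs/outputs.
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, vrs=False):
    """X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs). Returns theta per DMU."""
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for k in range(n):
        c = np.r_[1.0, np.zeros(n)]                # minimise theta
        # inputs:  sum_j lam_j * x_ij <= theta * x_ik
        A_in = np.hstack([-X[[k]].T, X.T])
        b_in = np.zeros(m)
        # outputs: sum_j lam_j * y_rj >= y_rk
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])
        b_out = -Y[k]
        A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1) if vrs else None
        b_eq = np.array([1.0]) if vrs else None     # VRS convexity constraint
        res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                      A_eq=A_eq, b_eq=b_eq,
                      bounds=[(None, None)] + [(0, None)] * n)
        scores.append(res.x[0])
    return np.array(scores)

# toy data: 5 health systems, 2 inputs (spend, workforce), 1 output
X = np.array([[5., 3.], [8., 5.], [4., 2.], [6., 6.], [9., 4.]])
Y = np.array([[60.], [90.], [50.], [70.], [80.]])
crste = dea_efficiency(X, Y, vrs=False)
vrste = dea_efficiency(X, Y, vrs=True)
se = crste / vrste                                  # scale efficiency
print(np.round(crste, 3), np.round(vrste, 3), np.round(se, 3))
```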

Relevance: 100.00%

Abstract:

The research, which was given the terms of reference "to cut the lead time for getting new products into volume production", was sponsored by a company which develops and manufactures telecommunications equipment. The research described was based on studies of the development of two processors designed to control telephone exchanges in the public network. It was shown that for each of these products, which were large electronic systems containing both hardware and software, most of the lead time was taken up with development. About half of this time was consumed by activities associated with redesign resulting from changes found to be necessary after the original design had been built. Analysing the causes of design changes showed the most significant to be Design Faults. The reasons why these predominated were investigated by seeking the collective opinion of design staff and their management using a questionnaire. Using the results of these studies to build upon the work of other authors, a model of the development process of large hierarchical systems is derived. An important feature of this model is its representation of the iterative loops caused by design changes. In order to reduce the development time, two closely related philosophies are proposed: first, that by spending more time at the early stages of development detecting and remedying faults in the design, even greater savings can be made later on; and second, that the collective performance of the development organisation would be improved by increasing the amount and speed of feedback about that performance. A trial was performed to test these philosophies using readily available techniques for design verification. It showed that a saving of about 11 per cent would be made on the development time and that the philosophies might be applied equally successfully to other products and techniques.
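The first philosophy carries a quantitative claim, that extra early verification pays back more than it costs, which a toy Monte Carlo of the redesign loop can make concrete. All figures below are illustrative assumptions, not data from the thesis.

```python
# Hypothetical Monte Carlo sketch of the iterative-rework idea: faults that
# escape early design verification trigger redesign loops later. The numbers
# are illustrative only.
import random

def dev_time(n_faults=20, p_escape=0.5, verify_weeks=0.0,
             rework_weeks=3.0, base_weeks=52, trials=10_000):
    total = 0.0
    for _ in range(trials):
        # each escaped fault costs one redesign loop downstream
        escaped = sum(random.random() < p_escape for _ in range(n_faults))
        total += base_weeks + verify_weeks + escaped * rework_weeks
    return total / trials

no_verify   = dev_time(p_escape=0.5, verify_weeks=0)
with_verify = dev_time(p_escape=0.2, verify_weeks=6)   # extra up-front effort
print(f"mean lead time: {no_verify:.1f} vs {with_verify:.1f} weeks")
```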

Relevance: 100.00%

Abstract:

The profusion of performance measurement models suggested by the Management Accounting literature in the 1990s is one illustration of the substantial changes in Management Accounting teaching materials since the publication of "Relevance Lost" in 1987. At the same time, in the general context of increasing competition and globalisation, it is widely thought that national cultural differences are tending to disappear, meaning that management techniques used in large companies, including performance measurement and management instruments (PMS), tend to be the same irrespective of the company's nationality or location. North American management practice is traditionally described as a contractually based model, mainly focused on financial performance information and measures (FPMs) and more shareholder-focused than that of French companies. Within France, the literature has historically defined performance as broadly multidimensional, driven by the idea that there are no universal rules of management and that efficient management takes local culture and traditions into account. Unlike their North American counterparts, French companies are pressured more by the financial institutions that fund them than by capital markets. They therefore pay greater attention to the long term because they are not subject to quarterly capital market objectives. Hence, management in France should rely on longer-term, more qualitative, less financial and more multidimensional data to assess performance than their North American counterparts. The objective of this research is to investigate whether large French and US companies' practices have changed in the way the textbooks have changed with regard to performance measurement and management, or whether cultural differences still drive differences in performance measurement and management between them. The research findings support the idea that large US and French companies share the same PMS features, influenced by 'universal' PM models.

Relevance: 100.00%

Abstract:

We show that, by proper code design, phase-noise-induced cycle slips causing an error floor can be mitigated for 28 Gbaud DQPSK systems. The performance of BCH codes is investigated in terms of the required overhead. © 2014 OSA.
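The mechanism behind the error floor can be illustrated in a few lines of numpy: a Wiener (random-walk) laser phase noise process corrupts differentially detected QPSK, leaving residual symbol errors regardless of the additive-noise level. The linewidth below is deliberately exaggerated so the floor shows in a short run; none of the parameters are taken from the paper.

```python
# Illustrative sketch (not the paper's setup): Wiener phase noise on DQPSK
# with differential detection, showing a phase-noise-driven error floor.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
baud = 28e9
linewidth = 200e6                       # grossly exaggerated for visibility
sigma = np.sqrt(2 * np.pi * linewidth / baud)   # phase step std per symbol

data = rng.integers(0, 4, n)            # 2 bits/symbol
tx_phase = np.cumsum(data * np.pi / 2)  # differentially encoded phase
noise_phase = np.cumsum(sigma * rng.standard_normal(n))  # Wiener process
rx = np.exp(1j * (tx_phase + noise_phase))
rx += 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))  # AWGN

# differential detection: phase difference between consecutive symbols
det = np.round(np.angle(rx[1:] * rx[:-1].conj()) / (np.pi / 2)) % 4
print(f"symbol error rate with phase noise: {np.mean(det != data[1:]):.2e}")
```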

Relevance: 100.00%

Abstract:

The success of mainstream computing is largely due to the widespread availability of general-purpose architectures and of generic approaches that can be used to solve real-world problems cost-effectively and across a broad range of application domains. In this chapter, we propose that a similar generic framework be used to make the development of autonomic solutions cost-effective, and to establish autonomic computing as a major approach to managing the complexity of today's large-scale systems and systems of systems. To demonstrate the feasibility of general-purpose autonomic computing, we introduce a generic autonomic computing framework comprising a policy-based autonomic architecture and a novel four-step method for the effective development of self-managing systems. A prototype implementation of the reconfigurable policy engine at the core of our architecture is then used to develop autonomic solutions for case studies from several application domains. Looking into the future, we describe a methodology for the engineering of self-managing systems that further extends and generalises our autonomic computing framework.
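As an illustration of what a policy-based autonomic architecture looks like in miniature, the following sketch implements a monitor-analyse-plan-execute style loop with pluggable threshold policies. It is a hypothetical toy, not the chapter's reconfigurable policy engine.

```python
# Minimal sketch of a policy-driven autonomic (MAPE-style) loop, assuming
# simple threshold policies; an illustration, not the chapter's engine.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Policy:
    name: str
    condition: Callable[[Dict], bool]   # analyse: does the policy fire?
    action: Callable[[Dict], None]      # execute: adapt the managed system

class PolicyEngine:
    def __init__(self):
        self.policies = []
    def register(self, policy: Policy):
        self.policies.append(policy)
    def step(self, sensors: Dict):      # one monitor-analyse-plan-execute pass
        for p in self.policies:
            if p.condition(sensors):
                p.action(sensors)

system = {"replicas": 2}                # the managed resource (toy)
engine = PolicyEngine()
engine.register(Policy(
    "scale-out",
    condition=lambda s: s["cpu"] > 0.8,
    action=lambda s: system.update(replicas=system["replicas"] + 1),
))
engine.step({"cpu": 0.93})
print(system)                           # {'replicas': 3}
```

Swapping policies in and out at runtime, rather than hard-coding adaptation logic, is the design choice that makes such an engine reusable across application domains.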

Relevance: 100.00%

Abstract:

Conventional structured methods of software engineering are often based on the use of functional decomposition coupled with the Waterfall development process model. This approach is argued to be inadequate for coping with the evolutionary nature of large software systems. Alternative development paradigms, including the operational paradigm and the transformational paradigm, have been proposed to address the inadequacies of this conventional view of software development, and these are reviewed. JSD is presented as an example of an operational approach to software engineering and is contrasted with other well documented examples. The thesis shows how aspects of JSD can be characterised with reference to formal language theory and automata theory. In particular, it is noted that Jackson structure diagrams are equivalent to regular expressions and can be thought of as specifying corresponding finite automata. The thesis discusses the automatic transformation of structure diagrams into finite automata using an algorithm adapted from compiler theory, and then extends the technique to deal with areas of JSD which are not strictly formalisable in terms of regular languages. In particular, an elegant and novel method for dealing with so-called recognition (or parsing) difficulties is described. Various applications of the extended technique are described. They include a new method of automatically implementing the dismemberment transformation; an efficient way of implementing inversion in languages lacking a goto statement; and a new in-the-large implementation strategy.
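The correspondence the thesis exploits is concrete enough to sketch: Jackson's three constructs (sequence, selection, iteration) match the three regular-expression operators, and Thompson's construction, the standard compiler-theory algorithm, turns such an expression into a finite automaton. The fragment below is an illustrative reimplementation, not the thesis's extended algorithm; it takes postfix input with '.' as explicit concatenation to keep the parsing trivial.

```python
# Sketch of the regex-to-NFA idea (Thompson's construction): sequence,
# selection ('|') and iteration ('*') mirror the three JSD constructs.
import itertools

_ids = itertools.count()

def nfa(postfix):
    """Return (start, accept, transitions) for a postfix regular expression."""
    trans, stack = [], []                    # transitions: (state, symbol_or_None, state)
    for ch in postfix:
        if ch == '.':                        # sequence
            (s2, a2), (s1, a1) = stack.pop(), stack.pop()
            trans.append((a1, None, s2)); stack.append((s1, a2))
        elif ch == '|':                      # selection
            (s2, a2), (s1, a1) = stack.pop(), stack.pop()
            s, a = next(_ids), next(_ids)
            trans += [(s, None, s1), (s, None, s2), (a1, None, a), (a2, None, a)]
            stack.append((s, a))
        elif ch == '*':                      # iteration
            s1, a1 = stack.pop()
            s, a = next(_ids), next(_ids)
            trans += [(s, None, s1), (s, None, a), (a1, None, s1), (a1, None, a)]
            stack.append((s, a))
        else:                                # literal input symbol
            s, a = next(_ids), next(_ids)
            trans.append((s, ch, a)); stack.append((s, a))
    start, accept = stack.pop()
    return start, accept, trans

# 'a(b|c)*' in postfix form:
print(nfa('abc|*.'))
```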

Relevance: 100.00%

Abstract:

To create hydrologically sustainable wetlands, the water use requirements of target habitats must be known. Extensive literature reviews highlighted a dearth of water-use data for large reedbed and wet woodland habitats, and in response field experiments were established. Field experiments to measure the water use rates of large reedbeds [ET(Reed)] were completed at three sites within the UK. Reference Crop Evapotranspiration [ETo] was calculated and mean monthly crop coefficients [Kc(Reed)] were developed. Kc(Reed) was less than 1 throughout the growing season (March to September), ranging from 0.22 in March to a peak of 0.98 in June. The developed coefficients compare favourably with published data from other large reedbed systems and support the premise that the water use of large reedbeds is lower than that of small/fringe reedbeds. A methodology for determining water use rates from wet woodland habitats (UK NVC Code: W6) is presented, in addition to provisional ET(W6) rates for two sites in the UK. Reference Crop Evapotranspiration [ETo] data were used to develop Kc(W6) values, which ranged between 0.89 (LV Lysimeter 1) and 1.64 (CH Lysimeter 2) for the period March to September. The data are comparable with relevant published data and show that the water use rates of wet woodland are higher than those of most other wetland habitats. Initial observations suggest that water use is related to the habitat's establishment phase and the age and size of the canopy tree species. A theoretical case study presents crop coefficients associated with wetland habitats and provides an example water budget for the creation of a wetland comprising a mosaic of wetland habitats. The case study shows the critical role that the water use of wetland habitats plays within a water budget.
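The crop-coefficient method reported here reduces to ET(habitat) = Kc x ETo, so a habitat's monthly water demand follows directly once Kc and ETo are known. A worked sketch follows, using the abstract's March (0.22) and June peak (0.98) values for Kc(Reed); the intermediate-month Kc values, the monthly ETo figures and the reedbed area are all assumed for illustration.

```python
# Worked example of the Kc approach: habitat water use = Kc * ETo.
# Only the March and June Kc values come from the abstract; everything
# else is an illustrative assumption.
months  = ["Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep"]
kc_reed = [0.22, 0.45, 0.75, 0.98, 0.95, 0.80, 0.55]   # interior values assumed
eto_mm  = [45, 70, 95, 110, 115, 95, 60]               # assumed monthly ETo, mm

area_m2 = 50_000                                        # assumed 5 ha reedbed
for m, kc, eto in zip(months, kc_reed, eto_mm):
    et_mm = kc * eto                                    # habitat ET, mm/month
    volume_m3 = et_mm / 1000 * area_m2                  # 1 mm = 1 litre per m^2
    print(f"{m}: ET(Reed) = {et_mm:5.1f} mm -> {volume_m3:8.0f} m^3")
```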

Relevance: 100.00%

Abstract:

Networking encompasses a variety of tasks related to the communication of information on networks; it has a substantial economic and societal impact on a broad range of areas including transportation systems, wired and wireless communications and a range of Internet applications. As transportation and communication networks become increasingly complex, the ever-increasing demand for congestion control, higher traffic capacity, quality of service, robustness and reduced energy consumption requires new tools and methods to meet these conflicting requirements. The new methodology should serve to give a better understanding of the properties of networking systems at the macroscopic level, as well as to support the development of new principled optimization and management algorithms at the microscopic level. Methods of statistical physics seem best placed to provide new approaches, as they have been developed specifically to deal with nonlinear large-scale systems. This review aims to present an overview of tools and methods that have been developed within the statistical physics community and that can be readily applied to address emerging problems in networking. These include diffusion processes, methods from disordered systems and polymer physics, and probabilistic inference, which have direct relevance to network routing, file and frequency distribution, the exploration of network structures and vulnerability, and various other practical networking applications. © 2013 IOP Publishing Ltd.
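As a small taste of the tools reviewed, the sketch below runs a diffusion (random-walk) process on a toy network and checks the textbook result that its stationary distribution is proportional to node degree, one of the basic statistical-physics handles on network exploration. The topology is an arbitrary assumption.

```python
# Diffusion on a small undirected graph: iterating the random-walk
# transition matrix converges to a degree-proportional distribution.
import numpy as np

A = np.array([[0, 1, 1, 0, 0],        # adjacency matrix of a toy network
              [1, 0, 1, 1, 0],
              [1, 1, 0, 1, 1],
              [0, 1, 1, 0, 1],
              [0, 0, 1, 1, 0]], dtype=float)
P = A / A.sum(axis=1, keepdims=True)   # random-walk transition matrix

pi = np.ones(5) / 5                    # start from a uniform distribution
for _ in range(200):                   # iterate the diffusion to convergence
    pi = pi @ P

# stationary probability of node i is proportional to degree(i)
print(np.allclose(pi, A.sum(1) / A.sum()))   # True
```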

Relevance: 100.00%

Abstract:

Protection of PMSG-based wind power generation systems is presented in this paper. For large-scale systems, a voltage-source converter rectifier is included. Protection circuits for this topology are studied, with simulation results for permanent cable fault conditions. These electrical protection methods all work by dumping the surplus energy that results from a disrupted power delivery path. Pitch control of large-scale wind turbines is considered for effectively reducing rotor shaft overspeed. Detailed analysis and calculation of the damping power and resistances are presented. Simulation results covering fault overcurrent, DC-link overvoltage and wind turbine overspeed are shown to illustrate the system responses under the different protection schemes and to compare their applicability and effectiveness.
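The sizing logic for a DC-link dump resistor can be shown with a back-of-envelope calculation: during a permanent cable fault the chopper resistor must absorb the full rated power at the overvoltage trip level. All ratings below are hypothetical, not values from the paper.

```python
# Back-of-envelope sketch of dump (braking) resistor sizing for DC-link
# overvoltage protection; all ratings are hypothetical assumptions.
p_rated = 2.0e6        # turbine rated power, W (assumed)
v_dc    = 1.2e3        # nominal DC-link voltage, V (assumed)
v_max   = 1.32e3       # 110% overvoltage trip level, V (assumed)

# the resistor must dissipate at least rated power at the trip voltage
# to hold the DC link down while power delivery is disrupted
r_dump = v_max**2 / p_rated
print(f"dump resistance <= {r_dump:.3f} ohm, "
      f"dissipating {v_max**2 / r_dump / 1e6:.1f} MW at {v_max:.0f} V")
```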

Relevance: 60.00%

Abstract:

This paper determines the capability of two photogrammetric systems in terms of their measurement uncertainty in an industrial context. The first system – V-STARS inca3 from Geodetic Systems Inc. – is a commercially available measurement solution. The second system comprises an off-the-shelf Nikon D700 digital camera fitted with a 28 mm Nikkor lens and the research-based Vision Measurement Software (VMS). The uncertainty of these two systems is determined with reference to a calibrated constellation of points measured by a Leica AT401 laser tracker. The calibrated points have an average associated standard uncertainty of 12.4 μm, spanning a maximum distance of approximately 14.5 m. Against this reference, V-STARS inca3 achieved an estimated standard uncertainty of 43.1 μm, thus outperforming its manufacturer's specification; the D700/VMS combination achieved a standard uncertainty of 187 μm.
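The comparison step amounts to computing the spread of the photogrammetric coordinates about the laser-tracker reference. A minimal sketch with synthetic data follows, assuming registration between the two coordinate frames has already been done.

```python
# Sketch of the comparison: estimate a system's standard uncertainty as the
# RMS deviation of its points from the reference coordinates. Synthetic data.
import numpy as np

rng = np.random.default_rng(1)
ref = rng.uniform(0, 14.5, size=(50, 3))                 # calibrated points, m
measured = ref + rng.normal(0, 43e-6, size=ref.shape)    # 43 um noise (assumed)

residuals = measured - ref                # per-coordinate deviations
u = np.sqrt(np.mean(residuals**2))        # RMS ~ standard uncertainty
print(f"estimated standard uncertainty: {u * 1e6:.1f} um")
```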

Relevance: 50.00%

Abstract:

This thesis introduces and develops a novel real-time predictive maintenance system that estimates machine system parameters using the motion current signature. Recently, motion current signature analysis has been proposed as an alternative to the use of sensors for monitoring internal faults of a motor. A maintenance system based upon analysis of the motion current signature avoids the need to implement and maintain expensive motion-sensing technology. By developing nonlinear dynamical analysis of the motion current signature, the research described in this thesis implements a novel real-time predictive maintenance system for current and future manufacturing machine systems. A crucial concept underpinning this project is that the motion current signature contains information relating to the machine system parameters, and that this information can be extracted using nonlinear mapping techniques such as neural networks. Towards this end, a proof-of-concept procedure is performed which substantiates this concept. A simulation model, TuneLearn, is developed to generate the large amount of training data required by the neural network approach. Statistical validation and verification of the model are performed to ascertain confidence in the simulated motion current signature. The validation experiment concludes that, although the simulation model generates a good macro-dynamical mapping of the motion current signature, it fails to accurately map the micro-dynamical structure, owing to a lack of knowledge regarding the performance of higher-order and nonlinear factors such as backlash and compliance. The failure of the simulation model to capture the micro-dynamical structure suggests the presence of nonlinearity in the motion current signature. This motivated surrogate data testing for nonlinearity in the motion current signature. The results confirm the presence of nonlinearity in the motion current signature, thereby motivating the use of nonlinear techniques for further analysis. The outcomes show that nonlinear noise reduction combined with a linear reverse algorithm offers precise machine system parameter estimation from the motion current signature for the implementation of the real-time predictive maintenance system. Finally, a linear reverse algorithm, BJEST, is developed and applied to the motion current signature to estimate the machine system parameters.
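The surrogate data test mentioned here can be sketched compactly: Fourier-transform surrogates preserve a signal's power spectrum while destroying any nonlinear structure, so a nonlinear statistic that falls outside the surrogate distribution is evidence of nonlinearity. In the sketch below a logistic-map series stands in for the motion current signature; the statistic and signal are standard textbook choices, not the thesis's exact procedure.

```python
# Surrogate-data test for nonlinearity: phase-randomised (FT) surrogates
# share the signal's spectrum; a nonlinear statistic outside their spread
# suggests nonlinearity. The test series here is synthetic.
import numpy as np

rng = np.random.default_rng(2)

def ft_surrogate(x, rng):
    """Randomise Fourier phases, keep amplitudes (preserves the spectrum)."""
    X = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, len(X))
    phases[0] = 0                        # keep the DC component real
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=len(x))

def trev(x, lag=1):
    """Time-reversal asymmetry: ~0 in expectation for linear Gaussian data."""
    d = x[lag:] - x[:-lag]
    return np.mean(d**3) / np.mean(d**2) ** 1.5

x = np.empty(2048); x[0] = 0.3           # logistic map: a nonlinear series
for t in range(2047):
    x[t + 1] = 4 * x[t] * (1 - x[t])

stat = trev(x)
surr = [trev(ft_surrogate(x, rng)) for _ in range(99)]
rank = sum(s < stat for s in surr)
print(f"statistic rank among 100 values: {rank} (0 or 99 => reject linearity)")
```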

Relevance: 50.00%

Abstract:

WiMAX has been introduced as a competitive alternative for metropolitan broadband wireless access technologies. It is connection-oriented and can provide very high data rates, large service coverage and flexible quality of service (QoS). Owing to the large number of connections and the flexible QoS supported by WiMAX, uplink access in WiMAX networks is very challenging, since the medium access control (MAC) protocol must efficiently manage the bandwidth and the related channel allocations. In this paper, we propose and investigate a cost-effective WiMAX bandwidth management scheme, named the WiMAX partial sharing scheme (WPSS), in order to provide good QoS while achieving better bandwidth utilization and network throughput. The proposed bandwidth management scheme is compared with a simple but inefficient scheme, named the WiMAX complete sharing scheme (WCPS). A maximum entropy (ME) based analytical model (MEAM) is proposed for the performance evaluation of the two bandwidth management schemes. The reason for using MEAM is that it can efficiently model a large-scale system, in which the number of stations or connections is generally very high, whereas traditional simulation and analytical approaches (e.g., Markov models) cannot perform well owing to their high computational complexity. We model the bandwidth management scheme as a queuing network model (QNM) consisting of interacting multiclass queues for the different service classes. Closed-form expressions for the state and blocking probability distributions are derived for these schemes. Simulation results verify the MEAM numerical results and show that WPSS can significantly improve the network's performance compared with WCPS.
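For intuition about blocking in a completely shared bandwidth pool, the classical Erlang B recursion suffices. This is a textbook illustration of blocking probability, not the paper's maximum-entropy analytical model, and the traffic figures are assumed.

```python
# Erlang B blocking probability for a fully pooled set of bandwidth units
# (an M/M/m/m loss system), computed with the numerically stable recursion.
def erlang_b(servers: int, erlangs: float) -> float:
    b = 1.0                       # B(0) = 1
    for m in range(1, servers + 1):
        b = erlangs * b / (m + erlangs * b)
    return b

# 40 pooled bandwidth units offered 32 Erlangs of traffic (assumed figures):
print(f"blocking probability = {erlang_b(40, 32.0):.4f}")
```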

Relevance: 40.00%

Abstract:

The amplification of demand variation up a supply chain, widely termed 'the Bullwhip Effect', is disruptive, costly and something that supply chain management generally seeks to minimise. It was originally attributed to poor system design: deficiencies in policies and organisation structure, together with delays in material and information flow, all lead to sub-optimal reorder point calculation. It has since been attributed to exogenous random factors such as uncertainties in demand, supply and distribution lead time, but these causes are not exclusive, as academic and operational studies have since shown that orders and/or inventories can exhibit significant variability even when customer demand and lead time are deterministic. This increase in the range of possible causes of dynamic behaviour indicates that our understanding of the phenomenon is far from complete. One possible, yet previously unexplored, factor that may influence dynamic behaviour in supply chains is the application and operation of supply chain performance measures. Organisations monitoring and responding to their adopted key performance metrics will make operational changes, and this action may influence the level of dynamics within the supply chain, possibly degrading the performance of the very system they were intended to measure. To explore this, a plausible abstraction of the operational responses to the Supply Chain Council's SCOR® (Supply Chain Operations Reference) model was incorporated into a classic Beer Game distribution representation, using the dynamic discrete-event simulation software Simul8. During the simulation the five SCOR supply chain performance attributes (Reliability, Responsiveness, Flexibility, Cost and Utilisation) were continuously monitored and compared to established targets. Operational adjustments to the reorder point, transportation modes and production capacity (where appropriate) were made for three independent supply chain roles, and the degree of dynamic behaviour in the supply chain was measured using the ratio of the standard deviation of upstream demand to that of downstream demand. Factors employed to build the detailed model include variable retail demand, order transmission, transportation delays, production delays, capacity constraints, demand multipliers and demand-averaging periods. Five dimensions of supply chain performance were monitored independently in three autonomous supply chain roles and operational settings adjusted accordingly. The uniqueness of this research stems from the application of the five SCOR performance attributes with modelled operational responses in a dynamic discrete-event simulation model. This project makes its primary contribution to knowledge by measuring the impact, on supply chain dynamics, of applying a representative performance measurement system.
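The amplification mechanism itself is easy to reproduce in a few lines: a single echelon that forecasts with a moving average and orders up to a lead-time-scaled target produces orders more variable than the demand it faces, quantified by the same standard-deviation ratio used in this research. All parameters below are illustrative assumptions, and this single-echelon toy is far simpler than the thesis's SCOR-driven Simul8 model.

```python
# Minimal bullwhip sketch: moving-average forecast + order-up-to policy
# amplifies demand variance; bullwhip ratio = std(orders) / std(demand).
import numpy as np

rng = np.random.default_rng(3)
T, lead, window = 2000, 2, 4
demand = np.clip(rng.normal(100, 10, T), 0, None)   # retail demand (assumed)

inventory = 400.0
orders = np.zeros(T)
pipeline = [0.0] * lead                   # orders in transit (delivery delay)
for t in range(T):
    inventory += pipeline.pop(0)          # receive delayed deliveries
    inventory -= demand[t]                # satisfy demand (negative = backlog)
    forecast = demand[max(0, t - window + 1):t + 1].mean()
    target = forecast * (lead + 1)        # order-up-to level
    on_order = sum(pipeline)
    orders[t] = max(0.0, target - inventory - on_order)
    pipeline.append(orders[t])

print(f"bullwhip ratio: {orders[window:].std() / demand[window:].std():.2f}")
```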

Relevance: 40.00%

Abstract:

N-tuple recognition systems (RAMnets) are normally modelled using a small number of input lines to each RAM, because the address space grows exponentially with the number of inputs: it is impossible to implement an arbitrarily large address space as physical memory. But given modest amounts of training data, only correspondingly modest numbers of bits will be set in that memory. Hash arrays can therefore be used instead of a direct implementation of the required address space. This paper describes some exploratory experiments using the hash array technique to investigate the performance of RAMnets with very large numbers of input lines. An argument is presented which concludes that performance should peak at a relatively small n-tuple size, but the experiments carried out so far contradict this. Further experiments are needed to confirm this unexpected result.
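The hash-array trick is straightforward to sketch: rather than allocating 2^n RAM cells per tuple, store only the addresses actually written during training in a hash table (a Python set here), which stays small for modest training data even when n is large. The tuple wiring and data below are toy assumptions, not the paper's experimental setup.

```python
# Sparse RAMnet sketch: hash sets stand in for exponentially large RAMs.
import random

def make_tuples(n_bits, n_tuples, tuple_size, seed=0):
    r = random.Random(seed)
    # each tuple is a random selection of input lines (the "wiring")
    return [r.sample(range(n_bits), tuple_size) for _ in range(n_tuples)]

def addresses(pattern, tuples):
    # each tuple of input lines yields one address into its (virtual) RAM
    return [tuple(pattern[i] for i in idx) for idx in tuples]

def train(patterns, tuples):
    memory = [set() for _ in tuples]      # sparse stand-in for 2^n cells
    for p in patterns:
        for mem, addr in zip(memory, addresses(p, tuples)):
            mem.add(addr)
    return memory

def score(pattern, memory, tuples):
    # count how many tuple RAMs recognise this pattern's addresses
    return sum(addr in mem
               for mem, addr in zip(memory, addresses(pattern, tuples)))

def rand_pattern(seed, n_bits):
    r = random.Random(seed)
    return [r.randint(0, 1) for _ in range(n_bits)]

n_bits = 64
tuples = make_tuples(n_bits, n_tuples=16, tuple_size=12)  # large n-tuples
train_set = [rand_pattern(k, n_bits) for k in range(20)]
mem = train(train_set, tuples)
print(score(train_set[0], mem, tuples), "/", len(tuples))  # 16/16 on a trainee
```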