37 results for implementations
in Aston University Research Archive
Abstract:
The aim of this thesis is to present numerical investigations of the polarisation mode dispersion (PMD) effect. Outstanding issues on the side of the numerical implementations of PMD are resolved and the proposed methods are further optimised for computational efficiency and physical accuracy. Methods for the mitigation of the PMD effect are taken into account, and simulations of transmission systems with added PMD are presented. The basic outline of the work focusing on PMD can be divided as follows. At first the widely used coarse-step method for simulating the PMD phenomenon, as well as a method derived from the Manakov-PMD equation, are implemented and investigated separately through the distribution of a state of polarisation on the Poincaré sphere and the evolution of the dispersion of a signal. Next these two methods are statistically examined and compared to well-known analytical models of the probability distribution function (PDF) and the autocorrelation function (ACF) of the PMD phenomenon. Important optimisations are achieved for each of the aforementioned implementations at the computational level. In addition, the ACF of the coarse-step method is considered separately, based on the result which indicates that the numerically produced ACF exaggerates the value of the correlation between different frequencies. Moreover, the mitigation of the PMD phenomenon is considered, in the form of numerically implementing low-PMD spun fibres. Finally, all the above are combined in simulations that demonstrate the impact of PMD on the quality factor (Q-factor) of different transmission systems. For this, a numerical solver based on the coupled nonlinear Schrödinger equation is created, which is also tested against the most important transmission impairments in the early chapters of this thesis.
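As a sanity check of the statistics behind the coarse-step picture, the sketch below (an illustrative assumption, not the solver developed in this thesis) splits the fibre into short sections, each adding a fixed differential group delay (DGD) in a random direction on the Poincaré sphere; the accumulated PMD vector is then a 3-D random walk whose magnitude follows the expected Maxwellian PDF.

```python
import numpy as np

rng = np.random.default_rng(0)

def coarse_step_dgd(n_sections=100, dtau_section=0.1, n_fibres=20000):
    """Toy coarse-step statistics: each section adds a DGD contribution of
    fixed magnitude in an isotropically random direction, so the accumulated
    PMD vector performs a 3-D random walk (units are arbitrary)."""
    v = rng.normal(size=(n_fibres, n_sections, 3))
    v /= np.linalg.norm(v, axis=-1, keepdims=True)   # random unit vectors
    tau = dtau_section * v.sum(axis=1)               # accumulated PMD vector
    return np.linalg.norm(tau, axis=-1)              # DGD = |tau|

dgd = coarse_step_dgd()
# Maxwellian limit: <DGD> = dtau_section * sqrt(8 * n_sections / (3 * pi))
print(dgd.mean(), 0.1 * np.sqrt(8 * 100 / (3 * np.pi)))
```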
Abstract:
This professional doctoral research reports on the relationship between Enterprise Systems, specifically Enterprise Resource Planning (ERP) systems, and enterprise structures. It offers insights and guidance to practitioners on factors to consider when implementing ERP systems in organisations operating in modern enterprise structures. It reports on reflective ethnographic action research conducted in a number of companies from a diverse range of industries, covering supply chains for both goods and services. The primary contribution is in highlighting areas in which clients, practitioners and ERP software vendors can bring a greater awareness of internet-era enterprise structures and business requirements into the ERP arena. The concepts and insights were explored in a focus group composed of practitioners from the enterprise systems implementation and consulting community, which revealed limitations and constraints in the implementation of enterprise systems. It also showed that current systems do not have the full capabilities required to support modern-era enterprise structures in use, as required by practitioners and decision makers.
Abstract:
We describe a novel and potentially important tool for candidate subunit vaccine selection through in silico reverse vaccinology. A set of Bayesian networks able to make individual predictions for specific subcellular locations is implemented in three pipelines with different architectures: a parallel implementation with a confidence level-based decision engine, and two serial implementations with a hierarchical decision structure, one initially rooted by prediction between membrane types and another rooted by soluble versus membrane prediction. The parallel pipeline outperformed the serial pipeline, but took twice as long to execute. The soluble-rooted serial pipeline outperformed the membrane-rooted predictor. Assessment using genomic test sets was more equivocal: many more predictions were made by the parallel pipeline, yet the serial pipeline identified 22 more of the 74 proteins of known location.
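As a rough illustration of the two architectures (the predictor functions below are stand-ins; the real pipelines use trained Bayesian networks for each subcellular location), the parallel pipeline scores every location at once and a confidence threshold decides, while a serial pipeline descends a hierarchy so later predictors only see the branch chosen earlier:

```python
def parallel_pipeline(protein, predictors, threshold=0.8):
    """Confidence-level decision engine: run every location-specific
    predictor and keep the best score if it clears the threshold."""
    scores = {loc: p(protein) for loc, p in predictors.items()}
    best_loc, best_score = max(scores.items(), key=lambda kv: kv[1])
    return best_loc if best_score >= threshold else "undecided"

def serial_pipeline(protein, is_membrane, membrane_type, soluble_location):
    """Hierarchical decision structure rooted at a soluble-versus-membrane
    split; only the chosen branch is evaluated further."""
    return membrane_type(protein) if is_membrane(protein) else soluble_location(protein)

# toy stand-ins for trained predictors
predictors = {"cytoplasm": lambda seq: 0.9, "outer membrane": lambda seq: 0.4}
print(parallel_pipeline("EXAMPLESEQ", predictors))
print(serial_pipeline("EXAMPLESEQ",
                      is_membrane=lambda seq: False,
                      membrane_type=lambda seq: "inner membrane",
                      soluble_location=lambda seq: "cytoplasm"))
```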
Abstract:
A quantitative comparison of up to 40 Gb/s low-cost orthogonal frequency-division multiplexing access (OFDMA) passive optical network (PON) implementations for both upstream (US) and downstream (DS) directions is presented, based on different modulation and detection techniques. © 2012 IEEE.
Abstract:
The Bayesian analysis of neural networks is difficult because the prior over functions has a complex form, leading to implementations that either make approximations or use Monte Carlo integration techniques. In this paper I investigate the use of Gaussian process priors over functions, which permit the predictive Bayesian analysis to be carried out exactly using matrix operations. The method has been tested on two challenging problems and has produced excellent results.
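The exact predictive analysis via matrix operations amounts to the standard Gaussian process regression equations. A minimal sketch, assuming a squared-exponential covariance and Gaussian observation noise (choices made here purely for illustration):

```python
import numpy as np

def sq_exp(x1, x2, length=1.0, amp=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d = x1[:, None] - x2[None, :]
    return amp**2 * np.exp(-0.5 * (d / length) ** 2)

def gp_predict(x_train, y_train, x_test, noise=0.1):
    """Exact GP predictive mean and covariance via Cholesky factorisation."""
    K = sq_exp(x_train, x_train) + noise**2 * np.eye(len(x_train))
    Ks = sq_exp(x_train, x_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    cov = sq_exp(x_test, x_test) - v.T @ v
    return mean, cov

x = np.linspace(0, 5, 20)
y = np.sin(x) + 0.1 * np.random.default_rng(1).normal(size=x.size)
mu, cov = gp_predict(x, y, np.linspace(0, 5, 100))
```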
Abstract:
Purpose - This conceptual paper probes the contemporary view of lean and illustrates that, despite its discernible benefits, the implementation record suffers because the prevailing opinion fails to recognise that an aspiring lean enterprise will only succeed if it views lean as a philosophy rather than another strategy. Design/methodology/approach - The paper is based on a thorough literature search concerning the success and failure of lean implementations, and acts as a precursor to one of the authors employing a combination of methodologies, namely interviewing, a survey questionnaire and participant observation, in attempting to prove his PhD hypothesis. Findings - Evidently, a cocktail of factors is needed for lean success: not only is it necessary to implement most of the technical tools, but an organisation's culture needs transforming too. Furthermore, the alterations need to be implemented throughout an organisation's value chain. Lean has major strategic significance, yet its implementation procedure, its HRM implications and its general approach to the supplier base, coupled with the widespread conviction of viewing lean as a set of tactics rather than embracing it as a philosophy, contribute to the relatively low number of successful lean initiatives. Originality/value - The paper should prove invaluable to lean practitioners, through its summation of the intricacies of lean enterprise success, and to academic researchers, by focusing their attention on the necessary cultural implications. © Emerald Group Publishing Limited.
Abstract:
Original paper: European Journal of Information Systems (2001) 10, 135–146; doi:10.1057/palgrave.ejis.3000394. Organisational learning—a critical systems thinking discipline. P Panagiotidis (Deloitte and Touche, Athens, Greece) and J S Edwards (Aston Business School, Aston University, Aston Triangle, Birmingham, B4 7ET, UK). Correspondence: Dr J S Edwards, Aston Business School, Aston University, Aston Triangle, Birmingham, B4 7ET, UK. E-mail: j.s.edwards@aston.ac.uk
Petros Panagiotidis is Manager responsible for the Process and Systems Integrity Services of Deloitte and Touche in Athens, Greece. He has a BSc in Business Administration and an MSc in Management Information Systems from Western International University, Phoenix, Arizona, USA; an MSc in Business Systems Analysis and Design from City University, London, UK; and a PhD from Aston University, Birmingham, UK, in Business Systems Analysis and Design. His principal interests now are in the ERP/DSS field, where he serves as project leader and project risk management leader in the implementation of SAP and JD Edwards/Cognos at various major clients in the telecommunications and manufacturing sectors. In addition, he is responsible for the development and application of knowledge management systems and activity-based costing systems.
John S Edwards is Senior Lecturer in Operational Research and Systems at Aston Business School, Birmingham, UK. He holds MA and PhD degrees (in mathematics and operational research respectively) from Cambridge University. His principal research interests are in knowledge management and decision support, especially methods and processes for system development. He has written more than 30 research papers on these topics, and two books, Building Knowledge-based Systems and Decision Making with Computers, both published by Pitman. Current research work includes the effect of scale of operations on knowledge management, interfacing expert systems with simulation models, process modelling in law and legal services, and a study of the use of artificial intelligence techniques in management accounting.
This paper deals with the application of critical systems thinking in the domain of organisational learning and knowledge management. Its viewpoint is that deep organisational learning only takes place when the business systems' stakeholders reflect on their actions and thus inquire about their purpose(s) in relation to the business system and the other stakeholders they perceive to exist. This is done by reflecting both on the sources of motivation and/or deception that are contained in their purpose, and also on the sources of collective motivation and/or deception that are contained in the business system's purpose. The development of an organisational information system that captures, manages and institutionalises meaningful information—a knowledge management system—cannot be separated from organisational learning practices, since it should be the result of these very practices. Although Senge's five disciplines provide a useful starting-point in looking at organisational learning, we argue for a critical systems approach, instead of an uncritical Systems Dynamics one that concentrates only on the organisational learning practices.
We proceed to outline a methodology called Business Systems Purpose Analysis (BSPA) that offers a participatory structure for team and organisational learning, upon which the stakeholders can take legitimate action that is based on the force of the better argument. In addition, the organisational learning process in BSPA leads to the development of an intrinsically motivated organisational information system that allows for the institutionalisation of the learning process itself in the form of an organisational knowledge management system. This could be a specific application, or something as wide-ranging as an Enterprise Resource Planning (ERP) implementation. Examples of the use of BSPA in two ERP implementations are presented.
Abstract:
Purpose – The purpose of the paper is to use a case study setting involving the implementation of an enterprise resource planning (ERP) system to expose and analyze the conflicts in the characterizations of the post-bureaucratic organisation (PBO) in the literature. ERP implementations are often accompanied by increasing levels of stress in organizations that place pressures on organizational relationships and structures. Additionally, ERPs are regarded as introducing their own techno-logic of centralization, standardization and formalization that provides an apparent contrast to the exhortations about employee empowerment. Design/methodology/approach – A case study of ERP implementation in a medium-sized entity is presented. The paper explores aspects of ERP and PBO from the context of postmodern organization theory. Findings – Some concerns about PBO identified in the literature are reflected in the case situation. For example, some employees show a commitment to give up private time and to work flexibly. The paper also provides evidence of the way the management team replace their reliance on a key individual knowledge worker with reliance on an ERP system and external vendor support. Paradoxically, trust in that same knowledge worker and between core users of the system is essential to enable the implementation of the system. Originality/value – This paper adds empirical insight to a predominantly theoretical literature. The case evidence indicates some conflicting implications in the concurrent adoption of PBO and ERP.
Abstract:
The inclusion of high-level scripting functionality in state-of-the-art rendering APIs indicates a movement toward data-driven methodologies for structuring next generation rendering pipelines. A similar theme can be seen in the use of composition languages to deploy component software using selection and configuration of collaborating component implementations. In this paper we introduce the Fluid framework, which places particular emphasis on the use of high-level data manipulations in order to develop component based software that is flexible, extensible, and expressive. We introduce a data-driven, object oriented programming methodology to component based software development, and demonstrate how a rendering system with a similar focus on abstract manipulations can be incorporated, in order to develop a visualization application for geospatial data. In particular we describe a novel SAS script integration layer that provides access to vertex and fragment programs, producing a very controllable, responsive rendering system. The proposed system is very similar to developments speculatively planned for DirectX 10, but uses open standards and has cross platform applicability. © The Eurographics Association 2007.
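The composition idea can be sketched generically; the registry and description format below are illustrative assumptions rather than the Fluid API, whose details the abstract does not give. A declarative, data-driven description selects and configures collaborating component implementations at load time:

```python
# Illustrative component registry: each role maps implementation names
# to factories that build a configured component.
REGISTRY = {
    "renderer": {
        "raster": lambda cfg: {"kind": "raster renderer", **cfg},
        "ray":    lambda cfg: {"kind": "ray-traced renderer", **cfg},
    },
    "datasource": {
        "geotiff": lambda cfg: {"kind": "GeoTIFF source", **cfg},
    },
}

def compose(description):
    """Data-driven composition: pick and configure an implementation for
    each role from a declarative description."""
    return {role: REGISTRY[role][spec["impl"]](spec.get("config", {}))
            for role, spec in description.items()}

app = compose({"renderer": {"impl": "raster", "config": {"samples": 4}},
               "datasource": {"impl": "geotiff"}})
print(app)
```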
Abstract:
Very large spatially-referenced datasets, for example those derived from satellite-based sensors which sample across the globe or from large monitoring networks of individual sensors, are becoming increasingly common and more widely available for use in environmental decision making. In large or dense sensor networks, huge quantities of data can be collected over small time periods. In many applications the generation of maps, or predictions at specific locations, from the data in (near) real-time is crucial. Geostatistical operations such as interpolation are vital in this map-generation process and, in emergency situations, the resulting predictions need to be available almost instantly, so that decision makers can make informed decisions and define risk and evacuation zones. It is also helpful when analysing data in less time-critical applications, for example when interacting directly with the data for exploratory analysis, that the algorithms are responsive within a reasonable time frame. Performing geostatistical analysis on such large spatial datasets can present a number of problems, particularly in the case where maximum likelihood methods are used. Although the storage requirements only scale linearly with the number of observations in the dataset, the computational complexity in terms of memory and speed scales quadratically and cubically respectively. Most modern commodity hardware has at least two processor cores, if not more. Other mechanisms for allowing parallel computation, such as Grid-based systems, are also becoming increasingly commonly available. However, currently there seems to be little interest in exploiting this extra processing power within the context of geostatistics. In this paper we review the existing parallel approaches for geostatistics. By recognising that different natural parallelisms exist and can be exploited depending on whether the dataset is sparsely or densely sampled with respect to the range of variation, we introduce two contrasting novel implementations of parallel algorithms based on approximating the data likelihood, extending the methods of Vecchia [1988] and Tresp [2000]. Using parallel maximum likelihood variogram estimation and parallel prediction algorithms we show that computational time can be significantly reduced. We demonstrate this with both sparsely sampled data and densely sampled data on a variety of architectures, ranging from the common dual core processor, found in many modern desktop computers, to large multi-node supercomputers. To highlight the strengths and weaknesses of the different methods we employ synthetic data sets and go on to show how the methods allow maximum likelihood based inference on the exhaustive Walker Lake data set.
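To indicate how such likelihood approximations decompose the problem, the sketch below implements a generic Vecchia-style composite log-likelihood (assuming an exponential covariance model; this is not the thesis implementation): each observation is conditioned only on a few nearby, previously ordered points, so every term is independent and can be evaluated in parallel.

```python
import numpy as np

def exp_cov(d, sill=1.0, rng_=1.0, nugget=1e-6):
    """Exponential covariance model, a common geostatistical choice."""
    return sill * np.exp(-d / rng_) + nugget * (d == 0)

def vecchia_loglik(coords, z, m=10, **cov_kw):
    """Vecchia-style approximation: condition each observation on its m
    nearest previously-ordered neighbours; terms are mutually independent,
    so the loop below could be distributed across processor cores."""
    n = len(z)
    ll = 0.0
    for i in range(n):
        past = np.arange(i)
        if len(past):
            d_past = np.linalg.norm(coords[past] - coords[i], axis=1)
            nbrs = past[np.argsort(d_past)[:m]]
        else:
            nbrs = past
        idx = np.append(nbrs, i)
        d = np.linalg.norm(coords[idx, None] - coords[None, idx], axis=-1)
        C = exp_cov(d, **cov_kw)
        if len(nbrs):
            c, Cnn = C[:-1, -1], C[:-1, :-1]
            w = np.linalg.solve(Cnn, c)
            mu, var = w @ z[nbrs], C[-1, -1] - c @ w
        else:
            mu, var = 0.0, C[-1, -1]
        ll += -0.5 * (np.log(2 * np.pi * var) + (z[i] - mu) ** 2 / var)
    return ll

coords = np.random.default_rng(2).uniform(size=(500, 2))
z = np.sin(4 * coords[:, 0]) + 0.1 * np.random.default_rng(3).normal(size=500)
print(vecchia_loglik(coords, z, m=10, sill=1.0, rng_=0.3))
```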
Abstract:
The future broadband information network will undoubtedly integrate the mobility and flexibility of wireless access systems with the huge bandwidth capacity of photonics solutions to enable a communication system capable of handling the anticipated demand for interactive services. Towards wide coverage and low-cost implementations of such broadband wireless photonics communication networks, various aspects of the enabling technologies are continuing to generate intense research interest. Among the core technologies, the optical generation and distribution of radio frequency signals over fibres, and the fibre optic signal processing of optical and radio frequency signals, have been the subjects of study in this thesis. Based on the intrinsic properties of single-mode optical fibres, and in conjunction with the concepts of optical fibre delay line filters and fibre Bragg gratings, a number of novel fibre-based devices, potentially suitable for applications in the future wireless photonics communication systems, have been realised. Special single-mode fibres, namely the high birefringence (Hi-Bi) fibre and the Er/Yb doped fibre, have been employed so as to exploit their merits to achieve practical and cost-effective all-fibre architectures. A number of fibre-based complex signal processors for optical and radio frequencies using novel Hi-Bi fibre delay line filter architectures have been illustrated. In particular, operations such as multichannel flattop bandpass filtering, simultaneous complementary outputs and bidirectional nonreciprocal wavelength interleaving have been demonstrated. The proposed configurations featured greatly reduced environmental sensitivity typical of coherent fibre delay line filter schemes, reconfigurable transfer functions, negligible chromatic dispersions, and ease of implementation, not easily achievable based on other techniques. A number of unique fibre grating devices for signal filtering and fibre laser applications have been realised. The concept of the superimposed fibre Bragg gratings has been extended to non-uniform grating structures and into Hi-Bi fibres to achieve highly useful grating devices such as an overwritten phase-shifted fibre grating structure and widely/narrowly spaced polarization-discriminating filters that are not limited by the intrinsic fibre properties. In terms of fibre-based optical millimetre-wave transmitters, unique approaches based on fibre laser configurations have been proposed and demonstrated. The ability of the dual-mode distributed feedback (DFB) fibre lasers to generate high spectral purity, narrow linewidth heterodyne signals without complex feedback mechanisms has been illustrated. A novel co-located dual DFB fibre laser configuration, based on the proposed superimposed phase-shifted fibre grating structure, has been further realised with highly desired operation characteristics without the need for costly high frequency synthesizers and complex feedback controls. Lastly, a novel cavity mode condition monitoring and optimisation scheme for short-length, linear-cavity fibre lasers has been proposed and achieved. Based on the concept and simplicity of the superimposed fibre laser cavities structure, in conjunction with feedback controls, enhanced output performances from the fibre lasers have been achieved. The importance of such cavity mode assessment and feedback control for optimised fibre laser output performance has been illustrated.
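For orientation, the frequency response of the basic building block mentioned above, a tapped fibre delay-line filter, is straightforward to compute. The sketch below is a generic two-tap example with arbitrarily chosen tap weights and delay, not a model of the Hi-Bi architectures proposed in the thesis:

```python
import numpy as np

def delay_line_response(f, delays, weights):
    """Frequency response of a tapped delay-line filter:
    H(f) = sum_k w_k * exp(-j 2 pi f T_k)."""
    f = np.asarray(f, dtype=float)
    return sum(w * np.exp(-2j * np.pi * f * T) for w, T in zip(weights, delays))

f = np.linspace(0, 200e9, 2001)                       # 0-200 GHz
H = delay_line_response(f, delays=[0.0, 25e-12], weights=[0.5, 0.5])
# |H|^2 is periodic in frequency with a free spectral range of 1/(25 ps) = 40 GHz
```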
Abstract:
The aim of this work was to design and build a piece of equipment which can detect ferrous and non-ferrous objects in conveyed commodities, discriminate between them, and locate the object both along the belt and across the width of the belt. The magnetic induction mechanism was used as a means of achieving the objectives of this research. In order to choose the appropriate geometry and size of the induction field source, the field distributions of different source geometries and sizes were studied in detail. From these investigations it was found that the square loop geometry is the most appropriate as a field-generating source for the purpose of this project. The phenomenon of field distribution in the conductors was also investigated. An instrument was designed and built at the preliminary stages of the work based on a flux-gate magnetometer with the ability to detect only ferrous objects. The instrument was designed such that it could be used to detect ferrous objects in the coal conveyors of power stations. The advantages of employing this detector in the power industry over the present ferrous metal electromagnetic separators were also considered. The objectives of this project culminated in the design and construction of a ferrous and non-ferrous detector with the ability to discriminate between ferrous and non-ferrous metals and to locate the objects on the conveying system. An experimental study was carried out to test the performance of the equipment in the detection of ferrous and non-ferrous objects of a given size carried on the conveyor belt. The ability of the equipment to discriminate between the types of metals and to locate the object on the belt was also evaluated experimentally. The benefits which can be gained from industrial implementations of the equipment were considered. Further topics which may be investigated as an extension of this work are given.
Abstract:
The Fibre Distributed Data Interface (FDDI) represents the new generation of local area networks (LANs). These high speed LANs are capable of supporting up to 500 users over a 100 km distance. User traffic is expected to be as diverse as file transfers, packet voice and video. As the proliferation of FDDI LANs continues, the need to interconnect these LANs arises. FDDI LAN interconnection can be achieved in a variety of different ways. Some of the most commonly used today are public data networks, dial-up lines and private circuits. For applications that can potentially generate large quantities of traffic, such as an FDDI LAN, it is cost effective to use a private circuit leased from the public carrier. In order to send traffic from one LAN to another across the leased line, a routing algorithm is required. Much research has been done on the Bellman-Ford algorithm and many implementations of it exist in computer networks. However, due to its instability and problems with routing table loops, it is an unsatisfactory algorithm for interconnected FDDI LANs. A new algorithm, termed ISIS, which is being standardized by the ISO, provides a far better solution. ISIS will be implemented in many manufacturers' routing devices. In order to make the work as practical as possible, this algorithm will be used as the basis for all the new algorithms presented. The ISIS algorithm can be improved by exploiting information that is dropped by that algorithm during the calculation process. A new algorithm, called Down Stream Path Splits (DSPS), uses this information and requires only minor modification to some of the ISIS routing procedures. DSPS provides a higher network performance, with very little additional processing and storage requirements. A second algorithm, also based on the ISIS algorithm, generates a massive increase in network performance. This is achieved by selecting alternative paths through the network in times of heavy congestion. This algorithm may select the alternative path at either the originating node, or at any node along the path. It requires more processing and memory storage than DSPS, but generates a higher network power. The final algorithm combines the DSPS algorithm with the alternative path algorithm. This is the most flexible and powerful of the algorithms developed. However, it is somewhat complex and requires a fairly large storage area at each node. The performance of the new routing algorithms is tested in a comprehensive model of interconnected LANs. This model incorporates the transport through to physical layers and generates random topologies for routing algorithm performance comparisons. Using this model it is possible to determine which algorithm provides the best performance without introducing significant complexity and storage requirements.
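For context, ISIS is a link-state protocol, so each router runs a shortest-path-first computation over the flooded link-state database. The sketch below shows only that core calculation (plain Dijkstra over a toy topology); the DSPS and alternative-path extensions described above are not implemented here.

```python
import heapq

def shortest_path_first(topology, source):
    """Dijkstra SPF over a link-state database.
    topology: {node: {neighbour: link_cost, ...}, ...}"""
    dist, prev, seen = {source: 0}, {}, set()
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in seen:
            continue
        seen.add(u)
        for v, cost in topology.get(u, {}).items():
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return dist, prev

# toy topology of four interconnected LAN routers
lans = {"A": {"B": 1, "C": 4}, "B": {"A": 1, "C": 2, "D": 5},
        "C": {"A": 4, "B": 2, "D": 1}, "D": {"B": 5, "C": 1}}
print(shortest_path_first(lans, "A"))
```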
Abstract:
The proliferation of data throughout the strategic, tactical and operational areas within many organisations has provided a need for the decision maker to be presented with structured information that is appropriate for achieving allocated tasks. However, despite this abundance of data, managers at all levels in the organisation commonly encounter a condition of ‘information overload’, which results in a paucity of the correct information. Specifically, this thesis focuses upon the tactical domain within the organisation and the information needs of management who reside at this level. In doing so, it argues that the link between decision making at the tactical level in the organisation and low-level transaction processing data should be through a common object model that uses a framework based upon knowledge leveraged from co-ordination theory. In order to achieve this, the Co-ordinated Business Object Model (CBOM) was created. The CBOM details a two-tier framework: the first tier models data based upon four interactive object models, namely processes, activities, resources and actors; the second tier analyses the data captured by the four object models and returns information that can be used to support tactical decision making. In addition, the Co-ordinated Business Object Support System (CBOSS) is a prototype tool that has been developed both to support the CBOM implementation and to demonstrate the functionality of the CBOM as a modelling approach for supporting tactical management decision making. Through its graphical user interface, the system allows the user to create and explore alternative implementations of an identified tactical-level process. In order to validate the CBOM, three verification tests have been completed. The results provide evidence that the CBOM framework helps bridge the gap between low-level transaction data and the information that is used to support tactical-level decision making.
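The four first-tier object models lend themselves to a simple illustration. The class names below follow the abstract, but the fields and the second-tier query are hypothetical, chosen only to show how captured data might be aggregated into information for a tactical decision:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Actor:
    name: str
    role: str

@dataclass
class Resource:
    name: str
    capacity: float

@dataclass
class Activity:
    name: str
    performed_by: Actor
    uses: List[Resource] = field(default_factory=list)

@dataclass
class Process:
    name: str
    activities: List[Activity] = field(default_factory=list)

    def resources_required(self):
        """Second-tier style analysis: aggregate what the first-tier
        object models captured into information for tactical decisions."""
        return {r.name for a in self.activities for r in a.uses}

clerk = Actor("clerk", "order processing")
press = Resource("press", capacity=8.0)
p = Process("fulfilment", [Activity("print", clerk, [press])])
print(p.resources_required())
```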