4 results for simple loop
in CORA - Cork Open Research Archive - University College Cork - Ireland
Abstract:
A method for solving the stationary state probability of the first-order bang-bang phase-locked loop (BBPLL) with nonzero loop delay is presented. It is based on a delayed Markov chain model and a state flow diagram for tracing the state history introduced by the loop delay. From this model an eigenequation is obtained, and closed-form solutions are derived for some cases. Once the state probability is obtained, statistical characteristics such as the mean gain of the binary phase detector and the timing error variance are calculated and demonstrated.
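As a rough illustration of the system being analysed, the sketch below simulates a discrete-time first-order BBPLL whose binary phase detector decision takes effect only after a loop delay of several update periods, and then estimates the timing error variance and an effective (linearised) detector gain empirically. The model, the parameter values (bb_step, loop_delay, jitter_std), and the gain estimator are illustrative assumptions, not the paper's delayed Markov chain formulation.

    import numpy as np

    def simulate_bbpll(n_steps=200_000, bb_step=0.01, loop_delay=4,
                       jitter_std=0.02, seed=0):
        """Toy discrete-time first-order bang-bang PLL with nonzero loop delay.

        The timing error e[n] is driven by random input jitter and corrected by
        a binary phase detector whose decision takes effect `loop_delay` steps
        late (illustrative model, not the paper's exact formulation).
        """
        rng = np.random.default_rng(seed)
        e = np.zeros(n_steps)        # timing (phase) error
        pd = np.zeros(n_steps)       # binary phase detector decisions
        for n in range(n_steps - 1):
            pd[n] = 1.0 if e[n] >= 0.0 else -1.0
            # the correction applied now is the decision made `loop_delay` steps ago
            delayed = pd[n - loop_delay] if n >= loop_delay else 0.0
            e[n + 1] = e[n] - bb_step * delayed + jitter_std * rng.standard_normal()
        e_ss = e[n_steps // 10:]     # discard the initial transient
        timing_error_var = np.var(e_ss)
        # linearised detector gain: E[sign(e) * e] / E[e^2]
        mean_pd_gain = np.mean(np.sign(e_ss) * e_ss) / np.mean(e_ss ** 2)
        return timing_error_var, mean_pd_gain

    if __name__ == "__main__":
        var, gain = simulate_bbpll()
        print(f"timing error variance ~ {var:.3e}, mean BPD gain ~ {gain:.3f}")

In the paper the corresponding quantities are obtained in closed form from the stationary state probability; a simulation of this kind would only serve as a numerical cross-check of such formulas.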
Abstract:
The original solution to the high failure rate of software development projects was the imposition of an engineering approach to software development, with processes aimed at providing a repeatable structure to maintain consistency in the ‘production process’. Despite these attempts at addressing the crisis in software development, others have argued that the rigid processes of an engineering approach did not provide the solution. The Agile approach to software development strives to change how software is developed. It does this primarily by relying on empowered teams of developers who are trusted to manage the necessary tasks, and who accept that change is a necessary part of a development project. The use of, and interest in, Agile methods in software development projects has expanded greatly, yet this has been predominantly practitioner-driven. There is a paucity of scientific research on Agile methods and how they are adopted and managed. This study aims to address this paucity by examining the adoption of Agile through a theoretical lens. The lens used in this research is that of double loop learning theory. The behaviours required in an Agile team are the same behaviours required in double loop learning; therefore, a transition to double loop learning is required for a successful Agile adoption. The theory of triple loop learning highlights that power factors (or power mechanisms in this research) can inhibit the attainment of double loop learning. This study identifies the negative behaviours (potential power mechanisms) that can inhibit the double loop learning inherent in an Agile adoption, determines how the Agile processes and behaviours can create these power mechanisms, and examines how these power mechanisms impact on double loop learning and the Agile adoption. This is a critical realist study, which acknowledges that the real world is a complex one, hierarchically structured into layers. An a priori framework is created to represent these layers, which are categorised as: the Agile context, the power mechanisms, and double loop learning. The aim of the framework is to explain how the Agile processes and behaviours, through the teams of developers and project managers, can ultimately impact on the double loop learning behaviours required in an Agile adoption. Four case studies provide further refinement to the framework, with changes required due to observations that were often different from what the existing literature would have predicted. The study concludes by explaining how the teams of developers, the individual developers, and the project managers, working with the Agile processes and required behaviours, can inhibit the double loop learning required in an Agile adoption. A solution is then proposed to mitigate these negative impacts. Additionally, two new research processes are introduced to add to the Information Systems research toolkit.
Abstract:
Electron microscopy (EM) has advanced exponentially since the first transmission electron microscope (TEM) was built in the 1930s. The urge to ‘see’ things is an essential part of human nature (‘seeing is believing’), and apart from scanning tunnelling microscopes, which give information only about the surface, EM is the only imaging technology capable of visualising atomic structures in depth, down to single atoms. With the development of nanotechnology, the demand to image and analyse small things has become even greater, and electron microscopes have found their way from highly delicate, sophisticated research-grade instruments to turn-key and even bench-top instruments for everyday use in every materials research lab on the planet. The semiconductor industry is as dependent on the use of EM as the life sciences and the pharmaceutical industry. With this generalisation of EM for imaging, the need to deploy its more advanced uses has become increasingly apparent. The combination of several coinciding beams (electron, ion and even light) to create DualBeam or TripleBeam instruments, for instance, extends their usefulness from pure imaging to manipulation on the nanoscale. As for the analytical power of EM, given the many ways the highly energetic electrons and ions interact with the matter in the specimen, a plethora of niches has evolved during the last two decades, specialising in every kind of analysis that can be thought of and combined with EM. In the course of this study the emphasis was placed on the application of these advanced analytical EM techniques in the context of multiscale and multimodal microscopy: multiscale meaning across length scales from micrometres or larger down to nanometres, multimodal meaning numerous techniques applied to the same sample volume in a correlative manner. To demonstrate the breadth and potential of the multiscale and multimodal concept, its integration was attempted in two areas: I) biocompatible materials, using polycrystalline stainless steel, and II) semiconductors, using thin multiferroic films. I) The motivation to use stainless steel (medical-grade 316L) comes from the potential to modulate endothelial cell growth, which can have a large impact on the improvement of cardiovascular stents (mainly made of 316L) through nano-texturing of the stent surface by focused ion beam (FIB) lithography. FIB patterning has not previously been reported in connection with stents and cell growth, and in order to gain a better understanding of the beam-substrate interaction during patterning, a correlative microscopy approach was used to illuminate the patterning process from many possible angles. Electron backscatter diffraction (EBSD) was used to analyse the crystallographic structure; FIB was used for the patterning and, simultaneously, for visualising the crystal structure as part of the monitoring process; scanning electron microscopy (SEM) and atomic force microscopy (AFM) were employed to analyse the topography; and the final step was 3D visualisation through serial FIB/SEM sectioning. II) The motivation for the use of thin multiferroic films stems from the ever-growing demand for increased data storage at ever lower energy consumption. The Aurivillius phase material used in this study has high potential in this area. Yet it is necessary to show clearly that the film is truly multiferroic and that no second-phase inclusions are present, even at very low concentrations (~0.1 vol% could already be problematic).
Thus, in this study a technique was developed to analyse ultra-low-density inclusions in thin multiferroic films down to concentrations of 0.01%. The goal achieved was a complete structural and compositional analysis of the films, which required identifying second-phase inclusions (through energy-dispersive X-ray (EDX) elemental analysis), localising them (employing 72-hour EDX mapping in the SEM), isolating them for the TEM (using FIB), and placing a 99.5% upper confidence limit on the influence of the inclusions on the magnetic behaviour of the main phase (statistical analysis).
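The abstract does not spell out the statistical analysis behind the confidence limit. One generic way to bound the concentration of rare inclusions is a one-sided Poisson upper confidence limit on the number of inclusions in the inspected volume; the sketch below illustrates that generic approach only, with hypothetical values for the inspected volume, inclusion size, and counts, and is not the thesis's actual calculation.

    from scipy.stats import chi2

    def poisson_upper_limit(count, confidence=0.995):
        """One-sided upper confidence limit on a Poisson mean.

        count      : number of inclusions observed in the inspected volume
        confidence : one-sided confidence level (0.995 -> 99.5 %)

        Uses the exact chi-square relation:
        upper limit = 0.5 * chi2.ppf(confidence, 2 * (count + 1)).
        """
        return 0.5 * chi2.ppf(confidence, 2 * (count + 1))

    # Hypothetical example: 2 inclusions found while mapping 1e4 um^3 of film.
    inspected_volume_um3 = 1e4        # assumed inspected volume
    single_inclusion_um3 = 1e-2       # assumed volume of one inclusion
    n_found = 2

    upper_count = poisson_upper_limit(n_found, confidence=0.995)
    upper_vol_fraction = upper_count * single_inclusion_um3 / inspected_volume_um3
    print(f"99.5% upper limit: {upper_count:.2f} inclusions, "
          f"volume fraction <= {upper_vol_fraction:.2e}")

With these assumed numbers the bound on the inclusion volume fraction comes out well below the ~0.1 vol% level flagged as problematic, which is the kind of statement such an analysis is meant to support.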
Abstract:
New compensation methods are presented that can greatly reduce the slit errors (i.e. transition location errors) and interval errors induced by non-idealities in optical incremental (square-wave) encoders. An M/T-type, constant sample-time digital tachometer (CSDT) is selected for measuring the velocity of the sensor drives. Using this data, three encoder compensation techniques (two pseudoinverse-based methods and an iterative method) are presented that improve velocity measurement accuracy. The methods do not require precise knowledge of shaft velocity. During the initial learning stage of the compensation algorithm (possibly performed in-situ), slit errors/interval errors are calculated either through pseudoinverse-based solutions of simple approximate linear equations, which can be obtained quickly, or through an iterative method that requires very little memory storage. Subsequent operation of the motion system utilizes the adjusted slit positions for more accurate velocity calculation. In the theoretical analysis of the compensation of encoder errors, error sources such as random electrical noise and error in the estimated reference velocity are considered. Initially, the proposed learning compensation techniques are validated by implementing the algorithms in MATLAB, showing a 95% to 99% improvement in velocity measurement. However, it is also observed that the efficiency of the algorithm decreases with higher levels of non-repetitive random noise and/or with errors in the reference velocity calculation. The performance improvement in velocity measurement is also demonstrated experimentally using motor-drive systems, each of which includes a field-programmable gate array (FPGA) for CSDT counting/timing purposes and a digital signal processor (DSP). Results from open-loop velocity measurement and closed-loop servo-control applications, on three optical incremental square-wave encoders and two motor drives, are compiled. While implementing these algorithms experimentally on different drives (with and without a flywheel) and on encoders of different resolutions, slit error reductions of 60% to 86% are obtained (typically approximately 80%).
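The abstract names the learning step: slit/interval errors are estimated from simple approximate linear equations via a pseudoinverse. The sketch below illustrates one plausible form of that idea, assuming a learning pass at an approximately constant reference velocity and averaging over several revolutions; the function names, the constant-velocity assumption, and the circulant difference model relating interval deviations to slit position errors are illustrative, not taken from the paper.

    import numpy as np

    def learn_slit_errors(intervals, slits_per_rev, ref_velocity):
        """Estimate slit position errors from measured transition intervals.

        intervals     : (n_revs, N) array of measured times between successive
                        transitions, grouped by slit index (learning stage data).
        slits_per_rev : number of transitions N per revolution.
        ref_velocity  : assumed-constant angular velocity [rad/s] during learning.

        Returns estimated slit position errors delta (rad), one per slit.
        """
        N = slits_per_rev
        nominal = 2 * np.pi / N                      # ideal slit spacing [rad]
        # average over revolutions to suppress non-repetitive random noise
        mean_dt = intervals.mean(axis=0)
        # deviation of each measured spacing from nominal: delta_{i+1} - delta_i
        spacing_dev = mean_dt * ref_velocity - nominal
        # circulant difference system A @ delta = spacing_dev (rank N-1),
        # solved with the Moore-Penrose pseudoinverse (minimum-norm solution)
        A = np.zeros((N, N))
        for i in range(N):
            A[i, i] = -1.0
            A[i, (i + 1) % N] = 1.0
        delta = np.linalg.pinv(A) @ spacing_dev
        return delta

    def corrected_velocity(dt, slit_index, delta, slits_per_rev):
        """Velocity over one interval using the adjusted slit spacing."""
        N = slits_per_rev
        nominal = 2 * np.pi / N
        span = nominal + delta[(slit_index + 1) % N] - delta[slit_index]
        return span / dt

After learning, corrected_velocity divides each measured interval by the adjusted spacing between consecutive slits instead of the nominal spacing, which corresponds to the "adjusted slit positions" used during subsequent operation in the abstract.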