9 results for Software testing. Test generation. Grammars
in CORA - Cork Open Research Archive - University College Cork - Ireland
Abstract:
This research has explored the relationship between system test complexity and tacit knowledge. It is proposed as part of this thesis that the process of system testing (comprising test planning, test development, test execution, test fault analysis, test measurement, and case management) is directly affected both by complexity associated with the system under test and by other sources of complexity that are independent of the system under test but related to the wider process of system testing. While a certain amount of knowledge related to the system under test is inherently tacit in nature, and therefore difficult to make explicit, it has been found that a significant amount of knowledge relating to these other sources of complexity can indeed be made explicit. While the importance of explicit knowledge has been reinforced by this research, no evidence was found to suggest that the availability of tacit knowledge to a test team is of any less importance to the process of system testing when operating in a traditional software development environment. Participants commonly expressed the sentiment that, even though a considerable amount of explicit knowledge relating to the system is freely available, a good deal of the knowledge demanded for effective system testing is actually tacit in nature (approximately 60% of participants operating in a traditional development environment, and 60% of participants operating in an agile development environment, expressed similar sentiments). To cater for the availability of tacit knowledge relating to the system under test, and indeed for both the explicit and tacit knowledge required by system testing in general, an appropriate knowledge management structure needs to be in place, irrespective of the employed development methodology.
Abstract:
This thesis investigates the optimisation of Coarse-Fine (CF) spectrum sensing architectures under a distribution of SNRs for Dynamic Spectrum Access (DSA). Three different detector architectures are investigated: the Coarse-Sorting Fine Detector (CSFD), the Coarse-Deciding Fine Detector (CDFD) and the Hybrid Coarse-Fine Detector (HCFD). To date, the majority of the work on coarse-fine spectrum sensing for cognitive radio has focused on a single value for the SNR. This approach overlooks the key advantage that CF sensing has to offer, namely that high-powered signals can be easily detected without extra signal processing. By considering a range of SNR values, the detector can be optimised more effectively and greater performance gains can be realised. This work considers the optimisation of CF spectrum sensing schemes in which security and performance are treated separately. Instead of optimising system performance at a single, constant, low SNR value, the system is instead optimised for the average operating conditions, while security is still provided so that the safety specifications are met at low SNR values. By decoupling security from performance, the system's average performance increases whilst the protection of licensed users from harmful interference is maintained. The different architectures considered in this thesis are investigated in theory, simulation and physical implementation to provide a complete overview of the performance of each system. This thesis provides a method for estimating SNR distributions which is quick, accurate and relatively low cost. The CSFD is modelled and the characteristic equations are found for the CDFD scheme. The HCFD is introduced and optimisation schemes for all three architectures are proposed. Finally, using the Implementing Radio In Software (IRIS) test-bed to confirm the simulation results, CF spectrum sensing is shown to be significantly quicker than naive methods, whilst still meeting the required interference probability rates and not requiring substantial increases in receiver complexity.
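As a hedged illustration of the central idea above, and not a calculation taken from the thesis: under the usual Gaussian approximation for an N-sample energy detector, one can fix the threshold from a false-alarm target and then compare the detection probability at a single design-point SNR against its average over an assumed SNR distribution. The sample count, false-alarm target and the normally distributed SNR-in-dB spread below are all illustrative assumptions.

```python
# Illustrative sketch (not from the thesis): averaging energy-detector
# performance over an assumed SNR distribution instead of a single SNR.
import numpy as np
from scipy.stats import norm

N = 1000           # samples per sensing window (assumed)
pfa_target = 0.01  # false-alarm target (assumed)

# Gaussian approximation: under noise only, the normalised test statistic
# is ~ Normal(N, 2N), so the threshold follows from the false-alarm target.
lam = N + np.sqrt(2 * N) * norm.isf(pfa_target)

def p_detect(snr_linear):
    """Detection probability at a given linear SNR (Gaussian approximation)."""
    mean = N * (1.0 + snr_linear)
    std = np.sqrt(2 * N) * (1.0 + snr_linear)
    return norm.sf((lam - mean) / std)

# Design-point evaluation at a single -10 dB SNR ...
print(p_detect(10 ** (-10 / 10)))

# ... versus the average over an assumed spread of SNRs (in dB).
snr_db = np.random.normal(loc=-10, scale=4, size=100_000)
print(p_detect(10 ** (snr_db / 10)).mean())
```

Averaging over the spread captures the point made above: strong signals are detected almost for free, so optimising for the average operating conditions rather than a single worst case raises mean performance.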
Abstract:
There has been an increased use of the Doubly-Fed Induction Machine (DFIM) in ac drive applications in recent times, particularly in the field of renewable energy systems and other high-power variable-speed drives. The DFIM is widely regarded as the optimal generation system for both onshore and offshore wind turbines and has also been considered in wave power applications. Wind power generation is the most mature renewable technology; however, wave energy has attracted large interest recently as the potential for power extraction is very significant. Various wave energy converter (WEC) technologies currently exist, with the oscillating water column (OWC) type converter being one of the most advanced. There are fundamental differences between the profile of the pneumatic power supplied by the OWC WEC and that of a wind turbine, and this causes significant challenges in the selection and rating of electrical generators for OWC devices. The thesis initially aims to provide an accurate per-phase equivalent circuit model of the DFIM by investigating various characterisation testing procedures. Novel testing methodologies based on the series-coupling tests are employed and are found to provide a more accurate representation of the DFIM than the standard IEEE testing methods, because the series-coupling tests provide a direct method of determining the equivalent-circuit resistances and inductances of the machine. A second novel method, known as the extended short-circuit test, is also presented and investigated as an alternative characterisation method. Experimental results on a 1.1 kW DFIM and a 30 kW DFIM utilising the various characterisation procedures are presented in the thesis. The various test methods are analysed and validated through comparison of model predictions and torque-versus-speed curves for each induction machine. Sensitivity analysis is also used as a means of quantifying the effect of experimental error on the results taken from each of the testing procedures, and to determine the suitability of the test procedures for characterising each of the devices. The series-coupling differential test is demonstrated to be the optimum test. The research then focuses on the OWC WEC and the modelling of this device. A software model is implemented based on data obtained from a scaled prototype device situated at the Irish test site. Test data from the electrical system of the device is analysed and used to develop a performance curve for the air turbine utilised in the WEC. This performance curve was applied in a software model to represent the turbine in the electro-mechanical system, and the software results are validated against the measured electrical output data from the prototype test device. Finally, once both the DFIM and the OWC WEC power take-off system have been modelled successfully, an investigation of the application of the DFIM to the OWC WEC model is carried out to determine the electrical machine rating required for the pulsating power derived from the OWC WEC device. Thermal analysis of a 30 kW induction machine is carried out using a first-order thermal model. The simulations quantify the limits of operation of the machine and enable the development of rating requirements for the electrical generation system of the OWC WEC. The thesis can be considered to have three sections. The first section contains Chapters 2 and 3 and focuses on the accurate characterisation of the doubly-fed induction machine using various testing procedures.
The second section, containing Chapter 4, concentrates on the modelling of the OWC WEC power take-off with particular focus on the Wells turbine. Validation of this model is carried out through comparison of simulations and experimental measurements. The third section utilises the OWC WEC model from Chapter 4 with a 30 kW induction machine model to determine the optimum device rating for the specified machine. Simulations are carried out to perform thermal analysis of the machine and to give general insight into electrical machine rating for an OWC WEC device.
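As a hedged sketch of how a per-phase equivalent circuit feeds the torque-versus-speed comparisons described above (the parameter values are placeholders, not the measured values from the thesis): once the equivalent-circuit resistances and reactances are known, the steady-state torque follows from a standard Thevenin reduction.

```python
# Illustrative sketch: torque-speed curve from a per-phase induction-machine
# equivalent circuit. All parameter values are placeholders, not thesis data.
import numpy as np

V1, f, poles = 230.0, 50.0, 4   # phase voltage (V), frequency (Hz), poles
R1, X1 = 0.5, 1.2               # stator resistance / leakage reactance (ohm)
R2, X2 = 0.4, 1.4               # rotor values referred to the stator (ohm)
Xm = 45.0                       # magnetising reactance (ohm)

ws = 4 * np.pi * f / poles      # synchronous mechanical speed (rad/s)

def torque(slip):
    """Electromagnetic torque at a given slip via a Thevenin reduction."""
    Vth = V1 * (1j * Xm) / (R1 + 1j * (X1 + Xm))
    Zth = (1j * Xm) * (R1 + 1j * X1) / (R1 + 1j * (X1 + Xm))
    I2 = abs(Vth) / np.hypot(Zth.real + R2 / slip, Zth.imag + X2)
    return 3 * I2**2 * (R2 / slip) / ws

slips = np.linspace(0.001, 1.0, 500)
curve = [torque(s) for s in slips]  # to be compared against measured points
```

The characterisation tests discussed in the abstract are what supply trustworthy values for R1, X1, R2, X2 and Xm; the model-versus-measurement comparison then validates those values.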
Abstract:
Cream liqueurs manufactured by a one-step process, where alcohol was added before homogenisation, were more stable than those processed by a two-step process, which involved addition of alcohol after homogenisation. Using the one-step process, it was possible to produce creaming-stable liqueurs with one pass through a homogeniser (27.6 MPa) equipped with "liquid whirl" valves. Test procedures to characterise cream liqueurs and to predict shelf life were studied in detail. A turbidity test proved simple, rapid and sensitive for characterising particle size and homogenisation efficiency. Prediction of age thickening/gelation in cream liqueurs during incubation at 45 °C depended on the age of the sample when incubated; samples that gelled at 45 °C may not do so at ambient temperature. Commercial cream liqueurs were similar in gross chemical composition and, unlike experimentally produced liqueurs, did not exhibit age-gelation at either ambient or elevated temperatures. Solutions of commercial sodium caseinates from different sources varied in their calcium sensitivity. When incorporated into cream liqueurs, caseinates influenced the rate of viscosity increase, coalescence and, possibly, gelation during incubated storage. Mild heat and alcohol treatment modified the properties of caseinate used to stabilise non-alcoholic emulsions, while the presence of alcohol in emulsions was important in preventing clustering of globules. The response to added trisodium citrate varied; in many cases, addition of the recommended level (0.18%) did not prevent gelation. Addition of small amounts of NaOH with 0.18% trisodium citrate before homogenisation was beneficial. The stage at which citrate was added during processing was critical to the degree of viscosity increase (as opposed to gelation) in the product during 45 °C incubation. The component responsible for age-gelation was present in the milk solids-non-fat portion of the cream, and variations in the creams used were important in the age-gelation phenomenon. Results indicated that, in addition possibly to Ca²⁺, the micellar casein portion of the serum may play a role in gelation. The role of the low-molecular-weight surfactants sodium stearoyl lactylate and mono- and diglycerides in preventing gelation was influenced by the presence of trisodium citrate. Clustering of fat globules and age-gelation were inhibited when 0.18% citrate was included. Inclusion of sodium stearoyl lactylate, but not mono- and diglycerides, reduced the extent of viscosity increase at 45 °C in citrate-containing liqueurs.
Abstract:
Electronic signal processing systems currently employed at core internet routers require huge amounts of power to operate, and they may be unable to continue to satisfy consumer demand for more bandwidth without an inordinate increase in cost, size and/or energy consumption. Optical signal processing techniques may be deployed in next-generation optical networks for simple tasks such as wavelength conversion, demultiplexing and format conversion at high speed (≥ 100 Gb/s) to alleviate the pressure on existing core router infrastructure. To implement optical signal processing functionalities, it is necessary to exploit the nonlinear optical properties of suitable materials such as III-V semiconductor compounds, silicon, periodically-poled lithium niobate (PPLN), highly nonlinear fibre (HNLF) or chalcogenide glasses. However, nonlinear optical (NLO) components such as semiconductor optical amplifiers (SOAs), electroabsorption modulators (EAMs) and silicon nanowires are the most promising candidates as all-optical switching elements vis-à-vis ease of integration, device footprint and energy consumption. This PhD thesis presents the amplitude and phase dynamics in a range of device configurations containing SOAs, EAMs and/or silicon nanowires to support the design of all-optical switching elements for deployment in next-generation optical networks. Time-resolved pump-probe spectroscopy using 3 ps pulses from mode-locked laser sources was utilised to accurately measure the carrier dynamics in the device(s) under test. The research work falls into four main topics: (a) a long SOA, (b) the concatenated SOA-EAM-SOA (CSES) configuration, (c) silicon nanowires embedded in SU8 polymer and (d) a custom epitaxy design EAM with fast carrier sweep-out dynamics. The principal aim was to identify the optimum operating conditions for each of these NLO device configurations to enhance their switching capability and to assess their potential for various optical signal processing functionalities. All of the NLO device configurations investigated in this thesis are compact and suitable for monolithic and/or hybrid integration.
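As a hedged illustration of how carrier dynamics are typically extracted from such time-resolved pump-probe traces (the data, time constants and single-exponential model below are simulated assumptions, not results from the thesis): the probe's transmission change versus pump-probe delay is fitted with an exponential recovery to yield a characteristic recovery time.

```python
# Illustrative sketch: extracting a carrier recovery time from a pump-probe
# trace by exponential fitting. Data and time constants are simulated.
import numpy as np
from scipy.optimize import curve_fit

def recovery(t, amp, tau, offset):
    """Single-exponential recovery of the probe transmission change."""
    return amp * np.exp(-t / tau) + offset

delay_ps = np.linspace(0, 100, 200)           # pump-probe delay (ps)
trace = recovery(delay_ps, 1.0, 25.0, 0.05)   # assumed underlying dynamics
trace += np.random.normal(scale=0.02, size=delay_ps.size)  # measurement noise

popt, _ = curve_fit(recovery, delay_ps, trace, p0=(1.0, 10.0, 0.0))
print(f"fitted recovery time: {popt[1]:.1f} ps")
```

In practice a multi-exponential model may be needed when several carrier processes (e.g. fast sweep-out plus slower recombination) overlap, but the fitting procedure is the same.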
Abstract:
For at least two millennia, and probably much longer, the traditional vehicle for communicating geographical information to end-users has been the map. With the advent of computers, the means of both producing and consuming maps have been radically transformed, while the inherent nature of the information product has also expanded and diversified rapidly. This has given rise in recent years to the new concept of geovisualisation (GVIS), which draws on the skills of the traditional cartographer, but extends them into three spatial dimensions and may also add temporality, photorealistic representations and/or interactivity. Demand for GVIS technologies and their applications has increased significantly in recent years, driven by the need to study complex geographical events, in particular their associated consequences, and to communicate the results of these studies to a diversity of audiences and stakeholder groups. GVIS requires data integration, multi-dimensional spatial display, advanced modelling techniques, dynamic design and development environments, and field-specific applications. To meet these needs, GVIS tools should be both powerful and inherently usable, in order to facilitate their role in helping to interpret and communicate geographic problems. However, no framework currently exists for ensuring this usability. The research presented here seeks to fill this gap by addressing the challenges of incorporating user requirements in GVIS tool design. It starts from the premise that usability in GVIS should be incorporated and implemented throughout the whole design and development process. To facilitate this, Subject Technology Matching (STM) is proposed as a new approach to assessing and interpreting user requirements. Based on STM, a new design framework called Usability Enhanced Coordination Design (UECD) is then presented with the purpose of improving the overall usability of the design outputs. UECD places GVIS experts in a new key role in the design process, to form a more coordinated and integrated workflow and more focused and interactive usability testing. To prove the concept, these theoretical elements of the framework have been implemented in two test projects: one is the creation of a coastal inundation simulation for Whitegate, Cork, Ireland; the other is a flood mapping tool for Zhushan Town, Jiangsu, China. The two case studies successfully demonstrated the potential merits of the UECD approach when GVIS techniques are applied to geographic problem solving and decision making. The thesis delivers a comprehensive understanding of the development and challenges of GVIS technology, its usability concerns, and the associated user-centred design (UCD); it explores the possibility of applying a UCD framework in GVIS design; it constructs a new theoretical design framework, UECD, which aims to make the whole design process usability-driven; and it develops the key concept of STM into a template set to improve the performance of a GVIS design. These key conceptual and procedural foundations can be built on by future research aimed at further refining and developing UECD as a useful design methodology for GVIS scholars and practitioners.
Abstract:
This thesis is centred on two experimental fields of optical micro- and nanofibre research: higher mode generation/excitation and evanescent field optical manipulation. Standard, commercial, single-mode silica fibre is used throughout most of the experiments; this generally produces high-quality, single-mode micro- or nanofibres when tapered in a flame-heated pulling rig in the laboratory. Single-mode fibre can also support higher transverse modes when transmitting wavelengths below its defined single-mode cut-off. To investigate this, a first-order Laguerre-Gaussian beam, LG01, of 1064 nm wavelength and doughnut-shaped intensity profile is generated in free space via spatial light modulation. This technique facilitates coupling to the LP11 fibre mode in two-mode fibre, and convenient, fast switching to the fundamental mode via computer-generated hologram modulation. Following LP11 mode loss when exponentially tapering 125 μm diameter fibre, two-mode fibre with a cladding diameter of 80 μm is selected for testing, since it is more suitable for satisfying the adiabatic criteria for fibre tapering. Proving a fruitful endeavour, experiments show a transmission of 55% of the original LP11 mode set (comprising the TE01, TM01 and HE21e,o true modes) in submicron fibres. Furthermore, by observing pulling dynamics and progressive mode-loss behaviour, it is possible to produce a nanofibre which supports only the TE01 and TM01 modes, while suppressing the HE21e,o elements of the LP11 group. This result provides a basis for experimental studies of atom trapping via mode interference, and offers a new set of evanescent field geometries for sensing and particle manipulation applications. The thesis highlights the experimental results of the research unit's Cold Atom subgroup, who successfully integrated one such higher-mode nanofibre into a cloud of cold rubidium atoms. This led to the detection of stronger signals of resonance fluorescence coupling into the nanofibre, and of light absorption by the atoms, due to the presence of higher guided modes within the fibre. Theoretical work on the impact of the curved nanofibre surface on the atom-surface van der Waals interaction is also presented, showing a clear deviation of the potential from the commonly-used flat-surface approximation. Optical micro- and nanofibres are also useful tools for evanescent-field-mediated optical manipulation; this includes propulsion, defect-induced trapping, mass migration and size-sorting of micron-scale particles in dispersion. Similar early trapping experiments are described in this thesis, and the resulting motivations for developing a targeted, site-specific particle induction method are given. The integration of optical nanofibres into an optical tweezers is presented, facilitating individual and group isolation of selected particles, and their controlled positioning and conveyance in the evanescent field. The effects of particle size and nanofibre diameter on pronounced scattering are experimentally investigated in this system, as are optical binding effects between adjacent particles in the evanescent field. Such inter-particle interactions lead to regulated self-positioning and particle-chain speed enhancements.
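For context on the cut-off statement in the abstract above, the standard step-index criterion (a textbook fibre-optics result, not a formula quoted from the thesis) is that a fibre of core radius $a$ is single-mode only while its normalised frequency stays below the LP11 cut-off:

\[ V = \frac{2\pi a}{\lambda}\sqrt{n_{\mathrm{core}}^{2} - n_{\mathrm{clad}}^{2}} < 2.405 \]

Since $V$ grows as the wavelength shrinks, launching 1064 nm light into fibre specified as single-mode at a longer wavelength pushes $V$ above 2.405 and admits the LP11 mode set.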
Abstract:
The concept of pellicular particles was suggested by Horváth and Lipsky over fifty years ago. The reasoning behind these particles was to improve column efficiency by shortening the pathways analyte molecules can travel, thereby reducing the effect of the A and C terms. Several types of shell particles were successfully marketed around this time; however, with the introduction of high-quality fully porous silica under 10 μm, shell particles faded into the background. In recent years a new generation of core-shell particles has become popular within the separation science community. These particles allow fast and efficient separations that can be carried out on conventional HPLC systems. Chapter 1 of this thesis introduces the chemistry of chromatographic stationary phases, with an emphasis on silica bonded phases, particularly focusing on the current state of technology in this area. The main focus is on superficially porous silica particles as a support material for liquid chromatography. A summary of the history and development of these particles over the past few decades is explored, along with current methods of synthesis of shell particles. While commercial shell particles have a rough outer surface, Chapter 2 focuses on a novel approach to the growth of smooth-surface superficially porous particles in a step-by-step manner: from the Stöber methodology, to the seeded growth technique, and finally to the layer-by-layer growth of the porous shell. The superficially porous particles generated in this work have an overall diameter of 2.6 μm with a 350 nm porous shell; these silica particles were characterised using SEM, TEM and BET analysis. The uniform spherical nature of the particles, along with their surface area, pore size and particle size distribution, is examined in this chapter. I discovered that these smooth-surface shell particles can be synthesised to give surface area and pore size comparable to commercial brands. Chapter 3 deals with the bonding of the particles prepared in Chapter 2 with C18 functionality: one batch with a narrow and one with a wide particle size distribution. This chapter examines the chromatographic and kinetic performance of these silica stationary phases, and compares them to a commercial superficially porous silica phase with a rough outer surface. I found that the particle size distribution does not seem to be the major contributor to the improvement in efficiency. The surface morphology of the particles appears to play an important role in the packing process of these particles and influences the van Deemter effects. Chapter 4 focuses on the functionalisation of 2.6 μm smooth-surface superficially porous particles with a variety of fluorinated and phenyl silanes. The same processes were carried out on 3.0 μm fully porous silica particles to provide a comparison. All phases were assessed using elemental analysis, thermogravimetric analysis and nitrogen sorption analysis, and chromatographically evaluated using the Neue test. I observed comparable results for the 2.6 μm shell pentafluorophenyl propyl silica when compared to 3.0 μm fully porous silica. Chapter 5 moves towards nano-particles, with the synthesis of sub-1 μm superficially porous particles, their characterisation and their use in chromatography. The particles prepared are 750 nm in total diameter with a 100 nm shell. All reactions and testing carried out on these 750 nm core-shell particles are also carried out on 1.5 μm fully porous particles in order to give a comparative result.
The 750 nm core-shell particles can be synthesised quickly and are very uniform. The main drawback to their use in HPLC is the system itself, due to the backpressure experienced with sub-1 μm particles. The synthesis of modified Stöber particles is also examined in this chapter, with a range of non-porous silica and shell silica from 70 nm to 750 nm being tested for use on a Langmuir-Blodgett system. These smooth-surface shell particles have only been in existence since 2009. The results presented in this thesis demonstrate how much potential smooth-surface shell particles have, provided more in-depth optimisation is carried out. The packing studies reported in this thesis aim to be a starting point for a more sophisticated methodology, which in turn can lead to greater chromatographic improvements.
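The A- and C-term argument running through this abstract refers to the van Deemter relation between plate height $H$ and mobile-phase linear velocity $u$ (a standard chromatography expression, not one reproduced from the thesis):

\[ H = A + \frac{B}{u} + C\,u \]

where $A$ captures eddy dispersion, $B$ longitudinal diffusion and $C$ resistance to mass transfer. Shell particles shorten the diffusion paths available to analyte molecules, which is why they reduce the $A$ and $C$ contributions and allow efficient separations at higher velocities.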
Abstract:
A growing number of software development projects successfully exhibit a mix of agile and traditional software development methodologies. Many of these mixed methodologies are organization specific and tailored to a specific project. Our objective in this research-in-progress paper is to develop an artifact that can guide the development of such a mixed methodology. Using control theory, we design a process model that provides theoretical guidance to build a portfolio of controls that can support the development of a mixed methodology for software development. Controls, embedded in methods, provide a generalizable and adaptable framework for project managers to develop their mixed methodology specific to the demands of the project. A research methodology is proposed to test the model. Finally, future directions and contributions are discussed.